PostgreSQL – Checkpoint

What is a Checkpoint?

A checkpoint periodically flushes all dirty buffers to disk and writes a checkpoint record to the WAL, which is used for recovery.

The purpose of the checkpoint is to ensure that all dirty buffers generated up to a certain point are written to disk, which allows the WAL up to that point to be recycled.
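A checkpoint can also be requested manually. As a quick sketch (forcing a checkpoint requires superuser or an equivalent privilege), you can trigger one and review the settings that control automatic checkpoints:

-- Force an immediate checkpoint
CHECKPOINT;

-- Settings that control how often automatic checkpoints occur
SHOW checkpoint_timeout;
SHOW max_wal_size;
SHOW checkpoint_completion_target;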

PostgreSQL – Using pgbench for Benchmark Testing

pgbench is a tool for benchmarking a PostgreSQL database. It works by simulating a specified number of concurrent clients, each executing a series of SQL commands. You can use pgbench to measure the performance of your database and to compare the performance of different database configurations. The pgbench workload is based loosely on TPC-B, which is a throughput-oriented database stress test rather than a full OLTP simulation.
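As a minimal sketch (the database name mydb is just a placeholder), you would first initialize the pgbench tables and then run a timed test:

# Initialize the pgbench tables at scale factor 50
pgbench -i -s 50 mydb

# Run for 60 seconds with 10 concurrent clients and 2 worker threads
pgbench -c 10 -j 2 -T 60 mydb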

PostgreSQL – Analyzing Queries Using pg_stat_statements

pg_stat_statements tracks planning and execution statistics of all SQL statements executed by a server.

Find the top 10 time-consuming queries

SELECT
  round((100 * total_exec_time / sum(total_exec_time) OVER ())::numeric, 2) AS percent,
  round(total_exec_time::numeric, 2) AS total,
  calls,
  round(mean_exec_time::numeric, 2) AS mean,
  substring(query, 1, 200) AS query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

Find the queries that write the most to temporary files
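A sketch of such a query, assuming the temp_blks_written column provided by pg_stat_statements and the default 8 kB block size:

SELECT
  round(temp_blks_written * 8 / 1024.0, 2) AS temp_written_mb,  -- assumes 8 kB blocks
  calls,
  substring(query, 1, 200) AS query
FROM pg_stat_statements
ORDER BY temp_blks_written DESC
LIMIT 10;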

MySQL/MariaDB – Identifying and Avoiding Deadlocks

A deadlock is a special blocking scenario in which two or more competing transactions wait for each other to release locks. Each transaction, while holding its own locks, attempts to access a resource that is locked by the other.

Simulating a Deadlock Scenario

Transaction 1

START TRANSACTION;
SELECT * FROM departments WHERE dept_no = 'd008' LOCK IN SHARE MODE;

Transaction 2 (blocks, waiting for Transaction 1's shared lock)

START TRANSACTION;
UPDATE departments
SET dept_name = 'Research & Development'
WHERE dept_no = 'd008';

Transaction 1 (triggers the deadlock)

UPDATE departments
SET dept_name = 'R&D'
WHERE dept_no = 'd008';

Identify and Analyze Deadlocks

Execute the command below in MySQL/MariaDB.

SHOW ENGINE INNODB STATUS \G
mysql> SHOW ENGINE INNODB STATUS \G
*************************** 1. row ***************************
  Type: InnoDB
  Name:
Status:
=====================================
2021-06-02 00:40:29 0x7f99d005e700 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 56 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 5 srv_active, 0 srv_shutdown, 4498 srv_idle
srv_master_thread log flush and writes: 0
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 2
OS WAIT ARRAY INFO: signal count 3
RW-shared spins 9, rounds 9, OS waits 0
RW-excl spins 0, rounds 0, OS waits 0
RW-sx spins 0, rounds 0, OS waits 0
Spin rounds per wait: 1.00 RW-shared, 0.00 RW-excl, 0.00 RW-sx
------------------------
LATEST DETECTED DEADLOCK
------------------------
2021-06-02 00:40:08 0x7f99b74f8700
*** (1) TRANSACTION:
TRANSACTION 51038, ACTIVE 13 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 1136, 2 row lock(s)
MySQL thread id 11, OS thread handle 140298596771584, query id 180 localhost instadm updating
UPDATE departments
SET dept_name = 'Research & Development'
WHERE dept_no = 'd008'

*** (1) HOLDS THE LOCK(S):
RECORD LOCKS space id 3 page no 4 n bits 80 index PRIMARY of table `employees`.`departments` trx id 51038 lock_mode X locks rec but not gap waiting
Record lock, heap no 13 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
 0: len 4; hex 64303038; asc d008;;
 1: len 6; hex 00000000c75a; asc      Z;;
 2: len 7; hex 02000000fc0151; asc       Q;;
 3: len 22; hex 5265736561726368202620446576656c6f706d656e74; asc Research & Development;;


*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 3 page no 4 n bits 80 index PRIMARY of table `employees`.`departments` trx id 51038 lock_mode X locks rec but not gap waiting
Record lock, heap no 13 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
 0: len 4; hex 64303038; asc d008;;
 1: len 6; hex 00000000c75a; asc      Z;;
 2: len 7; hex 02000000fc0151; asc       Q;;
 3: len 22; hex 5265736561726368202620446576656c6f706d656e74; asc Research & Development;;


*** (2) TRANSACTION:
TRANSACTION 51039, ACTIVE 28 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 4 lock struct(s), heap size 1136, 2 row lock(s)
MySQL thread id 9, OS thread handle 140298597062400, query id 181 localhost instadm updating
UPDATE departments
SET dept_name = 'R&D'
WHERE dept_no = 'd008'

*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 3 page no 4 n bits 80 index PRIMARY of table `employees`.`departments` trx id 51039 lock mode S locks rec but not gap
Record lock, heap no 13 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
 0: len 4; hex 64303038; asc d008;;
 1: len 6; hex 00000000c75a; asc      Z;;
 2: len 7; hex 02000000fc0151; asc       Q;;
 3: len 22; hex 5265736561726368202620446576656c6f706d656e74; asc Research & Development;;


*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 3 page no 4 n bits 80 index PRIMARY of table `employees`.`departments` trx id 51039 lock_mode X locks rec but not gap waiting
Record lock, heap no 13 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
 0: len 4; hex 64303038; asc d008;;
 1: len 6; hex 00000000c75a; asc      Z;;
 2: len 7; hex 02000000fc0151; asc       Q;;
 3: len 22; hex 5265736561726368202620446576656c6f706d656e74; asc Research & Development;;

*** WE ROLL BACK TRANSACTION (1)

The output shows plenty of information about the latest deadlock and why it occurred. Take a close look at the portions labeled WAITING FOR THIS LOCK TO BE GRANTED (the lock each transaction is waiting for) and HOLDS THE LOCK(S) (the locks each transaction currently holds).

Preventing Deadlocks

  • Keep transactions small and quick to avoid clashing.
  • Commit transactions right after making a set of related changes to make them less prone to clashes.
  • Access resources in the same physical sequence (see the sketch after this list).
    • For example, two transactions each need to access two resources. If both access the resources in the same physical sequence, the first transaction obtains its locks without being blocked by the second, and the second is simply blocked by the first while trying to lock the first resource. The outcome is an ordinary blocking scenario instead of a deadlock.
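A hypothetical sketch of this rule, reusing the departments table from the example above (the d009 row and the new names are made up for illustration):

-- Session 1 and Session 2 both run the statements in this order.
-- Whichever session starts second simply waits on d008 until the first
-- session commits, then proceeds: an ordinary block instead of a deadlock.
START TRANSACTION;
UPDATE departments SET dept_name = 'R&D'   WHERE dept_no = 'd008';
UPDATE departments SET dept_name = 'Sales' WHERE dept_no = 'd009';
COMMIT;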

Cheers!

MySQL/MariaDB – optimizer_switch

To control optimizer behavior, we can enable or disable specific optimizations via the optimizer_switch system variable. It can be changed at runtime and has both global and session values.

Execute the command below to see the current set of optimizer flags.

SELECT @@optimizer_switch \G

To change the value of optimizer_switch, use the following syntax.

SET [GLOBAL|SESSION] optimizer_switch='cmd[,cmd]...';
Each cmd can be one of the following:

  • default: Reset all optimizations to their default values.
  • optimization_name=default: Set the specified optimization to its default value.
  • optimization_name=on: Enable the specified optimization.
  • optimization_name=off: Disable the specified optimization.

There is no need to list all the flags; only those specified in the command are affected.
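For example, to turn off a single optimization for the current session and confirm the change (index_merge_union is just one of the standard flags, used here for illustration):

SET SESSION optimizer_switch = 'index_merge_union=off';

SELECT @@optimizer_switch \G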

Cheers!

MariaDB/MySQL – Table Open Cache & Table Definition Cache

To improve performance, MariaDB/MySQL opens tables independently for each concurrent session that accesses them. In other words, many clients can execute queries against the same table simultaneously, because the server is multithreaded. This uses additional memory but normally improves performance.

table_open_cache indicates the maximum number of tables the server can keep open in any one table cache instance.

View the current setting of table_open_cache by running the command below, or by checking the value in my.cnf.

select @@table_open_cache;

There is a status variable called “Opened_tables”, which indicates the number of table-opening operations since the server started. This is the variable to check to determine whether your table cache is too small.

SHOW GLOBAL STATUS LIKE 'Opened_tables';

If the value is very large or increases rapidly, even when you have not issued many FLUSH TABLES statements, increase the table_open_cache value at server startup.
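A quick sketch of comparing the two counters and raising the cache at runtime (table_open_cache is dynamic; the value 4000 is purely illustrative):

SHOW GLOBAL STATUS LIKE 'Open_tables';    -- tables currently open
SHOW GLOBAL STATUS LIKE 'Opened_tables';  -- table-opening operations since startup
SET GLOBAL table_open_cache = 4000;       -- illustrative value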

FLUSH TABLES forces all open tables to be closed, ensuring that if someone adds a new table outside of MySQL, all threads will start using the new table.

Table Definition Cache

table_definition_cache sets the number of table definitions (as shown by SHOW CREATE TABLE) that can be cached. Caching definitions speeds up the opening of tables, and each table takes only one entry. Consider increasing this parameter if you have a large number of tables (more than 400) in your DB instance.

Values between 400 and 2000 are good for table_definition_cache. Do take note that a bigger table definition cache also means that your database uses more RAM.

To check table definition cache value, run the command below.

select @@table_definition_cache;

Add this parameter in your configuration file as follows:

[mysqld]
table_definition_cache = 16384

Cheers!

MariaDB/MySQL – Parallel Replication

Replication was single threaded in MariaDB 5.5 and MySQL 5.5: the previous transaction had to commit on the slave before the next transaction could start.

A single I/O thread works to replicate events from the Master’s binlog to the relay log in the Slave.

On the Slave, the SQL Thread will apply those events, one after the other.

With that said, as you can imagine, replication can sometimes lag. Being single threaded, it is challenging to keep the Master and Slave in sync.

Enable Parallel Slave

To enable, indicate slave-parallel-threads=# in your my.cnf.

Configure the number (#) of worker threads to apply events in parallel for all your slaves.

If you indicate a value of zero, then that means no worker threads are created.

The value should be at least twice the number of multi-source master connections utilized.

You do not have to restart mysqld when you configure slave-parallel-threads=#, because it is a dynamic variable. However, all slave connections must be stopped while modifying the value.
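A sketch of changing it at runtime on MariaDB (the value 4 is just an example):

STOP SLAVE;
SET GLOBAL slave_parallel_threads = 4;
START SLAVE;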

Here is a link that shows the performance improvement when using parallel replication.

https://kristiannielsen.livejournal.com/18435.html

SHOW PROCESSLIST to Check Worker Thread Status

The worker thread states and their meanings:

  • Waiting for work from main SQL threads: the worker thread is idle.
  • Waiting for prior transaction to start commit before starting next transaction: the previous batch of transactions that committed together on the master has to complete first.
  • Waiting for prior transaction to commit: the transaction has been executed by the worker thread.

Parallel Slave Queue Size

The SQL thread reads ahead in the relay logs when parallel replication is used, queuing events in memory while looking for opportunities to execute them in parallel. The system variable that limits how much memory is used for this queue is slave_parallel_max_queued.

In order for the SQL thread to read far enough ahead in the binary log to exploit all possible parallelism, the slave_parallel_threads system variable should be set large enough.

To avoid limiting throughput, the slave_parallel_max_queued system variable can be set relatively high. It just needs to be low enough that the total allocation of the parallel slave queue does not cause the server to run out of memory.
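A sketch of inspecting and raising it (the 16 MB figure is purely illustrative, and changing it at runtime may require stopping the slave connections first):

SELECT @@slave_parallel_max_queued;
SET GLOBAL slave_parallel_max_queued = 16777216;  -- illustrative: 16 MB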

Slave Parallel Mode

There are 2 options for Slave Parallel Mode: In-Order and Out-of-Order

In-order runs transactions in parallel but orders the commit steps of the transactions in exactly the same sequence as on the master.

Out-of-order can execute and commit transactions on the slave in a different order than they occurred on the master. The application must be tolerant of seeing updates happen in a different order.
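On MariaDB, a quick way to check which parallel mode is in effect (a hedged sketch; values such as 'conservative' and 'optimistic' are in-order modes, while out-of-order replication is requested by the application through GTID domain IDs):

SELECT @@slave_parallel_mode;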

Cheers!

MariaDB/MySQL – InnoDB_flush_log_at_trx_commit

In this blog post, we are going to talk about the variable InnoDB_flush_log_at_trx_commit. We are going to discuss what each value that we can assign to this variable means, and how it can affect performance and durability.

innodb_flush_log_at_trx_commit controls the durability aspect of ACID compliance:

A – atomicity

C – consistency

I – isolation

D – durability

The possible values for this variable are 0, 1, and 2.

The path of a write goes through the InnoDB buffer pool, then the log buffer (redo log), then the OS buffer, and finally the log file on disk.

When innodb_flush_log_at_trx_commit is set to 0, your write goes through memory, through the buffer pool and into the log buffer. The log buffer is then flushed to the log file on disk roughly once per second (or whenever the OS flushes).

If this variable is set to 1, you get maximum durability: the write goes to the log buffer, and every commit ensures that it is flushed all the way to disk. This setting carries a performance hit compared to 0.

If the value is set to 2, the write goes to the log buffer, and every commit pushes it as far as the OS buffer. The OS then flushes to disk roughly once per second.

Advantages and Disadvantages

0 – if the database crashes, the log buffer in memory is lost and those transactions may be lost. This setting favors performance over durability.

1 – every write is guaranteed to reach the redo log on disk. You will not lose that write regardless of the crash.

2 – you lose only about 25% performance as compared to 1. If the database process crashes, the data is still in the OS disk cache and will be written to disk later. But if the database server itself crashes, the server's disk buffer may lose its data. This can be mitigated with a battery-backed cache or a SAN.
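A quick sketch of checking and changing the setting at runtime (the value 2 is only an example; choose based on your durability requirements):

SELECT @@innodb_flush_log_at_trx_commit;
SET GLOBAL innodb_flush_log_at_trx_commit = 2;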

SQL SERVER – Sending HTML Reports to Email Using T-SQL

Below is a script that you can use to send HTML reports to your email just by using T-SQL. The example collects the disk space usage of the database server, along with log space usage, failed jobs, and recent error log entries, and then sends the report by email.

 -- Disk Space Usage
USE [master]

begin
set nocount on
/*******************************************************/
/* Enabling Ole Automation Procedures */
exec sp_configure 'show advanced options', 1
RECONFIGURE with override

exec sp_configure 'Ole Automation Procedures', 1
RECONFIGURE with override


/*******************************************************/
DECLARE @hr int
DECLARE @fso int
DECLARE @drive char(1)
DECLARE @odrive int
DECLARE @TotalSize varchar(20)
DECLARE @MB Numeric; SET @MB = 1048576

CREATE TABLE #drives 
(
	drive char(1) PRIMARY KEY, 
	FreeSpace int NULL,
	TotalSize int NULL
) 

INSERT #drives(drive,FreeSpace) 
EXEC master.dbo.xp_fixeddrives 
EXEC @hr = sp_OACreate 'Scripting.FileSystemObject', @fso OUT
IF @hr <> 0 EXEC sp_OAGetErrorInfo @fso

DECLARE dcur CURSOR LOCAL FAST_FORWARD
FOR SELECT drive from #drives ORDER by drive
OPEN dcur FETCH NEXT FROM dcur INTO @drive
WHILE @@FETCH_STATUS=0

BEGIN
	-- Use the FileSystemObject to look up the total size of the current drive
	EXEC @hr = sp_OAMethod @fso, 'GetDrive', @odrive OUT, @drive
	IF @hr <> 0 EXEC sp_OAGetErrorInfo @fso

	EXEC @hr = sp_OAGetProperty @odrive, 'TotalSize', @TotalSize OUT
	IF @hr <> 0 EXEC sp_OAGetErrorInfo @odrive

	-- Store the size in MB and move to the next drive
	UPDATE #drives SET TotalSize = @TotalSize/@MB WHERE drive = @drive

	FETCH NEXT FROM dcur INTO @drive
END
CLOSE dcur
DEALLOCATE dcur
EXEC @hr = sp_OADestroy @fso
IF @hr <> 0 EXEC sp_OAGetErrorInfo @fso


 
/*******************************************************/
/* Disabling Ole Automation Procedures */
exec sp_configure 'show advanced options', 1
RECONFIGURE with override

exec sp_configure 'Ole Automation Procedures', 0
RECONFIGURE with override
/*******************************************************/

END

	
-- Check LogSpace Usage
DECLARE @SQLString nvarchar(500);
 
SET @SQLString = 'DBCC SQLPERF(LOGSPACE)'

CREATE TABLE #LogSpace
(
	DatabaseName VARCHAR(MAX)
	,LogSize INT
	,LogSpaceUsedPct INT
	,Status INT
)

INSERT INTO #LogSpace
EXEC sp_executesql @SQLString
GO

-- Check failed job

CREATE TABLE #FailedJobs
(	
	ServerName VARCHAR(50)
	,JobName VARCHAR(50)
	,JobStatus VARCHAR (20)
	,LastRunStatus VARCHAR(30)
	,LastRunDate DATETIME
)

INSERT INTO #FailedJobs
SELECT Convert(varchar(20),SERVERPROPERTY('ServerName')) AS ServerName,
j.name AS job_name,
CASE j.enabled WHEN 1 THEN 'Enabled' Else 'Disabled' END AS job_status,
CASE jh.run_status WHEN 0 THEN 'Error Failed'
				WHEN 1 THEN 'Succeeded'
				WHEN 2 THEN 'Retry'
				WHEN 3 THEN 'Cancelled'
				WHEN 4 THEN 'In Progress' ELSE
				'Status Unknown' END AS 'last_run_status',
ja.run_requested_date as last_run_date
FROM
(msdb.dbo.sysjobactivity ja LEFT JOIN msdb.dbo.sysjobhistory jh ON ja.job_history_id = jh.instance_id)
join msdb.dbo.sysjobs_view j on ja.job_id = j.job_id
WHERE ja.session_id=(SELECT MAX(session_id)  from msdb.dbo.sysjobactivity) 
AND jh.run_status = 0
ORDER BY job_name,job_status

-- Check Error logs
declare @Time_Start datetime;
declare @Time_End datetime;
set @Time_Start=getdate()-2;
set @Time_End=getdate();
-- Create the temporary table
CREATE TABLE #ErrorLog (logdate datetime
                      , processinfo varchar(255)
                      , Message varchar(MAX))
-- Populate the temporary table
INSERT #ErrorLog (logdate, processinfo, Message)
EXEC master.dbo.xp_readerrorlog 0, 1, null, null , @Time_Start, @Time_End, N'desc';



-- Create HTML Table for Disk Space Usage Report

DECLARE @tableHTML NVARCHAR(MAX);

SET @TableHTML = N'<H1>Disk Space Usage</H1>' + N'<table border="1">' + N'<tr>' +
				 N'<th bgcolor="#404040"><font color="#FFFFFF">drive</th><th bgcolor="#404040"><font color="#FFFFFF">Total(GB)</th>
					<th bgcolor="#404040"><font color="#FFFFFF">Free(GB)</th><th bgcolor="#404040"><font color="#FFFFFF">Used(GB)</th>
					<th bgcolor="#404040"><font color="#FFFFFF">PercentUsed</th>' + CAST ((
			  SELECT td = drive
			  ,''
			  ,td = cast(TotalSize/1024.0 as decimal(8,2)) 
			  ,'' 
			  ,td = cast(FreeSpace/1024.0 as decimal(8,2)) 
			  ,''
			  ,td = cast(TotalSize/1024.0-FreeSpace/1024.0 as decimal(8,2))
			  ,''
			  ,td = cast((TotalSize/1024.0-FreeSpace/1024.0)/(TotalSize/1024.0)*100 as decimal (8,2)) 
	   FROM
			  #drives
	   ORDER BY drive
	   FOR XML PATH('tr')
                    ,TYPE
                ) AS NVARCHAR(MAX)) + N'</table>';



-- Create HTML Table for Log Space Used
DECLARE @tableHTML_LogSpaceUsage NVARCHAR(MAX);

SET @tableHTML_LogSpaceUsage = N'<H1>Log Space Used</H1>' + N'<table border="1">' + N'<tr>' +
				 N'<th bgcolor="#404040"><font color="#FFFFFF">DatabaseName</th><th bgcolor="#404040"><font color="#FFFFFF">LogSize(MB)</th><th bgcolor="#404040"><font color="#FFFFFF">LogSpaceUsedPct</th>' + CAST ((
			  SELECT td = DatabaseName
			  ,''
			  ,td = LogSize
			  ,'' 
			  ,td = LogSpaceUsedPct
	   FROM
			  #LogSpace
	   FOR XML PATH('tr')
                    ,TYPE
                ) AS NVARCHAR(MAX)) + N'</table>';



-- Create HTML Table for Failed Jobs
DECLARE @tableHTML_FailedJobs NVARCHAR(MAX);

SET @tableHTML_FailedJobs = N'<H1>Failed Jobs</H1>' + N'<table border="1">' + N'<tr>' +
							N'<th bgcolor="#404040"><font color="#FFFFFF">ServerName</th><th bgcolor="#404040"><font color="#FFFFFF">JobName</th><th bgcolor="#404040"><font color="#FFFFFF">JobStatus</th>
							  <th bgcolor="#404040"><font color="#FFFFFF">LastRunStatus</th><th bgcolor="#404040"><font color="#FFFFFF">LastRunDate</th>' + CAST ((
							  SELECT td = ServerName
								  ,''
								  ,td = JobName
								  ,''
								  ,td = JobStatus
								  ,''
								  ,td = LastRunStatus
								  ,''
								  ,td = LastRunDate
							  FROM
								  #FailedJobs
							  FOR XML PATH('tr')
							               ,TYPE
									  ) AS NVARCHAR(MAX)) + N'</table>';

					
-- Create HTML Table for Error Log
DECLARE @tableHTML_ErrorLog NVARCHAR(MAX);
SET @tableHTML_ErrorLog = N'<H1>Error Log</H1>' + N'<table border="1">' + N'<tr>' +
						  N'<th bgcolor="#404040"><font color="#FFFFFF">LogDate</th><th bgcolor="#404040"><font color="#FFFFFF">Message</th>' + CAST ((
							SELECT td = LogDate
								,''
								,td = Message
							FROM
								#ErrorLog
							
							WHERE 
								(Message LIKE '%error%' OR Message LIKE '%failed%') AND processinfo NOT LIKE 'logon'
							ORDER BY
								logdate DESC
							 FOR XML PATH('tr')
							               ,TYPE
									  ) AS NVARCHAR(MAX)) + N'</table>';
									  
								
			
-- Send Email
DECLARE @AllTables NVARCHAR(MAX);

IF @tableHTML IS NOT NULL
	SET @AllTables = @tableHTML
	
IF @tableHTML_LogSpaceUsage IS NOT NULL
	SET @AllTables = @AllTables + @tableHTML_LogSpaceUsage

IF @tableHTML_FailedJobs IS NOT NULL
	SET @AllTables = @AllTables + @tableHTML_FailedJobs

IF @tableHTML_ErrorLog IS NOT NULL
	SET @AllTables = @AllTables + @tableHTML_ErrorLog

    EXEC msdb.dbo.sp_send_dbmail @body = @AllTables 
        ,@body_format = 'HTML'
        ,@profile_name = N'DBA'
        ,@recipients = N'ricky.valencia@optimalplus.com'
        ,@Subject = N'Database Health Check Report - ASEKH'



DROP TABLE #FailedJobs
DROP TABLE #LogSpace
DROP TABLE #drives
DROP TABLE #ErrorLog

You may put this script in a SQL Server Agent job; a sketch of scheduling it is shown below.
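A hypothetical sketch of scheduling it, assuming the script has been saved as a stored procedure named dbo.usp_SendHealthCheckReport (the job name, procedure name, and schedule are all placeholders):

USE msdb
GO
EXEC dbo.sp_add_job @job_name = N'Daily Health Check Report';
EXEC dbo.sp_add_jobstep @job_name = N'Daily Health Check Report',
    @step_name = N'Send report',
    @subsystem = N'TSQL',
    @command = N'EXEC dbo.usp_SendHealthCheckReport;';
EXEC dbo.sp_add_jobschedule @job_name = N'Daily Health Check Report',
    @name = N'Daily at 8 AM',
    @freq_type = 4,          -- daily
    @freq_interval = 1,      -- every day
    @active_start_time = 080000;
EXEC dbo.sp_add_jobserver @job_name = N'Daily Health Check Report';
GO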

Use this script as a template for whatever report you have in mind.

Cheers!

Vertica – Check System Resources

Below is a script to check system resources. The system_resource_usage system table provides history about system resources such as memory, CPU, network, disk, and I/O.

SELECT * FROM v_monitor.system_resource_usage ORDER BY end_time DESC;
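If you only want recent samples, a hedged variant (assuming standard interval syntax) is:

SELECT * FROM v_monitor.system_resource_usage
WHERE end_time > NOW() - INTERVAL '1 hour'
ORDER BY end_time DESC;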

Cheers!