PostgreSQL – TOAST (The Oversized-Attribute Storage Technique)

TOAST stands for The Oversized-Attribute Storage Technique.
PostgreSQL uses a fixed page size (commonly 8 kB) and does not allow tuples to span multiple pages, so it is not possible to store very large field values directly. To overcome this limitation, large field values are compressed and/or broken up into multiple physical rows in an associated TOAST table.
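A quick way to see TOAST at work is the sketch below, run in psql; the table and column names are made up for illustration:

```sql
-- A wide TEXT value: anything much larger than ~2 kB on a default
-- 8 kB page is a candidate for compression and out-of-line storage.
CREATE TABLE toast_demo (id int, payload text);
INSERT INTO toast_demo VALUES (1, repeat('a', 100000));

-- Compare on-disk size with logical size; repeat('a', ...) compresses
-- extremely well, so stored_bytes comes out far smaller.
SELECT pg_column_size(payload) AS stored_bytes,
       octet_length(payload)  AS logical_bytes
FROM toast_demo;

-- The TOAST table PostgreSQL created behind the scenes
SELECT reltoastrelid::regclass
FROM pg_class
WHERE relname = 'toast_demo';
```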


PostgreSQL – Check Table Level Locks

Here is a script to check table-level locks:

SELECT 
  act1.query AS blocking_query, 
  act2.query AS blocked_query, 
  l1.pid AS blocking_pid, 
  l2.pid AS blocked_pid, 
  l1.relation::regclass AS locked_relation 
FROM 
  pg_locks l1, 
  pg_locks l2, 
  pg_stat_activity act1, 
  pg_stat_activity act2 
WHERE 
  l1.granted = true 
  AND l2.granted = false 
  AND l1.pid = act1.pid 
  AND l2.pid = act2.pid 
  AND l1.relation = l2.relation;
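To try the script, you can manufacture a blocking situation in two psql sessions; `my_table` is a placeholder name:

```sql
-- Session 1: take an exclusive lock and leave the transaction open
BEGIN;
LOCK TABLE my_table IN ACCESS EXCLUSIVE MODE;

-- Session 2: this statement now waits for session 1's lock, and the
-- script above will report the blocking/blocked pair
SELECT * FROM my_table;
```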

Cheers!


PostgreSQL – Check Transactional Locks / Blocking

Here is a script to check which query is blocking whom:

SELECT 
  bl.pid AS blocked_pid, 
  a.usename AS blocked_user, 
  ka.query AS current_or_recent_statement_in_blocking_process, 
  ka.state AS state_of_blocking_process, 
  now() - ka.query_start AS blocking_duration, 
  kl.pid AS blocking_pid, 
  ka.usename AS blocking_user, 
  a.query AS blocked_statement, 
  now() - a.query_start AS blocked_duration 
FROM 
  pg_catalog.pg_locks bl 
  JOIN pg_catalog.pg_stat_activity a ON a.pid = bl.pid 
  JOIN pg_catalog.pg_locks kl ON kl.transactionid = bl.transactionid 
  AND kl.pid != bl.pid 
  JOIN pg_catalog.pg_stat_activity ka ON ka.pid = kl.pid 
WHERE 
  NOT bl.granted;
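On PostgreSQL 9.6 and above, the built-in pg_blocking_pids() function offers a simpler alternative; a minimal sketch:

```sql
-- For each waiting backend, list the pids it is waiting on
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       state,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```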

Cheers!


PostgreSQL – Get the Size of All Databases

Here is a script to get the size of all your PostgreSQL databases.

SELECT 
  d.datname AS name, 
  pg_catalog.pg_get_userbyid(d.datdba) AS owner, 
  CASE WHEN pg_catalog.has_database_privilege(d.datname, 'CONNECT') 
       THEN pg_catalog.pg_size_pretty(pg_catalog.pg_database_size(d.datname)) 
       ELSE 'No Access' 
  END AS size 
FROM 
  pg_catalog.pg_database d 
ORDER BY 
  CASE WHEN pg_catalog.has_database_privilege(d.datname, 'CONNECT') 
       THEN pg_catalog.pg_database_size(d.datname) 
       ELSE NULL 
  END DESC;
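If you only need the database you are currently connected to, a one-liner is enough:

```sql
-- Pretty-printed size of the current database
SELECT pg_size_pretty(pg_database_size(current_database()));
```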

Cheers!


PostgreSQL – Get the Top 20 Biggest Tables

Here is a script to get the top 20 biggest tables:

SELECT 
  nspname || '.' || relname AS "relation", 
  pg_size_pretty(
    pg_relation_size(C.oid)
  ) AS "size" 
FROM 
  pg_class C 
  LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) 
WHERE 
  nspname NOT IN (
    'pg_catalog', 'information_schema', 
    'pg_toast'
  ) 
ORDER BY 
  pg_relation_size(C.oid) DESC 
LIMIT 
  20;
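Note that pg_relation_size() counts only the table's main data fork. To rank tables by total footprint, including indexes and TOAST data, a variant using pg_total_relation_size() can be used:

```sql
-- Top 20 tables by total size (heap + indexes + TOAST)
SELECT 
  nspname || '.' || relname AS "relation", 
  pg_size_pretty(pg_total_relation_size(C.oid)) AS "total_size" 
FROM 
  pg_class C 
  LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace) 
WHERE 
  nspname NOT IN ('pg_catalog', 'information_schema', 'pg_toast') 
ORDER BY 
  pg_total_relation_size(C.oid) DESC 
LIMIT 
  20;
```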

Cheers!


MySQL and PostgreSQL Equivalent Commands

In this blog post, I’ve mapped several frequently used MySQL commands to their PostgreSQL equivalents. Below is a table of some of them.

| MySQL | PostgreSQL | Description |
|---|---|---|
| mysqldump | pg_dumpall | Extract a PostgreSQL database cluster into a script file |
| mysqldump | pg_dump | Extract a PostgreSQL database into a script file or other archive file |
| N/A | pg_restore | Restore a PostgreSQL database from an archive file created by pg_dump |
| N/A | pg_top | Interactive monitoring utility, similar to the Unix top utility, providing a dynamically updated view of activity within a PostgreSQL database |
| mysql | psql | PostgreSQL interactive terminal |
| quit, exit, or Ctrl-C | \q | Quit the interactive terminal |
| show databases; | \l or \list | View all of the defined databases on the server |
| use [dbname]; | \c [dbname] | Connect to a database |
| show tables; | \dt or \dt+ | List all the tables in the current schema |
| describe [tablename]; | \d [tablename] | Show the columns, types, modifiers, indexes, and tables referenced by keys |
| show create table [tablename]; | pg_dump -st [tablename] [dbname] (from the shell) | No direct equivalent; this prints the SQL used to create the table |
| select * from mysql.user; | select * from pg_user; or \du | List all database roles |
| show full processlist; | select * from pg_stat_activity; | Show what queries are currently running in the database |
| show variables; | show all; | Display the current setting of run-time parameters |
| show engine innodb status\G | select * from pg_stat_activity / pg_stat_database / pg_stat_user_tables / pg_stat_user_indexes / pg_locks; | There is no central place in PostgreSQL for all of this information, but together these queries give roughly equivalent data |
| show slave status\G | select * from pg_stat_replication; and select now() - pg_last_xact_replay_timestamp() AS replication_delay; | Monitor replication |
| \G | \x | Set or toggle expanded table formatting mode |

Cheers!


PostgreSQL – Get The Top 10 Time-Consuming Queries in AWS RDS PostgreSQL and TencentDB for PostgreSQL

Here is a script to get the top 10 time-consuming queries (this will work on PostgreSQL 13 and above, where the total_plan_time family of columns exists).

We will make use of the pg_stat_statements view, which tells us which queries are slow and how often they are called.

In AWS RDS, this module is already installed by default, so all you need to do is create the extension before running the script.

create extension pg_stat_statements;

In TencentDB for PostgreSQL, both the module and the extension are enabled by default, so there is no need to do anything before running the query.
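On a self-managed PostgreSQL server, the module must first be preloaded; a sketch of the postgresql.conf change (a server restart is required afterwards, then create the extension as above):

```
# postgresql.conf
shared_preload_libraries = 'pg_stat_statements'
```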

Disable expanded display, if it is enabled (\x toggles it):

\x

Script to get the top 10 time-consuming queries:

SELECT 
  t2.rolname, 
  t3.datname, 
  queryid, 
  calls, 
  (
    total_plan_time + total_exec_time
  ) / 1000 as total_time_seconds, 
  (min_plan_time + min_exec_time) / 1000 as min_time_seconds, 
  (max_plan_time + max_exec_time) / 1000 as max_time_seconds, 
  (mean_plan_time + mean_exec_time) / 1000 as mean_time_seconds, 
  (
    stddev_plan_time + stddev_exec_time
  ) / 1000 as stddev_time_seconds, 
  rows, 
  shared_blks_hit, 
  shared_blks_read, 
  shared_blks_dirtied, 
  shared_blks_written, 
  local_blks_hit, 
  local_blks_read, 
  local_blks_dirtied, 
  local_blks_written, 
  temp_blks_read, 
  temp_blks_written, 
  blk_read_time / 1000 as blk_read_time_seconds, 
  blk_write_time / 1000 as blk_write_time_seconds 
FROM 
  pg_stat_statements t1 
  JOIN pg_roles t2 ON (t1.userid = t2.oid) 
  JOIN pg_database t3 ON (t1.dbid = t3.oid) 
WHERE 
  t2.rolname != 'rdsadmin' 
  AND queryid IS NOT NULL 
ORDER BY 
  total_time_seconds DESC 
LIMIT 
  10;
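pg_stat_statements accumulates counters since the last reset, so before a load test or a before/after comparison you may want a fresh measurement window:

```sql
-- Discard all statistics gathered so far by pg_stat_statements
SELECT pg_stat_statements_reset();
```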

Cheers!


Elastic Cloud Monitoring and Alerting

In this blog post, I will show you how to:

  • Enable logging and monitoring that will let us monitor our deployments in Kibana.
  • Integrate Elastic Alert with PagerDuty.

Enabling Logging and Monitoring

In production, the best practice is to send our deployment logs and metrics to a dedicated monitoring deployment. Monitoring indexes logs and metrics into Elasticsearch, and these indexes consume storage, memory, and CPU cycles like any other index. By using a separate monitoring deployment, we avoid affecting other production deployments and can still view the logs and metrics even when a production deployment is unavailable. We need a minimum of three monitoring nodes to make monitoring highly available.
