Elastic Cloud Monitoring and Alerting

In this blog post, I will show you how to:

  • Enable logging and monitoring that will let us monitor our deployments in Kibana.
  • Integrate Elastic Alert with PagerDuty.

Enabling Logging and Monitoring

In production, the best practice is to send our deployment logs and metrics to a dedicated monitoring deployment. Monitoring indexes logs and metrics into Elasticsearch, and these indices consume storage, memory, and CPU cycles like any other index. By using a separate monitoring deployment, we avoid affecting other production deployments and can still view the logs and metrics even when a production deployment is unavailable. We need a minimum of three monitoring nodes to make monitoring highly available.


Elasticsearch – Curator

Curator is an index management tool for open source Elasticsearch. It allows you to create, delete, open, and close indices, and to force-merge index segments.

This blog post describes how to install Curator and how to delete old indices based on their age.

Installing Curator

pip3 install elasticsearch-curator

Check the Curator version

curator --version

Note: You may encounter the following error during installation.

ERROR: Cannot uninstall ‘PyYAML’. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.

Execute the command below to fix it.

sudo -H pip3 install --ignore-installed PyYAML

Create a curator.yml file

In this file, indicate the host, port, username, and password.

Reference: https://www.elastic.co/guide/en/elasticsearch/client/curator/5.0/configfile.html

# Remember, leave a key empty if there is no value.  None will be a string,
# not a Python "NoneType"
client:
  hosts:
    - 192.168.1.1
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  username: elastic
  password: Password
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']

Create a delete_indices_time_base.yml file

Reference: https://www.elastic.co/guide/en/elasticsearch/client/curator/current/ex_delete_indices.html

The example configuration below deletes indices matching the prefix pattern basketbal-scores- (full index name format: basketbal-scores-2022.04.01) that are older than 14 days.

---
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 14 days (based on index name), for basketbal-scores-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: basketbal-scores-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 14
      exclude:

Run the script manually

/usr/local/bin/curator /home/scripts/delete_indices_time_base.yml --config /home/scripts/curator.yml
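
Before scheduling it, you can preview which indices Curator would delete by adding the --dry-run flag; nothing is removed in this mode.

/usr/local/bin/curator --config /home/scripts/curator.yml --dry-run /home/scripts/delete_indices_time_base.yml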

Schedule the script to run daily via a cron job

# Housekeep indices more than 14 days
0 0 * * * /usr/local/bin/curator /home/scripts/delete_indices_time_base.yml --config /home/scripts/curator.yml >> /home/scripts/log/curator_purging_time_base.log 2>&1


Redis – Automate Redis Enterprise Cloud Deployment Using Terraform

This blog post describes the steps to create and deploy a Redis Enterprise Cloud subscription and databases as code on GCP.

Install Terraform on a CentOS VM. If you want to test the connection to the Redis private endpoint from the same VM after provisioning the databases, use a VM that is part of the VPC network you plan to peer with Redis Enterprise later.

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
sudo yum -y install terraform

# Verify version
terraform --version

You may also run the Terraform scripts in Cloud Shell.

Here are the Terraform scripts that you need to upload to your server.

provider.tf

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.21.0"
    }
    rediscloud = {
      source  = "RedisLabs/rediscloud"
      version = "0.2.9"
    }
  }
}

provider "rediscloud" {
  # Configuration options
  api_key = "${var.api_key}"
  secret_key = "${var.secret_key}"
}

variables.tf

# You may get the values of api_key and secret_key (Also known as API user keys) by following the steps from this site https://docs.redis.com/latest/rc/api/get-started/manage-api-keys/#:~:text=Sign%20in%20to%20your%20Redis,select%20the%20API%20Keys%20tab
variable "api_key" {
  type = string
  default = "<my_api_key>"
}


variable "secret_key" {
  type = string
  default = "<my_secret_key>"
}


# Project name as Prefix of the Redis instance
# Example: "${projname}-gacc-cache-redis"
variable "projname" {
  type = string
  default = "test-project"
}

variable "region" {
  type = string
  default = "us-west1"
}

variable "preferred_zones" {
  type = string
  default = "us-west1-a"
}

variable "multiple_zones" {
  type = bool
  default = false
}

variable "cidr" {
  type = string
  default = "192.168.1.0/24"
}

main.tf

# Generates a random password for the database
resource "random_password" "passwords" {
 count = 3   # this number should be equal to the number of Redis databases to be created
 length = 20
 upper = true
 lower = true
 numeric = true
 special = false
}
 
resource "rediscloud_subscription" "MY-REDISCLOUD-SUBSCRIPTION" {
 name = "My-Redis-Subscription"
 memory_storage = "ram"
 
 cloud_provider {
   # Running in GCP on Redis resources
   provider = "GCP"
   region {
     region = "${var.region}"
     networking_deployment_cidr = "${var.cidr}"
     preferred_availability_zones = ["${var.preferred_zones}"]
     multiple_availability_zones  = "${var.multiple_zones}"
   }
 }
 
  database {
   name = "${var.projname}-redis-database1"
   protocol = "redis"
   memory_limit_in_gb = 6
   replication = true
   data_persistence = "none"
   throughput_measurement_by = "operations-per-second"
   throughput_measurement_value = 10000
   password = random_password.passwords[0].result
   alert {
      name = "dataset-size"
      value = 80
    }
   alert {
      name = "throughput-higher-than"
      value = 10000
    }
 }
  
  database {
   name = "${var.projname}-redis-database2"
   protocol = "redis"
   memory_limit_in_gb = 6
   replication = true
   data_persistence = "none"
   throughput_measurement_by = "operations-per-second"
   throughput_measurement_value = 10000
   password = random_password.passwords[1].result
   alert {
      name = "dataset-size"
      value = 80
    }
   alert {
      name = "throughput-higher-than"
      value = 10000
    }
 }

 database {
   name = "${var.projname}-redis-database3"
   protocol = "redis"
   memory_limit_in_gb = 13
   replication = true
   data_persistence = "aof-every-1-second"
   throughput_measurement_by = "number-of-shards"
   throughput_measurement_value = 4
   password = random_password.passwords[2].result
   alert {
      name = "dataset-size"
      value = 80
    }
   alert {
      name = "throughput-higher-than"
      value = 10000
    }
 }
 
}

outputs.tf

# Terraform output values

output "database_private_endpoints" {
    description = "Output private endpoints"
    sensitive = true
    value = {
    for database in rediscloud_subscription.MY-REDISCLOUD-SUBSCRIPTION.database:
    database.name => database.private_endpoint}
}

output "database_passwords" {
    description = "Output passwords"
    sensitive = true
    value = {
    for database in rediscloud_subscription.MY-REDISCLOUD-SUBSCRIPTION.database:
    database.name => database.password}
}



A Few Notes

Update Redis Enterprise Cloud API Keys in variables.tf

You can get the values of api_key and secret_key (also known as API user keys) by following the steps in the Redis documentation (see the link in the variables.tf comment above).

The Account key identifies the account associated with the Redis Enterprise Cloud subscription.

The User key (secret_key) identifies the user and (optionally) the context of a request. It is generated by account owners.
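
If you prefer not to hard-code the keys in variables.tf, Terraform can also read them from environment variables named TF_VAR_<variable name>. For example (values are placeholders):

export TF_VAR_api_key="<my_api_key>"
export TF_VAR_secret_key="<my_secret_key>"
terraform plan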

In the main.tf file, add database blocks configured according to your application requirements.

Also in main.tf, edit the count parameter under the random_password resource. The number should be equal to the number of Redis databases to be created.

Executing Terraform Commands To Create Redis Enterprise Databases

Execute terraform init command to initialize a working directory containing Terraform configuration files.

terraform init

Run terraform plan command to create an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure.

terraform plan

If the plan is okay, then execute terraform apply.

terraform apply

Outputting sensitive data

The database_private_endpoints and database_passwords outputs are marked as sensitive, so their contents are not displayed automatically after terraform apply completes.

To output the contents, run the commands below.

terraform output -json database_private_endpoints

terraform output -json database_passwords

Verify the Redis databases from the Redis Enterprise Cloud console.

Bonus Section: VPC Peering

Google Cloud VPC Network Peering allows internal IP address connectivity across two Virtual Private Cloud (VPC) networks regardless of whether they belong to the same project or the same organization.

In your Redis subscription, go to the Connectivity tab, then click + Add peering.

Provide the Project ID and Network Name, then copy the Google Cloud command.

Click Initiate peering.

In the GCP console, open a Cloud Shell, then execute the command you just copied.

Configure your GCP project and zone.

gcloud config set core/project <project-id>
gcloud config set compute/zone <zone>

Execute the command to accept the VPC peering. This is the Google Cloud command you copied from the Redis Enterprise portal.

Example:

gcloud compute networks peerings create rl-c19541-us-west1-1-rlrcp --project <project-id> --network <network-name> --peer-network=c19541-us-west1-1-rlrcp --peer-project=g0c4be81afe238245-tp

Check the peering status in the Redis Enterprise portal. It may take several minutes to become active.

Once the peering is successful, the status will become green.
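
You can also confirm the peering from the GCP side by listing the peerings on your VPC network (the network name is a placeholder):

gcloud compute networks peerings list --network=<network-name>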

Cheers!


MySQL/MariaDB Memory

In this blog post, I'm going to talk about MySQL memory issues: what causes high memory usage and what its impact is on MySQL database instances.

But first, let's do a quick review of the following topics:

  • How does MySQL use memory? (log files, buffers, InnoDB buffer pool size)
  • How does MySQL allocate memory?
  • Swap memory
  • Troubleshooting
  • Best practices

Globally shared memory is allocated upon the creation of an instance and is shared by all connections.
Examples: thread cache, query cache, buffer pool, and log buffer.

Private memory is allocated per connection (per thread) to the MySQL server. Examples: sort buffer, join buffer, and temporary tables.

InnoDB maintains one or more buffer pools that cache frequently used data and indexes in the main memory.

So, when a read query is executed from the client program, InnoDB checks whether the required data pages are in the buffer pool. If they are not, InnoDB reads the data from the tablespace and places the pages in the buffer pool. MySQL then returns the results to the client.

 
When transactions are performed, data changes are written to the log buffer in memory. Then, every time a transaction is committed, the transaction log is flushed to disk, though this behavior can be changed (innodb_flush_log_at_trx_commit). If a crash occurs while the tables are being modified, the redo log files are used for recovery.

On the other hand, dirty pages in the buffer pool are written to disk at checkpoints to ensure that the cache has enough free space.

innodb_buffer_pool_size is the most important tuning parameter. It caches table and index data. The buffer pool permits frequently used data to be accessed directly from memory, which speeds up processing.

Setting it too low can degrade performance. Setting it too high can increase memory consumption and cause the database to crash.

MySQL memory allocation comes not only from innodb_buffer_pool_size, but also from other buffers such as sort_buffer_size, read_buffer_size, read_rnd_buffer_size, join_buffer_size, and tmp_table_size, and this typically requires an additional 5-10% of memory.

Below is a formula to calculate the approximate memory usage for your MySQL:

Maximum MySQL Memory Usage = innodb_buffer_pool_size + key_buffer_size + ((read_buffer_size + read_rnd_buffer_size + sort_buffer_size + join_buffer_size) X max_connections)
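
As a rough check, you can let MySQL do the arithmetic by reading the server variables directly. This is only a sketch of the formula above and does not account for every allocation the server can make:

mysql -u root -p -e "SELECT ( @@innodb_buffer_pool_size + @@key_buffer_size
  + ( @@read_buffer_size + @@read_rnd_buffer_size
    + @@sort_buffer_size + @@join_buffer_size ) * @@max_connections
  ) / 1024 / 1024 / 1024 AS approx_max_memory_gb;"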

Swapping can happen when the system requires more memory than is available. A buffer that is too large may lead to swapping in the operating system and degrade performance.

To avoid your MySQL/MariaDB data being pushed out to swap instead of staying in RAM, you have to tune a kernel parameter called swappiness.

The swappiness value changes the balance between swapping out runtime memory and dropping pages from the system page cache. The higher the value, the more the system will swap; the lower the value, the less the system will swap. The maximum value is 100, the minimum is 0, and 60 is the default.
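
For example, on a self-managed Linux server you can check the current value and lower it so the OS prefers dropping page cache over swapping out MySQL (a value of 1 is a common choice for dedicated database hosts):

# Check the current value
cat /proc/sys/vm/swappiness

# Change it at runtime
sudo sysctl vm.swappiness=1

# Persist it across reboots
echo "vm.swappiness = 1" | sudo tee -a /etc/sysctl.conf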

I performed a test on AWS RDS MySQL: I ran a stored procedure that consumed memory until the server crashed. When memory gets low, swap usage increases, and we also see a spike in disk I/O usage.

When MySQL crashed, we lost the connection to the database. In AWS RDS, we can check the logs and events to identify the cause of the crash. The log indicates that the database process was killed by the OS due to excessive memory consumption.

I also ran the same test on Azure MySQL. The difference between Azure and AWS is that when the database crashes, Azure fails over to the standby replica, whereas in AWS a mysqld process crash does not trigger a failover.

In Azure, under the Private DNS Zone, if we see that the IP address has changed, it means the failover to the standby replica completed successfully.

Cheers!


MySQL/MariaDB – FLUSH TABLES WITH READ LOCK

To block all write access and mark all tables as 'properly closed' on disk, we execute FLUSH TABLES WITH READ LOCK. Note that the tables can still be used for read operations. Open tables are closed, and all tables in all databases are locked with a global read lock.

The FLUSH TABLES or RELOAD privilege is required to execute this operation.

To release the lock, use UNLOCK TABLES. UNLOCK TABLES implicitly commits an active transaction only if tables have been locked with LOCK TABLES; when it is executed after FLUSH TABLES WITH READ LOCK, no commit occurs, because FLUSH TABLES WITH READ LOCK does not acquire table locks.
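
A typical use is to hold the global read lock while taking a consistent backup or snapshot. A minimal sketch (the backup tool and paths are illustrative; the lock is released when the session closes, so keep it open until the backup finishes):

# Session 1: take the global read lock and keep this session open
mysql> FLUSH TABLES WITH READ LOCK;

# Session 2 (another terminal): run the backup or snapshot while writes are blocked
mysqldump --all-databases > /backup/full_dump.sql

# Session 1: release the lock once the backup completes
mysql> UNLOCK TABLES;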

Cheers!


Redis – Predixy Proxy Auto Setup

Here is a shell script I wrote to automate the setup of the Predixy proxy on CentOS 7.

This script works if you have a single endpoint (i.e., an AWS ElastiCache cluster configuration endpoint or a standalone ElastiCache endpoint).

#!/bin/bash

# Created By  : Enrique Valencia 
# Script Name : predixy_auto_setup.sh
# Description : Predixy Automatic Setup

################################################################################################################
# Usage           : ./predixy_auto_setup.sh <configuration_end_point>
# Example         : ./predixy_auto_setup.sh dba-redis-test01.bbssxp.clustercfg.apse1.cache.amazonaws.com:6379
# To Check Status : systemctl status predixy
# To Start        : systemctl start predixy
# To Stop         : systemctl stop predixy
# To Restart      : systemctl restart predixy
################################################################################################################

################################################################################################################
##### Validate Usage
################################################################################################################

if [ $# -ne 1 ]
then
  echo ""
  echo "************************************************************************************************************"
  echo "Usage:   ./predixy_auto_setup.sh <configuration_end_point>"
  echo ""
  echo "Example: ./predixy_auto_setup.sh dba-redis-test01.bbssxp.clustercfg.apse1.cache.amazonaws.com:6379"
  echo "************************************************************************************************************"
  echo ""
  exit 1
fi

################################################################################################################
##### Variable
################################################################################################################

ENDPOINT=$1

################################################################################################################
##### Install Predixy
################################################################################################################

InstallPredixy(){
	mkdir /predixy_data
	cd /predixy_data
	sudo yum install git -y
	sudo yum install libstdc++-static -y
	yum -y install gcc
	yum -y install gcc-c++ 
	sudo git clone https://github.com/joyieldInc/predixy.git
	cd predixy
	sudo make
	mkdir -p /predixy_data/predixy/log
}

################################################################################################################
##### Cluster
################################################################################################################

cluster(){

cat >> /predixy_data/predixy/conf/cluster.conf << EOF
ClusterServerPool {
	MasterReadPriority 100     # read priority for the master nodes
	# Password sjwkk123456     # uncomment and set if the backend Redis requires AUTH
	StaticSlaveReadPriority 0  # read priority for statically configured replicas
	DynamicSlaveReadPriority 0 # read priority for dynamically discovered replicas
	RefreshInterval 1          # seconds between cluster topology refreshes
	ServerTimeout 1            # seconds before a backend request times out
	ServerFailureLimit 10      # failures before a backend is marked as down
	ServerRetryTimeout 1       # seconds before retrying a failed backend
	KeepAlive 120              # TCP keepalive (seconds) for predixy-to-redis connections
	Servers {
	## Backend server list
		+ $ENDPOINT
	}
}
EOF

	sed -i 's/Auth "#a complex password#" {/Auth "123456" {/' /predixy_data/predixy/conf/auth.conf
	sed -i 's/WorkerThreads 1/WorkerThreads 4/' /predixy_data/predixy/conf/predixy.conf
	sed -i 's/# Log .\/predixy.log/Log \/predixy_data\/predixy\/log\/predixy.log/' /predixy_data/predixy/conf/predixy.conf
	sed -i '/# LogRotate 1d 2G/d' /predixy_data/predixy/conf/predixy.conf
	sed -i 's/# LogRotate 1d/LogRotate 1d/' /predixy_data/predixy/conf/predixy.conf
	sed -i 's/Include try.conf/# Include try.conf/' /predixy_data/predixy/conf/predixy.conf
	sed -i 's/# Include cluster.conf/Include cluster.conf/' /predixy_data/predixy/conf/predixy.conf
}


################################################################################################################
##### Standalone
################################################################################################################

standalone(){
	
cat >> /predixy_data/predixy/conf/standalone.conf << EOF

StandaloneServerPool {
	RefreshMethod fixed
	Group shard001 {
		+ $ENDPOINT
	}
}
EOF

	sed -i 's/Auth "#a complex password#" {/Auth "123456" {/' /predixy_data/predixy/conf/auth.conf
	sed -i 's/WorkerThreads 1/WorkerThreads 4/' /predixy_data/predixy/conf/predixy.conf
	sed -i 's/# Log .\/predixy.log/Log \/predixy_data\/predixy\/log\/predixy.log/' /predixy_data/predixy/conf/predixy.conf
	sed -i '/# LogRotate 1d 2G/d' /predixy_data/predixy/conf/predixy.conf
	sed -i 's/# LogRotate 1d/LogRotate 1d/' /predixy_data/predixy/conf/predixy.conf
	sed -i 's/Include try.conf/# Include try.conf/' /predixy_data/predixy/conf/predixy.conf
	sed -i 's/# Include sentinel.conf/Include standalone.conf/' /predixy_data/predixy/conf/predixy.conf
}

################################################################################################################
##### Setup Predixy as a Systemd Service
################################################################################################################

systemd(){
	echo "/predixy_data/predixy/src/predixy /predixy_data/predixy/conf/predixy.conf" > /usr/sbin/predixy_start.sh

	cat > /etc/systemd/system/predixy.service <<-EOF
	[Unit]
	Description=Predixy

	[Service]
	ExecStart=/bin/bash /usr/sbin/predixy_start.sh


	[Install]
	WantedBy=multi-user.target
	EOF

	sudo chmod 640 /etc/systemd/system/predixy.service
	sudo systemctl daemon-reload
	sudo systemctl enable predixy
}


################################################################################################################
##### Check Predixy Status
################################################################################################################

CheckStatus() {
	if [ `ps -ef | grep predixy.conf | grep -v grep | wc -l` -gt 0 ]; then
		echo ""
		echo "************************************************************************"
		echo "Predixy Setup Completed."
		echo "************************************************************************"
		echo ""
	else
		echo ""
		echo "************************************************************************"
		echo "Predixy Proxy is unable to start. Please check."
		echo "************************************************************************"
		echo ""
		exit 1
	fi
}

################################################################################################################
##### MAIN
################################################################################################################

echo -ne "
Menu: 
=====

1) cluster
2) standalone
3) Exit

Please choose: "

read -r choice

case $choice in

1)
	echo ""
	echo "************************************************************************"
	echo "You chose 1 - Cluster. Starting Predixy Setup."
	echo "************************************************************************"
	echo ""
	sleep 3
	InstallPredixy
	cluster
	systemd
	systemctl start predixy
	sleep 3
	CheckStatus
	;;

2)
	echo ""
	echo "************************************************************************"
	echo "You chose 2 - Standalone. Starting Predixy Setup."
	echo "************************************************************************"
	echo ""
	sleep 3
	InstallPredixy
	standalone
	systemd
	systemctl start predixy
	sleep 3
	CheckStatus
	;;
	
3)
	echo ""
	echo "*********"
	echo "再见"
	echo "*********"
	echo ""
	exit 1
	;;
*)
	echo ""
	echo "************************************************************************"
	echo "Invalid Option. Please try again."
	echo "************************************************************************"
	echo ""
	exit 1
esac
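
Once the service is up, you can send a quick test through the proxy with redis-cli. This assumes the default Bind port of 7617 in predixy.conf and the Auth password the script sets:

redis-cli -h 127.0.0.1 -p 7617 -a 123456 ping
redis-cli -h 127.0.0.1 -p 7617 -a 123456 set foo bar
redis-cli -h 127.0.0.1 -p 7617 -a 123456 get foo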

Cheers!


Redis – Monitor Redis Using Grafana

In this blog post, I will outline the steps to set up Grafana to monitor your Redis database.

Install Grafana

Go to the Grafana website's download page.

Example: for Red Hat, CentOS, RHEL, and Fedora (64-bit).
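
On CentOS/RHEL this is a single RPM install. The version below is only illustrative, so copy the exact command shown on the download page:

sudo yum install -y https://dl.grafana.com/oss/release/grafana-9.5.2-1.x86_64.rpm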

Execute the following commands to make Grafana start automatically when the server is booted up.

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable grafana-server.service

Install Redis Plugin

The Redis Data Source for Grafana is a plugin that allows users to connect to any Redis database On-Premises and in the Cloud. It provides out-of-the-box predefined dashboards and lets you build customized dashboards to monitor Redis and application data.

grafana-cli plugins install redis-datasource

Start Grafana

Start Grafana.

sudo systemctl start grafana-server.service

Check Grafana Status.

sudo systemctl status grafana-server.service

Access Grafana Monitor Dashboard

Enter the server's IP address plus port 3000 in your web browser to access the Grafana dashboard.

<ServerIPaddress>:3000

The initial username and password are admin/admin.

Configure Data source

Go to Configuration > Data sources.

Add data source.

Search for "Redis", then select the data source below.

Enter a name and address in the following format.

redis://your-redis-endpoint.ng.0001.sae1.cache.amazonaws.com:6379

Click Save and test and verify that the data source is working as expected.
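
If Save and test fails, it is worth confirming that the Grafana host can reach the Redis endpoint at all, for example with redis-cli (using the same endpoint placeholder as above):

redis-cli -h your-redis-endpoint.ng.0001.sae1.cache.amazonaws.com -p 6379 ping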

Import Dashboard

This step will be performed one time only.

Go to Dashboards. Then click Import for both Redis and Redis Streaming.

View Dashboard

Go to Dashboard > Manage.

Choose Redis or Redis Streaming.


Deploy TiDB on GCP GKE (Google Kubernetes Engine)

This blog post describes how to deploy a TiDB cluster on GCP Google Kubernetes Engine (GKE). TiDB on Kubernetes is the standard way to deploy TiDB on public clouds.

TiDB Architecture

TiDB is designed to consist of multiple components. These components communicate with each other and form a complete TiDB system. The architecture is as follows:

TiDB server

The TiDB server is a stateless SQL layer that exposes the connection endpoint of the MySQL protocol to the outside. The TiDB server receives SQL requests, performs SQL parsing and optimization, and ultimately generates a distributed execution plan. It is horizontally scalable and provides the unified interface to the outside through the load balancing components such as Linux Virtual Server (LVS), HAProxy, or F5. It does not store data and is only for computing and SQL analyzing, transmitting actual data read request to TiKV nodes (or TiFlash nodes).

Placement Driver (PD) server

The PD server is the metadata managing component of the entire cluster. It stores metadata of real-time data distribution of every single TiKV node and the topology structure of the entire TiDB cluster, provides the TiDB Dashboard management UI, and allocates transaction IDs to distributed transactions. The PD server is “the brain” of the entire TiDB cluster because it not only stores metadata of the cluster, but also sends data scheduling command to specific TiKV nodes according to the data distribution state reported by TiKV nodes in real time. In addition, the PD server consists of three nodes at least and has high availability. It is recommended to deploy an odd number of PD nodes.

Storage servers

TiKV server

The TiKV server is responsible for storing data. TiKV is a distributed transactional key-value storage engine. Region is the basic unit to store data. Each Region stores the data for a particular Key Range which is a left-closed and right-open interval from StartKey to EndKey. Multiple Regions exist in each TiKV node. TiKV APIs provide native support to distributed transactions at the key-value pair level and supports the Snapshot Isolation level isolation by default. This is the core of how TiDB supports distributed transactions at the SQL level. After processing SQL statements, the TiDB server converts the SQL execution plan to an actual call to the TiKV API. Therefore, data is stored in TiKV. All the data in TiKV is automatically maintained in multiple replicas (three replicas by default), so TiKV has native high availability and supports automatic failover.

TiFlash server

The TiFlash Server is a special type of storage server. Unlike ordinary TiKV nodes, TiFlash stores data by column, mainly designed to accelerate analytical processing.

Prerequisites

Before deploying a TiDB cluster on GCP GKE, make sure the following requirements are satisfied:

1) Create a project

2) Enable Kubernetes Engine API

3) Activate Cloud Shell

Ensure that you have available quota for Compute Engine CPUs in your cluster's region.
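
You can check the current CPU quota and usage for the region from Cloud Shell (the region name is an example):

gcloud compute regions describe us-west1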

4) Configure the GCP service

Configure your GCP project and default region.

gcloud config set core/project <project-id>
gcloud config set compute/region <region>

Example:
gcloud config set core/project erudite-spot-326413
gcloud config set compute/zone us-west1-a

Create a GKE cluster and node pool

Enable container.googleapis.com

gcloud services enable container.googleapis.com

Create a GKE cluster and a default node pool

gcloud container clusters create tidb --region us-west1-a --machine-type n1-standard-4 --num-nodes=1

Create separate node pools for PD, TiKV, and TiDB

gcloud container node-pools create pd --cluster tidb --machine-type n1-standard-4 --num-nodes=1 \
--node-labels=dedicated=pd --node-taints=dedicated=pd:NoSchedule

gcloud container node-pools create tikv --cluster tidb --machine-type n1-highmem-8 --num-nodes=1 \
--node-labels=dedicated=tikv --node-taints=dedicated=tikv:NoSchedule

gcloud container node-pools create tidb --cluster tidb --machine-type n1-standard-8 --num-nodes=1 \
    --node-labels=dedicated=tidb --node-taints=dedicated=tidb:NoSchedule

Deploy TiDB Operator

This section describes how to deploy a TiDB Operator on GCP GKE

Install Helm

Helm is used for deploying TiDB Operator

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
git clone https://github.com/pingcap/tidb-operator.git && cd tidb-operator &&
kubectl create serviceaccount tiller --namespace kube-system &&
kubectl apply -f ./manifests/tiller-rbac.yaml &&
helm init --service-account tiller --upgrade

Helm also needs a couple of permissions to work properly; these are provided by the tiller-rbac.yaml manifest from the tidb-operator project, which is applied in the commands above.

Ensure that the tiller pod is running.

kubectl get pods -n kube-system

Note: If it is not running (Status: ImagePullBackOff), then run the following commands. Then check the status again.

kubectl delete -n kube-system deployment tiller-deploy

helm init --service-account tiller --upgrade

Install TiDB Operator CRDs

TiDB Operator uses Custom Resource Definition (CRD) to extend Kubernetes. Therefore, to use TiDB Operator, you must first create the TidbCluster CRD, which is a one-time job in your Kubernetes cluster.

kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml

Add the PingCAP repository

helm repo add pingcap https://charts.pingcap.org/

Create a namespace for TiDB Operator

kubectl create namespace tidb-admin

Install TiDB Operator

helm install ./charts/tidb-operator -n tidb-admin --namespace=tidb-admin --version v1.2.3

Make sure tidb-operator components are running.

kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-admin
kubectl get pods --namespace tidb-admin -o wide

Deploy a TiDB Cluster and the Monitoring Component

This section describes how to deploy a TiDB cluster and its monitoring services.

Create namespace

kubectl create namespace tidb-cluster 

Note: A namespace is a virtual cluster backed by the same physical cluster. This document takes tidb-cluster as an example. If you want to use a different namespace, modify the corresponding arguments of -n or --namespace.

Download the sample TidbCluster and TidbMonitor configuration files

curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/gcp/tidb-cluster.yaml && \
curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/gcp/tidb-monitor.yaml

Deploy the TidbCluster and TidbMonitor CR in the GKE cluster

kubectl create -f tidb-cluster.yaml -n tidb-cluster && \
kubectl create -f tidb-monitor.yaml -n tidb-cluster

Watch Cluster Status

watch kubectl get pods -n tidb-cluster

Wait until all Pods for all services are started. As soon as you see Pods of each type (-pd, -tikv, and -tidb) are in the “Running” state, you can press Ctrl+C to get back to the command line and go on to connect to your TiDB cluster.

View the cluster status

kubectl get pods -n tidb-cluster

Get list of services in the tidb-cluster

kubectl get svc -n tidb-cluster

Access the TiDB database

After you deploy a TiDB cluster, you can access the TiDB database via MySQL client.

Prepare a bastion host

The LoadBalancer created for your TiDB cluster is an intranet LoadBalancer. You can create a bastion host in the cluster VPC to access the database.

Note: You can also create the bastion host in other zones in the same region.

gcloud compute instances create bastion \
    --machine-type=n1-standard-4 \
    --image-project=centos-cloud \
    --image-family=centos-7 \
    --zone=us-west1-a

Install the MySQL client and Connect

After the bastion host is created, you can connect to the bastion host via SSH and access the TiDB cluster via the MySQL client.

Connect to the bastion host via SSH.

gcloud compute ssh tidb@bastion

Install the MySQL Client.

sudo yum install mysql -y

Connect the client to the TiDB cluster

mysql -h ${tidb-nlb-dnsname} -P 4000 -u root

${tidb-nlb-dnsname} is the LoadBalancer IP of the TiDB service.

You can view the IP in the EXTERNAL-IP field of the kubectl get svc basic-tidb -n tidb-cluster execution result.

kubectl get svc basic-tidb -n tidb-cluster
mysql -h 10.138.0.6 -P 4000 -u root

Check TiDB Version

select tidb_version()\G

Create Test table

use test;

create table test_table (id int unsigned not null auto_increment primary key, v varchar(32));

select * from information_schema.tikv_region_status where db_name=database() and table_name='test_table'\G

Query the TiKV store status

select * from information_schema.tikv_store_status\G

Query the TiDB cluster information

select * from information_schema.cluster_info\G

Access the Grafana Monitor Dashboard

Obtain the LoadBalancer IP of Grafana

kubectl -n tidb-cluster get svc basic-grafana

In the output above, the EXTERNAL-IP column is the LoadBalancer IP.

You can access the ${grafana-lb}:3000 address using your web browser to view monitoring metrics. Replace ${grafana-lb} with the LoadBalancer IP.

Scale out

Before scaling out the cluster, you need to scale out the corresponding node pool so that the new instances have enough resources for operation.

This section describes how to scale out the GKE node pool and TiDB components.

Scale out GKE node pool

The following example shows how to scale out the tikv node pool of the tidb cluster to 6 nodes:

gcloud container clusters resize tidb --node-pool tikv --num-nodes 2

Note: In a regional cluster, the nodes are created in 3 zones. Therefore, after scaling out, the number of nodes is 2 * 3 = 6.

After that, execute kubectl edit tc basic -n tidb-cluster and modify each component’s replicas to the desired number of replicas. The scaling-out process is then completed.

kubectl edit tc basic -n tidb-cluster

Deploy TiFlash and TiCDC

TiFlash is the columnar storage extension of TiKV.

TiCDC is a tool for replicating the incremental data of TiDB by pulling TiKV change logs.

Create new node pools

  • Create a node pool for TiFlash:
gcloud container node-pools create tiflash --cluster tidb --machine-type n1-highmem-8 --num-nodes=1 \
    --node-labels dedicated=tiflash --node-taints dedicated=tiflash:NoSchedule
  • Create a node pool for TiCDC:
gcloud container node-pools create ticdc --cluster tidb --machine-type n1-standard-4 --num-nodes=1 \
    --node-labels dedicated=ticdc --node-taints dedicated=ticdc:NoSchedule

Configure and deploy

  • To deploy TiFlash, configure spec.tiflash in tidb-cluster.yaml.
  tiflash:
    baseImage: pingcap/tiflash
    replicas: 1
    storageClaims:
    - resources:
        requests:
          storage: 100Gi
    nodeSelector:
      dedicated: tiflash
    tolerations:
    - effect: NoSchedule
      key: dedicated
      operator: Equal
      value: tiflash
  • To deploy TiCDC, configure spec.ticdc in tidb-cluster.yaml
ticdc:
    baseImage: pingcap/ticdc
    replicas: 1
    nodeSelector:
      dedicated: ticdc
    tolerations:
    - effect: NoSchedule
      key: dedicated
      operator: Equal
      value: ticdc
  • Finally, execute kubectl -n tidb-cluster apply -f tidb-cluster.yaml to update the TiDB cluster configuration
kubectl -n tidb-cluster apply -f tidb-cluster.yaml

Delete Cluster

List existing clusters for running containers

gcloud container clusters list

Delete cluster.

gcloud container clusters delete tidb

Cheers!


Deploy TiDB on AWS EKS (Elastic Kubernetes Service)

This blog post describes how to deploy a TiDB cluster on AWS Elastic Kubernetes Service (EKS). TiDB on Kubernetes is the standard way to deploy TiDB on public clouds

Install the AWS, kubectl & eksctl CLIs

Install AWS CLI

MAC – Install and configure AWS CLI

# Download Binary
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"

# Install the binary
sudo installer -pkg ./AWSCLIV2.pkg -target /

# Verify the installation
aws --version

Reference: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-mac.html

Windows 10 – Install and configure AWS CLI

  • The AWS CLI version 2 is supported on Windows XP or later.
  • The AWS CLI version 2 supports only 64-bit versions of Windows.
  • Download Binary: https://awscli.amazonaws.com/AWSCLIV2.msi
  • Install the downloaded binary (standard windows install)
# Verify the installation
aws --version

Reference: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html

Configure AWS Command Line using Security Credentials

  • Go to AWS Management Console -> Services -> IAM
  • Select the IAM User: <user>
  • Important Note: Use only an IAM user to generate security credentials. Never use the root user (highly not recommended).
  • Click on the Security credentials tab
  • Click on Create access key
  • Copy the Access key ID and Secret access key
  • Go to the command line and provide the required details
aws configure

Test if AWS CLI is working after configuring the above:

aws ec2 describe-vpcs

Install kubectl CLI

MAC – Install and configure kubectl

# Download the Package
mkdir kubectlbinary
cd kubectlbinary
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.16.8/2020-04-16/bin/darwin/amd64/kubectl

# Provide execute permissions
chmod +x ./kubectl

# Set the Path by copying to user Home Directory
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bash_profile

# Verify the kubectl version
kubectl version --short --client
Output: Client Version: v1.16.8-eks-e16311

Windows 10 – Install and configure kubectl

# Download the Package
mkdir kubectlbinary
cd kubectlbinary
curl -o kubectl.exe https://amazon-eks.s3.us-west-2.amazonaws.com/1.16.8/2020-04-16/bin/windows/amd64/kubectl.exe

# Update the system Path environment variable

# Verify the kubectl client version
kubectl version --short --client
kubectl version --client

Install eksctl CLI

eksctl on Mac

# Install Homebrew on MacOs
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

# Install the Weaveworks Homebrew tap.
brew tap weaveworks/tap

# Install eksctl
brew install weaveworks/tap/eksctl

# Verify eksctl version
eksctl version

eksctl on windows or linux

Getting started with Amazon EKS – eksctl

https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html

TiDB Architecture

TiDB is designed to consist of multiple components. These components communicate with each other and form a complete TiDB system. The architecture is as follows:

TiDB server

The TiDB server is a stateless SQL layer that exposes the connection endpoint of the MySQL protocol to the outside. The TiDB server receives SQL requests, performs SQL parsing and optimization, and ultimately generates a distributed execution plan. It is horizontally scalable and provides the unified interface to the outside through the load balancing components such as Linux Virtual Server (LVS), HAProxy, or F5. It does not store data and is only for computing and SQL analyzing, transmitting actual data read request to TiKV nodes (or TiFlash nodes).

Placement Driver (PD) server

The PD server is the metadata managing component of the entire cluster. It stores metadata of real-time data distribution of every single TiKV node and the topology structure of the entire TiDB cluster, provides the TiDB Dashboard management UI, and allocates transaction IDs to distributed transactions. The PD server is “the brain” of the entire TiDB cluster because it not only stores metadata of the cluster, but also sends data scheduling command to specific TiKV nodes according to the data distribution state reported by TiKV nodes in real time. In addition, the PD server consists of three nodes at least and has high availability. It is recommended to deploy an odd number of PD nodes.

Storage servers

TiKV server

The TiKV server is responsible for storing data. TiKV is a distributed transactional key-value storage engine. Region is the basic unit to store data. Each Region stores the data for a particular Key Range which is a left-closed and right-open interval from StartKey to EndKey. Multiple Regions exist in each TiKV node. TiKV APIs provide native support to distributed transactions at the key-value pair level and supports the Snapshot Isolation level isolation by default. This is the core of how TiDB supports distributed transactions at the SQL level. After processing SQL statements, the TiDB server converts the SQL execution plan to an actual call to the TiKV API. Therefore, data is stored in TiKV. All the data in TiKV is automatically maintained in multiple replicas (three replicas by default), so TiKV has native high availability and supports automatic failover.

TiFlash server

The TiFlash Server is a special type of storage server. Unlike ordinary TiKV nodes, TiFlash stores data by column, mainly designed to accelerate analytical processing.

Create a EKS cluster and a node pool

It is recommended to create a node group in each availability zone (at least 3 in total) for each component when creating an EKS cluster.

References:

https://aws.amazon.com/blogs/containers/amazon-eks-cluster-multi-zone-auto-scaling-groups/

https://aws.github.io/aws-eks-best-practices/reliability/docs/dataplane/#ensure-capacity-in-each-az-when-using-ebs-volumes

Save the following configuration as the cluster.yaml file. Replace ${clusterName} with your preferred cluster name, and specify your preferred region.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${clusterName}
  region: ap-northeast-1

nodeGroups:
  - name: admin
    desiredCapacity: 1
    privateNetworking: true
    labels:
      dedicated: admin

  - name: tidb-1a
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1a"]
    labels:
      dedicated: tidb
    taints:
      dedicated: tidb:NoSchedule
  - name: tidb-1d
    desiredCapacity: 0
    privateNetworking: true
    availabilityZones: ["ap-northeast-1d"]
    labels:
      dedicated: tidb
    taints:
      dedicated: tidb:NoSchedule
  - name: tidb-1c
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1c"]
    labels:
      dedicated: tidb
    taints:
      dedicated: tidb:NoSchedule

  - name: pd-1a
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1a"]
    labels:
      dedicated: pd
    taints:
      dedicated: pd:NoSchedule
  - name: pd-1d
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1d"]
    labels:
      dedicated: pd
    taints:
      dedicated: pd:NoSchedule
  - name: pd-1c
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1c"]
    labels:
      dedicated: pd
    taints:
      dedicated: pd:NoSchedule

  - name: tikv-1a
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1a"]
    labels:
      dedicated: tikv
    taints:
      dedicated: tikv:NoSchedule
  - name: tikv-1d
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1d"]
    labels:
      dedicated: tikv
    taints:
      dedicated: tikv:NoSchedule
  - name: tikv-1c
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1c"]
    labels:
      dedicated: tikv
    taints:
      dedicated: tikv:NoSchedule

Create Cluster

eksctl create cluster -f cluster.yaml

Deploy TiDB Operator

This section describes how to deploy a TiDB Operator on AWS EKS.

Install Helm (Prerequisite)

MAC – Install Helm

brew install helm

Windows 10 – Install Helm

choco install kubernetes-helm

Create CRD

TiDB Operator uses Custom Resource Definition (CRD) to extend Kubernetes. Therefore, to use TiDB Operator, you must first create the TidbCluster CRD, which is a one-time job in your Kubernetes cluster.

Create a file called crd.yaml. Copy the configuration from the link below.

https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml

Build the TiDBCluster CRD by executing the command below.

kubectl apply -f crd.yaml

Add the PingCAP repository

helm repo add pingcap https://charts.pingcap.org/

Expected output:

“pingcap” has been added to your repositories

Create a namespace for TiDB Operator

kubectl create namespace tidb-admin

Expected output:

namespace/tidb-admin created

Install TiDB Operator

helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.2.1

To confirm that the TiDB Operator components are running, execute the following command:

kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator

Deploy a TiDB cluster and the Monitoring Component

This section describes how to deploy a TiDB cluster and its monitoring component in AWS EKS.

Create namespace

kubectl create namespace tidb-cluster

Note: A namespace is a virtual cluster backed by the same physical cluster. This document takes tidb-cluster as an example. If you want to use a different namespace, modify the corresponding arguments of -n or --namespace.

Deploy

Download the sample TidbCluster and TidbMonitor configuration files:

curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aws/tidb-cluster.yaml && \
curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aws/tidb-monitor.yaml

Execute the commands below to deploy the TiDB cluster and its monitoring component.

kubectl apply -f tidb-cluster.yaml -n tidb-cluster
kubectl apply -f tidb-monitor.yaml -n tidb-cluster

After the yaml file above is applied to the Kubernetes cluster, TiDB Operator creates the desired TiDB cluster and its monitoring component according to the yaml file.

Verify Cluster & Nodes

View cluster status

kubectl get pods -n tidb-cluster

When all the Pods are in the Running or Ready state, the TiDB cluster is successfully started.

List worker nodes

List Nodes in current kubernetes cluster

kubectl get nodes -o wide

Verify Cluster, NodeGroup in EKS Management Console

Go to Services -> Elastic Kubernetes Service -> ${clustername}

Verify Worker Node IAM Role and list of Policies

Go to Services -> EC2 -> Worker Nodes

Verify CloudFormation Stacks

Verify Control Plane Stack & Events

Verify NodeGroup Stack & Events

Below are the associated NodeGroup Events

Access the Database

You can access the TiDB database to test or develop your application after you have deployed a TiDB cluster.

Prepare a bastion host

The LoadBalancer created for your TiDB cluster is an intranet LoadBalancer. You can create a bastion host in the cluster VPC to access the database.

Select the cluster’s VPC and Subnet and verify whether the cluster name is correct in the dropdown box.

You can view the cluster’s VPC and Subnet by running the following command:

eksctl get cluster -n tidbcluster -r ap-northeast-1

Allow the bastion host to access the Internet. Select the correct key pair so that you can log in to the host via SSH.

Install the MySQL client and connect

sudo yum install mysql -y

Connect the client to the TiDB cluster

mysql -h ${tidb-nlb-dnsname} -P 4000 -u root

${tidb-nlb-dnsname} is the LoadBalancer domain name of the TiDB service. You can view the domain name in the EXTERNAL-IP field by executing kubectl get svc basic-tidb -n tidb-cluster.

kubectl get svc basic-tidb -n tidb-cluster

Check TiDB version

select tidb_version()\G

Create test table

use test;

create table test_table (id int unsigned not null auto_increment primary key, v varchar(32));

select * from information_schema.tikv_region_status where db_name=database() and table_name='test_table'\G

Query the TiKV store status

select * from information_schema.tikv_store_status\G

Query the TiDB cluster information

select * from information_schema.cluster_info\G

Access the Grafana Monitoring Dashboard

Obtain the LoadBalancer domain name of Grafana

kubectl -n tidb-cluster get svc basic-grafana

In the output below, the EXTERNAL-IP column is the LoadBalancer domain name.       

You can access the ${grafana-lb}:3000 address using your web browser to view monitoring metrics. Replace ${grafana-lb} with the LoadBalancer domain name.

Upgrade

To upgrade the TiDB cluster, edit the spec.version by executing the command below.

kubectl edit tc basic -n tidb-cluster

Scale out

Before scaling out the cluster, you need to scale out the corresponding node group so that the new instances have enough resources for operation.

This section describes how to scale out the EKS node group and TiDB components.

Scale out EKS node group

When scaling out TiKV, the node groups must be scaled out evenly among the different availability zones. The following example shows how to scale out the tikv-1a, tikv-1c, and tikv-1d node groups of the ${clusterName} cluster to 2 nodes each, as shown below.

eksctl scale nodegroup --cluster ${clusterName} --name tikv-1a --nodes 2 --nodes-min 2 --nodes-max 2
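
The other TiKV node groups are scaled the same way, for example:

eksctl scale nodegroup --cluster ${clusterName} --name tikv-1c --nodes 2 --nodes-min 2 --nodes-max 2
eksctl scale nodegroup --cluster ${clusterName} --name tikv-1d --nodes 2 --nodes-min 2 --nodes-max 2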

Scale out TiDB components

After scaling out the EKS node group, execute kubectl edit tc basic -n tidb-cluster, and modify each component’s replicas to the desired number of replicas. The scaling-out process is then completed.

Deploy TiFlash/TiCDC

TiFlash is the columnar storage extension of TiKV.

TiCDC is a tool for replicating the incremental data of TiDB by pulling TiKV change logs.

In the configuration file of eksctl (cluster.yaml), add the following two items to add a node group for TiFlash/TiCDC respectively. desiredCapacity is the number of nodes you desire.

  - name: tiflash-1a
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1a"]
    labels:
      dedicated: tiflash
    taints:
      dedicated: tiflash:NoSchedule
  - name: tiflash-1d
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1d"]
    labels:
      dedicated: tiflash
    taints:
      dedicated: tiflash:NoSchedule
  - name: tiflash-1c
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1c"]
    labels:
      dedicated: tiflash
    taints:
      dedicated: tiflash:NoSchedule

  - name: ticdc-1a
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1a"]
    labels:
      dedicated: ticdc
    taints:
      dedicated: ticdc:NoSchedule
  - name: ticdc-1d
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1d"]
    labels:
      dedicated: ticdc
    taints:
      dedicated: ticdc:NoSchedule
  - name: ticdc-1c
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1c"]
    labels:
      dedicated: ticdc
    taints:
      dedicated: ticdc:NoSchedule

Depending on the EKS cluster status, use different commands:

  • If the cluster is not created, execute eksctl create cluster -f cluster.yaml to create the cluster and node groups.
  • If the cluster is already created, execute eksctl create nodegroup -f cluster.yaml to create the node groups. The existing node groups are ignored and will not be created again.

Deploy TiFlash/TiCDC

To deploy TiFlash, configure spec.tiflash in tidb-cluster.yaml:

spec:
  ...
  tiflash:
    baseImage: pingcap/tiflash
    replicas: 1
    storageClaims:
    - resources:
        requests:
          storage: 100Gi
    tolerations:
    - effect: NoSchedule
      key: dedicated
      operator: Equal
      value: tiflash

To deploy TiCDC, configure spec.ticdc in tidb-cluster.yaml in the same way.

Finally, execute kubectl -n tidb-cluster apply -f tidb-cluster.yaml to update the TiDB cluster configuration.

kubectl -n tidb-cluster apply -f tidb-cluster.yaml

View Cluster Status

kubectl get pods -n tidb-cluster

Delete EKS Cluster & Node Groups

This section describes how to delete EKS cluster and Node Groups.

List EKS Clusters

eksctl get clusters -r ap-northeast-1

Delete Clusters

eksctl delete cluster tidbcluster -r ap-northeast-1

OR;

eksctl delete cluster --region=ap-northeast-1 --name=tidbcluster

Cheers!


Flush Tables With Read Lock

This command closes all open tables and locks all tables for all databases with a global read lock.

FLUSH TABLES or RELOAD privilege is required for this operation.

To release the lock, use UNLOCK TABLES. This command implicitly commits any active transaction only if any tables currently have been locked with LOCK TABLES.

Inserting rows into the log tables is not prevented with FLUSH TABLES WITH READ LOCK.

Cheers!
