Deploy TiDB on GCP GKE (Google Kubernetes Engine)

This blog post describes how to deploy a TiDB cluster on GCP Google Kubernetes Engine (GKE). TiDB on Kubernetes is the standard way to deploy TiDB on public clouds.

TiDB Architecture

TiDB is designed to consist of multiple components. These components communicate with each other and form a complete TiDB system. The architecture is as follows:

TiDB server

The TiDB server is a stateless SQL layer that exposes the connection endpoint of the MySQL protocol to the outside. The TiDB server receives SQL requests, performs SQL parsing and optimization, and ultimately generates a distributed execution plan. It is horizontally scalable and provides a unified interface to the outside through load balancing components such as Linux Virtual Server (LVS), HAProxy, or F5. It does not store data and is only for computing and SQL analysis, transmitting actual data read requests to TiKV nodes (or TiFlash nodes).

Placement Driver (PD) server

The PD server is the metadata-managing component of the entire cluster. It stores the metadata of the real-time data distribution of every single TiKV node and the topology of the entire TiDB cluster, provides the TiDB Dashboard management UI, and allocates transaction IDs to distributed transactions. The PD server is “the brain” of the entire TiDB cluster because it not only stores cluster metadata, but also sends data scheduling commands to specific TiKV nodes according to the data distribution state reported by TiKV nodes in real time. In addition, the PD server consists of at least three nodes and has high availability. It is recommended to deploy an odd number of PD nodes.

Storage servers

TiKV server

The TiKV server is responsible for storing data. TiKV is a distributed transactional key-value storage engine. Region is the basic unit for storing data. Each Region stores the data for a particular Key Range, which is a left-closed and right-open interval from StartKey to EndKey. Multiple Regions exist in each TiKV node. TiKV APIs provide native support for distributed transactions at the key-value pair level and support the Snapshot Isolation level by default. This is the core of how TiDB supports distributed transactions at the SQL level. After processing SQL statements, the TiDB server converts the SQL execution plan into actual calls to the TiKV API. Therefore, data is stored in TiKV. All the data in TiKV is automatically maintained in multiple replicas (three replicas by default), so TiKV has native high availability and supports automatic failover.

TiFlash server

The TiFlash Server is a special type of storage server. Unlike ordinary TiKV nodes, TiFlash stores data by column, mainly designed to accelerate analytical processing.

Prerequisites

Before deploying a TiDB cluster on GCP GKE, make sure the following requirements are satisfied:

1) Create a project

2) Enable Kubernetes Engine API

3) Activate Cloud Shell

Ensure that you have sufficient Compute Engine CPU quota in your cluster’s region.

4) Configure the GCP service

Configure your GCP project and default compute region (or zone).

gcloud config set core/project <project-id>
gcloud config set compute/region <region>

Example:
gcloud config set core/project erudite-spot-326413
gcloud config set compute/zone us-west1-a
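
To confirm the settings took effect, you can print the active gcloud configuration:

gcloud config list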

Create a GKE cluster and node pool

Enable container.googleapis.com

gcloud services enable container.googleapis.com

Create a GKE cluster and a default node pool

gcloud container clusters create tidb --region us-west1 --machine-type n1-standard-4 --num-nodes=1

Create separate node pools for PD, TiKV, and TiDB

gcloud container node-pools create pd --cluster tidb --machine-type n1-standard-4 --num-nodes=1 \
--node-labels=dedicated=pd --node-taints=dedicated=pd:NoSchedule

gcloud container node-pools create tikv --cluster tidb --machine-type n1-highmem-8 --num-nodes=1 \
--node-labels=dedicated=tikv --node-taints=dedicated=tikv:NoSchedule

gcloud container node-pools create tidb --cluster tidb --machine-type n1-standard-8 --num-nodes=1 \
    --node-labels=dedicated=tidb --node-taints=dedicated=tidb:NoSchedule
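
Once the pools are created, a quick way to verify them (assuming the regional cluster created above) is:

gcloud container node-pools list --cluster tidb --region us-west1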

Deploy TiDB Operator

This section describes how to deploy TiDB Operator on GCP GKE.

Install Helm

Helm is used for deploying TiDB Operator

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
git clone https://github.com/pingcap/tidb-operator.git && cd tidb-operator &&
kubectl create serviceaccount tiller --namespace kube-system &&
kubectl apply -f ./manifests/tiller-rbac.yaml &&
helm init --service-account tiller --upgrade

Helm will also need a couple of permissions to work properly. We can download them from the tidb-operator project.

Ensure that the tiller pod is running.

kubectl get pods -n kube-system

Note: If it is not running (Status: ImagePullBackOff), then run the following commands. Then check the status again.

kubectl delete -n kube-system deployment tiller-deploy

helm init --service-account tiller --upgrade

Install TiDB Operator CRDs

TiDB Operator uses Custom Resource Definition (CRD) to extend Kubernetes. Therefore, to use TiDB Operator, you must first create the TidbCluster CRD, which is a one-time job in your Kubernetes cluster.

kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml
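
To confirm that the CRDs were created (the TiDB Operator CRDs live in the pingcap.com API group), you can list them:

kubectl get crd | grep pingcap.com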

Add the PingCAP repository

helm repo add pingcap https://charts.pingcap.org/

Create a namespace for TiDB Operator

kubectl create namespace tidb-admin

Install TiDB Operator

helm install ./charts/tidb-operator -n tidb-admin --namespace=tidb-admin --version v1.2.3

Make sure tidb-operator components are running.

kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-admin
kubectl get pods --namespace tidb-admin -o wide

Deploy a TiDB Cluster and the Monitoring Component

This section describes how to deploy a TiDB cluster and its monitoring services.

Create namespace

kubectl create namespace tidb-cluster 

Note: A namespace is a virtual cluster backed by the same physical cluster. This document takes tidb-cluster as an example. If you want to use another namespace, modify the corresponding arguments of -n or --namespace.

Download the sample TidbCluster and TidbMonitor configuration files

curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/gcp/tidb-cluster.yaml && \
curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/gcp/tidb-monitor.yaml

Deploy the TidbCluster and TidbMonitor CR in the GKE cluster

kubectl create -f tidb-cluster.yaml -n tidb-cluster && \
kubectl create -f tidb-monitor.yaml -n tidb-cluster

Watch Cluster Status

watch kubectl get pods -n tidb-cluster

Wait until all Pods for all services are started. As soon as Pods of each type (-pd, -tikv, and -tidb) are in the “Running” state, you can press Ctrl+C to get back to the command line and go on to connect to your TiDB cluster.

View the cluster status

kubectl get pods -n tidb-cluster

Get the list of services in the tidb-cluster namespace

kubectl get svc -n tidb-cluster

Access the TiDB database

After you deploy a TiDB cluster, you can access the TiDB database via the MySQL client.

Prepare a bastion host

The LoadBalancer created for your TiDB cluster is an intranet LoadBalancer. You can create a bastion host in the cluster VPC to access the database.

Note: You can also create the bastion host in other zones in the same region.

gcloud compute instances create bastion \
    --machine-type=n1-standard-4 \
    --image-project=centos-cloud \
    --image-family=centos-7 \
    --zone=us-west1-a

Install the MySQL client and Connect

After the bastion host is created, you can connect to the bastion host via SSH and access the TiDB cluster via the MySQL client.

Connect to the bastion host via SSH.

gcloud compute ssh tidb@bastion

Install the MySQL Client.

sudo yum install mysql -y

Connect the client to the TiDB cluster

mysql -h ${tidb-nlb-dnsname} -P 4000 -u root

${tidb-nlb-dnsname} is the LoadBalancer IP of the TiDB service.

You can view the IP in the EXTERNAL-IP field of the kubectl get svc basic-tidb -n tidb-cluster execution result.

kubectl get svc basic-tidb -n tidb-cluster
mysql -h 10.138.0.6 -P 4000 -u root

Check TiDB Version

select tidb_version()\G

Create Test table

use test;

create table test_table (id int unsigned not null auto_increment primary key, v varchar(32));
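
Optionally, insert a couple of rows and read them back to confirm that writes go through (the values are arbitrary):

insert into test_table (v) values ('hello'), ('tidb');
select * from test_table;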

select * from information_schema.tikv_region_status where db_name=database() and table_name='test_table'\G

Query the TiKV store status

select * from information_schema.tikv_store_status\G

Query the TiDB cluster information

select * from information_schema.cluster_info\G

Access the Grafana Monitor Dashboard

Obtain the LoadBalancer IP of Grafana

kubectl -n tidb-cluster get svc basic-grafana

In the output above, the EXTERNAL-IP column is the LoadBalancer IP.

You can access the ${grafana-lb}:3000 address using your web browser to view monitoring metrics. Replace ${grafana-lb} with the LoadBalancer IP.

Scale out

Before scaling out the cluster, you need to scale out the corresponding node pool so that the new instances have enough resources for operation.

This section describes how to scale out the GKE node pool and TiDB components.

Scale out a GKE node pool

The following example shows how to scale out the tikv node pool of the tidb cluster to 6 nodes:

gcloud container clusters resize tidb --node-pool tikv --num-nodes 2

Note: In the regional cluster, the nodes are created in 3 zones. Therefore, after scaling out, the number of nodes is 2 * 3 = 6.    

After that, execute kubectl edit tc basic -n tidb-cluster and modify each component’s replicas to the desired number of replicas. The scaling-out process is then completed.

kubectl edit tc basic -n tidb-cluster
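
For example, after scaling out the tikv node pool, the edited TidbCluster spec might contain something like the following (the replica count is illustrative):

spec:
  ...
  tikv:
    replicas: 6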

Deploy TiFlash and TiCDC

TiFlash is the columnar storage extension of TiKV.

TiCDC is a tool for replicating the incremental data of TiDB by pulling TiKV change logs.

Create new node pools

  • Create a node pool for TiFlash:
gcloud container node-pools create tiflash --cluster tidb --machine-type n1-highmem-8 --num-nodes=1 \
    --node-labels dedicated=tiflash --node-taints dedicated=tiflash:NoSchedule
  • Create a node pool for TiCDC:
gcloud container node-pools create ticdc --cluster tidb --machine-type n1-standard-4 --num-nodes=1 \
    --node-labels dedicated=ticdc --node-taints dedicated=ticdc:NoSchedule

Configure and deploy

  • To deploy TiFlash, configure spec.tiflash in tidb-cluster.yaml.
  tiflash:
    baseImage: pingcap/tiflash
    replicas: 1
    storageClaims:
    - resources:
        requests:
          storage: 100Gi
    nodeSelector:
      dedicated: tiflash
    tolerations:
    - effect: NoSchedule
      key: dedicated
      operator: Equal
      value: tiflash
  • To deploy TiCDC, configure spec.ticdc in tidb-cluster.yaml
ticdc:
    baseImage: pingcap/ticdc
    replicas: 1
    nodeSelector:
      dedicated: ticdc
    tolerations:
    - effect: NoSchedule
      key: dedicated
      operator: Equal
      value: ticdc
  • Finally, execute kubectl -n tidb-cluster apply -f tidb-cluster.yaml to update the TiDB cluster configuration
kubectl -n tidb-cluster apply -f tidb-cluster.yaml
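
After the new configuration is applied, you can watch the TiFlash and TiCDC Pods come up (the Pod names assume the cluster is named basic, as in the examples above):

kubectl get pods -n tidb-cluster | grep -E 'tiflash|ticdc'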

Delete Cluster

List existing GKE clusters.

gcloud container clusters list

Delete cluster.

gcloud container clusters delete tidb

Cheers!


Deploy TiDB on AWS EKS (Elastic Kubernetes Service)

This blog post describes how to deploy a TiDB cluster on AWS Elastic Kubernetes Service (EKS). TiDB on Kubernetes is the standard way to deploy TiDB on public clouds.

Install AWS, kubectl & eksctl CLI’s

Install AWS CLI

MAC – Install and configure AWS CLI

# Download Binary
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"

# Install the binary
sudo installer -pkg ./AWSCLIV2.pkg -target /

# Verify the installation
aws --version

Reference: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-mac.html

Windows 10 – Install and configure AWS CLI

  • The AWS CLI version 2 is supported on Windows XP or later.
  • The AWS CLI version 2 supports only 64-bit versions of Windows.
  • Download Binary: https://awscli.amazonaws.com/AWSCLIV2.msi
  • Install the downloaded binary (standard windows install)
# Verify the installation
aws --version

Reference: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html

Configure AWS Command Line using Security Credentials

  • Go to AWS Management Console -> Services -> IAM
  • Select the IAM User: <user>
  • Important Note: Use only an IAM user to generate Security Credentials. Never use the Root user (highly discouraged).
  • Click on the Security credentials tab
  • Click on Create access key
  • Copy the Access key ID and Secret access key
  • Go to the command line and provide the required details
aws configure

Test if AWS CLI is working after configuring the above:

aws ec2 describe-vpcs
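
Another quick sanity check is to confirm which identity the CLI is using:

aws sts get-caller-identity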

Install kubectl CLI

MAC – Install and configure kubectl

# Download the Package
mkdir kubectlbinary
cd kubectlbinary
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.16.8/2020-04-16/bin/darwin/amd64/kubectl

# Provide execute permissions
chmod +x ./kubectl

# Set the Path by copying to user Home Directory
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bash_profile

# Verify the kubectl version
kubectl version --short --client
Output: Client Version: v1.16.8-eks-e16311
Windows 10 – Install and configure kubectl

# Download the Package
mkdir kubectlbinary
cd kubectlbinary
curl -o kubectl.exe https://amazon-eks.s3.us-west-2.amazonaws.com/1.16.8/2020-04-16/bin/windows/amd64/kubectl.exe

# Update the system Path environment variable to include the download directory

# Verify the kubectl client version
kubectl version --short --client

Install eksctl CLI

eksctl on Mac

# Install Homebrew on MacOs
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

# Install the Weaveworks Homebrew tap.
brew tap weaveworks/tap

# Install eksctl
brew install weaveworks/tap/eksctl

# Verify eksctl version
eksctl version

eksctl on Windows or Linux

Getting started with Amazon EKS – eksctl

https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html

TiDB Architecture

TiDB is designed to consist of multiple components. These components communicate with each other and form a complete TiDB system. The architecture is as follows:

TiDB server

The TiDB server is a stateless SQL layer that exposes the connection endpoint of the MySQL protocol to the outside. The TiDB server receives SQL requests, performs SQL parsing and optimization, and ultimately generates a distributed execution plan. It is horizontally scalable and provides a unified interface to the outside through load balancing components such as Linux Virtual Server (LVS), HAProxy, or F5. It does not store data and is only for computing and SQL analysis, transmitting actual data read requests to TiKV nodes (or TiFlash nodes).

Placement Driver (PD) server

The PD server is the metadata-managing component of the entire cluster. It stores the metadata of the real-time data distribution of every single TiKV node and the topology of the entire TiDB cluster, provides the TiDB Dashboard management UI, and allocates transaction IDs to distributed transactions. The PD server is “the brain” of the entire TiDB cluster because it not only stores cluster metadata, but also sends data scheduling commands to specific TiKV nodes according to the data distribution state reported by TiKV nodes in real time. In addition, the PD server consists of at least three nodes and has high availability. It is recommended to deploy an odd number of PD nodes.

Storage servers

TiKV server

The TiKV server is responsible for storing data. TiKV is a distributed transactional key-value storage engine. Region is the basic unit for storing data. Each Region stores the data for a particular Key Range, which is a left-closed and right-open interval from StartKey to EndKey. Multiple Regions exist in each TiKV node. TiKV APIs provide native support for distributed transactions at the key-value pair level and support the Snapshot Isolation level by default. This is the core of how TiDB supports distributed transactions at the SQL level. After processing SQL statements, the TiDB server converts the SQL execution plan into actual calls to the TiKV API. Therefore, data is stored in TiKV. All the data in TiKV is automatically maintained in multiple replicas (three replicas by default), so TiKV has native high availability and supports automatic failover.

TiFlash server

The TiFlash Server is a special type of storage server. Unlike ordinary TiKV nodes, TiFlash stores data by column, mainly designed to accelerate analytical processing.

Create an EKS cluster and node groups

It is recommended to create a node group in each availability zone (at least 3 in total) for each component when creating an EKS cluster.

References:

https://aws.amazon.com/blogs/containers/amazon-eks-cluster-multi-zone-auto-scaling-groups/

https://aws.github.io/aws-eks-best-practices/reliability/docs/dataplane/#ensure-capacity-in-each-az-when-using-ebs-volumes

Save the following configuration as the cluster.yaml file. Replace ${clusterName} with your preferred cluster name, and specify your preferred region.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${clusterName}
  region: ap-northeast-1

nodeGroups:
  - name: admin
    desiredCapacity: 1
    privateNetworking: true
    labels:
      dedicated: admin

  - name: tidb-1a
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1a"]
    labels:
      dedicated: tidb
    taints:
      dedicated: tidb:NoSchedule
  - name: tidb-1d
    desiredCapacity: 0
    privateNetworking: true
    availabilityZones: ["ap-northeast-1d"]
    labels:
      dedicated: tidb
    taints:
      dedicated: tidb:NoSchedule
  - name: tidb-1c
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1c"]
    labels:
      dedicated: tidb
    taints:
      dedicated: tidb:NoSchedule

  - name: pd-1a
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1a"]
    labels:
      dedicated: pd
    taints:
      dedicated: pd:NoSchedule
  - name: pd-1d
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1d"]
    labels:
      dedicated: pd
    taints:
      dedicated: pd:NoSchedule
  - name: pd-1c
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1c"]
    labels:
      dedicated: pd
    taints:
      dedicated: pd:NoSchedule

  - name: tikv-1a
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1a"]
    labels:
      dedicated: tikv
    taints:
      dedicated: tikv:NoSchedule
  - name: tikv-1d
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1d"]
    labels:
      dedicated: tikv
    taints:
      dedicated: tikv:NoSchedule
  - name: tikv-1c
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1c"]
    labels:
      dedicated: tikv
    taints:
      dedicated: tikv:NoSchedule

Create Cluster

eksctl create cluster -f cluster.yaml
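
Cluster creation takes a while. Once it finishes, you can verify the node groups and worker nodes (the cluster name and region are the ones from cluster.yaml):

eksctl get nodegroup --cluster ${clusterName} -r ap-northeast-1
kubectl get nodes -o wide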

Deploy TiDB Operator

This section describes how to deploy a TiDB Operator on AWS EKS.

Install Helm (Prerequisite)

MAC – Install Helm

brew install helm

Windows 10 – Install Helm

choco install kubernetes-helm

Create CRD

TiDB Operator uses Custom Resource Definition (CRD) to extend Kubernetes. Therefore, to use TiDB Operator, you must first create the TidbCluster CRD, which is a one-time job in your Kubernetes cluster.

Create a file called crd.yaml. Copy the configuration from the link below.

https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml

Build the TiDBCluster CRD by executing the command below.

kubectl apply -f crd.yaml

Add the PingCAP repository

helm repo add pingcap https://charts.pingcap.org/

Expected output:

“pingcap” has been added to your repositories

Create a namespace for TiDB Operator

kubectl create namespace tidb-admin

Expected output:

namespace/tidb-admin created

Install TiDB Operator

helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.2.1

To confirm that the TiDB Operator components are running, execute the following command:

kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator

Deploy a TiDB cluster and the Monitoring Component

This section describes how to deploy a TiDB cluster and its monitoring component in AWS EKS.

Create namespace

kubectl create namespace tidb-cluster

Note: A namespace is a virtual cluster backed by the same physical cluster. This document takes tidb-cluster as an example. If you want to use a different namespace, modify the corresponding arguments of -n or --namespace.

Deploy

Download the sample TidbCluster and TidbMonitor configuration files:

curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aws/tidb-cluster.yaml && \
curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/aws/tidb-monitor.yaml

Execute the commands below to deploy the TiDB cluster and its monitoring component.

kubectl apply -f tidb-cluster.yaml -n tidb-cluster
kubectl apply -f tidb-monitor.yaml -n tidb-cluster

After the YAML files above are applied to the Kubernetes cluster, TiDB Operator creates the desired TiDB cluster and its monitoring component according to them.

Verify Cluster & Nodes

View cluster status

kubectl get pods -n tidb-cluster

When all the Pods are in the Running or Ready state, the TiDB cluster is successfully started.

List worker nodes

List the nodes in the current Kubernetes cluster

kubectl get nodes -o wide

Verify Cluster, NodeGroup in EKS Management Console

Go to Services -> Elastic Kubernetes Service -> ${clusterName}

Verify Worker Node IAM Role and list of Policies

Go to Services -> EC2 -> Worker Nodes

Verify CloudFormation Stacks

Verify Control Plane Stack & Events

Verify NodeGroup Stack & Events

Below are the associated NodeGroup Events

Access the Database

You can access the TiDB database to test or develop your application after you have deployed a TiDB cluster.

Prepare a bastion host

The LoadBalancer created for your TiDB cluster is an intranet LoadBalancer. You can create a bastion host in the cluster VPC to access the database.

When launching the bastion host from the EC2 console, select the cluster’s VPC and subnet, and verify that the cluster name shown in the dropdown box is correct.

You can view the cluster’s VPC and Subnet by running the following command:

eksctl get cluster -n tidbcluster -r ap-northeast-1

Allow the bastion host to access the Internet. Select the correct key pair so that you can log in to the host via SSH.

Install the MySQL client and connect

sudo yum install mysql -y

Connect the client to the TiDB cluster

mysql -h ${tidb-nlb-dnsname} -P 4000 -u root

${tidb-nlb-dnsname} is the LoadBalancer domain name of the TiDB service. You can view the domain name in the EXTERNAL-IP field by executing kubectl get svc basic-tidb -n tidb-cluster.

kubectl get svc basic-tidb -n tidb-cluster

Check TiDB version

select tidb_version()\G

Create test table

use test;

create table test_table (id int unsigned not null auto_increment primary key, v varchar(32));

select * from information_schema.tikv_region_status where db_name=database() and table_name='test_table'\G

Query the TiKV store status

select * from information_schema.tikv_store_status\G

Query the TiDB cluster information

select * from information_schema.cluster_info\G

Access the Grafana Monitoring Dashboard

Obtain the LoadBalancer domain name of Grafana

kubectl -n tidb-cluster get svc basic-grafana

In the command output, the EXTERNAL-IP column is the LoadBalancer domain name.

You can access the ${grafana-lb}:3000 address using your web browser to view monitoring metrics. Replace ${grafana-lb} with the LoadBalancer domain name.

Upgrade

To upgrade the TiDB cluster, edit the spec.version by executing the command below.

kubectl edit tc basic -n tidb-cluster
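
In the editor, the field to change looks roughly like this (the version value below is only an example):

spec:
  ...
  version: v5.2.1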

Scale out

Before scaling out the cluster, you need to scale out the corresponding node group so that the new instances have enough resources for operation.

This section describes how to scale out the EKS node group and TiDB components.

Scale out EKS node group

When scaling out TiKV, the node groups must be scaled out evenly among the different availability zones. The following example shows how to scale out the tikv-1a, tikv-1c, and tikv-1d groups of the ${clusterName} cluster to 2 nodes each.

eksctl scale nodegroup --cluster ${clusterName} --name tikv-1a --nodes 2 --nodes-min 2 --nodes-max 2
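
The same command can then be repeated for the other two groups defined in cluster.yaml:

eksctl scale nodegroup --cluster ${clusterName} --name tikv-1c --nodes 2 --nodes-min 2 --nodes-max 2
eksctl scale nodegroup --cluster ${clusterName} --name tikv-1d --nodes 2 --nodes-min 2 --nodes-max 2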

Scale out TiDB components

After scaling out the EKS node group, execute kubectl edit tc basic -n tidb-cluster, and modify each component’s replicas to the desired number of replicas. The scaling-out process is then completed.

Deploy TiFlash/TiCDC

TiFlash is the columnar storage extension of TiKV.

TiCDC is a tool for replicating the incremental data of TiDB by pulling TiKV change logs.

In the configuration file of eksctl (cluster.yaml), add the following two items to add a node group for TiFlash/TiCDC respectively. desiredCapacity is the number of nodes you desire.

  - name: tiflash-1a
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1a"]
    labels:
      dedicated: tiflash
    taints:
      dedicated: tiflash:NoSchedule
  - name: tiflash-1d
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1d"]
    labels:
      dedicated: tiflash
    taints:
      dedicated: tiflash:NoSchedule
  - name: tiflash-1c
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1c"]
    labels:
      dedicated: tiflash
    taints:
      dedicated: tiflash:NoSchedule

  - name: ticdc-1a
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1a"]
    labels:
      dedicated: ticdc
    taints:
      dedicated: ticdc:NoSchedule
  - name: ticdc-1d
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1d"]
    labels:
      dedicated: ticdc
    taints:
      dedicated: ticdc:NoSchedule
  - name: ticdc-1c
    desiredCapacity: 1
    privateNetworking: true
    availabilityZones: ["ap-northeast-1c"]
    labels:
      dedicated: ticdc
    taints:
      dedicated: ticdc:NoSchedule

Depending on the EKS cluster status, use different commands:

  • If the cluster is not created, execute eksctl create cluster -f cluster.yaml to create the cluster and node groups.
  • If the cluster is already created, execute eksctl create nodegroup -f cluster.yaml to create the node groups. The existing node groups are ignored and will not be created again.

Deploy TiFlash/TiCDC

To deploy TiFlash, configure spec.tiflash in tidb-cluster.yaml:

spec:
  ...
  tiflash:
    baseImage: pingcap/tiflash
    replicas: 1
    storageClaims:
    - resources:
        requests:
          storage: 100Gi
    tolerations:
    - effect: NoSchedule
      key: dedicated
      operator: Equal
      value: tiflash

To deploy TiCDC, configure spec.ticdc in tidb-cluster.yaml.
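
Below is a minimal sketch of such a block, mirroring the TiFlash example above (the replica count is illustrative):

spec:
  ...
  ticdc:
    baseImage: pingcap/ticdc
    replicas: 1
    tolerations:
    - effect: NoSchedule
      key: dedicated
      operator: Equal
      value: ticdc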

Finally, execute kubectl -n tidb-cluster apply -f tidb-cluster.yaml to update the TiDB cluster configuration.

View Cluster Status

kubectl get pods -n tidb-cluster

Delete EKS Cluster & Node Groups

This section describes how to delete EKS cluster and Node Groups.

List EKS Clusters

eksctl get clusters -r ap-northeast-1

Delete Clusters

eksctl delete cluster tidbcluster -r ap-northeast-1

OR;

eksctl delete cluster --region=ap-northeast-1 --name=tidbcluster

Cheers!


Flush Tables With Read Lock

This command closes all open tables and locks all tables for all databases with a global read lock.

FLUSH TABLES or RELOAD privilege is required for this operation.

To release the lock, use UNLOCK TABLES. This command implicitly commits any active transaction, but only if tables are currently locked with LOCK TABLES.

Inserting rows into the log tables is not prevented with FLUSH TABLES WITH READ LOCK.
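
A quick end-to-end sketch of the workflow described above (the backup step in the middle is just a placeholder comment):

FLUSH TABLES WITH READ LOCK;
-- take the backup or file-system snapshot here while writes are blocked
UNLOCK TABLES;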

Cheers!


MySQL/MariaDB – ulimit open files & MySQL open_files_limit

The amount of resources that can be used can be controlled from the operating system perspective. Each user has limits that are set, but for that particular user, the limits are applied individually to each of its processes.

Limits can either be hard or soft. Only the root user can raise hard limits. Other users can adjust their soft limits, but a soft limit cannot exceed the hard limit.

The default value of the ulimit open files limit is 1024. This is very low for a typical web server environment that hosts many database-driven sites.

MySQL/MariaDB also uses this setting. The open_files_limit is set by MySQL/MariaDB to the system’s ulimit. Default is 1024.

NOTE: MySQL/MariaDB can only set its open_files_limit lower than what is specified under ulimit ‘open files’. It cannot be set higher than that.

Examine Current Limits

To inspect current limits


ulimit -a

# -a will show all current limits including hard, soft, open files, etc.

To inspect the current hard and soft limits.


# Hard Limits
ulimit -Ha

# Soft Limits
ulimit -Sa

# H for hard limits, or S for soft limits.

To check current open file limits.


ulimit -n

# –n for number of open files limits

Set ‘open files’ Limit Persistently

Open  /etc/security/limits.conf using the text editor of your choice, and add the following lines, then save it.


* soft nofile 102400
* hard nofile 102400
* soft nproc 10240
* hard nproc 10240

Edit the file /etc/security/limits.d/90-nproc.conf using the text editor of your choice, and add the following lines, then save it.


* soft nofile 1024000
* hard nofile 1024000
* soft nproc 10240
* hard nproc 10240
root soft nproc unlimited

Set open_files_limit in my.cnf (MySQL)

Open and edit /etc/my.cnf

Insert the lines below under the [mysqld] section, then save it.


[mysqld]
open_files_limit = 102400

Run the command below to see if there are any .conf files being used by MySQL that overrides the values for open limits.


systemctl status mysqld

Below is something you will see after running the command above.


/etc/systemd/system/mariadb.service.d
└─limits.conf

This means that there is a /etc/systemd/system/mariadb.service.d/limits.conf file, which is loaded with MySQL. Now, edit that file as well.


[Service]
LimitNOFILE=102400

Execute the command below for the changes to take effect.

systemctl daemon-reload && /scripts/restartsrv_mysql

Perform Server Reboot

Run the command below in MySQL to see the value of open_files_limit.


SHOW VARIABLES LIKE 'open_files_limit';

Output:


+------------------+--------+
| Variable_name    | Value  |
+------------------+--------+
| open_files_limit | 102400 |
+------------------+--------+
1 row in set (0.00 sec)

Cheers!


MySQL/MariaDB – Innochecksum

innochecksum prints checksums for InnoDB files. This tool reads an InnoDB tablespace file, calculates the checksum for each page, compares the calculated checksum to the stored checksum, and reports mismatches, which indicate damaged pages.

It was originally developed to speed up verifying the integrity of tablespace files after power outages but can also be used after file copies. Because checksum mismatches cause InnoDB to deliberately shut down a running server, it may be preferable to use this tool rather than waiting for an in-production server to encounter the damaged pages.

Below is a script you can use to get corrupted tables. Replace the directory path where your data files are located. (Stop MySQL/MariaDB service first, before running it)

INNOCKSM_LOG=/mysql/backup/innochecksum_`date +%Y%m%d_%H%M%S`.log

for DB in `ls -1vd /mysql/data/*/ | grep -wv '/mysql/data/mysql/\|/mysql/data/performance_schema/\|/mysql/data/lost+found/'`
do
   for IBD in `ls -1v $DB | grep .ibd`
   do
      innochecksum ${DB}${IBD}
      if [ $? -ne 0 ]; then
         echo ${DB}${IBD} >> $INNOCKSM_LOG
         innochecksum ${DB}${IBD} >> $INNOCKSM_LOG 2>&1
      fi
   done
done
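
To check a single tablespace file by hand, pass its path to innochecksum directly (the path below is only illustrative, following the data directory used in the script):

innochecksum /mysql/data/employees/departments.ibd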

Cheers!


MySQL/MariaDB – Identifying and Avoiding Deadlocks

A deadlock is a special blocking scenario in which two or more competing transactions are waiting for each other to free locks. Each process, while holding its own resources, attempts to access a resource that is locked by the other process.

Simulating a Deadlock Scenario

Transaction 1

START TRANSACTION;
SELECT * FROM departments WHERE dept_no = 'd008' LOCK IN SHARE MODE;

Transaction 2 (wait)

START TRANSACTION;
UPDATE departments
SET dept_name = 'Research & Development'
WHERE dept_no = 'd008';

Transaction 1 (deadlock)

UPDATE departments
SET dept_name = 'R&D'
WHERE dept_no = 'd008';

Identify and Analyze Deadlocks

Execute the command below in MySQL/MariaDB.

SHOW ENGINE INNODB STATUS \G
mysql> SHOW ENGINE INNODB STATUS \G
*************************** 1. row ***************************
  Type: InnoDB
  Name:
Status:
=====================================
2021-06-02 00:40:29 0x7f99d005e700 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 56 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 5 srv_active, 0 srv_shutdown, 4498 srv_idle
srv_master_thread log flush and writes: 0
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 2
OS WAIT ARRAY INFO: signal count 3
RW-shared spins 9, rounds 9, OS waits 0
RW-excl spins 0, rounds 0, OS waits 0
RW-sx spins 0, rounds 0, OS waits 0
Spin rounds per wait: 1.00 RW-shared, 0.00 RW-excl, 0.00 RW-sx
------------------------
LATEST DETECTED DEADLOCK
------------------------
2021-06-02 00:40:08 0x7f99b74f8700
*** (1) TRANSACTION:
TRANSACTION 51038, ACTIVE 13 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 1136, 2 row lock(s)
MySQL thread id 11, OS thread handle 140298596771584, query id 180 localhost instadm updating
UPDATE departments
SET dept_name = 'Research & Development'
WHERE dept_no = 'd008'

*** (1) HOLDS THE LOCK(S):
RECORD LOCKS space id 3 page no 4 n bits 80 index PRIMARY of table `employees`.`departments` trx id 51038 lock_mode X locks rec but not gap waiting
Record lock, heap no 13 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
 0: len 4; hex 64303038; asc d008;;
 1: len 6; hex 00000000c75a; asc      Z;;
 2: len 7; hex 02000000fc0151; asc       Q;;
 3: len 22; hex 5265736561726368202620446576656c6f706d656e74; asc Research & Development;;


*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 3 page no 4 n bits 80 index PRIMARY of table `employees`.`departments` trx id 51038 lock_mode X locks rec but not gap waiting
Record lock, heap no 13 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
 0: len 4; hex 64303038; asc d008;;
 1: len 6; hex 00000000c75a; asc      Z;;
 2: len 7; hex 02000000fc0151; asc       Q;;
 3: len 22; hex 5265736561726368202620446576656c6f706d656e74; asc Research & Development;;


*** (2) TRANSACTION:
TRANSACTION 51039, ACTIVE 28 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 4 lock struct(s), heap size 1136, 2 row lock(s)
MySQL thread id 9, OS thread handle 140298597062400, query id 181 localhost instadm updating
UPDATE departments
SET dept_name = 'R&D'
WHERE dept_no = 'd008'

*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 3 page no 4 n bits 80 index PRIMARY of table `employees`.`departments` trx id 51039 lock mode S locks rec but not gap
Record lock, heap no 13 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
 0: len 4; hex 64303038; asc d008;;
 1: len 6; hex 00000000c75a; asc      Z;;
 2: len 7; hex 02000000fc0151; asc       Q;;
 3: len 22; hex 5265736561726368202620446576656c6f706d656e74; asc Research & Development;;


*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 3 page no 4 n bits 80 index PRIMARY of table `employees`.`departments` trx id 51039 lock_mode X locks rec but not gap waiting
Record lock, heap no 13 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
 0: len 4; hex 64303038; asc d008;;
 1: len 6; hex 00000000c75a; asc      Z;;
 2: len 7; hex 02000000fc0151; asc       Q;;
 3: len 22; hex 5265736561726368202620446576656c6f706d656e74; asc Research & Development;;

*** WE ROLL BACK TRANSACTION (1)

The output shows detailed information about the latest deadlock and why it occurred. Take a close look at the portions labeled WAITING FOR THIS LOCK TO BE GRANTED (which lock the transaction is waiting for) and HOLDS THE LOCK(S) (the locks held by that transaction).

Preventing Deadlocks

  • Keep transactions small and quick to avoid clashing.
  • Commit transactions right after making a set of related changes to make them less prone to clashes.
  • Access resources in the same physical sequence, as shown in the sketch after this list.
    • For example, two transactions need to access two resources. If each transaction accesses the resources in the same physical sequence, then the first transaction will successfully obtain locks on the resources without being blocked by the second transaction. The second transaction will be blocked by the first while trying to obtain a lock on the first resource. The outcome will just be a typical blocking scenario instead of a deadlock.
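
A minimal sketch of the last point, reusing the departments table from the demo above (the dept_no values are illustrative). Because both sessions touch the rows in the same order, the second session simply blocks instead of deadlocking:

-- Session 1
START TRANSACTION;
UPDATE departments SET dept_name = 'Marketing' WHERE dept_no = 'd001';
UPDATE departments SET dept_name = 'R&D' WHERE dept_no = 'd008';
COMMIT;

-- Session 2 (same order: d001 first, then d008)
START TRANSACTION;
UPDATE departments SET dept_name = 'Sales' WHERE dept_no = 'd001';   -- blocks until Session 1 commits
UPDATE departments SET dept_name = 'Research' WHERE dept_no = 'd008';
COMMIT;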

Cheers!


MySQL/MariaDB – Swapping

When you assign more memory to buffers than your server has physical RAM, swapping can happen. Swapping degrades performance significantly.

Swap is slower than RAM because it lives on a physical disk (magnetic or SSD). In other words, it is memory emulated on disk.

We have to tweak a kernel parameter called swappiness to avoid MySQL/MariaDB memory being swapped out to disk instead of staying in RAM.

The balance between swapping out runtime memory and dropping pages from the system page cache is controlled by the swappiness value. The bigger the value, the more the system will swap; the smaller the value, the less it will swap.

The maximum is 100, the minimum is 0, and the default is 60.

Add the following line to /etc/sysctl.conf to change this parameter persistently.

vm.swappiness = 0
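
To apply the change without a reboot and verify the value currently in use:

# Reload settings from /etc/sysctl.conf
sudo sysctl -p

# Check the current value
cat /proc/sys/vm/swappiness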

Cheers!


Linux – strace

To trace system calls and signals, we use the strace command.

Install strace in RedHat/CentOS:

yum install strace -y

The command below provides information about all the system calls that the application is using.

strace ls

The command below provides counters including the number of errors that were encountered while the application is operational.

strace -c ls
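
strace can also attach to an already running process. For example, to follow file-related system calls of a running MySQL server (mysqld here is just an example target):

strace -f -p $(pidof mysqld) -e trace=open,openat,read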

Cheers!


MySQL/MariaDB – optimizer_switch

To control optimizer behaviour, we can enable or disable specific optimizations via the optimizer_switch system variable. The optimizer_switch variable can be changed at runtime, and has global and session values.

Execute the command below to see the current set of optimizer flags.

SELECT @@optimizer_switch \G

To change the value of optimizer_switch, use the following syntax.

SET [GLOBAL|SESSION] optimizer_switch='cmd[,cmd]...';
Syntax Description
defaultReset all optimizations to their default values.
optimization_name=defaultSet the specified optimization to its default value.
optimization_name=onEnable the specified optimization.
optimization_name=offDisable the specified optimization.
There is no need to list all flags – only those that are specified in the command will be affected.
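
For example, to disable the index_merge optimization for the current session only and then confirm the change (index_merge is one of the standard optimizer_switch flags):

SET SESSION optimizer_switch='index_merge=off';
SELECT @@optimizer_switch \G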

Cheers!


MariaDB/MySQL – Seconds Behind Master Deep Dive

Seconds_Behind_Master is the difference between the current timestamp on the replica and the timestamp of the event that the SQL_THREAD is currently processing (the timestamp that event carried on the master).

DEMO

Execute the script below on the Master server.

create database if not exists sbm_db;

USE `sbm_db`;
DROP procedure IF EXISTS `gendata`;

DELIMITER $$
CREATE PROCEDURE gendata (in loopLimit int)
BEGIN
	declare c int;
    set c = 0;
    
    label: LOOP
		insert into tbl (fld)
        values (FLOOR(1 + (RAND() * 60000)));
        set c = c + 1;
        if c > loopLimit then
			leave label;
        end if;
    end LOOP label;
END$$

DELIMITER ;


drop table if exists tbl;
create table tbl (
tblId int not null primary key auto_increment,
fld varchar(255),
updatedAt timestamp not null default current_timestamp
);

Execute in Replica

STOP SLAVE SQL_THREAD;

Execute in Master

call `gendata`(3000);
mysql> SELECT COUNT(*) FROM tbl;
+----------+
| COUNT(*) |
+----------+
|     3000 |
+----------+

In the Replica, since the SQL_THREAD is not running, these records don’t exist.

mysql> SELECT COUNT(*) FROM tbl;
+----------+
| COUNT(*) |
+----------+
|        0 |
+----------+

Now, run these commands in one shot in the Replica to start the SQL_THREAD, get the unix_timestamp(), then stop the SQL_THREAD again.

START SLAVE SQL_THREAD;

SHOW SLAVE STATUS \G

SELECT unix_timestamp();

STOP SLAVE SQL_THREAD;

SELECT COUNT(*) FROM tbl;

Below is my output.

mysql> START SLAVE SQL_THREAD;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> SHOW SLAVE STATUS \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.33.17
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: db1-bin.000018
          Read_Master_Log_Pos: 3349116
               Relay_Log_File: mysqldb2-relay-bin.000006
                Relay_Log_Pos: 2512613
        Relay_Master_Log_File: db1-bin.000018
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 2512410
              Relay_Log_Space: 3349604
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 93
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:


mysql> SELECT unix_timestamp();
+------------------+
| unix_timestamp() |
+------------------+
|       1621660163 |
+------------------+
1 row in set (0.00 sec)


mysql> STOP SLAVE SQL_THREAD;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> SELECT COUNT(*) FROM tbl;
+----------+
| COUNT(*) |
+----------+
|      552 |
+----------+
1 row in set (0.00 sec)


You might be wondering why Seconds_Behind_Master is 93 when there were only about 5 seconds' worth of data to apply.

How Seconds_Behind_Master works: it takes the difference between the current timestamp on the replica and the timestamp of the event that the SQL thread is processing (the timestamp that was set on the master).

mysqlbinlog --base64-output="decode-rows" --verbose mysqldb2-relay-bin.000006 | less

# at 2512613
#210522  1:07:50 server id 1  end_log_pos 2512485       GTID    last_committed=9012     sequence_number=9013    rbr_only=yes    original_committed_timestamp=1621660070   immediate_commit_timestamp=1621660070915181     transaction_length=279

mysql> select 1621660163 - 1621660070;
+-------------------------+
| 1621660163 - 1621660070 |
+-------------------------+
|                      93 |
+-------------------------+



Cheers!
