Nowadays organizations are adopting cloud-native environments with a microservices architecture, where applications run in containers. Deploying applications in containers allows more frequent, smaller releases instead of one large release, which increases organizational efficiency and productivity and ultimately provides enhanced business agility. Kubernetes has so far been the leading open-source container-orchestration platform for this.
MySQL is designed and optimized for web-scale applications that require flexible scaling and distributed data to meet continuous integration and deployment requirements, and it runs well in containers orchestrated by Kubernetes.
Last month I delivered a webinar about running MySQL InnoDB Cluster (IDC) on Kubernetes, and I decided to share the steps in this blog post. MySQL InnoDB Cluster is a shared-nothing clustering solution based on Group Replication that provides a solid and consistent high-availability experience on bare metal, VMs, and Kubernetes. Further information on IDC can be found in the online manual:
https://dev.mysql.com/doc/refman/8.0/en/mysql-innodb-cluster-userguide.html
I start my technical discussion here with the common understanding that a database is a stateful “application”, and hence it is best deployed as a StatefulSet in Kubernetes.
[Figure: Architecture diagram for a 3-node InnoDB Cluster deployed as a StatefulSet in Kubernetes]
Each cluster node runs as a container within its own Pod. One thing to note is that IDC stores node hostnames in its metadata; therefore, to facilitate Group Replication, we shall create a Service for each Pod. A Service in Kubernetes is an abstraction that defines a logical set of Pods.
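Once the per-Pod Services from section D exist, each node is resolvable by its Service name within the namespace. As a quick sanity check (a sketch, assuming the busybox image can be pulled in your cluster), you can resolve one of the names from a throwaway Pod:
// quick DNS check from a temporary Pod (run this after section D)
$ kubectl -n mysql-cluster run dns-test --rm -it --image=busybox --restart=Never -- nslookup innodb-cluster-0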
We are going to deploy IDC manually, without an operator, to understand the basic concepts of the deployment.
Let’s try that on Minikube!
A. Prepare the environment for the first time
· Install kubectl for your OS (Linux, macOS, or Windows); on Windows, also install Cygwin so you can run the shell commands below.
· Install Minikube (on Windows, use the installer from the Minikube releases page). For Linux:
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube
$ sudo mkdir -p /usr/local/bin/
$ sudo install minikube /usr/local/bin/
· Start Minikube:
$ minikube start
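Before proceeding, it is worth confirming that the local cluster is actually up:
// check the Minikube VM and the Kubernetes node
$ minikube status
$ kubectl get nodes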
B. Create Kubernetes Namespace
A Namespace is a compartment-like construct for dividing Kubernetes cluster resources between multiple users, projects, departments, etc. We need a namespace to run IDC; let's name it “mysql-cluster”:
$ kubectl create ns mysql-cluster
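You can verify that the namespace was created before moving on:
// confirm the namespace exists
$ kubectl get ns mysql-cluster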
C. Store MySQL Root Password into a Secret
The following secret.yaml stores the Base64-encoded word "root" as a Secret in Kubernetes, which we name “mysql-root-password”.
apiVersion: v1
kind: Secret
metadata:
  name: mysql-root-password
  namespace: mysql-cluster
type: Opaque
data:
  password: cm9vdA==
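The password value is simply the Base64 encoding of the literal string "root" (Base64 is an encoding, not encryption). You can generate or verify it yourself:
// encode the plain-text password for secret.yaml
$ echo -n 'root' | base64
cm9vdA==
// decode it back to confirm (older macOS uses "base64 -D")
$ echo 'cm9vdA==' | base64 -d
root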
Apply secret.yaml:
$ kubectl apply -f secret.yaml
Check that the secret has been applied correctly in the mysql-cluster namespace:
$ kubectl -n mysql-cluster get secret
// below is what I have in my environment:
NAME                         TYPE                                  DATA   AGE
default-token-2ktnz          kubernetes.io/service-account-token   3      50d
mysql-operator-token-lgs6r   kubernetes.io/service-account-token   3      50d
mysql-root-password          Opaque                                1      2s
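To double-check that the stored value decodes back to the expected password:
// decode the secret value
$ kubectl -n mysql-cluster get secret mysql-root-password -o jsonpath='{.data.password}' | base64 -d
root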
D. Deploy the StatefulSet for IDC
The following innodb-cluster.yaml deploys:
· A StatefulSet with 3 replicas (three Pods)
· One MySQL database container in each Pod
· A Service for each Pod, named the same as the Pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: innodb-cluster
  namespace: mysql-cluster
spec:
  replicas: 3
  serviceName: innodb-cluster
  selector:
    matchLabels:
      app: innodb-cluster
  template:
    metadata:
      labels:
        app: innodb-cluster
    spec:
      containers:
      - image: mysql/mysql-server:latest
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-root-password
              key: password
        volumeMounts:
        - name: data-vol
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: data-vol
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: innodb-cluster-0
  namespace: mysql-cluster
spec:
  ports:
  - port: 3306
  selector:
    statefulset.kubernetes.io/pod-name: innodb-cluster-0
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: innodb-cluster-1
  namespace: mysql-cluster
spec:
  ports:
  - port: 3306
  selector:
    statefulset.kubernetes.io/pod-name: innodb-cluster-1
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: innodb-cluster-2
  namespace: mysql-cluster
spec:
  ports:
  - port: 3306
  selector:
    statefulset.kubernetes.io/pod-name: innodb-cluster-2
  clusterIP: None
Apply innodb-cluster.yaml:
$ kubectl apply -f innodb-cluster.yaml
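It is also worth confirming that the three per-Pod Services were created; since clusterIP is None they are headless, so their DNS names resolve directly to the Pod IPs:
// list the Services backing each Pod
$ kubectl -n mysql-cluster get svc
// expect innodb-cluster-0, innodb-cluster-1 and innodb-cluster-2 with CLUSTER-IP "None"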
Get the Pod status and repeat until all Pods show "Running":
$ kubectl -n mysql-cluster get pod
// below is what I have in my environment after running “get pod” multiple times:
NAME               READY   STATUS    RESTARTS   AGE
innodb-cluster-0   1/1     Running   0          33s
innodb-cluster-1   1/1     Running   0          23s
innodb-cluster-2   1/1     Running   0          13s
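Once all Pods are Running, you can also confirm that each MySQL server is accepting connections, for example on the first node:
// ping mysqld inside the first Pod (repeat for -1 and -2)
$ kubectl -n mysql-cluster exec -it innodb-cluster-0 -- mysqladmin -uroot -proot ping
mysqld is alive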
E. Configure the IDC
First, we need to run configureInstance on each database cluster node. configureInstance finds any settings that are not compatible with InnoDB Cluster and changes them. To ensure smooth execution when running in containers, we first persist some system variables ourselves. Note that gtid_mode can only be changed one step at a time, which is why the commands below walk it through OFF_PERMISSIVE and ON_PERMISSIVE before setting it to ON.
// configure node 1
$ kubectl -n mysql-cluster exec -it innodb-cluster-0 -- mysql -uroot -proot -s -N -e "create user root@'%' identified with mysql_native_password by 'root'; grant all privileges on *.* to root@'%' with grant option; set persist binlog_checksum=NONE; set persist gtid_mode=OFF_PERMISSIVE; set persist gtid_mode=ON_PERMISSIVE; set persist enforce_gtid_consistency=ON; set persist gtid_mode=ON; set persist server_id=100; SET sql_log_bin=OFF; reset master; drop database if exists mysql_innodb_cluster_metadata; SET sql_log_bin=ON;"
// configure node 2
$ kubectl -n mysql-cluster exec -it innodb-cluster-1 -- mysql -uroot -proot -s -N -e "create user root@'%' identified with mysql_native_password by 'root'; grant all privileges on *.* to root@'%' with grant option; set persist binlog_checksum=NONE; set persist gtid_mode=OFF_PERMISSIVE; set persist gtid_mode=ON_PERMISSIVE; set persist enforce_gtid_consistency=ON; set persist gtid_mode=ON; set persist server_id=200; SET sql_log_bin=OFF; reset master; drop database if exists mysql_innodb_cluster_metadata; SET sql_log_bin=ON;"
// configure node 3
$ kubectl -n mysql-cluster exec -it innodb-cluster-2 -- mysql -uroot -proot -s -N -e "create user root@'%' identified with mysql_native_password by 'root'; grant all privileges on *.* to root@'%' with grant option; set persist binlog_checksum=NONE; set persist gtid_mode=OFF_PERMISSIVE; set persist gtid_mode=ON_PERMISSIVE; set persist enforce_gtid_consistency=ON; set persist gtid_mode=ON; set persist server_id=300; SET sql_log_bin=OFF; reset master; drop database if exists mysql_innodb_cluster_metadata; SET sql_log_bin=ON;"
Then, we can run configureInstance on each node as follows:
// configureInstance on node 1
$ kubectl -n mysql-cluster exec -it innodb-cluster-0 -- mysqlsh -- dba configure-instance { --host=127.0.0.1 --port=3306 --user=root --password=root } --clusterAdmin=root --restart=false --interactive=false
// configureInstance on node 2
$ kubectl -n mysql-cluster exec -it innodb-cluster-1 -- mysqlsh -- dba configure-instance { --host=127.0.0.1 --port=3306 --user=root --password=root } --clusterAdmin=root --restart=false --interactive=false
// configureInstance on node 3
$ kubectl -n mysql-cluster exec -it innodb-cluster-2 -- mysqlsh -- dba configure-instance { --host=127.0.0.1 --port=3306 --user=root --password=root } --clusterAdmin=root --restart=false --interactive=false
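Optionally, MySQL Shell can re-check each instance to confirm it is now valid for InnoDB Cluster usage; a sketch for node 1 using the corresponding check command:
// re-check the configuration of node 1
$ kubectl -n mysql-cluster exec -it innodb-cluster-0 -- mysqlsh -- dba check-instance-configuration { --host=127.0.0.1 --port=3306 --user=root --password=root }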
Once done, we create the cluster using the following command:
// create cluster
$ kubectl -n mysql-cluster exec -it innodb-cluster-0 -- mysqlsh root@localhost:3306 --password=root -- dba create-cluster myCluster --ipAllowlist='10.0.0.0/8'
Add the two remaining nodes to the cluster:
// add instance node 2 to cluster
$ kubectl -n mysql-cluster exec -it innodb-cluster-0 -- mysqlsh root:root@localhost:3306 -- cluster add-instance root:root@innodb-cluster-1:3306 --recoveryMethod=clone --ipAllowlist='10.0.0.0/8'
// add instance node 3 to cluster
$ kubectl -n mysql-cluster exec -it innodb-cluster-1 -- mysqlsh root:root@localhost:3306 -- cluster add-instance root:root@innodb-cluster-2:3306 --recoveryMethod=clone --ipAllowlist='10.0.0.0/8'
Check the cluster status using the following command:
// check cluster
$ kubectl -n mysql-cluster exec -it innodb-cluster-0 -- mysqlsh root:root@localhost:3306 -- cluster status
Here is the output from my environment:
WARNING: Using a password on the command line interface can be insecure.
{
    "clusterName": "myCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "innodb-cluster-0:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "innodb-cluster-0:3306": {
                "address": "innodb-cluster-0:3306",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.22"
            },
            "innodb-cluster-1:3306": {
                "address": "innodb-cluster-1:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.22"
            },
            "innodb-cluster-2:3306": {
                "address": "innodb-cluster-2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.22"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "innodb-cluster-0:3306"
}
Finally! We have InnoDB Cluster running on Kubernetes.
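As a quick smoke test, you can write on the primary and read the row back from a secondary to see Group Replication at work (a sketch using a hypothetical smoke database; note that Group Replication requires every table to have a primary key):
// write on the primary (innodb-cluster-0)
$ kubectl -n mysql-cluster exec -it innodb-cluster-0 -- mysql -uroot -proot -e "create database if not exists smoke; create table if not exists smoke.t1 (id int primary key); insert into smoke.t1 values (1);"
// read from a secondary (innodb-cluster-1)
$ kubectl -n mysql-cluster exec -it innodb-cluster-1 -- mysql -uroot -proot -e "select * from smoke.t1;"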
Next, I will write about how to deploy MySQL Router on Kubernetes.
Disclaimer:
The methods and tricks presented here are experimental only, and it is your own responsibility to test them, implement them, and provide support in case of issues. They are examples only and not for production deployment, as this is not a formally supported configuration. I encourage more testing to be done, including by the development team.