In the previous article, I discussed how to deploy NDB Cluster on
Kubernetes. You can find those steps here: https://mysqlsg.blogspot.com/2020/06/deploying-ndb-cluster-on-kubernetes-in_30.html. This article continues from there,
covering how to back up the Data Nodes and restore the backup files to
another NDB Cluster on Kubernetes.
Content:
A. Prepare NDB Cluster (Source)
B. Create dummy object and data on source
C. Run backup on source
D. Create a new NDB Cluster (Target)
E. Copy backup files from source to target cluster
F. Restore backup files to target cluster
G. Test it!
H. How about MySQL Cluster Manager
A. PREPARE NDB CLUSTER (SOURCE)
Start Minikube if it isn't already running, then check the pod status:
$ kubectl -n mysql-cluster get pod
NAME                             READY   STATUS    RESTARTS   AGE
dataa-0                          1/1     Running   1          5d15h
datab-0                          1/1     Running   1          5d15h
mgmt-0                           1/1     Running   1          5d15h
mysql-cluster-796d8b4d78-dwn9j   1/1     Running   1          5d15h
mysql-cluster-796d8b4d78-n6n4l   1/1     Running   1          5d15h
$
Let’s check whether the cluster is up:
$ kubectl -n mysql-cluster exec -it mgmt-0 -- ndb_mgm -c localhost -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @172.17.0.5  (mysql-5.7.30 ndb-7.6.14, Nodegroup: 0, *)
id=3    @172.17.0.4  (mysql-5.7.30 ndb-7.6.14, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @172.17.0.6  (mysql-5.7.30 ndb-7.6.14)
[mysqld(API)]   2 node(s)
id=4    @172.17.0.2  (mysql-5.7.30 ndb-7.6.14)
id=5    @172.17.0.3  (mysql-5.7.30 ndb-7.6.14)
$
If the cluster doesn’t show up, start it manually as follows:
$ kubectl -n mysql-cluster exec -it mgmt-0 -- ndb_mgmd --config-file=/etc/config.ini --config-dir=/home
MySQL Cluster Management Server mysql-5.7.30 ndb-7.6.14
$ kubectl -n mysql-cluster exec -it dataa-0 -- ndbd --defaults-file=/etc/datanode.cnf --ndb-connectstring=mgmt-0 --ndb-nodeid=2
2020-07-06 00:19:38 [ndbd] INFO     -- Angel connected to 'mgmt-0:1186'
2020-07-06 00:19:38 [ndbd] INFO     -- Angel allocated nodeid: 2
$ kubectl -n mysql-cluster exec -it datab-0 -- ndbd --defaults-file=/etc/datanode.cnf --ndb-connectstring=mgmt-0 --ndb-nodeid=3
2020-07-06 00:19:44 [ndbd] INFO     -- Angel connected to 'mgmt-0:1186'
2020-07-06 00:19:44 [ndbd] INFO     -- Angel allocated nodeid: 3
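Optionally, instead of eyeballing the output, you can block until the Data Nodes actually reach the STARTED state. A minimal sketch, assuming the ndb_waiter utility (which ships with MySQL NDB Cluster) is present in the management image:
$ kubectl -n mysql-cluster exec -it mgmt-0 -- ndb_waiter -c localhost --timeout=120
Alternatively, poll node status from the management client:
$ kubectl -n mysql-cluster exec -it mgmt-0 -- ndb_mgm -c localhost -e "all status"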
B. CREATE DUMMY OBJECT AND DATA ON SOURCE
Create a dummy database and table for testing, and fill the table with
simple data by connecting to the first SQL Node:
$ kubectl -n mysql-cluster exec -it mysql-cluster-796d8b4d78-dwn9j -- mysql -uroot -h127.0.0.1 -proot -e "create database dummy; create table dummy.dummy (i int primary key) engine=ndb; insert into dummy.dummy values (1); insert into dummy.dummy values (2); insert into dummy.dummy values (3);"
mysql: [Warning] Using a password on the command line interface can be insecure.
Check whether those rows are queryable from the second SQL Node:
$ kubectl -n mysql-cluster exec -it mysql-cluster-796d8b4d78-n6n4l -- mysql -uroot -h127.0.0.1 -proot -e "select * from dummy.dummy"
mysql: [Warning] Using a password on the command line interface can be insecure.
+---+
| i |
+---+
| 1 |
| 2 |
| 3 |
+---+
Perfect!
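As an extra sanity check, you can count the rows directly through the NDB API, bypassing the SQL layer entirely. A small sketch, assuming ndb_select_count is available in the management image:
$ kubectl -n mysql-cluster exec -it mgmt-0 -- ndb_select_count -c localhost -d dummy dummy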
C. RUN BACKUP ON SOURCE
Now, back up the Data Nodes from the Management Node.
$ kubectl -n mysql-cluster exec -it mgmt-0 -- ndb_mgm -c localhost -e "start backup;"
Connected to Management Server at: localhost:1186
Waiting for completed, this may take several minutes
Node 2: Backup 1 started from node 1
Node 2: Backup 1 started from node 1 completed
 StartGCP: 2205 StopGCP: 2208
 #Records: 2060 #LogRecords: 0
 Data: 51352 bytes Log: 0 bytes
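Each Data Node writes its own backup fragment locally under its DataDir. Before copying anything, you can verify the files exist on both nodes (the BACKUP-1 directory name matches the backup ID reported above):
$ kubectl -n mysql-cluster exec -it dataa-0 -- ls -l /var/lib/mysql/BACKUP/BACKUP-1
$ kubectl -n mysql-cluster exec -it datab-0 -- ls -l /var/lib/mysql/BACKUP/BACKUP-1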
Then, run mysqldump on the first SQL Node to back up the database schema.
Note that the redirection must run inside the container so the dump file lands in the pod’s /tmp, where we pick it up later:
$ kubectl -n mysql-cluster exec -it mysql-cluster-796d8b4d78-dwn9j -- bash -c "mysqldump --no-data --all-databases -uroot -proot -h127.0.0.1 > /tmp/mydatabase.sql"
Done! We now have a backup of the NDB data and the database schema.
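To confirm the dump actually landed inside the pod:
$ kubectl -n mysql-cluster exec -it mysql-cluster-796d8b4d78-dwn9j -- ls -l /tmp/mydatabase.sql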
D. CREATE NEW NDB CLUSTER (TARGET)
The following YAML file creates a new NDB Cluster on
Kubernetes with 1 Management Node, 2 Data Nodes, and 1 SQL Node. In a real-world
scenario, a PV/PVC can be added to persist NDB table data on the Data Nodes.
1. Deploy configuration files as Kubernetes ConfigMaps
Configuration file for management node (file name: c2-config_ini.yaml)
[ndbd default]
NoOfReplicas=2
DataMemory=98M
[ndb_mgmd]
NodeId=1
HostName=c2-mgmt-0
DataDir=/var/lib/mysql
[ndbd]
HostName=c2-dataa-0
NodeId=2
DataDir=/var/lib/mysql
ServerPort=2202
[ndbd]
HostName=c2-datab-0
NodeId=3
DataDir=/var/lib/mysql
ServerPort=2202
[mysqld]
[mysqld]
Configuration file for data node (file name: c2-datanode_ini.yaml)
[mysqld]
ndbcluster
ndb-connectstring=c2-mgmt-0
Configuration file for SQL Node (file name: c2-my_cnf.yaml)
[mysqld]
ndbcluster
ndb-connectstring=c2-mgmt-0
user=mysql
Deploy them as ConfigMaps:
$ kubectl -n mysql-cluster create configmap c2-config-ini --from-file=c2-config_ini.yaml
configmap/c2-config-ini created
$ kubectl -n mysql-cluster create configmap c2-datanode --from-file=c2-datanode_ini.yaml
configmap/c2-datanode created
$ kubectl -n mysql-cluster create configmap c2-my-cnf --from-file=c2-my_cnf.yaml
configmap/c2-my-cnf created
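To verify the ConfigMaps were created and contain the expected keys:
$ kubectl -n mysql-cluster get configmap c2-config-ini c2-datanode c2-my-cnf
$ kubectl -n mysql-cluster describe configmap c2-config-ini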
2. Deploy NDB Cluster
Deploy the second cluster with the following YAML (file name:
c2_all_nodes.yaml):
---
apiVersion: v1
kind: Service
metadata:
  name: c2-mgmt-0
  namespace: mysql-cluster
spec:
  ports:
  - name: mgmtport
    port: 1186
    targetPort: 1186
  - name: pingport
    port: 7
    targetPort: 7
  selector:
    statefulset.kubernetes.io/pod-name: c2-mgmt-0
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: c2-dataa-0
  namespace: mysql-cluster
spec:
  ports:
  - name: dataport
    port: 2202
    targetPort: 2202
  - name: pingport
    port: 7
    targetPort: 7
  selector:
    statefulset.kubernetes.io/pod-name: c2-dataa-0
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: c2-datab-0
  namespace: mysql-cluster
spec:
  ports:
  - name: dataport
    port: 2202
    targetPort: 2202
  - name: pingport
    port: 7
    targetPort: 7
  selector:
    statefulset.kubernetes.io/pod-name: c2-datab-0
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: c2-mysql-cluster
  namespace: mysql-cluster
  labels:
    app: c2-mysql-cluster
spec:
  ports:
  - name: tcp-rw
    port: 3306
    targetPort: 3306
  selector:
    app: c2-mysql-cluster
  type: LoadBalancer
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: c2-mgmt
  namespace: mysql-cluster
spec:
  serviceName: "mgmt"
  replicas: 1
  selector:
    matchLabels:
      app: c2-mgmt
  template:
    metadata:
      labels:
        app: c2-mgmt
    spec:
      containers:
      - image: mysql/mysql-cluster:latest
        name: mysql
        volumeMounts:
        - name: mysql-configmap-volume
          mountPath: /etc/config.ini
          subPath: config.ini
        command: ["/bin/sh"] # do not modify
        args: ["-c", "sleep 30; ndb_mgmd --config-file=/etc/config.ini --config-dir=/home; while true; do sleep 1; done;"] # do not modify
      volumes:
      - name: mysql-configmap-volume
        configMap:
          name: c2-config-ini
          items:
          - key: c2-config_ini.yaml
            path: config.ini
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: c2-dataa
  namespace: mysql-cluster
spec:
  serviceName: "c2-dataa"
  replicas: 1
  selector:
    matchLabels:
      app: c2-dataa
  template:
    metadata:
      labels:
        app: c2-dataa
    spec:
      containers:
      - image: mysql/mysql-cluster:latest
        name: mysql
        volumeMounts:
        - name: mysql-configmap-volume
          mountPath: /etc/datanode.cnf
          subPath: datanode.cnf
        command: ["/bin/sh", "-c", "sleep 60; ndbd --defaults-file=/etc/datanode.cnf --ndb-connectstring=c2-mgmt-0 --ndb-nodeid=2; while true; do sleep 1; done;"]
      volumes:
      - name: mysql-configmap-volume
        configMap:
          name: c2-datanode
          items:
          - key: c2-datanode_ini.yaml
            path: datanode.cnf
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: c2-datab
  namespace: mysql-cluster
spec:
  serviceName: "datab"
  replicas: 1
  selector:
    matchLabels:
      app: c2-datab
  template:
    metadata:
      labels:
        app: c2-datab
    spec:
      containers:
      - image: mysql/mysql-cluster:latest
        name: mysql
        volumeMounts:
        - name: mysql-configmap-volume
          mountPath: /etc/datanode.cnf
          subPath: datanode.cnf
        command: ["/bin/sh", "-c", "sleep 60; ndbd --defaults-file=/etc/datanode.cnf --ndb-connectstring=c2-mgmt-0 --ndb-nodeid=3; while true; do sleep 1; done;"]
      volumes:
      - name: mysql-configmap-volume
        configMap:
          name: c2-datanode
          items:
          - key: c2-datanode_ini.yaml
            path: datanode.cnf
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: c2-mysql-cluster
  namespace: mysql-cluster
spec:
  replicas: 1
  selector:
    matchLabels:
      app: c2-mysql-cluster
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: c2-mysql-cluster
    spec:
      hostname: mysql-cluster
      containers:
      - image: mysql/mysql-cluster:latest
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-root-password
              key: password
        volumeMounts:
        - name: mysql-configmap-volume
          mountPath: /etc/my.cnf
          subPath: my.cnf
      volumes:
      - name: mysql-configmap-volume
        configMap:
          name: c2-my-cnf
          items:
          - key: c2-my_cnf.yaml
            path: my.cnf
Execute cluster deployment:
$ kubectl apply -f c2_all_nodes.yaml
service/c2-mgmt-0 created
service/c2-dataa-0 created
service/c2-datab-0 created
service/c2-mysql-cluster created
statefulset.apps/c2-mgmt created
statefulset.apps/c2-dataa created
statefulset.apps/c2-datab created
deployment.apps/c2-mysql-cluster created
Check cluster after 1 minute:
$ kubectl -n mysql-cluster get pod
NAME                                READY   STATUS    RESTARTS   AGE
c2-dataa-0                          1/1     Running   0          28m
c2-datab-0                          1/1     Running   0          28m
c2-mgmt-0                           1/1     Running   0          28m
c2-mysql-cluster-5b5cfd5788-k7jtk   1/1     Running   0          25m
…
$ kubectl -n mysql-cluster exec -it c2-mgmt-0 -- ndb_mgm -c localhost -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @172.17.0.17  (mysql-5.7.30 ndb-7.6.14, Nodegroup: 0, *)
id=3    @172.17.0.18  (mysql-5.7.30 ndb-7.6.14, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @172.17.0.20  (mysql-5.7.30 ndb-7.6.14)
[mysqld(API)]   2 node(s)
id=4    @172.17.0.21  (mysql-5.7.30 ndb-7.6.14)
id=5 (not connected, accepting connect from any host)
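Rather than sleeping for a fixed minute, you can let kubectl block until the pods report Ready; a small sketch:
$ kubectl -n mysql-cluster wait --for=condition=Ready pod/c2-mgmt-0 pod/c2-dataa-0 pod/c2-datab-0 --timeout=180s
Keep in mind that the containers sleep for 30-60 seconds before launching the NDB processes, so Ready pods do not yet mean a started cluster; confirm with the ndb_mgm show command above.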
E. COPY BACKUP FILES FROM SOURCE TO TARGET
Many ways to transfer backup files from source to target on Kubernetes.
We can use cluster file system or NFS based PVC, etc. But this article does not
cover that topic as those are too wide for a blog. Instead, use a simple way on
Minikube as follow:
1. Transfer the dump file (between SQL Nodes)
Get the source container ID (SQL Node of the source cluster):
$ minikube ssh -- docker ps | grep mysql-cluster-796d8b4d78-dwn9j
d9e25d23a0d5   mysql/mysql-cluster   "/entrypoint.sh mysq…"   3 hours ago   Up 3 hours   k8s_mysql_mysql-cluster-796d8b4d78-dwn9j_mysql-cluster_bc84a88a-f8ae-43b8-ad3e-469e187d4fbc_3
Copy the file to the local filesystem of the Minikube VM:
$ minikube ssh -- docker cp d9e25d23a0d5:/tmp/mydatabase.sql /home/docker/mydatabase.sql
Get the target container ID (SQL Node of the target cluster):
$ minikube ssh -- docker ps | grep c2-mysql-cluster-5b5cfd5788-k7jtk
cef9746c777d   mysql/mysql-cluster   "/entrypoint.sh mysq…"   18 minutes ago   Up 18 minutes   k8s_mysql_c2-mysql-cluster-5b5cfd5788-k7jtk_mysql-cluster_c8a98f5e-1418-4e0a-a06c-dbe0f4d6f8d8_0
Copy the file from the Minikube VM to the SQL Node of the target cluster:
$ minikube ssh -- docker cp /home/docker/mydatabase.sql cef9746c777d:/tmp/mydatabase.sql
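An alternative that skips the Docker layer entirely is kubectl cp, which copies files between your workstation and a pod. A sketch of the same transfer, assuming tar is available inside the image (kubectl cp requires it):
$ kubectl -n mysql-cluster cp mysql-cluster-796d8b4d78-dwn9j:tmp/mydatabase.sql ./mydatabase.sql
$ kubectl -n mysql-cluster cp ./mydatabase.sql c2-mysql-cluster-5b5cfd5788-k7jtk:tmp/mydatabase.sql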
2. Transfer Data Node (dataa-0) backup files
Get the source container ID:
$ minikube ssh -- docker ps | grep dataa-0
549625c7dfe0   mysql/mysql-cluster   "/bin/sh -c 'sleep 6…"   3 hours ago   Up 3 hours   k8s_mysql_dataa-0_mysql-cluster_86104692-f1a1-454a-b6b0-ad8d86bd5308_1
Copy the backup files to the local filesystem of the Minikube VM:
$ minikube ssh -- mkdir backup-dataa-0
$ minikube ssh -- docker cp 549625c7dfe0:/var/lib/mysql/BACKUP/BACKUP-1/BACKUP-1-0.2.Data /home/docker/backup-dataa-0/
$ minikube ssh -- docker cp 549625c7dfe0:/var/lib/mysql/BACKUP/BACKUP-1/BACKUP-1.2.ctl /home/docker/backup-dataa-0/
$ minikube ssh -- docker cp 549625c7dfe0:/var/lib/mysql/BACKUP/BACKUP-1/BACKUP-1.2.log /home/docker/backup-dataa-0/
Get the target container ID:
$ minikube ssh -- docker ps | grep c2-dataa-0
54879636157e   mysql/mysql-cluster   "/bin/sh -c 'sleep 6…"   About an hour ago   Up About an hour   k8s_mysql_c2-dataa-0_mysql-cluster_5a718a2c-f810-4ea4-89d5-3a9a246fe67f_0
Copy the files from the Minikube VM to the Data Node of the target cluster:
$ minikube ssh -- docker cp /home/docker/backup-dataa-0/BACKUP-1-0.2.Data 54879636157e:/tmp/BACKUP-1-0.2.Data
$ minikube ssh -- docker cp /home/docker/backup-dataa-0/BACKUP-1.2.ctl 54879636157e:/tmp/BACKUP-1.2.ctl
$ minikube ssh -- docker cp /home/docker/backup-dataa-0/BACKUP-1.2.log 54879636157e:/tmp/BACKUP-1.2.log
3. Transfer Data Node (datab-0) backup files
Get the source container ID:
$ minikube ssh -- docker ps | grep datab-0
5d55a6bd7238   mysql/mysql-cluster   "/bin/sh -c 'sleep 6…"   4 hours ago   Up 4 hours   k8s_mysql_datab-0_mysql-cluster_cedbc442-7451-453b-8496-39e10468fe5f_1
Copy the backup files to the local filesystem of the Minikube VM:
$ minikube ssh -- mkdir backup-datab-0
$ minikube ssh -- docker cp 5d55a6bd7238:/var/lib/mysql/BACKUP/BACKUP-1/BACKUP-1-0.3.Data /home/docker/backup-datab-0/
$ minikube ssh -- docker cp 5d55a6bd7238:/var/lib/mysql/BACKUP/BACKUP-1/BACKUP-1.3.ctl /home/docker/backup-datab-0/
$ minikube ssh -- docker cp 5d55a6bd7238:/var/lib/mysql/BACKUP/BACKUP-1/BACKUP-1.3.log /home/docker/backup-datab-0/
Get the target container ID:
$ minikube ssh -- docker ps | grep c2-datab-0
e2f4ee7b2bf9   mysql/mysql-cluster   "/bin/sh -c 'sleep 6…"   About an hour ago   Up About an hour   k8s_mysql_c2-datab-0_mysql-cluster_70e2a5a8-7fd7-4c7b-a174-778c4aa47313_0
Copy the files from the Minikube VM to the Data Node of the target cluster:
$ minikube ssh -- docker cp /home/docker/backup-datab-0/BACKUP-1-0.3.Data e2f4ee7b2bf9:/tmp/BACKUP-1-0.3.Data
$ minikube ssh -- docker cp /home/docker/backup-datab-0/BACKUP-1.3.ctl e2f4ee7b2bf9:/tmp/BACKUP-1.3.ctl
$ minikube ssh -- docker cp /home/docker/backup-datab-0/BACKUP-1.3.log e2f4ee7b2bf9:/tmp/BACKUP-1.3.log
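Since the three files per node follow a fixed naming pattern (BACKUP-<id>-0.<nodeid>.Data, BACKUP-<id>.<nodeid>.ctl, BACKUP-<id>.<nodeid>.log), the copies can be scripted. A small sketch to run from your workstation; copy_backup is a hypothetical helper and the container IDs are the ones found above:
# Usage: copy_backup <src-container> <dst-container> <node-id>
copy_backup() {
  src=$1; dst=$2; node=$3
  # Stage through the Minikube VM, then push into the target container.
  minikube ssh -- mkdir -p "backup-node-$node"
  for f in "BACKUP-1-0.$node.Data" "BACKUP-1.$node.ctl" "BACKUP-1.$node.log"; do
    minikube ssh -- docker cp "$src:/var/lib/mysql/BACKUP/BACKUP-1/$f" "/home/docker/backup-node-$node/"
    minikube ssh -- docker cp "/home/docker/backup-node-$node/$f" "$dst:/tmp/$f"
  done
}
copy_backup 549625c7dfe0 54879636157e 2   # dataa-0 -> c2-dataa-0
copy_backup 5d55a6bd7238 e2f4ee7b2bf9 3   # datab-0 -> c2-datab-0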
F. RESTORE BACKUP FILES TO TARGET CLUSTER
The mysql-cluster Docker image does not include ndb_restore. Obtain the
ndb_restore binary for release 7.6.14 manually (it ships with the MySQL NDB
Cluster distribution) and copy it to both Data Nodes, c2-dataa-0 and c2-datab-0.
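One way to get the binary into the pods, sketched under the assumption that you have extracted ndb_restore from the 7.6.14 tarball into your current directory (kubectl cp requires tar inside the image):
$ kubectl -n mysql-cluster cp ./ndb_restore c2-dataa-0:/usr/local/bin/ndb_restore
$ kubectl -n mysql-cluster exec -it c2-dataa-0 -- chmod +x /usr/local/bin/ndb_restore
$ kubectl -n mysql-cluster cp ./ndb_restore c2-datab-0:/usr/local/bin/ndb_restore
$ kubectl -n mysql-cluster exec -it c2-datab-0 -- chmod +x /usr/local/bin/ndb_restore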
Restore the table schema on the SQL Node:
$ kubectl -n mysql-cluster exec -it c2-mysql-cluster-5b5cfd5788-k7jtk -- mysql -uroot -h127.0.0.1 -proot -e "source /tmp/mydatabase.sql;"
Restore the backup files on Data Node 1 (c2-dataa-0):
$ kubectl -n mysql-cluster exec -it c2-dataa-0 -- ndb_restore -c c2-mgmt-0 --restore-data -b 1 -n 2 --backup-path=/tmp
Restore the backup files on Data Node 2 (c2-datab-0):
$ kubectl -n mysql-cluster exec -it c2-datab-0 -- ndb_restore -c c2-mgmt-0 --restore-data -b 1 -n 3 --backup-path=/tmp
G. TEST IT!
Connect to the SQL Node and query the restored table:
$ kubectl -n mysql-cluster exec -it c2-mysql-cluster-5b5cfd5788-k7jtk -- mysql -uroot -h127.0.0.1 -proot -e "select * from dummy.dummy"
mysql: [Warning] Using a password on the command line interface can be insecure.
+---+
| i |
+---+
| 1 |
| 2 |
| 3 |
+---+
Done! The table is successfully restored. To add an additional SQL Node to the
target cluster, simply change replicas: 1 to replicas: 2 in the c2-mysql-cluster
Deployment section of c2_all_nodes.yaml.
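Equivalently, you can scale the Deployment in place without editing the YAML; the free [mysqld] slot (id=5) in config.ini will accept the new SQL Node:
$ kubectl -n mysql-cluster scale deployment c2-mysql-cluster --replicas=2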
H. HOW ABOUT MYSQL CLUSTER MANAGER
As this article shows, backup and restore are quite manual without
MySQL Cluster Manager (mcm). The more Data Nodes and SQL Nodes you have, the
more complicated the task becomes. With MySQL Cluster Manager, a DBA gets many
handy commands that simplify cluster management, including backup and recovery.
See this URL for details: https://dev.mysql.com/doc/mysql-cluster-manager/1.4/en/
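For comparison, a rough sketch of the same backup/restore flow in the mcm client; the cluster name mycluster is hypothetical, and the manual linked above has the exact syntax:
mcm> backup cluster mycluster;
mcm> list backups mycluster;
mcm> restore cluster --backupid=1 mycluster;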
Disclaimer:
The methods and tricks presented here are experimental only; it is your
own responsibility to test them, implement them, and provide support in case of
issues. This is an example only, not a production deployment, as it is not a
formally supported configuration. I encourage further testing, including by the
development team. An implementation with as few layers as possible is
recommended, backed by Oracle Support for real-world deployments serving
business applications.