In my first article about NDB Cluster on Kubernetes, I discussed how to deploy NDB Cluster on Kubernetes; you can find it at https://mysqlsg.blogspot.com/2020/06/deploying-ndb-cluster-on-kubernetes-in_30.html. In the second article, I discussed how to clone an NDB Cluster: https://mysqlsg.blogspot.com/2020/07/clone-ndb-cluster-on-kubernetes-using.html.
This article continues the previous two and discusses how to scale up data nodes and SQL nodes when running on Kubernetes.
Content:
A. Start the NDB Cluster
B. Edit config.ini of the management node
C. Spin new data nodes on Kubernetes
D. Restart all cluster nodes
E. Adding new data nodes
F. Adding SQL nodes
G. Test it!
A. START THE NDB CLUSTER
Start Minikube if it hasn't been started yet.
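A minimal sketch of starting it (any driver or resource options you used in the previous articles are assumed to stay the same):
$ minikube start
Once Minikube is up, let's check the pod status: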
$ kubectl -n mysql-cluster get pod
NAME                             READY   STATUS    RESTARTS   AGE
dataa-0                          1/1     Running   3          5d1h
datab-0                          1/1     Running   3          5d1h
mgmt-0                           1/1     Running   3          5d1h
mysql-cluster-796d8b4d78-dwn9j   1/1     Running   5          10d
mysql-cluster-796d8b4d78-n6n4l   1/1     Running   5          10d
$
Pod "mgmt-0" is the management node; "dataa-0" and "datab-0" are the data nodes of data node group 0, where the data is stored (see below for a brief explanation); and the "mysql-cluster-…" Pods are the SQL nodes.
Now let's check whether the cluster has started up automatically.
$ kubectl -n mysql-cluster exec -it mgmt-0 -- ndb_mgm -c localhost -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @172.17.0.5  (mysql-5.7.30 ndb-7.6.14, Nodegroup: 0, *)
id=3    @172.17.0.7  (mysql-5.7.30 ndb-7.6.14, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @172.17.0.8  (mysql-5.7.30 ndb-7.6.14)

[mysqld(API)]   2 node(s)
id=4    @172.17.0.4  (mysql-5.7.30 ndb-7.6.14)
id=5    @172.17.0.6  (mysql-5.7.30 ndb-7.6.14)
$
Perfect! It is up and running automatically.
B. Edit Config.ini of the Management Node
Open config_ini.yaml and edit it into the following. The additions are the two new [ndbd] sections (for "dataa-1" and "datab-1") and the two extra [mysqld] sections, which add two more data nodes and two more SQL nodes:
[ndbd default]
NoOfReplicas=2
DataMemory=98M
[ndb_mgmd]
NodeId=1
HostName=mgmt-0
DataDir=/var/lib/mysql
[ndbd]
HostName=dataa-0
NodeId=2
DataDir=/var/lib/mysql
ServerPort=2202
[ndbd]
HostName=datab-0
NodeId=3
DataDir=/var/lib/mysql
ServerPort=2202
[ndbd]
HostName=dataa-1
NodeId=4
DataDir=/var/lib/mysql
ServerPort=2202
[ndbd]
HostName=datab-1
NodeId=5
DataDir=/var/lib/mysql
ServerPort=2202
[mysqld]
[mysqld]
[mysqld]
[mysqld]
Apply the updated config_ini.yaml to Kubernetes as a ConfigMap with these commands:
$ kubectl -n mysql-cluster delete configmap config-ini
configmap "config-ini" deleted
$ kubectl -n mysql-cluster create configmap config-ini --from-file=config_ini.yaml
configmap/config-ini created
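If you want to verify that the new ConfigMap really carries the updated configuration before restarting anything, you can dump it back out (a quick sanity check; the output is omitted here):
$ kubectl -n mysql-cluster get configmap config-ini -o yaml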
It is easy to see how the SQL nodes are scaled up by looking at the config above. Understanding how data nodes are scaled up requires combining the concepts of replicas and node groups with Kubernetes StatefulSets. Long story short, replicas are copies of the data: NDB Cluster stores the data in up to four copies spread across multiple data nodes, so that high availability never depends on a single copy. The number of replicas is determined by the "NoOfReplicas" parameter, as seen in the config above. A node group is a group of data nodes that holds the same data, so adding another node group with the same number of data nodes expands the cluster's overall data volume capacity. That's what we are going to do now, but on Kubernetes!
That explanation is very brief. For more detailed information about NDB Cluster nodes, node groups, and replicas, please visit https://dev.mysql.com/doc/mysql-cluster-excerpt/8.0/en/mysql-cluster-nodes-groups.html.
I also encourage you to read Mikael Ronstrom's blog post at http://mikaelronstrom.blogspot.com/2020/01/support-3-4-replicas-in-ndb-cluster-80.html for updates on 3-4 replica support in NDB Cluster 8.0.
On Kubernetes, as I mentioned in the previous article, I recommend using the StatefulSet "replicas" field to expand the cluster, and using the number of StatefulSets to determine the number of data nodes within a data node group. Thus, if we want two data nodes in one data node group, we need two StatefulSets. If we want to scale up from one data node group to two data node groups, we scale the "replicas" parameter on each StatefulSet from "1" to "2". This is what we are going to cover in this article!
Because the data nodes are deployed using StatefulSets, this becomes easy to handle. If we scale a StatefulSet's replicas from "1" to "2", a new Kubernetes Pod is created with ordinal number "1" (e.g. dataa-1 and datab-1 as shown in the diagram). Thus, we keep dataa-0 and datab-0 from the current cluster setup, plus the additional dataa-1 and datab-1. In this case, dataa-0 and dataa-1 belong to StatefulSet "dataa", while datab-0 and datab-1 belong to StatefulSet "datab". But from the NDB Cluster perspective, dataa-0 and datab-0 belong to data node group 0, while dataa-1 and datab-1 belong to data node group 1. The mapping is indeed crosswise, but this design gives flexibility to scale.
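As a side note, the same scale-up can also be triggered imperatively instead of editing the YAML manifest as we do in section C below; a minimal sketch, assuming the StatefulSets are named "dataa" and "datab" as in this series:
$ kubectl -n mysql-cluster scale statefulset dataa --replicas=2
$ kubectl -n mysql-cluster scale statefulset datab --replicas=2
Editing the YAML file and re-applying it, as done below, has the advantage that the manifest remains the single source of truth.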
Back to the config_ini.yaml file above: as you can see, it now contains definitions for data nodes "dataa-1" and "datab-1" in addition to the original entries.
C. Spin New Data Nodes on Kubernetes
This is the easiest part of this article. We have two additional data nodes ("dataa-1" and "datab-1"), so we need one new Kubernetes Service for each of the new data node Pods. Spinning up the two additional Pods is even easier: just edit the "replicas" parameter of each data node StatefulSet ("dataa" and "datab") in the YAML file from "1" to "2". The edited YAML file below includes the necessary changes (the new "dataa-1" and "datab-1" Services, and "replicas: 2" on both data node StatefulSets). Please note: for simplification in this article, I don't put PV/PVC into the Pod definitions for the data nodes.
---
apiVersion: v1
kind: Service
metadata:
  name: mgmt-0
  namespace: mysql-cluster
spec:
  ports:
  - name: mgmtport
    port: 1186
    targetPort: 1186
  - name: pingport
    port: 7
    targetPort: 7
  selector:
    statefulset.kubernetes.io/pod-name: mgmt-0
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: dataa-1
  namespace: mysql-cluster
spec:
  ports:
  - name: dataport
    port: 2202
    targetPort: 2202
  - name: pingport
    port: 7
    targetPort: 7
  selector:
    statefulset.kubernetes.io/pod-name: dataa-1
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: dataa-0
  namespace: mysql-cluster
spec:
  ports:
  - name: dataport
    port: 2202
    targetPort: 2202
  - name: pingport
    port: 7
    targetPort: 7
  selector:
    statefulset.kubernetes.io/pod-name: dataa-0
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: datab-1
  namespace: mysql-cluster
spec:
  ports:
  - name: dataport
    port: 2202
    targetPort: 2202
  - name: pingport
    port: 7
    targetPort: 7
  selector:
    statefulset.kubernetes.io/pod-name: datab-1
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: datab-0
  namespace: mysql-cluster
spec:
  ports:
  - name: dataport
    port: 2202
    targetPort: 2202
  - name: pingport
    port: 7
    targetPort: 7
  selector:
    statefulset.kubernetes.io/pod-name: datab-0
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-cluster
  namespace: mysql-cluster
  labels:
    app: mysql-cluster
spec:
  ports:
  - name: tcp-rw
    port: 3306
    targetPort: 3306
  selector:
    app: mysql-cluster
  type: LoadBalancer
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mgmt
  namespace: mysql-cluster
spec:
  serviceName: "mgmt"
  replicas: 1
  selector:
    matchLabels:
      app: mgmt
  template:
    metadata:
      labels:
        app: mgmt
    spec:
      containers:
      - image: mysql/mysql-cluster:latest
        name: mysql
        volumeMounts:
        - name: mysql-configmap-volume
          mountPath: /etc/config.ini
          subPath: config.ini
        command: ["/bin/sh"] # do not modify
        args: ["-c", "sleep 30; ndb_mgmd --config-file=/etc/config.ini --config-dir=/home; while true; do sleep 1; done;"] # do not modify
      volumes:
      - name: mysql-configmap-volume
        configMap:
          name: config-ini
          items:
          - key: config_ini.yaml
            path: config.ini
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dataa
  namespace: mysql-cluster
spec:
  serviceName: "dataa"
  replicas: 2
  selector:
    matchLabels:
      app: dataa
  template:
    metadata:
      labels:
        app: dataa
    spec:
      containers:
      - image: mysql/mysql-cluster:latest
        name: mysql
        volumeMounts:
        - name: mysql-configmap-volume
          mountPath: /etc/datanode.cnf
          subPath: datanode.cnf
        command: ["/bin/sh", "-c", "sleep 60; ndbd --defaults-file=/etc/datanode.cnf --ndb-connectstring=mgmt-0; while true; do sleep 1; done;"]
      volumes:
      - name: mysql-configmap-volume
        configMap:
          name: datanode
          items:
          - key: datanode_ini.yaml
            path: datanode.cnf
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: datab
  namespace: mysql-cluster
spec:
  serviceName: "datab"
  replicas: 2
  selector:
    matchLabels:
      app: datab
  template:
    metadata:
      labels:
        app: datab
    spec:
      containers:
      - image: mysql/mysql-cluster:latest
        name: mysql
        volumeMounts:
        - name: mysql-configmap-volume
          mountPath: /etc/datanode.cnf
          subPath: datanode.cnf
        command: ["/bin/sh", "-c", "sleep 60; ndbd --defaults-file=/etc/datanode.cnf --ndb-connectstring=mgmt-0; while true; do sleep 1; done;"]
      volumes:
      - name: mysql-configmap-volume
        configMap:
          name: datanode
          items:
          - key: datanode_ini.yaml
            path: datanode.cnf
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-cluster
  namespace: mysql-cluster
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mysql-cluster
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql-cluster
    spec:
      hostname: mysql-cluster
      containers:
      - image: mysql/mysql-cluster:latest
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-root-password
              key: password
        volumeMounts:
        - name: mysql-configmap-volume
          mountPath: /etc/my.cnf
          subPath: my.cnf
      volumes:
      - name: mysql-configmap-volume
        configMap:
          name: my-cnf
          items:
          - key: my_cnf.yaml
            path: my.cnf
Save that file as "all_nodes_scaling.yaml" and run the following command to apply the change to Kubernetes:
$ kubectl apply -f all_nodes_scaling.yaml
Check if “dataa-1” and “datab-1” Pods are running:
$ kubectl -n mysql-cluster get pod
NAME                             READY   STATUS    RESTARTS   AGE
dataa-0                          1/1     Running   3          5d10h
dataa-1                          1/1     Running   0          62s
datab-0                          1/1     Running   3          5d10h
datab-1                          1/1     Running   0          62s
mgmt-0                           1/1     Running   3          5d10h
mysql-cluster-796d8b4d78-dwn9j   1/1     Running   5          11d
mysql-cluster-796d8b4d78-n6n4l   1/1     Running   5          11d
Check if Kubernetes services for dataa-1 and datab-1 are available:
$ kubectl -n mysql-cluster get svc
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
dataa-0         ClusterIP      None             <none>        2202/TCP,7/TCP   11d
dataa-1         ClusterIP      None             <none>        2202/TCP,7/TCP   2m40s
datab-0         ClusterIP      None             <none>        2202/TCP,7/TCP   11d
datab-1         ClusterIP      None             <none>        2202/TCP,7/TCP   2m40s
mgmt-0          ClusterIP      None             <none>        1186/TCP,7/TCP   11d
mysql-cluster   LoadBalancer   10.107.196.107   <pending>     3306:32418/TCP   11d
Cool, we are ready!
D. Restart All Cluster Nodes
D.1. Restart Management Node "mgmt-0"
Remember that we recreated config.ini in section B of this article. However, the configuration currently running on management node "mgmt-0" is still the old config.ini, which does not include "dataa-1" and "datab-1". Thus, we need to restart Pod "mgmt-0":
$ kubectl -n mysql-cluster delete pod mgmt-0
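Because "mgmt-0" is managed by a StatefulSet, Kubernetes recreates the Pod automatically. Rather than waiting a fixed amount of time, you can also wait for the new Pod to report Ready; a minimal sketch (the timeout value is an arbitrary assumption):
$ kubectl -n mysql-cluster wait --for=condition=Ready pod/mgmt-0 --timeout=180s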
Wait for about a minute (remember that the container command sleeps 30 seconds before starting ndb_mgmd), then check the cluster status:
$ kubectl -n mysql-cluster exec -it mgmt-0 -- ndb_mgm -c localhost -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     4 node(s)
id=2    @172.17.0.5  (mysql-5.7.30 ndb-7.6.14, Nodegroup: 0, *)
id=3    @172.17.0.7  (mysql-5.7.30 ndb-7.6.14, Nodegroup: 0)
id=4 (not connected, accepting connect from dataa-1)
id=5 (not connected, accepting connect from datab-1)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @172.17.0.8  (mysql-5.7.30 ndb-7.6.14)

[mysqld(API)]   4 node(s)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)
id=8 (not connected, accepting connect from any host)
id=9 (not connected, accepting connect from any host)
As you can see in the output above, the cluster configuration now includes "dataa-1" and "datab-1" (not yet connected), as well as four SQL node slots.
D.2. Restart Data Nodes "dataa-0" and "datab-0"
Restart node id 2:
$ kubectl -n mysql-cluster exec -it mgmt-0 -- ndb_mgm -c localhost -e "2 restart;"
Once done, restart node id 3:
$ kubectl -n mysql-cluster exec -it mgmt-0 -- ndb_mgm -c localhost -e "3 restart;"
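A data node restart can take a little while. Before moving on, you can confirm from the management client that both nodes are back in the started state; a minimal sketch:
$ kubectl -n mysql-cluster exec -it mgmt-0 -- ndb_mgm -c localhost -e "all status"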
D.3. Restart All SQL Nodes
$ kubectl -n mysql-cluster delete pod mysql-cluster-796d8b4d78-dwn9j
$ kubectl -n mysql-cluster delete pod mysql-cluster-796d8b4d78-n6n4l
E. Adding New Data Nodes
Start the ndbd process on the two new data node Pods so they join the cluster:
$ kubectl -n mysql-cluster exec -it dataa-1 -- ndbd --defaults-file=/etc/datanode.cnf --ndb-connectstring=mgmt-0
$ kubectl -n mysql-cluster exec -it datab-1 -- ndbd --defaults-file=/etc/datanode.cnf --ndb-connectstring=mgmt-0
Now check our cluster status:
$ kubectl -n mysql-cluster exec -it mgmt-0 -- ndb_mgm -c localhost -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     4 node(s)
id=2    @172.17.0.5  (mysql-5.7.30 ndb-7.6.14, Nodegroup: 0, *)
id=3    @172.17.0.7  (mysql-5.7.30 ndb-7.6.14, Nodegroup: 0)
id=4    @172.17.0.3  (mysql-5.7.30 ndb-7.6.14, Nodegroup: 1)
id=5    @172.17.0.4  (mysql-5.7.30 ndb-7.6.14, Nodegroup: 1)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @172.17.0.8  (mysql-5.7.30 ndb-7.6.14)

[mysqld(API)]   4 node(s)
id=6    @172.17.0.19  (mysql-5.7.30 ndb-7.6.14)
id=7    @172.17.0.6  (mysql-5.7.30 ndb-7.6.14)
id=8 (not connected, accepting connect from any host)
id=9 (not connected, accepting connect from any host)
As you can see, our NDB Cluster now has 4 data nodes in 2 data node groups!
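One caveat worth knowing: tables that already existed in the cluster keep their partitions on the original node group until they are explicitly reorganized, so only newly created tables automatically use the new node group. A hedged sketch of the usual redistribution step, run through any connected SQL node (the Pod, database, and table names here are hypothetical placeholders):
$ kubectl -n mysql-cluster exec -it <sql-node-pod> -- mysql -uroot -p -e "
    ALTER TABLE mydb.mytable ALGORITHM=INPLACE, REORGANIZE PARTITION;
    OPTIMIZE TABLE mydb.mytable;"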
F. Adding SQL Nodes
Simple! Just open all_nodes_scaling.yaml again and edit the SQL node Deployment to change "replicas: 2" to "replicas: 4", as shown in the excerpt below.
…
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-cluster
  namespace: mysql-cluster
spec:
  replicas: 4
  selector:
    matchLabels:
      app: mysql-cluster
…
Apply the YAML file again:
$ kubectl apply -f all_nodes_scaling.yaml
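To confirm that the Deployment has actually scaled out to four SQL node Pods before checking the cluster itself, a quick look is enough (the Pod names will differ in your environment):
$ kubectl -n mysql-cluster get deployment mysql-cluster
$ kubectl -n mysql-cluster get pod -l app=mysql-cluster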
G. TEST IT!
Show cluster status:
$ kubectl -n mysql-cluster exec -it mgmt-0 -- ndb_mgm -c localhost -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     4 node(s)
id=2    @172.17.0.5  (mysql-5.7.30 ndb-7.6.14, Nodegroup: 0, *)
id=3    @172.17.0.7  (mysql-5.7.30 ndb-7.6.14, Nodegroup: 0)
id=4    @172.17.0.3  (mysql-5.7.30 ndb-7.6.14, Nodegroup: 1)
id=5    @172.17.0.4  (mysql-5.7.30 ndb-7.6.14, Nodegroup: 1)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @172.17.0.8  (mysql-5.7.30 ndb-7.6.14)

[mysqld(API)]   4 node(s)
id=6    @172.17.0.19  (mysql-5.7.30 ndb-7.6.14)
id=7    @172.17.0.6  (mysql-5.7.30 ndb-7.6.14)
id=8    @172.17.0.20  (mysql-5.7.30 ndb-7.6.14)
id=9    @172.17.0.18  (mysql-5.7.30 ndb-7.6.14)
Done! The cluster is now running with 4 SQL nodes!
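As a final smoke test, you can connect through one of the SQL node Pods and create a small NDB table to confirm that reads and writes work across the scaled-out cluster. A minimal sketch: the Pod name is a placeholder for any of the "mysql-cluster-…" Pods, the database and table names are made up, and the root password comes from the "mysql-root-password" Secret used in the earlier articles:
$ kubectl -n mysql-cluster exec -it <sql-node-pod> -- mysql -uroot -p -e "
    CREATE DATABASE IF NOT EXISTS demo;
    CREATE TABLE IF NOT EXISTS demo.t1 (id INT PRIMARY KEY, val VARCHAR(32)) ENGINE=NDBCLUSTER;
    INSERT INTO demo.t1 VALUES (1, 'scaling works');
    SELECT * FROM demo.t1;"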
H. How About Cluster Manager?
As you can see in this article, scaling up NDB Cluster is quite manual without MySQL Cluster Manager (mcm). The more data nodes and SQL nodes there are to handle, the more complicated the task becomes. With MySQL Cluster Manager (mcm), a DBA gets a lot of handy commands that simplify cluster management, including backup and recovery. See this URL for details: https://dev.mysql.com/doc/mysql-cluster-manager/1.4/en/
Disclaimer:
The methods and tricks presented here are experimental only, and it is your own responsibility to test them, implement them, and provide support in case of issues. This article only shows an example and is not intended for a production deployment, as this is not a formally supported configuration. I encourage further testing, including by your development team. An implementation with as few layers as possible is recommended, along with support from Oracle Support for any real-world implementation serving business applications.