MySQL Router is a stateless, lightweight middleware that provides connection transparency from applications to InnoDB Cluster (IDC). An application does not need to be cluster-aware when it uses the router to connect to an IDC: the router handles application connection failover when the IDC's primary node fails over, and it also provides transparent client connection routing with load balancing to the IDC's secondary nodes.
MySQL Router is part of the application stack in the InnoDB Cluster architecture and is usually run locally on the same host as the application.
More information about MySQL Router can be found in the official MySQL documentation.
This article aims to show how to use and run MySQL Router in Kubernetes for microservices applications. For the steps to deploy and run MySQL InnoDB Cluster on Kubernetes as a StatefulSet, see my previous blog post.
Deployment Model
MySQL Router can be deployed in Kubernetes using one of the following models:
1. Multi-container pod
2. Single-container pod
First of all, pods are the smallest units that can be deployed and managed in Kubernetes. Containers are similar to VMs but are lightweight and have relaxed isolation properties, sharing the OS kernel among applications. A pod can be viewed as a single server in which the containers can reach one another on different ports of localhost, as the sketch below illustrates.
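As a minimal illustration (the names and images here are only examples, not part of the article's deployment), two containers in one pod share the pod's network namespace, so one can reach the other via 127.0.0.1:

# Hypothetical two-container pod: "app" reaches "web" on localhost.
apiVersion: v1
kind: Pod
metadata:
  name: two-container-example
spec:
  containers:
  - name: web                  # nginx listens on port 80 inside the pod
    image: nginx
  - name: app                  # fetches from the other container over localhost
    image: busybox
    command: ["sh", "-c", "sleep 5; wget -qO- http://127.0.0.1:80; sleep 3600"]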
Thus, in the multi-container pod model, MySQL Router and an application node are deployed as separate containers within a single pod. This is usually because those containers are relatively tightly coupled: each application node needs a MySQL Router to connect to the IDC. In this model, the application uses "localhost" or "127.0.0.1" to connect to the IDC through the router.
Pic. 1. MySQL Router deployed in a multi-container pod
In the single-container pod model, the application containers and the MySQL Router containers run in separate pods in Kubernetes. For the application to connect to the routers, it relies on Kubernetes service discovery and load balancing, which expose the MySQL Router nodes through their DNS names.
Pic. 2. MySQL Router deployed in single-container pods
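For example, a ClusterIP Service could expose the router pods to applications by DNS name. Below is a minimal sketch, assuming the router pods carry the label app: mysql-router; 6446 and 6447 are MySQL Router's default read-write and read-only ports for the classic protocol:

# Sketch of a Service exposing MySQL Router pods by DNS name.
apiVersion: v1
kind: Service
metadata:
  name: mysql-router
  namespace: mysql-cluster
spec:
  selector:
    app: mysql-router
  ports:
  - name: rw
    port: 6446       # MySQL Router read-write port
  - name: ro
    port: 6447       # MySQL Router read-only port

Applications would then connect to mysql-router.mysql-cluster.svc.cluster.local:6446 instead of 127.0.0.1.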
Suitable Controllers
If the multi-container pod model is used, MySQL Router is deployed together with the application as either a StatefulSet, a Deployment, or a DaemonSet. In the single-container pod model, I recommend using Deployments or DaemonSets to deploy MySQL Router on the Kubernetes cluster as a stateless workload.
When using a Deployment, the total number of MySQL Router pods scheduled across the Kubernetes worker nodes is specified by the .spec.replicas field in the manifest. The .spec.selector.matchLabels field in the Deployment's manifest tells the controller which pods it manages; to restrict the pods to worker nodes with matching labels, a nodeSelector is set in the pod template, as in the sketch below.
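Here is a fragment showing how those fields relate; the nodeSelector label (disktype: ssd) is an assumed example, not something required by MySQL Router:

# Fragment of a Deployment manifest (label values are illustrative).
spec:
  replicas: 2                  # total number of MySQL Router pods
  selector:
    matchLabels:
      app: mysql-router        # must match the pod template labels below
  template:
    metadata:
      labels:
        app: mysql-router
    spec:
      nodeSelector:
        disktype: ssd          # assumed node label; restricts scheduling
      containers:
      - name: mysqlrouter
        image: mysql/mysql-router:latest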
When using a DaemonSet, a MySQL Router pod runs on every Kubernetes worker node. If a worker node is added or removed, the DaemonSet automatically adds or deletes the pod. If a nodeSelector is set in the pod template, the MySQL Router pods will run only on worker nodes with matching labels.
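A minimal DaemonSet sketch for MySQL Router might look like the following; the environment variables needed for bootstrap are omitted here for brevity (see the full Deployment example later in this article):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: mysql-router
  namespace: mysql-cluster
spec:
  selector:
    matchLabels:
      app: mysql-router
  template:
    metadata:
      labels:
        app: mysql-router
    spec:
      # nodeSelector:          # uncomment to limit to labeled nodes
      #   role: app-tier       # assumed example label
      containers:
      - name: mysqlrouter
        image: mysql/mysql-router:latest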
In this article, I give an example of how to deploy MySQL Router on Kubernetes using the multi-container pod model with a Deployment.
Let's begin...
First, we need to start Minikube to bring up the InnoDB Cluster. I explained this in my previous blog post.
goldfish.local:~/go $ minikube start
🎉  minikube 1.11.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.11.0
💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'
🙄  minikube v1.5.2 on Darwin 10.14.4
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄  Starting existing virtualbox VM for "minikube" ...
⌛  Waiting for the host to be provisioned ...
🐳  Preparing Kubernetes v1.16.2 on Docker '18.09.9' ...
🔄  Relaunching Kubernetes using kubeadm ...
⌛  Waiting for: apiserver
🏄  Done! kubectl is now configured to use "minikube"
goldfish.local:~/go $
Then, we check the MySQL database pods created previously.
goldfish.local:~/go $ kubectl -n mysql-cluster get pod
NAME               READY   STATUS    RESTARTS   AGE
innodb-cluster-0   1/1     Running   0          29h
innodb-cluster-1   1/1     Running   0          29h
innodb-cluster-2   1/1     Running   0          29h
goldfish.local:~/go $
We check whether the IDC's Group Replication is running using the following command.
goldfish.local:~/go $ kubectl -n mysql-cluster exec -it innodb-cluster-0 -- mysqlsh root:root@localhost:3306 -- cluster status
WARNING: Using a password on the command line interface can be insecure.
{
    "clusterName": "myCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "innodb-cluster-1:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "innodb-cluster-0:3306": {
                "address": "innodb-cluster-0:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.19"
            },
            "innodb-cluster-1:3306": {
                "address": "innodb-cluster-1:3306",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.19"
            },
            "innodb-cluster-2:3306": {
                "address": "innodb-cluster-2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.19"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "innodb-cluster-1:3306"
}
goldfish.local:~/go $
Perfect! Now, let's deploy MySQL Router using the multi-container pod model. In this article, two WordPress application nodes will run together with MySQL Router within their pods. WORDPRESS_DB_HOST is set to "127.0.0.1" with port 6446 to connect to the local MySQL Router, while in the router's definition, MYSQL_HOST points to one of the IDC nodes (here innodb-cluster-0) for the router's initial bootstrap process.
Below is the sample YAML.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-router
  namespace: mysql-cluster
  labels:
    app: mysql-router
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mysql-router
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql-router
    spec:
      containers:
      - name: mysqlrouter
        image: mysql/mysql-router:latest
        env:
        - name: MYSQL_PASSWORD
          value: root
        - name: MYSQL_USER
          value: root
        - name: MYSQL_PORT
          value: "3306"
        - name: MYSQL_HOST
          value: innodb-cluster-0
        - name: MYSQL_INNODB_NUM_MEMBERS
          value: "3"
        command:
        - "/bin/bash"
        - "-cx"
        - "exec /run.sh mysqlrouter"
      - name: wordpress
        image: wordpress:4.9-php7.2-apache
        env:
        - name: WORDPRESS_DB_HOST
          value: 127.0.0.1:6446
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-root-password
              key: password
        ports:
        - containerPort: 80
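Note that this manifest assumes a Secret named mysql-root-password already exists in the mysql-cluster namespace (it was created in my previous post). If it does not, you can create one like this; the password value here is just an example matching the root credentials above:

goldfish.local:~/go $ kubectl -n mysql-cluster create secret generic mysql-root-password --from-literal=password=root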
Let's apply this YAML.
goldfish.local:~/go $ kubectl apply -f router-apps2.yaml
deployment.apps/mysql-router created
goldfish.local:~/go $
Check that the two mysql-router pods are created successfully.
goldfish.local:~/go $ kubectl -n mysql-cluster get pod
NAME                           READY   STATUS    RESTARTS   AGE
innodb-cluster-0               1/1     Running   0          29h
innodb-cluster-1               1/1     Running   0          29h
innodb-cluster-2               1/1     Running   0          29h
mysql-router-5698f85cc-4nz42   2/2     Running   0          45s
mysql-router-5698f85cc-5q65f   2/2     Running   0          45s
goldfish.local:~/go/webinar/03/router $
Finally! In less than 3 minutes, we have two additional pods, each running a local MySQL Router and the WordPress application.
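To actually reach WordPress from outside the cluster, the pods still need to be exposed. A minimal sketch using a NodePort Service follows; the Service name is an assumption, and the selector reuses the app: mysql-router label that the Deployment above puts on its pods:

# Sketch of a NodePort Service reaching the wordpress container (port 80)
# inside the mysql-router Deployment's pods.
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  namespace: mysql-cluster
spec:
  type: NodePort
  selector:
    app: mysql-router
  ports:
  - port: 80
    targetPort: 80

With Minikube, running "minikube service wordpress -n mysql-cluster --url" would then print the URL to open WordPress in a browser.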
Disclaimer:
The methods and tricks presented here are experimental only, and it is your own responsibility to test and implement them, and to provide support in case of issues. This is only an example and not a production deployment, as it is not a formally supported configuration. I encourage more testing to be done, including by the development team.