Saturday, January 23, 2021

Oracle Container Engine for Kubernetes (OKE) for MySQL InnoDB Cluster

Looking for a fully-managed, scalable, and highly available Kubernetes service on a public cloud? Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) lets you deploy your containerized applications to the cloud when your development team wants to reliably build, deploy, and manage cloud-native applications. If you want to deploy a MySQL database on OKE as part of a microservices workload, so that both database and application run uniformly on a containerized platform with automated deployment, scaling, and management across clusters of hosts, then this article is for you.

We will discuss how to deploy a highly available InnoDB Cluster using MySQL Enterprise Edition and MySQL Routers on OKE, as follows:

[Diagram: InnoDB Cluster with MySQL Routers deployed on OKE]
A. Preparation

First, we need to install Docker Desktop on our workstation. We will need it to push the MySQL Enterprise Edition docker image to the OKE Container Registry. Use the following URL as a guide for your Operating System: https://docs.docker.com/get-docker/

Second, we need to obtain the MySQL Enterprise Edition docker image from the Oracle website (https://edelivery.oracle.com). Search for "MySQL server", pick "Linux x86 (64bit)", then scroll down to find "MySQL Commercial Server docker image TAR".

Third, load the MySQL docker image into Docker on our workstation. Follow the instructions here: https://dev.mysql.com/doc/mysql-installation-excerpt/8.0/en/docker-mysql-getting-started.html. Issue the command "docker images" to check that the MySQL docker image is properly loaded, and run "docker pull mysql/mysql-router" to pull the latest MySQL Router docker image from Docker Hub.

Fourth, follow the steps at this URL to push your MySQL Enterprise Edition docker image and MySQL Router docker image from your local docker registry to the OKE Container Registry: https://www.oracle.com/webfolder/technetwork/tutorials/obe/oci/registry/index.html
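The tutorial boils down to tagging each local image with its OCIR path and pushing it. A minimal sketch, printed as a dry run so you can inspect the commands first; the region key, tenancy namespace, repository names, and local image tags below are placeholder assumptions, so substitute your own values and remove the leading "echo" to execute:

```shell
# Assumed values for illustration only -- replace with your own.
REGION_KEY=iad                 # e.g. "iad" for us-ashburn-1
TENANCY_NS=mytenancy           # your tenancy namespace
TARGET_SERVER="$REGION_KEY.ocir.io/$TENANCY_NS/mysql-server:8.0"
TARGET_ROUTER="$REGION_KEY.ocir.io/$TENANCY_NS/mysql-router:8.0"

# Dry run: remove "echo" to execute for real.
# docker login username is <tenancy-namespace>/<oci-username>; password is your Auth Token.
echo docker login "$REGION_KEY.ocir.io"
echo docker tag mysql/enterprise-server:8.0 "$TARGET_SERVER"
echo docker push "$TARGET_SERVER"
echo docker tag mysql/mysql-router:latest "$TARGET_ROUTER"
echo docker push "$TARGET_ROUTER"
```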

Fifth, provision a Kubernetes cluster on OKE with 3 worker nodes running on a private subnet across 3 Availability Domains. Follow these instructions on how to easily set up an OKE instance in your tenancy: https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengcreatingclusterusingoke.htm. Then set up cluster access (https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengdownloadkubeconfigfile.htm) and access the cluster using kubectl (https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengaccessingclusterkubectl.htm). For simplicity, use Cloud Shell.

Yay, we have completed the installation of a Kubernetes cluster on OKE, and all images necessary to run an InnoDB Cluster are already in the OKE Container Registry. Now it's time to deploy the InnoDB Cluster and MySQL Routers as depicted in the above diagram.

The remaining steps will be performed using Cloud Shell.

 

B. Setting Up the Environment

Open your Cloud Shell and follow the instructions at https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengaccessingclusterkubectl.htm to get access to the Kubernetes cluster. Create the namespace "mysql-cluster" as follows:

 $ kubectl create ns mysql-cluster

Create a secret that sets the MySQL "root" password to "root", using the following secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: mysql-root-password
  namespace: mysql-cluster
type: Opaque
data:
  password: cm9vdA==

Apply secret.yaml as follows:

$ kubectl apply -f secret.yaml
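The data value cm9vdA== is simply the Base64 encoding of the string root (data values in a Kubernetes Secret must be Base64-encoded). You can generate or verify it in Cloud Shell:

```shell
# Encode the desired password for use in secret.yaml
printf 'root' | base64          # prints: cm9vdA==

# Decode it back to double-check
printf 'cm9vdA==' | base64 -d   # prints: root
```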

Create another secret for logging in to the OKE Container Registry when pulling images from it. Execute the following command from Cloud Shell:

kubectl -n mysql-cluster create secret docker-registry ocirsecret --docker-server=<region-key>.ocir.io --docker-username='<tenancy-namespace>/<oci-username>' --docker-password='<your-password>' --docker-email='<your-email>'

Check your <region-key> from this URL: https://docs.oracle.com/en-us/iaas/Content/General/Concepts/regions.htm

Check your <tenancy-namespace> from the OKE Container Registry (click "Developer Services" on the OCI dashboard, then "Container Registry"; see the name below the "Create Repository" button).

Check your <oci-username> from the OCI dashboard: click the icon at the top right of the page to see your user profile. Your <oci-username> will appear there and will usually contain your email address.

<your-password> is the Auth Token generated the first time you performed the fourth step in section A.


C. InnoDB Cluster Deployment    

Now we are ready to deploy our first InnoDB Cluster on OKE using kubectl. Use the following YAML to deploy a StatefulSet with 3 replicas running 3 MySQL Enterprise Edition database pods. Please replace <region-key> and <tenancy-namespace> to match yours! Also, replace <image-name> with the MySQL Enterprise image name in your Container Registry (click "Developer Services" on the OCI dashboard, then "Container Registry").

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: node
  namespace: mysql-cluster
spec:
  replicas: 3
  serviceName: node
  selector:
    matchLabels:
      app: node
  template:
    metadata:
      labels:
        app: node
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: "app"
                  operator: In
                  values:
                  - node
              topologyKey: "kubernetes.io/hostname"
      containers:
      - image: <region-key>.ocir.io/<tenancy-namespace>/<image-name>
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-root-password
              key: password
        volumeMounts:
        - name: data-vol
          mountPath: /var/lib/mysql
      imagePullSecrets:
      - name: ocirsecret
  volumeClaimTemplates:
  - metadata:
      name: data-vol
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

Apply the above YAML file using kubectl, and you will notice that the 3 pods run on 3 separate Kubernetes worker nodes. This is because the YAML file contains a pod anti-affinity rule, as follows:

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: "app"
            operator: In
            values:
            - node
        topologyKey: "kubernetes.io/hostname"

Check the pods using "kubectl -n mysql-cluster get pods" and ensure all 3 pods are in the Running state.

Now, we need to create 3 services, one per pod, so each pod gets a stable DNS name. Use the following YAML file (to be applied via kubectl):

---
apiVersion: v1
kind: Service
metadata:
  name: node-0
  namespace: mysql-cluster
spec:
#  type: NodePort
  ports:
  - name: db-port
    protocol: TCP
    port: 3306
  - name: gr-port
    protocol: TCP
    port: 33061
#    nodePort: 30036
  selector:
    statefulset.kubernetes.io/pod-name: node-0
---
apiVersion: v1
kind: Service
metadata:
  name: node-1
  namespace: mysql-cluster
spec:
#  type: NodePort
  ports:
  - name: db-port
    port: 3306
    protocol: TCP
  - name: gr-port
    port: 33061
    protocol: TCP
#    nodePort: 30037
  selector:
    statefulset.kubernetes.io/pod-name: node-1
---
apiVersion: v1
kind: Service
metadata:
  name: node-2
  namespace: mysql-cluster
spec:
#  type: NodePort
  ports:
  - name: db-port
    port: 3306
    protocol: TCP
  - name: gr-port
    port: 33061
    protocol: TCP
#    nodePort: 30038
  selector:
    statefulset.kubernetes.io/pod-name: node-2

Check the services using "kubectl -n mysql-cluster get svc" and ensure the services are available.

Now the system is ready for InnoDB Cluster. First, we need to run dba.configureInstance() on the 3 database instances, as follows:

$ kubectl -n mysql-cluster exec -it node-0 -- mysqlsh -- dba configure-instance { --socket=/var/lib/mysql/mysql.sock --user=root --password=root } --clusterAdmin=gradmin --clusterAdminPassword=grpass --restart=true --interactive=false

$ kubectl -n mysql-cluster exec -it node-1 -- mysqlsh -- dba configure-instance { --socket=/var/lib/mysql/mysql.sock --user=root --password=root } --clusterAdmin=gradmin --clusterAdminPassword=grpass --restart=true --interactive=false

$ kubectl -n mysql-cluster exec -it node-2 -- mysqlsh -- dba configure-instance { --socket=/var/lib/mysql/mysql.sock --user=root --password=root } --clusterAdmin=gradmin --clusterAdminPassword=grpass --restart=true --interactive=false
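The three configure-instance calls above differ only in the pod name, so they can be wrapped in a loop. A sketch, printed as a dry run so you can inspect the commands first; remove the leading "echo" to execute against your cluster:

```shell
# Print the configure-instance command for each of the three pods.
for i in 0 1 2; do
  echo kubectl -n mysql-cluster exec -it "node-$i" -- \
    mysqlsh -- dba configure-instance \
    { --socket=/var/lib/mysql/mysql.sock --user=root --password=root } \
    --clusterAdmin=gradmin --clusterAdminPassword=grpass \
    --restart=true --interactive=false
done
```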

Use pod node-0 to create the cluster:

$ kubectl -n mysql-cluster exec -it node-0 -- mysqlsh gradmin:grpass@localhost:3306 -- dba createCluster 'mycluster' --ipAllowlist='10.0.0.0/8'

Then, register pod node-1 to the cluster:

$ kubectl -n mysql-cluster exec -it node-0 -- mysqlsh gradmin:grpass@localhost:3306 -- cluster add-instance gradmin:grpass@node-1:3306 --recoveryMethod=clone --ipAllowlist='10.0.0.0/8'

Then, register pod node-2 to the cluster:  

$ kubectl -n mysql-cluster exec -it node-0 -- mysqlsh gradmin:grpass@localhost:3306 -- cluster add-instance gradmin:grpass@node-2:3306 --recoveryMethod=clone --ipAllowlist='10.0.0.0/8'

Finally, check cluster status using the following command:

$ kubectl -n mysql-cluster exec -it node-0 -- mysqlsh gradmin:grpass@localhost:3306 -- cluster status 

This command produces output similar to the following, showing that the InnoDB Cluster is successfully deployed and running with 3 nodes:

$ kubectl -n mysql-cluster exec -it node-0 -- mysqlsh gradmin:grpass@localhost:3306 -- cluster status
WARNING: Using a password on the command line interface can be insecure.
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "node-0:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "node-0:3306": {
                "address": "node-0:3306",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.22"
            },
            "node-1:3306": {
                "address": "node-1:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.22"
            },
            "node-2:3306": {
                "address": "node-2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.22"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "node-0:3306"
}

 

D. MySQL Router Deployment  

Deploy MySQL Router as a StatefulSet with 3 replicas using the following YAML (to be applied via kubectl). Please replace <region-key> and <tenancy-namespace> to match yours! Also, replace <image-name> with the MySQL Router image name in your Container Registry (click "Developer Services" on the OCI dashboard, then "Container Registry").

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: router
  namespace: mysql-cluster
spec:
  replicas: 3
  serviceName: router
  selector:
    matchLabels:
      app: router
  template:
    metadata:
      labels:
        app: router
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: "app"
                  operator: In
                  values:
                  - router
              topologyKey: "kubernetes.io/hostname"
      containers:
      - image: <region-key>.ocir.io/<tenancy-namespace>/<image-name>
        name: router
        env:
        - name: MYSQL_PASSWORD
          value: grpass
        - name: MYSQL_USER
          value: gradmin
        - name: MYSQL_PORT
          value: "3306"
        - name: MYSQL_HOST
          value: node-0
        - name: MYSQL_INNODB_NUM_MEMBERS
          value: "3"
        command:
        - "/bin/bash"
        - "-cx"
        - "exec /run.sh mysqlrouter"
      imagePullSecrets:
      - name: ocirsecret
 

E. Configure Load-Balancer   

Finally, we need to create a Load Balancer service in front of the MySQL Routers so applications can access the InnoDB Cluster through them. In OCI, once a LoadBalancer service is provisioned in OKE, a corresponding load balancer is also created automatically on OCI networking.

Use the following YAML and deploy it using kubectl:

apiVersion: v1
kind: Service
metadata:
  name: internal-router-svc
  namespace: mysql-cluster
  labels:
    app: router
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-internal: "true"

spec:
  type: LoadBalancer
  ports:
  - port: 6446
    targetPort: 6446
    name: port-rw
  - port: 6447
    targetPort: 6447
    name: port-ro
  selector:
    app: router

Please pay attention to the annotation above: it declares that this load balancer is internal, accessible only within the VCN. If this annotation is missing, your databases are exposed to the internet!

 

 

2 comments:

  1. This is an excellent guide. In your following steps

    Then, register pod node-1 to the cluster:

    $ kubectl -n mysql-cluster exec -it node-0 -- mysqlsh gradmin:grpass@localhost:3306 -- cluster add-instance gradmin:grpass@node-1:3306 --recoveryMethod=clone --ipAllowlist='10.0.0.0/8'

    how do I retrieve the existing ipAllowList in the cluster? Don't you have to add the rest node-1 and node-2 as well?

    1. Hi, thank you :)
The ipAllowlist covers the IP segment of the pods. When adding node-1 and node-2, we give the same ipAllowlist because those nodes share the same IP segment.
