Saturday, November 7, 2020

Router-less InnoDB Cluster Replication to a Standalone MySQL Replica

I am fascinated with Replication Asynchronous Connection Failover, a new feature introduced in MySQL 8.0.22. This feature allows a replication connection to fail over to a new source, chosen from a predefined source list, after the existing connection from the replica to its current source fails. More information is available from this online resource: https://dev.mysql.com/doc/refman/8.0/en/replication-functions-source-list.html#udf_asynchronous-connection-failover-add-source

This is a great feature, especially if we have InnoDB Cluster replicating to a standalone MySQL instance for Disaster Recovery purposes. Let me explain. Suppose we have a cluster consisting of 3 nodes and we want to establish asynchronous replication to a standalone instance. Since those 3 nodes are redundant, we just need to establish replication from one of the cluster nodes (as the source) to the standalone instance (as the replica). There's a problem here! If that source goes down, the replication channel cannot divert itself to any of the remaining cluster nodes. We need some sort of replication connection failover mechanism. Therefore, prior to 8.0.22, you had to deploy a MySQL Router between the cluster and the replica. See the sample below:

Thus, the replication connection isn't direct: it goes from the replica to a MySQL Router, and the router redirects the connection to a cluster node. If the replication connection uses port 6446 to connect from the replica to the router, the connection is redirected to the cluster's primary node. If port 6447 is used, the connection is redirected to one of the cluster's secondary nodes.
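For illustration, this is roughly what a pre-8.0.22 channel definition looked like when run on the replica, pointing at MySQL Router rather than at a cluster node (a sketch only; the router host and the channel name are placeholders for your own deployment):

mysqlsh root@<replica-host>:3306 --sql -e "change master to master_user='repl', master_host='<router-host>', master_port=6446, master_auto_position=1, master_password='repl' for channel 'channel1'"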

That story is getting old now! With MySQL 8.0.22, we don't need MySQL Router for this scenario, because replication connection failover is now built into the server. Thus, the replication connection can be direct between the replica and a cluster node. If the cluster node the replication connection is pointing at is no longer available, the connection automatically fails over to another source listed in the table mysql.replication_asynchronous_connection_failover. We just need to add all cluster nodes as replication sources under the same channel name. Remember to set the weight parameter on each source (1 to 100, with 100 being the highest priority) to determine which cluster node becomes the next source in case of failover.

Let's test!

 

First, let's build our InnoDB Cluster (I use my localhost to run all nodes)

1. Create 3 sandbox instances (port 3311, 3312, 3313)

mysqlsh -e "dba.deploySandboxInstance(3311)"
mysqlsh -e "dba.deploySandboxInstance(3312)"
mysqlsh -e "dba.deploySandboxInstance(3313)"

2. Configure Instance for InnoDB Cluster

mysqlsh root@localhost:3311 -- dba configure-instance { --host=127.0.0.1 --user=root --port=3311 } --clusterAdmin=gradmin --clusterAdminPassword=grpass --restart=true --interactive=false

mysqlsh root@localhost:3312 -- dba configure-instance { --host=127.0.0.1 --user=root --port=3312 } --clusterAdmin=gradmin --clusterAdminPassword=grpass --restart=true --interactive=false

mysqlsh root@localhost:3313 -- dba configure-instance { --host=127.0.0.1 --user=root --port=3313 } --clusterAdmin=gradmin --clusterAdminPassword=grpass --restart=true --interactive=false

3. Create Cluster on 3311

mysqlsh gradmin:grpass@localhost:3311 -- dba createCluster mycluster

4. Add 3312 and 3313 into cluster 

mysqlsh gradmin:grpass@localhost:3311 -- cluster add-instance gradmin:grpass@localhost:3312 --recoveryMethod=clone --interactive=false

mysqlsh gradmin:grpass@localhost:3311 -- cluster add-instance gradmin:grpass@localhost:3313 --recoveryMethod=clone --interactive=false

 5. Check cluster status

mysqlsh gradmin:grpass@localhost:3311 -- cluster status

WARNING: Using a password on the command line interface can be insecure.
{
    "clusterName": "mycluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3311",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "127.0.0.1:3311": {
                "address": "127.0.0.1:3311",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.22"
            },
            "127.0.0.1:3312": {
                "address": "127.0.0.1:3312",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.22"
            },
            "127.0.0.1:3313": {
                "address": "127.0.0.1:3313",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.22"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3311"
}

6. Create replication user and grant replication slave privilege

mysqlsh root@localhost:3311 --sql -e "create user repl@'%' identified with mysql_native_password by 'repl'; grant replication slave on *.* to repl@'%'" 

7. Get Sakila database and apply into the cluster

(Refer to this online resource: https://dev.mysql.com/doc/sakila/en/sakila-installation.html)

mysqlsh root@localhost:3311 --sql -e "source sakila-schema.sql; source sakila-data.sql" 

8.  Check database

mysqlsh root@localhost:3311 --sql -e "show databases"

Database
information_schema
mysql
mysql_innodb_cluster_metadata
performance_schema
sakila
sys

Perfect!


Second, let's deploy the replica

1. Create new instance with port 3314

mysqlsh -e "dba.deploySandboxInstance(3314)"

2. On instance 3314, create replication channel "channel1" pointing to instance 3311

mysqlsh root@localhost:3314 --sql -e "change master to master_user='repl', master_host='127.0.0.1', master_port=3311, master_auto_position=1, master_password='repl', source_connection_auto_failover=1, master_connect_retry=3, master_retry_count=3 for channel 'channel1';" 

To enable replication connection auto failover, set source_connection_auto_failover=1 when creating the replication channel.

master_connect_retry specifies the interval between connection retries, in seconds (default: 60).

master_retry_count specifies the maximum number of connection retries before the replica gives up (default: 86400).
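By the way, if you later want to turn auto failover off for this channel (or adjust the retry settings), a minimal sketch is to stop the channel, change the option, and start it again:

mysqlsh root@localhost:3314 --sql -e "stop replica for channel 'channel1'; change master to source_connection_auto_failover=0 for channel 'channel1'; start replica for channel 'channel1'"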

3. On 3314, add asynchronous replication sources (don't forget to assign priorities)

mysqlsh root@localhost:3314 --sql -e "select asynchronous_connection_failover_add_source('channel1', '127.0.0.1', 3311, '', 80)"

mysqlsh root@localhost:3314 --sql -e "select asynchronous_connection_failover_add_source('channel1', '127.0.0.1', 3312, '', 60)"

mysqlsh root@localhost:3314 --sql -e "select asynchronous_connection_failover_add_source('channel1', '127.0.0.1', 3313, '', 40)"

80, 60, and 40 are the weights (priorities) set for 3311, 3312, and 3313, respectively.
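If a node leaves the cluster for good (or you want to change its weight by removing and re-adding it), the companion UDF asynchronous_connection_failover_delete_source() removes an entry from the source list. A sketch, using the same channel, to drop 3313 from the list:

mysqlsh root@localhost:3314 --sql -e "select asynchronous_connection_failover_delete_source('channel1', '127.0.0.1', 3313, '')"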

4. On 3314, check table mysql.replication_asynchronous_connection_failover

mysqlsh root@localhost:3314 --sql -e "select * from mysql.replication_asynchronous_connection_failover"

Channel_name    Host         Port    Network_namespace    Weight
channel1        127.0.0.1    3311                         80
channel1        127.0.0.1    3312                         60
channel1        127.0.0.1    3313                         40

5. On 3314, check table performance_schema.replication_connection_configuration

mysqlsh root@localhost:3314 --sql -e "select * from performance_schema.replication_connection_configuration \G"

*************************** 1. row ***************************
                   CHANNEL_NAME: channel1
                           HOST: 127.0.0.1
                           PORT: 3311
                           USER: repl
              NETWORK_INTERFACE:
                  AUTO_POSITION: 1
                    SSL_ALLOWED: NO
                    SSL_CA_FILE:
                    SSL_CA_PATH:
                SSL_CERTIFICATE:
                     SSL_CIPHER:
                        SSL_KEY:
  SSL_VERIFY_SERVER_CERTIFICATE: NO
                   SSL_CRL_FILE:
                   SSL_CRL_PATH:
      CONNECTION_RETRY_INTERVAL: 3
         CONNECTION_RETRY_COUNT: 3

             HEARTBEAT_INTERVAL: 30
                    TLS_VERSION:
                PUBLIC_KEY_PATH:
                 GET_PUBLIC_KEY: NO
              NETWORK_NAMESPACE:
          COMPRESSION_ALGORITHM: uncompressed
         ZSTD_COMPRESSION_LEVEL: 3
               TLS_CIPHERSUITES: NULL
SOURCE_CONNECTION_AUTO_FAILOVER: 1

6. On 3314, start replication channel "channel1" 

mysqlsh root@localhost:3314 --sql -e "start replica for channel 'channel1'"

7. On 3314, show replication status

mysqlsh root@localhost:3314 --sql -e "show replica status for channel 'channel1' \G"

*************************** 1. row ***************************
             Replica_IO_State: Waiting for master to send event
                  Source_Host: 127.0.0.1
                  Source_User: repl
                  Source_Port: 3311
                Connect_Retry: 3
              Source_Log_File: goldfish-bin.000002
          Read_Source_Log_Pos: 1421272
               Relay_Log_File: goldfish-relay-bin-channel1.000002
                Relay_Log_Pos: 1421493
        Relay_Source_Log_File: goldfish-bin.000002
           Replica_IO_Running: Yes
          Replica_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Source_Log_Pos: 1421272
              Relay_Log_Space: 1421714
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Source_SSL_Allowed: No
           Source_SSL_CA_File:
           Source_SSL_CA_Path:
              Source_SSL_Cert:
            Source_SSL_Cipher:
               Source_SSL_Key:
        Seconds_Behind_Source: 0
Source_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Source_Server_Id: 1799819884
                  Source_UUID: 4dd5835a-20d8-11eb-86b9-1d4d2aeb1dcf
             Source_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
    Replica_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Source_Retry_Count: 3
                  Source_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Source_SSL_Crl:
           Source_SSL_Crlpath:
           Retrieved_Gtid_Set: f64ff858-20d8-11eb-a555-48672ca9cbfe:1-134
            Executed_Gtid_Set: 476154d8-20db-11eb-b263-952b17f380a3:1-3,
f64ff858-20d8-11eb-a555-48672ca9cbfe:1-134
                Auto_Position: 1
         Replicate_Rewrite_DB:
                 Channel_Name: channel1
           Source_TLS_Version:
       Source_public_key_path:
        Get_Source_public_key: 0
            Network_Namespace:

8. On 3314, show databases

mysqlsh root@localhost:3314 --sql -e "show databases"

Database
information_schema
mysql
mysql_innodb_cluster_metadata
performance_schema
sakila
sys

Perfect! We can see the sakila database.
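As an extra sanity check, we could also compare row counts between the source and the replica, for example on one of the sakila tables:

mysqlsh root@localhost:3311 --sql -e "select count(*) from sakila.film"

mysqlsh root@localhost:3314 --sql -e "select count(*) from sakila.film"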

 

Third, test the replication connection auto failover.

1. On 3311, create dummy table

mysqlsh root@localhost:3311 --sql -e "create table sakila.test (i int primary key)"

2. Shut down 3311 (the cluster primary fails over from 3311 to 3312)

mysqladmin -uroot -h127.0.0.1 -P3311 shutdown
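To see what happened on the cluster side, check the cluster status from one of the surviving nodes; 3311 should now be reported as (MISSING) and 3312 should be the new R/W primary:

mysqlsh gradmin:grpass@localhost:3312 -- cluster status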

3.  On 3312, insert the following records

mysqlsh root@localhost:3312 --sql -e "insert into sakila.test values (1), (2), (3);"

4. On 3314, check table sakila.test

mysqlsh root@localhost:3314 --sql -e "select * from sakila.test"

1
2
3

Records are available on 3314!

5. Now, on 3314, check the replication status 

mysqlsh root@localhost:3314 --sql -e "show replica status for channel 'channel1' \G"

*************************** 1. row ***************************
             Replica_IO_State: Waiting for master to send event
                  Source_Host: 127.0.0.1
                  Source_User: repl
                  Source_Port: 3312
                Connect_Retry: 3
              Source_Log_File: goldfish-bin.000002
          Read_Source_Log_Pos: 1397228
               Relay_Log_File: goldfish-relay-bin-channel1.000003
                Relay_Log_Pos: 762
        Relay_Source_Log_File: goldfish-bin.000002
           Replica_IO_Running: Yes
          Replica_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Source_Log_Pos: 1397228
              Relay_Log_Space: 1422539
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Source_SSL_Allowed: No
           Source_SSL_CA_File:
           Source_SSL_CA_Path:
              Source_SSL_Cert:
            Source_SSL_Cipher:
               Source_SSL_Key:
        Seconds_Behind_Source: 0
Source_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Source_Server_Id: 3952651428
                  Source_UUID: 789a1236-20d8-11eb-b61c-7919120f1745
             Source_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
    Replica_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Source_Retry_Count: 3
                  Source_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Source_SSL_Crl:
           Source_SSL_Crlpath:
           Retrieved_Gtid_Set: f64ff858-20d8-11eb-a555-48672ca9cbfe:1-135:1000043
            Executed_Gtid_Set: 476154d8-20db-11eb-b263-952b17f380a3:1-3,
f64ff858-20d8-11eb-a555-48672ca9cbfe:1-135:1000043
                Auto_Position: 1
         Replicate_Rewrite_DB:
                 Channel_Name: channel1
           Source_TLS_Version:
       Source_public_key_path:
        Get_Source_public_key: 0
            Network_Namespace: 

Great! The replication connection automatically failed over from 3311 to 3312.
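To wrap up the test, we can bring 3311 back up and confirm it rejoins the group; it should rejoin automatically, and if it doesn't, cluster.rejoinInstance() from the Shell will do it:

mysqlsh -e "dba.startSandboxInstance(3311)"

mysqlsh gradmin:grpass@localhost:3312 -- cluster status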


Indeed, MySQL 8.0.22 offers a great feature that helps simplify asynchronous replication from MySQL InnoDB Cluster and Group Replication. Without MySQL Router, the architecture is simpler and easier to manage.

Upgrade now and enjoy this new feature!

 

Saturday, August 15, 2020

Containerizing MySQL Enterprise Monitor (MEM) on Kubernetes

A DBA carries a big responsibility: keeping the organization's mission-critical databases up and running and meeting all the SLAs assigned to them, such as availability, recovery, security, and performance. Maintaining mission-critical databases without proper tools providing monitoring data is a difficult and risky job. In real life, it is very important for DBAs to be equipped with good database monitoring tools so they can quickly identify and fix potential problems before those problems impact operations.

MySQL Enterprise Monitor (MEM) is a comprehensive, centralized, real-time MySQL database monitoring tool that covers the complete range of monitoring aspects, from availability to performance. It can monitor all MySQL database instances in the environment and presents the information in a web-based interface that visually displays a global and detailed view of all monitored databases, including MySQL clusters (NDB Cluster and InnoDB Cluster), replication, backups, all security features of MySQL Enterprise Edition, and more. More information about MEM can be found here: https://dev.mysql.com/doc/mysql-monitor/8.0/en/

 

This article discusses how to run MEM on Kubernetes to closely monitor all MySQL Enterprise Edition databases running on the same Kubernetes cluster. We will start by creating a Docker image from scratch. (Disclaimer: please note this is an unofficial, DIY way to create a Docker image for MEM. If you require MEM to run in a container, you can follow the steps below and test them, but support is on your own in case of issues.)

 

1. Obtain the MySQL Enterprise Monitor 8.0.21 binary

Download mysql-enterprise-monitor-8.0.21 for Linux x86_64 from edelivery.oracle.com (file name: V999555-01.zip). Unzip V999555-01.zip to get the installation file: mysqlmonitor-8.0.21.1248-linux-x86_64-installer.bin
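In shell terms, assuming V999555-01.zip was downloaded into the directory we will build the image from:

$ unzip V999555-01.zip
$ ls -lh mysqlmonitor-8.0.21.1248-linux-x86_64-installer.bin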

  

2. Prepare and run docker build to build the image


Create a Dockerfile in the same directory:

FROM oraclelinux:7-slim

RUN groupadd mysql
RUN useradd -g mysql mysql

RUN mkdir /mem

COPY mysqlmonitor-8.0.21.1248-linux-x86_64-installer.bin docker-entrypoint.sh /mem/
WORKDIR /mem

RUN chmod u+x /mem/docker-entrypoint.sh
RUN yum install -y libaio
RUN yum install -y numactl-libs
RUN yum install -y hostname
RUN yum install -y procps-ng

ENTRYPOINT ["/mem/docker-entrypoint.sh"]

Create a docker-entrypoint.sh file in the same directory:

#! /bin/bash

if [ "$ADMINUSER" = "" ]; then
   ADMINUSER="admin"
fi

if [ "$ADMINPASSWORD" = "" ]; then
   ADMINPASSWORD="admin"
fi;

if [ "$SYSTEMSIZE" = "" ]; then
   SYSTEMSIZE="small"
fi;

echo "Admin user is set to $ADMINUSER"
echo "Admin password is set to $ADMINPASSWORD"
echo "System size is set to $SYSTEMSIZE"

if [ `ls -l /opt/mysql/enterprise/monitor | grep total | awk '{print $2}'` -eq 0 ]; then

       echo "Install new Service Manager"
       /mem/mysqlmonitor-8.0.21.1248-linux-x86_64-installer.bin --mode unattended --system_size $SYSTEMSIZE --adminuser $ADMINUSER --adminpassword $ADMINPASSWORD --mysql_installation_type bundled

       /opt/mysql/enterprise/monitor/mysqlmonitorctl.sh status

fi;

while true; do
       echo "Checking if MySQL is Up"
       if [ `/opt/mysql/enterprise/monitor/mysqlmonitorctl.sh status mysql | grep "not running" | wc -l` -gt 0 ]; then
             echo "MySQL is not running, starting up"
             /opt/mysql/enterprise/monitor/mysqlmonitorctl.sh start mysql
       else
             echo "MySQL is already running"
       fi

       echo "Checking if Tomcat is Up"
       if [ `/opt/mysql/enterprise/monitor/mysqlmonitorctl.sh status tomcat | grep "not running" | wc -l` -gt 0 ]; then
             echo "Tomcat is not running, starting up"
             /opt/mysql/enterprise/monitor/mysqlmonitorctl.sh start tomcat
       else
             echo "Tomcat is already running"
       fi

       /opt/mysql/enterprise/monitor/mysqlmonitorctl.sh status
       sleep 15;
done;

Check our installed docker images: 

$ docker images
REPOSITORY                                                       TAG                 IMAGE ID            CREATED             SIZE
mysql/enterprise-server                                          8.0                 106673d79487        8 weeks ago         452MB
mysql/mysql-cluster                                              latest              ca2cf722118d        3 months ago        415MB
k8s.gcr.io/kube-proxy                                            v1.16.2             8454cbe08dc9        10 months ago       86.1MB
k8s.gcr.io/kube-apiserver                                        v1.16.2             c2c9a0406787        10 months ago       217MB
k8s.gcr.io/kube-controller-manager                               v1.16.2             6e4bffa46d70        10 months ago       163MB
k8s.gcr.io/kube-scheduler                                        v1.16.2             ebac1ae204a2        10 months ago       87.3MB
quay.io/kubernetes-ingress-controller/nginx-ingress-controller   0.26.1              29024c9c6e70        10 months ago       483MB
k8s.gcr.io/etcd                                                  3.3.15-0            b2756210eeab        11 months ago       247MB
kubernetesui/dashboard                                           v2.0.0-beta4        6802d83967b9        11 months ago       84MB
k8s.gcr.io/coredns                                               1.6.2               bf261d157914        12 months ago       44.1MB
k8s.gcr.io/kube-addon-manager                                    v9.0.2              bd12a212f9dc        12 months ago       83.1MB
kubernetesui/metrics-scraper                                     v1.0.1              709901356c11        13 months ago       40.1MB
k8s.gcr.io/kube-addon-manager                                    v9.0                119701e77cbc        19 months ago       83.1MB
k8s.gcr.io/kubernetes-dashboard-amd64                            v1.10.1             f9aed6605b81        20 months ago       122MB
k8s.gcr.io/k8s-dns-sidecar-amd64                                 1.14.13             4b2e93f0133d        23 months ago       42.9MB
k8s.gcr.io/k8s-dns-kube-dns-amd64                                1.14.13             55a3c5209c5e        23 months ago       51.2MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64                           1.14.13             6dc8ef8287d3        23 months ago       41.4MB
k8s.gcr.io/pause                                                 3.1                 da86e6ba6ca1        2 years ago         742kB
gcr.io/k8s-minikube/storage-provisioner                          v1.8.1              4689081edb10        2 years ago         80.8MB

Execute the following command to build the image: 

$ set -x

$ set -e

$ docker build -m 2g -c 4 -t private/mem:8.0.21 .

Check again our installed docker images: 

$ docker images
REPOSITORY                                                       TAG                 IMAGE ID            CREATED             SIZE
private/mem                                                      8.0.21              57ddbccb6758        39 seconds ago      1.74GB
oraclelinux                                                      7-slim              153f8d73287e        4 weeks ago         131MB
mysql/enterprise-server                                          8.0                 106673d79487        8 weeks ago         452MB
mysql/mysql-cluster                                              latest              ca2cf722118d        3 months ago        415MB
k8s.gcr.io/kube-proxy                                            v1.16.2             8454cbe08dc9        10 months ago       86.1MB
k8s.gcr.io/kube-controller-manager                               v1.16.2             6e4bffa46d70        10 months ago       163MB
k8s.gcr.io/kube-scheduler                                        v1.16.2             ebac1ae204a2        10 months ago       87.3MB
k8s.gcr.io/kube-apiserver                                        v1.16.2             c2c9a0406787        10 months ago       217MB
quay.io/kubernetes-ingress-controller/nginx-ingress-controller   0.26.1              29024c9c6e70        10 months ago       483MB
k8s.gcr.io/etcd                                                  3.3.15-0            b2756210eeab        11 months ago       247MB
kubernetesui/dashboard                                           v2.0.0-beta4        6802d83967b9        11 months ago       84MB
k8s.gcr.io/coredns                                               1.6.2               bf261d157914        12 months ago       44.1MB
k8s.gcr.io/kube-addon-manager                                    v9.0.2              bd12a212f9dc        12 months ago       83.1MB
kubernetesui/metrics-scraper                                     v1.0.1              709901356c11        13 months ago       40.1MB
k8s.gcr.io/kube-addon-manager                                    v9.0                119701e77cbc        19 months ago       83.1MB
k8s.gcr.io/kubernetes-dashboard-amd64                            v1.10.1             f9aed6605b81        20 months ago       122MB
k8s.gcr.io/k8s-dns-sidecar-amd64                                 1.14.13             4b2e93f0133d        23 months ago       42.9MB
k8s.gcr.io/k8s-dns-kube-dns-amd64                                1.14.13             55a3c5209c5e        23 months ago       51.2MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64                           1.14.13             6dc8ef8287d3        23 months ago       41.4MB
k8s.gcr.io/pause                                                 3.1                 da86e6ba6ca1        2 years ago         742kB
gcr.io/k8s-minikube/storage-provisioner                          v1.8.1              4689081edb10        2 years ago         80.8MB
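One note: on a multi-node Kubernetes cluster the nodes cannot see this locally built image, so it would need to be pushed to a registry they can pull from. A sketch, where registry.example.com is just a placeholder for your own registry:

$ docker tag private/mem:8.0.21 registry.example.com/private/mem:8.0.21
$ docker push registry.example.com/private/mem:8.0.21

The image: line in the YAML below would then reference the registry path instead.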

 

3. Prepare the YAML

Deploying MEM on Kubernetes uses the following sample YAML (mem.yaml). Please note the pod needs to run as a privileged container, or else the Tomcat service will fail to start. We will configure the repository database with the user memadmin and password manager. This MEM will serve a small number of monitored database instances (up to 5 - 10 databases). If there are more instances, consider changing the SYSTEMSIZE parameter from "small" to "medium" (up to 100 databases) or "large" (more than 100 databases).

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mem
  namespace: mysql-cluster
spec:
  replicas: 1
  serviceName: mem
  selector:
    matchLabels:
      app: mem
  template:
    metadata:
      labels:
        app: mem
    spec:
     containers:
     - image: private/mem:8.0.21
       name: mem
       env:
         - name: ADMINUSER
           value: memadmin
         - name: ADMINPASSWORD
           value: manager
         - name: SYSTEMSIZE
           value: small
       resources:
         limits:
           cpu: "2"
           memory: "4096Mi"
         requests:
           cpu: "2"
           memory: "2048Mi"
       volumeMounts:
         - name: mem-vol
           mountPath: /opt/mysql/enterprise/monitor
       securityContext:
         privileged: true
  volumeClaimTemplates:
  - metadata:
      name: mem-vol
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
---


4. Deploy MEM

 

Deploy the YAML file to Kubernetes
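The manifest targets the mysql-cluster namespace. If nothing else in your cluster has created it yet (an assumption here), create it first:

$ kubectl create namespace mysql-cluster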

 

$ kubectl apply -f mem.yaml

 

Check Pod Status

$ kubectl -n mysql-cluster get pod

NAME    READY   STATUS    RESTARTS   AGE
mem-0   1/1     Running   0          3s
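You can also follow the unattended installer's progress from the container logs:

$ kubectl -n mysql-cluster logs -f mem-0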

 

Wait about a minute to give the MEM configuration process time to complete. Then, check the status of our MEM on Kubernetes

 

$ kubectl -n mysql-cluster exec -it mem-0 -- /opt/mysql/enterprise/monitor/mysqlmonitorctl.sh status
MySQL Enterprise MySQL is running
MySQL Enterprise Tomcat is running


Both the MySQL instance and Tomcat are running!


5. Access our MEM

In this article, I simply used port-forward to test whether my MEM is running:

 

$ kubectl -n mysql-cluster port-forward pods/mem-0 18443:18443


Then, I opened my web browser and accessed https://127.0.0.1:18443

The MEM initial setup page appeared after a few minutes.

 

It's working!
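If you need something more durable than a port-forward, the MEM UI could also be exposed through a Kubernetes Service. A minimal sketch using a NodePort service (the service name and type are assumptions, adjust them to your environment):

$ kubectl -n mysql-cluster expose pod mem-0 --name=mem-ui --type=NodePort --port=18443 --target-port=18443
$ kubectl -n mysql-cluster get service mem-ui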