Thursday, April 13, 2023

MySQL Galera Cluster using Percona XtraDB Cluster

What is Percona XtraDB Cluster

Percona develops open-source database solutions based on MySQL, PostgreSQL, MongoDB, and RocksDB.

Their products stay compatible with the upstream open-source projects while adding extra features and tooling on top.

Their MySQL clustering product, Percona XtraDB Cluster, is based on Galera Cluster with additional safeguards that make it more stable and easier to manage.

MySQL Galera Cluster Introduction


MySQL Galera Cluster is a popular solution for achieving high availability in MySQL databases, as it provides synchronous multi-master replication. This means that multiple MySQL nodes can act as both master and replica simultaneously, allowing for better fault tolerance and automatic failover in case of node failures.

On the other hand, traditional MySQL replication, where one node acts as the master and the others as slaves, operates asynchronously. Any failure of the master node can result in downtime and potential data loss until one of the slaves is promoted to be the new master. Additionally, because replication is asynchronous, the replicas may be stale or inconsistent until they catch up with the new master.

Managing traditional MySQL replication during a failure is challenging: it requires manual intervention, a fast response, and accurate execution, and mistakes can impact production services and cause financial loss. In contrast, MySQL Galera Cluster provides automatic and immediate failover, with all nodes able to accept read and write operations, ensuring higher availability and data consistency.

Galera Cluster


In Galera Cluster, many of the issues associated with traditional MySQL replication are addressed out of the box. Galera Cluster uses a multi-master topology, where all nodes can actively read and write data, and all data changes are replicated synchronously to every node in the cluster. Data changes must be certified by all nodes before they are considered committed, providing strong consistency and avoiding data discrepancies.

Galera Cluster should be run with an odd number of nodes (3, 5, 7, and so on) to avoid split-brain scenarios, where two partitions of the cluster each believe they are the primary component. When a node fails or becomes unresponsive, the remaining healthy nodes use a quorum-based vote to decide which partition keeps running: the side holding the majority continues to operate as the cluster, and the failed or isolated node is evicted.
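As a quick illustration, once the cluster is running you can check its size and quorum state from any node using standard Galera status variables (shown here with the mysql command-line client):

mysql -e "SHOW STATUS WHERE Variable_name IN ('wsrep_cluster_size','wsrep_cluster_status');"

wsrep_cluster_size reports how many nodes are currently in the cluster, and wsrep_cluster_status should read Primary on any node that belongs to the partition holding quorum.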

When a broken node is fixed and rejoins the cluster, it needs to synchronize its data with the other nodes. There are two types of synchronization that can occur:

Incremental State Transfer (IST): If the failed node has missed only a small amount of data changes, it can request an IST from one of the healthy nodes in the cluster. The healthy node streams the missing data changes to the failed node, allowing it to catch up and synchronize its data quickly.

Full State Snapshot Transfer (SST): If the failed node has been down for a longer period or has lost significant data, it may need to perform a full SST. In this case, the failed node requests a complete snapshot of the database from a healthy node, and the healthy node streams the entire database to the failed node, which then applies the snapshot and catches up with the cluster.

Both IST and SST are automated processes in Galera Cluster, helping to bring the failed node back to a consistent state and ensuring data integrity in the cluster.


In more detail, State Snapshot Transfer (SST) and Incremental State Transfer (IST) work as follows.

State Snapshot Transfer (SST): When a new node joins the cluster or a failed node needs to catch up on a significant amount of data changes, it performs a full sync with one of the existing nodes in the cluster (known as the donor node). The donor node streams a complete snapshot of its data to the joining or recovering node, which then applies the snapshot and becomes available for new requests. SST is an automated process in Galera Cluster, but it requires copying all the data from the donor node to the joining or recovering node, which can take time and resources.

Incremental State Transfer (IST): IST is a faster mechanism for synchronizing data when a node has been down for a short period of time and has missed only a small amount of data changes. In this case, the recovering node requests only the incremental changes from the other nodes' cache, rather than copying the entire dataset from a donor node. This helps in faster synchronization and reduces the impact on resources. However, for IST to work effectively, it's important to configure the gcache.size parameter with an appropriate value, so that the cache has enough space to store the changes that occurred during the downtime.

Both SST and IST are important mechanisms in Galera Cluster for ensuring data consistency and integrity among the nodes in the cluster, and they provide options for efficient synchronization depending on the duration of the downtime and the amount of data changes missed.
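As a rough sketch of how to tell which mechanism was used when a node rejoined, you can grep the MySQL error log on that node and check its sync state. The log path below is an assumption (use whatever log_error points to on your system), and the exact log wording varies between Galera versions; the gcache.size setting that makes IST possible is configured later in my.cnf.

sudo grep -iE 'IST|SST' /var/log/mysql/error.log | tail -n 20
mysql -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"

Once the transfer finishes, wsrep_local_state_comment should return Synced.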

Galera Cluster has specific use cases where it can be highly effective, as well as some limitations to be aware of.


Use Cases for Galera Cluster:

1. Heavy OLTP (Online Transaction Processing) systems with frequent small reads and writes: Galera Cluster is well-suited for transactional workloads with a high volume of concurrent read and write operations, making it suitable for applications that require real-time processing and frequent database updates.

2. Web and mobile applications with thousands of active users: Galera Cluster can provide high availability and data consistency for web and mobile applications with a large number of concurrent users, ensuring smooth and reliable performance.

Galera Cluster Downsides:

1. Writes can be slightly slower: Due to the synchronous replication nature of Galera Cluster, where data changes must be certified by all nodes, there is some overhead on write performance compared to traditional master-slave replication setups.

2. Not suitable for write-heavy applications: Galera Cluster may not be the best choice for applications that constantly write huge amounts of data, such as data warehousing or analytics systems, as the synchronous replication overhead may impact performance.

3. Explicit locks are not supported: Galera Cluster does not replicate explicit table locks; LOCK TABLES and similar locks apply only to the local instance, not to remote instances. This can be a limitation for applications that rely heavily on table-level locking.

4. Primary key and InnoDB engine requirement: To ensure data consistency and integrity in Galera Cluster, every table must have a primary key defined and use the InnoDB transactional engine (see the query sketch right after this list). This requirement may necessitate changes to the database schema or application code for some applications.
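The query below is a minimal sketch for auditing an existing schema against that last requirement using information_schema (the schema filter simply excludes the built-in system databases):

mysql -e "
SELECT t.table_schema, t.table_name, t.engine
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name = t.table_name
      AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('mysql','sys','information_schema','performance_schema')
  AND (t.engine <> 'InnoDB' OR c.constraint_name IS NULL);"

Any rows returned are tables that either do not use InnoDB or are missing a primary key, and they should be fixed before moving the schema into Galera.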
Overall, Galera Cluster is a powerful solution for high availability and data consistency in certain use cases, but it's important to consider its limitations and suitability for specific application requirements before implementing it. Proper planning, configuration, and monitoring are crucial for ensuring optimal performance and reliability in a Galera Cluster environment.


Install MySQL Percona XtraDB Cluster Step by Step


The installation will be done on Ubuntu 20.04:

Update the system:
sudo apt update

Install the necessary packages:
sudo apt install -y wget gnupg2 lsb-release curl

Download the repository package:
wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb

Install the package with dpkg:
sudo dpkg -i percona-release_latest.generic_all.deb

Refresh the local cache to update the package information:
sudo apt update

Enable the release repository for Percona XtraDB Cluster:
sudo percona-release setup pxc80

Install the cluster:
sudo apt install -y percona-xtradb-cluster
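Optionally, confirm what was installed before moving on (both commands below are standard and only read client/package version information):

mysql --version
apt-cache policy percona-xtradb-cluster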

Setup Galera Cluster parameters in my.cnf in the first node:

# Cluster name
wsrep_cluster_name=xtradb-cluster

# Location of the Galera provider (extension) library
wsrep_provider=/usr/lib/galera4/libgalera_smm.so

# IPs or hostnames of the nodes that are part of the Galera Cluster
wsrep_cluster_address=gcomm://10.10.10.1,10.10.10.2,10.10.10.3

# Galera cache size, sized so recovering nodes can use incremental IST rather than full SST
wsrep_provider_options="gcache.size=2G"

# Galera Cluster only supports the ROW binlog format
binlog_format=ROW

# This node's IP
wsrep_node_address=10.10.10.1

# This node's name
wsrep_node_name=node-01

# Galera State Snapshot Transfer (SST) method, using Percona XtraBackup
wsrep_sst_method=xtrabackup-v2

# Disable data encryption between nodes (default is ON and should be used in production)
pxc_encrypt_cluster_traffic=OFF

 

Bootstrap first node 

systemctl start mysql@bootstrap.service
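Before continuing, it is worth checking that the bootstrapped service actually came up; a quick sketch using the same unit name as above:

sudo systemctl status mysql@bootstrap.service --no-pager
sudo journalctl -u mysql@bootstrap.service -n 20 --no-pager

The second command shows the most recent log lines if the service failed to start.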

 

Connect to the database and check the node and cluster status:

show status like 'wsrep%';

Check these two parameters:

+---------------------------+-----------------+
| Variable_name             | Value           |
+---------------------------+-----------------+
| wsrep_local_state_comment | Synced          |
| wsrep_incoming_addresses  | 10.10.10.1:3306 |
+---------------------------+-----------------+

Copy the Galera parameters to the other nodes, replacing wsrep_node_name and wsrep_node_address with each node's details.

Start the second node. Once it's up, connect to the database and check the cluster and node status; you should be able to see that it has joined the cluster:

show status like 'wsrep%';

+---------------------------+----------------------------------+
| Variable_name             | Value                            |
+---------------------------+----------------------------------+
| wsrep_local_state_comment | Synced                           |
| wsrep_incoming_addresses  | 10.10.10.1:3306,10.10.10.2:3306  |
+---------------------------+----------------------------------+

Once the second node is up, start the third node and check its status as well.

+---------------------------+--------------------------------------------------+
| Variable_name             | Value                                            |
+---------------------------+--------------------------------------------------+
| wsrep_local_state_comment | Synced                                           |
| wsrep_incoming_addresses  | 10.10.10.1:3306,10.10.10.2:3306,10.10.10.3:3306  |
+---------------------------+--------------------------------------------------+
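As an optional smoke test (the database and table names here are just placeholders), write on one node and read the row back from another to confirm replication end to end:

# On node-01 (10.10.10.1)
mysql -e "CREATE DATABASE IF NOT EXISTS galera_test;
          CREATE TABLE galera_test.t1 (id INT PRIMARY KEY, note VARCHAR(50)) ENGINE=InnoDB;
          INSERT INTO galera_test.t1 VALUES (1, 'written on node-01');"

# On node-02 (10.10.10.2) - the row should already be there
mysql -e "SELECT * FROM galera_test.t1;"

# Clean up
mysql -e "DROP DATABASE galera_test;"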

Done.



Steps to Setup Redis Clustering on CentOS

Redis Cluster is a built-in feature of Redis that provides automatic sharding, replication, and high availability, removing the need for a separate Sentinel setup for these purposes. Redis Cluster is designed to automatically split your dataset among multiple instances and to provide some level of availability during partitions, allowing operations to continue even when some instances (especially masters) fail or cannot communicate with the majority of nodes in the cluster.

However, Redis Cluster may stop operating in the event of larger failures, such as when the majority of master instances are unavailable. If a master and slave fail simultaneously, the cluster cannot continue normal operations, although a workaround is to add more nodes or create asymmetry in the cluster to automatically change the cluster layout.

According to the Redis Cluster documentation, a "minimal cluster" that functions as expected should have at least 3 master nodes. However, for a more suitable setup with higher availability, it is recommended to have at least 6 nodes, with three masters and three slaves, where each master has a corresponding slave.

It's important to note that Redis Cluster has some limitations, including lack of support for NATted environments or environments where IP addresses or TCP ports are remapped, such as in Docker. Additionally, not all client libraries may fully support Redis Cluster.

This article will provide a step-by-step guide on how to set up a Redis Cluster (with Cluster-Mode Enabled) in CentOS 8. It will cover the installation of Redis, configuration of the cluster nodes, creation of the cluster, and testing of cluster failover.

Note: For this guide, we will use fresh/empty Redis instances to run the cluster mode. The cluster mode will not work with some configurations done in the first two guides of our Redis Series, particularly when the replicaof parameter is being used.

 

Prerequisites:

  1. Six servers with a CentOS 8 installation

Test Environment Setup

Redis Master1: 10.42.0.247
Redis Master2: 10.42.0.197
Redis Master3: 10.42.0.132
Redis Slave1: 10.42.0.200
Redis Slave2: 10.42.0.21
Redis Slave3: 10.42.0.34

 



                                                    Redis Cluster Logical Diagram

 

Our setup has 3 read/write master nodes and 3 read-only replica nodes, with each master having one replica, so the cluster's data is split across three shards, each held by a master and its replica. An application API or CLI client can write only to the master nodes but can read from any node in the cluster.

Step 1: Installing Redis on All Nodes

1. Log into all the instances via SSH, then run the following command to install the Redis module using the DNF package manager as shown.

# dnf module install redis

2. Next, start the Redis service, enable it to automatically start at system boot and check its status to verify that it is running (verify the service on all the 6 instances):

# systemctl start redis
# systemctl enable redis
# systemctl status redis

Step 2: Configuring Redis Instances on all Nodes

3. This section describes how to configure the Redis cluster nodes. Remember to perform the configurations here on all the nodes.

Use the /etc/redis.conf configuration file to configure the Redis server. As a recommended practice, create a backup of the original file before editing it using a command-line text editor of your choice.

# cp /etc/redis.conf /etc/redis.conf.orig
# vi /etc/redis.conf

4. Next, find the following configuration parameters and edit their values as shown. The bind parameter sets the interface the Redis server will listen on; set its value to the instance's LAN IP. Remove 127.0.0.1, because leaving it there slows down cluster creation, particularly the stage where nodes join the cluster.

bind  10.42.0.247

Then set the protected-mode to no to allow connections from the other instances on the cluster.

protected-mode no

The port parameter defines the port the Redis server will listen on for client connections; the default is 6379. This is the data port for communicating with clients.

 

                                                       Set Redis Listen Interface and Port

5. The next set of parameters will enable the cluster mode and set some of its useful features. The cluster-enabled parameter, when set to yes, activates the cluster mode.

cluster-enabled yes

Next, the cluster-config-file parameter sets the name of a cluster node’s cluster configuration file (e.g nodes-6379.conf). The file is created in the working directory (default is /var/lib/redis defined using the dir parameter) and is not user editable.

cluster-config-file nodes-6379.conf

The next useful cluster option is cluster-node-timeout, it is used to set the maximum amount of time in milliseconds an instance can be unavailable for it to be considered in a failure state. A value of 15000 is equivalent to 15 seconds.

cluster-node-timeout 15000

                                                             Set Cluster Node Timeout

6. We also need to enable Redis persistence on disk. We will use one of the persistence modes, the Append Only File (AOF): it logs (in the appendonly.aof file created under the working directory) every write operation successfully received by the server. The data is replayed during server startup to reconstruct the original dataset.

To enable it, set the appendonly parameter to yes.

appendonly yes

                                         Set Persistence Options

7. After making all the changes, restart the Redis service on all the nodes to apply the recent changes.

# systemctl restart redis

8. At this point, every cluster node should now have an ID. You can check this in the logfile located at /var/log/redis/redis.log.

# cat /var/log/redis/redis.log

Check Cluster Node Log File

9. Next, open ports 6379 and 16379 on all the instances. The latter port is used for the cluster bus (a node-to-node communication channel using a binary protocol). These are basic requirements for Redis Cluster TCP connections.

# firewall-cmd --zone=public --permanent --add-port=6379/tcp 
# firewall-cmd --zone=public --permanent --add-port=16379/tcp 
# firewall-cmd --reload

Step 3: Creating the Redis Cluster

10. To create the cluster, use the redis-cli command-line client as follows. The --cluster create option enables cluster creation and --cluster-replicas 1 means one replica is created per master.

For our setup which has 6 nodes, we will have 3 masters and 3 slaves.

Note that the first three nodes listed will be considered masters (M) and the next three will be considered slaves (S). The first slave, i.e. 10.42.0.200:6379, replicates the first master, i.e. 10.42.0.247:6379; the second slave replicates the second master, and so on.

The following command is formatted in a way that the outcome will represent our logical setup above.

# redis-cli --cluster create 10.42.0.247:6379 10.42.0.197:6379 10.42.0.132:6379 \
  10.42.0.200:6379 10.42.0.21:6379 10.42.0.34:6379 --cluster-replicas 1

Create Redis Cluster

11. Once the cluster creation is successful, run the following command on any host (specify its IP address using the -h flag) to list all cluster nodes.

# redis-cli -h 10.42.0.247 -p 6379 cluster nodes

You should be able to see all the cluster nodes, with the slaves indicating their masters, as shown in the following screenshot.

Check All Cluster Nodes on Any Node

The different fields are in this order: node ID, IP address:port, flags, last ping sent, last pong received, configuration epoch, link-state, slots (for masters).
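Besides listing the nodes, a quick way to confirm overall cluster health from any member is the cluster info subcommand (shown here against the first master used above):

# redis-cli -h 10.42.0.247 -p 6379 cluster info | grep -E 'cluster_state|cluster_slots_assigned|cluster_known_nodes'

A cluster_state of ok together with cluster_slots_assigned:16384 means every hash slot is covered by a master.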

Step 4: Testing Redis Cluster Failover

12. In this section, we will demonstrate how to test a cluster failover. First, let’s take note of the masters.

# redis-cli -h 10.42.0.247 -p 6379 cluster nodes  | grep master

List Redis Cluster Masters

Also, take note of the Redis slaves.

# redis-cli -h 10.42.0.247 -p 6379 cluster nodes  | grep slave

List All Redis Cluster Slaves

13. Next, let’s stop the Redis service on one of the master nodes e.g 10.42.0.197 and check all master nodes in the cluster.

# systemctl stop redis
# redis-cli -h 10.42.0.247 -p 6379 cluster nodes | grep master

From the following screenshot, you can see that the node 10.42.0.197:6379 is in a fail state and its slave 10.42.0.21:6379 has been promoted to master status.

Check Cluster Failover Status

14. Now let’s start the Redis service once again on the failed node and check all the masters in the cluster.

# systemctl start redis
# redis-cli -h 10.42.0.247 -p 6379 cluster nodes  | grep master

Check All Redis Cluster Master Status

Also, check the cluster slaves to confirm that the failed master is now a slave.

# redis-cli -h 10.42.0.247 -p 6379 cluster nodes  | grep slave

Check All Redis Cluster Slaves

Step 5: Testing Data Replication Across the Redis Cluster

15. This last section explains how to verify cluster data replication. We will create a key and value on one of the masters, then try to read it from all the cluster nodes as follows. Use the -c switch to enable cluster support in the redis-cli utility and access data in cluster mode.

# redis-cli -c -h 10.42.0.247 -p 6379 set name 'TecMint.com'
# redis-cli -c -h 10.42.0.247 -p 6379 get name
# redis-cli -c -h 10.42.0.21 -p 6379 get name
# redis-cli -c -h 10.42.0.132 -p 6379 get name
# redis-cli -c -h 10.42.0.200 -p 6379 get name
# redis-cli -c -h 10.42.0.197 -p 6379 get name
# redis-cli -c -h 10.42.0.34 -p 6379 get name

Verify Redis Cluster Data Replication.


 Redis Cluster is a preferred solution for achieving automatic sharding, replication, and high availability in Redis. It provides a built-in mechanism for distributing data across multiple instances, ensuring data redundancy and fault tolerance.

In addition to the basic configuration settings covered in this article, there are many other configuration parameters available in the /etc/redis.conf file that can be used to fine-tune the performance, security, and behavior of Redis Cluster. These parameters are well-documented in the Redis documentation, and understanding their usage can help optimize the performance and reliability of your Redis Cluster deployment.

Some of the commonly used configuration parameters in Redis Cluster include settings related to network interfaces, ports, timeouts, authentication, memory management, persistence, and replication. By carefully configuring these parameters based on your specific requirements and use case, you can customize the behavior of Redis Cluster to meet the needs of your application.
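For instance, many of these settings can be inspected, and some adjusted, at runtime with CONFIG GET / CONFIG SET rather than by editing /etc/redis.conf; the parameters below are standard Redis options and are shown only as a sketch:

# redis-cli -h 10.42.0.247 -p 6379 config get maxmemory
# redis-cli -h 10.42.0.247 -p 6379 config get cluster-node-timeout
# redis-cli -h 10.42.0.247 -p 6379 config set cluster-node-timeout 10000

Remember that runtime changes do not survive a restart unless you persist them in /etc/redis.conf (or run CONFIG REWRITE).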

It's important to thoroughly review the Redis documentation and understand the implications of each configuration parameter before making any changes to the /etc/redis.conf file. Incorrect configuration settings can impact the performance, stability, and security of your Redis Cluster. Therefore, it's recommended to test any configuration changes in a controlled environment before applying them to a production Redis Cluster.

In conclusion, Redis Cluster is a powerful and reliable solution for achieving automatic sharding, replication, and high availability in Redis. By leveraging the various configuration parameters available in the /etc/redis.conf file, you can fine-tune the behavior of Redis Cluster to meet the specific requirements of your application and ensure optimal performance.


 

 


How to Install and Set Up MySQL InnoDB Cluster on Ubuntu

MySQL InnoDB Cluster offers a comprehensive high-availability solution for MySQL databases. It utilizes the AdminAPI, which is included with MySQL Shell, to easily configure and manage a group of at least three MySQL server instances that function as an InnoDB Cluster. Each MySQL server instance runs MySQL Group Replication, which handles data replication within the InnoDB Cluster and provides built-in failover capabilities. With AdminAPI, you don't need to work directly with Group Replication in InnoDB Clusters, making it user-friendly. In case of an unexpected server failure, the cluster automatically reconfigures itself. In the default single-primary mode, the InnoDB Cluster has a single read-write server instance, known as the primary, with multiple secondary server instances acting as replicas of the primary. If the primary fails, a secondary server instance is automatically promoted to the role of primary to ensure continuous operation.

Furthermore, MySQL Router can be configured to automatically adjust itself based on the deployed cluster, transparently connecting client applications to the appropriate server instances. In case of a server failure, MySQL Router detects the change and redirects client applications to the new primary server instance, ensuring uninterrupted access to the database.
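As a rough sketch of what that looks like in practice, once the cluster has been created (Steps 3-5 below) MySQL Router can be bootstrapped against any cluster member; --bootstrap is the documented way to generate a Router configuration from the cluster metadata, while the account, host, and service name here are illustrative and assume Router was installed from the official package:

sudo mysqlrouter --bootstrap root@test-machine01:3306 --user=mysqlrouter
sudo systemctl start mysqlrouter

Applications then connect to the Router ports (by default 6446 for read-write and 6447 for read-only traffic) instead of a specific MySQL instance.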

In conclusion, MySQL InnoDB Cluster, along with MySQL Router, provides a robust and reliable high availability solution for MySQL databases, ensuring seamless failover and continuous operation in the event of unexpected server failures. The AdminAPI simplifies the configuration and management of InnoDB Clusters, making it an efficient and user-friendly solution for businesses requiring high availability and data replication in their MySQL database environment.




For this setup, we installed the 3 MySQL instances listed below:


+----------------+-----------------+------+--------------------+
| Host           | IP Address      | Port | Role               |
+----------------+-----------------+------+--------------------+
| test-machine01 | 192.168.114.177 | 3306 | Member-1 (Master)  |
| test-machine01 | 192.168.114.177 | 3307 | Member-2 (Replica) |
| test-machine02 | 192.168.114.176 | 3306 | Member-3 (Replica) |
+----------------+-----------------+------+--------------------+


Instance-1 Configurations

Log in to MySQL and confirm each instance's identity, then review its configuration file:

mysql> prompt Instance-1 >
PROMPT set to 'Instance-1 >'
Instance-1 >select @@server_id;
+-------------+
| @@server_id |
+-------------+
|           1 |
+-------------+
1 row in set (0.07 sec)

Instance-1 >select @@server_uuid;
+--------------------------------------+
| @@server_uuid                        |
+--------------------------------------+
| 13ecba9c-444a-11eb-a397-000c29f9d9e6 |
+--------------------------------------+
1 row in set (0.00 sec)

Instance-1 >select @@port;
+--------+
| @@port |
+--------+
|   3306 |
+--------+
1 row in set (0.07 sec)

[mysqld@inst1]
#General
user = mysql
port = 3306
server_id = 1
socket=/var/lib/mysql/mysql.sock
pid_file=/var/run/mysqld/mysqld.pid

# Data Storage
datadir=/var/lib/mysql
innodb_directories="/u01/mysql/"

#Logging
log_bin = /var/lib/mysql/binlog
log_error = /var/lib/mysql/mysqld.log
expire_logs_days = 7
relay_log = /var/lib/mysql/relay_bin01
relay_log_index = /var/lib/mysql/relay_bin.index
relay_log_recovery = on
master_info_repository = TABLE
relay_log_info_repository = TABLE

# GTID Based Replication Parameter
gtid-mode=on
enforce-gtid-consistency=1
log_slave_updates= 1

Instance-2 Configurations

mysql> prompt Instance-2>
PROMPT set to 'Instance-2>'
Instance-2>select @@server_id;
+-------------+
| @@server_id |
+-------------+
|           3 |
+-------------+
1 row in set (0.06 sec)

Instance-2>select @@server_uuid;
+--------------------------------------+
| @@server_uuid                        |
+--------------------------------------+
| 98411332-6aad-11eb-809a-000c29f9d9e6 |
+--------------------------------------+
1 row in set (0.00 sec)

Instance-2>select @@port;
+--------+
| @@port |
+--------+
|   3307 |
+--------+
1 row in set (0.02 sec)

[mysqld@inst2]
#General
user = mysql
port=3307
server_id = 3
socket=/u01/mysql-2/mysql.sock
pid_file=/u01/mysql-2/mysqld.pid

# Data Storage
basedir=/u01/mysql-2
datadir=/u01/mysql-2/data
innodb_directories="/u01/mysql/"
plugin_dir=/usr/lib64/mysql/plugin

#Logging
log_bin = /u01/mysql-2/data/binlog
log_error = /u01/mysql-2/data/mysqld.log
expire_logs_days = 7
relay_log = /u01/mysql-2/data/relay_bin01
relay_log_index = /u01/mysql-2/data/relay_bin.index
relay_log_recovery = on
master_info_repository = TABLE
relay_log_info_repository = TABLE

# GTID Based Replication Parameter
gtid-mode=on
enforce-gtid-consistency=1
log_slave_updates= 1

Instance-3 Configurations

mysql> prompt Instance-3 >
PROMPT set to 'Instance-3 >'
Instance-3 >select @@server_id;
+-------------+
| @@server_id |
+-------------+
|           2 |
+-------------+
1 row in set (0.00 sec)

Instance-3 >select @@server_uuid;
+--------------------------------------+
| @@server_uuid                        |
+--------------------------------------+
| 13c84508-5014-11eb-af41-000c2997dedd |
+--------------------------------------+
1 row in set (0.00 sec)

Instance-3 >select @@port;
+--------+
| @@port |
+--------+
|   3306 |
+--------+
1 row in set (0.08 sec)

[mysqld@inst1]
#General
user = mysql
port = 3306
server_id = 2
bind-address = 0.0.0.0
socket=/u01/mysql-8/mysql.sock
pid_file=/u01/mysql-8/mysqld.pid

# Data Storage
basedir=/u01/mysql-8
datadir=/u01/mysql-8/data
innodb_directories="/u01/mysql-8;/u01/mysql"

#Logging
log_bin = /u01/mysql-8/data/binlog
log_error = /u01/mysql-8/data/mysqld.log
expire_logs_days = 7
relay_log = /u01/mysql-8/data/relay_bin01
relay_log_index = /u01/mysql-8/data/relay_bin.index
relay_log_recovery = on
master_info_repository = TABLE
relay_log_info_repository = TABLE

# GTID Based Replication Parameter
gtid-mode=on
enforce-gtid-consistency=1
log_slave_updates=1


Step 1: Verify Instance Configuration

Before setting up MySQL Group Replication and creating an InnoDB Cluster, it's important to check the configuration of all the MySQL instances that you plan to add to the cluster. This can be done using the checkInstanceConfiguration() function in MySQL Shell. Let's start by connecting to the MySQL instance 'test-machine01:3306' using MySQL Shell.

$ mysqlsh
MySQL Shell 8.0.27

Type '\help' or '\?' for help; '\quit' to exit.
 JS > \connect test-machine01:3306
Creating a session to 'test-machine01:3306'
Please provide the password for 'user@test-machine01:3306': ********

Once connected to the MySQL instance, you can use the checkInstanceConfiguration() function to verify if the instance is suitable to be added to the InnoDB Cluster.

JS > dba.checkInstanceConfiguration()

This function will perform various checks on the MySQL instance, including checking the MySQL version, the configuration parameters, and the required plugins, to ensure that the instance meets the requirements for adding it to the InnoDB Cluster. If there are any issues, the function will provide feedback on what needs to be fixed before proceeding.

Make sure to repeat this step for all the MySQL instances that you plan to add to the InnoDB Cluster. It's important to ensure that all instances meet the necessary requirements to avoid any potential issues during the cluster setup process. Once all instances pass the configuration check, you can proceed to the next step of setting up MySQL Group Replication and creating the InnoDB Cluster.

Instance-1: “test-machine01:3306”

[root@test-machine01 ~]# mysqlsh root@localhost:3306
MySQL Shell 8.0.23

Copyright (c) 2016, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type '\help' or '\?' for help; '\quit' to exit.
Creating a session to 'root@localhost:3306'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 9

 MySQL  localhost:3306 ssl  JS > dba.checkInstanceConfiguration('test-machine01:3306');
Validating local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as test-machine01:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

Group Replication requires tables to use InnoDB and have a PRIMARY KEY or PRIMARY KEY Equivalent (non-null unique key). Tables that do not follow these requirements will be readable but not updateable when used with Group Replication. If your applications make updates (INSERT, UPDATE or DELETE) to these tables, ensure they use the InnoDB storage engine and have a PRIMARY KEY or PRIMARY KEY Equivalent.

Checking instance configuration...
Instance configuration is compatible with InnoDB cluster

The instance 'test-machine01:3306' is valid to be used in an InnoDB cluster.
WARNING: Some non-fatal issues were detected in some of the existing tables.
You may choose to ignore these issues, although replicated updates on these tables will not be possible.

{
    "status": "ok"
}
 MySQL  localhost:3306 ssl  JS >

Instance-2: “test-machine01:3307”

MySQL  localhost:3306 ssl  JS > 

MySQL  localhost:3306 ssl  JS > dba.checkInstanceConfiguration('test-machine01:3307');

Please provide the password for 'root@test-machine01:3307': *********

Save password for 'root@test-machine01:3307'? [Y]es/[N]o/Ne[v]er (default No): Y

Validating local MySQL instance listening at port 3307 for use in an InnoDB cluster...

 

This instance reports its own address as test-machine01:3307

Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

 

Group Replication requires tables to use InnoDB and have a PRIMARY KEY or PRIMARY KEY Equivalent (non-null unique key). Tables that do not follow these requirements will be readable but not updateable when used with Group Replication. If your applications make updates (INSERT, UPDATE or DELETE) to these tables, ensure they use the InnoDB storage engine and have a PRIMARY KEY or PRIMARY KEY Equivalent.

 

Checking instance configuration...

Instance configuration is compatible with InnoDB cluster

 

The instance 'test-machine01:3307' is valid to be used in an InnoDB cluster.

WARNING: Some non-fatal issues were detected in some of the existing tables.

You may choose to ignore these issues, although replicated updates on these tables will not be possible.

 

{

    "status": "ok"

}

 MySQL  localhost:3306 ssl  JS >



                                                        Instance-3: “test-machine02:3306”

MySQL  localhost:3306 ssl  JS > dba.checkInstanceConfiguration('test-machine02:3306')
Please provide the password for 'root@test-machine02:3306': *********
Save password for 'root@test-machine02:3306'? [Y]es/[N]o/Ne[v]er (default No): Y
Validating MySQL instance at test-machine02:3306 for use in an InnoDB cluster...
 
This instance reports its own address as test-machine02:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.
 
Group Replication requires tables to use InnoDB and have a PRIMARY KEY or PRIMARY KEY Equivalent (non-null unique key). Tables that do not follow these requirements will be readable but not updateable when used with Group Replication. If your applications make updates (INSERT, UPDATE or DELETE) to these tables, ensure they use the InnoDB storage engine and have a PRIMARY KEY or PRIMARY KEY Equivalent.
 
Checking instance configuration...
Instance configuration is compatible with InnoDB cluster
 
The instance 'test-machine02:3306' is valid to be used in an InnoDB cluster.
WARNING: Some non-fatal issues were detected in some of the existing tables.
You may choose to ignore these issues, although replicated updates on these tables will not be possible.
 
{
    "status": "ok"
}
 MySQL  localhost:3306 ssl  JS >


Step 2. Configure Instances: Use the configureInstance() function to configure all 3 MySQL instances.

                                                       Instance-1: “test-machine01:3306”

MySQL  localhost:3306 ssl  JS >
MySQL  localhost:3306 ssl  JS > dba.configureInstance('test-machine01:3306');
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...
 
This instance reports its own address as test-machine01:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.
 
The instance 'test-machine01:3306' is valid to be used in an InnoDB cluster.
The instance 'test-machine01:3306' is already ready to be used in an InnoDB cluster.
MySQL  localhost:3306 ssl  JS >

                                                      Instance-2: “test-machine01:3307”

MySQL  localhost:3306 ssl  JS >
MySQL  localhost:3306 ssl  JS > dba.configureInstance('test-machine01:3307');
Configuring local MySQL instance listening at port 3307 for use in an InnoDB cluster...
 
This instance reports its own address as test-machine01:3307
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.
 
The instance 'test-machine01:3307' is valid to be used in an InnoDB cluster.
The instance 'test-machine01:3307' is already ready to be used in an InnoDB cluster.
MySQL  localhost:3306 ssl  JS >

                                                    Instance-3: “test-machine02:3306”

MySQL  localhost:3306 ssl  JS >
MySQL  localhost:3306 ssl  JS > dba.configureInstance('test-machine02:3306');
Configuring MySQL instance at test-machine02:3306 for use in an InnoDB cluster...
 
This instance reports its own address as test-machine02:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.
 
The instance 'test-machine02:3306' is valid to be used in an InnoDB cluster.
The instance 'test-machine02:3306' is already ready to be used in an InnoDB cluster.
MySQL  localhost:3306 ssl  JS >

Step 3. Create the Cluster: Use the createCluster() function to create the InnoDB cluster, and the status() function to check its current status.

MySQL  localhost:3306 ssl  JS >
MySQL  localhost:3306 ssl  JS >  var cluster = dba.createCluster('innodbclustertest');
A new InnoDB cluster will be created on instance 'localhost:3306'.
 
Validating instance configuration at localhost:3306...
 
This instance reports its own address as test-machine01:3306
 
Group Replication requires tables to use InnoDB and have a PRIMARY KEY or PRIMARY KEY Equivalent (non-null unique key). Tables that do not follow these requirements will be readable but not updateable when used with Group Replication. If your applications make updates (INSERT, UPDATE or DELETE) to these tables, ensure they use the InnoDB storage engine and have a PRIMARY KEY or PRIMARY KEY Equivalent.
 
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using 'test-machine01:33061'. Use the localAddress option to override.
 
Creating InnoDB cluster 'innodbclustertest' on 'test-machine01:3306'...
 
Adding Seed Instance...
NOTE: Metadata schema found in target instance
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
 
 MySQL  localhost:3306 ssl  JS >
 
 
 
 MySQL  localhost:3306 ssl  JS > cluster.status();
{
    "clusterName": "innodbclustertest",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "test-machine01:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "test-machine01:3306": {
                "address": "test-machine01:3306",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.22"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "test-machine01:3306"
}
 MySQL  localhost:3306 ssl  JS >

Step 4. Add the Second Instance: Use the addInstance() function to add an instance to the newly created InnoDB Cluster.

MySQL  localhost:3306 ssl  JS >
MySQL  localhost:3306 ssl  JS > cluster.addInstance('test-machine01:3307');
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of 'test-machine01:3307' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.
 
The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.
 
Incremental state recovery was selected because it seems to be safely usable.
 
Validating instance configuration at test-machine01:3307...
 
This instance reports its own address as test-machine01:3307
 
Group Replication requires tables to use InnoDB and have a PRIMARY KEY or PRIMARY KEY Equivalent (non-null unique key). Tables that do not follow these requirements will be readable but not updateable when used with Group Replication. If your applications make updates (INSERT, UPDATE or DELETE) to these tables, ensure they use the InnoDB storage engine and have a PRIMARY KEY or PRIMARY KEY Equivalent.
 
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using 'test-machine01:33071'. Use the localAddress option to override.
 
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
 
Adding instance to the cluster...
 
Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
State recovery already finished for 'test-machine01:3307'
 
The instance 'test-machine01:3307' was successfully added to the cluster.
 
 MySQL  localhost:3306 ssl  JS >
 
 
 
 MySQL  localhost:3306 ssl  JS >
 MySQL  localhost:3306 ssl  JS > cluster.status();
{
    "clusterName": "innodbclustertest",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "test-machine01:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "test-machine01:3306": {
                "address": "test-machine01:3306",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.22"
            },
            "test-machine01:3307": {
                "address": "test-machine01:3307",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.22"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "test-machine01:3306"
}
 MySQL  localhost:3306 ssl  JS >

Step 5. Add the Third Instance: Use the addInstance() function to add the remaining instance to the InnoDB Cluster.

MySQL  localhost:3306 ssl  JS >
MySQL  localhost:3306 ssl  JS > cluster.addInstance('test-machine02:3306');
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of 'test-machine02:3306' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.
 
The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.
 
Incremental state recovery was selected because it seems to be safely usable.
 
Validating instance configuration at test-machine02:3306...
 
This instance reports its own address as test-machine02:3306
 
Group Replication requires tables to use InnoDB and have a PRIMARY KEY or PRIMARY KEY Equivalent (non-null unique key). Tables that do not follow these requirements will be readable but not updateable when used with Group Replication. If your applications make updates (INSERT, UPDATE or DELETE) to these tables, ensure they use the InnoDB storage engine and have a PRIMARY KEY or PRIMARY KEY Equivalent.
 
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using 'test-machine02:33061'. Use the localAddress option to override.
 
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
 
Adding instance to the cluster...
 
Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
State recovery already finished for 'test-machine02:3306'
 
The instance 'test-machine02:3306' was successfully added to the cluster.
 
 MySQL  localhost:3306 ssl  JS >
 MySQL  localhost:3306 ssl  JS >
 MySQL  localhost:3306 ssl  JS >
 MySQL  localhost:3306 ssl  JS > cluster.status();
{
    "clusterName": "innodbclustertest",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "test-machine01:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "test-machine01:3306": {
                "address": "test-machine01:3306",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.22"
            },
            "test-machine01:3307": {
                "address": "test-machine01:3307",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.22"
            },
            "test-machine02:3306": {
                "address": "test-machine02:3306",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.22"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "test-machine01:3306"
}
 MySQL  localhost:3306 ssl  JS >
 
 MySQL  localhost:3306 ssl  JS >
 MySQL  localhost:3306 ssl  JS > cluster.describe()
{
    "clusterName": "innodbclustertest",
    "defaultReplicaSet": {
        "name": "default",
        "topology": [
            {
                "address": "test-machine01:3306",
                "label": "test-machine01:3306",
                "role": "HA"
            },
            {
                "address": "test-machine01:3307",
                "label": "test-machine01:3307",
                "role": "HA"
            },
            {
                "address": "test-machine02:3306",
                "label": "test-machine02:3306",
                "role": "HA"
            }
        ],
        "topologyMode": "Single-Primary"
    }
}
 MySQL  localhost:3306 ssl  JS >
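As a final sanity check (a sketch that is not part of the walkthrough above), you can verify automatic failover by stopping the current primary and re-reading the cluster status from a surviving member. dba.getCluster() and status() are the standard AdminAPI calls for this; the systemd unit name below is assumed from the [mysqld@inst1] configuration group and may differ on your system.

sudo systemctl stop mysqld@inst1
mysqlsh root@test-machine01:3307 --js -e "var c = dba.getCluster('innodbclustertest'); print(c.status())"

The status output should now show one of the remaining instances as the new primary and the stopped instance as missing.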