Welcome to another blog. This time I am going to show you how to set up a Percona XtraDB Cluster. So, what is Percona XtraDB Cluster? It is a MySQL clustering solution that provides high availability, protects against downtime and data loss, and offers linear scalability for a growing environment. When you execute a query, it runs locally on the node: all data is available locally, so there is no need for remote access. That makes it a good solution for scaling a read workload, since you can send read queries to any of the nodes. Once you install Percona XtraDB Cluster you don't need to install MySQL separately; everything comes from the Percona repository, and by the end of this article you will have a working cluster.
Prerequisites:
To configure a Percona XtraDB Cluster on Linux servers we need three servers, with the following details:
Server percona1: 192.168.56.110
Server percona2: 192.168.56.113
Server percona3: 192.168.56.114
OS: CentOS 7
You can also go with two servers, but the official documentation recommends an odd number of nodes, e.g. 3, 5, 7, and so on. Just don't go for an even number of nodes, because that can lead to what is called a split-brain. A split-brain is a state in which the nodes lose contact with one another and then each side tries to take control of shared resources or provide the cluster service at the same time.
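Once the cluster is running, a quick optional sanity check for quorum is to look at the cluster status from any node; on a healthy node it should report Primary (this is just an example check, not a required step):
mysql> SHOW STATUS LIKE 'wsrep_cluster_status';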
NOTE: We need root access on the nodes where we will be installing Percona XtraDB Cluster (either logged in as a user with root privileges or able to run commands with sudo).
Prechecks:
Add the below entries to your /etc/hosts file ——> on all servers (percona1, percona2, percona3)
192.168.56.110 percona1.localdomain percona1
192.168.56.113 percona2.localdomain percona2
192.168.56.114 percona3.localdomain percona3
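As an optional check that these entries resolve as expected, you can look one of the hostnames up against /etc/hosts, for example:
[root@percona1 ~]# getent hosts percona2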
Open firewall ports:
We need to open the below ports on all servers (percona1, percona2, percona3):
3306
4444
4567
4568
firewall-cmd --zone=public --add-service=mysql --permanent
firewall-cmd --zone=public --add-port=3306/tcp --permanent
firewall-cmd --zone=public --add-port=4567/tcp --permanent
firewall-cmd --zone=public --add-port=4568/tcp --permanent
firewall-cmd --zone=public --add-port=4444/tcp --permanent
firewall-cmd --zone=public --add-port=4567/udp --permanent
firewall-cmd --reload
firewall-cmd --list-all
[root@percona1 ~]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp0s3 enp0s8
  sources:
  services: ssh dhcpv6-client mysql
  ports: 3306/tcp 4567/tcp 4568/tcp 4444/tcp 4567/udp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
Disable SELinux: ——->on all servers (percona1, percona2, percona3)
[root@percona1 ~]# setenforce 0
This puts SELinux in permissive mode for the current session, but we need to disable it permanently.
Edit the /etc/sysconfig/selinux file and set the SELINUX variable to disabled on all servers (percona1, percona2, percona3):
vi /etc/sysconfig/selinux
SELINUX=disabled
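To confirm the current SELinux mode (for example, after the setenforce command above or after a later reboot), you can optionally run:
[root@percona1 ~]# getenforce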
All servers should be accessible to each other. To check connectivity, ping each server.
Install the below prerequisite RPMs: ——->on all servers (percona1, percona2, percona3)
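For example:
[root@percona1 ~]# ping -c 2 percona2
[root@percona1 ~]# ping -c 2 percona3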
epel-release
libev
socat
[root@percona1 ~]# yum -y install epel-release
Installed:
  epel-release.noarch 0:7-11
[root@percona1 yum.repos.d]# yum -y install libev
Installed:
  libev.x86_64 0:4.15-7.el7
[root@percona1 Desktop]# yum -y install socat
Installed:
  socat.x86_64 0:1.7.3.2-2.el7
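If you want to confirm the prerequisite packages are present on each server, you can optionally query the RPM database:
[root@percona1 ~]# rpm -q epel-release libev socat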
Add Percona Repository: ——->on all servers (percona1, percona2, percona3)
Now we will add the Percona repository on all three servers (percona1, percona2, percona3):
[root@percona1 ~]# sudo yum -y install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
Installed:
  percona-release.noarch 0:1.0-13
Complete!
Install Percona-XtraDB-Cluster: ——->on all servers (percona1, percona2, percona3)
[root@percona1 ~]# yum install -y Percona-XtraDB-Cluster-57
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
. . .
Installed:
  Percona-XtraDB-Cluster-57.x86_64 0:5.7.27-31.39.1.el7
  Percona-XtraDB-Cluster-shared-57.x86_64 0:5.7.27-31.39.1.el7
  Percona-XtraDB-Cluster-shared-compat-57.x86_64 0:5.7.27-31.39.1.el7
Dependency Installed:
  Percona-XtraDB-Cluster-client-57.x86_64 0:5.7.27-31.39.1.el7
  Percona-XtraDB-Cluster-server-57.x86_64 0:5.7.27-31.39.1.el7
  percona-xtrabackup-24.x86_64 0:2.4.17-1.el7
  perl-DBD-MySQL.x86_64 0:4.023-6.el7
  perl-Digest.noarch 0:1.17-245.el7
  perl-Digest-MD5.x86_64 0:2.52-3.el7
  qpress.x86_64 0:11-1.el7
Replaced:
  mariadb-libs.x86_64 1:5.5.60-1.el7_5
Complete!
Let's check whether Percona XtraDB Cluster is installed or not.
[root@percona1 ~]# yum search Percona-XtraDB
NOTE: All of the above steps should be performed on all servers (percona1, percona2, percona3).
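As an optional alternative check, you can also list the installed Percona packages directly from the RPM database:
[root@percona1 ~]# rpm -qa | grep -i percona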
Configuring Nodes for Write-Set Replication
ON Percona1:
Let's start the MySQL server and change the root password, only on the percona1 server:
[root@percona1 ~]# systemctl start mysql
[root@percona1 ~]# grep password /var/log/mysqld.log
2019-12-11T10:35:12.135133Z 1 [Note] A temporary password is generated for root@localhost: hJ(l8ySe>f>E
Note: copy the temporary password "hJ(l8ySe>f>E"; we are going to change it.
[root@percona1 ~]# mysql_secure_installation
Securing the MySQL server deployment.
Enter password for user root:
The existing password for the user account root has expired. Please set a new password.
New password:
Re-enter new password:
VALIDATE PASSWORD PLUGIN can be used to test passwords and improve security. It checks the strength of password and allows the users to set only those passwords which are secure enough.
Would you like to setup VALIDATE PASSWORD plugin?
Press y|Y for Yes, any other key for No:
Using existing password for root.
Change the password for root ? ((Press y|Y for Yes, any other key for No) : ... skipping.
By default, a MySQL installation has an anonymous user, allowing anyone to log into MySQL without having to have a user account created for them. This is intended only for testing, and to make the installation go a bit smoother. You should remove them before moving into a production environment.
Remove anonymous users? (Press y|Y for Yes, any other key for No) : yes
Success.
Normally, root should only be allowed to connect from 'localhost'. This ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? (Press y|Y for Yes, any other key for No) : yes
Success.
By default, MySQL comes with a database named 'test' that anyone can access. This is also intended only for testing, and should be removed before moving into a production environment.
Remove test database and access to it? (Press y|Y for Yes, any other key for No) : ... skipping.
Reloading the privilege tables will ensure that all changes made so far will take effect immediately.
Reload privilege tables now? (Press y|Y for Yes, any other key for No) : yes
Success.
All done!
Now stop the MySQL server on percona1.
[root@percona1 ~]# systemctl stop mysql
Now we will configure replication by adding the replication settings to the my.cnf file.
Configure Replication Settings on percona1:
cat >>/etc/my.cnf<<EOF
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_name=perconacluster
wsrep_cluster_address=gcomm://192.168.56.110,192.168.56.113,192.168.56.114
wsrep_node_name=percona1
wsrep_node_address=192.168.56.110
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=perconarep:perconapasswd
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
EOF
Now initialize the cluster by bootstrapping the first node. This must be the node with your main database, which will be used as the data source for the cluster; in our case it is the percona1 server.
[root@percona1 ~]# systemctl start mysql@bootstrap
Before configuring replication we need to create a separate user that will be used for the replication process. Why do we need a replication user? When you add a new node to the cluster, an existing node takes a copy of its current data and transfers it to the joining node, and that transfer authenticates with this user.
NOTE: Make sure the user is the same one specified in the wsrep_sst_auth variable in the replication configuration in the my.cnf file.
[root@percona1 ~]# mysql -u root -p
mysql> create user perconarep@'%' identified by 'perconapasswd';
mysql> grant all on *.* to perconarep@'%';
mysql> flush privileges;
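Optionally, you can confirm that the user and its privileges were created as expected:
mysql> SHOW GRANTS FOR 'perconarep'@'%';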
ON Percona2:
Configure Replication Settings on percona2:
cat >>/etc/my.cnf<<EOF
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_name=perconacluster
wsrep_cluster_address=gcomm://192.168.56.110,192.168.56.113,192.168.56.114
wsrep_node_name=percona2
wsrep_node_address=192.168.56.113
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=perconarep:perconapasswd
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
EOF
Now start MySQL; this will add the node to the cluster.
[root@percona2 ~]# systemctl start mysql
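At this point you can optionally verify that percona2 has joined by checking the cluster size; with two nodes up it should report 2, for example:
[root@percona2 ~]# mysql -uroot -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"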
ON Percona3:
Configure Replication Settings on percona3:
cat >>/etc/my.cnf<<EOF
[mysqld]
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_name=perconacluster
wsrep_cluster_address=gcomm://192.168.56.110,192.168.56.113,192.168.56.114
wsrep_node_name=percona3
wsrep_node_address=192.168.56.114
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=perconarep:perconapasswd
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
EOF
Now start MySQL; this will add the node to the cluster.
[root@percona3 ~]# systemctl start mysql
You will observe messages like the following in the MySQL log (/var/log/mysqld.log):
(percona2): State transfer from 0.0 (percona1) complete
1.0 (percona3): State transfer from 0.0 (percona1) complete
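If you want to locate these messages quickly, you can grep the log for them, for example:
[root@percona3 ~]# grep -i 'state transfer' /var/log/mysqld.log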
To make sure that the cluster has been initialized, run the following:
mysql> show status like 'wsrep%';
+----------------------------------+--------------------------------------------------------------+
| Variable_name                    | Value                                                        |
+----------------------------------+--------------------------------------------------------------+
| wsrep_local_state_uuid           | 6d95cd36-1c1a-11ea-999a-83631204f6e3                         |
| wsrep_protocol_version           | 9                                                            |
| wsrep_last_applied               | 6                                                            |
. . .
| wsrep_flow_control_recv          | 0                                                            |
| wsrep_flow_control_interval      | [ 173, 173 ]                                                 |
| wsrep_flow_control_interval_low  | 173                                                          |
| wsrep_flow_control_interval_high | 173                                                          |
| wsrep_flow_control_status        | OFF                                                          |
. .
| wsrep_local_state                | 4                                                            |
| wsrep_local_state_comment        | Synced                                                       |
| wsrep_cert_index_size            | 1                                                            |
| wsrep_cert_bucket_count          | 22                                                           |
| wsrep_gcache_pool_size           | 2384                                                         |
. .
| wsrep_incoming_addresses         | 192.168.56.110:3306,192.168.56.114:3306,192.168.56.113:3306 |
| wsrep_cluster_weight             | 3                                                            |
| wsrep_desync_count               | 0                                                            |
| wsrep_evs_delayed                |                                                              |
| wsrep_evs_evict_list             |                                                              |
| wsrep_evs_repl_latency           | 0/0/0/0/0                                                    |
| wsrep_evs_state                  | OPERATIONAL                                                  |
| wsrep_gcomm_uuid                 | 1d16f574-1c1b-11ea-b5dd-574c9a653584                         |
| wsrep_cluster_conf_id            | 3                                                            |
| wsrep_cluster_size               | 3                                                            |
| wsrep_cluster_state_uuid         | 6d95cd36-1c1a-11ea-999a-83631204f6e3                         |
| wsrep_cluster_status             | Primary                                                      |
| wsrep_connected                  | ON                                                           |
| wsrep_local_bf_aborts            | 0                                                            |
| wsrep_local_index                | 0                                                            |
| wsrep_provider_name              | Galera                                                       |
| wsrep_provider_vendor            | Codership Oy <[email protected]>                             |
| wsrep_provider_version           | 3.39(rb3295e6)                                               |
| wsrep_ready                      | ON                                                           |
+----------------------------------+--------------------------------------------------------------+
71 rows in set (0.00 sec)
As you can see, the cluster size is 3 ("wsrep_cluster_size | 3"), and you can also see the IPs of our three nodes:
wsrep_incoming_addresses | 192.168.56.110:3306,192.168.56.114:3306,192.168.56.113:3306
Test our cluster:
I am creating a database on percona1, and then we will check on percona3 that it has been replicated.
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.03 sec)

mysql> CREATE DATABASE perconatest;
Query OK, 1 row affected (0.03 sec)
Now let's check on percona3; we will see that the 'perconatest' DB has been created.
[root@percona3 ~]# mysql -uroot -p -e "show databases"
Enter password:
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| perconatest        |
| performance_schema |
| sys                |
+--------------------+
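As a further optional check that writes replicate in both directions, you could create a table and insert a row on percona3 and then read it back on percona1. The table name t1 and its columns below are just examples (note that with pxc_strict_mode=ENFORCING the table needs a primary key and the InnoDB engine):
[root@percona3 ~]# mysql -uroot -p
mysql> CREATE TABLE perconatest.t1 (id INT AUTO_INCREMENT PRIMARY KEY, note VARCHAR(50)) ENGINE=InnoDB;
mysql> INSERT INTO perconatest.t1 (note) VALUES ('written on percona3');
Then on percona1:
[root@percona1 ~]# mysql -uroot -p -e "SELECT * FROM perconatest.t1;"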