ProxySQL can be installed on client nodes to manage the workload across the cluster efficiently, without any changes to the applications that generate queries. It is the recommended high-availability solution for Percona XtraDB Cluster.
Some of the popular features of ProxySQL are:
- High performance
- Efficient workload management
- Query caching
- Query routing (see the sketch after this list)
- Failover support
- Advanced runtime configuration with zero downtime
- Application-layer proxy
- Cross-platform support
- Advanced topology support
- Firewall capabilities
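As a minimal sketch of the query-routing feature mentioned above: ProxySQL routes queries with rules stored in the mysql_query_rules table. The hostgroup numbers here (0 for writes, 1 for reads) are assumptions for illustration only; the setup in this article uses a single hostgroup 0. Rule 1 keeps locking reads on the writer hostgroup, rule 2 sends all other SELECTs to the reader hostgroup:

ProxySQL> INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply) VALUES (1, 1, '^SELECT.*FOR UPDATE$', 0, 1);
ProxySQL> INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply) VALUES (2, 1, '^SELECT', 1, 1);
ProxySQL> LOAD MYSQL QUERY RULES TO RUNTIME;
ProxySQL> SAVE MYSQL QUERY RULES TO DISK;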
Environment Specification:
ProxySQL node: 192.168.56.115 (CentOS 7)
Percona XtraDB Cluster nodes: 192.168.56.110, 192.168.56.113, 192.168.56.114
Prerequisites:
We need to open the following ports on all cluster nodes:
firewall-cmd --zone=public --add-service=mysql --permanent
firewall-cmd --zone=public --add-port=3306/tcp --permanent
firewall-cmd --zone=public --add-port=4567/tcp --permanent
firewall-cmd --zone=public --add-port=4568/tcp --permanent
firewall-cmd --zone=public --add-port=4444/tcp --permanent
firewall-cmd --zone=public --add-port=4567/udp --permanent
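Since these rules are added with --permanent, they only take effect after a reload. If you want to verify what ended up open, a quick check looks like this:

firewall-cmd --reload
firewall-cmd --zone=public --list-ports
firewall-cmd --zone=public --list-services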
Also allow the ProxySQL service port 6033/tcp (the default MySQL port 3306 with its digits reversed) through the Linux firewall:
firewall-cmd --permanent --add-port=6033/tcp
firewall-cmd --reload
Installing ProxySQL Load Balancer for Percona XtraDB Cluster on CentOS 7
ProxySQL v2 natively supports Percona XtraDB Cluster. Install it from the Percona repository:
sudo yum install proxysql2
To connect to the ProxySQL admin interface, you need a MySQL client.
yum install Percona-XtraDB-Cluster-client-57
Now start the proxysql service:
[root@localhost ~]# systemctl start proxysql.service
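A note on admin credentials: ProxySQL reads them from the admin_variables section (admin_credentials) of /etc/proxysql.cnf, and the stock default is admin:admin. This walkthrough connects with password 123, so adjust the connection command below to match your own configuration. You can check the configured value with:

grep admin_credentials /etc/proxysql.cnf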
To check the MySQL port on a cluster node, use the following query:
mysql -uroot -p -e "SHOW GLOBAL VARIABLES LIKE 'PORT'";
Now connect to the ProxySQL admin interface and configure the load balancer:
mysql -u admin -p123 -h 127.0.0.1 -P6032 --prompt='ProxySQL> '

ProxySQL> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.56.110',3306);
ProxySQL> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.56.113',3306);
ProxySQL> INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES (0,'192.168.56.114',3306);

ProxySQL> SELECT * FROM mysql_servers;
+--------------+----------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname       | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+----------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 0            | 192.168.56.110 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 0            | 192.168.56.113 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 0            | 192.168.56.114 | 3306 | 0         | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+----------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
3 rows in set (0.00 sec)
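A quick note on ProxySQL's layered configuration: an INSERT into mysql_servers only changes the in-memory layer. The entries are activated later in this walkthrough with LOAD MYSQL SERVERS TO RUNTIME; if you also want them to survive a restart of the proxysql service, persist them to disk as well:

ProxySQL> SAVE MYSQL SERVERS TO DISK;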
Configure ProxySQL Node Monitoring:
Log in to the MySQL instance on any Percona node and execute the following commands to create the monitoring user with the USAGE privilege:
mysql> CREATE USER 'proxysql'@'%' IDENTIFIED BY 'ProxySQL';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT USAGE ON *.* TO 'proxysql'@'%';
Query OK, 0 rows affected (0.02 sec)

Then, in the ProxySQL admin interface, set the monitoring credentials and load them to runtime:

ProxySQL> UPDATE global_variables SET variable_value='proxysql' WHERE variable_name='mysql-monitor_username';
Query OK, 1 row affected (0.01 sec)

ProxySQL> UPDATE global_variables SET variable_value='ProxySQL' WHERE variable_name='mysql-monitor_password';
Query OK, 1 row affected (0.00 sec)

ProxySQL> LOAD MYSQL VARIABLES TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)

ProxySQL> SAVE MYSQL VARIABLES TO DISK;
Query OK, 136 rows affected (0.01 sec)

ProxySQL> LOAD MYSQL SERVERS TO RUNTIME;
Query OK, 0 rows affected (0.01 sec)
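To verify that monitoring is actually working, you can query ProxySQL's monitor tables; mysql_server_connect_log and mysql_server_ping_log record the result of each probe, and empty connect_error/ping_error columns mean the checks are passing:

ProxySQL> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 3;
ProxySQL> SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 3;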
Creating ProxySQL Client User
To give ProxySQL read/write access to the cluster, create this user on one of the Percona XtraDB Cluster nodes:

CREATE USER 'lbuser'@'192.168.56.115' IDENTIFIED BY 'lbpass';
GRANT ALL ON *.* TO 'lbuser'@'192.168.56.115';
Then register the same credentials in ProxySQL's mysql_users table:
ProxySQL> INSERT INTO mysql_users (username,password) VALUES ('lbuser','lbpass');
Query OK, 1 row affected (0.00 sec)

ProxySQL> LOAD MYSQL USERS TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)

ProxySQL> SAVE MYSQL USERS TO DISK;
Query OK, 0 rows affected (0.02 sec)
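You can double-check from the admin side that the user is active at runtime before testing a client login:

ProxySQL> SELECT username, active, default_hostgroup FROM runtime_mysql_users;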
To confirm that the user has been set up correctly, try to log in:
[root@localhost ~]# mysql -u lbuser -plbpass -h 127.0.0.1 -P 6033
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.5.30 (ProxySQL)

Copyright (c) 2009-2019 Percona LLC and/or its affiliates
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
Let's see which cluster node our client connects to.
[root@localhost ~]# mysql -u lbuser -plbpass -h 127.0.0.1 -P 6033 -e "select @@hostname;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------------------+
| @@hostname           |
+----------------------+
| percona3.localdomain |
+----------------------+
You can see the proxy routed this connection to percona3. From another PuTTY session, the connection went to percona2:
[root@localhost ~]# mysql -u lbuser -plbpass -h 127.0.0.1 -P 6033 -e "select @@hostname;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------------------+
| @@hostname           |
+----------------------+
| percona2.localdomain |
+----------------------+
After a few more sessions, a connection landed on percona1:
[root@localhost ~]# mysql -u lbuser -plbpass -h 127.0.0.1 -P 6033 -e "select @@hostname;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------------------+
| @@hostname           |
+----------------------+
| percona1.localdomain |
+----------------------+
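To watch the distribution across nodes in one go, a small shell loop works (a quick sketch; with equal weights each new connection may land on a different node):

for i in $(seq 1 6); do
  mysql -u lbuser -plbpass -h 127.0.0.1 -P 6033 -N -e "SELECT @@hostname"
done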
Connecting from MySQL Workbench:
Below is the configuration of my session in MySQL Workbench. I used it to check which node a Workbench connection lands on.
Failover:
Now let's check how ProxySQL performs failover. Stop MySQL on node 3 and check the node status from the ProxySQL admin interface:
[root@percona3 mysql]# service mysql stop
Redirecting to /bin/systemctl stop mysql.service

ProxySQL> select hostgroup_id,hostname,port,status from runtime_mysql_servers;
+--------------+----------------+------+---------+
| hostgroup_id | hostname       | port | status  |
+--------------+----------------+------+---------+
| 0            | 192.168.56.110 | 3306 | ONLINE  |
| 0            | 192.168.56.114 | 3306 | SHUNNED |
| 0            | 192.168.56.113 | 3306 | ONLINE  |
+--------------+----------------+------+---------+
3 rows in set (0.01 sec)
Now start MySQL on node 3 again and check the cluster status from the proxy server once more:
[root@percona3 mysql]# systemctl start mysql.service

ProxySQL> SELECT hostgroup_id hg, count(status) cnt FROM main.runtime_mysql_servers WHERE status = "ONLINE" GROUP BY hg HAVING cnt;
+----+-----+
| hg | cnt |
+----+-----+
| 0  | 3   |
+----+-----+
1 row in set (0.00 sec)

ProxySQL> select hostgroup_id,hostname,port,status from runtime_mysql_servers;
+--------------+----------------+------+--------+
| hostgroup_id | hostname       | port | status |
+--------------+----------------+------+--------+
| 0            | 192.168.56.110 | 3306 | ONLINE |
| 0            | 192.168.56.114 | 3306 | ONLINE |
| 0            | 192.168.56.113 | 3306 | ONLINE |
+--------------+----------------+------+--------+
3 rows in set (0.00 sec)
NOTE: If you still see a node with status SHUNNED, try connecting a few more times so the client picks up the latest cluster state; runtime_mysql_servers stores the status each node had at the last connection attempt.
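One convenient way to follow the status transitions during a failover test is to poll runtime_mysql_servers from the shell (a sketch using watch; adjust the admin credentials to your setup):

watch -n1 'mysql -u admin -p123 -h 127.0.0.1 -P 6032 -e "SELECT hostgroup_id,hostname,status FROM runtime_mysql_servers"'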
Testing Cluster with sysbench:
yum install sysbench
sysbench requires the ProxySQL client user credentials (lbuser/lbpass) created in Creating ProxySQL Client User.
sysbench /usr/share/sysbench/oltp_read_only.lua --threads=4 --mysql-host=127.0.0.1 --mysql-user=lbuser --mysql-password=lbpass --mysql-port=6033 --tables=10 --table-size=10000 prepare

sysbench /usr/share/sysbench/oltp_read_only.lua --threads=4 --events=0 --time=300 --mysql-host=127.0.0.1 --mysql-user=lbuser --mysql-password=lbpass --mysql-port=6033 --tables=10 --table-size=10000 --range_selects=off --db-ps-mode=disable --report-interval=1 run
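When you are finished testing, sysbench can drop the tables it created; the cleanup command is shown here so the full prepare/run/cleanup cycle is in one place:

sysbench /usr/share/sysbench/oltp_read_only.lua --mysql-host=127.0.0.1 --mysql-user=lbuser --mysql-password=lbpass --mysql-port=6033 --tables=10 cleanup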
To see the number of commands that were run on the cluster:
ProxySQL> SELECT * FROM stats_mysql_commands_counters;
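You can also check how the benchmark traffic was spread across the backend nodes via ProxySQL's connection-pool statistics:

ProxySQL> SELECT hostgroup, srv_host, status, Queries, Bytes_data_sent FROM stats_mysql_connection_pool;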