MySQL High Availability with ProxySQL and MySQL Group Replication

As you may know, I started working with high-load MySQL servers in 2010 (and there are several entries noted in 2012, after I purchased back this domain :-D), and I mostly went with multi-slave replication. This is a good fit in many practical cases where most of the high-traffic load consists of read operations, not writes. There were several cases where I worked with multi-master replication / MySQL Cluster, but I found that my DB cluster often ran unstably, so I rarely suggested using MySQL Cluster. In addition, to deal with multi-server connections, I gradually moved from the application layer to a network-based layer (e.g. HAProxy in my case). That approach works; however, it did not satisfy me since it still required much monitoring effort and manual intervention.

Today I note a new approach for the MySQL HA requirement: High Availability with ProxySQL and MySQL Group Replication (which was introduced in v. 5.7.12). In case you want to check some other popular MySQL proxy solutions, read the Comparison at the ProxySQL website first.

To start this test, I use 4 servers as follows:

  • SERVER_1: Install ProxySQL.
  • SERVER_2, SERVER_3, SERVER_4: serve as the 3 nodes in MySQL Group Replication. Remember that it is better NOT to have an even number of nodes, to avoid the split-brain issue.

Install the latest MySQL software on the 3 DB nodes

  1. Add the MySQL APT repository: note that this step is necessary so that the community version will be selected when installing MySQL instead of the default one. Otherwise, you will not see the group replication plugin file in the plugin_dir directory.
  2. Install the mysql-server package (note that we will install the mysql-community edition, not the default one shipped with Ubuntu).
  3. Check to be sure the group_replication.so extension exists at /usr/lib/mysql/plugin/. As mentioned, we need this extension for the group replication functionality.
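On Ubuntu 16.04, the three steps above might look like the following sketch (the mysql-apt-config package version here is an assumption and changes over time; check the MySQL APT repository page for the current one):

```shell
# 1. Add the MySQL APT repository (select MySQL 5.7 when prompted)
wget https://dev.mysql.com/get/mysql-apt-config_0.8.10-1_all.deb
sudo dpkg -i mysql-apt-config_0.8.10-1_all.deb
sudo apt-get update

# 2. Install the MySQL community server
sudo apt-get install mysql-community-server

# 3. Verify that the group replication plugin file is present
ls -l /usr/lib/mysql/plugin/group_replication.so
```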

Setting up MySQL Group Replication on EACH DB SERVER

Before starting to configure MySQL on all DB nodes, I would suggest having a clear way to organize your configuration files, so that you can be sure what will happen when you edit a parameter. I personally like to organize params as follows:

  • By default, in Ubuntu 16.04, most configurations relating to the [mysqld] section go into /etc/mysql/mysql.conf.d/mysqld.cnf, and this file is included in the main /etc/mysql/my.cnf.
  • I keep the local configuration in /etc/mysql/mysql.conf.d/mysqld.cnf. These are the settings that normally apply to a single machine, to tune the system based on the current server configuration, and they can be used without replication.
  • I add a new [mysqld] section at the end of /etc/mysql/my.cnf for the MySQL replication settings. This makes sure these settings are always the final ones, since they override any params defined in previously included files.
  • I like the group replication port number to be something easy to remember, so 33060 (which relates to the standard MySQL port 3306) is the selected port.

So, let’s start the configuration work.

  1. Generate a UUID that will be used later as the group name:
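One simple way to do this in the Linux shell (running SELECT UUID(); in the MySQL console works just as well):

```shell
# Generate a UUID to be used as the group_replication_group_name
cat /proc/sys/kernel/random/uuid
```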

    Remember this value since we will use it later.
  2. Add the following section at the end of /etc/mysql/my.cnf for group replication settings:
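The section is sketched below, based on the standard MySQL 5.7 group replication settings; fill in your generated UUID and the SERVER_X_IP, YOUR_SERVER_IP, and YOUR_SERVER_NUMBER placeholders:

```ini
[mysqld]
# GTID-based replication prerequisites for group replication
gtid_mode = ON
enforce_gtid_consistency = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
binlog_checksum = NONE
log_slave_updates = ON
log_bin = binlog
binlog_format = ROW

# Group replication settings
transaction_write_set_extraction = XXHASH64
loose-group_replication_group_name = "YOUR_GENERATED_UUID"
loose-group_replication_start_on_boot = OFF
loose-group_replication_local_address = "YOUR_SERVER_IP:33060"
loose-group_replication_group_seeds = "SERVER_2_IP:33060,SERVER_3_IP:33060,SERVER_4_IP:33060"
loose-group_replication_ip_whitelist = "SERVER_2_IP,SERVER_3_IP,SERVER_4_IP"
loose-group_replication_bootstrap_group = OFF

server_id = YOUR_SERVER_NUMBER
bind-address = 0.0.0.0
```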

    Remember that you need to fill in your own information for SERVER_X_IP, YOUR_SERVER_IP, and YOUR_SERVER_NUMBER. Also, you will need to manually REJOIN the group after the service is restarted with the above setting (loose-group_replication_start_on_boot = OFF).
  3. Restart MySQL Service:
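A sketch of the restart and the firewall rules (assuming ufw is your firewall; adapt to whatever you use):

```shell
sudo systemctl restart mysql

# Open the MySQL service port and the group replication port
sudo ufw allow 3306
sudo ufw allow 33060
```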

    Remember to open ports for MySQL service and MySQL Group Replication service.
  4. Login to MySQL console on each server to create a new user and grant replication permission:
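The commands for this step might look like the following sketch ('rep' and 'your_password' are placeholders; I also include the plugin installation here, which the next step verifies):

```sql
-- Create the replication user without writing to the binary log
SET SQL_LOG_BIN = 0;
CREATE USER 'rep'@'%' IDENTIFIED BY 'your_password';
GRANT REPLICATION SLAVE ON *.* TO 'rep'@'%';
FLUSH PRIVILEGES;
SET SQL_LOG_BIN = 1;

-- Point the group replication recovery channel at this user
CHANGE MASTER TO MASTER_USER = 'rep', MASTER_PASSWORD = 'your_password'
  FOR CHANNEL 'group_replication_recovery';

-- Install the plugin so that it shows up in SHOW PLUGINS;
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
```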

    Remember to choose your own user (e.g. rep) and password in the above configuration.
  5. Verify that the group replication plugin is on with the command SHOW PLUGINS; in MySQL console.

Start MySQL Group Replication service

To start replication, we will need to start a bootstrapped node (the first node), and then start replication services on remaining nodes.

One thing worth noting is that if you have an existing database, be sure to (i) stop the replication service (STOP GROUP_REPLICATION), (ii) back it up and restore it on ALL nodes in the replication group before (iii) starting the replication service.

  1. Bootstrap the first node on SERVER_2:
    • Turn on the group_replication_bootstrap_group variable in the MySQL console and start the replication service:
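The bootstrap sequence is the standard one from the MySQL documentation:

```sql
SET GLOBAL group_replication_bootstrap_group = ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group = OFF;
```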
    • To verify, query online nodes on MySQL console (we must see the current node with the ONLINE state there):
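The verification query reads the standard performance_schema table:

```sql
SELECT * FROM performance_schema.replication_group_members;
```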
  2. Start replication service on the remaining nodes (SERVER_3 and SERVER_4) in MySQL console:
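On the remaining nodes, no bootstrap flag is needed:

```sql
START GROUP_REPLICATION;
```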
  3. To verify, we can also query the state of group replication in MySQL console of each node.
    • All nodes should reach the ONLINE state within several seconds.
    • Otherwise, a node staying in the RECOVERING state for a long time indicates that something went wrong when starting the group replication service for that node. Check the log at /var/log/mysql/error.log to see what was wrong. The failed node will retry; however, after several attempts (normally 10), the node will then LEAVE the group.

Modification on MySQL Group Replication

This part is optional. I just note here some common situations that we often have to deal with in MySQL Group Replication.

  1. If you have existing data and forgot to replicate it to the other nodes before first starting the replication, poor you! You will need to follow these steps:
    • STOP GROUP_REPLICATION; on all nodes.
    • Backup database, transfer to all remaining nodes.
    • On each node, to create the database and import the data, we need to:
      • run SET GLOBAL super_read_only = OFF; SET SQL_LOG_BIN=0;
      • import the data
      • set them back to ON (SET GLOBAL super_read_only = ON; SET SQL_LOG_BIN=1;)
      • issue RESET MASTER; to reset all binary logs on the current node (in case you forgot to disable the bin log before importing the data), if necessary.
    • Start group replication on the first node, and then start on all remaining nodes.
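The per-node import might look like this sketch (the dump path is a placeholder; it assumes the backup has already been transferred):

```sql
STOP GROUP_REPLICATION;

SET GLOBAL super_read_only = OFF;
SET SQL_LOG_BIN = 0;
SOURCE /path/to/backup.sql;
SET SQL_LOG_BIN = 1;
SET GLOBAL super_read_only = ON;

-- Only if binary logs were written during the import
RESET MASTER;
```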
  2. Some MySQL queries to check things relating to group replication:
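A few status queries, based on the standard performance_schema tables and status variables:

```sql
-- Membership and per-member statistics
SELECT * FROM performance_schema.replication_group_members;
SELECT * FROM performance_schema.replication_group_member_stats;

-- Group replication status variables
SHOW STATUS LIKE 'group_replication%';
```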
  3. To add a new member to the current group, follow these simple steps:
    • Install and configure group replication as usual on the new node. Remember to have all node IPs in the loose-group_replication_ip_whitelist and loose-group_replication_group_seeds variables in the /etc/mysql/my.cnf file.
    • Start group replication in the new node, and check to be sure the new node successfully joins the cluster.
    • Edit the /etc/mysql/my.cnf file on the existing nodes (SERVER_2, SERVER_3, SERVER_4) to include the new IP in the loose-group_replication_ip_whitelist and loose-group_replication_group_seeds variables. We DO NOT need to restart the MySQL services on these nodes.
  4. In order for group replication to work, we also need to take care in the DB schema design: every table needs an explicit PK, or at least one NOT NULL unique key (so it will be selected as the PK). We can run the following query to find which tables do not satisfy this constraint:
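One sketch of such a query over information_schema (it is approximate: a unique key on a nullable column would still pass this check but not satisfy group replication):

```sql
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name   = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
  AND c.constraint_type IS NULL;
```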


Configuring ProxySQL as the DB Connection proxy

  1. Install and start ProxySQL on SERVER_1
  2. Login to the ProxySQL Admin Interface with MySQL client via the port 6032 and change the admin password (if you want):
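The default admin credentials are admin/admin (mysql -u admin -padmin -h 127.0.0.1 -P 6032); changing the password might look like this sketch (the new password is a placeholder):

```sql
UPDATE global_variables
   SET variable_value = 'admin:new_password'
 WHERE variable_name = 'admin-admin_credentials';
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;
```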

    As you might see, I need to copy the memory settings to the runtime realm and then save them to disk to make them persist. The reason is that UPDATE operations here only affect the memory layer.
  3. Add 3 MySQL nodes to the ProxySQL backends:
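A sketch of adding the backends (SERVER_X_IP are placeholders; hostgroup 2 is the writer group used later in this article):

```sql
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (2, 'SERVER_2_IP', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (2, 'SERVER_3_IP', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (2, 'SERVER_4_IP', 3306);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;

SELECT * FROM mysql_servers;
```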

    Check the above result to be sure that all 3 nodes are in ONLINE status.


  4. Define and insert the ProxySQL hostgroup functionality for group replication, as described in Lefred’s blog entry:
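A sketch of the insert into ProxySQL's mysql_group_replication_hostgroups table; hostgroups 2 (writer) and 3 (reader) are the ones used later in this article, while 4 (backup writer) and 1 (offline) are my assumed values:

```sql
INSERT INTO mysql_group_replication_hostgroups
  (writer_hostgroup, backup_writer_hostgroup, reader_hostgroup, offline_hostgroup,
   active, max_writers, writer_is_also_reader, max_transactions_behind)
VALUES (2, 4, 3, 1, 1, 1, 0, 0);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```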

  5. Create a new user for monitoring purposes on ONE of the nodes in the cluster (e.g. we log in to SERVER_2) and run the following commands in the MySQL console:
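A sketch ('monitor' and its password are placeholders; the grant on the sys schema is what ProxySQL's group replication check needs):

```sql
CREATE USER 'monitor'@'%' IDENTIFIED BY 'monitor_password';
GRANT SELECT ON sys.* TO 'monitor'@'%';
FLUSH PRIVILEGES;
```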

  6. Update the monitor information in the ProxySQL Admin console (I also updated the MySQL server version that ProxySQL reports to the client app):
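A sketch of those updates (the credentials and the version string are placeholders; use your backend's actual version):

```sql
UPDATE global_variables SET variable_value = 'monitor'
 WHERE variable_name = 'mysql-monitor_username';
UPDATE global_variables SET variable_value = 'monitor_password'
 WHERE variable_name = 'mysql-monitor_password';
UPDATE global_variables SET variable_value = '5.7.22'
 WHERE variable_name = 'mysql-server_version';
LOAD MYSQL VARIABLES TO RUNTIME;
SAVE MYSQL VARIABLES TO DISK;
```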

  7. Create the user and password to access the app database on ONE of the DB server nodes (e.g. SERVER_2), if we do not have them yet. E.g.
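A sketch; the user, password, and database name are all placeholders:

```sql
CREATE USER 'app_user'@'%' IDENTIFIED BY 'app_password';
GRANT ALL PRIVILEGES ON app_db.* TO 'app_user'@'%';
FLUSH PRIVILEGES;
```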

  8. We need to extend the MySQL 5.7 sys schema with Lefred’s addition_to_sys.sql script to allow ProxySQL to monitor the states of the cluster nodes. We only need to run it on the primary DB node (SERVER_2) since we are running the system with group replication:
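Assuming the script has been downloaded from Lefred's blog entry to the current directory:

```shell
mysql -u root -p < addition_to_sys.sql
```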

  9. Create the SAME user and password in the ProxySQL Admin console so that our application can connect to the ProxySQL with this user and password:
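A sketch of registering the same credentials in ProxySQL's mysql_users table (default_hostgroup 2 is the writer group):

```sql
INSERT INTO mysql_users (username, password, default_hostgroup)
VALUES ('app_user', 'app_password', 2);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;
```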

  10. Change the ProxySQL listening port: By default, the application connects to the cluster through ProxySQL on port 6033, but most PHP applications are used to connecting to the standard MySQL port 3306. So we are going to change the listening port to 3306:
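The update might look like this (mysql-interfaces cannot be applied at runtime, which is why a restart follows):

```sql
UPDATE global_variables SET variable_value = '0.0.0.0:3306'
 WHERE variable_name = 'mysql-interfaces';
SAVE MYSQL VARIABLES TO DISK;
```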

    Then we need to restart the ProxySQL service:
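```shell
sudo systemctl restart proxysql
```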

    Then check if ProxySQL is listening on that port:
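```shell
sudo netstat -tlnp | grep 3306
```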

  11. Nice, now verify that everything is working properly by connecting your app to the ProxySQL listening port.

ProxySQL Query Rules

By now you have ProxySQL and Group Replication working properly. However, if you monitor the server load for a while, you might see that all traffic goes to the writer group (hostgroup 2 in this case). Even when you set up single-primary mode for MySQL Group Replication, ProxySQL does not “automatically” route read traffic to the reader nodes. Routing and load balancing are the DBA’s job, so they need to be done with ProxySQL query rules.

  1. To split read and write traffic, you can check the ProxySQL Wiki entry. For example, to simply route read traffic to reader group (3) and write traffic to writer group (2), we can use the following simple regex:
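A sketch of the two rules: SELECT ... FOR UPDATE stays on the writer hostgroup (2) since it takes row locks, and all other SELECTs go to the reader hostgroup (3); everything else falls through to the user's default_hostgroup:

```sql
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (1, 1, '^SELECT.*FOR UPDATE$', 2, 1);
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (2, 1, '^SELECT', 3, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```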

    Try putting traffic through the ProxySQL listening port to see the routing in action. Note that the above approach “blindly” routes read and write traffic to the reader and writer groups. More effort on monitoring and routing is needed to achieve a better load distribution among the DB nodes.
  2. The next thing to consider is the Query Cache. As you may know, the MySQL query cache is deprecated as of MySQL 5.7.20 and removed in MySQL 8.0. In addition, the MySQL query cache is not really helpful in terms of performance due to the way it handles locking and cache invalidation. Caching should rather be done at the proxy level so that cached results can be reused system-wide. I would recommend checking the good article about the ProxySQL query cache on the Percona Blog.
    • Guess what? I am lazy now 🙂

Some useful ProxySQL Monitoring Queries

  1. The log file is at /var/lib/proxysql/proxysql.log
  2. Queries relating to monitoring connection pools
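A few queries against the ProxySQL stats tables (run in the Admin interface):

```sql
-- Per-backend connection pool state and traffic
SELECT * FROM stats.stats_mysql_connection_pool;

-- Which command types are flowing through the proxy
SELECT * FROM stats.stats_mysql_commands_counters WHERE Total_cnt > 0;

-- How often each query rule matched
SELECT rule_id, hits FROM stats.stats_mysql_query_rules;
```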

