Support for Percona XtraDB Cluster in ProxySQL

How the scheduler and external scripts handle failover (Percona and Marco examples)

In part one of this series, I illustrated how simple scenarios may fail or have problems when using the Galera native support inside ProxySQL. In this post, I will repeat the same tests, but using the scheduler option and an external script.

The Scheduler

First, a brief explanation of the scheduler.

The scheduler inside ProxySQL was created to allow administrators to extend ProxySQL's capabilities. The scheduler gives you the option to add any kind of script or application and run it at a specified interval. The scheduler was also the first way we had to deal with Galera/Percona XtraDB Cluster (PXC) node management in case of issues.

The scheduler table is composed as follows:
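For reference, this is roughly how the table is defined in the ProxySQL admin interface (the exact CHECK constraints may differ slightly between ProxySQL versions):

```sql
CREATE TABLE scheduler (
    id INTEGER NOT NULL,
    active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 1,
    interval_ms INTEGER CHECK (interval_ms>=100 AND interval_ms<=100000000) NOT NULL,
    filename VARCHAR NOT NULL,
    arg1 VARCHAR,
    arg2 VARCHAR,
    arg3 VARCHAR,
    arg4 VARCHAR,
    arg5 VARCHAR,
    comment VARCHAR NOT NULL DEFAULT '',
    PRIMARY KEY(id)
);
```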

The relevant elements are:

  • Active: defines whether or not the scheduler should execute the external script.
  • Interval_ms: frequency of execution. ProxySQL does NOT check whether the previous execution has terminated, so the script itself must include a check to prevent multiple instances from being launched, which would probably create conflicts and resource issues.
  • Filename: the FULL path of the script/application you want to execute.
  • Arg(s): whatever you want to pass as arguments. When you have a complex script, either use a configuration file or collapse multiple arguments into a single string.

The Scripts

In this blog, I will present two different scripts (as examples). Both cover the scenarios from the previous article and can do more, but I will focus only on that part for now.

One script is written in Bash and is a port of the proxysql_galera_checker that Percona was using with proxysql-admin in ProxySQL version 1.4. The script is available from Percona-Lab (git clone).

The other, written by me in Perl, is probably the first such script, dating back to 2016. I have made some enhancements and bug fixes to it over the years. It is also available on GitHub (git clone).

Both are offered here as examples, and I am not suggesting you use them in critical production environments.

The Setup

To use the two scripts, some custom setup is needed. First of all, check that the files are executable by the user running ProxySQL.

Let's start with mine, in Perl.

To make it work, we need to define a set of host groups that will act as Reader/Writer/Backup-Writer/Backup-Reader (the backup groups are optional but recommended). The difference from the native support is that instead of declaring them in a specialized table, we will use the mysql_servers table.

  • Writer: 100
  • Readers: 101
  • Backup Writers: 8100
  • Backup Readers: 8101

Given the above, on top of the servers already defined in the previous article, we just need to add the entries for the 8100/8101 backup host groups.

For example:
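A minimal sketch of what that looks like; the node addresses and weights here are placeholders, so reuse the ones already present in your mysql_servers table for HGs 100/101:

```sql
-- Add the backup host groups (8100 = backup writers, 8101 = backup readers).
-- Hostnames/IPs and weights are illustrative only.
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight) VALUES
  (8100,'192.168.4.22',3306,10000),
  (8100,'192.168.4.23',3306,100),
  (8100,'192.168.4.233',3306,100),
  (8101,'192.168.4.22',3306,100),
  (8101,'192.168.4.23',3306,10000),
  (8101,'192.168.4.233',3306,10000);

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```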

After that we need to insert the instructions for the scheduler:
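The exact entry depends on where you keep the script and on your admin credentials. As a sketch, with placeholder path, credentials, and log location (check the script's README for the exact names of the auxiliary options) and the parameters discussed below:

```sql
-- Scheduler entry for galera_check.pl, inserted deactivated (active = 0).
INSERT INTO scheduler (id, active, interval_ms, filename, arg1)
VALUES (10, 0, 2000, '/var/lib/proxysql/galera_check.pl',
  '-u=admin -p=admin -h=127.0.0.1 -P=6032 --debug=0 --log=/var/lib/proxysql/galeraLog --active_failover=1 --retry_down=2 --retry_up=1 -H=100:W,101:R');

LOAD SCHEDULER TO RUNTIME;
SAVE SCHEDULER TO DISK;
```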

The result will be:
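A quick check against the admin interface should show the row you just inserted:

```sql
SELECT id, active, interval_ms, filename, arg1 FROM scheduler;
-- Expect one row with active = 0 until you explicitly enable it.
```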

Please refer to the instructions on GitHub for the details of the parameters. What we can specify here is:

  • -H=100:W,101:R: the host groups the script should treat as the ones dealing with our PXC cluster (writer and reader).
  • --active_failover=1: the failover method to apply.
  • --retry_down=2 --retry_up=1: whether action should be taken immediately or only after a given number of retries. This is to avoid a possible yo-yo effect caused by transient delays from the node or the network.

Always insert the scheduler entry with active=0 and enable it only when everything is set up and you are ready to go. Once the above is done, galera_check is ready to be used by ProxySQL.

Percona proxysql_galera_checker

One limitation of this script is that you cannot use different IPs for the PXC internal communication and for the ProxySQL-facing side. Given that, we need to modify the setup from the previous blog to match the script's requirements. Here too we need to define which HG will be the writer and which the reader, but we will specify the internal IPs, and, of course, ProxySQL must have access to that network as well.

  • Writer HG: 200
  • Reader HG: 201
  • Network IPs: 10.0.0.22, 10.0.0.23, 10.0.0.33

Given that, our ProxySQL setup will be:
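A sketch of the server definitions, using the internal network IPs listed above (weights are illustrative):

```sql
-- Writer HG 200 and reader HG 201 on the PXC internal network.
INSERT INTO mysql_servers (hostgroup_id, hostname, port, weight) VALUES
  (200,'10.0.0.22',3306,1000000),
  (201,'10.0.0.22',3306,100),
  (201,'10.0.0.23',3306,10000),
  (201,'10.0.0.33',3306,10000);

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```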

As you can see, here we also need to redefine the user and query rules to match the different HGs; if you use the same ones (100-101), there is no need to do that. Now it's time to add the line for the scheduler:
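Again as a sketch, with a placeholder script path, config file, and log location (the exact auxiliary options are in the script's documentation), and the parameters discussed below:

```sql
-- Scheduler entry for the Percona proxysql_galera_checker, also inserted deactivated.
-- id 20 just avoids clashing with the entry used for the other script.
INSERT INTO scheduler (id, active, interval_ms, filename, arg1)
VALUES (20, 0, 2000, '/var/lib/proxysql/proxysql_galera_checker.sh',
  '--config-file=/etc/proxysql-admin.cnf --write-hg=200 --read-hg=201 --writer-is-reader=always --mode=singlewrite --priority=10.0.0.22:3306,10.0.0.23:3306,10.0.0.33:3306 --log=/var/lib/proxysql/pxc_check.log');

LOAD SCHEDULER TO RUNTIME;
SAVE SCHEDULER TO DISK;
```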

Also in this case, please refer to the parameter documentation for the details, but it's worth mentioning:

  • --write-hg=200 --read-hg=201: host group definitions.
  • --writer-is-reader=always: please keep this as ALWAYS; as we will see, you do not need anything different.
  • --mode=singlewrite: possible modes are load balancer and single writer. The load-balancer mode is a leftover from the past: never, ever use Galera/PXC in multi-primary mode, period.
  • --priority=10.0.0.22:3306,10.0.0.23:3306,10.0.0.33:3306: this is where we define the priority order for the writers.

Also in this case, when loading the scheduler entry, keep it deactivated (active=0) and enable it only when ready.

The Tests

Read Test

The first test is the simple read test: while sysbench runs in read-only mode, we remove one reader after the other.
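To see where the reads actually land, I keep an eye on the ProxySQL connection-pool statistics; a query along these lines (host groups 100/101/8xxx for my script, 200/201 for the Percona one) does the job:

```sql
-- Observe read/write distribution per node and host group.
SELECT hostgroup, srv_host, srv_port, status, ConnUsed, Queries
FROM stats_mysql_connection_pool
WHERE hostgroup IN (100, 101, 8100, 8101)
ORDER BY hostgroup, srv_host;
```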

Marco script:

As we can see, by just setting the weights we are able to prevent most reads from being sent to the Writer; some will still arrive there, but the amount is negligible. Once we put all the readers down…

Marco script: 

With only the last node left, even with its low weight it will serve all the reads.

Percona Script:

Remove the reads:

In both cases, no issue at all; the writer takes the load of the reads only when left alone. 

Maintenance Test

In this test, I will simply put the node into maintenance mode using pxc_maint_mode=maintenance, as done in the other article. As a reminder, this was working fine also with the native Galera support.
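On the PXC node to be drained, that is a single statement (the drain pace is governed by pxc_maint_transition_period):

```sql
-- Run on the PXC node you want to drain.
SET GLOBAL pxc_maint_mode = 'MAINTENANCE';
```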


Marco script:

After:

A new node was elected, and connections on the old writer were able to drain gracefully thanks to the OFFLINE_SOFT status. Now let's put the node back by removing it from maintenance:
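On the node itself, that is again a single statement:

```sql
-- Run on the same PXC node once maintenance is over.
SET GLOBAL pxc_maint_mode = 'DISABLED';
```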

The node WILL NOT fail back by default (this is by design); this allows you to warm up caches or do anything else that may be meaningful before moving the node to the Primary role again.

The Percona script will behave a bit differently:

Then I put the node under maintenance:

Connections are moved to the new Writer gradually, at a pace that depends on how the application handles its connections.

But when I put the node back from maintenance:

The old Writer is put back as Primary. As indicated above, I consider this wrong, given that we risk putting back a node that is cold and that can affect production performance. It is true that putting it back from maintenance is a controlled action, but the more checks the better.

Testing Node Crash

Marco script:

To emulate a crash I will kill the mysqld process with kill -9 <pid>.

Kill the process:

Failover consistently takes about five seconds, two of which come from the scheduler running every two seconds, plus the configured retry. The new Primary is serving traffic while the failed node is removed:

Percona script:

Also, in this case, the Percona script behaves a bit differently.

Before the crash:

Then kill the process:

Recovery time varies, but it is around 6-12 seconds.

A new Primary is elected. But on node recovery:

As with maintenance, when the node comes back it is by default moved to the Primary role. As already explained, I consider this wrong and dangerous, but it is one point of view about what a script should do.

Conclusions

PXC is a complex product, it can be deployed in many different ways, and it is not easy, or even possible, to identify all of the possible variants.

Having the opportunity to use the native support could be the easier solution, but as illustrated in part one of this series, misbehavior is just around the corner and it may seriously impact your production environment.

The use of the scheduler with a properly developed script/application that handles the Galera support can guarantee better consistency and behavior that matches your custom expectations.

There are solutions out there that may fit you and your needs, but if not, you can develop your own and be sure it remains consistent when changing versions of ProxySQL and/or PXC/Galera. In the end, once the main work is done, maintaining a script is much easier than having to patch a product or wait for a feature request to be implemented.

I know it may look like a step back, moving away from native support and using a scheduler again. But it is not; it's just the acknowledgment that sometimes it is better to keep things simple and do some specific tuning/work, rather than trying to put the universe in a bottle, which overcomplicates the problem.