MySQL is gearing up with its own high-availability Group Replication solution
Do It Yourself
Limits
Group Replication does not support transaction savepoints, which also rules out the legacy mysqldump command with the --single-transaction parameter as a backup method. A backup solution for Group Replication therefore requires either MySQL Enterprise Backup [5], which is available with an enterprise license from Oracle, or the free Percona XtraBackup [6] software. The query shown in Listing 1 reveals whether or not your server uses savepoints.
Listing 1
Testing for Savepoints
mysql> SELECT COUNT_STAR
         FROM performance_schema.events_statements_summary_global_by_event_name
        WHERE event_name LIKE "%/savepoint";
+------------+
| COUNT_STAR |
+------------+
|          1 |
+------------+
1 row in set (0.00 sec)
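If savepoints turn up, a physical backup tool is the way out. As a rough sketch (the target directory and credentials are placeholders), a full backup with Percona XtraBackup consists of a backup phase and a prepare phase:

xtrabackup --backup --user=root --password --target-dir=/backups/full
xtrabackup --prepare --target-dir=/backups/full

The prepared directory can then be copied to a new node to seed it.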
Hands On
The following sections demonstrate the installation and setup of MySQL Group Replication. The example starts with three MySQL 5.7.17 servers that do not house any data.
The settings described in Listing 2 must be enabled on all servers (in the /etc/my.cnf file). Global transaction IDs and binary logging are enabled, and all nodes that belong to the group need to be listed with the port used for Group Replication.
Listing 2
Configuration Settings
gtid_mode = ON
enforce_gtid_consistency = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
binlog_checksum = NONE
log_slave_updates = ON
log_bin = binlog
binlog_format = ROW
transaction_write_set_extraction = XXHASH64
plugin-load = group_replication=group_replication.so
group_replication_group_name = "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaabcdef"
group_replication_start_on_boot = on
group_replication_group_seeds = "192.168.1.10:24901,192.168.1.11:24901,192.168.1.12:24901"
group_replication_bootstrap_group = off
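After restarting a server with these settings, it is worth confirming that the plugin loaded and that GTIDs are active before going any further; for example:

mysql> SELECT PLUGIN_STATUS FROM information_schema.PLUGINS
        WHERE PLUGIN_NAME = 'group_replication';
mysql> SHOW VARIABLES LIKE 'gtid_mode';

The plugin should report ACTIVE and gtid_mode should be ON.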
One block of settings needs to be different on each of the nodes, including the server ID, port (for client connections), and local address that has a port for group communications (Listing 3). Because synchronization of the nodes relies on the MySQL binary log infrastructure, an extra replication user also needs to be created (Listing 4).
Listing 3
Node-Specific Settings
server-id = 10001
port = 3306
group_replication_local_address = "192.168.1.10:24901"
Listing 4
Replication User
CREATE USER IF NOT EXISTS rpl@'%' IDENTIFIED BY 'Rplpass1!';
GRANT REPLICATION SLAVE ON *.* TO rpl@'%';
CHANGE MASTER TO MASTER_USER='rpl', MASTER_PASSWORD='Rplpass1!' FOR CHANNEL 'group_replication_recovery';
You then start Group Replication on the first node. Because it cannot yet connect to any other node, it must first bootstrap the group:
SET GLOBAL group_replication_bootstrap_group=ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group=OFF;
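Whether the bootstrap worked can be checked in the Performance Schema; the first node should show up with the state ONLINE:

mysql> SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE
         FROM performance_schema.replication_group_members;

The same query later lists all members of the group, which makes it the most important monitoring tool for day-to-day operation.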
Even if all the servers you use are empty, you still need to transfer the dataset from the first server to the others with the help of a dump; otherwise, the global transaction IDs of the additional systems make it unnecessarily difficult to set up replication. To do this, create a dump on the first server:
mysqldump -uroot -p -A --triggers --routines --events > /tmp/instanz1.sql
Next, copy this dump to the other nodes and import it there:
mysql -uroot -p --init-command="RESET MASTER" < /tmp/instanz1.sql
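The RESET MASTER in the init command wipes the node's own GTID history so that only the transactions contained in the dump are recorded. After the import, the node's GTID state should match that of the first server, which you can verify with:

mysql> SELECT @@GLOBAL.gtid_executed;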
Now restart the MySQL service on the respective nodes, then re-create the recovery channel and start Group Replication:
CHANGE MASTER TO MASTER_USER='rpl', MASTER_PASSWORD='Rplpass1!' FOR CHANNEL 'group_replication_recovery';
START GROUP_REPLICATION;
All the requirements are now met, and Group Replication is running on every node. Write queries are already allowed against the first node.
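In the default single-primary mode, you can ask the Performance Schema which member currently accepts writes; the result is the server UUID of the primary:

mysql> SELECT VARIABLE_VALUE
         FROM performance_schema.global_status
        WHERE VARIABLE_NAME = 'group_replication_primary_member';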
Operation
If the primary node fails, the system automatically elects a new node as the primary and allows writes there. If the failed node later rejoins the group, it picks up the changes that accumulated in the meantime from one of the other nodes.
One pitfall remains: If a node loses its connection to the group (e.g., because of a network failure), it continues to run as a standalone server from that point on and accepts write queries again. The cluster then cannot reintegrate the node without restoring it from a completely fresh backup taken on one of the other nodes. There is still room for improvement, but as a first stable release, Group Replication leaves a good impression.
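If a disconnected node has not diverged (i.e., no writes reached it while it was isolated), an attempt to rejoin is simply a matter of restarting replication on that node; after diverging writes, the fresh backup mentioned above is unavoidable:

mysql> STOP GROUP_REPLICATION;
mysql> START GROUP_REPLICATION;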