Template:OXLoadBalancingClustering Database

Overview

You can choose between Galera or two-sided Master/Slave ("Master/Master") replication.

Galera database setup

OX only supports the "Percona XtraDB Cluster" flavor of the Galera database.

Installation

To install the software, we first need to configure the repository and its signing key, update our package sources and install the packages:

gpg --keyserver  hkp://keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
gpg -a --export CD2EFD2A | apt-key add -

cat >/etc/apt/sources.list.d/percona.list <<EOF
deb http://repo.percona.com/apt squeeze main
deb-src http://repo.percona.com/apt squeeze main
EOF

apt-get update
apt-get install percona-xtradb-cluster-client-5.5 percona-xtradb-cluster-server-5.5 percona-xtrabackup

Configuration

In addition to the standard MySQL configuration, you need some configuration variables for Galera. Galera is also referred to as "wsrep" ("write set replication"), which is the internal name of the replication mechanism Galera is based on.

It is convenient to put the Galera configuration in its own file in /etc/mysql/conf.d and make sure this file gets included from the main MySQL config file. This file can look as follows:

[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_name="my_wsrep_cluster"
wsrep_cluster_address="gcomm://<GALERA_NODE1_IP>,<GALERA_NODE2_IP>,<GALERA_NODE3_IP>"
wsrep_sst_method=xtrabackup
wsrep_sst_auth=<db_user>:<db_password>

Some comments: most of these settings are required by Galera exactly as they are shown here. The last four lines (cluster name, node addresses, SST method and SST credentials) can or must be adjusted to the local conditions.
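
Note that the account given in wsrep_sst_auth has to exist on the database nodes with privileges sufficient for the xtrabackup-based state transfer (typically at least RELOAD, LOCK TABLES and REPLICATION CLIENT). A minimal sketch, reusing the placeholders from the example above:

mysql> CREATE USER '<db_user>'@'localhost' IDENTIFIED BY '<db_password>';
mysql> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO '<db_user>'@'localhost';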

Cluster startup

Whenever no node of a Galera cluster is running (for example before starting the cluster for the very first time), the first Galera node needs to be started with the wsrep_cluster_address parameter overridden to the value "gcomm://" in order to denote that the node shall not try to join an existing cluster (which would inevitably fail, because no other cluster nodes are running yet), but bootstrap the cluster instead. This override is most conveniently done on the command line instead of editing the wsrep.cnf file back and forth.

Nodes 2 and 3 can subsequently be started without overriding this parameter.

So, for the first node, the startup command is

mysqld_safe --wsrep_cluster_address=gcomm:// &

and the subsequent nodes can be started with

mysqld_safe

Since the standard service startup scripts cannot account for this special treatment, we recommend not to use them.

You can check the status of the Galera cluster using the SQL query

show status like 'wsrep%';

The output is lengthy. The most relevant fields are given as follows:

+----------------------------+----------------------------------------------------------------------+
| Variable_name              | Value                                                                |
+----------------------------+----------------------------------------------------------------------+
| wsrep_local_state_comment  | Synced                                                               |
| wsrep_incoming_addresses   | <GALERA_NODE1_IP>:3306,<GALERA_NODE2_IP>:3306,<GALERA_NODE3_IP>:3306 |
| wsrep_cluster_size         | 3                                                                    |
| wsrep_cluster_status       | Primary                                                              |
| wsrep_connected            | ON                                                                   |
| wsrep_ready                | ON                                                                   |
+----------------------------+----------------------------------------------------------------------+
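
For scripted monitoring, a single status variable can also be queried directly from the shell; this is just a sketch and assumes the credentials are supplied, e.g. via ~/.my.cnf:

$ mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size';"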

Master/Master database setup

This section describes the setup of Master/Master replication for a new Open-Xchange database cluster. During configuration and initialization, other database operations must be prohibited.

Master/Master replication consists of two Master/Slave configurations set up vice versa, so that afterwards each server is the slave of the other.

The server IPs in this example are 1.1.1.1 and 9.9.9.9.

Start up both database machines and install the MySQL server package:

$ apt-get install mysql-server

During the installation, a dialog will show up asking you to set a password for the MySQL 'root' user.
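
If the installation does not ask for a password (for example in a non-interactive environment), it can also be set afterwards with the standard mysqladmin tool; 'secret' is only a placeholder here:

$ mysqladmin -u root password 'secret'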

Open the MySQL configuration file on both servers:

$ vim /etc/mysql/my.cnf

Modify or enable the following configuration options in the [mysqld] section. Use 1 as ${unique Number} on the server 1.1.1.1 and 2 on 9.9.9.9:

bind-address            = 0.0.0.0
server-id               = ${unique Number}
log_bin                 = /var/log/mysql/mysql-bin.log
binlog_format           = statement
max_allowed_packet      = 16M
  • bind-address specifies the network address where MySQL is listening for network connections. Since the MySQL slave and both Open-Xchange Servers are dedicated machines, it is required to have the master accessible through the network.
  • server-id is just a unique number within an environment with multiple MySQL servers. It needs to be unique for each server in a replication cluster.
  • log_bin enables the MySQL binary log, which is required for Master/Master replication. In general, every statement triggered at the database is stored there to get distributed through the database cluster.

To apply the configuration changes, restart the MySQL servers.

$ /etc/init.d/mysql restart

Then log in to MySQL with the credentials given during the MySQL installation process:

$ mysql -u root -p
Enter password:
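
Optionally, you can verify that the configuration changes took effect; server_id and log_bin are regular MySQL system variables that can be inspected after logging in:

mysql> SHOW VARIABLES LIKE 'server_id';
mysql> SHOW VARIABLES LIKE 'log_bin';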

First Master configuration

Choose one server to start with as the first Master (here we use 1.1.1.1).

Create a MySQL user with the "REPLICATION SLAVE" privilege. This account is used by the MySQL slave to fetch database updates. In this example, the username is "replication":

 mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'9.9.9.9' IDENTIFIED BY 'secret';

Verify that the MySQL daemon writes a binary log and note the log Position and File name:

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |     1111 |              |                  |
+------------------+----------+--------------+------------------+

First Slave configuration

On 9.9.9.9, set the MySQL system user as owner of the binary log that has just been copied to the slave.

$ chown mysql:adm /var/log/mysql/*

Configure MySQL on 9.9.9.9 to use 1.1.1.1 as its Master Server. (Use the actual log File name and Position which you just obtained with the command SHOW MASTER STATUS on 1.1.1.1, as explained above.)

mysql> CHANGE MASTER TO MASTER_HOST='1.1.1.1', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1111;

Start the MySQL slave replication

mysql> START SLAVE;

And check the status

mysql> SHOW SLAVE STATUS\G

"Slave_IO_Running" and "Slave_SQL_Running" should be set to "yes". Furthermore "Read_Master_Log_Pos" should be counting and "Seconds_Behind_Master" should be approaching the 0 mark.

Second Master configuration

The first Master/Slave replication is now working, and the "reverse" replication needs to be prepared. Now create the replication user on 9.9.9.9:

 mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'1.1.1.1' IDENTIFIED BY 'secret';

Verify that the MySQL daemon writes a binary log and note the log Position and File name:

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000009 |     9999 |              |                  |
+------------------+----------+--------------+------------------+

Second Slave configuration

1.1.1.1 is now the slave in this context and 9.9.9.9 is the master. Log in to 1.1.1.1.

Configure MySQL on 1.1.1.1 to use 9.9.9.9 as its Master Server. Use the log File name and Position which you just obtained with SHOW MASTER STATUS on 9.9.9.9.

mysql> CHANGE MASTER TO MASTER_HOST='9.9.9.9', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000009', MASTER_LOG_POS=9999;

Start the MySQL slave replication

mysql> START SLAVE;

And check the status

mysql> SHOW SLAVE STATUS\G

"Slave_IO_Running" and "Slave_SQL_Running" should be set to "yes". Furthermore "Read_Master_Log_Pos" should be counting and "Seconds_Behind_Master" should be approaching the 0 mark.

Also check the syslog to verify that the replication has been successfully started:

$ tail -fn20 /var/log/syslog
Jul 26 19:03:45 dbslave mysqld[4718]: 090726 19:03:45 [Note] Slave I/O thread: connected to master 'replication@1.1.1.1:3306',  replication started in log 'mysql-bin.000001' at position 10000

Testing Master/Master

On 1.1.1.1, create a new database in MySQL:

mysql> CREATE DATABASE foo;

Verify that the database is also available on 9.9.9.9 afterwards:

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| foo                |
| mysql              |
+--------------------+

Delete the new database on 9.9.9.9:

mysql> DROP DATABASE foo;

Check that the database has also been removed on 1.1.1.1:

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
+--------------------+

Creating Open-Xchange user

Now set up access for the Open-Xchange Server database user 'openexchange' to configdb and the oxdb for both groupware server addresses. These databases do not exist yet, but will be created during the Open-Xchange Server installation.

Note: The IPs in this example belong to the two different Open-Xchange Servers, please adjust them accordingly.

mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret';
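
You can verify the resulting privileges afterwards, for example:

mysql> SHOW GRANTS FOR 'openexchange'@'10.20.30.213';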