Load balancing and clustering Open-Xchange (WORK IN PROGRESS)
General
Open-Xchange Server 6 is primarily built for the Software-as-a-Service world. Hosting and telecommunication providers around the world use Open-Xchange to offer hosted services to their customers. Open-Xchange Server 6 scales vertically and horizontally, which means you can either use a more powerful server or add more machines to fulfill resource requirements. While upgrading a single server installation inevitably reaches a point where costs rise faster than performance gains, adding some simple machines to the installation provides a linear cost increase at the price of slightly more complex administration. Besides the fiscal impact of using medium-sized servers, another key argument for clustering is service availability: single nodes can go down for maintenance without influencing the general service availability. A typical scenario for clustering is virtualization, where multiple nodes can provide resources on demand.
One of the main principles of Open-Xchange Server 6 is the ability to utilize several medium sized servers. This guide will outline the basic principles of clustering Open-Xchange Server instances and provide load balancing to utilize all nodes of a cluster.
Requirements
Since clustering and load balancing is an advanced topic, skills in operating system and Open-Xchange Server 6 administration are required. To gain those skills, please refer to the documentation repository and general system administration training. With this guide we're going to set up five machines in total, therefore it is recommended to get some training in a virtualized environment first. When rolling out the setup it is recommended to use real hardware or enterprise grade virtualization solutions like VMware ESX or Citrix XEN. If VMware is used, please make sure that VMware Tools are installed on all hosts to ensure optimal network performance. The following types of servers will be set up:
- 1 Webserver (Apache)
- 2 Groupware nodes (Open-Xchange Server 6)
- 2 Database servers (MySQL Master/Slave)
To maintain consistency throughout the guide, each system gets a unique name which can be set as its hostname. The IP addresses are also used throughout the whole guide, but they may differ in your actual network setup. All systems run Debian GNU/Linux 5.0 (Lenny); any other supported platform works as well. All assumptions and instructions about system configuration are based on a minimal installation of the operating system. This guide is valid for Open-Xchange 6.10. A sample /etc/hosts mapping is sketched after the list below.
- web (10.20.30.210)
- oxgw01 (10.20.30.213)
- oxgw02 (10.20.30.215)
- dbmaster (10.20.30.217)
- dbslave (10.20.30.219)
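So that these names resolve on every machine even without DNS, the mapping can be added to /etc/hosts on each of the five systems. This is only a minimal sketch using the names and addresses from this guide; adjust it to your network or rely on DNS instead.
$ vim /etc/hosts
10.20.30.210   web
10.20.30.213   oxgw01
10.20.30.215   oxgw02
10.20.30.217   dbmaster
10.20.30.219   dbslave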
After finishing the guide, the setup will provide the following load balancing and clustering features:
- Session load balancing
- Open-Xchange clustering
- Database master/slave replication
- Database read/write separation
- Distributed file storage
- Remote logging
Concepts
Master/Slave database setup
Start up both database machines and install the MySQL server package
$ apt-get install mysql-server
During the installation, a dialog will show up to set a password for the MySQL 'root' user. Please set a strong password here.
Master configuration
Open the MySQL configuration file with your favorite editor
$ vim /etc/mysql/my.cnf
Modify or enable the following configuration options
bind-address = 10.20.30.217
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
- bind-address specifies the network address MySQL listens on for network connections. Since the MySQL slave and both Open-Xchange Servers are dedicated machines, the master has to be reachable through the network. (A quick check that MySQL actually listens on this address is sketched right after the restart below.)
- server-id is just a number within an environment with multiple MySQL servers. It needs to be unique for each server.
- log_bin enables the MySQL binary log, which is required for master/slave replication. In general, every statement executed on the database is stored there so it can be distributed through the database cluster.
To apply the configuration changes, restart the MySQL server.
$ /etc/init.d/mysql restart
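Optionally, verify that MySQL now listens on the configured address instead of only on localhost. This quick check assumes the net-tools package (netstat) is installed; the output should show a LISTEN socket on 10.20.30.217:3306.
$ netstat -tln | grep 3306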
Then log in to MySQL with the credentials set during the MySQL installation process
$ mysql -u root -p
Enter password:
Configure replication permissions for the MySQL slave server by creating the MySQL user "replication". This account is used by the MySQL slave to get database updates from the master. Please choose a strong password here.
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'10.20.30.219' IDENTIFIED BY 'secret';
Now set up access for the Open-Xchange Server database user openexchange to configdb and the groupware database for both groupware server addresses. These databases do not exist yet, but will be created during the Open-Xchange Server installation.
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret';
Verify that the MySQL master is writing a binary log and note the values of File and Position
mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |     1082 |              |                  |
+------------------+----------+--------------+------------------+
Copy the MySQL master binary log and the index file to the slave. This is required for initial synchronization.
$ scp /var/log/mysql/mysql-bin.* root@10.20.30.219:/var/log/mysql
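This simple copy is sufficient here because the master has just been installed and does not contain any data yet. If an existing master with data were to be replicated, a consistent snapshot would usually be taken first; the following lines are only a rough sketch of that alternative and are not needed for this guide's setup.
mysql> FLUSH TABLES WITH READ LOCK;
mysql> SHOW MASTER STATUS;
$ mysqldump -u root -p --all-databases > /tmp/masterdump.sql
mysql> UNLOCK TABLES;
$ scp /tmp/masterdump.sql root@10.20.30.219:/tmp/
The dump would then be imported on the slave before configuring and starting the replication.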
Slave configuration
Set the MySQL system user as owner of the binary logs that have just been copied to the slave.
$ chown mysql:adm /var/log/mysql/*
Open the MySQL configuration file with your favorite editor
$ vim /etc/mysql/my.cnf
Modify or enable the following configuration options. Just like the master, the slave requires a unique server-id and needs to listen on an external network address. Activating the binary log is not required on the slave.
bind-address = 10.20.30.219
server-id = 2
To apply the configuration changes, restart the MySQL server.
$ /etc/init.d/mysql restart
Then log in to MySQL with the credentials set during the MySQL installation process
$ mysql -u root -p
Enter password:
Configure the replication from the master, based on the 'replication' user and the master's binary log status. The values for MASTER_LOG_FILE and MASTER_LOG_POS must equal the output of the SHOW MASTER STATUS command on the MySQL master.
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.217', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;
Now set up access for the Open-Xchange Server database user 'openexchange' to configdb and the oxdb for both groupware server addresses. These databases do not exist yet, but will be created during the Open-Xchange Server installation.
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret';
Start the MySQL slave replication
mysql> START SLAVE;
Check the slave status; sometimes it can take a while until the replication starts. Slave_IO_Running: Yes shows that the MySQL slave is exchanging data with the MySQL master.
mysql> SHOW SLAVE STATUS \G
[...]
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
[...]
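The same status can also be pulled non-interactively, which is handy for monitoring. The following one-liner is only a sketch; Seconds_Behind_Master is an additional field of the same output that shows how far the slave lags behind the master and should stay close to 0.
$ mysql -u root -p -e "SHOW SLAVE STATUS \G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"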
Also check the syslog to verify that the replication has been started successfully
$ tail -fn20 /var/log/syslog
Jul 26 19:03:45 dbslave mysqld[4718]: 090726 19:03:45 [Note] Slave I/O thread: connected to master 'replication@10.20.30.217:3306', replication started in log 'mysql-bin.000001' at position 1082
Testing Master/Slave
On the master, create a new database in MySQL:
mysql> CREATE DATABASE foo;
Check if this database is available on the slave:
mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| foo                |
| mysql              |
+--------------------+
Delete the database on the master
mysql> DROP DATABASE foo;
Check if the database has been removed at the slave
mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
+--------------------+
Distributed file storage
The distributed file storage will be set up on the MySQL database master server. Of course it is possible to use a dedicated file server or an already existing storage system, but this guide does not cover that. Placing the file storage on the database master has several reasons:
- Open-Xchange Server does not require much I/O on the file storage during typical operation
- Data for groupware objects like the Infostore is stored in the file storage while the file metadata is stored in the database. Consistency between the database and the file storage is critical.
Installation of the NFS server
Open-Xchange Server is able to access various storage backends; NFS (Network File System) is a mature and proven one. Install the following packages on the MySQL master server to enable NFS storage
$ apt-get install nfs-kernel-server nfs-common portmap
Create a directory for the Open-Xchange Server file storage.
$ mkdir /var/opt/filestore
Open-Xchange Server runs as the user open-xchange. Create this user account on the NFS server; it is required for accessing the NFS export later. NFS will map the user id (uid) and group id (gid), therefore they need to be equal on the Open-Xchange Server nodes and the NFS server.
$ useradd open-xchange
Check the uid and gid; typically it's 1001:1001 since it is the first regular user account on the system.
$ grep open-xchange /etc/passwd
open-xchange:x:1001:1001::/home/open-xchange:/bin/sh
Make the newly created user own the filestore at the NFS server
$ chown open-xchange:open-xchange /var/opt/filestore
Configure the NFS server to provide this directory to both Open-Xchange Server nodes in read and write mode. Enter the uid and gid of the open-xchange user to the NFS export.
$ vim /etc/exports
/var/opt/filestore 10.20.30.213(rw,no_subtree_check,all_squash,anonuid=1001,anongid=1001) 10.20.30.215(rw,no_subtree_check,all_squash,anonuid=1001,anongid=1001)
Apply the changes to the running NFS server
$ exportfs -a
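To verify that the export is visible on the network, list the active exports. showmount is part of the nfs-common package installed above, so this is just an optional check; both commands should list /var/opt/filestore with the two groupware node addresses.
$ exportfs -v
$ showmount -e 10.20.30.217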
Installation of NFS clients
Both Open-Xchange Server machines are NFS clients since they mount the distributed file storage. It is critical that both Open-Xchange Server nodes can access the same file storage since, due to session load balancing, a user may log in to either Open-Xchange Server.
Install required NFS client packages on both Open-Xchange Server nodes
$ apt-get install nfs-common portmap
Create mountpoints for the filestore at both Open-Xchange Server nodes
$ mkdir /var/opt/filestore/
Open-Xchange Server runs as the user open-xchange. To let this user access the filestore, create the user account on all Open-Xchange Server nodes. NFS will map the user id (uid) and group id (gid) to the ones at the NFS server, therefore they need to be equal on the Open-Xchange Server nodes and the NFS server.
$ useradd open-xchange
$ grep open-xchange /etc/passwd
open-xchange:x:1001:1001::/home/open-xchange:/bin/sh
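If the automatically assigned uid or gid on a groupware node differs from the 1001:1001 used on the NFS server, the account can be created with explicit values instead. This is only a sketch and assumes uid and gid 1001 are still unused on that node:
$ groupadd -g 1001 open-xchange
$ useradd -u 1001 -g 1001 open-xchange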
Add the NFS storage to the fstab configuration file to mount the storage automatically on boot at both Open-Xchange Server nodes
$ vim /etc/fstab
10.20.30.217:/var/opt/filestore /var/opt/filestore nfs defaults 0 0
Testing the distributed file storage
Mount the filestore manually on both Open-Xchange Server nodes to check if the connection works properly
$ mount /var/opt/filestore
To test the distributed storage, create a file on one Open-Xchange Server node as user open-xchange
$ su open-xchange
$ touch /var/opt/filestore/foo
Then check if the file is available and writable on the other node, also as user open-xchange
$ su open-xchange
$ ls -la /var/opt/filestore
$ rm /var/opt/filestore/foo
Session load balancing
Since configuration of system services for the corresponding operating system is already described in the general installation guides, this guide will focus on the specifics of creating a distributed setup. Please refer to the installation guides for any configuration that is not mentioned here.
The web server in this setup is a pure frontend server. This means it accepts and responds to requests sent by a client but does not contain any groupware logic. All requests are forwarded to the Open-Xchange Servers through the AJP13 protocol. The configuration will allow round-robin session load balancing; basically, both Open-Xchange Servers are configured as backends for answering requests with a 50:50 probability of being chosen. Once a new session is created, that session is bound to the groupware server it has been created on.
For the web server we only need a very small set of packages, basically only packages that start with open-xchange-gui, where most of the additional packages are language packs or plugins. Add the Open-Xchange software repository to the package manager configuration first. Then install the open-xchange-gui package on the web server.
$ apt-get install open-xchange-configjump-generic-gui \
    open-xchange-gui open-xchange-gui-wizard-plugin-gui \
    open-xchange-online-help-de \
    open-xchange-online-help-en open-xchange-online-help-fr
This will install the Open-Xchange user interface, Apache 2 and several services as dependencies. The Apache module proxy_ajp will handle all communication with the Open-Xchange Servers. Its configuration also contains the setup of the session balancing. What it basically does is define two backend nodes and forward servlet paths to them based on the loadfactor. This setting can be customized in case the backend servers are not equal in terms of performance. The route property is important: it specifies a unique ID of a backend server and will be used when setting up the Open-Xchange Servers later. Please see the Apache mod_proxy_ajp documentation for more details.
$ vim /etc/apache2/conf.d/proxy_ajp.conf
<IfModule mod_proxy_ajp.c>
   ProxyRequests On
   ProxyVia On
   <Proxy *>
     Order deny,allow
     Allow from all
   </Proxy>
   <Proxy balancer://oxcluster>
       BalancerMember ajp://10.20.30.213:8009 smax=0 ttl=60 retry=5 loadfactor=50 route=OX-1
       BalancerMember ajp://10.20.30.215:8009 smax=0 ttl=60 retry=5 loadfactor=50 route=OX-2
   </Proxy>
   ProxyPass /ajax balancer://oxcluster/ajax stickysession=JSESSIONID
   ProxyPass /servlet balancer://oxcluster/servlet stickysession=JSESSIONID
   ProxyPass /axis2 balancer://oxcluster/axis2 stickysession=JSESSIONID
   ProxyPass /infostore balancer://oxcluster/infostore stickysession=JSESSIONID
   ProxyPass /publications balancer://oxcluster/publications stickysession=JSESSIONID
   ProxyPass /Microsoft-Server-ActiveSync balancer://oxcluster/Microsoft-Server-ActiveSync stickysession=JSESSIONID
</IfModule>
Restart the Apache 2 web server and check if it is possible to connect with a browser. By default, this configuration allows plain HTTP access. In order to offer privacy to the customer, the connection must be secured by an HTTPS connection based on a valid certificate. It is also recommended to set up a redirect for all plain HTTP connections to HTTPS, as sketched below.
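A minimal sketch of such a redirect is shown here. It assumes that mod_rewrite is enabled (a2enmod rewrite), that an SSL virtual host with a valid certificate is already configured, and that the default site on Debian lives in /etc/apache2/sites-available/default; adapt it to your actual virtual host layout.
$ vim /etc/apache2/sites-available/default
<VirtualHost *:80>
   RewriteEngine On
   RewriteCond %{HTTPS} !=on
   RewriteRule ^/(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
</VirtualHost>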
Enable the required Apache modules on the web server. See the general installation guides for more information about the configuration of expires and deflate.
$ a2enmod proxy && a2enmod proxy_ajp && a2enmod proxy_balancer && a2enmod expires && a2enmod deflate && a2enmod headers
Restart the Apache web server after applying all configuration changes.
$ /etc/init.d/apache2 restart
Configuring Open-Xchange Server
Install all relevant Open-Xchange Server packages on both groupware nodes after adding the Open-Xchange software repository to your package manager configuration. This package selection does not contain user interface packages.
$ apt-get install open-xchange open-xchange-authentication-database \
    open-xchange-admin-client open-xchange-admin-lib \
    open-xchange-admin-plugin-hosting open-xchange-admin-plugin-hosting-client \
    open-xchange-admin-plugin-hosting-lib open-xchange-configjump-generic \
    open-xchange-contactcollector open-xchange-conversion \
    open-xchange-conversion-engine open-xchange-conversion-servlet \
    open-xchange-crypto open-xchange-data-conversion-ical4j \
    open-xchange-dataretention open-xchange-dataretention-csv open-xchange-genconf \
    open-xchange-genconf-mysql open-xchange-imap open-xchange-mailfilter \
    open-xchange-management open-xchange-monitoring \
    open-xchange-passwordchange-database open-xchange-passwordchange-servlet \
    open-xchange-pop3 open-xchange-publish open-xchange-publish-basic \
    open-xchange-publish-infostore-online open-xchange-publish-json \
    open-xchange-publish-microformats open-xchange-push-udp \
    open-xchange-resource-managerequest open-xchange-server \
    open-xchange-settings-extensions open-xchange-smtp \
    open-xchange-spamhandler-default open-xchange-sql open-xchange-subscribe \
    open-xchange-xerces-sun open-xchange-subscribe-json \
    open-xchange-subscribe-microformats open-xchange-subscribe-crawler \
    open-xchange-subscribe-xing open-xchange-subscribe-linkedin \
    open-xchange-templating open-xchange-timer open-xchange-unifiedinbox \
    open-xchange-admin-doc open-xchange-admin-plugin-hosting-doc \
    open-xchange-charset open-xchange-contacts-ldap open-xchange-control \
    open-xchange-easylogin open-xchange-group-managerequest open-xchange-i18n \
    open-xchange-jcharset open-xchange-sessiond
Create the configdb database at the MySQL master. This step only needs to be performed on one of the Open-Xchange Server nodes.
$ /opt/open-xchange/sbin/initconfigdb --configdb-user=openexchange --configdb-pass=secret --configdb-host=10.20.30.217
Set up the Open-Xchange Server configuration. This step needs to be performed on both groupware nodes. Note that the --jkroute parameter must equal the route parameter of the specific server in the web server's proxy_ajp load balancing configuration. Node 1:
$ /opt/open-xchange/sbin/oxinstaller --servername=oxserver --configdb-readhost=10.20.30.217 --configdb-writehost=10.20.30.217 --configdb-user=openexchange --master-pass=secret --configdb-pass=secret --jkroute=OX-1 --ajp-bind-port=*
Node 2:
$ /opt/open-xchange/sbin/oxinstaller --servername=oxserver --configdb-readhost=10.20.30.217 --configdb-writehost=10.20.30.217 --configdb-user=openexchange --master-pass=secret --configdb-pass=secret --jkroute=OX-2 --ajp-bind-port=*
Start the Open-Xchange Administration Daemon on one of the nodes. Wait some seconds until it has started completely.
$ /etc/init.d/open-xchange-admin start
Now register the Open-Xchange Server at the database. Note that a server refers to the whole cluster in this case. This step only needs to be performed on one of the Open-Xchange Server nodes.
$ /opt/open-xchange/sbin/registerserver -n oxserver -A oxadminmaster -P secret
Register the file storage. This step only needs to be performed on one of the Open-Xchange Server nodes. Note that the NFS export must be mounted to the same path on both groupware nodes.
$ /opt/open-xchange/sbin/registerfilestore -A oxadminmaster -P secret -t file:///var/opt/filestore
Now register the MySQL master database at configdb. This step only needs to be performed on one of the Open-Xchange Server nodes.
$ /opt/open-xchange/sbin/registerdatabase -A oxadminmaster -P secret --name oxdatabase --hostname 10.20.30.217 --dbuser openexchange --dbpasswd secret --master true
database 4 registered
Check the returned database ID, which is 4 in this case. This value is required to register the MySQL slave database at configdb. This step only needs to be performed on one of the Open-Xchange Server nodes.
$ /opt/open-xchange/sbin/registerdatabase -A oxadminmaster -P secret --name oxdatabase_slave --hostname 10.20.30.219 --dbuser openexchange --dbpasswd secret --master false --masterid=4
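To double-check both registrations, the registered databases can be listed. The listdatabase tool is assumed to be shipped with the admin packages installed above:
$ /opt/open-xchange/sbin/listdatabase -A oxadminmaster -P secret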
Now start Open-Xchange Server on both groupware nodes.
$ /etc/init.d/open-xchange-groupware start
Create a new context and a test user
$ /opt/open-xchange/sbin/createcontext -A oxadminmaster -P secret -c 1 -u oxadmin -d "Context Admin" -g Admin -s User -p secret -L defaultcontext -e oxadmin@example.com -q 1024 --access-combination-name=all
$ /opt/open-xchange/sbin/createuser -c 1 -A oxadmin -P secret -u testuser -d "Test User" -g Test -s User -p secret -e testuser@example.com
Test Session load balancing
Apache is configured to use 50:50 balancing between both Open-Xchange Servers. Now that they are up and running, it's time to check whether this balancing works. This can be done by simply watching the Open-Xchange Server log files while a user logs in. Execute tail on the open-xchange.log.0 file on both servers. Then log in with the test user; one of the servers' log files should show something like
$ tail -fn200 /var/log/open-xchange/open-xchange.log.0
[...]
INFO: Session created. ID: 31060fc80b9e44d38148ef4d5d19963d, Context: 1, User: 3
Then log out and log in again. This time, the session should be created on the other server. On the client side, the JSESSIONID cookie in the browser shows which server the user has logged in to via the trailing ".OX-" identifier. This identifier is set by Open-Xchange Server based on its AJP_JVM_ROUTE attribute.
Clustering Open-Xchange Server
It is already possible to distribute sessions across several groupware nodes by using the proxy_ajp load balancing technology. While this might be adequate for simple failover, it lacks clustering on the application side. Just as an example, users may be distributed to different OX servers, but they still work together in one context. If User A on the first server shares a folder with User B on the second server, User B will not be able to access this folder since the folder tree is cached within Open-Xchange Server. Clustering with Open-Xchange Server primarily affects cache invalidation, which allows a groupware node to delete a reference to a piece of data throughout the whole cluster; the single nodes will then fetch an updated version of this data. There are various caches used by Open-Xchange Server. Clustering also makes it possible to move cache content from one node to another, which enables user session migration and allows restarts of single nodes without losing the user sessions bound to that machine.
Network configuration
Open-Xchange Server uses multicast discovery to find other nodes. Once this discovery has been successful, the groupware nodes will establish TCP connections for cache communication.
Configure a multicast address for the servers' network. This needs to be done on all groupware nodes.
$ vim /etc/network/interfaces
[...]
iface eth0 inet static
        [...]
        post-up route add -net 224.0.0.0/8 dev eth0
Check the Open-Xchange Server cache configuration files /opt/open-xchange/etc/groupware/cache.ccf and /opt/open-xchange/etc/admindaemon/cache.ccf on all groupware nodes. Only the very last section is relevant for distributed caching (jcs.auxiliary.*). Make sure the TcpServers attribute is commented out and the UDPDiscovery settings are active. Also check the cache configuration in /opt/open-xchange/etc/groupware/sessioncache.ccf
# jcs.auxiliary.LTCP.attributes.TcpServers=127.0.0.1:57461
jcs.auxiliary.LTCP.attributes.TcpListenerPort=57462
jcs.auxiliary.LTCP.attributes.UdpDiscoveryAddr=224.0.0.1
jcs.auxiliary.LTCP.attributes.UdpDiscoveryPort=6780
jcs.auxiliary.LTCP.attributes.UdpDiscoveryEnabled=true
These settings configure Open-Xchange Server to discover other nodes through the multicast address 224.0.0.1 and UDP port 6780. Note that the TcpListenerPort property differs between the groupware and admindaemon configuration files. This is required to avoid socket conflicts; it defines the TCP port that listens for incoming connections from other groupware nodes. A sketch of the corresponding admindaemon value is shown below.
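For illustration, the corresponding line in /opt/open-xchange/etc/admindaemon/cache.ccf would then carry the other port. The value 57461 is an assumption based on the admindaemon connections visible in the netstat output further below:
jcs.auxiliary.LTCP.attributes.TcpListenerPort=57461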
Restart the networking to enable the new multicast address on both groupware nodes. Also restart the Open-Xchange Server processes on all nodes.
$ /etc/init.d/networking restart
$ /etc/init.d/open-xchange-groupware restart
$ /etc/init.d/open-xchange-admin restart
Test the network settings
The new routing information for the multicast network should be available when printing the routing table.
$ route -n
[...]
224.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 eth0
TCP connections that are created after the UDP multicast discovery are shown with netstat.
$ netstat -tlpa | grep java | grep ESTABLISHED
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp6       0      0 oxgw01:49103            oxgw02:57461            ESTABLISHED 3706/java
tcp6       0      0 oxgw01:37912            oxgw02:57462            ESTABLISHED 3706/java
tcp6       0      0 oxgw01:58849            oxgw02:49302            ESTABLISHED 3706/java
tcp6       0      0 oxgw01:57462            oxgw02:46054            ESTABLISHED 3706/java
tcp6       0      0 oxgw01:57462            oxgw01:41904            ESTABLISHED 3706/java
tcp6       0      0 oxgw01:48628            oxgw02:57461            ESTABLISHED 3582/java
tcp6       0      0 oxgw01:57461            oxgw02:47115            ESTABLISHED 3582/java
tcp6       0      0 oxgw01:57461            oxgw02:57348            ESTABLISHED 3582/java
tcp6       0      0 oxgw01:57461            oxgw01:42589            ESTABLISHED 3582/java
tcp6       0      0 oxgw01:43960            oxgw02:57462            ESTABLISHED 3582/java
tcp6       0      0 oxgw01:41904            oxgw01:57462            ESTABLISHED 3582/java
tcp6       0      0 oxgw01:42589            oxgw01:57461            ESTABLISHED 3706/java
tcp6       0      0 oxgw01:43786            oxgw02:57461            ESTABLISHED 3706/java
tcp6       0      0 oxgw01:35196            oxgw02:58849            ESTABLISHED 3706/java
tcp6       0      0 oxgw01:57462            oxgw02:44548            ESTABLISHED 3706/java
tcp6       0      0 oxgw01:57461            oxgw02:44893            ESTABLISHED 3582/java
How can these connections be verified? The last column shows the process id (PID) of the local process that holds an established connection. In this case, PID 3706 is the Open-Xchange Groupware Daemon and PID 3582 is the Open-Xchange Administration Daemon. These services build mesh connections between each groupware, each admindaemon and each foldercache service. Some connections are used bidirectionally so only one connection is visible, others use two connections (inbound and outbound) depending on the network responses. It is important that each service is connected to every other service, while the foldercache is only connected between two groupware services. It can take some time until all connections are established after Open-Xchange Server has been started. In this example, the first two lines indicate connections between the local groupware process and the remote admindaemon and groupware processes. A sketch of how to map such a PID back to its service follows below.
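To map a PID from the netstat output back to the service it belongs to, query the process list; this quick sketch uses the PIDs from the example above, and the command column shows whether the process was started as the groupware or the admindaemon service.
$ ps -fp 3706,3582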