This article discusses how to configure Red Hat 5/CentOS 5 Clustering services to use Zadara Storage. By way of example, it will use the popular MySQL database as the service to configure in a highly-available manner. It assumes the reader is familiar with Red Hat/CentOS server administration, Cluster Services setup and management, MySQL database management, general networking concepts and how to provision storage with the Zadara platform in the OpSource Cloud.
Note: the instructions and scripts described in this article have not been tested with Red Hat 6/CentOS 6 nor any other Linux distribution.
- The network where the Cluster will reside must have Multicast enabled. For details, see this article in the Cloud Community: How to Manage the Properties of a Network using the Administrative UI.
- The firewall on the Cloud network should pose no limitation to Clustering, since all traffic will remain within the local subnet; however, the server-based firewall (iptables) may require configuration to open the ports used by Cluster Services and the service(s) to be clustered. See the documentation on Cluster Services for specifics.
The /etc/hosts file should list all servers that will be participating in the Cluster. In addition, the server name should not be listed on the line with the localhost address (127.0.0.1). All hosts should have IP addresses as defined in the Cloud UI. An example /etc/hosts file might look like this:
127.0.0.1      localhost.localdomain localhost
::1            localhost6.localdomain6 localhost6
10.163.1.11    10-163-1-11
10.163.1.12    10-163-1-12
Test by pinging all hosts by name from each server.
You will need the UUID identifiers for the Organization and all servers that will comprise the Cluster. See the Cloud REST API documentation (https://community.opsourcecloud.net/Browse.jsp?id=8a8a8a171c4c4ed8011c4c5b152b009f) for details on how to obtain these values. In addition, it is recommended that you create a separate sub-admin user specifically for the purpose of making API calls. This user will require the "Server" role. See this article in the Cloud Community for details on creating sub-admins: How to Create a Sub-Administrator Using the Administrative UI.
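As an illustrative sketch only (the base URL and endpoint below are assumptions based on the legacy OpSource REST API; verify them against the API documentation linked above before use), the 'myaccount' call returns XML that includes your Organization UUID. The credentials shown are the example sub-admin used later in this article:

```shell
# Assumed legacy OpSource Cloud REST API endpoint -- verify against the API docs.
# Returns account details, including the Organization UUID (orgId), as XML:
curl -s -u cluster_user:Sup3rSECR3T https://api.opsourcecloud.net/oec/0.9/myaccount
```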
- You will need to install the Clustering software and MySQL database software, as provided by the OS vendor.
- If you are using Red Hat 5, contact Support to have the Clustering entitlements added to your servers.
- If you are using CentOS 5, simply install as per the instructions below.
- You will need to install a UDEV add-on and a new fencing agent for Cluster services. This software is provided by (Zadara/OpSource/?).
- Install the MySQL software and configure your database as necessary:
# yum -y install mysql-server
- Install the Cluster services software:
# yum -y install rgmanager
You will need to install a few additional packages to utilize a special fencing agent for Cluster services:
# yum -y install perl-XML-Simple perl-Crypt-SSLeay
Provision at least 2 volumes in Zadara and export them to your Cloud servers. The first volume will be used for the MySQL data files; make it as large as you require. The second volume will be used as a quorum disk; 10 MB is sufficient.
Starting on one server:
- As root, install the UDEV add-on by unpacking the zip file into the /etc directory:
# cd /
# unzip udev_iscsi_0.1.zip
- Connect the iscsi volume to the server.
- For example, assuming a Zadara portal at 10.163.1.100 (substitute your portal address; the target IQN shown is illustrative), discover the target, log in, and enable automatic login at boot:
# iscsiadm -m discovery -t sendtargets -p 10.163.1.100
# iscsiadm -m node -T iqn.2011-04.com.zadarastorage:mysql01 -p 10.163.1.100 --login
# iscsiadm -m node -T iqn.2011-04.com.zadarastorage:mysql01 -p 10.163.1.100 --op update -n node.startup -v automatic
- Get the SCSI ID for each volume and set up persistent names for each (see /etc/iscsi/iscsi.names for details.)
- For example, if the volume appears as /dev/sdb (the device letter will vary):
# /sbin/scsi_id -g -u -s /block/sdb
- Configure the iscsi and iscsid daemons to start at boot:
# /sbin/chkconfig iscsi on
# /sbin/chkconfig iscsid on
- Partition (if necessary) and format the data volume. Note: if you opt to use LVM, you must configure your cluster to use CLVM.
- Format the quorum disk with the mkqdisk command.
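The two formatting steps above can be sketched as follows. The device names under /dev/iscsi/zadara/ and the quorum disk label are illustrative; substitute the persistent names you configured:

```shell
# Format the data volume with ext3 (skip partitioning if using the whole disk)
mkfs.ext3 -L mysqldata /dev/iscsi/zadara/mysql01

# Initialize the quorum disk with a cluster-unique label; this label is
# referenced later in the <quorumd> element of the cluster configuration
mkqdisk -c /dev/iscsi/zadara/qdisk01 -l zadaraqdisk
```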
Repeat steps 1, 2 & 4 for each server that will connect to the Zadara volumes. Then copy the /etc/iscsi/iscsi.names file from the first server to each server and verify that the device names are consistent across all servers.
Normally, MySQL places its data files in /var/lib/mysql but in this example, the files will reside on the iscsi volume mounted to /mysql, with a symlink from /var/lib/mysql pointing to it. We'll use the fictional /dev/iscsi/zadara/mysql01 device as the name for the iscsi volume.
- Stop MySQL if it is running.
- Create the mount point for the iscsi volume:
# mkdir /mysql
- Mount the iscsi volume to /mysql.
- Create a 'data' subdirectory in /mysql. This is necessary to prevent the MySQL database from mistaking the 'lost+found' directory (a required element of mount points) as a schema.
- Move all the files in /var/lib/mysql to /mysql/data, remove the /var/lib/mysql directory, put a symlink in its place, and set ownership of the 'data' directory:
# mv /var/lib/mysql/* /mysql/data
# rmdir /var/lib/mysql
# ln -s /mysql/data /var/lib/mysql
# chown -R mysql:mysql /mysql/data
- Start the MySQL database and ensure it is working properly. If the database fails to start, check your work for any mistakes.
- Shut down the MySQL database and unmount the iscsi volume.
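The relocation steps above can be sketched as a single sequence, using the fictional /dev/iscsi/zadara/mysql01 device from this example:

```shell
service mysqld stop                      # stop MySQL if it is running
mkdir /mysql                             # mount point for the iscsi volume
mount /dev/iscsi/zadara/mysql01 /mysql   # mount the data volume
mkdir /mysql/data                        # keeps lost+found out of MySQL's datadir
mv /var/lib/mysql/* /mysql/data          # relocate the data files
rmdir /var/lib/mysql
ln -s /mysql/data /var/lib/mysql         # symlink back to the expected path
chown -R mysql:mysql /mysql/data
service mysqld start                     # verify MySQL starts cleanly
service mysqld stop                      # stop again before configuring the cluster
umount /mysql
```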
Proceed to configure the cluster, adding nodes, failover domains, resources and the service. For fencing, use the 'fence_manual' agent during initial configuration. For the file system resource, use the persistent name you configured for the iscsi volume (under Setup). Since a quorum disk is being used, you should configure a heuristic and increase a couple of timeout values to compensate. A common heuristic is to ping a network resource, such as the gateway. Our example will use the following quorumd, heuristic and totem configuration:
<quorumd interval="1" label="zadaraqdisk" min_score="1" tko="15" votes="1">
  <heuristic interval="4" program="/bin/ping 10.163.1.1 -c3 -t1" score="1"/>
</quorumd>
<totem token="31000"/>
You will need to change the quorum disk label and IP address accordingly.
Set the cluster services to start at boot:
# /sbin/chkconfig rgmanager on
# /sbin/chkconfig cman on
# /sbin/chkconfig qdiskd on
Fencing will be accomplished by means of the Cloud REST API. The agent is named 'fence_opsource' and should be installed in the /sbin directory. Its configuration resides within the cluster configuration file. The syntax for the fencedevice XML element is:
<fencedevice agent="" serverid="" username="" name="" password="" orgid=""/>
The parameters are:
- name - this is a unique name you select as part of configuring fencing in the cluster
- agent - the name of the fencing agent. This must be 'fence_opsource'
- serverid - this is the unique Cloud ID for the server
- orgid - this is the unique Cloud Organization ID
- username - this is the username of the sub-administrator you created in the Setup section above
- password - this is the password of the sub-administrator you created in the Setup section above
An example fencedevice XML might look like:
<fencedevice agent="fence_opsource" serverid="cee92db8" username="cluster_user" name="node1fence" password="Sup3rSECR3T" orgid="74e98314"/>
Restart your cluster to begin using the new fencing agent.
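Once the cluster is back up with the new agent, you can exercise fencing from one node against another with the cman fence_node utility, which fences the named node using the configuration in cluster.conf; the target node should reboot via the Cloud REST API:

```shell
# From 10-163-1-11, fence the other node using the configured fence_opsource agent
fence_node 10-163-1-12
```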
Use the clustat command to verify that all nodes and the quorum disk are present and online and that the service is started:
# clustat
Cluster Status for zadara @ Tue Mar 13 16:12:06 2012
Member Status: Quorate

 Member Name                                ID   Status
 ------ ----                                ---- ------
 10-163-1-11                                   1 Online, rgmanager
 10-163-1-12                                   2 Online, Local, rgmanager
 /dev/disk/by-id/scsi-26565636663313061        0 Online, Quorum Disk

 Service Name         Owner (Last)             State
 ------- ----         ----- ------             -----
 service:mysql        10-163-1-11              started
Attachments:
- cluster.conf (2 kB, Jeff Stoner, Mar 20, 2012 5:44:44 PM EDT) - Example cluster.conf
- udev_iscsi_0.1.zip (2 kB, Jeff Stoner, Mar 20, 2012 5:45:03 PM EDT) - UDEV add-on for iscsi-based Zadara storage