Topic: High-Availability Maildir Storage With GlusterFS + CentOS 5.x
Hi everyone, this is my first contribution here; I wanted to give something back and I hope it proves useful.
By: Basem Hegazy (Linux System Administrator)
This tutorial shows how to set up high-availability storage with two storage servers (CentOS 5.4) that use GlusterFS. Each storage server will be a mirror of the other, and files in the /var/vmail directory will be replicated automatically across both storage servers. The client (the iRedMail system, also CentOS 5.4) will be able to access the storage as if it were a local filesystem.
I recommend going through this tutorial first, then installing the iRedMail system, and only mounting the shared folder after iRedMail is installed.
I do not issue any guarantee that this will work for you!
In this tutorial I use three systems, two servers and a client:
• server1.example.com: IP address 192.168.0.100 (server)
• server2.example.com: IP address 192.168.0.101 (server)
• client1.example.com: IP address 192.168.0.102 (client); in our case the client is also the iRedMail server.
All three systems should be able to resolve the other systems' hostnames. If this cannot be done through DNS, you should edit the /etc/hosts file so that it contains the following lines on all three systems:
vi /etc/hosts
[...]
192.168.0.100 server1.example.com server1
192.168.0.101 server2.example.com server2
192.168.0.102 client1.example.com client1
[...]
(It is also possible to use IP addresses instead of hostnames in the following setup. If you prefer to use IP addresses, you don't have to care about whether the hostnames can be resolved or not.)
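A quick way to verify that the names resolve correctly from each machine (getent follows the normal resolver order, so it also covers /etc/hosts) is:
getent hosts server1.example.com server2.example.com client1.example.com
Each hostname should be printed together with the IP address listed above.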
2) Setting Up the GlusterFS Servers (on both server1.example.com and server2.example.com):
GlusterFS may not be available as a package (RPM) in some CentOS 5.x distributions, therefore I will build it myself.
First I install the prerequisites:
yum groupinstall 'Development Tools'
yum groupinstall 'Development Libraries'
yum install libibverbs-devel fuse-devel
Then we download the latest GlusterFS release from http://www.gluster.org/download.php and build it as follows:
cd /tmp
wget http://ftp.gluster.com/pub/gluster/glus … 0.9.tar.gz
tar xvfz glusterfs-2.0.9.tar.gz
cd glusterfs-2.0.9
./configure
At the end of the ./configure command, you should see something like this:
[...]
GlusterFS configure summary
===========================
FUSE client : yes
Infiniband verbs : yes
epoll IO multiplex : yes
Berkeley-DB : yes
libglusterfsclient : yes
argp-standalone : no
[root@server1 glusterfs-2.0.9]#
Then run the make command:
make && make install
ldconfig
Check the GlusterFS version afterwards (should be 2.0.9):
[root@server1 glusterfs-2.0.9]# glusterfs --version
you should see something like:
glusterfs 2.0.9 built on Mar 1 2010 15:34:50
Repository revision: v2.0.9
Copyright (c) 2006-2009 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@server1 glusterfs-2.0.9]#
Next we create a few directories:
mkdir /data/
mkdir /data/export
mkdir /data/export-ns
mkdir /etc/glusterfs
Now we create the GlusterFS server configuration file /etc/glusterfs/glusterfsd.vol, which defines which directory will be exported (/data/export) and which client is allowed to connect (192.168.0.102 = client1.example.com):
vi /etc/glusterfs/glusterfsd.vol
Enter the following data:
volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow 192.168.0.102
  subvolumes brick
end-volume
Please note that it is possible to use wildcards for the IP addresses (like 192.168.*) and that you can specify multiple IP addresses separated by commas (e.g. 192.168.0.102,192.168.0.103).
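For example, the allow line in /etc/glusterfs/glusterfsd.vol could look like one of these two variants (the additional address 192.168.0.103 is only a hypothetical second client):
option auth.addr.brick.allow 192.168.0.102,192.168.0.103
option auth.addr.brick.allow 192.168.0.*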
Afterwards we create the following symlink:
ln -s /usr/local/sbin/glusterfsd /sbin/glusterfsd
... and then the system startup links for the GlusterFS server and start it:
chkconfig --levels 35 glusterfsd on
/etc/init.d/glusterfsd start
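As a quick sanity check you can confirm that the daemon is actually running and listening on its TCP port (netstat is part of net-tools, which is installed by default on CentOS 5):
ps aux | grep [g]lusterfsd
netstat -tlnp | grep glusterfsd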
3) Setting Up the GlusterFS Client:
client1.example.com:
GlusterFS may not be available as a package (RPM) in some CentOS 5.x distributions, therefore I will build it myself.
First I install the prerequisites:
yum groupinstall 'Development Tools'
yum groupinstall 'Development Libraries'
yum install libibverbs-devel fuse-devel
Then we load the fuse kernel module...
modprobe fuse
... And create the file /etc/rc.modules with the following contents so that the fuse kernel module will be loaded automatically whenever the system boots:
vi /etc/rc.modules
modprobe fuse
Then make the file executable:
chmod +x /etc/rc.modules
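You can confirm that the fuse module is loaded with:
lsmod | grep fuse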
Then we download the GlusterFS 2.0.9 sources (please note that this should be the same version as that installed on the server!) and build GlusterFS as follows:
cd /tmp
wget http://ftp.gluster.com/pub/gluster/glus … 0.9.tar.gz
tar xvfz glusterfs-2.0.9.tar.gz
cd glusterfs-2.0.9
./configure
At the end of the ./configure command, you should see something like this:
[...]
GlusterFS configure summary
===========================
FUSE client : yes
Infiniband verbs : yes
epoll IO multiplex : yes
Berkeley-DB : yes
libglusterfsclient : yes
argp-standalone : no
Then run the make command:
make && make install
ldconfig
Check the GlusterFS version afterwards (should be 2.0.9):
[root@client1 glusterfs-2.0.9]# glusterfs --version
you should see something like:
glusterfs 2.0.9 built on Mar 1 2010 15:58:06
Repository revision: v2.0.9
Copyright (c) 2006-2009 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@client1 glusterfs-2.0.9]#
Then we create the following directory:
mkdir /etc/glusterfs
Next we create the file /etc/glusterfs/glusterfs.vol:
vi /etc/glusterfs/glusterfs.vol
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1.example.com
  option remote-subvolume brick
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host server2.example.com
  option remote-subvolume brick
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume

volume writebehind
  type performance/write-behind
  option window-size 1MB
  subvolumes replicate
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume
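In short: the two protocol/client volumes connect to the brick exported by each storage server, cluster/replicate mirrors every write to both of them, and write-behind and io-cache are performance translators layered on top of the replicated volume.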
Make sure you use the correct server hostnames or IP addresses in the option remote-host lines!
That's it! Now we can install the iRedMail system, and afterwards mount the GlusterFS filesystem to /var/vmail with one of the following two commands:
glusterfs -f /etc/glusterfs/glusterfs.vol /var/vmail
Or:
mount -t glusterfs /etc/glusterfs/glusterfs.vol /var/vmail
You should now see the new share in the output of mount:
[root@client1 ~]# mount
you will see:
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
glusterfs#/etc/glusterfs/glusterfs.vol on /var/vmail type fuse (rw,allow_other,default_permissions,max_read=131072)
[root@client1 ~]#
... And ...
[root@client1 ~]# df -h
you will see:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
29G 2.2G 25G 9% /
/dev/sda1 99M 13M 82M 14% /boot
tmpfs 187M 0 187M 0% /dev/shm
glusterfs#/etc/glusterfs/glusterfs.vol
28G 2.3G 25G 9% /var/vmail
[root@client1 ~]#
(server1.example.com and server2.example.com each have 28GB of space for the GlusterFS filesystem, but because the data is mirrored, the client doesn't see 56GB (2 x 28GB), but only 28GB.)
Instead of mounting the GlusterFS share manually on the client, you could modify /etc/fstab so that the share gets mounted automatically when the client boots.
Open /etc/fstab:
vi /etc/fstab
... and append the following line:
[...]
/etc/glusterfs/glusterfs.vol /var/vmail glusterfs defaults 0 0
To test if your modified /etc/fstab is working, reboot the client:
reboot
After the reboot, you should find the share in the outputs of:
df -h
... and...
mount
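If you don't want to reboot right away, you can also test the new fstab entry directly: unmount the share (if it is currently mounted) and let mount re-mount it from /etc/fstab:
umount /var/vmail
mount /var/vmail
mount | grep vmail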
4) Testing
Now let's create some test files on the GlusterFS share:
client1.example.com:
touch /var/vmail/test1
touch /var/vmail/test2
Now let's check the /data/export directory on server1.example.com and server2.example.com. The test1 and test2 files should be present on each node:
server1.example.com and server2.example.com:
[root@server1 ~]# ls -l /data/export
the result is:
total 0
-rw-r--r-- 1 root root 0 2010-02-22 16:50 test1
-rw-r--r-- 1 root root 0 2010-02-22 16:50 test2
[root@server1 ~]#
Now we shut down server1.example.com and add/delete some files on the GlusterFS share on client1.example.com.
server1.example.com:
shutdown -h now
client1.example.com:
touch /var/vmail/test3
touch /var/vmail/test4
rm -f /var/vmail/test2
The changes should be visible in the /data/export directory on server2.example.com:
server2.example.com:
[root@server2 ~]# ls -l /data/export
the result is:
total 0
-rw-r--r-- 1 root root 0 2010-02-22 16:50 test1
-rw-r--r-- 1 root root 0 2010-02-22 16:53 test3
-rw-r--r-- 1 root root 0 2010-02-22 16:53 test4
[root@server2 ~]#
Let's boot server1.example.com again and take a look at the /data/export directory:
server1.example.com:
[root@server1 ~]# ls -l /data/export
the result is:
total 0
-rw-r--r-- 1 root root 0 2010-02-22 16:50 test1
-rw-r--r-- 1 root root 0 2010-02-22 16:50 test2
[root@server1 ~]#
As you see, server1.example.com hasn't noticed the changes that happened while it was down. This is easy to fix: all we need to do is invoke a read command on the GlusterFS share on client1.example.com, e.g.:
client1.example.com:
ls -l /var/vmail
In our setup, however, this read happens automatically every time a user accesses his mailbox through the RoundCube webmail. So even if you notice that a newly created mailbox has not been replicated yet, don't worry: it will be replicated as soon as the user opens his mailbox in the webmail.
[root@client1 ~]# ls -l /var/vmail/
.. and the result is:
total 0
-rw-r--r-- 1 root root 0 2010-02-22 16:50 test1
-rw-r--r-- 1 root root 0 2010-02-22 16:53 test3
-rw-r--r-- 1 root root 0 2010-02-22 16:53 test4
[root@client1 ~]#
Now take a look at the /data/export directory on server1.example.com again, and you should see that the changes have been replicated to that node:
server1.example.com:
[root@server1 ~]# ls -l /data/export
the result is:
total 0
-rw-r--r-- 1 root root 0 2010-02-22 16:50 test1
-rw-r--r-- 1 root root 0 2010-02-22 16:53 test3
-rw-r--r-- 1 root root 0 2010-02-22 16:53 test4
[root@server1 ~]#
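By the way, if you ever need to force replication of the entire share at once (for example after a longer outage), a commonly used approach is to read every file on the client side so that each one gets self-healed; a simple sketch:
find /var/vmail -noleaf -print0 | xargs --null stat > /dev/null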
Thanks, and comments are welcome.