Tuesday, April 12, 2011

Using loop devices to test GlusterFS

I came across an issue where one of our users had questions like: does X happen when one of the nodes in GlusterFS is almost full? Does Y happen if one of the nodes is full? Does GlusterFS work at all if a couple of nodes are full?

Though the answer was straightforward, I thought it would be better to test the functionality under those conditions before stating the obvious.

Initially I thought about launching a few VMs for a quick test. But the partition sizes were far too big for my tests; it would have been a long wait before the nodes filled up. The alternative was to create smaller partitions, which means fdisk et al., and then working backwards to restore the original disk layout afterwards (if necessary).

A better solution for this type of test is to create a few large files with the `dd' command, put filesystems on them, and use them as Gluster exports.

For example:

sac@odin:/data/disks $ for i in {1..4}; do
> dd if=/dev/zero of=disk$i bs=1M count=3000
> done
sac@odin:/data/disks $
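
Incidentally, on a filesystem that supports sparse files the images can be created almost instantly by seeking past the end instead of writing out zeros. A variant of the same loop (keep in mind that sparse files only allocate blocks as they are written, so the backing partition must actually have the space if you intend to fill the images later):

sac@odin:/data/disks $ for i in {1..4}; do
> dd if=/dev/zero of=disk$i bs=1M count=0 seek=3000
> done
sac@odin:/data/disks $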

Create filesystems on the data files (the -F flag stops mkfs.ext3 from prompting about each image not being a block device):

root@odin:/root # for i in {1..4}; do
> mkfs.ext3 -F /data/disks/disk$i
> done
root@odin:/root #
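
Since part of the point is comparing Gluster behavior across filesystems, the same files can just as easily be formatted with another mkfs. For example, with xfsprogs installed (here -f forces mkfs.xfs to overwrite any existing filesystem on the file):

root@odin:/root # for i in {1..4}; do
> mkfs.xfs -f /data/disks/disk$i
> done
root@odin:/root #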

Mount the filesystems via loop devices:

root@odin:/root # mkdir /mnt/{1..4}
root@odin:/root # for i in {1..4}; do
> mount /data/disks/disk$i /mnt/$i -o loop
> done
root@odin:/root #
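
Mounting with -o loop grabs the first free /dev/loopN automatically; if you want to see which loop device ended up backing which image, losetup can list the current associations:

root@odin:/root # losetup -a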

Now we have four filesystems of exactly the size we want, cheaply, without having to set up multiple servers or repartition a disk.

root@odin:/root # df -h /mnt/*
Filesystem      Size  Used Avail Use% Mounted on
/dev/loop0      2.9G   69M  2.7G   3% /mnt/1
/dev/loop1      2.9G   70M  2.7G   3% /mnt/2
/dev/loop2      2.9G   70M  2.7G   3% /mnt/3
/dev/loop3      2.9G   70M  2.7G   3% /mnt/4
root@odin:/root #

These mount points can then be used as export directories and played around with to understand Gluster behavior when one of the partitions fills up, or to observe how GlusterFS performs on filesystems built with different flags. A minimal single-node setup might look like the sketch below.
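
For instance, something like the following should work with the GlusterFS 3.x CLI (the volume name testvol and the client mount point are just placeholder names, and glusterd is assumed to be running on odin):

root@odin:/root # gluster volume create testvol odin:/mnt/1 odin:/mnt/2 odin:/mnt/3 odin:/mnt/4
root@odin:/root # gluster volume start testvol
root@odin:/root # mkdir /mnt/gluster
root@odin:/root # mount -t glusterfs odin:/testvol /mnt/gluster
root@odin:/root # dd if=/dev/zero of=/mnt/1/filler bs=1M

The last dd writes directly into one brick until it runs out of space; after that, creating files on the client mount shows how GlusterFS reacts when one node is full.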

Conclusion:

This is a fast and cheap way to test GlusterFS functionality on various filesystems without having to bother with getting disks and creating partitions. The advantage is that we need not repartition the disks to get different sized partitions; we can simply delete a file and create a new one of a different size. It is great for functionality testing, though the performance is poor. Gluster behavior can be quickly checked over various filesystems before setting up dedicated disks for extensive testing, and building filesystems with various options and tunings to observe GlusterFS behavior becomes very easy.