Tuesday, February 2, 2016

Grouping section data in gdeploy 1.1

The 1.1 release of gdeploy has a cool new feature that allows users to group IP addresses/hostnames within section headings.

Consider a scenario where there are four hosts whose disks have to be cleaned up for a fresh install. Two machines have vda disks and two more have vdb.

The configuration for such a setup looks like:


backend-reset.conf

[hosts]
10.70.46.159
10.70.46.172
10.70.46.185
10.70.47.129

[backend-reset:10.70.46.159]
pvs=vdb
unmount=yes

[backend-reset:10.70.46.185]
pvs=vdb
unmount=yes

[backend-reset:10.70.46.172]
pvs=vda
unmount=yes

[backend-reset:10.70.47.129]
pvs=vda
unmount=yes




The above configuration is quite long and gets tiresome to maintain as the number of nodes increases.

Thanks to Nandaja Varma, the above configuration can now be written as:


backend-reset.conf


[hosts]
10.70.46.159
10.70.46.172
10.70.46.185
10.70.47.129

[backend-reset:10.70.46.{159,185}]
pvs=vdb
unmount=yes

[backend-reset:{10.70.46.172,10.70.47.129}]
pvs=vda
unmount=yes
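
Either version of the file is applied the same way, by pointing gdeploy at the configuration (assuming it is saved as backend-reset.conf):

# gdeploy -c backend-reset.conf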




This looks pretty neat and is easy to maintain. The latest gdeploy release can be downloaded from here.

If you encounter any bugs, please report them here.

Monday, November 16, 2015

gdeploy 1.1 released

We are happy to announce the 1.1 release of gdeploy. RPMs can be downloaded from: http://download.gluster.org/pub/gluster/gdeploy/1.1/

The most notable features of this release include:

A change in the configuration file format. The new format is more intuitive and easier to write (the tool remains compatible with the older format), and it allows both host-specific and group-specific data.

Examples of the old and new configuration formats:


backend-old.conf


[hosts]
10.70.46.159
10.70.46.172
 

[10.70.46.159]
devices=/dev/vdb,/dev/vdc
mountpoints=/gluster/brick1,/gluster/brick2

[10.70.46.172]
devices=/dev/sdb,/dev/vda
mountpoints=/gluster/brick1,/gluster/brick2

backend-new.conf




[hosts]
10.70.46.159
10.70.46.172
 
[backend-setup:10.70.46.159]
devices=vdb,vdc
mountpoints=/gluster/brick{1,2}

[backend-setup:10.70.46.172]
devices=sdb,vda
mountpoints=/gluster/brick{1,2}


Other features include:
  • Patterns for hostnames & mountpoints in the configuration files.
  • Rerunning a configuration no longer throws an error.
  • A backend reset option, which helps in cleaning up the VGs, LVs, and PVs (a minimal sketch follows this list).
  • Host-specific and group-specific configurations.
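
For reference, a backend reset is driven by a section shaped roughly like the following (the pvs and unmount keys are those used in the backend-reset example elsewhere on this blog; this is a minimal sketch, not a complete file):

[hosts]
10.70.46.159

[backend-reset:10.70.46.159]
pvs=vdb
unmount=yes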

And support for the following GlusterFS features has been added:

  • Quota
  • Snapshot
  • Geo-replication
  • Subscription manager
  • Package install
  • Firewalld
  • Samba
  • CTDB
  • CIFS mount

Sample configuration files can be found at:
https://github.com/gluster/gdeploy/tree/1.1/examples


Friday, September 18, 2015

gdeploy: Getting Started

This write-up shows how a Gluster storage volume can be set up using gdeploy.

For an explanation of what gdeploy is and what problems it solves, please
see gdeploy: A Short Introduction.

gdeploy works by reading configuration file(s) and performing the task(s)
described by the values in them. This write-up explains one such
configuration file, which is used to create a volume.

To create a volume on a freshly installed system, we perform the following
actions:

* Setting up the backend for the cluster
* Probing all the peers
* Creating a volume

The above tasks can be performed independently or together in a single
configuration file. For the sake of brevity, let us write a single
configuration file that performs all three steps.

Before anything else, set up passwordless SSH login to all the nodes that
will be used to form the cluster.

# ssh-copy-id root@[host/ip]
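
For the four hosts used in the example below, this boils down to something along these lines (a simple shell loop; adjust the addresses for your setup):

# for host in 10.70.46.159 10.70.46.172 10.70.46.185 10.70.47.129; do ssh-copy-id root@$host; done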

The following configuration file creates a cluster of four hosts with a 2x2
distributed-replicate volume, and mounts the volume on one of the servers.


# Configuration for creating a 2x2 cluster: cluster.conf

[hosts]
10.70.46.159
10.70.46.172
10.70.46.185
10.70.47.129

[10.70.46.159]
devices=/dev/vdb

[10.70.46.172]
devices=/dev/sdb

[10.70.46.185]
devices=/dev/vdb

[10.70.47.129]
devices=/dev/sdb

[peer]
manage=probe

[volume]
action=create
volname=glustervol
transport=tcp,rdma
replica=yes
replica_count=2
force=yes

[clients]
action=mount
hosts=10.70.46.159
fstype=glusterfs
client_mount_points=/mnt/gluster

# End: Configuration for 2x2 cluster

Detailed information on the configuration file can be found here and here.

Once the configuration file is created, it can be executed with the command:

# gdeploy -c cluster.conf

The output should look like:


INFO: Back-end setup triggered
INFO: Peer management(action: probe) triggered
INFO: Volume management(action: create) triggered


INFO: FUSE mount of volume triggered.

PLAY [gluster_servers] ******************************************************** 

TASK: [Create Physical Volume on all the nodes] *******************************
changed: [10.70.46.172]
changed: [10.70.46.159]
changed: [10.70.47.129]
changed: [10.70.46.185]

PLAY [gluster_servers] ******************************************************** 

TASK: [Create volume group on the disks] ************************************** 

.
.
.

TASK: [Mount the volumes] ***************************************************** 
changed: [10.70.46.159] => (item=/mnt/gluster)

PLAY RECAP ******************************************************************** 
10.70.46.159               : ok=19   changed=17   unreachable=0    failed=0   
10.70.46.172               : ok=13   changed=13   unreachable=0    failed=0   
10.70.46.185               : ok=13   changed=13   unreachable=0    failed=0   
10.70.47.129               : ok=13   changed=13   unreachable=0    failed=0   


===

A 2x2 distributed-replicate volume will be created on the machines listed in the [hosts]
section and mounted on 10.70.46.159.
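
As a quick sanity check, the volume can be inspected from any of the servers (the exact brick paths in the output depend on gdeploy's defaults for your setup):

# gluster volume info glustervol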

Monday, September 14, 2015

gdeploy: A Short Introduction


gdeploy is a new tool, developed using Ansible, that helps in setting up the backend, creating volumes, and deploying Gluster use cases.

Setting up a backend filesystem for GlusterFS becomes a tedious task as the number of servers/bricks increases. GlusterFS, being a highly scalable software solution, gives the user the ability to create a storage cluster with a large number of nodes.

As the number of nodes increases, we naturally face the following shortcomings:
  1. One has to log in to each node to set up the backend.
  2. Typing a long command with a combination of node:brick pairs is error-prone.
  3. In case of an error, cleanup is painful.
  4. The user might find a UI solution too heavy to set up, and it again requires installing and maintaining the necessary packages on all the nodes.
  5. If the user wants to use the cool new snapshot feature, a thinly provisioned (thin-p) backend has to be configured, which requires running a plethora of commands to set up thin-p volumes on multiple nodes.

gdeploy addresses the above shortcomings and adds cool features to make the life of an admin/user easier.

gdeploy 1.0 currently implements the following features:
  1. Setting up a thin-p backend on any number of nodes in a non-interactive and automated way.
  2. Mounting the LV on a specified directory.
  3. Peer probing the listed nodes and creating a volume using them (see the sketch after this list).
  4. Mounting the volume for the listed clients.
  5. Setting/unsetting options on a volume.
  6. Adding a brick (add-brick) to a given volume.
  7. Removing a brick (remove-brick) from a given volume.
  8. Support for multiple volume types...
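
For instance, features 3 and 4 above map to configuration sections of this shape (lifted from the cluster example in a later post; a sketch rather than a complete file):

[peer]
manage=probe

[volume]
action=create
volname=glustervol
replica=yes
replica_count=2

[clients]
action=mount
hosts=10.70.46.159
fstype=glusterfs
client_mount_points=/mnt/gluster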

Using gdeploy:

 

gdeploy can be run from one's laptop/workstation and does not need to be installed on any of the servers that it manages.
 
Installing - gdeploy RPMs for CentOS and Fedora can be found here.

Bootstrapping - A one-step bootstrap is required: set up passwordless SSH to the nodes that will be used to form the cluster.

$ ssh-copy-id root@hostname.com

Once the bootstrapping is done, it is a matter of writing configuration files to set up components like:
  1. Setting up the backend.
  2. Creating a volume.
  3. add-brick ...

A single configuration file can be written to perform many tasks, or the setup can be made modular by writing a configuration file for each task.

My next post will explain how to write configuration files for particular tasks, with examples.

Tuesday, May 17, 2011

GlusterFS: replace-brick

GlusterFS has a volume command called replace-brick; intuitively, it replaces one brick with another. However, the way it works is not quite obvious and requires some understanding before actually trying it out.



Let us say the cluster looks like this:

# gluster volume info

Volume Name: rb-test
Type: Distribute
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 192.168.1.95:/data/distributestore/store-1
Brick2: 192.168.1.95:/data/distributestore/store-2
Brick3: 192.168.1.96:/data/distributestore/store-1
Brick4: 192.168.1.96:/data/distributestore/store-2

To replace the brick 192.168.1.95:/data/distributestore/store-2 with 192.168.1.77:/data/distributestore/store-2, we run the following command:

# gluster volume replace-brick rb-test \
    192.168.1.95:/data/distributestore/store-2 \
    192.168.1.77:/data/distributestore/store-2 start

This command migrates the data from
192.168.1.95:/data/distributestore/store-2 to 192.168.1.77:/data/distributestore/store-2.

But the actual brick replacement is not done on the volume yet. After the above command completes, the data is present on both 192.168.1.95:/data/distributestore/store-2 and
192.168.1.77:/data/distributestore/store-2.
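
The progress of the migration can be tracked with the status sub-command mentioned at the end of this post (same argument pattern as the start command above):

# gluster volume replace-brick rb-test \
    192.168.1.95:/data/distributestore/store-2 \
    192.168.1.77:/data/distributestore/store-2 status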

To include the brick 192.168.1.77:/data/distributestore/store-2 in the volume, the command

# gluster volume replace-brick rb-test \
    192.168.1.95:/data/distributestore/store-2 \
    192.168.1.77:/data/distributestore/store-2 commit

has to be run. This attaches the new brick to the volume in place of the old one. After the above command, volume info looks like this:

[root@centos5 store-2]# gluster volume info rb-test

Volume Name: rb-test
Type: Distribute
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: 192.168.1.95:/data/distributestore/store-1
Brick2: 192.168.1.77:/data/distributestore/store-2
Brick3: 192.168.1.96:/data/distributestore/store-1
Brick4: 192.168.1.96:/data/distributestore/store-2

If you just need to create a backup of a brick, run `gluster volume replace-brick <VOLNAME> <BRICK> <NEW-BRICK> start' and never commit ;-).

The `replace-brick' volume command has other sub-commands, viz. pause, abort, and status, which are pretty much self-explanatory. See http://goo.gl/F3Lfw for more details on them.