Expanding Storage Pools
GlusterFS makes it easy to adapt an existing storage pool. If you want to add a new storage system to the pool, for example, you can use the following commands:
gluster peer probe gluster3
gluster volume add-brick gv0 replica 3 gluster3:/storage/brick1/gv0/
Here, the gluster3 system is added to the storage pool to expand the existing volume by one brick. A call to gluster volume info should confirm that the volume now has three bricks. Depending on the selected mode, you might need to add additional bricks to the volume. For example, a distributed replicated volume requires at least four bricks.
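For replicated volumes, new bricks are added in multiples of the replica count. As a minimal sketch, assuming three further systems named gluster4, gluster5, and gluster6 that have been prepared with the same brick path, the replica 3 volume gv0 could be grown into a distributed replicated (2 x 3) volume like this:
# gluster4, gluster5, and gluster6 are placeholder hostnames
gluster peer probe gluster4
gluster peer probe gluster5
gluster peer probe gluster6
gluster volume add-brick gv0 \
    gluster4:/storage/brick1/gv0/ \
    gluster5:/storage/brick1/gv0/ \
    gluster6:/storage/brick1/gv0/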
You can remove a brick from a volume just as easily as you added it. If the storage system itself is no longer needed, you can also remove it from the trusted storage pool:
gluster volume remove-brick gv0 gluster3:/storage/brick1/gv0/
gluster peer detach gluster3
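Depending on the volume type and GlusterFS release, remove-brick may need additional arguments; the exact syntax varies between versions. As a rough sketch: on a replicated volume, the replica count is reduced along with the brick, whereas on a distributed volume the data is migrated off the brick before the removal is committed:
# replicated volume: reduce the replica count and drop the brick in one step
gluster volume remove-brick gv0 replica 2 gluster3:/storage/brick1/gv0/ force
# distributed volume: migrate the data off the brick first, then commit
gluster volume remove-brick gv0 gluster3:/storage/brick1/gv0/ start
gluster volume remove-brick gv0 gluster3:/storage/brick1/gv0/ status
gluster volume remove-brick gv0 gluster3:/storage/brick1/gv0/ commit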
When you add bricks to, or remove bricks from, a distributed volume, you need to redistribute the data to reflect the changed number of bricks. To initiate this process, use the command:
gluster volume rebalance gv0 start
Calling the command with the parameter status instead of start gives you details on the progress of the restructuring.
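For example, to check on the rebalance started above:
gluster volume rebalance gv0 status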
GlusterFS as Cloud Storage
Thanks to good performance and easy scalability, GlusterFS is frequently used as a storage solution for cloud environments. Deployment is possible both in purely libvirt-based Qemu/KVM environments and in environments in which multiple KVM instances are operated in parallel. The oVirt framework and the commercial variant by Red Hat (Enterprise Virtualization) [5] are examples; they have offered the ability to use Gluster volumes as a storage pool or storage domain for some time. Thanks to the integration of the libgfapi library in GlusterFS version 3.4 [6], Qemu can access the disk directly without having to detour via a FUSE mount. Performance tests have shown that direct access to the GlusterFS volume achieves nearly the same performance as accessing a brick directly.
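To illustrate this direct access path, the following sketch starts a Qemu instance with a disk image addressed via a gluster:// URL instead of a FUSE mount; the host address and image name are placeholders and must match your own setup:
# host address and image name are placeholders for your own setup
qemu-system-x86_64 -m 2048 \
    -drive file=gluster://192.168.122.191/gv0/rhel7.img,if=virtio,format=raw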
The following example shows how to provide a simple storage pool for a libvirt-based KVM instance. At this point, I assume that the hypervisor is installed and that only the previously generated Gluster volume needs to be connected to it. In principle, this is possible with the graphical virt-manager (Virtual Machine Manager) tool (Figure 2), as well as with the virsh command-line tool.
Listing 3 shows an XML file that describes the Gluster volume so it can be added to the libvirt framework as a storage pool. You just need to specify a single storage system, along with the volume name that you used when configuring the volume. Next, create the new libvirt storage pool and enable it:
# virsh pool-define /tmp/gluster-storage.xml
Pool glusterfs-pool defined from /tmp/gluster-storage.xml
# virsh pool-start glusterfs-pool
Pool glusterfs-pool started
Listing 3
Pool Definition
<pool type='gluster'>
  <name>glusterfs-pool</name>
  <source>
    <host name='192.168.122.191'/>
    <dir path='/'/>
    <name>gv0</name>
  </source>
</pool>
If this worked, you can type virsh pool-list to show an overview of the existing storage pools on the local hypervisor:
# virsh pool-list --all
 Name             State    Autostart
--------------------------------------
 default          active   yes
 glusterfs-pool   active   no
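As the output shows, the new pool does not start automatically. If you want the pool to be activated whenever libvirtd starts, you can optionally enable autostart:
# virsh pool-autostart glusterfs-pool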
Volumes can be assigned to virtual machines within this storage pool. Unfortunately, libvirt does not let you create volumes within a GlusterFS pool as of this writing, so you need to create the volume manually (Figure 2). The following command, run on the hypervisor, creates a 4GB volume for installing a Red Hat Enterprise Linux system:
qemu-img create gluster://192.168.122.191/gv0/rhel7.img 4G
The IP address corresponds to the first storage system within the trusted storage pool in which the GlusterFS volume was previously created. The virsh vol-list command shows that the volume was created correctly:
# virsh vol-list glusterfs-pool
 Name        Path
---------------------------------------------------
 rhel7.img   gluster://192.168.122.191/gv0/rhel7.img
Finally, you can use virt-manager or the virt-install command-line tool to create the required virtual system and define the volume you just set up as the storage back end. A very simple example of installing a virtual system on the GlusterFS volume could look like this:
# virt-install --name rhel7 --memory 4096 \
    --disk vol=glusterfs-pool/rhel7.img,bus=virtio \
    --location ftp://192.168.122.1/pub/products/rhel7/
Of course, you would need to modify the call to virt-install to suit your environment; the intent here is simply to show how you can use the GlusterFS volume as the back end for an installation.
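Once the guest is defined, the Gluster-backed disk appears in its libvirt domain XML as a network disk. A sketch of what the corresponding <disk> element typically looks like, reusing the host, volume, and image names from the example above:
<!-- host, volume, and image names are taken from the example above -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='gluster' name='gv0/rhel7.img'>
    <host name='192.168.122.191' port='24007'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>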
Finally, also note that GlusterFS version 3.3 introduced yet another innovation in the form of the Unified File and Object (UFO) translator, which enables the filesystem to handle POSIX files as objects and vice versa. In OpenStack environments, the filesystem is a genuine alternative to the built-in OpenStack storage component Swift [7], because it supports all OpenStack storage protocols (file, block, object).