Comparing Ceph and GlusterFS

Shared storage systems GlusterFS and Ceph compared

GlusterFS Front Ends

GlusterFS comes with four different interfaces. The first is the native filesystem driver. This driver is not part of the Linux kernel; instead, it uses the FUSE approach (Filesystem in Userspace; Figure 2). Additionally, the storage solution comes with its own stripped-down NFS server, which supports only version 3 of the protocol and TCP as the transport, but that is fine in many cases. The obligatory RESTful interface is of course also present. The libgfapi library is the latest access option; armed with it, GlusterFS is setting out to conquer the storage world. Whether these features are enough, only time will tell.

Figure 2: GlusterFS uses FUSE for native access via a POSIX-compatible layer.
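The libgfapi library lets applications talk to a GlusterFS volume directly, without a detour through a kernel mount. QEMU is a prominent consumer of this interface: Built with GlusterFS support, it can address a volume through a gluster:// URL. The following one-liner is a minimal sketch, assuming a hypothetical volume named gv0 served by a host named gluster1:

# qemu-img create -f qcow2 gluster://gluster1/gv0/vm01.qcow2 10G

The image file vm01.qcow2 is created directly on the volume via libgfapi; no FUSE or NFS mount is involved.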

Licensing Business

Ceph is based throughout on the LGPL – that is, the variant of the GPL that even allows linking with non-free software, as long as a completely new piece of work (a derivative work) is not subsequently created from the two linked parts. From Inktank's point of view, this decision is sensible; after all, it is quite possible that Ceph will be used in the foreseeable future as part of commercial storage products, and in such a scenario, the restrictions of the GPL [3] would be rather annoying with regard to linked software.

Incidentally, the LGPL applies to all components of Ceph: the OSDs and MONs, the metadata servers for the CephFS filesystem, and the libraries (i.e., librados and the RBD library, librbd).

Until its acquisition by Red Hat in 2011, GlusterFS was released under the GNU Affero General Public License (AGPL) [4]. Additionally, co-developers had to sign a Contributor License Agreement (CLA) [5]. This setup is not unusual in the open source community and serves to protect the patrons of a software project. Red Hat removed the need for the Contributor License Agreement and put GlusterFS under the GNU General Public License version 3 (GPLv3). In doing so, the new "owner" of the open source storage software wanted to demonstrate maximum openness. The license change was part of the discussions that took place during the acquisition of Gluster Inc. by Red Hat. No further adjustments are expected in this area in the near future.

Front Ends – Take One!

The software provides two different options for POSIX-style access to data in the GlusterFS cluster. A filesystem driver is of course the obvious approach. For local storage devices, and even for traditional shared storage solutions, such a driver is usually part of the Linux kernel. In the recent past, however, filesystems in userspace (FUSE) have enjoyed a fair amount of popularity [6]. GlusterFS also takes this approach, which offers several advantages: Development is not subject to the strict rules of the Linux kernel, and porting to other platforms is easier – provided they also support FUSE. Because the entire software stack runs in userspace, the usual mechanisms for process management apply one-to-one.

FUSE is not without controversy, however. The additional context switches between the kernel and userspace certainly impair the performance of the filesystem.
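Mounting a volume through the native client makes the FUSE construction visible. The following commands are a minimal sketch, assuming a hypothetical started volume gv0 on host gluster1 and an installed GlusterFS native client:

# mount -t glusterfs gluster1:/gv0 /mnt/glusterfs
# mount | grep fuse.glusterfs

The second command should report the new mount with the filesystem type fuse.glusterfs. The client itself shows up as a normal glusterfs process, so the usual tools, such as ps and kill, can manage it.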

Interestingly, GlusterFS provides its own NFS server [7], which, on second glance, is not necessarily surprising. First of all, NAS was the market segment that GlusterFS wanted to conquer. Second, all functions were to be part of the software and thus largely independent of the underlying operating system and its configuration. However, this development approach came at a price, which GlusterFS paid primarily in the form of a rather limited NFS server of its own. You will search in vain for version 4 of the protocol [8], and the same applies to support for UDP. The Network Lock Manager (NLM), which was missing at first, is now in place (Listing 1).

Listing 1

Gluster NFS

# rpcinfo -p gluster1
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100227    3   tcp   2049  nfs_acl
    100021    3   udp  56765  nlockmgr
    100021    3   tcp  54964  nlockmgr
    100005    3   tcp  38465  mountd
    100005    1   tcp  38466  mountd
    100003    3   tcp   2049  nfs
    100021    4   tcp  38468  nlockmgr
    100024    1   udp  34712  status
    100024    1   tcp  46596  status
    100021    1   udp    769  nlockmgr
    100021    1   tcp    771  nlockmgr
#

The typical Linux NFS client can access data on the GlusterFS network within the above-mentioned limitations. A separate software package is not necessary. This approach facilitates the migration of traditional NAS environments, because the lion's share of the work takes place on the server side.
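In practice, this means pinning the mount to NFSv3 over TCP. The following call is a minimal sketch, again assuming the hypothetical volume gv0 on host gluster1:

# mount -t nfs -o vers=3,proto=tcp gluster1:/gv0 /mnt/nfs

From the client's point of view, the result is an ordinary NFS mount; that the exporting server is actually a GlusterFS cluster remains invisible.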
