Comparing Ceph and GlusterFS

Shared storage systems GlusterFS and Ceph compared

Tuning for GlusterFS and Ceph

To get the most out of GlusterFS, admins can start at several points. In terms of pure physics, network throughput and the speed of the disks behind each brick are decisive. Actions at this level are completely transparent to GlusterFS: whether it talks over eth0 or a bonded bond0 interface makes no difference to the software, and faster disks on the back end simply help. Admins can also tune the underlying filesystem, which pays off because GlusterFS stores the files entrusted to it 1:1 on the back end. It is not advisable to configure too many bricks per server. Beyond the hardware, there is also plenty to adjust at the protocol level.

Switching O_DIRECT on or off in the POSIX translator has already been mentioned. At the volume level, the read cache and write buffers can be adjusted to suit your needs. The eager-lock switch is relatively new; with it, GlusterFS hands locks over from one transaction to the next more quickly. In general, the following rules apply: Relative performance grows with the number of clients, distribution at the application level benefits GlusterFS performance, and single-threaded applications should be avoided.
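
In practice, these knobs are ordinary volume options that can be set with the gluster command-line tool. The following minimal sketch in Python simply wraps that CLI; the volume name gv0 and the values shown are assumptions, and the option names (performance.cache-size, performance.write-behind-window-size, cluster.eager-lock, storage.o-direct) should be verified against gluster volume set help for the installed GlusterFS version.

    #!/usr/bin/env python3
    # Minimal sketch: apply volume-level tuning options through the gluster CLI.
    # The volume name and values are placeholders; check the option names with
    # "gluster volume set help" for your GlusterFS release.
    import subprocess

    VOLUME = "gv0"  # assumed volume name

    TUNING = {
        "performance.cache-size": "256MB",              # read cache for the volume
        "performance.write-behind-window-size": "4MB",  # per-file write buffer
        "cluster.eager-lock": "on",                     # pass locks to the next transaction faster
        "storage.o-direct": "off",                      # O_DIRECT in the POSIX translator
    }

    for option, value in TUNING.items():
        subprocess.run(["gluster", "volume", "set", VOLUME, option, value],
                       check=True)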

Ceph, in turn, gives administrators some interesting options with regard to a storage cluster's hardware. The previously described process of storing data on OSDs happens in the first step between the client and a single OSD, which accepts the binary objects from the client. The trick is that, on the client side, maintaining connections to several OSDs at the same time is not a problem. A client can therefore split a 16MB file into four 4MB objects and upload these four objects simultaneously to different OSDs. A Ceph client can thus write continuously to several spindles at once, which bundles the performance of all the disks used, much as RAID 0 does.
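
From the client side, this behavior is easy to reproduce with the Python bindings for librados. The sketch below only illustrates the principle: it assumes python-rados is installed, a readable /etc/ceph/ceph.conf, and an existing pool named data; the object names and sizes are placeholders.

    #!/usr/bin/env python3
    # Sketch: split a 16MB payload into four 4MB objects and push them to
    # RADOS with asynchronous writes, so several OSDs work in parallel.
    # Assumes python-rados, /etc/ceph/ceph.conf, and a pool named "data".
    import rados

    CHUNK = 4 * 1024 * 1024                 # 4MB per object
    payload = b"\0" * (16 * 1024 * 1024)    # stand-in for a 16MB file

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("data")

    try:
        completions = []
        for i in range(0, len(payload), CHUNK):
            # Each object normally lands on a different OSD, so the four
            # writes proceed on separate disks at the same time.
            c = ioctx.aio_write("demo.%d" % (i // CHUNK), payload[i:i + CHUNK])
            completions.append(c)
        for c in completions:
            c.wait_for_complete()           # block until the OSDs acknowledge
    finally:
        ioctx.close()
        cluster.shutdown()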

The effects are dramatic: Instead of the expensive SAS disks used in SAN storage, Ceph delivers comparable performance with normal SATA drives, which are much better value for the money. Latency may give some admins cause for concern, because SATA disks (especially the desktop models) lag significantly behind comparable SAS disks in this respect. However, the Ceph developers have a solution for this problem, too, and it relates to the OSD journals.

In Ceph, each OSD has a journal – an upstream buffer that first absorbs all changes and only then flushes them to the actual data carrier. The journal can reside either directly on the OSD or on an external device (e.g., an SSD). Up to four OSD journals can be outsourced to a single SSD, which again has a dramatic effect on performance: in such a setup, clients writing to the Ceph cluster simultaneously see the speed that several SSDs can deliver, and in terms of performance, the combination leaves even SAS drives well behind.
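
Whether the journals really ended up on the SSD is easy to check on an OSD node. With the classic FileStore layout described here, each OSD data directory contains a journal entry that is a symlink to the external journal partition. The sketch below assumes that pre-BlueStore layout under /var/lib/ceph/osd/ and simply reports where each journal lives.

    #!/usr/bin/env python3
    # Sketch: report where each FileStore OSD journal resides. Assumes the
    # classic pre-BlueStore layout with data directories under
    # /var/lib/ceph/osd/ and a "journal" file or symlink inside each one.
    import glob
    import os

    for osd_dir in sorted(glob.glob("/var/lib/ceph/osd/*")):
        journal = os.path.join(osd_dir, "journal")
        if os.path.islink(journal):
            # External journal: the symlink points to a partition, e.g., on an SSD.
            print("%s -> %s" % (osd_dir, os.path.realpath(journal)))
        elif os.path.exists(journal):
            # Co-located journal: a plain file on the OSD's own data disk.
            print("%s -> journal on the OSD's own disk" % osd_dir)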

Conclusion

There is no real winner or loser here. Both solutions have their own strengths and weaknesses – fortunately, never in the same areas. Ceph is deeply rooted in the world of the object store and therefore plays its role particularly well as storage for hypervisors or open source cloud solutions. It looks slightly less impressive in the filesystem area. This, however, is where GlusterFS enters the game: Coming from the file-based NAS environment, it can leverage its strengths there – even in a production environment. GlusterFS only turned into an object store quite late in its career; thus, it still has to work like crazy to catch up.

Both tools feel comfortable in the high-availability environment, although Ceph is less traditionally oriented than GlusterFS: the latter also works with commodity hardware but feels a bit more comfortable on enterprise servers.

The "distribution layer" is quite different. The crown jewel of Ceph is RADOS and its corresponding interfaces. GlusterFS, however, impresses thanks to its much leaner filesystem layer that enables debugging and recovery from the back end. Additionally, the translators provide a good foundation for extensions. IT decision makers should look at the advantages and disadvantages of these solutions and compare them with the requirements and conditions of their data center. What fits best will then be the right solution.

The Authors

Dr. Udo Seidel is a math and physics teacher by training and has been a Linux fan since 1996. After graduating, he worked as a Linux/Unix trainer, system administrator, and senior solution engineer. Today, he is a director of the Linux strategy team at Amadeus Data Processing GmbH in Erding, Germany.

Martin Gerhard Loschwitz works as a principal consultant at hastexo. He concentrates on the topics of HA, distributed storage, and OpenStack. In his spare time, he maintains Pacemaker for the Debian distribution.
