What's new in Ceph
Well Kept
RBD: Mirroring and Snapshots
The Ceph Block Device, or RADOS Block Device (RBD), to use the term familiar to some Ceph veterans, comes with several new volume mirroring features. For one thing, the developers have significantly improved the snapshot capabilities that mirroring relies on. Snapshots in RBD have always been incremental, but Red Hat has once again tweaked both their performance and their storage consumption. As a result, snapshots in RBD require less space than ever before and are also a little faster.
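To see what incremental means in practice, consider exporting only the delta between two snapshots with the rbd command-line tool. This is a minimal sketch; the pool name (rbd), image name (vm01), snapshot names, and the diff filename are placeholders:

$ rbd snap create rbd/vm01@before
$ rbd snap create rbd/vm01@after
$ rbd export-diff --from-snap before rbd/vm01@after vm01.diff
$ rbd import-diff vm01.diff rbd/vm01   (on a second cluster holding a copy of the image with the "before" snapshot)

Only the blocks that changed between the two snapshots end up in vm01.diff, which is why incremental snapshots are so frugal with space.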
At the same time, the once rudimentary rbd-mirror now comes as a complete mirroring suite that keeps RBD volumes, and more specifically their snapshots, in sync between two Ceph clusters. This benefits companies that operate two Ceph clusters at different locations and use one as a disaster recovery setup for the other. If the primary site fails, an RBD snapshot on site B can easily be used to start a virtual machine (VM). Unless you manage your VMs completely manually, however, your VM management tooling still needs to support this mirroring function, as OpenStack now does, for example.
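For reference, snapshot-based mirroring is enabled per pool and per image with the same rbd tool. The following sketch assumes that rbd-mirror daemons are running on both clusters and that the peers have already been bootstrapped; the pool and image names are again placeholders:

$ rbd mirror pool enable rbd image
$ rbd mirror image enable rbd/vm01 snapshot
$ rbd mirror snapshot schedule add --pool rbd --image vm01 1h
$ rbd mirror image status rbd/vm01

The schedule in the third line tells Ceph to create a new mirror snapshot every hour, which the rbd-mirror daemon on the remote cluster then replays.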
CephFS: More Filesystems per Cluster
CephFS was once the nucleus of the entire Ceph setup. In the early Inktank years, however, it was forced to take a back seat behind RBD and the Object Gateway among the RADOS front ends, because it offered the least value in the context of private cloud setups, and at the time almost everyone wanted a private cloud. It took several years before Inktank finally dared to move CephFS to version 1.0 and declare it ready for production. Critics at the time, though, saw CephFS 1.0 as more of a pared-back release with a massively reduced feature set. Many of the missing features have been added in the meantime. Moreover, it became possible to operate multiple CephFS filesystems within a RADOS cluster.
What sounds like a detail has practical, tangible implications. Previously, you could create exactly one CephFS in a RADOS cluster. In many companies, though, this approach is not viable because of compliance requirements. A common stipulation, for example, is a logical separation between a company's own data and third-party data in the cloud, and implementing that in a meaningful way is virtually impossible with just ACLs and POSIX permissions. A better solution is to give each customer a separate filesystem with its own CephX key. RADOS then blocks access, at the object storage level, to objects that the accessing client has no business touching. This feature has since matured to the point of being production-ready. As a bonus, you can now replace different services such as NFS and Samba with CephFS in your own environment without the contents of the different CephFS filesystems getting in each other's way.
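A minimal sketch of this setup with placeholder names: Two filesystems are created, and each customer receives a CephX key that is only valid for its own filesystem. (Older Ceph releases first require ceph fs flag set enable_multiple true.)

$ ceph fs volume create customer-a
$ ceph fs volume create customer-b
$ ceph fs authorize customer-a client.cust-a / rw
$ ceph fs authorize customer-b client.cust-b / rw

A client that authenticates as client.cust-a can then mount customer-a, but any attempt to reach objects belonging to customer-b fails at the RADOS level.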
Ceph Object Gateway
Huge changes have been made to the Ceph Object Gateway, in particular to the way off-site replication works. The gateway, often referred to by its legacy name, RADOS Gateway (RGW), was the first front end in Ceph to allow asynchronous replication across multiple sites. That made sense, because the gateway provides REST-based access to objects in RADOS by emulating the Amazon Web Services (AWS) Simple Storage Service (S3) API.
S3 has become the asset store of many web applications: Images, video, and other immutable data are kept in small stores in regional proximity to the client, and the web application automatically generates the appropriate links depending on the client's origin. The option to sync content between sites is an obvious fit for this use case.
Today, this functionality is no longer a separate tool but a component of RGW, and it remains remarkable: Specific buckets or entire pools can be replicated between RADOS instances with realms, zone groups, and zones. The Ceph documentation even has a blueprint for extending such a setup with Nginx. RADOS then becomes a kind of massively scalable mini-CDN whose performance is good enough for the requirements of HTTP and HTTPS.
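To give an impression of the moving parts, this sketch creates the multisite hierarchy on the master side with radosgw-admin; the realm, zone group, and zone names as well as the endpoint URL are placeholders:

$ radosgw-admin realm create --rgw-realm=assets --default
$ radosgw-admin zonegroup create --rgw-zonegroup=eu --endpoints=http://rgw1.example.com:80 --master --default
$ radosgw-admin zone create --rgw-zonegroup=eu --rgw-zone=eu-west --endpoints=http://rgw1.example.com:80 --master --default
$ radosgw-admin period update --commit

A second site joins by pulling the realm (radosgw-admin realm pull) and creating its own zone in the same zone group, after which RGW replicates the data asynchronously.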