Management improvements, memory scaling, and EOL for FileStore


Few Filesystem Changes

Once the ugly duckling among the Ceph interfaces, CephFS has long since morphed into a full-fledged front end. In Ceph 17.2, however, the changes are minor. Existing filesystems can now be renamed, although the matching CephX keys must still be adjusted manually. Additionally, each CephFS instance will have a unique ID in the future, which can also be used to address the filesystem.
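Renaming is done from the Ceph CLI. A minimal sketch follows; the filesystem names and the client name are placeholders, and the capability strings assume a key that was originally scoped to the old filesystem name:

```shell
# Rename an existing CephFS filesystem (Ceph 17.2/Quincy and later);
# the flag confirms the potentially disruptive operation.
# "oldfs" and "newfs" are placeholder names.
ceph fs rename oldfs newfs --yes-i-really-mean-it

# CephX capabilities are not rewritten automatically: a client key
# that was scoped to "oldfs" must be updated manually, for example:
ceph auth caps client.myapp \
    mds "allow rw fsname=newfs" \
    mon "allow r fsname=newfs" \
    osd "allow rw tag cephfs data=newfs"
```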

Changes to the Ceph front end for access over a block device interface are also minor. The Ceph Block Device can be used in several ways right from the start; the rbd-nbd option was a new addition to Ceph 16. It relies on a userspace daemon to expose an RBD image as a network block device (NBD), so that only the generic nbd driver of the Linux kernel is involved. Although the kernel also has a native RBD driver (krbd), users have always struggled with version differences between cluster and client, especially on enterprise systems with their often ancient kernels. The rbd-nbd client is particularly popular in the Kubernetes environment, where a local block device is still the preferred way to access persistent volumes, even though a native connection between Ceph and Kubernetes exists.
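Mapping an image through the userspace NBD path looks like the following sketch; the pool/image pair `rbd/myimage` is a placeholder:

```shell
# Map an RBD image via the userspace rbd-nbd daemon instead of krbd;
# "rbd/myimage" is a placeholder pool/image pair.
rbd device map --device-type nbd rbd/myimage
# The command prints a device node such as /dev/nbd0, which can then
# be partitioned, formatted, and mounted like any other block device.

# List current NBD mappings and unmap again when done:
rbd device list --device-type nbd
rbd device unmap --device-type nbd rbd/myimage
```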

Quality of Service for RGW

In the slipstream of the other front ends for RADOS, the Ceph Object Gateway (aka RADOS Gateway, RGW) has improved continuously over the past years, and Ceph 17.2 brings some interesting changes to this component. RGW extends RADOS with an interface for access over the REST HTTP protocol, emulating either OpenStack's Swift protocol or Amazon S3. The most important change is likely the support for rate limiting on the basis of users or buckets.

In many places, the RADOS Gateway is used by providers who want to offer their customers on-demand online storage without rolling out their own infrastructure. Setups without an upstream load balancer, however, have been exposed to the risk of heavy access to individual files on the public Internet taking down the RADOS Gateway or even the entire cluster. This can now be mitigated by defining limits for the respective user or the associated bucket, which is also an opportunity for more sales: After all, if a company defines standard limits for all users, lifting the limit for individual users can be marketed as a feature with a separate price tag.
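The limits are managed with radosgw-admin. A sketch of per-user and per-bucket limits follows; the user ID, bucket name, and all numeric values are illustrative assumptions:

```shell
# Define a per-user rate limit (all values are illustrative):
radosgw-admin ratelimit set --ratelimit-scope=user --uid=customer1 \
    --max-read-ops=1024 --max-write-ops=256 \
    --max-read-bytes=1073741824 --max-write-bytes=268435456

# Limits only take effect once they are explicitly enabled:
radosgw-admin ratelimit enable --ratelimit-scope=user --uid=customer1

# The same mechanism works per bucket:
radosgw-admin ratelimit set --ratelimit-scope=bucket \
    --bucket=customer1-data --max-read-ops=512
radosgw-admin ratelimit enable --ratelimit-scope=bucket \
    --bucket=customer1-data
```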

The Road to Ceph 17.2

All in all, Ceph 17.2 is a robust update without many frills. For the vast majority of Ceph admins, it primarily offers increased stability and more features with little overhead. If you want to update to the new version, named Quincy, you have several options.

The easiest way is to rely on Ceph's own cephadm orchestration tool. Upgrading a running cephadm-managed Ceph cluster can then be triggered by typing

ceph orch upgrade start --ceph-version 17.2.0

The described changes still need to be made, depending on the deployment scenario.
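While the rolling upgrade runs, cephadm reports its progress and lets you intervene, which is useful if a host misbehaves mid-upgrade:

```shell
# Follow the progress of a running upgrade:
ceph orch upgrade status
ceph -s    # the upgrade progress also appears in the cluster status

# If something looks wrong, the upgrade can be paused, resumed,
# or canceled entirely:
ceph orch upgrade pause
ceph orch upgrade resume
ceph orch upgrade stop
```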

If you still roll out your Ceph services manually or use an automation tool (e.g., Ansible), you can look forward to some manual work or a call to the developers for help. The developers of Rook, for example, which makes Ceph operable in Kubernetes, were already working on new Helm charts at the time of going to press, and they might already be available when this issue is published.

In any case, upgrade angst is unfounded; if you got along with Ceph 16, you will not encounter any major problems in Ceph 17.2.

The Author

Freelance journalist Martin Gerhard Loschwitz focuses primarily on topics such as OpenStack, Kubernetes, and Ceph.
