Management improvements, memory scaling, and EOL for FileStore
Refreshed
Few Filesystem Changes
Once the ugly duckling in the list of Ceph interfaces, CephFS has long since morphed into a full-fledged front end. In Ceph 17.2, however, the changes are minor. Existing filesystems can now be given a new name, but adjustments to Cephx keys must be made manually. Alternatively, each CephFS instance will have a unique ID in the future, which can also be used to address the filesystem.
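A minimal sketch of the procedure, assuming a filesystem originally named oldfs, a new name newfs, and a client client.foo whose caps were created with ceph fs authorize (all names are placeholders; the exact caps depend on how the client was authorized):
ceph fs rename oldfs newfs --yes-i-really-mean-it
ceph auth caps client.foo mon 'allow r fsname=newfs' mds 'allow rw fsname=newfs' osd 'allow rw tag cephfs data=newfs'
The second step is the manual part: Cephx caps that reference the filesystem by its old name do not follow the rename automatically.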
Changes to the Ceph front end for access over a block device interface are also minor. The Ceph Block Device can be used in several ways right from the start; the rbd-nbd option was a new addition to Ceph 16. It relies on a userspace daemon that exposes an RBD image as a network block device (NBD), so that only the generic nbd driver of the Linux kernel plays a role. Although the kernel also has a native RBD driver (krbd), users have always struggled with version differences between the cluster and the client, especially on enterprise systems with their often ancient kernels. The rbd-nbd client is particularly popular in the Kubernetes environment, where accessing a local block device is still the preferred way to attach persistent volumes, even though a native connection from Ceph to Kubernetes exists.
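A quick illustration, with rbd as the pool and myimage as the image (both placeholders): the daemon maps the image to the next free /dev/nbd device, which can then be used like any local disk, and releases it again on unmap.
rbd-nbd map rbd/myimage
rbd-nbd unmap /dev/nbd0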
Quality of Service for RGW
In the slipstream of the other front ends for RADOS, the Ceph Object Gateway (aka RADOS Gateway, RGW) has continuously improved over the past years, and Ceph 17.2 sees some interesting component changes. The most important change is likely to be the support for bandwidth limitation on the basis of users or buckets. RGW extends RADOS with an interface for access over HTTP REST; it emulates either OpenStack's Swift protocol or Amazon S3.
In many places, the RADOS gateway is used as a supplement for providers who want to offer their customers on-demand online storage without rolling out their own infrastructure. However, setups without an upstream load balancer have been particularly exposed to the risk that heavy traffic to individual files accessible on the public Internet takes down the RADOS gateway or even the entire cluster. Mitigation is now possible by defining rate limits for the respective user or the associated bucket. This is also an opportunity for more sales: After all, if a provider defines standard limits for all users, lifting that limit for individual customers can be marketed as a feature with a separate price tag.
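As a sketch of how this looks in practice, with customer1 as a placeholder user ID and deliberately arbitrary values, the new ratelimit subcommand of radosgw-admin defines and then activates the limits:
radosgw-admin ratelimit set --ratelimit-scope=user --uid=customer1 --max-read-ops=1024 --max-read-bytes=104857600
radosgw-admin ratelimit enable --ratelimit-scope=user --uid=customer1
Bucket-level limits follow the same pattern with --ratelimit-scope=bucket.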
The Road to Ceph 17.2
All in all, Ceph 17.2 is a robust update without many frills. For the vast majority of Ceph admins, it primarily offers increased stability and more features with little overhead. If you want to update to the new version, named Quincy, you have several options.
The easiest way is to rely on Ceph's own cephadm orchestration tool. Upgrading an active Ceph cluster whose OSDs already use the RocksDB-backed BlueStore can then be triggered by typing
ceph orch upgrade start --ceph-version 17.2.0
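The orchestrator then performs the upgrade as a rolling operation; its progress can be checked at any time with
ceph orch upgrade status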
The described changes still need to be made, depending on the deployment scenario.
If you still roll out your Ceph services manually or use an automation tool (e.g., Ansible), you can look forward to some manual work, or you will have to rely on the respective developers for support. The developers of Rook, for example, which makes Ceph usable in Kubernetes, were already working on new Helm charts at the time of going to press, and they might be available by the time this issue is published.
In any case, upgrade angst is unfounded; if you got along with Ceph 16, you will not encounter any major problems in Ceph 17.2.