Cloud-native storage for Kubernetes with Rook
Simpler Start
The basic requirement before you can try Rook is a running Kubernetes cluster. Rook does not place particularly high demands on the cluster: The configuration only needs to support creating local volumes on the individual cluster nodes with the existing Kubernetes volume manager. If this is not the case on all machines, Rook's pod definitions let you specify explicitly which machines may provide storage and which may not.
To make getting started as easy as possible, the Rook developers have come up with some ideas. Rook itself comes in the form of Kubernetes pods, and you can find example files on GitHub [6] that start these pods. The operator namespace contains all the components required for Rook to control Ceph; the cluster namespace starts the pods that run the Ceph components themselves.
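A minimal rollout sketch, assuming the manifests carry the names used in the upstream examples, rook-operator.yaml and rook-cluster.yaml (file and namespace names differ between Rook releases), looks like this:

kubectl create -f rook-operator.yaml
kubectl create -f rook-cluster.yaml

The first command deploys the operator; the second hands the operator a cluster definition, which prompts it to bootstrap the Ceph pods.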
Remember that for a Ceph cluster to work, it needs at least the monitoring servers (MONs) and its data silos, the object storage daemons (OSDs). In Ceph, the monitoring servers take care of both enforcing a quorum and ensuring that clients know how to reach the cluster by maintaining two central lists: The MON map lists all existing monitoring servers, and the OSD map lists the available storage devices.
However, the MONs do not act as proxy servers. Clients always need to talk to a MON when they first connect to a Ceph cluster, but as soon as they have a local copy of the MON map and the OSD map, they talk directly to the OSDs and also to other MON servers.
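If you want to inspect these two maps yourself, you can query Ceph directly. The following sketch assumes that Rook's toolbox pod from the upstream examples is running under the name rook-tools in the rook namespace (the pod name is an assumption and varies between releases):

kubectl -n rook exec -it rook-tools -- ceph mon dump
kubectl -n rook exec -it rook-tools -- ceph osd dump

Both commands print the current epoch of the respective map, which is precisely the information clients cache locally after their first contact with a MON.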
Ceph, as controlled by Rook, makes no exceptions to these rules. Accordingly, the cluster namespace from the Rook example also starts corresponding pods that act as MONs and OSDs. If you run the

kubectl get pods -n rook

command after starting the namespaces, you can see this immediately: At least three pods will be running MON servers, along with various pods running OSDs. Additionally, the rook-api pod, which is of fundamental importance for Rook itself, handles communication with the other Kubernetes APIs.
At the end of the day, a new volume type is available in Kubernetes after the Rook rollout. The volume points to the different Ceph front ends and can be used by users in their pod definitions like any other volume type.
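For block storage, the usual route is a StorageClass backed by Rook plus an ordinary persistent volume claim. A hedged sketch, with the provisioner string rook.io/block and the pool name replicapool taken from the upstream examples (both depend on the Rook release):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-block
provisioner: rook.io/block
parameters:
  # Name of the Ceph pool the volumes are carved out of
  pool: replicapool
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: rook-block
  resources:
    requests:
      storage: 10Gi

A pod then references mysql-data through a normal persistentVolumeClaim entry in its volumes section; nothing in the pod definition reveals that Ceph is doing the work behind the scenes.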
Complicated Technology
Rook does far more work in the background than you might think. A good example is the integration into the Kubernetes volumes system: Ceph running in Kubernetes is great, but also useless if the other pods cannot use the volumes created there. The Rook developers therefore tackled the problem and wrote their own volume driver for use on the target systems; the driver complies with the Kubernetes FlexVolume guidelines.
Additionally, a Rook agent runs on every kubelet node and handles communication with the Ceph cluster. If a RADOS Block Device (RBD) originating from Ceph needs to be connected to a pod on a target system, the agent ensures that the volume is also available to the target container by calling the appropriate commands on that system.
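Conceptually, the agent automates roughly what an admin would otherwise type by hand on the node. A hedged sketch with hypothetical pool and image names:

# Map the RBD image into the node's kernel (pool/image names are made up)
rbd map replicapool/pvc-0001
# On first use, create a filesystem and mount the device for the container
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /mnt/target-volume

The real agent performs these steps below the kubelet's volume directories and cleans up again when the pod disappears.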
The Full Monty
Ceph currently supports three types of access. The most common variant is to expose Ceph block devices, which can then be integrated into the local system by the rbd kernel module. The Ceph Object Gateway, also known as the RADOS Gateway (Figure 2), provides an interface to Ceph based on the RESTful Swift and S3 protocols. Finally, CephFS, a front end that offers a distributed, POSIX-compatible filesystem with Ceph as its back-end storage, was approved for production use some months ago.
From an admin's point of view, it would already have been very useful if Rook could use just one of the three front ends adequately: the one for block devices. However, the Rook developers did not want to skimp; instead, they went whole hog and integrated support for all three front ends into their project.
If a container wants to use persistent storage from Ceph, you can create a real volume with a volume directive, obtain access credentials to the RADOS Gateway for RESTful access, or mount CephFS locally (a sketch of the latter follows below). The functional range of Rook is quite impressive.
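For the CephFS case, the FlexVolume driver mentioned earlier is addressed directly in the pod definition. The driver name rook.io/rook, the filesystem name myfs, and the clusterNamespace value below are assumptions modeled on the upstream examples and differ between Rook releases:

apiVersion: v1
kind: Pod
metadata:
  name: cephfs-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: shared-data
    flexVolume:
      driver: rook.io/rook
      fsType: ceph
      options:
        # Name of the CephFS filesystem created through Rook (hypothetical)
        fsName: myfs
        # Namespace in which the Rook cluster runs
        clusterNamespace: rook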