Cloud-native storage with OpenEBS
Home in the Clouds
cStor Storage Engine
The fourth storage engine in OpenEBS is cStor, which is getting ready to replace the good old Jiva. Although Jiva is easy to use, it does not offer as many options as cStor. To use cStor, you first need to create a cStor storage pool – which OpenEBS does not do by default – from block devices on each storage node that are not used elsewhere. The Node Disk Manager scans the available devices and uses a (configurable) blacklist to filter out those that are unsuitable for storage, such as already mounted disks. Several disks per node can then form a local storage pool that, together with the pools of the other nodes, makes up the cStor storage pool.
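Which devices the Node Disk Manager ignores is controlled by filter entries in its ConfigMap (named openebs-ndm-config in a standard installation). A shortened, hedged excerpt – the exclude values shown here are illustrative of the shipped defaults and can differ between releases:

filterconfigs:
  - key: os-disk-exclude-filter     # skip the disk that hosts the operating system
    name: os disk exclude filter
    state: true
    exclude: "/,/etc/hosts,/boot"
  - key: path-filter                # blacklist device paths unsuitable for storage
    name: path filter
    state: true
    include: ""
    exclude: "loop,fd0,sr0,/dev/ram,/dev/dm-,/dev/md"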
A list of the block devices available in the cluster is output by the call:
kubectl get blockdevice -n openebs
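The output lists one blockdevice resource per detected disk. On a test cluster it might look something like the following (sizes and ages are illustrative; the device names match those used in Listing 3):

NAME                                           NODENAME   SIZE           CLAIMSTATE   STATUS   AGE
blockdevice-3f4e3fea1ee6b86ca85d2cde0f132007   node2      107374182400   Unclaimed    Active   2m10s
blockdevice-db84a74a39c0a1902fced6663652118e   node2      107374182400   Unclaimed    Active   2m10s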
These devices can then be entered in a YAML file and composed into a storage pool. You can specify, for example, how a node writes the data to its local disks: striped, mirrored, raidz, or raidz2. The fact that these terms are reminiscent of RAID is not a coincidence; they are underpinned by the ZFS filesystem, which OpenEBS uses behind the scenes for cStor's local data storage. However, note that this only applies to local data storage: As mentioned, replication of the data otherwise takes place at the volume level. A maximum of five replicas are possible, of which at least three must be available for the volume to be usable.
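With the CSPC API described next, this choice is made in the poolConfig section of the pool manifest. A minimal, hedged fragment – the value spellings follow the CSPC v1 API; see Listing 3 for a complete example:

poolConfig:
  dataRaidGroupType: "stripe"   # alternatives: "mirror", "raidz", "raidz2"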
As an alternative to the manual process described previously for creating the cStor storage pool, OpenEBS version 2.0 introduces new Kubernetes operators under the CStorPoolCluster (CSPC) umbrella, which are intended to simplify the provisioning and management of cStor pools, including such things as adding new disks to pools, replacing broken disks, resizing volumes, backing up and restoring, and automatically upgrading the software. More information can be found on the GitHub page [3]. If OpenEBS is already installed as described above, you just need to deploy the operator on top:
kubectl apply -f https://openebs.github.io/charts/cstor-operator.yaml
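Afterward, it is worth checking that the operator pods have started – in the releases I looked at, the relevant deployments are named cspc-operator and cvc-operator, but the names can vary between versions:

kubectl get pods -n openebs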
Now you can create a YAML file as shown in Listing 3 and enter the block devices you discovered earlier. The example creates a storage pool that mirrors the data between the two disks (dataRaidGroupType: "mirror"). Calls to

kubectl get cspc -n openebs
kubectl get cspi -n openebs

show the current status of the pool.
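If everything worked, the pool cluster and its pool instance report themselves as healthy. Hedged sample output – the column layout follows OpenEBS 2.x, and the values are illustrative:

NAME         HEALTHYINSTANCES   PROVISIONEDINSTANCES   DESIREDINSTANCES   AGE
simplepool   1                  1                      1                  3m

NAME              HOSTNAME   FREE   CAPACITY   READONLY   PROVISIONEDREPLICAS   HEALTHYREPLICAS   STATUS   AGE
simplepool-xxxx   node2      96G    96G        false      0                     0                 ONLINE   3m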
Listing 3
cStor Pool
apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: simplepool
  namespace: openebs
spec:
  pools:
    - nodeSelector:
        kubernetes.io/hostname: "node2"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "blockdevice-3f4e3fea1ee6b86ca85d2cde0f132007"
            - blockDeviceName: "blockdevice-db84a74a39c0a1902fced6663652118e"
      poolConfig:
        dataRaidGroupType: "mirror"
To retrieve volumes from the pool, all you need is a storage class as described in Listing 4. The replica count of this storage class is 1 because only one pool is available. If replication is desired, you would have to extend the CStorPoolCluster (CSPC) resource accordingly below the pools key. The cstor-simple storage class can now be used in PVCs to request persistent volumes from the storage.
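A hedged sketch of such an extension: a second entry below pools adds a pool on another node, over which the volume replicas can then be spread. The hostname and device names here are placeholders:

spec:
  pools:
    - nodeSelector:
        kubernetes.io/hostname: "node2"
      # ... first pool as in Listing 3 ...
    - nodeSelector:
        kubernetes.io/hostname: "node3"            # placeholder
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "blockdevice-aaa"   # placeholder
            - blockDeviceName: "blockdevice-bbb"   # placeholder
      poolConfig:
        dataRaidGroupType: "mirror"

The replicaCount in the storage class could then be raised to match the number of pools.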
Listing 4
cStor Storage Class
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cstor-simple
provisioner: cstor.csi.openebs.io
allowVolumeExpansion: true
parameters:
  cas-type: cstor
  cstorPoolCluster: simplepool
  replicaCount: "1"
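A minimal PVC that draws on this storage class could look like the following sketch; the name and size are, of course, arbitrary:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-claim              # arbitrary example name
spec:
  storageClassName: cstor-simple
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi              # arbitrary example size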
Conclusions
Storage is a massive challenge for container clusters. OpenEBS provides container-attached storage that makes block devices available to individual nodes in the Kubernetes cluster. Operation is handled by a Kubernetes operator, which also provides features such as snapshots, backup, and restore.
One prerequisite for using OpenEBS is control over the Kubernetes worker nodes, which you need to extend with physical or virtual disks. However, Kubernetes users should not expect performance miracles, because the storage is distributed over the cluster by iSCSI behind the scenes. If you want to run a database with high-performance requirements, you can use the local provisioners from OpenEBS, which offer more convenience than the local provisioners from Kubernetes but are, of course, ultimately tied to the respective Kubernetes node.
Infos
- "Cloud-native storage for Kubernetes with Rook" by Martin Loschwitz, ADMIN , issue 49, 2019, pg. 47, https://www.admin-magazine.com/Archive/2019/49/Cloud-native-storage-for-Kubernetes-with-Rook/
- OpenEBS: https://openebs.io
- CSCP operator: https://github.com/openebs/cstor-operators