Red Hat PaaS hyperconverged storage
Permanently Secured
The ability to provide persistent storage for containers, and thus make the data independent of container life, can help containers make further inroads into enterprise environments. Red Hat provides hyperconverged storage in a container platform based on Docker and Kubernetes with a Platform-as-a-Service (PaaS) solution known as OpenShift Container Platform.
Volatile Containers
Just as virtual machines originally formed the basis of cloud-based environments, today, containers help boost and expand the cloud landscape. Unlike virtual machines, containers do not require a hypervisor or their own operating system and thus run directly within the system on which the container is started. This arrangement is good not only for developers, who can thus very quickly and independently provision new containers in which to develop, deploy, and test applications, but also for system administrators, because containers are very easy to manage, often with fewer dependency problems when installing applications. IT managers are also happy because of the lower costs that containers offer, as well as the increased productivity of their teams. New containers can be provisioned within seconds. Downtime as a team waits for the deployment of a new system can thus be eliminated in most cases.
Another difference between containers and virtual machines is that containers tend to be regarded as short-lived objects, whereas virtual machines are intended more for long-term use – thus the origin of the often-used pets vs. cattle meme, in which virtual machines represent cute pets with names, and containers are anonymous, unnamed critters. At the beginning of their success story, containers were often only used for stateless applications. Why this is so quickly becomes clear if you take a closer look at the life cycle of a container.
Typically, a container is started on an arbitrary host in the cloud landscape on the basis of a specific container image. Data can be modified or new data created inside the container in a writable layer of the image; however, these changes are lost on exiting the container. To keep data permanently, containers can include data volumes, which are directories within the host system provided to the containers running on the host. Here is an example of incorporating a data volume in a Docker container:
# docker run -d -P --name web -v /data:/www/webapp fedora/httpd python app.py
The /data folder on the host is now available inside the container as /www/webapp. Another approach is to use data volume containers, in which a specific container has the sole task of providing certain volumes to other containers; these are mounted at startup with the --volumes-from option.
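A data volume container can be sketched roughly as follows; the container names, the image, and the application command are only illustrative examples, not part of the Red Hat setup described later:

# docker create -v /dbdata --name dbstore fedora /bin/true
# docker run -d --volumes-from dbstore --name web1 fedora/httpd python app.py
# docker run -d --volumes-from dbstore --name web2 fedora/httpd python app.py

The dbstore container only declares the /dbdata volume; web1 and web2 then share it. The data, however, still resides on the single host that runs the containers.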
Of course, these approaches pose several problems. If you use orchestration tools to manage the containers, you will probably not know on which host a container is started. Even if it is clear, it is not unusual for the same container to run on a different host at a later time – for example, if the original host has failed or if the admin has decided that several containers of the same type are necessary and starts them on different host systems to improve scaling.
The idea is for the container administrator to stop seeing the cloud's resources as individual silos and instead to treat them as a shared resource for all systems. The host to which CPUs, RAM, or data volumes belong should no longer play a role. Access to local data volumes is thus not recommended in practical container use cases.
Storage for Stateful Containers
What might a solution for stateful containers look like, in which data is not lost when the container is terminated and is available within a complete container cluster? The answer is software-defined storage (SDS), which is provided as persistent storage for containers. CoreOS introduced this kind of distributed storage software in the form of Torus [1] some time ago.
The Linux distributor Red Hat goes one step further and – in its latest release – provides hyperconverged storage in the form of the OpenShift Container Platform [2]. What does this mean in detail? Put simply, that computing and storage resources are merged, deployed, and managed using the same container management software. Compute and storage containers therefore coexist within the same cloud framework – in this case, the OpenShift Container Platform. When creating a container, the user simply indicates the storage cluster to which they have access and the size of the required volume. When defining a container in a YAML file, this volume is then listed as an additional resource. The container is assigned the storage at startup; it remains permanently available regardless of the container's life cycle. Red Hat relies on Gluster storage, which is provided in OpenShift containers (Figure 1). If necessary, storage can also be integrated from an external Gluster cluster, although the solution is then no longer hyperconverged (Figure 2).
A new service in Gluster 3.1.3 – Heketi – makes the whole thing possible. It provides a RESTful API that can be addressed by various cloud frameworks, such as OpenShift or OpenStack, to provision the desired storage volume dynamically and mount it within the container as a persistent volume. Gluster storage pools need not be created manually; instead, the new Heketi service takes care of organizing all registered Gluster nodes in clusters, creating storage pools, and providing the desired volumes on request (Figure 3).
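If a Heketi instance is already running, you can get a first look at the clusters and nodes it manages with its command-line client. The following is only a sketch; the Heketi URL is an assumed example and must be replaced by the address of your own Heketi service:

# export HEKETI_CLI_SERVER=http://heketi-storage-project.cloudapps.example.com
# heketi-cli cluster list
# heketi-cli topology info

The topology output shows the registered Gluster nodes, their devices, and the volumes already created from them.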
The example in this article is based on the Red Hat OpenShift Container Platform 3.2. Within this environment, Gluster 3.1.3 storage nodes are used; they are themselves implemented as OpenShift containers. All practical examples assume that a corresponding OpenShift installation already exists, because a description of the complete setup is beyond the scope of this article. The installation should already include a working Heketi setup, and an OpenShift project should exist. A description of the installation of OpenShift [5], the Heketi service, and the Gluster storage containers can be found online [6].
For a better understanding of the setup, a brief introduction to the OpenShift PaaS framework follows before I look at the configuration of the storage system. OpenShift uses a microservice-based architecture in which multiple components are deployed. For example, Docker takes care of providing the required run-time environments for the applications to be operated under OpenShift within containers. Kubernetes is used to orchestrate these containers. The framework consists of a series of services, of which some run on the control system, or master node, and others on each Docker host (the nodes).
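You can see this division of labor on a running cluster by listing the registered nodes from the master; the hostname below is a placeholder for your own environment:

# oc get nodes
# oc describe node node1.example.com

The output of oc describe node also reveals which pods are currently scheduled on a given node.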
An API service on the master node controls the individual Kubernetes services and also takes care of the containers on the individual nodes. In the world of Kubernetes, the container objects are known as "pods," which can contain a single container or multiple containers. A YAML file contains all the required information for a pod, such as what image should be used for a pod's container and on which port the services within the container should listen. Information about the volume the pod should access is also included in this YAML file. Other objects in Kubernetes are typically described by JSON files and created by the oc command-line tool within OpenShift.
A Kubernetes agent service named "kubelet" runs on the individual OpenShift nodes. The kubelet service continuously verifies that the pods and other Kubernetes objects correspond to their descriptions in the YAML or JSON files referred to above. If, for example, a certain service should consist of four containers, but the kubelet service sees that only three containers are available, it starts an additional container to meet the previously defined requirements.
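You can watch this behavior, for example, with a replication controller. The following sketch assumes that a replication controller named frontend already exists in the current project; the name and the pod ID are purely hypothetical:

# oc scale rc frontend --replicas=4
# oc get pods -w
# oc delete pod frontend-abc12

After the delete command, a replacement pod is started automatically so that four replicas are running again.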
etcd Configuration
Kubernetes uses the etcd service, a distributed key-value database, for back-end storage. It holds the configuration and status information of the Kubernetes cluster. The kubelet service on the individual nodes constantly queries the database for changes and performs the necessary actions. For example, the database contains a list of all nodes of an OpenShift cluster. The master node uses this information to determine the hosts on which a new pod can be generated. So that the master node is not a single point of failure, it makes sense to implement it as a highly available service. This works, for example, with the Pacemaker cluster software or HAProxy.
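On the master node, you can query the state of the etcd cluster directly with etcdctl. The certificate paths and the port below are typical for an OpenShift 3.x installation but may differ in your setup:

# etcdctl --peers https://127.0.0.1:2379 \
    --ca-file=/etc/origin/master/master.etcd-ca.crt \
    --cert-file=/etc/origin/master/master.etcd-client.crt \
    --key-file=/etc/origin/master/master.etcd-client.key \
    cluster-health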
Before launching into the setup of the storage system, first log in to the master node (oc login) and, if not already in place, create a new project for which the storage is being provisioned (oc new-project <project-name>). At this point, assume that the Heketi service was previously set up and all Gluster storage containers are available [6].
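In concrete terms, the login and project creation could look like the following; the master URL and the project name storage-project are only examples (the latter matches the router name that appears in the listings below):

# oc login -u admin https://master.example.com:8443
# oc new-project storage-project
# oc project

The last command simply confirms which project is currently active.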
To prepare the storage system, it is important to understand that Gluster organizes all exported hard drives of the Gluster storage nodes in pools, which are then available within a cluster. From these pools, individual storage blocks known as persistent volumes (PVs) can be used in an OpenShift project. Inside pods, persistent volume claims (PVCs) are then used as persistent storage. PVs are essentially storage shared on the network and – like OpenShift nodes – provide a resource that containers can access. Kubernetes controls this access through a corresponding volume plugin. If a developer or an administrator requires a volume from this storage, the volume is requested with the help of a PVC, much like a pod has to draw on the individual OpenShift nodes as a resource. In terms of storage, the individual volume claims draw on the previously created PVs as resources. Ideally, several such PVs are created with different performance characteristics so that a matching PV can be found for each storage request (volume claim).
As mentioned earlier, most Kubernetes objects are described by JSON files and then defined using the oc tool within the OpenShift environment. The first object Kubernetes needs is a service for the Gluster storage nodes, which abstracts access to the individual nodes so that they can be addressed transparently. To do so, a Kubernetes service relies on an endpoint, which can be defined by a JSON file like that shown in Listing 1. The file is available below /usr/share/heketi/openshift/ on an OpenShift system. The OpenShift oc tool generates a corresponding Kubernetes object from the JSON file:
# oc create -f gluster-endpoint.json
endpoints "glusterfs-cluster" created
Listing 1
gluster-endpoint.json
01 { 02 "kind": "Endpoints", 03 "apiVersion": "v1", 04 "metadata": { 05 "name": "glusterfs-cluster" 06 }, 07 "subsets": [ 08 { 09 "addresses": [ 10 { 11 "ip": "192,168,121,101" 12 } 13 ], 14 "ports": [ 15 { 16 "port": 1 17 } 18 ] 19 }, 20 { 21 "addresses": [ 22 { 23 "ip": "192,168,121,102" 24 } 25 ], 26 "ports": [ 27 { 28 "port": 1 29 } 30 ] 31 }, 32 { 33 "addresses": [ 34 { 35 "ip": "192,168,121,103" 36 } 37 ], 38 "ports": [ 39 { 40 "port": 1 41 } 42 ] 43 } 44 ] 45 }
To check whether this worked, you can again use oc (Listing 2).
Listing 2
Showing Endpoints
# oc get endpoints
NAME                       ENDPOINTS                                                      AGE
storage-project-router     192.168.121.233:80,192.168.121.233:443,192.168.121.233:1936   2d
glusterfs-cluster          192.168.121.101:1,192.168.121.102:1,192.168.121.103:1         3s
heketi                     10.1.1.3:8080                                                  2m
heketi-storage-endpoints   192.168.121.101:1,192.168.121.102:1,192.168.121.103:1         3m
On the basis of this endpoint, you generate the service for the Gluster nodes in the next step. For this step, you need another JSON file named gluster-service.json (Listing 3).
Listing 3
gluster-service.json
01 { 02 "kind": "Service","apiVersion": "v1", "metadata": { 03 "name": "glusterfs-cluster" 04 }, 05 "spec": { 06 "ports": [ 07 {"port": 1} 08 ] 09 } 10 }
Next, create the service and verify that no errors have occurred (Listing 4). To provision the GlusterFS storage dynamically, you need to call the Heketi service. Heketi comes with its own command-line tool, heketi-cli, with which you can create a volume within a trusted storage pool:
# heketi-cli volume create --size=100 --persistent-volume-file=pv01.json
Listing 4
Verifying a Service
# oc create -f gluster-service.json
service "glusterfs-cluster" created
# oc get service
NAME                       CLUSTER-IP      EXTERNAL-IP   PORT(S)                   AGE
storage-project-router     172.30.94.109   <none>        80/TCP,443/TCP,1936/TCP   2d
glusterfs-cluster          172.30.212.6    <none>        1/TCP                     5s
heketi                     172.30.175.7    <none>        8080/TCP                  2m
heketi-storage-endpoints   172.30.18.24    <none>        1/TCP                     3m
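Before handing the new volume over to Kubernetes, you can double-check what Heketi has provisioned; the volume ID shown here is taken from the pv01.json example below and will differ in your environment:

# heketi-cli volume list
# heketi-cli volume info 1ba23e1134e531dec3830cfbcad1eeae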
The pv01.json file (Listing 5) must contain the name of the previously established endpoints, the desired size of the volume, and a name for the volume. Finally, you need to embed the volume in Kubernetes and make sure that it works correctly (Listing 6).
Listing 5
pv01.json
01 { 02 "kind": "PersistentVolume", 03 "apiVersion": "v1", 04 "metadata": { 05 "name": "glusterfs-1ba23e11", "creationTimestamp": null 06 }, 07 "spec": { 08 "capacity": { 09 "storage": "100Gi" 10 }, 11 "glusterfs": { 12 "endpoints": "glusterfs-cluster", "path": "vol_1ba23e1134e531dec3830cfbcad1eeae" 13 }, 14 "accessModes": [ 15 "ReadWriteMany" 16 ], 17 "persistentVolumeReclaimPolicy": "Retain" 18 }, 19 "status": {} 20 }
Listing 6
Mounting Volumes
# oc create -f pv01.json
persistentvolume "glusterfs-1ba23e11" created
# oc get pv
NAME                 CAPACITY   ACCESSMODES   STATUS      CLAIM   REASON   AGE
glusterfs-1ba23e11   100Gi      RWX           Available                    4s
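If you want to see more than the summary line, oc describe shows the details of the new PV, including the Gluster endpoints and volume path it refers to:

# oc describe pv glusterfs-1ba23e11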
From this storage block, you can now define a PVC (Listing 7), which you then specify as storage when creating a pod in the next step.
Listing 7
pvc.json
01 { 02 "apiVersion": "v1", 03 "kind": "PersistentVolumeClaim", 04 "metadata": { 05 "name": "glusterfs-claim" 06 }, 07 "spec": { 08 "accessModes": [ 09 "ReadWriteMany" 10 ], 11 "resources": { 12 "requests": { 13 "storage": "10Gi" 14 } 15 } 16 } 17 }
Now use oc to create the PVC and check that it is bound to the PV (Listing 8). Finally, define the pod that uses this claim (Listing 9).
Listing 8
Creating a PVC
# oc create -f pvc.json
persistentvolumeclaim "glusterfs-claim" created
# oc get pvc
NAME              STATUS   VOLUME               CAPACITY   ACCESSMODES   AGE
glusterfs-claim   Bound    glusterfs-1ba23e11   100Gi      RWX           11s
Listing 9
Defining the Pod
apiVersion: v1
kind: Pod
metadata:
  name: fedora
spec:
  containers:
  - name: fedora
    image: fedora:latest
    volumeMounts:
    - mountPath: /data
      name: pvc01
  volumes:
  - name: pvc01
    persistentVolumeClaim:
      claimName: glusterfs-claim

# oc create -f fedora.yml
pod "fedora" created
This pod includes a single container based on the Fedora container image. The volumes section references the previously created claim by name and assigns it a volume name, which is used in turn in the volumeMounts section to mount the volume on a particular mount point.
Within the pod, the storage requested in the persistent volume claim is now available under the /data mount point. Because the storage is provided from the Gluster storage pool, it exists independently of the node running the pod and, of course, of the pod's life cycle; it is still available for use after the pod terminates.
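A quick way to convince yourself of this – assuming the pod's container is up and running – is to write a file to the volume, delete the pod, and recreate it from the same YAML file; the test file is only an example:

# oc exec fedora -- sh -c 'echo test > /data/test.txt'
# oc delete pod fedora
# oc create -f fedora.yml
# oc exec fedora -- cat /data/test.txt

Because /data lives on the Gluster volume and not in the container's writable layer, the file written before the delete is still there after the pod has been recreated.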