Combining containers and OpenStack

Interaction

Kolla

OpenStack users separate naturally into two camps. Up to this point, the target group has been users who want access to containers, in addition to classical physical and virtual machines. Their focus of attention is managing Docker, rkt, LXD, and the like, thus bringing Kubernetes, Mesos, and Docker Swarm into the limelight.

However, more recent interest has been in containers as the basic framework for the overall management of OpenStack, wherein OpenStack resides on top of the containers. One then refers, for example, to OpenStack on Kubernetes. Stackanetes [15] is a well-known project to deploy and manage OpenStack on Kubernetes that came out of a collaboration between CoreOS and Google. The Kolla project also fits into this camp very nicely. Because it is itself an OpenStack project, Kolla is naturally aimed specifically at this use case.

Figure 2 shows the architecture of the container substructure for OpenStack. The project itself is not all that new: The first entries in the Git repository date back to 2014, making Kolla even older than Magnum. The motivation for the project was to address the shortcomings of the methods previously used to update OpenStack without downtime for the user. The GitHub page describes three use cases [16].

Figure 2: The Kolla architecture in a schematic overview.

The first case is updating the OpenStack installation as a whole, which is equivalent to moving from one release to the next. When Kolla emerged, this approach was standard practice and the only way to update OpenStack. The other two cases cover updating or rolling back individual OpenStack components.

The design of Kolla is based on two building blocks: individual containers and container sets. Both must meet specific, well-documented requirements [16]. Kolla's top-level containers correspond to OpenStack's basic services.

All controlling instances, such as the Neutron server or the Glance Registry, are part of the OpenStack Control set, whereas Cinder and Swift fall under OpenStack Storage. Table 1 provides a more detailed breakdown.

Table 1: Kolla Container Sets

Container Set              Components
Messaging Control          RabbitMQ
High-Availability Control  HAProxy, Keepalived
OpenStack Instance         Keystone; Glance, Nova, Ceilometer, and Heat APIs
OpenStack Control          Glance, Neutron, Cinder, Nova, Ceilometer, and Heat controllers
OpenStack Compute          nova-compute, nova-libvirt, Neutron agents
OpenStack Storage          Cinder, Swift
OpenStack Network          DHCP, L3, Load Balancer as a Service (LBaaS), Firewall as a Service (FWaaS)
In operation, you can use prefabricated container images provided by the project or build them yourself [17]. At this point, Docker is the only container technology that has been tested and is supported. The images themselves can reside either on the public Docker Hub or in a private registry.
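
As a rough sketch of the first option, the Docker SDK for Python can pull such an image from a registry. The repository name and tag below are assumptions for illustration; check the Kolla documentation for the image names that match your OpenStack release, or point the client at your private registry instead.

    # Sketch: pull a prefabricated Kolla image with the Docker SDK for Python.
    # The repository name and tag are assumptions for illustration; adjust them
    # to the images published for your OpenStack release or to a private registry.
    import docker

    client = docker.from_env()  # talks to the local Docker daemon

    repository = "kolla/centos-binary-keystone"  # hypothetical image name
    tag = "4.0.0"                                # hypothetical tag

    image = client.images.pull(repository, tag=tag)
    print("Pulled:", image.tags)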

In principle, two approaches are conceivable for installing OpenStack this way. The first is deploying, starting, and stopping the container instances manually, which is certainly not desirable in these times of automation. The alternative is management software, and Kolla offers two options: Kubernetes [12] [18] and Ansible [19] [20]. In either case, you first need to set up the required infrastructure.
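
To give an idea of what the Ansible path looks like in practice, the following Python sketch merely wraps the kolla-ansible command line in subprocess calls. The inventory path and the exact sequence of subcommands are assumptions based on the project's documentation and may differ between releases.

    # Sketch: drive a Kolla deployment through the kolla-ansible CLI.
    # The inventory path and the subcommand sequence are assumptions for
    # illustration; consult the Kolla documentation for your release.
    import subprocess

    INVENTORY = "/etc/kolla/inventory/multinode"  # hypothetical inventory file

    def run_step(step):
        """Run one kolla-ansible step and abort on failure."""
        subprocess.run(["kolla-ansible", "-i", INVENTORY, step], check=True)

    for step in ("prechecks", "deploy", "post-deploy"):
        run_step(step)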

Setting up Ansible is significantly faster than standing up a Kubernetes cluster; however, Ansible comes from the configuration management world and covers many use cases beyond containers, whereas Kubernetes regards managing Docker and containers as its core task. Which one you should use at the end of the day depends strongly on the IT landscape in which you want to run Kolla. I tend to choose Kubernetes in many cases.

What Remains

If you want to connect containers and OpenStack, you cannot ignore the Zun, Magnum, and Kolla projects. Whereas Zun and Magnum are aimed at OpenStack users, Kolla is intended more for OpenStack administrators. In any case, container fans certainly get their money's worth. Those interested in running OpenStack on containers should also look outside OpenStack itself – think Stackanetes.

The leap to using containers is no longer so huge. However, containerizing OpenStack is not trivial and requires prior consideration. What services are fundamental in order for other services or the basic framework to function? In which order should the individual services be instantiated? Is it advisable to hardwire certain components to simplify installation and operation? Is container management software recommended or even necessary?
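
To make the ordering question concrete: one possible approach is to model the dependencies between services explicitly and derive a start order from them. The dependency map below is a deliberately simplified, hypothetical example, not the graph Kolla actually uses.

    # Sketch: derive a service start order from a hypothetical dependency map.
    # The dependencies are a simplification for illustration only.
    from graphlib import TopologicalSorter  # Python 3.9+

    DEPENDENCIES = {
        "rabbitmq": set(),
        "mariadb": set(),
        "keystone": {"mariadb", "rabbitmq"},
        "glance": {"keystone"},
        "nova": {"keystone", "glance", "rabbitmq"},
        "neutron": {"keystone", "rabbitmq"},
    }

    # TopologicalSorter maps each node to its predecessors, so the resulting
    # order starts with the services nothing else depends on.
    print(list(TopologicalSorter(DEPENDENCIES).static_order()))
    # e.g. ['rabbitmq', 'mariadb', 'keystone', 'glance', 'neutron', 'nova']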

The Author

Udo Seidel is a math and physics teacher and has been a Linux and open source fan since 1996. After graduating, he worked as a Linux/Unix trainer, system administrator, and senior solutions engineer. Today, he works as a digital evangelist and architect at Amadeus Data Processing GmbH in Erding, Germany.
