Combining containers and OpenStack
Interaction
Kolla
OpenStack users separate naturally into two camps. Up to this point, the target group has been users who want access to containers in addition to classical physical and virtual machines. Their attention focuses on managing Docker, rkt, LXD, and the like, which brings Kubernetes, Mesos, and Docker Swarm into the limelight.
However, more recent interest has been in containers as the basic framework for the overall management of OpenStack, wherein OpenStack resides on top of the containers. This arrangement is then referred to as, for example, OpenStack on Kubernetes. Stackanetes [15] is a well-known project for deploying and managing OpenStack on Kubernetes that came out of a collaboration between CoreOS and Google. The Kolla project also fits this camp very nicely; because it is an OpenStack project, Kolla is naturally aimed at exactly this use case.
Figure 2 shows the architecture of the container substructure for OpenStack. The project itself is not that new: The first entries in the Git repository date back to 2014, which makes Kolla even older than Magnum. The motivation for the project was the inadequacy of the methods previously used to update OpenStack without downtime for the user. The GitHub page describes three use cases [16].
The first case is updating the OpenStack construct as a whole, which is equivalent to moving from one version to the next. When Kolla emerged, this approach was standard practice and the only way to update OpenStack. The other two cases cover updating or rolling back individual OpenStack components.
The design of Kolla is based on two components: individual containers and container sets. Both must meet specific, well-documented requirements [16]. The top-level container sets correspond to OpenStack's basic services.
All controlling instances, such as the Neutron server or the Glance Registry, belong to the OpenStack Control set, whereas Cinder and Swift fall under OpenStack Storage. Table 1 provides a more precise accounting.
Table 1: Kolla Container Sets

| Container Set | Components |
|---|---|
| Messaging Control | RabbitMQ |
| High-Availability Control | HAProxy, Keepalived |
| OpenStack Instance | Keystone; Glance, Nova, Ceilometer, and Heat APIs |
| OpenStack Control | Glance, Neutron, Cinder, Nova, Ceilometer, and Heat controllers |
| OpenStack Compute | nova-compute, nova-libvirt, Neutron agents |
| OpenStack Storage | Cinder, Swift |
| OpenStack Network | DHCP, L3, Load Balancer as a Service (LBaaS), Firewall as a Service (FWaaS) |
In operation, you can either use the prefabricated container images provided by the project or build them yourself [17]. At this point, only Docker has been tested and is supported as the container technology. The images themselves can reside either on the public Docker Hub or in a private registry.
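To get a feel for the local build path, the following sketch uses the kolla-build tool that ships with Kolla. It assumes a host with Docker and pip available; the base distribution, install type, and registry address are only examples in the spirit of the image-building documentation [17] and may differ between releases.

```
# Build Kolla images locally with Docker (a minimal sketch; the base
# distribution, install type, and registry address are examples only)
pip install kolla

# Build the Keystone and Nova images from source on an Ubuntu base
kolla-build --base ubuntu --type source keystone nova

# Alternatively, tag the images for a private registry and push them there
kolla-build --registry registry.example.com:5000 --push keystone nova
```

The positional arguments are treated as name patterns, so a single call can build a whole family of related images.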
In principle, two approaches are conceivable for installing OpenStack this way. The first is to deploy, start, and stop the container instances manually, which is certainly not desirable in these times of automation. The alternative is management software, for which Kolla offers two possibilities: Kubernetes [12] [18] and Ansible [19] [20]. In both cases, you first need to set up the required infrastructure.
Setting up Ansible is significantly faster than setting up a Kubernetes cluster; however, Ansible comes from the configuration management field and covers many use cases outside of containers, whereas Kubernetes considers mastering Docker and containers its main task. What you should use at the end of the day depends strongly on the nature of the IT landscape in which you want to run Kolla. I tend to choose Kubernetes in many cases.
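To give an impression of the Ansible path, the following sketch outlines an all-in-one deployment with kolla-ansible. The configuration paths, the bundled all-in-one inventory, and the command sequence follow the kolla-ansible documentation [20] and may vary between releases.

```
# All-in-one deployment with kolla-ansible (a minimal sketch; paths and the
# inventory file follow the kolla-ansible docs [20] and vary between releases)
pip install kolla-ansible

# Copy the sample configuration and the bundled all-in-one inventory
cp -r /usr/local/share/kolla-ansible/etc_examples/kolla /etc/kolla
cp /usr/local/share/kolla-ansible/ansible/inventory/all-in-one .

# Generate service passwords, then prepare the host and deploy the containers
kolla-genpwd
kolla-ansible -i all-in-one bootstrap-servers
kolla-ansible -i all-in-one prechecks
kolla-ansible -i all-in-one deploy
```

The same tool also offers an upgrade action for moving to newer images, which maps onto the update use cases described earlier.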
What Remains
If you want to connect containers and OpenStack, you cannot ignore the Zun, Magnum, and Kolla projects. Whereas Zun and Magnum are aimed at OpenStack users, Kolla is intended more for OpenStack administrators. In any case, container fans certainly get their money's worth. Those interested in OpenStack on containers also should look outside OpenStack – think Stackanetes.
The leap to using containers is no longer so huge. However, containerizing OpenStack is not trivial and requires prior consideration. What services are fundamental in order for other services or the basic framework to function? In which order should the individual services be instantiated? Is it advisable to hardwire certain components to simplify installation and operation? Is container management software recommended or even necessary?
Infos
- [1] Docker: http://www.docker.com
- [2] rkt: https://coreos.com/rkt
- [3] LXD: http://linuxcontainers.org
- [4] OpenStack: http://www.openstack.org
- [5] Magnum: http://wiki.openstack.org/wiki/Magnum
- [6] Zun: http://wiki.openstack.org/wiki/Zun
- [7] Kolla: http://wiki.openstack.org/wiki/Kolla
- [8] GIFEE: http://github.com/GIFEE/GIFEE
- [9] Nova Docker: http://github.com/openstack/nova-docker
- [10] Kuryr: http://wiki.openstack.org/wiki/Kuryr
- [11] Magnum release notes: http://github.com/openstack/magnum/blob/master/releasenotes/notes/remove-container-endpoint-3494eb8bd2406e87.yaml
- [12] Kubernetes: http://kubernetes.io
- [13] Docker Swarm: http://docs.docker.com/swarm/
- [14] Mesos: http://mesos.apache.org
- [15] Stackanetes: http://github.com/stackanetes/stackanetes
- [16] Kolla specification: http://github.com/openstack/kolla/blob/master/specs/containerize-openstack.rst
- [17] Image building with Kolla: http://docs.openstack.org/developer/kolla/image-building.html
- [18] kolla-kubernetes: http://docs.openstack.org/developer/kolla-kubernetes/
- [19] Ansible: http://www.ansible.com
- [20] kolla-ansible: http://docs.openstack.org/developer/kolla-ansible/