Lead Image © alexutemov, 123RF.com

Continuous integration with Docker and GitLab

Assembly Line

Article from ADMIN 45/2018
GitLab provides the perfect environment for generating Docker containers that can help you operate critical infrastructure reliably and reproducibly.

Various articles have already explained why it is not a good idea to rely on prebuilt Docker containers (e.g., from Docker Hub [1]). The containers found there are usually black boxes, whose creation process lies completely in the dark. You don't want that at the core of your infrastructure.

The good news: You can easily build your own containers for Docker on the basis of official containers from the various manufacturers. Practically every common Linux distribution, such as Ubuntu, Debian, and CentOS, offers official Docker images that reside on Docker Hub. Their sources are freely available for any user to download.
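
For illustration only, a container for an NTP server – a service that plays a role later in this article – could be derived from the official Debian image with a short Dockerfile. The following sketch is not taken from a real repository: it assumes an ntp.conf file next to the Dockerfile and relies on Debian's ntp package.

# Build on the official Debian base image
FROM debian:stable-slim
# Install the NTP daemon and clean up the package lists
RUN apt-get update && \
    apt-get install -y --no-install-recommends ntp && \
    rm -rf /var/lib/apt/lists/*
# Ship the desired configuration inside the image
COPY ntp.conf /etc/ntp.conf
EXPOSE 123/udp
# Run ntpd in the foreground; -g allows a large initial clock correction
CMD ["ntpd", "-n", "-g"]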

GitLab [2] proves to be extremely useful in this context: It comes with its own Docker registry and offers many tools to achieve true continuous integration (CI). One example is pipelines: If you put tests in your Docker container directory, GitLab automatically calls these tests every time you commit to the Git directory after rebuilding the Docker container.
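
What such a pipeline looks like is defined in a .gitlab-ci.yml file in the repository. The following sketch is an assumption, not a prescribed layout: it presumes a Runner that can reach a local Docker daemon and a hypothetical tests/run.sh script in the repository; the variables $CI_REGISTRY, $CI_REGISTRY_IMAGE, $CI_JOB_TOKEN, and $CI_COMMIT_SHA are predefined by GitLab.

# Hedged sketch of a .gitlab-ci.yml that builds, tests, and pushes a container image
stages:
  - build
  - test
  - release

build_image:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .

test_image:
  stage: test
  script:
    # Hypothetical test script shipped in the same repository
    - ./tests/run.sh "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

release_image:
  stage: release
  script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
  only:
    - master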

In this article, I present in detail a complete Docker workflow based on GitLab. Besides the necessary adjustments in GitLab, I talk about the files required in a GitLab Docker container directory.

Conventional Approach

The classic approach is to install the services needed on central servers (see the "Deep Infrastructure" box), provide them with the corresponding configuration files, and operate them there – possibly with Pacemaker or a similar resource manager for high availability. However, this arrangement is not particularly elegant: If you want to compensate for the failure of such a node quickly, you would first have to invest a great deal of time in automation with Ansible or other tools.

Deep Infrastructure

When it comes to the efficient distribution of applications (i.e., classic Platform as a Service or Software as a Service), Docker is now part of the standard repertoire. If you limit the use of Docker containers to these areas of application, though, you are doing Docker and containers an injustice, because meaningful application scenarios for container virtualization also regularly arise in places where you do not expect them at all. One example that stands out is deep infrastructure.

When the conversation turns to deep infrastructure, administrators usually understand it to mean the components of a setup that take care of the operation of fundamental services. Imagine a cloud installation based on OpenStack as an example. For the user to start virtual machines with an API call, many different services need to collaborate in the background. Every server needs a suitable network configuration, including the correct IP addresses, which is only realistically possible in large setups with DHCP. However, this implies the operation of a DHCP server.

NTP is also a classic standard requirement, because the clocks of all servers need to be synchronized to within fractions of a second. Various services such as message queues based on AMQP or distributed storage systems otherwise quit almost immediately.

Additionally, huge cloud installations need to be able to scale up at short notice, which only works if the manual overhead for installing new servers is kept to a minimum. Once a server is entered in the inventory system, including the required IP addresses and any relevant credentials, it must be sufficient to mount it in the rack and power it on.

Everything else, from carrying out maintenance tasks to installing an operating system and rolling out the services for the cloud, must be fully automated. However, this workflow requires additional services from the deep infrastructure category, such as TFTP. A local mirror server of the deployed distribution is also often part of setups like this; it responds faster and generates far less external traffic than an external server.

Moreover, you would have to follow the maintenance cycles of the distributor whose Linux product is running on your own servers. If an update goes wrong, in the worst case, the entire boot infrastructure is affected.

Doing It Differently

Another approach is to roll out individual services such as NTP or dhcpd as separate containers on a lean base Linux distribution (e.g., Container Linux [3], formerly CoreOS). A minimal base system then runs on the boot infrastructure; during operation, it downloads containers from a central registry and simply runs them. If you want to update an individual service, you install a new container for the respective program.

If you want to rebuild the boot infrastructure completely (e.g., because of important Container Linux updates), you need to install a new Container Linux, which automatically downloads all relevant containers on first launch and executes them again. Container Linux even offers a configuration framework for these automated instructions in the form of Ignition [4].
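
As a rough idea of what this looks like, the following Ignition snippet defines a systemd unit that pulls and starts an NTP container at boot. It is only a sketch: the unit name and the image path pointing to a private GitLab registry are invented for this example.

{
  "ignition": { "version": "2.2.0" },
  "systemd": {
    "units": [
      {
        "name": "ntpd-container.service",
        "enabled": true,
        "contents": "[Unit]\nDescription=NTP container\nAfter=docker.service\nRequires=docker.service\n\n[Service]\nExecStartPre=-/usr/bin/docker rm -f ntpd\nExecStart=/usr/bin/docker run --name ntpd --net=host gitlab.example.com:5000/infra/ntpd:latest\nRestart=always\n\n[Install]\nWantedBy=multi-user.target\n"
      }
    ]
  }
}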

Of course, the containers must be prepared such that they already contain the required configuration and all the relevant data. For updates, you also need to be able to rebuild a container at any time and without much overhead. However, this is easier said than done: In fact, you need to build a comprehensive continuous delivery (CD) and continuous integration system for this workflow that can reliably regenerate your Docker containers.

Step 1: Docker Registry in GitLab

A working CI process for Docker containers in GitLab relies on two functioning components. Although the GitLab CI system is a standard feature of GitLab, it requires what is known as a Runner to perform the CI tasks. Additionally, the Docker registry is the target for the CI process, because the finished Docker image resides there at the end of the process.

Step 1 en route to Docker CI with GitLab is to enable the Docker registry in GitLab. The following example assumes the use of the Community Edition of GitLab based on the GitLab Omnibus version. To enable the registry, you just need three lines in /etc/gitlab/gitlab.rb:

registry_external_url 'https://gitlab.example.com:5000'
registry_nginx['ssl_certificate'] = "</path/to/>certificate.pem"
registry_nginx['ssl_certificate_key'] = "</path/to/>certificate.key"

If you have already stored the certificate for GitLab and the SSL key in /etc/gitlab/ssl/<hostname>.crt and /etc/gitlab/ssl/<hostname>.key, you can skip the path specification for Nginx in the example, leaving only the first line of the code block. The example assumes that GitLab's Docker registry uses the same host name as GitLab itself, but a different port. Also, the GitLab Docker registry is secured by SSL out of the box.

Once gitlab.rb has been extended to include the required lines, trigger the regeneration of the GitLab configuration with Chef, which is bundled with the Omnibus edition of GitLab:

sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart

Using netstat -nltp, you can then check whether Nginx is listening on port 5000. On a host with Docker installed, you can also try

docker login gitlab.example.com:5000

to connect to GitLab's Docker registry.
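
If the login succeeds, a quick manual test is to tag and push a locally built image. The project path infra/ntpd below is just an example and must correspond to an existing project in your GitLab instance:

docker build -t gitlab.example.com:5000/infra/ntpd:latest .
docker push gitlab.example.com:5000/infra/ntpd:latest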

Incidentally, in the default configuration, GitLab stores the Docker images in the /var/opt/gitlab/gitlab-rails/shared/registry/ directory. If you want to change this, you just need

gitlab_rails['registry_path'] = "</path/to/folder>"

in gitlab.rb, followed by regeneration of the GitLab configuration and a restart.
