CI/CD deliverables pipeline
Happy Coincidence
Software developers face an overcrowded toolbox of build and automation tools intended to automate the steps from editing source code in Git to delivering the finished product. It is not easy to select the right tools and forge them into a complete, efficient chain that reflects your way of working.
If a ready-made service solution such as GitLab does not keep you happy because its customization and expansion options are limited, the following guide takes you to a rewarding goal: a continuous integration (CI) setup that uses Jenkins [1] as its backbone.
This scheme empowers a team of software developers who manage their source code in Git to automate the recurring steps in the project (builds, tests, delivery) intelligently. The following example deliberately keeps the individual components simple, leaving plenty of room for improvement, depending on your application.
In release 2.x, pipelining [2] officially became part of Jenkins. Since then, completely new possibilities have opened up for orchestrating a build pipeline. Docker [3] is also a powerful tool that lets you implement customizations that would otherwise require considerable installation and maintenance effort on traditional build machines.
Build Environment
The following demonstration is a build pipeline that orchestrates Jenkins and takes advantage of Docker's capabilities for the build process. It is not explicitly about creating Docker images.
Jenkins orchestrates the entire build. To stay as flexible as possible when updating Jenkins itself, the setup presented here uses the official Jenkins Docker image [4]. The Jenkins master therefore only requires a machine that runs the Docker daemon. The big advantage of the Docker image is that the Jenkins home directory can be moved to a Docker volume. In this way, no settings are lost when you need to update Jenkins or the Jenkins Docker image.
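In practice, this boils down to two Docker commands. The following sketch assumes the official `jenkins/jenkins` image and its default home directory; container name, ports, and tag are choices you can adapt:

```shell
# Create a named volume so the Jenkins home survives image updates
docker volume create jenkins_home

# Start the master from the official image; 8080 serves the web UI,
# 50000 is the default port for inbound build agents
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
```

To update, you simply stop and remove the container, pull a newer image, and start it again with the same volume; all jobs and settings remain in `jenkins_home`.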
In the case of the slave machines for the Linux builds, it doesn't matter whether they are dedicated servers, virtual machines (VMs), or even VMs hosted by a cloud provider. For Jenkins to be able to use a machine as a slave, the machine only needs SSH access and a Java 8 runtime. Because the builds themselves make use of Docker, Docker additionally needs to be installed on the slaves.
An Apple build slave has to be configured much like a normal Linux machine in terms of Jenkins integration; it also needs SSH access and an installed Java 8 Runtime. If you need Docker for the build, you have to install it there explicitly. Mac-specific builds, however, usually occur directly on the machine, so admins have to set up the required software on the device.
As always, Windows requires extra handling. In short, the implementation is as follows: The Windows slaves need to connect via the Java Network Launch Protocol (JNLP), and the required tool chain must be installed on the Windows machine. Details on connecting these slaves can be found online [5].
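A minimal sketch of the JNLP connection follows; the master's hostname, the node name `win-build`, and the secret are placeholders that you take from the node's configuration page on the Jenkins master:

```shell
# On the Windows slave: fetch the agent JAR from the master ...
curl -O http://jenkins.example.com:8080/jnlpJars/agent.jar

# ... then connect to the master via JNLP (run in cmd.exe;
# the secret is shown on the node's page in the Jenkins UI)
java -jar agent.jar ^
  -jnlpUrl http://jenkins.example.com:8080/computer/win-build/slave-agent.jnlp ^
  -secret <secret-from-master>
```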
Builder Images
To build the results for different Linux target systems, intelligent use of Docker is the answer. The idea is to take a Docker image of the corresponding target system and run the build in a container started from that image. After the build, the system exports the results from the container in an appropriate way. The deliverable is not a Docker image, but a binary specific to the target system.
The goal of the setup presented here is a cmake-based Hello World application. The corresponding Dockerfile for a Debian builder is shown in Listing 1. The resulting binary is only an example, of course: As a standalone binary, it is not bound to any specific Linux system. In this case, the binary built for Fedora works just as well on Debian, which is admittedly rare in the field.
Listing 1
Debian Builder Dockerfile
FROM debian:stretch
MAINTAINER Yves Schumann <yves@eisfair.org>

ENV WORK_DIR=/data/work \
    DEBIAN_FRONTEND=noninteractive \
    LC_ALL=en_US.UTF-8

# Mount point for development workspace
RUN mkdir -p ${WORK_DIR}
VOLUME ${WORK_DIR}

COPY Debian/stretch-backports.list /etc/apt/sources.list.d/
COPY Debian/testing.list /etc/apt/sources.list.d/

RUN apt-get update -y \
 && apt-get upgrade -y

RUN apt-get install -y \
    autoconf \
    automake \
    build-essential \
    ca-certificates \
    cmake \
    curl \
    g++-7 \
    git \
    less \
    locales \
    make \
    openssh-client \
    pkg-config \
    wget

# Set locale to UTF8
RUN echo "${LC_ALL} UTF-8" > /etc/locale.gen \
 && locale-gen ${LC_ALL} \
 && dpkg-reconfigure locales \
 && /usr/sbin/update-locale LANG=${LC_ALL}
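Exporting the deliverable works naturally through the `/data/work` volume declared in the Dockerfile: If the project workspace is mounted there, the binary ends up directly on the host. The image name `debian-builder` and the project layout in this sketch are assumptions:

```shell
# Build the builder image from the Dockerfile above
docker build -t debian-builder -f Debian/Dockerfile .

# Run the build in a throwaway container; the current directory is
# mounted on /data/work, so the resulting binary lands on the host
docker run --rm -v "$(pwd):/data/work" debian-builder \
  sh -c "cd /data/work && cmake . && make"
```

A builder image for Fedora or another target system only differs in the base image and the package manager calls; the invocation from the pipeline stays the same.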
Uploader Image
After the build is completed, the deliverable is still inside the running container. If you want to publish the results on GitHub, you need a suitable mechanism to automate this upload step. The aktau/github-release [6] project, a Go application that helps publish arbitrary artifacts on GitHub, is the perfect solution.
To integrate the tool into the build process, Docker and its broad base of ready-made images once again play the leading roles. The Alpine Linux-based golang image serves as the basis for the uploader image; line 7 of the Dockerfile in Listing 2 installs the github-release tool.
Listing 2
Dockerfile github-release Tool
01 FROM golang:1.11.5-alpine3.8
02 MAINTAINER Yves Schumann <yves@eisfair.org>
03
04 # Install git, show Go version and install github-release
05 RUN apk add git \
06  && go version \
07  && go get github.com/aktau/github-release
08
09 # Create mountpoint for file to upload
10 RUN mkdir /filesToUpload
11
12 # Just show the help to github-release if container is simply started
13 CMD github-release --help
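An upload from the pipeline then looks roughly like the following sketch. The image name `uploader`, the GitHub user, repository, tag, and file names are placeholders; github-release expects the API token in the `GITHUB_TOKEN` environment variable:

```shell
# Build the uploader image from the Dockerfile in Listing 2
docker build -t uploader -f Uploader/Dockerfile .

# Mount the build result on /filesToUpload and publish it as a
# release asset on GitHub (tag must already exist as a release)
docker run --rm \
  -e GITHUB_TOKEN=<personal-access-token> \
  -v "$(pwd):/filesToUpload" uploader \
  github-release upload \
    --user example-user --repo hello-world \
    --tag v1.0.0 \
    --name hello --file /filesToUpload/hello
```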