Look Ma, No Scripting!
The CI/CD pipeline is configured declaratively in a YAML file, and we only define the "what"; GitLab CI takes care of the "how" [9].
Jobs form the top-level objects of a pipeline definition and have to contain at least the script parameter. Caution is required at this point: The possibilities offered by a script statement quickly lead one to forget the declarative approach and implement pipeline logic. The "what" then becomes "how."
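A minimal pipeline definition illustrates the principle; the job names, image tag, and commands here are our own placeholder examples:

stages:
  - build
  - test

build-app:
  stage: build
  image: golang:alpine
  script:
    - go build -o app .

unit-tests:
  stage: test
  image: golang:alpine
  script:
    - go test ./...

Everything beyond the script lines is pure declaration; as soon as longer shell logic creeps into the script blocks, however, the "what" quietly turns into "how" again.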
To counteract this, our parent company is currently establishing a centrally maintained container library for CI/CD as an inner source that is available to all DB Systel employees. This library serves as the basis for language-specific builder images, ideally based on lightweight base images such as Alpine Linux [10]. All required certificates, environment variables, binaries, and tools for building the application are contained in the images themselves. Helper images, which come with routines for computing a new software version, for example, are also part of the container library.
Docker's approach of running one process per container allows these images to be easily linked in the job definition. The GitLab Runner then automatically starts the respective container and removes it again as soon as it has completed the pipeline step.
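In the job definition, this linking boils down to the image and services keywords. A sketch, with the builder image name invented as a stand-in for an image from such a container library:

integration-tests:
  stage: test
  # Builder image from the (hypothetical) internal container library
  image: registry.example.com/ci/builder-maven:latest
  # Sidecar container that the runner starts and stops for this job
  services:
    - postgres:alpine
  script:
    - mvn verify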
In our shop, we also integrate test automation into the CI/CD chain, so new application versions reach the users without lengthy manual tests. Much like the images created during the build phase (Figure 2), these images also end up in a container registry. GitLab offers its own container registry but also has interfaces to external registries like Artifactory [11].
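A build job that pushes the finished image to the integrated registry could be sketched as follows, relying on GitLab's predefined CI variables:

build-image:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"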
A pipeline definition can quickly grow to more than a couple of hundred lines, making it confusing and difficult to maintain. As a first aid, each GitLab installation includes a Lint tool that checks the YAML configuration for syntax errors.
Apart from this syntax check, it helps to cluster thematically related blocks and store them in separate files. Using the include statement, added to the core in GitLab 11.3, developers can pull in these external configurations. Technically, this is realized as a deep merge: Standard configurations can easily be included and then overwritten with project-specific requirements, if necessary. The include statement also accepts remote files, which opens up further possibilities. On this basis, one team is currently evaluating the approach of curated pipelines as a service.
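A sketch of such a composition, with the file name and URL invented for illustration:

include:
  # Thematically clustered blocks from the same repository
  - '/ci/build.yml'
  # Centrally maintained configuration from a remote server
  - 'https://example.com/ci-templates/defaults.yml'

# Deep merge: project-specific settings overwrite the included defaults
unit-tests:
  variables:
    TEST_SUITE: fast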
Build artifacts generated during pipeline execution can be persisted by declaration and reused in subsequent jobs. Additionally, the web GUI offers the very convenient option of browsing through the job artifacts or downloading them.
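Declaring artifacts takes only a few lines; the paths and retention period shown here are examples:

build-app:
  stage: build
  script:
    - make build
  # Persist the build results and offer them in the web GUI
  artifacts:
    paths:
      - dist/
    expire_in: 1 week

test-app:
  stage: test
  # Artifacts from earlier stages are fetched automatically
  script:
    - make test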
Pipeline jobs that have extremely long run times because of their complexity can be outsourced to separate pipelines with the GitLab scheduler, a kind of cron daemon. These jobs then run outside the development phases (e.g., as nightly builds).
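In the pipeline definition, such a job only needs to be restricted to scheduled runs; the job itself is a placeholder example:

nightly-build:
  stage: build
  script:
    - make all
  # Runs only when triggered by a pipeline schedule, not on every push
  only:
    - schedules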
Auto DevOps
The basic idea behind GitLab Auto DevOps is to minimize the complexity associated with fully automated software delivery. For this purpose, GitLab Auto DevOps bundles the necessary steps and maps them to a predefined pipeline.
Programming language recognition already works for a large number of common languages and is being continuously extended. Automatic software builds that are based on Docker or Heroku build packs have already been implemented. Pipeline jobs for application tests and quality assurance (QA) are available, as are routines for packaging and monitoring. Deployment mechanisms with Helm charts [12] complete the solution.
To familiarize yourself with the conventions and best practices of a GitLab CI pipeline, take a look at the Auto DevOps pipeline from the GitLab repository [13]. GitLab itself relies on native Kubernetes integration and Google's Kubernetes Engine (GKE) as its target platform. Because it is cloud agnostic, both Auto DevOps and manually generated pipelines can be used with any cloud platform and underlying orchestration tools, thus avoiding vendor lock-in.
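In GitLab versions that ship template includes, the complete Auto DevOps pipeline can even be pulled into a project's own configuration as a starting point:

# Imports the Auto DevOps pipeline template shipped with GitLab
include:
  - template: Auto-DevOps.gitlab-ci.yml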
Currently, none of the teams we serve uses GitLab Auto DevOps, because it cannot fully cover the DB Group's security and compliance requirements. The concept also does not support all application stacks that we require by default. That said, Auto DevOps is becoming a useful feature that generates added value. A structured blueprint at the fingertips of teams allows them to enter a DevOps-centric work mode quickly.
Security and Compliance
Other challenges our DevOps teams face are security and compliance requirements. On the way to a self-service infrastructure that provides standardized processes in a CI/CD environment, "compliant by default" is a cornerstone for a fully automated pipeline.
Recently, GitLab has strongly expanded its focus on security. The GitLab core team has bundled many of the methods and functionalities required in this area in its roadmap. The vision for GitLab's product includes, among other things, a Secure stage, which the company plans to expand to include a Defend stage.
Static Application Security Testing (SAST) checks the source code for known vulnerabilities, such as buffer overflows or insecure function calls. Dynamic Application Security Testing (DAST) tests against a running application or against the Review Apps collaboration tool [14]. These tests start with each merge request and thus allow early detection of security vulnerabilities at run time. Interactive Application Security Testing (IAST) aims to shed light on how the application reacts to security scans launched from inside the application itself. For this use case, GitLab ships an agent with the deployment that integrates into the application and performs the scans. Technically, GitLab integrates a corresponding open source scan tool in each case. The developers are looking to increase the number of supported languages.
Example scan tools include Secret Detection, which scans commits for credentials and secrets; the results flow into the existing reports. With Dependency Scanning, GitLab checks the packages pulled in by the project's package managers for known security vulnerabilities. This technology is also open source and based on Gemnasium [15].
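Provided the corresponding templates ship with the GitLab version in use, these scans are added to a pipeline with a few lines:

include:
  - template: SAST.gitlab-ci.yml
  - template: Dependency-Scanning.gitlab-ci.yml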
Additional open source tools are used in container scanning. The CoreOS project Clair [16] collects information about app and Docker container vulnerabilities in its database. With the use of the Clair scanner [17], the pipeline is able to validate container images against the Clair server before it pushes the images into the registry.
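A container scanning job could be sketched as follows; the Clair server address is an assumption, and the clair-scanner binary has to be made available in the job:

container-scan:
  stage: test
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker pull "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
    # Validate the image against the (assumed) central Clair server
    - ./clair-scanner --clair="http://clair.example.com:6060" --ip="$(hostname -i)" "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"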
Last but not least, license management is intended to help meet compliance requirements by checking the code for license violations in the pipeline.
These techniques can already be integrated into pipelines today, even if only in the Enterprise Edition. One hopes GitLab will move these components into the Community Edition in the future. Until then, manual work is still required to wire up the functions yourself, which is feasible because they are largely based on open source software.