Spotlight on the Kubernetes package manager, Helm

Helmsman

Problems

Comparing a deployment of the same software with and without Helm will help shed some light on the problem.

Suppose you want to roll out an arbitrary microservices application in K8s. The classic way to do this would be to write a pod definition that contains instances of all the containers you need. As a rule, however, this step alone is not enough, because you also have to consider configuration parameters in addition to the containers. At the end of the day, the services in the individual pods need to know how they will communicate with each other later.
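A minimal sketch of what such a definition might look like follows; the names, image, and environment variable are purely illustrative and not taken from a real application:

apiVersion: v1
kind: Pod
metadata:
  name: shop-frontend
spec:
  containers:
  - name: frontend
    image: example/shop-frontend:1.0      # illustrative image name
    env:
    - name: BACKEND_URL                   # how this service finds its peer
      value: "http://shop-backend:8080"
    ports:
    - containerPort: 8080

In a real microservices application, you would repeat this pattern for every service and keep the configuration values consistent across all of them, which is exactly where the manual approach starts to hurt.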

If you want to tackle the issue right away, you can rely on external solutions such as Istio, which spins a web between all the components and routes the traffic dynamically. To do this, however, it must itself be part of the pod definition, which makes the definition a good deal longer. Once you have completed your pod definitions, you then hand them over to K8s, which starts the corresponding resources.

However, the fun often comes to an abrupt end when you want to change the running setup in K8s a few months or years later without having looked at it too much in the meantime.

It gets even worse from an enterprise point of view if the original author of the pod definitions has left the company and a successor has to make sense of it all. Although K8s enforces a strict, YAML-based syntax for its pod definitions, understanding what your predecessor put together, how, and for what reasons will typically take a while, and the more complex the original pod definitions, the longer the familiarization process will take.

At some point, it comes to a showdown. If the admin – whether the same or new – feels sufficiently confident, they will make some changes. However, if the changes go wrong, in many cases the only way out is to ditch the existing pods and set up a completely new virtual environment with the data from the old pods.

The whole process is neither particularly convenient nor very friendly in terms of operational stability, which is exactly where Helm comes to your rescue. Reduced to the absolute basics, Helm is a kind of translation aid. The Helm library takes information from the Helm client and converts it on the fly into pod definitions that K8s understands. At the same time, Helm exposes an interface to the admin and user sides that is similar in principle to a solution for automation and configuration management.

In this example, you would write a statement in Helm for the deployment of your microservices application comprising two parts: (1) the generic statements from which the Helm library then builds pod definitions for K8s and (2) a file named values.yaml that defines configuration parameters you can influence from the outside.
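A minimal sketch of how these two parts fit together might look as follows; the file names follow common Helm conventions, but the specific parameters and values are illustrative:

# values.yaml – parameters the admin can change from the outside
replicaCount: 3
image:
  repository: example/shop-frontend
  tag: "1.0"

# templates/deployment.yaml (excerpt) – Helm fills in the placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-frontend
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
      - name: frontend
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

The Helm library renders the templates with the values and passes the finished definitions to K8s.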

The totality of these files (i.e., the templates for K8s on the one hand and the admin-editable variables on the other) is known as a chart in Helm-speak. A chart is therefore a template comprising several parts, with the help of which workloads can be rolled out ready for use on K8s. For this very reason, many also refer to Helm as a package manager for K8s.

Running Workloads

Helm is also widely accepted because it not only lets you roll out a workload in K8s for the first time but also offers functions for modifying running workloads in K8s that it previously rolled out itself.

A classic example is changing the replica count for the front-end servers in web operations. Once again, the classic web store example comes into play here: Online stores that generate predictable sales throughout the year often groan under the load at times of high access – especially in the lead-up to Christmas. If you host your workloads in AWS or Azure, for example, you will want to upgrade virtual instances accordingly during the busy period to cope with the increased load. This is precisely the promise of salvation that providers use to entice their customers: They only have to pay for the resources they actually consume.

However, the workload and the tools that manage the workload must also be able to handle this kind of scaling. Kubernetes provides the functionality natively, but the challenge I described earlier in this article hits home here: Depending on how the pod definitions are written, a careless stack update can end in absolute disaster. Helm comes to the administrator's rescue with a standardized interface that always works the same way.

Most charts that you find online support a replica parameter. The helm upgrade command can change this parameter for active workloads in a K8s cluster. The rest happens automatically: A single Helm command tells the Helm library to adjust its own pod definitions in K8s so that it no longer runs 10 instances of a web server, but 20, or however many you think you need.
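As a sketch, assuming a release named my-shop installed from a chart that exposes the common replicaCount parameter (the release, repository, and chart names here are made up, and the exact parameter name depends on the chart's values.yaml), scaling the front end up could look like this:

helm upgrade my-shop example-repo/shop-chart --reuse-values --set replicaCount=20

The --reuse-values flag keeps all other configuration values from the previous release, so only the replica count changes.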

Getting Started with Helm

Producing your own charts does not involve too much work. First, install the command-line interface (CLI) and library on your own system. The Helm website [1] offers package sources for Snap and Apt, as well as a statically linked binary that you can use. If your workstation is a macOS computer, you can pick up Helm from the Homebrew environment (Figure 3). Additionally, the developers offer a kind of best practices guide for writing charts on their website [2].

Figure 3: The Helm CLI and its library are available on various systems – in this case, for macOS with the Homebrew package management system.
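Depending on your platform, installation might look like one of the following; this is just a sketch of the two variants mentioned above, not an exhaustive list:

brew install helm                    # macOS with Homebrew
sudo snap install helm --classic     # Snap-based Linux distributions

Once the CLI is in place, helm create mychart scaffolds a chart skeleton with a Chart.yaml, a values.yaml, and a templates/ directory that you can adapt to your own workload.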

Once Helm is available locally, you're ready to go. Each instance of a workload that you roll out with Helm is referred to as a release. You can use a ready-made chart to create an arbitrarily named release of a virtual environment in K8s. As usual with package managers, the charts come from different repositories, which helps beginners in particular find their way around Helm. Because you can access various repositories, you will likely not have to write your own Helm charts and can probably turn to existing ones.

The Helm client knows about Artifact Hub [3] out of the box and searches it for charts (Figure 4). At the command line, for example,

helm search hub wordpress

fetches ready-made charts for WordPress.

Figure 4: Helm's Artifact Hub displays Helm charts from various repositories so you don't have to collect the data yourself.

Conveniently, Artifact Hub bundles the results from various repositories, meaning you don't have to add them all to your local repository lists. However, if you do want to use charts from a repository that Artifact Hub does not know about, you need to add the repository upfront by typing:

helm repo add <name> <URL>

The helm repo list command displays a list of all enabled repositories.
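Putting the pieces together, a first deployment from a third-party repository could look like the following; the Bitnami repository and its WordPress chart serve purely as a well-known example here, and any other repository works the same way:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-wordpress bitnami/wordpress
helm list

The final command lists the releases Helm is currently managing, including the newly created my-wordpress release.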
