Spotlight on the Kubernetes package manager, Helm
Helmsman
Scholars argue about whether history repeats itself. Clearly, though, some trends do return, and administrators encounter the same technical approaches in IT again and again – perhaps in a slightly different guise and under a new name, but with the same underlying principle.
Helm is like that. It is a kind of package manager that the vast majority of admins have at least heard of in the context of Kubernetes. In fact, Helm comes from a completely different technical background than the package managers of classic distributions (e.g., rpm and dpkg), aiming to make distributing software for Kubernetes easier. The Helm skills of many admins do not extend beyond knowing that the tool exists. As a result, many myths circulate around the solution, and the resulting uncertainty puts Helm in a worse light than it deserves.
In this article, I go into detail about Helm: introducing the architecture of the solution; describing how Helm defines, deploys, and upgrades Kubernetes apps; and presenting practical use cases. The first step is to ask what Helm is good for in the first place and what problems it solves in the Kubernetes context.
Why Bother?
Kubernetes (K8s) has been on a roll for the past few years, which does not happen too often in IT – at least not if you consider how quickly it has spread. Today, it seems almost impossible to bring up container virtualization in Linux without at least mentioning Kubernetes. Contrary to what some observers claim, K8s and cloud computing also have much in common: Kubernetes would be inconceivable without the idea of cloud computing, and it is no longer possible to imagine running containerized workloads efficiently without its principles (i.e., automatically controlling individual application containers across a swarm of servers).
Kubernetes is now far more complex than it was a few years ago. The number of ready-made third-party images from which admins can choose is almost impossible to track. The functionality of K8s itself has also expanded continuously. Custom Resource Definitions (CRDs), new API extensions, and various architectural rebuilds make it difficult to keep up, not to mention the features that external vendors then add to the container orchestrator to get their piece of the pie – some of them really great innovations. Service meshes such as Istio secure network traffic between the microcomponents of an application, take care of load balancing, and offer Secure Sockets Layer (SSL) encryption on the fly.
Declarative Language
If you want to tell K8s what kind of workload to launch, you talk to its API and use a declarative language (Figure 1): You describe the desired state of your containers in a template file and send it to K8s. Kubernetes then tries to match the actual state as closely as possible to the state described in the template.
In parallel, you have to get used to new terminology in K8s: Anyone trying to work with the tool without knowing what a pod is, for example, is unlikely to succeed. Things get tricky if you want to run larger workloads on K8s that involve more than a single image with just a few configuration parameters. In applications with a microservice architecture, quite a few components need to be considered in the pod definitions. Even rolling out multiple monolithic apps in the form of K8s pods makes creating the appropriate pod definitions a pretty tedious exercise (Figure 1).
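To get a feel for what such a declarative description looks like, here is a minimal sketch of a pod definition; the names and the image are purely illustrative placeholders, not part of any particular product:

apiVersion: v1
kind: Pod
metadata:
  name: demo-web        # hypothetical name
  labels:
    app: demo-web
spec:
  containers:
  - name: web
    image: nginx:1.25   # example image; any container image works
    ports:
    - containerPort: 80

You hand a file like this to the API server (e.g., with kubectl apply -f pod.yaml), and Kubernetes works to make the actual state match it. A realistic application quickly consists of many such objects – Deployments, Services, ConfigMaps, Secrets, and more – which is exactly where the tedium begins.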
Software vendors looking to sell their products with K8s underpinnings cannot be happy about this. As usual, admins prefer products that are delivered as turnkey systems, or as close as you can get to them. In the K8s context, this ideally means you take delivery of preconfigured components for everything that makes up the product. Then you only have to define your environment-specific configuration parameters for your K8s cluster. All the other commissioning work for a particular piece of software is ideally taken care of by the system. The idea is ultimately the same as with package managers: Where rpm or dpkg take the stage on conventional systems, Helm comes into play with K8s.
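In Helm's vocabulary, such a preconfigured bundle is called a chart, and the site-specific settings go into a values file that the chart's templates consume. The following is a minimal sketch of such a values override for an imaginary web shop chart – every key and value here is an illustrative assumption, because each chart defines its own parameters:

# my-values.yaml – environment-specific settings for a hypothetical chart
replicaCount: 3
image:
  tag: "1.4.2"
ingress:
  enabled: true
  host: shop.example.com
resources:
  limits:
    memory: 512Mi

A command along the lines of helm install myshop vendor/shop-chart -f my-values.yaml would then roll out the vendor's preconfigured manifests with only these values swapped in; everything else ships ready to run inside the chart.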
Old Hand
Helm has been around for a few years in the K8s universe. The current version 3 is quite different from previous versions, which you will notice when you search online for information.
The differences are particularly evident in the components. In Helm 2, Tiller was the central component for communication with K8s. A Helm client at the command line communicated with Tiller and passed it templates for virtual setups (Figure 2); Kubernetes then set about creating the setups Tiller specified. Throughout its life, however, Tiller suffered from several problems that were inherent in its design – relating to rights management, parallel operations in Helm, and several other key aspects – and that were consequently very difficult to correct.
In Helm 3, the developers summarily did away with Tiller and replaced it with a more powerful command-line client that still goes by the name of Helm and a library that serves as a bridge to K8s for the client. Accordingly, Helm 3 no longer has its own management instance. It is important to keep this in mind because Helm newbies still often get stuck with old docs when searching for information and are frustrated when they roll out Helm 3 and see that Tiller is missing.
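One place where the difference shows up directly is in the chart metadata: A chart written for Helm 3 declares apiVersion: v2 in its Chart.yaml, whereas Helm 2 charts used v1. The following minimal Chart.yaml is only a sketch – the name, description, and version numbers are placeholders:

# Chart.yaml of a hypothetical Helm 3 chart
apiVersion: v2            # v2 marks charts targeting Helm 3
name: demo-web
description: An example chart with placeholder metadata
type: application
version: 0.1.0            # version of the chart itself
appVersion: "1.25"        # version of the application it packages

Because no Tiller is left to keep state, Helm 3 also stores the information about installed releases directly in the cluster – by default as Secret objects in the namespace of the respective release.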