Kubernetes containers, fleet management, and applications

Training Guide

The Architecture

How does Kubernetes manage to convert the desired state definition given by the user in the form of a template into running resources? The answer lies in the Kubernetes architecture, in which a number of components cooperate to achieve this state. The K8s services can be divided roughly into two categories: the control plane and the nodes that run the containers (Figure 2).

Figure 2: The Kubernetes architecture comprises the controller services in the controller cluster and Kubelet, which handles the configuration on the target systems. © Kubernetes

The control plane, in turn, comprises several microservices. The API server acts as a hub and handles most of Kubernetes's communication with the outside world. When you use kubectl at the command line to deliver a resource definition to Kubernetes, the utility communicates with the Kubernetes API in the background. This classic REST interface accepts updates for existing resources and commands for creating new resources, and it also serves as a source of information about running resources.

Individual resources in the environment can also be controlled directly through the Kubernetes API. For example, if you execute a command to stop an application running in containers, kubectl translates the command on the fly into an updated resource definition with the desired status of "stopped" and passes this template to the Kubernetes API.
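The translation from an imperative command to a declarative state update can be illustrated with a minimal sketch. This is plain Python, not real kubectl code; the resource structure and the stop_application helper are invented for illustration, and the real kubectl sends such an update to the API server over HTTPS:

```python
# Sketch: an imperative "stop" command expressed as a declarative
# update to a stored resource definition (illustrative only -- the
# real kubectl talks to the Kubernetes API server via REST).

def stop_application(store, name):
    """Translate 'stop <name>' into a desired-state update."""
    resource = store[name]                          # read current definition
    resource["spec"]["desiredStatus"] = "stopped"   # patch the desired state
    store[name] = resource                          # write it back
    return resource

store = {
    "webapp": {"kind": "Application",
               "spec": {"desiredStatus": "running"}},
}

stop_application(store, "webapp")
print(store["webapp"]["spec"]["desiredStatus"])  # -> stopped
```

The point of the pattern: the command itself never touches a container. It only rewrites the stored definition; other components later make reality match it.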

This example shows that tools like kubectl are versatile in everyday use: Starting and stopping individual resources works very well in the way just described. However, describing an entire virtual environment with individual commands in this way would be very time consuming. In such scenarios, YAML templates come to the rescue.
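The difference in effort can be sketched as follows: instead of one command per resource, a single template describes the whole environment and is applied in one pass. This is a hedged Python sketch with invented resource names; in practice the template would be a YAML file passed to kubectl:

```python
# Sketch: applying a whole environment from one declarative template
# instead of issuing one imperative command per resource. A plain dict
# stands in here for the parsed YAML template.

template = [
    {"kind": "Deployment", "name": "frontend", "replicas": 3},
    {"kind": "Deployment", "name": "backend",  "replicas": 2},
    {"kind": "Service",    "name": "frontend-svc"},
]

cluster_state = {}

def apply(template, state):
    """Register every resource definition as the new desired state."""
    for resource in template:
        state[(resource["kind"], resource["name"])] = resource
    return state

apply(template, cluster_state)
print(len(cluster_state))  # -> 3
```

One apply call registers three resources; a larger template with dozens of resources costs no additional commands.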

High Availability Built In

One central aspect of modern IT architectures is built-in high availability (HA). Where workarounds like Pacemaker were used a few years ago, modern applications look to take care of their own availability. The control plane of a K8s cluster also needs to meet HA requirements. The failure of the central services in a K8s cluster would not lead to a loss of functionality of the rolled-out resources, but they would no longer be controllable.

Consequently, all components of a K8s cluster are natively HA-enabled. Stored information about the states of resources rolled out in Kubernetes – basically, classic database content – plays a special role, and anyone who has ever dealt with high availability in a database context knows that the topic is anything but trivial to handle.

Kubernetes itself does not attempt to keep its own persistent data highly available. Instead, it offloads the task to an external component, etcd, an inherently highly available key-value store. Each Kubernetes controller cluster therefore also includes an etcd cluster that enables write and read access in multimaster mode (Figure 3). In this way, many small K8s services quickly form a multidimensional mesh: an arbitrary number of Kubernetes API instances access an arbitrary number of etcd instances in the background. If one of the instances fails, the remaining instances take over its tasks ad hoc.
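The multimaster idea can be illustrated with a simple majority-quorum sketch. This is plain Python and deliberately simplistic; the real etcd uses the Raft consensus algorithm, which is considerably more involved:

```python
# Sketch: a write in a replicated key-value cluster commits only if a
# majority (quorum) of instances acknowledge it -- the basic idea
# behind etcd's consensus-based replication (the real system uses Raft).

class Instance:
    def __init__(self):
        self.data = {}
        self.alive = True

def quorum_write(cluster, key, value):
    """Replicate to all reachable instances; commit only on majority."""
    acks = 0
    for inst in cluster:
        if inst.alive:
            inst.data[key] = value
            acks += 1
    return acks > len(cluster) // 2   # strict majority required

cluster = [Instance() for _ in range(3)]
cluster[0].alive = False              # one instance fails ...

ok = quorum_write(cluster, "/registry/pods/web", "running")
print(ok)  # -> True: 2 of 3 instances still form a majority
```

This is why etcd clusters typically have an odd number of members: a three-node cluster tolerates one failure, a five-node cluster two.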

Figure 3: Etcd uses a consensus algorithm to take care of redundant storage of Kubernetes's metadata. © etcd

Creating Resources

So far, the Kubernetes story as this article tells it still has a gaping hole. Receiving DSM definitions on one side and storing them persistently and with HA on the other is all very well, but at the end of the day, you also need some instance that ensures the DSM specifications are turned into resources on the target systems – ideally as running containers.

Now the Kubernetes controller manager steps in: It includes several components, including the node controller, which keeps track of all available target systems that can run container workloads; the job controller, which is responsible for scheduling upcoming work for execution; and the endpoint, service account, and namespace controllers, which manage the internal details of Kubernetes (e.g., user management) and maintain a directory of running services in Kubernetes and their endpoints (i.e., the URLs used to reach the services).

Kubernetes uses the term "controller" not only in the context of its own infrastructure, but also for a specific type of resource. A controller at the Kubernetes resource level is basically a kind of programmed infinite loop that balances the actual and target states of the available resources. You need controllers to turn a DSM declaration into a running infrastructure by enforcing its creation with the use of other Kubernetes services.
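The "programmed infinite loop" can be sketched in a few lines. This is a hedged Python illustration with invented names; one bounded pass stands in for the endless loop, and real Kubernetes controllers watch the API server rather than plain dicts:

```python
# Sketch of a controller's reconcile loop: compare desired and actual
# state and act on the difference. (Illustrative only; real controllers
# run this logic endlessly against the Kubernetes API.)

def reconcile(desired, actual):
    """Start or stop replicas until actual matches desired."""
    actions = []
    for name, want in desired.items():
        have = actual.get(name, 0)
        if have < want:
            actions.append(("start", name, want - have))
        elif have > want:
            actions.append(("stop", name, have - want))
        actual[name] = want           # pretend the actions succeeded
    return actions

desired = {"frontend": 3, "backend": 2}
actual = {"frontend": 1, "backend": 4}

print(reconcile(desired, actual))
# -> [('start', 'frontend', 2), ('stop', 'backend', 2)]
```

After one pass the actual state matches the desired state; a second pass produces no actions, which is exactly the steady state a controller aims for.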
