Nested Kubernetes with Loft
Matryoshka
Kubernetes is considered the frontrunner when it comes to container orchestration, and no matter where you look, no potential successor is in sight – regardless of the flavor, whether OpenShift, Rancher, or plain vanilla Kubernetes. If you run containers, you will find it difficult to avoid fleet management with Kubernetes – not least because former alternatives such as Docker Swarm have now become virtually irrelevant.
Kubernetes' popularity admittedly also means that admins don't have much choice if they are dissatisfied with the platform – and admins can find many things not to like. One often-cited criticism, for example, is the traditionally poor support for multiple tenants. In fact, the solution was never designed to manage the containers of many clients in parallel – a curse and a blessing at the same time: A blessing because Kubernetes is far less complex than OpenStack, for example, where multitenancy was part of the strategy right from the start, and a curse because the fairly mediocre multitenant support means that the number of Kubernetes clusters an organization has to operate can quickly get out of hand.
Because it doesn't make sense to create a full Kubernetes setup right away for every test scenario, Kubernetes developers have given some thought to multitenancy over the past few years. They now rely on namespaces to separate the workloads of different users and projects in a Kubernetes cluster. If this reminds you of namespaces in the Linux kernel, you are right: Kubernetes' isolation approach is at least partly inspired by Linux kernel namespaces, and the kernel feature is also used quite tangibly, because the container runtime relies on it to isolate the workloads of different projects from each other on the target systems.
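To get a feel for this approach, here is a minimal sketch (all names are placeholders): A namespace is created per project, and every workload is then scoped to it with kubectl.

# Create a namespace for a hypothetical project
kubectl create namespace team-a

# Deployments, pods, services, and so on are scoped to that namespace
kubectl -n team-a create deployment web --image=nginx:1.25
kubectl -n team-a get pods

# Without matching RBAC permissions, other namespaces stay out of reach
kubectl -n team-b get pods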
Most admins agree that namespaces in Kubernetes are not a full-fledged substitute for true multitenancy. (See the "The Namespaces Challenge" box.) However, the truth is, they were never supposed to be. A platform for Kubernetes self-service and multitenancy named Loft now comes to the rescue of admins who, out of frustration with namespaces, would otherwise resort to a large number of Kubernetes islands, by allowing them to "spin up low-cost, low-overhead Kubernetes environments for a variety of use cases" [1].
The Namespaces Challenge
If you want to understand the challenges that Kubernetes namespaces present to their users, it's best to compare them with a genuinely multitenant platform such as OpenStack. There, a new project would simply be created in the environment for a new customer. The customer could create virtual networks, which would be strictly isolated from those of other customers, and specific quotas for virtual CPUs and RAM (vCPUs, vRAM) and virtual block devices would instantly be in place. Beyond that, however, the project would have access to all the documented features of the OpenStack API that regular projects are allowed to use. The platform does not care what workload a project runs in OpenStack, as long as it stays within the defined quotas.
The situation is somewhat different for namespaces in Kubernetes. Kubernetes does come with role management out of the box, but a tenant cannot vary it at the namespace level itself: A project confined to a namespace cannot define and maintain its own roles and permissions for that namespace; that remains the job of the cluster administrator. In turn, such a project frequently cannot even display the rules that apply to its own namespace – much to the chagrin of many admins looking to troubleshoot problems.
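In practice, that means the cluster administrator defines roles and bindings for a tenant's namespace centrally. A minimal sketch (the namespace team-a and the user alice are hypothetical) could look like this:

# Namespace-scoped role and binding, created by the cluster admin,
# not by the tenant (team-a and alice are placeholders)
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-developer
  namespace: team-a
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-developer
  namespace: team-a
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-developer
  apiGroup: rbac.authorization.k8s.io
EOF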
The very popular custom resource definitions (CRDs, i.e., extensions to the Kubernetes API that users can create and manage themselves) make life easier for Kubernetes administrators; moreover, various external tools, including many Helm charts (Helm being Kubernetes' package manager), depend on them. However, CRDs are cluster-scoped objects, so a tenant confined to a namespace cannot install them. This greatly restricts the Helm charts such a tenant can install and entails a permanent risk of standard operations either not working at all or working incorrectly inside a namespace.
However, this is by no means the end of the list of namespace restrictions. Cluster-global resources always apply to all namespaces: If a project needs such resources and you install them, conflicts with the resources defined for other namespaces can result. From the perspective of a namespace, however, this is only of secondary importance, because access to cluster-global resources from within a namespace is severely restricted anyway. If a prebuilt solution for operation in Kubernetes requires cluster-global resources, you can forget about the namespace plan right away.
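A couple of quick checks illustrate the point; the user alice from the previous example is again hypothetical:

# CRDs and other cluster-global objects are not namespaced ...
kubectl api-resources --namespaced=false
# ... and a namespace-bound tenant is normally not allowed to create them
kubectl auth can-i create customresourcedefinitions --as alice
# Typical consequence: Helm charts that ship CRDs cannot be installed by
# the tenant, and cluster-global objects that two tenants both bring along
# inevitably collide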
Troublesome for Many
Like all centralized management tools, Kubernetes has several must-have components. The most prominent example is the Kubernetes API server, but the control plane with the central intelligence includes many more components, such as the Kubernetes scheduler or etcd as the consensus system. The compute nodes additionally run the kubelet, which receives its instructions from the control plane and translates them into the local configuration. If you want to build a Kubernetes cluster, you'll need a fair amount of hardware, especially because the control plane usually needs to be rolled out redundantly: If it fails, you immediately suffer a loss of functionality and certainly some loss of control.
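On a kubeadm-provisioned cluster, for example, these components are visible as static pods (the label selector may differ in other distributions):

# Control plane components run as static pods in kube-system
kubectl -n kube-system get pods -l tier=control-plane
# Typically listed: kube-apiserver-*, kube-controller-manager-*,
# kube-scheduler-*, and etcd-*
# The kubelet itself runs as a system service on every node
systemctl status kubelet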
Loft promises to roll out virtual Kubernetes clusters in namespaces such that you get a full Kubernetes without needing the resources of separate physical Kubernetes clusters. If this works, Loft would be something like the holy grail of tenant functionality for Kubernetes: reason enough to take a closer look at the product.
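For a first impression of what a virtual cluster inside a namespace feels like, the vendor's open source vcluster command-line tool can spin one up on any existing cluster. The following sketch assumes the vcluster CLI is installed and a kubeconfig for a host cluster is active; all names are placeholders, and flags may vary between versions:

# Create a virtual cluster inside a namespace of the host cluster
vcluster create demo --namespace team-a
# Fetch a kubeconfig for the virtual cluster and use it like any
# other Kubernetes cluster
vcluster connect demo --namespace team-a
kubectl get namespaces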
How Loft Works
Even a quick look at the solution's architecture diagram (Figure 1) shows that Loft is fairly complex. By its very nature, it digs deep into the Kubernetes instances it uses – but more on that later.
The core of the solution is the Loft Management Center. It is the first point of contact for end users for any communication with the Kubernetes clusters that are available to Loft for its tasks, and it is where commands to launch resources or roll out pods land. Part of the management center is Loft's own Kubernetes API instance: Users who send commands to a Kubernetes (K8s) instance managed by Loft do not talk directly to the K8s target instance, because the target instance lacks elementary functions when it comes to multitenancy.
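In day-to-day use, this shows up in the loft command-line tool: After logging in to the management center, every command goes through Loft's own API rather than through a cluster directly. A short sketch, assuming the loft CLI is installed and a Loft instance is reachable at the placeholder URL (command names may differ between Loft versions):

# Log in against the Loft management center (placeholder URL)
loft login https://loft.example.com
# List the clusters connected to Loft and the resources it manages there
loft list clusters
loft list spaces
loft list vclusters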
The topic of user authentication quickly makes this clear. Kubernetes does have certain options for outsourcing its user management (e.g., to LDAP), but quite a few admins detest what they see as unspeakable tinkering with HTTP request headers or webhooks. Also, integrating users from external authentication sources into Kubernetes role-based access control (RBAC) is anything but straightforward or convenient. Once again, Kubernetes is simply not designed for true multitenant operation.
Admittedly, this poses a problem for Loft (Figure 2), because the solution advertises itself as being able to retrofit a kind of multitenancy for Kubernetes conveniently and without a massive outlay of resources. Consequently, Loft includes an authentication service that offers everything you would expect from a state-of-the-art service of this kind. LDAP, for example, can be integrated seamlessly with the Cloud Native Computing Foundation's Dex module. Other options, such as Security Assertion Markup Language (SAML) 2.0 or logging in by GitHub single sign-on (SSO), are also available.
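Dex works with connectors for the individual identity sources. As a rough illustration, an LDAP connector in Dex's configuration looks something like the following; the host, bind credentials, and search base are placeholders, and how the connector is wired into Loft itself is a separate step covered by the Loft documentation:

# Sketch of a Dex LDAP connector; all values are placeholders
connectors:
- type: ldap
  id: ldap
  name: "Corporate LDAP"
  config:
    host: ldap.example.com:636
    bindDN: cn=readonly,dc=example,dc=com
    bindPW: "$LDAP_BIND_PASSWORD"
    userSearch:
      baseDN: ou=people,dc=example,dc=com
      filter: "(objectClass=person)"
      username: uid
      idAttr: uid
      emailAttr: mail
      nameAttr: cn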
With an API gateway upstream of Loft's Kubernetes API, Loft integrates the authentication service so that several different accounts can coexist in a single Loft instance – multitenancy as promised. The API gateway also fulfills another function: It serves as the single point of contact for incoming requests and acts as a kind of switchyard. If it receives Loft-specific commands, it processes them itself; generic Kubernetes API calls, on the other hand, are forwarded to the real K8s interfaces in the background.
From Accounts to Namespaces
Much is still missing, however, if you are looking for multitenancy on the Kubernetes side – or virtual multitenancy, as the Loft documentation sometimes puts it. Loft's Kubernetes API, for instance, contains various resources that stock Kubernetes lacks: all of Loft's own resources, which are hooked in through a Loft extension with its own API, as well as the Loft CRDs, without which users would not be able to launch resources through Loft.
The Loft CRDs also differ from normal CRDs in one very important detail: They have their own API fields for specifying the users, teams, or target clusters in which they are to run. These details are missing in Kubernetes because, as already described, the solution is not designed for tenants. If Loft wants to add multitenancy to Kubernetes, it has to add the appropriate fields – and the developers deserve kudos for doing so in accordance with Kubernetes' API conventions. The Loft API for Kubernetes can also be used with normal kubectl, provided the user defines the necessary parameters where required.
Although Loft comes with its own, far more convenient command-line tool and even its own graphical interface, being able to fall back on standard tools to control Loft is an advantage not to be underestimated.
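In practice, the loft CLI writes the required contexts into the kubeconfig, after which plain kubectl takes over. The space and virtual cluster names below are placeholders, and the exact subcommands may vary between Loft versions:

# Point the kubeconfig context at a Loft space or virtual cluster
loft use space team-a
kubectl get pods
loft use vcluster demo
kubectl get namespaces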