Production-ready mini-Kubernetes installations
Rich Harvest
Many Options
Development setups kick off this tour of the mini-K8s universe. In these setups, you typically want to create a local Kubernetes instance to test your workloads and check whether your configuration works in the intended way.
The Kubernetes market can be confusing here, with various solutions vying for your favor, most of which claim to be just as suitable for small development environments as for redundant production setups in data centers. The real problem is a lack of clarity: If you haven't had much experience with Kubernetes in the past and just want to familiarize yourself with the principles, you can quickly become confused when you hear about K3s [2], k0s [3], minikube [4], MicroK8s [5], and the many other variants.
These names relate to K8s distributions, all of which enrich the vanilla version of Kubernetes with their own components and setup logic and set up the whole kit and caboodle so you can get places with just a little typing at the command line. Where they differ – if at all – is mainly in the components they use to enrich the environment. As crazy as it sounds, "not invented here" plays a major role in the colorful world of Kubernetes. Vendors often only decide to launch their own K8s distributions because they want to create a unique selling point compared with other manufacturers, which, from the point of view of the K8s newcomer, doesn't make things any easier.
If you want to start with a local setup, you first need to set up a classic virtual machine (VM). Ideally, you will want to run Debian GNU/Linux 12 or Ubuntu 24.04. You should also install a runtime environment that lets you operate containers on Linux; on both distributions, the choice will typically be the Docker Community Edition (Docker CE). More modern distributions such as Ubuntu 24.04 already come with a runtime environment in place, in the form of Podman, which can be used immediately.
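On a fresh Debian or Ubuntu VM, the preparation boils down to a couple of commands; a minimal sketch that uses the distribution's own Docker package:
$ sudo apt update
$ sudo apt install -y docker.io
$ sudo systemctl enable --now docker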
One thing that practically all mini K8s distributions have in common is that their authors attach great importance to getting a complete Kubernetes up and running as quickly as possible. It comes as little surprise to hear that K3s, for example, can be set up on an existing virtual instance in next to no time (Figure 1). K3s is a significantly pared down distribution, originally created by the Rancher developers, that is still fully compatible with the Kubernetes API. In the meantime, the focus has shifted to edge computing, IoT, and continuous integration and continuous delivery (CI/CD) applications.

Installing K3s is very easy (see the "Insecure Installation" box): A short curl command is all it takes to roll out all the components in a single-node installation and, after about 30 seconds, view a list of the available K3s nodes on the screen:
$ curl -sfL https://get.k3s.io | sh -
$ sudo k3s kubectl get node
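By the way, K3s writes its admin kubeconfig to /etc/rancher/k3s/k3s.yaml; if you prefer a standalone kubectl over the bundled one, simply point it at that file:
$ sudo kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get nodes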
Insecure Installation
In terms of installation, both K3s and k0s deserve an admonishment. The principle of loading a shell script from the network with curl and using sudo to execute it directly as root on the target host naturally causes any security-conscious admin to break out in a sweat; you would want to download the script manually and examine it meticulously before running it. However, because you are using curl to set up Kubernetes on a virtual instance, the damage would not be so great in case of an incident and could be simply remedied by ditching the instance and creating a new one.
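If you would rather err on the side of caution, the one-liner splits readily into separate steps:
$ curl -sfL https://get.k3s.io -o k3s-install.sh
$ less k3s-install.sh        # inspect before you run
$ sudo sh k3s-install.sh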
In the standard installation, though, this list contains nothing but the Kubernetes control plane itself. Right now, you are still missing a K3s agent that can run containers. To add one, first read out the token used to authenticate agents and then launch the agent on a second instance, pointing it at the server's address:
$ NODE_TOKEN=$(cat /var/lib/rancher/k3s/server/token)
$ sudo k3s agent --server https://<Server-IP>:6443 --token ${NODE_TOKEN}
These commands produce a complete Kubernetes that can create volumes and manage containers on the specified IP. If your VM is powerful enough, you can start experimenting right now with something quite close to a production setup.
At first glance, you will probably not realize how many components K3s integrates out of the box. I already mentioned SDS and SDN; K3s supplies working components for both: Volumes can be created, locally at least, by the bundled local path provisioner, and the Flannel software handles SDN. Although Flannel is considered one of the simpler SDN solutions for Kubernetes, it is fine for test setups and for the vast majority of production environments. It can also be combined with various network components within Kubernetes (e.g., with the Istio mesh solution).
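To see the bundled storage in action, you can request a volume against the default local-path storage class; a short sketch:
$ cat <<EOF | sudo k3s kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
EOF
Note that the claim stays in the Pending state until a pod actually consumes it, because the provisioner binds volumes on first use.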
By the way, you are not restricted to rolling out K3s in single-node mode. If you later want to recreate a development K3s setup in production, you can call the setup script with various parameters that roll out the Kubernetes control plane redundantly, as the sketch below shows. You can tell that K3s started out as the mini-K8s distribution behind Rancher and was designed with exactly this scenario in mind. I'll get back to Rancher later.
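A redundant control plane with the embedded etcd, for example, is just a matter of extra arguments to the same script; a sketch with a shared secret and placeholder addresses:
# on the first server node:
$ curl -sfL https://get.k3s.io | K3S_TOKEN=<secret> sh -s - server --cluster-init
# on each additional server node:
$ curl -sfL https://get.k3s.io | K3S_TOKEN=<secret> sh -s - server --server https://<Server-IP>:6443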
k0s Impresses
The situation with k0s is very similar to that with K3s (Figure 2). This Kubernetes distribution by Mirantis, a cloud computing company, is also available as free software and can be installed in a few simple steps. Here, too, it is advisable to start a single virtual instance for development and test purposes on which to carry out the work. Again, a recent Ubuntu release is a good choice. The steps are quickly completed: Use curl to download the k0s binary, which you then use to roll out the Kubernetes cluster:
$ curl -sSLf https://get.k0s.sh | sudo sh

If k0s is available locally, install a single-node control plane for Kubernetes and then start the desired services and check their status:
$ sudo k0s install controller --single
$ sudo k0s start
$ sudo k0s status
The kubectl command-line tool is ready for use at this point.
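Because k0s bundles its own kubectl, a first check works without installing anything else:
$ sudo k0s kubectl get nodes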
Like K3s, k0s also claims to be suitable for production environments. Running the command
$ k0sctl init > k0sctl.yaml
after the install creates a basic configuration for a cluster consisting of two nodes in the k0sctl.yaml file. To make the control plane highly available, and therefore suitable for production, you can edit the file accordingly (a sketch of such a file follows the commands below). You then implement the saved configuration, output a configuration file compatible with kubectl, and display the running pods:
$ k0sctl apply --config k0sctl.yaml
$ k0sctl kubeconfig > kubeconfig
$ kubectl get pods --kubeconfig kubeconfig -A
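What the edited file could look like for a redundant control plane – the addresses and the root user are placeholders; the format follows the k0sctl cluster specification:
$ cat > k0sctl.yaml <<'EOF'
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
    # an odd number of controllers keeps etcd quorum intact
    - role: controller
      ssh:
        address: 192.0.2.10
        user: root
    - role: controller
      ssh:
        address: 192.0.2.11
        user: root
    - role: controller
      ssh:
        address: 192.0.2.12
        user: root
    # workers are added the same way with role: worker
EOF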
Like K3s, k0s comes with ready-made integrations for SDN and SDS. However, it takes a different approach, at least in terms of the network, by giving you Calico, which is far more comprehensive and complex than Flannel. When it comes to storage, you face a little DIY. The k0s developers recommend combining the tool with OpenEBS. However, this block-mode storage platform was dropped from the k0s scope of delivery a while back and, according to the authors, needs to be installed with the Helm package manager. The k0s documentation [6] contains instructions for this task.
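The gist of those instructions is a standard Helm workflow; a sketch, assuming the upstream chart repository has not moved:
$ helm repo add openebs https://openebs.github.io/charts
$ helm repo update
$ helm install openebs openebs/openebs --namespace openebs --create-namespace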
My choice of describing K3s and k0s for this article has nothing to do with bias. In fact, the various Kubernetes distributions in the same league are not too different in terms of content and technology. If you choose minikube or MicroK8s instead of K3s or k0s, you will quickly realize that their installation procedures are similar. After completing the install, you will have a comparable feature set in each case, and the way things are handled is pretty much the same.
Production, Fast
If you are looking for a flexible and not overly complex solution to roll out a production Kubernetes setup for your company, you first need to make sure the conditions mentioned earlier for a production Kubernetes are met. You need to be clear about the SDS solution you will be using and about the SDN solution you intend to deploy.
For the sake of simplicity, assume in this example that the all-rounder Flannel will be handling SDN, with Ceph shouldering the SDS load. Ceph can be rolled out as a service in Kubernetes by Rook [7], which automatically gives you a management option. The SDS part of the setup only plays a minor role in the configuration of Kubernetes itself.
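Rolling out the Rook operator itself is little more than a Helm install; a sketch, assuming the upstream chart location:
$ helm repo add rook-release https://charts.rook.io/release
$ helm install rook-ceph rook-release/rook-ceph --namespace rook-ceph --create-namespace
The actual Ceph cluster is then described by a CephCluster resource, which tells the operator which nodes and devices to use.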
I already looked at Rancher in the context of K3s, but Rancher is also ideal for rolling out production Kubernetes setups (Figure 3). This concept is easy to understand when you think about what Rancher actually is.

Rancher is itself based on Kubernetes and integrates a Kubernetes control plane. It can also be used to roll out and control many different Kubernetes instances in parallel. Once you have set up the Rancher control plane, you can add further hosts to the software and then roll out your own small Kubernetes clusters on them with the command-line interface or graphical user interface. These clusters can then be deleted or reconfigured easily.
What impresses most is that you can set up Rancher just as quickly as K3s or k0s. In practical terms, you simply start a Docker container,
$ sudo docker run --privileged -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
and the Rancher web user interface is then available on ports 80 and 443 of the host, which are mapped to the container. Additional bare metal hosts can now be created here, for which Rancher can use SSH to open connections so that you can make the required configuration changes. It makes sense to prepare all the systems involved at the outset to support a secure shell connection from the Rancher control host.
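Recent Rancher versions generate a random bootstrap password for the first login to the web interface; you can retrieve it from the container logs (the container ID comes from docker ps):
$ sudo docker ps | grep rancher/rancher
$ sudo docker logs <container-ID> 2>&1 | grep "Bootstrap Password:"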
The operating system install can also be automated by life-cycle management, if so desired. Once you have installed the first high-availability Kubernetes cluster on the target systems with Rancher, you can roll out Rook (Figure 4) for high-availability storage directly afterward.

However, be careful. As I mentioned earlier, the mini-setup described assumes that the controller and storage services are hyperconverged and running on the same systems. Systems with the Kubernetes services of a K8s cluster rolled out by Rancher therefore also need storage devices (hard disks or flash drives) for Rook to offer redundant storage.
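Because Rook only claims devices that carry neither a filesystem nor a partition table, it is worth verifying the state of the disks on each node up front:
$ lsblk -f    # devices without an FSTYPE entry are candidates for Rook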
Of course, such a setup can be operated on virtual instances – or at least tested there – if you will be using bare metal later in production. Either way, you can achieve a production-ready setup quickly on the basis of Kubernetes with Rancher and Rook, without having to deal with countless add-ons and product variants.