Rancher manages lean Kubernetes workloads
Construction Guide
Getting Started
In this example, I have four servers running Ubuntu 20.04 LTS. Because Podman is primarily a Red Hat invention, nothing stops Ubuntu from continuing to use Docker as its container runtime. Accordingly, Docker's community repositories are enabled in the system configuration, and the required Docker packages are installed. In this state, the systems are ready for the Rancher installation.
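One possible way to reach this state is to add Docker's community apt repository and install the engine packages on each server. This is a sketch based on Docker's documented setup for Ubuntu; the repository URL and package names may change over time.

```shell
# Sketch: enable Docker's community repository on Ubuntu 20.04
# (run as root on each of the four servers)
apt-get update
apt-get install -y ca-certificates curl gnupg

# Add Docker's GPG key and apt repository ("focal" is the 20.04 codename)
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
  | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu focal stable" \
  > /etc/apt/sources.list.d/docker.list

# Install the engine and make sure it starts at boot
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io
systemctl enable --now docker
```

A quick `docker info` afterward confirms that the daemon is up before you proceed.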
Terms
Like virtually every cloud-native solution, some terms have taken on a unique meaning in the Rancher universe over time. Before working with Rancher, you need to know the most important terms. For example, when people talk about a Rancher server, they are talking about a system that hosts all of Rancher's components for managing and provisioning Kubernetes clusters. Importantly, you must have at least one of these Rancher servers, but Rancher also allows for full scalability at its management level. As a result, the maximum number of Rancher servers per installation is virtually unlimited.
K3s is equally important in the Rancher context. This minimal Kubernetes version is maintained by the Rancher developers themselves and only contains the components you need to run Rancher. As a reminder, Rancher also rolls itself out as a Kubernetes cluster. K3s provides the foundations for this – and far leaner ones than OpenShift and others.
K3s, by the way, is not the only Kubernetes distribution that Rancher handles. You will also regularly encounter the Rancher Kubernetes Engine (RKE), a K8s distribution that comes directly from the Rancher developers but is older than K3s. The somewhat more modern K3s is therefore the recommended standard distribution for new setups. Additionally, RKE2 is a further development of RKE, which focuses on security and is designed for special use cases (e.g., in government offices).
Creating the Infrastructure
The first step en route to a working Rancher instance is installing K3s. The following example assumes that the setup has a MariaDB or MySQL database where Rancher can store its metadata. The database can optionally run on the same servers as Rancher, but an external database on its own hardware is also an option. Make sure that the database in question is highly available: Without its metadata, Rancher is more or less useless. MariaDB or MySQL must run reliably.
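With the database in place, the K3s installation on the first server can point at it through K3s's external datastore support. The hostnames, credentials, and database name below are placeholders for your environment.

```shell
# Sketch: install K3s in server mode against an external MariaDB/MySQL
# datastore; db.example.com, the "rancher" user, and the "k3s" database
# are assumptions for this example
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://rancher:secret@tcp(db.example.com:3306)/k3s" \
  --tls-san rancher.example.com

# Verify the node came up
kubectl get nodes
```

Running the same command on the second Rancher host with an identical `--datastore-endpoint` joins it to the same control plane, which is what makes the redundant design described below possible.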
Because the Rancher server components in this example use a redundant design, you will need a load balancer. Ideally, the load balancer will listen for incoming connections on the address that Rancher uses later; it forwards these connections to the two Rancher hosts. Load balancers operating at protocol Layer 4 or Layer 7 are both possible.
I assume that a Layer 4 load balancer is used (Figure 2), which makes the setup a little easier, especially with a view to Rancher's Secure Sockets Layer (SSL) capabilities. A Layer 7 device would offer more configuration options in theory, but you would need to configure it to handle SSL management. If you have a Layer 4 load balancer, Rancher handles SSL management itself, rolling out an instance of Traefik [2] to do so.
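A Layer 4 setup of this kind could look like the following HAProxy fragment, which simply passes TCP connections through to the two Rancher hosts. The backend names and addresses are assumptions for this example; because TLS is not terminated here, Rancher's own Traefik instance remains in charge of SSL.

```
# Sketch: HAProxy in TCP (Layer 4) mode in front of two Rancher hosts;
# 10.0.0.11 and 10.0.0.12 are placeholder addresses
frontend rancher_https
    bind *:443
    mode tcp
    default_backend rancher_servers

backend rancher_servers
    mode tcp
    balance roundrobin
    server rancher1 10.0.0.11:443 check
    server rancher2 10.0.0.12:443 check
```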
Either way, the load balancer must also be highly available. Its failure would mean that Rancher itself and the managed Kubernetes clusters might still be running, but they would be inaccessible from the outside. Speaking of accessibility: The DNS entry intended for Rancher needs to be added to the zone file for your domain when you set up the load balancer; otherwise, you can't start to install Rancher itself.
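In a BIND-style zone file, the required entry is a single record pointing the Rancher name at the load balancer's address; the name and IP below are placeholders.

```
; Sketch: zone-file entry for the Rancher name, resolving to the
; load balancer (192.0.2.10 is an example address)
rancher    IN    A    192.0.2.10
```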
Last but not least, the time needs to be correct on every system in the installation, which the distributions can handle either with the legacy Network Time Protocol (NTP) daemon or with chronyd. Either way, one of the two components needs to be set up and have access to an NTP server to set the system time correctly.
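On Ubuntu, setting this up with chronyd amounts to a few commands. The pool.ntp.org server below is an assumption; substitute your own NTP source if your network provides one.

```shell
# Sketch: install and enable chronyd on Ubuntu (run as root);
# pool.ntp.org is a placeholder NTP source
apt-get install -y chrony
echo "pool pool.ntp.org iburst" >> /etc/chrony/chrony.conf
systemctl enable --now chrony

# Check that the clock is actually synchronized
chronyc tracking
```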