Rancher Kubernetes management platform
Building Plans
Getting Started
In my test setup, I had four servers running Ubuntu 20.04 LTS. Podman is primarily a Red Hat project, and nothing stops Ubuntu from continuing to rely on Docker as its container runtime. Accordingly, Docker's community repositories are enabled in the system configuration, and the packages required for Docker are installed. In this state, the systems are ready for the Rancher installation.
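The following commands, which follow Docker's documented setup procedure for Ubuntu, are one way to reach this state; adapt the repository setup to your own environment or configuration management as needed:

sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
# Import Docker's repository key and register the community repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker.gpg
echo "deb [signed-by=/usr/share/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list
# Install the Docker engine itself
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io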
Terminology
Like virtually any solution from the cloud-native orbit, Rancher has developed a vocabulary of its own over time, with terms that have a distinct meaning in the Rancher universe. Before working with Rancher, you need to know the most important of them. A Rancher server, for example, is a system that hosts all of Rancher's components for managing and provisioning Kubernetes clusters. You will have at least one of these Rancher servers, but Rancher also scales across the board at the management level: The maximum number of Rancher servers per installation is practically unlimited.
K3s is equally important in the Rancher context. This minimal Kubernetes version is maintained by the Rancher developers; it contains only the components needed to run Rancher. Remember, Rancher also rolls itself out as a Kubernetes cluster. K3s provides the underpinnings for this capability, and it is far leaner than other alternatives.
K3s, by the way, is not the only Kubernetes distribution that works with Rancher. You will also regularly come across the Rancher Kubernetes Engine (RKE), a Kubernetes distribution that likewise comes directly from the Rancher developers but predates K3s. The somewhat more modern K3s is therefore the recommended standard distribution for new setups. RKE2 is a further development that focuses on security and is designed for special areas of application (e.g., government).
Infrastructure
The first step on the way to a running Rancher is to install K3s. The example here assumes that a MariaDB or MySQL database is available for Rancher to store its metadata. The database can run on the same servers as Rancher, but an external database on separate hardware is also an option. Make sure you secure the database and make it highly available: Without its metadata, Rancher is more or less useless, so MariaDB or MySQL needs to run reliably.
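K3s supports such an external MySQL-compatible datastore out of the box through its --datastore-endpoint option. As a minimal sketch, assuming a database named k3s on the hypothetical host db.example.com, the installation on each Rancher server could look like this:

# Install K3s pointed at an external MariaDB/MySQL datastore
# (user, password, host, and database name are placeholders)
curl -sfL https://get.k3s.io | sh -s - server \
  --datastore-endpoint="mysql://k3s:secret@tcp(db.example.com:3306)/k3s"

Running the same command on the second server makes both K3s instances share the external datastore and form a single control plane.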
Because Rancher's server components in the example also need to be redundant, you will want a load balancer (Figure 2) that listens for incoming connections on the address at which Rancher will be reachable and forwards them to the two Rancher hosts used in this example. The load balancer can operate at Layer 4 or Layer 7.
I am assuming that a Layer 4 load balancer will be used. This configuration makes the setup a little easier, especially later on with regard to Rancher's SSL capabilities. A Layer 7 device would offer more configuration options in theory, but you would need to configure it to handle SSL termination itself. With a Layer 4 load balancer, Rancher takes care of this by rolling out an instance of Traefik [2].
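With HAProxy, for example, such a Layer 4 setup boils down to plain TCP forwarding: SSL passes through to Rancher untouched. A minimal configuration sketch, assuming the hypothetical host names rancher1.example.com and rancher2.example.com for the two Rancher servers:

frontend rancher_https
    bind *:443
    mode tcp
    default_backend rancher_servers

backend rancher_servers
    mode tcp
    balance roundrobin
    # Plain TCP health checks; SSL is terminated by Rancher's Traefik
    server rancher1 rancher1.example.com:443 check
    server rancher2 rancher2.example.com:443 check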
Either way, the load balancer must also be highly available. Its failure would mean that Rancher itself and the managed Kubernetes clusters would still be running, but would no longer be accessible from the outside. Speaking of accessibility: The DNS entry intended for Rancher needs to be stored in the zone file of the respective domain when you set up the load balancer; otherwise, the Rancher installation cannot begin.
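A quick check with dig confirms that the entry is in place before you proceed (rancher.example.com stands in for your real host name):

dig +short rancher.example.com
# should print the virtual IP of the load balancer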
Last but not least, and as should be a matter of course today, the time needs to be correct on each system of the installation. On the distribution used here, this can be done by the legacy Network Time Protocol (NTP) daemon or by chronyd. Either way, one of the two components needs to be set up and have access to an NTP server to set the system time correctly.
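On Ubuntu, for example, chronyd is set up in a few commands, and chronyc then lets you verify that the clock is actually synchronized:

sudo apt-get install -y chrony
sudo systemctl enable --now chrony
# "Leap status: Normal" in the output indicates a synced clock
chronyc tracking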