A multicluster management tool for Kubernetes
One Stop Shop
With application containerization on the rise, administrators face the problem of having to manage not just one platform, but several. Hybrid cloud environments typically mix architectures: Clusters running OpenShift, Rancher, or Canonical's Charmed Kubernetes (formerly the Canonical Distribution of Kubernetes, CDK) in the data center are teamed with managed Kubernetes services from Amazon, Azure, or Google.
The great advantage of containerization is precisely that containerized apps run unchanged on all the platforms just mentioned. Therefore, administrators need just one thing: a management tool that can handle the whole mix of platforms.
Although you can use tools like Ansible to distribute applications [1], another option is Open Cluster Management (OCM).
Test Environment with Kind
If you want to evaluate tools for multicluster management, you first need a multicluster environment, which normally means a huge investment of time and – in the case of cloud-based clusters – a great deal of cash. However, you can set up test clusters specifically for management and API tests in a far easier and cheaper way. The suitable tool comes from the Kubernetes makers themselves and is named Kind [2].
The secret ingredient is nested containers: Kind launches complete Kubernetes clusters, including the appropriate virtual networking, inside containers. Each cluster node is itself a Podman or Docker container on the host, and the pods within it run on a nested container runtime (containerd). The basic setup simply routes the API management port of the cluster to the host system. With a few additional port-mapping rules, Kind can also forward application traffic from the host to pods within the Kind cluster. However, this scenario should be strictly limited to test environments.
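Such a port-mapping rule lives in the cluster configuration file that you pass to Kind at creation time. A minimal sketch, assuming the application is exposed inside the cluster through a NodePort service on the (arbitrarily chosen) port 30080:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  # Forward host port 8080 to port 30080 on the node container;
  # a NodePort service listening on 30080 inside the cluster then
  # delivers the traffic to the application pod.
  - containerPort: 30080
    hostPort: 8080
    protocol: TCP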
The practical thing about Kind is that it can be used to run multiple Kubernetes clusters on a single host, which means you can test multicluster management tools with relatively little effort.
To set up Kind, all you need is a physical or virtual machine with 8GB of RAM and four virtual CPUs (vCPUs) running Podman or Docker. The installation can be handled by either the Go toolchain or a package manager. Kind does not even require a Linux environment: Thanks to the Chocolatey package manager, it can be used in combination with Docker Desktop for Windows; on a Mac, you need Homebrew. In other words, it will easily run on your desktop or notebook.
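For example, with a Go toolchain in place, one documented way to fetch and build the tool is a one-liner (pinned here to the release current at the time of writing):

go install sigs.k8s.io/kind@v0.17.0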
After you install the tool, create a Kubernetes test environment with a single command:
kind create cluster
The tool now launches a single container with a single-node Kubernetes setup in your Docker or Podman environment. Kind sets up the appropriate port mapping from a random localhost port to the API on the container's internal port 6443. However, you don't need to worry about that, because the tool immediately creates a matching ~/.kube/config file with the access credentials in your user directory. Immediately after starting up, the cluster named kind-kind can be controlled by kubectl:
kubectl cluster-info
A simple kind delete cluster is sufficient to remove the cluster, including the entry in ~/.kube/config. To set up multiple clusters, simply start Kind with the --name parameter. Because the tool stores the credentials for both clusters in the kubeconfig file, you simply need to select the cluster to manage with the --context option,
kubectl cluster-info --context kind-test2
or make one cluster the default:
kubectl config use-context kind-test1
For a test with different cluster configurations, you can also create a configuration file that specifies the number of controller and worker nodes, but single-node clusters are fine for the purposes here.
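Such a configuration file is plain YAML. A minimal sketch, assuming a hypothetical file name of kind-ha.yaml, that describes one control-plane node and two worker nodes:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
# One controller and two workers; add further entries as needed.
- role: control-plane
- role: worker
- role: worker

You would then create the cluster with a command along the lines of kind create cluster --name test3 --config kind-ha.yaml.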
OCM Structure
The OCM [3] community project centrally manages multiple Kubernetes clusters; OCM is not interested in the Kubernetes distribution you use. The architecture borrows heavily from the Kubernetes architecture itself and from its API. For example, a Kubernetes worker node within a cluster runs the kubelet client service, which is controlled by the cluster's control plane. In the case of OCM, a hub cluster acts as the control plane for the connected Kubernetes setups, and the managed clusters run the klusterlet resource as a client.
OCM does not use its own communication port for its work but plugs directly into the Kubernetes API like other extensions. To work, the hub and managed clusters need to see each other (i.e., the hub must have access to the Kubernetes API port of the managed clusters, and the managed clusters must have access to the API of the hub cluster). A hybrid environment with some clusters in the cloud and some in the data center will need appropriate firewall and port rules for API access. However, the individual managed clusters do not need to communicate with each other.
In OCM, you define your Kubernetes applications as ManifestWorks. These are application templates that can be rolled out and monitored by the hub on one of the managed clusters. Manifest management is comparatively simple. Simply take your application's existing YAML declaration along with the usual suspects (resources such as deployments and PersistentVolumes) and bundle them into a new YAML file with some metadata for the manifest. OCM can then pass the application to the specified cluster and monitor its operation there.
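A minimal sketch of such a ManifestWork, assuming a managed cluster registered under the name one (on the hub, the resource goes into the namespace named after the target cluster) and a hypothetical single-replica nginx deployment as the payload:

apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  name: hello-work
  namespace: one               # hub-side namespace of the target cluster
spec:
  workload:
    manifests:
    # The bundled application resources follow; real-world manifests
    # would also include services, volumes, and so on.
    - apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: hello
        namespace: default
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: hello
        template:
          metadata:
            labels:
              app: hello
          spec:
            containers:
            - name: hello
              image: nginx     # hypothetical demo workload

Applying the file on the hub (kubectl apply -f hello-work.yaml --context kind-hub) prompts the klusterlet on cluster one to create the deployment.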
You do not have to specify a cluster directly to run an application. OCM also manages cluster sets with placement rules, which means that applications can be assigned to a cluster group and that OCM then references a ruleset to choose the cluster on which the application will ultimately run. One of OCM's strengths is its many placement options. Very detailed rulesets can be created to distribute the applications to reflect the required resources or the operating costs.
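A sketch of a simple placement rule, under the assumption that the candidate clusters belong to a ManagedClusterSet bound to the default namespace and carry a hypothetical environment=test label:

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: test-placement
  namespace: default
spec:
  numberOfClusters: 1          # schedule onto exactly one cluster
  predicates:
  - requiredClusterSelector:
      labelSelector:
        matchLabels:
          environment: test    # hypothetical label on the clusters

The hub records the selection result in a matching PlacementDecision resource, which other components can then evaluate.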
Because OCM only works with the plain vanilla Kubernetes API, it can manage clusters with different Kubernetes distributions. However, you need to make sure that your workloads do not use distribution-specific functions and API extensions of which a target cluster might not even be aware. For example, an application that declares OpenShift routes cannot run on a Google cluster. Here, too, placement rules help to place the manifests on suitable clusters.
Commissioning OCM
A test setup with OCM and Kind is quickly created. For this example, I used a RHEL 9 virtual machine (VM) with four vCPUs and 8GB of RAM. An Enterprise Linux (EL) 8/9 clone or Fedora will work just as well. On the system, first install Podman and the Go environment:
dnf install podman golang
If you want to compile applications from source files, you also need to set up the required developer tools,
dnf group install "Development Tools"
then install the kubectl Kubernetes client. Depending on your distribution, it is in the OpenShift client package (RHEL), or you can pick it up directly from the Kubernetes site,
curl -LO "https://dl.k8s.io/release/ $(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
where you can also pick up the binary package for Kind. At the time of writing, version 0.17.0 was the latest release, although a newer version might be available by the time you read this article; be sure to check the project page before you install. (Alternatively, you can build Kind from the source files in the Git repository [4] to work with the current development version.) To download the binary:
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.17.0/kind-linux-amd64
Make Kind executable and store it in a directory in your path:
chmod +x ./kind
sudo mv ./kind /usr/bin/kind
For deeper insight into your Kind clusters, I recommend using the K9s tool as your text-based user interface (TUI) at this point. Now create two test clusters,
kind create cluster --name hub
kind create cluster --name one
and install the OCM admin tool, clusteradm. The project's install script downloads the prebuilt binary and copies it into your path:
curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash
In my lab, I experienced some problems with the context switch and clusteradm. To work around this, you need to change the context each time before performing operations with a cluster:
kubectl config use-context kind-hub
Run the OCM admin tool to get OCM set up on your first Kind cluster with the one-liner:
clusteradm init --wait --context kind-hub
The tool now creates two namespaces named open-cluster-management and open-cluster-management-hub (Figure 1). The latter contains management components such as the Placement and Registration controllers.
At the end of the hub initialization, clusteradm outputs the command line that you use to log clusters onto the hub. Make sure you save this line in a safe place, because it contains the registration token. Now switch to the context of the cluster you want to manage,
kubectl config use-context kind-one
then run the registration line from earlier with one important change: To register a Kind cluster, you need to force clusteradm to use the local API endpoint:
clusteradm join --hub-token <token> --cluster-name one --context kind-one --force-internal-endpoint-lookup
Clusteradm now installs the klusterlet on the cluster to be managed and then sends the registration request to the management hub. As soon as it arrives there, you need to approve the requested certificate: Check the hub for the Certificate Signing Request (CSR), and if the signing request appears in the list, add the cluster:
kubectl get csr --context kind-hub
kubectl config use-context kind-hub
clusteradm accept --clusters one --context kind-hub
As mentioned, the clusteradm accept line alone should do the trick. However, in my lab, the --context option did not always work reliably with the clusteradm command, hence the explicit context switch. If successful, cluster one should be part of the management network and show up in the overview:
kubectl get managedcluster --context kind-hub
The output should then list cluster one with True in the Accepted, Joined, and Available columns.