Kubernetes k3s lightweight distro
Smooth Operator
William Tell or Agent Cooper to Boot
It's time to get your hands much dirtier now and delve into the innards of k3s. To begin, fire up your k3s cluster by running the script above. Either put the appended | sh - section back at the end of the command or type:
chmod +x install-script.sh
./install-script.sh
You will need to run the script as root (or via sudo) for it to install correctly.
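For reference, here is the original one-liner with the pipe and trailing dash in place (the same command is used again later when building the Master node):

$ curl -sfL https://get.k3s.io | sh -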
The trailing dash after the sh in the original command stops other options from being accidentally passed to the shell that is going to run the script. Listing 1 shows what happens after running the install script. The output is nice, clean, and easy to follow.
Listing 1
Installing k3s
[INFO] Finding latest release
[INFO] Using v1.0.0 as release
[INFO] Downloading hash https://github.com/rancher/k3s/releases/download/v1.0.0/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/rancher/k3s/releases/download/v1.0.0/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
The documentation notes that as part of the installation process, the /etc/rancher/k3s/k3s.yaml config file is created. In that file, you can see a certificate authority (CA) entry to keep internal communications honest and trustworthy across the cluster, along with a set of admin credentials.
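As an aside, that file is a standard kubeconfig, so if you ever want to manage the cluster from another machine, you can copy it across (adjusting the server address) and point any kubectl at it. A minimal sketch on the k3s host itself, assuming your user can read the root-owned file:

$ export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
$ kubectl get nodes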
So you don't have to install client-side binaries, k3s also installs the sophisticated kubectl locally, which is a nice time-saving touch. You're suitably encouraged at this point to see whether the magical one-liner did reliably create a Kubernetes build by running the command:
$ kubectl get pods --all-namespaces
Although you have connected to the API server, as you can see in Listing 2, some of the pods are failing to run correctly with CrashLoopBackOff errors. Troubleshooting these errors will give you more insight into what might go wrong with Kubernetes and k3s.
Listing 2
Failing Pods (Abridged)
NAMESPACE     NAME                                READY   STATUS             RESTARTS   AGE
kube-system   coredns-d798c9dd-h25s2              0/1     Running            0          10m
kube-system   metrics-server-6d684c7b5-b2r8w      0/1     CrashLoopBackOff   6          10m
kube-system   local-path-provisioner-58fb86bdfd   0/1     CrashLoopBackOff   6          10m
kube-system   helm-install-traefik-rjcwp          0/1     CrashLoopBackOff   6          10m
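To dig into why a pod is crash looping, the usual kubectl tooling applies. As a quick sketch, substituting one of the pod names from Listing 2 with your own:

$ kubectl -n kube-system describe pod metrics-server-6d684c7b5-b2r8w
$ kubectl -n kube-system logs metrics-server-6d684c7b5-b2r8w --previous   # Logs from the last crashed container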
A cluster by definition is more than one node, and so far you've only created one node. To check how many are available, enter
$ kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
kilo   Ready    master   15m   v1.16.3-k3s.2
or check events. As Listing 3 shows, not much is going on. However, by checking all Kubernetes namespaces for events with the command
$ kubectl get events --all-namespaces
you can see lots of useful information.
Listing 3
Cluster Events
$ kubectl get events
LAST SEEN   TYPE      REASON                    OBJECT      MESSAGE
16m         Normal    Starting                  node/kilo   Starting kubelet.
16m         Warning   InvalidDiskCapacity       node/kilo   invalid capacity 0 on image filesystem
16m         Normal    NodeHasSufficientMemory   node/kilo   Node kilo status is now: NodeHasSufficientMemory
16m         Normal    NodeHasNoDiskPressure     node/kilo   Node kilo status is now: NodeHasNoDiskPressure
16m         Normal    NodeHasSufficientPID      node/kilo   Node kilo status is now: NodeHasSufficientPID
16m         Normal    NodeAllocatableEnforced   node/kilo   Updated Node Allocatable limit across pods
16m         Normal    Starting                  node/kilo   Starting kube-proxy.
16m         Normal    NodeReady                 node/kilo   Node kilo status is now: NodeReady
16m         Normal    RegisteredNode            node/kilo   Node kilo event: Registered Node kilo in Controller
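When you're faced with far more event noise than this, it helps to narrow the output to a namespace and sort it chronologically with standard kubectl options, for example:

$ kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp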
To debug any containerd [7] run-time issues, look at the logfile in Listing 4, which shows some heavily abbreviated sample entries containing the errors. You can also check the verbose logging in the Syslog file, /var/log/syslog, or on Red Hat Enterprise Linux derivatives, /var/log/messages. To check your systemd service status, enter:
$ systemctl status k3s
Listing 4
Kubernetes Namespace Errors
$ less /var/lib/rancher/k3s/agent/containerd/containerd.log
BackOff   pod/helm-install-traefik-rjcwp               Back-off restarting failed container
          pod/coredns-d798c9dd-h25s2                   Readiness probe failed: HTTP probe failed with statuscode: 503
BackOff   pod/local-path-provisioner-58fb86bdfd-2zqn2  Back-off restarting failed container
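Because the installer sets k3s up as a systemd unit, the service journal is another handy place to look alongside the containerd log and syslog. For example, to see the most recent entries:

$ journalctl -u k3s --no-pager | tail -n 50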
After looking at the output from the command

$ iptables -nvL

you might discover (hand in hand with reading about a cluster networking problem on GitHub [8]) that firewalld is probably responsible for breaking internal networking. A simple fix is first to see whether firewalld is enabled:
$ systemctl status firewalld
To stop it from running, use:
$ systemctl stop firewalld
That firewalld and Kubernetes don't play nicely together is relatively well known (in this case, the ACCEPT entries in Figure 2 would show as REJECT before stopping firewalld). As a result, traffic isn't forwarded correctly under the k3s-friendly KUBE-FORWARD chain in iptables. With firewalld out of the way, the full set of pods that should be running can be seen in Listing 5.
Listing 5
All Pods Running
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   local-path-provisioner-58fb86bdfd-bvphm   1/1     Running     0          65s
kube-system   metrics-server-6d684c7b5-rrwpl            1/1     Running     0          65s
kube-system   coredns-d798c9dd-ffzxf                    1/1     Running     0          65s
kube-system   helm-install-traefik-qfwfj                0/1     Completed   0          65s
kube-system   svclb-traefik-sj589                       3/3     Running     0          31s
kube-system   traefik-65bccdc4bd-dctwx                  1/1     Running     0          31s
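If you want to confirm that the forwarding rules now look healthy, you can list the relevant chain directly; and on a box dedicated to k3s, you might prefer to stop firewalld from returning at the next boot. Both are standard commands:

$ iptables -nvL KUBE-FORWARD
$ systemctl disable --now firewalld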
Ain't No Thing
Now that you have a happy cluster, install a pod to make sure Kubernetes is running as expected. The command to install a BusyBox pod is:
$ kubectl apply -f https://k8s.io/examples/admin/dns/busybox.yaml
pod/busybox created
As you can see from the output, k3s accepts standard YAML (see also Listing 6). If you check the default namespace, you can see the pod is running (Listing 7).
Listing 6
BusyBox YAML
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
Listing 7
BusyBox Running
$ kubectl get pods -ndefault
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          2m14s
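Because that particular BusyBox manifest comes from the Kubernetes DNS debugging examples, it also doubles as a quick check that cluster DNS (served by CoreDNS) is answering. For instance:

$ kubectl exec -ti busybox -- nslookup kubernetes.default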
Excellent! It looks like the cluster is able to handle workloads, as hoped. Of course, the next step is to add some Agent (Worker) nodes to the k3s Server (Master) to run more than just a single-node cluster.
Two's Company
Look back at Figure 1 for an example of a standard configuration with two Agent nodes and a main Server node. Consider how software as slick as k3s might make life easier for you when it comes to adding Agent nodes to your cluster. I'll illustrate this surprisingly simple process with a single Master node and a single Agent in Amazon Web Services (AWS) [9].
The process so far has set up a Master node, which helps create the intricate security settings and certificates for a cluster. Suitably armed, you can now look inside the file /var/lib/rancher/k3s/server/node-token to glean the API token and insert it into the command that will be run on the Agent node. Essentially, the command simply tells k3s to enter Agent mode and not become a Master node:
$ k3s agent --server https://${MASTER}:6443 --token ${TOKEN}
On a freshly built node (which I'll return to later), it's even quicker:
$ curl -sfL https://get.k3s.io | K3S_URL=https://${MASTER}:6443 K3S_TOKEN=${TOKEN} sh -
I have some Terraform code [10] that will set up two instances, creating them in the same virtual private cloud (VPC). To keep things simple, don't worry about how the instances are created; instead, focus on the k3s aspects. For testing purposes, you could hand-crank the instances, too.
As mentioned earlier, I've gone for two Ubuntu 18.04 (Bionic) instances. Additionally, I'll use the same key pair for both, put the instances in a VPC of their own, and configure a public IP address for each (so I can quickly access them with SSH). After a quick check of which packages need updating and an apt upgrade -y command on each instance, followed by a reboot, I'm confident that each instance has a sane environment on which k3s can run (Figure 3).
For the Master node, you again turn to the faithful installation one-liner:
$ curl -sfL https://get.k3s.io | sh -
After 30 seconds or so, you have the same output as seen in Listing 1.
Now, check how many nodes are present in the, ahem, lonely one-machine cluster (Listing 8). As expected, the role is displayed as master. Check the API token needed for the Agent node's configuration in /var/lib/rancher/k3s/server/node-token (see also the "API Token" box). I can display the entire unredacted contents of that file with impunity, because the test lab was torn down a few minutes later:
$ cat /var/lib/rancher/k3s/server/node-token
K10fc63f5c9923fc0b5b377cac1432ca2a4daa0b8ebb2ed1df6c2b63df13b092002::server:bf7e806276f76d4bc00fdbf1b27ab921
API Token
Make sure you note the API token correctly or you might see the error:
msg="token is not valid: https://127.0.0.1:41797/apis: 401 Unauthorized
I got that error after pasting from a document, and the double quotes (inverted commas) were altered subtly so that the command line couldn't understand the variable!
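One way to sidestep paste mangling altogether is to read the token straight out of the file and into the variable, rather than routing it through a document. For example, on a machine that can read the token file:

$ export TOKEN="$(sudo cat /var/lib/rancher/k3s/server/node-token)"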
Listing 8
A One-Machine Cluster
$ kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
ip-172-30-2-181   Ready    master   27s   v1.16.3-k3s.2
Before leaving the Master node, look up its private IP address (and, while you're at it, note the cni0 interface that k3s has created):

$ ip a

Then, carefully take a note of that private IP address. In my case, it was 172.30.2.181/24.
The standard security groups (SGs) in AWS only open TCP port 22, so I've made sure that any instances in that newly baked VPC can talk to each other. As you will see in a later command, only TCP port 6443 is needed for intracluster communications. Figure 4 shows what I've added to the SG (the SG self-references here, so anything in the VPC can talk to everything else over the prescribed ports).
Now, jump over to the Agent node:
$ ssh -i k3s-key.pem ubuntu@34.XXX.XXX.XXX # Agent node over public IP address
I'll reuse the command I looked at a little earlier. This time, however, I will populate the slightly longer command used for Agents with two environment variables. Adjust these to suit your needs and then run the commands:
$ export MASTER="172.30.2.181"   # Private IP Address
$ export TOKEN="K10fc63f5c9923fc0b5b377cac1432ca2a4daa0b8ebb2ed1df6c2b63df13b092002::server:bf7e806276f76d4bc00fdbf1b27ab921"
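With those variables in place, a quick connectivity check from the Agent to the Master's API port confirms the security group change took effect before you kick off the install. This is just a sketch and assumes netcat is installed on the instance:

$ nc -vz ${MASTER} 6443   # Should report that the port is open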
Finally, you're ready to install your agent with a command that pulls in your environment variables:
$ curl -sfL https://get.k3s.io | K3S_URL=https://${MASTER}:6443 K3S_TOKEN=${TOKEN} sh -
In Listing 9, you can see in the tail end of the installation output where k3s acknowledges Agent mode. Because the K3S_URL variable has been passed to it, Agent mode has been enabled. To test that the Agent node's k3s service is running as expected, it's wise to refer to systemd again:
$ systemctl status k3s-agent
Listing 9
Agent Mode Enabled
[INFO] env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO] systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO] systemd: Starting k3s-agent
The output you're hoping for, right at the end of the log, is kube.go:124] Node controller sync successful.
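To watch that log arrive in real time on the Agent node, follow the k3s-agent unit's journal:

$ journalctl -u k3s-agent -f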
Now for the proof of the pudding: Log back in to your Master node and see if the cluster has more than one member (Listing 10). As you can see, all is well. You can add workloads to the Agent node and, if you want, also chuck another couple of Agent nodes into the pool to help load balance applications and add some resilience.
Listing 10
State of the Cluster
$ kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
ip-172-30-2-181   Ready    master   45m   v1.16.3-k3s.2
ip-172-30-2-174   Ready    <none>   11m   v1.16.3-k3s.2
Here, the <none> shown instead of worker or agent is the default role label for a newly joined node [11].
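The empty role is purely cosmetic. If you'd like the ROLES column to read worker, you can add a standard node role label yourself (the node name below is the Agent from Listing 10):

$ kubectl label node ip-172-30-2-174 node-role.kubernetes.io/worker=worker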