Kubernetes k3s lightweight distro
Smooth Operator
If commentators are to be believed, tech communities have warmly embraced another predictable and definable phase in the Internet's evolution following the steady creep of Internet of Things (IoT) technologies. Apparently, the human race suddenly deems it necessary to connect anything that boasts a thermistor to the Internet 24 hours a day. Reportedly, somewhere in the region of a staggering 50 to 70 billion IoT devices will be in action by the end of 2020.
For example, according to one report [1], a water project in China includes a whopping 100,000 IoT sensors to monitor three separate 1,000km-long canals that will ultimately "divert 44.8 billion cubic metres of water annually from rivers in southern China and supply it to the arid north." The sensors were apparently installed to monitor for structural weaknesses (in a region with a history of earthquakes), scan water quality, and check water flow rates.
As you can imagine, the constant data streams fed 24/7 from that number of sensors need a seriously robust infrastructure management solution. With its automated failover, scalability, load balancing services to meet global traffic demands, and extensible framework suitable for distributed systems in cloud and data centers alike, Kubernetes (often abbreviated k8s by the cool kids) has gained unparalleled popularity for managing containers. Building on 15 years of herding cat-like containers in busy production clusters at Google means it also has an unimpeachable pedigree. Historically, however, achieving what might be considered a production-grade cluster has been a non-trivial challenge without beefy processing power and sizeable amounts of memory.
In this article, I look at a Kubernetes implementation equally suitable for use at the network edge, shoehorned into continuous integration and continuous deployment (CI/CD) pipeline tests and IoT appliances. The super-portable k3s [2] Kubernetes distribution is a lightweight solution to all the capacity challenges that a fully blown installation might usually bring and should keep on trucking all year round with little intervention.
We Won It Six Times
The compliant k3s boasts a simple installation process that, according to the README file, requires "half the memory, all in a binary less than 40MB" to run. By design, it is authored with a healthy degree of foresight by the people at Rancher [3]. The GitHub page [4] notes that the components removed from a fully blown Kubernetes installation shouldn't cause any headaches. Much of what has been removed, they say, is older legacy code, alpha and experimental features, or non-default features.
As well as pruning many of the add-ons from a standard build (which can be replaced apparently with out-of-tree add-ons), the light k3s dutifully replaces the highly performant key-value store etcd [5] with a teeny, tiny SQLite3 database module to act as the storage component. For context, the not-so-new Android smartphone on which I am writing this article uses an SQLite database. Chuck in a refined mechanism to assist with the encrypted TLS communications required to run a Kubernetes cluster securely and a slim list of operating system dependencies, and k3s is built to be fully accessible.
The documentation self-references k3s [6]: "Simple but powerful 'batteries-included' features have been added, such as: a local storage provider, a service load balancer, a helm controller, and the Traefik ingress controller."
The marvelous k3s should apparently run on most Linux flavors but has been tested on Ubuntu 16.04 and 18.04 on AMD64 and Raspbian Buster on armhf. For reference, I'm using Linux Mint 19 based on Ubuntu 18.04 for my k3s cluster.
All the Things
Figure 1 shows how a k3s cluster fits together from an architectural perspective and highlights slight changes from the naming conventions you might be used to if you are familiar with Kubernetes. Note for example that the nodes, commonly called Worker nodes, are called Agents. Additionally, in k3s parlance, the Master node is referred to as a k3s Server. Not much else is different. As you'd expect, most of the usual elements are present: the controller, the scheduler, and the API server.
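To see the Server/Agent split in practice, the k3s documentation describes starting the Server first and then joining Agents to it with a shared token. The commands below are a sketch based on that documented flow; the hostname myserver is a placeholder for your Server's address, and you should check the current docs for the exact token path on your release:

```shell
# On the machine acting as the k3s Server (the equivalent of the Master node):
curl -sfL https://get.k3s.io | sh -

# The Server generates a join token; note it down
# (path as given in the k3s docs at the time of writing):
sudo cat /var/lib/rancher/k3s/server/node-token

# On each machine acting as an Agent (Worker node), point the installer at
# the Server's API endpoint and supply the token via environment variables:
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=<token-from-above> sh -

# Back on the Server, the Agents should appear as nodes:
sudo k3s kubectl get nodes
```

Setting K3S_URL is what tells the installer to run in Agent mode rather than standing up another Server.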
All Good Things
The docs give you a tantalizing, single command line to download, install, and run k3s:
$ curl -sfL https://get.k3s.io | sh -
With that curl command, the download is piped straight into an sh shell. From my DevOps security perspective, I'm not a fan of being quite so cavalier and trusting software I'm not familiar with. Therefore, I would remove the | sh - pipe and see what's being downloaded first. The -s switch makes the curl output silent, and the -f and -L switches ensure that curl fails quietly on HTTP errors rather than saving an error page and follows page redirections, respectively.
If you run the command without the shell pipe as suggested and instead redirect the curl output to a file,
$ curl -sfL https://get.k3s.io > install-script.sh
you get a good idea what k3s is trying to do. In broad strokes, after skimming through the helpful comments at the top of the output file, you will see a number of configurable install options, such as avoiding k3s automatically starting up as a service, altering which GitHub repository to check for releases, and not downloading the k3s binary again if it's already available. At more than 700 lines, the content is well worth a quick once-over to get a better idea of what's going on behind the scenes.
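Those install options are driven by environment variables, which are documented in the comments at the top of the script. The following is a sketch of how a few of them might be used; the variable names here are as I read them from the script at the time of writing, so verify them against your downloaded copy before relying on them:

```shell
# Read the script (and its commented option list) before running anything:
less install-script.sh

# Install k3s but skip starting it as a service straight away:
INSTALL_K3S_SKIP_START=true sh install-script.sh

# Reuse a k3s binary that's already on the box instead of downloading it again:
INSTALL_K3S_SKIP_DOWNLOAD=true sh install-script.sh

# Check a different GitHub repository (e.g., a fork) for releases:
INSTALL_K3S_GITHUB_URL=https://github.com/myfork/k3s sh install-script.sh
```

Running the script explicitly like this, rather than piping curl into sh, also gives you a fixed copy you can audit and version alongside your own infrastructure code.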