Lead Image © Goran Bogicevic, Fotolia.com
Production-ready mini-Kubernetes installations
Rich Harvest
No matter what your technical problems, containers make everything easier, better, and faster – at least that's the rosy promise of the glossy brochures. However, some distributors of container platforms built around Kubernetes provide an excellent tool that, once rolled out, keeps on adding components to cover the various missing aspects of container operation – and the resulting complexity can quickly overwhelm newcomers.
Each of these components, whether addressed by the tool itself or as an add-on, comes with its own system requirements and drags in other components that also need to be rolled out before anything else happens. The many solutions in a provider's portfolio did not come about by chance but are part of a carefully developed product strategy. The idea is to cover every eventuality, discourage admins from shopping around, and ensure maximum returns.
By the time you have implemented a production-ready Kubernetes (K8s) in this way, you will have paid a large amount of cash to the distributor and your choice of hardware vendor, fought your way through countless pages of documentation, and probably worked for weeks to knock all the add-ons and the tool itself into a production-ready state. It's easy to understand why many admins lose interest in containerization before they have even gotten started.
In this article, I focus on Kubernetes per se and the means for achieving the smallest possible K8s setup that is as complete as possible for a production or development environment.
Basics
Any container platform certainly needs scalable, redundant storage; versatile software-defined networking; and comprehensive security features. The obvious question is: How do you set up Kubernetes without breaking the bank? What options do you have for building an executable Kubernetes that is suitable for production use outside the sphere of influence of the major distributions? How do you create a small but capable K8s playground for your first steps?
Plenty of ready-made solutions can address all of these challenges; after all, K8s business has been a revenue driver for countless technology groups for years. However, keeping track and developing a suitable strategy can be a challenge.
Test or Production?
Even if you only want to build a small Kubernetes for your own needs, you must ask a central question: Does the setup need to be suitable for production use? Much depends on the answer.
Intuitively, the vast majority of administrators will probably think of redundancy first, and Kubernetes is all about redundancy on several levels. The very principle of a distributed and scalable platform is diametrically opposed to a solution without any redundancy. First is the redundancy that Kubernetes itself requires. Central components such as the scheduler, the Kubernetes API server, and the controller manager need to be operated redundantly in production setups; otherwise, the failure of individual systems will wreak havoc across the entire platform.
Second, a redundant Kubernetes is useless if storage for the running container instances is not set up to be redundant; again, its failure would paralyze the entire platform. Whether redundant or not, Kubernetes can be rolled out by almost identical methods, either as a single-node setup for development purposes or as a highly redundant installation. Obviously a redundant setup requires more hardware than a simple development environment, but in both cases the installation can at least be virtual.
The infrastructure surrounding a Kubernetes installation requires more thought. In the context of a development environment, the most important consideration is that you need a way to create standards-compliant volumes in Kubernetes (persistent volumes and persistent volume claims). Whether they then point to a local logical volume manager (LVM) volume in the background is pretty much irrelevant.
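For a development environment, such a locally backed volume might be sketched as a PersistentVolume plus a matching PersistentVolumeClaim. The names, sizes, and the mount path below are purely illustrative, assuming a local logical volume mounted on the node:

```yaml
# Illustrative only: a PersistentVolume backed by a local path
# (e.g., the mount point of an LVM logical volume) and a claim
# that binds to it via the shared storageClassName.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dev-pv                # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-dev
  hostPath:
    path: /mnt/lvm/dev-pv     # assumed mount point of a local LV
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dev-pvc               # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-dev
  resources:
    requests:
      storage: 10Gi
```

A pod then references the claim (not the volume) in its `volumes` section, which is what makes the setup standards-compliant: the same claim works unchanged if the backing storage later moves from a local volume to a networked one.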
However, this situation changes if you are looking at a production setup. Redundant storage can be implemented in various ways. Besides legacy network-attached storage (NAS) or a storage area network (SAN), you can use more modern approaches such as Ceph or Longhorn. However, you do need to procure, install, and set up the required hardware before you can get started with Kubernetes itself.
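With Ceph as the back end, dynamic provisioning is typically wired up through a StorageClass. The following is only a sketch, assuming a ceph-csi RBD deployment; the cluster ID, pool name, and secret names are placeholders that depend on your installation:

```yaml
# Sketch of a StorageClass for Ceph RBD via the ceph-csi driver.
# clusterID, pool, and the secret names/namespaces are placeholders
# specific to your Ceph and ceph-csi deployment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <ceph-cluster-id>        # placeholder
  pool: kubernetes                    # assumed RBD pool name
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
reclaimPolicy: Delete
allowVolumeExpansion: true
```

Any PersistentVolumeClaim that names this class then triggers the CSI driver to carve a new RBD image out of the Ceph pool automatically, which is the step that legacy NAS/SAN setups usually require manual work for.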
In the example here, I assume that at least five servers are available for the production scenario: three systems for the Kubernetes control plane, which also host the Ceph-based distributed storage, and two additional systems for (redundant) container operations.
Beware of Vanilla
A warning is appropriate at this point: Administrators who have not yet had any experience in the K8s world, in particular, tend to stumble into a trap by rolling out the vanilla Kubernetes distribution (i.e., the containerized software that can be found on the Kubernetes homepage [1]). As things stand at present, though, this approach is strongly discouraged – an opinion shared in the Kubernetes community.
Since its inception, K8s has been based on the principle that an official Kubernetes version exists whose central role is to specify the Kubernetes API and define the rules. Although the open source upstream (vanilla) Kubernetes can be rolled out, you won't enjoy such a setup for long, because it explicitly gives you only the absolute minimum of services to operate with minimum redundancy. The management tools are missing, as are tools for connecting the basic Kubernetes installation with external services such as software-defined storage (SDS) or software-defined networking (SDN).
Although technically possible, a great deal of effort is required to create an environment that is production- or even just development-ready. Experienced Kubernetes admins go so far as to describe running a local vanilla Kubernetes as a form of administrative self-boycott. If the experts already fear such a task, K8s newcomers are very much advised not to tread this path. You do not need to take such a course of action: Far removed from the huge Kubernetes environments are rugged and agile solutions that enable the operation of a standards-compatible K8s cluster.