Tinkerbell life-cycle management
Magical Management
Bogged Down in Detail
The basics were not the big problem, said Goulding. Taking a server out of the box, mounting it in a rack, and then booting it into an installer via the preboot execution environment (PXE) is not the challenge. In most cases, however, this is only a small part of the work that needs to be done.
The claim that commodity hardware always behaves in the same way is simply not true. Anyone who has ever had to deal with different server models from the same manufacturer can confirm this. Bare metal life-cycle management therefore also includes updating firmware, respecting the hardware requirements of specific servers, and enabling specific features on specific systems, not to mention the special hardware that needs to be taken into account during deployment.
Imagine a scenario in which a provider uses special hardware, such as network interface controllers (NICs) from Mellanox, whose driver is integrated into the provider's own bare metal environment. If the original model is no longer available and you have to buy a successor model for a batch of additional servers, you face a problem that quite often forces a complete rebuild. Tinkerbell has aimed to make precisely these tasks more manageable from the outset.
What the Tinkerbell community particularly misses in other solutions is the ability to intervene flexibly in individual parts of the deployment process. Indeed, Red Hat, Debian, and SUSE offer virtually no controls once the installer is running. Moreover, modifying the installer to extend its functionality turns out to be anything but trivial.
One Solution, Five Components
To achieve these goals, the Tinkerbell developers follow virtually all the precepts of modern software architecture. Under the hood, Tinkerbell comprises five components built along microservices lines; it thus has a separate service on board for each specific task (Figure 1).
Tinkerbell does not rely on existing components; rather, it was written from scratch. Consequently, the developers also implemented their own services for basic protocols such as DHCP and TFTP. Experienced administrators will instinctively react to this with some skepticism: after all, new wheels are rarely, if ever, rounder than their predecessors. Is Tinkerbell the big exception?
An answer to this question requires a closer look at Tinkerbell's architecture. The authors of the solution distinguish between two instances: the Provisioner and the Worker. The Provisioner contains all the logic for controlling Tinkerbell; the Worker translates that logic into work steps for individual machines, which it then executes locally.
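To make this division of labor more tangible, the short Go sketch below models it in a deliberately simplified way; all type and function names are invented for illustration and do not come from Tinkerbell's code base, which handles this exchange between the two instances over the network:

```go
package main

import "fmt"

// Step is one unit of work destined for a single machine. The type and
// field names here are invented for illustration only.
type Step struct {
	Name string
}

// Provisioner plays the central role: it knows the plan for every machine.
type Provisioner struct {
	plans map[string][]Step // keyed by a machine identifier such as a MAC address
}

// PlanFor hands a Worker the steps intended for its machine.
func (p *Provisioner) PlanFor(machineID string) []Step {
	return p.plans[machineID]
}

// Worker runs next to (or on) a single machine and executes locally
// whatever the Provisioner has planned for it.
func Worker(machineID string, p *Provisioner) {
	for _, step := range p.PlanFor(machineID) {
		fmt.Printf("[%s] executing %s\n", machineID, step.Name)
	}
}

func main() {
	p := &Provisioner{plans: map[string][]Step{
		"b8:27:eb:00:00:01": {{Name: "inventory"}, {Name: "install-os"}},
	}}
	Worker("b8:27:eb:00:00:01", p)
}
```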
Tink as a Centralized Tool
Anyone who has ever dealt with applications built as microservices will most likely be familiar with what "workflow engine" means in this context. Many recent programs rely on workflows, which define individual work steps and specify the order in which they need to be completed.
In a bare metal context, for example, a workflow might have a fresh server first boot into a PXE environment over DHCP, then receive the kernel and RAM disk for a system inventory, and finally perform the installation. At the transition from one phase to the next (i.e., from one element of the workflow to the next), the server reports its progress directly to the workflow engine, which can then take corrective action if necessary and cancel or extend processes.
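The following Go sketch illustrates this feedback loop with made-up phase names; it makes no claim to match Tinkerbell's internal data model, but it shows where the report after each phase gives the engine a chance to step in:

```go
package main

import (
	"errors"
	"fmt"
)

// Phase is one element of a workflow; Run stands in for the real work
// (PXE boot, hardware inventory, OS installation).
type Phase struct {
	Name string
	Run  func() error
}

// runWorkflow walks through the phases in order. The status report after
// each phase is the hook a workflow engine uses to cancel, retry, or
// extend the process; this sketch simply aborts on the first error.
func runWorkflow(machine string, phases []Phase) {
	for i, ph := range phases {
		fmt.Printf("%s: phase %d/%d (%s)\n", machine, i+1, len(phases), ph.Name)
		if err := ph.Run(); err != nil {
			fmt.Printf("%s: aborting workflow, %q reported: %v\n", machine, ph.Name, err)
			return
		}
	}
	fmt.Printf("%s: workflow complete\n", machine)
}

func main() {
	runWorkflow("server-01", []Phase{
		{Name: "pxe-boot", Run: func() error { return nil }},  // DHCP plus PXE into the in-memory environment
		{Name: "inventory", Run: func() error { return nil }}, // collect hardware facts
		{Name: "install-os", Run: func() error { return errors.New("target disk missing") }},
	})
}
```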
Tink, one of the five core components in Tinkerbell, follows exactly this approach. As Tinkerbell's workflow engine, it acts as the solution's control center. You communicate with Tink over its command-line interface (CLI), which is also how you feed templates into it. A template contains the instructions to be applied to a specific piece of hardware (e.g., a server), or, if you prefer, the workflow that the server runs through in Tinkerbell. Moreover, Tink contains the database listing the machines Tinkerbell can handle.
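Templates themselves are written declaratively (classically as YAML files that you push through the tink CLI). Purely to illustrate the kind of information such a template bundles, here is a rough Go model of it; the field names and image paths are assumptions for this sketch, not Tink's actual schema:

```go
package main

import "fmt"

// ActionSpec describes one work step, backed by a container image.
// All names below are illustrative only.
type ActionSpec struct {
	Name        string
	Image       string            // container that performs the step
	TimeoutSecs int               // abort the step if it takes longer
	Environment map[string]string // parameters handed to the container
}

// Template ties an ordered list of actions to a piece of hardware.
type Template struct {
	Name    string
	Actions []ActionSpec
}

func main() {
	t := Template{
		Name: "ubuntu-base-install",
		Actions: []ActionSpec{
			{
				Name:        "write-os-image",
				Image:       "registry.example/actions/image2disk:v1", // hypothetical image
				TimeoutSecs: 600,
				Environment: map[string]string{"DEST_DISK": "/dev/sda"},
			},
			{
				Name:        "configure-network",
				Image:       "registry.example/actions/netconfig:v1", // hypothetical image
				TimeoutSecs: 120,
			},
		},
	}
	fmt.Printf("template %q with %d actions\n", t.Name, len(t.Actions))
}
```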
Furthermore, Tink includes a container registry, which will become important later on. All the work that Tink does on the target systems takes the form of containers. On the one hand, this lets you define your own work steps and store them as generic containers; on the other hand, it makes the standard container images of the major distributions usable, even if that requires a small detour.
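Because every work step is just a container, a custom step can be as small as a program that reads its parameters from the environment and signals success or failure through its exit code. The following sketch assumes a hypothetical DEST_DISK parameter and is not one of Tinkerbell's stock actions:

```go
// A minimal custom work step, meant to be packaged as a container image
// and referenced from a template. Parameters arrive as environment
// variables; a non-zero exit code tells the engine the step failed.
package main

import (
	"fmt"
	"os"
)

func main() {
	disk := os.Getenv("DEST_DISK") // hypothetical parameter set in the template
	if disk == "" {
		fmt.Fprintln(os.Stderr, "DEST_DISK not set")
		os.Exit(1)
	}

	// A real step would now partition the disk, stream an image onto it,
	// or perform whatever task the workflow assigns to it.
	fmt.Printf("pretending to prepare %s\n", disk)
}
```

Built into an image and pushed to Tink's registry, such a program can then be referenced from a template like any other work step.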