Managing network connections in container environments
Switchboard
Certificate Management
Traefik Proxy implements what many admins might consider a killer feature: It has a built-in client for the ACME protocol and can therefore handle communication with certificate authorities such as Let's Encrypt.
Certificate handling is usually very unpopular with admins, because running your own certificate management setup is tedious and time consuming. Even creating a certificate signing request (CSR) regularly forces admins to delve into the depths of the OpenSSL command-line parameters. Automatically requesting SSL certificates from Let's Encrypt not only relieves the admin of this tedious work but also ensures that expired certificates are no longer a source of problems. Traefik Proxy talks to Let's Encrypt on demand and fetches official certificates for the public endpoints of services running in pods.
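What this looks like in practice is quickly sketched. The following is a minimal excerpt from a Traefik v2 static configuration that enables an ACME certificate resolver; the resolver name, contact address, and storage path are placeholders you would adapt to your environment:

  entryPoints:
    websecure:
      address: ":443"

  certificatesResolvers:
    letsencrypt:                    # arbitrary resolver name
      acme:
        email: admin@example.com    # placeholder contact address for Let's Encrypt
        storage: /data/acme.json    # where Traefik caches issued certificates
        tlsChallenge: {}            # solve the ACME challenge with TLS-ALPN-01

Any router that references this resolver (certResolver: letsencrypt) in its dynamic configuration then gets its certificate requested, installed, and renewed automatically.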
A robust reverse proxy and load balancer that can be controlled from within Kubernetes is a valuable asset in its own right, but the world has become more complex in recent years, especially in the container context, and microservices are a major reason why a plain vanilla reverse proxy and load balancer are no longer all you need. Microservices make an application highly flexible on the one hand but difficult to operate on the other: Depending on the load and its own configuration, Kubernetes launches an arbitrary number of instances of certain services or makes them disappear again ad hoc (e.g., if the load is currently light and the resources are needed to handle other tasks).
Therefore, it is enormously difficult to keep track of all the paths for inter-instance communication or to configure those paths – in fact, it's impossible. Hand-coding all instances of service A to talk to all known instances of service B is not feasible in practice with any reasonable amount of effort – not to mention that most components of microservice applications today communicate with each other by REST or gRPC, both of which are based on HTTP at their core or at least make heavy use of it. However, HTTP does not provide a way to specify multiple possible destinations for a connection.
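This is exactly the gap that Traefik Proxy's Kubernetes integration fills at the ingress level: Instead of hard-coding destinations, you declare a route against a Kubernetes Service, and Traefik tracks the endpoints behind it as Kubernetes scales them up and down. A sketch using Traefik's IngressRoute custom resource (the hostname and service name are examples; the apiVersion shown is the one used by Traefik v2):

  apiVersion: traefik.containo.us/v1alpha1
  kind: IngressRoute
  metadata:
    name: service-a
  spec:
    entryPoints:
      - websecure
    routes:
      - match: Host(`a.example.com`)   # example hostname
        kind: Rule
        services:
          - name: service-a            # Kubernetes Service; Traefik follows its endpoints
            port: 80

That solves the problem for traffic entering the cluster, but not for the traffic flowing between the services themselves.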
Meshes as Brokers
This kind of routing can hardly be implemented in a meaningful way at the application level itself. In the wake of Kubernetes and its ilk, mesh solutions have therefore enjoyed great popularity for some time. They interpose themselves between the instances of all services of a microservices application and act as a kind of broker: They automatically register service instances as they appear and disappear, forward new connections to them, and thus establish an end-to-end communication network.
Not wanting to miss out on market developments, Traefik developers expanded the functionality of their solution and put together a product named Traefik Mesh (Figure 3). The core of Traefik Mesh is still Traefik Proxy, but the technical foundation of this solution is fundamentally different from the approach that some admins may be familiar with from Istio and others.
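Instead of injecting anything into pods, Traefik Mesh is opted into per service. As a rough sketch – assuming the annotation names and DNS scheme from the Traefik Mesh documentation – a service is configured through annotations on the normal Kubernetes Service object:

  apiVersion: v1
  kind: Service
  metadata:
    name: service-b
    namespace: default
    annotations:
      mesh.traefik.io/traffic-type: "http"   # assumed annotation: handle this service as HTTP in the mesh
  spec:
    selector:
      app: service-b
    ports:
      - port: 8080

A client that wants its traffic to go through the mesh then calls http://service-b.default.traefik.mesh:8080 instead of the usual cluster DNS name; traffic that uses the normal Service name simply bypasses the mesh.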
Resource-Intensive Sidecars
The architecture of solutions like Istio is often described as a sidecar architecture because it places a proxy server alongside each service, directly in the pod; the proxy component therefore has to be part of the pod definition. To use Istio in this way, the admin or developer has to make Istio part of every pod at the Kubernetes level.
In such a construct, the various instances of microservices no longer talk to each other directly; instead, each instance has its own proxy that can dynamically forward incoming connections to its own or another instance. Traefik calls this design "invasive" precisely because it requires every definition at the pod level to contain the components necessary for Istio.
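What "invasive" means in practice: With Istio, the usual route is to label a namespace so that Istio's admission webhook injects the proxy container into every pod created there – the effective pod definitions change, even if you never edit them by hand. For example (namespace name chosen arbitrarily):

  apiVersion: v1
  kind: Namespace
  metadata:
    name: production              # example namespace
    labels:
      istio-injection: enabled    # Istio's webhook adds an istio-proxy sidecar to every new pod here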
This approach undoubtedly has technical advantages, such as comprehensive mutual transport layer security (mTLS) encryption between all instances of all apps. Nevertheless, Traefik Mesh sets out to fix a problem the sidecar model brings with it: the resource consumption of the proxy servers that act as sidecars in the pods. Twenty sidecar instances of the proxy alongside 20 instances of different apps in a microservice environment logically cause significant overhead.
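To put a rough number on it: Istio's documented default resource requests for the injected sidecar are on the order of 100m CPU and 128Mi of memory (the exact values vary between releases). The fragment below shows where those requests end up in an injected pod spec and what they add up to across 20 pods:

  containers:
    - name: istio-proxy           # injected sidecar, one per pod
      resources:
        requests:
          cpu: 100m               # x 20 pods = 2 full CPU cores reserved for proxies alone
          memory: 128Mi           # x 20 pods = 2.5GiB of memory reserved

Traefik Mesh avoids this by not running a proxy per pod in the first place, which is where its fundamentally different technical foundation pays off.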