Managing network connections in container environments
Switchboard
Traefik wants to make "networking boring … with [a] cloud-native networking stack that just works" [1]. The program aims to make working in DevOps environments easy and enjoyable and ultimately ensure that admins who are not totally network-savvy are happy to deal with DevOps and related areas. Traditionally, this has not been the case. Many administrators who grew up in the classic silo thinking of IT in the early 2000s see networks (and often storage as well) as the devil's work.
Traefik counters this: It aims to make software-defined networking (SDN) approaches in container environments obsolete by replacing them with simple technology. Traefik acts as a reverse proxy and load balancer, but also as a mesh network. I put the entire construct to the test and look into applications where Traefik might be of interest.
Complexity
No matter how much the container apologists of modern IT try to convince admins, virtualized systems based on Kubernetes and the like are naturally significantly more complex than their less modern predecessors. This complexity becomes completely clear the moment you compare the number of components in a conventional environment with that in a Kubernetes installation.
Web server setups follow a simple structure: a load balancer and a database, each redundant in some way, and then several application servers to which the load balancers point. The job is done. Virtualized container environments with Kubernetes and others also contain these components but additionally have virtualization and orchestration components weighing them down.
Many virtualized services comprise components typical of a cloud-ready architecture: dozens of fragments and microservices that need to communicate with each other in the background, even if they are running on different nodes in far-flung server farms and rely on SDN to do so.
DevOps also takes on a new meaning in such environments. Even if you wanted to implement the classic segregation into specialized teams that take care of network, applications, and operations, it would simply be impossible: The individual layers are too interlocked and too difficult to disentangle.
Therefore, a category of programs has emerged that aims to make shouldering this complexity easier for admins and developers alike: They set up a dynamic load balancer environment that encapsulates traffic between the individual microcomponents, retrofitting secure sockets layer (SSL) encryption and endpoint monitoring on the fly.
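To make this concrete, the following sketch shows what such a dynamic setup might look like in Traefik's static configuration: two entry points (HTTP and HTTPS) and an ACME certificate resolver that retrofits TLS certificates on the fly. The e-mail address and file paths are placeholders, not values from this article.

```yaml
# traefik.yml (static configuration) -- a minimal sketch, not a full setup.
# Entry points define the ports on which Traefik accepts traffic.
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"

# An ACME (Let's Encrypt) resolver lets Traefik obtain and renew
# certificates automatically as services appear.
certificatesResolvers:
  letsencrypt:
    acme:
      email: admin@example.com      # placeholder address
      storage: /etc/traefik/acme.json
      httpChallenge:
        entryPoint: web
```

Routes and services are then supplied dynamically by a provider (e.g., Docker labels or Kubernetes resources), which is precisely what makes the environment adapt without manual reconfiguration.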
In addition to the Istio service mesh, various alternatives can be found: Consul, originally a service discovery and configuration tool for distributed environments, now offers mesh functionality, as does Linkerd (Figure 1). Mesh networks, then, are a cake from which many companies want a slice (and maybe even a large one). Before continuing, however, it is helpful to answer a few very basic questions, especially about terminology.
Clarifying Terms
The component that is now known as Traefik Proxy [2] did not always go by that name. Initially, it was the only component of the Traefik project; the project and product were both simply named Traefik. However, when the feature set was expanded to include a mesh solution along the lines of Istio, Traefik became Traefik Proxy, and the mesh solution was dubbed Traefik Mesh [3].
Today, the vendor has two more products in its lineup: Traefik Pilot is a graphical dashboard that displays connections from Traefik Mesh and Traefik Proxy in a single application, and Traefik Enterprise – not a technical product in the strict sense – is a combination of the three software products with additional features and support. In this article, I focus on Traefik Proxy and Traefik Mesh, but the Traefik dashboard is also discussed.
If you think you now know your way around Traefik terms, you haven't reckoned with the creativity of the Traefik developers, because the proxy, according to its authors, is far more than a mere proxy. Although the company does not want to rename the product, Traefik is said to be more of an edge router that can be used as a reverse proxy or load balancer. What the differences between an edge router, a reverse proxy, and a load balancer are supposed to be is something the developers leave you to guess at with the help of colorful pictures. It's high time, then, to get to the bottom of the issue.
Traefik and Kubernetes
The original idea behind Traefik (Figure 2) was to develop a reverse proxy server that would work particularly well in tandem with container orchestrators like Kubernetes. In this specific case, "particularly well" means that the proxy server at the Kubernetes level should be a first-class citizen (i.e., a resource that can be operated by the familiar Kubernetes APIs).
Although a given now, the Kubernetes API was not as functional in the early Kubernetes years as it is today, and the entire ecosystem surrounding Kubernetes was not as well developed. Admins often relied on DIY solutions. If you needed a highly available proxy, for example, you built a pod with an HAProxy container that had the appropriate configuration installed by some kind of automation mechanism. This arrangement worked in many cases but was unsatisfactory in terms of controllability.
Therefore, the Traefik developers took a different approach to their work right from the outset. The idea was for Traefik to be a native Kubernetes resource and, like the other resources in Kubernetes, be manageable through its APIs. Even before the release of a formal version 1.0, the developers achieved this goal, and Traefik was one of the first reverse proxies on the market with this feature set.
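What "native Kubernetes resource" means in practice is that Traefik defines its own custom resource types, which you manage with the usual Kubernetes tooling. The following sketch of an IngressRoute object assumes a backend service named whoami and the placeholder hostname whoami.example.com; note that the API group has changed between Traefik versions (older releases use traefik.containo.us/v1alpha1).

```yaml
# A Traefik IngressRoute, applied like any other Kubernetes object
# with kubectl apply -f. Hostname and service name are placeholders.
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
  namespace: default
spec:
  entryPoints:
    - websecure                     # TLS entry point defined in Traefik's static config
  routes:
    - match: Host(`whoami.example.com`)
      kind: Rule
      services:
        - name: whoami              # existing Kubernetes Service
          port: 80
```

Because the route is an ordinary API object, it shows up in kubectl, can be templated by Helm, and is reconciled by Traefik automatically, which is exactly the controllability the DIY HAProxy pods lacked.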