Link aggregation with kernel bonding and the Team daemon

Battle of Generations

Availability

At the outset, admins will ask themselves what type of bonding they can get running on their systems with as little overhead as possible. The kernel bonding driver clearly has a head start here, because it is part of Linux and therefore automatically included in every current Linux distribution. Additionally, it is directly integrated into the network administration of the respective systems and can be configured easily with tools such as YaST or Kickstart.

On desktops, this argument is less convincing, because popular distributions such as Fedora and openSUSE include libteam packages, and it matters little there if the network configuration occasionally has to be adjusted manually.

Servers, the natural habitat of the bonding driver, typically do not run Fedora or openSUSE, but rather RHEL, CentOS, SLES, or some other enterprise distribution. These usually ship with more or less recent libteam packages, but the setup is not as smooth as configuring bonding interfaces with the system tools.

The points for convenience therefore clearly go to the classic kernel-based bonding driver (Figure 2), although on RHEL systems, at least, configuring libteam with the system tools is no problem, because Red Hat invented it (Figure 3).

Figure 2: The bonding driver can be configured directly with ifenslave on almost any Linux system.
Figure 3: Fedora and Red Hat systems also have a graphical configuration option for libteam.
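On RHEL-family systems, for example, NetworkManager's nmcli can create a Team device directly. A minimal sketch, assuming two NICs named eth1 and eth2 (the device and port names are illustrative):

    # create a team device running the activebackup runner
    nmcli connection add type team con-name team0 ifname team0 \
      config '{"runner": {"name": "activebackup"}}'
    # attach two physical NICs as team ports
    nmcli connection add type team-slave ifname eth1 master team0
    nmcli connection add type team-slave ifname eth2 master team0
    # activate the team
    nmcli connection up team0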

libteam in User Space

In the second round of tests, the architecture of the two candidates is put to the test. How are the bonding driver and libteam designed under the hood, and does either design have obvious advantages over the other? Libteam goes first in this round for a change.

Libteam is published on GitHub and is available under the GNU LGPL; thus, it meets the accepted criteria for free software. The project is divided into three components: the libteam library, which provides the various functions; the Team daemon, teamd; and the Team network driver. The driver has been part of the Linux kernel since version 3.3 and, like the bonding driver, can therefore be found on any recent Linux system. Additionally, teamd must be running on any system that uses Team-flavored bonding.

The Team daemon and libteam do without kernel functions to the extent possible and live almost exclusively in user space. The kernel module's only task is to receive and send packets as quickly as possible, a job that is handled most efficiently in the kernel. This strict focus on userland is a deliberate architectural decision by libteam author Jiří Pírko to keep as much as possible out of kernel space, not an inability to engage in kernel development, as his various kernel patches in recent years show.

The modular design centers on teamd, with libteam backing it up and handling the Netlink communication with the Team module in the kernel. The teaming functionality itself (i.e., the various standard bonding modes) is implemented by teamd in the form of "runners": units of code compiled into the Team daemon, from which one is chosen when an instance of the daemon is created (Figure 4).

Figure 4: In principle, libteam works much like bonding.

The broadcast runner implements a mode wherein Linux sends all packets over all available ports. If you load the roundrobin runner instead, you get load balancing based on the round-robin principle. The activebackup runner implements classic failover bonding, for when a network component dies on you.
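In practice, the runner is selected in teamd's small JSON configuration file. A minimal sketch for the activebackup runner; the file path and the NIC names eth1 and eth2 are assumptions:

    # write the teamd configuration (path and NIC names are assumptions)
    cat > /etc/teamd/team0.conf <<'EOF'
    {
      "device": "team0",
      "runner": { "name": "activebackup" },
      "link_watch": { "name": "ethtool" },
      "ports": {
        "eth1": { "prio": 100 },
        "eth2": { "prio": 50 }
      }
    }
    EOF

The link_watch section tells teamd how to monitor link state on the ports; the ethtool watcher is the simplest choice.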

The great flexibility of teamd in userland is an advantage: If a company wants to use a bonding mode that teamd does not yet support, it can write the corresponding runner itself with relatively little effort against the Team Netlink API. The author explicitly encourages users to do so, stating that teamd is a community product under active joint development.
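Creating an instance of the daemon from such a configuration, and inspecting the runner at work, takes two commands; a sketch, assuming the config file from above:

    # start a daemon instance for team0 from the JSON configuration
    teamd -d -f /etc/teamd/team0.conf
    # query the runtime state: active runner, ports, link status
    teamdctl team0 state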

Bonding in Contrast

The bonding driver takes the exact opposite approach to libteam. For more than 20 years, bonding.ko has been part of the Linux kernel, and all of the driver's functionality is implemented in kernel space. Only the configuration of bonding devices happens in user space, with ifenslave: the program communicates with the Linux kernel to inform it of the bonding configuration the admin intends to use for the system.
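A sketch of this classic procedure, assuming two NICs named eth1 and eth2:

    # load the driver; mode and link monitoring are module parameters
    modprobe bonding mode=active-backup miimon=100
    # bring up the master device the module created
    ip link set bond0 up
    # enslave the physical interfaces
    ifenslave bond0 eth1 eth2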

With good reason, bonding.ko can therefore be described as a classic implementation of a Linux kernel driver from the old days, true to the motto: Let the kernel do whatever it can do.

From an admin's point of view, this can be a real disadvantage: If you want a feature that the bonding driver simply doesn't offer, in the worst case you will have to start tweaking the kernel itself. The kernel bonding driver has no module API like the Team daemon's, so the points for elegance of architecture and expandability of design clearly go to teamd. Score: 1 to 1.
