Link aggregation with kernel bonding and the Team daemon
Battle of Generations
Home Cooking
The bonding driver, on the other hand, is a bit more modest when it comes to basic functionality. To avoid creating a false impression: bonding.ko supports the basic features for operating bonding setups just like teamd does, and with comparable quality. If you need LACP or bonding with broadcast, round robin, or load balancing in active or passive mode, you will be just as happy with the bonding driver as with teamd.
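If you want to try this with plain kernel bonding, you do not even need extra tooling, because the driver exposes everything through sysfs. The following Python sketch sets up an 802.3ad (LACP) bond that way; it assumes the bonding module is already loaded, that it runs as root, and that eth0 and eth1 are merely placeholder interface names.

#!/usr/bin/env python3
"""Minimal sketch: create an LACP (802.3ad) bond through the bonding
driver's sysfs interface. Assumes the bonding module is loaded, the
script runs as root, and eth0/eth1 are placeholder slave names."""

from pathlib import Path

BOND = "bond0"
SLAVES = ["eth0", "eth1"]          # placeholders; adjust to real NICs

def write(path, value):
    Path(path).write_text(value)

# Create the bond device and select the 802.3ad (LACP) mode while it
# still has no slaves and is down.
write("/sys/class/net/bonding_masters", f"+{BOND}")
write(f"/sys/class/net/{BOND}/bonding/mode", "802.3ad")
write(f"/sys/class/net/{BOND}/bonding/miimon", "100")  # link check every 100ms

# Enslave the physical ports (they must be down at this point).
for slave in SLAVES:
    write(f"/sys/class/net/{BOND}/bonding/slaves", f"+{slave}")

print(Path(f"/sys/class/net/{BOND}/bonding/mode").read_text().strip())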
Especially when sending data, bonding à la teamd relies on a hash function to determine the correct outgoing device. The admin cannot influence this hash function when using bonding.ko, however, because it is not implemented in BPF but buried deep inside the bonding driver. LACP for round-robin bonding is also on the list of things the admin has to do without when using the kernel driver.
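To illustrate the principle rather than the drivers' actual code, the following sketch shows what such a transmit hash boils down to: A few header fields are hashed, and the result modulo the number of aggregated ports selects the outgoing device, so packets of one flow always leave through the same link. The field selection and the CRC32 hash here are simplified assumptions.

"""Illustrative sketch of hash-based transmit port selection: hash a few
header fields, then take the result modulo the number of aggregated
ports. The field choice and the hash are simplified assumptions, not
the code of bonding.ko or teamd."""

import zlib

PORTS = ["eth0", "eth1", "eth2"]   # aggregated ports (placeholders)

def select_port(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    # A layer 3+4 style key: the same flow always maps to the same port,
    # so packets of one TCP connection are not reordered across links.
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return PORTS[zlib.crc32(key) % len(PORTS)]

if __name__ == "__main__":
    print(select_port("192.0.2.10", "198.51.100.20", 40000, 443))
    print(select_port("192.0.2.10", "198.51.100.20", 40001, 443))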
Whereas kernel bonding lets you define only a single priority for each of the aggregated network ports, teamd offers more differentiated priority configurations and a stickiness setting that works quasi-dynamically. The ability to configure link monitoring per port is also missing from the bonding driver: The settings made there always apply to every port of the entire bond.
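A short sketch shows what this flexibility could look like in practice. It writes a teamd configuration with per-port priorities, a sticky port, and a port-specific link watcher. The option names follow the teamd.conf format as documented, but you should verify them against teamd.conf(5) before use; eth0 and eth1 are again placeholders.

"""Sketch of a teamd configuration using per-port priorities, stickiness,
and a per-port link watcher. Option names follow the teamd.conf format
and should be checked against teamd.conf(5); eth0/eth1 are placeholders."""

import json

config = {
    "device": "team0",
    "runner": {"name": "activebackup"},
    # Default link watcher for ports that do not override it.
    "link_watch": {"name": "ethtool"},
    "ports": {
        "eth0": {"prio": 100, "sticky": True},
        "eth1": {
            "prio": -10,
            # This port gets its own monitoring method.
            "link_watch": {"name": "arp_ping",
                           "source_host": "192.0.2.10",
                           "target_host": "192.0.2.1"},
        },
    },
}

with open("team0.conf", "w") as f:
    json.dump(config, f, indent=2)
# A daemon could then be started with: teamd -f team0.conf -d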
On the other hand, teamd and the bonding driver agree again when it comes to virtual LANs (VLANs): Both solutions include VLAN support and handle it without problems. The same applies to integration with NetworkManager, which handles the network configuration on many systems: Both the kernel bonding driver and teamd can talk to it directly.
If you compare the basic functions of the two solutions, the picture becomes clear: Neither shows real weaknesses in the basics, but as before, libteam adds clever extra functions at various points that bonding.ko simply lacks.
The perfect example is the BPF-based load balancing functionality, which is completely absent from the bonding driver, not least because BPF did not even exist when the bonding driver entered the Linux kernel. In this round, the point again goes to libteam, making the score 3 to 1.
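For a taste of what the BPF-based balancing looks like from the admin's perspective, the following sketch prints a teamd configuration for the loadbalance runner: The tx_hash list names the header fields that go into the hash teamd compiles into a BPF program. Again, the option and field names follow teamd.conf and should be checked against teamd.conf(5).

"""Sketch of a teamd loadbalance configuration: the runner builds a BPF
program from the listed tx_hash fields and uses it to spread outgoing
traffic across the ports. Names follow teamd.conf and should be verified
against teamd.conf(5); eth0/eth1 are placeholders."""

import json

config = {
    "device": "team0",
    "runner": {
        "name": "loadbalance",
        # Header fields that feed the BPF-computed transmit hash.
        "tx_hash": ["eth", "ipv4", "ipv6", "l4"],
        "tx_balancer": {"name": "basic", "balancing_interval": 50},
    },
    "link_watch": {"name": "ethtool"},
    "ports": {"eth0": {}, "eth1": {}},
}

print(json.dumps(config, indent=2))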
Performance
Performance is an exciting aspect when you are comparing different solutions for the same problem. However, the test confirmed exactly what Red Hat, as the driving force behind the Teaming driver, already announced on its own website: teamd and bonding don't differ greatly when it comes to performance.
The performance figures provided by Red Hat [3] suggest that the teamd trunk delivers more throughput and shows this strength particularly with 1KB packets. Really significant differences from the kernel driver do not arise at this point, though. The average latency of the two devices in direct comparison is even identical in Red Hat's stats, which is not unimportant if the server runs an application that is particularly sensitive to latency. In the end, both contenders get one point each: 4 to 2.
Get on D-Bus
In the final contest, it is almost unfair to compare teamd and bonding. Of course, teamd is the newer solution, and various tools that libteam supports natively are unknown to the legacy kernel bonding driver, but the clearly visible attention to detail amazes time and again when using libteam. The Team daemon, for example, connects to the D-Bus system bus, where other D-Bus applications can comprehensively monitor and modify it. The Team daemon offers further connectivity to other services through its own ZeroMQ interface, a RabbitMQ- or Qpid-style message bus that comes without its own server component and behaves more like a library.
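A rough sketch gives an idea of what talking to teamd over D-Bus could look like. The dbus-python calls are standard, but the teamd-specific bus name, object path, interface, and method names used here are assumptions modeled on the commands teamdctl offers, so verify them with a D-Bus browser first; the sketch also assumes teamd is running with its D-Bus interface enabled.

"""Rough sketch of a D-Bus client querying a running teamd instance.
The dbus-python calls are standard; the teamd-specific names below
(BUS_NAME, OBJECT_PATH, IFACE, ConfigDump/StateDump) are assumptions
modeled on teamdctl's commands and must be verified before use."""

import dbus

TEAM_DEV = "team0"
BUS_NAME = f"org.libteam.teamd.{TEAM_DEV}"   # assumed naming scheme
OBJECT_PATH = "/org/libteam/teamd"           # assumed object path
IFACE = "org.libteam.teamd"                  # assumed interface name

bus = dbus.SystemBus()
teamd = dbus.Interface(bus.get_object(BUS_NAME, OBJECT_PATH), IFACE)

# Ask the daemon for its running configuration and current state, the
# same information `teamdctl team0 config dump` and `state dump` show.
print(teamd.ConfigDump())
print(teamd.StateDump())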
The bonding driver does not even compete in this category, so it would be unfair to award another point to teamd.