Link aggregation with kernel bonding and the Team daemon
Battle of Generations
No matter how many network interfaces a server has or how the admin combines them, hardware alone is not enough. To turn multiple network cards into a true failover solution or a fat data pipe, the host's operating system must also know that the respective devices work in tandem. If you are confronted with this problem on Linux, you usually reach for the bonding driver in the kernel; however, an alternative is libteam in user space, which offers a number of additional functions and features.
A Need for NICs
Many people who see a data center from the inside for the first time are surprised by the large number of LAN ports on an average server (Figure 1). Virtually no server can manage with just one network interface, for several reasons:
- The owner might use several network interfaces to increase redundancy. In such a setup, the server is connected to separate switches by different network ports, so that the failure of one network device does not take down the server.
- The admin might want to bundle ports to double the bandwidth.
- The network card itself might fail, or even the PCI bus into which it is inserted.
Although redundancy and performance can be combined, doing so requires at least four network ports, and to account for Murphy's Law, two of the ports should reside on a different network card than the other two.
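Whether two ports really do sit on different cards is easy to verify from the shell, because each device in /sys/class/net links back to the PCI device it lives on. A minimal sketch, assuming the placeholder interface names eth0 through eth3 (substitute your own):

```bash
# Show the PCI device behind each network interface. Ports on the
# same card typically differ only in the PCI function number
# (the digit after the final dot), e.g., 0000:03:00.0 and .1.
for nic in eth0 eth1 eth2 eth3; do
  printf '%s -> %s\n' "$nic" "$(readlink "/sys/class/net/$nic/device")"
done
```

If all four lines point at the same bus and slot, the ports share one card, and a single hardware failure takes down the whole bundle.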
To alert a Linux operating system of multiple network cards working in tandem, you would usually use the bonding driver in the kernel and manage it with ifenslave at the command line. Most Linux distributions have full-fledged ifenslave integration that is controlled directly through the network configuration of the respective system. Because of this integration, many admins are not even aware of alternatives to classical link aggregation under Linux.
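To give a flavor of this classic approach, here is a minimal, hand-rolled sketch of an active-backup bond; the interface names eth0 and eth1 and the IP address are placeholders, and in practice the distribution's network configuration would persist the equivalent settings:

```bash
# Load the bonding driver in active-backup mode with MII link
# monitoring every 100ms; loading the module also creates bond0.
modprobe bonding mode=active-backup miimon=100

# Slave interfaces must be down before they can be enslaved.
ip link set eth0 down
ip link set eth1 down
ifenslave bond0 eth0 eth1

# Address and activate the bond itself.
ip addr add 192.0.2.10/24 dev bond0
ip link set bond0 up
```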
Alternatives
One alternative is libteam [1], which also interconnects network cards on Linux systems. In contrast to the ifenslave-based link aggregation approach, libteam resides almost entirely in user space and needs only minimal kernel support. Its main developer, Jiří Pirko, describes it as a lightweight alternative to the heavyweight legacy code in the Linux kernel, although legacy and inflexible do not necessarily mean bad and slow.
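For comparison, a minimal sketch of the same active-backup setup with teamd follows; the JSON format is documented in teamd.conf(5), and the port names and address are again placeholders:

```bash
# Describe the team: an active-backup runner with ethtool-based
# link state monitoring on two ports.
cat > /etc/teamd/team0.conf <<'EOF'
{
  "device": "team0",
  "runner": { "name": "activebackup" },
  "link_watch": { "name": "ethtool" },
  "ports": { "eth0": {}, "eth1": {} }
}
EOF

# Ports must be down before teamd can add them to the team.
ip link set eth0 down
ip link set eth1 down

# Start the daemon in the background (-d); it creates team0.
teamd -d -f /etc/teamd/team0.conf

ip addr add 192.0.2.10/24 dev team0
ip link set team0 up
```

Swapping the runner name (e.g., to lacp or loadbalance) is all it takes to change the aggregation strategy, which gives a first taste of libteam's flexibility.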
Now is the time, therefore, to let the two opponents battle it out directly: not so much in terms of raw performance, which was very much comparable in the test and at the limit of what the hardware was able to deliver, but in terms of which functions the two approaches support and where they differ.
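One difference is visible before any benchmark runs: how the two report their state. The kernel driver exposes its status through procfs, whereas teamd brings along its own control tool (the device names below match the sketches above):

```bash
# Kernel bonding: current mode, slaves, and link states in procfs.
cat /proc/net/bonding/bond0

# libteam: query the running daemon instead.
teamdctl team0 state
```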
Nomenclature
In the Linux context, admins usually refer to bonding when they mean link aggregation of multiple network interfaces for better performance or greater redundancy. However, the term "teaming" also crops up in various documents. Sometimes it appears to be synonymous with bonding; on other occasions you will find detailed explanations of how bonding and teaming differ (e.g., in the Red Hat world) [2]. Still others consider the terms merely descriptions of the same technology in the context of different operating systems.
In this article, I use bonding and teaming as synonyms for any kind of link aggregation, so wherever one of the terms appears, the other would have served equally well.