Flexible software routing with open source FRR
Special Delivery
In the past, many network administrators relied on expensive, preconfigured appliances for routing. Although these appliances served as a reliable solution for a long time, they are no longer a good fit for more flexible scenarios. In highly virtualized server environments, which are the norm today, dedicated hardware appliances make little sense.
New approaches, such as network function virtualization (NFV) or software-defined networking (SDN), decouple networks from the hardware. Routing therefore needs to support integration into a service chain (i.e., the concatenation of the required services, alongside other functions such as firewalling or intrusion detection and prevention).
Moreover, learning in a test environment how routing protocols work on real hardware routers, or recreating specific behaviors, used to take a large amount of space, power, and money. Network administrators who wanted to test their implementations for vulnerabilities had to build elaborate hardware environments or write their own routing stacks.
In this article, I look at an open routing stack provided by the open source project Free Range Routing [1], usually known as FRRouting or FRR.
Open Source Routing Stack Remedy
An open source routing stack can be an alternative to classic routers. Admittedly, it cannot directly match the performance optimizations of conventional monolithic architectures, with their specialized application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs) for hardware-optimized packet forwarding. In many constellations, however, the question is not so much forwarding performance as the functionality of the control plane, as when validating a network design. The control plane comprises the functions used to control a network – Spanning Tree on Layer 2, for example, and the corresponding routing protocols on Layer 3.
However, technologies such as the Data Plane Development Kit (DPDK) [2], developed by Intel in 2010 for performance optimization and now under the auspices of the Linux Foundation, allow direct access to physical resources without placing too much strain on the remaining resources, such as the CPU.
Single-root I/O virtualization (SR-IOV) can also provide the necessary flexibility in virtualized environments by letting multiple virtualized network functions access a native hardware resource. In combination with the currently much-hyped SmartNICs (network adapter cards with programmable accelerators and Ethernet), optimized packet forwarding could usher in a new network architecture with an open routing stack. The idea is to offload network functions from the host to specialized network cards – encryption for virtual private networks (VPNs), deep packet inspection for next-generation firewalls, or routing tasks, for example – which makes SmartNICs interesting for software routers.
Free Range Routing
FRR emerged as a fork of the Quagga project [3]. Quagga has been known to some administrators for years as a component of other open source projects, such as pfSense [4], into which FRR can now also be integrated. Quagga also serves as the routing substructure of Sophos's unified threat management software. Quagga itself was created in 2002 as a fork of the Zebra project, which is no longer maintained.
The fork from Quagga to FRR came about because of Quagga's large backlog of patches and slow evolution. FRRouting has a four-month release cycle. The project is currently under the care of the Linux Foundation, to which it was handed over in April 2017, and many organizations actively support the development work, including VMware, Orange, Internet Systems Consortium (ISC), Nvidia, and Cumulus Networks. The code is licensed under the GPLv2+.
FRR's routing stack can adapt very flexibly to different environments, as shown by its many implementations, which include not only the previously mentioned open source firewall system pfSense, but also its fork OPNsense and the complete Network Operating System (NOS) VyOS. Furthermore, the routing functionality of data center switches can also be implemented with FRR, as demonstrated by the integration of the routing stack in switches from Cumulus Networks. Note that FRR is only responsible for the control plane; the decision about forwarding IP packets is made by the kernel of the underlying operating system.
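You can observe this division of labor on any Linux system running FRR: The routes that the protocol daemons learn show up both in FRR's own routing table and, once Zebra has installed them, in the kernel's forwarding table, where the standard Linux tools display them. A minimal sketch, assuming FRR's vtysh shell and the iproute2 tools are installed:

# Routes as seen by the FRR control plane
vtysh -c "show ip route"

# The same routes after installation in the Linux kernel's forwarding table;
# depending on the version, FRR tags them with the originating protocol
ip route show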
Flexible Architecture
Before I go into the individual features, I will first take a look at the architecture. FRR is written in C, with individual additions in Python. Beyond that, FRR differs fundamentally from classic network operating systems. As I pointed out at the beginning of this article, the software architecture of classic routers is monolithic, which can be attributed to the scarce resources available at the time. On such systems, all processes for the dynamic routing protocols are activated out of the box, which generates unnecessary load, opens up attack vectors, and increases complexity. Moreover, when redistributing routes from one protocol to another, each routing process communicates directly with the other dynamic routing protocol: a different interface must be known and documented for each routing protocol, and the programming overhead grows with each additional protocol.
FRR solves this problem more elegantly than the software architecture of classic routers. To do so, it introduces a central, protocol-independent mediator daemon (Figure 1) named Zebra. A dynamic routing protocol, such as the Border Gateway Protocol (BGP), binds to this daemon with the Zebra API (ZAPI). Together with the dynamic routing protocol daemons, the Zebra daemon forms the control plane. Packet forwarding itself, however, is handled by the kernel of the underlying operating system.
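To see which protocol daemons are currently bound to Zebra over ZAPI, you can query the Zebra daemon itself. A small sketch – the exact command and its output vary between FRR versions:

vtysh -c "show zebra client summary"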
The routes from the Zebra process now have to be transferred from this userspace process to the kernel of the operating system through a socket-based interface known as the Netlink bus. The interface has a function for adding new routes (RTM_NEWROUTE), as can be seen in Figure 2, but it can also signal routes newly added in the kernel back to the Zebra process. The BGP daemon (bgpd) passes a route to the Zebra daemon through the Zebra API (ZEBRA_ROUTE_ADD). The Zebra daemon then uses the Netlink function RTM_NEWROUTE to pass the new route to the kernel, which confirms the installation afterward.
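You can watch this Netlink traffic from userspace: the ip monitor command subscribes to the same route messages (RTM_NEWROUTE, RTM_DELROUTE) that Zebra and the kernel exchange. The following sketch uses an invented example prefix and next hop and assumes the staticd daemon is enabled (required for static routes in recent FRR versions):

# Terminal 1: listen for route messages arriving over the Netlink bus
ip monitor route

# Terminal 2: inject a static route through Zebra; it appears in terminal 1
vtysh -c "conf t" -c "ip route 203.0.113.0/24 192.0.2.1"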
This architecture, relying on the middleman process, facilitates the integration of new routing protocols because there is one uniform interface (ZAPI). When redistributing from routing process A to B, routing process A hands its routes to the Zebra process by way of the Zebra API, and the Zebra process passes them on to routing process B. Errors and crashes in one protocol do not necessarily affect other daemons, which improves overall availability.
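Thanks to this uniform interface, redistribution boils down to a single configuration statement per target protocol. The following vtysh snippet, a sketch with an invented AS number, redistributes OSPF-learned routes into BGP:

router bgp 65000
 address-family ipv4 unicast
  redistribute ospf
 exit-address-family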
To use a dynamic routing protocol, you need to enable it in /etc/frr/daemons. Only the Zebra daemon and the watchfrr daemon, which detects faulty daemons and restarts them if necessary, are enabled after installation. All other protocol daemons must be switched on in /etc/frr/daemons.
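A typical /etc/frr/daemons file then looks something like the following excerpt; the exact set of entries varies with the FRR version, and newer releases start zebra and watchfrr implicitly without listing them. After editing the file, restart the FRR service:

# /etc/frr/daemons (excerpt): enable BGP and OSPF, leave the rest disabled
bgpd=yes
ospfd=yes
ospf6d=no
ripd=no
ripngd=no
isisd=no
pimd=no

# apply the change (on systemd-based package installations)
systemctl restart frr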