Software instead of appliances: load balancers compared
Balancing Act
Unexpected: Nginx
Most admins will not have the second candidate in this comparison on their scorecards as a potential load balancer, although they know the program well: Nginx [2] runs in many corporate environments, usually as a high-performance web server. However, Nginx ships with a module named Upstream that turns the web server into a load balancer. Staying close to what it does best, Nginx restricts its balancing to the HTTP protocol on Layer 7 of the OSI model.
Although Nginx is primarily a web server and not a load balancer, it can compete with HAProxy in terms of features. The program's modular structure is an advantage: Nginx handles SSL and the more modern HTTP/2 protocol, and compressed connections and real-time metrics are also part of the feature set.
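In the open source version, compression and basic real-time metrics are typically enabled with the gzip and stub_status directives. The following fragment is only a minimal sketch; the listen port and location path are arbitrary examples:

  http {
    gzip on;                      # compress responses to clients

    server {
      listen 127.0.0.1:8080;      # keep the status page internal
      location /status {
        stub_status;              # active connections, accepted/handled requests
      }
    }
  }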
Thanks to the Upstream module (Figure 2), admins simply define another hop in the route that incoming requests take. If SSL is used, Nginx handles SSL termination without a problem. Like HAProxy, Nginx supports several operating modes when forwarding clients to the back-end servers. In addition to the default Round Robin mode, Least Connections is available, which sends each new request to the back end with the fewest active connections. The IP Hash algorithm instead calculates the target back end from a hash of the client's IP address.
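A minimal upstream configuration might look like the following sketch; the host names, ports, and certificate paths are placeholders, and removing the least_conn line falls back to Round Robin:

  # inside the http {} context
  upstream app_servers {
    least_conn;                       # pick the back end with the fewest active connections
    server app1.example.com:8080;
    server app2.example.com:8080;
  }

  server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/example.crt;
    ssl_certificate_key /etc/nginx/certs/example.key;

    location / {
      proxy_pass http://app_servers;  # hand requests to the upstream group
    }
  }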
The commercial version, Nginx Plus, additionally offers a Least Time mode, in which the connection always goes to the back end that currently shows the lowest latency from Nginx's point of view. Session persistence, however, is also supported by plain Nginx (minus the Plus). Whether the IP hashing algorithm or a random mode with persistent sessions is the better option depends on the setup and can ultimately only be determined by trial and error.
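In plain Nginx, the simplest form of session persistence is the ip_hash method, which pins each client IP address to the same back end for as long as that back end is available; again, the server names are placeholders:

  upstream app_servers {
    ip_hash;                          # same client IP -> same back end
    server app1.example.com:8080;
    server app2.example.com:8080;
  }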
Nginx admins can also solve the HA problem described at the beginning in an elegant way by buying the commercial Nginx Plus distribution, which comes with a built-in HA mode and can run in active-passive or even active-active mode. Potentially unwelcome tools like Pacemaker can be left out of this scenario.
Robust Solution: Seesaw
Not surprisingly, Google, one of the most active companies in the IT environment, has its own opinion on the subject of load balancing. The company is heavily involved in the development of a load balancer, even if it is not officially a Google product. The name of this program is Seesaw [3], which describes the core aspect of a load balancer quite well.
Seesaw is based on the Linux Virtual Server (LVS) functions and is therefore functionally closer to Keepalived. It operates exclusively on OSI Layer 4, which makes it the first load balancer in the test that does not offer special functions for HTTP. However, the web's protocol was probably not the focus of the Google developers when they started work on Seesaw a few years ago, which is clear from the basic operating concept of the solution.
Hacks that rely on Pacemaker and the like to achieve high availability do not exist in Seesaw. Instead, it is always operated as a cluster of at least two instances that communicate with each other, which sidesteps the HA problem from the outset. At the Seesaw level, the admin configures the virtual IPs (VIPs) to be published to the outside world, as well as the IPs of the internal target systems.
Because Seesaw supports Anycast and load balancing for Anycast addresses, the setup can become a little more complex. If you attach the Quagga Border Gateway Protocol (BGP) daemon to Seesaw, Seesaw announces Anycast VIPs as soon as you enable them on one of the Seesaw nodes. Anycast load balancing is something that Nginx and HAProxy support only to a limited extent, so this is an area where Seesaw genuinely stands out.
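The Quagga side of such a setup boils down to an ordinary BGP session with the upstream router. The fragment below is only a generic bgpd.conf sketch with invented AS numbers and addresses; it says nothing about the Seesaw-specific route handling:

  ! generic bgpd.conf sketch -- AS numbers and addresses are invented
  router bgp 64512
   bgp router-id 192.0.2.10
   neighbor 192.0.2.1 remote-as 64511
   neighbor 192.0.2.1 description upstream-router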
In return, however, Seesaw imposes strict requirements: at least two network interfaces per machine, both on the same Layer 2 network. One carries the node's own IP address, and the other announces the VIPs for the running services. The developers note, though, that Seesaw can easily run as a group of virtual machines, which takes some of the stress out of the setup.
Written in Go, Seesaw ideally should not cause too much grief after the initial setup. The clearly laid out configuration file uses the INI format and usually comprises no more than 20 lines (see the sketch below), although it only plays a minor role in the Seesaw context anyway. It was important to Google that the service could be managed easily from a central point, so Seesaw also includes a config server service, through which the admin dynamically configures the back-end servers and passes the information on to Seesaw (Figure 3).
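A local seesaw.cfg in the spirit of the project's documentation might look like the following sketch; the cluster name, addresses, config server names, and interface names are placeholders, and the exact set of keys can differ between Seesaw versions:

  [cluster]
  anycast_enabled = false
  name = cluster-a
  ; this node, its peer, and the shared cluster VIP
  node_ipv4 = 192.168.255.1
  peer_ipv4 = 192.168.255.2
  vip_ipv4 = 192.168.255.254

  [config_server]
  primary = seesaw-config1.example.com
  secondary = seesaw-config2.example.com

  [interface]
  ; node carries the host IP, lb announces the service VIPs
  node = eth0
  lb = eth1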
This setup might sound like overkill at first, and it only pays off to a limited extent if you run a single load balancer pair. However, if you think in Google dimensions and face a large number of Seesaw instances, you quickly understand the elegance of the approach, all the more so because Seesaw keeps its configuration simple and does not try to replicate one of the common automation tools. If you are looking for a versatile load balancer for OSI Layer 4, you might want to take a closer look at Seesaw – it's worth it.
Commercial: Zevenet
The last candidate in the comparison is Zevenet [4], which proves that load balancers can have a commercial background without being appliances. If you want, you can buy the service bundled with hardware from Zevenet, but this is not mandatory. Zevenet also describes itself as cloud ready, and the manufacturer even delivers its product as a ready-to-use appliance for operation in various clouds.
In terms of features, Zevenet is not that spectacular: It is a standard load balancer, available in a Community and an Enterprise edition. The Community Edition provides basic balancing for OSI Layers 4 and 7, with the Layer 7 focus on HTTP. However, the strategy behind it is clearly do-it-yourself; support from the manufacturer is limited to best-effort answers in the company's forum.
The Enterprise Edition of Zevenet is far more fun. Like Nginx Plus and Seesaw, it offers built-in high availability for cluster operation; it also delivers monitoring messages over SNMP and provides its own API for access. Unlike the tools presented so far, Zevenet Enterprise additionally comes with an extensive GUI that admins can use to configure the load balancer.
A basic version of the GUI is also available for the Community Edition (Figure 4), but the Enterprise variant offers significantly more functions (Figure 5). What may be a thorn in the side of experienced admins can save the day in meetings with management. In the GUI, Zevenet can be connected to an external user administration system or, alternatively, bring its own, complete with an audit trail. The right to change the load balancer configuration can therefore be assigned in Zevenet at the level of individual services.
The Layer 4 and 7 capabilities of the Community Edition are significantly expanded in the Enterprise Edition. For example, different log types can be monitored and logged separately, and the destination for the log messages can be specified. On OSI Layer 7, Zevenet offers HTTP features such as support for SSL wildcard certificates, cookie injection, and OpenSSL 1.1.
Not all of these features are equally relevant for every use case, but all told, Zevenet's range of functions is impressive, and attention to detail shows in many places. For example, Zevenet transfers connections from one node of a cluster to another without losing state. Zevenet also takes security seriously: It integrates various external Domain Name System blacklist (DNSBL) and real-time blackhole list (RBL) services, as well as services for DDoS prevention, and it can deny access to arbitrary clients when needed. At this point, Zevenet leaves the sphere of pure load balancing and mutates into something more like a small firewall.