Networking strategies for Linux on Azure
Application Gateway
The Azure Application Gateway is a traffic management tool that operates at the application layer (Layer 7), providing more sophisticated routing capabilities tailored to web-based applications. It is particularly useful for Linux-based web services that require advanced routing, Secure Sockets Layer (SSL) termination, and application firewall capabilities.
One of the primary features of the Application Gateway is its ability to perform SSL offloading, which reduces the computational load on your Linux VMs by handling SSL termination at the gateway level. This offloading frees up resources on the VMs, allowing them to handle more application-level processing. SSL termination also simplifies certificate management because SSL certificates are only applied at the gateway, rather than on each individual VM.
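As a rough sketch of how SSL termination is set up from the Azure CLI, you can upload a PFX certificate to an existing gateway and bind it to an HTTPS listener. All resource names below (gateway, resource group, certificate, port) are placeholders, not required values:

```shell
# Upload a PFX certificate to an existing Application Gateway
# (resource group, gateway, and certificate names are hypothetical)
az network application-gateway ssl-cert create \
  --resource-group myResourceGroup \
  --gateway-name myAppGateway \
  --name mySslCert \
  --cert-file ./mycert.pfx \
  --cert-password "<pfx-password>"

# Create an HTTPS listener that terminates SSL at the gateway;
# the frontend port resource (for 443) is assumed to exist already
az network application-gateway http-listener create \
  --resource-group myResourceGroup \
  --gateway-name myAppGateway \
  --name httpsListener \
  --frontend-port httpsFrontendPort \
  --ssl-cert mySslCert
```

With this configuration, the back-end Linux VMs can listen on plain HTTP, because encryption ends at the gateway.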
Another critical feature is URL path-based routing, which allows the Application Gateway to direct traffic according to the path of the requested URL; host-based routing across multiple subdomains is also possible with multisite listeners. This capability is especially useful for microservices architectures. For instance, requests to example.com/api/ can be routed to a specific set of Linux VMs optimized for API processing, whereas all other requests to example.com are directed to a different set of VMs that handle general web traffic.
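A minimal path-based routing rule might look like the following Azure CLI sketch, which sends /api/* requests to a dedicated back-end pool and lets everything else fall through to a default pool. The pool, settings, and rule names are hypothetical and assume the gateway already exists:

```shell
# Route /api/* requests to a dedicated backend pool; all other
# requests fall through to the default pool (names are placeholders)
az network application-gateway url-path-map create \
  --resource-group myResourceGroup \
  --gateway-name myAppGateway \
  --name myPathMap \
  --paths "/api/*" \
  --address-pool apiPool \
  --http-settings appGatewayBackendHttpSettings \
  --default-address-pool webPool \
  --default-http-settings appGatewayBackendHttpSettings \
  --rule-name apiRule
```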
The Application Gateway also integrates with the web application firewall (WAF), providing an additional layer of security for Linux-based web applications. The WAF protects against common web vulnerabilities such as SQL injection, cross-site scripting (XSS), and other Open Web Application Security Project (OWASP) top 10 threats. This integration ensures that your Linux web services are not only performant but also secure from common attack vectors.
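On gateways that use the built-in WAF configuration, the firewall can be switched on from the CLI roughly as follows; the mode and rule-set values shown are typical choices, not mandatory ones, and newer deployments may use a separate WAF policy resource instead:

```shell
# Enable the WAF in Prevention mode with the OWASP core rule set
# (gateway and resource group names are hypothetical)
az network application-gateway waf-config set \
  --resource-group myResourceGroup \
  --gateway-name myAppGateway \
  --enabled true \
  --firewall-mode Prevention \
  --rule-set-type OWASP \
  --rule-set-version 3.2
```

In Detection mode the WAF only logs suspicious requests, which is useful for tuning rules before switching to Prevention.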
Traffic Routing
Azure Traffic Manager is a global DNS-based traffic routing service that enables you to distribute traffic across multiple Azure regions. This service is particularly beneficial for Linux deployments that need to ensure high availability and optimal performance for users across different geographic locations.
Traffic Manager works by directing incoming DNS requests to the most appropriate endpoint according to the routing method you choose. The available routing methods include Priority, Weighted, Performance, Geographic, Subnet, and MultiValue. Each method serves a different purpose, allowing you to tailor traffic routing to your specific needs.
For instance, the Performance routing method directs traffic to the closest endpoint with the lowest latency, ensuring that users experience the best possible performance regardless of their location. This method is ideal for global Linux applications where user experience is critical. Alternatively, the Priority routing method can be used to implement a primary/backup failover strategy, ensuring that traffic is redirected to a secondary region if the primary region becomes unavailable.
Traffic Manager also supports Weighted routing, allowing you to distribute traffic across multiple endpoints on the basis of assigned weights. This feature is useful for gradually rolling out updates or balancing loads across different regions to prevent overloading a single data center.
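A Weighted profile with a 90/10 split between a stable region and a canary region could be sketched from the Azure CLI as follows. The profile name, DNS label, and endpoint resource IDs are placeholders you would replace with your own values:

```shell
# Create a Traffic Manager profile that uses Weighted routing
# (profile name and DNS label are hypothetical)
az network traffic-manager profile create \
  --resource-group myResourceGroup \
  --name myProfile \
  --routing-method Weighted \
  --unique-dns-name myapp-example

# Register two endpoints with a 90/10 traffic split
az network traffic-manager endpoint create \
  --resource-group myResourceGroup \
  --profile-name myProfile \
  --name stable \
  --type azureEndpoints \
  --target-resource-id "<primary-region-resource-id>" \
  --weight 90

az network traffic-manager endpoint create \
  --resource-group myResourceGroup \
  --profile-name myProfile \
  --name canary \
  --type azureEndpoints \
  --target-resource-id "<second-region-resource-id>" \
  --weight 10
```

Adjusting the weights over time lets you shift traffic gradually toward the new region without a hard cutover.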
Integrating Traffic Manager with your Linux deployments involves configuring DNS settings to point to the Traffic Manager profile, which then directs traffic according to the specified routing method. This setup provides a resilient and flexible way to manage global traffic, ensuring that your Linux applications remain available and performant even under challenging conditions.
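If your zone is hosted in Azure DNS, pointing a custom domain at the profile is a single CNAME record; the zone, record, and profile names below are hypothetical:

```shell
# Point www.example.com at the Traffic Manager profile's DNS name
# (zone and record names are placeholders)
az network dns record-set cname set-record \
  --resource-group myResourceGroup \
  --zone-name example.com \
  --record-set-name www \
  --cname myapp-example.trafficmanager.net
```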
Accelerated Networking and Performance Tuning
Optimizing network performance is important for Linux workloads running on Azure, especially for applications that demand low latency, high throughput, and consistent reliability. Azure offers several tools and techniques to enhance network performance, with Accelerated Networking being a key feature. By coupling it with throughput optimization and latency minimization strategies, IT professionals can ensure that their Linux-based applications perform optimally in the cloud.
The Accelerated Networking feature provided by Azure significantly improves the network performance of VMs by reducing latency, lowering jitter, and decreasing CPU utilization on the VM. It does so by offloading network processing from the VM's CPU to the underlying host hardware: a dedicated NIC that supports single-root I/O virtualization (SR-IOV) and exposes a virtual function directly to the VM.
To enable Accelerated Networking on Linux VMs, you first need to ensure that the VM size and the operating system version support this feature. Accelerated Networking is available on most general-purpose and compute-optimized VM sizes in Azure, but it's important to verify compatibility with the specific Linux distribution you are using.
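One way to check which VM sizes in a region advertise the feature is to query the SKU capability flag from the Azure CLI; the region and the exact JMESPath query below are illustrative assumptions:

```shell
# List VM sizes in a region that report Accelerated Networking support
# (region name is a placeholder; the capability flag is surfaced by
# the compute resource provider)
az vm list-skus --location eastus --resource-type virtualMachines \
  --query "[?capabilities[?name=='AcceleratedNetworkingEnabled' && value=='True']].name" \
  --output tsv
```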
Once compatibility is confirmed, Accelerated Networking can be enabled during the VM creation process or on existing VMs by attaching a supported NIC through the Azure Portal, Azure command-line interface (CLI), or Azure Resource Manager (ARM) templates. For example, from the Azure CLI, you can enable Accelerated Networking on a NIC with the command
az network nic update --name <nic-name> --resource-group <resource-group-name> --accelerated-networking true
After enabling Accelerated Networking, you should verify that the feature is working as expected by checking the NIC properties in the Azure Portal or by running the ethtool command on the Linux VM, which should show that the SR-IOV virtual function is in use.
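A quick verification on the VM itself might look like the following sketch. The SR-IOV virtual function typically appears as a Mellanox PCI device, and virtual function counters show up in the NIC statistics; the interface name eth0 is an assumption you may need to adjust:

```shell
# The SR-IOV virtual function usually shows up as a Mellanox device
lspci | grep -i mellanox

# Virtual function (vf_) counters in the NIC statistics indicate that
# traffic is flowing through the accelerated path (eth0 is assumed)
ethtool -S eth0 | grep -i vf_
```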