Pansiri Pikunkaew, 123RF

Protecting your web application infrastructure with the Nginx Naxsi firewall

Fire Protection

Article from ADMIN 15/2013
One of the best-kept secrets about the popular Nginx web server is the Naxsi web application firewall.

Large web applications usually consist of several components, including a front end, back-end application servers, and a database. The front end handles a large part of the processing of requests, and if properly configured and tuned, it can help to shorten access times, relieving the load on the underlying application server and providing protection against unauthorized access.

The Nginx web server [1] has quickly established itself in this ensemble. Nginx (pronounced "Engine X") can act as a reverse proxy, load balancer, and static server. The system is known for high performance, stability, and frugal resource requirements. Although Apache is still king of the hill, approximately 30 percent of the top 10,000 websites already benefit from Nginx [2] (Figure 1).

Figure 1: Nginx usage in February 2013.

Although Nginx is finding success with all sizes of networks, it is particularly known for its ability to scale to extremely high volumes. According to the project website, "Unlike traditional servers, Nginx doesn't rely on threads to handle requests. Instead it uses a much more scalable event-driven (asynchronous) architecture. This architecture uses small, but more importantly, predictable amounts of memory under load."

Nginx can provide several services as a web front end, including load balancing and hot standby (see the box titled "Balancing and Standby"), reverse proxy, static web service, web/proxy cache (see the box titled "Caching"), SSL offload and SNI, and header cleanup. Figure 2 shows a sample web application configuration with Nginx as the front end.

Figure 2: Sample setup with Nginx as the front end.

Caching

Another special feature in Nginx is the ability to cache back-end responses and deliver the response from the cache on the next request. The cache can be based on URL parameters, which include the query parameters. Thus, Nginx can cache dynamic queries of the type /index.do?arg1=hello&arg2=world. The cache is used to accelerate heavily frequented traffic portals and content management systems. If the content does not change every second, the cache can mean a significant reduction of the underlying server load at peak times.
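In Nginx, such a cache is built from the proxy_cache_path and proxy_cache directives. The following sketch shows the idea; the zone name, paths, and sizes are illustrative assumptions, not values from a production setup:

```nginx
# Cache definition in the http context: 1GB on disk, entries
# expire after five minutes of inactivity.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:10m
                 max_size=1g inactive=5m;

server {
    listen 80;

    location / {
        proxy_cache       appcache;
        # The cache key includes the query string, so dynamic
        # URLs such as /index.do?arg1=hello&arg2=world are cached, too.
        proxy_cache_key   "$scheme$host$request_uri";
        proxy_cache_valid 200 302 5m;
        proxy_pass        http://backend;
    }
}
```

Mounting the cache directory on a tmpfs filesystem shifts the cache completely into RAM.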

I used Nginx to optimize the main portal of a large German amusement park that averages 30,000 visitors per day during the season. At peak times, the site receives several thousand visitors per minute over periods of several hours; this volume regularly pushed the server to its performance limits. Nginx was installed upstream as a caching server, and the cache, with a total size of 1GB and an expiry time of five minutes, was moved completely into RAM.

This configuration ensured that 99 percent of all queries were answered directly from the cache, and the underlying stack of Apache web server, PHP, and MySQL could take a rest at a load average of 0.01. Performance tests on the cache achieved 100,000 requests per second with 1,000 parallel requests, without the base load of the server changing significantly. At this rate, even a small denial-of-service attack can fizzle out unnoticed.

The Cache Purge module [8] lets you clear the entire cache or parts of it. You can thus control the caching server from the app side and rebuild the cache with the next request. This feature is ideal for caching APIs, whose responses are influenced by GET parameters and are not regenerated continuously.
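With the Cache Purge module compiled in, a purge location might look like the following sketch (the zone name appcache and the /purge URL layout are assumptions for illustration):

```nginx
# Purging via a dedicated URL: a request to /purge/some/path
# removes the cache entry for /some/path.
location ~ ^/purge(/.*) {
    allow 127.0.0.1;     # only trusted hosts may purge
    deny  all;
    proxy_cache_purge appcache "$scheme$host$1";
}
```

The argument to proxy_cache_purge must match the proxy_cache_key of the caching location so that the correct entry is found and removed.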

Balancing and Standby

Load balancers distribute requests across underlying servers or services. Nginx provides several modules that perform load balancing under a variety of criteria. The load balancer functions from the upstream module lend themselves to customization. For example, you can individually set the number of failed attempts and the timeout for each server in the load balancer network. Weighting of each server is possible. In case of failure, individual servers can be removed from the cluster and designated inactive (down). Error pages can be intercepted and diverted. It is also possible to configure hot standby servers that become active when other servers fail. Listing 1 shows an example of a load balancer configuration.

The load distribution of the load balancer can be organized by round robin, least connection, or IP hash. Sticky sessions, which always redirect a user session to the same node, are also possible. The round robin method distributes the load continuously and alternately to the downstream servers. In the least connection method, Nginx forwards the requests to the server with the fewest active connections. Load distribution based on the IP hash method is only useful in some cases and should not be used for publicly accessible services.
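The distribution method is selected per upstream block; the server names here are placeholders:

```nginx
upstream app_least_conn {
    least_conn;              # forward to the server with the fewest active connections
    server app1.example.com;
    server app2.example.com;
}

upstream app_ip_hash {
    ip_hash;                 # hash the client IP: same client, same server
    server app1.example.com;
    server app2.example.com;
}
```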

Session stickiness is usually required where interactive web applications and load balancing come together. Here, user X, who has logged on to server Y, is always routed to this server by the load balancer and thus "sticks" on this server for the duration of the session. Sticky sessions are implemented with an external module [7] in Nginx. The administrator can change the weighting of each server using the weight parameter to assign more queries to servers with better performance.
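With the sticky module [7] compiled in, stickiness and weighting can be combined roughly as follows (the server names are placeholders):

```nginx
upstream backend {
    sticky;                            # cookie-based session stickiness
    server app1.example.com weight=2;  # the stronger machine receives twice the requests
    server app2.example.com weight=1;
}
```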

Nginx also lets you set up hot standby scenarios, in which a fallback instance with the same codebase runs in parallel, stepping in immediately if the main server fails. The hot standby feature lets you update web applications without downtime, thus achieving a higher overall availability for small systems.

Listing 1

Load Balancer Configuration

upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com max_fails=5 fail_timeout=10s;
    server backend3.example.com;
    server backend4.example.com down;
    server backend5.example.com backup;
}

upstream fallback {
    server fallback1.example.com:8081;
}

server {
    ...
    proxy_intercept_errors on;
    ...
    error_page 502 = @fallback;
    proxy_next_upstream http_500 http_502 http_503 http_504 timeout error invalid_header;
    proxy_pass http://backend;
}

Nginx also comes with some other useful features for busy admins. You can reload the configuration via a HUP signal and even replace the Nginx binary on the fly without any disconnects [3]. The new SPDY protocol is implemented in Nginx version 1.3.15 and will probably be integrated into the stable version 1.4, which is expected in May. (A SPDY patch is available for older versions of the 1.3 branch [4].)

You can download stable and development versions of Nginx at nginx.org; the current versions are 1.2.8 (stable) and 1.3.16 (development). The documentation in the Nginx wiki [5] is comprehensive, including configuration examples, best practice guides, and HowTos. The mailing list is available to newcomers and hard-core users alike. Even Igor Sysoev, the main developer of Nginx, responds from time to time to technically demanding questions.

Like Apache, Nginx comes with several powerful modules that expand and extend its collection of core services. One of the more interesting recent additions to the Nginx ecosystem is the Naxsi module [6], which converts Nginx into a Web Application Firewall (WAF). Naxsi, which was introduced in 2011, is still in development, but many networks are already using it productively. The Naxsi firewall offers promising features to protect web servers against script kiddies, scanners, and other automated tools that search around the clock for low-hanging fruit.

In this article, I describe how Naxsi works and how to implement it to protect your web application infrastructure.

Introducing Naxsi

Naxsi comes with its own core ruleset and is extensible with user-specific rulesets. The configuration takes place in the Nginx context. Thanks to scores for individual rules and customizable thresholds for block actions, the WAF can be adapted to different environments and web applications.

Naxsi can check different values, such as URLs, request parameters, cookies, headers, or the POST body, and it can be switched on or off at location level in the Nginx configuration. Automatic whitelist creation makes it easy to deploy the firewall upstream and rule out 100 percent of false positives. Other tools, such as NX-Utils and Doxi, facilitate administration, report generation, and ruleset updates.
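Switched on at location level, a minimal Naxsi configuration could look like the following sketch (the log path and the backend name are assumptions; the core ruleset must be included in the http context):

```nginx
# in the http context:
include /etc/nginx/naxsi_core.rules;

# in the server context:
location / {
    SecRulesEnabled;                 # activate Naxsi for this location
    DeniedUrl "/RequestDenied";      # blocked requests are redirected here
    CheckRule "$SQL >= 8" BLOCK;     # block thresholds per score counter
    CheckRule "$XSS >= 8" BLOCK;
    error_log /var/log/nginx/naxsi.log;
    proxy_pass http://backend;
}

location /RequestDenied {
    internal;
    return 403;
}
```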

Naxsi comes with NX-Utils, a collection of tools that is very useful for generating whitelists and reports. NX-Utils includes an intercept mode, which saves requests blocked by the WAF to a database for later reports and whitelists, and a report mode, which visualizes the stored events. NX-Utils is still under active development; a later version will provide improved report processing and filtering to analyze WAF events more precisely.

Modes: Live vs. Learning

Naxsi can operate in two modes: Live and Learning (Figure 3). Like any WAF or IDS, Naxsi must be adapted for the application. Developers can take very different approaches when programming web applications. For instance, 2KB cookies with large chunks of disorganized data are not uncommon and push even experienced WAF admins to the brink of madness. For these cases, Learning mode allows you to test an application fully behind a protected test domain and generate appropriate whitelists from the queries and events, which you can then feed to an active WAF in Live operations.

Figure 3: Naxsi only blocks suspicious inquiries in Live mode. Learning mode helps you compose a ruleset.

In Learning mode, requests are registered but not blocked. Whitelists can be generated from the false positives to prevent them from occurring in Live operation.
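Learning mode is enabled with a single additional directive; otherwise, the configuration matches the live setup:

```nginx
location / {
    SecRulesEnabled;
    LearningMode;                 # register matches, but do not block
    DeniedUrl "/RequestDenied";
    CheckRule "$SQL >= 8" BLOCK;  # thresholds are still evaluated, actions are only logged
    proxy_pass http://backend;
}
```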

Rules

The Naxsi rules are simple in design, flexible to handle, and simpler in structure than Apache ModSecurity or Snort rules. A rule consists of a designator, a search pattern (str or rx), a short text (msg), the match zone (mz), the score (s), and a unique ID (id).

Strings and regular expressions are allowed as search patterns, although strings are preferable in general for performance reasons. Match zones indicate the areas of a request in which Naxsi searches for the specified pattern. Match zones are combinable and can be defined using the following values:

  • URL: Checks for the search pattern in the URL (server path).
  • ARGS: Searches for the pattern in the request arguments.
  • FILE_EXT: Tests the file name of an upload for the search pattern.
  • BODY: Checks the body of a POST request for the search pattern; can be further limited with $BODY_VAR:VarName.
  • HEADERS: Finds the search pattern in the header of a request; can be further delimited: $HEADERS_VAR:User-Agent, $HEADERS_VAR:Cookie, $HEADERS_VAR:Content-Type, $HEADERS_VAR:Connection, $HEADERS_VAR:Accept-Encoding.

The score indicates the value for each event. Thus, you can create signatures that do not lead to the firewall blocking the connection when left to their own devices, but only in combination with other events. Scores and check rules can be configured and extended freely.
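For example, two custom rules with a score of 4 each trigger a block only in combination; the patterns and rule IDs here are made up for illustration:

```nginx
MainRule "str:select" "msg:sql keyword select" "mz:ARGS|BODY" "s:$SQL:4" id:42009001;
MainRule "str:union"  "msg:sql keyword union"  "mz:ARGS|BODY" "s:$SQL:4" id:42009002;
CheckRule "$SQL >= 8" BLOCK;   # a request must match both rules to be blocked
```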

Listing 2 shows an example of a Naxsi ruleset. Rule 1 in line 1 checks whether the given URL is accessed. The search pattern is a string. The $UWA score increases by 8 if the rule matches. Rule 2 tests whether the given regular expression occurs in the BODY of a POST request. Rule 3 checks the Authorization header to see whether it contains the given string (the Base64-encoded credentials admin:admin) when the /manager URL is invoked. Rule 4 then tests whether the given string (indicating remote file inclusion, RFI) occurs in the request parameters, BODY, or cookie, whether in a GET or a POST request. Finally, Rule 5 checks whether the given string exists in the URL, BODY, ARGS, or cookie.

Listing 2

Naxsi Rules

01 MainRule "str:/manager/html/upload" "msg:DN SCAN Tomcat" "mz:URL" "s:$UWA:8" id:42000217 ;
02 MainRule "rx:type( *)=( *)[\"|']symbol[\"|']" "msg:DN APP_SERVER Possible RAILS Exploit using type=symbol" "mz:BODY" "s:$ATTACK:8" id:42000233 ;
03 MainRule "str:basic ywrtaw46ywrtaw4=" "msg:APP_SERVER Tomcat admin-admin credentials" "mz:$URL/manager|$HEADERS_VAR:Authorization" "s:$ATTACK:8" id:42000216 ;
04 MainRule "str:http://" "msg:http:// scheme" "mz:ARGS|BODY|$HEADERS_VAR:Cookie" "s:$RFI:8" id:1100 ;
05 MainRule "str:/*" "msg:mysql comment (/*)" "mz:BODY|URL|ARGS|$HEADERS_VAR:Cookie" "s:$SQL:8" id:1003 ;
