Useful NFS options for tuning and management
Tune-Up
System Memory
System tuning options aren't really NFS tuning options, but a system change can result in a change in NFS performance. In general, Linux and its services love memory and will grab as much system memory as possible. Of course, this memory is returned if the system needs it for other applications, but rather than let memory go to waste (i.e., sit unused), Linux uses it for buffers and caches.
NFS is definitely one of the services, particularly on the server, that will use as much buffer space as possible. With these buffers, NFS can merge I/O requests to improve bandwidth. Therefore, the more physical memory you can add to the NFS server, the more likely the performance will improve, particularly if you have lots of NFS clients hitting the server at the same time. The question is: How much memory does your NFS server need?
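To get a feel for how much of the server's memory is being used for buffers and cache at any given moment, the free command is a quick check (the -h flag prints human-readable units):

free -h

On a busy NFS server, the buff/cache column typically grows to consume most of the otherwise idle memory, which is exactly the behavior you want.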
The answer is not easy to determine because of conflicting goals. Ideally, you should put as much memory in the server as you can afford, but if budgets are tight, you'll have to weigh buying the largest amount of memory for the NFS server against putting the money into other aspects of the system. Could you reduce memory on the NFS server from 512GB to 256GB and perhaps buy an extra compute node? Is that worth the trade? The answer is up to you.
As a rule of thumb for production HPC systems, however, I tend to put no less than 64GB in the NFS server, because memory is relatively inexpensive compared with the rest of the system. You can always go with less memory, perhaps 16GB, but you might pay a performance penalty. If your applications don't do much I/O, though, the trade-off might be worthwhile.
If you choose to use asynchronous NFS mode, you will need more memory to take advantage of async, because the NFS server will first store the I/O request in memory, respond to the NFS client, and then retire the I/O by having the filesystem write it to stable storage. Therefore, you want as much memory as possible to get the best performance.
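On the server, async is set per export in /etc/exports. As a minimal sketch, assuming a hypothetical exported directory of /export/data:

/export/data *(rw,async,no_subtree_check)

sudo exportfs -ra   # re-export after editing /etc/exports

Remember that with async the server acknowledges writes before they reach stable storage, so a server crash can lose data that clients believe was safely written.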
The very last word I want to add about system memory concerns memory speed and the number of memory channels. To wring out every last bit of performance from your NFS server, you will want the fastest possible memory, while recognizing the trade-off between memory capacity and memory performance. The solution to the trade-off is really up to you, but I like to see how much memory I can get using the fastest dual in-line memory modules (DIMMs) possible. If that capacity is not large enough, you might want to step down to the next level in memory speed to increase capacity.
For the best possible performance, you also want an NFS server with the maximum number of memory channels to increase the overall memory bandwidth of the server. When populating memory, be sure to put
- at least one DIMM in each channel,
- the same number of DIMMs in each channel, and
- the same DIMM size and speed in each channel.
Again, this is more likely to be critical if you are using asynchronous mode, but it's a good idea for even synchronous mode.
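You can verify how the installed DIMMs are laid out without opening the case by querying the DMI tables (field names vary somewhat by vendor; a sketch):

sudo dmidecode -t memory | grep -E 'Locator|Size|Speed'

Each populated slot reports its bank/channel locator, size, and speed, which makes it easy to spot an unbalanced configuration.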
MTU
Changing the network maximum transmission unit (MTU) is also a good way to affect performance, but it is not an NFS tunable; rather, it is a network option that you can tune on the system to improve NFS performance. The MTU is the maximum amount of data that can be sent via an Ethernet frame. The default MTU is typically 1500 (1,500 bytes per frame), but it can be changed fairly easily.
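For example, you can check and change the MTU with the ip tool (eth0 here is a placeholder; substitute your actual interface name):

ip link show eth0                  # the current MTU appears in the output
sudo ip link set dev eth0 mtu 9000

A change made this way does not survive a reboot; to make it permanent, set the MTU in your distribution's network configuration files.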
For the greatest effect on NFS performance, you have to change the MTU on both the NFS server and the NFS clients. Before changing the value, check both systems to determine the largest MTU they can use. You also need to check the largest MTU the network switches and routers between the NFS server and the NFS clients can accommodate (refer to the hardware documentation). Most switches, even non-managed "home" switches, can accommodate an MTU of 9000 (commonly called "jumbo frames").
The MTU size can be very important because it determines packet fragmentation on the network. If your chunk size is 8KB and the MTU is 1500, it takes six Ethernet frames to transmit the 8KB (8,192 bytes split into 1,500-byte payloads). If you increase the MTU to 9000 (9,000 bytes), the number of Ethernet frames drops to one.
The most common recommendation for better NFS performance is to set the MTU on both the NFS server and the NFS client to 9000 if the underlying network can accommodate it. A study by Dell [3] a few years back examined the effect of an MTU of 1500 compared with an MTU of 9000. Using Netperf [4], they found that bandwidth increased by about 33 percent when an MTU of 9000 was used.
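Before committing to jumbo frames, it's worth confirming that every hop honors them. A simple check is to ping the server with a payload that fills a 9000-byte frame while forbidding fragmentation (9000 bytes minus 20 for the IP header and 8 for ICMP leaves 8972; the hostname is a placeholder):

ping -M do -s 8972 nfs-server

If the pings come back, the path supports an MTU of 9000 end to end; if you see "message too long" errors, some device along the way is still at a smaller MTU.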
TCP Tuning on the Server
A great deal can be done to tune the TCP stack for both the NFS client and the NFS server. Many articles around the Internet discuss TCP tuning options for NFS and for network traffic in general. The exact values vary depending on your specific situation. Here, I want to discuss two options for better NFS performance: system input and output queues.
Increasing the size of the input and output queues allows more data to be transferred via NFS. Effectively, you are increasing the size of the buffers that can store data. The more data that can be stored in memory, the faster NFS can process it (i.e., more data is queued up). The server NFS daemons share the same socket input and output queues, so if the queues are larger, all of the NFS daemons have more buffer space and can send and receive data much faster.
For the input queue, the two values you want to modify are /proc/sys/net/core/rmem_default (the default size of the read queue in bytes) and /proc/sys/net/core/rmem_max (the maximum size of the read queue in bytes). These values are fairly easy to modify:
echo 262144 > /proc/sys/net/core/rmem_default
echo 262144 > /proc/sys/net/core/rmem_max
These commands change the read buffer sizes to 256KiB (base 2), which the NFS daemons share. You can do the same thing for the write buffers that the NFS daemons share:
echo 262144 > /proc/sys/net/core/wmem_default
echo 262144 > /proc/sys/net/core/wmem_max
After changing these values, you need to restart the NFS server for them to take effect. However, if you reboot the system, these values will disappear and the defaults will be used. To make the values survive reboots, you need to enter them in the proper form in the /etc/sysctl.conf file.
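In sysctl form, the same four settings look like this when appended to /etc/sysctl.conf:

net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144

sudo sysctl -p                     # apply the file without rebooting
sudo systemctl restart nfs-server  # restart NFS so it picks up the new queues

On systemd distributions the NFS server unit is typically called nfs-server, as shown here; check your distribution if the restart command fails.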
Just be aware that increasing the buffer sizes doesn't necessarily mean performance will improve. It just means the buffer sizes are larger. You will need to test your applications with various buffer sizes to determine whether increasing buffer size helps performance.