Managing Linux Memory

Memory Hogs

Focusing on the Kernel

From an administrative perspective, a much easier approach would be to adapt the kernel itself so that the application memory bottlenecks no longer occur. A kernel setting would then affect all programs at once and would not require any application-specific coding. You can try to achieve this in several ways.

First, you could set the page cache limit to a fixed size. Such a limit effectively prevents swapping – as the measurements will show. Unfortunately, the patches required are not included in the standard kernel and probably will not be in the near future. Only a few enterprise kernels from distributors such as SUSE offer these adjustments, and only for special cases. Thus, this is not a solution for many environments.
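
Where the patch is available – SUSE's enterprise kernels are the best-known case – the limit appears as an ordinary sysctl. The following is only a sketch, assuming a SLES kernel that carries the pagecache limit patch; the tunable does not exist on a mainline kernel:

echo 4096 > /proc/sys/vm/pagecache_limit_mb   # SUSE-specific tunable, not in mainline

This would cap the page cache at roughly 4GB; reading the file back confirms the active limit.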

The next solution is also only suitable for special cases. If the swap space is very small compared with the amount of main memory available, or not present at all, the Linux kernel can hardly swap out pages even in an emergency. Although this approach protects application pages against swapping, it comes at the price of risking out-of-memory situations – for example, when a temporary memory shortage would otherwise have been absorbed by swapping.
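
To illustrate the approach, the following standard commands list the configured swap areas and then disable all of them at run time; removing or commenting out the swap entries in /etc/fstab would make the change permanent:

cat /proc/swaps
swapoff -a

From that moment on, a memory shortage no longer leads to swapping but directly to the kernel's out-of-memory handling.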

In this case, continued smooth operation requires very careful planning and precise control of the system. For computers with an inaccurately defined workload, such an approach can easily result in process crashes and thus in an uncontrollable environment.

Swappiness

Much optimism is thus focused on a setting that the Linux kernel has supported since version 2.6.6 [12]. The file /proc/sys/vm/swappiness exposes a run-time configuration option that lets you influence the way the kernel swaps out pages. The swappiness value defines the extent to which the kernel should swap out pages that do not belong to the page cache or other operating system caches – that is, anonymous application pages.

A high swappiness value (the maximum is 100) causes the kernel to swap out anonymous pages intensively, which is very gentle on the page cache. In contrast, a value of 0 tells the kernel, if possible, never to swap out application pages. The default value today is usually 60; for many desktop PCs, this has proven to be a good compromise.
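
You can check which value your system is currently using by reading the file directly:

cat /proc/sys/vm/swappiness

On most current distributions, this outputs the default of 60.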

You can quite easily change the swappiness value to 0 at run time using this command:

echo 0 > /proc/sys/vm/swappiness

This setting, however, does not survive a restart of the system. To make the change permanent, you need an entry in the /etc/sysctl.conf configuration file. If you set vm.swappiness = 0, for example, the kernel specifically tries to keep application pages in main memory. This should normally solve the problem.
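
The required entry in /etc/sysctl.conf is a single line:

vm.swappiness = 0

After saving the file, the command sysctl -p reloads the configuration immediately, so the new value takes effect without a reboot.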

Unfortunately, the results discussed later show major differences in the way some distributions implement this. Additionally, swappiness changes the behavior for all applications in the same way. In an enterprise environment especially, only a few applications are typically critical. The swappiness approach cannot prevent cut-throat competition between applications on a system running multiple large applications. Only careful distribution of applications across systems can help here.

Gorman Patch

The final approach, which currently promises at least a partial solution to the displacement problem, only became part of the official kernel in Linux 3.11. The patch by developer Mel Gorman optimizes paging behavior with parallel I/O [13].

Initial results show that the heuristics incorporated into this patch actually improve the performance of applications if you have parallel I/O. However, it is still unclear whether this patch fixes the problem if the applications were inactive at the time of disk I/O – for example, at night. Will the users see significantly deteriorated performance in the morning because the required pages were swapped out?

The practical utility of these approaches can be evaluated from different perspectives. The sections above have already mentioned some criteria. Kernel patching could be ruled out in a production environment for organizational and support reasons. The first important question therefore is whether a change in the application code is necessary. All approaches that require such a change are impractical from the administrator's view.

On the other hand, the benefits of kernel changes are typically not directly visible; you can only determine them in operation. Thus, the following section of this article shows the effects of these changes in the same environment as described above. Based on these experimental results and the basic characteristics of each approach, you can ultimately arrive at an overall picture that supports an assessment.
