Tuning ZFS for Speed on Linux
ZFS Tuning for HPC
The L2ARC
ZFS provides another, larger secondary layer for read caching. With a larger cache volume, you increase the chances of rereading valuable data without hitting the slower devices underneath. In ZFS, this is accomplished by adding an SSD to your pool. The Level 2 ARC (L2ARC) hosts entries that have been scanned from the "primary" ARC cache and are next in line for eviction.
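Once a cache device is attached (which I do below), you can gauge whether the L2ARC is earning its keep by reading the kernel's ARC counters. This is a minimal check, assuming a ZFS on Linux system that exposes its statistics under /proc/spl/kstat/zfs/arcstats:

$ grep '^l2_' /proc/spl/kstat/zfs/arcstats

A high ratio of l2_hits to l2_misses means the secondary cache is absorbing reads that would otherwise have gone to the spinning disks.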
In my configuration, I have created two partitions on a local NVMe device:
$ cat /proc/partitions|grep nvme
 259        0  244198584 nvme0n1
 259        3   97654784 nvme0n1p1
 259        4   96679936 nvme0n1p2
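If you are starting from a blank device, a pair of partitions like the ones above can be carved out with parted. This is just a sketch, assuming /dev/nvme0n1 holds nothing you care about, because writing a new label destroys its contents:

$ sudo parted -s /dev/nvme0n1 mklabel gpt
$ sudo parted -s /dev/nvme0n1 mkpart l2arc 0% 50%
$ sudo parted -s /dev/nvme0n1 mkpart slog 50% 100%

The partition names l2arc and slog are arbitrary labels I chose for clarity.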
I will be using partition 1 for the L2ARC read cache, so to enable it, I enter:
$ sudo zpool add myvol cache nvme0n1p1
Then, I verify that the cache volume has been added to the pool configuration (Listing 5).
Listing 5
Verify Pool Config 1
$ sudo zpool status
  pool: myvol
 state: ONLINE
  scan: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        myvol          ONLINE       0     0     0
          raidz1-0     ONLINE       0     0     0
            sdc        ONLINE       0     0     0
            sdd        ONLINE       0     0     0
            sde        ONLINE       0     0     0
            sdf        ONLINE       0     0     0
        cache
          nvme0n1p1    ONLINE       0     0     0

errors: No known data errors
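Should the SSD ever fail or need replacing, nothing is lost: the L2ARC only holds copies of data that already lives in the pool, so the cache device can be detached at any time:

$ sudo zpool remove myvol nvme0n1p1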
Updates that enable a persistent L2ARC cache, one that can survive system reboots, are expected to land in the mainline ZFS code soon.
The ZFS ZIL SLOG
The purpose of the ZFS Intent Log (ZIL) is to log synchronous I/O operations persistently to disk before they are written to the pool's managed array. That synchronous part is how you ensure that all operations complete and are persisted to disk before an I/O completion status is returned to the application. You can think of it as a sort of "write cache." The separate intent log (SLOG), however, is intended to give this write log a boost by moving it onto an SSD.
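Note that only writes an application explicitly requests as synchronous (e.g., via fsync() or the O_SYNC flag) pass through the ZIL. The sync dataset property controls this policy, and you can inspect or override it; forcing sync=always routes every write through the log device, at some performance cost:

$ zfs get sync myvol
$ sudo zfs set sync=always myvol

For most workloads, the default sync=standard is the right choice.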
Remember how I had two separate partitions on the local NVMe device? One partition was used for the L2ARC read cache; now I will use the second partition for the SLOG write cache.
To add the NVMe device partition as the SLOG to the pool, enter:
$ sudo zpool add myvol log nvme0n1p2
Then, verify that the log device has been added to the pool configuration (Listing 6).
Listing 6
Verify Pool Config 2
$ sudo zpool status
  pool: myvol
 state: ONLINE
  scan: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        myvol          ONLINE       0     0     0
          raidz1-0     ONLINE       0     0     0
            sdc        ONLINE       0     0     0
            sdd        ONLINE       0     0     0
            sde        ONLINE       0     0     0
            sdf        ONLINE       0     0     0
        logs
          nvme0n1p2    ONLINE       0     0     0
        cache
          nvme0n1p1    ONLINE       0     0     0

errors: No known data errors
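One caveat: if an unmirrored SLOG device dies while holding synchronous writes that have not yet been flushed to the pool, those writes are lost. If you have a second SSD available, you can add the log as a mirror instead; nvme1n1p2 below is a hypothetical second device standing in for whatever you have on hand:

$ sudo zpool add myvol log mirror nvme0n1p2 nvme1n1p2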
Now that you have added the NVMe device partitions as the caches for both reads and writes, you can view general and basic metrics for those devices with the same zpool iostat interface (Listing 7).
Listing 7
View Metrics
$ zpool iostat -v myvol
               capacity     operations     bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
myvol        1.62M  25.2T      0      1     84  15.2K
  raidz1     1.62M  25.2T      0      1     67  13.5K
    sdc          -      -      0      0     16  3.40K
    sdd          -      -      0      0     16  3.39K
    sde          -      -      0      0     16  3.38K
    sdf          -      -      0      0     16  3.37K
logs             -      -      -      -      -      -
  nvme0n1p2      0    92G      0      0    586  56.6K
cache            -      -      -      -      -      -
  nvme0n1p1  16.5K  93.1G      0      0  1.97K    636
-----------  -----  -----  -----  -----  -----  -----
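To watch these counters move under load, zpool iostat accepts an interval in seconds, and a simple synchronous write test will exercise the SLOG. The dd example assumes the pool is mounted at its default /myvol mountpoint:

$ zpool iostat -v myvol 5

$ dd if=/dev/zero of=/myvol/ddtest bs=4K count=25000 oflag=sync

With oflag=sync, every block waits to be persisted before dd continues, so you should see write activity land on nvme0n1p2 in the iostat output.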
Conclusion
As you can see, ZFS is equipped with an entire arsenal of features that allow it to perform well in mission-critical and demanding high-performance environments. With an active community supporting ZFS, the filesystem is also very likely to see additional features and improvements in the near future.