Unleashing Accelerated Speeds with RAM Drives
Playing with Blocks
Time is money, and sometimes that means you need a faster way to process data. Solid state drives (SSDs) and, more specifically, non-volatile memory express (NVMe) devices have helped alleviate the burden of processing data to and from a backing store. However, at times, even SSD technology is not quite fast enough, which is where the RAM drive comes into the picture.
Typically, the RAM drive is used as temporary storage for two reasons: Its capacity tends to be smaller (because memory is more expensive per gigabyte), and, more importantly, it is volatile; that is, if the system loses power or becomes unstable, the contents of the RAM drive disappear. Depending on the type of data being processed, the reward can often outweigh that risk, which is why the RAM drive can be the better option.
In this article, I rely on the RapidDisk suite to create and manage RAM drives. The RapidDisk software project [1] provides an advanced set of Linux kernel RAM drive and caching modules with which you can dynamically create and remove RAM drives of any size or map them as a temporary read cache to slower devices.
The system used in this article is an older machine with a limited amount of memory clocked at a slow speed; more modern systems with faster memory will produce significantly better results than those shown here. The dmidecode command summarizes the configuration and capabilities of the installed memory DIMMs and reveals that this system has four 2048MB DDR3 modules configured at 1333MTps (megatransfers per second).
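To see the same information on your own system, something like the following works (a sketch: it requires root, and the exact field names vary slightly between dmidecode versions):

$ sudo dmidecode --type memory | grep -E 'Size:|Type:|Speed:'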
Playing with RAM Drives
To begin, you need to download and build the RapidDisk suite from source code by cloning the latest stable code from the RapidDisk repository [2] (Listing 1).
Listing 1
Cloning RapidDisk
$ git clone https://github.com/pkoutoupis/rapiddisk.git
Cloning into 'rapiddisk'...
remote: Enumerating objects: 1560, done.
remote: Counting objects: 100% (241/241), done.
remote: Compressing objects: 100% (158/158), done.
remote: Total 1560 (delta 149), reused 156 (delta 82), pack-reused 1319
Receiving objects: 100% (1560/1560), 762.11 KiB | 5.82 MiB/s, done.
Resolving deltas: 100% (949/949), done.
Next, change into the repository's root directory and build and install both the kernel modules and userspace utilities:
$ cd rapiddisk
$ make && sudo make install
Assuming that all libraries and package dependencies have been installed, both the build and installation should have completed without failure. Now insert the kernel modules:
$ sudo modprobe rapiddisk
$ sudo modprobe rapiddisk-cache
At this point, if you invoke the rapiddisk utility to list all RAM drive targets, none should be listed:
$ sudo rapiddisk -l
rapiddisk 7.2.0
Copyright 2011 - 2021 Petros Koutoupis

** Unable to locate any RapidDisk devices.
The amount of memory to use for your RAM drive needs to be determined carefully. The rapiddisk module allocates memory pages as they are requested, so in theory you can create a RAM drive much larger than the amount of system memory, but in practice you should never do so. As a RAM drive fills with data, it will eventually run out of free memory pages to allocate and could panic the kernel, requiring a reboot of the system. In this example, the system has 8GB of total memory and about 6.5GB of "free" memory (Listing 2).
Listing 2
RAM Memory
$ free -m
              total        used        free      shared  buff/cache   available
Mem:           7951         202        6678           1        1069        7479
Swap:          4095           0        4095
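To avoid over-committing memory, you could wrap the creation command shown below in a small guard script. This is a minimal sketch; the 50 percent cap is an arbitrary safety margin, not a RapidDisk requirement:

#!/bin/bash
# Refuse to create a RAM drive larger than half of the currently available memory.
REQUEST_MB=${1:-1024}
AVAIL_MB=$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)
LIMIT_MB=$(( AVAIL_MB / 2 ))
if [ "$REQUEST_MB" -gt "$LIMIT_MB" ]; then
    echo "Refusing to create a ${REQUEST_MB}MB RAM drive (only ${AVAIL_MB}MB available)" >&2
    exit 1
fi
sudo rapiddisk -a "$REQUEST_MB"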
The following example creates a 1GB RAM drive:
$ sudo rapiddisk -a 1024
rapiddisk 7.2.0
Copyright 2011 - 2021 Petros Koutoupis

** Attached device rd0 of size 1024 Mbytes
As mentioned earlier, the RapidDisk suite is designed to create and remove RAM drives of any size dynamically. For instance, if you want to create an additional 32MB RAM drive, you would rerun the same command with a different size (Listing 3). The output from the second command verifies that the RAM drives were created.
Listing 3
Adding RAM Drive of 32MB
$ sudo rapiddisk -a 32
rapiddisk 7.2.0
Copyright 2011 - 2021 Petros Koutoupis

** Attached device rd1 of size 32 Mbytes

$ sudo rapiddisk -l
rapiddisk 7.2.0
Copyright 2011 - 2021 Petros Koutoupis

List of RapidDisk device(s):

 RapidDisk Device 1: rd1        Size (KB): 32768
 RapidDisk Device 2: rd0        Size (KB): 1048576

List of RapidDisk-Cache mapping(s):

  None
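Incidentally, a RapidDisk volume behaves like any other block device, so you can also put a filesystem on it and mount it; just remember that everything on it disappears at reboot or power loss. A minimal sketch (the mount point is illustrative; unmount again before running the raw-device tests that follow):

$ sudo mkfs.ext4 /dev/rd0
$ sudo mkdir -p /mnt/ramdisk
$ sudo mount /dev/rd0 /mnt/ramdisk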
Just as easily as it was created, the RAM drive can be removed:
$ sudo rapiddisk -d rd1
rapiddisk 7.2.0
Copyright 2011 - 2021 Petros Koutoupis

Detached device rd1
To compare the RAM drive with a local 12Gb Serial Attached SCSI (SAS) spinning hard disk drive (HDD) connected to a 6Gb Host Bus Adapter (HBA), write 1GB of sequential data in 1MB transfers to the HDD with dd:
$ sudo dd if=/dev/zero of=/dev/sdf bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.76685 s, 225 MB/s
The result, 225MBps of throughput, is not bad at all for an HDD. To compare its performance with the memory-backed volume, run the same command against the RAM drive:
$ sudo dd if=/dev/zero of=/dev/rd0 bs=1M count=1024
Wow! The output shows 1.4GBps, more than six times the throughput of the HDD. On a more modern and faster system, that number would be higher still: about 16 times the throughput of the HDD or more.
Random access I/O is where the memory device really shines. Test performance with the fio benchmarking utility by running a random write test with 4KB transfers against the HDD (Listing 4). The output shows about 1.7MBps, which isn't fast at all. Now run the same random write test against the RAM drive (Listing 5). Here you see an impressive 1GBps; again, on a modern system that number would be much higher.
Listing 4
HDD Random Write
$ sudo fio --bs=4k --ioengine=libaio --iodepth=32 --size=500m --direct=1 --runtime=60 --filename=/dev/sdf --rw=randwrite --numjobs=1 --name=test
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.16
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][w=1733KiB/s][w=433 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=12944: Sat Jun 19 14:49:58 2021
  write: IOPS=421, BW=1685KiB/s (1725kB/s)(98.9MiB/60118msec); 0 zone resets
[ ... ]
Run status group 0 (all jobs):
  WRITE: bw=1685KiB/s (1725kB/s), 1685KiB/s-1685KiB/s (1725kB/s-1725kB/s), io=98.9MiB (104MB), run=60118-60118msec

Disk stats (read/write):
  sdf: ios=51/25253, merge=0/0, ticks=7/1913272, in_queue=1862556, util=99.90%
Listing 5
RAM Random Write
$ sudo fio --bs=4k --ioengine=libaio --iodepth=32 --size=500m --direct=1 --runtime=60 --filename=/dev/rd0 --rw=randwrite --numjobs=1 --name=test
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.16
Starting 1 process
test: (groupid=0, jobs=1): err= 0: pid=12936: Sat Jun 19 14:48:48 2021
  write: IOPS=250k, BW=977MiB/s (1024MB/s)(500MiB/512msec); 0 zone resets
[ ... ]
Run status group 0 (all jobs):
  WRITE: bw=977MiB/s (1024MB/s), 977MiB/s-977MiB/s (1024MB/s-1024MB/s), io=500MiB (524MB), run=512-512msec

Disk stats (read/write):
  rd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
You will observe similar results with random read operations (Listings 6 and 7): The HDD delivers about 2.5MBps, whereas the RAM drive reaches an impressive 1.2GBps.
Listing 6
HDD Random Read
$ sudo fio --bs=4k --ioengine=libaio --iodepth=32 --size=500m --direct=1 --runtime=60 --filename=/dev/sdf --rw=randread --numjobs=1 --name=test
test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.16
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=2320KiB/s][r=580 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=12975: Sat Jun 19 14:51:27 2021
  read: IOPS=622, BW=2488KiB/s (2548kB/s)(146MiB/60127msec)
[ ... ]
Run status group 0 (all jobs):
  READ: bw=2488KiB/s (2548kB/s), 2488KiB/s-2488KiB/s (2548kB/s-2548kB/s), io=146MiB (153MB), run=60127-60127msec

Disk stats (read/write):
  sdf: ios=37305/0, merge=0/0, ticks=1913563/0, in_queue=1838228, util=99.89%
Listing 7
RAM Random Read
$ sudo fio --bs=4k --ioengine=libaio --iodepth=32 --size=500m --direct=1 --runtime=60 --filename=/dev/rd0 --rw=randread --numjobs=1 --name=test
test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.16
Starting 1 process
test: (groupid=0, jobs=1): err= 0: pid=12967: Sat Jun 19 14:50:18 2021
  read: IOPS=307k, BW=1199MiB/s (1257MB/s)(500MiB/417msec)
[ ... ]
Run status group 0 (all jobs):
  READ: bw=1199MiB/s (1257MB/s), 1199MiB/s-1199MiB/s (1257MB/s-1257MB/s), io=500MiB (524MB), run=417-417msec

Disk stats (read/write):
  rd0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
NVMe over Fabrics Network
Sometimes it might be necessary to export those performant volumes across a network so that other compute nodes can take advantage of the high speed. In the following example, I rely on the NVMe over Fabrics concept and, more specifically, the NVMe target modules provided by the Linux kernel to export the RAM drive and, in turn, import it to another server where it will look and operate like a local storage volume.
Most modern distributions will have the NVMe target modules installed and available for use. To insert the NVMe and NVMe TCP target modules, enter:
$ sudo modprobe nvmet
$ sudo modprobe nvmet-tcp
The NVMe target directory tree is exposed through the kernel user configuration filesystem (configfs), which provides access to the entire NVMe target configuration environment. To begin, mount configfs and verify that it has been mounted:
$ sudo /bin/mount -t configfs none /sys/kernel/config/
$ mount|grep configfs
configfs on /sys/kernel/config type configfs (rw,relatime)
Now, create an NVMe target directory for the RAM drive under the target subsystem and change to that directory (which will host the NVMe target volume plus its attributes):
$ sudo mkdir /sys/kernel/config/nvmet/subsystems/nvmet-rd0
$ cd /sys/kernel/config/nvmet/subsystems/nvmet-rd0
Because this is an example of general usage, you do not necessarily care about which initiators (i.e., hosts) connect to the exported target:
$ echo 1 |sudo tee -a attr_allow_any_host > /dev/null
Next, create a namespace, change into the directory, set the RAM drive volume as the device for the NVMe target, and enable the namespace:
$ sudo mkdir namespaces/1
$ cd namespaces/1/
$ echo -n /dev/rd0 |sudo tee -a device_path > /dev/null
$ echo 1|sudo tee -a enable > /dev/null
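You can confirm that the namespace is wired up by reading the attributes back; from within namespaces/1, they should simply echo what was just written:

$ cat device_path
/dev/rd0
$ cat enable
1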
Now that you have defined the target block device, you need to switch your focus and define the target (network) port. To create a port directory in the NVMe target tree, change into that directory, and set the local IP address from which the export will be visible, enter:
$ sudo mkdir /sys/kernel/config/nvmet/ports/1
$ cd /sys/kernel/config/nvmet/ports/1
$ echo 10.0.0.185 |sudo tee -a addr_traddr > /dev/null
(The IP address in the last command will need to reflect your server configuration.) Now, set the transport type, port number, and protocol version:
$ echo tcp|sudo tee -a addr_trtype > /dev/null
$ echo 4420|sudo tee -a addr_trsvcid > /dev/null
$ echo ipv4|sudo tee -a addr_adrfam > /dev/null
Note that for any of this to work, both the target and the initiator need port 4420 open in their firewall rules.
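On a distribution running firewalld, for example, that might look like the following sketch (adapt for ufw, nftables, or whatever firewall you use):

$ sudo firewall-cmd --permanent --add-port=4420/tcp
$ sudo firewall-cmd --reload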
To tell the NVMe target tree that the port just created will export the block device defined in the subsystem section above, link the target subsystem to the target port, then verify the export in the kernel log:

$ sudo ln -s /sys/kernel/config/nvmet/subsystems/nvmet-rd0/ /sys/kernel/config/nvmet/ports/1/subsystems/nvmet-rd0
$ dmesg |grep "nvmet_tcp"
[14798.568843] nvmet_tcp: enabling port 1 (10.0.0.185:4420)
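Because these configfs steps are easy to get wrong when typed interactively, you might collect them into a script. The following sketch simply mirrors the commands above, with the backing device, subsystem name, and IP address pulled into variables you would adapt to your environment:

#!/bin/bash
# Sketch: export a RapidDisk volume over NVMe/TCP (mirrors the steps above).
set -e
DEV=/dev/rd0
NQN=nvmet-rd0
ADDR=10.0.0.185
SUBSYS=/sys/kernel/config/nvmet/subsystems/$NQN
PORT=/sys/kernel/config/nvmet/ports/1

sudo modprobe nvmet
sudo modprobe nvmet-tcp
mountpoint -q /sys/kernel/config || sudo mount -t configfs none /sys/kernel/config

# Subsystem, host access, and namespace
sudo mkdir -p $SUBSYS/namespaces/1
echo 1       | sudo tee $SUBSYS/attr_allow_any_host > /dev/null
echo -n $DEV | sudo tee $SUBSYS/namespaces/1/device_path > /dev/null
echo 1       | sudo tee $SUBSYS/namespaces/1/enable > /dev/null

# Network port (NVMe/TCP on 4420)
sudo mkdir -p $PORT
echo $ADDR | sudo tee $PORT/addr_traddr > /dev/null
echo tcp   | sudo tee $PORT/addr_trtype > /dev/null
echo 4420  | sudo tee $PORT/addr_trsvcid > /dev/null
echo ipv4  | sudo tee $PORT/addr_adrfam > /dev/null

# Publish the export by linking the subsystem to the port
sudo ln -s $SUBSYS $PORT/subsystems/$NQN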
Importing to a Remote Server
To use the RAM drive as if it were a locally attached device, move to a secondary server (i.e., the server that will connect to the exported target). Most modern distributions have the proper NVMe modules installed and available for use, so load the initiator (host-side) kernel modules:
$ sudo modprobe nvme
$ sudo modprobe nvme-tcp
Again, for any of this to work, both the target and the initiator need port 4420 open in their firewall rules.
Use the nvme command-line utility [3] to discover the NVMe target exported by the target server (Listing 8), connect to the target server and import the NVMe device it is exporting (in this case, you should see just the one),
Listing 8
Discover the NVMe Target
$ sudo nvme discover -t tcp -a 10.0.0.185 -s 4420

Discovery Log Number of Records 1, Generation counter 2
=====Discovery Log Entry 0======
trtype:  tcp
adrfam:  ipv4
subtype: nvme subsystem
treq:    not specified, sq flow control disable supported
portid:  1
trsvcid: 4420
subnqn:  nvmet-rd0
traddr:  10.0.0.185
sectype: none
$ sudo nvme connect -t tcp -n nvmet-rd0 -a 10.0.0.185 -s 4420
and verify that the NVMe subsystem sees the NVMe target (Listing 9).
Listing 9
Verify the NVMe Target Is Seen
$ sudo nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     S3ESNX0JA48075E      Samsung SSD 960 EVO 250GB                1          22.41 GB / 250.06 GB      512 B + 0 B      2B7QCXE7
/dev/nvme1n1     07b4753784e26c18     Linux                                    1           1.07 GB /   1.07 GB      512 B + 0 B      5.4.0-74
You will notice that the RAM drive is enumerated as the second NVMe drive in the list (i.e., /dev/nvme1n1). Now, verify that the volume is listed in the local device listing:
$ cat /proc/partitions |grep nvme
 259        0  244198584 nvme0n1
 259        1  244197543 nvme0n1p1
 259        3    1048576 nvme1n1
You are now able to read from and write to /dev/nvme1n1 as if the RAM drive were a locally attached device:
$ sudo dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.868222 s, 1.2 GB/s
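You could also repeat the earlier fio random write test against the imported volume to see what the network path costs; the following sketch reuses the same parameters (expect lower numbers than the local RAM drive test, and note that the test overwrites the device):

$ sudo fio --bs=4k --ioengine=libaio --iodepth=32 --size=500m --direct=1 --runtime=60 --filename=/dev/nvme1n1 --rw=randwrite --numjobs=1 --name=nettest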
Finally, type
$ sudo nvme disconnect -d /dev/nvme1n1
which will disconnect the NVMe target volume.
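Back on the target server, the export can be torn down by reversing the configuration steps once all initiators have disconnected; a sketch of the cleanup (configfs entries are removed with rm and rmdir), finishing with the removal of the RAM drive itself:

$ sudo rm /sys/kernel/config/nvmet/ports/1/subsystems/nvmet-rd0
$ echo 0 | sudo tee /sys/kernel/config/nvmet/subsystems/nvmet-rd0/namespaces/1/enable > /dev/null
$ sudo rmdir /sys/kernel/config/nvmet/subsystems/nvmet-rd0/namespaces/1
$ sudo rmdir /sys/kernel/config/nvmet/subsystems/nvmet-rd0
$ sudo rmdir /sys/kernel/config/nvmet/ports/1
$ sudo rapiddisk -d rd0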