Lead Image © Jakub Jirsak, 123RF.com
SDS configuration and performance
Put to the Test
Technologies initially used in large data centers frequently end up in smaller companies' networks or even, as in the case of virtualization, on ordinary users' desktops. The same process can be observed for software-defined storage (SDS).
SDS essentially combines the hard drives of multiple servers into one large, redundant storage pool. The idea is that users of the storage do not need to know which specific hard drive holds their data. Equally, if individual components fail, users can trust that the system has stored the data redundantly enough to keep it accessible at all times.
This technology makes little sense in an environment with only one file server, but it is much more useful in large IT environments, where the scenario can be implemented professionally with dedicated servers and combinations of SSDs and traditional hard drives. Usually, the components connect to each other and to the clients over a 10Gb network.
However, SDS can also add value in small and medium-sized businesses where several servers are sitting on idle disk space. In this case, it can be worthwhile to combine that space into a redundant array with a distributed filesystem, as the sketch below illustrates.
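As a concrete illustration, the following GlusterFS commands pool one brick from each of two servers into a replicated volume. This is only a minimal sketch: the hostnames (server1, server2), brick path, and volume name are placeholders, and the exact steps vary by distribution and GlusterFS version.

```bash
# Minimal sketch: pool spare disk space from two servers into one
# redundant GlusterFS volume. Hostnames, the brick path, and the
# volume name (vol0) are hypothetical.

# On both servers: start the GlusterFS daemon
# (package names differ between distributions).
systemctl enable --now glusterd

# On server1: add the second server to the trusted pool.
gluster peer probe server2

# Create a volume that keeps two copies of every file,
# one on each server, then start it.
gluster volume create vol0 replica 2 \
    server1:/bricks/brick1 server2:/bricks/brick1
gluster volume start vol0

# On a client: mount the volume like an ordinary filesystem.
mount -t glusterfs server1:/vol0 /mnt/storage
```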
Candidate Lineup
Linux admins can immediately access several variants of such highly available, distributed filesystems. Well-known examples include GlusterFS [1] and Ceph [2], whereas LizardFS [3] is relatively unknown. In this article, I analyze the three systems and compare their read and write speeds in benchmarks on the test network.
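To give an idea of how such read and write measurements can be taken, the following fio invocation is a minimal sketch of a sequential write test against a mounted volume. The mountpoint, file size, and job parameters are assumptions for illustration, not the exact setup used in the tests.

```bash
# Sketch of a sequential write benchmark with fio; the mountpoint
# (/mnt/storage) and sizes are hypothetical. For read speeds,
# replace --rw=write with --rw=read.
fio --name=seqwrite \
    --directory=/mnt/storage \
    --rw=write \
    --bs=1M \
    --size=2G \
    --numjobs=1 \
    --direct=1 \
    --group_reporting
```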
Speed may be key for filesystems, but a number of other features are of interest, too. For example, depending on your use of the filesystem, sometimes sequential writing, sometimes ...