Software-defined storage with LizardFS

Designer Store

What the Clients Do

From the perspective of its clients, LizardFS provides a POSIX filesystem, which, like NFS, mounts under /mnt. Admins should not expect too much in terms of account management: LizardFS does not support users and groups that come from LDAP or Active Directory. Any client on the LAN can mount any share. If you want to restrict access, you can only make individual shares read-only or define access control lists (ACLs) on the basis of network segments, individual IPs, or both. You can also define one (!) password for all users.

Although the server components mandate Linux as the basis, Linux clients can access the network filesystem by installing the lizardfs-client package, and Windows machines can do so with a proprietary tool from Skytechnology. The configuration file for LizardFS exports is visibly modeled on the /etc/exports file known from NFS and adopts a goodly number of its well-known parameters.
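
As a rough sketch of what such an exports file can look like, the master's mfsexports.cfg follows the familiar one-line-per-rule pattern, and a Linux client then mounts a subtree with mfsmount. The subnets, paths, and password here are placeholders, and the config file location can differ between distributions:

  # mfsexports.cfg on the master: <address range> <path> <options>
  192.168.10.0/24  /         rw,alldirs,maproot=0
  192.168.20.0/24  /backups  ro
  10.0.0.0/8       /         rw,password=secret

  # on a Linux client (lizardfs-client package): mount the /backups subtree
  mfsmount -H mfsmaster -S /backups /mnt/lizardfs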

When a user or an application accesses a file in the storage pool, the LizardFS client contacts the current master, which has a list of the current chunk servers and the data hosted there. The master randomly presents a suitable chunk server to the client, which then contacts the chunk server directly and requests the data (Figure 2).

Figure 2: A read operation under LizardFS (source: Skytechnology).

The client also talks to the master for write operations, because the LizardFS balancing mechanisms may have moved the file in question in the meantime or the previous chunk server may have gone offline, for example. In the default configuration, the master again picks a chunk server to store the data more or less at random, with the proviso that all servers in the pool remain evenly loaded.

Once the master has selected a chunk server for the write operation and delivered its name to the client, the client sends the data to the target chunk server (Figure 3), which then confirms the write and – if necessary – triggers a replication of the newly written data. The chunk server thus ensures that a replication target is accessed as quickly as possible. A response to the LizardFS client signals the success of the write operation. The client then ends the open write session by informing the master of the completed operation.

Figure 3: A write operation under LizardFS (source: Skytechnology).
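
If you want to verify where the chunks of a freshly written file actually ended up, the client-side tools can report the copies per chunk. The following one-liner is a sketch that assumes the pool is mounted under /mnt/lizardfs and that the file path is made up; the lizardfs command wraps the older mfs* tools, so mfsfileinfo does the same job:

  # list each chunk of the file and the chunk servers holding a copy
  lizardfs fileinfo /mnt/lizardfs/projects/data.img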

During both read and write operations, the LizardFS client observes a topology, if configured. Where it makes sense, the client prefers a nearby chunk server over one that would mean higher latency. Nearby and far away can be defined on the basis of labels (such as kansas, rack1, or ssd) or by reference to the IP addresses of the nodes.
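
Labels are assigned on the chunk servers themselves. The snippet below is a sketch of the relevant line in the chunk server configuration; the file path and the service name are assumptions that may differ depending on your distribution and package version:

  # /etc/mfs/mfschunkserver.cfg on a chunk server equipped with SSDs
  LABEL = ssd

  # make the chunk server pick up the change
  systemctl restart lizardfs-chunkserver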

Topologies and Goals

By nature, LizardFS runs on multiple servers, although in terms of functionality, it makes no difference whether these are virtual or dedicated machines. Each server can take on any role, and roles can change later, although specialization often makes sense: Chunk servers belong on hosts with large, fast hard drives, whereas the master server primarily needs CPU power and memory. The metadata backup logger, with its modest requirements, is happy on a small virtual machine or a backup server.
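
On Debian and Ubuntu, the roles map to separate packages, so a node's job is decided simply by what you install. The package names in this sketch match the upstream repositories but may vary slightly elsewhere:

  # metadata server (master or shadow master)
  apt install lizardfs-master
  # storage node
  apt install lizardfs-chunkserver
  # metadata backup logger on a small VM or backup host
  apt install lizardfs-metalogger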

LizardFS uses predefined replication objectives (goals) to replicate the data as often as required between targets (i.e., the data is both redundant and fault-tolerant). If a chunk server fails, the data is always available on other servers. Once the defective system is repaired and added back into the cluster, LizardFS automatically redistributes the files to meet the replication targets.
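
Goals are defined centrally on the master. The following excerpt is a sketch of a goal definition file in the style of mfsgoals.cfg; the goal names are arbitrary, and the underscore acts as a wildcard for any chunk server:

  # <id> <name> : <label list>  -- one copy per listed label
  1 scratch   : _            # a single copy, anywhere
  2 default   : _ _          # two copies on any chunk servers
  3 important : _ _ _        # three copies on any chunk servers
  4 fast      : ssd ssd      # two copies, both on servers labeled "ssd"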

Admins configure the goals on the server side; alternatively, clients may also set replication goals themselves. A client can thus store files on a mounted filesystem normally (i.e., redundantly), for example, while marking temporary files to be stored without replicated copies. Giving the client some leeway in assigning goals to files adds a welcome degree of flexibility.
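
On a mounted share, assigning a goal is a matter of one command per file or directory. The paths below are made up, and the goal names refer to the sample definitions sketched above; the lizardfs CLI wraps the older mfssetgoal and mfsgetgoal tools:

  # three copies for everything under projects/
  lizardfs setgoal -r important /mnt/lizardfs/projects
  # scratch data is expendable: a single copy is enough
  lizardfs setgoal -r scratch /mnt/lizardfs/tmp
  # check what is currently assigned
  lizardfs getgoal /mnt/lizardfs/tmp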

Chunk servers can be fed the topologies of your own data center so that LizardFS knows whether the chunk servers reside in the same rack or cage. If you configure the topologies intelligently, replication traffic only flows between adjacent switches or stays within a colocation facility. The fact that LizardFS also passes the topologies on to the clients further helps locality.
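
A minimal sketch of such a topology, assuming the master reads an mfstopology.cfg-style file that maps IP ranges to switch or rack IDs, could look like this; addresses that share an ID count as local to each other:

  # mfstopology.cfg on the master: <address range> <switch/rack id>
  192.168.10.0/24  1    # rack 1
  192.168.20.0/24  2    # rack 2
  10.1.0.0/16      3    # the colocation cage next door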

If you want to use LizardFS setups across data center boundaries, you will also want to use topologies as a basis for georeplication and tell LizardFS which chunk servers are located in which data center. The clients that access the storage pool can thus be motivated to prefer the chunk servers from the local data center to remote chunk servers.
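
Combined with labels, that is essentially all georeplication needs: label the chunk servers per site and define a goal that places one copy in each data center. The label and goal names in this sketch are placeholders:

  # mfschunkserver.cfg on the servers in data center A and B, respectively
  LABEL = dcA
  LABEL = dcB

  # mfsgoals.cfg on the master: one copy in each site
  5 georeplicated : dcA dcB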

Furthermore, users and applications themselves do not need to worry about replication. Correct georeplication is a natural consequence of properly configured topologies, and the setup automatically synchronizes the data between two or more sites.

Scale and Protect

If the free space in the storage pool is running low, you can add another chunk server. Conversely, if you want to remove a server, its data first needs to find a home elsewhere. If a chunk server fails, LizardFS automatically rebalances and offers clients that are currently accessing their data alternative chunk servers as data sources. The master servers scale much like the chunk servers: If two masters are no longer enough (e.g., because a new location is added), you simply set up two more masters at that location.
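
Adding a chunk server is correspondingly undramatic. As a sketch, assuming Debian-style packages and default config paths, a new storage node only needs to know its disks and its master before it joins the pool:

  # /etc/mfs/mfshdd.cfg: one line per disk or mountpoint to contribute
  /srv/lizardfs/disk1
  /srv/lizardfs/disk2

  # /etc/mfs/mfschunkserver.cfg: where to find the metadata server
  MASTER_HOST = mfsmaster

  # start the service; the master then balances chunks onto the new node
  systemctl start lizardfs-chunkserver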

If you want data protection on top of fail-safe and fault-tolerant data, you do need to take additional measures. Although LizardFS sometimes distributes files as stripes across multiple servers, this does not offer many benefits in terms of security and privacy. LizardFS does not have a built-in encryption mechanism, which is a genuine drawback.

You need to ensure that at least the local filesystems on the chunk servers are protected (e.g., by encrypting the hard disks or using LUKS-encrypted filesystem containers). This approach helps in the case of hardware theft, of course, but anyone who runs a LizardFS client on the LAN can still access the data on the chunk servers.
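
A minimal LUKS sketch, assuming a dedicated data disk /dev/sdb on the chunk server and an XFS filesystem for the chunk directory (device and mountpoint are placeholders), could look like this:

  # encrypt the disk, open it, and put a filesystem on the mapped device
  cryptsetup luksFormat /dev/sdb
  cryptsetup open /dev/sdb chunks
  mkfs.xfs /dev/mapper/chunks
  # mount it where the chunk server expects its chunk directory
  mount /dev/mapper/chunks /srv/lizardfs/disk1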

In a talk on upcoming LizardFS features at the beginning of November 2016, Skytechnology's Chief Satisfaction Officer, Szymon Haly, reported that his company definitely sees the need for encryption features. A new LizardFS version planned for the first quarter of 2017 was slated to support encrypted files and folders. Haly did not reveal the type of encryption or whether the new version would also use secure communication protocols.
