Storage innovations in Windows Server 2016
Employee of the Month
Goodbye NTFS
The New Technology File System (NTFS) has more or less reached the end of its useful life as a filesystem for storing data on Windows Server 2016 systems. Microsoft has announced the Resilient File System (ReFS) as the new default filesystem for virtualization workloads. By writing and verifying checksums, ReFS protects against logical errors and silently corrupted bits ("bit rot"), even on very large volumes. The filesystem checksums its metadata automatically, and you can optionally extend this protection to the data on the volume, so that corrupt data can be detected and corrected if necessary. Checking and repairing, which previously meant scheduling downtime for CHKDSK, now happens on the fly.
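You can inspect and toggle this per-file data protection (integrity streams) with the Storage module cmdlets; the path below is a placeholder for a file on a ReFS volume:

# Check whether integrity streams are enabled for a file on a ReFS volume.
Get-FileIntegrity -FileName 'E:\VMs\server01.vhdx'

# Extend checksum protection to the file's data, not just the metadata.
Set-FileIntegrity -FileName 'E:\VMs\server01.vhdx' -Enable $true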
When you use ReFS as storage for Hyper-V VMs, there are further improvements. Creating a VHDX file with a fixed size now takes only a few seconds: unlike before, the operation does not first fill the volume with zeros; instead, a metadata operation in the background marks the entire space as occupied in a very short time. Merging Hyper-V checkpoints (known as snapshots in Windows Server 2012) no longer requires moving data, either, because the merge is also a metadata operation. Even checkpoints several hundred gigabytes in size can be merged in very little time, which dramatically reduces the load on other VMs residing on the same volume.
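An easy way to see this effect is to time the creation of a large fixed-size VHDX on a ReFS volume; the path and size are placeholders:

# On ReFS, creating a fixed-size VHDX is a metadata operation,
# so even 100GB should complete within seconds.
Measure-Command {
    New-VHD -Path 'E:\VMs\disk01.vhdx' -Fixed -SizeBytes 100GB
}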
Better Storage Management
In Windows Server 2012 R2, IT managers can only restrict individual VHD or VHDX media. This changes fundamentally with Storage Quality of Service (QoS) in Windows Server 2016, which significantly expands the options for measuring and limiting the performance of an environment. With Hyper-V (usually in the form of a Hyper-V failover cluster with multiple nodes) and a scale-out file server, the entire environment can be monitored and controlled.
By default, the system ensures that a single VM cannot grab all the resources for itself and thus paralyze all other VMs (the "noisy neighbor" problem). As soon as a VM is stored on the scale-out file server, the system starts logging its performance. You can then retrieve these values with the Get-StorageQosFlow PowerShell cmdlet, which lists all VMs along with the measured values. These values can serve as a basis for adapting the environment, say, to restrict a particular VM.
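For example, the following call shows the busiest flows first (property names such as StorageNodeIOPs reflect the current preview builds):

# List all monitored VM flows, sorted by their current normalized IOPS.
Get-StorageQosFlow |
    Sort-Object StorageNodeIOPs -Descending |
    Format-Table InitiatorName, InitiatorNodeName, StorageNodeIOPs, Status -AutoSize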
In addition to listing the performance of your VMs, you can configure rules that govern the use of resources. These rules regulate either individual VMs or groups of VMs by setting an IOPS limit or an IOPS guarantee. If you define, say, a limit of 1,000 IOPS for a group, all of its VMs together cannot exceed this limit; if five of six VMs are consuming virtually no resources, the sixth VM can claim the remaining IOPS for itself. Among others, this scenario targets hosting providers and large-scale environments that want to assign each customer the same amount of I/O performance or to control it to reflect billing. Within a storage cluster, you can define up to 10,000 rules that ensure the best possible operation of the cluster, thus avoiding bottlenecks.
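A group rule of this kind could look as follows. The policy and VM names are placeholders, and the policy type names (MultiInstance for a shared limit, SingleInstance for a per-VM limit in the current previews) may still change before release:

# Create a shared 1,000 IOPS limit for a group of VMs
# (run against the scale-out file server cluster).
$policy = New-StorageQosPolicy -Name WebFarm -PolicyType MultiInstance -MaximumIops 1000

# Assign the policy to all virtual disks of the VMs in the group
# (run on the Hyper-V host).
Get-VM -Name Web01, Web02, Web03 | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId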
Technically, Storage QoS relies on a Policy Manager in the scale-out file server cluster, which is responsible for centrally monitoring storage performance. The Policy Manager can run on any of the cluster nodes; you do not need a separate server. Each node also runs an I/O scheduler, which handles communication with the Hyper-V hosts. A rate limiter also runs on each node; it communicates with the I/O scheduler, receiving reservations or limits from it and enforcing them (Figure 2).
Every four seconds, the rate limiters on the Hyper-V and storage hosts update their values, and the QoS rules are adjusted if required. IOPS are counted as "normalized IOPS," in which each operation is measured in 8KB units: an operation smaller than 8KB still counts as one normalized IOPS, and a 32KB operation counts as four.
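The normalization arithmetic is easy to reproduce; the following helper function is purely illustrative and not part of the Storage QoS cmdlets:

# Every started 8KB unit counts as one normalized IOPS.
function Get-NormalizedIops {
    param([long]$IoSizeBytes)
    [math]::Ceiling($IoSizeBytes / 8KB)
}

Get-NormalizedIops 4KB    # 1 - small operations still count in full
Get-NormalizedIops 32KB   # 4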
Because monitoring takes place automatically when Windows Server 2016 is used as a scale-out file server, you can determine very quickly what kind of load your storage is exposed to and what resources each of your VMs requires. Once the final version of Windows Server 2016 is released next year, upgrading your scale-out file servers will give you this monitoring, along with the improvements in the background, and help you optimize your Hyper-V/Scale-Out File Server (SOFS) environment.
If you do not use scale-out file servers and have no plans to change, you can still benefit from the Storage QoS functionality. According to Senthil Rajaram, Microsoft program manager in the Hyper-V group, this feature is being introduced for all types of CSV (Cluster Shared Volume) storage. This means you will also be able to configure an IOPS limit or reservation if you use an iSCSI or Fibre Channel SAN.
Organization is Everything
The currently available version of Storage Spaces offers no way to reorganize data across the disks of a pool. Such a feature would be useful, for example, after a disk failure. If a disk fails, either a hot spare disk takes its place, or the free space within the pool is used to repair the mirror (the latter is generally preferable to dedicating one or more hot spare disks). Once the defective medium has been replaced, however, there is no way to redistribute the data so that the new disk is filled to the same level as the others.
In Windows Server 2016, you can trigger this reorganization with the Optimize-StoragePool cmdlet. The data within the specified pool is analyzed and rearranged so that all disks end up at a similar fill level after the procedure completes. If another disk then fails, all the remaining disks, and their free space, are available for restoring the mirror.
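A minimal sketch, assuming a pool with the placeholder name Pool01 after a failed disk has been replaced:

# Rebalance the data evenly across all disks in the pool.
Optimize-StoragePool -FriendlyName 'Pool01'

# The rebalance runs as a background job; check its progress with:
Get-StorageJob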