Lead Image © liubomirt, 123RF.com

Storage innovations in Windows Server 2016

Employee of the Month

Article from ADMIN 34/2016
The upcoming release of Windows Server 2016 introduces major innovations in storage. With built-in storage replication, Storage Spaces Direct, and QoS-based traffic shaping for storage access, Windows Server looks like a good candidate for employee of the month.

Windows Server 2016 offers many advances in network storage. To understand what is happening in Microsoft storage now, it is best to start with a recap of some innovations that arrived in Windows Server 2012.

With the Windows Server 2012 release, Microsoft first unveiled an option for setting up a file server for application data using only on-board tools. This feature assumes two to eight servers that run a file server in a failover cluster and thus offer high availability. The storage can be either SAS disks in enclosures or logical unit numbers (LUNs) attached over a Fibre Channel storage area network (FC SAN) or via the Internet Small Computer System Interface (iSCSI). This storage is then provided over the network to application servers, such as Hyper-V or SQL Server, with SMB version 3 as the protocol.
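
A minimal PowerShell sketch of such a setup, assuming two prepared file server nodes (the cluster, node, role, share, and account names here are hypothetical placeholders):

# Create the failover cluster from two file server nodes
New-Cluster -Name FS-CL01 -Node FS-NODE1, FS-NODE2 -StaticAddress 10.0.0.50

# Add the Scale-Out File Server role for application data
Add-ClusterScaleOutFileServerRole -Name SOFS01

# Publish a continuously available SMB 3 share on a Cluster Shared Volume
New-SmbShare -Name VMStore -Path C:\ClusterStorage\Volume1\VMStore `
  -FullAccess "CONTOSO\HyperV-Hosts" -ContinuouslyAvailable $true

Hyper-V hosts could then place their virtual disks on \\SOFS01\VMStore and survive the failure of a single file server node.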

In Windows Server 2012 R2, Microsoft added the ability to mix SSDs and HDDs in a single storage pool for better performance. This technology, known as tiering, automatically moves frequently used data in 1MB chunks to the fast disks (SSDs) during operation, while data that sees little or no use stays on the HDDs. This technique lets admins build high-performance, highly available, and economically attractive storage solutions.
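
A minimal PowerShell sketch of such a tiered storage space, assuming a set of poolable SAS disks (the pool, tier, and virtual disk names plus the tier sizes are hypothetical):

# Pool all physical disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName Pool1 `
  -StorageSubSystemFriendlyName "*Storage Spaces*" -PhysicalDisks $disks

# Define one storage tier per media type
$ssd = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName SSDTier -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName HDDTier -MediaType HDD

# Create a mirrored virtual disk spanning both tiers; frequently
# used data migrates to the SSD tier automatically
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName TieredSpace `
  -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 900GB `
  -ResiliencySettingName Mirror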

If you are using SSDs, 1GB of the available space is used as a write-back cache by default. This reduces the latency of write operations and their negative performance impact on other file operations. Other new features in Windows Server 2012 R2 were support for parity disks in the failover cluster, the use of dual parity (similar to RAID 6), and the ability to automatically repair or recreate storage spaces given sufficient free space in the pool. This repair feature removed the need for "hot spare" media: The free space on the remaining functional disks is used instead to restore data integrity.
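
As a sketch, the write-back cache can also be sized explicitly when a space is created, and a degraded space can be repaired on demand (the pool, space, and disk names are hypothetical; -WriteCacheSize overrides the 1GB default):

# Create a parity space with a larger write-back cache
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName ParitySpace `
  -ResiliencySettingName Parity -Size 2TB -WriteCacheSize 2GB

# Retire a failed disk and let the pool rebuild onto free space
Get-PhysicalDisk -FriendlyName PhysicalDisk3 | Set-PhysicalDisk -Usage Retired
Repair-VirtualDisk -FriendlyName ParitySpace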

IOPS with Storage Spaces Direct

The upcoming

...
