Software-defined networking with Windows Server 2016
Virtual Cables
Faster Live Migration
Normally, the processor manages the control and use of data on a server. Each time a server service such as Hyper-V sends data over the network (e.g., during live migration), the processor is burdened by the operation, which costs computing capacity and time; above all, this affects users and the services they work with, which are in turn stored on the virtual servers, because the processor has to build and calculate the data packets for the network. To do so, it in turn needs access to the server's memory. Once a packet has been completely calculated, the processor directs it to a cache on the network card, where it waits until the network card transmits it to the destination server or the client. The same process takes place in reverse when data packets arrive at the server: When a packet reaches the network card, it is passed to the processor for further processing. These operations are very time consuming and require considerable processing power for the large amounts of data incurred, for example, when transferring virtual servers during live migration.
The solution to these problems is called Direct Memory Access (DMA). Simply put, it allows system components such as network cards to access the server's memory directly to read and write data, without involving the processor for every transfer. This relieves the processor and significantly shortens queues and operations, which in turn increases the speed of the operating system and of server services such as Hyper-V.
Remote DMA (RDMA) is an extension of this technology to include network functions. It allows memory content to be transmitted to another server on the network and lets a server access the memory of another machine directly. Microsoft had already integrated RDMA into Windows Server 2012 but improved it for Windows Server 2012 R2 and built it directly into Hyper-V.
The technology is expanded again in Windows Server 2016; above all, its performance is improved. The new Storage Spaces Direct can also use RDMA and even requires such network cards (see the "SDN for Storage Spaces Direct" box). Additionally, RDMA can be used together with SET for teaming network adapters in Hyper-V. Windows Server 2012/2012 R2 servers use the technology automatically when two such servers communicate on the network. RDMA significantly increases data throughput on the network, reduces latency when transmitting data, and also plays an important role in live migration.
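A quick way to see whether your hosts can take advantage of this is a short PowerShell check; the adapter name below is an assumption, and enabling SMB for live migration is just one possible way to let migrations benefit from RDMA, not a complete configuration:

# List adapters that report RDMA capability and whether it is enabled
Get-NetAdapterRdma

# Enable RDMA on a capable adapter (the adapter name "net01" is an example)
Enable-NetAdapterRdma -Name "net01"

# Let live migration use SMB, so SMB Direct (RDMA) is used when both hosts support it
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB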
SDN for Storage Spaces Direct
The SDN functions in Windows Server 2016 are optimized for use with Storage Spaces Direct. With this technology, the local data storage of Windows Server 2016 servers can be combined into a virtual storage pool that is available to all nodes in the cluster and can also be used for storing virtual servers and their data. In this case, the communication takes place over the network, so all components should support the new functions and connect to them optimally. To supplement this feature, a new Windows Server 2016 capability replicates entire disks to other data centers. This "storage replication" is ideal, for example, for geo-clustering, which also uses the new SDN functions of Windows Server 2016.
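Storage replication is driven by its own set of cmdlets once the Storage Replica feature is installed. The following sketch only illustrates the principle; the server names, replication group names, and volume letters are assumptions:

# Install the Storage Replica feature on both servers
Install-WindowsFeature Storage-Replica -IncludeManagementTools -Restart

# Replicate volume D: from srv-dc1 to srv-dc2, using G: as the log volume on both sides
New-SRPartnership -SourceComputerName srv-dc1 -SourceRGName rg01 `
  -SourceVolumeName D: -SourceLogVolumeName G: `
  -DestinationComputerName srv-dc2 -DestinationRGName rg02 `
  -DestinationVolumeName D: -DestinationLogVolumeName G: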
Also interesting is data center bridging (DCB), which controls data traffic in very large networks. If the network adapters in use offer the Converged Network Adapter (CNA) function, traffic to iSCSI disks or over RDMA can be handled more efficiently – even between different data centers. Moreover, you can limit the bandwidth that this technology uses.
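On RoCE-type adapters, data center bridging is typically configured with the NetQos cmdlets, and the SMB bandwidth limit feature caps what live migration traffic may consume. The priority value, bandwidth share, limit, and adapter name in this sketch are assumptions:

# Install DCB and tag SMB Direct traffic (TCP port 445) with priority 3
Install-WindowsFeature Data-Center-Bridging
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetAdapterQos -Name "net01"

# Limit the bandwidth that live migration over SMB may use
Install-WindowsFeature FS-SMBBW
Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 2GB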
For quick communication between servers based on Windows Server 2016 – especially between cluster nodes – the network cards in the servers must support RDMA, which is particularly worthwhile for very large amounts of data. In this context, Windows Server 2016 also improves the cooperation between different physical network adapters. In Windows Server 2012 R2, adapters could already be grouped into teams in Server Manager; this is no longer necessary in Windows Server 2016 because – as briefly mentioned earlier – SET lets you assign multiple network adapters when you create a virtual switch. You need SCVMM or PowerShell to do this. In this way, up to eight uplinks, including dual-port adapters, can be combined in a virtual switch.
In Windows Server 2016, RDMA-capable network adapters are grouped together directly in a Hyper-V switch on the basis of SET. The virtual network adapters of the individual hosts access the virtual switch directly, and the VMs also access the virtual switch directly and take advantage of the power of the virtual Hyper-V switch with SET (Figure 5).
SET in Practice
The PowerShell command used to create a SET switch is pretty simple:

New-VMSwitch -Name SETswitch -NetAdapterName "net01", "net02" -AllowManagementOS $True

The -AllowManagementOS parameter controls whether the management operating system may also use the virtual switch; if you don't want that, use -AllowManagementOS $False instead. Once you have created the virtual SET switch, you can retrieve its information with the Get-VMSwitch command, which also shows the different adapters that are part of the virtual switch. More detailed information can be found with the

Get-VMSwitchTeam <Name>

command, which displays the virtual switch's teaming settings (Figure 6). If you want to delete the switch again, use Remove-VMSwitch in PowerShell. Microsoft provides a guide [10] you can use to test the technique in detail.
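If you need to adjust the team after creating it, Set-VMSwitchTeam changes its properties; the switch name, member list, and load-balancing algorithm below are examples:

# Switch the load-balancing algorithm of the embedded team (HyperVPort or Dynamic)
Set-VMSwitchTeam -Name SETswitch -LoadBalancingAlgorithm Dynamic

# Change the physical members of the team
Set-VMSwitchTeam -Name SETswitch -NetAdapterName "net01", "net02", "net03"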
Because SET supports RDMA, the teamed adapters are also cluster-capable, and RDMA significantly increases the performance of Hyper-V on the network. The new switches support RDMA as well as Server Message Block (SMB) Multichannel and SMB Direct, which is activated automatically between servers running Windows Server 2016. The installed network adapters – and, of course, the teams and virtual switches built on them – need to support RDMA for this technology to be used between Hyper-V hosts. The network needs to be extremely fast for these operations to work optimally; adapters of the iWARP, InfiniBand, and RDMA over Converged Ethernet (RoCE) types are best suited.
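To combine SET and RDMA in practice, you create host vNICs on the SET switch, enable RDMA on them, and then check that SMB Direct and SMB Multichannel are in use. The switch and adapter names in this sketch are assumptions:

# Create two host vNICs on the SET switch for SMB and live migration traffic
Add-VMNetworkAdapter -ManagementOS -SwitchName SETswitch -Name SMB01
Add-VMNetworkAdapter -ManagementOS -SwitchName SETswitch -Name SMB02

# Enable RDMA on the resulting vEthernet adapters
Enable-NetAdapterRdma -Name "vEthernet (SMB01)", "vEthernet (SMB02)"

# Verify that SMB sees RDMA-capable interfaces and uses multiple channels
Get-SmbClientNetworkInterface
Get-SmbMultichannelConnection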
Hyper-V can leverage the SMB protocol even better in Windows Server 2016 and can, in this way, store the data of virtual servers on the network. The idea behind the technology is that companies no longer store virtual disks directly on the Hyper-V host or on storage from third-party manufacturers, but instead on a network share of a server with Windows Server 2016, possibly in combination with Storage Spaces Direct. Hyper-V can then access this share very quickly with SMB Multichannel, SMB Direct, and Hyper-V over SMB. High-availability features such as live migration also use this technology. The shared cluster disk no longer needs to reside in an expensive SAN; instead, a server – or better, a cluster – with Windows Server 2016 and sufficient disk space is enough. The configuration files of the virtual servers and any existing snapshots can also be stored on this server or cluster.
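A minimal sketch of Hyper-V over SMB, assuming a file server named fileserver and a Hyper-V host in the same domain (the share, VM, path, and account names are examples):

# On the file server: create a share for VM storage and grant the Hyper-V host access
New-SmbShare -Name VMs -Path D:\VMs -FullAccess "contoso\hv01$", "contoso\Domain Admins"

# On the Hyper-V host: place a new VM and its virtual disk directly on the SMB share
New-VM -Name srv-test -Generation 2 -MemoryStartupBytes 2GB `
  -Path \\fileserver\VMs -NewVHDPath \\fileserver\VMs\srv-test\srv-test.vhdx -NewVHDSizeBytes 60GB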
Conclusions
Microsoft now provides a tool for virtualizing network functions, in addition to servers, storage, and clients. The standard Network Controller in Windows Server 2016 allows numerous configurations that would otherwise require paid third-party tools and promises to become a powerful service for the centralized management of networks. However, this service is only useful if your setup is completely based on Windows Server 2016, including Hyper-V, in the network. Setting up the service and connecting the devices certainly won't be simple, but companies that meet this requirement will benefit from the new service – including the parallel use of IPAM, which can also be connected to the Network Controller. Together with Hyper-V network virtualization and SET, networks can be designed much more efficiently and clearly in Hyper-V. Additionally, the performance of virtual servers increases, as does the speed of transfers during live migration.
System Center 2016 is necessary so that SCVMM and Operations Manager can make full use of – and monitor – the new functions. However, this doesn't make network management easier, because the Network Controller and System Center also need to be mastered and controlled.
Infos
[1] Network Controller configuration files: https://github.com/Microsoft/SDN/tree/master/VMM/Templates/NC
[2] Step-by-step guide for deploying a SDNv2 using VMM: https://blogs.technet.microsoft.com/larryexchange/2016/05/30/step-by-step-for-deploying-a-sdnv2-using-vmm-part-2/
[3] Border Gateway Protocol: https://technet.microsoft.com/library/dn614183.aspx
[4] Windows Server 2012 R2 RRAS Multitenant Gateway Deployment Guide: https://technet.microsoft.com/library/dn641937.aspx
[5] Windows Server Gateway: https://technet.microsoft.com/library/dn313101.aspx
[6] Deploy Network Controller using Windows PowerShell: https://technet.microsoft.com/en-us/library/mt282165.aspx
[7] PowerShell cmdlets for the Network Controller: https://technet.microsoft.com/itpro/powershell/windows/network-controller/index
[8] Datacenter abstraction layer: https://technet.microsoft.com/en-us/library/dn265975(v=ws.11).aspx
[9] Script control of compatible devices: https://blogs.msdn.microsoft.com/powershell/2013/07/31/dal-in-action-managing-network-switches-using-powershell-and-cim/
[10] SET User Guide: https://gallery.technet.microsoft.com/Windows-Server-2016-839cb607