Virtual environments in Windows Server
Virtual Windows
Hardware Changes on the Fly
The virtual hardware that Hyper-V provides for VMs will be even more flexible in the new release. On Windows Server 2012 R2, it was already possible to add, remove, or resize virtual hard drives while the VM was running. In the future, you will also be able to increase or decrease statically assigned VM memory at run time. Similarly, the sys admin can install or remove virtual NICs during operation. Almost all of a VM's resources can thus be changed without a restart; the only remaining exception is the virtual processors.
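In the preview, these operations map onto the familiar Hyper-V cmdlets. The following commands are a sketch of what hot-adjusting memory and NICs looks like; the VM, switch, and adapter names are placeholders, and the exact syntax may still change before release:

```powershell
# Resize the static memory of a running VM (no dynamic memory required)
Set-VMMemory -VMName "web01" -StartupBytes 8GB

# Hot-add a second virtual NIC and attach it to an existing virtual switch
Add-VMNetworkAdapter -VMName "web01" -SwitchName "External" -Name "Backup-NIC"

# ...and remove it again while the VM keeps running
Remove-VMNetworkAdapter -VMName "web01" -Name "Backup-NIC"
```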
Under the hood, Microsoft has also pushed Hyper-V development forward. One piece of evidence is the new file format for VM configurations, which is no longer XML-based but binary. For this, the next release of Hyper-V introduces a new configuration version for VMs, which leads to another change. Previously, the hypervisor automatically updated virtual machines that admins imported or migrated from an older system to the new version. As a result, it was impossible to move these VMs back to the previous host server.
In the future, Hyper-V will no longer perform this step automatically, thereby keeping the VM backward compatible. If you encounter obstacles during a migration, you can push the already transferred VMs back to the old hosts and keep working. Once you are certain that everything works, you can convert the VMs manually. After that, however, there is no easy way back; only then can the VM use all the new features.
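In the preview, this manual, one-way conversion is exposed as a cmdlet. The following sketch assumes a placeholder VM name, and the cmdlet details may change before release:

```powershell
# Check which configuration version each VM currently uses
Get-VM | Select-Object Name, Version

# Irreversibly upgrade a single VM to the new configuration version
Update-VMVersion -Name "web01"
```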
Updating a Cluster Gradually
This parallel operation of old and new VM versions is indicative of a change in the cluster configuration of the new Windows Server. Rolling upgrades are now possible for Hyper-V and file server clusters. To migrate a larger server cluster to the new version of Windows, you will be able to update individual servers without removing them from the cluster. You can therefore upgrade a high-availability environment step by step. This reduces the migration risk and relieves budget pressures because, unlike today, new hardware might not be required. The running VMs keep the existing configuration version during the cluster upgrade and can be moved between the old and new hosts. Right at the end, you then update the VMs manually.
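In the preview, the closing steps of such a rolling upgrade are driven from PowerShell. The following is a sketch; details may change before release:

```powershell
# Once every node runs the new Windows Server, raise the cluster
# functional level; until this step, old and new nodes coexist
Update-ClusterFunctionalLevel

# Afterward, upgrade the VM configuration versions manually
Get-VM | Update-VMVersion
```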
If you want to run Windows Server in a high-availability environment, you might also want to distribute that environment geographically. It could be useful, for example, to distribute a Hyper-V cluster across two server rooms or data centers. One part of the hosts then runs at one location and the rest at the second. If one site fails completely because of a power outage, fire, or water damage, for example, operations can continue in the second room.
One challenge in such setups is the witness, often referred to as the quorum. This is usually an additional server that must remain accessible in the event of a failure to ensure that the cluster continues to operate correctly. Many companies provide this server at a third location, so that it is not affected by the failure of either main site. Without a third, independent server room, however, this setup was previously impossible to implement.
The new Windows Server will be able to outsource the witness to a service in the cloud (Figure 2). The highlight is that it does not have to be a full-fledged server VM; a simple blob in Microsoft's Azure storage will suffice, and the Azure storage service itself provides the required availability. This lets you operate the main and backup data centers for your cluster yourself while the witness runs independently in the cloud. Microsoft naturally prefers Azure here; whether other cloud services will be supported in the future is not yet known.
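Configuring the cloud witness is expected to be a one-liner. The following sketch assumes an existing Azure storage account; the account name and key are placeholders, and the syntax may change before release:

```powershell
# Point the cluster quorum at an Azure storage account as cloud witness
Set-ClusterQuorum -CloudWitness -AccountName "mystorageacct" `
    -AccessKey "<storage-account-access-key>"
```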
High-Availability Storage
Even away from virtualization, other innovations can be seen. In Windows Server 2012, Microsoft began significantly expanding the storage capabilities of the operating system to position Windows Server as a storage system. Along with the expansion of the SMB (Server Message Block) file server protocol, in version 3.0, into an infrastructure for data centers, Storage Spaces are also worthy of note.
This form of storage definition groups arbitrary storage devices, such as hard drives of different types or even SSDs, into a pool. With advanced techniques such as redundancy and storage tiering, this pool can provide even the most demanding server applications with powerful storage. Tiering automatically moves frequently used data blocks to fast SSDs, while less frequently accessed data remains on cheaper disks.
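A tiered storage space of this kind can be built with the Storage Spaces cmdlets. The following is a sketch; the pool, tier, and volume names, as well as the sizes, are placeholders:

```powershell
# Group all disks that are eligible for pooling into one pool
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "*Storage*" -PhysicalDisks $disks

# Define an SSD tier and an HDD tier within the pool
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "SSD-Tier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "HDD-Tier" -MediaType HDD

# Create a mirrored virtual disk that spans both tiers
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Data" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 900GB `
    -ResiliencySettingName Mirror
```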
Microsoft's stated goal is that storage servers running Windows will be able to work with low-budget hard disks, thus replacing expensive SAN storage. So far, the technology cannot do this on a large scale. In the Technical Preview, however, Redmond has added an important building block: The Storage Replica (SR) function can replicate data from one server to a second on the fly. This lets administrators build high-availability systems that continue running without data loss even after a server fails.
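In the Technical Preview, such a replication relationship is set up per volume pair. The following sketch assumes two servers, each with a data volume and a separate log volume of matching size; all server, group, and drive names are placeholders, and the syntax may change before release:

```powershell
# Replicate volume D: from srv01 to srv02, using L: as the log volume
New-SRPartnership -SourceComputerName "srv01" -SourceRGName "rg01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "srv02" -DestinationRGName "rg02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"
```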