Lead Image © SV-Luma, 123RF.com

Live migration of virtual machines with KVM

Movers

Article from ADMIN 33/2016
Live migration of virtual machines is necessary to achieve high-availability setups and load distribution.

The KVM hypervisor has been a powerful alternative to Xen and VMware in the Linux world for several years. To make the virtualization solution suitable for enterprise use, the developers are continually integrating new and useful features. An example of this is live migration of virtual machines (VMs).

On Linux systems, the kernel-based virtual machine (KVM) hypervisor built into the kernel gained the upper hand long ago, because KVM is available on all common Linux distributions without additional configuration overhead, and major distributors such as Red Hat and SUSE are continuously working on improving it. For most users, the upper performance limits in terms of memory usage and number of VMs are probably less interesting than the bread-and-butter features that competitor VMware already offered while KVM was still in its infancy.

Meanwhile, KVM has caught up: For several years now, it has offered live migration of virtual machines [1], a prerequisite for reliably virtualizing services. This feature lets admins move VMs, along with the services running on them, from one server to another, which is useful for load balancing or hardware updates. High-availability services can also be realized more easily with the help of live migration. Live migration means that the virtual machines just keep running, with interruptions kept to a minimum; ideally, clients using the VM as a server will not even notice the migration.

Live migration is implemented with some sophisticated technology intended to ensure that, on one hand, the interruption to service is as short as possible and, on the other hand, that the state of the migrated machine exactly corresponds to that of the original machine. To do this, the hypervisors involved begin transferring the RAM while monitoring the activities of the source machine. If the remaining changes are small enough to be transferred in the allotted time, the source VM is paused, the outstanding memory pages are copied to the target, and the machine is resumed there.

Shared Storage Required

A prerequisite for the live migration of KVM machines is that the disks involved reside on shared storage; that is, they use a data repository that is shared between the source and target hosts. This can be, for example, NFS, iSCSI, or Fibre Channel, but it can also be a distributed or clustered filesystem such as GlusterFS or GFS2. Libvirt, the abstraction layer for managing various hypervisors on Linux, manages data storage in storage pools. In addition to the technologies just mentioned, a pool can be backed, for example, by conventional disks, ZFS pools, or Sheepdog clusters.

The different technologies for shared storage offer various benefits and require more or less configuration overhead. If you use storage appliances from a well-known manufacturer, you can use the vendor's network storage protocols (e.g., iSCSI, Fibre Channel), or you can stick with NFS. NetApp Clustered ONTAP, for example, also offers support for pNFS, which newer Linux distributions such as Red Hat Enterprise Linux and CentOS 6.4 support.

In contrast, iSCSI can be used redundantly thanks to multipathing, but again the network infrastructure to support this must be available. Fibre Channel is the most complex in this respect, because it requires a dedicated storage area network (SAN). There is also a copper-based alternative in the form of FCoE (Fibre Channel over Ethernet), which requires suitable converged network adapters. An even more exotic solution is SCSI RDMA over InfiniBand or iWARP.

Setting Up the Test Environment with NFS

For the first tests, you can set your sights a whole order of magnitude lower and use NFS; after all, this kind of storage can be implemented with built-in tools. For example, using a Linux server running CentOS 7, you can set up an NFS server very quickly. Because the NFS server is implemented in the Linux kernel, only the rpcbind daemon and the tools required for managing NFS are missing, and they can be installed via the nfs-utils package. The /etc/exports file lists the directories "exported" by the server. The following command starts the NFS service:

systemctl start nfs

A call to journalctl -xn shows the success or failure of the call. Using showmount -e lists the currently exported directories. The syntax of the exports file is simple in principle: The exported directory is followed by the IP address or hostname of the machines allowed to mount the directory, followed by options that, for example, determine whether the shares are read-only or writable. That said, many such options are available, as a glance at the exports man page will tell you.
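
If the required tools are not yet in place, installing and enabling them takes only a moment; the following is a minimal sketch, using the package and unit names shipped with CentOS 7:

yum install nfs-utils
systemctl enable rpcbind nfs-server

The enable call merely ensures that both services also come up after a reboot.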

To write to the network drives with root privileges from other hosts, you also need to set the no_root_squash option. A line for this purpose in /etc/exports would look like this:

/nfs 192.168.1.0/24(rw,no_root_squash)

It allows all the computers on the 192.168.1.0/24 network to mount the /nfs directory with root privileges and write to it. You can then restart the NFS server or type exportfs -r to tell the server about the changes. If you now try to mount the directory from a machine on this network, the attempt may still fail, because you might need to customize the firewall on the server (i.e., open TCP port 2049).
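
With firewalld, the default on CentOS 7, this could look like the following sketch, which enables the predefined nfs service and then tests the export from a client (the server address 192.168.1.1 is the same one used for the storage pool later; NFSv3 clients additionally need the rpc-bind and mountd services):

firewall-cmd --permanent --add-service=nfs
firewall-cmd --reload
mount -t nfs 192.168.1.1:/nfs /mnt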

Storage Pool for libvirt

If you prepare two KVM hosts, you need to define a libvirt storage pool with NFS on each of them. Libvirt handles mounting the share itself, if it is configured appropriately. You first need to define a storage pool, for example, using the virsh command-line tool. This in turn requires an XML file that determines the location and format of the storage. An example of this is shown in Listing 1.

Listing 1

Storage Pool with NFS

01 <pool type='netfs'>
02    <name>virtstore</name>
03     <source>
04        <host name='192.168.1.1'/>
05        <dir path='/virtstore'/>
06        <format type='nfs'/>
07     </source>
08     <target>
09        <path>/virtstore</path>
10        <permissions>
11           <mode>0755</mode>
12           <owner>-1</owner>
13           <group>-1</group>
14        </permissions>
15     </target>
16 </pool>

Save this XML file as virtstore.xml to define a storage pool named virtstore:

virsh pool-define virtstore.xml

The following command configures the storage pool to start automatically:

virsh pool-autostart virtstore
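
If you want to use the pool immediately without rebooting, you can also start it by hand and check the result; for example:

virsh pool-start virtstore
virsh pool-list --all
virsh pool-info virtstore

The pool should then be listed as active on both hosts.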

You still need to adjust the owner and permissions of the files, depending on your Linux distribution. If you use SELinux, as Red Hat and its derivatives do by default, a special setsebool setting allows the required access:

setsebool virt_use_nfs 1
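
The change made this way does not survive a reboot; if it should be permanent, setsebool accepts the -P option, which writes the value to the SELinux policy:

setsebool -P virt_use_nfs 1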

The VM hosts involved use Secure Shell (SSH) to communicate, and you need to configure them accordingly. The easiest approach is to generate a new key with ssh-keygen and do without a passphrase, which is only advisable on a well-secured network, of course. Then, copy the keys to the other host with ssh-copy-id. Now you can test whether an SSH login without a password works.
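
In practice, the key exchange boils down to a few commands on each host; the following is a sketch, assuming the hostnames node1 and node2 used in the migration example below and root logins on both sides:

ssh-keygen -t rsa
ssh-copy-id root@node2
ssh root@node2 hostname

The last call should print the remote hostname without prompting for a password; repeat the procedure in the other direction on node2.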

When you create a new virtual machine on one of the two VM hosts, make sure it uses the previously generated NFS storage pool. Also, you must disable the cache of the virtual block device because virsh otherwise objects when you migrate:

error: Unsafe migration:
  Migration may lead to data corruption
  if disks use cache != none

You will want to turn off the cache anyway because the VM host also has a buffer cache that is used before the VM data reaches a physical disk via the simulated disk. To do this, you can either use a graphical front end such as virt-manager or continue to do everything at the console by editing the configuration with virsh edit VM. In the XML file that defines the configuration of a virtual machine, look for the lines with the disk settings and add the cache='none' option to the driver attributes (Listing 2). Now you can start migrating a VM manually as follows:

virsh migrate --live fedora-22 qemu+ssh://node2/system
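
The migrate command understands a number of additional switches. For a more production-like call, you might, for example, add --verbose to follow the progress, --persistent to store the VM definition permanently on the target, and --undefinesource to remove the definition from the source afterward:

virsh migrate --live --verbose --persistent --undefinesource fedora-22 qemu+ssh://node2/system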

Listing 2

XML File Disk Settings

01 <devices>
02     <emulator>/usr/bin/kvm</emulator>
03     <disk type='file' device='disk'>
04        <driver name='qemu' type='raw' cache='none'/>
05        ...

The virtual machine fedora-22 was transferred from computer node1 to computer node2 in about three seconds (Figure 1).

Figure 1: The virtual machine named fedora-22 was transferred from computer node1 to computer node2 in about three seconds.
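
Whether the move really succeeded is easy to verify with virsh on both hosts; a quick check, using the hostnames from above:

virsh list
virsh --connect qemu+ssh://node2/system list

On node1, fedora-22 no longer appears among the running machines, whereas the second call shows it active on node2.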
