Professional virtualization with RHV4
Polished
Connecting Storage
Apart from a couple of exceptions, storage is connected from the GUI. Nevertheless, the storage concept in libvirt/QEMU (and therefore also in RHV), with its storage pools and volumes, differs from that of logical datastores with a separate VMFS filesystem in VMware or CSV shared storage in Hyper-V. In RHV, a storage domain is nothing more than a collection of images served through the same interface (i.e., images of VMs, including snapshots, or ISO files). The backing for each storage domain is either block devices (iSCSI or FCP), a network filesystem such as NFS (NAS) or GlusterFS, or another POSIX-compliant filesystem.
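If you prefer scripting to the GUI, storage domains can also be created programmatically through the oVirt Python SDK (ovirtsdk4), on which RHV4's API bindings are based. The following minimal sketch attaches an NFS data domain; the Manager URL, credentials, host name, and export path are placeholders you would replace with values from your own environment.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the RHV Manager REST API (URL and credentials are examples).
connection = sdk.Connection(
    url='https://manager.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

# Create an NFS-backed data storage domain; vDisks, templates, and
# snapshots end up as files on the exported filesystem.
sds_service = connection.system_service().storage_domains_service()
sds_service.add(
    types.StorageDomain(
        name='nfsdata',
        type=types.StorageDomainType.DATA,
        host=types.Host(name='rhvh1'),
        storage=types.HostStorage(
            type=types.StorageType.NFS,
            address='nfs.example.com',
            path='/exports/rhv/data',
        ),
    ),
)
connection.close()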
For a network share (NFS), all vDisks, templates, and snapshots are files that reside on the underlying filesystem. For iSCSI or FCP, on the other hand, each vDisk, template, or snapshot forms a logical volume: The block devices are aggregated into a logical entity and divided by the Logical Volume Manager (LVM) into logical volumes, which then act more or less as virtual hard drives. The vDisks of the VMs use either the QCOW2 or the RAW format, and QCOW2 disks can be thin or thick provisioned (sparse or preallocated). To create an iSCSI-based storage domain, you just need to enter the IP address of the iSCSI server and click Discover (dynamic discovery). Adding iSCSI software initiators manually is not required. By default, discovery returns a list of available targets; for details of the individual LUNs, press the arrow on the far right.
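The same step can be scripted for block storage. The sketch below, again a non-authoritative example using placeholder names, creates an iSCSI data domain from a LUN that discovery has already revealed; RHV then carves the resulting volume group into logical volumes as described above.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://manager.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

# Build a data domain on top of an iSCSI LUN; the LUN WWID, portal
# address, and target name are placeholders for values found during
# discovery.
sds_service = connection.system_service().storage_domains_service()
sds_service.add(
    types.StorageDomain(
        name='iscsidata',
        type=types.StorageDomainType.DATA,
        host=types.Host(name='rhvh1'),
        storage=types.HostStorage(
            type=types.StorageType.ISCSI,
            volume_group=types.VolumeGroup(
                logical_units=[
                    types.LogicalUnit(
                        id='36001405abcdef0123456789abcdef012',
                        address='192.168.100.10',
                        port=3260,
                        target='iqn.2016-01.com.example:rhvdata',
                    ),
                ],
            ),
        ),
    ),
)
connection.close()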
OpenStack Integration
RHV supports NFS, POSIX-compliant filesystems, iSCSI/Fibre Channel block storage, and GlusterFS; from the RHV perspective, Gluster storage is addressed through shares (bricks), just like NFS. However, embedding GlusterFS-based storage requires a functional Gluster cluster that must be configured with server-side and client-side quorums to ensure data integrity. Under the hood, RHV has traditionally been very close to OpenStack, and Red Hat is therefore steadily expanding the OpenStack APIs that RHV supports. In RHV4, Red Hat points to API integration with Glance (images), Cinder (block storage), and Neutron (software-defined networking), although tests show that some manual labor is still required. The handling of storage domains is one of the aspects that nicely demonstrates these integration capabilities.
With workarounds (via OpenStack Cinder), RHV can use Ceph storage, but it cannot manage it; management requires additional tools such as Red Hat CloudForms. Ceph integration via Cinder still has technology preview status in RHV and, like many other features, must be configured at the command line. Glance integration, for using ISO images and VM templates stored in OpenStack, is slightly more advanced: RHV4 lets you use, export, and share images with an existing Red Hat OpenStack Platform installation; however, the required OpenStack subscription is not included in RHV4.
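To see how far the Glance coupling goes, you can register an existing Glance service as an external image provider, after which its images appear in the Manager like a storage domain. The following sketch again uses ovirtsdk4; the Glance and Keystone endpoints and the credentials are assumptions for an existing OpenStack installation.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://manager.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

# Register a Glance endpoint as an external image provider so that its
# images become visible to the Manager.
providers_service = connection.system_service().openstack_image_providers_service()
providers_service.add(
    types.OpenStackImageProvider(
        name='glance',
        url='http://glance.example.com:9292',
        requires_authentication=True,
        authentication_url='http://keystone.example.com:35357/v2.0',
        username='admin',
        password='secret',
        tenant_name='admin',
    ),
)
connection.close()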
Neutron Support
Also new in RHV4 is an open API that enables better support for external third-party networks and thus ultimately helps centralize and simplify network management. Red Hat Virtualization Manager uses the API to communicate with external systems (e.g., to retrieve network settings, which it can then apply to VMs).
With regard to OpenStack integration, RHV4 includes a technology preview of the upcoming Open vSwitch integration. RHV4 currently uses ordinary Linux bridge devices for network communication between VMs and the physical network layer on the RHV nodes. Creating bridges, VLANs, or bonds is now more convenient, at least on the Manager side, thanks to Cockpit integration. In the future, RHV IP address management (IPAM) will be fully based on Neutron subnets. Neutron is already listed as a network provider in the GUI, which supports convenient importing of software-defined networks from Neutron.
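Registering an existing Neutron instance as an external network provider can also be scripted. The sketch below follows the same pattern as the previous listings; all endpoints and credentials are again placeholders for your own setup.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://manager.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

# Register a Neutron endpoint as an external network provider; networks
# defined there can then be imported into RHV data centers.
providers_service = connection.system_service().openstack_network_providers_service()
providers_service.add(
    types.OpenStackNetworkProvider(
        name='neutron',
        url='http://neutron.example.com:9696',
        requires_authentication=True,
        authentication_url='http://keystone.example.com:35357/v2.0',
        username='admin',
        password='secret',
        tenant_name='admin',
    ),
)
connection.close()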