Professional virtualization with RHV4

Polished

Cockpit for the Manager

The Cockpit interface, which is installed by default on the RHVH nodes, is also an interesting option for the Manager machine: You can use it to set up additional storage devices, such as iSCSI targets, or additional network devices, such as bridges, bonds, or VLANs. In Figure 1, I gave the Manager an extra disk in the form of an iSCSI logical unit number (LUN).

Figure 1: Adding iSCSI targets in the Cockpit GUI is a breeze.

To install the Cockpit web interface, you just need to enable two repositories – rhel-7-server-extras-rpms and rhel-7-server-optional-rpms – and then install the software by typing yum install cockpit. Cockpit listens on port 9090, which you need to open in firewalld as follows, unless you want to disable the firewall altogether:

$ firewall-cmd --add-port=9090/tcp
$ firewall-cmd --permanent --add-port=9090/tcp
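
Cockpit is socket activated; assuming a standard systemd-based RHEL 7 or RHVH host, you also need to enable and start cockpit.socket before the web interface answers on port 9090:

$ systemctl enable cockpit.socket
$ systemctl start cockpit.socket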

Now point a browser at port 9090 on the host and log in to the Cockpit web interface as root.

Setting Up oVirt Engine

To install and configure the oVirt Engine, enter:

$ yum install ovirt-engine-setup
$ engine-setup

The engine-setup command launches a comprehensive command-line interface (CLI)-based wizard that guides you through the configuration of the virtualization environment. It should pose no problem for experienced Red Hat, vSphere, or Hyper-V admins, because the wizard suggests functional and practical defaults (shown in parentheses). Nevertheless, some steps differ from the previous version. For example, step 3 lets you set up an Image I/O Proxy, which expands the Manager with a convenient option for uploading VM disk images to the desired storage domains.
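
If you want to repeat or script the engine configuration later, engine-setup can write your answers to a file and read them back in on the next run; the path shown here is just an example:

$ engine-setup --generate-answer=/root/rhvm-answers.conf
$ engine-setup --config-append=/root/rhvm-answers.conf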

Subsequently setting up a WebSocket proxy server is also recommended if you want your users to connect to their VMs via the noVNC or HTML5 console. The optional setup of a VM Console Proxy requires further configuration on client machines. In step 12, Application Mode, you can choose between Virt, Gluster, and Both, where the latter offers the greatest possible flexibility: Virt application mode only allows the operation of VMs in this environment, whereas Gluster application mode lets you manage GlusterFS via the administrator portal. After the engine configuration, the wizard continues with the public key infrastructure (PKI) configuration and then guides you through the setup of an NFS-based ISO domain under /var/lib/export/iso on the Manager machine.
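
Once engine-setup has finished, it is worth checking on the Manager machine that the ISO domain is really exported via NFS; two standard commands are enough for a quick look:

$ exportfs -v
$ showmount -e localhost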

After displaying the inevitable summary page (Figure 2), the oVirt Engine setup completes; after a short while, you will be able to access the Manager splash page at https://rhvm-machine/ovirt-engine/ with the admin account and the password you hopefully configured during setup. From here, the other portals, such as the admin portal, the user portal, and the documentation, are accessible.

Figure 2: The oVirt Engine setup shows all the settings before continuing with the configuration.

You now need to configure a working DNS, because access to all the portals relies on the fully qualified domain name (FQDN). A "default" data center object already exists, so the next step is to set up the required network and storage domains, add RHVH nodes, and, if necessary, build failover clusters.
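
If you do not run a full DNS server in a small lab, static entries in /etc/hosts on the Manager and on every RHVH node are a workable stopgap; the names and addresses here are placeholders for your own environment:

192.168.100.10  rhvm-machine.example.com  rhvm-machine
192.168.100.11  rhvh1.example.com         rhvh1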

Host, Storage, and Network Setup

For the small nested setup in this example, I first used iSCSI and NFS as shared storage; later, I will look at how to access Gluster or Ceph storage. For now, it makes sense to set up two shared storage repositories based on iSCSI disks or NFS shares. A 100GB LUN can accommodate, for example, the Manager VM if you are using the Hosted Engine setup described below. Because I am hosting the Manager VM on ESXi, I avoided a doubly nested VM, even though it is technically possible. As master storage for the VMs, I opted for a 500GB thin-provisioned iSCSI LUN.
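
Before adding the iSCSI storage domain in the admin portal, you can check from one of the RHVH nodes whether the target and its LUNs are visible at all; the portal address is a placeholder for your storage system:

$ iscsiadm -m discovery -t sendtargets -p 192.168.100.20
$ iscsiadm -m node --login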
