Lead Image © johnjohnson, 123RF.com

Deploying OpenStack in the cloud and the data center

Build, Build, Build

Article from ADMIN 39/2017
Canonical's Juju, MAAS, and Autopilot deliver practical experience in setting up an OpenStack environment in Ubuntu.

Admins dealing with OpenStack for the first time can be quickly overwhelmed by the sheer mass of components and their interactions. The first part of this series on practical OpenStack applications [1] likely raised new questions for some readers. It is one thing to know about the most important OpenStack components and to have heard about the concepts that underlie them, and a totally different matter to set up your own OpenStack environment – even if you only want to gain some experience with the free cloud computing environment.

In this article, therefore, I deal specifically with the first practical steps en route to becoming an OpenStack admin. The aim is a basic installation that runs on virtual machines (VMs) and can therefore be set up easily without dedicated hardware. In terms of deployment, the choices were Ubuntu's metal as a service (generically, MaaS, but MAAS when referring to Ubuntu's implementation) [2], Juju [3], and Autopilot [4]. With this tool mix, even newcomers can quickly establish a functional OpenStack environment.

Before heading down to the construction site, though, you first need to finish off a few homework assignments. Although the Canonical framework removes the need for budding OpenStack admins to handle most of the tasks, you should nevertheless have a clear idea of what MAAS, Juju, and Autopilot do in the background. Only then will you later be able to draw conclusions about the way OpenStack works and reproduce the setup under real-world conditions and on real hardware.

Under the Hood

Canonical's deployment concept for OpenStack is quite remarkable, not least because it was never planned in its present form. MAAS is a good example of this. Originally, Canonical positioned MAAS against competitors like Red Hat Satellite. The goal was to be present during the complete life cycle of a server – from the moment the admin adds it to the rack to the point when the admin removes and scraps it.

To achieve this goal, MAAS usually teams up with Landscape [5]; MAAS implements the parts of the setup responsible for collecting and managing hardware nodes. For example, when a new computer starts up its network interface for the first time and sends a PXE request, it receives a response from the MAAS server. On top of this, the MAAS server runs a DHCP daemon that provides all computers with IP addresses. On request, MAAS will also configure a system's out-of-band interfaces and initiate the installation of the operating system, so that after a few minutes, you can have Ubuntu LTS running on a bare metal server.
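
If you want to retrace by hand what Autopilot later automates, a minimal sketch of a manual MAAS setup on the designated MAAS node looks roughly like the following. The commands assume MAAS 2.x on Ubuntu 16.04 LTS; the profile name admin is an arbitrary choice, and <maas-ip> and <api-key> are placeholders:

# Install the MAAS region and rack controllers
sudo apt install maas
# Create the first administrative user (prompts for password and email)
sudo maas createadmin
# Read the admin user's API key, log in to the API, and list the machines
# that have PXE-booted against this server so far
sudo maas apikey --username admin
maas login admin http://<maas-ip>:5240/MAAS/ <api-key>
maas admin machines read

Newly discovered machines then show up in the MAAS web interface (and in the machines read output), where they can be commissioned and deployed.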

Canonical supplements MAAS and Landscape with Juju to run maintenance tasks on the hosts. The Juju service modeling tool is based on the same ideas as Puppet, Chef, or Ansible; however, it never shed the reputation of being suitable only for Ubuntu. Whereas the other automation specialists are found on a variety of operating systems, Juju setups are almost always based on Ubuntu. This says nothing about the quality of Juju: It rolls out services to servers as reliably as comparable solutions.
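
To get a feel for how Juju models services independently of OpenStack, the following minimal sketch deploys a classic two-tier application. It assumes a Juju 2.x client and a MAAS installation that has already been registered with Juju as a cloud named maas-cloud (the controller and charm names are arbitrary):

# Bootstrap a Juju controller on a free machine provided by MAAS
juju bootstrap maas-cloud maas-controller
# Deploy two charms and wire them together; Juju allocates MAAS nodes itself
juju deploy mysql
juju deploy wordpress
juju add-relation wordpress mysql
juju expose wordpress
# Watch the model converge
juju status

Whether the payload is WordPress or an OpenStack service, the workflow is the same – which is exactly the property Autopilot exploits.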

MAAS, Landscape, and Juju are not in any way specific to OpenStack: The three tools are just as happy to roll out a web server, a mail server, or OpenStack if you want. However, because several solutions exist on the market that automatically roll out OpenStack and combine several of its components in the process, Canonical apparently did not want to be seen lacking in this respect.

The result is Autopilot. Essentially, it is a collection of Juju charms integrated into a uniform interface in Landscape that help roll out OpenStack services. The combination of MAAS, Landscape, Juju, and the Juju-based Autopilot offers a competitive range of functions: Fresh servers are equipped with a complete OpenStack installation in minutes.
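
Autopilot drives this rollout through the Landscape web interface, but because it is built from Juju charms, the mechanics underneath resemble a manual charm bundle deployment. Purely as an illustration – this is not literally what Autopilot executes – Canonical's openstack-base reference bundle can be deployed against a MAAS-backed Juju controller like this:

# Deploy the OpenStack reference bundle (Nova, Neutron, Ceph, and friends
# as individual charms) onto machines managed by MAAS
juju deploy openstack-base
# Follow the rollout; every charm reports its state here
juju status

Autopilot layers the Landscape interface and its guided configuration on top of exactly this charm machinery.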

Caution, Pitfalls!

For the sake of completeness, I should point out that the team of Landscape, MAAS, Juju, and Autopilot does not always thrill admins. The license model is messy: Without a valid license, Landscape only manages 10 nodes. For an OpenStack production cloud, this is just not enough.

However, SUSE and Red Hat aggravate potential clients in comparable ways, so using Red Hat or SUSE instead of Ubuntu would not have improved the situation in this example. Admins simply need to bear in mind that additional costs can be incurred later.

Preparations

Five servers are necessary for an OpenStack setup with MAAS and Autopilot. The first server is the MAAS node: It runs the DHCP and TFTP servers and responds to incoming PXE requests. The second machine takes care of all tasks that fall within the scope of Autopilot. The rest of the servers are used in the OpenStack installation itself: a controller, a network node, and a Compute server.

OpenStack stipulates special equipment requirements for these servers: The server that acts as a network node requires at least two network cards, and because Autopilot also has Ceph [6] on board as an all-inclusive package for storage services, all three OpenStack nodes should be equipped with at least two hard drives.

Nothing keeps you from implementing the servers for your own cloud in the form of VMs, which can reside in a public cloud hosted by Amazon or Google, as long as you adhere to the specs described in this article. Although such a setup is fine for getting started, the conclusions you can draw about a production setup from this kind of cloud are obviously limited.

One problem with public clouds is that they usually do not support nested virtualization. Although the Compute node can then still use Qemu to run VMs, these machines are emulated purely in software and have no access to the virtualization features of the host CPU. You will have more leeway if you can run the appropriate VMs on your own desktop: Nested virtualization is now part of the repertoire of VMware and similar virtualization solutions.
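
If the VMs run on a Linux desktop with KVM, you can quickly check whether nested virtualization is active and, if necessary, switch it on. This is a minimal sketch for Intel CPUs; for AMD, the module is kvm_amd and the CPU flag is svm instead of vmx:

# Does the host expose nested virtualization? "Y" or "1" means yes
cat /sys/module/kvm_intel/parameters/nested
# Enable it persistently and reload the module (shut down all VMs first)
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
sudo modprobe -r kvm_intel && sudo modprobe kvm_intel
# Inside a guest, a count greater than zero shows the flag is passed through
egrep -c '(vmx|svm)' /proc/cpuinfo

Note that the guest acting as the future Compute node also needs a CPU model that passes the flag through – with libvirt, host-passthrough or host-model does the trick.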

The three OpenStack nodes, at least, need a fair amount of power under the hood, so this setup will only succeed on a workstation with genuinely powerful hardware. The same applies to the storage devices you use: Because Autopilot in this sample setup rolls out a full-blown Ceph cluster, the host's block device will come under significant load, which is definitely not much fun with a normal hard drive; you will want a fast SSD, at least.

Of course, the best way of exploring OpenStack is on physical machines. If you have a couple of older servers left over and fancy building a suitable environment in your own data center, go ahead. However, it is important that the MAAS management network does not contain any other DHCP or TFTP server, because they would conflict with MAAS. The requirements stated earlier – at least two NICs for the designated network node and a second disk for Ceph in each OpenStack node – also need to be observed in such a construct.
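
Whether a rogue DHCP server is lurking on the designated management network can be checked before the installation, for example with nmap's DHCP discovery script (assuming nmap is installed; eth0 is a placeholder for the interface connected to that network):

# Broadcast a DHCPDISCOVER and print any server that answers
sudo nmap --script broadcast-dhcp-discover -e eth0

Only the MAAS server itself should respond here once it is up; before that, nothing should.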

The following example assumes that five VMs are available in a private virtualization environment. The MAAS and Autopilot nodes each have a single network interface. The three OpenStack nodes each have four CPU cores, 16GB of RAM, and a separate disk for Ceph in addition to the system disk. All three use a network adapter to connect to the same virtual network as the nodes for MAAS and Autopilot; the designated network node has a second network interface controller (NIC), which is also connected to this network.
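
On a libvirt-based workstation, one of the three OpenStack node VMs could be created with virt-install roughly as follows. This is only a sketch under the assumptions of this article: the VM name, disk sizes, and the virtual network maas-net are placeholders, and the machine boots from the network so that MAAS can take it over:

# OpenStack node: 4 cores, 16GB RAM, system disk plus a second disk for Ceph,
# attached to the shared virtual network and booting via PXE first
virt-install --name openstack-node1 \
  --vcpus 4 --memory 16384 \
  --disk size=60 --disk size=100 \
  --network network=maas-net \
  --boot network,hd \
  --os-variant ubuntu16.04 \
  --noautoconsole

For the designated network node, you would simply add a second --network parameter to satisfy the two-NIC requirement; the MAAS and Autopilot VMs need neither the second disk nor network boot.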
