Running OpenStack in a data center
Operation Troubles
If you have tried OpenStack, as recommended in the second article of this series [1], you may be thinking: If you can roll out OpenStack with Metal as a Service (MaaS) and Juju within a few hours, the solution cannot be all that complex. However, although the MaaS and Juju setup described there simplifies the requirements in some places, it initially leaves out many functions that a production OpenStack environment ultimately needs.
In the third part of this series, I look at how the lessons learned from a first mini-OpenStack can be transferred to real data center setups. To do so, I assume a production environment deployed with Juju and MaaS, with the unfortunate restriction that the solution costs money starting with the 11th node that MaaS has to manage.
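To give an idea of what such a Juju-driven rollout looks like when scripted, here is a minimal sketch that simply wraps the standard juju CLI commands (bootstrap, deploy, add-relation, status). The cloud name, charm selection, and unit counts are placeholder assumptions for illustration; a real production bundle involves far more charms, relations, and configuration.

```python
#!/usr/bin/env python3
"""Sketch: scripting a Juju/MaaS-based OpenStack rollout.

The cloud name 'maas-cloud' and the charms below are illustrative
assumptions, not a recommended production bundle.
"""
import subprocess


def juju(*args: str) -> None:
    """Run a juju CLI command and fail loudly on errors."""
    subprocess.run(["juju", *args], check=True)


def main() -> None:
    # Bootstrap a controller on a MaaS-managed machine
    # (assumes a MaaS cloud is already registered with Juju).
    juju("bootstrap", "maas-cloud", "openstack-controller")

    # Deploy a couple of core services as an example; a production
    # environment adds Ceph, Neutron gateways, HA proxies, and more.
    juju("deploy", "-n", "3", "nova-compute")
    juju("deploy", "rabbitmq-server")

    # Wire the services together over their AMQP relation.
    juju("add-relation", "nova-compute", "rabbitmq-server")

    # Show the resulting model.
    juju("status")


if __name__ == "__main__":
    main()
```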
Of course, it would also be possible to focus on other deployment methods. As an alternative, Ansible could be used on a bare-bones Ubuntu to roll out OpenStack, but then you would have the task of building the bare metal part yourself. Anyone who follows this path might not be able to apply all the tactics used in this article to their setup. However, most of the advice will work in all OpenStack setups, no matter how they were rolled out.
Matching Infrastructure
The same basic rules apply to OpenStack environments as to any conventional setup: Redundant power and a redundant network are mandatory. When it comes to network hardware in particular, you should plan big from the outset rather than trying to scrimp. Switches with 48 25Gb Ethernet ports are now available on the market (Figure 1), and if you want a more elegant solution, you can set up devices with Cumulus [2] and establish Layer 3 routing, with each individual node using the Border Gateway Protocol (BGP) to announce its addresses to the network.
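To make the per-node BGP idea more concrete, the following is a minimal sketch that renders an FRR-style BGP unnumbered snippet, as used in Cumulus-flavored Layer 3 fabrics. The AS number, loopback address, and uplink interface names are illustrative assumptions, not a template for any particular fabric.

```python
#!/usr/bin/env python3
"""Sketch: render a per-node FRR (BGP unnumbered) config snippet for
routing on the host. AS number, loopback, and interface names are
placeholder assumptions."""


def frr_bgp_config(asn: int, loopback: str, uplinks: list[str]) -> str:
    """Return an FRR config block that announces the node's /32
    loopback to its top-of-rack switches via BGP unnumbered."""
    lines = [f"router bgp {asn}", f" bgp router-id {loopback}"]
    # One unnumbered eBGP session per redundant uplink.
    lines += [f" neighbor {iface} interface remote-as external"
              for iface in uplinks]
    lines += [
        " address-family ipv4 unicast",
        f"  network {loopback}/32",   # announce the host route
        " exit-address-family",
    ]
    return "\n".join(lines) + "\n"


if __name__ == "__main__":
    # Example: a compute node with two redundant uplinks.
    print(frr_bgp_config(65111, "10.0.0.11", ["eth0", "eth1"]))
```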