Bare metal deployment with OpenStack
Once a private cloud with OpenStack is up and running, virtual machines can be created quickly, but you have to get there first. This process involves several steps, including installing hardware in the data center, wiring it up, and installing an operating system. The operating system installation is where the OpenStack component Ironic [1] enters the picture. In this article, we introduce the Ironic module, which specializes in bare metal installations.
Ironic first appeared as an Incubator Project in the OpenStack Icehouse release of April 2014 [2] as an offshoot of the Nova bare metal driver. It was further integrated in the course of the Juno release and finally found its place as an integrated project in the Kilo release.
Bifrost
Installing an operating system raises the chicken-and-egg problem: The operating system is installed on disk by an installer, which in turn needs something like an operating system under which it can run. The Bifrost subproject aims to solve this conundrum. Under the hood, you can think of it as a standalone, running Ironic that comes bundled with all the OpenStack components it requires.
This article assumes the following situation: A Bifrost instance and OSISM (Open Source Infrastructure and Service Manager) are used to deploy an operating system on a node. OSISM [3], the deployment and lifecycle management framework, installs and configures all the required OpenStack components on that node so that another node can subsequently be installed by Ironic (Figure 1).
Bifrost bridges the gap between "the hardware is ready and wired in the data center" and "now you can log on to the operating system." The name comes from Norse mythology and refers to the rainbow bridge between Midgard and Asgard (i.e., the connection between middle earth, the home of humans, and the kingdom of the gods and goddesses).
Bifrost requires two configuration files in the environments/kolla/files/overlays/ directory: one for Bifrost itself and one for the server(s) to be installed (node1 in this case). The most important parameters in the Bifrost configuration file (bifrost.yml, Listing 1) are enabled_hardware_types, use_tinyipa, cirros_deploy_image_upstream_url, and dhcp_pool*.
Listing 1
bifrost.yml
enabled_hardware_types: ipmi
download_ipa: true
use_tinyipa: false
ipa_upstream_release: stable-{{ openstack_release }}
cirros_deploy_image_upstream_url: https://share/ironic-ubuntu-osism-20.04.qcow2
dhcp_pool_start: 192.168.21.200
dhcp_pool_end: 192.168.21.250
dnsmasq_router: 192.168.21.254
domain: osism.test
The enabled_hardware_types configuration option describes the interface for controlling the hardware [4], and the use_tinyipa variable gives you a choice between CoreOS (true) and CentOS (false); CoreOS is intended for continuous integration and test environments. The cirros_deploy_image_upstream_url parameter contains the link to the image to be installed, and dhcp_pool* contains the DHCP pool for the PXE environment.
Listing 2 shows the servers.yml configuration file for the server(s) to be installed. It is important to use the MAC address of the deployment network interface controller (NIC) here and not, say, that of the remote management interface. In the case of Supermicro, for example, you determine this with the command:

# ipmitool -H <IP address> -U <user> -P <password> raw 0x30 0x21 | tail -c 18 | sed 's/ /:/g'

Listing 2

servers.yml

---
node1:
  uuid: "UUID"
  driver_info:
    power:
      ipmi_username: "<LOGIN>"
      ipmi_address: "<IP/Hostname>"
      ipmi_password: "<lesssecret>"
  nics:
    - mac: "<MAC of NIC>"
  driver: "ipmi"
  ipv4_interface_mac: "<MAC of NIC>"
  ipv4_address: "192.168.21.21"
  ipv4_subnet_mask: "255.255.255.0"
  ipv4_gateway: "192.168.21.254"
  ipv4_nameserver:
    - "8.8.8.8"
    - "8.8.4.4"
  properties:
    cpu_arch: "x86_64"
    ram: "65536"
    disk_size: "1024"
    cpus: "16"
  name: "node1"
To avoid the password specified here appearing in the process list and history, use the -f /<path>/<to>/<passwordfile> parameter instead of -P <password>, if necessary, and enter the path to a file containing the password.
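As a minimal sketch, the same query with a password file could look like this; the file path is a placeholder of our choosing and should contain nothing but the IPMI password:

# ipmitool -H <IP address> -U <user> -f /root/.ipmi_password raw 0x30 0x21 | tail -c 18 | sed 's/ /:/g'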
Once you have committed both files to a Git repository, you can turn to the OSISM Manager inventory. To begin, assign roles by modifying the inventory/20-roles file as shown in Listing 3; you also need to define the host variables for node1 in inventory/host_vars/node1.osism.test.yml (Listing 4).
Listing 3
20-roles
[generic]
manager.osism.test
node1.osism.test

[manager]
manager.osism.test

[monitoring]
manager.osism.test

[control]
node1.osism.test

[compute]
node1.osism.test

[network]
node1.osism.test
Listing 4
node1.osism.test.yml
---
ansible_host: 192.168.21.21
console_interface: eno1
management_interface: eno1
internal_address: 192.168.21.21
fluentd_host: 192.168.21.21
network_interfaces:
  - device: eno1
    auto: true
    family: inet
    method: static
    address: 192.168.21.21
    netmask: 255.255.255.0
    gateway: 192.168.21.254
    mtu: 1500
  - device: eno2
    auto: true
    family: inet
    method: manual
    mtu: 1500
network_interface: eno1
These files should also be maintained in the Git repository and retrieved with the osism-generic configuration command in the OSISM Manager, which updates the repository contents in /opt/configuration/. This workflow always serves as the basis for an OSISM environment: maintain the Git repository and retrieve the changes with osism-generic configuration:

local:   git add <path>/<to>/<file>
local:   git commit -m "<Commit Message>"
manager: osism-generic configuration
Now Bifrost can be installed with the

osism-kolla deploy bifrost

command, which creates the bifrost_deploy container and four volumes (Listing 5, lines 1-5). There are two operating system images: One is the Ironic Python Agent (IPA), and the other is the deployment image (lines 7-14). The IPA image is based on CentOS (lines 16 and 17).
Listing 5
Volumes, Images, CentOS
01 # docker volume ls
02 local  bifrost_httpboot
03 local  bifrost_ironic
04 local  bifrost_mariadb
05 local  bifrost_tftpboot
06
07 # ll -h /var/lib/docker/volumes/bifrost_httpboot/_data/
08 [...]
09 -rw-r--r-- 1 42422 42422 6.1G Mar  2 10:33 deployment_image.qcow2
10 -rw-r--r-- 1 root  root    57 Mar  2 10:21 deployment_image.qcow2.CHECKSUMS
11 -rw-r--r-- 1 42422 42422 318M Mar  2 10:13 ipa.initramfs
12 -rw-r--r-- 1 42422 42422  104 Mar  2 10:13 ipa.initramfs.sha256
13 -rw-r--r-- 1 42422 42422 9.1M Mar  2 10:13 ipa.kernel
14 -rw-r--r-- 1 42422 42422  101 Mar  2 10:12 ipa.kernel.sha256
15
16 # file /var/lib/docker/volumes/bifrost_httpboot/_data/ipa.kernel
17 /var/lib/docker/volumes/bifrost_httpboot/_data/ipa.kernel: Linux kernel x86 boot executable bzImage, version 4.18.0-240.10.1.el8_3.x86_64 (mockbuild@kbuilder.bsys.centos.org) #1 SMP Mon Jan 18 17:05:51 UT, RO-rootFS, swap_dev 0x9, Normal VGA
The deployment image here is ironic-ubuntu-osism-20.04.qcow2, which was configured in Listing 1. The osism-kolla deploy-servers bifrost command triggers the installation of node1. Bifrost creates a PXE environment for the MAC address configured for node1, uses the Intelligent Platform Management Interface (IPMI) to boot the node into this PXE environment, and loads and starts the IPA image. The Ironic Python Agent connects to Bifrost and writes the deployment image to disk. If this is successful, Bifrost dismantles the PXE environment, restarts node1, and the node boots from disk.
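To follow the progress, one option (an assumption on our part, not something the OSISM workflow prescribes) is to query the standalone Ironic inside the bifrost_deploy container; whether the baremetal client is directly on the PATH and whether the cloud name is bifrost can differ between releases:

# docker exec -it bifrost_deploy bash
(bifrost_deploy)# export OS_CLOUD=bifrost
(bifrost_deploy)# baremetal node list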
Up to Neutron with OSISM
Installing and setting up the OpenStack components requires an SSH connection between the Manager and node1. The private SSH key is available in the OSISM Manager in /opt/ansible/secrets/id_rsa.deploy and also in the Ansible Vault in environments/kolla/secrets.yml; look for the bifrost_ssh_key.private_key variable. You can then connect:
$ ssh -i /opt/ansible/secrets/id_rsa.deploy ubuntu@192.168.21.21
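If you only have the Git repository at hand, the key can also be read from the vault first; a minimal sketch, assuming ansible-vault can prompt you for the vault password:

$ ansible-vault view environments/kolla/secrets.yml | grep -A 30 bifrost_ssh_key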
After successfully opening a connection, the OpenStack components can be installed from the OSISM framework. The statement:
# osism-generic operator --limit node1.osism.test --user ubuntu --ask-become-pass --private-key /ansible/secrets/id_rsa.deploy
sets up the operator user that you need for all further osism-* calls.
The osism-generic facts command collects the facts, and the network configuration stored in host_vars is written with the osism-generic network command. The call to osism-generic reboot verifies the network configuration (a possible sequence of these three steps is sketched below). After the configuration of the remaining OpenStack components in the Git repository, the commands in Listing 6 continue the installation up to the Nova phase.
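In practice, the three calls might look like this; the --limit argument mirrors the operator call above and is an assumption, not something prescribed by the article:

# osism-generic facts --limit node1.osism.test
# osism-generic network --limit node1.osism.test
# osism-generic reboot --limit node1.osism.test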
Listing 6
Installation Up to Nova
# osism-generic hosts
# osism-generic bootstrap
# osism-kolla deploy common
# osism-kolla deploy haproxy
# osism-kolla deploy elasticsearch
# osism-kolla deploy kibana
# osism-kolla deploy memcached
# osism-kolla deploy mariadb
# osism-kolla deploy rabbitmq
# osism-kolla deploy openvswitch
# osism-kolla deploy keystone
# osism-kolla deploy horizon
# osism-kolla deploy glance
# osism-kolla deploy placement
# osism-kolla deploy nova
OSISM relies on local name resolution and maintains the /etc/hosts file on manager and node1 for this purpose. As part of the bootstrap, it sets kernel parameters, installs or removes packages, and hardens the SSH server. In this setup, the manager also takes on the controller role, which you would distribute across at least three nodes in a production setup (Figure 2).
HAProxy handles the distribution of queries to the controller nodes. The common, elasticsearch, and kibana components provide centralized logging that can be accessed from the Kibana web interface. Memcached accelerates the Apache server behind OpenStack's Horizon web interface. OpenStack requires a database (MariaDB in this case), a message queue (RabbitMQ), and a virtual switch (Open vSwitch).
Now the Keystone (authentication/authorization), Horizon, Glance (image store), Placement (resource management), Nova, and hypervisor components are installed and set up. The following customizations for the OpenStack Neutron module allow Ironic to talk to the hardware:
neutron_type_drivers: "flat,vxlan"
neutron_tenant_network_types: "flat"
enable_ironic_neutron_agent: "yes"
The mandatory network type flat provides a way to communicate with the hardware at the network level. The ironic_neutron_agent component is responsible for communication between the Neutron network component and Ironic. After committing these changes to the Git repository, as with the git commands shown earlier, osism-kolla deploy neutron installs and configures the Neutron component.
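In practice, that again means the familiar workflow; the file holding these overrides, here assumed to be environments/kolla/configuration.yml, depends on your repository layout:

local:   git add environments/kolla/configuration.yml
local:   git commit -m "Enable Ironic/Neutron integration"
manager: osism-generic configuration
manager: osism-kolla deploy neutron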
En Route to Ironic
Ironic deployment requires a provision network and a cleaning network. The remote board network serves as the cleaning network, and the provision network handles the deployment (Listing 7). The UUIDs of the two networks can be used to populate Ironic's configuration (Listings 8 and 9 in the environments/kolla path).
Listing 7
Provision and Cleaning Networks
# openstack network create --provider-network-type flat --provider-physical-network physnet1 --share provisionnet
# openstack subnet create --network provisionnet --subnet-range 192.168.21.0/24 --ip-version 4 --gateway 192.168.21.254 --allocation-pool start=192.168.21.22,end=192.168.21.28 --dhcp provisionsubnet
# openstack network create --provider-network-type flat --provider-physical-network physnet1 --share cleannet
# openstack subnet create --network cleannet --subnet-range 10.21.21.0/24 --ip-version 4 --gateway 10.21.21.254 --allocation-pool start=10.21.21.22,end=10.21.21.28 --dhcp cleansubnet
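One way to fetch the two UUIDs needed in the next step (a brief example, not part of the original listing):

# openstack network show provisionnet -f value -c id
# openstack network show cleannet -f value -c id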
Listing 8
… /configuration.yml
enable_ironic: "yes"
ironic_dnsmasq_dhcp_range: "192.168.21.150,192.168.21.199"
ironic_cleaning_network: "<cleannet-UUID>"
ironic_dnsmasq_default_gateway: "192.168.21.254"
ironic_dnsmasq_boot_file: pxelinux.0
Listing 9
… /files/overlays/ironic
[DEFAULT]
enabled_network_interfaces = noop,flat,neutron
default_network_interface = neutron

[neutron]
provisioning_network = <provisionnet-UUID>
The command:

osism-kolla deploy ironic

installs Ironic. In addition to adjusting the quota, three further steps are required: Upload the deployment kernel, the ramdisk, and the image to be written to Glance (Listing 10); create a flavor with appropriate parameters [5]:
# openstack flavor create --ram 65536 --disk 500 --vcpus 16 --property resources:CUSTOM_BAREMETAL_RESOURCE_CLASS=1 baremetal-flavor
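If the scheduler should place bare metal instances by resource class alone, the standard resource classes can additionally be zeroed out on the flavor; this is a hedged addition that is not part of the article's setup:

# openstack flavor set --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0 baremetal-flavor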
Listing 10
Deployment Kernel, Ramdisk, and Image
# openstack image create --disk-format aki --container-format aki --file /data/ironic-agent.kernel --public deploy-kernel
# openstack image create --disk-format ari --container-format ari --file /data/ironic-agent.initramfs --public deploy-ramdisk
# openstack image create --disk-format qcow2 --container-format bare --file /data/osism-image.qcow2 --public deployment-image
and create the node and port in Ironic (Listing 11).
Listing 11
Ironic Node and Port
# openstack baremetal node create --driver ipmi --name node2 \
    --driver-info ipmi_username=ADMIN \
    --driver-info ipmi_password=<lesssecret> \
    --driver-info ipmi_address=192.168.33.22 \
    --resource-class baremetal-resource-class \
    --driver-info deploy_kernel=<Deploy-Kernel-UUID> \
    --driver-info deploy_ramdisk=<Deploy-Ramdisk-UUID> \
    --driver-info cleaning_network=<cleannet-UUID> \
    --driver-info provisioning_network=<provisionnet-UUID> \
    --property cpu_arch=x86_64 \
    --property capabilities='boot_mode=uefi' \
    --inspect-interface inspector \
    --deploy-interface direct \
    --boot-interface ipxe
Make sure you have the right IPMI data and that the resource class matches the one stored in the flavor. The flavor uses CUSTOM_BAREMETAL_RESOURCE_CLASS, which corresponds to a node resource class of baremetal-resource-class. Other examples include CUSTOM_BAREMETAL_LARGE or CUSTOM_BAREMETAL_HANA, which resolve to baremetal-large and baremetal-hana.
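To illustrate this mapping, a second flavor and node could be tied together as follows; the names and sizes here are hypothetical and not part of the setup above:

# openstack flavor create --ram 262144 --disk 1000 --vcpus 32 --property resources:CUSTOM_BAREMETAL_LARGE=1 baremetal-large-flavor
# openstack baremetal node set <Baremetal-Node-UUID> --resource-class baremetal-large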
As the driver-info, you need to specify the cleaning and provision networks in addition to the deploy kernel and the associated ramdisk. To enable deployment by iPXE, you need to specify direct as the deploy interface and UEFI as the boot_mode, and then create the Ironic port for node2 manually:

# openstack baremetal port create <MAC-Address> --node <Baremetal-Node-UUID> --physical-network physnet1

or load it by in-band inspection.
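For the in-band route, a possible command sequence (a sketch under the assumptions above; the server name is illustrative) moves the node through inspection, makes it available, and finally deploys an instance on it:

# openstack baremetal node manage node2
# openstack baremetal node inspect node2
# openstack baremetal node provide node2
# openstack server create --flavor baremetal-flavor --image deployment-image --network provisionnet node2-instance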