OpenStack workshop, part 2: OpenStack cloud installation
A Step-by-Step Cloud Setup Guide
Block Storage with Cinder
Compared with configuring Quantum, configuring Cinder is a piece of cake. This component was also around in the Essex version of OpenStack, where it was still called nova-volume and was part of the Computing component. It now has a life of its own. For Cinder to work, an LVM volume group by the name of cinder-volumes must exist on the host on which Cinder will be running. Cinder typically resides on the cloud controller, and in this example, again, the program runs on Alice. Cinder doesn't really mind which storage devices are part of the LVM volume group – the only important thing is that Cinder can create volumes in this group itself. Alice has a volume group named cinder-volumes in this example.
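If the volume group does not exist yet, creating it is a matter of two LVM commands. The device name /dev/sdb is just an example here; use whatever spare disk or partition is available on the controller:
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb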
After installing the Cinder services, you come to the most important part: The program needs an sql_connection entry in /etc/cinder/cinder.conf pointing the way to the database. The entry needed for this example is:
sql_connection = mysql://cinderdbadmin:ceeShi4O@192.168.122.111:3306/cinder
This is followed by /etc/cinder/api-paste.ini – the required changes here follow the pattern for the changes in api-paste.ini in the other programs. The service_ entries use the same values as their auth_ counterparts. The admin_user is cinder.
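For orientation, the [filter:authtoken] block in Cinder's api-paste.ini then looks roughly like the following sketch; the tenant name and password are placeholders for the values you chose when setting up Keystone:
[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_protocol = http
service_host = 192.168.122.111
auth_protocol = http
auth_host = 192.168.122.111
auth_port = 35357
admin_tenant_name = service
admin_user = cinder
admin_password = secret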
Cinder also needs tables in its MySQL database, and the cinder-manage db sync command creates them. Next, you can restart the Cinder services:
for i in api scheduler volume; do restart cinder-"$i"; done
Finally, you need a workaround for a pesky bug in the tgt iSCSI target, which otherwise prevents Cinder from working properly. The workaround is to replace the existing entry in /etc/tgt/targets.conf with include /etc/tgt/conf.d/cinder_tgt.conf. After this step, Cinder is ready for use; the cinder list command should output an empty list because you have not configured any volumes yet.
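If you want to see more than an empty list, a quick smoke test is to create a small volume; this assumes the OS_* environment variables for Keystone are exported in your shell:
cinder create 1
cinder list
The new 1GB volume should show up with the status available; cinder delete followed by the volume's ID removes it again.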
Nova – The Computing Component
Thus far, you have prepared a colorful mix of OpenStack services for use, but the most important one is still missing: Nova. Nova is the computing component; that is, it starts and stops the virtual machines on the hosts in the cloud. To get Nova up to speed, you need to configure services on both Alice and Bob in this example. Charlie, which acts as the network node, does not need Nova (Figure 5).
The good news is that the Nova configuration, /etc/nova/nova.conf, can be identical on Alice and Bob; the same thing applies to the API paste file in Nova, which is called /etc/nova/api-paste.ini. As the compute node, Bob only needs a minor change to the Qemu configuration for Libvirt in order to start the virtual machines. I will return to that topic presently.
I'll start with Alice. The /etc/nova/api-paste.ini file contains the Keystone configuration for the service with a [filter:authtoken] entry. The values to enter here are equivalent to those in the api-paste.ini files for the other services; the value for admin_user is nova. Additionally, the file has various entries with volume in their names, such as [composite:osapi_volume]. Remove all the entries containing volume from the configuration because, otherwise, nova-api and cinder-api might trip over one another. After making these changes, you can copy api-paste.ini to the same location on Bob.
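How you copy the file is up to you; assuming root SSH access from Alice to Bob, a simple scp will do:
scp /etc/nova/api-paste.ini bob:/etc/nova/api-paste.ini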
nova.conf for OpenStack Compute
Now you can move on to the compute component configuration in the /etc/nova/nova.conf file. I have published a generic example of the file to match this article [4]; explaining every single entry in the file is well beyond the scope of this article. For an overview of the possible parameters for nova.conf, visit the OpenStack website [5]. The sample configuration should work unchanged in virtually any OpenStack environment, although you will need to change the IP addresses if your local setup differs from the setup in this article; a short excerpt of the entries you are most likely to touch follows the database sync command below. Both Alice and Bob need the file in /etc/nova/nova.conf – once it is in place, you can proceed to create the Nova tables in MySQL on Alice with:
nova-manage db sync
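For orientation, the following excerpt sketches a few nova.conf entries that typically have to match the local environment. The controller IP follows the setup in this article; the database password, the Quantum credentials, and libvirt_type are placeholders to adapt, and the complete file is in the published sample [4]:
sql_connection = mysql://novadbadmin:PASSWORD@192.168.122.111/nova
auth_strategy = keystone
network_api_class = nova.network.quantumv2.api.API
quantum_url = http://192.168.122.111:9696
quantum_admin_auth_url = http://192.168.122.111:35357/v2.0
quantum_admin_tenant_name = service
quantum_admin_username = quantum
quantum_admin_password = PASSWORD
glance_api_servers = 192.168.122.111:9292
libvirt_type = kvm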
Listing 5
Qemu Configuration
cgroup_device_acl = [
   "/dev/null", "/dev/full", "/dev/zero",
   "/dev/random", "/dev/urandom",
   "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
   "/dev/rtc", "/dev/hpet", "/dev/net/tun"
]
Syncing the database completes the Nova configuration, but you still need to make some changes to the Qemu configuration for Libvirt on Bob, and in the Libvirt configuration itself. The Qemu configuration for Libvirt resides in /etc/libvirt/qemu.conf; add the lines shown in Listing 5 to the end of the file. The Libvirt configuration itself also needs a change; add the following lines at the end of /etc/libvirt/libvirtd.conf:
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
These entries make sure that Libvirt opens a TCP/IP socket to support functions such as live migration later on. For this setup to really work, you need to replace the libvirtd_opts="-d" line in /etc/default/libvirt-bin with libvirtd_opts="-d -l".
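One way to make that change in one go, assuming the stock Ubuntu default file, is a quick sed call:
sed -i 's/^libvirtd_opts="-d"/libvirtd_opts="-d -l"/' /etc/default/libvirt-bin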
Then, restart all the components involved in the changes; on Alice, you can type the following to do this:
for i in nova-api-metadata nova-api-os-compute nova-api-ec2 nova-objectstore nova-scheduler nova-novncproxy nova-consoleauth nova-cert; do restart "$i"; done
On Bob, the command is
for i in libvirt-bin nova-compute; do restart $i; done
Next, typing nova-manage service list should list the Nova services on Alice and Bob. The status for each service should be :-).