OpenStack workshop, part 3: Gimmicks, extensions, and high availability
Wide Load
Meaningful Storage Planning
A classic promise of cloud computing environments is that they scale out seamlessly: when the platform grows, it must be possible to add more hardware and thus increase the capacity of the installation. However, this approach collides with classic storage designs, because neither typical SAN storage nor replacement constructions based on NBD or DRBD scale to the same dimensions as Ceph.
What is really interesting, therefore, is pairing a cloud platform such as OpenStack with an object storage solution. OpenStack itself offers one in the form of Swift, but Swift suffers from the inability to access its storage as a block device. This is where Ceph enters the game: Ceph allows block access. The two OpenStack components that deal with storage – Glance and Cinder – provide a direct connection to Ceph, but how does the practical implementation work?
Glance and Ceph
In the case of Glance, the answer is: easy as pie. Glance, which is the component in OpenStack that stores operating system images for users, can talk natively to Ceph. If an object store is already in place, nothing is more natural than storing those images in it. For this to work, only a few preparations are necessary.
Given that the Ceph pool images exists and the Ceph user client.glance has access to it, the rest of the configuration is easy. For more details about user authentication in Ceph, check out my CephX article published online [3].
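If the pool and the user do not exist yet, they can be created with a few Ceph commands. The following is only a sketch: the placement group count is an example value, and the exact capability syntax differs between Ceph versions (see the CephX article [3] for details).
# Create the pool that will hold the Glance images (PG count is an example)
ceph osd pool create images 128
# Create the client.glance user with access to the pool and write its keyring
ceph auth get-or-create client.glance mon 'allow r' osd 'allow rwx pool=images' \
  -o /etc/ceph/keyring.glance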
Any host that needs to run Glance with a Ceph connection needs a working /etc/ceph/ceph.conf. Glance references this file to retrieve the necessary information about the topology of the Ceph cluster. Additionally, the file needs to state where the keyring with the password belonging to the Glance user in Ceph can be found. A corresponding entry in ceph.conf looks like this:
[client.glance]
keyring = /etc/ceph/keyring.glance
The /etc/ceph/keyring.glance keyring must contain the user's key, which looks something like:
[client.glance]
key = AQA5XRZRUPvHABAABBkwuCgELlu...
Then, you just need to configure Glance itself by entering some new values in /etc/glance/glance-api.conf. The value for default_store is rbd. If you use client.glance as the username for the Glance user in Ceph and images as the pool, you can close the file now; these are the default settings.
If other names are used, you will need to modify rbd_store_user and rbd_store_pool accordingly further down in the file. Finally, restart glance-api and glance-registry.
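Pulled together, the relevant excerpt of /etc/glance/glance-api.conf could look roughly like the following sketch; the option names shown here are the usual ones but may differ slightly between OpenStack releases, and the Ceph user is given without the client. prefix.
# /etc/glance/glance-api.conf (excerpt)
default_store = rbd
# Ceph configuration file from which Glance learns the cluster topology
rbd_store_ceph_conf = /etc/ceph/ceph.conf
# Ceph user and pool; glance/images are the defaults
rbd_store_user = glance
rbd_store_pool = images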
Cinder is the block storage solution in OpenStack; it supplies VMs, which are not themselves persistent, with non-volatile block storage. Cinder also has a native back end for Ceph. The configuration is a bit more extensive because Cinder handles storage allocation directly via libvirt; libvirt itself is thus the client that logs in directly to Ceph. If CephX authentication is used, the virtualization environment must therefore be able to authenticate itself to Ceph. So far, so good: libvirt 0.9.12 introduced a corresponding function.
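To check quickly whether the libvirt installed on a compute node is recent enough, querying the version is sufficient; for example:
# Show the libvirt library and API versions in use
virsh version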
Cinder
The implementation of Cinder is somewhat more complicated. The following example assumes that the Ceph user in the context of Cinder is client.cinder and that a separate cinder pool exists. As in the Glance example, you need a keyring called /etc/ceph/keyring.cinder for the user, and it must be referenced accordingly in ceph.conf. To generate a UUID at the command line, use uuidgen; libvirt stores passwords under the UUID as their name. The example uses 95aae3f5-b861-4a05-987e-7328f5c8851b. The next step is to create a matching secret file for libvirt – /etc/libvirt/secrets/ceph.xml in this example; its content is shown in Figure 3. Always replace the uuid field with the actual UUID, and if the local user is not client.cinder but has a different name, also adjust the name entry accordingly.
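Such a secret definition is a small piece of XML; the following is a sketch of what it typically looks like, assuming the UUID and user name from this example:
<!-- /etc/libvirt/secrets/ceph.xml: libvirt secret for the CephX key -->
<secret ephemeral='no' private='no'>
  <!-- UUID generated with uuidgen; replace with your own -->
  <uuid>95aae3f5-b861-4a05-987e-7328f5c8851b</uuid>
  <usage type='ceph'>
    <!-- descriptive name; here it reflects the Ceph user client.cinder -->
    <name>client.cinder secret</name>
  </usage>
</secret>
Now enable the password in libvirt: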
virsh secret-define /etc/libvirt/secrets/ceph.xml
Thus far, libvirt knows there is a password, but it is still missing the real key. The key is located in /etc/ceph/keyring.cinder; it is the string that follows key=. The following line tells libvirt that the key belongs to the password that was just set,
virsh secret-set-value <UUID> <key>
which here is:
virsh secret-set-value 95aae3f5-b861-4a05-987e-7328f5c8851b AQA5jhZRwGPhBBAAa3t78yY/0+1QB5Z/9iFK2Q==
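Copying the key by hand is error prone; if the Ceph admin keyring is readable on this host, the key can also be passed in directly. This is merely a convenience sketch:
# Read the key for client.cinder from Ceph and hand it to libvirt in one step
virsh secret-set-value 95aae3f5-b861-4a05-987e-7328f5c8851b "$(ceph auth get-key client.cinder)"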
This completes the libvirt part of the configuration; libvirt can now log in to Ceph as client.cinder and use the storage devices there. What's missing is the appropriate Cinder configuration to make sure the storage service actually uses Ceph.
The first step is to make sure Cinder knows which account to use to log in to Ceph so it can create storage devices for VM instances. To do this, modify /etc/init/cinder-volume.conf in your favorite editor so that the first line of the file reads
env CEPH_ARGS="--id cinder"
(given a username of client.cinder). The second step is to add the following four lines to the /etc/cinder/cinder.conf file:
volume_driver=cinder.volume.driver.RBDDriver
rbd_pool=cinder
rbd_secret_uuid=95aae3f5-b861-4a05-987e-7328f5c8851b
rbd_user=cinder
After restarting the Cinder services – cinder-api, cinder-volume, and cinder-scheduler – the Cinder and Ceph team should work as desired.
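With the Upstart-based services used here (e.g., on Ubuntu), the restart and a quick functional test could look like the following sketch; the volume name and size are arbitrary examples:
# Restart the Cinder services
service cinder-api restart
service cinder-volume restart
service cinder-scheduler restart
# Create a 1GB test volume and verify that it shows up in the Ceph pool
cinder create --display-name test 1   # older clients may spell this --display_name
rbd --id cinder -p cinder ls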
The best OpenStack installation is not worth a penny if the failure of a server takes it down. The setup I created in the previous article [1] still has two points of failure: the API node and the network node.