OpenStack: What's new in Grizzly?
Strong as a Bear
Glance with Many Back Ends
The developers of the Glance image service have listened to the wailing of their users and implemented a feature that many urgently wanted: Glance can now use multiple, parallel image back ends. To date, admins had to decide whether they wanted to store images in Swift, in Ceph, or locally on Glance's host disk. In Grizzly, you have the option of combining different image stores.
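If you want to try this, the relevant switches live in glance-api.conf; the option names below are taken from my reading of the Grizzly sample configuration and should be checked against your installed version:

# /etc/glance/glance-api.conf (option names assumed from the Grizzly sample config)
default_store = file
known_stores = glance.store.filesystem.Store, glance.store.swift.Store, glance.store.rbd.Store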
Another new feature in Grizzly is the ability to share your own images with other users; previously, an image was visible only to the user who uploaded it, which often led to duplication. Additionally, admins can now store many more details for images in the Glance database, including the exact version of an operating system, where previously only a name could be recorded.
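Sharing works through image members; as a sketch with placeholder IDs, the Grizzly-era glance client handles this roughly as follows:

# Make a private image visible to a second tenant (placeholder IDs)
glance member-create <IMAGE_ID> <TENANT_ID>
# List the tenants an image is shared with
glance member-list --image-id <IMAGE_ID>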
Admins can now also set flags stating, for example, whether an image contains Windows 7 or Windows 8, and can define the image's format properties. The primary objective is more clarity in large OpenStack installations with a correspondingly large number of images. All the new fields are indexed and therefore searchable, so if you are looking for a specific OS in the cloud, you will reach your destination much faster.
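As a hedged example, such metadata is attached as image properties; os_distro and os_version are commonly used keys, but verify which ones your tooling actually evaluates:

# Tag an image as Windows 8 and search for it later (property keys assumed)
glance image-update <IMAGE_ID> --property os_distro=windows --property os_version=8
glance image-list --property-filter os_distro=windows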
OpenStack Compute (Nova) is perhaps the most important OpenStack core component, and in Grizzly the developers came up with many changes. During configuration, you will initially notice that something is missing: Because nova-volume was virtually kicked out of Nova in Folsom, the developers have now eradicated the final remnants. The same is true for most functions formerly offered by nova-manage, which provided additional functionality alongside nova, the CLI client for Nova. No more: In the long term, nova-manage will only be needed to create the tables in Nova's MySQL database during the first phase of the installation. All other functions are being migrated to nova.
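In practice, that leaves little more than the initial database setup for nova-manage; everything else goes through the regular client, for example:

# Create Nova's tables in the MySQL database during installation
nova-manage db sync
# Day-to-day work happens in the nova CLI client
nova list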
Return of the Cells
If you have been following OpenStack for a long time, you will meet an old friend in a new guise in Grizzly's Nova: Cells are back. In previous OpenStack versions, cells allowed you to group compute hosts and then target these groups specifically (e.g., specifying that different VMs belonging to a customer had to run in different cells, which then possibly resided in two separate data centers).
In OpenStack Essex, the cell feature fell out of favor, and the developers removed the function – not because it had not proved useful, but because they did not like the implementation. Now cells are back as Nova Compute Cells, allowing precise control of geographically distributed setups and offering the option of separating host and cell scheduling to give preferential treatment to individual tenants in OpenStack if necessary.
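What enabling cells looks like in practice is sketched below; the [cells] option names follow the Grizzly documentation as I read it and are best verified against your own installation:

# /etc/nova/nova.conf (option names assumed from the Grizzly docs)
[cells]
enable = True
name = cell1

# The cell scheduler runs as its own service; the init script name is distribution-specific
service nova-cells start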
Another new feature in Grizzly looks trivial but is spectacular: The libvirt back end of OpenStack now understands SPICE, the protocol for connecting a client to the graphical desktop of a virtualized system. It is basically a solution very similar to the VNC protocol, but SPICE offers much better performance. In Grizzly, it is finally possible to connect to the desktops of Windows VMs in a technically reliable and easy way. OpenStack thus opens up a whole new target group, because SPICE allows you to use OpenStack as a typical solution for virtual desktop infrastructure (VDI). With VNC, that was unthinkable.
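Switching the console over is mostly a configuration matter; the [spice] options below reflect the Grizzly sample nova.conf as far as I recall it and should be treated as assumptions:

# /etc/nova/nova.conf on the compute node (option names assumed)
[spice]
enabled = True
agent_enabled = True
keymap = en-us
html5proxy_base_url = http://controller.example.com:6082/spice_auto.html

The accompanying HTML5 proxy runs as a separate service (typically packaged as nova-spicehtml5proxy), and the old VNC settings can be switched off once SPICE consoles work.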
If you like VMware, you will be pleased to hear that Nova gains a compute driver for VMware in Grizzly. This makes it possible to use hosts running VMware as OpenStack compute nodes. If you prefer to rely on virtualization with VMware rather than KVM or Xen, Grizzly finally lets you do so.
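A sketch of how a compute node might be pointed at vCenter follows; the flag names stem from the Grizzly-era vmwareapi driver documentation as I remember it, so treat them as assumptions and check your release:

# /etc/nova/nova.conf (flag names assumed; in Grizzly they still live in [DEFAULT])
compute_driver = vmwareapi.VMwareVCDriver
vmwareapi_host_ip = vcenter.example.com
vmwareapi_host_username = administrator
vmwareapi_host_password = secret
vmwareapi_cluster_name = cluster1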
Nova and High Availability
Up to now, support for high availability (HA) functions in Nova has been limited, but the developers promised improvements in Grizzly, and they delivered – somewhat. Unfortunately, the Evacuate feature stops halfway to true HA functionality.
Evacuate is designed to let administrators restart VMs on other hosts if they previously ran on a computing node that went down. However, the implementation fails because – although Evacuate provides exactly this function – OpenStack still has no reliable way of determining whether a computing node has failed or not. That is, failed VMs might indeed be much easier to start on other nodes now, but the admin still needs to lend a hand, which unfortunately is far removed from true high availability. However, the developers mention the HA scenario in the specification of Evacuate, so hope for an implementation still exists. Perhaps this will happen in Havana.
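When a node has verifiably died, the manual step looks roughly like this; the server and host names are placeholders, and the syntax is that of the Grizzly-era nova client:

# Rebuild a VM from the failed node compute01 on the surviving node compute02
# (--on-shared-storage preserves the existing disk if instance storage is shared)
nova evacuate vm-web01 compute02 --on-shared-storage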