Lead Image © zelfit, 123RF.com

Harden your OpenStack configuration

Fortress in the Clouds

Article from ADMIN 39/2017
Any OpenStack installation that hosts services and VMs for several customers poses a challenge for the security-conscious admin. Hardening the overall system can turn the porous walls into a fortress – but you'll need more than a little mortar.

One of the biggest concerns about virtualization is that an attacker could succeed in breaking out of the virtual machine (VM) and thus gain access to the resources of the physical host. The security of virtual systems thus hinges on the ability to isolate resources of the various VMs on the same server.

A simple thought experiment shows how important it is that the boundaries of VM and host are not blurred. Assume you have a server that hosts multiple VMs that all belong to the same customer. In this scenario, a problem occurs if a user manages to break out from a VM and gain direct access to the server: In the worst case, the attacker now has full access to the VMs on the host and can access sensitive data at will, or even set up booby traps to fish for even more information.

To gain unauthorized access, attackers need to negotiate multiple obstacles: First, they must gain access to the VM itself. If all VMs belong to the same customer and the same admins regularly maintain them, this risk is minimized, but it cannot be ruled out. In the second step, an attacker needs to negotiate the barrier between the VM and the host. Technologies such as SELinux can help to minimize the risks of an attacker crossing the VM barrier.

Aggravated in the Cloud

The risk potential is far greater when the environment does not operate under a single, uniform management system, such as the case where individual customers operate within a public cloud.

In a public cloud setting, the VMs maintained by different customers run alongside one another on a common host. Customers cannot choose which host a VM runs on or who else has machines on the same host, and they have no control over the software that runs on the other VMs or how well those VMs are maintained. Instead, they need to trust the provider of the platform.

This article describes some potential attack vectors for VMs running in the OpenStack environment – and how admins can best minimize the dangers.

Identifying Attack Vectors

The most obvious point of entry for attackers is the individual VM. An attacker who gains access to a VM in the cloud can also gain access to the server if the host and the VMs are not properly isolated.

A number of other factors provide a potential gateway into the cloud. Practically any cloud will rely on network virtualization. Most will use Open vSwitch, which relies on a VXLAN construct or GRE tunnels to implement virtual networks, bridges, and virtual ports on the hypervisor. An attacker who somehow gains access to the hypervisor machine can sniff the network traffic produced by VMs and thus access sensitive data.

The plan of attack does not need to be complex: In many cases, attackers in a cloud computing environment are confronted with a software museum that ignores even the elementary principles of IT security, such as timely updates and secure practices for configuring network services.

One complicating factor in the OpenStack environment is that stolen login credentials give an attacker easy access to critical data: Usually, a single set of credentials is sufficient to read or modify the account's existing VMs and volumes at will.

What can a provider do to avoid these attack scenarios? A large part of the work is to create an actionable environment in which self-evident security policies can actually be implemented.

What Services?

Hardening an OpenStack environment starts with careful planning and is based on a simple question: Which services need to be accessible from the outside? OpenStack, like most other clouds, is driven entirely by its APIs. The API components must be accessible to the outside world – and that's all. Users from outside do not need access to MySQL or RabbitMQ, even though both are necessary for the operation of OpenStack.
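The "APIs only" rule translates directly into a packet filter policy on the controllers. The following is a minimal sketch in iptables-restore format; the port numbers (5000 for Keystone, 8774 for the Nova API) are the OpenStack defaults, while the subnet and jump host address are placeholder examples, not taken from the article:

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# API traffic arrives only via the load balancers (example subnet)
-A INPUT -s 192.168.10.0/24 -p tcp --dport 5000 -j ACCEPT
-A INPUT -s 192.168.10.0/24 -p tcp --dport 8774 -j ACCEPT
# SSH only from the jump host (example address)
-A INPUT -s 192.168.10.5/32 -p tcp --dport 22 -j ACCEPT
COMMIT
```

Everything not explicitly allowed – including MySQL (3306) and RabbitMQ (5672) – is dropped by the default INPUT policy.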

Typically, a cloud does not even expose the APIs directly, because direct exposure makes high availability impossible. Load balancers serve as the first line of defense against attacks in this case (Figure 1), allowing incoming connections only on specific ports.

Figure 1: Load balancers like HAProxy form a line of defense in the battle against break-ins.
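A minimal HAProxy configuration fragment illustrates the pattern: only the load balancer holds the public address, and it forwards requests to controllers on the private network. The addresses, backend names, and two-controller layout are placeholder assumptions, not details from the article:

```
frontend keystone_public
    bind 203.0.113.10:5000
    default_backend keystone_api

backend keystone_api
    balance roundrobin
    server ctl1 192.168.10.11:5000 check
    server ctl2 192.168.10.12:5000 check
```

If one controller fails its health check, HAProxy simply stops sending traffic to it – which is exactly the high-availability property that direct API exposure cannot provide.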

A first important step in securing OpenStack is designing the network layout to make external access to resources impossible whenever it is unnecessary. Admins would do well to be as radical as possible: If neither the OpenStack controllers nor the hypervisors need to be accessible from the outside, they do not need a public IP. Furthermore, it is not absolutely necessary to open connections to the outside world from any of these hosts. Completely isolating systems from the Internet effectively prevents attackers from loading malicious software if they have actually gained access in some way.

Of course, this approach means additional work. Distribution updates or additional software packages somehow need to find their way onto the affected servers. But this challenge can be managed by installing a local mirror server (Figure 2) and a local package repository.

Figure 2: Debmirror creates a local package mirror from an official Debian or Ubuntu mirror. You can keep machines up-to-date locally, even if they do not have access to the Internet.
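On the client side, the isolated hosts then only ever talk to the internal mirror. A sketch of the resulting APT configuration, assuming a Debian system and a mirror reachable as mirror.internal (the hostname and suite are placeholder assumptions):

```
# /etc/apt/sources.list on an isolated OpenStack node –
# no official mirrors, only the local one
deb http://mirror.internal/debian bookworm main
deb http://mirror.internal/debian-security bookworm-security main
```

The mirror host itself sits in the DMZ segment and periodically synchronizes from an official mirror, for example via a debmirror cron job.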

To understand the principle, you need to think about network segments: A network segment in this case includes systems that only communicate locally with other servers. A second segment serves as a kind of DMZ, supporting servers with one leg on the private network and the other on the Internet. Load balancers, mirror servers, and the OpenStack nodes are typical examples.

Each network segment should include servers that act as jump hosts, that is, they provide access to individual systems on the private network. Actions such as VPN access and login should be handled via SSH keys or – even better – certificates. Packet filters are a good idea in principle, but they only offer protection until someone with admin rights gains access to a server. After that, they are virtually ineffective, because they can be overridden by the attacker at any time.
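Certificate-based SSH access can be sketched with stock OpenSSH tooling: a central CA signs short-lived user certificates, and the jump hosts trust only the CA. All paths, key names, and the principal "admin" below are illustrative assumptions, not details from the article:

```shell
# Sketch: issue a short-lived SSH user certificate from a central CA.
set -e
demo=/tmp/ssh-ca-demo
rm -rf "$demo" && mkdir -p "$demo"

# One-time step: create the CA key pair that every jump host will trust
ssh-keygen -q -t ed25519 -f "$demo/ca" -N '' -C 'site-ssh-ca'

# Per admin: generate a key pair, then have the CA sign the public key
# for the principal "admin" with a validity of one day
ssh-keygen -q -t ed25519 -f "$demo/admin" -N '' -C 'admin@example'
ssh-keygen -q -s "$demo/ca" -I admin-key -n admin -V +1d "$demo/admin.pub"

# Inspect the resulting certificate (admin-cert.pub)
ssh-keygen -L -f "$demo/admin-cert.pub"
```

On the jump host, a `TrustedUserCAKeys` line in sshd_config pointing at the CA public key is all that is needed; because certificates expire on their own, access for a lost key ages out without any key rotation on the servers.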
