OpenShift 3: Platform as a Service
Pain-Free Support
OpenShift Origin
The core component of Red Hat OpenShift is OpenShift Origin. Although the names are similar, the two serve different purposes. Origin is where all the threads come together; it is the product's technical foundation. Red Hat OpenShift, on the other hand, is the complete PaaS platform, of which Origin is the most important component. In OpenShift, Origin acts as the control center that performs all tasks directly related to the development and rollout of PaaS stacks. Managing source code is part of that, as is creating containers on the basis of PaaS applications.
Between OpenShift versions 2 and 3, Red Hat practically turned the software upside down. Version 2 still worked on the basis of brokers and gears, which was Red Hat's wording for distributing containers on hosts, and the whole technology was developed in-house. OpenShift 3, on the other hand, is completely tailored to Docker: Image management is based on Docker and can also use Docker Hub to obtain base images. Because Docker itself is not cluster-aware, Red Hat relies on Google's Kubernetes. How that works and how admins are involved requires a closer look.
Kubernetes Under the Hood
The Kubernetes project, invented and since developed by Google, is a cloud environment based on container virtualization with Docker (Figure 3). In principle, it is therefore comparable to OpenStack or CloudStack, although when it comes to the specifics, large differences can be seen.
One of these differences is the overall complexity of the Kubernetes architecture, although it is still significantly less complicated than other clouds. Moreover, Kubernetes offers no support for any virtualization methods other than Docker containers, because the Docker concept is an integral component of Kubernetes' design.
A Kubernetes cloud comprises two types of nodes. The master servers are the controllers of the setup that coordinate the operation of containers on simple nodes. The master servers operate different services, such as an API for client access that also takes over management of users within the Kubernetes cloud [6].
The scheduler selects the nodes on which new containers land according to the data available. The replication controller also plays a role, facilitating horizontal scaling: If a certain container can no longer cope with its load, the replication controller starts new containers. That occurs automatically if the relevant environment is configured accordingly.
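The replication controller is driven by a declarative definition. A minimal sketch of a replication controller manifest from the Kubernetes v1 era might look like the following (the names, labels, and image are placeholders for illustration):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: webapp-rc
spec:
  replicas: 3             # Kubernetes keeps three copies running at all times
  selector:
    app: webapp           # which pods this controller is responsible for
  template:               # blueprint for new pods, used when scaling up
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: frontend
        image: nginx
        ports:
        - containerPort: 80
```

If one of the three pods dies, or the `replicas` count is raised, the controller starts replacement pods from the template automatically.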
Kubernetes has a clearly documented understanding of what a container is: A container is always specific to an application, and ideally, only one application runs within a container. Because setups generally contain more than one service, Google implemented pods in Kubernetes; a pod is a group of containers belonging to the same setup. All of a pod's containers start together and run on the same node of the setup.
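A pod definition makes this grouping concrete. The following is a minimal sketch (names and images are hypothetical); both containers start together and share the same node, network namespace, and lifecycle:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: frontend        # first container of the pod
    image: nginx
    ports:
    - containerPort: 80
  - name: app             # second container, scheduled on the same node
    image: myapp:latest   # placeholder image name
```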
This principle is ideally suited for PaaS. Even when an environment needs several services, they can be started together in OpenShift, with OpenShift forwarding the request to Kubernetes, which then processes the necessary tasks in the background. Ultimately, with these facts in mind, it is only logical that Red Hat discarded its own underpinnings in OpenShift 2 and latched onto Kubernetes in version 3.
Network and Persistent Memory
Kubernetes relieves OpenShift of two further problems on which cloud environments put special demands: network and storage. For example, the network must be configurable from within the environment and cannot depend on on-site network hardware; yet, the containers of two customers running on the same bare metal still must not be able to see each other.
For this problem, Kubernetes offers software-defined networking (SDN) that is even allowed to harness external solutions (e.g., Open vSwitch). Kubernetes also processes traffic with the outside world automatically, so admins can cross that item off their list. The customer can be sure that their active containers have a network connection and can communicate with one another.
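Reachability between containers, and from the outside world, is expressed through a service definition. A minimal sketch (the name, selector, and ports are placeholders) might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp        # route traffic to all pods carrying this label
  ports:
  - port: 80           # port the service listens on inside the cluster
    targetPort: 80     # port of the container behind it
  type: NodePort       # additionally expose the service on every node
                       # so external traffic can reach it
```

The admin does not have to configure any routing by hand; Kubernetes wires the service to the matching pods wherever they happen to run.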
Storage is an ongoing theme in clouds. Kubernetes assumes that a container can roll itself out afresh at any time. Persistent storage is therefore not envisaged for the average container. Nonetheless, everyday experience shows that it is sometimes necessary; a database with customer data is the perfect example.
Kubernetes can manage persistent storage and attach it to containers, although the user has to request this specifically. OpenShift loops this option through its API so that containers can be provided with persistent storage for PaaS.
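Requesting persistent storage explicitly works via a persistent volume claim, which a pod then mounts. A sketch under assumed names (the claim name, size, and image are placeholders):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
  - ReadWriteOnce        # mountable read-write by a single node
  resources:
    requests:
      storage: 5Gi       # explicitly requested capacity
---
apiVersion: v1
kind: Pod
metadata:
  name: database
spec:
  containers:
  - name: mysql
    image: mysql
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql   # database files survive container restarts
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: db-data          # bind the claim defined above
```

Unlike the container's own filesystem, the claimed volume outlives the container and can be reattached when the database pod is rolled out afresh.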
This hymn of praise to Kubernetes raises the question of whether administrators should deal directly with Kubernetes instead of taking the long route via OpenShift. In fact, Kubernetes is very well suited for PaaS, but the environment in the version offered by Google is in a rough state. Administrators have to take care of setting it up and equipping it with suitable applications. Similarly, everything that eases deployment for developers, such as direct access to Git, is lacking in Kubernetes.
In this area, Red Hat has found the added value that should make OpenShift appetizing to the customer: Red Hat is not only expanding Kubernetes with an installation routine and fancy graphical tools, but also rolling in with a truckload of containers optimized for OpenShift. The integration into development processes, realized via Git, is also attractive.