Kubernetes comes with a sophisticated system for securing access to its API by users and system components. We look at the options for authentication, authorization, and access control.
The first time you roll out a Kubernetes test cluster with kubeadm, a certificate-based superuser credential is copied to ~/.kube/config on the host, leaving every option open. Unfortunately, it is not uncommon for clusters used in production to keep working this way. Security officers in more regulated industries would probably shut down such an installation immediately. In this article, I shed some light on the issue and show alternatives.
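The credential in question looks roughly like the following kubeconfig fragment (all field values here are illustrative placeholders, not output from a real cluster). The embedded client certificate is issued to the system:masters group, which the built-in cluster-admin binding grants unrestricted access:

```yaml
# Sketch of a kubeadm-generated admin kubeconfig (~/.kube/config);
# server address and certificate data are placeholders.
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    server: https://10.0.0.1:6443
    certificate-authority-data: <base64-encoded CA certificate>
contexts:
- name: kubernetes-admin@kubernetes
  context:
    cluster: kubernetes
    user: kubernetes-admin
current-context: kubernetes-admin@kubernetes
users:
- name: kubernetes-admin
  user:
    # Client certificate issued to the system:masters group -- superuser access
    client-certificate-data: <base64-encoded client certificate>
    client-key-data: <base64-encoded private key>
```

Anyone holding this file can do anything in the cluster, which is exactly why leaving it in place on a production host is a problem.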
Three Steps
Kubernetes has a highly granular authorization system that governs the rights of administrators and users, as well as access to the running services and components within the cluster. The cluster is usually controlled directly by means of an API and the kubectl command-line front end. The Kubernetes makers keep access to the kube-apiserver control component as simple as possible, and developers who want to build more complex services on Kubernetes can take advantage of this system.
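To give a feel for how granular these rights can be, the following RBAC manifest is a sketch (namespace, user, and object names are hypothetical) that grants one user read-only access to pods in a single namespace:

```yaml
# Hypothetical example: read-only pod access in the "dev" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]          # "" is the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
- kind: User
  name: jane               # hypothetical user known to the API server
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Applied with kubectl apply -f, this lets the user list and watch pods in dev but nothing else, a far cry from the all-powerful kubeadm default.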
The official Kubernetes documentation is a useful starting point [1] if you want to understand the access control strategies that Kubernetes brings to the table. Figure 1 shows the different components and their interaction within a cluster. All requests run through the API server and pass through three stations: authentication, authorization, and access control.
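These three stations correspond to configurable stages of the API server itself. As a sketch, the relevant kube-apiserver flags might look like this (the values shown are common kubeadm defaults, not the only options):

```shell
# Sketch: kube-apiserver flags covering the three stations.
kube-apiserver \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --authorization-mode=Node,RBAC \
  --enable-admission-plugins=NodeRestriction
# --client-ca-file:          authentication via X.509 client certificates
# --authorization-mode:      which authorizers evaluate each request
# --enable-admission-plugins: admission controllers that can still reject
#                             or modify a request after authorization
```

Only a request that passes all three stations reaches the cluster state in etcd.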
...