Lead Image © Zachery Blanton, 123RF.com

Secure access to Kubernetes

Avoiding Pitfalls

Article from ADMIN 56/2020
Kubernetes comes with a sophisticated system for ensuring secure access by users and system components through an API. We look at the options for authentication, authorization, and access control.

The first time you roll out a Kubernetes test cluster with kubeadm, a kubeconfig with certificate-based superuser access is copied to ~/.kube/config on the host, leaving all options open. Unfortunately, it is not uncommon for clusters used in production to continue working this way. Security officers in more regulated industries would probably shut down such an installation immediately. In this article, I seek to shed some light on the issue and show alternatives.
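
To see just how wide open this default access is, you can ask the API server what the credentials allow. The following check is a minimal sketch, assuming a stock kubeadm installation:

$ kubectl config view --minify   # shows the admin credentials currently in use
$ kubectl auth can-i '*' '*'     # "yes" means unrestricted superuser rights
yes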

Three Steps

Kubernetes has a highly granular authorization system that influences the rights of administrators and users, as well as the access options to the running services and components within the cluster. The cluster is usually controlled directly by means of an API and the kubectl command-line front end. Access to the kube-apiserver control component is kept as simple as possible by the Kubernetes makers. Developers who want to build more complex services in Kubernetes can take advantage of this system.
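
Because kubectl is only a thin client on top of this API, you can watch the underlying HTTP requests or talk to the API server directly. A short sketch (the port number is an arbitrary choice):

$ kubectl get pods -v=8          # verbose mode reveals the REST calls kubectl makes
$ kubectl proxy --port=8001 &    # open an authenticated local proxy to the API server
$ curl http://localhost:8001/api/v1/namespaces/default/pods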

The official Kubernetes documentation is a useful starting point [1] if you want to understand the access control strategies that Kubernetes brings to the table. Figure 1 shows the different components and their interaction within a cluster. All requests run through the API server and pass through three stations: authentication, authorization, and access control (called admission control in the Kubernetes documentation).

Figure 1: Kubernetes makes virtually no distinction between human users and service accounts in terms of access controls. © Kubernetes [2]

Authentication checks whether Kubernetes can verify the username with one of the configured authentication methods, which can be a static password file, a bearer token, or a client certificate; note that Kubernetes manages ordinary (human) users outside the cluster itself. The platform allows multiple authentication modules and tries them in turn until one of them authenticates the request or all of them fail.
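
For certificate-based authentication, a client certificate signed by the cluster's certificate authority is registered in the kubeconfig. A minimal sketch, assuming a hypothetical user jane whose certificate and key already exist as jane.crt and jane.key:

$ kubectl config set-credentials jane --client-certificate=jane.crt --client-key=jane.key
$ kubectl config set-context jane@mycluster --cluster=mycluster --user=jane
$ kubectl config use-context jane@mycluster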

If the first step is successful, authorization follows, during which a decision is made as to whether the authenticated user is allowed to execute the requested operation on an object according to the configuration. This step clarifies, for example, whether the admin user is allowed to create a new pod in the development namespace, or whether they have write permission for pods in the management namespace.
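
With role-based access control (RBAC), such permissions are expressed as a Role and granted with a RoleBinding. The following sketch, reusing the hypothetical user jane, allows her to create and list pods in the development namespace:

$ kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-creator
  namespace: development
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-creator-binding
  namespace: development
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-creator
  apiGroup: rbac.authorization.k8s.io
EOF

You can then check the decision the authorizer would make with kubectl auth can-i create pods -n development --as jane.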

In the last step, the access control modules examine the content of the request. For example, user Bob might have the right to create pods in the development namespace as long as each occupies a maximum of 1GB of RAM; a request for a 2GB container would then pass the first two checks but fail the access control test.
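
One way to enforce such a memory cap is a LimitRange object, which the LimitRanger admission plugin evaluates for every new pod. A minimal sketch for the 1GB example:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit
  namespace: development
spec:
  limits:
  - type: Container
    max:
      memory: 1Gi
EOF

A container requesting 2Gi in this namespace is now rejected at the admission stage, even though authentication and authorization succeed.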

Namespaces

The basic Kubernetes concept of namespaces brackets all other resources and implements multitenancy. Without special permissions, which you would have to grant in the authorization step, a component only ever sees other components within its own namespace.

For example, if you want to use kubectl to list the running pods in the cluster, entering

kubectl get pods

usually results in zero output. The command queries the default namespace, which is unlikely to be the one in which any of the pods that make up the cluster are running.

Only after adding the -n kube-system option do you see output (Listing 1); this option queries the kube-system namespace, in which the Kubernetes system components typically run. To see all the pods in all namespaces, pass the -A option instead.

Listing 1

kube-system Namespace Pods

$ kubectl get pods -n kube-system
NAME                                  READY STATUS  RESTARTS AGE
coredns-5644d7b6d9-7n5qq              1/1   Running 1        41d
coredns-5644d7b6d9-mxt8k              1/1   Running 1        41d
etcd-kube-node-105                    1/1   Running 1        41d
kube-apiserver-kube-node-105          1/1   Running 1        41d
kube-controller-manager-kube-node-105 1/1   Running 3        41d
kube-flannel-ds-arm-47r2m             1/1   Running 1        41d
kube-flannel-ds-arm-nkdrf             1/1   Running 4        40d
kube-flannel-ds-arm-vdprb             1/1   Running 3        26d
kube-flannel-ds-arm-xnxqp             1/1   Running 0        26d
kube-flannel-ds-arm-zsnpp             1/1   Running 4        34d
kube-proxy-lknwh                      1/1   Running 1        41d
kube-proxy-nbdkq                      1/1   Running 0        34d
kube-proxy-p2j4x                      1/1   Running 4        40d
kube-proxy-sxxkh                      1/1   Running 0        26d
kube-proxy-w2nf6                      1/1   Running 0        26d
kube-scheduler-kube-node-105          1/1   Running 4        41d

Of Humans and Services

As shown in Figure 1, Kubernetes treats human users and service accounts in the same way after authentication: During authorization and access control, it makes no difference whether jack@test.com or databaseservice wants something from the cluster. Kubernetes checks each request against the requester's role assignments before letting it pass (or not).
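
You can see this symmetry with kubectl's impersonation feature, which asks the authorizer the same question for a user and for a service account. A quick sketch, assuming the development namespace exists and your own account is allowed to impersonate:

$ kubectl auth can-i list pods -n development --as jack@test.com
$ kubectl auth can-i list pods -n development --as system:serviceaccount:development:databaseservice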

The components in the Kubernetes cluster usually communicate with each other by means of service accounts (e.g., a pod talks to other pods or services). In theory, a human user could also use a service account; whether that is sensible and desirable is another matter.

Admins create service accounts with kubectl or the Kubernetes API; the accounts act as internal users of the cluster. Kubernetes automatically provides each service account with a secret, which authenticates its requests.
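
A minimal sketch of this process, reusing the databaseservice name from above (on Kubernetes versions of this article's vintage, which still create token secrets automatically):

$ kubectl create serviceaccount databaseservice -n development
$ kubectl describe serviceaccount databaseservice -n development   # shows the generated token secret
$ kubectl get secrets -n development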
