Exploring Kubernetes with Minikube
Kubernetes Kickoff
Special Thanks: This article was made possible by support from Linux Professional Institute
If somehow you've missed out on ten years of cutting-edge technology, permit me to fill in the gaps: containers are king and orchestrators are required to ensure that containers behave properly.
Over the past few years, Docker has served as a reliable container runtime with improved networking and a number of other important features. More recently, the Kubernetes [1] orchestration environment has emerged from the camp of Google. Since then, the popularity of Kubernetes has expanded exponentially.
The meteoric rise of Kubernetes follows the explosion of containers in all facets of IT. Kubernetes, which was created by Google, offers the ability to manage an otherwise-unmanageable number of containers across multiple hosts for resilience and scalability, while retaining the portability and speed of software releases within containers.
Kubernetes offers rolling updates to minimize disruption when a new feature is released, and the Kubernetes environment provides the ability to scale, load balance, and provide redundancy to applications should a container (or a collection of containers, known as a pod) go offline.
For many users, however, Kubernetes remains a black art due to a relatively steep learning curve. Commercial products, such as Red Hat's OpenShift, have attempted to improve access to Kubernetes, but versatile tools such as OpenShift can be as nuanced and complicated as working with Kubernetes directly.
One easy and accessible way to start experimenting with Kubernetes is to use Minikube [2]. Minikube is a tool that is designed to let the user work with Kubernetes locally through a virtual environment. The official Kubernetes site states: ``Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.''
For most Kubernetes beginners, installing locally is much easier than learning the integration quirks of a cloud provider. With Minikube and a little Docker knowledge under your belt, it's perfectly possible to begin learning the basics once you complete the relatively simple installation.
This article describes how to get Kubernetes up and running on a local Linux system using Minikube, so you can experiment with it and see if you would like to deploy it on a larger scale. I'll use KVM as a virtual machine and create an Nginx deployment in the cluster as a proof of concept. Of course, you can modify this configuration as needed to customize this setup for your own environment.
I'll use Ubuntu 16.04 ``Xenial Xerus'' LTS for this article. If you are using a different version or a different Linux distribution, some of the steps might vary. If you get stuck, see the Minikube GitHub page for additional installation information.
Easy Peasy
Start by installing two packages for the KVM virtual machine. Ideally, your local laptop or desktop will already be using the Intel VT or AMD-V hardware extensions that support virtualization. (Check your BIOS settings to see if they exist, and then enable them if they're present and disabled.) If your system doesn't support these hardware extensions, see the box entitled ``Alternatives to Virtualization.''
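You can check from the shell whether the CPU advertises these extensions before rebooting into the BIOS; a non-zero count from the following rough check means the vmx (Intel) or svm (AMD) flag is present:

$ egrep -c '(vmx|svm)' /proc/cpuinfo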
For a Debian-based package system, the command for installing KVM is:
$ apt install libvirt-bin qemu-kvm
Add the text in Listing 1 to a little script and make it executable. The few lines in Listing 1 are all you need to add the Kubernetes key to your package manager, as supplied by the master of containers, Google.
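The script boils down to two steps: fetching Google's signing key and adding the Kubernetes apt repository. A rough sketch follows (the key URL and the kubernetes-xenial repository name track Google's documentation for Ubuntu 16.04 and may differ for your release; treat Listing 1 as the authoritative version):

#!/bin/bash
# Add Google's signing key for the Kubernetes packages
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# Add the Kubernetes apt repository (kubernetes-xenial matches Ubuntu 16.04)
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list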
Install the kubectl command line interface package with:
apt-get update
apt-get install -y kubectl
You might wish to add the preceding commands to a simple script called kubectl_install.sh and make it executable, then run the script to install the kubectl package:
$ chmod +x kubectl_install.sh
$ ./kubectl_install.sh
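To confirm that the client installed cleanly before going any further, a quick version check does the job:

$ kubectl version --client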
If you're interested, the commands reference for kubectl is available at the Kubernetes website [4].
The next step is to add the KVM driver that Minikube uses with the following commands (split up into three commands for ease of reading):
$ curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2
$ chmod +x docker-machine-driver-kvm2
$ sudo mv docker-machine-driver-kvm2 /usr/local/bin
The first command downloads the docker-machine-driver-kvm2 package using curl. The second command makes the download executable, and the third copies that command into a system path, which makes it visible system-wide (feel free to choose a different directory in your $PATH instead of /usr/local/bin). If you're unsure whether you've copied it correctly into your user's path, run the following command and look for a response about plugin binaries to denote success:
$ docker-machine-driver-kvm2
You can also run echo $PATH if you don't know the existing paths for your user and you want to know where to copy your binary.
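Note that the commands above install only the KVM driver. If the minikube binary itself isn't on your system yet, the same download-and-move approach works for it, too; a sketch, assuming Google's standard release location:

$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ chmod +x minikube-linux-amd64
$ sudo mv minikube-linux-amd64 /usr/local/bin/minikube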
For further KVM information look online for a short list of KVM commands [5].
To start up Kubernetes with Minikube and KVM, simply run the following command:
$ minikube start --vm-driver kvm2
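Once the VM has booted, a couple of quick sanity checks confirm that the cluster is really answering:

$ minikube status
$ kubectl cluster-info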
In order to stop the Minikube instance, you can run the following command:
$ minikube stop
Stopping local Kubernetes cluster...
Machine stopped.
If you reboot or come back to Minikube after a period of time and this command doesn't work, check the Troubleshooting section for information on how to fix the problem.
Looking Around
Now onto the good stuff. Barring any hideous errors, you should be up and running.
Kubernetes splits up resources (think on a per-customer basis, as a simple example) into namespaces, and as a result, in order to see everything within a cluster, you need to ask for --all-namespaces.
The following command shows all pods in the cluster and across all of the available namespaces:
$ kubectl get pods --all-namespaces
The output appears in Figure 1.
Figure 1 shows that the cluster is responding with lots of running pods.
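Because Minikube runs a single-node cluster, asking for the nodes should return exactly one entry:

$ kubectl get nodes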
Minikube offers a simple dashboard GUI for managing the Kubernetes cluster (Figure 2). To start the dashboard, enter:
$ minikube dashboard
The dashboard is intended for commands that don't require root access. If you need to execute a privileged command, such as the virsh commands, you're better off at the command line. See the box entitled ``Minikube Network Error'' if you have any trouble getting Minikube started.
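If you'd rather open the dashboard in a browser of your own choosing (or you're working on a headless box), recent Minikube releases can print the dashboard URL instead of launching a browser:

$ minikube dashboard --url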
Engine X
You need something to play with inside your shiny Kubernetes cluster in order to demonstrate how to use the orchestrator.
Copy the contents of Listing 2 into a file called nginx.yml . Indents and spacing can be a killer in YAML (``YAML Ain't Markup Language'' [6]), so be careful.
Listing 2 configures the latest official Nginx container (via image: nginx) and also makes sure two replicas are running for resilience via a Deployment, which is then presented by the nginx-svc service. This should give you two pods.
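For a feel of what such a manifest looks like, here is a minimal sketch along the same lines (the nginx-dep name is inferred from the pod names in Figure 5; the apps/v1 API version and the NodePort service type are my assumptions; treat Listing 2 as the authoritative version):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dep
spec:
  replicas: 2                 # two pods for redundancy
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx          # latest official Nginx image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort              # expose the pods on the Minikube VM's IP
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80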
Enter the following command to ingest the contents of the nginx.yml file in Listing 2:
$ kubectl create -f nginx.yml
Assuming your formatting and syntax are correct (try this YAML checker to check the formatting [7]), the magical Kubernetes springs to life and immediately gets jiggy with creating pods, a deployment, and a service.
Run the following command afterwards:
$ kubectl get pods
The output is shown in Figure 5. This command applies to the default namespace, so there is no need to specify a namespace.
Figure 5 shows that two Nginx pods are running, and that's because the config file asked for two replicas for redundancy reasons. If one pod fails, you will still have a web server available. The deployment will also restart another pod if one fails.
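You can watch this self-healing behavior directly: delete one of the two pods (the pod name below comes from Figure 5 and will differ on your system), and the deployment promptly spins up a replacement:

$ kubectl delete pod nginx-dep-54b9c79874-b9dzh
$ kubectl get pods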
The -o wide option offers more networking information than the standard command:
$ kubectl get pod nginx-dep-54b9c79874-b9dzh -o wide
See the output in Figure 6.
Use the -n option to specify a namespace:
$ kubectl get pods -n namespace_name
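If you're not sure which namespaces exist in the first place, you can list them:

$ kubectl get namespaces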
To get the deployments running in the default namespace:
$ kubectl get deployment
See Figure 7.
Or add the -o wide option to view the replicas:
$ kubectl get deployment -o wide
See Figure 8.
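Finally, to check that Nginx is actually serving pages, ask Minikube for the service's URL and fetch it (this assumes the service is exposed as a NodePort, as in the sketch above):

$ kubectl get svc nginx-svc
$ curl "$(minikube service nginx-svc --url)"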