Optimally combine Kubernetes and Ceph with Rook
Castling
Control Plane Node
Of the five servers with Ubuntu 18.04 LTS, pick one to serve as the Kubernetes master. The example here does without any form of classic high availability; in a production setup, any admin would handle this differently so as not to lose the Kubernetes controller in case of a problem.
On the Kubernetes master, which is referred to as the Control Plane Node in Kubernetes-speak, ports 6443, 2379-2380, 10250, 10251, and 10252 must be accessible from the outside. Also, disable the swap space on all the systems; otherwise, the Kubelet service will not work reliably on the target systems (Kubelet is responsible for communication between the Control Plane and the target system).
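Disabling swap is a two-step operation if you want it to survive a reboot. The following sketch shows one common way to do it; the sed pattern assumes the swap entries in /etc/fstab contain the word "swap" and are not already commented out:

# swapoff -a
# sed -i '/ swap / s/^/#/' /etc/fstab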
Install CRI-O
To run Kubernetes, the cluster systems need a container runtime. Until recently, this was almost automatically Docker, but not all of the Linux community is Docker friendly. An alternative to Docker is CRI-O [3], which now officially supports Kubernetes. To use it on Ubuntu 18.04, you need to run the commands in Listing 1. As a first step, you set several sysctl variables; the actual installation of the CRI-O packages then follows.
Listing 1
Installing CRI-O
# modprobe overlay
# modprobe br_netfilter

[ ... required sysctl parameters ... ]
# cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# sysctl --system

[ ... Preconditions ... ]
# apt-get update
# apt-get install software-properties-common
# add-apt-repository ppa:projectatomic/ppa
# apt-get update

[ ... Install CRI-O ... ]
# apt-get install cri-o-1.13
The systemctl start crio command starts the runtime, which is now available for use by Kubernetes. Working as an admin user, you need to perform these steps on all the servers, not just on the Control Plane or the future Kubelet servers.
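If you want the runtime to come back up automatically after a reboot, it also makes sense to enable the service and verify its state; these are standard systemd commands:

# systemctl enable crio
# systemctl status crio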
Next Steps
Next, complete the steps in Listing 2 to add the Kubernetes tools kubelet, kubeadm, and kubectl to your systems.
Listing 2
Installing Kubernetes
# apt-get update && apt-get install -y apt-transport-https curl
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
# apt-get update
# apt-get install -y kubelet kubeadm kubectl
# apt-mark hold kubelet kubeadm kubectl
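A quick sanity check confirms that the tools landed on the system; both commands ship with the packages you just installed:

# kubeadm version -o short
# kubectl version --client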
Thus far, you have only retrieved the components needed for Kubernetes; you still do not have a running Kubernetes cluster. Kubeadm will bootstrap one shortly. On the node that you have selected as the Control Plane, the following command sets up the required components:
# kubeadm init --pod-network-cidr=10.244.0.0/16
The --pod-network-cidr parameter is not strictly necessary for kubeadm itself, but a Kubernetes cluster rolled out with kubeadm needs a network plugin compatible with the Container Network Interface (CNI) standard (more on this later), and Flannel, which this example uses, expects precisely this 10.244.0.0/16 pod network. First, you should note that the output from kubeadm init contains several commands that will make your life easier.
In particular, you should make a note of the kubeadm join command at the end of the output, because you will use it later to add more nodes to the cluster. Equally important are the steps used to create a folder named .kube/ in your personal folder, in which you store the admin.conf file. Doing so then lets you use Kubernetes tools without being root. Ideally, you would want to carry out this step immediately.
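The commands in question match what kubeadm init prints at the end of its run and look something like the following sketch. The join line is only a template: the address, token, and hash are placeholders that you need to replace with the values from your own kubeadm init output.

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

On every additional node, the join command then takes a form like:

# kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>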
As a Kubernetes admin, you cannot avoid the network. Although not nearly as complicated as with OpenStack, for which an entire software-defined networking suite has to be tamed, you still have to load a network plugin: After all, containers without a network don't make much sense.
The easiest way to proceed is to use Flannel [4], which requires some additional steps. On all your systems, run the following command to route IPv4 traffic from bridges through iptables:
# sysctl net.bridge.bridge-nf-call-iptables=1
Additionally, all your servers need to be able to communicate on UDP ports 8285 and 8472, which Flannel uses for its UDP and VXLAN backends.
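How you open these ports depends on your firewall; assuming ufw, Ubuntu's bundled firewall front end, is in use, a minimal sketch looks like this:

# ufw allow 8285/udp
# ufw allow 8472/udp

Flannel can then be integrated on the Control Plane: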
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
This command loads the Flannel definitions directly into the running Kubernetes Control Plane, making them available for use.
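To see whether the rollout worked, you can watch the Flannel pods come up and check that the nodes report a Ready status; both are standard kubectl queries:

# kubectl get pods -n kube-system
# kubectl get nodes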