All Areas Access
Now you need to authenticate with your cluster by way of the AWS IAM Authenticator:
$ curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/aws-iam-authenticator
$ mv aws-iam-authenticator /usr/local/bin
$ chmod +x /usr/local/bin/aws-iam-authenticator
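If you want a quick sanity check before going any further, the binary can report its own version, and the token subcommand will mint a token for a named cluster (swap in your own cluster name; the exact output varies with the release you downloaded):
$ aws-iam-authenticator version
$ aws-iam-authenticator token -i chrisbinnie-eks-cluster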
Now run the command to save the local cluster configuration, replacing the cluster name after --name:
$ aws eks update-kubeconfig --name <chrisbinnie-eks-cluster> --region eu-west-1
Updated context arn:aws:eks:eu-west-1:XXXX:cluster/chrisbinnie-eks-cluster in /root/.kube/config
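If you would like to confirm that the control plane is ready before connecting to it, the AWS CLI can report the cluster's status; the cluster name below is just the example used above, so substitute your own:
$ aws eks describe-cluster --name chrisbinnie-eks-cluster --region eu-west-1 --query cluster.status
"ACTIVE"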
Now you can authenticate with the cluster. Download kubectl to speak to the Kubernetes API server (if you don't already have it) and then save it to your path:
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ mv kubectl /usr/local/bin
$ chmod +x /usr/local/bin/kubectl
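As a quick test that the binary is in place, and that the context written by the update-kubeconfig command above is now the active one, you might run something like:
$ kubectl version --client
$ kubectl config current-context
arn:aws:eks:eu-west-1:XXXX:cluster/chrisbinnie-eks-cluster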
These commands allow you to connect to the cluster with a local Kubernetes config file. Now for the moment of truth: Use kubectl to connect to the cluster and see what pods are running across all namespaces. Listing 3 shows a fully functional Kubernetes cluster, and in Figure 3 you can see the worker nodes showing up in AWS EC2. Finally, the EC2 section of the AWS Management Console (Figure 4; bottom left, under Elastic IPs) displays the Elastic IP address of the cluster.
Listing 3: Running Pods
$ kubectl get pods -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-96zqh             1/1     Running   0          19m
kube-system   aws-node-dzshl             1/1     Running   0          20m
kube-system   coredns-6d97dc4b59-hrpjp   1/1     Running   0          25m
kube-system   coredns-6d97dc4b59-mdd8x   1/1     Running   0          25m
kube-system   kube-proxy-bkjbw           1/1     Running   0          20m
kube-system   kube-proxy-ctc6l           1/1     Running   0          19m
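If you also want to see the worker nodes themselves from inside the cluster (the same instances that show up in AWS EC2 in Figure 3), kubectl can list them; the wide output adds each node's internal and external IP addresses:
$ kubectl get nodes -o wide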
Destruction
To destroy your cluster, first delete the Node group in the AWS Console and wait for it to drain its worker nodes (see the "Node groups" box); then you can click the Delete Cluster button; finally, make sure you delete the Elastic IP address afterward. Be warned that the deletion of both of these components takes a little while, so patience is required. (See the "Word to the Wise" box.)
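If you would rather script the teardown than click through the console, the AWS CLI offers a rough equivalent. The sketch below assumes the example cluster name used earlier and a hypothetical Node group called my-node-group; the final delete-cluster call will only succeed once the Node group has finished deleting:
# Delete the Node group first and wait for its workers to drain away
$ aws eks delete-nodegroup --cluster-name chrisbinnie-eks-cluster --nodegroup-name my-node-group --region eu-west-1
$ aws eks wait nodegroup-deleted --cluster-name chrisbinnie-eks-cluster --nodegroup-name my-node-group --region eu-west-1
# Only then delete the cluster itself
$ aws eks delete-cluster --name chrisbinnie-eks-cluster --region eu-west-1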
Node groups
Although I've mentioned the worker nodes a few times, I haven't really looked at how EKS presents them for you to configure. EKS runs off vetted Amazon Linux 2 machine images (AMIs). It dutifully creates workers between the minimum and maximum counts that you define yourself and then fronts them with an Amazon EC2 Auto Scaling group, so they can scale automatically as you require.
Note that not all Node group settings let you SSH into the nodes for extended access. This access can be useful for troubleshooting, as mentioned, but also if the Docker Engine (or potentially the CRI-O runtime instead) needs self-signed certificates trusted for tasks such as connecting to image registries. You are advised to experiment with the levels of access your application requires. More official information is found on the create-nodegroup page [14].
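For illustration only, a Node group created from the command line might look something like the sketch below; the subnet IDs, key pair, and IAM role ARN are placeholders, but it shows where the minimum and maximum worker counts and the optional SSH access mentioned above are declared:
$ aws eks create-nodegroup \
    --cluster-name chrisbinnie-eks-cluster \
    --nodegroup-name my-node-group \
    --subnets subnet-aaaa subnet-bbbb \
    --node-role arn:aws:iam::XXXX:role/my-eks-node-role \
    --scaling-config minSize=1,maxSize=3,desiredSize=2 \
    --remote-access ec2SshKey=my-keypair \
    --region eu-west-1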
Word to the Wise
When you tear down the cluster (which you should get working programmatically), the Elastic IP address will continue to be billed to your account. That can add up to more than a few bucks in a calendar month if you forget, which might be a costly mistake. You should definitely check out the pricing documentation [12], especially if you are new to AWS. Incidentally, deleting a cluster involves running a command similar to the create-cluster command seen in Listing 2 [13]. If you don't use the eksctl delete cluster option, then to clear up all the billable AWS components manually, you should consider deleting the NAT gateway, the network interface, the Elastic IP address, the VPC (carefully, and only if you are sure), and the cluster's CloudFormation stacks.
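To make that concrete, an eksctl teardown plus a manual check for any leftover Elastic IP might look something like the following sketch; the allocation ID is a placeholder, so list the addresses first and double-check what you are releasing:
$ eksctl delete cluster --name chrisbinnie-eks-cluster --region eu-west-1
# List any remaining Elastic IPs, then release only the one you are sure about
$ aws ec2 describe-addresses --region eu-west-1
$ aws ec2 release-address --allocation-id eipalloc-XXXX --region eu-west-1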
The End Is Nigh
As you can see, the process of standing up a Kubernetes cluster within AWS EKS is slick and easily automated. If you can afford to pay some fees for a few days, repeating the process covered here and destroying the cluster each time is an affordable way of testing your applications in a reliable, production-like environment.
Remember that once you have mastered the process, it is quite possible to autoscale your cluster to handle exceptionally busy production workloads, thanks to the flexibility made possible by the Amazon cloud. If you work on the programmatic deletion of clusters, you can start and stop them in multiple AWS regions to your heart's content.
Infos
[1] GKE: https://cloud.google.com/kubernetes-engine
[2] AKS: https://azure.microsoft.com/en-gb/services/kubernetes-service/
[3] EKS: https://aws.amazon.com/eks
[4] Amazon EC2: https://aws.amazon.com/ec2
[5] etcd: https://www.etcd.io
[6] Kubernetes components: https://kubernetes.io/docs/concepts/overview/components
[7] Amazon EKS cluster endpoint access control: https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html
[8] eks: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/eks/index.html
[9] create-cluster: https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html
[10] eksctl: https://eksctl.io
[11] Setting up AWS keys: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html
[12] Amazon EC2 on-demand pricing: https://aws.amazon.com/ec2/pricing/on-demand
[13] delete-cluster: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/eks/delete-cluster.html
[14] create-nodegroup: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/eks/create-nodegroup.html