Kibana Meets Kubernetes
Second Sight
Check It Out
If you tried to jump ahead and run a Helm command, you might have been disappointed. First, you need to make sure that Helm and K3s are playing together nicely by telling Helm where to access the Kubernetes cluster:
$ export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
$ kubectl get pods --all-namespaces
$ helm ls --all-namespaces
If you didn't get any errors, then all is well and you can continue; otherwise, search online for the specific Helm error you received, such as Kubernetes cluster unreachable.
Next, having tweaked the values file to your requirements, run the chart installation command:
$ helm install stable/elastic-stack --generate-name
Listing 3 reports some welcome news – an installed set of Helm charts that report a DNS name to access. Now you can access Elastic Stack from inside or outside the cluster.
Listing 3
Kibana Access Info
NAME: elastic-stack-1583501473
LAST DEPLOYED: Fri Mar 6 13:31:15 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
The elasticsearch cluster and associated extras have been installed.
Kibana can be accessed:

  * Within your cluster, at the following DNS name at port 9200:

    elastic-stack-1583501473.default.svc.cluster.local

  * From outside the cluster, run these commands in the same shell:

    export POD_NAME=$(kubectl get pods --namespace default -l "app=elastic-stack,release=elastic-stack-1583501473" -o jsonpath="{.items[0].metadata.name}")
    echo "Visit http://127.0.0.1:5601 to use Kibana"
    kubectl port-forward --namespace default $POD_NAME 5601:5601
Instead of following the details in the listing, for now, I'll adapt them and access Kibana (in the default Kubernetes namespace) with the commands:

$ export POD_NAME=$(kubectl get pods -n default | grep kibana | awk '{print $1}')
$ echo "Visit http://127.0.0.1:5601 to use Kibana"
$ kubectl port-forward --namespace default $POD_NAME 5601:5601
Click the URL displayed by the echo statement – et voilà! – a Kibana installation awaits your interaction. To test that things look sane, click through the sample data links in the Discover section (Figure 1).
In normal circumstances, you need to ingest data into your Elastic Stack. Detailed information on exactly how to do that, dependent on your needs, is on the Elastic site [9]. This comprehensive document is easy to follow and well worth a look. Another resource [10] describes a slightly different approach to the one I've taken, with information on how to get Kubernetes to push logs into Elasticsearch to monitor Kubernetes activities with Fluentd [11]. Note the warning about leaving Kibana open to public access and the security pain that may cause you. If you're interested in monitoring Kubernetes, you can find information on that page to get you started.
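Before setting up a proper ingest pipeline, you can smoke-test the cluster by pushing a single document straight into Elasticsearch over a second port forward. The service name below comes from Listing 3 (the in-cluster DNS name at port 9200); substitute your own release name:

```shell
# Forward the Elasticsearch service from Listing 3 to localhost
# (substitute your own release name, as reported by 'helm ls')
kubectl port-forward --namespace default svc/elastic-stack-1583501473 9200:9200 &
sleep 2  # give the port forward a moment to establish

# Index a single test document
curl -s -XPOST 'http://127.0.0.1:9200/lab-test/_doc' \
     -H 'Content-Type: application/json' \
     -d '{"message": "hello from the lab", "level": "info"}'

# Confirm it arrived (it may take a second to become searchable)
curl -s 'http://127.0.0.1:9200/lab-test/_search?q=message:hello'
```

Once the document is indexed, it shows up in Kibana as soon as you create an index pattern matching lab-test.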
As promised at the beginning of this article, the aim of my lab installation was to create some dashboards, as shown in a more visual representation of the sample data in Figure 2.
The End Is Nigh
There's something very satisfying about setting up a lab quickly – getting a piece of tech working before rolling it out into some form of consumer-facing service. As you can see, the full stack, including K3s, is slick and fast to set up, which makes it pretty much perfect for experimentation. The installation is so quick that you can tear it down and rebuild it (or, e.g., write an Ansible playbook to create it) without an interminable wait.
I will leave you to ingest some useful telemetry into your shiny new Elastic Stack.
Infos
[1] Elastic Stack: https://www.elastic.co/elastic-stack
[2] Splunk: https://www.splunk.com
[3] K3s: https://k3s.io
[4] "Kubernetes k3s lightweight distro" by Chris Binnie, ADMIN, issue 56, 2020, pg. 50, https://www.admin-magazine.com/Archive/2020/56/Kubernetes-k3s-lightweight-distro
[5] Helm install docs: https://helm.sh/docs/intro/install/
[6] Helm security: https://helm.sh/docs/faq/
[7] Elastic Stack Helm chart: https://github.com/helm/charts/tree/master/stable/elastic-stack
[8] Values YAML file: https://github.com/helm/charts/blob/master/stable/elastic-stack/values.yaml
[9] Ingesting data: https://www.elastic.co/blog/how-to-ingest-data-into-elasticsearch-service
[10] Parsing log data with Fluentd: https://docs.bitnami.com/tutorials/integrate-logging-kubernetes-kibana-elasticsearch-fluentd/
[11] Fluentd: https://www.fluentd.org