Monitoring container clusters with Prometheus
Perfect Fit
Kubernetes [1] makes it much easier for admins to distribute container-based infrastructures. In principle, you no longer have to worry about where applications run or whether sufficient resources are available. However, if you want to ensure the best performance, you usually cannot avoid monitoring the applications, the containers in which they run, and Kubernetes itself.
You can read about how Prometheus works in a previous ADMIN article [2]; here, I shed light on how Prometheus and Kubernetes work together. Thanks to its service discovery, Prometheus independently retrieves information about the container platform, the running containers, services, and applications from the Kubernetes API. You do not have to change the Prometheus configuration when pods launch or die or when new nodes appear in the cluster: Prometheus detects all of these changes automatically.
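To illustrate the mechanism, the following minimal sketch shows a scrape job that uses the kubernetes_sd_configs pod role. The relabeling rules and the prometheus.io/scrape annotation are a widely used convention and are assumptions here; they are not the article's actual configuration.

```yaml
scrape_configs:
  # Discover all pods through the Kubernetes API; targets appear and
  # disappear automatically as pods are created and deleted.
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Only scrape pods that carry the prometheus.io/scrape=true annotation.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      # Attach namespace and pod name as labels to every collected series.
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
```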
Uplifting
In addition to the usual information, such as CPU usage, memory usage, and disk performance, the metrics of containers, pods, deployments, and running applications are of interest in a Kubernetes environment. In this article, I show you how to collect and visualize information about your Kubernetes installation with Prometheus and Grafana. A demo environment provides an impression of the insights Prometheus delivers into a Kubernetes installation.
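As a sketch of the kind of metrics involved, the recording rules below aggregate per-pod CPU and memory figures and per-deployment replica counts. This assumes cAdvisor and kube-state-metrics are being scraped and uses the Prometheus 2.x rule file format; the metric and rule names are illustrative, not taken from the article.

```yaml
groups:
  - name: container-overview
    rules:
      # CPU usage per pod, summed over its containers (cAdvisor metrics assumed).
      - record: pod:container_cpu_usage_seconds:rate5m
        expr: sum by (namespace, pod) (rate(container_cpu_usage_seconds_total[5m]))
      # Working-set memory per pod.
      - record: pod:container_memory_working_set_bytes:sum
        expr: sum by (namespace, pod) (container_memory_working_set_bytes)
      # Available replicas per deployment (requires kube-state-metrics).
      - record: deployment:kube_deployment_status_replicas_available:sum
        expr: sum by (namespace, deployment) (kube_deployment_status_replicas_available)
```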
The Prometheus configuration is based on the official example [3]. To query metrics from the Kubernetes API, the excerpt in Listing 1 is sufficient. Thanks to service discovery in Prometheus, many metrics can be retrieved, as shown in Figure 1.
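Listing 1 itself is not reproduced in this excerpt. A scrape job in the spirit of the official example, which collects metrics from the Kubernetes API servers through the endpoints role, could look like the sketch below; the exact contents of Listing 1 may differ, and depending on your Prometheus version the authorization settings may be written differently.

```yaml
scrape_configs:
  - job_name: 'kubernetes-apiservers'
    kubernetes_sd_configs:
      - role: endpoints
    scheme: https
    tls_config:
      # CA certificate mounted into the pod by the service account.
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    relabel_configs:
      # Keep only the kubernetes service in the default namespace on its https port.
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
```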