What's your status (page)?
Keeping the lights on around the clock in a modern IT infrastructure is complicated. The usual approach involves running both internal and external servers and services to deliver end products and services. Keeping a close watch on every element of the running infrastructure is a necessity for any technology-driven business.
Modern monitoring solutions are designed to address the critical need to be proactive, rather than reactive, and to spot problems before failures can impair a business. A status page for internal and external servers and services that provides a quick overview of what's failing where helps IT teams stay on top of their infrastructure.
Most enterprise monitoring solutions are overkill and far too expensive for many companies, especially small to medium-sized businesses. In this article, I look at some excellent free and open source solutions that set up various kinds of status pages performing black box monitoring. The only requirement for testing these solutions on your IT infrastructure is a running Docker engine, which is pretty common nowadays.
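If you want to verify that prerequisite before diving in, two quick commands confirm that the Docker engine and the Compose plugin (assumed here to be Compose v2) are available:

docker version --format '{{.Server.Version}}'
docker compose version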
Monitoror Wallboard
The first free open source status page solution I will talk about is Monitoror [1]. It is known as a monitoring wallboard because it is a single-page app comprising different colored rectangular tiles. Monitoror is mainly concerned with three kinds of general-purpose monitoring checks: ping, port, and HTTP.
The ping check verifies connectivity to a configured host, the port check verifies that a port is listening on a configured endpoint, and the HTTP check issues GET requests against a configured URL. Monitoror also has special built-in checks for Azure DevOps, GitHub, GitLab, Jenkins, Pingdom, and Travis CI (continuous integration). The wallboard highlights the configured tiles in either green or red, according to whether the respective check passes or fails. To see Monitoror in action yourself, use a terminal command to create a Docker network for the test container(s):
docker network create statuspage-demo
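If you like, verify the network before launching any containers; both stacks in this article attach to it as an external network, so all of the demo containers can reach each other by service name:

docker network inspect statuspage-demo --format '{{.Name}} ({{.Driver}})'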
Next, create the monitoror_stack.yml and config.json files (Listings 1 and 2) [2] to launch a Monitoror stack and supply its configuration, respectively.
Listing 1
monitoror_stack.yml
01 version: '3.5'
02 services:
03   monitoror:
04     image: monitoror/monitoror:${MTRRTAG:-latest}
05     ports:
06       - "38080:8080"
07     environment:
08       - "MO_CONFIG=/etc/config.json"
09     restart: unless-stopped
10
11 networks:
12   default:
13     name: statuspage-demo
14     external: true
Listing 2
config.json
01 {
02   "version": "2.0",
03   "columns": 2,
04   "tiles": [
05     { "type": "PING", "params": {"hostname": "127.0.0.1"}},
06     { "type": "PORT", "params": {"hostname": "129.0.0.1", "port": 8080}},
07     { "type": "HTTP-STATUS", "params": {"url": "https://google.com"}},
08     {
09       "type": "GROUP",
10       "label": "localhost PING/PORT/HTTP Tests",
11       "tiles": [
12         {
13           "type": "PING",
14           "params": {
15             "hostname": "128.0.0.1"
16           }
17         },
18         {
19           "type": "PORT",
20           "params": {
21             "hostname": "127.0.0.1",
22             "port": 8080
23           }
24         },
25         {
26           "type": "HTTP-STATUS",
27           "params": {
28             "url": "http://localhost:8080"
29           }
30         }
31       ]
32     }
33   ]
34 }
The Monitoror configuration file defines an arrangement of the desired monitoring tiles in a given number of columns. If there are fewer columns than tiles, the screen fills vertically, too. An array of tiles defines the various monitoring checks on the wallboard. The PING, PORT, and HTTP-STATUS tiles are self-explanatory. A tile of type GROUP shows a single rectangular area that represents multiple checks; it turns red when one or more checks in the group fail. This kind of tile makes efficient use of the limited page area, so you can pack more checks into the wallboard.
The Monitoror documentation has complete information about the tiles that can be monitored, along with their respective parameters, the information displayed, and so on. If you need to run Monitoror natively, just grab the appropriate Golang static binary from its GitHub releases page and run the binary after making it executable. To launch the Monitoror container and supply the required configuration, which demonstrates tiles monitoring localhost in its container and Monitoror itself running on port 8080, execute the two commands in Listing 3.
Listing 3
Monitoror Container and Config
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock:ro -v ./monitoror_stack.yml:/etc/compose/monitoror_stack.yml:ro docker docker compose -f /etc/compose/monitoror_stack.yml up -d

docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock:ro -v ./config.json:/etc/monitoror/config.json:ro -v ./monitoror_stack.yml:/etc/compose/monitoror_stack.yml:ro docker docker compose -f /etc/compose/monitoror_stack.yml cp /etc/monitoror/config.json monitoror:/etc/config.json
Now when you access the Monitoror wallboard page in your browser at localhost:38080, you should see a page with monitoring tiles (Figure 1).
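If the page does not load, a quick probe from the terminal helps separate a Monitoror problem from a browser problem; an HTTP 200 here means the wallboard is being served and any remaining trouble lies in the tile configuration:

curl -s -o /dev/null -w '%{http_code}\n' http://localhost:38080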
I intentionally provided false IP addresses in the demo config file to show how failing tiles look on the wallboard. Monitoror picks up config changes during its periodic checks. To see how the tiles change with changing input, correct the invalid IP addresses and introduce a typo in the HTTP-STATUS tile by modifying config.json as in Listing 4.
Listing 4
Diff of config.json
5,6c6,7
< { "type": "PORT", "params": {"hostname": "129.0.0.1", "port": 8080}},
< { "type": "HTTP-STATUS", "params": {"url": "https://google.com"}},
---
> { "type": "PORT", "params": {"hostname": "127.0.0.1", "port": 8080}},
> { "type": "HTTP-STATUS", "params": {"url": "https://gogle.com"}},
15c15
< "hostname": "128.0.0.1"
---
> "hostname": "127.0.0.1"
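Incidentally, the diffs in this article are in the classic normal diff format, so rather than editing by hand, you can apply them with the standard patch tool (assuming you saved the diff as config.diff):

patch config.json config.diff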
When you provide Monitoror the new configuration (the second command in Listing 3), the wallboard should reflect the new configuration (Figure 2). Correcting the typo in the HTTP tile URL and copying the new configuration again should turn all tiles on the wallboard green.
The second demo arrays the Monitoror wallboard with tiles monitoring some popular modern cloud servers. To launch single instances of OpenSearch, Kafka, and Redis to monitor, modify the monitoror_stack.yml file as in Listing 5 and the config.json file as in Listing 6.
Listing 5
Diff for monitoror_stack.yml
8a10,27
>       - "MO_MONITORABLE_HTTP_SSLVERIFY=false"
>     restart: unless-stopped
>
>   opensearch:
>     image: opensearchproject/opensearch:${OSRHTAG:-latest}
>     environment:
>       - "discovery.type=single-node"
>     restart: unless-stopped
>
>   kafka:
>     image: bitnami/kafka:${KFKATAG:-3.2.3}
>     environment:
>       - "ALLOW_PLAINTEXT_LISTENER=yes"
>       - "KAFKA_CFG_LISTENERS=PLAINTEXT://0.0.0.0:9092,CONTROLLER://:9093"
>       - "KAFKA_ENABLE_KRAFT=yes"
>     restart: unless-stopped
>
>   redis:
>     image: redis:${RDSSTAG:-latest}
>     command: "redis-server --save 60 1 --loglevel warning"
Listing 6
Diff for config.json
3c3
< "columns": 2,
---
> "columns": 3,
5,7d4
< { "type": "PING", "params": {"hostname": "127.0.0.1"}},
< { "type": "PORT", "params": {"hostname": "127.0.0.1", "port": 8080}},
< { "type": "HTTP-STATUS", "params": {"url": "https://google.com"}},
10c7
< "label": "localhost PING/PORT/HTTP Tests",
---
> "label": "opensearch PING/PORT/HTTP Tests",
12,30c9,28
< {
< "type": "PING",
< "params": {
< "hostname": "129.0.0.1"
< }
< },
< {
< "type": "PORT",
< "params": {
< "hostname": "127.0.0.1",
< "port": 8080
< }
< },
< {
< "type": "HTTP-STATUS",
< "params": {
< "url": "http://localhost:8080"
< }
< }
---
> {"type": "PING", "params": {"hostname": "opensearch"}},
> {"type": "PORT", "params": {"hostname": "opensearch", "port": 9200}},
> {"type": "PORT", "params": {"hostname": "opensearch", "port": 9600}},
> {"type": "HTTP-STATUS", "params": {"url": "https://admin:admin@opensearch:9200"}}
> ]
> },
> {
> "type": "GROUP",
> "label": "kafka PING/PORT Tests",
> "tiles": [
> {"type": "PING", "params": {"hostname": "kafka"}},
> {"type": "PORT", "params": {"hostname": "kafka", "port": 9092}}
> ]
> },
> {
> "type": "GROUP",
> "label": "redis PING/PORT Tests",
> "tiles": [
> {"type": "PING", "params": {"hostname": "redis"}},
> {"type": "PORT", "params": {"hostname": "redis", "port": 6379}}
Use the commands in Listing 3 to launch the stack and copy the Monitoror config. You should see the wallboard with the new server tiles (Figure 3). By now, you should feel at home with the elegant capabilities Monitoror offers for getting a monitoring wallboard up and running. The command
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock:ro -v ./monitoror_stack.yml:/etc/compose/monitoror_stack.yml:ro docker docker compose -f /etc/compose/monitoror_stack.yml down
cleans up the running stack once you're done playing with Monitoror.
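Note that the down command removes only the Monitoror stack. The statuspage-demo network survives because it is declared external; once you have finished with all three tools in this article, remove it with:

docker network rm statuspage-demo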
Vigil Status Page
The Monitoror single-page wallboard is a quick and nice solution, but it has limited capabilities and is best suited to a relatively small number of servers and services. The next free open source status page solution, Vigil [3], is more mature and can handle large numbers of servers and services, with additional capabilities including branding, alerting, announcements, and other options. To bring up Vigil quickly and see it in action, create the YML and CFG files shown in Listings 7 and 8.
Listing 7
vigil_stack.yml
01 version: '3.5'
02 services:
03
04   vigil:
05     image: valeriansaliou/vigil:${VGILTAG:-v1.26.0}
06     ports:
07       - "48080:8080"
08     restart: unless-stopped
09
10 networks:
11   default:
12     name: statuspage-demo
13     external: true
Listing 8
config.cfg
01 [server]
02 log_level = "debug"
03 inet = "0.0.0.0:8080"
04 workers = 4
05 manager_token = "REPLACE_THIS_WITH_A_VERY_SECRET_KEY"
06 reporter_token = "REPLACE_THIS_WITH_A_SECRET_KEY"
07
08 [assets]
09 path = "./res/assets/"
10
11 [branding]
12 page_title = "Vigil Localhost Test Status Page"
13 page_url = "https://teststatus.page/status"
14 company_name = "RNG"
15 icon_color = "#1972F5"
16 icon_url = "https://avatars.githubusercontent.com/u/226598?v=4"
17 logo_color = "#1972F5"
18 logo_url = "https://avatars.githubusercontent.com/u/226598?v=4"
19 website_url = "https://teststatus.page/"
20 support_url = "mailto:help@teststatus.page"
21 custom_html = ""
22
23 [metrics]
24 poll_interval = 60
25 poll_retry = 2
26 poll_http_status_healthy_above = 200
27 poll_http_status_healthy_below = 400
28 poll_delay_dead = 30
29 poll_delay_sick = 10
30 push_delay_dead = 20
31 push_system_cpu_sick_above = 0.90
32 push_system_ram_sick_above = 0.90
33 script_interval = 300
34 local_delay_dead = 40
35
36 [notify]
37 startup_notification = true
38 reminder_interval = 300
39
40 [notify.webhook]
41 hook_url = "https://webhook.site/4406e2a4-13cd-4c99-975c-d3456a148b26"
42
43 [probe]
44 [[probe.service]]
45 id = "ping"
46 label = "PING"
47 [[probe.service.node]]
48 id = "invalidiping"
49 label = "Invalid IP Ping"
50 mode = "poll"
51 replicas = ["icmp://129.0.0.1"]
52
53 [[probe.service]]
54 id = "port"
55 label = "PORT"
56 [[probe.service.node]]
57 id = "localhostport"
58 label = "Localhost Port 8080 Probe"
59 mode = "poll"
60 replicas = ["tcp://localhost:8080"]
61
62 [[probe.service]]
63 id = "http"
64 label = "HTTP"
65 [[probe.service.node]]
66 id = "googlehttp"
67 label = "Google Http Probe"
68 mode = "poll"
69 replicas = ["https://google.com"]
This Vigil configuration with the minimal necessary settings is largely self-explanatory. The [server] section controls the IP address and port on which Vigil runs, along with a defined number of parallel workers. The [branding] section contains various settings for the status page header (e.g., company name, logo, website). The [metrics] section defines various polling parameters for the Vigil probes.
Vigil can notify you of the monitoring events it emits in different ways (e.g., email, Twilio, Slack, Telegram, XMPP, Webex). The test configuration uses a random webhook (it will be different for you) generated through Webhook.site, a random URL and email address generator, so you can see some of the events Vigil generates during testing.
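If you would rather inspect the notification payloads locally instead of on Webhook.site, a throwaway listener on a spare port is enough. This sketch assumes a traditional netcat (BSD variants want nc -l 9000 instead) and that you point hook_url at the listening host and port:

while true; do printf 'HTTP/1.1 200 OK\r\n\r\n' | nc -l -p 9000; done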
The Vigil GitHub project provides a complete configuration file [4], so you can move quickly through all the settings it offers. The probe section has various subsections to group and define your ICMP, TCP, and HTTP probes against the hosts and endpoints listed in each replicas array. Vigil also provides a script probe to cover monitoring checks not served by the other probes. The Vigil GitHub project page provides a detailed description of all the configuration settings.
The commands in Listing 9 bring up the container, provide the required configuration, and restart Vigil. When you open localhost:48080 in your browser, you will see the Vigil status page (Figure 4).
Listing 9
Launch and Configure Vigil Container
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock:ro -v ./vigil_stack.yml:/etc/compose/vigil_stack.yml:ro docker docker compose -f /etc/compose/vigil_stack.yml up -d

docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock:ro -v ./vigil_stack.yml:/etc/compose/vigil_stack.yml:ro -v ./config.cfg:/etc/vigil.cfg:ro docker docker compose -f /etc/compose/vigil_stack.yml cp /etc/vigil.cfg vigil:/etc/vigil.cfg

docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock:ro -v ./vigil_stack.yml:/etc/compose/vigil_stack.yml:ro docker docker compose -f /etc/compose/vigil_stack.yml restart vigil
To add more external servers in a second test setup, as with Monitoror, change the YML file as in Listing 10. Also, change the previous configuration file to include probes for the OpenSearch, Kafka, and Redis containers (Listing 11).
Listing 10
Diff for vigil_stack.yml
11a12,30
>   opensearch:
>     image: opensearchproject/opensearch:${OSRHTAG:-latest}
>     environment:
>       - "discovery.type=single-node"
>     restart: unless-stopped
>
>   kafka:
>     image: bitnami/kafka:${KFKATAG:-3.2.3}
>     environment:
>       - "ALLOW_PLAINTEXT_LISTENER=yes"
>       - "KAFKA_CFG_LISTENERS=PLAINTEXT://0.0.0.0:9092,CONTROLLER://:9093"
>       - "KAFKA_ENABLE_KRAFT=yes"
>     restart: unless-stopped
>
>   redis:
>     image: redis:${RDSSTAG:-latest}
>     command: "redis-server --save 60 1 --loglevel warning"
>     restart: unless-stopped
>
Listing 11
Diff for config.cfg
30,34d29
< push_delay_dead = 20
< push_system_cpu_sick_above = 0.90
< push_system_ram_sick_above = 0.90
< script_interval = 300
< local_delay_dead = 40
45,46c40,41
< id = "ping"
< label = "PING"
---
> id = "kafka"
> label = "KAFKA"
48,49c43,44
< id = "invalidiping"
< label = "Invalid IP Ping"
---
> id = "kafkaping"
> label = "Kafka Ping"
51c46,53
< replicas = ["icmp://129.0.0.1"]
---
> replicas = ["icmp://kafka"]
> reveal_replica_name = true
> [[probe.service.node]]
> id = "kafkaport9092"
> label = "Kafka Port 9092"
> mode = "poll"
> reveal_replica_name = true
> replicas = ["tcp://kafka:9092"]
54,55c56,75
< id = "port"
< label = "PORT"
---
> id = "opensearch"
> label = "OPENSEARCH"
> [[probe.service.node]]
> id = "opensearchping"
> label = "Opensearch Ping"
> mode = "poll"
> reveal_replica_name = true
> replicas = ["icmp://opensearch"]
> [[probe.service.node]]
> id = "opensearchport9200"
> label = "Opensearch Port 9200"
> mode = "poll"
> reveal_replica_name = true
> replicas = ["tcp://opensearch:9200"]
> [[probe.service.node]]
> id = "opensearchport9600"
> label = "Opensearch Port 9600"
> mode = "poll"
> reveal_replica_name = true
> replicas = ["tcp://opensearch:9600"]
57,58c77,78
< id = "localhostport"
< label = "Localhost Port 8080 Probe"
---
> id = "opensearchttp9200"
> label = "Opensearch Http 9200"
60c80,81
< replicas = ["tcp://localhost:8080"]
---
> reveal_replica_name = true
> replicas = ["https://admin:admin@opensearch:9200"]
63,64c84,91
< id = "http"
< label = "HTTP"
---
> id = "redis"
> label = "REDIS"
> [[probe.service.node]]
> id = "redisping"
> label = "Redis Ping"
> mode = "poll"
> reveal_replica_name = true
> replicas = ["icmp://redis"]
66,67c93,94
< id = "googlehttp"
< label = "Google Http Probe"
---
> id = "redisport6379"
> label = "Redis Port 6379"
69c96,97
< replicas = ["https://google.com"]
---
> reveal_replica_name = true
> replicas = ["tcp://redis:6379"]
Execute the commands in Listing 9 to launch the containers, copy the updated config, and restart Vigil, respectively. Now refresh your browser and you should see an updated page (Figure 5).
You can see for yourself that the status page is user friendly and interactive, instantly helping you figure out where probes are passing or failing so you can dig in further. If you add reveal_replica_name = true in every [[probe.service.node]] subsection, tooltips will show replica details on mouseover. The Vigil status page lets you add a large number of probe targets because of its vertical layout. Note that the OpenSearch HTTP probe fails here because Vigil has no way to turn off SSL certificate verification through the config file. However, you can solve this issue with the script probe Vigil provides by creating an inline script that calls curl with a flag to skip certificate checks. To build a new image for Vigil, modify the YML and CFG files as shown in Listings 12 and 13, create the Dockerfile_VigilSSLCertIgnore file in the current working directory with the lines
FROM valeriansaliou/vigil:v1.26.0
RUN apk --no-cache add curl

and run the command

docker build -f Dockerfile_VigilSSLCertIgnore . -t vigilsci:v1.26.0
Listing 12
Diff for vigil_stack.yml
5c5
< image: valeriansaliou/vigil:${VGILTAG:-v1.26.0}
---
> image: vigilsci:${VGILTAG:-v1.26.0}
Listing 13
Diff for config.cfg
79c79
< mode = "poll"
---
> mode = "script"
81c81,86
< replicas = ["https://admin:admin@opensearch:9200"]
---
> scripts = [
> '''
> /usr/bin/curl -k https://admin:admin@opensearch:9200
> return $?
> '''
> ]
Now execute the Docker commands used previously to launch the Vigil service, copy the new Vigil config, and restart the service. Voilà, the script probe works around the limitation of the HTTP probe, and all the tiles are now green.
You also can administer Vigil through its APIs to publish public announcements, manually report node metrics, and so on. The Vigil GitHub project page has the relevant information for making use of the Manager HTTP and Reporter HTTP APIs. A related optional component known as Vigil Local can be used to add the health of local services to the status page. Last but not least, Vigil Reporter libraries are provided for various programming languages so your apps can submit health information to Vigil. All of this should be enough for you to make full use of Vigil's capabilities to craft a powerful black box monitoring status page.
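As a rough sketch of the Manager API (check the exact endpoint path and payload fields against the Vigil README before relying on them), publishing an announcement boils down to an authenticated POST with the manager_token from config.cfg supplied as the Basic auth password:

curl -u ":REPLACE_THIS_WITH_A_VERY_SECRET_KEY" -H "Content-Type: application/json" -d '{"title": "Planned maintenance", "text": "OpenSearch will be restarted at 22:00 UTC."}' http://localhost:48080/manager/announcements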
Statping-ng Status Page and Monitoring Server
Finally, I look at a feature-packed black box monitoring status page solution known as Statping-ng [5]. To explore its vast array of functionality, create a statpingng_stack.yml file in your current directory (Listing 14), and execute the command
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock:ro -v ./statpingng_stack.yml:/etc/compose/statpingng_stack.yml:ro docker docker compose -f /etc/compose/statpingng_stack.yml up -d
in the current directory to launch the Statping-ng Docker container.
Listing 14
statpingng_stack.yml
01 version: '3.5'
02 services:
03
04   statping:
05     container_name: statpingng
06     image: adamboutcher/statping-ng:${SPNGTAG:-latest}
07     ports:
08       - 58080:8080
09     restart: unless-stopped
10
11 networks:
12   default:
13     name: statuspage-demo
14     external: true
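Before opening the browser, you can confirm that the container is up; the name statpingng comes from the container_name setting in Listing 14:

docker ps --filter name=statpingng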
Accessing localhost:58080 in your web browser should present you with the Statping-ng setup page (Figure 6). Fill in the necessary details and click Save Settings. You should then be taken to another page with your entered Name and Description, populated with the multiple kinds of demo probes supported by Statping-ng (Figure 7).
It's pretty cool that the crisp-looking status page not only provides demo probes to familiarize you with the solution right away but, on scrolling down, also shows monitoring graphs for these demo services (Figure 8). That's just the tip of the iceberg: You can dig into the fine details of a monitored endpoint by clicking the View button on the respective graph.
A Dashboard link at the top of the status page takes you to another page (after you enter the admin credentials you set on the Statping-ng setup page) that presents every setting provided for the Statping-ng configuration.
In the Services tab you can see and modify the demo services. You don't need to learn anything else to make use of Statping-ng, because every operation is driven by its user-friendly tab pages. In the Services tab, try adding and removing some of the probes by selecting the appropriate drop-down items for HTTP, TCP, UDP, gRPC, and Static services; ICMP Ping; and various other applicable settings. You can also set up various Notifiers to receive online and offline alerts, post Announcements for respective services, browse the Statping-ng logs, add different kinds of users, and so on.
An important feature of Statping-ng is the ability to back up and restore your current Statping services, groups, notifiers, and other settings to and from a JSON file. The Statping-ng wiki [6] provides more detailed information about its various aspects.
Finally, shift gears to start and configure Statping-ng programmatically through its configuration settings – but without involving any manual steps: Create the Dockerfile_MyStatpingNG file with the lines
FROM adamboutcher/statping-ng
CMD statping --port $PORT -c /app/config.yml
and create a new Docker image with the command:
docker build -f Dockerfile_MyStatpingNG . -t mystatpingng
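To double-check that the new image carries the overridden start command, inspect its metadata:

docker image inspect mystatpingng --format '{{.Config.Cmd}}'

The output should show the statping invocation pointing at /app/config.yml.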
Now modify statpingng_stack.yml as shown in Listing 15 to include the servers to be monitored, and then create the required bind mount directory for Statping-ng with
mkdir config
Listing 15
Diff for statpingng_stack.yml
6c6
<     image: adamboutcher/statping-ng:${SPNGTAG:-latest}
---
>     image: mystatpingng:${SPNGTAG:-latest}
8a9,36
>     volumes:
>       - ./config:/app
>     environment:
>       - "DB_CONN=sqlite"
>       - "STATPING_DIR=/app"
>       - "SAMPLE_DATA=false"
>       - "GO_ENV=test"
>       - "NAME=StatpingNG Probes Demo"
>       - "DESCRIPTION=StatpingNG Probes Configuration Demo"
>     restart: unless-stopped
>
>   opensearch:
>     image: opensearchproject/opensearch:${OSRHTAG:-latest}
>     environment:
>       - "discovery.type=single-node"
>     restart: unless-stopped
>
>   kafka:
>     image: bitnami/kafka:${KFKATAG:-3.2.3}
>     environment:
>       - "ALLOW_PLAINTEXT_LISTENER=yes"
>       - "KAFKA_CFG_LISTENERS=PLAINTEXT://0.0.0.0:9092,CONTROLLER://:9093"
>       - "KAFKA_ENABLE_KRAFT=yes"
>     restart: unless-stopped
>
>   redis:
>     image: redis:${RDSSTAG:-latest}
>     command: "redis-server --save 60 1 --loglevel warning"
and create a services.yml file (Listing 16) in the config directory. Finally, set up the correct owner for the bind mount directory with the command
chown -R root:root config
and bring up the new Statping-ng containers with the command used earlier.
Listing 16
services.yml
01 x-tcpservice: &tcpservice
02   type: tcp
03   check_interval: 60
04   timeout: 15
05   allow_notifications: true
06   notify_after: 0
07   notify_all_changes: true
08   public: true
09   redirect: true
10
11 x-httpservice: &httpservice
12   type: http
13   method: GET
14   check_interval: 45
15   timeout: 10
16   expected_status: 200
17   allow_notifications: true
18   notify_after: 2
19   notify_all_changes: true
20   public: true
21   redirect: true
22
23 x-icmping: &icmping
24   type: icmp
25   check_interval: 60
26   timeout: 15
27   allow_notifications: true
28   notify_after: 0
29   notify_all_changes: true
30   public: true
31
32 services:
33   - name: ICMP Kafka
34     domain: kafka
35     <<: *icmping
36
37   - name: TCP Kafka 9092
38     domain: kafka
39     port: 9092
40     <<: *tcpservice
41
42   - name: ICMP opensearch
43     domain: opensearch
44     <<: *icmping
45
46   - name: TCP opensearch 9200
47     domain: opensearch
48     port: 9200
49     <<: *tcpservice
50
51   - name: TCP opensearch 9600
52     domain: opensearch
53     port: 9600
54     <<: *tcpservice
55
56   - name: HTTP opensearch
57     domain: https://admin:admin@opensearch:9200
58     <<: *httpservice
59
60   - name: ICMP redis
61     domain: redis
62     <<: *icmping
63
64   - name: TCP redis 6379
65     domain: redis
66     port: 6379
67     <<: *tcpservice
When you refresh the Statping-ng status page, you should see the new servers being monitored.
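As an aside, the x- blocks at the top of Listing 16 are ordinary YAML anchors; each service pulls the shared settings in with the <<: merge key, which keeps the per-service entries short. If the new probes do not show up, the container logs usually reveal whether services.yml was parsed, and you can also query the configured services over HTTP (whether /api/services requires an authentication token depends on your settings, so treat that path as something to verify against the wiki):

docker logs --tail 20 statpingng
curl -s http://localhost:58080/api/services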
You should now feel confident enough to start using Statping-ng for your production-level status pages. The server provides many additional features, including a choice of Postgres or MySQL for the production back end, server configuration through additional environment variables, a full-fledged API to access the data on your Statping server programmatically, automatic SSL certificates via Let's Encrypt, export of your status page to a static HTML file, and so on. The Statping-ng wiki provides relevant documentation for most of these features.