Manage containerized setups with Ansible
Put a Bow on It
Cloud rollouts with Ansible, in which administrators create virtual machines (VMs) in clouds and set up their applications there, have changed as companies have increasingly started using containerized applications instead of VMs. Containers can greatly simplify managing, maintaining, and updating applications and infrastructures. However, the prerequisite remains that the application in question is suitable for containerized use. The platform of choice is, of course, Kubernetes, and in the concluding part of this article I introduce you to managing Kubernetes applications with Ansible.
However, sometimes Kubernetes is too large and complicated. In branch installations, remote offices, and at the network edge, Kubernetes might not be needed, and a simple container setup with Podman will suffice. Kubernetes-style functionality for small installations can be implemented quite easily with the "LANP" stack.
LANP Instead of Kubernetes
LAMP is a common term for a web application stack comprising Linux, Apache, MySQL, and PHP; "LANP," made up of Linux, Ansible, Nginx, and Podman, is an invention for this article. The idea is simple and coherent: Kubernetes helps you roll out applications in containers, grouped by namespaces, and routes HTTP/HTTPS traffic from the host to the pods through routers. LANP does the same thing, but without Kubernetes. The Linux host runs Podman for the containers and Nginx as a reverse proxy. The host has an IP address and a fully qualified domain name (FQDN).
Depending on the flexibility of the DNS setup in this environment, the Nginx proxy then forwards traffic to the containers either by C domains in the style <http or https>://<app>.fqdn or by subdirectories following the pattern <http or https>://fqdn/<app>. If needed, Nginx also handles Secure Sockets Layer (SSL) termination: You only need an SSL certificate for the host, and Nginx sends the traffic to the containers internally over HTTP. The "ingress" route through a virtual subdirectory is theoretically easier to implement, but it requires the application in the container to handle this form of URL rewrite. If in doubt, adjust the configuration of the web application in the container. The wildcard route by way of the C domain is a more reliable option.
The logical flow of an application rollout by Ansible on this platform is as follows: The playbook first creates a Podman pod, in which it groups all the containers of the desired application. The pod opens up only the HTTP port of the application to the outside by port mapping; internal ports (e.g., of the database container) remain invisible outside the pod. This way, you could easily run multiple pods, each with its own MariaDB container, without any of them blocking port 3306 on the host.
Within the pod, Ansible then rolls out the required containers with matching configurations. Finally, it creates the reverse proxy configuration matching the application in /etc/nginx/conf.d/ on the Podman/Nginx host and activates it.
In previous articles on Podman, I always worked with bridge networks and custom IP addresses for the containers. Although I could do that here as well, the IP address would be assigned to the pod rather than to an individual container, so for this example I instead use a setup with port mappings and without a bridge network.
Required Preparations
To begin, install a Linux machine or VM with a distribution of your choice and set it up with Podman and Nginx. In this example, I use RHEL, but the setup also works with any other distribution. Only the configuration of the Nginx server differs for enterprise Linux- and Debian-based distributions. You might need to adjust the templates and directories. On a system with SELinux enabled, do not forget to use the command:
setsebool httpd_can_network_connect=true

Otherwise, Nginx is not allowed to route traffic to nonstandard ports.
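If you manage the Podman host itself with Ansible, the same SELinux boolean can be set persistently with the ansible.posix.seboolean module. This is a minimal sketch, assuming the ansible.posix collection is installed on the control node:

- name: Allow Nginx to Connect to Nonstandard Ports
  ansible.posix.seboolean:
    name: httpd_can_network_connect
    state: true
    persistent: true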
A Raspberry Pi will be fine for simple applications. Make sure the machine has a valid name on the local area network (LAN) and that your DNS resolves it correctly. Depending on whether you want to run the proxy service with C domains or subdirectories, your DNS server also needs to resolve wildcards that point to the Podman host. For this example, I used a RHEL 8 VM named pod.mynet.ip with C domain routing. The DNS server (dnsmasq) of the network therefore contains the matching entry:

address=/.pod.mynet.ip/192.168.2.41

DNS requests for names such as app1.pod.mynet.ip or wp01.pod.mynet.ip always prompt a response of 192.168.2.41.
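If the dnsmasq server is itself under Ansible control, a task like the following could manage the wildcard record. This is only a sketch and assumes that dnsmasq on the DNS host reads additional configuration from /etc/dnsmasq.d/:

- name: Add Wildcard DNS Entry for the Podman Host
  ansible.builtin.lineinfile:
    path: /etc/dnsmasq.d/pod-wildcard.conf
    line: address=/.pod.mynet.ip/192.168.2.41
    create: true
  # restart dnsmasq afterwards so the new record takes effect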
To use Nginx with subdirectory routing, I used a Raspberry Pi 4 with 8GB of RAM running RHEL 9 in parallel (pi8.mynet.ip). The Ansible code ran on a Fedora 36 workstation, the alternative being a Windows Subsystem for Linux (WSL) environment running Fedora 36 with Ansible 2.13. In addition to the basic installation, you need the containers.podman collection:
ansible-galaxy collection install containers.podman
Alternatively, launch the playbooks from an AWX environment (web-based user interface) with the appropriate execution environment.
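The playbook shown later addresses its target as hosts: pod, so the Ansible inventory needs a matching group or host alias. A minimal YAML inventory for this setup might look like this (a sketch using the hostname from this example):

all:
  children:
    pod:
      hosts:
        pod.mynet.ip: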
In this example, I used a typical WordPress setup comprising a MariaDB container for the database and the application container with Apache, PHP, and WordPress (Figures 1 and 2).
Separating What from How
To use Ansible playbooks as flexibly as possible for different scenarios, the automation logic is separated from the application parameters. The example is not yet fully generalized but can be used for several scenarios. First, store all the required parameters in a configuration file named config.yml, which the playbook includes with vars_files. The configuration starts with the host parameters for the setup (i.e., the URL of the Podman server, the base directory of the persistent storage for the containers, and the name to be assigned to the application pod),
domain_name: pod.mynet.ip
pod_dir: /var/pods
pod_name: wp01_pod
followed by the database container parameters, which are largely self-explanatory. Podman later maps the local host directory into the container with the variables db_local_dir and db_pod_dir so that the database is not lost, even if the MariaDB container stops or is updated:
db_name: db01
db_user: wpuser
db_pwd: wp01pwd
db_root_pwd: DBrootPWD
db_image: docker.io/mariadb:latest
db_local_dir: db01
db_pod_dir: /var/lib/mysql
The parameters for the application pod look very similar. Only later does Podman make app_port available to the outside world and therefore to the reverse proxy. If you run multiple application pods, they simply need different port numbers:
app_name: wp01
app_port: 18000
app_image: docker.io/wordpress
app_local_dir: wp01
app_pod_dir: /var/www/html
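Assembled into one file, config.yml for this example then looks like this (simply the three parameter blocks shown above combined):

# config.yml -- all parameters for the wp01 rollout in one place
domain_name: pod.mynet.ip
pod_dir: /var/pods
pod_name: wp01_pod

db_name: db01
db_user: wpuser
db_pwd: wp01pwd
db_root_pwd: DBrootPWD
db_image: docker.io/mariadb:latest
db_local_dir: db01
db_pod_dir: /var/lib/mysql

app_name: wp01
app_port: 18000
app_image: docker.io/wordpress
app_local_dir: wp01
app_pod_dir: /var/www/html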
For the reverse proxy in C domain routing mode, define a simple Nginx configuration file as a Jinja2 template, which Ansible later simply drops into /etc/nginx/conf.d/ (Listing 1). The configuration file sets up a reverse proxy from the application name to its application port on the Podman host.
Listing 1
Jinja2 Template for Reverse Proxy
server {
    listen 80;
    server_name {{ app_name }}.{{ domain_name }};
    access_log /var/log/nginx/{{ app_name }}.access.log;
    error_log /var/log/nginx/{{ app_name }}.error.log;
    client_max_body_size 65536M;

    location / {
        proxy_pass http://127.0.0.1:{{ app_port }};
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
If you use Nginx to terminate SSL, specify port 443 with SSL in the listen directive accordingly and add the required SSL parameters and certificate details in front of the location block.
In subdirectory mode, on the other hand, the proxy configuration is not integrated as a separate server configuration file but as a location configuration, which Ansible stores in /etc/nginx/default.d/; nginx.conf integrates the files there within the regular server statement (Listing 2). I took the rewrite and proxy rules provided there from the WordPress documentation, so they might not work for other web applications. The Ansible playbook that rolls out the containers and configures the proxy starts with the default parameters:
- name: Roll Out WordPress
  hosts: pod
  become: true
  gather_facts: false
  vars_files:
    - config.yml
Listing 2
Jinja2 Template for Subdirectory Mode Proxy
location /{{ app_name }}/ {
    rewrite ^([^\?#]*/)([^\?#\./]+)([\?#].*)?$ $1$2/$3 permanent;
    proxy_pass http://127.0.0.1:{{ app_port }}/;
    proxy_read_timeout 90;
    proxy_connect_timeout 90;
    proxy_redirect off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $host;
    proxy_set_header X-NginX-Proxy true;
    proxy_set_header Connection "";
}
You do not need the data (facts) of the remote system in this context; Ansible takes the configuration from the previously declared file. It first creates the pod and redirects only external port 18000 to internal port 80. Because web applications usually listen on port 80, I hard-coded that port in this example:
  tasks:
    - name: Create Application Pod
      containers.podman.podman_pod:
        name: "{{ pod_name }}"
        state: started
        ports:
          - "{{ app_port }}:80"
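To check the result, the containers.podman collection also provides an info module. A short, optional verification step that is not part of the original playbook could look like this:

    - name: Query Application Pod
      containers.podman.podman_pod_info:
        name: "{{ pod_name }}"
      register: pod_info

    - name: Show Pod Details
      ansible.builtin.debug:
        var: pod_info.pods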
If you use applications with other ports (e.g., Kibana on port 5601), you also need to declare this specification as a variable in config.yml (e.g., as app_internal_port). After that, the entry in Listing 3 creates the subdirectories in which the applications later store their files on the Podman host. The code in Listing 4 rolls out the database container, followed by the sections of Listing 5, which create the application container. In this case, the connection to the database container is simply defined as 127.0.0.1:3306, because the database port is open inside the pod but cannot be seen from the outside.
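In that case, the port mapping of the pod task could be parameterized along these lines; app_internal_port is the hypothetical variable just mentioned, falling back to port 80 if it is not set:

    - name: Create Application Pod
      containers.podman.podman_pod:
        name: "{{ pod_name }}"
        state: started
        ports:
          - "{{ app_port }}:{{ app_internal_port | default(80) }}"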
Listing 3
Creating Subdirectories
- name: Create Container Directory DB
  ansible.builtin.file:
    path: "{{ pod_dir }}/{{ db_local_dir }}"
    state: directory

- name: Create Container Directory WP
  ansible.builtin.file:
    path: "{{ pod_dir }}/{{ app_local_dir }}"
    state: directory
Listing 4
Rolling Out Database Container
- name: Create MySQL Database for WP
  containers.podman.podman_container:
    name: "{{ db_name }}"
    image: "{{ db_image }}"
    state: started
    pod: "{{ pod_name }}"
    volumes:
      - "{{ pod_dir }}/{{ db_local_dir }}:{{ db_pod_dir }}:Z"
    env:
      MARIADB_ROOT_PASSWORD: "{{ db_root_pwd }}"
      MARIADB_DATABASE: "{{ db_name }}"
      MARIADB_USER: "{{ db_user }}"
      MARIADB_PASSWORD: "{{ db_pwd }}"
Listing 5
Creating Application Container
- name: Create and run WP Container
  containers.podman.podman_container:
    pod: "{{ pod_name }}"
    name: "{{ app_name }}"
    image: "{{ app_image }}"
    state: started
    volumes:
      - "{{ pod_dir }}/{{ app_local_dir }}:{{ app_pod_dir }}:Z"
    env:
      WORDPRESS_DB_HOST: "127.0.0.1:3306"
      WORDPRESS_DB_NAME: "{{ db_name }}"
      WORDPRESS_DB_USER: "{{ db_user }}"
      WORDPRESS_DB_PASSWORD: "{{ db_pwd }}"
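Before wiring up the reverse proxy, you can optionally check that the application answers on the mapped port. This smoke test is not part of the original playbook, just a sketch using the ansible.builtin.uri module on the Podman host; a fresh WordPress instance typically answers with a redirect to its install page:

- name: Wait for WordPress to Respond
  ansible.builtin.uri:
    url: "http://127.0.0.1:{{ app_port }}"
    status_code: [200, 301, 302]
  register: wp_check
  until: wp_check.status is defined and wp_check.status in [200, 301, 302]
  retries: 12
  delay: 5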
Finally, I'll look at the reverse proxy configuration in C domain routing mode:
- name: Set Reverse Proxy
  ansible.builtin.template:
    src: proxy.j2
    dest: /etc/nginx/conf.d/{{ app_name }}.conf
For subdirectory routing, the matching entry is:
    dest: /etc/nginx/default.d/{{ app_name }}.conf
In both cases, the code continues:
    owner: root
    group: root
    mode: '0644'

- name: Reload Reverse Proxy
  ansible.builtin.service:
    name: nginx
    state: reloaded
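As an optional safeguard that is not part of the original playbook, you could also verify the Nginx configuration before reloading, for example:

- name: Check Nginx Configuration
  ansible.builtin.command: nginx -t
  changed_when: false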
That completes the task. With this playbook template and individual config.yml files, you can quite easily roll out most applications that consist of an app container and a database container. The next evolution of this playbook moves the env variables from the playbook into the configuration file. The stage after that declares the variables in the config.yml file as a hierarchical dictionary, so the playbook only needs a single Create Container task that iterates over the dictionary for an arbitrary number of containers. On top of that, you can create a cleanup playbook that deletes the entire installation, including the containers, pod, and reverse proxy configuration, but preserves the application data. You can then also update the application easily with a cleanup and re-rollout, assuming your container images point to the :latest tag.
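A cleanup playbook of this kind could look roughly like the following sketch. It reuses the same config.yml, removes the containers, the pod, and the proxy configuration, and leaves the data directories below pod_dir untouched (the file name cleanup.yml is just an assumption):

# cleanup.yml -- remove the rollout but keep the persistent data
- name: Clean Up WordPress Rollout
  hosts: pod
  become: true
  gather_facts: false
  vars_files:
    - config.yml

  tasks:
    - name: Remove Application Container
      containers.podman.podman_container:
        name: "{{ app_name }}"
        state: absent

    - name: Remove Database Container
      containers.podman.podman_container:
        name: "{{ db_name }}"
        state: absent

    - name: Remove Application Pod
      containers.podman.podman_pod:
        name: "{{ pod_name }}"
        state: absent

    - name: Remove Reverse Proxy Configuration
      ansible.builtin.file:
        path: /etc/nginx/conf.d/{{ app_name }}.conf
        state: absent

    - name: Reload Reverse Proxy
      ansible.builtin.service:
        name: nginx
        state: reloaded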