Explore automation-as-code with Ansible
Automated
Route vs. Ingress
OpenShift and MicroShift use name-based routes to forward HTTP and HTTPS traffic to applications in pods. You need to resolve all the subdomains in your DNS with the same IP address as your Kubernetes node (or cluster virtual IP). For example, if your single-node setup is running on kube.demo.com with the IP address 192.168.1.1, you need to resolve all subdomains (*.kube.demo.com) to 192.168.1.1, too.
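On a lab network, one simple way to get this wildcard resolution is a dnsmasq address rule (a minimal sketch; the domain and IP address match the example above, and the file path is only a common convention):

```
# /etc/dnsmasq.d/kube.conf
# Resolve kube.demo.com and every subdomain of it to the node IP
address=/kube.demo.com/192.168.1.1
```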
The router in your Kubernetes installation can then forward the packets to the service on a name basis. For example, one AWX setup could then run on http://awx01.kube.demo.com and another on http://awx02.kube.demo.com, which is why the demo code on GitHub has a route definition in the 01a_awx_kube_route.yml playbook containing:
apiVersion: route.openshift.io/v1
kind: Route
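A complete Route object built around these two lines could look as follows (a hedged sketch; the metadata, host, and service names are assumptions based on the naming used in this article, not taken from the demo code):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: awx01
  namespace: awx
spec:
  # Host name the router matches against; must resolve via the wildcard DNS entry
  host: awx01.kube.demo.com
  to:
    kind: Service
    name: awx01-service
  port:
    targetPort: 80
```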
Note that this only works with OpenShift or MicroShift. Kubernetes adopted a similar concept as Ingress, which became generally available in version 1.19 (Figure 1); therefore, a second playbook (01a_awx_kube_ingress.yml) in the demo code declares an Ingress route:
apiVersion: networking.k8s.io/v1
kind: Ingress
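An Ingress equivalent to a name-based route might be sketched like this (again an assumption-laden sketch; host, service name, and port are placeholders in the spirit of the example above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awx01
  namespace: awx
spec:
  rules:
    # Forward everything arriving for this host to the AWX service
    - host: awx01.kube.demo.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: awx01-service
                port:
                  number: 80
```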

This strategy also works with MicroK8s or K3s, although you might have to adjust the Ingress definition if ingress controllers such as Traefik cannot cope with it directly.
You need to include the matching task from the main playbook: Depending on which Kubernetes distribution you are using, comment out either the Route or the Ingress variant. Accordingly, you must use the Route or Ingress section in the 99_cleanup.yml playbook.
AWX in Five Minutes
You need a Kubernetes environment for AWX. A simple single-node setup with distributions such as MicroShift, K3s, or MicroK8s is all you need; then, you can use Kustomize or Helm to install the AWX operator with default settings, as described in the documentation [2]. Operator version 2.12.2 was used for this example.
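Following the operator documentation, a Kustomize-based install can be sketched as below (the awx namespace is an assumption; the ref pins the operator to the version used in this example):

```yaml
# kustomization.yaml - install the AWX operator, pinned to version 2.12.2
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: awx
resources:
  - github.com/ansible/awx-operator/config/default?ref=2.12.2
images:
  - name: quay.io/ansible/awx-operator
    newTag: 2.12.2
```

Applying it with kubectl apply -k . then rolls the operator out into the awx namespace.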
The main playbook for the AWX rollout, 00_awx_full.yml, has the same structure as a workflow template in AWX itself and includes other playbooks in the appropriate places. In the best Ansible style, you would want to avoid pulling in additional playbooks with include_tasks and create appropriate roles instead. However, because the tasks to be included are so simple and do not use any variables, files, or templates other than the general configuration variables, it is not worth paying the price of the overhead for roles in this case. Of course, if you are planning to use an AWX server to roll out further AWX servers, you would want to convert the 00_awx_full.yml playbook into a suitable workflow template.
The playbook fetches all the variable values from two files (secrets.yml and vars.yml) and then defines shell variables (environment) with the access credentials for the Kubernetes cluster and the AWX setup to be rolled out, which are adopted by the playbook modules further down the line. To keep the example simple, just add the kubeconfig file for authentication to the playbook.
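The head of such a playbook could be sketched as follows (the variable name kubeconfig_path and the included file name are assumptions, not taken from the demo code; K8S_AUTH_KUBECONFIG and CONTROLLER_HOST are the environment variables the kubernetes.core and awx.awx collections evaluate):

```yaml
- hosts: localhost
  gather_facts: false
  vars_files:
    - secrets.yml
    - vars.yml
  environment:
    # Used by the kubernetes.core modules to authenticate against the cluster
    K8S_AUTH_KUBECONFIG: "{{ kubeconfig_path }}"
    # Used by the awx.awx modules to reach the AWX API
    CONTROLLER_HOST: "http://{{ awxsetup.name }}.{{ awxsetup.baseurl }}"
    CONTROLLER_USERNAME: admin
  tasks:
    - ansible.builtin.include_tasks: 01_awx_kube.yml
```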
In larger installations, you would instead define a bearer token with access restricted to the defined namespace. The AWX service runs without SSL in this scenario; in a production environment, you would select the HTTPS protocol and declare a route or Ingress with SSL.
What is initially missing, though, is the CONTROLLER_PASSWORD. According to the AWX operator documentation, you can store your own password as a Kubernetes secret when rolling out an AWX setup. Unfortunately, this did not work in our lab for unknown reasons. I decided to leave out the declaration, prompting the AWX operator to generate a random, secure password for the AWX setup during the rollout, and to simply parse this out later and add it to the environment.
The first playbook, 01_awx_kube.yml, does not require any AWX credentials as yet because it only rolls out the AWX service on Kubernetes. In this rollout, the playbook requires two persistent volumes (PVs) instead of the usual one:
spec:
  ...
  postgres_storage_requirements:
    requests:
      storage: 10Gi
  projects_persistence: true
  projects_storage_size: 10Gi
The postgres_storage_requirements variable specifies the size of the PV for the database. You definitely need this parameter because the AWX operator only allocates 1GB to the database by default, which will fill up quickly during active use of your AWX environment. The controller uses the second volume to save the project data (i.e., the Git repositories with playbooks) in non-volatile storage. AWX basically also works without persistent project storage but then has to reload all the project repositories completely when restarting the pod or redeploying. Persistent volumes offer a faster experience.
As described in the "Route vs. Ingress" section, depending on the Kubernetes variant used, one of the playbooks that defines the OpenShift or Ingress route to your AWX setup then follows. In practice, tasks that wait for something to happen need to be included in many places in your Ansible playbooks. Ansible is often faster than the services it automates, which is why the configuration playbook, 02_awx_config.yml, starts with a wait loop (Listing 1).
Listing 1
Wait for the AWX API
- name: Wait for the AWX API to be accessible
  ansible.builtin.uri:
    url: "http://{{ awxsetup.name }}.{{ awxsetup.baseurl }}/api/v2/ping"
  register: result
  until: "result.status == 200"
  retries: 30
  delay: 10
This code tells Ansible's URI module to check the AWX REST API at predefined intervals of 10 seconds. The API has a ping function that responds with the HTTP code 200 OK once the AWX environment is ready. If AWX fails to respond after five minutes (30 retries at 10-second intervals), the playbook aborts with an error. The ping API responds without you having to log in to the AWX server, which is important because you do not yet know the admin password at this point. If you use HTTPS for the AWX server as described earlier, you need to adjust the URL in this wait task accordingly.
The wait loop is followed by a static 30-second pause. Experience shows that the ping API often returns an OK too soon and that the downstream configuration tasks then cannot reach the API after all. You can avoid errors here by giving the API another 30 seconds. Once the API responds, the AWX rollout is complete, and the operator has stored the admin password in a Kubernetes secret. The next task retrieves the password from the secret (Listing 2) and makes it available to the subsequent steps.
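The static pause described here can be expressed as a simple task (a minimal sketch):

```yaml
- name: Give the AWX API another 30 seconds to settle
  ansible.builtin.pause:
    seconds: 30
```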
Listing 2
Retrieve Password
- name: Get Admin PW
  kubernetes.core.k8s_info:
    api_version: v1
    kind: Secret
    name: "{{ awxsetup.name }}-admin-password"
    namespace: "{{ awxsetup.namespace }}"
  register: awxadmin
Again, you need to resort to a little trick: Ansible can set environment variables for the complete automation run; in that case, they appear at the start of the play, before the tasks section. Alternatively, Ansible can assign environment variables to each individual task, in which case they only apply to the respective task. No "add key-value to existing environment" function exists, so you group the following configuration tasks into a block. Because the block itself is considered a task, and the CONTROLLER_PASSWORD environment variable is attached to the block, it is automatically available to all tasks within the block (Listing 3).
Listing 3
Group Tasks in a Block
- name: Upload Config to AWX
  block:
    - name: Create Organization
      awx.awx.organization:
        ...
    (( Many config tasks ))
    ...
  environment:
    CONTROLLER_PASSWORD: "{{ awxadmin.resources[0].data.password | b64decode }}"
The playbook uses the awx.awx collection to upload the AWX configuration. The collection's modules address the AWX REST API and use it to generate the required settings, but it is important to adhere to the correct sequence. For example, you cannot create a job template until the associated project template exists and is synchronized with its source code management (SCM) system.
The project can only be created after you have created both an organization and suitable SCM credentials. The correct order is: organizations; credentials (SCM, machine, and optionally registry); registries and execution environments (optional); inventories; hosts (if static; otherwise, a dynamic inventory); projects (with Update Revision on Launch enabled); and templates (job and, optionally, workflow).
This example only creates one organization with one project and several job templates; however, the idea can be extended as desired. For example, you could bundle the configurations for several projects into separate files, move the awx.awx.project part of the playbook to a separate file, and include it in a loop over project1.yml, project2.yml, and so on.
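Such a loop could be sketched as follows (the file name project_config.yml and the loop variable are hypothetical; each included run would read its project settings from the file named in project_file):

```yaml
- name: Roll out several project configurations
  ansible.builtin.include_tasks: project_config.yml
  loop:
    - project1.yml
    - project2.yml
  loop_control:
    loop_var: project_file
```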
Creating Organizations the Right Way
Pitfalls await as soon as the first module (awx.awx.organization) is called. When you start a task on AWX, the system first generates a Kubernetes pod with the execution environment, which then performs the automation. The standard container image for this (usually quay.io/ansible/awx-ee:latest) includes only a few Ansible collections.
To use modules from collections such as ansible.windows, containers.podman, or kubernetes.core in your playbook, the execution environment first needs to load them. In its project folder, the collections/requirements.yml file lists the collections that the execution environment installs directly after the start. As a rule, you can pick up freely available collections from the Galaxy website [3]. You do not need to log in to this service before using it.
Alternatively, you can find repositories for collections that require registration, and a small bug in AWX raises its head here. You always need to assign at least one credential of the type Ansible Galaxy/Automation Hub API Token to an organization, even if you do not need it for the open Ansible Galaxy service. If you create a new organization manually in the AWX web UI, you will not notice anything, because AWX automatically inserts Ansible Galaxy pseudo-credentials unless you select otherwise. The catch is that if you use the REST API to create an organization, AWX does not do this in the background, and the credentials remain empty, which means an execution environment running in this organization cannot reload any collections. Unfortunately, no meaningful error messages alert you to this problem. To deal with this omission, you need an entry in your playbook:
- name: Create Organization
  awx.awx.organization:
    name: "{{ org.name }}"
    description: "{{ org.desc }}"
    galaxy_credentials:
      - Ansible Galaxy
    state: present
The galaxy_credentials specification must be included in this task.
In the following tasks, your playbook creates the most important basic credentials. Unfortunately, this has to be done in separate tasks one after the other, because it is difficult to combine different credential types in a credentials loop. The parameters to be transferred are just too different and depend heavily on the types. Of course, you can build credential dictionaries for similar types and combine them in a loop (e.g., for all credentials that only require a combination of a username and a password). The sample code copies your standard SSH private key from ~/.ssh/id_rsa to the machine credentials. You will need to change this line if your AWX setup uses a different key.
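A machine credential task of this kind could be sketched as follows (the credential name and login user are assumptions; only the key path matches the sample code):

```yaml
- name: Create Machine Credential
  awx.awx.credential:
    name: "Default SSH Key"
    organization: "{{ org.name }}"
    credential_type: Machine
    inputs:
      username: ansible
      # Read the private key from the control node and store it in AWX
      ssh_key_data: "{{ lookup('ansible.builtin.file', '~/.ssh/id_rsa') }}"
    state: present
```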
Now things become relatively easy. The other tasks generate the required resources on the AWX setup one after the other. This example generates an inventory with static hosts, as is usually the case for standard infrastructure machines on the LAN. I also used the previously mentioned trick with the default function, as you can see in Listing 4.
Listing 4
Use Default Functions
- name: Create Inventory Hosts
  awx.awx.host:
    name: "{{ item.key }}"
    description: "{{ item.value.description }}"
    inventory: "{{ inventory.name }}"
    state: present
    enabled: true
    variables:
      ansible_host: "{{ item.value.ip }}"
      ansible_port: "{{ item.value.port | default('22') }}"
  with_dict: "{{ inventory_host }}"
The task reads the host information, such as the name and IP address, from the dictionary in vars.yml. If you do not enter a port parameter for some hosts, Ansible simply fills out the field with 22.
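The inventory_host dictionary in vars.yml could then look like this (host names and addresses are made up for illustration; only db01 overrides the default SSH port):

```yaml
inventory_host:
  web01:
    description: "Web server on the LAN"
    ip: 192.168.1.10
  db01:
    description: "Database server with a non-standard SSH port"
    ip: 192.168.1.11
    port: 2222
```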
As a final step, the AWX rollout then creates your job templates (Figure 2). Of course, this only works if the playbooks specified in the job templates actually exist in your project Git and were synchronized correctly when the project was created. The code in this example only contains the as-code rollout, so you will need to provide a project with templates yourself.