Installing AWX 19 on MicroK8s in AWS

22 04 2021

AWX is now deployed on Kubernetes (since AWX release 18), which is great – the only thing is, what do you do if this is the only application you need Kubernetes for? It is a bit of a hassle setting up the K8s master and worker nodes just for a single application.

The documentation suggests you use Minikube for this, but that seems to be designed for local / testing use only. There’s no middle ground between these two options, so I decided to work it out on MicroK8s.

MicroK8s is Canonical’s minimal production Kubernetes environment. It installs on one host, but can be set up for high availability and even run on a Raspberry Pi!

Here are the instructions if you want to do the same.

Install an Ubuntu 20 host on a t2.medium or higher instance in AWS.

Give it 20 GB of general-purpose SSD disk.

Create a security group that permits TCP/443 through from your location – only TCP/22 is permitted by default.

Install Microk8s on a new Ubuntu host in AWS:

ubuntu@ip-172-31-0-208:~$ sudo snap install microk8s --classic
microk8s (1.20/stable) v1.20.5 from Canonical✓ installed

Add the ‘ubuntu’ user you are logged in as to the microk8s user group, then log out and back in again:

ubuntu@ip-172-31-0-208:~$ sudo usermod -a -G microk8s $USER
ubuntu@ip-172-31-0-208:~$ sudo chown -f -R $USER ~/.kube
ubuntu@ip-172-31-0-208:~$ exit

Log back in again to acquire the rights. Then check microk8s is running:

ubuntu@ip-172-31-0-208:~$ microk8s status
microk8s is running
high-availability: no
  datastore master nodes:
  datastore standby nodes: none
addons:
  enabled:
    ha-cluster           # Configure high availability on the current node
  disabled:
    ambassador           # Ambassador API Gateway and Ingress
    cilium               # SDN, fast with full network policy
    dashboard            # The Kubernetes dashboard
    dns                  # CoreDNS
    fluentd              # Elasticsearch-Fluentd-Kibana logging and monitoring
    gpu                  # Automatic enablement of Nvidia CUDA
    helm                 # Helm 2 - the package manager for Kubernetes
    helm3                # Helm 3 - Kubernetes package manager
    host-access          # Allow Pods connecting to Host services smoothly
    ingress              # Ingress controller for external access
    istio                # Core Istio service mesh services
    jaeger               # Kubernetes Jaeger operator with its simple config
    keda                 # Kubernetes-based Event Driven Autoscaling
    knative              # The Knative framework on Kubernetes.
    kubeflow             # Kubeflow for easy ML deployments
    linkerd              # Linkerd is a service mesh for Kubernetes and other frameworks
    metallb              # Loadbalancer for your Kubernetes cluster
    metrics-server       # K8s Metrics Server for API access to service metrics
    multus               # Multus CNI enables attaching multiple network interfaces to pods
    portainer            # Portainer UI for your Kubernetes cluster
    prometheus           # Prometheus operator for monitoring and logging
    rbac                 # Role-Based Access Control for authorisation
    registry             # Private image registry exposed on localhost:32000
    storage              # Storage class; allocates storage from host directory
    traefik              # traefik Ingress controller for external access
ubuntu@ip-172-31-0-208:~$ microk8s kubectl get services
kubernetes   ClusterIP   <none>        443/TCP   12m
ubuntu@ip-172-31-0-208:~$ microk8s kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
ip-172-31-0-208   Ready    <none>   12m   v1.20.5-34+40f5951bd9888a

Enable persistent storage on the cluster. Without this, the Postgres container will fail to start:

ubuntu@ip-172-31-0-208:~$ microk8s enable storage
Enabling default storage class
deployment.apps/hostpath-provisioner created
serviceaccount/microk8s-hostpath created
Storage will be available soon

Enable DNS so that containers can reach each other using DNS names within the pod:

ubuntu@ip-172-31-0-208:~$  microk8s enable dns
Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
Restarting kubelet
DNS is enabled

If you don’t enable DNS, you will see errors like ‘name or service not known’ in the logs:

ubuntu@ip-172-31-0-208:~$ sudo tail -f /var/log/pods/default_awx-6dbb9946c7-86zhh_ffa0203-c8fe-4c1f-a2b3-7d294dbd084e/awx-web/0.log
2021-04-13T08:22:51.29771604Z stderr F   File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/", line 217, in ensure_connection
2021-04-13T08:22:51.297720294Z stderr F     self.connect()
2021-04-13T08:22:51.297760208Z stderr F     conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
2021-04-13T08:22:51.297764847Z stderr F django.db.utils.OperationalError: could not translate host name "awx-postgres" to address: Name or service not known
2021-04-13T08:22:51.297768742Z stderr F

Enable an ingress controller – this will permit inbound access to the AWX service and will terminate the SSL/TLS session from the browser.

ubuntu@ip-172-31-0-208:~$ microk8s enable ingress
Enabling Ingress
namespace/ingress created
serviceaccount/nginx-ingress-microk8s-serviceaccount created
configmap/nginx-load-balancer-microk8s-conf created
configmap/nginx-ingress-tcp-microk8s-conf created
configmap/nginx-ingress-udp-microk8s-conf created
daemonset.apps/nginx-ingress-microk8s-controller created
Ingress is enabled

Install the AWX operator:

ubuntu@ip-172-31-0-208:~$ microk8s kubectl apply -f <awx-operator manifest URL>
serviceaccount/awx-operator unchanged
deployment.apps/awx-operator configured

Check it is running after a minute or so (on an Ubuntu host in an AWS t2.medium instance, this takes about 35 seconds):

ubuntu@ip-172-31-0-208:~$ microk8s kubectl get all
NAME                              READY   STATUS    RESTARTS   AGE
pod/awx-operator-f768499d-4xvdd   1/1     Running   0          56s

NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/kubernetes             ClusterIP     <none>        443/TCP             16m
service/awx-operator-metrics   ClusterIP   <none>        8383/TCP,8686/TCP   26s

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/awx-operator   1/1     1            1           56s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/awx-operator-f768499d   1         1         1       56s

Create a key and self-signed certificate, perhaps using a bigger value than the 365 days shown below so it doesn't expire soon. Make sure you have the FQDN in the subject alternative name (SAN), since this seems to be how the ingress controller knows which container to send traffic to. Without it, you get messages about the SAN in the logs. (awx.example.com below is a placeholder for your own FQDN.)

ubuntu@ip-172-31-0-208:~$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout awx.key -out awx.crt -subj "/CN=awx.example.com" -addext "subjectAltName = DNS:awx.example.com"
Generating a RSA private key
writing new private key to 'awx.key'
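If you want to confirm the SAN actually landed in the certificate, something like this works. It is a self-contained sketch that generates a throwaway key/cert in /tmp (awx.example.com is again a placeholder hostname, and the -addext flag needs OpenSSL 1.1.1 or newer):

```shell
# Generate a throwaway self-signed cert with a SAN (placeholder hostname)
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
  -keyout /tmp/san-test.key -out /tmp/san-test.crt \
  -subj "/CN=awx.example.com" \
  -addext "subjectAltName = DNS:awx.example.com"

# Print the SAN extension; "DNS:awx.example.com" should appear in the output
openssl x509 -in /tmp/san-test.crt -noout -text | grep -A1 "Subject Alternative Name"
```

Run the same inspection against your real awx.crt to check the FQDN is present before creating the secret.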

Load the key and certificate into a TLS secret in the default namespace, and call it awx-secret-tls:

ubuntu@ip-172-31-0-208:~$ microk8s kubectl create secret tls awx-secret-tls --namespace default --key awx.key --cert awx.crt
secret/awx-secret-tls created

View the secret with:

microk8s kubectl get secret --all-namespaces

Make a YAML file called my-awx.yml with the following – this names the AWX deployment and does a few other things, like turning on HTTPS and creating a volume that maps to the host machine's local filesystem. It also specifies the secret we just created.

The hostname needs to match what appears in the Subject Alternative Name of the certificate we made – otherwise the deployment falls back to a default hostname and the ingress won't route to it.

The volume mapping is just something I needed to do for my particular case – you can leave out the tower_extra_volumes and tower_task_extra_volume_mounts sections if you don't need this:

apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  tower_hostname: awx.example.com   # placeholder – use your FQDN
  tower_ingress_type: Ingress
  tower_ingress_tls_secret: awx-secret-tls
  tower_extra_volumes: |
    - name: data-vol
      hostPath:
        path: /home/ubuntu
        type: Directory
  tower_task_extra_volume_mounts: |
    - name: data-vol
      mountPath: /data

Create the deployment using the above yaml file:

ubuntu@ip-172-31-0-208:~$ microk8s kubectl apply -f my-awx.yml
awx.awx.ansible.com/awx created

The delete command can be used to remove the deployment too, when you need to make changes to my-awx.yml and re-apply. Check status (on a new Ubuntu install in AWS t2.medium this takes 2 minutes 13 seconds):

ubuntu@ip-172-31-0-208:~$ microk8s kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
awx-operator-f768499d-66wjw   1/1     Running   0          3m45s
awx-postgres-0                1/1     Running   0          2m39s
awx-b5f6cf4d4-8mnx6           4/4     Running   0          2m31s

Create an ingress file that references the FQDN and the secret that was installed earlier like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  tls:
  - hosts:
    - awx.example.com   # placeholder – use your FQDN
    secretName: awx-secret-tls
  rules:
  - host: awx.example.com   # placeholder – use your FQDN
    http:
      paths:
      - backend:
          service:
            name: awx-service
            port:
              number: 80
        path: /
        pathType: Prefix

Make sure your awx pod is showing 4/4 containers running. If it is, apply the ingress rule:

ubuntu@ip-172-31-0-208:~$ microk8s kubectl apply -f myingress.yml
ingress.networking.k8s.io/awx-ingress configured

Check the ingress rule is applied correctly:

ubuntu@ip-172-31-0-208:~$ microk8s kubectl get ingress
NAME          CLASS    HOSTS           ADDRESS   PORTS     AGE
awx-ingress   <none>             80, 443   4m53s
ubuntu@ip-172-31-0-208:~$ microk8s kubectl describe ingress
Name:             awx-ingress
Namespace:        default
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
  awx-secret-tls terminates
  Host           Path  Backends
  ----           ----  --------
                 /   awx-service:80 (
Annotations: /$1
Events:          <none>

The above looks ok, but there’s nothing under ‘Events’.  If you set a hostname in /etc/hosts on the machine you are browsing from you will see an NGINX 404 not found message.  
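For the /etc/hosts test, the entry on the machine you browse from would look something like this (203.0.113.10 stands in for your instance's public IP, and awx.example.com for your FQDN):

```
# /etc/hosts on your workstation – both values are placeholders
203.0.113.10    awx.example.com
```
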

Remove and reapply the ingress – note that after that is done, the events list has a CREATE event in it:

ubuntu@ip-172-31-0-208:~$ microk8s kubectl delete -f myingress.yml
ingress.networking.k8s.io "awx-ingress" deleted
ubuntu@ip-172-31-0-208:~$ microk8s kubectl apply -f myingress.yml
ingress.networking.k8s.io/awx-ingress created
ubuntu@ip-172-31-0-208:~$ microk8s kubectl describe ingress
Name:             awx-ingress
Namespace:        default
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
  awx-secret-tls terminates
  Host           Path  Backends
  ----           ----  --------
                 /   awx-service:80 (
Annotations: /$1
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  CREATE  3s    nginx-ingress-controller  Ingress default/awx-ingress

Make sure the hostname resolves on your computer, then browse to your AWX FQDN over HTTPS and you should see a login page (after you get past the self-signed certificate warning).

I hope this helps someone else out!

A couple of troubleshooting commands:

In the case below, Postgres is not starting due to a storage issue – persistent storage hadn't been enabled on MicroK8s:

microk8s kubectl describe pod/awx-postgres-0
Name:           awx-postgres-0
Namespace:      default
Priority:       0
Node:           <none>
Annotations:    <none>
Status:         Pending
IPs:            <none>
Controlled By:  StatefulSet/awx-postgres
    Image:      postgres:12
    Port:       5432/TCP
    Host Port:  0/TCP
      POSTGRES_DB:                <set to the key 'database' in secret 'awx-postgres-configuration'>  Optional: false
      POSTGRES_USER:              <set to the key 'username' in secret 'awx-postgres-configuration'>  Optional: false
      POSTGRES_PASSWORD:          <set to the key 'password' in secret 'awx-postgres-configuration'>  Optional: false
      PGDATA:                     /var/lib/postgresql/data/pgdata
      POSTGRES_INITDB_ARGS:       --auth-host=scram-sha-256
      POSTGRES_HOST_AUTH_METHOD:  scram-sha-256
      /var/lib/postgresql/data from postgres (rw,path="data")
      /var/run/secrets/ from default-token-2qmrb (ro)
  Type           Status
  PodScheduled   False
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  postgres-awx-postgres-0
    ReadOnly:   false
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-2qmrb
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations: op=Exists for 300s
        op=Exists for 300s
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  35s (x5 over 3m12s)  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.

Connecting to a bash shell in a container: first find the pod name, then run the second command, using -c to specify the container name (in this case, awx-task). As the deprecation warning below notes, current kubectl prefers the form kubectl exec [POD] -c [CONTAINER] -- [COMMAND]:

ubuntu@ip-172-31-9-88:~$ microk8s kubectl get pod awx-6dbb9946c7-86zhh
NAME                   READY   STATUS    RESTARTS   AGE
awx-6dbb9946c7-86zhh   4/4     Running   0          8m45s
ubuntu@ip-172-29-247-122:~$ microk8s kubectl exec --stdin --tty awx-6dbb9946c7-86zhh /bin/bash -c awx-task
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.



7 responses

9 06 2021
Lucius Jankok

microk8s kubectl apply -f my-awx.yml fails.

the content of my-awx.yml is:

cat my-awx.yml

kind: AWX
name: awx
tower_ingress_type: Ingress
tower_ingress_tls_secret: awx-secret-tls

The command fails like this:

microk8s kubectl apply -f my-awx.yml
error: error validating "my-awx.yml": error validating data: [ValidationError(AWX.spec): unknown field "tower_hostname" in com.ansible.awx.v1beta1.AWX.spec, ValidationError(AWX.spec): unknown field "tower_ingress_tls_secret" in com.ansible.awx.v1beta1.AWX.spec, ValidationError(AWX.spec): unknown field "tower_ingress_type" in com.ansible.awx.v1beta1.AWX.spec]; if you choose to ignore these errors, turn validation off with --validate=false

11 06 2021
Lucius Jankok

removing "tower_" solves the issue

11 06 2021

Thanks for the update! I was trying to find time to test your issue – it is probably something I will run into. I see they changed this in May 2021.
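For reference, under the newer operator schema the equivalent spec drops the tower_ prefix and would look something like this (hostname is a placeholder, and I haven't re-tested this myself):

```yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  hostname: awx.example.com   # placeholder – use your FQDN
  ingress_type: Ingress
  ingress_tls_secret: awx-secret-tls
```
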

12 06 2021
LJ (@ljankok)

That explains why "tower_" is no longer valid. Do you know how to set the "PROJECTS_ROOT" variable?

15 06 2021

From what I read here, it needs a change to /etc/awx/

I’ve not tried this though

18 06 2021


Thank you for the very well done installation tutorial. It helped me a lot when I was setting up my k8s environment.

I’m curious on how you got the volume mapping working, I need something like this and I’m a little unclear how you setup the /data path. Did you create a PV/PVC? My env is not minikube, so I won’t have the mount directly on the host.
