Installing AWX 19 on MicroK8s in AWS

22 04 2021

AWX is now deployed on Kubernetes (since AWX release 18), which is great – the only thing is, what do you do if this is the only application you need Kubernetes for? It is a bit of a hassle setting up the K8s master and worker nodes just for a single application.

The documentation suggests you use Minikube for this, but that seems to be designed for local / testing use only. There’s no middle ground between a full multi-node cluster and Minikube, so I decided to work it out on MicroK8s.

MicroK8s is Canonical’s minimal production Kubernetes environment. It installs on one host, but can be set up for high availability and even run on a Raspberry Pi!

Here are the instructions if you want to do the same.

Install an Ubuntu 20 host on a t2.medium or higher instance in AWS.

Give it 20 GB of general-purpose SSD disk.

Create a security group that permits TCP/443 through from your location – only TCP/22 is permitted by default.
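If you are scripting the build, the same rule can be added with the AWS CLI – the security group ID and source CIDR below are placeholders for your own values:

```shell
# Allow HTTPS in from your location (placeholder group ID and CIDR)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 203.0.113.0/24
```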

Install MicroK8s on the new Ubuntu host:

ubuntu@ip-172-31-0-208:~$ sudo snap install microk8s --classic
microk8s (1.20/stable) v1.20.5 from Canonical✓ installed
ubuntu@ip-172-31-0-208:~$

Add the ‘ubuntu’ user you are logged in as to the microk8s user group, then log out and back in again:

ubuntu@ip-172-31-0-208:~$ sudo usermod -a -G microk8s $USER
ubuntu@ip-172-31-0-208:~$ sudo chown -f -R $USER ~/.kube
ubuntu@ip-172-31-0-208:~$ exit

Log back in to pick up the new group membership, then check MicroK8s is running:

ubuntu@ip-172-31-0-208:~$ microk8s status
microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    ha-cluster           # Configure high availability on the current node
  disabled:
    ambassador           # Ambassador API Gateway and Ingress
    cilium               # SDN, fast with full network policy
    dashboard            # The Kubernetes dashboard
    dns                  # CoreDNS
    fluentd              # Elasticsearch-Fluentd-Kibana logging and monitoring
    gpu                  # Automatic enablement of Nvidia CUDA
    helm                 # Helm 2 - the package manager for Kubernetes
    helm3                # Helm 3 - Kubernetes package manager
    host-access          # Allow Pods connecting to Host services smoothly
    ingress              # Ingress controller for external access
    istio                # Core Istio service mesh services
    jaeger               # Kubernetes Jaeger operator with its simple config
    keda                 # Kubernetes-based Event Driven Autoscaling
    knative              # The Knative framework on Kubernetes.
    kubeflow             # Kubeflow for easy ML deployments
    linkerd              # Linkerd is a service mesh for Kubernetes and other frameworks
    metallb              # Loadbalancer for your Kubernetes cluster
    metrics-server       # K8s Metrics Server for API access to service metrics
    multus               # Multus CNI enables attaching multiple network interfaces to pods
    portainer            # Portainer UI for your Kubernetes cluster
    prometheus           # Prometheus operator for monitoring and logging
    rbac                 # Role-Based Access Control for authorisation
    registry             # Private image registry exposed on localhost:32000
    storage              # Storage class; allocates storage from host directory
    traefik              # traefik Ingress controller for external access
ubuntu@ip-172-31-0-208:~$ microk8s kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.152.183.1   <none>        443/TCP   12m
ubuntu@ip-172-31-0-208:~$ microk8s kubectl get nodes
NAME              STATUS   ROLES    AGE   VERSION
ip-172-31-0-208   Ready    <none>   12m   v1.20.5-34+40f5951bd9888a
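A small convenience before going further: typing ‘microk8s kubectl’ gets tedious, so a shell alias helps (this assumes bash; MicroK8s also supports ‘sudo snap alias microk8s.kubectl kubectl’ as an alternative):

```shell
# Optional: alias the bundled kubectl client to save typing (bash assumed)
echo "alias kubectl='microk8s kubectl'" >> ~/.bash_aliases
```

Open a new shell (or source ~/.bash_aliases) for the alias to take effect.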

Enable persistent storage on the cluster. Without this, the Postgres container will fail to start:

ubuntu@ip-172-31-0-208:~$ microk8s enable storage
Enabling default storage class
deployment.apps/hostpath-provisioner created
storageclass.storage.k8s.io/microk8s-hostpath created
serviceaccount/microk8s-hostpath created
clusterrole.rbac.authorization.k8s.io/microk8s-hostpath created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-hostpath created
Storage will be available soon

Enable DNS so that containers can reach each other using DNS names within the cluster:

ubuntu@ip-172-31-0-208:~$  microk8s enable dns
Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
DNS is enabled

If you don’t enable DNS, you will see errors like ‘name or service not known’ in the logs:

ubuntu@ip-172-31-0-208:~$ sudo tail -f /var/log/pods/default_awx-6dbb9946c7-86zhh_ffa0203-c8fe-4c1f-a2b3-7d294dbd084e/awx-web/0.log
2021-04-13T08:22:51.29771604Z stderr F   File "/var/lib/awx/venv/awx/lib64/python3.8/site-packages/django/db/backends/base/base.py", line 217, in ensure_connection
2021-04-13T08:22:51.297720294Z stderr F     self.connect()
2021-04-13T08:22:51.297760208Z stderr F     conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
2021-04-13T08:22:51.297764847Z stderr F django.db.utils.OperationalError: could not translate host name "awx-postgres" to address: Name or service not known
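A quick way to confirm in-cluster DNS is working is a throwaway busybox pod – the pod name ‘dnscheck’ is arbitrary, and the lookup target is the Postgres service created later in this walkthrough:

```shell
# Run a disposable pod and resolve the Postgres service name from inside the cluster
microk8s kubectl run dnscheck --rm -it --restart=Never --image=busybox -- nslookup awx-postgres
```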

Enable an ingress controller – this will permit inbound access to the AWX service and will terminate the SSL/TLS session from the browser:

ubuntu@ip-172-31-0-208:~$ microk8s enable ingress
Enabling Ingress
ingressclass.networking.k8s.io/public created
namespace/ingress created
serviceaccount/nginx-ingress-microk8s-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-microk8s-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-microk8s-role created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-microk8s created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-microk8s created
configmap/nginx-load-balancer-microk8s-conf created
configmap/nginx-ingress-tcp-microk8s-conf created
configmap/nginx-ingress-udp-microk8s-conf created
daemonset.apps/nginx-ingress-microk8s-controller created
Ingress is enabled
ubuntu@ip-172-31-0-208:~$

Install the AWX operator:

ubuntu@ip-172-31-0-208:~$ microk8s kubectl apply -f https://raw.githubusercontent.com/ansible/awx-operator/devel/deploy/awx-operator.yaml
customresourcedefinition.apiextensions.k8s.io/awxs.awx.ansible.com unchanged
clusterrole.rbac.authorization.k8s.io/awx-operator configured
clusterrolebinding.rbac.authorization.k8s.io/awx-operator unchanged
serviceaccount/awx-operator unchanged
deployment.apps/awx-operator configured
ubuntu@ip-172-31-0-208:~$

Check it is running after a short wait (on an Ubuntu host in an AWS t2.medium instance, this took about 35 seconds):

ubuntu@ip-172-31-0-208:~$ microk8s kubectl get all
NAME                              READY   STATUS    RESTARTS   AGE
pod/awx-operator-f768499d-4xvdd   1/1     Running   0          56s

NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/kubernetes             ClusterIP   10.152.183.1     <none>        443/TCP             16m
service/awx-operator-metrics   ClusterIP   10.152.183.140   <none>        8383/TCP,8686/TCP   26s

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/awx-operator   1/1     1            1           56s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/awx-operator-f768499d   1         1         1       56s
ubuntu@ip-172-31-0-208:~$

Create a key and self-signed certificate – consider a larger value than the 365 days shown below so it doesn’t expire soon. Make sure the FQDN is in the subject alternative name (SAN), since this seems to be how the ingress controller knows which container to send traffic to. Without it, you get messages about the SAN in the logs:

ubuntu@ip-172-31-0-208:~$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout awx.key -out awx.crt -subj "/CN=awx.yourname.com/O=awx.yourname.com" -addext "subjectAltName = DNS:awx.yourname.com"
Generating a RSA private key
..................................................................................................................................................................................................................+++++
...+++++
writing new private key to 'awx.key'
-----
ubuntu@ip-172-31-0-208:~$
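It is worth confirming the SAN actually made it into the certificate before loading it into Kubernetes. The sketch below regenerates a throwaway pair under /tmp with a 10-year lifetime (the hostname and paths are placeholders), then prints the extension:

```shell
# Generate a throwaway key/cert pair with a SAN (requires OpenSSL 1.1.1+ for -addext)
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
  -keyout /tmp/awx.key -out /tmp/awx.crt \
  -subj "/CN=awx.yourname.com/O=awx.yourname.com" \
  -addext "subjectAltName = DNS:awx.yourname.com" 2>/dev/null

# Print the subjectAltName extension; it should show DNS:awx.yourname.com
openssl x509 -in /tmp/awx.crt -noout -ext subjectAltName
```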

Load the key and certificate into a TLS secret in the default namespace, and call it awx-secret-tls:

ubuntu@ip-172-31-0-208:~$ microk8s kubectl create secret tls awx-secret-tls --namespace default --key awx.key --cert awx.crt
secret/awx-secret-tls created
ubuntu@ip-172-31-0-208:~$

View the secret with:

microk8s kubectl get secret --all-namespaces

Make a YAML file called my-awx.yml with the following – this names the AWX deployment and does a few other things, like turning on HTTPS and creating a volume that maps to the host machine’s local filesystem. It also references the secret we just created.

The hostname needs to match what appears in the Subject Alternative Name of the certificate we made; otherwise, the hostname defaults to awx.example.com.

The volume mapping is just something I needed for my particular case – you can leave out the tower_extra_volumes and tower_task_extra_volume_mounts sections if you don’t need it:

apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  tower_ingress_type: Ingress
  tower_ingress_tls_secret: awx-secret-tls
  tower_hostname: awx.yourname.com
  tower_extra_volumes: |
    - name: data-vol
      hostPath:
        path: /home/ubuntu
        type: Directory
  tower_task_extra_volume_mounts: |
    - name: data-vol
      mountPath: /data
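If you prefer to create the file straight from the shell, a heredoc works – the spec below mirrors the fields above (awx.yourname.com is still a placeholder), with the optional volume sections left out:

```shell
# Write a minimal my-awx.yml (the quoted EOF prevents variable expansion)
cat > my-awx.yml <<'EOF'
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  tower_ingress_type: Ingress
  tower_ingress_tls_secret: awx-secret-tls
  tower_hostname: awx.yourname.com
EOF

# Quick sanity check that the three spec fields are present (prints 3)
grep -c 'tower_' my-awx.yml
```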

Create the deployment using the above yaml file:

ubuntu@ip-172-31-0-208:~$ microk8s kubectl apply -f my-awx.yml
awx.awx.ansible.com/awx created

When you need to make changes to my-awx.yml, remove the deployment with the corresponding delete command, edit the file and re-apply. Check status (on a new Ubuntu install in an AWS t2.medium instance, this took 2 minutes 13 seconds):

ubuntu@ip-172-31-0-208:~$ microk8s kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
awx-operator-f768499d-66wjw   1/1     Running   0          3m45s
awx-postgres-0                1/1     Running   0          2m39s
awx-b5f6cf4d4-8mnx6           4/4     Running   0          2m31s

Create an ingress file that references the FQDN and the secret that was installed earlier like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awx-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  tls:
  - hosts:
    - awx.yourname.com
    secretName: awx-secret-tls
  rules:
    - host: awx.yourname.com
      http:
        paths:
          - backend:
              service:
                name: awx-service
                port:
                  number: 80
            path: /
            pathType: Prefix

Make sure your awx pod is showing 4/4 containers running. If it is, apply the ingress rule:

ubuntu@ip-172-31-0-208:~$ microk8s kubectl apply -f myingress.yml
ingress.networking.k8s.io/awx-ingress configured
ubuntu@ip-172-31-0-208:~$

Check the ingress rule is applied correctly:

ubuntu@ip-172-31-0-208:~$ microk8s kubectl get ingress
NAME          CLASS    HOSTS           ADDRESS   PORTS     AGE
awx-ingress   <none>   awx.yourname.com             80, 443   4m53s
ubuntu@ip-172-31-0-208:~$ microk8s kubectl describe ingress
Name:             awx-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  awx-secret-tls terminates awx.yourname.com
Rules:
  Host           Path  Backends
  ----           ----  --------
  awx.yourname.com
                 /   awx-service:80 (10.1.59.71:8052)
Annotations:     nginx.ingress.kubernetes.io/rewrite-target: /$1
Events:          <none>
ubuntu@ip-172-31-0-208:~$

The above looks OK, but there’s nothing under ‘Events’. If you set a hostname in /etc/hosts on the machine you are browsing from, you will see an NGINX 404 Not Found message.

Remove and reapply the ingress – note that after that is done, the events list has a CREATE event in it:

ubuntu@ip-172-31-0-208:~$ microk8s kubectl delete -f myingress.yml
ingress.networking.k8s.io "awx-ingress" deleted
ubuntu@ip-172-31-0-208:~$ microk8s kubectl apply -f myingress.yml
ingress.networking.k8s.io/awx-ingress created
ubuntu@ip-172-31-0-208:~$ microk8s kubectl describe ingress
Name:             awx-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  awx-secret-tls terminates awx.yourname.com
Rules:
  Host           Path  Backends
  ----           ----  --------
  awx.yourname.com
                 /   awx-service:80 (10.1.121.71:8052)
Annotations:     nginx.ingress.kubernetes.io/rewrite-target: /$1
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  CREATE  3s    nginx-ingress-controller  Ingress default/awx-ingress
ubuntu@ip-172-31-0-208:~$

Make sure the hostname resolves on your computer, then browse to https://awx.yourname.com and you should see a login page (after you pass the invalid self-signed certificate warning).
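If you want a command-line check before touching /etc/hosts, curl’s --resolve flag pins the hostname to an IP for a single request – the IP below is a placeholder for your instance’s public address, and -k skips verification of the self-signed certificate:

```shell
# Fetch the AWX login page headers, mapping the hostname to the instance IP for this call only
curl -k -I --resolve awx.yourname.com:443:203.0.113.10 https://awx.yourname.com/
```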

I hope this helps someone else out!

A couple of troubleshooting commands:

In the case below, Postgres is not starting due to a storage issue – persistent storage hadn’t been enabled on MicroK8s:

microk8s kubectl describe pod/awx-postgres-0
Name:           awx-postgres-0
Namespace:      default
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/component=database
                app.kubernetes.io/managed-by=awx-operator
                app.kubernetes.io/name=awx-postgres
                app.kubernetes.io/part-of=awx
                controller-revision-hash=awx-postgres-6f5cdc455c
                statefulset.kubernetes.io/pod-name=awx-postgres-0
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/awx-postgres
Containers:
  postgres:
    Image:      postgres:12
    Port:       5432/TCP
    Host Port:  0/TCP
    Environment:
      POSTGRES_DB:                <set to the key 'database' in secret 'awx-postgres-configuration'>  Optional: false
      POSTGRES_USER:              <set to the key 'username' in secret 'awx-postgres-configuration'>  Optional: false
      POSTGRES_PASSWORD:          <set to the key 'password' in secret 'awx-postgres-configuration'>  Optional: false
      PGDATA:                     /var/lib/postgresql/data/pgdata
      POSTGRES_INITDB_ARGS:       --auth-host=scram-sha-256
      POSTGRES_HOST_AUTH_METHOD:  scram-sha-256
    Mounts:
      /var/lib/postgresql/data from postgres (rw,path="data")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-2qmrb (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  postgres:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  postgres-awx-postgres-0
    ReadOnly:   false
  default-token-2qmrb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-2qmrb
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  35s (x5 over 3m12s)  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.

Connecting to a bash shell in a container. First find the pod name, then run the second command using -c to specify the container name (in this case, awx-task):

ubuntu@ip-172-31-9-88:~$ microk8s kubectl get pod awx-6dbb9946c7-86zhh
NAME                   READY   STATUS    RESTARTS   AGE
awx-6dbb9946c7-86zhh   4/4     Running   0          8m45s
ubuntu@ip-172-29-247-122:~$ microk8s kubectl exec --stdin --tty awx-6dbb9946c7-86zhh /bin/bash -c awx-task
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.4$
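As the deprecation warning suggests, the current form passes the container with -c and puts the command after a double dash – same pod and container names as the example above:

```shell
# Current kubectl exec syntax: flags first, then '--', then the command
microk8s kubectl exec --stdin --tty awx-6dbb9946c7-86zhh -c awx-task -- /bin/bash
```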


7 responses

9 06 2021
Lucius Jankok

microk8s kubectl apply -f my-awx.yml fails.

the content of my-awx.yml is:

cat my-awx.yml


apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  tower_ingress_type: Ingress
  tower_ingress_tls_secret: awx-secret-tls
  tower_hostname: awx.example.com

The command fails like this:

microk8s kubectl apply -f my-awx.yml
error: error validating "my-awx.yml": error validating data: [ValidationError(AWX.spec): unknown field "tower_hostname" in com.ansible.awx.v1beta1.AWX.spec, ValidationError(AWX.spec): unknown field "tower_ingress_tls_secret" in com.ansible.awx.v1beta1.AWX.spec, ValidationError(AWX.spec): unknown field "tower_ingress_type" in com.ansible.awx.v1beta1.AWX.spec]; if you choose to ignore these errors, turn validation off with --validate=false

11 06 2021
Lucius Jankok

removing “tower_” solves the issue

11 06 2021
DataPlumber

Thanks for the update! I was trying to find time to test your issue – it is probably something I will run into, I see they changed this in May 2021:

https://github.com/ansible/awx-operator/commit/75458d0678572377a74ffa84081953c061448826

12 06 2021
LJ (@ljankok)

That explains why “tower_” is no longer valid. Do you know how to set the “PROJECTS_ROOT” variable?

15 06 2021
DataPlumber

From what I read here, it needs a change to /etc/awx/settings.py

https://stackoverflow.com/questions/21688336/change-default-project-base-path-in-ansible-awx-tower

I’ve not tried this though

18 06 2021
Kev

@DataPlumber

Thank you for the very well done installation tutorial. It helped me a lot when I was setting up my k8s environment.

I’m curious on how you got the volume mapping working, I need something like this and I’m a little unclear how you setup the /data path. Did you create a PV/PVC? My env is not minikube, so I won’t have the mount directly on the host.
