feat: bump base os version to centos stream 9 (#324)

* feat: bump base os version to centos stream 9

* feat: add comments to extract commands for automated test

* chore: add emojis, fix urls, order descriptions

* chore: change emoji
kurokobo 2024-03-26 23:58:22 +09:00 committed by GitHub
parent f0dec9a403
commit 6016b81f7e
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
14 changed files with 121 additions and 85 deletions

---

@@ -1,5 +1,5 @@

<!-- omit in toc -->
# 📚 AWX on Single Node K3s

An example implementation of AWX on single node K3s using AWX Operator, with an easy-to-use simplified configuration that keeps ownership of data and passwords.
@@ -11,54 +11,54 @@

**If you want to view the guide for the specific version of AWX Operator, switch the page to the desired tag instead of the `main` branch.**

<!-- omit in toc -->
## 📝 Table of Contents

- [📝 Environment](#-environment)
- [📝 References](#-references)
- [📝 Requirements](#-requirements)
- [📝 Deployment Instruction](#-deployment-instruction)
  - [✅ Prepare CentOS Stream 9 host](#-prepare-centos-stream-9-host)
  - [Install K3s](#-install-k3s)
  - [Install AWX Operator](#-install-awx-operator)
  - [Prepare required files to deploy AWX](#-prepare-required-files-to-deploy-awx)
  - [Deploy AWX](#-deploy-awx)
- [📝 Back up and Restore AWX using AWX Operator](#-back-up-and-restore-awx-using-awx-operator)
- [📝 Additional Guides](#-additional-guides)
## 📝 Environment

- Tested on:
  - CentOS Stream 9 (Minimal)
  - K3s v1.28.7+k3s1
- Products that will be deployed:
  - AWX Operator 2.13.1
  - AWX 24.0.0
  - PostgreSQL 15
## 📝 References

- [K3s - Lightweight Kubernetes](https://docs.k3s.io/)
- [INSTALL.md on ansible/awx](https://github.com/ansible/awx/blob/24.0.0/INSTALL.md) @24.0.0
- [README.md on ansible/awx-operator](https://github.com/ansible/awx-operator/blob/2.13.1/README.md) @2.13.1
## 📝 Requirements

- **Computing resources**
  - **2 CPUs with x86-64-v2 support**.
  - **4 GiB RAM minimum**.
  - It's recommended to add more CPUs and RAM (like 4 CPUs and 8 GiB RAM or more) to avoid performance and job scheduling issues.
  - The files in this repository are configured to ignore the resource requirements specified by AWX Operator by default.
- **Storage resources**
  - At least **10 GiB for `/var/lib/rancher`** and **10 GiB for `/data`** are safe for a fresh install.
  - **Both will grow over time**, and **actual consumption highly depends on your environment and your use case**, so you should pay attention to the consumption and add more capacity if required.
    - `/var/lib/rancher` will be created and consumed by K3s and related data such as container images and overlayfs.
    - `/data` will be created in this guide and used to store AWX-related databases and files.
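Ignoring the Operator's default resource requirements is typically done by setting empty resource requirement fields in the AWX spec. A hedged sketch follows; the field names come from the AWX Operator spec, but the exact values used by this repository live in its own manifests and may differ:

```yaml
# Sketch only: empty requirements tell AWX Operator not to set
# CPU/memory requests and limits on the corresponding containers.
spec:
  web_resource_requirements: {}
  task_resource_requirements: {}
  ee_resource_requirements: {}
```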
## 📝 Deployment Instruction

### ✅ Prepare CentOS Stream 9 host

Disable firewalld and nm-cloud-setup if enabled. This is [recommended by K3s](https://docs.k3s.io/installation/requirements?os=rhel#operating-systems).
```bash
# Disable firewalld
...
```

@@ -75,15 +75,16 @@

Install the required packages to deploy AWX Operator and AWX.

```bash
sudo dnf install -y git curl
```
### Install K3s

Install a specific version of K3s with `--write-kubeconfig-mode 644` to make the config file (`/etc/rancher/k3s/k3s.yaml`) readable by non-root users.

<!-- shell: k3s: install -->
```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.28.7+k3s1 sh -s - --write-kubeconfig-mode 644
```
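As a quick illustration of what mode `644` grants: the owner can read and write, while group members and everyone else can only read, which is what lets non-root users run `kubectl` against the generated kubeconfig. A throwaway file stands in for the real kubeconfig here:

```shell
# Demonstrate mode 644 on a scratch file (not the real /etc/rancher/k3s/k3s.yaml)
tmpfile=$(mktemp)
chmod 644 "$tmpfile"
stat -c '%a %A' "$tmpfile"   # → 644 -rw-r--r--
rm -f "$tmpfile"
```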
### Install AWX Operator

> [!WARNING]
> AWX Operator 2.13.x introduces some major changes, and some issues related to these changes have been reported. If you don't have any strong reason to use 2.13.x, I personally recommend using [2.12.1](https://github.com/kurokobo/awx-on-k3s/tree/2.12.1) instead until the major issues are resolved.
@@ -103,12 +104,14 @@ git checkout 2.13.1
Then invoke `kubectl apply -k operator` to deploy AWX Operator.

<!-- shell: operator: deploy -->
```bash
kubectl apply -k operator
```

The AWX Operator will be deployed to the namespace `awx`.

<!-- shell: operator: get resources -->
```bash
$ kubectl -n awx get all
NAME                                                       READY   STATUS    RESTARTS   AGE
...
NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/awx-operator-controller-manager-68d787cfbd   1         1         1       16s
```
### Prepare required files to deploy AWX

Generate a Self-Signed certificate. Note that an IP address can't be specified. If you want to use a certificate from a public ACME CA such as Let's Encrypt or ZeroSSL instead of a Self-Signed certificate, follow the guide on [📁 **Use SSL Certificate from Public ACME CA**](acme) first and come back to this step when done.

<!-- shell: instance: generate certificates -->
```bash
AWX_HOST="awx.example.com"
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -out ./base/tls.crt -keyout ./base/tls.key -subj "/CN=${AWX_HOST}/O=${AWX_HOST}" -addext "subjectAltName = DNS:${AWX_HOST}"
```
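To double-check that the SAN made it into the certificate before deploying, you can inspect it with `openssl x509`. The sketch below runs against a throwaway copy generated the same way; `awx.example.com` is the placeholder hostname from above:

```shell
# Generate a throwaway cert the same way and print its Subject Alternative Name
AWX_HOST="awx.example.com"
tmpdir=$(mktemp -d)
openssl req -x509 -nodes -days 1 -newkey rsa:2048 \
  -out "${tmpdir}/tls.crt" -keyout "${tmpdir}/tls.key" \
  -subj "/CN=${AWX_HOST}/O=${AWX_HOST}" \
  -addext "subjectAltName = DNS:${AWX_HOST}" 2>/dev/null
openssl x509 -in "${tmpdir}/tls.crt" -noout -ext subjectAltName
rm -rf "${tmpdir}"
```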
@@ -169,6 +173,7 @@
Prepare directories for Persistent Volumes defined in `base/pv.yaml`. These directories will be used to store your databases and project files. Note that the sizes of the PVs and PVCs are specified in some of the files in this repository, but since their backends are `hostPath`, the values are just like labels and there is no actual capacity limitation.

<!-- shell: instance: create directories -->
```bash
sudo mkdir -p /data/postgres-15/data
sudo mkdir -p /data/projects
...
sudo chown 1000:0 /data/projects
sudo chmod 700 /data/postgres-15/data
```
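For orientation, a `hostPath` PV of the kind defined in `base/pv.yaml` looks roughly like this. The PV name and host path appear in this guide; the `8Gi` size and the storage class name are illustrative, and the actual manifests in this repository are authoritative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-postgres-15-volume
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 8Gi   # with hostPath this acts as a label; no real limit is enforced
  storageClassName: awx-postgres-volume
  hostPath:
    path: /data/postgres-15/data
```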
### Deploy AWX

Deploy AWX. This takes a few minutes to complete.

<!-- shell: instance: deploy -->
```bash
kubectl apply -k base
```

To monitor the progress of the deployment, check the logs of `deployments/awx-operator-controller-manager`:

<!-- shell: instance: gather logs -->
```bash
kubectl -n awx logs -f deployments/awx-operator-controller-manager
```
@@ -203,6 +210,7 @@
The required objects should now have been deployed next to AWX Operator in the `awx` namespace.

<!-- shell: instance: get resources -->
```bash
$ kubectl -n awx get awx,all,ingress,secrets
NAME                            AGE
...
NAME                                   COMPLETIONS   DURATION   AGE
job.batch/awx-migration-24.0.0         1/1           2m4s       4m36s

NAME                                    CLASS     HOSTS             ADDRESS         PORTS     AGE
ingress.networking.k8s.io/awx-ingress   traefik   awx.example.com   192.168.0.221   80, 443   6m6s

NAME                                   TYPE     DATA   AGE
secret/redhat-operators-pull-secret    Opaque   1      7m33s
...
```
@@ -257,13 +265,13 @@

At this point, AWX can be accessed via HTTP as well as HTTPS. If you want to force users to use HTTPS, see [📝 Tips: Enable HTTP Strict Transport Security (HSTS)](tips/enable-hsts.md).
## 📝 Back up and Restore AWX using AWX Operator

AWX Operator `0.10.0` or later has the ability to back up and restore AWX in an easy way.

Refer to [📁 **Back up AWX using AWX Operator**](backup) and [📁 **Restore AWX using AWX Operator**](restore) for details.

## 📝 Additional Guides

- [📁 **Back up AWX using AWX Operator**](backup)
  - The guide to making a backup of your AWX using AWX Operator.

---

@@ -41,6 +41,7 @@

Deploy cert-manager first.

<!-- shell: instance: deploy cert manager -->
```bash
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.14.4/cert-manager.yaml
```
@@ -112,6 +113,7 @@

Once the file has been modified to suit your environment, deploy the Issuer.

<!-- shell: instance: deploy issuer -->
```bash
kubectl apply -k acme
```

---

@@ -21,6 +21,7 @@

Prepare directories for Persistent Volumes to store the backup files as defined in `backup/pv.yaml`. This guide uses `hostPath`-based PVs to make it easy to understand.

<!-- shell: backup: create directories -->
```bash
sudo mkdir -p /data/backup
sudo chown 26:0 /data/backup
sudo chmod 700 /data/backup
```

Then deploy the Persistent Volume and Persistent Volume Claim.

<!-- shell: backup: deploy -->
```bash
kubectl apply -k backup
```
@@ -48,19 +50,21 @@

Then invoke backup by applying this manifest file.

<!-- shell: backup: backup -->
```bash
kubectl apply -f backup/awxbackup.yaml
```
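For reference, an `AWXBackup` manifest like `backup/awxbackup.yaml` is only a few lines. A hedged sketch: the object name `awxbackup-2021-06-06` comes from this guide's log output, and `deployment_name` must match the name of your AWX resource:

```yaml
apiVersion: awx.ansible.com/v1beta1
kind: AWXBackup
metadata:
  name: awxbackup-2021-06-06
  namespace: awx
spec:
  deployment_name: awx   # name of the AWX resource to back up
```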
To monitor the progress of the deployment, check the logs of `deployments/awx-operator-controller-manager`:

<!-- shell: backup: gather logs -->
```bash
kubectl -n awx logs -f deployments/awx-operator-controller-manager
```

When the backup completes successfully, the logs end with:

```bash
$ kubectl -n awx logs -f deployments/awx-operator-controller-manager
...
----- Ansible Task Status Event StdOut (awx.ansible.com/v1beta1, Kind=AWXBackup, awxbackup-2021-06-06/awx) -----
...
```

This will create an AWXBackup object in the namespace and also create backup files in the Persistent Volume. In this example those files are available at `/data/backup`.
<!-- shell: backup: get resources -->
```bash
$ kubectl -n awx get awxbackup
NAME                   AGE
...
```

---

@@ -46,6 +46,7 @@

Create a Service Account, Role, and RoleBinding to manage the `AWXBackup` resource.

<!-- shell: backup: serviceaccount -->
```bash
# Specify the namespace where your AWXBackup resources will be created.
$ NAMESPACE=awx
...
```
@@ -118,6 +119,7 @@

1. Add a new Container Group to make the API token usable inside the EE.
   - Enable `Customize pod specification` and put the following YAML string. `serviceAccountName` and `automountServiceAccountToken` are important to make the API token usable inside the EE.

<!-- yaml: backup: container group -->
```yaml
apiVersion: v1
kind: Pod
...
```

---

@@ -55,9 +55,9 @@ cd awx-on-k3s/builder

### Environment in This Example

- CentOS Stream 9 (Minimal)
- Python 3.11
- Docker 25.0.4
- Ansible Builder 3.0.1
### Install Ansible Builder

@@ -138,8 +138,8 @@

Once the command is complete, your custom EE image is built and stored on Docker.

```bash
$ docker image ls
REPOSITORY                        TAG           IMAGE ID       CREATED          SIZE
registry.example.com/ansible/ee   2.15-custom   d804667597e9   20 seconds ago   284MB
```
## Use EE

@@ -176,13 +176,13 @@

```bash
docker save registry.example.com/ansible/ee:2.15-custom -o custom-ee.tar
# Import the Tar file to containerd
sudo $(which k3s) ctr images import --compress-blobs --base-name registry.example.com/ansible/ee:2.15-custom custom-ee.tar
```

Ensure your imported image is listed.

```bash
$ sudo $(which k3s) crictl images
IMAGE                             TAG           IMAGE ID        SIZE
...
registry.example.com/ansible/ee   2.15-custom   d804667597e9e   96.3MB
```
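The commands above use `sudo $(which k3s)` instead of a hardcoded `/usr/local/bin/k3s`; `$(which …)` expands to wherever the binary actually lives on `PATH`. Demonstrated here with `ls` standing in for `k3s`, which may not be installed where you run this:

```shell
# `which` prints the resolved path; command substitution inlines it.
resolved=$(which ls)
echo "would run: sudo ${resolved} ..."
```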

---

@@ -180,7 +180,7 @@

In this example, we make the Execution Environment work with the Pod with the following customizations:

- Mount PVC as `/etc/demo`
- Run on the node with the label `awx-node-type: demo` using `nodeSelector`
- Have custom environment variable `MY_CUSTOM_ENV`
- Use custom DNS server `192.168.0.221` in addition to the default DNS servers
### Prepare host and Kubernetes

@@ -201,11 +201,11 @@ kubectl apply -k containergroup/case2

Add a label to the node.

```bash
$ kubectl label node kuro-c9s01.kuro.lab awx-node-type=demo
$ kubectl get node --show-labels
NAME                  STATUS   ROLES                  AGE    VERSION        LABELS
kuro-c9s01.kuro.lab   Ready    control-plane,master   3d7h   v1.21.2+k3s1   awx-node-type=demo,...
```
Copy the `awx` Role and RoleBinding to the new `ee-demo` namespace, to assign the `awx` Role on `ee-demo` to the `awx` ServiceAccount in the `awx` namespace.
@@ -274,7 +274,7 @@ spec:

```yaml
...
    awx-node-type: demo
  dnsConfig:
    nameservers:
      - 192.168.0.221
  volumes:
    - name: demo-volume
      persistentVolumeClaim:
...
```
@@ -289,7 +289,7 @@

This is the customized manifest to achieve:

- Mounting PVC as `/etc/demo`
- Running on the node with the label `awx-node-type: demo` using `nodeSelector`
- Having custom environment variable `MY_CUSTOM_ENV`
- Using custom DNS server `192.168.0.221` in addition to the default DNS servers

You can also change `image`, but it will be overridden by specifying the Execution Environment for the Job Template, Project Default, or Global Default.
@@ -338,7 +338,7 @@ spec:

```yaml
...
  dnsConfig:
    nameservers:
      - 192.168.0.221
  nodeSelector:
    awx-node-type: demo
...
```

---

@@ -53,12 +53,14 @@ cd awx-on-k3s

Then invoke `kubectl apply -k galaxy/operator` to deploy Galaxy Operator.

<!-- shell: operator: deploy -->
```bash
kubectl apply -k galaxy/operator
```

The Galaxy Operator will be deployed to the namespace `galaxy`.

<!-- shell: operator: get resources -->
```bash
$ kubectl -n galaxy get all
NAME                                                     READY   STATUS    RESTARTS   AGE
...
```
Generate a Self-Signed Certificate and key pair. Note that an IP address can't be specified.

<!-- shell: instance: generate certificates -->
```bash
GALAXY_HOST="galaxy.example.com"
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -out ./galaxy/galaxy/tls.crt -keyout ./galaxy/galaxy/tls.key -subj "/CN=${GALAXY_HOST}/O=${GALAXY_HOST}" -addext "subjectAltName = DNS:${GALAXY_HOST}"
```
@@ -119,6 +122,7 @@

Modify the two `password`s in `galaxy/galaxy/kustomization.yaml`.

Prepare directories for Persistent Volumes defined in `galaxy/galaxy/pv.yaml`.

<!-- shell: instance: create directories -->
```bash
sudo mkdir -p /data/galaxy/postgres-13
sudo mkdir -p /data/galaxy/redis
...
sudo chmod 755 /data/galaxy/postgres-13
```

Deploy Galaxy NG. This takes a few minutes to complete.

<!-- shell: instance: deploy -->
```bash
kubectl apply -k galaxy/galaxy
```
To monitor the progress of the deployment, check the logs of `deployments/galaxy-operator-controller-manager`:

<!-- shell: instance: gather logs -->
```bash
kubectl -n galaxy logs -f deployments/galaxy-operator-controller-manager
```

When the deployment completes successfully, the logs end with:

```txt
$ kubectl -n galaxy logs -f deployments/galaxy-operator-controller-manager
...
----- Ansible Task Status Event StdOut (galaxy.ansible.com/v1beta1, Kind=Galaxy, galaxy/galaxy) -----
PLAY RECAP *********************************************************************
...
```
@@ -153,6 +159,7 @@

The required objects have been deployed next to Pulp Operator in the `galaxy` namespace.

<!-- shell: instance: get resources -->
```bash
$ kubectl -n galaxy get galaxy,all,ingress,secrets
NAME                             AGE
...
NAME                                  READY   AGE
statefulset.apps/galaxy-postgres-13   1/1     3m45s

NAME                                       CLASS     HOSTS                ADDRESS         PORTS     AGE
ingress.networking.k8s.io/galaxy-ingress   traefik   galaxy.example.com   192.168.0.221   80, 443   2m9s

NAME                            TYPE     DATA   AGE
secret/galaxy-admin-password    Opaque   1      4m44s
...
```
@@ -465,10 +472,10 @@ sudo systemctl restart k3s

If this is successfully applied, you can check the applied configuration in the `config.registry` section of the output of the following command.

```bash
sudo $(which k3s) crictl info
# With jq
sudo $(which k3s) crictl info | jq .config.registry
```

Now you can use Execution Environments on Galaxy NG through AWX as follows.
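`crictl info` prints a large JSON document, and the `jq .config.registry` filter narrows it to just the registry section. A stub illustrating the filter (fabricated minimal JSON, not real `crictl` output; requires `jq`):

```shell
# Stand-in for `sudo $(which k3s) crictl info`, piped through the same jq filter
echo '{"config":{"registry":{"mirrors":{"registry.example.com":{"endpoint":["https://registry.example.com"]}}},"other":"omitted"}}' \
  | jq .config.registry
```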

---

@@ -67,7 +67,7 @@

```bash
...
NAME                            DESIRED   CURRENT   READY   AGE
replicaset.apps/git-56cc958f9   1         1         1       9s

NAME                                    CLASS    HOSTS             ADDRESS         PORTS     AGE
ingress.networking.k8s.io/git-ingress   <none>   git.example.com   192.168.0.221   80, 443   9s
```

Now your Git repository is accessible through `https://git.example.com/` or the hostname you specified. Visit the URL and follow the installation wizard.

---

@@ -22,6 +22,7 @@

Deploying your private container registry on your K3s to use with AWX.

Generate a Self-Signed Certificate. Note that an IP address can't be specified.

<!-- shell: instance: generate certificates -->
```bash
REGISTRY_HOST="registry.example.com"
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -out ./registry/tls.crt -keyout ./registry/tls.key -subj "/CN=${REGISTRY_HOST}/O=${REGISTRY_HOST}" -addext "subjectAltName = DNS:${REGISTRY_HOST}"
```
@@ -58,6 +59,7 @@

Replace `htpasswd` in `registry/configmap.yaml` with your own `htpasswd` string.

Prepare directories for Persistent Volumes defined in `registry/pv.yaml`.

<!-- shell: instance: create directories -->
```bash
sudo mkdir -p /data/registry
```
@@ -66,12 +68,14 @@

Deploy the private container registry.

<!-- shell: instance: deploy -->
```bash
kubectl apply -k registry
```

The required resources have been deployed in the `registry` namespace.

<!-- shell: instance: get resources -->
```bash
$ kubectl -n registry get all,ingress
NAME                            READY   STATUS    RESTARTS   AGE
...
NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/registry-7457f6c64b   1         1         1       9s

NAME                                         CLASS    HOSTS                  ADDRESS         PORTS
ingress.networking.k8s.io/registry-ingress   <none>   registry.example.com   192.168.0.221   80, 443
```

Now your container registry can be used through `registry.example.com` or the hostname you specified.
@@ -189,6 +193,7 @@

The `tls` section is required to disable SSL verification, as the endpoint is HTTPS with a Self-Signed Certificate.

<!-- shell: config: insecure registry -->
```bash
sudo tee /etc/rancher/k3s/registries.yaml <<EOF
configs:
...
EOF
sudo systemctl restart k3s
```

If this is successfully applied, you can check the applied configuration in the `config.registry` section of the output of the following command.

<!-- shell: config: dump config -->
```bash
sudo $(which k3s) crictl info
# With jq
sudo $(which k3s) crictl info | jq .config.registry
```
If you want Kubernetes to be able to pull images directly from this private registry, you can alternatively create `imagePullSecrets` for the Pod manually instead of writing your credentials in `auth` in `registries.yaml`. [Another guide about rate limiting on Docker Hub](../tips/dockerhub-rate-limit.md) explains how to use `ImagePullSecrets`.

---

@ -26,18 +26,23 @@ Some manual additions, such as [the HSTS configuration](../tips/enable-hsts.md)
If your AWX instance is running, it is recommended that it be deleted along with PVC and PV for the PostgreSQL first, in order to restore to be succeeded. If your AWX instance is running, it is recommended that it be deleted along with PVC and PV for the PostgreSQL first, in order to restore to be succeeded.
<!-- shell: restore: uninstall -->
```bash
# Delete AWX resource, PVC, and PV
kubectl -n awx delete pvc postgres-15-awx-postgres-15-0 --wait=false
kubectl -n awx delete awx awx
kubectl delete pv awx-postgres-15-volume
```
<!-- shell: restore: delete directories -->
```bash
# Delete any data in the PV
sudo rm -rf /data/postgres-15
```
Then prepare directories for your PVs. `/data/projects` is required if you are restoring the entire AWX to a new environment.
<!-- shell: restore: create directories -->
```bash
sudo mkdir -p /data/postgres-15/data
sudo mkdir -p /data/projects
@ -48,6 +53,7 @@ sudo chmod 700 /data/postgres-15/data
Then deploy the PV and PVC. It is recommended to make the size of the PVs and PVCs the same as that of the PVs your AWX used when the backup was taken.
<!-- shell: restore: deploy -->
```bash
kubectl apply -k restore
```
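Before proceeding, it can be worth confirming that the PVC has been bound to the PV; a quick check (the resource names follow the examples in this guide) might look like:

```bash
# Both the PVC and the PV should report a Bound status
kubectl -n awx get pvc postgres-15-awx-postgres-15-0
kubectl get pv awx-postgres-15-volume
```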
@ -86,19 +92,21 @@ If the AWXBackup object no longer exists, place the backup files under `/data/ba
Then invoke restore by applying this manifest file.
<!-- shell: restore: restore -->
```bash
kubectl apply -f restore/awxrestore.yaml
```
To monitor the progress of the deployment, check the logs of `deployments/awx-operator-controller-manager`:
<!-- shell: restore: gather logs -->
```bash
kubectl -n awx logs -f deployments/awx-operator-controller-manager
```
When the restore completes successfully, the logs end with:
```bash
$ kubectl -n awx logs -f deployments/awx-operator-controller-manager
...
----- Ansible Task Status Event StdOut (awx.ansible.com/v1beta1, Kind=AWX, awx/awx) -----
@ -108,6 +116,7 @@ localhost : ok=92 changed=0 unreachable=0 failed=0 s
This will create an AWXRestore object in the namespace, and your AWX is now restored.
<!-- shell: restore: get resources -->
```bash
$ kubectl -n awx get awxrestore
NAME   AGE


@ -47,12 +47,14 @@ cd awx-on-k3s
Then invoke `kubectl apply -k rulebooks/operator` to deploy EDA Server Operator.
<!-- shell: operator: deploy -->
```bash
kubectl apply -k rulebooks/operator
```
The EDA Server Operator will be deployed to the namespace `eda`.
<!-- shell: operator: get resources -->
```bash
$ kubectl -n eda get all
NAME                                                          READY   STATUS    RESTARTS   AGE
@ -72,6 +74,7 @@ replicaset.apps/eda-server-operator-controller-manager-7bf7578d44 1 1
Generate a Self-Signed certificate for the Web UI and API of the EDA Server. Note that an IP address can't be specified.
<!-- shell: instance: generate certificates -->
```bash
EDA_HOST="eda.example.com"
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -out ./rulebooks/server/tls.crt -keyout ./rulebooks/server/tls.key -subj "/CN=${EDA_HOST}/O=${EDA_HOST}" -addext "subjectAltName = DNS:${EDA_HOST}"
@ -115,6 +118,7 @@ Modify two `password`s in `rulebooks/server/kustomization.yaml`.
Prepare directories for Persistent Volumes defined in `base/pv.yaml`. This directory will be used to store your database.
<!-- shell: instance: create directories -->
```bash
sudo mkdir -p /data/eda/postgres-13/data
sudo chown 26:0 /data/eda/postgres-13/data
@ -125,12 +129,14 @@ sudo chmod 700 /data/eda/postgres-13/data
Deploy EDA Server; this takes a few minutes to complete.
<!-- shell: instance: deploy -->
```bash
kubectl apply -k rulebooks/server
```
To monitor the progress of the deployment, check the logs of `deployment/eda-server-operator-controller-manager`:
<!-- shell: instance: gather logs -->
```bash
kubectl -n eda logs -f deployment/eda-server-operator-controller-manager
```
@ -147,6 +153,7 @@ localhost : ok=57 changed=0 unreachable=0 failed=0 s
The required objects have been deployed next to the EDA Server Operator in the `eda` namespace.
<!-- shell: instance: get resources -->
```bash
$ kubectl -n eda get eda,all,ingress,configmap,secret
NAME                      AGE
@ -197,7 +204,7 @@ NAME READY AGE
statefulset.apps/eda-postgres-13   1/1     3m38s

NAME                                    CLASS     HOSTS             ADDRESS         PORTS     AGE
ingress.networking.k8s.io/eda-ingress   traefik   eda.example.com   192.168.0.221   80, 443   2m49s
NAME                         DATA   AGE
configmap/kube-root-ca.crt   1      5m7s
@ -387,8 +394,8 @@ $ kubectl apply -f rulebooks/webhook/ingress.yaml
$ kubectl -n eda get ingress
NAME                  CLASS     HOSTS             ADDRESS         PORTS     AGE
eda-ingress           traefik   eda.example.com   192.168.0.221   80, 443   4h45m
eda-ingress-webhook   traefik   eda.example.com   192.168.0.221   80, 443   1s      👈👈👈
```
### Trigger Rule using Webhook


@ -7,9 +7,9 @@ This repository includes ready-to-use files as an example to run Ansible Runner.
## Environment in This Example

- CentOS Stream 9 (Minimal)
- Python 3.11
- Docker 25.0.4
- Ansible Runner 2.3.6

## Install


@ -10,10 +10,10 @@ One easy way to do this is to use `dnsmasq`.
```bash
sudo tee -a /etc/hosts <<EOF
192.168.0.221 awx.example.com
192.168.0.221 registry.example.com
192.168.0.221 git.example.com
192.168.0.221 galaxy.example.com
EOF
```
@ -28,7 +28,7 @@ One easy way to do this is to use `dnsmasq`.
```bash
sudo tee /etc/rancher/k3s/resolv.conf <<EOF
nameserver 192.168.0.221
EOF
```
@ -65,7 +65,7 @@ One easy way to do this is to use `dnsmasq`.
Address 1: 10.43.0.10 kube-dns.kube-system.svc.cluster.local

Name:      git.example.com
Address 1: 192.168.0.221
pod "busybox" deleted
```
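The output above can be reproduced with a one-shot pod along these lines (the `busybox:1.28` image tag is an assumption; newer busybox images are known to have `nslookup` quirks):

```bash
# Start a throwaway busybox pod, resolve the hostname through cluster DNS,
# then remove the pod automatically when it exits (--rm)
kubectl run busybox --image=busybox:1.28 --rm -it --restart=Never -- nslookup git.example.com
```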


@ -10,23 +10,11 @@
### Uninstall resources on Kubernetes
The resources that you've deployed by the `kubectl create -f (-k)` or `kubectl apply -f (-k)` commands can also be removed by the `kubectl delete -f (-k)` command, by passing the same manifest files.

These are example commands to delete an existing AWX on K3s. Note that the PVC for PostgreSQL has to be removed manually, since it was created by AWX Operator rather than by `kubectl apply -k`.
<!-- shell: instance: uninstall -->
```bash
$ kubectl -n awx delete pvc postgres-15-awx-postgres-15-0 --wait=false
$ kubectl delete -k base
@ -39,7 +27,7 @@ persistentvolumeclaim "awx-projects-claim" deleted
awx.awx.ansible.com "awx" deleted
```
You can also delete all resources in a specific namespace by deleting the namespace itself. PVs cannot be deleted in this way, since they are namespace-independent resources, and have to be deleted manually.
```bash
$ kubectl delete ns awx
@ -51,12 +39,14 @@ persistentvolume "<volume name>" deleted
### Remove data in PVs
All PVs deployed by the guides in this repository are designed to persist data under `/data` using `hostPath`. However, removing PVs with the `kubectl delete pv` command does not remove the actual data on the host filesystem, so if you want to remove the data in the PVs, you have to remove it manually.

These are example commands to remove the data in the PVs for AWX.
<!-- shell: pv: remove -->
```bash
sudo rm -rf /data/projects
sudo rm -rf /data/postgres-15
```
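Since `kubectl delete pv` alone leaves these directories in place, after the `rm -rf` above you can verify that the data is really gone (the paths follow this guide's examples):

```bash
# Each test prints a confirmation once the directory no longer exists
test ! -e /data/projects && echo "/data/projects removed"
test ! -e /data/postgres-15 && echo "/data/postgres-15 removed"
```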
### Uninstall K3s