docs: add the most popular example for container group

kurokobo 2024-02-25 14:43:02 +09:00
parent 60cf11b476
commit 34d4a3b38d
8 changed files with 210 additions and 14 deletions


@@ -3,6 +3,175 @@
You can customize the specification of the Pod for the Execution Environment by using a **Container Group**.
<!-- omit in toc -->
## Table of Contents
- [Case 1: Read and write files on K3s host during Jobs](#case-1-read-and-write-files-on-k3s-host-during-jobs)
- [Prepare host and Kubernetes](#prepare-host-and-kubernetes)
- [Create Container Group](#create-container-group)
- [Quick testing](#quick-testing)
- [Case 2: Achieve complex requirements](#case-2-achieve-complex-requirements)
- [Prepare host and Kubernetes](#prepare-host-and-kubernetes-1)
- [Create Container Group](#create-container-group-1)
- [Quick testing](#quick-testing-1)
## Case 1: Read and write files on K3s host during Jobs
One typical use case for a Container Group is reading and writing files on the K3s host during Jobs.
In this example, we create a Container Group that runs Jobs in a Pod with the following specification.
- Mount a PVC at `/data/work` that maps to the exact `/data/work` on the K3s host
By using this Container Group, you can read and write any files under `/data/work` on the K3s host during Jobs. These files are persisted on the host file system, so the same files can be read and written across Jobs.
> [!NOTE]
> Almost the same thing can be achieved by enabling `Expose host paths for Container Groups` and adding `"/data/work:/data/work:O"` to `Paths to expose to isolated jobs` under `Administration` > `Settings` > `Jobs settings`, but using a Container Group is more flexible and recommended.
> `Paths to expose to isolated jobs` is a global setting, so every Job mounts the host file system even when it does not need it. In contrast, any number of Container Groups can be created, with different settings for each Job Template.
### Prepare host and Kubernetes
Prepare directories for the Persistent Volumes defined in `containergroup/case1/pv.yaml`. The directory is owned by UID `1000` so that the user inside the Execution Environment (UID `1000` in the default `awx-ee` image) can write to it.
```bash
sudo mkdir -p /data/work
sudo chmod 755 /data/work
sudo chown 1000:0 /data/work
```
Create PV and PVC.
```bash
kubectl apply -k containergroup/case1
```
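If you want to make sure the binding succeeded before moving on, a quick check like the following should work (the resource names come from the manifests shown later in this commit); both resources should report a `Bound` status.
```bash
# PVs are cluster-scoped; the PVC lives in the awx namespace
kubectl get pv awx-work-volume
kubectl -n awx get pvc awx-work-claim
```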
### Create Container Group
You can create a new Container Group via `Administration` > `Instance Groups` > `Add` > `Add container group`.
Enable `Customize pod specification` and define the specification as follows.
```yaml
apiVersion: v1
kind: Pod
metadata:
  namespace: awx
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  containers:
    - image: quay.io/ansible/awx-ee:latest
      name: worker
      args:
        - ansible-runner
        - worker
        - '--private-data-dir=/runner'
      resources:
        requests:
          cpu: 250m
          memory: 100Mi
      volumeMounts:
        - name: awx-work-volume
          mountPath: /data/work
  volumes:
    - name: awx-work-volume
      persistentVolumeClaim:
        claimName: awx-work-claim
```
This customized manifest mounts the PVC at `/data/work`, which maps to the exact `/data/work` on the K3s host.
### Quick testing
The Container Group to be used can be specified under `Instance Groups` in the Job Template.
Before testing, create a new file under `/data/work` on the K3s host.
```bash
$ echo "This file is written from K3s host." | sudo tee /data/work/demo_from_host.txt
This file is written from K3s host.
$ ls -l /data/work
total 4
-rw-r--r--. 1 root root 36 Feb 25 14:09 demo_from_host.txt
```
Then, by running any playbook on AWX that includes the following tasks for example, you can confirm that the files under `/data/work` on the K3s host are readable and writable. Note that tasks that read or write files should be delegated to `localhost` (with `delegate_to: localhost`) if your playbook does not target `localhost`.
```yaml
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Gather files under /data/work
      ansible.builtin.find:
        paths: /data/work
      register: existing_files
    - name: List files under /data/work
      ansible.builtin.debug:
        var: existing_files.files
    - name: Read a file under /data/work
      ansible.builtin.debug:
        var: lookup("ansible.builtin.file", "/data/work/demo_from_host.txt")
    - name: Write a file under /data/work
      ansible.builtin.copy:
        content: "This file is written during the task in the Job.\n"
        dest: /data/work/demo_from_job.txt
    - name: Read a file under /data/work
      ansible.builtin.debug:
        var: lookup("ansible.builtin.file", "/data/work/demo_from_job.txt")
```
This is example output from the Job. You can see that the file written from the K3s host is readable in the Job, and that writing a file from within the Job completes successfully.
```bash
TASK [Gather files under /data/work] *******************************************
ok: [localhost]

TASK [List files under /data/work] *********************************************
ok: [localhost] => {
    "existing_files.files": [
        {
            ...
            "path": "/data/work/demo_from_host.txt",
            ...
        }
    ]
}

TASK [Read a file under /data/work] ********************************************
ok: [localhost] => {
    "lookup('ansible.builtin.file', '/data/work/demo_from_host.txt')": "This file is written from K3s host."
}

TASK [Write a file under /data/work] *******************************************
changed: [localhost]

TASK [Read a file under /data/work] ********************************************
ok: [localhost] => {
    "lookup('ansible.builtin.file', '/data/work/demo_from_job.txt')": "This file is written during the task in the Job."
}
```
You can also see that the file written in the task is actually available on the K3s host.
```bash
$ ls -l /data/work
total 8
-rw-r--r--. 1 root root 36 Feb 25 14:09 demo_from_host.txt
-rw-r--r--. 1 1000 root 49 Feb 25 14:13 demo_from_job.txt
$ cat /data/work/demo_from_job.txt
This file is written during the task in the Job.
```
## Case 2: Achieve complex requirements
In this example, we make the Execution Environment run in a Pod with the following specification (a brief sketch of how some of these map onto Pod spec fields follows the list).
- Run in a different namespace `ee-demo` instead of the default one
@@ -13,28 +182,20 @@ In this example, we make the Execution Environment to work with the Pod with fol
- Have custom environment variable `MY_CUSTOM_ENV`
- Use custom DNS server `192.168.0.219` in addition to the default DNS servers
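As a rough, non-authoritative sketch, requirements like the namespace, the custom environment variable, and the additional DNS server map onto standard Pod spec fields as shown below. The full customized manifest appears later in this section; `my-custom-value` is just a placeholder.
```yaml
# Excerpt: only the fields related to the requirements above
apiVersion: v1
kind: Pod
metadata:
  namespace: ee-demo                # run in the ee-demo namespace
spec:
  containers:
    - name: worker
      image: quay.io/ansible/awx-ee:latest
      env:
        - name: MY_CUSTOM_ENV       # custom environment variable
          value: my-custom-value    # placeholder value
  dnsConfig:
    nameservers:
      - 192.168.0.219               # added alongside the default DNS servers
```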
### Prepare host and Kubernetes
Prepare directories for Persistent Volumes defined in `containergroup/case2/pv.yaml`.
```bash
sudo mkdir -p /data/demo
sudo chmod 755 /data/demo
sudo chown 1000:0 /data/demo
```
Create Namespace, PV, and PVC.
```bash
kubectl apply -k containergroup/case2
```
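As in Case 1, you can sanity-check that the resources were created; the `ee-demo` namespace name comes from the specification above.
```bash
kubectl get namespace ee-demo
kubectl get pv,pvc --all-namespaces
```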
Add a label to the node.
@@ -132,7 +293,7 @@ This is the customized manifest to achieve;
You can also change `image`, but it will be overridden by the Execution Environment specified for the Job Template, the Project Default, or the Global Default.
### Quick testing
The Container Group to be used can be specified under `Instance Groups` in the Job Template. After specifying it and running the Job, you can see the result as follows.


@@ -0,0 +1,8 @@
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: awx
resources:
  - pv.yaml
  - pvc.yaml


@@ -0,0 +1,14 @@
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: awx-work-volume
spec:
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  capacity:
    storage: 4Gi
  storageClassName: awx-work-volume
  hostPath:
    path: /data/work


@@ -0,0 +1,13 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: awx-work-claim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 4Gi
  storageClassName: awx-work-volume