docs: fix typos

kurokobo 2022-08-12 11:14:37 +09:00
parent c1b451df4a
commit af5bd7293e
14 changed files with 32 additions and 32 deletions


@@ -1,6 +1,6 @@
 MIT License
-Copyright (c) 2021 kurokobo
+Copyright (c) 2022 kurokobo
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal


@@ -3,7 +3,7 @@
 An example implementation of AWX on single node K3s using AWX Operator, with easy-to-use simplified configuration with ownership of data and passwords.
-- Accesible over HTTPS from remote host
+- Accessible over HTTPS from remote host
 - All data will be stored under `/data`
 - Fixed (configurable) passwords for AWX and PostgreSQL
 - Fixed (configurable) versions of AWX and PostgreSQL
@@ -49,7 +49,7 @@ An example implementation of AWX on single node K3s using AWX Operator, with eas
 - The files in this repository are configured to ignore resource requirements which specified by AWX Operator by default.
 - **Storage resources**
 - At least **10 GiB for `/var/lib/rancher`** and **10 GiB for `/data`** are safe for fresh install.
-- **Both will be grown during lifetime** and **actual consumption highly depends on your environment and your usecase**, so you should to pay attention to the consumption and add more capacity if required.
+- **Both will be grown during lifetime** and **actual consumption highly depends on your environment and your use case**, so you should to pay attention to the consumption and add more capacity if required.
 - `/var/lib/rancher` will be created and consumed by K3s and related data like container images and overlayfs.
 - `/data` will be created in this guide and used to store AWX-related databases and files.
@@ -57,10 +57,10 @@ An example implementation of AWX on single node K3s using AWX Operator, with eas
 ### Prepare CentOS Stream 8 host
-Disable Firewalld and nm-cloud-setup if enabled. This is [recommended by K3s](https://rancher.com/docs/k3s/latest/en/advanced/#additional-preparation-for-red-hat-centos-enterprise-linux).
+Disable firewalld and nm-cloud-setup if enabled. This is [recommended by K3s](https://rancher.com/docs/k3s/latest/en/advanced/#additional-preparation-for-red-hat-centos-enterprise-linux).
 ```bash
-# Disable Firewalld
+# Disable firewalld
 sudo systemctl disable firewalld --now
 
 # Disable nm-cloud-setup if exists and enabled
@@ -246,7 +246,7 @@ secret/awx-broadcast-websocket Opaque
 Now your AWX is available at `https://awx.example.com/` or the hostname you specified.
-Note that you have to access via hostname that you specified in `base/awx.yaml`, instead of IP address, since this guide uses Ingress. So you should configure your DNS or `hosts` file on your client where the brower is running.
+Note that you have to access via hostname that you specified in `base/awx.yaml`, instead of IP address, since this guide uses Ingress. So you should configure your DNS or `hosts` file on your client where the browser is running.
 At this point, AWX can be accessed via HTTP as well as HTTPS. If you want to redirect HTTP to HTTPS, see [📝Tips: Redirect HTTP to HTTPS](tips/https-redirection.md).
@@ -272,7 +272,7 @@ Refer [📁 **Back up AWX using AWX Operator**](backup) and [📁 **Restore AWX
 - If we want to use our own Execution Environment built with Ansible Builder and don't want to push it to the public container registry e.g. Docker Hub, we can deploy a private container registry on K3s.
 - [📁 **Deploy Private Galaxy NG on Docker or Kubernetes** (Experimental)](galaxy)
 - The guide to deploy our own Galaxy NG instance.
-- **Note that the containerized implementation of Galaxy NG is not officialy supported at this time.**
+- **Note that the containerized implementation of Galaxy NG is not officially supported at this time.**
 - **All information on the guide is for development, testing and study purposes only.**
 - [📁 **Use SSL Certificate from Public ACME CA**](acme)
 - The guide to use a certificate from public ACME CA such as Let's Encrypt or ZeroSSL instead of Self-Signed certificate.
@@ -286,7 +286,7 @@ Refer [📁 **Back up AWX using AWX Operator**](backup) and [📁 **Restore AWX
 - [📝Expose `/etc/hosts` to Pods on K3s](tips/expose-hosts.md)
 - [📝Redirect HTTP to HTTPS](tips/https-redirection.md)
 - [📝Use HTTP proxy](tips/use-http-proxy.md)
-- [📝Uninstall deployed resouces](tips/uninstall.md)
+- [📝Uninstall deployed resources](tips/uninstall.md)
 - [📝Deploy older version of AWX Operator](tips/deploy-older-operator.md)
 - [📝Upgrade AWX Operator and AWX](tips/upgrade-operator.md)
 - [📝Workaround for the rate limit on Docker Hub](tips/dockerhub-rate-limit.md)


@@ -99,7 +99,7 @@ spec:
 key: client-secret
 ```
-To store Client Secret for the Service Principal to Secret resouce in Kubernetes, modify `acme/kustomization.yaml`.
+To store Client Secret for the Service Principal to Secret resource in Kubernetes, modify `acme/kustomization.yaml`.
 ```yaml
 ...


@@ -124,7 +124,7 @@ It is also possible to making the backup of the AWX itself where the Job Templat
 2. Add new Project including the playbook.
 - You can specify this repository (`https://github.com/kurokobo/awx-on-k3s.git`) directly, but use with caution. The playbook in this repository is subject to change without notice. You can use [Tag](https://github.com/kurokobo/awx-on-k3s/tags) or [Commit](https://github.com/kurokobo/awx-on-k3s/commits/main) to fix the version to be used.
 3. Add new Job Template which use the playbook.
-- Select appropriate `Execution Environment`. The default `AWX EE (latest)` (`quay.io/ansible/awx-ee:latest`) contains required collections and modules by defaut, so it's good for the first choice.
+- Select appropriate `Execution Environment`. The default `AWX EE (latest)` (`quay.io/ansible/awx-ee:latest`) contains required collections and modules by default, so it's good for the first choice.
 - Select your `backup.yml` as `Playbook`.
 - Select your Credentials created in the above step.
 - Specify `Variables` as needed.


@@ -90,7 +90,7 @@ Other customization is possible besides this. Refer to [the official Ansible Bui
 ### Build EE
-Once your files are ready, run `ansible-builder build` command to build EE as a container image according to the definition in `execution-environment.yml`. Specify a tag (`--tag`) to suit your requiremnts.
+Once your files are ready, run `ansible-builder build` command to build EE as a container image according to the definition in `execution-environment.yml`. Specify a tag (`--tag`) to suit your requirements.
 ```bash
 ansible-builder build --tag registry.example.com/ansible/ee:2.12-custom --container-runtime docker --verbosity 3


@@ -1,7 +1,7 @@
 <!-- omit in toc -->
 # [Experimental] Deploy Galaxy NG
-Deploying your private Galaxy NG a.k.a. upstream version of Ansible Automatuin Hub.
+Deploying your private Galaxy NG a.k.a. upstream version of Ansible Automation Hub.
 **Note that the containerized implementation of Galaxy NG is not supported at this time. See the official installation guide for supported procedure.**
@@ -69,7 +69,7 @@ TOKEN_AUTH_DISABLED=True
 EOF
 ```
-Then inovoke `docker run`.
+Then invoke `docker run`.
 ```bash
 docker run --detach \
@@ -393,7 +393,7 @@ Basic configuration and usage of Galaxy NG.
 ### Sync Collections with Public Galaxy
-Create a list of Collections to be syncronized as YAML file.
+Create a list of Collections to be synchronized as YAML file.
 ```yaml
 ---
@@ -500,7 +500,7 @@ Optionally, this approval process can be disabled by adding `galaxy_require_cont
 ### Install Collections Locally from Galaxy NG
-Modify your `ansible.cfg` to speficy which Galaxy Instance will be used in which order. Note that you can get appropriate configuration from `Collections` > `Repository Management` > `Local` > `CLI configuration` per distributions. Your token is available at `Collections` > `API Token`.
+Modify your `ansible.cfg` to specify which Galaxy Instance will be used in which order. Note that you can get appropriate configuration from `Collections` > `Repository Management` > `Local` > `CLI configuration` per distributions. Your token is available at `Collections` > `API Token`.
 ```init
 [galaxy]
@@ -645,4 +645,4 @@ Now you can use Execution Environment on Galaxy NG through AWX as following.
 3. Register new Execution Environment on AWX
 4. Specify it as Execution Environment for the Job Template, Project Default, or Global Default.
-Once you start the Job Template, `imagePullSecrets` will be created from Credentials and assinged to the Pod, the image will be pulled, and the playbook will run on the Execution Environment.
+Once you start the Job Template, `imagePullSecrets` will be created from Credentials and assigned to the Pod, the image will be pulled, and the playbook will run on the Execution Environment.


@@ -70,11 +70,11 @@ NAME CLASS HOSTS ADDRESS
 ingress.networking.k8s.io/git-ingress <none> git.example.com 192.168.0.100 80, 443 11s
 ```
-Now your Git repository is accesible through `https://git.example.com/` or the hostname you specified. Visit the URL and follow the installation wizard.
+Now your Git repository is accessible through `https://git.example.com/` or the hostname you specified. Visit the URL and follow the installation wizard.
 Note that this sample manifest does not include any databases, so the SQLite3 has to be selected as `Database Type` for Gitea.
-| Configration | Recommemded Value |
+| Configuration | Recommended Value |
 | -------------- | -------------------------------------------------------- |
 | Database Type | `SQLite3` |
 | Gitea Base URL | `https://git.example.com/` or the hostname you specified |


@@ -16,9 +16,9 @@ You can also refer [the official instructions](https://github.com/ansible/awx-op
 ## Instruction
-To perfom restoration, you need to have AWX Operator running on Kubernetes. If you are planning to restore to a new environment, first prepare Kubernetes and AWX Operator by referring to [the instructions on the main guide](../README.md).
+To perform restoration, you need to have AWX Operator running on Kubernetes. If you are planning to restore to a new environment, first prepare Kubernetes and AWX Operator by referring to [the instructions on the main guide](../README.md).
-It is strongly recommended that the version of AWX Operator is the same as the version when the backup was taken. This is because the structure of the backup files differs between versions and may not be compatible. If you have upgraded AWX Operator after taking the backup, it is recommended to downgrade AWX Operator first before perfoming the restore. To deploy `0.13.0` or earlier version of AWX Operator, refer [📝Tips: Deploy older version of AWX Operator](../tips/deploy-older-operator.md)
+It is strongly recommended that the version of AWX Operator is the same as the version when the backup was taken. This is because the structure of the backup files differs between versions and may not be compatible. If you have upgraded AWX Operator after taking the backup, it is recommended to downgrade AWX Operator first before performing the restore. To deploy `0.13.0` or earlier version of AWX Operator, refer [📝Tips: Deploy older version of AWX Operator](../tips/deploy-older-operator.md)
 ### Prepare for Restore


@@ -5,7 +5,7 @@
 - [📝Expose `/etc/hosts` to Pods on K3s](expose-hosts.md)
 - [📝Redirect HTTP to HTTPS](https-redirection.md)
 - [📝Use HTTP proxy](use-http-proxy.md)
-- [📝Uninstall deployed resouces](uninstall.md)
+- [📝Uninstall deployed resources](uninstall.md)
 - [📝Deploy older version of AWX Operator](deploy-older-operator.md)
 - [📝Upgrade AWX Operator and AWX](upgrade-operator.md)
 - [📝Workaround for the rate limit on Docker Hub](dockerhub-rate-limit.md)


@@ -143,7 +143,7 @@ Add the following lines to the `ingress.yaml` for each resource,
 apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
-  name: <resouce name>
+  name: <resource name>
   annotations: 👈👈👈
     traefik.ingress.kubernetes.io/router.middlewares: default-redirect@kubernetescrd 👈👈👈
 ...
@@ -158,6 +158,6 @@ kubectl apply -k <path>
 Or you can also patch Ingress resources directly.
 ```bash
-kubectl -n <namespace> patch ingress <resouce name> --type=merge \
+kubectl -n <namespace> patch ingress <resource name> --type=merge \
   -p '{"metadata": {"annotations": {"traefik.ingress.kubernetes.io/router.middlewares": "default-redirect@kubernetescrd"}}}'
 ```


@@ -147,7 +147,7 @@ Events:
 If you follow the steps in this repository to deploy you AWX, your pull request to Docker Hub will be identified as a free, anonymous account. Therefore, you will be limited to 200 requests in 6 hours. The message "429 Too Many Requests" indicates that it has been exceeded.
-To solve this, you can simply wait until the limit is freeed up, or [consider giving your Docker Hub credentials to K3s by follwing the guide on this page](dockerhub-rate-limit.md).
+To solve this, you can simply wait until the limit is freed up, or [consider giving your Docker Hub credentials to K3s by following the guide on this page](dockerhub-rate-limit.md).
 ### The Pod is `Pending` with "1 Insufficient cpu, 1 Insufficient memory." event
@@ -168,7 +168,7 @@ Typical solutions are one of the following:
 - **Add more CPUs or memory to your K3s node.**
 - If you have at least 3 CPUs and 5 GB RAM, AWX may work.
 - **Reduce resource requests for the containers.**
-- The minimum resouce requirements can be ignored by adding three lines in `base/awx.yml`.
+- The minimum resource requirements can be ignored by adding three lines in `base/awx.yml`.
 ```yaml
 ...
@@ -184,7 +184,7 @@ Typical solutions are one of the following:
 ### The Pod is `Pending` with "1 pod has unbound immediate PersistentVolumeClaims." event
-If your Pod is in `Pending` state and its `Events` shows following events, the reason is that no usable Persisten Volumes are available.
+If your Pod is in `Pending` state and its `Events` shows following events, the reason is that no usable Persistent Volumes are available.
 ```bash
 $ kubectl -n awx describe pod awx-84d5c45999-h7xm4
@@ -260,7 +260,7 @@ This problem occurs when the AWX pod and the PostgreSQL pod cannot communicate p
 To solve this, check or try the following:
 - Ensure your PostgreSQL (typically the Pod named `awx-postgres-0` or `awx-postgres-13-0`) is in `Running` state.
-- Ensure `host` under `awx-postgres-configuration` in `base/kustomizaton.yaml` has correct value.
+- Ensure `host` under `awx-postgres-configuration` in `base/kustomization.yaml` has correct value.
 - Specify `awx-postgres` for AWX Operator 0.25.0 or earlier, `awx-postgres-13` for `0.26.0`.
 - Ensure your `firewalld`, `ufw` or any kind of firewall has been disabled on your K3s host.
 - Ensure your `nm-cloud-setup` service on your K3s host is disabled if exists.


@@ -1,5 +1,5 @@
 <!-- omit in toc -->
-# Uninstall deployed resouces
+# Uninstall deployed resources
 <!-- omit in toc -->
 ## Table of Contents
@@ -62,7 +62,7 @@ sudo rm -rf /data/<volume name>
 ### Uninstall K3s
-K3s comes with a handy uninstall script. Once executed, it will perform an uninstall that includes removing all resources deployed on Kubeneretes.
+K3s comes with a handy uninstall script. Once executed, it will perform an uninstall that includes removing all resources deployed on Kubernetes.
 ```bash
 /usr/local/bin/k3s-uninstall.sh


@@ -23,7 +23,7 @@ Ensure your `/etc/systemd/system/k3s.service.env` has correct environment variab
 sudo cat /etc/systemd/system/k3s.service.env
 ```
-If your `/etc/systemd/system/k3s.service.env` already has correct envirnment variables for your proxy, there is nothing to do for your K3s.
+If your `/etc/systemd/system/k3s.service.env` already has correct environment variables for your proxy, there is nothing to do for your K3s.
 If not, export environment variables and re-run installation script,


@@ -1,10 +1,10 @@
 <!-- omit in toc -->
 # Version Mapping between AWX Operator and AWX
-- [Default version mapping betwern AWX Operator and AWX](#default-version-mapping-betwern-awx-operator-and-awx)
+- [Default version mapping between AWX Operator and AWX](#default-version-mapping-between-awx-operator-and-awx)
 - [Appendix: Gather bundled AWX version from AWX Operator](#appendix-gather-bundled-awx-version-from-awx-operator)
-## Default version mapping betwern AWX Operator and AWX
+## Default version mapping between AWX Operator and AWX
 The table below maps the AWX Operator versions and bundled AWX versions.