- Cluster Installation Instructions
- Prerequisites for Cluster Installation
- Initialize the Cluster
- Before Moving to Next Steps
- General instructions
- Day 2: Post Initialization Instructions (Individual Playbooks)
- Installing api and wildcard certs
- Include Chrony Machineconfigs
- Installing machinesets
- Enable LDAP
- Enable LDAP Group Sync and role bindings
- Setting up Infra Taints and Tolerations
- Configure OperatorHub to Pull from Disconnected Quay
- Configure Update Services to Pull from Disconnected Quay
- Install an Operator
- Toggle Operator Update Mode
- Install ArgoCD
- Install Bitnami
- Deploy Network Policy Template
- Create ArgoCD Projects
- Configure Image Registry
- Move Monitoring Pods to Infra Nodes
- Update Pull Secret
- Backup of Bitnami Sealed Secret
These instructions are based on the Installer-provisioned Infrastructure installation procedure: https://docs.openshift.com/container-platform/4.10/installing/installing_vsphere/installing-vsphere-installer-provisioned-customizations.html

- Create an ssh-key pair for your user: https://docs.openshift.com/container-platform/4.10/installing/installing_vsphere/installing-vsphere-installer-provisioned.html#ssh-agent-using_installing-vsphere-installer-provisioned
- Copy the contents of your ssh-key pub file to the last line, under `sshKey: |`, in the following file: `ansible/roles/initialize_cluster/templates/install-config.yaml.j2`
- Update `group_vars/pull-secret.yaml` with the customer pull-secret: `ansible-vault edit group_vars/pull-secret.yaml --vault-password-file=../resources/vault-password.txt`
- Make sure there is a cluster-specific variables folder and `all.yaml` under `ansible/group_vars/cluster`. Use `ansible/group_vars/cluster/dev/all.yaml` as an example, changing variables to match the new cluster.
- New certificates will need to be saved in vaulted files using the `ansible-vault` command: https://docs.ansible.com/ansible/latest/user_guide/vault.html
- Each cluster has a certs file. You can view the format of the file with `ansible-vault edit filename`.
- Be sure to view the format of the existing certs files before adding a new one; if the format doesn't match, the playbooks will fail. The certs info is indented four (4) spaces (see the sketch after this list).
- Currently the vault password is stored in plain text in `resources/vault-password.txt`. Consider saving this file to an encrypted password application.
- Where you see `--vault-password-file=../resources/vault-password.txt` in the examples below, substitute `--ask-vault-pass` for production builds, which should use a different password.
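For reference, here is a minimal sketch of viewing a certs file and the expected layout; the key names below are hypothetical, so mirror an existing cluster's file rather than this example:

```sh
# Open an existing certs file first and match its structure exactly.
ansible-vault edit group_vars/certs/cluster-dev-certs.yaml \
--vault-password-file=../resources/vault-password.txt
# Illustrative decrypted contents (key names are placeholders); note the
# certificate body is indented four spaces under the key:
#
#   api_cert: |
#       -----BEGIN CERTIFICATE-----
#       MII...
#       -----END CERTIFICATE-----
#   wildcard_cert: |
#       -----BEGIN CERTIFICATE-----
#       MII...
#       -----END CERTIFICATE-----
```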
Ensure you are in the ansible directory before you run these commands.

```sh
ansible-playbook -i inventory \
-e @./group_vars/cluster/[cluster]/all.yaml \
-e @./group_vars/env/[env]/default-vault.yaml \
-e @./group_vars/env/[env]/all.yaml \
-e @./group_vars/pull-secret.yaml \
-e @./group_vars/cluster/[cluster]/ssh-key.yaml \
--vault-password-file=../resources/vault-password.txt \
initialize_cluster.yaml
```
- NOTE: For lab, add `-e @./group_vars/env/lab/all.yaml \`
- NOTE: For Quay clusters, replace the pull-secret line with `-e @./group_vars/quay-pull-secret.yaml \`
- NOTE: For Mesa installations, the environment variable is `mprod`
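If you want to follow the installation progress outside the playbook, the standard installer wait command can be pointed at the generated install directory (a sketch, assuming the install-dir layout used below):

```sh
# Tails install progress and prints the kubeadmin credentials at the end
openshift-install wait-for install-complete --dir install-dir --log-level debug
```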
Ensure you get the folder and template names for the new cluster you just built, and update the variables in the cluster-specific all.yaml file:

```sh
cd install-dir
grep template *
vi ../group_vars/cluster/[cluster]/all.yaml
```
Add the following to your .bashrc file so that the KUBECONFIG environment variable is set. This file will be created, and updated, after you log into the cluster from the command line.

```sh
# User specific aliases and functions
export KUBECONFIG=~/.kube/config
```
This section shows the command to run all the plays as one big playbook. Alternatively, go to the next section to run individual plays to update or check a config setting.
```sh
ansible-playbook -i inventory \
-e @./group_vars/certs/cluster-[cluster]-certs.yaml \
-e "cluster=[cluster]" \
--vault-password-file=../resources/vault-password.txt \
certs_deployment_playbook.yaml
```
Wait for all initial nodes to be ready before moving on. You may need to log in to the API with kubeadmin, since system:admin will be logged out when the masters are restarted.
After the certificates have been applied and all nodes are back in a ready state, log in to the API with the kubeadmin username and password provided after the cluster has been initialized.
Update the KUBECONFIG environment variable with the logged-in user's tokens:

```sh
export KUBECONFIG=~/.kube/config
```
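For example (a sketch; the API URL placeholders follow this document's [cluster] convention):

```sh
# Log in with the kubeadmin credentials printed when the cluster was initialized
oc login https://api.[cluster].[base-domain]:6443 -u kubeadmin
# Then watch until every node reports Ready
watch -n 10 oc get nodes
```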
There will be pauses during the builds; wait until the object just initiated by the playbook has completed before moving on.
The first pause is for the storage nodes, after the machineconfigs have been created. Wait until the storage nodes are up and, if needed, vMotion the storage nodes before moving on.
Once the storage nodes are vMotioned, continue the playbook.
```sh
ansible-playbook -i inventory \
-e @./group_vars/cluster/[cluster]/all.yaml \
-e @./group_vars/env/[env]/default-vault.yaml \
-e @./group_vars/env/[env]/all.yaml \
-e @./group_vars/quay.yaml \
--vault-password-file=../resources/vault-password.txt \
--tags ocp \
finalize_cluster.yaml
```
```sh
ansible-playbook -i inventory \
-e @./group_vars/cluster/[quay_cluster]/all.yaml \
-e @./group_vars/env/[env]/default-vault.yaml \
-e @./group_vars/env/[env]/all.yaml \
--vault-password-file=../resources/vault-password.txt \
--tags quay \
finalize_cluster.yaml
```
Wait 5-10 minutes, then run the ArgoCD Projects playbook: Create ArgoCD Projects
Each section below contains the commands for running the individual plays that finalize a cluster. You will need to replace the [cluster] defined in each with the cluster you are building (for instance: dev).
```sh
ansible-playbook -i inventory \
-e @./group_vars/certs/cluster-[cluster]-certs.yaml \
-e "cluster=[cluster]" \
certs_deployment_playbook.yaml \
--vault-password-file=../resources/vault-password.txt
```
```sh
ansible-playbook -i inventory \
-e "cluster=[cluster]" \
chrony_playbook.yaml
```
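To confirm the chrony machineconfigs were created (the exact names depend on the playbook, so grep loosely):

```sh
oc get machineconfig | grep -i chrony
```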
### Create MachineConfigPool for Infra Nodes

```sh
ansible-playbook -i inventory mcp_playbook.yaml
```
When running this command you will be prompted for the type of machinesets to be created. The type defaults to worker; to build infra machinesets, type `infra`.
```sh
ansible-playbook -i inventory \
-e @./group_vars/env/[env]/default-vault.yaml \
-e @./group_vars/cluster/[cluster]/all.yaml \
-e @./group_vars/env/[env]/all.yaml \
-e "role_assigned=storage" \
--vault-password-file=../resources/vault-password.txt \
machinesets_playbook.yaml
```
```sh
ansible-playbook -i inventory \
-e @./group_vars/env/[env]/default-vault.yaml \
-e @./group_vars/env/[env]/all.yaml \
--vault-password-file=../resources/vault-password.txt \
ldap_playbook.yaml
```
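To spot-check the result, the cluster OAuth resource should now list an LDAP identity provider (a sketch):

```sh
# Look for an identityProviders entry of type LDAP
oc get oauth cluster -o yaml
```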
```sh
ansible-playbook -i inventory \
-e @./group_vars/env/[env]/default-vault.yaml \
-e @./group_vars/cluster/[cluster]/all.yaml \
--vault-password-file=../resources/vault-password.txt \
ldap_groupsync_playbook.yaml
```
Note: Make sure all infra nodes are up and provisioned before running this step.
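A quick check, assuming the infra nodes carry the standard infra node-role label:

```sh
# All infra nodes should be listed and Ready before applying taints
oc get nodes -l node-role.kubernetes.io/infra
```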
```sh
ansible-playbook -i inventory \
-e @./group_vars/cluster/[cluster]/all.yaml \
nodeconfig_playbook.yaml
```
```sh
ansible-playbook -i inventory \
-e @./group_vars/quay.yaml \
-e "cluster=[cluster]" \
--vault-password-file=../resources/vault-password.txt \
operators_config_playbook.yaml
```
```sh
ansible-playbook -i inventory \
-e "cluster=[cluster]" \
update_svs_playbook.yaml
```
Pass the variables, depending on the operator, to the playbook. Variables to pass are:
- Operator Name
- Install Mode (SingleNamespace, AllNamespaces, or OwnNamespace)
- Namespace
- Channel

The install mode defaults to SingleNamespace; if you specify AllNamespaces, the operator will show as installed in all namespaces.
To get the required variables, grep the packagemanifest (see the next step) for the following information:
- Install Modes (keyword: modes)
- Suggested Namespace(s) (keyword: suggested)
  - If none is provided, use the default namespace: openshift-operators
- Install Channel (keyword: channel)
```sh
oc describe packagemanifest openshift-gitops | grep -A10 -i modes

Install Modes:
  Supported:  false
  Type:       OwnNamespace
  Supported:  false
  Type:       SingleNamespace
  Supported:  false
  Type:       MultiNamespace
  Supported:  true
  Type:       AllNamespaces
```
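The same pattern covers the suggested namespace and the channel; for example (output varies by operator):

```sh
oc describe packagemanifest openshift-gitops | grep -i suggested
oc describe packagemanifest openshift-gitops | grep -i channel
```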
```sh
ansible-playbook -i inventory \
-e "cluster=[cluster_name]" \
-e "operator=[operator_name]" \
-e "install_mode=[install_mode]" \
-e "ns=[namespace]" \
-e "channel=[channel]" \
-e @./group_vars/cluster/[cluster]/all.yaml \
operators_install_playbook.yaml
```
- Operator: odf-operator
  - install_mode: OwnNamespace
  - ns: openshift-storage
  - channel: stable-4.12
- Operator: local-storage-operator
  - install_mode: OwnNamespace
  - ns: openshift-local-storage
  - channel: stable
- Operator: openshift-gitops-operator
  - install_mode: AllNamespaces
  - ns: openshift-operators
  - channel: latest
- Operator: openshift-pipelines-operator-rh
  - install_mode: AllNamespaces
  - ns: openshift-operators
  - channel: latest
- Operator: cluster-logging
  - install_mode: OwnNamespace
  - ns: openshift-logging
  - channel: stable
- Operator: elasticsearch-operator
  - install_mode: OwnNamespace
  - ns: openshift-operators-redhat
  - channel: stable
- Operator: openshift-cert-manager-operator
  - install_mode: AllNamespaces
  - ns: cert-manager-operator
  - channel: stable-v1
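As a filled-in example, installing the GitOps operator on a cluster named dev would look like this (values taken from the list above):

```sh
ansible-playbook -i inventory \
-e "cluster=dev" \
-e "operator=openshift-gitops-operator" \
-e "install_mode=AllNamespaces" \
-e "ns=openshift-operators" \
-e "channel=latest" \
-e @./group_vars/cluster/dev/all.yaml \
operators_install_playbook.yaml
```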
Run the following playbook to set all Operators to Manual Update mode:

```sh
ansible-playbook -i inventory \
-e "cluster=[cluster]" \
operator_toggle_playbook.yaml
```
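To verify, every Subscription should now show Manual approval (a sketch):

```sh
# Each subscription should report installPlanApproval: Manual
oc get subscriptions -A -o yaml | grep installPlanApproval
```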
You need to update default-vault.yaml (via `ansible-vault edit`) with the ArgoCD Service Account password, using the username as the key.
```sh
ansible-playbook -i inventory \
-e @./group_vars/cluster/[cluster]/all.yaml \
-e @./group_vars/env/[environment]/all.yaml \
-e @./group_vars/env/[environment]/default-vault.yaml \
--vault-password-file=../resources/vault-password.txt \
install_argocd_playbook.yaml
```
```sh
ansible-playbook -i inventory \
-e "cluster=[cluster]" \
bitnami_playbook.yaml
```
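Once the controller is running, secrets are sealed with the kubeseal CLI before being committed anywhere. A minimal sketch with placeholder filenames; you may need `--controller-name`/`--controller-namespace` depending on where the playbook installs the controller:

```sh
# Encrypt a plain Secret manifest against the controller's public key
kubeseal --format yaml < mysecret.yaml > mysealedsecret.yaml
oc apply -f mysealedsecret.yaml
```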
```sh
ansible-playbook -i inventory \
-e @./group_vars/cluster/[cluster]/all.yaml \
network_policy_playbook.yaml
```
```sh
ansible-playbook -i inventory \
-e @./group_vars/cluster/[cluster]/all.yaml \
net_policy_argo_playbook.yaml
```
```sh
ansible-playbook -i inventory \
-e "cluster=[cluster]" \
image_registry_playbook.yaml
```
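To confirm the registry configuration took effect (a sketch):

```sh
# Inspect the cluster image registry operator configuration and pods
oc get configs.imageregistry.operator.openshift.io cluster -o yaml
oc get pods -n openshift-image-registry
```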
```sh
ansible-playbook -i inventory \
-e "cluster=[cluster]" \
monitoring_config_playbook.yaml
```
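Afterwards, the monitoring pods should be scheduled on the infra nodes (check the NODE column):

```sh
oc get pods -n openshift-monitoring -o wide
```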
Use ansible-vault to update the pull-secret.yaml file under group_vars/, then run the playbook below to apply the change to the clusters.
```sh
ansible-playbook -i inventory \
-e cluster=[cluster] \
-e @./group_vars/pull-secret.yaml \
update_pullsecret_playbook.yaml \
--vault-password-file=../resources/vault-password.txt
```
- Create a new git branch for the backup
- Run the playbook: sealed_secrets_backup_playbook.yaml
- Run Scott's script to push the backed-up secret to the git repository
```sh
ansible-playbook -i inventory \
-e "cluster=[cluster]" \
sealed_secrets_backup_playbook.yaml
```
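Putting the steps together (the branch name is illustrative, and Scott's script is site-specific):

```sh
git checkout -b sealed-secrets-backup-[cluster]
ansible-playbook -i inventory \
-e "cluster=[cluster]" \
sealed_secrets_backup_playbook.yaml
# then run Scott's script to push the backed-up secret to the git repository
```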