OpenShift

= Version Details =

 * Ansible 2.6.1
 * Python 2.7.5
 * OpenShift 3.9
 * CentOS 7.5.1804

= System Requirements =

 * For this cluster setup you will need the following VM or hardware requirements:

3x CentOS 7.5 VMs or hosts, each with:
 * 8 vCPUs
 * 12-16 GB RAM
 * 80 GB HDD space (custom partitioning: 8 GB swap, 1 GB /boot, the rest for /)
 * 1 NIC with a static address and gateway configured
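As a rough preflight, the CPU and memory sizing above can be checked on each host. This is only a sketch, not part of the official installer:

```shell
# Preflight sketch: compare this host's CPU and memory against the sizing above.
cpus=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
echo "vCPUs: $cpus (want >= 8), RAM: ${mem_gb} GB (want 12-16)"
[ "$cpus" -ge 8 ] || echo "WARN: fewer than 8 vCPUs"
[ "$mem_gb" -ge 12 ] || echo "WARN: less than 12 GB RAM"
```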

= Prerequisites =


 * Perform these steps on all 3 hosts:

 * Install the following prerequisites:

yum -y install yum yum-utils wget git net-tools bind-utils iptables-services bridge-utils NetworkManager
yum -y install ansible pyOpenSSL python python-yaml python-dbus python-cryptography python-lxml docker tesseract nano
yum -y update

 * To get the latest Ansible version, install it from the EPEL repository:

yum install epel-release
yum install ansible

 * Set up the Docker prerequisites:

systemctl enable docker
systemctl start docker

 * Add the IP address, hostname and FQDN of each node to /etc/hosts on each host:

10.1.10.27 opshmaster Openshift01
10.1.10.28 opshnode1 Openshift02
10.1.10.29 opshnode2 Openshift03
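Once the entries are in place, a quick loop can confirm that each name resolves on the current host. A sketch, using the hostnames from the entries above:

```shell
# Sketch: warn about any cluster hostname that does not resolve
# (via /etc/hosts or DNS) on this machine.
check_hosts() {
  for h in "$@"; do
    getent hosts "$h" >/dev/null || echo "WARN: $h does not resolve"
  done
}
check_hosts Openshift01 Openshift02 Openshift03
```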


 * Run ssh-keygen, accept the default location for the key, and do not set a passphrase for the key.
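A non-interactive equivalent, as a sketch; it generates the key only if one does not already exist at the default path:

```shell
# Generate an RSA key at the default path with an empty passphrase,
# skipping generation if a key is already present.
KEY="$HOME/.ssh/id_rsa"
mkdir -p "$HOME/.ssh"
[ -f "$KEY" ] || ssh-keygen -q -t rsa -N "" -f "$KEY"
```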

 * Run the following bash loop to copy the key to every host:

for host in Openshift01 Openshift02 Openshift03; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub $host
done


 * Restart all 3 hosts

= Installing =

== OpenShift 3.9 ==

 * Create the Ansible hosts file:

nano /etc/ansible/hosts

 * Create an OSEv3 group that contains the masters, nodes, and etcd groups:

[OSEv3:children]
masters
nodes
etcd

 * Set variables common to all OSEv3 hosts:

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant
openshift_disable_check=memory_availability

 * Alternatively, the identity provider can be declared without the filename key: openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
 * To pin a specific release, add a line such as openshift_release=v3.7

 * Host group for masters:

[masters]
Openshift01 openshift_hostname=Openshift01 openshift_public_hostname=Openshift01

 * Host group for etcd:

[etcd]
Openshift01

 * Host group for nodes, including region info:

[nodes]
Openshift01 openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_hostname=Openshift01 openshift_public_hostname=Openshift01
Openshift02 openshift_node_labels="{'region': 'primary', 'zone': 'east'}" openshift_hostname=Openshift02 openshift_public_hostname=Openshift02
Openshift03 openshift_node_labels="{'region': 'primary', 'zone': 'west'}" openshift_hostname=Openshift03 openshift_public_hostname=Openshift03


 * If parsing of this file fails, replace any typographic (curly) single and double quotes with straight ASCII quotes.
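One way to normalize the quotes in bulk, as a sketch; run it against a copy of the inventory first:

```shell
# Replace typographic (curly) quotes with straight ASCII quotes on stdin.
normalize_quotes() {
  sed "s/’/'/g; s/‘/'/g; s/”/\"/g; s/“/\"/g"
}
# e.g.: normalize_quotes < /etc/ansible/hosts > /tmp/hosts.fixed
```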

 * Download the installer playbooks:

git clone --single-branch -b release-3.9 https://github.com/openshift/openshift-ansible

 * Install the prerequisites:

ansible-playbook -i /etc/ansible/hosts openshift-ansible/playbooks/prerequisites.yml

 * Install OpenShift:

ansible-playbook -i /etc/ansible/hosts openshift-ansible/playbooks/deploy_cluster.yml

 * You should see similar output once done:

PLAY RECAP *****************************************************************************************************
Openshift01               : ok=619  changed=251  unreachable=0    failed=0
Openshift02               : ok=134  changed=53   unreachable=0    failed=0
Openshift03               : ok=134  changed=53   unreachable=0    failed=0
localhost                 : ok=12   changed=0    unreachable=0    failed=0

INSTALLER STATUS ***********************************************************************************************
Initialization            : Complete (0:00:48)
Health Check              : Complete (0:01:23)
etcd Install              : Complete (0:01:37)
Master Install            : Complete (0:05:13)
Master Additional Install : Complete (0:00:49)
Node Install              : Complete (0:07:34)
Hosted Install            : Complete (0:03:42)
Web Console Install       : Complete (0:01:04)
Service Catalog Install   : Complete (0:07:04)

== OpenShift 3.10 ==

 * Edit the base packages playbook, changing the line "" to "":

nano playbooks/init/base_packages.yml

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
ansible_become=true
openshift_deployment_type=origin
 * ansible_ssh_user is the admin user created in the previous section

openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_docker_insecure_registries=172.30.0.0/16
 * Use HTPasswd for authentication
 * To define a default sub-domain for the master node, add a line such as openshift_master_default_subdomain=apps.srv.world
 * openshift_docker_insecure_registries allows unencrypted connections within the cluster

[masters]
Openshift01 openshift_schedulable=true containerized=false

[etcd]
Openshift01

[nodes]
Openshift01 openshift_node_group_name='node-config-master-infra'
Openshift02 openshift_node_group_name='node-config-compute'
Openshift03 openshift_node_group_name='node-config-compute'

 * Valid values for openshift_node_group_name are defined in /usr/share/ansible/openshift-ansible/roles/openshift_facts/defaults/main.yml


 * If you'd like to separate the master node role from the infra node role, set the node groups like this instead:

Openshift01 openshift_node_group_name='node-config-master'
Openshift02 openshift_node_group_name='node-config-compute'
Openshift03 openshift_node_group_name='node-config-infra'

= WebUI =

 * For the initial installation, simply use htpasswd for authentication.
 * Seed it with a sample admin user to allow logging in to the OpenShift Console and validating the installation:

htpasswd -c /etc/origin/htpasswd admin
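Further users can be added with `htpasswd /etc/origin/htpasswd <user>`. If the htpasswd tool (httpd-tools) is not installed, an entry in the same format can be generated with openssl. A sketch; the path, user, and password below are placeholders:

```shell
# Sketch: append an htpasswd-format (APR1/MD5) entry using openssl,
# for hosts where the htpasswd tool itself is unavailable.
HTFILE=/tmp/htpasswd.example   # placeholder; the real file is /etc/origin/htpasswd
USER=developer                 # placeholder user
PASS=changeme                  # placeholder password
printf '%s:%s\n' "$USER" "$(openssl passwd -apr1 "$PASS")" >> "$HTFILE"
```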

 * Grant the cluster-admin role so the admin user has access to all projects in the cluster:

oc adm policy add-cluster-role-to-user cluster-admin admin

 * The WebUI should be available at:

https://openshift01:8443

= Verification =


 * On Master:

oc status
oc get all

oc get nodes

NAME         STATUS    ROLES     AGE       VERSION
openshift01  Ready     master    18m       v1.9.1+a0ce1bc657
openshift02  Ready     compute   18m       v1.9.1+a0ce1bc657
openshift03  Ready     compute   18m       v1.9.1+a0ce1bc657

oc get pods

NAME                      READY     STATUS    RESTARTS   AGE
docker-registry-1-m8hgb   1/1       Running   0          13m
registry-console-1-zxpqh  1/1       Running   0          9m
router-1-vwl99            1/1       Running   0          13m

oc get -o wide pods

NAME                      READY     STATUS    RESTARTS   AGE       IP            NODE
avise-defaultgroup-8b77n  1/1       Running   0          12h       10.129.0.37   openshift01
avise-defaultgroup-bk48l  1/1       Running   0          12h       10.130.0.23   openshift03
avise-defaultgroup-r8hfx  1/1       Running   0          12h       10.128.0.23   openshift02
avitest-1-7ftb8           1/1       Running   2          18h       10.130.0.21   openshift03
avitest-1-fc9pv           1/1       Running   2          18h       10.128.0.22   openshift02
docker-registry-1-m8hgb   1/1       Running   2          40d       10.129.0.32   openshift01
registry-console-1-zxpqh  1/1       Running   2          40d       10.128.0.20   openshift02
router-1-vwl99            1/1       Running   2          40d       10.70.41.20   openshift01

= Troubleshooting =

 * Run the built-in diagnostics:

oc adm diagnostics

 * Check the firewall rules:

iptables -L

= Uninstall =

 * Run the uninstall playbook against the same inventory:

ansible-playbook -i /etc/ansible/hosts /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml

= Disable Kube-Proxy =

