OpenShift
System Requirements
- For this cluster setup you will need the following VM or hardware requirements:
3x CentOS 7.5 VMs or hosts, each with:
  8 vCPUs
  12-16 GB RAM
  80 GB HDD space (custom partition: 8 GB swap, 1 GB /boot, rest for the / partition)
  1 NIC interface with a static address and gateway configured
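- To confirm a host meets these requirements, you can optionally run a few standard checks (not part of the original guide):
nproc            # vCPU count
free -h          # installed RAM
lsblk            # disk size and partition layout
ip addr show     # NIC and static address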
Prerequisites
- Perform these steps on all 3 hosts
- Install the following pre-requirements:
yum -y install yum yum-utils wget git net-tools bind-utils iptables-services bridge-utils NetworkManager
yum -y install ansible pyOpenSSL python python-yaml python-dbus python-cryptography python-lxml docker tesseract nano
yum -y update
- Latest Ansible Version:
yum install epel-release
yum install ansible
- Setup Docker pre-requirements:
systemctl enable docker
systemctl start docker
- Add the IP address, hostname and FQDN of each node to /etc/hosts on each host:
10.1.10.27 opshmaster Openshift01
10.1.10.28 opshnode1 Openshift02
10.1.10.29 opshnode2 Openshift03
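- Optionally verify that each name resolves to the expected address (a quick check, assuming the entries above):
for host in Openshift01 Openshift02 Openshift03; do
  getent hosts $host
done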
- Run ssh-keygen, accept the default location of the key and do not set a password for the key.
- Run the following bash script:
for host in Openshift01 Openshift02 Openshift03; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub $host
done
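- Optionally confirm that key-based SSH now works to every host before continuing; each command below should print the remote hostname without prompting for a password (a minimal check, not part of the original guide):
for host in Openshift01 Openshift02 Openshift03; do
  ssh -o BatchMode=yes $host hostname
done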
- Restart all 3 hosts
Installing
OpenShift 3.9
- Version Details
Ansible 2.6.1
Python 2.7.5
OpenShift 3.9
CentOS 7.5.1804
- Create ansible hosts file
nano /etc/ansible/hosts
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant
openshift_disable_check=memory_availability
#openshift_release=v3.7

# host group for masters
[masters]
Openshift01 openshift_hostname=Openshift01 openshift_public_hostname=Openshift01

[etcd]
Openshift01

# host group for nodes, includes region info
[nodes]
Openshift01 openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_hostname=Openshift01 openshift_public_hostname=Openshift01
Openshift02 openshift_node_labels="{'region': 'primary', 'zone': 'east'}" openshift_hostname=Openshift02 openshift_public_hostname=Openshift02
Openshift03 openshift_node_labels="{'region': 'primary', 'zone': 'west'}" openshift_hostname=Openshift03 openshift_public_hostname=Openshift03
- If parsing of this file fails, check that the single and double quote characters were not converted to typographic (curly) quotes when copying and pasting.
- Download the Files:
git clone --single-branch -b release-3.9 https://github.com/openshift/openshift-ansible
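- Before running the playbooks, you can optionally confirm that Ansible can reach every host in the inventory (a minimal check, using the OSEv3 group defined in the hosts file above):
ansible -i /etc/ansible/hosts OSEv3 -m ping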
- Install Prerequisites:
ansible-playbook -i /etc/ansible/hosts openshift-ansible/playbooks/prerequisites.yml
- Install OpenShift:
ansible-playbook -i /etc/ansible/hosts openshift-ansible/playbooks/deploy_cluster.yml
OpenShift 3.10
Source: server-world.info
- Version Details
Ansible 2.7.4
Python 2.7.5
OpenShift 3.10
CentOS 7.6.1810
- Install Packages
yum -y install centos-release-openshift-origin310 epel-release docker git pyOpenSSL
yum -y install openshift-ansible
- Fix python-docker package issue:
nano /usr/share/ansible/openshift-ansible/playbooks/init/base_packages.yml
Change the line:
"{{ 'python3-docker' if ansible_distribution == 'Fedora' else 'python-docker' }}"
to
"{{ 'python3-docker' if ansible_distribution == 'Fedora' else 'python-docker-py' }}"
- Create Ansible hosts file:
sudo vi /etc/ansible/hosts
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
# admin user created in previous section
ansible_ssh_user=root
ansible_become=true
openshift_deployment_type=origin

# use HTPasswd for authentication
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# define default sub-domain for Master node
#openshift_master_default_subdomain=apps.srv.world

# allow unencrypted connection within cluster
openshift_docker_insecure_registries=172.30.0.0/16

[masters]
Openshift01 openshift_schedulable=true containerized=false

[etcd]
Openshift01

[nodes]
# defined values for [openshift_node_group_name] in the file below
# [/usr/share/ansible/openshift-ansible/roles/openshift_facts/defaults/main.yml]
Openshift01 openshift_node_group_name='node-config-master-infra'
Openshift02 openshift_node_group_name='node-config-compute'
Openshift03 openshift_node_group_name='node-config-compute'

# if you'd like to separate Master node feature and Infra node feature, set like follows
# ctrl.srv.world openshift_node_group_name='node-config-master'
# Openshift02 openshift_node_group_name='node-config-compute'
# Openshift03 openshift_node_group_name='node-config-infra'
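- Optionally verify that this inventory parses cleanly before running the playbooks (a quick check, not part of the original guide; the ansible-inventory command is included with the Ansible version listed above):
ansible-inventory -i /etc/ansible/hosts --list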
- Install Prerequisites:
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
- Install OpenShift:
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
- You should see output similar to the following once done:
PLAY RECAP *****************************************************************************************************
Openshift01                : ok=619  changed=251  unreachable=0  failed=0
Openshift02                : ok=134  changed=53   unreachable=0  failed=0
Openshift03                : ok=134  changed=53   unreachable=0  failed=0
localhost                  : ok=12   changed=0    unreachable=0  failed=0

INSTALLER STATUS ***********************************************************************************************
Initialization              : Complete (0:00:48)
Health Check                : Complete (0:01:23)
etcd Install                : Complete (0:01:37)
Master Install              : Complete (0:05:13)
Master Additional Install   : Complete (0:00:49)
Node Install                : Complete (0:07:34)
Hosted Install              : Complete (0:03:42)
Web Console Install         : Complete (0:01:04)
Service Catalog Install     : Complete (0:07:04)
WebUI
Note: authentication is currently failing with the procedure below and needs further troubleshooting.
- For the initial installation, simply use htpasswd for basic authentication.
- Seed it with a couple of sample users so you can log in to the OpenShift console and validate the installation:
htpasswd -c /etc/origin/htpasswd admin
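- To seed additional users (for example, a hypothetical developer account), run htpasswd without the -c flag so the existing file is not overwritten:
# "developer" is an example user name; -c is only needed when creating the file
htpasswd /etc/origin/htpasswd developer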
- Grant the cluster-admin cluster role to the admin user so it has access to all projects in the cluster:
oc adm policy add-cluster-role-to-user cluster-admin admin
- WebUI should be available at:
https://openshift01:8443
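- You can also verify the credentials from the command line; a minimal check, assuming the admin user created above and the master hostname used in this guide:
oc login https://openshift01:8443 -u admin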
Verification
- On Master:
oc status
oc get all
oc get nodes
NAME          STATUS    ROLES     AGE       VERSION
openshift01   Ready     master    18m       v1.9.1+a0ce1bc657
openshift02   Ready     compute   18m       v1.9.1+a0ce1bc657
openshift03   Ready     compute   18m       v1.9.1+a0ce1bc657
oc get pods
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-1-m8hgb    1/1       Running   0          13m
registry-console-1-zxpqh   1/1       Running   0          9m
router-1-vwl99             1/1       Running   0          13m
oc get -o wide pods
NAME                       READY     STATUS    RESTARTS   AGE       IP            NODE
avise-defaultgroup-8b77n   1/1       Running   0          12h       10.129.0.37   openshift01
avise-defaultgroup-bk48l   1/1       Running   0          12h       10.130.0.23   openshift03
avise-defaultgroup-r8hfx   1/1       Running   0          12h       10.128.0.23   openshift02
avitest-1-7ftb8            1/1       Running   2          18h       10.130.0.21   openshift03
avitest-1-fc9pv            1/1       Running   2          18h       10.128.0.22   openshift02
docker-registry-1-m8hgb    1/1       Running   2          40d       10.129.0.32   openshift01
registry-console-1-zxpqh   1/1       Running   2          40d       10.128.0.20   openshift02
router-1-vwl99             1/1       Running   2          40d       10.70.41.20   openshift01
Troubleshooting
Diagnostics
oc adm diagnostics
Firewall Rules
iptables -L
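- If the web console is unreachable, also check that the master is listening on port 8443 (a quick check, not part of the original guide):
ss -tlnp | grep 8443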
Uninstall
ansible-playbook -i /etc/ansible/hosts /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
Disable Kube-Proxy