OpenShift
 
* For this cluster setup you will need the following VM or Hardware Requirements:
3x CentOS 7.5 VMs or hosts
8 vCPUs
12-16 GB RAM
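The sizing above can be sanity-checked before installing. A minimal sketch using tools present on any CentOS 7 host, with the thresholds taken from the list above:

```shell
# Check this host against the suggested minimums (8 vCPUs, 12 GB RAM).
cpus=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
echo "vCPUs: ${cpus}, RAM: $((mem_kb / 1024 / 1024)) GB"
[ "$cpus" -ge 8 ] || echo "WARNING: fewer than 8 vCPUs"
[ "$mem_kb" -ge $((12 * 1024 * 1024)) ] || echo "WARNING: less than 12 GB RAM"
```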
yum -y install ansible pyOpenSSL python python-yaml python-dbus python-cryptography python-lxml docker tesseract nano
yum -y update
 
* To get the latest Ansible version, install it from EPEL instead:
yum install epel-release
yum install ansible
 
* Setup Docker pre-requirements:
* Restart all 3 hosts
 
= Installing =
 
== Openshift 3.9 ==
 
*Version Details
Ansible 2.6.1
Python 2.7.5
OpenShift 3.9
CentOS 7.5.1804
 
 
*Create the Ansible hosts file:
nano /etc/ansible/hosts
 
<pre>
# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=origin
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
#openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
os_sdn_network_plugin_name=redhat/openshift-ovs-multitenant
openshift_disable_check=memory_availability
#openshift_release=v3.7
 
# host group for masters
[masters]
Openshift01 openshift_hostname=Openshift01 openshift_public_hostname=Openshift01
 
[etcd]
Openshift01
 
# host group for nodes, includes region info
[nodes]
Openshift01 openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_hostname=Openshift01 openshift_public_hostname=Openshift01
Openshift02 openshift_node_labels="{'region': 'primary', 'zone': 'east'}" openshift_hostname=Openshift02 openshift_public_hostname=Openshift02
Openshift03 openshift_node_labels="{'region': 'primary', 'zone': 'west'}" openshift_hostname=Openshift03 openshift_public_hostname=Openshift03
</pre>
 
* If parsing of this file fails, replace any curly ("smart") quote characters with straight ASCII single and double quotes.
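The usual cause is curly quotes pasted in from a browser. A minimal sed clean-up, shown here on a throwaway temp file so nothing is overwritten; run the same substitutions with sed -i against /etc/ansible/hosts:

```shell
# Demo: normalize curly quotes to straight ASCII quotes with sed.
INV=$(mktemp)
printf "openshift_master_identity_providers=[{‘name’: ‘htpasswd_auth’}]\n" > "$INV"
sed "s/‘/'/g; s/’/'/g; s/“/\"/g; s/”/\"/g" "$INV" > "${INV}.fixed"
cat "${INV}.fixed"
```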
=== OpenShift Installation ===
 
 
*Download the Files:
git clone --single-branch -b release-3.9 https://github.com/openshift/openshift-ansible
cd openshift-ansible/
 
*Install Prerequisites:
ansible-playbook -i /etc/ansible/hosts playbooks/prerequisites.yml
 
*Install OpenShift:
ansible-playbook -i /etc/ansible/hosts playbooks/deploy_cluster.yml
 
== Openshift 3.10 ==
 
Source: [https://www.server-world.info/en/note?os=CentOS_7&p=openshift310&f=1 server-world.info]
 
*Version Details
Ansible 2.7.4
Python 2.7.5
OpenShift 3.10
CentOS 7.6.1810
 
*Install Packages
yum -y install centos-release-openshift-origin310 epel-release docker git pyOpenSSL
yum -y install openshift-ansible
 
*Fix python-docker package issue:
nano /usr/share/ansible/openshift-ansible/playbooks/init/base_packages.yml
 
Change the line:
<pre>"{{ 'python3-docker' if ansible_distribution == 'Fedora' else 'python-docker' }}"</pre>
to
<pre>"{{ 'python3-docker' if ansible_distribution == 'Fedora' else 'python-docker-py' }}"</pre>
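The same edit can be scripted instead of done in an editor. A sketch with sed, demonstrated on a throwaway copy of the line; apply the same expression with sed -i to base_packages.yml:

```shell
# Demo: swap 'python-docker' for 'python-docker-py' with sed.
# Against the real file:
#   sed -i "s/else 'python-docker' /else 'python-docker-py' /" \
#     /usr/share/ansible/openshift-ansible/playbooks/init/base_packages.yml
F=$(mktemp)
echo "\"{{ 'python3-docker' if ansible_distribution == 'Fedora' else 'python-docker' }}\"" > "$F"
sed "s/else 'python-docker' /else 'python-docker-py' /" "$F" > "${F}.new"
cat "${F}.new"
```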
 
Create the Ansible hosts file:
sudo vi /etc/ansible/hosts
<pre>
[OSEv3:children]
masters
nodes
etcd
 
[OSEv3:vars]
# admin user created in previous section
ansible_ssh_user=root
ansible_become=true
openshift_deployment_type=origin
 
# use HTPasswd for authentication
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# define default sub-domain for Master node
#openshift_master_default_subdomain=apps.srv.world
# allow unencrypted connection within cluster
openshift_docker_insecure_registries=172.30.0.0/16
 
[masters]
Openshift01 openshift_schedulable=true containerized=false
 
[etcd]
Openshift01
 
[nodes]
# defined values for [openshift_node_group_name] in the file below
# [/usr/share/ansible/openshift-ansible/roles/openshift_facts/defaults/main.yml]
Openshift01 openshift_node_group_name='node-config-master-infra'
Openshift02 openshift_node_group_name='node-config-compute'
Openshift03 openshift_node_group_name='node-config-compute'
 
# if you'd like to separate Master node feature and Infra node feature, set like follows
# ctrl.srv.world openshift_node_group_name='node-config-master'
# Openshift02 openshift_node_group_name='node-config-compute'
# Openshift03 openshift_node_group_name='node-config-infra'
</pre>
 
Install Prerequisites:
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
 
Install OpenShift:
ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
 
 
*You should see similar output once done:
<pre>
PLAY RECAP *****************************************************************************************************
Openshift01                : ok=619  changed=251  unreachable=0  failed=0
Openshift02                : ok=134  changed=53   unreachable=0  failed=0
Openshift03                : ok=134  changed=53   unreachable=0  failed=0
localhost                  : ok=12   changed=0    unreachable=0  failed=0
 
INSTALLER STATUS ***********************************************************************************************
Initialization : Complete (0:00:48)
Health Check : Complete (0:01:23)
etcd Install : Complete (0:01:37)
Master Install : Complete (0:05:13)
Master Additional Install : Complete (0:00:49)
Node Install : Complete (0:07:34)
Hosted Install : Complete (0:03:42)
Web Console Install : Complete (0:01:04)
Service Catalog Install : Complete (0:07:04)
</pre>
 
= WebUI =
 
{{notice|Authentication fails in the procedure below; this still needs troubleshooting.}}
 
* For an initial installation, simply use htpasswd for authentication.
* Seed it with a sample admin user so you can log in to the OpenShift Console and validate the installation:
htpasswd -c /etc/origin/htpasswd admin
 
* Grant the cluster-admin role to the admin user so it can access all projects in the cluster:
oc adm policy add-cluster-role-to-user cluster-admin admin
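The htpasswd file is plain user:hash lines, so further users can be added with htpasswd (without -c, which would overwrite the file) or by appending a generated entry. A sketch with a hypothetical developer user, using openssl's Apache-MD5 (apr1) format, which HTPasswdPasswordIdentityProvider accepts:

```shell
# Hypothetical example: build an htpasswd entry for a "developer" user.
# Append the line to /etc/origin/htpasswd, or equivalently run:
#   htpasswd /etc/origin/htpasswd developer
entry="developer:$(openssl passwd -apr1 'S3cretPass')"
echo "$entry"
```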
 
* WebUI should be available at:
https://openshift01:8443
 
= Verification =
 
* On Master:
 
oc status
oc get all
 
oc get nodes
 
<pre>
NAME          STATUS    ROLES     AGE       VERSION
openshift01   Ready     master    18m       v1.9.1+a0ce1bc657
openshift02   Ready     compute   18m       v1.9.1+a0ce1bc657
openshift03   Ready     compute   18m       v1.9.1+a0ce1bc657
</pre>
 
oc get pods
 
<pre>
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-1-m8hgb    1/1       Running   0          13m
registry-console-1-zxpqh   1/1       Running   0          9m
router-1-vwl99             1/1       Running   0          13m
</pre>
 
 
oc get -o wide pods
<pre>
NAME                       READY     STATUS    RESTARTS   AGE       IP            NODE
avise-defaultgroup-8b77n   1/1       Running   0          12h       10.129.0.37   openshift01
avise-defaultgroup-bk48l   1/1       Running   0          12h       10.130.0.23   openshift03
avise-defaultgroup-r8hfx   1/1       Running   0          12h       10.128.0.23   openshift02
avitest-1-7ftb8            1/1       Running   2          18h       10.130.0.21   openshift03
avitest-1-fc9pv            1/1       Running   2          18h       10.128.0.22   openshift02
docker-registry-1-m8hgb    1/1       Running   2          40d       10.129.0.32   openshift01
registry-console-1-zxpqh   1/1       Running   2          40d       10.128.0.20   openshift02
router-1-vwl99             1/1       Running   2          40d       10.70.41.20   openshift01
</pre>
 
= Troubleshooting =
 
* Diagnostics:
oc adm diagnostics

* Firewall rules:
iptables -L
 
= Uninstall =
 
ansible-playbook -i /etc/ansible/hosts /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml
 
= Disable Kube-Proxy =