AVI

= Kubernetes Integration =
Source: avinetworks.com

Create a service account:

kubectl create serviceaccount avi -n default

Create a Cluster Role for deploying Avi Service Engines as a pod:

nano clusterrole.json

{
  "apiVersion": "rbac.authorization.k8s.io/v1beta1",
  "kind": "ClusterRole",
  "metadata": {
    "name": "avirole"
  },
  "rules": [
    {
      "apiGroups": [ "" ],
      "resources": [ "*" ],
      "verbs": [ "get", "list", "watch" ]
    },
    {
      "apiGroups": [ "" ],
      "resources": [ "pods", "replicationcontrollers" ],
      "verbs": [ "get", "list", "watch", "create", "delete", "update" ]
    },
    {
      "apiGroups": [ "" ],
      "resources": [ "secrets" ],
      "verbs": [ "get", "list", "watch", "create", "delete", "update" ]
    },
    {
      "apiGroups": [ "extensions" ],
      "resources": [ "daemonsets", "ingresses" ],
      "verbs": [ "create", "delete", "get", "list", "update", "watch" ]
    }
  ]
}

kubectl create -f clusterrole.json

Create a cluster role binding:

nano clusterbinding.json

{
  "apiVersion": "rbac.authorization.k8s.io/v1beta1",
  "kind": "ClusterRoleBinding",
  "metadata": {
    "name": "avirolebinding",
    "namespace": "default"
  },
  "roleRef": {
    "apiGroup": "rbac.authorization.k8s.io",
    "kind": "ClusterRole",
    "name": "avirole"
  },
  "subjects": [
    {
      "kind": "ServiceAccount",
      "name": "avi",
      "namespace": "default"
    }
  ]
}

kubectl create -f clusterbinding.json

Extract the token for use in the Avi cloud configuration:

kubectl describe serviceaccount avi -n default
kubectl describe secret avi-token-esdf0 -n default
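Note that "kubectl describe secret" prints the token already decoded, but if you script the extraction with "kubectl get -o jsonpath" the value comes back base64-encoded and must be decoded first. A minimal sketch, using a dummy value so it runs anywhere (the secret name avi-token-esdf0 is the example from above and will differ per cluster):

```shell
# The real token lives base64-encoded in the secret's data.token field.
# Round-trip a dummy value to show the decode step:
TOKEN_B64=$(printf 'my-sa-token' | base64)
printf '%s' "$TOKEN_B64" | base64 --decode   # -> my-sa-token

# Against the cluster, pull and decode the token in one step:
# kubectl get secret avi-token-esdf0 -n default \
#     -o jsonpath='{.data.token}' | base64 --decode
```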

On the AVI Controller
Enter the master API server address and the token in the AVI portal: https://10.1.10.160:6443

Create the north-south IPAM and DNS profiles (NorthSouth-IPAM, NorthSouth_DNS) and the east-west IPAM and DNS profiles (EastWest-IPAM, EastWest-DNS).

Go to the Default tenant and check the virtual service status.

Either disable kube-proxy (the default load-balancing mechanism in Kubernetes) or give it an IP range that does not overlap with the east-west subnet.

= OpenShift =

Replace kube-proxy with Avi

 * If kube-proxy is enabled, it uses the service subnet (default is 172.30.0.0/16) to allocate east-west VIPs to services.
 * In this case, east-west VIPs handled by Vantage have to be configured to use other subnets.
 * Kube-proxy will be running, but unused, since services use Avi-allocated VIPs for east-west traffic, instead of OpenShift-allocated VIPs from the service network.


 * If a user wishes to use the service subnet to load balance traffic using Avi, kube-proxy must be disabled.
 * This mode offers operational advantages, since OpenShift’s API and CLI are in sync with the VIP used for the service.
 * That is to say, if someone runs “oc get service,” the VIPs shown in the output are the same VIPs on which Avi provides the service.

 * Disable kube-proxy

1) On the OpenShift master node, delete all user-created services:

oc delete all --all

2) To disable kube-proxy, perform the steps below on all nodes (masters and slaves):

 * Edit /etc/sysconfig/origin-node and change the OPTIONS variable to read as below:

OPTIONS="--loglevel=2 --disable proxy"

 * Save and exit the editor.
 * Restart the origin-node service:

systemctl restart origin-node.service

 * Configuration changes on Avi

1) Configure the east-west VIP network to use the service network (default 172.30.0.0/16).

2) In the cloud configuration, select the "Use Cluster IP of service as VIP for East-West" checkbox.

Configuring the Network
Configure a subnet and IP address pool for intra-cluster/east-west traffic and a subnet and IP address pool for external/north-south traffic.

= Using Ansible =

mkdir ~/virtualenv
mkdir avisdk
mkdir bin

cd ~/virtualenv/
cd avisdk/

pip install setuptools

export LC_ALL=C
virtualenv ~/virtualenv/avisdk/
pip install avisdk

cd bin
. activate

pip install avisdk==17.2.7b2
pip install avisdk
pip freeze

cd ~/virtualenv/avisdk/
cd bin
source activate
pip install ansible
cp /tmp/for_ansible_training.yml ~
nano ~/for_ansible_training.yml
ansible-playbook ~/for_ansible_training.yml
ansible-playbook ~/for_ansible_training.yml -vvvvv

ansible-galaxy install -f avinetworks.avisdk
ls ~/.ansible/roles/avinetworks.avisdk/library/

ansible-playbook ~/for_ansible_training.yml

= Ansible Playbook to Deploy VS =

nano avi-deploy.yml
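The contents of avi-deploy.yml are not captured in these notes. A minimal sketch, assuming the avi_pool and avi_virtualservice modules shipped with the avinetworks.avisdk role, and reusing the controller address, credentials, and object names from the pool_vs.py invocation elsewhere in these notes; the backend server IP 10.91.0.10 is a placeholder:

```yaml
---
- hosts: localhost
  connection: local
  roles:
    - avinetworks.avisdk
  tasks:
    - name: Create a server pool (10.91.0.10 is a placeholder backend)
      avi_pool:
        controller: 10.10.26.40
        username: admin
        password: Admin@123
        name: test_pool_aman
        lb_algorithm: LB_ALGORITHM_ROUND_ROBIN
        servers:
          - ip:
              addr: 10.91.0.10
              type: V4

    - name: Create a virtual service on port 80 that fronts the pool
      avi_virtualservice:
        controller: 10.10.26.40
        username: admin
        password: Admin@123
        name: test_aman
        services:
          - port: 80
        pool_ref: '/api/pool?name=test_pool_aman'
        vip:
          - ip_address:
              addr: 10.91.0.6
              type: V4
            vip_id: '1'
```

With --step (as in the deployment command below), ansible-playbook pauses before each task so the two objects can be created and verified one at a time.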

Available modules shipped with the role:

ls /etc/ansible/roles/avinetworks.avisdk/library/

Deployment:

ansible-playbook -v avi-deploy.yml --step

= Using AVI SDK =

nano pool_vs.py

python pool_vs.py -u admin -p Admin@123 -c 10.10.26.40 -t admin -vs test_aman -v 10.91.0.6 -po test_pool_aman
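The pool_vs.py script itself is not reproduced in these notes. A sketch of what it could look like, with flag names mirroring the invocation above; the ApiSession calls are an assumption based on the avisdk package installed earlier, and the backend server 10.91.0.10 is a placeholder:

```python
import argparse


def build_parser():
    """Flags mirror the pool_vs.py invocation shown above."""
    p = argparse.ArgumentParser(
        description="Create a pool and a virtual service on an Avi Controller")
    p.add_argument("-u", "--user", default="admin")
    p.add_argument("-p", "--password", required=True)
    p.add_argument("-c", "--controller", required=True)
    p.add_argument("-t", "--tenant", default="admin")
    p.add_argument("-vs", "--vs_name", required=True)
    p.add_argument("-v", "--vip", required=True)
    p.add_argument("-po", "--pool_name", required=True)
    return p


def create_pool_and_vs(args):
    # Imported lazily so the argument parser works without avisdk installed.
    from avi.sdk.avi_api import ApiSession

    api = ApiSession.get_session(args.controller, args.user, args.password,
                                 tenant=args.tenant)
    # Pool with one hypothetical backend server (10.91.0.10 is a placeholder).
    api.post("pool", data={
        "name": args.pool_name,
        "servers": [{"ip": {"addr": "10.91.0.10", "type": "V4"}}],
    })
    # Virtual service on port 80, referencing the pool by name.
    api.post("virtualservice", data={
        "name": args.vs_name,
        "services": [{"port": 80}],
        "pool_ref": "/api/pool?name=" + args.pool_name,
        "vip": [{"ip_address": {"addr": args.vip, "type": "V4"},
                 "vip_id": "1"}],
    })


# When saved as pool_vs.py, finish with:
#     if __name__ == "__main__":
#         create_pool_and_vs(build_parser().parse_args())
```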

