AVI

= Kubernetes Integration =

Source: avinetworks.com

Create a Service Account:

kubectl create serviceaccount avi -n default

Create a Cluster Role for deploying Avi Service Engines as a pod:

nano clusterrole.json

{
  "apiVersion": "rbac.authorization.k8s.io/v1beta1",
  "kind": "ClusterRole",
  "metadata": { "name": "avirole" },
  "rules": [
    {
      "apiGroups": [ "" ],
      "resources": [ "*" ],
      "verbs": [ "get", "list", "watch" ]
    },
    {
      "apiGroups": [ "" ],
      "resources": [ "pods", "replicationcontrollers" ],
      "verbs": [ "get", "list", "watch", "create", "delete", "update" ]
    },
    {
      "apiGroups": [ "" ],
      "resources": [ "secrets" ],
      "verbs": [ "get", "list", "watch", "create", "delete", "update" ]
    },
    {
      "apiGroups": [ "extensions" ],
      "resources": [ "daemonsets", "ingresses" ],
      "verbs": [ "create", "delete", "get", "list", "update", "watch" ]
    }
  ]
}

kubectl create -f clusterrole.json

Create a Cluster Role Binding:

nano clusterbinding.json

{
  "apiVersion": "rbac.authorization.k8s.io/v1beta1",
  "kind": "ClusterRoleBinding",
  "metadata": { "name": "avirolebinding", "namespace": "default" },
  "roleRef": {
    "apiGroup": "rbac.authorization.k8s.io",
    "kind": "ClusterRole",
    "name": "avirole"
  },
  "subjects": [
    { "kind": "ServiceAccount", "name": "avi", "namespace": "default" }
  ]
}

kubectl create -f clusterbinding.json

Extract the Token for Use in Avi Cloud Configuration:

kubectl describe serviceaccount avi -n default
kubectl describe secret avi-token-esdf0 -n default
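The secret name (avi-token-esdf0 above) varies per cluster. If you prefer to pull the decoded token in a single step, something like the following should work (standard kubectl jsonpath output; substitute your own secret name):

kubectl get secret avi-token-esdf0 -n default -o jsonpath='{.data.token}' | base64 -d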

On AVI Controller
Enter the master IP address and the token in the Avi portal: https://10.1.10.160:6443

Create NorthSouth-IPAM (should be routable) and NorthSouth_DNS profiles.

Create EastWest-IPAM and EastWest-DNS profiles.

Go to the Default tenant and check the VS status.

Either disable kube-proxy (the default load balancer in Kubernetes) or give it a subnet different from the east-west subnet.

= OpenShift =

Replace kube-proxy with Avi

 * If kube-proxy is enabled, it uses the service subnet (default is 172.30.0.0/16) to allocate east-west VIPs to services.
 * In this case, east-west VIPs handled by Vantage have to be configured to use other subnets.
 * Kube-proxy will be running, but unused, since services use Avi-allocated VIPs for east-west traffic, instead of OpenShift-allocated VIPs from the service network.


 * If a user wishes to use the service subnet to load balance traffic using Avi, kube-proxy must be disabled.
 * This mode offers operational advantages, since OpenShift’s API and CLI are in sync with the VIP used for the service.
 * That is to say, if someone does a “oc get service,” the VIPs shown in the output are the same VIPs on which Avi provides the service.

1) On the OpenShift master node, delete all user-created services:

oc delete all --all

2) Disable kube-proxy. To do so, perform the steps below on all nodes (masters and slaves):

 * Edit /etc/sysconfig/origin-node and change the OPTIONS variable to read:

OPTIONS="--loglevel=2 --disable proxy"

 * Save and exit the editor.
 * Restart the origin-node service:

systemctl restart origin-node.service

Configuration changes on Avi:

1) Configure the east-west VIP network to use the service network (default 172.30.0.0/16).

2) In the cloud configuration, select the "Use Cluster IP of service as VIP for East-West" checkbox.

Configuring the Network
1. Create the east-west network: 172.50.0.0/16, static pool: 172.50.0.10 - 172.50.0.250

If kube-proxy is enabled: you must use a subnet different from kube-proxy's cluster IP subnet. Choose a /16 CIDR from the IPv4 private address space (172.16.0.0/16 - 172.31.0.0/16, 10.0.0.0/16, or 192.168.0.0/24).

If kube-proxy is disabled: use the service network, 172.30.0.0/16.

2. Create the NorthSouth network (should be routable): 10.70.41.66/28, static pool: 10.70.41.65 - 10.70.41.78

3. Configure IPAM/DNS profiles:

Name: EastWest, Type: Avi Vantage DNS
Name: NorthSouth, Type: Avi Vantage DNS

OpenShift Service Account for Avi Authentication
1. Create a Service Account for Avi: nano sa.json

{  "apiVersion": "v1", "kind": "ServiceAccount", "metadata": { "name": "avi" } }

oc create -f sa.json

2. Create a Cluster Role: nano clusterrolesepod.json

{   "apiVersion": "v1", "kind": "ClusterRole", "metadata": { "name": "avirole" },   "rules": [ {           "apiGroups": [ ""           ],            "resources": [ "*"           ],            "verbs": [ "get", "list", "watch" ]       },        {            "apiGroups": [ ""           ],            "resources": [ "routes/status" ],           "verbs": [ "patch", "update" ]       },        {            "apiGroups": [ ""           ],            "resources": [ "pods", "secrets", "securitycontextconstraints", "serviceaccounts" ],           "verbs": [ "create", "delete", "get", "list", "update", "watch" ]       },        {            "apiGroups": [ "extensions" ],           "resources": [ "daemonsets", "ingresses" ],           "verbs": [ "create", "delete", "get", "list", "update", "watch" ]       },        {            "apiGroups": [ "apps" ],           "resources": [ "*"           ],            "verbs": [ "create", "delete", "get", "list", "update", "watch" ]       }    ] }

3. Add the created Cluster Role to the Service Account:

oc adm policy add-cluster-role-to-user avirole system:serviceaccount:default:avi

4. Extract the token for use in the Avi cloud configuration:

oc describe serviceaccount avi
oc describe secret avi-token-emof0
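On OpenShift 3.x the token can also be fetched directly with the service-account helper (the secret name avi-token-emof0 varies per cluster):

oc sa get-token avi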

Adding L7 North-South HTTPS Virtual Service
Deployment configuration: nano app-deployment.json

{
  "kind": "DeploymentConfig",
  "apiVersion": "v1",
  "metadata": { "name": "avitest" },
  "spec": {
    "template": {
      "metadata": { "labels": { "name": "avitest" } },
      "spec": {
        "containers": [
          {
            "name": "avitest",
            "image": "avinetworks/server-os",
            "ports": [
              { "name": "http", "containerPort": 8080, "protocol": "TCP" }
            ]
          }
        ]
      }
    },
    "replicas": 2,
    "selector": { "name": "avitest" }
  }
}

oc create -f app-deployment.json
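To confirm the pods came up before creating the service, a standard label query works (the label matches the selector above):

oc get pods -l name=avitest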

Service file to create a north-south service: nano app-service.json

{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "avisvc", "labels": { "svc": "avisvc" }, "annotations": { "avi_proxy": "{\"virtualservice\": {\"services\": [{\"port\": 443, \"enable_ssl\": true}], \"east_west_placement\": false, \"ssl_key_and_certificate_refs\": [\"avisvccert\"], \"ssl_profile_ref\": \"/api/sslprofile/?name=System-Standard\"}}" } },  "spec": { "ports": [{ "name": "https", "port": 443, "targetPort": "http" }], "selector": { "name": "avitest" } } }

oc create -f app-service.json
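For readability, the escaped avi_proxy annotation above expands to the following virtual service settings (HTTPS on port 443, north-south placement, the avisvccert certificate, and the System-Standard SSL profile):

{
  "virtualservice": {
    "services": [ { "port": 443, "enable_ssl": true } ],
    "east_west_placement": false,
    "ssl_key_and_certificate_refs": [ "avisvccert" ],
    "ssl_profile_ref": "/api/sslprofile/?name=System-Standard"
  }
}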

= OpenStack =

Lab topology: [Client]--[OS]--[WebServer]

 * Mgmt port group, ens160: 10.10.30.8/23
 * PG-747 port group, ens192: no IP address, promiscuous mode
 * Pool: 10.70.47.97 - 10.70.47.126

Authenticate into OpenStack:

source keystonerc_admin

Provider network (should be routable):

POOL_START=10.70.47.97
POOL_END=10.70.47.126
GW=10.70.47.1
CIDR=10.70.47.96/27

neutron net-create --shared --router:external --provider:physical_network provider --provider:network_type flat provider1

neutron subnet-create --name provider1-v4 --ip-version 4 \
  --allocation-pool start=$POOL_START,end=$POOL_END \
  --gateway $GW --dns-nameserver 8.8.4.4 provider1 \
  $CIDR
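On deployments where the neutron CLI is deprecated, roughly equivalent commands with the unified openstack client would be (a sketch; verify the flag names against your python-openstackclient version):

openstack network create --share --external --provider-network-type flat --provider-physical-network provider provider1
openstack subnet create --network provider1 --subnet-range $CIDR \
  --allocation-pool start=$POOL_START,end=$POOL_END \
  --gateway $GW --dns-nameserver 8.8.4.4 provider1-v4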

Create a router:

openstack router create adminrouter

Set it to connect to the external network:

routerid=`openstack router show adminrouter | grep " id " | awk '{print $4;}'`
extnetid=`openstack network show provider1 | grep " id " | awk '{print $4;}'`
neutron router-gateway-set $routerid $extnetid

Create a couple of networks in the admin tenant:

1. Mgmt network:

neutron net-create mgmt --shared
neutron subnet-create mgmt 10.0.1.0/24 --name mgmtsnw --dns-nameserver 10.10.0.100

Connect the router to it:

subnetid=`openstack subnet show mgmtsnw | grep " id " | awk '{print $4;}'`
neutron router-interface-add $routerid subnet=$subnetid

2. VIP IPv4 network:

neutron net-create vip4 --shared
neutron subnet-create vip4 10.0.2.0/24 --name vip4snw --dns-nameserver 10.10.0.100

Connect the router to it:

subnetid=`openstack subnet show vip4snw | grep " id " | awk '{print $4;}'`
neutron router-interface-add $routerid subnet=$subnetid

3. Data IPv4 network:

neutron net-create data4 --shared
neutron subnet-create data4 10.0.3.0/24 --name data4snw --dns-nameserver 10.10.0.100

Connect the router to it:

subnetid=`openstack subnet show data4snw | grep " id " | awk '{print $4;}'`
neutron router-interface-add $routerid subnet=$subnetid

Configuring Allowed Address Pair (AAP)

 * This setup is not required if the OpenStack VM is not deployed on top of another OpenStack cloud.
 * If the OpenStack VM is deployed on top of a vCenter cloud, set Promiscuous Mode, MAC Address Changes, and Forged Transmits to Accept on the port group used for the provider network mapping (e.g. the eth1 interface of the VM).
 * Not needed if DevStack is deployed in vCenter.

interface="ens192" cidr="10.70.47.96/27"

# Clear any OS_* environment variables
for e in `env | grep ^OS_ | cut -d'=' -f1`; do unset $e; done

# Find the MAC address of the data interface
my_mac=`ifconfig $interface | grep "HWaddr" | awk '{print $5;}'`
if [ -z "$my_mac" ]; then
  echo "Can't find mac!"
  exit
fi

Make sure the openstack-controller name resolves:

# Remove any existing entries, then re-add them
sed -i "/nameserver 10.10.0.100/d" /etc/resolv.conf
echo "nameserver 10.10.0.100" >> /etc/resolv.conf
sed -i "/search avi.local/d" /etc/resolv.conf
echo "search avi.local" >> /etc/resolv.conf

Figure out the port ID from the lab credentials and build the allowed-address-pairs list:

port_id=`neutron port-list | grep "$my_mac" | awk '{print $2;}'`
qrouters=`ip netns list | grep qrouter | cut -f 1 -d ' '`
aaplist=""
for qr in $qrouters; do
  # MAC of the qg (gateway) interface inside each qrouter namespace
  mac=`sudo ip netns exec $qr ifconfig | grep qg | awk '{print $5;}'`
  aaplist="$aaplist mac_address=$mac,ip_address=$cidr"
done

neutron port-update $port_id --allowed-address-pairs type=dict list=true $aaplist
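To confirm the pairs were applied, the port can be inspected (standard neutron CLI; the field is allowed_address_pairs):

neutron port-show $port_id | grep allowed_address_pairs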

= Using Ansible =

mkdir ~/virtualenv
mkdir avisdk
mkdir bin

cd ~/virtualenv/
cd avisdk/

pip install setuptools

export LC_ALL=C
virtualenv ~/virtualenv/avisdk/
pip install avisdk

cd bin
. activate

pip install avisdk==17.2.7b2
pip install avisdk
pip freeze

cd ~/virtualenv/avisdk/
cd bin
source activate
pip install ansible
cp /tmp/for_ansible_training.yml ~
nano ~/for_ansible_training.yml
ansible-playbook ~/for_ansible_training.yml
ansible-playbook ~/for_ansible_training.yml -vvvvv

ansible-galaxy install -f avinetworks.avisdk
ls ~/.ansible/roles/avinetworks.avisdk/library/

ansible-playbook ~/for_ansible_training.yml

= Ansible Playbook to Deploy VS =

nano avi-deploy.yml
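The playbook contents are not reproduced in these notes; a minimal sketch of what avi-deploy.yml might contain, using the avi_pool and avi_virtualservice modules shipped with the avinetworks.avisdk role, is shown below. The controller address, credentials, tenant, VIP and object names mirror the SDK example later in this document; the pool member IP is a placeholder, and the vip field layout follows the 17.x VirtualService object, so adjust for your controller version.

---
- hosts: localhost
  connection: local
  roles:
    - avinetworks.avisdk
  tasks:
    - name: Create a server pool
      avi_pool:
        controller: 10.10.26.40
        username: admin
        password: Admin@123
        tenant: admin
        name: test_pool_aman
        servers:
          - ip: { addr: 10.90.64.11, type: V4 }   # placeholder member

    - name: Create a virtual service that uses the pool
      avi_virtualservice:
        controller: 10.10.26.40
        username: admin
        password: Admin@123
        tenant: admin
        name: test_aman
        services:
          - port: 80
        pool_ref: '/api/pool?name=test_pool_aman'
        vip:
          - ip_address: { addr: 10.91.0.6, type: V4 }
            vip_id: '1'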

Available modules in the role: ls /etc/ansible/roles/avinetworks.avisdk/library/

Deployment: ansible-playbook -v avi-deploy.yml --step

= Using AVI SDK =

nano pool_vs.py

python pool_vs.py -u admin -p Admin@123 -c 10.10.26.40 -t admin -vs test_aman -v 10.91.0.6 -po test_pool_aman
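pool_vs.py itself is not reproduced in these notes; a minimal sketch of what such a script can do with the Avi SDK is shown below. ApiSession and the object shapes follow the public avisdk package; the back-end server IP is a placeholder, while the controller, credentials, tenant, VIP and names mirror the command line above.

# Minimal sketch: create a pool and a virtual service with the Avi SDK.
from avi.sdk.avi_api import ApiSession

# Controller, credentials and tenant mirror the example command line above.
api = ApiSession.get_session("10.10.26.40", "admin", "Admin@123", tenant="admin")

# Pool with a single (placeholder) back-end server.
pool = {
    "name": "test_pool_aman",
    "servers": [{"ip": {"addr": "10.90.64.11", "type": "V4"}, "enabled": True}],
}
pool_obj = api.post("pool", data=pool).json()

# Virtual service listening on port 80, pointing at the pool just created.
vs = {
    "name": "test_aman",
    "services": [{"port": 80}],
    "pool_ref": pool_obj["url"],
    "vip": [{"ip_address": {"addr": "10.91.0.6", "type": "V4"}, "vip_id": "1"}],
}
print(api.post("virtualservice", data=vs).json())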

