AVI

= Kubernetes Integration =

Source: avinetworks.com

Kubernetes Config

 * Create a Service Account:

kubectl create serviceaccount avi -n default


 * Create a Cluster Role for deploying Avi Service Engines as a pod:

nano clusterrole.json

{
  "apiVersion": "rbac.authorization.k8s.io/v1beta1",
  "kind": "ClusterRole",
  "metadata": { "name": "avirole" },
  "rules": [
    {
      "apiGroups": [ "" ],
      "resources": [ "*" ],
      "verbs": [ "get", "list", "watch" ]
    },
    {
      "apiGroups": [ "" ],
      "resources": [ "pods", "replicationcontrollers" ],
      "verbs": [ "get", "list", "watch", "create", "delete", "update" ]
    },
    {
      "apiGroups": [ "" ],
      "resources": [ "secrets" ],
      "verbs": [ "get", "list", "watch", "create", "delete", "update" ]
    },
    {
      "apiGroups": [ "extensions" ],
      "resources": [ "daemonsets", "ingresses" ],
      "verbs": [ "create", "delete", "get", "list", "update", "watch" ]
    }
  ]
}

 * Create the Role:

kubectl create -f clusterrole.json

 * Create the Cluster Role Binding:

nano clusterbinding.json

{
  "apiVersion": "rbac.authorization.k8s.io/v1beta1",
  "kind": "ClusterRoleBinding",
  "metadata": { "name": "avirolebinding", "namespace": "default" },
  "roleRef": {
    "apiGroup": "rbac.authorization.k8s.io",
    "kind": "ClusterRole",
    "name": "avirole"
  },
  "subjects": [
    {
      "kind": "ServiceAccount",
      "name": "avi",
      "namespace": "default"
    }
  ]
}

 * Apply the Cluster Role Binding:

kubectl create -f clusterbinding.json

 * Extract the Token for Use in the Avi Cloud Configuration:

kubectl describe serviceaccount avi -n default
kubectl describe secret avi-token-esdf0 -n default
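Note that `kubectl describe secret` prints the token already decoded, while `kubectl get secret avi-token-esdf0 -n default -o jsonpath='{.data.token}'` returns it base64-encoded. A minimal sketch of decoding the latter form before pasting it into the cloud configuration (the token value below is a placeholder, and `decode_sa_token` is an illustrative helper):

```python
import base64

def decode_sa_token(b64_token: str) -> str:
    """Decode the base64-encoded .data.token field of a service-account secret."""
    return base64.b64decode(b64_token).decode()

# Placeholder standing in for the real base64 value of .data.token
example = base64.b64encode(b"my-service-account-token").decode()
print(decode_sa_token(example))  # my-service-account-token
```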

AVI Controller Config
 * Enter the Master IP address & Token in the Cloud Config:

https://10.1.10.160:8443

 * Create IPAM Profiles with the below subnets:

NorthSouth-IPAM (should be routable)   10.52.201.0/24:  10.52.201.14 - 10.52.201.30
EastWest-IPAM                          172.50.0.0/16:   172.50.0.10  - 172.50.0.250
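As a sanity check, the static pools above should fall entirely inside their IPAM subnets; a small sketch using only the standard library (`pool_in_subnet` is an illustrative helper, not an Avi API):

```python
import ipaddress

def pool_in_subnet(cidr: str, start: str, end: str) -> bool:
    """Return True if the contiguous range start..end lies inside cidr."""
    net = ipaddress.ip_network(cidr)
    # A subnet is one contiguous block, so checking both endpoints suffices
    return ipaddress.ip_address(start) in net and ipaddress.ip_address(end) in net

# Pools from the IPAM profiles above
print(pool_in_subnet("10.52.201.0/24", "10.52.201.14", "10.52.201.30"))  # True
print(pool_in_subnet("172.50.0.0/16", "172.50.0.10", "172.50.0.250"))    # True
```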

 * Create DNS Profiles with the below domains:

NorthSouth_DNS    [avi]
EastWest-DNS      [avi]


 * Go to Tenant Default & Check VS status


 * Either disable kube-proxy (the default load balancer in Kubernetes) or give it a different subnet than the East-West subnet.

= Kubernetes VIP =

 * Edit the Deployment file:

nano deployment.yaml

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: avitest-deployment
  labels:
    app: avitest
spec:
  replicas: 2
  selector:
    matchLabels:
      app: avitest
  template:
    metadata:
      labels:
        app: avitest
    spec:
      containers:
      - name: avitest
        image: avinetworks/server-os
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP

 * Create the Deployment:

kubectl create -f deployment.yaml

 * Edit the Service file:

nano service.yaml

kind: Service
apiVersion: v1
metadata:
  name: avisvc
  labels:
    svc: avisvc
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: avitest

 * Create the Service:

kubectl create -f service.yaml

 * Edit the Route file:

nano route.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: avitest-route
spec:
  rules:
  - host: httptest
    http:
      paths:
      - path: /
        backend:
          serviceName: avisvc
          servicePort: 80

 * Create the Route:

kubectl create -f route.yaml


 * This will create a VIP in Avi in Tenant Default

 * Test reachability:

curl 10.52.201.15                      ==> Fails; does not hit the HTTP Request policy that forwards traffic to the Pool, and instead hits the 404 policy.
curl -H "Host: httptest" 10.52.201.15
http://httptest

 * Avi HTTP Request policies:

oshift-k8s-cloud-connector--httptest--/--
    Path begins with (/)
    Host Header equals 'httptest'
    Content Switch: Pool Group: httptest---aviroute-poolgroup-8080-tcp

host--path--drop--rule--httptest--[u'/']
    Path does not equal (/)
    Host Header equals 'httptest'
    Content Switch Status Code: 404

all-nomatch-host--drop--rule
    Host Header does not equal 'httptest'
    Content Switch Status Code: 404
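Taken together, the three policies implement host/path content switching: only a request whose Host header is `httptest` and whose path begins with `/` reaches the pool group; everything else gets a 404. A toy model of that decision logic (illustrative only, not Avi's implementation):

```python
def route_request(host: str, path: str) -> str:
    """Mimic the generated HTTP request policies for the httptest route."""
    if host == "httptest" and path.startswith("/"):
        # oshift-k8s-cloud-connector--httptest--/-- : content-switch to the pool group
        return "pool:httptest---aviroute-poolgroup-8080-tcp"
    # host--path--drop--rule and all-nomatch-host--drop--rule both return 404
    return "404"

print(route_request("httptest", "/"))       # matches -> pool group
print(route_request("10.52.201.15", "/"))   # wrong Host header -> 404
```

This is why the bare `curl 10.52.201.15` above fails: curl sends the IP as the Host header, so the no-match rule answers 404.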

= OpenShift =


 * OpenShift Cloud should be Default Cloud.
 * Routes are created in OpenShift directly.
 * Annotations are used to map objects to AVI objects.

Replace kube-proxy with Avi

 * If kube-proxy is enabled, it uses the service subnet (default is 172.30.0.0/16) to allocate east-west VIPs to services.
 * In this case, east-west VIPs handled by Vantage have to be configured to use other subnets.
 * Kube-proxy will be running, but unused, since services use Avi-allocated VIPs for east-west traffic, instead of OpenShift-allocated VIPs from the service network.


 * If a user wishes to use the service subnet to load balance traffic using Avi, kube-proxy must be disabled.
 * This mode offers operational advantages, since OpenShift’s API and CLI are in sync with the VIP used for the service.
 * That is to say, if someone runs “oc get service,” the VIPs shown in the output are the same VIPs on which Avi provides the service.

 * Disable kube-proxy

1) On the OpenShift Master node, delete all user-created services:

oc delete all --all

2) To disable kube-proxy, perform the below steps on all nodes (Masters and Slaves):

 * Edit /etc/sysconfig/origin-node and change the OPTIONS variable to read as below:

OPTIONS="--loglevel=2 --disable proxy"

 * Save and exit the editor.
 * Restart the origin-node service:

systemctl restart origin-node.service

 * Configuration changes on Avi

1) Configure the east-west VIP network to use the service network (default 172.30.0.0/16).

2) In the cloud configuration, select the Use Cluster IP of service as VIP for East-West checkbox.

Configuring the Network
1. Create the east-west network: 172.50.0.0/16
   Static Pool: 172.50.0.10 - 172.50.0.250

If kube-proxy is enabled: you must use a subnet different from kube-proxy's cluster-IP subnet. Choose a /16 CIDR from the IPv4 private address space (172.16.0.0/16 - 172.31.0.0/16, 10.0.0.0/16, or 192.168.0.0/24).

If kube-proxy is disabled: the service network 172.30.0.0/16 itself can be used.
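The overlap constraint can be checked mechanically; a sketch with the standard library, using the subnets from this lab:

```python
import ipaddress

kube_proxy_service_net = ipaddress.ip_network("172.30.0.0/16")  # OpenShift default service network
east_west_net = ipaddress.ip_network("172.50.0.0/16")           # subnet chosen for Avi east-west VIPs

# With kube-proxy enabled, the Avi east-west subnet must be disjoint from the service network
print(east_west_net.overlaps(kube_proxy_service_net))  # False -> safe to use
```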

2. Create the NorthSouth network (should be routable): 10.70.41.66/28
   Static Pool: 10.70.41.65 - 10.70.41.78

3. Configure IPAM/DNS profiles:

   Name: EastWest     Type: Avi Vantage DNS
   Name: NorthSouth   Type: Avi Vantage DNS

OpenShift Service Account for Avi Authentication
1. Create a Service Account for Avi:

nano sa.json

{
  "apiVersion": "v1",
  "kind": "ServiceAccount",
  "metadata": { "name": "avi" }
}

oc create -f sa.json

2. Create a Cluster Role:

nano clusterrolesepod.json

{
  "apiVersion": "v1",
  "kind": "ClusterRole",
  "metadata": { "name": "avirole" },
  "rules": [
    {
      "apiGroups": [ "" ],
      "resources": [ "*" ],
      "verbs": [ "get", "list", "watch" ]
    },
    {
      "apiGroups": [ "" ],
      "resources": [ "routes/status" ],
      "verbs": [ "patch", "update" ]
    },
    {
      "apiGroups": [ "" ],
      "resources": [ "pods", "secrets", "securitycontextconstraints", "serviceaccounts" ],
      "verbs": [ "create", "delete", "get", "list", "update", "watch" ]
    },
    {
      "apiGroups": [ "extensions" ],
      "resources": [ "daemonsets", "ingresses" ],
      "verbs": [ "create", "delete", "get", "list", "update", "watch" ]
    },
    {
      "apiGroups": [ "apps" ],
      "resources": [ "*" ],
      "verbs": [ "create", "delete", "get", "list", "update", "watch" ]
    }
  ]
}

oc create -f clusterrolesepod.json

3. Add the created Cluster Role to the Service Account:

oc adm policy add-cluster-role-to-user avirole system:serviceaccount:default:avi

4. Extract the Token for Use in the Avi Cloud Configuration:

oc describe serviceaccount avi
oc describe secret avi-token-emof0

Adding L7 North-South HTTPS Virtual Service
Deployment configuration:

nano app-deployment.json

{
  "kind": "DeploymentConfig",
  "apiVersion": "v1",
  "metadata": { "name": "avitest" },
  "spec": {
    "template": {
      "metadata": {
        "labels": { "name": "avitest" }
      },
      "spec": {
        "containers": [
          {
            "name": "avitest",
            "image": "avinetworks/server-os",
            "ports": [
              {
                "name": "http",
                "containerPort": 8080,
                "protocol": "TCP"
              }
            ]
          }
        ]
      }
    },
    "replicas": 2,
    "selector": { "name": "avitest" }
  }
}

oc create -f app-deployment.json

Service file to create a north-south service:

nano app-service.json

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "avisvc",
    "labels": { "svc": "avisvc" },
    "annotations": {
      "avi_proxy": "{\"virtualservice\": {\"services\": [{\"port\": 443, \"enable_ssl\": true}], \"east_west_placement\": false, \"ssl_key_and_certificate_refs\": [\"avisvccert\"], \"ssl_profile_ref\": \"/api/sslprofile/?name=System-Standard\"}}"
    }
  },
  "spec": {
    "ports": [
      { "name": "https", "port": 443, "targetPort": "http" }
    ],
    "selector": { "name": "avitest" }
  }
}

oc create -f app-service.json

= OpenStack =

Lab topology:

             Mgmt            PG-747 [Client]--[OS]--[WebServer]
Interface    ens160          ens192
Address      10.10.30.8/23   no IP address, promiscuous mode

Pool: 10.70.47.97 - 10.70.47.126

Specify the list of physical_network names with which flat, vlan, gre, etc. type networks can be created:

sudo nano /etc/neutron/plugins/ml2/ml2_conf.ini

Under [ml2_type_flat]:

flat_networks = *

Authenticate into OpenStack:

source keystonerc_admin

Provider network (should be routable):

POOL_START=10.70.47.97
POOL_END=10.70.47.126
GW=10.70.47.1
CIDR=10.70.47.96/27

neutron net-create --shared --router:external --provider:physical_network provider --provider:network_type flat provider1

neutron subnet-create --name provider1-v4 --ip-version 4 \
  --allocation-pool start=$POOL_START,end=$POOL_END \
  --gateway $GW --dns-nameserver 8.8.4.4 provider1 \
  $CIDR

Create a router:

openstack router create adminrouter

Set it to connect to the external network:

routerid=`openstack router show adminrouter | grep " id " | awk '{print $4;}'`
extnetid=`openstack network show provider1 | grep " id " | awk '{print $4;}'`
neutron router-gateway-set $routerid $extnetid

Create a couple of networks in the admin tenant:

1. Mgmt network:

neutron net-create mgmt --shared
neutron subnet-create mgmt 10.0.1.0/24 --name mgmtsnw --dns-nameserver 10.10.0.100

Connect the router to it:

subnetid=`openstack subnet show mgmtsnw | grep " id " | awk '{print $4;}'`
neutron router-interface-add $routerid subnet=$subnetid

2. VIP IPv4 network:

neutron net-create vip4 --shared
neutron subnet-create vip4 10.0.2.0/24 --name vip4snw --dns-nameserver 10.10.0.100

Connect the router to it:

subnetid=`openstack subnet show vip4snw | grep " id " | awk '{print $4;}'`
neutron router-interface-add $routerid subnet=$subnetid

3. Data IPv4 network:

neutron net-create data4 --shared
neutron subnet-create data4 10.0.3.0/24 --name data4snw --dns-nameserver 10.10.0.100

Connect the router to it:

subnetid=`openstack subnet show data4snw | grep " id " | awk '{print $4;}'`
neutron router-interface-add $routerid subnet=$subnetid

Configuring Allowed Address Pairs (AAP) if OpenStack is running on top of another OpenStack

 * This setup is not required if the OpenStack VM is not deployed on top of another OpenStack cloud.
 * Not needed if Devstack is deployed in vCenter.

interface="ens192"
cidr="10.70.47.96/27"

# Clear any stale OS_* environment variables, then find this interface's MAC
for e in `env | grep ^OS_ | cut -d'=' -f1`; do unset $e; done
my_mac=`ifconfig $interface | grep "HWaddr" | awk '{print $5;}'`
if [ -z "$my_mac" ]; then
    echo "Can't find mac!"
    exit
fi

Resolve the openstack-controller hostname (the delete-line form of sed is used because a pattern with \n cannot match across lines):

sed -i "/nameserver 10.10.0.100/d" /etc/resolv.conf
echo "nameserver 10.10.0.100" >> /etc/resolv.conf
sed -i "/search avi.local/d" /etc/resolv.conf
echo "search avi.local" >> /etc/resolv.conf

Figure out the port-id from the lab credentials and build the AAP list:

port_id=`neutron port-list | grep "$my_mac" | awk '{print $2;}'`
qrouters=`ip netns list | grep qrouter | cut -f 1 -d ' '`
aaplist=""
for qr in $qrouters; do
    mac=`sudo ip netns exec $qr ifconfig | grep qg | awk '{print $5;}'`
    aaplist="$aaplist mac_address=$mac,ip_address=$cidr"
done

neutron port-update $port_id --allowed-address-pairs type=dict list=true $aaplist

Instance not creating

 * Check the Nova API logs:

cat /var/log/nova/nova-api.log

 * Debug the Avi cloud connector (from the Avi shell):

debug controller cloud_connector
trace_level trace_level_debug
trace_level trace_level_debug_detail
save

 * tcpdump on the Avi Controller:

tcpdump -i eth0 -s 0 -w /tmp/openstack.pcap

POST request creating the Service Engine instance (from the packet capture):

{
  "server": {
    "name": "Avi-se-iqpug",
    "imageRef": "7d95c733-3279-47f3-bbc2-047e1d3cb3b7",
    "flavorRef": "2",
    "max_count": 1,
    "min_count": 1,
    "metadata": {
      "AVICNTRL": "10.10.30.47",
      "AVITENANT": "admin",
      "AVICOOKIE": "3355a182-83a1-44ee-9fa4-6a6cd65e4dfb",
      "AVIFLAVOR": "2",
      "AVICLUSTER_UUID": "cluster-c820efe7-2e24-4a14-9a7f-c2df55477599",
      "AVISG_UUID": "7a2f6b47-8b39-4836-a75f-b88aefe6a085",
      "AVICNTRLTENANT": "admin",
      "AVICLOUD_UUID": "OpenStack-Cloud:cloud-5f5b017e-dbf9-4ab2-a1cb-b5de3bed6fc2",
      "HYPERVISOR_TYPE": "kvm",
      "CNTRL_SSH_PORT": 5098,
      "AVIMGMTMAC": "fa:16:3e:2b:b7:24"
    },
    "networks": [ { "port": "d418b981-cfd8-4648-8a2a-0ffd7eb23e1b" } ],
    "security_groups": [ { "name": "9c1ae14a-bbe0-11e8-84dc-0242ac110002" } ],
    "config_drive": true
  }
}
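The metadata block is how the booted Service Engine learns which Controller to register with. A quick illustration of pulling those fields out of such a capture (body abbreviated to a few of the fields shown above):

```python
import json

# Abbreviated copy of the captured nova boot request body
capture = json.loads("""
{"server": {"name": "Avi-se-iqpug",
            "metadata": {"AVICNTRL": "10.10.30.47",
                         "AVITENANT": "admin",
                         "HYPERVISOR_TYPE": "kvm"}}}
""")

meta = capture["server"]["metadata"]
# AVICNTRL is the Controller IP the SE phones home to after boot
print(meta["AVICNTRL"], meta["AVITENANT"], meta["HYPERVISOR_TYPE"])  # 10.10.30.47 admin kvm
```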

= Using Ansible =

mkdir ~/virtualenv
mkdir avisdk
mkdir bin

cd ~/virtualenv/
cd avisdk/

pip install setuptools

export LC_ALL=C
virtualenv ~/virtualenv/avisdk/
pip install avisdk

cd bin
. activate

pip install avisdk==17.2.7b2
pip install avisdk
pip freeze

cd ~/virtualenv/avisdk/
cd bin
source activate
pip install ansible
cp /tmp/for_ansible_training.yml ~
nano ~/for_ansible_training.yml
ansible-playbook ~/for_ansible_training.yml
ansible-playbook ~/for_ansible_training.yml -vvvvv

ansible-galaxy -f install avinetworks.avisdk
ls ~/.ansible/roles/avinetworks.avisdk/library/

ansible-playbook ~/for_ansible_training.yml

= Ansible Playbook to Deploy VS =

nano avi-deploy.yml

Available modules:

ls /etc/ansible/roles/avinetworks.avisdk/library/

Deployment:

ansible-playbook -v avi-deploy.yml --step

= Using AVI SDK =

nano pool_vs.py

python pool_vs.py -u admin -p Admin@123 -c 10.10.26.40 -t admin -vs test_aman -v 10.91.0.6 -po test_pool_aman
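A rough sketch of what a script like pool_vs.py does with the Avi SDK: build a pool body and a virtual-service body, then POST them through an ApiSession. The dict builders are illustrative helpers (not part of avisdk), and the session calls are shown commented out since they need a live Controller:

```python
def build_pool(name, server_ip, port=80):
    """Minimal pool body for POST /api/pool (illustrative)."""
    return {
        "name": name,
        "servers": [{"ip": {"addr": server_ip, "type": "V4"}, "port": port}],
    }

def build_vs(name, vip, pool_name):
    """Minimal virtual-service body for POST /api/virtualservice (illustrative)."""
    return {
        "name": name,
        "vip": [{"ip_address": {"addr": vip, "type": "V4"}, "vip_id": "1"}],
        "services": [{"port": 80}],
        "pool_ref": "/api/pool?name=%s" % pool_name,
    }

# With a reachable Controller and the avisdk package installed, the calls
# would look roughly like (not executed here):
#   from avi.sdk.avi_api import ApiSession
#   api = ApiSession.get_session("10.10.26.40", "admin", "Admin@123", tenant="admin")
#   api.post("pool", data=build_pool("test_pool_aman", "<server-ip>"))
#   api.post("virtualservice", data=build_vs("test_aman", "10.91.0.6", "test_pool_aman"))
print(build_vs("test_aman", "10.91.0.6", "test_pool_aman")["pool_ref"])  # /api/pool?name=test_pool_aman
```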

