= AVI =
Source: [https://avinetworks.com/docs/17.2/kubernetes-service-account-for-avi-vantage-authentication/ avinetworks.com]
== Kubernetes Config ==
*Create a Service Account
kubectl create serviceaccount avi -n default
*Create a Cluster Role for deploying Avi Service Engines as a pod:
nano clusterrole.json
*Create the Role:
kubectl create -f clusterrole.json
*Create Cluster Role Binding
nano clusterbinding.json
<pre>
{
</pre>
*Apply Cluster Role Binding
kubectl create -f clusterbinding.json
*Extract the Token for Use in Avi Cloud Configuration
kubectl describe serviceaccount avi -n default
kubectl describe secret avi-token-esdf0 -n default
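The <code>kubectl describe</code> output prints the token already decoded; when scripting, one usually reads the Secret as JSON instead, where the token is base64-encoded. A minimal sketch of that decoding step (the secret name and token value here are illustrative, not from a real cluster):

```python
import base64
import json

# Hypothetical output of:
#   kubectl get secret avi-token-esdf0 -n default -o json
# trimmed to the fields used here; the token value is illustrative.
secret_json = json.dumps({
    "metadata": {"name": "avi-token-esdf0"},
    "type": "kubernetes.io/service-account-token",
    "data": {"token": base64.b64encode(b"eyJhbGciOiJSUzI1NiJ9.example").decode()},
})

def extract_token(secret: str) -> str:
    """Return the decoded bearer token from a service-account Secret."""
    data = json.loads(secret)["data"]["token"]
    return base64.b64decode(data).decode()

print(extract_token(secret_json))
```

The decoded string is what gets pasted into the Avi cloud configuration as the authentication token.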
== AVI Controller Config ==
*Enter the Master IP address & Token
https://10.1.10.160:6443 ==> Kubernetes
https://10.1.10.160:8443 ==> Openshift
*Create IPAM Profiles with below subnets:
NorthSouth-IPAM
10.52.201.0/24: 10.52.201.14 - 10.52.201.30
EastWest-IPAM
172.50.0.0/16: 172.50.0.10 - 172.50.0.250
*Create DNS Profiles with below domains:
NorthSouth_DNS [avi]
EastWest-DNS [avi]
*Either disable kube-proxy (the default load balancer in Kubernetes) or give it an IP outside the East-West subnet.
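As a sanity check, the static ranges above should fall inside their parent subnets, and the two IPAM subnets should not overlap (which also matters for the kube-proxy note above). A quick sketch using Python's ipaddress module, with the values from this page:

```python
import ipaddress

# IPAM profiles from this page: subnet, first static IP, last static IP.
profiles = {
    "NorthSouth-IPAM": ("10.52.201.0/24", "10.52.201.14", "10.52.201.30"),
    "EastWest-IPAM": ("172.50.0.0/16", "172.50.0.10", "172.50.0.250"),
}

def range_in_subnet(subnet, first, last):
    """True if both ends of the static range sit inside the subnet."""
    net = ipaddress.ip_network(subnet)
    return ipaddress.ip_address(first) in net and ipaddress.ip_address(last) in net

for name, (subnet, first, last) in profiles.items():
    assert range_in_subnet(subnet, first, last), name

# The two profiles must not overlap (and kube-proxy should stay out of East-West).
nets = [ipaddress.ip_network(p[0]) for p in profiles.values()]
assert not nets[0].overlaps(nets[1])
print("IPAM ranges OK")
```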
= Kubernetes VIP =
*Edit Deployment file:
nano deployment.yaml
<pre>
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: avitest-deployment
  labels:
    app: avitest
spec:
  replicas: 2
  selector:
    matchLabels:
      app: avitest
  template:
    metadata:
      labels:
        app: avitest
    spec:
      containers:
      - name: avitest
        image: avinetworks/server-os
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
</pre>
*Create the Deployment
kubectl create -f deployment.yaml
*Edit Service file:
nano service.yaml
<pre>
kind: Service
apiVersion: v1
metadata:
  name: avisvc
  labels:
    svc: avisvc
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: avitest
</pre>
*Create the Service
kubectl create -f service.yaml
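The Service reaches the Deployment's pods only if its selector matches the pod template labels and its targetPort matches a containerPort. A small sketch checking that wiring for the two manifests above, represented here as Python dicts:

```python
# Trimmed dict versions of deployment.yaml and service.yaml from this page.
deployment = {
    "spec": {
        "template": {
            "metadata": {"labels": {"app": "avitest"}},
            "spec": {"containers": [
                {"name": "avitest",
                 "ports": [{"name": "http", "containerPort": 8080}]},
            ]},
        }
    },
}
service = {
    "spec": {"selector": {"app": "avitest"},
             "ports": [{"name": "http", "port": 80, "targetPort": 8080}]},
}

# The Service selector must be a subset of the pod template labels.
pod_labels = deployment["spec"]["template"]["metadata"]["labels"]
selector = service["spec"]["selector"]
assert all(pod_labels.get(k) == v for k, v in selector.items())

# Every targetPort must exist as a containerPort on some container.
container_ports = {p["containerPort"]
                   for c in deployment["spec"]["template"]["spec"]["containers"]
                   for p in c.get("ports", [])}
assert all(p["targetPort"] in container_ports for p in service["spec"]["ports"])
print("service selects deployment pods; ports wired 80 -> 8080")
```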
*Edit Route file:
nano route.yaml
<pre>
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: avitest-route
spec:
  rules:
  - host: httptest
    http:
      paths:
      - path: /
        backend:
          serviceName: avisvc
          servicePort: 80
</pre>
*Create the Route
kubectl create -f route.yaml
*This will create a VIP in Avi in the Default tenant.
*Test reachability:
curl 10.52.201.15 ==> Fails: without a matching Host header the request does not hit the HTTP request policy that forwards traffic to the pool; it hits the 404 policy instead.
curl -H "HOST:httptest" 10.52.201.15
http://httptest
*Avi HTTP Request policies:
oshift-k8s-cloud-connector--httptest--/--
 Path begins with (/)
 Host Header equals 'httptest'
 Content Switch:
  Pool Group: httptest---aviroute-poolgroup-8080-tcp
host--path--drop--rule--httptest--[u'/']
 Path does not equal (/)
 Host Header equals 'httptest'
 Content Switch:
  Status Code: 404
all-nomatch-host--drop--rule
 Host Header does not equal 'httptest'
 Content Switch:
  Status Code: 404
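The three policies above boil down to: Host header <code>httptest</code> with path <code>/</code> goes to the pool group, anything else gets a 404. A simplified sketch of that decision (rule evaluation is done by Avi; this only illustrates the matching, using the pool group name from the listing above):

```python
# Simplified model of the connector-generated content-switching policies.
def route(host: str, path: str):
    """Return ('pool', name) or ('status', 404) like the three Avi rules."""
    if host == "httptest" and path == "/":
        # oshift-k8s-cloud-connector--httptest--/-- rule
        return ("pool", "httptest---aviroute-poolgroup-8080-tcp")
    # host--path--drop and all-nomatch-host rules both end in a 404.
    return ("status", 404)

assert route("httptest", "/") == ("pool", "httptest---aviroute-poolgroup-8080-tcp")
assert route("httptest", "/other") == ("status", 404)   # path does not equal /
assert route("10.52.201.15", "/") == ("status", 404)    # no matching Host header
print("routing model matches the curl results above")
```

This is why <code>curl 10.52.201.15</code> alone returns a 404 while <code>curl -H "HOST:httptest" 10.52.201.15</code> reaches the pool.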
= OpenShift =
*OpenShift Cloud should be Default Cloud.
*Routes are created in OpenShift directly.
*Annotations are used to map OpenShift objects to Avi objects.
== Replace kube-proxy with Avi ==
oc create -f clusterrolesepod.json
3. Add Created Cluster Role to Service Account
10.10.30.8/23, no IP address, promiscuous mode
Pool 10.70.47.97-10.70.47.126
Specify the list of physical_network names with which flat, VLAN, GRE, etc. networks can be created:
sudo nano /etc/neutron/plugins/ml2/ml2_conf.ini
Under [ml2_type_flat]:
flat_networks = *
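The edit can be verified with Python's configparser, since ml2_conf.ini is plain INI. A sketch (the <code>[ml2]</code> section shown is illustrative context; only <code>[ml2_type_flat]</code> comes from this page):

```python
import configparser

# Parse an ml2_conf.ini fragment and confirm flat provider networks are
# allowed on any physical network ('*'), as set above.
cfg = configparser.ConfigParser()
cfg.read_string("""
[ml2]
type_drivers = flat,vlan,gre,vxlan

[ml2_type_flat]
flat_networks = *
""")

assert cfg["ml2_type_flat"]["flat_networks"] == "*"
print(cfg["ml2_type_flat"]["flat_networks"])
```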
Authenticate to OpenStack
* Check Nova API logs:
cat /var/log/nova/nova-api.log
* Debug Avi Cloud connector
trace_level trace_level_debug_detail
save
* TCPDump on Avi Controller:
tcpdump -i eth0 -s 0 -w /tmp/openstack.pcap
POST
{"server": {"name": "Avi-se-iqpug", "imageRef": "7d95c733-3279-47f3-bbc2-047e1d3cb3b7", "flavorRef": "2", "max_count": 1, "min_count": 1, "metadata": {"AVICNTRL": "10.10.30.47", "AVITENANT": "admin", "AVICOOKIE": "3355a182-83a1-44ee-9fa4-6a6cd65e4dfb", "AVIFLAVOR": "2", "AVICLUSTER_UUID": "cluster-c820efe7-2e24-4a14-9a7f-c2df55477599", "AVISG_UUID": "7a2f6b47-8b39-4836-a75f-b88aefe6a085", "AVICNTRLTENANT": "admin", "AVICLOUD_UUID": "OpenStack-Cloud:cloud-5f5b017e-dbf9-4ab2-a1cb-b5de3bed6fc2", "HYPERVISOR_TYPE": "kvm", "CNTRL_SSH_PORT": "5098", "AVIMGMTMAC": "fa:16:3e:2b:b7:24"}, "networks": [{"port": "d418b981-cfd8-4648-8a2a-0ffd7eb23e1b"}], "security_groups": [{"name": "9c1ae14a-bbe0-11e8-84dc-0242ac110002"}], "config_drive": true}}
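The captured POST is Nova's server-boot call; its metadata tells the new Service Engine how to find and register with its Controller. A sketch pulling out the fields usually needed when debugging (values taken from the capture above, trimmed to a few keys):

```python
import json

# Trimmed version of the Nova boot request captured above.
capture = json.dumps({"server": {"name": "Avi-se-iqpug", "metadata": {
    "AVICNTRL": "10.10.30.47", "AVITENANT": "admin",
    "CNTRL_SSH_PORT": "5098", "HYPERVISOR_TYPE": "kvm"}}})

meta = json.loads(capture)["server"]["metadata"]
print(f"SE will register to controller {meta['AVICNTRL']} "
      f"(tenant {meta['AVITENANT']}, ssh port {meta['CNTRL_SSH_PORT']})")
```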
= Using Ansible =
*Use Virtual Environment:
mkdir ~/virtualenv
mkdir avisdk
. activate
*Install Avi SDK:
pip install avisdk==17.2.7b2
pip install avisdk
pip freeze
*Activate Virtual Environment:
cd ~/virtualenv/avisdk/
cd bin
source activate
pip install ansible
*Install Avi Roles:
ansible-galaxy install -f avinetworks.avisdk
ls ~/.ansible/roles/avinetworks.avisdk/library/
*Run Playbook:
cp /tmp/for_ansible_training.yml ~
nano ~/for_ansible_training.yml
ansible-playbook ~/for_ansible_training.yml
ansible-playbook ~/for_ansible_training.yml -vvvvv
= Ansible Playbook to Deploy VS =
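A virtual service can also be created directly through the Avi SDK installed above. A hedged sketch of a minimal virtualservice payload, similar to what an <code>avi_virtualservice</code> playbook task would send: the field names follow the 17.2 object model as best recalled and should be checked against the Controller's API documentation; the name, VIP, and pool reference are illustrative. The actual POST is left commented out since it needs a live Controller.

```python
# Hedged sketch: build a minimal virtualservice payload (field names are
# assumptions based on the Avi 17.2 object model; verify before use).
def build_vs_config(name, vip_ip, pool_ref, port=80):
    return {
        "name": name,
        "ip_address": {"addr": vip_ip, "type": "V4"},
        "services": [{"port": port}],
        "pool_ref": pool_ref,
    }

vs = build_vs_config("avitest-vs", "10.52.201.15", "/api/pool?name=avitest-pool")
assert vs["services"][0]["port"] == 80
print(vs["name"])

# Needs a live Controller; session usage per the Avi SDK:
# from avi.sdk.avi_api import ApiSession
# session = ApiSession.get_session("controller-ip", "admin", "password", tenant="admin")
# resp = session.post("virtualservice", data=vs)
```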