Source: [https://avinetworks.com/docs/17.2/kubernetes-service-account-for-avi-vantage-authentication/ avinetworks.com]
 
== Kubernetes Config ==
 
*Create a Service Account
kubectl create serviceaccount avi -n default
 
*Create a Cluster Role for deploying Avi Service Engines as a pod:
 
nano clusterrole.json
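The contents of clusterrole.json were not captured in this revision. A minimal sketch of such a role, assuming permissions similar to the OpenShift cluster role shown later on this page (the name avirole is a placeholder):

<pre>
{
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "ClusterRole",
    "metadata": {
        "name": "avirole"
    },
    "rules": [
        {
            "apiGroups": [""],
            "resources": ["*"],
            "verbs": ["get", "list", "watch"]
        },
        {
            "apiGroups": [""],
            "resources": ["pods", "secrets", "serviceaccounts"],
            "verbs": ["create", "delete", "get", "list", "update", "watch"]
        },
        {
            "apiGroups": ["extensions", "apps"],
            "resources": ["daemonsets", "deployments", "ingresses"],
            "verbs": ["create", "delete", "get", "list", "update", "watch"]
        }
    ]
}
</pre>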
 
*Create the Role:
kubectl create -f clusterrole.json
 
*Create Cluster Role Binding
nano clusterbinding.json
 
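clusterbinding.json was likewise not captured here. A minimal sketch, assuming it binds the cluster role above to the avi service account in the default namespace (avirolebinding and avirole are placeholder names):

<pre>
{
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "ClusterRoleBinding",
    "metadata": {
        "name": "avirolebinding"
    },
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "ClusterRole",
        "name": "avirole"
    },
    "subjects": [
        {
            "kind": "ServiceAccount",
            "name": "avi",
            "namespace": "default"
        }
    ]
}
</pre>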
 
*Apply Cluster Role Binding
kubectl create -f clusterbinding.json
 
*Extract the Token for Use in Avi Cloud Configuration
kubectl describe serviceaccount avi -n default
kubectl describe secret avi-token-esdf0 -n default
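The secret name (avi-token-esdf0 here) is auto-generated, so take it from the service account output above. To print the bearer token itself, something along these lines works:
kubectl get secret avi-token-esdf0 -n default -o jsonpath='{.data.token}' | base64 --decode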
 
 
== On AVI Controller Config ==
 
*Enter the master IP address & token in the Avi cloud configuration:
https://10.1.10.160:6443 ==> Kubernetes
https://10.1.10.160:8443 ==> OpenShift
 
*Create IPAM profiles with the subnets below:
NorthSouth-IPAM (should be routable)
10.52.201.0/24: 10.52.201.14 - 10.52.201.30
EastWest-IPAM
172.50.0.0/16: 172.50.0.10 - 172.50.0.250
 
*Create DNS profiles with the domains below:
NorthSouth-DNS [avi]
EastWest-DNS [avi]
 
*Go to Tenant '''Default''' & check the VS status.
 
*Either disable kube-proxy (the default load balancer in Kubernetes) or use an east-west VIP subnet different from the kube-proxy service subnet.
 
= Kubernetes VIP =
 
*Edit Deployment file:
nano deployment.yaml
 
<pre>
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: avitest-deployment
  labels:
    app: avitest
spec:
  replicas: 2
  selector:
    matchLabels:
      app: avitest
  template:
    metadata:
      labels:
        app: avitest
    spec:
      containers:
      - name: avitest
        image: avinetworks/server-os
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
</pre>
 
*Create the Deployment
kubectl create -f deployment.yaml
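To confirm that both replicas are running, the label from the deployment above can be used as a selector:
kubectl get pods -l app=avitest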
 
*Edit Service file:
nano service.yaml
 
<pre>
kind: Service
apiVersion: v1
metadata:
  name: avisvc
  labels:
    svc: avisvc
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  selector:
    app: avitest
</pre>
 
*Create the Service
kubectl create -f service.yaml
 
*Edit the Ingress (route) file:
nano route.yaml
 
<pre>
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: avitest-route
spec:
  rules:
  - host: httptest
    http:
      paths:
      - path: /
        backend:
          serviceName: avisvc
          servicePort: 80
</pre>
 
*Create the Route
kubectl create -f route.yaml
 
* This will create a VIP in Avi in Tenant Default
 
*Test reachability:
curl 10.52.201.15 ==> Fails: without the Host header it does not match the HTTP request policy that forwards traffic to the pool, so it hits the 404 policy instead.
curl -H "Host: httptest" 10.52.201.15
http://httptest
 
*Avi HTTP Request policies:
oshift-k8s-cloud-connector--httptest--/--
Path begins with (/)
Host Header equals 'httptest'
Content Switch:
Pool Group: httptest---aviroute-poolgroup-8080-tcp
 
host--path--drop--rule--httptest--[u'/']
Path does not equal (/)
Host Header equals 'httptest'
Content Switch
Status Code: 404
 
all-nomatch-host--drop--rule
Host Header does not equal 'httptest'
Content Switch
Status Code: 404
 
= OpenShift =
 
*OpenShift Cloud should be Default Cloud.
*Routes are created in OpenShift directly.
*Annotations are used to map objects to AVI objects.
 
 
== Replace kube-proxy with Avi ==
 
*If kube-proxy is enabled, it uses the service subnet (default is 172.30.0.0/16) to allocate east-west VIPs to services.
*In this case, east-west VIPs handled by Vantage have to be configured to use other subnets.
*Kube-proxy will be running, but unused, since services use Avi-allocated VIPs for east-west traffic, instead of OpenShift-allocated VIPs from the service network.
 
*If a user wishes to use the service subnet to load balance traffic using Avi, kube-proxy must be disabled.
*This mode offers operational advantages, since OpenShift’s API and CLI are in sync with the VIP used for the service.
*That is to say, if someone runs “oc get service”, the VIPs shown in the output are the same VIPs on which Avi provides the service.
 
;Disable kube-proxy
1) On the OpenShift master node, delete all user-created services:
oc delete all --all
 
2) To disable kube-proxy, perform the steps below on all nodes (masters and slaves):
 
*Edit /etc/sysconfig/origin-node and change the OPTIONS variable to read as below:
OPTIONS="--loglevel=2 --disable proxy"
 
*Save and exit the editor.
*Restart the origin-node service:
systemctl restart origin-node.service
 
;Configuration changes on Avi
1) Configure the east-west VIP network to use the service network (default 172.30.0.0/16).
 
2) In the cloud configuration, select the Use Cluster IP of service as VIP for East-West checkbox.
 
== Configuring the Network ==
 
1. Create east-west network:
172.50.0.0/16
Static Pool: 172.50.0.10 - 172.50.0.250
 
If kube-proxy is enabled:
You must use a subnet different from kube-proxy's cluster IP subnet.
Choose a /16 CIDR from the IPv4 private address space (172.16.0.0/16 - 172.31.0.0/16, 10.0.0.0/16, or 192.168.0.0/16).
 
If kube-proxy is disabled:
172.30.0.0/16
 
2. Create the NorthSouth network (should be routable):
10.70.41.66/28
Static Pool: 10.70.41.65 - 10.70.41.78
 
3. Configure the IPAM/DNS profiles:
Name: EastWest
Type: Avi Vantage DNS
 
Name: NorthSouth
Type: Avi Vantage DNS
 
== OpenShift Service Account for Avi Authentication ==
 
1. Create a Service Account for Avi:
nano sa.json
 
<pre>
{
    "apiVersion": "v1",
    "kind": "ServiceAccount",
    "metadata": {
        "name": "avi"
    }
}
</pre>
 
oc create -f sa.json
 
2. Create a Cluster Role
nano clusterrolesepod.json
 
<pre>
{
    "apiVersion": "v1",
    "kind": "ClusterRole",
    "metadata": {
        "name": "avirole"
    },
    "rules": [
        {
            "apiGroups": [""],
            "resources": ["*"],
            "verbs": ["get", "list", "watch"]
        },
        {
            "apiGroups": [""],
            "resources": ["routes/status"],
            "verbs": ["patch", "update"]
        },
        {
            "apiGroups": [""],
            "resources": ["pods", "secrets", "securitycontextconstraints", "serviceaccounts"],
            "verbs": ["create", "delete", "get", "list", "update", "watch"]
        },
        {
            "apiGroups": ["extensions"],
            "resources": ["daemonsets", "ingresses"],
            "verbs": ["create", "delete", "get", "list", "update", "watch"]
        },
        {
            "apiGroups": ["apps"],
            "resources": ["*"],
            "verbs": ["create", "delete", "get", "list", "update", "watch"]
        }
    ]
}
</pre>
 
oc create -f clusterrolesepod.json
 
3. Add Created Cluster Role to Service Account
oc adm policy add-cluster-role-to-user avirole system:serviceaccount:default:avi
 
4. Extract Token for Use in Avi Cloud Configuration
oc describe serviceaccount avi
oc describe secret avi-token-emof0
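As with Kubernetes, the secret name (avi-token-emof0 here) is auto-generated. On OpenShift the token can also be printed directly with:
oc sa get-token avi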
 
== Add OpenShift Cloud to Avi ==
{{UC}}
 
== Adding L7 North-South HTTPS Virtual Service ==
 
Deployment configuration:
nano app-deployment.json
<pre>
{
    "kind": "DeploymentConfig",
    "apiVersion": "v1",
    "metadata": {
        "name": "avitest"
    },
    "spec": {
        "template": {
            "metadata": {
                "labels": {
                    "name": "avitest"
                }
            },
            "spec": {
                "containers": [
                    {
                        "name": "avitest",
                        "image": "avinetworks/server-os",
                        "ports": [
                            {
                                "name": "http",
                                "containerPort": 8080,
                                "protocol": "TCP"
                            }
                        ]
                    }
                ]
            }
        },
        "replicas": 2,
        "selector": {
            "name": "avitest"
        }
    }
}
</pre>
 
oc create -f app-deployment.json
 
Service file to create a north-south service:
nano app-service.json
 
<pre>
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "avisvc",
        "labels": {
            "svc": "avisvc"
        },
        "annotations": {
            "avi_proxy": "{\"virtualservice\": {\"services\": [{\"port\": 443, \"enable_ssl\": true}], \"east_west_placement\": false, \"ssl_key_and_certificate_refs\": [\"avisvccert\"], \"ssl_profile_ref\": \"/api/sslprofile/?name=System-Standard\"}}"
        }
    },
    "spec": {
        "ports": [
            {
                "name": "https",
                "port": 443,
                "targetPort": "http"
            }
        ],
        "selector": {
            "name": "avitest"
        }
    }
}
</pre>
 
oc create -f app-service.json
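For readability, the escaped avi_proxy annotation in the service above decodes to the following JSON (same content, shown unescaped):

<pre>
{
    "virtualservice": {
        "services": [
            {"port": 443, "enable_ssl": true}
        ],
        "east_west_placement": false,
        "ssl_key_and_certificate_refs": ["avisvccert"],
        "ssl_profile_ref": "/api/sslprofile/?name=System-Standard"
    }
}
</pre>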
 
= OpenStack =
 
 
Mgmt PG-747
[Client] ---- ens160 ---- [OpenStack] ---- ens192 ---- [WebServer]
ens160: 10.10.30.8/23
ens192: no IP address, promiscuous mode
Pool: 10.70.47.97 - 10.70.47.126
 
 
Specify the list of physical_network names with which flat, VLAN, GRE, etc. networks can be created:
sudo nano /etc/neutron/plugins/ml2/ml2_conf.ini
 
Under [ml2_type_flat]:
flat_networks = *
 
Authenticate to OpenStack:
source keystonerc_admin
 
Provider network (should be routable):
POOL_START=10.70.47.97
POOL_END=10.70.47.126
GW=10.70.47.1
CIDR=10.70.47.96/27
 
neutron net-create --shared --router:external --provider:physical_network provider --provider:network_type flat provider1
neutron subnet-create --name provider1-v4 --ip-version 4 \
--allocation-pool start=$POOL_START,end=$POOL_END \
--gateway $GW --dns-nameserver 8.8.4.4 provider1 \
$CIDR
 
Create router
openstack router create adminrouter
Set it to connect to the external network
routerid=`openstack router show adminrouter | grep " id " | awk '{print $4;}'`
extnetid=`openstack network show provider1 | grep " id " | awk '{print $4;}'`
neutron router-gateway-set $routerid $extnetid
 
Create the following networks in the admin tenant:
 
1. Mgmt network:
neutron net-create mgmt --shared
neutron subnet-create mgmt 10.0.1.0/24 --name mgmtsnw --dns-nameserver 10.10.0.100
 
Connect router to it:
subnetid=`openstack subnet show mgmtsnw | grep " id " | awk '{print $4;}'`
neutron router-interface-add $routerid subnet=$subnetid
 
2. VIP IPv4 network:
neutron net-create vip4 --shared
neutron subnet-create vip4 10.0.2.0/24 --name vip4snw --dns-nameserver 10.10.0.100
 
Connect router to it:
subnetid=`openstack subnet show vip4snw | grep " id " | awk '{print $4;}'`
neutron router-interface-add $routerid subnet=$subnetid
 
3. Data IPv4 network:
neutron net-create data4 --shared
neutron subnet-create data4 10.0.3.0/24 --name data4snw --dns-nameserver 10.10.0.100
 
Connect router to it:
subnetid=`openstack subnet show data4snw | grep " id " | awk '{print $4;}'`
neutron router-interface-add $routerid subnet=$subnetid
 
== Configuring Allowed Address Pairs (AAP) if OpenStack runs on another OpenStack ==
 
* This setup is not required if the OpenStack VM is not deployed on top of another OpenStack cloud.
* It is not needed if DevStack is deployed in vCenter.
 
interface="ens192"
cidr="10.70.47.96/27"
 
for e in `env | grep ^OS_ | cut -d'=' -f1`; do unset $e; done
my_mac=`ifconfig $interface | grep "HWaddr" | awk '{print $5;}'`
if [ -z "$my_mac" ]; then
echo "Can't find mac!"
exit
fi
 
Make the OpenStack controller resolvable (delete any existing entries first so the file stays idempotent):
sed -i "/nameserver 10.10.0.100/d" /etc/resolv.conf
echo "nameserver 10.10.0.100" >> /etc/resolv.conf
sed -i "/search avi.local/d" /etc/resolv.conf
echo "search avi.local" >> /etc/resolv.conf
 
Figure out the port ID of the data interface (using the lab credentials) and build the allowed-address-pair list:
port_id=`neutron port-list | grep "$my_mac" | awk '{print $2;}'`
qrouters=`ip netns list | grep qrouter | cut -f 1 -d ' '`
aaplist=""
for qr in $qrouters; do
    mac=`sudo ip netns exec $qr ifconfig | grep qg | awk '{print $5;}'`
    aaplist="$aaplist mac_address=$mac,ip_address=$cidr"
done
 
neutron port-update $port_id --allowed-address-pairs type=dict list=true $aaplist
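To confirm the pairs were applied to the port:
neutron port-show $port_id | grep allowed_address_pairs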
 
== Troubleshooting ==
 
=== Instance not creating ===
 
* Check Nova API logs:
cat /var/log/nova/nova-api.log
 
* Debug Avi Cloud connector
debug controller cloud_connector
trace_level trace_level_debug
trace_level trace_level_debug_detail
save
 
* TCPDump on Avi Controller:
tcpdump -i eth0 -s 0 -w /tmp/openstack.pcap
 
Example POST request body (Service Engine creation) seen in the capture:
<pre>
{"server": {"name": "Avi-se-iqpug", "imageRef": "7d95c733-3279-47f3-bbc2-047e1d3cb3b7", "flavorRef": "2", "max_count": 1, "min_count": 1, "metadata": {"AVICNTRL": "10.10.30.47", "AVITENANT": "admin", "AVICOOKIE": "3355a182-83a1-44ee-9fa4-6a6cd65e4dfb", "AVIFLAVOR": "2", "AVICLUSTER_UUID": "cluster-c820efe7-2e24-4a14-9a7f-c2df55477599", "AVISG_UUID": "7a2f6b47-8b39-4836-a75f-b88aefe6a085", "AVICNTRLTENANT": "admin", "AVICLOUD_UUID": "OpenStack-Cloud:cloud-5f5b017e-dbf9-4ab2-a1cb-b5de3bed6fc2", "HYPERVISOR_TYPE": "kvm", "CNTRL_SSH_PORT": "5098", "AVIMGMTMAC": "fa:16:3e:2b:b7:24"}, "networks": [{"port": "d418b981-cfd8-4648-8a2a-0ffd7eb23e1b"}], "security_groups": [{"name": "9c1ae14a-bbe0-11e8-84dc-0242ac110002"}], "config_drive": true}}
</pre>
 
= Using Ansible =
 
*Use Virtual Environment:
mkdir ~/virtualenv
mkdir avisdk
. activate
 
*Install Avi SDK:
pip install avisdk
pip freeze
 
*Activate Virtual Environment:
cd ~/virtualenv/avisdk/
cd bin
source activate
pip install ansible
 
*Install Avi Roles:
ansible-galaxy install -f avinetworks.avisdk
ls ~/.ansible/roles/avinetworks.avisdk/library/
 
ansible-galaxy list
 
*Run Playbook:
cp /tmp/for_ansible_training.yml ~
nano ~/for_ansible_training.yml
ansible-playbook ~/for_ansible_training.yml
ansible-playbook ~/for_ansible_training.yml -vvvvv
 
= Ansible Playbook to Deploy VS =
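The playbook itself was not captured in this revision. A minimal sketch of a pool-plus-VS playbook using the avinetworks.avisdk role installed above; the controller address, credentials, object names, and the server/VIP addresses are placeholders borrowed from the SDK example in the next section:

<pre>
---
- hosts: localhost
  connection: local
  roles:
    - avinetworks.avisdk
  tasks:
    - name: Create a pool with one back-end server
      avi_pool:
        controller: 10.10.26.40
        username: admin
        password: Admin@123
        tenant: admin
        name: test_pool_aman
        lb_algorithm: LB_ALGORITHM_ROUND_ROBIN
        servers:
          - ip:
              addr: 10.90.64.16
              type: V4
            port: 80

    - name: Create a virtual service that fronts the pool
      avi_virtualservice:
        controller: 10.10.26.40
        username: admin
        password: Admin@123
        tenant: admin
        name: test_aman
        services:
          - port: 80
        pool_ref: /api/pool?name=test_pool_aman
        vip:
          - ip_address:
              addr: 10.91.0.6
              type: V4
            vip_id: '1'
</pre>
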
= Using AVI SDK =
 
cat pool_vs.py
nano pool_vs.py
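The top of pool_vs.py (argument parsing, session setup, and pool creation) is not included in the snippet below. A minimal sketch of those steps with the Avi Python SDK, using hard-coded placeholder values in place of the script's command-line arguments (the back-end server address is an assumption):

<syntaxhighlight lang="python">
from avi.sdk.avi_api import ApiSession

# Placeholders; pool_vs.py takes these from its -c/-u/-p/-t/-po/-vs/-v arguments.
controller, user, password, tenant = '10.10.26.40', 'admin', 'Admin@123', 'admin'
pool, vs, vip = 'test_pool_aman', 'test_aman', '10.91.0.6'

# Authenticated session to the Avi Controller.
api = ApiSession.get_session(controller, user, password, tenant=tenant)

# Create the pool with a single (assumed) back-end server.
pool_obj = {
    'name': pool,
    'lb_algorithm': 'LB_ALGORITHM_ROUND_ROBIN',
    'servers': [{'ip': {'addr': '10.90.64.16', 'type': 'V4'}, 'port': 80}],
}
resp = api.post('pool', data=pool_obj)
print(resp.json())

# Reference the pool by name and define the service port for the VS object below.
pool_ref = '/api/pool?name=' + pool
services_obj = [{'port': 80, 'enable_ssl': False}]
</syntaxhighlight>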
 
<syntaxhighlight lang="python">
vs_obj = {'name': vs, 'vip' : [ {'ip_address': {'addr': vip, 'type': 'V4'}}], 'services': services_obj, 'pool_ref': pool_ref}
 
# Posting the VS object
resp = api.post('virtualservice', data=vs_obj)

print(resp.json())
</syntaxhighlight>
 
python pool_vs.py -u admin -p Admin@123 -c 10.10.26.40 -t admin -vs test_aman -v 10.91.0.6 -po test_pool_aman
 
<br />