Kubernetes

From Network Security Wiki
Revision as of 09:48, 19 June 2018



Basics

  • Kubernetes Master Processes:
- kube-apiserver
- kube-controller-manager
- kube-scheduler
- etcd: key-value store holding the cluster state (including tags/labels) for easy management
  • Nodes Component Processes:
- kubelet: talks to kubernetes master
- kube-proxy:
- DNS addon
- UI addon
- fluentd: for log collection
- Supervisord
  • Kubernetes Concepts:
- Container Image: Docker container Image with Application code
- Pod: Set of containers sharing network namespace & Local volumes, co-scheduled on one machine. Mortal, has IP, Label
- Deployment: Specify how many replicas of pod should be run in a cluster. Has label.
- Service: names things in DNS, Gets Virtual IP. Routes based on labels. Two types:
   ClusterIP for Internal services
   NodePort for publishing to outside.
  • In practice you rarely manage Pods directly; you manage Deployments, which manage the Pods for you.
  • Kubernetes Networking Requirements:
- All containers should communicate without NAT
- All Nodes can communicate without NAT
- The IP that a container sees itself as is the same IP that others see it as.
eth0<------->docker0<--------->veth0
10.100.0.2  docker bridge    container1
            172.17.0.1       172.17.0.2

Overlay network e.g. Flannel:

eth0<--------->cbr0<--------->veth0
10.100.0.2                   container1
             172.17.0.1      172.17.0.2


  • Pause container: each Pod runs an infrastructure ("pause") container that holds the Pod's network namespace; the application containers join it.
  • All networking in the Pod lies in the overlay itself.
  • Kube-Proxy:
- Implemented with iptables rules
- Handles inter-node (Pods on different hosts) communication => East-West traffic
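The iptables rules that kube-proxy programs can be inspected on a node. The KUBE-SERVICES chain is the one kube-proxy creates for Service virtual IPs; the exact output depends on your cluster, and the command only works on a node where kube-proxy is running:

```shell
# List the NAT rules kube-proxy installs for Services (run on a cluster node, needs root)
sudo iptables -t nat -L KUBE-SERVICES -n | head
```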
  • YAML output:
kubectl get pod <name> -o yaml
  • Shell Access:
kubectl exec -it <pod-name> -c <container-name> -- bash
  • Labels:
Used for ease of management
  • Assign Labels:
kubectl label pods <pod-name> owner=test
kubectl label pods <pod-name> env=development
  • Show Labels:
kubectl get pods --show-labels
  • Filter based on Labels:
kubectl get pods --selector owner=test

OR

kubectl get pods -l owner=test
kubectl get pods -l 'env in (production,development)' --show-labels
  • Deployment uses ReplicaSet NOT ReplicationController:
kubectl get replicationcontroller
<no output>

OR

kubectl get rc
<no output>


kubectl get replicaset

OR

kubectl get rs


  • You can expose Pods based on Labels, i.e. Pods from different Deployments can be exposed.
  • Services:
ClusterIP (default): used for East-West traffic only; not reachable from outside the cluster.
NodePort: used for North-South traffic; makes a service accessible from outside using NAT.
LoadBalancer: creates an external load-balancer in the current cloud; makes a service available outside.
ExternalName: maps the service to an external DNS name (returns a CNAME record).
  • Application should be accessible from all nodes - Master & Worker Nodes
  • Ingress: not a Service; instead it sits in front of multiple services, acting as a smart router or entry point into the cluster
Traffic ==> Ingress ==> | ==> foo.mydomain.com  -- Service -- Pod,Pod,Pod
                        | ==> mydomain.com/bar  -- Service -- Pod,Pod,Pod
                        | ==> Other             -- Service -- Pod,Pod,Pod
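The Deployment and Service concepts above can be sketched as a minimal manifest; all names and the image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app
spec:
  replicas: 2                 # Deployment: how many Pod replicas to run
  selector:
    matchLabels:
      app: nginx-app          # selects Pods by label
  template:
    metadata:
      labels:
        app: nginx-app        # label attached to each Pod
    spec:
      containers:
      - name: nginx
        image: nginx          # container image with the application code
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-http
spec:
  type: NodePort              # publish outside the cluster (ClusterIP = internal only)
  selector:
    app: nginx-app            # routes to Pods based on labels
  ports:
  - port: 80
```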

Requirements

3 Ubuntu VMs with:

Same OS version
Same resources
LAN connectivity

Installing dependencies

Source: techrepublic.com, linuxtechi.com

This will be done on all machines that will join the Kubernetes cluster.

sudo apt-get update && sudo apt-get install -y apt-transport-https

Our next dependency is Docker. Our Kubernetes installation will depend upon this, so install it with:

sudo apt install docker.io

Once that completes, start and enable the Docker service with the commands

sudo systemctl start docker
sudo systemctl enable docker

Disable swap on all 3 VMs (kubelet refuses to run with swap enabled):

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Or:

sudo sed -i '/ swap / s/^/#/' /etc/fstab

Then turn swap off for the running session:

sudo swapoff -a
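The sed edit above can be sanity-checked against a throwaway copy of an fstab; the paths and UUID below are illustrative:

```shell
# Build a sample fstab with one swap entry
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 / ext4 errors=remount-ro 0 1
/swapfile none swap sw 0 0
EOF

# Comment out any line containing " swap " (same pattern as above)
sed -i '/ swap / s/^/#/' /tmp/fstab.demo

# The swap line is now "#/swapfile none swap sw 0 0"; the root entry is untouched
cat /tmp/fstab.demo
```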

Installing Kubernetes

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Next add a repository by creating the file /etc/apt/sources.list.d/kubernetes.list and enter the following content:

deb http://apt.kubernetes.io/ kubernetes-xenial main 
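Instead of editing the file by hand, the same line can be written with a one-liner:

```shell
# Write the repository line to the apt sources directory (needs root)
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
```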

Save and close that file. Install Kubernetes with the following commands:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni

Initialize your master

Go to the machine that will serve as the Kubernetes master and issue the command:

sudo su
sudo kubeadm init

Before you join a node, you need to issue the following commands (as a regular user):

exit
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
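Alternatively, for a quick root-only session the admin config can be used directly via an environment variable, which is the alternative kubeadm init itself suggests in its output:

```shell
# Point kubectl at the admin kubeconfig for this shell session (as root)
export KUBECONFIG=/etc/kubernetes/admin.conf
```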

Deploying a pod network

You must deploy a pod network before anything will actually function properly:

kubectl apply -f [podnetwork].yaml

You can use one of the below Pod Networks:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

Verify the Pods; all should be Running & only the DNS pod should be Pending initially:

aman@ubuntu:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-ubuntu                      1/1       Running   0          3m
kube-system   kube-apiserver-ubuntu            1/1       Running   0          3m
kube-system   kube-controller-manager-ubuntu   1/1       Running   0          3m
kube-system   kube-dns-86f4d74b45-wq49s        0/3       Pending   0          4m    <==
kube-system   kube-proxy-g96ml                 1/1       Running   0          4m
kube-system   kube-scheduler-ubuntu            1/1       Running   0          3m

Flannel

        Multiple bugs were encountered when implementing Flannel

Here we will be installing the Flannel pod network:

sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

Issue the command:

kubectl get pods --all-namespaces

Weave Net

Install the WeaveNet Pod:

export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"

Verification

Verify Installation after a few minutes:

aman@ubuntu:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-ubuntu                      1/1       Running   0          11m
kube-system   kube-apiserver-ubuntu            1/1       Running   0          11m
kube-system   kube-controller-manager-ubuntu   1/1       Running   0          11m
kube-system   kube-dns-86f4d74b45-wq49s        3/3       Running   0          12m    <==
kube-system   kube-proxy-g96ml                 1/1       Running   0          12m
kube-system   kube-scheduler-ubuntu            1/1       Running   0          11m
kube-system   weave-net-pg57l                  2/2       Running   0          6m     <==

Joining a node

With everything in place, you are ready to join the node to the master. To do this, go to the node's terminal and issue the command:

sudo su
kubeadm join --token <TOKEN> <MASTER_IP:6443>

Or run whatever join command is shown in the output of the master after kubeadm init:

kubeadm join 10.1.11.184:6443 --token 0lxezc.game230zg6jpa60g --discovery-token-ca-cert-hash sha256:74b34793d0ty56037c71e4a54e7475901bf627~

Recreate a token if required:

sudo kubeadm token create
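Recent kubeadm versions can also print the full join command, including the CA cert hash, in one step:

```shell
# Creates a fresh token and prints the complete "kubeadm join ..." command
sudo kubeadm token create --print-join-command
```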

Verify from Master node:

kubectl get nodes

Deploying a service

Source: medium.com

At this point, you are ready to deploy a service on your Kubernetes cluster. To deploy an NGINX service (and expose the service on port 80), run the following commands (from the master):

sudo kubectl run nginx-app --image=nginx --port=80 --env="DOMAIN=cluster" --replicas=2
sudo kubectl expose deployment nginx-app --port=80 --name=nginx-http --type=NodePort
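After exposing the Deployment, look up the NodePort that was allocated and test it from any machine that can reach the nodes; the node IP and port below are illustrative placeholders:

```shell
kubectl get service nginx-http      # the PORT(S) column shows the mapping, e.g. 80:3xxxx/TCP
curl http://<node-ip>:<node-port>   # substitute a real node IP & the allocated NodePort
```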

Managing Kubernetes

Scaling Deployment:

sudo kubectl get deployment nginx-app
sudo kubectl scale deployment nginx-app --replicas=3

Verify

Verify Pods:

kubectl get pods
kubectl get pods -o wide

Go to each worker node:

sudo docker ps -a

Delete a Pod

This will delete an existing Pod & the Deployment will create a new one:

kubectl delete pod nginx-app-56f6bb6776-wrbvl

Delete a Deployment

Verify existing Pods & Service:

kubectl get deployments
kubectl get service

Delete the Deployment & Service:

kubectl delete deployment nginx-app
kubectl delete service nginx-http

Delete all Pods & Services:

kubectl delete pods --all
kubectl delete service --all

Troubleshooting

If Pod creation fails, check the Pod's events & details:

kubectl describe pod nginx-app-56f6bb6776-b7cb5

Reset Everything

sudo kubeadm reset
sudo rm -rf $HOME/.kube



