Kubernetes

Basics

  • Kubernetes Master Processes:
- kube-apiserver
- kube-controller-manager
- kube-scheduler
- etcd: key-value store holding cluster state (labels, tags, configuration) for easy management
  • Node Component Processes:
- kubelet: talks to the Kubernetes master
- kube-proxy: maintains network rules for Service traffic on the node
- DNS addon
- UI addon
- fluentd: for log collection
- Supervisord
  • Kubernetes Concepts:
- Container Image: Docker container image with the application code
- Pod: set of containers sharing a network namespace & local volumes, co-scheduled on one machine. Mortal; has an IP and labels.
- Deployment: specifies how many replicas of a Pod should run in the cluster. Has labels.
- Service: names things in DNS, gets a virtual IP, routes based on labels (see the sketch below). Two types:
   ClusterIP for internal services
   NodePort for publishing to the outside
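
A minimal sketch of how these concepts fit together; the names nginx-app and nginx-http and the label app: nginx are illustrative assumptions, not anything defined by this cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-app              # illustrative name
spec:
  replicas: 2                  # how many Pod replicas to run
  selector:
    matchLabels:
      app: nginx               # ties the Deployment to its Pods
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-http             # illustrative name
spec:
  type: ClusterIP              # internal virtual IP; switch to NodePort to publish outside
  selector:
    app: nginx                 # routes to any Pod carrying this label
  ports:
  - port: 80
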
  • Do not manage Pods directly; manage Deployments instead.
  • Kubernetes Networking Requirements:
- All containers can communicate without NAT
- All nodes can communicate without NAT
- The IP that a container sees itself as is the same IP that others see it as
eth0<------->docker0<--------->veth0
10.100.0.2  docker bridge    container1
            172.17.0.1       172.17.0.2

Overlay network e.g. Flannel:

eth0<--------->cbr0<--------->veth0
10.100.0.2                   container1
             172.17.0.1      172.17.0.2


  • Pause container: each Pod runs a pause container that holds the Pod's shared network namespace.
  • All networking in the Pod lies in the overlay itself.
  • Kube-Proxy:
- Implemented with iptables rules
- Handles inter-node (Pods on different hosts) communication => East-West traffic; the rules can be inspected as shown below
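
Since kube-proxy programs iptables, its rules can be inspected on any node; KUBE-SERVICES is the standard kube-proxy entry chain in the nat table, and the grep pattern nginx-http is just an example service name:

sudo iptables -t nat -L KUBE-SERVICES -n | head     # virtual-IP match rules for all services
sudo iptables-save | grep nginx-http                # rules programmed for one example service
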
  • YAML output:
kubectl get pod <name> -o yaml
  • Shell Access:
kubectl exec -it <pod-name> -c <container-name> -- bash
  • Labels:
Used for ease of management

Assign Labels:

kubectl label pods <pod-name> owner=test
kubectl label pods <pod-name> env=development

Show Labels:

kubectl get pods --show-labels

Filter based on Labels:

kubectl get pods --selector owner=test

OR

kubectl get pods -l owner=test
kubectl get pods -l 'env in (production,development)' --show-labels

You can expose Pods based on labels, i.e. Pods from different Deployments can be exposed by one Service (sketched below).
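
A sketch of a Service selecting on a label shared across Deployments; the name frontend and the label tier: frontend are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: frontend             # illustrative name
spec:
  selector:
    tier: frontend           # any Pod with this label becomes a backend,
                             # regardless of which Deployment created it
  ports:
  - port: 80
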

  • Deployment uses ReplicaSet NOT ReplicationController:
kubectl get replicationcontroller
<no output>

OR

kubectl get rc
<no output>
kubectl get replicaset

OR

kubectl get rs
  • Services:
ClusterIP (default): used for East-West traffic only; not reachable from outside the cluster.
NodePort: used for North-South traffic; makes a service accessible from outside using NAT.
LoadBalancer: creates an external load balancer in the current cloud; makes a service available outside.
ExternalName: maps the service to an external DNS name, as sketched below.
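
A minimal ExternalName sketch; the service name and the external hostname are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: external-db              # illustrative name
spec:
  type: ExternalName
  externalName: db.example.com   # cluster DNS answers with a CNAME to this host
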
  • The application should be accessible from all nodes - master & worker nodes.
  • Ingress: not a Service; it sits in front of multiple services, acting as a smart router or entry point into the cluster (see the manifest sketch after the diagram):
Traffic ==> Ingress ==> | ==> foo.mydomain.com  -- Service -- Pod,Pod,Pod
                        | ==> mydomain.com/bar  -- Service -- Pod,Pod,Pod
                        | ==> Other             -- Service -- Pod,Pod,Pod
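
A manifest sketch of the routing above; the service names are assumptions, and the Ingress API group has moved between Kubernetes versions (networking.k8s.io/v1 currently; older clusters used extensions/v1beta1):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: foo.mydomain.com       # host-based rule
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foo-service    # illustrative service name
            port:
              number: 80
  - host: mydomain.com           # path-based rule
    http:
      paths:
      - path: /bar
        pathType: Prefix
        backend:
          service:
            name: bar-service    # illustrative service name
            port:
              number: 80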

Requirements

3 Ubuntu VMs having:

- The same OS version
- The same resources
- LAN connectivity to each other

Installing dependencies

Source: techrepublic.com (https://www.techrepublic.com/article/how-to-quickly-install-kubernetes-on-ubuntu/), linuxtechi.com (https://www.linuxtechi.com/install-kubernetes-1-7-centos7-rhel7/)

This will be done on all machines that will join the Kubernetes cluster. First install apt-transport-https, a package that allows apt to use https repository sources:

sudo apt-get update && sudo apt-get install -y apt-transport-https

Our next dependency is Docker. Our Kubernetes installation will depend upon this, so install it with:

sudo apt install docker.io

Once that completes, start and enable the Docker service with the commands:

sudo systemctl start docker
sudo systemctl enable docker

Disable swap on all 3 VMs:

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Or:

sudo sed -i '/ swap / s/^/#/' /etc/fstab
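
The sed commands above only comment out the fstab entry for future boots; to turn swap off for the running system as well (kubeadm refuses to proceed while swap is enabled):

sudo swapoff -a      # disable swap immediately
free -h              # the Swap row should now show 0B
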

Installing Kubernetes

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Next, add the repository by creating the file /etc/apt/sources.list.d/kubernetes.list with the following content:

deb http://apt.kubernetes.io/ kubernetes-xenial main 
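
Equivalently, the repository file can be written with a single non-interactive command:

echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
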

Save and close that file (or use the one-liner above). Install Kubernetes with the following commands:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni

Initialize your master

Go to the machine that will serve as the Kubernetes master and issue the command:

sudo su
sudo kubeadm init

Before you join a node, you need to issue the following commands (as a regular user):

exit
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
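
To confirm the copied kubeconfig works:

kubectl cluster-info      # should print the master URL
kubectl get nodes         # the master shows NotReady until a Pod network is deployed
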

Deploying a pod network

You must deploy a pod network before anything will actually function properly:

kubectl apply -f [podnetwork].yaml

You can use one of the below Pod Networks:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

Verify the Pods; all should be Running & only the DNS Pod should be Pending initially:

aman@ubuntu:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-ubuntu                      1/1       Running   0          3m
kube-system   kube-apiserver-ubuntu            1/1       Running   0          3m
kube-system   kube-controller-manager-ubuntu   1/1       Running   0          3m
kube-system   kube-dns-86f4d74b45-wq49s        0/3       Pending   0          4m    <==
kube-system   kube-proxy-g96ml                 1/1       Running   0          4m
kube-system   kube-scheduler-ubuntu            1/1       Running   0          3m

Flannel

        Multiple bugs were encountered when implementing Flannel

Here we will be installing the Flannel pod network:

sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

Issue the command:

kubectl get pods --all-namespaces

Weave Net

Install the WeaveNet Pod:

export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"

Verification

Verify Installation after a few minutes:

aman@ubuntu:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
kube-system   etcd-ubuntu                      1/1       Running   0          11m
kube-system   kube-apiserver-ubuntu            1/1       Running   0          11m
kube-system   kube-controller-manager-ubuntu   1/1       Running   0          11m
kube-system   kube-dns-86f4d74b45-wq49s        3/3       Running   0          12m    <==
kube-system   kube-proxy-g96ml                 1/1       Running   0          12m
kube-system   kube-scheduler-ubuntu            1/1       Running   0          11m
kube-system   weave-net-pg57l                  2/2       Running   0          6m     <==

Joining a node

With everything in place, you are ready to join the node to the master. To do this, go to the node's terminal and issue the command:

sudo su
kubeadm join --token <TOKEN> <MASTER_IP:6443>

Or use whatever join command is shown in the output of kubeadm init on the master:

kubeadm join 10.1.11.184:6443 --token 0lxezc.game230zg6jpa60g --discovery-token-ca-cert-hash sha256:74b34793d0ty56037c71e4a54e7475901bf627~

Recreate a token if required:

sudo kubeadm token create
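
On recent kubeadm versions the complete join command, including the discovery token CA cert hash, can be regenerated in one step (flag availability depends on the kubeadm version):

sudo kubeadm token create --print-join-command
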

Verify from Master node:

kubectl get nodes

Deploying a service

Source: medium.com (https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0)

At this point, you are ready to deploy a service on your Kubernetes cluster. To deploy an NGINX service (and expose the service on port 80), run the following commands (from the master):

sudo kubectl run nginx-app --image=nginx --port=80 --env="DOMAIN=cluster" --replicas=2
sudo kubectl expose deployment nginx-app --port=80 --name=nginx-http --type=NodePort
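
To find which high port NodePort assigned and test the service from outside the cluster (<node-ip> and <nodeport> are placeholders to fill in from the output):

sudo kubectl get service nginx-http       # the PORT(S) column shows 80:<nodeport>/TCP
curl http://<node-ip>:<nodeport>          # should return the NGINX welcome page
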

Managing Kubernetes

Scaling Deployment:

sudo kubectl get deployment nginx-app
sudo kubectl scale deployment nginx-app --replicas=3
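
Scaling can also be automated against CPU load; a sketch assuming cluster metrics are available, with illustrative thresholds:

sudo kubectl autoscale deployment nginx-app --min=2 --max=5 --cpu-percent=80
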

Verify

Verify Pods:

kubectl get pods
kubectl get pods -o wide

Go to each worker node and verify the containers are running:

sudo docker ps -a

Delete a Pod

This will delete an existing Pod & the Deployment will create a new one:

kubectl delete pod nginx-app-56f6bb6776-wrbvl

Delete a Deployment

Verify existing Deployments & Services:

kubectl get deployments
kubectl get service

Delete the Deployment & Service:

kubectl delete deployment nginx-app
kubectl delete service nginx-http

Delete all Pods & Services:

kubectl delete pods --all
kubectl delete service --all

Troubleshooting

If Pod creation fails, inspect the Pod's events and status:

kubectl describe pod nginx-app-56f6bb6776-b7cb5

View the client configuration:

kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.1.10.158:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

Create a service account and inspect its token secret:

sudo kubectl create serviceaccount test
sudo kubectl get secret
sudo kubectl get secret test-token-xqh8z
sudo kubectl get secret test-token-xqh8z -o yaml
kubectl describe secret
kubectl describe secret test-token-xqh8z

Use the token to authenticate:

kubectl --token=<token> get nodes
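
The same token also works for raw API calls; the server address below comes from the kubectl config view output above, and -k skips certificate verification (acceptable only for a quick test):

curl -k -H "Authorization: Bearer <token>" https://10.1.10.158:6443/api/v1/nodes
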

Reset Everything

sudo kubeadm reset
sudo rm -rf .kube

Dashboard

        This setup has not been tested successfully yet
  • Install Dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
  • Create the User File:
vi dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
kubectl create -f dashboard-adminuser.yaml
  • Create the Role Binding file:
vi ClusterRoleBinding-ui.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
kubectl create -f ClusterRoleBinding-ui.yaml
  • Edit the Dashboard service:
kubectl -n kube-system edit service kubernetes-dashboard
 type: ClusterIP  <--- change this to NodePort
  • Generate the token:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
  • Get the Port number:
kubectl get service -o wide --all-namespaces
  kube-system   kubernetes-dashboard   NodePort    10.99.104.194   <none>        443:31860/TCP   133m   k8s-app=kubernetes-dashboard
  • Enable firewall ports ON ALL 3 NODES:
firewall-cmd --permanent --add-port=31860/tcp
firewall-cmd --reload
  • Start the kubectl proxy:
kubectl proxy --address='0.0.0.0' --accept-hosts='^*$' &
  • Access the UI:
https://<k8s-master>:31860/
  • Select and paste the token

More Information

  • Cheatsheet: https://kubernetes.io/docs/reference/kubectl/cheatsheet/
  • Kubernetes 101: https://medium.com/google-cloud/kubernetes-101-pods-nodes-containers-and-clusters-c1509e409e16