This is a series of tutorials about Kubernetes and its management. This tutorial will also help you gather the knowledge required to pass the Certified Kubernetes Administrator (CKA) exam. Let’s get started.
What is Kubernetes?
Kubernetes is an open-source container orchestration platform; it provides a centralized management layer for containerized applications.
For example, suppose we have a WordPress site that uses Nginx to serve PHP files and a database to store its data. We can deploy this site in multiple containers (one for the PHP processing and another for the database) with the help of containerization software such as containerd or Docker.
As the site grows, we may need to scale the application and add resources such as a load balancer, more CPU and RAM, continuous deployment, and additional containers. Kubernetes can handle all of these activities. Now I hope you have an idea of what Kubernetes is.
Kubernetes has multiple components to manage the containers; details about each of them are given below.
Control plane: Handles orchestration tasks by managing and controlling the worker nodes in the cluster.
Worker nodes: Run application workloads in Pods and accept instructions from the control plane.
Pods: A collection of containers running on the same machine.
Kube API server
Exposes the Kubernetes API, which handles all the communication in and out of the cluster.
etcd: The persistent key-value store that holds the desired state of the entire cluster.
Kube-scheduler: Handles the scheduling (placement) of workloads by determining the best node for each one. It knows what resources are available and what is already running, and places containers accordingly.
Controller manager: Monitors and responds to events that occur in the cluster to maintain the desired state.
Kubelet: A Kubernetes agent that runs on every node; it communicates with the control plane and ensures that containers run on its node as instructed. The kubelet also reports the status of the containers and other node data back to the control plane.
Kube-proxy: Runs on each node and handles routing and load balancing for cluster traffic, both local and external. It maintains the network rules on its node by updating the local iptables and firewall rules for the Pods and Services running there. If a Pod becomes unavailable, the routing rules are updated so that traffic is no longer sent to it.
Container runtime: The software that runs the containers; here we are using containerd. The runtime fetches container images and is responsible for starting and stopping containers.
Pod: The unit of deployment, a logical group of one or more containers that live on the same machine. A Pod is a single managed instance of an application and carries the information about how to run its containers, such as shared storage and networking.
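To make the Pod concept concrete, a minimal Pod manifest might look like the sketch below. The name and image here are illustrative examples only, not part of the cluster we build later.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-nginx        # example name, choose your own
spec:
  containers:
  - name: nginx
    image: nginx:1.21        # example image; any container image works
    ports:
    - containerPort: 80      # port the container listens on
```

Once a cluster is running, a manifest like this could be applied with `kubectl apply -f pod.yaml`.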
Next, we will build a Kubernetes cluster with Kubeadm
Build a cluster with Kubeadm
Here I am building the cluster on the Azure public cloud, but you can use any cloud provider or an on-premises server with the following configuration.
| Node | OS | Resources |
|---|---|---|
| control-node-1 | Ubuntu 18.04 LTS | 2 CPU, 4 GB memory |
| worker-node-1 | Ubuntu 18.04 LTS | 2 CPU, 4 GB memory |
| worker-node-2 | Ubuntu 18.04 LTS | 2 CPU, 4 GB memory |
Log in to all 3 machines and make sure that they can communicate with each other. I have also added a public IP address to each of these machines.
Install containerd on all 3 machines; containerd is the container runtime for the Kubernetes cluster.
Execute the command below to load the kernel modules required by containerd automatically after a server reboot.
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
For now, we will load these modules without performing a reboot by executing the commands below.
sudo modprobe overlay
sudo modprobe br_netfilter
Next, execute the command below to persist the Kubernetes network settings across restarts.
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
Apply the settings immediately without a reboot.
sudo sysctl --system
We have completed a few system-level changes on all 3 machines; next, install the containerd package on all machines.
sudo apt-get update && sudo apt-get install -y containerd
With containerd installed, create a default configuration with the help of the command below.
sudo containerd config default | sudo tee /etc/containerd/config.toml
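For reference, the generated config.toml contains a CRI section; on hosts that use systemd, many setup guides also set `SystemdCgroup = true` there. The fragment below is illustrative only, and the exact section path varies by containerd version.

```toml
# Illustrative fragment of /etc/containerd/config.toml (containerd 1.4+ layout)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```

If you change the file, restart containerd afterwards for the setting to take effect.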
Restart the containerd service.
sudo systemctl restart containerd
Disable swap memory on all nodes as part of Kubernetes installation.
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
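To see what the sed command above does, here is a small sketch that applies the same rule to a sample copy of an fstab file. The path `/tmp/fstab.sample` and its contents are made up for the demo; the real command edits `/etc/fstab`.

```shell
# Create a sample fstab with one regular mount and one swap entry
cat > /tmp/fstab.sample <<'EOF'
UUID=1234-abcd / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF

# Comment out any line containing " swap ", exactly as done on /etc/fstab above
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.sample

cat /tmp/fstab.sample
```

Only the swap line gains a leading `#`, so swap stays disabled after the next reboot while other mounts are untouched.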
Install kubeadm, kubelet, and kubectl on all nodes, and hold the packages to prevent automatic updates.
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet=1.21.0-00 kubeadm=1.21.0-00 kubectl=1.21.0-00
sudo apt-mark hold kubelet kubeadm kubectl
All package installation is complete. Next, initialize the cluster and set up kubectl access; run this on the control plane (master) node only.
sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.21.0
A successful initialization message will look as follows.
Execute the commands mentioned in the initialization output.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Execute the below command to get the control node status.
kubectl get nodes
Here the status is NotReady because we haven’t configured a network yet. To set up networking in the cluster, install the Calico plugin (on the control node only).
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
We have completed the control node setup; next, we have to add the 2 worker nodes to the cluster. To join a worker node, execute the command we got from the Kubernetes installation output (refer to the screenshot), or you can regenerate the join command by executing the command below.
kubeadm token create --print-join-command
Take the output and execute the command on both worker nodes as root.
sudo kubeadm join 22.214.171.124:6443 --token 4fcnl0.cm6y31jjnv6m0xuq --discovery-token-ca-cert-hash sha256:c033502147a136edf8f1cbad6f52055349d8418ff761e
Come back to the control node and execute the below command to get the status of all nodes.
techies@control-node-1:~$ kubectl get nodes
NAME             STATUS   ROLES                  AGE   VERSION
control-node-1   Ready    control-plane,master   26m   v1.21.0
worker-node-1    Ready    <none>                 71s   v1.21.0
worker-node-2    Ready    <none>                 16s   v1.21.0
Here all the nodes are Ready, which means we have successfully created a Kubernetes cluster.