How to install Kubernetes with Docker

In this blog post, I will demonstrate how to set up a home lab Kubernetes cluster with the Docker container engine. Developers and DevOps engineers often need a small Kubernetes environment to test code or configuration changes, and I hope this post helps with that. In this tutorial, we will create one control plane node and 2 worker nodes with the help of Vagrant and VirtualBox.

Pre-requisites

This guide assumes VirtualBox is already installed on your system; if it is not, please refer to an online tutorial and install it first.

Vagrant Installation

Vagrant is open-source software that can be used to provision multiple virtual environments on VirtualBox, KVM, VMware, and others. To install Vagrant on macOS, execute the below command.

brew install vagrant

Once the installation is done, execute the below command to confirm it succeeded; the command will return the installed Vagrant version.

$ vagrant version

Installed Version: 2.2.19
Latest Version: 2.2.19
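If you want to double-check the VirtualBox side as well, the VBoxManage command-line tool that ships with VirtualBox can print the installed version:

VBoxManage --version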

Vagrant and VirtualBox are now ready. Next, we have to provision 3 virtual machines. For this we need to create a file named Vagrantfile; Vagrant looks for this file and creates virtual machines based on the configuration details it contains. Below is a sample Vagrantfile which provisions 3 Ubuntu machines with private IP addresses configured.

vim Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

ENV['VAGRANT_NO_PARALLEL'] = 'yes'
Vagrant.configure(2) do |config|

  # Kubernetes Master Server
  config.vm.define "master" do |master|
    master.vm.box = "bento/ubuntu-18.04"
    master.vm.hostname = "master.example.com"
    master.vm.network "private_network", ip: "192.168.56.111"
    master.vm.provider "virtualbox" do |v|
      v.name = "master"
      v.memory = 2048
      v.cpus = 2
    end
  end

  NodeCount = 2

  # Kubernetes Worker Nodes
  (1..NodeCount).each do |i|
    config.vm.define "worker#{i}" do |workernode|
      workernode.vm.box = "bento/ubuntu-18.04"
      workernode.vm.hostname = "worker#{i}.example.com"
      workernode.vm.network "private_network", ip: "192.168.56.12#{i}"
      workernode.vm.provider "virtualbox" do |v|
        v.name = "worker#{i}"
        v.memory = 2048
        v.cpus = 1
      end
    end
  end
end

The file provisions 3 machines (1 master or control plane node and 2 worker nodes) with the mentioned IP addresses, memory, and CPU. A complete explanation of the file is beyond the scope of this post; to understand more about the Vagrantfile, I recommend referring to the Vagrant documentation or another online article.
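Optionally, before provisioning, you can ask Vagrant to check the file for syntax errors with the validate subcommand:

vagrant validate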

To provision the virtual machines, execute the below command.

Provision virtual machines

vagrant up

The command may take a few minutes to complete, as it downloads the OS image from the HashiCorp repository if it is not already present on the local machine. Once the command completes, execute the vagrant status command to see the provisioned machines.

$ vagrant status
Current machine states:

master                    running (virtualbox)
worker1                   running (virtualbox)
worker2                   running (virtualbox)

All the machines are up now. Execute the “vagrant ssh master” command to log in to the master virtual machine, and replace “master” with “worker1” or “worker2” to log in to a worker machine. Make sure that the IP address, hostname, CPU, and memory are configured correctly. If everything looks fine, we can start installing the Kubernetes packages.
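For example, a few standard Linux commands are enough to verify these settings from inside a VM:

vagrant ssh master
hostname            # should print master.example.com
ip -4 addr show     # the private_network IP (192.168.56.111) should be listed
nproc               # number of CPUs
free -m             # memory in MB
exit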

Kubernetes installation

We will install Kubernetes version 1.21.0; you can install another version by changing this value in the below commands. Please note that the below commands should be executed on all 3 machines.

sudo swapoff -a
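# Optional: swapoff -a only lasts until the next reboot. To keep swap disabled
# permanently, you can also comment out the swap entry in /etc/fstab:
sudo sed -i '/\bswap\b/ s/^#*/#/' /etc/fstab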

curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list

sudo apt-get update
sudo apt install zip -y
sudo apt-get install helm -y

sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sudo sysctl --system
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg lsb-release -y


sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null


sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io -y
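# Optional check: confirm that the Docker service is active before continuing
sudo systemctl is-active docker
sudo docker version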
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

sudo echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list


sudo apt-get update
sudo apt-get install -y kubelet=1.21.0-00 kubeadm=1.21.0-00 kubectl=1.21.0-00
sudo apt-mark hold kubelet kubeadm kubectl

The above commands install Docker and the Kubernetes packages (kubelet, kubeadm, and kubectl). Once they have been run on all 3 machines, initialize the control plane by executing the below command ONLY on the MASTER node.

sudo kubeadm init --pod-network-cidr 10.244.0.0/16 --apiserver-advertise-address=192.168.56.111

Replace the IP address 192.168.56.111 with your control plane (master) node IP address. This command may take a few minutes to complete. Once it is done, you will see a success message in the terminal output.

The kubeadm init output also contains a few commands like the ones below; execute them only on the master node to start using the cluster.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
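At this point kubectl can already talk to the cluster; if you run the below command now, the master node will typically show as NotReady until the network plugin is installed in the next step.

kubectl get nodes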

Next, install the Weave network plugin by executing the below command on the master node. This plugin sets up the pod network so that Kubernetes pods can communicate with each other.

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

The master or control plane node is now ready. Next, we need to add the remaining 2 worker nodes to the cluster. To do this, execute the kubeadm join command printed at the end of the kubeadm init output on both worker nodes (do not run it on the master). It will look similar to the below command.

sudo kubeadm join 192.168.56.111:6443 --token 4yfy8a.8xv9sxyjr9gba2cd --discovery-token-ca-cert-hash sha256:f94de78d70667fdfa58dbc668be01b8b36f36027b7f0ae8cfd45bd9a236e89fc
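If you no longer have the join command from the kubeadm init output, you can generate a new one on the master node at any time:

sudo kubeadm token create --print-join-command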

The Kubernetes installation is now complete. Execute the below command on the master node to make sure that all the nodes in the cluster are Ready.

vagrant@master:~$ kubectl get nodes 
NAME      STATUS   ROLES                  AGE     VERSION
master    Ready    control-plane,master   28m     v1.21.0
worker1   Ready    <none>                 7m47s   v1.21.0
worker2   Ready    <none>                 7m42s   v1.21.0

All cluster nodes are ready; now you can start deploying your applications!
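As a quick smoke test, you can deploy a sample nginx application and check that its pods get scheduled on the worker nodes:

kubectl create deployment nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc nginx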
