Kubernetes is an open-source container orchestration platform. With Kubernetes, we can easily manage container deployment, scaling, and resource usage. In this article, I will demonstrate how to install a Kubernetes cluster on the AWS cloud with the help of Kops.
Kops installation on an Ubuntu machine
Kops stands for Kubernetes Operations. This tool helps to install, upgrade, and manage a production-ready cluster. To install Kops, execute the commands below.
$ curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
$ chmod +x kops-linux-amd64
$ sudo mv kops-linux-amd64 /usr/local/bin/kops
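If the binary was copied successfully and /usr/local/bin is on your PATH, a quick sanity check is to print the version (the output will vary with the release you downloaded):

```
$ kops version
```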
Next, install python-pip, which we will use to install the AWS CLI. To install python-pip, execute the command below. Once pip is installed, install the AWS CLI with pip.
$ sudo apt-get install python-pip
$ sudo pip install awscli
To confirm the installation, execute the command below, which prints the awscli version.
$ aws --version
aws-cli/1.16.298 Python/2.7.15+ Linux/4.15.0-66-generic botocore/1.13.34
AWS resources for Kops
We need an AWS user for Kops to deploy Kubernetes on AWS. To create a user, log in to the AWS account and select Identity and Access Management (IAM) from the services list.
Select Users from the left-side menu and click the “Add user” button to add a new user. Enter a username and password, and also select Programmatic access.
Click the “Next: Permissions” button to create a group for the user. Give the group a name and select permissions for it. I opted for the AdministratorAccess policy for the group; this policy allows the user “kops-user” to deploy the cluster.
Once the group is created, click the “tag” button to tag it. Click the “Create user” button to complete the user creation. On the next screen you will get an option to download the user credentials; save them in a safe place.
Next, we need to configure this user on the Ubuntu terminal to access AWS resources. Execute the command below and enter your credentials when prompted.
$ aws configure
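Running aws configure prompts for the access key ID, secret access key, default region, and output format, using the credentials file you downloaded. For illustration, the files it writes look roughly like this (the values below are placeholders, not real credentials):

```
# ~/.aws/credentials
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>

# ~/.aws/config
[default]
region = us-east-1
output = json
```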
With the AWS user created, we next need an S3 bucket to store the kops state. For this, select S3 from the AWS services list and click the Create bucket button.
Set permissions for the bucket. I have blocked public access to my bucket.
Finally, click the Create bucket button to complete the activity.
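If you prefer the terminal, the state bucket can also be created with the AWS CLI. This is a sketch using the bucket name from this article; it assumes the CLI is already configured. Enabling versioning is optional but lets you roll back to an earlier kops state if needed:

```
$ aws s3api create-bucket --bucket bucket-for-kops-project --region us-east-1
$ aws s3api put-bucket-versioning --bucket bucket-for-kops-project \
    --versioning-configuration Status=Enabled
```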
We need one more AWS resource, Route 53, so that Kops can manage the DNS of the Kubernetes cluster. Here I am using a subdomain of my main domain, techiescorner.in; alternatively, you can buy a new domain from AWS.
Click DNS management from the Route53 service and click Create Hosted Zone.
Add the sub-domain name and click the create button.
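The hosted zone can likewise be created from the terminal. A sketch, assuming the same subdomain; --caller-reference only needs to be a string that is unique per request:

```
$ aws route53 create-hosted-zone --name kops.techiescorner.in \
    --caller-reference $(date +%s)
```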
Once you complete this step, you will get DNS (nameserver) records like the ones below.
Next, we have to add all of the above records at our DNS provider. I registered my domain name with GoDaddy and use their DNS service. If you bought your domain from GoDaddy, follow the steps below; if it is from another provider, please follow their documentation.
Log in to the GoDaddy account, open DNS management, and add all the records as follows.
Please make sure that you have added all the records which AWS provides.
Once you have added all the nameserver records, you can confirm them from a Linux terminal by executing the command below, which shows all the added NS records.
$ host -t ns kops.techiescorner.in
kops.techiescorner.in name server ns-795.awsdns-35.net.
kops.techiescorner.in name server ns-426.awsdns-53.com.
kops.techiescorner.in name server ns-1368.awsdns-43.org.
kops.techiescorner.in name server ns-2036.awsdns-62.co.uk.
Create a cluster with Kops
The DNS part is over. Log back in to the Linux terminal where we installed the kops package and generate an SSH key pair for our Kubernetes cluster by executing the commands below.
techies@techiescorner:~$ mkdir -p .ssh
techies@techiescorner:~$ ssh-keygen -f .ssh/id_rsa
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in .ssh/id_rsa.
Your public key has been saved in .ssh/id_rsa.pub
The public key is saved in the .ssh/id_rsa.pub file and the private key in the .ssh/id_rsa file.
Before creating the cluster, we also need to install kubectl. To install this tool, execute the commands below.
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl
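To confirm the client is installed correctly, print its version (the cluster does not exist yet, so we only query the client side):

```
$ kubectl version --client
```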
We are all set. Next, run the command below from the terminal to create a cluster.
kops create cluster --name=kops.techiescorner.in --state=s3://bucket-for-kops-project --zones=us-east-1a --node-count=2 --node-size=t2.micro --master-size=t2.micro --dns-zone=kops.techiescorner.in
The argument --name must be a fully qualified domain name.
--state: where kops stores its state; use the S3 bucket name that we created earlier.
--zones: the availability zone where you want to launch the cluster.
--node-count / --node-size: the number of worker nodes and their instance size.
--master-size: the instance size of the master node, here t2.micro.
Once we execute the above command, it shows the complete details of the cluster before launching it, and at the end you will see a few lines like the following.
Suggestions:
 * list clusters with: kops get cluster
 * edit this cluster with: kops edit cluster kops.techiescorner.in
 * edit your node instance group: kops edit ig --name=kops.techiescorner.in nodes
 * edit your master instance group: kops edit ig --name=kops.techiescorner.in master-us-east-1a

Finally configure your cluster with:
As mentioned in the last line, execute the command below to configure the cluster.
kops update cluster --name kops.techiescorner.in --yes --state=s3://bucket-for-kops-project
Here kops keeps its state in the S3 bucket, so we have to pass the bucket as an argument. Once done, you will get a message like the one below.
Cluster changes have been applied to the cloud
To confirm the cluster has launched successfully, log in to the AWS console and select the EC2 service.
Or you can confirm this with the kubectl command.
techies@techiescorner:~$ kubectl get nodes
NAME                            STATUS   ROLES    AGE   VERSION
ip-172-20-44-237.ec2.internal   Ready    node     13m   v1.15.5
ip-172-20-45-20.ec2.internal    Ready    node     13m   v1.15.5
ip-172-20-53-225.ec2.internal   Ready    master   13m   v1.15.5
To check whether any services are running, execute the command below.
techies@techiescorner:~$ kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   100.64.0.1   <none>        443/TCP   14m
Only the default kubernetes service is running.
Next, we will run a sample service on the cluster. For this, execute the command below.
$ kubectl create deployment hello-world --image=gcr.io/hello-minikube-zero-install/hello-node
deployment.apps/hello-world created
Here we created a deployment named hello-world (any name will do) from the image named hello-node in Google's container registry; this image contains code that displays the output “Hello World”. To access this service, we need to expose the deployment by executing the command below.
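For reference, the same deployment can be written declaratively. This is a hypothetical manifest equivalent to the imperative command above; it could be applied with kubectl apply -f:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-node
        image: gcr.io/hello-minikube-zero-install/hello-node
        ports:
        - containerPort: 8080
```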
techies@techiescorner:~$ kubectl expose deployment hello-world --type=NodePort --port=8080
service/hello-world exposed
--type: how we want to access the deployment. To put the service behind a load balancer, use “LoadBalancer” as the type; to access it via each node's IP, use “NodePort”.
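The exposed service can also be written as a manifest. A sketch mirroring the kubectl expose command above; since no nodePort is specified, Kubernetes assigns a random port from the 30000-32767 range, just as kubectl expose did here:

```
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
  - port: 8080
    targetPort: 8080
```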
techies@techiescorner:~$ kubectl get service
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
hello-world   NodePort    100.68.210.245   <none>        8080:30671/TCP   26s
kubernetes    ClusterIP   100.64.0.1       <none>        443/TCP          3h8m
Now we have created our service successfully. To reach it from outside, we need to open port 30671 (the NodePort) in the AWS security group.
Add the rule below to the AWS security group.
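The same rule can be added from the terminal. A sketch; the security group ID is a placeholder, so replace it with the ID of the group attached to your worker nodes. Note that opening the port to 0.0.0.0/0 exposes it to the whole internet, which is fine for a quick test but should be tightened for anything real:

```
$ aws ec2 authorize-security-group-ingress --group-id <your-node-sg-id> \
    --protocol tcp --port 30671 --cidr 0.0.0.0/0
```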
Open any of the instances' public IP addresses, with port 30671, in the browser to see the service.
We have successfully created a Kubernetes cluster on AWS and deployed a Docker container on the cluster.