
Set Up a Kubernetes Cluster with Zero Cost


This tutorial will help you set up a Kubernetes cluster (one master and two worker nodes) on your laptop. You need Windows 10 with a minimum of 8 GB of RAM. We will bootstrap all three machines using Vagrant and Oracle VirtualBox.



To start, install the latest versions of Vagrant, VirtualBox, and Git on your Windows machine and reboot it. Make sure VT-x virtualization is enabled in the BIOS. Then create a directory, say k8, and create three folders inside it: master, worker1, and worker2.
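The directory layout above can be created in one command from Git Bash (the folder names are the ones used throughout this tutorial):

```shell
# create the working directory and one folder per virtual machine
mkdir -p k8/master k8/worker1 k8/worker2
ls k8
```

Each folder will hold the Vagrantfile for its machine, so vagrant commands for a given box are always run from that box's folder.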

Create Virtual Machines

  • Follow the steps below to create three ubuntu-18.04 boxes with Vagrant. Before proceeding, we need to know the IP range of our VirtualBox Host-Only Network adapter. This matters because each box Vagrant creates gets a NAT interface on eth0 by default, and we want to add our own private IPs so the machines can communicate with each other and still reach the outside network (internet). You can find the adapter's IP by running ipconfig in a command prompt.

  1. Create a box for master:

Create a Vagrantfile in the master folder with the following content. Make sure the file is named exactly Vagrantfile, with no extension (a plain file, not a .txt). As you can see, we give private_network an IP from our host-only network and 2 GB of RAM.

Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-18.04"
  config.vm.network "private_network", ip: ""   # fill in an IP from your Host-Only network

  config.vm.provider :virtualbox do |v|
    v.customize ["modifyvm", :id, "--memory", 2048]
  end
end



Launch your command prompt or Git Bash, cd into the master folder where your Vagrantfile is located, and run vagrant up. Creating the box takes a few minutes, depending on your internet speed. You can log in to the box using the vagrant ssh command from the master folder.

Change Hostname:

  1. sudo hostname master
  2. sudo vi /etc/hostname and replace 'vagrant' with 'master'
  3. Reboot the machine after exiting the box:

vagrant reload (reboots your box)

  2. Create worker1 and worker2

Create a separate Vagrantfile in the worker1 and worker2 folders and run vagrant up from each respective directory.

Worker1 Vagrantfile:

Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-18.04"
  config.vm.network "private_network", ip: ""   # fill in another IP from your Host-Only network

  config.vm.provider :virtualbox do |v|
    v.customize ["modifyvm", :id, "--memory", 1024]
  end
end



Change Hostname:

  1. sudo hostname worker1 (or worker2)
  2. sudo vi /etc/hostname and replace 'vagrant' with 'worker1' or 'worker2'
  3. Reboot the machine after exiting the box:

vagrant reload (reboots your box)

Worker2 Vagrantfile:

Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-18.04"
  config.vm.network "private_network", ip: ""   # fill in a third IP from your Host-Only network

  config.vm.provider :virtualbox do |v|
    v.customize ["modifyvm", :id, "--memory", 1024]
  end
end



Note that each worker uses a different private IP and only 1 GB of RAM.

Setup Kubernetes Cluster:

Perform the following steps on all three nodes:

  • Disable Swap

sudo swapoff -a

sudo vi /etc/fstab   # comment out the swap entry
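The fstab edit can also be done non-interactively with sed. Here is a sketch run against a sample file (fstab.sample is a stand-in; on the nodes you would target /etc/fstab itself, and your swap entry may look different):

```shell
# build a sample fstab with a root and a swap entry
printf '/dev/sda1 / ext4 defaults 0 1\n/swap.img none swap sw 0 0\n' > fstab.sample

# comment out every line that mentions swap, leaving other entries untouched
sed -i '/swap/ s/^/#/' fstab.sample
cat fstab.sample
```

Commenting out the entry (rather than deleting it) keeps the original line around in case you ever want to re-enable swap.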

  • Add the Docker Repository on all three servers.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"


  • Add the Kubernetes repository on all three servers.

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF


  • Install Docker, Kubeadm, Kubelet, and Kubectl on all three servers.

sudo apt-get update

sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.12.2-00 kubeadm=1.12.2-00 kubectl=1.12.2-00

sudo apt-mark hold docker-ce kubelet kubeadm kubectl

  • Enable bridge networking

echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf

sudo sysctl -p
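A small caveat: tee -a appends unconditionally, so running the step twice leaves a duplicate line in /etc/sysctl.conf. A sketch of an idempotent version, shown on a stand-in file (sysctl.sample plays the role of /etc/sysctl.conf; the guard is run twice to demonstrate that the line is added only once):

```shell
conf=sysctl.sample
touch "$conf"

# append the setting only if an identical line is not already present
for run in 1 2; do
  grep -qxF 'net.bridge.bridge-nf-call-iptables=1' "$conf" || \
    echo 'net.bridge.bridge-nf-call-iptables=1' >> "$conf"
done

grep -c 'net.bridge.bridge-nf-call-iptables=1' "$conf"   # one occurrence despite two runs
```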

  • Run the following only on the master

sudo kubeadm init --pod-network-cidr=<pod-network-cidr> --apiserver-advertise-address=<master-ip>

This prints a kubeadm join command containing a token and a discovery hash, which the worker nodes need in order to join the cluster. Please save it.

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl apply -f <pod-network-manifest-url>   # installs a pod network add-on, such as Flannel

kubectl get nodes (wait a little; the node status will change to Ready)

If the node does not reach the Ready state, check /var/log/syslog and the status of the kubelet service (sudo systemctl status kubelet.service).

  • Now join both workers by running the following command on each worker:

sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash <hash>

(The token and hash values come from the output of the kubeadm init command you ran on the master.)

Congratulations! You have now successfully set up the Kubernetes cluster. You can verify this by running kubectl get nodes on the master.

We will use YAML files to deploy the containers.

In this scenario, we will use kubectl to create and launch a Deployment and access it through a Service.

  • First, create deployment.yaml, which uses an image to create a container and binds it on port 80.

  vi deployment.yaml

  apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    name: webapp1
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: webapp1
      spec:
        containers:
        - name: webapp1
          image: katacoda/docker-http-server:latest
          ports:
          - containerPort: 80

  • Create the deployment by running:

 kubectl create -f deployment.yaml

  • Check it with kubectl get deployment
  • Create a Service of type NodePort, which lets you access the container through the node IPs.

vi service.yaml

   apiVersion: v1
   kind: Service
   metadata:
     name: webapp1-svc
     labels:
       app: webapp1
   spec:
     type: NodePort
     ports:
     - port: 80
       nodePort: 30080
     selector:
       app: webapp1

   kubectl create -f service.yaml

Once the pod is running, you can access it using any node IP on the NodePort, e.g. http://<node-ip>:30080.

We have now successfully deployed a container. Next, we will update deployment.yaml to increase the number of running instances. The replicas line should now read:

replicas: 4

kubectl apply -f deployment.yaml

Now you can access these containers through either node's IP.

With this, we have successfully set up a Kubernetes cluster and deployed containers using YAML.