Set Up a Kubernetes Cluster at Zero Cost
This tutorial will help you set up a Kubernetes cluster (one master and two worker nodes) on your laptop. You need Windows 10 with a minimum of 8 GB of RAM. We will bootstrap all three machines using Vagrant and Oracle VirtualBox.
Master | Workers |
192.168.56.30 | 192.168.56.31 and 192.168.56.32 |
To start, install the latest versions of Vagrant, VirtualBox, and Git on your Windows machine and reboot it. Make sure VT-x virtualization is enabled in the BIOS. Then create a directory/folder, say k8, and create three folders inside it: master, worker1, and worker2.
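The folder layout can be created in one step from a command prompt or Git Bash; the name k8 is just the example used in this tutorial:

```shell
# Create the working directory with one folder per node
mkdir -p k8/master k8/worker1 k8/worker2
```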
Create Virtual Machines
- Follow the steps below to create three ubuntu-18.04 boxes with Vagrant. Before proceeding, you should know the IP of your VirtualBox Host-Only Network Ethernet adapter. This matters because every box Vagrant creates gets 10.0.2.15 on eth0 by default, and we want to assign our own private IPs so the machines can reach each other internally as well as the outside network (internet). You can find this IP by running ipconfig in a command prompt.
- Create a box for master:
You need to create a file named Vagrantfile in the master folder with the content below. Make sure the Vagrantfile has no extension (a plain file, not a text file). As you can see, we set the private_network IP on our host-only network and give the box 2 GB of RAM.
Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-18.04"
  config.vm.network "private_network", ip: "192.168.56.30"
  config.vm.provider :virtualbox do |v|
    v.customize ["modifyvm", :id, "--memory", 2048]
  end
end
Launch your command prompt or Git Bash, cd to the master folder where your Vagrantfile is located, and run vagrant up. Depending on your internet speed, it takes a few minutes to create the box. You can log in to the box with vagrant ssh from the master folder.
Change Hostname:
- sudo hostname master
- sudo vi /etc/hostname and replace 'vagrant' with 'master'
- Reboot the machine after exiting from the box
vagrant reload (reboots your box)
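If you prefer a non-interactive edit over vi, the hostname replacement can be sketched with sed. The demo below works on a scratch copy; on the real box you would target /etc/hostname with sudo:

```shell
# Demo on a scratch copy; on the box: sudo sed -i 's/vagrant/master/' /etc/hostname
echo "vagrant" > /tmp/hostname
sed -i 's/vagrant/master/' /tmp/hostname
cat /tmp/hostname   # master
```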
- Create worker1 and worker2
Create a separate Vagrantfile in worker1 and worker2 and run vagrant up from each respective directory.
Worker1 Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-18.04"
  config.vm.network "private_network", ip: "192.168.56.31"
  config.vm.provider :virtualbox do |v|
    v.customize ["modifyvm", :id, "--memory", 1024]
  end
end
Change Hostname:
- sudo hostname worker1 (or worker2)
- sudo vi /etc/hostname and replace 'vagrant' with 'worker1' or 'worker2'
- Reboot the machine after exiting from the box
vagrant reload (reboots your box)
Worker2 Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-18.04"
  config.vm.network "private_network", ip: "192.168.56.32"
  config.vm.provider :virtualbox do |v|
    v.customize ["modifyvm", :id, "--memory", 1024]
  end
end
Please note that each worker has a different IP and only 1 GB of RAM.
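As an aside, the three per-folder Vagrantfiles can be collapsed into a single multi-machine Vagrantfile using config.vm.define. This is only a sketch of an equivalent setup, not something the tutorial requires; note that setting node.vm.hostname also makes the manual /etc/hostname edit unnecessary:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-18.04"

  # name => [private IP, RAM in MB], matching the three boxes above
  nodes = { "master"  => ["192.168.56.30", 2048],
            "worker1" => ["192.168.56.31", 1024],
            "worker2" => ["192.168.56.32", 1024] }

  nodes.each do |name, (ip, ram)|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", ip: ip
      node.vm.provider :virtualbox do |v|
        v.customize ["modifyvm", :id, "--memory", ram]
      end
    end
  end
end
```

With this layout you run vagrant up master (or worker1/worker2) from a single folder instead of three.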
Set Up the Kubernetes Cluster:
Follow these steps on all three nodes:
- Disable Swap
sudo swapoff -a
sudo vi /etc/fstab # comment out the swap entry
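Commenting out the swap entry can also be done non-interactively. The sketch below runs against a scratch copy of fstab with made-up entries; on the box you would run the sudo sed -i variant against /etc/fstab:

```shell
# Demo on a scratch copy; on the box: sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
printf '/dev/sda1 / ext4 defaults 0 1\n/dev/sda2 none swap sw 0 0\n' > /tmp/fstab
sed -i '/\sswap\s/ s/^/#/' /tmp/fstab    # prefix any swap line with '#'
grep swap /tmp/fstab
```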
- Add the Docker Repository on all three servers.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
- Add the Kubernetes repository on all three servers.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
- Install Docker, Kubeadm, Kubelet, and Kubectl on all three servers.
sudo apt-get update
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.12.2-00 kubeadm=1.12.2-00 kubectl=1.12.2-00
sudo apt-mark hold docker-ce kubelet kubeadm kubectl
- Enable bridge networking
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
- Run the following only on the master
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.30
This prints a kubeadm join command containing a token and a discovery hash, which the worker nodes need to join the cluster. Please keep it safe.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
kubectl get nodes (wait for some time; its status will become Ready)
If the status does not reach Ready, check /var/log/syslog and the status of the kubelet service (sudo systemctl status kubelet.service).
- Now it is time to join both workers by running the following command on each worker:
sudo kubeadm join <Master_IP>:6443 --token <token> --discovery-token-ca-cert-hash <hash>
(You will find these values in the output of the kubeadm init command you ran on the master.)
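If you lose the kubeadm init output, a new token can be created on the master with kubeadm token create, and the CA certificate hash can be recomputed from the cluster CA. The helper below is a sketch; the /etc/kubernetes/pki/ca.crt path is the kubeadm default and may differ on your setup:

```shell
# Recompute the sha256 discovery hash from a CA certificate (PEM file passed as $1)
discovery_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# On the master:
# echo "sha256:$(discovery_hash /etc/kubernetes/pki/ca.crt)"
```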
Congratulations! You have now successfully set up the Kubernetes cluster. You can verify by running kubectl get nodes on the master.
We will use YAML files to deploy the containers.
In this scenario, we will use kubectl to create and launch a Deployment and access it through a Service.
- First, we will create deployment.yaml, which uses an image to create a container bound to port 80:
vi deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: webapp1
    spec:
      containers:
      - name: webapp1
        image: katacoda/docker-http-server:latest
        ports:
        - containerPort: 80
- Create the deployment by running:
kubectl create -f deployment.yaml
- kubectl get deployment
- Create a Service of type NodePort, which lets you access the container using the node IPs.
vi service.yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp1-svc
  labels:
    app: webapp1
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
  selector:
    app: webapp1
kubectl create -f service.yaml
Once the pod is running, you can access it using one of the node IPs, for example http://192.168.56.31:30080.
We have now successfully deployed a container. Next we will update deployment.yaml to increase the number of running instances; for example, change the replicas line to:
replicas: 4
kubectl apply -f deployment.yaml
Now you can access these containers using either node's IP on port 30080.
With this, we have successfully set up a Kubernetes cluster and deployed containers using YAML.