How to Set Up a Multinode Kubernetes Cluster on Vagrant


In this blog I have put together a step-by-step guide on how to set up a multinode Kubernetes cluster on Vagrant. The cluster will be set up using kubeadm.

What is Kubeadm?

Kubeadm is a command-line utility that helps you bootstrap a Kubernetes cluster that conforms to best practices. It helps with installing and configuring a Kubernetes cluster, and it performs the actions necessary to get a minimum viable, secure cluster up and running in a user-friendly way. Kubeadm's scope is limited to the local node filesystem and the Kubernetes API, and it is intended to be a composable building block for higher-level tools.
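To give a feel for that workflow (the exact flags used in this setup appear later in master.sh), a typical kubeadm bootstrap looks roughly like the following; the IP, token and hash below are placeholders, not values from this cluster:

# On the control-plane node: initialise the cluster
$ sudo kubeadm init --apiserver-advertise-address=<master-ip> --pod-network-cidr=192.168.0.0/16

# On each worker node: join using the command printed by kubeadm init
$ sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>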

What is Vagrant?

Vagrant is a tool for building and managing virtual machine environments in a single workflow. With an easy-to-use workflow and a focus on automation, Vagrant lowers development environment setup time, increases production parity, and makes the “works on my machine” excuse a relic of the past.

There are many third-party tools to set up a Kubernetes cluster. I looked at two options, Minikube and Kubeadm. Minikube doesn’t support a multi-node setup, so I picked Kubeadm.

I followed the Kubeadm guide and set up a three-node cluster (a single master and two worker nodes) manually first, and then automated it with Vagrant so that I can easily recreate the cluster later with no effort.

If you want to learn more about the what and why of Vagrant, check out the following blog:

Know more About Vagrant

Kubernetes-Kubeadm Vagrant Github Repository

All the scripts and the kubeadm Vagrantfile are present in the GitHub repo. Clone the repo and just run the command vagrant up.

$ git clone https://github.com/ahmadjubair33/vagrant-kubernetes.git

Explanation of the Vagrantfile and the .sh Files

Here is my Vagrantfile; first, let me explain what it does.

NUM_WORKER_NODES=2
IP_NW="10.0.0."
IP_START=10

Vagrant.configure("2") do |config|
    config.vm.provision "shell", inline: <<-SHELL
        apt-get update -y
        echo "$IP_NW$((IP_START))  master-node" >> /etc/hosts
        echo "$IP_NW$((IP_START+1))  worker-node01" >> /etc/hosts
        echo "$IP_NW$((IP_START+2))  worker-node02" >> /etc/hosts
    SHELL
    config.vm.box = "bento/ubuntu-21.10"
    config.vm.box_check_update = true

    config.vm.define "master" do |master|
      master.vm.hostname = "master-node"
      master.vm.network "private_network", ip: IP_NW + "#{IP_START}"
      master.vm.provider "virtualbox" do |vb|
          vb.memory = 4048
          vb.cpus = 2
          vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      end
      master.vm.provision "shell", path: "scripts/common.sh"
      master.vm.provision "shell", path: "scripts/master.sh"
    end

    (1..NUM_WORKER_NODES).each do |i|
      config.vm.define "node0#{i}" do |node|
        node.vm.hostname = "worker-node0#{i}"
        node.vm.network "private_network", ip: IP_NW + "#{IP_START + i}"
        node.vm.provider "virtualbox" do |vb|
            vb.memory = 2048
            vb.cpus = 1
            vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
        end
        node.vm.provision "shell", path: "scripts/common.sh"
        node.vm.provision "shell", path: "scripts/node.sh"
      end
    end
  end
   

As you can see, I have assigned the following IPs to the nodes. Each IP is added, along with its hostname, to the /etc/hosts file of every node by a common shell provisioner block that runs on all the VMs (the resulting entries are shown after the list below).

  • 10.0.0.10 (master)
  • 10.0.0.11 (node01)
  • 10.0.0.12 (node02)
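After provisioning, each VM’s /etc/hosts should therefore contain entries along these lines (illustrative; the exact spacing depends on the echo commands in the Vagrantfile):

10.0.0.10  master-node
10.0.0.11  worker-node01
10.0.0.12  worker-node02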

Also, the worker node block is in a loop. So if you want more than two worker nodes, or only one, replace 2 with the desired number in NUM_WORKER_NODES. If you add more nodes, make sure you also add their IPs to the /etc/hosts entries.

For example, for three worker nodes, set NUM_WORKER_NODES=3 at the top of the Vagrantfile, so that the loop expands to (1..3):

NUM_WORKER_NODES=3
(1..NUM_WORKER_NODES).each do |i|
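You would also add a matching /etc/hosts line for the new node to the common shell block, following the same pattern as the existing echo commands (this exact line is not in the repo; it is just the next entry in the sequence):

echo "$IP_NW$((IP_START+3))  worker-node03" >> /etc/hosts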

When you run the Vagrant command to configure the cluster, three shell scripts get called as provisioners during the Vagrant run. What are these shell files? I will explain them one by one.

Common.sh

Here we disable swap, since that is a requirement for kubeadm. All the other commands are pretty self-explanatory: overall, the script installs Docker, and it installs version 1.20 of kubeadm, kubelet and kubectl. If you want the latest version of Kubernetes, just remove the version pin from the install command.
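The full script is in the repo; as a rough sketch (simplified, not the exact contents of common.sh, with the 1.20.6-00 patch version being my assumption based on the 1.20 pin mentioned above), the essential steps look like this:

#!/bin/bash
# common.sh (sketch) - runs on the master and all worker nodes

# Disable swap, which kubeadm requires
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Install Docker as the container runtime
sudo apt-get update -y
sudo apt-get install -y docker.io
sudo systemctl enable --now docker

# Install kubeadm, kubelet and kubectl pinned to 1.20 (patch version assumed)
sudo apt-get install -y apt-transport-https ca-certificates curl
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet=1.20.6-00 kubeadm=1.20.6-00 kubectl=1.20.6-00
sudo apt-mark hold kubelet kubeadm kubectl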

Master.sh

Here I have declared three variables: MASTER_IP, the pod network POD_CIDR, and the system hostname variable NODENAME. These variables get substituted into the kubeadm init command. If you notice, POD_CIDR is in the 192.168.x.x range. It is essential to have non-overlapping IP address ranges for the nodes and pods, otherwise you might face routing issues while deploying and accessing applications.

Then we copy the generated kubeconfig to the home directory so that kubectl commands can be executed. In Vagrant, the /vagrant directory on every VM is a shared host folder containing the Vagrantfile; this means even the worker nodes see the same /vagrant directory, because we create all the VMs from a single Vagrantfile. So we use this directory to hold all the configs the worker nodes need to connect to the master.

After that, it installs the Calico network plugin, the metrics server, and the Kubernetes dashboard.
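Again as a rough sketch rather than the exact master.sh: MASTER_IP, POD_CIDR and NODENAME follow the description above, /vagrant/configs is the shared folder used to hand the join command to the workers, and the Calico manifest URL is illustrative:

#!/bin/bash
# master.sh (sketch) - runs only on the master node

MASTER_IP="10.0.0.10"
POD_CIDR="192.168.0.0/16"
NODENAME=$(hostname -s)

# Initialise the control plane
sudo kubeadm init --apiserver-advertise-address=$MASTER_IP \
  --pod-network-cidr=$POD_CIDR --node-name "$NODENAME"

# Copy the kubeconfig to the home location so kubectl works
mkdir -p "$HOME"/.kube
sudo cp -i /etc/kubernetes/admin.conf "$HOME"/.kube/config
sudo chown "$(id -u)":"$(id -g)" "$HOME"/.kube/config

# Save the kubeconfig and the join command into the shared /vagrant folder
mkdir -p /vagrant/configs
cp -f "$HOME"/.kube/config /vagrant/configs/config
kubeadm token create --print-join-command > /vagrant/configs/join.sh
chmod +x /vagrant/configs/join.sh

# Install the Calico network plugin
# (the metrics server and Kubernetes dashboard manifests are applied the same way)
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml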

Node.sh

It reads the join.sh command from the shared configs folder and joins the worker to the master node. It also copies the kubeconfig file to /home/vagrant/.kube so that kubectl commands can be executed on the workers.
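A minimal sketch of what node.sh does, assuming join.sh and the kubeconfig were written to /vagrant/configs by master.sh as described above:

#!/bin/bash
# node.sh (sketch) - runs on every worker node

# Join the cluster using the command exported by the master
sudo /bin/bash /vagrant/configs/join.sh

# Copy the kubeconfig so kubectl also works on the worker
sudo -u vagrant mkdir -p /home/vagrant/.kube
sudo cp -i /vagrant/configs/config /home/vagrant/.kube/config
sudo chown vagrant:vagrant /home/vagrant/.kube/config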

Launch the Machines

Now that we have enough understanding of the scripts, let’s run the command to create the Kubernetes cluster.

First, cd into the cloned directory and execute the below command.

$ vagrant up

When you run it for the first time, Vagrant will download the specified Ubuntu image from Vagrant Cloud.

Now run the below command to log in to the master node.

$ vagrant ssh master

To see the status of all the nodes, run the following command.

$ kubectl get nodes
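Once all three VMs have been provisioned, the output should look something like this (the node names come from the Vagrantfile; the exact version, roles column and ages will vary with your setup):

NAME            STATUS   ROLES                  AGE   VERSION
master-node     Ready    control-plane,master   10m   v1.20.6
worker-node01   Ready    <none>                 8m    v1.20.6
worker-node02   Ready    <none>                 6m    v1.20.6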

Now you are good to deploy applications on the Kubernetes cluster.
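For a quick smoke test (just an example workload, not part of the repo), you can deploy nginx, expose it, and check that the pod and service come up:

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pods,svc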

If you want to shut down the VMs, run the below command:

$ vagrant halt

Whenever you need the cluster again, just execute the vagrant up command and your cluster will be ready.

To destroy the VMs, run the below command.

$ vagrant destroy

It is good to have a local Kubernetes cluster setup that you can spin up and tear down whenever you need it, without spending much time. To set up the Kubernetes cluster on Vagrant, all you have to do is clone the repo and run the vagrant up command. Thank you for sticking with it to the end. If you liked this blog, please do share it.

References:

https://www.vagrantup.com/docs

Written by

Jubair Ahmad is a Software Consultant (DevOps) at Knoldus Inc. He loves learning new technologies and also has an interest in playing cricket.
