A Kubernetes cluster on VirtualBox

How to create a Kubernetes cluster on VirtualBox the proper way

Ani Sinanaj
Caffeina Developers


Kubernetes (background photo from Dhruv Deshmukh, Unsplash)

Why did I do this?

A few months after I joined Caffeina, the CTO (Stefano Azzolini) mentioned Docker in one of our weekly meetings. I’m primarily a mobile developer at the company and didn’t really get what it was, so I set out to understand it. I started learning Docker and built an infrastructure, along with some tools, to help me deploy my projects and maintain them easily. Docker by itself, though, doesn’t manage a cluster of servers. That’s where Kubernetes comes in.

Kubernetes automatically balances a project (multiple Docker containers) across the cluster’s nodes, scheduling it onto the most suitable node based on available CPU, RAM, and disk space, and it handles scaling as well.

Before renting a few servers and installing Kubernetes, I wanted to test it out on VirtualBox. Of course, you could use Minikube or Docker’s built-in version (only on Docker Edge for now), but I wanted to replicate a real setup as closely as possible.

Setting up the environment.

Well, obviously the first thing to do is install VirtualBox. You can either download it from the website and install it manually, or, if you’re on a Mac with brew installed, just run brew cask install virtualbox and it should work.

The next step is to download an operating system. Most Linux distributions should work, but I decided to go with Ubuntu Server.

I downloaded Ubuntu and installed it on one virtual machine, setting both the username and the password to toor. Since this was going to be a test, strong security didn’t matter to me.

I recommend setting up one machine and then cloning it, instead of installing the same things over and over from scratch.

So now that I have a clean installation of ubuntu-server, I’m going to install the basics.

Executing commands.

There are two ways of achieving this. Once you start the machine, you can log in using toor as the username and toor as the password (or whatever you set them to) and start running the commands needed to set up the machine. But there are some issues with this approach.

  • You can’t copy and paste from and to the host system.
  • You’d need to install the VirtualBox Guest Additions, which, without a GUI, isn’t that easy.
  • After installing the Guest Additions, you’d have to share a folder between the machine and the host. This also isn’t the easiest thing to do.

The second approach, which I prefer, is to ssh into the machine from the host.

To make this happen, you need to add a bridged network adapter to the machine (there’s a problem with this one, which I’ll explain later), or a host-only adapter, so the host can communicate with the virtual machine.

Once that is done, I can start the machine, log in, and run ifconfig to see which IP address I can use to reach the VM.
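For example (ifconfig may be missing on newer Ubuntu releases; ip addr is the modern equivalent):

$ ifconfig | grep "inet "
$ ip addr show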

Now that I have a way of accessing the virtual machine, I can minimise it and continue from the terminal as follows.

$ ssh toor@192.168.1.155

Note that with the bridged adapter the IP may change on every boot, since it’s assigned by your router’s DHCP server. In any case, it will be different from the one above.

Installation.

Now that I have a comfortable way to access the machine I can move on and install the necessary software.

First things first: to install software we need sudo. Since we’re going to execute a lot of commands that need root privileges, I’m going to run sudo su.

$ sudo su -

The - starts a login shell, so root’s .profile (or .bashrc) is loaded.

The following code is self-explanatory.

$ apt-get update
$ apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    software-properties-common

Now we can add Docker, which Kubernetes needs in order to work. The steps are: add the repository key, add the repository to the sources list, and finally install Docker.

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
$ add-apt-repository \
    "deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
    $(lsb_release -cs) \
    stable"
$ apt-get update
$ apt-get install -y \
    docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}')

We’re going to do the same thing for the Kubernetes tools: kubeadm, kubectl, and kubelet.

$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ apt-get update
$ apt-get install -y kubelet kubeadm kubectl

There’s a tool called crictl that Kubernetes uses, but it isn’t required; kubeadm just throws a warning when it’s missing. I’ve tested it and the system works without crictl, but I don’t like warnings, so I decided to try installing it.

It turns out crictl needs a more recent version of Go than the one apt-get provides, so we have to install Go manually before installing crictl.

First we download Go, untar it, move it to /usr/local, and finally set some environment variables.

$ curl -O https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz
$ tar -xvf go1.8.linux-amd64.tar.gz
$ mv go /usr/local
$ echo "export GOROOT=/usr/local/go" >> ~/.profile
$ echo "export GOPATH=\$HOME/go" >> ~/.profile
$ echo "export PATH=\$PATH:\$GOROOT/bin:\$GOPATH/bin" >> ~/.profile
$ source ~/.profile

We need GOROOT so Go knows where its installation lives, GOPATH is where downloaded applications like crictl get built, and to be able to use go from anywhere we add both of their bin directories to the PATH environment variable.

Check that Go works by executing go version.
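It should print something like this (the exact output depends on the version and architecture you installed):

$ go version
go version go1.8 linux/amd64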

Keep in mind that you have to add these variables for both the root and toor users.

Finally, install crictl:

$ go get github.com/kubernetes-incubator/cri-tools/cmd/crictl

Configuration.

Some other things are required to make the system work. First of all, swap must be disabled. To do that, first find the swap partition like this:

$ cat /proc/swaps

Then run swapoff -a to actually turn it off.
And finally, remove the swap record from /etc/fstab so it doesn’t come back after a reboot.
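One quick way to do that (a sketch: this comments out every line mentioning swap, so double-check /etc/fstab afterwards):

$ sed -i '/ swap / s/^/#/' /etc/fstab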

Secondly, both Kubernetes and Docker must use the same cgroup driver, which can be either cgroupfs or systemd. I tried cgroupfs first but then switched to systemd.

Use the command below to check which cgroup driver Docker is using.

$ docker info | grep -i cgroup

Docker may not have an explicit cgroup driver configured at all.
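On a fresh install the output typically shows the default driver:

Cgroup Driver: cgroupfs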

And use this one for Kubernetes:

$ cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

To update both to systemd, execute the following:

$ sed -i "s/\$KUBELET_EXTRA_ARGS/\$KUBELET_EXTRA_ARGS\ --cgroup-driver=systemd/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
$ cat <<EOF >/etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

Finally, reload Docker and the kubelet, or just reboot the machine.

$ systemctl daemon-reload
$ systemctl restart kubelet
$ systemctl restart docker

Now my system is all set up and I can configure Kubernetes. But before doing so, since the configuration differs depending on whether a machine is the master or a node, this is the right time to clone our machine.

Make sure to select “Generate new MAC addresses” when cloning, and check that the clones really have different MAC addresses and product UUIDs by executing the commands below on each machine.

$ ip link
$ sudo cat /sys/class/dmi/id/product_uuid
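If you’d rather clone from the command line, VirtualBox’s VBoxManage can do that too (a sketch; the VM names are examples, and clonevm generates new MAC addresses unless told otherwise):

$ VBoxManage clonevm "ubuntu-master" --name "ubuntu-node1" --register --mode all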

Kubernetes makes use of a network driver to handle networking between nodes, pods, and containers. There are a few different network drivers to choose from, all described in the documentation. I’m going with Flannel because it supports CPUs other than just amd64: it supports arm, 32-bit processors, and others, so I thought it was the best fit for VirtualBox.

One other reason that convinced me to use Flannel was the fact that its pod subnet is 10.244.0.0/16, which avoids conflicts with routers that use the 192.168.0.0 or 172.20.0.0 ranges.

$ kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=$CURRENT_OUTBOUND_IP

The $CURRENT_OUTBOUND_IP variable contains the IP of the machine that is reachable by the host and by the other machines. There’s a problem with the bridged network adapter here: every time I boot the machine it gets a different IP, which wouldn’t work for Kubernetes, because the nodes would keep looking for the initial IP address and never find it. That’s why I’d suggest using a host-only network adapter, plus a NAT adapter for internet access.
For testing purposes this choice works fine, because I’m deleting the machines once I confirm the system works.
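With VirtualBox’s default host-only network, which hands out addresses in the 192.168.56.0/24 range, the command might look like this (the exact address is just an example):

$ kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.101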

Flannel requires all IPv4 traffic to pass through the iptables chains, which is why we need to set net.bridge.bridge-nf-call-iptables to 1.

$ sysctl net.bridge.bridge-nf-call-iptables=1
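Set with sysctl like this, the value doesn’t survive a reboot; if you want to persist it, you can also append it to the sysctl configuration (an optional extra step):

$ echo "net.bridge.bridge-nf-call-iptables=1" >> /etc/sysctl.conf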

Before adding the network and joining the other nodes to the master, we need to make kubectl work from a non-root user, so we exit the root subshell with exit.

We need a .kube directory in our home directory to hold the Kubernetes configuration. Then we copy the configuration generated by the init command and give it the correct permissions. All of this is done with the following lines.

$ mkdir -p $HOME/.kube
$ sudo cp -if /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Finally, to complete the master node configuration, we must add the network driver. The command is very simple.

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

At this point we log into the nodes and run the join command.

You should have copied it from the output of the kubeadm init command, which prints the tokens as well. The command should look something like this:

$ kubeadm join $IP --token $TOKEN --discovery-token-ca-cert-hash $DISCOVERY_TOKEN

Here $IP is the IP of the master node; $TOKEN is the token generated by kubeadm init, valid for only 24 hours since it’s just used to authenticate the nodes with the master; and $DISCOVERY_TOKEN is the hash of the CA certificate.

If you forget to copy the join command from the init results, you can regenerate the token or read the existing one from the configuration.
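If you do, kubeadm itself can help: it can list the existing tokens or, on recent versions, print a fresh join command outright.

$ kubeadm token list
$ kubeadm token create --print-join-command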

If you don’t want to include the discovery token, you can replace --discovery-token-ca-cert-hash with --discovery-token-unsafe-skip-ca-verification.

At this point we’d have a working Kubernetes cluster.

Management.

The best part of Kubernetes (in my opinion) is that with kubectl you can manage the whole cluster from anywhere. The way to do that is to copy the admin configuration from your master node and configure your local installation of kubectl with the same file.

The first step is to copy the config file, and the easiest way I thought of was the following. This must be executed on the master node, obviously, either through ssh or the VirtualBox window.

$ cat $HOME/.kube/config

The next commands, instead, need to be executed on your host machine.

$ mkdir -p ~/.kube
$ touch ~/.kube/config
$ pbpaste > ~/.kube/config

The -p flag in the mkdir command means that any missing directories in the path will be created automatically. It’s not strictly needed in this case, but I left it because I’m used to doing it this way.
pbpaste is a macOS command that prints the contents of the clipboard, so this assumes you copied the output of the cat command above.
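Alternatively, you can skip the clipboard entirely and copy the file with scp (using whichever IP your master machine currently has; the one below is from the earlier example):

$ scp toor@192.168.1.155:.kube/config ~/.kube/config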

Kubectl automatically loads the configuration file when it’s placed in ~/.kube/config, but you can also load a configuration at runtime.

$ kubectl --kubeconfig=other_configuration config view

The config view command prints the configuration.

Finally, by running kubectl get nodes we’ll see all the nodes of the cluster (nodes aren’t namespaced, so no --all-namespaces flag is needed).
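The output should look something like this (names, ages, and versions will differ):

NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    1h        v1.10.2
node1     Ready     <none>    1h        v1.10.2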

All nodes except the master will start without a role label. Check out the issue below to see why, and how to assign roles to the other nodes.
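For reference, kubectl fills the ROLES column from the node-role.kubernetes.io/* labels, so one common approach is to set such a label yourself (the node name below is an example):

$ kubectl label node node1 node-role.kubernetes.io/worker=worker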

My friend Gabriele Diener and I worked on the experiment together and he also contributed on the writing of this article.

The proper way

Using a bridged adapter isn’t exactly the right fit for this use case, as I said above. Gabriele Diener explains how to properly manage networking in VirtualBox.

Here you can find the repository with the commands above. Keep in mind that they’re meant as a reference, not a script to run blindly.

Thanks for reading this far and stay tuned for more fun articles :)
