Building Cloud Apps with Civo and Docker Part IV: Kubernetes

Written by: Lee Sylvester

In the first two articles of this series, you looked at deploying a cluster of nodes on the Civo platform and running a simple two-service application across them with Docker Compose. You then took a look at some rather rough cloud theory in the third installment, providing a foundation for thinking about how to develop and deploy your applications as scalable, fault-tolerant services.

In this article, you’ll take your exploration of cloud deployments to another level by redeploying the previous simple application using Google’s now famous Kubernetes platform.

Setting Up

Let’s start by creating three small Ubuntu nodes on Civo using the same steps from Part I of this series. Give them the names kube-manager, kube-worker1, and kube-worker2. For the purposes of this article, be sure to give them a root user. You may want to do this differently with your real applications, but it makes life a little easier for this demonstration.

Once they have been assigned their IP addresses, go ahead and provision Docker to each server as before.
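
If you used Docker Machine's generic driver in Part I, the provisioning step will look something like this for each node (substitute your own IP addresses and SSH key path; the values here are illustrative):

# docker-machine create --driver generic \
    --generic-ip-address <kube-manager-ip> \
    --generic-ssh-user root \
    --generic-ssh-key ~/.ssh/id_rsa \
    kube-manager

Repeat the same command for kube-worker1 and kube-worker2 with their respective IP addresses.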

Since you’re not going to be using Docker Swarm this time, do not link your instances together. Kubernetes provides its own means to do this, which you’ll see shortly.

Installing Kubernetes

Now that your cluster of servers is up and accessible, go ahead and SSH into the kube-manager instance using Docker Machine.

# docker-machine ssh kube-manager

Once connected, the first thing you’ll need to do is prepare the server for the Kubernetes install. You do this with:

root@kube-manager:~# sudo apt-get update && sudo apt-get install -y apt-transport-https

This ensures that the current Ubuntu install is up to date and enables apt to fetch packages over HTTPS.

I’m including sudo on each call here to minimize difficulties. Since you’re logged in as the root user, you shouldn’t need to do this, but it also doesn’t hurt.

Adding the Kubernetes Repository

By default, your Ubuntu instance has no idea how or where the Kubernetes packages live. You’ll need to fix that. Since Ubuntu doesn’t simply allow packages to be installed from just any location, you’ll first need to add the Kubernetes repository authentication key.

root@kube-manager:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Next, you need to add the Kubernetes repository path to Ubuntu’s repository source list. You can do this by first creating a new list file in the apt sources directory and supplying it with the package location. To do this, run nano (or your favorite text editor) with the path of the new list file.

root@kube-manager:~# sudo nano /etc/apt/sources.list.d/kubernetes.list

Then add the following package path as its content:

deb http://apt.kubernetes.io/ kubernetes-xenial main

Be sure to save and exit this file.
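
If you'd rather skip the editor, the same file can be created with a single command instead:

root@kube-manager:~# echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list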

Finally, update apt so that it loads this new repository list, like so:

root@kube-manager:~# sudo apt-get update

Installing the Kubernetes Binaries

You’re now ready to install Kubernetes itself. Don’t worry, the process is surprisingly painless.

root@kube-manager:~# sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni

That’s it! Kubernetes is installed! Was that so hard?

The previous line installed the core Kubernetes components: kubelet (the node agent), kubeadm (the cluster bootstrapping tool), kubectl (the command-line client), and the Container Network Interface plugins. Everything you need to run your app is now available; you simply need to configure it.

Before you do this, however, you’ll first need to repeat the above process to install Kubernetes on each of the workers. Go ahead and do that now.
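
Exactly how you do this is up to you; one approach is to run the whole sequence on each worker from your local machine with Docker Machine, along these lines (a rough sketch, assuming Docker Machine connects to the workers as root, as set up earlier):

# for node in kube-worker1 kube-worker2; do
    docker-machine ssh $node "apt-get update && apt-get install -y apt-transport-https && \
      curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
      echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' > /etc/apt/sources.list.d/kubernetes.list && \
      apt-get update && apt-get install -y kubelet kubeadm kubectl kubernetes-cni"
  done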

Configuring the Controller Node

As with Docker Swarm, Kubernetes uses a controller/agent node formation. The controller node is responsible for sending commands to each of the agent nodes (and to itself). When first setting up your cluster, you need to manually elect a controller before doing anything else. While SSH’d into the kube-manager server, go ahead and issue the following command:

root@kube-manager:~# sudo kubeadm init --pod-network-cidr 10.244.0.0/16

The kubeadm init command prepares the machine as a Kubernetes controller node. The --pod-network-cidr flag is required specifically for this article, as you’ll be using the Flannel pod network. If you don’t use Flannel in your actual projects, you won’t need to supply it. Don’t worry about this for now, as all will be explained later.

Running the above command should print out something similar to the following:

Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the
following on each node as root:
  kubeadm join --token lzyq4s.jjlcg7hm8oqk9fzl --discovery-token-ca-cert-hash sha256:63796b2147c2450f.[snip].d3d33

Wonderful! Kubernetes has written half of this article for me!

Let’s go ahead and finish preparing the controller node.

Node Configuration Directory

Kubernetes uses a configuration directory to store data about your environment. As the previous output suggested, this should be set up as a regular user rather than as root, so you’ll need to create one and make sure it has sudo capabilities.

root@kube-manager:~# adduser kubeuser
root@kube-manager:~# usermod -aG sudo kubeuser

When adding the user, you’ll be prompted to supply a password and some details. You’ll need the password whenever you call sudo while in the user context. The other details can simply be skipped by pressing Enter, if you wish. Next, change your user context to this new user:

root@kube-manager:~# su - kubeuser

You should see the user's name form part of the command prompt, like this:

kubeuser@kube-manager:~#

You can now issue the remaining commands to set up your Kubernetes controller.

kubeuser@kube-manager:~# mkdir -p $HOME/.kube
kubeuser@kube-manager:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
kubeuser@kube-manager:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Pod Networks

That was pretty much it for the controller node. However, there is still one outstanding requirement to configure your Kubernetes installation before you can establish your cluster: the pod network.

Docker Networking

If you recall from Part II of this series, you deployed two containers using a Docker Compose file. The YAML inside that file described a service called service (rather stupidly, in retrospect). Later in the same article, inspecting the deployed service revealed a network array that listed a network also called service.

When deploying Docker containers into a Swarm cluster, the cluster manager creates an overlay network and places each container within it. An overlay network is a virtual network of sorts that allows containers to communicate in the same address space. Other types of network can also be created, if you know what you’re doing.

Using Swarm and Docker Compose, it is possible and recommended to create custom networks that group services together. Each container can belong to more than one network if necessary. The result is then a means to protect those containers that do not need to be accessed from the outside world, while remaining contactable by the rest of your service architecture.
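
For illustration only (this is a hypothetical sketch, not the exact Compose file from the earlier articles), such a layout might look like this, with the backend reachable only on an internal network while the load balancer sits on both:

version: "3"
services:
  lb:
    image: leesylvester/civolb
    ports:
      - "80:80"
    networks:
      - public
      - internal
  service:
    image: leesylvester/phpinf
    networks:
      - internal
networks:
  public:
  internal: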

With Kubernetes, the same network configuration requirement applies. The considerable difference is that, while the Docker Engine manages networks for Docker Swarm, a Kubernetes cluster typically requires the use of third-party solutions.

Enter Flannel

Flannel is a third-party network solution for Kubernetes. Of the available options for providing a pod network, it is by far the simplest, which is why you’ll use it for this article.

Installing Flannel onto your controller node is super simple. Just run the following:

kubeuser@kube-manager:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubeuser@kube-manager:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

These commands run containers within Kubernetes pods that manage traffic to your own pods. If you inspect your current running Kubernetes pod list, you should see something like this:

kubeuser@kube-manager:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                   READY     STATUS              RESTARTS   AGE
kube-system   etcd-kube-manager                      1/1       Running             0          1m
kube-system   kube-apiserver-kube-manager            1/1       Running             0          1m
kube-system   kube-controller-manager-kube-manager   1/1       Running             0          1m
kube-system   kube-dns-86f4d74b45-48bmx              0/3       ContainerCreating   0          2m
kube-system   kube-flannel-ds-zdz59                  1/1       Running             0          32s
kube-system   kube-proxy-txjh8                       1/1       Running             0          2m
kube-system   kube-scheduler-kube-manager            1/1       Running             0          1m

Don’t worry if it’s not exactly the same, so long as you see pods listed.

The kube-flannel pod was started as part of the Flannel setup. Flannel stores its network configuration in etcd and uses it to keep track of how the pod network is divided up across your nodes. The result is a simple overlay network that works with minimal fuss.
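
If you're curious, you can also see that Flannel is deployed as a DaemonSet, which ensures a copy of it runs on every node that joins the cluster:

kubeuser@kube-manager:~# kubectl get daemonsets --namespace kube-system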

Creating the Cluster

Now you’re ready to join your workers to the manager. SSH into both worker machines and run the kubeadm join command that was printed when the controller node was initialized. You should be presented with a success response.
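
For example, on the first worker it will look something like this (substitute your manager's IP address and the token and hash from your own kubeadm init output; the values below are placeholders):

# docker-machine ssh kube-worker1
root@kube-worker1:~# kubeadm join <kube-manager-ip>:6443 --token <your-token> --discovery-token-ca-cert-hash sha256:<your-hash>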

To check that all is well, simply run the following on the controller node, ensuring you are in the kubeuser context.

kubeuser@kube-manager:~# kubectl get nodes

The resulting output should look a little like this:

NAME           STATUS    ROLES     AGE       VERSION
kube-manager   Ready     master    27m       v1.10.0
kube-worker1   Ready     <none>    24m       v1.10.0
kube-worker2   Ready     <none>    24m       v1.10.0

You may be tempted to simply spin up one machine and use it as a controller with no agents. This will not work out of the box: by default, Kubernetes will not schedule application pods on the controller node, so you need at least one additional agent node attached to the cluster.
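
For the record, this restriction is implemented as a taint that kubeadm places on the controller node. If you ever want to experiment with a single-node cluster, the taint can be removed, though that isn't recommended for anything beyond testing:

kubeuser@kube-manager:~# kubectl taint nodes --all node-role.kubernetes.io/master-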

Deploying the Application

Just as with Docker Swarm, Kubernetes also makes use of YAML files to deploy applications. While you can easily deploy a whole cluster with a single YAML file for Docker Swarm, it is preferable to break Kubernetes deployments into smaller chunks, providing greater flexibility and making it easier to focus on smaller details.

Deploying the Backend Application

For the backend application, you’ll deploy the phpinf image from Part I as a Kubernetes pod. Pods are an abstraction of one or more containers, providing port, volume, and other configuration data. This is a more advanced concept than a simple container, as it means multiple containers can exist together as a pseudo container, providing greater flexibility.

To deploy the phpinf application, you’ll need a pod configuration file. On the controller server, create a new document called backend.yaml and populate it with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: service
spec:
  selector:
    matchLabels:
      app: php-service
      srv: backend
  replicas: 3
  template:
    metadata:
      labels:
        app: php-service
        srv: backend
    spec:
      containers:
      - name: php-service
        image: "leesylvester/phpinf"
        ports:
        - name: http
          containerPort: 80

This seems quite a bit more complicated than the Docker Swarm example, I’m sure. The important parts to take note of here are the replica count, the image, and the ports properties.

In Kubernetes, much of the configuration you’ll work with involves the use of labels and names. Here, the container's port is given the name http. Declaring a containerPort on its own is mostly informational, but by naming the port, other configurations (such as the service you’ll define shortly) can refer to it by name rather than by number, which keeps your deployments flexible.

With this file created and saved, let’s deploy the image to the cluster.

kubeuser@kube-manager:~# kubectl create -f backend.yaml

You can check for its status by recalling the current pod list.

kubeuser@kube-manager:~# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
service-6c75bd4c8c-lhplx    1/1       Running   0          2m
service-6c75bd4c8c-2q76f    1/1       Running   0          2m
service-6c75bd4c8c-47jw9    1/1       Running   0          2m

If your status says something like ContainerCreating, then the pod is still downloading and preparing to be run. Wait a few seconds and try again.
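
If a pod appears to be stuck, you can inspect its recent events with kubectl describe, substituting one of your own pod names:

kubeuser@kube-manager:~# kubectl describe pod service-6c75bd4c8c-lhplx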

Kubernetes Services

So you now have three instances of the backend pod running and ready to receive requests, but in their current state there is no way to reach them. Kubernetes pods have no stable, discoverable address until a service is deployed to expose them on a consistent port.

Once again, to create a service, you’ll need to provide a YAML configuration file. Do this now by creating a file called service.yaml and populating it with the following:

apiVersion: v1
kind: Service
metadata:
  name: service
spec:
  selector:
    app: php-service
    srv: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: http

Again, you can launch the service by simply entering:

kubeuser@kube-manager:~# kubectl create -f service.yaml

A list of running services can be displayed by entering:

kubeuser@kube-manager:~# kubectl get services

Deploying the Frontend Application

Just as with Docker Swarm in Part I, the backend service now has an internal port ready to receive requests. What you need now is the frontend load balancer app to forward calls from the public internet.

As with the backend application, you’ll need a YAML file. This time it will include both the deployment and the service configuration together. Create a file called frontend.yaml and populate it with the following:

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: nginx-lb
    srv: frontend
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: lbhttp
    nodePort: 30001
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: nginx-lb
      srv: frontend
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx-lb
        srv: frontend
    spec:
      containers:
      - name: nginx
        image: "leesylvester/civolb"
        ports:
        - containerPort: 80
          name: lbhttp
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx","-s","quit"]

If you create this service and query the pods, you should now see both sets of containers running.

kubeuser@kube-manager:~# kubectl create -f frontend.yaml
kubeuser@kube-manager:~# kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
frontend-64bcc57c9d-nvrbg   1/1       Running   0          1m
frontend-64bcc57c9d-rqmlb   1/1       Running   0          1m
frontend-64bcc57c9d-5mpgc   1/1       Running   0          1m
service-6c75bd4c8c-lhplx    1/1       Running   0          14m
service-6c75bd4c8c-2q76f    1/1       Running   0          14m
service-6c75bd4c8c-47jw9    1/1       Running   0          14m

Now listing the services will show both the frontend and backend services.

kubeuser@kube-manager:~# kubectl get services
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
frontend     NodePort    10.108.194.216   <none>        80:30001/TCP   3m
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        16m
service      ClusterIP   10.110.192.42    <none>        80/TCP         15m

That’s it! Your application is now running.

To see it in action, visit any one of your nodes' public IP addresses on port 30001 in the browser. As with the Docker Swarm implementation, refreshing the browser should show a different service ID and IP address each time.
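
You can also test this from the command line, substituting any of your nodes' public IP addresses:

# curl http://<node-ip>:30001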

Load-Balancing Considerations

When requests are made to a node (a given server in the cluster), the frontend service routes the request to the NGINX load balancer. This request may not go to the load balancer on the same server. Likewise, the request from the load balancer to the backend application may also go to a different server via the backend service. It is the service's job to identify a suitable pod to handle the request, whichever node it happens to be running on.

There are ways to ensure that requests passed to a node stay within that node if that is your preference, provided the node is actually running a pod for the service being contacted. However, that is beyond the scope of this article. If you wish to read up on it further, try searching for externalTrafficPolicy.
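
Purely as a pointer, the setting lives on the service spec; the frontend service from earlier would look something like this with traffic pinned to the receiving node (you don't need this for this article):

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  externalTrafficPolicy: Local
  selector:
    app: nginx-lb
    srv: frontend
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: lbhttp
    nodePort: 30001
  type: NodePort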

Kubernetes is rich with configuration capabilities, of which externalTrafficPolicy is but a small part, so I would advise getting a good book on the subject if you really want to master it. I would thoroughly recommend Kubernetes in Action by Marko Lukša, as it provides a wealth of real-world information that is easy to understand and follow.

Despite all that Kubernetes can do, however, there is still something you can do outside your cluster to ensure your application is load balanced and fault tolerant: implement an external load balancer.

The Civo Load Balancer

Although Kubernetes ensures that requests to each service are load balanced across its pods, the actual request to the cluster still needs to hit a node. If you assign a domain name to the controller node and send all traffic to it, then the frontend service on that node will be hit with 100 percent of all traffic. What’s more, if that node fails, the whole application becomes unavailable, which is not good for business.

The solution is to leverage Civo’s Load Balancer service.

By supplying a domain and assigning each of the cluster nodes to it, traffic can be balanced across them using a least-connections, round-robin, or IP address hash policy. The service also provides a means to check each node's health via a given endpoint, which you can expose in your application.

This way, if a node fails, the load balancer can be sure not to route any further traffic to it until it comes back online.

Conclusion

In this article, you took a simple Docker Swarm application and transitioned it to a big-boy Kubernetes deployment on the Civo platform. While you have only just scratched the surface of what Kubernetes can do, I hope it has at least provided a sufficient starting point for your own projects and shown that Kubernetes is not so complicated and scary to work with.

In the next article of this series, you’ll take a look at some of the additional benefits of Kubernetes.

Previous articles

You can find the previous articles here:

Building Cloud Apps with Civo and Docker Part I: Setting Up the Cluster

Building Cloud Apps with Civo and Docker Part II: Stateless Applications

Building Cloud Apps with Civo and Docker Part III: Cloud Theory
