
A Simple Kubernetes Cluster on AWS - Tutorial

It’s DevOps tutorial time again! This time around we take a closer look at Kubernetes, the container orchestration package, and how we can use it to deploy and load balance clusters on AWS with Kubespray.

Kubernetes is a rising star in the DevOps world. This clever container orchestration package is making waves due to its management and configuration options - which make it ideal for load balancing. In this continuation of our DevOps tutorials, we will show you how to set up a simple Kubernetes cluster on AWS using Kubespray.

You might ask, “Why `Kubespray`?”, since there are other, more popular solutions like kops. `Kubespray` is architecture agnostic - it’s a community-driven collection of Ansible playbooks that allows us to create a Kubernetes cluster on AWS EC2, DigitalOcean, a VPS - or even bare metal.

AWS Setup

First we need to configure networking for our Kubernetes cluster.

VPC

Let’s create a `kubernetes-ironin-vpc` VPC with `10.1.0.0/16` as the CIDR block and the `DNS resolution` and `DNS hostnames` options enabled:
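If you prefer the CLI over the console, the same VPC can be created roughly like this (a minimal sketch; `VPC_ID` is a placeholder for the `VpcId` returned by the first command):

```
# create the VPC with the 10.1.0.0/16 CIDR block
aws ec2 create-vpc --cidr-block 10.1.0.0/16

# name it and enable DNS resolution and DNS hostnames
aws ec2 create-tags --resources $VPC_ID --tags Key=Name,Value=kubernetes-ironin-vpc
aws ec2 modify-vpc-attribute --vpc-id $VPC_ID --enable-dns-support Value=true
aws ec2 modify-vpc-attribute --vpc-id $VPC_ID --enable-dns-hostnames Value=true
```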

Subnets

Let’s add 2 new subnets for the VPC we just created:

1. `kubernetes-ironin-1` with CIDR set to `10.1.1.0/24` in the `eu-central-1a` availability zone

2. `kubernetes-ironin-2` with CIDR set to `10.1.2.0/24` in the `eu-central-1b` availability zone
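The equivalent CLI calls look roughly like this (a sketch; `VPC_ID` refers to the VPC created above):

```
# one subnet per availability zone
aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block 10.1.1.0/24 --availability-zone eu-central-1a
aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block 10.1.2.0/24 --availability-zone eu-central-1b
```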

Routing table

We will also need a routing table for our VPC, associated with both subnets. Let’s name it `kubernetes-ironin-routetable`.
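From the CLI, that is roughly (a sketch; `VPC_ID`, `RTB_ID`, and the subnet IDs are placeholders for the IDs from the previous steps):

```
# create the route table in our VPC and name it
aws ec2 create-route-table --vpc-id $VPC_ID
aws ec2 create-tags --resources $RTB_ID --tags Key=Name,Value=kubernetes-ironin-routetable

# associate it with both subnets
aws ec2 associate-route-table --route-table-id $RTB_ID --subnet-id $SUBNET_1A_ID
aws ec2 associate-route-table --route-table-id $RTB_ID --subnet-id $SUBNET_1B_ID
```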

Internet gateway

Finally, we need to create an internet gateway (named `kubernetes-ironin-internetgw`) that will allow us to connect to our VPC from the outside world.
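The CLI version boils down to roughly the following (a sketch; `IGW_ID`, `VPC_ID`, and `RTB_ID` are placeholders for the IDs returned earlier):

```
# create the internet gateway, name it, and attach it to our VPC
aws ec2 create-internet-gateway
aws ec2 create-tags --resources $IGW_ID --tags Key=Name,Value=kubernetes-ironin-internetgw
aws ec2 attach-internet-gateway --internet-gateway-id $IGW_ID --vpc-id $VPC_ID

# route outbound traffic from our subnets through the gateway
aws ec2 create-route --route-table-id $RTB_ID --destination-cidr-block 0.0.0.0/0 --gateway-id $IGW_ID
```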

Security groups

Now that we have a basic network setup, we can create the proper security groups for connecting to our instances.

Here is the minimal set of rules that should keep us going:

They will allow us to create a cluster using internal IPs, as well as connect to the dashboard (https://master.node.external.ip:6443/ui) from our personal machine (source `my.personal.static.ip/32`).
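A rough CLI equivalent of those rules might look like this (a sketch; `VPC_ID`, `SG_ID`, and `MY_IP` are placeholders, and your exact rule set may differ):

```
# create the "kubernetes" security group in our VPC
aws ec2 create-security-group --group-name kubernetes \
  --description "Kubernetes cluster" --vpc-id $VPC_ID

# allow all traffic between cluster members (the internal IPs)
aws ec2 authorize-security-group-ingress --group-id $SG_ID \
  --protocol all --source-group $SG_ID

# allow SSH and the dashboard/API port (6443) from our personal static IP
aws ec2 authorize-security-group-ingress --group-id $SG_ID \
  --protocol tcp --port 22 --cidr $MY_IP/32
aws ec2 authorize-security-group-ingress --group-id $SG_ID \
  --protocol tcp --port 6443 --cidr $MY_IP/32
```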

Now we can create our EC2 instances (we suggest at least 2x `t2.small` instances running Ubuntu 16.04) in the `eu-central-1a` and `eu-central-1b` availability zones, assigned to the `kubernetes` security group we created earlier.
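Launching them from the CLI would look roughly like this (a sketch; the Ubuntu 16.04 AMI ID, key name, and subnet/security group IDs are placeholders you need to fill in for your region):

```
# one Ubuntu 16.04 t2.small instance in the first subnet/AZ;
# repeat with the second subnet ID for eu-central-1b
aws ec2 run-instances --image-id $UBUNTU_1604_AMI_ID --count 1 \
  --instance-type t2.small --key-name your_key \
  --security-group-ids $SG_ID --subnet-id $SUBNET_1A_ID \
  --associate-public-ip-address
```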

After successful creation, we can connect to all instances via SSH and run `sudo apt-get update` to refresh the package index, so that the playbooks can install new packages correctly.

Creating an inventory

Note: Before cloning `Kubespray`, make sure that `Python3` with `pip` is installed.

1. Clone https://github.com/kubernetes-incubator/kubespray to your local drive

2. Install python dependencies:

```
pip3 install -r requirements.txt
```

3. Create an `inventory/inventory.cfg` file similar to the one below:

```
[all]
ip-10-1-1-2.eu-central-1.compute.internal ansible_host=ip1 ip=10.1.1.2 ansible_user=ubuntu ansible_python_interpreter=/usr/bin/python3
ip-10-1-1-3.eu-central-1.compute.internal ansible_host=ip2 ip=10.1.1.3 ansible_user=ubuntu ansible_python_interpreter=/usr/bin/python3
[kube-master]
ip-10-1-1-2.eu-central-1.compute.internal
[kube-node]
ip-10-1-1-3.eu-central-1.compute.internal
[etcd]
ip-10-1-1-2.eu-central-1.compute.internal
[k8s-cluster:children]
kube-node
kube-master
```

Provide each instance’s external IP in the `ansible_host` attribute so Ansible knows how to connect to your instances.

Setting up the cluster

Once you are ready, run the following command to provision your servers:

```
ansible-playbook -i inventory/inventory.cfg -b -v cluster.yml --private-key=~/.ssh/your_key
```

After provisioning succeeds, you can log in to the `master` instance and check that the nodes are connected correctly using the `kubectl get nodes` command. It should give you output similar to this:

```
NAME            STATUS    ROLES     AGE       VERSION
ip-10-1-1-184   Ready     node      3m        v1.9.1+coreos.0
ip-10-1-1-216   Ready     master    3m        v1.9.1+coreos.0
```

Deploying an app

In this tutorial we will deploy a simple Node.js hello world app. Let’s log in to the `master` instance and create a new deployment file:

```
# template.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  labels:
    app: node-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
      - name: node-app
        image: lmironin/hello-docker:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    app: node-app
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
    protocol: TCP
```

This template defines a `Deployment` and a `Service`. A `Deployment` is a rule for running pods on the cluster. In our case, we have 2 replicas running the `lmironin/hello-docker` app, which exposes port `8080`.

A `Service` with the `NodePort` type exposes the service on every node, so that we can access our pods using `NodeIp:NodePort` - even as the pods change (when you delete a pod and the deployment creates a new one, it will have a different IP address).

Let’s create our deployment and service:

```
$ kubectl create -f template.yml
> deployment "app-deployment" created
> service "app-service" created
```

You can check if pods are running with the `kubectl get pods` command. This should give you output similar to below:

```
$ kubectl get pods
> NAME                              READY     STATUS    RESTARTS   AGE
> app-deployment-85c868cc55-44s5m   1/1       Running   0          36m
> app-deployment-85c868cc55-b6kjz   1/1       Running   0          36m
```

If you remove one of the pods, the deployment will automatically spin up a new one to preserve the number of running replicas. You can try this yourself:

1. Remove a pod by running `kubectl delete pod <pod-name>` (you can take the pod’s name from the output above)

2. Check running pods again: `kubectl get pods`. It should give you output similar to this:

```
$ kubectl get pods
> NAME                              READY     STATUS        RESTARTS   AGE
> app-deployment-85c868cc55-44s5m   1/1       Running       0          39m
> app-deployment-85c868cc55-b6kjz   1/1       Terminating   0          39m
> app-deployment-85c868cc55-bdfkf   1/1       Running       0          7s
```

You can run the following command to check the pod’s internal IP:

```
$ kubectl describe pod app-deployment-85c868cc55-44s5m | grep IP
> IP:    10.233.99.69
```

With our `app-service` running, we can access the pods on a specific instance using the instance’s IP and the `nodePort`. This means we can create an Elastic Load Balancer and register all the Kubernetes nodes as targets, so the load balancer will automatically balance the traffic between our services running on different instances.
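For example, from a machine allowed through the security group, a quick check might look like this (a sketch; `10.1.1.3` is just one of the example node IPs used in this tutorial):

```
# hit the nodePort from our template (30080) on one of the nodes
curl http://10.1.1.3:30080
```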

This is not the best approach - it could lead to unbalanced traffic - but it’s more than enough for the purposes of this tutorial. A better option would be to use a service with the `LoadBalancer` type, but that would require providing the necessary AWS configuration to the Kubespray setup so it could correctly manage AWS resources.
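For reference, such a service would look roughly like the sketch below; note that it only works once Kubespray is configured with the AWS cloud provider (e.g. setting `cloud_provider: aws` in its group variables), which we did not do in this tutorial:

```
# sketch only - requires the AWS cloud provider to be enabled in the cluster
apiVersion: v1
kind: Service
metadata:
  name: app-lb-service
spec:
  selector:
    app: node-app
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
```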

Now if you open the Elastic Load Balancer address in your browser, you should see a response from the app running in our pods:

That’s all for now. Keep in mind that the process we showed you is very manual and not designed for production environments. In production you would probably want to use CI for deployments, services with the `LoadBalancer` type to register nodes in the ELB automatically, and namespaces to separate Kubernetes environments. However, this tutorial should give you a better picture of how the Kubernetes pieces work together and what they are capable of. Keep in mind this is just the tip of the iceberg - people are already running huge and complex clusters with Kubernetes!

If you would like a hand setting up Kubernetes in your production environment, turn to us at iRonin. We have people on the team experienced in complex Kubernetes setups who would be happy to help out.
