This post is part of a bigger series. The guide presents an opinionated evolution of a Kubernetes deployment, starting with the very simple setup described here.

It will guide you through all the necessary small steps towards a fully automated CI/CD pipeline.

Example application

Most Rails applications use a similar stack:

  • web application itself
  • background job processor (e.g. Sidekiq or Resque)
  • external services
    • RDBMS (e.g. PostgreSQL or MySQL)
    • Redis

This guide is based on an example Rails 5.2 application available on GitHub.

It implements the stack, using Sidekiq as the background job processor and PostgreSQL as the RDBMS.

Prerequisites

To keep this post and the guide concise, it is assumed that the reader is familiar with basic Docker and Kubernetes concepts.

Software requirements not covered in this post:

  • Kubernetes cluster
  • PostgreSQL and Redis databases, accessible in the cluster
  • Docker image repository - public or private, accessible in the cluster

Kubernetes

There are plenty of guides and tutorials online describing Kubernetes setup. The official documentation is fantastic too.

For beginners looking for a hassle-free cloud solution I recommend Google Cloud. Kubernetes setup there is much easier and cheaper than on AWS.

Services - Postgres and Redis

The application requires two external services: PostgreSQL and Redis. They don’t necessarily have to be installed on Kubernetes.

I used Helm charts to simplify the setup. The charts are available in the repository under deployment/databases/.
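
The charts can then be installed with Helm. A minimal sketch, assuming Helm 2 and the stable chart repository; the postgres-sandbox release name is illustrative, while redis-sandbox matches the Redis URL used later in the secrets:

```shell
# Release names: postgres-sandbox is hypothetical; redis-sandbox matches
# the redis://redis-sandbox-master URL referenced later in this post.
helm install --name postgres-sandbox stable/postgresql
helm install --name redis-sandbox stable/redis
```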

Docker image repository

The most popular registry is Docker Hub. It has a free plan with one private repository.

Each major cloud provider, such as GCP, AWS, or Azure, also provides this service.

Production image

A Rails production image differs from the local image (read more about the local image).

A few features characterize a production image:

  • assets are precompiled
  • image has the code baked in
    • optionally - ignores unnecessary repository files like docs, specs
  • development and test Gemfile groups are omitted

Because of these differences it’s a good idea to have separate Dockerfiles for the local and production environments.

The production Dockerfile I created:

FROM ruby:2.5.1-alpine

# Packages required to build native gem extensions and precompile assets
RUN apk add --no-cache --update build-base \
                                postgresql-dev \
                                nodejs \
                                tzdata

WORKDIR /app

# Copy the gem manifests first so the bundle layer stays cached until they change
COPY Gemfile Gemfile.lock ./
RUN bundle --deployment --without development test

# Whitelist the files and directories the application needs at runtime
COPY app/ ./app/
COPY bin/ ./bin/
COPY config ./config/
COPY db ./db/
COPY lib ./lib/
COPY public ./public/
COPY config.ru package.json Rakefile ./

ENV RAILS_ENV production
RUN bundle exec rake assets:precompile
# Remove caches left over from asset precompilation
RUN rm -rf tmp/

ENV RAILS_SERVE_STATIC_FILES true
ENV RAILS_LOG_TO_STDOUT true

It uses multiple COPY statements to explicitly whitelist included files and directories.

rm -rf tmp/ removes unnecessary caches created by asset precompilation.

In this setup a reverse proxy like nginx is not used. The Rails application is responsible for serving static content. In the default Rails 5.2 setup RAILS_SERVE_STATIC_FILES controls this feature, configuration code.

The same applies to RAILS_LOG_TO_STDOUT. We want to log directly to STDOUT, as this is the default and simplest solution for containers.
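
Both variables map to the defaults generated by Rails 5.2 in config/environments/production.rb, which contain roughly:

```ruby
# config/environments/production.rb (abridged Rails 5.2 defaults)
config.public_file_server.enabled = ENV['RAILS_SERVE_STATIC_FILES'].present?

if ENV['RAILS_LOG_TO_STDOUT'].present?
  logger           = ActiveSupport::Logger.new(STDOUT)
  logger.formatter = config.log_formatter
  config.logger    = ActiveSupport::TaggedLogging.new(logger)
end
```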

The production Dockerfile can be kept in the repository.

I recommend keeping the Dockerfile filename for the local image definition. I decided to call the production definition Dockerfile.production.

It’s now time to build the image. Use the -f parameter of docker build to specify a custom Dockerfile path. Example command:

docker build -f Dockerfile.production -t rails-example-app-production .

After building, push the image to your Docker registry.
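
For example, assuming the docker.io/janjedrychowski/rails-example-app-production repository used throughout this post (substitute your own registry path):

```shell
# Tag the locally built image with the full registry path, then push it
docker tag rails-example-app-production docker.io/janjedrychowski/rails-example-app-production:v1
docker push docker.io/janjedrychowski/rails-example-app-production:v1
```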

Docker repository for the test app.

Secrets and configuration setup

Test app secrets:

  • master.key to decode Rails 5.2 credentials.yml.enc
  • Postgres connection credentials
  • Redis connection URL

master.key

By design, master.key shouldn’t be publicly available. However, because this is an example application, it’s included in the repository.

kubectl create secret generic master.key --from-file=./config/master.key

creates a secret called master.key with one key, master.key, containing the file content.

$ kubectl describe secret master.key

Name:         master.key
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
master.key:  32 bytes

Database connection credentials

Redis and Postgres connection credential secrets are defined in deployment/secrets.
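
For reference, the Redis secret could look like the manifest below; the redis name and url key match the secretKeyRef entries used by the Deployments later in this post:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: redis
type: Opaque
data:
  url: cmVkaXM6Ly9yZWRpcy1zYW5kYm94LW1hc3Rlcg==
```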

To load them into Kubernetes run:

kubectl create -f deployment/secrets/

It’s worth noting that the values are stored in base64 form. The redis.url value in the config YAML is stored as cmVkaXM6Ly9yZWRpcy1zYW5kYm94LW1hc3Rlcg==. It can be decoded using the base64 utility:

$ echo "cmVkaXM6Ly9yZWRpcy1zYW5kYm94LW1hc3Rlcg==" | base64 --decode
redis://redis-sandbox-master%
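
When adding a new value, encode it the same way. Use printf (or echo -n) so that a trailing newline doesn’t end up inside the secret:

```shell
printf '%s' "redis://redis-sandbox-master" | base64
# cmVkaXM6Ly9yZWRpcy1zYW5kYm94LW1hc3Rlcg==
```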

Database creation

It’s time to create a production database.

deployment/migration.yml defines a Kubernetes Job that runs bin/rake db:create db:migrate. Full definition:

apiVersion: batch/v1
kind: Job
metadata:
  generateName: rails-example-app-migration-
spec:
  template:
    spec:
      containers:
      - name: rails
        image: docker.io/janjedrychowski/rails-example-app-production:v1
        command: ["bin/rake", "db:create", "db:migrate"]
        env:
          - name: DB_HOST
            valueFrom:
              secretKeyRef:
                name: production-database
                key: host
          - name: DB_PASSWORD
            valueFrom:
              secretKeyRef:
                name: production-database
                key: password
      restartPolicy: Never

As before, to run it execute:

kubectl create -f deployment/migration.yml

The created Job has a randomized name that starts with rails-example-app-migration-. This way we can create the Job more than once without deleting the old one.
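
Kubernetes adds a job-name label to the Pods created by a Job, so the Pod and its log can also be found directly; the hxptn suffix below comes from my run and will differ in yours:

```shell
# Find the Pod created by the migration Job via its job-name label
kubectl get pods --selector=job-name=rails-example-app-migration-hxptn
# Print the log of that Pod
kubectl logs --selector=job-name=rails-example-app-migration-hxptn
```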

In my case the command printed:

job.batch "rails-example-app-migration-hxptn" created

Run describe on this Job to check which Pod it created. We want to check its log using kubectl logs <POD_ID>. Exemplary execution log.

Kubernetes Job alternatives

It is possible to use kubectl run to run the migration, but it’s much more complicated. See kubernetes/kubernetes#48684 for more details.

Running new Pods with preconfigured variables can be simplified with PodPresets. This is an alpha feature and shouldn’t be used in production. Examples are available in the official tutorial.

First Deployment

The web app definition in deployment/web.yml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rails-example-app-web
spec:
  selector:
    matchLabels:
      app: rails-example-app
      tier: web
  replicas: 2
  template:
    metadata:
      labels:
        app: rails-example-app
        tier: web
    spec:
      containers:
        - name: rails-example-app-web
          image: docker.io/janjedrychowski/rails-example-app-production:v1
          ports:
            - containerPort: 3000
          command:
            - bin/docker/web_start
          env:
            - name: DB_HOST
              valueFrom:
                secretKeyRef:
                  name: production-database
                  key: host
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: production-database
                  key: password
            - name: RAILS_MASTER_KEY
              valueFrom:
                secretKeyRef:
                  name: master.key
                  key: master.key
            - name: REDIS_URL
              valueFrom:
                secretKeyRef:
                  name: redis
                  key: url

The Sidekiq Deployment definition in deployment/sidekiq.yml is nearly the same. The main differences are the command and the lack of exposed ports:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rails-example-app-sidekiq
spec:
  selector:
    matchLabels:
      app: rails-example-app
      tier: sidekiq
  replicas: 2
  template:
    metadata:
      labels:
        app: rails-example-app
        tier: sidekiq
    spec:
      containers:
        - name: rails-example-app-web
          image: docker.io/janjedrychowski/rails-example-app-production:v1
          command:
            - bin/docker/worker_start
          env:
            - name: DB_HOST
              valueFrom:
                secretKeyRef:
                  name: production-database
                  key: host
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: production-database
                  key: password
            - name: RAILS_MASTER_KEY
              valueFrom:
                secretKeyRef:
                  name: master.key
                  key: master.key
            - name: REDIS_URL
              valueFrom:
                secretKeyRef:
                  name: redis
                  key: url

To create Deployments run:

kubectl create -f deployment/web.yml -f deployment/sidekiq.yml

To verify that it’s working, forward port 3000 from one of the Pods.

Run kubectl get pods and find a pod that starts with rails-example-app-web-. In my case the full name is rails-example-app-web-c475bdbb4-k72m7.

To forward the port I run:

kubectl port-forward rails-example-app-web-c475bdbb4-k72m7 3000:3000

The Rails app should now be available at http://localhost:3000/.

To verify that the Sidekiq workers are running correctly, visit http://localhost:3000/admin/sidekiq/busy. In the Processes section there should be 2 Sidekiq processes running.

Kubernetes Service

The last step is creating a Service. This way the Deployment’s Pods will be accessible on the Internet.

In a real production environment the LoadBalancer Service type is usually used. In a cloud environment that setup automatically starts a load balancer (e.g. ELB on AWS).

Load balancers are usually not cheap, so NodePort can be used instead. The example config uses a very high port, 30000, because by default Kubernetes only allows NodePorts in the 30000-32767 range.

The Service definition in deployment/service.yml:

apiVersion: v1
kind: Service
metadata:
  name: rails-example-app-web
spec:
  selector:
    app: rails-example-app
    tier: web
  type: NodePort
  ports:
    - port: 80
      nodePort: 30000
      targetPort: 3000

This can be created like any other Kubernetes object with kubectl create -f.

Now the webapp should be available on the internet at http://<NODE_IP>:30000.

To find a Node’s external IP use kubectl describe nodes. You can connect to any Node in the cluster; all of them expose the Service on port 30000.
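
The external IPs can also be extracted directly with kubectl’s jsonpath output:

```shell
# Print the ExternalIP address of every Node in the cluster
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
```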

Deploying new version

Kubernetes supports built-in rolling updates of Deployments. To deploy new code, a new image is needed.

A new image can be created and pushed to the registry using the same steps as before.

To update the image on both the Web and Sidekiq Deployments run:

kubectl set image deployment/rails-example-app-web rails-example-app-web=docker.io/janjedrychowski/rails-example-app-production:v1-updated
kubectl set image deployment/rails-example-app-sidekiq rails-example-app-web=docker.io/janjedrychowski/rails-example-app-production:v1-updated
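
After setting the images it’s worth watching the rollout finish. Kubernetes also keeps the previous ReplicaSet around, so a bad release can be reverted:

```shell
# Block until the rolling update completes (or fails)
kubectl rollout status deployment/rails-example-app-web
# Roll back to the previous revision if the new version misbehaves
kubectl rollout undo deployment/rails-example-app-web
```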

Next steps

This setup is very basic and has a few big flaws:

  • multiple moving parts
  • big duplicated code sections (e.g. env section)
  • no automation
  • no logging and monitoring

The first two issues can be solved by using Helm. It allows creating packages, called charts, that encapsulate multiple Kubernetes objects.

This topic will be covered in the next post in the series.