How To Deploy A Docker Container to an AWS Cluster Using Terraform


Hello Readers! I hope you're doing well. Here I am with a new post where we'll learn the basics of Docker, Docker containers, AWS, and Terraform, and also how to deploy a Docker container to an AWS cluster using Terraform. Stay tuned and stick around to the end, performing the task along with me.

Let’s Get Started

INTRODUCTION

This is a pleasant but sizeable walkthrough on using Terraform to deploy a Docker container to an AWS ECS cluster. But before we get to the main event, we have a few housekeeping items to get through first.

Infrastructure as Code

As the name implies, it means deploying your infrastructure as code; to be more precise, readable code. It's not a programming language per se, but rather a set of easy-to-follow instructions on how an infrastructure should be set up and maintained.

DOCKER

Docker is a software platform for building applications based on containers: small and lightweight execution environments that make shared use of the operating system kernel but otherwise run in isolation from one another.

While containers have been utilised in Linux and Unix systems for some time, Docker, an open source project released in 2013, helped popularise the technology by making it easier than ever for developers to package their software to "build once and run anywhere."

DOCKER CONTAINER

Docker is an open platform for developing, shipping, and running applications. It enables you to separate your applications from your infrastructure so that you can deliver software quickly. With Docker, you can manage your infrastructure in the same way you manage your applications.

By taking advantage of Docker's methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.

AWS

Amazon Web Services (AWS) is a comprehensive cloud computing platform consisting of infrastructure as a service (IaaS) and platform as a service (PaaS) offerings. AWS services provide scalable solutions for compute, storage, databases, analytics, and more.

Terraform

Terraform is an infrastructure as code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle. Terraform can manage low-level components such as compute, storage, and networking resources.
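To give a taste of what that human-readable configuration looks like, here is a minimal, illustrative snippet (a hypothetical standalone example, separate from the files we create below):

resource "aws_s3_bucket" "example" {
  # A hypothetical bucket name; Terraform reads this declaration and
  # works out what to create, change, or destroy to match it.
  bucket = "my-example-bucket"
}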

Prerequisites

1. An IDE of your choice.

2. An AWS Account.

3. Terraform already installed and configured.

Steps To Deploy A Docker Container to an AWS Cluster Using Terraform

Here are the steps to perform the task:

STEP 1: Create a directory

To stay organised, create a new folder or directory and cd into it:

mkdir <folder_name>

cd <folder_name>

STEP 2 : Create the accompanying files

Once the folder or the directory has been created, within said directory create the following files:

  • providers.tf
  • variables.tf
  • vpc.tf
  • subnets.tf
  • main.tf
  • terraform.tfvars
  • .gitignore
touch providers.tf
touch variables.tf
touch vpc.tf
touch subnets.tf
touch main.tf
touch terraform.tfvars
touch .gitignore

STEP 3 : providers.tf

According to Terraform.io, providers are plugins that allow Terraform to interact with services, cloud providers, and other APIs. Just think of them as a bridge between Terraform and other services. Since we're pulling an image from Docker and deploying it to an AWS cluster, we need to make sure both services are listed as our providers. Our access key and secret key are entered as variables for a specific reason, which we will cover later.

terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "2.15.0"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "docker" {}

provider "aws" {
  region     = var.region
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
}

STEP 4 : Let's understand the variables.tf file

Variables are values that can be assigned or passed in. This file contains information about our region, CIDR block, and access and secret keys. Because variables.tf will be stored in our repo and will be accessible to others, the values for our access and secret keys will not be assigned in this file. Those values will be saved in terraform.tfvars.

variable "region" {

  description = "The region where environment is going to be deployed"

  type        = string

  default     = "us-east-1"

}

variable "aws_access_key" {

  type      = string

  sensitive = true

}


variable "aws_secret_key" {

  type      = string

  sensitive = true

}

# VPC variables


variable "vpc_cidr" {

  description = "CIDR range for VPC"

  type        = string

  default     = "10.0.0.0/16"

}
STEP 5 : vpc.tf & subnets.tf files

When using ECS to deploy a container, it is strongly recommended to do so inside of a VPC.

vpc.tf file

resource "aws_vpc" "ecs_vpc" {

  cidr_block = var.vpc_cidr


  tags = {

    Name = "my_vpc"

  }

}

subnets.tf file

resource "aws_subnet" "private_subnet_1" {

  vpc_id            = aws_vpc.ecs_vpc.id

  cidr_block        = "10.0.2.0/24"

  availability_zone = "us-east-1a"


  tags = {

    Name = "my_subnet_1"

  }

}



resource "aws_subnet" "private_subnet_2" {

  vpc_id            = aws_vpc.ecs_vpc.id

  cidr_block        = "10.0.3.0/24"

  availability_zone = "us-east-1a"


  tags = {

    Name = "my_subnet_2"

  }

}
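One caveat worth flagging (my addition, not part of the original walkthrough): for the Fargate task to pull its container image from Docker Hub, these subnets need a route to the internet, via an internet gateway or a NAT gateway. A minimal sketch, assuming public routing is acceptable for a demo (the resource names igw and public_rt are illustrative):

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.ecs_vpc.id
}

resource "aws_route_table" "public_rt" {
  vpc_id = aws_vpc.ecs_vpc.id

  # Send all non-local traffic out through the internet gateway
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "subnet_1" {
  subnet_id      = aws_subnet.private_subnet_1.id
  route_table_id = aws_route_table.public_rt.id
}

resource "aws_route_table_association" "subnet_2" {
  subnet_id      = aws_subnet.private_subnet_2.id
  route_table_id = aws_route_table.public_rt.id
}

This pairs with the task_container_assign_public_ip = true setting in main.tf below.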

STEP 6 : Let’s understand main.tf

Create your main.tf file using the following configuration as a reference:

resource "aws_ecs_cluster" "cluster" {

  name = "my_ecs_cluster"


  capacity_providers = ["FARGATE_SPOT", "FARGATE"]


  default_capacity_provider_strategy {

    capacity_provider = "FARGATE_SPOT"

  }

  setting {

    name  = "containerInsights"

    value = "disabled"

  }

}


module "ecs-fargate" {

  source  = "umotif-public/ecs-fargate/aws"

  version = "~> 6.1.0"


  name_prefix        = "ecs-fargate-example"

  vpc_id             = aws_vpc.ecs_vpc.id

  private_subnet_ids = [aws_subnet.private_subnet_1.id, 

aws_subnet.private_subnet_2.id]


  cluster_id = aws_ecs_cluster.cluster.id


  task_container_image   = "centos"

  task_definition_cpu    = 256

  task_definition_memory = 512


  task_container_port             = 80

  task_container_assign_public_ip = true


  load_balanced = false


  target_groups = [

    {

      target_group_name = "tg-fargate-example"

      container_port    = 80

    }

  ]


  health_check = {

    port = "traffic-port"

    path = "/"

  }


  tags = {

    Environment = "test"

    Project     = "Test"

  }

}
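Optionally, if you'd like Terraform to print useful identifiers once the run finishes, you could add an outputs.tf. This file is my addition and not part of the original file list:

# outputs.tf (optional): surfaces handy values after terraform apply
output "cluster_arn" {
  description = "ARN of the ECS cluster"
  value       = aws_ecs_cluster.cluster.arn
}

output "vpc_id" {
  description = "ID of the VPC the cluster runs in"
  value       = aws_vpc.ecs_vpc.id
}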

STEP 7 : terraform.tfvars & .gitignore

.gitignore is a text file that tells Git which files or directories to ignore when committing your project to GitHub. Our terraform.tfvars file will be the file ignored, since it contains our secret and access keys.

# Local .terraform directories
**/.terraform/*
**/.terraform.*

# .tfstate files
*.tfstate
*.tfstate.*

# Crash log files
crash.log

# Exclude all .tfvars files, which are likely to contain sensitive data, such as
# passwords, private keys, and other secrets. These should not be part of
# version control as they are data points which are potentially sensitive and
# subject to change depending on the environment.
*.tfvars

# Ignore override files as they are usually used to override resources locally
# and so are not checked in
override.tf
override.tf.json
*_override.tf
*_override.tf.json

# Include override files you do wish to add to version control using a negated
# pattern:
# !example_override.tf

# Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan
# example: *tfplan*

# Ignore CLI configuration files
.terraformrc
terraform.rc
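For completeness, the terraform.tfvars file itself simply assigns values to the sensitive variables declared in variables.tf. The values below are placeholders; substitute your own keys and never commit this file:

# terraform.tfvars (ignored by Git thanks to the *.tfvars rule above)
aws_access_key = "YOUR_ACCESS_KEY_ID"     # placeholder
aws_secret_key = "YOUR_SECRET_ACCESS_KEY" # placeholder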

STEP 8 : Commands

Once all of our files have been created, run the following commands from the terminal:

terraform init - initialises the working directory containing our Terraform code.
terraform plan - previews the infrastructure changes before executing our Terraform code.
terraform apply - applies all the changes specified in the plan.

STEP 9 : Double Check in Console

Now let's head to the AWS Management Console to ensure that all our resources have been created. In the search bar, type ECS, and once the ECS dashboard is displayed, click on Clusters to check whether your cluster was created.

Click on the cluster you created; in my case, my_ecs_cluster.

STEP 10 : Destroy

Once you've confirmed that the resources were successfully created, destroy them all to avoid unnecessary charges. This can be achieved by running the terraform destroy command in your terminal:

terraform destroy

Conclusion

If you've gotten this far and followed the steps above, you now know how to deploy a Docker container to an AWS ECS cluster using Terraform. Thank you for your interest.

Thank You

Happy Learning!!!


Written by 

Deeksha Tripathi is a Software Consultant at Knoldus Inc. She has a keen interest in learning new technologies. Her practice area is DevOps. When not working, she is busy listening to music and spending time with her family.
