AWS ECS technical series part III

Published on: Tue Dec 07 2021

Last Updated on: Mon Dec 13 2021

Technical Series

  1. Introducing the AWS ECS technical series
  2. AWS ECS technical series part I
  3. AWS ECS technical series part II

Goals

Here we finally put the theory and concepts into practice. By the end of this module, you should know how to create your own AWS ECS infrastructure using Terraform.

In addition, you should be familiar with the different components within AWS ECS and some of their configuration options. I’ve also provided references and additional information to dig deeper if you wish to make adjustments.

So, without further ado, let’s dive right in.

Introduction

In this module, we are going to build out our AWS ECS infrastructure using terraform.

To get started, feel free to continue from your progress in the previous section or use this starter repository - building-with-aws-ecs-part-2.

Review

Before we proceed to building out this infrastructure, let’s do a quick recap of the components of AWS ECS.

  • Clusters - logical grouping of tasks or services
  • Services - part of the cluster; manages and maintains the tasks running within the cluster
  • Task Definitions - specific instructions for running the tasks (i.e. container image, CPU, memory, port mappings, and more)
  • Tasks - the running instances launched within the service from the provided task definition
  • Execution Role - IAM role attached to your service for “management” of your tasks
  • Task Role - IAM role attached to your tasks which enables them to access other AWS services

If you want more details, feel free to revisit the “Components” section within the Introducing the AWS ECS technical series article.
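
For orientation, here is a quick map of how these components correspond to the Terraform resources and modules we will define in the steps below:

# Component              -> Terraform resource (defined later in this module)
# Cluster                -> aws_ecs_cluster.web_cluster
# Task Definition        -> aws_ecs_task_definition.nextjs
# Service                -> aws_ecs_service.web_ecs_service
# Execution / Task roles -> module.ecs_roles
# Logging                -> aws_cloudwatch_log_group.ecs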

Setting up the infrastructure

1. Add the AWS ECS Cluster

# main.tf

resource "aws_ecs_cluster" "web_cluster" {
  name = "web-cluster-${var.project_id}-${var.env}"
  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}

2. Scaffold out task definition template file

We are using a template file here because we require information from other AWS resources, and those values need to be populated dynamically.

In addition, we want this file to be as agnostic as possible so we can make adjustments via Terraform whenever possible.

From your root directory:

mkdir ./infra/task-definitions && touch ./infra/task-definitions/service.json.tpl

# Change into the infra directory
cd ./infra
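
After running these commands, your infra directory should look roughly like this (other files you may already have from the starter repository are omitted; service.latest.json will be generated by Terraform later in this module):

infra/
├── main.tf
└── task-definitions/
    └── service.json.tpl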

3. Fill out the task definitions for service.json.tpl

Template:

This is our Terraform template file for dynamically generating our task definition with the most up-to-date information.

{
  "family": "task-definition-node",
  "networkMode": "${network_mode}",
  "requiresCompatibilities": [
    "${launch_type}"
  ],
  "cpu": "${cpu}",
  "memory": "${memory}",
  "executionRoleArn": "${ecs_execution_role}",
  "containerDefinitions": [
    {
      "name": "${name}",
      "image": "nginx:latest",
      "memoryReservation": ${memory},
      "portMappings": [
        {
          "containerPort": ${port},
          "hostPort": ${port}
        }
      ],
      "environment": [
        {
          "name": "PORT",
          "value": "${port}"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "${log_group}",
          "awslogs-region": "${aws_region}",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "secrets": []
    }
  ]
}

We will be changing the image later; it is set to nginx:latest as a placeholder for now. Also, note that since we are using AWS Fargate, the networkMode is required to be awsvpc.

This is just the base template; there are other settings we can change (for example, environment variables, secrets, or additional container definitions).

4. Create the task definition files

Here we are doing the following:

  1. Add the new ECS local configuration to the locals block at the top of main.tf
  2. Read the template file and render it with the given variables
  3. Write the rendered template to the filesystem (to be used later in CI/CD)
  4. Create the task definition resource from the rendered template

💡 Note: in the aws_ecs_task_definition resource below, we take the rendered template, decode the JSON, extract the containerDefinitions section, then encode it back to JSON for the container_definitions argument.

# main.tf

locals {
  # Target port to expose
  target_port = 3000

  ## ECS Service config
  ecs_launch_type = "FARGATE"
  ecs_desired_count = 2
  ecs_network_mode = "awsvpc"
  # Fargate requires a supported CPU/memory pairing (512 CPU pairs with 1024-4096 MB)
  ecs_cpu = 512
  ecs_memory = 1024
  ecs_container_name = "nextjs-image"
  ecs_log_group = "/aws/ecs/${var.project_id}-${var.env}"
  # Retention in days
  ecs_log_retention = 1
}

data "template_file" "task_def_generated" {
  template = "${file("./task-definitions/service.json.tpl")}"
  vars = {
    env                 = var.env
    port                = local.target_port
    name                = local.ecs_container_name
    cpu                 = local.ecs_cpu
    memory              = local.ecs_memory
    aws_region          = var.aws_region
    ecs_execution_role  = module.ecs_roles.ecs_execution_role_arn
    launch_type         = local.ecs_launch_type
    network_mode        = local.ecs_network_mode
  }
}

# Create a static version of task definition for CI/CD
resource "local_file" "output_task_def" {
  content         = data.template_file.task_def_generated.rendered
  file_permission = "644"
  filename        = "./task-definitions/service.latest.json"
}

resource "aws_ecs_task_definition" "nextjs" {
  family                   = "task-definition-node"
  execution_role_arn       = module.ecs_roles.ecs_execution_role_arn
  task_role_arn            = module.ecs_roles.ecs_task_role_arn

  requires_compatibilities = [local.ecs_launch_type]
  network_mode             = local.ecs_network_mode
  cpu                      = local.ecs_cpu
  memory                   = local.ecs_memory
  container_definitions    = jsonencode(
    jsondecode(
      data.template_file.task_def_generated.rendered
    ).containerDefinitions
  )
}

You should also see service.latest.json created within infra/task-definitions. This will be used later in our CI/CD pipeline as a static configuration template, where we update the container definition with our new container image.
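
As an aside, on newer Terraform versions the same rendering can also be done with the built-in templatefile() function instead of the template_file data source (which requires the separate template provider). A minimal sketch, not part of the original setup:

locals {
  # Render the same template using the built-in templatefile() function
  task_def_rendered = templatefile("./task-definitions/service.json.tpl", {
    env                = var.env
    port               = local.target_port
    name               = local.ecs_container_name
    cpu                = local.ecs_cpu
    memory             = local.ecs_memory
    log_group          = local.ecs_log_group
    aws_region         = var.aws_region
    ecs_execution_role = module.ecs_roles.ecs_execution_role_arn
    launch_type        = local.ecs_launch_type
    network_mode       = local.ecs_network_mode
  })
}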

5. Add the AWS ECS Service

Finally, we create the service in which our tasks will run. As you can see, it contains details like the cluster, desired count, launch type, load balancer, network configuration and so forth.

Be mindful that the launch type in your task definition must match the one defined here.

# main.tf

resource "aws_ecs_service" "web_ecs_service" {
  name            = "web-service-${var.project_id}-${var.env}"
  cluster         = aws_ecs_cluster.web_cluster.id
  task_definition = aws_ecs_task_definition.nextjs.arn
  desired_count   = local.ecs_desired_count
  launch_type     = local.ecs_launch_type

  load_balancer {
    target_group_arn = module.ecs_tg.tg.arn
    container_name   = local.ecs_container_name
    container_port   = local.target_port 
  }

  network_configuration {
    subnets         = module.networking.private_subnets[*].id
    security_groups = [aws_security_group.ecs_sg.id]
  }

  tags = {
    Name = "web-service-${var.project_id}-${var.env}"
  }

  depends_on = [
    module.alb.lb,
    module.ecs_tg.tg
  ]
}
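
Optionally (this is not in the original article), you may also want to expose the cluster and service names as Terraform outputs, since a deployment pipeline typically needs both when updating the service later. The output names here are just examples:

output "ecs_cluster_name" {
  value = aws_ecs_cluster.web_cluster.name
}

output "ecs_service_name" {
  value = aws_ecs_service.web_ecs_service.name
}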

6. Add logging group

resource "aws_cloudwatch_log_group" "ecs" {
  name = local.ecs_log_group
  # This can be changed
  retention_in_days = local.ecs_log_retention
}

7. Create execution and task role

Here we are creating our execution role and task role.

💡 Remember the distinction between the two:
  • Execution role - for running your service (i.e. permission to pull images, write logs)
  • Task role - for running your tasks (i.e. permission to access DynamoDB or S3)

In our setup, we only make use of the execution role, but we can extend the task role later to give our tasks permission to access other AWS services.

# main.tf

## Execution role and task roles
module "ecs_roles" {
  source                    = "github.com/Jareechang/tf-modules//iam/ecs?ref=v1.0.1"
  create_ecs_execution_role = true
  create_ecs_task_role      = true

  # Extend baseline policy statements (ignore for now)
  ecs_execution_policies_extension = {}
}

Here are the permissions we are using for the execution role (the AWS-managed AmazonECSTaskExecutionRolePolicy):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
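
If you later want to give your tasks access to other AWS services, one way (not covered in this setup) is to attach an inline policy to the task role. A minimal sketch, where the S3 bucket name is hypothetical and the role name is derived from the task role ARN exposed by the module (assuming the role has no path component):

# Example only: allow tasks to read from a (hypothetical) S3 bucket
resource "aws_iam_role_policy" "task_s3_read" {
  name = "task-s3-read-${var.project_id}-${var.env}"
  # Derive the role name from the ARN exposed by the ecs_roles module
  role = split("/", module.ecs_roles.ecs_task_role_arn)[1]

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = ["s3:GetObject", "s3:ListBucket"]
        Resource = [
          "arn:aws:s3:::my-example-bucket",
          "arn:aws:s3:::my-example-bucket/*"
        ]
      }
    ]
  })
}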

8. Apply the infrastructure

export AWS_ACCESS_KEY_ID=<your-key>
export AWS_SECRET_ACCESS_KEY=<your-secret>
export AWS_DEFAULT_REGION=us-east-1

terraform init
terraform plan
terraform apply -auto-approve

⚠️ Note: Remember to run terraform destroy -auto-approve after you are done with the module, unless you wish to keep the infrastructure for personal use.

If you’d like a reference for the final result, it is available at building-with-aws-ecs-part-3.

Conclusion

That’s it, good job on making it through! We just went through the process of creating all the components we need for our AWS ECS infrastructure.

In the next module, we will start wiring up GitHub Actions to see the CI/CD in action. So, when we make a change, it’ll build and deploy our new application!
