Published on: Tue Dec 07 2021
Last Updated on: Mon Dec 13 2021
Here we finally put the theory and concepts into practice. By the end of this module, you should know how to create your own AWS ECS infrastructure using Terraform.
In addition, you should be familiar with the different components within AWS ECS and some of its configuration options. I’ve also provided references and additional information to dig deeper if you wish to make adjustments.
So, without further ado, let’s dive right in.
In this module, we are going to build out our AWS ECS infrastructure using Terraform.
To get started, feel free to continue from your progress in the previous section, or use this starter repository - building-with-aws-ecs-part-2.
Before we proceed to building out this infrastructure, let’s do a quick recap of the components of AWS ECS.
If you want more details, feel free to revisit the "Components" section within the Introducing the AWS ECS technical series article.
# main.tf
resource "aws_ecs_cluster" "web_cluster" {
  name = "web-cluster-${var.project_id}-${var.env}"

  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}
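Once applied, you can do a quick sanity check on the cluster and its Container Insights setting with the AWS CLI (substitute your actual project_id and env values):

aws ecs describe-clusters \
  --clusters web-cluster-<project-id>-<env> \
  --include SETTINGS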
Next up is the task definition. We are using a template file here because we require some information from other AWS resources, and that needs to be populated dynamically.
In addition, we want this to be as agnostic as possible, so we can make adjustments via Terraform whenever possible.
From your root directory:

mkdir ./infra/task-definitions && touch ./infra/task-definitions/service.json.tpl

# Change into infra directory
cd ./infra
Template:
This is our Terraform template file for dynamically generating our task definition with the most up-to-date information.
{
  "family": "task-definition-node",
  "networkMode": "${network_mode}",
  "requiresCompatibilities": [
    "${launch_type}"
  ],
  "cpu": "${cpu}",
  "memory": "${memory}",
  "executionRoleArn": "${ecs_execution_role}",
  "containerDefinitions": [
    {
      "name": "${name}",
      "image": "nginx:latest",
      "memoryReservation": ${memory},
      "portMappings": [
        {
          "containerPort": ${port},
          "hostPort": ${port}
        }
      ],
      "environment": [
        {
          "name": "PORT",
          "value": "${port}"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "${log_group}",
          "awslogs-region": "${aws_region}",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "secrets": []
    }
  ]
}
We will be changing the image, which is set to nginx:latest as a placeholder. Also, note that since we are using AWS Fargate, the networkMode is required to be awsvpc.
This is just the base template; the values are filled in via template variables, so there are things we can change.
Here we are doing the following:
- Defining our configurable values in a locals block at the top of main.tf.
- Rendering the template with the template_file data source, passing in those values.
- Writing out a static version of the rendered task definition for CI/CD.
- Creating the AWS ECS task definition resource.

💡 Note: in the aws_ecs_task_definition resource, we are using our generated template and extracting the containerDefinitions configuration to be used for the AWS ECS task definition. As part of the process, we decode the rendered JSON, get the information we need, then convert it back to JSON again.
# main.tf
locals {
  # Target port to expose
  target_port = 3000

  ## ECS Service config
  ecs_launch_type    = "FARGATE"
  ecs_desired_count  = 2
  ecs_network_mode   = "awsvpc"
  ecs_cpu            = 512
  ecs_memory         = 1024
  ecs_container_name = "nextjs-image"
  ecs_log_group      = "/aws/ecs/${var.project_id}-${var.env}"

  # Retention in days
  ecs_log_retention = 1
}

data "template_file" "task_def_generated" {
  template = file("./task-definitions/service.json.tpl")
  vars = {
    env                = var.env
    port               = local.target_port
    name               = local.ecs_container_name
    cpu                = local.ecs_cpu
    memory             = local.ecs_memory
    aws_region         = var.aws_region
    ecs_execution_role = module.ecs_roles.ecs_execution_role_arn
    launch_type        = local.ecs_launch_type
    network_mode       = local.ecs_network_mode
  }
}

# Create a static version of the task definition for CI/CD
resource "local_file" "output_task_def" {
  content         = data.template_file.task_def_generated.rendered
  file_permission = "644"
  filename        = "./task-definitions/service.latest.json"
}

resource "aws_ecs_task_definition" "nextjs" {
  family                   = "task-definition-node"
  execution_role_arn       = module.ecs_roles.ecs_execution_role_arn
  task_role_arn            = module.ecs_roles.ecs_task_role_arn
  requires_compatibilities = [local.ecs_launch_type]
  network_mode             = local.ecs_network_mode
  cpu                      = local.ecs_cpu
  memory                   = local.ecs_memory

  # Decode the rendered template, then re-encode only the containerDefinitions
  container_definitions = jsonencode(
    jsondecode(
      data.template_file.task_def_generated.rendered
    ).containerDefinitions
  )
}
You should also see service.latest.json created within infra/task-definitions; this will be used later in our CI/CD as a static configuration template, where we update the container definition with our new container image.
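For instance, a CI/CD step could swap in a freshly built image using jq (a minimal sketch, assuming jq is available; NEW_IMAGE is a hypothetical variable holding the new image URI):

# Replace the placeholder image in the static task definition (illustrative only)
jq --arg image "$NEW_IMAGE" \
  '.containerDefinitions[0].image = $image' \
  ./infra/task-definitions/service.latest.json > updated-task-def.json

We will wire up the real version of this step in the CI/CD module.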
Finally, we create a service that our tasks will run in. As you can see, it contains details like the cluster, desired count, launch type, load balancer, network configuration, and so forth.
Be mindful that the launch type in your task definition must match the one defined here.
# main.tf
resource "aws_ecs_service" "web_ecs_service" {
  name            = "web-service-${var.project_id}-${var.env}"
  cluster         = aws_ecs_cluster.web_cluster.id
  task_definition = aws_ecs_task_definition.nextjs.arn
  desired_count   = local.ecs_desired_count
  launch_type     = local.ecs_launch_type

  load_balancer {
    target_group_arn = module.ecs_tg.tg.arn
    container_name   = local.ecs_container_name
    container_port   = local.target_port
  }

  network_configuration {
    subnets         = module.networking.private_subnets[*].id
    security_groups = [aws_security_group.ecs_sg.id]
  }

  tags = {
    Name = "web-service-${var.project_id}-${var.env}"
  }

  depends_on = [
    module.alb.lb,
    module.ecs_tg.tg
  ]
}

resource "aws_cloudwatch_log_group" "ecs" {
  name = local.ecs_log_group

  # This can be changed
  retention_in_days = local.ecs_log_retention
}
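Once the service is up and tasks are running, you can confirm the awslogs driver is shipping container logs using the AWS CLI v2 (substitute your actual project_id and env values):

aws logs tail /aws/ecs/<project-id>-<env> --follow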
Here we are creating our execution role and task role.
💡 Remember the distinction between the two:
- Execution role - for running your service (i.e. permission to pull images, logging)
- Task role - for running your tasks (i.e. permission to access DynamoDB or S3)
In our setup, we only make use of the execution role, but we can later extend the task role to allow our tasks to access other AWS services (see the illustrative policy after the execution role policy below).
# main.tf
## Execution role and task roles
module "ecs_roles" {
  source = "github.com/Jareechang/tf-modules//iam/ecs?ref=v1.0.1"

  create_ecs_execution_role = true
  create_ecs_task_role      = true

  # Extend baseline policy statements (ignore for now)
  ecs_execution_policies_extension = {}
}
Here are the permissions we are using for the execution role - AmazonECSTaskExecutionRolePolicy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
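By contrast, if our tasks later needed to, say, read objects from S3, we could attach a policy like this to the task role (an illustrative sketch only; the bucket name is hypothetical and this is not part of the current setup):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::my-app-bucket/*"
    }
  ]
}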
With everything defined, export your AWS credentials and apply the infrastructure:

export AWS_ACCESS_KEY_ID=<your-key>
export AWS_SECRET_ACCESS_KEY=<your-secret>
export AWS_DEFAULT_REGION=us-east-1

terraform init
terraform plan
terraform apply -auto-approve
⚠️ Note: Remember to run terraform destroy -auto-approve after you are done with the module, unless you wish to keep the infrastructure for personal use.
If you’d like a reference of the result, it should be available at building-with-aws-ecs-part-3.
That’s it, good job on making it through! We just went through the process of creating all the components we needed for our AWS ECS infrastructure.
In the next module, we will start wiring up GitHub Actions to see the CI/CD in action. So, when we make a change, it’ll build and deploy our new application!