AWS ECS technical series part I

Published on: Thu Nov 25 2021

Series

Technical Series

  1. Introducing the AWS ECS technical series

Content

Introduction

AWS Elastic Container Service (ECS) is part of AWS’s container orchestration offering.

With AWS ECS, the infrastructure still runs on EC2, but ECS provides a lot of improvements to the setup, management, and scaling workflow of your application. I think it brings your infrastructure closer to what you’d expect from deploying with Heroku.

The offering also includes Fargate, their “serverless” container solution, where AWS runs and manages the lifecycle of your compute infrastructure.

Sounds simple, right? Well, not quite. Deploying is still not as easy as git push heroku. Setting up the infrastructure is always a lot of work, but once that is done and automated, it gets easier over time.

In this series, we will be going through the whole process, and deploying our very own Next.js application on AWS ECS.

The best part is that the infrastructure will be built using Infrastructure as Code (IaC) with Terraform. So, if you’d like to repeat the process across multiple environments, it is as easy as changing the variables.

So, I think this is a great choice for teams who want to migrate off Heroku and leverage AWS services, while keeping the infrastructure complexity relatively low.

The format of this module is going to be top-down. We first look at the over-arching goal, then examine the plan in detail. Finally, we tie it all together by building out the infrastructure using Terraform.

Let’s jump right in and look at some prerequisites, as we will be breezing through a lot of the sections.

Prerequisites

I will assume that you have worked with AWS before and are familiar with IAM, networking, and how to set this up using terraform.

If not, no worries. We will be using Terraform modules I created - tf-modules/networking - to provision the resources.

Our focus in this series is not on AWS VPC but rather on AWS ECS.

Here are some of the prerequisites that will be helpful for getting through this technical series:

  • Access to an AWS account (required)
  • Basic understanding of Terraform, Docker, and Node.js (recommended)
  • Knowledge of AWS components (IAM and networking) (recommended)

Again, if you don’t have the base knowledge, that is totally fine, just follow along and try your best.

I will be breezing through parts of it and using various pre-made Terraform modules, just so we can focus on the core: the AWS ECS infrastructure, GitHub Actions, and CI/CD.

That said, I will still go into the details of our architecture so we have some insight into what is happening behind the scenes.

If you just want to skip the details and go straight to building out the infrastructure then visit the Setting up our project section.

Goals

Before we start implementing the network and compute, let’s discuss the high level goals of what we are trying to achieve.

The gist of it is that we want to keep our AWS ECS services private by creating a VPC. However, we also want to expose our application(s) to the public but only allow the port which the application is running on to be exposed.

Beyond that, we may also want our resources within the VPC to connect to the internet to pull public Docker images, gather software updates, and other things.

Basically, we want our services to be private enough but also public enough to perform the required tasks and allow end-users to use it.

Now that we have a good understanding of the goal, let’s discuss the AWS components we will be setting up in this module.

See sections below for details otherwise feel free to skip right to the implementation section.

Here is a rough blueprint of what the VPC will look like:

[Diagram: ECS VPC network]

Network Design Details

We will be creating subnets across two availability zones (us-east-1a, us-east-1b), all of which will live within the CIDR range we have defined, 10.0.0.0/16.

Subnets

The CIDR blocks are further divided to create subnets for our network within our VPC.

The availability zones we will be using are us-east-1a and us-east-1b (more can be added for extra redundancy).

Those will be further divided into ranges for:

  • 10.0.1.0/24 - ranges for public in us-east-1a
  • 10.0.2.0/24 - ranges for public in us-east-1b
  • 10.0.11.0/24 - ranges for private in us-east-1a (app)
  • 10.0.12.0/24 - ranges for private in us-east-1b (app)
  • 10.0.21.0/24 - ranges for private in us-east-1a (db) - If you wish to host databases
  • 10.0.22.0/24 - ranges for private in us-east-1b (db) - If you wish to host databases
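As a side note, these /24 ranges can also be derived from the VPC CIDR with Terraform’s built-in cidrsubnet function instead of hard-coding them. Here is a minimal sketch (the local names are just for illustration):

```hcl
locals {
  vpc_cidr = "10.0.0.0/16"

  # cidrsubnet(prefix, newbits, netnum) carves a /24 (16 + 8 bits) out of the /16
  public_a  = cidrsubnet(local.vpc_cidr, 8, 1)  # 10.0.1.0/24
  public_b  = cidrsubnet(local.vpc_cidr, 8, 2)  # 10.0.2.0/24
  private_a = cidrsubnet(local.vpc_cidr, 8, 11) # 10.0.11.0/24
  private_b = cidrsubnet(local.vpc_cidr, 8, 12) # 10.0.12.0/24
}
```

The netnum argument maps directly onto the third octet, which keeps the numbering scheme above easy to audit.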

NAT gateway

This component lives in the public subnet and allows our private resources to communicate with the internet from within the VPC (i.e. update software, pull public Docker images).

Note: For our network terraform module we are only using one NAT Gateway (but in production you may want to consider using one in each public subnet for redundancy)
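Under the hood, the networking module wires this up roughly as follows (the resource names here are illustrative, not the module’s actual internals):

```hcl
# Elastic IP for the NAT gateway
resource "aws_eip" "nat" {
  vpc = true
}

# The NAT gateway lives in one of the public subnets
resource "aws_nat_gateway" "main" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_a.id
}

# Private subnets route outbound traffic through the NAT gateway
resource "aws_route" "private_internet" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.main.id
}
```

Because the route only covers outbound traffic, resources in the private subnets can reach the internet without being reachable from it.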

Network ACL

We are leveraging Network ACL for granular network flow control. Here are a few details about it:

  • Allow TCP (port 22) ingress / egress (ssh)
  • Allow TCP (port 80) ingress / egress (http)
  • Allow TCP (port 443) ingress / egress (https)
  • Allow Ephemeral ports for AWS resources (ELB/ALB) ingress / egress (port 1024 - 65535)

For more information about Ephemeral ports, please consult this article.
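As an example, the ephemeral-port ingress rule could be expressed like this (a sketch - the rule number and resource names are arbitrary):

```hcl
resource "aws_network_acl_rule" "ingress_ephemeral" {
  network_acl_id = aws_network_acl.main.id
  rule_number    = 400
  egress         = false
  protocol       = "tcp"
  rule_action    = "allow"
  cidr_block     = "0.0.0.0/0"
  from_port      = 1024
  to_port        = 65535
}
```

Unlike security groups, NACLs are stateless, which is why return traffic on ephemeral ports has to be allowed explicitly.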

Internet gateway

This allows our public resources in the public subnet to access the internet (i.e. the ALB).
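Roughly, this boils down to an internet gateway attached to the VPC plus a default route in the public route table (again, illustrative resource names):

```hcl
resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id
}

# Public subnets route 0.0.0.0/0 through the internet gateway
resource "aws_route" "public_internet" {
  route_table_id         = aws_route_table.public.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.main.id
}
```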

Compute Design Details

For the compute layer, we have added a few AWS resources for security and availability. These include the Application Load Balancer, target group, and security groups.

Application load balancer

A logical group (the target group) is created to contain the AWS ECS resources which the ALB will proxy requests to. The ALB is a layer 7 load balancer which distributes the load across our AWS resources. In our case, this would be the target group we will be creating for our AWS ECS services.

The module also sets up the listeners to proxy client requests (http - port 80, https - port 443) to the attached target group.
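For reference, an HTTP listener forwarding to the target group looks roughly like this (a sketch of what the module does, not its exact internals):

```hcl
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.main.arn
  port              = 80
  protocol          = "HTTP"

  # Forward all incoming requests to our ECS target group
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.ecs.arn
  }
}
```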

Security Groups

As a security measure, we will be adding fine-grained control over port access within our resources.

AWS ALB:

  • Only allow ingress on port 80 (http) from the internet
  • Only allow egress to the target port (ie 3000) in the private subnets

AWS ECS:

  • Only allow ingress traffic coming from ALB Security group
  • Allow egress from all ports (so returning traffic can go out)

Setting up our project

  1. Clone the following repository and remove the .git directory
git clone --depth=1 --branch=master https://github.com/Jareechang/example-nextjs-emotion11-material-ui

# If it exists
rm -rf ./.git
  2. Initialize terraform infrastructure files under infra/*
# In your directory, create the following

infra/
├── main.tf
├── output.tf
└── variable.tf
  3. Create the terraform files
mkdir infra
touch infra/main.tf infra/output.tf infra/variable.tf

Note: Ignore the output.tf file for now (We will need this later)

  4. Scaffold out the terraform files
// infra/variable.tf
variable "aws_region" {
  default = "us-east-1"
}

variable "project_id" {
  default = "node-app"
}

variable "env" {
  default = "prod"
}

// infra/main.tf
provider "aws" {
  version = "~> 2.0"
  region  = var.aws_region
}

locals {
  # Target port to expose
  target_port = 3000
}

Network Implementation

As I mentioned, we will be leveraging a Terraform module I created just to simplify things, but as an exercise, feel free to build your own.

As a reference, we will define the CIDR blocks for our public and private subnets:

  • Public - 10.0.1.0/24 and 10.0.2.0/24 .
  • Private - 10.0.11.0/24 and 10.0.12.0/24 .
  • Private - 10.0.21.0/24 and 10.0.22.0/24 (ignore for now - we won’t be using these, but they can be added in)

We have also chosen us-east-1a and us-east-1b as the availability zones.

Add the following:

// main.tf
#### Networking (subnets, igw, nat gw, rt etc)
module "networking" {
  source = "github.com/Jareechang/tf-modules//networking?ref=v1.0.1"
  env = var.env
  project_id = var.project_id
  subnet_public_cidrblock = [
    "10.0.1.0/24",
    "10.0.2.0/24"
  ]
  subnet_private_cidrblock = [
    "10.0.11.0/24",
    "10.0.12.0/24"
  ]
  azs = ["us-east-1a", "us-east-1b"]
}

Compute Implementation

Let’s also set up the application load balancer, target group and security groups.

We are using the defined local.target_port here to provide proper access.

As a reference here is what we are trying to configure:

AWS ALB:

  • ALB
  • Custom target group (target_type = "ip")
  • Only allow ingress on port 80 (http) from the internet
  • Only allow egress to the target port (port = 3000)

AWS ECS:

  • Only allow ingress traffic coming from ALB Security group
  • Allow egress from all ports

Add the following to your main.tf file:


#### Security groups
resource "aws_security_group" "alb_ecs_sg" {
  vpc_id = module.networking.vpc_id

  ## Allow inbound on port 80 from internet (all traffic)
  ingress {
    protocol         = "tcp"
    from_port        = 80
    to_port          = 80
    cidr_blocks      = ["0.0.0.0/0"]
  }

  ## Allow outbound to ecs instances in private subnet
  egress {
    protocol    = "tcp"
    from_port   = local.target_port
    to_port     = local.target_port
    cidr_blocks = module.networking.private_subnets[*].cidr_block
  }
}

resource "aws_security_group" "ecs_sg" {
  vpc_id = module.networking.vpc_id
  ingress {
    protocol         = "tcp"
    from_port        = local.target_port
    to_port          = local.target_port
    security_groups  = [aws_security_group.alb_ecs_sg.id]
  }

  ## Allow ECS service to reach out to internet (download packages, pull images etc)
  egress {
    protocol         = "-1"
    from_port        = 0
    to_port          = 0
    cidr_blocks      = ["0.0.0.0/0"]
  }
}

module "ecs_tg" {
  source              = "github.com/Jareechang/tf-modules//alb?ref=v1.0.2"
  create_target_group = true
  port                = local.target_port
  protocol            = "HTTP"
  ## This is important! (awsvpc network mode registers tasks by IP)
  target_type         = "ip"
  vpc_id              = module.networking.vpc_id
}

module "alb" {
  source             = "github.com/Jareechang/tf-modules//alb?ref=v1.0.2"
  create_alb         = true
  enable_https       = false
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb_ecs_sg.id]
  subnets            = module.networking.public_subnets[*].id
  target_group       = module.ecs_tg.tg.arn
}

Validate the setup

Let’s take this time to create our infrastructure and validate that everything is working as expected.

export AWS_ACCESS_KEY_ID=<your-key>
export AWS_SECRET_ACCESS_KEY=<your-secret>
export AWS_DEFAULT_REGION=us-east-1

terraform init
terraform plan
terraform apply -auto-approve

Ensure that there are no errors and that the services are created as expected. Visit the AWS console to validate at this time.

⚠️ Note: Remember to run terraform destroy -auto-approve after you are done with the module; otherwise, you will incur charges on your AWS account (unless you wish to keep the infrastructure for personal use).

Conclusion

To wrap up this section: while it didn’t contain much detail on AWS ECS, it covered a lot of the required steps before we can proceed to the other sections of the technical series.

We not only discussed the blueprint of the infrastructure, but also examined the different components within it to gain a base understanding of the roles they play.

Finally, we tied it all together by building out the resources using Terraform.

It’s a lot of up-front work, but hopefully the Terraform modules will simplify things a little bit. We’ll get to the fun parts, I promise!

In the next section, we will be setting up the application, and containerizing it to prepare it for continuous integration.

To see the completed repository for this section, please see building-ecs-part-1.

Feel free to use this as a starting point. Stay tuned for the next one!

References

Here are some helpful links for the AWS resources we used to build out our network and compute infrastructure.

Network References

Compute References



Jerry Chang 2022. All rights reserved.