AWS ECS technical series part II

Published on: Tue Nov 30 2021

Series

Technical Series

  1. Introducing the AWS ECS technical series
  2. AWS ECS technical series part I


Introduction

In this module, we are going to prepare our application for continuous integration by containerizing it.

In the previous module, we noted that AWS ECS uses docker containers to run the tasks in its services. So, this is something we will need to add to our software lifecycle: creating a new docker image of our application.

Specifically, we will be going through all the steps of the continuous integration pipeline using the CLI just to better understand each part.

Once we start automating everything using Github actions, it’ll be evident what we are doing and everything will just click together.

Not only that, we will also be creating our AWS ECR infrastructure which is our private registry to store our docker images.

We’ve got a lot of work to do, so let’s get started!

Feel free to continue with your progress from the previous section or use this starter repository - building-with-aws-ecs-part-1

Running locally

First, let’s start by running the application (next.js) locally.

Commands

Using yarn:

yarn install

// Run the application
yarn dev

Using npm:

npm install

// Run the application
npm run dev

Once that is done, visit http://localhost:3000/ and you should see a basic blog starter site.

It’s a simple site for demonstration purposes but this infrastructure and pipeline should work for any node.js application. Depending on your setup, you may need to tweak it slightly.

Ok, now we know what it looks like, let’s move on to containerizing it.

Goal

When we start our application locally, we already go through some steps of preparing the application for continuous integration.

This includes installing the npm packages, and running the actual application.

The only difference is we also need to prepare the docker container to run our node application in.

AWS ECS requires us to specify a new container image as part of the configuration.

In addition, for security, we will also be setting up a private registry (using AWS ECR) to store our images.

Before we automate all these steps, it is worth building a solid mental map of not only what is happening at a high level but also behind the scenes.

We will go through each step manually in our command line (CLI). That way, when we start to automate these steps using custom Github actions, you would still have a firm understanding of what is happening behind the scenes.

Let’s take a look at the steps required to prepare our next.js application.

Preparing a new version

continuous integration steps
  1. Install - Install the packages
  2. Build - Prepare an optimized bundle of the application
    • Run any formatting check (linters, typescript), tests or any pre-build checks
    • Run the build to prepare the bundle + a production install (without dev dependencies)
  3. Docker (login) - authenticate with private registry (AWS ECR)
  4. Docker (build + tag) - build the docker image using Dockerfile then tag it (version)
  5. Docker (push) - upload the new docker image to the private registry (AWS ECR)

At the end of this process, we should have a new version uploaded to our private registry in AWS ECR which we can reference using our container task definition in AWS ECS.
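To make these steps concrete, here is a rough sketch of the same pipeline as plain shell commands (the <region>, <ecr_repo_url> and <ecr_repo_path> placeholders refer to the AWS ECR details we will create later in this module):

# 1. + 2. Install and build
yarn install
yarn build

# 3. Authenticate docker with the private registry (AWS ECR)
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <ecr_repo_url>

# 4. Build and tag the docker image
docker build -t local-nextjs .
docker tag local-nextjs:latest <ecr_repo_url>/<ecr_repo_path>:latest

# 5. Push the new docker image to the private registry
docker push <ecr_repo_url>/<ecr_repo_path>:latest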

The install and build (steps 1 and 2) are already set up as part of the repository, but feel free to tweak them for your needs.

The only part missing is the docker setup and the AWS ECR, let’s set that up.

Configuring the Docker Image

So, once we have installed the packages and prepared an optimized bundle of our application, we need to prepare the environment (our custom run-time) for it to run on.

This is where docker and a Dockerfile come in; they allow us to set up a custom environment just for our application.

Please follow these steps to configure it.

1. Create a Dockerfile (under the root directory)

We will also be creating a start.sh which is just a shell script file for running our next.js application (along with other configuration).

touch Dockerfile && touch start.sh

2. Create a custom shell script for starting our node application

The $PORT environment variable will be the port we will be supplying through our AWS ECS configuration. So, don’t worry about that for now, just know it can be changed via the environment variable when we run our docker container.

Additionally, I have configured this application to disable the debug telemetry sent to next.js (this is optional, so I will leave it up to you).

Read about it at next.js telemetry.

#!/bin/sh

# Run in production mode and disable next.js telemetry
export NODE_ENV=production
export NEXT_TELEMETRY_DISABLED=1

# Start next.js on the port supplied via the environment (AWS ECS will set this)
yarn start -- -p $PORT
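If you want to sanity-check the script outside of docker (assuming you have already run yarn build), you can run it directly with a port of your choice:

# Run the start script locally on port 3000 (any free port works)
PORT=3000 sh start.sh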

3. Specify run-time and base setup:

We are adding the run-time, working directory and preparing some user management as per next.js documentation.

# Dockerfile

FROM node:alpine AS runner
RUN mkdir -p /opt/app

WORKDIR /opt/app

RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001

4. Copy over files:

Copy over all the files we need to run the application in our custom run-time:

  • public
  • node_modules
  • package.json
  • .next
  • start.sh

Notice we also need execution permission on the custom script file, which we add at the end with chmod +x .

# Dockerfile

FROM node:alpine AS runner
RUN mkdir -p /opt/app

WORKDIR /opt/app

RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001

COPY ./public /opt/app/public
COPY --chown=nextjs:nodejs ./.next /opt/app/.next
COPY ./node_modules /opt/app/node_modules
COPY ./package.json /opt/app/package.json
COPY ./start.sh /opt/app/start.sh
RUN chmod +x /opt/app/start.sh

5. Final addition:

The final addition is specifying the ENTRYPOINT, which is the command executed by the docker container when a run command is issued, along with switching to the non-root nextjs user.

# Dockerfile

FROM node:alpine AS runner
RUN mkdir -p /opt/app

WORKDIR /opt/app

RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001

COPY ./public /opt/app/public
COPY --chown=nextjs:nodejs ./.next /opt/app/.next
COPY ./node_modules /opt/app/node_modules
COPY ./package.json /opt/app/package.json
COPY ./start.sh /opt/app/start.sh
RUN chmod +x /opt/app/start.sh

USER nextjs

ENTRYPOINT ["/opt/app/start.sh"]

Testing the Docker Image

Ok, now we have everything ready. Time for a quick test to make sure everything is working as expected.

1. Building the application

// Build the application
yarn build

You should see this after successfully building your next.js application:

production next.js build results

2. Building the image

// Build the docker image 
docker build -t local-nextjs .

You should see something similar at the end of the docker build (your image hash will be different), along with other output:

Successfully built af0080bfd01e
Successfully tagged local-nextjs:latest

3. Running the docker image

Let’s try running on port 5000 as a test to ensure the dynamic port environment variable is working as expected.

Feel free to use another port if 5000 is unavailable, just be sure to update the port mapping -p <HOST_PORT>:<CONTAINER_PORT> (and the PORT environment variable, which must match the container port).

docker run -d -it -e PORT='5000' -p 5000:5000 local-nextjs:latest

You should see the container with the local-nextjs:latest image running when you run docker ps -a .

As a final check, visit http://localhost:5000 (or http://localhost:<PORT>) to verify the blog application is running as expected.

If not, try to debug using docker ps -a and docker logs <CONTAINER_ID>.
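For example, here are a few docker commands you might use while debugging (where <CONTAINER_ID> is the ID shown by docker ps -a):

# List containers created from our image
docker ps -a --filter ancestor=local-nextjs:latest

# Inspect the application logs of a specific container
docker logs <CONTAINER_ID>

# Clean up when you are done testing
docker stop <CONTAINER_ID> && docker rm <CONTAINER_ID>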

Let’s move onto building the AWS ECR infrastructure for the continuous integration pipeline.

AWS ECR Infrastructure

We will be creating our AWS ECR resource, and adding some output to verify the details of our setup.

In terms of naming, feel free to change the repository name where it says "nextjs" (this is optional). I just use it for this example, but it can be changed. However, be mindful of this change as we proceed with other modules (we will be referring back to it).

We are using the aws_caller_identity to create the path to our private registry. This is useful for testing (which is something we will be doing) but it is an optional step.

At the continuous integration stage, we will be using a custom github action but it is good to understand what is happening behind the scenes with authentication and pushing the docker image to AWS ECR.

Let’s set up the infrastructure, then test it out.

1. Add the following to the main.tf and output.tf files:

// main.tf
data "aws_caller_identity" "current" {}

resource "aws_ecr_repository" "main" {
  name                 = "web/${var.project_id}/nextjs"
  image_tag_mutability = "MUTABLE"
}

// output.tf
output "ecr_repo_url" {
  value = "${data.aws_caller_identity.current.account_id}.dkr.ecr.${var.aws_region}.amazonaws.com"
}

output "ecr_repo_path" {
  value = aws_ecr_repository.main.name
}

output "aws_region" {
  value = var.aws_region
}

Run the following commands:

export AWS_ACCESS_KEY_ID=<your-key>
export AWS_SECRET_ACCESS_KEY=<your-secret>
export AWS_DEFAULT_REGION=us-east-1

terraform init
terraform plan 
terraform apply -auto-approve
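As an optional check, you can inspect the terraform outputs and confirm the repository exists via the AWS CLI. The repository name below assumes you kept the web/<project_id>/nextjs naming from main.tf:

# Print the terraform outputs we defined above
terraform output

# Confirm the ECR repository exists (replace <project_id> with your var.project_id value)
aws ecr describe-repositories --repository-names "web/<project_id>/nextjs" --region us-east-1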

Uploading to AWS ECR

Now that we have everything set up, let’s try uploading our image to AWS ECR.

Since we have a private registry (not public), we need to authenticate using the AWS CLI (ECR) to get the password and login using docker before we can work with the registry.

Logging in

aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <ecr_repo_url>

Note: the information that needs to be filled in should be available in the output when you ran terraform apply -auto-approve . This includes:

  • ecr_repo_url - The ECR repository url
  • region - the aws region (ie us-east-1 )

You should see Login Succeeded if everything worked as expected.
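If the login fails, it is usually a credentials or region mismatch. A quick way to confirm which identity and region your CLI is using:

# Show the AWS identity your CLI is currently authenticated as
aws sts get-caller-identity

# Show the default region your shell is exporting
echo $AWS_DEFAULT_REGION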

Tagging the image

We need to tag our local image with the new registry details.

Run the following:

// Get image ID
docker images --filter=reference=local-nextjs --format "{{.ID}}"

// Tag the image (using image ID from above)
docker tag <Image-ID> <ecr_repo_url>/<ecr_repo_path>:latest

After tagging, you should see the following when running docker images in the terminal:

docker images list after tagging

Note: ecr_repo_url and ecr_repo_path should be available in the output.
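Optionally, if you are running this from the terraform directory (and are on Terraform 0.14+, which supports -raw), you can read those values straight from the outputs instead of copying them by hand:

# Read the registry details from the terraform outputs
export ECR_REPO_URL=$(terraform output -raw ecr_repo_url)
export ECR_REPO_PATH=$(terraform output -raw ecr_repo_path)

# Tag the local image using those values
docker tag <Image-ID> "$ECR_REPO_URL/$ECR_REPO_PATH:latest"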

Uploading to the registry

Finally, upload the image to the AWS ECR registry.

docker push <ecr_repo_url>/<ecr_repo_path>:latest

When this is done, verify via the AWS console that the image has been uploaded.
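If you prefer the CLI over the console, the same check can be done with the AWS CLI (using the ecr_repo_path and region values from the terraform output):

# List the images in the repository - you should see the "latest" tag
aws ecr describe-images --repository-name <ecr_repo_path> --region <region>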

There you have it. You’ve successfully gone through the whole process of manually building, containerizing the application and uploading a new image to AWS ECR (including the authentication portion).

In the next module we will be automating all of this using Github actions.

Even though we will be using a custom github action to automate this step, I still believe it is highly valuable to step through the whole process to understand each part.

Before we conclude this module, let’s setup one more thing which we will be using for the continuous integration and delivery.


AWS CI/CD role

There are two main AWS resources which our CI/CD role will need access to:

  • AWS ECR - Upload a new docker image
  • AWS ECS - Update our AWS ECS with new task definitions

I left out the codedeploy part as that is something we will add into the role later!

Let’s re-visit our CI/CD diagram to get an overview.

ecs ci/cd

I will leave this up to you on whether you’d want to create a custom role just for the CI/CD on this project.

You can technically just use your AWS ID and Secret with admin access, but for security, that is not recommended. If those credentials ever get leaked, whoever has them would have access to everything.

Whereas with this custom CI/CD role, they would only have access to a small portion of your AWS account with limited permissions (especially if you disallow deletion and use IMMUTABLE image tags, which cannot be overridden).

Ideally, you should use resource-based IAM, but I think having a limited AWS role stored in a Github actions secret is also a pretty safe bet.

IAM for CI/CD

What permissions should we set on our role then?

As part of the terraform module, we will be using an extended version of AmazonEC2ContainerRegistryPowerUser with some additional policies for AWS ECS too.

In our setup, we also limit the resources which the CI/CD role can act on. Optionally, if you wish to use this across multiple projects, you can also add more AWS ECR ARNs to ecr_resource_arns .

Once that is done, this role will be able to perform actions on those ECR repositories as well.

Add the following to your terraform:


# main.tf

## CI/CD user role for managing pipeline for AWS ECR resources
module "ecr_ecs_ci_user" {
  source            = "github.com/Jareechang/tf-modules//iam/ecr?ref=v1.0.12"
  env               = var.env
  project_id        = var.project_id
  create_ci_user    = true
  # This is the ECR ARN - Feel free to add other repository as required (if you want to re-use role for CI/CD in other projects)
  ecr_resource_arns = [
    "arn:aws:ecr:${var.aws_region}:${data.aws_caller_identity.current.account_id}:repository/web/${var.project_id}",
    "arn:aws:ecr:${var.aws_region}:${data.aws_caller_identity.current.account_id}:repository/web/${var.project_id}/*"
  ]
}

# output.tf

output "aws_iam_access_id" {
  value = module.ecr_ecs_ci_user.aws_iam_access_id
}

output "aws_iam_access_key" {
  value = module.ecr_ecs_ci_user.aws_iam_access_key
}

Applying the infrastructure:

export AWS_ACCESS_KEY_ID=<your-key>
export AWS_SECRET_ACCESS_KEY=<your-secret>
export AWS_DEFAULT_REGION=us-east-1

terraform init
terraform plan
terraform apply -auto-approve

After this is done, the aws_iam_access_id and aws_iam_access_key used for the CI/CD role should be available in the terminal output.

For sanity, feel free to go through the steps again to manually verify that this AWS ID and Secret lets you perform the same operations as your admin credentials.

After tagging your docker image with the new details (ECR repo url and path), you should be able to authenticate and push to AWS ECR with the CI/CD AWS credentials, as sketched below.
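Here is a minimal sketch of that check, assuming you are running it from the terraform directory (so the outputs above are available) and the image has already been built and tagged:

# Switch to the CI/CD credentials generated by the terraform module
export AWS_ACCESS_KEY_ID=$(terraform output -raw aws_iam_access_id)
export AWS_SECRET_ACCESS_KEY=$(terraform output -raw aws_iam_access_key)
export AWS_DEFAULT_REGION=us-east-1

# Authenticate and push using the limited CI/CD role
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin <ecr_repo_url>
docker push <ecr_repo_url>/<ecr_repo_path>:latest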

⚠️ Note: Remember to run terraform destroy -auto-approve after you are done with the module. Unless you wish to keep the infrastructure for personal use.


Conclusion

Congrats on getting through all the steps! It’s a lot of manual work, but don’t fret, this is only for learning purposes. As I mentioned, we will be automating all of these manual steps using Github actions.

By going through the steps manually, we built a good mental map to understand the different steps involved in your continuous integration pipeline.

The only difference between this setup and a docker setup you may be used to is that, in our case, we are using a private registry with AWS ECR.

We’ve not only built the AWS ECR infrastructure, but we also tested it out via the CLI (the authentication, tagging and uploading of a new docker image to the private registry).

I hope this has been a helpful module. In the next one, we’ll start configuring our Github actions pipeline and automating these steps! Stay tuned!

As a reference, here is the repository with the completed version of this module - building-with-aws-ecs-part-2.

