AWS ECS technical series part IV

Published on: Mon Dec 13 2021

Series

Technical Series

  1. Introducing the AWS ECS technical series
  2. AWS ECS technical series part I
  3. AWS ECS technical series part II
  4. AWS ECS technical series part III

Goals

If you’ve made it this far, give yourself a pat on the back! It’s a lot of work, and you should be proud.

In this module, we are finally going to wire everything together by building out our CI/CD pipeline using Github Actions. That way, whenever we make changes, they will be deployed automatically.

By the end of this, you should be familiar with all parts of the CI/CD pipeline (Github Actions, continuous integration, and continuous deployment), both in theory and in practice.

Here is what we will be building:

ecs ci/cd with github actions

Let’s dive right in.

Content

Introduction

So far, we've built our VPC to host our AWS resources, the networking and compute portions, and the AWS ECS infrastructure. This is all done through terraform, so if we destroy it, we can easily re-create it within minutes.

Now we finally tie it all together by using Github actions to automate tasks like testing, building and deploying our code changes.

To get started, feel free to continue with your progress from the previous section or use this starter repository - building-with-aws-ecs-part-3.

Let's review the concepts and theory for this module before we start building it out.

Review & Theory

The gist of this section is to review the continuous integration and continuous deployment steps for our setup.

Continuous integration

In the AWS ECS technical series part II module, we manually deployed our changes by building the image locally, then uploading it to AWS ECR.

This will be the "Continuous integration" portion of our pipeline, where we package our changes into a docker container and push it to AWS ECR so it is ready to be deployed to AWS ECS.

💡 As a reminder here are the steps:
  1. Install - Install the packages
  2. Test / Format / lint - Run any tests or pre-build checks or formatting check (linters, typescript)
  3. Build - Prepare an optimized bundle of the application
  4. Docker (login) - Authenticate with private registry (AWS ECR)
  5. Docker (build + tag) - Build the docker image using Dockerfile then tag it (version)
  6. Docker (push) - Upload the new docker image to the private registry (AWS ECR)

The only difference is that we are now automating these steps with Github Actions on each merge into our main/master branch.
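To make steps 4-6 concrete, here is a sketch of the manual docker equivalents. The registry URL and tag below are placeholders, not values from this series:

```shell
# Placeholder values - substitute your own account ID, region and tag
ECR_REGISTRY="123456789012.dkr.ecr.us-east-1.amazonaws.com"
ECR_REPOSITORY="web/node-app/nextjs"
IMAGE_TAG="abc1234"
IMAGE="$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
echo "$IMAGE"

# 4. Docker (login)
#    aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin "$ECR_REGISTRY"
# 5. Docker (build + tag)
#    docker build -t "$IMAGE" .
# 6. Docker (push)
#    docker push "$IMAGE"
```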

Continuous Deployment

Once continuous integration has finished and our docker image is prepared with our new code changes, we need to actually release it, or update our infrastructure to run this new version.

We will be using the amazon-ecs-deploy-task-definition Github Action to help us update our task definition with our new changes, then start new tasks using these new settings (new docker image).

In the previous section, AWS ECS technical series part III, we created a static task definition called service.latest.json; this will serve as a "blueprint" for running new tasks.

The github action will take this static definition, update it with the new container image, then upload it to AWS ECS. This will create a new revision of the task definition.
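For illustration, the only field the rendered copy changes is the container image; the account ID below is made up, and the tag is the commit SHA:

```json
{
  "name": "nextjs-image",
  "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web/node-app/nextjs:<git-sha>"
}
```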

Updating the infrastructure

In the previous setup of ECS, I forgot that we would also need to set up logging. Please ensure you add it if it isn't already in your main.tf!

Note: This portion is optional if you have already done this in the previous section.

Supporting ECS task logging to CloudWatch

Make the following changes to the blocks below:

# main.tf

locals {
  # Target port to expose
  target_port = 3000

  ## ECS Service config
  ecs_launch_type = "FARGATE"
  ecs_desired_count = 2
  ecs_network_mode = "awsvpc"
  ecs_cpu = 512
  ecs_memory = 1024
  ecs_container_name = "nextjs-image"
  ecs_log_group = "/aws/ecs/${var.project_id}-${var.env}"
  # Retention in days
  ecs_log_retention = 1
}

resource "aws_cloudwatch_log_group" "ecs" {
  name = local.ecs_log_group
  # This can be changed
  retention_in_days = local.ecs_log_retention
}

data "template_file" "task_def_generated" {
  template = "${file("./task-definitions/service.json.tpl")}"
  vars = {
    env                 = var.env
    port                = local.target_port
    name                = local.ecs_container_name
    cpu                 = local.ecs_cpu
    memory              = local.ecs_memory
    aws_region          = var.aws_region
    ecs_execution_role  = module.ecs_roles.ecs_execution_role_arn
    launch_type         = local.ecs_launch_type
    network_mode        = local.ecs_network_mode
    log_group           = local.ecs_log_group
  }
}
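Side note: newer Terraform releases deprecate the template_file data source in favor of the built-in templatefile() function. If you prefer the function, an equivalent sketch (same variables, untested against this exact setup):

```hcl
locals {
  task_def_rendered = templatefile("./task-definitions/service.json.tpl", {
    env                = var.env
    port               = local.target_port
    name               = local.ecs_container_name
    cpu                = local.ecs_cpu
    memory             = local.ecs_memory
    aws_region         = var.aws_region
    ecs_execution_role = module.ecs_roles.ecs_execution_role_arn
    launch_type        = local.ecs_launch_type
    network_mode       = local.ecs_network_mode
    log_group          = local.ecs_log_group
  })
}
```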

Update service.json.tpl:

{
  "family": "task-definition-node",
  "networkMode": "${network_mode}",
  "requiresCompatibilities": [
    "${launch_type}"
  ],
  "cpu": "${cpu}",
  "memory": "${memory}",
  "executionRoleArn": "${ecs_execution_role}",
  "containerDefinitions": [
    {
      "name": "${name}",
      "image": "nginx:latest",
      "memoryReservation": ${memory},
      "portMappings": [
        {
          "containerPort": ${port},
          "hostPort": ${port}
        }
      ],
      "environment": [
        {
          "name": "PORT",
          "value": "${port}"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "${log_group}",
          "awslogs-region": "${aws_region}",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "secrets": []
    }
  ]
}
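The awslogs options above wire the container's stdout/stderr to the CloudWatch log group. Once tasks are running, you can follow the logs from your terminal; a sketch, where the project_id and env values are assumptions matching this series' examples:

```shell
# Mirror of the Terraform local: ecs_log_group = "/aws/ecs/${var.project_id}-${var.env}"
project_id="node-app"
env="prod"
ecs_log_group="/aws/ecs/${project_id}-${env}"
echo "$ecs_log_group"

# Follow the ECS task logs (AWS CLI v2):
#   aws logs tail "$ecs_log_group" --follow
```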

Updating naming convention in ECS

Let's also clean up the naming convention used in our AWS ECS resources, as we need to refer to them in our github actions.

ECS Service name update:

# Example: "web-service-node-app-prod"

resource "aws_ecs_service" "web_ecs_service" {
  name            = "web-service-${var.project_id}-${var.env}"
  cluster         = aws_ecs_cluster.web_cluster.id
  task_definition = aws_ecs_task_definition.nextjs.arn
  desired_count   = local.ecs_desired_count
  launch_type     = local.ecs_launch_type

  load_balancer {
    target_group_arn = module.ecs_tg.tg.arn
    container_name   = local.ecs_container_name
    container_port   = local.target_port
  }

  network_configuration {
    subnets         = module.networking.private_subnets[*].id
    security_groups = [aws_security_group.ecs_sg.id]
  }

  tags = {
    Name = "web-service-${var.project_id}-${var.env}"
  }

  depends_on = [
    module.alb.lb,
    module.ecs_tg.tg
  ]
}

If you choose to use a different name, just ensure you use the same one in the github actions configuration later.

ECS cluster name update:

# Example: "web-cluster-node-app-prod"

resource "aws_ecs_cluster" "web_cluster" {
  name = "web-cluster-${var.project_id}-${var.env}"
  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}

Setting up Github Actions - Continuous integration

1. Add folders and files

From your root directory:

mkdir -p .github/workflows && touch .github/workflows/main.yml

2. Scaffold the github actions file

# main.yml

name: deploy

3. Add Github push event trigger

# main.yml

name: deploy

on:
  push:
    branches:
      - master
      - main

4. Configure a “Job” and the type of machine to run in

# main.yml

name: deploy

on:
  push:
    branches:
      - master
      - main

jobs:
  build:
    runs-on: ubuntu-latest

5. Configure checkout, install and build step

Here we are using the Github checkout action (actions/checkout@v2) to check out our git repository with all the source code.

Then, we kick off the CI step with installation and build.

# main.yml

name: deploy

on:
  push:
    branches:
      - master
      - main

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Install & Build
        uses: actions/checkout@v2
      - run: yarn install --frozen-lockfile
      - run: yarn build && yarn install --production --ignore-scripts --prefer-offline
💡 we run yarn install --production --ignore-scripts --prefer-offline to prune unused dependencies, like devDependencies, and optimize our node_modules for production.
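As an optional speed-up (an assumption, not part of the original setup), actions/setup-node can cache yarn's package cache between runs; adding it before the install step would look like:

```yaml
      - uses: actions/setup-node@v2
        with:
          node-version: '16'
          cache: 'yarn'
```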

6. Setting up AWS credentials & Login to ECR

Let's set up the AWS credentials with the CI/CD AWS role we created just for this!

We will need to add these to our github "Secrets" section, but we will re-visit that a bit later.

We will be using the AWS github action configure-aws-credentials to set up the credentials and log in to ECR.

# main.yml

name: deploy

on:
  push:
    branches:
      - master
      - main

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Install & Build
        uses: actions/checkout@v2
      - run: yarn install --frozen-lockfile
      - run: yarn build && yarn install --production --ignore-scripts --prefer-offline

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
            aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
            aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
            aws-region: us-east-1

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
💡 aws-actions/amazon-ecr-login@v1 is just doing the same thing as:
  • aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <ecr_registry_url>
The only difference is that it uses the javascript sdk in a custom github action.

Setting up Github Actions - Continuous deployment

1. Add step to build, tag and push the image to AWS ECR

Add the github actions step to handle our docker image build, tag and push to AWS ECR.

At the end of the step, we set an output to be consumed by the next step.

name: deploy

on:
  push:
    branches:
      - master
      - main

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Install & Build
        uses: actions/checkout@v2
      - run: yarn install --frozen-lockfile
      - run: yarn build && yarn install --production --ignore-scripts --prefer-offline

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
            aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
            aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
            aws-region: us-east-1

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, tag, and push image to Amazon ECR
        id: build-image
        env:
            ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
            ECR_REPOSITORY: web/node-app/nextjs
            IMAGE_TAG: ${{ github.sha }}
        run: |
            docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
            docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
            echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
💡 Note:
  • ECR_REGISTRY - comes from the output of the login-ecr step
  • ECR_REPOSITORY - If you changed your repository name, update it here
  • ::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG - This sets the full image reference (registry, repository and tag) as the step output image
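Note: GitHub has since deprecated the ::set-output workflow command. If your runner warns about it, the same step can write to the $GITHUB_OUTPUT file instead:

```yaml
        run: |
            docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
            docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
            echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> "$GITHUB_OUTPUT"
```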

2. Add step to render the new task definition

In this step, we fill in service.latest.json with the image we set as output in the previous build step.

This will be the task definition we upload to AWS ECS.

name: deploy

on:
  push:
    branches:
      - master
      - main

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Install & Build
        uses: actions/checkout@v2
      - run: yarn install --frozen-lockfile
      - run: yarn build && yarn install --production --ignore-scripts --prefer-offline

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
            aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
            aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
            aws-region: us-east-1

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, tag, and push image to Amazon ECR
        id: build-image
        env:
            ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
            ECR_REPOSITORY: web/node-app/nextjs
            IMAGE_TAG: ${{ github.sha }}
        run: |
            docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
            docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
            echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"

      - name: Fill in the new image ID in the Amazon ECS task definition
        id: task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
            task-definition: infra/task-definitions/service.latest.json
            container-name: nextjs-image 
            image: ${{ steps.build-image.outputs.image }}

3. Add step to deploy to AWS ECS

Finally, in this step, we deploy to AWS ECS using the task definition rendered in the previous step.

This will create a new revision of the task definition as well as deploy our tasks into the ECS service.

name: deploy

on:
  push:
    branches:
      - master
      - main

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Install & Build
        uses: actions/checkout@v2
      - run: yarn install --frozen-lockfile
      - run: yarn build && yarn install --production --ignore-scripts --prefer-offline

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
            aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
            aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
            aws-region: us-east-1

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, tag, and push image to Amazon ECR
        id: build-image
        env:
            ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
            ECR_REPOSITORY: web/node-app/nextjs
            IMAGE_TAG: ${{ github.sha }}
        run: |
            docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
            docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
            echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"

      - name: Fill in the new image ID in the Amazon ECS task definition
        id: task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
            task-definition: infra/task-definitions/service.latest.json
            container-name: nextjs-image 
            image: ${{ steps.build-image.outputs.image }}

      - name: Deploy Amazon ECS task definition
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
            task-definition: ${{ steps.task-def.outputs.task-definition }}
            service: web-service-node-app-prod
            cluster: web-cluster-node-app-prod
            wait-for-service-stability: true
⚠️ Important: Please ensure the following configurations are the same as those defined in terraform
  • container-name
  • service (name)
  • cluster (name)

This is critical for everything to work properly.

Final steps

Before we can test this out, we'll need to add the AWS access key ID and secret access key to the github actions "Secrets" vault on your repository.

Within the Github repository where you are hosting the code, go under "Settings" > "Secrets" and click "New repository secret".

Add the following:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
github actions secrets tab

Now we are ready to test this thing out. To test the deployment, just make a change in the next.js application and push the change to your main or master branch.

Please visit the Github Actions tab on your repository to ensure the deployment has succeeded.

Then use the alb_url in the terraform output to view the updated site:

image with terraform output with alb url

If you’d like a reference of the result, it should be available at building-with-aws-ecs-part-4.

Conclusion

That's it! We succeeded in creating our CI/CD pipeline for our next.js application.

The beauty of this setup is that our infrastructure is managed using terraform & github actions rather than through the AWS console.

This means our infrastructure is a repeatable process that can be re-used or re-purposed for other use cases, and you'll get the same results. Think of it as a blueprint!

I encourage you to look more into github actions and its features. It's quite a powerful tool with a vibrant community of developers building on top of it.

In the next and final module, we are going to take our deployment to the next level by adding some safety measures to it.

This way we can reduce the impact of our errors when we release the new version of our code.

Stay tuned for the next module!


Enjoy the content?

Then consider signing up to get notified when new content arrives!

Jerry Chang 2022. All rights reserved.