AWS Aurora Technical Series Part V - Integrate with AWS ECS

Published on: Mon Feb 28 2022

Introduction

In this module, we will take a look at how to integrate our existing infrastructure (AWS Aurora - PostgreSQL) and application code (the PostgreSQL connection) with AWS ECS.

This should be a quick module because we did most of the heavy lifting setting everything up in the previous modules.

Review

Let’s quickly review what we have so far:

  • Working code that can connect to a local PostgreSQL
  • Infrastructure for production PostgreSQL in AWS Aurora

So, what is still missing to link these to AWS ECS?

Well, when we deploy our application, we need to let it know what credentials to use when connecting to AWS Aurora (these are stored in AWS SSM Parameter Store).

So, really, this module boils down to making these credentials available to our AWS ECS tasks when they are deployed inside a cluster.
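
As a refresher, the database password is stored in AWS SSM Parameter Store as a SecureString. Here is a minimal sketch of that resource from the earlier module, with the parameter path and input variable assumed (they just need to line up with the IAM policy and ARN used later in this module):

# Sketch only: the SecureString parameter created in the earlier module (names and path assumed).
resource "aws_ssm_parameter" "db_password" {
  name   = "/web/${var.project_id}/database/password" # path assumed; must fall under /web/${var.project_id}/
  type   = "SecureString"
  key_id = aws_kms_key.default.key_id # encrypted with our KMS key, so ECS needs kms:Decrypt
  value  = var.db_password            # hypothetical input variable holding the master password
}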

If you’d like a review of AWS ECS, feel free to revisit my series on that topic - here.

Once everything is integrated, this is what our architecture should look like:

aws aurora with ecs

Let’s get started! Feel free to use the previous code repository as a starting point - aws-aurora-part-4.

AWS ECS secret injection

Since AWS ECS uses container definitions as a blueprint to run and deploy containers in production, this is where we need to define our environment variables.

We are in luck because AWS ECS integrates with AWS SSM: we can use the secrets configuration option, a list of values retrieved from AWS SSM (Parameter Store or Secrets Manager) by referencing the ARN of the secret value.

Once retrieved, AWS ECS injects these into the environment of the running container.

Example:

{
  "secrets": [
    {
      "name": "DATABASE_PASSWORD",
      "valueFrom": "arn:aws:ssm:us-east-1:awsExampleAccountID:parameter/awsExampleParameter"
    }
  ]
}

Updating container definitions

Let’s update our AWS ECS container definitions to include the database configurations.

1. Add the environment variables

These are the standard PG* environment variables recognized by the PostgreSQL client:

  • PGUSER - database user
  • PGHOST - database host
  • PGPORT - database port
  • PGDATABASE - database name

Within the file task-definitions/service.json.tpl, make the following changes:

{
  "family": "task-definition-node",
  "networkMode": "${network_mode}",
  "requiresCompatibilities": [
    "${launch_type}"
  ],
  "cpu": "${cpu}",
  "memory": "${memory}",
  "executionRoleArn": "${ecs_execution_role}",
  "containerDefinitions": [
    {
      "name": "${name}",
      "image": "nginx:latest",
      "memoryReservation": ${memory},
      "portMappings": [
        {
          "containerPort": ${port},
          "hostPort": ${port}
        }
      ],
      "environment": [
        {
          "name": "PORT",
          "value": "${port}"
        },
        {
          "name": "PGHOST",
          "value": "${db_host}"
        },
        {
          "name": "PGDATABASE",
          "value": "${db_name}"
        },
        {
          "name": "PGUSER",
          "value": "${db_username}"
        },
        {
          "name": "PGPORT",
          "value": "${db_port}"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "${log_group}",
          "awslogs-region": "${aws_region}",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "secrets": []
    }
  ]
}

2. Add the database password

Next, add PGPASSWORD to the secrets array, referencing the SSM parameter ARN so AWS ECS injects the value when the task starts:

{
  "family": "task-definition-node",
  "networkMode": "${network_mode}",
  "requiresCompatibilities": [
    "${launch_type}"
  ],
  "cpu": "${cpu}",
  "memory": "${memory}",
  "executionRoleArn": "${ecs_execution_role}",
  "containerDefinitions": [
    {
      "name": "${name}",
      "image": "nginx:latest",
      "memoryReservation": ${memory},
      "portMappings": [
        {
          "containerPort": ${port},
          "hostPort": ${port}
        }
      ],
      "environment": [
        {
          "name": "PORT",
          "value": "${port}"
        },
        {
          "name": "PGHOST",
          "value": "${db_host}"
        },
        {
          "name": "PGDATABASE",
          "value": "${db_name}"
        },
        {
          "name": "PGUSER",
          "value": "${db_username}"
        },
        {
          "name": "PGPORT",
          "value": "${db_port}"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "${log_group}",
          "awslogs-region": "${aws_region}",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "secrets": [
        {
          "name": "PGPASSWORD",
          "valueFrom": "${db_password}"
        }
      ]
    }
  ]
}

3. Add the new variables to the Terraform file

Let’s also update our Terraform file to include these new variables when rendering our task definition.

data "template_file" "task_def_generated" {
  template = "${file("./task-definitions/service.json.tpl")}"
  vars = {
    env                 = var.env
    port                = local.target_port
    name                = local.ecs_container_name
    cpu                 = local.ecs_cpu
    memory              = local.ecs_memory
    aws_region          = var.aws_region
    ecs_execution_role  = module.ecs_roles.ecs_execution_role_arn
    launch_type         = local.ecs_launch_type
    network_mode        = local.ecs_network_mode
    log_group           = local.ecs_log_group
    db_host             = aws_rds_cluster_endpoint.static.endpoint
    db_name             = aws_rds_cluster.default.database_name
    db_port             = 5432
    db_username         = aws_rds_cluster.default.master_username
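    # db_password is the SSM parameter ARN (not the value itself); AWS ECS resolves
    # the secret when the task is launched. The parameter name (e.g. /web/<project>/...)
    # starts with "/", so it can be appended directly after ":parameter" to form the ARN.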
    db_password         = "arn:aws:ssm:${var.aws_region}:${data.aws_caller_identity.current.account_id}:parameter${aws_ssm_parameter.db_password.name}"
  }
}
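
The rendered JSON still needs to be registered as a task definition. Exactly how depends on how the deployment was wired up in the previous modules; one minimal sketch, assuming the rendered template is written out to a file that the deploy step registers (for example with aws ecs register-task-definition --cli-input-json file://task-definitions/service.json):

# Sketch only: persist the rendered task definition for the deployment step to register.
resource "local_file" "task_definition" {
  content  = data.template_file.task_def_generated.rendered
  filename = "./task-definitions/service.json"
}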

Permissions

Just like everything else within AWS, you need to explicitly grant permissions for one service to access other services.

In our case, we need to grant our AWS ECS task execution role permissions to:

  • Get the secret from AWS SSM
  • Decrypt the SecureString secret in AWS SSM using AWS KMS

1. Grant AWS ECS permissions

module "ecs_roles" {
  source                    = "github.com/Jareechang/tf-modules//iam/ecs?ref=v1.0.23"
  create_ecs_execution_role = true
  create_ecs_task_role      = true

  ecs_execution_iam_statements = {
    ssm = {
      actions = [
        "ssm:GetParameter",
        "ssm:GetParameters",
        "ssm:GetParametersByPath"
      ]
      effect = "Allow"
      resources = [
        "arn:aws:ssm:${var.aws_region}:${data.aws_caller_identity.current.account_id}:parameter/web/${var.project_id}/*"
      ]
    }
    kms = {
      actions = [
        "kms:Decrypt"
      ]
      effect = "Allow"
      resources = [
        aws_kms_key.default.arn
      ]
    }
  }
}

💡 Note: These permissions need to be on the execution role because secret injection is an "infrastructure" task performed at deploy time, as opposed to permissions the application needs at run time (e.g. accessing S3), which belong on the task role.

2. Apply the infrastructure

export AWS_ACCESS_KEY_ID=<your-key>
export AWS_SECRET_ACCESS_KEY=<your-secret>
export AWS_DEFAULT_REGION=us-east-1
export TF_VAR_ip_address=<your-ip-address>

terraform init
terraform plan
terraform apply -auto-approve

Setting up our deployment

Once the infrastructure is ready, we need to add the AWS credentials to our GitHub repository secrets.

Within the GitHub repository hosting your code, go to "Settings" > "Secrets" and click "New repository secret". It may also be under "Settings" > "Secrets" > "Actions" if you have already set this up.

Add the following:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY

github actions secrets tab

Now we are ready to push a change and test out the deployment.

Verifying our integration

Once deployed, visit the URL under alb_url in your Terraform output, then go through the application to verify there are no errors and the information is correct.
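
For reference, the alb_url output comes from the earlier modules and is defined along these lines (a sketch, assuming the load balancer resource is named aws_lb.alb):

output "alb_url" {
  value = "http://${aws_lb.alb.dns_name}"
}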

Everything should behave the same as what we’ve seen locally: we can view the list of blog posts, visit an individual post, and see its comments.

If you’d like a reference of the finished result, it is available at aws-aurora-part-5.

Conclusion

That’s it! Congrats on making it this far :)

This was a very quick one, but it demonstrates how simple it is to securely integrate AWS Aurora with AWS ECS.

Of course, this is not limited to AWS Aurora; other services like AWS RDS, DynamoDB, ElastiCache, and Kinesis can be integrated as well. The process will be similar or exactly the same.

I hope this series has been helpful!



Jerry Chang 2022. All rights reserved.