Published on: Wed Jul 13 2022
In this module, we will use the starter template - aws-webhook-series-part-2 - as our starting point.
By the end of this module, you should:
✅ Be able to integrate AWS API Gateway, SQS, and AWS Lambda
✅ Know how to trigger events between AWS API Gateway and Lambda
✅ Understand and be able to set up the IAM permissions for the different resources (API Gateway, Lambda, etc.)
This module focuses on building on top of our existing infrastructure, where we will start to layer in the API endpoint (via API Gateway) and the queue (AWS SQS).
To review, here is the flow of the webhook:
Initial trigger (via API Gateway)
API Gateway triggers an event on the AWS Lambda alias
AWS Lambda (ingestion) adds the event to AWS SQS
Once the event is added to the queue, it triggers the AWS Lambda (process-queue)
[Optional] Upon failure, we will also have a Dead-Letter Queue (DLQ), which captures the failed events and allows them to be re-processed
AWS offers two types of API Gateway: HTTP and REST.
API Gateway REST is more comprehensive and offers more features out of the box. However, it is also more verbose and carries a higher level of complexity.
API Gateway HTTP is a simplified version that is suitable for most simple (and even some complex) use cases.
It really comes down to the features you need (VTL, advanced deployments, etc.).
To read more, check out this comparison - AWS - Choosing between REST APIs and HTTP APIs.
Add the following to your main.tf.
resource "aws_apigatewayv2_api" "lambda" {
name = "webhook_api"
protocol_type = "HTTP"
}
This is basically the stage for your API (i.e. dev, staging, qa, etc.).
We are also going to include access logging for our API Gateway, along with metadata in each log entry.
resource "aws_apigatewayv2_stage" "lambda" {
api_id = aws_apigatewayv2_api.lambda.id
name = "serverless_lambda_stage"
auto_deploy = true
access_log_settings {
destination_arn = aws_cloudwatch_log_group.api_gw.arn
format = jsonencode({
requestId = "$context.requestId"
sourceIp = "$context.identity.sourceIp"
requestTime = "$context.requestTime"
protocol = "$context.protocol"
httpMethod = "$context.httpMethod"
resourcePath = "$context.resourcePath"
routeKey = "$context.routeKey"
status = "$context.status"
responseLength = "$context.responseLength"
integrationErrorMessage = "$context.integrationErrorMessage"
})
}
}
resource "aws_cloudwatch_log_group" "api_gw" {
name = "/aws/api_gw/${aws_apigatewayv2_api.lambda.name}"
# If this is not available add it to the top of the file
# locals {
# default_lambda_log_retention = 1
# }
retention_in_days = local.default_lambda_log_retention
}
Once we create the log group, we also need the IAM permissions that allow API Gateway to write to that log group.
resource "aws_api_gateway_account" "this" {
cloudwatch_role_arn = aws_iam_role.cloudwatch.arn
}
resource "aws_iam_role" "cloudwatch" {
name = "api_gateway_cloudwatch_global"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Sid = ""
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "apigateway.amazonaws.com"
}
}]
})
}
data "aws_iam_policy_document" "cloudwatch" {
version = "2012-10-17"
statement {
actions = [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:PutLogEvents",
"logs:GetLogEvents",
"logs:FilterLogEvents"
]
effect = "Allow"
resources = [
"*"
]
}
}
resource "aws_iam_role_policy" "cloudwatch" {
name = "cloudwatch-default-log-policy-${var.aws_region}-${var.environment}"
role = aws_iam_role.cloudwatch.id
policy = data.aws_iam_policy_document.cloudwatch.json
}
This integration is the event trigger that tells API Gateway to invoke the AWS Lambda (ingestion) function.
resource "aws_apigatewayv2_integration" "this" {
api_id = aws_apigatewayv2_api.lambda.id
integration_uri = module.lambda_ingestion.alias[0].invoke_arn
integration_type = "AWS_PROXY"
integration_method = "POST"
payload_format_version = "2.0"
}
⚠️ Note: An important gotcha regarding this integration is that we are pointing the ARN reference to the Lambda alias, not the AWS Lambda function itself!
Add the following to your main.tf.
resource "aws_apigatewayv2_route" "this" {
api_id = aws_apigatewayv2_api.lambda.id
route_key = "POST /webhooks/receive"
target = "integrations/${aws_apigatewayv2_integration.this.id}"
}
Add the following to your outputs.tf.
This will output our API Gateway endpoint so we can invoke it once we apply our infrastructure!
output "api_endpoint" {
description = "Endpoint URL"
value = aws_apigatewayv2_stage.lambda.invoke_url
}
resource "aws_sqs_queue" "ingest_queue" {
name = "ingest-queue"
visibility_timeout_seconds = local.default_lambda_timeout
tags = {
Environment = var.environment
}
}
variable "environment" {
type = string
default = "dev"
}
resource "aws_sqs_queue" "ingest_queue" {
name = "ingest-queue"
# This may be tweaked depending on the processing time of the lambda
visibility_timeout_seconds = local.default_lambda_timeout
redrive_policy = jsonencode({
deadLetterTargetArn = aws_sqs_queue.ingest_dlq.arn,
maxReceiveCount: 2
})
tags = {
Environment = var.environment
}
}
resource "aws_sqs_queue" "ingest_dlq" {
name = "ingest-queue-dlq"
receive_wait_time_seconds = 20
tags = {
Environment = var.environment
}
}
resource "aws_lambda_event_source_mapping" "queue_lambda_event" {
event_source_arn = aws_sqs_queue.ingest_queue.arn
function_name = module.lambda_process_queue.alias[0].arn
batch_size = 5
}
⚠️ Note: Same gotcha as above: when SQS triggers an event on the AWS Lambda, it needs to use the AWS Lambda alias ARN rather than the AWS Lambda function ARN.
Ingestion function
This will allow the Ingestion function to add messages to the queue (we will be adding the code in the next module!).
module "lambda_ingestion" {
source = "./modules/lambda"
code_src = "../functions/ingestion/main.zip"
bucket_id = aws_s3_bucket.lambda_bucket.id
timeout = local.default_lambda_timeout
function_name = "Ingestion-function"
runtime = "nodejs12.x"
handler = "dist/index.handler"
publish = true
alias_name = "ingestion-dev"
alias_description = "Alias for ingestion function"
iam_statements = {
sqs = {
actions = [
"sqs:GetQueueAttributes",
"sqs:GetQueueUrl",
"sqs:SendMessage",
"sqs:ReceiveMessage",
]
effect = "Allow"
resources = [
aws_sqs_queue.ingest_queue.arn
]
}
}
environment_vars = {
DefaultRegion = var.aws_region
}
}
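Since the Ingestion code does not arrive until the next module, here is a minimal, hypothetical TypeScript sketch of what the handler could look like. It assumes the AWS SDK v2 that is bundled with the nodejs12.x runtime and a QUEUE_URL environment variable that is not part of the module above.
// functions/ingestion/src/index.ts (hypothetical sketch; the real code comes in the next module)
import { SQS } from "aws-sdk"; // the v2 SDK is bundled with the nodejs12.x runtime
import type { APIGatewayProxyEventV2, APIGatewayProxyResultV2 } from "aws-lambda";

const sqs = new SQS({ region: process.env.DefaultRegion });

export const handler = async (
  event: APIGatewayProxyEventV2
): Promise<APIGatewayProxyResultV2> => {
  // QUEUE_URL is an assumed environment variable; you would add it to
  // environment_vars in the lambda_ingestion module if you follow this sketch.
  await sqs
    .sendMessage({
      QueueUrl: process.env.QUEUE_URL!,
      MessageBody: event.body ?? "{}",
    })
    .promise();

  return { statusCode: 200, body: JSON.stringify({ received: true }) };
};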
This will allow our Process-Queue function to get messages from the queue and manage the Dead-Letter Queue (DLQ).
module "lambda_process_queue" {
source = "./modules/lambda"
code_src = "../functions/process-queue/main.zip"
bucket_id = aws_s3_bucket.lambda_bucket.id
timeout = local.default_lambda_timeout
function_name = "Process-Queue-function"
runtime = "nodejs12.x"
handler = "dist/index.handler"
publish = true
alias_name = "process-queue-dev"
alias_description = "Alias for ingestion function"
iam_statements = {
sqs = {
actions = [
"sqs:GetQueueAttributes",
"sqs:GetQueueUrl",
"sqs:ReceiveMessage",
"sqs:DeleteMessage",
]
effect = "Allow"
resources = [
aws_sqs_queue.ingest_queue.arn
]
}
dlq_sqs = {
actions = [
"sqs:GetQueueAttributes",
"sqs:GetQueueUrl",
"sqs:SendMessage",
"sqs:ReceiveMessage",
]
effect = "Allow"
resources = [
aws_sqs_queue.ingest_dlq.arn
]
}
}
environment_vars = {
DefaultRegion = var.aws_region
}
}
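Again, the real Process-Queue code comes in the next module. As a rough TypeScript sketch, an SQS-triggered handler receives a batch of up to batch_size records and only needs to throw on failure to let SQS (and eventually the DLQ) take over.
// functions/process-queue/src/index.ts (hypothetical sketch; the real code comes in the next module)
import type { SQSEvent } from "aws-lambda";

export const handler = async (event: SQSEvent): Promise<void> => {
  for (const record of event.Records) {
    // Each record.body is the raw message body sent by the Ingestion function.
    console.log("Processing message", record.messageId, record.body);
  }
  // If this handler throws, the messages become visible again and, after
  // maxReceiveCount (2) failed receives, SQS moves them to the DLQ.
};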
This is the permission for our API Gateway to invoke our AWS Lambda alias.
resource "aws_lambda_permission" "api_gw" {
statement_id = "AllowExecutionFromAPIGateway"
action = "lambda:InvokeFunction"
function_name = module.lambda_ingestion.alias[0].arn
principal = "apigateway.amazonaws.com"
source_arn = "${aws_apigatewayv2_api.lambda.execution_arn}/*/*"
}
Now, let’s use Terraform to apply our infrastructure.
# This will re-generate the assets
pnpm run generate-assets --filter @function/*
export AWS_ACCESS_KEY_ID=<your-key>
export AWS_SECRET_ACCESS_KEY=<your-secret>
export AWS_DEFAULT_REGION=us-east-1
terraform init
terraform plan
terraform apply -auto-approve
Before we close this module out, let’s do a quick test.
curl -X POST "<api_endpoint>/webhooks/receive" \
  -H "Content-Type: application/json" \
  --data-raw '{"data": "test"}'
We have not wired up the Process-Queue function yet, so we can only check the Ingestion function and the API Gateway logs.
Make sure you see log entries in both of those spots.
So, that’s it! Great job on working through it. We covered a lot of ground in this module.
Here is the link to the completed version of this module - aws-webhook-series-part-3.
As a recap, let’s review what we did:
We set up the API Gateway to allow us to trigger our Ingestion function
We added the appropriate IAM roles and permissions to our AWS resources (AWS Lambda and API Gateway)
We set up event triggers for AWS API Gateway, Lambda, and SQS
We reviewed a gotcha with these event triggers: you have to point them at the AWS Lambda alias (if you are using aliases)
In the next module, we will start to write the code that connects the two flows (event intake and event processing).
Lastly, if you enjoyed this tutorial, be sure to share it with a friend 📬!
If you would like to get notified when new content arrives, consider signing up!
See you in the next module!