Build a webhook microservice using AWS

Published on: Tue May 31 2022

Introduction

In this technical series, we will be building out a common solution used in many applications - a webhook endpoint.

This allows for service-to-service communication in your application or with a third party service.

We will go through setting up the infrastructure on AWS, writing the code in Node.js, handling security, and much more.

Beyond just receiving events through an endpoint, we can also leverage this pattern to process tasks in the background.

This provides several advantages such as:

  • Decoupled architecture
    • Makes security hardening and auditing easier
    • Allows for independent scaling
  • Better reliability (handling of failures such as errors, timeouts, and rate limits)
  • Resilience to unpredictable workloads (i.e. traffic spikes)

Architecture - AWS

In terms of architecture, we will be creating a public API endpoint, gated by AWS Cognito, which gives us the ability to add events to a queue and have them processed at a later time.

The architecture will be divided into two distinct sections:

  • Event intake - Handling the request to add an event to the queue
  • Event processing - Processing the event asynchronously at a later time (then retrying as needed during failures)

Webhook full infrastructure integrating with Github

Webhook Sequence

  1. The 3rd party service (i.e. Github) generates a signature for the payload using a shared secret key and HMAC SHA-256

  2. The 3rd party service then makes an HTTP request to the webhook endpoint with the payload and signature

  3. The AWS API Gateway invokes the AWS Lambda alias (associated with the Ingestion AWS Lambda function)

  4. The Ingestion AWS Lambda function verifies the signature

  5. The Ingestion AWS Lambda function adds the new message to the AWS SQS queue

  6. AWS SQS triggers the AWS Lambda alias (associated with the Process Queue AWS Lambda function) with the new message to be processed

  7. [Optional] Messages that fail in the Process Queue AWS Lambda function will be added to the Dead-Letter Queue (DLQ) to be retried
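
To make step 1 concrete, here is roughly how a sender like Github derives that signature: an HMAC-SHA-256 of the raw request body keyed with the shared secret, sent in the X-Hub-Signature-256 header with a sha256= prefix. A minimal Node.js sketch (the secret and payload below are placeholders):

```typescript
import { createHmac } from "node:crypto";

// Compute the value a sender like Github places in the X-Hub-Signature-256
// header: an HMAC-SHA-256 of the raw request body, keyed with the shared secret.
function signPayload(rawBody: string, secret: string): string {
  const hmac = createHmac("sha256", secret);
  hmac.update(rawBody);
  return `sha256=${hmac.digest("hex")}`;
}

// Example (secret and payload are placeholders)
console.log(signPayload('{"action":"opened"}', "my-shared-secret"));
```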

Let’s go through each part of the infrastructure in a bit more detail.

Authentication

The authentication for our webhook will use HS256, which stands for HMAC using SHA-256.

This allows us to establish a trusted relationship between the 3rd party service and our API endpoint so we can ensure that the payload we receive is coming from a valid source.

Throughout the series, we will go through the details on how to implement this using Node.js.
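
As a preview, here is a minimal verification sketch in Node.js, assuming we have access to the raw request body and the signature header; the full implementation (and its edge cases) comes later in the series:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify that the signature header matches the HMAC we compute ourselves.
// A timing-safe comparison avoids leaking information through response times.
function verifySignature(rawBody: string, signatureHeader: string, secret: string): boolean {
  const expected = `sha256=${createHmac("sha256", secret).update(rawBody).digest("hex")}`;
  const received = Buffer.from(signatureHeader);
  const computed = Buffer.from(expected);
  return received.length === computed.length && timingSafeEqual(received, computed);
}
```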

Lambda (Event intake)

Once authorized, the API Gateway will call the lambda alias, which will trigger the “ingestion” lambda. This will simply add the event to our intake queue.
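
A rough sketch of what that could look like with the AWS SDK for JavaScript v3; the INTAKE_QUEUE_URL environment variable and the handler shape are assumptions for illustration, not the final implementation:

```typescript
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

const sqs = new SQSClient({});

// Minimal "ingestion" handler: after the signature check passes,
// forward the raw payload to the intake queue and return immediately.
export const handler = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: process.env.INTAKE_QUEUE_URL, // assumed environment variable
      MessageBody: event.body ?? "",
    })
  );
  return { statusCode: 202, body: JSON.stringify({ queued: true }) };
};
```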

SQS (queue)

We are using SQS for our event intake and dead-letter queues (DLQ).
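
One possible way to express the relationship between the two queues in code, using the SDK; the queue names and maxReceiveCount are placeholders, and the redrive policy simply tells SQS when to move a failing message to the DLQ:

```typescript
import { SQSClient, CreateQueueCommand, GetQueueAttributesCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({});

// Create the DLQ first, then the intake queue with a redrive policy that
// moves a message to the DLQ after a few failed processing attempts.
async function createQueues() {
  const dlq = await sqs.send(new CreateQueueCommand({ QueueName: "webhook-dlq" }));
  const attrs = await sqs.send(
    new GetQueueAttributesCommand({ QueueUrl: dlq.QueueUrl, AttributeNames: ["QueueArn"] })
  );

  await sqs.send(
    new CreateQueueCommand({
      QueueName: "webhook-intake",
      Attributes: {
        RedrivePolicy: JSON.stringify({
          deadLetterTargetArn: attrs.Attributes?.QueueArn,
          maxReceiveCount: "3", // retry up to 3 times before parking in the DLQ
        }),
      },
    })
  );
}

createQueues().catch(console.error);
```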

Event triggers

Once an event is added to the queue, it will trigger the “processing” lambda.

This will allow us to process the event that was added to the queue using our lambda function.

This could be sending an email, relaying the event to another channel (Discord or Slack), or any other task.
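
The handler for that trigger might take roughly the following shape (the actual processing logic comes in a later module; reporting partial batch failures is an assumption that requires enabling ReportBatchItemFailures on the event source mapping):

```typescript
import type { SQSEvent, SQSBatchResponse } from "aws-lambda";

// Minimal "processing" handler: handle each queued webhook event and report
// failed records back so only those records are retried.
export const handler = async (event: SQSEvent): Promise<SQSBatchResponse> => {
  const batchItemFailures: SQSBatchResponse["batchItemFailures"] = [];

  for (const record of event.Records) {
    try {
      const payload = JSON.parse(record.body);
      // e.g. send an email, or relay the event to Discord/Slack
      console.log("processing event", payload);
    } catch (err) {
      console.error("failed to process record", record.messageId, err);
      batchItemFailures.push({ itemIdentifier: record.messageId });
    }
  }

  return { batchItemFailures };
};
```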

Technical series

1. Setup infrastructure for the Lambda function

Build a webhook microservice using AWS Technical series Part I

In this module, we will be scaffolding out our lambda functions, then testing the CI/CD manually from our local environment.

This is in preparation for automating the process in Github actions (next module).

The CI/CD will upload the assets to the S3 buckets, use them to publish a new lambda function version, and then update the lambda alias to point to the new version.
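
To sketch what exercising that flow manually from a local environment could look like with the AWS SDK (the function, bucket, key, and alias names below are placeholders, not the series' actual values):

```typescript
import {
  LambdaClient,
  UpdateFunctionCodeCommand,
  UpdateAliasCommand,
} from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({});

// Manual "deploy": point the function at the artifact in S3, publish an
// immutable version, then move the alias to that version.
async function deploy(functionName: string, bucket: string, key: string, alias: string) {
  const updated = await lambda.send(
    new UpdateFunctionCodeCommand({
      FunctionName: functionName,
      S3Bucket: bucket,
      S3Key: key,
      Publish: true, // publish a new numbered version
    })
  );

  await lambda.send(
    new UpdateAliasCommand({
      FunctionName: functionName,
      Name: alias,
      FunctionVersion: updated.Version,
    })
  );
}

// Example invocation (all names are placeholders)
deploy("webhook-ingestion", "my-lambda-artifacts", "ingestion.zip", "dev").catch(console.error);
```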

webhook lambda infrastructure

2. Setup the CI/CD for the Lambda functions

How To Secure Your AWS Deployments Using Github Actions and OIDC In 7 Easy Steps

In this module, we will be setting up the CI/CD for our lambda functions, which will take the new changes we push all the way to the live lambda function.

This will go through the process of building, testing, and deploying.

As part of this setup, we will be writing the Github workflows and setting up OpenID Connect (OIDC) between Github and AWS with the correct IAM permissions.
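
For a rough idea of the AWS side of that setup, here is a hedged sketch of creating a deploy role that trusts the Github OIDC provider; the account ID, repository, and role name are placeholders, and the role would still need the IAM permissions for the actual deployment attached:

```typescript
import { IAMClient, CreateRoleCommand } from "@aws-sdk/client-iam";

const iam = new IAMClient({});

// Trust policy allowing Github Actions runs from a specific repository to
// assume this role via the Github OIDC provider registered in the account.
// The account ID and repository below are placeholders.
const trustPolicy = {
  Version: "2012-10-17",
  Statement: [
    {
      Effect: "Allow",
      Principal: {
        Federated:
          "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com",
      },
      Action: "sts:AssumeRoleWithWebIdentity",
      Condition: {
        StringEquals: {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
        },
        StringLike: {
          "token.actions.githubusercontent.com:sub": "repo:my-org/my-webhook-repo:*",
        },
      },
    },
  ],
};

async function createDeployRole() {
  await iam.send(
    new CreateRoleCommand({
      RoleName: "github-actions-webhook-deploy", // placeholder name
      AssumeRolePolicyDocument: JSON.stringify(trustPolicy),
    })
  );
}

createDeployRole().catch(console.error);
```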

Once this is all set up, we should have an automated pipeline that deploys our changes to the dev alias of both of our functions whenever we push our code!

webhook lambda CI/CD infrastructure

3. Setup AWS API Gateway and SQS, integrate with Lambda and DLQ

Setup API Gateway and SQS with AWS Lambda

In this module, we will set up both the intake and re-processing (DLQ) SQS queues.

Then, we will connect that via an event trigger to call the “processing” lambda whenever an event is added to the queue.
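
A sketch of that wiring using the SDK (the queue ARN, function name, and alias are placeholders); the event source mapping is what makes SQS invoke the “processing” lambda for us:

```typescript
import { LambdaClient, CreateEventSourceMappingCommand } from "@aws-sdk/client-lambda";

const lambda = new LambdaClient({});

// Wire the intake queue to the "processing" lambda so new messages invoke it
// automatically. The queue ARN and function name (with alias) are placeholders.
async function connectQueueToLambda() {
  await lambda.send(
    new CreateEventSourceMappingCommand({
      EventSourceArn: "arn:aws:sqs:us-east-1:123456789012:webhook-intake",
      FunctionName: "webhook-processing:dev",
      BatchSize: 10,
      FunctionResponseTypes: ["ReportBatchItemFailures"], // enable partial batch responses
    })
  );
}

connectQueueToLambda().catch(console.error);
```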

webhook sqs with lambda and DLQ infrastructure

4. Setup the code for the ingestion and processing lambdas

Step-By-Step Guide to integrate a webhook with Github using AWS

In this module, we will code out all the logic for our “ingestion” and “processing” lambdas.

Then we will integrate it with Github webhooks to see it in action!

webhook sqs with lambda and DLQ infrastructure

This will include:

  • Code structure
  • Error handling
  • Event validation
  • Logic for adding into the queue
  • Guarding against SQS size limits (256 KB) - see the sketch after this list
  • Webhook payload signature verification (SHA-256)
  • And much more...
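
As a small taste of the size-limit guard mentioned in the list above, here is a hedged sketch (the constant and helper names are made up for illustration):

```typescript
// SQS caps a single message at 256 KB (body plus attributes), so the
// ingestion lambda should reject oversized payloads before calling SendMessage.
const SQS_MAX_MESSAGE_BYTES = 256 * 1024;

export function fitsInSqs(body: string): boolean {
  return Buffer.byteLength(body, "utf8") <= SQS_MAX_MESSAGE_BYTES;
}

// Example usage inside the ingestion handler (illustrative only):
//   if (!fitsInSqs(event.body ?? "")) {
//     return { statusCode: 413, body: JSON.stringify({ error: "Payload too large" }) };
//   }
```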

5. Service health and performance

Webhook infrastructure tracing

Now that we have a fully functional service, let’s take a look at how to handle upkeep and service health.

We need proper instrumentation in place in order to better understand how our service is doing in production.

In this module, we’ll explore the important metrics to look out for in our infrastructure.

Also, we’ll see how we can integrate different tooling and techniques to debug, measure and benchmark performance.

These could include things like:

  • Tracing
  • Logging
  • Metrics
  • Performance benchmarking
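
For example, even before reaching for dedicated tooling, emitting structured JSON log lines gives CloudWatch Logs something it can filter and aggregate on. A tiny hypothetical helper:

```typescript
// Emit one JSON object per log line so CloudWatch Logs Insights can filter
// and aggregate on the fields. The helper and field names are illustrative.
function logEvent(
  level: "info" | "warn" | "error",
  message: string,
  context: Record<string, unknown> = {}
): void {
  console.log(
    JSON.stringify({ level, message, timestamp: new Date().toISOString(), ...context })
  );
}

// Example usage inside the processing lambda
logEvent("info", "webhook event processed", { source: "github", durationMs: 42 });
```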

6. Error handling in AWS Lambda

AWS Lambda: The structured approach to simple and testable code

AWS Lambda response outcomes

As noted in our scaffold, we will be layering in error handling as a final touch.

By the end of this module, we should have an error handler that is simple, easy to test and extend.

We’ll not only go through the implementation but also the design that leads us to the end result!
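
As a hint of the direction, here is one possible sketch of such a wrapper; the names, error class, and response shapes are assumptions rather than the module's final design:

```typescript
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

// One possible shape for a small, testable error handler: a wrapper that maps
// thrown errors to HTTP responses so individual handlers stay simple.
class HttpError extends Error {
  constructor(public statusCode: number, message: string) {
    super(message);
  }
}

type Handler = (event: APIGatewayProxyEvent) => Promise<APIGatewayProxyResult>;

function withErrorHandling(handler: Handler): Handler {
  return async (event) => {
    try {
      return await handler(event);
    } catch (err) {
      const statusCode = err instanceof HttpError ? err.statusCode : 500;
      const message = err instanceof Error ? err.message : "Internal server error";
      console.error("handler failed", err);
      return { statusCode, body: JSON.stringify({ error: message }) };
    }
  };
}

// Example: the ingestion handler wrapped with the error handler
export const handler = withErrorHandling(async (event) => {
  if (!event.body) throw new HttpError(400, "Missing payload");
  return { statusCode: 202, body: JSON.stringify({ queued: true }) };
});
```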


Enjoy the content?

Then consider signing up to get notified when new content arrives!
