Amazon S3 + Cloudfront: Multi-page support (Tutorial)

Published on: Sat Dec 17 2022

Introduction

We’re going to set up a CloudFront viewer request function to remap paths so that /index.html is served when you visit a folder path.

If you would like a refresher on why we need to do this, feel free to check out my other article - Amazon S3 + Cloudfront: Multi-page support.

Otherwise, let’s start setting this up!

Brief Background - The Why

As mentioned in my last article, Amazon S3 + Cloudfront: Multi-page support, we need to remap request paths so that they default to index.html.

The gist is that, when CloudFront uses an Origin Access Identity (OAI) to access S3, it goes through the S3 REST API endpoint.

This means folder paths won’t default to index.html the way you’d expect with S3 static website hosting. For example, a request to /about/ fails instead of serving /about/index.html.

The Architecture

Here is what we are building.

Illustration of remapping the url

Architecture Explanation

The architecture is rather simple: on top of CloudFront and S3, we’re going to add a CloudFront Function to remap the URL path.

Instead of the request going straight to the cache and origin, it will first pass through this function, which contains our remapping logic.

The function runs on the viewer request event, meaning it executes as soon as CloudFront receives the request, before the cache is checked.

In this tutorial, this is what we’ll be setting up:

  • The CloudFront Function (viewer request) resource
  • The CloudFront Function code

Before you start

Before you start going through the tutorial, make sure you are using the starter repository - astro-cloudfront-integration.

This streamlines things like writing out all the boilerplate files.

It will be the base we build from!

Building the remap function

Let’s start building our remap function.

1. Create a new file for the function

Under the project root, create a new file - functions/viewer-request/index.js.

This is where the code for our function will live!

Run the following:

mkdir -p functions/viewer-request && touch functions/viewer-request/index.js

2. Add the logic for remapping

We’re now going to add the logic to perform the URI remapping.

At a high level, here is what it does:

  1. Remap a URI that is missing a file name (trailing slash)

  2. Remap a URI that is missing a file extension (i.e. .html or .png)

Add the following:

// functions/viewer-request/index.js 

function handler(event) {
  var request = event.request;
  var uri = request.uri;

  // Check whether the URI is missing a file name.
  if (uri.endsWith('/')) {
    request.uri += 'index.html';
  }
  // Check whether the URI is missing a file extension.
  else if (!uri.includes('.')) {
    request.uri += '/index.html';
  }

  return request;
}
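
If you’d like to sanity check the remapping before deploying, here is a minimal local sketch. The file name and setup are hypothetical (CloudFront Functions don’t use Node.js modules, so the handler is simply copied in for experimentation):

// test-viewer-request.js (hypothetical - for local sanity checks only)
// Run with: node test-viewer-request.js

// Copy of the handler from functions/viewer-request/index.js
function handler(event) {
  var request = event.request;
  var uri = request.uri;

  if (uri.endsWith('/')) {
    request.uri += 'index.html';
  } else if (!uri.includes('.')) {
    request.uri += '/index.html';
  }

  return request;
}

// A few representative URIs and what we expect them to remap to
[
  '/',           // -> /index.html
  '/about/',     // -> /about/index.html
  '/about',      // -> /about/index.html
  '/styles.css', // -> unchanged
].forEach(function (uri) {
  var result = handler({ request: { uri: uri } });
  console.log(uri, '->', result.uri);
});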

3. Create the CloudFront Function resource

Ok, our function code is ready to go.

Let’s define the resource in our Terraform.

Add the following changes:

resource "aws_cloudfront_function" "cf_viewer" {
  name    = "astro-multi-page-cf-viewer-function"
  runtime = "cloudfront-js-1.0"
  comment = "Remap request paths to index.html for multi-page sites"
  publish = true
  code    = file("../functions/viewer-request/index.js")
}
⚠️ Note:

CloudFront Functions are a little different from Lambda functions; they have tighter constraints on resources and runtime features.

Some of the things you get in a Lambda environment (for example, network or file system access) are not available in this runtime.

So, just keep that in mind.

For a full list of features supported by the CloudFront Functions runtime, check out Edge functions - JavaScript runtime features for CloudFront Functions.

4. Attach the function to the CloudFront distribution

Now that we’ve created our viewer function, we need to attach it to our CloudFront distribution.

Add the following changes:

resource "aws_cloudfront_distribution" "cf_distribution" {
  origin {
    domain_name = aws_s3_bucket.site_asset_bucket.bucket_regional_domain_name
    origin_id = local.s3_origin_id

    s3_origin_config {
      origin_access_identity = "origin-access-identity/cloudfront/${aws_cloudfront_origin_access_identity.oai.id}"
    }
  }

  enabled = true
  is_ipv6_enabled     = true
  comment             = "Astro static site CF"
  default_root_object = "index.html"

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id
    viewer_protocol_policy = "redirect-to-https"
    min_ttl                = local.min_ttl
    default_ttl            = local.default_ttl
    max_ttl                = local.max_ttl
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
    function_association {
      event_type   = "viewer-request"
      function_arn = aws_cloudfront_function.cf_viewer.arn
    }
  }

  # Redirect any navigation outside of the expected paths to the home page
  custom_error_response {
    error_caching_min_ttl = 0
    error_code = 403
    response_code = 200
    response_page_path = "/index.html"
  }

  price_class = var.cf_price_class

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}

5. Update the repo name

To generate the right permissions for our OIDC role to use within our GitHub Actions CI/CD, specify your GitHub repository.

Add the following changes:

# Input variable definitions

variable "aws_region" {
  description = "AWS region for all resources."
  type    = string
  default = "us-east-1"
}

variable "client_id_list" {
  default = [
    "sts.amazonaws.com"
  ]
}

variable "repo_name" {
  type    = string
  # Example: Jareechang/astro-aws-starter
  default = "Jareechang/astro-aws-multi-page-support"
}

variable "cf_price_class" {
  default = "PriceClass_100"
}
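
For context, here is a rough sketch of how a repo_name variable like this typically feeds into the OIDC role's trust policy. The starter repository already wires this up, and the names below (such as aws_iam_openid_connect_provider.github) are illustrative rather than the exact ones used in the repo:

# Hypothetical sketch - the starter repo's actual trust policy may differ
data "aws_iam_policy_document" "github_oidc_assume" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.github.arn]
    }

    # Only accept tokens issued for the expected audience
    condition {
      test     = "StringEquals"
      variable = "token.actions.githubusercontent.com:aud"
      values   = var.client_id_list
    }

    # Only allow workflows from your repository to assume the role
    condition {
      test     = "StringLike"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:${var.repo_name}:*"]
    }
  }
}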

6. Apply the infrastructure

Now that we have all our definitions in Terraform, all that is left is to apply them.

Run the following:

export AWS_ACCESS_KEY_ID=<your-key>
export AWS_SECRET_ACCESS_KEY=<your-secret>
export AWS_DEFAULT_REGION=us-east-1

terraform init
terraform plan
terraform apply -auto-approve

If the infrastructure applied successfully, you should see something like this:

Illustration of the Terraform outputs
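
Exactly which outputs you see depends on what the starter repository defines, but they will likely include things like the CloudFront domain name and the role ARN for GitHub Actions. Purely as an illustrative example, such an output might be defined like this:

output "cf_domain_name" {
  description = "Domain name of the CloudFront distribution"
  value       = aws_cloudfront_distribution.cf_distribution.domain_name
}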

GitHub Actions (Updates)

After applying the infrastructure, there are a few things we need to do:

  1. Make sure the secrets are added to your GitHub repository

  2. Update the role-to-assume field in the GitHub workflow YAML

1. Update the secrets in your GitHub repo

Illustration of the GitHub secrets UI

The secrets that we want to add to GitHub Actions are:

  • AWS_S3_BUCKET_NAME - The S3 bucket we will upload assets to

  • AWS_DISTRIBUTION_ID - The ID of our Cloudfront distribution

  • UNSPLASH_API_KEY - Unsplash API key used by the Astro static site

2. Update the GitHub Actions YAML definition

This will tell GitHub Actions which role to assume when running our workflow. Replace <your-assume-role-arn> below with the role ARN created by the Terraform setup.

name: deploy-site

on:
  push:
    branches:
      - master
      - main

# more changes
permissions:
  id-token: write
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Configure
        uses: actions/checkout@v2
      - uses: pnpm/action-setup@646cdf48217256a3d0b80361c5a50727664284f2
        with:
          version: 6.10.0
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@master
        with:
            role-to-assume: "<your-assume-role-arn>" 
            aws-region: "us-east-1"
      - run: ./scripts/deploy-site.sh
        env:
          AWS_S3_BUCKET_NAME: ${{ secrets.AWS_S3_BUCKET_NAME }}
          UNSPLASH_API_KEY: ${{ secrets.UNSPLASH_API_KEY }}
          AWS_DISTRIBUTION_ID: ${{ secrets.AWS_DISTRIBUTION_ID }}
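
The workflow calls scripts/deploy-site.sh from the starter repository. As a rough, hypothetical sketch (the actual script in the repo may differ), a deploy script for this setup would typically build the site, sync the output to S3, and invalidate the CloudFront cache:

#!/usr/bin/env bash
# Hypothetical sketch of a deploy script - the starter repo's
# scripts/deploy-site.sh may look different.
set -euo pipefail

# Build the Astro site (output goes to ./dist by default)
pnpm install
pnpm build

# Upload the built assets to the S3 bucket
aws s3 sync ./dist "s3://${AWS_S3_BUCKET_NAME}" --delete

# Invalidate the CloudFront cache so new pages are served immediately
aws cloudfront create-invalidation \
  --distribution-id "${AWS_DISTRIBUTION_ID}" \
  --paths "/*"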

Testing it out

1. Push the changes

Push the changes to the main branch of your GitHub repository.

git push origin main

2. View the site using the CloudFront URL

Go to /about using the CloudFront URL.

If everything went as expected, you should be able to see the about page!

Illustration of visiting the about page on the site via the CloudFront URL
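
You can also spot-check it from the command line (replace <your-distribution-domain> with your distribution's domain name):

curl -I https://<your-distribution-domain>.cloudfront.net/about

A 200 response with a text/html content type indicates that the viewer request function remapped /about to /about/index.html as expected.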

For reference, here is the repository for the completed tutorial - GitHub: astro-aws-multi-page-support.

Conclusion

And that’s it! You now have a fully functional multi-page static site and the AWS infrastructure to support it!

This is just one useful application of CloudFront Functions (viewer request).

They come in handy whenever you need to run simple logic (i.e. transformations or rewrites) on requests reaching CloudFront.

I hope you enjoyed this series with Astro + AWS.

If you did, please do share this article with a friend or co-worker 🙏❤️ (Thanks!)
