Published on: Wed Dec 07 2022
In the previous tutorial, we set up the CI/CD pipeline to deploy our Astro site to an Amazon S3 bucket.
Now, we’re going to integrate CloudFront into the whole process.
Using a CDN like CloudFront will allow us to distribute our site assets efficiently to our users around the world!
Before we start integrating CloudFront, let’s review the infrastructure.
CloudFront will act as a gateway to our S3 bucket, which in our case will be the origin server.
Illustration of the Astro integration with Amazon CloudFront and S3
The steps:
Viewer Request - The viewer request comes in from the client
Origin Request - Assuming the response was not cached, CloudFront forwards the request to the S3 bucket (the origin server)
Origin Response - The S3 bucket responds with the requested resource
Viewer Response - The response is added to the CloudFront cache (if applicable), and it is also forwarded to the client
The only change we need when integrating CloudFront into our pipeline is invalidating the cache when we build and deploy the site.
That way, our users can see the changes as soon as we deploy them!
Illustration of the Astro CI/CD full infrastructure with CloudFront
🔎 Preview: The change that we need to make to the CI/CD pipeline is quite simple.
It will involve a small addition to our scripts/deploy-site.sh [in Step 4].
We will cover that shortly in a later section!
Before you start going through the tutorial, make sure you are using the starter repository - static-site-astro-aws-ci-cd.
This streamlines things like writing out all the boilerplate files.
It will be the base we build from!
Now onto setting up CloudFront.
First, we create an Origin Access Identity (OAI). This resource will be the “user” associated with the CloudFront distribution.
Likewise, this “user” will have specific IAM permissions associated with it.
In our case, we will be granting S3 read access, so we can serve our files through CloudFront.
Add the following changes:
// infra/main.tf
resource "aws_cloudfront_origin_access_identity" "oai" {
comment = "CF origin identity"
}
The gist of the policy is that it will give our OAI read access to the S3 bucket (which is where we keep our site assets).
Add the following changes:
// infra/main.tf
data "aws_iam_policy_document" "cf_bucket_policy" {
statement {
sid = "AllowCloudFrontS3Access"
actions = [
"s3:GetObject"
]
resources = [
"${aws_s3_bucket.site_asset_bucket.arn}/*"
]
principals {
type = "AWS"
identifiers = [
aws_cloudfront_origin_access_identity.oai.iam_arn
]
}
}
}
Next, attach the policy above to our S3 bucket so it takes effect.
Add the following changes:
// infra/main.tf
resource "aws_s3_bucket_policy" "s3_allow_access" {
bucket = aws_s3_bucket.site_asset_bucket.id
policy = data.aws_iam_policy_document.cf_bucket_policy.json
}
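If you want to sanity-check the result once the infrastructure is applied, the AWS CLI can print the bucket policy back. A quick sketch (the bucket name below is a placeholder for your own):
# Print the policy attached to the site asset bucket
# (<your-bucket-name> is a placeholder for the bucket created by Terraform)
aws s3api get-bucket-policy --bucket <your-bucket-name> --query Policy --output text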
Now onto the CloudFront distribution. Let’s start by configuring our origin.
Remember, our origin in this case is our S3 bucket.
Also, add some local variables at the top of the file; some of them will be used later.
Add the following changes:
// infra/main.tf
locals {
  min_ttl      = 0
  max_ttl      = 86400
  default_ttl  = 3600
  s3_origin_id = "astro-static-site"
}

resource "aws_cloudfront_distribution" "cf_distribution" {
  origin {
    domain_name = aws_s3_bucket.site_asset_bucket.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }
}
Next, add the s3_origin_config block to associate the OAI with the origin. This will give our CloudFront distribution all the permissions granted to the OAI.
Add the following changes:
// infra/main.tf
resource "aws_cloudfront_distribution" "cf_distribution" {
origin {
domain_name = aws_s3_bucket.site_asset_bucket.bucket_regional_domain_name
origin_id = local.s3_origin_id
s3_origin_config {
origin_access_identity = "origin-access-identity/cloudfront/${aws_cloudfront_origin_access_identity.oai.id}"
}
}
}
These are just a few more configuration options that are required by CloudFront.
Feel free to check the Terraform documentation for more information about them.
Add the following changes:
// infra/main.tf
resource "aws_cloudfront_distribution" "cf_distribution" {
origin {
domain_name = aws_s3_bucket.site_asset_bucket.bucket_regional_domain_name
origin_id = local.s3_origin_id
s3_origin_config {
origin_access_identity = "origin-access-identity/cloudfront/${aws_cloudfront_origin_access_identity.oai.id}"
}
}
enabled = true
is_ipv6_enabled = true
comment = "Astro static site CF"
default_root_object = "index.html"
default_cache_behavior {
allowed_methods = ["GET", "HEAD"]
cached_methods = ["GET", "HEAD"]
target_origin_id = local.s3_origin_id
viewer_protocol_policy = "redirect-to-https"
min_ttl = local.min_ttl
default_ttl = local.default_ttl
max_ttl = local.max_ttl
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
}
  # Redirect all navigation outside of the expected pages to home.
  # Note: with an OAI that only has s3:GetObject (no ListBucket), S3 returns
  # a 403 rather than a 404 for missing objects, so we map 403s to index.html.
  custom_error_response {
    error_caching_min_ttl = 0
    error_code            = 403
    response_code         = 200
    response_page_path    = "/index.html"
  }

  price_class = var.cf_price_class

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
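At this point the distribution definition is complete. If you want a quick syntax check before moving on (after running terraform init, which we do below), Terraform can validate the configuration:
# Run from the infra/ directory; requires terraform init to have been run
terraform validate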
These are the outputs for our infrastructure.
Make the following changes:
// infra/outputs.tf
# Output value definitions
output "site_asset_bucket" {
description = "Name of the S3 bucket used to store function code."
value = aws_s3_bucket.site_asset_bucket.id
}
output "role_arn" {
value = aws_iam_role.github_actions.arn
}
output "cf_distribution_domain_url" {
value = "https://${aws_cloudfront_distribution.cf_distribution.domain_name}"
}
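Once the infrastructure is applied (we will do that shortly), you can read any of these values back with terraform output, for example:
# Print the CloudFront URL defined in infra/outputs.tf
terraform output cf_distribution_domain_url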
Next, the variables. The distribution above references var.cf_price_class, so we define it here alongside repo_name; the PriceClass_100 default below is an assumption (it serves edge locations in North America and Europe only). Add the following changes:
// infra/variables.tf
variable "repo_name" {
type = string
# Example: Jareechang/astro-aws-starter
default = "<insert-your-repo>"
}
We need to give GitHub Actions permission to invalidate the cache when we deploy new assets to the S3 bucket.
Add the following changes:
// infra/main.tf
data "aws_iam_policy_document" "github_actions" {
statement {
actions = [
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:PutObject",
"s3:DeleteObject"
]
effect = "Allow"
resources = [
aws_s3_bucket.site_asset_bucket.arn,
"${aws_s3_bucket.site_asset_bucket.arn}/*"
]
}
statement {
actions = [
"cloudfront:CreateInvalidation"
]
effect = "Allow"
resources = [
aws_cloudfront_distribution.cf_distribution.arn
]
}
}
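If your starter repository does not already attach this policy document to the GitHub Actions role, the wiring looks something like the sketch below (the resource and policy names here are illustrative):
// infra/main.tf
resource "aws_iam_role_policy" "github_actions_policy" {
  # Illustrative name; your starter may already define an equivalent attachment
  name   = "github-actions-site-deploy"
  role   = aws_iam_role.github_actions.id
  policy = data.aws_iam_policy_document.github_actions.json
}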
Now that we have all our definitions in Terraform, all that is left is to apply them.
Run the following:
export AWS_ACCESS_KEY_ID=<your-key>
export AWS_SECRET_ACCESS_KEY=<your-secret>
export AWS_DEFAULT_REGION=us-east-1
terraform init
terraform plan
terraform apply -auto-approve
If the infrastructure applied successfully, you should see something like this:
Illustration of the Terraform outputs
After applying the infrastructure, there are a few things we need to do:
Make sure the secrets are added to your GitHub repository
Update the role-to-assume field in the GitHub workflow YAML
Illustration of the GitHub secrets UI
The secrets that we want to add to GitHub Actions are:
AWS_S3_BUCKET_NAME - The S3 bucket we will upload assets to
AWS_DISTRIBUTION_ID - The ID of our CloudFront distribution
UNSPLASH_API_KEY - Unsplash API key used by the Astro static site
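You can add these through the repository settings UI shown above or, if you prefer the GitHub CLI, with something like this (all values are placeholders):
# Set the deployment secrets via the GitHub CLI
gh secret set AWS_S3_BUCKET_NAME --body "<your-bucket-name>"
gh secret set AWS_DISTRIBUTION_ID --body "<your-distribution-id>"
gh secret set UNSPLASH_API_KEY --body "<your-unsplash-api-key>"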
This will tell GitHub Actions which role to assume when running our workflow. (The id-token: write permission is what lets the workflow authenticate to AWS via OIDC to assume that role.)
name: deploy-site

on:
  push:
    branches:
      - master
      - main

# more changes

permissions:
  id-token: write
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Configure
        uses: actions/checkout@v2
      - uses: pnpm/action-setup@646cdf48217256a3d0b80361c5a50727664284f2
        with:
          version: 6.10.0
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@master
        with:
          role-to-assume: "<your-assume-role-arn>"
          aws-region: "us-east-1"
      - run: ./scripts/deploy-site.sh
        env:
          AWS_S3_BUCKET_NAME: ${{ secrets.AWS_S3_BUCKET_NAME }}
          UNSPLASH_API_KEY: ${{ secrets.UNSPLASH_API_KEY }}
          AWS_DISTRIBUTION_ID: ${{ secrets.AWS_DISTRIBUTION_ID }}
Under the project root, open up scripts/deploy-site.sh.
This change will invalidate the cached responses in our CloudFront distribution when we deploy our app. Note that the script reads the distribution ID from the AWS_DISTRIBUTION_ID environment variable passed in by the workflow.
Make the following change:
echo "Step 1: Install & Build"
pnpm install --production
pnpm run build --filter "@site/*"
echo "Step 2: Syncing to s3"
aws s3 sync $PWD/site/dist s3://$AWS_S3_BUCKET_NAME
echo "Step 3: Invalidating the Cloudfront cache"
aws cloudfront create-invalidation --distribution-id ${{ secrets.AWS_DISTRIBUTION_ID }} --paths "/*"
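As a side note, create-invalidation returns immediately without waiting for the invalidation to finish. If you ever need to block until it completes (say, before a smoke test), the CLI has a waiter for that. A sketch, assuming you captured the invalidation ID from the create call:
# Block until the given invalidation completes
# (<invalidation-id> comes from the create-invalidation output)
aws cloudfront wait invalidation-completed --distribution-id $AWS_DISTRIBUTION_ID --id <invalidation-id>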
Push the changes to the main branch of your GitHub repository.
git push origin main
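The push kicks off the deploy in GitHub Actions. You can follow it in the Actions tab or, if you use the GitHub CLI, from the terminal:
# List recent runs of the workflow, then follow one interactively
gh run list --workflow deploy-site
gh run watch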
If everything went as expected, here is what you should see:
Illustration of visiting the site via the CloudFront URL
For reference, here is the repository of the completed tutorial - GitHub: astro-cloudfront-integration.
Nice! Now we not only have fully automated CI/CD, but we also have it integrated with a CDN (CloudFront).
Faith’s little floral online shop will now flourish :)
She can now serve a global audience who can come visit her site.
The final thing that is left is to serve a multi-page website from S3 + CloudFront.
We’ll cover that in the next tutorial (and you’ll see why this is a gotcha)!
And... that’s all for now, stay tuned for more!
I hope you enjoyed this tutorial or learned something new.
If you did, please do share this article with a friend or co-worker 🙏❤️ (Thanks!)
Then consider signing up to get notified when new content arrives!