AWS Aurora Technical Series Part IV - PostgreSQL integration with Next.js

Published on: Mon Feb 21 2022

Introduction

In this module, we will take a look at the code used to interact with PostgreSQL using node-pg.

To simplify this series, most of the implementation is already available. However, we will still briefly go over it so we get the gist of the implementation.

Finally, we will enhance our blog site by adding a comments section. Then, as a challenge, there will be a list of improvements available if you wish to further enhance the website.

Let’s get started! Feel free to use the previous code repository as a starting point - aws-aurora-part-3.

Overview of the code

We will go over the parts of the code that mainly deal with the PostgreSQL integration. For managing the database, we will be leveraging the pg package.

Database service

The database service manages the database instance throughout the lifecycle of the application.

This includes the configuration and the connection type (i.e. pooling).

The instance will be used by the dao or any other part of the code that needs to run operations against the database.

Configuration

The configuration of the database is done through environment variables, or loaded via .env files.

These are mainly credentials for establishing a database connection:

  • PGUSER - the username
  • PGHOST - the host of the database (i.e. 0.0.0.0)
  • PGDATABASE - the database name
  • PGPASSWORD - the password of the database
  • PGPORT - the port on which the database is running

In addition, we have added a custom SSL configuration for when we are running in production:

import fs from 'fs';
import path from 'path';

export function getCert(): string {
  const root = process.env.GITHUB_WORKSPACE || process.cwd();
  return fs.readFileSync(
    path.join(root, './global-bundle.pem')
  ).toString();
}

if (process.env.NODE_ENV !== 'development') {
  config.ssl = {
    rejectUnauthorized: false,
    ca: getCert()
  };
}

getCert() is just a simple function that loads the certificate details from the filesystem.

This is the main difference between our development and production.
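
To tie it together, here is a minimal sketch of how the database service might build this configuration and expose a shared connection pool. The import paths and file layout here are illustrative rather than exact; the repository's version may differ slightly.

// A sketch of the database service wiring (illustrative paths and names).
import { Pool, PoolConfig } from 'pg';
import { getCert } from './cert'; // the helper shown above; its actual location may differ

// pg reads PGUSER, PGHOST, PGDATABASE, PGPASSWORD and PGPORT from the
// environment, so the base config can stay empty.
const config: PoolConfig = {};

if (process.env.NODE_ENV !== 'development') {
  config.ssl = {
    rejectUnauthorized: false,
    ca: getCert()
  };
}

// A single shared Pool instance, reused by the daos for all queries.
export const pool = new Pool(config);

Creating the Pool once at module load means our API routes reuse pooled connections instead of opening a new client on every request.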

Data access objects

The code is structured such that the dao (data access objects) folder contains objects that interface with particular tables (entities).

.
├── comments
├── posts
└── users

The SQL queries are imported and used by these objects. They live inside the queries folders.

Here is an example of getting all the data for a post by a particular slug:

-- Get all data for displaying a post

SELECT posts.id AS id
  , posts.title
  , posts.content
  , posts.image_url
  , posts.created_at
  , posts.updated_at
  , slug
  , email
  , social_accounts.social_type
  , social_accounts.username AS social_handle
FROM posts
INNER JOIN users ON users.id = posts.user_id
INNER JOIN social_accounts ON social_accounts.id = users.social_id
WHERE slug = $1;
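
To give a feel for how such a query gets used, here is a sketch of what a posts dao might look like. The file paths and names below are illustrative; the repository's actual structure may vary.

// A sketch of a posts dao using the query above (illustrative names and paths).
import fs from 'fs';
import path from 'path';
import { pool } from '@app/services/Database'; // hypothetical path to the shared pool

// Load the SQL from the queries folder once, at module load time.
const getBySlugSql = fs.readFileSync(
  path.join(process.cwd(), 'src/dao/posts/queries/get-by-slug.sql')
).toString();

class Posts {
  // Fetch a single post (joined with its author data) by slug.
  public static async getBySlug(slug: string) {
    // The slug is passed as $1, so it is safely parameterized.
    const { rows } = await pool.query(getBySlugSql, [slug]);
    return rows[0] ?? null;
  }
}

export default Posts;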

Running the application

Now that we have gone over the integration, let's run it to verify that everything works.

1. Starting local PostgreSQL

docker-compose up --build --force-recreate -V

We are using --build --force-recreate -V to ensure that nothing is cached. You may need to use docker stop if you have an existing container running.

2. Adding the configurations

Let’s add the configuration for our local PostgreSQL instance inside a .env file.

PGUSER=postgres
PGHOST=0.0.0.0
PGPASSWORD=test123
PGPORT=5438

These are the default credentials set in our docker-compose.yml. If you changed them, be sure to update the values accordingly.

3. Starting the server

You can start the server by simply running this command:

yarn dev

Now verify the implementation by going to http://localhost:3000.

Ensure that the links for the two blog posts (from our seed) are available and that clicking through displays the data.

You can also verify the data returned from the API:

curl http://localhost:3000/api/test | jq .

Now, you may be wondering: I see the post content, but what about the comments?

I saved that portion so we can go over it together.

There are a few considerations to be made for supporting the comment section. I think it's worth going through the process and arriving at our solution!

Data lifecycles

Considerations

[Figure: post content with comments]

For loading the comments, we should consider when to load them, and there are a few factors to weigh:

  1. Do we want the comments to be part of the SEO, and does that make sense?
  2. Do most users come to the page for the comments or the content?
  3. Does the content stay static or dynamic over time?

So what do we do? We have two options:

  1. Pre-fetch the comments (fetch and render them as part of the SSR request)
  2. Defer them and load them on the client side

In general, I think option (2) is most common.

For our purposes, we will assume SEO optimization matters for our content and that the majority of the users visiting our site are here for the content. So, we can defer the comments.

This means we will defer loading of the comments until after the page content has loaded.

Static vs dynamic

This also leads to point 3, which is that the content in our posts tends to become more static over time.

What do I mean by that? I am mainly referring to the fact that older content tends not to be updated as frequently as new content.

[Figure: post content data lifecycle. Post content data becomes static over time.]

Therefore, it is better suited as static content in the long run.

On the other hand, our comments can change at any time, whenever a user posts a new one. So, we can't assume they will stay static over time.

So, we want to make a clear distinction between the data lifecycles of these two types of content while also considering Next.js's rendering models (SSG, SSR, ISR, CSR, etc.).

[Figure: comments data lifecycle. Comments data remains dynamic over time because it can be updated at any time.]

In order to achieve good UX and performance, we want to keep the content static while keeping the comment section dynamic (loading on demand).

As an aside, this doesn’t mean we can’t cache the comments, but given how frequently they may change, we probably want to keep the time to live (TTL) short to avoid serving stale comments.

At the end of the day it comes down to balancing UX, performance and product goals.

Now that we have a solution, let’s look at how to implement it.

💡 Note: we are using SSR in our application, but in reality we may prefer ISR (Incremental Static Regeneration) or SSG (Static Site Generation), given that our content tends to be more static over time.
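
For reference, adopting ISR here would mostly mean swapping the page's getServerSideProps (shown later in this module) for getStaticProps/getStaticPaths with a revalidate window. This is only a sketch of what that could look like, not something we do in this series:

// pages/posts/[id].js - a hypothetical ISR variant (sketch only; we stick with SSR here)
import PostService from '@app/services/PostService';

export async function getStaticPaths() {
  // Known slugs could be pre-built here; 'blocking' renders unknown slugs on demand.
  return { paths: [], fallback: 'blocking' };
}

export async function getStaticProps({ params }) {
  const postContentData = await PostService.getBySlug(params?.id);

  if (!postContentData) {
    return { notFound: true };
  }

  return {
    props: { postData: { ...postContentData } },
    // Regenerate the page at most once an hour; comments still load on the client.
    revalidate: 3600
  };
}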

Adding comments to our posts

The gist of the implementation:

  • Expose an API endpoint to get all the comments for a post
  • Update the UI to fetch and display them on the client side

1. Creating the endpoint

We will expose an endpoint which will return all the comments for a given blogId.

This way we can query it to fetch all the comments under a given blog.

This will be under http://localhost:3000/api/comments

touch ./pages/api/comments.ts
Note: Next.js will create the endpoint based on the file name because of its filesystem routing convention.

2. Fetching the comments

import { NextApiRequest, NextApiResponse } from 'next';
import Comments from '@app/dao/comments/Comments';
import {
  IComments
} from '@app/dao/comments/Comments/types';

export default async (req: NextApiRequest, res: NextApiResponse) => {
  let results: IComments | null = null;
  let error: any = null;
  const blogId: number = +req.query.blogId;
  try {
    const rows = await Comments.getAllByBlogId(
      blogId
    );
    results = rows;
  } catch (err) {
    error = err;
    console.log('Error querying DB: ', err);
  }
  res.status(200).json({ results, error });
}

3. Verify the endpoint

request:

curl http://localhost:3000/api/comments?blogId=1

Expected results:

{
  "results": [
    {
      "content": "What about Incremental Static Regeneration (ISR) ? Can you write about that ?",
      "name": "Bob",
      "created_at": "2022-02-20 23:46:05.809508+00",
      "updated_at": "2022-02-20 23:46:05.809508+00"
    },
    {
      "content": "When should I know to use SSG over SSR ? ",
      "name": "Susan",
      "created_at": "2022-02-20 23:46:05.811648+00",
      "updated_at": "2022-02-20 23:46:05.811648+00"
    },
    {
      "content": "Great post",
      "name": "Bob",
      "created_at": "2022-02-20 23:46:05.815591+00",
      "updated_at": "2022-02-20 23:46:05.815591+00"
    }
  ],
  "error": null
}
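
For context, Comments.getAllByBlogId could be implemented along these lines. The table and column names below are inferred from the response shape above and are only a sketch; the real query lives in the dao's queries folder.

// A sketch of the comments dao (illustrative; the actual SQL lives in its queries folder).
import { pool } from '@app/services/Database'; // hypothetical path to the shared pool

// Columns mirror the API response shape: content, name, created_at, updated_at.
const getAllByBlogIdSql = `
  SELECT content
    , name
    , created_at
    , updated_at
  FROM comments
  WHERE post_id = $1
  ORDER BY created_at ASC;
`;

class Comments {
  public static async getAllByBlogId(blogId: number) {
    const { rows } = await pool.query(getAllByBlogIdSql, [blogId]);
    return rows;
  }
}

export default Comments;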

4. Creating the comments component

touch ./src/components/comment.tsx
// src/components/comment.tsx

import * as React from 'react';
import Date from '@app/components/date'
import {
  Typography,
  Box,
} from '@material-ui/core';
import { css } from '@emotion/css'

export interface CommentProps {
  content: string;
  name: string;
  // date string
  created_at: string;
  // date string
  updated_at: string;
}

const Comment: React.FC<CommentProps> = (
  props
) => {
  const {
    name,
    content,
    updated_at
  } = props;
  if (!name || !content || !updated_at) {
    return null;
  }
  return (
    <Box my={2.5}>
      <Box>
        <Typography
          variant="caption"
          className={css`
            padding-right: 0.5rem;
            font-weight: 500;
          `}
        >
          {name}
        </Typography>
        <Typography
          variant="caption">
          <Date dateString={updated_at} />
        </Typography>
      </Box>
      <Box py={1}>
        {content}
      </Box>
    </Box>
  );
}

export default Comment;

5. Updating PostService

We need to return the id of the fetched post when we perform SSR on the server and pass it to the client.

Please update the corresponding sections in the codebase as shown below.

// ./src/services/PostService.ts

interface IPostData {
  id?: number | null;
  title: string;
  content?: string;
  // timestamp string
  createdAt?: string;
  contentHtml?: string;
}

class PostService implements IPostService {
  // Omitted code

  /**
   * Get a single post by slug
   *
   * */
  public async getBySlug(slug: string): Promise<IPostData> {
    const page = await Posts.getBySlug(slug)
    const fileContents: string = page?.content ?? ''

    // Use gray-matter to parse the post metadata section
    const matterResult = matter(fileContents)

    // Use remark to convert markdown into HTML string
    const processedContent = await remark()
      .use(html)
      .process(matterResult.content)
    const contentHtml = processedContent.toString()

    // Combine the data with the id and contentHtml
    return {
      id: page?.id,
      contentHtml,
      createdAt: page?.created_at,
      title: page?.title ?? '',
      ...matterResult.data,
    }
  }

  // Omitted code
}

6. Updating the Comment page

Let's update the implementation of the page under pages/posts/[id].js.

Here we fetch the comments on the client side and render them with our <Comment /> component once the results come back.

// pages/posts/[id].js

import {
  useEffect,
  useState,
} from 'react';
import Layout from '@app/components/layout'
import Head from 'next/head'
import Date from '@app/components/date'
import * as utilStyles from '@app/styles/utils.css'
import {
  Box,
  Typography,
  Divider,
} from '@material-ui/core';
import PostService from '@app/services/PostService';
import Comment from '@app/components/comment'
import {
  IComments
} from '@app/dao/comments/types';

export default function Post({ postData }) {
  const [comments, setComments] = useState([]);

  useEffect(() => {
    async function fetchComments(id) {
      try {
        const res = await fetch(`/api/comments?blogId=${id}`);
        const data = await res.json();
        setComments(data?.results ?? []);
      } catch (err) {
        console.log(`fetchComments failed : `, err);
      }
    }
    if (postData?.id) {
      fetchComments(postData?.id);
    }
  }, [postData?.id]);

  return (
    <Layout>
      <Head>
        <title>{postData.title}</title>
      </Head>
      <article>
        <Typography variant="h1" className={utilStyles.headingXl}>
          {postData.title}
        </Typography>
        {postData.createdAt && (
          <div className={utilStyles.lightText}>
            Authored: <Date dateString={postData?.createdAt} />
          </div>
        )}
        <div dangerouslySetInnerHTML={{ __html: postData.contentHtml }} />
      </article>
      <Box py={3}>
        <Divider />
      </Box>
      <section>
        {comments.length > 0 && (
          <Typography variant="h5" className={utilStyles.headingLg}>
            Comments
          </Typography>
        )}
        {
          comments.length > 0 ? comments.map((comment, i) => (
            <Comment key={i} {...comment} />
          )) : (
            <Typography variant="body1">
              No comments yet.
            </Typography>
          )
        }
      </section>
    </Layout>
  )
}

export async function getServerSideProps({ params }) {
  const postContentData = await PostService.getBySlug(params?.id);

  if (!postContentData) {
    return {
      notFound: true
    }
  }

  return {
    props: {
      postData: {
        ...postContentData,
      }
    }
  }
}

That’s it. We should now be able to see the comments added to our blog posts!

Improvements

Of course, this is a very basic example, and much can be done to improve this website!

If you are looking for a challenge, here are a few improvements you can make:

  • Add in the image section on the post
  • Add in support for the author display
  • Add in support for displaying author’s social media handle
  • Refactor the way we fetch social accounts to be able to get multiple
    • Right now the SQL query only joins to social_accounts, but ideally we want to get back an array of social_accounts based on the user id
    • This will likely be a separate service call on top of getting the post, to get social_accounts by user.id
  • Allow users to add comments (see the sketch after this list for one possible starting point)
  • Allow users to upvote/like comments or posts
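
For the "add comments" item, one possible starting point is a POST branch on the same API route. This is only a sketch with illustrative names; Comments.create is a hypothetical dao method you would need to write yourself:

// pages/api/comments.ts - a possible POST branch (sketch only).
import type { NextApiRequest, NextApiResponse } from 'next';
import Comments from '@app/dao/comments/Comments';

export default async (req: NextApiRequest, res: NextApiResponse) => {
  if (req.method !== 'POST') {
    // In the real route you would keep the GET logic from earlier; here we only sketch POST.
    return res.status(405).json({ results: null, error: 'Method not allowed' });
  }
  const { blogId, name, content } = req.body ?? {};
  if (!blogId || !content) {
    return res.status(400).json({ results: null, error: 'blogId and content are required' });
  }
  try {
    // `create` is a hypothetical dao method that inserts and returns the new comment.
    const comment = await Comments.create({ blogId: +blogId, name, content });
    return res.status(201).json({ results: comment, error: null });
  } catch (err) {
    console.log('Error inserting comment: ', err);
    return res.status(500).json({ results: null, error: 'Failed to create comment' });
  }
}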

Feel free to get creative and add other features that you think are interesting too!

If you’d like a reference of the finished result, it is available at aws-aurora-part-4.

Conclusion

To summarize, this was a brief section where we went over the code implementation of integrating PostgreSQL with Next.js.

We looked at the folder structure and configuration. Then we took a look at how to integrate comments into our post section.

Not only did we implement it, we also had a brief discussion about why we took our approach. The main takeaway is to evaluate the data lifecycle of the different “parts” of your page and then optimize each accordingly.

Of course, I encourage you to take a shot at the improvements and let me know if you were able to add them. I would love to hear how you went about implementing them!

I hope you found this helpful. In the next and final module, we will finally integrate everything with our AWS ECS setup.

Stay tuned!

