Building a GitHub Discussions Powered Blog


For a while now I've mostly been using dev.to as my preferred dev blogging platform. Recently, however, it's been feeling a little spammy, with more adverts and popups getting in the way of the reading experience, so I wanted to look into alternative platforms.

My requirements were fairly simple:

  • Must be easy to maintain. If it was too complex I wouldn't keep up with it.
  • It mustn't be full of spam.
  • It must give me creative control over the site markup.

I tried some alternative platforms like Hashnode and even Ghost, but none of them felt quite the right fit. I also thought about using GitHub to host markdown files, but even this felt cumbersome.

Whilst researching, though, I came across this blog post by Matteo Rigon on how he was using GitHub Discussions as his blogging engine, and I thought it was just genius. And whilst Matteo gave up on the idea, I was certain it could really work.

GitHub Setup

The first thing we want to do is set up our GitHub repository, enabling discussions and then configuring a few elements.

Category Configuration

Discussion categories are the key to restricting the ability to create posts to the repository maintainers. We do this by removing all categories except one (we must have at least one), giving it the name "Blog Post" and ensuring it is of type "Announcement". When a category is set as an announcement, only maintainers of the repository can create discussions in it, and if all your categories are set as announcements, the "New discussion" button disappears for everyone besides maintainers.


Label Configuration

In my setup, I've made heavy use of labels for multiple different scenarios. For each case I provide labels with a fixed prefix making it easy to parse them out later on.

| Format | Description |
| --- | --- |
| tag/... | Defines a blog tag. |
| series/... | Defines a set of linked posts in a series. |
| state/... | Defines the state of the post; currently only state/draft is supported, for draft articles. |
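As a sketch of how these prefixed labels might be parsed out later (this helper is my own illustration, not part of the original setup):

```typescript
// Hypothetical helper: split a repository label such as "tag/astro" or
// "state/draft" into its prefix and value so labels can be grouped later.
type ParsedLabel = { prefix: string; value: string };

const parseLabel = (name: string): ParsedLabel | null => {
    const separator = name.indexOf('/');
    if (separator === -1) return null; // not one of our prefixed labels
    return {
        prefix: name.slice(0, separator),
        value: name.slice(separator + 1),
    };
};
```

Anything without a slash (e.g. a default label like "enhancement") is simply ignored.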


Blog Post Format

Blog posts use markdown for the main body of the post, but can also be prefixed with some front matter for explicitly defining key metadata.

---
slug: blog-post-slug
description: A blog post about something
published: 2024-09-01
---
# Heading 1
## Heading 2
...

The supported front matter items are:

| Key | Description |
| --- | --- |
| slug | Provides an explicit slug for the article. If one isn't defined, the post's title will be slugified. |
| description | An optional description for this blog post. |
| published | An explicit publication date, to allow backdating posts. If one isn't defined, the discussion's creation date will be used. |
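To illustrate how this front matter might be pulled off the top of a discussion body (a rough sketch; the function and field names here are my own, not the site's actual implementation):

```typescript
// Hypothetical sketch: extract the front matter block from a discussion
// body, returning the parsed metadata and the remaining markdown content.
type FrontMatter = { slug?: string; description?: string; published?: Date };

const parseFrontMatter = (body: string): { meta: FrontMatter; content: string } => {
    // Match a leading "---\n...\n---\n" block.
    const match = body.match(/^---\r?\n([\s\S]*?)\r?\n---\r?\n?/);
    if (!match) return { meta: {}, content: body };

    const meta: FrontMatter = {};
    for (const line of match[1].split(/\r?\n/)) {
        const [key, ...rest] = line.split(':');
        const value = rest.join(':').trim();
        if (key.trim() === 'slug') meta.slug = value;
        if (key.trim() === 'description') meta.description = value;
        if (key.trim() === 'published') meta.published = new Date(value);
    }
    // Strip the front matter block from the body.
    return { meta, content: body.slice(match[0].length) };
};
```

A proper YAML parser would be more robust, but for three flat keys a line-by-line split is enough.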

Content Population

At this stage you can set up all the blog posts you want to make available on your site (it's worth setting up at least a few whilst building the site).

GitHub Discussion blog post list

Site Setup

For the site setup I decided to copy Matteo's approach using Astro (mostly as I'd heard really cool things about it and this gave me an excuse to give it a try), but I took a slightly different direction to him when it came to querying for content.

Where Matteo used multiple queries for different areas of the site, I instead used a new feature in the Astro 5.0 beta, the Content Layer API. This API simplifies querying: we fetch all posts once, populating a local content store, and can then perform further queries against it when building out the pages of the site without incurring any additional network requests.

Querying For Blog Posts

To access our blog posts we will be using the GitHub GraphQL API. The core query we'll perform is a search which retrieves all the content we require for both the blog posts and the labels as well as some pagination cursors we'll need access to later.

import { gql } from '@urql/core'

export default gql`
  query ($query: String!, $limit: Int!, $after: String) {
    search(query: $query, type: DISCUSSION, first: $limit, after: $after) {
      pageInfo {
        startCursor
        hasNextPage
        endCursor
      }
      edges {
        cursor
        node {
          ... on Discussion {
            id
            url
            number
            databaseId
            title
            body
            createdAt
            updatedAt
            labels(first: 10) {
              edges {
                node {
                  id
                  name
                  description
                  color
                }
              }
            }
          }
        }
      }
    }
  }
`

When we come to perform the query, it will accept the following variables:

| Name | Value | Description |
| --- | --- | --- |
| query | repo:${ GITHUB_REPO_OWNER }/${ GITHUB_REPO_NAME } category:"Blog Post" -label:state/draft | The query to perform, in GitHub search syntax. Here we define the repo to search, the category of our blog posts, and an exclusion for any posts with the state/draft label. |
| limit | 100 | The number of results to return. |
| after | Y3Vyc29yOnYyOpHOUH8B7g== | The ID of a pagination cursor after which results should be returned. |

To make querying easier, we'll use urql to define a simple client that handles the basics of connecting and authenticating our GitHub API requests.

import { createClient, fetchExchange } from '@urql/core'

export default createClient({
    url: import.meta.env.GITHUB_API_URL,
    fetchOptions: () => {
        return {
            headers: {
                authorization: `Bearer ${import.meta.env.GITHUB_API_KEY}`,
                'user-agent': `${import.meta.env.DOMAIN}`
            },
        };
    },
    requestPolicy: 'network-only',
    exchanges: [ fetchExchange ]
})
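The client above reads its settings from environment variables. A minimal .env sketch might look like this (the variable names come from the snippets in this post; the values are placeholders, and https://api.github.com/graphql is GitHub's GraphQL endpoint):

```shell
# GitHub's GraphQL endpoint and a personal access token with read access
# to the repository's discussions.
GITHUB_API_URL=https://api.github.com/graphql
GITHUB_API_KEY=your-personal-access-token
GITHUB_REPO_OWNER=your-username
GITHUB_REPO_NAME=your-repo-name
# Sent as the user-agent header when calling the API.
DOMAIN=yourdomain.com
```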

Populating the Content Collection

Now that we have our query defined, the next step is to perform it to populate our content collection. As mentioned before, what we want to achieve here is a content collection that contains all of our blog posts. Depending on the number of posts we have, however, requesting everything from the GitHub API in one go could exceed the allowed limits. To keep things within acceptable ranges, we will fetch our posts recursively in chunks.

import client from '../graphql/client'
import { mapPost } from '../utils'
import SEARCH_POSTS_QUERY from '../graphql/searchPostsQuery'
import type { Post, PostList } from '../types'

const getPosts = async (limit = 50, after?: string): Promise<PostList> => {

    const { data } = await client.query(
        SEARCH_POSTS_QUERY,
        {
            query: `repo:${import.meta.env.GITHUB_REPO_OWNER}/${import.meta.env.GITHUB_REPO_NAME} category:"Blog Post" -label:state/draft`,
            limit,
            after: after || null,
        },
    ).toPromise()

    if (!data) {
        return {
            posts: [],
            pageInfo: {
                startCursor: '',
                endCursor: '',
                hasNextPage: false,
            }
        }
    }

    const posts = await Promise.all(
        data.search.edges.map(mapPost),
    )

    return {
        posts,
        pageInfo: data.search.pageInfo,
    }

}

const getPostsRecursive = async (limit: number, after?: string): Promise<Post[]> => {
    const { posts, pageInfo } = await getPosts(limit, after);
    if (pageInfo.hasNextPage) {
        return posts.concat(await getPostsRecursive(limit, pageInfo.endCursor))
    }
    return posts;
}

export const getAllPosts = async (): Promise<Post[]> => {
    const allPosts = await getPostsRecursive(100);
    return allPosts.sort((a, b) => b.date.getTime() - a.date.getTime());
}

Here we define a few helper methods to encapsulate this behaviour. getPosts performs a single query for a given page of results; getPostsRecursive calls getPosts repeatedly until there are no more pages left; and getAllPosts is our main entry point, triggering the recursive request and then ensuring the results are all sorted in reverse date order.
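The mapPost helper imported earlier isn't shown in this post; as a rough sketch of what it might do (the field names here are assumptions based on the GraphQL query and front matter above):

```typescript
// Hypothetical sketch of mapPost: turn a search edge from the GraphQL
// response into the shape stored in the content collection.
type DiscussionNode = {
    number: number;
    title: string;
    body: string;
    createdAt: string;
    labels: { edges: { node: { name: string } }[] };
};

// Reduce a title to a URL-safe slug.
const slugify = (title: string): string =>
    title.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, '');

const mapPost = ({ node }: { node: DiscussionNode }) => ({
    id: String(node.number),
    slug: slugify(node.title), // an explicit front matter slug could override this
    title: node.title,
    body: node.body,
    date: new Date(node.createdAt), // or the front matter published date
    // Parse the tag/... labels into plain tag names.
    tags: node.labels.edges
        .map(({ node: label }) => label.name)
        .filter(name => name.startsWith('tag/'))
        .map(name => name.slice('tag/'.length)),
});
```

The real implementation would also fold in the front matter overrides described earlier.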

With these helper methods defined, we can then set up our collection by creating a src/content/config.ts file with the following contents:

import { defineCollection } from 'astro:content';
import { getAllPosts } from "../repository/getAllPosts.ts";

const blogPosts = defineCollection({
    loader: async () => {
        return await getAllPosts();
    }
});

export const collections = { blogPosts };

Now when Astro builds, it will automatically fetch all of our blog posts and populate our collection ready for querying.

Creating Pages

To generate pages for each of our blog posts we define a src/pages/[slug].astro file. The [slug] part of the filename tells Astro that the page is dynamically generated and that the URL is determined by a slug parameter. Inside this file we must provide a getStaticPaths() method that is responsible for returning the slugs of all the pages to generate, as well as any properties to be made available to each page. For this we make use of getCollection('blogPosts'), which gives us access to our content collection.

---
import { getCollection } from "astro:content";
    
export const getStaticPaths = (async () => {
    const posts = await getCollection('blogPosts');
    return posts.map(post => ({
        params: {
            slug: post.data.slug
        },
        props: {
            post: post.data
        }
    }))
});
---

Outside of this method, we can then define the logic and markup that make up the page. Here we can also access the post property from the dynamic route definition, which gives us the details of the page being rendered.

---
import { getCollection } from "astro:content";
    
export const getStaticPaths = (async () => {
    ...
});

const { post } = Astro.props;
---
<h1>{ post.title }</h1>

We can use variations of this approach applying different filters to the collection in order to build up the various other pages of the site.
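For example, for per-tag listing pages, the filtering step might look something like this (a sketch: I'm assuming each post's data carries a tags array parsed from the tag/... labels, which isn't shown in the original code):

```typescript
// Hypothetical post shape with a tags array parsed from the tag/... labels.
type TaggedPost = { slug: string; tags: string[] };

// Group posts by tag so each tag page receives only the posts carrying
// that tag.
const groupPostsByTag = (posts: TaggedPost[]): Map<string, TaggedPost[]> => {
    const groups = new Map<string, TaggedPost[]>();
    for (const post of posts) {
        for (const tag of post.tags) {
            const existing = groups.get(tag) ?? [];
            existing.push(post);
            groups.set(tag, existing);
        }
    }
    return groups;
};
```

A src/pages/tags/[tag].astro route's getStaticPaths() could then map over these groups, emitting one page per tag with its matching posts as props.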

Comments

To embed discussion comments directly into our articles we make use of another amazing project, Giscus. This embeds the comments from the GitHub Discussion and even allows posting comments back directly from our blog.

The Giscus website allows you to create a configuration by answering a few questions, mine ended up being this:

<script  src="https://giscus.app/client.js"
      data-repo="mattbrailsford/mattbrailsford.dev"
      data-repo-id="R_kgDOMyW45A"
      data-mapping="number"
      data-term={post.number}
      data-reactions-enabled="1"
      data-emit-metadata="0"
      data-theme="light"
      data-lang="en"
      crossorigin="anonymous"
      async
  ></script>

After embedding this little snippet, comments will magically be displayed.

Other Cool Stuff

I wanted to outline the main setup in this post, but there are some other cool features that I've managed to implement as well.

All in all, I think this works really well as a solution. I can work really fast in GitHub's markdown editor, and I have full control over the output. I even got to learn about Astro in the process (I love it by the way).

If you'd like to use this blog as a basis for your own GitHub Discussions powered blog, you can find the source code on GitHub.