How I built the fastest no-code blog on the internet

August 14, 2025

A deep dive into the architecture behind a hyper-fast, no-code blogging tool powered by Notion. Explore the pivot from a simple, brittle monolith to a scalable, asynchronous system.

Ever since my Writee days, I have been fascinated with SEO. In fact, a friend keeps joking that I consider my biggest achievement with Writee to be not building the company but getting listed on Google. Building Writee’s SEO invariably forced me to understand how blogs work - after all, blogs are the bedrock of almost all SEO efforts. This led me to something I prize a lot - my personal blog, where I am writing this. It is my own corner of this vast, invisible internet. This is where I am rawest, where you can see me for who I am and not who I project myself to be.

As a technically-fascinated individual (I am not a “technical” or “technically-inclined” person - it’s more of a fascination), I found myself stuck with the traditional blogging tools on the internet. Setting up headless CMSes, getting your technical SEO on point, and making everything blazing fast was always a pain. Hours were lost managing headless CMS configurations, manually setting up JSON-LD schemas, and fighting with next.config.js just to get images to load correctly.

There were alternatives to this. Superblog - a fully managed blogging CMS - is a fantastic, turnkey solution for those who want a completely hands-off experience and are willing to pay for the convenience. But as an early-stage founder/student combo watching every penny, I found Superblog too expensive for an important albeit simple thing. I wanted to capture that same magic - the everything-just-works feeling - but on my own terms. That’s how NotoStack was born.

The 5 Rules of the Game

The first step to building any product is deciding on some basic truths about your product. In my case it was figuring out the following:

  1. Do I build a content management system or not - I decided not to build a CMS. There were already a thousand CMSes in the market which solved literally every problem in the world. Building my own also wouldn’t solve the fundamental issue of yet another tool in your kit - a big pain I felt with Ghost was having to use Ghost for something as simple as a blog. Why couldn’t things just work?
  2. Where do people create content - Since I wasn’t building a content management system, where would my users write the content that my content-displaying solution would serve? I had recently been converted from a Notion-hater to a Notion evangelist and decided that this is where my users would work.
  3. How does the blog know when content is added/updated - There are multiple ways this could work. A server-rendered or incremental static regeneration (ISR) approach could work, but both fundamentally require a server, which limits scalability and reduces speed. I knew I wanted my websites to be insanely fast and infinitely scalable, which is why a server could not feature in the picture. Pure HTML & CSS is the only way to achieve that, but I wanted to add modern web features to the website too. The latest development in modern front-end tooling is an exciting library called Astro.js, which I have been using since its initial v1 launch. It was perfect for this use case.
  4. Still, how does the blog know when content is added/updated - Having cut off the option of server-rendered content, I needed some way to create new HTML/CSS assets whenever there was an update. This would be done through a service whose job was twofold: first, track whether changes were made in the Notion doc, and second, run the entire build process.
  5. Where will everything be deployed - Understanding deployment processes ate up a lot of my time. As we’ve already discussed, I did not want a traditional server for the websites that were spun up. After a lot of back and forth, I ended up with a multi-server approach. My main app was a Next.js app, which I would keep simple and deploy on Vercel. The building service I would deploy on a simple AWS EC2 or similar server. For the static websites, I would upload all the HTML/CSS/JS files to AWS S3, an object store (Google Drive for devs), and link it to AWS CloudFront, a CDN that serves those files as web pages over HTTP rather than as raw file downloads. In simple words, this was a cache service that hosted my website for me.

Build to Break Mindset Broke My Code

With the guiding principles of my product in place, it was time to start architecting and then developing the product. My typical approach to architecting is to design for the shortest dev time. I want to get my products to market as fast as possible and figure out whether they deserve more complicated system designs. To cut down dev time, I do not typically reach for production tools like Kafka queues or massive micro-service architectures. This is your typical build-to-break start-up mindset, and it destroyed two weeks’ worth of work. I ended up designing a system that was too simple for its own good. Every technical problem needs to be given due respect in terms of design. Not everything can be solved with a “hello world” response on an Express API call. I learnt this the hard way. Here’s my initial design.

My first architecture was a linear, pragmatic, and ultimately deeply flawed solution. It was a single, perpetually running Node.js process I called the Build-Deploy Engine. My thinking was simple: get the job done, get it done right. The core idea revolved around using a MongoDB collection as a rudimentary job queue. An infinite loop checked every user’s Notion DB’s last-edited time; when a user updated their Notion database, the loop picked it up and added a new document to this Mongo queue. The monolithic Build-Deploy Engine would then pull the job and start its work.
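The core loop of that first design can be sketched in a few lines. This is a simulation, not the real code: a plain in-memory array stands in for the Mongo build_queue collection, the build callback stands in for the real clone/install/build/upload steps, and every name is illustrative.

```javascript
// A minimal sketch of the "Mongo as a job queue" design described above.
// An in-memory array stands in for the MongoDB build_queue collection.

const buildQueue = []; // stand-in for the Mongo `build_queue` collection

// The watcher loop: enqueue a job whenever a user's Notion DB changes.
function enqueueBuild(userId, lastEditedTime) {
  buildQueue.push({ userId, lastEditedTime, status: 'pending' });
}

// The single Build-Deploy Engine: pull the oldest job, process it, repeat.
// (The real steps -- clone, npm install, npm run build, upload -- are slow,
// which is exactly where this strictly serial design falls over.)
function runEngineOnce(build) {
  const job = buildQueue.shift(); // oldest job first: a serial FIFO
  if (!job) return null;
  job.status = 'building';
  build(job);
  job.status = 'done';
  return job;
}

// Demo: two users publish, but the lone engine processes one job at a time.
function demo() {
  enqueueBuild('user-1', Date.now());
  enqueueBuild('user-2', Date.now());
  const done = runEngineOnce(() => {}); // no-op build for illustration
  return { processed: done.userId, remaining: buildQueue.length };
}
```

Note how user-2's job just sits in the array until the engine finishes user-1's build: the bottleneck is structural, not an implementation detail.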

It felt good. It felt productive. I had a system that, on paper, worked flawlessly:

  1. A user connects their Notion account and provides their database ID.
  2. An infinite loop keeps checking for any updates, adding a job document to a build_queue collection in Mongo.
  3. The single Build-Deploy Engine, constantly polling the database, pulls the oldest job from the queue.
  4. It clones a boilerplate Astro.js repository from a private GitHub template.
  5. It runs npm install to fetch dependencies.
  6. It runs npm run build to generate the static HTML, CSS, and JS.
  7. The resulting static files in the dist/ folder are uploaded to an Amazon S3 bucket.
  8. A cache invalidation request is sent to Amazon CloudFront so that all cached content is refreshed to the latest version.
A simplified system design of the original architecture

I was proud of this. It was a logical, well-documented system built with familiar tools. It was a straight line from A to B. But then I saw a single site build take a full 90 seconds. For one user, that doesn’t matter. But what if ten users hit "publish" at the exact same time? My single Node.js process, no matter how optimized, would become a serial bottleneck. It would process one job, then the next, then the next, creating a virtual line of frustrated writers staring at a "Deploying..." spinner. The tenth user might be waiting for 15 minutes. That’s an eternity. And what if the engine crashed mid-build while running npm install for User #5? The job (the update) would be lost forever. The user would be left with a half-deployed website and no idea what went wrong.
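The queueing math behind that 15-minute figure is easy to verify. With one worker and 90-second builds, the Nth publisher's wait grows linearly; the helper below (illustrative, not from the codebase) makes that concrete and shows how parallel workers collapse the wait.

```javascript
// Back-of-the-envelope wait time for the Nth user in the publish queue.
// With W parallel workers, jobs complete in "waves" of W at a time, so the
// Nth job finishes at wave ceil(N / W) -- a simplification that ignores
// queue overhead but captures the serial bottleneck.

function waitTimeSeconds(position, buildTimeSeconds, workers = 1) {
  const wave = Math.ceil(position / workers);
  return wave * buildTimeSeconds;
}

// One worker, 90s builds: the 10th publisher waits 900s, i.e. 15 minutes.
const serial = waitTimeSeconds(10, 90);
// Ten workers: every build starts immediately, so everyone waits ~90s.
const parallel = waitTimeSeconds(10, 90, 10);
```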

My system was reactive, but it wasn't resilient. It was functional, but barely. I had the right blueprint; I was just trying to build a skyscraper with tape and glue.

The Pivot to Asynchronicity

I spent an entire weekend staring at my code. Sunk cost is a powerful sedative, and I had invested weeks into my Build-Deploy Engine. It was my code, but I knew it was a house built on a shaky foundation. My system was brittle, and I needed to make a fundamental ideological shift: from a synchronous, single-point-of-failure mindset to an asynchronous, distributed, and self-healing one.

This was the pivotal moment. This wasn't a refactor; it was a complete re-architecture of the platform's core. I had to throw away weeks of work and rebuild the entire deployment pipeline from the ground up. The new foundation would be built on two pillars of modern backend engineering: Redis for its raw, in-memory speed, and BullMQ for its powerful job queuing capabilities.

Why this specific combination? It was about choosing the absolute best tool for each specific job.

  • Redis: The Need for Speed. My first mistake was trying to avoid a new tech dependency. I had forced a database (MongoDB) to act as a job queue. It was like using a screwdriver as a hammer—it might work for a small nail, but you'll eventually break something. I was spending hours manually building features like job locking and priority handling that a dedicated in-memory store like Redis provides out of the box, all with sub-millisecond latency. The lesson was brutal but clear: don't fight your tools. Some technologies are essential, no matter how much you want to keep your stack simple.
  • BullMQ: The Need for Resilience. A simple queue isn't enough. What happens if a job fails? My old system would just lose it. I needed resilience baked into the very fabric of the system. BullMQ is a battle-tested library built on top of Redis that gives you an entire suite of mission-critical features for free. It provides automatic retries with exponential backoff, granular control over job concurrency, detailed progress tracking, and a beautiful dashboard (Bull-Board - my go to admin panel) to monitor every job in the system. If a worker process died mid-job, BullMQ would detect the stalled job and automatically re-queue it for another worker to pick up. The system would, in effect, heal itself.
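Those retry semantics are worth seeing concretely. BullMQ's built-in "exponential" backoff delays the Nth retry by roughly baseDelay * 2^(N-1); the sketch below is a plain-JS simulation of that behaviour for illustration, not real worker code (in practice you would simply pass an `attempts` count and a `backoff` option when adding a job to a BullMQ queue).

```javascript
// Simulated BullMQ-style retry with exponential backoff. The delay
// formula mirrors BullMQ's built-in "exponential" backoff strategy:
// baseDelayMs * 2^(attemptsMade - 1).

function exponentialBackoff(baseDelayMs, attemptsMade) {
  return baseDelayMs * Math.pow(2, attemptsMade - 1);
}

// Keep re-running the job until it succeeds or attempts are exhausted,
// recording the delay that would be slept before each retry. (A real
// worker would actually wait; here we only compute the schedule.)
function retryUntilSuccess(job, maxAttempts, baseDelayMs) {
  const delays = [];
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return { result: job(), delays };
    } catch (err) {
      if (attempt === maxAttempts) throw err; // give up: job moves to "failed"
      delays.push(exponentialBackoff(baseDelayMs, attempt));
    }
  }
}

// A flaky build that fails twice, then succeeds.
let calls = 0;
function flakyBuild() {
  calls += 1;
  if (calls < 3) throw new Error('npm install crashed');
  return 'deployed';
}
```

Running `retryUntilSuccess(flakyBuild, 5, 1000)` schedules waits of 1000ms and then 2000ms before the third attempt finally succeeds: failures cost a short delay instead of a lost job.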

This decision to abandon Mongo for the queue and embrace the Redis/BullMQ stack is an architectural decision that taught me more than a lot of the production-level code I have written. There is a reason some things are done the way they are: give a technical problem its due respect and use the required tools for the job. The result was a system that wasn't just faster - it was smarter, more resilient, and ready to scale infinitely.

The Distributed Factory Floor

The new architecture is a game-changer. Gone is the single, overworked artisan. In its place is a meticulously designed, distributed factory floor. We now have a fleet of specialized, stateless "workers," all communicating through the lightning-fast, Redis-backed BullMQ message bus. The entire process is fully asynchronous and event-driven.

Here’s how the new, ridiculously fast workflow breaks down:

  1. The Trigger (The Order Desk): As soon as a user updates a Notion page, a BullMQ worker that runs every second to check for Notion updates picks it up and instantly publishes a build job to the Build-Deploy Queue in Redis. BullMQ is the central nervous system, guaranteeing that this "order" is safely logged and will be processed.
  2. The Build Worker (The Assembly Line): We now have a fleet of independent, stateless Build Workers. These are containerized Node.js applications whose only job is to listen to the Build-Deploy Queue. When a new job appears, a worker picks it up. It operates in a temporary, completely isolated environment. It clones the boilerplate Astro.js site from GitHub, fetches the latest content from the Notion API using my custom @prkedia81/notion-blogs NPM package, and runs the npm run build process. This worker is a memoryless automaton; it knows nothing of the previous job and retains no state. If it succeeds, great. If it fails for any reason, BullMQ's resilience kicks in. The job is gracefully re-queued and instantly picked up by another available worker. This provides fault tolerance on a massive scale.
  3. The CDN Worker (The Shipping Department): Once the build is complete and the static files are generated, the Build Worker's final task is to place a new job on a separate queue: the Cloudfront Queue. A dedicated CDN Worker, whose sole responsibility is deployment, picks up this job. It pushes the new static files from the build environment to our globally distributed CDN (Amazon S3 and CloudFront). This separation of duties is critical. The build process and the deployment process are now decoupled, making the system incredibly easy to debug, maintain, and scale independently. The CDN Worker then handles the crucial step of cache invalidation, sending a targeted request to CloudFront to purge the old content. This ensures every user, everywhere in the world, sees the latest content instantly.
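The handoff between the two stages is the heart of the design, so here is a condensed, in-memory sketch of it. In the real system both queues live in Redis and BullMQ workers consume them; here plain arrays and functions stand in so the decoupling is easy to see, and every name is illustrative.

```javascript
// Two decoupled stages connected by queues: Build Worker -> CDN Worker.
// Arrays stand in for the Redis-backed Build-Deploy and CloudFront queues.

const buildQueue = [];
const cloudfrontQueue = [];

// The Trigger: a Notion update publishes a build job.
function onNotionUpdate(siteId) {
  buildQueue.push({ siteId });
}

// The Build Worker: consume a build job, produce artifacts, then hand
// off to the next stage by enqueuing a deploy job -- it never deploys.
function buildWorker(job) {
  // ...clone boilerplate, fetch Notion content, run `npm run build`...
  const artifacts = `dist-for-${job.siteId}`; // illustrative placeholder
  cloudfrontQueue.push({ siteId: job.siteId, artifacts });
}

// The CDN Worker: consume a deploy job, push files, invalidate cache.
function cdnWorker(job) {
  // ...upload artifacts to S3, then invalidate the CloudFront cache...
  return { deployed: job.siteId, invalidated: true };
}

// One update flowing end-to-end through both stages.
onNotionUpdate('my-blog');
buildWorker(buildQueue.shift());
const result = cdnWorker(cloudfrontQueue.shift());
```

Because each worker only talks to a queue, either stage can be scaled, restarted, or debugged without touching the other one.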

Why is this worker-based, distributed factory floor so much better?

  • Infinite Scalability: If 10,000 users publish at once, we just spin up more workers. Thanks to container orchestration, we can automatically scale our fleet of Build and CDN workers based on queue length, handling massive spikes in demand without breaking a sweat.
  • Bulletproof Resilience: If a worker process crashes, the job isn't lost. It gets added back to the queue and is picked up by another worker. The system is designed to expect and handle failure gracefully.
  • Insane Performance: By separating concerns into distinct workers and using a high-speed, in-memory message bus like Redis, we've eliminated every single bottleneck from the old architecture. The system is perpetually ready and ridiculously responsive.
Worker architecture for NotoStack

The Secret Sauce - @prkedia81/notion-blogs

Every great system has its secret weapon - a custom-built tool that solves a unique problem so elegantly that it becomes a competitive advantage. For NotoStack, that weapon is a small, unassuming NPM package I poured countless hours into: @prkedia81/notion-blogs.

On the surface, its job sounds simple: get content from Notion. But anyone who has worked with the Notion API knows that "simple" is the last word to describe it.

The Notion API is an engineering marvel. It's incredibly powerful and flexible, but it doesn't give you a clean, blog-ready post. Instead, it gives you a beautiful, high-quality, but completely unusable, deeply nested JSON tree of "blocks." A single blog post isn't a flat file of HTML; it's a complex hierarchy of page properties, database relations, and an array of block objects - paragraphs, headings, images, code snippets, callouts, and more - each with its own unique structure and metadata.

My first builds tried to wrestle with this raw data directly. The build script became a tangled mess of if/else statements, switch cases, and recursive functions trying to parse this tree. It was slow, brittle, and a nightmare to debug. If Notion ever added a new block type, the entire script would break.

I realized I wasn't just fetching data; I was performing a complex data transformation. The build process didn't need a simple API client; it needed an expert translator. It needed a specialized tool that could take Notion's esoteric language and convert it into a simple, predictable format that a static site generator like Astro could understand instantly.

This realization led to the birth of @prkedia81/notion-blogs. This package is the Rosetta Stone of the NotoStack ecosystem.

Here’s what makes it so critical:

  1. It's a Parser, Not Just a Client: The package handles all the messy parts of communicating with Notion—authentication, querying the correct database, and, most importantly, handling pagination to fetch all of a user's posts, not just the first 100 that the API returns by default.
  2. It Abstracts Away the Chaos: The core magic is in the parser. It recursively walks the entire block tree of a Notion page. It knows what a heading_2 block is and how to extract its text. It knows how to get the URL from an image block and the language from a code block. It takes that chaotic, nested structure and transforms it into a clean, flat array of objects like an output from a traditional CMS.
  3. It Optimizes and Simplifies: With this package, the build script for every NotoStack site becomes astonishingly simple. It doesn't need to know anything about the Notion API's internal structure. It just makes one function call: getBlogPosts(). It receives a perfect, predictable array of content, ready to be rendered into HTML. This makes the build process faster, more reliable, and infinitely easier to maintain.
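To make the flattening concrete, here is a toy version of what the parser does. The input mimics a simplified, Notion-like block tree (type-keyed payloads with optional nested children); the shapes are illustrative stand-ins, not the exact Notion API schema, and the real package handles many more block types, rich-text annotations, and pagination.

```javascript
// Toy recursive walk over a simplified Notion-like block tree, emitting
// a flat array of { type, depth, text } objects a templating layer could
// render directly.

function plainText(richText) {
  return richText.map((t) => t.plain_text).join('');
}

function parseBlocks(blocks, depth = 0) {
  const out = [];
  for (const block of blocks) {
    const payload = block[block.type] || {}; // payload lives under the type key
    out.push({
      type: block.type,
      depth,
      text: payload.rich_text ? plainText(payload.rich_text) : '',
    });
    // Recurse into nested children, tracking indentation depth.
    if (block.children) out.push(...parseBlocks(block.children, depth + 1));
  }
  return out;
}

// Simplified input: a heading, plus a bulleted item with a nested child.
const tree = [
  { type: 'heading_2', heading_2: { rich_text: [{ plain_text: 'Why Astro?' }] } },
  {
    type: 'bulleted_list_item',
    bulleted_list_item: { rich_text: [{ plain_text: 'Zero JS by default' }] },
    children: [
      { type: 'paragraph', paragraph: { rich_text: [{ plain_text: 'Ships plain HTML' }] } },
    ],
  },
];
const flat = parseBlocks(tree);
```

The build script never sees the nesting; it just iterates over `flat` and renders each entry, which is what keeps the Astro templates trivially simple.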

Developing this package was a labor of love. It involved hours of poring over API documentation, writing tests for dozens of block types, and handling countless edge cases. But this investment created a core piece of intellectual property. @prkedia81/notion-blogs is the unsung hero of every single build. It's the specialized, hyper-efficient tool on our "distributed factory floor" that ensures the raw material (Notion content) is perfectly prepared before it even hits the main assembly line.

The Tech Stack Behind the Speed

Achieving instantaneous speed required a tech stack where every component was obsessively chosen for performance.

  • Astro.js (The Frontend): Astro's revolutionary "zero JavaScript by default" philosophy was key. It pre-renders every page into lightweight HTML and CSS, eliminating complex client-side rendering. The result is a flawless Lighthouse score and a user experience that feels like teleportation.
  • Custom Notion Parser (@prkedia81/notion-blogs): My custom NPM package acts as a hyper-efficient translator. It converts Notion's complex, nested JSON data into a clean, build-ready format once, during the build process. This crucial step prevents the user's browser from ever having to do the heavy lifting.
  • Redis & BullMQ (The Backend Engine): This duo forms our reactive backend. The high-speed, event-driven queue triggers a reliable deployment pipeline that takes a site from a Notion update to live in under 60 seconds.
  • CloudFront & S3 (Global Delivery): Static files are stored on Amazon S3 and delivered globally via CloudFront's CDN. This provides massive, low-cost scalability and ensures users from Sydney to New York get the same instant load times.
  • Cloudflare (Automation & Security): As the final layer, Cloudflare automates the complex process of pointing custom domains to new sites. It also provides a world-class security shield and an additional layer of caching for free.
The design for a newsletter system that I ended up not implementing

The Journey really is the fun part

After all this work - the refactoring, the architectural debates, the obsessive tuning - I decided to deprecate the project. While the challenge was exhilarating, I realized that building and maintaining the platform itself wasn't what truly excited me. The real prize wasn't the final product, but the incredible learning journey I took to get there.

However, one crucial piece of NotoStack not only survived but has become a cornerstone of my development workflow: the @prkedia81/notion-blogs package. This little NPM package has become my personal lifesaver. Now, for every new project I build, adding a fast, SEO-optimized blog is no longer a chore. I just plug in the package, point it to a Notion database, and it just works. No more fighting with complicated CMSs; my entire content workflow for any idea now lives happily in Notion.

Ultimately, the most profound takeaway was the architectural one. My initial "build to break" mindset taught me a hard lesson: you must give a technical problem its due respect. Forcing a database to act as a queue was a mistake. Embracing the right tools for the job, like Redis and BullMQ, even though they added a bit of initial complexity, was the right call. It’s a philosophy I now carry into every project—sometimes, the right path isn't the simplest one, but the most robust. The NotoStack code may be gone, but the lessons and the tools it produced are here to stay.

Sign-up to my newsletter

I talk about start-ups, investing, tech, and everything around them. Sign up for my weekly newsletter to never miss an update.