Jordan: Notion says it's a database. Airtable says it's a database. Postgres is actually a database. And somehow, all three of them end up holding your client records, your webhook logs, and your invoicing data — often at the same time, in the same project, for no clear reason.
I had a client last fall — automation consultancy, solo operator like you and me — who was running everything through Notion databases. CRM, project tracker, invoice log, the works. Looked beautiful. Worked fine... until he wired up a Make scenario that pushed about eight hundred record updates a day. Within a week he was drowning in four-twenty-nine errors. Retry-After headers stacking up. His automations were failing silently because Notion's API tops out at three requests per second per integration. Three. And he had no idea that ceiling existed until he hit it face-first.
So he moved everything to Airtable. Problem solved, right? For about two months. Then he added a webhook listener for Stripe events — payment succeeded, payment failed, subscription updated — and suddenly he needed those writes to be atomic. He needed transactions. He needed two events hitting the same record at the same time to not corrupt each other. Airtable doesn't do transactions. There is no atomic write. There's no ON CONFLICT clause. There's a five-request-per-second ceiling and a prayer.
He ended up on Supabase. Which is where he probably should have started — for that workload. But here's the thing that nobody in the Airtable vs Notion debate wants to admit: all three of these tools are correct. They're just correct for completely different workload shapes. And if you pick by vibes instead of by the numbers, you will migrate twice, waste two months, and rewire every automation you've built along the way.
How many of your automations are one rate-limit change away from breaking? Not a hypothetical — Airtable updated their API call limits page in February, Notion refreshed their request limits docs, Supabase revised their connection settings just this month. The breakpoints shifted. And if you picked your data layer based on a blog post from twenty twenty-four, you might already be running on borrowed time. Today I'm giving you the decision framework I use with every client: pick by workload shape — ops per second, concurrency, whether you need transactions — not by which logo you like better. Airtable, Notion, or Postgres. One of them is right for what you're building. Let's figure out which.
Okay, so the data layer decision. Every comparison you'll find online — Forbes, Cloudwards, the usual suspects — frames this as Airtable vs Notion. Features, pricing, interface preferences. That framing is useless to you. Because as a solo builder wiring automations through Make or n8n or Zapier, you don't care which one has prettier Kanban boards. You care about three things. How many requests per second can I push before the API starts rejecting me? Can I write to the same record from two places at once without corrupting data? And what does it actually cost when my automations scale?
Those three questions — throughput ceiling, concurrency safety, and cost at volume — are the entire decision. Everything else is preference. So let's walk through each tool against those axes, starting with where Airtable genuinely wins.
Airtable's sweet spot is lightweight relational ops. Think CRM tables, project trackers, client pipelines — anything where you're reading and writing rows through the API at a moderate pace. The hard ceiling is five requests per second per base. That's not per account — per base. And there's a fifty-requests-per-second aggregate limit across all your bases per service token. Exceed either one, you get a four-twenty-nine and a thirty-second backoff.
Now, five requests per second sounds tiny. But Airtable lets you batch up to ten records per write request. So your effective throughput is closer to fifty records per second if you're batching properly. And they support a performUpsert parameter — find, create, or update in a single call — which cuts your round trips roughly in half for sync patterns.
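For those of you reading the transcript or the show notes, here's roughly what batching plus performUpsert looks like. This is a minimal sketch: the table name, the field names, and the env vars are placeholders I made up, but the ten-record batch cap, the performUpsert parameter, and the thirty-second backoff are the documented behavior.

```typescript
// Minimal sketch: batched Airtable upsert, 10 records per write request.
// BASE_ID, the "Leads" table, and the "Email" merge field are placeholders.
type Lead = { email: string; stage: string };

const TOKEN = process.env.AIRTABLE_TOKEN!;
const ENDPOINT = `https://api.airtable.com/v0/${process.env.BASE_ID}/Leads`;

async function upsertLeads(leads: Lead[]): Promise<void> {
  for (let i = 0; i < leads.length; i += 10) {
    // Airtable caps writes at 10 records per request, so chunk first.
    const chunk = leads.slice(i, i + 10);
    const res = await fetch(ENDPOINT, {
      method: "PATCH",
      headers: {
        Authorization: `Bearer ${TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        // performUpsert: create-or-update keyed on Email, in one call.
        performUpsert: { fieldsToMergeOn: ["Email"] },
        records: chunk.map((l) => ({
          fields: { Email: l.email, Stage: l.stage },
        })),
      }),
    });
    if (res.status === 429) {
      // Rate limited: back off ~30 seconds, then retry this chunk.
      await new Promise((r) => setTimeout(r, 30_000));
      i -= 10;
      continue;
    }
    if (!res.ok) throw new Error(`Airtable write failed: ${res.status}`);
  }
}
```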
Here's where it gets practical. I run a daily change digest for three of my clients — new leads, updated deal stages, closed-won notifications — all piped from Airtable to Slack on a schedule. Retool actually publishes a template for exactly this pattern. The whole thing runs on maybe two hundred API calls a day. Airtable's Team plan gives you a hundred thousand calls per month at twenty dollars per user. I'm using less than seven percent of that cap. For this workload, Airtable is perfect. Cheap, fast enough, and the interface means my clients can actually see and edit their own data without me building a front end.
The monthly caps matter though. Free tier is a thousand calls per month — that's basically useless for any real automation. Team is a hundred thousand. Business and Enterprise are described as high or unlimited, but Airtable doesn't publish exact numbers for those tiers. So if you're on Team and your automations are burning through calls, you need to watch that counter.
One more Airtable trick before we move on. If you're on Business or Enterprise, there's a Sync API — separate from the regular REST API — that accepts CSV payloads up to two megabytes, ten thousand rows per request, with its own rate limit of twenty requests per five minutes per base. That's a completely different throughput profile for bulk imports. If you're doing nightly data loads, the Sync API can save you thousands of regular API calls.
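If that's your situation, the call itself is almost boring. A sketch for the show notes; the endpoint URL is whatever Airtable hands you when you configure the synced table, so treat everything here except the documented ceilings as a placeholder:

```typescript
// Minimal sketch: nightly bulk load through Airtable's Sync API
// (Business/Enterprise only). The endpoint URL comes from your sync
// configuration in Airtable; it is not the regular REST endpoint.
const SYNC_ENDPOINT = process.env.AIRTABLE_SYNC_ENDPOINT!;
const TOKEN = process.env.AIRTABLE_TOKEN!;

async function pushNightlyCsv(csv: string): Promise<void> {
  // Documented ceilings: 2 MB per payload, 10,000 rows per request,
  // 20 requests per 5 minutes per base.
  if (new TextEncoder().encode(csv).length > 2 * 1024 * 1024) {
    throw new Error("CSV over 2 MB: split the load into multiple requests");
  }
  const res = await fetch(SYNC_ENDPOINT, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      "Content-Type": "text/csv",
    },
    body: csv,
  });
  if (!res.ok) throw new Error(`Sync API push failed: ${res.status}`);
}
```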
Okay. Notion. And I want to be careful here because I love Notion — I run my entire business in it, fourteen hundred documented workflows and counting. But Notion's API was not built for heavy database operations. It was built for documents that happen to have database features.
The rate limit is three requests per second average per integration. Some blogs claim it's been bumped to five — I went back to the official developer docs as of April twenty twenty-six, and it still says three. Bursts are allowed, but you'll get four-twenty-nines with Retry-After headers if you sustain above that average. So design for three.
But the rate limit isn't even the real constraint. It's the payload limits. Five hundred kilobytes max per request. A thousand block elements max per request. And if you're updating relation or multi-select properties, you're capped at a hundred items per update. Database queries return a hundred rows per page max — you have to paginate everything.
What this means in practice: if you're pushing rich pages with lots of content blocks, you might need two or three API calls per record just to get the data in. Your effective throughput drops from three records per second to one. Maybe less. And you're burning API calls on chunking overhead that doesn't exist in Airtable or Postgres.
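If you do have to push writes through Notion, pace them yourself instead of letting the API slap you with four-twenty-nines. Here's the throttle shape I use, sketched for the show notes; the database ID and the Name property are made up, but the three-per-second pacing and the Retry-After handling match Notion's published limits:

```typescript
// Minimal sketch: pace Notion API calls at ~3/second and honor Retry-After.
// NOTION_TOKEN and NOTION_DATABASE_ID are placeholders.
const TOKEN = process.env.NOTION_TOKEN!;
const DATABASE_ID = process.env.NOTION_DATABASE_ID!;
const MIN_GAP_MS = 334; // ~3 requests per second sustained

let lastCall = 0;

async function notionFetch(url: string, init: RequestInit): Promise<Response> {
  // Space calls so the sustained average stays at or under 3/sec.
  const wait = lastCall + MIN_GAP_MS - Date.now();
  if (wait > 0) await new Promise((r) => setTimeout(r, wait));
  lastCall = Date.now();

  const res = await fetch(url, {
    ...init,
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      "Notion-Version": "2022-06-28",
      "Content-Type": "application/json",
    },
  });
  if (res.status === 429) {
    // Notion tells you exactly how long to wait before retrying.
    const retryAfter = Number(res.headers.get("Retry-After") ?? "1");
    await new Promise((r) => setTimeout(r, retryAfter * 1000));
    return notionFetch(url, init);
  }
  return res;
}

// Example: push one summary row (a page) into a database.
await notionFetch("https://api.notion.com/v1/pages", {
  method: "POST",
  body: JSON.stringify({
    parent: { database_id: DATABASE_ID },
    properties: {
      Name: { title: [{ text: { content: "Acme Corp: weekly summary" } }] },
    },
  }),
});
```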
So when does Notion win? When the database is subordinate to the document. Client project wikis where each row is a page with embedded docs, meeting notes, and reference material. Knowledge bases where the content is the value and the database is just the index. Light sync from your CRM or project tracker — push a summary, not the full dataset. For those patterns, Notion is unbeatable because the interface is the product. Your clients live in it. They don't need training. They don't need a dashboard. The Notion page is the dashboard.
But the moment you're pushing more than a few hundred records a day through the API, or you need fast reads across large datasets, Notion starts fighting you. And that's by design — it's not a bug, it's a boundary.
Now. Postgres. Specifically Supabase, because it's the fastest path to a managed Postgres instance that a solo operator can actually set up without a DevOps background.
Two things Postgres gives you that neither Airtable nor Notion can: transactions and true concurrency. An atomic UPSERT — INSERT ON CONFLICT — means you can have twenty webhook events hitting the same table at the same time and every single write either completes fully or doesn't happen at all. No partial updates. No race conditions. No corrupted records.
I built a Stripe webhook ledger for a client last quarter — payment succeeded, payment failed, subscription created, invoice finalized — all landing on a Supabase Edge Function that writes to a Postgres table with an UPSERT keyed on the Stripe event ID. Supabase's own docs walk through this exact pattern with their Stripe webhook example. The UPSERT makes it idempotent — same event fires twice, the second write just updates the existing row instead of creating a duplicate. That's impossible in Airtable. And in Notion, you'd need to query first, check for existence, then create or update — two API calls minimum, with a race condition window between the check and the write.
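Here's the skeleton of that ledger for the show notes. The table name and columns are mine, and I've elided the Stripe signature verification you'd absolutely do first, but the idempotent upsert keyed on the event ID is the whole trick. Under the hood it's Postgres running INSERT ... ON CONFLICT DO UPDATE:

```typescript
// Minimal sketch: a Supabase Edge Function (Deno) as an idempotent
// Stripe event ledger. Assumes a "stripe_events" table with a unique
// event_id column; verify the Stripe signature before trusting the body.
import { createClient } from "npm:@supabase/supabase-js@2";

const supabase = createClient(
  Deno.env.get("SUPABASE_URL")!,
  Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!,
);

Deno.serve(async (req) => {
  const event = await req.json();

  // UPSERT keyed on the Stripe event ID: if the same event fires twice,
  // the second write updates the existing row instead of duplicating it.
  const { error } = await supabase.from("stripe_events").upsert(
    {
      event_id: event.id,
      type: event.type,
      payload: event,
      received_at: new Date().toISOString(),
    },
    { onConflict: "event_id" },
  );

  if (error) return new Response(error.message, { status: 500 });
  return new Response("ok", { status: 200 });
});
```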
On the connection side, Supabase sizes capacity by compute tier. Micro gives you sixty direct connections and two hundred pooled clients. XL bumps that to two hundred forty direct and a thousand pooled. The largest tiers go up to five hundred direct and twelve thousand pooled. For most solo builds, Micro is more than enough.
And the cost — this is what surprises people. Supabase Pro is twenty-five dollars a month per org. That includes ten dollars in compute credits. Micro compute costs about ten dollars a month. So your net cost for a fully managed Postgres instance with connection pooling, Edge Functions, and row-level security is... twenty-five dollars a month. That's five dollars more than an Airtable Team seat. For a real database.
Now, I can already hear the objection. "Jordan, I don't need Postgres. I'm not running a high-volume webhook ledger. I have twelve clients and a few hundred records. Why would I add database admin overhead to my stack?"
And honestly? You might be right. Airtable's own engineering guidance says to batch your writes, cache your reads through a proxy, and use performUpsert to reduce round trips. If you do all of that and you're still under the five requests per second ceiling and the hundred thousand monthly call cap — stay on Airtable. Seriously. The interface alone is worth it for client-facing data.
Same with Notion. If your API usage is light sync — pushing a few summaries a day, updating a project status board — three requests per second is plenty. You'll never feel the ceiling.
The bright line is this: the moment you need atomic writes under concurrency — webhooks, event ledgers, anything where two processes might write to the same record at the same time — you need Postgres. The moment you're routinely exceeding Airtable's hundred thousand monthly calls or Notion's three requests per second average, you need Postgres. And the moment you need real transactions — multi-statement operations that either all succeed or all roll back — there is no workaround. Airtable and Notion simply don't offer that.
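And if "real transactions" sounds abstract, here's what it buys you concretely. A sketch with the standard node-postgres client; the tables are invented, but the BEGIN / COMMIT / ROLLBACK pattern is the canonical one:

```typescript
// Minimal sketch: a multi-statement transaction with node-postgres.
// Both writes commit together or neither lands; table names are invented.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function recordPayment(invoiceId: string, amountCents: number) {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    await client.query("UPDATE invoices SET status = 'paid' WHERE id = $1", [
      invoiceId,
    ]);
    await client.query(
      "INSERT INTO payments (invoice_id, amount_cents) VALUES ($1, $2)",
      [invoiceId, amountCents],
    );
    await client.query("COMMIT"); // both statements land together...
  } catch (err) {
    await client.query("ROLLBACK"); // ...or neither does
    throw err;
  } finally {
    client.release();
  }
}
```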
The move I recommend to most clients: keep Airtable or Notion for the interface layer — the stuff your clients see and touch — and put Postgres behind the hot path. Your webhook listeners, your event ledgers, your high-frequency sync jobs. They don't need to be in the same place as your client-facing project tracker. Use each tool where it's strongest.
Let me make this concrete. Say you're processing twenty-five thousand record changes a day — not unusual if you're running automations across multiple client accounts. With Airtable batching at ten records per request, that's twenty-five hundred write calls per day. Seventy-five thousand a month. You're at seventy-five percent of your Team plan cap, and that's writes alone — reads push you over. You'd need Business at forty-five dollars per user per month, and you're still stuck at five requests per second per base.
Same workload on Notion? Twenty-five thousand writes at one chunk per write — assuming small payloads — is twenty-five thousand API calls a day. At three requests per second, that's over two hours of sustained API time just for writes. Your automations are queuing. Your Retry-After headers are stacking. It's technically possible but operationally miserable.
On Supabase Micro? Twenty-five thousand inserts with UPSERT. No rate limit per se — you're bounded by connection count, and at two hundred pooled connections, you could process that entire batch in minutes. Twenty-five dollars a month. Atomic writes. No four-twenty-nines.
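That back-of-the-envelope math is exactly what the calculator automates. In show-notes form, the whole comparison fits in a few lines:

```typescript
// Minimal sketch: the 25,000-changes-a-day math from above.
const changesPerDay = 25_000;

// Airtable: batch 10 records per write request.
const airtableCallsPerDay = Math.ceil(changesPerDay / 10); // 2,500
const airtableCallsPerMonth = airtableCallsPerDay * 30; // 75,000
console.log(
  `Airtable: ${(airtableCallsPerMonth / 100_000) * 100}% of the Team cap, writes alone`,
);

// Notion: ~1 call per record at a sustained 3 requests/second.
const notionHoursPerDay = changesPerDay / 3 / 3600; // ≈ 2.3 hours of API time
console.log(`Notion: ${notionHoursPerDay.toFixed(1)} hours of sustained writes per day`);

// Postgres: no request quota; bounded by connections.
// 25,000 upserts through a 200-client pool clears in minutes.
```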
I built a calculator for exactly this decision — it's on the Resources page. Punch in your daily record changes, your peak concurrent clients, whether you need transactions, and your budget. It runs the Airtable call math with batching, the Notion chunking estimate, and recommends a Supabase compute tier with the Pro plan total. Takes about thirty seconds.
So remember the client from the top of the episode — the one who migrated from Notion to Airtable to Supabase in the span of four months? When we finally sat down and mapped his workload, the answer was obvious. His client-facing project tracker stayed in Notion — his clients loved it, they were already in there every day, and the API load was negligible. His CRM and pipeline reporting moved to Airtable — lightweight relational ops, a few hundred updates a day, well within the five requests per second ceiling. And his Stripe webhook ledger and event processing went to Supabase on Micro compute — twenty-five dollars a month for atomic, idempotent writes with zero four-twenty-nines. Three tools. Three workload shapes. Each one doing exactly what it's best at.
That's the whole framework. Throughput ceiling, concurrency safety, cost at volume. Match the tool to the workload, not the other way around. If you want to run your own numbers, grab the Data Layer Decision Calculator on the Resources page — thirty seconds and you'll know exactly where your build should live.
Here's what I want you to do this week. Pick your highest-volume automation — the one that pushes the most records through an API. Count the daily calls. Check it against the ceilings we talked about today. If you're under seventy percent of your limit, you're fine. If you're over, start planning the move now — before a rate-limit update makes the decision for you.
That's it for this one. I'm Jordan. This is Headcount Zero. Go build something.