Jordan: Someone in the community posted this last Thursday — and I'm paraphrasing, but it's close. "I just got through a security review for a fintech client. They asked me to prove that a specific automation run on March twelfth processed the right data in the right order. I had nothing. Make showed me the run succeeded. That's it. Green checkmark. No details, no export, no chain of evidence. I lost the contract."
Jordan: And the replies were all some version of the same thing. "Yeah, I log errors." "I have a Slack notification when something fails." "My scenarios have error handlers."
Jordan: None of that answers the question. The question isn't "did it fail." The question is "prove what happened." Step by step. With timestamps. With evidence you can hand to someone who doesn't trust you yet.
Jordan: That's a fundamentally different problem. And if you're selling to anyone who has a procurement team, a compliance officer, or even just a cautious CTO — it's the problem that's coming for you next.
Jordan: By the end of this episode you'll have a complete data lineage automation system — one correlation ID generated at the trigger, propagated through every step, writing to an append-only log you can export as a CSV and hand to an auditor. I'm covering three stacks — Airtable, Notion, and Postgres on Supabase — so you can build this with whatever you're already running. Plus a Slack slash command that answers "what happened on this run" with evidence in under three seconds. I'm Jordan. This is Headcount Zero.
Jordan: So here's the situation most of us are in. You've got Make scenarios, maybe some n8n workflows, possibly Zapier Zaps — production systems moving real client data. And every one of those platforms gives you some version of a run history. Make shows you a timeline of modules. n8n shows you execution logs. But that history lives inside the platform. You can't export it in a structured format. You can't hand it to a client's security team. And you definitely can't prove that the data wasn't modified after the fact.
Jordan: Meanwhile, the asks from procurement are getting more specific. Trust centers were the baseline — we covered that earlier this season. But the next question after "show me your security posture" is "show me what your automation actually did with our data on this date." And if you can't answer that with exportable evidence, you're stuck saying "trust me, it worked." Which is exactly what the person who lost that contract was stuck saying.
Jordan: The fix is one pattern with three pieces. A correlation ID. An append-only log. And an export path.
Jordan: The correlation ID is the simplest piece and the most important. One identifier, generated at the very first trigger — the webhook, the form submission, the scheduled run — and then passed through every single step of the workflow. Every API call, every database write, every conditional branch. That one ID stitches the entire story together.
Jordan: And this isn't something I invented. The W3C Trace Context spec — the same standard that OpenTelemetry uses, the same one AWS X-Ray adopted in twenty twenty-three — defines exactly this pattern. A traceparent header with a thirty-two character hex trace ID that propagates across services. We're just applying it to Make and n8n instead of microservices.
Jordan: In practice, here's what that looks like. If your trigger is a webhook and the inbound request already includes a traceparent header — which it will if the sender is using OpenTelemetry — you extract the trace ID from that header and use it as your correlation ID. If there's no traceparent, you generate your own. In JavaScript that's one line — crypto dot random UUID, strip the dashes. Thirty-two lowercase hex characters. Done.
Jordan: n8n gives you a built-in execution ID you can grab from the execution object in any node. Make exposes an execution ID through its API. Either of those can serve as your correlation ID, or you can generate your own and propagate it alongside. The point is — one ID, decided at the trigger, carried everywhere.
Jordan: Now. Where do you write the log?
Jordan: If you're already on Airtable, you can build an append-only run log today without changing stacks. Here's the setup. You create a table — I call mine "run log" — with fields for timestamp, correlation ID, a sequence number, source, event type, status, latency in milliseconds, cost in dollars, a message field, and an event JSON field for structured details.
Jordan: The append-only part comes from permissions. As of March twenty twenty-six, Airtable lets you control who can create records and who can delete records at the table level. You also get field-level edit permissions. So you set the table to allow creates from your automation's personal access token, block deletes for everyone except owners, and lock every field so only the token can write values. Humans get read access. That's it.
Jordan: But here's the part most people miss — the intake surface. You don't want your automation writing directly to the grid if other people need to see the table. Airtable's Interface Designer lets you build a create-only form tied to the table. Interface-only editors can add records through that form but can't touch the underlying grid. So your Make scenario writes via the API, your client's team can view the log through a shared interface, and nobody can edit or delete what's already been written.
Jordan: Export is straightforward. Create a grid view filtered by correlation ID, sorted by sequence number. Download that view as CSV. That's your evidence file. If you're on Enterprise, you also get an audit log export that shows user-level activity — who accessed what, when.
Jordan: Roughly twenty minutes to set up the table, permissions, and interface. Your Make scenario just needs one additional HTTP module at each step — a POST to the Airtable API with the correlation ID and the event details. Maybe three minutes of extra build time per step.
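Here's what that one extra call per step can look like, as a sketch. The field names follow the schema described above; the base ID, table name, and token variable are placeholders you'd swap for your own:

```javascript
// Build the Airtable API request body for one run-log event.
// Field names match the schema above; adjust to your table.
function buildRunLogRecord(event) {
  return {
    fields: {
      "timestamp": event.timestamp,          // ISO 8601 string
      "correlation_id": event.correlationId,
      "sequence": event.sequence,
      "source": event.source,                // e.g. "make:scenario-42"
      "event_type": event.eventType,         // e.g. "http_call"
      "status": event.status,                // "ok" | "error"
      "latency_ms": event.latencyMs,
      "cost_usd": event.costUsd,
      "message": event.message,
      "event_json": JSON.stringify(event.details ?? {}),
    },
  };
}

// One HTTP call per step: POST the record to the Airtable REST API.
async function appendToRunLog(event) {
  const res = await fetch(
    "https://api.airtable.com/v0/YOUR_BASE_ID/run%20log",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.AIRTABLE_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(buildRunLogRecord(event)),
    }
  );
  if (!res.ok) throw new Error(`run log write failed: ${res.status}`);
  return res.json();
}
```

In Make, the equivalent is an HTTP module with that same JSON body mapped from scenario variables.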
Jordan: Notion is trickier. And I want to be honest about why. Notion does not have per-row immutability. There's no table-level "block deletes" toggle like Airtable has. The Lock feature on a database page prevents accidental edits in the UI, but it's not a security boundary — it's a convenience feature.
Jordan: So the append-only pattern in Notion is process-based, not permission-based. You create a logging database with the same fields — timestamp, correlation ID, sequence, source, event type, status, the whole set. Then you create a Notion internal integration — what they call a connection — and share the database with that integration. Give it read, update, and insert capabilities. Now here's the key move — every human collaborator on that database gets view-only or comment-only access. The integration writes. People read. Nobody edits.
Jordan: Is this cryptographically tamper-evident? No. A workspace owner could theoretically change data. But for most client engagements — especially the ones where you're proving operational reliability, not defending against insider threats — this is sufficient. You've got version history on paid plans, you've got the Enterprise audit log if you need it, and you've got a clear separation between the automation that writes and the humans who review.
Jordan: Export works the same way. Filter the database by correlation ID, export to CSV. Or hit the Notion API with a query filtered on correlation ID and serialize the results to JSON. Takes about fifteen minutes to set up the database and integration. Your automation writes via the API — same pattern as Airtable, one additional HTTP call per step.
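The API-side export can be sketched like this, assuming the database schema above; the database ID and property names are placeholders, and the filter type depends on how you modeled the correlation ID property:

```javascript
// Query the Notion logging database for every event in one run,
// filtered by correlation ID and sorted by sequence number.
function buildNotionQuery(correlationId) {
  return {
    filter: {
      property: "correlation_id",
      rich_text: { equals: correlationId },
    },
    sorts: [{ property: "sequence", direction: "ascending" }],
  };
}

// Serialize the results to JSON for the evidence file.
async function fetchRunEvents(correlationId) {
  const res = await fetch(
    "https://api.notion.com/v1/databases/YOUR_DATABASE_ID/query",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.NOTION_TOKEN}`,
        "Notion-Version": "2022-06-28",
        "Content-Type": "application/json",
      },
      body: JSON.stringify(buildNotionQuery(correlationId)),
    }
  );
  const data = await res.json();
  return data.results;
}
```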
Jordan: Now. If your client's security team is serious — if they're asking about tamper evidence, if they want to know whether the log itself could have been modified — Airtable and Notion won't satisfy them. You need Postgres. And specifically, you need hash-chaining.
Jordan: This is the part that sounds intimidating but is actually... elegant. Here's the concept. Every row in your run log table includes two extra fields — previous hash and row hash. When a new row is inserted, a database trigger fires before the insert. That trigger looks up the most recent row for the same correlation ID, grabs its row hash, and uses it as the previous hash for the new row. Then it computes the new row's hash by running SHA-256 — via Postgres's pgcrypto extension — over the previous hash concatenated with the event JSON. The result is a chain. Each row's hash depends on every row that came before it. If anyone modifies or deletes a row in the middle, the chain breaks. And you can verify that with a single query.
Jordan: On Supabase, you enforce append-only with Row Level Security. You enable RLS on the table, create a policy that allows INSERT, and simply define no policy for UPDATE or DELETE — with RLS on, anything without a policy is denied by default. The database literally will not let your API role modify existing rows. Combined with the hash chain, you get detection and prevention.
Jordan: One detail that matters for production — concurrency. If multiple steps in the same run try to insert simultaneously, they could read the same previous hash and create a fork in the chain. The fix is a PostgreSQL advisory lock scoped to the correlation ID. The trigger calls pg advisory xact lock with a hash of the correlation ID before reading the chain head. That serializes inserts per run without blocking other runs. Takes about forty-five minutes to set up the table, trigger, and RLS policies. The Run Log Kit on the Resources page has the exact SQL — you can paste it into Supabase's SQL editor and have this running in one session.
Jordan: Once you have the log, you need a way to query it that doesn't require opening a database. This is where the Slack slash command comes in. You type slash what-happened, paste the correlation ID, and get back a summary — how many steps, total duration, final status, and the last five events in the chain.
Jordan: The implementation is a webhook endpoint. Slack sends an HTTP POST with the correlation ID in the text field. You acknowledge within three seconds — that's Slack's requirement — with an ephemeral message that says "fetching." Then you query your log, format the results into Slack blocks, and POST them back to the response URL. If you want something richer, you can open a modal using the trigger ID that Slack provides — show the full timeline, add an "export CSV" button that generates a signed download link.
Jordan: I built this for a client in February. The first time their ops lead typed slash what-happened and got back a six-step timeline with timestamps and status codes — in Slack, in two seconds — she said, "This is the first time I've actually understood what your automations do." That's the client proof problem solved. Not with a dashboard. Not with a monthly report. With a command they can run themselves, any time, on any run.
Jordan: I need to be direct about what this does and doesn't give you. Airtable and Notion give you operational append-only logging. They're good enough for most client engagements, most security reviews, most SLA reporting. But a determined admin with full access could still modify data. The audit trail would show the modification on Enterprise plans, but the log itself isn't cryptographically sealed.
Jordan: Postgres with hash-chaining gives you tamper detection. If someone changes a row, the verification query catches it. But it doesn't prevent the change — it proves the change happened. A sufficiently motivated attacker with superuser access could rewrite the entire chain.
Jordan: If you need stronger assurance — and some regulated industries do — the next step is external anchoring. Once a day, compute a hash over all the chain heads from that day's runs and write it to a separate system. An S3 bucket with Object Lock. A different database. Even a simple append-only file on a different server. Now the chain is anchored to something outside the system that produced it. That's the level where you can say to an auditor, "Even if someone compromised the database, the daily anchor would show the discrepancy."
Jordan: For most of us, most of the time, the Postgres hash chain is more than enough. But know where the ceiling is before you promise a client you've hit it.
Jordan: So — back to that community post. The person who lost the fintech contract because they couldn't prove what happened on March twelfth. With this system, that answer takes about two seconds. You open Slack, type slash what-happened, paste the correlation ID, and you're looking at every step, every timestamp, every status code. You export the CSV, attach it to the email, and the conversation shifts from "trust me" to "here's the evidence."
Jordan: That shift — from invisible work to provable work — is the thing that changes how clients see you. Not as a vendor they're monitoring. As infrastructure they can rely on.
Jordan: If you want to ship this today, the Run Log Kit is on the Resources page. It has the field schemas for all three stacks, the Postgres SQL with the hash-chain trigger and RLS policies, and the Slack endpoint contract you can wire up in an afternoon.
Jordan: One thing to do this week. Pick the stack you're already on — Airtable, Notion, or Supabase — and set up the run log table with the correlation ID field. Just the table. Just the permissions. You don't need the Slack command yet. You don't need hash-chaining yet. Get the log writing first. Once you can see every run as a chain of timestamped events, everything else clicks into place.
Jordan: I'm Jordan. This is Headcount Zero. Go build something you can prove.