Tamper‑Evident Run Log Kit: Correlation IDs + Append‑Only Schemas for Airtable, Notion, and Postgres/Supabase
A paste‑ready, stack‑by‑stack build to ship append‑only run logs with a single correlation ID — including Airtable/Notion permission patterns, Postgres/Supabase SQL for hash‑chaining + RLS, a Slack “/what‑happened” contract, and export recipes you can hand to auditors.
Ship a defensible run log you can hand to a client, an auditor, or your future self. Generate one correlation ID at the trigger, propagate it through every step, and write append‑only records you can export on demand. This kit gives you the exact fields, permissions, SQL, and a Slack “/what‑happened” contract so you can answer the question with evidence in seconds.
1) Core spec: one correlation ID + portable run‑log fields
Use one ID to stitch the entire story together. Prefer the W3C Trace Context trace ID (32 hex chars). If an inbound request includes a traceparent header, extract its trace-id and use it as your correlation_id. If not, generate a 32‑char lowercase hex ID and send your own traceparent downstream.
Recommended run‑log fields (use these names across stacks):
- `ts` (timestamp, ISO 8601) — when this step happened.
- `correlation_id` (text, 32 hex) — one ID for the entire run.
- `seq` (int) — 0-based step order within a run.
- `source` (text) — system or step name (e.g., `make.webhook`, `n8n.http`, `app.charge`).
- `event_type` (text) — what happened (e.g., `triggered`, `validated`, `called_api`, `error`).
- `status` (enum) — `ok` | `error` | `pending`.
- `actor` (text) — person/integration/service account if relevant.
- `input_ref` / `output_ref` (text/URL) — IDs or links touched at this step.
- `latency_ms` (int) — step duration.
- `cost_usd` (numeric) — cost for this step (e.g., API/AI usage).
- `message` (text) — brief human-readable note.
- `event_json` (JSON) — structured details (request/response keys, validated fields, etc.).
- `prev_hash`, `row_hash` (text, 64 hex) — only needed for Postgres/Supabase hash-chaining.
Example record (generic):
{
"ts": "2026-05-01T14:22:31.481Z",
"correlation_id": "4bf92f3577b34da6a3ce929d0e0e4736",
"seq": 3,
"source": "app.charge",
"event_type": "called_api",
"status": "ok",
"actor": "svc-stripe",
"input_ref": "cust_9h8...",
"output_ref": "pi_3NL...",
"latency_ms": 712,
"cost_usd": 0.0100,
"message": "Stripe PaymentIntent confirmed",
"event_json": {"amount": 19900, "currency": "usd", "method": "card"}
}
Propagation tips:
- HTTP: forward both `traceparent` and `x-correlation-id` headers.
- Make: capture `executionId` when available; fall back to your own 32-hex ID.
- n8n: read the built-in execution ID, or set your own 32-hex ID at the trigger.
- If you must mint your own ID in code: `crypto.randomUUID().replace(/-/g, "").toLowerCase()` (JS) yields 32 hex characters.
2) Airtable: append‑only intake with Interfaces + view exports
Goal: allow creates, block edits/deletes, and give non‑owners a clean intake surface.
Table fields (map types to Airtable):
- `ts` = Created time (auto)
- `correlation_id` = Single line text (lowercase; 32 chars)
- `seq` = Auto number or Formula (not editable by forms)
- `source`, `event_type`, `status` = Single select (pre-seed common values)
- `actor` = Single line text
- `input_ref`, `output_ref` = URL
- `latency_ms` = Number (integer)
- `cost_usd` = Currency (2 decimals)
- `message` = Long text
- `event_json` = Long text (store minified JSON)
Permission checklist (append‑only pattern):
- Table → Permissions
- Who can create records? Owners/Creators (and Interface form; see below)
- Who can delete records? Nobody (or Owners only; prefer Nobody)
- Field‑level permissions
- Set all fields to “Specific users: Owners/Creators.”
- Optional: leave `message` editable by a trusted reviewer if you need human notes.
- Tokens
- Use a PAT scoped to the base; your automations should only call Create. Field‑level blocks will reject edits.
- Interface (create‑only)
- Build an Interface page with a Form tied to the run‑log table.
- Show only fields needed for creates (hide `seq`, `ts`).
- Share the Interface with interface-only editors who have permission to add records.
- In base permissions, keep non‑owners as read‑only to prevent grid edits.
Form data contract (for Make/n8n):
- Always include `correlation_id`, `source`, `event_type`, `status`, and `event_json` (stringified JSON).
- Optionally include the rest; Airtable will add `ts` and `seq`.
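A sketch of that contract from code (Node 18+ global `fetch`; the `BASE_ID` and `AIRTABLE_TOKEN` env vars and the `run_log` table name are placeholders for your own base, PAT, and table):

```javascript
// Build the Airtable create-record payload from a run-log step.
function airtablePayload(step) {
  return {
    fields: {
      correlation_id: step.correlation_id,
      source: step.source,
      event_type: step.event_type,
      status: step.status,
      event_json: JSON.stringify(step.event_json || {}),
    },
  };
}

// POST one step; Airtable adds ts and seq server-side, per the contract above.
async function appendRunLogStep(step) {
  const res = await fetch(`https://api.airtable.com/v0/${process.env.BASE_ID}/run_log`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.AIRTABLE_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(airtablePayload(step)),
  });
  if (!res.ok) throw new Error(`Airtable create failed: ${res.status}`);
  return res.json();
}
```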
CSV export (evidence):
- Create a Grid view filtered by `correlation_id = <ID>` and sorted by `seq`.
- Download that view as CSV; the filter is preserved in the export.
API filter formula (programmatic export):
filterByFormula=AND(
{correlation_id}='4bf92f3577b34da6a3ce929d0e0e4736'
)
Quick test:
- Set table/field permissions, publish Interface form.
- Submit two steps with the same `correlation_id`.
- Attempt to edit a field in the grid with a non-owner account → should be blocked.
- Export the filtered view to CSV.
3) Notion: integration‑only writes + CSV/audit exports
Goal: practical append‑only via integration‑only writes; humans are view/comment‑only.
Database properties:
- `ts` = Created time
- `correlation_id` (Text), `seq` (Number)
- `source`, `event_type`, `status` (Select)
- `actor` (Text), `input_ref` / `output_ref` (URL)
- `latency_ms` (Number), `cost_usd` (Number)
- `message` (Text), `event_json` (Text; minified JSON)
Setup steps:
- Create a Notion internal integration (connection).
- Share the run‑log database with the integration (Can edit).
- Share with human collaborators as View (or Comment) only.
- Lock the database page to reduce accidental edits (does not create immutability).
Insert example (curl):
curl -X POST https://api.notion.com/v1/pages \
  -H "Authorization: Bearer $NOTION_TOKEN" \
-H 'Notion-Version: 2022-06-28' \
-H 'Content-Type: application/json' \
-d '{
"parent": {"database_id": "<DB_ID>"},
"properties": {
"correlation_id": {"rich_text": [{"text": {"content": "4bf92f3577b34da6a3ce929d0e0e4736"}}]},
"seq": {"number": 1},
"source": {"select": {"name": "make.webhook"}},
"event_type": {"select": {"name": "validated"}},
"status": {"select": {"name": "ok"}},
"latency_ms": {"number": 128},
"cost_usd": {"number": 0.0004},
"message": {"rich_text": [{"text": {"content": "validated payload"}}]}
},
"children": [
{"object":"block","type":"code","code": {"language":"json","rich_text":[{"text":{"content":"{\"user\":\"123\",\"valid\":true}"}}]}}
]
}'
Query by correlation_id (returns JSON):
curl -X POST https://api.notion.com/v1/databases/<DB_ID>/query \
  -H "Authorization: Bearer $NOTION_TOKEN" \
-H 'Notion-Version: 2022-06-28' \
-H 'Content-Type: application/json' \
-d '{
"filter": {"property": "correlation_id", "rich_text": {"equals": "4bf92f3577b34da6a3ce929d0e0e4736"}},
"sorts": [{"property": "seq", "direction": "ascending"}]
}'
Evidence exports:
- UI: open the database → ••• → Export → CSV. Filter first by `correlation_id`.
- Audit: workspace owners (Enterprise plan) can export the audit log CSV for user-level actions.
Important limits:
- Notion has no per‑row immutability. Treat this as operational append‑only plus audit visibility. Keep humans view/comment‑only; let the integration write.
Quick test:
- Remove all human edit rights; insert via the integration; confirm humans cannot modify rows; export filtered CSV.
4) Postgres/Supabase: append‑only + SHA‑256 hash‑chaining (with RLS and advisory locks)
Goal: true append‑only with tamper‑evident hash‑chaining per correlation_id.
Core SQL (Postgres/Supabase):
-- 1) Setup
create extension if not exists pgcrypto; -- for digest()
-- 2) Table
create table if not exists public.run_log (
id bigserial primary key,
ts timestamptz not null default now(),
correlation_id text not null check (correlation_id ~ '^[0-9a-f]{32}$'),
seq integer not null,
source text not null,
event_type text not null,
status text not null check (status in ('ok','error','pending')),
actor text,
input_ref text,
output_ref text,
latency_ms integer,
cost_usd numeric(10,4),
message text,
event_json jsonb not null,
prev_hash text not null,
row_hash text not null
);
create unique index if not exists run_log_cid_seq_idx on public.run_log(correlation_id, seq);
-- 3) Trigger: compute seq, prev_hash, row_hash with per-run serialization
create or replace function public.run_log_before_insert()
returns trigger language plpgsql as $$
declare
v_prev_hash text;
v_seq int;
v_event_text text;
begin
-- serialize per correlation using advisory xact lock (key space partition 1)
perform pg_advisory_xact_lock(1, hashtext(new.correlation_id));
select rl.row_hash, rl.seq into v_prev_hash, v_seq
from public.run_log rl
where rl.correlation_id = new.correlation_id
order by rl.seq desc
limit 1;
if v_prev_hash is null then
v_prev_hash := repeat('0', 64); -- chain root
v_seq := 0;
else
v_seq := v_seq + 1;
end if;
new.seq := coalesce(new.seq, v_seq);
new.prev_hash := v_prev_hash;
v_event_text := coalesce(new.event_json::text, '{}');
new.row_hash := encode(digest(v_prev_hash || v_event_text, 'sha256'), 'hex');
return new;
end;$$;
create or replace trigger run_log_bi
before insert on public.run_log
for each row execute procedure public.run_log_before_insert();
-- 4) RLS: insert-only for writers; no UPDATE/DELETE policies
revoke all on public.run_log from public;
alter table public.run_log enable row level security;
-- Example roles (adapt to your auth model)
-- grant select, insert to your API role as needed (Supabase service role bypasses RLS on the server).
create policy run_log_insert_only on public.run_log
for insert
to public
with check (true);
-- Optional read policy (tighten in production)
create policy run_log_read_all on public.run_log
for select
to public
using (true);
Verify the chain for a run (returns 0 rows when intact):
with chk as (
select
correlation_id,
seq,
prev_hash,
row_hash,
encode(
digest(
coalesce(lag(row_hash) over (partition by correlation_id order by seq), repeat('0',64)) || event_json::text,
'sha256'
), 'hex'
) as recomputed
from public.run_log
where correlation_id = $1
)
select * from chk where row_hash <> recomputed;
Minimal insert (server‑side):
insert into public.run_log (correlation_id, source, event_type, status, event_json)
values ('4bf92f3577b34da6a3ce929d0e0e4736','make.webhook','triggered','ok','{}'::jsonb);
Notes:
- Hash‑chains detect tampering; they don’t prevent it. RLS + least‑privilege limits who can write; the chain lets you prove if something changed.
- The advisory lock serializes inserts per `correlation_id`, avoiding race conditions when multiple steps insert concurrently.
- For high write volume, consider partitioning by day or hashing `correlation_id` to sharded tables.
- Store only non-sensitive payload keys in `event_json`, or redact/field-encrypt before insert.
5) Slack “/what‑happened” endpoint: request/response and example blocks
Purpose: return a concise, defensible lineage for a given correlation_id inside Slack.
Slash command contract:
- Command: `/what-happened <correlation_id>`
- Slack sends an `application/x-www-form-urlencoded` POST with fields including `text`, `user_id`, `channel_id`, `response_url`, `trigger_id`.
- Respond within 3 seconds (ephemeral ACK), then POST results to `response_url`.
Pseudocode:
// Express-style handler
app.post('/slack/what-happened', async (req, res) => {
const id = (req.body.text || '').trim().toLowerCase();
if (!/^[0-9a-f]{32}$/.test(id)) {
  return res.json({ response_type: 'ephemeral', text: 'Usage: /what-happened <32-hex correlation_id>' });
}
res.json({ response_type: 'ephemeral', text: `Fetching run ${id}…` });
const rows = await db.run_log.findMany({
where: { correlation_id: id },
orderBy: { seq: 'asc' }
});
const duration = rows.length ? (new Date(rows.at(-1).ts) - new Date(rows[0].ts)) : 0;
const steps = rows.length;
const status = rows.at(-1)?.status || 'unknown';
const blocks = [
section(`*Run* ${id} — *${steps}* steps • *${(duration/1000).toFixed(2)}s* • status: *${status}*`),
divider(),
...rows.slice(-5).map(r => section(`*#${r.seq}* ${r.source} • ${r.event_type} • ${r.status}\n${r.message || ''}`))
];
await fetch(req.body.response_url, { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ response_type: 'ephemeral', blocks }) });
});
Sample JSON (blocks):
{
"response_type": "ephemeral",
"blocks": [
{"type":"section","text":{"type":"mrkdwn","text":"Run 4bf92f3577b34da6a3ce929d0e0e4736 — 6 steps • 7.21s • status: ok"}},
{"type":"divider"},
{"type":"section","text":{"type":"mrkdwn","text":"#0 make.webhook • triggered • ok"}},
{"type":"section","text":{"type":"mrkdwn","text":"#1 app.validate • validated • ok"}},
{"type":"section","text":{"type":"mrkdwn","text":"#2 app.charge • called_api • ok — Stripe PaymentIntent confirmed"}},
{"type":"section","text":{"type":"mrkdwn","text":"#3 app.email • called_api • ok"}},
{"type":"section","text":{"type":"mrkdwn","text":"#4 app.crm • called_api • ok"}},
{"type":"section","text":{"type":"mrkdwn","text":"#5 app.closeout • completed • ok"}}
]
}
Modal option:
- Use `views.open` with `trigger_id` to show a timeline and an "Export CSV" button.
- On click, generate the export (see next section) and respond with a signed URL.
6) CSV/JSON export recipes (Airtable, Notion, Postgres/Supabase)
Hand‑to‑auditor exports by stack.
Airtable → CSV
- Create a view filtered by `correlation_id = <ID>` and sorted by `seq` ASC.
- Grid view → … → Download CSV. File name convention: `runlog_<ID>_<UTCDate>.csv`.
- Programmatic: GET records with `filterByFormula` (as shown above) and write CSV headers: `ts,seq,source,event_type,status,...`.
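A sketch of that programmatic pull (`filterByFormula` must be URL-encoded; the base ID, `AIRTABLE_TOKEN` env var, and `run_log` table name are placeholders, and pagination via `offset` is omitted):

```javascript
// Build the Airtable list-records URL for one run: filter by correlation_id,
// sort by seq ascending. URLSearchParams handles the formula's URL-encoding.
function exportUrl(baseId, correlationId) {
  const params = new URLSearchParams({
    filterByFormula: `{correlation_id}='${correlationId}'`,
    "sort[0][field]": "seq",
    "sort[0][direction]": "asc",
  });
  return `https://api.airtable.com/v0/${baseId}/run_log?${params}`;
}

// Fetch matching records and keep only the fields (first page only).
async function fetchRun(baseId, correlationId) {
  const res = await fetch(exportUrl(baseId, correlationId), {
    headers: { Authorization: `Bearer ${process.env.AIRTABLE_TOKEN}` },
  });
  if (!res.ok) throw new Error(`Airtable list failed: ${res.status}`);
  const { records } = await res.json();
  return records.map((r) => r.fields);
}
```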
Notion → CSV/JSON
- UI: open database → ••• → Export → CSV (after filtering by `correlation_id`).
- API: use `/databases/{id}/query` with a `rich_text.equals` filter and serialize the results to CSV or JSON. Include the database name and export timestamp in the envelope.
Postgres/Supabase → CSV/JSON
- SQL (psql):
\copy (
select ts, correlation_id, seq, source, event_type, status, actor, input_ref, output_ref, latency_ms, cost_usd, message, event_json, prev_hash, row_hash
from public.run_log
where correlation_id = '4bf92f3577b34da6a3ce929d0e0e4736'
order by seq
) to 'runlog_4bf9_2026-05-01T14-30Z.csv' csv header;
- Supabase JS snippet (JSON export):
const { data, error } = await supabase
.from('run_log')
.select('*')
.eq('correlation_id', id)
.order('seq', { ascending: true });
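To turn those rows into auditor-ready CSV, a minimal serializer using the portable header order from Section 1 (RFC 4180-style quoting):

```javascript
// Portable run-log header order from Section 1.
const HEADERS = [
  "ts", "correlation_id", "seq", "source", "event_type", "status", "actor",
  "input_ref", "output_ref", "latency_ms", "cost_usd", "message", "event_json",
];

// RFC 4180-style escaping: quote fields containing commas, quotes, or newlines.
function toCsv(rows) {
  const esc = (v) => {
    const s = v == null ? "" : typeof v === "object" ? JSON.stringify(v) : String(v);
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  const body = rows.map((r) => HEADERS.map((h) => esc(r[h])).join(","));
  return [HEADERS.join(","), ...body].join("\n");
}
```

Missing fields serialize as empty cells, so the same header row works across all three stacks.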
Export envelope (recommended wrapper):
{
"exported_at": "2026-05-01T14:30:00Z",
"correlation_id": "4bf92f3577b34da6a3ce929d0e0e4736",
"chain_verified": true,
"steps": 6,
"duration_ms": 7210,
"items": [/* ordered run-log rows */]
}
Tip: generate a signed URL that expires in 24 hours; never attach raw CSV/JSON directly in Slack.
7) Verification, ops checks, and SLOs
Bake verification into daily ops and incident response.
On every deploy:
- Unit-test the Postgres trigger: insert 3 rows for the same `correlation_id`; assert `seq` = 0..2 and that `row_hash` changes each time.
- Run the integrity-check query over the last 24h of `correlation_id`s; alert if any chain breaks.
3 queries to memorize:
-- Latest step per run (status snapshot)
select correlation_id, max(seq) as last_seq,
(array_agg(status order by seq desc))[1] as last_status,
max(ts) as last_ts
from public.run_log
where ts > now() - interval '24 hours'
group by correlation_id
order by last_ts desc;
-- Integrity check for one run (0 rows = OK)
-- see Section 4 "Verify the chain" query
-- End-to-end duration per run
select correlation_id,
extract(epoch from (max(ts) - min(ts))) * 1000 as duration_ms,
count(*) as steps
from public.run_log
group by correlation_id
order by duration_ms desc;
SLOs to adopt:
- “/what‑happened” returns a summary in < 3 seconds.
- 100% of outbound HTTP calls propagate `traceparent` or `x-correlation-id`.
- 0 chain breaks across the last 30 days (Postgres builds).
8) Hardening: limits, external anchoring, and data hygiene
Know the limits and how to harden when scrutiny increases.
- Airtable/Notion: append‑only patterns rely on permissions and process. Admins can still change data; use audit logs and exports for visibility.
- Postgres/Supabase: hash‑chaining detects tampering; it doesn’t prevent it. Least‑privilege + RLS narrows risk.
- External anchoring (optional, stronger assurance): compute a daily hash over each run's final `row_hash` (or over the whole table state) and anchor it to write-once storage (e.g., WORM/S3 Object Lock, another database, or a notarization service). Keep a simple `anchors` table with `day`, `anchor_hash`, `anchored_at`, and a link to the receipt.
- PII minimization: store IDs/refs, not raw payloads; or encrypt specific JSON fields client-side before insert.
- Backfills: when importing historical runs, set `correlation_id`/`seq` consistently and recompute the chain offline before load.
- Cost visibility: populate `cost_usd` and `latency_ms` on each step to defend margins and SLAs with numbers.
Anchor hash example (daily):
-- assumes the anchors table exists:
-- create table if not exists public.anchors(day date primary key, anchor_hash text not null, anchored_at timestamptz not null);
with heads as (
select correlation_id, (array_agg(row_hash order by seq desc))[1] as head
from public.run_log
where ts::date = current_date
group by correlation_id
),
rollup as (
select encode(digest(string_agg(head, '' order by correlation_id), 'sha256'), 'hex') as day_hash
from heads
)
insert into public.anchors(day, anchor_hash, anchored_at)
select current_date, day_hash, now() from rollup;
Document where the anchor is stored and how to retrieve the receipt during an audit.