Jordan: Your compliance policy is a lie.
Not because you wrote it wrong. Not because you don't care about data residency or consent or which models touch which data. You probably care a lot. You probably have a Google Doc somewhere—maybe a Notion page—that says things like "EU customer data must be processed within the EU" and "no PII sent to third-party models without explicit consent." Responsible stuff. Professional stuff.
But that document is not connected to anything. It's not wired into your Make scenarios. It's not gating your n8n workflows. It's sitting in a folder while your automations do whatever they want.
And the moment a procurement reviewer asks you to prove—not describe, prove—that your flows obey your own policy... you're stuck. Because the policy lives in a PDF and the enforcement lives in your memory. And your memory is not an audit trail.
Today I'm fixing that. One JSON file. One evaluator. Every API call and every LLM step in your stack checks the rules before it runs—and if the rules say no, the call doesn't happen. Not "maybe." Not "usually." Doesn't happen.
Jordan: Every week you don't have runtime policy enforcement, you are one mis-routed API call away from sending EU customer data through a US endpoint—and one procurement audit away from losing a contract you already earned. That's not hypothetical. That's the gap between what your policy document promises and what your automations actually do. I'm Jordan. This is Headcount Zero. And today you're getting policy-as-code JSON—a single versioned file that encodes consent, data residency, and tool-allowlist rules, plus the evaluator that checks those rules before every downstream call in Make or n8n. Fail closed by default. No exceptions without a human in the loop.
Jordan: So here's what actually happens in most solo practices right now. You have a client—say a healthcare staffing company, fifty employees, EU operations. You build them a Make scenario that takes inbound form data, runs it through an OpenAI classification step, and routes the output to their CRM. Works great. Ships fast.
But where does that API call go? Which OpenAI endpoint? Which region processes the data? Is the user's consent flag checked before the model sees their input? You don't know—because you didn't encode those rules anywhere the scenario can read them. You encoded them in your head when you picked the module settings. And if you build a second scenario next month, you'll encode them again. Slightly differently. From memory.
That's policy drift. And it's invisible until someone audits you.
The OWASP Developer Guide has a principle for this—fail-safe defaults. The idea is simple: if a condition isn't explicitly permitted, deny access. Don't allow by default and hope your filters catch the bad stuff. Deny by default and require every request to prove it's allowed. That's the posture we're building today.
Jordan: The core of this whole system is one JSON file. I call it policy.json. It's not code. It's data. And that distinction matters—because data is portable, data is versionable, and data is something a procurement reviewer can actually read.
The file has six sections. Consent flags—does this user allow processing, marketing use, model training? Data classes—which categories are allowed to touch an LLM and which are hard-denied? Think generic text versus payment card numbers versus government IDs. Regions—which geographies can data be stored in, processed in, and what's the fallback if a region isn't available? Purposes—what business reasons are approved for LLM use, and which ones require explicit consent first? Retention—how long do you keep logs, which fields get masked, do you store model responses at all? And finally, tools—which providers are allowed, which endpoints, which models, in which regions.
That last section is where it gets specific. Under OpenAI, you list the allowed domain prefixes—us.api.openai.com for US processing, eu.api.openai.com for EU. Under Bedrock, you specify the routing mode—In-Region, Geographic, or Global—and the model ID prefixes that match. Under Vertex AI, you lock the regional endpoint. Every provider, every model, every region—explicit.
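To make that concrete, here is a trimmed sketch of what the file can look like. Every field name is illustrative, not the starter pack's exact schema, so treat it as a shape to aim for rather than the template itself:

```json
{
  "version": "2025-01-15",
  "consent": { "processing": true, "marketing": false, "model_training": false },
  "data_classes": {
    "allowed": ["generic_text", "business_contact"],
    "denied": ["payment_card", "government_id"]
  },
  "regions": { "storage": ["eu"], "processing": ["eu"], "fallback": "deny" },
  "purposes": {
    "allowed": ["necessary_processing", "support_triage"],
    "requires_consent": ["marketing"]
  },
  "retention": { "log_days": 30, "mask_fields": ["email", "phone"], "store_model_responses": false },
  "tools": {
    "openai": { "domains": { "us": "us.api.openai.com", "eu": "eu.api.openai.com" }, "models": ["gpt-4o-mini"] },
    "bedrock": { "routing_mode": "geographic", "invoke_region": "eu-central-1", "model_prefixes": { "us": "us.", "eu": "eu." } },
    "vertex": { "region": "europe-west4" },
    "azure": { "deployment_type": "DataZone", "resource": "my-foundry-resource", "deployment": "gpt-4o-mini" }
  }
}
```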
And here's what makes this different from a PDF. When you change a rule—say a client revokes marketing consent, or you add Vertex as an approved provider—you change one file. Every scenario that reads that file picks up the change on the next run. You don't have to find every filter on every canvas and update them individually. One source of truth. Version it in Git if you want an audit trail. Or store it in a Make Data Store and pull it at runtime. Either way—one file, one truth.
The Governance-as-JSON Starter Pack on the Resources page has the full template with every field stubbed out. You fill in the brackets, commit it, and you've got a policy that machines can read.
Jordan: The policy file is just data. It doesn't do anything by itself. You need an evaluator—a small piece of JavaScript that sits in a Code step, reads the policy, reads the incoming request, and returns one boolean. Allow or deny.
I use JSONLogic for this. It's a lightweight rule engine—the entire library is a single JavaScript file, about fifteen hundred GitHub stars, actively maintained. The idea behind JSONLogic is that your rules are also JSON. You write expressions like "if resolved.vendor_allowed equals true AND resolved.region_ok equals true AND resolved.data_denied equals false... then allow." All of it serializable. All of it testable.
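Just to show the shape of it, here is what a rule like that looks like with json-logic-js. The resolved field names are mine, not part of the library:

```javascript
// The rule is plain JSON; json-logic-js evaluates it against a data object.
// Assumes json-logic-js is installed, or the library is pasted inline (see the n8n Cloud note below).
const jsonLogic = require("json-logic-js");

const rule = {
  "and": [
    { "==": [{ "var": "resolved.vendor_allowed" }, true] },
    { "==": [{ "var": "resolved.region_ok" }, true] },
    { "==": [{ "var": "resolved.data_denied" }, false] }
  ]
};

const allow = jsonLogic.apply(rule, {
  resolved: { vendor_allowed: true, region_ok: true, data_denied: false }
});
// allow === true means the call may proceed; anything else fails closed
```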
Now—important caveat if you're on n8n Cloud. You cannot install npm packages on n8n Cloud. The Code node gives you JavaScript, but only built-in modules—crypto and moment. That's it. So you have two options. Option one: paste the JSONLogic library directly into your Code node. It's one file. It works. Option two: if your rules are simple enough, skip JSONLogic entirely and write the boolean checks inline. For most solo practices, five or six boolean comparisons cover everything you need. JSONLogic becomes more valuable when you're managing policies across multiple clients with different rules—because the rules themselves are data you can swap without changing the evaluator code.
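For the inline version, a minimal sketch looks something like this. The policy and request field names mirror the illustrative policy.json from earlier, not any official schema:

```javascript
// Minimal inline evaluator: no libraries, fail closed by default.
// Field names are illustrative and should mirror your own policy.json.
function evaluate(policy, request) {
  const resolved = {
    vendor_allowed: Object.keys(policy.tools || {}).includes(request.vendor),
    region_ok: (policy.regions?.processing || []).includes(request.region),
    purpose_ok: (policy.purposes?.allowed || []).includes(request.purpose),
    consent_ok: request.purpose !== "marketing" || policy.consent?.marketing === true,
    data_denied: (request.data_classes || []).some(
      c => (policy.data_classes?.denied || []).includes(c)
    )
  };

  const allow =
    resolved.vendor_allowed &&
    resolved.region_ok &&
    resolved.purpose_ok &&
    resolved.consent_ok &&
    !resolved.data_denied;

  // Collect which checks failed so the reason string tells you exactly why a call was blocked.
  const failed = Object.entries(resolved)
    .filter(([key, value]) => (key === "data_denied" ? value : !value))
    .map(([key]) => key);

  return { allow, resolved, reason: allow ? "all checks passed" : "denied: " + failed.join(", ") };
}
```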
On Make, the Code app runs sandboxed JavaScript—billed at two credits per second of execution time. A policy evaluation takes well under a second. We're talking fractions of a cent per check. Effectively free.
Jordan: Here's what the Make scenario looks like. You've got your trigger—a webhook, a schedule, whatever kicks off the flow. Before any API or LLM module, you drop in a Code step. That Code step loads the policy—either hard-coded, fetched from a Data Store, or pulled via HTTP from wherever you keep it. It builds a request object describing the intended call—which vendor, which endpoint, which model, which region, what data classes are in the payload, what's the business purpose, does the user have consent on file.
The evaluator runs. Returns allow true or allow false, plus a reason string and the resolved booleans so you can see exactly which check failed.
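The request object itself is small. A sketch, with the same illustrative field names as before:

```javascript
// Built inside the Code step from the incoming bundle; values here are illustrative.
const request = {
  vendor: "openai",
  endpoint: "eu.api.openai.com",
  model: "gpt-4o-mini",
  region: "eu",
  data_classes: ["generic_text"],
  purpose: "necessary_processing",
  consent_on_file: true
};

// Using the evaluate() sketch from earlier:
const decision = evaluate(policy, request);
// -> { allow: true, resolved: { vendor_allowed: true, region_ok: true, ... }, reason: "all checks passed" }
```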
After the Code step, you hit a Router. First route has a Filter—allow equals true. That's your happy path. The API call runs. Second route is the fallback—and this is critical—the fallback is not "do nothing." The fallback sends a Slack message to an approval channel with the request summary and two options: approve or reject. If someone approves, the call proceeds with an override flag logged. If they reject, the execution terminates and the reason gets written to a sheet.
Make's Router processes routes in order and supports a fallback route for bundles that don't match any condition. That's your default-deny architecture right there. If the allow filter doesn't pass, the bundle falls through to the Slack approval path. Nothing slips through silently.
Make also just shipped If-Else and Merge modules in open beta. If you need the allow and deny paths to rejoin later—say you want both paths to log the outcome to the same audit sheet—If-Else with a Merge node is cleaner than a Router with duplicated modules on each branch.
Jordan: In n8n, the pattern is almost identical. Code node before the API call. Same evaluator logic. The output gets attached to each item as a policy decision object—allow, resolved checks, reason.
After the Code node, you add an IF node. Condition: policy_decision.allow equals true. True branch goes to the API node. False branch goes to a Slack node—and n8n has a ready-made workflow template for exactly this kind of approval routing. The template sends a Slack DM with approve and reject buttons, logs the result to Google Sheets, and only the approve path continues the execution.
One thing to watch on n8n Cloud—since you can't import npm, make sure your evaluator is self-contained. No external dependencies. If you're self-hosting, you can install json-logic-js via npm and require it normally. But on Cloud, inline everything.
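Put together, the n8n Code node body looks roughly like this, assuming the policy object and the inline evaluate function are pasted above it, and assuming the incoming items carry the fields I'm naming here:

```javascript
// n8n Code node, "Run Once for All Items": attach a policy decision to every item.
// Assumes `policy` and the inline `evaluate` function are defined above this block.
return $input.all().map(item => {
  const request = {
    vendor: "openai",                          // or read it from the item
    region: item.json.user_region,             // illustrative field name
    purpose: item.json.purpose,                // illustrative field name
    data_classes: item.json.data_classes || []
  };
  const policy_decision = evaluate(policy, request);
  return { json: { ...item.json, policy_decision } };
});
```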
Jordan: Once the evaluator says allow, you still need to route the actual API call to the right place. This is where the vendor-specific details matter.
OpenAI now offers project-level data residency. Eligible API customers can choose US or Europe, and the domain prefix changes accordingly—us.api.openai.com versus eu.api.openai.com. Your policy file stores both domains. The evaluator confirms the geo is allowed. Then a routing helper after the gate constructs the base URL from the policy. One line of code.
Bedrock is actually the most elegant here. AWS gives you three routing modes with explicit residency semantics. In-Region—data stays strictly within your chosen region. Geographic—kept within a geo boundary like US or EU. And Global—no restriction. The model ID prefix tells Bedrock which mode to use. You prepend "us." for US geographic routing, "eu." for EU, "global." for global. So your policy stores the routing mode and the geo prefix map, and the evaluator checks that the model ID in the request matches the expected prefix for the user's geography. If an EU user's request has a model ID starting with "us."—deny. Instantly.
Vertex AI processes inference in the specific region where the request is made. Your policy stores the allowed region—say europe-west-four—and the evaluator confirms the request's target region matches. The endpoint URL itself contains the region, so a mismatch is structurally impossible if you build the URL from the policy.
Azure Foundry Models—what used to be Azure OpenAI—offers Global, DataZone, and Regional deployment types. DataZone keeps processing within US or EU only. Regional locks it to a single Azure region. Same pattern—your policy specifies the allowed deployment type, the evaluator checks it, the routing helper constructs the right target.
Four providers. Four slightly different mechanisms. One policy file that handles all of them.
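Here is what that routing helper can look like after the gate. It builds the target from the policy instead of hard-coding it, so a region mismatch physically cannot reach the wrong endpoint. Every path and field name below is illustrative and should come from your own policy.json:

```javascript
// Post-gate routing helper: construct the call target from the policy, never from hard-coded strings.
// Assumes the illustrative policy.json shape from earlier; adjust the paths to your own file.
function buildTarget(policy, request) {
  switch (request.vendor) {
    case "openai": {
      // Policy maps geography to the allowed domain, e.g. eu.api.openai.com for EU residency projects.
      const domain = policy.tools.openai.domains[request.region];
      return { url: `https://${domain}/v1/chat/completions` };
    }
    case "bedrock": {
      // The model ID must carry the prefix that matches the user's geography.
      const prefix = policy.tools.bedrock.model_prefixes[request.region];
      if (!prefix || !request.model.startsWith(prefix)) {
        throw new Error("deny: model ID prefix does not match geography");
      }
      return { region: policy.tools.bedrock.invoke_region, modelId: request.model };
    }
    case "vertex": {
      // The region is baked into the endpoint URL, so a mismatch is structurally impossible.
      const region = policy.tools.vertex.region; // e.g. "europe-west4"
      return { url: `https://${region}-aiplatform.googleapis.com/v1/projects/YOUR_PROJECT/locations/${region}/publishers/google/models/${request.model}:generateContent` };
    }
    case "azure": {
      // Policy pins the deployment type (Global, DataZone, Regional) and the named deployment.
      const azure = policy.tools.azure;
      return { url: `https://${azure.resource}.openai.azure.com/openai/deployments/${azure.deployment}/chat/completions` };
    }
    default:
      throw new Error("deny: vendor not in policy"); // fail closed
  }
}
```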
Jordan: Now—I can already hear the objection. "Jordan, real enterprises use Open Policy Agent. They use AWS Cedar. They have dedicated policy decision points with Rego files and formal verification. A JSON file with some boolean checks is not enterprise-grade policy enforcement."
And... yeah. That's fair. OPA and Cedar are purpose-built for organization-wide policy enforcement across hundreds of services. If you're working inside a company that has a platform team and a central policy engine, you should absolutely use it.
But you're not. You're a solo operator shipping client workflows on Make and n8n. You don't have a platform team. You don't have a Kubernetes cluster to run OPA as a sidecar. You have a canvas with modules on it and a client who needs to see proof that their data stays in the EU.
A JSON policy file with a Code-step evaluator is not OPA. It's not trying to be. What it is—is auditable, testable, portable, and shippable this week. You can hand the policy.json to a procurement reviewer and they can read it. You can run unit tests against it with sample payloads and show the results. You can copy it across clients and change only the values that differ.
And here's the thing—if you ever do graduate to a client that requires OPA, your JSON policy translates almost directly into Rego. The structure is the same. The rules are the same. You've already done the hard work of identifying what needs to be enforced. The migration is a format change, not a rethink.
Also—OpenAI's data residency has endpoint-specific limitations. Some features aren't available in every region. Vertex AI's older endpoints may not guarantee in-region ML processing. These caveats exist regardless of whether you're using OPA or a JSON file. The point is not the engine. The point is that the rules exist, they're checked at runtime, and when a combination isn't supported—the call doesn't go through. Fail closed. Always.
Jordan: Last piece. You need to prove this works—to yourself and to your clients. So you write unit tests. Sample request payloads with expected outcomes.
EU user hitting OpenAI EU with an allowed model and a necessary-processing purpose? Allow. US user sending a marketing message without consent on file? Deny. EU user trying to invoke Bedrock with a US-prefixed model ID? Deny. Vertex request targeting us-central-one when the policy says europe-west-four? Deny. Any request carrying a denied data class like payment card numbers? Deny. Regardless of everything else.
You run these tests by swapping the evaluator for a test harness that iterates through the cases and compares actual versus expected. In Make, drop it into a Code step. In n8n, same thing. Five test cases take about thirty seconds to run. And now you have a document—a test result—that says "these five scenarios were evaluated against this policy and produced the correct outcome." That's the artifact a procurement reviewer actually wants to see.
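Here is roughly what that harness looks like, written against the illustrative evaluator from earlier. Case three also needs the Bedrock model-prefix check wired into your evaluator before it will deny correctly:

```javascript
// Run each sample request through the evaluator and compare against the expected verdict.
// Assumes the `policy` object and `evaluate` function from the earlier sketches are in scope.
const cases = [
  { name: "EU user, OpenAI EU, allowed model, necessary processing",
    request: { vendor: "openai", region: "eu", purpose: "necessary_processing",
               data_classes: ["generic_text"] }, expect: true },
  { name: "US user, marketing message, no consent on file",
    request: { vendor: "openai", region: "us", purpose: "marketing",
               data_classes: ["generic_text"] }, expect: false },
  { name: "EU user, Bedrock model ID with a us. prefix",
    request: { vendor: "bedrock", region: "eu", model: "us.anthropic.claude-3-5-sonnet-20240620-v1:0",
               purpose: "necessary_processing", data_classes: [] }, expect: false },
  { name: "Vertex request targeting us-central1 when policy allows europe-west4 only",
    request: { vendor: "vertex", region: "us-central1", purpose: "necessary_processing",
               data_classes: [] }, expect: false },
  { name: "Any request carrying a denied data class",
    request: { vendor: "openai", region: "eu", purpose: "necessary_processing",
               data_classes: ["payment_card"] }, expect: false }
];

const results = cases.map(c => {
  const actual = evaluate(policy, c.request).allow;
  return { name: c.name, expected: c.expect, actual, pass: actual === c.expect };
});

// In n8n, return results.map(r => ({ json: r })); in Make, return the array and log it to your audit sheet.
return results;
```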
Jordan: So let's come back to where we started. Your compliance policy is a lie—not because you don't mean it, but because meaning it and enforcing it are two completely different things. A PDF that says "EU data stays in the EU" is a promise. A JSON file that gates every API call against that rule at runtime—that's proof. And proof is what closes deals.
One policy file. One evaluator. Fail closed by default. Slack approval as the only override. That's the entire architecture. It takes roughly forty-five minutes to set up for your first client, and maybe fifteen minutes to adapt for each client after that—because you're changing values in a JSON file, not rebuilding logic on a canvas.
Here's what I want you to do this week. Pick one client—the one with the strictest data handling requirements. Write their policy.json. Drop the evaluator into one scenario—just one, the one that touches an LLM. Run the five unit tests from the starter pack. When all five pass, you've got a governance layer that didn't exist yesterday. Then copy it to the next scenario. And the next.
And the next time a procurement reviewer asks you to prove your flows obey your own rules—you send them the policy file and the test results. Not a paragraph. Not a promise. A machine-readable document and the receipts that prove it works.
That's it for today. I'm Jordan. This is Headcount Zero. Go build something that fails closed.