Jordan: You should stop writing specs.
I mean it. That requirements document you spend ninety minutes drafting before every build — the one with the bullet points and the edge cases and the little notes to yourself about what the client probably meant when they said "make it automatic"? Stop writing it. You're doing the model's job. And you're doing it worse.
Three weeks ago I got a Loom from a client. Landscape company. The guy talked for four minutes and twelve seconds about what he wanted automated. Four minutes of rambling — half of it contradictory, some of it physically impossible, a chunk of it just... describing his morning routine for some reason. And at the end he said, "So yeah, just make it work."
Old me would have spent an hour translating that into a spec doc. Mapping inputs, outputs, constraints, acceptance criteria — all by hand, all in my head, all in a Google Doc that the client would never read and I'd rewrite halfway through the build anyway.
New me pasted the transcript into a prompt with a JSON Schema attached, hit send, and had a validated, build-ready spec in under two minutes. Every field typed. Every constraint explicit. Every acceptance criterion testable. And when the schema validator caught two missing fields and a contradictory latency target — it told me exactly what was wrong before I'd opened Make.
That's what we're building today. Not the spec. The system that writes the spec, then proves the spec is correct, then turns it into a checklist — all before you touch a single module.
Picture this. A new client sends you three paragraphs in Slack — half requirements, half wishful thinking, zero structure. You read it twice, think you understand it, and start building. Forty-five minutes in, you realize the input format they described doesn't match the API they're actually using. You rewrite. An hour later, you discover they need the output in Notion, not Airtable — buried in sentence two, easy to miss. You rewrite again. By the time you deliver, you've built the project three times and billed for one. Today you're getting the system that kills that cycle. An LLM-plus-JSON-Schema workflow: generate the spec with structured outputs, validate it against a strict schema before you build, and auto-create a Notion checklist from the result. Fifteen minutes. One workflow. No more rewriting specs you should never have written by hand.
So here's the gap nobody talks about. OpenAI, Anthropic, Google — they all have structured output modes now. You hand the model a JSON Schema, you say "give me back JSON that matches this," and it does. OpenAI calls it Structured Outputs. Gemini has its own version. Claude handles it too. And the output looks right. It parses. The keys are there. You think you're done.
You are not done.
Because "the model returned valid JSON" and "the spec is actually correct" are two completely different statements. The model will happily give you a latency target of zero-point-five seconds on a workflow that requires three sequential API calls. It'll mark PII handling as "none" when the client explicitly said "no personal data leaves the EU." It'll generate acceptance criteria that sound great — "leads appear in Notion within eight seconds" — but leave out the failure case entirely. What happens when enrichment times out? What happens when the email is disposable? The JSON is valid. The spec is broken.
That's the core argument today. Generation and enforcement are two separate steps, and you need both. Step one — the model drafts the spec using structured outputs. Step two — a JSON Schema validator checks that spec against your rules before it ever reaches your build. If it fails, the errors route to Notion as tasks. If it passes, the acceptance criteria and constraints auto-populate a build checklist. No manual translation. No guessing.
The generation side is the easier half. You need three things — a JSON Schema that defines what a valid spec looks like for your services, a system prompt that tells the model to act as an automation architect, and the client's raw input. Could be a Loom transcript, a Slack thread, a messy email. Doesn't matter. The schema constrains the output.
Now — which model? If your schema is small and your client input is short, any of them work. But when you're dealing with long client briefs plus a detailed schema, context window matters. GPT four-point-one gives you a million tokens through the API. Claude offers up to a million with extended context. Gemini's long-context variants go past a million. For most service specs, you won't come close to those limits, but it's worth knowing the ceiling exists — especially if you're batching multiple client inputs into one generation call.
One caveat here that I want to be upfront about. Structured output modes across all three providers support a subset of JSON Schema, not the full spec. The exact keywords that work vary by provider, and I haven't found a single centralized doc from OpenAI that lists every supported keyword across every model. So when you're designing your schema, keep it to the basics: type, enum, required, minimum, maximum, pattern, format. If you're reaching for exotic keywords like conditional schemas or complex references, test before you trust. The generation side is reliable for straightforward schemas. Just don't assume full JSON Schema compliance.
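To make that concrete, here's a minimal sketch of a spec schema that stays inside those basics. The field names are my illustration, not a standard; swap in whatever your service requests actually capture. One wrinkle worth knowing: OpenAI's strict mode requires additionalProperties set to false and every property listed in required, so the model can't silently drop a field it doesn't know.

```json
{
  "type": "object",
  "additionalProperties": false,
  "required": [
    "project_name", "trigger", "destination", "owner_email",
    "latency_budget_seconds", "pii_handling", "acceptance_criteria"
  ],
  "properties": {
    "project_name": { "type": "string" },
    "trigger": { "type": "string", "enum": ["webhook", "schedule", "form_submission"] },
    "destination": { "type": "string", "enum": ["notion", "airtable", "google_sheets"] },
    "owner_email": { "type": "string", "format": "email" },
    "latency_budget_seconds": { "type": "number", "minimum": 1, "maximum": 60 },
    "pii_handling": { "type": "string", "enum": ["none", "eu_only", "consent_required"] },
    "acceptance_criteria": { "type": "array", "items": { "type": "string" } }
  }
}
```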
The prompt structure I use is simple. System prompt tells the model it's a senior automation architect, instructs it to produce JSON matching the schema, and says to mark unknowns as explicit TODOs rather than inventing data. The user prompt drops in the client context, any constraints, and the schema itself. Then I run a second pass — a self-critique prompt — that checks for vague acceptance criteria, underspecified inputs, latency-budget conflicts, and contradictions. Two calls. Roughly twenty seconds total. And the output is dramatically better than what I used to write by hand.
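Here's a minimal sketch of that two-call pattern with the OpenAI Node SDK. The prompt wording, the draftSpec name, and the schema import are my placeholders; the response_format shape is OpenAI's documented Structured Outputs API for chat completions.

```typescript
import OpenAI from "openai";
import specSchema from "./spec-schema.json"; // the schema sketched above

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

const responseFormat = {
  type: "json_schema" as const,
  json_schema: { name: "service_spec", schema: specSchema, strict: true },
};

export async function draftSpec(clientInput: string): Promise<string> {
  // Pass one: draft the spec, constrained to the schema.
  const draft = await client.chat.completions.create({
    model: "gpt-4.1",
    messages: [
      {
        role: "system",
        content:
          "You are a senior automation architect. Produce a service spec as JSON " +
          "matching the attached schema. Mark unknowns as explicit TODOs; never invent data.",
      },
      { role: "user", content: clientInput },
    ],
    response_format: responseFormat,
  });

  // Pass two: self-critique the draft, still constrained to the same schema.
  const critique = await client.chat.completions.create({
    model: "gpt-4.1",
    messages: [
      {
        role: "system",
        content:
          "Review this spec for vague acceptance criteria, underspecified inputs, " +
          "latency-budget conflicts, and contradictions. Return the corrected spec.",
      },
      { role: "user", content: draft.choices[0].message.content ?? "" },
    ],
    response_format: responseFormat,
  });

  return critique.choices[0].message.content ?? "";
}
```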
Now the enforcement side. This is where it gets platform-specific, and this is where most people skip a step that costs them later.
If you're in Make — and a lot of you are — you cannot validate JSON Schema inside the platform. Full stop. Make's Custom Functions are capped at three hundred milliseconds of execution time. They're synchronous only. They cannot make HTTP requests. And they cannot load third-party libraries. So forget running Ajv or any schema validator inside a Make function. Make's own docs point you toward using an external JSON validator or the Create JSON module for well-formed JSON — but neither of those does schema validation. The JSON app parses and maps data. It doesn't enforce structure.
So in Make, the path is the HTTP module. You POST your generated spec and your schema to an external validator, and you branch on the response. Valid? Proceed to checklist generation. Invalid? Route the errors to Notion.
And the validator I recommend is a Cloudflare Worker. Here's why. Workers run at the edge — low latency, pennies per million requests on the paid plan, and the CPU time limits are generous enough now for lightweight validation. But — and this tripped me up the first time — you cannot use Ajv inside a Cloudflare Worker. Ajv relies on dynamic code generation, new Function calls, eval — and Workers' runtime blocks all of that by default. The package that works is called @cfworker/json-schema. It's built specifically for the Workers runtime. Same validation, no dynamic codegen. Deploy it once, call it from every scenario you run.
The Worker itself is maybe thirty lines of code. It accepts a POST with the spec, the schema, and a shared secret for auth. It validates. It returns a response with a valid boolean and an errors array — each error includes the path, the keyword that failed, and a human-readable message. That response is what your Make scenario branches on.
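Here's a sketch of that Worker using @cfworker/json-schema. The SHARED_SECRET binding and the x-validator-secret header name are my conventions, not anything standard; the Validator API is the package's real one. This endpoint is what your Make HTTP module POSTs to.

```typescript
import { Validator } from "@cfworker/json-schema";

export default {
  async fetch(request: Request, env: { SHARED_SECRET: string }): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("POST only", { status: 405 });
    }
    // Shared-secret auth so random traffic can't hit your validator.
    if (request.headers.get("x-validator-secret") !== env.SHARED_SECRET) {
      return new Response("Unauthorized", { status: 401 });
    }

    const { spec, schema } = (await request.json()) as { spec: unknown; schema: any };

    // @cfworker/json-schema compiles schemas without eval or new Function,
    // which is why it runs here where Ajv won't.
    const result = new Validator(schema).validate(spec);

    // The shape your scenario branches on: a valid boolean plus an
    // errors array with path, keyword, and message for each failure.
    return Response.json({
      valid: result.valid,
      errors: result.errors.map((e) => ({
        path: e.instanceLocation,
        keyword: e.keyword,
        message: e.error,
      })),
    });
  },
};
```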
If you're on n8n, you have more options. Self-hosted n8n lets you enable external npm modules in the Code node: you set an environment variable called NODE_FUNCTION_ALLOW_EXTERNAL, install Ajv on your instance, and now you can validate directly inside your workflow. No external service needed.
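A sketch of that Code node, assuming NODE_FUNCTION_ALLOW_EXTERNAL=ajv,ajv-formats (the second package is what makes format: email actually check anything) and assuming the previous node hands you the spec and schema under those two keys:

```js
// n8n Code node, "Run Once for All Items" mode.
const Ajv = require("ajv");
const addFormats = require("ajv-formats");

const ajv = new Ajv({ allErrors: true });
addFormats(ajv);

// Assumption: the prior node put the generated spec and the schema
// on the incoming item under these two keys.
const { spec, schema } = $input.first().json;

const validate = ajv.compile(schema);
const valid = validate(spec);

return [{
  json: {
    valid,
    errors: (validate.errors ?? []).map((e) => ({
      path: e.instancePath,   // e.g. "/project/owner_email"
      keyword: e.keyword,     // e.g. "format"
      message: e.message,     // e.g. must match format "email"
    })),
  },
}];
```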
There's also a community node — the NCNodes JSON Schema Validator — that wraps Ajv and gives you a drag-and-drop validation step with detailed error output. If you're self-hosted and comfortable installing community nodes, that's the fastest path to a working validator.
And of course, n8n's HTTP Request node works the same way as Make's HTTP module — you can call the same Cloudflare Worker from n8n if you want a single validator serving both platforms.
Okay, I can hear the objection. If n8n can validate in-platform with a Code node or a community node, why bother with an external microservice at all? Fewer moving parts. No Worker to maintain. No shared secret to manage. Just... validate and move on.
And honestly? If you're only running n8n, and you're self-hosted, and you're disciplined about versioning your Ajv dependency — that works. I'm not going to tell you it doesn't. For a single-platform shop, in-platform validation is simpler and it's fine.
But here's where it breaks down. The moment you're running Make and n8n — which a lot of us are, depending on the client — you now have validation logic in two places. Different implementations. Different failure modes. Different update cycles. And when you change your schema — which you will, because client requirements evolve — you're updating it in two workflows instead of one endpoint.
The external validator gives you portability. One Worker, one schema, every platform calls the same endpoint. Schema versioning happens in one place. And the validator's performance and uptime are isolated from your delivery flows — a bad deploy to your Worker doesn't break a running Make scenario, it just returns an error that your scenario already knows how to handle.
In Make specifically, you don't even have the choice. Custom Functions can't do it. External validation is the default, not the preference.
The last piece is what happens when validation fails. And this is the part that turns a nice idea into an actual system.
When the validator returns errors, each error has a path — like /project/owner_email — a keyword — like "format" — and a message — like "must match format email." You map those fields to Notion task properties. A required-field failure becomes a Blocker task. A type or enum mismatch becomes Major. A min-length or max-length issue becomes Minor. Every task gets the spec ID, the error path, and a suggested fix.
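In code form, that triage is a few lines. This sketch mirrors the mapping I just described; the toNotionTask name and the task fields are my own conventions:

```typescript
type ValidationError = { path: string; keyword: string; message: string };
type Severity = "Blocker" | "Major" | "Minor";

// required -> Blocker, type/enum -> Major, everything else -> Minor.
function toNotionTask(specId: string, err: ValidationError) {
  const severity: Severity =
    err.keyword === "required" ? "Blocker"
    : ["type", "enum"].includes(err.keyword) ? "Major"
    : "Minor"; // minLength, maxLength, pattern, format, ...

  return {
    title: `Fix spec ${specId}: ${err.path}`,
    severity,
    errorPath: err.path,
    suggestedFix: err.message,
  };
}
```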
When validation passes, you expand the spec's acceptance criteria and constraints into checklist tasks automatically. "Verify: leads appear in Notion within eight seconds." "Enforce: no PII sent to third-party enrichment without consent." Each one becomes a task in a Notion database with a status, a severity, and a due date. Your build checklist writes itself from the spec — and the spec was written by the model and verified by the validator.
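And if you're scripting that checklist step rather than using a native Notion module, here's a minimal sketch with the official Notion SDK. The token, database ID, and property names (Name, Status, Severity, Due) are placeholders that have to match your actual Notion database:

```typescript
import { Client } from "@notionhq/client";

const notion = new Client({ auth: process.env.NOTION_TOKEN });

// Property names must match your database's schema exactly; placeholders here.
async function addChecklistTask(criterion: string, severity: string, dueDate: string) {
  await notion.pages.create({
    parent: { database_id: process.env.NOTION_DB_ID! },
    properties: {
      Name: { title: [{ text: { content: criterion } }] },
      Status: { select: { name: "To Do" } },
      Severity: { select: { name: severity } },
      Due: { date: { start: dueDate } },
    },
  });
}

// e.g. addChecklistTask("Verify: leads appear in Notion within eight seconds", "Major", "2025-01-15");
```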
That's the full loop. Client sends messy paragraphs. Model generates a structured spec. Validator enforces the schema. Failures become Notion tasks with specific fix instructions. Passes become build checklists with testable criteria. You open Notion, and the project is already scoped, gated, and ready to build. Roughly twelve minutes of setup for a system you'll use on every client engagement going forward.
That landscape company client — the four-minute Loom guy? His spec came back with three validation errors on the first pass. One was a missing auth method on an input. One was an acceptance criterion that was literally just the word "fast." And the third was a constraint that contradicted his latency target — he wanted sub-two-second response times on a flow that required a Clearbit enrichment call, which alone takes three to five seconds. The validator caught all three. I fixed them in the spec before I'd built a single node. Total time from Loom to validated, build-ready checklist in Notion — under four minutes.
That's the shift. You're not writing specs anymore. You're reviewing them. The model does the drafting. The schema does the enforcing. And you do the thinking — which is the part you should have been doing all along.
If you want to skip the setup and start with a working schema, the Schema-First Spec Kit is on the Resources page. It's the JSON Schema I use for service requests, the Notion checklist template, and the prompt pack with the self-critique pass built in. Copy it, swap the placeholders, and you're running.
One thing to do this week. Take your last client intake — whatever messy message kicked off your most recent build — and run it through this workflow. Generate the spec, validate it, and see what the validator catches. I guarantee it finds something you missed. And that something is the rework you're not going to do next time.
I'm Jordan. This is Headcount Zero. Go build it.