From LLM Luck to Structurally Guaranteed: One Ticket Across Four Architectural Eras
Seven pipeline runs, one ticket, four architectural eras. Per-test cost dropped from $0.385 to $0.074 by replacing LLM guesswork with structural derivation.
Swarm parallelism is a throughput solution applied to a reliability problem. Probabilistic verification of probabilistic output does not converge.
What shipped in the three weeks after the 248-run hallucination ceiling: removing the LLM from computable decisions and validating everything else.
When a vague ticket has two equally-plausible interpretations, an early deterministic stage stops the pipeline and emits a structured clarification request before the coding loop starts.
Each stage in the pipeline runs against its own model and vendor config. How that design enables per-stage cost control, model swaps, and vendor flexibility.
The run archive is where pass/fail becomes diagnostic: per-stage operation logs, reasoning traces, and a correlation token that spans every stage.
How per-stage retry budgets, wall-clock timeouts, and a global token cap keep any stage from running indefinitely, with the Debugger as the most complex case.
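A minimal sketch of how such layered budgets might compose — names, limits, and the single-class shape are hypothetical, not the pipeline's actual implementation:

```python
import time

class BudgetExceeded(Exception):
    pass

class StageRunner:
    """Enforces a per-stage retry budget, a wall-clock timeout,
    and a shared global token cap (all limits illustrative)."""

    def __init__(self, max_retries, timeout_s, token_cap):
        self.max_retries = max_retries
        self.timeout_s = timeout_s
        self.token_cap = token_cap    # global, shared across stages
        self.tokens_spent = 0

    def run(self, attempt_fn):
        deadline = time.monotonic() + self.timeout_s
        for attempt in range(1, self.max_retries + 1):
            if time.monotonic() > deadline:
                raise BudgetExceeded("wall-clock timeout")
            ok, tokens = attempt_fn(attempt)  # one stage attempt
            self.tokens_spent += tokens
            if self.tokens_spent > self.token_cap:
                raise BudgetExceeded("global token cap")
            if ok:
                return attempt
        raise BudgetExceeded("retry budget exhausted")
```

Whichever limit trips first ends the stage; nothing can run indefinitely, which matters most for a Debugger that loops on test failures.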
The mechanics behind a binding validator: why synchronous pre-commit timing, structured rejections, and retry folding are each individually load-bearing.
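One way those three load-bearing pieces fit together — a sketch with invented validation rules and a hypothetical rejection schema, not the real validator:

```python
def validate(patch):
    """Binding pre-commit check. Returns (ok, rejections) where each
    rejection is structured data, not free text."""
    errors = []
    if not patch.get("files"):
        errors.append({"rule": "nonempty", "detail": "patch touches no files"})
    for path in patch.get("files", []):
        if path.startswith("/"):
            errors.append({"rule": "relative-paths", "detail": path})
    return (not errors, errors)

def run_with_retries(generate, max_attempts=3):
    """Retry folding: each structured rejection is fed back into the
    next generation attempt instead of being logged and dropped."""
    feedback = None
    for _ in range(max_attempts):
        patch = generate(feedback)
        ok, rejections = validate(patch)   # synchronous: before commit
        if ok:
            return patch                   # only validated output proceeds
        feedback = rejections              # fold rejection into next attempt
    raise RuntimeError("validator rejected all attempts")
```

The timing (before commit, not after), the structure (machine-readable rejections), and the folding (rejections become the next prompt's input) each do separate work; drop any one and the gate stops binding.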
When the model has a strong prior, naming the failure mode in the prompt doesn't prevent it. Prompt rules are advisory; validators are binding.
The data model behind the symbol registry: per-symbol records, file-level hashes, call-graph edges, and the invalidation strategy that keeps it current.
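The shape that description implies might look something like this — field names are hypothetical, and the invalidation rule shown (re-extract when a file's on-disk hash drifts from the hash recorded with its symbols) is one plausible reading:

```python
from dataclasses import dataclass, field

@dataclass
class SymbolRecord:
    name: str         # symbol name
    path: str         # full path of the defining file
    kind: str         # e.g. "function", "class"
    file_hash: str    # hash of the file when the symbol was extracted

@dataclass
class Registry:
    symbols: dict = field(default_factory=dict)      # name -> SymbolRecord
    file_hashes: dict = field(default_factory=dict)  # path -> current hash
    call_edges: set = field(default_factory=set)     # (caller, callee) pairs

    def stale_files(self):
        """Files whose current hash no longer matches the hash recorded
        with their symbols must be re-extracted before the next run."""
        return {
            rec.path
            for rec in self.symbols.values()
            if self.file_hashes.get(rec.path) != rec.file_hash
        }
```

File-level hashing keeps invalidation cheap: one edited file invalidates only its own symbols, not the whole registry.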
Looking up symbols by filename instead of full path pulls every `index.ts` in the project into the agent's context. One line changed. 20 results down to 1.
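The failure mode is easy to reproduce in miniature (a sketch; the paths and lookup function are invented):

```python
def lookup(registry, key, by="path"):
    """Find definition files by full path, or (the bug) by basename."""
    if by == "path":
        return [p for p in registry if p == key]
    # Buggy variant: matching the filename alone pulls in every
    # index.ts in a multi-package repo.
    return [p for p in registry if p.rsplit("/", 1)[-1] == key]

# A monorepo where every package has its own index.ts
registry = [f"packages/pkg{i}/src/index.ts" for i in range(20)]
```

Twenty matches flood the agent's context; one match is the answer. The fix is the one-line switch from basename matching to full-path matching.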
Three properties of a Lego instruction set, mapped to an AI coding pipeline: why manifest quality matters more than builder quality.
A Bernoulli model predicted 36% first-pass success across 248 pipeline runs. Measured: 21%. The gap explains why per-field hallucination fixes have a ceiling.
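To see how a Bernoulli model yields a number like 36%: if a run succeeds only when every independently-emitted field is correct, first-pass success is the product of per-field accuracies. The field count and accuracy below are illustrative, not the article's actual parameters:

```python
# Hypothetical: 20 independent fields, each emitted correctly 95% of the time.
per_field_accuracy = 0.95
n_fields = 20

# Independence assumption: run succeeds iff every field is correct.
predicted = per_field_accuracy ** n_fields   # roughly 0.36
```

Under this model, pushing any single field's accuracy up barely moves the product — which is the ceiling the teaser refers to.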
Every field a Planner emits that the codebase already knows is a dice roll. Machine extraction replaces those dice rolls with deterministic lookups.
Why tracking known architectural gaps with specific close conditions is more useful than a backlog, and what makes each entry work.
Fixture-first development as an early warning system for AI pipelines: the first real-project run confirmed three known gaps instead of discovering new ones.
Using `claude -p` in a pipeline? The model has bash access you never granted. Each tool call re-sends your full context. One sentence cuts token spend by 52%.
A ticket that passed twice failed four times at lower model effort, exposing four structural pipeline bugs the higher-effort run had masked.
Same ticket, same pipeline config, different result two days apart. Why the first run passing was not confirmation that the constraint was enforced.
The pipeline committed code before branch isolation existed. The risk was real, named, given a close condition. That is what makes it different from a shortcut.
On attempt 3, the Coder tried to write a file that was not in the manifest. The write gate stopped it before anything hit disk. This is what it is for.
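A write gate of that shape can be sketched in a few lines — hypothetical names, and the real gate presumably checks more than path membership:

```python
class ManifestViolation(Exception):
    pass

def gated_write(manifest, writes, path, content):
    """Refuse any write whose path is not in the manifest,
    before content reaches disk."""
    if path not in manifest:
        raise ManifestViolation(f"{path} is not in the manifest")
    writes[path] = content   # stand-in for the actual disk write

manifest = {"src/app.py", "tests/test_app.py"}
```

The gate sits between the Coder's output and the filesystem, so an out-of-manifest write fails loudly on the attempt rather than silently after the fact.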
The Debugger receives the test failure and the code on disk, not the Coder's reasoning. That isolation is not a constraint. It is the design.
Tree-Sitter tells you where a symbol is defined. It cannot tell you where it is called. That gap cost one pipeline run 33,000 tokens to find out.
Not a model capability problem. An agent with the wrong codebase version produces output that is plausible but wrong in ways that are hard to catch.