AI Engineering · Tarun Jain · March 10, 2026 · LinkedIn

One practical way I control AI coding agents

Long prompts and giant instruction files are not enough. When AI is writing code, control comes from explicit phases, strict scope boundaries, and review loops that no one can bypass.

Why Prompting Alone Fails

In my previous post I wrote about architectural guardrails and why layers in a system must stay isolated. The obvious follow-up question is how to enforce that when AI is writing code.

One thing I learned the hard way is that long prompts and large instruction files are not enough.

LLMs tend to pay close attention to the first instruction and the last instruction, while constraints buried in the middle often get softened, skipped, or reinterpreted.

So simply writing a long prompt that says "follow all these rules" does not work reliably.

Break Development Into Explicit Roles

Instead of relying on one smart agent, I now break development into smaller phases with strict scope boundaries, guardrails, and explicit review loops.

Every non-trivial change passes through structured roles.

  • Planner: understands the problem, analyzes the codebase, and produces a plan. It never writes code.
  • Plan Review Pass: a second agent challenges the plan for edge cases, hidden dependencies, and architectural conflicts.
  • Manual Plan Gate: I review and approve the plan before any code is written.
  • Implementer: executes the approved plan and edits files. If the plan breaks, it must return to planning.
  • Reviewer: checks the diff for architectural violations, boundary leaks, and unintended changes. It cannot modify code.
  • Second Review Pass: another agent challenges the review to catch missed issues.
  • Tester: runs tests and adds missing coverage when needed. It never changes production logic.
  • Manual Final Review: I confirm the final change matches the approved plan.
  • Committer: stages the change and prepares the PR.
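One way to make these role boundaries machine-checkable is to encode each role's permissions as data, so an orchestrator can reject out-of-scope actions rather than trusting a prompt. This is a minimal sketch under assumed names (the `Role` type, role names, and `check_action` are illustrative, not from any real framework):

```python
from dataclasses import dataclass

# Hypothetical encoding of the roles above: each role declares what it may do.
@dataclass(frozen=True)
class Role:
    name: str
    may_edit_code: bool
    may_edit_tests: bool
    requires_human_gate: bool

ROLES = [
    Role("planner",         may_edit_code=False, may_edit_tests=False, requires_human_gate=False),
    Role("plan_reviewer",   may_edit_code=False, may_edit_tests=False, requires_human_gate=False),
    Role("plan_gate",       may_edit_code=False, may_edit_tests=False, requires_human_gate=True),
    Role("implementer",     may_edit_code=True,  may_edit_tests=True,  requires_human_gate=False),
    Role("reviewer",        may_edit_code=False, may_edit_tests=False, requires_human_gate=False),
    Role("second_reviewer", may_edit_code=False, may_edit_tests=False, requires_human_gate=False),
    Role("tester",          may_edit_code=False, may_edit_tests=True,  requires_human_gate=False),
    Role("final_review",    may_edit_code=False, may_edit_tests=False, requires_human_gate=True),
    Role("committer",       may_edit_code=False, may_edit_tests=False, requires_human_gate=False),
]

def check_action(role: Role, action: str) -> bool:
    """Return True only if the action is inside the role's declared scope."""
    if action == "edit_code":
        return role.may_edit_code
    if action == "edit_tests":
        return role.may_edit_tests
    return True  # read-only actions (analyze, comment) are always allowed

# The tester may add tests but never touch production logic:
tester = ROLES[6]
assert check_action(tester, "edit_tests") is True
assert check_action(tester, "edit_code") is False
```

The point of the data-driven form is that the constraint lives outside the model's context window, so it cannot be softened or reinterpreted mid-conversation.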

The Feedback Loops Must Be Real

The workflow only works because the feedback loops are explicit.

Implementer goes back to Planner on plan conflict. Reviewer sends issues back to Implementer. Tester sends failures back to Implementer.

No phase can bypass another.
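These loops can be made non-bypassable by expressing them as an explicit transition table: each phase may only hand off to its declared successors, and any other hand-off fails loudly. A sketch, with illustrative phase and outcome names:

```python
# Hypothetical transition table for the feedback loops described above.
TRANSITIONS = {
    "planner":     {"ok": "plan_review"},
    "plan_review": {"ok": "implementer", "issues": "planner"},
    "implementer": {"ok": "reviewer",    "plan_conflict": "planner"},
    "reviewer":    {"ok": "tester",      "issues": "implementer"},
    "tester":      {"ok": "committer",   "failures": "implementer"},
}

def next_phase(current: str, outcome: str) -> str:
    """Resolve the next phase; an undeclared hand-off is a bypass attempt."""
    allowed = TRANSITIONS[current]
    if outcome not in allowed:
        raise ValueError(f"{current!r} may not transition on {outcome!r}")
    return allowed[outcome]

# A test failure sends work back to the implementer, never straight to commit:
assert next_phase("tester", "failures") == "implementer"
```

Because the table is the only path forward, "tester skips review and commits" is not a prompt-engineering failure mode; it is simply unrepresentable.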

Guardrails That Matter

The workflow is protected by a few simple but enforceable guardrails.

  • Architectural boundaries must hold. Core processing logic should not depend on policy or UI layers.
  • Interfaces must remain stable. If a shared API or data structure changes, every consumer must be updated.
  • Changes must stay scoped. Each PR should represent one logical change, not a collection of unrelated fixes.

The Trade-Off Is Deliberate

Yes, this process adds some overhead. But the trade-off is deliberate: slightly slower decisions, far fewer architectural mistakes.

This is how strong engineering teams have always worked. The difference is that AI agents need the boundaries made explicit, or they will ignore them.

As AI writes more code, controlling who is allowed to do what becomes just as important as controlling the code itself.