Your Agents Need Better Instructions Than Your Team Did
When I managed teams of people, I could be somewhat vague. “Handle the edge cases.” “Use your best judgment.” “Make it work.” Humans are remarkably good at filling in gaps — they infer intent, draw on experience, and ask clarifying questions when something doesn’t make sense.
Agents don’t do that. Not yet. Not reliably.
An agent will do exactly what you tell it to do, with exactly the context you give it, following exactly the boundaries you define. If your instructions are ambiguous, the output is ambiguous. If your component boundaries are fuzzy, the agent bleeds across them. If your documentation is incomplete, the agent works with incomplete understanding.
This isn’t a weakness of the technology. It’s a forcing function for clarity.
Every time I build an agentic workflow, I’m forced to be more explicit than I ever had to be with a human team. What exactly is this component responsible for? What are its inputs and outputs? What should it do when it encounters something unexpected? Where does its authority end?
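Answering those questions explicitly can be as simple as writing them down as a typed contract. Here is a minimal Python sketch of that idea; the names (`AgentContract`, the example fields, the `summarizer` component) are hypothetical illustrations, not the API of any particular framework:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentContract:
    """Forces each architecture question to be answered before anything is built."""
    responsibility: str            # what exactly is this component responsible for?
    inputs: tuple[str, ...]        # everything the agent is given
    outputs: tuple[str, ...]       # everything it is expected to produce
    on_unexpected: str             # what should it do when something surprises it?
    may_not: tuple[str, ...] = () # where does its authority end?


# Example: a narrowly scoped summarization component.
summarizer = AgentContract(
    responsibility="Summarize one support ticket into a two-sentence digest.",
    inputs=("ticket_body",),
    outputs=("digest",),
    on_unexpected="Return the literal string 'NEEDS_HUMAN' instead of guessing.",
    may_not=("modify the ticket", "contact the customer"),
)
```

Nothing about this is sophisticated. The point is that every field is mandatory: you cannot instantiate the contract without deciding, up front, what the component owns and where it stops.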
Those are architecture questions. And the discipline of answering them explicitly, upfront, before anything gets built — that’s what makes the difference between an agent that’s useful and an agent that’s a liability.
The bar for clarity just went up. That’s not a problem. That’s an opportunity to finally build things right.