Should This Be an Agent?
Most processes aren't one thing. Some parts suit an agent, some suit code, some need a person. This assessment helps you see which is which.
How much thinking does this process involve?
Some processes follow clear rules every time. Others need someone to read the situation and make a call. AI agents are most useful in the middle, where the work isn’t routine enough to automate with rules, but it’s predictable enough that a system can learn to handle it.
Think about: Could you write a step-by-step checklist that covers every scenario? If yes, you probably don’t need AI. If every case is different, that’s where an agent might help.
The middle is the sweet spot: if every case follows the same rules, automation handles it without AI, and if every case needs deep expertise, you need a person. Agents work best where there's a pattern but each case needs some thinking through.
How variable are the inputs?
If every case looks the same, a simple system can handle it. AI earns its place when each case is different: different formats, different wording, different situations that can’t be captured in a template.
Think about: If you built a form with dropdown menus, could it capture every possible input? If not, that’s where AI starts to add value.
How much of the process is already documented?
The bus factor test: if your expert left tomorrow, could someone else do their job from the docs? Undocumented processes are full of tribal knowledge that’s hard to automate and harder to test.
Think about: Is there a process map, decision tree, or even a checklist? Or does it all live in someone’s head?
Do you have access to the data the agent would need?
This is one of the most common reasons AI projects stall. If the data your agent needs is scattered across systems, incomplete, or hard to access, that’s a problem to solve before the build starts, not during it.
Think about: Where does the data live? Can you access it? Is it in a state you could work with, or would it need significant cleaning or restructuring first?
Is there a human who currently does this and can validate the agent’s work?
You need someone who knows the process well enough to spot when the agent gets it wrong. Without that person, errors go unnoticed and trust erodes. With them, you can catch problems early and improve the agent over time.
Think about: Who would check the agent’s work? Would they spot it if the agent got something subtly wrong?
What’s the volume?
AI agents incur a cost every time they run. At low volumes, the cost of building and running an agent usually outweighs what you'd save. Higher volume means the investment is more likely to pay back.
Think about: How many times a day or week does this process run? If it’s only a handful, the economics may not work.
Does the business case hold up at current AI costs?
This doesn’t need to be a full business case, but it does need to be more than a hunch. If you haven’t compared what the agent would cost to run against what you’re spending now, you’re investing based on excitement rather than evidence.
Think about: Can you roughly estimate what each AI interaction would cost, multiplied by how often it runs, compared to the current human cost?
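That rough estimate can be put in a few lines of arithmetic. The sketch below is illustrative only: every figure in it is a placeholder assumption, not a benchmark, so substitute your own numbers for cost per run, volume, and staff time.

```python
# Rough break-even sketch: agent running cost (including human review time)
# versus the current fully manual process. All numbers are placeholders.

cost_per_run = 0.15     # assumed model + infrastructure cost per agent run ($)
runs_per_month = 2000   # assumed process volume
review_minutes = 2      # assumed human time to spot-check each agent run
human_minutes = 15      # assumed human time per case today
hourly_rate = 40.0      # assumed loaded cost of the person's time ($/hour)

# Agent cost still includes a human in the loop checking the output.
agent_monthly = runs_per_month * (cost_per_run + review_minutes / 60 * hourly_rate)
manual_monthly = runs_per_month * (human_minutes / 60 * hourly_rate)

print(f"Agent (incl. review): ${agent_monthly:,.0f}/month")
print(f"Current manual cost:  ${manual_monthly:,.0f}/month")
print(f"Estimated difference: ${manual_monthly - agent_monthly:,.0f}/month")
```

Note that the agent line still carries a human-review cost, per the validation question above; if review time approaches the original handling time, the case collapses regardless of per-run price.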
What happens when it gets it wrong?
Think about what could go wrong and what the impact would be. If a mistake is easy to fix, you’ve got room to learn as you go. If a mistake has serious consequences, you’ll need more oversight and checks in place from the start.
Think about: What’s the most likely thing to go wrong? Who would notice, and how quickly could it be fixed?
Worth knowing: Higher consequences don't rule agents out, but they do mean you'll need more checks in place and someone reviewing the output. Think about what would make you pull the plug, and agree on that before you start building.
Does leadership understand this won’t be right 100% of the time?
Teams that align on 'good enough' before building tend to fare far better. The projects that make it to production are usually the ones whose leadership understood from the start that AI won't be right every time.
Think about: Has anyone asked: ‘What error rate would we accept?’ If that conversation hasn’t happened, start there before building anything.