Agentic AI Patterns
From single agents to orchestrated teams — the architecture patterns behind AI that acts.
The patterns
The building block
Single Agent
"One employee with clear instructions and access to tools"
A single agent receives a goal, breaks it into steps, and uses tools to accomplish them — one action at a time. It reads data, calls APIs, writes files, and checks its own work. The key difference from a chatbot: it doesn't just answer questions, it takes actions. It has a loop: think, act, observe, repeat.
Start here. Most agent use cases don't need multi-agent orchestration. A single well-prompted agent with the right tools handles 80% of automation tasks. Over-engineering with multiple agents when one would do is the most common mistake in agentic AI.
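That think–act–observe loop fits in a few lines. Below is a hedged sketch, not a real framework: `llm_decide` is a placeholder for an actual model call, and the single `read_file` tool is a stub.

```python
# Minimal sketch of a single agent's loop: think, act, observe, repeat.
# `llm_decide` and `read_file` are hypothetical stand-ins; a production
# agent would call an LLM here and wire in real tools.

def read_file(path: str) -> str:
    """Stub tool: pretend to read a file."""
    return f"<contents of {path}>"

TOOLS = {"read_file": read_file}

def llm_decide(goal: str, history: list) -> dict:
    """Placeholder planner. A real agent asks the model what to do next."""
    if not history:
        return {"action": "read_file", "args": {"path": "report.txt"}}
    return {"action": "finish", "result": "done"}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = []
    for _ in range(max_steps):
        step = llm_decide(goal, history)                      # think
        if step["action"] == "finish":
            return step["result"]
        observation = TOOLS[step["action"]](**step["args"])   # act
        history.append((step, observation))                   # observe, repeat
    return "stopped: max steps reached"
```

The `max_steps` cap matters in practice: it is the simplest guardrail against an agent looping forever.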
The team
Multi-Agent Orchestration
"A project manager delegating to specialists"
An orchestrator agent receives the goal and delegates sub-tasks to specialist agents — a researcher, a writer, a reviewer, a coder. Each specialist has its own system prompt, tools, and context window. The orchestrator coordinates, merges results, and handles failures. Like a real team: the manager doesn't do the work, they make sure the right people do.
Use multi-agent when the task genuinely requires different expertise or when context windows aren't large enough for a single agent. The coordination cost is real — message passing, error handling, state management. Don't split into agents what one agent can handle sequentially.
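The shape of that delegation can be sketched with the specialists reduced to plain functions. In production each would be its own agent with its own system prompt, tools, and context window; the names here are illustrative, not a real framework API.

```python
# Orchestrator delegating to specialist "agents" (stubbed as functions).

def researcher(task: str) -> str:
    return f"notes on {task}"

def writer(task: str, notes: str) -> str:
    return f"draft of {task} using {notes}"

def reviewer(draft: str) -> tuple[str, str]:
    # A real reviewer agent could return ("revise", feedback) instead.
    return ("ok", draft)

def orchestrate(goal: str) -> str:
    """The manager doesn't do the work: it routes, merges, and retries."""
    notes = researcher(goal)
    draft = writer(goal, notes)
    status, result = reviewer(draft)
    if status != "ok":                                  # simple failure handling
        result = writer(goal, notes + " (revised per review)")
    return result
```

Even in this toy version the coordination cost is visible: the orchestrator owns the message passing, the retry logic, and the merged state.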
The safety net
Human-in-the-Loop
"A junior employee who escalates to their manager for big decisions"
The agent works autonomously for routine tasks but pauses at defined checkpoints: before sending an email, before executing a payment, before deleting data. A human reviews, approves or rejects, and the agent continues. The boundaries are explicit: below this threshold, act freely; above it, ask.
This isn't a limitation — it's a feature. The highest-value agent deployments all have human checkpoints for irreversible or high-stakes actions. The goal isn't full autonomy; it's appropriate autonomy. Start with tight guardrails and widen them as trust builds.
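A checkpoint like "pause before executing a payment" can be a one-line gate. The threshold and the `approve` callback below are hypothetical; a real deployment would route the approval to a ticket queue or chat message rather than `input()`.

```python
APPROVAL_THRESHOLD_EUR = 1_000  # hypothetical policy boundary

def execute_payment(amount: float, approve=input) -> str:
    """Below the threshold: act freely. Above it: ask a human first."""
    if amount <= APPROVAL_THRESHOLD_EUR:
        return "executed"
    answer = approve(f"Approve payment of {amount} EUR? [y/n] ")
    return "executed" if answer.strip().lower() == "y" else "rejected"
```

Injecting `approve` as a parameter also makes the guardrail testable: you can assert the agent really does stop above the line.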
The oversight layer
Human-on-the-Loop
"A factory supervisor watching the production line from the control room"
Unlike human-in-the-loop (where the agent pauses and waits for approval), human-on-the-loop means the agent acts autonomously while a human monitors the output. The human can intervene, correct, or shut things down — but they don't block each action. Think of it as supervision rather than co-signing. The agent sends periodic summaries, flags anomalies, and the human reviews asynchronously.
This is where mature agent deployments land. You start with human-in-the-loop (tight guardrails), measure error rates, and gradually shift to human-on-the-loop as confidence builds. The economics are better: the human reviews a dashboard of 50 completed actions rather than approving each one individually. Reserve in-the-loop for truly irreversible actions.
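The difference from in-the-loop shows up directly in code: the agent never blocks, it only records. A toy sketch, with anomaly detection reduced to a keyword match:

```python
from collections import deque

# The "dashboard": a rolling log of the last 50 completed actions.
review_queue = deque(maxlen=50)

def act_autonomously(action: str) -> str:
    """On-the-loop: execute immediately, log for later review."""
    result = f"done: {action}"
    review_queue.append({
        "action": action,
        "result": result,
        "anomaly": "delete" in action,  # crude stand-in for real anomaly detection
    })
    return result

def review_dashboard() -> list:
    """Human scans the batch asynchronously; only anomalies need attention."""
    return [entry for entry in review_queue if entry["anomaly"]]

for a in ["send reminder", "update CRM record", "delete stale records"]:
    act_autonomously(a)
```

Three actions execute without waiting; the human later sees one flagged entry instead of three approval prompts.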
Universal connectivity
Tool Use via MCP
"USB-C for AI agents — connect once, use everything"
An agent without tools is just a chatbot. Tools let agents read databases, call APIs, search the web, write files, and interact with any system. MCP (Model Context Protocol) standardises this: instead of building a custom integration for every tool, you expose tools via a single protocol. Any MCP-compatible agent can use any MCP-compatible tool.
MCP eliminates the N×M integration problem. Without it, 5 agents × 4 tools = 20 custom integrations. With MCP, it's 5 + 4 = 9 implementations. The protocol is the multiplier that makes agentic AI practical at enterprise scale.
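The arithmetic is easy to verify (a toy calculation, not part of the MCP specification):

```python
def integration_count(agents: int, tools: int, use_mcp: bool) -> int:
    """Point-to-point wiring grows as N*M; a shared protocol grows as N+M."""
    return agents + tools if use_mcp else agents * tools

print(integration_count(5, 4, use_mcp=False))  # 20 custom integrations
print(integration_count(5, 4, use_mcp=True))   # 9 protocol implementations
```

The gap widens fast: at 20 agents and 20 tools it is 400 integrations versus 40.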
Revenue recovery agent
Revenue Recovery Agent
"An accounts receivable clerk who never sleeps"
Here's a concrete example that ties all the patterns together. A revenue recovery agent monitors overdue invoices, checks the CRM for context (are they a key account? is there a dispute?), drafts an appropriate follow-up email, escalates to a human for high-value accounts, and logs every action. It runs on a schedule, handles edge cases, and gets better with feedback.
This single agent combines: tool use (CRM, email, database), human-in-the-loop (escalation for high-value accounts), and a clear success metric (recovered revenue). It's not a demo — it's a pattern that applies to any process where humans currently chase status and send follow-ups.
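One cycle of that agent can be sketched as follows. The threshold, field names, and the injected `crm_lookup`/`send_email`/`log` callables are all hypothetical; the point is how tool use and the human checkpoint combine in one decision path.

```python
HIGH_VALUE_EUR = 10_000  # hypothetical escalation threshold

def process_invoice(invoice: dict, crm_lookup, send_email, log) -> str:
    """One pass over a single overdue invoice."""
    context = crm_lookup(invoice["customer"])             # tool use: CRM
    if context.get("dispute"):
        log(invoice, "skipped: open dispute")             # don't chase disputed invoices
        return "skipped"
    if invoice["amount"] > HIGH_VALUE_EUR or context.get("key_account"):
        log(invoice, "escalated to human")                # human-in-the-loop
        return "escalated"
    send_email(invoice["customer"],                       # tool use: email
               f"Reminder: invoice of {invoice['amount']} EUR is overdue.")
    log(invoice, "reminder sent")                         # every action logged
    return "reminded"
```

Passing the tools in as parameters keeps the agent logic testable and makes swapping a stub CRM for a real one a one-line change.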
Decision framework
Is the task well-defined with clear success criteria?
Yes → agent candidate. No → keep it as a copilot/assistant.
Can a failure be safely reversed?
No → add human-in-the-loop approval before irreversible actions.
Does it need access to multiple systems?
Yes → use MCP for tool connectivity. Avoid custom integrations.
Is the volume high enough to justify automation?
Calculate: (time saved per task) × (frequency) vs (build + maintenance cost).
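That break-even check fits in one function. The numbers in the usage note are illustrative only; plug in your own rates and horizon.

```python
def automation_net_value(minutes_saved_per_task: float, tasks_per_month: float,
                         hourly_rate: float, build_cost: float,
                         monthly_maintenance: float,
                         horizon_months: int = 12) -> float:
    """Positive result: automation pays for itself over the horizon."""
    hours_saved = minutes_saved_per_task * tasks_per_month * horizon_months / 60
    savings = hours_saved * hourly_rate
    total_cost = build_cost + monthly_maintenance * horizon_months
    return savings - total_cost
```

For example: 10 minutes saved per task, 200 tasks a month, at 50/hour, against a 5,000 build and 200/month maintenance, nets 12,600 over a year.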