Agentic AI Patterns: From Chatbots to Autonomous Workers
The architecture patterns behind AI that takes action, not just answers questions. Single agents, multi-agent orchestration, human-in-the-loop, and a worked example.
The most important shift in enterprise AI isn’t a new model. It’s a new verb. We’ve gone from AI that answers to AI that acts. A chatbot waits to be asked a question. An agent has a goal, makes decisions, uses tools, and checks its own work. That’s a different thing entirely.
I’ve been watching this transition closely, both through my work at Akamai and across the broader EMEA technology landscape. The patterns that make agentic AI work in production are becoming clear. They’re not complicated. But getting them wrong is expensive.
Pattern 1: The single agent
Start here. Most teams that jump to multi-agent orchestration are over-engineering.
A single agent receives a goal, breaks it into steps, and uses tools to accomplish them. It reads databases, calls APIs, writes files, and checks its work. The key mechanism is a loop: think, act, observe, repeat. The agent doesn’t just predict the next word. It executes a plan, evaluates the result, and adjusts.
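The loop can be sketched in a few lines. This is a minimal illustration, not any particular framework's API: `planner` stands in for the LLM call, and the tools are plain functions. All names here are mine, chosen for the example.

```python
# Minimal sketch of the think-act-observe loop behind a single agent.
# `planner` stands in for an LLM call; here it is a scripted stub so the
# loop is runnable end to end. All names are illustrative.

def run_agent(goal, tools, planner, max_steps=10):
    history = []                                             # observations so far
    for _ in range(max_steps):
        step = planner(goal, history)                        # think
        if step["action"] == "done":
            return step["result"]
        observation = tools[step["action"]](**step["args"])  # act
        history.append((step["action"], observation))        # observe, repeat
    raise RuntimeError("step budget exhausted without finishing")

# Scripted stand-in for the planner: look up overdue invoices, then finish.
def demo_planner(goal, history):
    if not history:
        return {"action": "list_overdue", "args": {}}
    return {"action": "done", "result": history[-1][1]}

tools = {"list_overdue": lambda: ["INV-001", "INV-007"]}
print(run_agent("chase overdue invoices", tools, demo_planner))
# prints ['INV-001', 'INV-007']
```

The `max_steps` budget matters in practice: an agent that can loop must also be able to give up.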
The difference from a chatbot: a chatbot produces text. An agent produces outcomes. “Summarise this document” is a chatbot task. “Find all overdue invoices, draft follow-up emails, and send the ones under $10,000” is an agent task.
My rule of thumb: if the task has clear success criteria and the actions are reversible, a single agent can handle it. You don’t need multi-agent orchestration to process expense reports or generate weekly status emails.
Most agent use cases I’ve evaluated don’t need more than one agent. A single well-prompted agent with the right tools handles 80% of enterprise automation tasks. Over-engineering with multiple agents when one would do is the most common mistake I see.
Pattern 2: Multi-agent orchestration
When a task genuinely requires different expertise or different context windows, you split into specialists.
An orchestrator agent receives the goal and delegates sub-tasks to specialist agents: a researcher, a writer, a reviewer, a coder. Each specialist has its own system prompt, tools, and context window. The orchestrator coordinates, merges results, and handles failures.
Think of it like a real team. A project manager doesn’t write the code. They make sure the right people do the right work in the right order. The orchestrator is the project manager.
The coordination cost is real. Message passing between agents, error handling when one agent fails, state management across the system. Use multi-agent only when the task genuinely requires it: different domain expertise, context windows too small for one agent, or fundamentally parallel sub-tasks.
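The delegation structure can be sketched like this. Each "specialist" is a plain function standing in for a separately prompted LLM with its own tools and context; the names and the failure handling are illustrative, not a specific framework:

```python
# Sketch of an orchestrator delegating to specialists. In a real system each
# specialist is its own prompted model with its own tools and context window;
# here they are stub functions so the coordination shape is visible.

def researcher(task):
    return f"notes on {task}"

def writer(task, notes):
    return f"draft of {task} using {notes}"

def reviewer(draft):
    return ("approved", draft)          # a real reviewer could reject

def orchestrate(goal):
    notes = researcher(goal)            # delegate research
    draft = writer(goal, notes)         # delegate drafting
    verdict, result = reviewer(draft)   # delegate review
    if verdict != "approved":           # orchestrator owns failure handling
        raise RuntimeError(f"review failed: {result}")
    return result

print(orchestrate("Q3 report"))
# prints "draft of Q3 report using notes on Q3 report"
```

Even in this toy version, the coordination cost shows up: the orchestrator has to know every specialist's interface and decide what a rejection means.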
Pattern 3: Human-in-the-loop
This isn’t a limitation. It’s possibly the most important pattern.
The agent works autonomously for routine tasks but pauses at defined checkpoints: before sending an email to a customer, before executing a payment, before deleting data. A human reviews, approves or rejects, and the agent continues.
The boundaries need to be explicit. Below this threshold, act freely. Above it, ask. A revenue recovery agent might autonomously send follow-ups for invoices under $5,000 but escalate anything above that for human review.
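An explicit boundary like that is a few lines of code, which is exactly why there's no excuse for leaving it implicit. A minimal sketch, using the $5,000 figure from the example above (the function names are illustrative):

```python
# Sketch of an explicit autonomy boundary: below the threshold the agent
# acts freely, at or above it the action is queued for human review.

APPROVAL_THRESHOLD = 5_000
review_queue = []

def dispatch_follow_up(invoice_id, amount, send_email):
    """Send automatically below the threshold, escalate above it."""
    if amount < APPROVAL_THRESHOLD:
        send_email(invoice_id)                   # act freely
        return "sent"
    review_queue.append((invoice_id, amount))    # ask a human
    return "escalated"

sent = []
print(dispatch_follow_up("INV-001", 1_200, sent.append))   # sent
print(dispatch_follow_up("INV-002", 18_000, sent.append))  # escalated
```

Widening the guardrails later is then a one-line change to the threshold, backed by whatever error-rate data you've collected.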
The highest-value agent deployments I’ve seen all have human checkpoints for irreversible or high-stakes actions. The goal isn’t full autonomy. It’s appropriate autonomy. Start with tight guardrails and widen them as trust builds from observed performance.
The mistake organisations make: designing for full autonomy from day one. That’s not brave. It’s reckless. The path is: tight guardrails, measure error rates, widen gradually, measure again.
Pattern 4: Tool use via MCP
An agent without tools is just a chatbot with extra steps. Tools are what make agents capable of action.
The Model Context Protocol (MCP) standardises how agents connect to tools. Instead of building a custom integration for every tool, you expose tools via a single protocol. Any MCP-compatible agent can use any MCP-compatible tool.
The maths is straightforward. Without MCP: 5 agents and 10 tools means 50 custom integrations. With MCP: 5 + 10 = 15 implementations. The protocol is the multiplier that makes agentic AI practical at enterprise scale.
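The shape of the idea can be shown with a toy registry. To be clear, this is not the MCP SDK, just an illustration of why a shared protocol collapses N×M integrations into N+M: tools describe themselves once, and any agent that speaks the registry's interface can discover and call them.

```python
# Toy tool registry illustrating the protocol idea (NOT the actual MCP SDK).
# Tools register once; agents discover and invoke them through one uniform
# interface instead of bespoke integrations per agent-tool pair.

class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name, description, fn):
        self._tools[name] = {"description": description, "fn": fn}

    def list_tools(self):
        # agents discover capabilities instead of hard-coding them
        return {n: t["description"] for n, t in self._tools.items()}

    def call(self, name, **kwargs):
        # one invocation path for every tool
        return self._tools[name]["fn"](**kwargs)

registry = ToolRegistry()
registry.register("get_invoice", "Fetch an invoice by id",
                  lambda invoice_id: {"id": invoice_id, "amount": 4200})

print(registry.list_tools())
print(registry.call("get_invoice", invoice_id="INV-001"))
```

Each tool implements the protocol once; each agent implements it once. That's where 5 + 10 = 15 comes from.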
I wrote more about MCP specifically in the interactive MCP guide. The short version: it’s the USB-C of AI. Connect once, use everything.
Worked example: the revenue recovery agent
Here’s a concrete example that ties all four patterns together. I use this as a teaching example because it’s real, measurable, and applies to almost any B2B organisation.
The job: Monitor overdue invoices, assess context, send appropriate follow-ups, escalate when needed, log everything.
The agent flow:
- Detect — scan the finance system for invoices past their due date
- Check context — query the CRM. Is this a key account? Is there an ongoing dispute? What’s the payment history?
- Draft — compose an appropriate follow-up email based on context (first reminder vs. third escalation, key account vs. standard customer)
- Decision gate — high-value invoice (above threshold)? Escalate to a human. Standard? Send automatically.
- Execute — send the email, log the action, update the CRM
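The five steps above can be sketched as one pipeline. The finance system, CRM, and email tool are stubbed with plain data structures here; the flow is the point, not the integrations, and every name is illustrative.

```python
# End-to-end sketch of the revenue recovery flow. Finance system, CRM, and
# email are stubbed; the structure mirrors the five steps above.

ESCALATION_THRESHOLD = 5_000

invoices = [
    {"id": "INV-001", "amount": 1_200, "overdue": True},
    {"id": "INV-002", "amount": 18_000, "overdue": True},
    {"id": "INV-003", "amount": 900, "overdue": False},
]
crm = {"INV-001": {"key_account": False}, "INV-002": {"key_account": True}}

def run_recovery(invoices, crm):
    outbox, escalations, log = [], [], []
    for inv in invoices:
        if not inv["overdue"]:                    # 1. detect
            continue
        context = crm.get(inv["id"], {})          # 2. check context
        email = f"Reminder for {inv['id']}"       # 3. draft
        if inv["amount"] >= ESCALATION_THRESHOLD or context.get("key_account"):
            escalations.append(inv["id"])         # 4. decision gate
        else:
            outbox.append(email)                  # 5. execute
        log.append(inv["id"])                     #    ...and log everything
    return outbox, escalations, log

print(run_recovery(invoices, crm))
# prints (['Reminder for INV-001'], ['INV-002'], ['INV-001', 'INV-002'])
```

The real version replaces each stub with an MCP tool call, but the gate sits in exactly the same place.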
The patterns in play:
- Single agent — one agent handles the entire flow sequentially
- Tool use — CRM, email, finance system, logging (all via MCP)
- Human-in-the-loop — escalation gate for high-value accounts
- Clear success metric — recovered revenue, measured in dollars
This isn’t a demo. It’s a pattern that applies to any process where humans currently chase status and send follow-ups. Customer onboarding. Compliance checks. Vendor management. IT provisioning. The structure is identical: detect trigger, gather context, draft action, gate if high-stakes, execute, log.
The decision framework
Before deploying an agent, ask four questions:
Is the task well-defined with clear success criteria? If you can’t measure whether the agent succeeded, it’s not an agent task. It’s a copilot task. Agents need goals, not vibes.
Can a failure be safely reversed? If the agent sends a wrong email, can you apologise and fix it? That’s reversible. If the agent deletes production data, that’s not. Irreversible actions always need human-in-the-loop gates.
Does it need access to multiple systems? If yes, use MCP for tool connectivity. Building custom integrations for each tool is the path to unmaintainable spaghetti.
Is the volume high enough to justify the investment? Calculate: (time saved per task) × (task frequency) vs (build cost + ongoing maintenance). If the agent saves 2 hours per week on a task someone does manually, that’s roughly 100 hours per year. Worth automating if build cost is under 40 hours.
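That back-of-envelope check is simple enough to write down. A sketch with illustrative numbers (the 2-hours-per-week case from above, plus a counter-example):

```python
# Back-of-envelope automation ROI: hours saved per year vs build cost plus
# a year of maintenance. Numbers are illustrative.

def worth_automating(hours_saved_per_week, build_hours,
                     maintenance_hours_per_month=0):
    saved = hours_saved_per_week * 52                    # hours/year
    cost = build_hours + maintenance_hours_per_month * 12
    return saved > cost

print(worth_automating(2, 40))        # 104 > 40  -> True
print(worth_automating(0.5, 80, 5))   # 26 > 140  -> False
```

The maintenance term is the one teams forget: an agent that saves half an hour a week but needs constant prompt babysitting is a net loss.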
The organisations succeeding with agentic AI aren’t the ones building the most sophisticated systems. They’re the ones picking the right tasks: well-defined, high-frequency, tool-connected, with clear escalation paths.
Explore the interactive Agentic AI Patterns framework for visual diagrams of each pattern, from single agents to orchestrated teams.