AI Fundamentals
A business leader's guide to the building blocks of AI — no code, no jargon, just the mental models that matter.
The building blocks
University education
Pre-training
"Sending someone to university"
You invest years and enormous cost in a broad education. The graduate doesn't know your specific business — but they can read, write, reason, and learn new things quickly. That general foundation is what makes everything else possible.
Cost: millions. Time: months. You almost certainly don't do this yourself — you buy the graduate (base model) from someone who did.
Library filing system
Embeddings
"A library shelved by meaning, not alphabet"
Imagine a library where books aren't filed alphabetically but by meaning. Cookbooks sit near nutrition guides. Thrillers sit near crime fiction. The "address" of each book on the shelf IS its meaning — and nearby books are related.
This is how AI understands similarity. When you search "revenue forecast", it also finds "financial projections" because they're shelved in the same area.
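The "shelved by meaning" idea can be sketched in a few lines. The vectors below are made-up toy coordinates (real embeddings have hundreds or thousands of dimensions, produced by a model), but the similarity measure is the real one used in practice:

```python
import math

# Toy 3-dimensional "shelf addresses"; the numbers are illustrative only.
embeddings = {
    "revenue forecast":      [0.90, 0.80, 0.10],
    "financial projections": [0.85, 0.75, 0.15],
    "chocolate cake recipe": [0.05, 0.10, 0.95],
}

def cosine_similarity(a, b):
    """How closely two vectors point the same way: near 1.0 = related."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

query = embeddings["revenue forecast"]
for phrase, vec in embeddings.items():
    print(f"{phrase}: {cosine_similarity(query, vec):.2f}")
```

"Revenue forecast" and "financial projections" score close to 1.0 because they sit in the same part of the shelf; the cake recipe scores near zero.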
Specialist apprenticeship
Fine-tuning
"A graduate doing a specialist apprenticeship"
Your university graduate joins a law firm. Through months of supervised practice, they learn legal language, how to draft contracts, and what "good" looks like in this domain. They don't forget how to read or reason — they layer expertise on top of their general education.
Cost: thousands, not millions. Risk: over-specialising until they forget how to do anything else (catastrophic forgetting). Only fine-tune when cheaper levers genuinely fail.
Open-book exam
Retrieval-augmented generation
"An employee with a well-indexed filing cabinet"
Instead of memorising every fact, your employee keeps a well-indexed filing cabinet and looks things up when asked. "What's our refund policy?" — they flip to the right page, read the answer, and respond in their own words. The filing cabinet can be updated instantly; no retraining needed.
This is how you give AI access to your company data without retraining the model. New document? Drop it in the filing cabinet. Instant knowledge update, zero training cost.
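A minimal sketch of the filing-cabinet pattern. Real systems retrieve by embedding similarity and send the prompt to an LLM API; here retrieval is naive keyword overlap and the file names and contents are invented for illustration:

```python
# The "filing cabinet": a handful of company documents (hypothetical).
documents = {
    "refund-policy.txt": "Customers may request a full refund within 30 days.",
    "shipping.txt": "Standard shipping takes 3 to 5 business days.",
}

def retrieve(question):
    """Pick the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents.items(),
               key=lambda kv: len(q_words & set(kv[1].lower().split())))

def build_prompt(question):
    """Stuff the retrieved page into the model's instructions."""
    name, text = retrieve(question)
    return f"Using only this excerpt from {name}:\n{text}\n\nAnswer: {question}"

print(build_prompt("What is our refund policy?"))
```

Updating the cabinet is just editing the `documents` dictionary — no retraining step anywhere in the loop.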
Writing a clear brief
Prompt engineering
"The quality of your instructions determines the quality of output"
You don't retrain your employee — you give them better instructions. "Summarise this in 3 bullet points for the board" gets a very different result from "Tell me about this". The skill isn't in the employee's training; it's in how precisely you describe what you want.
Zero cost, instant feedback. This is always your first lever. Most "AI doesn't work" complaints are actually "the brief was vague" problems.
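The difference between a vague and a precise brief is entirely visible in the text you send. A sketch, with an illustrative template (the field names are not a standard, just a checklist):

```python
# Same task, same model -- only the instructions differ.
vague_brief = "Tell me about this."

precise_brief = (
    "Summarise this in 3 bullet points for the board. "
    "Each bullet under 20 words. "
    "Lead with financial impact; flag open risks last."
)

# A reusable template turns brief-writing into a checklist.
def write_brief(task, audience, output_format, constraints):
    return f"{task} for {audience}. Format: {output_format}. {constraints}"

print(write_brief("Summarise this report", "the board",
                  "3 bullet points", "Each bullet under 20 words."))
```

Iterating on this string costs nothing and gives instant feedback, which is why it is always the first lever to pull.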
Employee doing the work
Inference
"The moment your trained employee sits down and produces output"
All the education and training was preparation. Inference is the moment they sit down and produce output — one word at a time, reading what they've written so far to decide what comes next. This is where you pay per minute of their time.
This is your running cost. Every API call, every chat response — it's all inference. Speed and cost per query matter here.
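The "one word at a time, reading what's written so far" loop can be sketched with a toy stand-in for the model. Here a hard-coded lookup table plays the role of billions of learned parameters, and the per-token price is hypothetical, not any provider's real rate:

```python
# Toy "model": given everything generated so far, pick the next token.
next_token = {
    ("The",): "quarterly",
    ("The", "quarterly"): "forecast",
    ("The", "quarterly", "forecast"): "looks",
    ("The", "quarterly", "forecast", "looks"): "strong",
}

tokens = ["The"]
while tuple(tokens) in next_token:
    tokens.append(next_token[tuple(tokens)])  # one token per step

print(" ".join(tokens))

# The running cost scales with tokens generated (illustrative price).
price_per_1k_tokens = 0.002  # hypothetical rate
print(f"{len(tokens)} tokens cost ${len(tokens) * price_per_1k_tokens / 1000:.6f}")
```

Every chat response your users see is this loop running, which is why speed and cost per token dominate the economics.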
Clip-on specialist badge
LoRA and parameter-efficient fine-tuning
"A tiny reference card that shifts behaviour without retraining"
Instead of sending your employee back to university, you give them a laminated reference card for a specific domain. They clip it on, check it when needed, and their answers shift toward that speciality. Swap the card for a different one and they're instantly re-specialised.
One base model, many cheap adapters. Your legal card, medical card, and finance card all share the same employee. Storage: megabytes, not gigabytes.
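The "megabytes, not gigabytes" claim comes straight from the arithmetic. LoRA replaces a full update to a weight matrix with two thin matrices of rank r; the layer size below is typical for a 7B-class model, and the rank is a common choice, not a fixed rule:

```python
d = 4096   # width of one weight matrix in the base model (typical)
r = 8      # adapter rank -- the "thinness" of the clip-on card

full_update = d * d          # numbers changed by full fine-tuning
lora_update = d * r + r * d  # numbers in the two low-rank matrices

print(f"Full: {full_update:,}  LoRA: {lora_update:,}  "
      f"({full_update // lora_update}x smaller)")
```

At these sizes the adapter is 256 times smaller per matrix than a full update, which is why you can keep a legal card, a medical card, and a finance card on disk for the price of a few megabytes each.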
Photo compression
Quantisation
"RAW photo to JPEG: dramatically smaller, barely visible quality loss"
A RAW photo is 50 MB but a JPEG is 5 MB — and you can barely tell the difference. Quantisation does the same to a model: it reduces the precision of every number from 32-bit to 16-bit, 8-bit, or 4-bit. The model gets dramatically smaller and faster, with minimal quality loss.
This is how you fit a large model onto cheaper hardware. A 28 GB model becomes 3.5 GB at 4-bit — runnable on a single consumer GPU instead of an expensive server.
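The 28 GB → 3.5 GB figure is simple arithmetic on bits per number. A sketch that ignores the small bookkeeping overhead real quantisation formats add:

```python
params = 7_000_000_000  # a 7B-parameter model

def model_size_gb(bits_per_param):
    """Approximate weight storage: parameters x bits, converted to GB."""
    return params * bits_per_param / 8 / 1e9

for bits in (32, 16, 8, 4):
    print(f"{bits:>2}-bit: {model_size_gb(bits):.1f} GB")
```

Going from 32-bit to 4-bit is an 8x shrink — the difference between needing a server-class GPU and fitting on a single consumer card.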
Decision framework
Internal: can better instructions fix it?
Yes → prompt engineering. Free, instant.
Internal: does it need access to your data?
Yes → RAG with an API model. Low cost, always current.
External: will volume exceed 10K queries/day?
Yes → self-host. Fine-tune + quantise a 7B–14B model on your own GPUs.
External: does a small model handle it?
No → hybrid approach. A small language model (SLM) for the 95% of bulk cases, a frontier API for the 5% of hard ones.
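The four questions above read naturally as a decision function. A sketch — the function name, argument names, and the fallback labels are illustrative, while the thresholds and recommendations mirror the ladder:

```python
def choose_approach(internal, fixable_by_instructions=False,
                    needs_company_data=False, queries_per_day=0,
                    small_model_suffices=True):
    """Walk the decision ladder top to bottom, first match wins."""
    if internal:
        if fixable_by_instructions:
            return "prompt engineering"          # free, instant
        if needs_company_data:
            return "RAG with an API model"       # low cost, always current
        return "API model as-is"                 # assumed default
    if queries_per_day > 10_000:
        return "self-host: fine-tune + quantise a 7B-14B model"
    if not small_model_suffices:
        return "hybrid: SLM for bulk, frontier API for hard cases"
    return "API model"                           # assumed default

print(choose_approach(internal=False, queries_per_day=50_000))
```

The ordering matters: cheaper levers sit at the top, and you only descend when the question above genuinely fails.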
The decision ladder
Two different playbooks depending on whether you're building internal tools or shipping a product.
Internal: productivity tooling
Boosting employees, internal tools, knowledge management
API-based models work well — volume is low, value per query is high.
External: product integration
Embedding AI into products, pipelines, customer-facing systems
API pricing is lethal at scale — self-hosted SLMs are often the only viable path.