You were never tech-enabled.
You were human compute.
There is a polite fiction running through services businesses. It goes like this: we are a technology company. We are tech-enabled. We use AI.
The reality is more brutal and more honest: most services businesses are human compute businesses. Revenue equals headcount. Margin equals wage arbitrage. The business model, stripped to its skeleton, is brains for hire.
Every services portfolio company runs some version of this model. The question is what to do about it.
Human brainpower costs more every year.
AI compute has fallen 95% in three.
Sources - human compute: Robert Half Salary Guide 2024–2026; BLS OES (accountants +3–4%/yr). AI compute: Stanford HAI AI Index 2025; OpenAI/Anthropic API pricing (GPT-4-class inference cost down ~95% since mid-2023); a16z, "The Cost of AI Inference" (2024).
One line is the cost of human brainpower, ticking up relentlessly. The other is AI compute, which has fallen 95% in three years. The gap between them is not a trend. It is an existential threat to the old way of doing things.
Cheaper labour worked. Better tools worked.
Once each. Neither compounds.
Playbook 1: Offshore wage arbitrage
Indian IT grew from $8B to $254B this way. But the India-US wage gap has narrowed from 8–10x to 3–4x. Wages grow 8–10% annually. 60–70K jobs cut in FY2024 - the first contraction in the industry's history.
Dead: wage inflation killed it
Playbook 2: Tech-enable the process
Better tools. Better workflows. RPA bots. This works - once. An RPA bot does the exact same thing on day 1,000 that it did on day one. It is a one-time step function. It does not compound.
Dead: tools that don't learn plateau
Every API call starts from zero.
Your corrections train their model, not yours.
Clients buy expertise, not software
If you replace people with a chatbot, you're no longer a services firm. You're a SaaS company with services pricing - the worst of both worlds.
LLMs have no memory
Every API call starts from zero. Your client-specific knowledge, your edge cases, your institutional judgment - none of it accumulates. When your team corrects the AI, that learning improves OpenAI's model, not yours.
You're paying the rent. The landlord keeps the equity.
Every correction your team makes improves their model. You get the productivity boost on day one and stay exactly there on day 1,000. Permanent amnesia.
Every decision becomes permanent ground truth.
Accuracy compounds. Knowledge stays.
What it is
Every transaction generates a decision trace - input, judgment, outcome. Traces retrain the system. Accuracy compounds. The more work flows through, the smarter it becomes. Unlike static automation, a retained learning system on day 1,000 handles cases it could not handle on day one.
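In code, a decision trace needs only a few fields. A minimal sketch in Python; the field names are illustrative, not prescribed by the thesis:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionTrace:
    """One unit of retained learning: what came in, what was decided, what happened."""
    input_payload: dict      # the case exactly as the system saw it
    judgment: str            # the decision applied, by system or human
    outcome: str             # what actually happened downstream
    corrected: bool = False  # True if a human overrode the system
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Every transaction appends one trace; periodic retraining consumes the log.
trace_log: list[DecisionTrace] = []
trace_log.append(DecisionTrace(
    input_payload={"invoice": "INV-1042", "amount": 1200},
    judgment="approve",
    outcome="paid on time",
))
```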
Why now
Three things changed simultaneously: (1) Inference costs fell 200x in 18 months. (2) Foundation models reached task-level competence. (3) Fine-tuning became practical at enterprise scale. It is now cheaper to learn than to hire.
The retained earnings analogy
Retained earnings accumulate on the balance sheet. They don't walk out the door when an employee quits. Retained learning is the same. Every correction becomes permanent ground truth. The organisation's intelligence accumulates instead of dissipating.
Four requirements. In this order.
The counterintuitive principle: use less AI, not more. Exhaust the cheap options before reaching for the expensive ones.
1. Resolve the obvious
Most decisions are ones the firm has already made. The system must recognise these instantly, without AI, at near-zero cost.
2. Learn from corrections
When a human corrects the system, that correction must improve future decisions permanently - not vanish after the session ends.
3. Reserve AI for the novel
Only genuinely new situations should reach an LLM. As the system learns, these cases get rarer and the cost per decision falls.
4. Keep a human in the loop
Every correction feeds back into steps 1 and 2. The human makes the system smarter. The system makes the human faster.
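Taken together, the four steps form a decision waterfall. A minimal sketch, with hypothetical store names and stubbed LLM and review calls; a production system would back each tier with real storage, retrieval, and retraining:

```python
exact_match_cache: dict[str, str] = {}  # tier 1: decisions the firm has already made
correction_store: dict[str, str] = {}   # tier 2: human corrections, kept permanently

def call_llm(case: dict) -> str:
    """Stub for the expensive tier; only genuinely novel cases should reach it."""
    return "approve"

def human_review(proposed: str, case: dict) -> str:
    """Stub for the human-in-the-loop check; may return an override."""
    return proposed

def decide(case_key: str, case: dict) -> str:
    # 1. Resolve the obvious: instant, no AI, near-zero cost.
    if case_key in exact_match_cache:
        return exact_match_cache[case_key]
    # 2. Learn from corrections: prior overrides are permanent ground truth.
    if case_key in correction_store:
        return correction_store[case_key]
    # 3. Reserve AI for the novel: only unseen cases pay for an LLM call.
    proposed = call_llm(case)
    # 4. Keep a human in the loop: the override (or confirmation) is retained,
    #    so the same case never reaches the LLM again.
    final = human_review(proposed, case)
    store = correction_store if final != proposed else exact_match_cache
    store[case_key] = final
    return final
```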
The result: a system that only gets smarter.
Day 1,000 handles what day one could not.
Same headcount, 2.5x the output.
Margins compound instead of compressing.
| | Contract Revenue | Human Compute (headcount) | Output/Person | Cost/Unit | Margin |
|---|---|---|---|---|---|
| Before retained learning | Flat | Flat | Baseline | Baseline | ~20% |
| After retained learning | Flat | Flat | +2.5x | Falls | Expands |
| Direction | → | → | ↑ | ↓ | ↑↑ |
Three things your team couldn't do before
that retained learning makes possible.
Niche mastery
Your system accumulates domain expertise that never forgets and never quits. Serve complex narrow segments profitably. Pricing power follows demonstrable accuracy.
Volume explosion
2.5x output with the same headcount. The system handles routine; humans handle novel. More volume means faster learning means higher accuracy means more volume. Winner-take-most.
Portfolio memory
The system holds the full context of every client and engagement simultaneously. Patterns from one client surface insights for all the others. Cross-portfolio intelligence as a byproduct.
The people don't change. What they can do with their time does.
Your clients see better work. You keep the margin.
Services contracts are priced on FTE rates or per-transaction volume. If you announce efficiency gains, clients demand lower prices. The margin improvement evaporates before you capture it.
The retained learning transformation happens inside the delivery engine. Invisibly. The client sees better outcomes, faster turnaround, fewer errors. The contract stays the same. You keep the spread.
This is not sleight of hand. It is how every services business model transition has ever worked. Value delivery changes first. Pricing catches up.
Client sees:
Better outcomes. Same SLA. Same pricing.
You keep:
Every efficiency gain captured. Compounding margins.
Clients never see:
The transformation happening inside the delivery engine.
AI-capable firms are pulling away.
The rest are compressing.
Upper Leg
EXL Service: stock +40–60% in 2024. Analytics revenue growing double digits.
Accenture: $3B+ GenAI bookings by mid-2024. 40,000+ AI practitioners deployed.
Lower Leg
Teleperformance: stock –30–40% from 2023 peaks. Defensive acquisitions.
Wipro: revenue declined FY2024. Margins at 16–16.5%.
Entry at 10x. Exit at 20x.
Same people. Same contracts.
At entry (10x):
- Labor-dependent
- Linear growth
- Margin compression risk
- No structural moat

At exit (20x):
- Compounding unit economics
- Proprietary learning that improves with scale
- Switching costs in client intelligence
- Margin trajectory decoupled from headcount
The difference between those two multiples is not operational improvement. It is a fundamentally different business - same people, same contracts, same client relationships on the surface.
But the physics underneath are completely different.
"Unit economics that compound become business economics that transform."
The services industry spent three decades improving unit economics through the processization of learning using technology. Retained learning makes that process exponential.
Use AI intelligently.
Score your firm on three dimensions.
Under 15 minutes. All client-side.
From Human Compute
to Retained Learning
Business models are due for an upgrade.
Gaurav Rastogi · doloopdigital
A companion piece to Garg & Gupta's Context Graphs thesis (Foundation Capital, 2025)
Your business model spent three decades improving unit economics through the processization of learning using technology. Retained learning makes that exponential: the processization of learning using intelligence.
Most services businesses are human compute businesses
There is a polite fiction running through services businesses. It goes like this: we are a technology company. We are tech-enabled. We use AI.
The reality is more brutal and more honest: revenue equals headcount, margin equals wage arbitrage. The business model, stripped to its skeleton, is brains for hire. That is not an insult. It is a diagnosis.
The India-US wage gap has narrowed from 8-10x to 3-4x. FY2024 marked the first workforce contraction in Indian IT history. This is not a cyclical dip. It is structural compression. Read the full evidence
Two playbooks ran the last thirty years. Both are exhausted.
Playbook one: offshore wage arbitrage. Find cheaper humans. Indian IT grew from $8 billion to $254 billion this way. But the arbitrage has nearly disappeared. What remains is a shrinking gap, not a structural advantage.
Playbook two: tech-enable the process. Better tools. Better workflows. RPA bots. This works - once. An RPA bot does exactly the same thing on day 1,000 that it did on day one. It does not compound.
Both playbooks optimized within the human compute model. Neither escaped it.
Your corrections train their model, not yours
The obvious next move is to plug in an LLM. Capture the productivity boost - roughly 14% on average, 34% for lower-skill workers. But that boost is one-time and non-accumulating. Every API call starts from zero.
Your client-specific edge cases, your institutional judgment, your hard-won understanding of how a particular client's business behaves across seasons - none of it accumulates. When your team corrects the AI, that learning improves OpenAI's model, not yours.
And if you replace people with a chatbot, you are no longer a services firm. You are a SaaS company with services pricing - the worst of both worlds.
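The gap between a one-time boost and compounding learning is easy to see numerically. A toy illustration: the 14% figure comes from the research cited in the sources below, while the daily learning rate and the 2.5x ceiling are assumptions borrowed from the thesis headline:

```python
def plugin_output(day: int) -> float:
    """One-time LLM boost: +14% on day 1 and still +14% on day 1,000."""
    return 1.14

def retained_output(day: int, daily_gain: float = 0.001, ceiling: float = 2.5) -> float:
    """Output compounding with accumulated corrections (assumed rate, thesis ceiling)."""
    return min(ceiling, (1 + daily_gain) ** day)

for day in (1, 100, 500, 1000):
    print(f"day {day:>4}: plug-in {plugin_output(day):.2f}x, "
          f"retained {retained_output(day):.2f}x")
```

The plug-in wins on day one. Retained learning passes it within months and hits the ceiling by day 1,000.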
Retained learning: institutional intelligence that compounds
Retained learning treats institutional intelligence the way a balance sheet treats earnings: as something that accumulates, compounds, and does not walk out the door on Friday.
Every transaction generates a decision trace - the input, the human judgment applied, and the outcome. Those traces retrain the system. Accuracy compounds. Unlike static automation, a retained learning system on day 1,000 handles cases it could not have handled on day one.
The architecture is built on a counterintuitive principle: use less AI, not more. Exhaust the cheap options before reaching for the expensive ones. As volume grows, AI calls become increasingly rare because the system has learned the patterns. The ratchet only clicks in one direction.
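The cost side of that ratchet is simple arithmetic. A back-of-envelope sketch; the per-decision costs are illustrative assumptions, not figures from the thesis:

```python
def cost_per_decision(llm_share: float,
                      llm_cost: float = 0.02,      # assumed $ per LLM-routed decision
                      cached_cost: float = 0.0001  # assumed $ per rule/cache hit
                      ) -> float:
    """Blended cost as the share of decisions that still need an LLM falls."""
    return llm_share * llm_cost + (1 - llm_share) * cached_cost

for share in (0.9, 0.5, 0.1, 0.01):
    print(f"LLM share {share:>4.0%} -> ${cost_per_decision(share):.5f} per decision")
```

At a 90% LLM share the blended cost is ~$0.018 per decision; at 1% it is ~$0.0003 - a 60x drop from learning alone, under these assumptions.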
Same headcount, 2.5x the output
Niche mastery. Expertise that never forgets, never quits. Pricing power follows accuracy.
Volume explosion. The system handles routine; humans handle novel. More volume means faster learning means higher accuracy.
Portfolio memory. Full context of every client simultaneously. Cross-portfolio intelligence as a byproduct of doing the work.
Your clients see better work. You keep the margin.
Services contracts are priced on FTE rates or per-transaction volume. If you announce efficiency gains, clients demand lower prices. The margin improvement evaporates before you capture it.
The retained learning transformation happens inside the delivery engine. Invisibly. The client sees better outcomes, faster turnaround, fewer errors. The contract stays the same. You keep the spread.
This is how every services business model transition has ever worked. Value delivery changes first. Pricing catches up.
AI-capable firms are pulling away. The rest are compressing.
EXL Service: stock +40-60% in 2024. Accenture: $3B+ GenAI bookings by mid-2024, 40,000+ AI practitioners.
Teleperformance: stock -30-40% from 2023 peaks. Wipro: revenue declined FY2024, margins at 16-16.5%.
The K-curve is self-reinforcing. Upper-leg firms build data flywheels that compound with every engagement. Lower-leg firms face declining margins that reduce the ability to invest, which further erodes competitiveness.
The window between the two legs is widening. It will not close.
Entry at 10x. Exit at 20x. Same people.
Base case: $10M EBITDA at 10x = $100M enterprise. Apply retained learning over five years: margins expand from 20% to 35%+, revenue grows at 12%/yr, multiple re-rates toward 20x. Exit range: $350M-$880M. That is a 3.5x-8.8x MOIC.
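A sketch of that base case in code. The single scenario below (35% margin, 20x exit) lands mid-band; the stated $350M-$880M range reflects more conservative and more aggressive margin and multiple assumptions around it:

```python
ebitda0, margin0, entry_mult = 10e6, 0.20, 10
revenue0 = ebitda0 / margin0              # implies $50M revenue today
entry_ev = entry_mult * ebitda0           # entry at 10x = $100M enterprise value

growth, years, margin5, exit_mult = 0.12, 5, 0.35, 20
revenue5 = revenue0 * (1 + growth) ** years   # ~$88M after five years at 12%/yr
ebitda5 = revenue5 * margin5                  # ~$31M at the expanded margin
exit_ev = exit_mult * ebitda5                 # ~$617M at the re-rated multiple

print(f"Exit EV ${exit_ev / 1e6:.0f}M, MOIC {exit_ev / entry_ev:.1f}x")
# -> Exit EV $617M, MOIC 6.2x: inside the stated 3.5x-8.8x range
```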
The difference is not operational improvement. It is a fundamentally different business.
THE THESIS
Unit economics that compound become business economics that transform.
The services industry spent three decades improving unit economics through the processization of learning using technology. Retained learning makes that process exponential. The firms that understand this will not look like they are doing anything dramatically different - same services, same teams, same clients, same contracts.
But the economics, the trajectory, and the exit multiple will be unrecognizable.
Use AI intelligently.
Brynjolfsson and Mitchell (2017) - automation at the task level.
Brynjolfsson et al. (2025, QJE) - 14% average productivity gains from generative AI.
Nonaka and Takeuchi (1995) - the SECI model: tacit to codified knowledge.
Agrawal, Gans, and Goldfarb (2018) - the prediction machines flywheel.
Garg and Gupta (2025) - Context Graphs: decision traces as the next trillion-dollar enterprise layer.
Morgan Stanley (2024) - the barbell effect in AI valuations.
We run retained learning transformations
Architecture advisory, build, and deploy. We work with services businesses and their PE partners to identify the highest-value tasks, build the learning architecture, and prove the compounding curve in production. Not a workshop. Not a strategy deck. A working system with measurable accuracy gains, declining AI costs, and expanding margins.
Start with the assessment to see where your firm stands - or get in touch directly.
Gaurav Rastogi wrote the book on how this industry was built - literally. Offshore (Penguin, 2011) and Global Business in the Age of Destruction and Distraction (Oxford University Press, 2022). Visiting faculty at IIM Ahmedabad and Ashoka University. Board member at GTU and HSCI Global. 3,200+ commits in the last year building the architecture described above. More about Gaurav
Three instruments. Each answers one question.
All assessments run client-side. No data is sent to any server.
Human Compute Diagnostic
Retained Learning Readiness
Exit Multiple Calculator
Base case: $10M EBITDA services business. Entry at 10x = $100M enterprise value. Apply retained learning over 5 years and see what changes.
Industry benchmark: 8–15pp margin expansion over 5 years (Everest Group, 2024)
Illustrative model. Sources: Everest Group (2024), Morgan Stanley IT Services Research (2024).
Gaurav Rastogi
Gaurav Rastogi is the founder of doloopdigital, focused on intelligence-enabled transformation for knowledge services businesses.
The Retained Learning Thesis is being built and proved in production through the SLAM architecture - running live inside a knowledge services delivery engine serving US startups.
The thesis is a companion piece to Ashu Garg and Jaya Gupta's Context Graphs thesis (Foundation Capital, 2025) - translating the concept of decision traces as institutional memory into the specific economic language of knowledge services, PE-owned businesses, and the human compute trap.
About doloopdigital
doloopdigital is a point of view, not a consulting firm. The thesis is that knowledge services businesses are human compute businesses facing structural compression - and that retained learning is the only escape valve that does not break contracts, alienate clients, or require wholesale business model reinvention.
We publish the thinking. We prove it in production. We share the architecture with anyone building in this space.