What brings you here?
For these companies, more people meant more revenue.
So why are the biggest firms shrinking?
There is a polite fiction running through services businesses. It goes like this: we are a technology company. We are tech-enabled. We use AI.
The reality is more brutal and more honest: most services businesses are human compute businesses. Revenue is proportional to headcount. Margin comes from wage arbitrage. The business model, stripped to its skeleton, is brains for hire. Human compute.
Every services portfolio company runs some version of this model. The question is what to do about it.
Human brainpower costs more every year.
AI compute costs have fallen 95% in three years.
Human compute: Robert Half Salary Guide 2024–2026, BLS OES (accountants +3–4%/yr). AI compute: Stanford HAI AI Index 2025; OpenAI/Anthropic API pricing (GPT-4-class inference cost down ~95% since mid-2023); a16z "The Cost of AI Inference" (2024).
That orange line is the cost of human brainpower, ticking up relentlessly. The green line is AI compute, which has fallen 95% in three years. The gap between them is an existential threat to the old way of doing things. There is no escaping the infinite supply of virtually free intelligence, on tap.
The two old playbooks were wildly successful.
But cheaper labor and better processes are both played out.
Playbook 1: Offshore wage arbitrage
Indian IT grew from $8B to $254B this way. But the wage gap narrowed from 8–10x to 3–4x.
Dead: wage inflation killed it.
Playbook 2: Tech-enable the process
RPA bots do the exact same thing on day 1,000 that they did on day one.
Dead: tools that don't get better make you worse.
Every time you ask AI a question, it starts from scratch.
Imagine never learning from your mistakes.
Clients buy expertise, not software
If you replace people with a chatbot, you're no longer a services firm. You're a SaaS company with services pricing - the worst of both worlds.
LLMs have no memory
Every API call starts from zero. Your client-specific knowledge, your edge cases, your institutional judgment - none of it accumulates. When your team corrects the AI, that learning improves OpenAI's model, not yours.
You're paying the rent. The landlord keeps the equity.
Every correction your team makes improves their model. You get the productivity boost on day one and stay exactly there on day 1,000. Permanent amnesia.
What if your system could remember every decision path?
The more you use it, the better it gets.
What it is
Every transaction generates a decision trace. These traces retrain the system, so accuracy compounds with volume. Day 1,000 handles what day one could not.
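What a decision trace might look like in code: a minimal sketch. The `DecisionTrace` record and its field names are illustrative assumptions, not an actual schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DecisionTrace:
    """One transaction's worth of retainable learning (illustrative fields)."""
    input_summary: str     # what the system was asked to do
    decision: str          # what it did, or proposed
    human_correction: str  # how a reviewer amended it, if at all
    outcome: str           # accepted / corrected / escalated

trace = DecisionTrace(
    input_summary="Invoice INV-1042, vendor Acme, $12,400",
    decision="Coded to GL 6100 (software subscriptions)",
    human_correction="Recoded to GL 6200 (professional services)",
    outcome="corrected",
)

# Traces are appended to a log that periodically feeds retraining.
print(json.dumps(asdict(trace), indent=2))
```

The point of the structure is that the correction is captured next to the original decision, so the retraining step sees both.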
Why now
Inference costs have fallen 200x and this will continue for some time. Foundation models have reached task-level competence. Fine-tuning is practical now. It is now cheaper to learn than to hire.
Knowledge that stays
Unlike employees, what the system learns doesn't walk out the door. It compounds with every transaction.
Like excess vitamins, efficiency gains pass straight through to clients unless you retain them.
[Waterfall chart: a $100M services contract at 20% operating margin. Bars show rate cuts + fewer billed hours, $3.6M in wages, savings passed to clients, other costs, and the margin left.]
Proof: flat operating margins despite billions in AI investment
Rate cut: KPMG demanded 14% AI discount from Grant Thornton (Irish Times, Feb 2026). Labor share: TCS, Infosys, Wipro, Cognizant annual reports FY23–24 (54–67%). Realized productivity: EXL FY24 (200–300bps); lab 20–40% (McKinsey 2023) vs 5–8% realized. AI investment: Accenture $3B/3yr, Wipro $1B/3yr; Gartner 2024 (3–5%). Wages: NASSCOM FY24 (8–10%), Aon (9.5%), Radford/Mercer (4–5% global).
Grow revenues without losing
your people and knowledge.
Margin pressure turns into margin flywheel.
| | Contract Revenue | Human Compute | Output/Person | Cost/Unit | Margin |
|---|---|---|---|---|---|
| Before retained learning | Flat | Flat | Baseline | Baseline | ~20% |
| After retained learning | Flat | Flat | +2.5x | Falls | Expands |
| Direction | → | → | ↑ | ↓ | ↑↑ |
Knowledge work can now be compounded
by adding knowledge instead of workers.
The result is more revenue per employee.
This changes the economics of your business model.
Niche mastery
Retained domain expertise that never quits; no need to hunt for unicorn experts.
Volume explosion
2.5x output: humans handle clients and novel cases while the system handles routine transactions.
Portfolio memory
Full context of every client simultaneously. Cross-portfolio intelligence as a byproduct.
Your employees can now focus on things that move the dial for your business.
The challenge will be to offer clients
a better service experience at the same price.
The relationship becomes more valuable to both companies.
Services contracts are priced on FTE rates or per-transaction volume. If you announce efficiency gains, clients demand lower prices. The margin improvement evaporates before you capture it.
The retained learning transformation happens inside the delivery engine. Invisibly. The client sees better outcomes, faster turnaround, fewer errors. The contract stays the same. You keep the spread.
This is not sleight of hand. It is how every services business model transition has ever worked. Value delivery changes first. Pricing catches up.
Client sees:
Better outcomes. Same SLA. Same pricing.
You keep:
Every efficiency gain captured. Compounding margins.
Clients never see:
The transformation happening inside the delivery engine.
The industry divergence has already started.
Not all business models are the same.
Upper Leg
EXL Service: stock +40–60% in 2024. Analytics revenue growing double digits.
Accenture: $3B+ GenAI bookings by mid-2024. 40,000+ AI practitioners deployed.
Lower Leg
Teleperformance: stock –30–40% from 2023 peaks. Defensive acquisitions.
Wipro: revenue declined FY2024. Margins at 16–16.5%.
The payoff for moving from "tech-enabled services"
to "retained learning" services is likely to be
a multiple re-rating: entry at 10x, exit at 20x.
The 10x business:
- Labor-dependent
- Linear growth
- Margin compression risk
- No structural moat

The 20x business:
- Compounding unit economics
- Proprietary learning that improves with scale
- Switching costs in client intelligence
- Margin trajectory decoupled from headcount
The difference between those two multiples is not operational improvement. It is a fundamentally different business - same people, same contracts, same client relationships on the surface.
But the physics underneath are completely different.
The business model shifts dramatically when the unit economics become compounding.
Getting better at getting better at business.
The services industry spent three decades improving unit economics through the processization of learning using technology. Retained learning makes that process exponential.
Use AI intelligently.
How will your business shift?
Score your firm on three dimensions
in under 15 minutes. All client-side.
Exit Multiple Calculator
From Human Compute
to Retained Learning
Business models are up for an upgrade.
Gaurav Rastogi · doloopdigital
A companion piece to Garg & Gupta's Context Graphs thesis (Foundation Capital, 2025)
Your business model was improving unit economics through the processization of learning using technology. That is now exponential through the processization of learning using intelligence.
Most services businesses are human compute businesses
There is a polite fiction running through services businesses. It goes like this: we are a technology company. We are tech-enabled. We use AI.
The reality is more brutal and more honest: revenue equals headcount, margin equals wage arbitrage. The business model, stripped to its skeleton, is brains for hire. That is not an insult. It is a diagnosis.
The India-US wage gap has narrowed from 8-10x to 3-4x. FY2024 marked the first workforce contraction in Indian IT history. This is not a cyclical dip. It is structural compression. Read the full evidence
Two playbooks ran the last thirty years. Both are exhausted.
Playbook one: offshore wage arbitrage. Find cheaper humans. Indian IT grew from $8 billion to $254 billion this way. But the arbitrage has nearly disappeared. What remains is a shrinking gap, not a structural advantage.
Playbook two: tech-enable the process. Better tools. Better workflows. RPA bots. This works - once. An RPA bot does exactly the same thing on day 1,000 that it did on day one. It does not compound.
Both playbooks optimized within the human compute model. Neither escaped it.
Your corrections train their model, not yours
The obvious next move is to plug in an LLM. Capture the productivity boost - roughly 14% on average, 34% for lower-skill workers. But that boost is one-time and non-accumulating. Every API call starts from zero.
Your client-specific edge cases, your institutional judgment, your hard-won understanding of how a particular client's business behaves across seasons - none of it accumulates. When your team corrects the AI, that learning improves OpenAI's model, not yours.
And if you replace people with a chatbot, you are no longer a services firm. You are a SaaS company with services pricing - the worst of both worlds.
Retained learning: institutional intelligence that compounds
Retained learning treats institutional intelligence the way a balance sheet treats earnings: as something that accumulates, compounds, and does not walk out the door on Friday.
Every transaction generates a decision trace - the input, the human judgment applied, and the outcome. Those traces retrain the system. Accuracy compounds. Unlike static automation, a retained learning system on day 1,000 handles cases it could not have handled on day one.
The architecture is built on a counterintuitive principle: use less AI, not more. Exhaust the cheap options before reaching for the expensive ones. As volume grows, AI calls become increasingly rare because the system has learned the patterns. The ratchet only clicks in one direction.
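A minimal sketch of that escalation ladder, assuming a simple in-memory pattern store and a stand-in for the paid model call. Both `learned_patterns` and `expensive_model_call` are hypothetical placeholders, not a real API.

```python
# Sketch of the "use less AI" escalation ladder described above.
learned_patterns: dict[str, str] = {}  # normalized input -> retained decision
model_calls = 0                        # how often we pay for inference

def expensive_model_call(task: str) -> str:
    """Placeholder for a paid LLM call, the costly last resort."""
    global model_calls
    model_calls += 1
    return f"model-answer({task})"

def handle(task: str) -> str:
    key = task.strip().lower()
    # 1. Cheapest option first: reuse a decision the system already learned.
    if key in learned_patterns:
        return learned_patterns[key]
    # 2. Only then pay for inference.
    answer = expensive_model_call(task)
    # 3. Retain the result so the same case never costs again.
    learned_patterns[key] = answer
    return answer

handle("classify invoice INV-1042")  # pays for inference
handle("classify invoice INV-1042")  # served from retained patterns
print(model_calls)                   # 1: the second request was free
```

As volume grows, step 1 absorbs an ever larger share of traffic, which is why AI spend per transaction falls even as throughput rises.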
Same headcount, 2.5x the output
Niche mastery. Expertise that never forgets, never quits. Pricing power follows accuracy.
Volume explosion. The system handles routine; humans handle novel. More volume means faster learning means higher accuracy.
Portfolio memory. Full context of every client simultaneously. Cross-portfolio intelligence as a byproduct of doing the work.
The client sees faster, better work. The contract stays the same.
Services contracts are priced on FTE rates or per-transaction volume. If you announce efficiency gains, clients demand lower prices. The margin improvement evaporates before you capture it.
The retained learning transformation happens inside the delivery engine. Invisibly. The client sees better outcomes, faster turnaround, fewer errors. The contract stays the same. You keep the spread.
This is how every services business model transition has ever worked. Value delivery changes first. Pricing catches up.
AI-capable firms are pulling away. The rest are compressing.
EXL Service: stock +40-60% in 2024. Accenture: $3B+ GenAI bookings by mid-2024, 40,000+ AI practitioners.
Teleperformance: stock -30-40% from 2023 peaks. Wipro: revenue declined FY2024, margins at 16-16.5%.
The K-curve is self-reinforcing. Upper-leg firms build data flywheels that compound with every engagement. Lower-leg firms face declining margins that reduce the ability to invest, which further erodes competitiveness.
The window between the two legs is widening. It will not close.
Entry at 10x. Exit at 20x. Same people.
Base case: $10M EBITDA at 10x = $100M enterprise. Apply retained learning over five years: margins expand from 20% to 35%+, revenue grows at 12%/yr, multiple re-rates toward 20x. Exit range: $350M-$880M. That is a 3.5x-8.8x MOIC.
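The base-case arithmetic can be checked directly. This is an illustrative sketch using the stated mid-range inputs; where the exit lands inside the quoted range depends on the margin and multiple assumptions.

```python
# Illustrative exit arithmetic for the base case above (mid-range inputs).
entry_ebitda = 10.0                  # $M EBITDA at entry
entry_value = entry_ebitda * 10      # entry at 10x = $100M enterprise value

revenue = entry_ebitda / 0.20        # a 20% margin implies $50M revenue
exit_revenue = revenue * 1.12 ** 5   # 12%/yr growth over five years, ~$88M

exit_margin = 0.35                   # margins expand from 20% to 35%
exit_ebitda = exit_revenue * exit_margin
exit_value = exit_ebitda * 20        # multiple re-rates to 20x

moic = exit_value / entry_value
print(f"exit value ${exit_value:.0f}M, MOIC {moic:.1f}x")  # exit value $617M, MOIC 6.2x
```

With these mid-range inputs the model lands at roughly $617M and a 6.2x MOIC, inside the quoted $350M to $880M range; the bounds reflect less and more aggressive margin and multiple assumptions.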
The difference is not operational improvement. It is a fundamentally different business.
THE THESIS
Unit economics that compound become business economics that transform.
The services industry spent three decades improving unit economics through the processization of learning using technology. Retained learning makes that process exponential. The firms that understand this will not look like they are doing anything dramatically different - same services, same teams, same clients, same contracts.
But the economics, the trajectory, and the exit multiple will be unrecognizable.
Use AI intelligently.
Brynjolfsson and Mitchell (2017) - automation at the task level. Brynjolfsson et al. (2025 QJE) - 14% average productivity gains from generative AI. Nonaka and Takeuchi (1995) - SECI model, tacit to codified knowledge. Agrawal, Gans, Goldfarb (2018) - prediction machines flywheel. Garg and Gupta (2025) - Context Graphs, decision traces as the next trillion-dollar enterprise layer. Morgan Stanley (2024) - barbell effect in AI valuations.
We run retained learning transformations
Architecture advisory, build, and deploy. We work with services businesses and their PE partners to identify the highest-value tasks, build the learning architecture, and prove the compounding curve in production. Not a workshop. Not a strategy deck. A working system with measurable accuracy gains, declining AI costs, and expanding margins.
Start with the assessment to see where your firm stands - or get in touch directly.
Gaurav Rastogi wrote the book on how this industry was built - literally. Offshore (Penguin, 2011) and Global Business in the Age of Destruction and Distraction (Oxford University Press, 2022). Visiting faculty at IIM Ahmedabad and Ashoka University. Board member at GTU and HSCI Global. 3,200+ commits in the last year building the architecture described above. More about Gaurav
Three instruments. Each answers one question.
All assessments run client-side. No data is sent to any server.
Human Compute Diagnostic
Retained Learning Readiness
Phase 1: Business Context
Exit Multiple Calculator
Base case: $10M EBITDA services business. Entry at 10x = $100M enterprise value. Apply retained learning over 5 years and see what changes.
Industry benchmark: 8–15pp over 5 years (Everest Group 2024)
Illustrative model. Sources: Everest Group (2024), Morgan Stanley IT Services Research (2024).
Gaurav Rastogi
Gaurav Rastogi is the founder of doloopdigital, focused on intelligence-enabled transformation for knowledge services businesses.
The Retained Learning Thesis is being built and proved in production through the SLAM architecture - running live inside a knowledge services delivery engine serving US startups.
The thesis is a companion piece to Ashu Garg and Jaya Gupta's Context Graphs thesis (Foundation Capital, 2025) - translating the concept of decision traces as institutional memory into the specific economic language of knowledge services, PE-owned businesses, and the human compute trap.
About doloopdigital
doloopdigital is a point of view, not a consulting firm. The thesis is that knowledge services businesses are human compute businesses facing structural compression - and that retained learning is the only escape valve that does not break contracts, alienate clients, or require wholesale business model reinvention.
We publish the thinking. We prove it in production. We share the architecture with anyone building in this space.