Every AI agent running in a company today was trained on something. Decisions, corrections, feedback loops, judgment calls. Someone generated that. In most cases, that someone was a person doing their job, paid a salary to do it. They were not paid for what they contributed to the machine.
This is the central unpriced transaction of the AI economy.
The expertise that took years to build is being extracted, encoded, and turned into a productive asset the firm owns. The person gets their regular pay. The firm gets a permanent, copyable, deployable version of what made that person valuable. Acemoglu, Autor and Johnson argue that AI represents a fundamental shift in the capital-labour relationship, one that targets the cognitive assets workers previously kept.
One could argue workers were already compensated. “It is called a wage,” as economists put it. Employment contracts are forward-looking agreements: workers accept a salary knowing their employer captures all work product, including unanticipated future uses.
However, that argument holds less well than it used to. Wages price the expected value of work at the time of hiring. They do not price the permanent, scalable, compounding value of an asset that did not exist when the contract was signed. A lawyer hired in 2019 did not negotiate a salary that reflected the possibility that her judgment calls would train an agent deployed across ten thousand clients in 2025. The forward-looking wage argument assumes both parties understood what was being priced. Neither did.
Why it is different now: when AI breaks the old deal
Employment contracts have always transferred work product to employers. That arrangement held because the transfer had natural limits. For instance, a senior lawyer’s judgment informed junior colleagues, but only through proximity and time; a skilled engineer’s instincts shaped a team, but imperfectly, gradually, and only as long as they stayed. Knowledge leaked slowly, depreciated constantly, and could never be fully separated from the person who held it. The worker retained something irreducible. The firm got the benefit but not the thing itself.
AI pushes those limits further than any previous technology. The lawyer’s judgment and the engineer’s instincts can now be extracted more completely, encoded more permanently, and deployed more widely than before. The transfer is now permanent in a way it never was. The firm no longer gets just the benefit of the expertise. It gets a copy of it. That copy breaks the implicit assumption underneath the old contract: one written for a world where knowledge stayed, in some meaningful sense, with the person.
Two problems make this hard to fix.
First, the measurement problem. The instinct is to pay each worker in proportion to their contribution. But training signal is collective: thousands of decisions combine into model weights that reflect none of them individually. Attribution methods exist: influence functions and data valuation techniques can approximate individual contribution, but at employment scale the computational cost exceeds any plausible individual payout. The deeper form of the measurement problem is therefore a transaction cost problem: if individual attribution is prohibitively costly, compensation needs to move to the collective level, such as sectoral royalty pools, analogous to music licensing, that distribute value across contributor classes without requiring per-worker attribution.
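A back-of-envelope sketch makes the scaling concrete. Every number below (contributor count, retraining cost, asset value) is invented for illustration, and the n·log(n) retraining estimate is a rough stand-in for the permutation-sampling approximations used in the data valuation literature (for example, Ghorbani and Zou’s Data Shapley), not a precise bound:

```python
# Back-of-envelope arithmetic: why per-worker attribution is a transaction
# cost problem. All numbers are hypothetical; only the scaling matters.
import math

n_contributors = 10_000        # workers whose decisions entered the training set
retrain_cost_usd = 50_000      # assumed cost of one full retraining run

# Exact Shapley-style data valuation needs utility evaluations over subsets
# of contributors, which is O(2^n) and out of the question. Even
# permutation-sampling approximations need on the order of n * log(n)
# retrainings to estimate every contributor's value.
mc_retrainings = int(n_contributors * math.log(n_contributors))
attribution_cost_usd = mc_retrainings * retrain_cost_usd

asset_value_usd = 100_000_000  # assumed value of the trained agent
per_worker_payout_usd = asset_value_usd / n_contributors

print(f"retrainings needed: ~{mc_retrainings:,}")            # ~92,103
print(f"attribution cost:   ~${attribution_cost_usd:,.0f}")  # ~$4.6bn
print(f"per-worker payout:  ~${per_worker_payout_usd:,.0f}") # $10,000
# Measuring the split costs more than the asset being split:
# hence the move to collective pools that skip per-worker attribution.
```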
Second, the bargaining power problem. Workers have always had less bargaining power than employers, but AI removes the one structural floor that existed underneath every labour negotiation. The same lawyer, the same engineer: once their expertise is encoded, their exit changes nothing. The leverage that underpinned every negotiation over expertise, the ability to say “I can leave and take my value with me,” is gone at the moment of encoding.
Put both problems together and the shape of the trap is visible: workers cannot prove individual contribution precisely enough to claim compensation, cannot organize before encoding has already happened, and once it has, their leverage is gone.
That is what the four compensation models below are navigating. None of them were designed for this.
Why it is hard to fix: four models that were never designed for the AI era
The models below are not new. They evolved across decades of creative industries and labour law. For most of that time they worked, because the knowledge transfer they were managing was always incomplete. The firm got the benefit but never quite the thing itself.
Transfer: nothing and one-time fee. Nothing is the default: your contract says your work belongs to your employer. The difference is that before, when you left, you took your expertise with you. Now the firm has a copy. Your exit changes nothing.

On the other hand, one-time fees make the transaction explicit. It sounds fair until you realize you are being asked to price something neither you nor the firm yet understands. Imagine being paid for a film you did not know you were making. That is the negotiation.

Both models close your claim at the moment your contribution is worth the least. Everything that accrues after belongs to the firm.
Income: royalty. Royalties make intuitive sense. A musician earns every time her song is streamed. Why shouldn’t a worker earn every time the agent trained on her expertise generates value? Because the agent is not a song. It gets retrained continuously on new data, until it bears little resemblance to what it was when you contributed to it. The royalty expires. The agent keeps running. What sounded like ongoing income turns out to be a payment that stops at a moment the worker cannot predict or control.
Ownership: equity. Equity is the only model that survives retraining. Equity stakes distributed collectively, across contributor classes rather than individual workers, sidestep attribution entirely. You do not need to prove your specific contribution. You need to belong to the class of people whose contributions made the asset possible. You own the asset itself, not a version of it. As the asset evolves, your stake evolves with it. It is also the only model that gives workers any standing to ask how the asset is being used. Every other model is a transaction. In reality, however, this is the most difficult model to implement, because it is gated by access: equity requires leverage to negotiate before your expertise is encoded, and that leverage is exactly what workers in this situation do not have.
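To make the contrast with per-worker attribution concrete, here is a minimal sketch of class-level allocation. The class names, pool shares, and headcounts are all hypothetical; the point is that a member’s stake follows from class membership alone:

```python
# A minimal sketch of class-level (not per-worker) equity allocation.
# All names and numbers are hypothetical illustrations, not a proposed standard.
from dataclasses import dataclass

@dataclass
class ContributorClass:
    name: str
    pool_share: float   # negotiated share of the equity pool for this class
    headcount: int      # members of the class; no per-person attribution needed

def per_member_stake(cls: ContributorClass) -> float:
    # Each member's stake derives from class membership alone,
    # sidestepping the prohibitively costly individual-attribution problem.
    return cls.pool_share / cls.headcount

classes = [
    ContributorClass("senior_reviewers", pool_share=0.05, headcount=200),
    ContributorClass("annotating_analysts", pool_share=0.03, headcount=1_500),
]

for c in classes:
    print(f"{c.name}: {per_member_stake(c):.5%} of the asset per member")
# Because the stake is in the asset itself, retraining does not expire it:
# as the agent evolves, the claim evolves with it.
```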
These models were built for a world where knowledge stayed with the person. Nobody updated them for a world where it does not. No one decided this was fair. The rules just never caught up, and in the absence of new ones, the default runs.
The people bearing the cost of that gap are not who you might expect.
Who actually bears the cost: the pipeline is closing from the bottom up
The unpriced transaction, in which expertise is contributed and permanently captured while nobody is paid for the transfer, does not land equally. It lands hardest on those with the least leverage. And it is doing something more structural than displacing individuals.
The entry-level job is disappearing first. The tasks that justified hiring junior workers, including first-pass reviews, routine analysis, and structured correspondence, were never glamorous. They were learning environments. Brynjolfsson’s research using ADP payroll data found a 13% to 16% relative employment decline for early-career workers in AI-exposed occupations. The work that made juniors worth hiring no longer needs a human to do it.
The middle compresses next. U.S. employers advertised 42% fewer middle management positions at the end of 2024 than in 2022, with no recovery trajectory observed. A survey of 10,000 leaders found that managers spend nearly 40% of their time on administrative tasks and day-to-day coordination, precisely the work AI handles first. This compression did not begin with AI: the post-pandemic correction and the shift toward flatter organizational structures were already underway. But AI makes the trend permanent: Gartner forecast in October 2024 that 20% of organizations will use AI to eliminate more than half their middle management positions by 2026. Work is being absorbed upward and automated downward, leaving fewer people spanning a wider gap.
And finally, senior experts, who used to be the most protected and the most valued, are simultaneously the ones most actively feeding the machine. Their judgment and pattern recognition are the training signal most worth extracting. The unpriced transaction is least visible to the people whose contribution is worth the most.
The pipeline issue is also a business problem for the companies doing the extracting. Expertise accumulates through a pipeline. Remove the entry ramp, compress the middle, and the senior expertise firms are currently extracting stops being replenished. Firms drawing down that asset without investing in its renewal are liquidating something that does not appear on their balance sheet. The timeline is long enough that nobody in the room will be held accountable. That is precisely what makes it dangerous.
What is actually happening
Two things are now established. First, the compensation models do not work. Second, the cost falls across the entire workforce: entry-level workers locked out, middle managers compressed, senior experts feeding a machine that is closing the pipeline beneath them.
What holds all of this together is the absence of any framework to govern it.
The nearest legal test is NYT v. OpenAI, where a corporation is defending content it legally owns. Even if the Times wins, the ruling says nothing about the worker whose judgment became training signal under a contract never written with this in mind. The law is arguing about the easy question. The hard one has not been asked.
The deeper problem is not that the transaction is unpriced. It is that it is invisible. The New York Times had legible, attributable, ownable content, and even that case is taking years to resolve. Workers have none of that. Their contributions are diffuse, collective, unattributable by design. You cannot price what neither party can point to.
The machine keeps running on what was fed into it until it cannot.
Who paid the humans who fed the machine? And what happens when there are no humans left who remember how?
Nobody did. And nobody has worked out what that costs yet.