There is, perhaps, a small silver lining in the current wave of AI anxiety. Not long ago, the dominant fears revolved around killer robots, runaway superintelligence, and apocalyptic scenarios that ended with data centers being nuked from space. Today the panic is more grounded, and in many ways more sophisticated. We are no longer imagining machines conquering humanity; we are worrying about white-collar unemployment ticking above 10%, mortgage books wobbling in San Francisco, and private credit portfolios unraveling because software agents can write code faster than junior analysts. The monsters have moved from science fiction to the balance sheet.
The 2028 Global Intelligence Crisis from Citrini captures this shift perfectly. Subtitled “The Consequences of Abundant Intelligence,” it presents a fictional macro memo from the near future in which cheaper, more capable AI triggers a white-collar job apocalypse, hollows out discretionary spending, and destabilizes housing and credit markets. It is cleverly constructed and economically literate, and its viral spread reflects a genuine unease among investors and executives. Yet for all its sophistication, the argument ultimately rests on a mispricing of what abundant intelligence actually means.
What the authors frame as an Intelligence Collapse Scenario is more accurately understood as an Intelligence Reconfiguration Scenario. The difference is not semantic. It is structural. The real question, one I have been exploring extensively in my own work on abundant intelligence, is not whether digital labor transforms the economy, but how that transformation is architected: who retains authority, who captures the surplus, and how judgment is redistributed when execution becomes abundant.
Written as a retrospective memo from June 2028, the essay sketches a world in which “abundant intelligence” delivers surging productivity alongside double-digit unemployment, as white-collar professionals, once the engine of discretionary consumption, are structurally displaced. The authors’ central mechanism is what they call “ghost GDP”: output and corporate profits rise on paper, but income no longer circulates through households because machines do not earn salaries or spend money. As wages contract and consumption weakens, asset prices and credit structures built on stable high-income employment begin to crack. Each firm’s rational decision to substitute software for labor aggregates into a systemic feedback loop, where declining demand justifies further automation, reinforcing what they portray as an intelligence displacement spiral with no obvious stabilizer.
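The memo's feedback loop is easy to state as a toy model: wages are the only demand channel, so each round of automation shrinks demand, and weaker demand motivates faster automation the next round. A minimal sketch of that mechanism under purely illustrative parameter values (the rates below are assumptions for exposition, not figures from the memo):

```python
# Toy sketch of the memo's "ghost GDP" displacement spiral.
# All parameter values are illustrative assumptions, not the memo's figures.

def displacement_spiral(rounds=5, employment=1.0, automation_rate=0.10,
                        demand_sensitivity=0.5):
    """Each round, firms substitute software for a share of jobs; the lost
    wages reduce demand, and weaker demand accelerates automation."""
    history = []
    for _ in range(rounds):
        employment *= 1 - automation_rate   # jobs substituted away
        demand = employment                 # wages as the only demand channel
        # weaker demand pressures margins, speeding up the next round
        automation_rate *= 1 + demand_sensitivity * (1 - demand)
        history.append(employment)
    return history

for share in displacement_spiral():
    print(f"employment share: {share:.3f}")
```

The spiral only closes because the model hard-codes the memo's assumption that demand equals wage income; the critique that follows is aimed at exactly that line.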
It is a compelling story. But it rests on a critical modeling assumption that deserves scrutiny: that machine intelligence primarily substitutes for human work, and that wages are the only meaningful transmission mechanism of economic value. The memo treats intelligence as if it were a fixed pool of salaried labor. When machines perform that labor, value supposedly disappears from the system unless it flows through paychecks. That is a 20th-century production model applied to a 21st-century technology.
The deeper question is not whether machines can perform more tasks. It is how organizations reallocate judgment, authority, and ownership when intelligence becomes abundant. Modern enterprises are not simply collections of jobs. They are architectures of decision rights. Someone allocates capital. Someone signs off on compliance. Someone bears legal liability. Someone determines acceptable risk. AI systems can draft, optimize, simulate, and execute. They cannot absorb responsibility in the way institutions require.
When intelligence becomes abundant, value does not evaporate. It migrates. The constraint shifts from execution to orchestration. As digital labor absorbs routine analysis, drafting, coding, optimization, and coordination, the residual human contribution does not simply shrink in importance. In many cases, it becomes more leveraged. When a single executive, engineer, or strategist can direct systems that generate ten times the output of a traditional team, the marginal impact of their judgment increases, not decreases. The value of being correct when machines execute at scale rises sharply.
Consider how capital markets reward decision-making authority today. Portfolio managers do not earn fees because they personally process every data point. They earn fees because their judgment governs large pools of capital. The more leverage embedded in the system, the more valuable the individual exercising oversight becomes. Digital labor functions in a similar way. When output scales non-linearly but decision rights remain concentrated, the marginal productivity of judgment rises. Digital labor does not erase authority. It amplifies the consequences of those who hold it.
The crisis scenario assumes a simple substitution dynamic: one AI agent replaces one $180,000 employee. Multiply that across the economy and aggregate demand collapses. Yet real organizations rarely operate through one-to-one replacement. They operate through reconfiguration. Some roles disappear. Others expand. A smaller number of individuals may control far more productive systems. Income distribution may widen. But that is not the same as permanent economic contraction. If AI substitutes 50% of white-collar labor and multiplies the productivity of the remaining 50% by five, the income dynamics look radically different from pure elimination.
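The gap between the two readings is back-of-the-envelope arithmetic. The 50% substitution and fivefold productivity figures come from the paragraph above; the workforce size is an illustrative assumption, and total wages stand in as a crude proxy for output:

```python
# Pure elimination vs. reconfiguration, using the text's 50% / 5x figures.
# Workforce size is an illustrative assumption; wages proxy for output.

workers = 1_000_000
salary = 180_000                              # the memo's benchmark salary

baseline_output = workers * salary

# Scenario A: pure elimination -- half the jobs vanish, nothing else changes.
elimination_output = (workers // 2) * salary

# Scenario B: reconfiguration -- half the jobs vanish, but the remaining
# half direct systems that make them five times as productive.
reconfigured_output = (workers // 2) * salary * 5

print(f"baseline:        {baseline_output:,}")
print(f"elimination:     {elimination_output:,}")   # half of baseline
print(f"reconfiguration: {reconfigured_output:,}")  # 2.5x baseline
```

Under elimination, output halves; under reconfiguration, it is 2.5 times the baseline, concentrated in fewer hands. Distribution worsens, but the aggregate does not contract.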
The memo also models only one economic effect of cheaper intelligence: substitution. It largely ignores two others that accompany every dramatic fall in input cost: scale expansion and new use cases. When a core production factor becomes cheaper, usage tends to explode. Lower-cost intelligence reduces the price of experimentation. It lowers barriers to entry. It enables new products and services that were previously uneconomic. Legal advice, design support, financial modeling, research assistance, and personalized education have historically been constrained by scarce human hours. As digital labor lowers those constraints, the total addressable market for intelligence-intensive work expands.
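The scale-expansion effect is ordinary price-elasticity arithmetic: when demand for intelligence-intensive work is sufficiently elastic, a fall in its unit price raises total spending on it rather than lowering it. A minimal sketch with a constant-elasticity demand curve (the elasticity value is an illustrative assumption):

```python
# Constant-elasticity demand: quantity = k * price**(-elasticity).
# When elasticity > 1, total spending (price * quantity) RISES as price falls.
# The elasticity value below is an illustrative assumption.

def total_spending(price, elasticity=1.5, k=100.0):
    quantity = k * price ** (-elasticity)
    return price * quantity

before = total_spending(price=1.0)   # intelligence priced as scarce human hours
after = total_spending(price=0.1)    # intelligence at a tenth of the cost

print(f"spending before: {before:.1f}")
print(f"spending after:  {after:.1f}")
```

With an elasticity of 1.5, a 90% price cut roughly triples total spending: usage grows faster than the price falls, which is the pattern every earlier collapse in the cost of computation, bandwidth, and storage has followed.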
Abundant intelligence increases the number of problems worth solving. When launching a company, prototyping a product, or analyzing a market requires fewer human hours and less capital, more individuals can participate. Each new venture generates demand for coordination, oversight, trust-building, governance, and strategic direction. In that sense, digital labor expands the surface area of the economy itself. Execution becomes cheaper, but the need for judgment does not contract. It often intensifies.
This does not imply a frictionless transition. Routine cognitive labor will be commoditized. Middle layers may compress. Inequality may widen before it stabilizes. But the equilibrium outcome is unlikely to be mass professional obsolescence. It is more plausibly a bifurcation: execution becomes abundant, while high-leverage judgment, accountability, and system design become more valuable.
Many AI doomer scenarios share a hidden assumption: that artificial intelligence evolves rapidly while humans, organizations, and markets remain fixed in place. Capabilities improve. Tasks disappear. Wages fall. Systems fracture. Yet history suggests the opposite dynamic. Every general-purpose technology has triggered dislocation followed by reinvention, with new skills repriced, institutions redesigned, and entirely new industries emerging around the technology itself. The Industrial Revolution reorganized labor and capital. Electrification reshaped production. The internet created markets that were previously unimaginable. To bet that AI will transform cognition while leaving human adaptability unchanged is to ignore the most consistent pattern in economic history.
Abundant intelligence will commoditize certain forms of work. It will also elevate what remains scarce: judgment under uncertainty, ethical accountability, cross-domain synthesis, and the willingness to assume responsibility when automated systems fail. The real risk is not that machines change everything. It is that we misinterpret what is changing. Intelligence is becoming abundant. Judgment is not.
The future will belong not to those who resist digital labor, nor to those who deploy it blindly, but to those who understand how to redesign authority, ownership, and value creation around it.