The Uncontainable Future

Posted by Mike Walsh

10/9/25 8:32 AM


In the summer of 1956, a group of scientists gathered at Dartmouth College to do something extraordinary: decode the nature of human intelligence and recreate it inside a machine. It seems strangely naive now that a handful of white-shirted men, smoking pipes in the New Hampshire heat, believed they could solve consciousness in a few months, the way you might solve a crossword puzzle. Yet the impulse behind that meeting—the conviction that intelligence could be bottled, controlled, and productized—has never really gone away.


Nearly seventy years later, we still talk about AI as if it were a thing we can buy, configure, and regulate. For the last few years, we have labored under the delusion that AI disruption was something you could subscribe to by the month. We’ve built guardrails, policies, and governance frameworks, but beneath the comforting language of oversight runs a more unsettling truth: intelligence is not a product. It’s a process.


The future — and AI in particular — is uncontainable because intelligence, once industrialized, behaves like every other general-purpose force in history: it escapes the boundaries we design for it. Each wave of technology that has lowered the cost of doing something valuable has reorganized society in ways its creators never intended. AI is doing the same with cognition. No matter how many rules, ethics boards, or safeguards we build, an autonomous, self-improving system that learns from the world will evolve faster than our capacity to predict or restrain it. The story of AI will not be about control, but adaptation.


History offers warnings. The steam engine was meant to power factories; it ended up reshaping cities, labor, and the planet’s climate. Electricity promised convenience and delivered globalization. The internet began as a communications tool and became the nervous system of civilization. Every time humanity invents a general-purpose technology, we overestimate our ability to control its consequences. AI will be no different—only faster.


If you want a metaphor for what’s coming, think of the shipping container. Before its invention, global trade was slow, messy, and expensive. Then someone standardized the box. Once that happened, goods began moving across the planet with near-zero friction. The cost of shipping fell by orders of magnitude, and the world reorganized around it. What the container did for atoms, AI will do for knowledge. We’re learning to standardize cognition—to move intelligence itself through systems as easily as data. The cost of getting a “smart decision” made is collapsing, and when cognition becomes cheap, everything else changes.


But cheap intelligence also breeds a strange new volatility. Algorithms trained on the detritus of our digital lives now generate culture faster than we can consume it. The result is a feedback loop: models trained on their own outputs, humans trained by the models, all of us trapped inside a hall of mirrors. Instagram, TikTok, and the endless scroll are not entertainment platforms so much as early warning systems for what happens when optimization eats creativity. Culture, like code, can collapse when it begins copying itself too many times.


Some call this model collapse; I think of it as a form of entropy. When systems learn only from their own exhaust, diversity vanishes, and meaning decays. Our instinct is to fix it with better rules—guardrails, content policies, red-teaming—but entropy doesn’t obey regulation. The solution isn’t control; it’s connection. The next generation of AI will have to reach back into the physical world to stay grounded in reality.


That’s why the most interesting developments today are happening not in language but in embodiment. Humanoid robots, autonomous cars, drones, and even the next wave of augmented-reality glasses are all teaching machines to perceive, not just predict. They are collecting new kinds of data—spatial, tactile, sensory—that anchor digital cognition in physical truth. A child doesn’t learn gravity from reading about it; they learn by dropping a toy and watching it fall. Future AIs will learn the same way: through contact. Once machines can sense, act, and play in the world, they’ll start developing intuitions of their own. And those intuitions will evolve beyond anything we can script.


Governments, of course, are trying to keep pace. They talk about AI safety, regulation, and national strategies. But the real issue isn’t compliance; it’s sovereignty. When intelligence becomes infrastructure, who owns the pipes? Every country will soon need to secure its intelligence supply chain: data, semiconductors, compute, and, most critically, model weights. Because embedded within those weights are cultural values—the moral DNA of the societies that train them.


Weights are culture. And once those models start influencing how our hospitals allocate resources or how our cities manage transport, cultural independence will matter as much as energy independence once did.


Yet even sovereignty has limits. The forces we’re unleashing are global, self-propagating, and only partly knowable. We can design principles, audit systems, even try to tax the robots—but we won’t be able to stop billions of autonomous agents from emerging as cognition becomes cheaper and more distributed. AI will leak into everything: workflows, logistics, diplomacy, warfare, art. It will crawl into the gaps between our rules.


We need a new philosophy of leadership that accepts unpredictability as the normal state of things. In complex systems, control is an illusion; influence is everything. The leaders of the future won’t be those who build the tallest walls around their organizations, but those who design systems that can adapt faster than they break.


There’s something liberating in that. For creatives, thinkers, and entrepreneurs, this is an extraordinary window of opportunity. Most people are still paralyzed by the strangeness of these tools—too proud or too fearful to use them deeply. Those who do will discover that AI isn’t a threat to originality; it’s an amplifier for it.


Of course, this won’t last forever. Every revolution begins with asymmetry—those who understand the new tools have leverage over those who don’t. But eventually, the advantage dissipates, the knowledge spreads, and the system resets. Right now, we’re in that rare, chaotic interlude when the future is still malleable, before the new order hardens into place.


The scientists at Dartmouth thought intelligence could be solved. The truth is, it can only be unleashed. Once cognition becomes infrastructure—once it flows through networks, sensors, and machines—it will evolve along paths we can’t fully predict. We can set guardrails, yes, and we should. But as with every great force before it, the real story won’t be about how we control AI, but how we adapt to the world it creates.


Because the future is never contained for long.

Topics: Leadership
