
For years, Waymo’s autonomous vehicles were known as the politest drivers in San Francisco. They came to full stops, waited patiently at four-way intersections, yielded generously, and behaved with a kind of algorithmic courtesy that seemed almost naive. Then, as the Wall Street Journal recently reported, they began to behave very differently: darting around double-parked trucks, merging aggressively, accelerating the instant a light turned green, even performing the occasional illegal U-turn. Basically, like a NYC taxi driver.
But here’s the plot twist. This wasn’t a malfunction. It was an upgrade. Waymo’s leaders explained that the cars needed to become “confidently assertive” because extreme politeness was creating safety issues. A vehicle that followed the rules perfectly was behaving in ways that other drivers did not expect. And in traffic, unpredictability is a form of risk.
This is a revealing lesson for the broader challenge of AI safety. We tend to assume that the safest systems are the ones with the most rigid guardrails. But in many real-world settings, risk arises not from breaking rules, but from breaking expectations.
I learned this years ago when I lived in Istanbul. I love driving, especially loud, classic American muscle cars. But I am also, frankly, a cautious, rule-following driver, and I quickly realized I could not safely drive in Turkey. Istanbul’s traffic isn’t chaotic; it runs on a different logic of coordination. Drivers move in fast, assertive rhythms, signaling intention less through indicators than through momentum and micro-negotiations. My careful Western driving style was out of sync with the system. I was the anomaly, and therefore the most dangerous person on the road.
This is exactly the dynamic that game theory helps illuminate. Human environments operate less like rulebooks and more like coordination games, where stability comes from shared expectations about how others will behave. These equilibria vary by culture, city, even neighborhood. They evolve through practice, not regulation. In such systems, a player who obeys the written rules but violates the unwritten equilibrium becomes unpredictable — and unpredictability destabilizes the game for everyone else.
Early autonomous vehicles made this mistake. They entered the traffic system playing a different game. Their rule-bound behavior created a kind of strategic incoherence: humans behaved according to local norms, while the AI behaved according to formal laws. Safety problems emerged not because the AI was reckless, but because it was out of step with the prevailing equilibrium.
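To make the coordination-game framing concrete, here is a minimal, purely illustrative sketch in Python. The two strategies ("assertive" for the local norm, "deferential" for strict rule-following) and every payoff number are invented for illustration; nothing here reflects how Waymo actually models traffic. The point is simply that in a coordination game, how well a strategy performs depends on what everyone else is playing.

```python
# Toy coordination game: two drivers meet at a merge. Each either matches the
# local norm ("assertive") or plays strictly by the book ("deferential").
# Matching conventions rewards predictability; mismatching creates hesitation.
# All payoff numbers are made up for illustration.

PAYOFFS = {  # (my strategy, other driver's strategy) -> my payoff
    ("assertive", "assertive"): 3,      # smooth, mutually expected flow
    ("deferential", "deferential"): 2,  # slower, but still predictable
    ("assertive", "deferential"): 0,    # mismatch: hesitation, near misses
    ("deferential", "assertive"): 0,
}

def expected_payoff(my_strategy: str, p_assertive: float) -> float:
    """Expected payoff against a population that plays 'assertive' with
    probability p_assertive and 'deferential' otherwise."""
    return (p_assertive * PAYOFFS[(my_strategy, "assertive")]
            + (1 - p_assertive) * PAYOFFS[(my_strategy, "deferential")])

# In a city where roughly 90% of drivers follow the assertive norm, the
# rule-perfect deferential strategy is the off-equilibrium one, and it loses:
for strategy in ("assertive", "deferential"):
    print(f"{strategy}: {expected_payoff(strategy, p_assertive=0.9):.1f}")
# assertive: 2.7
# deferential: 0.2
```

In this little game, all-assertive and all-deferential are both stable equilibria; neither is "correct" in the abstract. The danger lies in playing the convention of a different city, which is exactly what the early, hyper-polite robotaxis were doing.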
This brings us to an idea I explored in a Harvard Business Review article a little while back. When determining how much autonomy to grant AI agents, leaders usually focus on how big a potential risk is. But a more useful question is what kind of risk it is. Some problems are complicated — governed by fixed rules and stable relationships. Others are ambiguous — shaped by context, norms, and feedback loops. And some are uncertain, where neither rules nor data can reliably guide decisions.
Driving, despite its legal structure, is not complicated. Traffic laws are complicated. Driving is ambiguous.
Ambiguous problems are ones where AI systems become safer by engaging more deeply with the real world. More context improves their judgments. More interaction refines their models. As I wrote, “AI agents won’t always get it right, but they learn fast.” That is why Waymo doesn’t rely on teleoperation when a car is confused. Human operators offer guidance, but the AI must continue making decisions. The system strengthens not through constraint, but through exposure. Seen through this lens, Waymo’s shift from hyper-cautious to assertive makes sense.
The safest behavior is not always the most conservative. It is the behavior that best matches the expectations of the people around the system. A decisive merge, a quick acceleration, or a tactical lane change may appear aggressive, but if it aligns with local driving norms, it is actually the more predictable and therefore safer choice.
Economists have a parallel concept. Milton Friedman argued that inflation is driven not just by economic fundamentals but by expectations: once people expect inflation, they act in ways that make it real. Human systems are governed by beliefs about how others will behave. Traffic is no different. An autonomous vehicle that violates those beliefs, even while obeying the law, injects uncertainty into a coordination game that depends on shared assumptions.
This insight applies far beyond robotaxis. As AI agents proliferate inside organizations, they will increasingly operate in domains filled with tacit norms: customer service, workflow orchestration, decision support, compliance, prioritization. These environments resemble ambiguous games, not rigid rule systems. The most dangerous agent will not be the bold one, but the one that acts out of sync with human expectations.
Current AI guardrail strategies don’t fully account for this. They treat AI systems as if they operate in complicated domains where the primary goal is to restrict behavior. But ambiguity requires a different approach. The goal is not to eliminate variation, but to shape behavior so that systems remain consistent with the equilibrium of the environment they inhabit.
This is the real challenge of autonomous systems. We are not simply programming machines to follow rules. We are teaching them how to participate in human coordination. The next frontier of AI safety is not technical constraint, but behavioral coherence — designing agents that understand the social, cultural, and contextual signals that make their actions legible and predictable to the humans around them.
Waymo’s assertive driving is an early glimpse of this future. The first generation of AI systems obeyed the rules. The next generation must understand the game.

