The Future Is Elsewhere

When Work Isn’t a Workflow

Written by Mike Walsh | 3/19/26 5:43 PM

The prevailing story about AI and jobs is seductively simple: break work into tasks, measure how many can be automated, and once enough of them are, the job disappears. That logic works well for routine work. But in high-stakes, human-facing roles, especially those performed by agents and advisors, it rests on a fragile assumption: that jobs are just workflows, collections of discrete steps that can be taken apart without changing where value is actually created—or whether it can be created at all.

A recent report from Anthropic tries to quantify AI’s impact on the labor market through a measure of “observed exposure,” combining theoretical LLM capability with real-world usage from millions of Claude interactions and mapping both onto occupational task data from O*NET. Its logic is straightforward: the more of a job’s tasks AI can do, and is already doing, the more exposed that job is to disruption. It is a sophisticated extension of the task-based view of work, and it produces compelling signals about where AI is already active. But it still assumes that if you can decompose a job, you can understand its value. In many roles, especially those built around human judgment and coordination, that is precisely the mistake.
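To make the task-based logic concrete, here is a toy version of an exposure score, assuming a hypothetical per-task capability rating and observed-usage rate. The function, field names, weights, and figures are all invented for illustration; this is not Anthropic's actual formula.

```python
# Illustrative sketch of a task-based "exposure" score: for each task,
# combine what AI can do (capability) with what it is already doing
# (observed usage), weighted by the task's share of the job.
# All numbers below are made up.

def exposure_score(tasks):
    """Weighted average of capability x observed usage across a job's
    tasks, each rated in [0, 1]. Returns 0.0 for an empty task list."""
    if not tasks:
        return 0.0
    total_weight = sum(t["weight"] for t in tasks)
    weighted = sum(
        t["weight"] * t["capability"] * t["observed_usage"] for t in tasks
    )
    return weighted / total_weight

# Hypothetical occupation: a few O*NET-style tasks with invented ratings.
real_estate_agent = [
    {"task": "draft listing copy",   "weight": 0.2, "capability": 0.9, "observed_usage": 0.8},
    {"task": "price analysis",       "weight": 0.3, "capability": 0.8, "observed_usage": 0.5},
    {"task": "negotiate with buyer", "weight": 0.5, "capability": 0.3, "observed_usage": 0.1},
]

print(round(exposure_score(real_estate_agent), 3))  # → 0.279
```

Note what the toy numbers show: the negotiation task carries the most weight yet contributes least to the score, and the aggregate number says nothing about whether those low-scored tasks are where the value is actually created.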

To see why, it is useful to borrow a concept from early twentieth-century psychology. The Gestalt theorists argued that we perceive patterns, not parts. A melody is not experienced as a sequence of notes, but as a whole. Rearrange the notes, and the melody disappears, even if every individual component is still present. As Kurt Koffka put it, the whole is not simply more than the sum of its parts; it is different in kind.

The same is true of many forms of work, especially those built around human interaction. What looks like a sequence of tasks on paper is, in practice, an evolving social process. Each interaction changes the next. Meaning is interpreted, not just transmitted. Decisions are shaped by timing, framing, and trust as much as by information. The outcome is not produced by completing steps, but by how people respond to them.

Consider a real estate transaction. It can be mapped as a series of steps: pricing, listing, marketing, negotiation, closing. But that is not how the deal actually happens. A buyer hesitates, not because of price, but because of uncertainty. A seller rejects an offer because it feels wrong, not because the numbers do not work. A shift in tone, timing, or phrasing can move a negotiation forward or cause it to collapse. The role of the agent is not to move the process along a checklist, but to manage a moving target of perception, emotion, and incentive. The outcome emerges from how those elements play out over time.

Financial advice operates in much the same way. Portfolio construction is increasingly straightforward. Models can optimize allocations, simulate risk, and rebalance continuously. Yet the defining moments in a client relationship rarely occur in these analytical phases. They occur when markets fall, uncertainty spikes, or life changes suddenly, and clients feel the urge to act against their own long-term interests. The critical work is not choosing the portfolio. It is navigating the client’s response to uncertainty. That is not a task. It is an unfolding interaction.

This is where task-based models begin to break down. They measure which parts of a job can be substituted, but miss how the whole situation actually works. You can automate analysis, generate documents, and manage communication flows, and still not control the outcome. Completing 80 or even 90 percent of the identifiable tasks in a role does not guarantee that a deal closes or that a decision holds under pressure. Those outcomes depend on moments of judgment, timing, trust, and emotional coordination that are not easily reduced to tasks in the first place.
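The gap between task completion and outcomes can be illustrated with a deliberately crude model, assuming a deal closes only if every step succeeds. All probabilities here are invented for the sake of the sketch.

```python
# Toy model: a deal as a chain of steps that must all succeed.
# Nine steps are automated to near-perfection; one judgment-heavy
# human moment remains uncertain. Numbers are invented.
automated_steps = [0.99] * 9   # analysis, documents, communication...
human_step = 0.5               # trust, timing, emotional coordination

p_close = human_step
for p in automated_steps:
    p_close *= p

print(round(p_close, 3))  # → 0.457
```

Even driving the nine automated steps to a perfect 1.0 caps the close rate at the human step's 0.5: automating 90 percent of the tasks does not automate the outcome.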

A glimpse of this shift is already visible in the market. The recent convergence between Rocket, Redfin, and generative AI platforms is a case in point. Rocket’s acquisition of Redfin, followed by Redfin’s launch of an AI-powered home search experience inside ChatGPT, points toward a fully integrated, AI-native transaction stack. Discovery, pricing, brokerage, financing, and customer interaction are being pulled into a single conversational flow, collapsing what was once a fragmented process into a continuous digital experience. On one level, this is the logical endpoint of workflow automation: faster transactions, greater transparency, and radically lower friction.

But on another level, it exposes the limits of the workflow model itself. As the informational and transactional layers of an industry collapse into software, the system does not become simpler. It becomes faster, more dynamic, and often more volatile. More data does not eliminate uncertainty. It amplifies it. And as more of the customer journey compresses into software, the remaining human moments become more consequential. The role of the broker does not disappear. It becomes more valuable, precisely because it sits at the point where the process stops being computational and starts being emotional, interpretive, and irrevocable.

What AI changes, then, is not whether these jobs exist, but how they are structured. The lower layers of the work—analysis, preparation, routine communication—are increasingly handled by machines. The human role becomes more concentrated in moments that require interpretation, alignment, and commitment. The job compresses at the bottom and intensifies at the top. In fact, many of these roles, especially those requiring complex human coordination, may move toward the frontier of “high exposure” without triggering a white-collar collapse, because the work that remains is the work that matters most, and the people doing it will operate with far greater leverage than before.

This creates a less obvious but more troubling effect. Many of these professions have historically depended on apprenticeship. Junior roles provide exposure to real-world situations, allowing individuals to develop judgment over time. If AI removes much of this early-stage work, the training ground for these capabilities begins to disappear. We may become highly effective at automating large portions of the work, but less effective at developing the people who can do what remains.

The deeper point is that not all work can be broken down without changing what makes it valuable. When tasks are independent and repeatable, decomposition enables automation and scale. But when outcomes depend on how people interpret and respond to each other, breaking the work apart can strip out the very dynamics that drive results. Some jobs are workflows. Those will be automated with increasing precision. Others are social processes, where outcomes emerge through interaction over time. In those domains, AI does not eliminate the work. It raises the stakes of what remains human. AI will do most of the work. But someone still has to make it work together.

For leaders, this requires a shift in perspective. The critical question is no longer which roles have the most automatable tasks. It is where value depends on human judgment, trust, and coordination under uncertainty. Those are the roles that will not disappear but will be redefined. And they are the ones that will matter most.

Salespeople often assume they will be first in the firing line as AI reshapes work, a modern echo of Willy Loman watching the world move on from a model of selling that no longer works. There is some truth in that. The mechanics of the job are changing fast. But for those whose real work is helping other humans decide, commit, and act, the future will not be defined by how many tasks machines can perform. It will be defined by the value of what happens after those tasks are done. When value is created between people, not within steps, breaking the work apart risks breaking what makes it work.