
In 2013, a quiet software developer known to his colleagues as "Bob" became briefly infamous when investigators discovered that he had secretly outsourced his entire job to a contractor in China. He had mailed his security credentials overseas, paid the contractor a fraction of his six-figure salary, and spent his own workdays browsing Reddit and watching cat videos. Yet his performance reviews were stellar. Bob was, by every visible metric, one of the company’s best engineers. When his secret arrangement was uncovered, he was fired. But a decade later, his story looks less like a strange aberration and more like a preview of work in the age of AI.
Bob outsourced his job to a human. Today, millions of workers quietly outsource parts of their roles to machines. And unlike Bob, most of them are doing it not to avoid work, but to redesign it.
What’s emerging is a new form of unsanctioned productivity: employees quietly using AI systems to automate tasks, streamline workflows, and shift routine execution to autonomous agents without informing their employers. This behavior has been called many things — from “bringing your own AI” to “rogue automation” — but for me, the term that best captures it is shadow labor. Like shadow IT before it, shadow labor is a bottom-up response to organizational constraints. But it is far more personal, far less visible, and ultimately far more transformative.
Shadow IT, which emerged in the early 2000s, was a reaction to slow-moving corporate technology. Employees brought their own devices, downloaded unsanctioned software, or spun up cloud services to get their work done. Companies initially treated it as a threat before gradually recognizing it as a signal. Shadow IT exposed inefficiencies. It revealed unmet needs. It surfaced the most inventive employees — the ones who hacked together solutions when formal systems failed them.
Shadow labor, by contrast, doesn’t just change the tools that employees use. It changes the location of labor itself. When workers use ChatGPT to write reports, Copilot to generate code, or custom agents to automate correspondence, they are not simply augmenting their workflow. They are reallocating execution. The work still gets done — often faster, more accurately, or more consistently — but the human’s effort is no longer the primary source of that output.
This shift poses a direct challenge to managerial assumptions about effort, authorship, and contribution. If a deliverable can be produced in minutes by an AI agent, what does “hard work” mean? If results improve, does the process matter? And if an employee’s value lies increasingly in how they design, supervise, and refine agentic workflows, rather than how many keystrokes they produce, what exactly defines a “good employee”?
Many managers interpret shadow labor as quiet deception — an attempt to evade the difficult parts of a job. But there is a more accurate and more revealing way to understand it: job refactoring. Employees using AI agents are not trying to shirk responsibility. They are redesigning the pathway to the outcome. They are asking the same question that has historically separated high performers from everyone else: What is the simplest, smartest, most elegant way to get this done?
When people ask me who they should hire, I sometimes say — half joking, half serious — that they should look for the smartest lazy people they can find. Smart, lazy people have no interest in grinding through a job as written. They want to rewrite it. They want to automate the tedious parts so they can focus on the work that actually requires judgment and imagination, the work that delivers real impact.
Seen through this lens, the instinct behind shadow labor is not subversive at all. In many organizations, it may already be the leading indicator of future excellence.
And some companies are beginning to recognize this.
Consider Skims and Good American. Cofounder Emma Grede introduced a bonus system designed to challenge teams to find agentic applications inside their departments. Rather than rewarding perseverance through manual processes, she rewarded employees for discovering ways to hand work to AI. The biggest breakthrough did not come from marketing or creative — it came from the accounts team, which used AI to overhaul its chargeback systems. The resulting redesign saved the company hundreds of thousands of dollars, a vivid demonstration of what happens when employees are invited to treat automation as innovation rather than subversion.
Microsoft offers another clue to the future. The company has made it clear internally that using AI tools such as GitHub Copilot is now part of performance expectations. AI fluency is no longer an experiment; it is table stakes. Employees who fail to adopt these tools risk being seen not as diligent but as disengaged.
Meta has gone even further. Managers have been explicitly advised that low use of internal AI tools could negatively affect employee evaluations. The company has reframed agentic tool use as evidence of adaptability, literacy, and future readiness — not as discretionary enhancement but as professional obligation.
At Shoosmiths, the UK law firm, the incentives are more explicit still. Leaders created a one-million-pound reward pool tied directly to generating one million prompts with approved AI tools. Prompting became billable behavior. Intelligent orchestration became a form of recognized contribution.
Across these examples, a pattern emerges. The companies at the leading edge of AI adoption are not cracking down on shadow labor. They are formalizing it. They are taking the behaviors that once lived in the shadows and pulling them into the center of organizational performance. They are rewarding the employees who ask where machines should step in — not the ones who insist on protecting manual effort as a badge of honor.
This marks a deeper transition. The shift from shadow labor to sanctioned agentic work is not simply a matter of tools. It is a shift in organizational logic. Productivity in an age of abundant intelligence no longer hinges on human exertion alone. It hinges on how intelligently human effort is multiplied.
Your best employee may no longer be the one who works the hardest or logs the longest hours. It may be the one who quietly eliminates entire categories of effort, who replaces repetition with agents, and who frees their mind for decisions of higher consequence. In that sense, shadow labor may be one of the clearest early signals of twenty-first-century leadership potential.
And this, more than anything, is where leadership must evolve. Organizations that cling to traditional definitions of diligence will push their most innovative people underground. They will force ingenuity into secrecy. They will frustrate ambitious employees who see what is possible but are not allowed to act on it. Meanwhile, companies that reward agentic behavior will surface new sources of leverage, accelerate learning cycles, and cultivate cultures where people are valued not only for what they do, but for how they redesign what is possible.
Shadow labor isn’t going away. It is scaling. As AI tools become more powerful and ubiquitous, the gap between those who use them effectively and those who do not will widen. Organizations that ignore this trend risk not only security breaches but cultural stagnation. They will miss the early signals of transformation and fall behind competitors willing to evolve.
None of this can be solved simply by giving everyone a chatbot license. The real work lies in redefining roles, restructuring workflows, and reimagining how teams operate when some work is done by human minds and some by synthetic ones. Managing this hybrid output requires new KPIs, new trust models, and a new kind of leadership literacy — one centered on orchestration rather than oversight.
What looks today like subversive behavior will soon become standard practice. Shadow labor is not a deviation from the future of work. It is the future of work — just unevenly distributed.

