
With all the discussion about AI agents lately, you might be wondering: how exactly do you count them? If multiple agents collaborate to resolve a customer issue or approve a loan application, does that represent one digital worker or many? The question may sound trivial, but it will soon matter a great deal. Organizations will eventually track digital headcount the same way they track human employees today.
Workforces used to be easy to measure because they were made of people. Employees had identities, job descriptions, and clear places on an organizational chart. You could count accountants, engineers, or customer service representatives with simple headcount. AI agents break this model. They are not discrete in the way humans are. Agents can spawn sub-agents, operate for milliseconds, run invisibly inside software systems, or collaborate in networks that blur the line between tool and worker. What appears externally as a single agent may internally be an orchestration of models, prompts, memory systems, policy engines, and software tools. Technically the system is a constellation of components. Organizationally, however, it may still behave like a single role.
This is where the counting problem begins.
Different parts of the organization will see the same system very differently. To product marketing it may appear as one AI agent. To the software architecture team it may be a network of micro-agents. To cloud infrastructure it could represent hundreds of model calls. Finance, meanwhile, may see nothing more than a few cents of inference cost.
A simple rule of thumb helps cut through this complexity. What matters is not how many models are running, but how many decision-making roles exist inside the enterprise and how those roles interact. An agent is not defined by the number of tools behind it, but by the unit of responsibility it represents within the system. In practice, an agent is the smallest unit of autonomous responsibility in a digital workforce.
Many production agents are really bundles of models, prompts, memory systems, and tools working together behind a single interface. A customer support agent, for example, might include a reasoning model, a retrieval system, a policy engine, a summarizer, and an action executor. Technically that is a multi-agent pipeline. From the perspective of the enterprise, however, it functions as one digital worker with a defined role. If several internal components collaborate but consistently produce one coherent decision or action, it is best understood as a single agent with internal architecture, much like a human employee who relies on spreadsheets, software, and colleagues to do their job.
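The single-worker-with-internal-architecture idea can be made concrete with a small sketch. The class and component names below are illustrative stubs, not a real framework: several internal components are composed behind one interface that produces one coherent action per ticket.

```python
class CustomerSupportAgent:
    """One external role composed of several internal components.

    Internally a multi-step pipeline; externally a single digital
    worker with a defined responsibility. All names are hypothetical.
    """

    def __init__(self, reasoner, retriever, policy_engine, summarizer, executor):
        self.reasoner = reasoner            # proposes a resolution
        self.retriever = retriever          # gathers relevant context
        self.policy_engine = policy_engine  # checks the proposal against policy
        self.summarizer = summarizer        # writes the customer-facing reply
        self.executor = executor            # carries out the approved action

    def handle(self, ticket: str) -> str:
        """Run the internal pipeline, but emit one coherent outcome."""
        context = self.retriever(ticket)
        proposal = self.reasoner(ticket, context)
        if self.policy_engine(proposal):
            self.executor(proposal)
            return self.summarizer(proposal)
        return "escalated to a human"
```

The point of the sketch is the boundary: however many components run inside `handle`, the organization sees one identity taking one action, which is why the bundle counts as a single agent.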
Research in technology governance highlights why this distinction matters. Sociologist Madeleine Clare Elish coined the term “moral crumple zone” to describe what happens when complex automated systems fail. Just as the crumple zone in a car absorbs the force of a collision, responsibility in automated systems often collapses onto the nearest human operator, even when the broader system design shaped the outcome. When organizations cannot clearly identify which digital systems act with autonomy or authority, accountability defaults to individuals rather than the architecture that produced the decision. Defining the boundaries of digital workers therefore becomes more than a technical exercise. It is a way of ensuring that responsibility is assigned where it actually belongs.
If agents are going to function as digital workers, leaders need a simple way to identify them. Here are some practical rules that might help:
The first rule is identity. If a system has a persistent identity inside the organization, it begins to behave like a digital worker. It can authenticate into systems, receive permissions, and perform actions that can be traced back to that identity. If a system cannot be independently identified and audited, it is probably just a component inside a larger architecture.

The second rule is lifecycle control. A system that can be provisioned, updated, paused, or retired independently has an operational lifecycle. That means it can be managed much like organizations manage applications or service accounts. By contrast, a micro-agent that appears only as part of an orchestrated chain of tasks is closer to a function than a worker.
The third rule is accountability for outcomes. A digital worker should own a measurable task or result. An IT support agent might be responsible for responding to service tickets within a defined service level. If a system contributes only a hidden sub-step within a larger workflow, it likely belongs to the system architecture rather than the workforce.
Together these rules create a surprisingly clear boundary. If a system has a distinct identity, an independent lifecycle, and responsibility for a defined outcome, it begins to resemble a digital employee. If not, it is better understood as infrastructure.
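The three rules amount to a simple boolean test. As a hypothetical sketch (the trait names are illustrative, not drawn from any standard), a system is a digital worker only if all three hold:

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Traits of a candidate system, one per rule above."""
    has_persistent_identity: bool    # rule 1: identifiable and auditable
    has_independent_lifecycle: bool  # rule 2: provisioned/retired on its own
    owns_measurable_outcome: bool    # rule 3: accountable for a defined result

def classify(system: SystemProfile) -> str:
    """Digital worker only if all three rules hold; otherwise infrastructure."""
    if (system.has_persistent_identity
            and system.has_independent_lifecycle
            and system.owns_measurable_outcome):
        return "digital worker"
    return "infrastructure"

# An IT support agent with its own credentials, deployment, and SLA:
classify(SystemProfile(True, True, True))    # "digital worker"
# A retrieval micro-agent that exists only inside an orchestrated chain:
classify(SystemProfile(False, False, True))  # "infrastructure"
```

The conjunction matters: a component that owns an outcome but has no independent identity or lifecycle still lands on the infrastructure side of the boundary.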
But what happens when components begin to behave like independent actors? If systems have distinct roles or objectives, if they operate asynchronously, coordinate decisions with one another, or expose separate identities and interfaces to the organization, then you are no longer looking at one agent. You are looking at a team of agents. At that point the system begins to resemble a small digital organization rather than a single worker augmented by technology.
Consider an aviation analogy. A modern aircraft cockpit contains autopilot systems, navigation computers, sensor networks, and sophisticated flight software performing thousands of calculations every second. Internally it is an extraordinarily complex digital environment. Yet operationally we still treat autopilot as part of a single role: the aircraft’s flight control system assisting the pilot.
Air traffic control, by contrast, is a distributed coordination system. Radar networks, aircraft, scheduling systems, and human controllers interact across towers and control centers. Each participant has its own responsibilities, authority, and identity within the system. What emerges is not one augmented operator but a network of interacting roles. The difference is not the number of machines involved. It is whether the system supports one role or many.
This shift from architecture to accountability is already appearing in governance frameworks. The U.S. National Institute of Standards and Technology has begun exploring how agent systems should be identified, authenticated, and authorized as they interact with digital infrastructure. The emphasis on identity and authorization reveals an important assumption: if agents are going to act autonomously inside enterprise systems, they must be treated as identifiable entities whose actions can be traced and governed.
International governance frameworks are moving in a similar direction. Standards such as ISO/IEC 42001:2023 for AI management systems require organizations to define the scope of their AI deployments, manage them across their lifecycle, and assign accountability for their behavior. These frameworks do not attempt to catalog every model or algorithm inside a system. Instead, they focus on identifying which systems operate as actors inside organizational processes and ensuring those actors can be governed responsibly. Implicitly, they adopt the same principle: what matters is not the internal architecture of AI systems, but the role they play inside the enterprise.
For most executives, the debate about counting digital workers will likely surface first on the org chart. Should AI agents appear alongside human employees? In 2024, the HR software company Lattice briefly experimented with allowing companies to list AI employees in its platform, only to reverse course after a public backlash. At the time the idea seemed provocative, even absurd. In retrospect it may prove inevitable. If digital workers have identities, permissions, responsibilities, and measurable outcomes, they begin to resemble organizational actors rather than tools. The more interesting question may not be whether agents appear on org charts, but how their presence reshapes them. As digital workers take on operational decisions once handled by layers of management, hierarchies built around supervising people may give way to flatter structures designed to coordinate human and machine decision-making.
Yet even this debate about org charts may be missing the deeper shift underway. Org charts, after all, are still a way of counting people and managing layers of control. Agentic systems are beginning to change the underlying economics of work itself. The real transformation in organizations is not simply the number of digital workers they deploy, but the amount of decision-making capacity embedded in their operations.
Historically, firms measured productive capacity through simple metrics such as headcount or labor hours. Those measures made sense in an industrial economy where human attention was the primary constraint. Agentic systems change that equation. As organizations embed intelligence into everyday processes, the relevant question shifts from how many workers exist inside a workflow to how much cognition the system can execute. The more meaningful metrics may become things like decision throughput or the number of tasks completed autonomously. Instead of asking how many workers are involved in a process, leaders may soon ask how many decisions that process can execute per second.
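Those candidate metrics are easy to define. A minimal sketch, with purely illustrative numbers and metric definitions (what counts as a "decision" would be organization-specific):

```python
def decision_throughput(decisions: int, window_seconds: float) -> float:
    """Decisions a process executes per second over a measurement window."""
    return decisions / window_seconds

def autonomy_ratio(autonomous: int, total: int) -> float:
    """Share of decisions completed without human intervention."""
    return autonomous / total if total else 0.0

# Hypothetical loan workflow: 8,640 applications resolved in a day,
# 7,776 of them without a human in the loop.
decision_throughput(8640, 24 * 3600)  # 0.1 decisions per second
autonomy_ratio(7776, 8640)            # 0.9
```

Unlike headcount, both numbers are properties of the process rather than of the workers inside it, which is exactly the shift the paragraph above describes.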
So, with all that in mind, how many AI agents does it take to change a lightbulb? Arguably, none. A sensor detects the outage. A diagnostic model determines the cause. A procurement system orders a replacement. A scheduling agent allocates a technician. A workflow system verifies that the job is complete. But unless the organization has a team of highly sophisticated humanoid robots, it still takes a human to take the bulb out of the box and screw it in.
Depending on how you feel about the future of work, that may be good news for now.

