For a long time, white-collar work has enjoyed a useful ambiguity: if the output is hard to measure, presence can stand in for progress. A calendar packed with meetings, a trail of email, a deck that looks expensive—these became proxies for value. Generative AI didn’t invent this pattern. It simply made it visible by making large chunks of “office output” cheap.
Start with where time goes. Microsoft’s Work Trend Index research has argued that the average employee spends more time communicating than creating—57% in meetings, email, and chat versus 43% producing documents, spreadsheets, and presentations. The heaviest email users spend about 8.8 hours a week on email; the heaviest meeting users about 7.5 hours a week in meetings. That’s not inherently bad—coordination is work. But it hints at a deeper truth: many organizations have quietly substituted coordination artifacts for coordination itself. We don’t align, we circulate. We don’t decide, we “sync.” We don’t finish, we “update stakeholders.”
AI is an accelerant for anything that is mostly formatting, summarizing, rephrasing, or recombining. That includes a surprising fraction of what modern offices churn out: status reports that no one reads closely, meeting notes that are never referenced, slide decks whose job is to exist, not to persuade. When an executive summary once took an intern a full day to polish, it carried a kind of scarcity value. Now a model can draft five variants in five minutes. The scarcity evaporates—and with it, the illusion that the work was valuable because it was laborious.
This is why AI feels threatening even when it’s “just” a productivity tool. It attacks the social contract of bureaucratic work: visibility as insurance. If your job has been partly about translating reality into the formats that power expects—weekly rollups, KPIs, pre-reads, steering committee decks—AI exposes how much of that translation is ritual. The ritual may still be politically necessary, but it becomes harder to pretend it is economically necessary.
Meanwhile, organizations are adopting AI fast enough to make this confrontation unavoidable. Microsoft reported in 2024 that 75% of global knowledge workers were using AI at work, and that use had nearly doubled in six months. If even half of that is informal “bring your own AI,” it means the productivity delta is arriving unevenly: some people are quietly compressing tasks while the surrounding process stays the same. And when only parts of the system speed up, the bottleneck becomes the system—approvals, handoffs, meetings, reporting layers. In other words: the “work” that remains is the work the organization has created for itself.
Atlassian’s research puts numbers on one particularly corrosive category: information hunting. Their State of Teams research, drawing on a survey of 12,000 knowledge workers and 200 executives, reports that leaders and teams waste around 25% of their time searching for answers. If AI can surface the right snippet, link, owner, or decision record instantly, it doesn’t merely save minutes—it undermines the sprawling meeting-and-messaging economy built to compensate for missing documentation and unclear ownership. But it also exposes an uncomfortable truth: a lot of “collaboration” has been a workaround for information that should have been findable.
Here’s the twist: AI doesn’t only remove drudgery; it also removes credibility from drudgery. If a model can generate a project update on demand, the mere existence of an update stops being evidence that the project is healthy. That forces managers to confront what they actually need: fewer artifacts, more signal. Yet many organizations will initially respond the opposite way—by producing more. When output becomes cheap, it tends to multiply. The easiest failure mode of AI adoption is an arms race of auto-generated paperwork.
So what does it look like to use this moment well?
It starts with admitting that many office processes are measurement substitutes. Deloitte’s Global Human Capital Trends work has argued that the proxies organizations rely on to measure performance may no longer apply in a changing, “boundaryless” environment. AI intensifies that problem: if proxies can be generated, they cease to measure. A slide deck is no longer proof of thinking. A summary is no longer proof of understanding. A full inbox is no longer proof of importance.
The practical response is brutally concrete: treat every recurring artifact as guilty until proven useful. If a weekly status email doesn’t change decisions, kill it or turn it into a lightweight log. If a meeting exists because information isn’t discoverable, fix the documentation and ownership, then reduce the meeting. If a KPI dashboard exists because leadership doesn’t trust the team, address the trust—because AI will happily manufacture the dashboard forever.
There’s also a human dimension that’s easy to miss. Some people genuinely fear that if the “busy” layer disappears, there won’t be enough real work underneath. That fear isn’t irrational. Research on “socially useless” work shows a meaningful minority of workers report low perceived usefulness; for example, analysis of the American Working Conditions Survey finds that 19% of workers answered “never” or “rarely” when asked whether they felt they were doing useful work or making a positive impact. AI doesn’t create that emptiness, but it can strip away the coping mechanisms that kept it hidden.
The best counter to that anxiety is not motivational messaging. It’s redesign: fewer internal chores, tighter feedback loops to customers, and clearer definitions of “done.” Interestingly, the strongest empirical productivity gains from AI often show up where work can be cleanly tied to outcomes. In a large field study of customer support, an AI assistant increased productivity by nearly 14% on average, with much larger gains for novices and smaller gains for top performers. That pattern is a clue: AI thrives when the job is structured around resolvable problems, not around performative motion.
In the end, the “illusion of work” isn’t that people are lazy. It’s that many organizations have built elaborate internal economies to manage uncertainty, mistrust, and complexity—then mistaken those economies for value creation. AI is the harsh auditor. It doesn’t care how hard something was to produce. It only cares whether it can be produced.
Which leaves leaders with a choice. They can let AI industrialize bureaucracy—an endless feed of immaculate, meaningless output. Or they can use AI as leverage to delete the junk layers and re-center work on decisions, accountability, and impact. The second path is harder, because it requires saying out loud what used to be politely ignored: some of our “work” was theater.
