The most compelling narrative circulating within American work culture today isn't that artificial intelligence will render jobs obsolete. Instead, it posits that AI will liberate individuals from the burdens of their work.
This optimistic vision has been actively promoted by the industry over the past three years, resonating with millions of apprehensive workers eager for reassurance. While acknowledging that some white-collar positions may indeed vanish, the argument asserts that for most other roles, AI acts as a potent force multiplier. It promises to transform professionals—be they lawyers, consultants, writers, coders, or financial analysts—into more capable and indispensable contributors. The premise is that these tools work for you, reducing your effort and creating a win-win scenario for all.
However, a new study published in the Harvard Business Review follows this premise to its logical conclusion, revealing not a productivity revolution but a significant risk: companies becoming engines of burnout.
As part of what they termed "in-progress research," the researchers spent eight months embedded within a 200-person tech company, observing the consequences when employees genuinely embraced AI. Across more than 40 "in-depth" interviews, they found no evidence of pressure or new targets being imposed. Instead, individuals voluntarily increased their output because the AI tools made a greater volume of work seem achievable. This newfound capability, however, led to work spilling over into lunch breaks and late evenings. Employees' to-do lists expanded relentlessly, consuming every hour that AI seemingly freed up, and then continued to grow.
As one engineer conveyed to them, “You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less. But then really, you don’t work less. You just work the same amount or even more.”
A similar observation was shared on the tech industry forum Hacker News, where a commenter wrote, “I feel this. Since my team has jumped into an AI everything working style, expectations have tripled, stress has tripled and actual productivity has only gone up by maybe 10%. It feels like leadership is putting immense pressure on everyone to prove their investment in AI is worth it and we all feel the pressure to try to show them it is while actually having to work longer hours to do so.”
This finding is both intriguing and concerning. The discourse surrounding AI's impact on work has consistently grappled with one fundamental question: are the claimed gains real? Far fewer have paused to consider the implications if they are.
The HBR study is not entirely unprecedented. A separate trial conducted last summer indicated that experienced developers using AI tools spent 19% longer on tasks, despite their perception of being 20% faster. Around the same period, a National Bureau of Economic Research study, which tracked AI adoption across thousands of workplaces, found that productivity improvements amounted to just 3% in time savings, with no significant impact on earnings or hours worked in any occupation. Both these earlier studies have faced considerable scrutiny.
This latest research, however, may prove more challenging to dismiss. It doesn't dispute the core premise that AI can augment employees' capabilities; rather, it confirms this potential and then illuminates where such augmentation ultimately leads. According to the researchers, this path culminates in “fatigue, burnout, and a growing sense that work is harder to step away from, especially as organizational expectations for speed and responsiveness rise.”
The industry's wager was that empowering people to achieve more would solve their problems. It may instead be creating an entirely different set of them.
The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing. Founded in 2025, AIChief has quickly grown into the largest free AI resource hub in the industry.