In the first week of February 2026, Harvard Business Review published a study with a finding that should have ended every AI pitch deck in corporate America: AI tools don't reduce work. They consistently intensify it.
The research wasn't ambiguous. Employees given AI tools worked at a faster pace. They took on a broader scope of tasks. Their output per hour increased. And they burned out faster than control groups who didn't have AI tools at all.
Every executive presentation about AI uses the same slide. A before-and-after comparison. Before: manual, slow, expensive. After: automated, fast, efficient. The implicit promise is that AI will do the work so your people don't have to. The reality, measured and published in a journal that corporate leaders actually read, is closer to the opposite: AI does part of the work so your people can be assigned more of it.
The lie isn't malicious. It's structural. And understanding the structure is the only way to build an AI strategy that actually works.
The mechanism is Goodhart's Law applied to productivity
Goodhart's Law states: when a measure becomes a target, it ceases to be a good measure. In most corporate AI deployments, the target is output per employee. AI increases output per employee. Leadership sees the number go up and concludes: success. Deploy more broadly. Raise the baseline.
But output per employee was never the right measure of an AI deployment's success. The right measure is: value produced relative to total cost, including the cognitive cost to the humans in the system. That cost doesn't appear on any dashboard.
Here's the causal chain. A marketing team adopts an AI writing tool. The tool cuts first-draft time from two hours to twenty minutes. Leadership observes the improvement. Within one quarter, the team's content targets triple. The tool didn't save the team two hours. It bought the organization permission to demand three times the output.
The individual marketer is now producing more content in the same number of hours. Each piece still requires human judgment: reviewing AI drafts, adjusting tone, fact-checking, ensuring brand consistency. The judgment load per piece didn't decrease. It may have increased, because evaluating an AI draft requires a different kind of attention than writing from scratch. But the volume tripled.
The employee isn't working less. They're working differently, at higher intensity, on a treadmill that speeds up every time they demonstrate they can keep pace. This is the HBR finding in operational terms.

Most corporate AI deployments optimize for the demo, not the workflow
There's a reason the pitch deck always looks better than the production floor. AI tools are evaluated in demos. Demos showcase best-case scenarios: clean inputs, cooperative outputs, tasks well-suited to the tool's strengths. The person giving the demo chose the task because it makes the tool look good.
Production is different. Production involves ambiguous inputs, edge cases, outputs that are 80% correct and require human effort to fix the remaining 20%. Production involves the tool confidently producing wrong answers that take longer to identify and correct than doing the task manually would have.
A legal team I've spoken with adopted an AI contract review tool. The demo showed the tool catching 94% of risk clauses in a standard NDA. Impressive. In production, the tool was fed non-standard contracts with unusual structures. It caught 60% of risk clauses and invented three that didn't exist. The lawyers spent more time auditing the AI's output than they would have spent reviewing the contract themselves.
The tool wasn't bad. It was good at what it was designed for: standard documents with predictable structures. But the legal team's actual work isn't standard. The contracts that need the most careful review are precisely the ones that deviate from templates. The tool helped with the easy work and complicated the hard work. The demo didn't show the hard work.
This pattern repeats across industries. Customer service AI handles the simple tickets brilliantly and escalates the complex ones with garbled context that makes them harder for humans to resolve. Code generation tools produce working code for straightforward tasks and subtly broken code for complex ones, where "subtly broken" means the bug only surfaces in production. Content AI writes passable first drafts that require senior editorial judgment to elevate to publishable quality, effectively converting a creation task into a curation task that demands the same expertise.
The strongest defense of corporate AI deployment is the competitive imperative, and it's a trap
The argument from every consultant and every CEO: if we don't deploy AI, our competitors will. We'll fall behind. The market will punish us. We have no choice but to move fast.
This is the same logic that drove every previous enterprise technology adoption wave. Cloud. Big data. Digital transformation. Blockchain, briefly. In each case, the competitive imperative created a wave of hasty, poorly planned deployments that enriched vendors and consulting firms while delivering mixed results for the companies doing the deploying. McKinsey's 2023 research found that 70% of digital transformation efforts fail to reach their stated goals. That number hasn't improved.
The competitive imperative is real. Doing nothing is genuinely risky. But "doing something" and "doing the right thing" are different, and the pressure to move fast makes companies choose the former at the expense of the latter.

The right question isn't "should we deploy AI?" It's "which specific decisions and workflows will genuinely improve with AI augmentation, and how do we measure that improvement in terms that include human cost?" Companies that answer this question deploy AI narrowly, measure rigorously, and expand only where the data supports expansion. Companies that don't answer this question deploy AI broadly, measure output, and wonder why their best people are quitting.
The companies getting this right share three traits
I've watched enough AI deployments to identify a pattern in the ones that work.
First, they measure cognitive load, not just output. The companies that avoid the intensification trap track how their people feel, not just what they produce. Employee surveys, attention audits, decision quality over time. If output goes up but decision quality goes down, the deployment is failing regardless of what the productivity dashboard says.
Second, they deploy AI to reduce decisions, not increase output. Instead of using AI to produce more for humans to evaluate, they use it to handle routine decisions autonomously. The email gets triaged without human review. The standard contract gets approved without a lawyer's sign-off. The support ticket gets resolved without escalation. The human only engages when the system encounters something outside its confidence threshold.
This is architecturally different from "AI copilot" deployments where the human reviews everything. It requires more trust in the system, which requires more investment in building and validating the system. It's slower to deploy and harder to demo. But it actually delivers on the promise of AI reducing work, because it removes the human from the loop on tasks that don't need human judgment.
Third, they resist the ratchet. When AI makes a team more efficient, they don't immediately raise targets. They let the efficiency create slack. Slack is where innovation happens, where people think about whether they're doing the right work rather than just doing the work faster. Companies that immediately reinvest every efficiency gain into higher quotas get a one-time productivity bump followed by a burnout cliff. Companies that create deliberate slack get sustained improvement.
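The decision-reduction pattern described in the second trait can be sketched in a few lines. This is a minimal illustration, not any vendor's API: the `Ticket` class, the `route` function, and the 0.9 threshold are all hypothetical stand-ins for whatever confidence signal and escalation path a real system exposes.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    body: str
    model_answer: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def route(ticket: Ticket, threshold: float = 0.9) -> str:
    """Resolve autonomously above the threshold; otherwise hand off
    to a human WITH the model's draft and score attached, so the
    escalation arrives with context instead of garbled state."""
    if ticket.confidence >= threshold:
        return f"auto-resolved: {ticket.model_answer}"
    return f"escalated to human (confidence {ticket.confidence:.2f})"

print(route(Ticket("reset my password", "Use the reset link.", 0.97)))
print(route(Ticket("billing dispute across two accounts", "draft", 0.41)))
```

The design choice is the point: the human sees only the tickets below the threshold, rather than reviewing every answer the model produces.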
What to do if you're inside one of these deployments
If you're an employee: document your actual experience. Track the time AI saves you and the time it costs you (in review, correction, context-switching, decision fatigue). When the gap between the narrative and your reality becomes undeniable, that documentation gives you something concrete to bring to leadership.
If you're a manager: ask your team one question this week: "What does the AI tool make harder?" Not "how do you like the tool?" Not "is it useful?" What does it make harder? The answer will tell you where the real costs are hiding.

If you're an executive: audit one AI deployment end-to-end this quarter. Not the metrics. The workflow. Sit with the people using the tool. Watch them work. Time the tasks. Count the decisions. Compare what you see to what the dashboard says. If there's a gap — and there will be — that gap is your actual AI strategy problem.
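The employee-side documentation suggested above can be as simple as a weekly ledger: minutes the tool saved minus minutes it cost in review and rework. A minimal sketch, with field names and sample numbers that are purely illustrative:

```python
def net_minutes_saved(tasks):
    """Sum minutes the tool saved, minus minutes it cost in
    reviewing and reworking its output, across a week's tasks."""
    return sum(t["saved"] - t["review"] - t["rework"] for t in tasks)

# Hypothetical week: one task where the tool clearly helped, one
# where auditing its output cost more than it saved, one small win.
week = [
    {"task": "draft blog post",   "saved": 90, "review": 25, "rework": 10},
    {"task": "contract summary",  "saved": 40, "review": 35, "rework": 30},
    {"task": "support macro",     "saved": 15, "review": 5,  "rework": 0},
]

print(net_minutes_saved(week))  # → 40
```

A positive number is the vendor's story; the per-task breakdown is yours, and it shows exactly where the tool helps and where it quietly costs.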
The question nobody puts on the slide
How many hours did your people spend last week evaluating AI output that turned out to be wrong?
Not usefully wrong, like a draft that sparked a better idea. Actually wrong, like a recommendation based on hallucinated data, or a generated document with subtle errors that required line-by-line verification, or an automated response that confused a customer.
That number is the shadow cost of your AI deployment. It doesn't appear on any executive dashboard. It doesn't show up in the vendor's ROI calculator. It shows up in your people's faces at 6 PM on a Thursday, when they're still at their desks reviewing AI output that was supposed to save them time.
Your AI strategy isn't a lie because someone is lying. It's a lie because the metrics that get reported measure the wrong things, the demos that get shown select the best cases, and the people who experience the reality aren't the people making the decisions.
Fix the measurement. Fix the strategy. Or keep running the demo while your best people update their resumes.
Series 4 from The Noble House. Signal intelligence for people who build.
Sources
- Harvard Business Review — AI tools and worker pace: AI intensifies work rather than reducing it (February 2026)
- Wikipedia — Goodhart's Law: When a measure becomes a target, it ceases to be a good measure
- McKinsey — The Economic Potential of Generative AI (June 2023)
- Gallup — State of the Global Workplace 2025: Employee burnout and engagement trends