On February 9th, 2026, TechCrunch published a piece with a headline that should have gotten more attention than it did: "The first signs of burnout are coming from the people who embrace AI the most."

The article described a pattern showing up across companies that adopted AI tools aggressively. Employees who used AI to accelerate their work didn't end up working less. They ended up working more. To-do lists expanded to fill every hour AI freed up, and then kept going.

This is not a technology problem. This is a human problem wearing technology's clothes.

The productivity trap is 160 years old and we keep falling for it

In 1865, William Stanley Jevons published The Coal Question and made an observation nobody wanted to hear: as steam engines became more fuel-efficient, total coal consumption went up, not down. Better efficiency didn't reduce usage. It made coal cheaper per unit of useful work, which made people find more uses for it, which increased total demand.

This became known as the Jevons Paradox. Cars got more fuel-efficient; people drove more miles. LED lights use a fraction of incandescent energy; total lighting consumption held roughly flat because we lit up spaces we never would have bothered with before. Email was supposed to reduce paper memos. It produced more communication, not less.

AI is the Jevons Paradox applied to human attention.

When you give a knowledge worker a tool that cuts report-writing from four hours to forty minutes, you'd expect them to reclaim three hours. That's the vendor pitch. That's what the ROI calculators project. What actually happens: their manager sees that reports now take forty minutes and assigns three more. Or the worker raises quality, adding sections that weren't previously feasible. Or the organization raises output expectations across the board.

The three hours don't come back. They get reinvested at a rate that exceeds the original savings.

The bottleneck moved from production to evaluation, and evaluation is cognitively worse

The 2025 Workforce Intelligence Report found that cognitive strain and decision friction have surpassed workload volume as the leading indicators of employee exhaustion. People aren't burning out from manual labor. They're burning out from decisions.

AI accelerates output but doesn't reduce the decision load. When an AI drafts five versions of a marketing email, someone has to choose between them. When it generates a risk analysis, someone has to decide what to do about it. When it surfaces twelve customer insights from a dataset, someone has to prioritize which ones matter.

Every unit of AI-generated output requires a unit of human judgment to evaluate. Writing a report is tiring. Choosing between five AI-drafted reports, each plausible, each slightly different in emphasis, is exhausting in a different way. It requires constant exercise of taste and judgment that depletes the same cognitive resources as creative work, without the satisfaction.


Two-thirds of American employees reported burnout symptoms in 2025, according to Censuswide research. That number rose to 81% for 18-to-24-year-olds and 83% for 25-to-34-year-olds. The age groups with the heaviest AI adoption are the most burned out. These facts are connected.

Every automated workflow terminates at your judgment, and judgment is finite

Herbert Simon diagnosed the problem in 1971: "a wealth of information creates a poverty of attention." He was talking about memos and television. He couldn't have imagined a world where an AI agent generates a comprehensive briefing every morning, flags seventeen items requiring your attention, drafts responses to eight of them, and waits for your approval on all eight before lunch.

The bottleneck is no longer information access or synthesis. It's the human capacity to evaluate, approve, and direct. Every automated workflow terminates at your judgment. Judgment is the one resource you can't automate away without losing the thing that makes the output yours.

A February 2026 MarTech Series report identified the biggest productivity killers: app switching, notification overload, and constant context-shifting between automated systems that each demand a slice of your attention. 41% of Americans are now consciously trying to reduce screen time. 16% abandoned at least one social media app in the past year. These aren't technophobes. These are people hitting a biological wall.

The "norms will adjust" argument is partly right but misidentifies the problem

The strongest counterargument is historical. Every productivity tool triggered an initial period of overwork before norms adjusted. Email arrived and people checked it every five minutes; now most professionals batch it into a few daily sessions. Smartphones made work portable and the first generation was "always on"; subsequent generations developed boundaries.

The argument: AI adoption will follow the same curve. Overwork initially, then policies, boundaries, equilibrium. Burnout is transitional, not structural.

This has merit. Norms do adjust. We developed email etiquette, meeting-free days, and "right to disconnect" laws in some jurisdictions.

But email increased communication volume. AI increases decision volume. Communication overload is uncomfortable. Decision overload is cognitively destructive. Roy Baumeister's research on ego depletion, while debated in replication literature, pointed at something real: decision-making draws on a finite resource. Judges grant fewer paroles as the day progresses. Doctors prescribe more antibiotics at the end of their shift. Quality degrades with volume.

The norm-adjustment thesis assumes the problem is behavioral. But if human cognition has a hard ceiling on daily decision throughput, no amount of norm-setting fixes it. You just distribute the same finite resource across more demands.

Inside companies, AI adoption follows a predictable three-quarter arc toward burnout

A company adopts AI tools across departments. First quarter: productivity metrics jump. Reports get written faster. Code ships sooner. Support handles more tickets. Leadership celebrates the ROI and raises targets to match the new capacity.


Second quarter: the new targets feel normal. The AI-augmented pace becomes baseline. Workers who were praised for 1.5x output are now expected to maintain it permanently. The sprint becomes the marathon.

Third quarter: cracks. Top performers, the ones who adopted AI most enthusiastically, start missing deadlines. Not because they're slower. Because they're decision-fatigued. They're approving agent output at volume, but approval quality is slipping. Errors pass through. Communication gets terse. Sick days increase.

The TechCrunch report captured this: "Because employees could do more, work began bleeding into lunch breaks and late evenings." The AI freed up time. The organization immediately recaptured it. Efficiency gains accrued to the company. Cognitive costs accrued to the individual.

This is the attention tax. Not a fee for using AI. A fee for every unit of AI output that requires your judgment to validate. And it compounds: the more AI you use, the more judgment you expend, and the less you have left for decisions that actually matter.

The winning strategy is "decide less," not "do more"

The people getting this right are doing something architecturally different. Instead of using AI to produce more output for human evaluation, they use AI to reduce the total number of decisions that reach them.

Train your AI on your communication style until it sends routine emails without your approval. Configure the system to act on insights that match established patterns and only escalate the ones that don't. Define conditions under which a report self-publishes versus conditions requiring your sign-off.

The shift is from AI-as-generator to AI-as-governor. Your role changes from quality gate on everything to quality gate on exceptions.

This requires trust in your systems, which means building them carefully. You can't hand an untrained agent full autonomy. But over weeks, you expand the boundary of what it handles independently as you verify its judgment on progressively harder decisions. Solo operators figured this out early because they had to. A one-person operation can't sustain human-in-the-loop on every workflow. That constraint forced them to build trust architectures that corporate deployments are only now discovering they need.
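The AI-as-governor pattern described above can be sketched as a simple decision router: trusted categories execute autonomously above a confidence threshold, and everything else escalates to a human. This is a minimal illustration in Python — the category names, confidence scores, and thresholds are all hypothetical, and a real deployment would also log every auto-executed action for later review:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    category: str       # hypothetical labels, e.g. "routine_email", "risk_flag"
    confidence: float   # agent's self-assessed confidence, 0.0 to 1.0
    payload: str        # the drafted output awaiting action

@dataclass
class Governor:
    """Auto-execute trusted decision categories; escalate everything else."""
    auto_policies: dict = field(default_factory=dict)  # category -> min confidence
    executed: list = field(default_factory=list)
    escalated: list = field(default_factory=list)

    def route(self, d: Decision) -> str:
        threshold = self.auto_policies.get(d.category)
        if threshold is not None and d.confidence >= threshold:
            self.executed.append(d)   # handled without consuming your attention
            return "auto"
        self.escalated.append(d)      # an exception: this one reaches you
        return "escalate"

    def expand_trust(self, category: str, min_confidence: float) -> None:
        """Widen the autonomy boundary once a category has earned trust."""
        self.auto_policies[category] = min_confidence

gov = Governor(auto_policies={"routine_email": 0.8})
print(gov.route(Decision("routine_email", 0.92, "Confirming Tuesday, thanks.")))
print(gov.route(Decision("risk_flag", 0.95, "Vendor contract anomaly")))
```

The key design choice is that the default path is escalation: the agent earns autonomy category by category via `expand_trust`, rather than starting with full authority and being reined in after mistakes.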

Today: Audit your last week. Count the decisions you made. How many were genuinely novel? How many were routine approvals you could have delegated? If routine approvals outnumber genuine decisions by more than 5:1, you're paying too much attention tax.

This month: Pick your three highest-volume, lowest-stakes decision categories. Build autonomous handling for them. Give your AI clear rules, monitor for two weeks, then let go.

This quarter: Every decision you automate frees cognitive capacity for the decisions that deserve it. Over six months, you shift from reactive approval machine to strategic operator who intervenes only when it counts.
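The weekly audit above reduces to simple arithmetic. A sketch with made-up numbers — say a week holding 40 routine email approvals, 12 report sign-offs, and 8 genuinely novel calls — where the 5:1 threshold flags an operator paying too much attention tax:

```python
from collections import Counter

# Hypothetical week of logged decisions, labeled by hand during the audit.
week = (["approve_ai_email"] * 40 + ["approve_ai_report"] * 12 +
        ["novel_strategy_call"] * 6 + ["novel_hiring_call"] * 2)

ROUTINE = {"approve_ai_email", "approve_ai_report"}

counts = Counter(week)
routine = sum(n for k, n in counts.items() if k in ROUTINE)      # 52
novel = sum(n for k, n in counts.items() if k not in ROUTINE)    # 8

ratio = routine / novel  # 6.5 routine approvals per genuine decision
if ratio > 5:
    print(f"Attention tax too high: {ratio:.1f} routine approvals "
          f"per genuine decision")
```

The highest-count routine categories are the natural first candidates for autonomous handling.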

Your attention is the last scarce resource in an age of artificial abundance. Are you spending it, or is your organization spending it for you?