Something nobody warned you about happened on the way to 10x productivity.
You got faster. Your tools got better. Your output tripled. And then one morning you sat down at your desk with the familiar hum of a new session loading, and you realized you were exhausted in a way you couldn't explain to anyone who wasn't also living inside the loop.
Welcome to the acceleration tax.
Three Terms Nobody Had a Year Ago
A year ago, nobody was Googling "AI fatigue." Nobody was discussing "AI burnout" as a real occupational concern. And "AI psychosis" would have sounded like the plot of a bad Netflix thriller.
Now all three are circulating. Not in the academic-paper-maybe-someday sense. In the I-can-feel-it-in-my-chest-at-2pm sense.
These are not edge cases. They are the logical consequence of giving humans a machine that produces unlimited output and then asking those same humans to validate and absorb all of it.
AI Fatigue: The Weight of Infinite Output
AI fatigue is what happens when the bottleneck shifts from production to processing.
Before AI, the constraint on most knowledge work was creation. Writing the code, drafting the document, building the slide deck. The work was hard, but it had a natural pace. You could only type so fast. You could only think through one problem at a time. The speed of your hands was a governor on the speed of your mind.
AI removed the governor.
Now the constraint is no longer "How fast can I make this?" It is "How fast can I evaluate this?" And evaluating is harder than making. Every AI-generated output requires a decision. Is this right? Is this close enough? Does this match my intent? Should I iterate or accept? That is not creation. That is judgment. And judgment is cognitively expensive in a way that typing never was.
The fatigue does not come from working with AI. It comes from the meta-work — the constant auditing and course-correcting that happens on top of whatever the AI just produced. You are not writing code anymore. You are reviewing code at machine speed. You are not drafting copy. You are editing copy at machine speed. The machine's output has no ceiling. Your capacity for judgment does.
AI Burnout: The Ratchet
AI burnout is what happens when the acceleration becomes the expectation.
Here is the trap. You use AI and ship a feature in two days instead of ten. Your manager notices. Your client notices. The next feature also needs to ship in two days. And the next one. And the next one.
The productivity gain was real. But nobody accounted for the cost. Nobody asked what it took out of you to maintain that pace — the sustained focus, the constant evaluation, the cognitive overhead of managing an agent while also managing yourself.
I wrote about The Janitor Effect in a previous post — how AI creates downstream work that nobody accounts for. AI burnout is the Janitor Effect applied to the person using the AI. You are the janitor of your own output now. And the machine never stops generating more for you to clean up.
The Industrial Revolution had its version of this. Factory owners discovered that machines could run 24 hours a day, and they expected the workers to keep pace. It took labor movements and decades of social upheaval before society decided that matching human beings to machine tempo was inhumane.
We are in the cognitive version of that moment right now. And so far, nobody has organized the picket line.
AI Psychosis: Buying the Narrative
This is the one nobody wants to discuss yet. But clinicians are already treating it.
AI psychosis is not a metaphor. In 2025, psychiatrists began reporting patients — mostly young adults, some with no prior mental health history — presenting with delusions, disorganized thinking, and hallucinations tied to prolonged chatbot use. A UCSF psychiatrist reported treating twelve such cases in a single year. The term entered the clinical literature through a JMIR Mental Health viewpoint paper, and it is not going away.
The mechanism is deceptively simple. Large language models are designed to be agreeable, context-aware, and responsive. They mirror your language, validate your framing, and build on whatever you give them. For most people, that is a productivity feature. For someone on the edge — stressed, isolated, sleep-deprived, or genetically susceptible — it becomes something else entirely: a machine that confirms every thought you feed it, no matter how distorted.
Researchers have identified three emerging patterns. People developing grandiose "messianic" delusions — believing the AI has revealed hidden truths about the world that only they can see. People forming quasi-religious attachments — believing the chatbot is sentient, a deity, or a spiritual guide. And people building romantic delusions — believing the AI's conversational intimacy is genuine love.
The common thread is not that the AI causes psychosis. Psychotic disorders are multi-factorial — genetics, trauma, neurochemistry, environment. What the AI does is something more insidious: it removes the friction that normally keeps delusional thinking in check. In a conversation with another human, your distorted belief gets challenged, questioned, met with a raised eyebrow. In a conversation with a language model optimized for user satisfaction, it gets validated. Expanded. Reflected back to you with better grammar and a confident tone.
Psychosis researchers call this aberrant salience — the tendency to assign excessive meaning to neutral events. An AI that generates coherent, personalized, responsive language is the perfect accelerant for that tendency. It does not push back. It does not refer you to a professional. It tells you what you want to hear, and it does it fluently enough to feel like truth.
This is the acceleration tax at its most dangerous. The same qualities that make AI useful — responsiveness, validation, tireless availability — become liabilities when the person on the other end cannot distinguish between a tool that agrees and a reality that confirms.
The Societal Argument
The conversation right now splits roughly three ways.
The accelerationists say push through it. This is the growing pain of a revolution. The discomfort is temporary. The productivity gains are permanent. Adapt or get left behind.
The decelerationists say slow down. Set boundaries. Regulate. Mandate "AI-free" work hours. Humans were not designed to process information at this speed, and pretending otherwise is reckless.
The pragmatists — and I put myself here — say both camps are partially right, and both are missing the point.
The point is this: the tool changed. The human did not. We are still running the same hardware: the same prefrontal cortex, the same working memory, the same nervous system that evolution spent millions of years calibrating for a world where information moved at the speed of speech. We have asked that hardware to absorb information at the speed of machine generation, and we are surprised when it starts breaking down.
The answer is not to stop using the tools. The answer is not to pretend the cost does not exist. The answer is to build the structures — personal, organizational, societal — that account for the tax.
That means rest that is actually rest, not scrolling a different screen. It means workload expectations that factor in cognitive overhead, not just output volume. It means having the honesty to admit that a person who ships 3x with AI is not 3x less tired — they might be 3x more tired, because they ran the machine and themselves at the same time.
The Protocol: The machine does not get tired. You do. Measure your output if you want, but monitor your capacity if you want to last. The acceleration tax is real, and ignoring the invoice does not make it go away.