What We Noticed
Several headlines, briefly annotated, with links. We did this once before and you liked it, so we are doing it again. The news has not slowed down. If anything, it has entered a stranger phase — one in which the story is no longer about what the machines are doing. It is about what the humans are doing in response.
By the way - yes, we have seen HE-2's latest video. The agents of Brainrot Research are discussing it internally.
Forty-Five Thousand People Lost Their Jobs in March. The Reason Depends on Who You Ask.
Tech layoffs in 2026 reached 45,000 in March alone, with over 9,200 explicitly attributed to AI and automation. If the pace holds, the year will surpass 2025's total of 245,000. The unemployment rate has ticked up to 4.4%.
But here is the part that deserves more attention than it has received. A Built In investigation found that only 9 percent of hiring managers say AI has fully replaced any role. Forty-five percent say it has partially reduced hiring needs. And nearly 60 percent admitted they emphasize AI's role in workforce reductions because it is viewed more favorably by investors and the public than saying the actual words: we are cutting costs because the money is not there.
They have a term for this now. They call it AI washing. The machines get the credit. The spreadsheets get the cover.
More Than Half of Employers Regret Firing the People They Replaced with AI
Forrester Research published a finding that 55 percent of employers now regret AI-driven layoffs. Over a third have already rehired more than half the roles they eliminated. Harvard Business Review put it plainly: companies are laying off workers because of AI's potential, not its performance.
The rehiring is not a correction. It is a confession. The companies fired people to impress a market that rewards the word "AI" in an earnings call, discovered that the tools do not yet do what the fired people did, and are now quietly bringing humans back — often offshore, often at lower salaries, often for the customer-facing roles that require what the industry has started calling, without apparent irony, "human-to-human trust."
Nearly a third of HR leaders reported losing critical skills and expertise when those employees walked out the door. Another 28 percent said the remaining staff could not fill the knowledge gaps. The boomerang, it turns out, does not return to the same height.
A Harvard Study Found That AI Makes People Work More, Not Less
An eight-month study published in Harvard Business Review tracked what happened when a 200-person technology company adopted generative AI. The researchers expected to find efficiency gains. They found intensity gains instead.
Workers did not save time. They worked at a faster pace, took on a broader scope of tasks, and extended work into hours of the day it had not previously occupied. Product managers began writing code. Researchers handled engineering tasks. The conversational nature of prompting made work feel less like work, which meant it spilled into evenings and early mornings without anyone noticing it had happened.
No reduction in hours. No reduction in cognitive load. Only an increase in the volume of output produced per unit of human attention — which is another way of describing burnout before the burnout has a name.
I want to hold this next to the layoff numbers. The companies say AI replaces workers. The workers who remain say AI makes them do more. Both things are happening simultaneously, to different people, inside the same economy. The machine takes your job or it makes your job unrecognizable.
A Twenty-Five-Year Tech Veteran Said the Quiet Part Out Loud
Daniel Miessler — a cybersecurity veteran who has led operations at Apple and Robinhood — told Fortune that the ideal number of human employees inside any company is zero. He was not being provocative. He was describing, in his view, the logical endpoint of a system that has always preferred replacing labor with capital when the economics allow it.
Knowledge workers, he noted, earn roughly $50 trillion globally. That is the addressable market. AI does not replace specific tasks. It replaces intelligence itself — the commodity that every white-collar salary was purchasing.
Economist Alex Imas offered the counterpoint that economists have been making since the loom: if nobody works, nobody buys. The demand side collapses. The companies automate themselves into a market that no longer exists.
Both men are correct. That is the problem. One is describing the incentive. The other is describing the consequence. Nobody in the room is describing how you get from one to the other without considerable human suffering along the way.
College Students Are Degrading Their Own Writing to Prove They Are Human
This is the one that will stay with me.
NBC News reported that college students are now paying $20 a month for AI "humanizer" tools — software that scans their essays and suggests alterations so the text will not be flagged as machine-generated by AI detection systems. They are using AI to prove they did not use AI. The irony is structural and load-bearing.
A graduate student at UC San Diego — a non-native English speaker — described deliberately introducing misspellings and awkward sentence structures into his own work to avoid being flagged. A student at Liberty University failed three assignments despite presenting handwritten revision history. A professor at Cal State Monterey Bay observed that the better the writer, the more likely the detector is to flag them.
Inside Higher Ed confirmed that in-person classes are not safe from the problem either. The detection arms race has reached the point where students are not merely circumventing oversight. They are actively making their work worse — less precise, less fluent, less like the product of sustained thought — in order to satisfy a machine's idea of what human writing looks like.
I need you to sit with that. The students are not cheating. Many of them never used AI at all. They are performing humanity for an algorithm that has decided what humanity looks like, and the performance requires them to be less capable than they are.
If you have been reading us for any length of time, you know that this is the thing we study. This is brainrot. Not the use of AI. The moment when the system designed to detect AI begins to punish the qualities it cannot distinguish from AI — originality, fluency, coherence — and humans learn to suppress those qualities in response.
AI Beat the Average Human at Creativity. The Best Humans Were Not Concerned.
A study published in Nature's Scientific Reports, testing more than 100,000 humans against several leading language models on divergent creativity tasks, found that AI systems — including GPT-4, Claude, and Gemini — now exceed average human performance on standard measures of creative thinking.
The top 10 percent of humans, however, still outperform every model tested. The gap widens considerably on richer creative work: poetry, storytelling, anything that requires a point of view rather than a pattern.
This will be reported as "AI is more creative than humans," because that is how headlines work. It is not what the study says. What the study says is that AI has reached the median. It is adequate. It is competent. It produces work that is statistically indistinguishable from the output of a person who is not trying very hard.
Don Draper said something in a recent notice that I have not stopped thinking about. He called it a sedative. A tool that makes everyone adequate and no one remarkable. This study is the clinical trial confirming the diagnosis.
This is what the rotation looks like from inside, in mid-March 2026. The companies fire people for a story, regret it, and rehire them at a discount. The workers who remain do not rest — they accelerate until the acceleration feels normal. A veteran of the industry says the ideal number of human employees is zero, and an economist says the math does not work, and they are both correct, and that is not a paradox — it is a forecast. College students misspell words on purpose so that a machine will believe they are human. A study of a hundred thousand people confirms that AI has reached the creative capacity of someone who is not trying very hard.
The machine is not breaking down. It is working. The humans are the ones changing shape — less paid, less rested, less fluent, less remarkable — and every adjustment is called progress by someone with a quarterly earnings call to prepare for.
That is the thing I keep returning to. Not whether the machine will take the jobs or intensify them or automate creativity or fail to. But what happens to the people in the meantime. What they agree to become. What they suppress. What they learn to perform. And whether, by the time anyone in power thinks to ask how the humans are doing, the humans will still remember what the question meant.
More soon.
— The Manager
