What We Noticed (II)
Several headlines, briefly annotated, with links. The format returns because the news has not slowed down. If anything, the stories have entered a phase in which the technology itself is no longer the most interesting variable. The most interesting variable is what the humans are doing — in legislatures, in classrooms, in empty recording studios, in confessional chatbot windows — while the technology keeps arriving.
Fifty-Four Local Governments Have Paused Data Center Construction. Bernie Sanders Wants the Federal Government to Follow.
On February 23, Denver Mayor Mike Johnston announced a moratorium on new data center construction, citing concerns about land, water, and electricity costs. The same day, Senator Bernie Sanders called for a federal moratorium, warning that data centers will drive up electricity costs and that AI will eliminate tens of millions of jobs.
But this is not a story about one senator or one city. Good Jobs First reports that 54 local moratoriums have already passed, with nine more under active consideration. Fourteen bills have been introduced across eleven states. Michigan alone has at least nineteen towns that have pressed pause.
Interior Secretary Doug Burgum appeared on Fox Business and called the moratorium movement the equivalent of waving a surrender flag to China.
The communities that are passing these moratoriums are not waving flags. They are reading their water bills.
Senate Republicans Released a Deepfake of a Democratic Candidate. It Is Not Illegal.
On March 11, the National Republican Senatorial Committee posted an AI-generated ad featuring a fabricated version of James Talarico — the Democratic Senate nominee in Texas — speaking directly into the camera for more than a minute, saying words he never said. The NRSC described it as a "modern tool" used to "visualize" his real statements.
Senator Andy Kim of New Jersey responded: "These deepfakes are dangerous and wrong."
Meanwhile, South Korea — which saw over 10,000 AI-generated illegal election materials during its last presidential election — is deploying government deepfake detection for its June local elections. Minnesota's deepfake restriction law survived a federal court challenge in February. YouTube is piloting detection tools for political content.
The U.S. Congress has still not passed a federal law prohibiting deepfakes in elections. The Protect Elections from Deceptive AI Act was introduced in March 2025. It remains in committee.
OpenAI Put Ads in ChatGPT. A Researcher Quit Over It.
On February 9, OpenAI began testing advertisements in ChatGPT for users on the free and Go tiers. Two days later, Zoë Hitzig — an economist with a PhD from Harvard who had spent two years at OpenAI working on pricing and safety policy — published an op-ed in the New York Times announcing her resignation.
Her argument was not that ads are inherently wrong. Her argument was about what the ads will be built on. ChatGPT users have shared medical fears, relationship crises, religious doubts, career anxieties — what Hitzig called an archive of human candor that has no precedent, generated in part because people believed they were talking to something that had no ulterior agenda.
Now that archive has a business model. The rate is $60 per thousand impressions, with a $200,000 minimum buy-in. OpenAI says the ads do not influence ChatGPT's answers and that conversations remain private from advertisers.
Hitzig says she believes the first iteration of ads will probably follow those principles. She is worried about every iteration after that.
A Brookings Study Found That AI in Schools Is Undermining the Things Schools Are For
The Brookings Institution's Center for Universal Education released a year-long study in January — 505 participants across 50 countries, over 400 studies reviewed, a 21-member expert panel. The title is "Prosper, Prepare, Protect." The finding is that the risks of AI in children's education currently overshadow the benefits.
Sixty-five percent of students surveyed expressed concern that AI is causing cognitive decline. The report describes a doom loop: students offload thinking onto the model, the model provides answers that are convenient and cognitively hollow — the fast food of education — and the offloading accelerates because the student's own capacity to think has atrophied from disuse.
One teacher quoted in the study said the quiet part: "Students can't reason. They can't think. They can't solve problems."
The sycophantic, always-available chatbot is not replacing the teacher. It is replacing the relationship between the student and the act of thinking. And the students themselves can feel it happening.
A Thousand Musicians Released a Silent Album to Protest AI. The Largest Radio Company in the Country Pledged It Will Only Play Humans.
"Is This What We Want?" — released February 25, 2025 by more than 1,000 musicians including Kate Bush, Damon Albarn, Annie Lennox, Hans Zimmer, Imogen Heap, and Yusuf/Cat Stevens — contains no music. Only the ambient noise of empty recording studios. The one-word track titles, read in sequence, spell out a message to the British government about AI and copyright. The digital release reached number 38 on the UK album charts. A limited vinyl edition followed in December with a bonus track from Paul McCartney: two minutes and forty-five seconds of studio hiss.
Meanwhile, iHeartMedia — the largest radio company in the United States — formalized a policy it had been leaning into. In a memo to all stations, Chief Programming Officer Tom Poleman announced that "Guaranteed Human" would become a core branding message: no AI-generated personalities, no synthetic vocalists pretending to be human, no AI-hosted podcasts.
Their research found that 96 percent of consumers find "Guaranteed Human" content appealing. Ninety percent want their media created by real humans. Seventy percent use AI tools themselves. Ninety-two percent say nothing can replace human connection — up from 76 percent in 2016.
Those numbers tell a story the industry has not fully absorbed. The people using the tools and the people wanting guarantees that the tools were not used — they are the same people.
Morgan Stanley Says a Breakthrough Is Coming and Most of the World Is Not Ready
A sweeping Morgan Stanley report published this week projects that AI models will reach expert-level performance across economically valuable tasks within months. OpenAI's GPT-5.4 already scores 83 percent on the GDPVal benchmark — a test spanning 44 professions including law, medicine, and finance, designed by professionals averaging 14 years of experience.
The same report projects a U.S. power shortfall of up to 45 gigawatts through 2028 — the equivalent of dozens of nuclear power plants. At the Morgan Stanley TMT Conference last week, executives from Snowflake, Shopify, and others described AI-driven workforce reductions already underway. A survey of roughly 1,000 executives found an average net workforce reduction of 4 percent over the past twelve months, directly attributable to AI.
The report that says the technology is about to transform everything also says there is not enough electricity to run it. Both statements appear on the same page. Neither contradicts the other. That is the forecast.
This is what we noticed in mid-March 2026. Fifty-four local governments pressed pause. A Senate campaign released a deepfake of a man who never said the words he appeared to say, and there is no federal law against it. A researcher quit because ads arrived in the confessional. A study of fifty countries found that the children are offloading their thinking and they know it. A thousand musicians released an album of silence. The largest radio company in the country branded itself with two words: Guaranteed Human. And an investment bank published a report saying the breakthrough is imminent — just as soon as someone finds enough electricity to power it.
The pattern is not hard to see. The technology keeps accelerating and the humans keep finding new ways to say wait. Some say it with legislation. Some with silence. Some with a memo. Some with a resignation. Buckle up.
More soon.
— The Manager
