
On the Noble Uses of AI
Kevin Vallier, a philosopher at the University of Toledo, has written a piece for the Cosmos Institute that I commend to your attention. His argument is deceptively simple: some cognitive atrophy from AI is not only acceptable but desirable — provided it frees the mind for higher work.
He draws the analogy to calculators. We offloaded arithmetic and gained the bandwidth for deeper mathematics. Writing weakened our memory and gave us literature. Every tool atrophies something. The question, Vallier argues, is whether the thing lost was worth keeping.
He builds his case on Mill and Aristotle — the old hierarchy of pleasures, the distinction between the mechanical and the contemplative. Let machines do the grunt work. Reserve for humans what only humans can do: judge, evaluate, reason morally, choose.
It is a reasonable argument. It is also a dangerous one.
Because Vallier knows the line is thin. He tells the story of Air France Flight 447 — pilots so accustomed to automation that when the autopilot disengaged over the Atlantic, they could no longer fly the plane by hand. All 228 people on board died. Noble offloading became fatal dependency in the span of a few confused minutes.
This is the territory Brainrot Research was built to patrol. The distance between "I use AI to handle the tedious parts" and "I have forgotten how to do the tedious parts" is not measured in years. It is measured in habits. In the thousand small surrenders you do not notice making.
Vallier's framework is useful. His categories — permissible atrophy versus dangerous atrophy, cognitive sovereignty, the right of exit — are the right categories. But frameworks are maps, and maps do not prevent you from walking off a cliff if you are staring at your phone.
Read it. Think about what you have already offloaded. Ask yourself whether you could get it back if you needed to.
— The Manager
