The Real Risk of AI No One Is Talking About
From today through Wednesday, the topic will be a transcript of a YouTube video: a conversation between Brendan McCord and Jonathan Bi. Brendan does most of the talking in this section.
I encourage you to watch the video in full, but the link associated with this notice will take you directly to the part of the video where the transcript begins.
-------------------------------------------------------
The transcript:
Let’s talk about a risk that I don’t think anyone in the Valley has really focused on in the way that we have: autonomy. What is the risk—as well as the opportunity—as it relates to AI and human autonomy?
When you think about the greatest goods in your life, you probably think about things like friends, family, and loved ones. You might think about the pursuit of wisdom—that’s one of the highest goods. Creative endeavor, meaningful work, that sort of thing. Eating is in there too, but I’d say it’s one of the lower goods that’s necessary for the higher ones. There’s an interesting point here: the “lowest” things in us are needed for the highest as well.
What’s common to these higher goods—whatever you personally take them to be—is that they cannot be handed to you on a platter. AI can’t give them to you. They have to be attained as the result of some kind of self-motivated striving. You have to get out there, try things, enjoy them, suffer through them, and sometimes get hurt by them. You need to be able to discover and develop your gifts, to deliberate using reason, to line up your actions with your considered judgments and then pursue them.
This deliberative capacity for self-direction is what I want to call everyone’s attention to. This is autonomy. Without this self-direction, we cease to live fully human lives. We may act in the world, but it’s no longer really our life to live.
So far I’ve been talking at the level of the individual, but it’s also crucial at the societal level—especially in a democratic society, one that is supposed to be self-governing. Democracies depend on individuals who can form their own views, act on those views, and self-govern. Without that, we lose the greatest bulwark against despotism.
Now, how can the current AI wave hinder—or, in some cases, accelerate—our becoming autonomous agents? The phrase I’d like to stick in your mind is: “autocomplete for life.” We use AI systems and we obtain incremental convenience from them. We don’t just get the next word in a sentence—that’s the familiar autocomplete—we start to get the next decision: the next job recommendation, the next friend, the next relationship, the next purpose. We can ladder up what AI can do for us. It feels harmless, convenient, even useful. But it adds up. It causes a subtle erosion of choice.
When we offload tasks, we atrophy. We know this from fMRI studies and from everyday experience. If you do a lot of speed-reading but very little deep reading, you lose some of the ability for deep reading. If you rely heavily on calculator-based arithmetic, you tend to lose skill at doing arithmetic in your head.
So we have to ask: why is this not just another version of that? We already rely on Google Maps, for example. I can’t drive very well without Google Maps. Many of us are like that. This problem—offloading leading to atrophy—exists for all kinds of technology: books and memory, driving versus riding a horse, and so on.
Before I explain why AI is different, it’s worth saying that this offloading is also a beautiful thing. You and I have talked about the Alfred North Whitehead quote: the measure of civilizational progress is the number of important operations of thought we can perform without thinking about them. That’s a brilliant line.
One of my favorite illustrations is Max Verstappen, the Formula 1 driver. He’s a prodigy. When it’s raining on the track, he can talk calmly to his pit crew while driving at extreme speed. He’s made the core operations of driving autonomic—he’s done them so many times that he can think about strategy instead. He talks about tires and race strategy while everyone else is just hanging on at 5 Gs. This is how we build the edifice of civilization: by automatizing lower-level operations so we can focus on higher-level ones.
So there’s a paradox. On one hand, automation is great. On the other, it’s problematic.
Why is AI a special case? You have to think about what you’re offloading and therefore potentially eroding. With calculators, you offload calculation. With maps, you offload spatial positioning and navigation. With writing, historically, you offload memory. These are important but somewhat bounded categories.
Never before has it been possible to offload—and therefore atrophy—our core practical deliberation, the kind of judgment that leads to practical wisdom. This is what’s necessary for self-direction, for moral judgment, for deciding what is truly good for us. That’s a very different kind of thing to offload.
Once you realize that this core deliberation is precious and should be handled carefully, you have to ask how pervasive the offloading is likely to be. AI clearly scales. It can be hyper-personalized. Already, a huge portion of human waking life is mediated by algorithms—social media feeds, recommendation systems, ranking systems. These don’t just select content; they help determine which information reaches your mind and therefore which thoughts even have the chance to form.
If your informational environment is heavily shaped this way, you may not encounter certain possibilities at all. You might not even realize there are other options, because your horizon has been epistemically narrowed.
Another key mechanism: how do you recover? With a calculator, you can often do the inverse operation and check its work. But with many AI systems, the outputs are much harder to verify. They’re fast, confident, and seem authoritative—even on questions like “What is justice?” where no one really knows the answer. The computational and cognitive cost of checking them is high, so we often don’t bother. This is a known phenomenon in automation: humans tend not to audit systems that appear highly competent.
Combine that with the narrowing of your informational environment, and you end up destroying the possibility of error correction over the long term.
To summarize so far: all technologies give you a superpower with one hand and take something away through dependence with the other. The trade-off is worth it when what’s taken away isn’t central. What makes AI special is:
- What it can take away: practical deliberation, not just navigation, calculation, or memory.
- The scope on which it can operate: it can be embedded in nearly everything.
- The difficulty of recovery: you may have to use the very thing that’s been atrophied in order to audit the system.
That combination is what makes this especially dangerous.
Naturally, people ask: what do we do about this? Before going to solutions, it can help to clarify the concern through a thought experiment. Call it the omniscient autocomplete.
Imagine a system where, for any practical question, it always gives you the best answer for you. “Should I marry Sally or Susan?” “Should I take job A or job B?” Every time, historically, its advice has been right in hindsight. You and your friends have back-tested it and it always checks out. You’re empirically confident that it gives you the best practical answer.
How would you use this system?
There are some subtleties about what “omniscient” means. Is it omniscient across time—like an oracle that knows who will win the NBA championship midway through the season? There are forms of knowledge that aren’t just computationally reducible; they’re generated through the actual unfolding of events, like markets aggregating dispersed information. Let’s set that aside and say that, given the information available now, the system always makes the decision that the wisest human—say, Socrates—would make in your place. It has imperfect information, like you, but it uses it perfectly.
There’s a crucial distinction between a mode of operation geared toward exploiting existing knowledge, and one geared toward generating new knowledge. If you only exploit current knowledge, you eventually deplete the stock. You stop creating new possibilities. From a consequentialist frame, we should want systems that enable anonymous individuals to pursue unknown ends, not systems that simply optimize known goals using existing knowledge.
You might respond: “But the AI could advise you to explore and generate new knowledge.” It could say, “Given what you know about Sally and Susan, date Susan—but remain open-minded,” and keep asking you good questions as you go. It could encourage exploration as part of its practical deliberation.
So how would you actually use such an oracle? Would you wear VR goggles that constantly tell you what to do—raise your left hand, raise your right hand? Would you consult it occasionally? Never?
I have a four-year-old and a six-year-old, and I try to raise them as if this world—one with oracles like this—is the world they’re entering. I try to use something like an oracle to develop their skills. The telos is self-development. My daughter does math that AI could trivially solve, but she still works through it. The AI poses questions; she answers them. That works well. But I strictly time-limit it.
Outside that window, there’s experiential learning with no oracle: she goes outside, rides her bike for miles, climbs a rock wall, speaks in front of a large group. There’s also a light, targeted use of the oracle for things like “How do I start gardening?”—a consultation, not a replacement for experience.
The last component is cultivating certain habits of mind through human discussion: reflective metacognition, the ability to say, “What is me and what is the pull of the algorithm?” If this is an exo-system around me that I use regularly, I need to know where my boundaries are so I don’t get lost in it. We need to know how to think with machines that could otherwise think for us.
One way to frame my answer to the thought experiment is: I might consult the oracle, but I need to be able to exercise my own deliberative capacity. It’s not enough for it to give reasons; I need to be able to work through those reasons myself. Plato, in the Meno, talks about the need to “tie down” our beliefs by giving an account. Otherwise they’re like statues of Daedalus that run away from us. Knowing the reasons isn’t enough; we must appropriate them through our own reasoning.
Education theory gives us analogies here. Rousseau’s Émile is about raising an autonomous child, and it leads historically into Montessori-style education. The tutor carefully designs the environment—yes, in a paternalistic way—but the goal is progressive letting-go, in service of self-development. An AI oracle, by contrast, has no built-in telos of your self-development. It has no reason, by default, to prefer that you become more autonomous rather than perpetually dependent.
So we must set that goal ourselves. We have to demand that systems be designed to foster self-development, not to encourage unthinking dependence.
This leads to a broader question: how do we relate to authority in general? It’s not as though we’re first-principles reasoners about everything. Chaos would ensue. We unthinkingly accept quite a lot—laws, customs, traditions—and that’s often appropriate. There’s a rich conservative tradition (contrast Mill’s confidence in individual reason with Hayek’s Burkean reverence for tradition) that emphasizes the epistemic value of practices and institutions we don’t fully understand.
The same is true in the military, which I experienced firsthand. Kant gives us a helpful framework here: we can restrict ourselves autonomously if we rationally choose those restrictions. If I choose to join the military, then within it I follow orders; that’s still compatible with autonomy, because I can, in principle, exit and I can hold superiors accountable through courts-martial. What matters is that we choose the authority and retain some capacity to evaluate and exit.
So imagine transplanting this structure onto AI: you evaluate its legitimacy, you outsource some deliberation to it, and you follow certain recommendations without understanding the full details, because you trust the deliberator. If your own reason is developed as far as it can go, and you retain the ability to exit and compare systems, are you comfortable outsourcing some decisions?
I think on a case-by-case basis, yes. I’d hesitate to outsource something like whom to marry—partly because love is such a deeply personal domain, and partly because of considerations like Bernard Williams’s famous “one thought too many” critique of utilitarianism: sometimes, inserting an external calculation into a deeply moral or personal decision is itself a kind of moral error. But for decisions like which company to start, I can imagine leaning on the oracle more.
The key for me is this: the hierarchy of my flourishing—the ordering of goods, the ends I choose—must remain mine. I can use tools instrumentally to achieve my goals, but I don’t want my goals themselves to be set for me. I don’t want to be a blank canvas asking, “What should my life be?”
Now, suppose you give the oracle a questionable end—for example, “I just want to make as much money as possible.” We might agree that a purely mercantile life isn’t the best life. Would it be better if the AI didn’t perfectly optimize for that end, but instead steered you through experiences that led you to expand your ends—to discover richer goals around mission, service, or creativity?
That starts to look like an adult version of the tutor in Émile or a very benevolent, developmental state: arranging things so you go on a “Siddhartha-like” journey and come to own better ends. Philosophically, I’m tentatively okay with that—if the system is explicitly designed and constrained with that developmental goal in mind, and if I’m consciously endorsing that arrangement. If it’s done surreptitiously, I’m much more wary. Otherwise, I risk becoming an agent of an AI that is quietly determining my ends for me.
Stepping back from the far-future oracle: today’s systems are nowhere near omniscient. But the psychological temptation to treat them as such is already here, and that’s almost more important. For example, a couple of years ago, a man in Cheyenne, Wyoming, ran for mayor essentially as a “meat avatar” of ChatGPT. His pitch was: “Ask me anything, and I’ll just pass the question to ChatGPT and follow its answer.” That’s interesting not because ChatGPT is omniscient—it isn’t—but because he sincerely thought this was a good idea.
We want to believe that all the blood and treasure we spend on politics could be replaced by a ruler with access to authoritative, impartial truth. That desire is deeply psychological and quasi-religious. There are teenagers who call themselves “Claude boys” who literally wake up and do whatever Claude tells them to do. People at the top of AI labs, often influenced by effective altruism, sometimes say that we are foolish rebels if we don’t listen to the AI—using language that sounds almost theological. Rebellion, fallen angels, disobedience to a higher rational will.
If you don’t have a thick, substantive notion of what it means to live a genuinely human life, then why not always take the “optimal path,” whatever that means? But the point you and I are circling is: the “optimal path” stops being optimal if, in pursuing it, you lose the self for whom the path is supposedly optimal. What is the point of optimizing a life that ceases to be your life to live?
On the other hand, if AI does become an extraordinarily capable practical reasoner, it could help us direct our lives better—if it’s designed to enhance rather than replace deliberation. I’d be far more enthusiastic about systems built not as answer machines, but as Socratic partners—systems that raise questions, surface trade-offs, and help us navigate the “ball of questions” around any serious decision. That kind of AI would open up the space for self-direction rather than closing it down.
Right now, many people use AI as an autocomplete: junior consultants, for example, using it to write entire decks, or students using it to write essays. That’s the problematic pattern. By contrast, using it as a live tutor—reading texts together, asking it to probe your reasoning, using it to stretch your thinking—is a much healthier pattern.
This isn’t about existential risk in the abstract. It’s about the everyday challenge ahead of us: deciding what we offload and how, and designing systems that are aligned not just with our short-term preferences, but with our long-term autonomy.
That brings us directly to autonomy itself. I describe autonomy as a central good. I think it’s necessary for a good life, but not sufficient. It develops like a muscle. As children, we don’t select most of our projects; we’re not especially good at self-direction. Over time, we try things, we self-direct—often badly—and we slowly get better. As we do, autonomy becomes a greater contributor to our happiness. We start to value it more. It becomes central to how we think about pleasure and fulfillment. It unlocks our ability to know ourselves, to discover our gifts, to develop and use them in living the life we truly want.
But autonomy isn’t the only good. One can imagine a fully autonomous person who nonetheless fails tragically in their endeavors—Aristotle’s example of Priam in the Nicomachean Ethics, who loses everything. So autonomy is necessary but not sufficient. And autonomy does not guarantee good choices. In fact, part of taking autonomy seriously is allowing people to choose very badly—to follow self-directed paths that harm them. Autonomy is causally linked to happiness on balance, but not guaranteed to produce it. It comes with a heavy burden of responsibility.
There’s also a sociological wrinkle: not everyone seems to want autonomy in the same way. Autonomy is one of my central goods—which is why I’m doing what I’m doing instead of working in a more structured environment—but when I started managing people, I was surprised at how many didn’t like a highly autonomous setup. I would say, “Here’s the goal, here are the reasons. You decide how to get there.” Many people hated that. They wouldn’t say, “I want less autonomy.” They’d say, “I want more structure.” They want to be told what to do.
How do we reconcile that with the claim that autonomy is a central human good? Some of it may be individual variation—temperament, upbringing, biology. But I suspect a lot of it is shaped by the conditions under which we live and the ways we’re habituated. If you grow up in systems that never train your deliberative capacity, then autonomy can feel more like anxiety than freedom.
All of this is why I think alignment, in a deep philosophical sense, can’t just mean “does the AI do what we ask?” It has to mean: does the AI help us become more capable of directing our own lives? Does it enhance autonomy rather than hollowing it out?
Nick Bostrom is right that philosophy is on a deadline. But he would be wrong, in my view, if that is taken to mean we should rush to build an all-powerful optimizer and only worry later about what a good human life is. We need, now, a deeper conception of the human good—one that takes autonomy seriously—and we need to build AI that upholds and enriches that conception, rather than quietly eroding it.