Don’t Call It ‘Intelligence’

Humans are question machines. AI is an answer machine.
I am occasionally asked by colleges to give a version of a talk on how I became a writer. The easy thing to do is to give a sort of guided tour through the woods of literary self-formation: a string of anecdotes designed to elicit a few chuckles, a moment or two of reflection about the inevitable bends in the road, things that felt momentous but turned out not to matter, or things that didn’t seem significant at the time but with hindsight turned out to be the most important of all.

Typically, these tours end in the same place: The author has found a path through the wilderness, and discovered a voice along the way. Voice is what leads us out of the woods.

The trouble, at least for me, is that this kind of speech is mostly fiction; the path is only a path in retrospect. Telling the story this way elides, smooths over, and underestimates the role of circumstance and dumb luck. Most of what a writer experiences is failure. Developing a voice takes years. The point is not to make it out of the woods quickly or unscathed. Getting lost is not the rough part. It’s the whole thing.

Now along comes AI, purporting to be our GPS through the woods. Not just any guide: tireless, fearless, knows all the shortcuts. AI obviates the need to enter the woods in the first place. Why face the blank page and the blinking cursor? Why struggle to understand what you mean and how to articulate it? Why listen to your own croaky, warbly voice when you can push the button for fluid, facile, polished language, available anytime, on any subject? Voice on demand.

When I speak to high-school and college students (including my own children), I worry that at the time when they should be developing their own voices, they’re being told they don’t need to bother. AI writes for us, reads for us, thinks for us. It replaces our voice with its own.

Except that AI doesn’t have a voice. It’s lip-syncing ours. It’s an average, a remix. Initially, the large language models had no ingredients other than our human language. Without the natural voice, there could never have been an artificial one. But if we become content to substitute AI-generated language for our own, we end up in a closed loop in which the same outputs are recycled back as inputs.

What I fear is that we’re losing the ability to tell the difference between our voice and the machines’. Or worse, losing the will to argue that there is one.

And it is an argument. Those who are the most bullish on machine learning argue that artificial general intelligence, or AGI—artificial intelligence models that match or surpass human cognitive capabilities on any task—is imminent, just two or three years away. Some say ten years, or more. It’s a rolling target, always just over the horizon. But regardless of timeline, the idea is that all of our “cognitive work” will soon be automated. They believe this is possible because they believe that the language we produce is fungible with that generated by LLMs.

I’m not interested in predictions or timelines, or in who is right or wrong and by how much. I’m no AI expert, nor am I even an AI amateur. I’m not a neuroscientist or a cognitive scientist or any kind of scientist at all. What I am is a parent of teenagers, a human, a reader, and a writer, in roughly that order. What I am struggling with, like many others, is how to think about AI, and what it means for work, school, and life—and how to talk about all of that with my children (who surely have much more insight into AI than I do).

What I’m most interested in is the “I” in AGI. What does it actually mean? And why have we let a small number of wealthy businesspeople define it?

Sam Altman, the CEO of OpenAI, promised that engaging with GPT-5 would be like talking “to a legitimate Ph.D.-level expert in anything.” I can’t stop thinking about how revealing—and weird—that definition of intelligence is.

Don’t get me wrong. It’s incredible that we are even having this conversation. I don’t want to minimize the distance the technology has traveled, the speed with which it has done so, or how far it might still go. What I do want to do is ask a question: How can we create intelligence when we don’t fully understand—can’t even really define—what intelligence is?

Back to Altman’s formulation: General intelligence means being a Ph.D.-level expert in anything. Such expertise is no doubt impressive, and certainly related to, or even a component of, intelligence, however defined. But it’s only one small part of intelligence. My alma mater, UC Berkeley, offers doctoral programs in 94 fields of study. Presumably AGI will cover all of those.

But the achievement of a degree does not cover, does not even purport to touch, emotional intelligence. What is a Ph.D. in reading the room? In teaching your kid to ride a bike? In crying because you were moved by a piece of music? We consider elephants intelligent because they mourn their dead. What is a Ph.D. in grief, awe, wonder, curiosity?

Perhaps no one should be surprised that some of the world’s best scientists and engineers have defined intelligence the way they have. Even if the AGI champions’ motives were entirely altruistic, they would still be biased by their own way of seeing the world, by their own experiences and successes. Researchers at the forefront of AI are among the most brilliant and accomplished minds on Earth—and they make up a very narrow, self-selected group of people primed to understand certain kinds of knowledge better than others: explicit, well-defined, tokenizable knowledge; knowledge that forms the basis of our most far-reaching, wildly accurate theories of the universe; knowledge that has allowed us to create world-changing technologies. But that is only a small subset of all knowledge—the sliver that can be expressed symbolically, as language or mathematics.

The rest is what the philosopher Michael Polanyi called “tacit knowledge,” which accounts for a far larger share of what we know and interacts with the world in far more ways. His philosophy of knowledge can be summed up in a single line: “We can know more than we can tell.”

Is that part of AGI? I don’t believe so. I won’t believe it until ChatGPT texts me a link to a video that made it laugh or cry or reconsider its opinions on that thing we were talking about the last time we spoke.

Until it does, I’d argue that the “I” these engineers are chasing is a proxy—or even a misnomer. It’s nothing like intelligence as we understand it.

You might say this argument is flawed, based on an anthropocentric view of intelligence. Maybe we have to let go of preconceptions and embrace the idea that machine intelligence can—and perhaps must—be radically different from human intelligence. Maybe machine intelligence doesn’t require sentience, or autonomy, or curiosity, or feeling.

Say I concede all that. What I am arguing is that, whatever the machines can do—as incredible and useful and potentially economically valuable as their capabilities may be—none of it merits the word intelligence.

A couple of outliers aside, even the most enthusiastic proponents of AGI don’t believe that the frontier AI models are capable of feeling. Meaning they must assume that intelligence can be decoupled from embodiment and emotion. They are saying: We understand what intelligence is, in its distilled and isolated form.

To which I would say: Please share that definition with the rest of us.

If they’re right, we’ll know soon enough.

But if they’re wrong, the relentless pursuit of AGI poses real risks: to social policy, to education, to our power grid, to the economy, to the environment. Already, generative AI feels like supply in search of demand. The need to scale up and the ever-present pressure to seek higher rates of return have combined to create a mind-boggling movement of capital and societal resources into one industry. Generative AI is the tech equivalent of high-fructose corn syrup: a possibly useful ingredient that is now being inserted into much of what we consume, without our consent.

But perhaps just as important are the potential harms to our own self-conception, both as individuals and as a species.

AI will continue to improve. It might change the world; arguably, it already has. But for now—and perhaps always—it is no substitute for the human voice.

Voice is what we use to communicate with one another. Voice is the sound we make as we navigate the unknown—our echolocation, mapping the world, attempting to place ourselves within it. Voice encodes experience, loss, pain, joy. We don’t acquire voice in spite of failure, but through it. Because of it.

AI doesn’t have a voice, and it’s not communicating with us. Not really. It answers our questions. That’s what it was built to do. It’s an answer machine. But we are question machines. Questions are essential to intelligence. Without them, we are static, stagnant. Without them, we don’t evolve. We can learn answers, but only by asking questions. Questions are how we recursively self-improve. We humans are constantly prompting one another in endlessly creative ways. We prompt. We answer. Our answers become new prompts. Our context windows are our lifetimes; our tokens are uncountable.

This is about more than semantics. By calling what AI can do “intelligence,” we are conflating a technological capability with a human attribute. We are dumbing ourselves down—not by talking to AI but by measuring ourselves against it. The danger isn’t that we are overestimating AI. It’s that we are underestimating ourselves.

This essay was adapted from Charles Yu’s 2026 Joel Conarroe Lecture, given at Davidson College on February 10.
