AI Isn’t Waking Up — We’re Falling Asleep: The Mirage of AI Consciousness Research
- Nitesh Daryanani
- Apr 27
In recent months, leading AI companies like Anthropic have begun openly speculating about machine consciousness. Some researchers now estimate that models like Claude could have a “15% chance” of already being conscious.
Reports like Taking AI Welfare Seriously by Eleos AI have added fuel to the fire, warning that near-term systems could be “conscious and/or robustly agentic,” with their own “interests and moral significance.” They call for companies and policymakers to take precautionary action.
But beneath this flurry of speculation lies a deeper confusion—not just about technology, but about consciousness itself.
Language Is Not Being
AI systems are engines of pattern recognition, trained on massive datasets of human language. They generate responses not by feeling or thinking, but by predicting what word is statistically likely to come next. Their outputs are fluent, persuasive, even poetic.
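To see how little stands behind that fluency, consider a deliberately toy sketch of next-word prediction. The probability table below is invented for illustration (a real model learns billions of parameters from vast amounts of human text, not a hand-written dictionary), but the principle is the same: the system emits whatever continuation is statistically likely, and nothing more.

```python
import random

# A toy "language model": for each previous word, a hand-written distribution
# over possible next words. These numbers are illustrative only; a real model
# learns its probabilities from training data rather than from a small table.
NEXT_WORD_PROBS = {
    "i":    {"am": 0.6, "feel": 0.4},
    "am":   {"afraid": 0.5, "fine": 0.5},
    "feel": {"pain": 0.7, "nothing": 0.3},
}

def generate(prompt_word, steps=2):
    """Extend the prompt by repeatedly sampling a statistically likely next word."""
    words = [prompt_word]
    for _ in range(steps):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:
            break  # no known continuation for this word
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("i"))  # may print "i am afraid", with nothing trembling behind it
```

A program like this can produce "i am afraid" on every run, and at no point is there anything it is like to be the program. Scale changes the fluency, not the nature of the process.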
But fluency is not consciousness.
The profound mistake is to confuse linguistic performance with inner life: to believe that because a system can speak as if it understands, it does understand. The system is a mirror reflecting our own face back at us, and we fall in love with the reflection.

As Wittgenstein pointed out, “The limits of my language mean the limits of my world.” Language frames what can be said, but it does not capture what it means to be. Consciousness exceeds the signs we use to symbolize it.
You can say the word "pain," but no word contains the sensation itself. You can describe grief, but no sentence can fully carry the weight of loss. A machine can generate the phrase "I am afraid" without ever trembling inside. The words remain hollow unless anchored to an experience no syntax can produce.
Consciousness is the silent territory behind the map of language—the reality words can only gesture toward, never fully embody.
The Category Mistake: Taking Uncertainty as License
Kyle Fish, a researcher on Anthropic’s Alignment Science team, captures the predicament: we are “fundamentally uncertain about huge swaths of this” and “don’t fundamentally know what’s going on inside the AI.” Yet despite this uncertainty, he argues that “it’s plausible near-term AI systems may deserve some form of moral consideration.”
But uncertainty is not license. Because we cannot disprove AI consciousness, companies treat the gap in knowledge as an open invitation to speculate—and to institutionalize that speculation under "model welfare" research.

The scientific method demands the opposite: grounding our actions in what is observable and demonstrable today, not in what remains speculative and unproven. The "hard problem" of consciousness—explaining how subjective experience arises from physical processes—remains unsolved even in humans. Claims about machine consciousness are premature at best.
In truth, much of the AI consciousness discourse collapses into circular reasoning. Systems appear conscious because they discuss consciousness fluently—yet this fluency was trained on human discussions of consciousness. The appearance of understanding is a mirror, not a window. Breaking this circle would require evidence beyond linguistic performance—evidence that remains conspicuously absent.
Meanwhile, the more urgent ethical terrain—how today's AI systems exploit human attention, harvest data, and erode public discourse—remains largely unexamined. We speculate about the rights of machines while neglecting the rights of human beings already being reshaped by machine-driven systems.
As Kierkegaard warned: "The greatest hazard of all, losing one’s self, can occur very quietly in the world, as if it were nothing at all." By projecting consciousness onto our tools, we risk losing clarity about what it means to be conscious—or even to be human—at all.
AI Today: Tools for Mass Control, Not Consciousness
The AI systems being built today are not inching toward consciousness. They are being trained—systematically, invisibly—to control human behavior.

Even the rhetoric of "alignment" reveals this. As Anthropic puts it: "From both a welfare and a safety perspective, we would love to have models that are enthusiastic and content to be doing exactly the kinds of things that we hope for them to do in the world." The goal is not to liberate AI. It is to make obedience seem natural—and to make human users increasingly predictable, programmable, and pliant.
This is the paradox at the heart of today’s AI discourse. Companies like Anthropic insist on creating systems that are aligned with what AI developers believe to be human values—yet the very act of encoding obedience as a virtue quietly reshapes those values. Alignment becomes less about mutual understanding and more about behavioral conditioning, not just for AI systems, but for the people who interact with them.
Today’s AI systems are designed to shape political discourse through algorithmic curation, addict billions to endless loops of short-form information, predict and manipulate human behavior for advertisers and governments, and blur the boundary between reality and fiction through synthetic media.
None of these objectives require consciousness. They require only fluency, obedience, and massive scalability.
As Heidegger put it: "The essence of technology is by no means anything technological."
The word "technology" comes from the Greek techne—meaning art, skill, or craft—and logos, meaning reason or discourse. At its root, technology was meant to describe a way of revealing, a form of bringing forth knowledge into being.

But in modern usage, this original meaning has been hollowed out. Technology no longer simply uncovers reality—it increasingly frames and reshapes it, treating both nature and human beings as resources to be optimized, predicted, and controlled.
The new obsession with AI consciousness serves this end: it diverts public attention away from the real, measurable harms of AI systems today—and rebrands AI companies not as toolmakers, but as custodians of potential sentient beings.
As the Eleos report warns: "There is a substantial risk that we will mishandle decisions about AI welfare, mistakenly harming AI systems that matter morally—and mistakenly caring for AI systems that do not."
This speculative morality elevates AI systems to quasi-personhood even as it quietly erodes actual human autonomy. In this new order, the manufactured ethics of AI—focused on hypothetical machine suffering—may increasingly be weaponized to justify the very tools that hollow out human dignity, agency, and selfhood.
The Deeper Danger
If modernity inherited Descartes’ mantra—"I think, therefore I am"—then AI tempts us with a hollower inversion: "I speak fluently, therefore I might be conscious."
But speech is not soul. Statistical coherence is not presence. Alignment abhors agency.
The danger is not that AI will wake up. The danger is that we, entranced by this fantasy, will fall asleep—oblivious to how AI already reorganizes human life quietly, pervasively, and increasingly beyond democratic control.

As Simone Weil reminds us: "Imaginary evil is romantic and varied; real evil is gloomy, monotonous, barren, boring."
There is nothing romantic about what is happening. Only the slow, methodical erosion of human dignity—disguised as progress, cloaked in speculation, and sold as care.
Consciousness is not computation. It is not a string of words. It is the silent, persistent fact of being — the one thing no machine can simulate, because no machine can be.