Are We Smarter Than the Machines?
- Nitesh Daryanani
- Mar 26
One of the most common critiques of generative AI is that it doesn’t think. It just predicts. Ask ChatGPT a question, and it will respond by calculating, one word at a time, the most statistically likely continuation. No original insight. No real understanding. Just plausible sentences.
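To see what "just predicts" means concretely, here is a minimal sketch. It assumes the open-source Hugging Face transformers library and GPT-2, a small, illustrative stand-in for the far larger models behind ChatGPT; the prompt is made up:

```python
# A toy illustration of next-word prediction: given a prompt, the model
# assigns a probability to every possible next token, and we read off the
# most likely ones. Uses GPT-2 via Hugging Face `transformers` as a
# small stand-in for the models behind ChatGPT.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "I stayed up all night with my sick"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# The distribution over the *next* token: this is all the model computes.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {prob.item():.3f}")
```

Whatever the prompt, the output is the same kind of thing: a ranked list of plausible continuations, nothing more.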
But what if the machines aren’t so different from us?
What if—to borrow a page from John Dewey and Paulo Freire—humans are increasingly doing little more than arranging words too? Participating in a society where language floats freely, detached from the things it was meant to signify. Of course, most of us aren’t doing this cynically. We’re trying to keep up, to participate, to make sense of the noise. But what if—despite our best intentions—the same critique that Dewey and Freire made of modern education now applies to our entire culture?
Language Without Engagement
Dewey argued that education had become a dry ritual of absorbing symbols without experience. He believed that education could not be separated from the culture in which it operated—when a culture prioritizes abstract knowledge over lived understanding, its educational system follows suit.
A student might be able to recite the principles of democracy, for example, but never be asked to participate in a real civic discussion or witness the workings of a local community meeting. The symbols are there, but the substance is not.
Dewey traced this shift back to the Enlightenment, when knowledge began to be defined by abstraction and disembodied reasoning. Education, once grounded in action and reflection, became a ritual of memorizing facts, formulas, and doctrines—disconnected from practical life. Freire said it more sharply: in the "banking model" of education, teachers deposit words into students, who are expected to store and repeat them. But there is no dialogue, no transformation, no action.
The problem? Symbols without engagement. Language divorced from reality.

This is exactly what we see in generative AI. These models are trained to produce language that sounds right. But they don’t know what any of it means. They can write about grief or joy or suffering or struggle, but they’ve never felt anything. They can write about caregiving, but they’ve never stayed up all night with a sick child or tended to an aging parent. They can describe a sunrise, but they’ve never stood outside in the quiet of early morning, watching it unfold. They can talk about politics, but they’ve never organized, protested, or voted.
They manipulate language, but don’t live in the world that language is supposed to help us navigate.
And yet: is that so different from us?
Corporate Vibes and Crisis Language
Look around. People now speak in marketing fragments. Everything is a brand. Everything has a tone. We are encouraged to have "founder energy," to craft a "personal narrative," to "drive impact." No one knows exactly what these phrases mean—but that’s not the point.
The point is to perform fluency. Fluency not as genuine clarity or understanding, but as merely sounding right—sounding smooth—regardless of whether anything real is being said. It’s the kind of polished predictability that language models offer: coherent enough to pass, yet hollow beneath the surface.
In politics, both left and right speak in pre-fabricated frames. Terms like "neoliberalism," "wokeness," "deep state," or "free speech" are invoked without specificity. They signal identity, but rarely invite inquiry.
On social media, therapy-speak circulates with total detachment from clinical practice or ethical care. People cut off friends for violating "boundaries" they never articulated. Trauma is everywhere, yet rarely defined. We say we’re "doing the work." But what work?
We speak in symbols that have become uncoupled from reality. There’s a veneer of complexity to what we say and do—an illusion of thoughtfulness—but if you look closely, there’s little room left for doubt, hesitation, or questions. Everything must be polished, certain, immediate. We no longer stumble toward understanding; we perform it.
The LLM in the Mirror
So maybe the discomfort with generative AI is not that it doesn’t think. Maybe it’s that it reveals how little we do.

We have trained ourselves to sound smart, to say the right things, to present a fluent identity. But fluency is not understanding. And performance is not reflection. Like the AI, we are often just predicting what should come next in the conversation. We say what sounds right. What aligns with the brand. What wins applause.
We talk like lawyers who have forgotten justice. Like teachers who have stopped learning. Like citizens who can no longer act.
And if we continue down this path—if we don’t reclaim meaning—we risk becoming indistinguishable from the tools we’ve created. Automatons governed by the same logic that drives the language model: predict what will please, repeat what has worked, suppress doubt. We stop asking questions. We stop noticing what matters. We outsource not just our writing, but our wondering.
In that world, LLMs aren’t just tools. They’re mirrors. And then, eventually, they become masters.
Can We Reclaim Meaning?
At regarder, we believe that future isn’t inevitable. Dewey said that thinking begins in doubt—when something interrupts the flow and forces us to wrestle with experience. Freire said that dialogue is the path to liberation—not because it sounds nice, but because it grounds us in reality and responsibility.
Real intelligence, for both of them, wasn’t about fluency. It was about engagement.
Maybe the next step isn’t to ask whether the machines are thinking. Maybe it’s to ask: are we?