
The Ghost in the Machine: Why AI Writing Sounds Like It Does

  • Nitesh Daryanani
  • Jul 27
  • 6 min read

AI writing has a vibe. You know it when you see it. It’s smooth, articulate, well-structured—but weirdly generic. Like someone who’s very fluent but has never had a personal experience. Whether it’s finishing a sentence, summarizing a news story, or writing a fake Shakespearean sonnet, the tone is confident yet uncanny—like language without a voice.


Which is maybe exactly what it is.



To understand why AI writing sounds the way it does, we have to go back to a theory that predates ChatGPT by decades: structuralism. Ferdinand de Saussure, Claude Lévi-Strauss, and other structuralists argued that meaning doesn’t come from words in isolation, but from how they relate to each other within a larger structure. And it turns out, that’s exactly how LLMs work.


Structuralism in Silicon


Ask an LLM to finish a phrase like “It was a dark and stormy…” and it doesn’t pull its response from memory. It doesn’t remember the phrase the way you or I might. It generates a response by mapping statistical relationships across billions of words and picking up patterns—what usually comes after “dark and stormy”? Who uses this phrase, and in what contexts?
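

To make that concrete, here’s a toy sketch in Python: a hypothetical three-sentence corpus stands in for billions of training words, and the “model” is nothing but a frequency count of what has followed the phrase before. Real LLMs replace raw counts with a neural network, but the logic is the same.

```python
from collections import Counter

# Hypothetical mini-corpus standing in for billions of training words.
corpus = (
    "it was a dark and stormy night . "
    "a dark and stormy night fell over the bay . "
    "his dark and stormy mood returned ."
).split()

# Count which word follows the phrase "dark and stormy" in the corpus.
follows = Counter()
for i in range(len(corpus) - 3):
    if corpus[i:i + 3] == ["dark", "and", "stormy"]:
        follows[corpus[i + 3]] += 1

# The "prediction" is simply the most common continuation.
print(follows.most_common())  # [('night', 2), ('mood', 1)]
```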


Or give it something ridiculous, like “Write a breakup text in the voice of a pirate,” and you’ll get, “Arrr, it pains me heart to say it, but we be better off sailin’ solo.” The LLM is not being creative here—it’s stitching together fragments of pirate tropes, modern breakup language, and texting norms, all based on how those things show up in its training data.


Both examples reveal the same underlying process: the LLM is mapping relationships between tone and situation, register and genre, word and context.


That’s the structuralist point: meaning isn’t in the word, it’s in the network. And an LLM is like a machine tuned to detect that network—not a mind that understands what it’s saying, but a mirror that reflects the deep structures of how we think and speak, exposing the scaffolding behind our thought.
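

This idea has a direct computational analogue in distributional semantics, where a word is represented by the contexts it keeps. Here is a minimal sketch with invented co-occurrence counts (the context words and numbers are made up for illustration):

```python
import numpy as np

# Invented co-occurrence counts with four context words:
# [weather, oven, ice, soup]
vectors = {
    "hot":  np.array([2.0, 5.0, 0.0, 4.0]),
    "warm": np.array([3.0, 4.0, 1.0, 3.0]),
    "cold": np.array([4.0, 0.0, 5.0, 1.0]),
}

def cosine(a, b):
    """Similarity of two words, computed purely from shared contexts."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "hot" lands near "warm" and far from "cold" without any definition of
# temperature: its meaning is just its position in the network.
print(round(cosine(vectors["hot"], vectors["warm"]), 2))  # ~0.96
print(round(cosine(vectors["hot"], vectors["cold"]), 2))  # ~0.28
```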


This structural approach explains one of AI's most distinctive quirks.


The Binary Brain: How AI Thinks in Pairs


AI writing is filled with opposites: hot/cold, light/dark, inside/outside. Ask an LLM to set a scene and you’ll get, “The room was warm and glowing, a welcome contrast to the bleak gray drizzle outside.” It loves contrast.


But this isn’t an aesthetic preference. It’s how language—and thought—are structured. Structuralists argued that we understand the world through binary oppositions: raw/cooked, good/evil, nature/culture. AI mirrors that logic because our language is built on it.


This also explains one of the most commonly mocked features of AI writing: its obsessive use of "not X, but Y" constructions. Critics roll their eyes at phrases like "not artificial mind, but linguistic ghost" or "not creativity, but pattern recognition." But this isn't just a stylistic tic—it reveals something fundamental about how meaning works.


Every word gets its definition partly from what it isn't. "Hot" means something because it's not cold. "Justice" emerges in opposition to injustice. AI doesn't overuse these constructions because it's poorly trained; it uses them because opposition is how language creates meaning. The mirror is simply showing us that contrast isn't just how we describe things—it's how we think about things.
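

You can even hunt for the tic mechanically. Here’s a deliberately crude sketch (a toy regular expression, not a serious detector) that pulls “not X, but Y” pairs out of a text:

```python
import re

# A crude pattern for the "not X, but Y" construction (illustrative only).
pattern = re.compile(r"\bnot\s+([\w\s]+?),\s*but\s+([\w\s]+)", re.IGNORECASE)

sample = ("Not artificial mind, but linguistic ghost. "
          "It is not creativity, but pattern recognition.")

for x, y in pattern.findall(sample):
    print(f"opposition: {x.strip()!r} / {y.strip()!r}")
# opposition: 'artificial mind' / 'linguistic ghost'
# opposition: 'creativity' / 'pattern recognition'
```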


Try prompting an LLM to respond without using binaries, and you’ll see it flail. Contrast is so deeply ingrained in how we describe things that avoiding it feels unnatural. AI isn’t imposing binaries—it’s revealing just how deep they go. This mirroring effect becomes even more apparent when we look at how AI handles different voices and styles.


The Author Was Already Dead


When you ask AI to write like someone specific—Hemingway, a sarcastic teen—it all sounds strangely alike. You get the sentence structure, the tone, the surface-level tics. But the deeper voice? The sense of a unique perspective? That doesn’t quite land.


And that’s not a glitch—it’s kind of the point.


When Roland Barthes declared "the author is dead," he was making a fundamentally structuralist point: what we call an author’s “voice” is really a pattern—vocabulary, syntax, tone, rhythm. AI gets that. It picks up on those patterns and blends them. So when it mimics Hemingway, it’s not channeling a tortured genius with a whiskey in hand—it’s assembling a structural fingerprint: short sentences, few adjectives, no fuss.
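

That structural fingerprint is measurable. The toy sketch below reduces “voice” to two crude stylometric features, average sentence length and average word length, on invented example sentences; real stylometry uses far richer features, but the principle is the same:

```python
def fingerprint(text):
    """A toy stylometric profile: voice reduced to measurable pattern."""
    sentences = [s for s in text.split(".") if s.strip()]
    words = [w.strip(".,") for w in text.split()]
    return {
        "avg_sentence_len": round(len(words) / len(sentences), 1),
        "avg_word_len": round(sum(len(w) for w in words) / len(words), 1),
    }

hemingway_ish = "The sun rose. The man fished. The sea was calm."
ornate = ("The resplendent sun, ascending magnificently, illuminated "
          "the shimmering, tranquil expanse of the sea.")

print(fingerprint(hemingway_ish))  # {'avg_sentence_len': 3.3, 'avg_word_len': 3.5}
print(fingerprint(ornate))         # {'avg_sentence_len': 13.0, 'avg_word_len': 6.6}
```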


But here’s the uncomfortable part: that’s kind of how we write too. What feels like originality is often just habit and remix. Unless we're actively drawing from our lived experience—our mistakes, our discoveries, our particular way of seeing—we're often just cobbling together patterns we've picked up over time. The difference is intention and stakes. AI just makes that remixing process impossible to ignore.


Grammar, Culture, and the Algorithmic Unconscious


So AI doesn't know or understand anything. But it still finishes your sentence—"The child ran to his mother because…"—with "he was scared." (Note even the gendered assumptions baked into this example.) This reveals something profound: the patterns that feel natural to us are so consistent that a machine can learn them without comprehension.


This is the algorithmic unconscious at work: the cultural rules we follow without realizing it. Just like we "know" grammar without being able to explain it, AI absorbs the deeper grammar of our assumptions. It learns that children run to mothers when scared, that agents act before objects, that certain emotions follow certain situations—not because it understands these concepts, but because these patterns are woven into the fabric of how we describe the world.
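

You can watch this unconscious at work. The sketch below assumes the Hugging Face transformers library and the standard bert-base-uncased weights; exact completions vary by model and version, but a masked language model will volunteer its learned expectations:

```python
# Assumes: pip install transformers torch. Outputs depend on model/version.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for guess in fill("The child ran to his mother because he was [MASK].")[:3]:
    print(guess["token_str"], round(guess["score"], 3))
# Expect affect words like "scared" or "afraid" near the top: a learned
# statistical pattern, not comprehension.
```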


Mikael Hvidtfeldt (CC BY 2.0)

Perhaps that’s why it feels uncanny, like there’s a ghost in the machine. LLMs don’t have beliefs, but they reflect ours—perfectly, statistically, and without critique. Every sentence is a little echo of human cognition, passed down and reassembled at lightning speed.


Mirrors, Not Minds


This brings us back to the central insight: LLMs are mirrors, not minds. And understanding them as mirrors—rather than aspiring humans—changes everything. Mirrors reflect perfectly, but they don't feel anything about what they show. Similarly, LLMs don't "know" what they're saying, but they show us how we say things.


Ask AI to write a joke and you'll often get the scaffolding—setup, punchline, twist—but not the spark. Ask it for an argument and you’ll get a list of points, but no urgency, no real stake in the outcome. It can mimic the form of meaning, but not the feeling. The LLM's fluency reveals that so much of what we call style, voice, or originality is actually structure, repetition, and remix.


That can be unnerving. But it can also be liberating. If language is built on shared structures, then every sentence we write is part of something bigger—a kind of collective inheritance. AI didn’t invent those patterns; we did. And we renew them every time we speak, encoding the accumulated wisdom of human culture into the very fabric of our language.


While structuralism explains a lot about how we use language, it doesn’t explain everything. Real creativity, ethics, embarrassment, grief, joy—those don’t live in linguistic patterns alone. They live in us. They come from context, from intention, from being in a body in a world full of noise and friction.


AI doesn’t have that. It doesn’t get embarrassed. It doesn’t fall in love. It doesn’t misread the room. When we stop expecting AI to sound human and start seeing it as a kind of linguistic X-ray, it becomes genuinely useful. It doesn't just show us what we've written—it shows us how we write, revealing the invisible structures that shape every sentence. The mirror doesn't lie about what it reflects; it just can't explain why we look the way we do.


Living with the Ghost


So when an LLM gives you a phrase like “not artificial mind, but linguistic ghost,” it’s actually onto something. These systems aren’t thinking, but they are haunted—by the structures of language, by the invisible rules of thought that we’ve been following all along. In their mechanical navigation of linguistic space, they offer us a new way of seeing ourselves. We are not autonomous creators of meaning, but participants in a vast, ongoing structural symphony that we play but did not compose.


The ghost in the machine, it turns out, was never artificial intelligence at all. It was the structure of language itself, finally made visible through silicon and statistics.