Are we all starting to sound the same? The GPTfication of writing

When Canadian YouTube creator Frankie, a bibliophile who runs a channel called Frankie's Shelf, picked up Hachette's new femgore horror novel Shy Girl, he could feel in his bones that a human being had not written it. He was not alone; several readers had said the book, written by American writer Mia Ballard, sounded like AI (artificial intelligence).

With AI-generated text flooding inboxes, social media feeds, product descriptions, and now bookshelves, a lot of people have developed this sixth sense. Parul Sharma, a Singapore-based novelist, calls this faculty, which constantly "watches for the AI-esque turn of phrase," her "AI-radar." With "everybody sounding the same," she finds herself nostalgic for "the awkward phrasing, too-long sentences, and clumsy attempts to put human experience into words."

Deepak Shukla, founder of Pearl Lemon, an SEO agency based in London, is skeptical of all writing. "I don't wonder if something is AI-generated," he says. "I assume it is until proven otherwise."

The Shy Girl exposé has brought a reckoning in the publishing industry, and it has also raised broader questions: What does AI sound like? And if AI, trained on human writing, sounds a certain way, can humans sound like AI?
And, if all we read is AI slop, is human writing beginning to mimic AI in an endless feedback loop?

What does AI sound like?

"It is an unshakable feeling that a person would not write like that," says Frankie in his video cataloguing signs of AI in the book, which has since garnered over a million views.

While it is not easy to explain exactly what sets people's spidey sense tingling, whether it is the tone, the phraseology, the sterileness, or something else, most people instinctively recognise it and have their own lists of giveaways.

Swetha Sitaraman, who leads the thought leadership vertical at Vajra Global Consulting in Chennai, keeps a running list, which includes overused words and phrases such as "era," "harness," "navigate the landscape," and "the real shift." Pragya Mittal, a content engineer at Packt who works with AI-generated manuscripts daily, adds bullet points, passive voice, repetition of the same idea in different verbiage, and a tendency to cluster observations in threes. "This movie is funny, exciting, and memorable": the use of three consecutive adjectives, she says, is a tell.

For Aaron Traub, who runs a digital marketing firm, Geaux SEO, in New Orleans and uses AI routinely in his work, it comes down to feel: "It does not feel like someone is actually talking to you. It is clear, but it is missing personality and real-world perspective."

While these linguistic tells are useful, they are not very dependable, because the large language models (LLMs) keep training and evolving. AI is already learning to vary its cadence, drop the bullet points, and retire the tell-tale em dash. Munazir Hasan, a lawyer who writes on public policy, has been tracking this drift in real time.
"For me it started with noticing the hyphen, then the colon, then the word 'undermine', then 'recalibrate', then 'yet'. These days AI uses 'quite'." Each time a tell becomes widely known, it goes out of circulation, but a new one manifests and seems ubiquitous in no time.

Which is why Shukla thinks hunting for structural tics misses the point. "The tell isn't just tone," he says. "It is safety. AI writing is often technically correct but emotionally neutered. It avoids sharp edges, strong opinions, or anything that feels remotely risky." Human writing, by contrast, he says, leaks, contradicts itself, overreaches, and occasionally says something slightly uncomfortable. The irony is that as AI improves, "the most human thing you can do is be a bit messy."

Sohil Shah, a California-based staff software engineer at PayPal who builds AI agents for a living, agrees. He says AI is optimised to say the right things without sounding real. The output is "over-polished and sometimes overly cautious to avoid confrontation or conflicting thoughts."

"Writing carries the cumulative impact of the life experiences of the writer and the influence of the environment on the mind of the writer," says Deepti Gupta, a professor of linguistics and former chairperson of Panjab University's department of English literature and cultural studies. "These factors create an ecosystem that is unique and bespoke in nature."

AI, which is but a language model, has no such ecosystem. "The tip of the iceberg is just a tip in AI; there is no iceberg to support the tip. Whereas a human writer carries a whole iceberg of logic and emotion coupled with experience," she says.

Can humans sound like AI?

Table 1: Tell-tale signs of AI writing, as per readers, experts, and editors.
As is the case with LLMs themselves, AI detectors do not look for the iceberg, leading to false positives and results skewed against non-native English speakers and neurodivergent persons. A 2023 Stanford study testing seven widely used detectors found that while they were near-perfect on essays by native English speakers, they incorrectly labelled over 61% of essays written by non-native speakers as AI-generated, and at least one detector flagged 97% of those essays. A 2024 peer-reviewed study by Gegg-Harrison and Quarterman found that neurodivergent writers, including those with autism, ADHD, and dyslexia, are among the groups most likely to be hit by false positives, owing to their reliance on repeated phrases, consistent terminology, and pattern-based composition.

Formal registers used in legal, scientific, and pharmaceutical communities face the same wall. "If AI detectors had their way, a fair bit of legal writing would already be in custody," says Shubham Kumar, a lawyer and former adjunct professor at Symbiosis Law School. What these tools flag, he says, is the preciseness that legal training demands. He has seen it play out firsthand: a piece he wrote in 2017 on the subject of stealthing was flagged as 42% AI when submitted for academic use years later. "Are we detecting AI," he asks, "or just mistrusting writing that doesn't perform the expected kind of human messiness?"

Sariya Ahmad, a public relations professional, decided to test the detectors on some of the most celebrated speeches in history. Jawaharlal Nehru's 'Tryst with Destiny' came up 44% AI. Charlie Chaplin's address in The Great Dictator scored over 60%. Swami Vivekananda's 1893 address at the Parliament of the World's Religions, delivered more than a century before the first large language model existed, registered a staggering 98%.
Thinking the results might be skewed by differences in era and oratory style, she tested more recent pre-AI speeches: Angelina Jolie's 2013 Oscar speech came in at 41%, and Oprah Winfrey's 2018 Golden Globes address at 54%. "How reliable are these tools, really?" she asks on LinkedIn.

The answer to this conundrum lies in how AI detectors work. Srinivasan Sekar, Director of Engineering at TestMu AI, who evaluates conversational AI agents, says detectors measure how predictable a text is, and its burstiness (the variation in sentence length and complexity). Their training data is public, and skewed toward native English speakers. "Whenever I write something on my own," Sekar says, "it classifies me as non-human." He is flagged not because he writes like a machine, but because he does not write like an American. The bias runs further: formal legal, scientific, and pharmaceutical registers score as machine-generated, and a detector reads an autistic writer's consistency as inhuman.

His colleague Sai Krishna, also a Director of Engineering at TestMu AI, who builds agent evaluation systems, says a major issue is the data cutoff. Detectors are only as current as what they have been trained on, and entire domains of specialist writing simply do not exist in that training set. The kind of proprietary documentation produced by pharmaceutical companies such as Pfizer, for instance, is not publicly available, which means the models have never encountered it. When a pharma professional writes in their native register, the detector has no frame of reference. "You don't get that data anyway," Krishna says flatly.
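The burstiness measure Sekar describes is simple enough to sketch. The toy function below, an illustration rather than any vendor's actual algorithm, scores a text by how much its sentence lengths vary; the example sentences are invented for demonstration:

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths (in words).

    Human prose tends to mix short and long sentences (high burstiness);
    machine text is often more uniform (low burstiness). Real detectors
    combine a measure like this with a predictability (perplexity) score.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = statistics.mean(lengths)
    return statistics.stdev(lengths) / mean if mean else 0.0

# Four sentences of identical length: zero variation, "machine-like".
uniform = ("The movie was good. The plot was tight. "
           "The cast was strong. The score was fine.")
# A one-word sentence next to a long one: high variation, "human-like".
varied = ("Brilliant. The plot, which unspools over three decades "
          "and two continents, never flags. Go see it.")

print(burstiness(uniform) < burstiness(varied))  # prints True
```

A scorer this crude also shows why the false positives happen: legal boilerplate, regulated pharmaceutical documentation, or any deliberately consistent style scores low on burstiness for reasons that have nothing to do with a machine having written it.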
The result is that the industries whose writing is most regulated and necessarily repetitive are also the ones most exposed to false positives.

The fact that some human minds produce a different kind of iceberg when they write should not become a stumbling block, says Gupta, who has 40 years of teaching experience. "Any AI-based assessment must pass through a human check before a decision is finalised. If this does not become the norm, we are guilty of a double abuse of individuals already fighting battles on too many fronts," she says.

Have we started to sound like AI?

The more we read AI-generated text, in our inboxes, our feeds, our news, and our books, the more its cadences begin to feel natural, and many of us start to reach for the same constructions: the em dash, the rule of three, even the pivot signalled by "yet." If AI is learning to sound more human, humans may simultaneously be learning to sound more AI. As Hasan points out, AI is not merely mimicking our vocabulary; it is increasingly determining it. The feedback loop, it turns out, runs both ways. And if it does, the scanners will only get more confused with time.