Once you familiarize yourself with the characteristics of ChatGPT's writing, it becomes impossible to miss. The internet has been flooded with AI-generated text that often features distinctive language patterns, from liberal use of em dashes and repetitive sentence structures to specific turns of phrase and tone.

The trend has become so ubiquitous that experts are now warning it could even influence the way we speak in real life.

As historian Ada Palmer and cryptographer and author Bruce Schneier argue in an opinion piece for The Guardian, it's a very real risk that could also entrench a fundamental flaw plaguing large language models today. While these models were trained on vast quantities of written text, social media posts, movies, TV shows, and other recordings, that data often comes up short on the "unscripted conversations we have face-to-face or voice-to-voice," which represent the "vast majority of speech, and a vital component of human culture."

It's a massive blind spot that could result in humans eventually adopting the linguistic patterns of these models, among other far-reaching consequences.

"This will affect not just how we communicate with one another," Palmer and Schneier write, "but also how we think about ourselves and what goes on around us."

"Our sense of the world may become distorted in ways we have barely begun to comprehend," they add.

Research has already shown that AI-generated language relies on shorter-than-average sentences and a narrower vocabulary than human speech.
It also sacrifices what makes human-written text human, including what Palmer and Schneier term "meanders, interruptions and leaps of logic that communicate emotion."

Worse yet, AI models developed after the advent of ChatGPT run the risk of being trained on output that was itself generated by an AI, a dangerous feedback loop that could further entrench these machine-inspired patterns.

Beyond linguistic choices, AI models have long been shown to be highly agreeable, or "sycophantic," toward the user, often indulging their potentially flawed or downright dangerous lines of thinking. It's a tendency that can "reinforce bias and even worsen psychosis," Palmer and Schneier argue.

For impressionable minds, the consequences could be far-reaching. Educators warn that students are losing the ability to think for themselves, instead consulting AI whenever they face a question they can't answer. University students worry that their peers are all starting to sound the same, relying on the same machine-generated output. Meanwhile, experts fear that widespread use of AI products in the workplace is causing users' cognitive faculties and critical thinking skills to deteriorate.

Finding a long-term solution that lets these AI models better reflect us when we're "at our most authentically human" could prove difficult. That shouldn't stop us from looking for one, though.

"We don't pretend to know what the best solutions might be," Palmer and Schneier conclude in their piece. "But one has to imagine if there's ingenuity to develop AI models, then surely there's ingenuity to come up with a way to train them on informal human speech instead of us only at our most stylized, veiled, and sometimes worst."

The post There's Something Fundamentally Wrong With LLMs appeared first on Futurism.