With so many puff pieces out there about what AI can do, it’s rare to see a story about what it can’t do. And as some researchers tell it, AI is falling terribly short at what many of us find to be one of the easiest tasks of all: getting into arguments on social media.

First spotted by Ars Technica, a team of researchers from Switzerland, the Netherlands, and the US recently released a paper analyzing social media posts generated by large language models (LLMs).

To conduct the yet-to-be-peer-reviewed study, the researchers applied what they call a “computational Turing test” to posts generated by LLMs on X-formerly-Twitter, Reddit, and Bluesky. They found that posts generated by AI bots — all open-weight models, ranging from DeepSeek to Qwen — were “readily distinguishable” from ones by human users, with a detection accuracy of 70 to 80 percent, which is “well above [the threshold for] chance.”

In other words, it’s laughably easy to catch an AI shitposter in the act by applying a one-size-fits-all screening to any text it spits out — let alone by using a little basic human judgment.

One of the major reasons for this, the scholars posit, is that AI can only imitate a human’s emotional depth, what we might call the “heat of the moment” vitriol of a typical flame war. When we get into it, we really get into it, with a level of both “toxicity” and “sentiment” that remains unmistakably human.

“Even after calibration, LLM outputs remain clearly distinguishable from human text, particularly in affective tone and emotional expression,” the team wrote.

Interestingly, they found that an LLM’s size and complexity didn’t necessarily correlate with more realistic vitriol. For example, “the large Llama-3.1-70B performs on par with, or even below, smaller models,” the researchers wrote. “This suggests that scaling does not translate into more authentically human communication.”

The findings are particularly ironic given that one of AI’s most prominent use cases at the moment seems to be spamming social media, particularly the well-trafficked platforms of X-formerly-Twitter, Facebook, and Instagram (though other sites, like Reddit, are also being overrun).

Even wannabe tech CEOs are getting on board, with startups like Doublespeed offering clients access to an AI-powered bot army tailored to their advertising needs.

In a way, the paper’s findings are good news for anyone worried about how quickly AI-generated text is becoming indistinguishable from the human-written kind — though something tells us it won’t make much of a difference as AI bros continue to flood the internet with algorithmic slop.

More on AI: Scientists Created an Entire Social Network Where Every User Is a Bot, and Something Wild Happened