In 2016, the world changed; timelines that were once filled with real conversations were quietly replaced by recycled opinions, trending hashtags, and algorithmically sculpted outrage. Everywhere you looked, there were automated accounts, engagement pods, and content farms.

Political bots disguised as people sparked endless debates, fake news sites spawned overnight, and armies of anonymous accounts shaped what millions came to believe.

For the first time, it seemed possible that the internet, this beautiful, chaotic experiment in human connection, had turned into a hall of mirrors designed to manipulate perception itself.

Of course, that was just a conspiracy theory… right?

When Your Friends Start Posting Essays

As a web developer, I spend a lot of time on LinkedIn, scrolling through my daily dose of “wisdom” from CEOs and fellow technologists. For years, I got used to the usual copy-paste trends: recycled motivational quotes, “inspiring” hiring stories that never really happened, and the same fanfics about “giving opportunities” for career growth.

But recently, things started to feel different; every post sounded like an “author” I’m quite familiar with: ChatGPT. As someone who has been using AI to speed up work since the launch of ChatGPT, I find it easy to spot AI text, and I’m not even talking about the enormous amount of emojis 🙃.

If you prompt ChatGPT to “write a viral, SEO-friendly LinkedIn post about career growth,” the result is nearly identical to what I had been seeing on my feed. That’s when I realized they weren’t even pretending anymore.

And to make matters worse, it wasn’t just the posts. The comments followed the same pattern. At a glance, they sound like constructive contributions to the topic. But look closely, and they’re just more regurgitated AI slop.

Some users might be manually prompting an AI to generate their replies.
Others have probably automated the whole flow with tools like n8n or custom scripts that scrape posts, feed them into an AI model, and auto-generate responses for engagement.

That’s when I remembered something I had read years ago: the Dead Internet Theory.

The Conspiracy That Refused to Die

Around 2019, users on boards like Wizardchan and 4chan’s /x/ (the paranormal board) started posting that the internet no longer felt alive. They noticed the same comments repeating across websites, weirdly similar discussions popping up in unrelated threads, and a sense that fewer real people were actually posting. Some even began to claim that governments, corporations, or AI systems had taken over the web, flooding it with fake profiles to manipulate public opinion.

But the theory didn’t truly take off until January 2021, when a user named “IlluminatiPirate” posted a now-infamous essay on a small, retro-themed forum called Agora Road’s Macintosh Café.

In his post, he described his belief that the internet had “died” around 2016 or 2017 and had been replaced by a mixture of bots, algorithmic manipulation, and AI-generated content designed to simulate human activity. He argued that the web we use today is an illusion built by algorithms to keep us entertained, distracted, and predictable.

The post went viral in niche tech and conspiracy circles. For some, it was nothing more than sci-fi paranoia. For others, it explained a feeling they couldn’t describe: how the modern web felt hollow and empty.

It was easy to dismiss it as another weird conspiracy theory. But as I scrolled through my LinkedIn feed and stared at posts that felt straight out of a ChatGPT prompt, the theory suddenly didn’t sound so absurd.

Back to Where It All Began

If the Dead Internet Theory was born on Agora Road’s Macintosh Café, then that’s where I needed to go.

I went there expecting it to be long gone, one of those places that died with the old web.
But there it was, still alive in its own nostalgic way, with a minimalist layout straight from the early 2000s, pixel fonts, retro gradients, and threads that looked like they hadn’t changed since the MySpace era.

So, I decided to make my own post. I introduced myself and asked the remaining members what they believed now that AI chatbots, content farms, and algorithmic feeds are a reality.

While I was going through the replies on the forum, one comment caught my attention because of how human it sounded:

“Humans will adapt. They just don’t want to grow anymore, maybe they’re too young, or maybe they’ve stopped caring. Still, even the bots serve a human purpose. People will get sick of them eventually. They can’t create, they can’t be original. I’m already sick of them now.”

The Rise of AI Slop and the Invisible Hand

Scroll through Facebook or Instagram and you’ll see surreal AI mashups like Shrimp Jesus, an image of Christ reimagined as a shrimp that went viral in late 2023. At first glance, it was absurd and funny, but as the meme spread, it became clear that something had changed.

This isn’t the first time automation has quietly manipulated the way we think. Back in 2016, bots played an important role in shaping narratives. A study analyzing over 20 million tweets from that year’s U.S. election showed that automated accounts amplified misinformation on a massive scale, often being the first to push low-credibility links into trending spaces before any real user even saw them.

AI systems now generate entire posts, comment threads, and even fake personalities that maintain “consistent engagement,” complete with profile pictures, emojis, and scheduled activity. Some are controlled manually, others entirely automated through workflows like n8n or proprietary API chains.
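To make the idea concrete, here is a minimal sketch of what such an automated engagement pipeline might look like. Everything in it is hypothetical: the scraper and the model call are stand-in stubs for what a real bot would wire to a feed and a chat-completion API, and none of the names refer to any actual tool or service.

```python
# Hypothetical sketch of an automated "engagement" pipeline.
# Each function is an illustrative stub; a real bot would replace
# scrape_feed() with an actual scraper and call_model() with an
# HTTP request to some LLM endpoint.

def scrape_feed():
    """Stand-in for a scraper that pulls recent posts from a feed."""
    return [
        {"author": "ceo_jane", "text": "5 lessons I learned firing my best engineer"},
        {"author": "dev_bob", "text": "Why I rewrote our backend over a weekend"},
    ]

def build_prompt(post):
    """Wrap a scraped post in a generic 'write a thoughtful reply' prompt."""
    return (
        "Write a short, agreeable LinkedIn comment on this post:\n"
        f"{post['text']}"
    )

def call_model(prompt):
    """Stand-in for the LLM call; returns a canned, generic reply here."""
    return "Great insights! Thanks for sharing your perspective."

def run_engagement_loop():
    """Scrape, prompt, generate, queue a reply: the whole loop end to end."""
    replies = []
    for post in scrape_feed():
        prompt = build_prompt(post)
        reply = call_model(prompt)
        replies.append({"in_reply_to": post["author"], "text": reply})
    return replies

if __name__ == "__main__":
    for r in run_engagement_loop():
        print(f"@{r['in_reply_to']}: {r['text']}")
```

Scheduled to run every few minutes, a loop this simple can flood a feed with plausible-sounding comments indefinitely, which is exactly why those interchangeable “great insights!” replies are so recognizable.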
What you see on your feed might not be a person sharing their thoughts.

OpenAI’s upcoming Sora app, for example, integrates text-to-video generation directly into a social feed, blurring the line between creator and consumer.

Meanwhile, Meta recently revealed Vibes, an AI-centric content network where users explore surreal, algorithmically seeded media, which critics have already called “a showcase of AI slop.”

When Machines Outnumbered Us

If there was ever a moment that confirmed the shift, it came in Imperva’s 2025 Bad Bot Report.

For the first time in over a decade, automated traffic officially surpassed human activity, making up 51% of all web traffic.

The report breaks it down further: more than 30% of that traffic came from malicious bots, scrapers, fake engagement accounts, and automated systems designed to harvest data or manipulate visibility. The rest were “good” bots: search crawlers, uptime monitors, and API calls. But the distinction doesn’t change the implication.

When AI Learns From Its Own Lies

If bots now rule the web, what happens when AIs start feeding on the content those bots produce?

Modern AI systems, including large language models, are constantly retrained on new data scraped from the web. The problem is that the web itself is increasingly filled with AI-generated articles, fake news, and fabricated citations that look authentic but aren’t.

In effect, the machine is now learning from its own output, a closed loop of hallucinations reinforcing themselves.

A 2023 study titled The Curse of Recursion demonstrated that training models on their own generations leads to rapid degradation in quality: the models start producing distorted, repetitive, and meaningless content.

The same pattern is beginning to show up all over.
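The degradation mechanism can be shown with a toy simulation rather than a real language model: treat “the model” as a fitted Gaussian, and retrain it each generation only on samples drawn from the previous generation’s fit. The setup and numbers are purely illustrative, not taken from the paper, but the drift they produce mirrors the effect the study describes.

```python
# Toy illustration of recursive training: each "generation" fits a
# Gaussian to data, then the next generation trains ONLY on samples
# drawn from that fit. Over many generations the fitted distribution
# drifts away from the original, human-made data.
import random
import statistics

def fit_model(samples):
    """'Train' the model: estimate a mean and standard deviation."""
    return statistics.fmean(samples), statistics.stdev(samples)

def generate(mean, stdev, n, rng):
    """'Generate' a new dataset from the fitted model."""
    return [rng.gauss(mean, stdev) for _ in range(n)]

def recursive_training(generations=300, n=10, seed=42):
    """Run the loop; return the fitted stdev at each generation."""
    rng = random.Random(seed)
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: "real" data
    stdevs = []
    for _ in range(generations):
        mean, stdev = fit_model(data)
        stdevs.append(stdev)
        # Next generation never sees the real data again.
        data = generate(mean, stdev, n, rng)
    return stdevs

if __name__ == "__main__":
    stdevs = recursive_training()
    print(f"first generation stdev: {stdevs[0]:.3f}")
    print(f"last generation stdev:  {stdevs[-1]:.3f}")
```

Run it and the estimated spread of the data tends to collapse over the generations: the “model” progressively forgets the variety in the original distribution, a miniature version of language models losing the tails of real human text.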
AI news sites are publishing factually incorrect articles that end up cited by other AIs as reliable sources.

Social bots regurgitate hallucinated “facts” that slowly crawl their way into legitimate search indexes.

And as generative models get integrated into browsers, search engines, and personal assistants, the boundary between original data and AI echo grows thinner every day.

The Future We Scrolled Into

Maybe the internet didn’t die.

Maybe we did, a little bit, every time we stopped noticing the difference between what’s real and what’s not.

I’ve been online long enough to remember when it was all different. Blogs had weird layouts, forums had blinking, bright signatures, everything was managed by the community, and no algorithm decided what to show and what to hide.

When I scroll through LinkedIn and see endless AI-written posts pretending to be human, the first thing that pops into my mind is that old forum thread I stumbled on years ago, the one claiming the internet was already dead.

The web probably won’t stop existing; it will just stop being ours.

Sources:

Dazed. (2024, April 26). How a 4chan conspiracy (kind of) foresaw the death of the internet. Dazed. https://www.dazeddigital.com/life-culture/article/62472/1/4chan-dead-internet-theory-conspiracy-foresaw-the-death-of-the-internet

Imperva. (2025, June 27). 2025 Bad Bot Report. https://www.imperva.com/resources/resource-library/reports/2025-bad-bot-report/

Bessi, A., & Ferrara, E. (2016). Social bots distort the 2016 U.S. Presidential election online discussion. First Monday. https://firstmonday.org/ojs/index.php/fm/article/view/7090/5653

Wikipedia contributors. (2025, October 2). Dead Internet theory. Wikipedia. https://en.wikipedia.org/wiki/Dead_Internet_theory

Shumailov, I., Shumaylov, Z., et al. (2023, May 27). The Curse of Recursion: Training on Generated Data Makes Models Forget. University of Cambridge. https://www.cl.cam.ac.uk/~is410/Papers/dementia_arxiv.pdf

Renzella, J., & Rozova, V. (2024). The ‘Dead Internet theory’ makes eerie claims about an AI-run web. The truth is more sinister. The Conversation. https://theconversation.com/the-dead-internet-theory-makes-eerie-claims-about-an-ai-run-web-the-truth-is-more-sinister-229609