Something strange has been happening on YouTube over the past few weeks. After being uploaded, some videos have been subtly augmented, their appearance changing without their creators doing anything. Viewers have noticed “extra punchy shadows,” “weirdly sharp edges,” and a smoothed-out look to footage that makes it look “like plastic.” Many people have come to the same conclusion: YouTube is using AI to tweak videos on its platform, without creators’ knowledge.

A multimedia artist going by the name Mr. Bravo, whose YouTube videos feature “an authentic 80s aesthetic” achieved by running his videos through a VCR, wrote on Reddit that his videos look “completely different to what was originally uploaded.” “A big part of the videos charm is the VHS look and the grainy, washed out video quality,” he wrote. YouTube’s filter obscured this labor-intensive quality: “It is ridiculous that YouTube can add features like this that completely change the content,” he wrote.

Another YouTuber, Rhett Shull, posted a video last week about what was happening to his video shorts, and those of his friend Rick Beato. Both run wildly popular music channels, with more than 700,000 and 5 million subscribers, respectively. In his video, Shull says he believes that “AI upscaling” is being used—a process that increases an image’s resolution and detail—and is concerned about what it could signal to his audience. “I think it’s gonna lead people to think that I am using AI to create my videos. Or that it’s been deepfaked. Or that I’m cutting corners somehow,” he said. “It will inevitably erode viewers’ trust in my content.”

Fakery is a widespread concern in the AI era, when media can be generated, enhanced, or modified with little effort. The same pixel-filled rectangle could contain the work of someone who spent time and energy and had the courage to perform publicly, or of someone who sits in bed typing prompts and splicing clips in order to make a few bucks.
Viewers who don’t want to be fooled by the latter must now be alert to the subtlest signs of AI modification. For creators who want to differentiate themselves from the new synthetic content, YouTube seems interested in making the job harder.

When I asked Google, YouTube’s parent company, about what’s happening to these videos, the spokesperson Allison Toh wrote, “We’re running an experiment on select YouTube Shorts that uses image enhancement technology to sharpen content. These enhancements are not done with generative AI.” But this is a tricky statement: “Generative AI” has no strict technical definition, and “image enhancement technology” could be anything. I asked for more detail about which technologies are being employed, and to what end. YouTube is “using traditional machine learning to unblur, denoise, and improve clarity in videos,” Toh told me. (It’s unknown whether the modified videos are being shown to all users or just some; tech companies will sometimes run limited tests of new features.)

Toh’s description sounds remarkably similar to the process undertaken when generative-AI programs create entirely new videos. These programs typically use a diffusion model: a machine-learning program that is trained to refine an extremely noisy image into one that’s clear, with sharp edges and smooth textures. An AI upscaler can use the same diffusion process to “improve” an existing image, rather than to create a new one.
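For a sense of how even old-fashioned, non-generative filtering can produce the “weirdly sharp edges” viewers describe, here is a minimal unsharp-mask sketch in Python. Unsharp masking is a classical sharpening technique chosen purely for illustration; nothing here is YouTube’s actual pipeline, and the filter size and strength are arbitrary.

```python
import numpy as np

def box_blur(img, k=3):
    """Average each pixel with its k x k neighborhood (edge-padded)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img, amount=1.5):
    """Sharpen by adding back the detail that blurring removed."""
    detail = img - box_blur(img)
    return np.clip(img + amount * detail, 0.0, 1.0)

# A soft vertical edge: dark gray on the left, light gray on the right.
frame = np.full((8, 8), 0.25)
frame[:, 4:] = 0.75
sharpened = unsharp_mask(frame)
# The contrast across the edge doubles, from 0.5 to 1.0.
```

Because the filter overshoots at boundaries, edges come out with exaggerated contrast, which is one plausible (and entirely non-generative) source of the “extra punchy” look; a diffusion-based upscaler achieves a superficially similar effect by very different means.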
The similarity of the underlying process might explain why the visual signature of diffusion-based AI is recognizable in these YouTubers’ videos.

While running this experiment, YouTube has also been encouraging people to create and post AI-generated short videos using a recently launched suite of tools that allows users to animate still photos and add effects “like swimming underwater, twinning with a lookalike sibling, and more.” YouTube didn’t tell me what motivated its experiment, but some people suspect that it has to do with creating a more uniform aesthetic across the platform. As one YouTube commenter wrote: “They’re training us, the audience, to get used to the AI look and eventually view it as normal.”

Google isn’t the only company rushing to mix AI-generated content into its platforms. Meta encourages users to create and publish their own AI chatbots on Facebook and Instagram using the company’s “AI Studio” tool. Last December, Meta’s vice president of product for generative AI told the Financial Times that “we expect these AIs to actually, over time, exist on our platforms, kind of in the same way that [human] accounts do.”

In a slightly less creepy vein, Snapchat provides tools for users “to generate novel images” of themselves based on selfies they’ve taken. And last year, TikTok introduced Symphony Creative Studio, which generates videos and includes a “Your Daily Video Generations” feature that suggests new videos automatically each day.

This is an odd turn for “social” media to take. Platforms that are supposedly based on the idea of connecting people with one another, or at least sharing experiences and performances—YouTube’s slogan until 2013 was “Broadcast Yourself”—now seem focused on getting us to consume impersonal, algorithmic gruel. Shull said that the modification of his videos erodes his trust in YouTube, and how could it not?
The platform’s priorities have clearly shifted away from creators such as Shull, whose combined work is a major reason YouTube has become the juggernaut it is today.