xAI Workers Leak Disturbing Information About Grok Users

As he dived into the world of AI, Elon Musk promised that he would harness the tech to create a so-called "anti-woke" and "maximum truth-seeking" alternative to OpenAI's ChatGPT.

Instead, his company xAI has turned its in-house AI chatbot Grok into a tool for sexually charged conversations with an eager-to-undress female avatar — while Musk himself warns that the tech is poised to "one-shot the human limbic system."

Now it sounds like xAI users are using the platform for pretty much the worst stuff you can imagine, with twelve current and former workers telling Business Insider that they regularly encountered sexually explicit material, including AI-generated material involving the sexual abuse of children, in their work for the company.

While the platform is far from alone in dealing with the influx of offending material — experts have found similarly disturbing content proliferating on TikTok and Instagram, as well as "nudify" apps designed to "undress" real photos — xAI's doubling down on sexual content could make the situation even worse.

"If you don't draw a hard line at anything unpleasant, you will have a more complex problem with more gray areas," Stanford University tech policy researcher Riana Pfefferkorn told BI.

"If a company is creating a model that allows nudity or sexually explicit generations, that is much more nuanced than a model that has hard rules," National Center for Missing and Exploited Children (NCMEC) executive director Fallon McNulty added. "They have to take really strong measures so that absolutely nothing related to children can come out."

xAI has been beset by chaos and controversy. The company laid off 500 workers, including high-level employees and those in charge of data annotation, earlier this month.
The department in charge of training Grok is now led by a college student who graduated from high school in 2023.

Former and current employees told BI that they often encountered images, videos, and audio files related to child sexual abuse material (CSAM).

"You have to have thick skin to work here, and even then it doesn't feel good," a former worker, who said they had quit over the sheer amount of upsetting material they encountered, told the publication.

"It actually made me sick," another former worker said. "Holy shit, that's a lot of people looking for that kind of thing."

Others reported that they felt like they were "eavesdropping" on "adult conversations," saying that Grok users "clearly didn't understand that there's people on the other end listening to these things."

AI-generated CSAM is an industry-wide issue. The Department of Justice has already started going after people using AI tools to generate problematic content involving minors.

xAI's competitors OpenAI and Anthropic have also reported instances of AI CSAM to the NCMEC. But xAI didn't file a single report for 2024, according to the organization, which received a total of 67,000 reports involving generative AI last year.

According to a recent blog post, the child protection organization saw "sharp increases in new and evolving crimes targeting children on the internet, including online enticement, use of artificial intelligence and child sex trafficking."

The numbers tell a concerning story. The organization received over 440,000 reports of AI CSAM as of June 30, compared to fewer than 6,000 over the same period last year.

"It's important that we stay on top of these emerging threats to warn the public and adjust our strategies for protecting children," said NCMEC senior vice president John Shehan in a statement at the time.

More on xAI: Elon Musk Fires 500 Staff at xAI, Puts College Kid in Charge of Training Grok

The post xAI Workers Leak Disturbing Information About Grok Users appeared first on Futurism.