Chinese group’s ChatGPT use reveals worldwide harassment campaign against critics


A Chinese law enforcement official attempted to use ChatGPT to review the agency's reports on cyber operations, inadvertently revealing details of a worldwide online campaign to harass and silence China's critics at home and abroad.

In a new threat report released Wednesday, OpenAI said the activity concerned a single account that regularly used ChatGPT to review and edit reports on "cyber special operations." That same account also attempted to use ChatGPT to plan a propaganda campaign against Japanese Prime Minister Sanae Takaichi. When the model refused, the actor came back weeks later with prompts indicating the operation had proceeded anyway.

The reports uploaded to ChatGPT "suggested that the threat actors had conducted many other, earlier operations, in a comprehensive effort to suppress dissent and silence critics both online and offline, at home and abroad," the report said.

While there is only evidence of a single account used by the agency, OpenAI said the operations targeting Chinese critics described in the report appear "large-scale, resource-intensive and sustained," involving hundreds of human staff, thousands of fake accounts across different social media platforms and the use of local Chinese AI models. These operations included mass posting and content generation, flooding social media companies with bogus complaints about accounts owned by dissidents, forging documents and, in some cases, even impersonating U.S. officials for intimidation.

A separate campaign involving a cluster of accounts that "likely originated" in mainland China prompted ChatGPT for information on "U.S. persons, forums and federal building locations." The accounts also generated email drafts purportedly from a company called Nimbus Hub Consulting, based in Hong Kong, but OpenAI's report notes that the accounts used VPNs and prompted the model in Simplified Chinese characters, which are more commonly associated with mainland China.

OpenAI said that, when asked about U.S.
entities, ChatGPT also provided "publicly available" information sources on U.S. federal government office locations, the distribution of federal employees by state, and professional forums and job websites in the U.S. economics and finance industries.

The Chinese actors generated English-language emails to U.S. state officials and to business and financial policy analysts, inviting them to join paid consultations and offer strategic advice to the actors' clients. These emails would frequently seek to move the conversation to another platform, such as WhatsApp, Zoom or Teams. One of the accounts uploaded its hardware specifications and asked for step-by-step, non-technical instructions for installing real-time face-swapping software called FaceFusion.

"The model responded with information that was drawn from FaceFusion's publicly-available website and documentation," OpenAI said.

No evidence of automated cyberattacks

The report focused mainly on how cybercriminals and state actors used ChatGPT to support scams and influence operations. OpenAI detailed four covert information operations and three romance-scam operations. In addition to the Chinese influence operations, it also reported on propaganda content generated for Rybar, a Russia-aligned online influence group.

OpenAI's report details how some operators used ChatGPT to automate isolated tasks, such as a Cambodian romance scam that blended human and AI operators when communicating with victims. The report did not cite any instances of threat actors using ChatGPT for direct offensive hacking operations, even as AI tools can give both malicious and legitimate actors tremendous speed and scale online.
Over the past year, Chinese hackers have reportedly used at least one other U.S.-made AI model to conduct heavily automated cyberattacks against businesses and governments.

During a media Q&A, an OpenAI official said they were not aware of any cases in which threat actors used ChatGPT to carry out automated attacks, but added that the company has multiple ongoing investigations that have not concluded.

Much of the observed activity in OpenAI's report follows a common pattern: threat actors are still very much experimenting with AI technology and learning where it provides the most value in their chain of operations. Some used it to generate propaganda content around a specific target, monitor social media platforms, or improve language translation for phishing lures. But, similar to reporting from Google earlier this month, in most cases threat actors are using AI in limited and targeted ways as an amplifier for existing operations.

In some cases, it's clear that ChatGPT is one of multiple AI tools being used by a threat actor. In the case of the Chinese law enforcement agency, the status reports on information operations uploaded to the model reference the use of locally deployed Chinese AI models like DeepSeek, and it's likely the group used a different model to prepare for its propaganda campaign against Takaichi.

"Threat activity is seldom limited to one platform; as our report…shows, it is not always limited to one AI model," the report said. "Rather, threat actors may use different AI models at various points in their operational workflow."

The post Chinese group's ChatGPT use reveals worldwide harassment campaign against critics appeared first on CyberScoop.