Written by Cade Metz

OpenAI said Friday that it had uncovered evidence that a Chinese security operation had built an artificial intelligence-powered surveillance tool to gather real-time reports about anti-Chinese posts on social media services in Western countries.

The company’s researchers said they identified this new campaign, which they called Peer Review, because someone working on the tool used OpenAI’s technologies to debug some of the computer code that underpins it.

Ben Nimmo, a principal investigator for OpenAI, said this was the first time the company had uncovered an AI-powered surveillance tool of this kind.

“Threat actors sometimes give us a glimpse of what they are doing in other parts of the internet because of the way they use our AI models,” Nimmo said.

There have been growing concerns that AI can be used for surveillance, computer hacking, disinformation campaigns and other malicious purposes. Though researchers like Nimmo say the technology can certainly enable these kinds of activities, they add that AI can also help identify and stop such behavior.

Nimmo and his team believe the Chinese surveillance tool is based on Llama, an AI technology built by Meta, which open-sourced the technology, meaning it shared its work with software developers across the globe.

In a detailed report on the use of AI for malicious and deceptive purposes, OpenAI also said it had uncovered a separate Chinese campaign, called Sponsored Discontent, that used OpenAI’s technologies to generate English-language posts criticizing Chinese dissidents.

The same group, OpenAI said, has used the company’s technologies to translate articles into Spanish before distributing them in Latin America. The articles criticized U.S. society and politics.

Separately, OpenAI researchers identified a campaign, believed to be based in Cambodia, that used the company’s technologies to generate and translate social media comments that helped drive a scam known as “pig butchering,” the report said. The AI-generated comments were used to woo men on the internet and entangle them in an investment scheme.

(The New York Times has sued OpenAI and Microsoft for copyright infringement of news content related to AI systems. OpenAI and Microsoft have denied those claims.)

This article originally appeared in The New York Times.