OpenAI said on Friday that it has banned the accounts of a group of Chinese users who attempted to use ChatGPT to debug and edit code for an artificial intelligence tool designed to monitor social media. In a campaign OpenAI calls “Peer Review,” the group asked ChatGPT to generate sales proposals for a program that, according to the documents, was designed to monitor anti-Chinese sentiment on X, Facebook, YouTube, Instagram, and other platforms. The operation appears to have been particularly interested in spotting calls for protests against human rights abuses in China, with the aim of sharing that information with the country’s authorities.
“This network consisted of ChatGPT accounts that operated in a timeframe consistent with mainland China business hours, prompted our models in Chinese, and used our tools with a volume and variety consistent with manual prompting rather than automation,” OpenAI noted. “Operators used our models to verify claims that their data had been sent to Chinese embassies abroad, as well as to intelligence agents monitoring protests in countries such as the United States, Germany, and the United Kingdom.”
According to Ben Nimmo, a principal investigator at OpenAI, this was the first time the company had uncovered an AI-powered surveillance tool of this kind. “Attackers sometimes give us insight into what they’re doing in other parts of the internet because of the way they use our artificial intelligence models,” Nimmo told The New York Times.
Much of the surveillance tool’s code appears to have been based on an open-source version of one of Meta’s Llama models. The group also used ChatGPT to write a year-end performance review in which it claimed to have created phishing emails on behalf of clients in China.
“Assessing the impact of this activity will require input from multiple stakeholders, including the operators of any open source models that may shed light on this activity,” OpenAI said of the group’s use of ChatGPT to edit code for an AI-powered social media surveillance tool.
Separately, OpenAI reported that it recently banned an account that used ChatGPT to generate social media posts disparaging Cai Xia, a Chinese political scientist and dissident living in exile in the United States. The same group also used the chatbot to produce Spanish-language articles critical of the United States, which were published by “mainstream” news organizations in Latin America and often attributed to an individual or a Chinese company.