OpenAI has disclosed that a now-banned account originating in China was using ChatGPT to help design promotional materials and project plans for a social media listening tool, work the account claimed was being done for a government client. The tool was described as a "probe" that could crawl social media sites such as X, Facebook, Instagram, Reddit, TikTok and YouTube for specific political, ethnic or religious content defined by the operator. The company said it cannot independently verify whether the tool was actually used by a Chinese government entity. OpenAI disrupted similar efforts earlier this year.
The company also says it banned an account that was using ChatGPT to develop a proposal for a tool described as a "High-Risk Uyghur-Related Inflow Warning Model," which would aid in tracking the movements of "Uyghur-related" individuals. China has long been accused of human rights abuses against Uyghur Muslims in the country.
OpenAI began publishing threat reports in February 2024, raising awareness of state-affiliated actors using large language models to debug malicious code, develop phishing scams and more. The company's latest blog post serves as a roundup of notable threats and banned accounts over the last quarter.
The company also caught Russian-, Korean- and Chinese-speaking developers using ChatGPT to refine malware, as well as entire networks in Cambodia, Myanmar and Nigeria using the chatbot to help create scams in an attempt to defraud people. According to OpenAI's own estimates, ChatGPT is being used to detect scams three times as often as it is used to create them.