This photo taken on February 2, 2024 shows Lu Yu, head of Product Management and Operations of Wantalk, an artificial intelligence chatbot created by Chinese tech company Baidu, showing a virtual girlfriend profile on her phone, at the Baidu headquarters in Beijing.
BEIJING — China plans to restrict artificial intelligence-powered chatbots from influencing human emotions in ways that could lead to suicide or self-harm, according to draft rules released Saturday.
The proposed regulations from the Cyberspace Administration of China target what the agency calls "human-like interactive AI services," according to a CNBC translation of the Chinese-language document.
The measures, once finalized, will apply to AI products or services offered to the public in China that simulate human personality and engage users emotionally through text, images, audio or video. The public comment period ends Jan. 25.
Beijing's planned rules would mark the world's first attempt to regulate AI with human or anthropomorphic characteristics, said Winston Ma, adjunct professor at NYU School of Law. The latest proposals come as Chinese companies have rapidly developed AI companions and digital celebrities.
Ma said that compared with China's 2023 generative AI regulation, this version "highlights a leap from content safety to emotional safety."
The draft rules propose that: