
China’s draft rules signal a firm move to rein in emotionally engaging, human-like AI systems by placing responsibility on providers to manage psychological risks, content boundaries and user dependence as AI becomes more socially immersive. (Source: Image by RR)
Draft Rules Seek to Strengthen Oversight of Emotionally Interactive AI Services
China’s top internet regulator has released draft rules for public consultation aimed at strengthening oversight of artificial intelligence systems that simulate human personalities and engage users emotionally. Announced on December 27, the proposal reflects Beijing’s growing focus on shaping how consumer-facing AI evolves, particularly as chatbots and virtual agents become more human-like in behavior and interaction. The move, as reported by Reuters, underscores China’s broader strategy of pairing rapid AI deployment with strict governance frameworks.
The proposed rules would apply to AI products and services offered to the public that present human-like personality traits, thinking patterns, and communication styles, and that interact emotionally with users through text, images, audio, or video. Regulators are particularly concerned with how such systems influence user behavior, especially as emotional engagement becomes a core feature of next-generation AI assistants. As a result, the draft emphasizes early warnings, transparency, and active intervention by service providers.
A central focus of the proposal is psychological and behavioral risk management. AI providers would be required to monitor user states, assess emotional dependence, and intervene when signs of addiction or extreme emotional responses emerge. Companies would also need to warn users against excessive use and assume responsibility for safety across the entire product lifecycle, including algorithm review, data security, and personal information protection.
The draft rules also establish clear content and conduct red lines, prohibiting AI systems from generating material that threatens national security, spreads rumors, or promotes violence or obscenity. While still subject to public feedback, the proposal signals a tightening regulatory environment for emotionally intelligent AI in China—one that prioritizes social stability, user protection, and centralized oversight as human-like AI capabilities continue to advance.
read more at reuters.com