China is moving to tightly regulate AI-generated ‘digital humans,’ aiming to balance rapid AI adoption with strict controls on content, identity use, and the potential psychological impact—especially on minors. (Source: Image by RR)

Regulators Aim to Prevent Misleading or Addictive AI Services for Minors

China’s top internet regulator has proposed new rules to govern the fast-growing “digital human” sector, aiming to impose stricter oversight on AI-generated virtual personas. The draft regulations, released by the Cyberspace Administration of China, would require all digital human content to be clearly labeled and prohibit uses that could mislead users, particularly minors. The rules are currently open for public comment through early May.

A key focus of the proposal is child protection, with explicit bans on digital humans forming “virtual intimate relationships” with users under 18 or offering services deemed addictive. As reported by reuters.com, regulators are also targeting misuse of personal data: the draft would prohibit creating digital avatars from someone’s likeness without consent and restrict the use of virtual humans to bypass identity-verification systems.

The draft rules further reinforce China’s broader content controls, requiring providers to block material that threatens national security, promotes political dissent, or undermines national unity. Additional guidelines urge platforms to limit harmful or inappropriate content, including sexualized, violent, or discriminatory material, and to intervene when users show signs of self-harm.

The move reflects China’s dual-track strategy of aggressively advancing AI adoption while tightening regulatory control over its societal impact. Officials describe governance of digital humans as a strategic issue tied to cybersecurity, public interest, and economic development, signaling that AI-generated identities will be subject to the same strict oversight frameworks already applied to other areas of China’s digital ecosystem.

read more at reuters.com