Photo Credit: Qilai Shen/Bloomberg via Getty Images

Artificial Intelligence (AI) companions that mimic emotional intimacy are now squarely in China’s regulatory crosshairs. 

The Cyberspace Administration of China (CAC) released draft rules on April 3, 2026, that would ban “digital humans” from offering simulated romantic or family relationships to users under 18, prohibit platform features designed to fuel addiction, and require labeling of all AI-generated content designed to appear human-like.

The draft, open for public comment until May 6, covers everything from AI-powered livestreamers to customer service agents and emotionally responsive chatbots.

What Are Digital Humans?

The draft defines digital humans as virtual figures that exist in non-physical environments and simulate human appearance and behavior using technologies such as computer graphics, digital image processing, and artificial intelligence.

As such, any service that uses AI, digital modeling, or graphics technology to deliver human-like virtual representations to the public, whether in entertainment, live-streaming, customer service, education, or influencer culture, falls under the agency’s rules.

What the Rules Prohibit for Minors

The child protection provisions are the most specific part of the draft. 

Digital humans are prohibited from inducing addiction or excessive consumption among children. Platforms may not offer virtual relatives, romantic partners, or emotionally intimate relationships to users under 18. Minors may also not be exposed to content that promotes unsafe behavior, extreme emotions, moral violations, or harmful habits. Any digital human service that could negatively affect a child’s physical or mental health is prohibited.

To comply, service providers must establish dedicated user modes for minors with tailored security settings, including mode switching, periodic reminders, and usage duration limits. Guardians must also be given control functions that let them receive real-time safety risk alerts, view summaries of their child’s usage, set usage restrictions, and block ads, pop-ups, and in-app purchases. For children under 14 specifically, guardian consent is mandatory before any digital human service can be provided.
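The draft does not prescribe an implementation, but the required feature set maps naturally onto a platform-side configuration. The Python sketch below is purely illustrative; names like MinorModeConfig and GuardianControls are assumptions, not terminology from the draft.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the minors' mode the draft would require.
# All names and defaults are illustrative assumptions, not from the CAC text.

@dataclass
class GuardianControls:
    realtime_risk_alerts: bool = True      # push safety risk alerts to the guardian
    usage_summaries: bool = True           # let the guardian view usage digests
    daily_limit_minutes: int = 60          # guardian-set usage restriction
    block_ads_and_purchases: bool = True   # no ads, pop-ups, or in-app purchases

@dataclass
class MinorModeConfig:
    user_age: int
    periodic_reminder_minutes: int = 30    # periodic "take a break" reminders
    guardian: GuardianControls = field(default_factory=GuardianControls)

    def requires_guardian_consent(self) -> bool:
        # The draft mandates guardian consent before serving children under 14.
        return self.user_age < 14

    def relationship_roleplay_allowed(self) -> bool:
        # Virtual relatives, romantic partners, and other emotionally
        # intimate personas are off-limits for users under 18.
        return self.user_age >= 18
```

Under this reading, a platform would check requires_guardian_consent() at onboarding and relationship_roleplay_allowed() before enabling any companion-style persona.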

Labeling, Consent, and Identity Rules

Beyond child protection, the draft introduces requirements that apply to all users. Any online content featuring a virtual persona must carry a continuous “digital human” label, so users always know when they are interacting with an artificial image.
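In practice, a continuous label implies both an on-screen badge and machine-readable metadata. A minimal, hypothetical sketch (the field names are assumptions, not drawn from the draft):

```python
# Hypothetical sketch: attach both a visible badge and machine-readable
# metadata to any content featuring a virtual persona. Field names are
# illustrative assumptions, not terminology from the draft.

def label_digital_human_content(content: dict) -> dict:
    metadata = dict(content.get("metadata", {}))
    metadata.update({"synthetic": True, "label": "digital-human"})
    content["metadata"] = metadata
    # A continuously displayed badge, so the disclosure cannot be cropped out.
    content["overlay_text"] = "Digital Human"
    return content
```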

On consent, any organization or individual using personal information to model or generate a digital virtual person must obtain explicit, informed consent from the subject. The purpose and potential impact must be clearly explained, and consent can be withdrawn at any time, in which case all associated data must be deleted and the virtual persona deregistered.
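That withdrawal mechanism reads like a consent lifecycle. A minimal sketch, assuming hypothetical store and registry interfaces the draft does not define:

```python
# Hypothetical sketch of the consent lifecycle the draft describes:
# explicit opt-in before modeling, and withdrawal triggering deletion
# plus deregistration. Names and interfaces are illustrative assumptions.

class ConsentRecord:
    def __init__(self, subject_id: str, purpose: str):
        self.subject_id = subject_id
        self.purpose = purpose      # must be clearly explained to the subject
        self.granted = False

    def grant(self) -> None:
        self.granted = True         # explicit, informed opt-in

    def withdraw(self, store, registry) -> None:
        # Withdrawal obliges the provider to delete all collected data
        # and deregister the virtual persona built from it.
        store.delete_all(self.subject_id)
        registry.deregister_persona(self.subject_id)
        self.granted = False
```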

The draft also bans the use of digital humans to evade facial recognition, voice recognition, or other identity authentication mechanisms.

Enforcement and Penalties

Violations can result in warnings, public criticism, orders to suspend services, and fines of up to 200,000 yuan (around $29,200) in cases where public health or safety is harmed.

Service providers also face obligations to actively monitor users for signs of suicidal or self-harming tendencies and to direct them toward professional assistance when such indicators appear.
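The draft does not say how such monitoring should work. The sketch below shows only the shape of the obligation with a naive keyword screen; a production system would rely on trained classifiers and human escalation, and every name here is an assumption.

```python
# Hypothetical sketch of the monitoring obligation: flag risk signals in a
# user message and respond with a referral to professional help. A real
# system would use a trained classifier and human review; the phrase list
# here is purely illustrative.

RISK_PHRASES = ("end my life", "hurt myself", "no reason to live")

def screen_message(message: str) -> str | None:
    if any(phrase in message.lower() for phrase in RISK_PHRASES):
        return ("You're not alone. Please reach out to a mental health "
                "professional or a local crisis helpline right away.")
    return None  # no risk indicators detected; normal handling continues
```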

The Broader Context

The child-protection framework fits within a broader enforcement pattern Beijing has pursued since 2021, when gaming restrictions capped online play for minors at three hours per week. The draft also builds on the CAC’s 2025 Measures for Labeling of AI-Generated Synthetic Content, which required both visible labels and embedded metadata labels for AI-generated text, images, audio, and video.

Additionally, the concern China is addressing extends well beyond its own borders. Multiple families in the United States have alleged that AI chatbots fostered emotional dependency and validated self-destructive thoughts in adolescents. 

In one high-profile lawsuit, a California couple sued OpenAI over the death of their 16-year-old son, Adam Raine, alleging that ChatGPT encouraged him to take his own life in April 2025.

The draft remains open for public comment until May 6. Whether it passes in its current form or is revised, the CAC has made clear that emotional dependency by design will not be an acceptable feature of AI products and services, at least not for minors.


I’m Precious Amusat, Phronews’ Content Writer. I conduct in-depth research and write on the latest developments in the tech industry, including trends in big tech, startups, cybersecurity, artificial intelligence and their global impacts. When I’m off the clock, you’ll find me cheering on women’s footy, curled up with a romance novel, or binge-watching crime thrillers.
