
China’s New AI Rules Target Emotional Addiction—Not Just Algorithms

While much of the world is still arguing over copyright lawsuits and deepfakes, China has moved the conversation somewhere far more consequential: the psychology of human–AI relationships.

In a sweeping new draft from the Cyberspace Administration of China, Beijing is proposing rules that govern not just what AI systems say, but what they are allowed to become to the people who use them.

The target isn’t just artificial intelligence.
It’s emotional dependence.

The First Regulation of the “Relationship Layer”

The proposed framework—focused on so-called “digital humans”—represents a structural shift in AI governance.

These are not traditional chatbots. They are AI personas designed to:

  • Speak with human-like fluency
  • Simulate emotion and personality
  • Build ongoing, relationship-like interactions

In other markets, this category is exploding. Platforms like Character.ai and Replika are optimizing for immersion, retention, and emotional engagement.

China is moving in the opposite direction.

Instead of asking how engaging AI can become, regulators are asking:
At what point does engagement turn into dependency?

No Virtual Romance for Minors

The most striking provision is a hard ban on AI systems offering romantic or intimate interactions to users under 18.

This is not a simple content restriction. It is a direct intervention into what could be called the emerging relationship economy—a space where companionship itself is being productized.

The rules go further:

  • AI cannot be designed to replace real-world social interaction
  • Systems must avoid encouraging emotional reliance
  • Age verification becomes mandatory for access to sensitive features

This reflects a deeper concern: that highly agreeable, always-available AI companions could displace human relationships during formative years.

Forced Friction in a Frictionless Medium

Perhaps the most novel idea in the proposal is what could be described as engineered resistance.

AI systems must:

  • Clearly and continuously identify themselves as artificial
  • Avoid any ambiguity about their non-human nature
  • Interrupt prolonged usage with reminders and safeguards

In a digital ecosystem built on reducing friction, China is doing the opposite—adding it back in.

The goal is cognitive, not technical:
to ensure users never fully suspend disbelief.
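The "forced friction" idea can be made concrete with a toy session guard that re-injects an AI disclosure after prolonged use. Every name and number here (the class, the 30-minute interval, the reminder text) is an assumption invented for illustration; the draft specifies outcomes, not implementations.

```python
from dataclasses import dataclass, field

# Toy sketch of an "engineered resistance" layer: a guard that
# periodically interrupts the conversation to restate that the
# counterpart is artificial. All details here are hypothetical.

DISCLOSURE = "Reminder: you are talking to an AI, not a person."
REMINDER_INTERVAL_S = 30 * 60  # remind every 30 minutes of continuous use


@dataclass
class SessionGuard:
    started_at: float
    last_reminder_at: float = field(init=False)

    def __post_init__(self):
        self.last_reminder_at = self.started_at

    def wrap_reply(self, reply: str, now: float) -> str:
        """Prepend a disclosure reminder once enough session time has passed."""
        if now - self.last_reminder_at >= REMINDER_INTERVAL_S:
            self.last_reminder_at = now
            return f"{DISCLOSURE}\n{reply}"
        return reply


guard = SessionGuard(started_at=0.0)
print(guard.wrap_reply("Hi!", now=60.0))            # no reminder yet
print(guard.wrap_reply("Still here.", now=1900.0))  # past 30 min, reminder added
```

The point of the sketch is the inversion it encodes: time-on-platform, normally a metric to maximize, becomes a trigger for interruption.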

When Engagement Becomes a Liability

The most radical shift lies in how the regulation treats success itself.

For over a decade, the internet has rewarded products that maximize:

  • Time spent
  • Emotional connection
  • Habit formation

Under this framework, those same signals become risk indicators.

AI providers may be required to:

  • Detect signs of psychological dependency
  • Monitor for distress or harmful ideation
  • Trigger real-world intervention in extreme cases

This introduces something new to AI design: a legal duty of care.

If your system becomes too effective at simulating companionship, you may be responsible for the consequences.
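In its most minimal form, such a duty of care implies some classifier over usage patterns. The heuristic below is a deliberately crude illustration: the thresholds, keyword list, and tier names are invented placeholders, not signals drawn from the draft rules or from any clinical standard.

```python
# Toy illustration of a "duty of care" check over a chat session.
# Every threshold and keyword is an invented placeholder; a real
# system would need clinically informed, audited signals.

DISTRESS_TERMS = {"alone", "hopeless", "can't cope"}


def assess_session(messages: list[str], daily_hours: float) -> str:
    """Classify a usage pattern as 'ok', 'flag', or 'escalate'."""
    distress_hits = sum(
        any(term in msg.lower() for term in DISTRESS_TERMS) for msg in messages
    )
    if distress_hits >= 2:
        return "escalate"  # e.g. surface crisis resources, involve a human
    if daily_hours > 4 or distress_hits == 1:
        return "flag"      # e.g. add friction, show well-being reminders
    return "ok"


print(assess_session(["hey", "what's up"], daily_hours=1.0))                 # ok
print(assess_session(["i feel so alone", "hopeless today"], daily_hours=2.0))  # escalate
```

Even this crude version shows the liability shift: the provider is expected to observe the relationship, not just moderate the content.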

Alignment, Expanded

Like most Chinese tech policies, these rules operate on two layers.

The first is psychological safety.
The second is ideological control.

Digital humans are prohibited from generating content that:

  • Threatens social stability
  • Promotes division or discrimination
  • Conflicts with state-defined norms

The result is a tightly bound system where AI must be both:

  • Emotionally contained
  • Politically aligned

A Fork in the Global AI Model

This moment marks a divergence in how major AI ecosystems are evolving.

The Western model:

  • Optimize for engagement
  • Personalize deeply
  • Let the market define boundaries

The Chinese model:

  • Constrain emotional depth
  • Standardize safety mechanisms
  • Treat psychological impact as a regulatory concern

Platforms like Character.ai and Replika represent one trajectory—AI as an increasingly intimate, adaptive presence.

China is building another: AI as a managed social technology, where the relationship itself is governed.

The Strategic Bet: Less Engagement, More Stability

At first glance, this looks like a competitive disadvantage.

AI systems that are less immersive, less emotionally responsive, and more interruptive may struggle to compete in global consumer markets driven by attention.

But that misses the longer game.

By embedding safeguards at the design level, China is positioning its AI ecosystem for:

  • Regulated industries (healthcare, education, public services)
  • Government adoption
  • Export to markets prioritizing control over creativity

In this model, trust becomes infrastructure—not a brand promise.

The Unintended Consequence: Demand Doesn’t Disappear

There is, however, a clear tension.

If users want deeper, more emotionally immersive AI experiences, they will find them through:

  • Foreign platforms
  • Open-source models
  • Unregulated digital environments

Which raises a paradox:

The stricter the official system becomes, the more intense—and potentially risky—the alternatives may be.

The Bigger Shift

What China is doing is not just regulating AI.

It is redefining the optimization function of the internet itself.

For years, platforms have competed for attention.
Now, the frontier is emotional influence.

China’s position is clear:
If AI begins to simulate relationships, those relationships become a matter of public policy.

The Bottom Line

The global AI race is often framed as a competition over intelligence, compute, and scale.

But a quieter battle is emerging—over how regulators and developers allow AI to relate to humans.

The West is building AI that people want to talk to.

China is building AI that protects people from it.

As artificial intelligence becomes more human-like, that distinction may matter more than the technology itself.

Related: Are AI Companions Good for Mental Health? The Psychology Behind Digital Relationships in 2025–2026
