China at the forefront of AI regulation
China stands out as a pioneer in artificial intelligence regulation. Well before the European AI Act entered into force, Beijing had adopted a series of texts regulating the most sensitive uses of AI. These measures do not apply only to local players: any foreign company offering its services in China is also required to comply.
Deepfakes, generative models, recommendation systems: each technology has its own regulatory framework, but they all rest on a common foundation of transparency, accountability, and user protection.
Measures on deep synthesis: deepfakes and voice clones under surveillance
Adopted in November 2022 and in force since January 2023, the Measures for the administration of Internet information services using deep synthesis technology target multimedia content manipulated by AI techniques. In concrete terms, this covers deepfake videos, voice clones, and digital avatars.
Operators must clearly label all synthetic content, ensure the traceability of files, implement moderation systems, and prevent malicious uses such as identity theft or the manipulation of public opinion. Internal audits and regular security assessments are also required.
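By way of illustration only, here is a minimal Python sketch of what the labeling obligation might look like for a generated image: a visible "AI-generated" notice is stamped onto the file before it is delivered. The function name, label wording, and placement are hypothetical; the format actually required is defined by the Chinese measures, not by this example.

```python
from PIL import Image, ImageDraw


def label_synthetic_image(path: str, out_path: str,
                          notice: str = "AI-generated content") -> None:
    """Stamp a visible provenance notice onto a generated image.

    Hypothetical sketch of the deep-synthesis labeling duty; the exact
    wording and placement required in practice are set by the regulation.
    """
    img = Image.open(path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Draw the notice twice with a 1px offset so it stays readable
    # on both light and dark backgrounds.
    x, y = 10, img.height - 20
    draw.text((x + 1, y + 1), notice, fill="black")
    draw.text((x, y), notice, fill="white")
    img.save(out_path)


# Example: label_synthetic_image("avatar.png", "avatar_labeled.png")
```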
The objective is clear: to contain the abuses of a technology that blurs the boundary between real and artificial.
Measures on generative AI: regulating ChatGPT, Midjourney and others
Second step: the Interim measures for the administration of generative AI services, adopted in July 2023. These cover systems that generate text, images, audio, or video from generative models. Chatbots like ChatGPT, image generators like Midjourney, and voice synthesis platforms are directly affected.
These services must align their content with core socialist values, avoid any harm to state security or national unity, and combat discrimination and algorithmic bias. Users' explicit consent is required for the use of their data, and models and algorithms must be registered with the authorities.
The regulation also requires human moderation and an accessible complaint mechanism, in order to reinforce provider accountability.
Measures on recommendation algorithms: transparency and user protection
As early as March 2022, China had already paved the way with the Measures for the administration of Internet recommendation algorithm services. These regulate platforms that use ranking, personalization, and targeted-advertising algorithms, such as TikTok (Douyin), WeChat, and Alibaba.
The rules require that users be informed when algorithms are used, be able to disable personalization, and be able to manage their preferences. Operators must also avoid promoting addictive or harmful content, protect minors and other vulnerable groups, and register their algorithms with the Cyberspace Administration of China (CAC).
These obligations reflect a desire to strike a balance between innovation, freedom of choice, and citizen protection.
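As a purely illustrative sketch of the opt-out requirement, the snippet below shows one way a platform might switch between a personalized feed and a neutral, non-personalized ranking depending on a user-controlled preference. All names (Item, UserPrefs, rank_feed) and the scoring logic are invented for this example and are not taken from any Chinese text.

```python
from dataclasses import dataclass, field


@dataclass
class Item:
    item_id: str
    popularity: float  # global engagement signal, not user-specific
    affinity: dict[str, float] = field(default_factory=dict)  # per-user scores


@dataclass
class UserPrefs:
    user_id: str
    personalization_enabled: bool = True  # switch the user can toggle off


def rank_feed(items: list[Item], prefs: UserPrefs) -> list[Item]:
    """Rank a feed while honoring the user's personalization choice.

    Hypothetical sketch: when personalization is disabled, fall back to
    a neutral popularity ordering instead of per-user affinity scores.
    """
    if prefs.personalization_enabled:
        key = lambda it: it.affinity.get(prefs.user_id, 0.0)
    else:
        key = lambda it: it.popularity
    return sorted(items, key=key, reverse=True)
```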
Comparative analysis: common points and specificities
These three texts rest on a common foundation: transparency, security, moderation, and traceability. They illustrate a coherent approach where regulation follows the cycle of technological innovation.
Scope of application
- Deep synthesis targets technologies that directly manipulate human perception (images, voices, videos).
- Generative AI applies to services accessible to the general public, centered on content generation.
- Recommendation algorithms concern the organization and dissemination of online content.
A single company may be subject to several of these regulations. A video chatbot with voice cloning, for example, would fall under both the rules on deep synthesis and those on generative AI.
What comes next: facial recognition and the identification of AI-generated content
Chinese regulation continues to evolve rapidly. Two new texts illustrate this dynamic.
- The measures on facial recognition regulate the use of AI in the fields of surveillance, security, and biometric identification.
- From September 2025, the measures on the identification of AI-generated content will introduce new traceability obligations, notably through watermarking and metadata.
These developments reflect a push towards full traceability of synthetic content, with stronger controls at the source.
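To make "traceability at the source" concrete, here is a minimal sketch, under stated assumptions, of how implicit identification via metadata might look: a provenance record (provider, model, timestamp, content hash) is embedded in a generated PNG. The field names are hypothetical; the actual identifiers, formats, and watermarking techniques are specified by the Chinese measures themselves, and robust watermarks that survive re-encoding go beyond what this example attempts.

```python
import hashlib
import json
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def embed_provenance(path: str, out_path: str, provider: str, model: str) -> dict:
    """Attach a provenance record to a generated PNG as text metadata.

    Hypothetical illustration of implicit AI-content identification;
    field names and the "ai_provenance" key are invented for this sketch.
    """
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    record = {
        "generator": provider,
        "model": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "sha256": digest,
    }
    img = Image.open(path)
    meta = PngInfo()
    meta.add_text("ai_provenance", json.dumps(record))
    img.save(out_path, pnginfo=meta)
    return record
```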
What lessons for Europe and the rest of the world?
China is today the country with the most comprehensive regulatory arsenal for AI. Whereas the EU's AI Act is only beginning to apply in stages, Beijing has already tested, adopted, and applied texts covering the main sensitive uses. For companies, this means contending with a strict framework, but also anticipating that this approach could inspire other jurisdictions.
Faced with AI that evolves faster than the law, China is establishing itself as a global regulatory laboratory.
Conclusion: How Naaia supports organizations
At Naaia, we help organizations decipher these regulations and anticipate their operational impact. If you deploy AI systems and wish to guarantee their compliance, we support you in transforming these obligations into levers of trust and performance.