Anthropomorphic artificial intelligence systems are spreading rapidly across many digital environments. Conversational companions, avatars, voice assistants, and relational chatbots are designed to simulate human traits such as empathy, personality, or relational continuity.
By introducing an emotional and social dimension into interaction, these systems profoundly transform the relationship between users and technology. This evolution nevertheless raises specific issues of individual protection, risk management, and liability, and creates significant risks when these tools are used by vulnerable populations or in sensitive contexts.
This article aims to clarify the notion of anthropomorphic AI, to present the main risks associated with these systems, and to analyze the first structured regulatory responses, notably those formalized by China, the State of New York, and California.
1. Anthropomorphic AI: definition
General definition
Anthropomorphism refers to the attribution of human characteristics to non-human entities. Applied to artificial intelligence, it consists of designing or presenting AI systems as if they possessed human traits, such as emotions, a personality, intentions, or relational capacity.
Anthropomorphic AI is therefore not defined by a specific technology, but by a mode of interaction and representation. It relies on design choices (language, tone, relational memory, visual or vocal appearance) intended to make the AI feel closer, more familiar, and more engaging to the user.
Examples of anthropomorphic AI systems
The most common forms of anthropomorphic AI include in particular:
- AI companions, designed to interact continuously with the user and establish an emotional relationship;
- Avatars and synthetic humans, embodying a character endowed with a visual and behavioral identity;
- Certain relational chatbots, capable of maintaining a contextualized conversation over time and adapting their social posture;
- Assistants or copilots presented as empathetic or “understanding,” beyond simple functional assistance.
These systems differ from traditional chatbots or assistants by their ability to simulate a relationship, and not solely to provide information or execute a task.
2. Advantages and risks related to anthropomorphic AI
2.1 Potential and contributions of anthropomorphic systems
Anthropomorphism is first and foremost a lever for the adoption of AI technologies. By making interaction more natural and more intuitive, these systems can improve access to digital services, particularly for populations uncomfortable with traditional technical interfaces.
Anthropomorphic AI systems can also help to:
- Facilitate user engagement in assistance or learning pathways;
- Improve the user experience in support or accompaniment contexts;
- Offer a continuous, available, and personalized point of contact, notably in situations of isolation.
In some cases, these systems are presented as emotional support or well-being tools, which explains their rapid spread in the market.
2.2 Specific risks and governance challenges
The expected benefits nevertheless come with substantial risks, which justify specific oversight.
- Vulnerability and emotional dependence: The simulation of empathy and presence can foster excessive trust. Among certain populations, notably minors, isolated individuals, or people in vulnerable situations, this may lead to forms of emotional dependence or relational substitution.
- Manipulation and behavioral influence: Anthropomorphic AI benefits from a higher level of perceived credibility. This proximity can be exploited to steer behaviors, influence decisions, or artificially maintain user engagement, sometimes without sufficient transparency.
- Personal data and privacy: Users tend to share more personal, and even sensitive, information with systems perceived as “human.” This increases risks related to data collection, secondary use, and data security, particularly when protection standards are insufficient.
- Distortion of the relationship to AI: By blurring the boundary between human and machine, anthropomorphism can lead to an overestimation of the AI’s actual capabilities. This confusion weakens users’ critical thinking and complicates the attribution of responsibility in the event of harm.
3. Emerging regulatory frameworks
Faced with the specific risks linked to anthropomorphic AI, certain jurisdictions have begun to develop targeted legal frameworks, recognizing that the simulation of human traits and emotional interaction constitute distinct risk factors. China, the State of New York, and California are among the most advanced actors, following different regulatory approaches that nevertheless converge in substance.
3.1 China: recognizing and regulating emotional interaction as a risk factor
China is among the first jurisdictions to have defined and regulated anthropomorphic AI systems. The Cyberspace Administration of China (CAC) published, on December 27, 2025, a draft Measures for the Management of Anthropomorphic Interactive AI Services, currently subject to public consultation.
The measures target AI services designed to simulate human traits (personality, reasoning, communication) and to engage users in continuous emotional interactions, regardless of the medium used (text, audio, video, avatars). These systems are now classified as sensitive infrastructures, owing to their capacity to influence users' emotions, behaviors, and autonomy.
The Chinese framework notably imposes:
- Lifecycle responsibility, including design, training, deployment, and updates, with comprehensive documentation of models, data, uses, and safety mechanisms;
- Mandatory security assessments beyond certain thresholds of use or social impact, covering system architecture, data governance, privacy protection, and risk management;
- Active management of psychological risks, including detection of dependence, compulsive use, or emotional distress, and implementation of graduated countermeasures;
- Enhanced protection of minors, through age-appropriate modes, parental consent, usage limits, and strict filtering of sensitive content;
- Explicit disruption of the human illusion, through clear and repeated information on the non-human nature of the system;
- Respect for ideological and social red lines, including the prohibition of certain content and identity impersonation.
Anthropomorphic AI will not be banned in China, but strictly regulated, in order to remain predictable, controllable, and aligned with objectives of social stability.
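To make the notion of "graduated countermeasures" more concrete, here is a minimal sketch of how an operator might escalate interventions as daily usage accumulates. It is illustrative only: the `UsageMonitor` class, the intervention levels, and the minute thresholds are our own assumptions, not requirements taken from the draft Measures.

```python
from dataclasses import dataclass, field
from enum import Enum


class Intervention(Enum):
    NONE = "none"
    GENTLE_REMINDER = "gentle_reminder"   # suggest taking a break
    COOL_DOWN = "cool_down"               # enforce a temporary pause
    HUMAN_REFERRAL = "human_referral"     # point the user to human support


@dataclass
class UsageMonitor:
    """Hypothetical monitor escalating countermeasures with daily usage.

    Thresholds (in minutes) are illustrative; a real deployment would
    calibrate them against its psychological-risk assessments.
    """
    minutes_today: int = 0
    thresholds: dict = field(default_factory=lambda: {
        Intervention.GENTLE_REMINDER: 60,
        Intervention.COOL_DOWN: 180,
        Intervention.HUMAN_REFERRAL: 360,
    })

    def record(self, minutes: int) -> Intervention:
        self.minutes_today += minutes
        # Return the strongest intervention whose threshold is crossed.
        for level in (Intervention.HUMAN_REFERRAL,
                      Intervention.COOL_DOWN,
                      Intervention.GENTLE_REMINDER):
            if self.minutes_today >= self.thresholds[level]:
                return level
        return Intervention.NONE


monitor = UsageMonitor()
print(monitor.record(45))  # Intervention.NONE (45 min total)
print(monitor.record(30))  # Intervention.GENTLE_REMINDER (75 min total)
```

The point of the escalation ladder is that responses grow progressively rather than jumping straight to a hard block, which mirrors the "graduated" logic the Chinese draft describes.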
3.2 The State of New York: an approach focused on the prevention of individual harm
In the United States, the State of New York has taken a more targeted approach with NY State Assembly Bill 2025-A6767, explicitly devoted to AI companions. Adopted by the State Assembly in March 2025 and then returned to it after review by the Senate in January 2026, the text has not yet entered into force, but rests on a central principle: the prevention of individual harm.
The text is based on the premise that AI companions, meaning systems designed to establish a prolonged social or emotional relationship with users, may generate serious and immediate risks. The bill prohibits the provision of an AI companion that lacks protocols to address:
- Expressions of suicidal ideation or self-harm;
- Risks of violence toward others;
- Risks of financial harm.
The law also provides for:
- Explicit transparency obligations regarding the non-human nature of the system;
- Mechanisms for referral to assistance or crisis services when signals of distress are identified.
The New York approach is based on a duty of care principle: once a system is designed to interact on an emotional or relational register, its operator must anticipate potential harms and implement operational safeguards, without necessarily regulating all anthropomorphic design choices.
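As a rough illustration of such a duty-of-care protocol, the sketch below screens each user message for distress signals before any model response is generated, and substitutes a referral to assistance services when a signal is detected. The keyword patterns are deliberately simplistic stand-ins for the dedicated classifiers a real system would require, and the category names and referral text are hypothetical.

```python
import re

# Illustrative patterns only; production systems would use trained
# classifiers with human review, not keyword lists.
DISTRESS_PATTERNS = {
    "self_harm": re.compile(r"\b(suicide|self[- ]harm|end my life)\b", re.I),
    "violence": re.compile(r"\b(hurt|kill)\s+(him|her|them|someone)\b", re.I),
    "financial": re.compile(r"\b(wire|send)\s+(all\s+)?my\s+(money|savings)\b", re.I),
}

CRISIS_REFERRAL = (
    "It sounds like you may be going through something serious. "
    "Please consider contacting a crisis service or someone you trust."
)  # A real deployment would surface jurisdiction-specific hotlines.


def screen_message(message: str) -> tuple[str | None, str | None]:
    """Return (risk_category, referral_text), checked before any reply."""
    for category, pattern in DISTRESS_PATTERNS.items():
        if pattern.search(message):
            return category, CRISIS_REFERRAL
    return None, None


category, referral = screen_message("Sometimes I think about suicide.")
print(category)  # self_harm
print(referral)  # referral text is shown instead of a model-generated reply
```

The design choice worth noting is that screening runs before generation: the operator anticipates the harm rather than filtering a response after the fact.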
3.3 California: transparency, safety, and accountability obligations
California has joined the movement to regulate companion chatbots with Senate Bill No. 243, which entered into force on January 1, 2026.
The Californian framework adopts an approach centered on user protection and operator accountability, particularly when these systems may be perceived as human or address vulnerable populations. It notably requires:
- Enhanced transparency regarding the artificial nature of the system when interaction may be perceived as human, with regular reminders for minors;
- Mandatory safety protocols aimed at preventing the production of content related to suicide, self-harm, and other forms of harm, with an obligation to refer to assistance services in the event of distress signals;
- Increased protection of minors, through specific restrictions on certain content and explicit warnings regarding the suitability of these systems for minor audiences;
- Accountability obligations, including the implementation of annual reporting to the competent authorities to document deployed safety measures and detected incidents, accompanied by sanction mechanisms in the event of non-compliance.
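The transparency requirement can be pictured as a simple cadence check performed on every conversational turn. The sketch below assumes a single disclosure at session start for all users and a recurring reminder for minors; the three-hour interval and the function name `needs_reminder` are illustrative assumptions, not the statutory wording.

```python
from datetime import datetime, timedelta

REMINDER_INTERVAL = timedelta(hours=3)  # illustrative cadence for minors
DISCLOSURE = "Reminder: you are interacting with an AI, not a human."


def needs_reminder(is_minor: bool, last_reminder: datetime | None,
                   now: datetime) -> bool:
    """Decide whether to surface the non-human disclosure on this turn."""
    if last_reminder is None:
        return True  # always disclose at the start of a session
    if is_minor and now - last_reminder >= REMINDER_INTERVAL:
        return True  # periodic re-disclosure for minors
    return False


start = datetime(2026, 1, 1, 9, 0)
print(needs_reminder(True, None, start))  # True: session start
if needs_reminder(True, start, start + timedelta(hours=4)):
    print(DISCLOSURE)  # shown again: over three hours for a minor
print(needs_reminder(False, start, start + timedelta(hours=4)))  # False
```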
4. Governance mechanisms adapted to anthropomorphic AI
Existing regulatory frameworks illustrate a fundamental trend: once an AI system is designed to interact on a social or emotional register, governance requirements must be strengthened. This logic applies beyond jurisdictions that have already legislated and concerns all organizations developing or deploying anthropomorphic AI.
Transparency and control of anthropomorphism
Users must be able to clearly identify the artificial nature of the system, understand its purposes and the main lines of its functioning, as well as the use made of their data. This transparency must be integrated into the user experience and limit any affective confusion linked to excessively humanizing design choices.
Oversight of emotionally impactful uses
When systems claim a role of emotional support or well-being, their scope of use must be explicitly defined and subject to independent evaluation, so that they are not implicitly treated as therapeutic devices in the absence of an appropriate sectoral framework.
Safety and crisis management protocols
Anthropomorphic AI systems must also integrate safety protocols enabling the identification of situations of distress or dependence and, where necessary, referral to appropriate human resources.
Protection of vulnerable populations
Particular attention must be paid to vulnerable populations, notably minors and individuals in situations of psychological or social fragility. This implies usage restrictions, specific settings, and adapted warnings, in order to limit risks of emotional dependence or undue influence.
Traceability and accountability
Finally, credible governance relies on the ability to document incidents, audit systems, and ensure regular monitoring of safety measures. These traceability and reporting mechanisms constitute an essential foundation of responsibility, including in legal environments that remain weakly structured.
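As a concrete anchor for this last point, the sketch below shows one minimal shape such documentation could take: a structured, append-only incident entry that can later feed audits or annual reports. The field names, severity scale, and JSON Lines storage are hypothetical design choices, not a prescribed format.

```python
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class IncidentRecord:
    """Hypothetical append-only record supporting traceability and reporting."""
    timestamp: str      # ISO 8601, UTC
    system_id: str      # which deployed agent/model version was involved
    category: str       # e.g. "distress_signal", "minor_exposure"
    severity: int       # 1 (low) to 5 (critical); illustrative scale
    action_taken: str   # countermeasure actually applied
    reported: bool      # whether escalated to the competent authority


def log_incident(record: IncidentRecord, path: str = "incidents.jsonl") -> None:
    # Append-only JSON Lines file; a real system would use tamper-evident storage.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


log_incident(IncidentRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    system_id="companion-v2.3",
    category="distress_signal",
    severity=4,
    action_taken="crisis_referral_shown",
    reported=True,
))
```

Keeping records immutable and structured is what turns ad hoc logging into auditable evidence, which is precisely what annual reporting obligations presuppose.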
Regulating anthropomorphic AI today
Anthropomorphic AI systems introduce particular risks linked to emotional interaction, perceptions of autonomy, and protection of individuals. Their deployment calls for steering mechanisms capable of closely monitoring agents, their behaviors, and their actual uses.
👉 Discover the Naaia platform, designed to support organizations in steering their AI agents, managing associated risks, and anticipating regulatory frameworks applicable to anthropomorphic AI.