Human vulnerability in the face of AI 

The rise of artificial intelligence (AI) both fascinates and worries many. Human vulnerability has become a central concern in light of the risks associated with AI: loss of control, algorithmic bias, inequality, discrimination, and threats to privacy. This vulnerability is evident in our growing dependence on complex and opaque systems, often termed "black boxes." Errors and biases in algorithms can severely affect crucial decisions, amplifying prejudice in employment, justice, and healthcare.

Putting Humans at the Center

To meet these challenges, it is essential to put humans at the center of AI development and use. This means designing ethical and transparent technologies while ensuring that users have adequate understanding of and control over these tools. By focusing on humans, we can maximize the benefits of AI and minimize its risks, ensuring that technology serves humanity's interests rather than the other way around.

Europe – AI Act  

The first comprehensive regulation of AI worldwide, the AI Act provides a harmonized framework that regulates AI as a product, ensuring product safety while integrating respect for fundamental rights. The AI Act thus emphasizes protecting humans and addressing the vulnerabilities that may arise from the use of AI systems. Concretely, this is reflected in principles such as human oversight, privacy protection in line with the GDPR, diversity, non-discrimination, fairness, and social and environmental well-being.

What Is a Vulnerable Person?

In this context, the question arises as to what constitutes a "vulnerable person."

This concept is not explicitly defined in the AI Act, but it is commonly understood that vulnerable persons include:

  • Minors or elderly individuals;   
  • Individuals with physical or mental disabilities;   
  • People living in extreme poverty or belonging to ethnic, racial, or religious minorities.   

More generally, vulnerability appears when there is a power imbalance between the end user and the AI deployer. This imbalance can result from status, authority, knowledge, economic or social circumstances, or age.

Prohibited Practices

At the heart of the AI Act’s provisions are prohibited practices, with two of the eight being particularly relevant to human vulnerability: 

  • Subliminal and Manipulative/Deceptive Techniques Causing Significant Harm  

It is prohibited to use an AI system that employs subliminal techniques or techniques of manipulation or deception to materially distort the behavior of one or more persons, resulting in significant harm to them. This prohibition aims to protect people against influences that could lead them to lose their free will and make decisions they would not otherwise have made.  

  • Exploitation of Vulnerabilities Causing Significant Harm  

It is prohibited to use an AI system to exploit the vulnerabilities of specific groups of people, such as those based on age, disability, or socioeconomic status, to adversely alter their behavior.  

High Risk

Additionally, the AI Act classifies certain AI systems as high risk, particularly when their use could negatively affect fundamental rights protected by the EU Charter of Fundamental Rights, such as the right to human dignity, non-discrimination, and the rights of persons with disabilities, among others. This applies to the use of AI systems in fields such as (Annex III):

  • Education and vocational training
  • Access to and enjoyment of essential private services and essential public services and benefits
  • Migration, asylum, and border management
  • Law enforcement
  • Administration of justice and democratic processes

In these areas, the legislator considers that individuals may find themselves dependent on decisions taken by public authorities or private actors.

In this regard, through delegated acts, the Commission may review the list of use cases or areas of high-risk AI systems, one of the criteria being human vulnerability.
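
To illustrate how a deployer might operationalize this first screening step, here is a minimal sketch in Python. The area labels and the helper function are our own illustrative simplifications, not terms from the AI Act, and the actual classification test involves further conditions and exemptions.

```python
# Illustrative sketch only: a simplified lookup against the Annex III
# areas listed above. The actual AI Act classification involves further
# conditions and exemptions; this is not a legal test.

ANNEX_III_AREAS = {
    "education_and_vocational_training",
    "essential_private_and_public_services",
    "migration_asylum_border_management",
    "law_enforcement",
    "administration_of_justice_and_democratic_processes",
}

def is_potentially_high_risk(use_case_area: str) -> bool:
    """Return True if the intended use falls within a listed Annex III area."""
    return use_case_area in ANNEX_III_AREAS

print(is_potentially_high_risk("law_enforcement"))  # True
```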

Fundamental Rights Impact Assessment

The AI Act requires deployers to assess the impact of high-risk AI systems on fundamental rights before deployment. This assessment must provide a meticulous analysis of potential harm to individuals or groups: the intended use of the AI system, the categories of people affected, the specific risks of harm, and the human oversight measures in place to mitigate those risks. Furthermore, for real-world testing of high-risk AI systems, enhanced measures must protect people who are vulnerable because of their age or disability.
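
As a purely illustrative way of capturing the elements just listed, the sketch below models such an assessment as a simple record. The class and field names are our own shorthand, not terms defined in the AI Act.

```python
from dataclasses import dataclass

@dataclass
class FundamentalRightsImpactAssessment:
    """Illustrative, simplified record of the elements a deployer's
    assessment must document before deploying a high-risk AI system."""
    intended_use: str                    # the deployer's intended use of the system
    affected_categories: list[str]       # categories of persons likely to be affected
    specific_risks_of_harm: list[str]    # specific risks of harm to individuals or groups
    human_oversight_measures: list[str]  # oversight measures mitigating those risks

# Hypothetical example for a benefits-allocation use case
fria = FundamentalRightsImpactAssessment(
    intended_use="Triage of applications for social benefits",
    affected_categories=["applicants", "persons with disabilities"],
    specific_risks_of_harm=["discriminatory refusal of benefits"],
    human_oversight_measures=["human review of every adverse decision"],
)
```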

The AI Act requires individuals to be informed when interacting with an AI system, especially if it processes biometric data to identify emotions or intentions. Additionally, AI systems’ information and notifications must be accessible to people with disabilities. This ensures that everyone can understand and interact effectively with these systems.
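
By way of illustration only, a deployer might surface such a notice along the lines sketched below; the wording and the function are hypothetical, not prescribed by the AI Act.

```python
def ai_interaction_notice(uses_emotion_recognition: bool) -> str:
    """Illustrative plain-language notice shown to the user; any real
    notice must also be accessible to persons with disabilities."""
    notice = "You are interacting with an AI system."
    if uses_emotion_recognition:
        notice += (" It processes your biometric data to infer"
                   " emotions or intentions.")
    return notice

print(ai_interaction_notice(uses_emotion_recognition=True))
```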

By implementing these measures, the AI Act aims to foster an AI ecosystem within the single market that is not only innovative but also deeply rooted in protecting fundamental rights and the well-being of all individuals, particularly the most vulnerable in society.

Like the AI Act, other legal frameworks for AI worldwide also emphasize protecting humans and their vulnerabilities from the risks associated with using AI. While these frameworks may vary in specifics due to different socio-legal contexts, the main objective remains constant: to ensure the safe and ethical deployment of AI technologies. This universal concern underscores the paramount importance of protecting human rights and well-being in the face of rapid technological advances.

Council of Europe  

The Council of Europe's Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, a treaty binding on the states that ratify it, aligns with the AI Act. Among the principles that AI systems must respect throughout their lifecycle are human dignity and individual autonomy, equality and non-discrimination, and respect for privacy and the protection of personal data.

It also includes several provisions to protect vulnerable people, such as persons with disabilities and children. Each party must take these groups' specific needs and vulnerabilities into account, in accordance with applicable domestic and international law. In addition, effective procedural safeguards must be available to individuals affected by AI systems, including the obligation to inform them when they are interacting with an AI system rather than with a human.

United States

In the United States, the Executive Order on AI emphasizes a human-centered approach in its policies and principles.

This approach ensures that AI’s development and deployment are safe, secure, and trustworthy, emphasizing human rights, fairness, and well-being. Among the principles are safety and security, privacy protection and civil liberties, promoting fairness and civil rights, and supporting workers and consumer protection.

China

Chinese legislation, the Interim Measures for the Management of Generative Artificial Intelligence Services (IMMGAI), specifically targets generative AI. It also aims to protect human vulnerabilities through provisions safeguarding the rights and interests of individuals, particularly those likely to be affected by these technologies. The measures require generative AI services to prevent discrimination, protect physical and mental health and privacy, and uphold core socialist values. They also aim to protect minors from excessive dependence on generative AI services.

As the first AIMS® (AI Management System) on the European market, Naaia is a SaaS solution for governing and managing AI systems. With a unique end-to-end platform vision and advanced legal expertise, it enables the organization, management, and control of AI systems. Beyond optimizing business performance, this all-in-one tool addresses the triple imperative of trust, performance, and compliance for AI systems. Feel free to contact our teams for more information.
