
Prohibited AI Systems

5 points to remember 

  • The AI Act is coming into force on August 1st, 2024. 
  • This text will impact you if you are leveraging an AI system (AIS) as part of your activities within the EU, at any stage of the AIS lifecycle (development, deployment, distribution). 
  • The AI Act provides for 8 groups of prohibited practices. 
  • From February 2nd, 2025, AIS presenting an unacceptable risk will be prohibited. 
  • It’s time to frame a clear strategy for managing these emergencies while preserving your operations and core business. 

The AI Act is coming! 

If you’re developing or integrating an AIS as part of your operations within the EU, you’ll be affected by the AI Act: it’s time to get ready. Before we focus on Prohibited AIS, let’s take a look back at the AI Act, its main principles and its impact. 

 What is the AI Act? 

The AI Act is an EU regulation. It provides a comprehensive, harmonized, and horizontal legal framework for AI, covering the development, commercialization, and use of AI systems (AIS) and general-purpose AI (GPAI) as products.

Its aim is to foster human-centric and reliable AI: the Act ensures robust protection of health, safety, and fundamental rights, supports innovation while addressing the negative impacts of AI systems, and establishes strong governance mechanisms and strict safety standards.

Who’s concerned? 

The regulation covers all AI systems defined in the AI Act as follows: 

« a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. » 

The regulation also covers general-purpose AI (GPAI).

The obligations of the AI Act apply to all operators throughout the AI value chain (providers, authorised representatives, distributors, importers and deployers) when they introduce their AI system and/or GPAI to the EU market, regardless of whether they are based in the EU or not. 

The obligations of the AI Act differ depending on the type of operator you are and the risk category in which your AIS is classified.

A risk-based approach 

The AI Act’s approach is primarily risk-based, with four levels (a schematic code sketch follows the list): 

  • Unacceptable risk: see the section below (What are the « prohibited » AIS?). 
  • High-risk AIS: these cover, among other areas, biometric identification, education and vocational training, and employment. Also classified as high-risk are AIS used in products subject to other EU sectoral regulations (toys, medical devices, elevators, etc.). 
  • AIS with specific or limited risk, subject to transparency obligations: these include chatbots that interact directly with natural persons, AIS that generate video or text content, and AIS that manipulate images to create deepfakes. 
  • Minimal risk: all other AIS that present neither an unacceptable nor a specific risk and are not categorized as high-risk. Spam filters are a perfect example. 
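
If you are building an internal inventory of your AIS, these four tiers can be modeled as a simple enumeration. The following Python sketch is purely illustrative; the record fields and system names are our own assumptions, not anything prescribed by the Act:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels of the AI Act, as summarized above."""
    UNACCEPTABLE = "prohibited practices (Article 5)"
    HIGH = "high-risk AIS (biometrics, education, employment, etc.)"
    LIMITED = "specific/limited risk, subject to transparency obligations"
    MINIMAL = "minimal risk (e.g. spam filters)"

# Hypothetical inventory entries; names and fields are illustrative only.
inventory = [
    {"name": "cv-screening-model", "tier": RiskTier.HIGH},
    {"name": "support-chatbot", "tier": RiskTier.LIMITED},
    {"name": "spam-filter", "tier": RiskTier.MINIMAL},
]

# Prohibited systems must be dealt with first (see the deadlines below).
to_eliminate = [s for s in inventory if s["tier"] is RiskTier.UNACCEPTABLE]
print(to_eliminate)  # [] here: nothing in this toy inventory is prohibited
```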

General-purpose AI (GPAI) is divided into two categories: with and without systemic risk. A GPAI presents a systemic risk when it (i) has high-impact capabilities or (ii) has been so categorized by a Commission decision. In that case, it is subject to a greater number of obligations than a GPAI without systemic risk.

Sanctions 

In the event of non-compliance with the obligations stipulated in the AI Act, the financial sanctions can be considerable, depending on the type of infringement. Fines can reach: 

  • Up to 35 million euros, or 7% of worldwide annual turnover, for non-compliance with rules on prohibited AI practices. 
  • Up to 15 million euros, or 3% of worldwide annual turnover, for non-compliance with AI Act obligations and measures concerning GPAI. 
  • Up to 7.5 million euros, or 1% of worldwide annual turnover, for providing incorrect, incomplete or misleading information to the authorities. 

In all three cases, the higher of the two amounts applies. For SMEs, on the other hand, penalties must be proportionate to their economic viability, so the lower amount applies.
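
The cap rule lends itself to a simple calculation: the higher of the fixed amount and the turnover percentage, or the lower of the two for an SME. Here is a minimal Python sketch under those assumptions; the function name is hypothetical and the result is a ceiling only, since actual fines are set case by case by the authorities:

```python
def fine_ceiling(fixed_cap_eur: float, pct_of_turnover: float,
                 worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound of the fine under the rule summarized above:
    the higher of the two amounts, or the lower one for an SME.
    Illustrative only; actual fines are set case by case."""
    pct_amount = pct_of_turnover * worldwide_turnover_eur
    return min(fixed_cap_eur, pct_amount) if is_sme else max(fixed_cap_eur, pct_amount)

# Prohibited-practice tier (35 M EUR / 7%) for 1 bn EUR of turnover:
print(fine_ceiling(35e6, 0.07, 1e9))                # 70000000.0 (the higher amount)
# Same infringement for an SME with 10 M EUR of turnover:
print(fine_ceiling(35e6, 0.07, 10e6, is_sme=True))  # 700000.0 (the lower amount)
```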

It is therefore essential to be well prepared to avoid the risk of financial penalties and repercussions on your organization’s reputation.  

What are the deadlines of the AI Act? 

Following its entry into force on August 1st, 2024, the AI Act will apply at staggered intervals (a small deadline-tracking sketch follows the list): 

  • 6-month deadline: elimination of AIS presenting an unacceptable risk (known as « prohibited »); 
  • 12-month deadline: compliance of general-purpose AI (GPAI) models; 
  • 24-month deadline: compliance of high-risk AIS not covered by EU sectoral legislation; 
  • 36-month deadline: additional period for high-risk AIS already covered by EU sectoral legislation (toys, medical devices, elevators, etc.). 
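
To track these milestones internally, the offsets quoted above can be turned into concrete dates. The Python sketch below is illustrative only; the binding application dates are those fixed in the Act itself (for example, February 2nd, 2025 for prohibited practices), so treat the computed dates as approximations:

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # as stated at the top of this article

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (safe here since day = 1)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Offsets quoted in the list above, in months after entry into force.
MILESTONES = {
    "elimination of prohibited AIS": 6,
    "compliance of GPAI models": 12,
    "compliance of high-risk AIS (non-sectoral)": 24,
    "compliance of high-risk AIS (sectoral legislation)": 36,
}

for obligation, months in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, months)}: {obligation}")
# Approximations only: the binding dates are fixed in the Act itself.
```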

In this article, we are going to focus on the emergency action to take by the end of 2024: the elimination of prohibited AIS. 

Prohibited AIS 

While AI offers many advantages, its power can be abused for manipulation, exploitation, and social control. Such practices contradict European values: respect for human dignity, freedom, and equality, as well as democracy and the rule of law. The Union’s fundamental rights are also at stake, including the right to non-discrimination, data protection, and equal opportunities.

By setting these prohibitions, the EU aims to reduce the risks of AI misuse, protect fundamental rights, and prevent AI from harming individuals or society.

What are the « prohibited » AIS? 

To stop using prohibited AIS, you first need to know how they are defined so that you can identify them. 

Based on Article 5 of the AI Act, the prohibited practices are as follows (a minimal screening sketch follows the list): 

  • Subliminal and manipulative / deceptive techniques resulting in significant harm 

It is forbidden to use an AIS that deploys subliminal, manipulative, or deceptive techniques to materially distort the behavior of one or more persons, resulting in significant harm to them. The purpose of this prohibition is to protect people from influences that would cause them to lose their free will and make decisions they would not otherwise have made. 

  • Exploitation of vulnerabilities resulting in significant harm 

It is forbidden to use an AIS to exploit the vulnerabilities of specific groups of people, such as those based on age, disability, or socio-economic situation, in order to alter their behavior in a harmful way. 

  • « Social scoring » by public and private authorities 

It is forbidden to use an AIS that evaluates or ranks people based on their social behavior or personal characteristics, where the resulting social score leads to treatment that is (i) applied in social contexts unrelated to the one in which the data was collected or generated, or (ii) unjustified or disproportionate to the social behavior of the people rated.

  • Emotion inference in the workplace and in educational establishments, except for medical or safety reasons 

It is forbidden to use an AIS to infer emotions in the workplace or in an educational setting, except for medical or safety reasons. This means that it is forbidden to infer emotions (anger, joy, surprise, etc.) from a person’s biometric data. On the other hand, its use is permitted to detect physical states such as fatigue or pain for drivers or airplane pilots, for example, with the aim of preventing accidents. 

  • Biometric categorization of sensitive attributes 

It is forbidden to use an AIS to categorize individuals based on biometric data in order to deduce sensitive attributes (race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation). Exceptions are made for law enforcement, under specific conditions. 

  • Untargeted scraping of facial images from the Internet or CCTV to build up or expand databases 

This practice would only accentuate the feeling of mass surveillance and could lead to violations of the right to privacy, which is why it is prohibited. 

  • Profiling of a natural person to assess or predict a crime 

It is forbidden to use an AIS to assess or predict the likelihood that a person will commit a crime based solely on profiling or on the assessment of personality traits or characteristics, unless the AIS supports a human assessment based on verifiable facts directly related to criminal activity. 

  • « Real-time » remote biometric identification for law enforcement purposes in publicly accessible spaces 

The use of remote, real-time biometric identification systems in public spaces for law enforcement purposes is strictly limited, except in the case of specific and significant threats or to locate suspects of specific serious offences. 
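
As announced above, here is a minimal screening sketch in Python. The eight yes/no questions paraphrase the practices listed above, and a « yes » merely flags the AIS for proper legal review; qualification under Article 5 cannot be reduced to a checklist:

```python
# One yes/no question per prohibited practice of Article 5, paraphrasing
# the list above. Illustrative triage only, not a legal qualification.
ARTICLE_5_CHECKS = [
    "Uses subliminal, manipulative or deceptive techniques causing significant harm?",
    "Exploits vulnerabilities linked to age, disability or socio-economic situation?",
    "Produces a social score leading to unrelated, unjustified or disproportionate treatment?",
    "Infers emotions at work or in education outside medical/safety purposes?",
    "Categorizes people via biometric data to deduce sensitive attributes?",
    "Scrapes facial images untargeted from the Internet or CCTV?",
    "Predicts criminal offences from profiling or personality traits alone?",
    "Performs real-time remote biometric identification in public spaces?",
]

def needs_legal_review(answers: list[bool]) -> bool:
    """Flag the AIS for legal review if any screening answer is 'yes'."""
    assert len(answers) == len(ARTICLE_5_CHECKS), "one answer per question"
    return any(answers)
```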

Managing emergencies 

Eliminating prohibited AIS, or bringing them into compliance, is an indispensable prerequisite for meeting the obligations of the AI Act applicable by the end of the year. It is therefore important to know how to manage emergencies, the first of which concerns prohibited AIS. 

We recommend setting up a team specialized in handling emergencies, to enable others to remain focused on their core business. This project team must have a vision that reconciles regulatory requirements and the needs of your business model.  

It is essential to do neither more nor less than is required, and to focus on the right actions. A good strategy will therefore eliminate prohibited AIS or bring them into compliance, while causing as little disruption as possible to your company’s key operations. 

It’s important to remember that there is no perfect approach: no company is yet fully mature on this subject, and actions must be tailored to each company’s situation.

Naaia AIMS can assist you in complying with the AI Act: we start with an inventory of prohibited AI systems, which includes qualifying your AIS, then help implement action plans for compliance and monitor each AIS throughout its lifecycle.
