
The AI Act, first regulation on artificial intelligence  

After several years of negotiations, the European Parliament has adopted a historic text on artificial intelligence ("AI"). The AI Act (or European Regulation on Artificial Intelligence) is the first legislation in the world to regulate AI, making the European Union a true pioneer in this field. The aim of this landmark text? To guarantee the safety of citizens and the protection of fundamental rights, while encouraging innovation and the development of trustworthy AI.

The AI Act, the world’s first law to regulate artificial intelligence 

Although the concept of artificial intelligence is not new, the latest technological advances have recently propelled it into the spotlight! Increasingly powerful, AI systems (AIS) are part of our daily lives. Today, they can be used to solve complex tasks in a wide range of sectors and industries (marketing and logistics, for example, but also healthcare and finance…). 

This constantly evolving technology obviously offers companies real development opportunities. But it also raises many questions, both ethical and legal: copyright and personal data protection, errors and false information, discrimination, impact on employment… To protect its citizens against the risks of AI, the European Union has decided to frame and regulate this technology.

After several years of discussion and negotiation, the AI Act (Artificial Intelligence Act) was approved by all EU member states in early February. The first AI regulation in the world, the European Regulation on Artificial Intelligence aims to ensure the use of responsible, sound and ethical AI, without stifling innovation.  

What does the law say? 

The AI Act classifies AI systems according to their potential risks and level of impact. The text distinguishes four levels of risk, and provides a specific category for general-purpose AI models. AI systems are then subject to more or less stringent rules, depending on their level of risk. 

AI systems at unacceptable risk 

The AI Act first reminds us that AI systems with unacceptable risks are prohibited. Here, the regulation refers to all systems that could threaten the safety, livelihood and rights of individuals, such as: 

  • social rating systems;  
  • biometric categorization systems (which use sensitive characteristics such as sexual orientation, race, or religious or political opinions); 
  • systems that deploy subliminal or deceptive techniques to manipulate human behavior, or exploit people’s vulnerabilities. 

High-risk AI systems 

The AI Act focuses primarily on applications deemed to be high-risk, which it subjects to specific legal requirements. These include AI systems used in sensitive areas such as education, employment or law enforcement:  

  • a system determining access, admission or assignment to educational and vocational training establishments; 
  • a CV scanning tool to rank candidates during recruitment; 
  • an emotion recognition system when it is not classified as a prohibited AIS; 
  • a solvency assessment system (except in the case of fraud detection); 
  • … 

The suppliers of these AI systems must comply with strict constraints and requirements (registering the AIS in a European database, setting up a risk management system and human oversight…).

Limited-risk AI systems 

The text then targets limited-risk AI systems, which benefit from a more flexible regulatory framework.

Limited-risk AIS are above all subject to transparency obligations, which also aim to ensure respect for copyright. In particular, users of these systems need to be made aware that they are interacting with AI (for example, when conversing with a chatbot).

AI systems with minimal risk 

The AI Act does not regulate applications that present a minimal risk, i.e. those that are not expressly prohibited or listed as high-risk. Examples include AI-enabled video games and spam filters. 

General-purpose AI systems  

Under the AI Act, suppliers of GPAI (General Purpose AI) models must also comply with a number of rules. A GPAI system is an AI system based on a general-purpose AI model, which can be used for a variety of purposes. It may be intended for direct use, or integrated with another AI system. 

In addition to complying with the Copyright Directive, GPAI model suppliers must provide certain documents (technical documentation, instructions for use, and a summary of the content used to train the model). Those presenting a systemic risk are subject to additional requirements.

When will the text be applied? 

Implementation of the AI Act will be overseen by the European AI Office. The text will apply to all operators of AI systems designed, deployed and used within the European Union (suppliers, importers, distributors and deployers). Those who fail to comply with the new rules will face significant financial penalties (up to 35 million euros or 7% of worldwide annual turnover). This is why the AI Act is set to become a global benchmark for the regulation of artificial intelligence.

The European regulation was voted on by Parliament on March 13 and is due to be adopted in plenary session, without a further vote, on April 22. It will come into force 20 days after publication in the Official Journal of the European Union and will be fully applicable 24 months after that date, with specific deadlines for certain provisions. AI systems posing an unacceptable risk, for example, will be banned no later than 6 months after entry into force. The timetable also stipulates that the rules on general-purpose AI must be in place within 12 months of the regulation's entry into force. To be ready in time, organizations have every interest in anticipating the application of the AI Act.

 First AIMS® in Europe, Naaia turns regulatory constraints into strategic opportunities for organizations of all sizes. As a compliance and risk management solution for AI systems, Naaia meets compliance needs and transforms regulatory obligations into concrete actions for customers. Contact our teams today! 
