
The AI Act calendar: dates to remember 

Last update: July 12, 2024

The AI Act was adopted by the European Parliament on March 13, 2024. It is designed to guarantee the deployment of responsible, safe and ethical artificial intelligence. A pioneering AI regulation worldwide, the AI Act will apply to all operators of AI systems designed, deployed and used within the European Union. And those who fail to comply with the new rules will face substantial financial penalties! Beyond mastering the contours of this historic text, it is therefore essential to know the calendar and the key dates of its implementation… 

 Soon, the AI Act will enter into force! 

On April 22, 2024, the corrigendum of the AI Act was approved without a vote at the plenary session of the European Parliament. On May 21, the twenty-seven member states of the European Union definitively adopted the AI Act. This new European regulation was published in the Official Journal of the European Union on July 12, 2024. It will enter into force 20 days after its publication, i.e. on August 1, 2024.
 
The AI Act will be fully applicable 24 months after its entry into force (August 2026). But organizations will also have to meet certain earlier deadlines to comply with this new regulation… 

Within 6 months: Address AI systems with unacceptable risk 

The AI Act first stipulates that prohibited AI systems must be phased out or brought into compliance within 6 months. The first deadline therefore falls on February 2, 2025. And it is in organizations' best interests to start complying now… 

What are prohibited AI systems?

As specified in Article 5 of the AI Act, prohibited AI systems are those that present an unacceptable risk to the safety or rights of individuals, including: 

  • the deployment of subliminal, purposefully manipulative or deceptive techniques to distort the behavior of persons and cause them to make decisions they would not otherwise have taken; 
  • the exploitation of the vulnerability of persons due to their age, disability or socio-economic situation, in order to alter their behavior in a harmful manner; 
  • the establishment of a social scoring system by public and private authorities, based on people’s social behavior, personal or personality characteristics; 
  • the deployment of systems to infer the emotions of persons in the workplace and in educational institutions (except for medical or safety reasons); 
  • biometric categorization to categorize individuals according to their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation. Exceptions are made in the field of law enforcement, subject to specific conditions; 
  • the untargeted scraping of facial images from the Internet or CCTV footage in order to build up or expand databases; 
  • assessing or predicting the risk of committing a criminal offence, solely on the basis of profiling a person or assessing his or her personality traits and characteristics (unless AI systems support a human assessment based on verifiable facts directly linked to a criminal activity); 
  • “real-time” remote biometric identification in public spaces for law enforcement purposes (except in the case of specific and significant threats, or to locate suspects of specific serious offences). 

Within 12 months: compliance of GPAI models 

The AI Act also stipulates that the governance rules and obligations relating to General Purpose AI (GPAI) models will be applicable within 12 months of its entry into force. Organizations using or deploying these models will therefore have one year to comply (by August 2, 2025). 

A GPAI (General Purpose AI) is an AI model that can be used for a variety of purposes. It can be intended for direct use or integrated into another AI system. The AI Act requires these models to comply with specific rules: organizations deploying GPAI models must notably provide a certain number of documents (technical documentation, instructions for use, etc.), and models that present a systemic risk are subject to additional requirements. 

Within 24 months: compliance of high-risk AI systems not covered by EU specific legislation 

The AI Act subjects high-risk AI systems to strict legal requirements:

  • Registration of AIS in a European database
  • Implementation of a risk management system and human oversight…

According to the European regulation, organizations that design, deploy or use high-risk AI systems will have to comply within 24 months of its entry into force.

High-risk AI systems are specifically listed in Annex III of the AI Act. They operate in sensitive areas such as biometrics; critical infrastructure (road traffic, energy supply, etc.); education and vocational training; employment; access to essential private services and essential public services and benefits (health, banking, emergency services, etc.); law enforcement; migration, asylum and border control management; and the administration of justice. 

And afterwards? 

 
The AI Act provides a longer deadline for compliance for certain high-risk AI systems. Those already covered by specific European legislation requiring certification (closed list in Annex I) will have 36 months to comply with the requirements of the AI Act. This additional time is granted, for example, to AI systems deployed in the fields of toys, medical devices, in vitro diagnostics, radio equipment, civil aviation security or agricultural vehicles… 

Obligations relating to AI systems used in large-scale information systems established by EU legislation (in the areas of freedom, security and justice) will have to be implemented by the end of 2030. In particular, the regulation targets the Schengen Information System (SIS), an information-sharing system for border security and management in Europe. 

How to ensure that these deadlines are met? 

To meet the requirements of this new regulatory framework, companies need to anticipate their compliance: start training their teams, phase out prohibited AI systems and set up AI governance. And they also need the right tools! 

Naaia is the first AIMS® on the European market. It is a SaaS solution for the governance and management of artificial intelligence systems:

  • Qualification of AI systems and identification of prohibited AIS
  • Implementation of an action plan to ensure compliance
  • Monitoring of each AI system throughout its lifecycle…

Our AIMS supports you in your efforts to comply with the AI Act. Naaia has just added six new features to its solution, for an even more effective operational response and optimized action plans. Don’t hesitate to contact our teams to find out more. 
