
AI Act: what are the requirements for high-risk AI systems?

With the rapid rise of artificial intelligence across nearly every sector, regulation and governance of this technology have become crucial. A new European regulation, known as the AI Act, was published in the Official Journal of the European Union on July 12, 2024, and entered into force 20 days later, on August 1, 2024. It is the world's first comprehensive legal framework for artificial intelligence. It applies to providers, deployers, and other operators of AI systems in the European Union. It classifies AI systems according to their level of risk and impact, then applies specific rules accordingly. While the text prohibits AI systems posing an unacceptable risk, it primarily focuses on the regulation of high-risk AI systems.

So, which high-risk AI systems are targeted by the AI Act? What rules and obligations must they comply with? And how can they achieve compliance?


What is a High-Risk AI System?

After listing the prohibited AI practices, the AI Act defines and explains how to classify high-risk AI systems.

1. A System Requiring Certification Under Another Regulation

According to the European regulation, an AI system is considered "high-risk" under Annex I when:

  • the AI system is used as a safety component of a product, or if it is a product itself covered by European legislation (toys, medical devices, elevators…);

AND

  • the AI system must undergo a conformity assessment by a third party before it can be sold or used.

OR

2. A System Operating in a Sensitive Area

According to Annex III of the AI Act, all AI systems operating in the following areas are considered high-risk (a simplified sketch of this classification logic follows the list):

  • Biometric data: remote biometric identification systems, biometric categorization systems (based on sensitive attributes or characteristics), and emotion recognition systems;
  • Critical infrastructure: safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity;
  • Education and vocational training: systems determining access to or placement in educational and vocational training institutions, including systems assessing candidates' results or educational levels and systems monitoring students during exams;
  • Employment and worker management: systems used for recruiting or selecting candidates, managing promotions or contract terminations, allocating tasks based on individual characteristics, and evaluating performance;
  • Essential public and private services: systems used in sectors such as healthcare, banking, and emergency services, covering eligibility for certain benefits and services, credit scoring, emergency call assessments, and pricing for health and life insurance;
  • Law enforcement: systems evaluating the likelihood of a person being a victim of a crime, polygraphs and similar tools, systems assessing the risk of offending or reoffending, and profiling systems;
  • Migration, asylum, and border control: polygraphs and similar tools, systems evaluating irregular migration or health risks, systems reviewing asylum, visa, and residence permit applications, and detection and recognition systems;
  • Justice administration and democratic processes: systems used to search and interpret facts and apply the law, as well as systems influencing election outcomes or voter behavior (excluding tools for organizing political campaigns).
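To make the two classification routes concrete, here is a minimal Python sketch of the decision logic described above. All names are hypothetical, the Annex III list is abridged, and real classification also involves the derogations of Article 6(3), so this is an illustration rather than a legal test.

```python
from dataclasses import dataclass, field

# Abridged, hypothetical set of Annex III areas; the full annex is authoritative.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

@dataclass
class AISystem:
    name: str
    is_safety_component_or_regulated_product: bool   # Annex I, first condition
    requires_third_party_conformity_assessment: bool  # Annex I, second condition
    annex_iii_areas: set[str] = field(default_factory=set)

def is_high_risk(system: AISystem) -> bool:
    """Simplified reading of the classification rules described above."""
    # Annex I route: BOTH conditions must hold (the AND in the text).
    annex_i = (system.is_safety_component_or_regulated_product
               and system.requires_third_party_conformity_assessment)
    # Annex III route: operating in ANY listed sensitive area suffices (the OR).
    annex_iii = bool(system.annex_iii_areas & ANNEX_III_AREAS)
    return annex_i or annex_iii

# Example: a CV-screening tool falls under the employment area.
cv_screener = AISystem("cv-screener", False, False, {"employment"})
assert is_high_risk(cv_screener)
```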


What Does the AI Act Require for High-Risk AI Systems?

The AI Act subjects high-risk AI systems to binding obligations. To achieve compliance, providers, deployers, importers, and distributors of these high-risk systems must each meet specific requirements according to their role.

What Are the Obligations?

Organizations providing or using high-risk AI systems must:

  • Implement a risk management system: it must be regularly reviewed and updated throughout the AI system's lifecycle and include continuous risk assessment and mitigation measures;
  • Ensure data governance: AI systems must use high-quality data to minimize bias and ensure fair and transparent results. The data must be relevant, sufficiently representative, and, to the best extent possible, complete and free of errors;
  • Guarantee the robustness, accuracy, and cybersecurity of the system: high-risk AI systems must be designed to be accurate, robust, and secure. They must be resilient to errors and failures, have backup plans, and be protected against attempts to exploit their vulnerabilities;
  • Provide for the recording of relevant events: the AI system must automatically record events throughout its lifecycle, so that risks can be identified and significant system modifications traced (see the sketch after this list);
  • Ensure human oversight: the AI system must allow for human supervision, aimed at preventing or minimizing risks to health, safety, or fundamental rights;
  • Develop detailed technical documentation: it must be kept up to date and demonstrate that the AI system meets legal requirements, helping authorities assess compliance;
  • Provide clear instructions for use: users must easily understand how to use the high-risk AI system, and the system must come with information on its capabilities, limitations, and potential risks;
  • Implement a quality management system: it must be sufficiently documented to ensure the system's compliance at all stages of its life.
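As an illustration of the record-keeping obligation, here is a minimal Python sketch of automatic event logging. The field names (event_id, system_id, model_version, and so on) are assumptions chosen for this example, not a format prescribed by the AI Act.

```python
import json
import logging
import time
import uuid

# Hypothetical audit logger writing one JSON event per line to a file.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_events.jsonl"))

def record_event(system_id: str, model_version: str, event_type: str, detail: dict) -> str:
    """Append one structured, timestamped event to the system's audit trail."""
    event = {
        "event_id": str(uuid.uuid4()),   # unique reference for traceability
        "timestamp": time.time(),        # when the event occurred
        "system_id": system_id,          # which AI system produced it
        "model_version": model_version,  # helps trace significant modifications
        "event_type": event_type,        # e.g. "prediction", "model_update"
        "detail": detail,                # input/output references, scores, ...
    }
    audit_log.info(json.dumps(event))
    return event["event_id"]

# Example: log a prediction, then a later model update.
record_event("credit-scorer", "1.4.2", "prediction", {"applicant_ref": "A-102", "score": 0.73})
record_event("credit-scorer", "1.5.0", "model_update", {"reason": "retraining on Q3 data"})
```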

The high-risk AI system must then undergo a conformity assessment, carried out by a notified third party where the regulation requires it. It must obtain a CE marking and be registered in the European Union database.

Within What Timeframe?

Approved in its final form in mid-April 2024, the AI Act entered into force on August 1, 2024, 20 days after its publication in the EU Official Journal on July 12, 2024. The European regulation requires organizations designing, deploying, or using high-risk AI systems to comply within 24 months of its entry into force, that is, by August 2026.

AI systems already governed by specific European legislation will have 36 months, until August 2027, to comply with the AI Act's requirements. This extended deadline applies to systems deployed in sectors such as toys, in vitro diagnostic medical devices, and radio equipment, as well as civil aviation safety and agricultural vehicles.


How to Achieve Compliance with the AI Act?

To meet the requirements of this new regulatory framework, companies must prepare for compliance with the AI Act. For high-risk systems, non-compliance can result in significant fines of up to 15 million euros or 3% of worldwide annual turnover, whichever is higher.

Necessary actions include mapping and classifying AI systems as well as updating risk management processes.

It is also essential to establish technical documentation, traceability tools, and robust mechanisms for monitoring and transparency.
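As a starting point for this mapping work, here is a minimal Python sketch of an AI-system inventory entry. The InventoryEntry name and its fields are hypothetical, chosen to mirror the actions listed above rather than any format mandated by the regulation.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical inventory entry for mapping and classifying AI systems.
@dataclass
class InventoryEntry:
    system_id: str
    purpose: str
    risk_class: str              # "prohibited" | "high" | "limited" | "minimal"
    deadline: date               # applicable compliance date
    technical_doc_ready: bool = False
    logging_in_place: bool = False
    human_oversight_defined: bool = False

    def open_actions(self) -> list[str]:
        """List the obligations still to be addressed for this system."""
        checks = {
            "technical documentation": self.technical_doc_ready,
            "event logging": self.logging_in_place,
            "human oversight": self.human_oversight_defined,
        }
        return [name for name, done in checks.items() if not done]

entry = InventoryEntry("cv-screener", "candidate ranking", "high", date(2026, 8, 2))
print(entry.open_actions())  # -> ['technical documentation', 'event logging', 'human oversight']
```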

Investing in AI governance solutions helps organizations comply with regulations. It also builds consumer and regulator trust while fully leveraging the benefits of AI.

As a pioneering European AI management system, Naaia offers a SaaS solution for AI governance and control.

Covering multiple legislations and reference frameworks, the AIMS® supports your compliance with the AI Act and the deployment of responsible AI. For an even more effective operational response, Naaia has recently integrated new features into its solution. Feel free to contact our teams for more information.
