Generative AI and regulation: understanding the risks and obligations of the AI Act

With the rise of AI systems capable of acting, interacting, and sometimes deciding, questions of safety, transparency, and responsibility become urgent. What are the risks of generative AI? How can they be anticipated without hindering innovation? And where does the regulation of generative AI stand today?

The AI Act, the first comprehensive AI regulation at the European level, seeks to respond to this challenge by introducing risk management based on the level of potential impact. This article explores the notion of high-risk AI systems, the obligations placed on providers, the specific cases raised by AI agents, and the limits of this emerging regulatory framework. Essential reading for anyone who designs, uses, or oversees AI-based technologies.

What is a high-risk AI system according to the AI Act?

A dual classification of high-risk systems

The AI regulation distinguishes two main categories of high-risk AI systems:

  • Those linked to regulated products (Annex I)
  • Those classified according to their purpose (Annex III)

In both cases, these systems are subject to the full set of requirements for high-risk AI laid down in Chapter III of the AI Act.

Generative AI and regulation: necessary oversight in the face of sensitive uses

AI regulation no longer concerns only industrial players or researchers. It now affects companies across all sectors that integrate generative AI technologies into their tools, services, or decision-making processes. These technologies, including autonomous agents sometimes capable of complex actions, arouse both fascination and concern.

The AI Act acknowledges this reality by classifying certain uses as “high risk” when they may affect fundamental rights or public safety. This is notably the case for AI systems used in recruitment, education, healthcare, justice, or essential public services. This classification is not symbolic: it entails strong regulatory obligations for the designers and providers of these systems.

A classification based on the risks of AI agents

The text distinguishes two main situations in which an AI agent may be considered high risk: when it is integrated into a product regulated by the European Union, such as a medical device or an autonomous vehicle, and when its purpose relates to sensitive fields such as biometrics, surveillance, education, or policing. Annex III of the AI Act lists these critical uses.

This classification aims to prevent the risks linked to generative AI, notably those stemming from the opacity of models, the reproduction of biases in automated decisions, and the uncontrolled automation of human processes. The aim is not prohibition, but a requirement that these systems be designed in a more robust, ethical, and traceable manner.

Article 6(3): a safety net for generative AIs with limited use

Not all generative AIs are automatically considered high risk. The AI Act provides an exception to this classification, defined in Article 6(3). If an AI agent merely carries out a narrow technical task, such as converting documents or sorting files, without influencing or replacing a human decision, it may fall outside the scope of the strictest obligations.

This clause makes it possible to distinguish a generative assistance tool from an autonomous AI agent whose actions can significantly affect users. It thus avoids over-regulating modest uses while strengthening oversight of the most sensitive ones.
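To make this classification logic concrete, here is a minimal sketch in Python of the decision flow described above. It is an illustration only, not legal reasoning: the function name, parameters, and boolean simplifications are our own shorthand, and a real Annex I / Annex III assessment is a case-by-case legal analysis, not a set of flags.

```python
from enum import Enum, auto

class RiskLevel(Enum):
    HIGH = auto()
    NOT_HIGH = auto()

def classify(
    annex_i_safety_component: bool,    # tied to a product regulated under Annex I
    annex_iii_purpose: bool,           # purpose listed in Annex III (biometrics, education, ...)
    narrow_technical_task: bool,       # limited to a technical or preparatory task
    influences_human_decisions: bool,  # influences or replaces a human decision
) -> RiskLevel:
    # Branch 1: systems linked to Annex I regulated products are high risk.
    if annex_i_safety_component:
        return RiskLevel.HIGH
    # Branch 2: Annex III purposes are high risk by default...
    if annex_iii_purpose:
        # ...unless the Article 6(3) derogation applies: a narrow technical
        # task that does not influence the outcome of human decision-making.
        if narrow_technical_task and not influences_human_decisions:
            return RiskLevel.NOT_HIGH
        return RiskLevel.HIGH
    # Everything else falls outside the high-risk regime (other rules may still apply).
    return RiskLevel.NOT_HIGH

# Example: a document-conversion agent performing a technical task
# with no influence on human decisions.
print(classify(False, True, True, False))  # RiskLevel.NOT_HIGH
```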

What obligations for designers of high-risk AI?

Providers of high-risk AI must implement continuous AI risk management. This involves documenting the system’s functions, assessing its risks at each stage of its life cycle, ensuring appropriate human oversight, and demonstrating that the system is used in accordance with its declared purpose.

They must also meet criteria of transparency, robustness, and cybersecurity, and set up a post-market monitoring system. Failure to meet these obligations can result in sanctions, as well as a loss of credibility with users, partners, and regulators.
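As a rough illustration of how a provider might track these obligations internally, here is a hypothetical checklist structure in Python. The field names paraphrase the list above; they are our shorthand, not the Act’s legal wording, and such a checklist is an internal aid, not evidence of compliance.

```python
from dataclasses import dataclass

@dataclass
class HighRiskObligations:
    """Hypothetical internal checklist; fields paraphrase the obligations above."""
    risk_management: bool = False           # continuous, lifecycle-wide risk management
    technical_documentation: bool = False   # documented functions and intended purpose
    human_oversight: bool = False           # appropriate human oversight measures
    transparency: bool = False              # transparency information for users
    robustness_cybersecurity: bool = False  # robustness and cybersecurity criteria
    post_market_monitoring: bool = False    # monitoring system after deployment

    def gaps(self) -> list[str]:
        # List the obligations not yet addressed.
        return [name for name, done in vars(self).items() if not done]

status = HighRiskObligations(risk_management=True, technical_documentation=True)
print(status.gaps())  # items still to address before placing the system on the market
```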

Towards responsible and compliant generative AI

The purpose of the AI Act is not to hinder innovation, but to enable the responsible development of generative AI. By overseeing the most sensitive uses, clarifying the responsibilities of stakeholders, and imposing technical and ethical safeguards, Europe seeks to lay the foundations for a safe, fair, and sustainable digital space.

Companies that anticipate this evolution by structuring their compliance with AI regulation today gain a head start. They guard against misuse while strengthening the trust of their clients, users, and partners.
