The European regulation on artificial intelligence, better known as the AI Act, establishes a genuine governance architecture built on two levels: European coordination and national implementation.
The goal is to ensure the coherence of practices and the exchange of know-how among the Member States of the European Union. Each level has its own institutions and areas of action. Let us examine them, as well as the points of collaboration and convergence.
The European level: strategic axis and pillar of overall coherence
At the top of the system, the European AI Office, attached to the European Commission, plays a leading role. Operational since February 2024, it drafts the delegated acts necessary for the implementation of the regulation, manages the database of high-risk AI systems, and supervises so-called “general-purpose” models (GPAI). It acts somewhat like a central European regulator, guaranteeing the technical and legal coherence of the system.
Alongside it, the European Artificial Intelligence Board (AI Board), whose first meeting is scheduled for August 2, 2025, brings together one representative per Member State. Its mission: to promote coordination among national authorities, share best practices, develop common guidelines, and support the Commission in interpreting the text.
Two advisory bodies complete this architecture: the Advisory Forum, which gathers stakeholders from the economic, academic, and civil society worlds, and the Scientific Panel of independent experts, composed of recognized specialists providing technical analyses of AI models and systems.
Finally, when AI is used directly by European institutions, supervision falls under the European Data Protection Supervisor (EDPS), which acts as the competent authority.
Thus, this European level defines the strategic framework and overall coherence, while Member States ensure the operational implementation on the ground.
Member States: at the heart of local implementation
The national level is where concrete action takes place. Each Member State must, by August 2, 2025, designate a set of authorities and bodies responsible for applying the regulation within its territory.
Market surveillance authorities will verify the compliance of deployed systems, conduct investigations, and may order the withdrawal or update of a non-compliant system.
Notifying authorities will be responsible for designating and monitoring the notified bodies in charge of certifying high-risk systems.
Finally, authorities or bodies for the protection of fundamental rights will intervene to prevent any infringement of privacy, non-discrimination, or public freedoms. The regulation does not define the powers of these authorities in concrete terms: national implementing laws will likely determine their powers and interactions.
Each country must also have a single contact point linking national authorities and the European Commission, to guarantee coherence of practices and information exchange. Note that the list of designated single contact points is centralized and published by the European Commission.
Focus on France: networked governance and shared expertise
France has opted for a coordinated approach, relying on actors already experienced in digital regulation and consumer protection:
- The government has entrusted the Directorate-General for Competition, Consumer Affairs and Fraud Control (DGCCRF) with the national coordination of the system as well as the function of Single Contact Point (SCP) with the European Commission.
- The Directorate-General for Enterprises (DGE) ensures the regulatory implementation of the AI Act and represents France on the European AI Board. Two technical bodies, ANSSI and PEReN, provide support, pooling expertise in cybersecurity, technical evaluation, and standardization.
- The CNIL remains the reference authority for issues related to personal data protection and works closely with other actors to ensure respect for fundamental rights.
Furthermore, France has established a governance model organized by type of use, in which each authority oversees the use cases falling within its domain of expertise.
Systems covered by Annex I
Annex I covers products subject to European harmonization legislation: medical devices, machinery, connected toys, autonomous vehicles, radio equipment, etc. When one of these products integrates an AI component, the latter is automatically considered a high-risk AI system. In France, control falls under sectoral market surveillance authorities, coordinated by the DGCCRF.
Systems covered by Annex III
Annex III groups eight main categories of high-risk uses, regardless of product type. These uses involve several French authorities depending on their nature:
| Domain | Examples of AI systems | Competent authority |
|---|---|---|
| Biometrics | Facial recognition, biometric categorization, emotion detection | CNIL |
| Critical infrastructures | AI systems for traffic, energy, or water management | HFDS of MEFSIN and MATTE |
| Education and vocational training | AI for evaluating or orienting students and learners | Education: CNIL; vocational training: DGCCRF |
| Employment, workforce management | Automated recruitment, HR scoring | CNIL |
| Access and right to essential private and public services | Loan granting, insurance, social security | Financial services provided by financial institutions: ACPR; CNIL |
| Law enforcement | Predictive policing, image analysis for public security | CNIL |
| Border control | Risk analysis, behavioral detection | CNIL |
| Administration of justice | Decision-support systems for courts | Systems deployed or used by judicial authorities: Council of State, Court of Cassation, Court of Auditors |
| Democratic processes | Moderation or electoral influence systems | CNIL, Arcom |
In addition, certain practices are strictly prohibited (Article 5 of the regulation), such as:
- The use of manipulative or subliminal techniques to influence a person’s behavior;
- The exploitation of vulnerabilities linked to age or disability;
- Social scoring systems;
- Real-time remote biometric identification for law enforcement purposes (except for specific exceptions).
For these cases, the DGCCRF, the CNIL, and Arcom share competence, depending on the nature of the risk: commercial manipulation (DGCCRF), personal data processing (CNIL), or information integrity (Arcom).
National examples: diversity of governance models
While France relies on coordination and specialization, other Member States have adopted diverse governance models according to their institutional organization:
- Luxembourg is preparing a more centralized system, based on a single law currently in the process of adoption, aiming to concentrate supervision within one main authority.
- Ireland has chosen a distributed model: fifteen competent authorities have been designated, which will eventually be coordinated by the National AI Office, supported by a national implementation committee operational since September 2025.
- Spain has established a Spanish Agency for AI Supervision (AESIA), which collaborates with the Spanish Data Protection Agency (AEPD), the Bank of Spain, the CNMV, and several regional authorities specializing in biometric and fundamental rights issues.
This diversity reflects the flexibility granted to Member States to adapt the system to their administrative structures, while maintaining overall coherence through the coordination role of the European AI Office and the AI Committee.
Implementation timeline: key dates to remember
- August 2, 2025: designation of notifying authorities, market surveillance authorities, and single contact points. However, most Member States have not yet designated their competent authorities.
- End of 2025 – early 2026: first official designations of notified bodies for the certification of high-risk systems.
- August 2, 2026: full application of obligations for high-risk systems as provided by the regulation.
Toward governance built on trust
The AI Act does not merely define obligations: it creates an institutional ecosystem designed to frame innovation while ensuring safety, transparency, and respect for fundamental rights.
For companies, the key will be to anticipate compliance: identify the systems concerned, implement evaluation procedures, and establish early dialogue with competent national authorities.
Strong, clear, and coordinated governance at all levels constitutes the foundation for trustworthy and competitive artificial intelligence in Europe.
At Naaia, we support organizations in establishing ethical, flexible, and compliant governance in the field of artificial intelligence.
Contact our experts to help structure your European AI compliance strategy.