The AI Act establishes a legal framework to regulate the use of AI systems within the European Union. Since its adoption, the text has continued to evolve, notably through guidelines, harmonised standards and adjustments to the implementation timetable, all intended to make the Regulation easier for companies to apply in practice.
Between requirements that already apply and clarifications still underway, AI stakeholders must navigate a regulatory framework that is dynamic, at times complex, yet foundational for the future of responsible innovation.
1. Official elements already in force
Scope of the regulation and key definitions
The European Commission has published several official guidelines aimed at providing legal certainty in the interpretation of the AI Act. These documents provide essential clarifications on:
- The definition of an AI system, based on the system’s ability to infer, generate outputs or make decisions influencing physical or digital environments;
- Prohibited AI practices, as defined by Regulation (EU) 2024/1689, in particular those infringing fundamental rights (cognitive manipulation, social scoring, exploitation of vulnerabilities).
These texts currently constitute stable legal references for the interpretation of the Regulation by companies and national supervisory authorities.
National governance and the role of competent authorities
The AI Act requires each Member State to designate at least one notifying authority and at least one market surveillance authority responsible for monitoring its application and implementation.
However, the Regulation grants Member States broad discretion in organising these authorities: responsibilities may be concentrated in a single authority or distributed among several, although each Member State must designate one market surveillance authority to act as the single point of contact.
Their roles are as follows:
- Market surveillance authorities: they ensure the supervision of AI systems placed on the market or used within the national territory, in particular those classified as high-risk.
- Notifying authorities: they structure the national certification ecosystem by designating notified bodies.
The European whistleblowing tool
In order to strengthen post-market monitoring, the European Commission has established a secure reporting platform accessible to anyone concerned (employees, users, service providers, third parties).
Reports may relate to:
- Breaches of the obligations laid down by the AI Act;
- Serious incidents presenting a risk to health, safety, fundamental rights or the environment.
The platform guarantees anonymous and secure processing of reports and supports a preventive approach aimed at the rapid correction of risks.
The Commission’s position on AI agents
AI agents, which are capable of acting autonomously without necessarily being under continuous human supervision, fall fully within the scope of the AI Act.
The Commission confirms that:
- An AI agent may be classified as a high-risk AI system if it meets the criteria of Article 6;
- The applicable obligations depend on the context of use, in particular in sensitive sectors such as public security, financial services or human resources management.
The autonomy of a system therefore does not exempt it from the Regulation; on the contrary, it may be an aggravating factor from a risk perspective.
Guidelines on General-Purpose AI (GPAI) models
Since August 2025, providers of general-purpose AI (GPAI) models have also been subject to enhanced requirements. The GPAI guidelines clarify:
- The criteria for qualifying GPAI models and GPAI models presenting systemic risks;
- Providers’ obligations: risk management, comprehensive technical documentation, risk monitoring;
- Exemptions and obligations for open-source model providers.
Exemptions for open-source models
Open-source AI models benefit from targeted exemptions, in particular regarding technical documentation, the provision of information to integrators of the relevant models, and the designation of an authorised representative for providers established outside the EU.
These exemptions apply provided that the model is distributed under a free and open-source licence, without direct monetisation, and that its parameters are made public.
Certain obligations nevertheless remain, in particular with regard to copyright and transparency concerning training data.
Code of practice for General-Purpose AI (GPAI) models
Endorsed by the European Commission in July 2025, the GPAI Code of Practice is a voluntary alignment tool intended to help providers comply with the AI Act.
It is divided into three chapters:
- Transparency: structuring of the information to be provided;
- Copyright: compliance with European copyright legislation;
- Safety and security: enhanced requirements for GPAI models presenting systemic risk.
Although not legally binding, this Code constitutes a strategic reference for demonstrating a proactive compliance approach.
2. Elements under clarification or consultation
Proposal for a Code of Practice on the marking and labelling of AI-generated content
On 17 December 2025, the European Commission published a first draft Code of Practice on the marking and labelling of AI-generated or AI-manipulated content, as part of the implementation of Article 50 of the AI Act.
This voluntary Code aims to support generative AI providers and professional deployers in anticipating future transparency obligations, in particular with regard to machine-readable marking and the labelling of deepfakes.
A consultation is open until 23 January 2026, with a view to final adoption of the Code by June 2026, ahead of the entry into application of the legal obligations in August 2026.
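To make the notion of machine-readable marking more concrete, here is a minimal sketch of one possible approach: embedding provenance metadata directly in a generated image file. It assumes a PNG output and borrows the IPTC DigitalSourceType vocabulary; the function name and the extra `ai_generated` flag are illustrative assumptions, and nothing here reflects a format prescribed by the draft Code, which is still under consultation.

```python
# Illustrative sketch only: embeds a machine-readable "AI-generated" marker
# in a PNG file's metadata. Requires Pillow (pip install Pillow).
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def mark_as_ai_generated(input_path: str, output_path: str) -> None:
    """Copy an image, adding machine-readable provenance metadata."""
    image = Image.open(input_path)
    metadata = PngInfo()
    # The IPTC DigitalSourceType vocabulary defines a standard value for
    # media generated by an AI model ("trainedAlgorithmicMedia").
    metadata.add_text(
        "Iptc4xmpExt:DigitalSourceType",
        "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    )
    metadata.add_text("ai_generated", "true")  # hypothetical convenience flag
    image.save(output_path, pnginfo=metadata)


if __name__ == "__main__":
    mark_as_ai_generated("generated.png", "generated_marked.png")
```

In practice, providers are more likely to rely on dedicated provenance standards such as C2PA content credentials or robust watermarking, which are harder to strip than plain metadata; the sketch above simply shows what "machine-readable" means at its simplest.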
Guidelines currently under development
Several guidelines are still being drafted, in particular:
- Guidelines on the transparency of AI systems subject to specific obligations, expected by mid-2026;
- Progressive and thematic consultations concerning high-risk AI systems, which may extend until August 2026.
Guidelines announced but not yet published
The European Commission has also announced forthcoming guidelines, including:
- The practical implementation of the classification of high-risk systems;
- The precise modalities for incident reporting by AI system providers;
- The practical implementation of obligations concerning providers and deployers of high-risk systems (notably the notion of substantial modification).
Guidelines relating to transparency requirements are also expected.
These texts have been announced, but their final content is not yet known.
Harmonised standards and technical challenges
Harmonised standards aim to translate the legal requirements of the Regulation into technical specifications.
Some are currently under consultation, such as those relating to:
- Cybersecurity: on 7 November 2025, the Commission indicated that the draft standard, as currently presented, does not yet provide sufficiently clear and operational technical specifications to meet the requirements of Article 15(5) of the AI Act. A revision of the draft standard is currently being prepared;
- Quality management systems (QMS): in the public enquiry phase since October 2025.
3. Remaining uncertainties: timetable and potential postponements
The AI Act timetable and potential changes
The initial timetable provided for the application of compliance obligations for high-risk systems from August 2026 (Annex III) and August 2027 (Annex I). However, the European Commission has recently proposed a targeted postponement, under which:
- Obligations for high-risk systems listed in Annex III would apply from December 2027;
- Obligations for high-risk systems falling under Annex I would apply from August 2028.
Note: this proposal is still pending approval by the European Parliament and the Council of the EU.
On the road to compliance: key next steps
Despite these developments and adjustments, one thing is clear: compliance with the AI Act will not be treated as a one-off exercise, but as a continuous process.
AI stakeholders must not only monitor regulatory developments but also take the necessary measures to anticipate the application of the AI Act, while optimising the security, risk management and transparency of their systems.
Reporting mechanisms, harmonised standards and codes of practice will play a central role in this dynamic.
Anticipating today to secure tomorrow
At Naaia, we can support you today in securing your compliance.
Drawing on our technical and regulatory expertise, we help you implement the requirements of the Regulation, from risk management to the compliance of your AI systems.