AI Act 2026 timeline: compliance obligations and challenges for companies

The AI Act, the European Union's regulation on artificial intelligence, provides for a phased entry into application, staggered according to the type of AI system concerned.

Having entered into force in August 2024, the regulation progressively rolls out its main obligations between February 2025 and August 2027, offering organizations a key transition period to anticipate their compliance.

The timeline below, as defined in the regulation, sets out the main deadlines to remember:

  • 2 February 2025: Prohibition of unacceptable AI practices
  • 2 August 2025: Requirements for general-purpose AI models (GPAI)
  • 2 August 2026: General application of the Regulation (including transparency obligations and requirements for high-risk AI systems under Annex III)
  • 2 August 2027: Requirements for high-risk AI systems under Annex I

Requirements and impacts according to categories of operators

The provisions of the AI Act apply to all operators, whether companies, public authorities, organizations, or other actors, who place on the market, deploy, or use an AI system or model in the European Union.

The determining criterion is not the place of establishment of the actor, but the fact that the AI system is marketed or put into service in the EU, or that its effects concern persons located in the territory of the Union.

1. High-risk AI systems

Most of the AI Act’s obligations concerning high-risk AI systems fall on providers, who are subject to enhanced requirements throughout the lifecycle of their systems. These obligations notably concern:

  • Risk management,
  • Data governance,
  • Documentation and traceability,
  • Human oversight,
  • High standards of accuracy, robustness, and cybersecurity, and
  • A quality management system.

Before any placing on the market, providers must also comply with procedural obligations, such as conformity assessment, CE marking, and registration in a European database.

Other operators (deployers, importers, distributors, and authorized representatives) are also subject to obligations, but to a lesser extent than providers. These mainly concern:

  • Proper use of systems,
  • Verification of certain compliance requirements,
  • Cooperation with competent authorities, and
  • Where applicable, carrying out impact assessments or keeping logs generated by AI systems.

2. Transparency obligations

Providers of AI systems subject to transparency obligations must:

  • Inform individuals concerned when they are interacting with an AI system, unless this is obvious in the context

  • Ensure, when systems generate or manipulate content (images, audio, video, or text), that such content is identifiable as artificially generated or manipulated, by means of effective and reliable technical solutions

Deployers must:

  • Inform individuals concerned when AI systems are used for emotion recognition or biometric categorization

  • Indicate the artificial nature of deepfakes or AI-generated or AI-manipulated content when disseminating such content, notably in matters of public interest, subject to the exceptions provided for (artistic or satirical works, or certain specific uses)

State of play and what to anticipate in 2026

The year 2026 is shaping up to be a pivotal year for the implementation of the AI Act.

Several major developments are expected, notably the publication of harmonized standards, European Commission guidelines, as well as discussions around the Omnibus package, whose debates could influence the timeline and modalities of application of the regulation.

1. Harmonized standards

The harmonized standards developed by CEN and CENELEC are expected to be published during 2026, likely not before the second half of the year and possibly not until its end; no precise date has yet been communicated.

They are intended to translate the requirements of the AI Act into concrete technical frameworks, in order to facilitate their operational implementation by organizations.

These standards will cover key themes such as quality management systems, risk management, cybersecurity, data governance and quality, as well as trust frameworks applicable to AI systems.

For certain obligations, applying these standards will confer a presumption of conformity, offering a clearer and more secure path to demonstrating the regulatory alignment of AI systems.

2. Guidelines

The year 2026 will mark a key step in clarifying many still abstract concepts of the AI Act.

The European Commission plans to publish guidelines aimed at specifying the practical application of the classification of high-risk AI systems, the transparency requirements provided for in Article 50, as well as the requirements and obligations applicable to high-risk AI systems, both for providers and deployers.

These orientations will also address the notification of serious incidents by providers, responsibilities throughout the AI value chain, as well as the rules applicable in the event of substantial modification of systems.

Other guidelines are expected on key operational aspects, notably the provision of a fundamental rights impact assessment template, a voluntary post-market monitoring template for high-risk AI systems, as well as on the elements of the quality management system, with simplified compliance modalities for SMEs and small mid-caps (SMCs).

Finally, the Commission should also clarify the articulation between the AI Act and other European legislation, in particular European data protection law.

3. The “Digital Omnibus” package

Against a backdrop of growing European regulatory requirements in digital and artificial intelligence matters, the European Commission has launched an effort to simplify and clarify the AI Act and make its application more operational, in particular for economic actors, notably by giving them more time to prepare through the publication of guidelines.

It is within this framework that, on 19 November 2025, the Commission officially presented the “Digital Omnibus” package, a set of measures intended to lighten certain obligations of the AI Act, adjust implementation modalities, and strengthen the coherence of the European regulatory framework.

The project notably provides for the following simplification measures:

  • Adapt the timeline of the rules applicable to high-risk AI systems according to the actual availability of the necessary standards and tools: the requirement to label generated content in a machine-detectable way (watermarking), under the transparency obligations, would be brought forward to apply from 2 February 2027, while the requirements for high-risk AI systems referred to in Annex III would be postponed to 2 August 2027 and those referred to in Annex I to 2 December 2028.

  • Extend the reliefs provided for SMEs to small mid-caps, including simplified requirements in terms of technical documentation and a proportionate approach in the application of sanctions;

  • Entrust the Commission and Member States with promoting AI literacy, rather than imposing a general and poorly defined obligation on providers and deployers; training obligations for deployers of high-risk systems remain;

  • Make post-market monitoring more flexible by removing the obligation of a harmonized plan;

  • Lighten registration obligations for providers of AI systems used in high-risk areas when these systems perform only narrow or procedural tasks;

  • Centralize the supervision of many AI systems (notably those based on general-purpose AI models or integrated into very large platforms and search engines) within the AI Office;

  • Facilitate compliance with data protection rules by authorizing, under appropriate safeguards, the processing of sensitive data for the purposes of detecting and correcting biases;

  • Extend the use of regulatory sandboxes and real-world testing, particularly in strategic sectors such as automotive, and prepare the establishment of a European sandbox by 2028;

  • Clarify the articulation of the AI Act with other Union law instruments and adjust certain procedures in order to improve their effectiveness.

It should be emphasized that this initiative constitutes a first step in the legislative process: the proposal will be examined and debated by the competent European institutions. This interinstitutional dialogue will be decisive for the adoption and final content of the mechanism.

The project will therefore have to be adopted by the European Parliament and the Council of the European Union before it can enter into force.

  • To modify the timeline applicable to high-risk AI systems, the Omnibus will have to be adopted before 2 August 2026.

If it is adopted in time, the obligations will be postponed to allow the publication of harmonized standards and give organizations time to concretely adjust their organization and tools to achieve compliance.

  • Regarding the majority of transparency obligations, the Omnibus does not provide for any substantive modification or change of timeline.

How these elements will concretely facilitate compliance in 2026

While the AI Act may seem complex today, the clarifications expected in 2026, notably the European Commission’s guidelines, should significantly facilitate organizations’ compliance and reduce the uncertainty linked to changes in the regulatory timeline, particularly amid the discussions around the Omnibus package. Anticipating compliance with the AI Act thus helps secure the development, placing on the market, and deployment of artificial intelligence systems.

More than a legal constraint, compliance with the AI Act becomes a strategic lever of trust and competitiveness. By integrating the requirements of risk management, transparency, and responsible AI from the design stage, organizations develop more trustworthy AI products, improve their acceptability to stakeholders, and durably differentiate themselves as trusted actors on the European market.


Turn the AI Act into a competitive advantage

Our team helps you transform a regulatory constraint into a strategic lever, by building compliance that is robust, proportionate, and fully operational, adapted to your organization and your business challenges.

👉 Contact us now for a personalized diagnostic and prepare your organization for a responsible integration of AI.
