AI Act and Harmonised Standards: role, development process, and state of progress of European AI standards

The AI Act adopts a risk-based approach: the greater the risks an AI system poses to people’s health, safety, or fundamental rights, the stricter the legal obligations that apply to it. This graduated logic forms the foundation of the new European framework of trust for AI.

To make these obligations operational, the regulation combines two complementary levels.

  • On the one hand, the regulation defines the essential requirements with which AI systems must comply, in particular in terms of safety and quality.
  • On the other hand, it refers to technical specifications and rules, known as harmonised standards, which detail how these requirements are to be implemented in practice and, where possible, translate certain qualitative concepts (for example, accuracy) into measurable criteria, as the sketch below illustrates.
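
To illustrate what such a translation can look like, the short sketch below turns a qualitative expectation (an “appropriate level of accuracy”, in the spirit of Article 15) into a measurable acceptance criterion. The metric, the threshold, and the data are illustrative assumptions, not values taken from the AI Act or from any draft standard.

```python
# A minimal sketch of turning "appropriate accuracy" into a measurable
# acceptance criterion. The metric, the 0.95 threshold, and the data are
# illustrative assumptions, not values prescribed by the AI Act or by any
# draft standard.

def accuracy(predictions: list[int], labels: list[int]) -> float:
    """Fraction of predictions that match the reference labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

ACCURACY_THRESHOLD = 0.95  # hypothetical acceptance criterion

predictions = [1, 0, 1, 1, 0, 1, 0, 0]  # model outputs on a validation set
labels = [1, 0, 1, 0, 0, 1, 0, 0]       # reference labels for the same set

score = accuracy(predictions, labels)
print(f"accuracy = {score:.3f}, criterion met: {score >= ACCURACY_THRESHOLD}")
```

In practice, the relevant metrics and thresholds depend on the intended purpose of the system and would typically be defined per use case and documented in its technical documentation.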

This interplay between regulatory requirements and harmonised standards is the central mechanism through which stakeholders can demonstrate compliance with the obligations of the AI Act.

What are harmonised standards?

According to Article 2(1)(c) of Regulation (EU) No 1025/2012, harmonised standards are European standards adopted on the basis of a request issued by the Commission for the application of Union harmonisation legislation. The implementation of the AI Act therefore necessarily relies on the development of such standards.

Currently under development, they will play a key role in the implementation of the AI Act. In particular, they will define:

  • How to identify and manage AI-related risks (see the sketch after this list)
  • How to set up and operate an effective quality management system
  • How to measure accuracy and other relevant performance metrics of AI systems
  • How to ensure that AI systems remain trustworthy throughout their lifecycle
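
As a purely illustrative example of the first point above, risk management commonly relies on a risk register in which each identified risk is rated and linked to a mitigation measure. The structure below, including the severity-times-likelihood rating, reflects common practice and is an assumption for illustration, not content taken from prEN 18228.

```python
# Illustrative sketch of a risk register, a structure commonly used in risk
# management. The fields and the severity x likelihood rating are assumptions
# about common practice, not content taken from prEN 18228.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    severity: int    # 1 (negligible) to 5 (critical)
    likelihood: int  # 1 (rare) to 5 (almost certain)
    mitigation: str

    @property
    def rating(self) -> int:
        """Simple severity x likelihood score used to prioritise risks."""
        return self.severity * self.likelihood

register = [
    Risk("Biased outputs for under-represented groups", 4, 3,
         "Bias testing on representative datasets before release"),
    Risk("Performance drift after deployment", 3, 4,
         "Continuous monitoring with alerting thresholds"),
]

# Address the highest-rated risks first.
for risk in sorted(register, key=lambda r: r.rating, reverse=True):
    print(f"[{risk.rating:>2}] {risk.description} -> {risk.mitigation}")
```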

By ensuring that the requirements are applied uniformly across the European Union, these standards aim to guarantee that AI systems are designed and used according to common criteria of safety, reliability, and trust, regardless of where they are deployed. They therefore constitute an essential lever for facilitating the demonstration of compliance with the AI Act and for giving economic actors legal certainty in a regulatory environment that is still taking shape.

The phases of development of European AI standards

In general, the development of European AI standards is structured around the following steps:

1. Standardisation request from the European Commission

The process begins with a standardisation request issued by the European Commission, which defines what the standards must cover. In the case of the AI Act, this request concerns in particular the obligations applicable to high-risk AI systems. The request is then sent to the European Standardisation Organisations (ESOs): CEN, CENELEC, and ETSI.

2. Development of a draft standard

Once the ESOs have delivered a favourable opinion on the standardisation request, i.e. accepted it, drafting work can begin.

For standards related to the AI Act, this work is carried out within CEN and CENELEC, in the Joint Technical Committee JTC 21, which is organised into five working groups (WGs). A draft standard is assigned to one of these groups, within which technical experts from national standardisation bodies (NSBs, for example AFNOR in France or DIN in Germany), as well as other stakeholders, draft the text.

Under the responsibility of a Project Leader, the work is conducted according to a consensus-based approach; during this phase, experts may also refer to existing international standards, such as ISO/IEC standards, in order to support and guide the work.

A working draft then circulates at least once for information and comments, which are discussed and resolved within the working group before the text is sent to public enquiry.

3. Public enquiry

When a project is deemed sufficiently mature and meets the standardisation request, it is transmitted to the national standardisation bodies (NSBs) for the so-called public enquiry phase. During this stage, NSBs organise a national consultation, conduct a vote, and transmit detailed comments collected from their stakeholders.

The experts working on the project review this feedback, propose amendments to the text, and attempt to resolve the comments, including where the positions of different countries diverge, by seeking consensus.

4. Formal vote

Once the revisions have been made, the updated draft is submitted to a formal vote by the NSBs.

A positive vote leads to the approval of the standard at European level, after which only minor editorial corrections remain possible. In the event of a negative vote, corrective actions are required, depending on the feedback received.

5. Publication by CEN/CENELEC

When the formal vote is positive, the standard is published by CEN/CENELEC. The final version is then made available, generally via the online shops of the national standardisation bodies (NSBs).

6. Evaluation by the Commission and citation in the Official Journal

In the final phase, the European Commission evaluates the published standard, verifying that it complies with the requirements of the AI Act and is consistent with the standardisation request.

If this evaluation is positive, the Commission adopts an implementing act and cites the standard in the Official Journal of the European Union (OJEU).

From that moment on, the standard becomes a harmonised standard. Complying with it then confers a presumption of conformity with the corresponding legal requirements, which greatly facilitates demonstrating compliance with the AI Act.

Harmonised standards under development

Below is an overview of the harmonised standards currently under development within the framework of the AI Act:

| AI Act articles concerned | Corresponding standard |
| --- | --- |
| Article 17(1); Article 11(1); Article 72 | prEN 18286 Quality management system for the European AI regulation |
| Article 9 | prEN 18228 AI risk management |
| Article 10 | prEN 18284 Quality and governance of datasets in AI |
| Article 10(2)(f)–(g) | prEN 18283 Concepts, measures, and requirements for bias management in AI systems |
| Articles 12–14 | prEN 18229-1 AI trustworthiness framework – Part 1: Logging, transparency, and human oversight |
| Article 15 | prEN 18229-2 AI trustworthiness framework – Part 2: Accuracy and robustness |
| Article 15 | prEN 18282 Cybersecurity specifications for AI systems |
| Article 43 | prEN 18285 AI conformity assessment framework |
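
To give a concrete, if simplified, idea of one of these topics: Article 12 requires high-risk AI systems to automatically record events over their lifetime. The sketch below shows one possible shape for such an event log; the field names and the JSON format are illustrative assumptions, not specifications taken from prEN 18229-1.

```python
# Illustrative sketch of automatic event recording for traceability, in the
# spirit of Article 12. The field names and JSON format are assumptions for
# illustration, not specifications taken from prEN 18229-1.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system_events.log", level=logging.INFO,
                    format="%(message)s")

def log_event(event_type: str, system_id: str, details: dict) -> None:
    """Append a timestamped, structured record of a system event."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event_type": event_type,
        "details": details,
    }
    logging.info(json.dumps(record))

log_event("inference", "credit-scoring-v2",
          {"input_id": "req-001", "decision": "refer_to_human", "confidence": 0.62})
log_event("human_override", "credit-scoring-v2",
          {"input_id": "req-001", "operator": "analyst-17", "outcome": "approved"})
```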

State of progress of the standards

Initially planned for 2025, these standards are now delayed, and their publication has been postponed until 2026.

  • The QMS standard remains the most advanced; its publication is envisaged by the third quarter of 2026. It is currently at the public enquiry stage, which opened on 30 October for a period of 12 weeks.

  • The cybersecurity standard, which should already have entered the public enquiry phase, must be reworked following a negative opinion from the European Commission, as the draft does not provide sufficiently clear and operational technical specifications with regard to Article 15.

  • The other standards are expected to enter the public enquiry phase from February 2026, with publication targeted for the end of 2026. The data-related standards, however, are not expected to reach this stage until mid-2026, with publication then expected in the second quarter of 2027.

  • It should be noted that the publication of a standard by CEN/CENELEC does not automatically mean its citation in the Official Journal of the European Union. This may occur several weeks, or even several months, later. Only from that moment does the standard confer a presumption of conformity for the legal requirements it covers.

  • In addition to these harmonised standards, a further standard currently under development, “Overview and architecture of standards supporting the AI regulation,” provides a structured overview of this set of standards.


🚀 Act now to prepare for compliance

The AI Act applies even before the publication of harmonised standards: anticipating their content is essential to avoid costly redesigns and to secure your compliance.

We support AI stakeholders in AI Act compliance, the structuring of quality management systems, risk and data management, and the preparation of audits and conformity assessments.

Contact us to secure your AI systems and benefit from structured support towards compliance.
