Imagine an AI system capable of analysing medical images to provide early diagnoses, or an AI embedded in a self-driving car. Behind their operation lies a crucial question: how can we ensure their safety, transparency, and compliance with European law? The answer lies in two words: harmonised standards.
With the upcoming enforcement of the AI Act for high-risk products, the European Union sets out clear requirements to protect its citizens while fostering innovation. For companies, applying harmonised AI standards is the simplest way to demonstrate compliance.
Harmonised AI Standards: A Guarantee of Compliance with the EU Framework
The AI Act requires that all high-risk AI systems meet a set of essential requirements. These requirements are technically translated into harmonised standards, developed with industry and market stakeholders and published in the Official Journal of the EU. Once applied, these standards will confer a presumption of conformity on the manufacturer: its products will be presumed to comply with the AI Act unless evidence to the contrary is found, greatly facilitating the CE marking process for AI systems.
Concrete Benefits:
- Cost reduction
- Interoperability of technologies
- Shorter time to market
- Lower legal burden
- Increased credibility with partners and regulators
- Clear understanding of regulatory requirements
- Integration into the value chain and global trade
Harmonised standards will give companies essential guidelines on how to assess and mitigate risks related to AI products, together with precise technical specifications that clarify the regulatory requirements.
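By way of illustration, the sketch below shows what a minimal risk-register entry for a high-risk AI system might look like in practice. It is a simplified, hypothetical example: the field names, scoring scale, and acceptance threshold are our own assumptions and are not taken from the AI Act or from any published harmonised standard.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: field names and scales are assumptions,
# not terminology from the AI Act or any published harmonised standard.

@dataclass
class RiskEntry:
    """One entry in a simple AI risk register (Article 9-style risk management)."""
    hazard: str                       # what could go wrong
    probability: int                  # 1 (rare) .. 5 (almost certain)
    severity: int                     # 1 (negligible) .. 5 (critical)
    affected_rights: list[str] = field(default_factory=list)  # e.g. non-discrimination
    mitigation: str = ""              # planned control or design change

    @property
    def score(self) -> int:
        # A common (assumed) convention: risk score = probability x severity.
        return self.probability * self.severity


entry = RiskEntry(
    hazard="False negative on a malignant lesion in a diagnostic imaging model",
    probability=2,
    severity=5,
    affected_rights=["right to health"],
    mitigation="Mandatory human review of low-confidence predictions",
)
print(entry.score)  # 10 -> above an (assumed) acceptance threshold, so mitigate
```

Harmonised standards are expected to formalise exactly this kind of structure: which hazards to consider, how to rate them, and when a residual risk is acceptable.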
ISO Standards and European Norms: Towards Strategic Alignment
ISO standards for AI, already internationally recognised, cover many dimensions: risk management, transparency, data governance, and more. EU standardisation bodies have reviewed existing standards that could help meet AI Act requirements and have used them as a basis for developing new harmonised standards. Still, on their own, these international standards are not always sufficient in the European context.
Companies using AI tools that fall within the scope of the AI Act must review the existing standards they can apply. They must also follow the work programme and the development of new harmonised standards in Europe that may affect their products.
A key example? ISO/IEC 42001, which covers AI management systems. It’s valuable for organisational governance but is not directly eligible as a harmonised standard:
- The AI Act focuses on product safety, while ISO 42001 focuses on AI management within organisations
- ISO 42001 provides limited coverage of record-keeping, treating it as an optional risk control (a minimal logging sketch follows this list)
- ISO 42001 defines risk broadly as the effect of uncertainty on objectives, while the European Commission uses a stricter notion, assessing risk in terms of its impact on safety and fundamental rights
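To make the record-keeping point concrete, here is a minimal, hypothetical sketch of the kind of automatic event logging Article 12 has in mind: every inference is recorded with a timestamp, the model version, and a reference to the input. The field names and file format are our own assumptions, not requirements quoted from the AI Act or a harmonised standard.

```python
import json
import time
import uuid

# Hypothetical illustration only: the event fields below are assumptions about
# the kind of automatic record-keeping Article 12 calls for; they are not
# taken from any published harmonised standard.

def log_inference_event(model_id: str, input_ref: str, output: dict, log_path: str) -> None:
    """Append one timestamped inference record to a JSON Lines file."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_id": model_id,        # which model version produced the output
        "input_ref": input_ref,      # reference to the input data, not the data itself
        "output": output,            # prediction and confidence
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")


log_inference_event(
    model_id="triage-model-v1.3",
    input_ref="study-0042",
    output={"label": "refer to radiologist", "confidence": 0.91},
    log_path="inference_events.jsonl",
)
```

This contrast is the point made above: under the AI Act, logging is a design requirement of the high-risk system itself, whereas ISO 42001 treats record-keeping as one optional control among many.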
The challenge is to better align the work of CEN/CENELEC with ISO to avoid redundancies. ISO standards should be leveraged but adapted to meet the EU AI Act’s specific requirements.
That said, companies already complying with international standards—such as ISO/IEC 42001—will be better positioned to adapt to the AI Act and benefit from their early efforts.
The 7 Harmonised AI Standards in Progress: What Do They Cover?
As of now, seven European standards are being drafted to support implementation of the AI Act.
| Covered Area | Relevant Articles |
| --- | --- |
| Trustworthiness: robustness, transparency, human oversight, logging, accuracy | Art. 12–15 |
| Risk management | Art. 9 |
| Quality management & post-market monitoring | Art. 17, 72 |
| Data governance and quality | Art. 10 |
| Bias management | Specific provisions |
| AI cybersecurity | Art. 15 |
| Conformity assessment | Art. 9–17, 72 |
Initially scheduled for August 2025, publication may be postponed to 2026.
Two Paths for Companies to Obtain CE Marking for AI Systems
Path 1: Application of Harmonised Standards
- Automatic presumption of conformity
- Simple and secure process
Path 2: Alternative Compliance Demonstration
- Requires use of other applicable standards or internal specifications
- Legally more complex
In both cases, access to the European market is possible, but only the first provides a clear and streamlined path.
Harmonised Standards: The Backbone of Ethical AI in Europe
The success of the European AI regulation lies in its practical implementation. Harmonised standards offer this operational translation, enabling companies to design AI products that are safe, fair, and compliant.
Adopting these standards means investing in trustworthy AI.
Are you developing an AI solution and aiming to anticipate the requirements of the upcoming European regulatory framework?
Contact our AI compliance experts for a personalised and pragmatic assessment.