Post-market monitoring: an important AI Act requirement

When our customers asked us, “How can we set up a post-market monitoring system for our AI products?”, we realized that few providers were prepared. Yet this obligation of the European regulation on artificial intelligence (AI Act) is far from trivial: it structures the entire post-deployment life cycle of a high-risk system.

Today, we are sharing what we have learned and how we have equipped our clients to respond to it.

A new chapter for regulated AI systems

With the entry into force of the AI Act, the requirements do not stop at market entry. On the contrary, they intensify. Article 72 of the regulation introduces a little-known but central obligation: post-market monitoring of high-risk artificial intelligence systems.

This continuous monitoring is a direct response to a critical issue: ensuring that AI systems remain safe, ethical and effective even after their deployment. And we must all anticipate this now.

What is post-market monitoring in the AI Act?

According to Article 72, providers must put in place a post-market monitoring system that is proportionate to the risks of the AI system concerned. This system must:

  • Actively collect and analyze usage data
  • Include interactions with other AI systems (if relevant)
  • Provide ongoing assessment of compliance
  • Be based on a post-market monitoring plan (a template is expected from the European Commission by February 2026)
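As an illustration only, the elements listed above could be captured in a simple record structure. The field names below are our own assumptions, not terms from the regulation or from the forthcoming Commission template:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch of what a single post-market monitoring entry
# might hold, following the Article 72 elements listed above.
@dataclass
class MonitoringRecord:
    system_name: str
    collected_on: date
    usage_data: dict                 # actively collected usage and performance data
    interacting_ai_systems: list = field(default_factory=list)  # if relevant
    compliance_notes: str = ""       # ongoing assessment of compliance

record = MonitoringRecord(
    system_name="credit-scoring-v2",
    collected_on=date(2025, 3, 1),
    usage_data={"requests": 12450, "error_rate": 0.012},
    interacting_ai_systems=["fraud-detection-v1"],
    compliance_notes="No drift detected against the validation baseline.",
)
```

In practice, such records would feed the documented post-market monitoring plan rather than live in ad-hoc spreadsheets.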

AI and Serious Incidents: an enhanced transparency obligation

Article 73 of the AI Act introduces a clear obligation: we must report any serious incident to the competent authorities in the relevant Member State as soon as we become aware of it.

What are the reporting deadlines?

The deadlines vary according to the seriousness of the incident:

  • 15 days maximum for any serious incident (and immediately if possible)
  • 10 days maximum (and immediately if possible) if the incident has resulted in the death of a person
  • 2 days maximum (and immediately if possible) if it involves:
    • Widespread infringement
    • A serious and irreversible disruption of critical infrastructure

If a full report cannot be made in time, we can submit an initial incomplete report, followed by a supplement.
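The deadline rules above can be sketched as a small helper. This is illustrative only; the category labels are our own, not legal terms from the regulation:

```python
# Outer reporting limits described in Article 73, as summarized above.
# In every case the regulation also asks for a report "immediately if
# possible"; these values are only the maximum deadlines in days.
def reporting_deadline_days(category: str) -> int:
    deadlines = {
        "serious_incident": 15,        # any serious incident
        "death": 10,                   # the incident caused a person's death
        "widespread_infringement": 2,  # widespread infringement
        "critical_infrastructure": 2,  # serious, irreversible disruption
    }
    return deadlines[category]
```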

What happens after the report?

Once the incident has been reported, it is imperative to:

  • launch an internal investigation without delay,
  • assess the risks,
  • propose appropriate corrective measures.

We must also cooperate fully with the national market surveillance authorities, which in turn are required to:

  • notify the incident immediately to the relevant public authorities or bodies,
  • take concrete measures within 7 days of receiving the report.

The European Commission also plans to publish guidelines by August 2, 2025 to help stakeholders comply with these requirements. A regular evaluation of this framework will also be put in place.

What if we are already subject to other regulations?

The AI Act takes specific cases into account:

If we are already subject to equivalent reporting obligations in other European regulatory frameworks, or if our AI systems are integrated into systems covered by other regulations, specific rules apply to avoid duplication or inconsistencies.

This level of rigor transforms the management of AI risks into a structured, transparent, and interoperable process at the European level.

Read also: AI systems prohibited by the AI Act

Why this is a turning point for AI governance

Post-market monitoring is not a simple administrative formality. It represents a major cultural change in the way we deploy, monitor and correct our AI systems. In particular, it enables:

  • Continuous improvement of models based on real usage
  • Reduction of legal and ethical risks
  • Better management of drifts or biases in production
  • Ability to react quickly in the event of an incident

It is not a regulatory constraint: it is an opportunity to align performance and responsibility.

What we are implementing with Naaia

At Naaia, we have designed our platform to transform the requirements of the AI Act into concrete and documented actions. Rather than leaving our clients to deal with complex regulations alone, we provide them with ready-to-use tools, clear templates and a collaborative interface to manage AI compliance over time.

1. Implement a post-market monitoring system

We help providers establish a structured post-market monitoring system, as required by Article 72 of the AI Act. This system allows for:

  • A systematic collection of usage data for deployed AIs
  • An active analysis of performance and possible deviations
  • Full traceability to document any changes
  • A solid basis for assessing the ongoing compliance of the system

Proposed key action: Establish and document a post-market monitoring system

Template included: A post-market monitoring plan, operational today, pending the official Commission template (expected in 2026)

2. Manage the reporting of serious incidents

In accordance with Article 73, we propose a clear methodology to help providers (and deployers, where applicable) report any serious incidents within the specified time limits:

  • Centralization of reports to the competent authorities
  • Documented history of incidents, corrective measures and decisions
  • Clear procedures for initiating an internal investigation after reporting

Actions for providers:

  • Report serious incidents to the authorities
  • Communicate the corrective measures implemented
  • Set up a system for monitoring incidents and their treatment

Action for deployers:

  • Report serious incidents observed in their context of use

Template included:

→ An incident management procedure, modular and adapted to the requirements of the regulation

3. Manage AI vigilance with our Event Tracker

Our AI Vigilance – Event Tracker module is designed to report, track and supervise all critical events related to the life of an AI system:

  • Reporting of serious incidents
  • Detection of substantial modifications
  • Reporting of risks:
    • Harm to the health of natural persons
    • Harm to safety
    • Harm to fundamental rights
    • Prohibited discrimination

Each event can be linked to alert notifications, suspension or withdrawal actions, or internal governance decisions.
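For illustration, the event categories above could be modeled as a simple enumeration. These names are our own sketch, not the actual Naaia Event Tracker API:

```python
from enum import Enum

# Hypothetical labels for the critical-event categories listed above.
class EventType(Enum):
    SERIOUS_INCIDENT = "serious incident"
    SUBSTANTIAL_MODIFICATION = "substantial modification"
    RISK_TO_HEALTH = "harm to the health of natural persons"
    RISK_TO_SAFETY = "harm to safety"
    RISK_TO_FUNDAMENTAL_RIGHTS = "harm to fundamental rights"
    PROHIBITED_DISCRIMINATION = "prohibited discrimination"
```

Each such event would then carry links to the follow-up actions mentioned above (alert notifications, suspension or withdrawal, governance decisions).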

Our goal: to enable internal and external stakeholders to stay informed and aligned, ensuring clear and continuous oversight of AI products.

Anticipate rather than suffer

Post-market monitoring is a centerpiece of AI governance in Europe. It is not a regulatory detail, but a mechanism of long-term trust.

At Naaia, we help AI teams transform this constraint into a performance lever: greater rigor, reduced risk and, above all, better control of production systems.

Ready to manage the compliance of your high-risk AIs?

We have already supported players in setting up their post-market monitoring system.

Contact us to find out how Naaia can help you anticipate the AI Act obligations – simply, starting today. → Talk to a Naaia expert.
