General-purpose AI models (GPAI) play a central role in innovation due to their ability to perform a multitude of tasks and to be easily integrated into various downstream systems. However, this generality imposes specific regulatory responsibilities on their providers. The European AI Regulation (AI Act) establishes a clear framework.
The obligations for general-purpose AI model providers came into effect on August 2, 2025. Are you ready to navigate this new compliance landscape?
This article breaks down the key obligations, critical thresholds, deadlines, and implementation paths, including adherence to codes of practice.
What is a general-purpose AI model (GPAI)?
Article 3(63) of the AI Act lists the factors that determine whether a model is a general-purpose AI model:
- It is trained using a large amount of data through large-scale self-supervision.
- It exhibits significant generality.
- It is capable of competently performing a wide range of distinct tasks.
- It can be integrated into a variety of downstream systems or applications.
Models used for research, development, or prototyping activities prior to their market release are excluded from this definition.
The European Commission, in its guidelines, takes an approach based on the amount of compute used to train the model. It proposes an indicative criterion: as a benchmark, a model may be presumed to be a GPAI if its training compute exceeds 10²³ FLOPs (floating-point operations) and it is capable of generating language (text or audio), text-to-image, or text-to-video output.
However, this is not an absolute rule:
- Models below this threshold may be included if they exhibit significant generality;
- Conversely, models exceeding this threshold may be excluded if they do not meet this generality criterion.
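The two compute thresholds described in this article can be sketched as a simple check. This is an illustrative sketch only, not legal advice: the thresholds and modalities come from the text above (and the systemic-risk presumption discussed later in this article), while the function and variable names are hypothetical. Crucially, the criterion is only indicative, as the list above explains.

```python
# Indicative thresholds cited in the Commission's guidelines and the AI Act.
# Illustration only: actual qualification also depends on "significant
# generality" and, for systemic risk, on possible Commission designation.
GPAI_COMPUTE_THRESHOLD = 1e23   # FLOPs, indicative GPAI benchmark
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, high-impact presumption (Article 51)

GENERATIVE_MODALITIES = {"text", "audio", "text-to-image", "text-to-video"}


def indicative_gpai(training_flops: float, modalities: set[str]) -> bool:
    """Indicative presumption only: a model below the threshold may still
    qualify if it shows significant generality, and a model above it may
    fall outside the definition if it lacks such generality."""
    return (training_flops > GPAI_COMPUTE_THRESHOLD
            and bool(modalities & GENERATIVE_MODALITIES))


def presumed_systemic_risk(training_flops: float) -> bool:
    """High-impact presumption; the Commission may also designate a model
    as systemic-risk by decision regardless of this threshold."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD
```

For example, a text-generating model trained with 2×10²³ FLOPs would meet the indicative GPAI criterion but not the systemic-risk presumption.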
Compliance timeline: what are the key deadlines?
The obligations for GPAI providers entered into force on August 2, 2025.
- For models placed on the market after August 2, 2025: Providers must comply immediately with the transparency and copyright-policy obligations.
- For models placed on the market before August 2, 2025: A compliance grace period is granted until August 2, 2027. Providers are not required to retrain or delete training data if this is technically impossible or disproportionate.
Although the obligations apply from August 2, 2025, the Commission will not have enforcement powers before August 2, 2026.
Obligations incumbent on GPAI providers
All general-purpose AI model providers are subject to a series of essential obligations mainly defined in Article 53 of the AI Act.
Transparency and documentation
Providers must develop and maintain technical documentation for the authorities, describing the model’s functioning and key characteristics.
They must also provide appropriate documentation to downstream providers, enabling them to integrate the model with full awareness and meet their own regulatory obligations.
Finally, a summary of the training data content must be made publicly available.
Copyright and training content transparency
How to ensure compliance with intellectual property law?
Providers must implement a copyright compliance policy ensuring that the data used for model training complies with applicable legislation.
The General-Purpose AI Code of Practice provides detailed guidance on how to meet this obligation.
Representatives for providers established in third countries
GPAI providers established in third countries must, by written mandate, designate a representative established in the Union (Article 54) before placing their model on the EU market.
This representative is mandated to verify the technical documentation, keep it at the disposal of the authorities for ten years, and cooperate with the AI Office.
Enhanced obligations for systemic-risk GPAIs (Articles 51 and 55)
General-purpose AI models presenting systemic risk are subject to additional obligations, even when released under open-source licenses. These models stand out for their high-impact capabilities, likely to cause significant effects on safety, fundamental rights, or society as a whole.
A model may be classified as presenting systemic risk either because of its high-impact capabilities (presumed when the total compute used for its training exceeds 10²⁵ FLOPs) or by decision of the European Commission.
Affected providers must notably:
- Assess and document risks associated with the model;
- Implement mitigation and monitoring measures to limit potential impacts;
- Promptly report serious incidents to competent authorities;
- Ensure a high level of cybersecurity to protect the model and its infrastructure.
Exemptions for open-source models: an essential nuance
Open-source model providers may be exempted from certain obligations if:
- The license allows access, use, modification, and distribution;
- The model is not directly monetized (no dual license or exclusive paid hosting);
- The model's parameters, including weights, architecture, and usage information, are made publicly available;
- The model does not present systemic risk.
Remaining obligations:
- Implement an EU copyright compliance policy.
- Publish a summary of training data.
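The exemption conditions above are cumulative, which the following sketch makes explicit. This is an illustration only, not legal advice: the four conditions and the two remaining obligations come from this article, while the class and field names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class OpenSourceModel:
    """Hypothetical representation of the four exemption conditions."""
    license_allows_access_use_modify_distribute: bool
    directly_monetized: bool   # e.g. dual licensing or exclusive paid hosting
    weights_and_architecture_public: bool
    systemic_risk: bool


def exemption_may_apply(m: OpenSourceModel) -> bool:
    """All four conditions must hold simultaneously. Even when the
    exemption applies, the copyright compliance policy and the public
    summary of training data remain mandatory."""
    return (m.license_allows_access_use_modify_distribute
            and not m.directly_monetized
            and m.weights_and_architecture_public
            and not m.systemic_risk)
```

Note the asymmetry: a single failed condition (for instance, a model designated as systemic-risk) removes the exemption entirely.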
Voluntary compliance: the Code of Practice
GPAI providers may rely on the General-Purpose AI Code of Practice, published on July 10, 2025, which covers the obligations under Articles 53 and 55 of the AI Act. Adherence to this code is voluntary.
Complying with the AI Act: rigorous management of GPAI obligations
Complying with GPAI obligations under the AI Act is not a mere formality: it demands proactive, structured, and rigorous management to navigate a complex regulatory framework effectively.