AI Act: what legal regime for open-source general-purpose AI models?

The AI Act establishes a harmonised legal framework applicable to AI systems as well as to general-purpose AI models (GPAI), that is, AI models trained on large amounts of data using self-supervision at scale, capable of performing a wide range of distinct tasks and of being integrated into a wide variety of downstream systems or applications.

In this context, the European legislator has provided for a specific regime for certain GPAI models distributed as open source. This regime is based on targeted exemptions, intended to take into account the specific characteristics and benefits of open source, while maintaining a minimum baseline of requirements in terms of transparency, governance and accountability.

These exemptions are, however, neither general nor automatic: they are conditional upon compliance with cumulative conditions laid down by the Regulation, which requires a rigorous analysis of the regime applicable to each model.

I. Definition of an open-source model within the meaning of the AI Act

Recitals 102 and 104 of the AI Act describe open-source models as:

  • AI models published under a free and open-source licence
  • Allowing access to, use, modification and distribution of the model,
  • And for which the following are made public:
    • The parameters, including the model weights,
    • Information relating to the model architecture,
    • Information necessary to understand its use.

It is therefore not merely a matter of access to the code, but rather of full technical and functional transparency.

II. The cumulative exemption conditions provided for by the AI Act

As a matter of principle, general-purpose AI models are subject to a set of specific requirements laid down in particular in Articles 53 and 54 of the AI Act, reflecting their broad scope of application and their potential cross-cutting impact.

However, the Regulation provides for a specific regime of targeted exemptions for certain models distributed as open source, subject to compliance with strictly defined cumulative conditions.

In order to benefit from these exemptions, open-source models must simultaneously meet two conditions, specified in the Guidelines on the scope of obligations applicable to general-purpose AI models, §§ 72–92.

1. The determining criterion: absence of systemic risk

Where a general-purpose AI model presents a systemic risk, the fact that it is distributed under a free and open-source licence is not sufficient to exempt its provider from compliance with the obligations laid down by the AI Act.

As a reminder, under Article 3(65) of the AI Act, a systemic risk means:

A risk specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach or actual or reasonably foreseeable adverse effects on public health, safety, public security, fundamental rights or society as a whole, which can be propagated at scale throughout the value chain.

This criterion therefore conditions access to the exemption regime itself, independently of compliance with other requirements.

2. Conditions relating to the open-source qualification

An effective free and open-source licence

The licence must allow, without unjustified restriction:

  • Access to the model,
  • Its use,
  • Its modification,
  • Its redistribution.

Where there is a substantial limitation of any of these rights, the model cannot claim the benefit of the exemption provided for by the AI Act.

Absence of direct monetisation

Two key criteria are explicitly mentioned:

  • No commercial dual licensing,
  • No exclusive paid hosting of the model.

In other words, a model distributed under an open-source licence but serving as the basis for a closed commercial offering, or whose access or use is conditioned on payment, cannot benefit from the exemption.

Technical and functional transparency of the model

The following must be made public:

  • The parameters, including the model weights,
  • Information relating to the model architecture,
  • Information necessary to understand its use.

These elements must be provided in a format, and with a degree of clarity and precision, that enables the model to be accessed, used, modified and distributed.
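To make the cumulative nature of these conditions concrete, the minimal sketch below models them as a simple eligibility check. It is an illustration only: the class and field names (GPAIModel, eligible_for_open_source_exemptions, and so on) are hypothetical, and the check is no substitute for the case-by-case legal analysis of each model and its licence.

```python
from dataclasses import dataclass


@dataclass
class GPAIModel:
    """Hypothetical description of a general-purpose AI model release."""
    has_systemic_risk: bool             # high-impact capabilities within the meaning of the AI Act
    licence_allows_access: bool         # the licence permits access to the model
    licence_allows_use: bool
    licence_allows_modification: bool
    licence_allows_redistribution: bool
    commercial_dual_licensing: bool     # a paid licence offered alongside the open one
    exclusive_paid_hosting: bool        # the model is usable only through a paid hosted service
    weights_public: bool                # parameters, including weights, made public
    architecture_public: bool           # information on the model architecture made public
    usage_info_public: bool             # information necessary to understand its use made public


def eligible_for_open_source_exemptions(m: GPAIModel) -> bool:
    """Sketch of the cumulative conditions described above (Articles 53(2) and 54(6))."""
    no_systemic_risk = not m.has_systemic_risk
    effective_licence = all([
        m.licence_allows_access,
        m.licence_allows_use,
        m.licence_allows_modification,
        m.licence_allows_redistribution,
    ])
    no_direct_monetisation = not (m.commercial_dual_licensing or m.exclusive_paid_hosting)
    transparency = all([m.weights_public, m.architecture_public, m.usage_info_public])
    # The conditions are cumulative: failing any one of them removes the benefit of the exemptions.
    return no_systemic_risk and effective_licence and no_direct_monetisation and transparency
```

Even where every condition is met, the obligations described in Section IV below (copyright compliance policy and training-content summary) continue to apply.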

III. Exemptions granted to eligible open-source models

Once the cumulative conditions are met, providers of open-source models benefit from a regime of exemptions from the obligations laid down by the Regulation:

  1. Exemption from technical documentation: providers are exempt from the obligation to draw up and keep up to date the technical documentation normally required for general-purpose AI models.

  2. Exemption from transmission of documentation to integrators (downstream providers integrating GPAI models into their own systems): they are also exempt from the obligation to draw up, maintain and make available information and documentation for providers of AI systems wishing to integrate the general-purpose AI model into their AI systems.

  3. Exemption from designation of a representative in the Union: for providers established in third countries, the AI Act in principle provides for the designation of an authorised representative in the European Union. Eligible open-source models are exempt from this obligation.

IV. Obligations that remain despite the exemptions

The exemptions provided for by the AI Act in favour of certain open-source general-purpose AI models do not amount to a complete release from the Regulation's obligations.

The European legislator has sought to maintain a minimum baseline of cross-cutting obligations, aimed at guaranteeing a sufficient level of transparency, legal compliance and governance, regardless of the applicable exemption regime.

In this respect, two major obligations continue to apply to providers of open-source AI models eligible for exemptions:

1. Implementation of a European Union copyright compliance policy

Providers of open-source AI models must establish and implement a formal policy for compliance with European Union copyright law, covering the entire life cycle of the model and, in particular, the compilation and use of training datasets.

This policy must in particular make it possible:

  • To identify content protected by copyright likely to be used in training data;
  • To take into account and respect rights reservations expressed by rightholders, including opt-out mechanisms, neighbouring rights and other applicable restrictions;
  • To ensure minimum traceability of choices made regarding the collection, selection and use of data.

This obligation is of central importance, insofar as it directly engages the governance of training data and exposes providers to significant legal risks in the event of non-compliance.
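By way of illustration, one machine-readable opt-out signal commonly relied on in practice is a robots.txt directive addressed to a crawler's user agent. The minimal sketch below, which assumes a hypothetical crawler name (ExampleTrainingBot), shows the kind of check a data-collection pipeline could run before fetching a page; an actual compliance policy would also cover other reservation mechanisms and document the decisions taken.

```python
from urllib import robotparser
from urllib.parse import urlparse

TRAINING_CRAWLER_AGENT = "ExampleTrainingBot"  # hypothetical crawler user agent


def fetch_allowed(url: str, user_agent: str = TRAINING_CRAWLER_AGENT) -> bool:
    """Check a site's robots.txt before collecting a page as training data.

    robots.txt is only one possible machine-readable rights reservation; a full
    copyright compliance policy would track other opt-out mechanisms as well and
    record why each source was included or excluded.
    """
    parts = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    try:
        rp.read()
    except OSError:
        # If the reservation signal cannot be read, err on the side of exclusion.
        return False
    return rp.can_fetch(user_agent, url)
```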

2. Publication of a summary of the content used for training

Providers are also required to draw up and make publicly available a summary of the content used to train the model. This requirement aims to strengthen the transparency of general-purpose AI models, without imposing exhaustive or sensitive disclosure of the underlying datasets.

This summary must make it possible:

  • To provide an overview of the types of content used (sources, categories, nature of the data);

  • To promote understanding of the general characteristics of the model and its potential limitations;

  • To reconcile transparency requirements with the protection of trade secrets, sensitive data and the legitimate interests of providers.

The summary obligation thus forms part of a balanced approach, intended to improve information for authorities, integrators and the public, while preserving the operational and economic constraints specific to open-source models.
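Article 53(1)(d) of the AI Act provides that this summary is to follow a template made available by the AI Office. Purely by way of illustration, and without reproducing that template, the sketch below shows the kind of high-level, per-source overview a provider might keep internally, structured around the points above; all names, categories and figures are hypothetical.

```python
# Hypothetical internal overview a provider might maintain per data source
# before drafting the public training-content summary (not the AI Office template).
training_content_overview = {
    "model": "example-gpai-model-v1",                 # hypothetical model name
    "data_sources": [
        {
            "source_type": "public web crawl",         # sources
            "content_categories": ["news articles", "technical documentation"],  # categories
            "nature_of_data": "text",                  # nature of the data
            "rights_reservations_respected": True,
            "approximate_share_of_training_data": "60%",
        },
        {
            "source_type": "licensed third-party dataset",
            "content_categories": ["scientific publications"],
            "nature_of_data": "text",
            "rights_reservations_respected": True,
            "approximate_share_of_training_data": "40%",
        },
    ],
    # No raw documents and no exhaustive listing: the public summary remains an
    # overview, which is what allows trade secrets and sensitive data to be preserved.
}
```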

V. What this concretely implies for AI stakeholders

AI models distributed as open source are neither outside the scope of the AI Act, nor automatically compliant solely by virtue of their mode of distribution.

The use of a free and open-source licence does not, as such, dispense with an in-depth legal analysis of the model and its conditions of use.

In this respect, the development, provision or integration of open-source AI models involves, in particular:

  • A rigorous legal qualification of the model, in order to determine its status under the categories provided for by the AI Act and to assess the possible existence of a systemic risk;

  • An analysis of the modalities of distribution and monetisation, covering applicable licences, conditions of access to the model and commercial uses likely to affect eligibility for exemptions;

  • Clear governance of training data, including compliance with European Union copyright law and consideration of rights reservations;

  • The ability to document the residual compliance required by the AI Act, in particular with regard to obligations maintained despite the exemptions.

In practice, these requirements compel open-source AI stakeholders to adopt a structured and proactive approach to compliance, integrating regulatory issues from the design, publication and operation phases of the models.


AI Act & open source: turning regulatory constraint into a controlled advantage

The entry into force of the AI Act requires organisations developing, publishing or integrating open-source AI models to adopt a structured approach to regulatory compliance.

Open source does not constitute an automatic exemption: the legal qualification of models, governance of training data and compliance with residual obligations become key issues.

👉 Discover our AI management platform, designed to turn regulatory requirements into levers of control, transparency and trust.
