AI today encompasses a wide diversity of technologies, models, and use cases. This plurality makes a clear understanding of these technologies essential for organizations, so that they can grasp their impacts, identify the associated risks, and define appropriate frameworks for responsibility and governance.
Before AI solutions can be deployed, regulated, or effectively governed, their fundamental concepts must first be clarified. This article provides a concise overview of the main types of AI, offering clear, structured, and operational reference points.
1. The AI system: the foundation of the AI ecosystem
Before examining the different categories of AI in detail, it is necessary to focus on the central concept around which the entire European framework is built: the AI system.
This notion constitutes the anchor point of the regulatory framework, as it defines the scope of application of the requirements, responsibilities, and control mechanisms provided for by the regulation.
Legal definition of an AI system under the AI Act
According to Article 3(1) of the AI Act, an AI system means:
‘A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’
This definition highlights several structuring elements:
- Its machine-based nature (automation);
- The degree of autonomy;
- Possible adaptiveness after deployment;
- The inference capability; and
- The potential impact of the outputs produced by the system on physical or virtual environments.
In practice, the AI system is the primary object of regulation: risk classification, compliance obligations, controls, and sanctions apply to it.
An AI system may rely on one or more models, be open source or proprietary, and be general-purpose or specialized, without these elements affecting its qualification as an AI system.
| Example of an AI system: An AI-based candidate pre-screening system automatically analyzes CVs using an AI model in order to produce scores or recommendations. |
AI systems outside the scope of the AI Act
It should be specified that certain systems, although they may meet general technical criteria, fall outside the scope of the AI Act insofar as they satisfy neither the functional definition of an AI system within the meaning of the regulation nor the risk categories targeted by European regulation.
These situations concern in particular the following categories:
- Traditional mathematical optimization: this refers to systems aimed solely at improving or accelerating classical optimization methods, without learning or modification of the decision-making logic (e.g. accelerated physical simulations, parameter approximations, network optimization using established methods).
- Data processing through fixed instructions: these are tools relying exclusively on deterministic and predefined instructions, without modeling or reasoning, such as data sorting, filtering, or extraction via SQL, simple statistical calculations, or scripts with fixed rules (see the sketch after this list).
- Descriptive analysis, testing, and visualization: this refers to systems limited to data description, standard statistical tests, or the visualization of indicators, without producing recommendations, predictions, or decisions (dashboards, exploratory analyses, charts).
- Classic heuristic systems: this refers to programs based on fixed rules or heuristics, without learning capability or autonomous improvement, such as a game engine using a static evaluation function.
- Simple statistical rules: that is, systems using basic estimates (mean, static benchmark) without handling complex patterns and whose performance is comparable to traditional methods.
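For illustration, here is a minimal sketch (in Python, with hypothetical rule values) of the kind of fixed-instruction script described above: every rule is hard-coded by a human and nothing is inferred from the data, which is precisely why such tools fall outside the AI system definition.

```python
# Deterministic, fixed-rule data filtering: every threshold and rule is
# hard-coded by a human, nothing is learned or inferred from the data.
# (Hypothetical values, for illustration only.)
transactions = [
    {"id": 1, "amount": 120.0, "country": "FR"},
    {"id": 2, "amount": 15000.0, "country": "US"},
    {"id": 3, "amount": 980.0, "country": "DE"},
]

def flag_large_transfers(records, threshold=10_000.0):
    """Return records above a fixed threshold -- pure rule application."""
    return [r for r in records if r["amount"] > threshold]

print(flag_large_transfers(transactions))
# -> [{'id': 2, 'amount': 15000.0, 'country': 'US'}]
```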
Understanding an AI system also means not confusing it with its technical components, foremost among which is the AI model.
2. The AI model: the technical foundation of the system
Definition of an AI model
An AI model refers to a mathematical or computational representation obtained through a training process based on data and used to perform inference.
It enables the transformation of input data into outputs such as predictions, classifications, recommendations, or decisions, according to a learned function.
As such, it constitutes the algorithmic core of automated reasoning, without having, in itself, an operational purpose.
| Example of an AI model: A fraud detection model specifically trained to identify suspicious banking transactions based on historical data. |
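To make the notion of a learned function concrete, here is a minimal sketch in the spirit of the fraud-detection example above, using scikit-learn with synthetic, hypothetical data; a production model would of course be trained on real historical transactions.

```python
# Minimal sketch of an AI model as a learned function: a classifier is
# trained on (synthetic) historical transactions, then used for inference.
# Hypothetical data and features, for illustration only.
from sklearn.linear_model import LogisticRegression

# Features: [amount_in_eur, transactions_last_hour]; label: 1 = fraud.
X_train = [[50, 1], [40, 2], [9000, 15], [12000, 20], [30, 1], [8000, 12]]
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression().fit(X_train, y_train)  # training: the function is learned

# Inference: the learned function maps new inputs to outputs (predictions).
print(model.predict([[11000, 18]]))        # e.g. array([1]) -> flagged as suspicious
print(model.predict_proba([[11000, 18]]))  # class probabilities
```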
Definition of a general-purpose AI model
Generally, AI models are not directly targeted by the AI Act, insofar as they are considered fundamental components of AI systems.
Indeed, they are subject to specific regulation only when they exhibit characteristics leading them to be classified as general-purpose AI models, a distinct category within the AI Act. These are defined as:
‘An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market’ (Source: Article 3(63) of the AI Act)
| Example of a general-purpose AI model: A versatile language model, such as OpenAI’s GPT-4, capable of generating, summarizing, translating, or analyzing text, and of being integrated into many AI systems for varied uses. |
General-purpose AI models presenting systemic risks
Some general-purpose AI models present specific risks, known as systemic risks, defined as:
‘Risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain’ (Source: Article 3(65) of the AI Act)
| Example of a model presenting systemic risks: A large general-purpose AI model trained on massive volumes of textual, visual, and audio data and integrated into numerous online services (search engines, customer support, office tools, educational platforms) may present systemic risks within the meaning of the AI Act. |
According to the European Commission, general-purpose AI models trained using a cumulative amount of computation exceeding 10²⁵ floating-point operations (FLOPs) are presumed to present a systemic risk (Articles 3(67) and 51(2)).
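For a rough sense of scale, training compute is often approximated in the scaling-law literature by the rule of thumb ‘FLOPs ≈ 6 × parameters × training tokens’ (an assumption of this sketch, not a calculation method prescribed by the AI Act); the figures below are hypothetical:

```python
# Back-of-the-envelope comparison against the 10^25 FLOP presumption threshold.
# Uses the common "6 * N * D" approximation for transformer training compute
# (an assumption, not an AI Act calculation method); figures are hypothetical.
parameters = 70e9          # hypothetical 70-billion-parameter model
training_tokens = 15e12    # hypothetical 15 trillion training tokens

train_flops = 6 * parameters * training_tokens   # ~6.3e24 FLOPs
threshold = 1e25

print(f"Estimated training compute: {train_flops:.2e} FLOPs")
print("Presumed systemic risk" if train_flops >= threshold
      else "Below the presumption threshold")
```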
Independently of these thresholds, the European Commission may designate a model as presenting systemic risk based on the criteria set out in Annex XIII of the AI Act.
These models are subject to enhanced transparency obligations and specific risk assessments.
AI model and AI system: integration and responsibilities
Unlike the AI system, the AI model does not directly interact with the end user. It produces concrete effects only once integrated into a software environment, combined with data, interfaces, business rules, and organizational processes.
It is this integration that transforms a model into an operational AI system capable of influencing decisions or real-world environments.
The same model may be reused across several distinct AI systems, each pursuing different purposes and presenting different risk levels, usage contexts, and responsibilities. The nature of an AI system therefore depends not only on the model used, but also on how it is deployed and operated.
Focus: general-purpose AI models and the systems derived from them
Alongside AI systems based on specialized models, there are AI systems integrating general-purpose AI models: general-purpose AI systems. These are defined as:
‘An AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems’ (Source: Article 3(66) of the AI Act)
This capacity for generalization and reuse heightens governance, traceability, and accountability challenges, particularly when such models are deployed at scale or integrated into sensitive contexts.
Governance challenges
The distinction between model and system is decisive for AI governance.
Risks, obligations, and responsibilities do not arise from the model in isolation, but from its integration and use within a deployed system, as well as from the system’s purpose and usage context.
Understanding the role of the model as a technical component thus makes it possible to better map AI systems, identify technological dependencies, and structure appropriate governance.
3. The issue of open source in AI
Open-source AI models
According to Articles 53(2) and 54(6) of the AI Act, open-source AI models are models that:
- Are published under a free and open license allowing access, use, modification, and distribution of the model;
- Make public their parameters, including weights, information on the model architecture, and information on the model’s use;
- Are not subject to direct monetization, such as exclusive paid hosting (clarification provided by the Commission’s guidelines on the obligations of general-purpose AI model providers – Section 4.2.2. Absence of monetization).
The AI Act explicitly recognizes their role in innovation, while introducing specific obligations depending on use and risk level.
| Example of an open-source model: Mistral 7B by Mistral AI is an open-source language model published under an open license, with accessible weights and architecture. It can be used, modified, and integrated into many AI systems for various purposes (text generation, summarization, assistance, analysis). |
Open source and governance challenges
Once an open-source model is integrated into a deployed AI system, that system may be subject to the obligations of the regulation.
If a general-purpose AI model meets the open-source conditions outlined above, it remains subject in all cases to the following obligations:
- Establish a policy for compliance with EU copyright law, including identification and respect of rights reservations;
- Produce a summary of the content used for training.
However, if it does not present systemic risks, it may benefit from exemptions (technical documentation, information and documentation for downstream integrators, designation of a representative for providers established in third countries).
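As a rough illustration of how these layered obligations combine, here is a toy decision helper; the obligation labels are simplified paraphrases of the provisions cited above, not the regulation’s wording, and this is in no way legal advice:

```python
# Toy sketch: which GPAI-model obligations apply, depending on open-source
# status and systemic risk. Simplified paraphrases of the AI Act, not legal advice.
def gpai_obligations(open_source: bool, systemic_risk: bool) -> list[str]:
    obligations = [
        "copyright compliance policy (incl. rights reservations)",
        "public summary of training content",
    ]  # apply to all general-purpose AI models, open source or not
    if not (open_source and not systemic_risk):
        obligations += [
            "technical documentation",
            "information and documentation for downstream integrators",
            "authorised representative for third-country providers",
        ]  # exemptions only for open-source models without systemic risk
    if systemic_risk:
        obligations += ["systemic-risk assessment and mitigation"]
    return obligations

print(gpai_obligations(open_source=True, systemic_risk=False))
```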
Challenges for organizations
While the use of open-source AI components offers innovation opportunities, it also complicates AI system governance.
It makes it more difficult to ensure traceability of models and their evolution, the quality and consistency of documentation, and the assessment of risks related to bias, use cases, and potential impacts.
4. Chatbots: the conversational interface
Definition of a chatbot
A chatbot (or conversational assistant) is an AI system designed to simulate a conversation in a given channel and provide information, assistance, or a service. It may answer FAQs, check order status, recommend a product, or guide a user through a form.
Unlike an agent, a traditional chatbot pursues no objective of its own, does not plan strategies, does not reason across multiple steps, and is limited to responding to the messages it receives. It does not use internal chains of thought or adapt deeply to the business context. It is a conversational AI system, but not an agent.
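To make the contrast concrete, here is a deliberately simple sketch of such a reactive chatbot, with hypothetical intents and canned answers; note that it merely maps incoming messages to responses, with no goal, plan, or action of its own:

```python
# A traditional chatbot in miniature: it only reacts to the incoming message,
# holds no objective, plans nothing, and never acts beyond replying.
# Hypothetical intents and canned answers, for illustration only.
CANNED_ANSWERS = {
    "order status": "You can track your order under 'My account' > 'Orders'.",
    "return policy": "Items can be returned within 30 days of delivery.",
}

def reply(message: str) -> str:
    for intent, answer in CANNED_ANSWERS.items():
        if intent in message.lower():
            return answer
    return "Sorry, I did not understand. Could you rephrase?"

print(reply("Hi, what is your return policy?"))
```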
Regulatory challenges
Chatbots, recognized as AI systems by the European Commission, fully fall within the scope of the AI Act. As such, they are subject to the following obligations:
- Transparency (Article 50): clear information to users that they are interacting with an AI system, subject to strictly defined exceptions.
When deployed in sensitive contexts (HR, public services, healthcare, or education), a chatbot may be classified as a high-risk AI system under Articles 6 and 7 and Annexes I and III. This classification entails reinforced requirements, including:
- Risk management throughout the lifecycle (Article 9);
- Effective human oversight measures (Article 14);
- Technical documentation, record-keeping, and traceability obligations (Articles 11 and 12), as well as post-market monitoring (Article 72).
A chatbot is never a neutral tool: as soon as it generates responses through inference and can influence user behavior or decisions, it engages the responsibility of the actors who design, integrate, and operate it.
5. AI agents: from tool to autonomy
AI agents, in their common understanding, refer to AI systems presenting specific characteristics:
- They rely on an AI model that pursues a defined or open-ended objective, without the model having undergone additional development or significant modification;
- They are typically accessible through a studio in which users can edit their parameters;
- They are configured to automate a complex, contextualized task, make decisions, and execute actions without necessarily requiring human intervention.
AI agents relate to the notion of agency, meaning a system’s ability to:
- Act autonomously;
- Initiate actions;
- Plan sequences;
- Adapt to changing contexts;
- Pursue high-level objectives without continuous human supervision.
| Example of an AI agent: An AI agent may be an automated assistant tasked with sorting incoming emails. It analyzes each message, identifies its category (sales, support, urgency), and applies the appropriate action, such as archiving or creating a ticket. It thus performs a specific, predefined task without exceeding that role. |
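A minimal sketch of this email-sorting agent’s decide-and-act loop follows; categories and actions are hypothetical, and the keyword rule stands in for what would, in a real agent, be a call to an AI model:

```python
# Minimal decide -> act loop for the email-sorting agent described above.
# The keyword rule is a stand-in for a call to an AI model.
def classify(email: str) -> str:
    if "refund" in email.lower() or "broken" in email.lower():
        return "support"
    if "quote" in email.lower() or "pricing" in email.lower():
        return "sales"
    return "other"

ACTIONS = {
    "support": lambda e: print(f"Ticket created for: {e!r}"),
    "sales":   lambda e: print(f"Forwarded to sales: {e!r}"),
    "other":   lambda e: print(f"Archived: {e!r}"),
}

for email in ["Please send a pricing quote", "My device arrived broken"]:
    category = classify(email)   # decide
    ACTIONS[category](email)     # act, without human intervention
```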
6. Agentic AI: orchestration and complexity
Agentic AI extends and goes beyond the concept of an agent. According to the taxonomy proposed by Sapkota et al. (2025), agentic AI represents a paradigm shift compared to traditional AI agents. It is characterized in particular by:
- Collaboration between multiple agents within the same system;
- Dynamic decomposition of tasks into context-adapted sub-tasks;
- The existence of persistent memory enabling long-term use of historical information;
- Orchestrated autonomy, meaning structured and coordinated autonomy exceeding the capabilities of a single agent.
It is therefore a global system endowed with coordination and autonomy capabilities.
| Example of agentic AI: In a multi-agent system, each agent executes a specific sub-task to achieve the objective, and their efforts are coordinated through AI orchestration functionalities. |
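Schematically, the orchestration pattern can be sketched as follows; the agents and the task decomposition are hypothetical, and real agentic frameworks add persistent memory and model-driven planning on top of this skeleton:

```python
# Schematic agentic-AI pattern: an orchestrator decomposes a goal into
# sub-tasks and routes each one to a specialised agent, then aggregates.
# Hypothetical agents and decomposition logic, for illustration only.
def research_agent(task: str) -> str:
    return f"[facts gathered for: {task}]"

def writer_agent(task: str, context: str) -> str:
    return f"[draft for '{task}' based on {context}]"

def orchestrator(goal: str) -> str:
    # Dynamic task decomposition (here, a fixed two-step plan for brevity).
    sub_tasks = [f"research: {goal}", f"write: {goal}"]
    context = research_agent(sub_tasks[0])       # agent 1
    draft = writer_agent(sub_tasks[1], context)  # agent 2, reusing agent 1's output
    return draft

print(orchestrator("quarterly AI compliance report"))
```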
Governance challenges
AI agents and agentic systems raise major governance challenges:
- Coordination between agents may generate emergent behaviors and unforeseen effects that are difficult to anticipate;
- Increased autonomy complicates attribution of responsibility and the maintenance of effective human oversight;
- Model opacity limits transparency, explainability, and auditability of decisions.
These challenges make reinforced governance, alignment, and bias management mechanisms essential.
7. Why this inventory is essential for AI governance
The diversity of AI technologies makes a detailed understanding of the different types of AI deployed within organizations indispensable. This inventory is a prerequisite for any effective governance approach.
Precisely identifying the types of AI used makes it possible to determine applicable regulatory obligations, which vary according to system nature, purposes, and capabilities. It also facilitates a more accurate risk assessment, taking into account autonomy level, potential impact, and usage context for each system.
This understanding is also critical for defining and deploying appropriate technical, organizational, or human controls and ensuring their effectiveness over time. Finally, it enables the structuring of coherent, documented, and sustainable AI governance capable of evolving alongside technologies and use cases.
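In practice, such an inventory can start as a simple structured record per AI system; here is a minimal sketch with hypothetical fields, to be adapted to each organization’s governance framework:

```python
# Minimal sketch of an AI system inventory record supporting governance.
# Field names and values are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    system_type: str        # e.g. "chatbot", "agent", "scoring system"
    underlying_model: str   # e.g. "GPT-4", "Mistral 7B", "in-house model"
    purpose: str
    autonomy_level: str     # e.g. "reactive", "agentic"
    risk_class: str         # e.g. "minimal", "limited", "high"
    controls: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="HR candidate pre-screening",
        system_type="scoring system",
        underlying_model="in-house model",
        purpose="CV scoring and shortlisting",
        autonomy_level="reactive",
        risk_class="high",
        controls=["human review of every score", "bias monitoring"],
    ),
]
print(inventory[0])
```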
At Naaia, we support organizations in mapping, governing, and ensuring compliance across all their AI systems, regardless of type, autonomy level, or underlying model.