Responsible AI Glossary


Algorithm

An algorithm is the description of a sequence of steps used to obtain a result from input elements. For
an algorithm to be implemented by a computer, it must be expressed in a computer language, in the
form of software (often also referred to as an “application”). Software generally combines a number of
algorithms: for data entry, calculation of results, display, communication with other software, etc.
Some algorithms have been designed in such a way that their behavior evolves over time, depending
on the data supplied to them. These “self-learning” algorithms belong to the research field of expert
systems and AI. They are used in a growing number of fields, from traffic prediction to medical image
analysis.


Source : CNIL


Ex : Mathematical algorithms make it possible to combine the most diverse types of information to
produce a wide variety of results: simulating the spread of influenza in winter, recommending books to
customers on the basis of choices already made by other customers, comparing digital images of faces
or fingerprints, autonomously piloting automobiles or space probes, and so on.
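
The following minimal Python sketch makes this concrete: a fixed sequence of steps that recommends books from choices already made by other customers. The data and function names are illustrative only.

from collections import Counter

def recommend(customer_items, all_baskets, top_n=3):
    """Suggest items that co-occur with the customer's items in other customers' baskets."""
    counts = Counter()
    for basket in all_baskets:
        if set(basket) & set(customer_items):      # step 1: keep baskets sharing an item with the customer
            counts.update(i for i in basket if i not in customer_items)  # step 2: count the other items
    return [item for item, _ in counts.most_common(top_n)]  # step 3: return the most frequent ones

baskets = [["book_a", "book_b"], ["book_a", "book_c"], ["book_b", "book_c"]]
print(recommend(["book_a"], baskets))  # ['book_b', 'book_c']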

AI literacy

Means skills, knowledge and understanding that allow providers, deployers and affected persons,
taking into account their respective rights and obligations in the context of this Regulation, to make an
informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of
AI and possible harm it can cause;


Source : AI Act – Article 3(56)

AI Office

Means the Commission’s function of contributing to the implementation, monitoring and supervision of
AI systems and general-purpose AI models, and AI governance, provided for in Commission Decision
of 24 January 2024; references in this Regulation to the AI Office shall be construed as references to
the Commission.


Source : AI Act – Article 3(47)

AI product

In the context of the Naaia Solution, the term “AI product” refers to either an AI system or an AI model.

AI regulatory sandbox

Means a controlled framework set up by a competent authority which offers providers or prospective
providers of AI systems the possibility to develop, train, validate and test, where appropriate in real-
world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under
regulatory supervision.


Source : AI Act – Article 3(55)


Ex : In 2023, CNIL launched a regulatory sandbox to support AI projects aimed at improving public
services. The selected projects will benefit from personalized support over several months and the
expertise of the CNIL, in particular its new AI department, on emerging legal and technical issues.

AI system (“AIS”)

Means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions
that can influence physical or virtual environments.
Systems that employ the following AI techniques are considered AI systems:
— machine learning methods
— logic- and knowledge-based approaches
Systems that are outside the scope of the AI system definition:
— systems for improving mathematical optimization
— basic data processing
— systems based on classical heuristics
— simple prediction systems

Source : AI Act – Article 3(1) ; Commission Guidelines on the definition of an artificial intelligence
system established by Regulation (EU) 2024/1689 (AI Act) – Annex : section 5

 

Autonomy

The second element of the definition [of an AI system] refers to the system being ‘designed to operate
with varying levels of autonomy’. Recital 12 of the AI Act clarifies that the terms ‘varying levels of
autonomy’ mean that AI systems are designed to operate with ‘some degree of independence of
actions from human involvement and of capabilities to operate without human intervention’.
The notions of autonomy and inference go hand in hand: the inference capacity of an AI system (i.e., its
capacity to generate outputs such as predictions, content, recommendations, or decisions that can
influence physical or virtual environments) is key to bringing about its autonomy.
Central to the concept of autonomy is ‘human involvement’ and ‘human intervention’ and thus human-
machine interaction. At one extreme of possible human-machine interaction are systems which are
designed to perform all tasks through manually operated functions. At the other extreme are systems
that are capable of operating without any human involvement or intervention, i.e. fully autonomously.
The reference to ‘some degree of independence of action’ in recital 12 AI Act excludes systems that
are designed to operate solely with full manual human involvement and intervention. Human
involvement and human intervention can be either direct, e.g. through manual controls, or indirect, e.g.
through automated systems-based controls which allow humans to delegate or supervise system
operations.

Ex: A system that requires manually provided inputs to generate an output by itself is a system with
‘some degree of independence of action’, because the system is designed with the capability to
generate an output without this output being manually controlled, or explicitly and exactly specified by a
human. Likewise, an expert system following a delegation of process automation by humans that is
capable, based on input provided by a human, to produce an output on its own such as a
recommendation is a system with ‘some degree of independence of action’.
The reference in the definition of an AI system in Article 3(1) AI Act to a ‘machine-based system that is
designed to operate with varying levels of autonomy’ underlines the ability of the system to interact
with its external environment, rather than a choice of a specific technique, such as machine learning, or
model architecture for the development of the system.
Therefore, the level of autonomy is a necessary condition to determine whether a system qualifies as
an AI system. All systems that are designed to operate with some reasonable degree of independence
of actions fulfil the condition of autonomy in the definition of an AI system.

Source : AI Act – Recital 12 ; Commission Guidelines on the definition of an artificial intelligence
system established by Regulation (EU) 2024/1689 (AI Act) – Annex : paragraph 14, 15, 16, 17, 18, 19,
and 20

Biometric categorisation system

Means an AI system for the purpose of assigning natural persons to specific categories on the basis of
their biometric data, unless it is ancillary to another commercial service and strictly necessary for
objective technical reasons;

Source : AI Act – Article 3(40)

Ex : A security application uses a biometric categorisation system to identify employees based on their
fingerprints and automatically assign them to different levels of access within the company. This
ensures that only authorised people can enter certain sensitive areas.

Biometric data

Means personal data resulting from specific technical processing relating to the physical, physiological
or behavioural characteristics of a natural person, such as facial images or dactyloscopic data.

Source : AI Act – Article 3(34)

Ex : An AI system for secure access control to certain premises uses fingerprints to authenticate
employees. Here, the biometric data used are fingerprints. Another example is an AI system that
recognizes emotions from the voice; here, the biometric data is the voice of an identified or identifiable
person.

CE marking

Means a marking by which a provider indicates that an AI system is in conformity with the requirements
set out in Chapter III, Section 2 and other applicable Union harmonisation legislation providing for its
affixing.

Source : AI Act – Article 3(24)

Ex : A company is developing facial recognition software to be used by public institutions for security
reasons. Before marketing this software, the company must ensure that it complies with all the security,
transparency and data protection requirements defined by the AI Act. Once the software has been
certified as complying with these requirements, it can be CE marked. This CE marking indicates to end-
users that the software has been assessed and found to comply with EU standards, thus guaranteeing
its reliability and security.

Conformity assessment

Means the process of demonstrating whether the requirements set out in Chapter III, Section 2 relating
to a high-risk AI system have been fulfilled.

Source : AI Act – Article 3(20)

Ex : A company developing an AI system to detect bank fraud must follow a risk management process,
test the system under real-world conditions, and provide technical documentation detailing the system's
cybersecurity measures and performance. This documentation is submitted to a certification authority to
verify compliance before the system can be used by banks.

Conformity assessment body

Means a body that performs third-party conformity assessment activities, including testing, certification
and inspection.

Source : AI Act – Article 3(21)

Deceptive techniques

Deceptive techniques deployed by AI systems should be understood to involve presenting false or
misleading information with the objective or the effect of deceiving individuals and influencing their
behaviour in a manner that undermines their autonomy, decision-making and free choices.
An example of a deceptive technique that may be deployed by AI is an AI chatbot that impersonates a
friend or relative of a person with a synthetic voice and pretends to be that person, causing scams and
significant harm.


Source : Commission Guidelines on prohibited artificial intelligence practices established by Regulation
(EU) 2024/1689 (AI Act) – Annex : paragraphs 70 and 73

Deep fake

Means AI-generated or manipulated image, audio or video content that resembles existing persons,
objects, places, entities or events and would falsely appear to a person to be authentic or truthful;

Source : AI Act – Article 3(60)

Ex : A photo in which Angela Merkel's face is replaced by that of Donald Trump. In another example, a
video is circulating on social networks showing a politician giving a controversial speech. However, in
reality, this video is a “deep fake” created by an AI, using authentic images and sounds of the politician
to manipulate his speech. This highly realistic fake content can mislead viewers into believing that the
politician actually said those words, when that is not the case.

Deep learning

A subset of machine learning that utilises layered architectures (neural networks) for representation
learning. AI systems based on deep learning can automatically learn features from raw data, eliminating
the need for manual feature engineering. Due to the number of layers and parameters, AI systems
based on deep learning typically require large amounts of data to train, but can learn to recognise
patterns and make predictions with high accuracy when given sufficient data.


Source : Commission Guidelines on the definition of an artificial intelligence system established by
Regulation (EU) 2024/1689 (AI Act) – Annex : paragraph 38
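
Ex : A minimal sketch, assuming NumPy is available, of the layered architecture described above: a two-layer neural network that learns the XOR function from raw examples by gradient descent. Sizes, values and names are illustrative only.

import numpy as np

# Two-layer network: the hidden layer learns features from the raw inputs,
# removing the need for manual feature engineering (illustrative sketch).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR labels

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)            # hidden layer parameters
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)            # output layer parameters

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                             # layer 1: learned representation
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))             # layer 2: predicted probability
    g_out = p - y                                        # gradient of cross-entropy loss
    g_h = (g_out @ W2.T) * (1 - h ** 2)                  # backpropagate through tanh
    W2 -= 0.1 * h.T @ g_out
    b2 -= 0.1 * g_out.sum(0)
    W1 -= 0.1 * X.T @ g_h
    b1 -= 0.1 * g_h.sum(0)

print(p.round(2))  # approaches [[0], [1], [1], [0]] as the network learns the pattern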

Deployer

Means a natural or legal person, public authority, agency or other body using an AI system under its
authority except where the AI system is used in the course of a personal non-professional activity.


Source : AI Act – Article 3(4)

Ex : A store uses an intelligent heating management and monitoring tool, equipped with sensors that
adjust the temperature according to the time of day, to optimize customer comfort and energy efficiency.
Using this tool, the store acts as a deployer of this AI system.

Distributor

Means a natural or legal person in the supply chain, other than the provider or the importer, that makes
an AI system available on the Union market.

Source : AI Act – Article 3(7)

Ex : A French company buys an artificial intelligence software package developed by an American
company. The French company then resells this software to European companies, without making any
technical modifications to the software. In this case, the French company acts as a distributor, making
the AI system available on the European Union market.

Diversity, non-discrimination and fairness

Means that AI systems are developed and used in a way that includes diverse actors and promotes
equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair
biases that are prohibited by Union or national law.

Source : AI Act – Recital 27

Downstream provider

Means a provider of an AI system, including a general-purpose AI system, which integrates an AI
model, regardless of whether the AI model is provided by themselves and vertically integrated or
provided by another entity based on contractual relations.


Source : AI Act – Article 3(68)


Ex : A company specializing in the development of AI-driven virtual assistants for various applications,
such as customer service, personal productivity and connected home systems, acts as a downstream
provider. It offers its virtual assistant platform to businesses and consumers alike. In this example, this
technology company integrates AI models into its platform.

Emotion recognition system

Means an AI system for the purpose of identifying or inferring emotions or intentions of natural persons
on the basis of their biometric data;
The notion of ‘emotion recognition system’ referred to in this Regulation should be defined as an AI
system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis
of their biometric data. The notion refers to emotions or intentions such as happiness, sadness, anger,
surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction and amusement. It does
not include physical states, such as pain or fatigue, including, for example, systems used in detecting the state of fatigue of professional pilots or drivers for the purpose of preventing accidents. This does
also not include the mere detection of readily apparent expressions, gestures or movements, unless
they are used for identifying or inferring emotions. Those expressions can be basic facial expressions,
such as a frown or a smile, or gestures such as the movement of hands, arms or head, or
characteristics of a person’s voice, such as a raised voice or whispering.

Source : AI Act – Article 3(39) and Recital 18

EU declaration of conformity

A declaration drawn up by the provider for each high-risk AI system, attesting to its compliance with the
requirements of the AI Act. This declaration, which must be machine-readable and signed, must be kept
for ten years after the system has been placed on the market or put into service. It must be translated
for the competent authorities of the concerned Member States and contain all the information set out in
Annex V. Where high-risk AI systems are subject to other Union harmonisation legislation which also
requires an EU declaration of conformity, a single EU declaration of conformity shall be drawn up in
respect of all Union law applicable to the high-risk AI system. A copy of the EU declaration of
conformity shall be submitted to the relevant national competent authorities upon request.

The EU declaration of conformity referred to in Article 47, shall contain all of the following information:
1. AI system name and type and any additional unambiguous reference allowing the identification and
traceability of the AI system;


2. The name and address of the provider or, where applicable, of their authorised representative;


3. A statement that the EU declaration of conformity referred to in Article 47 is issued under the sole
responsibility of the provider;


4. A statement that the AI system is in conformity with this Regulation and, if applicable, with any other
relevant Union law that provides for the issuing of the EU declaration of conformity referred to in Article
47;

5. Where an AI system involves the processing of personal data, a statement that that AI system
complies with Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680;


6. References to any relevant harmonised standards used or any other common specification in relation
to which conformity is declared;


7. Where applicable, the name and identification number of the notified body, a description of the
conformity assessment procedure performed, and identification of the certificate issued;


8. The place and date of issue of the declaration, the name and function of the person who signed it, as
well as an indication for, or on behalf of whom, that person signed, a signature.

Source : AI Act – Article 47 and Annex V

Floating-point operation or “FLOP”

Means any mathematical operation or assignment involving floating-point numbers, which are a subset
of the real numbers typically represented on computers by an integer of fixed precision scaled by an
integer exponent of a fixed base.

Source : AI Act – Article 3(67)

Ex : When weather forecasting software calculates future temperatures, it uses floating-point
operations. For example, to determine the average temperature between 23.5°C, 24.1°C and 22.8°C,
the software adds up these floating-point values and divides the result by three. These calculations
provide the precise results needed for reliable forecasts.
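
Spelled out as code, the example above takes three floating-point operations. (The AI Act uses cumulative training compute measured in FLOPs, for instance the 10^25 FLOP threshold of Article 51, as an indicator of a general-purpose AI model's capabilities.)

# The temperature-averaging example, written as individual floating-point operations.
temps = [23.5, 24.1, 22.8]

s = temps[0] + temps[1]   # FLOP 1: addition
s = s + temps[2]          # FLOP 2: addition
avg = s / 3.0             # FLOP 3: division

print(avg)  # ≈ 23.4667; the trailing digits reflect the fixed precision of floats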

Fundamental rights

These are the essential rights and values enshrined in the Charter of Fundamental Rights of the
European Union and in Article 2 of the Treaty on European Union:
● Respect for human dignity
● Liberty
● Democracy
● Equality
● Solidarity 
● Citizenship
● Justice
● Rule of law
● Respect for human rights, including the rights of persons belonging to minorities

General-purpose AI (“GPAI”) model

Means an AI model, including where such an AI model is trained with a large amount of data using self-
supervision at scale, that displays significant generality and is capable of competently performing
a wide range of distinct tasks regardless of the way the model is placed on the market and that can be
integrated into a variety of downstream systems or applications, except AI models that are used for
research, development or prototyping activities before they are placed on the market.

Source : AI Act – Article 3(63)

Ex : A natural language processing model such as GPT-4 is a general-purpose AI model that can be
used to write articles, generate document summaries, answer questions, translate texts and much
more. This model can be integrated into various applications, such as virtual assistants, translation
software or AI-assisted writing tools, and is available on the market for commercial use.

Harmonised standard

Means a harmonised standard as defined in Article 2(1), point (c), of Regulation (EU) No 1025/2012;

Source : AI Act – Article 3(27)
Regulation (EU) No 1025/2012 on European standardisation defines a harmonised standard as “a
European standard adopted on the basis of a request made by the Commission for the application of
Union harmonisation legislation;”

Ex : One harmonised standard could be EN 301 549, which specifies accessibility requirements for ICT
products and services. This standard was developed at the request of the European Commission to
ensure that websites and mobile applications are accessible to all, including people with disabilities.

High-risk AI system

An AI system is considered high-risk if its use could have a significant impact on the fundamental
rights, health, safety or legal interests of natural persons.
High-risk AI systems are principally those which:


1) Are themselves products subject to EU regulations requiring third-party conformity assessment
before they are placed on the market or put into service. This includes areas such as medical
devices, industrial machinery and vehicles.


2) Are intended to be used as safety components in products subject to the same regulations
mentioned above.


3) Are specifically listed in Annex III of the AI Act, covering one of the following areas:
– Biometrics (in so far as the use of the AIS is permitted under relevant Union or national law);
– Critical infrastructure (e.g., supply of water, gas, heating or electricity);
– Education and vocational training (e.g., student admission or assessment systems);
– Employment, workers’ management and access to self-employment (e.g., recruitment software);
– Access to and enjoyment of essential private services and essential public services and benefits;
– Law enforcement (in so far as the use of the AIS is permitted under relevant Union or national law);
– Migration, asylum and border control management (in so far as the use of the AIS is permitted under relevant Union or national law);
– Administration of justice and democratic processes.

High-risk AI systems must meet strict requirements for risk management, data quality, transparency,
human oversight and accuracy. The obligations concerning them become applicable 24 months after
the entry into force of the AI Act (2 August 2026).
However, for high-risk AIS already covered by EU sectoral legislation, the relevant obligations become
applicable 36 months after the entry into force of the AI Act (2 August 2027).

Source : AI Act – Article 6 ; Article 113 ; Annex I ; Annex III
AI systems that pose “high risks” to basic rights must undergo pre- and post-conformity
assessments, and in addition to adhering to ethics, the relevant statutory requirements must be
considered.
Source : SDAIA AI Ethics Principles

Inference of emotions or intentions

Inferring means deducing information through analytical and other processes carried out by the system
itself. In such a case, the information about the emotion is not solely based on data collected on the
natural person, but is inferred from other data, including through machine learning approaches that
learn from data how to detect emotions.

Source : Commission Guidelines on prohibited artificial intelligence practices established by Regulation
(EU) 2024/1689 (AI Act) – Annex : paragraph 246

Language model

Statistical model of the distribution of linguistic units (e.g. letters, phonemes, words) in a natural
language. A language model can, for example, predict the next word in a sequence of words. Large
Language Models (LLMs) are models with a large number of parameters (generally of the order of a
billion weights or more), such as GPT-3, BLOOM, Megatron NLG, Llama or PaLM.


Source : CNIL
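
Ex : A minimal sketch of the statistical idea: estimate the distribution of next words from counts in a corpus and predict the most likely continuation. This is a toy illustration; LLMs learn billions of parameters rather than storing raw counts.

from collections import Counter, defaultdict

# Bigram language model: estimate P(next word | current word) from a toy corpus.
corpus = "the cat sat on the mat the cat ate".split()

bigrams = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigrams[current][nxt] += 1        # count each observed (word, next word) pair

def predict_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (seen twice after 'the', versus 'mat' once)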

Limited risk

AI systems that pose limited risks, such as technical programs related to function, development, and
performance, are subject to the application of the AI ethics principles mentioned in this document (AI
Ethics Principles).


Source : SDAIA AI Ethics Principles

Little or no risk

There are no restrictions on AI systems that pose little or no risk, such as spam filters, but it is
recommended that these systems be ethically compliant.

Source : SDAIA AI Ethics Principles

Machine-based

The term ‘machine-based’ refers to the fact that AI systems are developed with and run on machines.
The term ‘machine’ can be understood to include both the hardware and software components that
enable the AI system to function. The hardware components refer to the physical elements of the
machine, such as processing units, memory, storage devices, networking units, and input/output
interfaces, which provide the infrastructure for computation. The software components encompass
computer code, instructions, programs, operating systems, and applications that handle how the
hardware processes data and performs tasks.


All AI systems are machine-based, since they require machines to enable their functioning, such as
model training, data processing, predictive modelling and large-scale automated decision making. The
entire lifecycle of advanced AI systems relies on machines that can include many hardware or software
components. The element of ‘machine-based’ in the definition of AI system underlines the fact that AI
systems must be computationally driven and based on machine operations.
The term ‘machine-based’ covers a wide variety of computational systems.


Ex : The currently most advanced emerging quantum computing systems, which represent a significant
departure from traditional computing systems, constitute machine-based systems, despite their unique
operational principles and use of quantum-mechanical phenomena, as do biological or organic systems,
so long as they provide computational capacity.

Source : Commission Guidelines on the definition of an artificial intelligence system established by
Regulation (EU) 2024/1689 (AI Act) – Annex : paragraph 11, 12, and 13

Machine learning

Machine learning includes a large variety of approaches enabling a system to ‘learn’, such
as supervised learning, unsupervised learning, self-supervised learning and reinforcement learning.
See also deep learning, which is a subset of machine learning.


Source : Commission Guidelines on the definition of an artificial intelligence system established by
Regulation (EU) 2024/1689 (AI Act) – Annex : paragraph 32, and 38
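
Ex : A minimal sketch of supervised learning, the first approach listed above, assuming NumPy: the system learns the mapping y ≈ a·x + b from labelled examples instead of being explicitly programmed with the rule.

import numpy as np

# Fit a line to labelled examples by least squares, then predict an unseen input.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 8.1])        # noisy observations of roughly y = 2x

A = np.vstack([x, np.ones_like(x)]).T     # design matrix with a constant column
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(round(a, 2), round(b, 2))           # slope close to 2, intercept close to 0
print(a * 5.0 + b)                        # prediction for an input never seen in training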

Market surveillance authority

Means the national authority carrying out the activities and taking the measures pursuant to Regulation
(EU) 2019/1020.


Source : AI Act – Article 3(26)

Minimal risk AI system

Minimal risk AI systems are those that present neither an unacceptable risk nor a specific transparency
risk, and which are not categorised as high-risk. These systems include common, minimally intrusive
applications such as spam filters or AI-enabled video games. They are designed to operate securely
without posing significant threats to the fundamental rights, safety or interests of natural persons.

National competent authority

Means a notifying authority or a market surveillance authority; as regards AI systems put into service or used
by Union institutions, agencies, offices and bodies, references to national competent authorities or market
surveillance authorities in this Regulation shall be construed as references to the European Data Protection
Supervisor.


Source : AI Act – Article 3(48)

Notified body

Means a conformity assessment body notified in accordance with this Regulation and other relevant
Union harmonisation legislation.


Source : AI Act – Article 3(22)

Operator

Means a provider, product manufacturer, deployer, authorised representative, importer or distributor.


Source : AI Act – Article 3(8)

Personality characteristics

Should be in principle interpreted as synonymous with personal characteristics, but may also imply the
creation of specific profiles of individuals as personalities. Personality characteristics may be also
based on a number of factors and imply a judgement, which may be made by the individuals
themselves, other persons, or generated by AI systems. In the AI Act, personality characteristics are
sometimes referred to as personality traits and characteristics; those concepts should be interpreted
consistently.


Source : Commission Guidelines on prohibited artificial intelligence practices established by Regulation
(EU) 2024/1689 (AI Act) – Annex : paragraph 158

Penalties

The AI Act lays down strict penalties for breaches of the obligations set out in the Act:


1) Non-compliance with prohibitions on prohibited AI practices (referred to in Article 5 of the AI Act)
– Up to € 35 million, or 7% of total worldwide annual turnover for the preceding financial year,
if the offender is an undertaking.

2) Non-compliance with one of the provisions related to operators or notified bodies (other
than those laid down in Article 5):

– Up to € 15 million, or 3% of total worldwide annual turnover for the preceding financial year,
if the offender is an undertaking.

3) The supply of incorrect, incomplete or misleading information to notified bodies or
national competent authorities in reply to a request:

– Up to € 7.5 million, or 1% of total worldwide annual turnover for the preceding financial year,
if the offender is an undertaking.

In each case, the higher of the fixed amount and the percentage of turnover applies. For small and
medium-sized enterprises (SMEs), including start-ups, penalties must be proportionate to their
economic viability: for these businesses, the lower of the two amounts applies.


Source : AI Act – Article 99
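
The “higher of the two amounts” rule, and its inversion for SMEs, can be sketched as follows; the turnover figures are illustrative, not from the Act.

# Article 99 fine ceilings: the higher of a fixed amount and a percentage of
# worldwide annual turnover; for SMEs, the lower of the two applies instead.
def fine_ceiling(turnover_eur, fixed_eur, pct, sme=False):
    pct_amount = turnover_eur * pct
    return min(fixed_eur, pct_amount) if sme else max(fixed_eur, pct_amount)

# Prohibited-practice breach: € 35 million or 7% of turnover.
print(fine_ceiling(1_000_000_000, 35_000_000, 0.07))          # 70000000.0 (7% is higher)
print(fine_ceiling(100_000_000, 35_000_000, 0.07, sme=True))  # 7000000.0 (lower amount for an SME)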

Placing on the market

Means the first making available of an AI system or a general-purpose AI model on the Union market;

Source : AI Act – Article 3(9)

Post-market monitoring system

Means all activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any
need to immediately apply any necessary corrective or preventive actions.


Source : AI Act – Article 3(25)

Prohibited or unacceptable AI practices (“Prohibited AI systems”)

AI systems presenting an unacceptable risk to the fundamental rights, safety, or interests of natural
persons are specifically prohibited because of their potential to cause significant harm or to exploit
people's vulnerabilities inappropriately.


AI systems involving the following practices are unacceptable:
– Harmful manipulation and deception
– Harmful exploitation of vulnerabilities
– Social scoring
– Individual criminal offence risk assessment and prediction
– Untargeted scraping to develop facial recognition databases
– Emotion recognition
– Biometric categorisation
– Real-time remote biometric identification (‘RBI’)

Source : AI Act – Article 5 and Article 113 ; Commission Guidelines on prohibited artificial intelligence
practices established by Regulation (EU) 2024/1689 (AI Act) – Annex : paragraph 9
To find out more, read our article on Prohibited AI Systems

Provider

Means a natural or legal person, public authority, agency or other body that develops an AI system or
a general-purpose AI model or that has an AI system or a general-purpose AI model developed and
places it on the market or puts the AI system into service under its own name or trademark, whether for
payment or free of charge.

Source : AI Act – Article 3(3)


Ex : A manufacturer of healthcare products develops an AI chatbot in-house to monitor the side effects
of its drugs. This manufacturer qualifies as a provider because it has designed and developed the AI
system, using its own resources or those of a third party, and puts the system into service under its
own name.

Putting into service

Means the supply of an AI system for first use directly to the deployer or for own use in the Union for its
intended purpose.


Source : AI Act – Article 3(11)

Red Team

The Red Team method involves advanced security testing where a specialist team simulates hacker
attacks to assess the resilience of a security system. These tests use ethical hacking techniques to
replicate the tactics, techniques and procedures of a real attacker, including intrusion attempts, phishing
attacks and vulnerability scans. The aim is to discover and correct security flaws before a real attacker
can exploit them.


Sources : Red Team – NIST Glossary ; Red teaming – CNIL

Safety component

Means a component of a product or of an AI system which fulfils a safety function for that product or AI
system, or the failure or malfunctioning of which endangers the health and safety of persons or
property.


Source : AI Act – Article 3(14)

Ex : In a medical device, the software that analyzes patient data in order to recommend a medication
dose is a safety component: its failure could lead to an overdose or underdose, endangering the
patient's health.

Scraping

“Scraping” typically refers to using web crawlers, bots, or other means to automatically extract data or
content from different sources, including CCTV, websites or social media. These tools are software
‘programmed to sift through databases and extract information and to make use of that information for
another purpose’.

Source : Commission Guidelines on prohibited artificial intelligence practices established by Regulation
(EU) 2024/1689 (AI Act) – Annex : paragraph 227
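
Ex : A minimal sketch of the extraction step, using only the Python standard library: pulling hyperlinks out of an HTML page. A real crawler would also fetch pages over the network and follow the links it finds.

from html.parser import HTMLParser

# Collect the targets of <a href="..."> links found in an HTML document.
class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

page = '<html><body><a href="https://example.org/a">A</a> <a href="/b">B</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['https://example.org/a', '/b']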

Serious incident

Means an incident or malfunctioning of an AI system that directly or indirectly leads to any of the
following:
a) the death of a person, or serious harm to a person’s health;
b) a serious and irreversible disruption of the management or operation of critical
infrastructure;
c) the infringement of obligations under Union law intended to protect fundamental rights;
d) serious harm to property or the environment;

Source : AI Act – Article 3(49)


Ex : An AI system used in a hospital to automatically adjust medication doses makes a calculation error,
resulting in several patients being overdosed, causing serious medical complications.

Social and environmental well-being

Means that AI systems are developed and used in a sustainable and environmentally friendly manner
as well as in a way to benefit all human beings, while monitoring and assessing the long-term impacts
on the individual, society and democracy.


Source : AI Act – Recital 27

Special categories of personal data

Means the categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679,
Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725.

“personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or
trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely
identifying a natural person, data concerning health or data concerning a natural person's sex life or
sexual orientation shall be prohibited”


Source : AI Act – Article 3(37)

Specific transparency risk AI system

Specific transparency risk AI systems are those that interact directly with natural persons and present
neither an unacceptable risk nor a high risk.
These systems are subject to specific transparency obligations, requiring users to be informed when
they interact with an AI or when they are exposed to content generated or modified by an AI.

Ex : When using AI systems such as chatbots (conversational agents) or deep fakes, humans must be
informed that they are interacting with a machine so that they can make an informed decision to
continue or step back. Providers will also need to ensure that AI-generated content is identifiable.


Source : AI Act – Article 50 and Article 113

Systemic risk

Means a risk that is specific to the high-impact capabilities of general-purpose AI models, having
a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable
negative effects on public health, safety, public security, fundamental rights, or the society as a whole,
that can be propagated at scale across the value chain.


Source : AI Act – Article 3(65)


Ex : An example of systemic risk is the widespread use of an AI model in the banking sector to assess
loan applications. If this model has a systematic bias against certain demographic groups, this could
lead to the mass exclusion of these groups from access to credit, affecting their ability to invest in real
estate, set up businesses or access essential financial resources, with negative repercussions for the
economy and society as a whole.
