
Biometrics, AI, and Security: When Progress Challenges Regulatory Frameworks

This article was drafted on the basis of the compromise text of the AI Act reached on January 29, 2024, which was put to a vote in Coreper on February 2, 2024. The information presented may therefore differ slightly from the version of the text that is ultimately adopted.

5 Key Points

  • The enthusiasm for biometrics is driving researchers to find new universal, permanent, recordable, measurable, and tamper-proof characteristics.
  • In terms of security, biometric data is primarily used in two distinct processes: identification and authentication.
  • Globally, regulatory frameworks governing the intersection of AI and biometrics remain uneven.
  • According to the current version of the AI Act:
    • In the context of law enforcement, the use of AI for remote real-time identification of individuals in public spaces follows an approach based on risk, necessity, and proportionality. Retrospective remote identification in public spaces, on the other hand, is allowed within a strict framework, albeit one less restrictive than that provided for real-time use.
    • Regarding surveillance and prevention, the use of remote biometric identification in public spaces (real-time or retrospective) will be authorized under the conditions applicable to high-risk biometric AI systems (AIS).

Introduction

Biometrics. Behind this modern notion lies an ancient practice. Today, « bio-digital identification » finds fertile ground for expansion in technological advancements and international events. Beyond security, numerous sectors attach significant importance to its use. The attraction to this technology raises questions about the nature of biometrics, its current and future uses, and the measures taken by governments to regulate practices that carry potential risks of abuse.

1. Biometrics: What Are We Talking About?

According to the CNIL, biometrics refers to computer techniques capable of « automatically recognizing an individual based on their physical, biological, or even behavioral characteristics. » These data are considered personal due to their uniqueness and permanence.

This « measurement of the human body » encompasses various recognition techniques, including the analysis of fingerprints, face, and other biological elements, such as blood or saliva. It also extends to behavioral characteristics, among which vocal recognition remains the predominant process, alongside gait analysis or gesture recognition.

Identification vs. Authentication

In a security context, biometric data serves two processes with distinct objectives: identification and authentication. Biometric identification seeks to establish the identity of a person among several, by comparing their biometric data (such as face or fingerprints) with a large database of identities. The goal here is to answer the question « who are you? ». In contrast, biometric authentication aims to confirm that a person is indeed who they claim to be, by comparing their data with their own set of biometric measurements, according to a verification principle. It is then a matter of answering the question « is it really you? ».
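To make this distinction concrete, here is a minimal, purely illustrative sketch in Python (the toy feature vectors, the cosine-similarity comparison, and the 0.90 threshold are simplifying assumptions, not a description of any real biometric pipeline): identification searches a probe against an entire gallery of enrolled templates (1:N), while authentication compares the probe against a single enrolled template (1:1).

```python
# Illustrative sketch only: biometric templates are modeled as plain feature
# vectors compared with cosine similarity. Real systems rely on dedicated
# feature-extraction pipelines, liveness detection, and calibrated thresholds.
from math import sqrt


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))


THRESHOLD = 0.90  # hypothetical decision threshold


def identify(probe: list[float], gallery: dict[str, list[float]]) -> str | None:
    """Identification (1:N): answer « who are you? » by searching the whole database."""
    best_id, best_score = None, 0.0
    for person_id, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= THRESHOLD else None


def authenticate(probe: list[float], enrolled: list[float]) -> bool:
    """Authentication (1:1): answer « is it really you? » against one enrolled template."""
    return cosine_similarity(probe, enrolled) >= THRESHOLD


# Toy usage with made-up templates
gallery = {"alice": [0.9, 0.1, 0.3], "bob": [0.2, 0.8, 0.5]}
probe = [0.88, 0.12, 0.31]
print(identify(probe, gallery))               # "alice"
print(authenticate(probe, gallery["alice"]))  # True
```

The only design difference between the two functions is the scope of the comparison: one enrolled record for authentication, an entire database for identification, which is why identification is the more privacy-sensitive of the two processes.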

2. Biometrics and AI: Current and Future Uses

To prove their identity, a person can possess a document or object, know a piece of secret information, or rely on their own characteristics, such as their face or fingerprints. This last method is particularly safe and reliable: the probability of finding two people with identical fingerprints is extremely low, at 1 chance in 64 billion. This efficiency explains its continued use through the centuries and its widespread acceptance today.

By 2028, « the biometrics market size is expected to increase from USD 42.96 billion [in 2023] to USD 94.23 billion. » This growth reflects the broadening of biometric applications beyond security. Coupled with boarding pass data, facial recognition allows British Airways to streamline boarding. Alipay’s « Smile-to-Pay » system in China, smartphone unlocking, or contactless payment are all examples of uses demonstrating the intensification of this practice. These technologies, facilitating everyday actions, are widely favored: « 93% of consumers prefer biometrics to passwords for validating their payments. »

Researchers worldwide are therefore seeking new universal characteristics that would allow for unique, permanent, recordable, measurable, and tamper-proof identification of an individual. In Canada, a team is working on the use of gait, an innovation that could find its place in a « contactless » world, post-pandemic. More surprisingly, in Japan, researchers have developed « an olfactory sensor capable of identifying a person by analyzing the molecules present in their breath, » with a success rate of over 97%.

Biometrics and Security: a Controversial Topic

In this context of innovation, biometric surveillance, combined with artificial intelligence (AI), raises intense debate about respect for privacy and fundamental rights. Three main technologies automate data analysis to detect « abnormal » behaviors: algorithmic video surveillance, audio surveillance, and facial recognition.

Although deployed in the name of security, these tools carry very real risks of misuse. Nevertheless, the application of these technologies is expanding worldwide: facial recognition in hotels in Hangzhou (China), the identification of Capitol rioters (United States), the prevention of disturbances in English football stadiums, and the deployment of video surveillance in France during the upcoming Olympic Games. Faced with these risks to privacy, initiatives such as that of Hamburg, Germany, propose solutions that anonymize individuals, depicting them as « matching figures. »

In France, in the context of the 2024 Olympic Games, the law allows algorithmic video surveillance to be trialed in order to secure sports, recreational, or cultural events particularly exposed to risks, from the Games’ starting date until March 31, 2025. This decision, combined with the emerging use of biometric identification, has prompted objections from ethics advocates and institutions. Thus, on January 29, the French National Pilot Committee on Digital Ethics, in its opinion on the challenges of facial, postural, and behavioral recognition technologies, highlighted the impact of such innovations on individual freedoms. It made recommendations that go beyond the scope of ethics alone, taking into account scientific and epistemological dimensions, as well as the economic and social issues raised by the deployment of these devices.

This expansion of biometric uses therefore raises questions about the need to propose appropriate regulatory, ethical, and political frameworks.

3. The International and European Response to the Biometrics and AI Intersection

AI remains a « legal minefield, » a situation exacerbated by disparities in legal frameworks worldwide. In France, since 2004, the CNIL has been authorized to monitor the use of biometric devices in the private sector. At the European level, since 2016, the General Data Protection Regulation (GDPR) has harmonized regulations across the 27 member states and the United Kingdom, a mechanism strengthened by the implementation of the AI Act. In the United States, by contrast, there is to date no uniform federal regulation of biometric data, leaving each individual state responsible for legislating on the matter.

Internationally, the agreement on AI security proposes recommendations for making the technology safe from the design stage onward, but without imposing legal constraints or providing detail on specific AI applications or data collection methods.

How do Biometrics and Security manifest at the European Level?

In this regard, the AI Act complements the GDPR by specifying the obligations related to remote biometric identification in public places, in real-time or post hoc. A distinction appears in the regulation between law enforcement and prevention uses.

AI Act: Towards Stringent Control of AI-Based Remote Identification in Public Spaces in a Law Enforcement Context

The use of AI for remote real-time identification in public spaces: a prohibition with exceptions

In the AI Act, the use of AI to identify individuals remotely, in real time, is prohibited except in pursuit of three objectives:

  • the targeted search for specific victims (abduction, trafficking and sexual exploitation of human beings, search for missing persons);
  • the prevention of a specific, substantial, and imminent threat to the life and safety of individuals, or of a threat of attack;
  • the localization or identification of a person suspected of having committed a criminal offense listed in Annex II bis and punishable in the Member State concerned (see the list at the end of the article).

In this framework, the AI system should only be used to confirm the identity of the person sought. These exceptions require a rigorous assessment of the situation, considering the gravity and potential consequences on individual rights and freedoms. The use of AI is therefore linked to an approach based on risk, but also on necessity and proportionality.

The procedure provided for authorizing the use of AI in these cases is particularly strict, as it includes:

  • Obtaining prior authorization from an independent judicial or administrative authority (or, in case of emergency, an authorization filed and granted within 24 hours).
  • Conducting an impact assessment on fundamental rights (before use).
  • Registering the system in a database.
  • Notifying the use of AI systems to competent national authorities (for each use).

If the authorization is refused, the use of these systems must cease immediately, and the associated data must be deleted.

The use of AI for remote « post » identification in public spaces: limited authorization

The use of AI for remote « post » identification follows in the footsteps of real-time identification but is subject to less stringent regulation.

Classified as « high-risk » systems (Annex III of the AI Act), these AI systems may not be used for law enforcement purposes in an untargeted manner, that is, without any connection to a criminal offense, criminal proceedings, a genuine and present or genuine and foreseeable threat of a criminal offense, or the search for a specific missing person. Moreover, surveillance results cannot serve as the sole basis for decisions with legal effects.

The post hoc use of remote biometric identification systems is subject to the same obligations as real-time identification: prior authorization; an impact assessment on fundamental rights when the system is used by public authorities or by private operators providing public services; and notification of their use to the competent authorities.

In addition, users may not take any action or decision on the basis of a « post » remote identification unless it has been separately verified and confirmed by at least two competent, trained, and authorized individuals.

However, the authorization conditions for « post » identification are relaxed compared to those applicable in real-time:

  • The authority issuing the authorization is not necessarily independent, so obtaining it could be easier.
  • The obtaining deadline is extended to 48 hours.
  • The prior-authorization requirement is lifted if the system is used to identify a potential suspect (based on objective and verifiable facts directly related to the offense).
  • Notifications to competent national authorities are not made for each use but on an annual basis.
  • The obligation imposed on users to resort to human verification and confirmation is lifted if the law of a Member State, or that of the Union, considers it potentially disproportionate.

AI Act: Loosening the Use of Remote Biometric Identification in Public Spaces in a Preventive Context

Beyond law enforcement use cases, the upcoming European regulation on artificial intelligence allows the use of biometrics in a general context, which consequently covers the preventive dimension of the security domain.

In this field of action, the use of biometric identification (both in real-time and post hoc) will be authorized, under the conditions applicable to high-risk biometric AI systems.

Thus, before use, an impact assessment on fundamental rights must be carried out if the AI system is used by a public body or by private operators providing public services. Furthermore, no action or decision may be taken on the basis of the system’s identification unless it has been separately verified and confirmed by at least two individuals with the necessary competence, training, and authority.

Conclusion

The rise of biometrics and artificial intelligence marks a revolution in various sectors. It goes far beyond the scope of security and is deeply rooted in our daily lives. In this context, lawmakers, especially in Europe with the AI Act, are grappling with the regulatory challenges of a technology that touches the core of identity and privacy.

At the European level, the use of biometrics in a security and law enforcement framework is subject to several safeguards. For preventive purposes, however, this use remains permitted despite the existence of a regulatory framework, raising concerns about the crossover of data between preventive and repressive uses.

It is worth mentioning the reference made by the French National Pilot Committee on Digital Ethics (CNPEN), which fears the revival of concerns about the emergence of a panoptic system within society that, like Michel Foucault’s « faceless gaze, » would transform « the entire social body into a field of perception »*.

*Michel Foucault – Discipline and Punish: The Birth of the Prison. Gallimard, 1975

Annexes

List of criminal offenses covered in Annex II bis, Article 5, paragraph 1, point iii)

  • Terrorism
  • Trafficking in human beings
  • Sexual exploitation of children and child pornography
  • Illicit trafficking in narcotics and psychotropic substances
  • Illicit trafficking in arms, ammunition, and explosives
  • Murder, serious assault, and battery
  • Illicit trafficking in organs and human tissue
  • Illicit trafficking in nuclear or radioactive materials
  • Abduction, kidnapping, and hostage-taking
  • Crimes falling under the jurisdiction of the International Criminal Court
  • Hijacking of aircraft/vessels
  • Rape
  • Crimes against the environment
  • Organized or armed robbery
  • Sabotage
  • Participation in a criminal organization involved in one or more of the offenses listed above.