Overview of AI regulation in the United States

5 points to remember

  • The U.S. approach is split between federal initiatives, state and local legislation, agency work and case law.
  • At the federal level, several initiatives have laid a solid foundation for far-reaching regulation.
  • At the local level, legislative initiatives are multiplying.
  • Case law plays an important role in understanding and mitigating the role of algorithms in discrimination. Federal agencies also conduct investigations into the impact of AI and inform legislators’ debates.
  • Finally, the US and EU have published a joint roadmap for trusted AI and risk management.

From suppliers of artificial intelligence systems (AIS) to policymakers, a global consensus is emerging on the need to regulate the development, marketing and use of this multifaceted technology. The United States has demonstrated its commitment to making AI safer through ambitious, largely voluntary legislative initiatives designed to guide the efforts of AI players and thus regulate the technology’s use. As a result, 2023 and 2024 could see more restrictive measures implemented by bodies such as the Federal Trade Commission (FTC). This article covers key AI legislative initiatives at both the federal and local levels, to shed light on the fragmented approach of US AI regulation.


Federal initiatives

Although not all the regulations proposed at the federal level are binding, they demonstrate that overseeing the development and use of AI is a priority for many US authorities. In particular, the United States has worked hard to promote American leadership and R&D in AI. In addition, the use of AI in federal agencies has been regulated, and texts have been published to protect Americans and promote the adoption of “trustworthy AI”:

  • The use of AI in federal agencies: the need for training

Several laws have been passed in the United States to regulate the use of AI within the federal government. The U.S. AI Training Act, passed in 2022, aims to train federal personnel in the procurement, adoption and use of AI within federal agencies. The training is to be developed by the Office of Management and Budget (OMB). This regulation adopts a risk management approach, as does the AI Act in the European Union.

To promote the use of trustworthy AI in the federal government, the White House also adopted Executive Order 13960 in December 2020. It stipulates that the principles developed to guide the use of AI within agencies must be consistent with American values and applicable law. Agencies must make public an inventory of unclassified and non-sensitive AI use cases. Under the executive order, the National Institute of Standards and Technology (NIST) must assess in 2023 the compliance of any AI that has been deployed or is being used by one or more federal agencies.

  • The Blueprint for an AI Bill of Rights to protect Americans’ rights

The Blueprint for an AI Bill of Rights aims to protect the American people in the age of artificial intelligence. Published in 2022, this charter of rights relies on the voluntary commitment of AIS operators (designers, developers, distributors) to apply it, and aims to guide the design, development and deployment of AI systems through 5 main principles:

  • Protection against unsafe and ineffective AIS
  • Protection against algorithmic discrimination, and equitable design and use of AIS
  • Data protections built into AIS, and agency over how one’s data is used, to guard against abusive data practices
  • Notice when an automated system is in use, and accessible documentation about it
  • The ability to opt out and to reach a human alternative when problems arise

The charter applies to all automated systems likely to have a significant impact on the American public’s rights, opportunities or access to essential resources or services. It thus takes a sectoral approach to the regulation of artificial intelligence in the United States, focusing on areas such as hiring, education, and access to healthcare and financial services. Technical documentation entitled “From Principles to Practice” accompanies the Blueprint and is designed to help organizations implement the framework.

Other texts are designed to protect the rights of Americans, such as Executive Order 14091 (2023) on further advancing racial equity and support for communities underserved by the federal government, or the joint statement on automated systems issued by representatives of the FTC and three other federal agencies.

  • NIST’s AI Risk Management Framework for the adoption of “trustworthy” AI

NIST defines trustworthy AI as AIS that is valid, reliable, safe, secure, fair, privacy-enhanced, transparent, accountable, explainable and interpretable. The AI Risk Management Framework is a non-binding text, independent of any particular use case. It was developed by NIST, the U.S. government agency whose mission is to promote U.S. innovation and industrial competitiveness by advancing standards and technology.

The aim of this voluntary framework is to “prevent, detect, mitigate and manage AI-related risks” in order to foster public trust. It thereby adopts a human rights-based approach. It is intended to be applied early in the AI lifecycle and by all stakeholders.

The desire of NIST and other organizations to develop this kind of code of conduct stems from concrete cases in which AI systems have harmed people. Automated claims processing by State Farm, an insurance company, allegedly discriminated against Black homeowners, while facial recognition in Louisiana led to the arrest and week-long imprisonment of a man who was ultimately found innocent.

  • The Algorithmic Accountability Act: a binding law soon?

The Algorithmic Accountability Act, reintroduced in 2022 but never passed, provides for both ex-ante and ex-post risk management mechanisms. It seeks to regulate AIS in the US to protect consumers from algorithmic bias. It would require companies to assess the automated systems they use and sell for bias and effectiveness.

Scope of application:

  • Entities that use an automated system to make critical decisions, that own, manage, modify, manipulate, analyze or control the data of more than one million people, and whose annual sales exceed $50 million
  • Critical decisions: decisions likely to have a legal, material or similarly significant effect on a person’s life, covering the following categories: education and vocational training, employment, self-employment and worker management, essential utilities, family planning, financial services, healthcare, housing or accommodation, and legal services

Implementation:

  • Via the FTC (Federal Trade Commission), an independent agency of the U.S. government
  • Obligation to submit annual impact assessments to the FTC
  • Development of assessments, guidelines and aggregate reports by the FTC
  • FTC audits of AIS

The text also requires companies to be transparent about when and how automated systems are used, so that consumers can make informed choices. It builds on the Stop Discrimination by Algorithms Act (Washington, D.C. – 2021).
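As an illustration of the scope criteria summarized above, the sketch below encodes them as a simple coverage check. This is a simplification for readability only: the entity fields are hypothetical names, the bill’s actual coverage tests are more detailed, and the Act has not been enacted.

```python
# Simplified, illustrative encoding of the Algorithmic Accountability
# Act's proposed coverage thresholds as summarized above.
# Not legal advice; the bill's actual criteria are more detailed.
from dataclasses import dataclass

@dataclass
class Entity:
    makes_critical_decisions: bool  # uses an automated system for critical decisions
    people_with_data: int           # individuals whose data the entity controls
    annual_sales_usd: int

def is_covered(e: Entity) -> bool:
    """Covered if the entity automates critical decisions, holds data on
    more than one million people, and has annual sales above $50 million."""
    return (e.makes_critical_decisions
            and e.people_with_data > 1_000_000
            and e.annual_sales_usd > 50_000_000)

print(is_covered(Entity(True, 2_500_000, 120_000_000)))  # True
print(is_covered(Entity(True, 400_000, 120_000_000)))    # False: too few data subjects
```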

Local initiatives

The 2023 legislative session saw an increase in US state AI legislation compared with previous sessions. Between 2021 and 2022, AI bill introductions increased by 46%. American states have prioritized AI regulation with the aim of combating the harms associated with it, notably in employment and human resources (particularly at the hiring stage), healthcare and insurance. The fear is that this technology will deepen the inequalities that already exist in the United States, as the State Farm and Louisiana cases demonstrate. Some states have also included AI provisions in privacy and data protection laws. Others have called for the formation of working groups to investigate the impacts of AI.

Here are a few local initiatives that illustrate the issues addressed by local U.S. legislation:

New York City: The Bias Audit Law, which came into force on July 5, 2023, has paved the way for the regulation of AI in hiring in the USA. New York City now requires companies to conduct a bias audit of automated employment decision tools used to screen applicants residing in NYC. A European employer is therefore subject to this law when hiring a New York resident. NYC Local Law 144 also requires that applicants be informed, 10 days in advance, that such tools will be used and what information they rely on to make decisions.
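To make the audit requirement concrete, here is a minimal sketch of the kind of impact-ratio calculation such a bias audit centers on: each category’s selection rate is compared with that of the most-selected category. The figures and category names are invented for illustration, and this is not the law’s full methodology.

```python
# Minimal sketch of an impact-ratio calculation of the kind NYC Local
# Law 144 bias audits center on (illustrative only; invented data).

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """For each category, selection rate divided by the selection rate
    of the most-selected category (a ratio of 1.0 = the top category)."""
    rates = {cat: sel / total for cat, (sel, total) in outcomes.items()}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical audit data: category -> (applicants selected, total applicants)
outcomes = {
    "group_a": (48, 120),  # 40% selection rate
    "group_b": (30, 100),  # 30% selection rate
}

for category, ratio in impact_ratios(outcomes).items():
    print(f"{category}: impact ratio = {ratio:.2f}")
# group_a: impact ratio = 1.00
# group_b: impact ratio = 0.75
```

A ratio well below 1.0 flags a potential adverse impact on that category; under the four-fifths rule of thumb used in US employment law, values below 0.8 typically warrant scrutiny.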

Illinois: The Illinois Artificial Intelligence Video Interview Act imposes obligations on employers who use AI to evaluate video interviews. It has been in effect since January 1, 2020.

Colorado: The Colorado Insurance Act (2021) seeks to protect consumers from discriminatory insurance practices. This regulation on the governance of algorithms and predictive models, issued by the state’s Division of Insurance, covers the maintenance of a risk management framework, bias audits of algorithms and data, documentation and reporting obligations, and the explainability of how data and algorithms are used.

Washington, D.C.: The Stop Discrimination by Algorithms Act, which stalled in 2022, was reintroduced in 2023. It aims to combat algorithmic discrimination: it would prohibit the discriminatory use of algorithmic eligibility determinations and would require individuals to be informed about how their personal data is used.

California: California is one of the most active US states in terms of AI legislation and efforts to make AI safer and fairer. The California Workplace Technology Accountability Act, introduced in January 2022, aims to protect employees by limiting employers’ processing of their data, restricting technological surveillance, and requiring algorithmic impact assessments of automated decision systems. Amendments to California employment legislation, such as the Automated Decision Tools Bill, would prohibit the use of automated decision systems that discriminate on the basis of protected characteristics. They would also extend the responsibilities and obligations of employers and suppliers of these tools (e.g. record-keeping).

Other players in US AI legislation

While some federal or state laws are stalled in the legislative process, case law plays a major role in shaping the American regulatory approach, which is more fragmented than ever, as evidenced by the lawsuit Louis et al. v. SafeRent et al. Two Black women were refused rental housing because of their “SafeRent score”, a score produced by algorithm-based tenant-screening software that they accuse of discriminating against Black and Hispanic applicants because of the factors used to establish the score (credit and debt, but not the use of housing vouchers). The Department of Justice and the Department of Housing and Urban Development therefore filed a joint statement alleging that “defendants’ use of an algorithm-based scoring system to select tenants discriminates against Black and Hispanic rental applications in violation of the Fair Housing Act.”

Agencies are also playing a major role in the construction of American AI regulation. The work of the FTC in particular should be closely watched: it is stepping up investigations into AI and its implications for data privacy and manipulation, and is thereby strongly influencing US legislation.

Finally, American regulation could well be influenced by European regulation. The USA and the EU have published a joint roadmap for trusted AI and risk management, which aims to support international standardization on AI, honor both powers’ commitment to the OECD recommendations, promote trustworthy AI, and guide the development of AI and related tools. What’s more, Americans and Europeans alike recognize the effectiveness of a risk-based approach to building public confidence in AI without holding back innovation.

Between reminders of existing rights and the drafting of codes of conduct, the United States has laid a solid foundation for far-reaching regulation. However, America’s fragmented approach to AI regulation, and the prominent role played by federal agencies and case law, make it difficult for the country to advance in a unified way and thus assert American leadership over global AI regulation.