
US Regulations on AI

Overview of the US regulations on AI.
Last updated on 20/09/2023

5 Key Points to Remember

  • The US approach to AI is divided between federal initiatives, local legislation, the work of agencies, and case law.
  • At the federal level, several initiatives have laid a solid foundation for comprehensive regulation.
  • At the local level, legislative initiatives are multiplying.
  • Case law plays an important role in understanding and mitigating the role of algorithms in discrimination. Agencies also conduct investigations into the impact of AI and guide legislative debates.
  • Finally, the US and the EU have published a joint roadmap for trustworthy AI and risk management.

Introduction

A global consensus is emerging, from AI providers to policymakers, on the need to regulate artificial intelligence. The United States has committed to making AI safer through ambitious legislative projects, and stricter measures from agencies such as the FTC are expected in 2023 and 2024. The federal and local initiatives below illustrate America’s varied regulatory approach.

Federal Initiatives

Although not all federal regulations are binding, they demonstrate that regulating the development and use of AI is a priority for many US authorities. The United States has notably focused on promoting American leadership and R&D in AI. Additionally, the use of AI in federal agencies has been regulated, and texts have been published to protect Americans and promote the adoption of “trustworthy AI.”

The Use of AI in Federal Agencies: The Need for Training

Several texts have been adopted in the United States to regulate the use of AI within the federal government itself. The AI Training Act, adopted in 2022, aims to train federal personnel in the procurement, adoption, and use of AI within agencies; the training must be developed by the Office of Management and Budget (OMB). This regulation thus adopts a risk management approach similar to that of the European Union’s AI Act.

To promote the use of trustworthy AI within the federal government, the White House also issued Executive Order 13960 in December 2020. It stipulates that the principles guiding the use of AI within agencies must conform to American values and applicable law, and it requires agencies to publish an inventory of their non-classified, non-sensitive AI use cases. Under the order, the National Institute of Standards and Technology (NIST) must evaluate in 2023 the compliance of any AI deployed or used by one or more federal agencies.

Blueprint for an AI Bill of Rights for the Protection of Americans’ Rights

The White House’s Blueprint for an AI Bill of Rights aims to protect the American people in the age of artificial intelligence. Published in 2022, this non-binding charter relies on the willingness of AI system (AIS) operators (designers, developers, distributors) to apply it, and seeks to guide the design, development, and deployment of AI systems through five main principles:

  • Protection against unsafe and ineffective AIS.
  • Protection against algorithmic discrimination, and the equitable use and design of AIS.
  • Data protection built into AIS, with user agency over how personal data is used, to guard against abusive practices.
  • Notice when an automated system is being used, with accessible documentation about it.
  • The right to opt out and to access a human alternative when problems arise.

The charter covers automated systems impacting American rights, opportunities, or access to essential resources.

It adopts a sectoral approach to AI regulation, focusing on hiring, education, healthcare, and financial services. A technical companion entitled “From Principles to Practice” accompanies the Blueprint and helps organizations implement the framework.

Other texts aim to protect Americans’ rights, such as Executive Order 14091 (2023) on further advancing racial equity and support for underserved communities through the federal government, or the joint statement issued by the FTC together with three other federal agencies.

The AI Risk Management Framework from NIST for the Adoption of “Trustworthy AI”

The AI Risk Management Framework is a non-normative, use-case-agnostic text developed by NIST, a US government agency whose mission is to promote American innovation and industrial competitiveness by advancing standards and technology. NIST defines trustworthy AI as AIS that are performant, safe, valid, reliable, fair, privacy-respecting, transparent, accountable, explainable, and interpretable.

This non-normative framework aims to “prevent, detect, mitigate, and manage AI-related risks” to build public trust. It adopts a human rights-based approach. It is intended to be applied from the beginning of the AI lifecycle and to all stakeholders.

NIST and other organizations developed this framework in the wake of documented harms caused by AI systems: State Farm’s automated claims processing allegedly discriminated against Black homeowners, and facial recognition in Louisiana led to a man’s wrongful arrest and imprisonment.

The Algorithmic Accountability Act: A Binding Law Soon?

The Algorithmic Accountability Act, reintroduced in 2022 but never adopted, provides for ex ante and ex post risk management mechanisms. It seeks to regulate AIS in the United States in order to protect consumers against algorithmic bias, and would require companies to assess the automated systems they use and sell for bias and effectiveness.

Scope

Covered entities are those that use an automated system for critical decisions and that:

  • Own, manage, modify, manipulate, analyze, or control the data of more than one million people; or
  • Have annual revenues exceeding $50 million.

Critical decisions are those likely to have a legal, material, or similarly significant effect on a person’s life, including decisions involving:

  • Education and vocational training
  • Employment
  • Self-employment and worker management
  • Essential public utility services
  • Family planning
  • Financial services
  • Healthcare
  • Housing
  • Legal services

Implementation

  • Implementation via the Federal Trade Commission (FTC), an independent US government agency.
  • Mandatory annual submission of impact assessment information to the FTC.
  • Development of assessments, guidelines, and aggregate reports by the FTC.
  • Audits of AIS by the FTC.

The text also imposes a transparency obligation: companies must disclose when and how automated systems are used so that consumers can make informed choices. It draws on the Stop Discrimination by Algorithms Act (Washington, D.C. – 2021).

Local Initiatives

The 2023 legislative session saw an increase in US state laws on AI compared to previous sessions; between 2021 and 2022, AI bill introductions grew by 46%. States have prioritized AI regulation aimed at combating associated harms, particularly in employment and human resources (especially at the hiring stage), health, and insurance. There is a fear that this new technology will deepen existing inequalities in the United States, as the State Farm and Louisiana cases illustrate. Some states have also included AI provisions in privacy and personal data protection laws, while others have established task forces to investigate the impacts of AI.

New York City

The Bias Audit Law (NYC Local Law 144), which came into force on July 5, 2023, opened the way for AI regulation in hiring in the United States. New York City now requires companies to audit for bias any automated employment decision tools used to screen candidates residing in NYC; a European employer is therefore subject to the law if it hires a person residing in New York. The law also provides that candidates must be notified at least 10 business days before such a tool is used and informed of the data it relies on to make decisions.
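The bias audits required under the NYC law center on comparing selection rates across demographic categories, expressed as “impact ratios” relative to the most-selected group. A minimal sketch of that computation (the function name and sample figures are illustrative, not drawn from the law or its rules):

```python
def impact_ratios(outcomes):
    """Compute each category's impact ratio: its selection rate divided by
    the highest selection rate across all categories.

    outcomes maps category -> (number selected, number of applicants).
    """
    rates = {cat: selected / total for cat, (selected, total) in outcomes.items()}
    top_rate = max(rates.values())
    return {cat: rate / top_rate for cat, rate in rates.items()}

# Hypothetical audit data: 50 of 100 applicants selected in one group,
# 25 of 100 in another.
audit = impact_ratios({
    "group_a": (50, 100),  # 50% selection rate
    "group_b": (25, 100),  # 25% selection rate
})
print(audit)  # group_a -> 1.0, group_b -> 0.5
```

A low ratio for a category (here, 0.5 for group_b) is the kind of disparity such an audit is designed to surface and report.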

Illinois

The Illinois Artificial Intelligence Video Interview Act imposes obligations on employers that use AI to evaluate video interviews. It has been in effect since January 1, 2020.

Colorado

The Colorado Insurance Law (2021) seeks to protect consumers from discriminatory insurance practices. This regulation on the governance of algorithms and predictive models by the state insurance division includes:

  • Ensuring explainability of data and algorithm usage
  • Maintaining a risk management framework
  • Conducting bias audits of algorithms and data
  • Fulfilling documentation obligations
  • Reporting requirements

Washington D.C.

The Stop Discrimination by Algorithms Act, stalled in 2022 and reintroduced in 2023, aims to combat algorithmic discrimination. The law would prohibit the use of algorithmic decision-making to produce discriminatory determinations and would require that people be informed of the use of their personal data.

California

California is one of the most active US states in AI legislation and in efforts to make AI safer and fairer. The California Workplace Technology Accountability Act, introduced in January 2022, aims to protect employees by limiting employers’ processing of their data, restricting technological surveillance, and requiring algorithmic impact assessments of automated decision systems. Proposed amendments to California employment law, such as the Automated Decision Tools Bill, would prohibit the use of automated decision systems that discriminate based on protected characteristics and would expand the responsibilities and obligations of employers and providers of these tools (e.g., record-keeping).

Other Actors in US AI Legislation

While some federal and state laws stall in the legislative process, case law plays a major role in shaping the fragmented American regulatory approach. A notable example is Louis et al. v. SafeRent et al., in which two Black rental applicants were denied housing because of their “SafeRent score.” This score is produced by tenant-screening software whose algorithm is allegedly discriminatory against Black and Hispanic applicants, weighing factors such as credit history and debt while excluding the use of housing vouchers.

The Department of Justice and the Department of Housing and Urban Development therefore filed a joint statement alleging that “the defendants’ use of an algorithm-based scoring system to select tenants discriminates against Black and Hispanic rental applicants, in violation of the Fair Housing Act.”

Agencies

Agencies play a major role in shaping American AI regulation. The FTC’s work is particularly noteworthy: its growing number of investigations into AI, its effects on data privacy, and manipulation significantly influence American legislation.

A Joint Roadmap for Trustworthy AI

Finally, European regulation could influence American regulation: the US and the EU have published a joint roadmap for trustworthy AI and risk management. The text aims to support international AI standardization, honor both powers’ commitment to OECD recommendations, promote trustworthy AI, and guide the development of AI and related tools. Both parties also recognize the effectiveness of a risk-based approach for building public trust in AI without hindering innovation.

The United States has laid solid foundations for comprehensive regulation by reaffirming rights and developing codes of conduct. However, its fragmented approach to AI regulation, combined with the roles played by federal agencies and the courts, complicates unified progress and American leadership in global AI regulation.
