The legal framework governing artificial intelligence in the United States does not rely on a single federal law comparable to an “AI code.” It is instead organized around federal steering through Executive Orders and a set of state (and sometimes local) laws that create directly applicable obligations.
The result is a fragmented landscape in which compliance depends heavily on the state, the use case (recruitment, essential services, generative content, etc.), and the operator's role (developer, deployer, or provider).
I. The American Framework: Two Levels, No “U.S. AI Act”
The regulation of artificial intelligence in the United States is embedded in the country’s institutional architecture, which is based on a distribution of powers between the federal level and the states.
In this context, the American framework is characterized by:
- A two-level governance structure combining federal strategy and legislation adopted by states (and sometimes local authorities).
- The absence of a single, comprehensive federal law comparable to the European AI Act.
- Federal steering largely structured by Executive Orders, which set national priorities and guide the actions of federal agencies.
- State laws that may impose legally binding obligations on companies, enforced by Attorneys General or local authorities.
- Priority areas: combating discrimination, transparency, consumer protection, and regulation of generated content.
Within this framework, the American approach generally emphasizes maintaining technological leadership and innovation.
Executive Orders: The Federal Engine

An Executive Order is a normative act issued by the President of the United States in the exercise of constitutional and/or statutory powers. It is legally binding on federal agencies and administrations, guides their priorities, and may influence the private sector through public procurement, regulatory guidance, or federal standards. It is not, however, a law adopted by Congress and does not, in principle, create a compliance code directly applicable to all companies.
In the absence of a comprehensive federal AI law, individual states develop and adopt their own rules applicable to AI systems. The result is a fragmented regulatory landscape in which obligations vary by jurisdiction and use case.
Among the most significant examples:

| Jurisdiction | Focus (status) |
| --- | --- |
| Colorado | Risk-based approach targeting algorithmic discrimination (entry into force: 2026) |
| California | Transparency obligations for generative AI and regulation of deepfakes (entry into force: 2026) |
| Texas | Prohibition of certain uses, targeted obligations, and civil sanctions (entry into force: 2026) |
| New York City | Regulation of automated decision tools in recruitment (in force since 2023) |
| Utah | Consumer and minor protection in interactions with AI systems (in force since 2024) |
II. Federal Level: Executive Strategy and “AI Action Plan”
1. Federal governance structured by the executive
At the federal level, AI policy is mainly implemented through:
- Executive Orders
- Strategic frameworks (action plans, national priorities)
- Execution by agencies (implementation, public procurement, infrastructure, international positioning)
“Winning the Race: America’s AI Action Plan”
Following the Executive Order “Removing Barriers to American Leadership in Artificial Intelligence” (EO 14179, January 23, 2025), the White House published the strategic plan “Winning the Race: America’s AI Action Plan” (July 2025).
This document does not create a single federal law but serves as a blueprint guiding the administration’s action (priorities, funding, public procurement, infrastructure, diplomacy).
The plan is structured around three pillars:
- Accelerate AI innovation
- Build American AI infrastructure
- Lead international AI diplomacy and security
2. Key Executive Orders linked to the federal AI strategy
The key texts and their main roles within the strategy:
- Maintain U.S. Leadership in Artificial Intelligence (2019): launches the American AI Initiative and sets federal priorities (research, talent, regulatory framework).
- Removing Barriers to American Leadership in Artificial Intelligence (EO 14179, Jan. 2025): anchors a federal “pro-innovation” orientation, calling for the removal of perceived obstacles to AI competitiveness.
- President’s Council of Advisors on Science and Technology (EO 14177, Jan. 2025): strengthens the scientific and technological advisory structure at the presidential level.
- Advancing Artificial Intelligence Education for American Youth (Apr. 2025): aims to develop AI education and skills within the education system.
- Accelerating Federal Permitting of Data Center Infrastructure (EO 14318, July 2025): accelerates certain federal permitting processes for data center infrastructure related to AI.
- Promoting the Export of the American AI Technology Stack (EO 14320, July 2025): structures a coordinated federal effort to support exports of American full-stack AI technology.
- Preventing Woke AI in the Federal Government (EO 14319, July 2025): establishes requirements/expectations for AI systems (notably LLMs) used by the federal government and potentially influences public procurement.
- Ensuring a National Policy Framework for Artificial Intelligence (EO 14365, Dec. 2025): asserts a national “minimally burdensome” framework and aims to reduce fragmentation, particularly by addressing obstacles created by certain state approaches.
3. Ensuring a National Policy Framework for Artificial Intelligence: Toward Stronger Federal Coordination
The most recent Executive Order “Ensuring a National Policy Framework for Artificial Intelligence” (December 2025) marks an important step in the evolution of federal AI governance in the United States.
The text affirms the intention to strengthen the coherence of the national regulatory framework and to limit the fragmentation caused by the proliferation of state legislative initiatives.
To this end, the Executive Order notably provides for:
- The creation of a task force within the Department of Justice (DOJ)
- The analysis of state laws and initiatives related to AI
- The identification of potential conflicts with federal priorities in innovation and technological competitiveness
Colorado’s AI law is specifically mentioned in the text as an example of regulation that may raise such concerns. A DOJ report is expected in the coming months and may shed light on future federal policy directions.
Regulatory Fragmentation Tensions
This initiative takes place in a context marked by the proliferation of regulations adopted at the state level.
Several federal authorities have highlighted the risks that an overly fragmented regulatory landscape could pose to the development and deployment of AI technologies in the United States.
At the same time, several states have affirmed their intention to maintain autonomous regulatory frameworks, particularly to regulate risks related to algorithmic discrimination, AI transparency, or consumer protection.
4. The TRUMP AMERICA AI Act: An Attempt at Federal Harmonization
In this context, a federal bill entitled the TRUMP AMERICA AI Act (The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act) was introduced to establish a minimal set of federal requirements applicable to AI systems.
The text notably provides for:
- The establishment of a duty of care for AI system developers in the design and operation of their platforms
- Risk management protocols for advanced AI models (frontier models)
- Transparency and reporting obligations for high-impact models
- The creation of a Federal AI Safety Institute (FAISI) within the National Institute of Standards and Technology (NIST)
- Mechanisms to frame the liability of developers and operators of AI systems
The bill also addresses issues such as the use of data for model training, the security of advanced systems, the impact of AI on employment, and the protection of minors in digital environments.
At this stage, the bill has attracted limited public attention, and its adoption timeline remains uncertain.
III. Pioneer States That Have Adopted Binding AI Regulations
In the absence of a single federal law, several jurisdictions have adopted texts that are applicable (or about to become so), with concrete obligations (audit, transparency, duty of care, prohibitions, sanctions).
At the same time, the American regulatory ecosystem is also marked by the presence of numerous “microbills,” very short and targeted legislative texts aimed at specific actors or uses of AI.
These initiatives notably concern AI governance in the public sector, electoral contexts, transparency of AI-generated content, protection of minors or regulation of deepfakes.
1. Texas – TRAIGA (Texas Responsible Artificial Intelligence Governance Act)
Objective: Establish an AI governance framework in Texas, notably through the prohibition of certain high-risk practices and the establishment of a regime of control and sanctions.
Scope of application: The text applies to organizations that develop, deploy, or operate AI systems in connection with Texas, notably where persons located in Texas are affected.
Main obligations and prohibitions:
Targeted prohibitions:
- Behavioral manipulation encouraging self-harm, violence, or criminal activity
- Certain forms of unlawful discrimination
- Social scoring by governmental entities
- Restrictions on certain biometric uses by the government without consent (with exceptions)
- Prohibition of objectives aimed at infringing constitutional rights
- Prohibition of systems designed to produce or distribute certain illegal content (including illicit sexual deepfakes, etc.)
Other mechanisms:
- Regulatory sandbox: supervised testing mechanisms (while maintaining substantive prohibitions)
- Safe harbor: substantial compliance with recognized frameworks (e.g., NIST AI RMF) may support a defense or mitigation in certain enforcement contexts.
Sanctions and enforcement:
- Authority: Texas Attorney General (investigative powers and online reporting mechanism)
- Compliance timeline: a notice-and-cure mechanism (a correction period before enforcement) provided by the text
- Civil penalties: ranges provided for uncorrected and “incurable” violations, as well as daily penalties for ongoing violations (with possible injunctions)
2. Colorado – SB 24-205 (Consumer Protections in Interactions with AI Systems)
Objective: Prevent algorithmic discrimination in high-risk AI systems used for “consequential” decisions (employment, housing, credit, health, public services, etc.).
Scope:
- Applicable to developers of high-risk AI systems
- To deployers using such systems in the state of Colorado
- To AI systems affecting Colorado residents
The text establishes a clear distinction between:
- Developer (entity designing or providing the system)
- Deployer (entity using it in an operational context)
The law mainly targets predictive AI systems used to make or substantially contribute to decisions. It does not primarily target general-purpose generative AI tools.
Main obligations
- General duty of reasonable care: developers and deployers must exercise reasonable care to prevent algorithmic discrimination related to the use of high-risk systems.
- Developers: documentation and information enabling risk management by deployers; transparency on limits/uses; notification in case of detected discrimination.
- Deployers: risk management policy, impact assessments, monitoring, information to individuals when AI is used in a consequential decision (depending on the case), and appeal/human oversight mechanisms where required.
- Interaction transparency: obligation to inform when a consumer interacts with an AI system, unless it is obvious.
Sanctions and enforcement
- Enforcement authority lies exclusively with the Colorado Attorney General.
- There is no private right of action for individuals.
- Violations are classified as unfair or deceptive trade practices under the Colorado Consumer Protection Act.
- Civil penalties of up to $20,000 per violation may be imposed.
3. California – Generative AI Transparency (AB-2013 and SB-942)
California has adopted two major texts focused on transparency in generative AI, both effective January 1, 2026.
A) AB-2013 – Training Data Transparency (TDTA)
Objective: Strengthen transparency regarding the training data of generative AI systems accessible in California.
Scope:
- The law applies to developers of generative AI systems and to providers offering generative AI systems or services accessible in California.
- It targets systems capable of generating synthetic content such as text, images, audio, or video.
- Systems used exclusively for internal purposes and not accessible to the public are excluded.
No disclosure obligation applies to systems used exclusively for:
- Data security and integrity
- Physical security
- Operation of aircraft in national airspace
- National security, defense, or military activities
Main obligation
- Publication of a high-level summary of datasets used for training (including sources, origin, type and volume of data, and substantial fine-tuning or updates), with updates in case of significant changes.
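As an illustration, a developer could maintain that public summary as structured data before rendering it on its website. The schema below is a hypothetical internal representation; AB-2013 does not prescribe these field names or a specific format:

```python
import json

def build_training_data_summary(datasets, last_substantial_update):
    """Assemble a high-level training-data summary for publication.

    `datasets` is a list of dicts describing each dataset at a high level
    (source, data types, approximate volume, fine-tuning use). All field
    names here are illustrative, not terminology from the statute.
    """
    return {
        "schema_version": "1.0",
        "last_substantial_update": last_substantial_update,
        "datasets": [
            {
                "source": d["source"],
                "data_types": d["data_types"],
                "approximate_volume": d["approximate_volume"],
                "used_for_fine_tuning": d.get("used_for_fine_tuning", False),
            }
            for d in datasets
        ],
    }

summary = build_training_data_summary(
    datasets=[
        {
            "source": "Licensed news archive (hypothetical)",
            "data_types": ["text"],
            "approximate_volume": "2M articles",
            "used_for_fine_tuning": False,
        }
    ],
    last_substantial_update="2025-11-01",
)
print(json.dumps(summary, indent=2))
```

Keeping the summary as structured data makes the required updates after substantial fine-tuning or dataset changes easier to track and publish.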
Sanctions and enforcement:
- Enforcement through state authority mechanisms, with possible civil consequences under the applicable California legal framework.
B) SB-942 – California AI Transparency Act
Objective: Increase transparency regarding AI-generated media (audio, image, video) and reduce the proliferation of deepfakes through technical and contractual requirements.
Scope:
The law applies to “Covered Providers,” meaning entities that:
- Create or produce a generative AI system
- Have more than 1,000,000 monthly users
- Make the system publicly accessible in California
The text concerns only image, audio, and video content generated by AI. Textual content is not covered.
Excluded from the scope are video games, films, audiovisual works, or interactive experiences that do not rely on user-generated AI content.
Main obligations
- A detection tool made available free of charge
- Latent disclosures (embedded metadata) and the option of visible disclosures
- Contractual obligations when the system is licensed to third parties (maintenance of disclosure capabilities and revocation mechanisms if altered)
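By way of illustration, a latent disclosure can be modeled as a small metadata payload carried with the generated file, identifying the provider, the system used, and the time of creation. The field names and the toy `embed_disclosure` helper below are hypothetical; a real implementation would use a format-specific embedding mechanism (for example, C2PA-style provenance manifests) that the provider's free detection tool can read:

```python
import hashlib
import json
from datetime import datetime, timezone

def build_latent_disclosure(provider, system_name, system_version, content_bytes):
    """Build a latent-disclosure payload for a piece of AI-generated media.

    Field names are illustrative; the statute's requirement is that the
    disclosure convey provider and provenance information detectable by
    the provider's detection tool.
    """
    return {
        "provider": provider,
        "system": f"{system_name}/{system_version}",
        "created_utc": datetime.now(timezone.utc).isoformat(),
        # Content-derived identifier (hypothetical choice of mechanism).
        "content_id": hashlib.sha256(content_bytes).hexdigest()[:16],
    }

def embed_disclosure(media_metadata, disclosure):
    """Attach the disclosure to a file's metadata map (toy stand-in for
    embedding into actual image/audio/video metadata)."""
    media_metadata = dict(media_metadata)
    media_metadata["ai_disclosure"] = json.dumps(disclosure)
    return media_metadata

meta = embed_disclosure(
    {},
    build_latent_disclosure("ExampleAI Inc.", "imagegen", "3.1", b"fake-image-bytes"),
)
```

The contractual obligations then amount to ensuring licensees keep this embedding intact, with license revocation if it is stripped or altered.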
Sanctions and enforcement
- Civil penalty: $5,000 per violation (each day may constitute an additional violation)
- Enforcement by the Attorney General and certain local authorities.
4. New York City – Local Law 144 (Automated Employment Decision Tools)
Unlike state laws, Local Law 144 is a municipal regulation adopted by the city of New York, targeting a specific use case: employment.
Objective: Reduce the risk of discrimination in hiring and promotion decisions when automated tools are used, through independent audit and transparency toward candidates.
Definition: An Automated Employment Decision Tool (AEDT) means:
- A computational tool based on machine learning, statistical modeling, data analysis, or artificial intelligence
- That provides a simplified output (for example a score, classification, or recommendation)
- Used as an exclusive or determining factor in a hiring or promotion decision
- Or used to substitute human decision-making.
Not considered AEDTs:
- Spam filters, firewalls, antivirus software
- Calculators, spreadsheets, databases
- Datasets or data compilations without automated decision functions
Main obligations
- Independent bias audit before use and on a recurring basis (annually in practice)
- Candidate notification regarding the use of an AEDT and key evaluation elements
- Publication of information related to the latest audit (date and summary)
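For selection-based outcomes, the DCWP's implementing rules build the bias audit around an impact ratio: each category's selection rate divided by the selection rate of the most selected category. A minimal sketch of that calculation, with invented sample figures:

```python
def selection_rates(outcomes):
    """Selection rate per category: candidates selected / candidates assessed."""
    return {cat: selected / total for cat, (selected, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Impact ratio per the DCWP methodology for selection-based AEDTs:
    each category's selection rate divided by the highest selection rate
    observed across categories."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical audit data: category -> (candidates selected, candidates assessed)
ratios = impact_ratios({
    "category_a": (40, 100),  # 40% selection rate
    "category_b": (25, 100),  # 25% selection rate
})
```

Here category_a, having the highest selection rate, gets a ratio of 1.0, and category_b gets 0.25 / 0.40 = 0.625. The published audit summary reports these ratios per category; for tools that score rather than select, the rules use an analogous ratio based on scoring rates.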
Sanctions and enforcement
- Authority: NYC Department of Consumer and Worker Protection (DCWP)
- Fines: $500 (first violation), then $1,500 (subsequent violations)
- No private action: enforcement by the local authority.
IV. Conclusion: A “Federal and Patchwork” Framework Requiring a Structured Compliance Approach
U.S. AI regulation relies on a balance:
- On one side, a federal strategy largely driven by the executive branch, setting national priorities for innovation, infrastructure, and competitiveness.
- On the other, state and local legislation creating concrete legal obligations regarding audits, transparency, risk management, targeted prohibitions, and sanctions.
For organizations, the challenge is not only to “know the rule,” but to map AI uses, identify relevant jurisdictions, and implement verifiable governance (documentation, processes, controls, monitoring).
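In practice, that mapping exercise can start as simple structured data linking each AI use case to the jurisdictions and obligation categories it may trigger. The rule set below is a deliberately simplified, hypothetical screening aid, not legal analysis:

```python
# Hypothetical, simplified trigger rules: (jurisdiction, condition, obligations).
RULES = [
    ("Colorado", lambda s: s["high_risk_decision"],
     ["risk management policy", "impact assessment", "consumer notice"]),
    ("New York City", lambda s: s["use_case"] == "hiring",
     ["independent bias audit", "candidate notice"]),
    ("California", lambda s: s["generative"],
     ["training data summary", "content provenance disclosures"]),
]

def map_obligations(system_profile):
    """Return the obligation categories a system's profile may trigger,
    keyed by jurisdiction (illustrative first-pass screening only)."""
    return {juris: obligations
            for juris, condition, obligations in RULES
            if condition(system_profile)}

# Example: a predictive hiring tool used for consequential decisions.
profile = {"use_case": "hiring", "high_risk_decision": True, "generative": False}
hits = map_obligations(profile)
```

Even a toy model like this makes the governance point concrete: the same system can trigger distinct audit, documentation, and notice duties in several jurisdictions at once, which is why use-case mapping precedes any compliance workplan.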
Master Your AI Compliance in the United States
Are you deploying (or planning to deploy) AI systems in the United States?
Naaia helps you structure an operational governance framework: system mapping, risk management, compliance documentation, and continuous monitoring through an AI management platform designed to industrialize these requirements over time.