
The Canadian Regulation of AI: Focus on the AIDA

Last updated on 08/09/2023 

The AIDA: 5 Key Points to Remember

  • In June 2022, the Canadian government introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the 2022 Digital Charter Implementation Act.
  • The AIDA adopts a risk-based approach by distinguishing "high-impact AI systems" and imposing obligations on them.
  • These obligations will be guided by "grand principles" aligned with international standards for AI governance: human oversight and monitoring, transparency, fairness and equity, safety, accountability, validity, and robustness.
  • The AIDA would be the first Canadian law to regulate AI systems in the private sector.
  • There are two types of sanctions for non-compliance with the AIDA: administrative monetary penalties for regulatory offenses and a separate mechanism for criminal offenses.

Introduction

In June 2022, the Canadian Government introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the 2022 Digital Charter Implementation Act. The aim is to enhance Canadians’ trust in digital technologies.

Canada already has a robust legal framework that can apply to certain uses of AI. Bill C-27 seeks to modernize this existing framework and introduce new regulations specifically for the use of artificial intelligence systems (AI systems or AIS).

The Global Emergence of AI Regulations: Canada’s Position

The AIDA was designed to align with national standards while also complying with human rights principles and the evolving international standards in artificial intelligence, such as the AI Act proposed by the European Union. For instance, similar to the AI Act, the AIDA currently defines an "artificial intelligence system" based on the OECD definition. Canada plans to collaborate with international partners such as the EU, the UK, and the US to ensure comprehensive protection for Canadians and their businesses.

The Risk-Based Approach of the AIDA

Like the EU's AI Act, the Canadian government adopts a risk-based approach to mitigating the risks posed by AI. AI systems would be regulated according to their level of risk, with particular attention to services, employment, health and safety, biometrics, and the influencing of behaviour. Consequently, the AIDA provides a series of obligations and responsibilities for "high-impact AI systems" and their operators.

"High-Impact AI Systems"

Under the AIDA, the reckless or malicious use of AI that could cause serious harm to Canadians or their interests is prohibited, and two types of risks associated with high-impact AI systems will be regulated:

  • The risk of harm to individuals, whether physical or psychological, damage to property, or economic loss.
  • The risk of systemic bias in the private sector: the goal is to ensure that there is no unjustified adverse differentiation based on one or more prohibited grounds of discrimination under the Canadian Human Rights Act. This ensures that a system does not use, for example, gender as an indicator of income.

Furthermore, measures must be put in place before an artificial intelligence system is brought to market in Canada. These measures would be established through regulations that provide risk-proportionate obligations and are distinct based on the type of regulated activity. The development of appropriate regulations would take place through an extensive consultation process.

Principles Guiding the Obligations of High-Impact AI Systems

A series of "grand principles" would guide the obligations related to high-impact AI systems, aligned with international standards for AI governance. Most of these principles can also be found in Article 4bis on general principles applicable to all AI systems in the latest version of the AI Act (as amended by the European Parliament):

  • Human oversight and monitoring: High-impact AI systems are designed and developed to enable meaningful human oversight, with an appropriate level of interpretability and outputs that can be measured and assessed.
  • Transparency: Providing the public with sufficient information to understand the capabilities, limitations, and potential impacts of AI systems.
  • Fairness and equity: Taking appropriate measures to mitigate discriminatory outcomes.
  • Safety: Taking appropriate measures to mitigate harm risks.
  • Accountability: Adopting governance mechanisms for high-impact AI systems to ensure compliance with their obligations.
  • Validity and robustness: The AI system functions as intended and is stable and resilient.

Who Will Have to Comply with the AIDA?

The AIDA would apply to organizations engaged in the activities it regulates, and it would be the first Canadian law to regulate AI systems in the private sector.

The Canadian government has issued guidance on company responsibilities across the AI lifecycle in a supplementary document, published on March 13, 2023.

  • Companies designing or developing high-impact AI systems must identify and address risks related to harm and bias. They must document the AI system’s appropriate use and limitations, adjusting measures as needed.
  • Companies making high-impact AI systems available must consider potential uses during deployment. They must ensure users understand any usage restrictions and the system’s limitations.
  • Companies operating AI systems must use them as intended, assess and mitigate risks, and continuously monitor their performance.

The Canadian government also offers specific examples of risk assessment and mitigation measures tailored to regulated activities. This aims to guide companies and facilitate compliance.

Moreover, the AIDA promotes responsible AI research, innovation, and the development of a regulatory framework adaptable to AI’s evolving nature. The government plans regular cycles to develop and assess regulations and guidelines in collaboration with system operators and stakeholders.

AIDA Implementation in Line with the Evolving and Technical Nature of AI

The Minister of Innovation, Science, and Industry would enforce the AIDA and keep it aligned with technological advancements. The minister could order the publication of records, require audits where harm or a contravention is suspected, and order that a system cease to be used if it poses an imminent risk of harm. A new Commissioner for AI and Data would support the minister by establishing a centre of expertise for drafting regulations and enforcing the AIDA.

Under the AI Act, national supervisory authorities ensure its application. In France, the Conseil d’État proposes CNIL as the AI system regulator. The AI Act’s latest version proposes an AI Office to coordinate national authorities, mediate, and provide expertise.

Sanctions for Non-Compliance with Canadian Regulations

The AIDA introduces two types of sanctions for non-compliance: administrative monetary penalties (AMPs) and prosecution of regulatory offences. Criminal offences apply when individuals cause serious harm and are punishable by imprisonment under the Criminal Code.

Additionally, the AIDA establishes three new criminal offenses to directly prohibit and penalize behaviors linked to artificial intelligence use.

Next Steps in Regulation

The AIDA would come into force as early as 2025. Bill C-27 must be reviewed in committee in the House of Commons, then it will pass through readings in the Senate before receiving royal assent. The government also intends to launch an inclusive consultation process to guide the implementation of the regulations. This will include determining the types of systems to be considered as high-impact and the standards and certifications for compliance to be used.
