AIDA: 5 points to remember
- In June 2022, the Canadian government introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022.
- AIDA adopts a risk-based approach, singling out “high-impact AI systems” and imposing obligations on them.
- These obligations would be guided by “key principles” aligned with international AIS governance standards: human oversight and monitoring, transparency, fairness and equity, safety, accountability, and validity and robustness.
- AIDA would be the first Canadian law to regulate AIS in the private sector.
- There are two types of penalties for non-compliance with AIDA: administrative monetary penalties and prosecution of regulatory offences; and a separate mechanism for criminal offences.
In June 2022, the Canadian government introduced the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022. Its aim is to strengthen Canadians’ confidence in digital technologies.
Canada already has a solid legal framework that can be applied to certain uses of AI. Bill C-27 seeks to modernize this existing framework, as well as introducing new regulations exclusive to the use of artificial intelligence systems (AI systems or AIS).
The global emergence of AI regulations: Canada’s position
AIDA has been designed to align with domestic standards while complying with human-rights principles and evolving international standards in the field of artificial intelligence, such as the European Union’s proposed AI Act. For example, like the AI Act, AIDA currently defines an “artificial intelligence system” using the OECD definition. Canada plans to collaborate with international partners such as the EU, the UK and the US to ensure global protection for Canadians and their businesses.
AIDA’s risk-based approach
By adopting a risk-based approach similar to that of the AI Act, the Canadian government seeks to mitigate risks in order to prevent harm and discriminatory outcomes. Under this approach, AI systems are regulated according to their level of risk. AIS related to access to services or employment, health and safety, biometric systems, and systems capable of influencing human behavior on a large scale have particularly caught the government’s attention (as they have the European Union’s in developing the AI Act). AIDA therefore lays down a series of obligations and responsibilities for “high-impact AIS” and their operators.
High-impact AI systems
Under AIDA, reckless or malicious uses of AI that could cause serious harm to Canadians or their interests are prohibited, and two types of risk associated with high-impact AI systems would be regulated:
- The risk of harm to individuals: physical or psychological harm, damage to property, or economic loss.
- The risk of systemic bias in the private sector: the aim is to ensure that there is no unjustified adverse differentiation based on one or more of the grounds of discrimination prohibited by the Canadian Human Rights Act. This would ensure, for example, that a system does not use gender as a proxy for income.
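To make the systemic-bias point concrete, the adverse-differentiation check described above can be sketched as a simple comparison of favourable-outcome rates across groups defined by a protected attribute. This is an illustrative example only: the function names, the sample data and the 0.8 threshold (borrowed from the common “four-fifths” rule of thumb) are assumptions, not anything prescribed by AIDA or its regulations.

```python
# Illustrative sketch: detecting unjustified adverse differentiation across a
# protected attribute (e.g., gender) by comparing favourable-outcome rates.
# The 0.8 threshold is an assumed rule of thumb, not a legal standard.

def selection_rates(outcomes: dict[str, list[int]]) -> dict[str, float]:
    """Favourable-outcome rate per group (each outcome is a 0/1 decision)."""
    return {group: sum(v) / len(v) for group, v in outcomes.items()}

def disparate_impact_ratio(outcomes: dict[str, list[int]]) -> float:
    """Ratio of the lowest group rate to the highest group rate."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical loan-approval decisions, grouped by a protected attribute.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 favourable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 favourable
}

ratio = disparate_impact_ratio(decisions)
if ratio < 0.8:
    print(f"Potential adverse differentiation: ratio = {ratio:.2f}")
```

A real compliance assessment would of course go further (statistical significance, proxy variables, intersectional groups), but this captures the basic idea of measuring whether outcomes differ unjustifiably across protected groups.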
In addition, measures would have to be put in place before an artificial intelligence system could be placed on the market. These measures would be set out in regulations, with obligations proportionate to the risk and distinct according to the type of regulated activity. The regulations would be developed through an extensive consultation process.
Principles guiding the obligations of high-impact AI systems
A series of key principles would guide the obligations for high-impact AI systems, in line with international standards for AIS governance. Indeed, most of them also appear in Article 4a of the latest version of the AI Act (as amended by the European Parliament), on the general principles applicable to all AI systems:
- Human oversight and monitoring: high-impact AIS are designed and developed to promote an appropriate level of interpretability, and their performance is measured and assessed.
- Transparency: providing the public with sufficient information to understand the capabilities, limitations and potential impacts of the AIS.
- Fairness and equity: taking appropriate measures to mitigate discriminatory outcomes.
- Safety: taking appropriate measures to mitigate the risk of harm.
- Accountability: adopting governance mechanisms for high-impact AIS to ensure compliance with their obligations (see “Who must comply with AIDA?”).
- Validity and robustness: the AIS functions as intended, and is stable and resilient.
Who must comply with AIDA?
AIDA would apply to organizations engaged in the activities it regulates. It would also be the first Canadian law to regulate AIS in the private sector.
The Canadian government has provided guidance on companies’ responsibilities according to their position in the AI lifecycle, in a companion document published on March 13, 2023 and aimed at helping Canadians and AI stakeholders understand the bill:
- Companies designing or developing a high-impact AI system will need to take steps to identify and address risks of harm and bias, document the appropriate uses and limitations of the AIS, and adjust measures as necessary.
- Companies making a high-impact AI system available will need to consider potential uses at deployment and take steps to ensure that users are aware of any restrictions on how the system is intended to be used and understand its limitations.
- Companies managing the operation of an AI system will need to use it as directed, assess and mitigate risks, and monitor the system on an ongoing basis.
The Canadian government also provides examples of risk assessment and mitigation measures based on the regulated activity, to guide companies and facilitate compliance.
In addition, AIDA supports responsible AI research and innovation, and the emergence of a regulatory framework capable of adapting to the evolving nature of AI. To this end, the government plans regular cycles of development and evaluation of regulations and guidelines, in collaboration with operators of regulated systems and other stakeholders.
Enforcement of AIDA in line with the evolving, technical nature of AI
The Minister of Innovation, Science and Industry would be empowered to administer and enforce AIDA, ensuring that the law evolves in line with technological developments. The Minister could also order the production of records or carry out an audit in the event of harm or a contravention, and could order the cessation of use of a system posing a risk of imminent harm. A new position of AI and Data Commissioner would be created to build a centre of expertise assisting the Minister in developing regulations and applying AIDA.
Under the AI Act, Member States’ national supervisory authorities are responsible for ensuring that the regulation is applied and implemented. In France, the Conseil d’État has encouraged the CNIL to become the national supervisory authority responsible for regulating AI systems. In addition, the latest version of the AI Act provides for the creation of an AI Office, whose role would be to oversee and coordinate the activities of national authorities, mediate between them, and provide them with advice and expertise.
Penalties for non-compliance with regulations
AIDA provides for two types of sanctions in the event of non-compliance: administrative monetary penalties (AMPs) and prosecution of regulatory offences; and, separately, criminal offences where an individual causes serious harm, punishable by imprisonment under the Criminal Code.
AIDA in fact creates three new criminal offences to prohibit and directly punish conduct specific to the use of artificial intelligence.
Next regulatory steps
AIDA would come into force in 2025 at the earliest. Bill C-27 must first be examined in committee in the House of Commons and then pass through the Senate before receiving Royal Assent. The government also intends to launch an inclusive consultation process to guide implementation of the regulations, in particular to determine which types of system should be considered high-impact and which compliance standards and certifications should be used.