Last updated on 08/23/2023
5 key points to remember:
- AI brings numerous ethical challenges, particularly in the fields of employment and healthcare.
- Generative AI systems could affect 300 million full-time jobs worldwide.
- However, according to the OECD, AI mainly affects the quality of jobs rather than their quantity.
- Many solutions exist to promote ethical AI, such as the principles of fairness and vigilance.
- The European Union, the United States, Canada, and China have integrated ethics as a fundamental principle in their regulations.
Introduction
When we talk about artificial intelligence (AI), ethics is at the heart of the concerns. In the digital world, ethics is defined as the responsible, sustainable, and rational use of AI. Ethical AI must therefore be able to operate without discrimination while having a positive impact on the environment and society. This article addresses the ethical issues raised by AI, in particular its impact on employment, and presents solutions and recommendations for ethical AI.
The ethical issues of AI
At first glance, artificial intelligence and ethics seem to diverge. Ethics addresses moral questions in their singularity, whereas AI, and generative AI in particular, relies on formal, largely unnuanced rules. The CNIL, the French data protection authority, published a report in 2017 on the ethical issues raised by algorithms and artificial intelligence. According to this report, the issues raised by AI are numerous and concern: biases, discrimination, and exclusion; algorithmic fragmentation; human identity; and the quality, quantity, and relevance of data.
Discrimination is indeed one of the major ethical problems of AI. In the United States, the complaint filed against the insurer State Farm illustrates this issue: the company is accused of using claims-processing algorithms that favored white homeowners, whose requests were reportedly processed three times faster than those of Black homeowners.
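To make this kind of disparity concrete, here is a minimal, purely illustrative sketch of the check an auditor could run on processing times by group. The data, column names, and threshold below are invented for the example and do not come from the State Farm case.

```python
# Illustrative only: a minimal fairness check on claim-processing times,
# using hypothetical data (column names and threshold are assumptions).
import pandas as pd

# Hypothetical claims log: one row per claim, with the homeowner's group
# and the number of days the insurer took to settle the claim.
claims = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "days_to_settle": [12, 15, 10, 35, 40, 30],
})

# Compare average processing time per group; a large gap between groups
# is a red flag that the underlying process (or algorithm) may be biased.
avg_days = claims.groupby("group")["days_to_settle"].mean()
ratio = avg_days.max() / avg_days.min()

print(avg_days)
print(f"Slowest group waits {ratio:.1f}x longer on average")
if ratio > 1.25:  # arbitrary illustrative threshold
    print("Potential disparate treatment: investigate the decision pipeline")
```

Real audits rely on far richer statistical controls, but the principle is the same: measure outcomes by group before trusting the system.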
The impact of AI on employment
According to a Goldman Sachs report dated March 28, 2023, AI, and generative AI systems in particular, could affect 300 million full-time jobs worldwide and automate the equivalent of a quarter of current work tasks. Generative AI can write computer code, spot errors for programmers, analyze data for financial analysts, and synthesize information for legal assistants.
The DALL-E platform, developed by OpenAI and capable of quickly creating images from textual descriptions, will also affect jobs in artistic creation and graphic design.
However, this risk should be put into perspective, because AI will also create many jobs. According to the World Economic Forum's Future of Jobs 2023 study, 69 million jobs will be created over the next five years, in part thanks to AI. These tools can also allow humans to refocus on subjects where they have real expertise and eliminate repetitive tasks, transforming jobs toward higher human added value.
AI “influences the quality rather than the quantity of jobs”
According to a July 14, 2023 study by the OECD, AI primarily influences job quality rather than quantity. In the finance and manufacturing sectors, 63% of workers using AI report greater professional fulfillment. The report also notes that AI can reduce tedious and risky tasks, improving worker motivation and safety. Despite these benefits, the OECD recommends social dialogue and public action to encourage employers to provide training and to support the digital transformation of jobs.
What solutions for ethical AI?
Various public bodies propose solutions to the ethical issues raised by AI's potential negative impacts. In 2017, the CNIL published a report following a public debate on the ethics of algorithms and AI. Two founding principles emerged for an "AI serving humans." First, all algorithms should uphold a principle of fairness to safeguard personal data and prioritize users' interests. Second, a principle of vigilance should provide a direct response to the challenges raised by these unpredictable technological objects.
Recommendations from the CNIL
In addition to these two principles, the CNIL recommends that public authorities and private organizations:
- Train all actors in the “algorithmic chain” in ethics (designers, professionals, citizens): digital literacy must enable every human to understand the workings of the machine.
- Make algorithmic systems understandable by strengthening existing rights and organizing mediation with users.
- Work on the design of algorithmic systems to serve human freedom, to counter the “black box” effect.
- Create a national platform for algorithm audits.
- Encourage research on ethical AI and launch a major participatory "national cause" initiative around a research project of general interest.
- Strengthen the ethical function within companies.
Other recommendations propose educating people about AI from a young age to prevent inequalities, as well as providing continuing professional training. In healthcare, the white paper of the French National Council of the Order of Physicians (CNOM) emphasizes integrating ethics into AI and stresses the importance of ensuring that humans retain ethical decision-making capabilities in an evolving technological landscape.
AI regulations around the world: ethics as a fundamental principle
To implement these solutions and recommendations, various countries are developing regulations on AI by integrating ethical principles. Ethical AI is now a fundamental and global principle. The Recommendation on the Ethics of AI adopted by the 193 member states of UNESCO in November 2021 illustrates this. It is indeed the very first global normative instrument on the subject and establishes a set of major ethical principles to be respected, such as fairness and non-discrimination, the right to privacy, and human oversight and decision-making.
In Europe:
The European Union aims, through the AI Act currently under debate, to establish itself as a global leader in the development of safe, reliable, and ethical artificial intelligence. The legislation gives a central place to the protection of ethical principles.
In the latest amendments, dated June 14, 2023, the European Parliament reinforced ethical considerations in the AI Act. Recital 9a highlights the importance of AI systems adhering to general principles that promote a coherent, human-centered approach to ethical and trustworthy AI.
Ethical obligations are central to the AI Act: operators must prioritize the development and use of ethical, trustworthy, and human-centered AI systems. Safeguards are essential to ensure that AI respects the values of the Union and the Charter of Fundamental Rights.
In 2020, the European Parliament adopted a resolution with recommendations for the Commission on AI, robotics, and related technologies. Key recommendations included human-centered AI, a risk-based approach, good governance, and eliminating biases and discrimination.
In the United States and Canada:
The United States is also seeking to integrate ethics into its AI regulations. It has published a blueprint for an AI bill of rights built on five major principles, including protection against algorithmic discrimination and the fair design and use of AI systems. To learn more about U.S. AI regulation, an article on the topic is available on naaia.ai.
Canada, in its Artificial Intelligence and Data Act (AIDA, known as LIAD in French), includes the principle of ethics in a risk-based approach designed to prevent harm and discriminatory outcomes.
In China:
Finally, China has issued six ethical guidelines for AI, including keeping AI under human control, improving human well-being, and promoting fairness and justice.
The scientific community is also taking a growing interest in AI ethics. For example, Vanessa Nurock, a lecturer in political theory and ethics, asks whether AI can be gendered. Her study concludes that AI carries many gender-discriminatory biases, as illustrated by Amazon's CV-screening algorithm, which systematically downgraded CVs containing feminine terms.
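As a purely illustrative sketch, this kind of bias can be probed with a counterfactual test: score pairs of CVs that differ only in a gendered term and compare the results. The score_cv function, the text pairs, and the tolerance below are hypothetical stand-ins for the sake of the example, not Amazon's actual system.

```python
# Illustrative sketch of a counterfactual bias test for a CV-ranking model.
# score_cv is a hypothetical stand-in; in a real audit it would be the
# trained ranker under review.
def score_cv(text: str) -> float:
    # Toy model that (deliberately) penalizes a feminine term, to show
    # what the test detects.
    return 1.0 - 0.3 * ("women's" in text.lower())

# Minimal pairs: identical phrasing except for the gendered term.
paired_cvs = [
    ("Captain of the chess club", "Captain of the women's chess club"),
    ("Graduate of a state college", "Graduate of a women's college"),
]

# A model that is fair with respect to these terms should score each pair
# (nearly) identically; large gaps reveal the kind of bias Nurock describes.
for neutral, gendered in paired_cvs:
    gap = score_cv(neutral) - score_cv(gendered)
    flag = "BIAS" if abs(gap) > 0.05 else "ok"
    print(f"{flag}: gap={gap:+.2f} for {gendered!r}")
```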
In the healthcare sector, Julien Duguet, Gauthier Chassang, and Jérôme Béranger have explored the ethical implications of AI use, questioning the extent to which algorithms and their creators should influence medical decisions. They argue that algorithms must maintain ethical integrity throughout their development and deployment, with responsibility shared between designers and owners.
Contributions of this kind are multiplying, and as algorithmic applications become more widespread, AI ethics will undoubtedly remain a subject of ongoing reflection.