AI and ethics: what’s at stake?

AI & Ethics: 5 points to remember

  • AI raises many ethical issues, particularly in the fields of employment and health.
  • Generative AI systems could affect 300 million full-time jobs worldwide.
  • Nevertheless, according to the OECD, AI influences the quality rather than the quantity of jobs.
  • Many solutions exist to promote ethical AI, such as the principle of loyalty and the principle of vigilance.
  • The European Union, the United States, Canada and China have incorporated ethics as a fundamental principle in their regulations.


When it comes to artificial intelligence (AI), ethics are at the heart of everything we do.

Digital ethics is defined as the responsible, sustainable and reasoned use of AI. Ethical AI must therefore be able to operate without discrimination, and have a positive impact on the environment and society.

The fears AI arouses are concentrated mainly in the areas of health and employment.

This article examines the ethical issues raised by AI, in particular its impact on employment, through the prism of existing and forthcoming regulations. It then presents the solutions and recommendations put forward by public authorities to implement the principle of ethical AI.

  • The ethical challenges of AI

At first glance, artificial intelligence and ethics seem to diverge. Ethics approaches moral questions in their singularity, whereas AI, and generative AI in particular, relies on rather formal text, with little nuance.

The CNIL published a report in 2017 on the ethical challenges of algorithms and artificial intelligence. According to the report, the issues raised by AI are numerous and include: bias, discrimination and exclusion; algorithmic fragmentation; human identity; and data quality, quantity and relevance.

Indeed, discrimination is one of the major ethical problems of AI. Several examples in the United States illustrate this, notably the case of the insurer State Farm, which faced a complaint over its use of algorithms allegedly favoring white homeowners, whose claims were three times more likely than those of black homeowners to be processed quickly.

As far as healthcare is concerned, although AI has been shown to offer numerous benefits (particularly for research and diagnostic support), it nevertheless presents risks for privacy and medical confidentiality. In particular, breaches of confidentiality and misuse of patient data can lead to discrimination against people as employees or insured persons. Finally, the risks inherent in AI in the medical field, notably errors or biases, have a direct impact on liability issues.

  • The impact of AI on employment

According to a Goldman Sachs report of March 28, 2023, AI, and in particular so-called “generative” AI systems, could affect 300 million full-time jobs worldwide, automating around 25% of the entire labor market. Generative AI will be able to perform certain jobs. Examples include computer programmers, since generative AI can write code and spot errors; financial analysts, who handle large volumes of numerical data; and legal assistants, who synthesize information.

The DALL-E platform, developed by OpenAI and capable of rapidly creating images from text descriptions, will also have an impact on the creative arts and graphic design professions.

However, this risk needs to be qualified, as AI will also create many new jobs. According to the World Economic Forum’s Future of Jobs 2023 study, 69 million jobs will be created over the next five years thanks to AI. These tools can also enable humans to refocus on subjects where they have real expertise and eliminate repetitive tasks. Jobs will be transformed, with greater emphasis on human added value.

An OECD study dated July 14, 2023 states that AI “influences the quality rather than the quantity of jobs”. The survey found that, among workers using AI in the finance and manufacturing sectors, 63% said they were more fulfilled professionally. According to the same report, “workers and employers report that AI can reduce tedious and dangerous tasks, improving workers’ motivation and physical safety”. Despite these positive effects, the OECD recommends social dialogue and public action to encourage employers to train their employees and support them through the digital transformation of their businesses.

  • What solutions for ethical AI?

To address these ethical issues, solutions are being proposed by various public bodies in areas where AI can have a negative impact. In 2017, the CNIL published a report following a public debate it led on the ethical challenges of algorithms and AI. The result is two founding principles for “AI at the service of mankind”. First of all, a principle of loyalty (fairness) should be applied to all algorithms, to protect personal data and ensure that users’ interests come first. Secondly, a principle of vigilance would enable a “direct response to the requirements imposed by these technological objects due to their unpredictable nature”.

In addition to these two principles, the CNIL recommends that public authorities and private bodies:

  1. Train all links in the “algorithmic chain” (designers, professionals, citizens) in ethics: digital literacy must enable every human being to understand the workings of the machine;
  2. Make algorithmic systems understandable by strengthening existing rights and organizing mediation with users;
  3. Design algorithmic systems to serve human freedom and counter the “black box” effect;
  4. Set up a national platform for auditing algorithms;
  5. Encourage research into ethical AI and launch a major national participatory cause around a research project of general interest;
  6. Strengthen the ethics function within companies (for example, by setting up ethics committees, disseminating sector-specific best practices or revising ethics charters).

Other recommendations call for AI education from an early age to avoid inequalities, but also for training professionals throughout their careers. In the healthcare sector, the white paper published by the Conseil National de l’Ordre des Médecins (CNOM), “Doctors and patients in the world of data, algorithms and artificial intelligence”, stresses the need to integrate the notion of ethics into AI: “in this fast-moving technological whirlwind, we must propose […] to succeed in organizing and ensuring complementarity between man and machine, with the former retaining the ethical capacity to always have the last word”.

  • AI regulations around the world: ethics as a fundamental principle

In order to implement these solutions and recommendations, various countries are developing AI regulations that integrate ethical principles. Ethical AI is now a fundamental, global principle. The Recommendation on the Ethics of AI adopted by UNESCO’s 193 member states in November 2021 illustrates this. It is the world’s first standard-setting instrument on the subject, and provides a basis of major ethical principles to be respected, such as fairness and non-discrimination, the right to privacy, and human supervision and decision-making.

  • In Europe:

The European Union’s main objective with the AI Act, currently under discussion, is to “make the Union a leading global player in the development of safe, reliable and ethical artificial intelligence”, as well as to guarantee “the protection of ethical principles”.

In the latest amendments to the AI Act, dated June 14, 2023, the European Parliament included new provisions on ethics. Recital 9bis underlines this importance: “It is important to note that AI systems should make every effort to comply with the general principles establishing a high-level framework that promotes a coherent and human-centered approach to ethical and trustworthy AI”.

Ethical obligations thus feature prominently in the AI Act: operators will have to make every effort to develop and use AI systems that are human-centered, ethical and trustworthy. Safeguards must also be put in place to ensure the development and use of ethical AI “that respects the values of the Union and the Charter”.

This is not the first time that the European Union has taken an interest in the subject. Indeed, in 2020, the European Parliament adopted a resolution containing recommendations to the Commission concerning a framework for the ethical aspects of artificial intelligence, robotics and related technologies. These recommendations for ethical AI included: AI that is human-centered and developed by humans; an approach based on risk assessment; good governance; and an absence of bias and discrimination.

  • In the United States, Canada and China:

The United States is also looking to integrate ethics into its AI regulations. To this end, it has drawn up a charter of rights based on five key principles, including protection against algorithmic discrimination and the fair use and design of AI systems. To learn more about U.S. AI regulations, a related article is available at naaia.ai.

Canada, in its AI and Data Act (LIAD), includes the principle of ethics in a risk-based approach designed to prevent harm and discriminatory outcomes.

Finally, China has developed six guidelines for ethical AI, including human control over AI systems, improving the human condition, and promoting fairness and justice.


The scientific world is also taking an increasing interest in the subject of ethics and AI. For example, Vanessa Nurock, a lecturer in political theory and ethics, questions the idea that AI can be gendered. Her study concludes that AI suffers from numerous gender biases, as demonstrated by the example of Amazon’s CV-sorting algorithm, in which résumés containing feminine terms were systematically devalued.

Turning to the healthcare sector, Julien Duguet, Gauthier Chassang and Jérôme Béranger looked at the ethical issues surrounding the use of AI, asking in particular: “How far can we let algorithms and the people who design them control medical decisions?” They advocate that algorithms should be “ethical and moral from the moment they are developed to the moment they are used, since responsibility lies with both designers and owners”.

More and more authors are writing on the subject, and it’s certain that as algorithmic applications become more widespread, AI ethics will continue to be the subject of much reflection in the future…