AI-related biases: understanding, identifying and mitigating distortions for fair governance

AI now occupies a central place in our societies, influencing a wide range of sectors, from healthcare and education to marketing and legal systems. However, far from being neutral tools, AI systems can reproduce or amplify existing biases, or even create new ones. These systematic distortions can impact decisions, behaviours and interactions, thereby undermining fairness […]
AI Act and Harmonised Standards: role, development process, and state of progress of European AI standards

The AI Act adopts a risk-based approach: the greater the risks an AI system poses to people’s health, safety, or fundamental rights, the stricter the legal obligations it must comply with. This graduated logic forms the foundation of the new European framework of trust for AI. To make these obligations operational, the regulation combines two […]
Why AI testing is becoming a strategic issue for organisations

Testing an artificial intelligence system is no longer a mere technical formality. It is an essential condition for guaranteeing the reliability, security and compliance of modern systems. Without rigorous testing processes, an AI can produce errors, amplify biases, invent answers or behave unexpectedly. These failures undermine user trust, generate legal risks and can damage the organisation’s […]
From AI Agents to Agentic AI: two notions that should no longer be confused

In the field of AI, the notions of “AI agent” and “agentic AI” are increasingly invoked, often presented as interchangeable even though they refer to two distinct concepts. As these technologies evolve, clarifying this distinction becomes crucial: it is an essential condition for designing appropriate governance frameworks, managing emerging risks, and steering innovation responsibly. 1. […]