AI Act and Harmonised Standards: role, development process, and state of progress of European AI standards

The AI Act adopts a risk-based approach: the greater the risks an AI system poses to people’s health, safety, or fundamental rights, the stricter the legal obligations it must comply with. This graduated logic forms the foundation of the new European framework of trust for AI. To make these obligations operational, the regulation combines two […]
Why AI testing is becoming a strategic issue for organisations

Testing an artificial intelligence system is no longer a mere technical formality. It is an essential condition for guaranteeing the reliability, security, and compliance of modern systems. Without rigorous testing processes, an AI can produce errors, amplify biases, invent answers, or behave unexpectedly. These failures undermine user trust, generate legal risks, and can damage the organisation’s […]
From AI Agent to Agentic AI: two notions that should no longer be confused

In the field of AI, the notions of “AI agent” and “agentic AI” are increasingly mentioned, often presented as equivalent even though they refer to two distinct concepts. As these technologies evolve, clarifying this distinction becomes crucial: it is an essential condition for designing appropriate governance frameworks, managing emerging risks, and steering innovation responsibly. 1. […]
The golden tech triangle: How Qatar, Saudi Arabia and the UAE are building sovereign, ethical and secure AI

This analysis unpacks the data, AI, and cybersecurity regulatory frameworks shaping Qatar, Saudi Arabia, and the United Arab Emirates. It explains how these nations are designing ethical, secure, and sovereign AI ecosystems aligned with their national visions, and what this means for corporate compliance. 1. Data: the foundation of national AI strategies 1.1 Qatar: a […]