Generative AI and regulation: understanding the risks and obligations of the AI Act

With the rise of AI systems capable of acting, interacting, and sometimes deciding, questions of security, transparency, and responsibility are becoming urgent. What are the risks of generative AI? How can they be anticipated without hindering innovation? Where do we stand on the regulation of generative AI? The AI Act, the first attempt at AI regulation at the European level, seeks […]
Embedded AI and medical devices: steering a regulatory and strategic transformation

The integration of artificial intelligence into medical devices upends the traditional logic of regulation. Between the requirements of the medical device framework, the constraints of the AI Act, the governance of algorithmic risks, and the management of data (GDPR, Data Act), companies must rethink their compliance strategy in the face of a complex European framework combining constraints that are at […]
High-risk AI: understanding the AI Act list to anticipate your obligations

The European regulation on artificial intelligence, the AI Act, introduces a risk-based approach to governing the use of AI. It imposes specific requirements on high-risk AI systems, meaning those likely to have a significant impact on fundamental rights, health, or safety. A public consultation is currently open to clarify the classification of the systems concerned, possible exceptions, and […]
Shadow AI: a governance and compliance challenge for organizations

In the wake of BYOD (Bring Your Own Device), a new practice is gaining ground in companies: Shadow AI. Employees adopt artificial intelligence tools without official validation or oversight. A marketer generates a campaign with ChatGPT, a developer writes code with GitHub Copilot, an analyst manipulates sensitive data in a local notebook. These uses seem harmless, […]