High-risk AI: Understanding the AI Act's list to anticipate your obligations

The European regulation on artificial intelligence, the AI Act, introduces a risk-based approach to regulate the use of AI. It imposes specific requirements on high-risk AI systems, meaning those likely to have a significant impact on fundamental rights, health, or safety. A public consultation is currently open to clarify the classification of the systems concerned, possible exceptions, and […]
Shadow AI: a governance and compliance challenge for organizations

In the wake of BYOD (Bring Your Own Device), a new practice is gaining ground in companies: Shadow AI. Employees adopt artificial intelligence tools without official validation or oversight. A marketer generates a campaign with ChatGPT, a developer writes code with GitHub Copilot, an analyst processes sensitive data in a local notebook. These uses seem harmless, […]
What is the cost of non-compliance with the AI Act?

The European regulation on artificial intelligence is entering a decisive phase. With the adoption of the AI Act, companies that develop, deploy, or use artificial intelligence systems are now subject to a demanding legal framework. In the event of non-compliance, the penalties provided for are commensurate with the societal, economic, and ethical stakes of AI. […]
AIMS: The artificial intelligence management system for AI compliance and governance

Why companies need an AIMS: deploying an AI model without a clear framework may seem harmless… until the GDPR audit. This increasingly common scenario illustrates a simple reality: without a structured framework, deploying AI in a company means taking major risks. This is precisely the role of an AIMS – Artificial Intelligence Management System: to centralize […]