AI governance: A cybersecurity matter first and foremost

Artificial intelligence has entered a new phase that now directly concerns CISOs.

No longer merely an innovation topic, it has become a new risk domain to manage — one that cuts across the information system, business functions, and the supply chain.

First reality: AI is spreading rapidly, often outside existing security frameworks.
Use cases are multiplying — generative AI, internal assistants, augmented business tools — sometimes driven by internal teams, sometimes integrated through third-party solutions. The risk is no longer isolated experimentation, but the rapid proliferation of systems that are poorly inventoried, insufficiently assessed, and inadequately secured, with direct impact on business processes.

Second reality: AI significantly expands the attack surface.
It relies on models, platforms, APIs, and external providers embedded at the core of the information system. For CISOs, this means increased dependence on the technology supply chain, more complex data flows, and heightened requirements in terms of security, robustness, and oversight — often without centralized visibility.

Third reality: the regulatory framework is becoming immediately operational.
As of August, the AI Act will introduce obligations applicable to generative AI systems, interactive systems, and deepfakes. These requirements directly intersect with security responsibilities: security by design, robustness, documentation, traceability, and supplier oversight.

In addition, the Cyber Resilience Act (CRA) will impose new cybersecurity obligations on digital products placed on the market, including software components integrating AI. For CISOs, the CRA reinforces a critical point: the need to document security assurances, manage vulnerabilities over time, and control software and supplier dependencies — well beyond the traditional perimeter of the internal IT environment.

In this context, AI now concentrates multiple categories of risk that the security function can no longer address in a fragmented manner:

  • Cyber risks, with new attack vectors and increased third-party dependence;
  • Operational risks, in cases of system malfunction or drift;
  • Business risks, when AI impacts critical processes or sensitive decisions;
  • Reputational risks, when AI is visible to customers or partners;
  • Regulatory risks, with formalized, cumulative, and auditable requirements (AI Act, CRA, data protection).

For CISOs, the challenge is no longer limited to securing technical components, but to structuring a global and continuous governance framework for AI risk. Concretely, this requires:

  • Maintaining a comprehensive inventory of AI systems in use, whether developed internally or provided by third parties;
  • Assessing their risk level — regulatory and non-regulatory — based on actual use cases;
  • Managing over time the necessary actions: cybersecurity requirements, technical measures, contractual clauses, supplier guarantees, vulnerability management, and documentation — without unnecessarily burdening teams (a minimal illustration follows this list).
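To make the first two points concrete, here is a minimal, illustrative sketch of what a single entry in a centralized AI-system registry might look like, with a rough risk triage based on actual use. The field names, risk tiers, and obligation labels are assumptions chosen for illustration only; they do not reproduce the AI Act's risk categories or any specific tool's data model.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    """Illustrative triage tiers; real tiers depend on the applicable framework."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"


@dataclass
class AISystem:
    """One entry in a centralized inventory of AI systems (hypothetical schema)."""
    name: str
    owner: str                      # accountable business or IT owner
    provider: str                   # "internal" or the third-party supplier
    use_case: str                   # what the system actually does in production
    processes_personal_data: bool
    customer_facing: bool
    generative: bool
    obligations: list[str] = field(default_factory=list)

    def assess(self) -> RiskTier:
        """Very rough first-pass triage used to prioritize deeper review."""
        if self.customer_facing and (self.generative or self.processes_personal_data):
            return RiskTier.HIGH
        if self.generative or self.processes_personal_data:
            return RiskTier.LIMITED
        return RiskTier.MINIMAL


# Example: a third-party generative assistant embedded in a customer portal
assistant = AISystem(
    name="support-chat-assistant",
    owner="Customer Care",
    provider="third-party",
    use_case="answers customer support questions",
    processes_personal_data=True,
    customer_facing=True,
    generative=True,
    obligations=["transparency notice", "supplier security clauses"],
)
print(assistant.name, assistant.assess().value)  # -> support-chat-assistant high
```

In practice, the point of such a registry is less the scoring logic than the discipline it enforces: every AI system, internal or supplied, gets an owner, a documented use case, and a traceable list of obligations and actions.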

To address these needs, we have developed Naaia, a centralized AI governance solution designed as a management tool for security functions. It enables organizations to build a single repository of AI systems, automatically assess risks, identify applicable obligations (AI Act, CRA, etc.), and generate associated action plans, including across the supply chain. Regulatory developments are continuously integrated to secure governance and relieve teams from ongoing monitoring and coordination efforts.

When it comes to AI, trust cannot be declared — it must be built, documented, and actively managed from day one.

Contact the Naaia team to learn more.
