Since 2023, AI agents have moved from the experimental stage to operational use across many sectors: finance, healthcare, industry, human resources, and public services. Capable of acting autonomously or semi-autonomously, these agents promise significant gains in productivity and performance.
However, this increased autonomy comes with legal, ethical, operational, and cybersecurity risks, making a structured approach to their governance and management essential.
1. What is an AI agent? Definition and recent evolution
AI agents, as commonly understood, are AI software systems with specific characteristics:
- They are built on an AI model pursuing an objective, whether explicitly defined or not, and that model has not undergone substantial additional development or modification.
- They are accessible through a studio in which users can edit their parameters.
- They are configured to automate a complex, contextualized task, making decisions and executing actions without necessarily requiring human intervention.
AI agents relate to the concept of agency, that is, a system’s ability to:
- Act autonomously,
- Initiate actions,
- Plan sequences,
- Adapt to changing contexts,
- Pursue high-level objectives without continuous human supervision.
Example: an AI agent might be an automated assistant tasked with sorting incoming emails. It analyzes each message, identifies its category (sales, support, urgent), and then applies the appropriate action, such as archiving it or creating a ticket. It thus performs a specific, predefined task without exceeding the scope of that role.
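The email-sorting example above can be sketched as a simple classify-then-act loop. This is a minimal illustration, not a reference implementation: the keyword rules, the `Email` structure, and the action names are all invented for the example, and a real agent would typically call a classification model rather than match keywords.

```python
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

# Illustrative keyword rules; a real agent would use a trained classifier.
CATEGORY_KEYWORDS = {
    "urgent": ["urgent", "asap", "outage"],
    "support": ["error", "bug", "help"],
    "sales": ["quote", "pricing", "order"],
}

def classify(email: Email) -> str:
    """Assign the first category whose keywords appear in the message."""
    text = (email.subject + " " + email.body).lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(word in text for word in keywords):
            return category
    return "other"

def act(email: Email) -> str:
    """Map each category to the action the agent executes."""
    actions = {
        "urgent": "escalate",
        "support": "create_ticket",
        "sales": "forward_to_sales",
        "other": "archive",
    }
    return actions[classify(email)]
```

For instance, `act(Email("a@b.c", "Server outage", "Production is down"))` returns `"escalate"`, while an uncategorized newsletter falls through to `"archive"`. The point of the sketch is the bounded scope: the agent only ever chooses among predefined actions.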
The emergence of frameworks such as AutoGPT (2023), LangGraph (2024), or agents integrated into cloud suites (Microsoft Copilot, Google Agentspace) has accelerated the adoption of AI agents in real professional environments.
2. Uses of AI agents by sector
2.1 Finance and insurance
The financial sector is among the first to have integrated AI agents, due to the growing complexity of operations, the increase in data volumes, and the multiplication of compliance requirements.
AI agent use cases
- Risk analysis agents: AI agents continuously assess portfolios, detect anomalies, and adjust risk scores based on internal and external data.
- Compliance agents: they ensure continuous monitoring of transactions (AML / KYC), prioritize alerts, and prepare compliance files for human validation.
- Autonomous algorithmic trading: some agents automatically execute orders according to predefined strategies, based on market conditions and risk constraints.
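The compliance use case above, prioritizing AML alerts for human validation, can be reduced to a risk-scoring sketch. All thresholds, rule weights, and the `Transaction` fields below are invented for illustration; real AML scoring relies on far richer data and models.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    is_new_counterparty: bool

# Hypothetical high-risk jurisdiction codes, for illustration only.
HIGH_RISK_COUNTRIES = {"XX", "YY"}

def risk_score(tx: Transaction) -> float:
    """Sum simple, invented rule weights into a 0-1 risk score."""
    score = 0.0
    if tx.amount > 10_000:
        score += 0.5
    if tx.country in HIGH_RISK_COUNTRIES:
        score += 0.3
    if tx.is_new_counterparty:
        score += 0.2
    return score

def prioritize(transactions: list, threshold: float = 0.5) -> list:
    """Return flagged transactions, highest risk first, for human review."""
    flagged = [t for t in transactions if risk_score(t) >= threshold]
    return sorted(flagged, key=risk_score, reverse=True)
```

Note that the agent does not decide: it only orders the queue. Final validation of each flagged transaction remains with a human compliance officer, consistent with the oversight requirements discussed below.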
Associated risks and specific challenges of AI agents
The use of AI agents in financial activities raises major governance challenges:
- Lack of decision explainability: decisions made or recommended by AI agents may be opaque and therefore difficult to explain, which is problematic in light of regulatory requirements (auditability, traceability, justification of decisions).
- Bias and indirect discrimination: underlying models may reproduce or amplify biases present in historical data, leading to unfair risk assessments for certain customer profiles.
- Legal and financial liability: in the event of financial loss, undetected fraudulent transactions, or erroneous decisions made by an autonomous agent, the question of human, organizational, or technological responsibility remains complex and requires a clear supervisory framework.
2.2 Healthcare and life sciences
The healthcare and life sciences sectors offer strong potential for the use of AI agents, given the complexity of medical data, pressure on healthcare systems, and growing needs for clinical decision support. These agents must be designed as assistance tools and must not replace healthcare professionals.
AI agent use cases
- Diagnostic support agents: AI agents analyze medical records, laboratory results, and imaging to identify clinical signals and suggest diagnostic paths, notably by comparison with cohorts of similar patients.
- Care coordination agents: they automate the scheduling of appointments, examinations, and follow-ups, helping to streamline care delivery and optimize the use of hospital resources.
- Clinical research: in life sciences, AI agents explore scientific literature and clinical trial data to identify correlations, formulate hypotheses, and accelerate biomedical research.
Associated risks and specific challenges of AI agents
The use of AI agents in healthcare and life sciences raises major issues related to security, ethics, and responsibility:
- Protection of health data: AI agents process highly sensitive medical data, increasing the risk of privacy breaches in the event of security failures, poor access governance, or uncontrolled data use.
- Risk of medical errors: misinterpretation of data, model biases, or incomplete clinical information may lead to inaccurate recommendations, with potential consequences for quality of care and patient safety.
- Excessive reliance on algorithmic recommendations: increased use of AI agents may weaken clinical judgment if not properly supervised, making it essential to implement mechanisms for human oversight, explainability, and clearly defined responsibility.
2.3 Human resources and talent management
Human resources functions represent a prime area of application for AI agents, in a context marked by an increasing number of applications, rapid skills evolution, and the need to better anticipate talent needs.
AI agent use cases
- Candidate pre-screening agents: AI agents analyze CVs, cover letters, and professional profiles to identify candidates best matching defined criteria, while prioritizing profiles for recruiters to review.
- Automated onboarding agents: they support new employees during their integration by automating certain administrative steps, delivering personalized content, and facilitating role onboarding.
- Skills management and internal mobility agents: these agents cross-reference HR data, internal evaluations, and business needs to identify skills gaps, recommend training, and propose mobility opportunities.
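The skills-management bullet above amounts to cross-referencing two datasets: the skills an employee has and the skills a target role requires. A minimal sketch of that matching, with invented skill and role names:

```python
def skills_gap(employee_skills: set, role_requirements: set) -> set:
    """Skills the role requires that the employee does not yet have."""
    return role_requirements - employee_skills

def recommend_training(employee_skills: set, roles: dict) -> dict:
    """For each candidate role, list the missing skills as a
    training or mobility recommendation (sorted for stable output)."""
    return {role: sorted(skills_gap(employee_skills, required))
            for role, required in roles.items()}
```

For example, an employee with `{"python", "sql"}` considered for a hypothetical `"data_engineer"` role requiring `{"python", "sql", "spark"}` would receive `["spark"]` as the recommended gap to close. In practice such recommendations feed a human review step, for the fairness reasons listed below.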
Associated risks and specific challenges of AI agents
The use of AI agents in HR raises significant challenges related to fairness, compliance, and governance:
- Risk of indirect discrimination: models may reproduce or amplify biases related to age, gender, or origin due to non-neutral historical data.
- Protection of personal data and regulatory compliance: AI agents process sensitive data covered by the GDPR, requiring high standards of transparency, data minimization, and information for data subjects.
- Human control over decisions: excessive reliance on automation may reduce human involvement in high-stakes decisions such as recruitment or career development, making clear mechanisms for supervision and accountability essential.
2.4 Industry, supply chain, and logistics
Industry, supply chain, and logistics are key areas for the application of AI agents, due to the complexity of value chains, the multiplicity of stakeholders, and the need to continuously optimize production and supply flows.
AI agent use cases
- Predictive maintenance agents: AI agents continuously analyze data from industrial sensors and maintenance histories to anticipate failures, plan interventions, and reduce unplanned downtime.
- Supply chain optimization agents: they combine demand data, production capacities, inventory levels, and logistical constraints to adjust flows, limit shortages, and reduce costs.
- Real-time production planning agents: these agents adapt production schedules based on operational disruptions, demand fluctuations, or external constraints.
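The predictive-maintenance pattern above can be reduced to a minimal drift check: compare recent sensor readings against a historical baseline and raise a maintenance flag when the deviation exceeds a threshold. The z-score threshold and the readings below are illustrative assumptions; production systems use far more sophisticated models.

```python
from statistics import mean, stdev

def needs_maintenance(baseline: list, recent: list, z_threshold: float = 3.0) -> bool:
    """Flag the asset when the mean of recent readings drifts more than
    z_threshold baseline standard deviations from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        # Perfectly flat baseline: any change at all is a drift.
        return mean(recent) != mu
    z = abs(mean(recent) - mu) / sigma
    return z > z_threshold
```

A vibration sensor whose baseline hovers around 50 and whose recent readings jump to 55 would be flagged, while readings that stay near 50 would not. The flag triggers a planned intervention rather than an autonomous shutdown, which limits the cascading effects discussed below.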
Associated risks and specific challenges of AI agents
The use of AI agents in industry and logistics raises significant challenges related to reliability, security, and governance:
- Cascading effects of automated decisions: a configuration or decision error may quickly propagate across the entire value chain, affecting production, inventories, and deliveries.
- Dependence on external data: the quality of AI agents’ decisions relies on data that may be incomplete or unreliable, potentially weakening operational trade-offs.
- Cyber vulnerabilities of industrial systems: integrating AI agents into critical industrial environments increases the attack surface and requires strengthened cybersecurity and access control measures.
2.5 Public sector and citizen services
The public sector and citizen services represent a growing field of application for AI agents, driven by rising volumes of administrative requests, the search for greater efficiency, and the need to ensure equitable access to public services. Given the stakes for fundamental rights, these deployments call for reinforced oversight.
AI agent use cases
- User orientation agents: AI agents assist citizens with administrative procedures by directing them to the appropriate services, automating certain responses, and facilitating access to public information.
- Administrative decision-support agents: they analyze complex files to prioritize processing or formulate recommendations for public officials, particularly in social aid or resource allocation.
- Social or tax fraud detection agents: these agents cross-reference large volumes of administrative data to identify inconsistencies or atypical behaviors likely to reveal fraudulent situations.
Associated risks and specific challenges of AI agents
The use of AI agents in the public sector raises particularly sensitive issues related to transparency, fairness, and democratic accountability:
- Infringement of fundamental rights: poorly supervised automated decisions may affect access to rights, social benefits, or essential services.
- Opacity of decision criteria: the lack of clear explainability of algorithmic recommendations complicates their understanding by both public officials and citizens.
- Insufficient contestability of decisions: the lack of appeal mechanisms and human control may limit users’ ability to challenge an automated decision, making the implementation of appropriate procedural safeguards essential.
3. Major cross-cutting risks of AI agents
Beyond sector-specific issues, the deployment of AI agents raises cross-cutting risks common to all organizations. These risks concern legal, ethical, operational, and cybersecurity dimensions and call for a global AI governance approach.
3.1 Legal and regulatory risks
The growing autonomy of AI agents exposes organizations to regulatory non-compliance risks, particularly when these systems contribute to decisions with legal or significant effects on individuals.
- Non-compliance with the GDPR: Article 22 of the GDPR strictly regulates fully automated decisions producing legal effects. The use of AI agents without human oversight mechanisms, adequate information provided to data subjects, or avenues for recourse may constitute a direct violation of the European framework.
- Exposure to emerging regulations: the gradual entry into force of the European AI Act, as well as the adoption of AI framework laws in Asia (South Korea, Taiwan, Japan), imposes new obligations regarding risk classification, transparency, and governance.
- Uncertain legal liability: in the event of harm caused by an autonomous decision, the allocation of responsibility among the organization, human teams, technology providers, and the AI agent itself remains legally complex.
3.2 Ethical risks
AI agents also raise structural ethical issues linked to their ability to influence, recommend, or automate sensitive decisions.
- Algorithmic bias: agents may reproduce or amplify biases present in training data, leading to indirect discrimination or inequitable treatment.
- Weakening of human autonomy: excessive reliance on AI agent recommendations may reduce individuals’ ability to exercise critical judgment, particularly in complex decision-making contexts.
- Lack of transparency and explainability: the opacity of certain models makes it difficult to understand decision-making logic, undermining user and stakeholder trust.
3.3 Operational and cybersecurity risks
From an operational standpoint, AI agents introduce new vectors of technical and organizational risk.
- Misconfiguration or misuse of agents: poorly configured or insufficiently controlled agents may produce erroneous decisions or be exploited for malicious purposes.
- Excessive access to internal systems: AI agents often require broad access to databases or critical systems, increasing the attack surface in the event of compromise.
- Difficulty of ex post auditing: the autonomous chaining of decisions and actions complicates traceability and auditing of agent behavior, particularly in the event of incidents or disputes.
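One concrete mitigation for the traceability problem above is to have every agent decision append a timestamped, hash-chained record to an audit log, so that ex post auditors can reconstruct the chain of actions and detect tampering. The record fields below are an illustrative minimum, not a standard schema.

```python
import json
import time
from hashlib import sha256

def log_agent_action(log: list, agent_id: str, action: str,
                     inputs: dict, rationale: str) -> dict:
    """Append a record of an agent action. Each entry embeds the hash
    of the previous entry, so altering past records breaks the chain."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,
        "rationale": rationale,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["hash"] = sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

def verify_chain(log: list) -> bool:
    """Recompute every hash to detect after-the-fact modification."""
    prev = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev:
            return False
        if sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Recording the inputs and a short rationale alongside each action is what makes the log usable in a dispute: an auditor can see not only what the agent did, but on what basis.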
In the face of the growing power of AI agents, AI management is becoming a strategic lever for organizations.
Manage your AI agents with confidence
AI agents are already transforming your business processes. The question is no longer whether they should be used, but how to deploy, supervise, and govern them responsibly.
Naaia supports organizations in AI management: from AI agent inventory to risk management, regulatory compliance, and operational steering.