Artificial intelligence (AI) is transforming the financial sector, offering institutions innovative ways to optimize their processes and business models. According to McKinsey, generative AI could generate potential annual gains of between $200 billion and $340 billion in the banking sector globally, driven by increased productivity. However, its integration also entails risks. To mitigate them, it is essential to adhere to strict regulations and compliance standards, ensuring responsible use and preventing ethical and legal pitfalls.
AI Use Cases in Finance
AI has numerous uses in the financial sector, each offering distinct benefits. Here are some examples:
Customer Service
Chatbots, particularly those powered by generative AI, significantly improve customer service by offering personalized assistance and 24/7 availability. These systems can quickly process and resolve common requests, reducing the need for human intervention. For example, chatbots can answer frequently asked questions, help resolve account issues, or guide users through online banking services. In addition, they can provide tailored financial advice and recommend products suited to customers’ specific needs, which enhances customer loyalty.
Moreover, this automation handles repetitive tasks more efficiently, increasing overall productivity and reducing operational costs for customer service teams.
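To make the routing idea concrete, here is a minimal sketch of how a support bot might match customer questions against a small FAQ and hand off to a human below a confidence threshold. The FAQ entries, the 0.3 threshold, and the `answer` helper are all hypothetical; production chatbots typically layer a generative model on top of this kind of retrieval.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical FAQ pairs a bank's support team might maintain.
FAQ = {
    "How do I reset my online banking password?": "Use the 'Forgot password' link on the login page.",
    "What are the fees for international transfers?": "Transfers within the EU are free; others cost 0.5%.",
    "How do I block a lost card?": "Block it instantly in the app under Cards > Block card.",
}

questions = list(FAQ.keys())
vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def answer(query: str, threshold: float = 0.3) -> str:
    """Return the best-matching FAQ answer, or escalate to a human."""
    similarities = cosine_similarity(vectorizer.transform([query]), question_vectors)[0]
    best = similarities.argmax()
    if similarities[best] < threshold:
        return "Routing to a human agent."  # low confidence: hand off
    return FAQ[questions[best]]

print(answer("I lost my card, how can I block it?"))
```

The thresholded hand-off is what keeps such automation from answering questions it does not actually understand.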
Risk Management
AI significantly improves risk management processes such as Know Your Customer (KYC) and fraud detection. For KYC, AI makes checks faster and more accurate by classifying documents and quickly extracting the necessary information. It also strengthens fraud detection and anti-money laundering (AML) efforts by monitoring transactions in real time, flagging suspicious behavior and anomalies through unusual patterns that may indicate money laundering.
By automating these processes, AI reduces their cost and complexity and increases verification accuracy by minimizing human error and false positives. This proactive surveillance enhances financial investigations, improves transaction security, and strengthens financial institutions’ ability to combat illegal activities.
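As a minimal sketch of the real-time monitoring idea, the example below trains an unsupervised anomaly detector on synthetic "normal" transactions and flags an unusual one. The features, distributions, and contamination rate are invented for illustration; real AML systems combine many more signals with rule-based scenarios.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical transaction features: [amount in EUR, hour of day, transactions in past 24h]
normal = np.column_stack([
    rng.lognormal(3.5, 0.8, 1000),  # typical amounts
    rng.normal(14, 4, 1000),        # daytime activity
    rng.poisson(2, 1000),           # low daily frequency
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A burst of large night-time transfers, a classic structuring pattern.
suspicious = np.array([[9500.0, 3.0, 12.0]])
print(model.predict(suspicious))  # -1 marks the transaction as anomalous
```

In practice, every alert raised by such a detector still goes to a human investigator; the model narrows the haystack rather than making the final call.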
Market Analysis and Opportunity Identification
AI improves credit scoring and insurance underwriting by rapidly analyzing large datasets, making decisions both faster and more accurate. It can identify lending opportunities often missed by traditional methods, such as borrowers with atypical but solid credit histories. AI also allows scoring models to be continuously updated as new data arrives, giving financial institutions greater flexibility and adaptability.
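As a rough illustration of the "continuously updated" point, the sketch below trains a scoring model incrementally on synthetic applicant data. The features, label rule, and numbers are invented for the example; a real scorecard relies on far richer data and on the fairness controls discussed later.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical applicant features: [income, debt-to-income ratio, years of history]
X = rng.normal([50_000, 0.3, 8], [15_000, 0.1, 5], size=(500, 3))
y = (X[:, 0] / 1_000 - 100 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 5, 500) > 20).astype(int)

scaler = StandardScaler().fit(X)
model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(scaler.transform(X), y, classes=[0, 1])

# As new repayment outcomes arrive, the score can be refreshed incrementally.
X_new = rng.normal([50_000, 0.3, 8], [15_000, 0.1, 5], size=(50, 3))
y_new = (X_new[:, 0] / 1_000 - 100 * X_new[:, 1] + 2 * X_new[:, 2] > 20).astype(int)
model.partial_fit(scaler.transform(X_new), y_new)

applicant = scaler.transform([[42_000, 0.25, 3]])
print(f"Probability of repayment: {model.predict_proba(applicant)[0, 1]:.2f}")
```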
Moreover, AI can analyze vast volumes of financial data and the sentiment expressed in social media and news. This capability offers valuable insights and predictions on market trends and investor behavior, enabling asset managers to make more informed decisions and seize market opportunities.
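On the sentiment side, here is a deliberately tiny sketch of the underlying idea: score headlines against a word lexicon and aggregate the scores into a directional signal. Both the lexicon and the headlines are made up; production systems use trained financial language models over much larger corpora.

```python
# Toy lexicon; real systems use trained models on far larger corpora.
POSITIVE = {"beats", "growth", "upgrade", "record", "strong"}
NEGATIVE = {"misses", "loss", "downgrade", "probe", "weak"}

headlines = [
    "ACME beats earnings estimates on record cloud growth",
    "Regulator opens probe into ACME accounting practices",
    "Analysts issue upgrade after strong quarterly results",
]

def sentiment(text: str) -> int:
    """Count positive minus negative lexicon hits in one headline."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

# Aggregate per-headline scores into a crude directional signal.
signal = sum(sentiment(h) for h in headlines) / len(headlines)
print(f"Average sentiment signal: {signal:+.2f}")
```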
Generative AI can assist in document analysis, such as a customer’s ESG assessment, by searching for relevant information or indicators. This standardizes the process and makes it more efficient, leading to significant time savings.
Human Resources
The application of AI outside the core financial business, notably in human resources functions such as recruitment and staff appraisal, offers significant benefits. For example, it improves recruitment efficiency by automatically sorting CVs and reducing unconscious bias. However, it is essential to ensure that its use complies with ethical and legal standards to avoid any discrimination.
Risks
Alongside the benefits of using AI in the financial sector come a range of associated risks. These include:
Bias and Lack of Transparency
AI can absorb biases present in its training data, perpetuating inequalities. A major risk, for example, is the exclusion of certain populations from access to credit: if the training data mainly includes profiles from a majority socio-economic group, the model may unjustly dismiss credit applications from minorities. This lack of diversity in the data results in unfair and discriminatory decisions. Moreover, the absence of transparency makes it difficult to understand the decision-making criteria and to challenge those decisions, posing significant ethical and regulatory challenges.
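One concrete way to surface this kind of bias is to audit approval rates across demographic groups, as in the sketch below. The data is simulated, and the 0.8 ratio is a heuristic borrowed from US employment practice (the "four-fifths rule"), not a legal threshold for credit.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical audit data: model approvals for two demographic groups.
group = rng.choice(["A", "B"], size=2_000, p=[0.8, 0.2])
# Simulate a biased model that approves group B applicants less often.
approved = np.where(group == "A", rng.random(2_000) < 0.65, rng.random(2_000) < 0.40)

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
ratio = rate_b / rate_a

print(f"Approval rate A: {rate_a:.2f}, B: {rate_b:.2f}, ratio: {ratio:.2f}")
# A common (imperfect) heuristic: ratios below ~0.8 warrant investigation.
if ratio < 0.8:
    print("Potential disparate impact: review the model and its training data.")
```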
Poor Prediction Quality and Hallucinations
The quality of input data is crucial for AI. Biased or low-quality data can lead to erroneous predictions that affect financial decisions; errors in market forecasts, for example, can cause significant financial losses. In addition, AI hallucinations can produce incorrect information, leading to costly mistakes.
Cybersecurity
Financial institutions, which handle vast amounts of sensitive data, face rising risks of cyber-attacks. The recent data breach affecting 30 million Santander customers highlights this risk. Such security breaches can lead to data theft and financial loss. AI algorithms themselves are vulnerable to adversarial attacks, underscoring the need for robust protection measures. Reinforcing security, especially in cloud solutions, and maintaining continuous monitoring are crucial to safeguarding data and operations.
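To make the adversarial-attack point concrete, the sketch below applies the classic fast-gradient idea to a toy linear fraud score: a small, targeted perturbation of the transaction features flips the model’s decision. The weights, features, and epsilon are invented; real attacks target far more complex models, but the mechanism is the same.

```python
import numpy as np

# Hypothetical linear fraud classifier: score = w.x + b, flagged if score > 0.
w = np.array([0.8, -0.5, 1.2])
b = -1.0

x = np.array([1.5, 0.2, 0.9])  # a transaction the model flags as fraud
score = w @ x + b
print(f"Original score: {score:+.2f} -> {'flagged' if score > 0 else 'cleared'}")

# FGSM-style evasion: nudge each feature against the gradient of the score.
epsilon = 0.5
x_adv = x - epsilon * np.sign(w)  # the gradient of a linear score is just w
score_adv = w @ x_adv + b
print(f"Perturbed score: {score_adv:+.2f} -> {'flagged' if score_adv > 0 else 'cleared'}")
```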
Regulations
Europe – AI Act
The AI Act has officially been adopted, and the first obligations concern AI systems posing unacceptable risk, which are prohibited as of February 2, 2025. Since AI is used everywhere, certain uses in the financial sector may be affected and, above all, classified by the AI Act as either posing unacceptable risk (prohibited) or high-risk.
Prohibited Practices
Among the prohibited practices, here are three of the eight use cases listed by the AI Act:
- Exploitation of Vulnerabilities: AI applications that target people in precarious financial situations with unsuitable financial products are prohibited. For example, an AI system that offers high-interest loans to individuals with poor credit scores.
- Social Scoring: AI systems used by financial institutions to assess creditworthiness based on social network activity, social behavior, or other personal aspects unrelated to financial history are prohibited.
- Emotional Inference in the Workplace: AI systems that infer employees’ emotions or feelings in the workplace, such as during meetings or while handling customer complaints, are prohibited except for medical or safety reasons.
High-Risk AI Systems
The AI Act stipulates that certain uses of AI, which may occur in finance, are considered high-risk:
- An AI system used to evaluate creditworthiness or establish credit scores of individuals, as well as those for risk assessment and pricing in life and health insurance. This excludes systems designed to detect financial fraud.
- An AI system used in recruitment and employment such as sorting CVs and evaluating candidates.
- An AI system used for remote biometric identification, for example in a data center housing sensitive financial information: the system uses facial recognition to scan individuals as they approach the entrance, automatically comparing captured images with a database of authorized employees’ biometric data.
However, the use of an AI system for biometric verification is not considered high-risk as long as it is used to confirm that a given natural person is who they claim to be. For example, facial recognition to access an online bank account.
As with any artificial intelligence system, human oversight is essential. The AI Act provides for the principle of a "stop button" to enable human intervention when necessary. In the financial sector, it is crucial to prevent and control automated errors, which could have serious consequences for both markets and customers.
The general application of the AI Act will begin 24 months after its entry into force on August 1, 2024, meaning August 2, 2026. Provisions regarding prohibited AI systems become enforceable 6 months after entry into force, on February 2, 2025. Non-compliance with the AI Act can lead to penalties of up to 35 million euros or 7% of worldwide annual turnover, whichever is higher.
Other regulations
As the activities of the financial sector involve the management of personal data, it is crucial to apply regulations such as the AI Act and the GDPR in a complementary manner. Financial institutions must comply with AI-specific requirements while adhering to the strict data protection standards imposed by the GDPR. This integrated approach ensures the security, transparency and protection of personal data, safeguarding the rights of individuals and the trust of customers.
With regard to cybersecurity, the DORA (Digital Operational Resilience Act) regulation seeks to strengthen the IT security of financial institutions by introducing harmonized information and communication technology (ICT) risk management requirements, ensuring the sector’s resilience in the event of severe operational disruption.
All this comes on top of the many regulations already affecting the financial sector, such as Directive 2023/2225 on credit agreements for consumers, which ensures better consumer protection, alongside other sector-specific legislation such as anti-money laundering rules and reporting and auditing requirements. More recently, the European Central Bank has indicated that the use of AI in finance, although still in its early stages, needs to be monitored and potentially regulated to protect consumers and ensure market stability.
United States
In the USA, there is no horizontal regulation on a par with the EU’s AI Act. Following President Biden’s Executive Order, the U.S. Treasury Department published a report on Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Sector. The report highlights the significant opportunities and challenges AI brings to the security and resilience of the financial services sector, and proposes measures to address immediate operational risks associated with AI and to tackle challenges related to cybersecurity and fraud.
Also noteworthy is a proposed federal law, the "Algorithmic Accountability Act" of 2023, which would require companies to assess the impact of the AI systems they use and sell, while ensuring greater transparency and enabling consumers to make informed choices. This affects many sectors and uses, including finance. The Consumer Financial Protection Bureau (CFPB) also issued guidance in September 2023 for financial institutions using AI, requiring them to provide accurate and specific reasons following an adverse action, such as a credit limit reduction or credit denial, to ensure transparency and prevent discrimination.
At the state level, there is legislation such as Local Law 144 of the New York City Council, in the world’s financial center, which regulates the use of automated decision tools for hiring and promoting applicants and employees in the city. It requires that these tools undergo an annual independent bias audit and that a summary of these audits be made public. In addition, Colorado law SB21-169 prohibits insurance companies from using external consumer data and predictive algorithms to unfairly discriminate against customers based on sensitive characteristics such as race, gender, or sexual orientation.
Key Takeaways
Ensuring the compliance of AI systems in the financial sector, one of the most heavily regulated in Europe, starts with mapping and screening all AI systems used across the banking organization.
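As a rough illustration of what such a mapping exercise needs to capture, here is a minimal sketch of an inventory record; the fields and risk labels are simplified assumptions, not a substitute for a full management system.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "unacceptable risk"
    HIGH = "high-risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    """One entry in an organization-wide AI system inventory."""
    name: str
    business_owner: str
    purpose: str
    risk_category: RiskCategory
    uses_personal_data: bool  # triggers GDPR obligations alongside the AI Act

inventory = [
    AISystemRecord("Credit scoring model", "Retail lending", "Score loan applicants", RiskCategory.HIGH, True),
    AISystemRecord("Support chatbot", "Customer service", "Answer account FAQs", RiskCategory.LIMITED, True),
]

for record in inventory:
    print(f"{record.name}: {record.risk_category.value}")
```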
Equipping your organization with an Artificial Intelligence Management System (AIMS) like Naaia guarantees optimal management and support for your teams, thanks to inventory and qualification tools, an incident reporting system, and integrated monitoring. It also facilitates information sharing between teams, making them more efficient and less siloed, in line with both the cross-functional nature of AI within the company and product regulations that require monitoring throughout the deployed AI system’s lifecycle.
Naaia can support you in your compliance process and help you navigate the complexities of the regulatory environment, so that you can turn regulatory constraints into strategic opportunities.