- A McKinsey study states that over 60% of financial institutions have already integrated AI into some part of their operations
- In the financial industry, AI is most commonly used for:
- Fraud detection and prevention, thanks to comprehensive analysis of data and transactions
- Credit scoring and credit risk assessment, by understanding large volumes of interrelated data
- Risk management and decision-making support, through insights into market trends and early detection of asset volatility
- Regulators – most notably the EU with its EU AI Act – view these AI applications in financial institutions as high-risk scenarios, placing the highest compliance requirements on these institutions
- A clear path forward is AI governance: planning, documenting, transparently communicating, and monitoring the operation of such AI systems
- TrustPath can help operationalize AI governance, accelerate vendor due diligence, and build trust at every step of the AI lifecycle
From detecting fraud in milliseconds to transforming how clients are assessed, AI has entered both the finance and insurance industries and is finding more and more applications. For financial institutions such as banks and insurance companies, the potential of artificial intelligence is clear – faster operations, smarter and more timely decision-making, and a better experience for those who matter most – the users.
However, as AI adoption speeds up, so do the risks the technology brings – and so do the efforts to control its use, that is, AI regulation.
According to a recent McKinsey study, over 60% of financial institutions have already integrated AI into their operations in some way. But integration is the easy part. Now, issues with compliance, governance, and transparency of AI systems are emerging, all of which are necessary for the safe scaling of AI. For the financial industry, the question is no longer “should we use AI,” but rather “how do we use AI without exposing our business to risk?”
That’s exactly why we decided to write this article – to share a clear and actionable view on how financial institutions can adopt AI in a safe and effective way.
At the beginning of the article, we will outline how financial institutions like banks and insurance companies use AI, discuss what AI governance should look like, and explain what regulators currently expect through regulations such as the EU AI Act, which governs the development, deployment, and use of AI.
Let’s start from the beginning and try to understand how financial institutions most commonly use AI.
How Do Financial Institutions Use AI in Their Business Operations?
As we mentioned in the introduction, AI in the financial industry is not the future – it’s already the present. The fact that over 60% of financial institutions have already implemented AI in their operations speaks for itself. Below, we’ll outline the most common ways financial institutions use AI, along with real-world examples.
Fraud Detection and Prevention
It’s long been said that where there is money, there is fraud. Fraud has been one of the biggest threats to banks and insurance companies from the start. Their operations may change, but fraud attempts remain a constant. Advances in technology have helped reduce fraud, and AI is pushing that reduction even further.
Thanks to its ability to process and understand large amounts of data quickly, AI can scan millions of transactions in real time and flag anomalies that would likely go unnoticed with traditional, manual methods.
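To make this concrete, here is a minimal, purely illustrative sketch – not any bank’s actual system – of the simplest form of transaction anomaly detection: flagging amounts that deviate sharply from a customer’s historical baseline. Real fraud models use far richer features and far more sophisticated methods, but the underlying principle is the same.

```python
# Illustrative sketch only: flag transactions whose amount deviates
# strongly from a customer's historical spending baseline.
from statistics import mean, stdev

def flag_anomalies(history, new_transactions, z_threshold=3.0):
    """Return transactions more than z_threshold standard deviations
    from the customer's historical mean amount."""
    mu = mean(history)
    sigma = stdev(history)
    return [t for t in new_transactions
            if sigma > 0 and abs(t - mu) / sigma > z_threshold]

# Typical everyday spending, then one outsized transfer.
past = [42.0, 55.5, 38.2, 61.0, 47.9, 52.3, 44.1, 58.7]
incoming = [49.0, 4999.0, 53.5]
print(flag_anomalies(past, incoming))  # → [4999.0]
```

A production system would score each transaction on many signals (merchant, location, device, timing) rather than amount alone, but the core idea – learn a baseline, flag deviations in real time – carries over.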
For example, NatWest partnered with OpenAI to improve fraud prevention through digital assistants, helping address losses of £570 million caused by fraud in the first half of 2024. Also, MetLife, one of the world’s largest insurance companies, uses AI to detect suspicious patterns in claims, significantly reducing fraud-related losses.
Credit Scoring and Credit Risk Assessment
Risk assessments have historically been among the most manual processes in financial institutions. But with the rise of advanced technologies and AI, this is changing. AI can better understand large volumes of transactions, signals, and client-related correlations, building a more complete and accurate risk profile.
For example, JPMorgan Chase uses AI to evaluate creditworthiness and to improve operational efficiency across all areas of its business.
In insurance, AI improves risk assessment based on many parameters – from health status to natural events – enabling more personalized insurance packages and better user segmentation. This approach is used by Swiss Re, a major Swiss reinsurance company.
Risk Management and Support in Decision-Making
As noted above, AI’s ability to analyze large data sets quickly helps financial institutions identify, quantify, and mitigate risk – not only risks related to their clients, but also risks tied to their investments.
In capital markets, AI models analyze market trends and give early warnings about investment volatility. For example, Goldman Sachs uses AI to improve productivity in programming and speed up risk assessment, with CEO David Solomon stating that AI brings a 20–30% boost in productivity.
Now that we understand how AI is applied in the financial industry, let’s look at how AI can be implemented while reducing the risks it brings.
What Do Regulators Think About the Use of AI in the Finance Industry?
Let’s return to the McKinsey figure from the beginning of the article: more than 60% of financial institutions already use AI in some part of their operations. Governments around the world have recognized what the use of AI in finance involves, and when drafting laws they took the industry’s AI use cases into account.
Below, we’ll explain how current regulations affect the use of AI in the financial sector.
The Impact of the EU AI Act on the Financial Industry
The EU AI Act, the first comprehensive regulation governing the development, deployment, and use of AI, takes a risk-based approach, classifying AI systems into four categories: minimal-risk, limited-risk, high-risk, and prohibited systems. You can find more details about this risk classification in our guide to the EU AI Act.
When we compare the previously mentioned ways financial institutions use AI with the EU AI Act, we can conclude that systems used for credit scoring are classified as high-risk systems.
This means that companies using such systems – in this case, financial institutions – will need to comply with the strictest rules under the EU AI Act, which include (but are not limited to):
- Creating detailed technical documentation and risk assessments
- Implementing human oversight of the system
- Ensuring transparency of AI systems, meaning users must be clearly informed about the processes and how the AI reached its conclusions
- Monitoring the system after deployment and keeping records
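The record-keeping and human-oversight obligations above can be sketched in a few lines. The snippet below is a hypothetical illustration, not a compliance-certified implementation: each automated decision is appended to an audit log together with its inputs, output, model version, timestamp, and the human reviewer responsible, so it can be reconstructed later.

```python
# Hypothetical sketch of post-deployment record-keeping for an
# automated decision system; field names are illustrative.
import json
from datetime import datetime, timezone

def log_decision(log, model_version, inputs, output, reviewer=None):
    """Append one auditable decision record (as a JSON line) to log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewer,  # human-oversight hook
    }
    log.append(json.dumps(entry))
    return entry

audit_log = []
entry = log_decision(audit_log, "credit-score-v1.4",
                     {"income": 52000, "open_loans": 2},
                     {"score": 680, "decision": "approve"},
                     reviewer="analyst_017")
print(len(audit_log))  # → 1
```

In practice such records would go to append-only, access-controlled storage with retention policies, but even this simple structure shows the kind of traceability regulators expect.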
As outlined in our guide, the penalties are severe and can reach up to €35 million or 7% of global annual turnover, whichever is higher, depending on the nature and severity of the violation.
FCA’s View on the Use of AI in the Financial Industry
The UK’s Financial Conduct Authority (FCA) has not yet published specific AI regulations, but it has issued guidance stating that AI systems must be explainable, fair, and accountable – especially when making decisions that affect people’s lives, which is certainly the case with credit scoring.
In its AI discussion paper from 2022, the FCA emphasized that:
- Companies must ensure that their AI models make decisions free from bias or discriminatory factors
- Companies are expected to protect data privacy and conduct ongoing testing of their systems
- Companies should be able to explain their systems and how they reach conclusions – even if the systems are complex
SEC’s View on the Use of AI in the Financial Industry
In the United States, the Securities and Exchange Commission (SEC) has increased its focus on the use of AI and predictive analytics in financial markets. In 2023, the SEC proposed rules requiring investment advisors and brokers to eliminate or neutralize conflicts of interest when using AI in client interactions.
Key guidelines:
- Companies must evaluate whether AI could lead to outcomes that go against investors’ interests
- The SEC is concerned about “black box” AI models that are not understandable or interpretable
Although detailed rules are still in development, the direction is clear: financial institutions will be held accountable for how AI influences decision-making, risk management, and outcomes for end users.
How Can Financial Institutions Safely Adopt AI?
Understanding how AI is used in the financial industry makes its importance easy to see – it supports not only operational efficiency but also financial performance, which in turn supports the long-term stability of these companies.
However, the use of AI also affects end users, something that regulators are well aware of. That’s why the financial industry is a key focus when it comes to AI implementation.
It’s therefore very important to establish AI governance processes, which include:
- Creating clear and detailed technical documentation that maps the model’s logic, data sources, datasets used for training, and the intended use
- Performing regular evaluations of AI models to detect early signs of bias, model drift, and other issues
- Introducing human oversight, especially when AI model outputs influence end users – which is often the case in the financial industry
- Maintaining audit trails that help regulators, save time for interim actors, and build trust among all stakeholders
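As one concrete example of the “regular evaluations” point above, drift between the data a model was trained on and the data it sees in production is commonly measured with the Population Stability Index (PSI). Below is a minimal sketch; the 0.1 and 0.25 thresholds are the conventional industry rules of thumb, not a regulatory requirement, and the bucket distributions are made up for illustration.

```python
# Illustrative model-drift check using the Population Stability Index.
from math import log

def psi(expected, actual, eps=1e-6):
    """PSI between two bucketed distributions (proportions summing to 1)."""
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # guard against empty buckets
        total += (a - e) * log(a / e)
    return total

train_dist = [0.10, 0.25, 0.30, 0.25, 0.10]  # score buckets at training time
live_dist  = [0.05, 0.15, 0.25, 0.30, 0.25]  # same buckets in production

drift = psi(train_dist, live_dist)
if drift > 0.25:
    print(f"PSI={drift:.3f}: significant drift, investigate the model")
elif drift > 0.10:
    print(f"PSI={drift:.3f}: moderate drift, monitor closely")
else:
    print(f"PSI={drift:.3f}: distribution stable")
```

Running a check like this on a schedule, and recording the result in the audit trail, turns “regular evaluation” from a policy statement into an operational habit.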
According to a study by the World Economic Forum, only 20% of companies believe they are fully prepared for AI model governance. Leading financial institutions already see AI governance as a key factor in adopting AI – something we also see on our AI job board, which grows every day with new AI governance roles.
In short, safe adoption of AI without having governance processes in place is very difficult – and in our view, impossible in the long run.
How TrustPath Can Help Banks and Insurers with AI Adoption
In the fast-moving race driven by the rise of AI, several factors play a crucial role:
- Competitive pressure to develop increasingly advanced AI systems and products in order to stand out in the market
- Regulatory pressure through the introduction of laws that govern the development, application, and use of AI (such as the EU AI Act)
- Growing complexity in documenting and governing AI models, caused by the two factors above
Because of these challenges, a logical question arises – how can you scale your business quickly without falling into regulatory traps that can cause financial, reputational, and operational damage? This is exactly the question TrustPath helps answer.
We help financial institutions (banks and insurance companies) operationalize AI governance, speed up vendor due diligence, and build trust at every step of the AI lifecycle.
Our AI vendor assessment framework is built for enterprise companies – including banks and insurers – and effectively addresses all the questions raised by regulators.
Schedule your demo or contact us to learn more.