Artificial intelligence (AI) is rapidly becoming a permanent fixture in the insurance sector. Where insurers traditionally carried out risk analyses and claims assessments manually, algorithms now make it possible to perform these processes faster and with greater precision. Whether it concerns calculating premiums, assessing claims or detecting fraud, algorithms are increasingly taking centre stage in the insurance industry. Efficient? Certainly, but not without risks.
What happens, for example, if a customer is refused insurance based on an opaque AI model? Or if a claim is rejected without any human intervention? In a recent study, De Nederlandsche Bank (DNB) has warned that while AI offers opportunities in the insurance sector, it also raises significant concerns. AI applications in insurance can have a major impact on decision-making, explainability and data protection. Insurers would therefore be wise to focus not only on the benefits of AI, but also on the associated risks.
In this blog we explore how AI is changing the role of insurers, the opportunities this creates and the risks that must be considered when using this technology. We conclude by offering insurers practical guidance for a future-proof and responsible use of AI.
AI refers to algorithms or systems that, with a certain degree of autonomy, perform tasks such as pattern recognition, decision-making or risk prediction. In essence, these are sets of instructions that process data and produce an outcome on that basis. Many of these systems can also improve themselves by learning from new data. More information on how AI works and its applications can be found here.
AI is also rapidly gaining ground in the insurance sector. Insurers are now using AI extensively to calculate premiums, automatically assess claims and detect potentially fraudulent behaviour. Automating these processes offers significant advantages. For insurers, this means faster and more efficient internal processes, lower operational costs and a reduced risk of human error. Customers benefit from smoother, more tailored services with fewer manual barriers. Where customers previously faced manual assessments, long waiting times and telephone explanations, many processes can now be handled by AI within seconds. DNB and the Dutch Authority for the Financial Markets (AFM) expect the use of AI in the financial sector to increase significantly in the coming years.
At the same time, the use of AI raises serious legal and ethical considerations for the insurance sector. With the European AI Act, which has been partially in force since August 2024, new obligations have been introduced for the use of AI. This legislation classifies certain AI applications in the insurance sector, particularly for life and health insurers, as high-risk, triggering strict compliance requirements. The General Data Protection Regulation (GDPR) continues to apply in full, including rules on automated decision-making and data protection. Breaches of these laws may result in substantial sanctions and fines imposed by supervisory authorities such as the Dutch Data Protection Authority (AP).
Are you considering using AI as an insurer? Then it is wise to look beyond efficiency alone. In the following sections we discuss which risks may arise in practice and why caution is required.
The use of AI creates the impression of objective, data-driven decision-making, but in practice it also carries the risk of unintended discrimination. AI models are trained on historical datasets that are not always neutral. These data may contain biases, for example based on age, gender, background or place of residence. When an AI system adopts such patterns, certain groups may be systematically disadvantaged. Consider the use of AI in credit assessments. If women historically received credit less often, there is a risk that a model trained on this data will continue to assign women unjustifiably lower credit scores. This can result in exclusion from essential financial services, such as loans or insurance.
For insurers, this underlines the importance of careful and representative data selection. The use of AI must not lead to unintended or unequal treatment of individuals or groups. Under the AI Act, high-risk applications are subject to far-reaching obligations, including the requirement to take measures to detect, mitigate and document bias in datasets. Insurers are therefore expected to ensure that the outcomes of AI applications are fair and responsible. In practice, however, these concepts have not yet been clearly defined by legislators and supervisory authorities. This makes it difficult for insurers to assess whether their AI systems meet the statutory requirements. As a result, many insurers develop their own policies, often based on differing interpretations.
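To make the idea of a bias analysis concrete, the sketch below compares approval rates between two groups in a toy dataset. The data, group labels and the 80% threshold (a common rule of thumb, not a statutory norm) are invented for illustration; a real analysis would cover far more characteristics and require documentation of the findings.

```python
# Minimal sketch of a demographic-parity check on model outcomes.
# The dataset and the 80% ("four-fifths") threshold are illustrative only.

def approval_rate(decisions):
    """Share of positive (approved) decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one.
    Values well below 1.0 suggest one group is disadvantaged."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Toy outcomes: 1 = application approved, 0 = rejected.
men   = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]   # 80% approved
women = [1, 0, 1, 0, 1, 0, 0, 1, 0, 0]   # 40% approved

ratio = disparate_impact_ratio(men, women)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
if ratio < 0.8:  # rule-of-thumb threshold, not a legal standard
    print("Potential bias: investigate, mitigate and document before deployment")
```

A check like this is only a starting point: it measures one notion of fairness (equal outcome rates), and the AI Act's requirements also extend to documenting how such findings are followed up.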
The use of AI in insurance processes also creates challenges in terms of transparency and explainability. Many AI models are technically complex. They consist of multiple layers and process hundreds of variables that together lead to a specific outcome. As a result, it is often difficult to determine which factors were decisive in a particular decision. This lack of insight makes it unclear to customers, and sometimes even to insurers themselves, on what basis an application is approved or rejected.
This so-called black box effect can be problematic, particularly in situations where customers are entitled to a clear explanation. Under the GDPR, customers have the right to an explanation of automated decisions, including how the decision was reached and what its consequences are. If an insurer cannot provide this, it may constitute an infringement of the customer’s privacy rights. The AI Act also imposes additional transparency requirements on high-risk AI systems. Insurers are required to build in explainability, actively monitor risks, make documentation on the use of AI available, and carry out a Fundamental Rights Impact Assessment (FRIA).
The risk of insufficient transparency is not only a compliance issue, but also directly affects customer trust. When decisions are not explainable, this can lead to perceptions of arbitrariness or unfair treatment, with reputational damage as a result. Think of a so-called AI agent: a chatbot that does not make clear that no human is involved can cause frustration and mistrust. Notably, insurers in practice appear to be particularly concerned about these risks, as shown by research conducted by DNB. Transparency is therefore an essential prerequisite for the reliable and customer-focused use of AI.
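To contrast with the black box effect described above: a simple additive scoring model exposes each factor's contribution directly. The sketch below shows the kind of per-factor breakdown that an explainable decision requires; the feature names, weights, threshold and applicant record are all invented for illustration.

```python
# Sketch of an inherently explainable additive risk score.
# Feature names, weights and the applicant record are invented.

WEIGHTS = {
    "claims_last_3_years": 1.5,   # each prior claim raises the score
    "years_insured":       -0.2,  # a longer relationship lowers the score
    "vehicle_age":         0.3,
}
THRESHOLD = 2.0  # scores above this trigger manual review

def score_with_explanation(applicant):
    """Return the total score plus the contribution of each factor."""
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature]
        for feature in WEIGHTS
    }
    return sum(contributions.values()), contributions

applicant = {"claims_last_3_years": 2, "years_insured": 5, "vehicle_age": 4}
total, contributions = score_with_explanation(applicant)

print(f"Risk score: {total:.1f} (threshold {THRESHOLD})")
for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {value:+.1f}")
```

Real models are rarely this simple, but the principle carries over: whatever technique is used, the insurer must be able to produce a factor-level account of how a decision was reached.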
Fraud prevention has long been a key priority for insurers, and with the rise of AI they have gained a powerful new tool. Where traditional controls are often limited to sampling or standard analyses, modern AI systems can rapidly analyse vast volumes of data and identify high-risk patterns. Examples include unusual claims behaviour, uncovering suspicious networks or predicting an increased risk of fraud. Profiling is often used in this context, grouping customers into risk categories based on behavioural or background characteristics. By combining different data sources, such as transactions, emails and even social media, subtle signals that previously went unnoticed can be detected. For insurers this results in faster detection, lower costs and a better protected portfolio.
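As a toy illustration of pattern-based detection, the sketch below flags claims whose amount deviates strongly from the portfolio average. The claims data and the two-standard-deviation cut-off are invented, and a real system would combine many more signals and always route flags to a human reviewer.

```python
# Minimal sketch of statistical outlier detection on claim amounts.
# The data and the 2-sigma cut-off are illustrative only; flagged
# claims should go to human review, not automatic rejection.
from statistics import mean, stdev

claims = [420, 380, 510, 450, 395, 4800, 470, 430]  # amounts in EUR

mu, sigma = mean(claims), stdev(claims)

def is_suspicious(amount, cutoff=2.0):
    """Flag amounts more than `cutoff` standard deviations from the mean."""
    return abs(amount - mu) > cutoff * sigma

flagged = [c for c in claims if is_suspicious(c)]
print("Flagged for human review:", flagged)
```

Note that even this trivial rule illustrates the legal point made below: the flag is based on claims behaviour, a factual and verifiable signal, not on personal characteristics of the claimant.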
However, this approach is not without risks. When insurers use AI to identify potential fraudsters through profiling, they enter legally sensitive territory. The AI Act prohibits AI applications that automatically label individuals as fraudsters based on personal characteristics without factual or verifiable justification. This may occur, for example, where someone receives an elevated risk score solely because of their nationality, the neighbourhood in which they live or the number of children they have. Permitted AI applications in this area may fall within the high-risk category and must then comply with strict requirements on transparency, risk management and human oversight. The GDPR adds an additional layer, including the right to human review in the context of automated decision-making.
The risks around explainability, discrimination and privacy are real, but that does not mean AI is unsuitable for the sector. On the contrary: insurers that commit to responsible use can gain a competitive advantage. The key lies in a thoughtful, structural approach. With these four practical steps, insurers can get started straight away.
Know which decisions your AI systems make, based on which data, and how to explain this clearly to customers and authorities alike. Design processes so that automated outcomes remain explainable and verifiable. Document models and decision rules, and make sure customer-facing interactions, for example through AI agents, are transparent about the role of AI within the organisation.
Use only datasets that are sufficiently representative of the target group to which the model is applied. Carry out bias analyses in advance and on a periodic basis, particularly for models used in risk assessment, acceptance or fraud detection. Record how bias is monitored, corrected and reported, as required under the AI Act and the GDPR.
Define clear roles and responsibilities for the development, implementation and control of AI applications within the organisation. Set out procedures for system testing, incident registration and oversight. Ensure that AI systems are not only assessed upfront, but are also continuously monitored and evaluated, with sufficient scope for audits and adjustments where necessary.
Ensure that employees have a solid understanding of how AI works and the risks involved. Equip legal, compliance and IT teams with the tools to identify risks at an early stage. In addition, develop a broadly supported AI strategy that aligns with business objectives while taking legal obligations into account. Where possible, connect with existing industry initiatives, such as the Ethical Framework for Data-Driven Applications of the Dutch Association of Insurers, and stay well informed about new guidance from the authorities.
In this way, insurers can build reliable, legally compliant and future-proof AI applications step by step.
For insurers, the task is clear: seize the opportunities offered by AI, but take the risks seriously. Whether it concerns fraud prevention, risk assessment or customer interaction, each application requires a balance between smart automation and reliable, explainable decision-making. This starts with transparency, explainability and human oversight, and extends to robust governance and well-informed staff.
AI may accelerate processes, but it also demands a strong moral and legal compass. Insurers that invest in this today are building sustainable and trustworthy insurance services for the future.
Does your organisation use AI? Make sure you are on the right legal track. We are happy to help. Feel free to contact us to explore the options.