The insurance sector is no exception when it comes to AI. For many years, most processes were largely manual, labour-intensive and dependent on specialist human expertise. Now we see a shift towards digital, data-driven decision-making. Insurers are actively experimenting with algorithms that analyse claims, create risk profiles and even shape customer communication. There is a great deal of enthusiasm, but there are also serious concerns. How far may an insurer go in collecting and using data? When does personalisation become discrimination? And how do you keep fraudsters out when AI is also a powerful tool for them?
In this blog I highlight three hot topics that currently sit at the heart of the AI debate within the insurance sector:

1. generative AI in insurers' core processes;
2. personalisation of insurance products;
3. AI-driven fraud and its detection.
Taken together, these topics show how profound and multifaceted the impact of AI is on a sector that has traditionally revolved around caution and risk management.
The first development that stands out is the growing integration of generative AI into insurers' core processes. Claims handling is an area that clearly benefits from automation. Claims often consist of a mix of photos, reports, statements, policy terms and conditions, and correspondence, which means a great deal of reading and weighing of facts for claims handlers. Generative AI systems can process and structure this information more quickly: they extract relevant passages from documents, compare them with the policy terms and conditions and flag irregularities. This makes the process both faster and more thorough.
Interest in and use of generative AI are also growing in underwriting: the process in which an insurer assumes financial risks in return for a premium. It includes investigating and assessing the level of risk each applicant brings before the insurer takes on that risk. Where underwriters used to rely on manual analysis and experience, they can now draw on models that combine large volumes of historical data with up-to-date information. In theory, AI makes it possible to assess risks in a more detailed and efficient way. The result is faster acceptance, better insight and more efficient use of business resources.
Generative AI also raises new questions. One of these concerns explainability. Generative AI is often a black box: the model reaches conclusions based on patterns in data that people cannot follow. When a claim is rejected, the insurer must be able to explain why. This follows from both case law [1] and the Claims Handling Code of Conduct of the Dutch Association of Insurers. It is not only a legal obligation; it is also essential for maintaining customer trust.
Responsibility for errors is another key issue. What happens if a generated report contains incorrect conclusions and a claim is wrongly rejected as a result? The insurer remains ultimately responsible for the outcomes of generative AI.
At the same time, that responsibility becomes more blurred as suppliers, data quality and human review play a growing role. This underlines the need for robust legislation and regulation for the use of generative AI and for clear frameworks for human oversight.
The second topic may appear positive at first glance and follows directly from the previous one: personalisation. Using AI, insurers can tailor their products ever more closely to the individual. This seems efficient: someone who drives carefully, lives healthily and has never smoked can, for example, be rewarded with a lower premium. Yet personalisation remains sensitive when it is unclear which data feed into the assessment, and why. A postcode that correlates with ethnicity or income level may, for instance, lead to higher premiums for socially vulnerable groups, which raises ethical questions about equal treatment and social responsibility. Transparency from insurers is therefore essential when AI is used to personalise insurance products.
The GDPR forces insurers to assess critically which personal data they process and for which purposes. Data minimisation, transparency and proportionality, all explicitly named as principles in the GDPR, sit uneasily with the personalisation of insurance if there is no full insight into the personal data used. In addition, the AI Act classifies AI systems used for risk assessment and pricing in life and health insurance as high-risk. This means that from 2 August 2026 insurers must comply with a series of obligations in order to use such AI applications in a compliant way.
The challenge for insurers is to find the right balance between innovation and equality. Personalisation can generate significant value, but only if consumers understand how it works, if data are used lawfully, and if it is clear that personalisation does not lead to exclusion or unfair differences in premium allocation. An insurer that manages to strike this balance can use it as a differentiating factor. At the same time, it runs reputational and compliance risks if mistakes are made.
Where insurers use AI to work more efficiently, fraudsters use AI to mislead insurers. Deepfake videos of loss events, manipulated photos, fake identities and fraudulent AI-generated documents are serious and increasingly common risks. AI makes detecting fraudulent claims more difficult than ever.
Insurers respond with AI models aimed at fraud detection, creating a kind of arms race with fraudsters. The advanced technology involved often makes fraud prevention complex and costly. There is also a risk that genuine customers are flagged as fraudsters: so-called false positives. The impact of being wrongly flagged, and possibly denied insurance as a result, can be severe, which makes human review essential. In addition, large-scale data processing for fraud detection carries privacy risks. Insurers must continually assess whether the data are necessary and whether they can be used proportionately.
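The false-positive problem can be made concrete with a small sketch. Assume, purely for illustration, a fraud-scoring model that assigns each claim a score between 0 and 1; the scores and labels below are invented. Lowering the alert threshold catches more fraud, but flags far more genuine customers:

```python
# Illustrative only: how the alert threshold of a (hypothetical) fraud-scoring
# model trades fraud detection against false positives.

def false_positive_rate(scores, labels, threshold):
    """Share of genuine claims (label 0) flagged as fraud at this threshold."""
    genuine = [s for s, label in zip(scores, labels) if label == 0]
    flagged = [s for s in genuine if s >= threshold]
    return len(flagged) / len(genuine)

# Hypothetical model output: 1 = confirmed fraud, 0 = genuine claim
scores = [0.95, 0.80, 0.72, 0.65, 0.40, 0.35, 0.30, 0.20, 0.10, 0.05]
labels = [1,    1,    0,    1,    0,    0,    0,    0,    0,    0]

for threshold in (0.7, 0.3):
    fpr = false_positive_rate(scores, labels, threshold)
    print(f"threshold {threshold}: false positive rate {fpr:.0%}")
```

In this toy example, lowering the threshold from 0.7 to 0.3 raises the share of genuine claims flagged from roughly one in seven to more than half, which is exactly why human review of flagged cases remains essential.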
What these three developments have in common is that they force the insurance sector to look ahead not only in technological terms, but also in legal and ethical terms. AI offers huge opportunities, but the success of these applications depends entirely on the extent to which insurers manage to deploy the technology in a responsible way. The key lies precisely in combining technological innovation with social responsibility. Insurers that now invest in transparency and careful data use in the field of AI will reap the benefits at a later stage.
Would you like to know more about the various challenges of using AI in the insurance sector and the steps your organisation can take? Please contact us via our contact form.
[1] HR 3 February 1989, ECLI:NL:HR:1989:AB8306.