Profiling in insurance practice: efficient, but not without risk

Data-driven working has become indispensable in the insurance sector. Analysing large volumes of data is necessary to assess risks, process insurance applications and meet statutory obligations such as customer due diligence and fraud prevention. Profiling and automated decision-making can play a central role here, enabling insurers to support decisions at scale and to organise processes efficiently.

At the same time, technologies such as profiling and automated decision-making affect the relationship with the insured. This is particularly true where automated analyses lead to decisions with tangible consequences, such as acceptance, premium setting or termination of an insurance policy. In those cases, legal and societal expectations increase. Transparency, due care and human oversight are not mere formalities, but essential conditions for responsible use. Insurers that embed these safeguards structurally prevent data-driven innovation from turning into a liability or reputational risk. In this blog, we discuss the key legal points of attention around profiling and automated decision-making and what they mean for insurers in practice.

What is profiling?

Profiling means using personal data in a fully or partly automated manner to evaluate personal aspects of an individual. It is not just about collecting data, but primarily about drawing conclusions or making predictions. In insurance practice, profiling is used, for example, to determine premiums, assess applications or estimate fraud risk. Data from different sources is often combined into a profile that says something about a person’s current or expected future behaviour.

Not every form of segmentation qualifies as profiling. A simple, static classification of customers, for example by age category, without conclusions being drawn or consequences attached, will usually fall outside its scope. The decisive factor is whether an assessment takes place: as soon as data is used to form a judgement about an individual or to evaluate their opportunities, risks or reliability, profiling is involved. That threshold matters precisely because such assessments can have concrete consequences for the rights and freedoms of insured persons.

Profiling and automated decisions

Profiling and automated decision-making are often confused, but the distinction is legally important. Profiling concerns analysing and predicting behaviour or risks based on data. Automated decision-making goes one step further: a decision with legal or otherwise significant consequences for a person is taken without meaningful human involvement, such as rejecting an insurance application or charging a higher premium.

For insurers, the greatest risk lies in this combination. Profiling is often permitted, but once its outcome leads to a decision taken without meaningful human intervention, the stricter rules on automated decision-making apply. In practice, this often develops gradually: what starts as a supporting risk model can evolve into a fully automated decision-making system. That is precisely why it is important for insurers to have a clear view of where data analysis ends and decision-making begins.

Requirements for profiling

Under the General Data Protection Regulation (‘GDPR’), profiling is not a free-for-all for insurers. The Dutch Data Protection Authority emphasises that profiling is only permitted when carried out carefully and proportionately. In practice, this means that insurers must have a clear legal basis for profiling, comply with the GDPR principles and pay attention to the purpose, the impact on the insured and the security of personal data.

Transparency plays a central role. Insured persons must be able to understand that profiling is being used, which data is involved and what the possible consequences are. Profiling must not be an invisible process that takes place entirely behind the scenes. Precisely because profiling models can produce conclusions with significant impact, the Authority expects insurers to communicate openly and carefully about this, for example through an accessible privacy notice.
 
In addition, profiling must not go beyond what is necessary. The use of large datasets or external data sources is not automatically justified, even if it is technically possible. Insurers must continuously assess whether the chosen approach is proportionate to the intended purpose and whether the interests and rights of the insured are adequately protected.

In short, profiling is permitted and often indispensable in insurance practice, but it requires clear choices, transparent communication and ongoing attention to proportionality and due care. Insurers that embed these principles structurally reduce the risk of data-driven working turning into privacy or compliance issues.

What does an insurer need to arrange in practice?

Profiling and automated decision-making require a careful design of data-driven processes. For insurers, this means making the right choices not only technically, but also legally and organisationally. Key points of attention include the following:

1. Carry out a Data Protection Impact Assessment (‘DPIA’)
Profiling will in many cases pose a high risk to the privacy of insured persons; where that is the case, a DPIA is mandatory. Such an assessment helps to identify risks at an early stage and to implement appropriate measures. For more information on conducting a DPIA, click here.

2. Ensure transparency towards insured persons
Insurers must clearly explain in their privacy notice whether and how profiling or automated decision-making is used, which data is involved and what the possible consequences are.

3. Safeguard the right to a human review
Where decisions are taken fully automatically and have legal or otherwise significant effects, insured persons are entitled to human intervention, to express their point of view and to contest the decision.

4. Assess the need for a Data Protection Officer (‘DPO’)
In some cases, appointing a DPO is mandatory or at least strongly recommended, particularly where profiling is a structural part of business operations. Insurers can assign this role internally or opt for support from an external DPO. Here you can read more about our support as a DPO.

5. Use accurate and up-to-date data
Decisions may only be based on accurate and up-to-date data. Inaccurate or outdated data creates legal risks and leads to outcomes that are difficult to explain and unfair to the insured.

6. Ensure appropriate information security
The data used must be properly secured. This is especially important where sensitive or large-scale datasets are processed.

7. Respect the rights of data subjects
Insured persons must be able to exercise their privacy rights, such as the rights of access, rectification and objection, easily and in practice.

How we can support

We support insurers in designing data-driven processes in a legally future-proof manner. This ranges from conducting DPIAs, drafting transparent privacy information and designing effective human oversight of automated decision-making to governance, supervisory requirements and the practical application of the GDPR.

Would you like to know more about how your organisation can deploy profiling and automated decision-making carefully within the applicable legal frameworks? Please contact us.
