What the Dutch Data Protection Authority will focus on in 2026

What do artificial intelligence (‘AI’), cookie tracking and digital infrastructure have in common? According to the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP), they all directly affect citizens’ fundamental freedoms and together form the core of privacy supervision in the coming years.

The AP has published its 2026 annual plan, and in it the regulator makes clear choices. This is not a repetition of earlier priorities, but a sharpened supervisory strategy for the period 2026 to 2028. Three themes take centre stage: mass surveillance, AI and digital resilience. With this, the regulator shows that privacy is no longer just about protecting personal data in individual cases. Increasingly, it concerns structural questions about power, autonomy and fundamental rights in a digital society.

The AP emphasises that it grounds its supervision in fundamental values such as non-discrimination, autonomy and transparency of power. These values are not abstract principles. They directly guide which technologies, sectors and applications receive priority, with the aim of preventing harmful situations around data and AI applications before they arise.

New priorities

In the 2026 annual plan, the AP makes clear that it has to make tough choices. Not every privacy issue can be addressed with the same priority. The regulator therefore focuses primarily on large-scale systems and organisations with significant societal impact, both in the public sector and in business.

This does not mean that smaller incidents suddenly become irrelevant. It does mean that the AP deploys its limited capacity where technology structurally affects freedom, fundamental rights and power relations.

Mass surveillance: tracking also under scrutiny

One of the most striking priorities is mass surveillance. In this context, the AP looks not only at camera surveillance or law enforcement practices, but also at how people are tracked online. According to the regulator, the standard is clear: people should be able to move freely, both in public spaces and online, without being constantly tracked or observed.

Notably, tracking and cookies are increasingly being placed under the broader concept of surveillance. This is a strong qualification and will undoubtedly trigger debate, particularly among organisations that view tracking primarily as a marketing tool. For organisations, however, the signal is clear: structural data collection and profiling remain a key focus of supervision. Digital marketing practices therefore take on a broader societal significance than before.

AI as the second pillar: rules must be clear in advance

The second priority is AI. The AP sees that AI is developing rapidly and increasingly influences decisions that directly affect people. Think of selection, assessment, risk profiling or automated service delivery. This brings real risks, such as discrimination, disinformation and loss of human autonomy.

For that reason, the regulator wants to be involved primarily at the front end. The AP emphasises that intervention after the fact is often difficult or even impossible with AI systems. Once systems are operational and personal data has been processed, errors and undesirable effects are not easy to reverse. Clear frameworks in advance are more effective than corrections afterwards. Transparency, explainability and the prevention of structural bias are becoming increasingly important. This preventive approach aligns with European AI legislation, in which risk-based supervision and upfront safeguards are central.

For organisations, this means that AI governance is no longer a future issue. Anyone deploying AI in processes involving personal data should already be considering control, accountability and safeguards, before supervision, complaints or public debate arise. The AP explicitly invests in building knowledge and providing support, so that organisations and public authorities better understand where the boundaries lie before problems emerge.

Digital resilience: control over digital dependencies

Digital resilience is the third priority. The AP emphasises that citizens and organisations are becoming increasingly dependent on digital services and infrastructure, including for essential public and commercial processes. That dependency increases the impact of disruptions, data breaches or misuse of personal data.

The regulator therefore focuses, among other things, on awareness of cyber security and digital dependencies. For organisations, this means that digital resilience goes beyond technical security alone. It concerns how organisations deal with suppliers, chains and data, and how they ensure that digital services remain reliable. Privacy and information security are increasingly converging. Trust in digital systems is becoming a structural precondition.

In conclusion

The 2026 annual plan is a strong strategic document. The priorities of mass surveillance, AI and digital resilience address the issues that organisations and society will face in the coming years.

For businesses and public authorities, the message is clear: now is the time to build privacy and AI governance in from the outset, rather than trying to fix issues afterwards. Supervision of data and algorithms is no longer a side issue, but a prerequisite for fair innovation and a healthy digital rule of law.

Those who already reflect on transparency, responsible data use and digital resilience will not only be in a stronger legal position later on, but will also reduce reputational risks and avoid intrusive supervision after the fact.

Do you have any questions after reading this blog? Please feel free to contact us.
