One AI Act, Multiple Supervisory Authorities: How the Netherlands Is Organising AI Oversight

In March, the AI Impact Barometer of the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) turned red. The message to the new cabinet was firm: organisations are deploying AI with increasing frequency, while clarity on rules and oversight is lagging behind. The AP warned of risks from unsafe and discriminatory algorithms at a time when effective enforcement is not yet always possible.

The draft bill for the AI Act Implementation Act (UAIV) (currently still in public consultation) is intended to change that. The legislator does not do so by reinventing the substance of the AI Act. (Those rules are already laid down in the European legislation, and some are already in force.) No, the UAIV primarily addresses something else: who supervises in the Netherlands, who may enforce, and with what powers? Those hoping for a single, clear AI one-stop-shop will be disappointed.

No Dedicated AI Watchdog, but a Supervisory Landscape

The Netherlands is not opting for a single central AI supervisory authority. The proposal distributes oversight across multiple existing authorities.

The Dutch Data Protection Authority (AP) and the National Inspectorate for Digital Infrastructure (Rijksinspectie Digitale Infrastructuur, RDI) are given key roles, but the Authority for the Financial Markets (AFM), De Nederlandsche Bank (DNB), product safety supervisors, and specific actors within the administration of justice also have a place. How many supervisory authorities you count depends on how you count: formal market surveillance authorities, sectoral supervisors, coordinating bodies, or special supervisory routes.

The result? The Netherlands is not getting a dedicated AI supervisor. The Netherlands is getting an AI supervisory landscape.

Logical, but Not Simple

This choice is administratively understandable. AI is not a sector. AI appears in healthcare, finance, education, employment, infrastructure, public services, the judiciary, and many other sectors. A single supervisory authority could hardly cover all those domains substantively.

That is why the proposal aligns as much as possible with existing supervisory structures. Product-related high-risk AI remains close to existing product safety oversight. Financial AI largely stays with AFM and DNB. And for many socially and fundamental rights-sensitive applications, the AP clearly comes into view.

This is logical. But for organisations, it does not necessarily make things simpler. AI compliance becomes not only a question about the substance of the AI Act, but also about the Dutch supervisory column you end up in.

The AP Becomes the Linchpin, but Not the Only Player

If one supervisory authority is clearly gaining weight, it is the Dutch Data Protection Authority. It comes remarkably close to becoming a horizontal AI linchpin, without becoming the AI authority.

The AP is given a broad role in many AI applications that touch on fundamental rights, transparency, and prohibited practices. The AP also features prominently in various high-risk AI system applications from Annex III of the AI Act, for example in biometrics, education, employment, essential services, and democratic processes.

This fits with the type of risks the AP has been warning about for some time. Many AI applications are not just about technical safety, but about the impact on people: selection, assessment, access to services, government decisions, privacy, and discrimination.

At the same time, it is important not to pretend that everything ends up with the AP. The RDI, for example, has a clear role as the single point of contact under Article 70 of the AI Act (more on this later), AFM and DNB remain firmly in view in the financial sector, existing product safety supervisors play an important role for product-related AI, and for AI in the administration of justice, the draft bill even opts for a separate institutional route.

The Advice Was Followed, but Not Copied

An interesting point lies in the coordination.

In the final joint advice from the AP and RDI last year, the combined role of AP and RDI was clearly visible. RDI was positioned as an important link for coordination, expertise, and infrastructure.

In the draft bill, this has been formalised differently, although the main thrust of the final advice is clearly recognisable: distributed oversight, significant weight for the AP, alignment with existing sectoral supervisors, and emphasis on cooperation. But the cabinet is not adopting the advice one-to-one.

Anyone reading the statutory text might initially think that the Minister of Economic Affairs and Climate Policy becomes the single point of contact and assumes a coordinating role, and that the RDI becomes less prominent than described in the final advice. In practice, implementation appears to run via the RDI through delegation (see especially the Explanatory Memorandum). This is more than legal detail. It shows how the Netherlands intends to structure the system administratively: political responsibility formally with the Minister, and technical/operational expertise largely with the RDI.

Additionally, prohibited AI practices and transparency obligations are not fully concentrated at the AP, certainly not in the financial sector. Moreover, for the administration of justice, a separate institutional route has been chosen, outside the normal market surveillance model.

All in all, this makes the proposal more nuanced. But also more complex.

New Teeth for Supervisory Authorities

The UAIV is not just an organisational chart. The proposal also gives supervisory authorities robust enforcement instruments.

Most striking is the power to conduct investigations under a fictitious identity. In short, supervisory authorities can attempt to access AI systems as mystery shoppers. This is relevant for systems that are not easily verifiable from the outside.

Additionally, the proposal contains powers to intervene with respect to online interfaces in case of serious risks, to apply administrative enforcement, and to recover enforcement costs. Fining powers are also embedded nationally.

This gives the AI Act real teeth in the Netherlands. Not just on paper, but also in supervision.

Coordination Will Be the Litmus Test

A distributed supervisory model has advantages. It leverages existing knowledge, it prevents a single new supervisory authority from having to reinvent everything, and it aligns better with sectors where robust oversight already exists.

But there is also a risk. The more supervisory authorities are involved, the greater the chance of overlap, disputes, or differing interpretations.

This is certainly true for AI systems used in multiple contexts. A single AI system may be relevant for transparency obligations, high-risk classification, and sector-specific rules. Then it must be clear who is responsible for what.

Cooperation, data sharing, and coordination between supervisory authorities therefore become preconditions for the functioning of the entire system. Without proper coordination, 'distributed oversight' quickly turns into 'fragmented oversight'.

Not to mention the additional personnel and resources this will require from freshly minted AI Act supervisory authorities. The legislation is now in the making: is the cabinet also adjusting the budget accordingly?

How to Address This in Your AI Act Compliance Strategy?

For organisations, the analysis starts with their own role under the AI Act.

Are you a provider, for example, because you develop an AI system or place it on the market or put it into service under your own name? Or are you a deployer, because you use an AI system under your own responsibility?

This distinction matters. A provider of a high-risk AI system has different obligations than a deployer using such a system.

Next comes classification. Is there a prohibited AI practice? Is it a high-risk AI system? Does a transparency obligation apply? Or do mainly general obligations apply, such as AI literacy?

Under the Dutch UAIV, a further layer is then added: which supervisory authority is responsible?

AI Compliance Becomes Administrative Too

For organisations, AI compliance therefore becomes not only legally substantive, but also administratively practical.

For finance, AFM or DNB are the obvious choices. For many fundamental rights-sensitive applications, the AP. For product-related AI, existing product safety supervisors. For critical infrastructure, the RDI/ILT sphere comes into view. But for the administration of justice, a separate route applies via the Procurator General of the Supreme Court.

This means organisations must not only assess their AI systems against the AI Act, but also position them on the Dutch supervisory map.

Who is our likely supervisory authority? What information will that party want to see? What powers can they deploy? And how does this fit into our existing governance, compliance, and documentation?
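The steps above — determine your role, classify the system, and then position it on the Dutch supervisory map — can be captured in a simple internal register. The sketch below is purely illustrative: the sector labels and the rough sector-to-supervisor pointers merely restate the indications given in this article, and a real assignment always requires case-by-case legal analysis.

```python
# Illustrative sketch only. The mapping restates this article's rough pointers
# (finance -> AFM/DNB, fundamental-rights-sensitive -> AP, etc.) and is NOT a
# legal determination of competence under the UAIV.
LIKELY_SUPERVISOR = {
    "finance": "AFM / DNB",
    "fundamental-rights-sensitive": "AP",
    "product-related": "existing product safety supervisors",
    "critical-infrastructure": "RDI / ILT sphere",
    "administration-of-justice": "Procurator General of the Supreme Court",
}


def likely_supervisor(sector: str) -> str:
    """Return the article's rough pointer for a sector, or flag it for analysis."""
    return LIKELY_SUPERVISOR.get(sector, "unclear: requires case-by-case legal analysis")


def register_entry(system: str, role: str, risk_class: str, sector: str) -> dict:
    """One row of a hypothetical internal AI register: role, classification,
    and the likely Dutch supervisory authority side by side."""
    return {
        "system": system,
        "role": role,              # provider or deployer
        "risk_class": risk_class,  # prohibited / high-risk / transparency / general
        "likely_supervisor": likely_supervisor(sector),
    }


# Example: a CV-screening tool used under your own responsibility (deployer);
# employment-related selection is fundamental-rights-sensitive per the article.
entry = register_entry("cv-screening", "deployer", "high-risk",
                       "fundamental-rights-sensitive")
print(entry["likely_supervisor"])  # AP
```

Such a register does not answer the substantive compliance questions, but it makes visible which party may come asking them — which is exactly the extra layer the UAIV adds.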

Conclusion: Put Oversight on Your AI Roadmap

The AI Act is European, but enforcement is being organised at the national level. The Netherlands is not opting for a single AI desk, but for a supervisory landscape with multiple routes, robust powers, and strong emphasis on coordination (at least in this proposal; the UAIV still needs to pass both Chambers of Parliament).

This makes the UAIV more important than it might appear at first glance. This is not a technical annex to the AI Act. This is the act that determines who may ask questions, who may investigate, and who can enforce.

The practical message for organisations is therefore clear: map your AI systems, determine your role under the AI Act for each system, classify the risk, and already link the likely supervisory authority to it.

AI Act implementation therefore means not only reading rules, but also predicting oversight. Those who know which supervisory column their AI system falls into will be far less surprised, even if the supervisory authority does not announce itself with a business card, but enters through the front door of your AI system as an "ordinary user".

Need help mapping your AI systems, determining your role, or preparing for oversight? We are here for you.

Contact us
