Imagine a digital helper that orders your groceries, secures festival tickets or books a holiday for you, without you having to click anything. Convenient? Or a little unsettling?
These applications are made possible by the rise of so-called AI agents, arguably the biggest trend in the AI landscape of 2025. Unlike existing AI systems, which are mainly good at ‘thinking’, an AI agent has not only an artificial brain but also artificial arms. It is not limited to making suggestions: it can fully autonomously make payments, apply for licences and enter into contracts. AI agents are sometimes described as the colleague who never takes a coffee break.
Not everyone is enthusiastic about these agents. Interacting with an agent without realising it can lead to unwanted agreements, as we discussed in an earlier blog. To limit these risks, various laws require clear communication about the use of AI agents. But how do you comply with this duty, and what happens if you do not? We explain below.
Under the Dutch Civil Code (Articles 6:227b-c and 3:15d) and the e-Commerce Directive (Article 10), consumers must be informed before entering into an agreement about essential matters such as the identity of the counterparty and how the contract is concluded. If you use an AI agent that negotiates autonomously, consumers must be explicitly informed in advance that they are communicating with an AI agent rather than a human. This can be done through a notice such as: “You are now speaking with our virtual AI agent.”
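For those building such a chat flow, the disclosure can simply be the first message the system ever sends, before any offer or order handling takes place. A minimal sketch in Python (the `ChatSession` class and the `send` callback are hypothetical, purely for illustration):

```python
# Minimal sketch, assuming a simple chat loop; all names are hypothetical.

AI_DISCLOSURE = "You are now speaking with our virtual AI agent."

class ChatSession:
    def __init__(self, send):
        self.send = send       # callback that delivers a message to the consumer
        self.disclosed = False

    def handle(self, user_message: str) -> None:
        if not self.disclosed:
            # Disclose before any negotiation or order handling takes place.
            self.send(AI_DISCLOSURE)
            self.disclosed = True
        # ... offer, negotiation and order logic would follow here ...

session = ChatSession(send=print)
session.handle("I would like two festival tickets.")  # prints the disclosure first
```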
Through tools such as OpenAI’s Operator, consumers will soon be able to use agents themselves, for example to buy groceries or festival tickets automatically on their behalf. Such an agent can satisfy the duty to inform the consumer through a real-time screen view that shows which products are being purchased and how the ordering process works. Consumers themselves do not have to inform third-party providers that an AI agent is acting on their behalf.
Article 50 of the AI Act introduces transparency obligations for various AI systems, including systems that interact directly with natural persons. Direct interaction comes in many forms: think of chatbots and voice assistants, but also of agents that autonomously email hotel owners. As with the Civil Code, this obligation is tied to the moment of interaction. The person involved must know before the conversation starts that he or she is communicating with an agent rather than a human.
This duty does not apply if it is obvious that the system is not human. A clear example is an AI agent, “Chatbot Alex”, that gives standardised answers to frequently asked questions. Here, it is reasonable to assume that customers understand it is not a human. However, this exception does not apply to agents that automatically arrange tickets or hotel bookings by sending emails. In those situations, people may not realise they are dealing with a bot, and the agent must disclose its artificial nature.
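For agents that act by email, one practical approach, sketched below under our own assumptions, is to append a standard disclosure to every outgoing message so the recipient knows in advance that they are corresponding with a bot. The wording and booking text are illustrative, not a prescribed formula:

```python
# Hypothetical sketch: disclosure wording and booking text are our own.

DISCLOSURE = ("This message was sent by an automated AI agent "
              "acting on behalf of its user.")

def compose_booking_email(body: str) -> str:
    # Append the disclosure so the hotel owner knows they are dealing
    # with a bot before replying or confirming the booking.
    return f"{body}\n\n--\n{DISCLOSURE}"

print(compose_booking_email("I would like to book a double room for two nights."))
```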
The General Data Protection Regulation (GDPR) also includes a duty to inform, which varies depending on whether data are collected and processed directly or indirectly.
If you collect personal data directly from individuals (for example via chatbots that take orders), Article 13 GDPR requires you to provide clear information at the time of collection about, among other things:
• the identity of the controller or its representative,
• the purposes of the processing,
• the legal basis for the processing, and
• other relevant information (such as recipients or retention periods).
In most cases, a controller can comply with this duty by providing a clear link to the privacy notice. For example, an order-processing chatbot might state: “See our privacy notice for more information about how we handle your data.” The information must be provided at the moment the personal data are obtained. A privacy notice that is merely available somewhere on the website without a link in the chat is not sufficient.
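One way to meet the “at the moment the data are obtained” requirement is to attach the privacy-notice link to the very message in which the chatbot asks for personal data. A hypothetical sketch (the URL and wording are ours):

```python
# Hypothetical sketch: URL and message wording are illustrative only.

PRIVACY_NOTICE_URL = "https://example.com/privacy"

def request_order_details() -> str:
    # The link travels with the message that collects the personal data,
    # not merely "somewhere on the website".
    return (
        "To process your order, please share your name and delivery address. "
        f"See our privacy notice for how we handle your data: {PRIVACY_NOTICE_URL}"
    )

print(request_order_details())
```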
If the controller wants to further process the collected data for new purposes, the individual must, before that further processing, receive information about the new purpose in addition to the privacy notice. This might be a message such as: “This conversation is partly used for training purposes.” A separate legal basis is required for this further processing. If explicit consent is necessary, for example when processing special categories of personal data, the notification must not only inform but also explicitly request consent.
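Where consent is needed for the new purpose, the notification and the consent request can be combined in a single step. Again a hypothetical sketch, assuming simple `send` and `receive` callbacks for the chat transport:

```python
# Hypothetical sketch; `send` and `receive` stand in for the chat transport.

def ask_training_consent(send, receive) -> bool:
    # Inform about the new purpose AND explicitly request consent,
    # before any further processing for that purpose takes place.
    send("This conversation is partly used for training purposes. "
         "Do you agree to this use of your data? (yes/no)")
    return receive().strip().lower() in {"yes", "y"}

# e.g. consented = ask_training_consent(print, input)
```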
The GDPR does not require the same notification to be repeated for every subsequent processing activity, as long as the purpose does not change and the individual has already been properly informed. A chatbot therefore does not need to repeat in every message that conversations are stored for training, and a grocery agent does not need to state with every order that the consumer’s credit card details are being used.
If your AI agent collects personal data indirectly, such as by scraping websites, the duty to inform under Article 14 GDPR applies. In addition to the information required under Article 13, you must also disclose the data source and the categories of personal data. The information must be provided within one month of collection. There are exceptions, for example where informing individuals is impossible or would require disproportionate effort. Due to the large number of individuals and data sources involved in scraping, this threshold is often met. In such cases, a publicly available and clearly accessible privacy notice will suffice.
In other situations, this exception does not apply. Consider an insurance agent that calculates premiums based on external claims data. Here, it is not disproportionate to inform the individuals personally, for example by email or letter. The agent must provide this information at the first point of contact or within one month of data collection.
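For illustration, this timing rule (within one month of collection, or at the first point of contact if that comes sooner) could be tracked along the following lines; approximating “one month” as 31 days is our simplification:

```python
from datetime import date, timedelta

def inform_by(collected_on: date, first_contact: date | None) -> date:
    # Inform within one month of collection (approximated here as 31 days),
    # or at the first communication with the individual if that is earlier.
    deadline = collected_on + timedelta(days=31)
    if first_contact is not None and first_contact < deadline:
        return first_contact
    return deadline

print(inform_by(date(2025, 3, 1), first_contact=date(2025, 3, 10)))  # 2025-03-10
```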
Failing to comply with information duties may lead to unwanted legal consequences. First, you risk fines and other sanctions for breaching the law. In addition, non-transparent use of agents can give rise to liability issues, for example where agents make high-risk financial decisions without informing their customers. Finally, non-compliance can cause significant reputational harm and erode customer trust.
AI agents promise to make our lives even easier, but they can also blur the line between human and machine. Various laws, from the Civil Code to the AI Act and the GDPR, therefore require clear communication when AI agents are used. Ignoring these duties can lead to liability, reputational damage and compliance fines. All in all, there are more than enough reasons to provide clear and transparent information when deploying AI agents.
Does your organisation use AI? Make sure you stay on the right legal track. We are happy to help. Feel free to get in touch to explore the options.