The rise of AI agents is changing how we work, communicate and do business. These AI systems, designed to perform tasks autonomously on behalf of individuals or organisations, may take us into unfamiliar legal territory. They also raise important questions about risk management.
In this blog, I guide you through the various challenges associated with deploying AI agents and outline two key steps you can take to frame the use of third-party AI agents and limit your risks when deploying them. I will also introduce our AI Agent Policy Generator, which is free to use for any website.
Our current legal and regulatory framework is built on core concepts such as transparency, legal personality, consensus ad idem and attribution. These concepts become less clear when applied to AI agents. This can create a vacuum in which organisations deploy AI agents without setting clear boundaries and without an effective plan.
Like any other AI system, AI agents cannot function without data. They collect, process and generate information on an unprecedented scale, with collection often taking place through aggressive web scraping. This raises several fundamental legal questions:
Crawling and scraping: Are AI agents or other bots allowed to crawl your website(s) and systems and collect data without permission?
Intellectual property: If an AI agent collects and uses third-party content, is this lawful? Does it infringe copyright or database rights? Can you prevent text and data mining (TDM)?
Privacy and personal data protection: How do AI agents align with GDPR principles such as lawfulness of processing, purpose limitation and data minimisation? If an AI agent unlawfully uses personal data, who is responsible for the infringement?
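To make the crawling question concrete: well-behaved bots are expected to consult a website's robots.txt file before fetching pages. The sketch below, using Python's standard `urllib.robotparser` module, shows how such a check works in practice; the bot name `ExampleAIBot` and the URLs are hypothetical placeholders, not real crawlers or sites.

```python
from urllib import robotparser

# A well-behaved crawler parses the site's robots.txt and checks
# each URL before fetching it. Here we feed the rules in directly;
# in practice you would call rp.set_url(...) and rp.read().
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: ExampleAIBot",   # hypothetical AI agent user agent
    "Disallow: /private/",
    "",
    "User-agent: *",
    "Allow: /",
])

# The /private/ section is off limits for this bot...
print(rp.can_fetch("ExampleAIBot", "https://example.com/private/data"))  # False
# ...while the rest of the site remains accessible.
print(rp.can_fetch("ExampleAIBot", "https://example.com/blog/post"))     # True
```

Note that robots.txt is a voluntary convention: compliant crawlers honour it, but it is not a technical barrier, which is why contractual and technical measures (discussed below) remain relevant.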
A second challenge with AI agents is transparency. My colleague David addresses this in his blog “Transparency in AI agents: when, why and how”. He asks whether a person has the right to know if they are communicating with an AI or a human. This is not only an ethical question but also touches on fundamental rights and obligations under European and national legislation.
The AI Act, for example, emphasises specific transparency requirements for AI systems that interact directly with individuals, but the practical implementation of these requirements remains challenging for now.
Can an AI agent act as a representative? My colleague Jade explores this in her blog “Is a contract with an AI agent legally binding?”. An AI agent has no legal personality but does act on behalf of a natural or legal person. This affects fundamental concepts such as the doctrine of legitimate expectations, unauthorised representation and attributable appearance of authority.
Deploying AI agents brings a range of compliance challenges that go beyond the transparency obligations mentioned above, particularly within the framework of the AI Act and other relevant European legislation.
The AI Act follows a risk-based approach, meaning that obligations for AI systems are aligned with the level of risk a system poses to the health, safety or fundamental rights of individuals. AI systems that pose unacceptable risks, such as harmful behavioural manipulation, are prohibited outright.
In practice, it can be difficult to classify general-purpose AI agents correctly, especially where their application may change depending on the context. However, it is important to realise that an AI agent falling within the statutory list of Annex I or the use cases of Annex III of the AI Act may be classified as a high-risk AI system under the Regulation.
If an AI system qualifies as high risk, stricter obligations apply, including adequate human oversight, safeguards for technical robustness and cybersecurity, data governance, logging, risk and quality management, conformity assessments and CE marking.
The AI Act is not the only relevant legislation. The GDPR must not be overlooked. Existing rules on the processing of personal data and automated decision-making also apply to AI agents. Compliance goes beyond legislation alone. AI agents may also perform tasks that are not aligned with an organisation’s internal policies.
The many challenges associated with deploying AI agents call for a focused and structured approach that goes beyond ad hoc solutions and isolated measures.
For organisations that encounter AI agents or intend to deploy them, it is crucial to start with the basics. This means defining and communicating clear boundaries for AI agents and other bots that may interact with your organisation or act on its behalf. This can be done through an external policy document on the website or through specific terms and conditions. This is the basic legal hygiene that every organisation should have in place from the outset.
Such documents include, among other things:
Crawling and scraping: You determine which data may be accessed and collected, and under which conditions.
Explicit copyright reservation: You decide whether TDM in relation to your content is permitted or not.
Use of personal data: You specify whether and which personal data on your website may be used.
Authority to act: You define which types of agreements AI agents may or may not conclude on behalf of a person or organisation, and possibly up to what value.
Limitation of liability: By including a limitation of liability, you determine the extent to which you are liable for incorrect actions of an AI agent.
To help organisations with this first step, we have developed a free AI Agent Policy Generator. This allows you to easily generate an AI Agent Policy. Based on a short questionnaire, a document is automatically created that specifically addresses the topics listed above. You can then publish this document on your website, for example in a visible location such as the footer, and optionally with a clickwrap for explicit acceptance by the user.
The output of our generator is more than just a policy document. The annex to the document also contains instructions, based on your answers, for bots in a format suitable for a robots.txt file for your website. This is practical, as a robots.txt file is still the prevailing standard for communicating with crawlers, scrapers, AI agents and similar bots.
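By way of illustration, a robots.txt fragment restricting crawlers might look like the sketch below. GPTBot is OpenAI's crawler user agent; bot names change over time, so verify the current names of the bots you want to address before publishing such a file.

```
# Block a specific AI crawler from the entire site
User-agent: GPTBot
Disallow: /

# All other bots: keep non-public sections off limits
User-agent: *
Disallow: /private/
Allow: /
```

The file must be served at the root of the domain (e.g. example.com/robots.txt) to be picked up by crawlers.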
Are the topics mentioned above essential for you? If so, consider incorporating them into your terms and conditions (provided these are properly made available and accepted) and implementing additional technical measures against scraping and TDM. Where necessary, one of our specialists will be happy to assist you with drafting and implementing such terms or a robots.txt file.
But what about enforcement? In practice, enforcement can present challenges and provides enough material for a separate blog, so I will leave it aside for now.
Once the first step has been taken, the next essential step is to implement risk-based governance for AI agents, whereby:
the context and application areas of the AI agent(s) for your organisation are defined (third-party and/or proprietary AI agents);
a targeted risk analysis is carried out (including compliance risks);
concrete technical and organisational control measures are documented in a risk treatment plan;
the control measures are effectively implemented.
This targeted approach, tailored to the specific context of the organisation, aligns well with the risk-based approach of, among others, the AI Act and provides concrete tools to manage risks effectively.
Do you need support with this second step? Feel free to contact one of our compliance specialists.
The rise of AI agents marks yet another chapter in digital transformation. It remains to be seen how the capabilities of AI agents will develop over the coming years. It is also important that universal technical standards are developed, including for general opt-outs by rightholders for TDM, which are machine-readable and binding on bots.1
However, it is essential that organisations start evolving alongside these developments now. This requires awareness, practical solutions and a proactive approach to potential risks. Only by bringing these elements together can we ensure that AI agents are framed and deployed responsibly, in a way that does justice to both the innovative power and the societal impact of this technology.
Use our free AI Agent Policy Generator to immediately create a policy document and accompanying robots.txt guidance. Complete the short questionnaire and download the free report.
1 The European Commission has also launched a tender for a feasibility study into a central register of opt-outs in the context of the exception for TDM.