Is an agreement with an AI agent legally binding?

Imagine this. You run a small gym called FitFitFit B.V. (FitFitFit) and you use an AI agent. This AI agent manages customer interactions, negotiates with suppliers, publishes offers on the website, drafts agreements and concludes them with customers. One day, FitFitFit goes viral on social media. To your alarm, you discover that the AI agent has created a subscription structure in which all services (unlimited gym access, free protein shakes, group classes, physiotherapy and nutrition consultations) are bundled for just €8 per month. On top of that, 2,000 new customers have already signed up and completed payment. The gym cannot accommodate this number of members, but the customers consider the agreement to be binding (the FitFitFit case).

The central question is this: to what extent is FitFitFit bound by an agreement concluded by its AI agent?

This blog explores the legal implications of that question. It first explains what an AI agent is and which legal requirements apply to a valid agreement. It then examines consensus, the concept of non-genuine mistake and the principle of legitimate reliance. The blog concludes with practical recommendations.

Please note: due to the limited scope of this blog, only the above aspects are discussed. We recognise that additional legal doctrines may also be relevant, such as information duties, human oversight and clauses in general terms and conditions. These topics will be addressed in a follow-up blog.

What is an AI agent?

An AI agent is a system driven by algorithms that can independently make decisions and perform actions within a predefined framework. AI agents often use machine learning, enabling them to learn from past experiences and adjust their behaviour accordingly. In business, AI agents can be deployed for a wide range of tasks, such as customer service, data analysis and, as in the FitFitFit case, conducting negotiations and concluding agreements.

Although AI agents can act autonomously, they remain, in principle, tools of a natural person or legal entity. This raises questions about the legal status of agreements concluded by AI agents.

What are the requirements for a valid agreement?

Under Dutch contract law, several requirements apply to the formation of a valid agreement. Key requirements in this context are:

  • Agreement: there must be an offer and an acceptance (Article 6:217 Dutch Civil Code (DCC)), and the intention of a party must correspond with its declaration (Article 3:33 DCC).
  • Legal capacity: the parties must be capable of independently performing legal acts.

An AI agent is not a legal person and lacks legal capacity. As a result, an AI agent cannot itself conclude a legally binding agreement. A natural person or legal entity must always be involved for a valid agreement to exist. The question in the FitFitFit case is whether the agreement meets the requirement of consensus. This is closely linked to the issue of non-genuine mistake and the principle of legitimate reliance.

Is there consensus?

A valid agreement arises when a sufficiently definite and clear offer is made and accepted by the counterparty. It is essential that intention and declaration correspond, both in the offer and in the acceptance. With human contracting parties, alignment between intention and declaration can usually be established through conduct, communication and evidence. With agreements concluded by AI agents, this assessment is more nuanced.

  • Intention and declaration correspond: as long as an AI agent acts within predefined parameters (for example, a subscription price between €50 and €60 per month) and performs accordingly, this requirement is met. Although the specific outcome may be unknown, the user has defined the agent’s freedom of action, which reflects their intention.
  • Intention and declaration do not correspond: if an AI agent makes fully autonomous decisions that cannot be traced back to the owner’s instructions, or if errors occur, the alignment between intention and declaration may be disputed. Possible causes include:
    • Learned behaviour: for example where an AI agent unintentionally grants an 80 per cent discount instead of 20 per cent due to learning effects.
    • Parameter errors: for example where an AI agent unintentionally grants an 80 per cent discount instead of 20 per cent due to incorrect settings by the owner.
    • Software errors: for example where a bug causes the AI agent to add extra services free of charge.
    • External causes: for example where hackers manipulate the AI agent to offer unrealistic prices.
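The distinction above can be illustrated with a small guardrail sketch. This is purely hypothetical code (all names, bounds and services are illustrative assumptions, not drawn from the case): it checks whether an AI agent's proposed offer stays inside owner-defined parameters, i.e. the situation in which intention and declaration arguably still correspond.

```python
from dataclasses import dataclass

@dataclass
class OfferPolicy:
    """Owner-defined parameters the AI agent must stay within.
    All names and bounds are hypothetical, for illustration only."""
    min_monthly_price: float = 50.0   # lower price bound set by the owner
    max_monthly_price: float = 60.0   # upper price bound set by the owner
    max_discount_pct: float = 20.0    # catches an unintended 80% discount
    allowed_services: frozenset = frozenset({"gym_access", "group_classes"})

def within_mandate(policy: OfferPolicy, price: float,
                   discount_pct: float, services: set) -> bool:
    """True only if the proposed offer stays inside the predefined
    parameters; outside them, intention and declaration diverge."""
    return (
        policy.min_monthly_price <= price <= policy.max_monthly_price
        and discount_pct <= policy.max_discount_pct
        and services <= policy.allowed_services
    )

# The €8 all-inclusive bundle from the FitFitFit case falls outside the mandate:
print(within_mandate(OfferPolicy(), 8.0, 0.0, {"gym_access", "group_classes"}))  # False
print(within_mandate(OfferPolicy(), 55.0, 10.0, {"gym_access"}))                 # True
```

An offer rejected by such a check would never be published, which is precisely the point at which the legal dispute in the FitFitFit case could have been avoided.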

In the FitFitFit case, the content of the offer deviates significantly from the owner’s intention. This raises the question whether the agreement is invalid due to non-genuine mistake, or whether consumers were entitled to rely on the offer.

Can FitFitFit rely on non-genuine mistake?

Non-genuine mistake arises where intention and declaration do not correspond, meaning that no agreement is formed. This differs from ordinary mistake (Article 6:228 DCC), where a party enters into an agreement based on an incorrect assumption. Non-genuine mistake does not concern a mutual misunderstanding, but an incorrectly made declaration, for example due to a typographical error or miscommunication. A well-known example is the judgment of the Court of Appeal in Stichting Postwanorder/Otto, where a pricing typo for LCD televisions did not result in a binding agreement.

When invoking non-genuine mistake, the responsibility of parties such as FitFitFit plays a key role. If FitFitFit has failed to implement adequate control and monitoring mechanisms, has made parameter errors or has not established rapid correction procedures, the error may be attributable to it. In that case, reliance on non-genuine mistake is less likely to succeed. If, however, the error arose outside the owner's control (for example due to a hack or a bug that could not reasonably have been foreseen), reliance on non-genuine mistake is more likely to succeed.

However, if the consumer was entitled to rely on the validity of the offer, reliance on non-genuine mistake is less likely to succeed. This therefore requires an assessment of whether consumers could legitimately assume that the FitFitFit offer was valid.

Can the consumer rely on legitimate reliance?

The principle of legitimate reliance means that an agreement is formed if the consumer could reasonably assume that the offer reflected the true intention of FitFitFit (Article 3:35 DCC). If there is doubt as to the correctness of the offer, the consumer has a duty to investigate. This means that, in the case of a clearly anomalous offer, the consumer must check whether a mistake has been made. The key question is whether the consumer could reasonably rely on the correctness of the offer, or whether there were circumstances that undermined that reliance.

Case law shows that certain factors can preclude a successful appeal to legitimate reliance. This is the case, for example, where the price is extremely high or low, or where the offer contains an obvious error. In such situations, no valid agreement is formed. The perspective of the reasonably well-informed consumer is decisive: a consumer who already intended to purchase the product and has carried out some orientation on its features and price. Where there is an objective appearance of authority, this may support a successful appeal to legitimate reliance.

Price deviations and obvious errors

From the above-mentioned judgment in Stichting Postwanorder/Otto, it follows that a consumer may not rely on an extremely low price if it clearly deviates from market-conform prices. In that case, the consumer should reasonably have understood that a mistake had been made. In the FitFitFit case, where a subscription is offered for €8, the low price triggers a duty to investigate. As this price is far below market levels, an appeal to legitimate reliance will, in principle, fail.

That said, the use of AI agents introduces additional circumstances that may strengthen a consumer’s reliance on the validity of an offer. This may occur, for example, where a consumer asks whether the offer is genuine and the AI agent responds: “Yes, it sounds fantastic, but it really is true.” Trust may also be reinforced because interaction with an AI agent resembles a conversation. This differs fundamentally from a general website offer, as was the case in Stichting Postwanorder/Otto. As a result of this enhanced trust, an agreement may still be validly formed.

Communication errors by AI

The communication of the AI agent also matters. Suppose an AI agent communicates on behalf of FitFitFit using incomprehensible characters or inconsistent messages. Is there then reason for the consumer to investigate whether the AI agent is still acting in accordance with the intention of FitFitFit?

From HR Kantharos van Stevensweert it follows that a party cannot rely on legitimate reliance where it could have known that a mistake was made. If the AI agent’s communication contains obvious errors or deviates from customary communication, the consumer is expected to carry out further investigation. In that case, reliance on legitimate reliance is less likely to succeed. However, a recent Canadian judgment has imposed limits on this duty to investigate, which may also be relevant in the Netherlands. That judgment holds that a consumer cannot be expected to consult multiple information sources on a website where a chatbot states something that contradicts another source. This is understandable, as it is often impossible for a consumer to determine which statement is correct. In that case, an agreement is formed unless other circumstances undermine the validity of the offer.

AI agents and the appearance of authority

Another relevant question is whether the doctrine of apparent authority can be applied by analogy to AI agents. If a consumer assumes that an AI agent acts on behalf of FitFitFit, may they rely on the alignment between FitFitFit’s intention and declaration? Under Article 3:61(2) DCC and case law such as HR Kribbenbijter and HR ING/Bera Holding, a contracting party may rely on apparent authority where this appearance is objectively created and attributable to the represented party.

In the FitFitFit case, it could be argued that the AI agent is deployed in such a way that the consumer could reasonably assume that it is “authorised”, allowing reliance on the validity of the offer. At present, however, this reasoning does not hold. First, AI agents cannot, as a matter of law, perform legal acts, which is a prerequisite for authority. Second, the above-mentioned Canadian judgment indicates that incorrect statements by a chatbot are directly attributed to the company.

Conclusion

The FitFitFit case illustrates the complex legal issues that arise with agreements concluded by AI agents. Although AI agents can operate autonomously, the company deploying the AI agent is, in principle, bound by such an agreement if the consumer could legitimately rely on the validity of the offer. Where that reliance is absent and there is non-genuine mistake, for example due to an unforeseeable error or hack, no agreement may have been formed.

Recommendations for companies using AI agents:

  • Clearly defined parameters: programme AI agents with strict parameters and avoid unintended learned behaviour by not granting excessive negotiation freedom.
  • Control mechanisms: implement controls to identify high-risk agreements (for example above a certain value or below a certain price threshold).
  • Human oversight: introduce human review before sending order confirmations (this may be burdensome at scale).
  • Monitor and correct: ensure continuous monitoring and adjustment of AI-driven processes.
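The "control mechanisms" and "human oversight" recommendations can be sketched in code. This is a minimal, hypothetical example (the reference price, deviation threshold and sign-up threshold are assumptions, not figures from the case): anomalous agreements are held back for human review before a confirmation is sent.

```python
REFERENCE_PRICE = 55.0   # assumed market-conform monthly price
MAX_DEVIATION = 0.30     # flag offers deviating more than 30% (assumption)

def needs_human_review(monthly_price: float, new_signups_per_hour: int,
                       signup_spike_threshold: int = 100) -> bool:
    """Hold the order confirmation for human review when the price
    deviates sharply from the reference price or sign-ups spike
    abnormally (as when an offer goes viral)."""
    price_anomaly = (
        abs(monthly_price - REFERENCE_PRICE) / REFERENCE_PRICE > MAX_DEVIATION
    )
    volume_anomaly = new_signups_per_hour > signup_spike_threshold
    return price_anomaly or volume_anomaly

print(needs_human_review(8.0, 10))   # True  (the FitFitFit scenario)
print(needs_human_review(55.0, 20))  # False (normal operation)
```

Routinely reviewing only the flagged agreements keeps the human-oversight burden manageable at scale, rather than requiring review of every confirmation.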

By taking these measures, companies such as FitFitFit can reduce the risks associated with agreements concluded by AI agents and prevent legal disputes.

Does your organisation use AI?

Make sure you are legally on the right track. We are happy to help. Get in touch without obligation to explore the possibilities.
