From October 2025 to March 2026, legal professionals tested our AI Pro Pack in their daily practice. What started as a beta phase quickly became a serious test case: does legal AI truly work in practice, and if so, under what conditions?
As the beta phase drew to a close, it was time to bring all the testers together. On 11 February, we hosted an exclusive beta event: an in-depth afternoon of interactive sessions and discussions about what AI can already deliver in legal practice, and how we move forward from here.
These are the key insights.
The afternoon opened with Mark Zijlstra and Koen van Jaarsveld sharing the main findings from the beta phase. Since October, a clear trend has emerged: use of the AI Pro Pack has steadily increased. Testers did not merely experiment, they integrated it into their workflows.
It is also clear how the pack is being used. Privacy and compliance-focused GPTs are the most popular. These include tools for DPIAs, data breach analyses, AI Act assessments, clause reviews and policy drafting. The GPTs do not replace legal judgement. They accelerate analysis and provide a structured framework for thinking.
A survey among 91 respondents shows broad appreciation for the product and clear added value in daily practice. At the same time, the message from the profession is consistent and professional: reliability is essential. Lawyers value efficiency, but never at the expense of control. There is also demand for expansion into additional practice areas. We are already developing a public procurement assistant.
Without doubt, the highlight of the afternoon was the Prompt Challenge. Two Legal Counsels were given identical assignments. One worked without AI. The other used the AI Pro Pack.
The assignments:
Participants could follow both reasoning processes live. The substantive outcomes were strikingly similar. The route to get there was not. The manual analysis was thorough but time-consuming. The AI Pro Pack immediately provided structure, key issues and relevant legal frameworks. It did not replace legal judgement, but significantly shortened the path to it. This aligned closely with the keynote message: the strength of a lawyer lies in knowledge, experience and judgement. AI does not change who is responsible. It changes how we work.
In his keynote, “AI & Legal Practice 2026”, Mark Zijlstra outlined the broader development. AI is no longer a tool you use occasionally. It is becoming infrastructure. We are moving from generic chat functionality to integrated AI ecosystems linked to knowledge bases, contract templates, compliance processes and internal workflows. This shift changes legal services not only in substance, but also organisationally. The real question is no longer whether to use AI, but whether to use AI responsibly.
That is precisely where the AI Pro Pack distinguishes itself from generic AI solutions. It is not a standalone chatbot. It is a legally curated environment, built on our expertise and designed for responsible use within organisations.
During the panel discussion, three beta testers shared their experiences with daily use of the AI Pro Pack: Wilma van de Meerakker (IJk), Marjolein van der Heide (UNETI Labs) and Niels Dutij. A clear pattern emerged: everyone benefited and worked more efficiently. AI supports structuring, drafting and initial analysis.
At the same time, one theme kept returning: human oversight remains essential. Implementation within existing workflows requires careful consideration. That professional caution is not a barrier. It is a precondition for responsible use.
In addition to reflection, the event focused on practical application. The workshops were hands-on and immediately applicable. Participants chose between two sessions: “Build your own GPT” or “Draft an AI policy”.
Participants worked on prompt engineering and GPT configuration. Topics included role definition, objective setting, output structure, contextualisation and tone of voice.
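To make those building blocks concrete, here is an illustrative sketch of how a system instruction can be composed from the five topics above. The field names and wording are our own assumptions for illustration, not the AI Pro Pack's actual configuration format.

```python
# Sketch: composing a GPT system instruction from the workshop's building
# blocks (role, objective, output structure, context, tone of voice).
# All field labels and example values here are hypothetical.
def build_instruction(role: str, objective: str, output_structure: str,
                      context: str, tone: str) -> str:
    return "\n".join([
        f"Role: {role}",
        f"Objective: {objective}",
        f"Output structure: {output_structure}",
        f"Context: {context}",
        f"Tone of voice: {tone}",
    ])

instruction = build_instruction(
    role="You are a privacy lawyer assisting with DPIA reviews.",
    objective="Identify processing risks and suggest mitigations.",
    output_structure="Numbered findings, each with a risk level and mitigation.",
    context="The organisation processes employee data under the GDPR.",
    tone="Formal, precise and suitable for legal review.",
)
print(instruction)
```

Separating the instruction into named components makes each choice explicit and easy to review, which is exactly the discipline the workshop aimed to instil.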
We explained how embeddings, Retrieval-Augmented Generation (RAG) and vector databases operate in practice. One important insight stood out: a strong GPT does not start with technology, but with clear instructions and domain expertise. We also presented our prompt library as an example of how legal knowledge can be embedded structurally.
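The retrieval step behind RAG can be sketched in a few lines. The toy `embed` function and the tiny in-memory "vector database" below are assumptions for illustration only; a real system uses a trained embedding model and a dedicated vector store, but the ranking-by-similarity idea is the same.

```python
# Minimal sketch of RAG retrieval: embed documents, embed the query,
# rank by cosine similarity, and return the best matches as context.
import math

def embed(text: str) -> list[float]:
    # Toy embedding: word counts over a tiny fixed vocabulary.
    vocab = ["dpia", "breach", "clause", "policy", "gdpr"]
    words = text.lower().split()
    return [float(words.count(term)) for term in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# "Vector database": documents stored alongside their embeddings.
documents = [
    "Template for a dpia under the gdpr",
    "Checklist for reporting a data breach",
    "Standard confidentiality clause for contracts",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# The retrieved passages are then prepended to the prompt, so the model
# answers grounded in the organisation's own material.
print(retrieve("handle a data breach"))
```

This also illustrates the workshop's key insight: the technology is simple plumbing, and the quality of the result depends on the instructions and the domain material you feed it.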
In the workshop led by Pelçim Kaygusuz, the focus was on governance. Using AI without a framework invites risk. Why is an AI policy necessary? Because AI tools are now widely accessible and easy to use. Without clear rules, shadow IT emerges, responsibilities become blurred and the risk of improper data processing increases.
Participants worked on defining scope and purpose, ethical principles, roles and responsibilities, and alignment with legislation such as the AI Act and the GDPR. The result was not a theoretical document, but a practical foundation for implementation within their own organisations.
What sets these testers apart is not the technology, but the mindset. They are pioneers: the first generation of lawyers who are not only testing AI, but actively shaping its development. They provide critical feedback, identify areas for improvement and contribute to further refinement.
At the end of the afternoon, they received a Founding Member certificate, a symbolic gesture recognising their substantive contribution.
During the beta phase, the AI Pro Pack evolved from a pilot into a serious tool for recurring legal tasks that consume significant time but need not. It is practical, legally robust and designed with digital sovereignty in mind.
Its distinguishing features include:
The beta phase runs until 1 March. After that, we move to the next stage of broader rollout. Would you like to explore how the AI Pro Pack can be implemented within your organisation? Or would you like a demonstration of our AI environment? Please feel free to contact us.