Recap: AI regulation in 2025

2025 has flown by and, as expected, it has been a turbulent year for AI and its regulation. The train of technological progress did not slow down, and a new series of AI applications, systems and models were released, all promising to make our lives easier. At the same time, it was also a year in which scepticism about AI continued to grow. Does AI technology really deliver on its promises? Is the AI economy nothing more than a large bubble? It is difficult to answer these questions with any degree of certainty. What we can do is take stock of developments, so that we can enter 2026 with perspective.

Technological developments

In 2025, there were not many truly major breakthroughs in terms of new AI applications or the way we use computers. The year was mainly focused on improving existing tools: making large language models such as ChatGPT, Copilot and Gemini more powerful and capable; integrating AI into existing applications; and, as we also see with many clients, embedding AI into internal processes. Despite this ‘steady as she goes’ approach, there were still a number of technical developments that shaped AI in 2025 and will continue to do so in the years to come.

Reflective ‘reasoning’ models

The most significant improvement for large language models such as ChatGPT has been the introduction of a ‘thinking mode’. In this mode, these tools work through a number of steps comparable to a reasoning process in order to arrive at more precise answers. At their core, language models still do exactly the same thing: they predict and complete words based on the text of the user’s prompt. Reasoning models, however, first generate intermediate text according to a fixed structure, for example: “To answer the user’s question, I first need to know whether (…)”. By handling questions in this more step-by-step manner, more performance can be extracted from the same models without making them significantly more complex.
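To make this concrete, here is a minimal sketch in Python of the idea behind such a ‘thinking mode’, built on top of an ordinary completion model. It is a deliberately simplified illustration: generate() is a hypothetical placeholder for a text-completion call to any language model, not a real API, and production reasoning models are trained to do this internally rather than through two separate calls.

    # Hypothetical sketch of a 'thinking mode' built on a plain completion model.
    # generate() is a placeholder, not a real library function.

    def generate(prompt: str) -> str:
        """Stand-in for a text-completion call to any language model."""
        raise NotImplementedError("wire this up to a real model")

    def answer_with_reasoning(question: str) -> str:
        # Step 1: have the model produce intermediate 'reasoning' text,
        # steered by a fixed scaffold like the one quoted above.
        thinking_prompt = (
            "To answer the user's question, reason step by step.\n"
            f"Question: {question}\n"
            "Reasoning:"
        )
        reasoning = generate(thinking_prompt)

        # Step 2: feed that reasoning back in and ask for the final answer only;
        # the user typically never sees the intermediate text.
        answer_prompt = (
            f"Question: {question}\n"
            f"Reasoning: {reasoning}\n"
            "Final answer:"
        )
        return generate(answer_prompt)

The gain comes entirely from how the prompt is structured across the two calls; the underlying model is unchanged, which is exactly why this technique can extract more performance without making the model itself larger.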

This was also very much needed. The string of major leaps in recent years, such as GPT-3.5 and GPT-4, appears to have reached something of a plateau. Simply making models larger and more complex is no longer widely seen as a cost-effective way to squeeze additional performance out of language models. Model developers are therefore increasingly reliant on these kinds of smart techniques to achieve further gains.

The AI Act: multiple components enter into force

In 2025, several parts of the AI Act also entered into force. The AI Act is structured so that rules for different risk categories apply in phases, with the largest portion still ahead of us in 2026. Over the past year, we have seen the rules for prohibited practices, AI literacy and general-purpose AI (GPAI) come into effect. In brief:

  • Since 2 February 2025, the rules on prohibited practices (Article 5 AIA) have applied. This means that all AI systems that fall within the scope of Article 5 may no longer be deployed and must be stopped immediately. Fines for continuing to use such systems can be substantial: up to €35 million or 7% of worldwide annual turnover, whichever is higher.
  • Also since 2 February 2025, the rules on AI literacy (Article 4 AIA) have applied. These require staff to be sufficiently trained, or retrained, to properly understand the AI systems they use in their work and to use them responsibly. This calls for an organisation- and application-specific approach.
  • Since 2 August 2025, the rules for general-purpose AI (GPAI) have applied. These include a set of obligations relating to the development of general-purpose AI models, how the safety of those models is ensured, and the publication of a sufficiently detailed summary of the content used to train the model, including copyrighted material (a purely illustrative sketch of such a record follows this list).
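The AI Act does not prescribe any technical format for that training-content summary. Purely as an illustration, a provider’s internal record of training sources could be modelled along the following lines; every field name here is a hypothetical assumption, not an official compliance template.

    from dataclasses import dataclass

    @dataclass
    class TrainingSource:
        """One entry in a hypothetical internal register of training content.

        The AI Act prescribes no technical format; these fields are
        illustrative assumptions, not an official compliance template.
        """
        name: str                       # e.g. a dataset or crawl identifier
        url: str | None = None          # where the content was obtained
        licence: str | None = None      # licence or legal basis, if known
        contains_copyrighted: bool = False
        opt_outs_honoured: bool = True  # rights-holder opt-outs respected?
        notes: str = ""

    # Example usage: recording a single (fictitious) source.
    register: list[TrainingSource] = [
        TrainingSource(
            name="public-web-crawl-2025-06",
            url="https://example.com/crawl",
            licence="mixed / unknown",
            contains_copyrighted=True,
            notes="Filtered against robots.txt and TDM opt-out signals.",
        )
    ]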

In addition, over the course of the year the European Commission published a series of guidance documents on how various parts of the AI Act should be interpreted, including guidance on the scope of the concept of an AI system, on prohibited practices, on GPAI, and much more. Further guidelines are planned for the coming year, including on high-risk AI systems.

What lies ahead?

There is once again a great deal on the agenda for this year, particularly from a regulatory perspective. From August 2026 onwards, the AI Act will be almost fully in force, meaning that from that moment we must genuinely comply with all the requirements set out in the text.

Or is that really the case? The agenda of the Council of the European Union and the European Parliament also includes the Omnibus Regulation proposal relating to the AI Act. This proposal, which I also discussed in an earlier blog, aims to trim a significant number of elements of the AI Act and to defer the obligations for high-risk AI. It is therefore possible that the year will turn out differently than expected and that we may yet be given an additional one or two years to become compliant.

At the same time, this is still only a proposal. Whether it will successfully pass through the European Parliament is far from certain. In this climate of uncertainty, it remains important for organisations to do everything they can to ensure timely compliance.

Curious to see how AI will continue to develop this year? Stay up to date with our AI newsletter.
