Standardisation and AI: why standards will soon be crucial for compliance with the AI Act

The AI Act was adopted in 2024, but practical compliance is only now beginning to take shape. At the end of 2025, the first draft standard (prEN) under the AI Act entered public consultation. This marked an important turning point: the translation of legislation into concrete technical requirements is now becoming tangible.

This blog is the first in a series in which, throughout 2026, we will reflect on the key developments surrounding the AI Act and on one of its most underestimated components: standardisation.

As early as May 2023, the European Commission issued a formal standardisation request to develop European standards in support of the obligations for high-risk AI systems. The outcomes of that request will become increasingly visible this year. Standardisation is therefore no longer an abstract future exercise. The expectation is that several standards will be released for public consultation during 2026 through the working groups of the European standardisation bodies.

In this blog, I will guide you through three levels of standardisation: international, European and, ultimately, the harmonised standards that will directly support compliance with the AI Act.

1. International standards: the global foundation

Standardisation begins at international level. Organisations such as ISO (International Organization for Standardization) and IEC (International Electrotechnical Commission) develop technical standards that are applied worldwide.

Within the AI domain, ISO/IEC JTC 1/SC 42 is particularly relevant. This committee develops standards on topics such as:

  • terminology and core AI concepts;

  • risk management and governance;

  • transparency and explainability;

  • data quality and bias mitigation;

  • lifecycle management of AI systems.

International standards provide organisations with a shared baseline. They help ensure that AI is not approached in fundamentally different ways across countries or sectors and often serve as the starting point for further European standardisation. For companies with international operations, these standards are also important to ensure consistent policies across multiple jurisdictions.

2. European standards: further development within the EU framework

Although international standards are influential, Europe has its own standardisation ecosystem. Within the EU, standards are developed by:

  • CEN (European Committee for Standardization);

  • CENELEC (European Committee for Electrotechnical Standardization);

  • ETSI (European Telecommunications Standards Institute, covering telecommunications and digital infrastructure).

European standards often build on ISO standards, but place additional emphasis on European values and legal frameworks, such as:

  • product safety;

  • the protection of fundamental rights, including privacy and non-discrimination;

  • human oversight and control; and

  • alignment with existing European legislation.

The EU explicitly views standardisation as a tool to enable innovation while at the same time strengthening trust in AI systems.

3. Harmonised standards: the compliance key under the AI Act

The most relevant step for organisations is the development of harmonised standards. These are European standards that are formally designated by the European Commission to support EU legislation, including the AI Act.

During development, standards are published as draft European Standards, designated “prEN” (the “pr” prefix marks a draft). These drafts are not yet final, but already provide guidance on future obligations. Only once a standard has been formally adopted and its reference has been published by the Commission in the Official Journal of the EU does it become a harmonised European standard, or “hEN”.

Why are harmonised standards becoming so important?

Like many other EU regulations, the AI Act contains open-ended obligations, in particular for high-risk AI systems and their providers. The AI Act sets out the “what”, while harmonised standards will define the “how”. For many organisations, they therefore represent the practical route to demonstrable compliance. Compliance with a harmonised standard also unlocks an important legal mechanism: the presumption of conformity. In principle, supervisory authorities may then assume that the relevant legal requirements have been met, provided the standard has been correctly applied.

Which standardisation tracks are currently underway under the AI Act?

In May 2023, the European Commission issued a formal standardisation request to CEN and CENELEC to develop European harmonised standards in support of the requirements for high-risk AI systems under the AI Act. This request forms the foundation of the current standardisation programme.

CEN-CENELEC JTC 21: the core committee for AI Act standards

The centre of gravity of this programme lies with the Joint Technical Committee CEN-CENELEC JTC 21 – Artificial Intelligence (JTC 21). This committee was established with the explicit mandate to develop European AI standards that support the implementation of the AI Act. JTC 21 works through various working groups on so-called horizontal standards. These are sector-agnostic standards that directly reflect the core obligations set out in the AI Act.

Importantly, JTC 21 does not start from scratch. Much of the work builds on existing ISO/IEC standards, adapted to European legal requirements. In many cases, ISO/IEC standards are also referenced as supporting standards.

The topics: standards for the core obligations of high-risk AI systems

The European Commission has identified ten main topics for which standards must be developed. The most important include:

  • AI risk management systems;

  • dataset governance and data quality;

  • documentation and logging;

  • transparency and information for users;

  • human oversight;

  • accuracy and robustness;

  • cybersecurity requirements;

  • quality management systems for providers.

For organisations, this means that these standards will ultimately determine how the obligations in Chapter III of the AI Act for high-risk AI systems and their providers must be implemented in practice.

First standard in consultation

A significant recent development is that, at the end of 2025, the first AI Act standard entered public consultation: prEN 18286 – AI Quality Management System for EU AI Act purposes. This standard is intended to support providers of high-risk AI systems in meeting Article 17 of the AI Act on quality management systems. The public consultation ran until 29 December 2025. The expectation is that several additional standards from other working groups will enter consultation during 2026.

Because the standardisation timeline is lagging behind the implementation schedule of the AI Act, CEN and CENELEC decided in 2025 to accelerate the process. The expectation is that the next package of standards will be adopted between 2026 and 2027 and can then be harmonised by the Commission.

Practical relevance: preparing now

For providers, this means that compliance will not only be assessed from a legal perspective, but will increasingly be technically standardised. From a strategic perspective, it is therefore prudent to take action now by:

  • monitoring prEN developments via the JTC 21 work programme;

  • following and, where appropriate, implementing relevant ISO/IEC standards;

  • embedding quality and risk management processes within the organisation.

Conclusion: standards as the roadmap to AI Act compliance

The AI Act provides the legal framework, but standards will determine in practice how organisations can demonstrate compliance. Those who gain early insight into standardisation developments and prepare in time will build robust AI governance and avoid a future compliance catch-up. Standardisation is not a technical detail. It is a core element of AI regulation.

In the upcoming blogs in this series, we will further explore the key developments around the AI Act and the harmonised standards that organisations can expect in 2026.

On 2 August 2026, the AI Act will become fully applicable. This creates even greater urgency for CAICO®s to contribute to effective AI compliance within organisations. During our alumni afternoon, you will receive practical tools and strategic insights.
