Smart technology, reckless use? AI and ESG in practice

The use of artificial intelligence (AI) raises questions not only about technology and efficiency, but also about social responsibility. In this blog, we provide an initial overview of how AI intersects with three key pillars of sustainable and responsible business: Environmental, Social and Governance, collectively referred to as ESG. These themes are becoming increasingly important, particularly with the introduction of new legislation such as the AI Regulation (AI Act), the Corporate Sustainability Due Diligence Directive (CSDDD) and the reporting obligations under the Corporate Sustainability Reporting Directive (CSRD).

Environmental: AI consumes significant amounts of energy, water and resources

AI relies on substantial computing power. This has direct environmental effects, including energy consumption, water use, electronic waste and emissions. Consider the training of language models, the execution of search queries and the continuous operation of data centres. In addition, AI can indirectly increase environmental pressure, for example through the growing use of autonomous vehicles or the optimisation of consumption behaviour.

Although concerns are frequently voiced, the precise scale of this impact remains unclear and difficult to determine. There are currently no standardised methods to objectively measure or report the environmental impact of AI. International organisations such as the United Nations and UNESCO therefore advocate clear standards and systematic monitoring. European legislation also addresses this issue: the AI Act calls for the development of codes of conduct for the sustainable use of AI, and the CSRD requires organisations to report on ecological impacts, including those of digital technologies.

In short, organisations cannot ignore the environmental impact of AI, even if precise figures are not always available.

Social: probabilities, bias and the human dimension

AI operates on the basis of probability. A system is trained on large volumes of historical data, identifies patterns and predicts the most likely outcome. This approach makes AI powerful, but it also entails risks. If the underlying data is incomplete, one-sided or biased, the system will learn and replicate those patterns, including their flaws.

This is known as bias. It is not only a technical risk, but also a social issue. Consider AI used in recruitment and selection or in credit assessment. Such systems can systematically disadvantage certain groups without this being immediately visible. Women, people of colour, minorities and low-income groups are particularly at risk.
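To make this mechanism concrete, consider a deliberately simplified sketch (all data is hypothetical, and no real recruitment system works this crudely): a model that merely learns hire rates from skewed historical data will faithfully reproduce that skew in its predictions.

```python
from collections import Counter

# Hypothetical, deliberately biased historical hiring data: (group, hired).
# Group "A" was hired 80% of the time, group "B" only 30%.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 30 + [("B", False)] * 70)

def train(records):
    """'Train' by estimating the historical hire rate per group."""
    hires = Counter(group for group, hired in records if hired)
    totals = Counter(group for group, _ in records)
    return {group: hires[group] / totals[group] for group in totals}

def predict(model, group, threshold=0.5):
    """Recommend 'hire' when the learned hire rate exceeds the threshold."""
    return model[group] >= threshold

model = train(history)
print(model)                 # the skew in the data becomes the model
print(predict(model, "A"))   # True
print(predict(model, "B"))   # False: group B is disadvantaged by history
```

The model is not "wrong" in a statistical sense; it accurately summarises the past. The harm arises when that past is used, invisibly, as a norm for future decisions.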

The AI Act seeks to limit these risks by prohibiting certain applications or classifying them as high-risk. For example, the use of AI for emotion recognition in the workplace or in education is prohibited. Profiling and the use of sensitive data for biometric categorisation are also subject to strict regulation.

In addition, both legislators and international organisations emphasise the importance of explainability, human oversight and critical reflection. AI may support decision-making, but it must never be the sole decision-maker, especially in contexts where fundamental rights are at stake.

Governance: responsibility requires action, not box-ticking

Good governance means it is clear who within the organisation is responsible for AI systems. The AI Act and the CSDDD require organisations to establish risk analyses and internal governance structures. Examples include a Chief AI Compliance Officer, ethics committees or specialised governance boards.

In practice, however, ethics often remains limited to non-committal discussions and abstract guidelines that disappear into desk drawers. Engineers rapidly build algorithms and data analysts develop complex models, often without clear ethical frameworks. Problems then only become visible when an AI scandal emerges and no one knows who was responsible.

Ethics and accountability must therefore not be reduced to a superficial tick-the-box exercise. They must be embedded in the foundations of AI development and in the organisation itself. Privacy by design, bias monitoring and transparency are not optional extras, but essential conditions. Good governance requires organisations and employees who are aware of their role and responsibility in a digital society.

AI and ESG: responsibility is not optional, but essential

The use of AI affects all aspects of ESG, from CO₂ emissions to social exclusion and internal decision-making. This makes the topic complex, but also unavoidable. Organisations are now faced with the task of taking active and integrated responsibility. In doing so, they must remain alert to the risk of diffusion of responsibility, which is particularly significant when embedding ethics within the organisation.

Organisations must prepare for reporting obligations, supervisory mechanisms and ethical considerations. Not because this is demanded by Brussels, but because technology without oversight inevitably entails risks. Using AI consciously, with due regard for the environment, society and ethics, is no longer a voluntary choice. It is a necessity for responsible business conduct.

Do you need support in deploying AI responsibly? We are happy to help. Contact us to discuss the possibilities.
