The AI Act requires organisations, as from 2 February 2025, to ensure that their employees are AI literate. This means that employees must understand what AI is, how it works and what impact it can have. Below we answer the most frequently asked questions about this new obligation.
1. What is AI literacy?
AI literacy means that employees and board members have sufficient knowledge of what AI is, how it works and which legal and ethical aspects are involved. It is about insight and practical application across the entire organisation. Can you assess the impact of AI on your organisation? Do you understand how to deploy AI systems in a safe and legally responsible manner?
2. Why is AI literacy important for my organisation?
AI can affect decision-making, compliance and innovation. Imagine a bank that uses AI to assess loan applications: it can take decisions faster, automate regulatory compliance and develop innovative credit models.
AI literacy also helps organisations to identify opportunities and base strategic decisions on a realistic understanding of what AI can and cannot do. For example, an organisation may implement AI analytics to predict market trends, allowing it to make informed strategic decisions and seize new growth opportunities.
Without AI literacy, organisations may face risks such as:
- Incorrect implementations: errors in the implementation of AI can lead to undesirable outcomes, such as the introduction of bias.
- Legal disputes: disputes may arise if AI infringes the GDPR or other relevant laws and regulations.
- Ethical issues: the use of AI can raise ethical concerns, such as a lack of transparency in decision-making.
- Unrealistic expectations of AI: AI is sometimes seen as a magic solution to complex problems, which can result in poor investments or disappointment when expectations are not met.
- Blind reliance on suppliers: vendors may present AI as a black box without addressing legal or ethical risks.
3. Which aspects do I need to understand to be AI literate?
Key topics include:
- The basics of AI: what is AI and how does it work?
- The technical side of AI: how do AI systems work from a technical perspective?
- Prohibited and high-risk AI: which AI practices are prohibited and which legal obligations apply to high-risk AI?
- Ethical principles for trustworthy AI: which ethical principles should be considered when developing and using AI?
- Data awareness: which rules apply to the collection and use of data?
4. How can my organisation improve AI literacy?
Start with:
- Provide training: give employees practical knowledge of AI and the applicable legal framework.
- Create an AI policy: establish an organisation-wide policy that sets clear boundaries for the use of AI, including guidelines on ethics, transparency and oversight.
- Involve experts: work with legal and technical advisers for guidance.
- Promote collaboration across departments: bring together people from different backgrounds and teams to reduce bias in AI.
- Implement a monitoring and feedback system: enable continuous evaluation of AI systems and the identification of areas for improvement.
Want to know more or need support?
AI literacy is essential for future-proof organisations. It enables innovation while keeping legal risks under control.
The Dutch Data Protection Authority also underlines the importance of AI literacy and emphasises that organisations must train their employees accordingly. In its document ‘Get started with AI Literacy’, the authority provides practical guidance to help organisations take the first steps.
Has your organisation already taken the right steps? We are happy to help you design an AI literacy programme that fits your organisation. Contact us for more information.
