AI in your organisation: where do you start?

AI can deliver significant benefits. Many organisations now recognise its potential and are starting to develop or use AI. The journey towards successful AI implementation differs for every organisation, but it always starts with a careful and structured approach. This is the first blog in a series in which we discuss legal and compliance challenges and our approach using our AI toolkit. In this first blog, I outline several key initial steps you can take when your organisation starts working with AI.

Take stock

Before developing or using an AI solution, it is important to identify your organisation’s specific needs. What are the current challenges and where are the opportunities for improvement? Which concrete problems do you want to solve, and what value does the AI solution add? Once you have a specific AI application in mind, clearly map out its technology and operation and define its scope of use. As the saying goes: measuring is knowing.

In larger organisations, successful AI implementation requires the involvement of multiple stakeholders. Identify who these stakeholders are and involve them as early as possible so their input can be taken into account in good time. This helps prevent unnecessary delays in decision-making and creates shared support across the organisation.

Handle data with care

Data is the fuel for AI models. If you are developing AI, it is crucial to first identify which data you will use to train the AI model. Then assess whether the selected data is suitable for the intended AI application. Review the data for quality, representativeness, diversity and, of course, the lawfulness of its use. This can prevent many problems at a later stage.

Care is also required when using an existing AI application. First, where possible, it is advisable to check the datasets used by the provider. Has the AI been trained on data that matches your intended use? Imagine this scenario: a vegetable grower is unpleasantly surprised when their brand-new AI inspection system rejects all their tomatoes because the system was trained exclusively on potatoes. In addition, it is important to understand what data you will be inputting when using the AI application. Is this customer data or commercially sensitive information? Consider what else happens to this data and whether you can justify that use.

Which laws and regulations apply?

This may be obvious, but when developing and using AI you must take the relevant laws and regulations into account. Identify which legal frameworks apply to your situation. The first framework that will likely come to mind is the AI Act. This widely discussed European regulation entered into force on 1 August 2024 and plays a central role in the regulation of AI.

For many AI applications, compliance does not stop with the AI Act. If you process personal data, you must also comply with the General Data Protection Regulation (GDPR). Depending on your specific situation, other legislation and sector-specific standards may also apply, at both European and national level.

It is therefore important to carefully qualify and categorise your AI applications based on the applicable legislation, so you gain a clear understanding of the obligations your organisation must meet.

Policy and governance

Depending on the complexity of the AI application and the size of the organisation, you will need to set internal frameworks. This can be done, for example, through an internal AI policy. Such a policy defines the do’s and don’ts, the level of human oversight, roles and responsibilities, and the control measures to be implemented. The more your organisation intends to do with AI, and the more complex the AI applications become, the more important it is to pursue a clear and consistent course.

Ensure the right expertise

I cannot stress this enough: ensure you have the right expertise in AI governance and compliance. You will need to monitor and manage the AI applications you develop or use throughout their entire lifecycle. In addition, the AI Act requires that every employee who works with AI has sufficient knowledge and instructions to use it responsibly. Make sure these people understand which rules apply, what happens to the data used by the AI application, and what an AI application can do, but also, importantly, what its limitations are. Who will take the lead on this within your organisation?

We can support you in this and provide the necessary expertise. Our Certified AI Compliance Officer (CAICO) helps you navigate these legal and ethical AI challenges. If you want to deepen your own knowledge, or that of your employees, in AI compliance and governance, our CAICO training programme may be of interest.

Our AI toolkit

We understand the complexity of the challenges associated with developing and using AI. That is why we have developed our own AI toolkit. This toolkit provides our CAICOs with practical tools to effectively assess AI applications, risks and compliance. Through the AI inventory in our toolkit, you gain insight into your organisation’s AI landscape and into the operation of the AI applications you intend to develop or use. With the AI Act compliance scan, you receive a detailed report on the applicable standards for your specific AI applications, your organisation’s current situation, and the steps you still need to take to achieve compliance. Using our Data Protection Impact Assessment (DPIA), familiar from the GDPR, and our Fundamental Rights Impact Assessment (FRIA), we can assess the impact on the fundamental rights of the individuals concerned and identify appropriate control measures for your organisation.

Would you like to know more about how our AI toolkit can help your organisation with developing and procuring AI? Visit our AI toolkit page for more information or contact one of our experts.
