
Zahed Ashkara
AI Compliance Expert
During an internal audit at the municipality of Middelveld, policy advisor Noor van der Wijst stares at an Excel sheet with more than a hundred columns. Each field represents an algorithm that has quietly crept in over the past few years: from automatic parking enforcement to a model that predicts which students need extra care hours. Noor needs to answer one simple question: which of these systems are "high risk" according to the EU AI Act?
The AI Act divides all applications into a pyramid of four layers. At the bottom are minimal and limited-risk systems; these require at most transparency notifications. At the very top are the "unacceptable" use cases, such as real-time facial recognition on the street: these are simply prohibited. But the middle, the broad, gray strip of high-risk AI, is where the real game is played. Here, strict design, documentation, and supervision requirements apply. For Noor, the key question is therefore not whether a model is useful, but whether it falls within the high-risk scope of Article 6 and Annex III. (1, 2)
Article 6 effectively works as a double threshold. A system is high risk when (1) it appears in Annex III (think of social security decisions, law enforcement, or critical infrastructure) and (2) it poses a significant risk to health, safety, or fundamental rights. (2) In practice, a municipality must first compare its use cases against the Annex, and then perform a quick 'fundamental rights test': who could be harmed if the model fails?
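To make that double threshold concrete, here is a minimal Python sketch of the triage logic. The field names, the Annex III category strings, and the significant-risk flag are illustrative assumptions, not a legal checklist.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Art. 5)"
    HIGH = "high risk (Art. 6 + Annex III)"
    LIMITED = "limited risk (transparency duties)"
    MINIMAL = "minimal risk"


@dataclass
class UseCase:
    name: str
    annex_iii_area: Optional[str]   # e.g. "access to public services", "law enforcement"
    prohibited_practice: bool       # Art. 5 practices such as real-time facial recognition in public
    significant_risk: bool          # outcome of the quick 'fundamental rights test'
    transparency_only: bool         # e.g. systems that merely need to disclose they are AI


def classify(use_case: UseCase) -> RiskTier:
    """Illustrative triage mirroring the pyramid described above."""
    if use_case.prohibited_practice:
        return RiskTier.UNACCEPTABLE
    # The Article 6 double threshold: listed in Annex III AND posing a significant risk.
    if use_case.annex_iii_area is not None and use_case.significant_risk:
        return RiskTier.HIGH
    if use_case.transparency_only:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


care_hours_model = UseCase(
    name="care-hours selection model",
    annex_iii_area="access to public services",
    prohibited_practice=False,
    significant_risk=True,
    transparency_only=False,
)
print(classify(care_hours_model).value)  # high risk (Art. 6 + Annex III)
```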
Noor discovers that the parking enforcement module doesn't go beyond an automatic recommendation; a parking officer ultimately decides for themselves. Limited risk, check. The model that selects students for extra care hours? That affects access to public services (Annex III, § 5) and can lead to stigmatization. High risk.
Inventorying seems simple (copy-paste all algorithms into a spreadsheet), but reality is messier. IT calls something a "tool", HR talks about a "dashboard", and the supplier sells an "AI module". Scoping therefore starts with language: define what constitutes an AI system in your organization. The Dutch government uses a broad description in its Algorithm Register ("any automated decision or data analysis that affects citizens"). (3) Adopt that definition and you'll avoid endless semantic discussions.
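One way to make that shared definition stick is a uniform inventory record, so that a "tool", a "dashboard", and an "AI module" all end up as the same kind of row. The sketch below is an assumption about what such a record could hold; the field names are mine, loosely following the broad Algorithm Register description.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class InventoryEntry:
    """One row in the algorithm inventory, whatever the owning team calls it internally."""
    name: str                                # "tool", "dashboard", "AI module": same record type
    owning_unit: str
    purpose: str
    supplier: Optional[str] = None           # None for in-house builds
    affects_citizens: bool = False           # the broad Algorithm Register test
    makes_or_shapes_decisions: bool = False  # decides, or only recommends?
    annex_iii_area: Optional[str] = None
    notes: List[str] = field(default_factory=list)


register = [
    InventoryEntry(
        name="parking enforcement module",
        owning_unit="Parking enforcement",
        purpose="flag likely parking violations for an officer to confirm",
        affects_citizens=True,
        makes_or_shapes_decisions=False,
    ),
    InventoryEntry(
        name="care-hours selection model",
        owning_unit="Education",
        purpose="select students for extra care hours",
        affects_citizens=True,
        makes_or_shapes_decisions=True,
        annex_iii_area="access to public services (Annex III, § 5)",
    ),
]

# Anything that touches citizens or shapes decisions goes into scope,
# regardless of the label on the software package.
in_scope = [e for e in register if e.affects_citizens or e.makes_or_shapes_decisions]
print([e.name for e in in_scope])
```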
Noor then organizes a so-called heatmap round: in two workshops, she places each algorithm on a large screen with two axes, impact on fundamental rights versus chance of errors. Lawyers, data specialists, and policy people shift post-its back and forth. Within a morning, a visual risk landscape emerges: red dots (potentially high-risk) cluster around social benefits, permit issuance, and fraud monitoring.
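The outcome of such a workshop can be captured afterwards in a simple structure: each system gets a score on the two axes and lands in a red, amber, or green bucket. The 1-5 scale, the example scores, and the thresholds below are assumptions for illustration, not prescribed values.

```python
from typing import Dict, List, Tuple

# Workshop scores per system: (impact on fundamental rights, chance of errors), both on a 1-5 scale.
scores: Dict[str, Tuple[int, int]] = {
    "parking enforcement module": (2, 2),
    "care-hours selection model": (5, 3),
    "fraud monitoring": (4, 4),
}


def bucket(impact: int, error_chance: int) -> str:
    """Assumed thresholds: red = potentially high risk, amber = needs review, green = park for now."""
    if impact >= 4:
        return "red"
    if impact >= 3 or error_chance >= 4:
        return "amber"
    return "green"


heatmap: Dict[str, List[str]] = {"red": [], "amber": [], "green": []}
for system, (impact, errors) in scores.items():
    heatmap[bucket(impact, errors)].append(system)

print(heatmap)
# {'red': ['care-hours selection model', 'fraud monitoring'], 'amber': [], 'green': ['parking enforcement module']}
```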
Suppliers like to put the "AI inside" label on every software package. Some claim their model falls outside the scope because "a human always confirms with a click". Such a checkbox approach doesn't hold up. The EU AI Act clearly states that human intervention only counts if the supervisor is actually able to correct and has time to intervene. A 'yes button' without context or a stop button doesn't qualify. (4)
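That requirement can be read as a small checklist, and the sketch below turns it into a predicate. The four criteria mirror the paragraph above (context, the ability to correct, time to intervene, a real stop button); the field names and the all-or-nothing cut-off are my own simplification.

```python
from dataclasses import dataclass


@dataclass
class OversightSetup:
    reviewer_sees_input_and_reasoning: bool   # context, not just an outcome to confirm
    reviewer_can_override_outcome: bool       # actually able to correct the system
    reviewer_has_time_to_intervene: bool      # no forced split-second confirmations
    system_can_be_stopped: bool               # a real stop button exists


def counts_as_human_oversight(setup: OversightSetup) -> bool:
    """A bare 'yes button' fails one or more of these tests."""
    return all([
        setup.reviewer_sees_input_and_reasoning,
        setup.reviewer_can_override_outcome,
        setup.reviewer_has_time_to_intervene,
        setup.system_can_be_stopped,
    ])


# The supplier's "a human always confirms with a click" setup:
checkbox_only = OversightSetup(
    reviewer_sees_input_and_reasoning=False,
    reviewer_can_override_outcome=True,
    reviewer_has_time_to_intervene=False,
    system_can_be_stopped=False,
)
print(counts_as_human_oversight(checkbox_only))  # False
```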
Not all risk models are in-house developments; many are hidden in external SaaS tools. Think of a cloud package that automatically sends payment reminders based on a credit score. Therefore, explicitly ask about AI functionalities in every procurement scan, even if the product is primarily HR software or CRM.
Only when each system has a label (unacceptable, high, limited, or minimal) can you freeze the list and start a Fundamental Rights Impact Assessment (FRIA) for the high-risk category. That's exactly what Episode 3 is about. The AI Act stipulates that public-sector deployers must complete a FRIA before putting a high-risk system into use, detailing the potential effects, mitigations, and human oversight protocols. (5)
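As a preview of Episode 3, the sketch below shows what a FRIA entry for one high-risk system could capture: affected groups, potential effects, mitigations, and the human oversight protocol. The structure, field names, and example values are assumptions, not the official FRIA template.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class FRIARecord:
    system_name: str
    intended_use: str
    affected_groups: List[str] = field(default_factory=list)
    potential_effects: List[str] = field(default_factory=list)   # harms to rights, errors, bias
    mitigations: List[str] = field(default_factory=list)
    human_oversight_protocol: str = ""
    completed_before_first_use: bool = False                     # the timing requirement for deployers


fria = FRIARecord(
    system_name="care-hours selection model",
    intended_use="prioritize students for extra care hours",
    affected_groups=["students", "parents"],
    potential_effects=["stigmatization of selected students", "unequal access to support"],
    mitigations=["periodic bias check on the selection data", "appeal route for parents"],
    human_oversight_protocol="a care coordinator reviews and can overrule every selection",
    completed_before_first_use=True,
)
print(fria.system_name, fria.completed_before_first_use)
```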
As long as your organization can't yet answer those questions (is every system labeled? has the FRIA been done?) with a firm yes, it is still in Noor's risk phase: the inventory is bigger than the confidence it inspires. In the next episode, we'll therefore dive into the FRIA methodology: how do you put risks on paper without drowning in legal jargon?
Stay tuned, because compliance begins with knowing what you have.
Want to know how your organization scores in terms of risk classification and scoping of AI systems? We offer a quick inventory scan that shows where you stand and what you still need to do. Feel free to contact us for more information.