Zahed Ashkara
AI Compliance Expert
Imane, a single mother from Rotterdam, still remembers how, after years of back problems, two investigators asked her to submit her bank statements yet again. What she didn't know at the time: a machine learning model, trained on thousands of old fraud investigations, had flagged her as "high risk." The stress led to sleepless nights and a month without benefits. Rotterdam paused the system in 2021 after sharp criticism of bias and a lack of transparency, but for Imane it was too late. (wired.com)
Imane's case is not an isolated incident. Earlier, the court in The Hague issued a harsh judgment on SyRI, the national system that scanned neighborhoods with many welfare recipients for fraud and violated fundamental rights in the process. (theguardian.com) And if you ask the police about predictive policing, you'll often hear about the Crime Anticipation System (CAS): a data-driven heat map that in theory prevents burglaries, but in practice may primarily reinforce existing prejudices. These examples show exactly why the EU AI Act labels an application as high-risk AI when it influences decisions around social security, law enforcement, or essential services.
The AI Act has been officially in force since August 2024. The regulation requires developers and deploying government organizations to implement a whole range of measures: from a fundamental rights impact assessment before deployment to detailed log files, human overseers with a real mandate, and registration in both the EU database and (in the Netherlands) the Algorithm Register. (matheson.com) The philosophy is clear: if society cannot explain why someone receives a certain risk label, the system should not go live.
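To make the logging and oversight requirements more tangible, here is a minimal sketch of what a per-decision audit record for a high-risk system could look like. The schema is purely illustrative: the AI Act prescribes *what* must be traceable, not this structure, and every field name, identifier, and threshold below is an assumption.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class HighRiskDecisionLog:
    # All field names are illustrative assumptions, not prescribed by the AI Act.
    system_id: str            # identifier as registered in the EU database / Algorithm Register
    model_version: str        # exact model version that produced this decision
    timestamp: str            # when the decision was produced (UTC, ISO 8601)
    input_reference: str      # pointer to the input data, not the data itself
    risk_score: float         # raw model output
    risk_label: str           # e.g. "high" / "low" as shown to the case worker
    reviewed_by_human: bool   # was a human overseer involved before action was taken?
    reviewer_id: str | None = None   # who exercised the oversight mandate
    override: bool = False           # did the reviewer overrule the model?
    rationale: str | None = None     # free-text justification for the final decision

def log_decision(score: float, threshold: float = 0.8) -> HighRiskDecisionLog:
    """Create a log entry; scores above the (hypothetical) threshold require human review."""
    label = "high" if score >= threshold else "low"
    return HighRiskDecisionLog(
        system_id="fraud-risk-v2",          # hypothetical system name
        model_version="2.3.1",
        timestamp=datetime.now(timezone.utc).isoformat(),
        input_reference="case/2025/00123",  # hypothetical case reference
        risk_score=score,
        risk_label=label,
        reviewed_by_human=(label == "high"),
    )

print(log_decision(0.91))
```

The point of such a record is that a supervisor can reconstruct, months later, which model version produced which label for which case, and whether a human actually looked at it before action was taken.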
The timeline is tight. Six months after entry into force – so by February 2, 2025 – all prohibited practices, such as real-time facial recognition in public spaces or social scoring systems, must be stopped. A year later, transparency requirements for general-purpose AI models apply, and from August 2, 2026, most high-risk systems must be fully compliant. Only product-related high-risk AI (for example, in medical devices) gets an extension until August 2, 2027. (reuters.com, matheson.com) That seems far away, but anyone who has ever migrated a municipal ERP project knows how quickly two years pass.
In the coming weeks, I'll take you from inventory to post-market monitoring. We'll start by mapping all algorithms within your organization and determining which ones truly fall under "high risk" (a first pass at that triage is sketched below). Then we'll dive into the FRIA methodology, data quality and bias testing, human oversight in practice, contract management with suppliers, registration requirements, and a workable audit routine. Step by step, with lessons learned from municipalities, inspectorates, and independent administrative bodies, so that your team is not only compliant but also demonstrably builds trust with citizens and supervisors.
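As a starting point for that inventory step, the sketch below shows one possible shape for an algorithm register entry plus a first-pass triage rule. The category set and the rule are deliberately simplified assumptions for illustration; the actual high-risk test is the AI Act's Annex III, including its exceptions, and always needs legal review.

```python
from dataclasses import dataclass

# Simplified stand-in for Annex III areas; the real list is longer and more nuanced.
ANNEX_III_AREAS = {
    "social_security",      # e.g. welfare fraud scoring
    "law_enforcement",      # e.g. predictive policing such as CAS
    "essential_services",   # e.g. access to benefits or utilities
}

@dataclass
class AlgorithmInventoryEntry:
    name: str
    owner: str                 # accountable department or official
    application_area: str      # where in the organization it is used
    affects_individuals: bool  # does its output influence decisions about people?

def is_potentially_high_risk(entry: AlgorithmInventoryEntry) -> bool:
    """First-pass flag for legal review, not a final classification."""
    return entry.affects_individuals and entry.application_area in ANNEX_III_AREAS

# Hypothetical inventory entries for illustration.
inventory = [
    AlgorithmInventoryEntry("fraud-risk-v2", "Social Affairs", "social_security", True),
    AlgorithmInventoryEntry("pothole-detector", "Public Works", "infrastructure", False),
]

for e in inventory:
    flag = "review as high-risk" if is_potentially_high_risk(e) else "likely out of scope"
    print(f"{e.name}: {flag}")
```

Even a rough triage like this is useful: it separates the handful of systems that need a full FRIA and registration from the many that don't, before any lawyer gets involved.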
So stay tuned: each episode translates the legal text into concrete approaches, including templates, checklists, and practical examples. This way, we ensure that Imane's story becomes the exception – not the norm.
Want to know how your organization scores on the compliance requirements of the EU AI Act for high-risk AI systems? We offer a quick baseline assessment that shows where you stand and what you still need to do. Feel free to contact us for more information.