Episode 3 – The FRIA: fundamental rights in the boardroom
From Excel spreadsheet to moral compass
When Noor van der Wijst completed her heatmap (see Episode 2), it turned out that over a third of the algorithms qualified as high risk. A significant share, but the real challenge was yet to come: conducting a Fundamental Rights Impact Assessment (FRIA) for each high-risk system. The AI Act requires public organizations to demonstrate, before deployment, how a model can affect fundamental rights – and what they will do about it. (1)
During the first FRIA workshop, in a meeting room full of post-its and coffee cups, Noor immediately noticed how abstract 'fundamental rights' sounds to data engineers – and how legalistic the concept seems to policy makers. The art lies in bringing both worlds together: technical detail and societal value.
FRIA ≠ DPIA light
Many municipalities initially treated the FRIA as a kind of 'DPIA plus' (the data protection impact assessment from the GDPR). But the purpose of the FRIA goes beyond data protection: every fundamental right in the EU Charter counts. (2) So not just privacy, but also non-discrimination, human dignity, freedom of expression – even the right to housing when an algorithm determines whether someone gets social housing.
Privacy specialists no longer have a monopoly on impact assessments. Noor formed a multidisciplinary team: a lawyer, an ethicist, a data scientist, a policy advisor, and a citizen representative from the neighborhood council. Only then does it become visible how a model affects people's living environment.
The FRIA flow in five logical steps
1. Context & purpose
Describe why the algorithm exists, who the beneficiaries are, and what decisions are linked to it. For example: "Model predicts likelihood of welfare fraud and triggers manual case investigation."
2. Fundamental rights mapping
Map each affected right against the system's intended operation. Is someone being categorized? Do they get a label that is difficult to refute? The team marks in a matrix where potential violations lie.
3. Risk analysis (impact × probability)
Use the mapping from step 2 as a basis. Impact: how serious is the harm if the model gets it wrong? Probability: how likely is it to go wrong? The result is a color coding that decision-makers grasp at a glance; a minimal scoring sketch follows after this list.
4. Mitigation strategy
For each high (red) risk, the team determines appropriate measures: data quality checks, bias tests, a human mandate to overturn decisions, and explanation functionality for citizens.
5. Transparency & publication
The FRIA is not a document for the drawer. According to the AI Act, it must be publicly accessible (for example via the algorithm register), in understandable language, with the identified risks and the safeguards taken clearly explained. (5)
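To make step 3 concrete for the data engineers in the room, the impact × probability scoring can be written down as a small piece of code. The sketch below is purely illustrative: the RightRisk class, the 1–5 scales, and the red/orange/green thresholds are assumptions of this example, not values prescribed by the AI Act – each organization calibrates its own.

```python
# Illustrative sketch of step 3: scoring each affected right on impact x probability.
# Class name, the 1-5 scales and the color thresholds are hypothetical examples,
# not values prescribed by the AI Act.
from dataclasses import dataclass

@dataclass
class RightRisk:
    right: str         # e.g. "non-discrimination (Art. 21 EU Charter)"
    impact: int        # 1 (minor inconvenience) .. 5 (severe harm to the citizen)
    probability: int   # 1 (rare) .. 5 (likely)

    @property
    def score(self) -> int:
        return self.impact * self.probability

    @property
    def color(self) -> str:
        if self.score >= 15:
            return "red"     # pause or redesign before deployment
        if self.score >= 8:
            return "orange"  # deploy only with mitigations in place
        return "green"       # accept and monitor

risks = [
    RightRisk("non-discrimination (Art. 21)", impact=5, probability=3),
    RightRisk("privacy and data protection (Art. 8)", impact=4, probability=2),
    RightRisk("right to good administration (Art. 41)", impact=3, probability=2),
]

# Print the rights ranked from highest to lowest risk score.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.right:<45} score={r.score:>2} -> {r.color}")
```

The output is exactly the kind of ranked list a heatmap visualizes: the red rows are the ones that end up on the board's dashboard in the final step.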
The conversation that matters
Meanwhile, the most important work takes place not in the template but in the dialogue: the data scientist who explains that the model combines variables that indirectly refer to ethnicity; the lawyer who asks whether that conflicts with Article 21 (non-discrimination); the policy manager who realizes that a model that flags too aggressively creates extra workload for the social teams.
Noor uses 'what-if' sessions: scenarios in which the model is wrong. An example: a single father with an irregular income is wrongly labeled as a fraud risk and loses his temporary income support. How does the system detect that error? What emergency brake does the citizen have? Those stories give meaning to the numbers.
Common mistakes – and how to avoid them
Starting too late – A FRIA is not an after-the-fact audit. Build it in parallel with model development; otherwise you keep repairing what is already in the code.
Pseudo-participation – A single consultation evening with five residents is not an embedded citizen voice. Involve representative panels in every phase and give their input real weight in decisions.
'One size fits all' templates – Each use case requires nuance. A recidivism model in juvenile justice requires different safeguards than an AI tool for parking rates. The format may be the same, the content never.
Executive translation
Finally, Noor presents the FRIA findings directly to the board of mayor and aldermen. Not a 40-page PDF, but a visual dashboard: risk heatmap, mitigating measures, and the residual risks that remain. The board sees at a glance that two models still score red on non-discrimination. Decision: pause until additional bias tests are completed. Exactly the human-in-command role the AI Act intends. (4)
What you can do tomorrow
Check whether your current DPIA process can be broadened to a full fundamental rights scope.
Establish a multidisciplinary FRIA core team – including the citizen perspective.
Develop a modular FRIA template that scales easily with new models (a minimal sketch of such a structure follows below).
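For that last action point, one possible way to make the template modular is to treat each FRIA as a structured record that mirrors the five steps, so every new model fills in the same fields. The skeleton below is an assumption of this sketch – field names and example values are illustrative, not an official format.

```python
# Hypothetical skeleton for a modular FRIA record, mirroring the five steps above.
# Field names and the example values are illustrative, not an official format.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Mitigation:
    risk: str      # which red/orange risk the measure addresses
    measure: str   # e.g. "bias test on proxy variables before each release"
    owner: str     # accountable role within the organization

@dataclass
class FriaRecord:
    system_name: str
    purpose: str                                                  # step 1: context & purpose
    affected_rights: list[str] = field(default_factory=list)      # step 2: fundamental rights mapping
    risk_scores: dict[str, int] = field(default_factory=dict)     # step 3: right -> impact x probability
    mitigations: list[Mitigation] = field(default_factory=list)   # step 4: mitigation strategy
    register_url: str | None = None                               # step 5: public entry in the algorithm register

# Example record for the welfare-fraud model used earlier in this episode.
fraud_model = FriaRecord(
    system_name="welfare-fraud-risk-model",
    purpose="Predicts likelihood of welfare fraud and triggers manual case investigation.",
    affected_rights=["non-discrimination (Art. 21)", "privacy (Art. 8)"],
    risk_scores={"non-discrimination (Art. 21)": 15, "privacy (Art. 8)": 8},
    mitigations=[Mitigation(
        risk="non-discrimination (Art. 21)",
        measure="bias test on proxy variables before each release",
        owner="data science team",
    )],
)
```

Because every model fills in the same fields, the records can be aggregated straight into the kind of dashboard Noor showed the board – and an empty field immediately reveals which FRIA step is still open.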
In Episode 4, we zoom in on data quality and bias mitigation: how do you ensure that the promised safeguards in the FRIA actually hold when the model runs?
Keep following the series; fundamental rights are not a legal footnote, but the compass by which responsible AI in the public sector steers.
Want to know how your organization can implement an effective FRIA methodology? We offer workshops and guidance in setting up a fundamental rights impact assessment that is both compliant and practically workable. Feel free to contact us for more information.