Episode 4 - Data quality & bias mitigation: from raw source to robust model
The first test result hit like a bomb
A new algorithm was supposed to predict which students needed extra guidance at a vocational college in the eastern Netherlands. After a single overnight run, it turned out that nearly eighty percent of the 'high-risk' recommendations fell on boys with a migration background, while they made up less than half of the student population. The data scientist quickly pinpointed the cause: the training data consisted largely of old files from a period when specific neighborhoods were monitored more intensively. The bias wasn't in the code; it was hidden deep in the data layer.
How contaminated data can undermine the FRIA
In the previous episode, we saw how the Fundamental Rights Impact Assessment (FRIA) exposes fundamental rights risks. That exercise remains paperwork as long as the underlying datasets aren't clean. A single skewed field can neutralize the carefully described mitigations in the FRIA in one fell swoop. This poses a real governance risk: when a model influences social benefits or permit granting, an error can have direct legal and political consequences.
The EU AI Act requires that high-risk AI systems be based on "training, validation and test data sets that are relevant, representative, free of errors and complete". (1) This is not a technical formality, but a legal obligation that directly impacts the liability of the government organization.
The lifecycle of public data: every step counts
The source files used in the public sector often have a long history. Registration systems change, definitions shift, fields are filled in manually. In such a hybrid archive, silent assumptions arise: 'empty field means no problem' or 'postal code is a neutral characteristic'. Those who want to combat bias must make these assumptions explicit and test them, step by step: from extraction to transformation, from sampling to label choice.
Extraction: detecting semantic noise
When pulling data from operational systems, it regularly turns out that fields are used differently than the documentation suggests. Think of a "housing costs" column where one municipality stores bare rent, another the all-inclusive price. Such semantic noise feeds model unreliability and can lead to systematic errors in decisions.
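As an illustration, here is a minimal sketch of how such noise can be surfaced at extraction time, assuming a combined extract with hypothetical columns `municipality` and `housing_costs`. When one source's median deviates structurally from the rest, that is often a sign the field carries a different meaning rather than a different population.

```python
import pandas as pd

def flag_semantic_outliers(df: pd.DataFrame, source_col: str, value_col: str,
                           tolerance: float = 0.25) -> pd.DataFrame:
    """Flag source systems whose median deviates strongly from the overall median.

    A structural deviation per source often means the field is filled in with a
    different meaning (bare rent vs. all-inclusive price), not that the
    underlying population differs.
    """
    overall_median = df[value_col].median()
    per_source = df.groupby(source_col)[value_col].median().to_frame("median")
    per_source["relative_deviation"] = (per_source["median"] - overall_median) / overall_median
    per_source["suspect"] = per_source["relative_deviation"].abs() > tolerance
    return per_source

# Hypothetical extract combining two municipalities
extract = pd.DataFrame({
    "municipality": ["A"] * 4 + ["B"] * 4,
    "housing_costs": [650, 700, 620, 680,      # bare rent
                      980, 1020, 950, 1010],   # all-inclusive price
})
print(flag_semantic_outliers(extract, "municipality", "housing_costs"))
```

Findings like these belong in the extraction log, so the FRIA can refer back to them.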
Transforming & cleaning: more than removing spaces
Cleaning involves more than stripping whitespace: descriptive fields such as profession or family situation come in countless spellings. A machine learns patterns; inconsistent spelling creates artificial correlations. Data documentation in the form of 'datasheets' helps here, recording per column who fills it in, how often it changes, and which values are legitimate.
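Such a datasheet does not need to be heavyweight. A minimal sketch, with hypothetical field names, of what per-column documentation could look like in code:

```python
from dataclasses import dataclass, field

@dataclass
class ColumnDatasheet:
    """Minimal per-column documentation in the spirit of 'datasheets for datasets'."""
    name: str
    filled_by: str                   # who enters the value: system, caseworker, citizen
    mutation_frequency: str          # how often the value changes
    legitimate_values: list[str] = field(default_factory=list)
    known_caveats: str = ""

profession = ColumnDatasheet(
    name="profession",
    filled_by="caseworker, free text",
    mutation_frequency="only on re-registration",
    legitimate_values=[],            # free text: normalize spelling before training
    known_caveats="Dozens of spellings for the same profession; never use raw.",
)
```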
Sampling: the pitfall of selection bias
Public datasets are rarely random samples. Fraud investigations often focus on risk groups, so positive cases are overrepresented in the training set. The model then 'learns' that this group is inherently risky. Resampling or synthetic data can restore balance here, but only if the process is transparently recorded.
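A minimal sketch of naive upsampling in pandas, assuming a binary label column (names hypothetical). Re-weighting or carefully validated synthetic data are alternatives; in all cases the choice and its rationale must be recorded.

```python
import pandas as pd

def upsample_minority(df: pd.DataFrame, label_col: str, random_state: int = 0) -> pd.DataFrame:
    """Upsample the minority class until both labels are equally represented.

    This only counteracts selection bias if the cause of the imbalance
    (e.g. targeted fraud investigations) is documented alongside the dataset.
    """
    counts = df[label_col].value_counts()
    majority_label = counts.idxmax()
    parts = []
    for label, group in df.groupby(label_col):
        if label == majority_label:
            parts.append(group)
        else:
            parts.append(group.sample(n=counts.max(), replace=True, random_state=random_state))
    return pd.concat(parts).sample(frac=1.0, random_state=random_state)  # shuffle
```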
Label choice: breaking bias feedback loops
Labels are sometimes derived from decisions that were themselves already biased. If the same fraud team labels which files led to 'justified recovery', reflection on prejudice is cut off: a bias feedback loop. An independent labeling round, preferably double-blind, reduces this risk.
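The effect of such an independent round can be made measurable by comparing two labelers who do not see each other's judgments, for example with Cohen's kappa. A sketch with hypothetical labels:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical judgments from two independent reviewers on the same ten files
reviewer_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
reviewer_b = [1, 0, 0, 1, 0, 0, 1, 1, 1, 0]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Inter-annotator agreement (Cohen's kappa): {kappa:.2f}")
# Low agreement is a signal to sharpen the labeling instructions before training.
```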
Techniques to measure bias
For public-sector models, bias must be assessed not only technically but also in terms of social relevance. Two indicators form the core:
Statistical parity difference: measures whether the outcome is distributed equally across relevant groups
Equal opportunity difference: checks whether errors (false negatives/positives) are distributed fairly across groups
A parking-enforcement model can be statistically unequal (fining certain neighborhoods more often) without its ultimate error rate being unfair. Yet such inequality can prove politically unacceptable. Bias analysis must therefore always be placed alongside the policy and stakeholder context. (2)
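As a minimal sketch of both indicators for a binary decision and two groups (group coded 0/1, all names and numbers hypothetical):

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-decision rate between the two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rate between the two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    def tpr(g):
        mask = (group == g) & (y_true == 1)
        return y_pred[mask].mean()
    return tpr(1) - tpr(0)

# Hypothetical toy example
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 1, 0, 1])
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # e.g. neighborhood A vs. B
print(statistical_parity_difference(y_pred, group))
print(equal_opportunity_difference(y_true, y_pred, group))
```

Libraries such as Fairlearn and AIF360 offer comparable metrics off the shelf, but these two definitions are all a quarterly bias report strictly needs.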
Strategies for mitigation
When a model deviates significantly, there are roughly three layers at which to intervene:
1. Pre-processing: correcting at the source
Re-sampling of underrepresented groups
Re-weighting of training examples
Removing proxy variables (such as postal code, which can act as a stand-in for ethnicity)
2. In-processing: compensating during training
Algorithmic techniques like adversarial debiasing
Fairness constraints enforced during training
Multi-objective optimization balancing accuracy and fairness
3. Post-processing: calibrating output
Score calibration per demographic group
Adjusting decision thresholds
Ensemble methods combining different models
The choice depends on the political mandate, transparency requirements, and the extent to which the adjustment would undermine the original goal. A recidivism predictor in juvenile justice was ultimately corrected purely in post-processing: the original model remained intact, but the scores were recalibrated so that false positives among girls decreased.
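A minimal sketch of what such a post-processing step can look like: group-specific decision thresholds on top of an unchanged model. Whether group-specific thresholds are permissible is itself a legal and governance question, so this illustrates the mechanism, not a recommendation.

```python
import numpy as np

def apply_group_thresholds(scores, groups, thresholds) -> np.ndarray:
    """Post-processing: turn risk scores into decisions with a per-group cut-off.

    The underlying model stays intact; only the decision rule is recalibrated,
    for example to reduce false positives in a group flagged by the bias analysis.
    """
    scores, groups = np.asarray(scores), np.asarray(groups)
    cutoffs = np.array([thresholds[g] for g in groups])
    return (scores >= cutoffs).astype(int)

# Hypothetical recalibration: a stricter cut-off for group "B"
scores = [0.62, 0.55, 0.71, 0.40]
groups = ["A", "B", "B", "A"]
print(apply_group_thresholds(scores, groups, {"A": 0.50, "B": 0.65}))  # [1 0 1 0]
```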
Production monitoring: bias drifts with the stream
Once the model is live, attention shifts to data drift. New rules, changing inflow, or a pandemic can skew data relationships within months. The EU AI Act requires that high-risk systems remain "accurate, robust and cybersecure" throughout their lifecycle. (3)
Continuous monitoring, for example quarterly bias reporting using the same metrics as the FRIA, is therefore essential. Automatic alerting (sketched after the list below) can warn when:
The distribution of input features shifts significantly
Model performance drops below preset thresholds
Bias metrics exceed acceptable limits
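One way to implement the first alert is a two-sample Kolmogorov-Smirnov test per input feature, comparing the training window with recent production data. A minimal sketch, with a hypothetical income feature:

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(reference: np.ndarray, current: np.ndarray, p_threshold: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test on a single input feature.

    Returns True when the current window deviates significantly from the
    reference window: a trigger to investigate, not an automatic rollback.
    """
    result = ks_2samp(reference, current)
    return result.pvalue < p_threshold

# Hypothetical feature: income at training time vs. the last quarter
rng = np.random.default_rng(42)
reference = rng.normal(2300, 400, size=5_000)
current = rng.normal(2550, 400, size=5_000)   # shifted after a policy change
print(drift_alert(reference, current))        # True: flag it in the quarterly bias report
```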
Governance hooks: who maintains oversight?
Data quality and bias mitigation only have impact if there is a structure in which findings are consistently fed back to administrators. More and more municipalities are creating an Algorithm Board in which legal, ethical, and technical experts review data quality, bias reports, and incidents every month.
An escalation protocol describes when a model should be paused, comparable to the safety stop in the food industry. Typical triggers (captured as configuration in the sketch after this list) are:
Bias metrics exceeding baseline by 20%
Citizen complaints about systematic unequal treatment
Significant data drift not corrected within a week
Technical incidents threatening model integrity
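These triggers become enforceable once they are written down as explicit, versioned configuration instead of living in someone's head. A minimal sketch with hypothetical threshold names; the Algorithm Board keeps the final say on pausing:

```python
# Escalation thresholds as agreed, versionable configuration (hypothetical values)
ESCALATION_TRIGGERS = {
    "max_bias_increase_vs_baseline": 0.20,   # 20% above the FRIA baseline
    "max_days_uncorrected_drift": 7,
    "max_substantiated_complaints": 0,       # any substantiated complaint escalates
}

def should_escalate(monitoring: dict) -> bool:
    """Return True if any agreed trigger fires; the decision to pause stays with people."""
    return (
        monitoring["bias_increase_vs_baseline"] > ESCALATION_TRIGGERS["max_bias_increase_vs_baseline"]
        or monitoring["days_uncorrected_drift"] > ESCALATION_TRIGGERS["max_days_uncorrected_drift"]
        or monitoring["substantiated_complaints"] > ESCALATION_TRIGGERS["max_substantiated_complaints"]
    )

print(should_escalate({
    "bias_increase_vs_baseline": 0.25,   # exceeds the agreed 20% margin
    "days_uncorrected_drift": 3,
    "substantiated_complaints": 0,
}))  # True
```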
Stories that stick
The vocational college case from the beginning of this article had a sequel: after re-sampling and removing postal code as a variable, the imbalance dropped from eighty to twenty percent. More importantly, a student panel now gave the model a passing grade on 'fairness'. Teachers also noticed no extra workload, as the redistribution led to fewer, but better-targeted, intervention recommendations.
That's the type of success story that builds support for responsible AI.
Practical checklist for data quality
✓ Document your data pipeline with datasheets for each dataset
✓ Test for bias in all phases: extraction, transformation, sampling, labeling
✓ Implement monitoring for data drift and bias metrics in production
✓ Establish governance structures with escalation protocols
✓ Involve stakeholders in defining fairness and acceptable trade-offs
✓ Publish transparently about bias mitigation in the algorithm register (4)
Looking ahead: human oversight 2.0
In the next episode, we'll explore how human oversight can be more than a formal checkmark. We'll look at role profiles, training requirements, and technical tooling that enables supervisors to truly intervene when the model deviates. Because even with clean data, one constant remains: algorithms make mistakes, and humans must be able to correct them.
So stay on board: data hygiene is just the beginning of mature, fundamental-rights-resilient AI in the public sector.
Want to know how your organization can implement a robust data governance and bias mitigation strategy? We offer workshops and guidance in setting up data quality processes that are both compliant and practically workable. Feel free to contact us for more information.
Free EU AI Act Compliance Check
Discover in 5 minutes whether your AI systems comply with the new EU AI Act legislation. Our interactive tool gives you immediate insight into compliance risks and concrete action steps.