
Zahed Ashkara
AI & Legal Expert
(Blog 4 of the series "AI & Finance under the EU AI Act")
Ruben has been driving claim-free for ten years, yet his car insurance premium suddenly jumps by 18%. His insurer's app records every turn, braking action, and kilometer driven. "You drive more frequently after 11 PM and regularly use busy ring roads," reads the automatic explanation. Ruben doesn't understand: he lives outside the city, drives defensively, and has never filed a claim. Customer service refers to the "telematics model" – a self-learning algorithm that weighs driving behavior. Under the EU AI Act, such a model is high-risk: it determines direct access to (and pricing of) a financial product. If the premium jump feels arbitrary or discriminatory, the insurer faces reputational damage and the risk of fines.
The AI Act places dynamic insurance premiums in the same risk category as credit scoring: Annex III, point 5, access to essential services. In practice this means the full set of high-risk obligations applies: risk management, data governance, technical documentation, transparency toward customers, and human oversight.
At SafeDrive Insurance, data scientist Lara analyzes thousands of driving profiles every month. She discovers that night trips count relatively heavily, regardless of actual damage probability. Taxi drivers, emergency responders, and healthcare workers are thus structurally disadvantaged. Lara escalates this to the AI governance board; the model is re-weighted and given additional audits for protected classes. Result: night trips remain relevant, but their weight is calibrated against proven claims data rather than raw frequency.
| Step | Action | Impact |
|---|---|---|
| 1. Segment audit | Measure model errors per subgroup (age, profession, region) | Detects systematic bias early |
| 2. Proxy detection | Use SHAP analysis to find hidden discrimination | Prevents indirect discrimination |
| 3. Explainability layer | Show top premium drivers in the customer app | Increases transparency and trust |
| 4. Feedback mechanism | Let customers correct incorrect data | Improves model precision |
| 5. AI literacy | Train underwriting teams in bias recognition | Strengthens human oversight |
Measure model errors per subgroup (age, profession, region) and test whether deviations fall within statistical margins. A model that performs well for the entire population may still be systematically wrong for specific groups.
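As a rough illustration, a per-segment error audit could look like the sketch below. The names `premium_model`, `X_test`, `y_test`, and the `profession` series are hypothetical placeholders, and the 10% tolerance is purely illustrative.

```python
import pandas as pd
from sklearn.metrics import mean_absolute_error

def segment_audit(model, X, y, segments, tolerance=0.10):
    """Compare per-segment prediction error against the overall error.

    Flags segments whose mean absolute error deviates from the population
    error by more than `tolerance` (relative). `segments` is a pd.Series
    aligned with X, e.g. profession, age band, or region.
    """
    overall_mae = mean_absolute_error(y, model.predict(X))
    rows = []
    for name in segments.unique():
        mask = (segments == name).to_numpy()
        seg_mae = mean_absolute_error(y[mask], model.predict(X[mask]))
        rows.append({
            "segment": name,
            "n": int(mask.sum()),
            "mae": seg_mae,
            "relative_gap": (seg_mae - overall_mae) / overall_mae,
            "flagged": abs(seg_mae - overall_mae) / overall_mae > tolerance,
        })
    return pd.DataFrame(rows).sort_values("relative_gap", ascending=False)

# audit = segment_audit(premium_model, X_test, y_test, profession)
# print(audit[audit["flagged"]])
```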
Use causal discovery or SHAP analysis to see if seemingly neutral variables (driving time) function as proxies for protected characteristics. Night trips may correlate with certain professions or socioeconomic status, for example.
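A simplified proxy check along these lines, assuming a tree-based premium model and a held-out `profession` series that is not itself a model feature; the names and the 5-euro threshold are assumptions for illustration:

```python
import shap

# Hypothetical fitted tree-based model and feature matrix X; protected
# attributes such as profession are held out and used only for auditing.
explainer = shap.TreeExplainer(premium_model)
shap_values = explainer.shap_values(X)       # shape: (n_customers, n_features)

night_workers = (profession == "night_shift").to_numpy()
for i, feature in enumerate(X.columns):
    # Mean premium contribution of this feature for night workers vs. the rest
    gap = shap_values[night_workers, i].mean() - shap_values[~night_workers, i].mean()
    if abs(gap) > 5.0:                        # threshold in premium euros, illustrative
        print(f"{feature}: contribution gap of {gap:+.2f} for night-shift workers")
```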
Show top premium drivers in plain language: "80% driving behavior, 15% annual kilometers, 5% vehicle type." This reduces frustration and lowers complaints. Customers better understand why their premium rises or falls.
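The customer-facing breakdown can be generated from the same per-feature contributions; a minimal sketch with made-up numbers:

```python
def premium_breakdown(contributions, top_n=3):
    """Turn absolute per-feature contributions into a plain-language breakdown."""
    total = sum(abs(v) for v in contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
    return ", ".join(f"{round(100 * abs(v) / total)}% {name}" for name, v in ranked)

# Illustrative contributions (e.g. aggregated SHAP values in euros):
print(premium_breakdown({
    "driving behavior": 128.0,
    "annual kilometers": 24.0,
    "vehicle type": 8.0,
}))
# -> 80% driving behavior, 15% annual kilometers, 5% vehicle type
```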
Let customers mark incorrectly registered trips; these labels feed the retraining process and increase model precision. A trip labeled as 'aggressive driving' while the customer was stuck in traffic can be corrected this way.
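A sketch of how such corrections could be captured and fed back into the training data; the schema and field names are assumptions, not SafeDrive's actual pipeline:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TripCorrection:
    """A customer-submitted correction for a single registered trip."""
    trip_id: str
    original_label: str        # e.g. "aggressive_driving"
    corrected_label: str       # e.g. "traffic_jam"
    submitted_at: datetime
    reviewed: bool = False     # validated by an underwriter before retraining

def apply_corrections(trips, corrections):
    """Overwrite trip labels with reviewed customer corrections before retraining."""
    approved = {c.trip_id: c.corrected_label for c in corrections if c.reviewed}
    for trip in trips:
        if trip["trip_id"] in approved:
            trip["label"] = approved[trip["trip_id"]]
            trip["label_source"] = "customer_correction"
    return trips
```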
Organize quarterly sessions where underwriting teams review model updates, discuss bias cases, and refine override criteria. Human oversight is only effective if employees understand how the model works.
Fair price differentiation delivers more than compliance. Marketing uses it as a unique selling point; investors appreciate the lower reputation risks. Moreover, the audit trail creates a solid defense line when regulators or NGOs ask questions about discriminatory effects.
SafeDrive now uses their transparency approach as a marketing tool: "The only insurer that explains why your premium rises or falls." This differentiates them from competitors still using black-box models. The result: 15% more new customers through word-of-mouth marketing.
After six months, the number of escalations to the complaints committee drops by 40%. NPS rises because customers see a clear premium breakdown and can easily correct erroneous trips. Financially, it pays off: less churn and cleaner risk segmentation, improving margins.
The biggest breakthrough comes from an unexpected angle: by systematically collecting customer feedback about incorrectly registered trips, SafeDrive discovers that its GPS-based system classifies maneuvers in parking garages as 'aggressive driving' because of the low speeds and frequent turns. A simple adjustment of the algorithm's parameters for parking locations reduces false positives by 25%.
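One way such a parameter adjustment could look; the thresholds below are invented for illustration and are not SafeDrive's actual values:

```python
def is_parking_maneuver(avg_speed_kmh, turns_per_km, distance_km):
    """Heuristic: very low speed, many turns, short distance -> likely a parking
    garage rather than aggressive driving. Thresholds are illustrative only."""
    return avg_speed_kmh < 15 and turns_per_km > 10 and distance_km < 1.0

def classify_segment(avg_speed_kmh, turns_per_km, distance_km, harsh_events):
    """Exclude parking maneuvers from the aggressive-driving score."""
    if is_parking_maneuver(avg_speed_kmh, turns_per_km, distance_km):
        return "parking"
    return "aggressive_driving" if harsh_events > 3 else "normal"
```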
The next episode zooms in on algorithmic investing: how asset managers organize human oversight to prevent model drift and market manipulation. Then we conclude with a practical guide for an integrated AI governance framework within financial institutions.
The common thread remains the same: AI compliance as competitive advantage, not cost center. Organizations that now invest in transparent, explainable AI systems build trust with customers and regulators alike.
Want to know more about fairness audits or an AI literacy program for underwriting teams? Embed AI builds modular workshops and tooling for insurers who want to stay ahead of the EU AI Act.