
Zahed Ashkara
AI & Legal Expert
Fatima, compliance manager at NovaBank, receives an angry email from a customer: "My loan was rejected, but nobody can tell me why." The signature – IT-system decision – sounds cold. Fatima actually knows why: a machine-learning model estimates creditworthiness based on payment behavior, neighborhood, and click traces from the mobile app. Until yesterday, this was mainly an IT story. Since the EU AI Act came into effect this year, responsibility has shifted to the business itself – and thus to teams like Fatima's.
The AI Act sorts financial use cases into three buckets. Algorithms for social scoring or for facilitating terrorist financing? Prohibited. Systems that determine access to basic banking, loans, or insurance? Automatically high-risk. Chatbots that only handle general questions? Low regulatory pressure, provided they're transparent.
In practice, the most commonly used AI solutions at banks, insurers, and fintechs fall into that middle category. This means: model documentation, data governance, continuous risk analyses, human oversight, and demonstrable AI literacy for everyone working with them.
The definitions seem abstract, but Fatima recognizes them everywhere on the floor: the scoring model behind loan decisions, the transaction-monitoring system, the customer chatbot, and the model that sets insurance premiums. Even the latter falls under the AI Act, because it directly affects premiums and thus access to services.
"Human in the loop" sounded like tick-the-box at NovaBank for years. An employee clicked approve after the model flashed "green." Under the AI Act, that same employee must be able to explain why customer A gets a credit limit and customer B doesn't, including the role of postal code, device type, or timing.
This requires new skills: recognizing which variables drive a score, spotting where bias can creep in, and knowing when you may override a model.
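What that explanation could look like can be sketched with a toy scoring model. The feature names and weights below are purely illustrative (they echo the postal code, device type, and timing examples above), not NovaBank's actual model:

```python
# Toy linear credit score: illustrative weights, not a real model.
weights = {
    "on_time_payments": 1.2,         # months of on-time payment behavior
    "postal_code_risk": -0.9,        # the role of postal code
    "device_type_flag": -0.4,        # e.g. older mobile device
    "late_night_application": -0.3,  # timing of the application
}

def explain(customer: dict) -> None:
    """Print the decision and each variable's contribution to it."""
    contributions = {name: weights[name] * customer[name] for name in weights}
    score = sum(contributions.values())
    print(f"score {score:+.2f} -> {'approved' if score > 0 else 'rejected'}")
    # Sorted by impact: the "why" an employee must be able to give.
    for name, value in sorted(contributions.items(), key=lambda kv: kv[1]):
        print(f"  {name}: {value:+.2f}")

customer_a = {"on_time_payments": 0.9, "postal_code_risk": 0.2,
              "device_type_flag": 0, "late_night_application": 0}
customer_b = {"on_time_payments": 0.4, "postal_code_risk": 1.0,
              "device_type_flag": 1, "late_night_application": 1}
explain(customer_a)  # why customer A gets a credit limit...
explain(customer_b)  # ...and customer B doesn't
```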
Fatima starts with a simple experiment. She has the team search through twenty rejected files for similarities. Within an hour, they see patterns that previously went unnoticed – higher rejection rates in one specific region, remarkably low scores for freelancers in the cultural sector. The penny drops: AI literacy isn't a luxury; it's necessary to protect duty of care and reputation.
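For readers who want to run Fatima's experiment on their own portfolio, a minimal sketch, assuming a hypothetical `applications.csv` with "region", "sector", and "decision" columns:

```python
import pandas as pd

applications = pd.read_csv("applications.csv")  # hypothetical export
applications["rejected"] = applications["decision"].eq("rejected")

# Rejection rate per group: skews such as one specific region, or
# freelancers in the cultural sector, jump out immediately.
for column in ["region", "sector"]:
    rates = (applications.groupby(column)["rejected"]
             .mean()
             .sort_values(ascending=False))
    print(f"\nRejection rate by {column}:")
    print(rates)
```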
That experiment grows into a five-step plan:

| Step | Action | Result |
|---|---|---|
| 1. AI mapping | Inventory all models that directly decide on loans, premiums, or transactions | Overview of name, purpose, and data sources |
| 2. Data chain | Check origin, representativeness, and recent updates of all data sources | Validation protocol for new sources |
| 3. Decision rules | Explain why certain variables count (no more black box) | Transparent explanation for customers and regulators |
| 4. Override procedures | Build procedures for manual interventions with logging | Feedback loop for model improvement |
| 5. AI literacy | Invest in continuous training for all involved teams | Competent employees who can assess models |

Each step in more detail:
**Step 1: AI mapping.** Which models directly decide on loans, premiums, or transactions? Put name, purpose, and data sources in one overview.
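Such an overview can start as something as simple as a structured record per model. A minimal sketch, with illustrative field values:

```python
# Minimal model-inventory record; field values are illustrative.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    purpose: str             # what the model decides on
    data_sources: list[str]  # where its inputs come from
    risk_class: str          # e.g. "high-risk" under the AI Act

inventory = [
    ModelRecord(
        name="credit-limit-v3",
        purpose="Decides consumer credit limits",
        data_sources=["payment history", "app click traces"],
        risk_class="high-risk",
    ),
    # ...one record per model deciding on loans, premiums, or transactions
]
```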
**Step 2: Data chain.** Origin, representativeness, and recent updates. For every new source: validate again.
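A validation protocol can also start small. The checks and thresholds below are illustrative assumptions meant to show the shape of such a gate, not regulatory minimums:

```python
# Sketch of a validation gate for a new data source; the checks and
# thresholds are assumptions, not supervisory requirements.
from datetime import date, timedelta

def validate_source(origin: str, last_updated: date,
                    customer_base_coverage: float) -> list[str]:
    """Return a list of issues; an empty list means the source passes."""
    issues = []
    if not origin:
        issues.append("origin undocumented")
    if date.today() - last_updated > timedelta(days=90):
        issues.append("no recent update (older than 90 days)")
    if customer_base_coverage < 0.95:
        issues.append("not representative of the full customer base")
    return issues

print(validate_source("mobile app telemetry", date(2024, 1, 1), 0.61))
```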
**Step 3: Decision rules.** No black box in board presentations. In plain language: why does the mobile operating system count? Why does shopping area X get a risk uplift?
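One route is to prefer inherently explainable models where possible. A hedged sketch on synthetic data, with feature names echoing the examples above:

```python
# Fit a simple, explainable model on made-up data, then state in
# plain language why each variable counts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
features = ["on_time_payment_months", "mobile_os_flag", "shopping_area_x"]
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, -0.6, -1.1]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# One line of plain language per variable, for the board deck.
for name, coef in zip(features, model.coef_[0]):
    direction = "raises" if coef > 0 else "lowers"
    print(f"{name} {direction} the approval odds (weight {coef:+.2f})")
```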
**Step 4: Override procedures.** Employees log not only that they manually intervened, but also why. That feedback feeds the retraining process.
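The logging itself is trivial; the discipline is recording the why. A minimal sketch of an override record, with hypothetical field names:

```python
# Minimal override log: the reason travels with the action, so it
# can feed the retraining process. Field names are hypothetical.
import json
from datetime import datetime, timezone

def log_override(case_id: str, model_decision: str,
                 human_decision: str, reason: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "model_decision": model_decision,
        "human_decision": human_decision,
        "reason": reason,  # the "why", not just the "that"
    }
    with open("override_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

log_override("2025-0113", "rejected", "approved",
             "stable freelance income not captured by payment-history data")
```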
**Step 5: AI literacy.** Basic knowledge for customer advisors, in-depth sessions for risk & compliance. Not a one-time workshop, but a continuous learning path.
Three months later, the quick wins are visible. The percentage of "unexplained" rejections drops, complaint handling takes less time, and the marketing department proudly uses the new transparency in campaign material: "We explain how our digital assessment works."
The CFO sees something else happening: better insight into the models generates sharper questions for suppliers. NovaBank prunes unnecessary features, reduces license costs, and brings more expertise in-house. The risk budget shifts from firefighting to innovation.
This opening blog is the wake-up call; upcoming parts in this series will dive deeper. Always with the goal that Fatima now sees clearly: responsible AI use as a competitive advantage, not as a burdensome cost center.
Curious about what an AI literacy program looks like for financial teams? We build modularly: from basic sessions for customer advisors to deep dives for model validators. Feel free to send a message to exchange ideas.
Discover in 5 minutes whether your AI systems comply with the new EU AI Act legislation. Our interactive tool gives you immediate insight into compliance risks and concrete action steps.