Zahed Ashkara
AI & Legal Expert
(Blog 3 of the series "AI & Finance under the EU AI Act")
Liang, head of Fraud & AML at PayWave, is startled when the dashboard lights up red: within 0.2 seconds, the AI transaction monitoring model marks a €1,250 payment as suspicious and blocks customer Rosa's card. Three minutes later, she calls angrily: "I'm standing at the checkout and can't pay – why not?" Liang knows his team cannot provide an answer as long as the model combines black-box logic with thousands of behavioral signals. Since the EU AI Act came into force, that inability to explain is no longer acceptable, even though fraud detection legally falls into a gray area.
The AI Act recognizes four risk levels. Credit scoring is explicitly listed in Annex III and is therefore high-risk. For AI systems that detect financial fraud, it's more nuanced: the legislator has specifically excluded financial fraud detection from the high-risk list, a lobbying result intended to avoid hampering innovation. Some commentators nevertheless advise banks to treat these systems as high-risk, precisely because they can block transactions or freeze accounts. The result is confusion: does fraud AI have to go through the heavy AI Act procedures or not?
Even if a fraud model is "formally" not high-risk, it often intervenes directly in essential payment services. A false positive means a customer cannot transfer rent or pay for groceries – precisely the kind of impact on fundamental rights the AI Act aims to prevent. Moreover, strict obligations already apply under PSD2, the 6th AMLD, and DORA. Smart organizations harmonize these requirements into a single governance framework and avoid duplicate work.
Using location or consumer segment as a proxy for 'risk' can lead to indirect discrimination. A model that systematically blocks more transactions in certain neighborhoods or for specific age groups creates unequal access to financial services.
A false-positive rate of a few percent seems small, but across millions of real-time transactions it translates into thousands of angry phone calls per day. Reputational damage and operational costs quickly pile up.
Fraud methods change weekly; without regular re-training, model performance degrades quickly. What was effective last month may have become a sieve today.
| Step | Action | Result |
|---|---|---|
| 1. Make the decision chain visible | Map every threshold: alert, soft-block, hard-block | Helps determine where human oversight is needed |
| 2. Measure dual metrics | Always report both fraud detection ratio and customer impact (false positives) | Balance between security and service |
| 3. Document root causes | Record which features determined the score for each blockade | Meets transparency and explainability requirements |
| 4. Build escalation playbooks | Clear reversal procedure within 15 minutes for wrongful blockades | Minimizes reputational damage |
| 5. Increase AI literacy | Train fraud analysts in feature interpretation and concept drift signaling | Strengthens human oversight, mandatory under the AI Act |
Start by mapping every threshold in your fraud detection pipeline. When is a transaction only flagged for review? When is it temporarily blocked? And when does a hard blockade follow? This mapping helps determine where human oversight is most critical.
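As a minimal sketch, that decision chain can be made explicit in code so every threshold is visible and reviewable. The score cut-offs and action names below are illustrative assumptions, not values from PayWave or the AI Act:

```python
# Minimal sketch: map a model's fraud score (0..1) onto the decision chain.
# Thresholds and action names are illustrative, not prescribed values.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str               # "allow", "alert", "soft_block" or "hard_block"
    needs_human_review: bool  # where human oversight enters the chain

def decide(fraud_score: float) -> Decision:
    """Translate a fraud score into an action in the decision chain."""
    if fraud_score < 0.70:
        return Decision("allow", needs_human_review=False)
    if fraud_score < 0.85:
        # Flag for review only; the payment still goes through.
        return Decision("alert", needs_human_review=True)
    if fraud_score < 0.95:
        # Temporarily hold the transaction pending analyst confirmation.
        return Decision("soft_block", needs_human_review=True)
    # Hard block: the card is frozen until a human confirms or reverses.
    return Decision("hard_block", needs_human_review=True)
```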
Traditionally, fraud teams focus on detection ratios: how much real fraud do we catch? Under the AI Act, you must also systematically measure how many legitimate customers you affect. These dual metrics provide insight into the real impact of your model.
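A minimal sketch of such dual reporting, assuming labelled outcomes become available after the fact (1 = confirmed fraud, 0 = legitimate); the metric and field names are illustrative:

```python
# Dual metrics: fraud detection ratio (recall) reported next to customer impact.
def dual_metrics(y_true: list[int], y_blocked: list[int]) -> dict:
    """y_true: 1 = confirmed fraud, 0 = legitimate; y_blocked: 1 = model blocked it."""
    tp = sum(1 for t, b in zip(y_true, y_blocked) if t == 1 and b == 1)
    fn = sum(1 for t, b in zip(y_true, y_blocked) if t == 1 and b == 0)
    fp = sum(1 for t, b in zip(y_true, y_blocked) if t == 0 and b == 1)
    tn = sum(1 for t, b in zip(y_true, y_blocked) if t == 0 and b == 0)
    return {
        "fraud_detection_ratio": tp / (tp + fn) if tp + fn else 0.0,  # share of real fraud caught
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,    # share of legit customers hit
        "blocked_legit_customers": fp,                                # absolute customer impact
    }
```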
For every blockade, it must be clear which features determined the decision. Was it the location? The timing? The amount? This documentation is essential for transparency and helps identify bias patterns.
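A minimal sketch of such an audit record, assuming the model (or an explainer layered on top of it) can return per-feature contributions to the score; the helper and field names are hypothetical:

```python
import json
from datetime import datetime, timezone

def log_blockade(transaction_id: str, fraud_score: float,
                 feature_contributions: dict[str, float], action: str) -> str:
    """Persist which features drove a blockade, for transparency and bias analysis."""
    # Keep the five features that contributed most (in absolute terms) to the score.
    top_features = sorted(feature_contributions.items(),
                          key=lambda kv: abs(kv[1]), reverse=True)[:5]
    record = {
        "transaction_id": transaction_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "fraud_score": fraud_score,
        "top_features": top_features,  # e.g. [("merchant_country", 0.31), ("amount", 0.22), ...]
    }
    line = json.dumps(record)
    print(line)  # in practice: append to an immutable audit store instead of stdout
    return line
```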
Develop clear procedures for quickly reversing wrongful blockades. Customers must be able to pay again within 15 minutes, with a clear explanation of what happened and why.
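A minimal sketch of the monitoring side of such a playbook, assuming each open blockade is tracked with a timestamp; the 15-minute target comes from the step above, while escalating at half the window is an illustrative choice:

```python
from datetime import datetime, timedelta, timezone

# Illustrative SLA from the playbook: wrongful blockades reversed within 15 minutes.
REVERSAL_TARGET = timedelta(minutes=15)

def blockades_to_escalate(open_blockades: list[dict]) -> list[str]:
    """Return transaction ids of blockades that risk breaching the reversal target."""
    now = datetime.now(timezone.utc)
    at_risk = []
    for b in open_blockades:
        age = now - b["blocked_at"]  # "blocked_at" must be a timezone-aware datetime
        # Escalate to a senior analyst once half the SLA window has elapsed,
        # leaving time to reverse the block and explain the decision to the customer.
        if age > REVERSAL_TARGET / 2:
            at_risk.append(b["transaction_id"])
    return at_risk
```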
Train your fraud analysts not only in recognizing fraud patterns but also in interpreting model features and signaling concept drift. This human oversight is mandatory under the AI Act.
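One common way to signal concept drift is the Population Stability Index (PSI), which compares the current distribution of a feature with its training-time baseline. A minimal sketch follows; the ten-bucket split and the 0.2 alert threshold are rules of thumb, not AI Act requirements:

```python
import math

def psi(baseline: list[float], current: list[float], buckets: int = 10) -> float:
    """Population Stability Index of one feature: current window vs. training baseline."""
    lo, hi = min(baseline), max(baseline)
    span = (hi - lo) or 1e-12

    def shares(values: list[float]) -> list[float]:
        counts = [0] * buckets
        for v in values:
            idx = int((v - lo) / span * buckets)
            idx = min(max(idx, 0), buckets - 1)  # clamp values outside the baseline range
            counts[idx] += 1
        return [max(c / len(values), 1e-6) for c in counts]  # floor avoids log(0)

    b, c = shares(baseline), shares(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# Rule of thumb (not an AI Act requirement): PSI above ~0.2 is a drift alert for analysts.
```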
After three months of tracking fraud score and customer impact side by side, PayWave halves the number of wrongful blockades; NPS rises by 7 points while actual fraud loss stays the same. The board sees that better explainability not only reduces compliance risk but also cuts call-center and chargeback costs.
The most important breakthrough comes from an unexpected angle: by systematically documenting why certain transactions were blocked, the team discovers that the model overreacts to weekend transactions above €500. A simple adjustment of threshold values for weekends reduces false positives by 30%, without letting real fraud slip through.
The lessons learned from real-time fraud AI form the blueprint for all high-risk-like use cases: credit scoring, insurance pricing, but also generative AI in customer contact. One uniform AI governance framework prevents each department from having to reinvent the wheel.
PayWave now uses the same transparency principles for their chatbot (which advises customers on savings products) and their robo-advisor (which compiles investment portfolios). The result: consistent compliance and a better customer experience across all touchpoints.
In part 4, we explore what fairness means for dynamic insurance premiums and how actuarial models get a 'bias overhaul' under the AI Act. Then we dive into human oversight in algorithmic investments.
The common thread remains the same: AI compliance as a competitive advantage, not as a cost center. Organizations that now invest in transparent, explainable AI systems build trust with customers and regulators.
Want to know more about hands-on training in AI fraud detection and AI Act compliance? Embed AI builds modular training programs, from basic workshops to deep-dives for model validators. Feel free to get in touch.