
Zahed Ashkara
AI & Legal Expert
Anyone who predicted in 2015 that algorithms would handle the first screening of job applications within a decade was met with smiles of disbelief. Today it's the most normal thing in the world: the average recruiter barely scans a resume before a model has narrowed the stack down to a handful of "best fits." That convenience comes at a price. Since the European AI Act entered into force on August 1, 2024, the selection button has suddenly become a legal tripwire. Companies with more than a few dozen employees are discovering that it doesn't matter whether their AI tool is supplied by a major software vendor or built by a resourceful internal data analyst: the law calls the organization using the system the deployer, and that deployer bears full responsibility when things go wrong. For HR departments this is a silent revolution, because the compliance role previously belonged mainly to legal and IT. Now it shifts directly into recruitment practice.
The heart of the AI Act is a sliding scale of risk. AI that autonomously controls weapons is prohibited, generative art falls under light transparency rules, but anything that affects "decisions on access to employment" is almost always classified as high-risk. That classification triggers, among other things, the following obligations: detailed technical documentation, continuous risk assessments, data governance, robust logging, human oversight, transparency for those affected, and – new for virtually everyone in HR – an explicit obligation to train staff in AI literacy. Article 4 requires, in essence, that anyone working with high-risk AI sufficiently understands how the system works, what it can do, and what mistakes it can make. Those who casually dismiss this face fines of up to seven percent of global annual revenue in extreme cases. That's not a footnote in the audit report; it's an existential business risk.
| Aspect | AI Act provision | HR/Recruitment application | Impact |
|---|---|---|---|
| High-risk classification | Annex III: "employment, worker management and access to self-employment" | CV ranking, video analysis, AI chatbots asking knockout questions | Strictest requirements: extensive risk assessments, technical documentation, and mandatory audits |
| AI literacy | Article 4 | Requirement to train all recruiters and hiring managers | Timely e-learning and workshops, plus evidence (certificates/logs) |
| Transparency | Article 13 | Clearly informing candidates about AI use in selection processes | Adjusting job descriptions, chat interfaces, and landing pages |
| Human oversight | Article 14 | Recruiters must be able to override AI decisions | Establishing procedures, defining escalation points, and maintaining audit logs |
| Bias mitigation | Article 10 | Regular bias tests on CV parsers and video analysis tools | Technical tests & reports, root-cause analyses, and mitigation plans |
The definitions from Brussels sound abstract, but you recognize them immediately on the work floor. Take the automated resume parsing that almost every ATS now includes as standard. While a recruiter gets coffee, a model fills in missing fields, a scoring algorithm pushes twenty resumes to the top, and a rule-based filter discards files that lack any mention of a degree. That entire orchestra counts as one high-risk system. Or look at the pop-up window on the careers site that cordially asks "Do you already have a work permit for the Netherlands?" and politely thanks the candidate for their interest if the answer is "no." Even such a simple knockout question activates the high-risk label, because it effectively decides whether someone can apply.
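To make that concrete, the sketch below shows why the whole chain counts as a single system. It is a minimal illustration, not any vendor's actual pipeline: the `Candidate` fields, the cut-off of twenty, and the filter rules are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    degree: str | None   # None if the parser found no degree mention
    work_permit: bool    # answer to the knockout question
    score: float         # output of the ranking model, 0.0 to 1.0

def screen(candidates: list[Candidate], top_n: int = 20) -> list[Candidate]:
    """Parse results, rule-based knockout, and model ranking in one chain.

    No single step looks dramatic, but the combined output decides who a
    recruiter ever sees: exactly what Annex III cares about.
    """
    # Rule-based knockout: even this "dumb" filter affects access to employment
    eligible = [c for c in candidates if c.degree is not None and c.work_permit]
    # Model-based ranking: the step most people think of as "the AI"
    ranked = sorted(eligible, key=lambda c: c.score, reverse=True)
    return ranked[:top_n]
```

The point of the sketch: the knockout filter and the ranking model rise or fall together, so documenting only the model means documenting half the system.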
Even more treacherous are the tools running behind the scenes. Many companies use sentiment analysis to automatically tag video interviews with labels like "enthusiastic" or "hesitant." The marketing department calls it candidate experience analytics, but under the AI Act it's simply assessment software with direct consequences for access to employment. Even dashboards for internal performance measurement – think of algorithms ranking pick-pack employees by productivity – fall into the same category. As soon as such a score influences promotions, bonuses, or improvement plans, the legislation classifies it as high-risk.
"Human in the loop" sounds pleasant, but in practice, it takes getting used to. A recruiter who has relied on a ranking model for years must now explain why they invited candidate X despite the model ranking them thirteenth, or why they rejected candidate Y despite a golden score. The AI Act doesn't ask for heroic explanations about neural networks; it asks for consistent, traceable reasoning. This forces HR professionals to take a fresh look at their craft. They need to know which variables a model uses, how bias can creep in, and which signals are overweighted in the final ranking. Only then can the human in the loop actually correct rather than merely signing off on what the machine presents.
Imagine Rima, recruitment manager at a logistics scale-up in Tilburg. In the morning, she opens her dashboard. At the top, a notification flashes: the CV model has processed 438 new profiles and placed 37 candidates in the "strong match" category. In the old world, she would simply click on the first profile. Now a "compliance checkbox" appears first. To continue, Rima must declare that she is checking the automatically generated shortlist for incorrect assumptions. She scrolls through the list, notices a striking number of men over forty, and decides to manually add two female candidates with comparable experience. Her action is properly logged.
In the afternoon, an intake call is scheduled with the supplier of their video assessment platform. The vendor is rolling out a new model version and must demonstrate that its facial analysis no longer performs worse on darker skin tones. Rima is not a data scientist, but the AI literacy modules she completed in the fall give her the right questions: on what training data was the update tested, what is the false-negative ratio per demographic segment, and how long are the raw video recordings kept? The vendor gets a bit defensive at first but understands that these questions are now standard.
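Rima's middle question is also the easiest to pin down technically. Below is a minimal sketch of how a vendor could compute a false-negative ratio per segment; the record fields and the human-validated "ground truth" label are assumptions for illustration.

```python
from collections import defaultdict

def false_negative_rates(results: list[dict]) -> dict[str, float]:
    """False-negative ratio per demographic segment.

    A false negative here means the tool rejected a candidate who was in
    fact suitable according to a human-validated ground-truth label.
    """
    false_neg = defaultdict(int)  # suitable but rejected, per segment
    positives = defaultdict(int)  # all suitable candidates, per segment
    for r in results:
        if r["ground_truth_suitable"]:
            positives[r["segment"]] += 1
            if not r["model_accepted"]:
                false_neg[r["segment"]] += 1
    return {seg: false_neg[seg] / positives[seg] for seg in positives}

# A gap like {"segment_a": 0.08, "segment_b": 0.19} is precisely the kind
# of finding that should trigger a root-cause analysis and mitigation plan.
```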
At the end of the day, Rima gets a Slack ping from Legal: "Can you confirm that all new recruiters have completed the mandatory AI literacy e-learning?" Thanks to a connection with the LMS database, she immediately sees that completion stands at eighty percent. The last two new colleagues receive a friendly reminder. Compliance concerns? Hardly. The system reports everything at a glance.
How do organizations get there? Not by circulating yet another Excel sheet with checkboxes, but through five sequential steps that fit seamlessly into the HR cycle. The first step is inventory: which AI functions are hidden in the software used daily? Many companies are startled to discover that even Excel plugins or low-code flows with ML components fall under the law. The second step is risk classification: which use cases directly affect the career of an employee or applicant? Once something fits into Annex III, it moves to step three: remediation. Sometimes this means retraining a model, sometimes simply adjusting a parameter that unintentionally factors in age or postal code. Step four is ensuring human oversight. This doesn't always require expensive dashboards; a good procedure can suffice, provided both the decision and the override are stored. The final step is a structured AI literacy curriculum. This is where the greatest gain can be made, because knowledge sharing not only covers compliance but also accelerates innovation. Teams that understand where their algorithms fall short identify opportunities for improvement more quickly.
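The first two steps lend themselves to a lightweight register rather than a scattered spreadsheet. The sketch below is one possible shape under stated assumptions: the `AIUseCase` fields, the example entries, the vendor names, and the single yes/no classification question are simplifications; a real register would carry more nuance.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str
    supplier: str
    affects_employment_decisions: bool  # the key Annex III question
    mitigations: list[str] = field(default_factory=list)  # filled in at step 3

    @property
    def high_risk(self) -> bool:
        return self.affects_employment_decisions

# Step 1, inventory: list every AI function hiding in everyday software
register = [
    AIUseCase("CV ranking in the ATS", "VendorX", True),
    AIUseCase("Knockout chatbot on careers site", "VendorY", True),
    AIUseCase("Meeting-notes summarizer", "office suite", False),
]

# Step 2, classification: everything Annex III moves on to remediation
for use_case in register:
    label = "HIGH-RISK (Annex III)" if use_case.high_risk else "lower risk"
    print(f"{use_case.name}: {label}")
```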
It's sometimes said that the AI Act mainly brings extra paperwork. Practice shows the opposite. Companies that invest in AI literacy early report fewer data breaches, recognize bias faster, and score higher on candidate satisfaction. A recruiter who understands how the decision tree in her CV filter is structured also dares to give more targeted feedback to the supplier. That improves the model faster and makes the entire process more transparent. Moreover, candidates notice the difference. Applicants who are clearly informed about the role of AI experience the selection process as fairer, even when they are rejected. Transparency breeds trust, and trust builds brand equity.
Those who have the basics in order in 2025 will reap the benefits for longer than one audit cycle. First, it reduces the cost of future rework: a bias-free, well-monitored system doesn't need to be rebuilt from scratch in two years when new guidelines emerge. Second, it positions the company as an attractive employer in a market where tech-savvy talent is increasingly critical of ethics and diversity. And last but not least: if HR proactively takes up the AI Act agenda, its role grows from supportive to strategic. That's not a compliance story; that's plain hard business value.
The AI Act initially sounds like a legal text full of bureaucratic sentences, but beneath the surface lies a practical roadmap for better, fairer, and more human recruitment. Organizations that seize this opportunity not only build a shield against fines; they create an advantage in the battle for talent. The key lies with HR professionals who deepen their AI literacy and with management that supports that effort. Embed AI helps companies do precisely that: from the initial risk scan to setting up hands-on training and configuring control dashboards. Don't wait for the regulator to knock. Take the first step today and show that your recruitment isn't just smart, but also responsible. That's the future of work, and it begins – very concretely – with a well-trained recruiter who has insight into the code behind the shortlist.