This is episode 6 of our 'AI in the Public Sector' series. In the previous episode, we discussed human oversight in AI systems. This week we dive into the crucial role of procurement and contracts in AI compliance.
The blind spot in AI procurement
When purchasing a new HR application, no one at the municipality of Breda asked whether it contained AI. It was only when an employee received a notification with a "fit score" for an internal applicant that it became clear the software supplier had added an embedded AI module. The contract? It doesn't mention explainability, bias logs, or a kill switch.
This case is not isolated: in many public organizations, AI functionalities creep in through SaaS packages without legal or ethical requirements being set upfront. The result: organizations end up fixing problems after the fact that they could have prevented at the procurement stage.
The reality is that many procurement and contract teams are not yet equipped to recognize AI-related risks, let alone contractually address them. At the same time, suppliers increasingly use AI functionalities without explicitly communicating this. This creates a dangerous blind spot in the compliance chain.
The AI Act and chain responsibility
The EU AI Act puts an end to this noncommittal approach. Deployers (users) of high-risk AI systems must be able to demonstrate that the system complies with the law. This also applies if the model comes through a supplier. Procurement is therefore not a technical side issue but the starting point of compliance.
The law introduces clear chain responsibility where each party in the AI value chain has specific obligations. Providers must ensure conformity assessments and CE marking, but deployers remain responsible for correct use and monitoring. If a supplier cannot deliver what you need—think bias logs or input for a FRIA (fundamental rights impact assessment)—then your organization will be left holding the bag during an audit or incident.
This chain responsibility means that compliance cannot be passed off to the supplier. Organizations must actively set requirements and be able to verify them. This requires a fundamental shift in how we think about procurement and contracting: from passive customer to active compliance partner.
Which requirements belong in your contract by default?
Effective AI contracts go beyond standard delivery conditions. They must ensure specific technical and organizational measures that enable compliance. Here are the essential elements:
Explainability and transparency
The system must be able to show a comprehensible rationale for each decision. Not a black box, but a model that explains why a particular result was given. This means suppliers must be able to demonstrate which factors contributed to a specific output and to what extent.
Contractually, you must establish that explainability functions are available to end users and that this explanation meets AI Act requirements. Think of real-time explanation of decisions, identification of the most influential factors, and possibilities for users to request explanations.
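To make this concrete, here is a minimal sketch of what a "most influential factors" explanation could look like for a simple linear scoring model. All feature names, weights, and the linear-model assumption are hypothetical; a real supplier's model and explainability tooling will differ.

```python
# Sketch: surfacing the top-3 influencing factors behind one decision,
# assuming a simple linear scoring model. Feature names and weights
# are illustrative, not taken from any real system.

def explain_decision(features: dict[str, float],
                     weights: dict[str, float],
                     top_n: int = 3) -> list[tuple[str, float]]:
    """Return the top_n factors ranked by their contribution to the score."""
    contributions = {name: value * weights.get(name, 0.0)
                     for name, value in features.items()}
    # Rank by absolute contribution so strong negative factors also surface.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return ranked[:top_n]

# Hypothetical applicant and model weights:
applicant = {"years_experience": 6.0, "skill_match": 0.8, "tenure_gap": 1.0}
weights = {"years_experience": 0.5, "skill_match": 2.0, "tenure_gap": -0.7}
print(explain_decision(applicant, weights))
```

The point of the contract clause is that the end user can request exactly this kind of ranked breakdown for any individual decision, whatever technique the supplier uses to produce it.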
Bias and performance monitoring
Suppliers must structurally provide logs showing bias checks, accuracy, false positive/negative rates, and data drift. These logs must not only be available but also regularly updated and analyzed.
Set requirements for monitoring frequency (e.g., monthly), the metrics that must be reported, and the thresholds at which action must be taken. Ensure you have access to raw data so you can perform your own analyses.
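A deployer-side check on such logs could look like the following sketch. It assumes each log entry records the model's prediction and the eventual outcome; the field names and the 10% thresholds are illustrative assumptions, not AI Act prescriptions.

```python
# Sketch: a deployer-side monitoring check on supplier-provided logs.
# Assumes each entry has boolean "predicted" and "actual" fields;
# thresholds here are illustrative, not prescribed values.

def monitoring_report(logs: list[dict], fpr_threshold: float = 0.10,
                      fnr_threshold: float = 0.10) -> dict:
    tp = sum(1 for e in logs if e["predicted"] and e["actual"])
    fp = sum(1 for e in logs if e["predicted"] and not e["actual"])
    tn = sum(1 for e in logs if not e["predicted"] and not e["actual"])
    fn = sum(1 for e in logs if not e["predicted"] and e["actual"])
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    return {
        "false_positive_rate": fpr,
        "false_negative_rate": fnr,
        # Flag when a contractual threshold is breached:
        "action_required": fpr > fpr_threshold or fnr > fnr_threshold,
    }
```

Access to the raw log entries, rather than only the supplier's aggregated numbers, is what makes an independent check like this possible.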
Mitigation options and correction possibilities
Require that there are functions for post-processing corrections or calibrations and that these are technically accessible to the deployer. This includes possibilities to correct bias, manually override decisions, and adjust the model based on feedback.
What matters is that these mitigation options exist not just on paper but are practically usable for your organization. Therefore, also contract for training and support in using these functions.
Kill switch and shadow mode
The ability to immediately pause a model without dependence on the supplier is crucial. Shadow mode options to first test new versions without impact are equally essential for responsible implementation.
An effective kill switch means your organization can independently intervene in problems without having to wait for the supplier. Shadow mode enables you to test updates and changes before they go live.
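One way to guarantee this independence is to insist that the deployer owns the wrapper around the supplier's model endpoint. The sketch below illustrates that pattern; all class and function names are hypothetical.

```python
# Sketch: a deployer-controlled kill switch and shadow mode, assuming
# the deployer owns the gateway in front of the supplier's model.
# All names here are hypothetical.

class ModelGateway:
    def __init__(self, live_model, shadow_model=None, fallback=None):
        self.live_model = live_model
        self.shadow_model = shadow_model   # candidate version under test
        self.fallback = fallback           # manual procedure when paused
        self.paused = False

    def pause(self):
        """Kill switch: stop serving model output without supplier involvement."""
        self.paused = True

    def predict(self, case):
        if self.paused:
            return self.fallback(case) if self.fallback else None
        result = self.live_model(case)
        if self.shadow_model is not None:
            # Shadow mode: run the new version alongside and log the delta,
            # but never let it influence the decision that goes out.
            shadow_result = self.shadow_model(case)
            log_divergence(case, result, shadow_result)
        return result

def log_divergence(case, live, shadow):
    if live != shadow:
        print(f"shadow diverges on {case!r}: live={live!r}, shadow={shadow!r}")
```

Because `pause()` lives on the deployer's side of the interface, no supplier ticket is needed to stop the model, and shadow results are logged but never returned.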
Data governance and quality requirements
Describe which source data, annotation processes, and sampling strategies suppliers must document. Data governance is the foundation of reliable AI, so set high requirements for documentation and traceability.
This includes insight into the origin of training data, methods for data annotation, bias checks in the dataset, and procedures for data updates. Ensure you can verify that the data is representative of your use case.
Audit rights and reporting
Establish audit rights contractually, with accompanying reports that align with the AI Act and your internal algorithm register. Audit rights give you the ability to independently verify that the system meets the agreed requirements.
Define the scope of audits, frequency, and who may conduct them. Ensure reports come in a format directly usable for compliance purposes and transparency registrations.
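As an illustration, a machine-readable audit report could be shaped so that it feeds both the internal algorithm register and transparency registrations without rekeying. Every field name and value below is an illustrative assumption, not a prescribed schema.

```python
# Sketch: a machine-readable audit report. All field names, the example
# system, supplier, and metric values are illustrative assumptions.

import json

audit_report = {
    "system_name": "hr-screening-module",   # hypothetical system
    "supplier": "ExampleVendor B.V.",       # hypothetical supplier
    "reporting_period": "2024-Q3",
    "risk_classification": "high-risk",     # per the deployer's own assessment
    "metrics": {
        "accuracy": 0.91,
        "false_positive_rate": 0.06,
        "false_negative_rate": 0.04,
    },
    "findings": ["data drift detected in one input feature"],
    "actions": ["recalibration scheduled with supplier"],
}

print(json.dumps(audit_report, indent=2))
```

Agreeing on a structure like this in the contract means every audit cycle produces input that compliance and registration processes can consume directly.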
How do you incorporate this into your tender?
A successful AI tender starts with thorough preparation and clear requirements. The traditional approach of functional specifications is insufficient for AI systems that are inherently more complex and less predictable.
AI questionnaire and market exploration
Start with an AI questionnaire during market exploration: which AI functionalities are included, how are they secured, what is the chain of (sub)suppliers? This phase is crucial to understand what the market can offer and where the risks lie.
Important questions include:
Which AI technologies are used?
How is bias prevented and monitored?
What data is used for training?
How is explainability ensured?
What certifications does the supplier have?
Who are the sub-suppliers in the AI chain?
Program of requirements and award criteria
Subsequently, anchor the AI requirements in your program of requirements and award criteria. Distinguish between must-haves (e.g., AI Act conformity) and nice-to-haves (e.g., advanced explainability features).
Weigh compliance aspects heavily in the award. A cheap solution that doesn't comply with the AI Act will ultimately become much more expensive due to compliance problems and possible sanctions.
Model documents and standardization
Add model documents such as an AI compliance annex where these conditions are standardized. This prevents having to reinvent the wheel with each tender and ensures consistency within your organization.
Develop templates for AI contract clauses, checklists for AI assessments, and standard reporting formats. This makes the process more efficient and increases the quality of your contracts.
Expertise in the assessment committee
Many municipalities now choose to have AI experts or ethics advisors join the assessment committee. This prevents glossy promises on the pitch deck from falling apart once the contract is put into practice.
These experts can verify technical claims, assess risks, and ensure that compliance requirements are realistic and achievable. Invest in this expertise: its cost is small compared to the risks of a bad AI contract.
Practical example: AI procurement for a social domain app
A medium-sized municipality demanded specific AI compliance measures in its 2024 tender for a social domain app. The result shows both the power and challenges of proactive contracting.
The set requirements
The municipality set three core requirements:
Real-time explanation of the decision (top 3 influencing factors)
Quarterly reports with fairness scores per demographic group
Possibility for model pause by the municipality itself
These requirements were included as hard criteria in the program of requirements, with corresponding burden of proof for suppliers.
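The second requirement, fairness scores per demographic group, can be made verifiable with a metric as simple as the selection rate per group (demographic parity). The sketch below shows that metric; the field names and the choice of metric are illustrative assumptions, and a real report would cover more than one fairness measure.

```python
# Sketch: the kind of per-group fairness score a quarterly report could
# contain. Selection rate per demographic group (demographic parity);
# field names and the metric choice are illustrative assumptions.

from collections import defaultdict

def selection_rates(decisions: list[dict]) -> dict[str, float]:
    """Share of positive outcomes per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        positives[d["group"]] += 1 if d["selected"] else 0
    return {g: positives[g] / totals[g] for g in totals}

decisions = [
    {"group": "A", "selected": True},
    {"group": "A", "selected": False},
    {"group": "B", "selected": True},
    {"group": "B", "selected": True},
]
print(selection_rates(decisions))  # → {'A': 0.5, 'B': 1.0}
```

Large gaps between groups are the signal the quarterly report is meant to surface; the contract then determines what gap size triggers action.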
The implementation challenge
During implementation, it turned out that the model initially couldn't explain how the advice score was determined. The supplier had mentioned explainability as a feature, but this wasn't operational for the specific model being used.
The supplier had to adapt and integrate additional explainability tooling. This cost three months of extra development time and 15% in additional costs, but ultimately delivered a system that fully met the requirements.
The lessons
Setting requirements works—but only if you also enforce them. The municipality had the courage to delay implementation until all requirements were met. This led to short-term frustration but long-term compliance.
Technical verification is essential. Claims about AI functionality must be verified with proof-of-concepts or demos during the tender process.
Budget for compliance. AI compliance often results in technical adjustments that come with costs. Plan for this in advance.
Contracts and collaboration with legal and procurement teams
AI compliance in contracts requires close collaboration between different disciplines. The traditional separation between legal, IT, procurement, and compliance doesn't work for AI projects that touch all domains.
Multidisciplinary approach
Effective AI contracting requires input from:
Legal department: AI Act compliance, liability, privacy
IT department: Technical feasibility, integration, security
Procurement department: Tender strategy, market knowledge, supplier management
Compliance and privacy officers: Risk assessment, monitoring, the algorithm register
Develop standard AI clauses, annexes, and checklists to make this collaboration structural. This prevents expertise from being lost during personnel changes and ensures consistent quality.
Examples of useful tools:
AI risk assessment template
Standard contract clauses for different AI categories
Checklist for technical verification
Reporting format for compliance monitoring
Training and awareness
Train procurement and contract managers in the basics of the AI Act and AI technology. They don't need to become experts, but must know when they need specialist help.
Organize regular knowledge sessions where different departments learn from each other's expertise. AI compliance is a team sport that is only successful if everyone understands their role.
What you can do tomorrow
Don't wait for the perfect AI strategy before starting with better contracting. There are concrete steps you can take immediately:
Review ongoing contracts
Inventory which of your current suppliers use AI functionalities, even if this isn't explicitly stated in the contract. Many SaaS suppliers have now added AI features without communicating this.
Identify compliance gaps in existing contracts and plan contract amendments or addenda. Focus first on high-risk systems and critical processes.
Develop standard tools
Start developing a standard AI tender annex (compliance annex) that you can use in future tenders. Start simple and expand based on experience.
Create a checklist of AI-related questions that must be asked in every tender. This prevents important aspects from being overlooked.
Build expertise
Train your procurement and contract teams in the basics of the AI Act and AI compliance. Invest in external expertise where needed, but also build internal knowledge.
Network with other organizations facing similar challenges. Many municipalities and government organizations are happy to share experiences and best practices.
The strategic value of proactive AI contracting
Good AI contracting is more than risk management—it's a strategic instrument that helps organizations innovate responsibly. By establishing compliance requirements upfront, you create space for experimentation within clear frameworks.
Organizations that lead in AI contracting develop a competitive advantage. They can implement new AI applications faster because the compliance foundation is already laid. They attract better suppliers who are serious about responsible AI. And they prevent costly compliance problems that are much harder to solve after the fact.
The investment in better AI contracting pays for itself through avoided risks, faster implementations, and better supplier relationships. But above all, it creates the foundation for sustainable AI adoption that delivers value without putting the organization at risk.
In episode 7 of our series, we dive into the registration and transparency process: how and where do you record which models you use and what they do? From EU database to the Dutch algorithm register.
🎯 Free EU AI Act Compliance Check
Discover in 5 minutes whether your AI systems comply with the new EU AI Act legislation. Our interactive tool gives you immediate insight into compliance risks and concrete action steps.