AI Literacy Training

A comprehensive training for both employees and management to effectively and responsibly implement AI within your organization. From practical AI applications to strategic implementation and compliance, this training provides a complete overview of everything you need to know about AI in practice.

EU AI Act compliance: From February 2, 2025, the European AI Act requires demonstrable AI literacy for anyone working with high-risk AI. Recruiters using CV screeners, bankers managing credit models, healthcare teams running triage apps: all have the same obligation — understanding what the algorithms do, what errors they can make, and how to correct them.

Our half-day training establishes exactly that foundation. We build the session modularly: one shared core plus sector-specific deep dives in separate breakouts. That way you get customization without having to book five separate workshops.

Half day (09:00 - 13:00)
Customized

AI Risk Zones per Sector: EU AI Act Priorities

The table below shows which sectors have priority under the EU AI Act, and why these specific sectors need special attention for AI compliance.

| Priority | Sector / Domain | Reason ("why now?") | Typical high-risk use cases (Annex III) |
| --- | --- | --- | --- |
| Tier 1 | Financial services (banks, insurers, fintech) | Strong EU supervision frameworks (EBA, DORA); reputation risk | Credit scoring, transaction monitoring, biometric KYC |
| Tier 1 | Healthcare & med-tech | MDR connection; patient safety | Diagnostic support, AI triage, robot surgery |
| Tier 1 | Public sector (ministries, municipalities, executive agencies) | Political pressure for transparency; procurement requirements for AI Act compliance | Risk assessments (benefits), crowd monitoring, chatbot services |
| Tier 1 | HR-tech / recruitment services | Annex III explicitly mentions AI in recruitment & selection | CV screening, video analysis, performance tracking |
| Tier 1 | Critical infrastructure (energy, transport, air & rail) | Safety component → automatically high-risk | Predictive maintenance, traffic control |
| Tier 2 | EduTech / universities & colleges | Annex III lists education systems → high-risk | Proctoring, adaptive learning |
| Tier 2 | Manufacturing & logistics | Rapid AI adoption; OSHA & CE requirements | Vision inspection, autonomous vehicles |
| Tier 3 | Scale-ups building generative AI products | IP risks & brand reputation | Chat co-pilots, content generation |

HR & Recruitment – where the chain reaction begins

The Challenge

Why here specifically? The law explicitly names selection AI, because a faulty algorithm can literally shut someone out of the job market. Recruiters are therefore the first line of defense against bias, and the first to be fined if they blindly trust their dashboard.

The Approach

What does the training look like? We start with a live bias scan of your own CV filter: twenty anonymous profiles, one ranking model, ten minutes of suspense. The outcome forms the common thread for the rest of the morning. We discuss transparency towards candidates ("You are being screened by AI") and practice human overrides, including the log entries an auditor wants to see. A favorite take-away from previous participants: the "explain sheet" that lets recruiters explain in three sentences why candidate X was nonetheless rejected. According to Sanne Derksen, Head of Talent Acquisition at a fintech scale-up, "that sheet pays for itself in one day in credibility with candidates and hiring managers."
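The kind of bias scan run in this exercise can be illustrated with a minimal sketch: rank the candidates, compare per-group selection rates in the shortlist, and flag adverse impact with the four-fifths rule of thumb. All profile data, scores, and the shortlist size below are invented for illustration; the workshop uses your organization's own CV filter.

```python
from collections import Counter

def selection_rates(candidates, shortlist_size):
    """Per-group selection rate in the top of a ranking.

    candidates: list of (candidate_id, group, model_score) tuples.
    """
    ranked = sorted(candidates, key=lambda c: c[2], reverse=True)
    selected_ids = {cid for cid, _, _ in ranked[:shortlist_size]}
    totals = Counter(group for _, group, _ in candidates)
    hits = Counter(group for cid, group, _ in candidates if cid in selected_ids)
    return {group: hits[group] / totals[group] for group in totals}

def four_fifths_flag(rates):
    """Flag adverse impact when the lowest selection rate falls below
    80% of the highest (the classic four-fifths rule of thumb)."""
    lo, hi = min(rates.values()), max(rates.values())
    return hi > 0 and lo / hi < 0.8

# Twenty invented profiles in two groups; this model scores group B higher.
profiles = ([(i, "A", 0.50 + 0.02 * i) for i in range(10)]
            + [(10 + i, "B", 0.51 + 0.04 * i) for i in range(10)])

rates = selection_rates(profiles, shortlist_size=8)
print(rates)                    # group A is shortlisted far less often
print(four_fifths_flag(rates))  # True: the ranking shows adverse impact
```

The point of the exercise is exactly this: the bias is invisible in any single ranking decision and only shows up when selection rates are compared across groups.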

Finance – compliance is culture, AI adds a new layer

The Challenge

What's at stake? Credit scoring and anti-fraud models naturally fall into the high-risk box. Fines of up to seven percent of global revenue sound louder in boardrooms than any keynote on innovation.

The Approach

What do we do in the break-out? We deliberately introduce a flaw in a fictional credit model: a bias alert on postal code 1092. The group decides live whether the model is paused, how that decision appears in the ISAE 3402 reporting, and what customer communication follows. Along the way, participants learn how to link AI Act documentation to existing risk frameworks, so that compliance becomes one integrated story rather than yet another layer on top.
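The human-override step in this exercise can be sketched as a small audit record: pause or continue is captured as a named human decision with a rationale. The field names and values below are invented for illustration, not a prescribed ISAE 3402 format.

```python
import json
from datetime import datetime, timezone

def handle_bias_alert(model_id, feature, decision, decided_by, rationale):
    """Record a human decision on a bias alert as an auditable log entry.

    Field names are illustrative; the point is that pausing or continuing
    a model is a logged decision by a named human, not 'the system'.
    """
    assert decision in {"pause_model", "continue_with_monitoring"}
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "trigger": f"bias_alert:{feature}",
        "decision": decision,
        "decided_by": decided_by,
        "rationale": rationale,
    }
    return json.dumps(entry)

# The workshop scenario: alert on postal code 1092, the group pauses the model.
record = handle_bias_alert(
    model_id="credit-scoring-v3",
    feature="postal_code=1092",
    decision="pause_model",
    decided_by="risk.committee",
    rationale="Disparate approval rates pending root-cause analysis.",
)
print(record)
```

A record like this is what links the AI Act's documentation duty to the risk reporting a bank already produces: one decision, one log entry, one story.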

Healthcare & Med-tech – no algorithm may break patient trust

The Challenge

Why is the bar highest here? A triage app or decision-support tool can make the difference between treatment and no treatment. Errors are paid for not only in money but in reputation, and sometimes in lives.

The Approach

How do we make it tangible? We simulate an emergency scenario: the AI advises sending patients with mild symptoms home, but the attending physician has doubts. The group works through a real-time decision tree: open the log files, force a second opinion, inform the patient. Afterwards, we translate the input into concrete improvement actions: additional training data, an explanation module for nurses, and an escalation button for doctors.
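The decision tree from this scenario can be sketched as a tiny routing function: physician doubt is the trigger that forces the escalation path instead of letting either the AI or a single person decide alone. The step names below are invented for illustration.

```python
def triage_next_steps(ai_advice, physician_agrees):
    """Route the simulated emergency scenario.

    When the attending physician doubts the AI's advice, a fixed
    escalation path is followed; step names are illustrative only.
    """
    if physician_agrees:
        # Concordant decisions are still recorded for the audit trail.
        return [f"record_concordant_decision:{ai_advice}"]
    # Doubt triggers: inspect the logs, force a second opinion, inform the patient.
    return ["open_model_logs", "force_second_opinion", "inform_patient"]

# AI advises sending the patient home; the physician doubts it.
print(triage_next_steps("send_home", physician_agrees=False))
```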

Public Sector – transparency is the default, not the exception

The Challenge

The tension: strict procurement rules, media sensitivity, and critical citizens. A seemingly simple resident chatbot can make headlines if it gives legal advice it is not authorized to give.

The Approach

The workshop essence: we dissect an existing municipal chatbot. In teams, we rewrite the opening message ("I am an AI assistant, not a lawyer"), define trust boundaries, and work out what happens when the answer is not certain. The result: a concrete blueprint of transparency and fallback rules that can go live the same day.
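Such fallback rules can be sketched as a simple gate: a fixed transparency opener, a scope check, and a confidence threshold below which the bot forwards the question rather than guessing. The intent names, threshold, and messages below are invented for illustration.

```python
OPENER = "I am an AI assistant, not a lawyer."

# Illustrative trust boundary: intents the bot is allowed to answer.
IN_SCOPE = {"opening_hours", "waste_collection", "permit_status"}

def answer(intent, confidence, draft_reply, threshold=0.75):
    """Apply two fallback rules before a municipal chatbot replies.

    Out-of-scope intents and low-confidence answers both escalate to a
    human instead of guessing; values here are invented for illustration.
    """
    if intent not in IN_SCOPE:
        return f"{OPENER} I can't advise on this, so I'll connect you to a clerk."
    if confidence < threshold:
        return f"{OPENER} I'm not certain here, so I'm forwarding your question."
    return f"{OPENER} {draft_reply}"

print(answer("legal_advice", 0.95, ""))  # out of scope, regardless of confidence
print(answer("opening_hours", 0.40, ""))  # in scope but too uncertain
```

Note that the out-of-scope check comes first: a confident answer to a question the bot is not authorized to handle is exactly the headline scenario the exercise is about.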

Critical Infrastructure – when downtime is not an option

The Challenge

Why are the stakes especially high here? An incorrect prediction from a predictive maintenance model can shut down a turbine or, worse, put people at risk.

Critical infrastructure includes essential services and systems such as energy, water, transport, telecommunications, and cybersecurity that are indispensable for the functioning of society. These sectors automatically fall into the high-risk category of the EU AI Act because disruptions have broad societal impact.

The Approach

What we do: we present an imaginary malfunction, a sudden temperature spike that the model is uncertain about. The group decides whether the asset goes offline, what the incident log looks like, and how the operator explains why the AI became confused. This goes straight to the core question of the AI Act: how do you explain a black-box model to a human operator?
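The core decision in this scenario can be sketched as an uncertainty gate: a confident dangerous prediction shuts the asset down, but an uncertain model defers to the human operator instead of acting. All thresholds below are invented for illustration.

```python
def triage_prediction(temp_pred_c, uncertainty_c, limit_c=95.0, max_unc_c=5.0):
    """Route a predictive-maintenance prediction one of three ways.

    Clear-safe continues, clear-danger shuts down, and an uncertain
    model defers to the operator; thresholds are illustrative only.
    """
    if uncertainty_c > max_unc_c:
        # The model admits it is confused: a human decides, not the AI.
        return "escalate_to_operator"
    if temp_pred_c >= limit_c:
        return "take_asset_offline"
    return "continue_operation"

# Sudden temperature spike with a wide uncertainty band: the human decides.
print(triage_prediction(temp_pred_c=98.0, uncertainty_c=12.0))
```

The design choice worth discussing with operators is the order of the checks: uncertainty is evaluated before the alarm threshold, so a confused model can never trigger an automatic shutdown on its own.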

What You Will Learn

Organizational Implementation

  • Develop an effective AI strategy for your organization
  • Identify promising AI applications within your work processes
  • Create support and overcome resistance to AI
  • Create a phased implementation plan for AI adoption

Policy & Governance

  • Develop an organization-wide AI policy
  • Create practical guidelines for responsible AI use
  • Implement effective control mechanisms
  • Balance innovation and risk management

Compliance & Risk Management

  • Understand the EU AI Act and its impact on your organization
  • Implement privacy-by-design principles in AI use
  • Identify and mitigate AI-related risks
  • Ensure transparency and explainability of AI systems

What It Gives You

  • Strategic Implementation (organization-wide impact): create an AI-driven organization with a clear strategy
  • EU AI Act Ready (compliance & governance): meet all legal AI obligations and minimize risks
  • Adoption & Acceptance (cultural change): foster a positive attitude towards AI within your organization

What You Can Expect

  • Introduction to AI and practical applications
  • Working with AI tools such as ChatGPT and Copilot
  • AI strategy and implementation in organizations
  • Legal frameworks and compliance (EU AI Act)
  • Safe and responsible AI use
  • Best practices and case studies
  • AI governance and risk management
  • Hands-on workshops and exercises
  • Developing AI policies
  • Certificate of participation

Preview of the AI Literacy Training

View some slides from the training to get an impression of what you can expect.

What is AI Literacy?

AI literacy is essential for every modern organization, especially with the introduction of the EU AI Act:

Definition of AI Literacy

AI literacy means that employees have sufficient understanding of:

  • How AI systems work and their capabilities
  • The impact of AI on daily work activities
  • Responsible use of AI tools
  • Recognizing opportunities and risks
  • Effectively collaborating with AI systems

Benefits

  • Increased productivity
  • Better decision-making
  • Competitive advantage
  • Compliance with legislation

Legal Context

  • EU AI Act requires AI literacy
  • Mandatory for organizations using AI
  • Part of compliance and risk management

Your Trainer

Zahed Ashkara

AI Trainer & EU AI Act Specialist

As the founder of Embed AI, Zahed combines his expertise in artificial intelligence with a passion for innovation. With his legal background and extensive knowledge of AI technology, he helps organizations with their digital transformation.

As an EU AI Act specialist and certified AI Compliance Officer (CAICO), he helps organizations implement AI systems that comply with the new European legislation.

Zahed's unique combination of legal expertise and technical AI knowledge makes him the ideal trainer for this training. His practical approach ensures that you not only understand the theory, but can also apply it directly within your organization.

Investment

€ 495,- excl. VAT per person

Included:

  • Comprehensive training materials
  • Lunch and refreshments
  • Certificate of participation
  • Implementation plan template
  • Practical guides and checklists
  • Reference materials and documentation

Optional extras:

  • In-company training (max. 15 participants): + € 4,500,-
  • Additional participant for in-company training: + € 250,-

Available discounts:

  • Early-bird discount (booking more than 2 months before start): 10%
  • Organization discount (3+ participants): 15%

Available Dates

May

No longer available

June

No longer available

July

July 4, 2025
July 11, 2025
July 18, 2025
July 25, 2025

August

August 1, 2025
August 8, 2025
August 15, 2025
August 22, 2025
August 29, 2025

What Participants Say

Kasim Alsina
Senior Internal Auditor | CIA, CRMA, CISA, CAMS, CFE, CCE
I found the AI Literacy course really informative and easy to follow. It gave a solid overview of how AI works, where it's being used, and some of the risks and ethical issues to keep in mind. As an internal auditor, it helped me understand how AI might affect things like internal controls, data handling, and risk areas. It's definitely made me more confident when it comes to discussing AI-related topics at work. A really useful course overall, highly recommended!
David Lovell
Senior Consultant Privacy & Data Protection
Zahed is a professional and skilled trainer who, with his substantive knowledge and practical approach, knows how to thoroughly understand and explain complex topics. Much is still unclear about the European AI Regulation, but the masterclass provided clarity about the legislation and its impact on companies. I gained a better understanding of the risk categorization of AI models and the (legal) requirements associated with them. The practical approach, combined with technological insight, makes this a valuable training that I recommend to every legal professional.