On 2 February 2025, Article 4 of the EU AI Act became binding for every company in Europe that uses AI. It demands that staff and other persons dealing with AI systems have a sufficient level of AI literacy. Fifteen months later, 53 percent of German companies still cite lack of AI competence as their single biggest blocker to AI adoption [7]. That is not a compliance problem. It is a strategic failure hiding behind a compliance label.
Most Mittelstand companies reach for the same answer: a 45-minute e-learning module, rolled out company-wide, with a completion certificate at the end. It does not satisfy the law, it does not change behaviour, and it burns political capital with the Betriebsrat. Meanwhile the Bundesnetzagentur (BNetzA) has already published its literacy guidance, enforcement begins on 2 August 2026, and customers are starting to ask for proof in procurement RFPs.
This guide is for the HR Director, Chief Compliance Officer, CTO, or Geschaeftsfuehrer who needs to turn Article 4 from a checkbox into a real capability. No slide templates. No regulatory panic. Just the role-based literacy framework, the 90-day rollout, and the decisions that separate companies that pass the audit from companies that end up funding the regulator through fines.
TL;DR
Article 4 is already binding. It took effect 2 February 2025. BNetzA enforcement starts 2 August 2026. Penalties: up to EUR 7.5 million or 1.5 percent of global turnover.
Generic e-learning fails both the law and the business. 53 percent of Mittelstand firms cite the competence gap as their top blocker - and 39 percent say existing training does not deliver.
Role-based literacy is the answer. 5 role archetypes (Executive, Manager, Knowledge Worker, Operations, Developer) x 3 competency levels (Foundation, Applied, Advanced).
90 days is enough to go from audit to rollout if you treat this as a capability programme, not a compliance module.
The Betriebsrat is not the obstacle. It is the ally that makes literacy stick - if you bring it in during curriculum design, not after rollout.
The Mittelstand Literacy Paradox
German SMEs are catching up on AI adoption fast. Active AI usage more than doubled from 17 percent in 2025 to 41 percent in 2026, with another 48 percent planning adoption [7]. But the competence side of the equation is moving in the opposite direction - and this gap is now the single most important blocker for Mittelstand AI programmes.
- The top barrier is not technology or cost - Bitkom 2026 lists lack of AI competence as the number one obstacle at 53 percent, ahead of data protection (44 percent) and integration (39 percent) [7].
- Spending on AI is shrinking, not growing - Mittelstand firms spent 0.35 percent of revenue on AI in 2025, down from 0.41 percent in 2024, even as AI capabilities accelerate [8].
- Training reaches the wrong people - Only 14 percent of German companies train all or almost all employees on digital topics. Two-thirds train only parts of the workforce, typically senior managers who least need it [10].
- The silicon ceiling is real - Only 51 percent of frontline workers regularly use AI, compared to 75 percent of managers. BCG calls this the silicon ceiling - the people closest to the work benefit the least [20].
- Existing training does not work - 40 percent of companies report that employees lack interest in digital training, 40 percent lack time, and 39 percent say training does not deliver the expected results [8].
- The productivity prize is massive - Employees who get 81 or more hours of AI training annually report 14 hours per week of productivity gains [19]. The gap between what AI could deliver and what it does deliver is a training gap, not a technology gap.
Key Data Point
EY reports that companies are missing up to 40 percent of AI productivity gains due to gaps in talent strategy. Only 28 percent of organisations are on track for what EY calls the Talent Advantage [19]. AI literacy is not the regulatory burden - it is the lever that unlocks the ROI everyone already paid for.
This is the paradox: the same Mittelstand companies that cite skills as their top blocker are also cutting AI spend and training the wrong people. Article 4 of the EU AI Act turns this from a management problem into a legal one. That reframing is useful - because it finally gets literacy onto the board agenda.
| Indicator | Current State | Source |
|---|---|---|
| Active AI use in German firms | 41% (up from 17% in 2025) | Bitkom 2026 [7] |
| Top AI adoption blocker | Lack of competence (53%) | Bitkom 2026 [7] |
| Firms training all employees on digital topics | Only 14% | Bitkom 2025 [10] |
| Frontline workers regularly using AI | 51% (vs 75% of managers) | BCG 2025 [20] |
| AI productivity gains missed | Up to 40% | EY 2025 [19] |
| Mittelstand AI spend as % of revenue | 0.35% (down from 0.41%) | Bitkom 2026 [8] |
What Article 4 Actually Requires (and What It Does Not)
Article 4 is short - one dense paragraph - but it is also deliberately open. It does not prescribe a curriculum, a vendor, or a minimum number of hours. It demands that you can show you took proportionate, documented measures. Here is what that actually means in practice.
The exact obligation
The Article 4 wording: providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in [1].
- Providers AND deployers - Article 4 applies whether you build AI or merely use it. For the Mittelstand, deployer is the usual role. Using ChatGPT Enterprise, an HRIS with AI screening, or a custom AI agent all make you a deployer [2].
- Staff AND other persons - The scope is wider than your payroll. Contractors, temporary workers, and external service providers using your AI on your behalf fall within scope. Contract clauses with vendors need to reflect this [14].
- Sufficient, not maximum - The threshold is sufficient for the role. A production worker interacting with a quality inspection AI does not need prompt engineering mastery. But they do need to know what the system can and cannot reliably flag [2].
- Proportional to role - The text explicitly lists technical knowledge, experience, education, training, and context. Five variables. Any programme that treats all staff identically fails the proportionality test [15].
- Measures, not certifications - Article 4 does not require a specific certification. It requires that you take measures and can evidence them. What counts as evidence is what BNetzA and the Commission ask for - attendance, assessment, written governance, context-specific content [15].
Timeline and enforcement
| Date | Event | Implication for the Mittelstand |
|---|---|---|
| 2 February 2025 | Article 4 enters into application | Obligation is binding now - even though enforcement is not yet active |
| June 2025 | BNetzA publishes AI literacy guidance | German national expectations now documented - use this as the baseline |
| July 2025 | BNetzA AI Service Desk launches | Low-threshold contact point for regulatory questions - free for SMEs |
| 2 August 2026 | Full AI Act applicability; Article 4 enforcement starts | Fines become possible; customers start asking for proof in tenders |
| Rolling | Civil claims tied to AI errors and training gaps | Plaintiff lawyers already use Article 4 non-compliance as evidence of duty breach |
What Article 4 does NOT require
- Not a specific certification - TUV, DEKRA, ISO 42001, and similar certifications can support compliance but none are mandatory [6].
- Not a fixed curriculum - The Commission explicitly rejects the idea that one curriculum fits all. Proportionality to role and context is the rule [2].
- Not a single training vendor - Bitkom Akademie, KI-Campus, custom programmes, or blended approaches all satisfy the law as long as the measures fit the role [6].
- Not a one-off event - The obligation is continuous. A single onboarding module in 2025 does not discharge a 2026 duty [14].
- Not a ban on ChatGPT - Article 4 is agnostic about which AI systems you use. The tools are a policy question; literacy is the legal requirement.
BNetzA Guidance (June 2025)
BNetzA confirmed that literacy measures must be documented, role-appropriate, and evidenced. A company that cannot show a written governance trail will be treated as non-compliant regardless of informal training activity. The AI Service Desk accepts low-threshold questions from SMEs free of charge [11].
Why Generic E-Learning Fails Both the Law and the Business
The default Mittelstand response to Article 4 is a single 45-minute generic module purchased from an LMS vendor, pushed to every employee, with a certificate at the end. It is cheap, fast, and comfortable. It also does not work - legally or operationally. Here is why.
Five reasons it fails the law
- Fails the proportionality test - A single module ignores the five variables Article 4 names: technical knowledge, experience, education, training, and context. A CTO and a warehouse worker do not need the same content [1].
- Fails the context test - Literacy must be tied to the AI systems actually in use. Generic content on large language models does not equip an HR manager to supervise AI screening [15].
- Fails the evidence test - Completion of a generic module proves attendance, not comprehension. BNetzA guidance asks for measures that meaningfully raise literacy, which requires role-appropriate assessment [11].
- Fails the continuous obligation - Article 4 is not a one-off. A single module in 2025 does not address AI systems added in 2026 [14].
- Fails the broader obligation for contractors - Generic internal modules rarely extend to external persons using your AI, leaving a scope gap that regulators can flag [14].
Five reasons it fails the business
- Employees tune out - 40 percent of companies report staff lack interest in digital training, and 39 percent say training does not deliver results [10]. Generic content accelerates this.
- Training stays at the knowledge layer - Generic modules teach what AI is, not how to spot a hallucination in a specific workflow. Behaviour does not change [17].
- The silicon ceiling gets worse - Managers sit through the module and move on. Frontline workers - who most need applied literacy - still do not get role-specific content [20].
- Shadow AI grows - Without practical guidance, staff default to private ChatGPT accounts for work tasks. 40 percent of German companies already have this problem [17].
- Betriebsrat pushes back - Works councils rarely block AI literacy itself, but they do block generic rollouts that look punitive or surveillance-heavy. A real programme with role-specific design sails through BetrVG Paragraph 87 discussions; a blanket module stalls.
Generic E-Learning vs Role-Based Literacy
Generic 45-Min Module
- ✗ Low completion quality - click-through completion rates over 90% but comprehension rates below 30%
- ✗ Fails proportionality - one content for all roles
- ✗ No context fit - generic LLM content, not tied to your AI systems
- ✗ No behaviour change - knowledge layer only
- ✗ Audit risk - BNetzA expects documented role-appropriate measures
Role-Based Literacy Programme
- ✓ Proportionate content - 5 role archetypes x 3 levels
- ✓ Context-tied - tied to the AI systems your company actually uses
- ✓ Behaviour-focused - scenario exercises, escalation drills, spot checks
- ✓ Evidence trail - attendance + assessment + applied behaviour sampling
- ✓ Betriebsrat-friendly - role-specific design eases co-determination
The Role-Based Literacy Framework (5 Roles x 3 Levels)
A literacy programme that satisfies Article 4 and actually raises capability uses a role-based matrix. Five role archetypes map to three competency levels. Each cell has a specific content set, assessment approach, and refresh cycle. This is the single most important design decision - get this right and everything downstream becomes mechanical.
The three competency levels
- Foundation (2-4 hours, all employees) - What AI is, what it is not, where the company uses it, how to raise concerns. This is the universal baseline that Article 4 requires for every employee dealing with AI systems [17].
- Applied (8-20 hours, role-specific) - How to use the specific AI systems in the role, how to spot failure modes, how to escalate, how to comply with disclosure and DSGVO. This is where the proportionality test lives [17].
- Advanced (40-100 hours over 3-6 months, critical roles) - How to build, govern, approve, and audit AI systems. Required for IT, data, compliance, and senior managers with AI responsibility [17].
The five role archetypes
- Executive / Geschaeftsfuehrung / Board - Strategic literacy. Needs to understand risk categories, investment decisions, the EU AI Act obligations, and how to challenge AI-heavy proposals. Applied level, renewed annually.
- Manager / Team Lead - Operational literacy. Needs to understand the AI systems their team uses, how to supervise outputs, when to escalate, and how to coach staff. Applied level, with quarterly updates for managers of high-risk systems.
- Knowledge Worker (Sachbearbeiter, Analyst, Specialist) - Tool literacy. Needs deep familiarity with the AI systems they interact with daily - prompt practice, hallucination detection, DSGVO red lines, escalation paths. Applied level, with role-specific deep dives.
- Operations / Shop Floor / Field Service - Interface literacy. Needs to understand how to read AI outputs (quality flags, predictive alerts, anomaly warnings), how to act or escalate, and when to override. Foundation plus a short applied module per system.
- IT / Developer / Data / Compliance - Governance and engineering literacy. Needs to understand AI architecture, risk assessment, conformity procedures, monitoring, and auditing. Advanced level, with ongoing CPE-style updates.
The competency matrix
| Role | Level | Hours / Year | Primary Outcome |
|---|---|---|---|
| Executive / Board | Applied | 10-15 | Able to set AI policy, approve investments, challenge vendor claims |
| Manager / Team Lead | Applied | 12-20 | Able to supervise AI-assisted work, coach staff, escalate correctly |
| Knowledge Worker | Applied | 15-25 | Able to use AI daily with appropriate scepticism and DSGVO awareness |
| Operations / Shop Floor | Foundation + short applied | 4-8 | Able to read AI outputs, act within scope, escalate out-of-scope situations |
| IT / Developer / Compliance | Advanced | 40-100 | Able to design, govern, audit, and document AI systems under the AI Act |
| Contractors / Temps | Role-mirrored | Matches internal equivalent | Coverage extended via contract clauses and vendor training evidence |
Design Rule
Every role x level cell must specify four things: the learning outcome, the content list, the assessment method, and the refresh cycle. If any one is missing, the cell fails the Article 4 proportionality test and the audit evidence does not hold up.
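The design rule above can be expressed as data. A minimal Python sketch - all role names, field names, and content items here are hypothetical illustrations, not a prescribed schema - that flags any role x level cell missing one of the four required elements:

```python
# The four elements every role x level cell must carry (per the design rule).
REQUIRED_FIELDS = {"outcome", "content", "assessment", "refresh_cycle"}

# Illustrative matrix with two cells; one is deliberately incomplete.
matrix = {
    ("knowledge_worker", "applied"): {
        "outcome": "Use AI daily with appropriate scepticism and DSGVO awareness",
        "content": ["prompt practice", "hallucination drills", "escalation paths"],
        "assessment": "scenario-based test",
        "refresh_cycle": "annual",
    },
    ("operations", "foundation"): {
        "outcome": "Read AI outputs, escalate out-of-scope situations",
        "content": ["system walkthrough"],
        # "assessment" is missing, so this cell fails the completeness check
        "refresh_cycle": "annual",
    },
}

def incomplete_cells(matrix):
    """Return the role x level cells missing any of the four required elements."""
    return [cell for cell, spec in matrix.items()
            if not REQUIRED_FIELDS <= spec.keys()]
```

Running the check surfaces the operations/foundation cell - exactly the kind of gap that undermines the Article 4 evidence trail if left unnoticed.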
“The comprehensive further training of employees, for example on the use of artificial intelligence, is an investment in the future viability of their own company.”
- Dr. Ralf Wintergerst, President of Bitkom [9]
Turn Article 4 into a capability, not a checkbox
Book a 30-minute call. We will map your literacy gaps against the 5x3 framework.

The 90-Day Rollout Playbook
90 days is enough to go from a standing start to a rollout that satisfies Article 4 and actually raises capability - if you treat this as a capability programme, not a compliance module. Three phases, twelve weeks, one document trail.
Phase 1: Audit and Role Mapping (Weeks 1-4)
- Week 1: AI system inventory - List every AI system in use or planned. Include SaaS tools with AI features (Microsoft Copilot, Salesforce Einstein, HRIS auto-screening), general-purpose AI (ChatGPT, Claude, Gemini), and any custom agents. Classify each by AI Act risk category [16].
- Week 2: Role mapping - Cross-reference every role in the company against the AI systems they interact with. Tag each role-system pair with the required competency level (Foundation, Applied, Advanced). Include contractors and external service providers [14].
- Week 3: Gap assessment - Measure current literacy against the required level per role. A quick self-assessment plus manager input works well. The gap is almost never what leadership thinks - expect frontline staff to be behind, knowledge workers to be overestimated, and managers to be inconsistent [20].
- Week 4: Scope and governance - Draft the literacy policy, the Betriebsvereinbarung text, the evidence collection plan, and the refresh cycle. Get legal and Betriebsrat alignment before building content.
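The Week 2 and Week 3 steps amount to comparing a required level against a current level for every role-system pair. A hedged sketch, with illustrative role, system, and level names:

```python
# Competency levels ordered so gaps can be computed; names are assumptions.
LEVELS = {"none": 0, "foundation": 1, "applied": 2, "advanced": 3}

# Week 2 output: required level per (role, AI system) pair.
required = {
    ("hr_manager", "cv_screening_ai"): "applied",
    ("shop_floor", "quality_inspection_ai"): "foundation",
    ("developer", "custom_agent"): "advanced",
}

# Week 3 output: current level per role from self-assessment plus manager input.
current = {
    "hr_manager": "foundation",
    "shop_floor": "foundation",
    "developer": "applied",
}

def literacy_gaps(required, current):
    """Return (role, system, required_level) where the current level falls short."""
    return [(role, system, level)
            for (role, system), level in required.items()
            if LEVELS[current.get(role, "none")] < LEVELS[level]]
```

In this toy data the HR manager and the developer show gaps while the shop floor is covered - the kind of output that feeds directly into the Week 4 scope document.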
Phase 2: Curriculum Build and Pilot (Weeks 5-8)
- Week 5: Content sourcing - Mix external foundation content (KI-Campus free, Bitkom Akademie paid) with custom role-applied content for your actual AI systems. Do not write Foundation from scratch - buy that layer [6].
- Week 6: Assessment design - Build short, role-specific assessments that measure comprehension, not just completion. Scenario-based questions beat multiple choice. Include at least one applied task per role.
- Week 7: Pilot with 2-3 departments - Roll out to a pilot group spanning different role archetypes. Collect feedback on content, length, relevance, and assessment fairness. Adjust.
- Week 8: Refine - Close the feedback loop. Update content, fix assessments, finalise the Betriebsvereinbarung. Prepare rollout communications.
Phase 3: Rollout and Measurement (Weeks 9-12)
- Week 9: Wave 1 rollout - Deploy to the first wave (usually 20-30 percent of staff). Monitor completion, assessment scores, and time-to-complete. Flag outliers.
- Week 10: Wave 2 rollout - Expand to the next wave. Hold weekly office hours for questions. Collect evidence of applied behaviour (escalation examples, spotted hallucinations, DSGVO flags raised).
- Week 11: Wave 3 rollout - Full company plus contractors and vendors. Send training-evidence clauses to vendors whose staff use AI on your behalf [14].
- Week 12: Measure and report - Complete the evidence dossier: attendance, assessment scores, behaviour examples, policy sign-offs. Present to the board. Set the next refresh cycle.
Article 4 Compliance Checklist
- AI system inventory with risk classification (complete, signed off)
- Role-to-system mapping across all employees and contractors
- Role-based curriculum with Foundation / Applied / Advanced levels
- Assessment design per role (comprehension, not just completion)
- Betriebsvereinbarung signed covering content, scope, data collection
- Training delivered with attendance and assessment evidence
- Applied behaviour evidence captured (escalations, flags, corrections)
- Refresh cycle documented and scheduled (annual minimum)
- Vendor contract clauses updated for contractor literacy coverage
- Written governance policy signed by management and works council
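Tracked as data, the checklist above doubles as a progress report for the board. A minimal sketch - item keys are assumptions for illustration, not an official BNetzA schema:

```python
# Checklist state after Phase 1: inventory, mapping, and curriculum done.
checklist = {
    "ai_system_inventory": True,
    "role_to_system_mapping": True,
    "role_based_curriculum": True,
    "assessment_design": False,
    "betriebsvereinbarung_signed": False,
    "delivery_evidence": False,
    "behaviour_evidence": False,
    "refresh_cycle_scheduled": False,
    "vendor_clauses_updated": False,
    "governance_policy_signed": False,
}

def outstanding(checklist):
    """Items still open, sorted for a stable board report."""
    return sorted(item for item, done in checklist.items() if not done)

def readiness(checklist):
    """Share of checklist items completed, 0.0 to 1.0."""
    return sum(checklist.values()) / len(checklist)
```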
Betriebsrat and Legal Alignment: The Hidden Gate
AI literacy programmes fail more often from Betriebsrat or legal friction than from bad content. German co-determination gives works councils strong rights on training content, delivery, and measurement. Ignore this and a well-designed programme stalls for months. Engage early and the Betriebsrat becomes the rollout accelerator.
What the works council can (and will) raise
- BetrVG Paragraph 87(1)(7) - Co-determination on occupational training when training content, method, and mandatory vs voluntary nature are at stake. AI literacy rollouts almost always touch this clause.
- BetrVG Paragraph 95 - Guidelines on selection, transfer, regrouping, and dismissal. AI literacy assessments that feed into performance reviews or promotion decisions trigger this.
- BetrVG Paragraph 96 - Promotion of occupational training. Works councils can actively shape the content and delivery.
- BetrVG Paragraph 87(1)(6) - Technical monitoring of employees. If the LMS or training platform tracks behaviour beyond what is needed for Article 4 evidence, co-determination applies to the tool itself.
- DSGVO / GDPR - Training records and assessment data are personal data. Legal basis, retention period, and access rights all need written clarity.
- Allgemeines Gleichbehandlungsgesetz (AGG) - Assessments must not produce adverse impact against protected groups. This is a live risk for AI-literacy tests that rely heavily on language proficiency.
The Betriebsvereinbarung essentials
| Clause | Purpose | Common Pitfall |
|---|---|---|
| Scope definition | Which roles, systems, and contractors are covered | Leaving contractors out and creating a scope gap |
| Content ownership | Who approves curriculum changes | Unilateral employer control triggers co-determination |
| Delivery modes | In-person, online, blended, timing, working hours | Treating training as unpaid time outside working hours |
| Assessment use | How assessment results are used and stored | Using assessments for promotion without BetrVG 95 basis |
| Data protection | What personal data is collected, retention, access | Over-collection of telemetry beyond Article 4 evidence need |
| Non-compliance handling | What happens if an employee does not complete training | Disciplinary measures without proportionality ladder |
Practical Rule
Involve the Betriebsrat during Week 4 of the 90-day rollout, not during Week 12. Early involvement turns the works council into a design partner. Late involvement turns them into a gate. The same programme lands in 90 days or 270 days depending on which path you choose.
How BNetzA Will Actually Audit You
Enforcement begins 2 August 2026. By then BNetzA will have had more than a year of guidance, Service Desk traffic, and EU-level coordination to refine its audit approach. Companies that prepare for the likely audit pattern now will pass routinely. Companies that wait will scramble.
The likely audit pattern
- Trigger event - Audits will follow complaints, incidents (AI-caused harm, data leaks), sector-wide sweeps, or random sampling. SMEs are unlikely to be random-sampled first, but sector sweeps of manufacturing, financial services, and healthcare are expected [12].
- Documentation request - BNetzA will ask for the AI system inventory, the literacy policy, the Betriebsvereinbarung, and proof of training delivery. Companies that can produce these within two weeks signal strong governance [11].
- Evidence review - Completion rates, assessment scores, and role coverage. Gaps trigger follow-up questions - do you train contractors, do you refresh annually, do you have content for high-risk systems.
- Behaviour sampling - Interviews or scenario tests with a sample of staff. Can the sales team describe how to disclose AI use to customers? Can the HR team explain what the AI screening tool does and does not measure? This is where generic modules break.
- Proportionality challenge - BNetzA can ask why the literacy level matches the role and context. Expect to defend your five-by-three matrix.
- Gap-closure order or fine - For small gaps, expect a remediation order with a deadline. For systemic failure, fines up to EUR 7.5 million or 1.5 percent of turnover [4].
Evidence to keep ready
- AI system register - Up to date, signed off, includes risk classification and systems added since last review.
- Role-to-system mapping - Shows every employee and contractor with the AI systems they touch and the required literacy level.
- Curriculum dossier - Content per role x level, source (internal, KI-Campus, Bitkom Akademie, vendor), last revision date.
- Delivery records - Attendance logs, completion dates, assessment scores, pass/fail records.
- Behaviour evidence - Examples of applied literacy - escalations, flags, corrections. This is what moves BNetzA from sceptical to satisfied.
- Governance trail - Signed policy, Betriebsvereinbarung, refresh cycle, vendor clauses, board minutes showing AI literacy as a standing item.
- Incident and near-miss log - Documents how the literacy programme is updated when things go wrong. Shows a live, learning programme rather than a frozen compliance module.
Tooling Landscape: Bitkom Akademie vs KI-Campus vs Custom
No single vendor solves Article 4 end-to-end. The right answer is a blend - external foundation content plus a custom internal layer that reflects your actual AI systems. Here is how the main options compare for the Mittelstand.
The four main options
- Bitkom Akademie - Paid programme, strong German-market focus, live online seminars, covers AI Leadership and broader KI-Kompetenzschulungen 2026. Good for foundation plus manager-level applied. Pricing is per seat [22].
- KI-Campus - Free, academic origin, deep catalogue from Foundation to Advanced. Good for baseline literacy at no cost, weaker on context-specific applied content. Works as the cheap Foundation layer [6].
- TUV / DEKRA / Fraunhofer-linked - Certification-flavoured, often tied to broader AI governance training. Good for IT and Compliance advanced-level certifications. Premium pricing [6].
- Custom internal curriculum - Built around your specific AI systems, workflows, and escalation paths. Cannot be bought off the shelf. Required for the Applied and Advanced layers where proportionality lives.
Comparison table
| Option | Best For | Cost Indicator | Proportionality Fit |
|---|---|---|---|
| KI-Campus | Foundation for all staff | Free | Weak (generic) |
| Bitkom Akademie | Managers, executives, applied layer | Paid per seat, medium | Medium |
| TUV / DEKRA / Fraunhofer | IT, compliance, advanced certifications | Premium | Medium-High (for roles it targets) |
| Custom internal | Applied and Advanced levels tied to your AI systems | High build cost, low marginal cost | High (by design) |
| Blended approach | Whole company - recommended | Low-medium overall | High |
What Works
The pattern that consistently passes Article 4 at Mittelstand scale is: KI-Campus for Foundation across all staff (free), Bitkom Akademie or equivalent for manager-level Applied content, TUV or Fraunhofer-linked for IT and Compliance Advanced, plus a custom internal layer tied to your actual AI systems. Three external inputs, one internal spine.
How Superkind Fits
Superkind builds custom AI agents for Mittelstand and enterprise companies. Because the agents we build are the AI systems your staff interact with, Article 4 literacy is part of how we deliver - not an afterthought added at the end.
- Literacy built into every agent delivery - Every custom agent we ship includes role-specific applied content covering what the agent does, what it cannot do reliably, failure modes, escalation paths, and DSGVO handling. Proportionate by design.
- Process-first discovery translates to role-based literacy - Because we map every workflow before building, we know which roles interact with the agent. That map becomes the literacy matrix, not a separate exercise.
- Evidence collected from day one - Training attendance, assessment scores, and applied behaviour from the first rollout wave. Your Article 4 evidence dossier fills itself.
- Betriebsrat-ready documentation - Every agent ships with a plain-German description suitable for the Betriebsvereinbarung, covering what data the agent sees, which decisions require human review, and how staff escalate.
- Integration with external foundation content - We assume you use KI-Campus, Bitkom Akademie, or a similar Foundation layer. Our applied content slots on top without duplication.
- Refresh cycle built in - When the agent changes (new capability, new data source, new risk), the literacy content updates with it. You do not maintain a frozen curriculum while the system evolves.
- Audit-ready from week one - Every agent includes audit logging, clear system documentation, and a role-based literacy package. If BNetzA asks, the evidence is already assembled.
- No platform lock-in for training - Content delivers through your existing LMS or learning platform. We do not sell a training platform - we embed literacy into how your agents ship.
| Approach | Generic AI Vendor | Superkind |
|---|---|---|
| Literacy ownership | Customer problem, not vendor problem | Shipped with every agent |
| Content source | Generic product documentation | Role-specific, tied to your workflows |
| Evidence | Customer builds from scratch | Collected from day one |
| Betriebsrat docs | None provided | Plain-German agent description included |
| Refresh cycle | Customer maintains | Updated with every agent change |
| Audit readiness | Customer assembles | Ready for BNetzA from week one |
Superkind
Pros
- ✓ Literacy-by-design - role-based content shipped with every agent
- ✓ Evidence from day one - attendance, assessment, applied behaviour
- ✓ Betriebsrat-friendly - plain-German docs, co-determination-ready
- ✓ No platform lock-in - delivers through your existing LMS
- ✓ Refresh built in - content updates with the agent
Cons
- ✗ Not a training platform vendor - we embed literacy, we do not sell courseware
- ✗ Foundation layer not covered - KI-Campus or Bitkom Akademie still needed for general literacy
- ✗ Scope tied to our agents - literacy for third-party AI systems still your responsibility
- ✗ Requires process access - we need to understand real workflows to build proportionate content
“What AI literacy is should depend on the context. AI literacy is not just a general term; it is about the specific role of each person in the organisation and their interaction with specific AI systems.”
- European Commission, AI Literacy Q&A (DG CONNECT) [2]
Frequently Asked Questions
Yes. Article 4 entered into application on 2 February 2025. The obligation to ensure a sufficient level of AI literacy for staff and other persons dealing with AI systems already applies. Supervision and enforcement by national market surveillance authorities starts on 2 August 2026, but companies should be in compliance today, not in nine months.
Both providers and deployers of AI systems, which for the Mittelstand means any company using AI - not just building it. The obligation covers staff plus "other persons dealing with the operation and use" of AI systems on the company's behalf. That includes contractors, temporary workers, and external service providers who interact with your AI tools.
Article 4 is deliberately proportionate. The required level depends on the person's technical knowledge, experience, education, training, and the context in which the AI system is used. A production worker using a quality inspection AI needs different literacy than a procurement manager approving supplier scoring or a legal counsel reviewing hiring systems. Generic "What is AI" slides for everyone fail both the letter and the spirit of the law.
Non-compliance with Article 4 falls under the general infringement tier of the AI Act: up to EUR 7.5 million or 1.5 percent of total worldwide annual turnover, whichever is higher. For SMEs, the cap flips - it is whichever amount is lower. The larger financial risk, however, is reputational and contractual: customers, insurers, and public tenders increasingly require proof of AI literacy compliance.
Germany's draft AI Act Implementing Act designates the Bundesnetzagentur (BNetzA) as the general market surveillance authority. BNetzA published guidance on AI literacy in June 2025 and launched an AI Service Desk in July 2025 as a low-threshold contact point. Sector-specific regulators such as BaFin for financial services retain oversight for their domains.
Industry benchmarks suggest 2 to 4 hours of foundational literacy for every employee, 8 to 20 hours of role-applied training for people who actually use AI in their work, and 40 to 100 hours over 3 to 6 months for staff who build, govern, or approve AI systems. The exact figures are less important than the role-appropriate design and documented evidence of completion.
For most roles in the Mittelstand, no. A single generic module will not cover the proportionality test (technical knowledge, experience, context). It also fails to equip staff to actually recognise AI risks in their day-to-day work, which is what BNetzA guidance and the Commission Q&A both emphasise. Treat 45 minutes as the absolute floor for roles with no AI contact, not as a complete programme.
Under BetrVG Paragraph 87, the works council has co-determination rights for training content and delivery modes, and under Paragraph 95 for selection guidelines when training affects promotion. AI literacy programmes typically require a Betriebsvereinbarung covering curriculum, mandatory vs voluntary scope, data collected during assessments, and any performance evaluation consequences.
Off-the-shelf courses work as building blocks, not as a complete answer. They cover foundational literacy well, but proportionality requires role- and context-specific content that no external provider knows in advance. A realistic setup combines external foundation courses (KI-Campus free, Bitkom Akademie paid) with a custom internal layer that trains staff on your specific AI systems, data, and escalation paths.
Document four things: the AI system inventory with risk classification, the competency requirements mapped to roles and AI systems, the training curriculum with content and delivery records, and proof of completion per employee. Keep attendance logs, assessment results, content revisions, and the policy document signed off by management and the works council. BNetzA has signalled that a written governance trail is the baseline expectation.
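A minimal sketch of that evidence trail as a per-employee record; the field names are assumptions for illustration, not a BNetzA-mandated schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingEvidence:
    """One completion record tying an employee to a specific curriculum revision."""
    employee_id: str
    role: str                      # from the role-to-competency mapping
    ai_systems: list[str]          # from the AI system inventory
    curriculum_version: str        # links completion to a content revision
    completed_on: date
    assessment_score: float        # comprehension evidence, 0.0 to 1.0
    signed_off: bool = False       # policy sign-off (management + works council)

record = TrainingEvidence(
    employee_id="E-1042",
    role="knowledge_worker",
    ai_systems=["supplier-scoring"],
    curriculum_version="2025-Q3",
    completed_on=date(2025, 9, 12),
    assessment_score=0.85,
)
```

Versioning the curriculum in each record matters: when content is revised, you can show exactly which staff were trained on which revision.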
Measure three layers. Completion (did staff finish the module), comprehension (do they pass a role-appropriate assessment), and behaviour (do they actually apply the literacy - spotting hallucinations, escalating correctly, disclosing AI use to customers). Only the third layer shows real literacy, and it needs periodic spot checks, not a one-off test.
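The three layers above reduce to three aggregate rates over the training records. A sketch, assuming per-employee fields for completion, assessment score, and spot-check outcome, with an illustrative 0.7 pass threshold:

```python
def literacy_metrics(records):
    """Compute the three measurement layers over an iterable of record dicts.

    Each record is assumed to carry: 'completed' (bool), 'score' (float),
    and 'spot_check_passed' (bool) - field names are illustrative.
    """
    records = list(records)
    n = len(records)
    return {
        "completion":    sum(r["completed"] for r in records) / n,
        "comprehension": sum(r["score"] >= 0.7 for r in records) / n,  # assumed threshold
        "behaviour":     sum(r["spot_check_passed"] for r in records) / n,
    }

sample = [
    {"completed": True,  "score": 0.9, "spot_check_passed": True},
    {"completed": True,  "score": 0.6, "spot_check_passed": False},
    {"completed": False, "score": 0.0, "spot_check_passed": False},
]
metrics = literacy_metrics(sample)
```

In the sample, completion looks healthy while behaviour lags - exactly the gap a completion-only dashboard hides.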
AI literacy under Article 4 covers understanding how AI systems work, their limits, risks, and the governance context. AI security training covers threat-specific topics like prompt injection, data leakage, and misuse detection. Both are needed but they are not interchangeable. Security training that skips literacy leaves staff unable to judge when a situation is outside their AI system's competence.
Article 4 covers "other persons dealing with the operation and use of AI systems" on the company's behalf. That includes contractors, temporary workers, and external service providers. The obligation scales with their level of interaction - a consultant building an AI agent needs advanced literacy, a cleaner in a facility with AI systems does not. Contract clauses should specify the required literacy level for each role.
The AI Act does not set a fixed interval, but the proportionality principle implies refresh cycles tied to how fast your AI systems, risks, and regulations change. A practical pattern: annual mandatory refresh for all staff, quarterly updates for staff using high-risk systems, and ad-hoc briefings when new agents go live or significant regulatory changes land. Document the schedule in your governance policy.
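The refresh pattern above can be captured as a small schedule table; the tier names and intervals mirror the cadences suggested in the text and are assumptions, not regulatory requirements:

```python
from datetime import date, timedelta

# Refresh cadence per staff tier - illustrative values from the pattern above.
REFRESH_INTERVAL = {
    "all_staff":       timedelta(days=365),  # annual mandatory refresh
    "high_risk_users": timedelta(days=90),   # quarterly for high-risk systems
}

def next_refresh(last_completed: date, tier: str) -> date:
    """Return the next due date for a tier, given the last completion date."""
    return last_completed + REFRESH_INTERVAL[tier]
```

Ad-hoc briefings (new agents going live, regulatory changes) sit outside the fixed schedule and simply reset `last_completed` for the affected staff.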
Ignoring Article 4 triggers three consequences, in order of likelihood. First, customers and public tenders start requiring proof of compliance as a prerequisite - you lose deals before any regulator acts. Second, civil claims arise when AI-caused errors reveal untrained staff. Third, BNetzA enforcement kicks in from August 2026, with fines up to EUR 7.5 million or 1.5 percent of turnover. Ignoring Article 4 is a rising-cost strategy, not a no-cost one.
Sources
- EU Artificial Intelligence Act - Article 4: AI literacy
- European Commission - AI Literacy Questions & Answers
- EU AI Act - Implementation Timeline
- EU AI Act - Article 99: Penalties
- EU AI Act - Small Businesses Guide
- AI Literacy Programs in Europe - Supporting Article 4
- Bitkom - Künstliche Intelligenz in Deutschland (Studienbericht 2026)
- Bitkom KI Studie 2026 - 41 Prozent deutscher Firmen nutzen KI aktiv
- Bitkom - Durchbruch bei Künstlicher Intelligenz (Dr. Ralf Wintergerst)
- Silicon Saxony - Bitkom: Further training on digital topics
- CMS Law - AI update for employers: AI Act Implementing Act (Germany)
- CMS Law - AI Act: Transition periods and German market surveillance
- Delbion - EU AI Act Article 4: AI Literacy Obligation for Providers and Deployers
- Travers Smith - The EU AI Act's AI literacy requirement: key considerations
- Compliquest - AI Literacy Under the EU AI Act Article 4: What to know
- Latham & Watkins - Upcoming EU AI Act Obligations: Mandatory Training
- DataCamp - The State of Data and AI Literacy in 2026
- DataCamp - AI Skills Gap in 2026: Why Training Is Not Enough
- EY - Work Reimagined Survey 2025: Missing 40 percent of AI productivity gains
- BCG - AI at Work 2025: Momentum builds but gaps remain
- World Economic Forum - Future of Jobs Report 2025
- Bitkom Akademie - KI-Kompetenzschulungen 2026
- DIHK - Skilled Labour Report 2025/2026
- Bundesnetzagentur (BNetzA) - AI Service Desk
Ready to turn Article 4 into a capability?
Book a 30-minute call with Henri. We will map your AI systems against the 5x3 framework and outline a 90-day rollout - no commitment, no sales pitch.
Book a Demo →
