Definition: EU AI Act
The EU AI Act (Regulation (EU) 2024/1689) is the European Union’s binding legal framework for artificial intelligence, establishing risk-based obligations for organizations that develop, deploy, or use AI systems within the EU.
Core characteristics of EU AI Act
The Act applies to any organization - regardless of where it is based - that places AI systems on the EU market or whose AI systems produce outputs used within the EU.
- Four-tier risk classification: unacceptable risk (banned), high-risk (strict controls), limited risk (transparency obligations), minimal risk (self-regulatory)
- Phased implementation timeline: key deadlines in February 2025, August 2026, and August 2027
- Extraterritorial scope: applies to non-EU companies whose AI affects EU users
- Mandatory human oversight (Article 14) for all high-risk AI systems
EU AI Act vs. GDPR
GDPR governs how personal data is collected, stored, and processed. The EU AI Act governs how AI systems are built, deployed, and monitored. Both apply simultaneously to AI systems that process personal data - which includes most enterprise AI deployments. GDPR compliance does not imply EU AI Act compliance: an AI system can be fully GDPR-compliant while still violating the Act’s transparency or human oversight requirements. For Mittelstand companies, the practical difference is that GDPR primarily creates data handling obligations, while the EU AI Act creates system design and documentation obligations.
Importance of EU AI Act in enterprise AI
The EU AI Act is the most consequential regulatory development for enterprise AI governance in Europe since GDPR. Gartner estimates that 60% of enterprises will need to modify at least one AI deployment for compliance by 2026. For regulated industries - financial services, healthcare, logistics - the Act elevates AI compliance from voluntary best practice to legal obligation with material financial consequences.
Methods and procedures for EU AI Act
Three structured approaches guide enterprise compliance programs.
Risk classification assessment
The first step is classifying every AI system the organization develops or uses against the Act’s four risk tiers.
- Map all AI tools, vendors, and internal systems to the relevant risk category
- Identify high-risk systems: AI in hiring, credit scoring, medical devices, critical infrastructure, and law enforcement
- Document classification decisions with written rationale for each system
- Review vendor contracts to establish whether your organization is a deployer or provider under the Act
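The inventory and classification steps above can be sketched as a simple record structure. This is an illustrative Python sketch, not a legal tool: the system names and vendor are hypothetical, and the category set is a simplified stand-in for the authoritative Annex III list, which must be checked with legal counsel.

```python
from dataclasses import dataclass
from datetime import date

# Simplified stand-in for the Annex III high-risk application areas;
# the authoritative list is Annex III of Regulation (EU) 2024/1689.
HIGH_RISK_CATEGORIES = {
    "biometric_identification", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration_border_control", "administration_of_justice",
}

@dataclass
class AISystemRecord:
    name: str
    vendor: str          # "internal" for in-house systems
    role: str            # "provider" or "deployer" under the Act
    use_category: str    # mapped business use case
    rationale: str       # written reasoning for the classification decision
    reviewed_on: date

    @property
    def risk_tier(self) -> str:
        # Simplified: a full assessment also checks prohibited practices
        # and limited-risk transparency cases.
        return "high" if self.use_category in HIGH_RISK_CATEGORIES else "minimal"

# Hypothetical example entry
record = AISystemRecord(
    name="CV screening assistant",
    vendor="ExampleVendor GmbH",
    role="deployer",
    use_category="employment",
    rationale="Ranks job applicants; falls under Annex III employment use.",
    reviewed_on=date(2025, 3, 1),
)
```

Keeping the written rationale inside the same record that drives the tier classification makes the audit trail and the classification decision inseparable, which is the point of the documentation requirement.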
Conformity assessment and documentation
High-risk AI systems require a conformity assessment before deployment. This includes technical documentation of the system’s design, training data, performance metrics, and risk controls. For systems using large language models, documentation must cover the base model’s training data and intended use constraints. Organizations must maintain logs sufficient to trace AI system decisions for the period required by applicable sector regulation.
Article 4 AI literacy program
Article 4, effective February 2025, requires organizations to ensure staff who deploy or use AI have sufficient AI literacy for their role. Compliance requires a structured training program mapped to job functions - not a generic awareness campaign. This overlaps directly with AI adoption programs and is verifiable during supervisory audits.
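A role-mapped training program can be tracked as a matrix of which roles use which systems, plus evidence of completion. The sketch below is a minimal Python illustration; the roles, system names, and dates are hypothetical.

```python
# Hypothetical role-to-system training matrix: which roles must be
# trained on which AI systems they interact with.
required_training = {
    "underwriter": ["risk_scoring_assistant"],
    "hr_specialist": ["cv_screening_tool", "shift_scheduler"],
}

# Evidence of completed, role-specific training: (role, system) -> completion date
completed = {
    ("underwriter", "risk_scoring_assistant"): "2025-02-10",
    ("hr_specialist", "cv_screening_tool"): "2025-02-12",
}

def training_gaps(required: dict, evidence: dict) -> list:
    """Return the (role, system) pairs still lacking documented training evidence."""
    return [(role, system)
            for role, systems in required.items()
            for system in systems
            if (role, system) not in evidence]
```

A list of outstanding (role, system) pairs, rather than a single completion percentage, is what an auditor would ask for: it shows exactly which evidence is missing.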
Important KPIs for EU AI Act
Measuring compliance progress requires a clear set of tracked indicators.
Documentation completeness
- AI system inventory: percentage of deployed AI systems with complete risk classification documentation
- Conformity assessment coverage: percentage of high-risk systems with completed assessments
- Vendor review completion: percentage of AI vendor contracts reviewed for Act obligations
- Training completion rate: percentage of relevant staff with completed Article 4 literacy training
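The four documentation KPIs above are all coverage percentages over the AI system inventory. A minimal Python sketch, with illustrative counts that are not from any real audit:

```python
def coverage(done: int, total: int) -> float:
    """Percentage KPI, guarding against an empty denominator."""
    return round(100 * done / total, 1) if total else 0.0

# Illustrative counts only
kpis = {
    "inventory_classified": coverage(22, 22),
    "conformity_assessed": coverage(3, 5),      # denominator: high-risk systems only
    "vendor_contracts_reviewed": coverage(6, 8),
    "article4_training": coverage(180, 200),
}
```

Note that the conformity-assessment KPI uses only high-risk systems as its denominator; mixing all systems into one denominator would understate the gap that actually matters.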
Ongoing monitoring metrics
For high-risk AI systems, the Act requires continuous performance monitoring and incident logging. Data governance infrastructure must support audit-ready logging of AI system inputs, outputs, and human override decisions. Gartner recommends maintaining 24-month logs for high-risk deployments in regulated sectors.
Risk exposure tracking
Legal teams should track the Act’s fine exposure as a compliance KPI: combining the applicable maximum penalty with the current count of non-compliant systems and company revenue yields a concrete risk number for board reporting. This framing converts regulatory compliance from a cost center argument into a risk quantification exercise.
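The exposure calculation can be sketched in a few lines of Python. The tier caps follow the Act's penalty provisions, where the applicable maximum is the higher of the fixed amount and the turnover percentage; the per-system multiplication is an illustrative worst-case framing for board reporting, not how fines are actually assessed.

```python
# Fine tiers: (fixed cap in EUR, share of global annual turnover).
# The applicable maximum is the higher of the two.
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_obligation": (15_000_000, 0.03),
}

def max_fine_eur(tier: str, global_annual_turnover_eur: float) -> float:
    fixed_cap, pct = FINE_TIERS[tier]
    return max(fixed_cap, pct * global_annual_turnover_eur)

def board_risk_number(noncompliant_high_risk_systems: int,
                      turnover_eur: float) -> float:
    """Illustrative worst-case exposure: non-compliant system count
    times the high-risk tier maximum. Not a legal assessment."""
    return noncompliant_high_risk_systems * max_fine_eur(
        "high_risk_obligation", turnover_eur)
```

For a company with €50M turnover, the fixed €15M cap dominates the 3% share, so each non-compliant high-risk system represents up to €15M of theoretical exposure in this framing.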
Risk factors and controls for EU AI Act
Misclassifying AI system risk tier
The most common error is classifying a high-risk AI system as limited or minimal risk to avoid compliance overhead. Regulators will audit against the Act’s Annex III list of high-risk applications, not against an organization’s internal classification. Systems that make decisions about employees - scheduling, performance review, task routing - may qualify as high-risk under employment use cases.
- Conduct classification reviews with legal counsel familiar with Annex III
- Document the reasoning for each classification in writing
- Reassess when AI system scope or use cases expand
General purpose AI (GPAI) obligations
Organizations deploying general purpose AI models - including large language models via API - face a distinct set of obligations depending on whether the model is classified as having systemic risk. GPAI providers must publish technical documentation and model cards; deployers must ensure downstream use stays within documented scope. Using a foundation model for undocumented high-risk tasks creates compliance exposure for the deployer.
Vendor dependency for compliance evidence
Many Mittelstand companies rely on third-party AI systems where the technical documentation is held by the vendor. If the vendor cannot provide the conformity documentation required by the Act, the deployer bears the compliance gap. Procurement processes must now include EU AI Act documentation requirements as a standard vendor evaluation criterion.
Practical example
A 300-person German insurance company used an AI system to assist underwriters in risk scoring for commercial property policies. Before the Act, the system had no formal documentation and no audit log. The compliance program began with a risk classification workshop that identified the system as high-risk under the Act’s financial services and insurance use case provisions. The team then ran a 90-day conformity assessment covering training data, decision logic, and override rates.
- Complete AI system inventory covering 14 internal tools and 8 vendor-supplied systems
- Risk classification decisions documented for all 22 systems with legal sign-off
- High-risk system audit log infrastructure deployed covering 24-month retention
- Article 4 literacy training completed by 100% of underwriting and claims staff within 60 days
Current developments and effects
Digital Omnibus and SME relief
In March 2025, the European Commission proposed the Digital Omnibus package, which includes targeted EU AI Act modifications for SMEs (companies with fewer than 250 employees). If adopted, these modifications would reduce documentation and conformity assessment obligations for smaller deployers. The proposal is under legislative review as of mid-2025 and has not yet been adopted - Mittelstand companies should plan for full compliance requirements until the relief measures are confirmed in law.
- Commission proposal: reduce conformity assessment scope for non-provider SMEs
- Targeted relief for companies using, but not developing, AI systems
- Decision expected in late 2025 or early 2026
Sector-specific guidance from national authorities
National AI supervisory authorities across the EU are publishing sector-specific implementation guidance that supplements the Act’s general requirements. Germany’s AI regulatory authority (expected under BNetzA oversight) will issue guidance for manufacturing and financial services by late 2025. Organizations should monitor their sector authority’s output alongside the Act’s text.
AI Act and workflow automation design
Article 14’s human oversight requirement is reshaping how AI agent and automation workflows are designed. Systems that previously routed decisions automatically must now include documented override points for human review in high-risk use cases. This is converging with Human-in-the-Loop design practices and is becoming a standard architectural requirement for compliant enterprise AI deployments.
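A documented override point of the kind described above can be expressed as a wrapper around the automated decision step. This is a minimal Human-in-the-Loop sketch in Python, not a reference implementation of Article 14; the decision and review functions are hypothetical placeholders for whatever the workflow actually does.

```python
from typing import Callable, Optional

def with_human_oversight(decide: Callable[[dict], str],
                         review: Callable[[dict, str], Optional[str]]
                         ) -> Callable[[dict], dict]:
    """Wrap an automated decision with a documented human override point."""
    def gated(case: dict) -> dict:
        proposed = decide(case)
        override = review(case, proposed)  # None means the reviewer accepts
        return {
            "proposed": proposed,
            "final": override if override is not None else proposed,
            "overridden": override is not None,
        }
    return gated

# Hypothetical usage: an auto-approver gated by a reviewer who rejects flagged cases
auto = lambda case: "approve"
reviewer = lambda case, proposed: "reject" if case.get("flagged") else None
gated = with_human_oversight(auto, reviewer)
```

Returning both the proposed and final decisions, rather than only the outcome, is what makes the override point auditable: the log shows what the system would have done and whether a human changed it.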
Conclusion
The EU AI Act is the defining regulatory framework for enterprise AI in Europe through the 2020s. For Mittelstand companies, the immediate priorities are a complete AI system inventory, risk classification with legal review, and Article 4 literacy training for all affected staff. Organizations that build compliance infrastructure now - documentation, audit logging, vendor review processes - create a reusable foundation that reduces the cost of each future AI deployment. Compliance with the Act and effective AI governance are not competing priorities: the organizations that handle both together will deploy AI faster, with less legal exposure, than those treating compliance as a barrier.
Frequently Asked Questions
When do EU AI Act requirements apply to Mittelstand companies?
The first binding deadline was February 2, 2025, when Article 4 AI literacy requirements took effect. High-risk system obligations - conformity assessments, technical documentation, human oversight - apply from August 2, 2026. General purpose AI model obligations apply from August 2, 2025. Companies should treat 2025 as the planning and documentation year and 2026 as the enforcement year.
Does the EU AI Act apply to my company if we only use AI, not build it?
Yes. The Act applies to both AI providers (developers) and deployers (companies that put AI systems into use within their business). As a deployer, you are responsible for ensuring AI systems you purchase or access via API are used within their documented scope, staff are trained, and high-risk systems have appropriate oversight mechanisms in place.
What makes an AI system high-risk under the Act?
Annex III of the Act lists eight high-risk application areas: biometric identification, critical infrastructure management, education and vocational training, employment and workforce management, access to essential private services and benefits, law enforcement, migration and border control, and administration of justice. AI systems used in any of these areas - including AI tools that assist with hiring, employee scheduling, or performance evaluation - are subject to high-risk requirements.
What is the Article 4 AI literacy requirement?
Article 4 requires organizations that deploy AI to ensure relevant staff have sufficient AI literacy for their role and the specific AI systems they interact with. This is not a one-time training event but an ongoing obligation. Compliance requires documenting which roles interact with which AI systems and maintaining evidence that each role has received appropriate, role-specific training.
How does the EU AI Act relate to GDPR compliance?
The two regulations apply in parallel and have overlapping data-related obligations. GDPR governs data processing; the EU AI Act governs AI system design, documentation, and oversight. An AI system that is GDPR-compliant may still be non-compliant under the Act if it lacks required conformity documentation, human oversight mechanisms, or transparency disclosures. Organizations need separate compliance programs for each regulation, though some infrastructure - data logs, vendor agreements, training records - serves both.
What happens if a company is found to be non-compliant?
Fines are tiered by violation type, with each cap set as the higher of a fixed amount and a share of global annual turnover. Violations of prohibited AI practices (unacceptable risk systems) carry fines up to €35 million or 7% of turnover. Violations of high-risk system obligations carry fines up to €15 million or 3% of turnover. Providing incorrect information to authorities carries fines up to €7.5 million or 1% of turnover. National supervisory authorities can also require AI systems to be withdrawn from use pending compliance remediation.