Classification Is the Foundation of EU AI Act Compliance
Before you can comply with the EU AI Act, you need to answer one question: what risk tier does your AI system fall into?
The classification determines every obligation that follows — documentation, conformity assessment, human oversight, and post-market monitoring.
The Four Risk Tiers
Tier 1: Unacceptable Risk (Prohibited)
Banned outright under Article 5: social scoring, manipulative AI targeting vulnerable groups, untargeted facial scraping, emotion recognition in workplaces/schools, biometric categorization by sensitive characteristics. Enforceable since February 2, 2025.
Tier 2: High Risk
AI in sensitive areas identified in Annex III (8 categories) and Annex I (regulated products). Triggers the full compliance suite. Annex III enforceable August 2, 2026.
Tier 3: Limited Risk
Transparency obligations only: chatbots, emotion recognition systems, and AI-generated content must disclose AI involvement to users.
Tier 4: Minimal Risk
No mandatory obligations. Spam filters, recommendation engines, search, productivity tools.
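As a mental model, the four tiers map onto four obligation levels. A minimal sketch in Python (the enum names and strings are our own informal labels, not terminology from the Act):

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative model of the EU AI Act's four risk tiers (labels are informal)."""
    UNACCEPTABLE = "banned outright under Article 5"
    HIGH = "full compliance suite (Annex I / Annex III)"
    LIMITED = "transparency obligations under Article 50"
    MINIMAL = "no mandatory obligations; voluntary codes of conduct"

# Obligations scale down the tiers:
for tier in RiskTier:
    print(f"{tier.name:12} -> {tier.value}")
```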
The Eight High-Risk Categories (Annex III)
1. Biometrics
Remote biometric identification (non-real-time), biometric categorization, emotion recognition outside prohibited contexts. Examples: Facial recognition for building access, voice authentication, age verification.
2. Critical Infrastructure
AI as safety components in digital infrastructure, road traffic, water/gas/heating/electricity. Examples: Power grid load balancing AI, traffic management, water treatment optimization.
3. Education and Vocational Training
AI determining access to education, evaluating learning outcomes, assessing education levels, monitoring student behavior. Examples: Admissions screening, automated grading, proctoring software.
4. Employment and Worker Management
AI in recruitment (ad targeting, CV screening, interviewing), employment decisions, performance monitoring. Examples: Resume screening AI, interview assessment tools, productivity monitoring.
5. Essential Services
AI evaluating creditworthiness, insurance risk/pricing, public benefits eligibility, emergency dispatch priority. Examples: Credit scoring models, insurance underwriting AI.
6. Law Enforcement
Individual risk assessment, deception detection, evidence reliability, crime prediction, facial recognition in investigations.
7. Migration, Asylum, Border Control
Border management (risk assessment), asylum/visa processing, identification of irregular migration.
8. Administration of Justice and Democratic Processes
AI assisting judicial authorities in researching and applying law, AI influencing elections or referendums.
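For screening a system against the list above, the eight areas can be kept as a small lookup table. A sketch, assuming you have already mapped your system to category numbers; the short names are our own abbreviations, not the legal headings of Annex III:

```python
# Annex III high-risk areas, keyed by the numbering used above.
# Names are informal abbreviations of the Annex III headings.
ANNEX_III = {
    1: "Biometrics",
    2: "Critical infrastructure",
    3: "Education and vocational training",
    4: "Employment and worker management",
    5: "Essential services",
    6: "Law enforcement",
    7: "Migration, asylum and border control",
    8: "Administration of justice and democratic processes",
}

def matched_areas(category_ids: set[int]) -> list[str]:
    """Return the Annex III areas a system touches; any single match makes it high-risk."""
    return [ANNEX_III[i] for i in sorted(category_ids) if i in ANNEX_III]

# A CV-screening tool falls squarely in area 4:
print(matched_areas({4}))  # ['Employment and worker management']
```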
Step-by-Step Classification Process
Step 1: Describe Your System Precisely
Document what your AI does, what decisions it makes, what data it uses, who it affects, and the context of operation. Vague descriptions lead to incorrect classifications.
Step 2: Check Prohibited Practices (Article 5)
Does your system engage in any banned practice? If yes, stop — it cannot legally operate.
Step 3: Check Annex III Categories
Map against each high-risk category. A system qualifies as high-risk if it falls within any one category.
Step 4: Check Article 6(3) Exception
Even within Annex III, a system may not be high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights. This exception is narrow, never applies where the system performs profiling of natural persons, and must be documented.
Step 5: Check Transparency Obligations
If not high-risk, does it require disclosure? Chatbots, emotion recognition, and AI-generated content trigger Article 50.
Step 6: Document Your Classification
Document your reasoning thoroughly. Regulators will want to see how you arrived at your classification, not just the result.
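The six steps above can be sketched as a decision procedure. Everything here is illustrative: the field names, the category numbers, and the tier labels are our own simplification, and a real classification always requires documented legal analysis rather than a boolean flag:

```python
from dataclasses import dataclass, field

@dataclass
class SystemProfile:
    """Step 1: a precise description of the system (fields are illustrative)."""
    description: str
    prohibited_practice: bool               # Step 2: any Article 5 practice?
    annex_iii_categories: set[int]          # Step 3: matched Annex III areas (1-8)
    significant_risk: bool = True           # Step 4: Article 6(3) exception if False
    transparency_triggers: set[str] = field(default_factory=set)  # Step 5

def classify(p: SystemProfile) -> str:
    """Steps 2-5 as a cascade; Step 6 (documenting the reasoning) is not automatable."""
    if p.prohibited_practice:
        return "prohibited"        # Step 2: cannot legally operate
    if p.annex_iii_categories and p.significant_risk:
        return "high-risk"         # Step 3/4: any one category suffices
    if p.transparency_triggers:
        return "limited-risk"      # Step 5: Article 50 disclosure
    return "minimal-risk"

# Usage: a CV-screening tool falls under Annex III area 4 (employment).
tool = SystemProfile("CV screening", prohibited_practice=False, annex_iii_categories={4})
print(classify(tool))  # high-risk
```

Note how the order of checks mirrors the steps: the Article 5 check short-circuits everything else, and the Article 6(3) exception only matters once an Annex III category has matched.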
Common Classification Pitfalls
- Misclassifying limited-risk as minimal: Chatbots without AI disclosure violate Article 50.
- Ignoring deployer obligations: Using a third-party high-risk AI means you have deployer obligations.
- Classifying by technology, not use case: The same model can be minimal-risk in one context, high-risk in another.
- Overlooking cross-category systems: A recruitment AI with performance monitoring spans multiple Annex III areas.
After Classification
- High-risk: Full compliance suite — Articles 9-17, conformity assessment, post-market monitoring
- Limited-risk: Transparency obligations (Article 50)
- Minimal-risk: Voluntary codes of conduct
Classification determines your roadmap. The sooner you classify, the sooner you know what work lies ahead.
Start your compliance interview with AI Comply Help — classify your AI systems and generate compliance documents in a single conversation.
AI Comply Help supports compliance operations and is not a substitute for legal advice.