
EU AI Act Compliance Checklist for 2026
The EU AI Act is no longer on the horizon. It is here, and the clock is running. With the Act's most consequential enforcement deadline arriving on August 2, 2026, every organisation that develops, deploys, or distributes AI systems within the European Union must have a clear compliance roadmap in place. Waiting is not a strategy. It is a liability.
This checklist distils the full text of Regulation (EU) 2024/1689 into a practical, step-by-step action plan. Whether you are a startup shipping a single machine-learning model or an enterprise with dozens of AI-powered products, this guide tells you exactly what to do, in what order, and by when.
Understanding the EU AI Act Risk Tiers
The EU AI Act establishes a risk-based regulatory framework. Every obligation you face, from documentation to conformity assessment, depends on where your AI system falls in the four-tier risk classification hierarchy.
Prohibited Practices (Unacceptable Risk)
Article 5 of the EU AI Act bans certain AI practices outright. These include:
- Social scoring that leads to detrimental or unfavourable treatment of individuals (the ban covers private actors as well as public authorities)
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)
- Subliminal manipulation techniques that materially distort behaviour and cause or are reasonably likely to cause significant harm
- Exploitation of vulnerabilities of specific groups (age, disability, social or economic situation)
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
- Emotion recognition in workplaces and educational institutions (with limited exceptions)
- Biometric categorisation systems that categorise individuals based on sensitive attributes such as race, political opinions, or sexual orientation
- Predictive policing based solely on profiling or personality traits
If your AI system falls into any of these categories, there is no path to compliance. The prohibition deadline of February 2, 2025 has already passed, so any such system must be discontinued immediately.
High-Risk AI Systems
Article 6 and Annex III define AI systems that pose significant risks to health, safety, or fundamental rights. These include AI used in:
- Critical infrastructure (energy, transport, water supply, digital infrastructure)
- Education and vocational training (determining access, assessing students, proctoring)
- Employment and worker management (recruitment, task allocation, performance monitoring, termination decisions)
- Essential services (credit scoring, insurance pricing, emergency service dispatch)
- Law enforcement (evidence evaluation, recidivism prediction, profiling)
- Migration and border control (risk assessment, document authentication)
- Justice and democratic processes (legal research tools that influence judicial decisions)
- Biometric identification and categorisation (remote biometric systems)
High-risk systems carry the heaviest compliance burden: risk management, data governance, technical documentation, human oversight, accuracy requirements, and mandatory registration in the EU database.
Limited Risk (Transparency Obligations)
AI systems that interact directly with people but do not qualify as high-risk still carry transparency requirements under Article 50. These include:
- Chatbots and virtual assistants: users must be informed they are interacting with an AI
- Deepfakes and AI-generated content: must be clearly labelled as artificially generated or manipulated
- Emotion recognition systems: individuals must be informed when such a system is being applied to them
- Biometric categorisation: individuals must be notified of the system's operation
Minimal Risk
AI systems that do not fall into any of the above categories (such as spam filters, AI-powered video games, or inventory management tools) face no mandatory obligations under the Act, though voluntary codes of conduct are encouraged.
Your Complete Compliance Checklist
The following twelve steps form a comprehensive compliance programme. Work through them sequentially; each step builds on the previous one.
1. Inventory All AI Systems
You cannot manage what you have not mapped. Conduct a thorough audit of every AI system your organisation develops, deploys, imports, or distributes. This includes:
- Internally developed models and algorithms
- Third-party AI components embedded in your products
- AI-powered SaaS tools used by your employees
- Automated decision-making systems, including ostensibly rule-based ones that embed machine-learning components
For each system, record the business owner, the technical team responsible, the data sources it uses, and the populations it affects. This inventory becomes the foundation of every compliance activity that follows.
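A lightweight way to keep this inventory honest is to give every system a typed record from day one. The sketch below uses Python dataclasses; the field names mirror the list above and are illustrative rather than mandated by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One inventory entry per AI system; all field names are illustrative."""
    name: str                        # e.g. "cv-screening-model-v3"
    business_owner: str              # accountable business contact
    technical_team: str              # team that builds or operates the system
    origin: str                      # "in-house", "third-party", or "saas"
    data_sources: list[str] = field(default_factory=list)
    affected_populations: list[str] = field(default_factory=list)
    risk_tier: str = "unclassified"  # filled in during step 2

inventory = [
    AISystemRecord(
        name="recruitment-ranker",
        business_owner="Head of Talent",
        technical_team="ml-platform",
        origin="in-house",
        data_sources=["applicant CVs", "interview scores"],
        affected_populations=["job applicants"],
    ),
]
```

Keeping this record in version control gives you, for free, an audit trail of when each system entered the inventory and how its classification evolved.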
2. Classify the Risk Tier for Each System
With your inventory complete, assess each AI system against the risk tiers described above. This is the single most consequential compliance decision you will make: it determines every obligation that follows.
Start with Article 5 prohibitions. If a system does not fall under a prohibition, evaluate it against the Annex III high-risk categories and the conditions in Article 6(2). Pay particular attention to AI systems that serve as safety components of products already covered by EU harmonised legislation (Article 6(1)).
If a system is clearly minimal risk, document that determination and the reasoning behind it. Regulators will want to see that you conducted the analysis, not just that you reached a conclusion.
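The ordering matters: prohibitions are checked before anything else. Here is a deliberately simplified sketch of that decision order; the category sets are abbreviated stand-ins for the full Article 5 and Annex III lists, and no code replaces legal review of a borderline system.

```python
# Abbreviated stand-ins for the full legal category lists -- not exhaustive.
PROHIBITED = {"social_scoring", "subliminal_manipulation", "untargeted_face_scraping"}
HIGH_RISK = {"employment", "credit_scoring", "education", "critical_infrastructure"}
TRANSPARENCY_ONLY = {"chatbot", "deepfake_generation", "emotion_recognition"}

def classify(use_cases: set[str]) -> str:
    """Apply the tiers in the order the Act requires: prohibitions first."""
    if use_cases & PROHIBITED:
        return "prohibited"
    if use_cases & HIGH_RISK:
        return "high-risk"       # still review the Article 6(2) carve-out conditions
    if use_cases & TRANSPARENCY_ONLY:
        return "limited-risk"
    return "minimal-risk"        # document the reasoning, not just the conclusion

print(classify({"employment", "chatbot"}))  # -> "high-risk"
```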
3. Check for Prohibited Practices
This step demands zero ambiguity. Review each AI system against the eight categories of prohibited practices in Article 5. If any system touches social scoring, subliminal manipulation, exploitation of vulnerable groups, real-time remote biometric identification, untargeted biometric scraping, emotion recognition in restricted contexts, biometric categorisation on sensitive attributes, or predictive policing, it must be decommissioned or fundamentally redesigned.
Document your analysis for each system. If a system is close to a prohibition boundary, seek legal counsel before proceeding.
4. Document Intended Purpose and Technical Specifications
Article 11 and Annex IV require comprehensive technical documentation of every high-risk AI system, and Article 13 requires that deployers receive clear instructions for use. At minimum, document:
- The intended purpose: what the system is designed to do, and just as importantly, what it is not designed to do
- Technical architecture: model type, training methodology, key hyperparameters
- Data specifications: training data sources, data preparation methods, data quality measures
- Performance metrics: accuracy, precision, recall, fairness metrics, and the conditions under which they were measured
- Known limitations: scenarios where the system underperforms, edge cases, known biases
This documentation is not optional and must be maintained throughout the system's lifecycle.
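In practice, this maps naturally onto a structured "model card" kept in version control next to the system. A minimal sketch follows, with every value an illustrative placeholder; this is not an official Annex IV template.

```python
# All values are illustrative placeholders, not an official template.
model_card = {
    "intended_purpose": "Rank job applications for recruiter review",
    "out_of_scope": ["fully automated rejection without human review"],
    "architecture": {
        "model_type": "gradient-boosted trees",
        "training": "supervised, retrained quarterly",
    },
    "data": {
        "sources": ["ATS records 2019-2024"],
        "preparation": "deduplication, PII removal, label audit",
    },
    "metrics": {
        "accuracy": 0.87,
        "recall": 0.81,
        "fairness": "demographic parity gap 0.03",
        "measured_on": "held-out 2024 applicant cohort",
    },
    "known_limitations": ["underperforms on non-EU CV formats"],
}
```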
5. Implement a Risk Management System (High-Risk)
Article 9 mandates a continuous, iterative risk management process for high-risk AI systems. This system must:
- Identify and analyse known and reasonably foreseeable risks
- Estimate and evaluate risks that may emerge when the system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse
- Adopt risk mitigation measures: design choices, testing protocols, operational constraints
- Test the system to ensure measures are effective, including under real-world conditions where feasible
The risk management system is not a one-time exercise. It must be updated throughout the AI system's entire lifecycle, from development through deployment and operation.
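One way to make that loop concrete is a risk register that is re-scored on every release and incident. A minimal sketch, using an illustrative 1-to-5 scoring scale:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    severity: int                      # 1 (negligible) to 5 (critical), illustrative scale
    likelihood: int                    # 1 (rare) to 5 (frequent)
    mitigation: str
    residual_acceptable: bool = False  # set True once mitigation is tested and signed off

def open_risks(register: list[Risk], threshold: int = 15) -> list[Risk]:
    """One pass of the Article 9 loop: flag risks whose residual score is still too high.
    Rerun on every release, retraining, and reported incident."""
    return [r for r in register
            if r.severity * r.likelihood >= threshold and not r.residual_acceptable]
```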
6. Ensure Data Governance and Quality
Article 10 sets rigorous requirements for the data used to train, validate, and test high-risk AI systems. You must:
- Implement data governance practices covering data collection, annotation, storage, and preprocessing
- Ensure training, validation, and testing datasets are relevant, sufficiently representative, and as free of errors as possible
- Account for the specific geographic, contextual, behavioural, or functional setting in which the system will be used
- Address potential biases that may affect the health and safety of persons or lead to discrimination
Data governance is particularly critical because data quality issues compound through the AI pipeline. Poor training data leads to unreliable systems, which leads to non-compliance.
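Many of these checks can be automated as gates in the training pipeline. A minimal sketch, assuming the training set arrives as a pandas DataFrame; the thresholds are illustrative and should come from your own risk analysis:

```python
import pandas as pd

def data_quality_gates(df: pd.DataFrame, group_col: str, min_share: float = 0.05) -> list[str]:
    """Return the list of failed checks; an empty list means the gates passed.
    All thresholds are illustrative placeholders."""
    failures = []
    # Proxy for "as free of errors as possible": no duplicates, bounded missingness.
    if df.duplicated().any():
        failures.append("duplicate rows present")
    if (df.isna().mean() > 0.02).any():
        failures.append("at least one column exceeds 2% missing values")
    # Proxy for representativeness: no demographic group below a floor share.
    if (df[group_col].value_counts(normalize=True) < min_share).any():
        failures.append(f"under-represented groups in '{group_col}'")
    return failures
```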
7. Maintain Technical Documentation
Annex IV prescribes what technical documentation for high-risk AI systems must contain. This includes:
- A general description of the AI system
- Detailed description of elements and the development process
- Information about monitoring, functioning, and control
- A description of the risk management system
- A description of changes made throughout the lifecycle
- Performance metrics and testing results
- Detailed description of the system's accuracy, robustness, and cybersecurity measures
This documentation must be drawn up before the AI system is placed on the market or put into service, and kept up to date throughout the system's lifetime.
8. Set Up Human Oversight Mechanisms
Article 14 requires high-risk AI systems to be designed so that they can be effectively overseen by natural persons. This means:
- The system must have a human-machine interface that enables oversight
- Overseers must be able to fully understand the system's capabilities and limitations
- Overseers must be able to correctly interpret the system's outputs
- Overseers must be able to decide not to use the system, override its output, or reverse its decisions
- Overseers must be able to intervene or interrupt the system with a "stop" button or similar procedure
The level of human oversight required is proportionate to the risks posed by the specific system and its operating context.
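Architecturally, this usually means the model proposes and a human disposes. A minimal sketch of such an oversight gate, with Article 14's "stop" requirement reduced to a flag for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    model_output: str
    confidence: float
    final_output: str = ""
    overridden: bool = False

class OversightGate:
    """Routes every model output through a human reviewer; an illustrative sketch."""

    def __init__(self, review_fn: Callable[[Decision], Decision]):
        self.review_fn = review_fn   # the human review step: accept, amend, or reject
        self.stopped = False         # Article 14's "stop button", reduced to a flag

    def decide(self, model_output: str, confidence: float) -> Decision:
        if self.stopped:
            raise RuntimeError("system halted by a human overseer")
        return self.review_fn(Decision(model_output, confidence))

    def stop(self) -> None:
        self.stopped = True          # takes effect before the next decision
```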
9. Ensure Accuracy, Robustness, and Cybersecurity
Article 15 requires high-risk AI systems to achieve an appropriate level of accuracy, robustness, and cybersecurity. Specifically:
- Accuracy levels must be declared and communicated to deployers
- Systems must be resilient to errors, faults, and inconsistencies that may occur in their operating environment
- Cybersecurity measures must protect against attempts to alter system behaviour, exploit vulnerabilities, or manipulate training data (data poisoning, adversarial attacks, model inversion)
This is not just a technical requirement. You must be able to demonstrate and document these properties to regulators.
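Robustness claims become demonstrable when they are encoded as tests. A minimal sketch of a perturbation check, assuming a classifier exposed as a `predict` callable; the noise scale and tolerated accuracy drop are illustrative:

```python
import numpy as np

def perturbation_check(predict, X: np.ndarray, y: np.ndarray,
                       noise_scale: float = 0.01, max_drop: float = 0.05) -> bool:
    """Assert that accuracy under small Gaussian input noise stays near clean accuracy.
    Record both figures in the technical documentation."""
    rng = np.random.default_rng(0)   # fixed seed so the evidence is reproducible
    clean_acc = (predict(X) == y).mean()
    noisy_acc = (predict(X + rng.normal(0.0, noise_scale, X.shape)) == y).mean()
    return (clean_acc - noisy_acc) <= max_drop
```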
10. Register in the EU Database (High-Risk)
Article 49 requires providers of high-risk AI systems to register those systems in the EU database (established under Article 71) before placing them on the market or putting them into service. The registration must include:
- The provider's name and contact details
- A description of the system's intended purpose
- The system's risk classification
- The conformity assessment procedure followed
- The Member States where the system is placed on the market
The EU database is publicly accessible, so your registration will be visible to regulators, auditors, and the public.
11. Conduct Conformity Assessment
Before placing a high-risk AI system on the market, providers must conduct a conformity assessment (Article 43). Depending on the system category:
- Most high-risk systems can use internal conformity assessment based on Annex VI procedures
- Certain biometric systems require third-party conformity assessment by a notified body
- Systems already covered by existing EU harmonised legislation may follow the conformity assessment procedures of that legislation
The conformity assessment must demonstrate compliance with all applicable requirements in Chapter III, Section 2 of the Act.
12. Implement Transparency Requirements
Even if your system is not high-risk, Article 50 imposes transparency obligations:
- AI interaction disclosure: Inform users when they are interacting with an AI system (chatbots, virtual assistants)
- Content labelling: AI-generated text, images, audio, or video must be labelled as artificially generated in a machine-readable format
- Deepfake disclosure: Synthetic content depicting real persons or events must be clearly disclosed
- Emotion recognition notification: Inform individuals when emotion recognition is applied to them
These transparency requirements apply regardless of risk classification and take effect alongside the Act's general application date of August 2, 2026.
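Two of these obligations translate directly into code. The sketch below shows an interaction disclosure attached to a chatbot reply and a minimal machine-readable provenance tag; the field names are illustrative, and production systems should adopt an established provenance standard such as C2PA rather than an ad-hoc format.

```python
from datetime import datetime, timezone

def wrap_chat_response(text: str) -> dict:
    """Attach the Article 50 interaction disclosure to a chatbot reply."""
    return {
        "message": text,
        "disclosure": "You are interacting with an AI system.",  # surfaced in the UI
    }

def label_generated_content(content_id: str, model_id: str) -> dict:
    """Minimal machine-readable label for AI-generated content; fields illustrative."""
    return {
        "content_id": content_id,
        "provenance": {
            "ai_generated": True,
            "generator": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
```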
Key Deadlines You Cannot Miss
The EU AI Act entered into force on August 1, 2024, but obligations phase in over a staggered timeline:
| Deadline | Obligation |
|---|---|
| February 2, 2025 | Prohibited AI practices must cease |
| August 2, 2025 | Obligations for general-purpose AI (GPAI) models take effect; governance structures must be in place |
| August 2, 2026 | Full obligations for high-risk AI systems; penalties for non-compliance enforceable |
| August 2, 2027 | Obligations for high-risk AI systems that are safety components of products under existing EU legislation |
The August 2, 2026 deadline is the critical milestone for most organisations. By this date, all high-risk AI system requirements (risk management, data governance, technical documentation, human oversight, accuracy, robustness, cybersecurity, EU database registration, and conformity assessment) must be fully implemented.
Penalties for Non-Compliance
The EU AI Act enforces a tiered penalty structure that scales with the severity of the violation:
- Prohibited practices: Fines up to 35 million EUR or 7% of annual worldwide turnover, whichever is higher
- High-risk system violations: Fines up to 15 million EUR or 3% of annual worldwide turnover, whichever is higher
- Incorrect information to authorities: Fines up to 7.5 million EUR or 1% of annual worldwide turnover, whichever is higher
For SMEs and startups, each fine is capped at the lower of the two amounts rather than the higher, but even the reduced ceilings are substantial enough to threaten business viability.
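The "whichever is higher" rule means the percentage, not the fixed cap, dominates for large undertakings. A quick illustration in code, with the turnover figure invented for the example:

```python
def max_fine(cap_eur: int, pct: float, turnover_eur: int) -> float:
    """'Whichever is higher' for large undertakings; for SMEs the lower amount applies."""
    return max(cap_eur, pct * turnover_eur)

# An undertaking with 2 billion EUR worldwide annual turnover (invented figure):
print(max_fine(35_000_000, 0.07, 2_000_000_000))   # prohibited practice -> 140,000,000.0
print(max_fine(15_000_000, 0.03, 2_000_000_000))   # high-risk violation -> 60,000,000.0
```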
Beyond financial penalties, non-compliance carries reputational risk, potential market access restrictions, and the possibility of injunctive relief requiring you to cease using non-compliant AI systems entirely.
Assess Your Compliance in Minutes
Reading a checklist is the first step. Acting on it is what matters.
AI Comply HQ automates this entire compliance workflow. Our guided interview walks you through every checklist item above (risk classification, prohibited practice screening, documentation requirements, transparency obligations) and generates an audit-ready compliance report at the end.
Here is what the process looks like:
- Start a compliance interview. Answer plain-language questions about your AI system (no legal expertise required)
- Get your risk classification. Our system maps your answers to the EU AI Act risk tiers automatically
- Receive your compliance report. A structured document covering every applicable requirement, with specific action items for any gaps
- Track your progress. Monitor compliance status across all your AI systems from a single dashboard
The entire process takes under 20 minutes for a single AI system.
Start your free 7-day trial and complete your first compliance assessment in under 20 minutes.
The August 2, 2026 deadline is less than five months away. Every day you delay is a day closer to enforcement without a plan. Start now.