EU AI Act Enforcement Dates and Deadlines: Complete Timeline

AI Comply HQ Team | 16 min read

The EU AI Act Timeline Is Already Underway

The EU AI Act is not a future regulation. It entered into force on August 1, 2024, and its provisions are rolling out in phases through 2027. Some obligations are already enforceable. Others take effect in the coming months. The most consequential deadline, full compliance for high-risk AI systems, arrives on August 2, 2026.

This phased enforcement structure means that different obligations apply at different times, and the deadlines vary depending on the type of AI system, the risk classification, and your role in the AI value chain. Missing a deadline does not just create legal exposure. It can result in administrative fines of up to 35 million EUR or 7% of worldwide annual turnover for the most serious violations.

This guide provides the complete enforcement timeline with every major date, explains what each deadline means in practice, and offers concrete preparation advice for each milestone.

The Foundation: Entry Into Force and the AI Office

August 1, 2024 | Entry Into Force

The EU AI Act (Regulation (EU) 2024/1689) was published in the Official Journal of the European Union on July 12, 2024, and entered into force 20 days later on August 1, 2024. This date started the clock on all subsequent enforcement deadlines.

Entry into force does not mean immediate enforceability. The regulation uses a staggered application model, with different chapters becoming applicable at different intervals after entry into force.

February 2, 2025 | AI Office Establishment Deadline

The European Commission was required to establish the AI Office by this date. The AI Office serves as the central EU-level body for AI Act implementation and enforcement. Its responsibilities include:

  • Developing guidelines and implementing acts for the regulation
  • Coordinating enforcement actions across Member States
  • Managing the EU database of high-risk AI systems
  • Overseeing compliance of general-purpose AI (GPAI) models
  • Facilitating the development of codes of practice

The AI Office was formally established and became operational in early 2024, ahead of this deadline, as part of the European Commission's Directorate-General for Communications Networks, Content and Technology (DG CNECT).

Phase 1: Prohibited Practices (Already Enforceable)

February 2, 2025 | Banned AI Practices Take Effect

This was the first major enforcement milestone. Since February 2, 2025, the following AI practices have been prohibited under Article 5 of the AI Act:

  • Subliminal manipulation: AI systems that deploy subliminal techniques beyond a person's consciousness to materially distort behaviour in a manner likely to cause significant harm
  • Exploitation of vulnerabilities: AI systems that exploit vulnerabilities of specific groups of persons due to their age, disability, or social or economic situation
  • Social scoring: AI systems that evaluate or classify natural persons or groups based on their social behaviour or known, inferred, or predicted personal characteristics, where the resulting score leads to detrimental or unfavourable treatment that is unrelated to the context of the original data or is disproportionate
  • Real-time remote biometric identification in publicly accessible spaces by law enforcement, except in narrowly defined circumstances
  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
  • Emotion recognition in workplaces and educational institutions, except for medical or safety reasons
  • Biometric categorisation to infer sensitive attributes (race, political opinions, trade union membership, religious beliefs, sex life or sexual orientation), except for certain law enforcement purposes

What this means today: If your organisation uses any AI system that falls within these prohibited categories, you are already in violation. The penalties for deploying prohibited AI practices are the most severe under the AI Act: fines of up to 35 million EUR or 7% of worldwide annual turnover, whichever is higher.

For a detailed analysis of each prohibited practice and how to audit your AI systems against them, see our guide on EU AI Act Prohibited Practices.

Preparation Status Check

  • Have you audited all your AI systems against the prohibited practices list?
  • Have you removed or decommissioned any systems that fall within the prohibitions?
  • Have you documented your assessment and conclusions?

Phase 2: AI Literacy and Governance Obligations

February 2, 2025 | AI Literacy Obligation (Article 4)

Also enforceable since February 2, 2025, Article 4 requires that providers and deployers of AI systems ensure that their staff and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy. This obligation applies to all AI systems, not just high-risk ones.

AI literacy, as defined in the regulation, means the skills, knowledge, and understanding that allow providers, deployers, and affected persons to make an informed deployment of AI systems, taking into account their rights and obligations, the intended purpose of the AI system, and the potential risks and benefits.

What this means today: You should already have AI literacy training programs in place for all personnel who work with AI systems. This includes not only technical staff but also business users, procurement teams, and management.

Phase 3: General-Purpose AI Obligations

August 2, 2025 | GPAI Model Obligations Take Effect

The next major milestone arrives on August 2, 2025, when the obligations for general-purpose AI (GPAI) models under Chapter V of the AI Act become enforceable.

GPAI models are AI models trained on large amounts of data using self-supervision at scale, displaying significant generality and capable of competently performing a wide range of distinct tasks. Large language models (LLMs) such as GPT-4, Claude, Gemini, and Llama are the primary examples.

Obligations for All GPAI Model Providers (Article 53)

From August 2, 2025, all GPAI model providers must do the following (a record-keeping sketch follows this list):

  • Prepare and keep up-to-date technical documentation of the model, including its training and testing processes, and the results of the model's evaluation
  • Provide information and documentation to downstream providers who integrate the GPAI model into their AI systems
  • Establish a policy to respect EU copyright law, including the text and data mining opt-out provisions of the Copyright Directive
  • Publish a sufficiently detailed summary about the content used for training the GPAI model, using a template provided by the AI Office
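
As a concrete illustration, here is a minimal sketch of how a provider might track the four Article 53 items internally as a machine-readable record. The schema and field names are our own illustration, not an official format; in particular, the public training-content summary itself must follow the AI Office's template, not a schema of your choosing.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical internal record for the Article 53 obligations. Field names
# are illustrative only; the official training-content summary must follow
# the AI Office template.
@dataclass
class GPAIModelRecord:
    model_name: str
    technical_documentation: str   # training/testing docs (Art. 53(1)(a))
    downstream_info_pack: str      # docs shared with integrators (Art. 53(1)(b))
    copyright_policy: str          # EU copyright / TDM opt-out policy (Art. 53(1)(c))
    training_content_summary: str  # public training-content summary (Art. 53(1)(d))

record = GPAIModelRecord(
    model_name="example-llm-7b",
    technical_documentation="docs/technical/example-llm-7b.md",
    downstream_info_pack="docs/downstream/example-llm-7b.pdf",
    copyright_policy="https://example.com/copyright-policy",
    training_content_summary="https://example.com/training-summary",
)
print(json.dumps(asdict(record), indent=2))
```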

Additional Obligations for Systemic Risk GPAI Models (Article 55)

GPAI models classified as posing systemic risk, meaning those trained with cumulative computing power exceeding 10^25 floating point operations (FLOPs) or designated by the Commission on the basis of other criteria, face additional obligations (a compute-estimation sketch follows this list):

  • Performing model evaluations, including adversarial testing, to identify and mitigate systemic risks
  • Assessing and mitigating possible systemic risks at the EU level
  • Tracking, documenting, and reporting serious incidents and possible corrective measures to the AI Office and relevant national competent authorities
  • Ensuring an adequate level of cybersecurity protection for the model and its physical infrastructure
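
To get a rough feel for the 10^25 FLOPs threshold above, a widely used back-of-the-envelope estimate puts dense-transformer training compute at roughly 6 × parameters × training tokens. The sketch below applies that approximation; it is an illustration only, not the Commission's measurement methodology.

```python
# Back-of-the-envelope training-compute estimate (C ~ 6 * N * D for dense
# transformers). A common approximation, not the official methodology for
# the AI Act's systemic-risk threshold.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    return 6.0 * n_parameters * n_training_tokens

# Example: a 70B-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated compute: {flops:.2e} FLOPs")
print("Above threshold" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS else "Below threshold")
```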

Codes of Practice Timeline

The AI Act envisions codes of practice as a mechanism for GPAI model providers to demonstrate compliance. The AI Office was tasked with encouraging and facilitating the drawing up of codes of practice, which the regulation required to be ready by May 2, 2025, three months before the GPAI obligations become enforceable on August 2, 2025. Providers that adhere to an approved code of practice can rely on it to demonstrate compliance until harmonised standards are established.

What to do now: If your organisation provides or uses GPAI models, you should be actively preparing for the August 2, 2025 deadline. Review the technical documentation requirements, assess whether your models might be classified as posing systemic risk, and monitor the development of codes of practice through the AI Office.

Phase 4: The Big Deadline for High-Risk AI Obligations

August 2, 2026 | Full High-Risk AI Obligations Take Effect

This is the deadline that will affect the largest number of organisations. On August 2, 2026, the full set of obligations for high-risk AI systems listed in Annex III becomes enforceable. These obligations apply to both providers and deployers.

Annex III high-risk AI systems include AI used in:

  1. Biometric identification and categorisation of natural persons
  2. Management and operation of critical infrastructure (including road traffic, water, gas, heating, and electricity supply)
  3. Education and vocational training (determining access, evaluating learning outcomes, assessing appropriate levels of education)
  4. Employment, workers management, and access to self-employment (recruitment, CV screening, performance evaluation, promotion and termination decisions)
  5. Access to and enjoyment of essential private and public services (creditworthiness assessment, emergency services dispatching, health and life insurance risk assessment)
  6. Law enforcement (individual risk assessment, polygraphs, evidence evaluation, crime prediction)
  7. Migration, asylum, and border control management (polygraphs, risk assessment, document authenticity verification)
  8. Administration of justice and democratic processes (AI systems for applying the law to concrete facts, influencing the outcome of elections)

What Becomes Enforceable

From August 2, 2026, providers and deployers of Annex III high-risk AI systems must comply with:

  • Risk management systems (Article 9): identifying and mitigating risks throughout the AI system lifecycle
  • Data governance (Article 10): ensuring training data quality, relevance, representativeness, and freedom from bias
  • Technical documentation (Article 11 and Annex IV): comprehensive documentation of the system's design, development, testing, and performance
  • Record-keeping and logging (Article 12): automatic logging of events during the AI system's operation (a logging sketch follows this list)
  • Transparency and instructions for use (Article 13): providing deployers with information necessary for compliant use
  • Human oversight (Article 14): designing systems for effective human oversight, including override capabilities
  • Accuracy, robustness, and cybersecurity (Article 15): ensuring consistent performance and resilience
  • Quality management systems (Article 17): systematic compliance governance
  • Conformity assessment (Article 43): verifying compliance before market placement
  • EU declaration of conformity (Article 47): formal declaration of compliance
  • CE marking (Article 48): affixing the CE marking to compliant systems
  • Registration in the EU database (Article 49): registering high-risk AI systems before market placement
  • Post-market monitoring (Article 72): systematic ongoing performance monitoring
  • Serious incident reporting (Article 73): reporting to national competent authorities
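
To make the Article 12 logging obligation concrete, here is a minimal sketch of structured event logging around a high-risk system's operation. Article 12 does not mandate a fixed schema, so the event fields below are illustrative assumptions, not a required format.

```python
import datetime
import json
import logging
import uuid

# Minimal structured event log for a high-risk AI system. Article 12 is
# technology-neutral; these fields are illustrative, not a mandated schema.
logger = logging.getLogger("ai_system_audit")
logging.basicConfig(level=logging.INFO)

def log_inference_event(system_id: str, input_ref: str, output_ref: str,
                        operator: str, overridden: bool = False) -> None:
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,      # a reference, not raw personal data
        "output_ref": output_ref,
        "human_operator": operator,  # supports Article 14 oversight review
        "human_override": overridden,
    }
    logger.info(json.dumps(event))

log_inference_event("cv-screening-v2", "s3://inputs/123", "s3://outputs/123",
                    operator="hr.reviewer@example.com")
```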

For deployers specifically, the obligations include:

  • Appropriate use and oversight (Article 26): using systems according to instructions, assigning competent human oversight
  • Fundamental rights impact assessment (Article 27): for public bodies and certain private deployers
  • Input data relevance (Article 26(4)): ensuring input data is relevant to the intended purpose
  • Monitoring and incident reporting (Article 26(5)): monitoring system operation and informing providers and authorities of risks

For the full breakdown of what providers and deployers must do, use our EU AI Act Compliance Checklist.

National Competent Authorities Deadline

Member States were required to designate their national competent authorities and notify the Commission by August 2, 2025. These authorities are responsible for enforcing the AI Act at the national level. Each Member State must also designate or establish at least one notifying authority and at least one market surveillance authority.

The readiness of national competent authorities varies across Member States. Some countries, notably France, Germany, the Netherlands, and Spain, have been proactive in establishing their AI regulatory structures. Others are still in the process of designating authorities and building enforcement capacity.

Phase 5: Extended Deadline for Certain High-Risk Systems

August 2, 2027 | Annex I High-Risk Systems (Products Under EU Harmonisation Legislation)

The final major enforcement deadline is August 2, 2027, when the obligations take effect for high-risk AI systems that are safety components of products, or are themselves products, covered by the EU harmonisation legislation listed in Annex I. This includes AI systems subject to:

  • Machinery Regulation (2023/1230)
  • Toy Safety Directive (2009/48/EC)
  • Recreational Craft Directive (2013/53/EU)
  • Lifts Directive (2014/33/EU)
  • Radio Equipment Directive (2014/53/EU)
  • Pressure Equipment Directive (2014/68/EU)
  • Cableway Installations Regulation (2016/424)
  • Personal Protective Equipment Regulation (2016/425)
  • Appliances Burning Gaseous Fuels Regulation (2016/426)

Note that the Medical Devices Regulation (MDR) and In Vitro Diagnostic Medical Devices Regulation (IVDR) are also listed in Annex I. AI systems that are medical devices therefore follow the August 2, 2027 timeline for their Annex I obligations, unless the same system also falls within an Annex III category (for example, emergency healthcare patient triage), in which case the earlier August 2, 2026 deadline applies to those obligations.

What this means: If your AI system is a safety component of a product regulated under any of the legislation listed in Annex I, you have until August 2, 2027 to achieve full compliance. However, starting preparation now is strongly advised, as the conformity assessment process for these products often involves third-party notified bodies with lengthy review timelines.

Complete Timeline Summary

Date | Milestone | Key Obligations
July 12, 2024 | Publication in the Official Journal | N/A
August 1, 2024 | Entry into force | All subsequent deadlines begin counting
February 2, 2025 | Prohibited practices enforceable | Article 5 bans apply; AI literacy obligation (Article 4)
August 2, 2025 | GPAI obligations apply | Technical documentation, copyright compliance, training data summaries; national competent authority designation deadline
August 2, 2026 | High-risk AI (Annex III) obligations apply | Full provider and deployer obligations for all Annex III high-risk systems
August 2, 2027 | High-risk AI (Annex I products) obligations apply | Full obligations for AI in products under EU harmonisation legislation
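
A simple way to keep these dates visible to your compliance team is to compute the time remaining programmatically. A minimal sketch using the milestones above:

```python
from datetime import date

# Key EU AI Act application dates, taken from the timeline above.
MILESTONES = {
    date(2025, 2, 2): "Prohibited practices and AI literacy (Articles 4-5)",
    date(2025, 8, 2): "GPAI obligations; national authority designation",
    date(2026, 8, 2): "High-risk AI obligations (Annex III)",
    date(2027, 8, 2): "High-risk AI in Annex I products",
}

today = date.today()
for deadline, label in sorted(MILESTONES.items()):
    delta = (deadline - today).days
    status = f"{delta} days remaining" if delta > 0 else f"in force for {-delta} days"
    print(f"{deadline.isoformat()}  {label}: {status}")
```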

Preparation Advice for Each Deadline

For February 2, 2025 (Already Past: Verify Compliance)

If you have not already taken these steps, do so immediately:

  1. Audit against prohibited practices. Review every AI system in your portfolio against the Article 5 prohibitions. Document your findings. If any system falls within a prohibition, decommission it or modify it to fall outside the prohibition's scope.
  2. Implement AI literacy training. Ensure all relevant staff have completed AI literacy training. Document the training, its content, and attendance records.
  3. Review and remediate. If you discover gaps, address them immediately. The penalties for prohibited practices are already enforceable.

For August 2, 2025 (Immediate Priority)

  1. GPAI model providers: Prepare technical documentation, establish copyright compliance policies, and draft training data summaries
  2. Downstream integrators: Identify all GPAI models used in your AI systems and ensure your providers are prepared to supply the required documentation
  3. Monitor codes of practice: Track the AI Office's development of codes of practice and assess whether adherence would benefit your compliance posture

For August 2, 2026 (Primary Planning Horizon)

  1. Classify all AI systems by risk level. Use the Annex III categories to determine which systems are high-risk.
  2. Determine your role. Map your organisation to provider, deployer, importer, or distributor for each AI system.
  3. Conduct gap analysis. Compare your current compliance posture against every requirement in Articles 8-27.
  4. Build or extend your quality management system. Implement systematic processes for compliance governance.
  5. Prepare technical documentation. Complete Annex IV documentation for all high-risk systems.
  6. Implement human oversight protocols. Design and document oversight mechanisms for each high-risk system.
  7. Conduct conformity assessments. Complete internal or third-party conformity assessments as required.
  8. Register in the EU database. Prepare for registration of all high-risk AI systems.
  9. Establish post-market monitoring. Implement systematic performance monitoring systems.
  10. Conduct fundamental rights impact assessments. Complete before deploying high-risk systems (mandatory for public bodies, strongly recommended for all deployers).

For a tool-by-tool comparison of platforms that can help manage this compliance process, see Best EU AI Act Compliance Tools Compared.

For August 2, 2027

  1. Product manufacturers: Coordinate AI Act compliance with existing product safety conformity assessments
  2. Engage notified bodies early. Third-party assessment capacity is limited; book early to avoid delays.
  3. Integrate AI Act requirements into product development cycles. Treat AI Act compliance as a design requirement, not a post-development add-on.

Delegated and Implementing Acts: The Details Still Being Written

The AI Act delegates significant rulemaking authority to the European Commission. Several delegated and implementing acts are still in development and will add detail to the regulation's framework:

  • Harmonised standards: The Commission has tasked European standardisation organisations (CEN, CENELEC, ETSI) with developing harmonised standards for the AI Act. Compliance with these standards will provide a presumption of conformity with the regulation's requirements. Development is ongoing, with initial standards expected by late 2025 or early 2026.
  • Common specifications: Where harmonised standards are not available, the Commission may adopt common specifications as implementing acts. Providers that comply with common specifications will also enjoy a presumption of conformity.
  • High-risk system classification amendments: The Commission can update the list of high-risk AI systems in Annex III through delegated acts, adding or modifying categories as technology and risks evolve.
  • Benchmarks and metrics: The AI Office is developing benchmarks for evaluating GPAI models, including methodologies for assessing systemic risk.

Organisations should monitor these developments closely, as the practical details of compliance will be significantly shaped by the standards and specifications that emerge.

Enforcement and Penalties Quick Reference

The penalties under the AI Act are structured by severity:

Violation | Maximum Fine
Prohibited AI practices (Article 5) | 35 million EUR or 7% of worldwide annual turnover, whichever is higher
Non-compliance with high-risk AI and other operator obligations (Article 99(4)) | 15 million EUR or 3% of worldwide annual turnover, whichever is higher
Supplying incorrect, incomplete, or misleading information to authorities | 7.5 million EUR or 1% of worldwide annual turnover, whichever is higher

For SMEs and startups, the regulation caps fines at the lower of the two amounts (fixed amount or percentage of turnover) rather than the higher. Even so, the reduced amounts can represent existential financial exposure for smaller companies.
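
The fine structure is mechanical enough to express in a few lines: for most operators the higher of the fixed amount and the turnover percentage applies, while for SMEs the lower of the two applies. A minimal sketch:

```python
# Maximum-fine structure under the AI Act's penalty provisions. For most
# operators the higher of the fixed amount and the turnover percentage
# applies; for SMEs and startups, the lower of the two applies.
def max_fine(turnover_eur: float, fixed_eur: float, pct: float,
             is_sme: bool = False) -> float:
    proportional = turnover_eur * pct
    return min(fixed_eur, proportional) if is_sme else max(fixed_eur, proportional)

# Prohibited-practice violation at 2 billion EUR worldwide annual turnover:
print(max_fine(2e9, 35e6, 0.07))               # 140,000,000 EUR (7% exceeds 35M)
print(max_fine(2e9, 35e6, 0.07, is_sme=True))  # 35,000,000 EUR
```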

For a complete analysis of how enforcement will work in practice, see our guide on EU AI Act Fines and Enforcement.

What You Should Do Today

Regardless of where you stand in the compliance journey, here are the actions you should take right now:

  1. Know your deadlines. Use this timeline to identify which obligations already apply to you and which are approaching.
  2. Inventory your AI systems. You cannot comply with regulations you do not understand. Document every AI system your organisation develops, deploys, or distributes.
  3. Classify by risk. Determine which systems are high-risk, which are GPAI, and which fall under the prohibited practices (a minimal inventory sketch follows this list). Use our EU AI Act Risk Assessment Guide for a structured methodology.
  4. Assign ownership. Compliance requires clear internal accountability. Designate a responsible person or team for AI Act compliance.
  5. Start with what is already enforceable. If you have not verified compliance with the prohibited practices and AI literacy obligations, do that today.
  6. Plan for August 2, 2026. This is the deadline that will affect the most organisations. Begin your compliance program now, not six months from now.
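
Steps 2 and 3 above amount to building a structured inventory. Here is a minimal sketch of what such a record might look like; the fields and categories are illustrative assumptions, not a mandated format.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative risk and role taxonomies based on the AI Act's structure.
class RiskClass(Enum):
    PROHIBITED = "prohibited"  # Article 5
    HIGH_RISK = "high_risk"    # Annex III or Annex I product
    GPAI = "gpai"              # Chapter V model
    LIMITED = "limited"        # transparency obligations only
    MINIMAL = "minimal"

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_class: RiskClass
    role: Role
    owner: str                 # accountable person or team
    applicable_deadline: str

inventory = [
    AISystemRecord("cv-screener", "CV triage for recruitment",
                   RiskClass.HIGH_RISK, Role.DEPLOYER,
                   owner="compliance@example.com",
                   applicable_deadline="2026-08-02"),
]
high_risk = [s for s in inventory if s.risk_class is RiskClass.HIGH_RISK]
print(f"{len(high_risk)} high-risk system(s) to prepare before 2026-08-02")
```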

The EU AI Act's enforcement timeline is aggressive by design. The European Commission intended to give organisations enough time to prepare while ensuring that the regulation delivers meaningful consumer and citizen protection on a defined schedule. Companies that understand the timeline and work backwards from each deadline will achieve compliance. Those that wait until deadlines are imminent will face rushed compliance costs, enforcement risk, and potential market access disruption.

Ready to assess your EU AI Act compliance?

Start a guided compliance interview, get your AI system's risk classification, and generate an audit-ready report.

Start Your Free 7-Day Trial