
EU AI Act: Frequently Asked Questions
Your EU AI Act Questions, Answered
The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. Since its publication in the Official Journal on July 12, 2024, and entry into force on August 1, 2024, organisations across every sector have been working through a cascade of questions: Does this apply to me? What do I need to do? How much time do I have?
This FAQ compiles the most frequently asked questions we receive from compliance officers, legal teams, CTOs, and business leaders. Each answer references the specific EU AI Act articles, annexes, and timelines so you can trace every obligation back to the regulation itself.
If you want a structured overview of your obligations, start with our EU AI Act Compliance Checklist.
1. Who Does the EU AI Act Apply To?
The AI Act applies to a broad range of actors across the AI value chain. Under Article 2, the regulation covers:
- Providers who develop AI systems or general-purpose AI (GPAI) models and place them on the EU market or put them into service, regardless of whether those providers are established in the EU or in a third country.
- Deployers of AI systems who are established in the EU or who use AI systems whose output is used in the EU.
- Importers and distributors who make AI systems available on the EU market.
- Product manufacturers who place AI systems on the market as part of or alongside their product.
- Authorised representatives of providers established outside the EU.
The key principle is territorial reach: if the AI system's output is used within the EU, or if the system is placed on the EU market, the regulation applies, even if the provider is headquartered in San Francisco, Singapore, or anywhere else outside Europe.
There are limited exemptions for AI systems used exclusively for military or defence purposes, for purely personal non-professional activities, and for research and development activities before the system is placed on the market.
2. What Is High-Risk AI Under the EU AI Act?
High-risk AI systems are defined in two ways under Article 6:
Category 1 (Article 6(1)): AI systems intended to be used as a safety component of a product, or that are themselves a product, covered by the Union harmonisation legislation listed in Annex I. These include toys, machinery, medical devices, motor vehicles, aviation systems, and more. Such systems are high-risk where that Annex I legislation already requires a third-party conformity assessment.
Category 2 (Article 6(2) and Annex III): AI systems that fall into one of eight use-case areas listed in Annex III:
- Biometrics (remote biometric identification, biometric categorisation, and emotion recognition)
- Management and operation of critical infrastructure
- Education and vocational training (admissions, assessment, monitoring)
- Employment, workers' management, and access to self-employment (recruitment, task allocation, performance monitoring)
- Access to and enjoyment of essential private and public services (credit scoring, insurance pricing, emergency services dispatching)
- Law enforcement
- Migration, asylum, and border control management
- Administration of justice and democratic processes
If your AI system falls into one of these areas and is not covered by one of the narrow exemptions in Article 6(3), it is classified as high-risk and subject to the full compliance regime: conformity assessments, technical documentation, quality management systems, post-market monitoring, and registration in the EU database.
For a deeper dive into how to determine your system's risk level, see our EU AI Act Risk Assessment Guide.
3. Do I Need to Register My AI System?
Yes, if it is high-risk. Article 49 requires providers of high-risk AI systems to register the system in the EU database (established under Article 71) before placing it on the market or putting it into service. Deployers of high-risk AI systems that are public authorities, EU institutions, or entities acting on their behalf must also register.
The registration must include information specified in Annexes VIII and IX, including the system's intended purpose, a summary of the conformity assessment, and contact information for the provider.
For GPAI models with systemic risk, providers must also notify the European Commission (acting through the AI Office) under Article 52.
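To make the registration data concrete, here is an illustrative sketch of the kind of information an Annex VIII entry covers. The field names and values are this sketch's own invention, not the official database schema; registration happens through the EU database interface itself.

```python
# Illustrative shape of a high-risk system registration entry. The field
# names and values are this sketch's own, not the official Annex VIII
# schema; the authoritative list of required information is in the annexes.
registration_entry = {
    "provider": {"name": "Example AI GmbH", "contact": "compliance@example.eu"},
    "system": {
        "trade_name": "ExampleScreen",
        "intended_purpose": "CV pre-screening for recruitment (Annex III, point 4)",
        "status": "on the market",
    },
    "conformity": {
        "assessment_procedure": "internal control (Annex VI)",
        "declaration_of_conformity": "DoC-2026-001",  # hypothetical reference
        "ce_marking": True,
    },
}
```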
4. What Are the Fines for Non-Compliance?
The EU AI Act establishes three tiers of administrative fines under Article 99:
- Up to 35 million EUR or 7% of global annual turnover (whichever is higher) for violations involving prohibited AI practices under Article 5.
- Up to 15 million EUR or 3% of global annual turnover for non-compliance with high-risk AI system requirements, provider/deployer obligations, or GPAI model obligations.
- Up to 7.5 million EUR or 1% of global annual turnover for supplying incorrect, incomplete, or misleading information to regulatory authorities or notified bodies.
For SMEs and startups, Article 99(6) caps each fine at whichever of the two amounts is lower. The regulation also requires enforcement authorities to consider the entity's size, market share, and economic viability when setting fines.
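As a worked illustration of how these caps combine, the sketch below computes the applicable ceiling under the higher-of rule for ordinary entities and the lower-of rule for SMEs. The function name and structure are our own; only the figures come from Article 99.

```python
def fine_cap_eur(fixed_cap_eur: int, pct_percent: int, global_turnover_eur: int,
                 is_sme: bool = False) -> int:
    """Illustrative fine ceiling under Article 99.

    Ordinary entities: the HIGHER of the fixed amount and the turnover
    percentage. SMEs and startups: the LOWER of the two (Article 99(6)).
    """
    pct_cap = global_turnover_eur * pct_percent // 100
    return min(fixed_cap_eur, pct_cap) if is_sme else max(fixed_cap_eur, pct_cap)

# Tier 1 violation (Article 5) by a firm with EUR 2bn global turnover:
# 7% of 2bn = EUR 140m, which exceeds the EUR 35m fixed cap.
print(fine_cap_eur(35_000_000, 7, 2_000_000_000))            # 140000000
# The same violation by an SME with EUR 10m turnover: 7% = EUR 700k.
print(fine_cap_eur(35_000_000, 7, 10_000_000, is_sme=True))  # 700000
```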
For a complete analysis, read our EU AI Act Fines and Enforcement guide.
5. Does the EU AI Act Apply Outside the EU?
Yes. Article 2(1) gives the AI Act significant extraterritorial reach. The regulation applies to:
- Providers placing AI systems on the EU market or putting them into service in the EU, regardless of where they are established.
- Providers and deployers located in a third country where the output produced by the AI system is used in the EU.
- Importers and distributors making AI systems available on the EU market.
This means a US-based SaaS company whose AI-powered product has European customers is captured by the regulation. A Chinese manufacturer whose AI system is imported into Europe is captured. Any company whose AI system generates outputs (decisions, recommendations, predictions) that affect people in the EU is likely in scope.
Non-EU providers must appoint an authorised representative established in the EU before placing their AI system on the market (Article 22).
6. What About Open-Source AI?
The AI Act includes a partial exemption for open-source AI models and systems, but it is narrower than many assume.
Under Article 2(12), the regulation does not apply to AI systems released under free and open-source licences, unless the system is:
- Placed on the market or put into service as a high-risk AI system (Article 6 and Annex III)
- Subject to the transparency obligations under Article 50
- A prohibited practice under Article 5
For GPAI models released under free and open-source licences, with parameters (including weights) and architecture made publicly available, Article 53(2) provides a lighter-touch regime. These providers must still publish a sufficiently detailed summary of the training data and maintain a policy to comply with EU copyright law (Article 53(1)(c) and (d)), but they are exempt from the technical documentation obligations in Article 53(1)(a) and (b) that apply to proprietary GPAI models.
That said, if an open-source GPAI model is classified as presenting systemic risk (Article 51), the full GPAI systemic risk obligations apply regardless of the licence.
7. When Do I Need to Comply? What Are the Key Deadlines?
The AI Act uses a phased implementation timeline:
| Deadline | Obligation |
|---|---|
| February 2, 2025 | Prohibited practices ban enforceable (Article 5); AI literacy obligations apply (Article 4) |
| August 2, 2025 | GPAI model obligations enforceable (Articles 51-56); National competent authorities designated |
| August 2, 2026 | Full obligations for high-risk AI systems (Chapter III); Conformity assessments, quality management, technical documentation, post-market monitoring all required |
| August 2, 2027 | Obligations for high-risk AI systems that are safety components of products in Annex I |
If you operate a high-risk AI system under Annex III, August 2, 2026 is your deadline. That is just months away. If your high-risk AI system is a safety component of a product covered by Annex I harmonisation legislation, you have until August 2, 2027.
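For teams tracking these dates programmatically, here is a minimal sketch of the phased timeline as a lookup table. The obligation labels are our own shorthand; the dates come from the regulation's transition provisions.

```python
from datetime import date

# The AI Act's phased deadlines as a lookup table (labels are our own).
DEADLINES = {
    "prohibited_practices_and_ai_literacy": date(2025, 2, 2),  # Articles 4-5
    "gpai_model_obligations": date(2025, 8, 2),                # Chapter V
    "high_risk_annex_iii": date(2026, 8, 2),                   # Chapter III
    "high_risk_annex_i_safety_components": date(2027, 8, 2),
}

def days_remaining(obligation: str, today: date | None = None) -> int:
    """Days until an obligation becomes enforceable (negative if past)."""
    today = today or date.today()
    return (DEADLINES[obligation] - today).days

print(days_remaining("high_risk_annex_iii", today=date(2026, 1, 1)))  # 213
```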
8. What Is a Conformity Assessment?
A conformity assessment is the process by which a provider demonstrates that a high-risk AI system meets all the requirements set out in Chapter III, Section 2 of the AI Act before placing it on the market.
Under Article 43, most high-risk AI systems undergo a self-assessment (conformity assessment based on internal control, as described in Annex VI). The provider reviews its own compliance with requirements for risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity.
Certain high-risk AI systems must instead undergo a third-party conformity assessment performed by a notified body (the Annex VII procedure). This applies in particular to biometric systems under Annex III, point 1, where the provider has not applied harmonised standards or common specifications in full.
After completing the assessment, the provider draws up an EU declaration of conformity (Article 47) and affixes the CE marking (Article 48).
9. Do I Need an AI Literacy Programme?
Yes. Article 4 establishes a cross-cutting obligation for AI literacy that applies to all providers and deployers of AI systems, regardless of risk classification. This obligation has been enforceable since February 2, 2025.
Organisations must ensure that their staff and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy. This includes understanding:
- The capabilities and limitations of the AI system
- The potential impact on fundamental rights
- How the system operates and what its outputs mean
- How to interpret and act on the system's results
The level of literacy required is proportionate to the context, intended purpose, and the persons affected. While the regulation does not prescribe a specific training curriculum, compliance teams should document their literacy programmes, training materials, and participation records.
10. How Does the EU AI Act Interact with GDPR?
The AI Act and GDPR apply in parallel and are designed to be complementary. Article 2(7) explicitly states that the AI Act does not affect the application of the GDPR.
In practice, this means:
- If your AI system processes personal data, you must comply with both the AI Act and the GDPR simultaneously.
- The GDPR's requirements for lawful basis, data minimisation, purpose limitation, and data subject rights all continue to apply.
- The AI Act adds additional data governance requirements under Article 10, specifically for training, validation, and testing data sets used for high-risk AI systems.
- Where the AI Act requires a fundamental rights impact assessment (Article 27), this complements but does not replace a GDPR Data Protection Impact Assessment (DPIA) under Article 35 of the GDPR.
- Enforcement may be coordinated: the AI Act designates market surveillance authorities, while GDPR is enforced by data protection authorities. Some member states may combine these functions.
Organisations should integrate their AI Act compliance workflows with existing GDPR compliance processes rather than treating them as separate workstreams.
11. What Documentation Do I Need?
For high-risk AI systems, the documentation burden is substantial. Article 11 and Annex IV specify the technical documentation requirements, which include:
- General description of the AI system (intended purpose, developer identity, version history)
- Detailed description of the system's elements and development process (methods, design specifications, system architecture, algorithms, data requirements)
- Information about training, validation, and testing data (data sets used, data preparation methodology, data labelling, data cleaning, bias detection)
- Risk management documentation (the risk management system per Article 9, known risks, risk mitigation measures)
- Description of human oversight measures (Article 14)
- Information on accuracy, robustness, and cybersecurity (metrics, test results, known limitations)
- Quality management system documentation (Article 17)
- EU declaration of conformity (Article 47)
- Post-market monitoring plan (Article 72)
- Logs and records of automatic logging (Article 12)
Deployers of high-risk AI systems have lighter documentation requirements but must still keep the logs automatically generated by the system (Article 26(6)) and conduct fundamental rights impact assessments where required.
For GPAI model providers, Article 53 requires detailed technical documentation including training methodology and compute used, a sufficiently detailed summary of training data, and compliance with EU copyright law.
12. What Are Prohibited AI Practices?
Article 5 bans specific AI practices outright. These prohibitions have been enforceable since February 2, 2025:
- AI systems using subliminal, manipulative, or deceptive techniques that distort behaviour and cause significant harm
- AI systems exploiting vulnerabilities related to age, disability, or socio-economic circumstances
- Social scoring by public or private actors leading to detrimental or unfavourable treatment
- AI systems assessing individual criminal offending risk based solely on profiling or personality traits
- Untargeted scraping of facial images from the internet or CCTV to create facial recognition databases
- Emotion recognition in workplaces or educational institutions (with narrow exceptions)
- Biometric categorisation using sensitive characteristics (race, political opinion, sexual orientation, etc.)
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions)
Violations carry the highest fine tier: up to 35 million EUR or 7% of global annual turnover. For a detailed analysis, see our EU AI Act Prohibited Practices guide.
13. What Is a Fundamental Rights Impact Assessment?
Article 27 requires deployers of high-risk AI systems to carry out an assessment of the impact on fundamental rights before putting the system into use. This is distinct from the risk management system that providers must implement under Article 9.
The fundamental rights impact assessment (FRIA) must include:
- A description of the deployer's processes in which the high-risk AI system will be used
- A description of the period of time and frequency of use
- The categories of natural persons and groups likely to be affected
- The specific risks of harm likely to impact the identified groups
- A description of human oversight measures
- The measures to be taken if risks materialise, including internal governance and complaint mechanisms
This requirement applies specifically to deployers that are bodies governed by public law, private entities providing public services, and deployers of high-risk AI systems in certain sensitive areas (credit scoring, insurance pricing, etc.).
The FRIA should be coordinated with existing GDPR Data Protection Impact Assessments to avoid duplication and ensure consistency.
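As an illustration of the structure Article 27 requires, here is a minimal sketch of a FRIA record. The field names and example values are our own; the regulation prescribes the content of the assessment, not a schema or format.

```python
from dataclasses import dataclass

@dataclass
class FundamentalRightsImpactAssessment:
    """Illustrative FRIA record mirroring the Article 27 elements.

    Field names are this sketch's own; the regulation prescribes the
    content of the assessment, not a schema.
    """
    deployer_processes: str            # how the high-risk system is used
    period_and_frequency_of_use: str
    affected_persons_and_groups: list[str]
    specific_risks_of_harm: list[str]
    human_oversight_measures: str
    mitigation_and_governance: str     # steps if risks materialise, complaint channels

fria = FundamentalRightsImpactAssessment(
    deployer_processes="Automated triage of consumer credit applications",
    period_and_frequency_of_use="Continuous; every incoming application",
    affected_persons_and_groups=["credit applicants", "low-income households"],
    specific_risks_of_harm=["discriminatory refusals", "opaque score disputes"],
    human_oversight_measures="A credit officer reviews every refusal before issue",
    mitigation_and_governance="Quarterly bias audit; internal complaints desk",
)
```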
14. What Are GPAI Obligations?
General-purpose AI (GPAI) models, such as large language models, have their own dedicated obligation framework under Articles 51 to 56.
All GPAI model providers must:
- Draw up and keep up-to-date technical documentation (Article 53(1)(a) and Annex XI)
- Make information and documentation available to downstream AI system providers (Article 53(1)(b))
- Establish a policy to comply with EU copyright law (Article 53(1)(c))
- Publish a sufficiently detailed summary of the training data (Article 53(1)(d))
GPAI models with systemic risk (Article 51) face additional obligations:
- Perform model evaluations, including adversarial testing (Article 55(1)(a))
- Assess and mitigate possible systemic risks (Article 55(1)(b))
- Track, document, and report serious incidents to the AI Office and national authorities (Article 55(1)(c))
- Ensure adequate cybersecurity protections (Article 55(1)(d))
A GPAI model is classified as having systemic risk if it has high-impact capabilities, which is presumed where the cumulative compute used for training exceeds 10^25 FLOPs (Article 51(2)). The European Commission can also designate a model as having systemic risk by decision.
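To see what the compute presumption means in practice, the sketch below applies the widely used rule of thumb that dense transformer training compute is roughly 6 × parameters × training tokens. That approximation is a community heuristic, not part of the Act; the regulation only fixes the 10^25 FLOPs threshold.

```python
# A minimal sketch of the Article 51(2) compute presumption. The
# 6 * parameters * training-tokens estimate for dense transformer training
# compute is a community rule of thumb, NOT part of the regulation.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    return 6 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# 70e9 parameters on 15e12 tokens: ~6.3e24 FLOPs, below the threshold.
print(presumed_systemic_risk(70e9, 15e12))   # False
# 200e9 parameters on 20e12 tokens: ~2.4e25 FLOPs, presumed systemic risk.
print(presumed_systemic_risk(200e9, 20e12))  # True
```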
These obligations took effect on August 2, 2025.
15. How Do I Classify My AI System's Risk Level?
Risk classification is the foundation of your entire compliance strategy. The AI Act does not assign risk levels arbitrarily. It follows a structured decision tree:
Step 1: Is it banned? Check whether your AI system falls under any of the prohibited practices in Article 5. If yes, you cannot operate it in the EU.
Step 2: Is it high-risk? Check two pathways:
- Does it serve as a safety component of a product regulated under Annex I? (Article 6(1))
- Does its intended purpose fall into one of the eight areas listed in Annex III? (Article 6(2))
If either answer is yes, check whether the narrow exception in Article 6(3) applies: if the system only performs a narrow procedural task or similarly limited function, does not pose a significant risk of harm to health, safety, or fundamental rights, and does not perform profiling of natural persons, it may be exempted. However, you must document this assessment.
Step 3: Does it have transparency obligations? Under Article 50, systems that interact with people (chatbots), generate synthetic content (deepfakes, synthetic audio/video), or perform emotion recognition or biometric categorisation carry specific transparency requirements regardless of risk level.
Step 4: Minimal or no risk. If none of the above categories apply, your AI system falls into the minimal-risk category with no specific regulatory obligations beyond the cross-cutting AI literacy duty in Article 4 (though codes of conduct under Article 95 are encouraged).
We strongly recommend documenting your risk classification reasoning, even for systems you determine to be minimal risk. In an audit, demonstrating that you performed a thorough assessment is itself a compliance signal.
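The decision tree above translates naturally into code. The sketch below is a simplified illustration: it assumes you have already answered the underlying legal questions (which is the hard part) and simply records the resulting classification with its reasoning, in line with the documentation recommendation above. Note one simplification: Article 50 transparency duties can apply to high-risk systems too.

```python
from dataclasses import dataclass

@dataclass
class Classification:
    level: str      # "prohibited" | "high_risk" | "transparency" | "minimal"
    reasoning: str  # documented reasoning is itself a compliance signal

def classify(prohibited_practice: bool,
             annex_i_safety_component: bool,
             annex_iii_use_case: bool,
             article_6_3_exemption: bool,
             article_50_transparency: bool) -> Classification:
    """Walk the four steps above; the inputs are the answers to the
    underlying legal questions, which this sketch does not determine."""
    if prohibited_practice:
        return Classification("prohibited", "Falls under an Article 5 practice")
    if (annex_i_safety_component or annex_iii_use_case) and not article_6_3_exemption:
        return Classification("high_risk", "Article 6 pathway met, no 6(3) exemption")
    if article_50_transparency:
        return Classification("transparency", "Article 50 obligations apply")
    return Classification("minimal", "No category matched; Article 95 codes encouraged")

# An Annex III recruitment tool with no Article 6(3) exemption:
print(classify(False, False, True, False, False).level)  # high_risk
```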
For tool-assisted risk classification, see our comparison of Best EU AI Act Compliance Tools.
16. Do I Need to Notify Users They Are Interacting with AI?
Yes, for specific categories of AI systems. Article 50 establishes transparency obligations that apply regardless of risk classification:
- AI systems that interact with natural persons must be designed so that the person is informed they are interacting with an AI system, unless this is obvious from the circumstances.
- AI systems generating synthetic audio, image, video, or text must ensure the output is marked in a machine-readable format and is detectable as artificially generated.
- Deployers of emotion recognition or biometric categorisation systems must inform the persons exposed.
- Deployers of deepfake systems must disclose that the content is artificially generated or manipulated.
These obligations apply even where the system is otherwise minimal-risk, and they stack on top of any high-risk requirements. Failure to comply falls under the second fine tier (up to 15 million EUR or 3% of global annual turnover).
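Article 50(2) requires a machine-readable marking but does not mandate a single format (provenance standards such as C2PA are one emerging option). As a purely illustrative sketch, here is one way to attach a marker to a generated PNG using text chunks via the Pillow library; the chunk keys and values are our own invention.

```python
# One possible way to attach a machine-readable marker to a generated
# image: PNG text chunks via Pillow. The chunk keys and values here are
# our own invention, shown purely for illustration.
from PIL import Image, PngImagePlugin

def save_with_ai_marker(img: Image.Image, path: str) -> None:
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai-generated", "true")           # hypothetical key
    meta.add_text("generator", "example-model-v1")  # hypothetical value
    img.save(path, pnginfo=meta)

img = Image.new("RGB", (64, 64))  # stand-in for real generated output
save_with_ai_marker(img, "synthetic.png")
print(Image.open("synthetic.png").text)  # {'ai-generated': 'true', ...}
```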
17. What Role Do National Authorities Play?
Each EU member state must designate at least one national competent authority and one market surveillance authority to oversee and enforce the AI Act (Article 70). These authorities are responsible for:
- Monitoring compliance in their jurisdiction
- Conducting investigations and market surveillance activities
- Imposing corrective measures (requiring modifications, withdrawals, or recalls)
- Issuing administrative fines
At the EU level, the European AI Office (established within the European Commission) coordinates enforcement, manages the EU database for high-risk AI systems, and has direct supervisory powers over GPAI model providers.
Member states were required to designate their national competent authorities by August 2, 2025.
18. Can I Still Deploy AI Systems During the Transition Period?
Yes. The AI Act does not ban or restrict the deployment of AI systems during the transition period. However, the phased enforcement means that some obligations are already in effect:
- Since February 2, 2025: Prohibited practices cannot be deployed. AI literacy obligations apply.
- Since August 2, 2025: GPAI model obligations are enforceable.
- By August 2, 2026: Full high-risk AI system requirements must be met.
Organisations should use the remaining transition period to conduct gap analyses, implement compliance frameworks, and prepare technical documentation. Waiting until the deadline is a high-risk strategy, especially given the complexity of conformity assessments and the documentation requirements.
Start Your Compliance Journey Today
The EU AI Act is not a future concern. It is a current obligation with escalating deadlines. Whether you need to classify your AI systems, prepare technical documentation, or run a fundamental rights impact assessment, the time to act is now.
AI Comply HQ provides a guided, conversational compliance assessment that helps you identify your obligations, classify your systems, and generate the documentation you need, in hours, not months.
Start Your Free Compliance Assessment

This FAQ is based on Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (the EU AI Act). It is provided for informational purposes and does not constitute legal advice. Consult qualified legal counsel for guidance specific to your organisation.