
EU AI Act General-Purpose AI (GPAI) Requirements: What Model Providers Need to Know
The EU AI Act introduces an entirely new regulatory category that did not exist in earlier drafts of the legislation: General-Purpose AI (GPAI) models. This category captures the large foundation models and multi-purpose AI systems that have transformed the technology landscape since 2022. If your organization develops, fine-tunes, or distributes a GPAI model, you are subject to a distinct set of obligations that carry significant compliance and operational implications.
This article provides a comprehensive analysis of what the EU AI Act requires of GPAI model providers, including the standard obligations that apply to all GPAI models, the elevated requirements for models with systemic risk, and the limited exemptions available for open-source releases.
What Qualifies as a General-Purpose AI Model?
The EU AI Act defines a general-purpose AI model in Article 3(63) as an AI model that is trained with a large amount of data using self-supervision at scale, that displays significant generality, and that is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market. This definition is deliberately broad and captures models that can be integrated into a variety of downstream AI systems.
In practical terms, GPAI models include:
- Large language models (LLMs) such as GPT-series models, Claude, Gemini, Llama, and Mistral.
- Multimodal foundation models capable of processing text, images, audio, or video.
- Models that are fine-tuned from a general-purpose base model, where the fine-tuned version retains significant generality.
- Models distributed via API access, download, or embedded within downstream products.
The key distinguishing factor is generality. A model trained exclusively for a single narrow task, such as detecting fraudulent credit card transactions, would typically not qualify as a GPAI model. But a model that can be prompted or adapted to perform translation, summarization, code generation, question answering, and creative writing almost certainly does.
Organizations that are uncertain about classification should conduct a structured assessment. Our EU AI Act risk assessment guide provides a framework that can be adapted for GPAI classification decisions.
Standard Obligations for All GPAI Model Providers (Article 53)
Every provider of a GPAI model, regardless of its size or capability level, must comply with a baseline set of obligations under Article 53. These obligations apply from August 2, 2025, although enforcement timelines allow a transitional period for models already on the market.
Technical Documentation
GPAI model providers must draw up and keep up to date technical documentation of the model, including its training and testing process and the results of its evaluation. This documentation must be sufficiently detailed to allow:
- Downstream providers (those who integrate the GPAI model into their own AI systems) to understand the model's capabilities and limitations.
- Regulatory authorities to assess compliance with the AI Act.
- Third parties conducting conformity assessments to evaluate the model.
The European Commission, in consultation with the AI Office, has published templates and guidelines specifying the minimum content for GPAI technical documentation. At a minimum, documentation should cover:
- A general description of the model, including its architecture, the number of parameters, and the modalities it supports.
- A description of the training process, including training data sources, data curation methodology, and computational resources used.
- Information about the model's intended purpose and reasonably foreseeable uses.
- Performance evaluation results, including benchmarks, known limitations, and failure modes.
- Information about risk mitigation measures taken during development.
Copyright Policy Compliance
Under Article 53(1)(c), GPAI model providers must put in place a policy to comply with Union copyright law, in particular Directive (EU) 2019/790, including the identification and compliance with reservations of rights expressed by rightsholders pursuant to Article 4(3) of that Directive.
This is a significant obligation. Model providers must:
- Implement technical measures to identify and respect opt-out requests from content creators and publishers.
- Maintain records of training data sources and the steps taken to comply with copyright law.
- Make available a sufficiently detailed summary of the content used for training the GPAI model, following a template provided by the AI Office.
The training data summary is a public-facing document. It must be detailed enough to give a "meaningful understanding" of the data used, without requiring disclosure of trade secrets or proprietary information.
Transparency Obligations
GPAI model providers must make available to downstream providers information that is necessary for them to comply with their own obligations under the AI Act. This includes:
- The technical documentation described above.
- Information necessary for downstream providers to comply with transparency obligations when integrating the GPAI model into AI systems that interact with individuals.
- Information about the model's capabilities and limitations that downstream providers need to conduct their own risk assessments.
This downstream notification requirement creates a chain of accountability. When a downstream provider integrates a GPAI model into a high-risk AI system, they need to understand the model well enough to fulfil their own obligations around risk management, data governance, and human oversight. The GPAI model provider must give them the information to do so.
Cooperation with Authorities
GPAI model providers must cooperate as necessary with the European Commission, the AI Office, and national competent authorities. This includes providing information and documentation upon request, facilitating access to the model for evaluation purposes, and responding to supervisory inquiries in a timely manner.
Systemic Risk: The Elevated Tier
The EU AI Act creates a second, more demanding tier of obligations for GPAI models that pose systemic risk. A GPAI model is presumed to have systemic risk if it meets one of the following conditions:
The Computational Threshold
A GPAI model is classified as having systemic risk if the cumulative amount of computation used for its training, measured in floating point operations (FLOPs), is greater than 10^25 FLOPs. This threshold was calibrated to capture the most capable models available at the time of the Act's adoption.
As of early 2026, models that meet or exceed this threshold include the frontier models from major AI laboratories. The Commission has the power to update this threshold through delegated acts as computational capabilities evolve and as the relationship between compute and model capability becomes better understood.
Commission Designation
Even if a GPAI model does not meet the computational threshold, the Commission may designate it as having systemic risk based on criteria set out in Annex XIII. These criteria include:
- The number of registered end users.
- The number of business users who have integrated the model into their products.
- The model's performance on relevant benchmarks, including measures of general capability.
- The number of parameters.
- The amount and diversity of training data.
- Any other factor the Commission considers relevant to assessing the potential for systemic impact.
This designation power gives the Commission flexibility to capture models that may pose systemic risk despite falling below the computational threshold, for example because of unusually broad deployment or because of capabilities that emerge at lower compute levels.
Additional Obligations for Systemic Risk Models (Article 55)
Providers of GPAI models with systemic risk must comply with all of the standard obligations under Article 53, plus a set of additional requirements under Article 55.
Model Evaluation
Providers must perform state-of-the-art model evaluations, including adversarial testing (red-teaming), to identify and mitigate systemic risks. These evaluations must assess:
- The model's potential to generate content that could facilitate harm, including biological, chemical, radiological, nuclear, or cyber threats.
- The model's susceptibility to manipulation, jailbreaking, or prompt injection.
- The accuracy and reliability of the model's outputs across different contexts.
- The model's potential for unintended emergent capabilities.
Evaluations should be conducted throughout the model's lifecycle, not just at the point of release. As models are updated, fine-tuned, or deployed in new contexts, re-evaluation is necessary.
Red-Teaming
Red-teaming is explicitly required for systemic risk models. This goes beyond standard quality assurance testing. Red-teaming involves adversarial testing designed to probe the model's safety guardrails, identify potential failure modes, and assess the model's resilience to deliberate misuse.
The AI Office may issue specific guidance on red-teaming methodologies, and providers should be prepared to demonstrate that their red-teaming processes are thorough, well-documented, and conducted by personnel or teams with appropriate expertise.
Cybersecurity Requirements
Providers of systemic risk models must ensure an adequate level of cybersecurity protection for the model and its physical infrastructure. This includes:
- Protecting model weights, training data, and inference infrastructure from unauthorized access.
- Implementing access controls and audit trails for model interaction.
- Monitoring for and responding to security incidents that could compromise the model's integrity or availability.
Incident Reporting
Providers must track, document, and report serious incidents to the AI Office and, where appropriate, to national competent authorities. A serious incident includes any event that results in or could result in serious harm to individuals, critical infrastructure, or the environment, or that reveals a previously unknown systemic risk.
Energy Consumption Reporting
Providers must document and report the energy consumption of the model during training and, where practicable, during inference. This aligns with broader EU sustainability objectives and may inform future regulatory measures on the environmental impact of AI.
Codes of Practice
The EU AI Act encourages the development of codes of practice to facilitate compliance with GPAI obligations. The AI Office has been convening stakeholders to develop these codes, which are intended to provide practical guidance on:
- How to prepare technical documentation that meets the Act's requirements.
- Best practices for copyright compliance in training data curation.
- Methodologies for model evaluation and red-teaming.
- Approaches to cybersecurity risk management for GPAI models.
While codes of practice are voluntary, adherence to an approved code of practice can serve as a presumption of conformity with the relevant provisions of the AI Act. Providers that choose not to follow a code of practice must demonstrate equivalent compliance through alternative means.
Open-Source GPAI: Exemptions and Their Limits
The EU AI Act provides a partial exemption for GPAI models released under free and open-source licences. Under Article 53(2), providers of open-source GPAI models are exempt from certain obligations, specifically the technical documentation and downstream notification requirements, provided that:
- The model's parameters (weights) are made publicly available.
- The model's architecture and usage information is publicly accessible.
- The model is released under a licence that permits access, use, modification, and distribution.
However, this exemption has important limits:
- Copyright compliance is not exempted: Even open-source GPAI model providers must comply with copyright law and make available a training data summary.
- Systemic risk obligations override the exemption: If an open-source GPAI model meets the systemic risk threshold (10^25 FLOPs or Commission designation), the provider must comply with all obligations under both Article 53 and Article 55, regardless of the open-source nature of the release.
- The exemption does not protect downstream deployers: Organizations that integrate an open-source GPAI model into a high-risk AI system remain fully subject to the Act's requirements for high-risk systems. The open-source exemption applies only to the model provider's obligations, not to the obligations of those who use the model.
This means that organizations deploying open-source models in sensitive applications, such as healthcare, law enforcement, or financial services, cannot rely on the open-source exemption to avoid compliance. They must conduct their own risk assessments, maintain documentation, and implement all required safeguards. For a full checklist of applicable requirements, see our EU AI Act Compliance Checklist.
Enforcement and Penalties
The AI Office serves as the primary enforcement body for GPAI obligations. Penalties for non-compliance with GPAI-specific requirements can reach up to 15 million EUR or 3% of global annual turnover, whichever is higher. For providing incorrect, incomplete, or misleading information to the AI Office, fines of up to 7.5 million EUR or 1% of turnover apply.
These penalties are distinct from the fines applicable to high-risk AI system violations, and they can apply concurrently if a provider is in breach of both GPAI obligations and high-risk system requirements. For a complete overview of the enforcement landscape, see our guide on EU AI Act fines and enforcement.
Practical Steps for GPAI Model Providers
Organizations developing or distributing GPAI models should take the following steps:
-
Classify your model: Determine whether your AI model meets the definition of a GPAI model under Article 3(63), and whether it exceeds the systemic risk threshold of 10^25 FLOPs.
-
Prepare technical documentation: Begin drafting comprehensive documentation that covers architecture, training data, evaluation results, and known limitations. Use the AI Office's templates as a starting point.
-
Implement copyright compliance: Develop or enhance your data curation pipeline to identify and respect copyright opt-out requests. Prepare a training data summary for public release.
-
Establish downstream communication channels: Create processes for sharing required information with downstream providers who integrate your model into their own AI systems.
-
If systemic risk applies, implement additional safeguards: Set up model evaluation, red-teaming, cybersecurity, and incident reporting programs that meet Article 55 requirements.
-
Engage with codes of practice: Participate in or monitor the development of codes of practice facilitated by the AI Office, and assess whether adherence to an approved code is the most efficient path to compliance.
-
Build compliance monitoring capabilities: Establish ongoing monitoring to track regulatory developments, respond to supervisory requests, and update documentation as your model evolves.
The GPAI provisions of the EU AI Act represent a fundamental shift in how foundation models are regulated. Model providers that invest in compliance infrastructure now will be positioned to operate effectively within the EU market and to serve downstream customers who need reliable, well-documented, and transparent AI capabilities.
Start Your Free Compliance AssessmentFor organizations evaluating tools to support their GPAI compliance efforts, our comparison of the best EU AI Act compliance tools provides a detailed analysis of available options. And for a broader understanding of what activities the Act prohibits outright, regardless of documentation or safeguards, review our guide to EU AI Act prohibited practices.