How to Conduct an EU AI Act Risk Assessment (Step-by-Step)

AI Comply HQ Team · 10 min read

Risk assessment is the cornerstone of EU AI Act compliance. Every obligation, from documentation to conformity assessment to human oversight, flows from a single determination: what risk tier does your AI system fall into? Get this wrong, and you will either over-invest in compliance for a minimal-risk system or, far worse, under-invest in compliance for a high-risk one.

This guide walks you through the complete risk assessment process as defined by the EU AI Act (Regulation (EU) 2024/1689). It mirrors the structured approach used by compliance professionals and is designed to produce a defensible, auditable classification for each of your AI systems.

Why Risk Assessment Must Come First

The EU AI Act is a risk-based regulation. Unlike prescriptive frameworks that impose uniform requirements regardless of context, the AI Act calibrates obligations to the level of risk a system poses to health, safety, and fundamental rights.

This means risk assessment is not merely an administrative exercise. It is the gateway that determines:

  • Whether your system is banned outright (Article 5 prohibited practices)
  • Whether you must comply with the full high-risk regime (Chapter III, Section 2)
  • Whether you face limited transparency obligations (Article 50)
  • Whether you are exempt from mandatory requirements (minimal risk)

A flawed risk assessment will cascade errors through your entire compliance programme. Invest the time to get it right.

Step 1: Build Your AI System Inventory

Before you can assess risk, you need a complete inventory of every AI system your organisation develops, deploys, imports, or distributes. Many organisations underestimate the scope of this exercise.

What Counts as an AI System?

The EU AI Act defines an AI system broadly in Article 3(1): a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment, and which infers from the inputs it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

This definition captures:

  • Deep learning models and neural networks
  • Traditional machine learning classifiers (random forests, gradient boosting, SVMs)
  • Natural language processing systems (chatbots, document analysers, summarisers)
  • Computer vision systems (image classification, object detection, facial recognition)
  • Recommendation engines
  • Robotic process automation with adaptive decision-making components
  • Generative AI systems (LLMs, image generators, code assistants)

It does not capture simple rule-based systems without any learning or inference capability, though the boundary can be unclear. When in doubt, include the system in your inventory and assess it.

Information to Capture

For each AI system, document at minimum:

| Field | Purpose |
| --- | --- |
| System name and version | Unique identification |
| Business owner | Accountability |
| Technical team | Day-to-day responsibility |
| Intended purpose | What the system is designed to do |
| Deployment context | Where and how it is used |
| Affected populations | Who is impacted by the system's outputs |
| Data sources | What data the system ingests |
| Output type | Predictions, classifications, recommendations, decisions, content |
| Autonomy level | Does a human review outputs before action? |
| Current status | Development, testing, production, deprecated |

This inventory becomes the master register against which all subsequent compliance activities are tracked.
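
If you maintain the register in code rather than a spreadsheet, the fields above translate naturally into a typed record. The Python sketch below is illustrative only; the `AISystemRecord` class and its field names are our own choices, not anything prescribed by the Act:

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    DEVELOPMENT = "development"
    TESTING = "testing"
    PRODUCTION = "production"
    DEPRECATED = "deprecated"


@dataclass
class AISystemRecord:
    """One entry in the AI system inventory (field names are illustrative)."""
    system_id: str                   # unique identifier, e.g. "credit-scorer-v2"
    name: str
    version: str
    business_owner: str              # accountability
    technical_team: str              # day-to-day responsibility
    intended_purpose: str            # what the system is designed to do
    deployment_context: str          # where and how it is used
    affected_populations: list[str]  # who is impacted by the system's outputs
    data_sources: list[str]          # what data the system ingests
    output_type: str                 # predictions, recommendations, decisions, content
    human_reviews_outputs: bool      # autonomy level: does a human review before action?
    status: Status = Status.DEVELOPMENT
```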

Step 2: Screen for Prohibited Practices

The first risk assessment gate is binary: is this system prohibited under Article 5? If yes, no amount of compliance engineering will save it. The system must be redesigned or decommissioned.

Work through the eight categories of prohibited practices systematically:

Prohibition Checklist

  1. Social scoring: Does the system evaluate or classify natural persons based on social behaviour or predicted personality characteristics, leading to detrimental treatment unrelated to the context in which the data was generated?

  2. Subliminal manipulation: Does the system deploy subliminal techniques beyond a person's consciousness to materially distort behaviour in a manner that causes or is likely to cause harm?

  3. Exploitation of vulnerabilities: Does the system exploit vulnerabilities related to age, disability, or social or economic situation to materially distort behaviour?

  4. Untargeted facial image scraping: Does the system create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage?

  5. Emotion recognition in restricted contexts: Does the system infer emotions of natural persons in workplaces or educational institutions, except for medical or safety reasons?

  6. Biometric categorisation on sensitive attributes: Does the system categorise natural persons based on biometric data to deduce race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation?

  7. Predictive policing: Does the system make risk assessments of natural persons to predict criminal offences based solely on profiling or personality traits?

  8. Real-time remote biometric identification in public spaces: Does the system perform real-time biometric identification in publicly accessible spaces for law enforcement purposes, outside the narrow exceptions in Article 5(1)(h)?

For each system in your inventory, document a yes/no answer to each question with supporting evidence. A "maybe" should be escalated to legal review immediately.
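
If you want tooling to enforce that every question is answered before a system is cleared, the screen is easy to automate. The sketch below is a hypothetical implementation, assuming a simple yes/no/maybe workflow; the names `PROHIBITION_QUESTIONS` and `screen_prohibitions` are ours, and the abridged wording is no substitute for the full Article 5 text:

```python
from dataclasses import dataclass

# The eight Article 5 screening questions, keyed by short labels (wording abridged).
PROHIBITION_QUESTIONS = {
    "social_scoring": "Evaluates persons on social behaviour, causing unrelated detrimental treatment?",
    "subliminal_manipulation": "Uses subliminal techniques to materially distort behaviour, causing harm?",
    "vulnerability_exploitation": "Exploits age, disability, or socio-economic vulnerability?",
    "facial_scraping": "Builds facial recognition databases via untargeted scraping?",
    "emotion_recognition_restricted": "Infers emotions in workplaces or education (non-medical, non-safety)?",
    "biometric_categorisation": "Deduces sensitive attributes from biometric data?",
    "predictive_policing": "Predicts criminal offences based solely on profiling or personality traits?",
    "realtime_rbi": "Real-time remote biometric ID in public spaces for law enforcement?",
}


@dataclass
class ScreeningAnswer:
    question_key: str
    answer: str    # "yes", "no", or "maybe"
    evidence: str  # supporting evidence for the answer


def screen_prohibitions(answers: list[ScreeningAnswer]) -> str:
    """Return the screening outcome for one system.

    Any "yes" means the system is prohibited; any "maybe" must be
    escalated to legal review before classification proceeds.
    """
    answered = {a.question_key for a in answers}
    missing = set(PROHIBITION_QUESTIONS) - answered
    if missing:
        raise ValueError(f"Unanswered prohibition questions: {sorted(missing)}")
    if any(a.answer == "yes" for a in answers):
        return "PROHIBITED"
    if any(a.answer == "maybe" for a in answers):
        return "ESCALATE_TO_LEGAL"
    return "CLEARED"
```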

Step 3: Evaluate Against High-Risk Categories

If a system clears the prohibition screen, the next question is whether it qualifies as high-risk. The Act defines two pathways to high-risk classification.

Pathway 1: Safety Component (Article 6(1))

An AI system is high-risk if it is:

  • A safety component of a product covered by EU harmonised legislation listed in Annex I (e.g., machinery, medical devices, vehicles, toys, lifts, pressure equipment), and
  • The product is required to undergo a third-party conformity assessment under that legislation

This pathway captures AI systems embedded in physical products that are already regulated.

Pathway 2: Annex III Standalone Systems (Article 6(2))

An AI system is high-risk if it falls into one of the use-case categories listed in Annex III:

  1. Biometrics: remote biometric identification (excluding real-time use in publicly accessible spaces for law enforcement, which Article 5 prohibits outside narrow exceptions), biometric categorisation, emotion recognition
  2. Critical infrastructure: management and operation of road traffic, water, gas, heating, electricity supply, and digital infrastructure
  3. Education and vocational training: determining access, assessing learning outcomes, monitoring prohibited behaviour during exams, adaptive learning that affects education path
  4. Employment and worker management: recruitment, screening, hiring decisions, task allocation, performance monitoring, promotion, termination
  5. Access to essential services: creditworthiness assessment, risk assessment and pricing for life and health insurance, evaluation and classification of emergency calls, eligibility for public assistance benefits
  6. Law enforcement: individual risk assessment, polygraph-adjacent tools, evidence evaluation, profiling in criminal investigations
  7. Migration, asylum, and border control: risk assessment, document authentication, visa application processing
  8. Administration of justice: researching and interpreting facts and law, applying law to facts

The Article 6(3) Exception

Even if a system falls into an Annex III category, Article 6(3) provides a narrow exception: a system is not high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making. This exception does not apply if the system performs profiling of natural persons.

If you intend to rely on this exception, you must document the reasoning thoroughly. Regulators will scrutinise claims that an Annex III system is not actually high-risk.
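
The classification logic of both pathways, including the Article 6(3) exception and its profiling carve-out, reduces to a small decision function. The sketch below is a simplification under our own naming; each boolean input stands in for an assessment that must itself be evidenced and documented:

```python
def classify_high_risk(
    is_annex_i_safety_component: bool,   # Pathway 1: safety component of an Annex I product
    requires_third_party_assessment: bool,
    annex_iii_category: str | None,      # Pathway 2: e.g. "employment", or None
    performs_profiling: bool,
    poses_significant_risk: bool,        # default to True absent documented evidence otherwise
) -> tuple[bool, str]:
    """Return (is_high_risk, rationale) under Articles 6(1)-6(3)."""
    # Pathway 1: Article 6(1) safety-component route.
    if is_annex_i_safety_component and requires_third_party_assessment:
        return True, "Art. 6(1): safety component subject to third-party conformity assessment"

    # Pathway 2: Article 6(2) Annex III use cases.
    if annex_iii_category is not None:
        # The Art. 6(3) exception never applies where the system profiles natural persons.
        if performs_profiling:
            return True, f"Art. 6(2): Annex III ({annex_iii_category}); profiling bars the Art. 6(3) exception"
        if not poses_significant_risk:
            return False, (f"Art. 6(3) exception claimed for Annex III ({annex_iii_category}); "
                           "document the reasoning thoroughly")
        return True, f"Art. 6(2): Annex III category ({annex_iii_category})"

    return False, "Not high-risk under Art. 6(1) or 6(2)"
```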

Step 4: Assess Limited-Risk Transparency Obligations

Systems that are neither prohibited nor high-risk may still trigger transparency requirements under Article 50:

  • Chatbots and conversational AI: Users must be informed they are interacting with an AI system
  • Emotion recognition systems: Individuals must be informed when such a system is applied to them
  • Deep fakes: Synthetic content depicting real persons or events must be labelled
  • AI-generated content: Text, images, audio, or video generated by AI must be marked in a machine-readable format

Assess each system for these obligations and document whether they apply.
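
As a worked example, the Article 50 screen can be expressed as a mapping from system characteristics to duties. The function below is a hypothetical sketch in our own naming; the duty descriptions paraphrase the obligations listed above:

```python
def article_50_obligations(
    interacts_with_users: bool,  # chatbot or other conversational interface
    recognises_emotions: bool,
    produces_deepfakes: bool,    # synthetic content depicting real persons or events
    generates_content: bool,     # AI-generated text, image, audio, or video
) -> list[str]:
    """Return the applicable Article 50 transparency duties for one system."""
    duties = []
    if interacts_with_users:
        duties.append("Inform users they are interacting with an AI system")
    if recognises_emotions:
        duties.append("Inform individuals when emotion recognition is applied to them")
    if produces_deepfakes:
        duties.append("Label synthetic content depicting real persons or events")
    if generates_content:
        duties.append("Mark AI-generated content in a machine-readable format")
    return duties
```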

Step 5: Document Your Classification Decision

For every AI system in your inventory, produce a risk classification record that includes:

  • System identifier from your inventory
  • Risk classification: Prohibited, High-Risk (Pathway 1 or 2), Limited Risk, or Minimal Risk
  • Rationale: The specific articles and annexes that support the classification
  • Evidence reviewed: What information you considered (system architecture, intended purpose, deployment context, affected populations)
  • Reviewer: Who conducted the assessment and their qualifications
  • Date: When the assessment was completed
  • Review schedule: When the classification will be re-evaluated

This documentation serves two purposes: it demonstrates regulatory diligence, and it provides a defensible basis if your classification is challenged.
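
A minimal sketch of such a record as a typed, tamper-resistant structure might look like the following (the `ClassificationRecord` class and `RiskTier` labels are our own, not terms from the Act):

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK_ART_6_1 = "high-risk (Art. 6(1), safety component)"
    HIGH_RISK_ART_6_2 = "high-risk (Art. 6(2), Annex III)"
    LIMITED_RISK = "limited risk (Art. 50)"
    MINIMAL_RISK = "minimal risk"


@dataclass(frozen=True)  # frozen: a completed assessment should not be edited in place
class ClassificationRecord:
    system_id: str                      # identifier from the inventory
    tier: RiskTier
    rationale: str                      # specific articles and annexes relied on
    evidence_reviewed: tuple[str, ...]  # architecture, purpose, context, populations
    reviewer: str                       # assessor and their qualifications
    assessed_on: date
    next_review: date                   # scheduled re-evaluation date
```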

Step 6: Plan Compliance Activities Based on Classification

With classifications in hand, map each system to its required compliance activities:

| Risk Tier | Required Actions |
| --- | --- |
| Prohibited | Decommission or fundamental redesign |
| High-Risk | Full Chapter III, Section 2 compliance: risk management (Art. 9), data governance (Art. 10), technical documentation (Art. 11, Annex IV), record-keeping (Art. 12), transparency/instructions for use (Art. 13), human oversight (Art. 14), accuracy/robustness/cybersecurity (Art. 15), EU database registration (Arts. 49 and 71), conformity assessment (Art. 43) |
| Limited Risk | Transparency obligations (Art. 50) |
| Minimal Risk | No mandatory obligations; voluntary codes of conduct encouraged |

Prioritise your high-risk systems. They carry the heaviest compliance burden and the steepest penalties for failure.
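
In tooling terms, this mapping is just a lookup table from tier to workstreams. A sketch under our own naming, with article references abridged from the table above:

```python
# Minimal mapping from risk tier to compliance workstreams (illustrative, not exhaustive).
COMPLIANCE_ACTIONS = {
    "prohibited": ["Decommission or fundamentally redesign"],
    "high-risk": [
        "Risk management system (Art. 9)",
        "Data governance (Art. 10)",
        "Technical documentation (Art. 11, Annex IV)",
        "Record-keeping (Art. 12)",
        "Transparency and instructions for use (Art. 13)",
        "Human oversight (Art. 14)",
        "Accuracy, robustness, cybersecurity (Art. 15)",
        "EU database registration (Arts. 49 and 71)",
        "Conformity assessment (Art. 43)",
    ],
    "limited-risk": ["Transparency obligations (Art. 50)"],
    "minimal-risk": ["Voluntary codes of conduct (encouraged)"],
}


def plan_for(tier: str) -> list[str]:
    """Look up the action list for a classified system."""
    return COMPLIANCE_ACTIONS[tier]
```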

Step 7: Establish Ongoing Monitoring and Re-Assessment

Risk assessment is not a one-time activity. The EU AI Act requires ongoing monitoring, and real-world conditions change:

  • System capabilities evolve through retraining or fine-tuning
  • Deployment contexts shift as business needs change
  • New regulatory guidance or case law may alter classification boundaries
  • New data about real-world impacts may reveal previously unidentified risks

Establish a review cadence (quarterly at minimum for high-risk systems, annually for others) and define triggers for ad-hoc re-assessment (significant model updates, new deployment contexts, incident reports).
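
That cadence-plus-triggers policy is straightforward to encode. Here is a sketch using the quarterly/annual intervals and trigger events suggested above; all names and the event vocabulary are our own assumptions:

```python
from datetime import date, timedelta

# Scheduled review intervals: quarterly for high-risk, annual for everything else.
REVIEW_INTERVAL = {
    "high-risk": timedelta(days=90),
    "default": timedelta(days=365),
}

# Events that force an ad-hoc re-assessment regardless of schedule.
AD_HOC_TRIGGERS = {"model_update", "new_deployment_context", "incident_report"}


def reassessment_due(tier: str, last_assessed: date, events: set[str],
                     today: date | None = None) -> bool:
    """True if a scheduled review has lapsed or an ad-hoc trigger has fired."""
    today = today or date.today()
    interval = REVIEW_INTERVAL.get(tier, REVIEW_INTERVAL["default"])
    return (today - last_assessed) >= interval or bool(events & AD_HOC_TRIGGERS)
```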

Common Pitfalls to Avoid

Under-scoping the inventory. Organisations routinely miss AI systems embedded in third-party SaaS tools, legacy systems with ML components, or AI used by individual teams without central IT awareness.

Conflating "intended purpose" with "actual use." The Act requires you to assess risks under both the intended use and reasonably foreseeable misuse. A system designed for customer segmentation that could plausibly be repurposed for discriminatory pricing must be assessed against that misuse as well.

Relying too heavily on the Article 6(3) exception. The exception for Annex III systems that pose no significant risk is narrow and will be interpreted strictly by regulators. Default to treating Annex III systems as high-risk unless you have overwhelming evidence to the contrary.

Treating risk assessment as a legal exercise only. Effective risk assessment requires deep technical understanding of the system, its data, and its failure modes. Involve your engineering team alongside legal and compliance.

Assess Your Compliance in Minutes

Conducting a rigorous risk assessment takes expertise, structure, and thorough documentation. AI Comply HQ streamlines this entire process.

Our guided compliance interview walks you through every assessment step described in this article. You answer plain-language questions about your AI system (its purpose, deployment context, affected populations, data sources) and our system automatically maps your answers to the EU AI Act's risk classification framework.

At the end, you receive:

  • A definitive risk classification with the specific articles and annexes that apply
  • A gap analysis identifying which compliance requirements you have met and which remain open
  • An action plan with prioritised steps to close compliance gaps
  • An audit-ready report you can present to regulators, clients, or your board

The entire process takes under 20 minutes per AI system.

Start your free 7-day trial and classify your first AI system today.

Ready to assess your EU AI Act compliance?

Start a guided compliance interview, get your AI system's risk classification, and generate an audit-ready report.

Start Your Free 7-Day Trial