
EU AI Act Prohibited AI Practices: Is Your System at Risk?

AI Comply HQ Team · 12 min read

Article 5 of the EU AI Act draws a hard line. Certain AI practices are deemed so fundamentally threatening to human rights, safety, and democratic values that no amount of safeguards, transparency, or oversight can make them acceptable. They are simply prohibited.

The penalties for operating a prohibited AI system are the most severe under the Act: up to 35 million EUR or 7% of annual worldwide turnover, whichever is higher. And critically, the prohibition provisions have been in force since February 2, 2025, meaning organisations are already exposed to enforcement action if they continue operating prohibited systems.

This article examines each of the eight prohibited practice categories in detail, explains the regulatory rationale behind each prohibition, and provides a practical framework for auditing your AI systems to ensure none of them cross the line.

The Eight Prohibited AI Practices

1. Subliminal Manipulation

Article 5(1)(a) prohibits AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting a person's behaviour in a manner that causes or is reasonably likely to cause that person or another person significant harm.

What this means in practice:

The prohibition targets AI systems designed to influence human behaviour through mechanisms that bypass conscious awareness. This includes:

  • Dark patterns amplified by AI: Systems that use personalised psychological profiling to deploy manipulative interface designs tailored to exploit individual cognitive biases
  • Subconscious persuasion engines: AI that analyses user behaviour in real time and dynamically adjusts stimuli (visual, auditory, haptic) to influence decisions without the user's awareness
  • Algorithmic addiction engineering: Systems specifically designed to create compulsive behavioural loops by exploiting neurological reward mechanisms

The key legal elements are: (1) the technique operates beyond conscious awareness or is purposefully manipulative/deceptive, (2) it materially distorts behaviour, and (3) it causes or is likely to cause significant harm.

Note that the prohibition requires the element of harm. Persuasive AI used in advertising, for example, is not automatically prohibited, but it becomes prohibited when it deploys subliminal or deceptive techniques that cause significant harm. The boundary between legitimate personalisation and prohibited manipulation will likely be tested extensively in enforcement actions.

How to audit: Review your AI systems for any component that intentionally operates on user psychology without explicit user awareness. If a system is designed to influence behaviour through mechanisms the user cannot perceive or understand, it warrants immediate legal review.
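To make that review concrete, the three cumulative legal elements can be expressed as a simple screening structure. The sketch below is illustrative only, with invented field names; it identifies systems that warrant legal review, it does not make a legal determination.

```python
from dataclasses import dataclass

@dataclass
class SubliminalScreen:
    """Illustrative Article 5(1)(a) screen; field names are assumptions, not legal terms of art."""
    beyond_conscious_awareness: bool      # element 1, first limb: subliminal technique
    purposefully_manipulative: bool       # element 1, second limb: manipulative/deceptive by design
    materially_distorts_behaviour: bool   # element 2
    significant_harm_likely: bool         # element 3

    def needs_legal_review(self) -> bool:
        # The three elements are cumulative; element 1 is satisfied by either limb.
        element_1 = self.beyond_conscious_awareness or self.purposefully_manipulative
        return element_1 and self.materially_distorts_behaviour and self.significant_harm_likely

# Example: a deceptive engine that distorts behaviour with likely harm is flagged.
print(SubliminalScreen(False, True, True, True).needs_legal_review())  # True
```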

2. Exploitation of Vulnerabilities

Article 5(1)(b) prohibits AI systems that exploit any of the vulnerabilities of a natural person or specific group of persons due to their age, disability, or a specific social or economic situation, with the objective or the effect of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is likely to cause that person or another person significant harm.

What this means in practice:

This prohibition protects vulnerable populations from AI systems that take advantage of their specific characteristics:

  • Children: AI systems that exploit children's developmental vulnerabilities (limited critical thinking, susceptibility to authority figures, difficulty distinguishing advertising from content) to manipulate their behaviour
  • Elderly users: Systems that exploit age-related cognitive decline, reduced digital literacy, or social isolation to influence purchasing decisions, financial transactions, or information consumption
  • People with disabilities: AI that exploits disabilities (visual impairment, cognitive impairment, motor limitations) to create information asymmetries or manipulate interactions
  • Economically disadvantaged groups: Systems that exploit financial desperation to drive acceptance of unfavourable terms, predatory lending, or exploitative employment arrangements

How to audit: Identify which user populations interact with your AI systems. For each system, assess whether the system's design, outputs, or behavioural influence mechanisms could disproportionately affect vulnerable groups. Pay particular attention to AI-driven pricing, content recommendation, and decision-making systems that interact with diverse populations.
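One way to structure this assessment is an exposure matrix mapping each system to the vulnerability categories named in Article 5(1)(b). A minimal sketch, with hypothetical system names:

```python
# Hypothetical exposure matrix: which Article 5(1)(b) vulnerability categories
# each AI system plausibly interacts with. System names are invented for illustration.
exposure = {
    "recommendation_engine": {"age_children", "age_elderly"},
    "dynamic_pricing":       {"socio_economic"},
    "internal_search_tool":  set(),
}

def systems_needing_review(matrix: dict[str, set[str]]) -> list[str]:
    """Flag every system that touches at least one vulnerable population."""
    return sorted(name for name, categories in matrix.items() if categories)

print(systems_needing_review(exposure))  # ['dynamic_pricing', 'recommendation_engine']
```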

3. Social Scoring

Article 5(1)(c) prohibits AI systems used for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred, or predicted personal or personality characteristics, where the resulting social score leads to detrimental or unfavourable treatment in social contexts unrelated to the contexts in which the data was originally generated or collected, or treatment that is unjustified or disproportionate to the social behaviour or its gravity.

What this means in practice:

This prohibition targets scoring systems, whether operated by public authorities or by private entities, that:

  • Aggregate behavioural data across multiple life domains (financial, social, civic, online) to generate a composite "trustworthiness" or "social" score
  • Use such scores to restrict access to services, opportunities, or rights in contexts unrelated to the behaviour being scored
  • Create chilling effects on lawful behaviour by making individuals aware that their actions are being aggregated into a score that affects their life prospects

Unlike earlier drafts of the Act, the final text does not limit this prohibition to public authorities: private social scoring systems are equally caught where they produce the cross-context or disproportionate detrimental treatment described above. Depending on their design and effects, such systems may also engage other prohibitions (subliminal manipulation, exploitation of vulnerabilities).

How to audit: Whether you serve public or private sector clients, review whether any system aggregates personal data across multiple contexts to produce scores or classifications that affect individuals' access to services or opportunities.
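The legally decisive question is often whether the contexts that feed the score are unrelated to the context in which the score is applied. A minimal sketch of that cross-context check, with invented context labels:

```python
def cross_context_scoring(data_contexts: set[str], decision_context: str) -> bool:
    """True if the score would be applied in a social context unrelated to
    the contexts in which the underlying data was generated or collected."""
    return decision_context not in data_contexts

# A benefits decision driven by social-media behaviour data crosses contexts:
print(cross_context_scoring({"social_media", "online_forums"}, "welfare_benefits"))  # True
```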

4. Real-Time Remote Biometric Identification in Public Spaces

Article 5(1)(h) prohibits the use of real-time remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, with strictly limited exceptions.

What this means in practice:

This prohibition targets live surveillance systems that identify individuals in public spaces using biometric data (primarily facial recognition, but also voice recognition, gait analysis, and other biometric modalities). The ban specifically covers:

  • Real-time identification (as opposed to post-event analysis)
  • In publicly accessible spaces (streets, parks, shopping centres, transport hubs)
  • For law enforcement purposes

The narrow exceptions allow real-time biometric identification only for:

  • Targeted search for specific victims of abduction, trafficking, or sexual exploitation
  • Prevention of a specific, substantial, and imminent threat to the life or physical safety of natural persons, or of a genuine and present or genuine and foreseeable threat of a terrorist attack
  • Identification of suspects of specific criminal offences punishable by a custodial sentence of at least four years (within a defined list of offences)

Even these exceptions require prior authorisation by a judicial authority or an independent administrative authority whose decision is binding (or, in duly justified urgent cases, immediate use followed by a request for authorisation within 24 hours) and a fundamental rights impact assessment.

How to audit: If you develop or deploy biometric identification technology, assess whether any deployment involves real-time identification in publicly accessible spaces. Even systems sold for non-law-enforcement purposes must be evaluated for reasonably foreseeable use in law enforcement contexts.
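Because the prohibition only bites when the three conditions coincide, a deployment screen can be written as a simple conjunction. A hedged sketch, with assumed inputs:

```python
def rbi_prohibited(real_time: bool, publicly_accessible_space: bool,
                   law_enforcement_purpose: bool, authorised_exception: bool) -> bool:
    """Article 5(1)(h) screen: all three conditions must hold, and no
    authorised exception (victim search, imminent threat, serious offence)
    backed by prior judicial/administrative authorisation may apply."""
    in_scope = real_time and publicly_accessible_space and law_enforcement_purpose
    return in_scope and not authorised_exception
```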

5. Untargeted Facial Image Scraping

Article 5(1)(e) prohibits AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

What this means in practice:

This prohibition targets the practice of building facial recognition databases by indiscriminately collecting facial images without the knowledge or consent of the individuals depicted. Specifically prohibited:

  • Scraping social media platforms, websites, or publicly available image repositories to collect facial images for training or populating facial recognition systems
  • Capturing and storing facial images from CCTV or surveillance camera footage without specific, targeted authorisation
  • Aggregating facial images from multiple sources to build comprehensive biometric databases

The prohibition is absolute. There are no exceptions. It applies regardless of the downstream use of the database.

How to audit: Review your data collection practices for any biometric training data. If your AI systems use facial recognition, verify that all facial image data was collected with appropriate consent and authorisation, not through untargeted scraping. Audit third-party data providers to ensure they have not used prohibited collection methods.
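Where training datasets carry source metadata, part of this provenance audit can be automated. The sketch below assumes hypothetical source and consent_basis fields on each record; real pipelines will differ.

```python
# Hypothetical provenance filter for facial image training data.
# Any record from an untargeted-scraping source fails the Article 5(1)(e) screen.
PROHIBITED_SOURCES = {"web_scrape", "social_media_scrape", "cctv_bulk_capture"}

def failing_records(dataset: list[dict]) -> list[dict]:
    return [r for r in dataset
            if r.get("source") in PROHIBITED_SOURCES or not r.get("consent_basis")]

sample = [
    {"id": 1, "source": "consented_enrolment", "consent_basis": "explicit_consent"},
    {"id": 2, "source": "web_scrape", "consent_basis": None},
]
print(failing_records(sample))  # record 2 fails the screen
```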

6. Emotion Recognition in Workplaces and Educational Institutions

Article 5(1)(f) prohibits AI systems that infer emotions of natural persons in the areas of workplace and education, except where the AI system is intended to be put into service or placed on the market for medical or safety reasons.

What this means in practice:

This prohibition bans:

  • Workplace emotion monitoring: Systems that analyse employee facial expressions, voice patterns, body language, or biometric signals to infer emotional states during work activities
  • Educational emotion tracking: Systems that monitor students' emotional engagement, attention, frustration, or satisfaction during learning activities
  • Emotion-based performance evaluation: Using inferred emotional data as an input to performance reviews, productivity assessments, or behavioural evaluations

The medical and safety exceptions are narrow:

  • Medical systems that detect emotional distress for therapeutic purposes (e.g., mental health screening tools used in clinical settings)
  • Safety systems that detect operator fatigue or impairment in safety-critical roles (e.g., drowsiness detection for truck drivers or heavy machinery operators)

These exceptions require the system to be specifically designed and validated for the medical or safety purpose.

How to audit: Review any AI system that analyses human faces, voices, or body language. If the system infers emotional states, even as a secondary feature, and is deployed in a workplace or educational context, it is likely prohibited unless the medical or safety exception applies. Attention analytics, engagement scoring, and sentiment analysis applied to employees or students all warrant scrutiny.
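The decision logic reduces to two questions plus one narrow exception, which can be captured in a short screen. The inputs are assumptions for illustration:

```python
def emotion_recognition_prohibited(infers_emotions: bool,
                                   deployed_in_workplace_or_education: bool,
                                   validated_medical_or_safety_purpose: bool) -> bool:
    """Article 5(1)(f) screen: prohibited if the system infers emotions in a
    workplace or educational setting, unless specifically designed and
    validated for a medical or safety purpose."""
    return (infers_emotions
            and deployed_in_workplace_or_education
            and not validated_medical_or_safety_purpose)
```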

7. Biometric Categorisation on Sensitive Attributes

Article 5(1)(g) prohibits biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.

What this means in practice:

This prohibition targets AI systems that use biometric data (facial features, voice characteristics, gait, etc.) as the basis for inferring sensitive personal characteristics. Specifically:

  • Facial analysis systems that attempt to infer race or ethnicity
  • Voice analysis systems that purport to detect sexual orientation
  • Gait or physiological analysis systems that claim to infer political or religious beliefs
  • Any biometric system designed to categorise individuals by the sensitive attributes enumerated in the prohibition, which largely mirror the special categories of data under Article 9 of the GDPR

The prohibition focuses on the act of deduction or inference using biometric data. It does not prohibit biometric categorisation that relates to non-sensitive attributes (e.g., age estimation for age verification purposes), nor does it prohibit non-biometric categorisation based on sensitive attributes. The Act also carves out the labelling or filtering of lawfully acquired biometric datasets and the categorisation of biometric data in the area of law enforcement.

How to audit: Examine any AI system that processes biometric data. Determine whether the system categorises individuals or infers characteristics about them. If any inferred characteristic maps to the sensitive categories listed in the prohibition, the system must be discontinued or redesigned to remove that inference capability.
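A practical screen is to map every attribute your biometric systems infer against the Act's sensitive list. The attribute names below paraphrase Article 5(1)(g); the inferred-attribute inventory is hypothetical.

```python
SENSITIVE_ATTRIBUTES = {
    "race", "political_opinions", "trade_union_membership",
    "religious_or_philosophical_beliefs", "sex_life", "sexual_orientation",
}

def prohibited_inferences(uses_biometric_data: bool, inferred: set[str]) -> set[str]:
    """Return the subset of inferred attributes caught by Article 5(1)(g)."""
    return inferred & SENSITIVE_ATTRIBUTES if uses_biometric_data else set()

# Age estimation passes; political opinions inferred from face data do not:
print(prohibited_inferences(True, {"age_band", "political_opinions"}))  # {'political_opinions'}
```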

8. Predictive Policing Based on Profiling

Article 5(1)(d) prohibits AI systems used for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics.

What this means in practice:

This prohibition targets predictive policing systems that:

  • Predict an individual's likelihood of committing a crime based on their demographic profile, behavioural patterns, or assessed personality characteristics
  • Generate risk scores for individuals based on profiling rather than on concrete evidence of criminal activity
  • Feed such predictions into law enforcement targeting, surveillance allocation, or investigative prioritisation

The prohibition applies to predictions based solely on profiling or personality assessment. AI systems that assess criminal risk based on objective evidence (such as evidence gathered during an active investigation) are not prohibited, though they may be classified as high-risk.

How to audit: If you provide AI systems to law enforcement agencies, review whether any system generates individual-level risk predictions based on profiling. Systems that predict crime hotspots based on historical incident data (location-based, not individual-based) are generally not caught by this prohibition, but systems that assign risk scores to identifiable individuals based on their characteristics are.
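The word "solely" does the legal work here, so an audit screen should separate profiling-only predictions from those grounded in objective evidence. A sketch under assumed inputs:

```python
def predictive_policing_prohibited(individual_level: bool,
                                   based_on_profiling: bool,
                                   objective_evidence_basis: bool) -> bool:
    """Article 5(1)(d) screen: individual risk predictions based solely on
    profiling or personality assessment are prohibited; predictions grounded
    in objective, verifiable evidence of criminal activity are not (though
    they may still be classified as high-risk)."""
    return individual_level and based_on_profiling and not objective_evidence_basis
```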

Conducting a Prohibition Audit

Given the severity of penalties (up to 35 million EUR / 7% of turnover) and the fact that prohibitions are already enforceable, every organisation should conduct a systematic prohibition audit:

Step 1: Scope the Audit

Review your complete AI system inventory. Include systems developed internally, third-party systems deployed by your organisation, and AI components embedded in products you distribute.

Step 2: Apply Each Prohibition

For each system, work through all eight prohibition categories. Document a clear determination for each: does this system engage in this prohibited practice? Support each determination with evidence about the system's design, purpose, data, and deployment context.
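Determinations are easier to defend if each one is recorded in a consistent structure, one record per system per prohibition. One possible shape, with invented field names:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProhibitionDetermination:
    """One audit determination: one system against one Article 5 category."""
    system_name: str
    article_5_category: str        # e.g. "5(1)(a) subliminal manipulation"
    engages_in_practice: bool
    evidence: list[str] = field(default_factory=list)  # design docs, data sources, etc.
    assessed_on: date = field(default_factory=date.today)
    assessor: str = "unassigned"

record = ProhibitionDetermination(
    system_name="recommendation_engine",
    article_5_category="5(1)(a) subliminal manipulation",
    engages_in_practice=False,
    evidence=["design spec v2.3", "UX review 2025-01"],
)
```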

Step 3: Escalate Borderline Cases

If any system is close to a prohibition boundary, escalate to legal counsel immediately. Do not attempt to interpret borderline cases without specialist legal advice.

Step 4: Remediate or Decommission

For any system that falls within a prohibition, you have two options: redesign the system to remove the prohibited functionality entirely, or decommission it. There is no compliance pathway for prohibited practices.

Step 5: Document and Monitor

Record the audit results, including the methodology, evidence reviewed, and determinations made. Establish a monitoring process to re-evaluate systems as they evolve and as regulatory guidance clarifies the prohibition boundaries.
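Monitoring can be as lightweight as re-triggering the audit whenever a system materially changes, new guidance lands, or a fixed review interval lapses. A minimal sketch; the 180-day cadence is an assumption, not a regulatory requirement:

```python
from datetime import date, timedelta
from typing import Optional

REVIEW_INTERVAL = timedelta(days=180)  # assumed cadence; set to your own risk appetite

def needs_reassessment(last_assessed: date, system_changed: bool,
                       guidance_updated: bool, today: Optional[date] = None) -> bool:
    """Re-run the prohibition audit on material system change, on new
    regulatory guidance, or when the review interval has lapsed."""
    today = today or date.today()
    return system_changed or guidance_updated or (today - last_assessed) > REVIEW_INTERVAL
```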

Assess Your Compliance in Minutes

The prohibition screening is the first step in any compliance programme, and it is the most urgent. AI Comply HQ's guided compliance interview includes a comprehensive prohibited practice screening that maps your AI system's characteristics against all eight Article 5 categories.

In under 20 minutes, you will know whether any of your systems are at risk and what actions to take.

Start your free 7-day trial and screen your AI systems for prohibited practices today.

The prohibition provisions are already in force. Do not wait for the August 2, 2026 deadline. Act now.

Ready to assess your EU AI Act compliance?

Start a guided compliance interview, get your AI system's risk classification, and generate an audit-ready report.

Start Your Free 7-Day Trial