From low to high risk: Breaking down the EU AI Act’s risk levels

4 min read
Key takeaways:
  • The EU AI Act uses a risk-based approach to regulate AI, classifying systems into four levels: prohibited, high-risk, limited-risk, and minimal-risk, each with unique compliance rules.
  • High-risk AI systems require strict safeguards, including technical documentation, human oversight, and registration in the EU’s AI registry to ensure safety and accountability.
  • General Purpose AI (GPAI) models, such as GPT or BERT, are versatile systems regulated according to their impact, with strict documentation, transparency, and safeguards required for those posing systemic risks.
In this article
The Current Status and Implementation Timeline
The Four Risk Levels Defined by the EU AI Act
General Purpose AI (GPAI) Considerations
Navigating Risk in the EU AI Act with TrustPath

The EU AI Act is changing the way artificial intelligence is developed and used in Europe. With its risk-based approach, the Act sets clear rules for AI systems, ensuring they are safe, trustworthy, and transparent. But what does this mean for AI companies?

This article breaks down the EU AI Act’s risk levels, focusing on what they mean for your AI business. We’ll explain how the risk categories work, why high-risk AI deserves special attention, and how you can prepare for compliance. Whether you’re already building AI or just exploring opportunities in Europe, this article will help you navigate the EU AI Act with confidence.

Let’s start by looking at the four categories defined by the Act and providing clear examples to help you understand where your system might fit.

The Current Status and Implementation Timeline

The EU AI Act was formally adopted in 2024, the culmination of extensive negotiations between the European institutions. Organizations should note these critical dates:

  • Act came into force: 1 August 2024
  • First provisions (the bans on unacceptable-risk AI) apply: 2 February 2025
  • Obligations for general-purpose AI models apply: 2 August 2025
  • Most high-risk requirements apply: 2 August 2026, with an extended deadline of 2 August 2027 for AI embedded in already-regulated products

Now that we understand the implementation timeline, let’s move to the four risk levels defined by the EU AI Act.

The Four Risk Levels Defined by the EU AI Act

As mentioned above, the EU AI Act follows a risk-based approach, categorizing AI systems into four risk levels: prohibited (unacceptable risk), high-risk, limited-risk, and minimal-risk.

Each level comes with its own set of rules and compliance requirements. Understanding where your AI system fits is essential for ensuring compliance with EU AI regulations.

Unacceptable Risk (Prohibited AI)

The Act identifies eight categories of AI practices that are banned outright due to their unacceptable risks to fundamental rights and societal values:

  1. Subliminal manipulation - AI designed to materially distort behavior without a person’s awareness, such as covert influence on voting or consumer decisions.
  2. Vulnerability exploitation - technologies targeting vulnerabilities related to age, disability, or social and economic circumstances, particularly concerning minors and disadvantaged groups.
  3. Biometric categorization - systems inferring sensitive characteristics such as race, political opinions, religious beliefs, or sexual orientation from biometric data.
  4. Social scoring - general-purpose evaluation systems that could lead to discriminatory treatment based on social behavior or personal characteristics.
  5. Predictive policing - assessing the risk that an individual will commit a crime based solely on profiling or personality traits.
  6. Untargeted facial-image scraping - building or expanding facial recognition databases by scraping images from the internet or CCTV footage.
  7. Emotion recognition in workplaces and schools - except where used for medical or safety reasons.
  8. Real-time remote biometric identification - use in publicly accessible spaces for law enforcement purposes, outside narrowly defined exceptions.

Law enforcement may receive limited exceptions for certain applications, subject to strict oversight and judicial authorization.

High-Risk AI Systems

High-risk AI systems require extensive compliance measures but are permitted with proper safeguards. These systems fall into two main categories:

Safety Components of Regulated Products:

  • Medical devices and equipment
  • Transportation systems
  • Critical infrastructure components
  • Industrial machinery

Standalone High-Risk Applications include AI systems used in:

  • Biometric identification and categorization
  • Education and vocational training
  • Employment and worker management
  • Access to essential public and private services (such as credit scoring)
  • Law enforcement and justice administration
  • Migration and border control

Organizations deploying high-risk AI must implement comprehensive risk management systems, maintain detailed technical documentation, and ensure human oversight. Regular conformity assessments and registration in the EU's public database of high-risk AI systems are mandatory, except for certain law enforcement applications, which follow special provisions.
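
To make the documentation duty concrete, here is a minimal sketch of the kind of internal record a provider might keep for a high-risk system. The field names are illustrative assumptions loosely inspired by the Act’s documentation requirements, not an official schema:

```python
# Illustrative record for a high-risk AI system (not an official schema).
from dataclasses import dataclass, field

@dataclass
class HighRiskSystemRecord:
    system_name: str
    intended_purpose: str            # what the system does, and for whom
    risk_management_measures: list[str] = field(default_factory=list)
    human_oversight_plan: str = ""   # how humans can monitor and intervene
    conformity_assessed: bool = False
    eu_database_registration_id: str | None = None  # hypothetical identifier

record = HighRiskSystemRecord(
    system_name="CVScreener",
    intended_purpose="Ranking job applications for human review",
    risk_management_measures=["bias testing on historical hiring data",
                              "quarterly accuracy audits"],
    human_oversight_plan="A recruiter reviews and can override every ranking",
)
```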

Limited-Risk AI Systems

Limited-risk systems require transparency but face lighter compliance obligations than high-risk systems. The primary focus is ensuring users understand when they're interacting with AI technology.

Requirements include:

  • Clear disclosure of AI interaction
  • Transparency about emotion recognition capabilities
  • Labeling of deep fake content
  • Regular system performance monitoring

Common examples include chatbots, recommendation systems, and content moderation tools.
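
As a minimal sketch of the disclosure requirement, the snippet below prepends an AI notice to a chatbot’s first reply. The generate_answer function is a hypothetical stand-in for a real model call, and the notice wording is a product choice, not text mandated by the Act:

```python
# Minimal sketch: disclose AI interaction on the first turn of a chat.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

def generate_answer(user_message: str) -> str:
    # Hypothetical stand-in for a real model call.
    return f"Echo: {user_message}"

def respond(user_message: str, history: list[str]) -> str:
    answer = generate_answer(user_message)
    if not history:  # first turn: lead with the disclosure
        return f"{AI_DISCLOSURE}\n\n{answer}"
    return answer

print(respond("Hi!", history=[]))
```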

Minimal-Risk AI Systems

The majority of AI applications fall into this category, facing minimal regulatory requirements. These systems include spam filters, inventory management tools, and basic business analytics applications. While formal compliance obligations are limited, organizations should maintain good practices in:

  • Data quality management
  • System documentation
  • User privacy protection
  • Regular performance monitoring
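
Before moving on to general-purpose AI, here is an illustrative recap of the four tiers and their core obligations as described above - a reading aid, not a legal classification tool:

```python
# Simplified recap of the EU AI Act's four risk tiers (orientation only).
RISK_TIERS: dict[str, list[str]] = {
    "prohibited":   ["banned outright; narrow law-enforcement exceptions"],
    "high-risk":    ["risk management system", "technical documentation",
                     "human oversight", "conformity assessment",
                     "registration in the EU database"],
    "limited-risk": ["disclose AI interaction", "label deepfake content",
                     "transparency about emotion recognition"],
    "minimal-risk": ["no mandatory obligations; voluntary good practice"],
}

for tier, duties in RISK_TIERS.items():
    print(f"{tier}: {'; '.join(duties)}")
```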

General Purpose AI (GPAI) Considerations

What is General Purpose AI?

General Purpose AI systems, also known as foundation models, are versatile AI systems trained on vast amounts of data that can be adapted for multiple tasks and applications. Think of platforms like GPT, BERT, or LLaMA - these models aren't built for a single purpose but can be used across many different applications, from writing code to analyzing documents to generating images.

Why Does the EU Care?

The EU's focus on GPAI stems from three main concerns:

  • These models can impact millions of users across multiple sectors simultaneously
  • Only a few large companies have the resources to develop them, raising competition concerns
  • Organizations building applications on these models often can't fully understand or control their behavior

Risk Classification for GPAI

The Act takes a straightforward approach to GPAI regulation based on the model's potential impact:

Systemic Risk Models

  • The most powerful models - under the Act, a GPAI model is presumed to pose systemic risk when the cumulative compute used for its training exceeds 10^25 floating-point operations (a rough estimate is sketched after this list)
  • Think of the largest language models and multimodal AI systems
  • Face strict requirements around documentation, testing, and transparency
  • Must provide detailed information to downstream users about capabilities and limitations
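
The sketch below shows how a provider might roughly gauge training compute against that threshold. The "6 x parameters x tokens" rule of thumb for dense transformers comes from common scaling-law practice, not from the Act itself, and the model size is hypothetical:

```python
# Rough check against the Act's systemic-risk presumption:
# a GPAI model is presumed high-impact above 1e25 training FLOPs.
SYSTEMIC_RISK_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Common rule of thumb for dense transformers: ~6 FLOPs per parameter per token."""
    return 6 * n_parameters * n_tokens

# Hypothetical model: 70B parameters trained on 2T tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")
print("presumed systemic risk" if flops > SYSTEMIC_RISK_FLOPS
      else "below the systemic-risk presumption")
```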

Non-Systemic Risk Models

  • Smaller-scale foundation models
  • Subject to basic transparency requirements
  • Must still comply with sector-specific regulations depending on use case

Open Source Exception

The Act provides some flexibility for open-source models if they:

  • Allow free access and modification
  • Maintain development transparency
  • Avoid high-risk applications

Organizations using GPAI should verify their provider's compliance status and implement appropriate safeguards based on their specific use case. The key is understanding that while the base model has its own risk level, the final application might fall into a different risk category under the Act.
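
A toy sketch of that last point: the application’s tier is driven by the use case, not by the base model. The use-case categories below are a loose simplification of the Act’s high-risk list, not a legal test:

```python
# Illustrative only: the same GPAI model can back applications in
# different risk tiers depending on how it is used.
HIGH_RISK_USE_CASES = {"hiring", "credit scoring", "exam grading",
                       "border control"}

def application_risk(use_case: str, interacts_with_users: bool) -> str:
    if use_case in HIGH_RISK_USE_CASES:
        return "high-risk"
    if interacts_with_users:
        return "limited-risk"  # transparency duties apply
    return "minimal-risk"

print(application_risk("hiring", True))            # high-risk
print(application_risk("customer support", True))  # limited-risk
```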

Navigating Risk in the EU AI Act with TrustPath

The EU AI Act presents organizations with complex compliance challenges that require systematic approaches and robust tools. As we've seen, different AI risk levels demand different levels of oversight, documentation, and ongoing monitoring. Successfully navigating these requirements while maintaining innovation and competitive advantage requires a comprehensive compliance partner.

TrustPath offers comprehensive solutions for navigating EU AI Act risks and compliance:

  • Automated risk assessment tools
  • Documentation management systems
  • Compliance monitoring dashboards
  • Expert guidance and support

Schedule your demo today for a personalized compliance assessment and platform demonstration.
