What is the EU AI Act?
The EU AI Act is a regulation adopted by the European Union that sets clear rules for how AI can be developed, used, and sold in Europe. Its main goal is to make sure AI systems are safe, trustworthy, and transparent. The law affects both companies building AI and companies that buy and use AI systems.
Who does the EU AI Act affect?
In simple terms, the EU AI Act affects:
- AI companies developing and/or selling AI systems in Europe.
- Any business or organization using AI systems in their work, even if the company is not based in the EU.
More specifically, the EU AI Act defines the following parties, making them subject to this law:
- Providers: Companies that develop and sell AI systems in the EU under their own brand name.
- Deployers: Businesses or organizations that use AI systems or AI-powered products in their operations within the EU.
- Product manufacturers: Companies that produce physical products with embedded AI technology.
- Importers: Organizations that bring AI systems from outside the EU and sell them in the EU under another company’s brand name.
- Distributors: Businesses that make AI systems available in the EU market, even though they are not the developers or importers of the systems.
- Authorized representatives: Individuals or organizations that represent an AI system provider and handle their legal obligations within the EU.
Want to understand what your role is in the AI supply chain?
Get in touch
What are the risk levels defined by the EU AI Act?
The EU AI Act divides AI systems into four risk levels: minimal risk, limited risk, high risk, and unacceptable risk. Each level has different requirements based on how much risk the AI system could pose to people or society.
Risk categories according to the EU AI Act, with examples of AI systems according to the risk they pose:
Minimal risk
- AI used in video games
- AI for spam filters in email
- AI-powered search engines
- Recommendation algorithms for online shopping or streaming services
Limited risk
- AI chatbots on customer service websites
- AI for simple fraud detection
- Virtual assistants like Siri or Alexa
- AI systems used for creating content (e.g., automatic translations or basic text generation)
High risk
- AI in healthcare (e.g., medical diagnostics)
- AI for critical infrastructure (e.g., managing electricity, water)
- AI in education (e.g., systems that determine test scores)
- AI in law enforcement (e.g., facial recognition for crime prevention)
- AI in transportation (e.g., self-driving cars)
- AI in employment (e.g., systems that make hiring or promotion decisions)
Unacceptable risk
- AI that uses real-time biometric surveillance (e.g., facial recognition) in public spaces without proper legal justification
- AI systems that manipulate people through subliminal techniques or exploit vulnerable groups (like children or people with disabilities)
- AI that scores individuals’ social behavior (like China’s social credit scoring)
- AI used in “predictive policing” (predicting crimes before they happen)
What are the key obligations for AI companies?
The obligations AI companies must follow depend on two factors: the risk level associated with the AI system and the company's role in the AI value chain.
Risk level associated with the AI system
Minimal risk
For AI systems with minimal risk, such as AI in games or spam filters, there are no specific obligations under the EU AI Act. These systems are considered low impact and are not subject to additional requirements.
Limited risk
Companies with limited-risk AI systems, like chatbots or virtual assistants, must meet one core obligation:
- Transparency: Inform users that they are interacting with AI. For example, when using a chatbot, companies must let customers know they're speaking with an AI, not a human (see the sketch below).
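To make this concrete, here is a minimal sketch of one way a company could satisfy the disclosure duty, written in Python. The Chatbot class and generate_reply method are illustrative placeholders, not an API defined by the Act or by any particular vendor:

```python
# Minimal illustration: disclose that the user is talking to an AI before
# the first reply. All names here are hypothetical placeholders.

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."

class Chatbot:
    def __init__(self) -> None:
        self.disclosed = False  # has this user been told they face an AI?

    def reply(self, user_message: str) -> str:
        answer = self.generate_reply(user_message)
        if not self.disclosed:
            self.disclosed = True
            return f"{AI_DISCLOSURE}\n\n{answer}"
        return answer

    def generate_reply(self, user_message: str) -> str:
        # Stand-in for the actual model call.
        return f"(model response to: {user_message!r})"

bot = Chatbot()
print(bot.reply("What are your opening hours?"))  # first reply carries the notice
```

However the disclosure is implemented, the key point is that it happens before the interaction proceeds, not buried in the terms of service.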
High risk
For high-risk AI systems, such as those used in healthcare or law enforcement, companies have more obligations:
- Technical documentation: Create and maintain detailed documents explaining how the AI system works, how it was trained, and what risks it poses. This documentation must be available for regulatory review.
- Risk management: Implement a risk management system to identify, assess, and reduce risks throughout the entire lifecycle of the AI system. This includes risks to privacy, security, and safety (a simple risk-register sketch follows this list).
- Data governance: Ensure that the data used to train the AI system is high quality, accurate, and free from bias. This is crucial for ensuring that the AI system works fairly and reliably.
- Transparency: Ensure that users are aware of the AI’s decision-making process, especially when the system makes significant decisions affecting their lives, such as job applications or legal decisions.
- Human oversight: Implement procedures that allow human intervention if the AI system malfunctions or makes decisions that could harm people. This is especially important in high-risk sectors like healthcare.
- Security measures: Protect the AI system from hacking or unauthorized access. Companies must ensure that the system remains secure throughout its use.
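As a rough illustration of what the risk management and documentation duties can translate into day to day, the sketch below models a machine-readable risk register. The schema and field names are our own assumptions; the Act prescribes the obligations, not this format:

```python
# Illustrative risk register for a high-risk AI system; the schema is an
# assumption, not a format mandated by the EU AI Act.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    risk_id: str
    description: str      # e.g. "biased outcomes for under-represented groups"
    severity: str         # "low" | "medium" | "high"
    mitigation: str       # planned or implemented countermeasure
    owner: str            # person accountable for the mitigation
    last_reviewed: date

@dataclass
class RiskRegister:
    system_name: str
    entries: list[RiskEntry] = field(default_factory=list)

    def open_high_risks(self) -> list[RiskEntry]:
        """Entries that should block release until mitigated."""
        return [e for e in self.entries if e.severity == "high"]

register = RiskRegister(system_name="resume-screening-v2")
register.entries.append(RiskEntry(
    risk_id="R-001",
    description="Hiring recommendations replicate historical gender bias",
    severity="high",
    mitigation="Human review of all rejections; quarterly bias audit",
    owner="compliance-team",
    last_reviewed=date(2025, 1, 15),
))
print(len(register.open_high_risks()))  # 1 -> needs attention before release
```

Keeping such a register versioned alongside the system makes it straightforward to hand regulators the lifecycle evidence the Act asks for.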
Unacceptable risk
These systems, like AI that manipulates behavior or uses real-time biometric surveillance, are banned under the EU AI Act. Companies are not allowed to develop or deploy these types of AI systems in Europe.
Role in the AI value chain
Providers
A provider is any company or organization that develops or sells an AI system. Providers are responsible for making sure the AI system meets all the requirements, from design and development to marketing and distribution. They must ensure the system is safe, transparent, and compliant with the EU AI Act.
Providers of high-risk AI systems
Providers of high-risk AI systems, such as those used in healthcare or law enforcement, carry the most extensive obligations:
- Risk management: Implement a full risk management process that identifies, reduces, and monitors risks throughout the system’s lifecycle.
- Data governance: Ensure that the data used for training is accurate, fair, and high quality, to avoid biases and errors in your AI system.
- Technical documentation: Keep detailed records explaining how your AI works, how it was tested, and what risks it may pose. This documentation must be available for regulatory authorities.
- Conformity assessment: Carry out a conformity assessment to verify that your system complies with EU standards, after which you must register the system in the EU database and affix the CE marking to confirm it meets the rules.
- Human oversight and monitoring: Ensure humans can supervise and intervene in the system’s operation, especially in high-risk scenarios.
- Reporting and incident management: If your system experiences serious incidents, you must report these to authorities, take corrective action, and cooperate with investigations.
Providers of general-purpose AI (GPAI)
For companies offering general-purpose AI models, the obligations focus on ensuring transparency and compliance:
- Documentation and disclosure: Maintain clear documentation about how your AI model was trained, tested, and evaluated, and share this information with other companies that integrate your model into their systems.
- Synthetic content disclosure: If your AI generates synthetic content (like text, images, or video), you must clearly disclose that the content was created by AI.
- Intellectual property compliance: Make sure your AI model does not violate copyright laws or intellectual property rights.
- Cooperation with authorities: Participate in regulatory audits, investigations, and provide information as needed.
Deployers
A deployer is a company or organization that uses an AI system in its operations. Deployers are responsible for using the AI according to the provider’s instructions and ensuring that it works safely and transparently in the environment where it is deployed.
Deployers of high-risk AI systems
If you deploy high-risk AI systems, there are obligations to ensure the safe and ethical use of these technologies:
- Follow providers’ guidelines: You must use the system according to the instructions provided by the AI system’s developers, ensuring human oversight is in place.
- Data quality verification: Ensure that the input data used by the system is accurate and appropriate for the intended use.
- Transparency: You must clearly inform users about how decisions are made by the AI system, especially in cases where the AI significantly affects people’s rights or interests.
- Cooperate with authorities: Be ready to cooperate with regulatory bodies in case of investigations or audits, and report any incidents that might affect the AI’s performance or safety.
- Special requirements for public use: If the AI system is deployed by public authorities or for public services, additional obligations apply, such as conducting a fundamental rights impact assessment.
Deployers of general-purpose AI (GPAI)
If you deploy general-purpose AI (GPAI) models in your company, you also have specific obligations under the EU AI Act:
- Transparency: Ensure that users interacting with the GPAI system know they are dealing with AI, not a human. This is especially important when the AI system generates content or makes decisions that affect users.
- Adhere to providers’ guidelines: Follow the guidelines set by the GPAI provider, ensuring that the system is used appropriately and safely within your organization.
- Monitor and report: Regularly monitor how the GPAI system is performing. If you notice any issues or incidents, you must report them to the provider and, if necessary, to the relevant authorities.
- Data protection and ethics: Ensure that the deployment of GPAI systems aligns with data protection laws and ethical principles, particularly when the AI system handles sensitive data or affects individuals’ rights.
- Special rules for high-risk use: If you’re deploying a GPAI system in a high-risk sector (like healthcare, law enforcement, or public services), additional obligations apply, including registering the system with authorities and conducting impact assessments.
What should enterprise buyers know about the EU AI Act?
For enterprise buyers, partnering with AI vendors that comply with the EU AI Act is essential to managing risks and ensuring that AI systems are safe and lawful. Here’s how it helps mitigate risk, broken down into key areas.
Regulatory risk and compliance
- Risk mitigation: Using AI systems that comply with the EU AI Act helps your company avoid fines and legal issues. Non-compliance involving high-risk AI systems can lead to significant penalties.
- Vendor accountability: Compliant AI vendors provide the necessary documentation, certifications (like CE marking), and ongoing monitoring to ensure your systems meet EU regulatory standards.
Privacy and data retention
- Data governance: AI systems often handle sensitive personal data. Compliant vendors ensure that these systems follow strict data privacy and security rules, reducing risks of data breaches.
- Privacy safeguards: Vendors must adhere to the Act’s data governance standards, helping protect your company from privacy violations and ensuring customer trust.
Reputation and trust
- Reputation management: Non-compliant AI systems that are biased, unsafe, or infringe on privacy can damage your company’s reputation. Partnering with compliant vendors protects your brand by ensuring fairness, transparency, and ethical AI practices.
- Transparency: Expect your AI vendors to offer full transparency about how their systems work, especially for high-risk applications. This is key to maintaining trust with customers and stakeholders.
Operational and legal liability
- Liability reduction: Your company is responsible for the AI systems it deploys. By choosing compliant vendors, you minimize legal exposure and ensure your company remains aligned with EU regulations.
- Compliance assurance: Partnering with compliant vendors helps streamline your procurement process, ensuring that the AI systems you adopt meet legal and operational requirements.
Explore our TrustAI Center Library
Access a growing database of over 1,000 trusted AI companies. Compare vendors, review compliance documentation, and find the right partner for your enterprise needs—quickly and confidently.
Start exploring
When will the EU AI Act take effect?
The EU AI Act will be implemented gradually, with key milestones spread across the next several years. These stages allow businesses and AI providers the necessary time to adjust and prepare for full compliance. Below are the most important dates to keep in mind for the Act’s enforcement:
Timeline of the EU AI Act
Key dates in the rollout of the EU AI Act (a small date-arithmetic sketch follows this list):
- March 13, 2024: The European Parliament votes to approve the AI Act with a significant majority.
- August 1, 2024: The EU AI Act enters into force, 20 days after its publication in the Official Journal of the European Union.
- Six months after enforcement (February 2025): The ban on prohibited AI practices begins.
- 12 months after enforcement (August 2025): Obligations concerning general-purpose AI models come into effect.
- 24 months after enforcement (August 2026):
- Providers of standalone high-risk AI systems must comply with the Act’s obligations.
- Transparency obligations take effect for providers and deployers of AI systems that interact with people, create synthetic content, perform emotion recognition, or use biometric categorization and deepfake systems.
- 24-36 months after enforcement (August 2026 to August 2027): Deployers of high-risk AI systems created by third-party providers must comply.
- 36 months after enforcement (August 2027): Providers of high-risk AI systems subject to EU harmonization legislation must comply (refer to Annex I of the EU AI Act for more details).
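Because each milestone is defined as a fixed offset from the entry-into-force date, the schedule can be reproduced with simple date arithmetic. The snippet below is a sketch; the milestone labels paraphrase the list above:

```python
# Compute the EU AI Act milestones from the entry-into-force date.
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

def months_after(start: date, months: int) -> date:
    """Shift a date forward by whole months (day-of-month preserved)."""
    year, month = divmod(start.month - 1 + months, 12)
    return start.replace(year=start.year + year, month=month + 1)

MILESTONES = {
    6: "Ban on prohibited AI practices",
    12: "Obligations for general-purpose AI models",
    24: "High-risk (standalone) and transparency obligations",
    36: "High-risk systems under EU harmonization legislation (Annex I)",
}

for offset, label in MILESTONES.items():
    print(f"{months_after(ENTRY_INTO_FORCE, offset):%B %Y}: {label}")
# Prints February 2025, August 2025, August 2026, August 2027.
```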
What penalties and enforcement will the EU AI Act introduce?
The EU AI Act is designed to ensure safe, ethical, and transparent use of AI technologies. To enforce compliance, the Act introduces a tiered penalty structure based on the severity of violations. Companies that fail to comply with the EU AI Act’s requirements face fines and penalties, ensuring accountability and responsible AI practices across the industry.
Penalties for non-compliance with the EU AI Act
The EU AI Act categorizes penalties into three levels:
- Up to €35 million or 7% of global annual turnover (whichever is higher) for serious breaches, such as deploying prohibited AI systems.
- Up to €15 million or 3% of global annual turnover for violations of high-risk obligations, such as failing to meet transparency requirements.
- Up to €7.5 million or 1.5% of global annual turnover for minor infringements, such as supplying incomplete documentation or misleading information.
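Because each tier caps the fine at a fixed amount or a percentage of global annual turnover, whichever is higher, the applicable ceiling is simple arithmetic. The sketch below is illustrative only; FINE_TIERS and max_fine are our own names, not terms from the Act:

```python
# Tiered fine caps under the EU AI Act: fixed amount or share of global
# annual turnover, whichever is higher. Naming is ours; figures are the Act's.

FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # banned AI systems
    "high_risk_violation": (15_000_000, 0.03),    # e.g. missing transparency
    "minor_infringement":  (7_500_000, 0.015),    # e.g. misleading information
}

def max_fine(violation: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum fine cap for a given violation tier."""
    fixed_cap, turnover_share = FINE_TIERS[violation]
    return max(fixed_cap, turnover_share * global_annual_turnover_eur)

# A company with €2 billion turnover deploying a prohibited system faces
# up to max(€35M, 7% of €2B) = €140M.
print(max_fine("prohibited_practice", 2_000_000_000))  # 140000000.0
```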
Additional enforcement measures
- Market surveillance: EU authorities will actively monitor compliance across Member States to enforce the Act’s standards.
- Corrective actions: Companies may be required to fix non-compliant AI systems, and in severe cases, withdraw them from the market.
- Public disclosure: Non-compliance may be made public, potentially causing reputational harm.
How to build a compliance strategy for the EU AI Act
Ensuring compliance with the EU AI Act brings numerous advantages to companies and organizations, from building a competitive advantage through early adoption to avoiding risks and fines.
Benefits of complying with the EU AI Act
Complying with the EU AI Act not only helps avoid regulatory issues but also brings strategic advantages. Below are key benefits that support both your business reputation and operational success.
Minimize legal and financial risks
Following the Act’s guidelines mitigates the risk of facing penalties or regulatory action. By ensuring your AI systems are compliant, you avoid potential fines, legal challenges, and reputational damage associated with non-compliance. Proactively managing these risks safeguards your company’s finances and enhances long-term operational stability.
Strengthen your reputation
By adhering to responsible AI practices, you build trust with customers, investors, and regulators, showing that your organization prioritizes safety, fairness, and ethical technology. This commitment to compliance can make your brand more attractive to stakeholders and reinforce a positive public image, setting you apart as a leader in responsible AI.
Enhance market access and competitiveness
Compliance with the EU AI Act enables seamless operations within the EU market, positioning your business to expand without regulatory barriers. By meeting compliance standards, you enhance your credibility in the eyes of EU clients and partners, providing a competitive advantage over non-compliant businesses and reinforcing your brand as a trusted provider.
Improve AI system quality
Working toward compliance often leads to improvements in your AI systems. Regular assessments and adherence to standards help you identify areas for improvement, refine system performance, and enhance reliability. This focus on quality makes your AI solutions more effective and efficient, ultimately benefiting both your business and its end users.
Steps to building a compliance strategy
To meet EU AI Act requirements, organizations need a structured approach that covers every stage of the AI lifecycle. The following steps outline a roadmap to ensure your AI systems meet compliance standards effectively.
Identify your role in the AI value chain and your obligations
Review the specific obligations for your role in the AI lifecycle, whether you are a provider, deployer, product manufacturer, importer, distributor, or authorized representative.
Define and document use cases
Clearly outline the intended purpose and context for each AI application within your organization. Classify these use cases under the relevant risk categories as specified by the EU AI Act.
Conduct thorough risk assessments
Perform a detailed risk analysis of each AI system, assessing potential issues such as bias, discrimination, or security vulnerabilities. For instance, in AI-driven hiring processes, analyze risks of unintended bias in recommendations based on historical data.
Implement a comprehensive risk management plan
Develop a strategy to manage identified risks, including mitigation techniques and corrective measures. For example, if bias in recruitment AI is a concern, consider integrating oversight steps like human review and debiasing algorithms, along with procedures to address user feedback.
Ensure data privacy and protection
Align with GDPR and other data privacy standards, ensuring user data is handled with transparency and security. For many organizations, this may involve leveraging the expertise of in-house data privacy teams to meet these regulations effectively.
Maintain detailed documentation
Keep thorough documentation of your AI systems, including technical details, decision-making logic, and performance metrics. Accurate records are essential for audits, evaluations, and ongoing compliance checks.
Provide user transparency
Offer clear, accessible information about your AI systems, including their functions and potential risks. This empowers users with a better understanding of how the AI impacts them and how decisions are made.
Establish human oversight for high-risk AI
For high-risk systems, establish processes that allow humans to oversee and intervene in key decisions, with protocols to ensure accountability.
Continuous monitoring and evaluation
After deployment, regularly monitor your AI systems to ensure they continue to operate safely and effectively. Implement an ongoing assessment protocol to catch and address any performance issues or risks that may arise post-deployment.
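As one illustration of what such an assessment protocol might look like in code, the sketch below flags a system for human review when a monitored quality metric drifts past an agreed tolerance. The metric, baseline, and threshold are placeholders for whatever your own monitoring defines:

```python
# Illustrative post-deployment check: escalate when performance drifts.
# Baseline, tolerance, and the metric itself are placeholder assumptions.

ACCURACY_BASELINE = 0.92   # accuracy measured at deployment time
DRIFT_TOLERANCE = 0.05     # allowed drop before escalation

def needs_review(current_accuracy: float) -> bool:
    """True when observed performance has drifted beyond tolerance."""
    return (ACCURACY_BASELINE - current_accuracy) > DRIFT_TOLERANCE

weekly_accuracy = 0.85     # e.g. computed from a labeled sample of live traffic
if needs_review(weekly_accuracy):
    print("Drift detected: notify the risk owner and log an incident.")
```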
Sounds complicated? We've got you covered.
Get in touch and learn how we can help you comply with the EU AI Act.
Get in touch
How TrustPath can help you dominate compliance and accelerate growth
The EU AI Act is a game-changer, and being compliant isn’t just a checkbox—it’s a competitive advantage. TrustPath turns the complexity of compliance into a streamlined process that empowers AI companies to close deals faster and helps enterprises make smarter, risk-free choices. Ready to stay ahead of the game?
For AI Companies
TrustPath empowers AI providers by simplifying the process of meeting EU regulatory standards. Here’s how we can help:
Get compliant, stay ahead and accelerate your sales
Get your documentation right – without the hassle
Compliance paperwork is a headache, but TrustPath makes it easy. Automatically generate everything you need, including essential policies enterprise customers are looking for, and be ready to meet any regulatory or customer challenge head-on.
Show, don’t tell – build trust with transparency
Don’t just say your AI is compliant—prove it. TrustPath lets you clearly show how your AI systems operate. Enterprise buyers want proof, and we give you the tools to deliver.
Stay ahead, always
Regulations change, but TrustPath keeps you one step ahead. Our monitoring tools ensure you’re always in compliance, with real-time updates and incident reporting features that keep you on track. No surprises—just smooth, risk-free growth.
Book a demo
For Enterprise Buyers
Stop guessing - partner with AI vendors you can trust
Get the facts – real, verified compliance
Stop relying on vague promises. TrustPath gives you access to vendor profiles that detail compliance, risk management, and transparency measures. Know exactly what you’re getting before you sign on the dotted line.
Confidence in every decision
Make vendor selection easy with our AI Vendor Assessment Framework. Compare vendors, dive deep into their compliance, and make informed decisions without the stress. You don’t have time for guesswork—TrustPath provides the clarity you need.
Protect your business, protect your data
Your data security is non-negotiable. TrustPath ensures that AI vendors adhere to stringent data governance standards aligned with GDPR and the EU AI Act. Keep your business safe and compliant while gaining the confidence to innovate.
Book a demo