AI regulations

The European Parliament adopts the EU AI Act

4 min read
Key takeaways:
  • The European Parliament adopts the EU AI Act, creating the first comprehensive AI legal framework globally, impacting businesses worldwide.
  • The EU AI Act emphasizes transparency, responsibility, and ethical AI use, mandating detailed documentation for high-risk AI systems to ensure fairness and security.
  • By viewing the EU AI Act not as a burden but as an opportunity, businesses can stand out by embracing compliance, building trust, and gaining a competitive edge.

Until yesterday, many regarded the idea of regulating AI as little more than a joke. As of today, it is reality.

Following its proposal by the European Commission, the European Union has taken a historic step towards regulating artificial intelligence with the European Parliament's adoption of the EU AI Act. This legislation marks the first comprehensive legal framework for AI in the world, setting the stage for a future where responsible innovation drives advancements in this powerful technology.

Businesses worldwide, not just those within the EU, should take note of the imminent changes and act promptly to ensure compliance. The EU AI Act's jurisdiction extends beyond the borders of the European Union, impacting any organization that develops, deploys, or uses AI systems within the European market, or whose AI systems interact with consumers there.

Considering there are over 400,000 AI models globally, these regulations present a considerable challenge for companies involved in developing, deploying, or simply using any of these models. In this article, we aim to help you grasp the critical insights from this decision.

Understanding the Timeline

The EU AI Act is expected to enter into force 20 days after its publication in the EU Official Journal and to become fully applicable two years from that date. Implementation will be phased, however, with key provisions kicking in at different stages:

  • 6 months from now - prohibited practices such as social scoring, which assigns individuals a score based on factors like their online activity or financial history, will be banned. This underscores the EU's commitment to protecting fundamental rights and preventing discriminatory practices.
  • 12 months from now - General Purpose AI models (GPAIs), which are designed to adapt to a broad range of tasks and environments, will become subject to specific regulations. Existing GPAIs will have an additional 24 months to comply. These regulations aim to ensure the robustness, safety, and fairness of these increasingly powerful AI models.
  • 36 months from now - the strictest regulations will apply to high-risk AI systems like facial recognition technology. These systems will be subject to rigorous human oversight, transparency requirements, and robust risk management procedures. Providers of these systems will also have to prepare the most comprehensive documentation, covering every step of the system's development.
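
To make the phased timeline concrete, here is a minimal Python sketch that turns the periods above into approximate calendar dates. The entry-into-force date below is a placeholder assumption (the real date is 20 days after publication in the EU Official Journal), and the 30-day month arithmetic is a simplification, not legal guidance.

```python
from datetime import date, timedelta

# Placeholder assumption: the real entry-into-force date is 20 days after
# publication in the EU Official Journal.
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Phased application periods from the list above (months after entry into force).
PHASES = {
    "prohibited practices (e.g. social scoring)": 6,
    "general-purpose AI (GPAI) obligations": 12,
    "existing GPAI models (extra 24-month transition)": 12 + 24,
    "high-risk AI systems (e.g. facial recognition)": 36,
}

def applicable_from(months: int, start: date = ENTRY_INTO_FORCE) -> date:
    """Approximate the date a provision starts to apply (using 30-day months)."""
    return start + timedelta(days=30 * months)

for provision, months in PHASES.items():
    print(f"{provision}: applies from ~{applicable_from(months):%B %Y}")
```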

Now that we know the timeline, let's look at what the Act actually requires.

Transparency and Responsibility

The EU AI Act places significant emphasis on detailed documentation for AI systems, with requirements that vary by the system's risk level. It specifically targets high-risk AI systems, such as facial recognition used in law enforcement, AI-powered recruitment tools, and medical diagnosis algorithms, which pose a substantial threat to fundamental rights or safety.

Companies operating in these high-risk areas must maintain extensive documentation to comply with the Act. This means thoroughly documenting many aspects of their AI systems, as explained in our article; a simplified skeleton of what such documentation might cover is sketched below.
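
To give a feel for what that involves, here is a hypothetical, heavily simplified skeleton in Python. The field names are our own illustration; the Act itself (notably its annex on technical documentation for high-risk systems) defines the actual required content, and legal review should drive the real structure.

```python
from dataclasses import dataclass, field

@dataclass
class TechnicalDocumentation:
    """Illustrative only: field names are a simplification, not the Act's wording."""
    system_name: str
    intended_purpose: str
    training_data_description: str
    risk_management_measures: list[str] = field(default_factory=list)
    human_oversight_measures: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

# Hypothetical high-risk example: an AI-powered recruitment tool.
doc = TechnicalDocumentation(
    system_name="resume-screening-model",
    intended_purpose="Rank incoming job applications for human review",
    training_data_description="Anonymised historical applications, 2018-2023",
    risk_management_measures=["quarterly bias audit", "data drift monitoring"],
    human_oversight_measures=["a recruiter reviews every automated rejection"],
    known_limitations=["lower accuracy for under-represented job categories"],
)
print(doc)
```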

But the EU AI Act goes beyond simple technical requirements. It delves into core ethical principles, ensuring AI systems are:

  • Fair and non-discriminatory - this means eliminating bias from data sets, algorithms, and decision-making processes. The Act mandates ongoing monitoring and human intervention to ensure fairness throughout the AI development lifecycle.
  • Transparent and explainable - users need to understand how AI systems reach decisions. The Act requires developers to provide clear explanations for how AI systems arrive at their outputs, fostering trust and accountability. This is particularly crucial in areas like high-risk decision making, such as loan approvals or criminal justice applications.
  • Robust and secure - AI systems must be reliable and protected against cyber threats to safeguard user privacy and prevent unintended harm. The Act outlines stringent security measures to mitigate risks associated with data breaches, unauthorized access, and manipulation of AI systems.
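
The Act does not prescribe a specific fairness metric, but ongoing monitoring often starts with simple statistics over logged decisions. The sketch below computes a demographic parity gap (the largest difference in favourable-outcome rates between groups); it is one illustrative check among many, run here on entirely made-up data.

```python
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Largest difference in favourable-decision rates across groups.

    decisions: iterable of 0/1 outcomes (1 = favourable, e.g. loan approved)
    groups: group labels aligned with decisions
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for outcome, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: eight logged loan decisions across two applicant groups.
gap, rates = demographic_parity_gap(
    decisions=[1, 0, 1, 1, 0, 1, 0, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(rates, gap)  # a large gap could trigger human review
```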

Does this sound like another regulatory burden? We see this as a great opportunity for all AI businesses. Let’s learn why in the next section.

Embracing the Golden Opportunity

While the Act may seem complex at first glance, it presents a golden opportunity for AI businesses to differentiate themselves in the market. By proactively aligning with the principles of responsible AI, organizations can:

  • Build trust and transparency - customers are increasingly concerned about the implications of AI. By demonstrating adherence to the EU's principles, AI businesses can build trust and foster stronger relationships with their customers and investors. That translates into customer advocacy and increased revenue.
  • Gain a competitive edge - as enterprise buyers increasingly prioritize AI solutions that comply with regulations, early adopters will be well-positioned to secure partnerships and achieve a competitive advantage. This means a stronger position in RFPs and a higher chance of winning more deals.
  • Future-proof their operations - the EU AI Act represents a first step toward a global conversation on AI governance. By embracing responsible AI practices now, businesses will be prepared for future regulations and evolving market expectations. Late revisions are usually expensive, divert attention from the core business, and carry numerous risks.

We have already published a blog article on the benefits early AI compliance brings to companies; you can read it here.

Taking the EU AI Act Impact Assessment as the First Step

We understand that navigating the complexities of the AI Act can be daunting. To help you get started, we have developed a free self-assessment tool. This tool will guide you through a series of questions designed to assess the risk level of your AI system based on the criteria outlined in the EU AI Act. 
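
As a rough illustration of what such a questionnaire evaluates, here is a hypothetical, heavily simplified triage in Python. The use-case lists and wording are our own assumptions for demonstration; the actual classification depends on the Act's annexes and a proper legal assessment, which is what the self-assessment tool is designed to help with.

```python
# Hypothetical, simplified triage of EU AI Act risk tiers; illustrative only.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USES = {"facial recognition", "recruitment screening", "medical diagnosis"}

def triage(use_case: str, serves_eu_market: bool) -> str:
    """Return a rough risk tier for an AI use case."""
    if not serves_eu_market:
        return "likely outside the Act's scope (in this simplified sketch)"
    if use_case in PROHIBITED_USES:
        return "prohibited practice"
    if use_case in HIGH_RISK_USES:
        return "high-risk: documentation, human oversight and risk management required"
    return "limited/minimal risk: lighter transparency obligations"

print(triage("recruitment screening", serves_eu_market=True))
```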

However, the self-assessment is just the first step. Our platform is ready to help you throughout the entire AI compliance journey, and it’s tailor-made to your use case and needs.

If you are still in doubt about taking the first step now, please read the following blog posts:

Ready to make your AI company enterprise-ready?
Shorten sales cycles, build trust, and deliver value with TrustPath.
Book a demo
Get started with TrustPath