- The draft General-Purpose AI Code of Practice aims to help providers of general-purpose AI models comply with the EU AI Act, focusing on transparency, risk management, and ethical AI practices.
- Who needs to comply:
- Providers of General-Purpose AI systems (e.g., Mistral, ElevenLabs).
- Providers of General-Purpose AI models that pose systemic risks (depending on a model's capabilities).
- Key features include:
- Preparing detailed documentation (model architecture, training/testing processes, use cases, and policies).
- Conducting risk assessments and implementing safeguards to mitigate and manage risks.
- Adhering to IP and copyright laws, and providing transparency about the data used to train the model.
- Why it matters:
- Supports compliance with the EU AI Act, securing market access and enterprise trust.
- Builds a competitive edge by showcasing responsible AI development.
- TrustPath’s role:
- Simplifies compliance through automated tools for documentation, risk assessments, and monitoring.
- Helps AI companies meet regulations efficiently while focusing on innovation and growth.
Imagine this: your AI startup is ready to disrupt industries, but new regulations emerge that you can’t afford to ignore. The EU’s AI Act is a landmark piece of AI regulation, and the draft Code of Practice for General-Purpose AI systems now provides more clarity and helps companies get closer to compliance with the AI Act.
Unveiled in November 2024, the draft General-Purpose AI Code of Practice charts a pathway toward compliance with the AI Act, focusing on transparency, risk management, and ethical AI practices. But it’s also a wake-up call for founders: meeting these requirements isn’t optional. Whether it’s documenting your model’s architecture or managing systemic risks, the demands are significant. Even though this is only a draft, it already sets a minimum benchmark for what compliance with the AI Act will require.
In this blog, we’ll decode the EU’s latest AI guidelines, explain how they affect AI businesses, and outline the steps companies need to take to stay ahead. Compliance isn’t just about avoiding fines; it’s a gateway to trust and enterprise success. Let’s dive in.
Understanding the Code of Practice
As already mentioned, the draft GPAI Code of Practice aims to simplify the complex regulatory landscape and guide AI companies in adhering to best practices. But let’s start by understanding the fundamentals.
What Is the General-Purpose AI Code of Practice?
In simple terms, the draft Code of Practice is a framework designed to help providers of General-Purpose AI systems, like Mistral, ElevenLabs, AlephAlpha, and similar, comply with the EU AI Act. It focuses on transparency, safety, risk management, and ethical AI usage, and it complements the EU AI Act by providing practical steps for securing compliance.
Who Needs to Follow the General-Purpose AI Code of Practice?
The draft General-Purpose AI Code of Practice will apply to organizations that develop general-purpose AI systems, including those with systemic risks. Understanding who is required to comply ensures accountability and fosters trust in AI technologies.
a) Providers of General-Purpose AI systems - companies like Mistral, ElevenLabs, and AlephAlpha, which create AI models that can perform various tasks, such as generating text, voice, or visuals, and are widely integrated into other applications.
b) Providers of General-Purpose AI systems that could pose systemic risks - such as large-scale misinformation, discrimination, or expanded capabilities for cyberattacks.
To comply, these organizations must address several important features that will be outlined in the final version of the Code of Practice, to be released in Q2 2025. Let’s explore what those key features may be, based on the draft we have access to.
What Are the Key Features of General-Purpose AI Code of Practice?
Now that we understand who needs to follow the General-Purpose AI Code of Practice, let’s explore the key features that guide compliance. These principles ensure transparency, minimize risks, and promote ethical AI development and deployment.
Transparency Requirements
Companies must prepare detailed documentation covering:
- Model architecture and parameters.
- Training and testing processes.
- Intended uses and integration scenarios.
- Acceptable use policies and evaluation results.
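The draft does not prescribe a machine-readable format for this documentation, but keeping it as structured data makes gaps easy to catch before release. A minimal sketch in Python (the field names are illustrative assumptions, not terms taken from the draft):

```python
from dataclasses import dataclass, asdict

@dataclass
class ModelDocumentation:
    """Illustrative model-documentation record; fields are assumptions, not from the draft Code."""
    model_name: str
    architecture: str          # e.g. "decoder-only transformer"
    parameter_count: int
    training_process: str      # data sources, filtering, compute
    testing_process: str       # benchmarks and red-teaming performed
    intended_uses: list
    acceptable_use_policy: str
    evaluation_results: dict

def missing_fields(doc: ModelDocumentation) -> list:
    """Return the names of fields left empty, so gaps are caught before release."""
    return [name for name, value in asdict(doc).items() if value in ("", [], {}, 0)]

doc = ModelDocumentation(
    model_name="example-7b",
    architecture="decoder-only transformer",
    parameter_count=7_000_000_000,
    training_process="web corpus, deduplicated and filtered",
    testing_process="held-out benchmarks plus internal red-teaming",
    intended_uses=["text generation", "summarization"],
    acceptable_use_policy="",          # intentionally left blank
    evaluation_results={"example_benchmark": 0.62},
)
print(missing_fields(doc))  # flags the empty acceptable_use_policy
```

Treating documentation as data like this also makes it straightforward to export the same record for regulators, downstream developers, and enterprise buyers.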
Risk Assessment and Mitigation
Providers are required to:
- Identify and assess risks and systemic risks (e.g., cybersecurity, misinformation, discrimination).
- Implement continuous monitoring and technical safeguards.
Intellectual Property and Copyright
Adherence to copyright laws, including respecting data ownership and avoiding the misuse of copyrighted materials.
By adhering to these key features, organizations can ensure they align with best practices. But why does it matter? Let's find out in the next section.
Why Does This Matter?
The Code of Practice isn’t just about ticking boxes—it’s about building trust and ensuring that AI systems are safe, fair, and transparent. For AI businesses, compliance with the Code means a stronger position in the market and better alignment with enterprise customer expectations.
By understanding what is already in the draft Code of Practice, AI companies take the first step toward compliance and turn regulations into a competitive advantage.
What Will the Code of Practice Mean for AI Companies?
The Code of Practice for General-Purpose AI will introduce new responsibilities and challenges for AI companies. While it ensures ethical and safe deployment of AI systems, it also requires significant operational adjustments. Here’s what it means in practical terms:
Obligations Under the Code
Providers of general-purpose AI systems will need to adhere to several key responsibilities, including:
Comprehensive Documentation
- Detailing the model’s architecture, parameters, and intended use cases.
- Providing clear acceptable use policies for downstream applications.
Copyright Directive Compliance
- Respecting ownership of data and protected works, including copyright, where rights are reserved against text and data mining.
- Ensuring the AI system’s outputs do not reproduce copyrighted materials.
Risk Management
- Identifying and mitigating risks, such as large-scale disinformation or cybersecurity threats.
- Establishing safety frameworks and incident reporting procedures.
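As a rough illustration of what identifying risks and establishing incident-reporting procedures can look like operationally, here is a minimal risk-register sketch in Python. The severity scale and reporting threshold are assumptions for illustration, not requirements from the draft:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical severity scale and reporting policy; the draft Code does not fix either.
SEVERITY_LEVELS = ("low", "medium", "high", "critical")
REPORTING_THRESHOLD = "high"  # assumed internal policy: report high and critical entries

@dataclass
class RiskEntry:
    risk_id: str
    category: str       # e.g. "cybersecurity", "disinformation"
    description: str
    severity: str
    mitigation: str
    review_date: date

def requires_incident_report(entry: RiskEntry) -> bool:
    """True when the entry's severity meets or exceeds the reporting threshold."""
    return SEVERITY_LEVELS.index(entry.severity) >= SEVERITY_LEVELS.index(REPORTING_THRESHOLD)

register = [
    RiskEntry("R-001", "cybersecurity", "model weight exfiltration", "critical",
              "access controls and weight encryption", date(2025, 3, 1)),
    RiskEntry("R-002", "disinformation", "synthetic news generation at scale", "medium",
              "output watermarking and rate limits", date(2025, 3, 1)),
]
to_report = [e.risk_id for e in register if requires_incident_report(e)]
print(to_report)  # only the critical entry crosses the threshold
```

Keeping the register as structured records with review dates supports the continuous-monitoring obligation: each entry can be revisited on schedule rather than assessed once and forgotten.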
Challenges AI Companies May Face
Adapting to the requirements of the Code of Practice can be daunting, especially for early-stage AI businesses. These challenges highlight areas where adjustments are needed:
- Complexity of compliance - navigating the detailed requirements for technical documentation, risk assessment, and ethical guidelines can be time-intensive.
- Resource allocation - smaller companies may struggle with operationalizing the Code of Practice, as it requires dedicating resources to understanding compliance requirements and implementing effective systems while still focusing on product development.
- Uncertainty around enforcement - since the Code is still in draft form, there may be changes that require companies to adapt quickly.
Opportunities for Growth
While challenging, the Code presents a unique chance for AI companies to gain market advantages. These opportunities can outweigh the short-term hurdles:
- Building trust - enterprises are already asking their providers to show how they comply with such codes of practice, highlighting their importance for building lasting partnerships. Demonstrating compliance with the Code positions AI companies as responsible and trustworthy partners.
- Market expansion - adherence to the future Code can open doors to the European market, ensuring AI systems meet local regulatory expectations.
- Competitive edge - early adopters of the Code’s principles can lead the industry in AI deployment, setting themselves apart from competitors.
For AI companies, understanding these obligations and challenges is the foundation for turning compliance into a strategic advantage. Adapting now ensures readiness for the final Code and builds resilience for the future.
How Will the Code of Practice Impact AI Companies?
The AI Code of Practice is set to redefine how AI companies operate by introducing stricter standards and reshaping business priorities. Here’s a breakdown of its potential impacts:
Operational Changes
The Code of Practice introduces requirements that demand changes in how AI companies document and manage their systems. These adjustments will likely impact daily operations and long-term planning:
- Increased documentation requirements - companies will need to invest time and resources into creating detailed technical documentation and risk assessments.
- Proactive risk management - a continuous process of identifying, mitigating, and reporting systemic risks will become a core responsibility.
Market Dynamics
Compliance with the Code of Practice will influence how AI companies are perceived in the marketplace. It could become a differentiator—or a barrier:
- Compliance as a market barrier - non-compliance could restrict access to the European market, limiting growth opportunities.
- Customer expectations - enterprise customers are likely to prioritize vendors that demonstrate adherence to the Code, making compliance essential for winning deals.
Resource Implications
Adapting to the Code may require rethinking resource allocation and investment. These changes will shape how companies balance compliance with innovation:
- Focus shift - balancing compliance efforts with innovation may require reallocating internal resources or seeking external support.
In the next section, we’re going to explain what steps companies should take in order to comply with the Code of Practice. Let’s move on.
Steps AI Companies Should Take to Comply with the AI Code of Practice
Meeting the requirements of the General-Purpose AI Code of Practice requires a strategic approach. Here’s how AI companies can prepare:
- Conduct a compliance audit - assess current operations to identify gaps in documentation, risk management, and ethical practices. This audit will serve as a foundation for building a compliance strategy.
- Strengthen documentation processes - ensure comprehensive technical documentation, including model architecture, training details, intended uses, and acceptable use policies, is in place and up to date.
- Implement risk mitigation measures - adopt systems for identifying, monitoring, and mitigating systemic risks, such as cybersecurity vulnerabilities and misinformation potential.
- Align with copyright laws - establish policies to ensure compliance with intellectual property rights, including due diligence on datasets and prevention of copyright violations, when rights are reserved.
- Leverage external expertise - consider using compliance tools, like TrustPath, to automate documentation, streamline risk assessments, and meet the Code’s requirements efficiently.
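The compliance-audit step above can be sketched as a simple gap analysis. The checklist items below are hypothetical labels summarizing the draft’s themes, not official audit criteria:

```python
# Hypothetical gap-analysis checklist for a first compliance audit;
# item names are illustrative, not taken from the draft Code.
CHECKLIST = {
    "technical_documentation": "Model architecture, parameters, training and testing described",
    "acceptable_use_policy": "Published policy for downstream use",
    "risk_assessment": "Systemic risks identified and assessed",
    "monitoring": "Continuous monitoring and incident reporting in place",
    "copyright_policy": "Dataset due diligence and rights-reservation handling documented",
}

def audit(completed: set) -> list:
    """Return checklist items not yet satisfied, i.e. the remaining compliance gaps."""
    return sorted(item for item in CHECKLIST if item not in completed)

# Example: a team that has documentation and a risk assessment, but nothing else yet.
gaps = audit({"technical_documentation", "risk_assessment"})
print(gaps)
```

The output of such an audit becomes the to-do list for the remaining steps: each gap maps to one of the documentation, risk-mitigation, or copyright actions above.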
Why Should Enterprises Prioritize Compliance in Their AI Vendors?
As enterprises integrate AI into their operations, understanding the compliance of their AI providers is crucial.
Compliance with the future Code of Practice ensures that vendors adhere to transparency, risk mitigation, and ethical standards, which directly impact the safety and reliability of the AI systems enterprises deploy.
By asking for compliance documentation, enterprises can identify and mitigate new types of risks, such as systemic biases, cybersecurity vulnerabilities, and misuse of intellectual property. TrustPath enables enterprises to assess their AI vendors’ readiness, giving them the confidence to make informed decisions and safeguard their operations.
How Will the Code of Practice Impact the AI Supply Chain?
The Code of Practice will have a ripple effect across the entire AI supply chain, influencing model providers, developers, and end users.
- AI model providers - Companies like OpenAI, Mistral, and AlephAlpha need tools to share compliance documentation and technical details with developers building on top of their models. They must also comply with the Code of Practice by ensuring their models meet transparency, risk management, and ethical guidelines.
- Developers - Teams creating applications using AI models, such as ElevenLabs (voice synthesis tools) or Jasper (content generation), will be in a difficult position. They will receive compliance requests from their buyers and will prioritize models that come with the necessary compliance information. If a model provider cannot supply this data, developers will struggle to build competitive applications, especially in the EU market, where adherence to the Code of Practice will be crucial. Aligning with the Code is essential to meet enterprise standards and maintain market viability.
- End users - Enterprise buyers like financial institutions, healthcare providers, and retail companies will request compliance documentation from both developers and model providers. For example, a bank integrating AI for fraud detection will need proof of compliance to mitigate regulatory and reputational risks.
By fostering compliance at every stage, the Code of Practice aims to enhance trust, mitigate risks, and create a transparent AI ecosystem.
We’ve got you covered
TrustPath specializes in helping AI companies become enterprise-ready by providing robust compliance documentation and frameworks that align with regulatory standards. Our platform operationalizes the Code of Practice, enabling businesses to create essential compliance documents swiftly, ensure full adherence to these practices, and build transparency and trust with enterprise customers.
Utilizing TrustPath’s expertise allows AI companies to focus on innovation while ensuring compliance with evolving regulations.
By taking these steps, AI companies can stay ahead of regulatory demands, reduce risks, and position themselves as trusted players in the AI ecosystem.
Ready to start winning enterprise deals because of AI compliance? Get in touch.