AI compliance glossary

Understand key terms in AI compliance

Navigate the complex world of AI compliance with ease. This glossary breaks down essential terms and concepts to help AI professionals and enterprise buyers alike gain clarity on regulations, standards, and practices that shape today’s AI landscape. Stay informed and stay compliant.

AI (Artificial intelligence)

A machine-based system capable of making predictions, recommendations, or decisions that influence real or virtual environments. AI systems operate autonomously or semi-autonomously, using inference or pattern recognition to achieve specific goals within defined parameters.

AI Accountability

The responsibility of providers and deployers to ensure AI systems are developed, deployed, and monitored in compliance with regulations and ethical standards, ensuring that AI’s use does not infringe on fundamental rights and operates as intended.

AI Accuracy

The degree to which an AI system’s outcomes align with the intended purpose, minimizing errors and biases. For high-risk systems, accuracy must be continuously monitored to maintain system integrity and prevent risks to users and affected persons.

AI Autonomy

The capacity of an AI system to function with minimal human intervention, making decisions based on programmed objectives and evolving inputs while adapting to changing environments.

AI Bias Audit

A structured evaluation aimed at identifying, analyzing, and mitigating biases within AI systems. Bias audits ensure that AI models operate fairly, produce non-discriminatory outcomes, and comply with ethical and regulatory standards to protect affected groups.

AI Bias Detection and Mitigation

Processes to identify and reduce biases in AI, which may arise from systemic, computational, or human sources, potentially leading to unfair outcomes. Bias mitigation ensures AI systems work equitably across diverse groups and contexts.
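One common quantitative check in bias detection is comparing favorable-outcome rates across groups. The sketch below is illustrative only: the group names, sample decisions, and the 0.8 threshold (the "four-fifths rule" heuristic) are assumptions for the example, not requirements of any specific regulation.

```python
def selection_rates(outcomes):
    """Share of favorable (1) decisions per group.

    `outcomes` maps a group name to a list of 0/1 decisions.
    """
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest to the highest group selection rate."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data for two groups (1 = favorable decision).
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 75% favorable
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 37.5% favorable
}

ratio = disparate_impact_ratio(decisions)
flagged = ratio < 0.8  # common audit heuristic, not a legal standard
```

A ratio well below 0.8 (here 0.5) would typically prompt deeper investigation into the model and its training data rather than serve as a verdict on its own.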

AI Buyer’s Risk Assessment

A process through which prospective buyers of an AI system assess its risks, including compliance, security, and ethical factors. The assessment helps buyers make informed decisions by evaluating operational risks and impact.

AI Compliance Framework

A structured set of guidelines and best practices designed to ensure that AI systems meet legal, ethical, and operational standards. The framework addresses accountability, transparency, risk management, and data security requirements.

AI Deployers

Individuals or entities that operate or use AI systems in various applications. Deployers are responsible for ensuring that systems function as intended and in compliance with regulatory standards, especially when using high-risk AI systems.

AI Distributors

Natural or legal persons within the supply chain, other than providers or importers, who make AI systems available on the EU market. They must ensure that systems meet all regulatory standards and provide essential documentation upon request.

AI Ethics

Principles guiding the responsible development and use of AI to align with human values and rights, including transparency, fairness, and accountability. AI ethics frameworks aim to prevent harm, discrimination, and misuse of AI systems in various contexts.

AI Importer

A natural or legal person who places an AI system from a third country on the EU market. Importers are responsible for verifying compliance with EU regulations and ensuring all necessary documentation accompanies the system.

AI Interpretability

The extent to which the meaning of AI outputs is clear and understandable within the system’s designed functional purpose. Interpretability helps operators or overseers effectively use and govern the AI system in practice.

AI Product Manufacturers

Entities responsible for integrating AI systems as components of physical products. They must ensure these systems comply with safety and compliance requirements before releasing the products on the market.

AI Provider

An entity that develops an AI system, or has one developed, and places it on the EU market under its own name or trademark. Providers are accountable for system compliance, including meeting technical, safety, and documentation standards, especially for high-risk applications.

AI System

The EU AI Act (Article 3(1)) defines an AI system as “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

AI Transparency

Obligations ensuring that an AI system’s functions, capabilities, and limitations are clear to users and regulators, promoting informed use and adherence to compliance requirements, particularly in high-risk cases.

AI auditing

The process of systematically reviewing AI systems to ensure adherence to regulatory, ethical, and operational standards, focusing on system accuracy, fairness, and accountability. Audits identify risks and verify compliance with established guidelines.

AI compliance

The adherence of AI systems to established legal, ethical, and technical standards to ensure safe and trustworthy deployment. Compliance encompasses data governance, accountability, and user rights in alignment with regulatory requirements.

AI explainability

The degree to which an AI system’s mechanisms can be understood by humans. Explainability allows stakeholders to gain insights into how the system generates outputs, fostering trust, facilitating debugging, and enabling accountability.
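For simple additive models, explainability can be as direct as reporting each feature's contribution to the score. The weights and feature names below are hypothetical; more complex models typically require dedicated attribution methods (e.g. SHAP-style values) rather than this direct decomposition.

```python
# Assumed weights of a hypothetical linear credit-scoring model.
WEIGHTS = {"income": 0.5, "debt": -0.3, "tenure_years": 0.2}

def explain(features):
    """Return the score and each feature's additive contribution to it."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    return sum(contributions.values()), contributions

score, why = explain({"income": 4.0, "debt": 2.0, "tenure_years": 5.0})
# score = 0.5*4 - 0.3*2 + 0.2*5 = 2.4; `why` shows what drove it
```

Surfacing the `why` breakdown alongside the decision is one way stakeholders can verify that outputs are driven by legitimate factors.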

AI governance

Oversight structures and policies ensuring that AI systems are developed, deployed, and monitored in accordance with legal, ethical, and organizational standards, including data security, fairness, transparency, and risk management protocols.

AI literacy

The ability of users and stakeholders to understand basic AI concepts, including its risks, limitations, and implications on decision-making. AI literacy enables informed use, promotes transparency, and enhances trust in AI-driven outcomes.

AI monitoring and reporting

Continuous oversight of AI system performance and compliance, ensuring it adheres to regulatory standards and operates as intended. Monitoring includes documenting system outputs and reporting deviations or incidents affecting user rights or safety.
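In practice, such oversight often includes automated checks that flag when system performance degrades. The sketch below is a minimal illustration: the window size, threshold, and sample outcomes are assumptions, and a real deployment would also log the flagged incidents for the reporting side of the obligation.

```python
from collections import deque

class AccuracyMonitor:
    """Flags an incident when rolling accuracy drops below a threshold."""

    def __init__(self, window=100, threshold=0.9):
        self.results = deque(maxlen=window)  # recent correct/incorrect flags
        self.threshold = threshold

    def record(self, prediction, actual):
        """Log one outcome; return True once a full window falls below threshold."""
        self.results.append(prediction == actual)
        accuracy = sum(self.results) / len(self.results)
        window_full = len(self.results) == self.results.maxlen
        return window_full and accuracy < self.threshold

monitor = AccuracyMonitor(window=5, threshold=0.8)
incident = False
for pred, actual in [(1, 1), (0, 0), (1, 0), (0, 1), (1, 0)]:
    incident = monitor.record(pred, actual) or incident
# accuracy over the window is 2/5 = 0.4, so an incident is flagged
```

Waiting for a full window before flagging avoids spurious alerts from the first few outcomes; the right window and threshold depend on the system's risk profile.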

AI policies

Internal organizational guidelines that establish standards for AI development, deployment, and oversight, focusing on ethics, accountability, and data governance to align AI use with regulatory and organizational objectives.

AI questionnaires

Standardized sets of questions assessing AI system risks, performance, and compliance. These questionnaires help AI buyers evaluate AI reliability, fairness, and adherence to ethical and regulatory standards, providing insights for decision-makers.

Ready to make your AI company enterprise-ready?

Shorten sales cycles, build trust, and deliver value with TrustPath.
Book a demo