Data Minimization
The principle that only the necessary amount of data should be collected, processed, and retained to achieve the AI system’s intended purpose, particularly for personal and sensitive data, to safeguard privacy and prevent unauthorized data use.
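The principle can be illustrated with a minimal sketch: a purpose-specific allowlist of fields, with everything else dropped before processing. The field names and the allowlist here are hypothetical, chosen only for illustration.

```python
# Hypothetical allowlist: only the attributes this AI purpose actually needs.
REQUIRED_FIELDS = {"age_band", "region"}

def minimize(record: dict) -> dict:
    """Drop any attribute not strictly required for the stated purpose."""
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

raw = {"name": "Alice", "email": "a@example.com",
       "age_band": "30-39", "region": "EU"}
print(minimize(raw))  # direct identifiers are stripped before processing
```

In practice the allowlist would be derived from the documented purpose of the system, not hard-coded, but the filtering step is the same.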
Data Provenance
The tracking and documentation of the origin, history, and quality of data used in AI systems. Maintaining data provenance ensures transparency, accountability, and compliance, supporting risk management and trustworthiness.
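One simple way to make provenance auditable is to attach origin, timestamp, and content-hash metadata to each dataset as it is ingested. This is a hedged sketch, not a standard format; the record fields are assumptions for illustration.

```python
import hashlib
import datetime

def provenance_record(source: str, payload: bytes) -> dict:
    """Record where data came from, when it was retrieved, and a hash
    of its contents so later changes can be detected."""
    return {
        "source": source,
        "retrieved_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "sha256": hashlib.sha256(payload).hexdigest(),
    }

record = provenance_record("registry-export-v1", b"example training data")
```

The content hash lets an auditor verify that the dataset used in training is byte-for-byte the one documented at ingestion.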
Data Quality Standards
Standards ensuring that the data used for training and operating AI systems is complete, accurate, and free from biases that could affect system performance, especially in high-risk applications where data quality directly impacts decision integrity.
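A basic completeness check is one concrete building block of such standards: flag any record missing a required field before it enters training. This sketch assumes tabular rows as dictionaries; real pipelines would add accuracy and bias checks on top.

```python
def completeness_report(rows: list, required: set) -> dict:
    """Flag rows where a required field is missing or empty --
    a minimal data-completeness check before training."""
    missing = [i for i, row in enumerate(rows)
               if any(row.get(field) in (None, "") for field in required)]
    return {"total": len(rows), "incomplete_rows": missing}

rows = [{"age": 30, "label": "approved"},
        {"age": None, "label": "denied"}]
report = completeness_report(rows, {"age", "label"})
```

Rejecting or repairing the flagged rows is a policy decision; the check only makes the gap visible.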
Data Sovereignty
A principle ensuring that data collected, processed, and stored by AI systems complies with applicable jurisdictional regulations, particularly for cross-border data transfers, and respects individuals’ rights to control their personal data.
Due Diligence Questionnaires
Standardized surveys evaluating AI systems’ compliance, performance, and ethical adherence. These questionnaires help AI buyers assess potential risks, including data governance, transparency, and regulatory compliance, prior to deployment.
EU Database for High-Risk AI
A centralized EU database where high-risk AI systems are registered to enhance transparency, facilitate regulatory oversight, and provide accessible information on compliance status and risk assessments associated with these systems.
Emergent Properties
Unanticipated behaviors or effects that arise in complex AI systems, often as a result of interactions among components. These properties may lead to unintended consequences and require careful monitoring and management.
Emotion Recognition System
AI systems designed to infer emotions or intentions from biometric data, such as facial expressions or voice intonation. Because they can intrude on users' privacy, such systems are subject to stricter regulation in sensitive environments like workplaces.
Enterprise AI Compliance
Policies and standards applied within organizations to ensure AI systems meet regulatory and ethical standards, fostering trust and accountability. Enterprise AI compliance encompasses data security, risk assessment, and stakeholder engagement.
Ethical AI
The development and use of AI systems that prioritize human values, transparency, and fairness to protect individuals’ rights and freedoms. Ethical AI practices are intended to prevent discrimination, misuse, and adverse impacts on society.
Explainability
The capability of an AI system to make its decisions or outcomes understandable to users or regulators, particularly in high-risk applications, supporting transparency and allowing individuals to assess system impact on their rights and interests.
Explainable AI (XAI)
AI systems designed to provide clear, interpretable explanations of their decision-making processes. Explainable AI enhances transparency and accountability, allowing users to understand and trust AI-driven outcomes in complex applications.
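For the simplest model class, the explanation can be computed directly: in a linear scorer, each feature's contribution is its weight times its value. This is a hedged sketch of that idea; the weights and features are hypothetical, and real XAI tooling handles far more complex models.

```python
def explain_linear(weights: dict, features: dict) -> dict:
    """Per-feature contribution of a linear scorer (weight * value) --
    the most directly interpretable form of explanation."""
    return {name: weights[name] * features.get(name, 0.0) for name in weights}

weights = {"income": 0.5, "debt": -0.3}      # illustrative model weights
features = {"income": 2.0, "debt": 1.0}      # illustrative input
contributions = explain_linear(weights, features)
```

Summing the contributions reproduces the model's score, so a user can see exactly which inputs pushed the decision in which direction.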
Fail-Safe Mechanism
Built-in measures allowing AI systems to revert to a safe state in the event of errors, technical malfunctions, or unexpected behavior, thereby preventing harm to users, the public, or affected environments.
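The pattern can be sketched as a wrapper that reverts to a conservative default whenever the model raises an error or reports low confidence. The confidence threshold and the `defer_to_human` default are assumptions for illustration, not a prescribed design.

```python
def predict_with_failsafe(model, features, safe_default="defer_to_human"):
    """Fall back to a safe state on malfunction or unexpected behaviour."""
    try:
        label, confidence = model(features)
    except Exception:
        return safe_default          # technical malfunction -> safe state
    if confidence < 0.8:             # illustrative uncertainty threshold
        return safe_default          # unexpected behaviour -> safe state
    return label
```

What counts as the "safe state" is domain-specific: deferral to a human, a shutdown, or a known-good default action.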
Fairness in AI
A characteristic that addresses equity and bias in AI systems, ensuring outcomes do not discriminate against specific groups. Fair AI systems promote inclusivity, mitigate unintended harm, and align with societal standards of justice.
Fundamental Rights Impact Assessment
A pre-deployment assessment of high-risk AI systems to identify and mitigate risks to individuals’ rights and freedoms, ensuring AI use complies with EU human rights protections and does not adversely affect vulnerable groups.
General Data Protection Regulation (GDPR)
A European Union regulation ensuring the protection of individuals’ personal data. AI systems processing personal data must adhere to GDPR principles, including consent, transparency, and accountability, to safeguard privacy rights.
General-Purpose AI
AI models or systems capable of performing a wide range of functions across multiple contexts. These models are typically trained on extensive datasets and may be adapted or fine-tuned for specific applications.
General-Purpose AI with Systemic Risks
General-purpose AI models deemed to pose broad risks due to their capabilities and potential for widespread impact. They are subject to additional regulatory requirements for transparency, risk management, and impact assessment.
High-Risk AI Systems
AI systems identified by the EU as having significant potential to impact health, safety, or fundamental rights. These systems must meet strict compliance standards, including risk assessments, documentation, and human oversight.
Human Oversight
Mechanisms that allow humans to monitor and, where necessary, intervene in AI system operations to prevent or mitigate harmful outcomes, particularly relevant for high-risk AI systems in sensitive sectors.
Human Rights Impact Assessment
A review assessing the impact of an AI system on fundamental human rights, including privacy and non-discrimination. This assessment identifies and mitigates potential harms, ensuring AI deployment aligns with EU rights protections.
Human-in-the-Loop (HITL)
Involvement of human oversight in AI decision-making, allowing humans to interpret, adjust, or override AI outputs. HITL configurations enhance safety, accountability, and the ethical deployment of AI systems.
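A minimal HITL configuration routes low-confidence outputs to a human reviewer, who may confirm, adjust, or override them. The threshold value and callback shape below are illustrative assumptions.

```python
def decide(model_output, confidence, human_review, threshold=0.9):
    """Accept confident model outputs; route uncertain ones to a human,
    whose decision takes precedence."""
    if confidence >= threshold:
        return model_output
    return human_review(model_output)  # human interprets, adjusts, or overrides
```

In high-risk settings the threshold might be set so that certain decision classes always require review, regardless of model confidence.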
ISO 42001
An international standard (formally ISO/IEC 42001) specifying requirements for establishing, implementing, maintaining, and continually improving an AI management system within organizations. It sets out practices for transparency, accountability, and continuous risk management across the AI lifecycle.
Impact Assessment
A pre-deployment evaluation of potential risks and impacts associated with an AI system, focusing on user safety, privacy, and rights. This assessment informs risk management strategies and enhances compliance with regulatory standards.
Intellectual Property Compliance
Adherence to copyright, patent, and other intellectual property laws in the development and deployment of AI systems, ensuring that any protected content or methods used in AI are authorized or properly licensed.