Socio-Technical Factors
The interplay of human, organizational, and technical influences on AI system design, development, and deployment. Socio-technical factors shape AI risks and benefits, impacting fairness, interpretability, and system acceptance.
Stakeholder Engagement
Involvement of relevant groups, such as end users, civil society, and domain experts, in discussions around AI system development and deployment, helping to address potential risks and align systems with ethical and societal standards.
TEVV (Test, Evaluation, Verification, and Validation)
A series of processes that assess whether an AI system meets design requirements and performs reliably. TEVV activities are crucial throughout the AI lifecycle for ensuring trustworthiness and safety.
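As a concrete illustration, a minimal validation gate might compare measured performance against a stated design requirement. The sketch below assumes a scikit-learn classifier; the 0.85 accuracy requirement and synthetic dataset are purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical design requirement: held-out accuracy must reach 0.85.
ACCURACY_REQUIREMENT = 0.85

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Validation step: compare measured performance against the requirement.
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"accuracy={accuracy:.3f}, requirement met: {accuracy >= ACCURACY_REQUIREMENT}")
```

In practice, TEVV spans many such checks across the lifecycle; this shows only the shape of a single automated validation step.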
Technical Robustness
The capacity of an AI system to withstand faults, tampering, or unexpected inputs, maintaining stable performance and accuracy across a range of scenarios, particularly critical for high-risk AI applications.
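One simple proxy for robustness is checking whether predictions stay stable when inputs are slightly perturbed. The sketch below assumes a scikit-learn classifier and Gaussian noise; the noise scale is an illustrative choice, not a standard.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

rng = np.random.default_rng(0)
baseline = model.predict(X)

# Perturb inputs with small Gaussian noise and measure how often
# predictions stay unchanged, a simple proxy for robustness.
noisy = X + rng.normal(scale=0.1, size=X.shape)
agreement = np.mean(model.predict(noisy) == baseline)
print(f"prediction agreement under noise: {agreement:.3f}")
```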
Third-Party Risk Management
The process of evaluating and mitigating risks associated with third-party AI providers, ensuring they adhere to compliance, ethical, and security standards to protect users and stakeholders from potential harm.
Transparency Reporting
Regular reports detailing AI systems’ decision-making processes, risk management measures, and compliance with ethical and legal standards. Transparency reports foster accountability and build trust with users and regulators.
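A transparency report is usually a document, but a machine-readable structure can keep its contents consistent across reporting periods. The fields and values below are hypothetical examples, not drawn from any specific regulation.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical report fields; actual reporting obligations depend on
# the applicable regulation and the system's risk classification.
@dataclass
class TransparencyReport:
    system_name: str
    reporting_period: str
    intended_purpose: str
    risk_measures: list[str]
    incidents_reported: int

report = TransparencyReport(
    system_name="loan-screening-v2",  # illustrative system name
    reporting_period="2024-H1",
    intended_purpose="pre-screening of consumer loan applications",
    risk_measures=["quarterly bias audit", "human review of denials"],
    incidents_reported=0,
)
print(json.dumps(asdict(report), indent=2))
```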
Trustworthiness Characteristics
Essential traits of an AI system—such as safety, fairness, transparency, and privacy—that together establish its trustworthiness. These characteristics guide risk management and enhance system reliability in diverse contexts.
Trustworthy AI
AI that meets defined criteria for reliability, fairness, safety, privacy, transparency, and accountability. Trustworthy AI builds user confidence and minimizes risks, promoting responsible use across various applications.
Unacceptable Risk
Risk levels associated with AI systems that present significant threats to individuals’ fundamental rights, safety, or freedoms. Such risks warrant prohibition or strict regulation to prevent AI deployment that may cause irreversible harm or exploitation.
User Consent Management
Mechanisms for obtaining and managing user consent for data processing in AI systems, particularly in sensitive contexts involving personal data, ensuring compliance with privacy and data protection laws.
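A minimal sketch of one such mechanism: purpose-specific, timestamped consent records where the most recent decision wins, so users can withdraw consent at any time. All names and purposes here are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical consent record: purpose-specific, revocable, and
# timestamped so processing decisions can be audited later.
@dataclass
class ConsentRecord:
    user_id: str
    purpose: str  # e.g. "model_training", "profiling"
    granted: bool
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class ConsentStore:
    def __init__(self):
        self._records: list[ConsentRecord] = []

    def record(self, user_id: str, purpose: str, granted: bool) -> None:
        self._records.append(ConsentRecord(user_id, purpose, granted))

    def has_consent(self, user_id: str, purpose: str) -> bool:
        # The most recent decision for this user and purpose wins,
        # which lets users withdraw consent at any time.
        for rec in reversed(self._records):
            if rec.user_id == user_id and rec.purpose == purpose:
                return rec.granted
        return False

store = ConsentStore()
store.record("user-1", "model_training", granted=True)
store.record("user-1", "model_training", granted=False)  # withdrawal
print(store.has_consent("user-1", "model_training"))      # False
```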
Vendor Due Diligence
An evaluation of AI vendors’ adherence to regulatory and ethical standards before partnership or procurement. Due diligence ensures vendors meet necessary compliance requirements, safeguarding organizational and user interests.
Vendor Assessment Framework
A structured set of criteria used to evaluate the compliance and performance of AI vendors, focusing on ethical standards, data protection, and risk management. Vendor assessments guide informed decision-making in procurement.
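As an illustration, such a framework can be expressed as a weighted scoring rubric. The criteria, weights, and pass threshold below are hypothetical, not taken from any published framework.

```python
# Hypothetical scoring rubric: criteria, weights, and threshold are
# illustrative and would be set by organizational policy.
CRITERIA_WEIGHTS = {
    "data_protection": 0.3,
    "ethical_standards": 0.25,
    "risk_management": 0.25,
    "security_certifications": 0.2,
}
PASS_THRESHOLD = 0.7

def score_vendor(scores: dict[str, float]) -> float:
    """Weighted average of per-criterion scores in [0, 1]."""
    return sum(CRITERIA_WEIGHTS[c] * scores.get(c, 0.0)
               for c in CRITERIA_WEIGHTS)

vendor = {"data_protection": 0.9, "ethical_standards": 0.8,
          "risk_management": 0.6, "security_certifications": 0.7}
total = score_vendor(vendor)
print(f"score={total:.2f}, approved: {total >= PASS_THRESHOLD}")
```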
Vulnerability Assessment
A process to identify and evaluate potential weaknesses in an AI system, covering security, bias, and operational stability. Assessments support proactive measures against unauthorized access, data breaches, system manipulation, and compliance gaps.
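A minimal sketch of two automated checks such an assessment might include: confirming that malformed inputs are rejected, and flagging instability under small perturbations. Both checks and thresholds are illustrative; real assessments cover far more ground, such as access controls, data handling, and supply-chain risks.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

findings = []

# Check 1: does the model fail loudly (rather than silently) on
# malformed input shapes that could indicate tampering upstream?
try:
    model.predict(np.zeros((1, X.shape[1] + 5)))
    findings.append("accepts wrong-shaped input without error")
except ValueError:
    pass  # rejecting malformed input is the desired behavior

# Check 2: flag instability under small perturbations, a common
# precursor to adversarial-example vulnerabilities.
rng = np.random.default_rng(0)
flipped = np.mean(
    model.predict(X + rng.normal(scale=0.05, size=X.shape))
    != model.predict(X))
if flipped > 0.05:  # illustrative tolerance
    findings.append(f"{flipped:.1%} of predictions flip under small noise")

print(findings or "no findings")
```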
White-Box AI Models
AI models with transparent, interpretable internal mechanisms, allowing users to understand the logic behind outputs. White-box models facilitate explainability and trust, supporting compliance with transparency requirements.
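A shallow decision tree is one simple example of a white-box model: its complete decision logic can be printed and audited directly. The sketch below uses scikit-learn's export_text on the Iris dataset.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# A shallow decision tree exposes its internal rules, so the logic
# behind every output can be inspected and audited.
data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# Human-readable rules explaining exactly how outputs are produced.
print(export_text(tree, feature_names=list(data.feature_names)))
```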
Social Transparency
Accountability of AI systems for their broader societal impact, addressing ethical, fairness, and privacy considerations. Social transparency involves identifying and mitigating potential biases, ensuring equitable results, and safeguarding privacy, fostering trust not only among users but across society as a whole.