The AI Trust & Safety Assurance Registry is a publicly accessible registry that documents the governance, risk, and compliance controls organizations implement to make their AI systems safe, trustworthy, and compliant.
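For illustration only, the sketch below shows the kind of information a registry entry might capture. The class and field names are assumptions made for this example, not the actual schema of the AI Trust & Safety Assurance Registry.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a registry entry; field names are illustrative
# assumptions, not the registry's actual schema.
@dataclass
class RegistryEntry:
    organization: str                  # organization accountable for the AI system
    ai_system: str                     # name of the AI system or model
    governance_framework: str          # AI Governance framework the organization has adopted
    risk_framework: str                # e.g. "Three Lines of Defense" or "NIST AI RMF"
    compliance_standards: List[str] = field(default_factory=list)  # e.g. ["ISO/IEC 42001"]
    risk_status: str = "unassessed"    # current risk status assigned to the model

# Example with purely illustrative values:
entry = RegistryEntry(
    organization="Example Corp",
    ai_system="Customer Support Chatbot",
    governance_framework="Internal AI Governance Policy",
    risk_framework="Three Lines of Defense",
    compliance_standards=["ISO/IEC 42001"],
)
```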
As a licensed distributor of ISO/IEC standards and the first software solution vendor to pilot ISO/IEC 42001 (the world's first AI management system standard), Fairly AI enables organizations to accelerate their adoption of standards and best practices and prepares them for AI compliance certifications and audits worldwide.
If you have any questions or concerns about AI systems or models listed in the registry, please contact the third-party AI Incident Reporting Center powered by Fairly AI.
AI Governance is the process of creating policies and controls to ensure organizational accountability for the risk and compliance of AI systems and models.
Organizations in our registry have adopted AI Governance frameworks that help them achieve their governance goals. Here is a list of frameworks from the Fairly Responsible AI Tracker.
AI Risk Management is the process of identifying, assessing, mitigating, and monitoring risks associated with the development, deployment, and use of AI systems and models.
Organizations in our registry have adopted the Three Lines of Defense Model Risk Management framework or the NIST AI Risk Management Framework as their AI Risk Management framework; an illustrative sketch of this workflow follows the list below.
1st Line: The Development Team regularly assesses model performance and updates the risk status based on current data.
2nd Line: The AI Trust & Safety team reviews and approves the risk status, ensuring that higher-risk models undergo additional checks using the Fairly AI Risk and Oversight platform.
3rd Line: Internal Audit [in conjunction with the third-party attestation service from Fairly AI] evaluates the accuracy and appropriateness of the risk status assigned to each model and reports any inconsistencies.
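As a minimal sketch of how the three lines described above could interact, the example below walks a model's risk status through first-line assessment, second-line review, and third-line audit. All class and function names are assumptions made for this illustration; they are not part of any Fairly AI product or API.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: a simplified three-lines-of-defense walk-through.
@dataclass
class ModelRiskRecord:
    model_name: str
    risk_status: str = "unassessed"      # e.g. "low", "medium", "high"
    second_line_approved: bool = False
    audit_findings: List[str] = field(default_factory=list)

def first_line_assess(record: ModelRiskRecord, risk_status: str) -> None:
    """1st line: the development team updates the risk status based on current data."""
    record.risk_status = risk_status

def second_line_review(record: ModelRiskRecord) -> None:
    """2nd line: AI Trust & Safety reviews the status; higher-risk models get extra checks."""
    if record.risk_status == "high":
        pass  # placeholder for additional oversight checks on higher-risk models
    record.second_line_approved = True

def third_line_audit(record: ModelRiskRecord) -> None:
    """3rd line: internal audit verifies the assigned status and reports inconsistencies."""
    if not record.second_line_approved:
        record.audit_findings.append("Risk status was not approved by the second line.")

# Example walk-through with illustrative values:
record = ModelRiskRecord(model_name="credit-scoring-model")
first_line_assess(record, "high")
second_line_review(record)
third_line_audit(record)
```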
AI Compliance ensures that AI systems and their development, deployment, and usage adhere to relevant legal, regulatory, ethical, and organizational standards and policies.
Organizations in our registry have adopted, at a minimum, the ISO/IEC 42001 standard:
ISO/IEC 42001 is the world’s first AI management system standard. It specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems.
In addition, they may have adopted other compliance standards such as: