AI Trust & Safety Assurance Registry

The AI Trust & Safety Assurance Registry is a publicly accessible registry that documents the governance, risk, and compliance controls organizations have implemented to make their AI systems safe, trustworthy, and compliant.


As a licensed distributor of ISO/IEC standards and the first software solution vendor to pilot ISO/IEC 42001 (the world's first AI management system standard), Fairly AI enables organizations to accelerate their adoption of standards and best practices and prepares them for AI compliance certifications and audits worldwide.

Self-assessment completed for AI Trust & Safety Assurance
Third-party attestation completed for AI Trust & Safety Assurance
Third-party validation completed for AI Trust & Safety Assurance
The world's first AI management system standard
Bias in AI systems and AI-aided decision making technical report
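
The three assurance levels above form an ordered scale, from self-assessment up to third-party validation; the registry listings further down refer to them as Fairly Level Two and Level Three. The minimal Python sketch below illustrates that ordering only. The AssuranceLevel enum and meets_minimum helper are assumptions made for this example, not part of any Fairly AI product, and mapping self-assessment to Level One is inferred from the Level Two and Level Three labels used in the listings.

```python
from enum import IntEnum


class AssuranceLevel(IntEnum):
    """Hypothetical ordering of the assurance levels described above."""
    SELF_ASSESSMENT = 1          # self-assessment completed
    THIRD_PARTY_ATTESTATION = 2  # third-party attestation completed
    THIRD_PARTY_VALIDATION = 3   # third-party validation completed


def meets_minimum(achieved: AssuranceLevel, required: AssuranceLevel) -> bool:
    """Return True if an achieved level satisfies a required minimum level."""
    return achieved >= required


# Example: a third-party-validated entry also satisfies an attestation requirement.
assert meets_minimum(AssuranceLevel.THIRD_PARTY_VALIDATION,
                     AssuranceLevel.THIRD_PARTY_ATTESTATION)
```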

AI Incident Reporting

If you have any questions or concerns about AI systems or models listed in the registry, please contact the third-party AI Incident Reporting Center, powered by Fairly AI.

Report Incident

Governance

AI Governance is the process of creating policies and controls to ensure organizational accountability for risk and compliance of AI systems and models.

Organizations in our registry have adopted AI Governance frameworks that help them achieve their governance goals. Here is a list of frameworks from the Fairly Responsible AI Tracker.

Risk

AI Risk Management is the process of identifying, assessing, mitigating, and monitoring risks associated with the development, deployment, and use of AI systems and models.

Organizations in our registry have adopted the Three Lines of Defense Model Risk Management framework or the NIST AI Risk Management Framework as their AI Risk Management framework.

1st Line: The Development Team regularly assesses model performance and updates the risk status based on current data.

2nd Line: The AI Trust & Safety team reviews and approves the risk status, ensuring that higher-risk models undergo additional checks using the Fairly AI Risk and Oversight platform.

3rd Line: Internal Audit (in conjunction with the third-party attestation service from Fairly AI) evaluates the accuracy and appropriateness of the risk status assigned to each model and reports any inconsistencies; a simplified sketch of this hand-off follows.
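
To make the hand-off between the three lines concrete, the following Python sketch models one possible review workflow. It is a minimal illustration under stated assumptions: the RiskStatus values, ModelRecord fields, and function names are hypothetical and do not represent the Fairly AI Risk and Oversight platform's actual data model or API.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskStatus(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class ModelRecord:
    """Hypothetical registry record for one AI model (not Fairly AI's actual schema)."""
    name: str
    risk_status: RiskStatus
    second_line_approved: bool = False
    audit_findings: list = field(default_factory=list)


def first_line_update(record: ModelRecord, status: RiskStatus) -> None:
    """1st line: the development team updates the risk status from current data."""
    record.risk_status = status
    record.second_line_approved = False  # any change triggers a fresh 2nd-line review


def second_line_review(record: ModelRecord, extra_checks_passed: bool) -> None:
    """2nd line: AI Trust & Safety reviews and approves; higher-risk models need extra checks."""
    if record.risk_status is RiskStatus.HIGH and not extra_checks_passed:
        raise ValueError(f"{record.name}: high-risk model requires additional checks")
    record.second_line_approved = True


def third_line_audit(record: ModelRecord, independent_status: RiskStatus) -> None:
    """3rd line: internal audit re-assesses independently and records inconsistencies."""
    if independent_status is not record.risk_status:
        record.audit_findings.append(
            f"{record.name}: recorded {record.risk_status.value}, "
            f"audit assessed {independent_status.value}"
        )


# Example walk-through for one model across the three lines.
model = ModelRecord(name="example-model", risk_status=RiskStatus.MEDIUM)
first_line_update(model, RiskStatus.HIGH)
second_line_review(model, extra_checks_passed=True)
third_line_audit(model, independent_status=RiskStatus.HIGH)  # no findings expected
```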

Compliance

AI Compliance ensures that AI systems and their development, deployment, and usage adhere to relevant legal, regulatory, ethical, and organizational standards and policies.

Organizations in our registry have adopted, at a minimum, the ISO/IEC 42001 standard:

ISO/IEC 42001 is the world's first AI management system standard. It specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems.


They may also have adopted additional compliance standards such as:

"Guru Link is a Toronto-based employment agency that developed the Path Pilot product, an innovative AI Career Companion. FAIRLY LEVEL THREE: THIRD-PARTY VALIDATION"

"Suno Wellness is an AI-powered mental health companion that's revolutionizing therapy support. FAIRLY LEVEL THREE: THIRD-PARTY VALIDATION"

"recruitRyte is a cutting-edge AI-driven recruitment platform designed to revolutionize the way companies source and hire talent. FAIRLY LEVEL TWO: THIRD-PARTY ATTESTATION"
