AI Trust & Safety Assurance Registry Listing for Nebuly
Assessment completed for AI Trust & Safety Assurance
Independent Internal Audit completed for ISO 42001
AI Governance is the process of creating policies and controls to ensure an organization's accountability for the risk and compliance of its AI systems and models.
Nebuly has adopted the NIST AI Risk Management Framework (AI RMF) 1.0, which includes actions, references, and related guidance to achieve the outcomes of its four core functions: Govern, Map, Measure, and Manage.
AI Risk Management is the process of identifying, assessing, mitigating, and monitoring risks associated with the development, deployment, and use of AI systems and models.
Nebuly has adopted the NIST AI Risk Management Framework 1.0 as its AI risk management framework.
AI Compliance ensures that AI systems and their development, deployment, and usage adhere to relevant legal, regulatory, ethical, and organizational standards and policies.
Nebuly has adopted the ISO/IEC 42001 standard.
If you have any questions or concerns about Nebuly's AI systems or models, please contact the third-party AI Incident Reporting Center powered by Fairly AI at incidents@fairly.ai or use the button below.
Report Incident
Nebuly unlocks AI success by understanding your users. Nebuly captures every aspect of the User-LLM interaction, helping you prioritize improvements and quickly optimize the user experience.
The Fairly AI Trust & Safety Assurance Registry is a publicly accessible registry that documents the governance, risk, and compliance controls implemented by organizations to make their AI systems safe, trustworthy, and compliant.