Policies, Platform, and Choosing a Framework

February 9, 2024

As AI technology matures, the number of frameworks is likely to increase. This page collects frameworks and international standards that focus on AI or on areas connected to it, with the aim of making it easier for you to choose frameworks that fit your particular use case.

Want to learn more about AI regulation focused on a particular jurisdiction? Take a look at our free Global AI Regulations Map.

One question you might have is how this 'fits' into the work Fairly AI does. These frameworks are the kinds of documents that provide the foundation from which we build policies. Each policy is made up of individual controls, which act as 'questions' about a particular issue in AI or a connected area. When you use our system, you choose an answer for each control and provide evidence to support it. We then categorize our controls based on the AI lifecycle.

What this means is that for a given area, such as 'development' or 'validation', we can see where gaps exist across a number of different policies. That tells us that the source documents for those policies (e.g., regulations, standards, and frameworks) have gaps. Because we can detect those gaps, we are better equipped to help you choose a set of policies that fit your use case.

We update the list below as we come across new frameworks and standards in the AI space. You can make suggestions on our GitHub, and please don't forget to give us a star if you find our work useful!

General-use Responsible AI and Risk Management Frameworks
Concept-based Frameworks
Agentic Systems

Cybersecurity and Safety

Fairness and Bias

High Impact Risk

System Management
Industry-based Frameworks
Cognitive Technology

Healthcare and Pharmaceuticals
Role-based Frameworks
Startup Founders
Leadership and Executives
Looking to operationalize a responsible AI framework? Work with Fairly to build your policy control center!



  • February 2024: Australia's Responsible AI Framework, OpenAI's Preparedness Framework (Beta), UNESCO's Guidance for Generative AI in Education and Research, Radical Ventures' Responsible AI for Startups (RAIS) framework, and Responsible Innovation Labs' Responsible AI Policy.

DISCLAIMER. The insights and information contained in this article are intended for informational purposes only, and we at Fairly AI do not intend for them to constitute legal advice, opinions, or recommendations that you, the reader, or others can rely on. For any legal issues linked to the information and insights contained in this article, please contact a lawyer/attorney to obtain legal advice.

Fairly provides links to other websites beyond our control, and we do not give any warranties or make any representations regarding such websites. We are not responsible or liable in relation to the content or security of these sites (other than to the extent required by law).

Fairly makes reasonable endeavors to ensure the reliability of the information presented on the app and website, but makes no warranties or representations whatsoever as to its accuracy or reliability. Similarly, due to the nature of the internet, we do not guarantee that Fairly will be available on an uninterrupted and error-free basis.

Want to get started with safe & compliant AI adoption?

Schedule a call with one of our experts to see how Fairly can help