Who is in charge of auditing AI/ML models? The important gatekeepers of Responsible AI

Artificial Intelligence (AI), and Machine Learning (ML) in particular, is the gateway to the future of “cheap prediction.” In “Prediction Machines: The Simple Economics of Artificial Intelligence,” three eminent economists, Professors Ajay Agrawal, Joshua Gans and Avi Goldfarb, recast the rise of AI as a drop in the cost of prediction - a tool capable of organizing and processing information thousands of times faster than human staff, reducing labor hours while improving the accuracy and efficiency of many tasks. Such a powerful advance in technology also carries great responsibility. Machine Learning algorithms may handle sensitive data and make consequential decisions such as granting or denying loans, managing supply chains or even admitting students to college. When an AI is trusted to make decisions that affect the well-being of businesses and individuals alike, it is of the utmost importance that risk is minimized before the model reaches the public. This is why auditors, both internal and external, are so important – they are the third and fourth lines of defense before an AI meets the public, before it succeeds or fails.

While international NGOs such as the World Economic Forum, the Responsible AI Institute and ForHumanity have all championed the need for external AI audits, no standardization or regulation yet exists for mandatory external AI audits. In the banking industry, however, existing guidelines and regulations already require Internal Auditors to audit regulatory models as part of established Model Risk Management processes, regardless of whether they are AI models or traditional statistical models.

With the speed of AI innovation outpacing Governance, Risk and Compliance, an Internal Auditor’s job at major financial institutions is much more difficult than it used to be. To properly inspect whether processes and procedures were followed when developing, validating, governing and deploying AI models, auditors must analyze the work of multiple teams and their communication histories. Yet the way this information is stored is often inconsistent and ineffective. Long lists of changes and explanations may sit in the inboxes of employees who have long since moved on to other projects, leaving auditors to track them down. Sometimes data is buried in massive, difficult-to-understand spreadsheets; sometimes the information does not exist at all, and evidence needed to show that a model follows internal and external regulations and operates safely is nowhere to be found. Sometimes this makes a task only slightly harder; sometimes it means long delays until former project members can assist. In every case, poor organization of information causes delays, and in a world where time is money, delays are costly. Developers find their skills wasted on repeated documentation when they would rather be developing, auditors grow frustrated with incomplete data, and management watches the clock as deadlines draw near and budgets begin to break. We hear these frustrations and offer a solution.

AI Requirements Library

Fairly AI’s AI Governance, Risk and Compliance management SaaS solution offers a robust set of features designed to tackle these grievances and streamline AI model development from inception to market. By providing automated documentation checklists and guidelines, questions that validators and auditors would otherwise need to ask are answered in advance, and all relevant information is stored in one location within a clean, easy-to-use interface. This tool is known as the AI Requirements Library - the single location where all quantifiable requirements related to the development and validation of a model are stored.

Audit trails and comments

Audit trails are also provided, designed to capture how various teams worked together during the end-to-end AI model journey. Comments and versioned reports by developers and validators provide insight into how the model was developed, what changes were made throughout the process, and the rationale behind those decisions. Validators' effective challenges are now also captured as evidence, both as comments and as part of their Model Validation Reports. AI models involve many moving parts and stakeholders, and auditors may find the additional insight from comment histories extremely beneficial to the auditing process.

Explainable Risk Manager

Perhaps the most useful feature, our Explainable Risk Manager allows auditors to view the overall risk of all models at a glance and quickly gauge how well they fit within internal guidelines. It does this by analyzing performance metrics and human assessments and comparing them against known standards and benchmarks, providing an easy-to-understand visual representation of overall risk across models.
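To make the idea of benchmark-based risk scoring concrete, here is a minimal, hypothetical sketch of how per-metric risks might be aggregated into a single at-a-glance score. The function names, metrics, and thresholds below are illustrative assumptions only, not Fairly AI's actual implementation.

```python
# Hypothetical sketch: score a model's risk by comparing its performance
# metrics against benchmark thresholds. All names and numbers are illustrative.

def metric_risk(value: float, benchmark: float, higher_is_better: bool = True) -> float:
    """Return a 0-1 risk score: 0 when the metric meets its benchmark,
    growing toward 1 as the metric falls short of it."""
    shortfall = (benchmark - value) if higher_is_better else (value - benchmark)
    return min(max(shortfall / benchmark, 0.0), 1.0)

def overall_risk(metrics: dict, benchmarks: dict) -> float:
    """Average the per-metric risks into one at-a-glance model risk score."""
    scores = [metric_risk(metrics[name], bench) for name, bench in benchmarks.items()]
    return sum(scores) / len(scores)

# Example: a model slightly below its accuracy benchmark but above its AUC benchmark.
benchmarks = {"accuracy": 0.90, "auc": 0.85}
model_metrics = {"accuracy": 0.87, "auc": 0.88}
print(round(overall_risk(model_metrics, benchmarks), 3))
```

A real system would weight metrics by materiality and fold in qualitative human assessments, but even this simple aggregation shows how disparate checks can collapse into one comparable number per model.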

Documented AI Assistant

Fairly AI aims to make the auditing process easier on all counts, not just the gathering of information. We recognize that report writing can be just as arduous as collecting information, if not more so. With our Documented AI Assistant, writing reports becomes an easy, painless, and intuitive process. By standardizing report building, internal and external auditors have the tools and data they need the first time around, saving time otherwise spent revising and resending documents.


The payoff

By simplifying and streamlining the manual auditing process, the labor hours required to complete an audit are expected to decrease, providing a quantifiable increase in the cost efficiency of models-to-market. Even more significant, though, is the unquantifiable amount saved by preventing the ethical, legal, financial, and reputational harms that can result from faulty AI making it past auditing teams. With the right tools, auditors are far less likely to miss important processes and procedures that can cause defects, and AI models are far more likely to perform as expected when they hit the market. Through Fairly AI, you can expect an efficient and streamlined auditing process that saves significant time and labor, bringing safer and better models-to-market.

About Fairly AI

Fairly AI’s mission is to support the broad use of fair and responsible AI by helping organizations accelerate safer and better models-to-market. We offer an award-winning AI Governance, Risk and Compliance SaaS solution for automating Model Risk Management. Our enterprise-scale, API-driven productivity and risk management tools feature automated and transparent reporting, bias detection, and continuous explainable model risk monitoring, thus streamlining AI development, validation, audit and documentation processes across the three lines of defense for financial institutions around the world. Visit us at or follow us on Twitter and LinkedIn.

Want to get started with safe & compliant AI adoption?

Schedule a call with one of our experts to see how Fairly can help