Organizations working to develop new artificial intelligence often fumble their way through AI model documentation and reporting. Documenting decisions and writing reports seems simple enough; however, these tasks quickly drain time and resources from organizations that don't have workflows in place to manage them efficiently. Unfortunately, this can (and does) delay model implementation. To avoid headaches, costly delays, and compliance issues, organizations can follow successful AI leaders and use technology to automate documentation and reporting.
Accurate and complete model documentation is a foundational part of model risk management (MRM), which has become critically important with the maturation and broader adoption of AI. Documentation is essentially a form of record keeping. Consistent documentation helps AI model developers and data scientists ensure they follow best practices, policies, and regulations. Documentation also allows developers’ QA and compliance counterparts to verify and evaluate risk consistently — ultimately mitigating and reducing risks before they cause financial and reputational harm.
Early calls for MRM came after the 2008 financial crisis, when the banking industry faced intense scrutiny. At that time, the Federal Reserve Board (FRB) and the Office of the Comptroller of the Currency (OCC) in the U.S. introduced the SR 11-7 framework, and the Office of the Superintendent of Financial Institutions (OSFI) in Canada debuted Guideline E-23. More recent AI/ML fiascos, such as the Zillow home-valuation catastrophe, have brought attention and urgency to the practice of MRM outside the financial industry.
At a minimum, when model developers and data scientists create model development reports, they need to include training data, testing data, and performance results, as well as decisions and the rationale behind them, to make AI models explainable. The Information Commissioner's Office (ICO) and the Alan Turing Institute in the U.K. call this process-based explainability. It's a key missing answer to the question "How do we operationalize responsible AI?" and one that is seldom discussed in explainability (XAI) conversations.
Manually compiling information and writing a single model development report can take weeks, if not months. Add in multiple revisions, multiply by the tens, hundreds, or thousands of models organizations will develop as they scale AI, and model development reports quickly overwhelm model developers and data scientists. "I love spending half of my job writing regulatory-compliant model development reports," says no data scientist!
The Solution: Automation
Model developers and data scientists know there are problems and inefficiencies surrounding documentation and reporting, but most are overwhelmed with responsibilities and don’t have the bandwidth to fix the issues. Therefore, the best thing organizations can do is leverage technology to capture data and build reports automatically. This helps ensure AI models are transparent and auditable in a scalable and standardized fashion.
How to Automate AI Documentation
Organizations with advanced AI practices are already building or buying solutions for automating AI documentation. For example, a leading U.S. bank with thousands of AI models waiting to be validated and deployed started building its own automation technology in-house four years ago.
If organizations want to buy rather than build a solution, they can choose from several options on the market. Traditional end-to-end DS/ML platforms such as H2O and DataRobot offer auto-documentation as a feature. Such built-in features work well when all data scientists use the same DS/ML platform. However, many organizations have fragmented data science teams (think global banks with hundreds, if not thousands, of data scientists working in siloed units). In those environments, independent AI governance, risk, and compliance solutions like the one my company, FAIRLY, offers allow data scientists to keep building models with whatever platforms or tools they prefer, while still meeting the requirements of a consistent, standardized documentation and reporting process.
A benefit of adopting a technology solution and moving the process of documentation "closer to the code" is that doing so significantly reduces the strain on data scientists. With less time going toward writing reports, data scientists' hours can be reallocated toward the work they prefer: building AI. Not only does this boost job satisfaction and help retain talent, it saves businesses money. According to the U.S. Bureau of Labor Statistics, the mean salary of a data scientist in the U.S. is around $100K per year. Reducing the hours spent on documentation and applying them elsewhere is cost effective.
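To make "closer to the code" concrete, here is a minimal sketch of what automated report capture can look like: a helper that assembles the facts regulators and validators ask for (training data, test metrics, and the rationale behind key decisions) into a machine-readable report at training time, rather than reconstructing them by hand weeks later. The function name, field names, and example values are all illustrative assumptions, not a regulatory schema or any vendor's actual API.

```python
import json
from datetime import datetime, timezone

def build_model_report(model_name, training_data, test_metrics, decisions):
    """Assemble a minimal, machine-readable model development report.

    All field names here are illustrative placeholders, not a
    regulatory schema; a real MRM workflow would map these to the
    organization's documentation template.
    """
    report = {
        "model_name": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "training_data": training_data,  # e.g. source, row count, date range
        "test_metrics": test_metrics,    # e.g. AUC, KS statistic
        "decisions": decisions,          # choices made, plus the rationale
    }
    return json.dumps(report, indent=2)

# Capture, at training time, the facts a validator would otherwise
# have to chase down by email months later.
print(build_model_report(
    model_name="credit_default_v3",
    training_data={"source": "loans_2019_2021.csv", "rows": 125000},
    test_metrics={"auc": 0.87, "ks": 0.41},
    decisions=[{"choice": "dropped zip_code feature",
                "rationale": "potential proxy for a protected attribute"}],
))
```

Because the output is structured rather than free-form prose, the same captured facts can feed many downstream documents (development reports, validation packages, audit trails) without being rewritten each time.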
There’s no doubt that documentation and reporting requirements will become more stringent in the future as new AI regulations are passed. The World Economic Forum, the Bank of England and the European Commission are each calling for better processes and technology to increase accountability, transparency, and explainability of AI models. Ultimately, this means (as the Chair of AI Governance at a large Swiss wealth management company put it) “the necessity for record keeping is not going away, it’s only going to increase.” Organizations that have not already done so need to think about how they can make boring (but important) documentation tasks more efficient now.
The bottom line is that organizations cannot reverse engineer responsible AI. Those that put in effort upfront to automate processes now will gain an advantage as they work to speed AI models into production ahead of their competitors. Plus, they will have peace of mind knowing that automation is improving their reporting, which ultimately helps accelerate safer and better models to market.
FAIRLY’s mission is to support the broad use of fair and responsible AI by helping organizations accelerate safer and better models-to-market. We offer an award-winning AI Governance, Risk and Compliance solution for automating Model Risk Management. Our enterprise-scale, API-driven productivity and risk management tools feature automated and transparent reporting, bias detection, and continuous explainable model risk monitoring, thus streamlining AI development, validation, audit and documentation processes across the three lines of defense for financial institutions around the world. Visit us at https://www.fairly.ai or follow us on Twitter and LinkedIn.