5 takeaways from the Global AI Virtual Conference Model Risk Management workshops and roundtable

March 19, 2021

FAIRLY was recently given the opportunity to speak at the Global Artificial Intelligence Virtual Conference, where we shared our thoughts with other experts on how AI should be built and managed in the future. Discussion centered on applications of AI, their efficiency, and, most interestingly, how ethics should be integrated into artificial intelligence. We learned a great deal from these discussions, so we have summarized our key takeaways to help broaden the conversation around AI and its future. Without further ado, here are the big ideas:

  1. Many AI models make too many assumptions – Finance professor Nikola Gradojevic of the University of Guelph spoke in detail about the algorithms financial institutions currently use to price and trade securities. He explained that many models assume a normal distribution when pricing options, which restricts what they can accomplish. To address this, he proposes a non-parametric network model: rather than committing to a fixed distribution or set of parameters, the model learns pricing and risk patterns directly from the data (see the first sketch after this list).
  2. Many models break under changing conditions and inadequate risk management – Gradojevic went on to explain how many existing models pigeon-hole themselves by training on narrow data sets. For example, many securities-pricing models broke completely in the wake of COVID-19 because prices began moving in ways existing algorithms had never seen. Alongside non-parametric models, he proposes a fuzzy logic system to address this. In a conventional model, a decision falls under either “yes” or “no”: whether a decision is good is a binary question. With fuzzy logic, decisions instead receive a continuous risk weighting and are compared against each other, which lets a model tolerate some variance in the data and handle changing conditions more gracefully (see the second sketch after this list). FAIRLY believes proper risk management is essential to any system, and Gradojevic’s take is an intriguing one that deserves to be explored.
  3. Models take too long to build – COVID-19 “broke” many models, and any event that altered data or changed people’s behavior at a similar scale would do the same. But why? Surely one could build a new model to account for the changes? According to Agus Sudjianto, Executive Vice President and Head of Corporate Model Risk at Wells Fargo, this is because models simply take too long to build. Creating a model that performs well is difficult on its own, but internal validation and audit can add years to development before a model reaches production. Simply put, building a model often takes longer than the societal changes it is built to capture. For AI systems to truly shine, this hurdle must be overcome.
  4. AI systems are not truly explainable or transparent – Sudjianto explained that transparency in today’s AI models, while sought after, does not truly exist. Models are typically explained “post hoc”, that is, after the event: the results a model produces are analyzed after the fact, often by yet another model, to reverse-engineer its behavior and predict future outputs. Even when the outputs become predictable, the inner workings remain opaque, yielding a kind of pseudo-transparency. Sudjianto argued this is not good enough: AI must instead be built from the ground up with explainability in mind, a task that grows harder as models grow more complex. One proposed approach is to decompose a model into easily understandable parts, isolating its processes into a series of simple, near-linear components that can each be inspected (see the third sketch after this list).
  5. Proper communication is one of the biggest hurdles in AI development – Everyone agreed that one of the biggest bottlenecks in AI development today is managing information, or more precisely, sharing it. Sudjianto compared AI development to racing a car: for a race car to win, it needs several professionals, someone to drive it, someone to design it, someone to manufacture it. In the same way, developing an AI model requires collaboration between teams with different skill sets, and across such large networks information is often lost or mismanaged. FAIRLY’s David Van Bruwaene suggests that all team members work on a single platform that organizes and shares information, automating away much of the struggle of information management and making collaboration easier.
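To make the first takeaway concrete, here is a minimal sketch of what a non-parametric pricing model might look like. This is our own illustration under stated assumptions (a scikit-learn MLPRegressor trained on simulated quotes), not Gradojevic’s actual model; the point is that the network learns the pricing surface from data rather than assuming a normal distribution.

```python
# A minimal sketch of non-parametric option pricing, assuming a
# scikit-learn environment. Unlike a Black-Scholes-style model, which
# fixes a lognormal price distribution, the network learns the mapping
# from contract characteristics to prices directly from observed data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: in practice these would be observed
# market quotes, not simulated values.
n = 5000
moneyness = rng.uniform(0.8, 1.2, n)   # spot price / strike price
maturity = rng.uniform(0.05, 2.0, n)   # years to expiry
observed_price = (np.maximum(moneyness - 1.0, 0.0)
                  + 0.1 * np.sqrt(maturity)
                  + rng.normal(0.0, 0.01, n))

X = np.column_stack([moneyness, maturity])

# No distributional assumption: the network infers the pricing
# surface from the data itself.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                     random_state=0)
model.fit(X, observed_price)

print(model.predict([[1.05, 0.5]]))  # price estimate for one contract
```

Because nothing about the price distribution is hard-coded, retraining on new quotes shifts the learned surface instead of violating a built-in assumption.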
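For the second takeaway, here is a toy sketch of fuzzy risk weighting; the membership functions are our own invention for illustration, not Gradojevic’s system. It contrasts a brittle binary threshold with a graded score that degrades smoothly as data drifts.

```python
# A toy fuzzy-logic risk weighting. Instead of a hard yes/no threshold,
# each decision receives graded memberships in "low" and "high" risk
# sets, which blend into one continuous score.
import numpy as np

def membership_low(x: float) -> float:
    """Degree to which a risk signal x in [0, 1] counts as low risk."""
    return float(np.clip(1.0 - 2.0 * x, 0.0, 1.0))

def membership_high(x: float) -> float:
    """Degree to which x counts as high risk."""
    return float(np.clip(2.0 * x - 1.0, 0.0, 1.0))

def risk_weight(x: float) -> float:
    """Blend memberships into a continuous risk score, so moderately
    unusual data shifts the score instead of flipping a decision."""
    low, high = membership_low(x), membership_high(x)
    mid = 1.0 - low - high  # the fuzzy middle ground between the sets
    # Weighted average of the set centers: 0.0 (low), 0.5 (mid), 1.0 (high).
    return low * 0.0 + mid * 0.5 + high * 1.0

for signal in (0.1, 0.45, 0.6, 0.9):
    binary = "reject" if signal > 0.5 else "accept"  # brittle hard cut-off
    print(f"signal={signal:.2f}  binary={binary}  "
          f"fuzzy_risk={risk_weight(signal):.2f}")
```

Note how the signals 0.45 and 0.6 land on opposite sides of the binary cut-off yet receive nearby fuzzy scores, which is exactly the tolerance to variance described above.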
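Finally, for the fourth takeaway, here is a minimal sketch of “explainability by construction.” It is our own toy rendering of the decomposition idea, not Sudjianto’s method: the model is a sum of one linear term per feature, so each contribution can be read off directly rather than reverse-engineered post hoc.

```python
# An inherently interpretable model built as a sum of per-feature
# linear pieces: each feature's contribution to a prediction is
# transparent by construction, with no post-hoc surrogate needed.
import numpy as np

class AdditiveLinearModel:
    def __init__(self, n_features: int):
        self.coef = np.zeros(n_features)
        self.intercept = 0.0

    def fit(self, X, y):
        # Ordinary least squares keeps the model a plain additive
        # combination of single-feature terms.
        A = np.column_stack([X, np.ones(len(X))])
        sol, *_ = np.linalg.lstsq(A, y, rcond=None)
        self.coef, self.intercept = sol[:-1], sol[-1]
        return self

    def explain(self, x):
        """Per-feature contributions for one input."""
        return {f"feature_{i}": c * v
                for i, (c, v) in enumerate(zip(self.coef, x))}

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 + rng.normal(0.0, 0.1, 200)

model = AdditiveLinearModel(3).fit(X, y)
print(model.explain(X[0]))  # each feature's additive contribution
```

Real systems would use richer per-feature components than straight lines, but the principle is the same: keep each part simple enough to inspect, and transparency comes from the structure itself.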

About FAIRLY

FAIRLY’s mission is to support the broad use of fair and responsible AI by helping organizations accelerate safer and better models-to-market. We offer an award-winning AI Governance, Risk and Compliance SaaS solution for automating Model Risk Management. Our enterprise-scale, API-driven productivity and risk management tools feature automated and transparent reporting, bias detection, and continuous explainable model risk monitoring, thus streamlining AI development, validation, audit and documentation processes across the three lines of defense for financial institutions around the world. Visit us at https://www.fairly.ai or follow us on Twitter and LinkedIn.

Want to get started with safe & compliant AI adoption?

Schedule a call with one of our experts to see how FAIRLY can help.