Is the Model Fit for Purpose?
There has been no shortage of talk about how the future is approaching faster than ever before. The explosive potential of this future is highlighted by a 2023 report from S&P Global Market Intelligence forecasting that revenues from generative artificial intelligence (AI) technology offerings will reach $3.7 billion in 2023 and expand to a staggering $36 billion by 2028. In the midst of this exciting trajectory, a group of experts gathered online on June 27, 2023, to stoke the conversation around generative AI, model risk management, safety, and compliance.
In the BrightTALK webinar entitled “AI: the Future of Business - Achieving Safe and Compliant Generative AI with Lessons from Model Risk Management,” host Ramesh Dontha, a digital transformation expert, invited special guests Dr. Jon Hill, Professor of Model Risk at the NYU Tandon School of Engineering; David Van Bruwaene, Co-founder and CEO of Fairly AI; and Hassan Patel, Director of Global AI Policy Compliance Engineering at Fairly AI, to dive deep into the conversations that are shaping our future.
In his critical role at Fairly, Hassan is in charge of translating the regulatory landscape into actionable policies for businesses that want to strengthen their AI compliance. When it comes to the kind of compliance organizations need to think about for generative AI, no existing regulations or policies address it directly yet. However, major draft legislation such as the EU AI Act has proposed categorizing AI systems by risk tier, such as high versus low risk. In the specific case of generative AI tools such as ChatGPT, Hassan notes that the key issue is that risk has now shifted to the user rather than the deployer of the AI. This is why Fairly is conducting risk assessments to create an LLM policy, so compliance and risk can be managed reliably.
Dr. Jon Hill, a published author and public speaker with a rich academic background, holds deep expertise in risk management. Despite his credentials, he refrains from calling himself an academic, yet his wisdom, especially around model risk, credit risk, equity risk, and operational risk, is profound. He posed a thought-provoking question drawn from George Box's 1987 book on empirical model-building: "All models are wrong. The question is, how wrong do they have to be before they stop being useful?" He further stressed the importance of recognizing when risks need to be mitigated, pointing to the three lines of defense against model risk.
David, who taught ethics and formal logic at Cornell, Berkeley, and the University of Waterloo, offered insight into the intersection of regulation, ethics, and technological innovation in the context of generative AI. He remarked that moral norms are notoriously hard to define, since society lacks a clear, widely agreed-upon set of guidelines. However, progress can be made in areas where there is strong moral agreement. By narrowing down these categories, we can define the corresponding outputs well enough to identify and block them, which in turn allows these models to be evaluated on the likelihood that they will produce unsafe content.
The speed of AI innovation is outpacing risk and compliance. As we hurtle into a future filled with technology, innovation, and governance, the path forward remains largely uncharted. However, with the dedicated team at Fairly, we are confident that we can help your organization accelerate the safe, secure, and compliant adoption of generative AI using best practices from proven model risk management frameworks. Buckle up, it's going to be an exciting ride!
To learn other lessons from this webinar, watch the recording here: https://www.brighttalk.com/webcast/18550/585919?utm_source=brighttalk-sharing&utm_medium=web&utm_campaign=linkshare