
Compliance Automation for Fair Lending

October 1, 2024

Introduction

In this case study, we explore how a leading sponsor bank implemented the Fairly AI Oversight and Risk Management Platform to ensure compliance with the OCC's Fair Lending exam requirements for its Tax Loan Model, using the Fairly AI Trust & Safety Assurance Level Two Third-Party Validation solution.

By leveraging the Fairly platform's capabilities, the bank's second-line model validation and fair lending teams were able to automate qualitative and quantitative assessments, set up policies and controls to mitigate identified risks, and continuously monitor those policies and controls to ensure ongoing compliance with up-to-date internal policies and external regulations.

Challenges

The status quo model risk management process no longer works because of:

  • A lack of transparency: the bank needed a solution that provides visibility into, and control over, its AI models to ensure fairness, accountability, and transparency.
  • Increased regulatory scrutiny: due to recent high-profile regulatory compliance failures in the U.S. Fintech and Banking sector, regulators like the OCC and FDIC have raised the bar, “turning over every stone” in their regulatory examinations.
  • Operational inefficiency: manual compliance processes cannot cope with the increased velocity and complexity of the new generation of AI models. Continuous monitoring cannot be done with manual labor alone.

The Fairly AI Solution

Ease the burden of regulatory compliance implementation and documentation:

  • Build up-to-date policies and controls in a standardized system without IT overhead.
  • Scan structured data and unstructured text to provide answers for conceptual soundness assessments.

Focus on priorities with data-driven risk detection for data and models, both AI and non-AI:

  • Auto-detect privacy, security, and bias risks in datasets and models.
  • Continuously identify and monitor new vulnerabilities and proxy data issues using purpose-built expert models, metadata, and statistical techniques.

Results 

Our ongoing validation and monitoring system, operating against 35 controls aligned with ISO/IEC TR 24027 (Bias in AI systems and AI aided decision making), verified that the model's outcomes showed no detectable bias. After initial setup, each subsequent quantitative validation took only minutes, saving over 100 hours.
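To give a flavor of the kind of quantitative check such a validation run can automate, fair lending analyses commonly compute an adverse impact ratio and compare it against the "four-fifths rule" of thumb. The sketch below is illustrative only: the approval counts are hypothetical, and this is not presented as one of the bank's actual 35 controls.

```python
def adverse_impact_ratio(approved_protected, total_protected,
                         approved_reference, total_reference):
    """Ratio of the protected group's approval rate to the reference
    group's. Under the common 'four-fifths rule' heuristic, a ratio
    below 0.8 is flagged for further fair lending review."""
    rate_protected = approved_protected / total_protected
    rate_reference = approved_reference / total_reference
    return rate_protected / rate_reference

# Hypothetical approval counts from one validation run.
air = adverse_impact_ratio(45, 100, 60, 100)
print(f"AIR = {air:.2f}, passes four-fifths rule: {air >= 0.8}")
# → AIR = 0.75, passes four-fifths rule: False
```

Automating checks like this across every control is what turns a multi-week manual validation into a run that completes in minutes.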


Want to get started with safe & compliant AI adoption?

Schedule a call with one of our experts to see how Fairly AI can help.