Risk Tiering an AI Use-Case

January 31, 2024

When looking at the risk involved with procuring or developing an AI system, it isn’t enough to examine the risk generated by the AI model itself. AI models exist within systems. Systems exist within organizations. And organizations exist within a context. 

Starting with context, a number of factors affect the level of risk faced by a company developing, deploying, or using AI. For instance, healthcare as a sector is often seen as "high-risk," but an AI-driven scheduling system for physicians is less risky than an AI-driven diagnostic application. The use-case within a sector, in other words, affects risk.

Furthermore, the type of AI technology a supplier uses affects risk as well. Some systems don't even use AI, but rather simpler models that could be run out of an Excel file. In those cases it is easy to trace how an input turns into an output, so risk is lower than for a probabilistic AI model that may produce slightly different outputs each time it is run.

From another perspective, a company might be developing an AI system it hasn't released yet. With no users, that system carries less risk than one already in use.

An additional factor when considering the context an AI-driven organization operates in is its legal jurisdiction. Some jurisdictions have passed laws regarding AI use (such as the EU AI Act in Europe) whereas other jurisdictions rely on applying existing laws to AI use cases.

Combining all of these factors, a company procuring or developing AI technology can understand the level of risk it is taking on by operating a certain kind of model, for a certain kind of use, in a certain type of sector.
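To make the idea of combining factors concrete, here is a minimal sketch of a scoring rubric. The factor names, ratings, weights, and tier thresholds below are illustrative assumptions for demonstration only; they are not Fairly's methodology or any regulator's scheme.

```python
# Illustrative only: a toy rubric that combines the risk factors discussed
# above (sector, use-case, model type, deployment stage, jurisdiction) into
# a coarse tier. All names and thresholds are assumptions for demonstration.

# Each factor rating maps to a score from 1 (lower risk) to 3 (higher risk).
FACTOR_SCORES = {
    "sector": {"low": 1, "medium": 2, "high": 3},  # e.g. retail vs. healthcare
    "use_case": {"administrative": 1, "operational": 2, "decision_critical": 3},
    "model_type": {"deterministic": 1, "statistical": 2, "probabilistic_ai": 3},
    "deployment": {"in_development": 1, "pilot": 2, "in_production": 3},
    "jurisdiction": {"no_ai_law": 1, "existing_law_applies": 2, "ai_specific_law": 3},
}

def risk_tier(factors: dict) -> str:
    """Map a set of factor ratings to a coarse risk tier.

    Thresholds are chosen arbitrarily for illustration.
    """
    total = sum(FACTOR_SCORES[name][rating] for name, rating in factors.items())
    if total <= 7:
        return "low"
    if total <= 11:
        return "medium"
    return "high"

# An AI-driven diagnostic tool in healthcare, live in a jurisdiction with
# AI-specific law (e.g. the EU): every factor scores 3, so the tier is high.
print(risk_tier({
    "sector": "high",
    "use_case": "decision_critical",
    "model_type": "probabilistic_ai",
    "deployment": "in_production",
    "jurisdiction": "ai_specific_law",
}))  # -> high

# A physician-scheduling tool still in development scores lower, even in
# the same high-risk sector.
print(risk_tier({
    "sector": "high",
    "use_case": "administrative",
    "model_type": "statistical",
    "deployment": "in_development",
    "jurisdiction": "existing_law_applies",
}))  # -> medium
```

The point of the sketch is not the specific numbers but the structure: each contextual factor contributes independently, so the same model can land in different tiers depending on sector, use-case, deployment stage, and jurisdiction.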

Fairly's award-winning platform can help put your team's mind at ease when assessing risk. Book a call with us to learn more.

DISCLAIMER. The insights and information contained in this article are intended for informational purposes only, and we at Fairly AI do not intend for them to constitute legal advice, opinions, or recommendations that you, the reader, or others can depend on. For any legal issues linked to the information and insights contained in this article, please contact a lawyer/attorney to retain legal advice.

Fairly provides links to other websites beyond our control and we are not responsible for and do not give any warranties or make any representations regarding such websites. We are not responsible for or liable in relation to the content or security of these sites (other than to the extent required by law.)

Fairly makes reasonable endeavors to ensure the reliability of the information presented on the app and website, but makes no warranties or representations whatsoever as to its accuracy or reliability. Similarly, due to the nature of the internet, we do not guarantee that Fairly will be available on an uninterrupted and error-free basis.
