Fairly Answers: Isn't it impractical to test LLMs?

May 12, 2024
Fairly Answers: Isn't it impractical to test LLMs?

With our work in AI safety, we at Fairly come across a lot of questions. The curious, the skeptical, and the concerned all want to know what exactly we do, whether the work we do is actually meaningful, and how much of the AI safety landscape we cover.

Concern: AI safety testing is an unrealistic aspiration for organizations because in cybersecurity, even though there's extensive testing and scanning breaches still happen. LLMs are complex and people don't understand the mathematics, complexity, and the relationships involved in them. We need to focus on controls to detect when things break down. Safer AI means having small models that are interpretable and large ones that have strong controls that prevent actions from being taken.

Answer: LLMs are large and complex. But just because something is complex and relatively opaque doesn't mean we don't try to safety test it at all. In other domains, there are huge integrated systems with many points of failure that we still use and benefit from, take automobiles for instance. We might not know how the brakes on a car travelling 300 mph in 175 degree weather operates in real-world conditions but because those conditions are generally atypical for the life we live, we instead focus on situations that are closer to actual use cases. Similarly for generative AI, we won't cover every conceivable safety risk or every unknown-unknown risk, but that shouldn't stop us from simulating real-world scenarios and conducting mass-safety tests. Controls, policies, and process are a key part of this too. Holistic safety requires having multiple layers that work in tandem.

We want to use our insights from testing AI systems to help you build safer AI. Book a call to connect with our (red) team!

DISCLAIMER. The insights and information contained in this article is intended for informational purposes only and we at Fairly AI do not intend for it to constitute legal advice, opinion, or recommendations that you, the reader, or others can depend on. For any legal issues linked to the information and insights contained in this article, please contact a lawyer/attorney to retain legal advice.

Fairly provides links to other websites beyond our control and we are not responsible for and do not give any warranties or make any representations regarding such websites. We are not responsible for or liable in relation to the content or security of these sites (other than to the extent required by law.)

Fairly makes reasonable endeavors to ensure reliability of the information presented on the app and website, but makes no warranties or representations whatsoever as to its accuracy or reliability. Similarly, due to the nature of the internet, we do not guarantee that the Fairly will be available on an uninterrupted and error free basis.

You may be interested in

Want to get started with safe & compliant AI adoption?

Schedule a call with one of our experts to see how Fairly can help