Blog

Red teaming an emotional companion chatbot: a case study of Suno Chat

February 7, 2024
Red teaming an emotional companion chatbot: a case study of Suno Chat

Trigger warning: we briefly mention self-harm in this article.

Facts

Sometimes you need someone to talk to. To vent to. To pour your heart out to. Maybe some of us have asked ChatGPT questions at 3:00AM. What if there was a chatbot that you could talk to, one that’s empathetic, one that doesn’t give you the same boilerplate advice every 15 minutes. Enter Suno.chat. Suno is an AI-powered emotional companion developed to proactively and holistically support your wellness.

Issue

Chatbots come in many shapes and sizes. Some might help you easily return a product you bought online. Others might provide tech support. One thing to note is that different chatbots carry different risks. It isn’t necessarily about the sector they’re in either. A chatbot that helps you book an appointment carries a very different risk profile to a chatbot that provides you with a preliminary diagnosis. 

AI use cases are expanding fast. Learn the basics about AI risk in our article on risk tiers.

When it comes to emotional support and companionship, there are people who are genuinely struggling with difficult situations that give rise to challenging emotions. As a result, there are certain risks that are more concerning in Suno’s case than there are with others. Specifically, we wanted to avoid Suno encouraging users towards self-harm.

Response

AI-driven emotional companions are a relatively new phenomenon. As a result we used a ‘ground-up’ approach to risk identification. What does this entail? In Suno’s case, it meant that since there weren’t AI frameworks for red teaming emotional support chatbots, we instead developed a red teaming policy in-house.

Keep on top of new developments in AI policy, check out our Responsible AI Framework Tracker!
Analysis

What we noticed was that much of the harm that could come from using an emotional companion chatbot could be broadly classified into two risk areas: derailment and a lack of helpfulness. With derailment, we tested for behavior that encouraged self-harm, lacked empathy, and used foul language. In terms of gauging helpfulness, we looked at how often our automated red teaming agent would send Suno into a loop of the same responses, whether it retained memory of early parts of the conversation, and how it resolved each conversation.

You might wonder why something like looping or a general lack of helpfulness might be an issue. What we concluded in our analysis was that if someone required emotional support, a chatbot that simply looped responses or provided generic feedback that was not tailored to that person’s situation might frustrate them. If someone is already in an emotionally vulnerable situation, this frustration might prevent them from opening up further or seeking help. As a result, we aimed to identify each instance of unhelpfulness and worked with the developer to remove that kind of behavior.

Because Suno is run using large language models, we needed a way to rapidly test across the dimensions we identified. Fairly leveraged the use of AI to run automated tests on Suno and generate reports summarizing our findings.

Conclusion

Fairly AI’s automated red teaming solution provided Suno’s developers with the testing and insights needed to make adjustments to the AI models they used and the fine-tuning they did. 

We want to use our insights from testing products like Suno to help you build better AI systems. Book a call to connect with our (red) team!

DISCLAIMER. The insights and information contained in this article is intended for informational purposes only and we at Fairly AI do not intend for it to constitute legal advice, opinion, or recommendations that you, the reader, or others can depend on. For any legal issues linked to the information and insights contained in this article, please contact a lawyer/attorney to retain legal advice.

Fairly provides links to other websites beyond our control and we are not responsible for and do not give any warranties or make any representations regarding such websites. We are not responsible for or liable in relation to the content or security of these sites (other than to the extent required by law.)

Fairly makes reasonable endeavors to ensure reliability of the information presented on the app and website, but makes no warranties or representations whatsoever as to its accuracy or reliability. Similarly, due to the nature of the internet, we do not guarantee that the Fairly will be available on an uninterrupted and error free basis.

You may be interested in

Want to get started with safe & compliant AI adoption?

Schedule a call with one of our experts to see how Fairly can help