AI-in-the-Loop as a Means of Risk Mitigation

You’ve almost certainly been reading headlines about the explosion of generative artificial intelligence (AI), including the impressive performance of large language models such as Google’s Bard. These models remain somewhat unpredictable, however, leading their developers to urge caution when deploying them in high-stakes settings such as healthcare. For instance, a company deploying a generative AI chatbot or conversational agent in a peer-to-peer mental health support venue might rightly be concerned about liability if a misstep by the model resulted in harm.

Improving the technology is not the only way to mitigate such liability risk; thoughtful design of the tool’s deployment can also reduce it. For instance, a recent study published in Nature Machine Intelligence demonstrated that using a chatbot dubbed HAILEY to provide just-in-time feedback to a human offering peer-to-peer mental health support increased conversational empathy, without the peer supporter becoming reliant on the chatbot. This places the AI in the loop, inverting the human-in-the-loop paradigm often seen in autonomous systems.

Instead of handing the reins entirely to the generative model, interposing it in a task still performed by a human is likely to produce a better experience for support recipients, as well as a smaller liability profile for the company or entity deploying the tool.
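To make the distinction concrete, here is a minimal sketch of the AI-in-the-loop control flow in Python. The function names and the placeholder suggestion logic are illustrative assumptions (the study's HAILEY system is far more sophisticated); the point is the shape of the loop: the model critiques the human supporter's draft, and the human decides what is actually sent.

```python
from dataclasses import dataclass


@dataclass
class Suggestion:
    """A just-in-time rewrite suggestion offered to the human supporter."""
    original: str
    proposed: str
    rationale: str


def draft_empathy_feedback(support_message: str) -> Suggestion:
    """Stand-in for a generative model call (hypothetical; any LLM API
    could be substituted). Note the model never talks to the support
    seeker directly; it only critiques and rewrites the human's draft."""
    proposed = (
        support_message
        + " That sounds really hard, and it makes sense that you feel this way."
    )
    return Suggestion(
        original=support_message,
        proposed=proposed,
        rationale="Adds an explicit acknowledgment of the seeker's feelings.",
    )


def ai_in_the_loop_reply(draft: str, human_accepts: bool) -> str:
    """The human supporter stays in control: the AI's suggestion is
    applied only if the supporter explicitly accepts it."""
    suggestion = draft_empathy_feedback(draft)
    return suggestion.proposed if human_accepts else suggestion.original


# The message sent to the support seeker is always human-approved.
print(ai_in_the_loop_reply("I'm sorry you're going through this.", human_accepts=True))
```

The design choice worth noticing is that the generative model's output is advisory rather than authoritative: it can only ever modify a draft with the human's consent, which is what keeps the human, not the model, accountable for the final message.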

Image: Human–AI collaboration enables more empathic conversations in text-based peer-to-peer mental health support (Nature Machine Intelligence, 2023)
