Screeners, navigators and nudgers: The future of conversational AI in healthcare
Using virtual agents to offload human work and support customers’ needs is nothing new. Think of the process of calling your cable company to inquire about service outages: You will almost certainly be met with an automated number-based menu, along with the invitation to “listen carefully, as our options have changed.”
The difference today, say experts, is that the agents are often conversational. Using natural language processing, they’re intended to bring much more of an empathetic – some might even call it human – model to customer service.
“The commerce world has been doing this for a while,” said Nathan Treloar, president and cofounder of conversational AI vendor Orbita, during an ATA2020 virtual presentation Monday.
In the healthcare industry, he said, the potential for a virtual agent to support a patient seeking services through basic triaging is “pretty obvious.”
As telehealth use has surged in response to the novel coronavirus pandemic, health systems have turned to conversational agents such as chatbots to do just that. Government agencies and tech giants, including Microsoft and Apple, have championed chatbots as a way for patients to determine whether they have COVID-19 symptoms and to help guide them toward care.
But that use case is just the beginning, said experts.
When developing and implementing conversational agents for healthcare, said Lan Chi “Krysti” Vo, medical director of telehealth in the Department of Child and Adolescent Psychiatry and Behavioral Sciences at the Children’s Hospital of Philadelphia, “We want to look at the workflow for the patient and the workflow for the clinician and say, ‘How do we make both of their lives easier?'”
In the primary care setting, Vo said, chatbots could act as screeners for well-child visits to direct patients to the most appropriate level of care.
“My ultimate goal is to develop a system where patients aren’t getting lost,” Vo said.
Jim Roxburgh, CEO of Banner Telehealth Network, described agents as "navigators" whose job is getting patients to the most helpful services in the most efficient way possible.
When it comes to creating a so-called warm interaction between a patient and a conversational agent, Treloar advises limiting medical jargon.
At the same time, Vo pointed out, many evidence-based diagnostic criteria can often sound stiff. Creators must strike a balance, she said, between asking questions derived from these criteria and still sounding approachable.
Treloar proposed a hypothetical scenario based on a standard rheumatoid arthritis survey, in which an agent would methodically ask a patient, "Which of the following 50 symptoms do you have?" Clearly, that is not a viable, user-friendly model, he said.
Treloar also touted the advantage of sculpting "snackable" interactions. Rather than having a bot ask patients an open-ended question such as "How are you feeling today?" – which could prompt a lengthy, potentially security-compromising response – he suggests creating closed questions that steer patients efficiently toward the right resources.
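The closed-question approach Treloar describes can be pictured as a small decision tree, where every turn offers a fixed set of answers and ends in a routing decision. The sketch below is purely illustrative – the node names, answer options and dispositions are invented, not drawn from any vendor's product:

```python
# Hypothetical sketch of a "snackable" closed-question flow: each node offers
# a fixed set of answers, so the bot never invites free-text responses that
# could run long or capture sensitive details. All names are invented.
TRIAGE_FLOW = {
    "start": {
        "question": "Is this about a new symptom or an existing appointment?",
        "options": {"new symptom": "severity", "existing appointment": "scheduling"},
    },
    "severity": {
        "question": "Is the symptom mild, moderate, or severe?",
        "options": {"mild": "self_care", "moderate": "clinic_visit", "severe": "urgent_care"},
    },
}

# Leaf nodes: where the conversation hands off to a service or workflow.
DISPOSITIONS = {"self_care", "clinic_visit", "urgent_care", "scheduling"}

def run_flow(answers):
    """Walk the closed-question tree with a list of pre-chosen answers."""
    node = "start"
    for answer in answers:
        node = TRIAGE_FLOW[node]["options"][answer]
        if node in DISPOSITIONS:
            return node
    return node

print(run_flow(["new symptom", "severe"]))  # urgent_care
```

Because every answer is one of a handful of options, the flow stays short, predictable and auditable – the qualities that make an interaction "snackable."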
Amwell Chief Medical Officer Peter Antall pointed to the potential for conversational agents to address patient follow-ups, particularly regarding the management of chronic conditions such as diabetes.
“Expect to hear the word ‘nudge’ a lot in the next few years,” he predicted.
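A "nudge" for chronic-condition follow-up can be as simple as checking how long it has been since a patient last logged a reading and, past some threshold, queuing a gentle prompt. This is a minimal sketch of that idea; the three-day threshold and the message wording are assumptions for illustration, not anything Antall or Amwell specified:

```python
from datetime import date

# Minimal sketch of a follow-up "nudge" for diabetes management: if the
# patient hasn't logged a glucose reading recently, build a reminder.
# The gap threshold and message text are invented for illustration.
def build_nudge(last_log_date, today, max_gap_days=3):
    gap = (today - last_log_date).days
    if gap > max_gap_days:
        return f"It's been {gap} days since your last glucose reading. Ready to log one?"
    return None  # recent enough -- no nudge needed

print(build_nudge(date(2020, 6, 22), date(2020, 6, 28)))
```

A real system would also throttle repeat nudges and escalate to a clinician after continued silence, but the core logic is this kind of simple, time-based trigger.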
Bots are also increasingly likely to show up in patient portals and emergency departments to update individuals about their care. They may incorporate "sentiment analysis," gauging a patient's energy and tone to give clinicians insight into their health.
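At its crudest, sentiment analysis can be a lexicon score: count positive and negative cues in a message and flag concerning ones for clinician review. The toy below shows only the shape of the idea – production systems use trained models, and the word lists here are invented:

```python
# Toy lexicon-based sentiment check, loosely illustrating how a portal bot
# might flag a patient message for clinician review. A real system would
# use a trained model; these word lists are invented for illustration.
NEGATIVE = {"tired", "exhausted", "worse", "hopeless", "pain"}
POSITIVE = {"better", "great", "improving", "fine", "good"}

def flag_for_review(message, threshold=-1):
    """Return True when the message's net sentiment is at or below threshold."""
    words = message.lower().split()
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return score <= threshold

print(flag_for_review("feeling exhausted and hopeless today"))  # True
```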
They aren’t just for patients, though. Panel experts noted the development of clinician-facing experiences for automating the collection of information during visits: a virtual assistant that can sit in the exam room, listening to the conversation and providing decision support drawn from the electronic health record.
Though the technological potential for conversational agents is immense, deployments are likely to face regulatory and legal challenges, such as the need to ensure an appropriate level of end-to-end security.
But when it comes to potentially high-stress interactions, Treloar said, conversational agents have one major advantage: It’s impossible to make them mad.
“The nice thing about virtual assistants is they don’t have any ego,” he said.