Is AI more helpful than humans in health care?

The use of artificial intelligence (AI) is expanding rapidly to increase productivity and quality in both personal and professional settings, performing tasks previously thought only humans could complete. Like other industries, health care is adopting this technology to enhance patient experiences and improve health outcomes, particularly through conversational agents such as chatbots and AI-based large language models (LLMs) like ChatGPT. Conversational agent apps have been developed for personalized health coaching, blood pressure monitoring, medication reminders, and motivational coaching for healthy eating and physical activity, to name a few.

The electronic health (eHealth) market relies on a direct-to-consumer business model that avoids the costly regulatory compliance studies that would assess an app's effectiveness. Chatbot apps offer the potential to improve consumer uptake and treatment adherence, leading to a stable revenue base. However, research conducted by Public Health PhD candidate Kevin Cevasco and colleagues concluded that, despite popular claims, there is not enough evidence to conclude that chatbots achieve better patient uptake and adherence than non-chatbot apps, such as those addressing anxiety and depression care.

“It is unclear if chatbots improve patient experiences or if trends to include chatbots in electronic health applications are due to technology hype cycles and pressures to innovate,” said Cevasco. “The real impact of chatbot-enabled applications is largely unknown.” 

Cevasco and team reached these conclusions after conducting a systematic literature review and meta-analysis that sought to determine whether publications proclaiming the efficacy of eHealth chatbots provide measures of user engagement, adherence, and working alliance.

“We selected studies where chatbot applications were patient-facing with health education, monitoring, or treatment-related purposes. We selected studies with primary measures that were focused on the user’s health outcomes and engagement with the applications,” said Cevasco.  

After thorough screening, Cevasco and his team found that there are very few randomized controlled trials of eHealth chatbot applications. From the small sample of publications available, the documented evidence suggests that patient engagement with chatbot eHealth applications is not conclusively better than with human interventions or with apps that lack conversational agents.

“The small number of studies available suggests researchers should include methods for measuring patient uptake, engagement, and working alliance in their ongoing conversational agent research.” 

Although few research studies support the claims of AI's effectiveness in health care settings, Cevasco and colleagues' work also considers the future clinical applications of LLMs. Currently, 50% of patients do not adhere to their prescribed medication regimens, and Cevasco is optimistic about the potential for AI to improve patient care if it is developed and implemented correctly.

“Although chatbots have the potential for improving personalized medicine by increasing health literacy and providing easily available and understandable health information, it is important the applications have the capacity to address the complexity of different health challenges,” said Cevasco.

Contributing authors include College of Public Health MPH in Epidemiology alumna Rachel Morrison, PhD in Health Services Research student Rediet (Redd) Woldeselassie, and Seth Kaplan, professor of Psychology in the College of Humanities and Social Sciences. 

“Patient engagement with conversational agents in health applications 2016-2022: A Systematic Review and Meta-Analysis” was published online April 10, 2024, in the Journal of Medical Systems.