Chatbot technology still has some way to go in healthcare


While chatbots are attracting interest from developers, vendors and payers, the technology is not as advanced as marketers claim.

Researchers at Johns Hopkins University recently analyzed 78 health-related chatbots and found that, despite the marketing hype, only a few use machine learning and natural language processing methods. The study, published in npj Digital Medicine, said chatbots in healthcare are in a nascent state of development and require further research before wider adoption.

“Many of the bots we reviewed follow predetermined algorithms. They systematically guide you through a process. They haven’t reached the level of automation where they can read user language, understand intent and respond to questions,” said Smisha Agarwal, one of the study’s lead authors and an assistant professor in the Department of International Health at the Johns Hopkins Bloomberg School of Public Health.

Most of the apps reviewed by Agarwal’s team used a fixed input method for dialog interactions, and 88 percent of them relied on finite-state dialog management, meaning the bot could hold only a limited, pre-scripted exchange with the patient. Only a few apps allowed users to write free-text sentences and receive relevant responses.
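For illustration, here is a minimal sketch of what finite-state dialog management looks like in practice. The screening states, prompts and transitions are hypothetical and not drawn from any of the reviewed apps; the point is that the bot can only march through its script, and any input outside the scripted options is re-prompted rather than understood.

```python
# Minimal sketch of finite-state dialog management: the bot walks users
# through a fixed script instead of parsing free text. All states,
# prompts and options here are hypothetical illustrations.
DIALOG_STATES = {
    "start": {
        "prompt": "Do you have a fever? (yes/no)",
        "transitions": {"yes": "ask_cough", "no": "low_risk"},
    },
    "ask_cough": {
        "prompt": "Do you have a cough? (yes/no)",
        "transitions": {"yes": "refer", "no": "low_risk"},
    },
    "refer": {"prompt": "Please contact a clinician.", "transitions": {}},
    "low_risk": {"prompt": "Your risk appears low.", "transitions": {}},
}

def run_dialog() -> None:
    state = "start"
    while True:
        node = DIALOG_STATES[state]
        print(node["prompt"])
        if not node["transitions"]:  # terminal state: the script is over
            break
        reply = input("> ").strip().lower()
        # A fixed-input bot accepts only the exact options it scripted;
        # anything else simply repeats the current prompt.
        state = node["transitions"].get(reply, state)

if __name__ == "__main__":
    run_dialog()
```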

According to Melanie Laffin, a senior machine learning expert at technology consulting firm Booz Allen Hamilton, chatbot technology has benefited from advances in computing speed, the falling cost of data storage and analysis, and more sophisticated algorithms. But when it comes to understanding context, the technology has yet to evolve.

“Chatbots struggle to understand context and are best suited for narrow-scope conversations, such as simple question-and-answer and information retrieval,” Laffin said. “Even moderately complex conversations are often difficult for chatbots to understand, and they therefore fail to resolve them.”

Clinical use

While much of the focus remains on administrative duties, more and more chatbot solutions are being used for clinical purposes, especially in mental health and primary care. All but six of the applications reviewed by the Johns Hopkins team took approaches that were not supported by a therapeutic framework.

“There’s a lot of untapped potential. They’re not using patient background information to personalize health information; whether you’re a 40-year-old man with high blood pressure or a 22-year-old woman, you take the same path in the app. You’re guided through the same process, but if you have a chronic disease in your 40s, the path could be much more personalized compared to someone in their 20s,” Agarwal said.
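The kind of personalization Agarwal describes can be sketched in a few lines: route patients onto different dialog paths based on background information instead of running one script for everyone. The profile fields, condition names and paths below are hypothetical assumptions, not taken from the study.

```python
# Sketch of profile-based path selection: different starting scripts for
# different patient backgrounds. Fields and path names are hypothetical.
from dataclasses import dataclass

@dataclass
class PatientProfile:
    age: int
    chronic_conditions: list[str]

def choose_dialog_path(profile: PatientProfile) -> str:
    """Pick a starting script from the patient's background information."""
    if profile.age >= 40 and "hypertension" in profile.chronic_conditions:
        return "chronic_disease_checkin"  # e.g. medication adherence, BP logs
    if profile.chronic_conditions:
        return "condition_followup"
    return "general_wellness"  # the one-size-fits-all default path

print(choose_dialog_path(PatientProfile(40, ["hypertension"])))  # chronic_disease_checkin
print(choose_dialog_path(PatientProfile(22, [])))                # general_wellness
```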

As more companies roll out chatbot mental health solutions, the lack of regulation puts end users at risk, Agarwal said. “Anything can be sold to patients, and there’s no way to really assess their robustness,” she said.

Craig Klugman, a professor of bioethics at DePaul University who frequently researches the ethics of medical technology, said many developers of chatbots for mental health purposes have tried to reduce their liability by posting disclaimers on their websites stating that the apps should not be used for clinical diagnosis.

“If you look at the fine print, they say don’t use it for medical purposes. It’s entertainment. We don’t diagnose or treat anyone,” Klugman said. “If they’re diagnosing and treating patients, they need to have a licensed provider backing it.”

There are also privacy concerns with the clinical use of chatbots. Of the 78 apps her team reviewed, only 10 complied with health information privacy regulations, Agarwal said, which is especially notable when the apps interact with vulnerable patient populations, such as those with mental illness.


Trust issues

Chatbots have grown in popularity over the past two years as healthcare providers increasingly use them to screen potential COVID-19 patients. But an analysis by Indiana University researchers, published in the Journal of the American Medical Informatics Association, found that users do not necessarily trust chatbots as much as humans performing the same tasks.

“All things being equal, people don’t trust AI very much. They don’t think it’s that powerful,” said Alan Dennis, the study’s lead author and a professor of information systems at Indiana University’s Kelley School of Business. “The most important thing a vendor can do is build trust by giving users a clearer understanding of how the chatbot was developed, who stands behind its recommendations and how it was tested.”

When his team looked at chatbots for mental health screening, Dennis said, it found similarities to COVID-19 screening: people being screened for mental health purposes need both information and emotional support.

“People seeking help for mental health and other stigmatized conditions need emotional support, and chatbots can’t provide that. You can’t make a chatbot feel sorry for you or empathize with you. You can program it to say the words, but ultimately, people know the chatbot doesn’t actually feel anything for them,” Dennis said.

Seeing the data

Cybil Roehrenbeck, a partner at law firm Hogan Lovells who specializes in AI-related health policy, said healthcare systems are likely to use the technology as assistive AI rather than fully autonomous software programs. “In this case, you have a clinician overseeing and using the information as they see fit,” she said. That means the technology is less risky than other types of fully autonomous AI systems.

She added that any AI used in a clinical setting should have its algorithms rigorously validated and compared against non-AI services. In fact, anything involving artificial intelligence comes down to data, Laffin said. Many organizations struggle with data organization and governance, which undermines the effectiveness of any AI project, she said.

“To ensure chatbots are effective, you need relevant and accurately labeled data and a well-defined scope for the chatbot’s knowledge base,” Laffin said. “Furthermore, chatbots should integrate with other systems to accurately provide information to users, which can be a challenge given authentication requirements. Ultimately, the better the data, the more effective the chatbot will be.”
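To make the scoping point concrete, the hypothetical sketch below answers only from a small, labeled knowledge base and hands anything outside that scope to a human; the entries, matching rule and threshold are illustrative assumptions, not Laffin’s implementation.

```python
# Sketch of a narrowly scoped chatbot: answer from a labeled knowledge
# base, decline anything outside it. Entries and threshold are hypothetical.
import re

KNOWLEDGE_BASE = {
    "clinic hours": "The clinic is open 8 a.m. to 5 p.m., Monday through Friday.",
    "refill prescription": "Use the patient portal to request a refill.",
    "cancel appointment": "Call the front desk to cancel or reschedule.",
}

def answer(question: str, threshold: float = 0.5) -> str:
    q_words = set(re.findall(r"[a-z]+", question.lower()))
    best_key, best_score = None, 0.0
    for key in KNOWLEDGE_BASE:
        k_words = set(key.split())
        score = len(q_words & k_words) / len(k_words)  # fraction of label matched
        if score > best_score:
            best_key, best_score = key, score
    if best_key is None or best_score < threshold:
        # Out of scope: hand off instead of guessing.
        return "I can't help with that; let me connect you with staff."
    return KNOWLEDGE_BASE[best_key]

print(answer("What are your clinic hours?"))      # clinic-hours answer
print(answer("Can you diagnose my chest pain?"))  # handoff
```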

Agarwal is bullish on the future of chatbots if the technology can better integrate personal information. She said the technology could be critical in helping patients with stigmatized medical problems that are too sensitive to discuss in person, such as HIV or other sexually transmitted diseases. “I think there’s a lot of room for growth,” Agarwal said.

Dennis is optimistic about the potential uses of chatbots, but says until more progress is made, they should be limited to administrative and business-related tasks.

“Look at what primary care providers really don’t want to do, and see if you can lighten their load by taking on more of the mundane busywork so you can free them up to do what they really want to do, which is patient care,” Dennis said.
