Not seeing results from AI? The project may be the missing piece

There is no doubt that artificial intelligence is one of the hottest topics in medicine. Its promise is exciting: AI has helped radiologists identify cancer on images, detected diabetic eye disease in retinal scans, and predicted patients' risk of death, to name just a few examples of its potential to advance care.

Yet the way health systems go about making AI a reality is often flawed, leaving AI deployed without measurable results. When they go about it the wrong way, they end up applying AI "solutions" to perceived problems without being able to verify that those problems are real or measurable.

Vendors often switch on AI solutions … and then walk away, leaving the health system unsure how to use the new insights within the confines of its old workflows. These tools are also typically deployed without the engineering rigor needed to make the new technology testable or recoverable.

The result? These potential AI insights are often ignored, only trivially helpful, quickly outdated, or, in the worst case, harmful. But who knows?

Early sepsis detection is a common AI solution that tends to generate excitement among health systems and vendors alike.

In fact, finding patients with sepsis happened to be my first task at Penn Medicine. The idea was that if we could identify patients at risk of sepsis earlier, treatments could be started sooner, which (we believed) would save lives.

Coming from a background in missile defense, I naively thought this would be an easy task. It seemed intuitive, a close parallel to "find the missile, launch the missile."

My team developed one of the highest-performing sepsis models to date.[1] It was validated and deployed, and it led to more laboratory testing and faster ICU transfers, but it produced zero change in patient outcomes.

It turned out that Penn Medicine was already very good at finding patients with sepsis and did not actually need such an advanced algorithm. Had we followed the full engineering process Penn Medicine has since adopted, we would have found no evidence that the initial problem statement, "find the sepsis patient," was the real problem at all.

That engineering work would have saved many months of effort and spared the system a deployment that ultimately proved to be a distraction.

Over the past few years, vendors and health systems have made hundreds of claims of successful AI implementations. So why do only a handful of studies show actual value?[2]

The problem is that many health systems try to solve health care problems simply by deploying vendor products. What this approach lacks is the engineering rigor required to design a complete solution: the technology, the people and workflows, the measurable value, and the long-term operational capability.

This vendor-first approach is typically siloed and disjointed: separate tasks are assigned to separate teams, and completing those tasks becomes the measure of the project's success.

Success is therefore defined entirely by the tasks, not by value. Linking these tasks (or projects) to the metrics that actually matter, such as lives saved and money saved, is difficult and requires a comprehensive engineering approach.

Whether these projects work, how well they work, or whether they were ever needed usually remains unknown. The incomplete view is this: if the AI technology is deployed, success is declared and the project is complete. No engineering is applied to define and measure value.

Getting value from healthcare AI is a problem that requires meticulous, thoughtful, long-term solutions. When hospital workflows change, even the most useful AI technology can suddenly stop performing.

For example, Penn Medicine's readmission risk model suddenly showed a subtle decrease in risk scores. The culprit? An unexpected EHR configuration change. Because a complete solution had been engineered, the data feeds were being monitored, and the team could quickly communicate about and correct for the EHR change.

We estimate that situations like this occur roughly twice a year for every predictive model we have deployed. So even in operation, the system, the workflows, and the data must be monitored continuously.
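To make "continuously monitor the data" concrete, here is a minimal sketch, in Python with NumPy, of one way a team might compare a model's daily risk-score distribution against a baseline period and raise an alert on the kind of subtle shift an EHR configuration change can cause. The population stability index, the 0.2 threshold, and the simulated scores are illustrative assumptions only, not a description of Penn Medicine's actual monitoring setup.

```python
# Minimal, hypothetical sketch of score-distribution monitoring for a
# deployed risk model. All data and thresholds are illustrative assumptions.
import numpy as np

def population_stability_index(baseline, current, n_bins=10):
    """Compare today's risk scores to a baseline period using the
    population stability index (PSI); PSI > 0.2 is a common rule of
    thumb for drift worth investigating."""
    # Bin edges come from the baseline so both samples are measured
    # against the same reference buckets.
    edges = np.quantile(baseline, np.linspace(0.0, 1.0, n_bins + 1))
    base_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_frac = np.histogram(np.clip(current, edges[0], edges[-1]),
                             bins=edges)[0] / len(current)
    # Clip away zero fractions so the log term stays finite.
    base_frac = np.clip(base_frac, 1e-6, None)
    curr_frac = np.clip(curr_frac, 1e-6, None)
    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline_scores = rng.beta(2, 5, size=5000)      # scores from a stable month
    todays_scores = 0.9 * rng.beta(2, 5, size=300)   # subtle downward shift
    psi = population_stability_index(baseline_scores, todays_scores)
    if psi > 0.2:
        print(f"ALERT: risk-score drift detected (PSI = {psi:.3f}); check upstream data feeds")
    else:
        print(f"Score distribution looks stable (PSI = {psi:.3f})")
```

A check like this only flags that something changed; the team still has to trace the shift back to its source, such as an upstream EHR configuration change, and decide how to correct for it.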

For healthcare AI to reach its potential, health systems must extend their energy beyond clinical practice and take full ownership of their AI solutions. Rigorous engineering and clearly defined outcomes tied directly to measurable value will be the foundation on which every successful AI program is built.

Value must be defined in terms of lives saved, money saved, or patient and clinician satisfaction. The health systems that succeed with AI will be those that carefully define their problems, measure the evidence for those problems, and run experiments that link hypothesized interventions to better outcomes.

A successful health system will recognize the need for a rigorous design process to properly scale its solutions into operations, and will be willing to treat both the technology and the human workflows as part of the engineering effort.

Like Blockbuster, now famous for failing to rethink how its movies were delivered, health systems that refuse to see themselves as engineering organizations run a serious risk of falling far behind in the effective use of AI.

Making sure the website and email servers are working properly is one thing. Making sure the health system is optimizing care for heart failure is quite another.

The former is an IT service. The latter is a complete product solution that requires an integrated team of clinicians, data scientists, software developers, and engineers, along with clearly defined success metrics: lives saved and/or dollars saved.

[1] Giannini, H. M., Chivers, C., Draugelis, M., Hanish, A., Fuchs, B., Donnelly, P., & Mikkelsen, M. E. (2017). Development and implementation of a machine learning algorithm for early identification of sepsis in a multi-hospital academic healthcare system. American Journal of Respiratory and Critical Care Medicine, 195.

[2] Halamka, J., & Cerrato, P. (2020). The digital reconstruction of health care. NEJM Catalyst Innovations in Care Delivery. https://doi.org/10.1056/CAT.20.0082

Mike Draugelis is the chief data scientist at Penn Medicine, where he leads the predictive healthcare team.
