Machine Learning Times

Deep Learning on Electronic Medical Records is Doomed to Fail
Originally published at Moderndescartes.com, March 22, 2022.

A few years ago, I worked on a project investigating the potential of machine learning to transform healthcare by modeling electronic medical records (EMRs). I walked away deeply disillusioned: I don’t think the field needs machine learning right now. What it does need is plenty of IT support, and even that isn’t enough. Here are some of the structural reasons why I don’t think deep learning models on EMRs will be useful any time soon.

1. Data is fragmented

There are many players in the field – Epic, Cerner, Meditech, AllScripts, AthenaHealth, to name a few. Having many players isn’t necessarily bad, but these vendors don’t cooperate on data interoperability, because they see the difficulty of data migration as a competitive moat.

Each player also tends to specialize in a certain kind of clinic – big research hospital, small regional hospital, outpatient clinic, urgent care center, radiology center, and so on. So any given patient’s medical records are spread across several EMR vendors – and if you’re trying to do machine learning with only one vendor’s or one hospital’s data, you’re going to see a very nonrandom subset of patient data. The key data your model needs to make a decision may be entirely absent, or present only in a misleadingly partial way.

Over the last few years, this problem has been partially fixing itself through market consolidation. Still, it can only be completely solved by data interoperability standards and mandates, which will take a decade to enact and another decade to implement. With incomplete data, deep learning will place great importance on the scraps of information it manages to find, then spit out an overconfident answer when a patient shows up with an unexpectedly complete record.
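To make the fragmentation concrete, here is a minimal, entirely hypothetical sketch – the vendor splits, patient ID, and clinical fields are all invented for illustration – of how a single-vendor extract hides part of a patient’s history from any model trained on it:

```python
# Hypothetical example: one patient's record split across two EMR systems.
# Neither system alone contains the full clinical picture.

hospital_emr = {  # big research hospital's system
    "patient_42": {"diagnosis": "type 2 diabetes", "a1c": 8.1},
}
urgent_care_emr = {  # separate urgent-care vendor's system
    "patient_42": {"visit_reason": "chest pain", "ecg": "abnormal"},
}

def single_vendor_view(emr, patient_id):
    """What a model trained on one vendor's extract would see."""
    return emr.get(patient_id, {})

def merged_view(patient_id, *systems):
    """The complete record -- only possible with interoperability."""
    record = {}
    for emr in systems:
        record.update(emr.get(patient_id, {}))
    return record

partial = single_vendor_view(hospital_emr, "patient_42")
full = merged_view("patient_42", hospital_emr, urgent_care_emr)

print(sorted(partial))  # ['a1c', 'diagnosis']
print(sorted(full))     # ['a1c', 'diagnosis', 'ecg', 'visit_reason']
```

A model fit only on `hospital_emr` never learns that the cardiac signal even exists, so it cannot be appropriately uncertain when that signal is missing.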

2. Data is Workflow, Workflow is Data (with apologies to Lisp)

EMR software is widely hated by the nurses and doctors who have to use it: it’s slow, bloated, nonintuitive, and riddled with required workarounds. The root of this evil is that every hospital brings its own conceited and byzantine patchwork of procedures, checks, and rituals to the table. The EMR vendor, to secure the deal, promises to implement these workflows, resulting in a mess of bloated, redundant, and half-thought-out features.

Life would be simpler if only these hospitals could set aside their arrogance and just go with the recommended workflow! Unfortunately, each layer of process is written in the blood of patients who have died from medical error. The standard advice against rewriting software systems applies fully to rewriting hospital workflows.

To continue reading this article, click here.
