Recently a middle-aged patient visited Seattle-based physician Thomas Payne complaining about substantial, unexpected weight loss and foot tingling. The doctor was puzzled—those symptoms could indicate anything from an infection to dozens of more complex ailments, such as diabetes or cancer. So Payne, who also serves as medical director of information technology services at the University of Washington School of Medicine, did something unusual. After performing a standard physical examination and filling in his patient's medical record, he turned to an online tool—DXplain—for help.
Payne keyed in the symptoms, and the computer program suggested a handful of potential conditions, including a rare disease called amyloidosis, in which abnormal proteins build up in the body, interfering with normal organ function and causing nerve damage. Further exams and a biopsy at another institution confirmed the tip—the patient was one of the roughly 4,000 people in the U.S. who receive this diagnosis every year.
Even five years ago, if Payne had been stumped about a case, he would have first turned to a trusted colleague or spent hours sifting through a mountain of textbooks and scientific research to puzzle out such an obscure diagnosis. DXplain draws on those same textbooks and peer-reviewed studies to make its own assessments—but does so within seconds. “Could I have come up with that same list of conditions? Perhaps if I thought long enough,” says Payne, who more typically sees patients with the flu or arthritis than with inexplicable nerve damage. But, he warns, the scientific literature shows that “when pressed for time, we don't sit down and think about these things like we should, and then those diagnoses may be missed.”
Such misses are too common, according to the National Academies of Sciences, Engineering, and Medicine, which published major reports on the causes of medical errors in 1999 and 2015. Some of these errors can arise from poor record keeping or miscommunication. But often misdiagnosis is to blame. Reviews of medical records suggest that between 6 and 17 percent of adverse events in hospitals can be tied to mistaken diagnoses. The National Academies' 2015 report estimated that 10 percent of patient deaths in the U.S. result from these incorrect conclusions—and the corresponding inappropriate treatment.
Among the solutions that the academies recommended was that hospitals and clinicians should employ more tools—formally referred to as clinical decision-support systems—that might help improve their decision making. At its most basic level, that could mean following a checklist to avoid skipping key steps in important routines. A growing number of medical schools, teaching hospitals and other care centers are also paying for computer-based assistance such as DXplain or its competitors VisualDx and Isabel. Right now VisualDx, the most popular diagnostic support system, is licensed at more than 1,600 hospitals and clinics across the U.S., according to its manufacturer.
The clinical decision-support industry says its wares can help clinicians confirm their diagnoses or suggest alternatives. But physicians have not exactly welcomed the new tools with open arms. The big question is whether adopting such software will substantially enhance the practice of medicine or simply add another unnecessary complication to doctors' already pressed schedules.
The idea of enlisting computers to help inform medical diagnoses is not new. The first computing efforts that targeted clinicians' errors began in the 1970s. Then, in the mid-1980s, Massachusetts General Hospital began working on DXplain with the goal of helping to improve diagnoses. The approach seemed promising, but it did not actually take off at the time, partly because patient records were still being written by hand, and turning to a computer-based program added another cumbersome step.
A lot has happened since then. Computers are now integral to standard medicine. They have taken over record keeping in most clinics, hospitals and private practices, with encouragement from federal incentives. Such shifts have boosted quality, safety and efficiency in the health care system.
The clinical decision-support systems have changed, too. They have become much faster and often link directly to the studies from which they draw, allowing clinicians to quickly assess evidence and learn more about a potential diagnosis. VisualDx, for instance, highlights its “visual” aspect—it includes diagrams of what body parts may be affected and pictures of maladies for easier comparison.
Crucially, scientists have also learned more about why people make certain kinds of mistakes and how to counteract them. Researchers have identified a number of cognitive traps into which physicians sometimes fall when making a diagnosis. One that seems particularly amenable to correction by computers is the so-called anchoring error. Studies suggest that doctors often get stuck on the first diagnosis that occurs to them—the anchor—even if it is wrong. Then they may subconsciously give greater weight to any information that reinforces that diagnosis and dismiss—or not even bother to look for—other data.
In a busy hospital ward or medical practice, anchoring errors can happen for myriad reasons. A harried clinician may forget to ask if a patient recently traveled even when that answer could substantially change the likely diagnosis—resulting in situations where, for example, an Ebola patient might be sent home from a hospital with instructions to take Tylenol for a high fever and pain rather than being quarantined and provided immediate care. Still other problems may stem from the way doctors are educated. Often students are given case studies that reflect prototypical symptoms rather than real-world complexities. Textbook cases are not as common as one might think.
That kind of discrepancy is where these systems hope to find their sweet spot. Each program employs proprietary algorithms to link symptoms with diagnoses and flag which conditions may be most likely or most dangerous and so need to be ruled out quickly. Some are even capable of automatically pulling information from a patient's current electronic records, thereby reducing the need for doctors to reenter the same information.
Just how much decision-support programs can reduce errors remains hard to estimate, but preliminary data look promising. A 2011 study of VisualDx compared how well emergency room doctors at two different institutions were able to diagnose a particular skin infection with and without computer assistance. Clinicians who used VisualDx made the correct diagnosis 64 percent of the time; those who did not made the correct diagnosis only 14 percent of the time. A preliminary study of Isabel presented at a conference in 2014 concluded that the service improved the ability of 40 medical students to make accurate diagnoses by as much as a third. A study of DXplain, published in 2010, found that when residents at the Mayo Clinic used it with diagnostically complex cases, the program dramatically decreased medical costs because it led to shorter, more effective hospital stays.
Hurdles to Clear
Nevertheless, beneficial changes are often slow in coming. In July the National Academies held a one-day meeting to check on progress in reducing diagnostic errors. John Ball, the physician who chaired the academies' 2015 report, said ahead of the meeting that he expected “disappointing” results because many of the recommendations to reduce error—including greater use of computerized decision-making tools—have not yet been adopted on a large scale. Ball says his own seven-hospital system in North Carolina has not yet made much progress integrating these systems into its care.
Part of the problem in North Carolina, Ball notes, is that the various hospitals and doctors in his network work with different electronic record-keeping systems and protocols, which makes it impossible to standardize such changes. The other issue, he says, is that doctors may be reluctant to spend time learning the system until they are certain that it will be worth it.
Institutional inertia is an issue across the U.S., observes Mark Graber, president and co-founder of the Society to Improve Diagnosis in Medicine. “Health care organizations don't really 'own' the problem of diagnostic error and don't recognize it as something they need to focus on,” he says. “Physicians, in general, think they are doing a good job and think they don't really need to worry about [it].”
In addition, some experts, such as Sandra Fryhofer, a past president of the American College of Physicians and a practicing internist in Atlanta, fear that widespread adoption of these programs might have unintended consequences. Fryhofer worries that if such software becomes more accessible to patients, they may forgo a doctor's visit because they think they already know what is wrong or, alternatively, needlessly fret because the program suggests a scary result—something that doctors say happens now when people search for their symptoms on the Internet.
Doctors such as Payne say they are not concerned about being replaced, however. What they envision is a safer, smarter approach—like the complex backup systems in a plane's cockpit. They hope that with such built-in redundancies and cues, perhaps they can chart a more reliable, smoother course for us all.