Siddhartha Mukherjee weighs in on how doctors arrive at a diagnosis and how computers can assist but not replace them.
[Image caption: Is this the doctor of the future? Probably not.]
I am a big fan of Dr. Siddhartha Mukherjee, a cancer physician, researcher, and stem cell biologist who is also a phenomenally gifted writer and an unequaled explainer of science. I previously reviewed his Pulitzer Prize-winning book on cancer, The Emperor of All Maladies: A Biography of Cancer, and his subsequent book on genetics, The Gene: An Intimate History. I was delighted to learn that he is working on another book, this time on immunology, in which, among other things, he plans to address what he calls “the nonsense about vaccination and autism.” I can’t wait to read it.
Everything he writes is worth reading, and fortunately we needn’t wait for his next book. In the April 3, 2017 issue of The New Yorker, he offers us food for thought on another subject with an insightful article titled “A.I. Versus M.D.,” which asks, “What happens when diagnosis is automated?” I encourage you to click on the link and read the whole original article, with all its details and a writing style that is pure pleasure to read; but for those who don’t have the time or inclination, I will attempt a brief summary.
He starts with a vignette about doctors-in-training learning to diagnose early signs of stroke on CT scans. A resident sees something he can’t really describe; it just “looks funny.” Sure enough, follow-up scans reveal an evolving stroke in the “funny” spot. The resident acknowledges that he couldn’t have found it by following any rule book: the process was partly subconscious. Things click together as radiologists grow and learn. Could a machine do as well or better? Could it grow and learn in a similar way?
Mukherjee asks, “How do doctors learn to diagnose? And could machines learn to do it too?” He describes the stepwise diagnostic process he was taught in medical school: collect facts from the patient’s history and physical exam, list the potential causes, weigh the likelihood of each, and confirm or disconfirm a working hypothesis with lab and imaging tests. But he notes that the real art of diagnosis is not so straightforward. Expert clinicians with years of experience have learned to recognize subtle patterns. The process is like identifying an animal such as a rhinoceros: you don’t methodically sort through other candidate animals; you just see and recognize the pattern of a rhinoceros.
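For readers who like to see that stepwise logic spelled out, here is a toy Python sketch of its Bayesian flavor: start with prior probabilities for a few candidate diagnoses and update them as each finding from the history and exam comes in. Every disease, finding, and number below is invented purely for illustration; real diagnostic reasoning is far richer.

```python
# Toy Bayesian update over a short differential diagnosis.
# All diseases, findings, and probabilities are invented.

priors = {"stroke": 0.02, "migraine": 0.10, "tension_headache": 0.30}

# P(finding | disease), made-up values
likelihoods = {
    "sudden_onset":       {"stroke": 0.9, "migraine": 0.4, "tension_headache": 0.1},
    "one_sided_weakness": {"stroke": 0.8, "migraine": 0.1, "tension_headache": 0.05},
}

def update(beliefs, finding):
    """One Bayesian step: multiply by P(finding | disease), then renormalize.
    Renormalizing means the starting weights only need to be relative."""
    unnormalized = {d: p * likelihoods[finding][d] for d, p in beliefs.items()}
    total = sum(unnormalized.values())
    return {d: p / total for d, p in unnormalized.items()}

beliefs = dict(priors)
for finding in ["sudden_onset", "one_sided_weakness"]:
    beliefs = update(beliefs, finding)

# The leading hypothesis would then be confirmed or disconfirmed with
# lab and imaging tests, as Mukherjee describes.
print(max(beliefs, key=beliefs.get), beliefs)
```

The rhinoceros-style pattern recognition he contrasts with this is exactly what such an explicit procedure cannot capture.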
Computers are already interpreting EKGs and mammograms. But computers are limited by the rules they were given, and doctors still have to review the results. There are many false positives. With computer-aided detection, the rate of breast biopsies increased, but the detection of small, invasive breast cancers (the ones we most want to find) decreased.
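The tradeoff is easy to see in miniature. In the toy sketch below (all scores, labels, and thresholds are invented), a fixed rule flags anything above a brightness threshold: loosening the rule catches more cancers but orders more needless biopsies, which is why a radiologist still reviews every computer-flagged scan.

```python
# Toy illustration of the fixed-rule tradeoff: hypothetical scanner
# scores with invented ground-truth labels.

scores    = [0.2, 0.4, 0.45, 0.6, 0.7, 0.9]
is_cancer = [False, False, True, False, True, True]

def evaluate(threshold):
    """Apply the rule 'flag anything above threshold' and tally results."""
    flagged = [s > threshold for s in scores]
    true_pos  = sum(f and c for f, c in zip(flagged, is_cancer))
    false_pos = sum(f and not c for f, c in zip(flagged, is_cancer))
    return true_pos, false_pos

for t in (0.3, 0.5, 0.8):
    tp, fp = evaluate(t)
    print(f"threshold={t}: {tp} cancers caught, {fp} unnecessary biopsies")
```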
Computer science has progressed from rule-based algorithms to learning-based ones, using the computing strategy of training “neural networks,” in which the weights of the connections are adjusted until the network produces the desired outputs. Researchers trained a computer to recognize melanomas by showing it thousands of images of various skin lesions and correcting its mistakes. When tested, the computer outperformed expert dermatologists: it was right 72% of the time, while the dermatologists scored 66%. The computer can’t tell us how it knows what it knows, just as human intuition can’t explain itself. (I like to think of this as the “Aunt Tillie” phenomenon. When you see your Aunt Tillie you immediately recognize who she is, but you can’t explain to others how you know, and you can’t teach them what to look for so they could recognize her.)
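The “adjusting the weights” idea is also easier to grasp in miniature. The sketch below trains a single artificial neuron on invented lesion features (asymmetry and border irregularity); real melanoma classifiers are deep networks trained on many thousands of photographs, but the correct-its-mistakes weight update is the same principle.

```python
# One artificial neuron learning to flag "melanoma" from two invented
# features. Training nudges each connection weight to shrink the error.
import math

# Invented data: (asymmetry, border_irregularity) -> 1 means melanoma
data = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.8, 0.9), 1), ((0.9, 0.7), 1)]

w = [0.0, 0.0]   # connection weights, adjusted during training
b = 0.0          # bias term
lr = 0.5         # learning rate: how big each corrective nudge is

def predict(x):
    """Sigmoid neuron: weighted sum of inputs squashed to (0, 1)."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

# "Correcting its mistakes": after each example, move every weight a
# little in the direction that reduces the prediction error.
for _ in range(1000):
    for x, target in data:
        error = predict(x) - target
        w[0] -= lr * error * x[0]
        w[1] -= lr * error * x[1]
        b    -= lr * error

for x, target in data:
    print(x, target, round(predict(x), 2))
```

Notice that after training, the “knowledge” lives in three opaque numbers; nothing in them explains why a lesion looks malignant, which is the Aunt Tillie problem in a nutshell.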
Will diagnostic radiologists someday become obsolete and be replaced by machines? Not likely, because radiologists don’t just classify; they also notice unexpected incidental findings. But there is great potential for learning-based computers to assist with tasks like reading Pap smears, listening to heart sounds, or predicting relapses in psychiatric patients. One of the experts Mukherjee interviews says, “We can do better. Why not let machines help us?”
He observes a busy dermatologist and points out that she doesn’t just identify what; she asks why. Did the rash appear when the patient switched to a new shampoo? And the human doctor-patient encounter has nonspecific effects of its own: patients feel better.
There are concerns. How will computer diagnosis be integrated into medical practice? What about cost? What about liability if the computer misses a diagnosis? Will doctors come to rely on computers and fail to develop their own diagnostic skills?
Mukherjee’s final concern is that the process of scientific discovery often begins in the clinic. Chance observations by clinicians have often led to advances in understanding the pathophysiology of disease. We neglect that opportunity at our peril. The machines can only do what they have been taught to do or have learned to do; they can’t think outside the box like humans.
This article was originally published in the Science-Based Medicine Blog.