Rodin’s Thinker is doing his best to think, but if he hasn’t learned critical thinking skills, he is likely to make mistakes. The human brain is prone to a multitude of cognitive errors.
Critical thinking in medicine is what the Science-Based Medicine (SBM) blog is all about. Jonathan Howard has written a superb book, Cognitive Errors and Diagnostic Mistakes: A Case-Based Guide to Critical Thinking in Medicine, that epitomizes the message of SBM. In fact, in the Acknowledgements, he credits the entire team at SBM for teaching him “an enormous amount about skepticism and critical thinking”, and he specifically thanks Steven Novella, Harriet Hall (moi!), and David Gorski.
Dr. Howard is a neurologist and psychiatrist at NYU and Bellevue Hospital. The book is a passionate defense of science and a devastating critique of Complementary and Alternative Medicine (CAM) and pseudoscience. Its case-based approach is a stroke of genius. We humans are story-tellers; we are far more impressed by stories than by studies or by textbook definitions of a disease. Dr. Howard points out that “Anecdotes are part of the very cognition that allows us to derive meaning from experience and turn noise into signal.” They are incredibly powerful from an emotional standpoint. That’s why he chose to begin every discussion of a cognitive error with a patient’s case, an anecdote.
CAM knows how effective this can be; that’s why it relies so heavily on anecdotes. When doctors think of a disease, they are likely to think of a memorable patient they treated with that disease, and that patient’s case is likely to bias their thinking about other patients with the same disease. If there is a bad outcome with a treatment, they will remember that and may reject that treatment for the next patient even if it is the most appropriate one. Dr. Howard uses patient stories to great advantage, first providing the bare facts of the case and then letting the patient’s doctors explain their thought processes so we can understand exactly where and why they went wrong. Then he goes on to explain the psychology behind the cognitive error, with study findings, other examples, and plentiful references. If readers remember these cases, they might avoid similar mishaps.
An encyclopedia of cognitive errors
The book is encyclopedic, running to 30 chapters and 588 pages. I can’t think of anything he failed to mention, and whenever an example or a quotation occurred to me, he had thought of it first and included it in the text. I couldn’t begin to list all the cognitive errors he covers, but they fall roughly into these six categories:
- Errors of overattachment to a particular diagnosis.
- Errors due to failure to consider alternative diagnoses.
- Errors due to inheriting someone else’s thinking.
- Errors in prevalence perception or estimation.
- Errors involving patient characteristics or presentation context.
- Errors associated with physician affect, personality, or decision style.
A smattering of examples
There is so much information and wisdom in this book! I’ll try to whet your appetite with a few excerpts that particularly struck me.
- Discussing an issue with others who disagree can help us avoid confirmation bias and groupthink.
- Negative panic: when a group of people witness an emergency and fail to respond, thinking someone else will.
- Reactance bias: doctors who object to conventional practices and want to feel independent may reject science and embrace pseudoscience.
- Cyberchondria: using the Internet to interpret mundane symptoms as dire diagnoses.
- Motivated reasoning: people who “know” they have chronic Lyme disease will fail to believe 10 negative Lyme tests in a row and then believe the 11th test if it is positive.
- The backfire effect: “encountering contradictory information can have the paradoxical effect of strengthening our initial belief rather than causing us to question it.”
- Biases are easy to see in others but nearly impossible to detect in oneself.
- Checklists for fake diseases take advantage of the Forer effect. As with horoscopes and cold readings, vague, nonspecific statements convince people that a specific truth about them is being revealed. Fake diseases are unfalsifiable: there is no way to rule them out.
- When presenting risk/benefit data to patients, don’t present risk data first; it will act as an “anchor” to make them fixate on risk.
- The doctor’s opinion of the patient will affect the quality of care.
- Randomness is difficult to grasp. The hot hand and the gambler’s fallacy can both fool doctors. If the last two patients had disease X and this patient has similar symptoms, the doctor will assume this patient probably has disease X too. Or, if the doctor has just seen two cases of a rare disease, it will seem unlikely that the next patient with similar symptoms could have it as well.
- Apophenia: the tendency to perceive meaningful patterns in random information, like seeing the face on Mars.
- Information bias: doctors tend to think the more information, the better. But tests are indicated only if they will help establish a diagnosis or alter management. They should not be ordered out of curiosity or to make the clinician feel better. Sometimes doctors don’t know what to do with the information from a test. This should be a lesson for doctors who practice so-called functional medicine: they order all kinds of nonstandard tests whose questionable results give no evidence-based guidance for treating the patient. Doctors should ask “How will this test alter my management?” and if they can’t answer, they shouldn’t order the test.
- Once a treatment is started, it can be exceedingly difficult to stop. A study showed that 58% of medications could be stopped in elderly patients and only 2% had to be re-started.
- Doctors feel obligated to “do something” for the patient, but sometimes the best course is to do nothing. “Don’t just do something, stand there.” At the end of their own lives, 90% of doctors would refuse the treatments they routinely give to patients with terminal illnesses.
- Incidentalomas: when a test reveals an unsuspected finding, it’s important to remember that abnormality doesn’t necessarily mean pathology or require treatment.
- Fear of possible unknown long-term consequences may lead doctors to reject a treatment, but that should be weighed carefully against the well-known consequences of the disease itself.
- It’s good for doctors to inform patients and let them participate in decisions, but too much information can overwhelm patients. He gives the example of a patient with multiple sclerosis whose doctor describes the effectiveness and risks of 8 injectables, 3 pills, and 4 infusions. The patient can’t choose; she misses the follow-up appointment and returns a year later with visual loss that might have been prevented.
- Most patients don’t benefit from drugs; the NNT tells us the Number of patients who will Need to be Treated for one person to benefit (a small arithmetic sketch follows this list).
- Overconfidence bias: in the Dunning-Kruger effect, people think they know more than the experts about things like climate change, vaccines and evolution. Yet somehow these same people never question that experts know how to predict eclipses.
- Patient satisfaction does not measure effectiveness of treatment. A study showed that the most satisfied patients were 12% more likely to be admitted to the hospital, had 9% higher prescription costs, and were 26% more likely to die.
- The availability heuristic and the frequency illusion: “Clinicians should be aware that their experience is distorted by recent or memorable [cases], the experiences of their colleagues, and the news.” He repeats Mark Crislip’s aphorism that the three most dangerous words in medicine are “in my experience”.
- Illusory truth: people are likely to believe a statement simply because they have heard it many times.
- What makes an effective screening test? He covers concepts like lead time bias, length bias, and selection bias. Screening tests may do more harm than good; the PSA test is hardly better than a coin toss (the sketch after this list illustrates how a low base rate can drag down a screening test’s positive predictive value).
- Blind spot bias: Everyone has blind spots; we recognize them in others but can’t see our own. Most doctors believe they won’t be influenced by gifts from drug companies, but they believe others are unconsciously biased by such gifts. Books like this can make things worse: they give us false confidence. “Being inclined to think that you can avoid a bias because you [are] aware of it is a bias in itself.”
- He quotes from Contrived Platitudes: “Everything happens for a reason except when it doesn’t. But even then you can in hindsight fabricate a reason that will satisfy your belief system.” This is the essence of what CAM does, especially the versions that attribute all diseases to a single cause.
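Two of the points above reduce to simple arithmetic: the number needed to treat, and the positive predictive value of a screening test. Here is a minimal Python sketch, with made-up illustrative numbers of my own rather than figures from the book, showing how NNT follows from the absolute risk reduction and how a low base rate can make even a reasonably accurate screening test perform poorly:

```python
# Hypothetical numbers for illustration only; not data from the book.

def nnt(control_event_rate: float, treated_event_rate: float) -> float:
    """Number Needed to Treat = 1 / absolute risk reduction."""
    arr = control_event_rate - treated_event_rate
    return 1.0 / arr

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: P(disease | positive test), via Bayes' rule."""
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# If a drug cuts the 10-year event rate from 4% to 2%, the absolute risk
# reduction is 2%, so 50 patients must be treated for one to benefit.
print(nnt(0.04, 0.02))                  # 50.0

# A test that is 80% sensitive and 80% specific, applied to a condition that
# only 2% of screened people actually have, is right in only about 8% of its
# positive results.
print(round(ppv(0.80, 0.80, 0.02), 2))  # 0.08
```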
Some juicy quotes
Knowledge of bias should contribute to your humility, not your confidence.
Only by studying treatments in large, randomized, blinded, controlled trials can the efficacy of a treatment truly be measured.
When beliefs are based in emotion, facts alone stand little chance.
CAM, when not outright fraudulent, is nothing more than the triumph of cognitive biases over rationality and science.
Reason evolved primarily to win arguments, not to solve problems.
More
He includes a thorough discussion of the pros and cons of limiting doctors’ work hours, covering factors most people have never considered, as well as a discussion of financial motivations.
The book is profusely illustrated with pictures, diagrams, posters, and images from the Internet like “The Red Flags of Quackery” from sci-ence.org. Many famous quotations are presented with pictures of the person quoted, like Christopher Hitchens and his “What can be asserted without evidence can be dismissed without evidence”.
He never goes beyond the evidence. Rather than just giving study results, he tells the reader when other researchers have failed to replicate the findings.
We rely on scientific evidence, but researchers are not immune from bias. He describes the many ways research can go astray: 235 biases have been identified that can lead to erroneous results. As Ioannidis famously argued, most published research findings are false. But all is not lost: people who understand statistics and the methodologies of science can usually distinguish a good study from a bad one.
He tells the infamous N-ray story. He covers the file drawer effect, publication bias, conflicts of interest, predatory journals, ghostwriting, citation plagiarism, retractions, measuring poor surrogates instead of meaningful clinical outcomes, and outright fraud. Andrew Wakefield features prominently. Dr. Howard’s discussions of p-hacking, multiple variables, random chance, and effect size are particularly valuable. HARKing (Hypothesizing After the Results are Known) can be exploited to create spurious results.
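The danger of multiple comparisons is easy to demonstrate. Here is a small, self-contained sketch (my own illustration, not code from the book): if a researcher tests 20 truly null hypotheses at the usual p < 0.05 threshold, there is roughly a 64% chance of at least one spurious “significant” finding, which is exactly the opening that p-hacking and HARKing exploit.

```python
import random

# Hypothetical illustration: 20 independent, truly null comparisons at alpha = 0.05.
ALPHA = 0.05
N_TESTS = 20

# Analytic answer: P(at least one false positive) = 1 - (1 - alpha)^n
print(f"analytic:  {1 - (1 - ALPHA) ** N_TESTS:.2f}")   # ~0.64

# Quick simulation: each null test comes up "significant" with probability alpha.
trials = 100_000
hits = sum(
    any(random.random() < ALPHA for _ in range(N_TESTS))
    for _ in range(trials)
)
print(f"simulated: {hits / trials:.2f}")                 # ~0.64
```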
He tells a funny story that was new to me. Two scientists wrote a paper consisting entirely of the repeated sentence “Get me off your fucking mailing list” complete with diagrams of that sentence. It was rated as excellent and was accepted for publication!
Conclusion: Well worth reading for doctors and for everyone else
As the book explains, “The brain is a self-affirming spin-doctor with a bottomless bag of tricks…” Our brains are “pattern-seeking machines that fill in the gaps in our perception and knowledge consistent with our expectations, beliefs, and wishes”. This book is a textbook explaining our cognitive errors. Its theme is medicine but the same errors occur everywhere. We all need to understand our psychological foibles in order to think clearly about every aspect of our lives and to make the best decisions. Every doctor would benefit from reading this book, and I wish it could be required reading in medical schools. I wish everyone who considers trying CAM would read it first. I wish patients would ask doctors to explain why they ordered a test.
The book is not inexpensive. The price on Amazon is $56.99 for both the softcover and Kindle versions. But it might be a good investment: you might save far more than that by applying the principles it teaches, and critical thinking skills might even save your life. Well written, important, timely, easy and entertaining to read, generously illustrated, and packed with good stuff. Highly recommended.
This article was originally published on the Science-Based Medicine blog.