After Covid, being open with patients about uncertainty may be the surest way to build trust in medicine.
IN 2001, when the pediatric allergist Gideon Lack asked a group of some 80 parents in Tel Aviv if their kids were allergic to peanuts, only two or three hands went up. Lack was puzzled. Back home in the UK, peanut allergy had fast become one of the most common allergies among children. When he compared the peanut allergy rates among Israeli children with the rate among Jewish children in the UK, the UK rate was 10 times higher. Was there something in the Israeli environment—a healthier diet, more time in the sun—preventing peanut allergies from developing?
He later realized that many Israeli kids started eating Bamba, a peanut-based snack, as soon as they could handle solid foods. Could early peanut exposure explain it? The idea had never occurred to anyone because it seemed so obviously wrong. For years, pediatricians in the UK, Canada, Australia, and the United States had been telling parents to avoid giving children peanuts until after they’d turned 1, because they thought early exposure could increase the risk of developing an allergy. The American Academy of Pediatrics even included this advice in its infant feeding guidelines.
Lack and his colleagues began planning a randomized clinical trial that would take until 2015 to complete. In the study, published in The New England Journal of Medicine, some children were regularly fed peanut protein starting in infancy while others avoided it. Children in the first group had an 81 percent lower risk of peanut allergy by age 5. All the past guidelines, developed by expert committees, may have inadvertently contributed to a slow increase in peanut allergies.
As a doctor, I found the results unsettling. Before the findings were released, I had counseled a new parent that her baby girl should avoid allergenic foods such as peanut protein. Looking back, I couldn’t help but feel a twinge of guilt. What if she now had a peanut allergy?
The fact that medical knowledge is always shifting is a challenge for doctors and patients. It can seem as though medical knowledge comes with a disclaimer: “True … for now.”
MEDICAL SCHOOL PROFESSORS sometimes joke that half of what students learn will be outdated by the time they graduate. That is often true of clinical practice guidelines (CPGs), and it has real-life consequences.
A CPG, usually drawn up by expert committees from specialized organizations, exists for almost any ailment with which a patient can be diagnosed. While the guidelines aren’t rules, they are widely referred to and can be cited in medical malpractice cases.
When medical knowledge shifts, guidelines shift. Hormone replacement therapy, for example, used to be the gold-standard treatment for menopausal women struggling with symptoms such as hot flashes and mood changes. Then, in 2002, a large trial run by the Women’s Health Initiative demonstrated that the therapy may have been riskier than previously thought, and many guidelines were revised.
Also, for many years, women over 40 were urged to get annual mammograms—until new data in 2009 showed that early, routine screenings were resulting in unnecessary biopsies without reducing breast cancer mortality. Regular mammograms are now suggested mainly for women over 50, every other year.
Medical reversals usually happen slowly, after multiple studies shift old recommendations. Covid-19 has accelerated them, and made them both more visible and more unsettling. Early on, even some medical professionals presented the coronavirus as no more severe than the flu, before its true severity was widely described. For a time, people were told not to bother with masks, but then they were advised to try double-masking. Some countries are extending the intervals between the first and second vaccine doses. Of course, the state of the pandemic, and of our knowledge about it, has been shifting constantly. Still, throughout the past year and a half, we’ve all experienced medical whiplash.
It’s too early to say how these reversals will affect the way patients perceive the medical profession. On the one hand, seeing debate among medical experts conducted openly could give people a heightened understanding of how medical knowledge evolves. On the other, it could inculcate a lasting skepticism. In 2018, researchers analyzed 50 years’ worth of polling data on trust in medicine. In 1966, 73 percent of Americans reported having confidence in “the leaders of the medical profession.” By 2012 that number had dropped to 34 percent—in part, the authors surmised, because of the continued lack of a universal health care system.
THE ANCIENT GREEK sea god Proteus was able to see the future, but he was forbidden from sharing his prophecies unless he was captured. This was challenging, because he was a shape-shifter: He could become a young man, a tree, a bull, a flame. No one has explored the protean nature of science more prominently than the American physicist and philosopher Thomas Kuhn. In The Structure of Scientific Revolutions, published in 1962, he proposed that science shape-shifts, or advances, through five sequential phases.
The first involves accepting “normal science,” the prevailing theory or “paradigm,” and conducting experiments that merely verify and reinforce the paradigm. During this phase, skepticism is often suppressed. Phase 2 involves finding an “anomaly” that doesn’t fit the paradigm, but treating it as an outlier. In phase 3, a critical mass of threatening “anomalies” leads to a “crisis”—which prompts phase 4: “revolution,” by way of a series of new experiments to test alternative theories. Finally, a new worldview emerges, a “mature science.” The phases then repeat.
Remarkably, Kuhn didn’t argue that science is in search of “truth,” but rather that it “moves away from” an outdated, problematic, and “primitive” worldview. Also key is that what scientists and non-scientists understand within a paradigm reflects not just what they see, but what they have been trained by experience to see. A switch in gestalt says, “I used to see a planet, but now I see a satellite”—allowing that the initial observation may have been true at the time. A paradigm shift, on the other hand, says, “I used to see a planet, but I was wrong; it’s actually a satellite.”
Kuhn based his phases primarily on physics. What happens when we apply them to medicine and health care? When we deal with human lives and preventing illness, “advancement” can look a lot like “flip-flopping.” Is a changed recommendation an admission of harm? And where does that leave us with large public health efforts? Medical reversals place doctors in a bind. Improved medical knowledge represents progress, but honestly admitting to a past error may lead patients to see them as incompetent, breeding mistrust.
What if we got rid of reversals? That’s what University of Chicago Medical School professor Adam Cifu and oncologist Vinayak Prasad propose in Ending Medical Reversal: Improving Outcomes, Saving Lives. In many cases, they conclude, recommendations are simply issued too soon and are based on low-quality trials. Guideline committees may succumb to groupthink or feel pressured to reach a consensus where none exists. “If we look at something like peanut restriction,” Cifu told me, “the initial recommendations were mostly based on theory—good immunology theory, but theory nonetheless.” If doctors “stick with what’s evidence-based, our advice will be less likely to be overturned.”
Yet diseases don’t wait for evidence. Doctors must sometimes make medical decisions even if good data is rare or unavailable. Cifu and Prasad draw a sharp distinction between evidence- and theory-based recommendations, but in practice, doctors often adopt a looser framework. They may use lower-quality (often theory-based) recommendations until they can be replaced with higher-quality ones. Doctors combine this knowledge with their own personal experience in making clinical decisions.
Medical guidelines are similarly a composite thing, often seeking to balance new evidence with deference to established authority. And decisionmakers may also consider how a revision will affect trust in the system as a whole. In the 1990s, for example, rotavirus gastroenteritis killed more than 130,000 children globally each year. In 1998 the pharmaceutical company Wyeth released a vaccine, called RotaShield, that dramatically reduced the mortality rate. Within a year, however, doctors and patients poured in with complaints. Among the inoculated, there seemed to be a small increase in a bowel condition called intussusception, which in rare cases can be deadly. In 1999, after 15 cases of vaccine-related intussusception were reported to the Vaccine Adverse Event Reporting System (VAERS), the Centers for Disease Control withdrew its recommendation, and Wyeth pulled RotaShield from the American market. It’s worth noting that VAERS is limited by the honor code: Adverse events are not confirmed.
In a 2012 paper titled “The First Rotavirus Vaccine and the Politics of Acceptable Risk,” Jason Schwartz, then a fellow at the University of Pennsylvania, explored the thinking behind the withdrawal. In his view, the decision wasn’t purely evidence-based. Schwartz told me that while some “argued that keeping the vaccine would have, in absolute terms, saved more lives,” the decisionmakers weighed trust: “You can’t have a vaccine out there with a notable risk of a harmful condition.”
According to this reasoning, the RotaShield reversal should increase our trust in vaccines: It shows that the system we use to monitor them works. (Two safer rotavirus vaccines have since been introduced and remain in use.) Vaccines such as MMR have been monitored for decades by the same system, and observers have seen no alarming signs—proof of their extraordinary safety. We’ve recently seen similar safety processes play out with the AstraZeneca and Johnson & Johnson Covid-19 vaccines. Still, a paradox of medicine is that the steps we take to make the system more trustworthy can make it seem less so.
THE FLIP SIDE of that paradox is that getting doctors to be comfortable expressing uncertainty may be the surest way to instill patient trust. Steven Hatch, a professor of infectious diseases at the University of Massachusetts, argues that medical reversals unsettle us because both medical professionals and patients are too fixated on being sure. “The public often thinks that they go to their doctor, the doctor runs the test, and the test reveals the truth,” Hatch told me. “But most of the time, we weigh sets of data and arrive at weighted possibilities which are not rock-solid.”
Doctors might approach different kinds of patients differently. Some people are comfortable with uncertainty and risk; others, says Hatch, struggle “to deal with ambiguity in their lives in general.” With the latter, doctors must resist the temptation to create a false sense of certainty, because “it’s really when things go wrong that a patient may feel cheated by the system.”
Hatch’s observations made me think of Diane, a woman I met a few years ago at a yoga retreat. Now in her sixties and retired, Diane is healthy, active, and cheerful, but she’d gone decades without visiting a doctor. She’d avoided preventative screenings of all kinds, in large part because it seemed to her that medical advice is always changing.
A few years ago, one of Diane’s friends—a woman who’d also avoided routine screenings—died of colon cancer. This inspired Diane to make a few doctor’s appointments and, in December 2019, she had her first physical exam since the early 1990s. Still, she found herself confused about how much uncertainty was normal in the doctor-patient relationship. She told me that when she asked her doctor if a prescribed skin cream would make her skin sensitive to the sun, her doctor told Diane that sun sensitivity wasn’t a side effect. Later, at home, Diane looked up the medication and found a warning that the cream actually did make people more sensitive to sunlight. “The doctor admitted to being unsure, which didn’t bother me,” Diane said. “But then she ended up telling me the wrong information. It’s hard for me to overlook that.”
Diane has struggled with the changing recommendations during the pandemic, and with figuring out how they should shape her behavior. “It almost seems like no one knew what they were talking about,” she recently told me. “First, it was no mask, then it was mask. Now, it’s two masks. It’s hard to keep up.”
Diane’s husband is a pilot, so I suggested a flying analogy. Sometimes a pilot who has been flying the same route for years has to shift because of severe turbulence or weather, perhaps flying thousands of feet higher or lower than what was originally planned. Usually the pilot announces the change to the cabin, and the passengers understand. Most don’t see the pilot as newly untrustworthy or incompetent; on the contrary, they’d worry if the plane shifted course and no announcement was made. Changes are inevitable when new information arrives, and transparency should increase trust, not erode it.