
Actually, Covid Optimists and Pessimists Are Both Right

Mild and dire forecasting models serve different purposes, and can be tricky to interpret. But when they appear similar, it may signal the end of the pandemic.


Consider this thought experiment: J is a 55-year-old patient who has smoked two packs of cigarettes a day since he was 22. He has just been diagnosed with stage III non-small-cell lung cancer. His doctor uses a series of methods, including a model, to determine his prognosis.

In Situation 1, his doctor uses the “precautionary principle” and presents the worst-case scenario based on a model of the worst case: J has about six months to live.

In Situation 2, the doctor bases her prognosis on future-projecting J’s present situation, by definition not the worst-case scenario and more “optimistic”: J has another two years to live.

Which scenario is better?

The answer isn't so straightforward. In medicine, prognostication is fraught with its own challenges and depends largely on the data and model used, which may not perfectly apply to an individual patient. More importantly: The patient is part of the model. If that information then shifts the patient's behavior, the model itself changes; more precisely, the weights given to certain variables in the model shift toward either a more negative or a more positive outcome. In the first scenario, J may decide to change his behavior to make the most of his next six months, perhaps extending them to nine months or longer. Does that mean the model was inaccurate? No. It does mean that knowledge of the model helped nudge J toward a more optimistic outcome. In the second scenario the opposite may happen: J may continue his two-pack-a-day smoking habit, or only cut down to a pack a day, which may hasten a more negative outcome. It's entirely possible that J in Situation 1 lives for two years, and in Situation 2 lives for six months.
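To make that feedback concrete, here is a toy sketch, not any clinical model: a simple exponential survival curve in which the prognosis itself shifts the patient's behavior, and that shift rescales the hazard rate the model assumed. The hazard rates and behavior multipliers below are invented purely to mirror the six-month and two-year figures above.

```python
# Toy illustration (not a clinical model): a prognosis changes behavior,
# and the behavior change feeds back into the hazard the model assumed.
import math

def median_survival_months(base_hazard, behavior_multiplier):
    """Median survival under a simple exponential survival model.

    base_hazard: monthly hazard rate assumed by the prognostic model.
    behavior_multiplier: < 1 if the patient improves habits after hearing
    the prognosis, > 1 if the forecast leads to fatalism. All values here
    are made up purely for illustration.
    """
    return math.log(2) / (base_hazard * behavior_multiplier)

# Situation 1: worst-case prognosis (high assumed hazard, ~6-month median),
# but the grim forecast nudges the patient toward healthier behavior.
print(median_survival_months(base_hazard=0.115, behavior_multiplier=0.4))  # ~15 months

# Situation 2: present-trend prognosis (lower assumed hazard, ~2-year median),
# but the optimistic forecast leaves the two-pack-a-day habit unchanged.
print(median_survival_months(base_hazard=0.029, behavior_multiplier=2.0))  # ~12 months
```

The only point of the sketch is that the two situations can swap outcomes once the forecast changes what the patient does.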

This pattern exists everywhere, from prognosticating climate change to even polling (knowing poll results can affect voting behavior, potentially changing the outcome). We've seen a similar dilemma with Covid-19 pandemic modeling, which may help explain the divisiveness over everything from when the pandemic may end to whether lockdowns are appropriate. Last year, just as the World Health Organization declared Covid-19 a global pandemic, I wrote about uncertainty and risk perception. When faced with uncertainty we defer to experts, but a month later the National Institutes of Health's Anthony Fauci correctly noted that even experts struggle to predict what was (and still is) a "moving target."

Over the past few weeks we've seen more opinion pieces focused on optimism: that herd immunity will be reached by April, and summer will be more like 2019, wide open and carefree. We've also seen how this optimism, based on a "present-day accurate model," can sway behavior: from schools opening (but then locking back down) to Texas' recent removal of its mask mandate potentially contributing to an uptick in cases. Others have taken a more pessimistic approach, saying it may be another two years until things "return to normal," and that the virus variants are a "whole other ballgame." Today, in Michigan and in Canada, a potential variant-fueled third wave suggests a less optimistic outlook (for now). We're all deeply familiar with how this pattern has repeated itself several times over the past year, and even experts disagree (some have changed tack). It's more than just bad-news bias. But how do we reconcile this dichotomy between the "optimists" and the "pessimists"? It may come down to how we understand the purpose of epidemiological models in general, and the two types of pandemic forecasting models.

Justin Lessler is an associate professor of epidemiology at Johns Hopkins University and is part of a team that regularly contributes to the Covid-19 Forecast Hub. He specifies that there are four main types of models: theoretical, which help us understand how disease systems work; strategic, which help public officials make decisions, including the decision to "do nothing"; inferential, which help estimate things like levels of herd immunity; and forecasting, which project what will happen in the future based on our best guess of how the response and the epidemic will actually unfold.

When it comes to forecasting models, there are those whose forecasts are not, by definition, worst-case scenarios (and are thus more optimistic); they aim to describe present-day patterns in transmission and susceptibility and project them forward, assuming the current patterns stay the same. In these "dynamic causal models," a variety of variables are added to also account for what University College London-based biomathematician Karl Friston has described as unknown factors that affect how the virus spreads, dubbed "dark matter."

Then there are forecasting models guided by the “precautionary principle,” aka “scenario models,” where the assumptions are often the most conservative. These account for the worst-case scenario, to allow governments to best prepare with supplies, hospital beds, vaccines, and so forth. In the UK, the government’s Scientific Advisory Group for Emergencies focuses on these models and thus guides policy around lockdowns. In the US, President Biden’s Covid-19 task force is the closest equivalent, while the epidemiologists and actuaries that appear nonconformist may be the closest we get to a group like the Independent SAGE (which Friston works with).
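The practical gap between the two styles shows up even in a toy compartmental model: projecting today's mitigated transmission forward gives a much smaller epidemic peak than the unmitigated worst case that scenario models plan around. The sketch below is a minimal SIR illustration; the transmission and recovery rates are invented for the example and do not reproduce any group's actual Covid-19 model.

```python
# Minimal SIR sketch contrasting the two forecasting styles described above.
# All parameters are invented for illustration only.

def sir_peak_infected(beta, gamma=0.1, i0=0.001, days=365, dt=1.0):
    """Integrate a basic SIR model and return the peak infected fraction.

    beta: transmission rate; gamma: recovery rate (so R0 = beta / gamma).
    """
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s, i, r = s - new_infections, i + new_infections - new_recoveries, r + new_recoveries
        peak = max(peak, i)
    return peak

# "Forecast" style: project today's mitigated transmission forward unchanged.
print(f"status-quo peak: {sir_peak_infected(beta=0.13):.1%}")   # roughly a few percent infected at once

# "Scenario" (precautionary) style: assume mitigation lapses entirely.
print(f"worst-case peak: {sir_peak_infected(beta=0.30):.1%}")   # an order of magnitude larger peak
```

The worst-case run is what a government would use to size hospital capacity and supplies; the status-quo run is closer to a prediction of what will actually happen if nothing changes.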

"The type of modeling we do for the Independent SAGE is concerned with getting the granularity right, ensuring the greatest fit, with minimal complexity, to help us look under the hood, as it were, at what is really going on," Friston told me. "So, the fundamental issue is namely, do we comply with the precautionary principle using worst-case scenario modeling of unmitigated responses, or do we commit to the most accurate models of mitigated response?"

This gets to the heart of the tension between various "experts." For instance, epidemiologists like Stanford's John Ioannidis have tended to be more concerned with modeling the pandemic to accurately explain current patterns (and extending those patterns into the future), which can come off as more optimistic and isn't typically used to guide policy.

**Originally published in Wired, March 2021**