With coronavirus, sometimes the right answer is ‘I don’t know’

The coronavirus crisis has turned everyone into an amateur statistician overnight. We know about “flattening the curve,” exponential growth in infections and deaths, and models projecting potential outcomes of the pandemic. Headlines alternately tout studies projecting that our healthcare system will be massively overwhelmed absent months of severe lockdown, and others estimating that COVID-19 deaths will number in the tens of thousands, but not millions, if we stick with the ongoing lockdowns.

If you’re left feeling confused about whether we’re headed in the right direction, whatever that means these days, you’re not alone. With the coronavirus crisis, we hear predictions day in and day out that sometimes seem at odds with one another. It might make you think the experts don’t know what they’re talking about. After all, we all want someone to tell us confidently what is going to happen so we can plan our lives.

But since data about COVID-19 is only beginning to be collected, the future is going to remain a mystery for some time. If nothing else, this point in the crisis is giving people a true crash course in statistical uncertainty.

We are accustomed to rewarding those who boldly make confident predictions: so-and-so will win this election, it will definitely rain tomorrow, the Atlanta Falcons have a 99.7% chance of winning the Super Bowl, and so on. We have little time or patience for someone who says, “Maybe this will happen, but maybe it won’t.”

But let’s define “modeling” briefly: A model takes hard data about what we know, makes assumptions about what that data means, and then creates an estimate of the future.
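
To make that definition concrete, here is a toy sketch in Python of the data-assumptions-estimate pipeline. Every number in it is invented, and the assumption, that cases keep growing at the average recent daily rate, is deliberately crude:

```python
# A toy model: hard data in, an assumption applied, an estimate out.
# All case counts here are invented for illustration.

observed_cases = [100, 125, 160, 198, 252]  # hypothetical daily totals

# Assumption: cases keep growing at the average recent daily rate.
growth_rates = [b / a for a, b in zip(observed_cases, observed_cases[1:])]
assumed_rate = sum(growth_rates) / len(growth_rates)

# Estimate: extrapolate one week into the future.
projection = [observed_cases[-1]]
for _ in range(7):
    projection.append(projection[-1] * assumed_rate)

print(f"assumed daily growth rate: {assumed_rate:.2f}")
print("7-day projection:", [round(x) for x in projection[1:]])
```

Change the assumption, say, that distancing slows the growth rate a little each day, and the same hard data yields a very different estimate. That is all a model is.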

Those who work in predictive scientific fields, modeling all sorts of outcomes from elections to the weather to the spread of disease, will tell you that uncertainty is part and parcel of the job. Modeling is mostly science but also a little bit art, relying on a scientist’s or expert’s judgment and assumptions. Estimates and predictions are subject to change as we learn new things and hone our assumptions further.

This is absolutely not to say models shouldn’t be trusted. The preponderance of scientific evidence is clear that COVID-19 is deadly, is not “just the flu,” and has the potential to be devastating even if we practice social distancing for some time. Models are often all we have to guide public policy in situations like these, when someone must decide whether schools stay shuttered or sports seasons stay postponed. In times of high-stakes uncertainty, policymakers should not be faulted for focusing on avoiding potential worst-case scenarios, even if those scenarios are unlikely to occur.

Some would criticize medical experts such as Dr. Anthony Fauci for updating their views and changing their assessments in response to new information. On the contrary, we should celebrate those who are upfront about their assumptions and who update their forecasts when new information emerges, even if it makes their past estimates look “wrong.”

Models can always be refreshed with new information that renders yesterday’s “prediction” obsolete. Take hurricane season’s “spaghetti models” as an example. The charts resemble, well, spaghetti, with various lines projecting where each model says a hurricane might go. Each line is just the center of one estimated possible path, and it can and will shift. These “spaghetti model” maps update as a storm nears, incorporating new estimates of its track.

Each of the squiggles on the chart comes from a different model, and the models differ because they take in similar sources of hard data but apply different assumptions about what might cause the storm to move a certain way. Some experts might assume a particular atmospheric factor matters a lot; others might assume it doesn’t matter at all. Some might assume that storms tend to stick to their current track; others might assume a storm has more of an ability to veer in a wild new direction.
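
Continuing the invented numbers from the sketch above, here is what a bare-bones “spaghetti” ensemble looks like in code: the same starting data fed through models that differ only in their assumed daily growth rate. The scenario labels and rates are made up for illustration:

```python
# A toy "spaghetti" ensemble: one shared data point, three different
# assumptions, three diverging estimates. All values are invented.

start = 252  # the last observed daily count from the sketch above

scenarios = {
    "assumes strict distancing holds": 0.95,  # cases shrink 5% per day
    "assumes moderate distancing": 1.02,      # cases grow 2% per day
    "assumes distancing fades": 1.10,         # cases grow 10% per day
}

for label, daily_rate in scenarios.items():
    path = [start]
    for _ in range(14):
        path.append(path[-1] * daily_rate)
    print(f"{label}: day-14 estimate = {round(path[-1])}")
```

Three reasonable-sounding assumptions, three trajectories that end up far apart. That gap is not a failure of the models; it is an honest picture of the uncertainty.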

At least with a hurricane, forecasters have data on a multitude of past hurricanes and hurricane seasons to draw upon when creating and testing models. With the coronavirus, much of what we need to know is still unknown: how fatal the disease is, how many people have it, how many of those who have it get tested, and how well and for how long people will adhere to social distancing guidelines. These are all data points still being created and collected in real time.
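
This is also why estimates shift as data accumulates. One standard way to capture that, sketched here with invented counts, is a Beta-Binomial update: start from a prior that admits “we don’t know,” then let each batch of observations pull the estimate and narrow the uncertainty:

```python
# A toy Beta-Binomial update: an uncertain rate estimate moves and
# narrows as observations arrive. All counts are invented.

alpha, beta = 1, 1  # flat prior: an honest "we don't know"

weekly_batches = [(3, 97), (12, 488), (45, 1955)]  # (events, non-events)

for events, non_events in weekly_batches:
    alpha += events
    beta += non_events
    estimate = alpha / (alpha + beta)  # posterior mean of the rate
    n = alpha + beta - 2               # observations seen so far
    print(f"after {n} observations: estimated rate = {estimate:.3f}")
```

Each week’s estimate supersedes the last, but yesterday’s was not “wrong”; it was the best summary of yesterday’s data.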

Less-sophisticated commentators can often be spotted virtue-signaling by demanding fealty to “the science” that just so happens to conform to their worldview, while conveniently ignoring “science” that does not.

The voices we should instead elevate and admire are the ones who acknowledge uncertainty, who update their assumptions and forecasts as we learn more, and who are willing to give that most honest of answers: We don’t know.
