As COVID-19 claims more victims, scientific models make headlines. We need these models to make informed decisions. But how can we tell whether a model can be trusted? The philosophy of science, it seems, has become a matter of life or death. Whether we are talking about traffic noise from a new highway or about climate change or a pandemic, scientists rely on models, which are simplified, mathematical representations of the real world. Models are approximations and omit details, but a good model will robustly output the quantities it was developed for.
Models do not always predict the future. This does not make them unscientific, but it makes them a target for science skeptics. I cannot even blame the skeptics, because scientists frequently praise correct predictions to prove a model’s worth. The idea was not originally theirs. Many eminent philosophers of science, including Karl Popper and Imre Lakatos, opined that correct predictions are a way of telling science from pseudoscience.
But correct predictions alone don’t make for a good scientific model. And the opposite is also true: a model can be good science without ever making predictions. Indeed, the models that matter most for political discourse are those that do not make predictions. Instead they produce “projections” or “scenarios” that, in contrast to predictions, are forecasts that depend on the course of action we will take. That is, after all, the reason we consult models: so we can decide what to do. But because we cannot predict political decisions themselves, the actual future trend is necessarily unpredictable.
This has become one of the major difficulties in explaining pandemic models. Dire forecasts in March of COVID’s global death toll have not come true. But those were projections for the case in which we take no measures; they were not predictions.
Political decisions are not the only reason why a model may merely make contingent projections rather than definite predictions. Trends of global warming, for example, depend on the frequency and severity of volcanic eruptions, which themselves cannot currently be predicted. They also depend on technological progress, which itself depends on economic prosperity, which again depends, among many other things, on whether society is in the grasp of a pandemic. Sometimes asking for predictions is really asking for too much.
Predictions are also not enough to make for good science. Recall how each time a natural catastrophe happens, it turns out to have been “predicted” in a movie or a book. Given that most natural catastrophes are predictable to the extent that “eventually something like this will happen,” this is hardly surprising. But these are not predictions; they are scientifically meaningless prophecies, because they are not based on a model whose methodology can be reproduced, and no one has tested whether the prophecies were better than random guesses.
Thus, predictions are neither necessary for a good scientific model nor sufficient to judge one. But why, then, were the philosophers so adamant that good science needs to make predictions? It’s not that they were wrong. It’s just that they were trying to address a different problem than what we are facing now.
Scientists tell good models from bad ones by statistical methods that are hard to communicate without equations. These methods depend on the type of model, the amount of data and the field of research. In short, it’s difficult. The rough answer is that a good scientific model accurately explains a lot of data with few assumptions. The fewer the assumptions and the better the fit to data, the better the model.
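One standard way to quantify this trade-off between fit and assumptions is an information criterion such as the Akaike information criterion (AIC), which rewards a low residual error but penalizes every extra parameter. The sketch below is purely illustrative (the data and models are made up, not drawn from the article): a one-parameter constant model is compared against a two-parameter straight line, and the line wins because its far better fit outweighs the penalty for its extra assumption.

```python
import math

# Toy data: an approximately linear trend with small deterministic wiggles
# standing in for measurement noise.
xs = list(range(10))
ys = [2.0 * x + 1.0 + 0.5 * math.sin(x) for x in xs]
n = len(xs)

def rss(predictions):
    """Residual sum of squares: how badly the model misses the data."""
    return sum((y - p) ** 2 for y, p in zip(ys, predictions))

def aic(k, rss_val):
    """AIC for a least-squares fit: 2*k + n*ln(RSS/n).

    k counts the model's free parameters (its 'assumptions');
    lower AIC means a better balance of fit and parsimony."""
    return 2 * k + n * math.log(rss_val / n)

# Model 1: a single constant (one parameter).
mean_y = sum(ys) / n
aic_constant = aic(k=1, rss_val=rss([mean_y] * n))

# Model 2: a straight line via closed-form least squares (two parameters).
mean_x = sum(xs) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
aic_line = aic(k=2, rss_val=rss([slope * x + intercept for x in xs]))

# The line explains far more of the data at the cost of one extra
# assumption, so its AIC is lower (better).
```

A model with one parameter per data point would fit these points perfectly, yet criteria like this (and out-of-sample tests) expose it as explanatorily empty, which is exactly the point the statistical methods are designed to capture.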
But the philosophers were not concerned with quantifying explanatory power. They were looking for a way to tell good science from bad science without having to dissect scientific details. And while correct predictions may not tell you whether a model is good science, they increase trust in the scientists’ conclusions because predictions prevent scientists from adding assumptions after they have seen the data. Thus, asking for predictions is a good rule of thumb, but it is a crude and error-prone criterion. And fundamentally it makes no sense. A model either accurately describes nature, or it doesn’t. At which moment in time a scientist made a calculation is irrelevant for the model’s relation to nature.
A confusion closely related to the idea that good science must make predictions is the belief that scientists should not update a model when new data come in. This, too, can be traced back to Popper and company, who held it to be bad scientific practice. But of course a good scientist updates their model when they get new data! This is the essence of the scientific method: when you learn something new, revise. In practice, this usually means recalibrating model parameters with new data. This is why we see regular updates of COVID case projections. What a scientist is not supposed to do is add so many assumptions that their model can fit any data. That would be a model with no explanatory power.
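Recalibration of this kind can be sketched concretely. In the hypothetical example below (the case numbers are invented for illustration, not real data), an exponential-growth model is fit to early case counts by log-linear least squares; when later data arrive showing slower spread, the same model is simply refit, and the estimated growth rate drops. The model is unchanged; only its parameter is updated.

```python
import math

def fit_growth_rate(cases):
    """Fit log(cases_t) ~ log(c0) + r*t by least squares; return r.

    r is the exponential growth rate implied by the observed counts."""
    n = len(cases)
    ts = list(range(n))
    logs = [math.log(c) for c in cases]
    mean_t = sum(ts) / n
    mean_l = sum(logs) / n
    return (sum((t - mean_t) * (l - mean_l) for t, l in zip(ts, logs))
            / sum((t - mean_t) ** 2 for t in ts))

# Hypothetical early counts, growing roughly 30% per time step.
cases = [100, 130, 169, 220, 286]
r_early = fit_growth_rate(cases)      # close to ln(1.3) ~ 0.26

# New observations arrive showing growth slowing to roughly 10% per step.
# Recalibrate: same model, same method, updated parameter.
cases += [315, 346, 381]
r_updated = fit_growth_rate(cases)    # lower than r_early
```

Contrast this with the illegitimate move the article warns against: instead of refitting one rate, bolting on a new ad hoc parameter for every surprise until the model can accommodate any outcome.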
Understanding the role of predictions in science also matters for climate models. These models have correctly predicted many observed trends, from the increase of surface temperature, to stratospheric cooling, to sea ice melting. This fact is often used by scientists against climate change deniers. But the deniers then come back with some papers that made wrong predictions. In response, the scientists point out that the wrong predictions were few and far between. The deniers counter that there may have been all kinds of reasons for the skewed number of papers that have nothing to do with scientific merit. Now we are counting heads and quibbling about the ethics of scientific publishing rather than talking science. What went wrong? Predictions are the wrong argument.
A better answer to deniers is that climate models explain loads of data with few assumptions. The computationally simplest explanation for our observations is that the trends are caused by human carbon dioxide emissions. It’s the hypothesis that has the most explanatory power.
In summary, to judge a scientific model, do not ask for predictions. Ask instead to what degree the data are explained by the model and how many assumptions were necessary for this. And most of all, do not judge a model by whether you like what it tells you.