Like many teachers, I’ve agonized over what to tell my students about the crises convulsing us lately, the pandemic and U.S. presidential election. What lessons can we draw from what’s happened? I’ve decided to double down on the anti-wisdom I lay on all my classes: Distrust authorities, including me.

I’ve inadvertently demonstrated that precept for my students. I’m a lefty with an optimistic streak, so I predicted that Joe Biden would win handily on election night; that was my take on polls showing Biden leading Trump in Florida, Ohio and other swing states. Pollsters, I told my students, had surely corrected the mistakes they made four years ago, when they underestimated Trump’s support. But once again pollsters “got Donald Trump wrong,” as Politico put it. Wishful thinking led me, and perhaps the pollsters, astray.

Optimism has also distorted my view of the coronavirus. Last March, I took heart from warnings by Stanford epidemiologist John Ioannidis that we might be overestimating the deadliness of the virus and hence overreacting to it. He predicted that the U.S. death toll might reach only 10,000 people, lower than the average annual toll of seasonal flu. I wanted Ioannidis to be right, and his analysis seemed plausible to me, but his prediction turned out to be wrong by more than an order of magnitude.

Ironically, Ioannidis is renowned for raising doubts about scientific experts. In his blockbuster 2005 article “Why Most Published Research Findings Are False,” Ioannidis presented evidence that a majority of the claims in peer-reviewed papers cannot be corroborated. Since then, Ioannidis has continued documenting problems in the scientific literature and tracing them to factors such as confirmation bias, competition for funding and conflicts of interest.

“There is increasing evidence that some of the ways we conduct, evaluate, report and disseminate research are miserably ineffective,” Ioannidis declared in Scientific American in 2018. He recommends investments in research on “how to get the best science and how to choose and reward the best scientists. We should not trust opinion (including my own) without evidence.”

Ioannidis has also found that journals are more likely to publish “extreme, spectacular results (the largest treatment effects, the strongest associations, or the most unusually novel and exciting biological stories),” as he and two co-authors put it in a 2008 paper, “Why Current Publication Practices May Distort Science.” Spectacular results, of course, are more likely to be wrong. Although the scientific community has attacked Ioannidis’s views of COVID-19, his critiques of the scientific literature have been embraced.

Another notable expert critic of experts is social psychologist Philip Tetlock of the University of Pennsylvania. In his 2005 book Expert Political Judgment, Tetlock reported on a study of 284 professional pundits, including academics, government officials and journalists, who comment on politics and related issues via mass media and in scholarly journals and conferences. Tetlock assessed the accuracy of 28,000 of these pundits’ predictions concerning elections, wars, economic collapses and other events. 

The experts’ accuracy was no better than chance, or than that of a dart-throwing monkey, as Tetlock put it. Not only that, but their accuracy was inversely proportional to their prominence. That is, the more exposure they got from CNN, Fox News, the Wall Street Journal and the New York Times, the less likely their predictions were to hold up. This counterintuitive finding makes sense when you consider that pundits get more attention by making dramatic pronouncements, which are also more likely to be wrong.

As a science journalist, I’ve criticized lots of experts, including Nobel Prize winners and tenured professors at fancy universities. I’ve second-guessed proponents of string and multiverse theories and biological theories of war. My last book argued that all “solutions” to the mind-body problem are flawed, and that experts favor certain theories for subjective reasons. Over roughly the last year, I’ve presented evidence that cancer care, psychiatry and medicine in general have been compromised by financial conflicts of interest.

In addition to telling my students about my work, and that of Ioannidis and Tetlock, I mention philosophical critiques of science mounted by Thomas Kuhn and Karl Popper. Scientists can never prove a theory is true, Popper insisted; they can only falsify or disprove it. Kuhn, similarly, warned that absolute truth is unattainable; scientific theories are always provisional, subject to change.

But after dumping all this skepticism on my students, I warn them not to be too skeptical. I remind them that science, in spite of its fallibility, represents an extraordinarily powerful method for understanding and manipulating nature. Science has helped us vanquish smallpox and other diseases, send spacecraft to the moon and Mars, and invent jets, smartphones and other technologies that have transformed our planet.

We believe in the bedrock theories of science—quantum mechanics, relativity, the big bang, the theory of evolution, the genetic code—because scientists have amassed overwhelming evidence for them. We should believe that vaccines are effective and that fossil-fuel emissions are warming the planet for the same reason.

So yes, I tell my students, distrust scientists and other experts, while never forgetting that sometimes they get things right. Although Ioannidis was wrong about the deadliness of COVID-19, his claims about the unreliability of peer-reviewed science have been widely accepted because he backs them up with data.

Scientists can also earn our trust, paradoxically, by admitting their fallibility. Many of us trust what Anthony Fauci says about COVID-19 because he “admits uncertainties and failings,” as a Scientific American article has noted. In his 2015 book Superforecasting, Tetlock presents evidence that ordinary people, nonprofessionals, can cultivate a better-than-average ability to predict events such as elections, wars and economic booms and busts. These “superforecasters” are “careful, curious, open-minded, persistent and self-critical,” as one reviewer put it.

Self-criticism, I tell my students, is difficult. It’s much easier to spot flawed thinking in others than in yourself. How do I practice self-criticism? I try to be transparent—with students, readers and myself—about my own prejudices. In a new book, Pay Attention: Sex, Death, and Science, a lightly fictionalized memoir, I reveal, perhaps too candidly, how my rationality, such as it is, is entangled with my desires and fears.

I also try to understand the perspectives of those with whom I disagree. That’s why, last spring, I spoke to a Texan strength-training guru and Trump supporter who thought that the U.S. was overreacting to COVID-19. I have even sought positive things to say about Trump. For example, I appreciate his desire to extricate the U.S. from wars in Afghanistan and Syria, which his own generals thwarted.

I nonetheless voted against Trump because he is a spectacularly untrustworthy authority who has been caught lying over and over again. Humility and self-criticism are utterly alien to his nature. He believes, or professes to believe, only that which helps him maintain power, regardless of evidence.

Now that the election is over, I find myself once again peering into the future. Will Trump’s devotees and the Republican Party accept Joe Biden and Kamala Harris as their leaders? Will the Pfizer vaccine turn out to be as effective as a small, preliminary trial seems to suggest? I’m trying hard, but not that hard, to keep my wishful thinking in check.

Further Reading:

A Dig Through Old Files Reminds Me Why I’m So Critical of Science

Advice to Young Science Writers: Ask “What Would Chomsky Think?”