Meet the Co-Founder of an Apocalypse Think Tank

Martin Rees, astrophysicist and founding member of the Center for the Study of Existential Risk, talks differentiating sci-fi from real doomsday possibilities


This year the Doomsday Clock moved forward for the first time since 2012. The theoretical countdown to catastrophe was devised 67 years ago by the Bulletin of the Atomic Scientists, a watchdog group created in 1945 by scientists who worked on the Manhattan Project. Its contemporary caretakers have moved the clock to three minutes before midnight, citing the threats of climate change and a slowdown in nuclear disarmament.

But global warming and nuclear malaise are not the only threats facing humanity. One organization is looking at the potential threats posed by emerging technologies—dangers no one has even considered yet. The Center for the Study of Existential Risk (CSER) at the University of Cambridge, founded in 2012, develops scientific methodologies for evaluating new global risks—to determine, for example, if a scenario in which robots take over the earth represents science fiction or a real-life possibility. Some of the world's greatest minds, including Stephen Hawking, Jaan Tallinn (a founding engineer of Skype) and philosopher Huw Price, contribute to the endeavor.

SCIENTIFIC AMERICAN sat down with one of the center's co-founders, astrophysicist Lord Martin Rees, to ponder the possible end of life as we know it. Edited excerpts follow.


Why start a group that delves into the threat of new technologies?
Throughout history our ancestors have confronted risks: pestilence, storms, earthquakes and human-induced disasters. But this century is different. It's the first when one species, ours, can determine the planet's future, threaten our civilization and jeopardize the existence of future generations.

What types of scenarios do you examine?
At the moment there is a wide divergence among experts about both the probabilities and the impacts. Climate scientists differ on whether there are tipping points that could lead to catastrophe. There is a huge range of views among artificial-intelligence experts: some think that human-level AI with a mind of its own (and goals orthogonal to those of humans) could develop before midcentury; others deem this prospect very remote and argue that we should focus our concern on the ethics and safety of dumb autonomous robots (military drones, for instance). And there is already a lively debate on the frontiers of biotech. I hope that CSER will help forge a firmer consensus about which risks are most real and raise their profile on the agenda.

What are the major risks to humanity as you see them and how serious are they?
I'm personally pessimistic about the community's capacity to handle advances in biotech. In the 1970s the pioneers of molecular biology famously formulated guidelines for recombinant DNA at the Asilomar conference. Such issues arise even more starkly today. There is current debate and anxiety about the ethics and prudence of new techniques: “gain of function” experiments on viruses and the use of so-called CRISPR gene-editing technology. As compared with the 1970s, the community is now more global, more competitive and more subject to commercial pressures. I'd fear that whatever can be done will be done somewhere by someone. Even if there are formally agreed protocols and regulations, they'll be as hard to enforce as the drug laws. Bioerror and bioterror rank highest on my personal risk register for the medium term (10 to 20 years).

Is there anything people worry about that they shouldn't?
Many who live in the developed world fret too much about minor risks (carcinogens in food, low-radiation doses, plane crashes, and so forth). Some worry too much about asteroid impacts, which are among the natural risks that are best understood and easiest to quantify. Moreover, it will soon be possible to reduce that risk by deflecting the path of asteroids heading for the earth. That's why I support the B612 Sentinel project.

What should worry us more are threats that are newly emergent. They surely merit more attention, and they are what CSER aims to study. It's an important maxim that the unfamiliar is not the same as the improbable. The stakes are so high that even if we can reduce the probability of catastrophe by one part in a million, we'll have earned our keep.
