This year the Doomsday Clock moved forward for the first time since 2012. The theoretical countdown to catastrophe was devised in 1947 by the Bulletin of the Atomic Scientists, a watchdog group created in 1945 by scientists who had worked on the Manhattan Project. Its contemporary caretakers have moved the clock to three minutes to midnight, citing the threats of climate change and a slowdown in nuclear disarmament.

But global warming and nuclear malaise are not the only threats facing humanity. One organization is looking at the potential threats posed by emerging technologies—dangers no one has yet considered. The Center for the Study of Existential Risk (CSER) at the University of Cambridge, founded in 2012, develops scientific methodologies for evaluating new global risks—to determine, for example, whether a scenario in which robots take over the earth is science fiction or a real-life possibility. Some of the world's greatest minds, including Stephen Hawking, Jaan Tallinn (a founding engineer of Skype) and philosopher Huw Price, contribute to the endeavor.

SCIENTIFIC AMERICAN sat down with one of the center's co-founders, astrophysicist Lord Martin Rees, to ponder the possible end of life as we know it. Edited excerpts follow.

Why start a group that delves into the threat of new technologies?
Throughout history our ancestors have confronted risks: pestilence, storms, earthquakes and human-induced disasters. But this century is different. It's the first in which one species, ours, can determine the planet's future, threaten our civilization and jeopardize the existence of future generations.

What types of scenarios do you examine?
At the moment, there is a wide divergence among experts about both the probabilities and the impacts. Climate scientists differ on whether there are tipping points that could lead to catastrophe. There is a huge range of views among artificial-intelligence experts: some think that human-level AI with a mind of its own (and goals orthogonal to those of humans) could develop before midcentury; others deem this prospect very remote and argue that we should focus our concern on the ethics and safety of dumb autonomous robots (military drones, for instance). And there is already a lively debate on the frontiers of biotech. I hope that CSER will help forge a firmer consensus about which risks are most real and help to raise these on the agenda.

What are the major risks to humanity as you see them and how serious are they?
I'm personally pessimistic about the community's capacity to handle advances in biotech. In the 1970s the pioneers of molecular biology famously formulated guidelines for recombinant DNA at the Asilomar conference. Such issues arise even more starkly today. There is current debate and anxiety about the ethics and prudence of new techniques: “gain of function” experiments on viruses and the use of so-called CRISPR gene-editing technology. As compared with the 1970s, the community is now more global, more competitive and more subject to commercial pressures. I'd fear that whatever can be done will be done somewhere by someone. Even if there are formally agreed protocols and regulations, they'll be as hard to enforce as the drug laws. Bioerror and bioterror rank highest on my personal risk register for the medium term (10 to 20 years).

Is there anything people worry about that they shouldn't?
Many who live in the developed world fret too much about minor risks (carcinogens in food, low radiation doses, plane crashes, and so forth). Some worry too much about asteroid impacts, which are among the natural risks that are best understood and easiest to quantify. Moreover, it will soon be possible to reduce that risk by deflecting the paths of asteroids heading for the earth. That's why I support the B612 Sentinel project.

What should worry us more are threats that are newly emergent. They surely merit more attention, and they are what CSER aims to study. It's an important maxim that the unfamiliar is not the same as the improbable. The stakes are so high that even if we can reduce the probability of catastrophe by one part in a million, we'll have earned our keep.