The best science transforms our conception of the universe and our place in it and helps us to understand and cope with changes beyond our control. Relativity, natural selection, germ theory, heliocentrism and other explanations of natural phenomena have remade our intellectual and cultural landscapes. The same holds true for inventions as diverse as the Internet, formal logic, agriculture and the wheel.
What dramatic new events are in store for humanity? Here we contemplate 12 possibilities and rate their likelihood of happening by 2050. Some will no doubt bring to mind long-standing dystopian visions: extinction-causing asteroid collisions, war-waging intelligent machines, Frankenstein’s monster. Yet the best thinking today suggests that many events will not unfold as expected. In fact, a scenario could strike one person as sobering and disappointing and another as curious and uplifting. One thing is certain: they all have the power to forever reshape how we think about ourselves and how we live our lives.
Cloning of a Human
The process is extremely difficult, but it also seems inevitable
By Charles Q. Choi
Ever since the birth of Dolly the sheep in 1996, human cloning for reproductive purposes has seemed inevitable. Notwithstanding past dubious claims of such an achievement—including one by a company backed by a UFO cult—no human clones have been made, other than those born naturally as identical twins. Despite success with other mammals, the process has proved much more difficult in humans—which may strike some people as comforting and others as disappointing.
Scientists generate clones by replacing the nucleus of an egg cell with that of another individual. They have cloned human embryos, but none has yet grown past the early stage at which the embryo is a solid ball of cells known as a morula—the act of transferring the nucleus may disrupt the ability of chromosomes to align properly during cell division. “Whenever you clone a new species, there’s a learning curve, and with humans it’s a serious challenge getting enough good-quality egg cells to learn with,” says Robert Lanza of Advanced Cell Technology in Worcester, Mass., who made headlines in 2001 for first cloning human embryos. Especially tricky steps include discovering the correct timing and mix of chemicals to properly reprogram the cell.
Even with practiced efforts, some 25 percent of cloned animals have overt problems, Lanza notes—minor slips during reprogramming, culturing or handling of the embryos can lead to developmental errors. Attempting to clone a human would be so risky, Lanza says, it “would be like sending a baby up into space in a rocket that has a 50–50 chance of blowing up.”
Ethical issues would persist even assuming foolproof techniques. For instance, could people be cloned without their knowledge or consent? On the other hand, a clone might lead a fuller life, because it “really gets to learn” from the original, says molecular technologist George M. Church of Harvard Medical School. “Say, if I learned at 25 I had a terrific ear for music but never got music lessons, I could tell my twin to try it at 5.”
The possibility of human cloning may not be restricted to Homo sapiens, either. Scientists may soon completely sequence the Neandertal genome. Although ancient DNA is degraded, an exceptionally well-preserved fossil could yield enough molecules to generate a cloneable genome, Church suggests. Bringing a clone of an extinct species to term in a surrogate mother of a modern species is even more challenging than normal cloning, considering that such factors as the womb environment and gestation period might be mismatched. The only clone so far of an extinct animal—the bucardo, a variety of ibex that died off in 2000—expired immediately after birth because of lung defects.
In the U.S., not all states have banned human reproductive cloning. The United Nations has adopted a nonbinding ban. If human cloning happens, it will “occur in a less restrictive area of the world—probably by some wealthy eccentric individual,” Lanza conjectures. Will we recoil in horror or grow to accept cloning as we have in vitro fertilization? Certainly developing new ways to create life will force us to think about the responsibilities of wielding such immense scientific power.
Extra Dimensions
The world’s biggest particle collider might uncover new slices of space
By George Musser
Wouldn’t it be great to reach your arm into a fourth dimension of space? You could then liberate yourself from the shackles of ordinary geometry. Hopelessly tangled extension cords would slip apart with ease. A left-handed glove could be flipped over to replace the right-handed one your dog ate. Dentists could do root canals without drilling or even asking you to open your mouth.
As fantastic as extra dimensions of space sound, they might really exist. From the relative weakness of gravity to the deep affinity among seemingly distinct particles and forces, various mysteries of the world around us give the impression that the known universe is but the shadow of a higher-dimensional reality. If so, the Large Hadron Collider (LHC) near Geneva could smash particles together and release enough energy to break the shackles that keep particles in three dimensions and let us reach into that mind-blowing realm.
Proof of extra dimensions “would alter our whole notion of what reality is,” says cosmologist Max Tegmark of the Massachusetts Institute of Technology, who in 1990 wrote a four-dimensional version of the video game Tetris to get a taste of what extra dimensions might be like. (You keep track of the falling blocks using multiple 3-D slices of the full 4-D space.)
In modern physics theories, the main rationale for extra dimensions comes from superstring theory, which aims to unite all the different types of particles into one big happy family. Superstring theory can fulfill that promise only if space-time has a total of 10 dimensions. The extra dimensions could have gone unnoticed either because they are too small to enter or because we are, by our very nature, stuck to a 3-D membrane like a caterpillar clutching onto a leaf.
To be sure, not every proposed unified theory involves extra dimensions. So their discovery or nondiscovery would be a helpful data point. “It would focus what we do,” says physicist Lisa Randall of Harvard University, who made her name studying the caterpillar-and-leaf option.
One way to get at those dimensions is to crank up the energy of a particle accelerator. By the laws of quantum mechanics, the more energy a particle has, the more tightly confined it is; an energy of one tera-electron-volt (TeV) corresponds to a size of 10^-19 meter. If an extra dimension is that big, the particle would literally fall into it and begin to vibrate.
In 1998 physicist Gordon Kane of the University of Michigan at Ann Arbor imagined that the LHC smashed together two protons and created electrons and other particles that not only had the energy of 1 TeV but also integer multiples thereof, such as 2 or 3 TeV. Such multiples would represent the harmonics of the vibrations in extra dimensions set off by the collision. Neither standard particle processes nor exotica such as dark matter particles could account for these events.
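The numbers in these two paragraphs follow from a one-line quantum relation. The sketch below is illustrative only: it takes the value of ħc and a nominal 1-TeV scale as its sole inputs, and the integer harmonics it prints are the idealized "Kaluza-Klein tower" pattern, ignoring model-dependent factors of order one.

```python
HBAR_C_EV_M = 1.97327e-7  # hbar*c (about 197.327 MeV*fm) expressed in eV*meters

def probe_length_m(energy_ev):
    """Length scale resolved by a particle of the given energy: lambda = hbar*c / E."""
    return HBAR_C_EV_M / energy_ev

TEV = 1.0e12  # 1 TeV in electron-volts

# A 1-TeV particle probes distances of order 10^-19 meter, as the text states:
print(f"1 TeV corresponds to about {probe_length_m(TEV):.1e} m")

# If an extra dimension has that radius R, a particle vibrating in it appears
# as a tower of states with energies at integer multiples of hbar*c / R:
R = probe_length_m(TEV)
for n in (1, 2, 3):
    print(f"harmonic {n}: {n * HBAR_C_EV_M / R / 1e12:.0f} TeV")
```

The 1-, 2-, 3-TeV harmonics printed at the end are exactly the evenly spaced multiples Kane's scenario treats as the signature of extra dimensions.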
Extra dimensions might betray themselves in other ways. If the LHC produced subatomic black holes, they would be immediate proof of extra dimensions, because gravity in ordinary 3-D space is simply too weak to create holes of this size. For geometric reasons, higher dimensions would strengthen gravity on small scales. They would likewise change the small-scale behavior of other forces, such as electromagnetism. And by dictating how supersymmetry operates, they might lead to distinctive patterns among the masses and other properties of particles. Besides the LHC, scientists might find hints of extra dimensions in measurements of the strength of gravity and in observations of the orbits of black holes or of exploding stars.
The discovery would transform not only physics but also its allied disciplines. Extra dimensions might explain mysteries such as cosmic acceleration and might even be a prelude to reworking the entire notion of dimensionality—adding to a growing sense that space and time emerge from physical principles that play out in a spaceless, timeless realm.
“So while extra dimensions would be a terrific discovery,” says physicist Nima Arkani-Hamed of the Institute for Advanced Study in Princeton, N.J., “at a deeper level, conceptually they aren’t particularly fundamental.”
Whatever the charms of extra dimensions for physicists, we will never be able to visit them for ourselves. If they were open to the particles that make up our bodies, the added liberty of motion would destabilize complex structures, including life. Alas, the frustration of tangled cords and the pain of dental work are necessary trade-offs to allow us to exist at all.
Extraterrestrial Intelligence
How will we respond to a signal from outer space?
By John Matson
Fifty years ago a young astronomer, indulging in a bit of interstellar voyeurism, turned a telescope on the neighbors to see what he could see. In April 1960 at the National Radio Astronomy Observatory in Green Bank, W.Va., Frank Drake, then 29, trained a 26-meter-wide radio telescope on two nearby stars to seek out transmissions from civilizations possibly in residence there. The search came up empty, but Drake’s Project Ozma began in earnest the ongoing search for extraterrestrial intelligence, or SETI.
Drake, who turned 80 in May, is still at it, directing the Carl Sagan Center for the Study of Life in the Universe at the nonprofit SETI Institute in Mountain View, Calif. Instead of just borrowing time on other astronomical instruments, researchers in the field now have purpose-built tools at their disposal, such as the fledgling Allen Telescope Array (ATA) in Hat Creek, Calif. But funding is scarce—growth of the ATA stalled at 42 dishes of a planned 350—and astronomers have not yet gathered enough data to make firm pronouncements about intelligent life in the universe.
“Although we have ‘been doing it’ for 50 years, we have not been on a telescope very much of that time,” says Jill Tarter, director of the Center for SETI Research at the SETI Institute. “What we can say is that every star system in the galaxy isn’t populated by a technology that’s broadcasting radio signals at this time.”
Theoretical astrophysicist Alan P. Boss of the Carnegie Institution for Science agrees. “The lack of a SETI signal to date simply means that civilizations that feel like broadcasting to us are not so common that the limited SETI searches would have found one,” Boss says. “There is still a lot of the galaxy that has not yet been searched.” One of the most extensive campaigns to date, Project Phoenix, surveyed nearby stars across a wide range of frequencies using some of the world’s largest radio telescopes. In nine years Phoenix sampled roughly 800 stars, less than one millionth of 1 percent of the stars in the Milky Way.
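The "millionth of 1 percent" figure is simple arithmetic; here is the check, assuming a commonly quoted Milky Way star count of about 200 billion (the true number is uncertain, with estimates ranging from roughly 100 billion to 400 billion).

```python
stars_sampled = 800        # stars surveyed by Project Phoenix over nine years
stars_in_galaxy = 2.0e11   # assumed Milky Way star count (illustrative)

fraction_of_percent = stars_sampled / stars_in_galaxy * 100
print(f"Phoenix covered roughly {fraction_of_percent:.0e} percent of the galaxy's stars")

# The text's claim: less than one millionth of 1 percent
assert fraction_of_percent < 1e-6
```

Even tripling or quartering the assumed star count leaves the sampled fraction comfortably below the one-millionth-of-1-percent threshold.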
Even for stars that have been scanned, the parameters for a possible signal are frustratingly numerous. Like those for terrestrial radio, they include frequency (what station does it broadcast on?), time (24/7 or midnight sign-off?), type of modulation (AM or FM?), and so on. “At the very least this search is nine-dimensional,” Tarter says, “and we could guess right about what to look for and build the right instrument for eight of those dimensions, but we could still miss it because we got one wrong.”
Arguments for SETI and for widespread life in general have been bolstered by the confirmation that planetary systems are common around other stars. Most of the 400-plus exoplanets known to date are scalding giants inhospitable to life as we know it. But in the next few years NASA’s Kepler space telescope, now surveying more than 100,000 stars for planets, should settle the question of how common Earth-like planets are.
Even on Earth-like worlds, however, technological, radio-broadcasting life may not be common. Many researchers hold out more hope for finding simpler life-forms, such as microbes or slime molds. Boss says that life of this kind should be widespread, but we will not have the technology to detect it for two decades at best.
But what if someone does pick up a signal from an intelligent civilization? The SETI community has protocols in place, such as alerting observatories around the world for verification, but the same cannot be said of the world’s governments. A United Nations–level framework does not yet exist to guide the contentious next steps—if we hear a shout from a potentially hostile neighbor, do we dare shout back?
It would not be an entirely new experience for Drake, who as a graduate student thought he had made a detection. “You feel a very special emotion if you think that has happened, because you realize everything is going to change,” he says, noting that we would soon be enriched with new knowledge of other worlds, species and cultures. “It’s an emotion you have to feel to understand, and I felt it.”
Nuclear Exchange
A local conflict could produce a global nightmare
By Philip Yam
The end of the cold war and ongoing arms-control efforts by the U.S., Russia and other countries have greatly reduced the threat of global nuclear annihilation. But rogue nations and continued tensions make a local exchange of nuclear firepower all too real.
A single detonation can cause horrible death in several ways. The Hiroshima blast—equal to about 15 kilotons of TNT—generated supersonic wind speeds that crushed concrete buildings near ground zero. Heat from the blast scorched to death anyone within one kilometer. People many kilometers away eventually succumbed to radiation poisoning and cancer.
Global effects, however, would not happen unless dozens of bombs exploded, as might occur in an exchange between Pakistan and India. In modeling the effects, scientists have assumed that those nations would unload their entire arsenals, so that about 100 Hiroshima-size bombs would go off [see “Local Nuclear War, Global Suffering,” by Alan Robock and Owen Brian Toon; Scientific American, January].
Aside from the 20 million people killed in the war itself, many outside the conflict would perish over time. That is because the blasts would loft five million metric tons of soot into the upper atmosphere. Driven by weather patterns, the particulates would encircle the globe in about a week; within two months they would blanket the planet. Darkened skies would rob plants of sunlight and disrupt the food chain for 10 years. The resulting famine could kill the one billion people who now survive on marginal food supplies.
The outcome is grim. But there is one bright spot: it is within humanity’s ability—and responsibility—to see that such a world-changing event never happens.
Creation of Life
Synthetic biology remakes organisms, but can it bring inanimate matter to life?
By David Biello
A scientist adds a few chemical compounds to a bubbling beaker and gives it a swirl. Subtle reactions occur, and, lo and behold, a new life-form assembles itself, ready to go forth and prosper. Such is the popular imagining of synthetic biology, or life created in the lab.
But researchers in this field are not as interested in animating the inanimate. In fact, scientists remain far from understanding the basic processes that could allow inert, undirected compounds to assemble into living, self-replicating cells. The famous Miller-Urey experiment of 1952, which produced amino acids from a simulated primordial atmosphere, was only a first step: no experiment since has carried inert chemistry all the way to life.
Rather, synthetic biology today is about modifying existing organisms. It can be seen as genetic engineering on steroids: instead of replacing one gene, synthetic biologists modify large chunks of genes or even entire genomes. The change in DNA can force organisms to churn out chemicals, fuels and even medicines. “What they’re doing is constructing from scratch the instruction set for life and adding that to something already alive, replacing the natural instruction set,” explains biological engineer Drew Endy of Stanford University. “It defines an alternative path forward for promulgating life on earth. You no longer need to descend directly from a parent.”
In that regard, some scientists do not see any reason to replicate an existing cell with a man-made one. “Making something as close as possible to an existing cell, you might as well use the existing cell,” argues geneticist and technology developer George M. Church of Harvard Medical School. And manipulating genomes has become so widespread that even high schoolers do it.
Synthetic biology, in fact, is all about bringing the principles of large-scale engineering to biology. Imagine a world where bamboo is programmed to grow into a chair, rather than roughly woven into that shape through mechanical or human industry, or where self-assembling solar panels (otherwise known as leaves) feed electricity to houses. Or trees that exude diesel fuel from their stems. Or biological systems that are reengineered to remove pollution or to thrive in a changing climate. Reprogrammed bacteria might even be able to invade our bodies to heal, acting as an army of living doctors inside us.
“In principle, everything that is manufactured could be manufactured with biology,” Church argues. It is already happening on a small scale: enzymes from high-temperature microbes used in laundry detergent have been reengineered to perform in cold water, thereby saving energy.
Synthetic biology “is going to fundamentally change the way we make everything for the next 100 years,” predicts David Rejeski, director of the science, technology and innovation program at the Woodrow Wilson International Center for Scholars in Washington, D.C. “We can engineer matter at a biologically relevant scale. That’s as big a change as the industrial revolution back in the 19th century.”
With great promise comes great risk, too—namely, in the form of modified organisms escaping the lab. Most such creations today are too ungainly to survive in the wild. For more sophisticated creations in the future, synthetic biologists expect that various safeguards would need to be instituted, such as strict monitoring or a kind of self-destruct sequence in the new genetic code. Because scientists can entirely remake organisms at the genetic level, they can insulate them from natural systems, Endy says: “We can make them fail fast.”
Nevertheless, some scientists are indeed attempting to re-create life. Carole Lartigue, Hamilton Smith and others at the J. Craig Venter Institute have made a bacterial genome from scratch and even turned one type of microbe into another. Researchers elsewhere have created synthetic organelles and even an entirely novel organelle, the so-called synthosome, to make enzymes for synthetic biology. Life from scratch may be imminent.
Such a feat does not mean scientists will understand how life arose in the first place, but it might provoke fears that humanity has achieved the undeserved power of deities. But the creation could also have a more humbling effect—by transforming our understanding of our fellow life-forms. “The benefits would be to remake our civilization in partnership with life at the molecular level to sustainably produce the materials, energy and feedstocks we need,” Endy says. “We will have a balance of partnership with the rest of life on the planet in a way that is very different from the way we now interact with nature.”
Room-Temperature Superconductors
They would transform the grid—if they can exist at all
By Michael Moyer
You can build a coal-fired power plant just about anywhere. Renewables, on the other hand, are finicky. The strongest winds blow across the high plains. The sun shines brightest on the desert. Transporting that energy into cities hundreds of kilometers away will be one of the great challenges of the switch to renewable energy.
The most advanced superconducting cable can move those megawatts thousands of kilometers with losses of only a few percent. Yet there is a catch: the cable must be kept in a bath of liquid nitrogen at 77 kelvins (or –196 degrees Celsius). This kind of deployment, in turn, requires pumps and refrigeration units every kilometer or so, greatly increasing the cost and complexity of superconducting cable projects.
Superconductors that work at ordinary temperatures and pressures would enable a truly global energy supply. The Saharan sun could power western Europe via superconducting cables strung across the floor of the Mediterranean Sea. Yet the trick to making a room-temperature superconductor is just as much of a mystery today as it was in 1986, when researchers discovered the first of the high-temperature superconductors; within a year that breakthrough yielded materials that work at the relatively high temperature of liquid nitrogen (previous substances needed to be chilled down to 23 kelvins or less).
Two years ago the discovery of an entirely new class of superconductor—one based on iron—raised hopes that theorists might be able to divine the mechanism at work in high-temperature superconductors [see “An Iron Key to High-Temperature Superconductivity?” by Graham P. Collins; Scientific American, August 2009]. With such insights in hand, perhaps a path toward room-temperature superconductors would come into view. But progress has remained slow. The winds of change don’t always blow on cue.
Machine Self-Awareness
What happens when robots start calling the shots?
By Larry Greenemeier
Artificial-intelligence (AI) researchers have no doubt that the development of highly intelligent computers and robots that can self-replicate, teach themselves and adapt to different conditions will change the world. Exactly when it will happen, how far it will go, and what we should do about it, however, are cause for debate.
Today’s intelligent machines are for the most part designed to perform specific tasks under known conditions. Tomorrow’s machines, though, could have more autonomy. “As the kinds of tasks that we want machines to perform become more complex, the more we need them to take care of themselves,” says Hod Lipson, a mechanical and computer engineer at Cornell University. The less we can foresee issues, Lipson points out, the more we will need machines to adapt and make decisions on their own. As machines get better at learning how to learn, he says, “I think that leads down the path to consciousness and self-awareness.”
Although neuroscientists debate the biological basis for consciousness, complexity seems to be a key part, suggesting that computers with adaptable and advanced hardware and software might someday become self-aware. One way we will know that machines have attained that cognitive level is that they suddenly wage war on us, if films such as The Terminator are correct. More likely, experts think, we will see it coming.
That conceit derives from observations of humans. We are unique for having a level of intelligence that enables us to repeatedly “bootstrap” ourselves up to reach ever greater heights, says Selmer Bringsjord, a logician and philosopher at Rensselaer Polytechnic Institute. Whereas animals seem to be locked into an “eternally fixed cognitive prison,” he says, people have the ability to free themselves from their cognitive limitations.
Once a machine can understand its own existence and construction, it can design an improvement for itself. “That’s going to be a really slippery slope,” says Will Wright, creator of the Sims games and co-founder of Berkeley, Calif.–based robotics workshop the Stupid Fun Club. When machine self-awareness first occurs, it will be followed by self-improvement, which is a “critical measurement of when things get interesting,” he adds. Improvements would be made in subsequent generations, which, for machines, can pass in only a few hours.
In other words, Wright notes, self-awareness leads to self-replication leads to better machines made without humans involved. “Personally, I’ve always been more scared of this scenario than a lot of others” in regard to the fate of humanity, he says. “This could happen in our lifetime. And once we’re sharing the planet with some form of superintelligence, all bets are off.”
Not everyone is so pessimistic. After all, machines follow the logic of their programming, and if this programming is done properly, Bringsjord says, “the machine isn’t going to get some supernatural power.” One area of concern, he notes, would be the introduction of enhanced machine intelligence to a weapon or fighting machine behind the scenes, where no one can keep tabs on it. Other than that, “I would say we could control the future” by responsible uses of AI, Bringsjord says.
This emergence of more intelligent AI won’t come on “like an alien invasion of machines to replace us,” agrees futurist and prominent author Ray Kurzweil. Machines, he says, will follow a path that mirrors the evolution of humans. Ultimately, however, self-aware, self-improving machines will evolve beyond humans’ ability to control or even understand them, he adds.
The legal implications of machines that operate outside of humanity’s control are unclear, so “it’s probably a good idea to think about these things,” Lipson says. Ethical rules such as the late Isaac Asimov’s “three laws of robotics”—which, essentially, hold that a robot may not injure a human or allow a human to be injured—become difficult to obey once robots begin programming one another, removing human input. Asimov’s laws “assume that you program the robot,” Lipson says.
Others, however, wonder if people should even govern this new breed of AI. “Who says that evolution isn’t supposed to go this way?” Wright asks. “Should the dinosaurs have legislated that the mammals not grow bigger and take over more of the planet?” If control turns out to be impossible, let’s hope we can peaceably share the planet with our silicon-based companions.
Polar Meltdown
Move the beach chair back: rising seas will literally reshape the world
By David Biello
The U.S. is shrinking—physically. It has lost nearly 20 meters of beach from its East Coast during the 20th century. The oceans have risen by roughly 17 centimeters since 1900 through expansion (warmer water taking up more space) and the ongoing meltdown of polar ice.
That increase, however, is a small fraction compared with what’s to come. “Plan for one meter by the end of this century,” says glaciologist Robert Bindschadler, an emeritus scientist at NASA. “The heat in the ocean is killing the ice sheet.”
Some of the famous predictions—Florida under five meters of sea-level rise and a gaping bay where Bangladesh used to be—may be centuries away. But expect an ice-free Arctic and different coastal contours by 2100. By the reckoning of economist Nicholas Stern of the London School of Economics, 200 million people live within one meter of the present sea level, including eight of the 10 largest cities in the world and all the megacities of the developing world. “They’re going to have to move,” Bindschadler suggests.
In fact, unless greenhouse gas emissions are tamed, the seas will keep rising as the glaciers covering mountain ranges (roughly 1 percent of the planet’s ice), the Greenland ice sheet (9 percent) and the Antarctic ice sheet (90 percent) melt away. All told, they harbor enough water to eventually raise sea levels by at least 65 meters.
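The percentages and the 65-meter total can be cross-checked against rough sea-level-equivalent figures. The values below are approximate textbook numbers assumed here for illustration, not measurements from the article.

```python
# Approximate sea-level rise (meters) if each ice reservoir melted entirely
# (rounded, commonly cited estimates; assumed for this back-of-envelope check).
sea_level_equiv_m = {
    "mountain glaciers and ice caps": 0.5,
    "Greenland ice sheet": 7.0,
    "Antarctic ice sheet": 58.0,
}

total = sum(sea_level_equiv_m.values())
print(f"total potential rise: {total:.1f} m")  # at least 65 m, as the text says

for name, rise in sea_level_equiv_m.items():
    print(f"{name}: {rise:5.1f} m ({rise / total:.0%} of the total)")
```

The shares come out near 1, 11 and 89 percent, consistent with the roughly 1-9-90 split of the planet's ice quoted above.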
It takes centuries to melt an entire ice sheet, but still, the ice is disappearing faster than scientists had expected even a few years ago. Even with gradual sea-level rise, the risk of catastrophic storm surges and the like creeps up.
The gravitational pull of ice on surrounding waters is a recently appreciated surprise, too: generally speaking, if Greenland ice melts, “most of the sea-level rise occurs in the Southern Hemisphere,” and vice versa for Antarctic ice, says physicist W. Richard Peltier of the University of Toronto. “West Antarctica is the region we believe is most susceptible to destabilization by ongoing global warming.”
Even if greenhouse gas emissions decline, the polar meltdowns will be difficult to avoid because ice sheets lag the overall climate and, once melted, have a hard time re-forming. Just how humans will adapt to a more watery world is still not known. Of today’s trend, Bindschadler notes, “We’re not going to avoid this one.”
Pacific Earthquake
Will the overdue Big One tear California asunder?
By Katherine Harmon
Los Angeles might not end up as an island when the Big One rocks California, but any sizable seismic event on the San Andreas fault will send L.A. several meters closer to San Francisco. Scientists and the public have long expected a major quake to strike the West Coast; the U.S. Geological Survey estimates that California has a 99 percent chance before 2038 of experiencing at least a magnitude 6.7 quake—the same size as the 1994 Northridge earthquake.
But it could easily be bigger. Much bigger. If most of the San Andreas were to rupture in one event, an earthquake could reach a magnitude 8.2, says Lucy Jones, chief scientist of the Multi-Hazards Demonstration Project at the USGS in southern California.
The San Andreas fault runs about 1,300 kilometers from southern California up past the Bay Area. It forms the boundary between the North American plate, moving southeasterly, and the Pacific plate, heading to the northwest. From geologic records, scientists think that the fault usually ruptures about once every 150 years. The last big movement, however, was about 300 years ago.
A magnitude 7.8 earthquake (which a 2008 USGS and California Geological Survey report calls a “plausible event”) would shake some 10 million southern Californians, killing about 1,800 and injuring 50,000. A rumbler this size, which the USGS modeled in its ShakeOut Earthquake Scenario outreach project, would mean a fault movement of about 13 meters. Such a slip would sever roads, pipelines, railways and communications cables that cross the fault and trigger landslides. Aftershocks—some as powerful as magnitude 7.2—would rattle the region for weeks. The quake would cause some $200 billion in damage, and long-term infrastructure and business disruption would cost billions more, Jones notes.
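The connection between fault slip and magnitude can be sketched with the standard moment-magnitude relation. The rupture dimensions below are illustrative assumptions only, chosen to roughly match a ~300-kilometer rupture of the southern San Andreas with a few meters of average slip (peak slip can be far larger, as the 13-meter figure above shows); they are not the ShakeOut scenario's exact parameters.

```python
import math

# Seismic moment: M0 = rigidity x rupture area x average slip (in newton-meters).
# Moment magnitude: Mw = (2/3) * log10(M0) - 6.06.

def moment_magnitude(length_m, width_m, avg_slip_m, rigidity_pa=3.0e10):
    """Mw for a rectangular rupture; rigidity defaults to a typical crustal value."""
    m0 = rigidity_pa * length_m * width_m * avg_slip_m  # seismic moment, N*m
    return (2.0 / 3.0) * math.log10(m0) - 6.06

# Assumed rupture: 300 km long, 15 km deep, 4 m average slip.
mw = moment_magnitude(length_m=300e3, width_m=15e3, avg_slip_m=4.0)
print(f"Mw ~ {mw:.1f}")  # lands near 7.8 for these assumed dimensions
```

Because magnitude grows with the logarithm of the moment, doubling the slip (or the rupture length) raises Mw by only about 0.2, which is why a full-length San Andreas rupture tops out near 8.2 rather than far beyond it.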
But the San Andreas isn’t the only fault likely to slip, and seismic activity along one fault—even one thousands of kilometers away—can set off others that are getting ready to go. An offshore magnitude 6.5 quake that shook northern California in January occurred on the southern edge of the Cascadia subduction zone, which runs just offshore of the Pacific Northwest. This plate boundary could unleash at least a magnitude 9.0—the size of the 2004 Sumatra quake that spawned a devastating tsunami. Geologic records show evidence of an earthquake in 1700 that sent a tsunami all the way to Japan, and a similarly sized quake has about a one-in-10 chance of occurring in the next few decades.
Predicting earthquakes is a little like trying to guess the week’s weather from knowing the climate, says geophysicist Robert Yeats of Oregon State University. Realizing that a quake will probably occur sometime soon, he adds, “doesn’t affect your holiday plans, but it’s going to affect your building codes.” The bigger buildings can be among the safest: some of the state’s skyscrapers have been built to withstand magnitudes upward of 7.8. And just because a big earthquake appears overdue, the next seismic shift might not lead to the worst-case scenario. Scientists are still learning about the frequency of big quakes (greater than magnitude 6.0) in the geologic record, and some newer evidence suggests that smaller earthquakes might be more the norm on the San Andreas.
When the Big One does come, it might not prove as devastating as long feared thanks to modern, savvy construction and public readiness campaigns. Much greater havoc can come from even moderate earthquakes in poorer, less prepared areas of the world. The January quake in Haiti, for instance, killed nearly a quarter of a million people—a sobering example of how a sudden slip of a fault can quickly crumble cities that have not had the luxury of careful planning.
Fusion Energy
Fusion Energy
It would solve environmental headaches, but it remains hard to achieve
By Michael Moyer
According to the old quip, a practical fusion reactor will always be about 20 years away. Nowadays that feels a bit optimistic. The world’s largest plasma fusion research project, the ITER reactor in southern France, won’t begin fusion experiments until 2026 at the earliest. Engineers will need to run tests on ITER for at least a decade before they will be ready to design the follow-up to that project—an experimental prototype that could extract usable energy from the fusing plasma trapped in a magnetic bottle. Yet another generation would pass before scientists could begin to build reactors that send energy to the grid.
And meanwhile there is no end to the world’s energy appetite. “The need for energy is so great and growing so rapidly around the world that there has to be a new approach,” says Edward Moses, director of the National Ignition Facility, a major fusion test facility in Livermore, Calif., that focuses laser beams onto a small fuel pellet to induce fusion.
In theory, fusion-based power plants would provide the answer. They would be fueled by a form of heavy hydrogen found in ordinary seawater and would produce no harmful emissions—no sooty pollutants, no nuclear waste and no greenhouse gases. They would harness the forces at work inside the sun to power the planet.
In practice, however, fusion will probably not change the world as physicists have imagined. The technology needed to trigger and control self-sustaining fusion has proved elusive. Moreover, the first reactors will almost certainly be too expensive to deploy widely this century.
Moses and others believe that the fastest route to harnessing fusion energy is a hybrid approach that uses fusion reactions to accelerate fission reactions in nuclear waste. In this scheme, called LIFE (for laser inertial fusion energy), powerful lasers focus their energy onto a small fuel pellet, igniting brief bursts of fusion. The neutrons from these fusion reactions travel outward and strike a shell of fissionable material: either the spent fuel from an ordinary nuclear power plant or depleted uranium, a material commonly used in ordnance. When the neutrons strike the radioactive waste, they trigger additional decays that generate heat for energy production and accelerate the breakdown of the material into stable products (thus helping to solve the nuclear waste disposal problem as well). Moses claims he could build an engineering prototype of the LIFE design by 2020 and connect a working power plant to the grid by 2030.
In other words, a practical fusion reactor is only about 20 years away.
Asteroid Collision
An extinction-level event is unlikely, but “airbursts” could flatten a city
By Robin Lloyd
On June 13 an asteroid called 2007 XB10, with a diameter of 1.1 kilometers and the potential to cause major global damage, will zip past Earth. Fortunately, as near-Earth objects go, it will pass at a comfortable distance: 10.6 million kilometers, or 27.6 times the Earth-moon distance. Indeed, no giant asteroids appear poised to rewrite history anytime soon. The bad news is that sometime in the next 200 years we can expect a small space rock to burst in the atmosphere with enough force to devastate a small city.
A near-Earth object (NEO) is an asteroid or comet that comes within 195 million kilometers of the planet. In 2009 NASA tallied 90 NEOs approaching within five lunar distances and 21 within one lunar distance or less. NEO hunters typically detect them as specks on images, and such momentary glimpses can make their orbits hard to calculate, so researchers can only lay odds of an impact as they await more data. NASA has spotted 940 NEOs one kilometer or more in diameter (about 85 percent of the estimated total of that size), and none is on a collision course with Earth. (The NEO that wiped out the dinosaurs was about 10 kilometers wide.)
The bigger threat now, however, involves smaller rocks, according to a National Research Council (NRC) report released earlier this year. These asteroids and comets (100,000 or so of them span 140 meters or more) are too small to bring about Armageddon, but even those at the lowest end of that range could deliver an impact energy of 300 megatons of TNT. And such impacts occur far more frequently on average (every 30,000 years or so for a 140-meter object) than one-kilometer impacts do (every 700,000 years).
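A back-of-envelope check of that 300-megaton figure: the impact energy is just the kinetic energy, (1/2)mv², of a rock treated as a sphere. The density (3,000 kg/m³, typical of stony asteroids) and encounter speed (25 km/s) below are illustrative assumptions of ours, not values from the NRC report:

```python
import math

MEGATON_TNT_JOULES = 4.184e15  # energy released by one megaton of TNT

def impact_energy_megatons(diameter_m, density_kg_m3=3000.0, speed_m_s=25000.0):
    """Kinetic energy of a spherical impactor, in megatons of TNT.
    The density and speed defaults are rough, illustrative assumptions."""
    radius = diameter_m / 2.0
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius ** 3
    return 0.5 * mass * speed_m_s ** 2 / MEGATON_TNT_JOULES

# A 140-meter object comes out at roughly 300 megatons under these
# assumptions, consistent with the figure cited for the low end of the range.
print(f"140 m object: about {impact_energy_megatons(140):.0f} megatons")
```

Because mass scales with the cube of the diameter, a one-kilometer object under the same assumptions carries several hundred times the energy of a 140-meter one.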
Given the possible danger, Congress mandated in 2005 that NASA find 90 percent of such NEOs by 2020. But budget shortfalls will make it impossible for scientists to meet that deadline, the NRC has found. NEO hunters get about $4 million in federal funding annually.
In any case, in terms of risk, researchers are thinking even smaller, because the most likely NEO scenario is a 30- to 50-meter-diameter “city killer,” a bolide that would detonate in the atmosphere. The most famous such devastating “airburst” occurred in 1908 over Tunguska, Siberia, flattening an area the size of London. The famous Meteor Crater near Winslow, Ariz., was carved by a meteorite in this size category.
At this point, some of the best information on airbursts is held by the U.S. Department of Defense, the Department of Energy and Comprehensive Test Ban Treaty monitoring stations. The NRC report, which calls for more sharing of these closely held data, estimates that 25-meter airbursts occur every 200 years. Most explode over the oceans, where the direct risk to life is low but where a tsunami could be triggered. Panel member Mark Boslough of Sandia National Laboratories says a four-meter object blazes in about once a year.
And what would we do if we spotted a NEO with our name on it? Realistic mitigation plans are in their infancy, says NRC panelist Michael F. A’Hearn of the University of Maryland. For moderately big objects, and with years or decades of notice, kinetic impactors make the most sense: the idea is to slam one or more large spacecraft into the object to alter its path. For NEOs exceeding 500 meters across, when the warning time is only months to years, nuclear detonation is the only option.
For city-destroying sizes and short lead times, the choices are limited, perhaps restricted only to evacuation, which we would be lucky to pull off effectively at this point, A’Hearn thinks. All the more reason, it seems, to be thankful that nothing’s headed our way—as far as we know.
Deadly Pandemic
Notwithstanding the tameness of H1N1, influenza viruses could still wipe out millions and wreak economic havoc
By Katherine Harmon
The H1N1 virus has packed less of a pandemic punch than initially feared, but it has uncovered some hard truths about our readiness, or lack thereof, for coping with a more deadly pathogen. Despite vast medical advances since the 1918 influenza pandemic, a novel, highly contagious illness could still devastate populations and upend social, economic, political and legal structures the world over.
A new virulent strain—of flu or any other virus—could kill off millions, even those who appear to be in their prime, says Lawrence O. Gostin, a professor of global health law at Georgetown University. In addition, many nations would likely close borders, an action that would be accompanied by discrimination against individuals and recrimination among governments. International trade and commerce would drop, which would have “enormous financial implications,” says Gostin, who estimates a 3 to 5 percent drop in global GDP (amounting to a loss of $1.8 trillion to $3 trillion). This cycle of instability and contagion could last “on the order of years,” he notes, as subsequent waves of an illness arrive with changing seasons.
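Gostin’s dollar range is consistent with his percentages if world GDP is taken to be roughly $60 trillion, approximately its value in 2010 (the GDP figure is our assumption for illustration, not one given by Gostin):

```python
WORLD_GDP_TRILLIONS = 60.0  # rough 2010 world GDP; assumed for illustration

low = 0.03 * WORLD_GDP_TRILLIONS   # a 3 percent drop
high = 0.05 * WORLD_GDP_TRILLIONS  # a 5 percent drop
print(f"A 3 to 5 percent drop means losing ${low:.1f} to ${high:.1f} trillion")
```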
In situations where a viral threat is just becoming apparent, policy makers and others would have to make tough decisions with imperfect and incomplete information. Basic human rights could face challenges as governments tried to subdue the spread.
If the contagious affliction came from a human malefactor, the social upheaval would most likely be worse. “The burden of morbidity and mortality would be lower,” Gostin remarks, but “when it’s man-made, people fear the worst. It’s very socially and economically disruptive—more so than a natural disaster.”
*Erratum (9/2/10): The Meteor, aka Barringer, Crater is located near Winslow, Ariz.