Roughly 12,900 years ago a global-cooling anomaly contributed to the extinction of 35 mammal species, including the mammoth. In some areas, average temperatures may have dropped as much as 15 degrees Celsius (27 degrees Fahrenheit). New evidence, in the form of diamonds several nanometers wide, supports a theory proposed last year that a comet collision or a similar explosive event threw up debris and caused the cooling.
Nanodiamonds occur only in sediment exposed to high temperatures and pressures, such as those produced by a cometary impact. Researchers uncovered them at six sites in North America: Murray Springs, Ariz.; Bull Creek, Okla.; Gainey, Mich.; Topper, S.C.; Lake Hind, Manitoba; and Chobot, Alberta. Preliminary searches in Europe, Asia and South America have also turned up similar finds in sediments of the same age, suggesting that the event had global reach, although it was certainly not as large as the one that wiped out the dinosaurs 65 million years ago. The study landed in the January 2 Science. David Biello
Interplanetary travel probably means that astronauts will need to carry ecosystems along to supply food and oxygen. Past studies of potential space food have considered poultry, fish and even snails, newts and sea urchin larvae, but they all have downsides. Chickens, for instance, require a lot of food and space, and aquatic life is sensitive to water conditions that may be hard to maintain.
Scientists at Beihang University in Beijing suggest recruiting silkworms, which are already eaten in parts of China. These insects breed quickly, require little space, food or water, and produce only minute amounts of excrement, which could serve as fertilizer for onboard plants. Silkworm pupae, which are mostly edible protein, contain twice the essential amino acids of pork and four times those of eggs and milk. The scientists, whose conclusions were published online December 24, 2008, by Advances in Space Research, also point out that chemical processes could even make the silk digestible. Move over, Tang. Charles Q. Choi
The robberies were a fitting end to a terrible year. On the Monday after Christmas, thieves in New York City held up five different banks in just over six hours, the near-final entries in the city's 444 bank robbery cases in 2008, a 54 percent increase over 2007. "It makes me think that the recession is making people go to extreme measures," one bystander told the New York Times, summing up the commonly held viewpoint that as the economy contracts, crime will swell to fill the void. And what a contraction we face: "People fear that we're headed for Armageddon," remarks David Kennedy, director of the Crime Prevention Center at the John Jay College of Criminal Justice.
But Kennedy and other researchers think that unemployment and financial desperation are not so inexorably linked to theft and murder. The factors that influence crime rates are far more varied and complex than any economic indicator.
Take, for example, the Great Depression. In the years after the stock market crash of 1929, crime plummeted as well. "People sitting in their houses don't make great targets for crime," says Bruce Weinberg, an economist at Ohio State University. "People going out spending cash and hanging out in big crowds do." That was especially true in the Roaring Twenties, a time that also suffered from Prohibition and its attendant crime syndicates.
American cities have gone through two other major crime epidemics in the last century: one in the late 1960s into the early 1970s and another at the tail end of the 1980s into the early 1990s, when the nationwide murder rate hit an all-time high. The first happened at a boom time; the second struck during a recession. But in both cases, the primary underlying cause was a spike in the drug trade: heroin in the 1970s, crack cocaine in the 1990s.
Even though these "outside shocks to the system," as Kennedy calls them, play a strong role in determining crime rates, recent research has teased out some links between the overall economy and crime. When Weinberg and his collaborators Eric D. Gould of Hebrew University and David Mustard of the University of Georgia examined young males with no more than a high school education (the demographic group that commits the most crime), they found that average wages and unemployment rates were directly linked to the incidence of property crimes. (Here property crimes refer to felonies such as burglary, auto theft and robbery, the last of which is ordinarily classified as a violent crime because of the implied use or threat of force.) Hard times also lead to more domestic abuse.
Murder rates have never linked very well to the unemployment rate or other standard economic indicators, but Rick Rosenfeld, a criminologist at the University of Missouri–St. Louis, thinks that is because those statistics do not tell the full story. "When we're trying to understand criminal behavior, we're trying to understand the behavior of people," he says, "so it's preferable to use subjective indicators as well as objective indicators." He and Robert Fornango of Arizona State University traced murder rates against the Consumer Sentiment Index, a survey of how people view their current financial situation and how hopeful they are about the future. They found that lower index scores strongly correlate with higher murder rates.
"I don't think that newly unemployed people become criminals," Rosenfeld notes, but "marginal consumers (the shopper who goes to discount stores), many of those consumers turn to street markets during an economic downturn. These are often markets for used goods, but some are stolen goods. As demand increases, incentives for criminals to commit crimes expand." And on the black market, any dispute between buyer and seller that would ordinarily be handled by the Better Business Bureau might now be settled with violence. The Consumer Sentiment Index reached a 28-year low last November.
Could this signal a coming jump in the murder rate? Not necessarily, criminologists say. Another outside force has appeared, this one in the form of modern crime deterrent tactics. Many police departments now maintain frequently updated maps of high-crime areas to more effectively deploy foot patrols, "putting cops on the dots," as William Bratton, chief of the Los Angeles Police Department, who pioneered the technique in New York City in the early 1990s, likes to say. Police departments have also begun to interact directly with known criminal groups, placing them on notice that violence by any member of the group will result in a harsh crackdown on all. The technique leads to more self-policing within the group and resulted in the "Boston Miracle" of the 1990s. It has since been expanded to hundreds of municipalities around the country.
In addition, a recent study showed that a direct economic stimulus can act as a salve. Communities in the 1930s that spent more on public works programs had lower crime rates than other communities, an auspicious portent for the current federal government's stimulus package. "Can we prevent this stuff entirely?" Kennedy asks. "No, we can't. But medicine used to be a thing where, when an epidemic swept through, you put on a mask and hoped. Now you get a flu shot, and it helps."
Starting this month, roughly one quarter of the world's population will lose sleep and gain sunlight as they set their clocks ahead for daylight saving. People may think that with the time shift, they are conserving electricity otherwise spent on lighting. But recent studies have cast doubt on the energy argument; some research has even found that the shift ultimately leads to greater power use.
Benjamin Franklin is credited with conceiving the idea of daylight saving in 1784 to conserve candles, but the U.S. did not institute it until World War I as a way to preserve resources for the war effort. The first comprehensive study of its effectiveness occurred during the oil crisis of the 1970s, when the U.S. Department of Transportation found that daylight saving trimmed national electricity usage by roughly 1 percent compared with standard time.
Scant research has been done since, during which time U.S. electricity usage patterns have changed as air conditioning and household electronics have become more pervasive, observes economist Matthew Kotchen of the University of California, Santa Barbara. But lately, changes to daylight saving policies on state and federal levels have presented investigators with new chances to explore the before-and-after impacts of the clock shift.
In 2006 Indiana instituted daylight saving statewide for the first time. (Before then, daylight time confusingly was in effect in just a handful of Indiana's counties.) Examining electricity usage and billing since the statewide change, Kotchen and his colleague Laura Grant unexpectedly found that daylight time led to a 1 percent overall rise in residential electricity use, costing the state an extra $9 million. Although daylight time reduces demand for household lighting, the researchers suggest that it increased demand for cooling on summer evenings and heating in early spring and late fall mornings. They hope to publish their conclusions this year in the Quarterly Journal of Economics.
Investigators got another opportunity in 2007, when daylight time nationwide began three weeks earlier, on the second Sunday in March, and ended one week later in the fall. California Energy Commission resource economist Adrienne Kandel and her colleagues discovered that extending daylight time had little to no effect on energy use in the state. The observed drop in energy use of 0.2 percent fell within the statistical margin of error of 1.5 percent.
Not all recent analyses suggest that daylight saving is counterproductive. Instead of studying the impact daylight saving changes had on just one state, senior analyst Jeff Dowd and his colleagues at the U.S. Department of Energy investigated what effect it might have on national energy consumption, looking at 67 electric utilities across the country.
In their October 2008 report to Congress, they conclude that the four-week extension of daylight time saved about 0.5 percent of the nation's electricity per day, or 1.3 trillion watt-hours in total. That amount could power 100,000 households for a year. The study looked not just at residential electricity use but at commercial use as well, Dowd says.
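As a quick back-of-the-envelope check (a sketch of our own, not a calculation from the DOE report), the two quoted figures can be compared directly:

```python
# Sanity check on the DOE figures quoted above: 1.3 trillion Wh
# of total savings, said to power 100,000 households for a year.
total_savings_wh = 1.3e12      # watt-hours saved by the extension
households = 100_000

kwh_per_household = total_savings_wh / households / 1_000
print(kwh_per_household)       # 13000.0 kWh per household per year
```

About 13,000 kilowatt-hours a year is in line with what a typical U.S. home consumed at the time (roughly 11,000 kWh), so the report's two figures hang together.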
The disparities between regional and national results could reflect climate differences between states. "The effect we saw could be even worse in Florida, where air conditioning is used heavily," Kotchen suggests.
If time shifting turns out to be an energy waster, should the sun set on daylight saving? Certainly that would please farmers, who have long opposed it for how it disrupts their schedules. The chances, though, appear nil. "I'm skeptical we could change daylight saving time on a national level, because we've become accustomed to it," Kotchen says, adding that "we might want to consider it for other costs or benefits it could have." Retailers, especially those involved with sports and recreation, have historically argued hardest for extending daylight time. Representatives of the golf industry, for instance, told Congress in 1986 that an extra month of daylight saving was worth up to $400 million annually in extra sales and fees.
So instead of worrying about cranking up the air conditioner at home, think about what more you can do outdoors when the sun is out. Softball, anyone?
Deep in the deluge of knowledge that poured forth from science in the 20th century lay ironclad limits on what we can know. Werner Heisenberg discovered that improved precision regarding, say, an object's position inevitably degraded the certainty of its momentum. Kurt Gödel showed that within any formal mathematical system advanced enough to be useful, it is impossible to use the system to prove every true statement that it contains. And Alan Turing demonstrated that one cannot, in general, determine whether a computer algorithm is going to halt.
David H. Wolpert, a physics-trained computer scientist at the NASA Ames Research Center, has chimed in with his version of a knowledge limit. Because of it, he concludes, the universe lies beyond the grasp of any intellect, no matter how powerful, that could exist within the universe. Specifically, during the past two years, he has been refining a proof that no matter what laws of physics govern a universe, there are inevitably facts about the universe that its inhabitants cannot learn by experiment or predict with a computation. Philippe M. Binder, a physicist at the University of Hawaii at Hilo, suggests that the theory implies researchers seeking unified laws cannot hope for anything better than a "theory of almost everything."
Wolpert's work is an effort to create a formal, rigorous description of processes such as measuring a quantity, observing a phenomenon, predicting a system's future state or remembering past information: a description general enough to be independent of the laws of physics. He observes that all those processes share a common basic structure: something must be configured (whether it be an experimental apparatus or a computer to run a simulation); a question about the universe must be specified; and an answer (right or wrong) must be supplied. He models that general structure by defining a class of mathematical entities that he calls inference devices.
The inference devices act on a set of possible universes. For instance, our universe, meaning the entire world line of our universe over all time and space, could be a member of the set of all possible such universes permitted by the same rules that govern ours. Nothing needs to be specified about those rules in Wolpert's analysis. All that matters is that the various possible inference devices supply answers to questions in each universe. In a universe similar to ours, an inference device may involve a set of digital scales that you will stand on at noon tomorrow and the question relate to your mass at that time. People may also be inference devices or parts of one.
Wolpert proves that in any such system of universes, quantities exist that cannot be ascertained by any inference device inside the system. Thus, the "demon" hypothesized by Pierre-Simon Laplace in the early 1800s (give the demon the exact positions and velocities of every particle in the universe, and it will compute the future state of the universe) is stymied if the demon must be a part of the universe.
Researchers have proved results about the incomputability of specific physical systems before. Wolpert points out that his result is far more general, in that it makes virtually no assumptions about the laws of physics and requires no limits on the computational power of the inference device other than that it must exist within the universe in question. In addition, the result applies not only to predictions of a physical system's future state but also to observations of a present state and to examinations of a record of a past state.
The theorem's proof, similar to the results of Gödel's incompleteness theorem and Turing's halting problem, relies on a variant of the liar's paradox. Ask Laplace's demon to predict the following yes/no fact about the future state of the universe: "Will the universe not be one in which your answer to this question is yes?" For the demon, seeking a true yes/no answer is like trying to determine the truth of "This statement is false." Knowing the exact current state of the entire universe, knowing all the laws governing the universe and having unlimited computing power is no help to the demon in saying truthfully what its answer will be.
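The self-defeating structure of that question can be sketched in a few lines of code (a toy illustration only, not Wolpert's formalism; the predictor strategies below are invented for the example):

```python
def ask(predictor):
    """Pose the self-referential question to a yes/no predictor.

    Question: "Will the universe NOT be one in which your answer
    to this question is yes?"  By construction, the actual outcome
    is the negation of whatever the predictor answers, so the
    prediction can never be correct.
    """
    answer = predictor()       # the demon's yes/no prediction
    outcome = not answer       # the fact the question asks about
    return answer == outcome   # was the demon right? (never)

# No strategy escapes the trap:
print(ask(lambda: True))    # False
print(ask(lambda: False))   # False
```

However clever the predictor, `answer == not answer` has no solution, which is the heart of the demon's predicament.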
In a sense, however, the existence of such a paradox is not exactly earth-shattering. As Scott Aaronson, a computer scientist at the Massachusetts Institute of Technology, puts it: "That your predictions about the universe are fundamentally constrained by you yourself being part of the universe you're predicting, always seemed pretty obvious to me and I doubt Laplace himself would say otherwise if we could ask him." Aaronson does allow, though, that it is "often a useful exercise to spell out all the assumptions behind an idea, recast everything in formal notation and think through the implications in detail," as Wolpert has done. After all, the devil, or demon, is in the details.
Cindy Hale, an ecologist at the University of Minnesota, answers e-mails from a lot of distraught citizens of the Great Lakes region. The residents, it seems, have introduced certain earthworms into their gardens, she says, "and now they've got that 'nothing grows here syndrome.'"
Long considered a gardener's friend, earthworms can loosen and aerate the soil. But the story is different in the Great Lakes region. The last Ice Age wiped out native earthworms 10,000 years ago, and ever since the Northeast forest has evolved without the crawlers, Hale says. But now earthworms are back, a product of fishers who toss their worms into the forest, of off-road vehicles and lumber trucks that carry them in the treads of their tires, and of people who bring in mulch and any worms that might be in it from other areas.
As invasive creatures, the earthworms wreak the most havoc with hardwood forests, such as those consisting of maple, basswood, red oak, poplar or birch species. (Conifer-dominated forests seem to experience less dramatic impacts.) According to Peter Groffman, a microbial ecologist at the Cary Institute of Ecosystem Studies in Millbrook, N.Y., northern hardwood forests have relied on thick layers of leaf litter that serve as a rooting medium. The earthworms, Groffman reports, "come into an area with a thick organic mat, and two to five years later that layer is gone."
As a result, some northern hardwood forests that once had a lush understory now have but a single species of native herb and virtually no tree seedlings. Evidently, earthworms change the forest soils from a fungal to a bacterial-dominated system, which speeds up the conversion of leaf detritus to mineral compounds and thereby potentially robs plants of organic nutrients.
Not all foreign earthworms are destructive. Of the 5,000 species around the globe, only about 16 of the European and Asian varieties do the real damage. One of them is the night crawler (Lumbricus terrestris), a popular fish bait that can measure 15 to 20 centimeters (six to eight inches) long. Another is the Alabama jumper (Amynthas agrestis), also known as the snake worm or crazy worm, an aggressive Asian worm that lives at high densities and can literally jump off the ground or out of a bait can, according to fishing lore. A voracious eater, it does the most harm to the soil.
The presence of the earthworms affects more than just the plants. John Maerz, a wildlife ecologist at the University of Georgia, says that adult salamanders that consume these earthworms are more successful at reproduction but that earthworms are too big for juvenile salamanders to eat, which leads to a net loss in salamander numbers. The amphibians themselves, Maerz notes, are an important prey species for "snakes, small mammals, turkeys and a host of forest creatures."
Once established, earthworms are impossible to remove from the environment, Hale says. Concerned about their impact, the U.S. Department of Agriculture recently awarded Hale and her fellow biologists a three-year, $397,500 grant to study the ecology of the earthworm invasions in cold-temperate hardwood forests. The scientists also hope to answer questions about nutrient and carbon cycling, including whether the earthworm activity helps to sequester carbon in the soil or releases it back into the atmosphere. "The jury is still out on this issue," Hale explains.
Researchers agree that the best hope is to contain the worms, which spread only five to 10 meters a year on their own. That may mean new regulations governing off-road vehicles, bait disposal by anglers, or equipment hygiene and use in the logging industry. Hale would like to control community mulch piles as well: "I remember when I first heard about them, I thought, what a great idea, but think about it. You take leaves, weed seeds and earthworms from all over, bring them in, mix them up and then disperse them back out. That's a horrible idea."
Silicon has transformed the digital world, but researchers are still eager to find substances that will make integrated circuits smaller, faster and cheaper. High on the list is graphene: planar sheets of honeycomb carbon rings just one atom thick. This nanomaterial sports a range of properties, including ultrastrength, transparency (because of its thinness) and blisteringly fast electron conductivity, that make it promising for flexible displays and superspeedy electronics. Isolated only four years ago, graphene already appears in prototype transistors, memories and other devices.
But to go from lab benches to store shelves, engineers need to devise methods to make industrial quantities of large, uniform sheets of pure, single-ply graphene. Researchers are pursuing several processing routes, but which approach will succeed remains unclear. "We've seen claims by groups that say that they can coat whole silicon wafers with monolayer sheets of graphene cheaply," reports James M. Tour, a chemist at Rice University. "But so far no one has publicly demonstrated it."
Making small amounts is surprisingly easy, states graphene's discoverer, Andre K. Geim of the University of Manchester in England. In fact, "you produce a bit of graphene every time you drag a pencil point across paper," he notes; the pencil's graphite is actually a stack of graphene layers. The initial graphene-making methods worked similarly to pencil writing: researchers would abrade some graphite and then search the debris with a microscope for suitable samples or separate individual flakes with sticky tape.
Although most scientists consider such mechanical "exfoliation" techniques to be suited only for making tiny amounts, Geim does not necessarily agree: "Recently the procedure was scaled up to produce as much graphene as you want." He uses ultrasound to break up graphite into individual layers that are dispersed in a liquid. The suspension can then be dried out on a surface, which leaves a film of overlapping pieces of graphene crystals. Whether these sheets of multiple crystals can work well enough for many applications is uncertain, however, because edge boundaries of individual flakes tend to impede the rapid flow of electrons.
Bigger samples might come from chemical exfoliation. Last May collaborators James P. Hamilton of the University of Wisconsin–Platteville and Jonathan N. Coleman of Trinity College Dublin in Ireland showed that graphene dissolves in certain organic solvents. "You place graphite in a bucket, dump in organic liquids that dissolve it," Hamilton says, "then you remove the solvent and out comes this gray stuff that's pure graphene." Hamilton's start-up company, Graphene Solutions, hopes to convert that graphene into uniform, single-crystal sheets and, ultimately, to commercialize the process.
Other chemical exfoliation techniques are possible. Rod Ruoff, now at the University of Texas at Austin, and his former colleagues at Northwestern University have shown that adding acid to graphite in water can yield graphite oxide that can be separated into individual pieces. Suspended in liquid, the flakes are then deposited onto a substrate to form a film. The addition of other chemicals or heat can drive off the oxygen groups, yielding graphene.
One such oxygen-removing agent is rocket fuel, scientists from Rutgers University found: specifically, vapors of hydrazine, a highly reactive and toxic compound. Last year Yang Yang and Richard B. Kaner of the University of California, Los Angeles, simplified the Rutgers approach by using liquid hydrazine. "We then deposit the pieces onto silicon wafers or other, more flexible substrates," Yang says. The results are single-layer films composed of many platelets. The pair are now trying to improve the quality of the sheets, as well as find a safer alternative to hydrazine.
Researchers at the Massachusetts Institute of Technology and elsewhere are looking to make graphene using chemical vapor deposition (CVD), an established process that could be readily integrated into microchip fabrication. In CVD, volatile chemicals react and deposit themselves on a substrate as a thin coating. The M.I.T. process employs a simple, tube-shaped furnace containing nickel substrates, electrical engineer Jing Kong says. "At one end, we flow in hydrocarbon gas, which decomposes in the heat," she explains. Carbon atoms then fall onto the nickel surface, which acts as a catalyst to help form the graphene films. The quality of the graphene, though, depends on the substrate: whether it consists of many nickel crystals or only one, Kong explains. Unfortunately, single-crystal nickel, the most desirable, is costly.
Graphene from CVD has led to one of the biggest achievements yet. A group led by Byung Hee Hong of Sungkyunkwan University in South Korea made high-quality films that the scientists stamped onto a clear, bendable polymer. The result was a transparent electrode. Improved versions could replace the more expensive transparent electrodes (typically made from indium tin oxide) used in displays.
Ultimately, the graphene-making game may see more than one winner. Trinity College's Coleman says that the solution-based exfoliation methods, which to date produce graphene up to several tens of microns wide, are probably best suited for "middle-size industrial quantities, whereas the Intels of the world will likely be more interested in growing huge areas of graphene using CVD-type processes," which so far can make samples up to a few square centimeters. But perhaps best of all, none of the approaches seem to face insurmountable hurdles. As Rice's Tour puts it: "I'll bet that the problems will be solved within a year or two."
Even in the teeming and varied world of bacteria, Wolbachia is something of a standout. Within its insect host, the bacterium acts as a gender-bending, egg-killing, DNA-hijacking parasite that is passed down from one generation to the next via the female to her eggs. Hosted by at least one fifth of all insect species, it is possibly the most prolific parasite on earth. But now Wolbachia itself is being hijacked, to help humans gain the upper hand in the long-running war against mosquito-borne diseases.
In particular, a team at the University of Queensland in Australia and Central China Normal University in Wuhan zeroed in on a Wolbachia strain that halves the life span of its natural, fruit-fly host. The scientists have successfully introduced it into an entirely new host: Aedes aegypti, the mosquito that spreads the virus that causes dengue fever, which produces severe, flulike symptoms and rash and, in its more dangerous hemorrhagic form, can be fatal in about 5 percent of cases.
Wolbachia's life-shortening effect does not appear to inconvenience A. aegypti's reproduction. In fact, it confers an advantage to infected females by killing the eggs of uninfected females fertilized by an infected male. But the bacterium could be disastrous for the dengue virus, which has a long incubation period: it takes up to two weeks to invade the mosquito, replicate, get into the mosquito's salivary glands and then spread to a new host, explains Scott O'Neill, an entomologist at the University of Queensland.
If infected with the life-shortening Wolbachia strain, the mosquito may not live long enough for its dengue passenger to incubate and move on. Given that newly hatched female mosquitoes usually wait two days before they have their first blood meal and potentially take the dengue virus onboard, the 21- to 27-day life span of a Wolbachia-harboring mosquito offers only a narrow time frame for dengue to incubate and spread.
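The arithmetic behind that narrow window, using the figures quoted above, is simple (a rough sketch; actual incubation times vary with temperature and virus strain):

```python
# Rough transmission-window estimate from the figures in the text:
# 2 days to the first blood meal, up to 14 days of dengue
# incubation, and a 21- to 27-day Wolbachia-shortened life span.
first_blood_meal = 2     # days after hatching
incubation = 14          # days for the virus to reach the saliva
life_spans = (21, 27)    # days, for the infected mosquito

windows = [span - first_blood_meal - incubation for span in life_spans]
print(windows)           # [5, 11] days left in which to transmit
```

At most about a week and a half, then, in which an infected mosquito could pass the virus on, versus a month or more for an uninfected one.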
Researchers have also found another surprising side effect. Infected mosquitoes attempted to bite human volunteers more frequently but could not draw any blood. On closer inspection, the team discovered that the mosquitoes' proboscises had become "bendy" and could not penetrate the skin.
It was an unexpected windfall, O'Neill remarks, as a mosquito that cannot bite cannot transmit dengue or any other disease. "We're talking about shortening life by 50 percent, but they're already dead if they can't stick their stylet into somebody's arm," he says.
The research has attracted considerable interest, particularly in far-north Queensland, which is in the grip of a major dengue outbreak. Current methods of dengue control focus on eliminating the mosquito's favorite breeding sites in containers of water, explains Scott Ritchie, a medical entomologist at Queensland Health, the state's health department, and the University of Queensland. But it is no easy task.
"It's very labor intensive, as guys have to go house to house and try to get rid of containers that are holding water, and in a lot of areas those containers are holding potable water that people need," Ritchie says. Although this tactic is reasonably successful in urban Australia, it would be far less practical, or safe, in the densely populated shantytowns of Brazil's Rio de Janeiro, for example. In which case, a biological control such as Wolbachia that spreads naturally starts to look pretty enticing.
Wolbachia appears even more attractive considering its potential application in controlling other insect-borne diseases, such as malaria and the tsetse fly's sleeping sickness. Filariasis might be an especially good target because the parasitic worms that cause the illness incubate "for a long period," says Ramakrishna U. Rao, a molecular parasitologist at the Washington University School of Medicine in St. Louis. Rao notes, however, that the success occurred in the laboratory, "so what happens if you actually introduce [Wolbachia-infected mosquitoes] into the field?"
O'Neill and his colleagues are setting up such field trials, bringing wild, uninfected mosquitoes into outdoor cages of infected individuals, to see if the Wolbachia strain will take over under natural conditions. (Thankfully this strain does not show the gender-altering effects of other Wolbachia varieties, such as the one that infects wood lice; otherwise it might reduce the fitness of the infected population.) Researchers hope the Wolbachia-harboring mosquitoes will gradually come to dominate and along the way get rid of the mosquitoes' other, less human-friendly passengers.
Climbers summiting Mount Everest have as little oxygen in their bloodstream as residents of coastal areas who are in cardiac arrest or who are even dead. Four physicians from University College London trekked up Everest and drew their own blood for analysis. They found that because of the altitude, they had about a quarter less oxygen in their blood than is normal for people at sea level. The analysis also confirmed other effects of being at high altitudes, such as the increase in hemoglobin to ferry as much oxygen as possible. Besides helping climbers, the findings, in the January 8 New England Journal of Medicine, could lead to better treatments for oxygen-deprived heart and lung patients on the ground. Jordan Lite
Galaxies and the giant black holes at their hubs fit together as if they were made for one another. Did the holes come first and guide the formation of their galaxies, did the galaxies come first and build up holes, or did some common factor sculpt both? At the American Astronomical Society meeting in January, Christopher Carilli of the National Radio Astronomy Observatory and his colleagues argued that the holes came first. They found that galaxies in the early universe were only 30 times as massive as their central black holes, whereas present-day galaxies outweigh theirs by a factor of 1,000. "Black holes came first and somehow (we don't know how) grew the galaxy around them," Carilli said. Other astronomers were skeptical, wondering whether the ancient galaxies seem undersized merely because of a statistical selection effect. Even if true, the study does not explain how a black hole can nurture a galaxy; if anything, it should tear it apart. George Musser
The presence of methane on Mars, first discovered a few years ago, has piqued the curiosity of researchers, who wonder if the gas results from geologic activity or, more intriguingly, from living organisms, as is largely the case on Earth. Though by no means settling the issue, new detections of methane at least point in the direction of further study.
Using ground-based telescopes, Michael J. Mumma of the NASA Goddard Space Flight Center and his colleagues monitored about 90 percent of the Red Planet's surface for three Martian years (equal to seven Earth years). They detected large methane belches during the summer of 2003 and located the areas of those emissions.
Mumma is careful not to overstate the significance of his study, published online January 15 by Science. Although the methane could have come from the activity of microbes living below the permafrost, an equally plausible explanation is that it came from reactions between minerals and water trapped in rocky layers underneath. The methane could also be a relic of past processes, somehow sequestered and then released. Still, by knowing that Mars's methane comes from discrete areas, scientists can look for new sources and target the regions for future lander missions.
Looking through a peephole can change the direction an object appears to move: a tilted rod going left to right seems to move downward at an angle when viewed through a hole (for a video clip, go to www.SciAm.com/mar2009/aperture). Dale Purves and his colleagues at Duke University think they know why. They asked volunteers to describe how they perceived the motion of moving lines seen through apertures. They also developed computer simulations of a virtual rod moving in three-dimensional space in which information regarding its direction was stripped out (via projection onto a two-dimensional surface). How the volunteers saw the movement nearly perfectly matched the motions generated by the flattened-out simulation, suggesting that images formed on our basically two-dimensional retinas do not convey aspects of three-dimensional motion. Hence, our perceptions of the directions of moving objects are mental constructs based on past experience. Scrutinize the analysis in the January 6 Proceedings of the National Academy of Sciences USA. Charles Q. Choi