Governments and aid organizations spend tens of billions of dollars a year trying to lift the people of developing nations out of poverty through better education, health care and other programs. Evaluating those efforts, however, has proved difficult. Retrospective studies of different populations could not control for differences in wealth and culture. Some economists became frustrated with their inability to answer specific policy questions, such as how to increase school attendance. Increasingly, however, economists are finding success with a method well honed in medicine: the randomized trial.

Using randomized prospective trials in economic development policy is not new. Since the 1960s, the U.S. has occasionally implemented them to answer important practical questions in health care, welfare and education policy. By randomly splitting people into two groups, one of which receives an experimental intervention, researchers can set up potentially simple, unbiased comparisons between two approaches. But these evaluations typically cost hundreds of thousands to millions of dollars, largely putting them out of reach of academic researchers, says development economist Abhijit Banerjee of the Massachusetts Institute of Technology.

The emergence of cheap, skilled labor in India and other countries during the 1990s changed that, Banerjee says, because these workers could collect the data inexpensively. At the same time, nongovernmental organizations (NGOs) were proliferating and started looking for ways to evaluate their antipoverty programs.

In 2003 Banerjee and his colleagues Esther Duflo and Sendhil Mullainathan founded an M.I.T. institute devoted to the use of randomized trials, called the Poverty Action Lab. Lab members have completed or begun a variety of projects, including studies of public health measures, small-scale loans (called microcredit), the role of women in village councils, AIDS prevention, and barriers to fertilizer use. The studies typically piggyback on the expansion of an NGO or government program. Researchers work with the organization to select appropriate measures of the program's outcome and hire an agency to collect or spot-check the data.

Doling out interventions by lottery may seem unethical, Duflo says, but "in most instances we do not know whether the program will work or whether it is the best use of the money." Many programs start as small pilots, she points out. "We just propose to conduct the pilot so that we can learn something." The sponsoring organization may already perceive an intervention as beneficial, in which case it might select a needy, nonrandom group to receive it first, Banerjee says. In many instances, researchers have to randomly assign different communities to the two study groups, because selecting only some individuals from a classroom or neighborhood would be perceived as unfair. "That's the place where we hit the ethical constraint bang-on all the time: What's the unit of randomization?" Banerjee says.

Beyond ethical quandaries, randomized trials are also much trickier to interpret in a social context than they seem, argues Alok Bhargava of the University of Houston. An intervention that causes one change, such as deworming children to see the effects on school absenteeism, may cause others, such as encouraging parents to send their kids to school, he explains. Others contend that experiments are limited because the social programs themselves are. "Helping us to understand which of these programs are effective is very useful," says Mark Rosenzweig of the Kennedy School of Government at Harvard University, "but one should not think these programs will create economic growth that will alleviate poverty." Growth depends more on institutions, such as the status of property rights, which affect how people respond to interventions, he says.

The designers of these experiments are aware of the limits of studying a small group of people in a single place over a small fraction of a lifetime. But they say they are trying to create solid ground from which to generalize. "If you have a number of these things, you start to build up a picture," says Michael Kremer, a Harvard University affiliate of the M.I.T. lab. "Development goes through a lot of fads. We need to have evidence on what works."