Last year the United Nations set a goal of eliminating extreme poverty worldwide by 2030. That's an audacious target. One of the first steps—figuring out where the most impoverished people live—has proved surprisingly difficult. Conducting economic surveys in poor or conflict-prone countries can be expensive and dangerous. Researchers have tried to work around this limitation by searching nighttime satellite images for unusually dark areas. “Places lit up at night are generally better off,” explains Marshall Burke, an assistant professor of Earth system science at Stanford University. But this method is imperfect, especially for differentiating between grades of poverty. From space, at night, mild and extreme poverty look the same—dark.

Burke and his team at Stanford think they have found a way to improve the study of satellite images using machine learning. The researchers trained image-analysis software on both daytime and nighttime satellite imagery for five African nations. By combining both sets of data, the computer “learned” which daytime features (roads, urban areas, agricultural lands) were correlated with different levels of night-lights brightness. “The night lights are a tool to figure out what's important in the daytime imagery,” Burke says.
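The core idea—using night-lights brightness as a training signal for daytime features—can be illustrated with a toy sketch. This is not the Stanford team's actual pipeline (which trains a deep image model on raw satellite imagery); the feature columns and data below are synthetic stand-ins, and a simple ridge regression stands in for the learned model.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical daytime features per image tile: road density,
# built-up fraction, cropland fraction (synthetic values).
n_tiles = 500
daytime = rng.random((n_tiles, 3))

# Synthetic night-lights brightness, correlated with roads and
# built-up area plus a little noise.
night_lights = (2.0 * daytime[:, 0] + 3.0 * daytime[:, 1]
                + rng.normal(0.0, 0.1, n_tiles))

X_train, X_test, y_train, y_test = train_test_split(
    daytime, night_lights, test_size=0.2, random_state=0)

# Step 1: night lights supervise which daytime features matter.
model = Ridge().fit(X_train, y_train)

# Step 2: at inference time, estimate brightness (a wealth proxy)
# from daytime features alone.
proxy = model.predict(X_test)
```

The point of the sketch is the supervision pattern: abundant night-lights data provide labels, so no costly ground survey is needed to train the daytime model.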

Once the training was complete, the software could spot impoverished areas simply by looking at daytime satellite images. When the researchers compared the results with survey data from the five African countries, they found that their method outperformed other nontraditional poverty-predicting tools, including the night-lights model. Governments and nonprofits could use the tool to determine whom to target in a cash-transfer program, for example, or to evaluate how well a certain antipoverty policy works. The researchers plan to collaborate with the World Bank to chart poverty in places such as Somalia. Next, Burke and his team want to use their new technique to create an Africa-wide map.