Virtual worlds such as Second Life and World of Warcraft allow players to adopt virtual personas or engage in combat on digital battlefields, but what if similar technology could let government intelligence analysts play out antiterrorism scenarios, giving them a better understanding of the consequences of Middle East policy recommendations? A team of researchers at the University of Maryland in College Park, Md., believes it has created just such a virtual world using computational models that mimic terrorist behavior based on a variety of factors, including social, political and religious beliefs.

The researchers hope that the U.S. Department of Defense, which helped fund their project, will take advantage of artificial intelligence built into this digital mock-up of the Middle East to estimate how different actions—such as building schools, burning drug crops and performing massive security sweeps—might affect the complex real-life interactions among the many diverse ethnic groups in the region, according to a study to be published Friday in Science.

"We provide a probability distribution of how likely something is to happen," says V.S. Subrahmanian, a University of Maryland computer science professor and director of the school's Institute for Advanced Computer Studies. Subrahmanian, the lead researcher on the project, has spent the past five years using data such as the frequency and location of suicide bombings to model the behavior of terrorist groups. The virtual world is based on real-world information of about 100 different terrorist groups and the regions where they operate.

This information was crucial to building the probability modeling software that the virtual world's Cultural Adversarial Game Engine (CAGE) uses to predict war game-like outcomes, says John Dickerson, a recent University of Maryland graduate who assisted with Subrahmanian's research. A key feature of CAGE is its ability to let intelligence analysts compare the simulated reactions of different terrorist groups to the same action.

Like a more sober version of the Sims gaming franchise, the researchers' virtual antiterrorism exercise begins with an intelligence analyst choosing a region and tweaking different values, such as the level of financial, military and/or political support that a local government provides to a known terrorist group. Using tables positioned along the side of the screen, the analyst could have Country A provide more financial support to one group while taking away some support from another group. The virtual world would then play out a scenario that could include attacks by the terrorist groups on Country A, a neighboring country or a competing terrorist organization.
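The analyst's workflow described above amounts to editing a scenario state before the simulation runs. Here is a minimal sketch of that step, assuming a hypothetical scenario representation with support levels normalized to a 0-to-1 scale; the group names, support categories and `adjust_support` helper are all inventions for illustration, not part of CAGE.

```python
# Hypothetical scenario state: Country A's support levels for two groups.
scenario = {
    "group_x": {"financial": 0.8, "military": 0.2, "political": 0.5},
    "group_y": {"financial": 0.1, "military": 0.0, "political": 0.3},
}

def adjust_support(scenario, group, kind, delta):
    """Tweak one support value, clamped to [0, 1], much as an analyst
    would from the tables along the side of the screen."""
    level = scenario[group][kind] + delta
    scenario[group][kind] = max(0.0, min(1.0, level))
    return scenario

# Country A gives more financial support to one group, less to another.
adjust_support(scenario, "group_x", "financial", 0.1)
adjust_support(scenario, "group_y", "financial", -0.05)
```

After such edits, the engine would sample outcomes from its learned distributions conditioned on the new support levels.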

CAGE includes two algorithms, the first of which is called Convex and is designed to analyze a hypothetical situation created by an analyst and predict what a terrorist group will do in response to that situation. The researchers designed the second algorithm, Cape, to predict what a given terrorist group will do over a specified time period.
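The division of labor between the two algorithms can be sketched as a pair of interfaces: one maps a hypothetical situation to a distribution over responses, the other projects behavior across a time horizon. Everything below is a hypothetical illustration; the decision rule, the threshold, and the independence assumption are stand-ins, not descriptions of how Convex or Cape actually work.

```python
def predict_response(group_profile, situation):
    """Convex-style role: given a hypothetical situation, return a
    distribution over the group's likely responses (toy rule only)."""
    if situation["government_support"] < 0.3:
        return {"attack": 0.6, "negotiate": 0.1, "no_action": 0.3}
    return {"attack": 0.2, "negotiate": 0.4, "no_action": 0.4}

def predict_over_horizon(group_profile, situation, weeks):
    """Cape-style role: probability of at least one attack within a
    time horizon, assuming independent weekly draws (an assumption
    made purely to keep this sketch simple)."""
    p_attack = predict_response(group_profile, situation)["attack"]
    return 1 - (1 - p_attack) ** weeks
```

The point of the two-part structure is that an analyst can ask both "what would this group do if I did X?" and "what is this group likely to do over the next N weeks?" against the same underlying model.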

Although the Defense Department has not formally tested the virtual world, Subrahmanian says he is working to bring in a group of analysts to put the project through its paces and see how it holds up.

Digital war games are not without their critics. All these games have been marked by at least two severe limitations, Selmer Bringsjord, a logician, philosopher and chairman of Rensselaer Polytechnic Institute's Department of Cognitive Science, notes on the Rensselaer Artificial Intelligence and Reasoning (RAIR) Laboratory Web site. For one, "real humans have ethical and religious beliefs, have histories, can communicate in languages, and so on," according to Bringsjord. The second flaw is a lack of real-life consequences for actions taken in a digital environment. For these reasons, RAIR is working on its own war-gaming system that, like Subrahmanian's, relies as much as possible on real data and artificial intelligence to play out predicted scenarios in a virtual world.