The capture and analysis of surveillance footage has been an indispensable tool for U.S. counterterrorism and law enforcement in the past decade. Video analysis software has improved since the 9/11 terrorist attacks—it can be programmed to identify certain patterns and colors, for example, and to issue security alerts when these characteristics are detected. But as terrorists and criminals change their tactics to slip through security, the surveillance technologies designed to stop or catch them must likewise become more sophisticated.

One of the biggest challenges to improving video analytics is programming the software to identify specific people and objects under a variety of conditions, such as poor lighting, cluttered backgrounds and subtle changes in appearance (such as facial hair). Video analytics has a lot of room to improve in these areas with the help of software that sharpens computer vision, enhances facial- and pattern-recognition capabilities, and captures the motion of people and things passing in front of the camera's lens. [See our earlier coverage of post-9/11 security and surveillance.]

A team of New York University researchers has homed in on motion capture as a particularly promising approach to analysis. Associate computer science professor Chris Bregler is studying whether potential security threats can be identified via unique patterns of movement. How might someone walk if he were carrying a bomb in his backpack?
A person's body moves differently when it must compensate for an unnatural burden, such as a heavy backpack or even high-heeled shoes, says Bregler, who performs much of his research in N.Y.U.'s Movement Lab, a motion-capture studio and research group housed at the Interactive Telecommunications Program (ITP) in the school's Tisch School of the Arts. When a person is weighed down by a backpack, for example, his or her feet will strike the ground with greater force than they otherwise would. A person who has gained weight naturally, by contrast, learns how to counterbalance it and maintain an even stride, he adds.

Bregler and his team have identified certain movement signatures with the help of the same motion-capture technology used for special effects in the Lord of the Rings and Harry Potter movies. Reflective markers placed at various locations on a special motion-capture suit can be picked up by a series of cameras that Bregler places in a circle around the Movement Lab. As a person in the suit moves, the cameras capture information about the reflective markers and feed the data to software that maps the location of each marker on a computer screen.
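The pipeline described above—cameras around the room record a marker's position, and software maps it to a point on screen—can be sketched in miniature. The toy code below (a hypothetical illustration, not the Movement Lab's actual software) recovers a single marker's 3-D position from its pixel coordinates in two calibrated cameras using linear triangulation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3-D marker position from its pixel coordinates in two
    cameras with known 3x4 projection matrices (linear DLT method)."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the null vector of A,
    # i.e. the right singular vector with the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two toy cameras: one at the origin, one translated 1 unit along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

marker = np.array([0.5, 0.2, 4.0])           # ground-truth marker position
x1 = marker[:2] / marker[2]                  # its projection in camera 1
x2 = (marker - [1, 0, 0])[:2] / marker[2]    # its projection in camera 2

print(triangulate(P1, P2, x1, x2))  # recovers the marker at ~[0.5, 0.2, 4.0]
```

A real system does this for dozens of markers across many cameras, but the principle is the same: each camera view constrains the marker to a ray, and the rays' intersection fixes its position in space.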

Outside the laboratory, however, the real world poses problems for motion-capture research—there simply are not enough motion-capture suits for everyone to wear. As a result, Bregler's team is developing software that can detect a person's motion signature without the markers.

Their GreenDot project trains cameras to scan the surroundings and identify spots that are unique, such as the way light reflects off a shirt's button differently than it does off the shirt's fabric. These spots are then represented as green dots on the computer screen. "Wherever you have texture [the software] can find something and track it from frame to frame," Bregler says. GreenDot's objective is to program a computer to recognize a person based on his or her motions. The software should even be able to identify a person's emotional state, cultural background, and other attributes based on movement.
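The frame-to-frame tracking Bregler describes can be illustrated with a toy sketch (hypothetical code, not GreenDot itself): find the patch of the image with the most local texture—the "green dot"—then re-locate it in the next frame by searching nearby for the best pixel-by-pixel match:

```python
import numpy as np

def best_texture_patch(frame, size=5):
    """Pick the patch with the most local texture (highest variance),
    akin to a green dot latching onto a button or a fabric seam."""
    h, w = frame.shape
    best, best_var = (0, 0), -1.0
    for y in range(h - size):
        for x in range(w - size):
            v = frame[y:y+size, x:x+size].var()
            if v > best_var:
                best_var, best = v, (y, x)
    return best

def track(prev, curr, pos, size=5, radius=3):
    """Re-locate the patch in the next frame by minimizing the sum of
    squared differences over a small search window around `pos`."""
    y0, x0 = pos
    template = prev[y0:y0+size, x0:x0+size]
    best, best_ssd = pos, np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = y0 + dy, x0 + dx
            if 0 <= y <= curr.shape[0] - size and 0 <= x <= curr.shape[1] - size:
                ssd = ((curr[y:y+size, x:x+size] - template) ** 2).sum()
                if ssd < best_ssd:
                    best_ssd, best = ssd, (y, x)
    return best

# Synthetic frames: a textured "button" on a flat background, shifted by (1, 2)
texture = np.indices((5, 5)).sum(axis=0) % 2  # checkerboard = strong texture
frame1 = np.zeros((20, 20)); frame1[8:13, 8:13] = texture
frame2 = np.zeros((20, 20)); frame2[9:14, 10:15] = texture

dot = best_texture_patch(frame1)
# The dot lands at (8, 8) and is tracked to (9, 10) in the next frame
print(dot, track(frame1, frame2, dot))
```

Production trackers follow thousands of such points at once with far more robust matching, but the core idea is the one Bregler states: wherever there is texture, the software can find something and follow it from frame to frame.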

Bregler's motion-analysis research attracted the attention of the Pentagon's Defense Advanced Research Projects Agency (DARPA) in 2000 as a possible means of identifying security threats. Following 9/11 his research ramped up thanks to funding from the National Science Foundation and the U.S. Office of Naval Research. Law enforcement and counterterrorism organizations already had facial-recognition technology but were looking for additional ways to better make sense of countless hours of surveillance footage.

New York City's Police Department and Mayor Michael Bloomberg's security staff also expressed an interest in Bregler's research in 2004. "They asked us if we could detect suicide bombers, and that's why we started the experiment studying artificial weight," he says.

Facial recognition has in recent years matured to the point where it is useful, but only if the subject is within about six meters of the camera, Bregler says. "That's why we're working on something called intrinsic biometrics, which studies your body patterns, your timing, the way you walk, the way you use your hands when you speak," he adds. "These are very hard to fake."

Based on the decade or so it took for facial recognition to become reliable, Bregler estimates intrinsic biometrics is years away from maturity. Ten years after facial recognition started yielding meaningful results, it has moved beyond the realm of security and is being used by Facebook, Google, Apple's iPhoto and a number of other commercial applications. "We are just starting with intrinsic biometrics so estimating from the past it probably takes five to 10 more years before it becomes as accurate as facial recognition and other biometrics," he adds.