Over the past decade optics researchers have shown that mirrors are not necessary to see objects outside the line of sight. That success, though, required exotic lasers firing pulses lasting less than a trillionth of a second and high-performance sensors able to detect single photons. Now a team at Boston University has shown that an algorithm and an ordinary digital camera can also look around corners without mirrors—and do so without such costly and complex equipment.

Periscopes and mirrors make it easy to peer around a corner, but the clarity they offer comes at a cost: they must be placed in the line of sight of whatever they are observing, where they can easily be detected and destroyed. Furtive observers would prefer to operate outside the line of sight by extracting information from nonobvious sources such as light reflected from the matte surfaces of painted walls.

Light rays bounce off silvery metal surfaces such as mirrors at the same angle at which they arrive, as if they were little balls bouncing off a perfectly flat surface. Matte surfaces such as painted walls and white poster board look smooth to the eye but are rough at a microscopic scale, so they scatter light over a wide range of angles rather than in a single direction. A matte surface thus scrambles the light coming from different directions, and our eyes cannot tell where it came from.
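The difference can be sketched in a few lines of code (a toy illustration, not taken from any of the studies discussed here): a mirror sends each incoming ray into one predictable outgoing direction, whereas a matte surface sends it off in effectively random directions that carry no memory of where the light came from.

```python
import numpy as np

def mirror_reflect(d, n):
    """Specular reflection: the outgoing direction is fully determined by the incoming one.
    d: incoming ray direction (unit vector); n: surface normal (unit vector)."""
    return d - 2.0 * np.dot(d, n) * n

def matte_scatter(n, rng):
    """Idealized matte scattering: a random direction in the hemisphere above the surface,
    a crude stand-in for the diffuse reflection of a painted wall."""
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    return v if np.dot(v, n) > 0 else -v

rng = np.random.default_rng(0)
n = np.array([0.0, 0.0, 1.0])                  # wall facing the +z direction
d = np.array([1.0, 0.0, -1.0]) / np.sqrt(2.0)  # incoming ray at 45 degrees
print(mirror_reflect(d, n))   # always the same answer: the image is preserved
print(matte_scatter(n, rng))  # different on every bounce: the image is scrambled
```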

In 2009 Ramesh Raskar, head of the MIT Media Lab’s Camera Culture research group, and his colleagues timed how long very short laser pulses fired into an unseen area took to travel from the laser to a hidden object and back. Since then Raskar’s group and others have greatly refined those “time of flight” observations, which, like microwave radar and its optical counterpart lidar, measure distance by clocking light’s travel time to and from a target.
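The arithmetic behind those measurements is simple, even if the hardware is not (the numbers below are generic, not from Raskar’s experiments): multiply the round-trip time of a pulse by the speed of light and halve it to get the distance to whatever reflected it.

```python
C = 299_792_458.0  # speed of light in meters per second

def distance_from_round_trip(t_seconds):
    """Distance to a reflector given the round-trip travel time of a light pulse."""
    return C * t_seconds / 2.0

# Timing a pulse to within a trillionth of a second pins down distance to a fraction
# of a millimeter, which is why such exotic lasers and detectors are needed.
print(distance_from_round_trip(1e-12))  # about 0.00015 m, i.e. 0.15 mm
```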

Seeking a simpler approach, electrical and computer engineer Vivek Goyal and his colleagues at Boston University analyzed the problem of looking around a corner by treating light as rays that follow straight lines between surfaces, the standard approach used in designing optics. They traced the paths of rays that leave a hidden object on one side of a wall, round the corner by bouncing off a matte surface, and enter a camera on the other side. In that simple arrangement the camera sees only the matte surface, because the surface scatters the incoming light too uniformly to preserve any image of the object.
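That ray picture is easy to mimic numerically. In the toy model below (a simplified one-dimensional setup with assumed numbers, not the group’s code), the hidden display is a row of point sources, and each point on the matte wall adds up the light arriving from all of them with an inverse-square falloff. With nothing in the way, every wall point collects light from essentially the whole display, so the wall shows only a gentle, featureless gradient.

```python
import numpy as np

# Toy one-dimensional geometry: a hidden display (a row of point sources) faces a
# matte wall a fixed distance away. All numbers are arbitrary illustration values.
n_src, n_wall, gap = 64, 64, 1.0
src_x  = np.linspace(0.0, 1.0, n_src)    # positions along the hidden display
wall_x = np.linspace(0.0, 1.0, n_wall)   # positions along the matte wall

scene = np.zeros(n_src)
scene[20:30] = 1.0                       # a bright patch shown on the display

# Each wall point sums the light from every display pixel with 1/r^2 falloff.
dx = wall_x[:, None] - src_x[None, :]
A_open = 1.0 / (dx**2 + gap**2)          # no occluder: every ray gets through
wall = A_open @ scene

# The wall brightness varies only smoothly; the sharp edges of the patch are gone.
print(wall.round(2))
```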

However, they found that putting a flat, opaque “occluder” between the hidden object—an illuminated screen displaying images—and the matte surface changes the picture. The occluder casts shadows that keep light from parts of the display screen from reaching parts of the matte surface. The effect is similar to a partial lunar eclipse, in which Earth blocks sunlight from reaching part of the moon.
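In the same toy setup, the occluder can be modeled as an opaque segment floating between the display and the wall: a wall point receives light from a display pixel only if the straight ray between them misses that segment. A sketch of that visibility test (with assumed positions and sizes, purely for illustration):

```python
import numpy as np

# Display at height 0, matte wall at height GAP, and an opaque occluder segment
# in between. The positions and sizes are assumed values for illustration.
GAP, Y_OCC = 1.0, 0.5                 # wall distance and occluder height
OCC_LEFT, OCC_RIGHT = 0.45, 0.60      # horizontal extent of the occluder

def visible(src_x, wall_x):
    """True if the straight ray from a display point to a wall point misses the occluder."""
    # The ray crosses the occluder's height a fraction Y_OCC/GAP of the way across,
    # so interpolate its horizontal position there and test it against the segment.
    x_at_occ = src_x + (wall_x - src_x) * (Y_OCC / GAP)
    return ~((x_at_occ >= OCC_LEFT) & (x_at_occ <= OCC_RIGHT))

src_x  = np.linspace(0.0, 1.0, 8)
wall_x = np.linspace(0.0, 1.0, 8)
V = visible(src_x[None, :], wall_x[:, None])  # V[i, j]: does wall point i see display pixel j?

# Each wall point now sees a different subset of the display. That variation,
# cast by the occluder's shadow, is what makes the hidden scene recoverable.
print(V.astype(int))
```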

By tracing light rays from the edges of those shadows, Goyal’s team could map which parts of the screen illuminate which parts of the matte surface. They then devised algorithms that work backward from the digital camera’s images of the matte surface to re-create the pattern shown on the screen.
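Put together, that mapping becomes a linear system: the photograph of the wall is, approximately, a known light-transport matrix, shaped by the occluder’s shadows, multiplied by the unknown screen image. Working backward is then a regularized linear inverse problem. The sketch below assembles the two pieces above and inverts them with plain Tikhonov-regularized least squares; it is a toy stand-in with assumed geometry, not the algorithm the team published.

```python
import numpy as np

# Toy geometry as before: display at height 0, matte wall at height GAP, opaque
# occluder segment in between. All values are assumptions chosen for illustration.
GAP, Y_OCC, OCC_LEFT, OCC_RIGHT = 1.0, 0.5, 0.45, 0.60
n = 64
src_x  = np.linspace(0.0, 1.0, n)
wall_x = np.linspace(0.0, 1.0, n)

# Forward model: wall_photo = A @ scene. A combines inverse-square falloff with the
# occluder's shadow pattern (an entry is zeroed when the ray is blocked).
dx = wall_x[:, None] - src_x[None, :]
falloff = 1.0 / (dx**2 + GAP**2)
x_at_occ = src_x[None, :] + dx * (Y_OCC / GAP)
blocked = (x_at_occ >= OCC_LEFT) & (x_at_occ <= OCC_RIGHT)
A = falloff * ~blocked

# Simulate the photograph of the wall: a hidden scene pushed through A, plus camera noise.
rng = np.random.default_rng(1)
scene = np.zeros(n)
scene[10:20], scene[40:45] = 1.0, 0.5
wall_photo = A @ scene + 1e-4 * rng.normal(size=n)

# Work backward with Tikhonov-regularized least squares:
# minimize ||A x - wall_photo||^2 + lam * ||x||^2.
lam = 1e-2
recovered = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ wall_photo)

# How faithful the result is depends on the geometry, noise and regularization chosen;
# the point is only that the occluder's shadows make the inversion possible at all.
print("true     :", scene.round(1))
print("recovered:", recovered.round(1))
```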

In the experiment an LCD monitor displayed the scene of interest. Light from that image shone on the back of a solid occluding object, which blocked some of the light but let the rest pass around it as a penumbra that lit up an imaging wall with a matte surface. The light projected onto the wall did not resemble the original scene, but a digital camera captured the projection and a computer algorithm then processed it to reconstruct the scene of interest. Credit: Charles Saunders/Nature

“Having an opaque occluder in the scene...enables this to work,” Goyal says. To test the idea, his team built a tabletop model: a four-megapixel digital camera facing a matte surface on one side of an interior wall, with an occluder and a digital display on the other side. Running camera images of the matte surface through their algorithms reproduced the images shown on the display, Goyal’s group wrote in the January 24 Nature. The images from their “computational periscope” are far from perfect. But Goyal says, “We never thought it would work this well,” adding, “Conceptually, this could be an app in a smartphone.” Although the principles are simple and widely applicable, he says, “writing such a mobile phone app would be pretty nontrivial” because it would have to adapt to the environment where it was being used.

Raskar was unimpressed. He called the demonstration “similar to concurrent work” on ultrafast time-of-flight systems, and said the computational system’s need to estimate the occluder shape and location before it could be used “may be challenging in a real-world scenario” such as covert observations.

Yet optical engineer Martin Laurenzis of the French–German Research Institute of Saint-Louis, who was not involved in the study, was far more optimistic. “The underlying principle is very clever,” he says, noting that a pinhole camera creates images by blocking most of the light, whereas Goyal’s system blocks only a few rays. “Both approaches aim to reveal scenes around a corner and extend the perception of optical sensors,” Laurenzis says, but he thinks the two aim for “totally different applications.” Goyal says he has never systematically compared the two approaches because “they seem so ‘apples and oranges’ in what they do and what they achieve.” He thinks combining the two techniques could produce exciting results and is pursuing it intently.

The Defense Advanced Research Projects Agency (DARPA) is backing much of the research in the hope of developing new ways of gathering optical information. But the field is still young. “Even after a decade of non-line-of-sight time-of-flight imaging, we are still in the beginning,” Laurenzis says.