Imagine snapping a panoramic picture from the top of the Empire State Building, then zooming in on a speck to reveal a quarter lying on the sidewalk. That’s the promise of single-shot gigapixel cameras—cameras that shoot images composed of at least one billion pixels, or picture elements. Apart from their obvious appeal to photographers, gigapixel images also hold tremendous potential for law enforcement and the military. Such high resolution would enable unmanned aerial vehicles to capture detail down to a license plate number while flying at altitudes too high to be spotted from the ground.
The Internet is already abuzz with sites, such as Google Earth, 360world.eu and GigaPan (created by Carnegie Mellon University, NASA and Google), that allow gigapixel digital photographs to be uploaded, viewed and shared across the Web. But these photographs actually consist of several megapixel-size images pieced together digitally. This is often accomplished using a long-lens digital single-lens reflex (SLR) camera placed atop a motorized mount. Software controls the movement of the camera, which captures a mosaic of hundreds or even thousands of images that, when placed together, create a single, high-resolution scene. The main drawback to this approach is that it can take up to several hours to complete the shoot, during which time lighting conditions may change and objects can move in and out of the frames.
Researchers are working to develop a camera that can take a gigapixel-quality image in a single snapshot. The U.S. Defense Advanced Research Projects Agency is investing $25 million over the next three and a half years into developing such compact devices. “We are no longer dealing with fixed installations or army tank units or missile silo units,” says Ravi Athale, a consultant to DARPA on this program. “[Fighting terrorism requires] an awareness of what’s going on in a wide area the size of a medium city.” Current satellite images or those taken from drones are extremely high resolution but very narrow in view, like “looking through a soda straw,” Athale says.
But today’s camera-size digital processors and memory are unprepared to manage gigapixel images, which contain more than 1,000 times the amount of information of megapixel images. (A 10-gigapixel image would take up more than 30 gigabytes of hard drive space.) Oliver Cossairt and Shree K. Nayar of Columbia University’s school of engineering, with funding from DARPA, have taken one promising approach: using computations to reduce such complexity. “Rather than thinking about it as capturing the final image, you’re capturing information you would need to compute the final image,” says Nayar, chair of Columbia’s department of computer science.
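Those storage figures follow from simple arithmetic. A minimal sketch, assuming an uncompressed image at 3 bytes per pixel (24-bit RGB color); real file sizes vary with bit depth, metadata, and compression:

```python
# Back-of-envelope storage estimate for an uncompressed digital image,
# assuming 3 bytes per pixel (24-bit RGB). Actual files vary with
# bit depth and compression.

def uncompressed_size_gb(pixels, bytes_per_pixel=3):
    """Return approximate size in gigabytes (10^9 bytes)."""
    return pixels * bytes_per_pixel / 1e9

MEGAPIXEL = 1e6
GIGAPIXEL = 1e9

# A 10-gigapixel image: roughly 30 GB of raw pixel data.
print(uncompressed_size_gb(10 * GIGAPIXEL))  # 30.0

# A gigapixel image holds 1,000 times the pixels of a megapixel image.
print(GIGAPIXEL / MEGAPIXEL)  # 1000.0
```

The same scaling explains the processing challenge: every factor-of-1,000 jump in pixel count multiplies the data the camera's onboard electronics must move and store by the same factor.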
In a paper to be presented at April’s IEEE International Conference on Computational Photography in Pittsburgh, Cossairt and Nayar propose three compact gigapixel camera designs, two of which they built. Each relies on a unique ball-shaped lens that they selected for its simplicity—indeed, they built their first prototype around a crystal ball that they bought on New York City’s Canal Street. Unlike flatter lenses, which lose resolution toward the edges, a sphere’s perfect symmetry allows for uniform resolution. One of the Columbia lens designs resembles a fly’s eye, with half the sphere covered in small, hexagonal relay lenses that transmit images to an array of sensors just above them.
Of course, any advanced imaging technology invites concerns over privacy. Christopher Hills, a security consultant with Securitas Security Services who also runs the site gigapixel360.com, acknowledges that a landscape gigapixel image of a city could be scrutinized to see into the windows of homes. “Still, if you were to go to your window, someone in another nearby building or on the street would be able to see you. That’s why they make shades,” Hills observes.
This article was originally published with the title "Can You See Me Now?"