PHOTOSYNTHESIS Photosynthetic chromatophores are bubbles of liquid that form on the membranes of bacteria (purple bacteria, for example) that harness sunlight, carbon dioxide and water to produce the energy needed for respiration and other functions. These bacteria use chromatophores to store the proteins necessary for photosynthesis. Image: © IAKOV KALININ, COURTESY OF ISTOCKPHOTO.COM
The study of processes that make life possible is hardly a leisurely pursuit, but that doesn't preclude researchers from taking advantage of the most advanced video gaming technology available to aid in their work. A team of University of Illinois at Urbana–Champaign (U.I.U.C.) physicists has assembled a supercomputer consisting of several hundred superfast graphics processing units (GPUs)—typically used for rendering highly sophisticated video game graphics—that they think will help them build a simulation depicting how chromatophore proteins turn light energy into chemical energy, a process called photosynthesis.
"Ninety-five percent of the energy that life on Earth requires is fueled by photosynthetic processes," says Klaus Schulten, a U.I.U.C. physics professor leading the simulation-building effort and director of the school's Theoretical and Computational Biophysics Group. To better understand how these processes work, Schulten's team is assembling a computer-based, virtual photosynthetic chromatophore.
Photosynthetic chromatophores are bubbles of liquid that form on the membranes of bacteria that harness sunlight, carbon dioxide and water to produce the energy needed for respiration and other functions. These bacteria use chromatophores to store the proteins necessary for photosynthesis. This is the simplest of all types of photosynthetic systems, Schulten says.
Simple or not, a photosynthetic chromatophore consists of 100 million atoms. "We know every atom of the chromatophore, but only when we know how the atoms are arranged can we fully understand these systems and how they work," Schulten says. "What's been lacking is the ability to run simulations fast enough to mimic real life."
Schulten expects it will be a year or two before his team can complete their virtual chromatophore. This projection would be several years longer if not for the researchers' ability to crunch numbers on "Lincoln," a supercomputer at the University of Illinois's National Center for Supercomputing Applications (NCSA) powered by 384 NVIDIA Corp. GPUs running in parallel (which means they split up the workload) in concert with Lincoln's 1,536 central processing units (CPUs).
When CPUs and GPUs work together to process information on a computer, it is considered a "co-processing" configuration. That is, when Schulten and his team run their software on Lincoln, the CPUs and GPUs share the work, although the GPUs take on a larger portion of the load.
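The division of labor in such a co-processing setup can be sketched in a few lines. This is only an illustration, not the team's actual software: the 80/20 split ratio and the function name `split_workload` are assumptions for the sketch, since the article says only that the GPUs "take on a larger portion of the load."

```python
# Illustrative sketch of a co-processing workload split (not the
# researchers' actual code). A pool of work items -- here, atoms in a
# simulated chromatophore -- is partitioned so the GPUs take the
# larger share, as on the Lincoln supercomputer.

def split_workload(n_items, gpu_share=0.8):
    """Partition n_items between GPU and CPU workers.

    gpu_share is an assumed, illustrative fraction; the article states
    only that the GPUs handle the larger portion of the load.
    """
    n_gpu = int(n_items * gpu_share)
    n_cpu = n_items - n_gpu
    return n_gpu, n_cpu

# A 100-million-atom chromatophore, split 80/20 for this sketch:
n_gpu, n_cpu = split_workload(100_000_000)
print(n_gpu, n_cpu)  # 80000000 20000000
```

In a real molecular dynamics code the split is not a fixed fraction; the runtime assigns the heavily data-parallel force calculations to the GPUs while the CPUs handle coordination and the remaining work.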
CPUs and GPUs didn't always work so well together. GPUs became a popular tool in the 1990s for speeding up computer graphics, but these processors could only run programs written with graphics-oriented tools such as the OpenGL interface and the Cg shading language, whereas CPUs worked with more general-purpose languages such as C. This changed in 2006 when NVIDIA introduced its Compute Unified Device Architecture (CUDA), an interface that let C programs run on the company's GPUs. Thanks to CUDA's success, NVIDIA today introduced a faster version of its GPU that can work with a larger number of software programs, including those written in C++.
Advanced Micro Devices (AMD), the only other major GPU-maker, last week introduced what it is calling "the most powerful processor ever created," capable of up to 2.72 teraflops of computing power per GPU. (A teraflop is equal to 1 trillion calculations per second.) NVIDIA has not released the specs on its new GPU, but a spokesman says that the previous generation delivers just under 1 teraflop per GPU. Advances made by NVIDIA and AMD mean big things for science. The advent of programmable GPUs that can be used for more than simply graphics processing has helped the researchers "accelerate their simulations by a factor of 10," Schulten says.
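Some back-of-the-envelope arithmetic puts these figures in perspective, using only the numbers quoted above (384 GPUs at just under 1 teraflop each, AMD's 2.72-teraflop part, and the factor-of-10 speedup). The ten-day baseline runtime at the end is purely an assumed example, not a figure from the article.

```python
# Back-of-the-envelope arithmetic from the figures quoted in the article.
TERAFLOP = 1e12                 # 1 trillion calculations per second

gpus_on_lincoln = 384           # NVIDIA GPUs in NCSA's Lincoln cluster
flops_per_gpu = 1 * TERAFLOP    # "just under 1 teraflop per GPU"
amd_new_gpu = 2.72 * TERAFLOP   # AMD's newly announced processor

# Peak aggregate throughput of Lincoln's GPUs alone (upper bound;
# real simulations achieve only a fraction of peak):
lincoln_gpu_peak = gpus_on_lincoln * flops_per_gpu
print(f"{lincoln_gpu_peak / TERAFLOP:.0f} teraflops")  # 384 teraflops

# What a factor-of-10 speedup means in practice: an assumed
# ten-day simulation run shrinks to a single day.
baseline_days = 10              # hypothetical runtime, for illustration
speedup = 10                    # figure quoted by Schulten
print(baseline_days / speedup, "day(s)")  # 1.0 day(s)
```

The point of Schulten's factor of 10 is less about raw peak numbers than about iteration time, as the closing quote below makes clear.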
Schulten's view of processing speed is pragmatic, however. "If everything takes a long, long time, you get only one chance to test your work," he says. "This makes it very difficult to learn from trial and error."