Gaming Tech Aids Scientists Building Virtual Synthetic Chromatophore

Researchers are relying on graphics processing units to help build a highly complex computer simulation depicting how chromatophore proteins carry out photosynthesis

The study of processes that make life possible is hardly a leisurely pursuit, but that doesn't preclude researchers from taking advantage of the most advanced video gaming technology available to aid in their work. A team of University of Illinois at Urbana–Champaign (U.I.U.C.) physicists has assembled a supercomputer consisting of several hundred superfast graphics processing units (GPUs)—typically used for rendering highly sophisticated video game graphics—that they think will help them build a simulation depicting how chromatophore proteins turn light energy into chemical energy, a process called photosynthesis.

"Ninety-five percent of the energy that life on Earth requires are fueled by photosynthetic processes," says Klaus Schulten, a (U.I.U.C.) physics professor leading the simulation-building effort and director of the school's Theoretical and Computational Biophysics Group. To better understand how these processes work, Schulten's team is assembling a computer-based, virtual photosynthetic chromatophore.

Photosynthetic chromatophores are bubblelike structures that form on the membranes of bacteria that harness sunlight, carbon dioxide and water to produce the energy needed for respiration and other functions. These bacteria use chromatophores to store the proteins necessary for photosynthesis. It is the simplest of all photosynthetic systems, Schulten says.

Simple or not, a photosynthetic chromatophore consists of 100 million atoms. "We know every atom of the chromatophore, but only when we know how the atoms are arranged can we fully understand these systems and how they work," Schulten says. "What's been lacking is the ability to run simulations fast enough to mimic real life."

Schulten expects it will be a year or two before his team can complete their virtual chromatophore. This projection would be several years longer if not for the researchers' ability to crunch numbers on "Lincoln," a supercomputer at the University of Illinois's National Center for Supercomputing Applications (NCSA) powered by 384 NVIDIA Corp. GPUs running in parallel (which means they split up the workload) in concert with Lincoln's 1,536 central processing units (CPUs).

When CPUs and GPUs work together to process information on a computer, it is considered a "co-processing" configuration. That is, when Schulten and his team run their software on Lincoln, the CPUs and GPUs share the work, although the GPUs take on a larger portion of the load.
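The division of labor can be sketched in Python. This is a loose analogy only: the actual simulation software runs compiled code on the hardware itself, and the worker names, the toy workload and the `gpu_share` split below are all invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def process(chunk):
    # Stand-in for real computation: sum of squares over the chunk.
    return sum(x * x for x in chunk)

def co_process(data, gpu_share=0.8):
    """Split one workload between two workers, 'co-processing' style.

    gpu_share is a made-up parameter: the fraction of the work handed
    to the faster processor, since the GPUs carry the larger portion.
    """
    split = int(len(data) * gpu_share)
    gpu_part, cpu_part = data[:split], data[split:]
    with ThreadPoolExecutor(max_workers=2) as pool:
        gpu_future = pool.submit(process, gpu_part)  # larger share
        cpu_future = pool.submit(process, cpu_part)  # smaller share
        # Combining the partial results gives the same answer as
        # doing all of the work on one processor.
        return gpu_future.result() + cpu_future.result()

print(co_process(list(range(10))))  # 285, same as the serial sum
```

The key point the sketch captures is that both kinds of processor work on pieces of the same problem at once, with the results merged at the end.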

CPUs and GPUs didn't always work so well together. GPUs became a popular tool in the 1990s for speeding up computer graphics, but these processors could only understand programs written in specialized graphics shading languages, such as Cg and the OpenGL shading language, whereas CPUs worked with more general-purpose languages such as C. This changed in 2006 when NVIDIA introduced its Compute Unified Device Architecture (CUDA), an interface that let C programs run on the company's GPUs. Thanks to CUDA's success, NVIDIA today introduced a faster version of its GPU that can work with a larger number of software programs, including those written in C++.

Advanced Micro Devices (AMD), the only other major GPU-maker, last week introduced what it is calling "the most powerful processor ever created," capable of up to 2.72 teraflops of computing power per GPU. (A teraflop is equal to 1 trillion floating-point calculations per second.) NVIDIA has not released the specs on its new GPU, but a spokesman says that the previous generation delivers just under 1 teraflop per GPU. Advances made by NVIDIA and AMD mean big things for science. The advent of programmable GPUs that can be used for more than simply graphics processing has helped the researchers "accelerate their simulations by a factor of 10," Schulten says.
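The units in play can be made concrete with a bit of arithmetic. In the sketch below, the workload size and the CPU-only rate are invented for illustration; only the factor-of-10 acceleration comes from Schulten's remark.

```python
TERAFLOP = 1e12  # 1 trillion floating-point calculations per second

def runtime_seconds(total_ops, flops):
    # Time to finish a fixed amount of work at a given compute rate.
    return total_ops / flops

work = 1e17                 # hypothetical simulation workload, in operations
cpu_rate = 0.1 * TERAFLOP   # assumed CPU-only rate, for illustration
gpu_rate = 10 * cpu_rate    # the factor-of-10 acceleration Schulten cites

speedup = runtime_seconds(work, cpu_rate) / runtime_seconds(work, gpu_rate)
print(speedup)  # 10.0
```

In practical terms, the same speedup turns a simulation run that would take ten days into one that finishes in a single day, which is what makes trial-and-error experimentation feasible.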

Schulten's view of processing speed is pragmatic, however. "If everything takes a long, long time, you get only one chance to test your work," he says. "This makes it very difficult to learn from trial and error."
