Reality Bytes: 3-D Data Demands Force CG Moviemakers to Get Creative with Computer Efficiency

In the making of Avatar, data-caching helped artists create highly detailed visual effects while saving time and storage space



Moviemakers continue to up the ante in their quest to make film animation as realistic as live action, thanks to improvements in 3-D computer-generated (CG) graphics. These efforts can pay off in big ways—James Cameron's Avatar earned a mountain of money and three Academy Awards. But, as New Zealand digital effects–maker Weta Digital can attest, painstakingly creating three-meter-tall blue bioluminescent aliens required an unprecedented amount of computing power and data storage—and those resources are likely to be dwarfed by subsequent projects.

"The biggest changes I've seen are in the complexity of the movies, starting with The Lord of the Rings," says Paul Gunn, data center systems administrator for Weta, the company responsible for the stunning visuals in The Lord of the Rings film trilogy as well as those in Avatar. A significant part of this complexity comes from moviemakers' demands that digitally rendered characters and scenes become more lifelike even when shot in close-up.

The main job of Weta's data center is rendering, a process that adds texture, shading, reflection and other visual aspects to digital images, "turning them into something we can produce on the screen," Gunn says. Avatar's graphics rendering required the services of more than 4,300 computer servers (containing nearly 35,000 central processing unit cores) to process digital images into movie-quality visuals—a system the company refers to as its "renderwall." For Avatar hundreds of visual effects artists fed terabytes' worth of work into the renderwall, which refined those computer-designed images into something closer to the finished product.

"We can't predict what an artist will need so we have to provide them with a smorgasbord of resources to work with," Gunn says. "That can be troublesome, because the dynamics of our data center environment changes quickly. We end up with fairly large surges in demand from several artists for a particular movie shot, which consists of a pile of individual frames."

This boils down to a heavy demand for similar or the same pieces of data, such as the computer code that creates the texture of the leaves in the rainforest on Pandora, the lush alien moon where Avatar is set. "Texture is a set of data that's commonly used to give the movie a uniform look," Gunn says. "In the past we had this code on lots of different file servers." This was inefficient because copies of the same four-terabyte master set of the movie's images resided in 10 locations throughout the data center. If the data were updated on one server, Gunn and his team would then have to make sure that same data were updated on all the servers on which it was stored.

For Avatar, instead of generating 40 terabytes of data that included 10 copies of the same information, Weta and data storage provider NetApp in Sunnyvale, Calif., devised a system that gave artists access to cached data. A cache is a temporary memory buffer used to store the data that is used most often, a setup designed to make data access faster and more efficient. Weta and NetApp created several caching servers on Weta's network to handle the large number of users requesting access to the movie's visual effects files.

All of the renderwall machines accessed these NetApp-caching file servers, which in turn accessed the master copy of the movie's images. When changes were made to the master images, these changes would automatically be reflected in the caching file servers as well, with minimal lag time. The cache servers held only the data most in demand by the movie artists, which turned out to be about 800 gigabytes of the original four-terabyte data set. "However, that 800 gigabytes of data was enough to answer more than 97 percent of all data requests," Gunn says.
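The principle Gunn describes—a small, frequently requested subset of data answering the overwhelming majority of requests—can be sketched with a simple least-recently-used (LRU) cache. This is a generic illustration of the caching idea, not Weta's or NetApp's actual implementation; the names and numbers here are made up for the example.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache: it keeps only the hottest
    entries, evicting the least recently requested when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, key, fetch_from_master):
        if key in self.store:
            self.store.move_to_end(key)  # mark as recently used
            self.hits += 1
            return self.store[key]
        self.misses += 1
        value = fetch_from_master(key)   # slow path: the master file server
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the coldest entry
        return value

# A skewed access pattern: a small "hot set" of texture files dominates,
# so a cache far smaller than the full data set answers most requests.
cache = LRUCache(capacity=20)
requests = [f"texture_{i % 10}" for i in range(900)] + \
           [f"texture_{100 + i}" for i in range(100)]
for key in requests:
    cache.get(key, fetch_from_master=lambda k: f"data:{k}")
hit_rate = cache.hits / (cache.hits + cache.misses)
```

In this toy run the hot set of 10 textures fits comfortably in the cache, so 890 of the 1,000 requests are served without touching the master copy—the same skew that let Weta's 800-gigabyte caches answer over 97 percent of requests against a four-terabyte data set.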

NetApp's FlexCache software was used to automatically balance the renderwall's throughput requirements to keep data request bottlenecks from forming. "The key to caching is to understand what data is most in demand and who's demanding it," says Brendon Howe, a NetApp vice president and general manager.
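FlexCache's internals are proprietary, but one common way to keep any single cache server from becoming a bottleneck—offered here as a generic sketch, not NetApp's actual method—is to hash each file path to a server, so every node serves a stable slice of the hot data instead of all nodes duplicating everything.

```python
import hashlib

def pick_cache_server(path, servers):
    """Route a file request to one of several cache servers by hashing
    its path, so each server handles a stable subset of the files."""
    digest = hashlib.sha256(path.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(servers)
    return servers[index]

servers = ["cache-01", "cache-02", "cache-03"]

# The same path always maps to the same server, so its cached copy
# is reused rather than being fetched and stored on every node.
chosen = pick_cache_server("pandora/leaf_texture.tx", servers)
```

Because the routing is deterministic, repeated requests for the same texture land on the same node's warm cache, while distinct files spread naturally across all the nodes.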

Gunn says he cannot talk about any of the technology Weta is using for current and future movie projects, including The Hobbit or the Planet of the Apes film Rise of the Apes, but he can say that since the company implemented the caches for Avatar, it has been able to refine the technology and make its system even more efficient.
