Early Wednesday afternoon, the supercomputer was running nine different research jobs. One of the jobs, which required 72 computer cores to perform, was a simulation of a potential polio program for India. The simulation included information about India's population (ages, population dispersal throughout the country, migration patterns and demographic data) and played out a scenario of how the disease might spread as people interacted with each other. "It's a probabilistic approach," Eckhoff says. "Some interactions lead to disease, some don't."
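The probabilistic approach Eckhoff describes can be illustrated with a toy model. The sketch below is not Intellectual Ventures' actual simulation; it is a minimal stochastic SIR-style model with made-up parameters (`p_transmit`, `contacts_per_day`, `p_recover` are illustrative assumptions), showing how random person-to-person contacts can either spread a disease or not:

```python
import random

def simulate(pop_size=1000, initial_infected=5, p_transmit=0.05,
             contacts_per_day=8, p_recover=0.1, days=60, seed=42):
    """Toy stochastic epidemic model (not IV's code).

    Each day, every infected person meets a few random contacts;
    each contact transmits the disease with probability p_transmit,
    so 'some interactions lead to disease, some don't'.
    """
    random.seed(seed)
    # Individual states: 0 = susceptible, 1 = infected, 2 = recovered
    state = [0] * pop_size
    for i in range(initial_infected):
        state[i] = 1

    history = []  # number of infected people on each day
    for _ in range(days):
        # Snapshot today's infected so new cases spread starting tomorrow
        infected = [i for i, s in enumerate(state) if s == 1]
        for i in infected:
            for _ in range(contacts_per_day):
                j = random.randrange(pop_size)
                if state[j] == 0 and random.random() < p_transmit:
                    state[j] = 1  # this interaction led to disease
            if random.random() < p_recover:
                state[i] = 2  # recovered (and immune)
        history.append(sum(1 for s in state if s == 1))
    return history

curve = simulate()
print("peak simultaneous infections:", max(curve))
```

A real population model like the one described above would replace the uniform random contacts with India-specific structure: age distribution, geographic dispersal, and migration patterns all shape who interacts with whom.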
Intellectual Ventures has plans to further expand its supercomputer by adding nodes. The company's computer facilities have room to grow and can accommodate up to 3,000 cores without needing to change the facility's power and cooling systems. The researchers estimate that they could squeeze in up to 6,000 cores if investments were made to beef up power and cooling.
The demand for supercomputer power on a budget has attracted tech vendors to the high-performance computing space that have generally played in a smaller sandbox. Microsoft (through its Windows Azure Platform), Amazon (through its Amazon Web Services), and others are offering "cloud" services, whereby they use their massive data centers to host the data, software and computing resources for their customers, who access the information they seek through their desktop computers.
Microsoft earlier this week introduced an initiative that will focus specifically on offering hosted high-performance computing resources. "Our understanding is that the Microsoft Technical Computing Group is working on bringing 'technical computing,' supercomputing, to the masses," says John-Luke Peck, an Intellectual Ventures systems engineer, who points out that his company's supercomputer uses Microsoft software that can take advantage of parallel processing. "Their solution can and will bring opportunities to researchers, students, and others, that were not previously available."
Although much has been made of computing in the cloud, this is not an option for every research group, including Intellectual Ventures. The primary reason for building their own supercomputer is that some of their projects could have national security implications, which means those data cannot be exported to foreign countries (where many service providers have data centers), says Chuck Whitmer, a consulting physicist to Intellectual Ventures and the neutronics and modeling lead for TerraPower.
A secondary reason is that a distributed, cloud-based approach introduces more latency in data transfer than an on-site system. Whereas Intellectual Ventures can, generally speaking, achieve a transfer rate of 20 gigabits per second moving data from its computers to the supercomputer, Peck says, the researchers would probably not get even one-tenth of that speed if they used a supercomputer located offsite.
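The practical impact of that gap is easy to work out. The arithmetic below assumes a hypothetical 1-terabyte dataset and takes "not even one-tenth" as 2 gigabits per second at best (both figures are illustrative assumptions, not from the article):

```python
# Back-of-envelope transfer-time comparison (hypothetical 1 TB dataset).
dataset_bits = 8 * 10**12       # 1 terabyte = 8 * 10^12 bits
onsite_gbps = 20                # on-site rate cited by Peck
offsite_gbps = 2                # "not even one-tenth" of 20 Gbps, at best

onsite_seconds = dataset_bits / (onsite_gbps * 10**9)
offsite_seconds = dataset_bits / (offsite_gbps * 10**9)

print(f"on-site:  {onsite_seconds / 60:.1f} minutes")   # ~6.7 minutes
print(f"off-site: {offsite_seconds / 60:.1f} minutes")  # ~66.7 minutes
```

At those rates, a transfer that takes under seven minutes in-house stretches past an hour over the slower link, which is a meaningful delay when jobs move large datasets back and forth repeatedly.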