Electrochemical powering could help to reduce processors' heat dissipation, but there is a way to make a much bigger difference. Most of the heat from a chip is generated not by the switching of transistors, but by resistance in the wires that carry signals between them. The problem is not the logic, then, but the legwork. During the late 1990s, when transistors were about 250 nanometers across, 'logic' and 'legwork' accounted for roughly equal amounts of dissipation. But today, says Michel, “wire energy losses are now more than ten times larger than the transistor-switching energy losses”. In fact, he says, “because all components have to stay active while waiting for information to arrive, transport-induced power loss accounts for as much as 99% of the total”.
This is why “the industry is moving away from traditional chip architectures, where communication losses drastically hinder performance and efficiency”, says Garimella. The solution seems obvious: reduce the distance over which information-carrying pulses of electricity must travel between logic operations. Transistors are already packed onto 2D chips about as densely as they can be. If they were stacked in 3D arrays instead, the energy lost in data transport could be cut drastically. The transport would also be faster. “If you reduce the linear dimension by a factor of ten, you save that much in wire-related energy, and your information arrives almost ten times faster,” says Michel. He foresees 3D supercomputers as small as sugar lumps.
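The geometric advantage of stacking can be seen with a back-of-the-envelope calculation: if the same number of logic blocks that tile a square are instead packed into a cube, the linear dimension, and with it typical wire length, shrinks from the square root of the block count to the cube root. A minimal sketch of that arithmetic (the block count is an illustrative assumption, and wire energy and delay are taken to scale linearly with length, per Michel's rule of thumb):

```python
# Scaling sketch: linear dimension of a 2D tiling versus a 3D stack of
# the same number of unit logic blocks. Wire energy and signal delay are
# assumed to scale linearly with wire length.

def side_2d(n_blocks):
    """Side length, in block units, of a square tiling n_blocks."""
    return n_blocks ** 0.5

def side_3d(n_blocks):
    """Side length, in block units, of a cube packing n_blocks."""
    return n_blocks ** (1 / 3)

n = 10 ** 9  # a billion logic blocks (illustrative)
shrink = side_2d(n) / side_3d(n)  # linear shrink from going 3D
print(f"2D side: {side_2d(n):.0f} units, 3D side: {side_3d(n):.0f} units")
print(f"linear shrink factor: {shrink:.1f}x")
```

For a billion blocks the 3D layout is roughly thirty times shorter on a side, so by the rule of thumb quoted above, wire-related energy drops and signals arrive faster by about the same factor.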
What might 3D packaging look like? “We have to look for examples with better communication architecture,” Michel says. “The human brain is such an example.” The brain's task is demanding: on average, neural tissue consumes roughly ten times more power per unit volume than other human tissues, a power density greater even than that of an Olympic sprinter's quadriceps. The brain accounts for just 2% of the body's volume, but 20% of its total energy demand.
Yet the brain is fantastically efficient compared to electronic computers. It can achieve five or six orders of magnitude more computation for each joule of energy consumed. Michel is convinced that the brain's efficiency is partly due to its architecture: it is a 3D, hierarchical network of interconnections, not a grid-like arrangement of circuits.
This helps the brain to make much more efficient use of space. In a computer, as much as 96% of the machine's volume is used to transport heat, 1% is used for communication (transporting information) and just one-millionth of one per cent is used for transistors and other logic devices. By contrast, the brain uses only 10% of its volume for energy supply and thermal transport, 70% for communication and 20% for computation. Moreover, the brain's memory and computational modules are positioned close together, so that data stored long ago can be recalled in an instant. In computers, by contrast, the two elements are usually separate. “Computers will continue to be poor at fast recall unless architectures become more memory-centric”, says Michel. Three-dimensional packaging would bring the respective elements into much closer proximity.
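The volume budgets quoted above can be laid side by side; the figures below simply restate the article's numbers, with the computer's logic share written out as one-millionth of one per cent:

```python
# Volume budgets from the text, as fractions of total volume.
computer = {
    "heat transport": 0.96,
    "communication": 0.01,
    "logic": 1e-8,  # one-millionth of one per cent
}
brain = {
    "energy supply / thermal": 0.10,
    "communication": 0.70,
    "computation": 0.20,
}

for name, budget in (("computer", computer), ("brain", brain)):
    for part, frac in budget.items():
        print(f"{name:8s} {part:24s} {frac:>11.6%}")

# How much more of its own volume the brain devotes to computation:
ratio = brain["computation"] / computer["logic"]
print(f"computation-volume ratio: {ratio:.0e}")
```

The comparison makes the disparity concrete: by these figures, the brain devotes about twenty million times more of its volume to actual computation than the computer does.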
All of this suggests to Michel that, if computers are going to be packaged three-dimensionally, it would be worthwhile to try to emulate the brain's hierarchical architecture. Such a hierarchy is already implicit in some proposed 3D designs: individual microprocessor chips (on which the transistors themselves could be wired in a branching network) are stacked into towers and interconnected on circuit boards, and these boards, in turn, are stacked together, enabling vertical communication between them. The result is a kind of 'orderly fractal' structure, a regular subdivision of space that looks the same at every scale.