# The Fundamental Physical Limits of Computation

What constraints govern the physical process of computing? Is a minimum amount of energy required, for example, per logic step? There seems to be no minimum, but some other questions are open

It may at first seem inconceivable that a purposeful sequence of operations, such as a computation, could be achieved in an apparatus whose direction of motion at any one time is nearly random. This style of operation is quite common, however, in the microscopic world of chemical reactions. There the trial-and-error action of Brownian motion, or random thermal motion, suffices to bring reactant molecules into contact, to orient and bend them into the specific conformation required for them to react, and to separate the product molecules after the reaction. All chemical reactions are in principle reversible: the same Brownian motion that accomplishes the forward reaction sometimes brings product molecules together and pushes them backward through the transition. In a state of equilibrium a backward reaction is just as likely to occur as a forward one.

In order to keep a reaction moving in the forward direction, we must supply reactant molecules and remove product molecules; in effect, we must provide a small driving force. When the driving force is very small, the reaction will take nearly as many backward steps as forward ones, but on the average it will move forward. In order to provide the driving force we must expend energy, but as in our ball-and-pipe realization of the Fredkin gate the total amount of energy can be as small as we wish; if we are willing to allow a long time for an operation, there is no minimum amount of energy that must be expended. The reason is that the total energy dissipated depends on the number of forward steps divided by the number of backward steps. (It is actually proportional to the logarithm of this ratio, but as the ratio increases or decreases so does its logarithm.) The slower the reaction moves forward, the smaller the ratio will be. (The analogy of the faster and slower swimmers is valid once again: it requires less total energy to go the same net number of reaction steps forward if the reaction moves slowly.)
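The logarithmic relation described above can be sketched numerically. The function below is an illustrative toy (not from the article), assuming the standard thermodynamic form kT·ln(forward/backward) for the free energy dissipated per net forward step:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def dissipation_per_step(forward_rate, backward_rate, temperature=300.0):
    """Free energy dissipated per net forward step, in joules.

    Proportional to the logarithm of the ratio of forward to backward
    step rates: as the driving force shrinks toward zero, so does the cost.
    """
    return K_B * temperature * math.log(forward_rate / backward_rate)

# A strongly driven reaction (100 forward steps per backward step)
# dissipates far more per step than one barely nudged forward.
strong = dissipation_per_step(100.0, 1.0)
gentle = dissipation_per_step(1.01, 1.0)
print(strong / gentle)  # roughly 460: the slow reaction is far cheaper per step
```

At a ratio of exactly 1 (equilibrium) the dissipation is zero, which is why there is no minimum energy per operation if we allow unlimited time.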

We can see how a Brownian Turing machine might work by examining a Brownian tape-copying machine that already exists in nature: RNA polymerase, the enzyme that helps to construct RNA copies of the DNA constituting a gene. A single strand of DNA is much like the tape of a Turing machine. At each position along the strand there is one of four "bases": adenine, guanine, cytosine or thymine (abbreviated A, G, C and T). RNA is a similar chainlike molecule whose four bases, adenine, guanine, cytosine and uracil (A, G, C and U), bind to "complementary" DNA bases.

The RNA polymerase catalyzes this pairing reaction. The DNA helix is normally surrounded by a solution containing a large number of nucleoside triphosphate molecules, each consisting of an RNA base linked to a sugar and a tail of three phosphate groups. The RNA-polymerase enzyme selects from the solution a single RNA base that is complementary to the base about to be copied on the DNA strand. It then attaches the new base to the end of the growing RNA strand and releases two of the phosphates into the surrounding solution as a free pyrophosphate ion. Then the enzyme shifts forward one notch along the strand of DNA in preparation for attaching the next RNA base. The result is a strand of RNA that is complementary to the template strand of DNA. Without RNA polymerase this set of reactions would occur very slowly, and there would be little guarantee that the RNA and DNA molecules would be complementary.

The reactions are reversible: sometimes the enzyme takes up a free pyrophosphate ion, combines it with the last base on the RNA strand and releases the resulting nucleoside triphosphate molecule into the surrounding solution, meanwhile backing up one notch along the DNA strand. At equilibrium, forward and backward steps would occur with equal frequency; normally other metabolic processes drive the reaction forward by removing pyrophosphate and supplying the four kinds of nucleoside triphosphate.
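The pairing step the enzyme performs can be sketched as a simple lookup. The mapping below follows the complementarity rules described above; the function name is illustrative:

```python
# Each DNA base on the template strand pairs with its RNA complement:
# A pairs with U, T with A, G with C and C with G.
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(dna_template):
    """Return the RNA strand complementary to a DNA template strand."""
    return "".join(DNA_TO_RNA[base] for base in dna_template)

print(transcribe("TACGGAT"))  # AUGCCUA
```

What the sketch cannot capture is the article's central point: the real enzyme reaches this result by reversible Brownian trial and error, driven forward only by the removal of pyrophosphate.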

## Comments
1. jtdwyer 04:44 PM 6/1/11

In fact, a bit of data is not information, not even for logic gates. Data being operated on by processors are merely transient copies of data representing information - no functional information system destroys any required data, no matter how many times it's 'ANDed'.

Moreover, in equating information to bits and energy, physicists apparently aren't aware that data stored in most electronic memories ('chips') require continual energy refreshment or their 'information' will simply dissipate!

Physical memory of all types, from processor registers, caches, memory chips and device buffers to magnetic and even optical media, is continuously being released and reallocated, requiring that data be moved into newly assigned physical memory, overlaying any previously stored data. All this is accomplished without any unintentional loss of information, which is managed and interpreted by software, not logic circuits!

Did you ever try to load a current spreadsheet file into an old copy of the program? If so, you should understand the difference between data and information.

When the article states:
"Are irreversible logic gates and erasures essential to computation? If they are, any computation we perform has to dissipate some minimum amount of energy."
…it ignores the physical realities of actual information processes. For example, even if some method of instantaneous computation were devised, a computer system based on that method could operate only as fast as data could be delivered to it for computation and results retrieved from it. Moreover, all that data would be useless without properly interpretive software to finally represent it as information. Information management typically requires far more processing than computation of data does, and the enormous migration of data required for processing takes far more time than the computations themselves.

IMO, the quest for instantaneous computing is much more dependent on the energy requirements and performance of information storage, retrieval and routing than logic gates.

2. Some guy 12:03 AM 6/2/11

The AND operation and any other irreversible operation does destroy information which costs energy. I think the point made in the article was if you could make a computer that doesn't lose information then you can compute without that expense. Logic gates are the basis of computing. Software is merely a convenience.
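The contrast this comment draws can be shown directly: an AND gate collapses distinct inputs to one output, while a reversible gate such as the Toffoli gate (a standard example from the reversible-computing literature, not named in this thread) carries its inputs through to the output:

```python
def and_gate(a, b):
    """Irreversible: an output of 0 could have come from (0,0), (0,1) or (1,0)."""
    return a & b

def toffoli(a, b, c):
    """Reversible: flips c only when a and b are both 1; the inputs survive
    in the output, and applying the gate twice restores the original state."""
    return (a, b, c ^ (a & b))

# AND loses information: three distinct inputs collapse to the same output.
assert and_gate(0, 0) == and_gate(0, 1) == and_gate(1, 0) == 0

# Toffoli with c = 0 computes AND on the third wire yet keeps a and b,
# and it is its own inverse.
state = toffoli(1, 1, 0)        # (1, 1, 1): third wire now holds a AND b
assert toffoli(*state) == (1, 1, 0)
```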

3. jtdwyer in reply to Some guy 02:55 AM 6/2/11

The AND operation has two parameters: a source data field and a destination field. The source field is not altered by the operation. If the destination field is a copy of an original data field, not only is its data not destroyed, but the AND operation produces additional data.

By the author's definition of irreversible operations or erasures, only repeated operations on a single set of data could ever be performed, since no new data could ever be brought into storage and no results could be extracted - that would require erasure of previous data.

Logic gates do absolutely nothing unless instructed by software to operate on data provided by software - very convenient indeed!

4. deisner 06:08 PM 6/3/11

1. Every page of the original article contains detailed figures that add much to the authors' arguments. If you're going to publish the article (which is appreciated), please include all of it.

2. Would today's Scientific American publish an article of this technical sophistication and length (nine pages)?

3. Check out what SciAm covers looked like back in the 1980s when this was published (including the July 1985 cover):

http://www.coverbrowser.com/covers/scientific-american/42

The covers are beautiful, informative, and depict real subjects of the science journalism contained in that issue.

Compare that to the current cover:

http://www.scientificamerican.com/media/cover/cover_2011-06.jpg

Or these recent gems:

http://www.scientificamerican.com/media/cover/cover_2008-02.jpg

http://www.scientificamerican.com/media/cover/cover_2008-03.jpg

http://www.scientificamerican.com/media/cover/cover_2010-03.jpg

Why do the covers have to be so freaking goofy? This is embarrassing. What happened to this once-great magazine?

5. jtdwyer in reply to deisner 08:51 PM 6/3/11

Really excellent comment! I dug back into my 'library' and found the original print issue.

I think a more direct comparison for the cover illustrations of current issues would be OMNI magazine of the 1970s. Unfortunately, much of the content of recent issues is also comparable to OMNI...

I hadn't even realized that the original figures had been omitted - they really do enhance the reader's ability to understand the article.

However, I still object to many of the premises of the article, and to information theory at least as it has been adopted and incorporated into physics.

The article states:
"Here is another example of information destruction: the expression 2 + 2 contains more information than the expression = 4. If all we know is that we have added two numbers to yield 4, then we do not know whether we have added 1 + 3, 2 + 2, 0 + 4 or some other pair of numbers. Since the output is implicit in the input, no computation ever generates information."
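The many-to-one character of addition that the quoted passage relies on is easy to check. This small sketch (illustrative only) enumerates the ordered pairs of small non-negative integers that sum to 4:

```python
def preimages(total, limit=4):
    """All ordered pairs (x, y) with 0 <= x, y <= limit and x + y == total."""
    return [(x, y) for x in range(limit + 1)
                   for y in range(limit + 1) if x + y == total]

# Knowing only "= 4" leaves five possibilities, so the sum alone
# cannot tell you which pair produced it.
print(preimages(4))  # [(0, 4), (1, 3), (2, 2), (3, 1), (4, 0)]
```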

Again, a programmer is not at the mercy of logic gates and can maintain the source variables for any 'destructive' operation. In that case the last statement is doubly incorrect: _all_ computations generate additional information and, if source data is maintained, no information is destroyed!

Lastly, the description of the first illustration on page 49 of the original issue states:
"The "logic gates" central to the design of a chip expend energy because they discard information."

Does it require more energy for a person or a chip to calculate that 2 + 2 = 4 than it does to calculate that 1 + 1 = 2? In the second case the original values can be derived, so, according to the article, no information is lost. I guess it's presumed that a person or a chip cannot remember the numbers that produced the sum of 4...

I don't see that any relationship has been established between data or information content and any energy requirements, except as some obtuse abstract misconception.

6. bernsten69 08:51 AM 6/7/11

I see the problem as thus:

Let me go with the ball-system you describe. In order to perform a computation, some minimum amount of energy must be put into the system to move the balls forward. After the computation, in order to USE that information (i.e., the balls go into another logic gate, etc.), some amount of that initial energy must be transferred forward into whatever apparatus uses the result of the computation. The measurement of the result creates some minimum amount of entropy that makes it impossible for the system to completely reverse itself. A simple example would be some sort of light sensor at the end of your computation system that detects the presence of the ball. When the photons bounce off of the ball, they impart a certain minimum amount of energy to the ball (possibly negative if it pushes against its direction of motion). The state of the ball therefore becomes, to some extent, uncertain and more entropic. After the ball bounces against some immovable object that reflects 100% of its energy back towards the beginning part of the machine, the ball(s) will arrive back at the starting location. However, the energy that they impart to the 'first mover' that caused the balls' initial motion is now uncertain (by whatever uncertainty the measurement created). Further, there is a non-zero probability that, due to the measurement interaction, the balls do not return to their original states.

Although a self-contained, reversible computation machine is possible in theory, in the physical universe actually observing (and using) the result of the computation introduces a certain amount of entropy into the system, which makes it impossible for the system to return to its original state. The minimum amount of energy necessary to perform a computation relates to the entropy imparted by the observation of the computation - a strange but thermodynamically sound conclusion.

Adding in a bit of quantum mechanics: until we observe the output of the computation, the computation machine exists in a superposition of all possible states. In other words, no computation (in the classical sense) is possible without an observation. Since observation imparts a certain non-zero entropy into the computation machine state, no computation is possible without the expenditure of a certain minimum amount of energy, and no 'computation machine' is fully reversible in the physical universe.

7. rodovre 12:18 PM 6/13/11

Errata: Page 4, 4th Para. has been truncated.

8. wolfkiss in reply to jtdwyer 03:24 PM 6/13/11

Yes, you can save any data within any argument that is passed to some function. However, this "saving" of the inputs requires extra resources in the form of an array or data structure *before* those inputs are entered into the function. These extra resources are not abstract, because the execution of the "saving" and the function require the use of entirely separate groups of logic gates (or the same gates at different times). The former group duplicates inputs while the latter computes the programmed function. The function execution does in fact destroy the existence of its copy of the inputs. This is just a fact of Turing-equivalent discrete state computation at the physical level. Logic gates don't "remember" unless additional logic gates are employed.

But it's even worse than that, because the system must expend even *more* physical resources in order to create a pointer between the inputs and the outputs in order to maintain any relevance between these discrete and independent states. The noiseless environment that is the genius of Turing equivalent computation comes at an entropic cost, and that cost is amplified when trying to preserve, and associate, past states with present states.
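The storage cost described above can be sketched directly. The helper below (hypothetical, not from the thread) saves each pair of inputs before the irreversible operation consumes them, and the saved records accumulate with every operation:

```python
# Preserving inputs to an irreversible operation costs memory that
# grows with the number of operations performed.
history = []

def and_with_history(a, b):
    """Compute a AND b, saving the inputs so the step could be undone."""
    history.append((a, b))
    return a & b

for a, b in [(1, 1), (1, 0), (0, 1)]:
    and_with_history(a, b)

# Three operations, three saved records: the "free" reversibility is
# paid for in storage that must itself eventually be erased.
print(len(history))  # 3
```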

We forget this today because memory has become so relatively cheap. But in the good ol' days software engineering really was engineering because the ability to execute some code was very constrained by available time and memory.
