Sharon Begley accurately describes the problems with the U.S. health care system in “The Best Medicine.” It is gratifying that the National Institutes of Health is finally willing to fund real comparative effectiveness research. But the NIH, under pressure from Congress, has been reluctant to fund studies directly comparing the costs of competing treatments. I retired from the medical research field in part because of this reluctance to look for the most effective and least costly answers and to support research on how to reduce unnecessary care.

Why is serious cost control not part of either political party’s health care “reform” plans? To get elected, one must accept money from the very groups that require reform and regulation. Consequently, we get cosmetic reforms that never address the real issues that double the cost of health care. Instead, reductions in care for the aged and the poor are the preferred cost-control mechanisms. Until voters are freed from the election propaganda of special interests, the U.S. will continue to have the world’s most costly and least efficient health care system and the worst health outcomes of any developed nation.
Thomas M. Vogt
Bountiful, Utah

In “Evolution of the Eye,” Trevor Lamb draws together multiple lines of evidence to create a persuasive narrative for the early evolution of the vertebrate eye. But is it fair to equate historical constraints with defects when describing how vertebrate photoreceptors sit at the back of the “inside-out” retina, shadowed by blood vessels and overlying cells? Has a possible advantage to this arrangement been ruled out?
Donald Robinson
Vancouver, B.C.

Lamb replies: There are indeed clear advantages that presumably led the eye vesicle to fold inward during evolution. This infolding put the photoreceptors in close proximity to the retinal pigment epithelium, enabling the biochemical recycling of retinoids following light absorption, the attenuation of light that passes through the photoreceptors unabsorbed, and the delivery of oxygen and nutrients from the overlying choroid tissue. Other by-products of this infolding remain as “scars” of evolution, however.

In Peter Byrne’s interview with Leonard Susskind, “The Bad Boy of Physics,” Susskind insists that reality may forever be beyond reach of our understanding, partly because of his principle of black hole complementarity, which holds that there is an inherent ambiguity in the fate of objects that fall into a black hole. From the object’s point of view, it passes through the event horizon and is destroyed at the singularity at the hole’s center. To an external observer, it is incinerated at the event horizon. It seems clear that this apparent ambiguity stems from the fact that—according to general relativity—the passage of time differs for the object and the observer.

What actually happens is that from the vantage point of the observer, the object appears “frozen in time” as it approaches the event horizon (and permanently disappears from view upon the horizon’s expansion). One should not conclude that the object’s fate is ambiguous. The event is merely observed in a different way depending on the observer’s frame of reference.
Anthony Tarallo
The Hague, the Netherlands

Susskind replies: Tarallo provides a succinct account of how classical relativists described matter falling into a black hole before the early 1970s. The problem with that view dates back to Stephen Hawking’s discovery that the combination of quantum mechanics and general relativity implies that black holes evaporate. As Hawking emphasized, if bits of matter “permanently disappear from view,” then that evaporation leads to a contradiction with the rules of quantum mechanics. His solution was to give up those standard rules, but after two decades of confusion a consensus emerged that Hawking was wrong. Today the highly unintuitive principles of black hole complementarity and holography are central pillars of the quantum theory of gravity.

The event is indeed observed in a different way depending on the observer’s frame of reference. That is how two apparently contradictory things can both occur.

I would like to clarify that “reality may forever be beyond reach of our understanding” is a stronger statement than I intended. What I wanted to convey is that the hardwired concepts that evolution equipped us with are not suitable for visualizing the strange and unintuitive behavior of the quantum world, let alone the quantum-gravity world. Still, physicists have been very good at rewiring their circuits by means of abstract mathematics, which must replace old ways of visualizing the world each time we encounter something radically new.

In “The Limits of Intelligence,” Douglas Fox points out that human intelligence is limited by communication among neurons in the brain, which is limited in turn by the size of our neurons. “The human mind, however,” Fox writes, “may have better ways of expanding without the need for further biological evolution.” He goes on to suggest social interactions as a means to pool our intelligence with others. What Fox forgets to point out, however, is that as a species we have not yet learned to use our individual brains to full capacity. In fact, a typical person uses only about 10 percent of his or her brain. Rather than dwelling on the constraints imposed on the human mind by nature, wouldn’t it be more useful—as well as smarter—to figure out ways to boost and strengthen existing neuronal connections in our brains, thereby making the most of what we already possess?
Andrea Rothman
Great Neck, N.Y.

Fox replies: It has been estimated that only 1 to 15 percent of neurons in the human brain are firing at any given instant. But it does not necessarily follow that we could put the other 85 to 99 percent to use and suddenly be smarter. Letting most of our neurons lie idle most of the time is a design principle that evolution has built into our brains. Having neurons lie idle uses a lot less energy than having them spike, and so having lots of neurons that you do not use all that often actually maximizes the ratio of information processed to energy spent.

For example, the more neurons you have, the more pathways any particular nerve spike can travel. So each nerve spike inherently contains more information, and your brain can get away with firing fewer of those energy-expensive spikes. Even if you discounted all of the above and obstinately started firing every neuron in your brain every second, you would still have to pay for all those extra energy-hungry spikes, which could easily double or quadruple the calories your brain consumes. In other words, nothing is free. The brain we have has almost certainly evolved to deliver the maximum information per unit of energy spent.
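
To make that trade-off concrete, here is a rough, back-of-the-envelope illustration (not a calculation from the article; it assumes neurons behave as independent on/off units, which real neurons only approximate). If each of N neurons is active with probability p at a given moment, the population can convey roughly N × H(p) bits while paying the energy cost of roughly N × p spikes, so the payoff per spike is

\[
  \frac{\text{bits}}{\text{spike}} \approx \frac{N\,H(p)}{N\,p} = \frac{H(p)}{p},
  \qquad H(p) = -p\log_2 p - (1-p)\log_2(1-p).
\]

Dense firing (p = 0.5) works out to about 2 bits per spike; sparse firing (p = 0.1) rises to about 4.7 bits per spike, which is why keeping most neurons quiet buys more information for every energy-hungry spike.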