“Heart Therapy,” by Gabor Rubanyi, explains how the heart can develop new blood vessels in response to blockages in the coronary arteries (although it does not do so enough to get around the blockages for most patients). It also describes investigations into how to promote these so-called collateral vessels.
The article answers a question I have had for 28 years. Until my first, minor heart attack in 1989, I had been running four miles a day, five days a week, for more than a decade. Then, suddenly, I was unable to run at all. My doctor, knowing my family health history, suspected trouble, and he was right. The cardiology staff found it pretty exciting when I experienced a second heart attack while hooked up to an electrocardiogram on a treadmill; I simply became very exhausted.
After undergoing double-bypass surgery, which was a breeze at age 43, I went home and mowed the lawn. When I asked the surgeon about muscle damage, he indicated that the damaged area was only about 10 millimeters across, even though one of my coronary arteries was completely blocked and had been for a very long time. He also said that my collateral circulation was highly developed.
I had always wondered what was primarily responsible for this circulatory savior. The possibilities were vigorous exercise or natural processes. Rubanyi makes it clear that both were contributors. Hence, I can thank my usually terrible, but here lucky, genes and my exercise program.
DANGERS OF AI
Scientific American should offer a counterbalance to the complacency of Gordon Briggs and Matthias Scheutz's assertion that “superintelligent machines that pose an existential threat to humanity are the least of our worries” [“The Case for Robot Disobedience”] (echoed in Michael Shermer's casually dismissive “Apocalypse AI” [Skeptic] in the March issue). While the benefits of technology's expanding reach are abundant, many serious thinkers—including Stephen Hawking, Elon Musk and Bill Gates—have expressed fundamental concerns about the possibility that machines could come to exceed human capacity for thinking.
Scientific American has a broader role than cheerleading for new science and technology. It would be feasible for it to organize leading analyses addressing potential longer-range catastrophic changes, such as the “singularity”: the point at which machine intelligence exceeds human intelligence. It would also be important to lay out forecasts and policy responses for the current reality of AI displacing not just “mundane” blue-collar labor but also highly skilled professional work, such as medical diagnosis or legal research. Our society seems to be turning a corner: the overblown concerns about automation's threat to jobs from the 1950s have morphed into a broader and very real challenge—one that requires a greatly expanded approach to retraining adult workers and updating education to support lifetime flexibility across occupations.
BRIAN J. TURNER
In “When Facts Backfire” [Skeptic], Michael Shermer discusses cognitive dissonance—in which holding two incongruous thoughts at the same time creates an uncomfortable tension, prompting people to spin-doctor facts to reduce it—and the backfire effect—in which corrections to an erroneous idea that conflict with a person's worldview or self-concept cause that person to embrace the error even more.
What would be the evolutionary advantages that would cause us to develop these two brain attributes? In our current time, I can see only the disadvantages.
SHERMER REPLIES: First, the backfire effect is just a description of an observation of what people do in response to facts counter to their beliefs, not a brain attribute. Cognitive dissonance is a better descriptor for an internal state, although we should remember that all such descriptions are inferences from behavior, language, brain scans, and so on, not direct observations of someone else's mind.
Second, there are good reasons to think that cognitive dissonance has an evolutionarily adaptive purpose, as social psychologist Carol Tavris outlined it in an e-mail to me: “When you find any cognitive mechanism that appears to be universal—such as the ease of creating ‘us-them’ dichotomies, ethnocentrism (‘my group is best’) or prejudice—it seems likely that it has an adaptive purpose. In these examples, binding us to our tribe would be the biggest benefit. In the case of cognitive dissonance, the benefit is functional: the ability to reduce dissonance is what lets us sleep at night and maintain our behavior, secure that our beliefs, decisions and actions are the right ones.
“The fact that people who cannot reduce dissonance usually suffer mightily (whether over a small but dumb decision or because of serious harm inflicted on others) is itself evidence of how important the ability to reduce it is.”
In “Keep Hospitals Weapons-Free” [Forum], Nathaniel P. Morris argues that “Tasers and guns issued to security guards” at hospitals “do more harm than good.” I worked in security at a zoo for about a decade, and we carried only pepper spray. But given the possibility of animals escaping, we kept a few shotguns in locked cases at various locations around the zoo. Perhaps something like that approach could work in hospitals as well.
There is a clear connection between Clara Moskowitz's article about an investigation of whether space and time could be made of tiny informational building blocks [“Tangled Up in Spacetime”] and Juergen A. Knoblich's article on growing part of the developing human brain in the lab for research [“Lab-Built Brains”]. In both cases, scientists are trying to generate insight by constructing “toy models” of something out there in the real world (the universe in one case, the brain in the other).
Of course, in the case of spacetime, the model is a theory, whereas in the case of the brain, the model is a so-called organoid that enjoys its own existence. Yet the two are not that different. Applying the holographic principle that Moskowitz describes—in which certain physical theories may be equivalent to ones applicable to a lower-dimensional universe—we could say that one kind of conception is a 2-D version of the other. The question remains, however, of which is which.
Director, Center for Creative Inquiry
“Data Deliver in the Clutch,” by Steve Mirsky [Anti Gravity], refers to Daniel Kahneman as a Nobel economist. His field is primarily psychology, but he shared the 2002 economics Nobel Prize for his work in behavioral economics.