During my 30-year practice of diagnostic radiology, I spent many hours educating physicians and surgeons on the importance of false positives and false negatives in the diagnostic process. No diagnostic test is 100 percent accurate. My mantra was always: don’t treat initial test results. Always confirm the diagnosis with other independent data before performing surgery or prescribing pharmaceuticals with serious side effects.
I applaud the general theme of mathematician John Allen Paulos in “Weighing the Positives” [Advances]. First he makes the valid argument that medical tests will be positive for some patients without disease. He then illustrates this with a statistical analysis of mammography on one million patients, resulting in 9,960 false positives. He makes a monumental error, however, in stating, “If the 9,960 healthy people are subjected to harmful treatments ranging from surgery to chemotherapy to radiation, the net benefit of the tests might very well be negative.”
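The arithmetic behind Paulos's figure can be made explicit. The sketch below reproduces the 9,960 false positives from one million screened patients; the prevalence, sensitivity and specificity values are my own assumptions chosen to match that number and are not given in the original article.

```python
# Minimal sketch of the screening arithmetic in Paulos's example.
# The prevalence, sensitivity and specificity below are assumed
# values chosen to reproduce the article's 9,960 false positives.

def screening_outcomes(population, prevalence, sensitivity, specificity):
    """Return (true_positives, false_positives) for a screening test."""
    diseased = population * prevalence
    healthy = population - diseased
    true_positives = diseased * sensitivity
    false_positives = healthy * (1 - specificity)  # healthy people flagged
    return true_positives, false_positives

tp, fp = screening_outcomes(
    population=1_000_000,
    prevalence=0.004,    # assumed: 4,000 of 1,000,000 have the disease
    sensitivity=0.95,    # assumed detection rate
    specificity=0.99,    # assumed: 1% of healthy patients test positive
)
print(f"false positives: {fp:,.0f}")                     # 996,000 * 1% = 9,960
print(f"positive predictive value: {tp / (tp + fp):.1%}")
```

The low positive predictive value (under 30 percent here) is exactly why, as the letter argues, a positive screen is a trigger for confirmatory testing such as biopsy, not for treatment.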
Because mammography, prostate-specific antigen levels and all other initial testing for common cancers are merely screening tests, no patient ever receives definitive treatment for cancer before these tests are confirmed by a biopsy. Cynical health care watchdogs may cite this as excessive testing, but such measures avoid the negative effects of overtreatment that Paulos invokes.
J. G. McCully
In “The Department of Pre-Crime,” James Vlahos mentions the potential danger of prejudging individuals by using predictive policing techniques but avoids discussion of a more serious potential consequence of such “crime forecasting”: the positive-feedback reinforcement of existing biases to more deeply criminalize certain populations and deepen injustice.
If police are already focusing on and arresting in some neighborhoods over others, feeding information into the machine may result in still greater police presence, more arrests, more predicted crime, still more police presence and still more arrests. If the initial bias is for factors other than actual crime, the result may be the deepening of injustice, not a reduction of crime.
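The feedback loop described above can be illustrated with a toy simulation. Two neighborhoods below have identical underlying crime rates, but one starts with twice the patrols; when the "predictive" step allocates new patrols to wherever past arrests are highest, the arrest record diverges even though actual crime never differed. All numbers are illustrative assumptions, not data from the article.

```python
# Toy model of the bias-reinforcement loop: recorded arrests scale with
# police presence, and the "prediction" sends more presence wherever the
# record shows the most arrests. Starting patrol counts and the crime
# rate are arbitrary illustrative assumptions.

def simulate(patrols, true_crime_rate=0.05, rounds=10):
    """Return (patrols, cumulative_arrests) after repeated reallocation."""
    records = [0.0 for _ in patrols]  # cumulative recorded arrests
    for _ in range(rounds):
        for i, p in enumerate(patrols):
            records[i] += p * true_crime_rate  # arrests track presence
        # "Predictive" step: send one extra patrol unit to whichever
        # neighborhood the record says has the most crime, i.e. the
        # most past arrests.
        hotspot = max(range(len(records)), key=lambda i: records[i])
        patrols[hotspot] += 1
    return patrols, records

# Neighborhood A starts with twice B's patrols for reasons unrelated
# to actual crime; every new patrol unit then goes to A.
patrols, records = simulate([20, 10])
print(patrols, [round(r, 2) for r in records])
```

Despite identical true crime rates, every reallocated unit goes to the neighborhood that started with more patrols, and its arrest record grows to more than double the other's: the data confirm the bias that produced them.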
The racial, ethnic and financial divides in crime and justice in the U.S. are well documented. The most obvious examples are the discrepancies in drug laws: use of “crack” cocaine draws far more serious penalties than use of the powdered form, the meaningful difference being that crack is used primarily in black communities.
African-Americans are perhaps eight times more likely to be incarcerated than whites. Poor people are much more likely to be convicted and sent to prison than wealthier people. Young people in poorer, nonwhite neighborhoods have a much different experience with respect to the police than whites. They are probably more likely to get a criminal record than their white counterparts in wealthier communities who engage in the same behaviors.
Once in the criminal justice system, people can lose their right to vote, see their reputations and futures tainted, and face reduced access to jobs. They are, in a sense, trained to continue and pass on a more criminal culture.
OVERRATED DOWN UNDER?
Although the gist of “The Coming Mega Drought” [Forum]—Peter H. Gleick and Matthew Heberger’s essay on the possibility of Australia’s Millennium Drought being repeated in the southwestern U.S.—rings true, the comments praising Australia’s response to its drought need a bit of context. There is unfortunately a political aversion to human reuse of water in Australia. (I have heard a specific put-down: “Would you like to drink poo water?”) The $13.2 billion being spent by the country’s five largest cities to add desalination capacity is extremely wasteful, as the same end could possibly be achieved by treatment and reuse. Desalination is also energy-intensive.
Much of Australia’s response to the Millennium Drought has been good, but some of it has been disastrously wasteful. Victoria’s previous state government, for instance, spent megadollars on a pipeline, now mothballed, to take water from agricultural irrigation land north of the Great Dividing Range so as to ensure Melbourne had water to flush down its toilets. And the cost of desalination is arguably unnecessary when subsidizing the harvesting of roof runoff was apparently not even considered.
The U.S. could learn from some of our water-saving efforts—but not all of them!
Les G. Thompson
HEBERGER REPLIES: Both letters raise valid and interesting points. There was no room to delve into these issues in the short space available. For a more detailed review of these issues, please see Chapter 5 (“Australia’s Millennium Drought: Impacts and Responses”) in The World’s Water, Vol. 7, edited by Peter H. Gleick (Island Press, 2011).
Michael Shermer’s “In the Year 9595” [Skeptic] confuses different aspects of computer intelligence: emergence of computers that can be called intelligent or conscious (two different milestones); the “singularity” (in which a replicator starts creating generations of capability faster than humans can comprehend); and transference of a human into a different “container.”
Shermer assumes that computer intelligence will emerge because we design a computer to accomplish that. But other paths include creating learning machines that develop intelligence or consciousness from this activity, as in the human brain. Or some tipping point may occur within the complexity of computers, networks and other technology. We need not understand what will result from our creations.
I anticipate computers that can pass the Turing test of consciousness [in which answers to questions cannot be distinguished from answers a human gives] by midcentury and devices that assert their own consciousness by the end of the century. John Brunner’s supercomputer in the 1968 novel Stand on Zanzibar responded to the question “Are you or aren’t you a conscious entity?” with: “It appears impossible for you to determine if the answer I give to that question is true or false.” I suspect Brunner’s computer is correct.
There are again multiple paths to singularity. Once we have silicon devices that reproduce silicon devices somewhat autonomously, one route is established. Genetic engineering of people could also lead in this direction, as could cyborg approaches.
On transferring personality to another platform, I agree with Shermer’s skepticism. It is marginally conceivable that a “clone” might be able to receive a brain transplant. But it is very likely we will have intelligent machines before we have a platform that can adopt sufficient aspects of human personality, and once we have machines that intelligent, why would they support this activity?