Eleventh-hour negotiations aren't uncommon in Washington, D.C., but the most recent duel over the debt limit seems especially tense. Unless the debt ceiling is raised from its current $14.3 trillion, or the federal budget is miraculously balanced, the U.S. will default on its financial obligations on August 2, leading to a credit downgrade, delayed government payments and other serious economic troubles. Debt default is an outcome that's almost unanimously opposed, so the failure of decision-making feels especially frustrating. With such complex political gamesmanship at play, neuroscience and game theory may offer some insight into the stalemate, suggesting that a sense of moral superiority could be disrupting a natural tendency to cooperate.
From an immediate political perspective, the primary cause of the standoff is that Republicans won't raise the debt ceiling without major spending cuts, and they're unwilling to accept any tax increases as part of a deal. Needless to say, this position is a nonstarter for negotiation with the Democrats, many of whom want to increase tax revenues to soften the otherwise draconian cuts. At some point, a mutual decision will be made, leading both parties either to claim at least partial victory or to pass the responsibility to someone else.
Viewed with scientific detachment, whatever compromise eventually emerges will be a remarkable product of the complex neuronal calculus that goes into collective decision-making. Each brain in the House and Senate is trying to master an intricate game of strategy. Risk is being measured against payoff, stakes are being continually reassessed, and all of these calculations are updated fluidly as new information becomes available. Moreover, each congressional brain has to run a simulation of other brains to determine whether cooperation is likely—a feat with its own host of computational complexities.
Although researchers have long thought that these sorts of neural calculations must in principle be going on, only in the last decade or so have neuroscientists studied them as the decisions are actively implemented in the human brain. To do such analyses, scientists deploy brain-scanning devices on volunteers as they participate in simple games of strategy that mimic essential features of more natural negotiating scenarios.
Perhaps the best known of these games is the prisoner's dilemma. First formulated in 1950 as a way of thinking about nuclear-war strategy, this game imagines that you and a partner are being separately interrogated about your involvement in a crime. If you confess but your partner stays silent (or vice versa), you get a light jail sentence (a year, say), whereas the silent partner gets a steep penalty (four years in jail). If you both confess, you each get three years. If you're both silent, you each get two years.
If you simply calculated the relative payoffs (or punishments) for the four possible outcomes, behaved strictly rationally, and assumed your partner adhered to the same self-centered utilitarian strategy, the game would end in mutual confession—basically, the "two distrusting jerks" scenario. This result is formalized as the famous Nash equilibrium (with no offense to famed mathematician John Nash).
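The payoff logic above can be checked mechanically. Here's a short sketch (the function names and data layout are my own, not from any standard library) that enumerates best responses for the article's jail terms and confirms that mutual confession is the game's only Nash equilibrium:

```python
# Payoffs from the article's prisoner's dilemma, as years in jail (lower is
# better). Keys: (my_move, partner_move) -> (my_years, partner_years).
YEARS = {
    ("confess", "silent"):  (1, 4),
    ("silent",  "confess"): (4, 1),
    ("confess", "confess"): (3, 3),
    ("silent",  "silent"):  (2, 2),
}

MOVES = ("confess", "silent")

def best_response(partner_move):
    """The move that minimizes my own jail time given the partner's move."""
    return min(MOVES, key=lambda m: YEARS[(m, partner_move)][0])

def nash_equilibria():
    """A profile is a Nash equilibrium when each move is a best response to the other."""
    return [(a, b) for a in MOVES for b in MOVES
            if a == best_response(b) and b == best_response(a)]

print(nash_equilibria())  # [('confess', 'confess')]
```

Note that confessing is a best response no matter what the partner does, which is exactly why the "rational" outcome (three years each) is worse for both players than mutual silence (two years each).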
What's interesting, though, is that humans are really not distrusting jerks at all. In fact, around 50 percent of prisoner's dilemma games played by real people end in mutual cooperation, with both partners showing tight-lipped solidarity. In essence, our brains seem primed for a level of cooperation beyond what's theoretically in our rational best interest. Without this bias, our species might never have hit upon such fundamentally trust-based behaviors as reciprocal food sharing—a thought that puts present political squabbles into perspective.
To put it another way, the neural operations in social decision-making involve calculating trust as well as implementing strategy. In support of this claim, several recent functional MRI brain scanning studies have shown that activation of the ventral striatum—a deep structure in the forebrain—predicts the intention to trust a partner in games similar to the prisoner's dilemma.
The current political reality, though, quite evidently is an example not of trust, but rather its breakdown. Tellingly, one of the proposed legislative maneuvers for ending the debt crisis is a so-called resolution of disapproval, a baroque and slickly crafted instrument that essentially bundles all the political blame, passes it off to the president, and ultimately gets the needed legislation passed by presidential veto.
Although trust can erode for many reasons, one set of brain-imaging studies points to perceived moral character—that perennial political target—as a culprit. Work by psychologists Mauricio Delgado of Rutgers University and Elizabeth Phelps of New York University has shown that perceptions of a fictitious partner's trustworthiness are influenced by character narratives describing that partner. Specifically, subjects took more risks when interacting with praiseworthy partners as opposed to morally suspect ones, even though both kinds of partners were equally likely to reciprocate. Functional imaging data from the subjects' brains told a similar tale, showing that activity in the striatum was dampened by information about moral character. In a sense, strong perceptions about a partner's moral character can reduce our reliance on learning-related feedback signals in games of trust.
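One way to picture that dampening effect is with a toy prediction-error learner. The sketch below is my own illustrative assumption, not the actual Delgado and Phelps model: it estimates a partner's probability of reciprocating, and a strong prior belief about moral character is modeled as an optimistic starting estimate plus a reduced learning rate, so the same feedback barely shifts the trust estimate.

```python
# Illustrative sketch (an assumption for this article, not the published model):
# a Rescorla-Wagner-style learner tracking the probability that a partner
# reciprocates. Strong moral-character priors are modeled as a high starting
# estimate and a dampened learning rate.

def update(estimate, outcome, learning_rate):
    """Nudge the estimate toward the observed outcome (1 = reciprocated)."""
    return estimate + learning_rate * (outcome - estimate)

def run(prior, learning_rate, outcomes):
    """Play out a sequence of rounds and return the final trust estimate."""
    estimate = prior
    for outcome in outcomes:
        estimate = update(estimate, outcome, learning_rate)
    return estimate

outcomes = [1, 0, 0, 0, 0]  # partner defects in 4 of 5 rounds

# Neutral observer: uncommitted prior, ordinary learning rate.
print(round(run(0.5, 0.3, outcomes), 2))   # 0.16

# "Praiseworthy" partner: optimistic prior, dampened learning rate --
# the same stream of betrayals leaves the trust estimate largely intact.
print(round(run(0.9, 0.05, outcomes), 2))  # 0.74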
Obviously, these studies don't add up to any specific diagnosis of what's gone wrong in the Beltway debt-limit standoff. Real negotiation is still complex beyond true scientific study, and no analysis of parlor games played in a brain scanner seems likely to replace the usual social and historical frameworks for understanding political behavior. Still, it's heartening to think that researchers can peek behind the curtain and look at the neural mechanics behind our agreements and disagreements as well as the social factors that influence decision-making. As this scientific approach matures, a day may come when we understand enough about ourselves to find new common ground with one another.