Here's one thing on which everyone agrees: social psychology is overwhelmingly composed of liberals—around 85 percent, according to a 2012 survey by researchers at Tilburg University in the Netherlands. The question of why this is the case, and whether it presents a problem, is more controversial. The topic has exploded over the past several years, with claims of both overt hostility and subtle bias against conservative students, colleagues and their publications being met with reactions ranging from knee-jerk dismissal to sincere self-reflection and measured methodological critique.

A paper published online last year in Behavioral and Brain Sciences by José L. Duarte of Arizona State University and his colleagues attempts to organize the existing research relevant to this debate. Two central questions arise: Is the ideological imbalance the result of true bias against conservatives or some more benign cause, such as self-selection into the field? And regardless, would more political diversity improve the validity of our science?

Duarte and his colleagues provide evidence suggesting that social psychology is not a welcoming environment for conservatives. Several studies have shown that papers are reviewed less favorably if they support conservative positions, and anonymous surveys reveal a considerable percentage of social psychologists willing to report negative attitudes toward conservatives. This should not surprise us. Everything social psychologists know about group behavior suggests that overwhelming homogeneity, especially when defined through an important component of one's identity such as political ideology, will lead to negativity toward an out-group. We also know a thing or two about confirmation bias—the tendency to view new information as supporting one's preexisting beliefs. So it would be odd to think it might not affect judgments in our own field.

But would more political diversity increase the validity of sociopsychological findings? As the authors note, this concern mainly applies to the small subset of research dealing with politically charged issues (gender, race, morality). They argue that having a range of political opinions in these domains would combat the pernicious effects of confirmation bias and groupthink by introducing more dissent.

Duarte and his colleagues identify various examples of research that they believe to be “tainted”—by assuming, for instance, that liberal views are objectively more valid than conservative ones—and conclude that “the parameters [of the field] are not set properly for the optimum discovery of truth. More political diversity would help the system discover more truth.” Conservative social psychologists, they maintain, would test different hypotheses, better identify methodologies in which liberal values are embedded, and be more critical in general of theories and data that advance liberal narratives.

Finally, the authors offer several recommendations for how to curb any negative effects that political homogeneity poses for scientific validity. First, the field should promote political diversity by changing how diversity is defined in the mission statements of our professional societies. Second, professors should be more mindful of how they treat nonliberal views and should actively encourage nonliberals to join the field. Finally, we should change research practices in ways that allow researchers to better detect where bias might be intruding on decision making.

These arguments have provoked a variety of responses in the field. And here is one more. Clearly, we should care about any evidence of bias influencing how we conduct or evaluate research. Further, if we deny the possibility of such a bias without empirical investigation, then we will have failed as responsible scientists committed to the pursuit of truth—and ironically so, given that one of social psychology's most important lessons is that we are in no position to evaluate the objectivity of our own decision making.

So what is the best solution if such a bias does threaten the validity of the field? The authors' key proposal is straightforward: add more conservatives into the mix to “diversify the field to the point where individual viewpoint biases begin to cancel each other out.” In short, we need to add the opposite kind of ideological bias to our literature. If liberals distort science one way, conservatives will distort it the opposite way, and it will all cancel out in the end.

This idea may seem counterintuitive—that to have a more reliable and valid science, we need more bias, just a different kind. But it is rooted in a simple statistical principle. Let us say we are collecting guesses of how many M&Ms there are in a glass jar that actually holds 5,000 of the candies. If we just ask a population notorious for underestimating, the average of their guesses will likely be lower than the truth (say, 4,000). And if we just ask a population notorious for overestimating, the average of their guesses will likely be higher than the truth (perhaps 6,000). But if we combine these populations, then the average of the total guesses will be closer to the truth. This is the wisdom of crowds.
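The M&M illustration can be sketched as a quick simulation. The specific numbers below (group sizes, means and spreads of the guesses) are assumptions chosen only to mirror the example in the text:

```python
import random
from statistics import mean

random.seed(42)  # fixed seed so the run is reproducible

TRUE_COUNT = 5000  # actual number of M&Ms in the jar

# Two hypothetical populations of guessers: one biased low,
# one biased high, each with the same spread around its mean.
under = [random.gauss(4000, 500) for _ in range(1000)]
over = [random.gauss(6000, 500) for _ in range(1000)]

print(round(mean(under)))         # near 4,000: biased low
print(round(mean(over)))          # near 6,000: biased high
print(round(mean(under + over)))  # near 5,000: the biases offset
```

Note that the averaging recovers the truth here only because the two biases were constructed to be equal and opposite, which is precisely the assumption questioned in the next paragraph.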

But how neatly does this principle apply to the issue at hand? What does it mean, in practice, to have the biases that are embedded in researchers' hypotheses, methods and peer reviews “cancel out” over time? If I embed liberal values in my research, and Joe Researcher embeds conservative ones, why would the ultimate outcome be more truth discovered as opposed to just more time and resources wasted, both our own and that of others who might be influenced by our ideologically distorted work? Furthermore, it remains unclear, according to other investigations, whether more ideological diversity would reduce or amplify group bias and polarization.

These questions are central to justifying the Duarte paper's claim that adding researchers who would “seek to explain the motivations, foibles, and strengths of liberals as well as conservatives” is the best way “for social psychology to correct longstanding errors on politicized topics.” Correcting old errors by adding different errors is a tough sell.

I prefer a different solution. Let's improve the validity of our science by trying to reduce error, not by introducing new kinds of it. The authors dismiss this as an impossibility; they feel that, as an ideologically homogeneous group, we are bound to repeat our mistakes. But although no silver bullet exists, researchers have indeed identified beneficial interventions to combat bias in decision making, and papers such as that from Duarte and his colleagues can be seen as a strong reminder that social psychology should make this work a priority. For example, this research emphasizes the crucial importance of instilling “an awareness of one's fallibilities and a sense of humility concerning the limits of one's knowledge,” as Scientific American Mind advisory board member Scott O. Lilienfeld and his colleagues at Emory University write in a 2009 paper.

Duarte and his colleagues provide evidence of one way in which our professional decisions might systematically deviate from an appropriate application of the scientific method. Let's be open to this possibility, address this concern and fulfill our responsibility as scientists. And if more conservatives, or libertarians, or Greens, or independents, or Whigs, or Californians, or art history majors, or single parents, or whoever are more attracted to the field as a result, then fine. We do not need more ideology in social psychology; we need less. That is the best way to discover more truth.