Hillary Clinton is heading for a landslide victory over Donald Trump. But wait. Trump is pulling ahead and could take the White House. No, Clinton has a clear lead and is gaining ground. Nearly every day, a new poll comes out touting a different result, leaving voters wondering what to believe.

The results of recent elections give even more reason for scepticism. In 2013, the Liberal Party of British Columbia confounded expectations when it won the Canadian province's election. The following year, polls overestimated support for Democrats in the US congressional elections. And this year, some pollsters underestimated Britons’ support for leaving the European Union in the Brexit referendum. These blunders have led some political commentators to say that polls are headed for the graveyard.

“It’s harder and harder to find people willing to pay for any polls, given their poor performance this year and last year. They’re heavily discredited in the UK,” says Stephen Fisher, a political sociologist at the University of Oxford.

As the US presidential election approaches, pollsters are scrambling to improve their methods and avoid another embarrassing mistake. Their job is getting harder. Until about a decade ago, polling organizations could tap into public opinion simply by calling people at home. But large segments of the population in developed countries have given up their landlines for mobile phones, which makes them harder for pollsters to reach because people often do not answer calls from unfamiliar numbers.

 
[Figure omitted. Credit: Nature, October 19, 2016, doi:10.1038/538304a; sources: Centers for Disease Control and Prevention; Pew Research Center surveys, 2000–15]

So the pollsters are fighting back. They are fine-tuning their methods for reaching mobile phones, using statistical tools to correct for biases and turning to online surveys. The proliferation of polls has spurred the rise of poll aggregators, such as FiveThirtyEight, RealClearPolitics and the Huffington Post, which combine and average the results to develop more nuanced forecasts.

“Polling’s going through a series of transitions. It’s more difficult to do now,” says Cliff Zukin, a political scientist at Rutgers University in New Brunswick, New Jersey. “The paradigm we’ve used since the 1960s has broken down and we’re evolving a new one to replace it—but we’re not there yet.”

Changing times

The ingredients of an accurate poll are fairly simple, but they can be hard to find, and everyone uses a different recipe to pull them all together. Start by recruiting a large group of people—preferably more than 1,000. The sample should be split evenly between women and men. And it should reflect the population’s mix in terms of race, education, income and geographical distribution, to represent these groups’ different views and voting behaviours. Once the data are in hand, pollsters analyse the gaps in their sample and weight the results to account for groups that are under-represented.
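To make the weighting step concrete, here is a minimal sketch with invented numbers, not data from any real poll. Respondents are grouped into cells (here, only by education), and each cell is weighted by its population share divided by its sample share, so that under-represented groups count proportionally more:

```python
# Minimal post-stratification weighting sketch. All numbers are invented
# for illustration only.

# Hypothetical raw responses: (education group, supports candidate A?)
responses = [
    ("college", True), ("college", True), ("college", False),
    ("college", True), ("college", False), ("college", True),
    ("no_college", False), ("no_college", True), ("no_college", False),
    ("no_college", False),
]

# Assumed population shares (in practice, taken from census data).
population_share = {"college": 0.35, "no_college": 0.65}

# Shares actually observed in the sample.
n = len(responses)
sample_share = {
    group: sum(1 for g, _ in responses if g == group) / n
    for group in population_share
}

# Weight for each cell: population share / sample share. The
# over-sampled college group gets a weight below 1, the under-sampled
# no-college group a weight above 1.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Weighted estimate of support for candidate A.
weighted = sum(weights[g] for g, s in responses if s) / sum(
    weights[g] for g, _ in responses
)

print(f"raw support:      {sum(s for _, s in responses) / n:.1%}")  # 50.0%
print(f"weighted support: {weighted:.1%}")                          # 39.6%
```

Real polls weight on several dimensions at once (sex, race, age, income, region), often with an iterative procedure known as raking, but the principle is the same.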

“Polling is an art, but it’s largely a scientific endeavour,” says Michael Link, president and chief executive of Abt SRBI polling firm in New York City and former president of the American Association for Public Opinion Research.

It’s also a process that is conducted behind closed doors. Polls are run by a mix of companies and academic groups, but they are generally commissioned by news organizations and political groups. As a result, pollsters rarely share the details of their techniques. “There’s a lot of people who make a living doing this, and whose reputations are set on it,” says Jill Darling, survey director at the University of Southern California’s Center for Economic and Social Research in Los Angeles.

The data-gathering part of polling used to be relatively easy in developed countries. Pollsters simply called people at home: at first by hand and later, in the United States, with automatic diallers. But landlines are quickly going the way of the telegraph. In 2008, more than eight in every ten US households had a landline; by 2015, that number had dropped to five in ten, and it continues to decline. In the United Kingdom, landlines remain more common, but the fraction is dropping, and as of this year 53% of those who have one say that they never or rarely use it.

The mobile revolution has hit pollsters especially hard in the United States, where federal regulations require that mobile phones be dialled manually. And people often do not answer calls to their mobiles when an unfamiliar number pops up. In 1997, pollsters could get a response rate of 36%; that has now dropped to 10% or less. As a result, pollsters are struggling to reach as many people as before, and costs are rising: each mobile-phone interview costs about twice as much as a landline one. There is also a ‘non-response bias’, because the people who do answer pollsters’ calls are not always a representative sample, says Frederick Conrad, head of the Program in Survey Methodology at the University of Michigan in Ann Arbor.

Despite the expense and difficulty of calling people, this method still produces the most accurate results, says Courtney Kennedy, director of survey research at the Pew Research Center in Washington DC. US pollsters now call mobile phones for more than half of their samples, and that fraction will probably rise as more and more people ditch their landlines.

Pollsters are also grappling with another major problem—predicting who will vote. That is likely to be unusually difficult in the United States this year because many voters aren’t enamoured of the leading candidates, who have historically low approval ratings.

US national elections typically have turnouts of 40–55%, lower than those in most other developed countries, according to the Organisation for Economic Co-operation and Development. In the United Kingdom, by contrast, 60–70% of the eligible population usually votes. Richer, older and better-educated people, and those who voted in the previous election, are more likely to vote, but the pattern varies with each election.

Pollsters typically base their estimates of turnout on a proprietary mix of factors: respondents’ voting history, whether they’re registered with a political party, their engagement with politics, whether they say they’re planning to vote, and demographic and socioeconomic characteristics. “‘Likely voter’ modelling is notoriously the secret-sauce aspect of polling,” says Kennedy.

It’s also one of the most difficult parts of accurate polling. In the 2014 mid-term US elections, most pollsters’ forecasts of the Democratic vote missed the mark. Turnout was just 36%, the lowest in more than 70 years, which disproportionately depressed votes for Democratic candidates.

In the 2015 UK general election, most major pollsters, including ICM Unlimited and YouGov, underestimated the turnout of older, Conservative Party voters, according to an inquiry published in March by the British Polling Council and the Market Research Society. The inquiry also found that pollsters have systematic biases in their samples: they tend to include too many Labour supporters at the expense of Conservative ones. Weighting and adjustment procedures applied to the raw data did not mitigate this bias. Another source of error identified in the report is ‘herding’: pollsters consciously or unconsciously adjusting their polls so that their results resemble those released earlier, causing the polls to converge.

The bias in favour of left-leaning parties is not unique to the United Kingdom. The inquiry analysed more than 30,000 polls from 45 countries and found a similar, although smaller, bias elsewhere. The report did not explain why, but some pollsters in the United States and Britain attribute the trend to inaccurate predictions of who will turn up to vote.

In the case of the United Kingdom, the panel recommended that pollsters work to obtain more representative samples and to investigate better ways to weight them.

Pollsters are also trying to improve their accuracy by changing how they model likely voters. In the past, they treated their samples in a binary fashion, determining who would turn out on election day and who would stay at home. Now they tend to assign each respondent a probability of voting.
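A toy comparison of the two approaches, with invented respondents and turnout probabilities, might look like this:

```python
# Hypothetical respondents: (supports candidate A?, estimated probability
# of voting). In practice, turnout probabilities come from a likely-voter
# model built on voting history, stated intentions, demographics and so on.
respondents = [
    (True, 0.9),   # e.g. older voter who voted in the last election
    (True, 0.4),
    (False, 0.8),
    (False, 0.3),  # e.g. young, not registered with a party
    (True, 0.6),
]

# Old, binary approach: keep only "likely voters" above a cutoff and
# discard everyone else.
cutoff = 0.5
likely = [(s, p) for s, p in respondents if p >= cutoff]
binary_estimate = sum(s for s, _ in likely) / len(likely)

# Probabilistic approach: every respondent counts, weighted by their
# estimated probability of voting.
prob_estimate = sum(p for s, p in respondents if s) / sum(
    p for _, p in respondents
)

print(f"binary likely-voter estimate:  {binary_estimate:.1%}")  # 66.7%
print(f"probability-weighted estimate: {prob_estimate:.1%}")    # 63.3%
```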

More transparency could help. Pollsters in the United Kingdom share their methodologies with the British Polling Council, which aided the recent investigation and has led to fruitful debates about ways to improve accuracy, says Fisher, who participated in the inquiry.

In data we trust

Even if polling organizations manage to collect a representative sample, they can’t always trust the responses that people give them. One of the starkest examples in the United States came in the 1982 election for California’s governor. Los Angeles Mayor Tom Bradley, an African American, was consistently leading in the polls but lost the election by a narrow margin. Afterwards, pollsters suggested that the discrepancy arose because some voters might not have wanted to admit that they would not support an African American candidate. This is now known as the ‘Bradley effect’.

A variation on this is the ‘shy Tory effect’, named after Conservative-leaning voters in the United Kingdom who hide their views or misreport their intentions to pollsters. That makes some experts wonder whether a ‘shy Trump’ effect might come into play in the forthcoming US election, with some voters embarrassed about or reluctant to admit their support for Trump or opposition to Clinton. But most major pollsters doubt that this will be a big factor: polls before the Republican primary elections gauged support for Trump accurately, and he has performed similarly in online polls and in ones that use live interviewers.

Advanced technology may allow pollsters to get a better read on voters’ true feelings. Online polls, for instance, allow people to respond at their convenience and state their intentions without fear of judgement from a live interviewer. They also make it easy to collect thousands of responses in a short time and at a lower cost: about US$30,000 for a 12-minute survey as opposed to more than US$70,000 for a similar telephone one, says Chris Jackson, vice-president at Ipsos Public Affairs, a global market-research and polling firm in Washington DC.

But online polls have their own challenges. They typically recruit by advertising on popular websites, so people self-select into participating, which can build bias into the samples. Pollsters don’t know exactly who is missing from the poll, which makes it harder to estimate the reliability of the final numbers.

Some pollsters have begun experimenting with polls conducted through text messages. As with online polls, people can choose to respond whenever they want and avoid talking to a person. Michael Schober, a psychologist at The New School for Social Research in New York City, and his colleagues tested the differences between live and text interviews. “The lack of time pressure and social pressure of texting leads people to disclose more information and be more honest,” he says.

Another approach is to assemble a panel of people to survey repeatedly. The most prominent is the University of Southern California Dornsife/Los Angeles Times presidential election tracking poll, which launched in July. Its pollsters randomly selected people on the basis of information from the US Postal Service and contacted them by mail, recruiting 3,000 people to participate each week in online surveys. Unlike other pollsters, they need not continually recruit new respondents, and their response rate is at least 15%, higher than for telephone polls. They have enough data to know the demographics of their sample very well and can have confidence in their trends, says Darling, who leads the survey.

However, if the panel turns out to be biased, every poll drawn from it will carry that bias for as long as the panel runs. This may be the case with this year’s poll, which leans slightly towards Trump, according to the aggregator FiveThirtyEight.

To reduce the risk of such bias, researchers are experimenting with a new type of poll. Andrew Gelman, a statistician and political scientist at Columbia University in New York City, and his colleagues collected a very large sample of people and divided it into tens of thousands of demographic categories. The researchers tested this extreme-categorization method on polling data from the 2012 US presidential election, showing that it produced accurate forecasts of state-level results by using highly tuned weights to correct for the non-representative sample [3]. However, this sophisticated method takes much more time and requires more detailed data than are usually gathered.
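This strategy is widely known among statisticians as multilevel regression and post-stratification. A drastically simplified sketch of the idea, with invented numbers, is below: estimate support within each demographic cell, shrink sparse cells towards the overall mean (a crude stand-in for the multilevel regression), then reweight each cell by its share of the population.

```python
# Simplified post-stratification over demographic cells. All numbers
# are invented for illustration.

# cell -> (respondents in cell, respondents supporting candidate A)
sample = {
    ("18-29", "college"):    (40, 26),
    ("18-29", "no_college"): (5, 2),    # tiny cell: too noisy on its own
    ("65+", "college"):      (80, 30),
    ("65+", "no_college"):   (120, 50),
}

# Assumed population share of each cell (e.g. from census data).
population = {
    ("18-29", "college"):    0.10,
    ("18-29", "no_college"): 0.25,
    ("65+", "college"):      0.20,
    ("65+", "no_college"):   0.45,
}

total_n = sum(n for n, _ in sample.values())
overall = sum(k for _, k in sample.values()) / total_n

def cell_estimate(n, k, prior_weight=20):
    """Shrink small cells towards the overall mean; big cells dominate."""
    return (k + prior_weight * overall) / (n + prior_weight)

# Post-stratify: weight each cell's estimate by its population share.
forecast = sum(
    population[cell] * cell_estimate(n, k) for cell, (n, k) in sample.items()
)
print(f"post-stratified forecast: {forecast:.1%}")  # 43.3%
```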

It could be a glimpse of the future, however. ‘Big data’ are where more accurate results will come from, says Joe Twyman, head of political and social research for Europe, Middle East and Africa at YouGov. “It will be about linking a respondent’s voting data with Internet usage, other survey data, and demographic information, creating a much richer picture of that person, which will allow for more accurate granulations of predictions,” he says. Pollsters would use this information to assess who is likely to vote and to analyse the survey results—for example, by determining which issues most concern different voters.

The low cost of Internet polling has triggered a surge in the number of polls of varying quality, making it hard for journalists, policymakers and others to separate the wheat from the chaff. Poll aggregators attempt to weight polls on the basis of their past reliability, but past performance doesn’t guarantee future success, especially if low-quality and short-lived polling outfits are included in the mix.
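Aggregators’ actual models are proprietary and far more elaborate, but the basic mechanics of a reliability-weighted average can be sketched as follows, with hypothetical polls and scores:

```python
import math

# Hypothetical recent polls: (candidate A's share, sample size,
# reliability score in [0, 1] derived from the pollster's track record).
polls = [
    (0.47, 1200, 0.9),  # established phone pollster
    (0.44, 2500, 0.6),  # online panel
    (0.50, 600,  0.3),  # new outfit with no track record
]

def weight(sample_size, reliability):
    # Larger samples help, but with diminishing returns (hence sqrt),
    # and a poor track record discounts the poll heavily.
    return reliability * math.sqrt(sample_size)

total_w = sum(weight(n, r) for _, n, r in polls)
aggregate = sum(share * weight(n, r) for share, n, r in polls) / total_w
print(f"aggregated estimate: {aggregate:.1%}")  # 46.0%
```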

Contrary to bold claims of the death of polls, practitioners say that the field is merely going through a transition, though they recognize that some barriers are insurmountable. As election seasons lengthen and people find more reasons to survey public opinion, the number of polls will continue to rise. But pollsters can only ask so much of people, says Gelman. “There’s a non-renewable resource of public trust.”

This article is reproduced with permission and was first published on October 19, 2016.