
How Did the Polls get the US Elections so Badly Wrong?

14 Dec

Written by Carl Davidson, Head of Insight at Research First

[Image: USA map vote and elections patriotic icon pattern]

The day after Donald Trump won the US Presidential Election, The Dominion Post ran a headline saying ‘WTF’. It left off the question mark so as not to cause offence (and asked us to believe that they really meant ‘Why Trump Flourished’). But the question lingers regardless.

For those of us in the research business, WTF? was quickly followed by ‘how did the polls get it so wrong?’.

It’s a good question. And coming hot on the heels of the polls’ failure to predict Brexit, an important one.

People have attempted to answer this question in a number of ways, and each of them tells us something a little different about the nature of polling, the research industry, and voters in general.

The first response might be called the ‘divide and conquer’ argument. This is the one that says not all the polls got the election result wrong. The USC/LA Times poll, for instance, tracked a wave of support for Trump building and predicted Trump’s victory a week out. Similarly, the team at Columbia University and Microsoft Research also predicted Trump’s victory. But this seems to me to be a disingenuous argument because most polls clearly got the result wrong. And with enough polls running, some of them have to give the contrary view. Another way to think about this is that even a broken watch is right twice a day.

There is a variation on this argument that we might call ‘divide and conquer 2.0’. This is the argument that says people outside of the industry misunderstood what the polls actually meant. The best example here might be Nate Silver’s FiveThirtyEight.com. Before the election, 538 gave Trump about a thirty percent chance of winning. To most people, that sounds like statistical shorthand for ‘no chance’. But to statisticians, it means that if we ran the election ten times, Trump would win three of them. In other words, Silver was saying all along that Trump could win; it was just more likely that Hillary would. As Nassim Nicholas Taleb might put it, the problem here is that non-specialists were ‘fooled by randomness’. There is merit in this argument, but it seems too much of a ‘bob each way’ position (and note how it shifts the fault from the pollsters to the pundits).
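
If you want to see what ‘about a thirty percent chance’ means in practice, here is a minimal simulation sketch. It is an illustration only: the flat 0.3 probability is an assumption standing in for 538’s far more elaborate model.

```
import random

# Treat each simulated "election" as a weighted coin flip, with the
# underdog winning 30% of the time (roughly the chance 538 gave Trump).
TRIALS = 10_000
UPSET_PROBABILITY = 0.3  # assumed, for illustration only

upsets = sum(random.random() < UPSET_PROBABILITY for _ in range(TRIALS))
print(f"Underdog wins {upsets} of {TRIALS} simulated elections "
      f"({upsets / TRIALS:.0%})")
```

Run it and the underdog wins roughly 3,000 times out of 10,000. An outcome with a thirty percent chance is a long way from ‘no chance’.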

The next argument might be called ‘duck and run’. This is the argument that says the fault lies with the voters themselves because they probably misrepresented their intentions. Pollsters typically first ask people if they intend to vote, and only then who they’re going to vote for. But, of course, there’s no guarantee the answer to either is accurate. This seems to be the explanation that David Farrar (who is one of New Zealand’s most thoughtful and conscientious pollsters) reached for when approached by Stuff. Given how many Americans didn’t vote in the election, expect to hear this argument often. But surely all this really means is that the pollsters asked the wrong questions, or asked them of the wrong people?

A variation on this ‘duck and run’ argument is that polls are at their least effective when a tight race is being run. On election night, nearly 120 million votes were cast, but the difference between the two candidates was only about 200,000 (less than one third of one percent). It could be that no polling method is sufficiently precise to work under these conditions. If you want to try this line of argument in the office, award yourself a bonus point for referring to the ‘bias-variance dilemma’.
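
To see why a margin that small sits beyond the precision of conventional polling, here is a back-of-the-envelope sketch. The vote figures are those quoted above; the 1,000-person sample size is an assumption for illustration, not a figure from any particular poll.

```
import math

# Figures as quoted in the article
votes_cast = 120_000_000
vote_gap = 200_000
print(f"Winning margin: {vote_gap / votes_cast:.2%}")  # roughly 0.17%

# Rough 95% margin of error for a poll of an assumed 1,000 respondents,
# using the standard worst-case formula for a proportion near 50%
sample_size = 1_000
moe = 1.96 * math.sqrt(0.25 / sample_size)
print(f"Typical poll margin of error: ±{moe:.1%}")  # roughly ±3.1%
```

A gap more than an order of magnitude smaller than a single poll’s sampling error is, almost by definition, a gap that individual polls cannot resolve.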

But I think all of these arguments are a kind of special pleading. Worse than that, much of what the industry is now saying looks like classic hindsight bias to me. This is also known as the ‘I-Knew-It-All-Along Effect’, which describes the tendency, after something has happened, to see the event as having been inevitable (despite not actually predicting it). While it’s easy to be wise after the fact, the point of polling is to provide foresight, not hindsight.

And no matter how well intentioned any of these arguments might be, it’s hard not to think we’ve seen them all before. Philip Tetlock’s masterful Expert Political Judgment: How Good Is It? reports a 20-year research project tracking predictions made by a collection of experts. These predictions were spectacularly wrong, but even more dazzling was the experts’ ability to explain away their failures. They did this by some combination of arguing that their predictions, while wrong, were such a ‘near miss’ they shouldn’t count as failure; that they made ‘the right mistake’; or that something ‘exceptional’ happened to spoil their lovely models (think ‘black swans’ or ‘unknown unknowns’). In other words, these are the same arguments that we’re now seeing the polling industry roll out to explain what happened with this election.

For me, all of these arguments miss the point and distract us from the real answer. The pollsters (mostly) got the election wrong because the future – despite all our clever models and data analytics – is fundamentally uncertain. Our society loves polls because we crave certainty. It’s the same reason we fall for the Cardinal Bias, the tendency to place more weight on what can be counted than on what can’t be. But certainty will always remain out of reach. What Trump’s victory really teaches us is that all of us should spend less time reading polls and more time reading Pliny the Elder. It was Pliny, after all, who told us ‘the only certainty is that nothing is certain’.

Research First is PRINZ’s research partner, and specialises in impact measurement, behaviour change, and evidence-based insights.

Image credit: iStock
