Opinion
Rejecting polls that contradict personal biases rejects reality
Old Gold & Black
By
Guest Columnist
Thursday, April 20, 2017

There was no way to look at all the information the polls were giving us in 2016 and walk away with any conclusion other than that Hillary Clinton was the favorite.

At the same time, the polls were clear: the race was uncertain and very volatile. By late October, around 15 percent of the public hadn't committed to either Clinton or Trump; at the same point in the 2012 election, that number was around five percent. That undecided share, combined with two historically disliked candidates, made this election extraordinarily uncertain.

There's another factor here that is often overlooked: the margin of error. Polls are a best guess at what the margin between candidates will look like, not a crystal ball showing who will come out on top.

If a poll had projected Clinton to beat Trump by one point in a given state and Clinton won by five points, that is a four-percentage-point error. If Trump had instead won that state by three points, that would also be a four-point error. We naturally read the latter as an upset the polls missed and the former as a correct prediction, because in our first-past-the-post system the candidate with the most votes takes office; the margin does not matter.
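The arithmetic above can be sketched in a few lines (the margins are the hypothetical numbers from the example, not real state polls):

```python
def polling_error(predicted_margin, actual_margin):
    """Absolute error between a poll's predicted margin and the result.

    Margins are signed: positive means Clinton ahead, negative means
    Trump ahead (a convention chosen for this illustration).
    """
    return abs(predicted_margin - actual_margin)

# Poll said Clinton +1; she won by 5: a four-point error, but the "right" winner.
print(polling_error(+1, +5))  # 4
# Same poll, but Trump won by 3: still a four-point error, yet read as an upset.
print(polling_error(+1, -3))  # 4
```

The point the function makes concrete: both scenarios are equally bad polls, even though only one of them gets called an upset.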

That, however, is not what polls are fundamentally designed to predict. Every poll carries a margin of error of roughly five points on the gap between candidates (the exact figure depends on the poll's sample size and the firm's methodology). Polls are not soothsayers that will always be exactly accurate; they are best estimates.
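A rough back-of-the-envelope check shows why errors of several points are normal. The column does not give sample sizes, so n = 1,000 here is an assumed, typical national-poll figure; the formula is the standard 95 percent margin of error for a proportion at the worst case p = 0.5:

```python
import math

def moe_95(n):
    """Approximate 95% margin of error, in percentage points, for one
    candidate's share: 1.96 * sqrt(p * (1 - p) / n) with p = 0.5."""
    return 1.96 * math.sqrt(0.25 / n) * 100

n = 1000  # assumed sample size for illustration
print(round(moe_95(n), 1))      # ~3.1 points on a single candidate's share
print(round(2 * moe_95(n), 1))  # ~6.2 points on the gap between two candidates
```

Because the gap between two candidates moves twice as fast as either candidate's share, a roughly three-point error on one share translates into an error of five or six points on the margin, which is consistent with the ballpark figure above.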

In the 2012 election, there was a nationwide polling error of about four points; it just happened to be in Obama's favor, so that election was not an "upset," because Obama already held a small lead over Romney. This cycle, the nationwide error was much smaller, about two points: the final polls had Clinton +4, and the result was about Clinton +2.

The error just happened to be in Trump's favor, and his Electoral College position was advantageous, so the election was an "upset." A similar situation took place when the U.K. voted to leave the European Union. The Economist's tracker had the polls virtually tied, and the result had Leave about four points ahead of Remain. That is well within the margin of error, yet it was construed as a major upset.

The problem may not be with the polls, but with us. The idea of the U.K. leaving the E.U. was unthinkable to many, not because the polls told us it was unthinkable, but because of prior notions about the E.U. itself.

Similarly, many thought Trump had a zero percent chance not because the polls told them so, but because they could not imagine a man who bragged about sexual assault on tape becoming president. It is easy to blame polls when your prediction is wrong. FiveThirtyEight had Trump at roughly a 30 percent chance. In other words, there was a better chance of Trump becoming president than of you flipping a coin and getting heads twice. The polls were off, no doubt about it, but they shouldn't be blamed for a failure of prognostication on our part.

All of this might just be a long way of saying that when our president attacks polling firms for low favorability numbers, don't accept his argument that the numbers are "phony." If the polls show a result that seems unrealistic, it may be time to expand your worldview to include that result as a possibility.

Conventional wisdom is often wrong; polls are not inclined to fit into any one narrative. They are instead committed to discovering the various sentiments and the voting behavior of our nation as a whole. If you disregard all polls because they're "phony," you're missing valuable information about how our country feels about issues and how it might vote on them. In future elections, Trump may ignore this at his peril.