How the UK Pollsters Got It So Wrong: A Reality Check

Jun 1, 2015 | Elections, Politics, Statistics

No one expected a landslide.

Political polling for the UK general election held in early May predicted a virtual dead heat right up until the polls opened.

And then everything went sideways. The polls were wrong, significantly wrong. The Conservatives won far more parliamentary seats than anyone thought possible, taking 26 seats from the Liberal Democrats, more than double the number predicted.

The handwringing has begun, and an official inquiry has been opened into why the polls went off the rails, and how.

Experts have already offered some early explanations for what went wrong.

For one, voter turnout was “significantly lower” than predicted, which means the polls included the opinions of a lot of people who didn’t bother to actually vote. That likely skewed some numbers.
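To make that concrete, here is a minimal sketch in Python of how weighting respondents by their likelihood of actually voting can shift a poll’s topline. The parties, sample, and turnout probabilities are invented for illustration, not the actual 2015 data:

```python
# Hypothetical illustration: how turnout weighting shifts a poll's topline.
# All preferences and turnout probabilities below are invented numbers.

respondents = [
    # (stated preference, estimated probability the respondent actually votes)
    ("Conservative", 0.85),
    ("Conservative", 0.80),
    ("Labour", 0.90),
    ("Labour", 0.55),
    ("Labour", 0.40),  # says Labour, but unlikely to turn out
    ("Lib Dem", 0.60),
]

def raw_share(party):
    """Share among all respondents, ignoring who actually votes."""
    return sum(1 for p, _ in respondents if p == party) / len(respondents)

def turnout_weighted_share(party):
    """Share weighted by each respondent's probability of voting."""
    total = sum(w for _, w in respondents)
    return sum(w for p, w in respondents if p == party) / total

for party in ("Conservative", "Labour", "Lib Dem"):
    print(f"{party}: raw {raw_share(party):.0%}, "
          f"turnout-weighted {turnout_weighted_share(party):.0%}")
```

In this invented sample Labour leads on the raw count, but the turnout-weighted shares tilt back toward the Conservatives. A gap of that kind between stated preference and actual turnout is exactly what can quietly skew a topline number.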

But it’s not just about following voter intention. While certainly not every poll taken on behalf of research marketing has such major national and international implications as election polling in a major Western nation, the lessons emerging from the UK are good reminders of what can go wrong with polling on any scale.

Sometimes people lie. Or, to use the Britishism, sometimes they “tell porkies.” Maybe they feared looking bad, or feared being judged for voting against liberal policies, as if voting Conservative weren’t socially acceptable despite so many deciding to do exactly that. Maybe it was subconscious. Maybe they felt awkward answering a live interviewer’s question instead of something more anonymous and online. The statistician behind NCPolitics, a UK-based blog, offers some evidence that these psychological factors may have come into play.

Questions matter. Fivethirtyeight.com discovered that a UK polling agency had queried people about their preferences in two ways: once by asking which party they planned to vote for, and again with a longer, more specific question about whom they would vote for considering the needs of their own constituency. Many predictive agencies ran with the results from the second question, but it turns out the first one more closely mirrored reality. This one’s a lesson that clearly applies to both sides of the Pond.
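As a toy illustration of why it pays to score each wording against the eventual outcome, here is a short Python sketch. The two wordings echo the ones described above, but every number is invented, not FiveThirtyEight’s actual figures:

```python
# Hypothetical split-sample comparison of two question wordings.
# All vote shares below are invented to illustrate the scoring, not real data.

# Topline estimates (one party's share) produced by each wording:
estimates = {
    "Q1: which party will you vote for?": 0.37,
    "Q2: considering your own constituency, whom will you vote for?": 0.33,
}
actual_result = 0.368  # invented "true" vote share for the same party

# Score each wording by its absolute error against the outcome.
for wording, estimate in estimates.items():
    error = abs(estimate - actual_result)
    print(f"{wording}: estimate {estimate:.1%}, error {error:.1%}")
```

Trivial as the arithmetic is, it captures the check the forecasters skipped: the wording you prefer in theory is not always the one that tracks results in practice.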

Timing matters. If polls were conducted during the workday when many people weren’t home, and conservative voters are more likely to hold traditional office jobs that keep them away during those hours, it’s easy to see a potential disconnect, notes Simon James, a global analytics expert. Think of America’s changing economy, with more contract workers, less predictable shift assignments, and more freelancers, and the potential for havoc on polling timing could grow even larger.
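Here is a minimal simulation of that reachability bias, again with invented numbers: the true conservative share and the probabilities of being home for a daytime call are assumptions for the sketch, not real polling figures:

```python
# Hypothetical illustration of timing-driven sampling bias.
# The support share and reachability rates are invented assumptions.
import random

random.seed(0)

POPULATION = 100_000
TRUE_CONSERVATIVE_SHARE = 0.45  # assumed true support in the population

# Assumed probability of being home for a daytime call:
# office workers (more often conservative in this sketch) are harder to reach.
P_HOME = {"Conservative": 0.30, "Other": 0.55}

sample = []
for _ in range(POPULATION):
    party = "Conservative" if random.random() < TRUE_CONSERVATIVE_SHARE else "Other"
    if random.random() < P_HOME[party]:  # only people who answer get polled
        sample.append(party)

observed = sample.count("Conservative") / len(sample)
print(f"True conservative share: {TRUE_CONSERVATIVE_SHARE:.0%}")
print(f"Daytime-poll estimate:   {observed:.0%}")
```

With these invented reachability rates, the daytime-only sample reports conservative support well under the true 45%, purely because of who happened to be home when the phone rang.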

Another takeaway for us all: research isn’t perfect. We have a lot of powerful polling tools, and we constantly update our processes based on past experience. But it’s good to remember that while professional research marketing can provide a huge boost to everything from business to politics, it’s not infallible.

Even the best research marketers know that sometimes, they get it wrong.

The key is understanding where those mistakes occurred, being professional enough to admit your own missteps, and learning the lessons of the past so the mistakes aren’t repeated.
