Author: Neville Hobson
Reading the various reports, narratives and commentaries this weekend about the results of the UK general election on May 7, one question dominates my thinking: how could the expert commentators, opinion-formers and outcome-predictors have got it all so wrong?
The election produced a clear win for the Conservatives with a slender majority in the House of Commons (12 seats), and the virtual annihilation of the main opposition parties – the leaders of Labour, the Liberal Democrats and Ukip have all quit. The result confounded every single opinion poll in the months, weeks and days leading up to May 7, all of which had predicted a hung Parliament as the best outcome anyone could expect.
So according to those polls, another coalition government looked the likely election outcome – perhaps followed by another election in six months or so – and many column inches and pixels were spent on what-if scenarios of who might be able to form a government with whom (the BBC’s interactive tool was especially good), much of it based on those opinion poll results.
About the only thing the pollsters did get right was the surging Scottish National Party (SNP), which triumphed in Scotland with an almost clean sweep, winning 56 of 59 Scottish seats at Westminster.
Having been in America since May 3 with hardly a spare moment to look at the TV, never mind online news, I had been shielded from any mainstream reporting and commentary back home in the run-up to election day last Thursday (our election was unquestionably not a big news item in the US mainstream media). What I did see, though, was plenty of comment and opinion on social media channels, notably Twitter, presenting a view of Labour as well ahead in likely voting preference, which reinforced much of the mainstream expectation of a close-run election and a hung Parliament.
And so I flew back to the UK on Thursday night US time, arriving on Friday morning UK time to news that took me by surprise as much as it apparently did all those experts I mentioned: not a close-run thing at all but a pretty decisive Conservative victory, nothing like a hung Parliament, and a political landscape made unfamiliar by the downfall of the traditional political opposition.
With the nationalists rampant in Scotland and the Conservatives resurgent just about everywhere else outside the large urban centres in England, the former looks alarmingly like a one-party state with the latter arguably close to that territory. Indeed, it doesn’t look like a very United Kingdom at the moment.
But analysis of that kind is for more knowledgeable subject-matter experts to ponder.
What interests me most now is those opinion polls I mentioned earlier: how could they have got it so wrong?
You can choose from a great deal of opinion on that question, to which I add my two-pence-worth: a combination of factors such as these.
1. Reliance on an opinion-polling system that largely behaves as it did 50 years ago, when few-to-many was the only communication model: the few (the mainstream media companies) controlled the news and the means of communication; the many (the great British public) formed opinions based on what they read in the newspapers or heard on the radio (TV was still in its infancy), their only reliable sources of news and information; and the pollsters built their predictions on what the public told them in answer to narrow questions, with opinions shaped by what the newspapers said.
That is not the picture today, where the mainstream media is but one element in a crowded, immersive information and communication landscape that enables anyone with an opinion and an internet connection to become a content creator, news broadcaster and opinion-former.
2. Lack of trust in, and engagement with, the political process and politicians themselves: start with the Edelman Trust Barometer 2015, published in January, which shows a continuing trend of declining trust in governments and politicians worldwide, not only in the UK.
3. Public tiredness and disenchantment with politics in general and this election in particular: so much partisan opinion and commentary – yes, I do call it propaganda – that it has been tough to filter signal from relentless noise and focus on what seems credible and trustworthy enough to warrant your attention and your willingness to believe.
A case in point for me was the Leaders’ Debate on BBC’s Question Time programme on April 30. Debate? Hardly. Prepared sound-bite responses by each leader individually to questions from a carefully controlled audience. The inauthenticity of it was breathtaking.
What on earth is the point in this #bbcqt “debate”? The most inauthentic political TV I’ve ever seen.
— Neville Hobson (@jangles) April 30, 2015
(Of course, I should point out that some analysts are saying that this TV event was instrumental in helping many voters decide who to vote for. If that’s true, then I’ll stick to my day job.)
4. The remoteness of much of it: so much content from people you don’t know; social media hashtags like #GE2015 that are tsunamis of opinion you don’t trust because so much of it is clearly partisan; and politicians who sound so patronising, with their sincere-sounding and constant overuse of phrases like “hard-working families” and “working people”, that you eventually tune it all out.
Some or all of this probably contributed to the huge number of “Don’t know” responses when people were asked by pollsters for their voting intentions – 25 percent of voters said they didn’t know who they’d vote for on the day, according to one report I saw.
That left the polling organizations, pundits and others predicting outcomes from incomplete data offering few credible insights, using a methodology ill-suited to contemporary society. Are those the major factors that let it all go so wrong?
A terrible night for us pollsters. I apologise for a poor performance. We need to find out why.
— Stephan Shakespeare (@StephanShaxper) May 8, 2015
I read of one poll whose organizers predicted the actual election outcome with some clarity (and, as it turned out, accuracy) but who said they didn’t publish it for fear of being ridiculed: their poll was so different from all the others, which were predicting a neck-and-neck close race, a hung Parliament, and so on.
And what was their methodology? Actually talking to voters: ringing them up on the phone and directly asking them relevant questions that they would want to answer.
YouGov’s Anthony Wells summarized what he thought of the polling debacle:
[…] there is something genuinely wrong here. For several months before the election the polls were consistently showing Labour and Conservative roughly neck-and-neck. Individual polls exist that showed larger Conservative or Labour leads and some companies tended to show a small Labour lead or small Conservative lead, but no company consistently showed anything even approaching a seven point Conservative lead. The difference between the polls and the result was not just random sample error, something was wrong.
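Wells’s point about random sample error is worth making concrete. Here is a rough back-of-the-envelope sketch of my own (the sample size and vote share are illustrative assumptions, not figures from his post): for a typical poll of around 1,000 respondents, with each main party on roughly 34 percent, simple sampling arithmetic shows how small the random error actually is, especially once you average across many polls.

```python
import math

# Illustrative assumptions (not from Wells's post):
n = 1000   # typical poll sample size
p = 0.34   # rough vote share each main party was polling at

# Standard error of a single party's share in one poll
se_share = math.sqrt(p * (1 - p) / n)

# 95% margin of error for that share, in percentage points
moe = 1.96 * se_share * 100
print(f"Margin of error for one poll: +/- {moe:.1f} points")

# Averaging k independent polls shrinks random error by sqrt(k)
k = 20
moe_avg = moe / math.sqrt(k)
print(f"Margin of error for the average of {k} polls: +/- {moe_avg:.1f} points")
```

On these assumptions a single poll carries roughly a three-point margin of error, and the average of twenty polls well under one point. A consistent seven-point miss across every polling company therefore cannot be random sampling noise; as Wells says, something systematic was wrong.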
It’s worth taking a look at the 700+ comments on Wells’s blog post.
So the current polling system used in this kind of significant national event has suffered a severe setback in accuracy, trust and credibility. The alarm bell has clearly been heard: the British Polling Council, the trade body for the polling industry, has announced with some understatement that it is setting up a public enquiry into what went wrong:
The final opinion polls before the election were clearly not as accurate as we would like, and the fact that all the pollsters underestimated the Conservative lead over Labour suggests that the methods that were used should be subject to careful, independent investigation.
The British Polling Council, supported by the Market Research Society, is therefore setting up an independent enquiry to look into the possible causes of this apparent bias, and to make recommendations for future polling.
The focus of the enquiry will be on polling methodology, according to the announcement.
I’m looking forward to learning what those recommendations are.