UK General Election 2015
For many months before the vote on 7 May 2015, the news media were reporting the prediction that the outcome of the election would be a hung parliament: no party with an overall majority of MPs, and so no party able to govern the country without the support of MPs from another party. Because this prediction rapidly became old news, they tried to generate new news by asking the politicians what they would do in such a case, whether they would enter into coalition with this party or that party, and how they could resolve their differences in policy. Not too surprisingly, the politicians from the larger parties wanted to talk about winning outright instead. However, the prediction of a hung parliament was so consistent from poll to poll that politicians from some of the smaller parties actually started negotiating through the news media, laying out their ‘red lines’: the policies they would require or prohibit in order to lend their support to a minority government.
In the actual election, of course, one party did manage to get more than half of the 650 MPs’ ‘seats’ in the House of Commons (it won 331, or 51%) and so will not need the permission of smaller parties to enact its policies in government.
So how could the predictions be so wrong? Surely we should expect better?
Stop! No. Those are the questions the news media are asking… The real question is: should we expect to be able to predict the will of the people in an election?
Obviously, a political party has an interest in knowing where it stands a reasonable chance of getting a candidate elected and where the chance is so very slim that resources should not be wasted fighting a lost cause (at least this time around). Individual candidates will have a keen interest in the predicted result too.
Parties can also use the predictions to try to persuade us to vote for their candidates rather than ‘waste’ our votes on some no-hope candidate - so-called tactical voting.
Finance organisations are also interested - they need to prepare to work within the policies of a new government and continue to make a profit. One thing they would hate is a surprise result - it loses them money.
News media are also interested in the prediction so that they can try to elicit interesting or incautious responses, or even out-and-out gaffes, in response to their probing: ‘I’d never go into coalition with so-and-so party’ or ‘Of course we can’t win’.
So what went wrong?
Stop! No. Again, that’s not our question.
So, why did the political parties and finance houses and news media think the result would be a hung parliament? Did they do their own research and come to that conclusion? No: they bought that service from the polling organisations.
The polling organisations spend much time asking people how they intend to vote, what sorts of policies they support, what they think of the performance of the party representatives in the TV debates, what they think of the honesty of the party representatives, what they think of the capabilities or government experience of the various parties, and a host of other questions. Of course, they don’t just ask people: people volunteer this information in their social media output, and the polling organisations suck up this information to assess how popular a particular policy, opinion, comment or gaffe is. They even ask people to say how they actually did vote as they come out of the polling stations - so much for it being a secret ballot. From all of this they assemble their predictions, drawing on their experience of asking similar questions in previous elections and on the previous results.
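None of the polling organisations publish their exact models, but the core arithmetic of turning raw answers into a prediction is weighting: responses from groups that are under-represented in the sample count for more, and vice versa. A minimal sketch of that idea (the responses, demographic groups and weights below are all invented for illustration):

```python
from collections import defaultdict

# Invented sample: each response is (stated voting intention, demographic group).
responses = [
    ("Blue", "over_65"), ("Red", "under_35"), ("Blue", "35_to_65"),
    ("Yellow", "under_35"), ("Red", "35_to_65"), ("Blue", "over_65"),
]

# Invented weights: up-weight groups the sample under-represents relative to
# the wider population, down-weight groups it over-represents.
weights = {"under_35": 1.4, "35_to_65": 1.0, "over_65": 0.7}

def weighted_shares(responses, weights):
    """Estimate each party's vote share from demographically weighted responses."""
    totals = defaultdict(float)
    for party, group in responses:
        totals[party] += weights[group]
    grand_total = sum(totals.values())
    return {party: total / grand_total for party, total in totals.items()}

print(weighted_shares(responses, weights))
```

Real pollsters weight on far more variables (age, region, past vote, likelihood to turn out), and getting those weights right is exactly the problem the later inquiry would point to.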
In other words, it’s market research - but with a bit more at stake than the re-branding of supermarket baked beans. Though the supermarket board of directors might disagree.
Perhaps the news media, political parties and finance houses have a right to be annoyed at the inaccuracy of the polling organisations. They bought a set of predictions which turned out to be untrue - or only partly true. They are all now complaining about the quality of the service they received, and attempting to evade criticism of their decision to rely on those predictions by claiming that all the polling organisations were saying the same thing and that everyone else was relying on them too. I’ll bet that the predictions came with an accuracy disclaimer - but so what? Who reads a disclaimer? For that matter, whose job was it to read the disclaimers and recommend buying this or that prediction? Let’s blame that person instead.
So let’s get down to the real question: should we expect to be able to predict the will of the people in an election?
No.
OK. Let’s try again: Why should or shouldn’t we expect to be able to predict the will of the people in an election?
-
We have a right to know.
Well, no. We certainly do not have a right to know how our neighbours or even our family members intend to vote. They may choose to tell us, or even try to influence our decision in conversation or by putting party posters on their property, but it is certainly not our ‘right’ to know about another individual’s voting intentions.
Of course, the polling companies do conduct extensive research, so is it our right to know what they think they have found out? Polling companies spend a fortune drawing together huge amounts of information and would not be able to do so unless they got paid somehow. They compete against each other in the claimed accuracy of their output to build up their prestige and credibility and so secure future funding. Who pays them? Is it a public service paid for by the taxpayer? A branch of Government? A philanthropist who just thinks it would be good for people to know? No: they are just ordinary companies out to make a profit (or maybe just cover their costs) for their shareholders or other owners, engaging in normal advertising and marketing such as giving ‘free samples’ of their analysis so as to attract future business. You might argue that the poll predictions are the results of scientific research and therefore should be in the public domain - but there are many technology firms and pharmaceutical companies which would strongly disagree with that argument.
-
I should know so that I can vote tactically.
This is not bad as such arguments go, but it does not stand up to analysis. The UK’s voting system is well established - some might say, old fashioned. People in a constituency elect a representative who attends Parliament as an MP and who should then represent the best interests not just of those who voted for that MP but of the whole constituency. Some people do vote for the specific person, but most are probably more interested in the candidate’s party - or even in who the leader of that party is. This can result in people voting for the candidate that represents a particular party almost regardless of the candidate’s qualifications or even the party’s current policies. Of course, these people’s votes are no less valid for having this motivation. But because the candidate with the largest number of votes wins - even if they actually get a low percentage of the vote - this can result in electing an MP whose party holds policies which are not approved of by the majority of the people in the constituency.
Let us take, for example, one particular constituency where, at the election, there were six candidates, representing the Blue, Yellow, Red, Purple, Orange and Green parties, and the results were as shown in the table below (numbers are drawn from the BBC Election 2015 web pages).
Now let us imagine that half of those who voted Red might have preferred a Yellow MP to the Blue MP that they actually got. If the Red voters had had an accurate prediction of the result then they might have decided to vote for the Yellows instead - so that the Yellows would have won by 38.3% + 7.4% = 45.7% ‘Yellow’ to 39.8% ‘Blue’. But then, of course, the prediction would have turned out to have been inaccurate…
The publication of predicted results causes some voters to change the way they eventually vote and so affects the result. This can be exploited and abused by those wishing to influence the results of an election. Obviously the parties themselves are campaigning to get the result they want and/or believe that the people need, but do we want the polling organisations to have so much influence?
| Candidate’s party | Share of vote |
|-------------------|---------------|
| Blue              | 39.8%         |
| Yellow            | 38.3%         |
| Red               | 14.8%         |
| Purple            | 2.8%          |
| Orange            | 2.7%          |
| Green             | 1.6%          |
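As a back-of-the-envelope check of the switching example above, using the shares in the table (a sketch only; the ‘fraction needed to switch’ figure is my own calculation, not something from the BBC pages):

```python
# Vote shares from the table above (percentages).
blue, yellow, red = 39.8, 38.3, 14.8

# If half of the Red vote had switched to Yellow:
switched = red / 2
print(f"Yellow with half the Red vote: {yellow + switched:.1f}% vs Blue {blue:.1f}%")

# Smallest share of the Red vote that would have overturned the Blue lead.
needed = (blue - yellow) / red
print(f"Fraction of Red voters needed to switch: {needed:.0%}")  # roughly 10%
```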
-
It’s in the public interest.
This is the argument-stopping claim of a lazy journalist. Most of the time, when journalists spout this phrase what they actually mean is ‘some of the public are interested, and this can help me improve sales for my organisation’. ‘In the public interest’ actually means for the benefit or advancement of the interests of the people. Unless they can show how the public benefited from the publication of whatever it was, it’s not ‘in the public interest’; it’s perhaps just interesting to their readership.
One possibility is that they believe they are publishing their prediction ‘in the public interest’ to try to prevent us electing what they consider to be the wrong MPs. In which case it’s just bias.
Whole populations are not fickle. It can take a lot to turn a population from one course to another. There is a branch of science which attempts to predict how a population will behave under certain circumstances, but it falls far short of predicting tactical voting in humans. If it were totally accurate there would be no need for elections: the will of the people would be known without the rigmarole of an election. However, it is not accurate.
There is a fantastic (read: based on fantasy) short story by Isaac Asimov called ‘Franchise’ in which the science has been refined so far that there is no need to hold a full vote. All that’s needed is the considered opinion of one person to finalise the prediction. We’re not quite there yet… not quite.
One important purpose of regular elections is to check with the people: ‘is this still what you want?’. It’s no good complaining that ‘The People’ keep changing their collective mind - it’s called democracy. Individual people are rarely offered exactly what they want by any single political party and so usually have to make a compromise and select a package of policies including some that they don’t really want. Some individuals may also hold extreme views (compared with the majority) and find that no party offers even part of what they want and so have to choose whether or not to vote for a ‘least-bad’ package from one or another party - or perhaps form their own party.
I get a really bad feeling when political parties resort to marketing and advertising tactics to ‘Engage with The People’. Marketing and advertising are used to sell products or services to us and convince us that ‘you really need this, right now’ when actually we’re doing not too badly with the rival brand or indeed without a similar product at all.
Update 6 Feb 2016 - This post has been split and the remainder of the original can be found in UK General Election 2015 ‘North/South Divide’.
Update 6 Feb 2016
A report by the BBC of the so-called analysis of the failure of the polling companies to predict the outcome of the May 2015 UK general election reveals that they didn’t get answers from the right people… No! Really? What it does show is that the media collectively are still trying to shift the blame for predicting a hung parliament.
A report commissioned by the British Polling Council and the Market Research Society is apparently to be delivered to them by 1 March 2016 and published by them ‘as soon as possible thereafter’. I wonder what gems of wisdom will be revealed? Let me guess: ‘It’s not fair, people didn’t answer our questions as we expected’ and ‘We didn’t get answers from the right people’.
Update 7 Aug 2016
The Report of the Inquiry into the 2015 British general election opinion polls was actually published on 30 March 2016. I was so excited I missed it.
In the executive summary of the report it states: ‘In historical terms, the 2015 polls were some of the most inaccurate since election polling first began in the UK in 1945. However, the polls have been nearly as inaccurate in other elections but have not attracted as much attention because they correctly indicated the winning party.’
Or more concisely: it was as inaccurate as some other previous polls - but nobody minded before.
The summary also states: ‘Our conclusion is that the primary cause of the polling miss in 2015 was unrepresentative samples.’
OK - now we have a new term for the inaccuracy, a ‘polling miss’. And it was mostly caused by asking the wrong people. No! Really?
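The inquiry’s detail is statistical, but the basic effect of an unrepresentative sample is easy to show with a toy simulation. A minimal sketch, with an invented two-group population and an invented sampling skew (nothing here is taken from the report itself):

```python
import random

random.seed(1)

# Invented population: group A leans Blue, group B leans Red,
# and the two groups are the same size, so the true Blue share is 50%.
def true_vote(group):
    if group == "A":
        return "Blue" if random.random() < 0.6 else "Red"
    return "Blue" if random.random() < 0.4 else "Red"

def poll(sample_size, prob_group_a):
    """Sample voters with a given chance of picking group A; return Blue's share."""
    blue = 0
    for _ in range(sample_size):
        group = "A" if random.random() < prob_group_a else "B"
        if true_vote(group) == "Blue":
            blue += 1
    return blue / sample_size

print("Representative sample (50% group A):", poll(10_000, 0.5))  # ~0.50
print("Skewed sample (70% group A):", poll(10_000, 0.7))          # ~0.54
```

Asking only the people who are easy to reach gives an answer that is precisely wrong: the sample is large, the arithmetic is correct, and the estimate is still off.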
To be fair, the summary does make a very important point: ‘We reject deliberate misreporting as a contributory factor in the polling miss…’. Or: we were wrong but we didn’t mean to be.
I admit: I have not read the whole report.