
Lies and statistics? The pollsters who make the numbers add up

Predicting who'll win an election is an inexact business, not least because voters don't always tell the truth. Rob Sharp meets the pollsters who make the numbers add up

Monday 15 February 2010 01:00 GMT

Do you remember where you were on 10 April 1992? Robert Worcester, founder of Ipsos MORI, certainly does. It was the day after John Major led the Conservatives to their fourth consecutive election victory.

None of the pollsters had seen it coming. The day before the election their data had suggested Labour would narrowly beat the Tories by one percentage point, the equivalent of eight Commons seats: a hung parliament. Instead, Major's party took an eight-percentage-point lead, or 65 seats, and an overall majority of 21. It was one of the biggest upsets in polling history.

"It was chaos," says Worcester. "There were five principal polling organisations and three refused to talk to the media. I don't believe in maintaining silence, so I was one of the few people interviewed on TV and ended up bearing the brunt of the blame ... I said we didn't know what we'd done wrong, but that we wouldn't leave a stone unturned until we found out."

The Market Research Society, representing the pollsters, looked into what went wrong in 1992, and blamed inaccurate data about the socio-economic breakdown of the population.

As the number-crunchers gear up for another general election, are they now more confident of their data? This month the polling company ComRes reported that the Tories had slumped to 38 per cent of the vote, just seven percentage points ahead of Labour on 31 per cent, with the Liberal Democrats at 19 per cent. Like the prediction for 1992, this would produce a hung parliament.

So how have ComRes arrived at this latest prediction? The raw data these days is gathered over the phone (pollsters claim people are more likely to tell the truth than in face-to-face interviews). At the ComRes offices on Millbank, around 30 employees sit behind rows of computer monitors, some busy on the phone, others tapping numbers into databases. Most polling organisations will speak to a sample of 1,000 people per opinion poll. The bigger the sample, the better the chance of accuracy. With a sample of 1,000 people, there is a 95 per cent chance that the result is accurate to within the margin of error, in this case plus or minus three per cent.
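That plus-or-minus-three figure isn't arbitrary: it drops out of the standard formula for a 95 per cent confidence interval on a sampled proportion. A rough sketch of the arithmetic (the textbook worst-case calculation, not ComRes's own methodology):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sample proportion.

    p = 0.5 is the worst case (largest margin), which is why
    pollsters can quote a single figure for the whole poll;
    z = 1.96 is the critical value for 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=1,000: +/- {margin_of_error(1000):.1%}")  # ~ +/- 3.1%
```

With 1,000 respondents the margin comes out at roughly 3.1 per cent, which pollsters round to the familiar "plus or minus three".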

Most polling organisations select these 1,000 interviewees at random from telephone records. Of the major polling organisations – ComRes, ICM, Ipsos MORI, Populus and YouGov – only YouGov relies instead on samples taken from its own pool of 300,000 'panel members' – and conducts its polls online. But any sample of 1,000 people may be unrepresentative because it features, say, too many habitual Labour voters, or too many people from lower-income groups, or too many from certain regions. So the pollsters put the responses they collect through a process of "weighting".

For example, if a sample of 1,000 people is made up of 550 men and 450 women, it doesn't accurately reflect the British population (51 per cent female). To compensate, the answers of female respondents will be given slightly more "weight" (in this case they will each count as 1.133 people). The same procedure is carried out to adjust for age, social class and region. Pollsters will also take into account how people voted in the previous election, and a person's statistical likelihood of actually casting his or her vote.
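A minimal sketch of that gender re-weighting step, using the article's figures (the field names and the 49/51 split are illustrative, not ComRes's actual code):

```python
# Re-weight a 550-men/450-women sample to match the British
# population split of roughly 49% male / 51% female.
sample = {"male": 550, "female": 450}
population_share = {"male": 0.49, "female": 0.51}

n = sum(sample.values())  # 1,000 respondents in total
weights = {
    group: (population_share[group] * n) / count
    for group, count in sample.items()
}
print(weights)  # {'male': 0.890..., 'female': 1.133...}
```

Each woman's answer counts as 1.133 people, as in the example above, while each man's counts as slightly less than one; the same adjustment is then repeated for age, class and region.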

Pollsters then convert their results into seats in the House of Commons. In carrying out this conversion, online calculators such as the one at electoralcalculus.co.uk can also factor in the effect of marginal constituencies, where a seat is most likely to change hands, as well as the effects of tactical voting. Back in 1992, the pollsters had inaccurate information about socio-economic groups partly because the latest census results were 11 years out of date; we also now know that many people who in the event voted Conservative lied to the pollsters about their voting intentions. These days pollsters argue that weighting according to past vote gets around that bias.
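A common baseline for that votes-to-seats step is "uniform national swing": apply the national change in each party's share identically to every constituency, then see who tops each one. A toy illustration of the idea (the constituency figures below are made up, and real calculators layer marginal-seat and tactical-voting adjustments on top):

```python
# Toy uniform-national-swing projection. Each tuple is a
# constituency's (Con %, Lab %, LD %) at the previous election;
# 'swing' is the national change in share for each party since then.
seats_previous = [
    (42.0, 38.0, 20.0),
    (30.0, 45.0, 25.0),
    (33.0, 34.0, 33.0),
]
swing = (5.0, -4.0, -1.0)

parties = ("Con", "Lab", "LD")
tally = {p: 0 for p in parties}
for shares in seats_previous:
    projected = [s + d for s, d in zip(shares, swing)]
    winner = parties[projected.index(max(projected))]
    tally[winner] += 1
print(tally)  # {'Con': 2, 'Lab': 1, 'LD': 0}
```

Note how the third seat, a three-way marginal, flips to the Conservatives on a modest swing: this is why marginal constituencies dominate the seat arithmetic.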

"Historically, over the longer term, pollsters have had a mixed record," admits ComRes founder Andrew Hawkins. "But in 2005, within 48 hours of the election every eve-of-election poll was [accurate to] within three per cent. This is strikingly accurate given that the journey from poll question to results is not a straightforward one. It's not like using a dipstick on a car engine. It's more of a modelling process." Every pollster, he says, should "show a willingness to change their methods over time."

Do we have any choice but to listen to the pollsters? Not really, says John Rentoul, The Independent on Sunday's chief political commentator: "We don't have any alternative way of finding out what the electorate is thinking, apart from focus groups, but they don't deliver the numbers." But some pollsters will never lay the ghosts of 1992 to rest.

"I'm not confident that we can't have another 1992 election, in the same way I can't be confident that journalists aren't going to make any more mistakes," says Robert Worcester. "You can't be 100 per cent sure that lightning isn't going to strike twice. It's statistics, and I spend a heap of my life explaining this. It's the marriage of asking questions with the science of sampling. I'd always say that it's 95 per cent expertise and five per cent luck. And if you're not lucky, then you're in the wrong business."
