Polling: it's broke, but how do we fix it?
Inadequate methods and unreliable respondents are challenging pollsters, says Conrad Jameson
"The sharply contradictory findings will be seen as evidence of volatility among the voters," read the desperate front page of yesterday's Daily Telegraph, trying to explain away a rise in Labour's lead to 21 points in its own Gallup Poll on the same day that ICM reported Labour's lead shrinking to five points in The Guardian. But it isn't the shaking and quaking of a volatile electorate that you hear in the polls - not, certainly, for two polls with near-identical dates for fieldwork. The noise is rather the clatter and clang of an old banger of a sample survey technique that, after breaking down in the last general election, should never have been allowed out without an MOT.
Last time the polls crashed so hard that it was surprising that any pollsters got out alive. Polls had been wrong in three post-war elections. But here was something else. They didn't just pick the wrong party. They missed, on average, by nine points - more than double the 4.3 per cent swing that Labour needs in this election to win its biggest victory since 1945. (A swing is half the change in the gap between the two main parties, so even a 4.3 per cent swing moves the lead by only 8.6 points; the polls' error exceeded that.)
Its gasket blown, its tyres flat, the clapped-out opinion poll needed an upgraded replacement, featuring long, cross-examining questionnaires, panels for tracking opinion over time and experiments to find out which issues and personalities really do flip voting intentions. What did the public get instead? Minor repairs passed off as a major overhaul.
The most learned-sounding repair was the phoniest: a revision of the sampling technique to get rid of a built-in Labour bias. The culprit was supposed to be the cheap-and-cheerful quota sample, which told interviewers how many, say, skilled, blue-collar workers to interview but left them to pick and choose respondents within the quota itself. Since quotas left too much discretion, finger-wagged The Economist, tighter samples were needed - so that within a sample of blue-collar workers, there should also be quotas for, say, the numbers of them who lived in council flats.
Gallup was applauded by The Economist for replacing quotas last January with the classical pinprick random sample that nowadays can be done cheaply by telephone. But why did the Labour bias in quota samples only show up in 1992? Quotas actually replaced the more expensive random samples in the 1970s. And where was the evidence that the bias was the result of quotas? It wasn't to be found. The Market Research Society took two years to admit the conclusion of its own post-mortem: hardly a quarter of the error in the 1992 election could be put down to slovenly quotas.
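For readers who want to see the discretion problem in miniature, here is a toy simulation in Python - an invented electorate, not the 1992 data - in which interviewers filling class quotas take whoever is easiest to find at home, and the stay-at-homes happen to lean Labour. A random sample of the same size has no such tilt.

import random

random.seed(1)

# A hypothetical electorate. All numbers are invented for illustration;
# the point is the mechanism, not the 1992 figures.
def make_voter():
    social_class = random.choice(["ABC1", "C2DE"])
    at_home_daytime = random.random() < 0.5
    p_labour = 0.35 if social_class == "ABC1" else 0.55
    if at_home_daytime:
        # In this toy model, the easy-to-reach lean Labour.
        p_labour += 0.10
    return {"class": social_class,
            "at_home": at_home_daytime,
            "labour": random.random() < p_labour}

population = [make_voter() for _ in range(100_000)]

def random_sample(pop, n):
    # The classical random sample: every voter equally likely.
    return random.sample(pop, n)

def quota_sample(pop, n):
    # Class quotas are met exactly, but the "interviewer" takes
    # whoever is easiest to reach: people at home during the day.
    quotas = {"ABC1": n // 2, "C2DE": n // 2}
    chosen = []
    for person in sorted(pop, key=lambda p: not p["at_home"]):
        if quotas[person["class"]] > 0:
            chosen.append(person)
            quotas[person["class"]] -= 1
    return chosen

def labour_share(sample):
    return 100 * sum(p["labour"] for p in sample) / len(sample)

print(f"whole electorate: {labour_share(population):.1f}% Labour")
print(f"random sample:    {labour_share(random_sample(population, 1000)):.1f}% Labour")
print(f"quota sample:     {labour_share(quota_sample(population, 1000)):.1f}% Labour")

Run it and the quota sample comes out roughly five points too Labour while the random sample stays close to the truth - and yet, as the Market Research Society found, slack quotas explained barely a quarter of the 1992 error.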
Even more unconvincing was the beguiling reassurance that the polls had only been caught short by a last-minute swing - quite possibly caused by a sudden increase in the numbers of people telling pollsters they were optimistic about the economy and their wage packets. But that explanation couldn't be right. At the last election optimists in the polls never outnumbered pessimists. But this time optimists are a thumping majority - and Labour is still in front. And why should polls get caught out by a last-minute swing? The survey that the polls' reputations stand or fall by is held on the eve of the election, when voters are practically inside the voting booth.
So just why were polls off last time by a whopping nine points? All of us, lay and professional alike, know the answer. Poll respondents were lying.
It was bad enough in the Seventies and Eighties. That's when voters flipped out of their class-bound goldfish bowls that, in happier days, made their voting behaviour so much easier to predict. Studies started showing voters slithering back and forth between parties or swimming away with a "don't know" or "refuse to answer". The changelings and cop-outs are still with us - with heavy "don't know" scores still flashing a danger warning. Most polls show them at 20 to 30 per cent. And pollsters are "don't knows", too, in that they don't know which way the "don't knows" will flip. That, no doubt, is why "don't know" scores are so rarely published.
But downright lying is something new and typical of the Nineties - that is why new dark clouds of doubt hang over pollsters' performance. Lying is so new that, pretend as they might, pollsters don't know how to deal with it. Volatility should, in theory, cancel out in eve-of-election polls. More difficult to deal with are the evasions of Essex Man, who likes to pretend he lives in Hampstead. How are pollsters to figure out how many are lying, and which way?
For feisty Bob Worcester of MORI the problem doesn't exist. The pollster's Panglossian assumption stays intact: people mean what they say and say what they mean. Ask well-honed questions in a proper sample, he argues, and, lo, you come out with MORI's impressive forecast of the Wirral South landslide, off by only two points for the Tories and only one point for Labour.
But do voters lie less at by-elections? And what explains MORI's own fiasco in the 1992 election? Or scores that have popped up in several recent surveys asking people how they voted last time, which show Labour won the last election? Or, even funnier, the exit polls at the last election showing a majority favouring Labour's fatal policy of increasing taxes for more welfare services?
Listen to Nick Sparrow of ICM, for whom the L-factor exists, all right, but that's no bother. It can be whisked away by a statistical lie-detector test that happens to be ICM's own patented invention. Unfortunately, the panel data that ICM uses for adjusting its polling scores can't be scrutinised - only ICM's impressive "redictions": when ICM asked people how they voted last time, it came out with scores jolly close to the actual 1992 results.
But who says that success in "rediction" means success in prediction as well? And why, if the lie-detector test is so important, does it change the scores published in The Guardian yesterday by only a single percentage point, with Labour rising to a six-point rather than a five-point lead after adjustments?
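ICM's own recipe is proprietary, but the family of adjustments it belongs to - re-weighting respondents so that their recalled 1992 vote matches the real 1992 result - can be sketched generically. Every figure below is invented for the example; the point is how re-weighting a raw poll that has too few confessed Tories pulls Labour's lead in.

# Generic past-vote weighting, the family of adjustment to which a
# "lie-detector" of this kind belongs. All poll figures are invented.

ACTUAL_1992 = {"Con": 42.8, "Lab": 35.2, "LD": 18.3}   # real 1992 shares

# Invented raw poll: recalled 1992 vote understates the Tories -
# the shy-Tory signature - plus current intention by recall group.
RECALLED_1992 = {"Con": 38.0, "Lab": 40.0, "LD": 16.0}
INTENTION_BY_RECALL = {
    "Con": {"Con": 80.0, "Lab": 12.0, "LD": 8.0},
    "Lab": {"Con": 5.0, "Lab": 88.0, "LD": 7.0},
    "LD":  {"Con": 10.0, "Lab": 25.0, "LD": 65.0},
}

def intention(group_sizes):
    # Current vote shares, given the size of each recall group.
    total = sum(group_sizes.values())
    out = {party: 0.0 for party in ACTUAL_1992}
    for recall, size in group_sizes.items():
        for party, pct in INTENTION_BY_RECALL[recall].items():
            out[party] += size * pct / 100
    return {party: 100 * v / total for party, v in out.items()}

raw = intention(RECALLED_1992)      # recall groups as respondents reported them
adjusted = intention(ACTUAL_1992)   # groups re-weighted to the true 1992 result

print("raw:     ", {p: f"{v:.1f}" for p, v in raw.items()})
print("adjusted:", {p: f"{v:.1f}" for p, v in adjusted.items()})

On these made-up numbers the adjustment cuts Labour's lead from about ten points to about three - which is why everything turns on whether recall of a vote cast five years ago can be trusted, the very question the unscrutinised panel data would have to answer.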
The truth is that pollsters cannot possibly know how to adjust for lying - assuming, of course, that adjustment is needed - until after another general election or two, when new statistical and questionnaire techniques have been tested.
So what are we poor punters supposed to do with our hand-wringing doubts amid shrill claim and counter-claim? Tony Simpson of the Harris poll gives the kindly answer to the tune of a belly laugh: live with them - and don't bet. With that sound advice - plus a hint from Aristotle about knowing the exactitude each type of inquiry allows - we can still tell heads from tails by making our own allowances for how many of the "don't knows", say, are crypto-Tories, along the lines sketched below. Look at the record. We've as good a chance of getting it right as the pollsters.
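For anyone inclined to take that advice literally, the allowance is simple arithmetic. In this sketch the headline shares and the crypto-Tory split are pure guesses, not any published poll; it merely folds an assumed slice of the "don't knows" back into the party totals.

def reallocate_dont_knows(shares, dk_pct, dk_split):
    # shares: headline percentages among those naming a party.
    # dk_pct: "don't know" share of the whole sample (20-30 in the text).
    # dk_split: your guess at how the don't-knows would actually vote.
    committed = 100 - dk_pct
    return {party: share * committed / 100 + dk_pct * dk_split.get(party, 0) / 100
            for party, share in shares.items()}

headline = {"Lab": 50, "Con": 32, "LD": 13, "Other": 5}     # invented
adjusted = reallocate_dont_knows(
    headline,
    dk_pct=25,                                  # mid-range of 20 to 30 per cent
    dk_split={"Con": 60, "Lab": 25, "LD": 15},  # a crypto-Tory guess
)
print({party: f"{share:.1f}" for party, share in adjusted.items()})
# An 18-point headline lead shrinks to under five once a
# Tory-leaning quarter of the sample is put back in.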
The writer is the retired managing director of a market research firm.