I was the only pollster to predict the Trump landslide
James Johnson – a former adviser to Theresa May – reveals how his firm, JL Partners, called the scale of Trump’s shock victory. ‘Trump might be a wolf,’ one voter told him, ‘but he’s straight about it’
The Wednesday before election day I was in Detroit. After months of conducting interviews, focus groups and surveys, I asked my last question of a voter in this election.
Sheree is a Black 31-year-old single mother. She voted for Hillary Clinton in 2016, sat out 2020, and was undecided in 2024. Throughout our 90-minute interview, we discussed her concerns about Donald Trump (“He’s just so aggressive”), Kamala Harris (“Fake”), the border (“A real disgrace”), education (“I don’t want my daughter to be worrying about sharing a bathroom with a boy”), and the economy (“I just feel stuck”).
But despite never having voted Republican before, when pushed for a preference, Sheree told me she was leaning towards Donald Trump. The primary reason? The economy: she simply felt better off under the Trump administration than she did under Joe Biden's. She also felt that the former president was more authentic than Harris. “Donald Trump might be a wolf. But he’s straight about it. Kamala is a wolf in sheep’s clothing,” she said.
Sheree, like one in three of her fellow ethnic minority voters, backed Trump on Tuesday – the best outcome a Republican candidate has achieved among non-white voters in decades.
It was only one interview, but that conversation with Sheree felt like a moment.
It certainly made me feel better about our late pre-election polling and projections. Our final poll predicted a three-point lead for Trump in the popular vote – the biggest lead forecast by any pollster, and we were one of very few to have him winning the national vote share. Our final projection showed a larger margin in the electoral college than any other outfit. We had called a Trump win since 24 September, and for 25 of the final 30 days we had a forecast of 312 to 226 electoral votes for Trump. On Tuesday morning, as Harris’s odds shot up, my polling firm suddenly felt like rather an exposed hill to be perched upon.
The chill soon faded. JL Partners was the most accurate polling and modelling firm, not only in the country, but globally. Many others fell short, refused to back a position, or predicted a Harris victory. It is a testament to the fantastic team that my co-founder Dr Tom Lubbock and I work with every day.
How did we do it? First, rather than just using one method to reach voters, we utilised a mixed-mode approach. Our research found that phone-only polls give too much weight to those who are willing to give pollsters the time of day on the phone: our statistics show that such a person is more likely to be an older, white, liberal woman. This was the undoing of the poll by Ann Selzer that showed Harris winning Iowa by three points (Trump went on to win the state by 13).
Online-only polls skew their samples too. They pick up too many people who are educated, more engaged, younger, and working from home. Those groups are all more likely to vote Democrat.
Instead, we blended our methods using an in-house algorithm, depending on the audience we were trying to reach. We used a combination of live calls to cell phones, live calls to landlines, SMS-to-web, online panels, and in-app game polling. Each method reached a different type of voter: in-app polling, for example, which incentivises responses by awarding in-game points as people play games on their phones, is more likely to pick up young, non-white men. That approach showed that 30 per cent of our non-white sample backed Trump, in line with the results.
We picked up that elusive Trump voter who is less likely to trust polls, or have the time to fill them out, by meeting them on their own terms as they went about their daily lives.
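The blending described above is proprietary, so the details are not public. But the general idea behind combining modes is familiar from post-stratification: weight each respondent so the blended sample matches known targets for the electorate. The sketch below is purely illustrative, with made-up respondents and a single (hypothetical) age dimension standing in for the many variables a real model would use.

```python
# Illustrative sketch only: JL Partners' actual blending algorithm is
# proprietary. This shows plain post-stratification -- weighting each
# respondent so the pooled multi-mode sample matches population targets.

from collections import Counter

# Hypothetical respondents: (mode, age_group, vote)
sample = [
    ("phone", "65+", "Trump"), ("phone", "65+", "Harris"),
    ("phone", "65+", "Harris"), ("phone", "18-34", "Harris"),
    ("in_app", "18-34", "Trump"), ("in_app", "18-34", "Trump"),
    ("in_app", "18-34", "Harris"), ("online", "35-64", "Trump"),
    ("online", "35-64", "Harris"), ("online", "65+", "Trump"),
]

# Hypothetical targets for the electorate's age composition
targets = {"18-34": 0.30, "35-64": 0.45, "65+": 0.25}

counts = Counter(age for _, age, _ in sample)
# Weight for each group = target share / observed share in the pooled sample
weights = {age: targets[age] / (counts[age] / len(sample)) for age in counts}

weighted_votes = Counter()
for _, age, vote in sample:
    weighted_votes[vote] += weights[age]

total = sum(weighted_votes.values())
for cand, share in weighted_votes.items():
    print(f"{cand}: {share / total:.1%}")
```

Here the phone sample over-represents older voters and the in-app sample over-represents younger ones; the weights pull the pooled estimate back towards the target composition.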
We also made bold but firm calls when it came to the composition of our samples. For instance, our polls kept turning up a significant number of 2016 and 2020 non-voters. In our final poll, that figure was a seemingly staggering 17 per cent. This group was predominantly rural, and favoured Trump by a margin of 20 points.
Other pollsters scaled or weighted these numbers down. They assumed they could not be real. But we did not herd or hedge, and went with the data even when it challenged our assumptions. We were right to do so, and a turnout surge – especially in rural areas – helped Trump with his most favourable demographics.
When Harris joined the race, we made another decision: we would not award her the typical incumbency bonus that almost every other forecast model applied. We received criticism for this, but it was the right call: including such a bonus would have swung national win probabilities by around five points in Harris’s favour.
This should not have been a surprise for modellers – Harris was not the incumbent. In fact, just as incumbents around the world have been toppled one after another since 2020, at multiple points in the race our polling was showing that the incumbent – President Biden – was hurting her chances.
We also learnt from our experience in the UK general election earlier this year, when we were also one of the most accurate pollsters.
When a voter says they do not know how they will vote, most pollsters simply remove them from the sample altogether. Instead, we used a model, trained on demographic data and the other answers respondents gave in our survey, to predict how they would vote. For example, if you said in our survey that you didn’t know how you would vote, but that you cared most about the economy and rated Trump as better on it, we categorised you as a Trump voter.
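The firm's real model is trained on demographics and a full set of survey responses; the toy version below only encodes the single rule given as an example in the text (top issue, plus which candidate the respondent rates better on it). All field names are hypothetical.

```python
# Toy sketch of imputing a vote for "don't know" respondents, based on
# the one rule described in the text. The actual JL Partners model is a
# trained statistical model, not a hand-written rule like this.

def impute_vote(response: dict) -> str:
    """Return the stated vote, or impute a lean for undecided respondents."""
    if response["vote"] != "undecided":
        return response["vote"]
    # e.g. cares most about the economy and rates Trump better on it
    top_issue = response["top_issue"]
    return response["better_on"].get(top_issue, "undecided")

undecided = {"vote": "undecided", "top_issue": "economy",
             "better_on": {"economy": "Trump", "healthcare": "Harris"}}
print(impute_vote(undecided))  # imputed as "Trump"
```

The design choice this illustrates is simply that undecided respondents carry usable signal in their other answers, rather than being dropped from the sample.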
As was the case in the UK general election – where we showed a smaller Labour lead than other polling firms – this approach got us closer to the final result.
These decisions may feel like subjective judgements. To some extent they are: polling is a science, but you also need to make calls to decide how to deal with questions such as incumbency bias, what your samples look like, and how to treat undecided voters.
The difference for us was that we did not make these decisions based on the commentariat’s wisdom or our own hunches. Having run hundreds of 90-minute face-to-face interviews with voters over the last two years, we were confident that these decisions were based on the real picture.
These long-form interviews were also key to understanding non-white vote switchers to Trump: a 10-minute conversation would not capture their voting intention. They would start the interview leaning towards Harris, but end it by telling us they would vote Trump – a better indication of where they would end up landing, after their worldview on the economy, border, and family values became clear.
Sheree might just be one voter. But conversations with people like her grounded our analysis in the reality of this election.
I have spoken to a North Carolina fortune teller, scaled Nevada sand dunes with a swing voter, even sat for two hours with an Iowan nudist. I often questioned what I was doing. But by combining cutting-edge statistical skills with time spent on the ground with everyday Americans, we were able to succeed where others fell short. We did not herd or get buffeted by groupthink, but stuck to our guns and called the result right.
James Johnson is a New York City-based pollster, and previously served as an adviser to Theresa May. He is the co-founder of polling company JL Partners