Here at 538, we think a big part of our jobs during election season is to explore and explain how much trust you should put in all the people telling you who's going to win. More than anyone else, given the amount of data they produce and the press and public's voracious appetite for it, that includes the pollsters. That's why we do things like publish ratings of pollster accuracy and transparency and make complex election forecasting models to explore what would happen if the polls are off by as much as they have been historically.
Polls are also important for the reporting we do here at 538, which is rooted in empiricism and data. Furthermore, the quality of the data we're getting about public opinion is important not just for predicting election outcomes and doing political journalism, but also for many other parts of our democratic process.
Suffice it to say, if polls are getting more or less accurate, the public needs to know. And now that the 2024 election is in the rearview mirror, we can take a rough first look at how accurate polling was.
One note on scope before we get started: In this article, I'll be taking just a broad look at how polls did in states where the results are final or nearly final. That means we won't be assessing the accuracy of national polls yet, given how many votes are still left to count in California and other slow-counting states, nor the accuracy of individual pollsters, which we'll do when we update our pollster ratings next spring.
Despite the early narrative swirling around in the media, 2024 was a pretty good year to be a pollster. According to 538's analysis of polls conducted in competitive states* in which over 95 percent of the expected vote was counted as of Nov. 8 at 6 a.m. Eastern, the average poll conducted over the last three weeks of the campaign missed the margin of the election by just 2.94 percentage points. In the main swing states (excluding Arizona, which has not yet reached 95 percent reporting), pollsters did even better: They missed the margin by just 2.2 points.
This measure, which we call "statistical error," captures how far off the polls were in each state without regard for whether they systematically overestimated support for one candidate. And by this metric, state-level polling error in 2024 is actually the lowest it has been in at least 25 years. By comparison, state-level polls in 2016 and 2020 had an average error close to 4.7 percentage points. Even in 2012, which stands out as a good year for both polling and election forecasting, the polls missed election outcomes by 3.2 percentage points.
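To make the arithmetic behind that number concrete, here's a minimal sketch of how an average statistical error can be computed, assuming the metric is simply the absolute difference between each poll's margin and the certified margin, averaged across polls; the margins below are invented for illustration, and 538's actual calculation may apply additional adjustments.

```python
# Minimal sketch of an average "statistical error" calculation.
# Assumption: error = |polled margin - actual margin|, in percentage points,
# averaged across polls. The numbers below are illustrative, not real polls.

polls = [
    # (state, polled Harris-minus-Trump margin, actual Harris-minus-Trump margin)
    ("PA", -0.5, -2.0),
    ("WI",  1.0, -0.9),
    ("MI",  0.0, -1.4),
]

errors = [abs(polled - actual) for _, polled, actual in polls]
avg_statistical_error = sum(errors) / len(errors)
print(f"Average statistical error: {avg_statistical_error:.2f} points")
```

Because the absolute value discards direction, a 2-point miss toward Harris counts the same as a 2-point miss toward Trump, which is what makes this a measure of accuracy rather than of systematic bias.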
At this early juncture, we can only speculate as to why error was so low this year. One reason could be that pollsters have mostly moved away from conducting polls using random-digit dialing, a method that has recently tended to generate results that oscillate more wildly from poll to poll than other approaches. One notable pollster that still uses RDD is Selzer & Co., which had Vice President Kamala Harris leading President-elect Donald Trump by 3 points in its final poll of Iowa this year. Trump ended up winning the state by about 13 points, making for a 16-point error. It looks possible that Selzer's poll had too many Democrats and college-educated voters in it, factors the firm generally does not attempt to correct for because of Selzer's philosophy of "keeping [her] dirty hands off the data" (to be fair, this approach had worked excellently until this year; Selzer is one of the top-rated pollsters in 538's pollster ratings).
Quinnipiac University, which also uses RDD, likewise generated polls that didn't seem consistent across states, though they ended up closer to the outcome than Selzer's. Meanwhile, other prominent pollsters that previously used RDD have now stopped using the method. That includes ABC News, which, after publishing an RDD poll that found now-President Joe Biden ahead of Trump by 17 points in Wisconsin in 2020 (a result the pollsters behind the survey rightly flagged as an outlier when it was published), now sources its polls from Ipsos, which conducts polls online among respondents who are randomly recruited by mail and telephone.
Another factor is that pollsters are increasingly balancing their samples on both demographic and political variables, such as individuals' recalled vote in the last election. While this can cause some strange results, it generally stabilizes the polls and produces fewer outliers than one would expect by random chance alone.
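As a rough illustration of what balancing on a political variable looks like, here's a minimal sketch that weights a sample so its recalled 2020 vote matches a benchmark; the sample counts and target shares are invented, and real pollsters typically rake on many demographic and political variables at once rather than just this one.

```python
# Minimal sketch of weighting a poll sample to a recalled-vote benchmark.
# Assumption: simple post-stratification on one variable (recalled 2020 vote);
# the sample and target shares below are invented for illustration.

sample = ["Biden"] * 550 + ["Trump"] * 400 + ["Other/nonvoter"] * 50

# Benchmark shares the pollster wants the weighted sample to match
targets = {"Biden": 0.51, "Trump": 0.47, "Other/nonvoter": 0.02}

n = len(sample)
counts = {group: sample.count(group) for group in targets}

# Each respondent's weight is the target share divided by their group's sample share
weights = {group: targets[group] / (counts[group] / n) for group in targets}

for group, weight in weights.items():
    print(f"{group}: sample share {counts[group] / n:.2f}, weight {weight:.2f}")
```

In this toy example, respondents who recall voting for Biden get weighted down and those who recall voting for Trump get weighted up, pulling the sample back toward the benchmark; applied across many polls, that kind of anchoring is what tends to stabilize results and suppress outliers.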
Based on our preliminary findings, pollsters that used this aggressive approach to modeling had lower error than others. While mode of interviewing is only a loose proxy until we conduct a more thorough analysis, we found that pollsters who surveyed online probability panels, interviewed people with robo-calls, or included text messages or phone calls as part of a bigger mixed-mode sampling design tended to use more complex weighting schemes (and were especially reliant on recalled vote), and they also had lower error than pollsters using more hands-off modes: