Washington (CNN) —

Polls in the final weeks of the 2020 election campaign were farther off the mark from the election results on average than polls in any election in decades, according to a new task force report released Monday. The analysis, from the American Association for Public Opinion Research, suggests partisan differences in who chooses to take polls – portending challenges for pollsters trying to avoid similar problems in the future.

While nearly all polls in the two weeks leading up to Election Day correctly pegged Joe Biden as the winner of the presidential race nationwide, polling performance was more mixed across states, and the average size of the miss on the margin in the presidential race and other contests was larger than in other recent years. The errors tended to be larger in more Republican parts of the country, and polls overall consistently underestimated support for Republican candidates, a trend observed in several recent federal elections.

The average 2020 errors were high regardless of how a poll’s interviews were conducted or how it selected people to interview. There were errors across contests as well – it wasn’t just the presidential polls that missed, but the down-ballot ones, too. The errors were also fairly consistent over time, meaning polls conducted just before the election, when more voters would have made up their minds, were no better than those a week or even months earlier.

The report suggests that the widespread miss across polls was not due to a repeat of the errors that sent 2016 state polling astray – including late shifts in voter preferences and not ensuring that polls included the correct share of people without college degrees – but rather that new sources of error had emerged, with the evidence largely pointing toward differences in political views between those who responded to polls and those who did not.

But in analyzing the data collected through those pre-election surveys, the task force did not have enough information to say with certainty what caused those errors or whether they were limited only to election estimates. Being more precise would require in-depth study of those who chose not to participate in polls, a task pollsters are just beginning to undertake.

“Following up with the people who we tried to contact but who aren’t taking our polls is really important for getting an understanding about, is there something systematically different about them? Why are they not participating and what reasons do they give? How much of this is unique to the particular moment versus something that’s a more structural or enduring issue that polling’s going to confront going forward?” said Joshua D. Clinton, a professor of political science at Vanderbilt University who chaired the task force.

In one effort to pinpoint the source of the error, the task force adjusted the results of several pre-election surveys so that poll takers’ presidential preferences matched the outcome of the election to see what else might change. That meant, for example, taking a national survey where Biden had 52% support and Donald Trump had 42% and weighting it so that Biden supporters made up 51% of the total and Trump supporters represented 47%.

The exercise did not significantly move the numbers for demographics like age, race, education or gender – traits researchers often use to correct for survey non-response. But it did move the numbers for partisanship and for self-reported 2016 vote. That suggests two possibilities: Either the makeup of partisans reached by the poll was incorrect, or the types of people reached within some subsets were not representative.
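For readers curious about the mechanics, here is a minimal sketch in Python of that kind of recalibration, assuming a respondent-level data file with hypothetical column names (pres_pref, weight, party_id, vote_2016 and so on): it rescales each respondent’s weight so the weighted candidate shares match the certified result, then compares how other variables shift.

```python
# Minimal sketch of the recalibration exercise described above (column names are
# hypothetical): rescale each respondent's weight so the weighted candidate shares
# match the certified result, then check which other variables move.
import pandas as pd

def reweight_to_result(df, targets, pref_col="pres_pref", weight_col="weight"):
    """Scale weights within each candidate-preference group to hit target shares."""
    out = df.copy()
    shares = out.groupby(pref_col)[weight_col].sum() / out[weight_col].sum()
    for candidate, target in targets.items():
        out.loc[out[pref_col] == candidate, weight_col] *= target / shares[candidate]
    return out

def weighted_share(df, col, weight_col="weight"):
    """Weighted distribution of one variable, as proportions."""
    return df.groupby(col)[weight_col].sum() / df[weight_col].sum()

# Example: a poll showing Biden 52% / Trump 42%, forced to the 51% / 47% outcome.
# poll = pd.read_csv("poll_microdata.csv")  # hypothetical respondent-level file
# adjusted = reweight_to_result(poll, {"Biden": 0.51, "Trump": 0.47, "Other": 0.02})
# for var in ["age_group", "educ", "gender", "party_id", "vote_2016"]:
#     print(pd.concat({"before": weighted_share(poll, var),
#                      "after": weighted_share(adjusted, var)}, axis=1))
```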

A growing consensus

As many states have finalized voter records with updated information on who voted in 2020 and how they cast their ballots, more pollsters and voter list vendors have started to release their own analyses of what happened in polling and in the election. Several of those have pointed toward conclusions similar to those suggested by the AAPOR Task Force Report, and a consensus appears to be building around four possible ways that the polls missed.

First, the polls may have underrepresented the share of Republicans in the electorate. Perhaps Republicans were dissuaded from taking polls by Trump’s frequent criticism of them, or by lower levels of trust in frequent sponsors of polls such as media organizations and academic institutions. Or, with the politicization of the pandemic and Republican leaders railing against pandemic-related restrictions, it’s possible Republicans were simply harder to reach because they were less likely than Democrats to stay home as a precaution against Covid-19.

Second, the group of people interviewed in polls and identified as likely voters may have included too many Democrats. The theory holds that Democrats were unusually enthusiastic about the election and, because of the politicization of responses to the coronavirus pandemic, more likely to be staying home – and therefore easier to reach and more willing to take a survey once they were contacted.

Third, the polls could have had the right overall share of Democrats and Republicans but got the wrong types of people within those subsets or among independents. Maybe they interviewed too many Republicans who had turned away from Trump and not enough of his core supporters, for example.

And fourth, polling may have erred in its estimates of how infrequent voters would behave, either in how many new voters would turn out or in their candidate preferences. Turnout in 2020 was so high that, at minimum, about 1 in 7 voters were people who did not cast ballots in 2016, nearly three times the equivalent figure between 2012 and 2016. And since poll respondents tend to be more politically engaged than those who opt out of polls, it’s especially difficult to tell whether the respondents drawn from this less engaged subset of the electorate were representative of the broader group of new voters.

But moving from possible explanations to clear answers is a challenge, and pollsters don’t yet have the data they need to draw firm conclusions. There are few agreed-upon sources of truth for the election polling metrics that matter the most, such as partisanship, which makes it challenging to effectively diagnose what went wrong when polls miss.

There are voter lists which show who voted in 2020, but information on the demographic characteristics and political leanings of voters comes from statistical modeling and varies depending on who is doing the modeling. Exit polls, which traditionally interviewed voters as they left their polling places and therefore avoided the peril inherent in identifying likely voters, are now more reliant on pre-election surveys to capture the sizable pool of absentee and early voters, and so are subject to some of the same concerns as other pre-election polls.

And even the Census Bureau’s estimates of the voting population from its post-election Current Population Survey have some error built in due to reliance on self-reported voting behavior, which is often overstated, and those figures don’t include any information about vote choice.

None of these sources can definitively show how the voters who took polls in 2020 differed from those who did not. Without more concrete information about who opted out of polls in 2020 and why, finding solutions could be a challenge.

A consortium of Democratic campaign pollsters released a post-election assessment in April which suggested that getting the wrong people within their subset of Trump supporters was a bigger problem than wrongly estimating the size of any particular group.

“What we have settled on is the idea there is something systematically different about the people we reached, and the people we did not,” the report states, going on to note that initial analysis points to an underrepresentation of people who saw Trump as presidential and an overrepresentation of those who favored government action.

Similarly, an analysis from the Kaiser Family Foundation on polling conducted with the Cook Political Report also points to political differences between those taking polls and those who did not: “What is clear in our analysis and others’ is that polls are missing a certain segment of voters who disproportionately supported President Trump.”

Surveys conducted using online panels, where the same people are interviewed at fairly regular intervals, have some ability to track voter preferences over time using data collected as past elections happened, rather than depending on a poll taker’s ability to accurately recall what they did four years ago.

Doug Rivers, a Stanford University political science professor and chief scientist for YouGov, drew on YouGov’s data to provide evidence suggesting a difference among Republicans at a Roper Center event in January. Looking at YouGov’s panelists, Rivers said, “the 2016 Trump voters who still approved of Trump in December of 2019 had declining participation rates over 2020, and 2016 Trump voters who … disapproved of him at the end of 2019 actually had increasing participation rates, the only group that actually went up in its participation rate over time. So our weighting on 2016 Trump vote unfortunately had the effect that we had too many 2016 Trump voters who were not enthusiastic about him and too few who were enthusiastic about him.”

Differential non-response – the technical term for this type of issue – hadn’t been much of a problem for surveys until now. The share of people contacted to participate in polls who choose to take part – the response rate – has declined sharply in the last two decades, but research assessing the validity of low-response rate polls generally found that they were still gathering a representative sample of Americans.

Analysis from the Pew Research Center as recently as 2016 found that low-response-rate telephone polls produced estimates on many demographic and political measures that were similar to high-response-rate polls. For many surveys, adjusting a handful of demographic results to match the population totals in a process called weighting – typically for age, race and ethnicity, gender, and educational attainment – was enough to ensure that poll results would represent the views of the full adult population.
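As a rough illustration of that process, here is a short Python sketch of raking, the iterative reweighting commonly used to hit such targets; the variable names and target shares below are placeholders, not actual census benchmarks.

```python
# A rough sketch of demographic raking: repeatedly adjust weights so the sample's
# weighted margins match population targets, one variable at a time. Variable names
# and targets are illustrative, not actual benchmarks.
import pandas as pd

def rake(df, margins, weight_col="weight", iterations=25):
    """Iterative proportional fitting to a dict of {variable: {category: share}}."""
    out = df.copy()
    if weight_col not in out:
        out[weight_col] = 1.0
    for _ in range(iterations):
        for var, targets in margins.items():
            shares = out.groupby(var)[weight_col].sum() / out[weight_col].sum()
            for category, target in targets.items():
                out.loc[out[var] == category, weight_col] *= target / shares[category]
    return out

# Placeholder targets for the kinds of variables pollsters typically weight on.
# margins = {
#     "gender":    {"Female": 0.52, "Male": 0.48},
#     "educ":      {"College": 0.36, "No college": 0.64},
#     "age_group": {"18-44": 0.45, "45+": 0.55},
#     "race":      {"White": 0.63, "Nonwhite": 0.37},
# }
# weighted_poll = rake(poll, margins)
```

Each pass nudges the weights so one variable’s weighted margins match its targets; cycling through the variables enough times typically brings all of them into line at once.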

But the new report suggests such straightforward adjustments may no longer do the trick for polls seeking to measure election preferences, and the finding could have implications for the interpretation of data on other political and issue topics.

If the main source of error turns out not to be the relative size of the groups of partisans who were interviewed, but a difference within a group of partisans between those who respond to polls and those who do not, it would be hard to find evidence of that error outside of a comparison to election results. A poll could look completely reasonable in its partisan composition and still be off the mark if it isn’t taking the right steps to account for differences within partisan groups. That would mean a poll’s ability to get the right result could become more reliant on statistical modeling.

“The polling results are increasingly dependent upon the statistical adjustments that are being done,” said Clinton in presenting the preliminary results of the report to AAPOR’s conference attendees in May. “That makes it very hard as a consumer to evaluate what’s going on because you don’t know how much of what’s going on is due to the data that’s being collected vs. the assumptions that are being made to adjust those results.”

Until there is a clear consensus on which of the most likely possible causes of the 2020 errors contributed the most, pollsters may have a difficult time choosing which adjustments to make and proving that their polls are really representative. Some have begun applying new weights to their surveys to adjust for partisan composition or self-reported 2020 vote preferences, but there isn’t much evidence to suggest that those adjustments do enough to make up for what happened in 2020.
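For pollsters experimenting with those newer adjustments, the mechanical change is small: add partisanship or recalled vote to the weighting targets. A hedged sketch, building on the raking function above; the target shares here are placeholders, and settling on defensible ones is precisely the unresolved problem.

```python
# A hedged sketch of the newer approach some pollsters have tried: folding partisan
# composition or recalled 2020 vote into the raking targets used above. These target
# shares are placeholders; choosing defensible ones is the unsettled part.
partisan_margins = {
    "party_id":      {"Dem": 0.33, "Rep": 0.29, "Ind": 0.38},
    "recalled_2020": {"Biden": 0.51, "Trump": 0.47, "Other": 0.02},
}
# weighted_poll = rake(poll, {**margins, **partisan_margins})
```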

The answer likely lies in knowing more about who took the poll and who opted out.

“There are different clues that you get depending on the methods that you use,” Clinton said in an interview. “If you’re doing a registration-based sample, then I think you can get clues, because you know, or you think you know, what the demographics and the partisanship are of the people who aren’t responding to your survey…Or if you are doing an online survey from an existing panel that has taken other surveys in the past, that may give you clues by saying are there characteristics of people who are choosing to take the survey or not.”

The pollsters who can successfully interpret those clues will be able to paint a more accurate picture of public opinion in America today.

Disclaimer: The author is a member of the AAPOR task force involved in preparing this research.