What 2016 should teach us about 2020 polls
(CNN) — Earlier this week, the Democratic race for president was seemingly rocked by a Monmouth University poll showing former Vice President Joe Biden falling from 32% to 19%.

The poll, which had Biden in a virtual tie with Vermont Sen. Bernie Sanders and Massachusetts Sen. Elizabeth Warren, was an outlier compared with the polling average, as well as with a CNN/SSRS poll conducted over nearly the same period. Later polls, such as one from Quinnipiac University, confirmed that Biden still leads the Democratic race with about 30% support.

While some, including Biden’s own pollster, have suggested that Monmouth should not have released the results, I have the opposite view. Monmouth should have put the poll out there, and it’s the job of the media to put every poll in its proper context.

The big reason why pollsters should publish outlier polls is that doing so proves they’re running an honest shop. Every poll has a margin of error, but remember that the margin of error (as traditionally reported) covers only 95% of results. Roughly one time in 20, a result will fall outside the margin of error by chance alone.

With so many national polls of the Democratic race being conducted, we should see a decent number of polls that fall outside the margin of error of the average. Monmouth’s was one of those cases. There should be many more “outliers” to follow.
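That one-in-20 expectation is easy to check with a quick simulation. All the numbers below are illustrative, not drawn from any real poll: a hypothetical candidate with 30% true support, polls of 500 respondents each.

```python
import math
import random

random.seed(0)

TRUE_SHARE = 0.30  # hypothetical "true" level of support
N = 500            # respondents per simulated poll (illustrative)
POLLS = 4_000      # number of simulated polls

# The traditionally reported 95% margin of error for a proportion
# uses the normal approximation: 1.96 * sqrt(p * (1 - p) / n).
moe = 1.96 * math.sqrt(TRUE_SHARE * (1 - TRUE_SHARE) / N)

outside = 0
for _ in range(POLLS):
    # Each respondent supports the candidate with probability TRUE_SHARE.
    hits = sum(random.random() < TRUE_SHARE for _ in range(N))
    if abs(hits / N - TRUE_SHARE) > moe:
        outside += 1

print(f"margin of error: +/-{moe * 100:.1f} points")
print(f"polls outside the margin of error: {outside / POLLS:.1%}")  # roughly 5%
```

Even with every poll conducted perfectly, about one in 20 lands outside the reported margin of error, purely by sampling luck.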

Moreover, the margin of error for primary polls is going to be wider than you might expect. Because it is expensive to reach a subset of the population, primary polls work with smaller samples, and most national primary polls will have a margin of error of 5 points or greater. A result that looks like an outlier may still be within the margin of error.
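A back-of-the-envelope version of that arithmetic, using the standard normal-approximation formula for a proportion’s 95% margin of error. The sample sizes are hypothetical: a general-population poll of 1,000 adults versus the roughly 400 of them who might identify as Democrats or Democratic leaners.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error (normal approximation) for a proportion.

    p=0.5 gives the widest, most conservative margin.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative sample sizes: the full sample vs. the primary-voter subsample.
for n in (1000, 400):
    print(f"n={n}: +/-{margin_of_error(n) * 100:.1f} points")
```

Cutting the sample from 1,000 to 400 pushes the margin from about 3 points to about 5, which is why primary polls are inherently noisier than general-election polls of the same cost.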

When a pollster consistently produces results close to the average, it suggests something funky is going on. It could mean the pollster is not reporting outlier polls or is somehow weighting its polls to match the average. That’s bad science. Pollsters should have faith in their methods, as Monmouth did with this poll, and as others, such as Ann Selzer of Iowa fame, have done when releasing outlier results. Selzer did so when she was one of the few pollsters to correctly pick up on Barack Obama’s strength ahead of the 2008 Iowa caucuses. She did so again when she published a national poll showing Obama up double digits on Republican Mitt Romney in 2012.

If pollsters suppress their surveys, they may actually be missing a real trend. There have been instances in the United States and abroad where pollsters admitted withholding results because they didn’t match the average. Sometimes, those “outlier” polls turned out to be accurate, and the average was wrong.

So what should the media do when a pollster puts out a poll that looks very different from the others? Not ignore the outlier: doing that is little better than pollsters not releasing outliers in the first place. In 2016, many outlets ignored supposed outliers showing Trump doing better in key battlegrounds than the average suggested. On the other hand, we shouldn’t hype outliers, either – as I fear many did with the Monmouth poll.

Rather, folks in the media, including myself, should give context to outlier polls. We can do so by averaging and/or listing the results of all recent polling data. By showing the full scope of polling results, the media can demonstrate to its audience that polling is an inexact science. Polls can shine a light on where a race generally stands – but they can’t pinpoint it. Margins of error and other types of potential polling error (e.g. coverage error) are real.

While it may seem counterintuitive, outliers actually make averages more accurate. As FiveThirtyEight’s Nate Silver has shown, when pollsters begin to herd (or when people stop including outliers in their averages), the average becomes less accurate.
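A toy simulation can illustrate why. In the sketch below – with made-up numbers throughout – “herding” pollsters pull each new estimate 70% of the way toward the running average before publishing, while independent pollsters publish their raw, unbiased estimates.

```python
import random
import statistics

random.seed(1)

TRUE = 0.30    # hypothetical true support
NOISE = 0.025  # sampling noise (standard deviation) per poll
SHRINK = 0.7   # how far herding pollsters pull toward the pack

def simulate(herd, polls=20, trials=2000):
    """Average absolute error of the polling average across many trials."""
    errors = []
    for _ in range(trials):
        results = []
        for _ in range(polls):
            est = random.gauss(TRUE, NOISE)  # unbiased but noisy estimate
            if herd and results:
                avg = statistics.mean(results)
                est = avg + (1 - SHRINK) * (est - avg)  # pull toward the average
            results.append(est)
        errors.append(abs(statistics.mean(results) - TRUE))
    return statistics.mean(errors)

print(f"error of average, independent polls: {simulate(False):.4f}")
print(f"error of average, herded polls:      {simulate(True):.4f}")
```

Because herded polls mostly echo the earliest results, averaging them cancels out far less noise than averaging genuinely independent polls, so the herded average lands further from the truth.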

My hope (however fleeting) is that the Monmouth poll becomes a lesson. Outliers should happen, and we should report them responsibly.