The Blankenship 'surge' is why we should be wary of internal data and polls

(CNN)Republicans escaped disaster in West Virginia on Tuesday when Don Blankenship finished third in the Senate primary. In fact, Blankenship ended up about 15 points behind winner Patrick Morrisey. That came as a surprise to some in the media and no doubt to some consumers of news who heard reports of a "Blankenship surge" over the weekend.
I'm just not sure there ever was a "surge," however, given the public data, and I would argue this is another example of why we should be very wary of data that cannot be verified.
After the 2016 campaign, there was a belief that the polls were "broken." But as I've shown, the public polls so far this cycle have been pretty predictive of the results. Sometimes they have been off, but all within an error range that should be expected given the historical predictiveness of polls.
That's why, coming into the West Virginia election, I thought there was only one appropriate characterization of the race: We just didn't know with any real certainty who was going to win.
The last two polls of the race that weren't sponsored by either candidate (one conducted for Fox News and one conducted by National Research for the Republican group GOPAC) had Morrisey and Evan Jenkins neck and neck, with Blankenship in third but close enough to potentially win given the past predictiveness of primary polling. The Fox News poll is considered "gold standard"; the GOPAC poll is not, but it tracked relatively closely with the Fox News result.
Those polls were conducted a few weeks before the election, so it wasn't possible to dismiss the idea of a Blankenship surge, but there was no sign in the publicly available polling data that he was actually surging.
The surge narrative instead came when Politico and then The Weekly Standard reported internal poll data over the weekend. Internal polling is polling sponsored by someone involved with a campaign or by a group supporting a candidate. Further, it is data that is not released with any regularity by the group sponsoring the poll.
Politico referred to internal polling without specific numbers. The Weekly Standard followed up with two polls supposedly showing Blankenship in the lead with Morrisey and Jenkins behind him.
Internal polling released to the public is, at best, a shaky proposition. The polling is often biased in favor of the candidate the person or group releasing it wants to win, because that group or person will choose to release the polls that look best for their candidate while keeping the ones that don't away from public view.
Yet that wasn't exactly the situation in West Virginia. We didn't know how the "internal" polling might be biased because we didn't know who conducted the polls or who sponsored them. Whoever was pushing these polls could have had motives other than getting the result right, and they had little on the line: their names would never be uttered publicly, even if the results were off.
This meant the results could have been skewed in other ways that made them less than accurate. The pollsters might have used models of the electorate that didn't match their actual expectations of what the electorate would look like. They might have cherry-picked the one-night samples, out of multiple nights of interviews, that looked best for Blankenship. Maybe they wanted to scare voters into turning out for a candidate other than Blankenship. Maybe they wanted President Donald Trump to issue a statement against Blankenship. We just have no idea.
It's not that I think the internal polls were fake (like some other polling has been this cycle). The polls could very well have been accurate representations of what voters thought when the polls were released. In other words, Blankenship could really have been surging, and his surge stopped only when Trump said people shouldn't vote for him.
But I think the problem runs deeper than that. There was no way to know if we could trust the West Virginia internal polls that were released. They failed many of my criteria for trusting polling. We didn't know who conducted the polls, how they were conducted, what questions were asked, when the questions were asked or why the pollsters conducted the poll (i.e. for whom). Failing five of the major criteria for when a poll is reportable was a pretty clear sign that the data in those polls was not something I would trust.
The only way to have known whether Blankenship had momentum was for pollsters unaffiliated with his campaign to go back into the field in the final few weeks. Apparently, the cost of polling prohibited that. That's no reason, though, to rely on data whose sponsors won't even put their names on it.
Now, if this were just a one-off in the last few months, I'd be less concerned about how the media is allowing internal polls to shape election narratives during the Trump era. It's not a one-off, though.
It was only a few months ago that anyone "in the know" was told to expect that Democrat Conor Lamb would beat Republican Rick Saccone by mid-single digits in the Pennsylvania 18th Congressional District special election. The average of publicly released polls had Lamb up by less than 3 points, and he won by fewer than 1,000 votes. The internal data shared with reporters certainly added to the post-Pennsylvania narrative that you shouldn't trust data, even though the public data was actually quite good in that race.
There also seems to be an inside-the-Beltway belief, based on some internal data, that Democrats are destined to take back the House. That was illustrated well in an article last week reporting that White House legislative liaison Marc Short told Trump that Democrats were all but certain to win back the House. Yet the publicly available data suggests far more caution is needed.
I'm hopeful that going forward we'll be more cautious about magical internal data floated into the public sphere, and that we'll recognize the public data is useful, viewed with the appropriate caveat that polling is an imperfect tool for understanding the electorate. I'm doubtful, though, that either of these will occur.