

Nate Silver Is Unskewing Polls — All Of Them — In Trump’s Direction

The vaunted 538 election forecaster is putting his thumb on the scales.

11/05/2016 03:59 pm ET | Updated 3 hours ago

[Associated Press] Nate Silver sits on the stairs at the Allegro hotel in downtown Chicago on Nov. 9, 2012. The statistician correctly predicted the 2012 presidential winner in all 50 states and almost all the Senate races.
During the 2012 election, Republicans who hated the daily onslaught of polling showing that Mitt Romney was headed toward a comfortable defeat turned to Dean Chambers, the man who launched the website Unskewed Polls. The poll numbers were wrong, he said, and by tweaking a few things, he could give a more accurate count. His final projection had Romney winning close to all 50 states.

Chambers has wisely abandoned the field of election forecasting, and this year says he thinks the various models predicting a Hillary Clinton victory are probably accurate. The models themselves are pretty confident. HuffPost Pollster is giving Clinton a 98 percent chance of winning, and The New York Times’ model at The Upshot puts her chances at 85 percent.

There is one outlier, however, that is causing waves of panic among Democrats around the country, and injecting Trump backers with the hope that their guy might pull this thing off after all. Nate Silver’s 538 model is giving Donald Trump a heart-stopping 35 percent chance of winning as of this weekend.

He ratcheted the panic up to 11 on Friday with his latest forecast, tweeting out, “Trump is about 3 points behind Clinton ― and 3-point polling errors happen pretty often.”

So who’s right?

The beauty here is that we won’t have to wait long to find out. But let’s lay out now why we think we’re right and 538 is wrong. Or, at least, why they’re doing it wrong.

The short version is that Silver is changing the results of polls to fit where he thinks the polls truly are, rather than simply entering the poll numbers into his model and crunching them.

Silver calls this unskewing a “trend line adjustment.” He compares a poll to previous polls conducted by the same polling firm, makes a series of assumptions, runs a regression analysis, and gets a new poll number. That’s the number he sticks in his model ― not the original number.
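As a rough illustration, an adjustment of this flavor can be sketched as fitting a linear trend to recent national polls and shifting an older poll by the fitted movement since its field date. Everything below is hypothetical ― the numbers, the function name, and the bare least-squares fit stand in for 538's actual, more elaborate procedure:

```python
# Illustrative sketch of a "trend line adjustment" (hypothetical numbers and
# method, not 538's actual procedure): fit a linear trend to recent national
# polls, then shift an older poll's margin by the trend movement since its
# field date.
import numpy as np

def trend_adjust(poll_margin, poll_day, national_days, national_margins, today):
    """Shift poll_margin by the fitted national trend between poll_day and today."""
    slope, _intercept = np.polyfit(national_days, national_margins, 1)
    drift = slope * (today - poll_day)  # estimated movement since the poll was taken
    return poll_margin + drift

# Hypothetical national polls: Clinton-minus-Trump margin drifting downward.
days    = np.array([0, 2, 4, 6, 8, 10])
margins = np.array([6.0, 5.5, 5.2, 4.6, 4.1, 3.8])

# A week-old poll showing Clinton +5 gets pulled toward today's trend.
adjusted = trend_adjust(5.0, poll_day=3, national_days=days,
                        national_margins=margins, today=10)
print(round(adjusted, 1))  # prints 3.4
```

Note what happened: the pollster reported Clinton +5, but the number that enters the model is Clinton +3.4 ― the fitted trend, not the poll, did the last 1.6 points of work.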

He may end up being right, but he’s just guessing. A “trend line adjustment” is merely political punditry dressed up as sophisticated mathematical modeling.

Guess who benefits from the unskewing?

By the time he’s done adjusting the “trend line,” Clinton has lost 0.2 points and Trump has gained 1.7 points. An adjustment of less than 2 points may not seem like much, but it’s enough to throw off his entire forecast, taking a comfortable 4.6-point Clinton lead and making it look like a nail-biter.

It’s enough to close the gap between the two candidates to below 3 points, which allows Silver to say that it’s now anybody’s ballgame, because “3-point polling errors happen pretty often.”


That line in itself is disingenuous, though. For the polls to be wrong, there wouldn’t need to be one single 3-point error. All of the polls ― all of them, as Brianna Keilar would put it ― would have to be off by 3 points in the same direction. That’s happened before, but in 2012 the error favored President Barack Obama. In 2014, it favored Republicans. Errors are just as likely to favor Clinton as they are to favor Trump, and they would have to favor Trump. And we still haven’t accounted for the unique fact that one campaign has a get-out-the-vote operation, while the other doesn’t.
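The arithmetic behind that point can be sketched with a quick simulation (all numbers hypothetical): when poll errors are independent, averaging many polls shrinks the miss, but when the polls share a systematic bias, averaging does nothing to remove it, and a lead of several points evaporates far more often:

```python
# Quick Monte Carlo contrasting independent per-poll noise with a shared
# systematic error. All parameters are hypothetical, chosen for illustration.
import random

random.seed(0)
POLLED_LEAD = 4.6     # hypothetical Clinton lead shown by the polling average
N_POLLS     = 10      # polls averaged together
TRIALS      = 50_000

def trump_win_rate(shared_sigma, indep_sigma):
    """Fraction of trials where the polling-average miss flips the race."""
    wins = 0
    for _ in range(TRIALS):
        shared = random.gauss(0, shared_sigma)            # hits every poll alike
        noise  = sum(random.gauss(0, indep_sigma)
                     for _ in range(N_POLLS)) / N_POLLS   # averages toward zero
        if shared + noise > POLLED_LEAD:                  # true margin below zero
            wins += 1
    return wins / TRIALS

no_shared   = trump_win_rate(shared_sigma=0.0, indep_sigma=3.0)
with_shared = trump_win_rate(shared_sigma=3.0, indep_sigma=3.0)
print(no_shared, with_shared)
```

With independent 3-point errors only, averaging ten polls makes a 4.6-point miss vanishingly rare; add a shared 3-point bias and the flip rate climbs to several percent. Whether such a bias exists, and which candidate it favors, is exactly what the models disagree about.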

By monkeying around with the numbers like this, Silver is making a mockery of the very forecasting industry that he popularized. “The idea that she’s a prohibitive, 95 percent-plus favorite is hard to square with polling that has frequently shown 5- or 6-point swings within the span of a couple weeks, given that she only leads by 3 points or so now,” he told Politico recently. “[E]verything depends on one’s assumptions, but I think that our assumptions ― a Clinton lead, sure, but high uncertainty ― has repeatedly been validated by the evidence we’ve seen over the course of the past several months.”

I get why Silver wants to hedge. It’s not easy to sit here and tell you that Clinton has a 98 percent chance of winning. Everything inside us screams out that life is too full of uncertainty, that being so sure is just a fantasy. But that’s what the numbers say. What is the point of all the data entry, all the math, all the modeling, if when the moment of truth comes we throw our hands up and say, hey, anything can happen. If that’s how we feel, let’s scrap the entire political forecasting industry.

Silver’s guess that the race is up for grabs might be a completely reasonable assertion ― but it’s the stuff of punditry, not mathematical forecasting.

Punditry has been Silver’s go-to move this election cycle, and it hasn’t served him well. He repeatedly pronounced that Trump had a close to 0 percent chance of winning the Republican primary, even as he led in the polls. “Trump’s chances [are] higher than 0 but (considerably) less than 20 percent,” he wrote in November.

Silver was far from alone in being wrong. I said myself, at the time, that Trump had no chance of winning the primary. But I did so as a pundit, and, as a pundit, I am endowed by my creator with the inalienable right to be consistently wrong and never apologize. But our polling model always forecast a Trump win, and our polling team, led by Natalie Jackson, stuck by that prediction, because their job is to follow the numbers. They were right, while pundits like me and Silver were wrong.


Even though the HuffPost and New York Times models both have very high confidence that Clinton will win ― and The Upshot’s Nate Cohn has consistently said Clinton is the runaway favorite, even after Jim Comey’s entrance into the campaign ― we differ on how big the current gap is. Much of the disagreement comes down to the style of regression analysis each model uses. The difference between The Upshot’s 3 points and 538’s 3 points is that The Upshot runs its regression on the polls as reported, whereas 538 first moves the polling numbers and then runs its regression.

The HuffPost model uses a stickier regression, which assumes that the populace does not swiftly change its mind on a dime about a candidate based on a bad news day. After all, one difference between pundits and regular people is that the former live on Twitter with cable news on in the background, while regular folks catch campaign developments in snippets, if at all. That makes it really, really hard for even the worst day to cause a real 6-point swing.

So when new polls come out showing a candidate surging, that surge has to sustain itself to register fully in our model. The Times model, which actually relies on HuffPost data, puts a bit more grease in its charts, and so the numbers can move up or down more quickly. In the last week, the model’s confidence in a Clinton win fell from 93 to 85, while HuffPost’s barely dropped.
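The sticky-versus-greasy contrast can be sketched with simple exponential smoothing under two smoothing factors. This is a toy stand-in ― hypothetical margins, hypothetical factors ― not either outlet's actual estimator:

```python
# Exponential smoothing with two smoothing factors, as a toy contrast between
# a "sticky" model (small alpha: slow to believe a sudden swing) and a
# "greasy" one (large alpha: chases the newest polls). Illustrative only.
def smooth(series, alpha):
    est = series[0]
    out = [est]
    for x in series[1:]:
        est = alpha * x + (1 - alpha) * est   # blend the new poll into the estimate
        out.append(est)
    return out

# Hypothetical daily polling margins: steady Clinton +5, then a bad news day.
polls = [5.0, 5.1, 4.9, 5.0, 2.0, 2.1, 2.0]

sticky = smooth(polls, alpha=0.2)   # barely budges on the sudden drop
greasy = smooth(polls, alpha=0.7)   # tracks the drop almost immediately
print(round(sticky[-1], 2), round(greasy[-1], 2))
```

After three days of the apparent 3-point drop, the sticky estimate still reads above +3.5 while the greasy one has fallen to about +2.1. If the drop is real, the greasy model wins; if it was a transient blip in who answered the phone, the sticky model does.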

So if Nate Cohn and Nate Silver both see a roughly 3-point race, why is one Nate confident in a Clinton win and the other sparking a collective global freakout?

Because Silver is also unskewing state polls, which explains, for instance, why 538 is predicting Trump will win Florida, even as we and others (and the early vote) see it as a comfortable Clinton lead. To see how it works in action, take the Marist College poll conducted Oct. 25-26. Silver rates Marist as an “A” pollster, and they found Clinton with a 1-point lead. Silver then “adjusted” it to make it a 3-point Trump lead. HuffPost Pollster, meanwhile, has near certainty Clinton is leading in Florida.

In response to this article, Silver tweeted:

The reason we adjust polls for the national trend is because that’s what works best empirically. It’s not a subjective assumption.

“Every model makes assumptions but we actually test ours based on the evidence. Some of the other models are barely even empirical,” he said in another post.

We’ll have to wait and see what happens. Maybe Silver will be right come Election Day ― Trump will win Florida, and we’ll all be in for a very long night. Or our forecast will be right, she’ll win the state by 5 or 6, and we can all turn in early.

If he’s right, though, it was just a good guess ― a fortunate “trend line adjustment” ― not a mathematical forecast. If you want to put your faith in the numbers, you can relax. She’s got this.

This story has been updated with a response from Nate Silver.


