
Nate Silver’s predictions just before the election were so hedged that it was impossible for him to *not* be right.

Posted by Emeritus Professor of Economics Bob Sandy on 12/9/2024
Mr. Silver said both Harris and Trump could win by large margins in the electoral college and that their odds of winning were essentially 50-50. He wrote in the New York Times on October 23 that his gut feeling was that Trump would win, but warned that this prediction was not based on any analysis of polls, so it should be taken with a grain of salt. Apparently, Silver wanted to take credit for predicting a Trump win while avoiding any discredit if Harris won.
Silver’s aggregation of within-swing-state polls had Trump ahead by a razor-thin margin in five states and Harris ahead by a razor-thin margin in two. Trump won all seven. If these seven states were each truly tosses of a fair coin, as per Silver’s description, the chance of all seven going for Trump is 0.5^7, or 0.0078. Very likely, something other than a run of seven bad coin flips for Harris was going on. One possibility is that there was news in the last couple of days that helped Trump. What that news might have been is far from clear. My recollection includes stories about the Nazi symbolism in Trump’s choice of Madison Square Garden for his last rally and about an obscure comedian who called Puerto Rico garbage, which supposedly would sway Puerto Ricans living in the US to vote against Trump.
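
As a back-of-the-envelope check of that arithmetic, here is a minimal Python sketch of the coin-flip calculation. The only assumption is the one in Silver’s own framing: that the seven states are independent 50-50 events.

```python
# A minimal check of the coin-flip arithmetic above: treat each of the
# seven swing states as an independent, fair 50-50 event (Silver's framing)
# and compute the probability that all seven fall on Trump's side.
p_all_seven = 0.5 ** 7
print(f"P(all seven swing states go to Trump) = {p_all_seven:.4f}")  # 0.0078
```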

Another possible explanation, different from a last-minute surge for Trump, is that the polls were wrong. Either pro-Trump voters were under-represented in the polls or they lied to pollsters by pretending to be undecided or pro-Harris. Either of those explanations, or both at the same time, seems plausible to me. If one or both are true, that implies systematic failures in the polls that Silver aggregated. Yet Silver claims he knows which polls are reliable and which to avoid.

Another issue is what constitutes being “close” on predicting the electoral college, which boils down to being accurate for the swing states. In his November 3 post Silver referenced the margin of error in the swing-state polls. Properly calculating the margin of error is a vexing topic. Weighting, which every poll does, widens the margin of error compared to an unweighted random sample. On the other hand, aggregating polls taken around the same time in the same state narrows the margin of error by increasing the sample size. Most pollsters ignore the effect of weighting on the margin of error. Silver appears to have also ignored the effect of the increased sample size on the margin of error. Between the two factors, the net effect would generally be to narrow the margin of error in a swing state, because each of them had lots of polls. For example, counting the number of dots for the week before the election on 538’s graph, I got 28 polls in Pennsylvania. When Silver refers to a typical 3% margin of error for one state, that figure would come from a random sample of 1,000 voters. That is not the margin of error for the aggregation of dozens of polls in a swing state.
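
To make the two opposing effects concrete, here is a rough sketch using the standard margin-of-error formula for a proportion. The design effect of 1.3 and the roughly 800 respondents per poll are illustrative assumptions of mine, not figures from Silver or 538; the 28-poll count for Pennsylvania is the one quoted above.

```python
import math

def moe(n: int, p: float = 0.5, z: float = 1.96, design_effect: float = 1.0) -> float:
    """95% margin of error for a proportion. Weighting shrinks the effective
    sample, so the margin grows by the square root of the design effect."""
    return z * math.sqrt(design_effect * p * (1 - p) / n)

# The "typical 3%" figure: a single unweighted random sample of 1,000 voters.
print(f"single poll, n=1,000:           {moe(1_000):.1%}")                      # ~3.1%

# Weighting widens it. A design effect of 1.3 is purely illustrative.
print(f"single weighted poll, deff=1.3: {moe(1_000, design_effect=1.3):.1%}")   # ~3.5%

# Aggregation narrows it. 28 Pennsylvania polls (the count from 538's graph),
# at an assumed ~800 respondents each, is roughly 22,400 interviews.
print(f"28 pooled polls, n=22,400:      {moe(28 * 800):.1%}")                   # ~0.7%
```

Under these assumptions, the narrowing from pooling dozens of polls far outweighs the widening from weighting, which is the direction of the net effect described above.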

The basic problem is that election polls have become highly dubious. If 99% of the people randomly contacted to respond to a poll refuse, and that refusal is related to their voting preferences, a poll, or an aggregation of such polls, can be way off. If the pollster’s work-around for non-response is internet opt-in polling with no random sampling, the concept of a statistical margin of error becomes meaningless. Silver blames the errors in polls on pollsters shading their results to match the polling consensus, what he calls “herding”. Made-up or shaded-to-consensus results do not have statistical margins of error. Nevertheless, Silver applied that statistical concept to polls and aggregations even when he was sure there was herding.
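
To illustrate why a margin of error says nothing about this failure mode, here is a toy simulation with entirely made-up numbers: the electorate is split 50-50, but supporters of one candidate are only half as likely to answer a pollster. No real response rates or vote shares are implied.

```python
import random

random.seed(0)

# Hypothetical illustration only: a 50-50 electorate in which Trump voters
# respond at 0.5% and Harris voters at 1.0%, i.e., differential non-response.
N_CONTACTS = 200_000
RESPONSE_RATE = {"Trump": 0.005, "Harris": 0.010}

responses = []
for _ in range(N_CONTACTS):
    true_vote = random.choice(["Trump", "Harris"])
    if random.random() < RESPONSE_RATE[true_vote]:
        responses.append(true_vote)

trump_share = responses.count("Trump") / len(responses)
print(f"respondents: {len(responses):,}")
print(f"Trump share among respondents: {trump_share:.1%}")  # ~33%, not 50%
```

The estimate lands near 33% no matter how many people are contacted, because differential non-response is a bias, not sampling noise; a bigger sample, or an aggregation of many such polls, does not fix it, and no margin-of-error formula captures it.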

Robert Sandy
Emeritus Professor of Economics
Indiana University Indianapolis