In the early days after the 2016 presidential election, one of the first questions many were asking was how the polling data ended up so far off the mark.
Harvard Business Review author Eben Harrell spoke with Leslie John, a Harvard Business School associate professor and Hillary Clinton supporter. She came into Tuesday morning with a worried disposition, despite the fact that the vast majority of pre-election polls predicted that Clinton would win by a comfortable margin. Clinton, of course, did not end up winning the electoral college despite narrowly topping new President-elect Donald Trump in the popular vote.
After the results were finalized, John elaborated on why she was so concerned and how the polls could have been so wrong.
“We tend to think that because we now routinely use algorithms and computer-generated predictions, the results will be unbiased,” she says. “But there are two problems with that thinking. The first is that, at the end of the day, humans build the algorithms. And all sorts of biases can be introduced at the point of construction. It’s also possible that the inputs—in this case, the polling—were flawed. I could see how Trump supporters who were also anti-establishment may have viewed polling officials as part of the establishment and refused to engage with them. Another factor that might lead to a response bias in the polling might be what behaviorists call ‘socially desirable responding’—you can imagine women being reluctant to admit that they were going to vote for Trump after the footage surfaced of his bragging about sexual assault, for example.”
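The nonresponse mechanism John describes can be made concrete with a quick simulation. The numbers below are purely illustrative assumptions (not actual 2016 polling figures): a population evenly split between two candidates, where one candidate's supporters are less willing to talk to pollsters. Even with a large sample, the resulting poll systematically understates that candidate's support.

```python
import random

random.seed(42)

# Hypothetical illustration of differential nonresponse: the population is
# split 50/50, but candidate B's supporters respond to pollsters less often.
POPULATION = 100_000
TRUE_SUPPORT_B = 0.50
RESPONSE_RATE_A = 0.9   # assumed response rate among A supporters
RESPONSE_RATE_B = 0.6   # assumed (lower) response rate among B supporters

responses = []
for _ in range(POPULATION):
    supports_b = random.random() < TRUE_SUPPORT_B
    rate = RESPONSE_RATE_B if supports_b else RESPONSE_RATE_A
    if random.random() < rate:            # only responders enter the poll
        responses.append(supports_b)

polled_support_b = sum(responses) / len(responses)
print(f"True support for B:   {TRUE_SUPPORT_B:.1%}")    # 50.0%
print(f"Polled support for B: {polled_support_b:.1%}")  # roughly 40%
```

With these assumed rates, the poll reads about 40 percent for a candidate whose true support is 50 percent, and no amount of additional sampling fixes it, because the error is a bias, not noise.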
The post-election shock comes after years of polling analytics growing more and more accurate. Famously, Nate Silver, founder of data media site FiveThirtyEight, predicted the 2008 and ‘12 elections almost perfectly. However, even during the beginning of Trump’s campaign, Silver’s numbers began to appear more and more flawed. Despite that, most of his team’s predictions held close to his vaunted models leading up to Nov. 8, which estimated Clinton had north of a 70 percent chance of winning.
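It is worth pausing on what a forecast like that actually says. A 70 percent win probability is not a guarantee; it means the underdog still wins roughly three times in ten. A toy simulation (using a generic 70 percent figure, not Silver's actual model) makes the point:

```python
import random

random.seed(0)

# Hypothetical sketch: repeatedly "run" an election where the favorite
# has a 70% chance of winning, and count how often the upset happens.
TRIALS = 10_000
WIN_PROB = 0.70

upsets = sum(1 for _ in range(TRIALS) if random.random() >= WIN_PROB)
upset_rate = upsets / TRIALS
print(f"Upset rate: {upset_rate:.1%}")  # roughly 30%
```

Seen this way, the 2016 result was consistent with the forecast's stated uncertainty, even though it ran against the forecast's headline number.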
“There’s tons of research showing that people are overconfident in their beliefs,” John says. “We think our prediction abilities are better than they are. And if you add to overconfidence a desire for certain outcomes—for instance, I think most elite commentators were anti-Trump—it magnifies the problem.”
Outside of data modeling, John notes there were several factors that helped Trump edge just past the 270 electoral votes needed to win. These include how the public perceives a person who’s open about their beliefs, as well as concepts like “psychological reactance.”
“With Clinton there were so many examples where she wasn’t forthcoming, so she came across as a hider, which I think explains in part why she was viewed as untrustworthy by so many Americans,” she notes. There’s also the matter of negativity, which John explains weighs more heavily in decision-making than feelings of joy.
“Research shows that bad things influence us more than good things—we feel greater despair at bad news than joy at good news.”