University of Massachusetts Amherst


Professor Justin Gross Addresses Election Predictions in Monkey Cage

Professor Justin Gross published the following article in the Washington Post's Monkey Cage on November 29, 2016.

See original here: https://www.washingtonpost.com/news/monkey-cage/wp/2016/11/29/how-to-better-communicate-election-forecasts-in-one-simple-chart/?utm_term=.4dd58e8df414

"How to better communicate election forecasts — in one simple chart"

On Oct. 30, the Chicago Cubs were down three games to one, just a loss away from blowing their chance at a first World Series championship in more than a century. Analysts said the Cubs’ chances were only 13 to 15 percent. “The Cubs have a smaller chance of winning than Trump does,” announced FiveThirtyEight, which put Trump’s chances at 21 percent.

The Cubs then staged a dramatic comeback and won the World Series in what reporters called one of the greatest games ever, if not the greatest ever. And although Cleveland Indians fans were understandably heartbroken, they did not rush to condemn the data analysts. Reporters and pundits did not spill gallons of ink bemoaning the fact that “data died tonight” or that “the data broke.”

Yet this was exactly the reaction to Donald Trump’s presidential victory, even though it was arguably at least as likely as the Cubs’ comeback, and maybe more so. For weeks, we heard a dramatic narrative of polls gone wildly astray and a “devastating blow to the credibility” of the supposed experts.

One thing is clear: People don’t understand the uncertainty underlying predictions — or at least they better understand uncertainty in sports than in elections. Perhaps this is because there are more baseball games than elections. Or perhaps sports writers have an incentive to push the “ain’t over till it’s over” narrative while political reporters face more complicated incentives, pressured both by the public’s desire for certainty and the newsroom’s interest in a close horse race.

Regardless of the explanation, widespread resistance to thinking probabilistically about the election was a bigger problem than any polling errors. Here’s one way to help people understand probabilities better.

The problem wasn’t the polls. It was how they were interpreted.

In his post-election interview with election analyst Sean Trende, NPR’s Scott Simon stated the conventional wisdom: “The polls leading up to the 2016 presidential election were wrong, wrong, wrong. Pollsters were wrong. Reporters who cited those polls were wrong on a scale that makes history.” In fact, it was Simon who was wrong.

As Trende later wrote: “The story of 2016 is not one of poll failure. It is a story of interpretive failure and a media environment that made it almost taboo to even suggest that Donald Trump had a real chance to win the election.” Neither the national polls nor the state polls did any worse on average than in 2012.

Journalists and voters understandably care more about who wins, though, and they looked obsessively to pollsters and forecasters for reassurance. They were looking in the wrong place. Indeed, a number of forecasters — FiveThirtyEight’s Nate Silver most prominently — repeatedly said that the race was close and that we shouldn’t underestimate the possibility of highly correlated polling errors — whereby most polls miss in the same direction. 

Their appeals were largely ignored or mocked. The Huffington Post’s Ryan Grim attacked Silver, writing that “Silver is changing the results of polls to fit where he thinks the polls truly are, rather than simply entering the poll numbers into his model and crunching them.” He finished with words of reassurance for supporters of Democratic nominee Hillary Clinton: “If you want to put your faith in the numbers, you can relax. She’s got this.”

Falling victim to the ‘illusion of certainty’

On Nov. 14, Silver was in the hot seat again — this time with “The Daily Show’s” Trevor Noah. Noah asked, “What is the point of the prediction if the prediction doesn’t happen?” Silver replied, “It’s not a prediction; it’s a forecast. It’s an estimate of risk.” Noah was in no mood for nuance: “I’m going to choke you, Nate Silver.”

But Silver was right: Assessing the probabilities of various outcomes is a realistic goal; fortunetelling is not. Researchers such as Gerd Gigerenzer have long noted the tendency of experts to exaggerate their certitude to meet people’s demand for peace of mind. This “illusion of certainty” threatens informed decision-making.

Better decisions demand acknowledging uncertainty, but humans seem wired to be uncomfortable with randomness. We overestimate the predictability of an event after it has occurred. We even misremember how confident our earlier predictions were once we see the actual outcome. This “creeping determinism” or “hindsight bias” — the tendency to think outcomes must have been inevitable in retrospect — makes us especially unforgiving of supposed experts who couldn’t see what now seems so obvious to us.

How to communicate uncertainty better

In general, probability is a deeply challenging concept — for journalists, the public and even statistics instructors. Numerical representations of chance are a relatively recent innovation, dating only to the mid-17th century. Mystical explanations of events and a religious commitment to determinism hindered the development of intuitions about probability. Philosophers continue to debate different theories of probability to this day. So it is no wonder we struggle with it.

But certain representations of probability are more readily grasped than others. In particular, we have trouble understanding risk expressed as a “percent chance,” but we do better when simple raw counts of the different outcomes are depicted visually.

Two risk communication experts use what they call the Risk Characterization Theater to help people visualize different types of health and environmental risks. Here’s how it can help us understand elections.

In the figure below, I have represented three prominent forecasts from election eve in terms of seats in a theater that seats a thousand people. A darker seat represents a Trump victory.
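The arithmetic behind the figure is simple: each forecast probability is converted into a count of darkened seats out of 1,000. As a rough sketch of that idea (not the RCT package itself, and using illustrative probabilities rather than the forecasters’ exact published numbers), it might look like:

```python
# Convert a win probability into a number of darkened seats in a
# 1,000-seat theater. The forecast values below are illustrative
# placeholders, not the forecasters' exact election-eve numbers.

SEATS = 1000

def darkened_seats(win_probability):
    """Number of seats (out of 1,000) to darken for a given probability."""
    return round(win_probability * SEATS)

# Hypothetical election-eve probabilities of a Trump victory:
forecasts = {"Forecast A": 0.29, "Forecast B": 0.15, "Forecast C": 0.02}

for name, p in forecasts.items():
    print(f"{name}: {darkened_seats(p)} of {SEATS} seats darkened ({p:.0%})")
```

Seeing 290 dark seats scattered through a full theater conveys “far from negligible” in a way that “29 percent” often does not.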


Risk Characterization Theater representations of different election eve forecasts. Figure by Justin Gross, created in the R statistical environment with the RCT package. See Rifkin and Bouwer’s book “The Illusion of Certainty: Health Benefits and Risks.”

On the left is FiveThirtyEight’s final forecast. The prevalence of dark seats signals that Trump’s chances were far from negligible. In the middle is The Upshot’s forecast — which gave Trump a smaller chance but is hardly as reassuring to Clinton supporters as, say, Grim’s “She’s got this.” Only the Huffington Post forecast at right instills a great deal of confidence in a Clinton victory.

Of course, true risk depends on the severity of the potential loss. If you imagine that each of the darkened seats in the theater on the right is booby-trapped with explosives, even a high level of assurance may not be high enough.

At the end of his interview on NPR, Trende had this to say: “If I had one wish, it would be world peace or feeding everyone. But if I had, like, 50 wishes, one of them would be for people to understand probabilities a little bit better.” He’s right. Maybe some theater will help.
