Elections, Predictions, and Moral Dilemmas

1. The name on everybody's lips* has lately been Nate Silver. As Jonathan Green of The Drum explains, Silver runs a blog called FiveThirtyEight at The New York Times, and he predicted that Obama would win the election. This might not seem like a big deal (after all, we have sea creatures predicting the outcomes of World Cups!), but something about Silver’s data-crunching methods and the contrast with old-fashioned punditry seems to have caught the collective imagination. Big Think talks about a “big win for big data”, and The Guardian looks at how “Silver’s success should also have a profound impact on how businesses leverage data analysis”. It’s not “just” about big data though – it’s also about using this data to forecast the future. And that is where things start to get interesting...

2. Prediction markets have been around for a while, and it’s eight years since James Surowiecki wrote his book The Wisdom of Crowds. I don’t understand all the intricacies of prediction markets, but I do understand that for some types of questions, you are better off asking a lot of people and averaging their answers, rather than asking just one person – even if that one person is an expert. This works surprisingly well for things like the weight of cows and the number of jellybeans in a jar (though as one of my friends said, “if you want to know the height of the Eiffel Tower, use a tape measure”), as well as for sports games, but it’s trickier for geopolitical events like elections and wars. The crucial question is under which conditions asking a crowd is going to yield a good answer; the conditions need to (for example) minimise common biases like anchoring, and the responses of the individual people in your “crowd” should also be as independent as possible. The “perfect” conditions have probably not been discovered yet, but DARPA is working on it as we speak.
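To make the averaging point concrete, here is a toy simulation – not tied to any real prediction market, and with the true value, noise level, and “anchoring” rule all invented purely for illustration. It compares a single guess, the mean of many independent guesses, and the mean of guesses that all share a common anchor, which is one way of seeing why independence matters:

```python
import random
import statistics

# Toy illustration of the "wisdom of crowds" averaging effect.
# All numbers below are made up; nothing here models a real prediction market.

random.seed(42)

TRUE_VALUE = 300.0   # e.g. the height of a tower in metres (hypothetical)
N_PEOPLE = 1000

def independent_guesses(n, true_value, noise_sd=60.0):
    """Each person guesses independently, with their own random error."""
    return [random.gauss(true_value, noise_sd) for _ in range(n)]

def anchored_guesses(n, true_value, anchor=200.0, weight=0.5, noise_sd=60.0):
    """Everyone is partly pulled toward the same anchor, so the errors are
    no longer independent and the shared bias does not average out."""
    return [weight * anchor + (1 - weight) * random.gauss(true_value, noise_sd)
            for _ in range(n)]

indep = independent_guesses(N_PEOPLE, TRUE_VALUE)
anchored = anchored_guesses(N_PEOPLE, TRUE_VALUE)

print(f"True value:               {TRUE_VALUE:.1f}")
print(f"Single guess:             {indep[0]:.1f}")
print(f"Crowd mean (independent): {statistics.mean(indep):.1f}")
print(f"Crowd mean (anchored):    {statistics.mean(anchored):.1f}")
```

With independent errors the crowd mean lands close to the true value; with a shared anchor the whole crowd drifts toward the anchor, no matter how many people you ask.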

3. I’ve been a participant in some research on prediction markets (funded by DARPA, actually), and in the discussions surrounding how to get the most reliable data (and thus the best prediction), the conditions mentioned above came up quite a lot. One thing that did not come up as often as I would have liked, however, was the interdependence of the data being produced. Not in the sense of each person’s answer being dependent on another person’s answer – independent measurement is a hallmark of the scientific method, after all – but rather the way that, in prediction markets, the measurements do not seem to be independent of the outcome itself. Or the other way around: there is the potential for the outcome to depend on, or be heavily influenced by, the way it is being measured. Like a self-fulfilling prophecy, of sorts.
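To illustrate the worry (and only the worry – this is a deliberately crude cartoon, not how any actual market or forecaster works), here is a hypothetical sketch in which the event’s real probability drifts toward the published forecast, standing in for people acting on the prediction itself. The feedback rule and every number are invented:

```python
import random

# Cartoon of a forecast feeding back into the outcome it is meant to measure.
# The feedback rule and all numbers are invented purely for illustration.

random.seed(1)

def run_event(base_probability, published_forecast, feedback=0.4):
    """The event's effective probability drifts toward the published forecast:
    a crude stand-in for people acting on the prediction itself."""
    effective_p = (1 - feedback) * base_probability + feedback * published_forecast
    return random.random() < effective_p

base_p = 0.30  # what would happen if nobody ever saw a forecast

for forecast in (0.1, 0.5, 0.9):
    trials = 10_000
    hits = sum(run_event(base_p, forecast) for _ in range(trials))
    print(f"published forecast {forecast:.1f} -> observed frequency {hits / trials:.2f}")
```

The observed frequency tracks the published forecast, so the forecast looks “well calibrated” even though it is partly creating the outcome it claims to measure.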

4. Or, the opposite of a self-fulfilling prophecy, perhaps. I’m going to start sounding like an old Greek chorus, but: I have some problems with the idea of predicting the future (with a high degree of accuracy). It is possible that forecasting will never reach the kind of certainty at which these things become a problem, but… imagine that you knew that “the markets” believed with 95% certainty that Israel would attack Iran by the end of the week. What would you do? Would you try to manipulate the market? Would you give yourself over to fatalism? Would you agitate for peace? Would you attack Israel pre-emptively? And then, what would happen to those betting on the market? These are old questions about fate and determinism and crystal balls, but the packaging is new (and a bit worrying, to me).

5. I’ve been reading Isaac Asimov's Nine Tomorrows, a collection of sci-fi short stories from the late 1950s. One of the characters who keeps cropping up is Multivac, a super-(analog)-computer, who in one of the stories is able to predict the future with a high degree of accuracy. In that story, after a bizarre and inexplicable assassination attempt on the all-powerful Multivac, some scientists trying to solve the mystery ask Multivac what it wants, most of all, right now. Multivac replies, “I want to die.” The assassination, if successful, would have been suicide. It failed because Multivac could predict what would happen, was forced to tell its human carers, and then they intervened. Yes, I know, it’s science fiction. But it highlighted to me the value of uncertainty, at a time when big data and prediction models are promising the opposite.

* Chicago is not related to anything else in this post.

---
Photo credit: Creative Commons, Jason Langheine
