Questions participated in: 14 (of a total of 19 questions)
# of correct predictions: 14 (that’s 100% for those of you keeping track at home!)
Starting money and rank: $6,000 / 45 participants
Ending money and rank: $32,264.14 / 1 out of 45 participants
While fun, I’m not really sure prediction markets (as set up in this experiment) were particularly helpful. The questions were so specific (a hypothetical example: Will the French attack the World Series with a VBIED?) and the time frame so limited (April to July) that I felt I could play the odds, run out the clock, and pull down a pretty decent haul.
My second hesitation about prediction markets involves this article about the abject failure of their utility in some pretty big events. These weren’t ‘black swan’ events but rather policy/political questions where the parameters of the issue were known and a lot of specific information was available, and yet the predictions were way off. They failed horribly over the Supreme Court decision to uphold the ACA and ended up looking much like they just followed the punditocracy.
Going back a bit in Ritholtz’s archives, I think he hits upon the key that explains both the rather uninteresting futures questions AND the ease with which someone like me can achieve a 100% success rate. After all, I think I’m brilliant, but even I can’t be that good.
“The “wisdom of crowds” is a colloquial way of describing the market as a complex system. The work on wisdom of crowds shows that when certain conditions are met—diversity, aggregation, and incentives—markets tend to be efficient. Conversely, when one or more of those conditions are violated, markets can and do become inefficient (i.e., price is no longer an unbiased reflection of value).”
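Why the diversity condition matters can be shown with a toy simulation (my own sketch, not from Ritholtz’s article): when participants err independently, averaging washes the errors out; when they all share the same bias, say, everyone following the same pundits, no amount of aggregation can remove it.

```python
import random

random.seed(42)
TRUE_VALUE = 100.0   # the quantity the market is trying to estimate
N = 1000             # number of participants

# Diverse crowd: each participant errs independently, so averaging
# (aggregation) cancels the individual errors out.
diverse = [TRUE_VALUE + random.gauss(0, 20) for _ in range(N)]
diverse_avg = sum(diverse) / N

# Homogeneous crowd: everyone carries the same shared bias (a number I
# picked for illustration), which aggregation cannot remove.
SHARED_BIAS = 15.0
homogeneous = [TRUE_VALUE + SHARED_BIAS + random.gauss(0, 2) for _ in range(N)]
homogeneous_avg = sum(homogeneous) / N

print(f"diverse crowd error:     {abs(diverse_avg - TRUE_VALUE):.2f}")
print(f"homogeneous crowd error: {abs(homogeneous_avg - TRUE_VALUE):.2f}")
```

The diverse crowd’s average lands close to the true value even though each individual guess is noisy; the homogeneous crowd’s average stays stuck near its shared bias, which is roughly what happened when the markets simply echoed the punditocracy.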
And the prediction market I was involved with lacked those conditions. There wasn’t (as far as I could tell) much diversity in terms of participants (and therefore, ideas).
Nor did there seem to be sufficient incentives for many participants. We were given notional money, and while I was interested enough to check in frequently on the health of my ‘portfolio’, the static balances of many participants led me to conclude that after they signed up they never participated again.
Aggregation is the platform that brings all these opinions together. The market in the pilot project did a fine job of that, making a concerted effort to recruit people into the project. So, no problems there.
David Leonhardt points out that while prediction markets clearly have flaws, the whole reason they’ve taken off recently is that the old ways of peering into the crystal ball of the future were at least as unsatisfactory.
The answer, I think, is to take the best of what both experts and markets have to offer, realizing that the combination of the two offers a better window onto the future than either alone. Markets are at their best when they can synthesize large amounts of disparate information, as on an election night. Experts are most useful when a system exists to identify the most truly knowledgeable — a system that often resembles a market.
As far as the use of prediction markets in intelligence analysis goes, I suspect we’ve still got a number of kinks to work out. Aggregation appears the easiest part to solve but what sort of incentive system would be appropriate? Perhaps even more importantly, how do we build diversity when there would be all sorts of security and confidentiality concerns?
Beats me…but at least I got braggin’ rights for ending up in the #1 slot.