Nate Silver is live with his new FiveThirtyEight, and it being the start of the tournament, I looked at their model for help in filling out my own bracket. It’s quite a slick presentation, and you can see the probabilities of any team reaching any stage of the tournament.
That’s all good! From that page, there’s a link to show data in a table, instead of the large bracket, and when you mouse over the table, they’ll show percentages to 3 decimal places. Yes, that’s up to 5 significant figures (well, it’s 5 figures; I doubt more than 2 are really significant!). That’s cool from a geeky perspective (I should know: I do something similar when I’m displaying projected stats, showing 2 decimal places for integer values), but it’s implying precision that any predictive model really doesn’t have. And it’s inviting scrutiny of that extra precision.
All of this is to raise a question about how Silver’s model sees the East bracket shaking out. Here are the two teams that I found most interesting, with their probabilities of reaching each round (from the Elite 8 on, I’m showing all 5 significant figures):
| Team | 3rd Round | Sweet 16 | Elite Eight | Final Four | Dynamic Duo | Champion |
|------|-----------|----------|-------------|------------|-------------|----------|
No, I’m not bashing Silver for favoring Michigan State because he’s from East Lansing (disclosure: I was born in the University of Virginia Hospital). What I found interesting here is that Michigan State’s chances of getting to the Final Four or the championship game are just ahead of Virginia’s, but Virginia’s chances of winning it all are just ahead of Michigan State’s.
I clicked on the link to “read about our model”, which does describe the inputs of the model, but not its mechanics in any detail. If I’m literally taking the above numbers as actual probabilities out to 5 decimal places, then the model must be somehow considering team-specific match-ups. After all, Virginia and Michigan State would meet in the Sweet 16, and after that, they’d face the exact same opponents. They have an almost identical chance to reach the Elite 8 (which only one of them can do), with UVa having about a 0.35% better chance of doing that. Yet Michigan State is much more likely to reach the Final Four, by nearly 0.7%. That’s plausible: maybe the system thinks Michigan State is a stronger team, but just had a harder road to get to the Sweet 16.
If Michigan State is simply better, then I’d expect its chances to pull ahead of Virginia’s a little more at each subsequent stage. But that doesn’t happen. Its chances of making the championship game are just 0.06% better, and Virginia is given a 0.11% better chance to win the title.
So doing a little conditional probability, I computed these chances for the teams to win a game at each of the remaining stages:
| Team | Elite Eight | Final Four | Championship |
|------|-------------|------------|--------------|
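To make that arithmetic concrete, here’s a minimal sketch of the conditional-probability step: the chance of winning a game at a given stage is the probability of reaching the next stage divided by the probability of reaching this one. The numbers below are placeholders I made up for illustration, not FiveThirtyEight’s actual figures.

```python
# Per-game win probability at each stage, from cumulative advancement
# probabilities: P(win this round | got this far)
#   = P(reach next round) / P(reach this round).
# Placeholder values -- NOT the model's actual output.
reach = {
    "Sweet 16": 0.60,
    "Elite Eight": 0.35,
    "Final Four": 0.18,
    "Championship": 0.09,
    "Champion": 0.045,
}

rounds = list(reach)
for here, nxt in zip(rounds, rounds[1:]):
    p_win = reach[nxt] / reach[here]
    print(f"P(win {here} game) = {p_win:.3%}")
```

Running the same division on both teams’ published advancement probabilities is all it takes to build the table above.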
Bear in mind that from this point on, the potential opponents are identical. Michigan State is nearly 2% more likely to beat its possible Elite 8 opponent, but Virginia is 1% more likely to beat its possible Final Four or Championship game opponent.
So from this I infer one of two possibilities. Either the model somehow takes into account particular matchups between teams, in which case it might assume Michigan State is more likely to beat, say, Villanova than UVa is, but UVa is more likely to beat Florida, Kansas, or Arizona; or maybe this is simply statistical noise from a Monte Carlo simulation process – the two teams are of almost identical strength, and when you run them through the model, sometimes UVa does slightly better, while at other times Michigan State does a little better.
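If it is Monte Carlo noise, its size is easy to estimate: with N simulated tournaments, the standard error of an estimated probability p is roughly sqrt(p(1-p)/N). A quick sketch (the values of N are my assumption; the methodology page doesn’t say how many simulations are run):

```python
import math

# Standard error of a probability estimated from N independent
# simulations: se = sqrt(p * (1 - p) / N). If a team's true title
# chance is around 5%, gaps of a few tenths of a percent between
# two teams can easily be within the noise.
def monte_carlo_se(p: float, n: int) -> float:
    return math.sqrt(p * (1 - p) / n)

for n in (1_000, 10_000, 100_000):
    print(f"N={n:>7,}: se = {monte_carlo_se(0.05, n):.3%}")
```

At N = 10,000 the standard error on a 5% estimate is about 0.2%, which is the same order as the 0.06% and 0.11% gaps discussed above; only at much larger N would those differences start to look meaningful.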
In the big picture, this really doesn’t make much difference; Silver’s model gives both teams almost exactly the same chances to progress. But when those chances are sometimes shown to five significant figures, it raises the question of what, if anything, the additional precision means. So I’m wondering: is it Monte Carlo output, does the model consider team-specific matchups, or is there some other explanation I haven’t considered?