7 February 2014: I found a bug in the program that generated the second table, the one using wOBA – 0.020 for any players not forecast, so I’ve replaced the older table with a corrected version, and adjusted some of the other text to reflect that.
Unfortunately, the data on Dr. Larson's site was not sufficient to compute wOBA for many systems, so to include more systems, I went with the lowest common denominator. Many fantasy leagues care about HR and batting average, but not (directly) about 2B or 3B, or (often at all) about BB. A real baseball team certainly cares about more detailed offense.
Much of the extra data from Dr. Larson does let me compute wOBA, as does the Oliver data Brian Cartwright shared, so this post runs the same analysis for wOBA, using only those sources where I have the data to compute it. If you’re reading this and your source was in my last post but not this one, send me more detailed data (to geoff at rotovalue dot com) and I’ll update this post to include your system.
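For readers unfamiliar with the metric, here is a minimal sketch of computing wOBA from component stats. The linear weights below are approximate single-season values of the kind FanGraphs publishes each year; they are placeholders for illustration, not the constants any particular projection system used.

```python
# Approximate wOBA linear weights (placeholder values; the exact
# constants change slightly each season).
W_BB, W_HBP, W_1B, W_2B, W_3B, W_HR = 0.690, 0.722, 0.888, 1.271, 1.616, 2.101

def woba(ab, bb, ibb, hbp, sf, h, doubles, triples, hr):
    """wOBA = weighted sum of on-base events / (AB + BB - IBB + SF + HBP)."""
    singles = h - doubles - triples - hr
    numer = (W_BB * (bb - ibb) + W_HBP * hbp + W_1B * singles
             + W_2B * doubles + W_3B * triples + W_HR * hr)
    denom = ab + bb - ibb + sf + hbp
    return numer / denom if denom else 0.0

# A hypothetical .300-hitter's line, for illustration
print(round(woba(ab=550, bb=60, ibb=5, hbp=5, sf=5, h=165,
                 doubles=35, triples=3, hr=20), 3))  # -> 0.371
```

This is also why the detailed component data matters: a system that reports only AVG and HR doesn't provide enough inputs for the formula above.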
The sources I’m using here are:
- CAIRO - from S B of the Replacement Level Yankees Weblog.
- Marcel - the basic projection model from Tom Tango, coauthor of The Book. This year I’m using Marcel numbers generated by Jeff Zimmerman, using Jeff Sackmann’s Python code.
- MORPS - A projection model by Tim Oberschlake.
- Steamer/Razzball - Rate projections by Jared Cross, Dash Davidson, and Peter Rosenbloom, and playing time projections from Rudy Gamble of Razzball.com.
- RotoValue - my current model, based largely on Marcel, but with adjustments for pitching decision stats and assuming no pitcher skill in BABIP.
- ZiPS - A projection model from Dan Szymborski.
- CBS Sportsline
- Fangraphs Fans
- Oliver - Brian Cartwright’s system (2013 data available at Fangraphs.com).
Removing systems with less detailed data also gives a larger set of commonly projected players, which is good. So not only is wOBA a better overall statistic for comparison, but the subset of players projected by all systems gives better insight into how the systems performed.
The range of errors in this group is actually tighter than what I saw with Avg: just 0.0028 separates the lowest RMSE (Steamer/Razzball) from the highest, even though wOBA values run higher than batting averages. So I'd say all these systems do a pretty good job on these players.
When I add in average wOBA minus 0.020 for players not forecast by a system, I get this:
Now the spread is wider. Filtering on the ability to compute wOBA removed most of the systems that project very few players, but it is interesting that the lowest errors come from the Fangraphs Fans projections, which also happen to project the fewest players by far. Shifting to this test sees the Steamer/Razzball RMSE rise by 0.0020, while the Fangraphs RMSE drops by 0.0036, more than 10%, and there's no obvious relationship between the number of players a system forecasts and its RMSE.
Steamer/Razzball still tops this chart, with Oliver and AllConsensus in a virtual tie now.
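The scoring method used here, RMSE with league-average wOBA minus 0.020 charged for any player a system didn't project, can be sketched as follows. The player names and numbers are hypothetical, purely for illustration.

```python
import math

def rmse_with_fill(actual, projected, fill):
    """RMSE over all players with an actual wOBA; players a system
    did not project are charged the fill value (here, average wOBA
    minus 0.020, per the methodology above)."""
    errs = [(projected.get(p, fill) - w) ** 2 for p, w in actual.items()]
    return math.sqrt(sum(errs) / len(errs))

# Hypothetical example: Player C went unprojected by this system
actual = {"Player A": 0.360, "Player B": 0.310, "Player C": 0.295}
proj   = {"Player A": 0.345, "Player B": 0.320}
league_avg = 0.318
print(round(rmse_with_fill(actual, proj, league_avg - 0.020), 4))  # -> 0.0106
```

The fill value is what links projection coverage to the score: a system that skips many players gets penalized (or occasionally helped) by how far those players' actual wOBA lands from the near-average fill.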