Expanded 2013 wOBA Projections Comparison

7 February 2014: I found a bug in the program that generated the second table, the one using wOBA – 0.020 for any players not forecast, so I’ve replaced the older table with a corrected version, and adjusted some of the other text to reflect that.
I’ve just posted a comparison of nearly 20 different projection sources, using batting average for offense. I prefer a more comprehensive metric, and used Tom Tango’s wOBA in my earlier analysis.
Unfortunately the data on Dr. Larson’s site was not sufficient to compute wOBA for many systems, so to include more systems, I went with the lowest common denominator. Many fantasy leagues care about HR and average, but not (directly) 2B, 3B, or (often at all) BB. Certainly a real baseball team cares about more detailed offense.
Much of the extra data from Dr. Larson does let me compute wOBA, as does the Oliver data Brian Cartwright shared, so this post runs the same analysis for wOBA, using only those sources where I have the data to compute it. If you’re reading this and your source was in my last post but not this one, send me more detailed data (to geoff at rotovalue dot com) and I’ll update this post to include your system.
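Given the component stats, computing wOBA is just a weighted sum of positive offensive events over plate-appearance opportunities. A minimal sketch, using approximate 2013-era linear weights (the exact weights vary slightly by season, so treat the coefficients below as illustrative):

```python
# Sketch of a wOBA calculation from component stats. The linear weights
# here are approximate 2013-era values; the published weights change a
# little each season.
def woba(ubb, hbp, singles, doubles, triples, hr, ab, bb, ibb, sf):
    num = (0.69 * ubb + 0.72 * hbp + 0.89 * singles
           + 1.27 * doubles + 1.62 * triples + 2.10 * hr)
    den = ab + bb - ibb + sf + hbp
    return num / den if den else 0.0
```

This is why batting average alone falls short: a walk or an extra-base hit moves wOBA, but not average.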
The sources I’m using here are:

Source Num Avg wOBA MAE RMSE
Actual 310 0.3312 0.0000 0.0000
Steamer/Razzball 310 0.3383 0.0232 0.0300
AllConsensus 310 0.3421 0.0235 0.0305
Oliver 310 0.3375 0.0238 0.0306
Consensus 310 0.3397 0.0237 0.0310
ZiPS 310 0.3394 0.0234 0.0311
FangraphsFans 310 0.3467 0.0244 0.0313
CAIRO 310 0.3396 0.0243 0.0314
Marcel 310 0.3436 0.0245 0.0321
RotoValue 310 0.3367 0.0244 0.0321
MORPS 310 0.3445 0.0249 0.0326
CBS 310 0.3475 0.0258 0.0328
RotoChamp 310 0.3540 0.0247 0.0328
y2012 310 0.3420 0.0295 0.0387

Removing the systems with less detailed data also leaves a larger set of commonly projected players, which is good. So not only is wOBA a better overall statistic for comparison, but the larger subset of players projected by all systems gives better insight into how the systems performed.
The range of errors in this group is actually tighter than what I saw with Avg, with just 0.0028 separating the lowest RMSE (Steamer/Razzball) from the highest among the actual projection systems (setting aside the y2012 baseline), even though wOBA numbers are higher than batting averages. So I’d say all these systems do a pretty good job here on these players.
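The MAE and RMSE columns in the table above are the standard mean absolute error and root mean square error between each system’s projected wOBA and the actual 2013 wOBA, computed over the commonly projected players. A sketch of that calculation:

```python
import math

# Mean absolute error and root mean square error between projected and
# actual wOBA values, paired by player.
def mae_rmse(projected, actual):
    errs = [p - a for p, a in zip(projected, actual)]
    mae = sum(abs(e) for e in errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return mae, rmse
```

RMSE penalizes large misses more heavily than MAE, which is why the two columns can rank systems slightly differently.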
When I add in average wOBA minus 0.020 for players not forecast by a system, I get this:

Source Num wOBA(proj) MLB wOBA(MLB) StdDev MAE RMSE Missing
Actual 634 0.3236 634 0.3236 0.0439 0.0000 0.0000 0
Steamer/Razzball 504 0.3357 634 0.3319 0.0254 0.0261 0.0357 163
Oliver 1445 0.3014 634 0.3295 0.0276 0.0263 0.0358 23
AllConsensus 1514 0.3031 634 0.3351 0.0255 0.0261 0.0358 15
ZiPS 1000 0.3120 634 0.3309 0.0285 0.0262 0.0362 29
CAIRO 507 0.3363 634 0.3337 0.0257 0.0269 0.0364 178
Consensus 786 0.3322 634 0.3333 0.0243 0.0267 0.0364 80
FangraphsFans 331 0.3449 634 0.3416 0.0264 0.0269 0.0365 311
Marcel 750 0.3306 634 0.3367 0.0243 0.0273 0.0373 105
MORPS 539 0.3367 634 0.3384 0.0259 0.0277 0.0378 164
RotoChamp 434 0.3490 634 0.3468 0.0270 0.0282 0.0384 231
RotoValue 751 0.3293 634 0.3295 0.0282 0.0279 0.0396 105
CBS 751 0.3359 634 0.3346 0.0435 0.0319 0.0461 88
y2012 611 0.3298 634 0.3289 0.0489 0.0361 0.0522 129

Now the spread is wider. The filter of being able to compute wOBA removed most of the systems with very few projected players, but it is interesting that Fangraphs Fans, which projects by far the fewest players, still lands mid-pack here: with the fill-in its RMSE rises from 0.0313 to 0.0365, roughly in line with the Steamer/Razzball rise from 0.0300 to 0.0357. There’s no obvious relationship between the number of players a system forecasts and its RMSE.
Steamer/Razzball still tops this chart, with Oliver and AllConsensus in a virtual tie now.
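The fill-in step behind the second table can be sketched simply: for any MLB player a system didn’t forecast, substitute that system’s average projected wOBA minus 0.020 (a mild penalty for not projecting the player at all). The function and argument names below are illustrative, not from my actual code:

```python
# For players a system missed, substitute the system's average projected
# wOBA minus 0.020, mirroring the fill-in rule described above.
def filled_forecasts(system_woba, mlb_players, avg_system_woba):
    fallback = avg_system_woba - 0.020
    return {p: system_woba.get(p, fallback) for p in mlb_players}
```

This lets every system be scored over the same 634 MLB players, at the cost of rewarding systems that simply project more of them.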
