- CAIRO - from S B of the Replacement Level Yankees Weblog.
- Marcel - the basic projection model from Tom Tango, coauthor of The Book. This year I’m using Marcel numbers generated by Jeff Zimmerman, using Jeff Sackmann’s Python code.
- MORPS - A projection model by Tim Oberschlake.
- Steamer/Razzball - Rate projections by Jared Cross, Dash Davidson, and Peter Rosenbloom, and playing time projections from Rudy Gamble of Razzball.com.
- RotoValue - my current model, based largely on Marcel, but with adjustments for pitching decision stats and assuming no pitcher skill in BABIP.
Like last year, I’m computing the standard deviation, mean absolute error (MAE), and root mean square error (RMSE) for each source.
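These three error metrics are straightforward to compute. Here's a minimal sketch (not the actual code used for the tables, and with made-up numbers) of how the errors between projected and actual wOBA could be measured:

```python
import math

def error_metrics(projected, actual):
    """Return (standard deviation, MAE, RMSE) of projection errors.

    projected, actual: parallel lists of wOBA values for the same players.
    """
    errors = [p - a for p, a in zip(projected, actual)]
    n = len(errors)
    mean_err = sum(errors) / n
    # Standard deviation of the errors around their own mean
    sd = math.sqrt(sum((e - mean_err) ** 2 for e in errors) / n)
    # Mean absolute error: average magnitude of the misses
    mae = sum(abs(e) for e in errors) / n
    # Root mean square error: penalizes large misses more heavily
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    return sd, mae, rmse
```

Because RMSE squares each error before averaging, a system with a few very bad misses will look worse by RMSE than by MAE, even if its typical miss is small.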
This table includes only those players who were projected by all five systems and who also played in 2012.
The spread in errors for the projection systems is small, and all systems do much better than using 2012 numbers. Steamer/Razzball had the lowest overall errors, while MORPS and my updated RotoValue model had almost identical errors, just behind Marcel. The simple average consensus ranked second best, just ahead of CAIRO.
Below is a more detailed table, showing averages for all the players a system projects. Num is the total number of players projected, and the first wOBA column is the cumulative wOBA of all those players. MLB is the number of projected players who actually had a plate appearance in 2013, and the second wOBA column is the cumulative wOBA of those players. For that set, I again computed RMSE and MAE, and sorted by the former.
Steamer again had the lowest errors, but MORPS moves a little ahead of my system and into a virtual tie with Marcel, while CAIRO now ranks a little ahead of the Consensus. That may be because I compute a consensus from whatever sources I had available, which often was just Marcel and my own RotoValue system, two systems with somewhat higher overall errors. The errors from the projections are not quite as bunched together as before, but they are still close (and all much lower than the errors using 2012 data). It’s interesting that, aside from the consensus, the ordering from lowest errors is almost the same as the ordering from fewest players projected.
Finally, in this last table, for any player a system did not project, I fill in that system’s league average wOBA minus 0.020.
Each system now has lower average errors, and the ones that projected fewer players tended to see their errors drop a little more. The big picture stays basically the same, though: all the projections are much better than 2012 numbers, Steamer performed the best, followed by CAIRO, and they all are still pretty close to each other, with a spread of just under 0.004 in RMSE between Steamer and RotoValue, the best and worst ranked projection systems.
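The fill-in rule itself is simple. Here's a hypothetical sketch, where the league-average value is an assumed placeholder rather than any system's actual figure:

```python
# Assumed league-average wOBA, for illustration only; each system
# would use its own league average here.
LEAGUE_AVG_WOBA = 0.315

def projected_woba(projections, player_id, league_avg=LEAGUE_AVG_WOBA):
    """Return a system's projection for a player, or the fill-in value
    (league average minus 0.020) if the system did not project him."""
    return projections.get(player_id, league_avg - 0.020)
```

The 0.020 discount reflects the idea that players a system declined to project are, on average, likely to be somewhat below-average hitters.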
Next I’ll perform similar analysis on pitching projections.
Update January 31, 2014: Two points:
1. In computing the errors I’ve bias-adjusted each source. So if an exogenous event changes the overall run environment (say, unusually mild weather, a changed strike zone, a somewhat different ball, or some other factor), the systems are not judged primarily on how well they guessed the new run environment. Effectively I first compute a delta of each player’s wOBA relative to the average of that projection (or Actual), and then compare those deltas. So adding any arbitrary constant to all projections has no effect whatsoever on my reported errors.
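The bias adjustment described above can be sketched in a few lines. This is an illustration of the idea, not the actual program: each source's values are converted to deltas from that source's own mean before the errors are computed, so adding any constant to all of a source's projections leaves the errors unchanged.

```python
def bias_adjust(values):
    """Convert a source's wOBA values to deltas from that source's mean."""
    mean = sum(values) / len(values)
    return [v - mean for v in values]

def adjusted_errors(projected, actual):
    """Errors between bias-adjusted projections and bias-adjusted actuals."""
    return [p - a for p, a in zip(bias_adjust(projected), bias_adjust(actual))]
```

Shifting every projection up or down by the same amount shifts the mean by exactly that amount, so the deltas, and hence the reported errors, are identical.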
2. While running my program for pitching stats, I noticed a bug which affected the second and third tables in this post. Some players, for whom my database did not have an MLBAM ID, were excluded from the averages. None of the players projected by all systems were affected, and the relative order of systems remained the same, but the exact numbers have changed slightly.
Update 7 February 2014: I found a bug in the program that generated the last table, so after fixing that I’ve replaced the table above and adjusted some of the commentary in light of the corrected data.