For the past three years I’ve reviewed baseball projection systems against actual results, covering those systems for which I could get data. Will Larson maintains a valuable site collecting projections from many different sources, and most of the sources I’m comparing come from there.
As in the past, I’m computing root mean square error (RMSE) and mean absolute error (MAE) for each source compared to actual data. For these tests I apply a bias adjustment, so the errors are measured relative to each source’s own average: I care more about how a system projects players relative to its own projected averages than about how well it projects the overall league average.
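The bias-adjusted error calculation described above can be sketched roughly as follows (this is my own illustrative helper, not the author’s actual code):

```python
import math

def bias_adjusted_errors(projected, actual):
    """Compute RMSE and MAE after removing a source's overall bias.

    Subtracting the mean residual from every residual measures how well
    a system spreads players around its own average, rather than how
    well it pins down the league-wide level.
    """
    residuals = [p - a for p, a in zip(projected, actual)]
    bias = sum(residuals) / len(residuals)
    adjusted = [r - bias for r in residuals]
    rmse = math.sqrt(sum(r * r for r in adjusted) / len(adjusted))
    mae = sum(abs(r) for r in adjusted) / len(adjusted)
    return rmse, mae
```

Note that a source which projects every player exactly 10 points of wOBA too high would score perfectly here, since the uniform offset is absorbed by the bias term.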
I have data from these systems:
- AggPro – A projection aggregation method from Ross J. Gore, Cameron T. Snapp, and Timothy Highley.
- Bayesball – Projections from Jonathan Adams.
- CAIRO – from S B of the Replacement Level Yankees Weblog.
- CBS – Projections from CBS Sportsline.
- Davenport – Clay Davenport’s projections.
- ESPN – Projections from ESPN.
- Fans – Fans’ projections from Fangraphs.com.
- Larson – Will Larson’s projections.
- Marcel – the basic projection model from Tom Tango, coauthor of The Book. This year I’m using Marcel numbers generated by Jeff Zimmerman, using Jeff Sackmann’s Python code.
- MORPS – A projection model by Tim Oberschlake.
- Rosenheck – Projections by Dan Rosenheck.
- Oliver – Brian Cartwright’s projection model.
- Steamer – Projections by Jared Cross, Dash Davidson, and Peter Rosenbloom.
- Steamer/Razzball – Steamer rate projections, but playing time projections from Rudy Gamble of Razzball.com.
- RotoValue – my current model, based largely on Marcel, but with adjustments for pitching decision stats and assuming no pitcher skill in BABIP.
- RV Pre-Australia – The RotoValue projections taken just before the first Australia games last year. Before the rest of the regular season I continued to tweak projections slightly.
- ZiPS – projections from Dan Szymborski of Baseball Think Factory and ESPN.
In addition, I’ve computed a source “All Consensus”, which is a simple average of each of the above (ignoring a source if it doesn’t project some particular category).
Not all the models had enough data to compute wOBA, so the tables (below the jump) only include those sources which do. The other sources do affect the All Consensus values for those stats where they do have data.
First, as an “apples-to-apples” comparison, I’m comparing only those players projected by each system (279 total):
The lowest errors came from the Consensus, so there may be some marginal improvement from averaging multiple sources. But the spread among systems was rather small. Steamer did best among actual systems, but they all did markedly better than my simple benchmark of 2013 data. ZiPS was second-best, followed by the two RotoValue models (yay!). Marcel remains quite competitive here, though, which shows that a basic model can still do quite well.
Next I’m rerunning the analysis using a wOBA 20 points worse than league average for any player a system didn’t project, and now comparing the 643 players projected by at least one system:
The errors are a bit bigger, as this set includes more players, among them those who will play less (and thus be less likely to perform close to their true talent). Steamer is again the best single system, this time edging out ZiPS slightly, with the Consensus now just behind Steamer/Razzball. Oliver, CBS, and Fangraphs Fans, which all lagged Marcel in the smaller set, now do better, and all systems now have lower errors than Tango’s monkey system. My model, however, dropped back relative to the other systems, which suggests my projections for weaker players may lag other systems’.
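The fill-in rule for unprojected players could be sketched as below; the league-average wOBA constant of .320 is my own placeholder, since the post doesn’t state the exact figure used:

```python
LEAGUE_WOBA = 0.320  # assumed league average, for illustration only
PENALTY = 0.020      # "20 points worse than league average"

def filled_woba(source_projections, player):
    """Return a source's wOBA projection for a player, substituting
    20 points below league average when the source skipped him."""
    return source_projections.get(player, LEAGUE_WOBA - PENALTY)
```

This fallback penalizes a system for every player it declined to project, which is why the errors grow when moving from the apples-to-apples set to the full 643-player set.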
The spread between the best and worst system in RMSE is just 0.0023, even smaller than last year’s spread, while the gap from the weakest system to 2013 data is over 5 times as large. So using projections is better than simply relying on last year’s data. Steamer also came out on top in the comparison I did last year, but the spread between systems is smaller this time, so which projections you use matters far less than that you use projections.
Update: Rudy Gamble of Razzball.com asked if I could rerun the analysis for players with 500 or more PA. So here’s the table:
This is very much like the apples-to-apples table above, since very few of these players lacked a projection from any system. This is a smaller set of better players, and the overall errors are lower, but the ordering remains about the same.
It’s interesting that a simple equally weighted average of the various projections seems to do best, but it isn’t that surprising, as a “view of the masses” phenomenon. Do you have the ability to optimize the weights of the projections to create an optimal consensus? For example, maybe an optimal consensus uses 40% Steamer and 10% each of six other projections. Basically, solve for the optimal projection weights to get the lowest RMSE.
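The commenter’s idea can be sketched as an unconstrained least-squares fit; a serious version would likely constrain the weights to be non-negative and sum to one (e.g. with a constrained solver), and would need to guard against overfitting the weights to one season. The function name and data here are my own illustration:

```python
import numpy as np

def optimal_weights(source_matrix, actual):
    """Solve for per-source weights minimizing RMSE in-sample.

    source_matrix: shape (n_players, n_sources), one column per system.
    actual:        shape (n_players,), the realized stat values.
    Minimizing the sum of squared residuals also minimizes RMSE, so an
    ordinary least-squares solve gives the optimal weights.
    """
    weights, *_ = np.linalg.lstsq(source_matrix, actual, rcond=None)
    return weights
```

Weights fit on one season would need to be validated on a later season; otherwise the “optimal” blend mostly chases that year’s noise.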
Glad to see your numbers are pretty much in line with what I found when I did this a couple months ago. Hat tip to you for including so many projection systems, I know those merges are a pain! Great stuff!