Yesterday I ran comparisons of several projections systems for an all-inclusive batting statistic, wOBA. Today I’m running the same tests, computing root mean square error (RMSE) and mean absolute error (MAE), for two commonly used fantasy statistics, ERA and WHIP. These tests are bias-adjusted, so what matters is a player’s ERA or WHIP relative to the overall average of that system, compared with the player’s actual statistic relative to the actual overall average. The lower the RMSE or MAE, the better a projection system predicted the actual data.
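To make the methodology concrete, here's a small sketch (my own illustration, not the exact code behind these tables) of a bias-adjusted, innings-weighted RMSE and MAE:

```python
import math

def bias_adjusted_errors(projected, actual, weights):
    """Bias-adjusted RMSE and MAE for a list of pitchers.

    Each pitcher's projected stat is compared relative to the system's
    own (weighted) average, against his actual stat relative to the
    actual (weighted) average. Weights here are actual innings pitched,
    so full-season starters count more than September call-ups.
    """
    total_w = sum(weights)
    proj_avg = sum(p * w for p, w in zip(projected, weights)) / total_w
    act_avg = sum(a * w for a, w in zip(actual, weights)) / total_w
    # Error = (projection vs. its own average) - (actual vs. actual average)
    errs = [(p - proj_avg) - (a - act_avg)
            for p, a in zip(projected, actual)]
    rmse = math.sqrt(sum(w * e * e for e, w in zip(errs, weights)) / total_w)
    mae = sum(w * abs(e) for e, w in zip(errs, weights)) / total_w
    return rmse, mae
```

Note that a system whose projections are uniformly too high or too low scores a perfect zero here: only a pitcher's position relative to the system's own average matters.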
I have data for these projection models:
- AggPro – A projection aggregation method from Ross J. Gore, Cameron T. Snapp, and Timothy Highley.
- Bayesball – Projections from Jonathan Adams.
- CAIRO – from S B of the Replacement Level Yankees Weblog.
- CBS – projections from CBS Sportsline.
- Davenport – Clay Davenport’s projections.
- ESPN – projections from ESPN.
- Fans – Fans’ projections from Fangraphs.com.
- Larson Will Larson’s projections.
- Marcel – the basic projection model from Tom Tango, coauthor of The Book. This year I’m using Marcel numbers generated by Jeff Zimmerman, using Jeff Sackmann’s Python code.
- MORPS – A projection model by Tim Oberschlake.
- Rosenheck – projections by Dan Rosenheck.
- Oliver – Brian Cartwright’s projection model.
- Steamer – Projections by Jared Cross, Dash Davidson, and Peter Rosenbloom.
- Steamer/Razzball – Steamer rate projections, but playing time projections from Rudy Gamble of Razzball.com.
- RotoValue – my current model, based largely on Marcel, but with adjustments for pitching decision stats and assuming no pitcher skill in BABIP.
- RV Pre-Australia – The RotoValue projections taken just before the first Australia games last year. Before the rest of the regular season I continued to tweak projections slightly.
- ZiPS – projections from Dan Szymborski of Baseball Think Factory and ESPN.
First up is ERA, comparing the 75 pitchers projected by all systems:
Not surprisingly, the errors, even as a percentage of the average, are much higher here than for wOBA, as pitching performance is more volatile than batting performance. Will Larson’s projections did best here, followed by the Consensus, CBS Sportsline, and Steamer. All the models handily beat using 2013 data, albeit not as decisively as with wOBA, but seven lagged behind Tango’s Marcel system. The other notable thing to me is that every system’s average ERA for these players is higher than the actual average. Indeed, even the 2013 actual data was slightly lower, which shows how dependent the systems are on older historical data: aggregate ERA has fallen sharply since 2012, and the projection models still reflect that higher run environment to a large degree.
When you’re preparing for a fantasy auction, you care much more about how a system rates players relative to each other than how it rates them compared to the actual data. For these 75 pitchers, Steamer projected an aggregate 3.81 ERA, while they actually had a 3.43 ERA (weighted by actual 2014 innings pitched). But Steamer still ranks lower in errors than Fangraphs Fans’ projections, which came closest to the actual average of 3.44. This is the impact of bias adjustment: while the fans did better at projecting the actual league average ERA, Steamer tended to be closer on more individual players when adjusted for its league average.
To underscore that point, here’s the same ERA table as above, but this time doing raw errors, i.e. not doing any bias adjustment at all:
Now Fans ranks well above Steamer, which is near the bottom in this test. The lower errors here tend to come from systems that more closely match the actual league ERA. Even naively using 2013 data as a forecast is no longer dead last by a wide margin, while CBS, ZiPS, and Fans are the best-performing systems here. Comparing these two tables shows the impact of bias adjustment: what matters most for fantasy valuation is how players compare relative to each other, not how well a system predicts the actual run environment, so the first table is the better comparison for fantasy purposes.
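A toy example (made-up numbers, just to illustrate the mechanism) shows how the two tests can disagree: a system that is uniformly too high but orders pitchers perfectly wins the bias-adjusted test, while a system closer to the actual league level wins the raw test:

```python
import math

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

actual = [3.00, 3.50, 4.00]
# System A: every projection 0.60 too high, but relative order is perfect.
sys_a = [3.60, 4.10, 4.60]
# System B: closer to the actual league level, but worse player-to-player.
sys_b = [3.50, 3.30, 3.90]

def raw_rmse(proj):
    return rmse([p - a for p, a in zip(proj, actual)])

def adjusted_rmse(proj):
    p_avg = sum(proj) / len(proj)
    a_avg = sum(actual) / len(actual)
    return rmse([(p - p_avg) - (a - a_avg) for p, a in zip(proj, actual)])

# System A loses badly on raw error, but scores a perfect zero once
# both systems are adjusted for their own league averages.
```

This is the Steamer/Fans situation in miniature: a consistent level bias costs nothing after adjustment, while player-to-player misses always cost.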
Both these tables are “apples to apples”, comparing only those players that each system projected. And it’s a small set of mostly better-than-average pitchers, a group that is easier to project than a deeper pool of MLB pitchers.
But of course if you’re in a fantasy league, it doesn’t help when a system doesn’t project someone you might care about. So this next table will use an ERA 0.50 worse than the system’s league average for anybody not projected, and compare against a set of almost 700 pitchers:
Here systems get more credit for projecting more players, so long as those projections are better than the default of 0.50 worse than average. This shakes up the order quite a bit. Now Steamer/Razzball does best, followed by the Consensus, Clay Davenport, and Steamer. Will Larson, the winner of the test on the smaller (and overall better) player set, drops to the middle of the pack, while CBS Sportsline and ZiPS now slip behind Marcel. Bayesball, which ranked worst in the earlier test, improves markedly. The overall errors increase quite a bit, as we’re now comparing projections for a much deeper set of players, many of whom have much shorter track records.
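The missing-player handling can be sketched like this (the names and numbers are hypothetical, purely for illustration): any pitcher in the comparison pool that a system skipped is charged that system's league-average ERA plus 0.50:

```python
DEFAULT_PENALTY = 0.50  # ERA charged above the system's own average

def fill_missing(projections, system_avg, pool):
    """Return a projection for every pitcher in the comparison pool.

    `projections` maps pitcher -> projected ERA for one system.
    Anyone in `pool` the system skipped gets the system's league
    average plus the penalty, so projecting a player only helps
    a system if its projection beats that default.
    """
    default = system_avg + DEFAULT_PENALTY
    return {p: projections.get(p, default) for p in pool}

# Hypothetical system that projected only two of three pitchers:
proj = {"Ace": 2.40, "Starter": 3.10}
filled = fill_missing(proj, system_avg=3.80, pool=["Ace", "Starter", "Rookie"])
# "Rookie" is assigned 3.80 + 0.50
```

The same idea applies to the WHIP table further down, with a 0.10 penalty instead of 0.50.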
Finally, let’s take a look at WHIP. First, the “apples-to-apples” table comparing only the 75 pitchers projected by all systems:
This time the CBS Sportsline projections wind up with the lowest errors, followed by the Consensus and ZiPS. Marcel actually beats most systems in this test. As a percentage of the projected average, the errors are smaller for WHIP than for ERA, which makes sense since WHIP stabilizes more quickly. But the spread in errors between systems is much wider, so systems vary more in their WHIP projections than in their ERA projections for this sample of pitchers.
Finally, here’s the table using a WHIP of 0.10 worse than the projected league average for missing players:
In this test Clay Davenport’s model edges out Steamer/Razzball and Steamer for lowest RMSE, with Fangraphs Fans, AggPro, and ESPN close behind. Marcel beats more than half the models, as in the test of the 75 players projected by all, but now for the first time in these tests it also beats the Consensus, which is usually among the best-performing systems. Here, though, the wider spread in errors among systems may work against a crowd-sourcing approach that usually does quite well with other stats.
Projection systems vary much more on WHIP than they do on ERA (or wOBA), but in general they all perform much better than 2013 data. Yet while for wOBA most systems usually beat the benchmark of Tom Tango’s Marcel system, for these pitching stats Marcel is still quite good. Projecting pitching is harder than projecting hitting. Fantasy veterans know that already, of course, but these numbers support that conclusion as well.