Thursday I compared five projection systems on their projections for weighted on-base average (wOBA). Today I'm looking at two pitching categories: runs allowed per 9 innings (RA9) and WHIP, walks plus hits per inning pitched.
Tom Tango kindly highlighted my post yesterday, but suggested that one of my charts was useless because it did not compare the systems on the same players. So today I'll show two charts per stat: one using only those players projected by all systems, and another filling in a "missing" value for any player a system did not project.
The systems are the same five:
- CAIRO – from S B of the Replacement Level Yankees Weblog.
- Marcel – the basic projection model from Tom Tango, coauthor of The Book. This year I’m using Marcel numbers generated by Jeff Zimmerman, using Jeff Sackmann’s Python code.
- MORPS – A projection model by Tim Oberschlake.
- Steamer/Razzball – Rate projections by Jared Cross, Dash Davidson, and Peter Rosenbloom, and playing time projections from Rudy Gamble of Razzball.com.
- RotoValue – my current model, based largely on Marcel, but with adjustments for pitching decision stats and assuming no pitcher skill in BABIP.
And, as before, I'm calculating RMSE and MAE, and sorting by the former. The errors are bias-adjusted: I first compare each player's projected stat to that system's average, and then compare that delta to the player's delta from the actual average. I'm using actual innings pitched to weight the averages. First up, RA9.
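The bias-adjusted, IP-weighted error calculation can be sketched roughly like this (a minimal illustration, not my actual code; `bias_adjusted_errors` is a hypothetical helper):

```python
import math

def bias_adjusted_errors(projected, actual, ip):
    """IP-weighted, bias-adjusted RMSE and MAE for a rate stat like RA9.

    projected, actual: per-pitcher rates; ip: actual innings pitched,
    used as weights. Each system's projections are centered on its own
    IP-weighted mean, and actuals on theirs, so a system isn't punished
    for a league-wide miss, only for how it ranks pitchers relative to
    one another.
    """
    total_ip = sum(ip)
    proj_avg = sum(p * w for p, w in zip(projected, ip)) / total_ip
    act_avg = sum(a * w for a, w in zip(actual, ip)) / total_ip
    # Compare each pitcher's delta from the projected average with his
    # delta from the actual average.
    errors = [(p - proj_avg) - (a - act_avg)
              for p, a in zip(projected, actual)]
    rmse = math.sqrt(sum(w * e * e for e, w in zip(errors, ip)) / total_ip)
    mae = sum(w * abs(e) for e, w in zip(errors, ip)) / total_ip
    return rmse, mae
```

Note that a pitcher with 0 IP automatically drops out of both error averages, since his weight is zero.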
These are the 373 pitchers projected by all the systems:
Steamer again is the best, followed by the Consensus. The errors here are much higher, at 25% or more of the statistic, compared to about 10-12% when projecting wOBA. The systems also vary more in how well they do. Pitching is indeed harder to predict.
Next I’ve assumed an RA9 of 0.50 above the league average for any players not projected by a system:
This comparison shakes up the order quite a bit compared to using only the subset of pitchers projected by all systems. Steamer still has the lowest errors, and the Consensus remains second, but MORPS and CAIRO now do much better: MORPS moves ahead of Marcel, and CAIRO passes RotoValue, while Marcel, the Consensus average, and my RotoValue projection do relatively worse in this test. Including more, and worse, pitchers raises the errors of all the systems.
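The filling step for unprojected pitchers can be sketched as follows (a hypothetical `fill_missing_ra9` helper, assuming projections keyed by player id; this is an illustration, not my actual code):

```python
def fill_missing_ra9(system_proj, all_pitchers, league_avg_ra9, penalty=0.50):
    """Give a system a projection for every pitcher in the pool.

    Pitchers the system skipped get league-average RA9 plus a penalty
    (0.50 here), on the theory that unprojected pitchers are, on
    average, worse than the league as a whole.
    """
    return {pid: system_proj.get(pid, league_avg_ra9 + penalty)
            for pid in all_pitchers}
```

For WHIP the same idea applies, with a 0.100 penalty instead of 0.50.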
Now on to WHIP, for the 373 pitchers projected by all:
The RMSEs here are a little tighter than for RA9, but still much wider than for wOBA. My model edged out Steamer/Razzball in RMSE here, but was a bit behind it in MAE, while the Consensus did even better than mine.
Now here’s adding league-average WHIP plus 0.100 for any players not projected:
Steamer comes out on top in this test, while my model and the Consensus drop back. Pitching statistics take longer to stabilize, so the relatively higher errors here are not surprising.

Very observant readers might have noticed a slight difference between the actual WHIP and RA9 values in the two columns of the second tables. That's an artifact of a small number of pitchers who did not record an out, and thus had 0 IP. These pitchers aren't included when I compute errors (since I weight by IP, they get a weight of 0), but their runs, hits, and walks allowed do slightly increase the aggregate actual statistics.
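To make that footnote concrete, here's a small sketch (assumed helper name, not my actual code) of why the two "actual" columns differ: a pitcher with 0 IP carries zero weight in the error averages, but his runs still count toward the aggregate actual RA9.

```python
def aggregate_ra9(runs, ip):
    """Actual RA9 from raw totals: 9 * total runs / total innings.

    Runs allowed by a pitcher with 0 IP still land in the numerator,
    even though that pitcher carries zero weight in the IP-weighted
    error calculations (weight = IP = 0).
    """
    return 9.0 * sum(runs) / sum(ip)

# Two pitchers: one normal, one who allowed 2 runs without recording an out.
runs = [50, 2]
ip = [100.0, 0.0]

with_zero_ip = aggregate_ra9(runs, ip)             # includes the 0-IP pitcher
without_zero_ip = aggregate_ra9(runs[:1], ip[:1])  # drops him entirely
```

The two aggregates differ slightly (4.68 vs. 4.50 in this toy example), which is exactly the kind of small discrepancy visible in the tables.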
7 February 2014: I found and fixed a bug in my code that generated the tables using a worse than league average value for missing players, so the affected tables in this post were changed to reflect that.