A big unknown in fantasy projections is playing time. Indeed, projecting playing time well may be as important to fantasy owners as projecting a player’s rate statistics, or even more so.
The playing time in RotoValue projections is the result of a three-stage process. First, a basic projection algorithm uses a player’s age and historical performance to project playing time. That same process projects various rate statistics as well, and is the foundation of the projection model.
The second stage is manual: I periodically review injury reports and set an injury factor for each player projected to miss regular-season time. That factor is a guess at the fraction of the season the player will likely miss. Once I have these factors, I reduce each player’s stats by the fraction of the season he’s projected to miss. So if Alex Rodriguez is expected to miss half the season, I subtract 50% of his original projection from all his stats.
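A minimal sketch of how such an injury factor might be applied, assuming projections are stored as a dict of counting stats (the function name and data layout here are illustrative, not RotoValue’s actual code):

```python
def apply_injury_factor(stats, missed_fraction):
    """Scale every counting stat down by the fraction of the season missed."""
    return {k: v * (1.0 - missed_fraction) for k, v in stats.items()}

# A player expected to miss half the season loses half of each projected stat.
arod = {"PA": 600, "HR": 30, "RBI": 95}
apply_injury_factor(arod, 0.5)  # → {'PA': 300.0, 'HR': 15.0, 'RBI': 47.5}
```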
The third phase is what I call “normalization”. I add up each team’s cumulative projected IP and PA, both total and by position, and compare those totals to the previous year’s actual per-team averages. At 162 games per year and 9 innings per game, a typical MLB team should throw about 1458 innings per season (rainouts, extra-inning games, and not pitching the bottom of the 9th pull actual totals a bit lower), so if a team projects to 1700 innings, or just 1200, I know there’s a problem. Similarly, if the total projected PA from a team’s 1B are above 1000, or below 400, there’s also an issue.
If a team’s projected total at a position is 25% above, or 25% below, the positional average from last year’s actual data, I look to adjust.
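The 25% band test is simple to express in code. This is a sketch under my own naming; the example league-average 1B PA figure of 700 is a hypothetical, not a number from the article:

```python
def out_of_range(projected_total, league_avg, tolerance=0.25):
    """True if a team's projected positional total falls outside
    ±25% of last year's per-team positional average."""
    return abs(projected_total - league_avg) > tolerance * league_avg

# Using a hypothetical league average of 700 PA for a team's 1B:
out_of_range(1000, 700)  # True: more than 25% above average, so adjust
out_of_range(650, 700)   # False: within the band, leave it alone
```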
The adjustment is algorithmic. I sort players by “value”: for batters, I use a variant of weighted on-base average (wOBA) that includes SB and CS; for pitchers, I add Fielding Independent Pitching (FIP) plus 3 times WHIP. When I need to add playing time, I compare players to the worst one and allocate more time to the better performers; when I’m reducing time, I compare to the best player and reduce more from the worse performers (to the extent that they have played). There are restrictions on how much I can add: a player won’t be projected to go above the 90th percentile of PA/IP at his position in the previous year. Also, if a player was marked as having an injury in phase 2, I won’t increase his playing time at all.
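The time-adding pass might look something like the following sketch. The weighting scheme here (shares proportional to each player’s value above the worst eligible player) is my own guess at one reasonable reading of “allocate more time to better performers”; the field names and roster data are illustrative:

```python
def add_playing_time(players, shortfall, pa_cap):
    """Distribute `shortfall` extra PA across a position's players,
    weighted by each player's value above the worst eligible player.

    Players flagged as injured in phase 2 get nothing, and nobody is
    pushed above `pa_cap` (the 90th-percentile PA at the position last
    year). PA lost to the cap is not redistributed in this sketch.
    """
    eligible = [p for p in players if not p["injured"] and p["pa"] < pa_cap]
    if not eligible:
        return players
    worst = min(p["value"] for p in eligible)
    total = sum(p["value"] - worst for p in eligible)
    for p in eligible:
        # If all eligible players are equally valuable, split evenly.
        share = (p["value"] - worst) / total if total else 1 / len(eligible)
        p["pa"] = min(p["pa"] + shortfall * share, pa_cap)
    return players

roster = [
    {"name": "A", "value": 0.360, "pa": 450, "injured": False},
    {"name": "B", "value": 0.340, "pa": 380, "injured": False},
    {"name": "C", "value": 0.320, "pa": 400, "injured": False},
    {"name": "D", "value": 0.350, "pa": 350, "injured": True},
]
add_playing_time(roster, 100, 600)
# A (best) absorbs the largest share of the 100 PA, C (worst) gets none,
# and D, despite his value, is injured and untouched.
```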
After the by-position checks, I perform another check for the entire team, so that team totals fall within a reasonable range. Here the range I use is the actual minimum and maximum team IP/PA from the prior season.
*Somebody* has to play OF for the Mets this year, so after my normalization step I project Mets outfielders to get 1869 cumulative PA. Other projection systems give less time to Mets outfielders.
Now of course it’s quite possible that the Mets will trade for an outfielder from somewhere else, so the current collective Mets outfielders may wind up with low total PA.
Before I did my normalization step, my model projected the lowest cumulative total PA for the Mets outfield, just 1098 PA.
In cases where there’s overcrowding, the normalization step reduces playing time, as with the Mariners, where Marcel and MORPS both projected over 3000 total PA for the team. Especially when using mostly algorithmic projections, I think there’s value in checking cumulative playing time, and making sure totals are at least fairly reasonable.
For rather deep leagues, this also has an impact on pricing, as it reduces values of players in crowded situations, while boosting values of players with little competition for playing time. Savvy owners understand this general point, but it’s even better when fantasy projections take it into account, too.