## Tuesday, September 01, 2015

### End of Season Stats Update

The end of season statistics I post have always walked a fine line between striking a reasonable balance of accuracy and simplicity and veering too far into needless inaccuracy. That fundamental tension will not be removed by the changes I am making, but they do clean up a couple of shortcuts that were too glaring for me to ignore any longer. I'm writing these changes up about a month in advance so that I don't have to devote any more space to them in the already bloated post that explains all the stats.

**Runs Created**

I've modified the knockoff version of Paul Johnson's Estimated Runs Produced that I use to calculate runs created to consider intentional walks, hit batters, and strikeouts. The idea behind using ERP, which is what I refer to as a "skeleton" form of linear weights, rather than using some other construct, is that I don't want to recalculate each and every weight every season. Instead, using fixed relationships between the various offensive events that hold fairly well across average modern major league environments is easy to work with from year-to-year and avoids giving the appearance of customization that explicit weights for each event would convey. Mind you, such a formula can still be expressed as x*S + y*D + z*T + ...

The previous version I was using was:

(TB + .8H + W + .7SB - CS - .3AB)*x, where x is around .322

To this formula I need to add IW and HB. This is all fairly straightforward--hit batters can count the same as walks, intentional walks will be half of a standard walk:

TB + .8H + W + HB - .5IW + .7SB - CS - .3AB

The rationale for counting intentional walks as half of a standard walk is that it is fairly close to correct (Tom Ruane's work suggests the ratio of IW value to standard walk value was .57 for 1960-2004). There are other possible approaches, such as removing IW altogether but assigning them the value of the batter's average plate appearance. There is certainly logic behind such a method; just doing it the simple way is a bit more conservative in terms of recognizing differences between batters.

Hit batters are actually slightly more valuable on average than walks due to the more random situations in which they occur, but such a distinction would be overkill given the approximate nature of the other coefficients.

I considered making adjustments for the other events included in the standard statistics (strikeouts, sacrifice hits, sacrifice flies, double plays) but ultimately chose to forgo them. The difference between a strikeout and a non-strikeout out is around .01 runs; given the numerous shortcuts already being taken, this is simply not enough for me to worry about in this context. Sacrifices and double plays are problematic due to heavy contextual influences, although I came very close to counting sacrifice flies the same as any other out. If I were trying to do this precisely, I would include K, SH, and SF, but I would still leave double plays alone.

This was also a good opportunity to update the multipliers for all versions of the skeleton, which I did with the 2005-2014 major league totals to get these formulas:

ERP = (TB + .8H + W - .3AB)*.319 (had been using .324)
= .478S + .796D + 1.115T + 1.434HR + .319W - .096(AB - H)

ERP = (TB + .8H + W + .7SB - CS - .3AB)*.314 (had been using .322)
= .471S + .786D + 1.100T + 1.414HR + .314W + .220SB - .314CS - .094(AB - H)

ERP = (TB + .8H + W + HB - .5IW + .7SB - CS - .3AB)*.310
= .464S + .774D + 1.084T + 1.393HR + .310(W - IW + HB) + .155IW + .217SB - .310CS - .093(AB - H)
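The skeleton and expanded forms of the new version can be checked against each other directly. A quick Python sketch (the stat line is made up for illustration) shows they agree to within the rounding of the published coefficients:

```python
def erp_skeleton(s, d, t, hr, w, iw, hb, sb, cs, ab):
    """Skeleton form: (TB + .8H + W + HB - .5IW + .7SB - CS - .3AB) * .310"""
    h = s + d + t + hr
    tb = s + 2*d + 3*t + 4*hr
    return (tb + .8*h + w + hb - .5*iw + .7*sb - cs - .3*ab) * .310

def erp_expanded(s, d, t, hr, w, iw, hb, sb, cs, ab):
    """The same formula expressed as explicit linear weights."""
    h = s + d + t + hr
    return (.464*s + .774*d + 1.084*t + 1.393*hr + .310*(w - iw + hb)
            + .155*iw + .217*sb - .310*cs - .093*(ab - h))

# Hypothetical line: 100 S, 30 D, 5 T, 20 HR, 50 W (5 IW), 5 HB, 10 SB, 4 CS, 500 AB
line = (100, 30, 5, 20, 50, 5, 5, 10, 4, 500)
print(round(erp_skeleton(*line), 1))  # 88.2
print(round(erp_expanded(*line), 1))  # 88.0, off only by coefficient rounding
```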

The expanded versions illustrate one of the weaknesses of the skeleton approach, or more precisely of using total bases and hits rather than splitting out the hit types: the relationships between the hit types come out a bit off, particularly in the case of the triple. Still, I find the accuracy tradeoff acceptable for the purposes for which I use the end of season statistics.

For the batters who appeared in the 2014 end of season statistics, the biggest change switching to the version including HB and IW was five runs. Jon Jay, Carlos Gomez, and Mike Zunino gain five runs while Victor Martinez loses five runs. Of the 312 hitters, 263 (84%) change by no more than a run in either direction. So the differences are usually not material, another reason why I personally didn't mind the inaccuracy. But Carlos Gomez might disagree.

Along with the RC change are some necessary changes to other statistics. PA is now defined as AB + W + HB. OBA is (H + W + HB)/(AB + W + HB), and Secondary Average is (TB - H + W + HB)/AB, which is equal to SLG - BA + (OBA - BA)/(1 - OBA).
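The Secondary Average identity is easy to verify numerically; here is a small Python sketch with a made-up stat line:

```python
def sec_identities(ab, h, tb, w, hb):
    pa = ab + w + hb                      # PA = AB + W + HB
    ba = h / ab
    slg = tb / ab
    oba = (h + w + hb) / pa               # OBA = (H + W + HB)/(AB + W + HB)
    sec = (tb - h + w + hb) / ab          # Secondary Average
    # (OBA - BA)/(1 - OBA) reduces algebraically to (W + HB)/AB
    alt = slg - ba + (oba - ba) / (1 - oba)
    return sec, alt

sec, alt = sec_identities(ab=500, h=150, tb=250, w=50, hb=5)
print(round(sec, 3), round(alt, 3))  # both .310
```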

**RAA for Pitchers**

For as long as I've been running these reports, I've used the league run average as the baseline for Runs Above Average for both starters and relievers. This despite using very different replacement levels (128% of league average for starters and 111% for relievers). I've rationalized this somewhere, I'm sure, but the fundamental flaw is apparent when you look at my reliever reports and see three or four run gaps between RAA and RAR for many pitchers.

I want to avoid using the actual league average split for any given season, since it can bounce around and I'd rather use the league overall average in some manner. So my approach instead will be to look at the starter/reliever split in eRA (Run Average estimated based on component statistics, including actual hits allowed, so akin to Component ERA rather than FIP) for the last five league-seasons and see what makes sense.

The resulting difference in baseline between starters and relievers will not be as large as that exhibited in the replacement levels. The replacement level split attempts to estimate the true talent difference between the two roles, recognizing that most relievers would not be anywhere near as effective in a starting role. This adjustment is simply trying to compare an individual pitcher to what the composite league average pitcher would be estimated to have in his role (SP/RP) and does not account for our belief that the average starter is a better pitcher than the average reliever.

Additionally, using eRA rather than actual RA makes the adjustment more conservative than it otherwise might be, because it considers component performance rather than actual runs allowed. Part of the reliever advantage in RA is that the scoring rules benefit them. Why did I not take this into account? I actually don't use RA in calculating pitcher RAA or RAR; I use a version of Relief RA, which was created by Sky Andrecheck and makes an adjustment for inherited runners (a simple one that doesn't consider base/out state, simply the percentage that score). The version I use considers bequeathed runners as well, so as to adjust starters' run averages for bullpen support too. But the statistics on inherited and bequeathed runners by role for the league are not readily available, so I based the adjustment on eRA, which I already have calculated for each league-season broken out for starters and relievers.
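The inherited/bequeathed runner accounting isn't spelled out here, so the following Python sketch is only one plausible reading of the idea; the function name, default league rate, and exact bookkeeping are my assumptions, not the actual Relief RA formula. The notion is that a pitcher is charged for inherited runners he allows to score beyond the league rate, and relieved of bequeathed runners his bullpen allows to score beyond it:

```python
def relief_ra_sketch(r, ip, ir, ir_scored, br, br_scored, lg_scored_pct=0.30):
    """Rough sketch of a Relief RA-style adjustment (assumed bookkeeping).

    r             -- official runs allowed (as charged by the scoring rules)
    ir/ir_scored  -- inherited runners, and how many of them scored
    br/br_scored  -- bequeathed runners, and how many of them scored
    lg_scored_pct -- assumed league rate at which inherited runners score
    """
    adj_runs = (r + (ir_scored - ir * lg_scored_pct)
                  - (br_scored - br * lg_scored_pct))
    return adj_runs * 9 / ip

# Hypothetical reliever: 25 R in 60 IP, inherits 40 runners, 8 of them score
print(round(relief_ra_sketch(25, 60, 40, 8, 0, 0), 2))  # 3.15
```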

This chart should be fairly self-explanatory: seRA is starter eRA, reRA is reliever eRA, eRA is the league eRA, s ratio = seRA/Lg(eRA), r ratio = reRA/Lg(eRA), and S IP% is the percentage of league innings thrown by starters. The relationships are fairly stable for the last five years, and so I have just used the simple average of the league-season s and r ratios to figure the adjustments.

RAA (for SP) = (LgRA*1.025 - RRA)*IP/9

RAA (for RP) = (LgRA*.951 - RRA)*IP/9

You can check my math as the weighted average of the adjustment is 1.025(.665) + .951(1 - .665) = 1.0002.
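In Python, the two baselines look like this, with the same sanity check on the innings-weighted average (the example inputs are made up):

```python
S_ADJ, R_ADJ = 1.025, 0.951   # starter / reliever baseline multipliers
S_IP_PCT = 0.665              # share of league innings thrown by starters

def raa_sp(lg_ra, rra, ip):
    """RAA for a starter: (LgRA*1.025 - RRA)*IP/9"""
    return (lg_ra * S_ADJ - rra) * ip / 9

def raa_rp(lg_ra, rra, ip):
    """RAA for a reliever: (LgRA*.951 - RRA)*IP/9"""
    return (lg_ra * R_ADJ - rra) * ip / 9

# The baselines average back to the league RA, weighted by innings share
assert abs(S_ADJ * S_IP_PCT + R_ADJ * (1 - S_IP_PCT) - 1.0) < 0.001

# Hypothetical starter: 3.50 RRA over 200 IP in a 4.30 RA league
print(round(raa_sp(4.30, 3.50, 200), 1))  # 20.2
```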