## Tuesday, March 14, 2017

### Win Value of Pitcher Adjusted Run Averages

The most common class of metrics used in sabermetrics for cross-era comparisons uses relative measures of actual or estimated runs per out or some other similar denominator. These include ERA+ for pitchers and OPS+ or wRC+ for batters (OPS+ being an estimate of relative runs per out, wRC+ using plate appearances in the denominator but accounting for the impact of avoiding outs). While these metrics provide an estimate of runs relative to the league average, they implicitly assume that the resulting relative scoring level is equally valuable across all run environments.

This is in fact not the case, as it is well-established that the relationship between run ratio and winning percentage depends on the overall level of run scoring. A team with a run ratio of 1.25 will have a different expected winning percentage if they play in a 9 RPG environment than if they play in a 10 RPG environment. Metrics like ERA+ and OPS+ do not translate relative runs into relative wins, but presumably the users of such metrics are ultimately interested in what they tell us about player contribution to wins.

There are two key points that should be acknowledged upfront. One is that the difference in win value based on scoring level is usually quite small. If it weren’t, winning percentage estimators that don’t take scoring level into account would not be able to accurately estimate W% across the spectrum of major league teams. While methods that do consider scoring level are more accurate estimators of W% than similar methods that don’t, a fixed-exponent Pythagorean approach can still produce useful estimates despite maintaining a fixed relationship between runs and wins.

The second is that players are not teams. The natural temptation (and one I will knowingly succumb to in what follows) is to simply plug the player’s run ratio into the formula and convert to a W%. This approach ignores the fact that an individual player’s run rate does not lead directly to wins, as the performance of his teammates must be included as well. Pitchers are close, because while they are in the game they are the team (more accurately, their runs allowed figures reflect the totality of the defense, which includes contributions from the fielders), but even ignoring fielding, non-complete games include innings pitched by teammates as well.

For the moment I will set that aside and instead pretend (in the tradition of Bill James’ Offensive Winning %) that a player or pitcher’s run ratio can or should be converted directly to wins, without weighting the rest of the team. This makes the figures that follow something of a freak show stat, but the approach could be applied directly to team run ratios as well. Individuals are generally more interesting and obviously more extreme, which means that the impact of considering run environment will be overstated.

I will focus on pitchers here and will use Bob Gibson’s 1968 season as an example. Gibson allowed 49 runs in 304.2 innings, which works out to a run average of 1.45 (there will be some rounding discrepancies in the figures). In 1968 the NL average RA was 3.42, so Gibson’s adjusted RA (aRA for the sake of this post) is RA/LgRA = .423 (ideally you would park-adjust as well, but I am ignoring park factors for this post). As an aside, please resist the temptation to cite his RA+ of 236 instead. Please.

.423 is a run ratio; Gibson allowed runs at 42.3% of the league average. Since wins are the ultimate unit of measurement, it is tempting to convert this run ratio to a win ratio. We could simply square it, which reflects a Pythagorean relationship. Ideally, though, we should consider the run environment. The 1968 NL was an extremely low scoring league. Pythagenpat suggests that the ideal exponent is around 1.746. Let’s define the Pythagenpat exponent to use as:

x = (2*LgRA)^.29
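To make the arithmetic concrete, here is a minimal Python sketch of these first two steps, using the Gibson figures above (the function name is my own shorthand, not anything standard):

```python
def pythagenpat_exponent(rpg: float) -> float:
    """Pythagenpat exponent for a given runs-per-game level (both teams combined)."""
    return rpg ** 0.29

# Gibson 1968: 49 runs in 304.2 IP (i.e., 304 2/3 innings); 1968 NL average RA of 3.42
gibson_ra = 49 * 9 / (304 + 2 / 3)   # ~1.45 runs per 9 innings
lg_ra = 3.42
a_ra = gibson_ra / lg_ra             # ~.423
x = pythagenpat_exponent(2 * lg_ra)  # ~1.75 (the 1.746 quoted above, up to rounding)
```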

Note that this simply uses the league scoring level to convert to wins; it does not take into account Gibson’s own performance. That would be an additional enhancement, but it would also strongly increase the distortion that comes from viewing a player as his own team, albeit less so for pitchers, especially those who, like Gibson, were essentially pitching nine innings per start.

So we could calculate a loss ratio as aRA^x, or .223 for Gibson. This means that a team with Gibson’s aRA in this environment would be expected to have .223 losses for every win (basic ratio transformations apply: the reciprocal would be the win ratio, the loss ratio divided by (1 + itself) would be a losing %, the complement of that would be the W%, etc.)
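Continuing the sketch in Python, the ratio bookkeeping looks like this (rounded inputs carried over from above):

```python
a_ra = 0.423   # Gibson's RA relative to the 1968 NL
x = 1.746      # Pythagenpat exponent for the 1968 NL (2 * 3.42 RPG)

loss_ratio = a_ra ** x                       # ~.223 losses per win
win_ratio = 1 / loss_ratio                   # ~4.49 wins per loss
losing_pct = loss_ratio / (1 + loss_ratio)   # ~.182
win_pct = 1 - losing_pct                     # ~.818
```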

At this point, many people would like to convert it to a W% and stop there, but I’d like to preserve the scale of a run average while reflecting the win impact. In order to do so, I need to select a Pythagorean exponent corresponding to a reference run environment to convert Gibson’s loss ratio back to an equivalent aRA for that run environment. For 1901-2015, the major league average RA was 4.427, which I’ll use as the reference environment; it corresponds to a 1.882 Pythagenpat exponent (there were actually 8.94 IP/G over this span, so the actual RPG is 8.937, which would give a 1.887 exponent--I'll stick with RA rather than RPG for this example since we are already using it to calculate aRA).

If we call that 1.882 exponent r, then the loss ratio can be converted back to an equivalent aRA by raising it to the (1/r) power. Of course, the loss ratio is just an interim step, and this is equivalent to:

aRA^(x*(1/r)) = aRA^(x/r) = waRA

waRA (excuse the acronyms, which are not intended to survive beyond this post) is win-Adjusted Run Average. For Gibson, it works out to .450, which illustrates how small the impact is. Pitching in one of the most extreme run environments in history, Gibson’s aRA is only 6.4% higher after adjusting for win impact.

In 1994, Greg Maddux allowed 44 runs in 202 innings for a run average of 1.96. Pitching in a league with a RA of 4.65, his aRA was .421, basically equal to Gibson. But his waRA was better, at .416, since the same run ratio leads to more wins in a higher scoring environment.
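Here is a sketch of the full waRA chain, reproducing the Gibson/Maddux comparison (again, the function and variable names are my own, and the results match the figures above only up to rounding):

```python
REF_EXPONENT = (2 * 4.427) ** 0.29   # r: reference exponent from the 1901-2015 ML average RA, ~1.882

def wa_ra(ra: float, lg_ra: float, ref_exp: float = REF_EXPONENT) -> float:
    """win-Adjusted Run Average: aRA re-expressed in the reference run environment."""
    a_ra = ra / lg_ra
    x = (2 * lg_ra) ** 0.29          # Pythagenpat exponent for the pitcher's own league
    return a_ra ** (x / ref_exp)

gibson_1968 = wa_ra(49 * 9 / (304 + 2 / 3), 3.42)   # ~.450
maddux_1994 = wa_ra(44 * 9 / 202, 4.65)             # ~.416
```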

It is my guess that consumers of sabermetrics will generally find this result unsatisfactory. There seems to be a commonly-held belief that it is easier to achieve a high ERA+ in a higher run scoring environment, but the result of this approach is the opposite--as RPG increases, the win impact of the same aRA increases as well. Of course, this approach says nothing about how “easy” it is to achieve a given aRA--it converts aRA to a win-value equivalent aRA in a reference run environment. It is possible that it is simultaneously “easier” to achieve a low aRA in a higher scoring environment and that the value of a low aRA is enhanced in a higher scoring environment. I am making no claim regarding the impressiveness or aesthetic value, etc. of any pitcher’s performance, only attempting to frame it in terms of win value.

Of course, the comparison between Gibson and Maddux need not stop there. I do believe that waRA shows us that Maddux’ rate of allowing runs was more valuable in context than Gibson’s, but there is more to value than the rate of allowing runs. Of course we could calculate a baselined metric like WAR to value the two seasons, but even if we limit ourselves to looking at rates, there is an additional consideration that can be added.

So far, I’ve simply used the league average to represent the run environment, but a pitcher has a large impact on the run environment through his own performance. If we want to take this into account, it would be inappropriate to simply use LgRA + pitcher’s RA as the new RPG to plug into Pythagenpat; we definitely need to consider the extent to which the pitcher’s teammates influence the run environment, since ultimately Gibson’s performance was converted into wins in the context of games played by the Cardinals, not a hypothetical all-Gibson team. So instead I will calculate a new RPG by assuming that the 18 innings in a game (to be more precise for a given context, two times the league average IP/G) are filled in at the pitcher’s RA for his IP/G and at the league’s RA for the remainder.

In the 1968 NL, the average IP/G was 9.03 and Gibson’s 304.2 IP came over 34 appearances (8.96 IP/G), so the new RPG is 8.96*1.45/9 + (2*9.03 - 8.96)*3.42/9 = 4.90 (rather than 6.84 previously). This converts to a Pythagenpat exponent of 1.59, and a pwaRA (personal win-Adjusted Run Average?) of .485. To spell that all out in a formula:

px = ((IP/G)*RA/9 + (2*Lg(IP/G) - IP/G)*LgRA/9) ^ .29

pwaRA = aRA^(px/r)
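As an illustrative Python sketch of pwaRA (the function name is again my own shorthand, and the inputs are the rounded Gibson figures from the text):

```python
REF_EXPONENT = (2 * 4.427) ** 0.29   # r, ~1.882

def pwa_ra(ra: float, ip_per_g: float, lg_ra: float, lg_ip_per_g: float,
           ref_exp: float = REF_EXPONENT) -> float:
    """personal win-Adjusted Run Average: like waRA, but the run environment blends
    the pitcher's own RA (for his innings) with the league RA (for the remainder)."""
    a_ra = ra / lg_ra
    rpg = ip_per_g * ra / 9 + (2 * lg_ip_per_g - ip_per_g) * lg_ra / 9
    px = rpg ** 0.29
    return a_ra ** (px / ref_exp)

gibson_1968 = pwa_ra(1.45, 8.96, 3.42, 9.03)   # ~.485
```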

Note that adjusting for the pitcher’s impact on the scoring context reduces the win impact of effective pitchers, because as discussed earlier, lowering the RPG lowers the Pythagenpat exponent and makes the same run ratio convert to fewer wins. In fact, considering the pitcher’s effect on the run environment in which he operates actually brings most starting pitchers’ pwaRA closer to league average than their aRA is.

pwaRA is divorced from any real sort of baseball meaning, though, because pitchers aren’t by themselves a team. Suppose we calculated pwaRA for two teammates in a 4.5 RA league. The starter pitches 6 innings and allows 2 runs; the reliever pitches 3 innings and allows 1. Both pitchers have an RA of 3.00, and thus identical aRAs (.667) and waRAs (.665). Furthermore, their team also has an RA of 3.00 for this game, and whether figured as a whole or as the weighted average of the two individuals, the team also has the same aRA and waRA.

However, if we calculate the starter’s pwaRA, we get .675, while the reliever is at .670. Meanwhile, the team has a pwaRA of .679, which makes this all seem quite counterintuitive. But since all three entities have the same RA, the lower the resulting run environment, the less win value that same RA carries on a per-inning basis.
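Applying the same pwaRA sketch to this hypothetical game (league RA of 4.5, with 9 IP/G assumed for the league and for the team as a whole):

```python
REF_EXPONENT = (2 * 4.427) ** 0.29   # r, ~1.882

def pwa_ra(ra, ip_per_g, lg_ra, lg_ip_per_g=9.0, ref_exp=REF_EXPONENT):
    a_ra = ra / lg_ra
    rpg = ip_per_g * ra / 9 + (2 * lg_ip_per_g - ip_per_g) * lg_ra / 9
    return a_ra ** (rpg ** 0.29 / ref_exp)

starter = pwa_ra(3.00, 6.0, 4.5)    # ~.675
reliever = pwa_ra(3.00, 3.0, 4.5)   # ~.670
team = pwa_ra(3.00, 9.0, 4.5)       # ~.679
```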

I hope this post serves as a demonstration of the difficulty of divorcing a pitcher’s value from the number of innings he pitched. Of course, the effects discussed here are very small, much smaller than the impact of other related differences, like the inherent statistical advantage pitchers gain from working in shorter stints, attempts to model differences in replacement level between starters and relievers, and attempts to detect/value any beneficial side effects of starters working deep into games.

One of my long-standing interests has been the proper rate stat to use to express a batter’s run contribution (I have been promising myself for almost as long as this blog has been in existence that I will write a series of posts explaining the various options for such a metric and the rationale for each, yet have failed to do so). I’ve never felt the same pull toward the question for pitchers, in part because the building block seems obvious: runs/out (which depending on how one defines terms can manifest itself as RA, ERA, component ERA, FIP-type metrics, etc.)

But while there are a few adjustments that can theoretically be made between a hitter’s overall performance expressed as a rate and a final value metric (like WAR), the adjustments (such as the hitter’s impact on his team’s run scoring beyond what the metric captures itself, and the secondary effect that follows on the run/win conversion) are quite minor in scale compared to similar adjustments for pitchers. While the pitcher (along with his fielders) can be thought of as embodying the entire team while he is in the game, that also means that said unit’s impact on the run/win conversion is significant. And while there are certainly cases of batters whose rates may be deceiving because of how they are deployed by their managers (particularly platooning), the additional playing time over which a rate is spread increases value in a WAR-like metric without any special adjustment. Pitchers’ roles and secondary effects thereof (like any potential value generated by “eating” innings) have a more significant (and more difficult to model) impact on value than the comparable effects for position players.
