If I tell you that three teams in the same league-season played the same number of games (113), and that one of them scored 679 runs, another scored 670, and the third scored 633, how confident would you be in using this limited data to rank the productivity of their offenses? As usual in this series, we are ignoring park factors and other contextual factors (like quality of opposition/not having to face one’s own pitching staff); since they are from the same league-season, you don’t need to worry about whether the win value of each team’s runs was the same. Assume also that runs will be distributed across games by a known distribution like Enby, so the distribution is also not a differentiator. Assume that we don’t care about any “luck”; the actual total is what matters, not what a run estimator came up with. What else do you need to know?

I would contend that given the (admittedly restrictive) parameters I’ve placed on the exercise, you now know almost everything you need to know. In a small number of cases, and to a small extent, you are missing valuable information – but for most situations, you should need no additional information.

Now suppose I told you something similar about three players: same league season, same number of games played (111), and three runs created estimates: one player created 106 runs, one 92, and one 88. Do you feel like you need any additional information to put these players in the proper order of offensive productivity?

I hope that your answer here is yes, and a lot of it. I’ve told you how many games each has played, but that doesn’t tell you how many opportunities they’ve had at the plate. Sure enough, in this case one of the players had substantially fewer plate appearances than the others (489, 490, and 451 respectively). Given that the player who created 92 runs had 39 more plate appearances than the player who created 88, it seems likely that the latter player was actually more productive on a rate basis.
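A quick check of the per-PA rates bears this out (the player labels A/B/C are mine, just to keep the article's figures straight):

```python
# Runs created and plate appearances for the three players from the article.
players = {
    "A": {"rc": 106, "pa": 489},
    "B": {"rc": 92, "pa": 490},
    "C": {"rc": 88, "pa": 451},
}

# Runs created per plate appearance -- a simple rate comparison.
rates = {name: p["rc"] / p["pa"] for name, p in players.items()}

# Player C (88 RC in 451 PA) edges out Player B (92 RC in 490 PA)
# on a per-PA basis despite the lower raw total.
```

On these numbers, C’s rate (~.195 runs per PA) beats B’s (~.188), even though B created more runs in total.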

I did not tell you how many plate appearances each of the three teams had in their 113 games; I don’t think it’s relevant to the question at hand, but the answer is 4493, 4611, and 4556 respectively. Why do we need to know plate appearances (or *something*) in the case of players, but not in the case of teams? Understanding this gets to the heart of the reason this series needs to exist at all: why applying the same rate stat to team offenses and player offenses may not work as intended.

In the previous installment, I asked the question: “Where do plate appearances come from?” The answer is that every inning (excluding walkoff situations) starts with three PAs guaranteed, and only by avoiding outs (reaching base and not being subsequently retired on the bases) can a team generate additional plate appearances.

From a team perspective, then, plate appearances are not an appropriate denominator for a rate stat, because differences in team plate appearances are the result of differences in performance between the teams. To return to the three teams discussed above, they are the 1994 Indians, Yankees, and White Sox respectively. The Indians had the fewest PA of the three yet scored the most runs. Does this mean that their offense, which already scored more runs than the other two clubs, was even more superior than the raw numbers would suggest?
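To see how R/PA would distort the picture, here are the article’s figures run both ways (team data as given above; the conclusion drawn in the comment is mine):

```python
# 1994 team totals from the article: runs, plate appearances, games.
teams = {
    "Indians":   {"r": 679, "pa": 4493, "g": 113},
    "Yankees":   {"r": 670, "pa": 4611, "g": 113},
    "White Sox": {"r": 633, "pa": 4556, "g": 113},
}

r_per_g  = {t: v["r"] / v["g"]  for t, v in teams.items()}
r_per_pa = {t: v["r"] / v["pa"] for t, v in teams.items()}

# Because Cleveland scored the most runs in the fewest PA, dividing by PA
# widens their apparent lead over New York beyond what R/G shows.
```

With equal games played, R/G says the Indians out-hit the Yankees by about 1.3%; R/PA stretches that edge to about 4%, effectively penalizing New York for generating extra plate appearances.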

An offense does not set out to maximize its plate appearances, nor does it set out to score the maximum number of runs it can in the minimum number of plate appearances. An offense sets out to maximize its total runs scored. Plate appearances are a function of the rate at which a team makes outs. At this point it might be helpful to return to the three teams.

New York’s OBA was 22 points higher than Cleveland’s, and thus they generated roughly one extra plate appearance per game. When ranking team offenses, it wouldn’t make sense to penalize the Yankees for this, which would be the case if we used R/PA. The difference in plate appearances simply reflects the different manner in which New York and Cleveland went about creating runs. For a team, plate appearances are inextricably linked with its OBA. Each inning, a team attempts to score as many runs as it possibly can before making three outs. It’s possible to score one run in a complete inning with as few as four or as many as seven plate appearances. Whether a team uses four, five, six, or seven plate appearances to score a single run is irrelevant in terms of that run’s impact on winning or losing the game (*). Thus outs, or an equivalent like innings, are the correct choice for the denominator of a team rate stat.

(*) I am speaking here simply about the direct impact of the runs scored and not any downstream effects or the predictive value of team performance. Perhaps the team that uses seven PA to score one run benefits by wearing down the opposing pitcher or is more likely to have success in the future because they had four of seven batters reach base compared to one in four for the team that only needed four PA. Here we’re just focused on the win value directly attributable to the run scored and not any secondary or predictive effects.

The fact that outs are fixed for each team each inning (ignoring walkoffs) means that outs are also fixed for each team each game (ignoring walkoffs, rainouts, extra innings, and foregone bottoms of the ninth). That in turn means outs are fixed for each team each season (ignoring those factors, plus cases in which teams don’t play out their full schedules or have to play tiebreakers), which means that R/G and the raw seasonal runs scored total are essentially equivalent to looking at R/O for a team. So for the question I asked at the beginning of the article, just knowing that the three teams had played an equal number of games, we had a pretty good idea how they would “truly” rank using R/O.
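The chain of approximations can be made concrete. Under the simplifying assumption stated above, that every game supplies a fixed 27 outs, runs per out is just runs per game divided by a constant, so it can never reorder teams that R/G has already ranked:

```python
def runs_per_out(runs, games, outs_per_game=27):
    """Approximate R/O under the assumption that each game supplies a
    fixed number of outs (27 by default, per the caveats in the text).
    Algebraically this is (R/G) / outs_per_game, so it preserves any
    ranking produced by R/G."""
    return runs / (games * outs_per_game)
```

For example, the 1994 Indians’ 679 runs in 113 games work out to about .223 runs per out, which is exactly their 6.01 R/G divided by 27; the ordering of the three teams is unchanged.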

For players, this is not at all the case, since even in an equal number of games, players will get different numbers of plate appearances for a variety of reasons (batting order position, the team’s OBA (remember, higher OBA teams will generate more PA), whether or not they play the full game), a fact that is intuitive to most baseball fans. What is less intuitive, though, is that even in the same number of plate appearances, players can make very different numbers of outs. Since we’ve already accepted that team OBA defines how many plate appearances a team will generate, it isn’t much of a leap to conclude that if we have two players who create the same number of runs (using a formula that doesn’t explicitly account for their impact on the team’s OBA) in the same number of plate appearances, the player who makes fewer outs was more productive when we consider the totality of their offensive contribution. Even though the two players were equally productive in their own plate appearances, the player who made fewer outs generated more plate appearances for his teammates, a second-order effect that needs to be considered when evaluating individual offensive contribution. For teams, the runs scored total already reflects this effect.
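To put a rough number on that second-order effect, consider a toy model (my assumption, not anything derived in the article) in which each plate appearance independently avoids an out with probability equal to the team’s OBA. In that model, each out a batter avoids obliges the team to send roughly 1/(1 − OBA) additional hitters to the plate before the inning’s outs are used up:

```python
def extra_team_pa(outs_avoided, team_oba):
    """Toy estimate of the extra plate appearances a team generates when
    a batter avoids making outs, assuming each subsequent PA reaches
    base (avoids an out) independently with probability team_oba.
    Each out avoided yields a geometric chain of 1/(1 - team_oba)
    additional PA on average."""
    return outs_avoided / (1 - team_oba)

# Hypothetical illustration: two players with identical runs created and
# PA, but one makes 15 fewer outs. With a .330 team OBA, that is worth
# roughly 22 extra plate appearances passed along to teammates.
```

So in this sketch, a player who makes 15 fewer outs than an otherwise identical teammate hands his club something like 22 additional plate appearances over the season, none of which show up in his own batting line.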

This would be an appropriate time to note that this series is focused on evaluating offenses, but of course every offensive metric can be reviewed in reverse as a defensive metric. However, since the obvious denominator for teams is outs, it is also the obvious denominator for individual pitchers. We don’t need to worry about a pitcher’s impact on his team’s plate appearances – when he is in the game, he is solely responsible (setting aside the question of how the team’s performance should be allocated between the pitcher and his fielders) for the number of plate appearances the opponent generates, and his goal is to record three outs while minimizing the number of runs he allows, regardless of how many opponents come to the plate. Outs are clearly the correct denominator for the rate stat, and innings pitched are nothing more than outs/3 (and even better, IP account for all outs, including many that don’t show up in the standard statistical categories).
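Since innings pitched are nothing more than outs/3, a conventional pitching line converts back to outs directly. A small helper (assuming the standard scoring convention in which “.1” and “.2” denote thirds of an inning, not tenths):

```python
def ip_to_outs(ip_str):
    """Convert an innings-pitched string to outs recorded, under the
    standard convention that the digit after the decimal point counts
    thirds of an inning: "6.2" means 6 full innings plus 2 outs."""
    innings, _, partial = ip_str.partition(".")
    return int(innings) * 3 + (int(partial) if partial else 0)
```

So a line of 6.2 IP represents 20 outs, and a nine-inning complete game represents 27, which is why a pitcher’s run average per inning (or per 9 innings) is already a rate stat with outs in the denominator.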

In thinking about the development of early baseball statistics and the legacy of those standard statistics on how the overwhelming majority of fans thought about baseball before the sabermetric revolution took hold, it is striking that the early statisticians understood these concepts as they applied to pitchers. When pitchers were completing almost all their starts, simple averages of earned runs allowed sufficed, for the same reason that team R/G tells you most everything you need to know. As complete games became rarer, ERA took hold, properly using innings in the denominator. For most of the twentieth century, and even post-sabermetric revolution, baseball fans have been conditioned to think about innings pitched as the denominator for all manner of pitching metrics – even those like strikeout and walk frequency for which plate appearances would make a much more logical denominator. (Of course, present-day sabermetrics has embraced metrics like K% and W% for pitchers, but the per-inning versions remain in use as well.)

The parallel development of offensive statistics resulted in the opposite phenomenon. While early box scores tracked “hands out” (essentially outs made) for individual batters, batting average eventually became the dominant statistic. Setting aside the issues with “at bats” – how they distorted people’s thinking and saddled us with the mouthful of “plate appearances” to describe the more fundamental quantity of the two – the standard batting statistics have conditioned fans to think about batting rates (walk rate, home run rate, etc.) in the correct manner (or one adjacent to being correct, depending on whether at bats or plate appearances are the denominator), but leave people struggling with how to properly express a batter’s overall productivity. Again, this is the opposite problem of how pitching statistics were traditionally constructed. One can imagine that it all might be very different had batting average taken the form of a hit/out ratio rather than hits/at bats.
