## Wednesday, September 29, 2021

### Rate Stat Series, pt. 13: Relativity for the Linear Weights Framework

Of the three frameworks for evaluating individual offense, linear weights offers the simplest calculation of runs created or RAA, but will be the hardest to convert to a win-equivalent rate – mentally if not computationally. In order to do this, we need to consider what our metrics actually represent and make our choices accordingly. The path that I am going to suggest is not inevitable – it makes sense to me, but there are certainly valid alternative paths.

In attempting to measure the win value of a batter’s performance in the linear weights framework, we could construct a theoretical team and measure his win impact on it. In so doing, one could argue that the batter’s tertiary impact (which would be ignored under such an approach) is immaterial, perhaps even illusory, and that the process of converting runs to wins is independent from the development of the run estimate. Thus we could use a static approach for estimating runs and a dynamic team approach for converting those runs to wins.

I would argue in turn that the most consistent approach is to continue to operate under the assumption that linear weights represents a batter’s impact on a team that is average once he is added to it, and thus not allow any dynamism in the runs to wins conversion. Since under this school of thought all teams are equal, whether we add Frank Thomas or Matt Walbeck, there is no need to account for how those players change the run environment and the run/win conversion – because they both ultimately operate in the same run environment.

One could argue that I am taking a puritanical viewpoint, and that this would become especially clear in a case in which one compared the final result of the linear weights framework to the final result of the theoretical team framework. As we’ve seen, RAA is very similar between the two approaches, but the run/win conversions will diverge more if in one case we ignore the batter’s impact on the run environment. In any event, the methodology we’ll use for the theoretical team framework will be applicable to linear weights as well, if you desire to use it.

Since we will not be modeling any dynamic impact of the batter upon the team’s run environment, it is an easy choice to start with RAA and convert it to wins above average (WAA) by dividing by a runs per win (RPW) value. An example of this is the rule of thumb that 10 runs = 1 win, so 50 RAA would be worth 5 WAA.

There are any number of methods by which we could calculate RPW, and a couple of philosophical paths to doing so. On the latter, I’m assuming that we want our RPW to represent the best estimate of the number of marginal runs it would take for a .500 team (or more precisely a team with R = RA) to earn a marginal win. Since I’ve presumed that Pythagenpat is the correct run to win conversion, the most consistent approach is to use the RPW implied by Pythagenpat, which is:

RPW = 2*RPG^(1 – z) where z is the Pythagenpat exponent

so when z = .29, RPW = 2*RPG^.71

For the 1966 AL, this produces 8.588 RPW and for the 1994 AL it is 10.584. So we can calculate LW_WAA = LW_RAA/RPW, and LW_WAA/PA seems like the natural choice for a rate stat:
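For anyone who wants to check the arithmetic, here is a minimal Python sketch of the Pythagenpat RPW calculation (the league R/G figures are the rounded values quoted above, so the results match to within rounding):

```python
# Pythagenpat-implied runs per win: RPW = 2 * RPG^(1 - z),
# where RPG is total runs per game for both teams and z is the Pythagenpat exponent.
def pythagenpat_rpw(rpg, z=0.29):
    return 2 * rpg ** (1 - z)

# League R/G (one team) doubled to get RPG for both teams:
rpw_1966 = pythagenpat_rpw(2 * 3.893)   # 1966 AL, ~8.588
rpw_1994 = pythagenpat_rpw(2 * 5.226)   # 1994 AL, ~10.584
```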

This tightens the gap between Robinson and Thomas as compared to a RAA/PA comparison, and since we’ve converted to wins, we can look at WAA/PA without having to worry about the underlying contextual differences (note: this is actually not true, but I’m going to pretend like it is for a little bit for the sake of the flow of this discussion).

There is another step we could take, which is to recognize that the Franks do influence the context in which their wins are earned, driving up their team’s RPGs and thus RPWs and thus their own WAAs. Again, I would contend that a theoretically pure linear weights framework assumes that the team is average after the player is added. Others would contend that by making that assertion I’m elevating individual tertiary offensive contributions to a completely unwarranted level of importance, ignoring a measurable effect of individual contribution because the methodology ignores an immaterial one. This is a perfectly fair critique, and so I will also show how we can adjust for the hitter’s impact on the team RPW in this step. Pete Palmer makes this adjustment as part of converting from Batting Runs (which is what I’m calling LW_RAA) to Batting Wins (what I’m calling LW_WAA), and far be it from me to argue too vociferously against Pete Palmer when it comes to a linear weights framework.

What Palmer would have you do next (conceptually as he uses a different RPW methodology) is take the batter’s RAA, divide by his games played, and add to RPG to get the RPG for an average team with the player in question added. It’s that simple because RPG already represents average runs scored per game by both teams and RAA already captures a batter’s primary and secondary contributions to his team’s offense. One benefit or drawback of this approach, depending on one’s perspective, is that unlike the theoretical team approach it is tethered to the player’s actual plate appearances/games. Using the theoretical team approach from this series, a batter always gets 1/9 of team PA. Under this approach, a batter’s real world team PA, place in the batting order, frequency of being removed from the game, etc. will have a slight impact on our estimate of his impact on an average team. We could also eschew using real games played, and instead use something like “team PA game equivalents”. For example, in the 1994 AL the average team had 38.354 PA/G; Thomas, with 508 PA, had the equivalent of 119.21 games for an average hitter getting 1/9 of an average team’s PA (508/38.354*9). I’ve used real games played, as Palmer did, in the examples that follow.

Applying the Palmerian approach to our RPW equation:

TmRPG = LgRPG + RAA/G

TmRPW = 2*TmRPG^.71

LW_WAA = LW_RAA/TmRPW
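This chain can be sketched in Python; the RAA and games figures below are hypothetical, made up purely for illustration, and only the formulas come from the text:

```python
# Palmer-style batting wins: adjust the run environment for the batter's own RAA,
# then convert RAA to WAA using the Pythagenpat-implied RPW.
def palmer_waa(lw_raa, games, lg_rpg, z=0.29):
    tm_rpg = lg_rpg + lw_raa / games      # TmRPG = LgRPG + RAA/G
    tm_rpw = 2 * tm_rpg ** (1 - z)        # TmRPW = 2 * TmRPG^.71
    return lw_raa / tm_rpw                # LW_WAA = LW_RAA / TmRPW

# Hypothetical batter: +50 RAA in 130 games in a 10.452 RPG league (1994 AL level)
waa = palmer_waa(50, 130, 10.452)
```

Because the batter raises TmRPG, his WAA comes out slightly lower than the naive RAA/LgRPW conversion would suggest, which is exactly the effect discussed above.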

For the Franks, we get:

The difference between Thomas and Robinson didn’t change much, but both lose WAA and WAA/PA due to their effect on the team’s run environment as each run is less valuable as more are scored.

I have used WAA/PA as a win-equivalent rate without providing any justification for doing so. In fact, there is very good theoretical reason for not doing so. One of the key underpinnings of all of our rate stats is that plate appearances are not fixed across contexts – they are a function of team OBA. Wins are fixed across contexts – always exactly one per game. Thus when we compare Robinson and Thomas, it’s not enough to simply look at WAA/PA; we also need to adjust for the league PA/G difference or else the denominator of our win-equivalent rate stat will distort the relativity we have so painstakingly tried to measure.

In the 1966 AL, teams averaged 36.606 PA/G; in 1994, it was 38.354. Imagine that we had two hitters from these leagues with identical WAA and PA. We don’t have to imagine it; in 1966 Norm Siebern had .847 WAA in 399 PA, and in 1994 Felix Jose had .845 WAA in 401 PA (I’m using Palmer-style WAA in this example). It seems that Siebern had a minuscule advantage over Jose. But while wins are fixed across contexts (one per game, regardless of the time and place), plate appearances are not. A batter using 401 PA in 1994 was taking a smaller share of the average PA than one taking 399 in 1966 (you might be yelling about the difference in total games played between the two leagues due to the strike, but remember that WAA already has taken into account the performance of an average player – whether over a 113 or 162 game team-season is irrelevant when comparing their WAA figures). In 1994, 401 PA represented 10.46 team games worth of PA; in 1966, 399 represented 10.90 worth. In fact, Siebern’s WAA rate was not higher than Jose’s; despite having two fewer PA, Siebern took a larger share of his team’s PA to contribute his .85 WAA than Jose did.
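The Siebern/Jose comparison can be reproduced in a few lines of Python, using the WAA and PA figures quoted above:

```python
# Convert a batter's PA to "team games worth of PA," then rate WAA per team game.
def team_games_of_pa(pa, lg_pa_per_g):
    return pa / lg_pa_per_g

siebern_g = team_games_of_pa(399, 36.606)   # 1966 AL: ~10.90 team games of PA
jose_g = team_games_of_pa(401, 38.354)      # 1994 AL: ~10.46 team games of PA

siebern_rate = 0.847 / siebern_g            # WAA per team game of PA
jose_rate = 0.845 / jose_g
# Despite two fewer PA, Siebern used more team games of PA, so Jose's rate is higher.
```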

If we do not make a correction of this type and just use WAA/PA, we will be suggesting that the hitters of 1966 were more productive on a win-equivalent rate basis than the hitters of 1994 (although this is difficult to prove as by definition the average player’s WAA/PA will be 0, regardless of the environment in which they played). I don’t want to get bogged down in this discussion too much, so I will point you here for a discussion focused just on this aspect of comparing across league-seasons.

There are a number of different ways you could adjust for this; the “team games of PA” approach I used would be one. The approach I will use is to pick a reference PA/G, similar to our reference Pythagenpat exponent from the last installment, and force everyone to this scale. For all seasons 1961-2019, the average PA/G is 37.359 which I will define as refPA/G. The average R/G is 4.415, so the average RPG is 8.83 and the refRPW is 9.390.

If we calculate:

adjWAA/PA = WAA/PA * Lg(PA/G)/ref(PA/G)

then we will have restated a hitter’s WAA rate in the reference environment. This is an option as our final linear weight win stat:

This increases Thomas’ edge over Robinson, while giving Jose a minuscule lead over Siebern. As a final rate stat, I find it a little unsatisfying for a few reasons:

1. while the ultimate objective of an offense is to contribute to wins, runs feels like a more appropriate unit

2. related to #1, wins compress the scale between hitters. There’s nothing wrong with this to the extent that it forces us to recognize that small differences between estimates fall squarely within the margin of error inherent to the exercise, but it makes quoting and understanding the figures more of a challenge.

3. WAA/PA, adjusted for PA context or not, is only differentially comparable; ideally we’d like to have a comparable ratio

My solution to this is to first convert adjusted WAA/PA to an adjusted RAA/PA, which takes care of objections #1 and 2, then to convert it to an adjusted R+/PA, which takes care of objection #3. At each stage we have a perfectly valid rate stat; it’s simply a matter of preference.

To do this seems simple (let’s not get too attached to this approach, which we’ll revisit in a future post):

adjRAA/PA = adjWAA/PA*refRPW (remember, by adjusting WAA/PA using the refPA/G, we’ve restated everything in the terms of the reference league)

adjR+/PA = adjRAA/PA + ref(R/PA)  (reference R/PA is .1182, which can be obtained by dividing the ref R/G by the ref PA/G)
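The full chain – WAA/PA, adjusted for PA context, converted to adjRAA/PA and then adjR+/PA – can be sketched in Python using the reference constants given above:

```python
# Reference environment (1961-2019 major league averages, from the text).
REF_PA_G = 37.359
REF_R_G = 4.415
REF_RPW = 2 * (2 * REF_R_G) ** 0.71   # ~9.390
REF_R_PA = REF_R_G / REF_PA_G         # ~.1182

def adj_r_plus_per_pa(waa_per_pa, lg_pa_g):
    # Restate WAA/PA in the reference PA/G context, then convert back to runs.
    adj_waa_pa = waa_per_pa * lg_pa_g / REF_PA_G   # adjWAA/PA
    adj_raa_pa = adj_waa_pa * REF_RPW              # adjRAA/PA = adjWAA/PA * refRPW
    return adj_raa_pa + REF_R_PA                   # adjR+/PA = adjRAA/PA + ref(R/PA)
```

As a sanity check, an average hitter (WAA/PA = 0) comes out at the reference R/PA of ~.1182, regardless of his league’s PA/G.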

We can also compute a relative adjusted R+/PA:

= ((RAA/PA)/TmRPW * Lg(PA/G)/ref(PA/G) * refRPW + Ref(R/PA))/Ref(R/PA)

= (RAA/PA)/TmRPW * Lg(PA/G)/ref(PA/G) * refRPW/ref(R/PA) + 1

I included raw R+/PA and its ratio to the league average (relative R+/PA) to compare to this final relative adjusted R+/PA. For three of the players, the differences are small; it is only Frank Robinson whose standing is significantly diminished. This may seem counterintuitive, but remember that the more ordinary hitters have a much smaller impact on RPW than the Franks. Relative to Thomas, Robinson gets more of a boost from his lower RPW (his team RPW was 19% lower than Thomas’) than Thomas does from the PA adjustment (the 1994 AL had 4.8% more PA/G than the 1966 AL).

We could also return to the puritan approach (which I actually stubbornly favor for the linear weights framework) and make these adjustments as well. The equations are the same as above except where we use TmRPW, we will instead use LgRPW – reverting to assuming that the batter has no impact on the run/win conversion.

Here the impact on the Franks is similar; both are hurt of course when we consider their impact on TmRPW. Next time, we will quit messing around with half-measures – no more mixing linear run contributions with dynamic run/win converters. We’re going full theoretical team.

## Wednesday, September 15, 2021

### Rate Stat Series, pt. 12: Relativity for the Player as a Team Framework

I have now covered all of the ground I wished to cover regarding the construction of rate stats for various frameworks for evaluating individual offense, so I think this would be a good point to take stock of the conclusions I have drawn. From here on out, I will write about these tenets as if they are inviolable, which is not actually the case (“sound” and “well-reasoned to me” are as far as I’m willing to go), but I’m moving on to other topics:

* For team offense, Runs/Out is the only proper rate stat

* If evaluating individuals as teams (i.e. applying a dynamic run estimator like Runs Created or Base Runs directly to an individual’s statistics), Runs/Out is also the proper rate stat. Any other choice breaks consistency with treating the individual as a team, and consistency is the only thing going for such an approach. On a related note, don’t treat individuals as teams.

* If using a linear weights framework, the proper rate stat is RAA/PA, R+/PA, or a restatement of those (wOBA is the one commonly used). These metrics all produce consistent rank orders and can be easily restated from one another. Any of them is a valid choice at the user’s discretion, with the only objective distinction being whether you want ratio and differential comparability (R+/PA), only care about differential comparability (R+/PA or RAA/PA), or would prefer a different scale altogether at the sacrifice of direct comparisons without further modifications (wOBA).

* If using a theoretical team framework, R+/O+ and R+/PA+ are equivalent, with the user needing to decide whether they want to think about individual performance in terms of R/O or R/PA.

Throughout this series, I have not worried about context, and originally envisioned a final installment that would briefly discuss some of the issues with comparing players’ rates across league-seasons. In putting it together, I decided that it will take a few installments to do this properly. It will also take another league-season to pair with the 1994 AL in order to make cross-context comparisons.

I have chosen the 1966 AL to fill this role. In the expansion era (1961 – 2019), the AL has averaged 4.52 R/G. 1994 was third-highest at 5.23, 15.6% higher than average. The league closest to being its inverse relative to the average is the 1966 AL (3.89 R/G, 13.8% lower than average). The obvious choice would have been 1968, since it was the most extreme, but as 1994 was not the most extreme I thought a league that was similarly low-scoring would be more appropriate.

The other reason I like the 1966 AL for this purpose is that, just as was the case in 1994, the superior offensive player in the league was a future Hall of Fame slugger named Frank. In making comparisons across the twenty-eight years and (more importantly) 1.34 R/G that separate these two league-seasons, the two Franks will serve as our primary reference points.

To look at the 1966 AL we will first need to define our runs created formulas. I will not repeat the explanations of how these are calculated – I used the exact same approach as for the 1994 AL, which is demonstrated primarily in parts 1, 9, and 10.

Base Runs:

A = H + W – HR = S + D + T + W

B = (2TB - H – 4HR + .05W)*.79920 = .7992S + 2.3976D + 3.996T + 2.3976HR + .0400W

C = AB – H = Outs

D = HR

BsR = (A*B)/(B + C) + D
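The Base Runs equations above translate directly into code. As a consistency check, the sketch below implements both the expanded per-event form and the raw-stat form; for any hypothetical stat line (the one in the comments is made up) they agree to within the rounding of the published weights:

```python
# Base Runs in its expanded per-event form (coefficients from the formulas above).
def base_runs(s, d, t, hr, w, outs):
    a = s + d + t + w                                         # A = H + W - HR
    b = 0.7992*s + 2.3976*d + 3.996*t + 2.3976*hr + 0.0400*w  # B (rounded weights)
    return a * b / (b + outs) + hr                            # BsR = A*B/(B+C) + D

# The same estimator from the raw-stat form, for a consistency check.
def base_runs_raw(h, tb, hr, w, ab):
    a = h + w - hr
    b = (2*tb - h - 4*hr + 0.05*w) * 0.7992
    return a * b / (b + (ab - h)) + hr

# Hypothetical line: 100 S, 30 D, 5 T, 20 HR, 60 W in 500 AB (155 H, 255 TB, 345 outs)
bsr_expanded = base_runs(100, 30, 5, 20, 60, 345)
bsr_raw = base_runs_raw(155, 255, 20, 60, 500)
```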

Linear Weights:

LW_RC = .4560S + .7751D + 1.0942T + 1.4787HR + .3044W - .0841(outs)

LW_RAA = .4560S + .7751D + 1.0942T + 1.4787HR + .3044W - .2369(outs)

wOBA = (.8888S + 1.2981D + 1.7075T + 2.2006HR + .6944W)/PA
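The linear weights formulas are simple enough to code directly. One built-in sanity check, sketched below with a hypothetical stat line: LW_RC and LW_RAA differ only in the out weight, and the gap (.2369 - .0841 = .1528) is the 1966 AL league R/O, as expected when shifting an absolute run estimate to a runs-above-average baseline:

```python
def lw_rc(s, d, t, hr, w, outs):
    return .4560*s + .7751*d + 1.0942*t + 1.4787*hr + .3044*w - .0841*outs

def lw_raa(s, d, t, hr, w, outs):
    return .4560*s + .7751*d + 1.0942*t + 1.4787*hr + .3044*w - .2369*outs

def woba(s, d, t, hr, w, pa):
    return (.8888*s + 1.2981*d + 1.7075*t + 2.2006*hr + .6944*w) / pa

# For any stat line, LW_RC - LW_RAA = .1528 * outs (the league R/O times outs).
rc = lw_rc(100, 30, 5, 20, 60, 345)
raa = lw_raa(100, 30, 5, 20, 60, 345)
```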

Theoretical Team:

TT_BsR = (A + 2.246PA)*(B + 2.346PA)/(B + C + 7.915PA) + D - .6658PA

TT_BsRP = ((A + 2.246PA)*(B + 2.346PA)/(B + C + 7.915PA) + D + .185PA)*PAR – .8519PA

TT_RAA = ((A + 2.246PA)*(B + 2.346PA)/(B + C + 7.915PA) + D + .185PA)*PAR – .9572PA
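And the theoretical team formulas, coded as given (1966 AL constants); PAR, the plate appearance ratio from earlier installments, is simply passed in as an input here:

```python
def tt_bsr(a, b, c, d, pa):
    return (a + 2.246*pa) * (b + 2.346*pa) / (b + c + 7.915*pa) + d - 0.6658*pa

def tt_bsrp(a, b, c, d, pa, par):
    return ((a + 2.246*pa) * (b + 2.346*pa) / (b + c + 7.915*pa)
            + d + 0.185*pa) * par - 0.8519*pa

def tt_raa(a, b, c, d, pa, par):
    return ((a + 2.246*pa) * (b + 2.346*pa) / (b + c + 7.915*pa)
            + d + 0.185*pa) * par - 0.9572*pa
```

Note that TT_BsRP and TT_RAA share the same leading term, so they differ by exactly (.9572 - .8519)*PA = .1053*PA regardless of PAR.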

Using these equations, let’s look at the leaderboards for the key stats for each framework. First, for the player as a team:

Of course, we’ll get slightly different results from the different frameworks, but two things should be obvious: Robinson was the best offensive player in the league (his lead over Mantle in second place is bigger than the gap from Mantle to ninth-place Curt Blefary, and he was also among the leaders in PA), and the difference in league offensive levels carried through to individuals.

Next for the linear weights framework:

And finally for the theoretical team framework:

Now comes the hard part – how do we compare these performances from the 1966 AL against those from the 1994 AL? It should be implied, but in all comparisons that follow, I am only concerned about the value of a given player’s contributions relative to the context in which he played; this is not about whether/to what extent the overall quality of play differed between 1966 and 1994, or about how the existence of pitcher hitting in 1966 but not in 1994 impacted the league average, etc. I also am not going all the way in accounting for context, as I’ve still done nothing to park-adjust. Park adjustments are important, but whether you adjust or not is irrelevant to the question of which rate stat you should use. I’m also not interested in how “impressive” a given rate is due to the variation between players (e.g. z-scores) – I’m simply trying to quantify the win value of the runs a player has contributed.

The obvious first step to compare players across two league-seasons is to compare the difference or ratio of their rate to the league rate. As we’ve discussed throughout this series, some stats can be compared using both differences and ratios, but others can only be compared (at least meaningfully) with one or the other, and others still can be compared but the ratio or difference no longer has any baseball meaning unless some transformation is carried out (wOBA is the most prominent example we’ve touched on).

The table below shows the rates for our two Franks, the respective league rate, and the difference and ratio (if applicable) for each of the key rate stats we’ve looked at:

One of the other reasons I was drawn to the pairing of these two league-seasons is how close Thomas and Robinson are in the key metrics when compared to the league using a ratio. Thomas has a slight advantage in each metric, except for BsR/O, where the flaw of treating an individual as a team can be seen. Playing in a high offense context, Thomas’ crazy rates (to put it in familiar terms rather than the run-unit stats used in this series, Thomas hit .353/.492/.729 to Robinson’s .316/.406/.637) result in a large estimated tertiary contribution when treating him as his own team. Expressing these metrics as ratios hammers home that even for extreme performers (defining “extreme” within the usual confines of a major league season), there isn’t much difference between using the linear weights and theoretical team frameworks, and both are reasonable choices of framework. Treating an individual as a team, not so much.

We’ve expressed each player’s rate relative to the league average, but we haven’t answered the question of which construct is appropriate – the difference or the ratio, or does the answer differ depending on the metric? And how can we make that determination? In lieu of some guiding principle, it is just a matter of personal preference. Most people are drawn to ratios, and the widespread adoption of metrics like ERA+, ERA-, OPS+, and wRC+ speaks to that preference. This case illustrates one of the reasons why ratios make sense intuitively – while the Franks are very close when comparing ratios, Thomas has a healthy lead when comparing differences as the league average R/PA and R/O for 1994 are much higher than the same for 1966. It seems intuitive that a batter will be able to exceed the league average R/O by a larger difference when it is .2075 rather than .1528.

Still, in order to really understand how to compare players across contexts, and ensure that our intuition is grounded in reality, we need to think about how runs relate to wins. All of the metrics in that table which I consider the key metrics discussed in this series have been expressed in runs, and this is a natural starting point as the goal of an offense is to score runs. Of course the ultimate reason why an offense wants to score runs is so that they might contribute to wins.

I will break one of the rules I’ve tried to adhere to by begging the question and asserting that Pythagenpat and associated constructs are the correct way to convert runs to wins. Of course it is just a model, and while it is a good one, it is not perfect, but I think it is the best choice given its accuracy and its seemingly reasonable results for extreme situations.

Using Pythagorean also lends itself to adoption of a ratio rate stat, as one way to express the Pythagorean Theorem is that the expected ratio of wins to losses is equal to the ratio of runs to runs allowed raised to some power. If we start by thinking about the team/player as team framework for evaluating offense, it is natural to construct a win-unit rate stat by treating the team’s R/O as the numerator and the league average as the denominator. The ratio of these two, raised to some power, is then the equivalent win ratio that results. This is exactly the approach that Bill James took in his early Baseball Abstracts.

We can easily use this relationship to demonstrate that a simple difference of R/O does not capture the differences in win values between players. If one player exceeds the league average R/O by 10% and another 50%, then assuming a Pythagorean exponent of 2, player A will have a “win ratio” of 1.21 and player B will have a win ratio of 2.25. Even if we convert those to winning percentages (.5475 and .6923), the difference between the two player’s R/O does not capture the win value difference.

The ratio doesn’t either, of course, since it would need to be squared, but if we assume a constant Pythagorean exponent like 2, this is simply a matter of scaling, with no impact on the rank order of players. However, if we use a custom Pythagorean exponent, this assumption breaks down, as we can illustrate by comparing the two Franks. Since we’re starting with the assumption that Pythagenpat is correct, this means that we will always need to make some kind of adjustment in order to convert our run rate into an equivalent win rate.

Since we are treating the players as teams, the consistent approach is to first calculate the RPG for each player as a team, then calculate the Pythagenpat exponent for the situation, then convert their relative BsR/O to an equivalent win value. The league R/G for the 1966 AL was 3.893 with 25.482 O/G, while in the 1994 AL it was 5.226 with 25.188 O/G.

T_RPG = LgR/G + BsR/O*LgO/G

x = T_RPG^.29 (for a Pythagenpat z value of .29)

Relative BsR/O = (BsR/O)/(LgBsR/O)

Win Ratio = (Relative BsR/O)^x

Offensive Winning Percentage = Win Ratio/(Win Ratio + 1) = (BsR/O)^x/((BsR/O)^x + (LgBsR/O)^x)

Here, we conclude that just looking at Relative BsR/O understates Thomas’ superiority, as Pythagenpat estimates more wins for a given run ratio when the RPG increases (e.g. a run ratio of 8/4 will produce more wins than a run ratio of 7/3.5).

If we were actually going to use this framework, we could leave our final relative rate stat in the form of a win ratio or OW%, but I would prefer to convert it back to a run rate. Even within the conceit of the player as a team framework, it’s hard to know what to do with an OW% (or the even less familiar win ratio). We can say that Thomas’ OW% of .903 means that a team that hit like Thomas and had league average defense (runs allowed) would be expected to have a .903 W%, but even if you follow the player as team framework, you would probably like to have a result that can be more easily tied back to the player’s performance as the member of a team, not what record the Yuma Mutant Clone Franks would have. One way to return the win ratio to a more familiar scale is to convert it back to a run ratio.

Let’s define “r” to be the Pythagenpat exponent for some reference context, which I will define to be 8.83 RPG – the major league average for the expansion era (1961 – 2019). We can then easily convert our estimated win ratios for the Franks to run ratios that would produce the same estimated win ratio in the reference (8.83 RPG) environment.

The calculation is simply (Win Ratio)^(1/r). Since r = 8.83^.29 = 1.881, this becomes Win Ratio^.5316, which produces 2.614 for Robinson and 3.282 for Thomas. Following the time-honored custom of dropping the decimal place, we wind up with a win-equivalent relative BsR/O of 261 for Robinson and 328 for Thomas, compared to their initial values of 217 and 260 respectively. These both increased because both players would radically alter their run environments, increasing the win value of their relative BsR/O. I would demonstrate the lesser impact on more typical performers if I cared about this framework beyond being thorough.
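The whole player-as-team chain can be sketched in Python. Thomas’ relative BsR/O of ~2.60 and the 1994 league context are taken from the text, and the sketch reproduces his ~.903 OW% and win-equivalent relative BsR/O of ~3.28 to within rounding:

```python
def win_equivalent(rel_bsr_o, lg_r_g, lg_o_g, ref_rpg=8.83, z=0.29):
    lg_bsr_o = lg_r_g / lg_o_g
    t_rpg = lg_r_g + (rel_bsr_o * lg_bsr_o) * lg_o_g   # T_RPG = LgR/G + BsR/O*LgO/G
    x = t_rpg ** z                                     # player-as-team Pythagenpat exponent
    win_ratio = rel_bsr_o ** x                         # (Relative BsR/O)^x
    ow_pct = win_ratio / (win_ratio + 1)               # Offensive Winning Percentage
    r = ref_rpg ** z                                   # reference exponent, ~1.881
    return win_ratio ** (1 / r), ow_pct                # win-equivalent relative BsR/O, OW%

thomas_rel, thomas_ow = win_equivalent(2.60, 5.226, 25.188)   # 1994 AL context
```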

While I do think that this example demonstrates that just looking at a ratio of R/O does not appropriately capture the win impact between players, the individual OW% approach goes many steps too far, but it is the logical conclusion of the player as a team methodology. If you were on that train until we got to the end of the line, I’d encourage you to consider jumping off at one of the earlier stations (I’d prefer you get on the green or red linear weights or theoretical team lines rather than ever embarking on the player as team blue line). Next time, I’ll explore those paths to win-unit rate stats using those frameworks.

## Wednesday, September 01, 2021

### Rate Stat Series, pt. 11: Rate Stats for the Theoretical Team Framework II

Once PAR has been incorporated, it should be clear that a different approach will be needed as our run estimate already includes the batter’s secondary contribution – it is starting out on a similar basis to wRC.  We also can fall back on our necessary test for a rate stat – it must produce the same RAA as the RAA produced by the full implementation of the framework we are looking at.

To argue why this should be the gateway criterion in a slightly different way than I did previously, consider the place of RAA in the theoretical team framework. The theoretical team framework is an attempt to value the batter by estimating the difference between the runs scored for a team on which he accumulates an even share of the plate appearances, and the runs scored for a team on which he does not play. While we have constructed an absolute estimate of runs created using this approach, it inherently is screaming out for a marginal approach – taking the difference between the team with the player and the team without the player. That is exactly what RAA is supposed to represent – and by calculating TT_RAA, we have (to the best of our ability) captured the batter’s primary, secondary, and tertiary contributions. TT_RAA is the marginal impact of the player on a team, and any rate stat that starts by using TT_BsRP should produce the same RAA figure as TT_RAA.

So let’s look at the key figures for our hitters:

Here, “RAA” is what we calculated above, based on TT_BsR/O or TT_BsR+/PA. As you can see, it does not match TT_RAA. If it did, we wouldn’t need to worry about a special set of rate stats for the TT framework – we could just piggyback on the same methodology used for linear weights. It’s clear that we need to apply something different to TT_BsRP when we use the full-blown theoretical team approach including PAR. I would also argue that this is evidence that the theoretical team approach should not be viewed simply as an alternative path to using linear weights, but rather a third unique framework for evaluating a batter’s contribution.

I would now go through a tortured explanation of the logic behind finding a denominator that achieves the desired result, but there is no need – David Smyth developed the answer and explained it clearly and concisely in a June 21, 2001 FanHome post:

On the subject of a rate stat such as R+/PA or R+/O, etc....

The whole idea behind this method is to compute the impact of a player on a theoretical team. On the team level (even a theoretical one), impact is in terms of runs and outs. The R+ generated by the procedure on this thread [NOTE: R+ is theoretically equivalent to what I am calling TT_BsRP] tells us the difference in runs between the theoretical team without the indicated batter (that is, a team of 8 average hitters), and the theoretical team with the indicated batter added. We can also compute the difference in the OUTS between these two teams, and use that total as the rate stat denominator. Call it O+.

And it's easy to calculate. You simply multiply the batter PA by the out percentage (1-OBA) of the reference team...It seems to me that the preferred rate stat for the R+ framework is not R+/PA nor R+/O; it's R+/O+.

Frank Thomas had 149.9 TT_BsRP in 508 PA. The reference team had a .3433 OBA (same as the league average). Thus, his TT_BsRP/O+ (a specific implementation of Smyth’s R+/O+) would be:

O+ = PA*(1 – LgOBA) = 508*(1 - .3433) = 333.6

TT_BsRP/O+ = 149.9/333.6 = .4494

What does this represent? It is not easy to explain it in a sentence, but I will try: Frank Thomas contributed .4494 runs to the theoretical team’s offense for each of its outs distributed to him. When Smyth said that PA*(1 – RefOBA) was equal to the difference in the team’s outs, he was referring to the theoretical team construct, in which the difference in team plate appearances is the batter’s own PA, since in the left hand term (T_BsR) the team has 9*PA, and in the right hand term (R_BsR), the team has 8*PA. The difference in outs is this difference (1*PA) times the team’s out rate.

In this sense, O+ can be thought of in terms of “freezing team outs”. We know that for a team (excluding the list of complicating circumstances like rainouts, foregone batting in the bottom of the ninth, and walkoffs), outs are a fixed quantity. Regardless of whether we add Frank Thomas or Matt Walbeck to a reference team, the team’s outs are fixed. In the TT/PAR approach, we start by capturing the batter in question’s secondary contribution through the PAR multiplier, but we never directly change the number of PA we start from. Thus, Thomas’ 508 PA and Walbeck’s 355 PA will turn into outs at the same rate for the purpose of the calculation. In reality, Thomas’ team will compile more PA thanks to his greater secondary contribution, but the equation handles this with a multiplier, freezing the original value.

You may not find that explanation convincing – I am struggling to articulate a concept that I may be fooling myself into believing I understand. You might be more convinced by the demonstration of what Thomas’ RAA is under this approach:

TT_RAA = (TT_BSRP/O+ - LgR/O)*O+

which for Thomas = (.4494 - .2075)*333.6 = 80.7 which is the same as his TT_RAA
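In code, the Thomas example above is just a few lines (all figures as given in the post):

```python
# Smyth's O+ and R+/O+ for the 1994 Frank Thomas example.
pa, lg_oba, tt_bsrp_thomas, lg_r_o = 508, 0.3433, 149.9, 0.2075

o_plus = pa * (1 - lg_oba)                  # O+ = PA*(1 - LgOBA), ~333.6
r_plus_rate = tt_bsrp_thomas / o_plus       # R+/O+, ~.4494
tt_raa = (r_plus_rate - lg_r_o) * o_plus    # ~80.7, matching TT_RAA
```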

In fact, I jumped the gun by calling it TT_RAA before I proved it was equal to what we previously called TT_RAA. It is in fact:

This leaves unanswered the question of what the rate stat using TT_RAA should be. Similar to how we had R+/PA and RAA/PA when working with linear weights, we should have an appropriate rate stat for the RAA figure. By manipulating the TT_RAA equation, we can see that TT_RAA/O+ should be consistent with R+/O+:

TT_RAA = (R+/O+ - LgR/O)*O+

so divide both sides by O+ to get:

TT_RAA/O+ = R+/O+ - LgR/O

Thus TT_RAA/O+ and R+/O+ are analogous to RAA/PA and R+/PA, except that the difference is LgR/O rather than LgR/PA.

So we’re done, right? Or is there something vaguely unsatisfying about all this? After an entire series in which I argued that R/O was a team measure and not an individual one, does it bother you that we have left our rate stat for an individual in the form of R/O?

On one hand, it absolutely shouldn’t – R/O is the correct choice of rate stat for a team, and in this case we have modeled the player’s impact on the team and its R/O, and so there’s nothing wrong with expressing a final result in terms of R/O. The problem was not with R/O per se – it was with the inputs that we were putting in for individual batters. Additionally, R+/O+ allows our team and individual rate stats to converge. Team R+/O+ will reduce to team R/O if we accept the premise that the proper “reference team” for a team is itself, and that its R+ is equal to its actual runs scored, since actual runs scored obviously accounts for the team’s primary, secondary, and tertiary offensive actions. Not to mention any quaternary actions we could dream up or the fact that using those terms doesn’t even make sense when talking about teams.

On the other hand, O+ is not in any way an intuitive metric – just re-read my tortured explanation of what it represents. A batter’s R+/O+ can be contextualized in any number of meaningful ways, just as regular old R/O can, but I’m not sure even those presentations (a la Runs Created/25.2 Outs) are truly relatable to a player’s performance except as a scaling device.

There is a very simple and equivalent alternative, though, and you may have already noticed what it might be from the formulas. We defined O+ as PA*(1 – LgOBA), which is really just plate appearances times a constant. If we just divide by that constant (1 – LgOBA), we can restate everything on the basis of PA, with no loss of ratio comparability.

Doing this, we now have:

(TT_BsRP/PA – LgR/PA)*PA = TT_RAA

and TT_BsRP/PA = R+/O+ * (1 – LgOBA)

R+/PA+ = TT_BsRP/PA

I am going to call TT_BsRP/PA “R+/PA+” even though PA+ is just equal to PA. I’m doing this primarily for ease of discussion, so that R+/PA+ will represent the theoretical team calculation, while R+/PA will represent the linear weights equivalent – and they are equivalent (which also means that R+/O+ could be applied to the linear weights framework – more on this in a later installment). My other justification is that in the theoretical team framework, the actual number of plate appearances we plug in is not important when dealing with a rate stat, as we are always defining the reference team’s PA to be eight times the batter’s PA. We use the batter’s PA because it allows estimates like TT_BsR to reflect his actual playing time, but if all we care about is the rate at the end, we could use 1 plate appearance or 650 or any number we wanted. Thus I prefer to think of this quantity of plate appearances as the player’s share of the reference team’s PA rather than as really being tied to his own, and feel justified in distinguishing it through the abbreviation PA+.

This approach to developing a rate stat for the TT framework can be thought of as “freezing plate appearances,” as compared to the “freezing outs” approach of R+/O+. I first became aware of it through a FanHome post by David Smyth (surprise) in 2007. By then it had been over five years since Smyth had published the R+/O+ methodology and I had adopted it myself, so by the time we discussed what I am calling R+/PA+, I was in the interesting position of advocating one Smyth construct over the newer one. We quickly verified their equivalence and left it there, which is the position I’m taking now.

By definition, R+/O+ and R+/PA+ are perfectly correlated, since the only difference between the two is multiplication by a constant. One allows us to express a rate stat in terms of R/O, which is consistent with how we would state a team rate stat; the other allows us to express it in terms of R/PA, which is how we would state a rate stat from the linear weights framework. Both can be compared using ratios, which will be equivalent; both can be compared using differences, with the question of what the denominator for that difference should be left up to the user. Both denominators can also be used with RAA, as RAA+/O+ or RAA+/PA+ (using RAA+ to refer to RAA calculated within the full-blown theoretical team framework), and since R+/O+ and R+/PA+ can both be used to calculate RAA, they will also be consistent with RAA+ per O+ or PA+.
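Since R+/PA+ is just R+/O+ multiplied by the constant (1 – LgOBA), ratio comparisons come out the same under either denominator. A minimal Python sketch illustrates this, using Thomas’ .4494 R+/O+ and a hypothetical second batter at .3000 (the second rate is made up for illustration):

```python
# Ratio comparisons are identical under either denominator, because
# converting from the O+ basis to the PA basis multiplies every batter's
# rate by the same constant, (1 - LgOBA).
lg_oba = 0.3433
k = 1 - lg_oba

r_o_thomas = 0.4494   # Thomas' R+/O+ (from the text)
r_o_other = 0.3000    # a hypothetical second batter's R+/O+

ratio_o_basis = r_o_thomas / r_o_other
ratio_pa_basis = (r_o_thomas * k) / (r_o_other * k)

print(abs(ratio_o_basis - ratio_pa_basis) < 1e-12)  # True
```

The same cancellation is why differences, unlike ratios, do depend on which denominator you choose.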

I’ll close with a sample calculation. Frank Thomas had 149.9 TT_BsRP in 508 PA, which is .2951 R+/PA+. The league average R/PA was .1363, so Thomas’ TT_RAA was (.2951 - .1363)*508 = 80.7, the same as calculated previously using R+/O+ or directly from the original TT_RAA formula. The league OBA was .3433, so Thomas’ R+/O+ is equal to .2951/(1 - .3433) = .4494, as calculated previously.
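The closing example can also be sketched end-to-end in a few lines of Python; all inputs are the figures given in the text, and the variable names are my own:

```python
# End-to-end version of the sample calculation for Frank Thomas.
tt_bsrp = 149.9       # Thomas' theoretical team run estimate (TT_BsRP)
pa = 508              # Thomas' plate appearances
lg_r_per_pa = 0.1363  # league runs per plate appearance
lg_oba = 0.3433       # league on base average

r_plus_pa_plus = tt_bsrp / pa                    # .2951
tt_raa = (r_plus_pa_plus - lg_r_per_pa) * pa     # 80.7 when rounded
r_plus_o_plus = r_plus_pa_plus / (1 - lg_oba)    # ~.4494 (tiny differences
                                                 # from the text's rounding)

print(round(r_plus_pa_plus, 4), round(tt_raa, 1), round(r_plus_o_plus, 4))
```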