Note: This is largely the same explanation as last year; the only significant change is that I switched from FIP to a Base Runs-centric DIPS. Admittedly, this is completely unnecessary, but I decided to use BsR where I could to back up my advocacy for it. It certainly doesn’t hurt, but it is needlessly complicated for that (DIPS) application. I have also added an "R" column, which is for rookies. I based this on the list of choices for the IBA Rookie of the Year listed at Baseball Prospectus. I can't guarantee that I marked every rookie (I tried), but I believe I got all the serious ROY hopefuls at the very least.

For the past several years I have been posting Excel spreadsheets with sabermetric stats like RC for regular players on my website. I have not been doing this because I think it is a unique thing that nobody else does--Hardball Times, Baseball Prospectus, and other sites have similar data available. However, since I figure my own stats for myself anyway, I figured I might as well post it on the net.

This year, I am not putting out Excel spreadsheets, but I will have Google Spreadsheets that I will link to from both this blog and my site. What I wanted to do here is a quick rundown of the methodology used. These will be added as they are completed; as I post this, there are none, but by the end of the week they should start popping up.

First, I should acknowledge that the primary data source is Doug’s Stats, and that park data for past seasons comes from KJOK’s park database. Baseball-Reference.com and ESPN.com round out the sources.

The general philosophy of these stats is to do what is easiest while not being too imprecise, unless you can do something just a little bit more complex and be more precise. Or at least it used to be. Then I decided to put my money where my mouth was on the matter of Base Runs for pitchers and teams and Pythagenpat. On the other hand, using ERP as the run estimator is not optimal--I could, in lieu of having empirical linear weights for 2007, use Base Runs or another approach to generate custom linear weights. I have decided that does not constitute a worthwhile improvement. Others might disagree, and that’s alright. I’m not claiming that any of these numbers are the state of the art or cannot be improved upon.

First, the team report. I list Park Factor (PF), Winning %, Expected Winning % (EW%), Predicted Winning % (PW%), Wins, Losses, Runs, Runs Allowed, Runs Created (RC), Runs Created Allowed (RCA), Runs/Game (R/G), Runs Allowed/Game (RA/G), Runs Created per Game (RCG), and Runs Created Allowed per Game (RCAG):

EW% is based on runs and runs allowed in Pythagenpat, with the exponent = RPG^.29. PW% is based on runs created and runs created allowed in Pythagenpat.
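As a sketch (Python, with runs, runs allowed, and games as the inputs), the Pythagenpat calculation looks like this:

```python
def pythagenpat(runs, runs_allowed, games):
    """Pythagorean W% with exponent x = RPG^.29 (Pythagenpat)."""
    rpg = (runs + runs_allowed) / games
    x = rpg ** .29
    return runs ** x / (runs ** x + runs_allowed ** x)
```

Feed it RC and RCA instead of actual runs and runs allowed and you get PW%.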

Runs Created and Runs Created Allowed are both based on a simple Base Runs formula. For the offense, the formula is:

A = H + W - HR - CS

B = (2TB - H - 4HR + .05W + 1.5SB)*.76

C = AB - H

D = HR

For the defense:

A = H + W - HR

B = (2TB - H - 4HR + .05W)*.78

C = AB - H (approximated as IP*2.82, or whatever the league (AB-H)/IP average is)

D = HR

Of course, these are both put together, like all BsR, as A*B/(B + C) + D. The only difference between the formulas is that I include SB and CS for the offense, but don’t want to waste time scrounging up stolen bases allowed for the defense.
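Here is a minimal Python sketch of both versions; the 2.82 outs-per-inning figure for the defense is the league-average approximation mentioned above, and the sample inputs in the usage are made up:

```python
def base_runs(A, B, C, D):
    """BsR = A*B/(B + C) + D."""
    return A * B / (B + C) + D

def offense_rc(AB, H, TB, HR, W, SB, CS):
    # offensive A/B/C/D factors as given above
    A = H + W - HR - CS
    B = (2*TB - H - 4*HR + .05*W + 1.5*SB) * .76
    return base_runs(A, B, AB - H, HR)

def defense_rca(IP, H, TB, HR, W, outs_per_ip=2.82):
    # defensive version: no SB/CS; AB - H approximated as IP*2.82
    A = H + W - HR
    B = (2*TB - H - 4*HR + .05*W) * .78
    return base_runs(A, B, IP * outs_per_ip, HR)
```

A typical full-season team line comes out in the 700-800 run range either way.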

R/G, RA/G, RCG, and RCAG are all calculated straightforwardly by dividing by games, then park adjusted by dividing by park factor. Ideally, you use outs as the denominator, but for teams, outs and games are so closely related that I don’t think it’s worth the extra effort.

Next, we have park factors. I have explained the methodology used to figure the PFs before, but the Cliffs Notes version is that they are based on five years of data when applicable, include both runs scored and allowed, and they are regressed towards average (PF = 1), with the amount of regression varying based on the number of years of data used. There are factors for both runs and home runs. The initial PF (unshown) is:

iPF = (H*T/(R*(T - 1) + H) + 1)/2

where H = RPG in home games, R = RPG in road games, T = # teams in league (14 for the AL, 16 for the NL).

It is important to note, since there always seems to be confusion about this, that these park factors already incorporate the fact that the average player plays 50% on the road and 50% at home. That is what the adding one and dividing by 2 in the iPF is all about. So if I list a PF of 1.05, you can apply it directly to a player's stats; there is no need to halve the adjustment again.
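The iPF formula can be sketched as (Python; home RPG, road RPG, and team count as inputs):

```python
def initial_pf(home_rpg, road_rpg, teams):
    """iPF = (H*T/(R*(T - 1) + H) + 1)/2; the +1 and /2 fold in
    the 50/50 home/road split, so the factor applies directly."""
    H, R, T = home_rpg, road_rpg, teams
    return (H * T / (R * (T - 1) + H) + 1) / 2
```

A park that plays identically to the rest of the league comes out at exactly 1.00.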

In the calculation of the PFs, I did not get picky and take out “home” games that were actually at neutral sites, like the Astros/Cubs series that was moved to Milwaukee.

Suppose that this season the Astros had played four “home” games in an extreme environment in which, say, 20 runs were scored per game. Those games would add 819.2 runs to the season’s home total (77 games at 9.6 RPG plus 4 at 20), while the road games would contribute 777.6 runs (81 at 9.6) to the five-year total. Over the five years, the Astros’ home games would then average 9.70272 RPG versus 9.6 for the road games. The park factor, when fully figured with the regression factor, would be 1.0045, when we know that it should be 1.0000. I’m not going to spend too much time worrying about that kind of discrepancy, and that’s a high-end example of what the discrepancy would actually be. And I round off to two decimal places anyway, so both would end up 1.00.

Next is the relief pitchers report. I defined a starting pitcher as one with 15 or more starts. All other pitchers are eligible to be included as a reliever. If a pitcher has 40 appearances, then he is included. Additionally, if a pitcher has 50 innings and less than 50% of his appearances are starts, he is also included here (this allows some swingmen type pitchers who wouldn’t meet either the minimum start or appearance standards to get in).

For all of the player reports, ages are based on simply subtracting their year of birth from 2007. I realize that this is not compatible with how ages are usually listed and so “Age 27” doesn’t necessarily correspond to age 27 as I list it, but it makes everything a heckuva lot easier, and I am more interested in comparing the ages of the players to their contemporaries, for which case it makes very little difference.

Anyway, for relievers, the statistical categories are Games, Innings Pitched, Run Average (RA), Relief Run Average (RRA), Earned Run Average (ERA), Estimated Run Average (eRA), DIPS-style estimated Run Average (dRA), Guess-Future (G-F), Inherited Runners per Game (IR/G), Inherited Runs Saved (IRSV), hits per ball in play (%H), Runs Above Average (RAA), and Runs Above Replacement (RAR).

All of the run averages are park adjusted. RA is R*9/IP, and you know ERA. Relief Run Average subtracts IRSV from runs allowed, and thus is (R - IRSV)*9/IP; it was published in __By the Numbers__ by Sky Andrecheck. eRA, dRA, %H, and RAA will be explained in the starters section.

Guess-Future is a JUNK STAT. G-F is A JUNK STAT. I just wanted to make that clear so that no anonymous commentator posts that without any explanation. It is just something that I have used for some time that combines eRA and strikeout rate into a unitless number. As a rule of thumb, anything under 4 is pretty good. I include it not because I think it is meaningful, but because it is a number that I have been looking at for some time and still like to, despite the fact that it is a JUNK STAT. JUNK STATS can be fun as long as you recognize them for what they are. G-F = 4.46 + .095(eRA) - .113(KG), where KG is strikeouts per 9 innings. JUNK STAT JUNK STAT JUNK STAT JUNK STAT JUNK STAT

Inherited Runners per Game is per relief appearance (G - GS); it is an interesting thing to look at, I think, in lieu of actual leverage data. You can see which closers come in with runners on base, and which are used nearly exclusively to start innings. Of course, you can’t infer too much; there are bad relievers who come in with a lot of people on base, not because they are being used in high leverage situations, but because they are long men or what have you. I think it’s mildly interesting, so I include it.

Inherited Runs Saved is the number of inherited runs an average reliever would have allowed to score, given the same number of inherited runners, minus the number the reliever actually allowed to score. I do not park adjust this figure. Of course, the way I am doing it is without regard to which base the runners were on, which of course is a very important thing to know. Obviously, with a lot of these reliever measures, if you have access to WPA and LI data and the like, that will probably be more significant.

IRSV = Inherited Runners*League % Stranded - Inherited Runs Scored

Runs Above Replacement is a comparison of the pitcher to a replacement level reliever, which is assumed to be a .450 pitcher, or as I would prefer to say, one who allows runs at 111% of the league average. So the formula is (1.11*N - RRA)*IP/9, where N is league runs/game. Runs Above Average is simply (N - RRA)*IP/9.
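A quick Python sketch of the reliever value formulas (IRSV is taken as an input here, and N is league runs per game; the function names are mine):

```python
def relief_run_average(runs, irsv, ip):
    """RRA = (R - IRSV)*9/IP."""
    return (runs - irsv) * 9 / ip

def reliever_raa(rra, n, ip):
    """RAA = (N - RRA)*IP/9."""
    return (n - rra) * ip / 9

def reliever_rar(rra, n, ip):
    """Replacement reliever allows runs at 111% of league average."""
    return (1.11 * n - rra) * ip / 9
```

An exactly league-average reliever gets 0 RAA but still earns positive RAR, since replacement level sits below average.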

On to the starting pitchers. The categories are Innings Pitched, Run Average, ERA, eRA, dRA, KG, G-F, %H, Neutral W% (NW%), Quality Start% (QS%), RAA, and RAR.

The run averages (RA, ERA, eRA, dRA) are all park-adjusted, simply by dividing by park factor.

eRA is figured by plugging the pitcher’s stats into the Base Runs formula above (the one not including SB and CS that is used for estimating team runs allowed), multiplying the estimated runs by nine and dividing by innings.

dRA is a DIPS method (which of course means that Voros McCracken is the true developer), using Base Runs as the run estimator. This is overkill, since a DIPS estimator like FIP will work just fine, but I decided to use Base Runs wherever I could this year. To find it, first estimate PA as IP*x + H + W, where x = Lg(AB-H)/IP. Then find %K (K/PA), %W (W/PA), %HR (HR/PA), and BIP% = 1 - %K - %W - %HR. Next, find the estimated %H (which I will just call %H for the sake of this explanation, though it is not the same as the %H displayed in the stats--that is the pitcher’s actual rate, (H - HR)/(estimated PA - W - K - HR)) as BIP%*Lg%H.

Then you use BsR to find the new estimated RA:

A = %H + %W

B = (2*(%H*Lg(TB-4*HR)/(H-HR) + 4*%HR) - %H - 5*%HR + .05*%W)*.78

C = 1 - %H - %W - %HR

D = %HR

dRA = (A*B/(B+C) + D)/C*25.2/PF
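Putting the whole dRA procedure together as a Python sketch (the league inputs -- lg_x = Lg(AB-H)/IP, lg_pct_h = league hit rate on balls in play, and lg_tb_ratio = Lg(TB - 4*HR)/(H - HR) -- would be pulled from the league totals; the values in the test below are just plausible stand-ins):

```python
def dra(IP, H, HR, W, K, lg_x, lg_pct_h, lg_tb_ratio, pf):
    """DIPS-style RA: replace the pitcher's hit rate on balls in play
    with the league rate, then run Base Runs on the component rates."""
    pa = IP * lg_x + H + W              # estimated PA
    pK, pW, pHR = K / pa, W / pa, HR / pa
    bip = 1 - pK - pW - pHR             # balls in play rate
    pH = bip * lg_pct_h                 # estimated non-HR hit rate
    A = pH + pW
    B = (2 * (pH * lg_tb_ratio + 4 * pHR) - pH - 5 * pHR + .05 * pW) * .78
    C = 1 - pH - pW - pHR               # out rate
    D = pHR
    return (A * B / (B + C) + D) / C * 25.2 / pf
```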

Neutral Winning Percentage is the pitcher’s winning percentage adjusted for the quality of his team. It makes the assumption that all teams are perfectly balanced between offense and defense, and then projects what the pitcher’s W% would be on an average team. I do not place a lot of faith in anything based on wins and losses, of course, and particularly not for a one-year sample. In the long run, we would expect pitchers to pitch for fairly balanced teams and for run support for an individual to be the same as for the pitching staff as a whole. For individual seasons, we know that things are not going to even out.

I used to use Run Support to compare a pitcher’s W% to what he would have been expected to earn, but now I have decided that is more trouble than it is worth. RS can be a pain to run down, and I don’t put a lot of stock in the resulting figures anyway. So why bother? NW% = W% - (Mate + .5)/2 +.5, where Mate is (Team Wins - Pitcher Wins)/(Team Decisions - Pitcher Decisions).

Likewise, I include Quality Start Percentage (which of course is just QS/GS) only because my data source (Doug’s Stats) includes them. As for RAA and RAR for starters, RAA = (N - RA)*IP/9, and RAR = (1.25*N - RA)*IP/9.
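In Python, the starter value lines work out to (helper names are mine; N is league runs per game):

```python
def neutral_w_pct(w, l, team_w, team_l):
    """NW% = W% - (Mate + .5)/2 + .5, where Mate is the team's
    record with the pitcher's decisions removed."""
    w_pct = w / (w + l)
    mate = (team_w - w) / (team_w + team_l - w - l)
    return w_pct - (mate + .5) / 2 + .5

def starter_raa(ra, n, ip):
    """RAA = (N - RA)*IP/9."""
    return (n - ra) * ip / 9

def starter_rar(ra, n, ip):
    """Replacement starter allows runs at 125% of league average."""
    return (1.25 * n - ra) * ip / 9
```

A .500 pitcher on a team whose other pitchers also play .500 ball keeps his .500 NW%.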

For hitters with 300 or more PA, I list Plate Appearances (PA), Outs (O), Batting Average (BA), On Base Average (OBA), Slugging Average (SLG), Runs Created (RC), Runs Created per Game (RG), Secondary Average (SEC), Speed Unit (SU), Hitting Runs Above Average (HRAA), Runs Above Average (RAA), Hitting Runs Above Replacement (HRAR), and Runs Above Replacement (RAR).

I do not bother to include hit batters, so take note of that for players who do get plunked a lot. Therefore, PA are simply AB + W. Outs are AB - H + CS. BA and SLG you know, but remember that without HB and SF, OBA is just (H + W)/(AB + W). Secondary Average = (TB - H + W)/AB. I have not included net steals as many people (and Bill James himself) do--it is solely hitting events.

The park adjustment method I’ve used for BA, OBA, SLG, and SEC deserves a little bit of explanation. It is based on the same principle as the “Willie Davis method” introduced by Bill James in the __New Historical Baseball Abstract__. The idea is to deflate all of the positive offensive events by a constant percentage in order to make the new runs created estimate from those stats equal to the park adjusted runs created we get from the player’s actual stats. I based it on the run estimator (ERP) that I use here instead of RC.

X = ((TB + .8H + W - .3AB)/PF + .3(AB - H))/(TB + W + .5H)

X is unique for each player and is the deflator. Then, hits, walks, and total bases are all multiplied by X in order to park adjust them. Outs (AB - H) are held constant, so the new At Bat estimate is AB - H + H*X, which can be rewritten as AB - (1 - X)*H. Thus, we can write BA, OBA, SLG, and SEC as:

BA = H*X/(AB - (1 - X)*H)

OBA = (H + W)*X/(AB - (1 - X)*H + W*X)

SLG = TB*X/(AB - (1 - X)*H)

SEC = SLG - BA + (OBA - BA)/(1 - OBA)
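The whole adjustment can be sketched in Python (the function name is mine):

```python
def willie_davis_adjust(AB, H, TB, W, pf):
    """Deflate H, W, and TB by X so that ERP figured from the adjusted
    stats equals the park-adjusted ERP; outs are held constant."""
    X = ((TB + .8*H + W - .3*AB) / pf + .3*(AB - H)) / (TB + W + .5*H)
    new_ab = AB - (1 - X) * H        # i.e., AB - H + H*X
    ba = H * X / new_ab
    oba = (H + W) * X / (new_ab + W * X)
    slg = TB * X / new_ab
    sec = slg - ba + (oba - ba) / (1 - oba)
    return ba, oba, slg, sec
```

In a neutral park (PF = 1), X works out to exactly 1 and the rates are unchanged.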

Next up is Runs Created, which as previously mentioned is actually Paul Johnson’s ERP. Ideally, I would use a custom linear weights formula for the given league, but ERP is just so darn simple and close to the mark that it’s hard to pass up. I still use the term “RC” partially as a homage to Bill James (seriously, I really like and respect him even if I’ve said negative things about RC and Win Shares), and also because it is just a good term. I like the thought put in your head when you hear “creating” a run better than “producing”, “manufacturing”, “generating”, etc. to say nothing of names like “equivalent” or “extrapolated” runs. None of that is said to put down the creators of those methods--there just aren’t a lot of good, unique names available. Anyway, RC = (TB + .8H + W + .7SB - CS - .3AB)*.322.

RC is park adjusted, by dividing by PF, making all of the value stats that follow park adjusted as well. RG, the rate, is RC/O*25.5. I do not believe that outs are the proper denominator for an individual rate stat, but I also do not believe that the distortions caused are that bad. (I still intend to finish my rate stat series and discuss all of the options in excruciating detail, but alas you’ll have to take my word for it now).
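As code, the ERP-based RC (park adjusted) and the RG rate are simply:

```python
def runs_created(AB, H, TB, W, SB, CS, pf):
    """RC = (TB + .8H + W + .7SB - CS - .3AB)*.322,
    park adjusted by dividing by PF."""
    return (TB + .8*H + W + .7*SB - CS - .3*AB) * .322 / pf

def runs_per_game(rc, outs):
    """RG = RC/O * 25.5."""
    return rc / outs * 25.5
```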

Speed Unit is my own take on a “speed skill” estimator a la Speed Score. I AM NOT CLAIMING THAT IT IS BETTER THAN SPEED SCORE. I don’t use Speed Score because I always like to make up my own crap whenever possible (while of course recognizing that others did it first and better), because some of the categories aren’t readily available, and because I don’t want to mess with square roots. Anyway, it considers four categories: runs per time on base, stolen base percentage (using Bill James’ technique of adding 3 to the numerator and 7 to the denominator), stolen base frequency (steal attempts per time on base), and triples per ball in play. These are then converted to a pseudo Z-score in each category, and the result is on a 0-100 scale. I will not reprint the formula here, but I have written about it before. I AM NOT CLAIMING THAT IT IS BETTER THAN SPEED SCORE. I AM NOT CLAIMING THAT IT IS AS GOOD AS SPEED SCORE.

There are a whopping four categories that compare to a baseline; two for average, two for replacement. Hitting RAA compares to a league average hitter; it is in the vein of Pete Palmer’s Batting Runs. RAA compares to an average hitter at the player’s primary position. Hitting RAR compares to a “replacement level” hitter; RAR compares to a replacement level hitter at the player’s primary position. The formulas are:

HRAA = (RG - N)*O/25.5

RAA = (RG - N*PADJ)*O/25.5

HRAR = (RG - .73*N)*O/25.5

RAR = (RG - .73*N*PADJ)*O/25.5

PADJ is the position adjustment, and it is based on 1992-2001 data. For catchers it is .89; for 1B/DH, 1.19; for 2B, .93; for 3B, 1.01; for SS, .86; for LF/RF, 1.12; and for CF, 1.02.
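The four baselines can be sketched together (the dictionary layout of the PADJ table is my own):

```python
PADJ = {'C': .89, '1B': 1.19, 'DH': 1.19, '2B': .93, '3B': 1.01,
        'SS': .86, 'LF': 1.12, 'RF': 1.12, 'CF': 1.02}

def value_stats(rg, outs, n, pos):
    """HRAA/RAA/HRAR/RAR per the formulas above; n = league RG."""
    padj = PADJ[pos]
    hraa = (rg - n) * outs / 25.5
    raa = (rg - n * padj) * outs / 25.5
    hrar = (rg - .73 * n) * outs / 25.5
    rar = (rg - .73 * n * padj) * outs / 25.5
    return hraa, raa, hrar, rar
```

For a shortstop (PADJ below 1), the positional versions come out higher than the hitting-only versions, since the baseline is lowered.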

How do I deal with players who split time between teams? I assign all of their statistics to the team with which they played more, even if this means it is across leagues. This is obviously the lazy way out; the optimal thing would be to look at the performance with the teams separately, and then sum them up.

You can stop reading now if you just want to know how the numbers were calculated. The rest of this post will be of a rambling nature and will discuss the underpinnings behind the choices I have made on matters like park adjustments, positional adjustments, run to win converters, and replacement levels.

First of all, the term “replacement level” is obnoxious, because everyone brings their preconceptions to the table about what that means, and people end up talking past each other. Unfortunately, that ship has sailed, and the term “replacement level” is not going away. Secondly, I am not really a believer in replacement level. I don’t deny that it is a valid concept, or that comparisons to replacement level can be useful for answering certain questions. I just don’t believe that replacement level is clearly the correct baseline. I also don’t believe that it’s clearly NOT the correct baseline, and since most sabermetricians use it, I go along with the crowd in this case.

The way that reads is probably too wishy-washy; I do think that it is PROBABLY the correct choice. There are few things in sabermetrics that I am 100% sure of, though, and this is certainly not one of them.

I have used distinct replacement levels for batters, starters, and relievers. For batters, it is 73% of the league RG, or since replacement levels are often discussed in these terms, a .350 W%. For starters, I used 125% of the league RA or a .390 W%. For relievers, I used 111% of the league RA or a .450 W%. I am certainly not positive that any of these choices are “correct”. I do think that it is extremely important to use different replacement levels for starters and relievers; Tango Tiger convinced me of this last year (he actually uses .380, .380, .470 as his baselines). Relievers have a natural RA advantage over starters, and thus their replacements will as well.

Now, park adjustments. Since I am concerned about the player’s value last season, the proper type of PF to use is definitely one based on runs. Given that, there are still two paths you can go down. One is to park adjust the player’s statistics; the other is to park adjust the league or replacement statistics when you plug in to a RAA or RAR formula. I go with the first option, because it is more useful to have adjusted RC or adjusted RA, ERA, etc. than to only have the value stats adjusted. However, given a certain assumption about the run to win converter, the two approaches are equivalent.

Speaking of those RPW: David Smyth, in his Base Wins methodology, uses RPW = RPG. If the RPG is 9.4, then there are 9.4 runs per win. It is true that if you study marginal RPW for teams, the relationship is not linear. However, if you back up from the team and consider things in league context, one can make the case that the proper approach is the simple RPW = RPG.

Given that RPW = RPG, the two park factor approaches are equivalent. Suppose that we have a player in an extreme park (PF = 1.15, approximately like Coors Field) who has an 8 RG before adjusting for park while making 350 outs in a 4.5 N league. The first method of park adjustment, the one I use, converts his value into a neutral park, so his RG is now 8/1.15 = 6.957. We can now compare him directly to the league average:

RAA = (6.957 - 4.5)*350/25.5 = +33.72

The second method would be to adjust the league context. If N = 4.5, then the average player in this park will create 4.5*1.15 = 5.175 runs. Now, to figure RAA, we can use the unadjusted RG of 8:

RAA = (8 - 5.175)*350/25.5 = +38.77

These are not the same, as you can obviously see. The reason for this is that they are in two different contexts. The first figure is in a 9 RPG (2*4.5) context; the second figure is in a 10.35 RPG (2*4.5*1.15) context. Runs have different values in different contexts; that is why we have RPW converters. If we convert to WAA, then we have:

WAA = 33.72/9 = +3.75

WAA = 38.77/10.35 = +3.75
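The equivalence is easy to verify in code, using the numbers from the example above:

```python
n, pf, rg, outs = 4.5, 1.15, 8.0, 350

# method 1: park adjust the player's rate; context stays 9 RPG
raa1 = (rg / pf - n) * outs / 25.5
waa1 = raa1 / (2 * n)

# method 2: park adjust the league context; context is 10.35 RPG
raa2 = (rg - n * pf) * outs / 25.5
waa2 = raa2 / (2 * n * pf)

# raa1 is about +33.7 and raa2 about +38.8, but both give WAA of about +3.75
```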

Once you convert to wins, the two approaches are equivalent. This is another advantage for the first approach: since after park adjusting, everyone in the league is in the same context, there is no need to convert to wins at all. Sure, you can convert to wins if you want. If you want to compare to performances from other seasons and other leagues, then you need to. But if all you want to do is compare David Wright to Prince Fielder to Hanley Ramirez, there is no need to convert to wins. Personally, I think that stating something as +34 is a lot nicer than stating it as +3.8, if you can get away with it. None of this is to deny that wins are the ultimate currency, but runs are directly related to wins, and so there is no difference in conclusion from using them as long as the RPW is the same for all players, which it is for a given league season when you park adjust runs rather than the context.

Finally, there is the matter of position adjustments. What I have done is apply an offensive positional adjustment to set a baseline for each player. A second baseman’s RAA will be figured by comparing his RG to 93% of the league average, while a third baseman’s will compare to 101%, etc. Replacement level is set at 73% of the estimated average for each position.

So what I am doing is comparing to a “replacement hitter at position”. As Tango Tiger has pointed out, there is really no such thing as a “replacement hitter” or a “replacement fielder”--there are just replacement players. Every player is chosen because his total value, both hitting and fielding, is sufficient to justify his inclusion on the team. Segmenting it into hitting and fielding replacements is not realistic and causes mass confusion.

That being said, using “replacement hitter at position” does not cause too many distortions. It is not theoretically correct, but it is practically powerful. For one thing, most players, even those at key defensive positions, are chosen first and foremost for their offense. Empirical work by Keith Woolner has shown that the replacement level hitting performance is about the same for every position, relative to the positional average.

The offensive positional adjustment makes the inherent assumption that the average player at each position is equally valuable. I think that this is close to being true, but it is not quite true. The ideal approach would be to use a defensive positional adjustment, since the real difference between a first baseman and a shortstop is their defensive value. When you bat, all runs count the same, whether you create them as a first baseman or as a shortstop.

Figuring what the defensive positional adjustment should be, though, is easier said than done. Therefore, I use the offensive positional adjustment. So if you want to criticize that choice, or criticize the numbers that result, be my guest. But do not claim that I am holding this up as the correct analytical structure. I am holding it up as the most simple and straightforward structure that conforms to reality *reasonably* well, and because while the numbers may be flawed, they are at least based on an objective formula. If you feel comfortable with some other assumptions, please feel free to ignore mine.

One other note here is that since the offensive PADJ is a proxy for average defensive value by position, ideally it would be applied by tying it to defensive playing time. I have done it by outs, though. For example, shortstops have a PADJ of .86. If we assume that an average full-time player makes 10% of his team’s outs (about 408 for a 162 game season with 25.5 O/G) and the league has a 4.75 N, the average shortstop is getting an adjustment of (1 - .86)*4.75/25.5*408 = +10.6 runs. However, I am distributing it based on player outs. If you have one shortstop who makes 350 outs and another who makes 425 outs, then the first player will be getting 9.1 runs while the second will be getting 11.1 runs, despite the fact that they may both be full-time players.

The reason I have taken this flawed path is because 1) it ties the position adjustment directly into the RAR formula rather than leaving it as something to subtract on the outside and more importantly 2) there’s no straightforward way to do it. The best would probably be to use defensive innings--set the full-time player to X defensive innings, figure how Derek Jeter’s innings compare to X, and adjust his PADJ accordingly. Games in the field or games played are dicey because they can cause distortion for defensive replacements. Plate Appearances avoid the problem that outs have of being highly related to player quality, but they still have the illogic of basing it on offensive playing time. And of course the differences here are going to be fairly small (a few runs). That is not to say that this way is preferable, but it’s not horrible either, at least as far as I can tell.

Given the inherent assumption of the offensive PADJ that all positions are equally valuable, once we have a player’s RAR, we should account for his defensive value by adding on his runs above average relative to a player at his own position. If there is a shortstop out there who is -2 runs defensively versus an average shortstop, he is without a doubt a plus defensive player, and a more valuable defensive player than a first baseman who was +1 run better than an average first baseman. Regardless, since we have implicitly assumed that they are both average defensively *for their position* when RAR was calculated, the shortstop will see his value docked two runs. This DOES NOT MEAN that the shortstop has been penalized for his defense. The whole process of accounting for positional differences, going from hitting RAR to positional RAR, has benefited him.

It is with some misgivings that I publish “hitting RAR” at all, since I have already stated that there is no such thing as a replacement level hitter. It is useful to provide a low baseline total offensive evaluation that does not include position, though, and it can also be thought of as the theoretical value above replacement in a world in which nobody plays defense at all.

The DH is a special case, and it caused a lot of confusion when my MVP post was linked at BTF last year. Some of that confusion has to do with assuming that any runs above replacement methodology is the same as VORP from Baseball Prospectus. Obviously there are similarities between my approach and VORP, but there are also key differences. One key difference is that I use a better run estimator. Simple, humble old ERP is, in my opinion, a superior estimator to the complex MLV. I agree with almost all of the logic behind MLV--but using James’ Runs Created as the estimator to fuel it is putting lipstick on a pig (this is a much more exciting way of putting it in the 2008 context, don’t you think?).

The big difference, though, as it relates to the DH, is that VORP considers the DH to be a unique position, while I consider DHs to be in the same pool as first basemen. The fact of the matter is that first basemen outhit DHs. There are any number of potential explanations for this; DHs are often old or injured, hitting as a DH is harder than hitting as a position player, etc. Anyway, the exact procedure for VORP is proprietary, but it is apparent that they use some sort of average DH production to set the DH replacement level. This makes the replacement level for a DH lower than the replacement level for a first baseman.

A couple of the aforementioned nimrods took the fact that VORP did this and assumed that my figures did as well. What I do is evaluate 1B and DH against the same replacement RG. This actually helps first basemen, since the DHs drag the average production of the pool down, thus resulting in a lower replacement level than I would get if I considered first basemen on their own. Contrary to what the chief nimrod thought, this is not “treating a 1B as a DH”. It is “treating a 1B as a 1B/DH”.

It is true, however, that this method assumes that a 1B and a DH have equal defensive value. Obviously, a DH has no defensive value. What I advocate to correct this is to treat a DH as a bad defensive first baseman, and thus knock another five or ten runs off of his RAR for a full-time player. I do not incorporate this into the published numbers, but you should keep it in mind. However, there is no need to adjust the figures for first basemen upwards, despite what the nimrods might think. The simple fact of the matter is that first basemen get higher RAR figures by being pooled with the DHs than they would otherwise.

The "Willie Davis" method has always intrigued me. Before I knew any better, I used it with Runs Created. I eventually set it up to use Baseruns, but never got back to it. In fact, I substituted Baseruns for RC in Brock2, and the projections look a lot more reasonable. Brock2 with RC would give insanely high predictions for high-OPS players.

If you tell me what years you'd like to include in your dataset, I could derive a simple ERP equation for you. I just calculated empirical LW for each season, similar to what Ruane did. Before, I used the entire Retrosheet era for the Run Expectancy chart. Obviously, the sample sizes are small for single season LW, but now I can combine them together any which way I want. I'll post them on Google Docs shortly. I'm still fooling around with the pivot-tables.

Thanks for the offer, but if I'd wanted a better formula, I'd have put some initiative in on it myself. I do look forward to seeing your spreadsheet though--feel free to leave a link in the comments here (or in a related post) if you'd like.

When I first learned Excel, one of the first big things I did was enter Brock2 by hand from the back of the 85 Abstract (this was a good 12-13 years after it was actually published, mind you). I never did actually play around with it too much...I remember putting in Manny and Thome and my other Indian heroes of the time.

The reason I've used the Willie Davis approach here is that I want to maintain a runs-based park factor approach, while ensuring that figuring ERP from the park-adjusted BA/OBA/SLG gives the same result as figuring ERP first and then park adjusting. I'm not crazy about its application to the question of moving players into radically different contexts, but the vast majority of the PFs are within +/- .05 of 1, so I'm a lot more confident in it here.

Aww, any reason you chose to use Google Spreadsheets? I absolutely love being able to investigate the structure of stats and formulas and being able to experiment with them in Excel. It's great for learning both Excel and sabermetrics. That's why I love your sites so much. Darn it.

Oh well then, anyway thanks so much for putting this together.

OK, I figured out that I was using the wrong link. I needed to post the link to the published version.

For the Linear Weights:

http://spreadsheets.google.com/pub?key=pzy9IhjJPqas3SLH5qvVTYg&hl=en

For the Run Expectancy data:

http://spreadsheets.google.com/pub?key=pzy9IhjJPqasczX-d6q_eUA&hl=en

The reason I use Google is storage space. I used to post them on my free Tripod site, but that only has 20 MB of space. Google has (essentially for my purposes) unlimited space.

Also, I do like that people can view the results without having to download the file. Some people are interested in tinkering, but a lot just want to know the results.

That said, I will be happy to email you the excel versions of whichever spreadsheet you want if you send me an email.

Patriot,

Thanks for posting the links. Consider the spreadsheets "beta versions," as I can't quite seem to find a decent way to present the data. I'd be interested to hear your suggestions about how to clean things up a bit.

Terps, my suggestion would be to just present the results. You seem to have included all of the calculations in the spreadsheet. I would hide more columns so that only the really good stuff (LW, BsR formula, RE) is displayed. For example, in your RE sheet, I would display just the situation, the frequency, and the RE, and hide all of the other columns.

My goal here was to calculate a set of LW that I could apply to individual hitters. So the uncertainty on my part is probably due to the fact that I'm including categories you really can't apply to individual batters. Although, I like the idea of giving the baserunner credit for PB and WP advances like you did in your 1876-1881 series. I could probably get rid of Balks, Defensive Indifference, and Other Advance, but then the LW and RC wouldn't reconcile without fudging them. Of course, they're not going to reconcile anyway when I apply them, since the LW were derived from a dataset that excluded partial innings and the home half of the ninth and later innings. Once I iron out the details, I'm going to attempt to use Base Runs to estimate Linear Weights back to 1876. So if I don't want to include the three categories I listed above (BK, DI, OA), then I probably shouldn't include them in my full Base Runs formula. Ignoring balks definitely won't be an issue, since they were hardly ever called prior to the play-by-play era (1954-2008).

After looking at your 2008 batting data and seeing how well your ERP formula worked, you have convinced me to switch to a simpler method. I've thrown out eight categories (XI, PkO, PkE, BK, PB, WP, DI, OA) and combined two into one (ROE+RFC). When throwing out categories, I need to be careful to reconcile the events that I am including so that the Linear Weights and Runs Created values remain in proportion with one another. This involves making a slight adjustment to the Runs Per Out (the total runs per out, not the rate stat) for each event. Using the data for all seasons (see the "LW All Seasons" sheet in my spreadsheet), here is what the new LW and RC look like:

Event    LW      RC
1B      0.464   0.467
2B      0.767   0.770
3B      1.051   1.052
HR      1.404   1.404
UIBB    0.310   0.310
IBB     0.164   0.164
HBP     0.337   0.337
ROE     0.493   0.495
SH     -0.087   0.111
OUT    -0.271  -0.098
SO     -0.273  -0.109
SB      0.192   0.192
CS     -0.419  -0.267

These will be the LW I plug into my BsR equation used to estimate LW for pre-1954 seasons.
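
As a sanity check on the mechanics, the weights above can be applied to an individual's event counts with a few lines of code. This is just a sketch; the stat line at the bottom is made up for illustration:

```python
# Linear weights from the table above (runs above average per event).
LW = {
    "1B": 0.464, "2B": 0.767, "3B": 1.051, "HR": 1.404,
    "UIBB": 0.310, "IBB": 0.164, "HBP": 0.337, "ROE": 0.493,
    "SH": -0.087, "OUT": -0.271, "SO": -0.273,
    "SB": 0.192, "CS": -0.419,
}

def lw_runs(stats):
    """Sum each event count times its linear weight; unknown events are ignored."""
    return sum(LW[event] * count for event, count in stats.items() if event in LW)

# Hypothetical stat line, just to show the mechanics:
batter = {"1B": 10, "HR": 2, "OUT": 20}
print(round(lw_runs(batter), 3))  # 10*0.464 + 2*1.404 - 20*0.271 = 2.028
```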

[Note on what follows below. In my head I understand the process I'm going to explain, but don't know how to express it clearly.]

The empirical LW will form the backbone of the BsR equations used to estimate LW for years after 1953. You might be saying, why not use the empirical data for each season? Well, I don't feel quite comfortable using only 1 season's worth of data. But I do feel comfortable combining the empirical data and then plugging that into Baseruns to generate single season Linear Weights. I will combine the empirical LW in this manner to form BsR equations used to estimate the single-season LW for post-1953 years:

ML: 1954-1962

ML: 1963-1972

AL: 1973-1985

NL: 1973-1985

AL: 1986-1992

NL: 1986-1992

AL: 1993-2007

NL: 1993-2007
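
For what it's worth, the "+1" method for pulling single-season linear weights out of a Base Runs equation can be sketched in a few lines. The B-factor coefficients and league totals below are illustrative placeholders (one commonly seen version), not the actual equation or data being discussed here:

```python
def base_runs(s):
    """Simple Base Runs: BsR = A*B/(B+C) + D, where A = baserunners,
    B = advancement, C = outs, D = home runs."""
    hits = s["1B"] + s["2B"] + s["3B"] + s["HR"]
    tb = s["1B"] + 2 * s["2B"] + 3 * s["3B"] + 4 * s["HR"]
    a = hits + s["BB"] - s["HR"]
    # Placeholder B factor coefficients; a real application would fit these.
    b = (1.4 * tb - 0.6 * hits - 3 * s["HR"] + 0.1 * s["BB"]) * 1.02
    c = s["OUT"]
    d = s["HR"]
    return a * b / (b + c) + d

def plus_one_lw(league, event):
    """Linear weight of `event` = change in BsR from adding one of it
    to the league context."""
    plus = dict(league)
    plus[event] += 1
    return base_runs(plus) - base_runs(league)

# Made-up league totals, roughly season-sized:
league = {"1B": 28000, "2B": 8000, "3B": 900, "HR": 5000,
          "BB": 15000, "OUT": 115000}
for ev in ("1B", "2B", "3B", "HR", "BB", "OUT"):
    print(ev, round(plus_one_lw(league, ev), 3))
```

Run against each era's combined totals, this produces a distinct set of weights per league-era split like the ones listed above.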

Thanks for listening to my ideas here. Maybe this project will give me an excuse to start my own blog.

I agree with not using the one year empirical data.

You should start a blog, if for nothing else just as a place to keep all your research in one place for your own reference. Any readership etc. is a bonus. I used to do all of the stuff I do here even before I was posting it on the internet.

Patriot,

aidenbdud@yahoo.com.

thanks so much. I appreciate all you guys do and I love to read it.

Terpsfan, I agree with p. Having a place where all your work is organized would, I imagine, be very useful. Hey, I would read it.

Patriot,

I get 25.2 Outs/Game when using AB-H from 1993-2007. Any reason you use 25.5?

I use outs = AB - H + CS, which averages around 25.5 (25.45 and 25.53 in the AL and NL this year, respectively).

I do use 25.2 for AB-H; if you look at the "dRA" formula in the post, there is a multiplication by 25.2 for that very reason.
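
The 25.2 vs. 25.5 gap is just a matter of which events count as outs. A trivial sketch with made-up league totals (the numbers below are hypothetical, chosen only to land near the figures discussed):

```python
def outs_per_game(ab, h, games, cs=0):
    """League outs per game; pass CS for the broader outs definition."""
    return (ab - h + cs) / games

# Hypothetical league totals for illustration:
ab, h, cs, games = 78000, 20500, 700, 2280
print(round(outs_per_game(ab, h, games), 2))      # AB - H only, near 25.2
print(round(outs_per_game(ab, h, games, cs), 2))  # AB - H + CS, near 25.5
```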

OK, I forgot that you were including CS in your ERP formula. There are a ton of great ideas in this post. I've re-read it at least 3 times so far.

In the spirit of shameless self-promotion, I've created a blog. My first sabermetric post will be about Linear Weights. Look for it sometime over the weekend. I did write an introductory post:

http://thehumanraindelay.blogspot.com/

Note that I stole Mike Hargrove's nickname for the title of my blog.

Patriot,

Thanks for posting the link.

I am not trying to be sarcastic here, but are you some sort of mathematical genius? Reading some of your works, especially the mathematically heavy ones on your other website, I end up getting lost most of the time. Of course, this is not your fault, but a testament to my lack of intelligence. If you don't consider yourself a mathematical genius, then where did you get your mathematical training? If you say that it was mostly self-taught, then you are a mathematical genius.

Hear that sound? That's the sound of the mathematical geniuses of the world laughing at the suggestion that I might be in their ranks.

There's no doubt that I have mathematical skills and training at a level greater than the average member of the population--but I assume that you and just about everybody drawn to sabermetrics does too.

I don't use any kind of math beyond what I learned in Calculus II or III in college (which is good, because I didn't go far beyond that level), but I don't explain it particularly well, so it probably seems a lot more complicated than it is. "Partial differentiation" sounds a lot scarier than the actual process of taking a derivative is.

Patriot,

I was just frustrated at myself for not being able to follow your section on Win% Estimators on your other website.

Z-scores, derivatives, slopes, logarithms, etc... My head felt like it was going to explode.

That's not a very well-written article. It just piles on idea after idea without any kind of master plan on how to organize it or tie it all together.

It has all the hallmarks of the writings of a mad genius, except the genius part.

Patriot:

I always look forward to this post every year, and you didn't disappoint.

I would love to have the spreadsheets:

KJOKBASEBALl

AT

YAHOO.COM