## Monday, October 31, 2011

### IBA Ballot: Rookie of the Year

I will admit up front that I have not paid much attention this year to the award debates, either in the mainstream or the sabersphere. This is good in the sense that I am coming into this cold, without having read many other perspectives that might bias me one way or another. It’s also bad for the same reason--while I don’t think I’ve ever found mainstream commentary on player value particularly useful, there are a lot of others out there worth reading.

I simply decided this year that I wasn’t going to waste any time thinking about awards until the season was over. Not that I ever obsessed over them previously, but I pretty much completely shut them out of my mind this year. So much so that when an acquaintance who knows I’m a baseball nut asked me who I thought should be the NL Cy Young winner a couple weeks ago, he was shocked when the best I could offer was “uh, probably either Halladay or Kershaw”.

In any event, let me start in the AL. This rookie crop belongs to the pitchers: my top three candidates are all starting pitchers. Michael Pineda got off to the best start, Ivan Nova had the flashiest win-loss record, but Jeremy Hellickson was the AL’s most valuable rookie pitcher. Hellickson led the trio in both innings (189 to Pineda’s 171 and Nova’s 165) and RRA (3.24 to 3.81 and 4.10). Combining the two, I have Hellickson at 52 RAR, Pineda at 36, and Nova at 30.

Hellickson’s BABIP was just .229, so from a strict DIPS perspective one could make a case for Pineda (or even Nova) ahead of Hellickson. But for a retrospective award, I stick to actual runs allowed and first-order component RA for the most part. If Pineda and Hellickson were close, I would consider moving the former ahead, but the gap is too big in this case.

For the remaining two spots on the ballot, the top position players are Dustin Ackley, Eric Hosmer, and Jemile Weeks. Ackley was the most productive hitter of the three, while Hosmer had 130 more PA than either of them. I have Ackley and Weeks both at 23 RAR with Hosmer at 21. Fielding and baserunning would seem to favor Weeks.

Greg Holland deserves at least a mention as a reliever. Holland stranded 31 of 33 baserunners, the second-best performance of any AL reliever, and his peripherals were terrific as well. However, his 26 RAR is due in large part to that inherited runner performance, and thanks to Hosmer I wouldn’t be comfortable naming him the most valuable rookie on his own team. So I see it as:

1) SP Jeremy Hellickson, TB

2) SP Michael Pineda, SEA

3) SP Ivan Nova, NYA

4) 2B Jemile Weeks, OAK

5) 2B Dustin Ackley, SEA

You’ll note that I consider Mark Trumbo an afterthought. Yes, he hit 29 homers, but he also drew just 25 walks. His .290 OBA was second-lowest among AL first basemen with 300 PA, so despite the power, he ranks in the middle of the pack offensively at his position. He wouldn’t crack my top ten.

If Trumbo is the biggest source of divergence from my take on the award and the mainstream, his NL counterpart will certainly be Craig Kimbrel. Kimbrel was terrific by any measure, but in the end he pitched just 77 innings. I don’t believe in extreme leverage bonuses--or much of a leverage bonus at all. I’ll give him an arbitrary 25% boost to get to 25 RAR, but no more.

Among position players, the three standouts are Kimbrel’s teammate Freddie Freeman and Washington teammates Wilson Ramos and Danny Espinosa. I have them all essentially even in terms of RAR at 27. BP’s FRAA likes Espinosa’s fielding and baserunning, and that’s enough to put him in the lead. I suspect Freeman will get more support than Ramos, but the two aren’t that far apart as hitters, with Freeman creating 5.3 runs per game and Ramos 5.0. Freeman had nearly 200 more PA, but Ramos is a catcher. Freeman’s fielding reputation is good, but his FRAA was -5. It can go either way, but I prefer Ramos.

Josh Collmenter and Vance Worley were the top starters, with apologies to Cory Luebke, who I could certainly make a ballot case for, but will refrain lest I be accused of favoritism. Collmenter worked 23 more innings than Worley, which puts him 5 RAR ahead (36 to 31). Collmenter did have a BABIP of just .263 to Worley’s .293, but the dRA difference is not large enough (4.06 to 3.72) to convince me to put Worley ahead.

Depending on how you value Espinosa’s fielding, you certainly could conclude that he was more valuable than Collmenter--conservatively, I’ll stick with the latter, and so my ballot is:

1) SP Josh Collmenter, ARI

2) 2B Danny Espinosa, WAS

3) SP Vance Worley, PHI

4) C Wilson Ramos, WAS

5) RP Craig Kimbrel, ATL

## Friday, October 28, 2011

### Baseball

It has been a part of my life for almost as long as I can remember and it will remain so for as long as I live. For seven months of the year, it is as familiar a part of my life as brushing my teeth or eating dinner, and so it is easy to take for granted. But then one day I wake up and suddenly it is gone, and in the void there is malaise. When the weather is nice, it is played; when it is dark and cold, it moves towards the tropics and away from focus. While it can be used to tell seasons, it scoffs at time while it is played. The competitors dictate the endpoint through their play.

It is a team game, but in many ways it allows the individual to stand and be judged on his own merits. It is a game that, through its variants and offshoots, is quite playable by a large number of people. It is the great American pastime, but it is also the great Cuban passion, the great Dominican pastime, perhaps the most popular import Japan has ever known. We call it baseball, but it is equally beisbol, yakyu, honkbal, pelota.

It is a game simple enough that it can be described (and recorded, on nothing more complex than a piece of paper) discretely--by inning, by score, by out, by baserunner, by count--yet complex enough that there are hundreds and hundreds of people like me who are fascinated by it and spend much of our free time thinking about it, yet we still discover new things about it.

And if you are wired to view the world in a certain way, to try to find and verify patterns, to quantify when possible, and sometimes to find meaning and order through randomness and chance--then sabermetrics is a vessel for enjoying it, understanding it, and celebrating it. To know that what we have seen over the last month is not just unlikely--but rather to have a systematic way of thinking that allows us to estimate just how unlikely--does not detract from it.

Once in a while we are presented with just one more game--one game that is, without question, the end. It almost goes against the spirit of the game to be pettily constrained by a set limit of games that cannot be cheated, unlike the nine innings that often become ten, and sometimes become twelve, and on glorious occasions become twenty, and in theory can be infinite. The potential is often greater than the payoff--but either way, the journey was incredible.

## Sunday, October 09, 2011

### Brief Playoff Meanderings

* There have been eighteen postseasons in which the Division Series has been held (I’m counting the 1981 playoffs between the half-season winners as Division Series). 2011 set the new record for the most aggregate games played in the round, with nineteen. The maximum is twenty, and had the Rays managed to take an additional game from the Rangers it would have been reached. The previous high was eighteen, which occurred in 1981, 2001 and 2003.

The record for most total games played in the postseason (since 1995; in this case I’m excluding 1981 because the LCS was only a five-game series at that point) is 38 in 2003--two LDS went four and the World Series went six, but all other series went the distance. The ALCS and NLCS are both well-remembered (I can just say Grady Little or Aaron Boone and Steve Bartman and you’ll remember the circumstances).

No other postseason has come particularly close; the runner-up is 2001, which saw 35 total games played despite each LCS only lasting five games. The fewest games played in a post-season is 28 in 2007--every series was a sweep except for the two involving Cleveland, who beat New York in four in the ALDS then lost to Boston in seven in the ALCS. To put 2007 in perspective, every series from here on out in 2011 could be a sweep, and the total games played would be 31.

A natural follow-up question is “What is the expected number of postseason games?” If you assume that each game is a 50/50 proposition (equally matched teams, no home field advantage, no variation in team quality from day-to-day, etc.), then it’s very straightforward to estimate series length with the geometric distribution.

For a five-game series under those assumptions, there is a 25% chance for a sweep and a 37.5% chance for a four or five game series. For a seven-game series, there is a 12.5% chance for four games, 25% for five games, and 31.25% for six or seven games. Thus, the expected length of a five-game series is 4.125 games, the expected length of a seven-game series is 5.8125 games, and the expected number of games in the postseason is 33.9375. 1997, 2002 and 2004 all met expectations with 34 games.
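Under those assumptions, the series-length distribution and the expected game counts are easy to reproduce; here is a quick sketch (the function names are mine, purely for illustration):

```python
from math import comb

def series_length_dist(wins_needed, p=0.5):
    """Probability a best-of-(2*wins_needed - 1) series lasts exactly
    g games, assuming independent games won with probability p."""
    dist = {}
    for g in range(wins_needed, 2 * wins_needed):
        # A team clinches in game g by winning it plus exactly
        # wins_needed - 1 of the first g - 1 games; either team can clinch.
        ways = comb(g - 1, wins_needed - 1)
        dist[g] = ways * (p ** wins_needed * (1 - p) ** (g - wins_needed)
                          + (1 - p) ** wins_needed * p ** (g - wins_needed))
    return dist

def expected_length(wins_needed, p=0.5):
    return sum(g * pr for g, pr in series_length_dist(wins_needed, p).items())

best_of_5 = expected_length(3)              # 4.125
best_of_7 = expected_length(4)              # 5.8125
postseason = 4 * best_of_5 + 3 * best_of_7  # 33.9375
```

The same functions handle mismatched teams: with p = .716, expected_length(3) comes out to about 3.86, matching the observed five-game-series average discussed below.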

However, if one compares the expected series lengths to the observed series lengths in the divisional era (1969 and forward), he will find that five-game series do not conform to expectations:

Five-game series tend to be resolved in fewer games than one would expect assuming an equal probability of each outcome; the difference is statistically significant by reasonable standards, and the average is just 3.86 games. Within this framework, assuming that one of the teams has a .716 expected winning percentage comes close to minimizing the error:

I’m presenting this as a curiosity, and I’m certainly not suggesting that the assumptions I described are useless when thinking about the Division Series. On the other hand, seven-game series since 1969 conform almost as well as one could hope for:

There is a slight tendency for series to be resolved more quickly than one would expect, but it isn’t particularly significant, and the average of 5.75 is not far off the expected 5.81.

* What I’m going to say here is not in any way novel; many fans, both sabermetrically-inclined and not, have expressed the same opinion over the years. But there were two instances in the Arizona/Milwaukee Game Five that I considered so egregious that I can’t help but comment on them here.

I have always thought that many managers are way too eager to make substitutions that sacrifice offense for baserunning or defense or the pitcher’s slot in the lineup, but I’m not sure I’ve ever seen a better display of it than in the aforementioned Game Five. In the eighth inning, Arizona trailed 2-1 with runners at first and third and two out. Chris Young drew a walk to load the bases and advance Miguel Montero from first to second, bringing Ryan Roberts up with the bases loaded.

At this point, Kirk Gibson decided to pinch-run for Montero, sending Collin Cowgill in. Montero occupied the #4 spot in the order, while Roberts was #7. Thus, it doesn’t take a rocket scientist to realize that, with an additional inning to go, there was a pretty good chance that Montero’s vacated spot would come up to bat again, and barring Arizona scoring at least two runs and holding Milwaukee in the bottom of the eighth, it would come with the Diamondbacks still needing a run (when I say needing a run, I mean it in the sense that Gibson apparently considered, since I would never say you don’t “need” more runs at any point in the game).

One would have to evaluate the marginal value of Cowgill’s baserunning very highly to see that as a winning move, especially considering that Montero would be off with contact given that there were two outs. Of course, as it played out, Roberts grounded into a fielder’s choice, and Montero’s spot did come up in the ninth, with the game now tied but runners at the corners and two outs. Henry Blanco hit into a fielder’s choice, and Arizona did not mount a threat in the tenth before allowing the game-winning run in the bottom of the frame.

The second move was not nearly as egregious, but it was still quite puzzling to me. With a 2-1 lead in the top of the ninth, Ron Roenicke summoned his closer, John Axford. The pitcher’s spot was due up fourth in the bottom of the ninth, so he double-switched Axford into Rickie Weeks’ #5 spot since he’d made the last out of the eighth.

Given that Roenicke wanted to make a double switch, Weeks was the only obvious candidate to be replaced--removing Braun or Fielder would be worse, especially since they were closer to coming to the plate, and Nyjer Morgan’s second spot was due up sixth in the bottom of the ninth. (One could make a case that Morgan would be the best candidate, but given that he got the walkoff hit in the tenth it wouldn’t be an argument that would go over well with the “results not process” crowd).

What I find interesting about the double-switch for the home team taking the lead into the top of the ninth is that the only way the batting order matters at all is if Axford surrenders the lead. Thus, while you preserve Axford’s ability to pitch the tenth without sabotaging your offense in the ninth, you also know that if he does so, it will be only after he yielded a run in the ninth. You know that you will “need” runs if the #5 spot ever comes to the plate again.

Of course, this all worked out for Roenicke, since Axford pitched a 1-2-3 tenth, Morgan got the game-winning hit, and the #5 spot never batted again. And Roenicke does apparently like to bring Counsell in as a defensive replacement for Weeks, so if Weeks is going to come out of the game anyway, the double switch is the way to do it.

## Sunday, October 02, 2011

### End of Season Statistics 2011

The spreadsheets are published as Google Spreadsheets, which you can download in Excel format by changing the extension in the address from "=html" to "=xls". That way you can download them and manipulate things however you see fit.

The data comes from a number of different sources. Most of the basic data comes from Doug's Stats, which is a very handy site. KJOK's park database provided some of the data used in the park factors, but for recent seasons park data comes from anywhere that has it--Doug's Stats, or Baseball-Reference, or ESPN.com, or MLB.com. Data on pitcher's batted ball types allowed, doubles/triples allowed, and inherited/bequeathed runners comes from Baseball Prospectus.

The basic philosophy behind these stats is to use the simplest methods that have acceptable accuracy. Of course, "acceptable" is in the eye of the beholder, namely me. I use Pythagenpat not because other run/win converters, like a constant RPW or a fixed exponent, are not accurate enough for this purpose, but because it's mine and it would be kind of odd if I didn't use it.

If I seem to be a stickler for purity in my critiques of others' methods, I'd contend it is usually in a theoretical sense, not an input sense. So when I exclude hit batters, I'm not saying that hit batters are worthless or that they *should* be ignored; it's just easier not to mess with them and not that much less accurate.

I also don't really have a problem with people using sub-standard methods (say, Basic RC) as long as they acknowledge that they are sub-standard. If someone pretends that Basic RC doesn't undervalue walks or cause problems when applied to extreme individuals, I'll call them on it; if they explain its shortcomings but use it regardless, I accept that. Take these last three paragraphs as my acknowledgment that some of the statistics displayed here have shortcomings as well.

The League spreadsheet is pretty straightforward--it includes league totals and averages for a number of categories, most or all of which are explained at appropriate junctures throughout this piece. The advent of interleague play has created two different sets of league totals--one for the offense of league teams and one for the defense of league teams. Before interleague play, these two were identical. I do not present both sets of totals (you can figure the defensive ones yourself from the team spreadsheet, if you desire), just those for the offenses. The exception is for the defense-specific statistics, like innings pitched and quality starts. The figures for those categories in the league report are for the defenses of the league's teams. However, I do include each league's breakdown of basic pitching stats between starters and relievers (denoted by "s" or "r" prefixes), and so summing those will yield the totals from the pitching side. The one abbreviation you might not recognize is "N"--this is the league average of runs/game for one team, and it will pop up again.

The Team spreadsheet focuses on overall team performance--wins, losses, runs scored, runs allowed. The columns included are: Park Factor (PF), Home Run Park Factor (PFhr), Winning Percentage (W%), Expected W% (EW%), Predicted W% (PW%), wins, losses, runs, runs allowed, Runs Created (RC), Runs Created Allowed (RCA), Home Winning Percentage (HW%), Road Winning Percentage (RW%) [exactly what they sound like--W% at home and on the road], Runs/Game (R/G), Runs Allowed/Game (RA/G), Runs Created/Game (RCG), Runs Created Allowed/Game (RCAG), and Runs Per Game (the average number of runs scored and allowed per game). Ideally, I would use outs as the denominator, but for teams, outs and games are so closely related that I don’t think it’s worth the extra effort.

The runs and Runs Created figures are unadjusted, but the per-game averages are park-adjusted, except for RPG which is also raw. Runs Created and Runs Created Allowed are both based on a simple Base Runs formula. The formula is:

A = H + W - HR - CS

B = (2TB - H - 4HR + .05W + 1.5SB)*.76

C = AB - H

D = HR

Naturally, A*B/(B + C) + D.
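For concreteness, here is that Base Runs form as code, applied to a hypothetical team stat line (all of the numbers below are invented for illustration):

```python
def runs_created(ab, h, tb, hr, w, sb, cs):
    """Simple Base Runs: A*B/(B + C) + D, as defined above."""
    a = h + w - hr - cs                                       # baserunners
    b = (2 * tb - h - 4 * hr + 0.05 * w + 1.5 * sb) * 0.76    # advancement
    c = ab - h                                                # outs
    d = hr                                                    # automatic runs
    return a * b / (b + c) + d

# Invented team line; comes out to roughly 753 runs created.
rc = runs_created(ab=5500, h=1450, tb=2300, hr=160, w=520, sb=100, cs=40)
```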

I have explained the methodology used to figure the PFs before, but the cliff’s notes version is that they are based on five years of data when applicable, include both runs scored and allowed, and they are regressed towards average (PF = 1), with the amount of regression varying based on the number of years of data used. There are factors for both runs and home runs. The initial PF (not shown) is:

iPF = (H*T/(R*(T - 1) + H) + 1)/2

where H = RPG in home games, R = RPG in road games, T = # teams in league (14 for AL and 16 for NL). Then the iPF is converted to the PF by taking x*iPF + (1-x), where x = .6 if one year of data is used, .7 for 2, .8 for 3, and .9 for 4+.

It is important to note, since there always seems to be confusion about this, that these park factors already incorporate the fact that the average player plays 50% on the road and 50% at home. That is what the adding one and dividing by 2 in the iPF is all about. So if I list Fenway Park with a 1.02 PF, that means that it actually increases RPG by 4%.
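The whole park factor calculation can be sketched in a few lines (function names are mine):

```python
def initial_pf(home_rpg, road_rpg, teams):
    """iPF = (H*T/(R*(T - 1) + H) + 1)/2, which already averages the
    park with a neutral road environment."""
    h, r, t = home_rpg, road_rpg, teams
    return (h * t / (r * (t - 1) + h) + 1) / 2

def park_factor(home_rpg, road_rpg, teams, years):
    """Regress iPF toward 1; less regression with more years of data."""
    x = {1: 0.6, 2: 0.7, 3: 0.8}.get(years, 0.9)
    ipf = initial_pf(home_rpg, road_rpg, teams)
    return x * ipf + (1 - x)
```

A park with identical home and road scoring comes out to exactly 1.0 regardless of the regression weight, as you'd want.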

In the calculation of the PFs, I did not get picky and take out “home” games that were actually at neutral sites, like the Astros/Cubs series that was moved to Milwaukee in 2008.

There are also Team Offense and Defense spreadsheets. These include the following categories:

Team offense: Plate Appearances, Batting Average (BA), On Base Average (OBA), Slugging Average (SLG), Secondary Average (SEC), Walks per At Bat (WAB), Isolated Power (SLG - BA), R/G at home (hR/G), and R/G on the road (rR/G). BA, OBA, SLG, WAB, and ISO are park-adjusted by dividing by the square root of the park factor (or the equivalent; WAB = (OBA - BA)/(1 - OBA) and ISO = SLG - BA).

Team defense: Innings Pitched, BA, OBA, SLG, Innings per Start (IP/S), Starter's eRA (seRA), Reliever's eRA (reRA), RA/G at home (hRA/G), RA/G on the road (rRA/G), Battery Mishap Rate (BMR), Modified Fielding Average (mFA), and Defensive Efficiency Record (DER). BA, OBA, and SLG are park-adjusted by dividing by the square root of PF; seRA and reRA are divided by PF.

The three fielding metrics I've included are limited to metrics that a) I can calculate myself and b) are based on the basic available data, not specialized PBP data. The three metrics are explained in this post, but here are quick descriptions of each:

1) BMR--wild pitches and passed balls per 100 baserunners = (WP + PB)/(H + W - HR)*100

2) mFA--fielding average removing strikeouts and assists = (PO - K)/(PO - K + E)

3) DER--the Bill James classic, using only the PA-based estimate of plays made. Based on a suggestion by Terpsfan101, I've tweaked the error coefficient. Plays Made = PA - K - H - W - HR - HB - .64E and DER = PM/(PM + H - HR + .64E)
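The three metrics as code, transcribing the formulas above directly (argument names are mine; the inputs in the checks are invented):

```python
def bmr(wp, pb, h, w, hr):
    """Battery Mishap Rate: WP + PB per 100 baserunners."""
    return (wp + pb) / (h + w - hr) * 100

def mfa(po, k, e):
    """Modified Fielding Average: fielding average with strikeouts
    and assists stripped out."""
    return (po - k) / (po - k + e)

def der(pa, k, h, w, hr, hb, e):
    """Defensive Efficiency Record with the tweaked .64 error coefficient."""
    pm = pa - k - h - w - hr - hb - 0.64 * e   # estimated plays made
    return pm / (pm + h - hr + 0.64 * e)
```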

Next are the individual player reports. I defined a starting pitcher as one with 15 or more starts. All other pitchers are eligible to be included as relievers. If a pitcher has 40 appearances, then he is included. Additionally, if a pitcher has 50 innings and less than 50% of his appearances are starts, he is also included as a reliever (this allows some swingman-type pitchers who wouldn’t meet either the minimum start or appearance standards to get in).

For all of the player reports, ages are based on simply subtracting their year of birth from 2011. I realize that this is not compatible with how ages are usually listed and so “Age 27” doesn’t necessarily correspond to age 27 as I list it, but it makes everything a heckuva lot easier, and I am more interested in comparing the ages of the players to their contemporaries, for which case it makes very little difference. The "R" category records rookie status with a "R" for rookies and a blank for everyone else; I've trusted Baseball Prospectus on this. Also, all players are counted as being on the team with whom they played/pitched (IP or PA as appropriate) the most.

For relievers, the categories listed are: Games, Innings Pitched, Run Average (RA), Relief Run Average (RRA), Earned Run Average (ERA), Estimated Run Average (eRA), DIPS Run Average (dRA), Batted Ball Run Average (cRA), SIERA-style Run Average (sRA), Guess-Future (G-F), Inherited Runners per Game (IR/G), Batting Average on Balls in Play (%H), Runs Above Average (RAA), and Runs Above Replacement (RAR).

IR/G is per relief appearance (G - GS); it is an interesting thing to look at, I think, in lieu of actual leverage data. You can see which closers come in with runners on base, and which are used nearly exclusively to start innings. Of course, you can’t infer too much; there are bad relievers who come in with a lot of people on base, not because they are being used in high leverage situations, but because they are long men being used in low-leverage situations already out of hand.

For starting pitchers, the columns are: Wins, Losses, Innings Pitched, RA, RRA, ERA, eRA, dRA, cRA, sRA, G-F, %H, Pitches/Start (P/S), Quality Start Percentage (QS%), RAA, and RAR. RA and ERA you know--R*9/IP or ER*9/IP, park-adjusted by dividing by PF. The formulas for eRA, dRA, cRA, and sRA are in this article; I'm not going to copy them here, but all of them are based on the same Base Runs equation and they all estimate RA, not ERA:

* eRA is based on the actual results allowed by the pitcher (hits, doubles, home runs, walks, strikeouts, etc.). It is park-adjusted by dividing by PF.

* dRA is the classic DIPS-style RA, assuming that the pitcher allows a league average %H, and that his hits in play have a league-average S/D/T split. It is park-adjusted by dividing by PF.

* cRA is based on batted ball type (FB, GB, POP, LD) allowed, using the actual estimated linear weight value for each batted ball type. It is not park-adjusted.

* sRA is a SIERA-style RA, based on batted balls but broken down into just groundballs and non-groundballs. It is not park-adjusted either.

Both cRA and sRA are running a little high when compared to actual RA for 2010. Both measures are very sensitive and need to be recalibrated in order to overcome batted ball-type definition differences, frequencies of hit types on each kind of batted ball, and other factors, so keep in mind that they may not perfectly track RA without those adjustments (which I have not made in this case). I’ll let you make your own determination as to whether you find this data useful at all. Personally, I prefer to look at RRA, eRA, and dRA.

G-F is a junk stat, included here out of habit because I've been including it for years. It was intended to give a quick read of a pitcher's expected performance in the next season, based on eRA and strikeout rate. Although the numbers vaguely resemble RAs, it's actually unitless. As a rule of thumb, anything under four is pretty good for a starter. G-F = 4.46 + .095(eRA) - .113(K*9/IP). It is a junk stat. JUNK STAT JUNK STAT JUNK STAT. Got it?

%H is BABIP, more or less; I use an estimate of PA (IP*x + H + W, where x is the league average of (AB - H)/IP). %H = (H - HR)/(IP*x + H - HR - K). Pitches/Start includes all appearances, so I've counted relief appearances as one-half of a start (P/S = Pitches/(.5*G + .5*GS)). QS% is just QS/GS; I don't think it's particularly useful, but Doug's Stats include QS so I include it.
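A quick sketch of those two calculations (names are mine):

```python
def pct_h(h, hr, k, ip, x):
    """%H: BABIP against, using the PA estimate IP*x + H + W,
    where x is the league average of (AB - H)/IP."""
    return (h - hr) / (ip * x + h - hr - k)

def pitches_per_start(pitches, g, gs):
    """P/S, counting each relief appearance as half a start."""
    return pitches / (0.5 * g + 0.5 * gs)
```

For a pitcher whose appearances are all starts, G = GS and the denominator reduces to GS, so P/S is literal pitches per start.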

I've used a stat called Relief Run Average (RRA) in the past, based on Sky Andrecheck's article in the August 1999 By the Numbers; that one only used inherited runners, but I've revised it to include bequeathed runners as well, making it equally applicable to starters and relievers. I am using RRA as the building block for baselined value estimates for all pitchers this year. I explained RRA in this article, but the bottom line formulas are:

BRSV = BRS - BR*i*sqrt(PF)

IRSV = IR*i*sqrt(PF) - IRS

RRA = ((R - (BRSV + IRSV))*9/IP)/PF
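A minimal sketch of the RRA calculation, transcribing the three formulas above (argument names are mine; i is taken to be the league rate at which an average inherited or bequeathed runner scores):

```python
from math import sqrt

def rra(r, ip, pf, brs, br, irs, ir, i):
    """Relief Run Average.

    r, ip: runs allowed and innings; pf: park factor
    brs, br: bequeathed runners scored / total bequeathed runners
    irs, ir: inherited runners scored / total inherited runners
    """
    brsv = brs - br * i * sqrt(pf)    # bequeathed runner savings
    irsv = ir * i * sqrt(pf) - irs    # inherited runner savings
    return ((r - (brsv + irsv)) * 9 / ip) / pf
```

With no inherited or bequeathed runners and a neutral park, RRA reduces to plain RA, as it should.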

The two baselined stats are Runs Above Average (RAA) and Runs Above Replacement (RAR). RAA uses the league average runs/game (N) for both starters and relievers, while RAR uses separate replacement levels for starters and relievers. Thus, RAA and RAR will be pretty close for relievers:

RAA = (N - RRA)*IP/9

RAR (relievers) = (1.11*N - RRA)*IP/9

RAR (starters) = (1.28*N - RRA)*IP/9
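The three formulas as code, plus a rough check against the rookie post above (N = 4.5 here is an assumed league average, not the actual 2011 figure):

```python
def pitcher_raa(rra, n, ip):
    """Runs above an average pitcher."""
    return (n - rra) * ip / 9

def pitcher_rar(rra, n, ip, starter=True):
    """Runs above a replacement-level starter or reliever."""
    repl = 1.28 * n if starter else 1.11 * n
    return (repl - rra) * ip / 9

# Hellickson: 3.24 RRA over 189 IP works out to roughly 52-53 RAR
# with N = 4.5, in the neighborhood of the figure cited earlier.
hellickson_rar = pitcher_rar(3.24, 4.5, 189)
```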

All players with 285 or more plate appearances are included in the Hitters spreadsheets. (I usually use 300 as a cutoff, but this year when I had the list sorted there were a number of players just below 300 that I was interested in, so I chose an arbitrarily lower threshold). Each is assigned one position, the one at which they appeared in the most games. The statistics presented are: Games played (G), Plate Appearances (PA), Outs (O), Batting Average (BA), On Base Average (OBA), Slugging Average (SLG), Secondary Average (SEC), Runs Created (RC), Runs Created per Game (RG), Speed Score (SS), Hitting Runs Above Average (HRAA), Runs Above Average (RAA), Hitting Runs Above Replacement (HRAR), and Runs Above Replacement (RAR).

I do not bother to include hit batters, so take note of that for players who do get plunked a lot. Therefore, PA are simply AB + W. Outs are AB - H + CS. BA and SLG you know, but remember that without HB and SF, OBA is just (H + W)/(AB + W). Secondary Average = (TB - H + W)/AB = SLG - BA + (OBA - BA)/(1 - OBA). I have not included net steals as many people (and Bill James himself) do--it is solely hitting events.

BA, OBA, and SLG are park-adjusted by dividing by the square root of PF. This is an approximation, of course, but I'm satisfied that it works well. The goal here is to adjust for the win value of offensive events, not to quantify the exact park effect on the given rate. I use the BA/OBA/SLG-based formula to figure SEC, so it is park-adjusted as well.

Runs Created is actually Paul Johnson's ERP, more or less. Ideally, I would use a custom linear weights formula for the given league, but ERP is just so darn simple and close to the mark that it’s hard to pass up. I still use the term “RC” partially as a homage to Bill James (seriously, I really like and respect him even if I’ve said negative things about RC and Win Shares), and also because it is just a good term. I like the thought put in your head when you hear “creating” a run better than “producing”, “manufacturing”, “generating”, etc. to say nothing of names like “equivalent” or “extrapolated” runs. None of that is said to put down the creators of those methods--there just aren’t a lot of good, unique names available. Anyway, RC = (TB + .8H + W + .7SB - CS - .3AB)*.322.

RC is park adjusted by dividing by PF, making all of the value stats that follow park adjusted as well. RG, the rate, is RC/O*25.5. I do not believe that outs are the proper denominator for an individual rate stat, but I also do not believe that the distortions caused are that bad. (I still intend to finish my rate stat series and discuss all of the options in excruciating detail, but alas you’ll have to take my word for it now).
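In code, using the ERP formula above with an invented batter line:

```python
def erp_rc(ab, h, tb, w, sb, cs):
    """Runs Created via the simple ERP form given above."""
    return (tb + 0.8 * h + w + 0.7 * sb - cs - 0.3 * ab) * 0.322

def rate_rg(rc, outs, pf):
    """Park-adjusted runs created per game on an outs basis."""
    return (rc / pf) / outs * 25.5

# Invented line: roughly 110 RC, and about a 7.0 RG over 400 outs.
rc = erp_rc(ab=600, h=180, tb=300, w=70, sb=20, cs=5)
rg = rate_rg(rc, outs=400, pf=1.0)
```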

I have decided to switch to a watered-down version of Bill James' Speed Score this year; I only use four of his categories. Previously I used my own knockoff version called Speed Unit, but trying to keep it from breaking down every few years was a wasted effort.

Speed Score is the average of four components, which I'll call a, b, c, and d:

a = ((SB + 3)/(SB + CS + 7) - .4)*20

b = sqrt((SB + CS)/(S + W))*14.3

c = ((R - HR)/(H + W - HR) - .1)*25

d = T/(AB - HR - K)*450

James actually uses a sliding scale for the triples component, but it strikes me as needlessly complex and so I've streamlined it. I also changed some of his division to mathematically equivalent multiplications.
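The streamlined Speed Score as code (argument names are mine; the inputs in the check below are invented):

```python
def speed_score(sb, cs, singles, w, r, h, hr, triples, ab, k):
    """Watered-down Speed Score: the average of four components."""
    a = ((sb + 3) / (sb + cs + 7) - 0.4) * 20        # stolen base percentage
    b = ((sb + cs) / (singles + w)) ** 0.5 * 14.3    # attempt frequency
    c = ((r - hr) / (h + w - hr) - 0.1) * 25         # runs per time on base
    d = triples / (ab - hr - k) * 450                # triple rate
    return (a + b + c + d) / 4
```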

There are a whopping four categories that compare to a baseline; two for average, two for replacement. Hitting RAA compares to a league average hitter; it is in the vein of Pete Palmer’s Batting Runs. RAA compares to an average hitter at the player’s primary position. Hitting RAR compares to a “replacement level” hitter; RAR compares to a replacement level hitter at the player’s primary position. The formulas are:

HRAA = (RG - N)*O/25.5

RAA = (RG - N*PADJ)*O/25.5

HRAR = (RG - .73*N)*O/25.5

RAR = (RG - .73*N*PADJ)*O/25.5

PADJ is the position adjustment, and it is based on 1992-2001 offensive data. For catchers it is .89; for 1B/DH, 1.19; for 2B, .93; for 3B, 1.01; for SS, .86; for LF/RF, 1.12; and for CF, 1.02. It dawned on me when re-reading this before posting that the timeframe means that I’ve been using the same PADJ for ten years--which means two things:

1) I’m getting old

2) It’s probably time for an update. I’ll look at 2002-2011 in my forthcoming annual “Offense by Position” post.
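Putting the four baselines together with the position adjustments (a sketch; the function and dictionary names are mine, and the PADJ values are those listed above):

```python
PADJ = {'C': 0.89, '1B': 1.19, 'DH': 1.19, '2B': 0.93, '3B': 1.01,
        'SS': 0.86, 'LF': 1.12, 'RF': 1.12, 'CF': 1.02}

def hitter_value(rg, outs, n, pos):
    """Return (HRAA, RAA, HRAR, RAR) for a park-adjusted RG."""
    padj = PADJ[pos]
    hraa = (rg - n) * outs / 25.5
    raa = (rg - n * padj) * outs / 25.5
    hrar = (rg - 0.73 * n) * outs / 25.5
    rar = (rg - 0.73 * n * padj) * outs / 25.5
    return hraa, raa, hrar, rar
```

For a shortstop, RAA exceeds HRAA, since the positional offensive baseline (.86*N) sits below the overall league average.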

That was the mechanics of the calculations; now I'll twist myself into knots trying to justify them. If you only care about the how and not the why, stop reading now.

The first thing that should be covered is the philosophical position behind the statistics posted here. They fall on the continuum of ability and value in what I have called "performance". Performance is a technical-sounding way of saying "Whatever arbitrary combination of ability and value I prefer".

With respect to park adjustments, I am not interested in how any particular player is affected, so there is no separate adjustment for lefties and righties for instance. The park factor is an attempt to determine how the park affects run scoring rates, and thus the win value of runs.

I apply the park factor directly to the player's statistics, but it could also be applied to the league context. The advantage to doing it my way is that it allows you to compare the component statistics (like Runs Created or OBA) on a park-adjusted basis. The drawback is that it creates a new theoretical universe, one in which all parks are equal, rather than leaving the player grounded in the actual context in which he played and evaluating how that context (and not the player's statistics) was altered by the park.

The good news is that the two approaches are essentially equivalent; in fact, they are equivalent if you assume that the Runs Per Win factor is equal to the RPG. Suppose that we have a player in an extreme park (PF = 1.15, approximately like Coors Field pre-humidor) who has an 8 RG before adjusting for park, while making 350 outs in a 4.5 N league. The first method of park adjustment, the one I use, converts his value into a neutral park, so his RG is now 8/1.15 = 6.957. We can now compare him directly to the league average:

RAA = (6.957 - 4.5)*350/25.5 = +33.72

The second method would be to adjust the league context. If N = 4.5, then the average player in this park will create 4.5*1.15 = 5.175 runs. Now, to figure RAA, we can use the unadjusted RG of 8:

RAA = (8 - 5.175)*350/25.5 = +38.77

These are not the same, as you can obviously see. The reason for this is that they take place in two different contexts. The first figure is in a 9 RPG (2*4.5) context; the second figure is in a 10.35 RPG (2*4.5*1.15) context. Runs have different values in different contexts; that is why we have RPW converters in the first place. If we convert to WAA (using RPW = RPG), then we have:

WAA = 33.72/9 = +3.75

WAA = 38.77/10.35 = +3.75

Once you convert to wins, the two approaches are equivalent. The other nice thing about the first approach is that once you park-adjust, everyone in the league is in the same context, and you can dispense with the need for converting to wins at all. You still might want to convert to wins, and you'll need to do so if you are comparing the 2011 players to players from other league-seasons (including between the AL and NL in the same year), but if you are only looking to compare Jose Bautista to Miguel Cabrera, it's not necessary. WAR is somewhat ubiquitous now, but personally I prefer runs when possible--why mess with decimal points if you don't have to?
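The equivalence can be checked in a few lines. This is my own sketch, not part of the original spreadsheets; it uses the Coors-like example from the text and sets RPW equal to RPG, as assumed above:

```python
# Minimal sketch of the two park-adjustment routes from the example above.
# Inputs match the text: PF = 1.15, 8 RG, 350 outs, N = 4.5 league average.
PF, RG, OUTS, N = 1.15, 8.0, 350, 4.5

# Method 1: deflate the player's rate into a neutral park (9 RPG context).
raa_neutral = (RG / PF - N) * OUTS / 25.5         # about +33.72 runs
waa_neutral = raa_neutral / (2 * N)

# Method 2: inflate the league context up to the park (10.35 RPG context).
raa_park = (RG - N * PF) * OUTS / 25.5            # about +38.77 runs
waa_park = raa_park / (2 * N * PF)

print(round(waa_neutral, 2), round(waa_park, 2))  # 3.75 3.75
```

The two WAA figures agree to floating-point precision, since (RG/PF - N) is algebraically identical to (RG - N*PF)/PF.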

The park factors used to adjust player stats here are run-based. Thus, they make no effort to project what a player "would have done" in a neutral park, or account for the different effects parks have on specific events (walks, home runs, BA) or types of players. They simply account for the difference in run environment that is caused by the park (as best I can measure it). As such, they don't evaluate a player within the actual run context of his team's games; they attempt to restate the player's performance as an equivalent performance in a neutral park.

I suppose I should also justify the use of sqrt(PF) for adjusting component statistics. The classic defense given for this approach relies on basic Runs Created--runs are proportional to OBA*SLG, and OBA*SLG/PF = OBA/sqrt(PF)*SLG/sqrt(PF). While RC may be an antiquated tool, you will find that the square root adjustment is fairly compatible with linear weights or Base Runs as well. I am not going to take the space to demonstrate this claim here, but I will some time in the future.
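The algebra behind that classic defense is trivial to verify; here is a quick sketch of my own, with made-up rates:

```python
from math import isclose, sqrt

# Basic RC says runs are proportional to OBA*SLG, so deflating the run
# estimate by the full PF is identical to deflating OBA and SLG each by
# sqrt(PF). The rates below are invented for illustration.
OBA, SLG, PF = 0.340, 0.450, 1.15

direct = (OBA * SLG) / PF                          # adjust the run estimate
component = (OBA / sqrt(PF)) * (SLG / sqrt(PF))    # adjust each component

print(isclose(direct, component))  # True
```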

Many value figures published around the sabersphere adjust for the difference in quality level between the AL and NL. I don't, but this is a thorny area where there is no right or wrong answer as far as I'm concerned. I also do not make an adjustment in the league averages for the fact that the overall NL averages include pitcher batting and the AL does not (not quite true in the era of interleague play, but you get my drift).

The difference between the leagues may not be precisely calculable, and it certainly is not constant, but it is real. If the average player in the AL is better than the average player in the NL, it is perfectly reasonable to expect the average AL player to have more RAR than the average NL player, and that will not happen without some type of adjustment. On the other hand, if you are only interested in evaluating a player relative to his own league, such an adjustment is not necessarily welcome.

The league argument only applies cleanly to metrics baselined to average. Since replacement level compares the given player to a theoretical player that can be acquired on the cheap, the same pool of potential replacement players should by definition be available to the teams of each league. One could argue that if the two leagues don't have equal talent at the major league level, they might not have equal access to replacement level talent--except such an argument is at odds with the notion that replacement level represents talent that is truly "freely available".

So it's hard to justify the approach I take, which is to set replacement level relative to the average runs scored in each league, with no adjustment for the difference in the leagues. The best justification is that it's simple and it treats each league as its own universe, even if in reality they are connected.

The replacement levels I have used here are very much in line with the values used by other sabermetricians. This is based on my own "research", my interpretation of other people's research, and a desire not to stray from the consensus and render the values unhelpful to the majority of people who may encounter them.

Replacement level is certainly not settled science. There is always going to be room to disagree on what the baseline should be. Even if you agree it should be "replacement level", any estimate of where it should be set is just that--an estimate. Average is clean and fairly straightforward, even if its utility is questionable; replacement level is inherently messy. So I offer the average baseline as well.

For position players, replacement level is set at 73% of the positional average RG (since there's a history of discussing replacement level in terms of winning percentages, this is roughly equivalent to .350). For starting pitchers, it is set at 128% of the league average RA (.380), and for relievers it is set at 111% (.450).
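To make those percentages concrete, here is a hypothetical sketch of how they turn into RAR. The league averages and player lines are invented for illustration, the .86 shortstop PADJ and 25.5 outs per game come from the definitions above, and the pitcher line assumes the usual (replacement RA minus RRA) times IP/9 form:

```python
# Hypothetical example of the stated replacement baselines; inputs invented.
LG_RG = 4.5          # league average runs created per game
LG_RA = 4.5          # league average runs allowed per game
PADJ_SS = 0.86       # shortstop positional adjustment

# Position player: replacement = 73% of the positional average RG.
repl_rg = LG_RG * PADJ_SS * 0.73            # about 2.825 for a shortstop
rg, outs = 5.0, 400                         # the player's RG and outs made
rar_hitter = (rg - repl_rg) * outs / 25.5

# Starting pitcher: replacement = 128% of the league average RA.
rra, ip = 3.60, 200                         # the pitcher's RRA and innings
rar_sp = (LG_RA * 1.28 - rra) * ip / 9

print(round(repl_rg, 3), round(rar_hitter, 1), round(rar_sp, 1))
```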

I am still using an analytical structure that makes the comparison to replacement level for a position player by applying it to his hitting statistics. This is the approach taken by Keith Woolner in VORP (and some other earlier replacement level implementations), but the newer metrics (among them Rally and Fangraphs' WAR) handle replacement level by subtracting a set number of runs from the player's total runs above average in a number of different areas (batting, fielding, baserunning, positional value, etc.), which for lack of a better term I will call the subtraction approach.

The offensive positional adjustment makes the inherent assumption that the average player at each position is equally valuable. I think that this is close to being true, but it is not quite true. The ideal approach would be to use a defensive positional adjustment, since the real difference between a first baseman and a shortstop is their defensive value. When you bat, all runs count the same, whether you create them as a first baseman or as a shortstop.

That being said, using “replacement hitter at position” does not cause too many distortions. It is not theoretically correct, but it is practically powerful. For one thing, most players, even those at key defensive positions, are chosen first and foremost for their offense. Empirical work by Keith Woolner has shown that the replacement level hitting performance is about the same for every position, relative to the positional average.

Figuring what the defensive positional adjustment should be, though, is easier said than done. Therefore, I use the offensive positional adjustment. So if you want to criticize that choice, or criticize the numbers that result, be my guest. But do not claim that I am holding this up as the correct analytical structure. I am holding it up as the most simple and straightforward structure that conforms to reality reasonably well, and because while the numbers may be flawed, they are at least based on an objective formula that I can figure myself. If you feel comfortable with some other assumptions, please feel free to ignore mine.

That still does not justify the use of HRAR--hitting runs above replacement--which compares each hitter, regardless of position, to 73% of the league average. Basically, this is just a way to give an overall measure of offensive production without regard for position with a low baseline. It doesn't have any real baseball meaning.

A player who creates runs at 90% of the league average could be above-average (if he's a shortstop or catcher, or a great fielder at a less important fielding position), or sub-replacement level (DHs that create 4 runs a game are not valuable properties). Every player is chosen because his total value, both hitting and fielding, is sufficient to justify his inclusion on the team. HRAR fails even if you try to justify it with a thought experiment about a world in which defense doesn't matter, because in that case the absolute replacement level (in terms of RG, without accounting for the league average) would be much higher than it is currently.

The specific positional adjustments I use are based on 1992-2001 data. There's no particular reason for not updating them; at the time I started using them, they represented the ten most recent years. I have stuck with them because I have not seen compelling evidence of a change in the degree of difficulty or scarcity between the positions between then and now, and because I think they are fairly reasonable. The positions for which they diverge the most from the defensive position adjustments in common use are 2B, 3B, and CF. Second base is considered a premium position by the offensive PADJ (.93), while third base and center field are both neutral (1.01 and 1.02).

Another flaw is that the PADJ is applied to the overall league average RG, which is artificially low for the NL because of pitcher's batting. When using the actual league average runs/game, it's tough to just remove pitchers--any adjustment would be an estimate. If you use the league total of runs created instead, it is a much easier fix.

One other note on this topic is that since the offensive PADJ is a proxy for average defensive value by position, ideally it would be applied by tying it to defensive playing time. I have done it by outs, though.

I have taken this flawed path because 1) it ties the position adjustment directly into the RAR formula rather than leaving it as something to subtract on the outside and, more importantly, 2) there’s no straightforward way to do it. The best would be to use defensive innings--set the full-time player to X defensive innings, figure how Derek Jeter’s innings compared to X, and adjust his PADJ accordingly. Games in the field or games played are dicey because they can cause distortion for defensive replacements. Plate appearances avoid the problem that outs have of being highly related to player quality, but they still carry the illogic of basing the adjustment on offensive playing time. And of course the differences here are going to be fairly small (a few runs). That is not to say that this way is preferable, but it’s not horrible either, at least as far as I can tell.

To compare this approach to the subtraction approach, start by assuming that a replacement level shortstop would create .86*.73*4.5 = 2.825 RG (or would perform at an overall level of equivalent value to being an average fielder at shortstop while creating 2.825 runs per game). Suppose that we are comparing two shortstops, each of whom compiled 600 PA and played an equal number of defensive games and innings (and thus would have the same positional adjustment using the subtraction approach). Alpha made 380 outs and Bravo made 410 outs, and each ranked as dead-on average in the field.

The difference in overall RAR between the two using the subtraction approach would be equal to the difference between their offensive RAA compared to the league average. Assuming the league average is 4.5 runs, and that both Alpha and Bravo created 75 runs, their offensive RAAs are:

Alpha = (75*25.5/380 - 4.5)*380/25.5 = +7.94

Similarly, Bravo is at +2.65, and so the difference between them will be 5.29 RAR.

Using the flawed approach, Alpha's RAR will be:

(75*25.5/380 - 4.5*.73*.86)*380/25.5 = +32.90

Bravo's RAR will be +29.58, a difference of 3.32 RAR, which is two runs off of the difference using the subtraction approach.
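The Alpha/Bravo comparison can be reproduced in a few lines; this is my own sketch of the arithmetic above, not the spreadsheet formulas themselves:

```python
# Reproducing the Alpha/Bravo example: identical runs created (75),
# different outs (380 vs. 410), both average in the field at shortstop.
LG, PADJ_SS = 4.5, 0.86
repl = LG * PADJ_SS * 0.73                 # 2.825 RG replacement shortstop

def runs_above(rc, outs, baseline):
    """Runs above a per-game baseline, with 25.5 outs per game."""
    return (rc * 25.5 / outs - baseline) * outs / 25.5

alpha_raa = runs_above(75, 380, LG)        # +7.94
bravo_raa = runs_above(75, 410, LG)        # +2.65
alpha_rar = runs_above(75, 380, repl)      # +32.90
bravo_rar = runs_above(75, 410, repl)      # +29.58

# Subtraction approach: the gap equals the offensive RAA gap (5.29 runs);
# the outs-based approach narrows it to 3.32, about two runs less.
print(round(alpha_raa - bravo_raa, 2), round(alpha_rar - bravo_rar, 2))
```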

The downside to using PA is that you then need to consider park effects, whereas outs allow you to sidestep them. Outs are constant; plate appearances are linked to OBA. Thus, they depend not only on the offensive context (including park factor), but also on the quality of one's team. Of course, attempting to adjust for team PA differences opens a huge can of worms which is not really relevant; for now, the point is that using outs for individual players causes distortions, sometimes trivial and sometimes bothersome, but almost always makes one's life easier.

I do not include fielding (or baserunning outside of steals, although that is a trivial consideration in comparison) in the RAR figures--they cover offense and positional value only. This in no way means that I do not believe that fielding is an important consideration in player valuation. However, two of the key principles of these stat reports are 1) not incorporating any data that is not readily available and 2) not simply including other people's results (of course I borrow heavily from other people's methods, but I adapt only methodology that I can apply myself).

Any fielding metric worth its salt will fail to meet either criterion--they use zone data or play-by-play data which I do not have easy access to. I do not have a fielding metric that I have stapled together myself, and so I would have to simply lift other analysts' figures.

Setting the practical reason for not including fielding aside, I do have some reservations about lumping fielding and hitting value together in one number because of the obvious differences in reliability between offensive and fielding metrics. In theory, they absolutely should be put together. But in practice, I believe it would be better to regress the fielding metric to a point at which it would be roughly equivalent in reliability to the offensive metric.

Offensive metrics have error bars associated with them, too, of course, and in evaluating a single season's value, I don't care about the vagaries that we often lump together as "luck". Still, there are errors in our assessment of linear weight values and players that collect an unusual proportion of infield hits or hits to the left side, errors in estimation of park factor, and any number of other factors that make their events more or less valuable than an average event of that type.

Fielding metrics offer up all of that and more, as we cannot be nearly as certain of true successes and failures as we are when analyzing offense. Recent investigations, particularly by Colin Wyers, have raised even more questions about the level of uncertainty. So, even if I was including a fielding value, my approach would be to assume that the offensive value was 100% reliable (which it isn't), and regress the fielding metric relative to that (so if the offensive metric was actually 70% reliable, and the fielding metric 40% reliable, I'd treat the fielding metric as .4/.7 = 57% reliable when tacking it on, to illustrate with a simplified and completely made up example presuming that one could have a precise estimate of nebulous "reliability").

Given the inherent assumption of the offensive PADJ that all positions are equally valuable, once RAR has been figured for a player, fielding value can be accounted for by adding on his runs above average relative to a player at his own position. If there is a shortstop that is -2 runs defensively versus an average shortstop, he is without a doubt a plus defensive player, and a more valuable defensive player than a first baseman who was +1 run better than an average first baseman. Regardless, since it was implicitly assumed that they are both average defensively for their position when RAR was calculated, the shortstop will see his value docked two runs. This DOES NOT MEAN that the shortstop has been penalized for his defense. The whole process of accounting for positional differences, going from hitting RAR to positional RAR, has benefited him.
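To see why the docked shortstop still comes out well ahead, here is a toy sketch (all numbers invented) of two identical hitters with the fielding add-on applied after positional RAR:

```python
# Two identical hitters (75 RC, 400 outs): a shortstop fielding -2 vs. the
# average SS, and a first baseman fielding +1 vs. the average 1B. Positional
# RAR assumes average defense at the position; fielding is tacked on after.
LG = 4.5

def positional_rar(rc, outs, padj):
    return (rc * 25.5 / outs - LG * padj * 0.73) * outs / 25.5

ss = positional_rar(75, 400, 0.86) + (-2.0)  # docked 2 runs for fielding
fb = positional_rar(75, 400, 1.19) + 1.0     # credited 1 run for fielding

# The .86 vs. 1.19 PADJ gap dwarfs the fielding add-on: the "penalized"
# shortstop finishes roughly 14 runs ahead of the first baseman.
print(round(ss, 1), round(fb, 1))
```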

I've found that there is often confusion about the treatment of first basemen and designated hitters in my PADJ methodology, since I consider DHs to be in the same pool as first basemen. The fact of the matter is that first basemen outhit DHs. There are any number of potential explanations for this: DHs are often old or injured, players hit worse when DHing than they do when playing the field, etc. This actually helps first basemen, since the DHs drag the average production of the pool down, thus resulting in a lower replacement level than I would get if I considered first basemen alone.

However, this method does assume that a 1B and a DH have equal defensive value. Obviously, a DH has no defensive value. What I advocate to correct this is to treat a DH as a bad defensive first baseman, and thus knock another five or ten runs off of his RAR for a full-time player. I do not incorporate this into the published numbers, but you should keep it in mind. However, there is no need to adjust the figures for first basemen upward--the only necessary adjustment is to take the DHs down a notch.

Finally, I consider each player at his primary defensive position (defined as where he appears in the most games), and do not weight the PADJ by playing time. This does shortchange a player like Ben Zobrist (who saw significant time at a tougher position than his primary position), and unduly boost a player like Buster Posey (who logged a lot of games at a much easier position than his primary position). For most players, though, it doesn't matter much. I find it preferable to make manual adjustments for the unusual cases rather than add another layer of complexity to the whole endeavor.

Player spreadsheets should be coming by the middle of the week.

2011 Park Factors

2011 Leagues

2011 Teams

2011 Team Offense

2011 Team Defense

2011 AL Relievers

2011 NL Relievers

2011 AL Starters

2011 NL Starters

2011 AL Hitters

2011 NL Hitters