Monday, December 21, 2009

A Caution on the Use of Baselined Metrics per PA

I threw this together based on a Twitter discussion I had last week that included Justin (@jinazreds), Matt (@devilfingers), Joshua (@JDSussman), and Erik (@Erik_Manning)--hopefully I didn't miss anybody. Said discussion was going just fine between the other parties until I stepped in and said the opposite of what I meant, so I need to clarify my point. The end result will be that I obfuscate my point, but that's par for the course around here.

I really should just get around to writing the rate stat series that I have been promising since I started this blog, and then I could give my thoughts on this topic from A-Z in one place. But this is a lot easier, and the rate stat series would have eight parts and be remarkably dry reading as I go around in circles.

Suppose we want to express a baselined measure of value as a rate stat. In this case, I'll work with something similar to Palmer's Batting Wins--wins above average, considering only offensive production--but the theory behind it has wider applications.

The standard way of doing this (incidentally, one of the few things that Tango Tiger, David Smyth, and I ever fully agreed upon on the topic of rate stats in our many discussions at FanHome--at least at the time; I certainly don't presume to speak on behalf of those gentlemen) is to look at BW/PA. If we were working with a standard runs created method, we would look at RC/out. But when our metric has already been baselined to average, we have already incorporated the run value of avoiding outs/generating PA. RAA/Out will double-count that aspect of offense, more or less.

Of course, we all recognize that the value of a run varies depending on the context in which the hitter plays, so we convert RAA to WAA, and we have something like Batting Wins. Let's look at two players credited with a similar number of BW, but in very different contexts with a big difference in PA:

Nap Lajoie, 1903 AL: 5.8 BW in 509 PA
Frank Thomas, 1996 AL: 6.2 BW in 636 PA

Incidentally, the BW figures here are my rough estimates; for the purposes of this discussion, it doesn't really matter how they reflect specifically on Lajoie and Thomas--I don't care to compare them to see who was better, I just needed a good example. They actually differ fairly substantially from those published elsewhere, but that's not important. There will also be some rounding discrepancies from using just one decimal place throughout the post, but the purpose of this exercise is not a precise examination of the two players.

Figuring BW/650 PA, we come up with Lajoie at 7.4 and Thomas at 6.3. From this, we can conclude that Lajoie was significantly more productive on a rate basis as an offensive player, right?

Let's get a second opinion first. If the stat we wanted to put on a rate basis was standard Runs Created, we'd generally do that by taking RC/Out and comparing it to the league average. My estimates have Lajoie at 207 and Thomas at 195. One need not be an expert on the relationship between the scales of the two metrics to realize that 207-195 is a much narrower gap than 7.4-6.3.

What is the cause of this discrepancy? It's not the RC/RAA inputs, since they are based on the same formulas. It's not a case of the metrics being incompatible--RC/Out and RAA/PA (or BW/PA) correlate very highly when the samples are drawn from similar contexts.

The problem is that Plate Appearances (which are obviously the denominator for BW/PA) are not constant across contexts. Outs are, more or less. No matter what era the game is played in, what park it's played in, how many runs are scored, or anything else, there are still three outs per inning. And (approximately) 27 outs per game. Even if you had five inning games in one league and thirteen inning games in another, it will all wash out (or close to it) when you look at runs per out.

On the other hand, plate appearances are not constant across environments. In 1903, AL teams averaged 35.8 PA/G (actually AB+W only), while in 1996 AL teams averaged 38.7. Therefore, 650 PA in 1903 are not equivalent to 650 PA in 1996. 650 PA in 1903 represent the number that an average offense would generate in 18.2 games, but in 1996 they represent just 16.8 games worth.

Getting back to the actual PA used by Larry and the Big Hurt, one would think that, since Thomas came to the plate 127 more times, he participated in a much larger share of his team's PA (even when we recognize the difference in schedule length). However, Lajoie's 509 PA are equivalent to 14.2 games; Thomas' 636 to 16.4 (*). Thomas had 15% more opportunities when you adjust for context, versus 25% more when only raw PA is considered (and this is without considering the difference in season length).

In a higher PA environment, players will get more raw opportunities, but each PA has less impact on wins and losses, as each represents a smaller portion of a game. We can adjust for this by normalizing Plate Appearances to some "reference level", common for all leagues.

So let's instead look at BW/650 PA, except we'll normalize PA to an average of 37.2/game (this is roughly the post-1901 major league average). Lajoie will now be credited with (5.8/509)*(35.8/37.2)*650 = 7.1 BW/650 and Thomas with (6.2/636)*(38.7/37.2)*650 = 6.6 BW/650. The gap is .5 BW, whereas before normalizing PA it was 1.1.

If you'd like a formula:

(Baselined metric/PA)*[(League PA/G)/(Reference PA/G)] = baselined metric/normalized PA

or, equivalently:

baselined metric/normalized PA = [(Baselined metric)*(League PA/G)]/[PA*(Reference PA/G)]

where "reference PA/G" is simply the fixed PA/G value everything is being scaled to (37.2 in the Lajoie/Thomas example)
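
To make the arithmetic concrete, here is a minimal Python sketch of the normalization (the function name and default are mine, not a published implementation), reproducing the Lajoie/Thomas figures above:

```python
def bw_per_650_normalized(bw, pa, league_pa_per_g, reference_pa_per_g=37.2):
    """Baselined wins per 650 PA, with PA normalized so that a plate
    appearance represents the same share of a game in every context."""
    return (bw / pa) * (league_pa_per_g / reference_pa_per_g) * 650

bw_per_650_normalized(5.8, 509, 35.8)  # Lajoie, 1903 AL: ~7.1
bw_per_650_normalized(6.2, 636, 38.7)  # Thomas, 1996 AL: ~6.6
```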

When looking at players within the same league, one doesn't have to worry about this issue--in that situation, one doesn't even have to convert from runs to wins unless they are so inclined.

Let me circle back and explain the underlying premise of this post again, as I'm pretty sure I've been too verbose and may have distracted from it. Basically, the point I am trying to make is that a batter's contribution occurs within the context of his team's games (or, if we'd like to divorce the player from his actual team, the idealized games of a league average team). What matters is not the raw number of plate appearances a batter gets, but the proportion of his team's plate appearances that he gets. That's the point, in a nutshell.

So we could look at Lajoie/Thomas from that perspective as well, making it explicit with the use of percentages. Lajoie played in a league in which there were 140 games in a season and 35.8 PA/G, so the average team would get 140*35.8 = 5,012 PA, of which he was given 10.2% (509/5012). Thomas was given 10.1% of the idealized team's PA (636/(162*38.7) = 636/6269).

Therefore, their opportunity as measured in PA was essentially equal. Thomas actually had 127 more plate appearances because he played in an environment in which there were a lot more to go around in each game, and because he played in a league in which there were 22 extra games played. We want to adjust for the former cause when looking at BW/PA; the latter is not a problem because Thomas also had 22 extra games in which to increase his raw number of BW (it might be something you want to consider, in Lajoie's favor, if you are comparing raw BW totals).
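
The same idea, expressed as a share of an idealized team's PA (again just a sketch, with names of my own choosing):

```python
def share_of_team_pa(pa, games, league_pa_per_g):
    """Fraction of an average team's plate appearances given to the player."""
    return pa / (games * league_pa_per_g)

share_of_team_pa(509, 140, 35.8)  # Lajoie: ~10.2%
share_of_team_pa(636, 162, 38.7)  # Thomas: ~10.1%
```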

(Incidentally, one can use this principle to try to adjust for the differing numbers of PA players get as a result of being on good or bad offensive teams, even within the same league. The most notable metric to incorporate this factor is David Tate's Marginal Lineup Value. I'll leave a full discussion of the pros and cons of that approach for another time).

When expressing an individual batter's productivity as a rate, there are legitimate reasons not to use outs. I've written about some of them before. The good news, though, is that using outs does not cause an excessive amount of distortion on the player level, as long as you don't take it too far (as Bill James' old system of Offensive Won-Loss Records did). If I had to present just one rate stat, and it had to be the most accurate estimate of individual offensive performance I could possibly offer, it would not be runs/out--it would be something like the WAA/Normalized PA presented here, or something even more complex. (Just to be clear: if you use outs as a denominator, the numerator should be absolute runs; if you use PA as a denominator, then you can put your baselined metric in the numerator.)

But the nice thing about working with outs (and I am fully aware that I'm repeating myself) is that outs are constant across all contexts. Outs are fixed at three per inning whether you play in the Baker Bowl in 1930 or in Dodger Stadium in 1968. Avoiding a lot of headaches that come from making sure you've considered all of the variables when using PA as your denominator might well be worth the tiny bit of distortion that comes with using outs. I know it is for me.

(*) If you really want to get cute, you could argue that we want to look at PA/Out as the number of outs is not constant across all league-seasons due to factors like extra inning games, home teams that don't bat in the bottom of the ninth, rainouts, etc. I wouldn't waste my time but I wanted to acknowledge it.

Tuesday, December 15, 2009

Leadoff Hitters, 2009

Once again, here is a look at the composite performances of the players who batted in the leadoff spot for each team. As always, the data includes ALL of the PA out of the leadoff spot. In parentheses I list the players who appeared in twenty or more games in the #1 slot (which is not the same as starting twenty games; they could have been pinch runners, defensive replacements, etc.), but that does not in any way mean that they are the only contributors to the team total.

I always feel obligated to point out that as a sabermetrician, I think that the importance of the batting order is often overstated, and that the best leadoff hitters would generally be the best cleanup hitters, the best #9 hitters, etc. However, since the leadoff spot gets a lot of attention, and teams pay particular attention to the spot, it is instructive to look at how each team fared there.

The conventional wisdom is that the primary job of the leadoff hitter is to get on base and score runs. So let's start by looking at runs scored per 25.5 outs (AB - H + CS):

1. NYA (Jeter), 6.6
2. LAA (Figgins), 6.4
3. TOR (Scutaro), 6.3
7. LA (Furcal/Pierre), 5.9
Leadoff average, 5.3
ML average, 4.6
28. CIN (Taveras/Stubbs/Dickerson), 4.6
29. NYN (Pagan/Reyes/Cora), 4.5
30. OAK (Kennedy/Cabrera/Sweeney), 4.4
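
For reference, the rate used above can be sketched as a small helper (the function name is mine; 25.5 is the approximate number of outs per team game):

```python
def runs_per_25_5_outs(r, ab, h, cs):
    """Runs scored per 25.5 outs, with outs estimated as AB - H + CS."""
    return r * 25.5 / (ab - h + cs)
```

For example, a hypothetical batter with 100 R, 600 AB, 180 H, and 5 CS comes out at 6.0.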

I will always list the top and bottom three, as well as the leader and trailer in each league if they are not already included. There will be some different names popping up on the leader lists, as there were a number of changes involving top leadoff hitters: injury-riddled seasons for Jose Reyes and Grady Sizemore, the flip-flop of Johnny Damon and Derek Jeter, and Hanley Ramirez' move into the #3 slot in the Florida batting order.

Next up is the other obvious metric, On Base Average, which here excludes HB and SF:

1. NYA (Jeter), .398
2. LAA (Figgins), .389
3. SEA (Suzuki), .382
6. PIT (McCutchen/Morgan), .362
Leadoff average, .344
ML average, .330
26. OAK (Kennedy/Cabrera/Sweeney), .320
28. SF (Velez/Rowand/Winn), .304
29. CIN (Taveras/Stubbs/Dickerson), .301
30. PHI (Rollins), .293

Two things jarred me when looking at this list--first, the fact that Pirates leadoff hitters led the NL in OBA. Andrew McCutchen (.366 in 487 PA) and Nyjer Morgan (.351 in 211) both contributed to this feat. Meanwhile, on the other side of the state, Jimmy Rollins led the Phillies to baseball's worst mark.

What I call Runners On Base Average is a modified OBA, equal to the Base Runs A factor per PA (or regular OBA less HR and CS in the numerator). It measures the number of times a player is actually on base available to be driven in by a teammate. It penalizes homers, obviously, but if you believe that the role of a leadoff hitter is to get on base for others, that is not necessarily a drawback. The leaders were:

1. NYA (Jeter), .364
2. LAA (Figgins), .359
3. SEA (Suzuki), .355
4. STL (Schumaker/Ryan/Lugo), .348
Leadoff average, .313
ML average, .296
28. CIN (Taveras/Stubbs/Dickerson), .272
29. DET (Granderson), .266
30. PHI (Rollins), .256
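
A sketch of the definition given above (PA taken as AB + W, matching the OBA definition used in this post; the function name is mine):

```python
def roba(h, w, hr, cs, ab):
    """Runners On Base Average: (H + W - HR - CS) / (AB + W), i.e. times
    actually on base and available to be driven in, per plate appearance."""
    return (h + w - hr - cs) / (ab + w)
```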

The Tigers leadoff men led baseball with 34 homers, dropping their already below-average .321 OBA to last in the AL when homers are removed. Incidentally, Astros leadoff hitters hit the fewest longballs (4).

Runs to RBI ratio is not a measure of quality, but rather of shape. The conventional stereotype of an ideal leadoff man would have a high ratio; those who are non-traditional are more likely to have a low ratio:

1. CIN (Taveras/Stubbs/Dickerson), 2.5
2. STL (Schumaker/Ryan/Lugo), 2.4
3. WAS (Guzman/Morgan/Harris), 2.4
5. LAA (Figgins), 2.1
Leadoff average, 1.6
ML average, 1.0
28. TEX (Kinsler/Borbon), 1.3
29. DET (Granderson), 1.2
30. SF (Velez/Rowand/Winn), 1.2

As you can see with just a glance, R/RBI ratio does not track the quality measures above very closely. Cincinnati ranked in the bottom three in the first group of metrics we examined, but here they lead the way, not due to any particular ability to score runs but due to their anemic .348 SLG (last) and .093 ISO (third last, ahead of only HOU and LAA). The Angels rank high as well, yet did well in runs scored and OBA.

Bill James designed his Run Element Ratio for a similar purpose--identifying whether hitters fit the traditional mold of table setters or cleanup men. RER is the ratio of steals and walks (both events that do little to advance other baserunners) to extra bases (power). We should expect somewhat similar results to R/RBI ratio, but without the influence of teammates and with singles excluded from consideration:

1. LAA (Figgins), 2.4
2. HOU (Bourn/Matsui), 2.0
3. BOS (Ellsbury/Pedroia), 1.4
Leadoff average, 1.1
ML average, .8
28. PHI (Rollins), .7
29. SF (Velez/Rowand/Winn), .7
30. DET (Granderson), .6
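
A sketch of RER as described above, assuming extra bases are total bases minus hits (the usual reading of James' definition; the function name is mine):

```python
def run_element_ratio(w, sb, tb, h):
    """Run Element Ratio: (walks + steals) per extra base on hits."""
    return (w + sb) / (tb - h)
```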

Another Bill James measure was what I'll call Leadoff Efficiency--an estimated runs scored per 25.5 outs. James' formula assumes that 35% of runners on first (estimated as S + W - SB - CS) will score; 55% of runners on second (D + SB); 80% of runners on third (T); and of course homers always result in a run scored. As Tango Tiger has pointed out here in the past, these weights are not particularly accurate, which is evidenced by the fact that the average LE is 6% higher than the average of actual runs scored/25.5 outs for leadoff men. Nevertheless, it is James' metric and I'll present it as he figures it:

1. NYA (Jeter), 7.3
2. SEA (Suzuki), 6.4
3. TOR (Scutaro), 6.3
5. PIT (McCutchen/Morgan), 6.3
Leadoff average, 5.7
ML average, 5.5
28. OAK (Kennedy/Cabrera/Sweeney), 5.0
29. SD (Gwynn/Cabrera), 4.9
30. CIN (Taveras/Stubbs/Dickerson), 4.6
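
James' weights can be sketched as follows (the out estimate of AB - H + CS is my assumption, carried over from the earlier sections; the function name is mine):

```python
def leadoff_efficiency(s, d, t, hr, w, sb, cs, ab, h):
    """James' estimated runs per 25.5 outs: 35% of runners on first
    (S + W - SB - CS) score, 55% of runners on second (D + SB),
    80% of runners on third (T), and 100% of home runs."""
    est_runs = 0.35 * (s + w - sb - cs) + 0.55 * (d + sb) + 0.80 * t + hr
    outs = ab - h + cs  # out estimate; my assumption, as in earlier sections
    return est_runs * 25.5 / outs
```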

Transitioning back to metrics that are designed for more general application, David Smyth has suggested using 2*OBA + SLG for leadoff hitters. Since the most accurate weight for OBA in an OPS-type construction (for the purpose of predicting team runs scored) is somewhere in the vicinity of 1.5-1.8, using a weight of two gives a little bit of a boost to OBA, but not excessively so (and still closer to the ideal weight than what is used in standard OPS or even OPS+). I have taken 70% of the result to bring it back onto the normal OPS scale; since neither OPS nor 2OPS is on an organic scale, we might as well stick with the more familiar scale:

1. NYA (Jeter), 892
2. SEA (Suzuki), 851
3. TOR (Scutaro), 816
5. PIT (McCutchen/Morgan), 811
Leadoff average, 769
ML average, 754
27. OAK (Kennedy/Cabrera/Sweeney), 705
28. PHI (Rollins), 701
29. SD (Gwynn/Cabrera), 694
30. CIN (Taveras/Stubbs/Dickerson), 665
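
A sketch of the 2OPS calculation described above (reported times 1000, as in the list; the function name is mine):

```python
def two_ops(oba, slg):
    """Smyth's leadoff rate: 70% of (2*OBA + SLG), times 1000 to sit
    on the familiar three-digit OPS scale."""
    return round(700 * (2 * oba + slg))

two_ops(0.350, 0.450)  # a hypothetical .350/.450 line: 805
```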

Finally, we can always just evaluate a leadoff hitter in the same way we'd generally evaluate any other: standard Runs Created per Game:

1. NYA (Jeter), 7.1
2. SEA (Suzuki), 6.2
3. PIT (McCutchen/Morgan), 5.7
Leadoff average, 5.0
ML average, 4.8
28. OAK (Kennedy/Cabrera/Sweeney), 4.1
29. SD (Gwynn/Cabrera), 3.8
30. CIN (Taveras/Stubbs/Dickerson), 3.7

If writing a piece like this obligates one to anoint one team's leadoff men as the most effective, then it's the Yankees, led by Derek Jeter. The worst? Well, it's tough to believe, but Willy Taveras managed to do what Jerry Hairston, Corey Patterson, and friends could not in 2008--lead the Reds leadoff slot to the bottom of the rankings in three categories.

Here is a link to a spreadsheet with all of the data, sorted by OBA:

Leadoff Hitters 2009

Wednesday, December 09, 2009

(Informally) Grading BBWAA Award Choices

Last time I tried to explain why I don't particularly care about whom the BBWAA annual awards are bestowed upon, and how my feelings on those awards differ from those I hold on some other awards.

Now I'm going to turn around and talk about the very results I claimed not to care about, which will understandably lead to charges of wanting to have it both ways. Perhaps, but I hope that my previous missive will allow you to see where I'm coming from.

First, a brief digression. While my opinion is of course the one I value most, I am nowhere near vain enough to assume that you care about my opinion (while also recognizing that I am not infallible). So I do take a look at the Internet Baseball Awards, now maintained by Baseball Prospectus, and add my two cents into that voting. I believe that we yahoos on the internet, as a group, do make better choices than the writers do as a group. Are there IBA results that I personally find dubious? Of course, but I think that overall they are more sensible than what the BBWAA proffers.

For fun, I am going to propose a series of letter grades by which to judge the BBWAA awards against your own judgment. I will illustrate this by looking at the MVP winners for the last ten seasons, and comparing my choices to those of the BBWAA and the IBA. In making my selections, I have limited myself to what I felt at the time. I have not gone back and reviewed the statistics (or some of the new data that has become available, like better fielding metrics) to see if I would still view those awards the same way now. Remember, I'm not claiming that my opinion is infallible, and I certainly wouldn't make that claim about what my opinion was ten years ago. Also, the frequency of the grades doesn't make an abundance of sense--A+ is more common than A, for instance. The point here is just to offer a systematic way of categorizing your *own* opinion on the outcome of the vote, with mine just serving as a superfluous example.

The first letter grade is A+ (I've avoided pluses and minuses except in this case, as they are needlessly complex for a silly application, but you could figure out how to mix them in elsewhere if you wanted). An A+ selection is one that you agree with--the singular choice of the BBWAA is the singular choice that you would have made. The last BBWAA A+ selections (in MVP voting and in my opinion, of course, which will go unstated for the rest of the piece) were Albert Pujols and Joe Mauer in 2009.

An A selection is one in which you would have chosen a different player, but could yourself have made the case for the actual winner. Your candidate and the winner were very close, and while you went one way, you wouldn't even waste your time trying to dissuade someone who endorsed the other player. The last A selection for me was Albert Pujols, 2005 NL. I felt that Derrek Lee was a sliver more valuable, but it was hard to argue that with any conviction.

A B selection is one in which you have a clear preference for a different candidate, but you can certainly see why others might support the winning player. This player will probably be in the top five on your ballot (or top three for the Cy Young), and his value estimate should be close enough to that of your player that it is within a reasonably restrictive confidence interval. The last B selection was Dustin Pedroia, 2008 AL. While I felt that one of the top two pitchers (Lee or Halladay) should have won the award, and that Mauer or Sizemore were more deserving position players, Pedroia was hardly an outlandish pick. I didn't endorse him, but it was a solid selection.

A C selection is one where you feel the player was clearly inferior to another, and while he would have been on the bottom of your ballot (or just off of it in the case of the Cy Young), you have a hard time accepting him as the best choice. The last C selection was Jimmy Rollins, 2007. I had Rollins eighth on my ballot, and felt that David Wright and Chipper Jones stood out as the top two. I also had Rollins behind two other players at his position and one other player on his team; he had a fine season, but the MVP was a bit much.

A D selection occurs when you don't feel the player should have even been in the top ten. This will likely only happen when the mainstream evaluation of the player's statistics differs widely from the sabermetric evaluation, or when the media has latched onto a storyline about a particular player and built an MVP case around it. In the last ten years, there has not been a D selection, only because of the (possibly too) large definition I have assigned to grade F.

An F selection is the same as a D selection, except that the player is also judged to be inferior to one (or ideally two or more) comparable players. I used three criteria for comparability:

1) a teammate
2) a player at the same position with a somewhat similar profile as a hitter (Mark Grace would not be comparable to Frank Thomas, even though they were both first basemen; Jim Thome would be)
3) if the winner came from a contender, then a comparable player under condition #2 must have also come from a contender

The last F selection was Justin Morneau, 2006 AL. I believe that Morneau was not one of the ten most valuable players in the American League AND that his case was inferior to that of his teammate Joe Mauer.

I hope I've made it clear that I don't intend this exercise to be taken too seriously; it is just an organized way of assessing how the actual award choice compares to your own. It turns out that, even under the light of the grading system, the MVP choices have been decent for the last ten years. It's been even better in the NL, largely due to the presence of two superstars that are hard to ignore (although the AL does have an answer in Alex Rodriguez).

However, for my money the results of the IBA balloting have been nearly flawless. Only twice in twenty votes did I feel that there was a demonstrably more deserving recipient--and in both of those cases, I accept that it is possible that the IBA winner was truly the MVP under my personal standards (grade B choices). Sixteen times I have agreed with the IBA choice (A+), while three times it has been too close to call and I went with the other good option (A).

The uncharitable way of looking at this would be to say that I am a stathead ideologue, and that the other IBA voters (since they are self-selected among folks who at least have exposure to sabermetrically-aware outlets) are ideologues as well, and so it is no surprise that there is a consensus. Perhaps. I tend to think that it illustrates that an informed, diverse group can make excellent decisions and arrive at consensus through the power of logic and analysis. But in the end it's all just for fun, so that would be a bit far to push it.

Tuesday, December 01, 2009

The MVP, the Hall of Fame, and the Emmys

In the past I have written disdainfully of the BBWAA post-season awards, going so far as to say that I don't care. I've said the same thing about Hall of Fame voting.

Whenever I do this, the post seems to get linked somewhere and people ask "If you don't care about it, why are you writing about it?" It's true that "I don't care" is a fairly strong declaration, and that what I'm actually aiming for is "I don't care about the specific outcomes of the voting process. I am interested in ways in which the outcomes could be improved by changing the process or the voter pool". Of course, if you need to slap a title on your blog post, the former is a lot easier to work with than the latter.  In any event, if you're not interested in my opinion, that's fine by me.  Don't read it.

To belabor this point, let me give you an example by discussing four sets of awards/honors that I don't care about in one way or another: the Daytime Emmys, the Primetime Emmys, the BBWAA awards, and the Baseball Hall of Fame. The exact manner in which I don't care about each differs, and should be illustrative of what I'm getting at:

The Daytime Emmys--I don't care about the Daytime Emmys because I don't watch daytime television. Not only does the identity of the award winners have no impact on me, I know and care next to nothing ("next to" is a necessary qualifier to avoid a gotcha when it turns out I've heard of some soap operas) about what is being honored. I don't know who won the awards, I don't care to know, and I don't have any opinion about who should have won them.

The Primetime Emmys--I may not care who wins the awards, but I watch some of the shows eligible for consideration or know something about the others that I don't watch. I'm not a TV critic and make no claims to be one; I watch what I enjoy, and I don't care whether it is considered worthy of praise by critics or considered to be garbage. While I think that it would be cool if LOST won the Emmy for best drama every year (or Monk and/or The Office for best comedy), I can't say that Mad Men is unworthy, because I don't watch it, know little about it, and I don't evaluate TV shows in the same way that Emmy voters do.

Baseball Hall of Fame--Last year I wrote a couple of posts titled "Why I Don't Care About the HOF". The main point was that I don't care about specific Hall of Fame selections (i.e. "Should Blyleven or Trammell be in?" or the endless Jim Rice debates) because I believe the system is too far gone. There have been so many mistakes made that even a concerted effort going forward will not salvage the Hall of Fame as a means to honor truly great players. Additionally, I believe that one of the reasons for the mistakes is the haphazard means of selecting players that have been employed over the years, and the lack of a coherent vision for the player selection process when the institution was founded.

The concept of a Hall of Fame in general, and how a hypothetical one should be constructed, is of interest to me. And so I do offer comments from time to time on how I feel the current Hall could be improved (although this hypothetical improvement would still be insufficient to salvage the inductee roster at this point), or about how a Hall could be designed in theory.

BBWAA Awards--I think that the questions posed by each of these awards are interesting, and I follow the game closely enough to come to my own informed judgments about which player should win. I think the voting process (ten-man ballot, two voters per city in the case of the MVP) itself is solid. I'm not wild about the instructions laid out for voting, but they could certainly be worse. Most importantly, I think it's worthwhile to honor the best players of each season.

However, while the voting process and instructions are okay, I don't hold the judgment of those doing the voting in particularly high esteem--particularly with respect to a number of de facto criteria that have emerged (or seem to have emerged). Most prominent among the de facto prerequisites I find objectionable are that a player must play for a contender (or otherwise have a clearly superior season to anyone else) and that starting pitchers are not seriously considered. With respect to Rookie of the Year voting, sometimes writers apparently can't be bothered to ascertain which players actually are rookies. And there is the issue that people who will report the news are called on to make the news, which may not have a tangible impact on the voting but raises a red flag just a little bit up the pole.

So at the end of the day I have enough qualms about the BBWAA awards to be uninterested in the results of who wins, except to the extent that the results give us insight into how the voters view the game or how the selection process could be improved. If I feel player X is undeserving, yet he wins the award, I might chuckle and shake my head; I might accuse the voters of overlooking one facet of the game and overvaluing another; but I'm not outraged. I'm not going to write about how Player Y who I prefer was robbed of the award; instead, I'll write about why Player Y really was the most valuable player of the league, which is a question that may be raised and brought to the forefront by the BBWAA awards, but could easily exist in a vacuum (if you think this distinction is splitting hairs, I disagree but understand where you're coming from).

Comparing the Hall of Fame votes to the annual award votes, I prefer the latter. The voting process is designed better, but more importantly, the mistakes of the past only cast a small shadow on present results.

Silly choices by the BBWAA for MVP or Cy Young can set a precedent, to a limited extent. One could attempt to justify voting for a closer as MVP because Willie Hernandez won, or for a player solely on the basis of impressive home run and RBI numbers because of Andre Dawson, 1987. And poor choices, even those in the past, can serve to reduce the respect given to the award.

However, in the case of the Hall of Fame, the mistakes of the past are never far from discussion, since each election builds on the one that came before it. The awards slate is wiped clean each year, but each Hall candidate is compared not only to their ballot mates but to the previous inductees. No single voter is compelled to change his standards to fit previous choices, but comparison to past inductees is unavoidable. And while the impact of a single questionable selection can be minimized (Jim Bottomley doesn't come up much in Hall discussions), a series of questionable selections is harder to push aside (like the Frankie Frisch-era VC selections that Bottomley was a part of). Furthermore, the honor of being a Hall of Famer itself is cheapened by poor selections, as the honor is to be considered in a group with the past inductees.

To summarize, in order to flesh out what I mean when I say I don't care about a certain baseball award, I've offered four gradations of indifference:

1. I care about neither the mission of the award nor the entities being honored (Daytime Emmys)
2. I care about the entities to some extent, but not about the mission of the award (Primetime Emmys)
3. I care about the entities, and think the mission of the award is solid in theory, but the implementation is such that it has lost me other than as a theoretical exercise (Baseball Hall of Fame)
4. I care about the entities, and the mission of the award, but the people entrusted with bestowing the award severely dampen my enthusiasm (BBWAA post-season awards)