Monday, October 18, 2010

Even More Mundane Comments on the Playoff Structure

You certainly don't need me to point out to you that run scoring is down in the playoffs compared to the regular season. I am just going to give you some data on the matter, and half-heartedly explore one possible explanation for why that is.

I figured the RPG (total runs in the game by both teams) for every World Series (through 2008) and a comparison of that to the overall RPG for that season (figured as a simple average of the AL and NL RPG) and the RPG for the two World Series participants (again, a simple average). I have limited the scope to the World Series so that the cross-era comparisons are on more of a level footing.
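If it helps to see the arithmetic spelled out, here is a minimal Python sketch of these comparisons (and of the decade averaging described next); the function names and the sample run totals are hypothetical stand-ins, not the actual data behind the chart:

```python
# Minimal sketch of the RPG comparisons described above. The inputs are
# made-up stand-ins; in practice they would come from game logs.

def world_series_rpg(ws_game_totals):
    """RPG for a single World Series, given total runs (both teams) in each game."""
    return sum(ws_game_totals) / len(ws_game_totals)

def league_rpg(al_rpg, nl_rpg):
    """Overall season RPG, figured as a simple average of the AL and NL RPG."""
    return (al_rpg + nl_rpg) / 2

def participants_rpg(team_a_rpg, team_b_rpg):
    """RPG for the two World Series participants, again a simple average."""
    return (team_a_rpg + team_b_rpg) / 2

def decade_average(values_by_year, start, end):
    """Simple average of yearly values for start..end (e.g. 1900-1909)."""
    vals = [v for yr, v in values_by_year.items() if start <= yr <= end]
    return sum(vals) / len(vals)

# Example with invented numbers: a five-game Series with 8, 3, 12, 5, and 7
# total runs works out to 7.0 RPG, which would then be set against the
# league and participant figures for that season.
print(world_series_rpg([8, 3, 12, 5, 7]))   # 7.0
```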

I've averaged the data by decade (a simple average for 1900-1909, 1910-1919, etc.) so that we can see how it has changed over time:

[Chart: World Series RPG compared to regular season and participant RPG, by decade]

Frankly, this chart surprised me. I had expected that in recent years the disparity between regular season and World Series RPG levels would have increased, but in fact recent decades are the closest matches for regular season scoring.

Since I assumed this to be the case, I was going to put forth the argument that the playoff structure, coupled with changes in the game (specifically the increased use of relief pitchers), has caused the post-season to become a different game from the regular season, to a greater extent than in the past. My personal take on this phenomenon was going to be that it was unfortunate--that run scoring levels, pitcher usage, strategy choices, etc. should ideally be as close as possible between the regular season and the post-season. I don't like the idea of playing 162 games under one set of conditions and then switching to very different conditions to crown a champion.

But my assumption was unfounded. While run scoring declines in the World Series (you can pick your explanation--colder weather, reduced usage of marginal pitchers, increased usage of one-run strategies, or whatever other theory you'd like to advance), the decline has not grown over time. Today's World Series are generally as close to regular season scoring levels as they have ever been.

One other little tidbit to note is that generally, with the 1990s and 2000s actually being the most obvious exceptions, the pennant winners combine for a higher RPG than the majors as a whole. Obviously we expect that pennant winners are very good teams, and will likely both score more runs than the league average and allow fewer. If a team were equally good offensively and defensively in terms of runs above average, then their RPG would be equal to the league average. For example, if the league averages 4.5 runs per team per game (9.0 RPG), a pennant winner that scores 5.0 and allows 4.0 per game still plays in 9.0 RPG games.

However, if pennant winners were especially strong on defense relative to offense, then their RPG should be lower than the league average. Of course, you can rightly point out that runs scored and allowed have different win values, dependent on each team's particular combination of runs scored and allowed, and so you don't want to draw too much of a conclusion from this one way or another. Park factors are also ignored by this crude comparison. But if pitching and defense were everything, as a minority of traditionalists would have you believe, then pennant winners should certainly have lower RPG than the league average.

Moving along, one possible explanation for lower scoring levels in the post-season is increased usage of top pitchers. I did a little crude investigating on this front by figuring the percentage of regular season innings thrown by a team's top three pitchers (in terms of innings), and comparing that to the percentage of World Series innings thrown by the top three pitchers (again, in terms of innings). Please note that I did not consider the same three pitchers--the group under consideration is the three pitchers with the most innings in the games being considered (regular season or World Series).
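For concreteness, here is a rough Python sketch of the IP% calculation (along with the RAT ratio defined below); the pitcher labels and innings totals are invented purely for illustration and are not drawn from any actual staff:

```python
# Rough sketch of IP%: the share of a team's innings thrown by its three
# busiest pitchers in whatever set of games is being considered.

def top_three_ip_pct(innings_by_pitcher):
    """Percentage of total innings thrown by the top three pitchers by IP."""
    innings = sorted(innings_by_pitcher.values(), reverse=True)
    return 100.0 * sum(innings[:3]) / sum(innings)

# Regular season: the whole staff's innings (hypothetical numbers).
reg = {"Starter A": 250, "Starter B": 230, "Starter C": 210,
       "Starter D": 180, "Swingman": 120, "Rest of bullpen": 460}

# World Series: innings in those games only; the top three need not be
# the same three pitchers as in the regular season.
ws = {"Starter A": 17, "Starter B": 14, "Starter C": 13,
      "Starter D": 5, "Closer": 6, "Other relievers": 8}

reg_ip_pct = top_three_ip_pct(reg)    # about 47.6%
ws_ip_pct = top_three_ip_pct(ws)      # about 69.8%
rat = 100.0 * ws_ip_pct / reg_ip_pct  # ratio of WS IP% to Reg IP%, about 147
print(round(reg_ip_pct, 1), round(ws_ip_pct, 1), round(rat))
```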

The reason I chose three pitchers is because presumably the top three in IP will be the front three starters, which is all teams have traditionally needed to use in a seven-game series (of course in today's game four starters are usually employed). There are a number of weaknesses to this approach, including but by no means limited to:

1. It doesn't include the effect of relief aces, who have a disproportionate impact on win probability thanks to working in high leverage situations, and are often employed differently in the playoffs. They are also a relatively modern phenomenon that will damage cross-era comparisons of IP%.

2. It doesn't account for injuries and other factors that alter pitching workloads. If, for instance, a top pitcher is out for the Series, IP% will likely be lower than it might have been, but only because of the absence of the pitcher, not because of any intentional alteration in strategy.

3. IP% can be highly influenced by series length. If a series only goes four games, then it is likely that a larger percentage of the workload can be borne by the key pitchers of the staff.

4. Rainouts or other delays in the series can greatly skew the results by allowing pitchers to pitch more than they would have. This is particularly evident in the 1989 World Series and its earthquake delay; the A's IP% for the series was 88%, the highest in twenty years.

5. I am only considering the World Series; presumably managers are more conservative with the usage of their pitching staffs in earlier playoff rounds, or at least no more aggressive.

So I'm not claiming that these results are particularly informative. Nonetheless, I broke them up by decade as I did for the RPG data. Reg IP% is the regular season IP%, averaged across the two pennant winners and then simply averaged for the decade; WS IP% is the same for the World Series; and RAT is the ratio of WS IP% to Reg IP%, expressed as a percentage:

[Chart: Reg IP%, WS IP%, and RAT by decade]

Again, I have to admit this is not what I expected to see. I expected that teams of earlier eras, heavily concentrating their workload on a few pitchers to begin with, would show a more even IP% between the regular season and World Series. The opposite appears to be true; earlier teams ratcheted up the workload for frontline pitchers in the Series to a greater extent than today's pennant winners do. Of course, the weakness in using three pitchers is illustrated by the fact that the ratio was fairly stable until the 1970s, around which time the trend towards larger starting staffs was accelerating.

Again, the data here is by no means conclusive or even particularly insightful. However, I expected to find support for my seat-of-the-pants belief that the style of play in the playoffs had become more removed from the regular season over time. Instead, I have no solid ground to stand on to make such a claim (there may well be data out there that would support such a position, but it isn't here).

My argument would have been that since 1) teams could now get a higher percentage of innings from front-line pitchers in the World Series than in the regular season, and 2) run scoring declined more precipitously in the World Series, changes should be made to the playoff series format to make it closer to regular season conditions. The most obvious alteration would be to eliminate off-days, which would rule out the possibility of using a three-man rotation and possibly even encourage the use of five starters, as in the regular season.

Leaving aside the practical problems with such a change (chief among them revenue concerns), I personally believe that such modifications would make the playoff series a better test of team strength since they would more closely track the conditions of the regular season. But I didn't find any evidence that the disparity between regular season play and World Series play has increased over time--to the limited extent that the data here addresses the issue, the disparity has actually lessened. Any push for changes that would close the gap is undermined by the fact that larger disparities (at least in terms of these two measures) were accepted throughout the twentieth century.

6 comments:

  1. Great job. If I am reading your first graph correctly, it looks like in the 1990s and 2000s the teams making the world series were below average in scoring during the regular season. Is that right?

  2. No, not exactly, but I'm glad you brought it up because you are not the only one I managed to mislead.

    RPG in the chart is runs/game by both teams--that is, (R + RA)/G, runs scored and given up. The teams that reached the WS in the 90s and 00s had fewer total runs scored in their games than did the average major league team. I do not have the R/RA breakdown handy, but I'm sure that they scored more runs than the average team, but that they also allowed fewer.

    I should have included a look at ML average RPG with the Rockies removed. In 1998, for instance, the major league RPG was 9.58, but only 9.52 when Colorado is removed. The drop in the 90s/00s relative RPGs of pennant winners may be heavily influenced by the presence of the most extreme park of the 20th century.

  3. I think Park Factor is playing a big part in what you're seeing historically. With a large % of the historical games starting in the 1920s being played in old Yankee Stadium, which was heavily pitcher-friendly, that would certainly cause a deviation from the ML average for those years.

  4. That's certainly possible, but the effect should be tempered to some degree by the comparison to the RPG of the teams involved, which display a similar pattern to the total league comparison.

  5. Good pitching always beats good hitting, and the teams in the World Series always have good pitching. (re: '69 Mets, '85 Royals, '10 Giants)

    And in baseball, good teams are still based on pitching and defense.

    I know I'm rambling and I have no evidence to back me up beyond conventional wisdom that has been around for 150 years, but it works for me.

