Saturday, January 28, 2012

Crude Team Ratings, 2011

Anyone can throw together a spreadsheet and declare that they have a ranking system for teams. It’s not particularly hard to construct a reasonable method by which to take an initial estimate of team strength, adjust for strength of schedule, recalculate each team’s rating, adjust for SOS again, rinse, repeat. I have done just that, and will present the 2011 ratings here.

If you want the full details, please refer to the linked post. The gist of the system is:

1) Start with a win ratio figure for each team. It could be actual win ratio, or an estimated win ratio.

2) Figure the average win ratio of the team’s opponents.

3) Adjust for strength of schedule, resulting in a new set of ratings.

4) Begin the process again. Repeat until the ratings stabilize.

The resulting figure is in the form of an adjusted win ratio; I force the average team to a rating of 100. The ratings can be plugged directly into an odds ratio--a team with a rating of 120 should win about 60% of the time against a team with a rating of 80 (120/(120 + 80)).
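The odds-ratio application is trivial to express in code; the function name here is my own:

```python
def win_prob(rating_a, rating_b):
    """Expected probability that team A beats team B, treating the
    CTR values as an odds ratio."""
    return rating_a / (rating_a + rating_b)
```

So `win_prob(120, 80)` gives .600, and two teams with equal ratings come out at .500.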

I’ll present four different sets of ratings here, each using a different win ratio as the input. It’s overkill to run this many, but if you happen to prefer a particular estimate of win ratio, it may be represented here.

Since 2011 is in the past, there’s no particular value in predictive ratings, so I’ll focus on the CTR based on actual wins and losses:

aW% is the adjusted W% based on CTR; SOS is the weighted average CTR of the team’s opponents; rk is the team’s ranking among the thirty teams; and s rk is the SOS rank.

The results aren’t particularly surprising; the teams are ranked pretty close to how they would be in W%. In some recent years, the results would favor AL teams much more than just looking at pure W%, but the National League held its own with the AL in 2011 as seen from the league/division ratings (simply the average rating for each member team):

That makes for a nice rank order of divisions, with East > West > Central, and AL > NL in each case. Still, the overall AL/NL rating difference of 103/97 is a lot smaller than previous seasons, including 108/93 in 2010. While the NL Central remained the weakest division, 89 was an improvement over the 82 rating in 2010. If Houston was in the AL rather than the NL (and assuming all the ratings stayed constant), the leagues would have each had a CTR of 100.

The next set of CTRs is based on Game Expected W% as described in this post. Basically, gEW% assumes independence between runs scored and runs allowed in a given game, and uses the 2011 empirical W% for teams scoring or allowing X runs in conjunction with each team’s actual game-by-game distribution of runs scored and runs allowed to estimate their W%. The resulting CTRs:
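The gEW% idea can be sketched as follows. The input format and the particular way of combining the offensive and defensive components (adding two .500-centered figures and subtracting .5) are assumptions for illustration, not necessarily the exact formula from the linked post:

```python
def game_expected_wpct(rs_dist, ra_dist, w_when_scoring, w_when_allowing):
    """Estimate a team's W% from its game-by-game run distributions.

    rs_dist / ra_dist: dicts mapping runs -> fraction of the team's games
        in which it scored / allowed that many runs
    w_when_scoring / w_when_allowing: dicts mapping runs -> league-wide
        empirical W% for teams scoring / allowing that many runs
    """
    # offensive component: expected W% given this team's distribution
    # of runs scored, against a league-average defense
    g_ow = sum(p * w_when_scoring[r] for r, p in rs_dist.items())
    # defensive component: same idea, from the runs-allowed side
    g_dw = sum(p * w_when_allowing[r] for r, p in ra_dist.items())
    # combine the two .500-centered components (one simple way to do it)
    return g_ow + g_dw - 0.5
```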

Using classic Pythagenpat as the input:
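For reference, the Pythagenpat estimated win ratio looks like this; the exponent constant z = .29 is a commonly used value and an assumption on my part, as published versions range from about .28 to .29:

```python
def pythagenpat_win_ratio(rs, ra, games, z=0.29):
    """Estimated win ratio from runs scored and runs allowed.

    The Pythagorean exponent x is itself estimated from the run
    environment: x = (runs per game) ** z.
    """
    rpg = (rs + ra) / games          # total runs per game, both teams
    x = rpg ** z                     # Pythagenpat exponent
    return (rs / ra) ** x            # estimated win ratio (W/L)
```

A team scoring 800 and allowing 700 over 162 games comes out with a win ratio of roughly 1.29, i.e. a W% in the neighborhood of .563.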

Finally, using Pythagenpat estimated win ratios based on runs created and runs created allowed:

Obviously, any number of combinations of win ratio estimates could be used as inputs, regression could be mixed in, and so on. What I’ve presented here are just the most straightforward ratings based on obvious single inputs.
