## Wednesday, January 07, 2015

### Crude Team Ratings, 2014

For the last several years I have published a set of team ratings that I call "Crude Team Ratings". The name was chosen to reflect the nature of the ratings--they have a number of limitations, several of which I documented when I introduced the methodology.

I explain how CTR is figured in the linked post, but in short:

1) Start with a win ratio figure for each team (wins divided by losses). It could be the actual win ratio, or an estimated win ratio.

2) Figure the average win ratio of the team’s opponents.

3) Adjust for strength of schedule, resulting in a new set of ratings.

4) Begin the process again. Repeat until the ratings stabilize.
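The steps above can be sketched in code. This is a minimal illustration, not the exact published methodology--the function name, the simple arithmetic-average SOS, and the rescaling of the league average to 100 are my assumptions here; the linked post has the real details.

```python
# Sketch of the four-step iteration above. Assumptions (mine, not the
# post's): SOS is a plain average of opponents' ratings, and ratings
# are rescaled each pass so the league average sits at 100.

def crude_team_ratings(win_ratios, schedules, iterations=100):
    """win_ratios: dict of team -> W/L ratio (actual or estimated).
    schedules: dict of team -> list of opponents faced (with repeats)."""
    ratings = dict(win_ratios)  # step 1: seed with the raw win ratio
    for _ in range(iterations):  # step 4: repeat until stable
        new = {}
        for team, opps in schedules.items():
            # step 2: average rating of the team's opponents
            sos = sum(ratings[o] for o in opps) / len(opps)
            # step 3: adjust the raw win ratio for schedule strength
            new[team] = win_ratios[team] * sos
        # rescale so the league average stays at 100
        avg = sum(new.values()) / len(new)
        ratings = {t: 100 * r / avg for t, r in new.items()}
    return ratings
```

Because each pass feeds the adjusted ratings back into the SOS calculation, a team that beat up on strong opponents keeps gaining ground on a team with the same record against a soft schedule.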

First, CTR based on actual wins and losses. In the table, “aW%” is the winning percentage equivalent implied by the CTR and “SOS” is the measure of strength of schedule--the average CTR of a team’s opponents. The rank columns provide each team’s rank in CTR and SOS:

I lost a non-negligible number of Twitter followers by complaining about the playoff results this year. As you can see, the eventual world champs had just the fourteenth most impressive win-loss record when taking quality of opposition into account. The #7 Mariners, #10 Indians, #11 Yankees, and #12 Blue Jays all were at least two games better than the Giants over the course of the season (at least based on this crude method of adjusting win-loss records). Note that this is not an argument about "luck", such as when a team plays better or worse than one would expect from their component statistics; this is about the actual win-loss record considering opponents' records.

San Francisco played the second-worst schedule in the majors (90 SOS); of the teams that ranked ahead of them in CTR but failed to make the playoffs, Toronto had the strongest SOS (107, ranking seventh). Based on the Log5 interpretation of CTR described in the methodology post, this suggests that Toronto’s average opponent would play .543 baseball against San Francisco’s average opponent. The magnitude of this difference can be put into (potentially misleading) context by noting that the long-term home-field W% of major league teams is around .543. Thus the Giants could be seen as having played an entire 162 game schedule at home relative to the Blue Jays playing an even mix of home and road games. Another way to look at it is that Toronto’s average opponent was roughly equivalent to St. Louis or Pittsburgh while San Francisco’s average opponent was roughly equivalent to Milwaukee or Atlanta.
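Under the Log5 interpretation, a CTR behaves like a win ratio, so team A's expected W% against team B is simply A / (A + B). Plugging in the SOS figures quoted above reproduces the .543 figure:

```python
# Log5 with ratings treated as win ratios: expected W% of a team
# rated ctr_a against a team rated ctr_b.

def log5_wpct(ctr_a, ctr_b):
    return ctr_a / (ctr_a + ctr_b)

print(round(log5_wpct(107, 90), 3))  # TOR's average opponent vs SF's: 0.543
```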

On the other hand, the disparity between the best teams as judged by CTR and those that actually made the playoffs is solely a function of the AL/NL disparity--the five playoff teams in each league were the top five teams by CTR. The AL/NL disparity is alive and well, though, as seen by the average rating by league/division (actually calculated as the geometric average of the CTR of the respective clubs):

While this is not the AL’s largest advantage within the five seasons I’ve published these ratings, it is the first time that every AL division is ranked ahead of every NL division. Typically there has been a weak AL division or strong NL division that prevented this, but not in 2014. Match up the AL’s worst division and the NL’s best division (both the Central) and you can see why:

The two teams that battled to the end for the AL Central crown stood out, with the NL Central’s two combatants unable to distinguish themselves from Cleveland, who hung around the periphery of the AL Central race throughout September but was never able to make a charge. In all cases the Xth place team from the ALC ranks ahead of the Xth place team from the NLC. In fact, the same holds true for the other two geographic division pairings:

This would also hold for any AL/NL division comparison rather than just the arbitrary geographic comparisons, except for the NL East v. AL Central, where the NL-best Nationals rank ahead of the Tigers 129 to 123.

The AL’s overall CTR edge of 106-89 implies that the average AL team would have a .544 record against the average NL team, similar to the gap between SF and TOR opponents described above. This is very close to the AL’s actual interleague record (140-117, .545).

All the results discussed so far are based on actual wins and losses. I also use various estimated W%s to calculate CTRs, and will present those results with little comment. First, CTR based on gEW%, which considers each team’s distribution of runs scored and allowed per game independently:

Well, I will point out that by gCTR, the world champions are the epitome of average. Next is CTR based on EW% (Pythagenpat):
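For reference, Pythagenpat estimates W% from runs scored and allowed with an exponent that floats with the run environment. A sketch--the 0.29 constant is a common choice for the exponent but an assumption on my part here, and the team totals in the comment are made up:

```python
# Pythagenpat: estimated W% from runs (r), runs allowed (ra), and
# games (g). The exponent grows with runs per game, so blowout-heavy
# environments translate run differentials into fewer wins.

def pythagenpat_wpct(r, ra, g):
    x = ((r + ra) / g) ** 0.29  # assumed exponent constant
    return r ** x / (r ** x + ra ** x)

# e.g. a hypothetical team scoring 700 and allowing 600 over 162
# games comes out around .570, roughly a 92-win pace.
```

The same function drives the PW% ratings below; the only change is feeding it Runs Created and Runs Created Allowed instead of actual runs.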

And based on PW% (Pythagenpat using Runs Created/Runs Created Allowed):

Last year I started including actual W-L CTR including the results of the playoffs. There are a number of reasons why one may want to exclude the playoffs (the different nature of the game in terms of roster construction and strategy, particularly as it relates to pitcher workloads; the uneven nature of the opportunity to play in the postseason and pad a team’s rating; etc.), but in general the playoffs provide us with additional data regarding team quality, and it would be prudent to heed this information in evaluating teams. The chart presents each team’s CTR including the playoffs (pCTR), their rank in that category, their regular season-only CTR (rsCTR), and is sorted by pCTR - rsCTR:

Last year there was not a lot of movement between the two sets of ratings, since the top regular season teams also won their league’s pennants. It should be no surprise that both wildcard pennant winners in 2014 were able to significantly improve their standings in the ratings when the postseason is taken into account. Still, San Francisco ranks just ninth, still trailing Seattle, who didn’t even make the playoffs, and Kansas City is a distant third behind the two teams they beat in the AL playoffs, Los Angeles and Baltimore.