## Tuesday, July 12, 2011

### Crude Team Ratings at the All-Star Break

Crude Team Ratings are a system I put together last year to adjust team records for strength of schedule. The resulting value is expressed on a scale where an average team gets 100, and the numbers themselves can be plugged directly into an odds ratio calculation. If a team with a rating of 120 plays a team with a rating of 90, they should win about 120/(120 + 90) = 57% of the time. Because of the way the ratings are calculated (explained in the linked article), a rating of 100 does not mean a .500 team--a .500 team will actually be a shade below 100 in a normal league.
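The odds-ratio step above is simple enough to sketch in a few lines of code (the ratings here are on the 100-average scale; the example teams are invented):

```python
# Head-to-head win probability from the odds-ratio method described above.
# Ratings are on the scale where an average team is 100.

def win_prob(rating_a: float, rating_b: float) -> float:
    """Probability that the team rated rating_a beats the team rated rating_b."""
    return rating_a / (rating_a + rating_b)

# The example from the text: a 120 team against a 90 team.
print(round(win_prob(120, 90), 3))  # 120/(120 + 90) = 0.571
```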

The ratings are similar in theory to those published elsewhere, and so there’s nothing particularly unique or interesting about them. But I felt that the All-Star break was a logical point to stop and take a look at the ratings as they stand, both because interleague play is now complete and we can get a better read on the difference between the leagues, and because having some idea about strength of schedule to date (and in the future, although I haven’t figured that here) is helpful when handicapping the pennant races.

I will run through three sets of CTRs--one based on win/loss record (CTR), one based on R/RA (eCTR), and one based on RC/RC Allowed (pCTR). I prefer the latter two, especially at this point in the season, but you could also use a combination of the three or factor in projections. I slapped the "crude" label on them for a reason.
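The actual CTR calculation is explained in the linked article; as a rough illustration only, here is one common iterative scheme for schedule-adjusted ratings of this general sort (the records and schedule below are made up, and this is not necessarily the exact method used for the tables that follow):

```python
# An illustrative sketch of iterative schedule-adjusted ratings, scaled so
# the average team is 100. Not the exact CTR formula -- just the general idea:
# each team's rating is its W/L odds ratio times its average opponent rating,
# repeated until the ratings settle.

def crude_ratings(records, schedule, iters=100):
    """records: {team: W%}; schedule: {team: list of opponents faced}."""
    ratings = {t: 100.0 for t in records}
    for _ in range(iters):
        for t in records:
            opp_avg = sum(ratings[o] for o in schedule[t]) / len(schedule[t])
            odds = records[t] / (1 - records[t])  # win/loss odds ratio
            ratings[t] = odds * opp_avg
        # renormalize so the league average stays at 100
        mean = sum(ratings.values()) / len(ratings)
        for t in ratings:
            ratings[t] *= 100.0 / mean
    return ratings

# Toy example: three teams with a balanced round-robin schedule.
recs = {"A": 0.600, "B": 0.500, "C": 0.400}
sched = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print(crude_ratings(recs, sched))
```

With a balanced schedule the opponent adjustment does little, as you'd expect; the adjustment only matters when teams face schedules of different strengths.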

First, here are the CTRs based on actual win/loss record:

The league ratings (which are simply the average rating of the division or league’s members) show the AL with a much smaller advantage over the NL than in recent years. However, as you’ll see, the AL advantage grows as we move further away from actual record towards component record.

CTRs based on expected record (runs scored and allowed):

The top three teams remain the same across the three approaches, but a different club is ranked #1 each time. Houston actually starts to look a little better as you move from actual record toward component record, and only shares last place on the predicted list:

Boston’s component record has easily been the most impressive in MLB to date when adjusted for schedule. The Giants have played the weakest schedule by any measure and here only appear to be an average club. Pittsburgh has both exceeded its component record and benefitted from a weak schedule. Cleveland comes out better, as essentially an average team, and I was surprised to see that the Tribe has actually played a tough schedule.

Actual record gave the NL East the distinction of best division, but here the AL East returns to its customary position. The two Centrals and the NL West are the weak divisions; the most notable change is that the NL Central is no longer alone at the bottom of the barrel, as it was in 2010.

Finally, here is a freak show way of comparing the performances of teams so far in 2011 to what they did in 2010. The first column shows each team's 2011 pCTR to date; the second column is their 2010 pCTR; and the third column is the implied winning percentage of the 2011 team against their 2010 predecessors. Of course, I'm comparing full seasons to (slightly more than) half seasons, not applying any regression, and presenting it as a W% is just a cute device. (Taking the result seriously would also imply that an average team in 2010 and an average team in 2011 are equally good):
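The third column is just the odds-ratio calculation applied across seasons, treating the 2011 team and its 2010 self as opponents (the ratings below are invented for illustration, not actual values from the tables):

```python
# Cross-season comparison described above: a team's 2011 pCTR against its
# own 2010 pCTR, run through the same odds-ratio calculation.
# The example ratings are hypothetical.

def implied_w_pct(pctr_2011: float, pctr_2010: float) -> float:
    """Implied W% of the 2011 team against its 2010 predecessor."""
    return pctr_2011 / (pctr_2011 + pctr_2010)

# e.g. a team rated 115 this year facing its 90-rated 2010 self:
print(round(implied_w_pct(115, 90), 3))  # 115/205 = 0.561
```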

The Pirates and Indians being near the top of the list won't surprise anyone, but the Red Sox, while really good last year, have been great so far. The range (if not the standard deviation; I'm not going to bother) of these theoretical this-year-versus-last-year W%s is pretty close to what you'd expect for the range of team W% in the current season.