Disclaimer: This post doesn't really have any direction; it wanders around to no real end. It also seemed a lot more interesting when it was in my head than it did after it was on the screen.

As you know, between 1900 and 1960, each major league without exception consisted of eight teams, with the team with the best regular season record taking the pennant. The expression "first division", still in limited but diluted use today, referred to the top four teams in the circuit. Over the course of those six decades, there was one league that clearly stood out from the others in terms of the gap between the first division and the second division.

If one endeavors to quantify the amount of balance in a league's standings, there are a number of different possible approaches. The statistically-minded among us might immediately think about measuring the standard deviation of team winning percentage in each league-season, for instance.

But what about measures that would specifically measure the discrepancy in quality between the first division and the second division? Of course there are a number of different routes that one could go, but among the most obvious simple approaches are:

1. the fifth place team's games behind the pennant winner--this will tell you how large the gap was between the top of the first division and the top of the second division. I'll call this GB(5).

2. the winning percentage of the fourth place team--this is the target you're shooting for if you want to be a first division team. I'll call it W%(4).

3. the winning percentage of the fifth place team--this tells you where the second division begins in terms of wins. I'll call it W%(5).

4. the fifth place team's games behind the fourth place team--this is the direct gap between the first and second divisions. I'll call this GB(4-5).

5. the aggregate winning percentage of the first division--or of the second division, but it doesn't matter, as these two mathematically must be complements. I'll call this FD%.

These five indicators are all related to the quality gap between the first and second divisions, but come at it from slightly different perspectives. The first four approaches each ignore the entirety of the second division except for the fifth place team, but by doing so they establish the boundary between the two divisions. The last combines each group, but does nothing to temper the influence of outliers (on the high or low ends).

You could of course come up with other measures, but this is not intended to be a rigorous statistical examination--I just want to be able to establish that the league-season in question was somewhat unusual, and these rudimentary measures are sufficient for that purpose.

This is not a trivia post--if you want to guess, do it now, because the league in question is the 1950 AL. Here are the standings:

The imbalance, perfectly divided into two groups, jumps right off the page. The Yankees captured their second straight pennant by three games over the Tigers, with the Red Sox and Indians also in the hunt. But the second division lagged far behind, with the Browns and A's losing the equivalent of 100 games in a 162-game schedule.
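For concreteness, all five measures are trivial to compute from a set of standings. Here is a minimal Python sketch using the 1950 AL win-loss records (taken from the standard references; the team abbreviations and layout are just for illustration):

```python
# 1950 AL final standings (wins, losses), ordered by record.
standings = [
    ("NYA", 98, 56), ("DET", 95, 59), ("BOS", 94, 60), ("CLE", 92, 62),
    ("WAS", 67, 87), ("CHA", 60, 94), ("SLA", 58, 96), ("PHA", 52, 102),
]

def games_behind(leader, trailer):
    """Games behind = average of the win deficit and the loss surplus."""
    (_, w1, l1), (_, w2, l2) = leader, trailer
    return ((w1 - w2) + (l2 - l1)) / 2

wpct = [w / (w + l) for _, w, l in standings]

gb5   = games_behind(standings[0], standings[4])  # GB(5): fifth place vs. pennant winner
wpct4 = wpct[3]                                   # W%(4): floor of the first division
wpct5 = wpct[4]                                   # W%(5): ceiling of the second division
gb45  = games_behind(standings[3], standings[4])  # GB(4-5): gap between the divisions
fd    = sum(w for _, w, l in standings[:4]) / sum(w + l for _, w, l in standings[:4])

print(gb5, round(wpct4, 3), round(wpct5, 3), gb45, round(fd, 3))
# → 31.0 0.597 0.435 25.0 0.615
```

With every team playing the same 154-game schedule, the games-behind convention reduces to a simple difference in wins, but the averaged form also handles mid-season standings where games played differ.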

You certainly don't need to formally look at any data to know that these are odd standings. Oddities like this are what can make flipping through a baseball encyclopedia so rewarding for those who dabble in statistics. There is a silly but tangible sense of discovery when you find some unusual statistical line or set of standings that you had not been previously aware of.

Anyway, this is a sabermetric blog, so you're going to be stuck with some pseudo-analysis rather than just a "gee whiz!". First let's look at the standard deviation of W%, which as mentioned above speaks to the balance of wins in the league but not specifically to the chasm between the first and second divisions. Still, out of the 123 major league-seasons in the 1900-1960 period, the 1950 AL ranks 14th in standard deviation of W%:

Many of the highest standard deviations occurred in the first twenty years of the period, so the 1950 AL ranks second in the post-war, pre-expansion era. Still, it doesn't stand out as anything remarkable in this respect as just four years later the standard deviation would be greater in the junior circuit. Here are the 1954 standings:

In this case, the chasm was between third and fourth place rather than fourth and fifth, and the standard deviation was high largely due to the Indians' rampage coupled with a strong effort from the Yankees (their 103 wins would have won the AL pennant in any other season between 1947 and 1960).

Moving on to the measures that specifically address the difference between the first and second division, we first have GB(5), which tells us how close the top of the second division was to the pennant. The 1950 AL was well above average but not remarkable in this regard, ranking in a tie for sixteenth-highest with the 1934 AL:

Of course GB(5) is strongly related to the performance of the pennant winner. You can see from the table that many of the league-seasons featured the great teams of the period...the 1906-07 Cubs, 1954 Indians, 1927 Yankees, 1931 A's, and the like. So I also looked at GB(5) divided by wins of the pennant winner, which bumps the 1950 AL up to thirteenth place.

Next we have W%(4), the floor of the first division, and this is where the unique nature of the 1950 AL starts to shine through. Cleveland, in fourth place at 92-62 (.597), had the highest W% of any fourth place finisher of the period, and it wasn't even close:

As you can see, it was by no means the first time that the Indians had a standout record for a fourth-place finisher.

In terms of W%(5), the ceiling of the second division, the 1950 AL made the bottom three (lowest W% by a fifth-place team):

Here it's the 1931 AL that leads the way, with St. Louis topping the second division with a 63-91 mark. As you might imagine, the race for fifth was very close, with Boston just one game back and Detroit two.

So it is no surprise that when we look at the gap between the first and second divisions, no league managed to come within five games of the 1950 AL:

Finally, we have FD%, which is the aggregate W% of the first division clubs. The 1950 AL comes in sixth:

As you can see, the 1950 AL leads the way among all post-1932 leagues, with the aforementioned 1954 AL next in line.

I hope that the combination of the "look test" and the data above will be enough to demonstrate that the 1950 AL was a uniquely two-tiered circuit. For those of you who write good history articles, I think that a brief history of how the AL franchises' fortunes ebbed and flowed so as to create the conditions necessary for this historic imbalance would be a very interesting piece.

I don't write good history articles, so the Cliff's Notes (and potentially misleading) summary of the second division could be as follows:

* The White Sox never really recovered from the Black Sox scandal, with eight games back in 1940 the closest they got to a pennant.

* The Senators were solid contenders in the mid-20s and early 30s, winning three pennants, but outside of that, there's a reason "First in war, first in peace, last in the American League" was in use.

* The Browns had to share St. Louis with the Cardinals, and while neither team was strong in the first twenty years of the century, the Cards blew past them in on-field success by the late-20s and became the more popular draw, despite Bill Veeck's later desperate efforts to win the patronage of the city's fans.

* Connie Mack was never able to rebuild the A's all the way again after selling off his stars of the 1930s--they had been respectable in 1947-49, but 1950 saw them collapse.

What would make such a piece more interesting is the fact that the form of 1950 generally held throughout the rest of the pre-expansion period. The degree of polarization between the strong and weak teams was never again as extreme, of course, but with one major exception, the teams essentially stayed in their divisions throughout the decade.

The table below gives the finish for each franchise (sticking with their 1950 abbreviation in the case of Philadelphia (Kansas City) and St. Louis (Baltimore)); the first division finishes have been bolded:

As you can see, the Yankees stayed in the first division for the next ten years, while the Indians missed just once and the Red Sox thrice. The Senators stayed in the second division the whole time, with the A's and the Browns franchise each escaping just once.

There was one significant change from the standings of 1950, and that was the reversal of fortune for the Tigers and White Sox. The Tigers would make it back into the first division just twice over the period, while the White Sox joined the Yankees in never missing over the next ten years.

Not only did the 1950 AL feature a huge gap between the first and second divisions, but the second division also represented a sort of permanent league underclass and the first division a permanent group of contenders, with the aforementioned exceptions of the White Sox and Tigers (of course, the large gap does indicate that the second division teams had a lot of work to do, so this is not entirely surprising). The second division saw three of its four members move to greener pastures, while the teams of the first division all remain in the same place today. With the exception of the White Sox, it would be fifteen years before a second division team of 1950 was able to win a pennant (the '65 Senators (Twins)).

After that, things got better quickly for the underclass, as the Orioles would emerge as the most consistent AL franchise of the next twenty years and the A's, after another move, would become just the second franchise to win three consecutive World Series. Meanwhile, the first division Indians tumbled into thirty years of hopelessness, emulating the historical examples of the Browns and Senators. But the particular state of imbalance in the AL, best demonstrated in the standings of 1950, had held for a long time.

## Monday, July 27, 2009

### An Unusual League

## Tuesday, July 21, 2009

### On the World Series Home Field Advantage

A week ago, the American League once again defeated the Neanderthal League (*) in the All-Star Game, securing home field advantage for the World Series. The "This time it counts" mantra about the game is premised on the notion that home field advantage is a significant thing to have (or at least the hope that TV viewers will believe that it is). So it is only natural to look back through history and see how home teams have fared in the World Series.

Let's start off with some theoretical calculations based on a few assumptions. Assume that the two teams are evenly matched, that there is no home field advantage, and that the outcome of each game is independent of any other. Therefore, each team has a 50% chance to win each game, and we can calculate the expected frequency of a 4, 5, 6, or 7 game series using the negative binomial distribution (I apologize for this digression as many of you know this better than I do):

P(x+r game series for one team) = C(x + r - 1, x)*(1 - p)^x*p^r

Where x = number of failures before r successes, p = probability of success, and C is the combination function

In this case, our successes are victories by the eventual series winner (always r = 4), x is losses by the eventual series loser (0-3), and p = .5.

C(x + r - 1, x) is the number of distinct sequences of wins and losses that can occur in the series. C(3, 0) is used for a four-game series, and is equal to 1--the only string of wins and losses that can produce a four-game series is WWWW. The formula for combinations is:

C(n, x) = n!/(x!(n-x)!)

So C(4, 1), the number of different combinations that can produce a five-game series, is 4!/(1!(4-1)!) = 4*3*2*1/(1*(3*2*1)) = 4. You can confirm this, as there are four possible strings (LWWWW, WLWWW, WWLWW, and WWWLW) that produce a five-game series. In fact, you can logically work out all the combinations fairly easily without the math for this application since we are only dealing with a seven-game series.

In a five-game series, the fifth game must be a win (same for the sixth and seventh games of six and seven-game series, respectively). So the victor can lose game 1, game 2, game 3, or game 4.

In a six-game series, the victor can lose games:

12, 13, 14, 15, 23, 24, 25, 34, 35, 45 = 10 combinations

And in a seven-game series:

123, 124, 125, 126, 134, 135, 136, 145, 146, 156, 234, 235, 236, 245, 246, 256, 345, 346, 356, 456 = 20 combinations
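These counts are easy to verify mechanically. A quick Python check, choosing which of the first n-1 games the eventual winner loses (the final game is always the clinching win):

```python
from itertools import combinations

# In an n-game series, the eventual winner's (n - 4) losses must all fall
# in the first n - 1 games, since the final game is the clinching win.
counts = {}
for n in range(4, 8):
    counts[n] = len(list(combinations(range(1, n), n - 4)))

print(counts)  # {4: 1, 5: 4, 6: 10, 7: 20}
```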

Anyway, doing all the math (and then doubling since we have only considered this from the perspective of one team), the theoretical probability of a given series length is:

4 = 12.5%

5 = 25%

6 = 31.25%

7 = 31.25%
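Those four percentages fall straight out of the formula above; here's a quick Python check (doubling to account for either team winning the series):

```python
from math import comb

p = 0.5  # evenly matched teams, no home field advantage

# P(a given team wins in 4+x games) = C(x+3, x) * (1-p)^x * p^4;
# doubling covers either team being the series winner.
probs = {}
for x in range(4):
    probs[4 + x] = 2 * comb(x + 3, x) * (1 - p) ** x * p ** 4

print(probs)  # {4: 0.125, 5: 0.25, 6: 0.3125, 7: 0.3125}
```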

So theoretically (since WS home field sites are on a 12-345-67 pattern), in 43.75% of World Series, the number of home games will be equal. 25% of the time, the team with the home field disadvantage on paper will actually play more home games, and 31.25% of the time the team with home field advantage on paper will get to benefit from it--if and only if there is a game seven.

We'll get back to some theoretical stuff later, but let's look at the actual empirical World Series results. I considered all World Series from 1922-2008 (1922 is when the seven-game series returned permanently) with the following exceptions:

* 1922 and 1923--both Giants/Yankees series, in 1922 they shared the Polo Grounds, and in 1923 they didn't follow the 12-345-67 pattern

* 1943-45--in the war years, a 123-4567 format was used to cut down on travel (and in 1944, the Cardinals and Browns shared Sportsman's Park, which would have made it unusual in any case)

First, let's look at the empirical proportions of series by length:

As you can see, the empirical and theoretical don't actually track particularly well. I'm not going to discuss this phenomenon in depth here, but it is something to keep in mind when we delve back into theoretical matters at the end of the post. The assumptions are all faulty to some degree or another: the teams are not evenly matched; the results of the games are not truly independent (even if you accept that independence largely holds during the regular season, one could conjecture that it holds less in a short series, as behavior is highly influenced by the series status--teams down 3-1 behave a lot differently than teams up 3-1 or tied 2-2, a classic case of what Bill James called the law of competitive balance); we have not considered home field advantage; etc. For some more reading on this topic, check out Phil Birnbaum's post at Sabermetric Research and the __Baseball Research Journal__ piece referenced there ("Relative Team Strengths in the World Series" by Alexander E. Cassuto and Franklin Lowenthal, __BRJ #35__).

Getting back to the actual data, we see what I will call a reverse home field advantage (a 5-game series, in which the "road" team actually hosts 3 games and plays two on the road) 20% of the time, no home field advantage (4 or 6 game series) 41% of the time, and a true home field advantage (7-game series) 40% of the time.

How often does the team with paper home field advantage actually win the Series? Let's break it down by series length:

This is pretty interesting, IMO. The paper home team wins 57% of the series, which seems impressive, but their strongest advantage comes when there is no home field advantage (61%), followed by reverse home fields (56%), and just 53% when there is a true home field.

Of course, the sample sizes aren't great when it's broken down like this, and it is unsurprising that the proportion of series won is less in seven games. What is interesting, though, is that the on-paper home team has such an advantage, and even in series in which they don't really benefit from it in the raw count. Are the first two games at home that much of an advantage, or is there something else going on here?

I'll leave that as a rhetorical question. There are a lot of factors in play here--the sample sizes aren't that large, we have not accounted for the quality of specific teams (which is tough to do in any case because of the fact they play in different leagues which were until recently truly separate in the regular season), etc.--and I don't really want to speculate about the influence of these myriad factors.

I did take a look at the regular season W% of the World Series participants, but as I just said, that's not a particularly telling measure, as it is possible that the leagues were unbalanced in any given year and that a lower W% in one could actually be indicative of a higher-quality team. I checked it anyway, and found that, for the group of series defined throughout this post, the winners had a mean W% of .616 with a median of .616, while the losers had a mean W% of .612 with a median of .610.

Teams with on-paper home field advantage had a mean W% of .615 and a median of .611; teams without on-paper home field advantage had a mean W% of .613 and a median of .610. There's no evidence of any sort of fluky quality difference, at least to the extent that W% captures quality. In terms of W%, the World Series winners, losers, on-paper home teams, and on-paper road teams are all essentially equal.

Let's also break down the series outcomes by on-paper home field advantage coupled with which team had a superior record. These figures will exclude the 1949 and 1958 series as the participants had identical regular season records:

So the team with the worse record has actually triumphed in one more series than their higher W% opponents (for reference, the mean W% for teams with the better record is .635 with a median of .636; the mean W% for teams with the lesser record is .593 with a median of .597, again excluding 1949 and 1958). Teams with home field advantage have been very successful, but those with worse records and home field have been even more successful than teams which had both advantages.

Let's break down the home field W% by each game in the series:

As you can see, games 1, 2, and 6, which are home games for the team with on paper HFA, are the ones with the highest home W%. In game 7, the home field advantage is not particularly large. Those who make a big deal out of WS HFA are fond of pointing out that the home team has won the last eight game 7s, but they were just 2-6 in the previous eight, and I doubt there is anything significant going on. (Although I should point out that the period does correspond to the introduction of the designated hitter in WS play, even if I don't believe that has a significant effect (**)) Between 1952 and 1979 (which includes the 2-6 period mentioned above), road teams were 13-3 in game sevens.

One important caveat on comparing the game-by-game numbers is that as the series extends past the minimum of four games, we should expect to see less of a difference as mismatched teams are eliminated. It doesn't explain why the home field advantages are much smaller in games 3, 4, and 5, though, as there's no reason to suspect that the on-paper road teams are of substantially different quality than the on-paper home teams.

The overall World Series home W% is .573, high compared to the regular season average, which is generally somewhere in the neighborhood of .540. Let's use this figure in place of a default assumption of a 50% outcome in each game to model the outcome of a series. Using the combinations detailed above, we can find the probability of any series outcome given these assumptions. For example, the probability of a series in which the home team in each game wins games 1, 2, 4, 5, and 6 and loses game 3 would be .573^5*.427 (five home wins and one road win); that sequence works out to a 4-2 series win for the team with on-paper HFA. Under these assumptions, we get these probabilities for the possible series outcomes (in this table, "home" refers to the teams with on-paper HFA and "road" to their opponents):

Even using the sample home W% of .573, we only expect the team with HFA to win 52.3% of the time. In fact, teams with HFA have won 56.8% of the series (46 of 79). What is the probability that this could have happened by chance, assuming that 52.3% is the true probability and that each series is independent of the others? It's 12.1%. Even if we assume that there is no true home field advantage at all, and each team will win 50% of the time, there is still a 5.7% chance that 46 out of 79 would be observed.
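The 52.3% figure comes from enumerating every possible series under those assumptions. A sketch in Python: the winner of a best-of-seven is the same whether play stops at the clinch or all seven games are hypothetically played out, so we can simply enumerate all 2^7 outcomes, with the 12-345-67 pattern putting games 1, 2, 6, and 7 in the on-paper home team's park:

```python
from itertools import product

H = 0.573  # observed per-game home W% in the sample

# P(the team with on-paper HFA wins game g); it hosts games 1, 2, 6, and 7.
win_prob = [H, H, 1 - H, 1 - H, 1 - H, H, H]

# Enumerate all 2^7 played-out outcomes (1 = on-paper home team wins that game).
p_series = 0.0
for outcome in product([0, 1], repeat=7):
    p = 1.0
    for g, won in enumerate(outcome):
        p *= win_prob[g] if won else 1 - win_prob[g]
    if sum(outcome) >= 4:
        p_series += p

print(round(p_series, 3))  # ≈ 0.523
```

Setting H to the .540 regular season figure gives roughly 51.3%, in line with the figure mentioned in the postscript.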

How about the individual game results (home teams are 268-200, .573)? If the true home field W% was .540 as it generally is for the regular season (and given all the other necessary assumptions for use of the binomial distribution), the probability of 268 successes in 468 trials is 7.1%.
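A sketch of that binomial calculation (note that the result depends on whether you count "at least 268" or "strictly more than 268" home wins, so the exact percentage may differ slightly from the figure quoted above):

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), summed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance of at least 268 home wins in 468 games if the true home W% is .540.
print(binom_tail(268, 468, 0.54))
```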

So I am decidedly uncomfortable drawing any conclusions about the strength of home field advantage (on the series or game level) in the World Series from the sample data. The actual results show a stronger home field advantage than we might have expected, but not to such an extent that we must conclude that regular season assumptions about home field advantage do not apply.

It's certainly a good thing to have home field advantage for the World Series, or any game for that matter, and I'm not going to try to argue that basing home field on which league won the All-Star Game is anything but a gimmick. However, given that the previous method of determining home field was simply to alternate it yearly between the leagues, I don't think there's any real harm being done by this approach. If you really wanted to reward the stronger league, the overall interleague record would be far more likely to successfully identify the stronger league, but I don't consider the whole matter worth getting exercised over.

I have posted a Google Spreadsheet with the sequence of games in each series if you are interested. The first group of columns marked G1 through G7 indicate whether the eventual WS champion won the game (W) or lost (L). The second group of columns indicate whether the home team *in that particular game* won (H) or whether the road team won (R).

Finally, I'll close with some useless trivia. You probably know that there have been three series in which the home team won each game (1987 Twins over Cardinals, 1991 Twins over Braves, and 2001 Diamondbacks over Yankees). The most road games ever won in a series (that I considered for this study) is five, which has happened seven times--1926 Cardinals over Yankees, 1934 Cardinals over Tigers, 1952 Yankees over Dodgers, 1968 Tigers over Cardinals, 1972 A's over Reds, 1979 Pirates over Orioles, and 1996 Yankees over Braves.

P.S. After I wrote this post, but before I published it, Sky Andrecheck published a piece on the importance of World Series HFA at Baseball Analysts. It addresses an interesting question that I will paraphrase as "Since the Dodgers have such a large lead in the playoff race, is the single most important regular season game left on their schedule (with regards to winning the World Series) the All-Star Game?"

I'll let you read Andrecheck's article to find the answer, but there's one minor point which overlaps with this post worth commenting on. Andrecheck notes that the playoff HFA has been higher than the regular season historically, and reasons that this has to do with the home team being the better team more often than not. While this is true for the league playoffs, there's no reason to suspect it to be true for the World Series in which home field alternated between leagues (even if the All-Star result method of determining home field has the effect of giving on-paper home field to a better team more often than not, home field has not been decided by that rule nearly often enough to have any impact on the results, and the amount of noise involved would be incredible in any event). I don't disagree with the notion that we can't say with any certainty that the World Series HFA is of different magnitude than the regular season HFA, but the better team having more home games leaves a lot to be desired as an explanation (again, for the World Series, not the league playoffs).

He also gives the probability of the team with home field winning as 51.26%, assuming that the home W% in the World Series is 54%. I didn't provide this figure in my post, as I approached the question from the standpoint of "Even if .573 is the true HW%...", but I am in agreement with it (naturally, as it is true by definition given the assumptions we both made).

In the comments to Andrecheck's article, there was a link to Cyril Morong's look at WS HFA, published in 2006, which means that I pretty much repeated here what he had done. However, we disagree on the probability of the on-paper home field team winning the series in six games (and thus of course we also disagree on the probability of them winning the series period). I am pretty sure that this is due to a faulty six-game series sequence he used.

(*) Sorry, I can't help it. I SHOULD take the high ground, but the sniveling "It's not REAL baseball" is way too much for me to handle. I'm weak like that.

(**) There was no DH in the World Series until 1976, at which point it was introduced on an alternating year basis. So in 1976, 1978, 1980, etc. the DH was used in all World Series games, and was not used at all in 1977, 1979, 1981, etc. Starting in 1986, the home team's rules were used.

So while the run of Game Seven home wins begins with the Cardinals in 1982 and also includes the Royals in 1985, in those series the road team's rule was being used in Game 7. All of the game sevens that follow, of course, used the home team's rule.

## Tuesday, July 14, 2009

### Meanderings

Meanderings are what you get when I either have no coherent ideas for a post or a number of things I want to write about that are all insufficient to fill out a full post. Other times, like this time, it's just a collection of junk thrown together.

* The recent deaths of Ed McMahon, Farrah Fawcett, and Michael Jackson within a few days of each other revived one of my least favorite memes--people dying in threes. I realize that very few people, if any, actually take this sort of thing seriously and really think that if two celebrities die today, movie studios should be contacting their insurance companies. Still, it is a perfect example of how multiple endpoints and loose definitions can lead to some awfully silly things being said.

The endpoints are wide open, as this adage never defines what the period is in which the three deaths should occur. Obviously, if you wait long enough, you will be able to group at least six billion people together in death. Practically, though, it leaves it open until the third person you need to form your group dies. Had Michael Jackson died three days later than he did, he still could have been in the group. If he was still alive and well, then people could have reached back in time for David Carradine, or waited around for Steve McNair and Robert McNamara. No matter.

The loose definition of such groups is also apparent. That they are reasonably well-known is the only qualification. Certainly Michael Jackson's fame outshined the other two, but they are in the group all the same. There was no need to wait around for two other people of Jackson's notoriety. If time had gone by and no one else of note had died, I'm sure someone would have dug through the obituaries and found a lesser-known individual to include in the group.

* Speaking of silliness, how about ESPN's 20 Year All-Star team, covering the twenty years that ESPN has been broadcasting MLB games? They have been showing the nominees for various positions during the Monday and/or Wednesday night games, opening up an internet poll throughout the week, and then announcing the winners on Sunday Night Baseball.

Obviously any time you let internet voting occur without any sort of screening or restrictions, you are bound to get some silly results (remember the pitiful All-Century Team that didn't include Hans Wagner among others?). So it's not worth criticizing the selections themselves, and it would be hard to do so anyway because they are the result of a disparate group of individual choices.

However, the whole exercise illustrates why I don't like this kind of exercise when the time period is restricted arbitrarily (obviously ESPN had its reasons for using twenty years, but it has no particular baseball significance). The selection of Nolan Ryan as top right-handed pitcher is illustrative of one of the biggies. Leaving aside the fact that Ryan has been lionized and overrated by many ordinary fans, with his strikeout and no-hit feats overshadowing the more mundane aspects of the game like preventing runs and winning games, and accepting for the sake of argument that Ryan is one of the five or ten greatest pitchers of all time, it is patently absurd to suggest that he is the best right-handed pitcher of the last twenty years, given that he only pitched in four of them...

...*Unless* you look at it from the perspective of "best to play in this period, period". Since Ryan played in the twenty-year period, he's eligible, and he's a reasonable choice within the bounds of this idiosyncratic viewpoint (remember, above we agreed to accept the premise that Nolan Ryan was one of the very greatest pitchers in history). I don't think this is what most people have in mind when they look at a question like this--do you want to put Cal Ripken or Tony Gwynn on an all-00s team?

There's the middle ground, which would be something like "I'll consider someone if they played a significant amount in the period, whether or not they actually have a case for being the best in that period." From this perspective, you could justify a vote for Cal Ripken on the 20-year team, because he played in roughly half of the seasons and was still productive in most of them.

And then there's the literalist definition of twenty years, in which only performance within the period is taken into consideration, and thus it is getting dicey when you argue for Nolan Ryan over Dan Haren, let alone Greg Maddux or Mike Mussina. While most people will gravitate towards one of the latter two definitions, these types of exercise usually leave it open-ended, and the results are as much a question of how you approach the exercise as they are a judgment on any of the players involved.

There will be a rash of this stuff coming up near the end of the season and over the winter as the decade ends (Or does it? Even that is not so easy to define). I'll be over here with my fingers in my ears, yelling "STOP!" in vain, thank you very much.

* I love the MLB Network, and think it knocks ESPN's socks off in every aspect of broadcasting, analysis, game coverage, ...except one. Statistics.

The stats displayed on-screen on MLB Network, either during games or on MLB Tonight, are pathetic. I think the standard line for starting pitchers is W-L, ERA, K, and BB. That's not so bad except for the omission of innings, which are sorely needed to contextualize the last three categories.

For hitters, though, you get BA, HR, R, and RBI. No plate appearances (or even at-bats). No OBA or SLG. They do display the OBA, SLG, and OPS leaders sometimes on MLB Tonight, but that's about it.

ESPN is running circles around them in this department. The standard batter line when watching a game on ESPN is BA/HR/RBI/OPS, with OBA, SLG, and OPS in tiny print at the top of the screen (at least until the at-bat starts and they are replaced by the always captivating "after x-y count" stats).

* You always see the barb that "you don't watch the games" directed at sabermetricians, and this is often coupled with the "living in your parents' basement" type of stereotype that adds up to nothing more than "sabermetricians are losers". You know, socially maladjusted folks who think girls have cooties and stand in the corner at any sort of social gathering they are roped into attending.

Obviously this argument is not even worth attempting to refute. However, the implicit assumption is kind of funny--that watching a large amount of baseball games makes one cool. After all, this argument is usually advanced by fans, not baseball professionals who are paid to watch and attend games. To the public at large, people who watch a lot of baseball games are probably not considered to be at the top of the social hipness scale. So the whole "watching games" argument (even if one was to accept the premise that sabermetricians don't watch games) really boils down to the Star Trek fans telling the Star Wars fans that they are losers.

* I am embarrassed to say that I was unaware that Steve Phillips attended the University of Michigan. Suddenly, it all makes sense.