05 September 2012

Expanded Roster: Why the Orioles are Possibly Better Than Their Season Run Differential

When the Orioles expand their roster, so do we.  Click here to find all of Camden Depot's Expanded Roster entries for 2012.  2011 Expanded Roster items can be found here.  As always, feel free to provide the Depot with suggestions for posts or with your own interest in writing an item or several to be posted here.  

note: This post was filed before last night's 12-0 drubbing of the Blue Jays.  However, I think what this post is getting at is still quite applicable.


Why the Orioles are possibly better than their season run differential
by Matt Perez 
 
One of the common claims about the Orioles this season is that they are defying their peripherals. A common way to gauge the quality of a team is its run differential: a team that has been outscored is expected to win fewer than half of its games. Yet despite being outscored, the Orioles have won considerably more than half of their games, eleven more than expected as of September 4th. One could look at this and conclude that the Orioles have simply been lucky so far. Others argue that the Orioles have genuinely outplayed their run differential for a variety of reasons, such as having won a lot of close games while losing many blowouts.
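For reference, the usual way to turn a season run differential into expected wins is the Pythagorean expectation. Here is a minimal sketch with an exponent of 2; the run totals and win count below are hypothetical placeholders, not the Orioles' actual September 4th figures:

```python
# Season-level Pythagorean expectation. The runs scored/allowed and the
# win total here are hypothetical placeholders, not the Orioles' actual
# September 4th numbers (the post does not list them).
def pythagorean_pct(runs_scored, runs_allowed, exponent=2):
    """Expected winning percentage from season run totals."""
    return runs_scored**exponent / (runs_scored**exponent + runs_allowed**exponent)

rs, ra, games_played, actual_wins = 600, 630, 135, 76  # hypothetical team
expected_wins = pythagorean_pct(rs, ra) * games_played
gap = actual_wins - expected_wins  # positive gap: winning more than expected
print(round(expected_wins, 1), round(gap, 1))
```

A team outscored 630 to 600 projects below .500, so any winning record shows up as a positive gap.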
 
Suppose that a team plays a three-game series. It loses the first game 16-2 and wins the next two 2-0. Judged by its run differential alone, this team would be expected to have been swept. The example illustrates the peril of run differential: it gives far too much weight to the first game at the expense of the other two. If the Orioles' run differential is skewed because they have lost many blowouts and won many close games, then run differential will not accurately describe their performance, because it gives too much weight to the blowout losses and too little weight to the close wins.
 
I decided to test this hypothesis. I first used data provided free of charge by, and copyrighted by, Retrosheet (interested parties may contact Retrosheet at "www.retrosheet.org"), which includes, among other stats, the score, date, and participating teams for every game played from 2000 to 2011. With this data, I can determine the historical accuracy of any model I build. If something has held true over that period, it seems plausible that it holds for the current season.
 
Using this data, I created what I call a per-game Pythagorean win percentage. Instead of computing a run differential over the whole season, I computed a Pythagorean win expectancy from the runs scored and allowed in each individual game. Doing this ensures that each game carries the same weight regardless of how many runs were scored, so a team that won many low-scoring games while losing many high-scoring ones gets proper credit. After scoring each game, I took the mean across each team's season to get its per-game Pythagorean winning percentage. Games in which no runs were scored were counted as a zero percent chance of winning.
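A minimal sketch of both calculations, applied to the hypothetical series above (one 16-2 loss, two 2-0 wins); I'm assuming the classic Pythagorean exponent of 2, which the post doesn't specify:

```python
# (runs_scored, runs_allowed) for the hypothetical three-game series:
# one 16-2 loss followed by two 2-0 wins, from the team's perspective.
games = [(2, 16), (2, 0), (2, 0)]

def game_pct(rs, ra, exponent=2):
    """Pythagorean win expectancy for a single game.
    A scoreless game counts as a zero percent chance of winning."""
    if rs == 0 and ra == 0:
        return 0.0
    return rs**exponent / (rs**exponent + ra**exponent)

# Season-style: pool all runs first, then apply the formula once.
season_pct = game_pct(sum(g[0] for g in games), sum(g[1] for g in games))

# Per-game: score each game separately, then take the mean.
per_game_pct = sum(game_pct(rs, ra) for rs, ra in games) / len(games)

# Expected wins over the three games under each method.
print(round(season_pct * 3, 1), round(per_game_pct * 3, 1))
```

The pooled-runs version expects well under one win from the series, while the per-game version expects roughly two, matching what actually happened.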
 
Once I had historical data for the metric, I needed to verify that it tracks actual win percentage and that the correlation is significant. If it were less precise than per-season run differential, or if there were no correlation between this stat and actual win percentage, the proposed metric would tell me little about the Orioles' past performance. Using data from the 2000 to 2011 seasons, I ran a correlation analysis among actual win percent, per-season Pythagorean win percent, and per-game Pythagorean win percent. I found that while per-season Pythagorean win percent has a correlation of .93765 with actual win percentage, per-game Pythagorean win percent has a correlation of .97645. This is a very strong correlation and indicates that per-game Pythagorean win percent is the more precise metric.
 
When I computed the Orioles' per-game Pythagorean win percent, I found that they would be expected to have won 69 (69.2) games. While this doesn't fully explain the Orioles' performance, it does indicate that the Orioles are better than their season run differential suggests and that losses in high-scoring games are making them look worse than they are. If that is in fact the case, the Orioles should be favored to hold onto the second wild card and make the playoffs.
 
At the very least, the Orioles' last series of the regular season is against Tampa Bay. I would expect a playoff berth to be at stake.

8 comments:

Anonymous said...

I'd like to see a graph of run differential by date. Similar graphs of games over .500 are easy to find (for instance, pennant-race.com) but I haven't seen one for run differential. The win-loss graph shows the Orioles were very good through the middle of May, then sort of below average through July, then started winning a lot in August. Does the run differential chart show the same curve, or did they build up all of their negatives early in the season? - TJ

Matt Perez said...

I don't know how to create a graph using this commenting platform. If you saw a graph of run differential, it would show the Os slightly above average in April and August, about average in May and well below average in June and July. The Os' best month by run differential this year is September, by a considerable amount.

According to per-month run differential (similar results as per-season), the team was performing reasonably close to expectations in April and May (+1.5 wins). Things went haywire starting in June. In June, July and August, the team won three more games than expected.

According to per-game run differential, the Os performed exactly as expected in April and May. In June and July, they performed reasonably close to expectations as they won 2.6 games more than expected. Things went haywire in August when they won 2.3 games more than expected.

As the season goes on, the Os performance runs contrary to run differential.

Anonymous said...

The misuse of run differential has become painful to watch (not here, but every national commentator throws the O's run diff out as the reason they will collapse). Fangraphs had a chat today and this question was addressed specifically in regards to the O's performance. I 100% agree with them that teams that overperform their RS/RA in the first half are likely to make roster moves that put a better product on the field in the second half, rendering their first half RS/RA less relevant to begin with. It's an easy crutch for too many people who don't really understand statistics imo.

Matt Perez said...

It's important to remember that run differential is just one metric, even if it is usually accurate. It's important to recognize that run differential may not help you project the future, but it does let you see if there were any anomalies in the past.

What worries me about what Fangraphs said in their chat is that they have gone from saying that run differential means the Orioles will collapse to saying that run differential is meaningless. Both are oversimplified.

IMO, there needs to be a middle ground. I'm not sure it's clear how the Os are doing so well (I'm not in the bullpen camp). I think stats' role should be trying to explain why the Orioles have outperformed this usually highly accurate metric.

Thanks for reading!

Matt Perez said...

Also one more thing.

You noted that Dave Cameron from Fangraphs stated that teams that overperform their run differential are likely to improve in the second half because they'll add players.

It's easy enough to test that theory. If we take the thirty teams that most outperformed their run differential and compare how they did from Apr-June and July-Sept, we notice the following.

Their run differential remains the same. But these teams win .556 of their games from Apr to June and only .499 from July to September.

If that's not predicting a collapse, I don't know what is. It sure doesn't look to me like the roster moves those teams make help very much.

Andy Lyttle said...

All these statistical models are completely useless. Go watch baseball some time and stop wasting your time with this nonsense. Here's the simple answer: in June and July, with bad defense at the corners, several key injuries, a shaky rotation, and some slumping hitters, the Orioles racked up a disproportionate negative run differential. They fixed their rotation and their defense, got Markakis back, some good hitters recovered their stroke, and now they are a deep team that is better than the Yanks over all. Only a fool imagines that bad Pythagorean models applicable to statistical constants apply meaningfully to a changing quantity like the Orioles, with their 155 and more transactions.

Jon Shepherd said...

As our wives, family, friends, whatever know...none of us should be spending more time watching baseball. Most of the people I communicate with here on the blog probably have seen at least half of the Orioles' games this season and are aware of the 51 or so players who have been with the big league club this year.

As with most disciplines, anecdotal, abstract observation is very important to developing an initial hypothesis about what has occurred. The next step is to challenge that idea by somehow applying cold data to validate it. If you just stop with the hypothesis, then you are stuck with rather poorly thought out concepts like magical trolls in your stomach cause illnesses. You really need that data to develop a better idea as to how valid your own perspective is.

As has been mentioned on this blog (you may have missed it), run differential is a descriptive statistic. It looks backward, not forward. Using descriptive statistics to project future performance makes a couple assumptions: (1) what happened is the true talent level being expressed and (2) that talent level will continue into the future. I believe this is what you tried to express and it is something that has been written quite a few times on this site.

Anyway, the next time you wear your copper magnetic bracelets, drink your own urine, and take a performance suplement...perhaps think about the mode of action and why such a thing could ever work. Try to think about how one could provide overwhelming evidence of its utility. Stopping with the initial thought is a disservice and is rather lazy. If one chooses to live in a simpler world and not worry about why things happen...that is OK as it is your life to live. However, it is rather to harp on others who find enjoyment in trying to understand how things work.

Jon Shepherd said...

Hmmm, looks like one letter and one word is missing in that response.

Eh, be creative.