27 March 2014

How Many Games Is a Buck Showalter Worth?

Photo: Buck Showalter (credit: Keith Allison)
A few weeks ago I looked at how well projections perform in comparison to the actual season-ending tally of wins.  What we found was that the range of outcomes is broad (about 75% of outcomes fall in a 20-game range), but that range is anchored strongly to the projection (aka the 50th percentile or push point).  In other words, a team slated to win 80 games will finish between 70 and 90 wins about three quarters of the time, and a team slated to win 70 has a three-quarter chance of finishing between 60 and 80 wins.  That connection is fairly accurate even though it severely lacks precision, which makes a projection a fine baseline and a good starting point when thinking about a team's level of talent.
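(If you want to sanity-check that claim against numbers of your own, a minimal sketch of the calculation is below; the win totals here are made up for illustration, not the actual data set.)

```python
# Minimal sketch: what share of teams finished within 10 wins of their projection?
# Both lists are hypothetical placeholders, paired team by team.
projected = [80, 70, 92, 85, 64, 77, 88, 73]
actual    = [87, 62, 90, 75, 70, 79, 99, 71]

within_10 = sum(abs(a - p) <= 10 for p, a in zip(projected, actual))
print(f"{within_10 / len(projected):.0%} of teams finished within 10 wins of their projection")
```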

In that article I compiled projected and actual standings for every team since 2003.  I did not select one projection model over another, as they all tend to be similarly accurate.  One thing that struck me was how consistently the Baltimore Orioles underperformed their preseason projections.


Year   Manager          Exp Pct   Act Pct   Diff (wins)   Games
2003   Mike Hargrove     .426      .438        +2.0        162
2004   Lee Mazzilli      .494      .481        -2.0        162
2005   Lee Mazzilli      .481      .477        -0.5        107
2005   Sam Perlozzo      .481      .418        -3.5         55
2006   Sam Perlozzo      .475      .432        -7.0        162
2007   Sam Perlozzo      .463      .420        -2.9         69
2007   Dave Trembley     .463      .430        -3.1         93
2008   Dave Trembley     .407      .420        +2.0        162
2009   Dave Trembley     .457      .395       -10.0        162
2010   Dave Trembley     .457      .278        -9.7         54
2010   Juan Samuel       .457      .333        -6.3         51
2010   Buck Showalter    .457      .596        +8.0         57
2011   Buck Showalter    .488      .426       -10.0        162
2012   Buck Showalter    .426      .574       +24.0        162
2013   Buck Showalter    .488      .525        +6.0        162
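As I read the table, the Diff column is simply the gap between actual and expected winning percentage converted to wins over the games that manager handled.  A quick sketch, using two rows from the table as a check:

```python
# Diff in wins = (actual pct - expected pct) * games managed.
rows = [
    ("2003", "Mike Hargrove",  .426, .438, 162),
    ("2012", "Buck Showalter", .426, .574, 162),
]
for year, manager, exp_pct, act_pct, games in rows:
    print(f"{year} {manager}: {(act_pct - exp_pct) * games:+.1f} wins vs. projection")
```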

In this data set, only three managers have ever outperformed the team's projected wins: Mike Hargrove (in his only season in the data set), Dave Trembley (amidst two and a half other underperforming years), and Buck Showalter.  Showalter has an interesting series of performances.  He had stellar over-performances in his short debut stint in 2010 and in the mesmerizing 2012 season (one of the greatest unexpected performances of the recent era), and he put in a very strong year last year.  In 2011, the team did horribly compared to expectations.  A lot of that underperformance can likely be attributed to Brian Roberts' injuries, Brian Matusz' unraveling, and a couple of black hole performances from Felix Pie and every pitcher making their way up from Norfolk to fill out Matusz' spot.

Anyway, I figure that beyond the Orioles' history, this database could be broken out to evaluate all managers, as well as how certain managers may affect their team's projections (something no projection attempts to do).  Below are the top 10 managers since 2003 in outperforming their team's expected wins, with a minimum of three seasons managed (a quick sketch of the W/162 calculation follows the table).



Rank   Manager           Wins vs. Exp   Games   W/162
 1     John Farrell          17.0         486     5.7
 2     Fredi Gonzalez        35.4        1042     5.5
 3     Tony LaRussa          45.0        1458     5.0
 4     Don Mattingly         14.0         486     4.7
 5     Ron Washington        31.0        1134     4.4
 6     Mike Scioscia         47.0        1782     4.3
 7     Jack McKeon           14.0         538     4.2
 8     Buck Showalter        30.0        1191     4.1
 9     Ozzie Guillen         35.0        1458     3.9
10     Bobby Cox             28.0        1296     3.5
10     Jim Leyland           28.0        1296     3.5
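For anyone who wants to replicate the W/162 column, it appears to be each manager's total wins over expectation scaled to a 162-game season.  A rough sketch, again using two rows from the table as a check:

```python
# W/162 = wins over expectation / games managed * 162 (minimum three seasons managed).
managers = {
    "John Farrell":   (17.0, 486),     # (wins over expectation, games managed)
    "Buck Showalter": (30.0, 1191),
}
for name, (wins_over, games) in managers.items():
    print(f"{name}: {wins_over / games * 162:+.1f} wins above projection per 162 games")
```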

This list passes the sniff test quite well.  All of these managers are highly regarded at their craft.  The leading manager, John Farrell, has long been considered a great baseball man and only recently was given a chance to manage the Blue Jays and Red Sox.  I had strongly supported the Orioles employing him over Buck.  That said, more time may be needed to evaluate him with respect to this potential metric, as he has two exceptional over-performances and one poor underperformance.

All of this said, these are some very fascinating numbers.  On the free agent market, a win this past offseason was worth about $6 MM.  If you attribute any deviation from expected wins solely to the manager, then those top eight managers are worth over $20 MM per year.  That is rather remarkable and probably obscures the true value of other personnel in the organization.  That said, I would be hard-pressed to say that the teams above are paying free agent prices for those increased wins.
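The back-of-the-envelope math there is just the per-162 rate multiplied by the going rate for a free agent win; for example:

```python
# Rough valuation, assuming ~$6M per free-agent win (the figure cited above).
w_per_162 = 4.1                      # e.g. Showalter's rate from the table above
print(f"~${w_per_162 * 6_000_000 / 1e6:.0f}M of manager 'value' per full season")
```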

Below are the bottom 10:



Rank   Manager           Wins vs. Exp   Games   W/162
34     Buddy Black           -7.0        1134    -1.0
35     Grady Little          -6.0         486    -2.0
36     Dusty Baker          -23.0        1620    -2.3
36     Jim Tracy            -20.1        1412    -2.3
37     Eric Wedge           -27.0        1620    -2.7
38     Alan Trammell         -9.0         486    -3.0
39     Bob Geren            -20.4         711    -4.7
40     Jerry Manuel         -21.0         579    -5.9
41     John Russell         -23.0         486    -7.7
42     Manny Acta           -44.4         891    -8.1

Depending on your point of view, this list may not exactly pass the sniff test.  In order to put in three full seasons or more in the big leagues, a team has to think well of you.  The listing actually suggests as much, with 23 of 42 managers having positive W/162 values and another nine within one win of breaking even.  It appears few managers of highly underperforming clubs last long.  However, I would also be hard-pressed to attribute all of the misfortune to the manager alone.  That said, I think you would have a decent leg to stand on to say that Dusty Baker, Jim Tracy, Eric Wedge, Bob Geren, Jerry Manuel, John Russell, and Manny Acta have all managed some disappointing teams, even though there are a few playoff appearances within that group.  Just like the guys at the top of the list, it is probably a bit rash to say that these managers are completely responsible for the entirety of their team's underperformance relative to the projection models.

In the end, I am unsure how many grains of salt to take this little exercise with.  Maybe a sprinkle, maybe even a truckload, though I would lean more in the sprinkle direction.  That said, Orioles fans should be quite content to have a manager who falls into the top 10 group as opposed to the bottom 10.

11 comments:

baachou said...

Since projection vs. performance can be negatively impacted by serious injuries, do you think it would be worthwhile to make an adjustment for teams that lost position players to injury? I'm thinking of maybe adjusting the team's projection by 1/2 the amount of the missed playing time: if a player who was projected to play at least 150 games misses more than 62 games, then the team gets a small credit back to its projection. I wouldn't apply this to pitchers since pitchers have a higher injury risk.

What do you think?

Jon Shepherd said...

I would have to think a long while about that. It would be rather subjective. I think I prefer the general assumption that everyone suffers the loss of players at roughly the same rate, so that over time those things even out amongst managers.

Unknown said...

If you addressed this point before I apologize for bringing it up. It would seem possible that bad teams tend to underperform their projections and good teams to overperform their projections, at least slightly. Even though the 75% spread is 20 games for both good and bad teams, is it possible that more bad teams underperform and more good teams outperform? I ask because most of the managers who outperformed expectations managed good teams, and most who underperformed managed bad teams.


Today, it seems as though teams doing poorly are more willing to sacrifice their season with veteran-for-minor-leaguer trades and teams doing well make moves to improve themselves, something which I don't think is accounted for in projection systems, at least not objectively.

Jon Shepherd said...

There is a slight effect for teams above 90 wins. I will try to address that at a later date, but there is no impact below 90 wins.

Gadfly said...

I did a blog post about "WAR for managers" about a year ago. We need a system like this at B-R or Fangraphs: http://socraticgadfly.blogspot.com/2012/12/a-sabermetrics-war-need-managerial-wins.html

Richard Bergstrom said...

Isn't it an issue to keep switching projection systems for your baseline instead of using something like Pythagorean record? Some of those projections are done before offseason transactions are completed. I understand one system can't be used for all the years because projection systems get tweaked, but it's an odd choice.

Jon Shepherd said...

I don't think switching projection models is much of a concern. They are all similarly precise and accurate. That was the reasoning behind the original data set. My main concern was to get projections from respectable sites that do well at estimating expected playing time.

Matt Perez said...

It's funny you say the Orioles should be glad to have a manager in the top 10, because most of Buck's wins above expectation came with the Orioles. He's at +28 in his 550 or so games with the Os, which means he was +2 in his other 600 or so games.

Jon Shepherd said...

Sure...for whatever reason it is the case, Orioles fans should be happy the team has exceeded expectations rather than underperformed.

Anonymous said...

I'm confused. Where do you rate Bruce Bochy? Was he a poor manager because the Giants didn't win as much last year as expected, a mediocre manager because they didn't win their division in 2012, or a very good manager because he won the World Series in 2010 and 2012 when the Giants weren't expected to do anything spectacular? All this statistical analysis is fine and good, but what counts in sports is the championships you win - something Tony LaRussa was great at and Dusty Baker less than great.

Jon Shepherd said...

I am not sure the Ricky Bobby approach to measuring success is the right answer.