09 February 2017

A Look At Fangraphs Projections

Certain things always happen during the offseason. Free agents are signed, players are traded for, tons of digital ink are spilled discussing all the possible rumors, and Fangraphs predicts that the Orioles will be the worst team in the AL East. At this point, it probably isn’t very surprising to learn that Fangraphs expects the Orioles to win ten fewer games in 2017 than they did in 2016. It’s probably more surprising to learn that Fangraphs also projects the Rangers to win twelve fewer games in 2017 than they did in 2016. But how accurate is Fangraphs, exactly? They certainly missed on the Orioles last year, but did they do better predicting other teams’ results?

Fangraphs had a surprisingly good year from a wins perspective in 2016: on average, their projections were off by roughly 5.6 wins per team. In contrast, presuming that every team would win 81 games would have been off by slightly more than 9 wins per team, making Fangraphs roughly a 40% improvement over simply picking each team to win the same number of games.
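For anyone who wants to check the math, the comparison is just a mean absolute error computed against two sets of predictions: the Fangraphs projected wins and a flat 81-win baseline. A minimal sketch in Python is below; the three team win totals are made-up placeholders rather than the actual 2016 projections, so only the method carries over.

def mean_abs_error(predicted, actual):
    # Average absolute difference between predictions and results.
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Placeholder win totals for three hypothetical teams (not real 2016 numbers).
projected_wins = [89, 74, 95]
actual_wins = [81, 68, 103]

fangraphs_error = mean_abs_error(projected_wins, actual_wins)
baseline_error = mean_abs_error([81] * len(actual_wins), actual_wins)

print(f"Fangraphs error: {fangraphs_error:.1f} wins per team")
print(f".500 baseline error: {baseline_error:.1f} wins per team")
print(f"Improvement: {(baseline_error - fangraphs_error) / baseline_error:.0%}")

Run across all 30 teams with the real numbers, that calculation is what produces the roughly 5.6 wins per team versus roughly 9 wins per team figures above.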

While they successfully predicted the records of 11 teams within 3 wins of their 2016 results, they were also off by 9 or more wins for 6 teams. Interestingly, Fangraphs’ projections for five other teams in 2016 missed by more than their projection for the Orioles did. Presuming that each team would go .500 would have resulted in being off by 9 or more wins for 14 teams, while being within 3 wins for only 6 teams. This indicates that while Fangraphs had poor predictions for a number of MLB teams, using these predictions is better than using nothing at all. It also shows that Fangraphs should be expected to miss badly on roughly 20% of MLB teams.

Fangraphs did a decent job predicting runs scored for each team, as they were off on average by .266 runs per game. If someone knew that teams would average roughly 4.48 runs per game and predicted that each team would score that amount, they would have been off by roughly .289 runs per game. Given that it’s hard to know before the season how many runs will be scored, this shows that Fangraphs had some success at predicting runs scored. Fangraphs did better when it came to runs allowed. On average, Fangraphs was off by roughly .3 runs per game. If one presumed that each team would allow 4.48 runs per game, that prediction would have been off by .36 runs per game on average. Clearly, using the Fangraphs projections has some value.
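Using the averages quoted above, the size of the edge is easy to work out; this quick check simply plugs the per-game error figures from the text into the same kind of comparison.

# Relative improvement of the Fangraphs projections over the flat
# league-average baseline, using the per-game errors quoted above.
runs_scored_improvement = (0.289 - 0.266) / 0.289   # roughly 8%
runs_allowed_improvement = (0.36 - 0.30) / 0.36     # roughly 17%

print(f"Runs scored:  {runs_scored_improvement:.0%} better than the baseline")
print(f"Runs allowed: {runs_allowed_improvement:.0%} better than the baseline")

Both edges are real but modest.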

Fangraphs does worse when trying to project earned and unearned runs. Fangraphs was off by an average of 59 earned runs per team. If one took the actual league-wide earned run total and presumed that each team allowed the same number of earned runs, the prediction would have been off by 51.4 runs per team. If one instead used the number of earned runs that Fangraphs projected, the average team would have been off by 69.6 earned runs. This isn’t an impressive result.

Likewise, the Fangraphs projections predicted that each team would allow 75 unearned runs. In reality, teams allowed only 52.76 unearned runs on average. That is a pretty big difference. On average, Fangraphs was off by 23.7 unearned runs per team. If one presumed that each team gave up the actual league-average number of unearned runs, the prediction would have been off by 10.4 unearned runs per team. If one used the number Fangraphs projected, each team would have been off by 22.9 unearned runs. This indicates that Fangraphs is unable to project unearned runs. It also suggests that Fangraphs is unable to successfully predict how many runs a team will allow in a season, even if it can tell which teams will allow the most or fewest runs relative to each other. That ranking is useful, but not as useful as knowing how many runs a team will actually allow.

The Fangraphs projections undoubtedly have some predictive ability and are better than using nothing. Sometimes, that can be very valuable. For example, suppose someone builds a model that predicts whether a stock on Wall Street will go up in value, and it is 5% more accurate than the current models. This model may have only minimal predictive ability but is still good enough to be worth billions of dollars. A small increase in predictive ability can be highly valuable.

However, it doesn’t change the fact that a small increase in predictive ability is still only a small increase. Sure, it tells us more than we knew before, but that’s not the same as treating it as gospel. The model in the paragraph above will still fail a large percentage of the time. Such a model can have extreme value and yet be wrong a significant amount of the time.

What does this mean for Orioles’ fans? The Fangraphs projections do have some validity to them, but they’ve historically missed on a large percentage of teams. Their results should be taken seriously, but with the understanding that they aren’t holy writ. They’re neither perfect nor worthless. The same is probably true for PECOTA. They’re better than nothing, but that doesn’t make them great.

4 comments:

Roger said...

Sure, Fangraphs comes out better than no prediction at all (81 wins per team). But the real question is whether Fangraphs is better than the random fan (i.e. a civilian as opposed to a sabermetrician) who knows an average amount about the league. I could have predicted the Red Sox, Nats, Cubs, Dodgers, and Giants to be good and the Braves, Phillies, Reds, Padres, and A's to be bad. It's a little like picking NCAA basketball brackets. How does Fangraphs perform relative to 1,000 (or 10,000) random people's average projections? That would be a better measure of whether all these statistics can produce a high-level prediction or not.

Matt P said...

I'm not sure I would judge the Steamer and ZIPS projections based on what Fangraphs does with them. I suspect that Fangraphs is the weakest link in the chain.

But yes, it would be interesting to see whether Fangraphs can beat the average fan. Unfortunately, I don't have access to 10,000 random people's projections.

Jon Shepherd said...

So,
(1) You have the projection models that are encapsulated within the team win model. The other aspect of the team win model is people who make informed decisions on playing time.
(2) Several studies out there have looked at writer predictions versus win model projections and found that the models do a better job. If we are talking about Vegas betting lines, then we are asking a slightly different question. The past ten years have resulted in Vegas models being roughly on par with win models. Some win models, like BP's PECOTA-based model, have not performed as well as the "Vegas" line, or so I have been told. Still, the difference is not much.
(3) IIRC, one of my several articles on win projections a few years back almost addresses several of these issues. Feel free to search the archives.

Matt Bennett said...

I could be wrong here, but I think I remember a Matt P from Orioles-Nation. Is that you?