31 March 2015

In my previous post about FIP, I discussed how pitchers give up harder contact in a hitter’s count, resulting in more extra-base hits, and weaker contact in a pitcher’s count, resulting in more singles. This means that while BABIP doesn’t change much based on count, the value of the hits allowed does. I attempted to find metrics that might help predict which players could benefit from this but was unable to do so.
In this post, I decided to look at pitchers that gave up weaker-than-average contact and see how their ERAs and FIPs compare. From 2000-2013, I looked at all pitchers that gave up fewer doubles, triples, and home runs than average while giving up a larger-than-average percentage of singles on balls in play. Presumably, pitchers that fit this profile give up weaker contact than pitchers that don’t and therefore should have a lower ERA than FIP. This test should help determine the impact of giving up weaker contact.
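For anyone who wants to reproduce the selection, here is a minimal sketch of the filter. It assumes a per-pitcher table of 2000-2013 totals; the file name and the rate columns (expressed as shares of balls in play) are hypothetical, not the exact data set used here.

```python
import pandas as pd

# One row per pitcher, 2000-2013 totals; column names are hypothetical.
df = pd.read_csv("pitchers_2000_2013.csv")

rates = ["single_rate", "double_rate", "triple_rate", "hr_rate"]
league = df[rates].mean()

weak_contact = df[
    (df["double_rate"] < league["double_rate"])
    & (df["triple_rate"] < league["triple_rate"])
    & (df["hr_rate"] < league["hr_rate"])
    & (df["single_rate"] > league["single_rate"])
]

print(len(weak_contact), "pitchers fit the profile")
print("mean ERA:", round(weak_contact["era"].mean(), 2))
print("mean FIP:", round(weak_contact["fip"].mean(), 2))
print("ERA lower than FIP:", (weak_contact["era"] < weak_contact["fip"]).sum())
```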
There were 117 pitchers that fit these criteria, and many of them were names one might expect. For example, star closers such as Mariano Rivera, Craig Kimbrel, Joakim Soria, Ryan Cook, Jim Johnson (not including his terrible 2014), B.J. Ryan, and David Robertson were all on this list. So were guys like Rick Porcello, Brett Anderson, Brandon Webb, Chien-Ming Wang, Doug Fister, and Derek Lowe. These pitchers are well known for giving up a lot of ground balls and allowing only weak contact. A list of all 117 pitchers can be found here.
It should come as no surprise that the pitchers on this list record more saves than their average usage would suggest. These pitchers threw only 9% of total innings while recording 20.8% of total saves. It makes sense that pitchers that can avoid giving up hard contact end up being used as closers.
However, there are some surprising results when we compare their ERAs to their FIPs. Only 58 of the 117 pitchers on this list actually have an ERA lower than their FIP. The mean ERA is 3.73 while the mean FIP is 3.78, a difference of about .05, which is minimal. This suggests that even pitchers that end up allowing weaker-than-average contact still have an ERA that’s similar to their FIP. It appears that despite the fact that FIP doesn’t differentiate between a single, double, or triple, the stat still accurately describes performance.
The pitchers that are primarily able to outperform their FIP are those that avoid giving up hits of any type, whether singles or extra-base hits. There have been 26 pitchers whose rates of singles, doubles, triples, and home runs allowed were all below the 35th percentile. They have an average ERA of 3.03 and an average FIP of 3.55. But then again, it’s questionable whether that should be attributed to pitcher skill or to defense.
This doesn’t mean that FIP is necessarily 100% accurate. Some have proposed that Chris Tillman has extra value not measured by traditional statistics because he is able to keep opposing runners close to the bag and therefore prevents steals, prevents runners from advancing multiple bases on a hit, and creates more double plays. But it is interesting to see that FIP “works” even in a situation where we’d expect it to fail.
This analysis indicates that pitchers do have some effect on how hard their pitches are hit and therefore on the likelihood of a batter getting an extra-base hit. Some pitchers appear to be better than others at preventing hard contact, and most pitchers give up weaker contact in favorable counts. I have been unable to find an objective way of identifying which pitchers should be expected to give up weaker contact than their peers, but it appears that someone using the eye test or knowledge about pitchers can predict this with reasonable accuracy. However, FIP appears to be accurate even when looking at pitchers that allow weaker contact than their peers. I am not quite sure how this can be the case, but facts are facts. This possibly indicates that FIP is a better metric than its detractors suspect and suggests that people shouldn’t necessarily dismiss it simply because they find things that it doesn’t measure.
17 March 2015
FIP and the Ball-Strike Count
Voros McCracken argued in 2001 that pitchers have little ability to prevent balls in the field of play from becoming hits, and therefore a pitcher’s ability to record strikeouts, avoid walks, and limit home runs is what separates good pitchers from bad ones. He argues that one piece of evidence for this is the significant lack of year-to-year correlation in BABIP.
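For reference, FIP is built only from those outcomes. A minimal implementation looks like this; the constant is recalculated each season so that league FIP matches league ERA, and 3.10 below is only a typical placeholder.

```python
def fip(hr, bb, hbp, k, ip, constant=3.10):
    """Fielding Independent Pitching from the three 'defense-independent' outcomes.

    The constant is set each year so that league FIP equals league ERA;
    3.10 is just a placeholder value here.
    """
    return (13 * hr + 3 * (bb + hbp) - 2 * k) / ip + constant

# Example: 20 HR, 50 BB, 5 HBP, 180 K over 200 IP
print(round(fip(20, 50, 5, 180, 200), 2))
```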
One of the questions I’ve had about this is what it means for ball-strike counts. If pitchers have little impact on balls in play, then presumably they should give up a similar percentage of singles, doubles, and triples on a 0-2 count as on a 3-0 count. This seems unlikely and unintuitive because a pitcher can’t be as selective in a 3-0 count as in a 0-2 count. Using data from Retrosheet, I built a data file with all at-bats from 2000-2013 that ended in either a single, double, triple, home run, or out in play and determined the count when the ball was hit into play. Below is the data.
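A rough sketch of how that table can be built once the Retrosheet events are in a flat file; the file and column names here are hypothetical stand-ins, not the exact pipeline used for this post.

```python
import pandas as pd

# One row per at-bat ending in a single, double, triple, home run, or out
# in play, built from Retrosheet event files; column names are hypothetical.
events = pd.read_csv("bip_2000_2013.csv")  # columns: balls, strikes, outcome

events["count"] = events["balls"].astype(str) + "-" + events["strikes"].astype(str)

# Share of each outcome within every ball-strike count
outcome_by_count = (
    events.groupby("count")["outcome"]
    .value_counts(normalize=True)
    .unstack(fill_value=0)
    .round(3)
)
print(outcome_by_count)
```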
These results do partly support FIP because as the count becomes more hitter friendly it becomes considerably more likely that the ball will be hit out of the park. Hitters hit a home run four times more often on a 3-0 count than on a 0-2 count. The idea behind FIP is that pitchers have more control over home runs than over balls in play and these results certainly back that idea up.
However, the results also show that triples and doubles are more likely in a hitter's count and singles are more likely in a pitcher's count. I think this chart shows that pitchers are giving up harder-hit balls in hitter's counts than in pitcher's counts. While I wouldn’t have expected these results, they do make sense because it’s easier to hit a lucky single than a double, triple, or home run.
The BABIP stat is very interesting because while it indicates that hitters do have a better BABIP in a hitter's count than in a pitcher's count, the impact is minimal. If one were just looking at BABIP and didn’t look at the types of hits, then the differences would appear to be minor. This means that the lack of significant year-to-year BABIP correlation is mostly irrelevant here. The reason pitchers have better results on balls in play in a pitcher's count than in a hitter's count is that they’re giving up fewer extra-base hits, not fewer hits overall. Presumably there should be some value in giving up singles rather than doubles or triples.
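For reference, this is the standard BABIP formula that comparison rests on:

```python
def babip(h, hr, ab, k, sf):
    """Batting average on balls in play: non-home-run hits divided by
    at-bats that ended with the ball in play (plus sacrifice flies)."""
    return (h - hr) / (ab - k - hr + sf)
```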
It appears that this data does indicate that pitchers do better in pitcher's counts than in hitter's counts. The next question is whether certain types of pitchers are more likely to allow contact in pitcher's counts rather than hitter's counts, or whether they’re able to give up lighter contact. Basically, if we can identify pitcher groups that give up weaker contact than other groups, then this would potentially show a flaw with FIP. Alternatively, if all pitchers allow contact at the same rate for each ball-strike count, then what I found above may be interesting but is purely academic.
I’m not sure which metrics are best for predicting ball-strike count or the type of contact. The two metrics that I used to see whether this is the case were K% and LD%. Pitchers with a high K% presumably face more pitcher-friendly counts than those with a low K%, and if so, logic would indicate that they’d give up more of their contact in pitcher-friendly counts. Likewise, pitchers that give up fewer line drives than average should probably give up lighter contact than those that give up more line drives than average. I’ll start by looking at K%, and the data is below.
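A sketch of the K% split, assuming the same ball-in-play file with each pitcher's season K% attached; the column names are hypothetical.

```python
import pandas as pd

# Balls in play with the pitcher's season K% attached; column names are hypothetical.
events = pd.read_csv("bip_with_k_pct.csv")  # columns: count, outcome, k_pct

avg_k = events["k_pct"].mean()
events["k_group"] = events["k_pct"].ge(avg_k).map(
    {True: "above-average K%", False: "below-average K%"}
)

# Hit-type mix for each group...
outcome_mix = (
    events.groupby("k_group")["outcome"].value_counts(normalize=True).unstack()
)
# ...and how each group's contact is distributed across ball-strike counts
contact_by_count = (
    events.groupby("k_group")["count"].value_counts(normalize=True).unstack()
)
print(outcome_mix.round(4))
print(contact_by_count.round(4))
```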
The data is interesting. It indicates that pitchers with above-average K% do considerably better in hitter's counts than those with below-average K%, and it would be interesting to figure out why that’s the case. It’s possible that it's primarily due to small sample size. However, all in all, pitchers with below average K% give up .03% more singles, .11% more doubles, .01% more triples, and .04% fewer home runs. Furthermore, this next chart shows how often they give up contact for a given ball-strike count.
Pitchers with below average strikeout rates do give up slightly more contact in hitter-friendly counts than in pitcher-friendly counts. The problem is that the difference is minimal. Simply put, this metric doesn’t show that pitchers with high K% either give up lighter contact or pitch in significantly more pitcher-friendly counts than those with low K%. Next up is LD% rates and below is the data.
Basically, the data show that pitchers with better-than-average LD% (i.e., lower-than-average line-drive rates) give up fewer singles, doubles, and triples than those with worse-than-average LD% while allowing the same number of home runs. This suggests that FIP might be improved by looking at the type of contact a pitcher gives up, but it ultimately isn’t helpful for my purposes. The chart below shows that pitchers with above-average and below-average line-drive rates allow contact at roughly similar rates for each ball-strike count.
Since I already looked at BABIP before realizing that it wouldn’t be helpful, I suppose it makes sense to discuss that quickly as well. Below is a chart with data.
Basically, it also shows that pitchers with good BABIPs allow roughly 18% fewer singles and doubles as well as 12% fewer triples than those with bad BABIPs. Pitchers with good and bad BABIPs also allow contact at roughly similar rates for each unique ball-strike count.
At this point, I’ve shown that pitchers have control primarily over their home run rates, but also over balls in play depending on the count. However, I haven’t been able to show that there are certain types of pitchers that are successful because they concentrate the contact they allow in pitcher's counts. This could be for any of at least three reasons.
The first reason is that I simply could have picked poor metrics. Just because the metrics I used didn’t identify pitchers who concentrate their contact in pitcher's counts doesn’t mean that other metrics won’t be more successful.
The second reason is that this could be a fringe skill. It may be the case that only a few pitchers are able to consistently avoid hitter's counts and therefore give up fewer extra-base hits. If so, perhaps looking at only 10% of the population instead of 50% of the population will provide better results.
The third possibility is that pitchers aren’t able to control when they allow contact and therefore noting that avoiding hitter's counts reduces extra-base hits is merely academic.
Next week, I’ll look at this question with a different metric and see whether that changes the results.
11 September 2012
Stephen Strasburg and the Verducci Effect
I sometimes think some ideas have been firmly shown to be wrong and then am surprised to see that there are backwaters where the idea still holds. One of these ideas is the Verducci Effect. The Verducci Effect holds that pitchers 25 years old and younger who throw 30 or more innings beyond their previous year's total are at greater risk for injury. The term is often broadened to mean just about anything to do with giving a young pitcher too much work before he gets older. The idea had a lot of traction at first because it made some logical sense. Young arms are growing arms, and young arms are less experienced arms. Like anyone at the gym or out running, you slowly build up your strength or stamina over time with increasingly great feats. This makes sense to us.
However, time and time again, the Verducci Effect has been shown to not be real at all, even though Verducci delivers a column or two on it every year (though now it looks like he is transitioning over to injured closer stories). The earliest study I can find is from David Gassko in 2006, which found that "overworked" pitchers appeared to pitch more, not fewer, innings the next year. Jeremy Greenhouse wrote a column on injuries and the Effect . . . once again finding nothing to the notion. I am sure there are many, many other articles by amazing writers who went on to be employed by Major League Baseball franchises.
That said . . . the Nationals claim they have evidence supporting the decision to end Stephen Strasburg's season even though he is one of their best arms and they will be entering the playoffs. From my qualitative perspective, this application of the Effect has seemed to please Tom Verducci. I figured I would give the idea another look and measure the general idea in a slightly different (but incredibly simple) way. Are the Nats using data over several years, not just one? They have control of Strasburg's rights in 2013, 2014, and 2015. In a marathon sense, keeping him healthy over that period is more important than the result of four games in September and, arguably, a couple of games in October. In a Keep-Your-Eyes-on-the-Prize sense, well, he should be pitching.
To test this, I took every pitcher from 1998 to 2007 (a ten-year period) who threw more than 140 IP at age 23 within his first three years of pitching at the MLB level. I then sub-divided these players into 20 IP allotments: 140-159, 160-179, 180-199, 200-219, and 220+. I then compared each pitcher's current season to the accumulation of his next three seasons. I compared how many innings they pitched as well as creating a metric for this study, vFIP. To measure vFIP, you divide a pitcher's age-23 FIP by his cumulative FIP over the next three years, scaled so that 100 is the break-even point. A vFIP over 100 shows improvement and vice versa.
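In code, vFIP is just a ratio; the times-100 scaling below is implied by the statement that values over 100 show improvement. This is a sketch of the definition, not the original spreadsheet.

```python
def vfip(age23_fip, next3_fip):
    """vFIP as defined above: age-23 FIP divided by the cumulative FIP over
    the following three seasons, scaled so that 100 is break-even (values
    over 100 mean the pitcher improved)."""
    return 100 * age23_fip / next3_fip

# Using the group FIPs reported further down for the 140-159 IP bucket:
print(round(vfip(4.64, 4.01)))  # ~116, i.e. roughly a 16% improvement
```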
The first thing to look at would be injuries. Half of the ten pitchers in the 140-159 class suffered injuries over the next five years (Ricky Nolasco, Josh Beckett, Roy Oswalt, Daniel Cabrera, and Ken Cloude). This sounds like a great deal of loss, but every innings group had roughly the same injury rate, which agrees with previous studies.
Perhaps better would be to compare actual workloads over the age 24 to 26 seasons. In terms of innings pitched, there was no significant difference between the 140-159 group and the 160-179 (p=0.78) and 180-199 (p=0.66) groups. However, significant differences were found between that group and pitchers who threw more than 200 innings (p=0.03 and p=0.04, respectively).
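The post doesn't say which test produced those p-values; a Welch two-sample t-test on each group's subsequent innings is one natural choice, sketched here with made-up placeholder arrays purely so the snippet runs.

```python
import numpy as np
from scipy import stats

# Placeholder arrays standing in for each group's age 24-26 innings;
# these are synthetic values, not the study's actual data.
rng = np.random.default_rng(0)
ip_140_group = rng.normal(loc=130, scale=40, size=10)  # 140-159 IP bucket (synthetic)
ip_200_group = rng.normal(loc=190, scale=40, size=10)  # 200-219 IP bucket (synthetic)

# Welch's two-sample t-test on subsequent workloads
t_stat, p_value = stats.ttest_ind(ip_140_group, ip_200_group, equal_var=False)
print(round(p_value, 2))
```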
There is likely to be a selection bias in play here, as a pitcher who was given the opportunity to throw 200 innings in a season is likely a very good pitcher (or considered to be one) and will earn or be given the opportunity to pitch a great deal over the next three seasons. At the very least, among the groups below 200 innings, it appears that the exact workload doesn't change what will happen much at all.
Age-23 IP group:    140    160    180    200    220
IP (ages 24-26):    135    128.1  147.1  191.1  194
The final aspect to look at is performance. This is where I will break out the vFIP metric, and here is the data set:

vFIP by age-23 IP group:
  140   160   180   200   220
  123   118   104   100   108
  111    83   103    83    92
   87   102    93   113    87
   76   104   100    96   122
  131    82    98   106    65
  120   106   124    83    98
   99    90    98    92   130
  101    86   114    96    85
   94   105    96   129    80
  116   106   111   105    82

The groups are not significantly different from each other. However, there appears to be a slight improvement in performance for the 140-159 group. Again, this is not significant and likely requires a larger data set to see if the trend can be more firmly established, but the 140-159 pitchers improved their performance by 16% as a group. The other four groups were consistent, with improvement ranging from 1 to 3% over their age-23 seasons.
Again, there may be some selection bias in these groupings because if you are tossing over 200 innings then you have probably pitched very well and it will be difficult to improve upon that. Here are the raw FIPs for the groups.

Age-23 IP group:   140    160    180    200    220
Age 23 FIP:        4.64   4.73   4.40   4.24   4.18
Age 24-26 FIP:     4.01   4.65   4.28   4.17   4.16

The data suggests that the 140-159 group pitchers saw great improvement in their performances. However, I would temper those differences with the idea that perhaps a pitcher who throws 140-159 innings at age 23 and proceeds to do poorly is more likely to be replaced in the rotation than a pitcher who tosses more innings. There may be a prejudice that benefits pitchers who threw more innings during their age-23 season.
That all said, I am not sure how this informs us about Stephen Strasburg. There is no evidence from the above methodology that lighter workloads decrease injury rates. It appears that pitchers who log more innings during their age-23 season wind up throwing more innings in the future at about the same level of performance. Pitchers who are worked for 140-159 MLB innings tend to show improvement as a group in terms of performance, but not in innings pitched. This may be the result of less desirable pitchers being discarded more easily when they have less of a track record.
08 April 2011
Is Chris Tillman Injured or just a Taun Taun?
I want to be completely and utterly clear here. On this occasion, I have absolutely no inside information and am basing this solely off of Pitch f/x.
This Spring Training there were murmurs from opposing scouts that Chris Tillman had turned into a junk pitcher. He was no longer using his fastball as much and was instead treating it as a show-me pitch, increasing the use of his secondary pitches. I had thought he was doing this just to get more feel for them and get ready for the season. Last night, it did not look so good.
Fastball
Count: 63
Swing and Miss: 2
Velocity: 87.3 +/- 1.1 mph (89.5 mph max)
Horizontal Run: -1.8 +/- 1.7 inches
Vertical Drop: 11.0 +/- 1.8 inches
In comparison, this is what he did on July 10, 2010:
Fastball
Count: 69
Swing and Miss: 2
Velocity: 91.4 +/- 1.2 mph (93.5 mph max)
Horizontal Run: -3.4 +/- 1.6 inches
Vertical Drop: 10.2 +/- 2.1 inches
Whenever I see a difference of 3 mph or more, it concerns me. That loss of velocity is a major hindrance. Comparing the two starts, Tillman was not missing bats with his fastball in either one, but that loss of speed gives a batter more time to square the ball up and make more solid contact. Compound that with Tillman getting less movement on his fastball, and it becomes more of a concern.
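For what it's worth, the per-pitch-type numbers above can be pulled together from the raw Pitch f/x feed with a few lines of pandas. This is only a sketch; the file name and several column names (pitch_type, start_speed, swinging_strike) are assumptions, though pfx_x and pfx_z are the usual horizontal and vertical movement fields, in inches.

```python
import pandas as pd

# One row per pitch from a single start; column names are hypothetical.
pitches = pd.read_csv("tillman_start.csv")
# expected columns: pitch_type, start_speed, pfx_x, pfx_z, swinging_strike

summary = pitches.groupby("pitch_type").agg(
    count=("start_speed", "size"),
    whiffs=("swinging_strike", "sum"),
    velo_mean=("start_speed", "mean"),
    velo_sd=("start_speed", "std"),
    velo_max=("start_speed", "max"),
    run=("pfx_x", "mean"),   # horizontal movement
    drop=("pfx_z", "mean"),  # vertical movement
)
print(summary.round(1))
```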
Another important aspect of pitching is having a nice delta between your fastball and changeup. The wider the margin, while keeping the same arm action, the harder it is for the batter to time the pitch. Last July the difference between the fastball and changeup was 9.5 mph, while last night it was 6.8 mph. The movement also looks a bit flatter: last year it had more horizontal run and more sink. The curveballs look different too, but both could be useful. Last year, the pitch was harder and had more drop. This year, it is about 3 mph slower with more horizontal movement and slightly less drop.
It may have just been a bad night.
How did it compare to last Saturday?
He was humming along at 89.4 mph with a max of 93.5 mph. His curveball was about the same speed, but had almost twice as much movement. The delta between his fastball and changeup was 11 mph. So . . . this was not the same pitcher. Tillman, as mentioned earlier, had diminished velocity during the Spring, and his game last Saturday fits that pattern. However, last night was worse, as his max speed was 4 mph lower. Hopefully, it is just him not being able to adapt to the cold.
12 May 2008
Napkin Scratches: Handle with too much care?

One topic that comes up time and time again is how strong and durable pitchers used to be. Gone are the days when a pitcher would finish half the games he started. Gone are the days when no one needed a LOOGY. These are things we often lament. Right in hand with that lamenting is a palpable anger toward today's pampered pitchers. These guys are given strict pitch counts in the minors right up into the majors. Gone too are the days when it was common to hear about some obscure prospect in the PCL who threw 255 pitches in a single game that lasted 18 innings. It just isn't done.
So . . . why?
The first culprit is money. In 1976, the Reserve Clause was shot down. This enabled free agency, and it changed the way a lot of different aspects of baseball were handled. In terms of pitching, it affected a few things. First off, it increased the value of pitching. Not only did the current game or season matter, but also seasons down the road. You had to invest a significant amount of money into a pitcher, and you would not want to take a loss on it. This may have been the main force pushing the five-man rotation. Although the five-man rotation was not unheard of before the ruling, it quickly became the norm after it. Of course, as you may remember, the Orioles switched to the five-man rotation in 1983 when Joe Altobelli took over.
Now, what the five-man rotation does is give the pitcher more time off between starts for rest. The idea behind this is that pitching is a violent action. It is also not a very normal way to use your body. Quite unnatural. It stands to reason that chronic repetition would result in physical damage, so more time off between starts should allow the body to recuperate. Others argue against this because it results in an uneven workout schedule. In the four-man rotation, you would take a day off, throw a side session, then take a day off between starts. There is really little reason to think an uneven schedule hurts performance for any athlete who is not obsessive-compulsive about such things. In my opinion, the five-man rotation is only a plus if the amount of strain forced on a pitcher requires an extra day of rest. It should also be noted that not all arms are alike.
After the five-man rotation was cemented in, pitch counts started popping up on the horizon. As far as I can tell, concern about pitch counts emerged in the late 80s and early 90s as drafted pitching prospects began making serious mint. The philosophy seeped into the majors in order to help preserve oft-injured pitchers (e.g., Saberhagen) or fresh faces (e.g., Josh Beckett). The number typically chosen is 100. The PAP system goes with that. Top-tier pitchers usually throw about 100 pitches per game. Even guys who claim not to use counts (e.g., Ozzie Guillen) still seem to have pitchers that hover around the 100 mark.
That is the weird thing. Coaches who believe in pitch counts and those who don't . . . they do not vary much in terms of when a pitcher should be taken out. Again, this is not a bright-line criterion. There is variation from arm to arm; some seem to be able to handle more and some less. This is pretty much regardless of coach. It seems that either no one uses pitch counts or that pitch counts just don't vary from normal performance measures.
This got me thinking about a few things:
1. Training and development is better now than it was in the past
2. Players are more athletic and stronger now than they were
3. Pitching, which is more reliant on tendons and ligaments, has less potential to benefit from improvements in other areas of training and sport medicine. For instance, you can strengthen a pitcher's legs, but you cannot strengthen his tendons and ligaments to withstand a higher degree of torque.
So:
As technology and knowledge increases, so does the player's ability to perform, which is most likely to benefit a hitter more so than a pitcher due to the physical limitations placed on each activity.
That reminded me of this study. Read it. Seriously, read it. Good stuff. It is not perfect, but it provides a great approximation of how developed talent has increased over the years. Some interesting things pop out in that study. The ten best hitters ever in order from best to worst:
Bonds, Ted Williams, Aaron, Musial, Mays, Frank Robinson, Yaz, Rickey Henderson, Cobb, and Mantle.
Honestly, it makes sense. People often overlook how good guys like Rickey and Musial were. It is also interesting to think that Ted Williams would still kill in today's game.
Anyway, if players have gotten better . . . wouldn't pitching have gotten tougher, assuming hitters benefit more because they are not so reliant on tendons and ligaments? To that end, we can use league quality as a coefficient to determine how today's pitching load per start compares to other eras. For this napkin-scratching effort I am considering all starting pitching data in the AL from 1969 to 2007. I am predicting pitch counts based on the method I previously used, and I am normalizing the pitch counts based on the competition level of the league, taken from this graph.
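Since the earlier method isn't spelled out here, a stand-in sketch using the commonly cited basic pitch count estimator (roughly 3.3 pitches per plate appearance, with extra weight for strikeouts and walks) and a league-quality coefficient would look like this; the estimator weights and the league_quality column are assumptions, not the original method.

```python
import pandas as pd

def estimated_pitches(pa, so, bb):
    """Basic pitch count estimator; a commonly cited stand-in, not
    necessarily the method used in the earlier post."""
    return 3.3 * pa + 1.5 * so + 2.2 * bb

# AL starting-pitcher totals by season, 1969-2007; column names are hypothetical.
seasons = pd.read_csv("al_sp_by_year.csv")  # year, gs, pa, so, bb, league_quality

seasons["pitches_per_start"] = (
    estimated_pitches(seasons["pa"], seasons["so"], seasons["bb"]) / seasons["gs"]
)
# Scale by the league-quality coefficient so eras are on one footing.
seasons["adj_pitches_per_start"] = (
    seasons["pitches_per_start"] * seasons["league_quality"]
)
print(seasons[["year", "pitches_per_start", "adj_pitches_per_start"]].round(1))
```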
RESULT
The black line is raw SP Pitches per Game and the orange line is league quality adjusted SP Pitches per Game.

What is interesting to note here is that as pitch counts are reduced, the league quality coefficient somewhat accounts for it. This suggests that league quality is driving the decrease in pitch counts. Others have countered that it might just be a relationship with runs per game, so I ran another version adjusted by runs per game, and as you can see there is much more variability in the runs-per-game adjusted line (black) than in the league-quality adjusted line.

Conclusion
League quality appears to affect the number of pitches an SP throws. This suggests that pitch counts are either not followed or are more complex than a single bright line. It appears as if pitchers are being used quite similarly to how they were back in the 70s, even though IP/G has dropped from 6.2 in 1973 to almost 5.2 in 2007.