
BABIP and DIPS 3.0

Marc wants me to introduce myself, so I guess I should start by saying that my name is David Gassko, as my handle here might have given away. I write a weekly column for The Hardball Times, and also have been blogging at Statistically Speaking for almost a year now. I'll be writing here at BtB now as well, semi-regularly. I hope you enjoy my writing, and if not, I hope you enjoy my wrath.

Last August, I published an article updating Voros McCracken's Defense Independent Pitching Statistics (DIPS), calling my system DIPS 3.0. To recap quickly, what McCracken found was that individual pitchers seem to show little control over what happens to balls put into play. Instead, he found, pitchers seem to have control only over the defense-independent categories--strikeouts, walks, and, to a lesser extent, home runs.

I took his idea one step further (a step that was actually originally suggested by Voros), and based DIPS on batted ball information. Basically, I take a pitcher's batted ball line (ground balls, line drives, bunts, and outfield and infield fly balls) and transform it into a "regular" line--singles, doubles, triples, home runs, and reached on error, as well as strikeouts, walks, and hit-by-pitches.

Here's a very simple run-down of how I do it. I take the number of batted balls the pitcher allowed and assign him a league-average line drive percentage, because based on my research with JC Bradbury in the Hardball Times Annual, it seems that pitchers have little control over how many line drives they allow. I then split the rest of his batted balls based on his actual batted ball percentages. Take, for example, Jarrod Washburn. He allowed 586 batted balls last year, and since the average line drive percentage in the AL last year was 19.9%, I assign him 117 line drives. He actually allowed 120. Washburn also allowed 206 outfield flies, so his "new" outfield fly number would be 206/(586-120)*(586-117) = 208. And so on for every batted ball type. A better explanation of the whole method is available here.
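If you want to follow along in code, here's a rough Python sketch of that translation step. The league LD% and Washburn's numbers are hard-coded from above; the 260 batted balls that weren't liners or outfield flies are lumped into an "other" bucket just so the totals work out:

```python
# Batted-ball translation: force a league-average number of line drives,
# then scale the pitcher's other batted-ball types so the total number
# of batted balls is unchanged.

LEAGUE_LD_PCT = 0.199  # 2005 AL line drive rate, per the article

def translate_batted_balls(batted_balls):
    """batted_balls maps batted-ball type to count, with 'LD' for line drives."""
    total = sum(batted_balls.values())
    new_ld = LEAGUE_LD_PCT * total  # assigned line drives (~117 for Washburn)
    scale = (total - new_ld) / (total - batted_balls['LD'])
    translated = {bb: n * scale for bb, n in batted_balls.items() if bb != 'LD'}
    translated['LD'] = new_ld
    return translated

# Jarrod Washburn, 2005: 586 batted balls, 120 actual line drives,
# 206 outfield flies, and 260 other batted balls.
washburn = {'LD': 120, 'OF': 206, 'other': 260}
print(round(translate_batted_balls(washburn)['OF'], 1))  # ~207.5, the 208 above
```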

To transform this translated batted ball line into "normal" statistics, I take the average outcome of each type of batted ball and multiply it by the translated line. So, for example, 50.8% of all line drives became singles in the American League last year. Since Washburn had 117 translated line drives, he would be expected to allow 59 singles based on his liner numbers alone. I do this for every batted ball type and every hit type (as well as reached-on-errors).
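Here's that step as a sketch, continuing from Washburn's translated line above. Fair warning: only the LD-to-single rate (.508) is a real number from this article; every other outcome rate below is an illustrative placeholder, not actual 2005 data:

```python
# Expected batting line: multiply each translated batted-ball count by
# the league-average rate at which that batted-ball type becomes each
# outcome. Only the LD -> 1B rate (.508) is from the article; the other
# rates here are illustrative placeholders, not real 2005 data.

OUTCOME_RATES = {
    'LD':    {'1B': 0.508, '2B': 0.170, '3B': 0.020, 'HR': 0.030, 'ROE': 0.010},
    'OF':    {'1B': 0.040, '2B': 0.060, '3B': 0.010, 'HR': 0.090, 'ROE': 0.005},
    'other': {'1B': 0.200, '2B': 0.020, '3B': 0.002, 'HR': 0.000, 'ROE': 0.015},
}

def expected_batting_line(translated):
    line = {}
    for bb_type, count in translated.items():
        for outcome, rate in OUTCOME_RATES[bb_type].items():
            line[outcome] = line.get(outcome, 0.0) + count * rate
    return line

# Washburn's translated line from the previous sketch:
translated = {'LD': 116.6, 'OF': 207.5, 'other': 261.9}
print(round(translated['LD'] * OUTCOME_RATES['LD']['1B']))  # 59 singles from liners
```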

I then plug all of that into BaseRuns and find how many runs the pitcher would be expected to allow.
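For those unfamiliar, BaseRuns models scoring as baserunners times the fraction of them that score, plus home runs (which always score). Here's the simple published form of David Smyth's estimator--not necessarily the exact version the translated lines get fed into:

```python
# Basic BaseRuns: runs = A * B / (B + C) + D, where A is baserunners,
# B measures advancement, C is outs, and D is home runs (which always
# score). This is the simple published form of David Smyth's estimator.

def base_runs(ab, h, doubles, triples, hr, bb):
    tb = h + doubles + 2 * triples + 3 * hr              # total bases
    a = h + bb - hr                                      # baserunners other than HR
    b = (1.4 * tb - 0.6 * h - 3 * hr + 0.1 * bb) * 1.02  # advancement factor
    c = ab - h                                           # outs
    d = hr                                               # HR always score
    return a * b / (b + c) + d
```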

Okay, 375 words in, and I'm finally getting to the point of this post. One criticism my system has received is that it zeroes out too many things. To some extent, this is true, though the point of DIPS is to zero out the things that don't really matter. But here's the real question: how great is that extent?

On this very site, John Beamer wrote an article arguing that there is some skill involved in preventing line drives, which is certainly true. However, John's argument (not to put words in his mouth!) seemed to extend beyond that: like many others, John was arguing that disregarding one year's worth of line drives is incorrect--that there is some real information contained in those numbers. He is not the only person I respect to have said as much, specifically in regard to DIPS 3.0. A poster who goes by the tag GuyDM posted the following on the Strategy and Sabermetrics board a while ago:

Not to quarrel with the central importance of K and BB rates, but to some extent the correlation of your metric and DIPS 3.0 is inevitable. David is imposing league average LD%, and standard run values for every BIP type. So the only source of variance left is GB/FB ratio, which translates into roughly +-.25 R/G given the range in GB/FB.

Essentially, his point was that the process I go through for DIPS 3.0 does not leave much room for variance--less than there should be. Is that true? Luckily, the process is set up in such a way that we can actually check. Using my expected batting lines against, we can calculate each pitcher's expected batting average on balls in play (BABIP). In this case, I include expected reached-on-errors in the numerator, because they are defense independent here--they are based purely on batted ball distribution.

For example, Johan Santana was expected to allow 127 singles, 39 doubles, 4 triples, and 6 reached on error last year, according to DIPS 3.0. With 601 balls in play, that works out to a .294 expected BABIP.
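In code, that step is a one-liner. Note that the rounded counts above actually come out to .293; the .294 presumably comes from the unrounded expected values:

```python
# Expected BABIP with reached-on-error in the numerator, using Santana's
# expected line from the article (rounded counts).
singles, doubles, triples, roe = 127, 39, 4, 6
bip = 601
xbabip = (singles + doubles + triples + roe) / bip
print(round(xbabip, 3))  # .293 on these rounded counts
```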

Doing the math for every pitcher who faced at least 350 batters (BFP) in 2005--171 in all--how much variance is there among these pitchers? The answer is .009. That's our standard deviation, a measure of spread: roughly 68% of pitchers would be expected to fall within +/- .009 points of the average, and roughly 95% within +/- .018 points. How much is that? Since the average pitcher in our sample had about 500 BIP, one standard deviation works out to +/- 4.5 hits, or about .15 points of ERA.
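The spread itself is just the standard deviation of one expected BABIP per pitcher; in the sketch below, xbabips stands in for those 171 values:

```python
# Spread of expected BABIP across the 171 pitchers with 350+ BFP in 2005.
import statistics

def babip_spread(xbabips):
    """xbabips: one expected BABIP per pitcher, computed as above."""
    return statistics.pstdev(xbabips)  # the article reports .009

# Scale check: at roughly 500 BIP per pitcher, one standard deviation is
# 0.009 * 500 = 4.5 hits, the figure quoted in the text.
```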

So is that a lot, a little, or what? Actually, it's just right. According to a research paper by Erik Allen and Arvin Hsu, the true standard deviation of BABIP should be, you guessed it, .009 points. DIPS 3.0 captures the true spread in BABIP, which is further support for disregarding a pitcher's line drive percentage. That extra batted ball information plays a big role in understanding the subtler differences between pitcher seasons, because it allows us to capture the true spread in BABIP--something Voros' two versions of DIPS could not do, since they assumed every pitcher would post the same BABIP (a spread of zero), though the second version did make some small adjustments based on handedness. And once again, we have even more reason to believe that line drive percentage really means nothing over the course of one season.

So vive DIPS 3.0! Oh, and it's nice to join the BtB staff.
