Heavy stuff

DMZ · July 25, 2005 at 8:57 pm · Filed Under General baseball 

In response to some topics that have come up in comment threads lately, as I tinker with maybe writing the USSM FAQ to help with frequently-rehashed topics, I offer for your edification:

Tom Ruane on batting orders (“batting orders matter even less than people have believed”). As in… almost not at all. I had a good argument with Rob Neyer about this once at a Pizza Feed: he held that it wasn’t a big enough deal to be worth caring about, and that you might as well bat players in traditional roles and devote your energies to assembling a better bullpen or something. I argued that every run counts and managers should constantly optimize… but for maybe three runs a year? You really are better off spending your energy elsewhere.
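
If you’re curious what that kind of estimate even looks like, here’s a minimal Monte Carlo sketch (not Ruane’s actual method): reduce every plate appearance to an out, walk, single, or home run with invented rates, and simulate two orderings of the same nine hitters over a bunch of seasons.

```python
import random

# A deliberately crude lineup simulator. Each hitter is a tuple of per-PA
# probabilities (out, walk, single, home run); everything else about real
# baseball is ignored, and all of the rates below are invented.

def sim_inning(lineup, spot):
    """Simulate one inning starting at batting-order position `spot`.
    Returns (runs scored, next spot due up)."""
    outs, runs, bases = 0, 0, [False, False, False]
    while outs < 3:
        p_out, p_bb, p_1b, p_hr = lineup[spot % 9]
        r = random.random()
        if r < p_out:
            outs += 1
        elif r < p_out + p_bb:
            # walk: advance forced runners only
            if bases[0] and bases[1] and bases[2]:
                runs += 1
            if bases[0] and bases[1]:
                bases[2] = True
            if bases[0]:
                bases[1] = True
            bases[0] = True
        elif r < p_out + p_bb + p_1b:
            # single: runners on second and third score, runner on first
            # stops at second (a simplification)
            runs += bases[1] + bases[2]
            bases = [True, bases[0], False]
        else:
            # home run: everyone scores
            runs += 1 + sum(bases)
            bases = [False, False, False]
        spot += 1
    return runs, spot % 9

def runs_per_season(lineup, games=162):
    spot, total = 0, 0
    for _ in range(games * 9):          # nine innings a game, no extras
        runs, spot = sim_inning(lineup, spot)
        total += runs
    return total

def mean_runs(lineup, seasons=200):
    return sum(runs_per_season(lineup) for _ in range(seasons)) / seasons

good = (0.58, 0.12, 0.24, 0.06)   # star hitter
poor = (0.74, 0.05, 0.19, 0.02)   # weak hitter
avg  = (0.68, 0.08, 0.20, 0.04)   # everyone else

traditional  = [avg, avg, good, avg, avg, avg, avg, avg, poor]
front_loaded = [good, avg, avg, avg, avg, avg, avg, avg, poor]

print(mean_runs(traditional), mean_runs(front_loaded))
```

The absolute totals depend entirely on the made-up rates; the point is the size of the gap between the two orderings, which is what the lineup studies are really measuring.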

Tom Ruane on clutch hitting. (“One could argue that the forces at work here, if they exist, must be awfully weak to so closely mimic random noise, and if they are really that inconsequential perhaps we could assume they don’t exist without much loss of accuracy.”)
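
Here’s the flavor of that argument as a toy simulation (my numbers, not Ruane’s): give every hitter identical true ability in all situations and look at how large the apparent clutch splits come out anyway.

```python
import random

# Every hitter here is a .270 true-talent hitter in every situation.
# Each gets 500 ordinary at-bats and 150 "clutch" at-bats per season; we
# then look at the spread of (clutch average minus overall average) that
# pure chance produces. The at-bat counts are invented round numbers.

def clutch_split(true_avg=0.270, regular_ab=500, clutch_ab=150):
    reg_hits = sum(random.random() < true_avg for _ in range(regular_ab))
    clutch_hits = sum(random.random() < true_avg for _ in range(clutch_ab))
    overall = (reg_hits + clutch_hits) / (regular_ab + clutch_ab)
    return clutch_hits / clutch_ab - overall

splits = sorted(clutch_split() for _ in range(1000))
print("with zero clutch skill, 5th-95th percentile of apparent splits:")
print("  %+.3f to %+.3f" % (splits[50], splits[950]))
```

In a sample that size, chance alone routinely hands out splits of forty or fifty points in either direction, which is why one season of clutch numbers tells you so little.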

And here’s the 2003 Tom Tippett article “Can pitchers prevent hits on balls in play?” which I highly recommend.

Comments

30 Responses to “Heavy stuff”

  1. Dave on July 25th, 2005 9:01 pm

    I don’t really like Ruane’s concluding point that you quoted above. I think the best summation of how to evaluate clutch hitting I’ve heard went something like this:

    Whether it exists as a skill or not is a matter of theory, not practicality. What we do know is that we do not have the tools necessary to accurately find any hitters who are likely to hit better in clutch situations than we would otherwise expect. If we can’t predict who is going to hit well in that situation, then, for our purposes, it is random noise. If I can’t know it in the future, then as a decision maker, to me, it doesn’t exist.

    I’m pretty sure that was TangoTiger, and that’s a pretty rough paraphrase, but that sums up my thoughts on the issue pretty nicely.

  2. DMZ on July 25th, 2005 9:06 pm

    In fairness to Tom, he wrote a fairly lengthy bit for a conclusion that I chopped one sentence out of. He agrees with you, would be the short version.

  3. roger tang on July 25th, 2005 9:48 pm

    This is going in the left-hand reference column (I hope)?

  4. fiction on July 25th, 2005 9:53 pm

    Two years I have been addicted to your site. Only have WebTV but still check your site before the roosters crow. Thank you

  5. fiction on July 25th, 2005 10:14 pm

    What met to type best to AMTRON

    DMZ—realise you are special.

  6. troy on July 25th, 2005 10:25 pm

    Holy stalker.

  7. fiction on July 25th, 2005 10:32 pm

    DMZ- Have you given up on MMs? Who are your best trade prospects?

    Are they different than others?

  8. DMZ on July 25th, 2005 10:42 pm

    MMs? What’s that?

  9. Colm on July 25th, 2005 11:07 pm

    Morse at a guess.

  10. Colm on July 25th, 2005 11:08 pm

    Tom Tippett article highly recommended. Can’t get enough of it. Thanks for posting the link again.

  11. adam on July 26th, 2005 12:12 am

    I think posting a general FAQ about all these topics would be one of the most useful things… in U.S.S. Mariner history.

    Yes, I said it. History.

  12. Mat on July 26th, 2005 1:01 am

    DMZ – Did you really get the feeling that lineup effects didn’t matter after reading the Tom Ruane article you linked? Having read it, it seems to me that he showed lineup effects don’t matter much if you’ve got a bunch of league average hitters you are moving around in the lineup. However, when he got around to looking at where Bonds should hit, and when he looked at the ’61 Yankees specifically, there were deviations as large as 1 run/game between some of the best and worst lineups. Now, some of those lineups are just so bad that no traditional “baseball man” would ever even consider them, but that point didn’t seem very well explored in the article.

    It seems to me that the biggest lineup effects are going to come when you start putting actual players in the lineup, with a larger variety of skills (or lack of skills) than the league average guys. I remember in 2002 when the Royals were insistent on hitting Chuck Knoblauch and Neifi Perez in front of Carlos Beltran and Mike Sweeney at the top of the order. That year, Knoblauch and Perez both had sub-.290 OBPs. It was like KC was trying to kill rallies before Beltran and Sweeney had the chance to drive guys in. I think if Tom Ruane put the numbers in his calculation for that specific team, there would have been a more than 3 run/year penalty for hitting their worst two hitters in front of their best two hitters.

    An interesting article and a lot of work, to be sure, but I don’t think it provided a very good upper bound on the effects of lineup shuffling.

  13. DMZ on July 26th, 2005 1:18 am

    That’s not the point, though. He’s not arguing it’s 3 runs/year for best-to-worst; it’s 3 runs from an optimal lineup to a traditional one.

    Here’s my view on this: if you have nine players who all hit the same, it doesn’t matter. The more variance you have in individual skills, the more lineup becomes important. Now that’s a whole other interesting argument not investigated: how big do the differences have to be before it gets interesting?

    But what I take away from Ruane’s article is that given a reasonably normal major league team, if you arrange it in the traditional style, you might only be costing the team a couple of runs compared to the best possible arrangement.

  14. ray on July 26th, 2005 2:03 am

    I think batting order is more for the ego and mind of the player than anything else — just like who’s pitching on opening day.

  15. gerzowitz on July 26th, 2005 2:45 am

    I recently caught a brief comment by TangoTiger on BTF with regard to batting orders. He said that the difference between the best and worst real-life lineup construction is 50 runs, or 5 wins. If that’s true, then that’s a lot more significant than originally thought.

  16. gerzowitz on July 26th, 2005 3:16 am

    Here’s what Tango actually said:

    “Generally speaking, you gain 5 to 15 runs in lineup ordering, which is either tiny or enormous, depending on your point of view.”

    “…it was a comparison between actual lineups, and optimal ones. The difference between the best possible and worst possible is probably 50 runs.”

    Ok, so the 50 runs is the difference between optimal and wacky (not real life) lineups.

  17. Itea on July 26th, 2005 5:28 am

    The Tippett article is interesting. What I’ve never agreed with in the conversation is that home runs should be subtracted from the equation. People look at BABIP, see low correlations, and suggest that it’s mostly random. I think it makes more sense to put the HRs back in and just look at opposing BA/SLG. There’s a much higher year-to-year correlation, and it answers the more important question – if a pitcher doesn’t walk or strike someone out, can they affect what happens when the batter hits the ball? And the answer is that they can.

    On an intuitive level I’ve never believed in the complete randomness of BABIP, and it’s not because of the Pedros of the world, it’s because at the other end of the spectrum I feel like I’ve witnessed many, many failed major league pitchers who’ve gotten disproportionally roped in their short careers. Most of these guys (perhaps all) don’t stay in the bigs long enough to be included in a year-to-year study; they perhaps throw 20 innings one year, 10 the next, and then disappear from the map. It’s difficult to study this class of pitcher, of course, because one could also posit that they have just been unlucky, but that unluckiness has translated into a performance record that prevents them from staying in major league baseball.

    Back to the HR issue – there is an inherent advantage given to fly-ball pitchers as opposed to ground-ball pitchers in the BABIP stat when HRs are subtracted. I think this affects the nominal conclusions of Tippett’s study. A HR is another hit – sure, it (usually) couldn’t possibly be fielded, but that’s just as true for a bullet down the line that clears the third baseman’s head by 10 feet. I think the numbers would be more informative if the HRs were put back in, because a HR is perhaps the ultimate expression of having your pitch hit very hard.
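
    If anyone wants to check the year-to-year correlation point themselves, the calculation is simple; something like the sketch below works, run once with HRs removed and once with them left in. The data layout here is hypothetical, so plug in whatever pitcher-season rates you have.

    ```python
    import math

    # Line up each pitcher's rate in year N against his rate in year N+1 and
    # compute Pearson's r; a low r means the stat says little about next year.
    # The dictionary layout is a hypothetical one:
    #   {(pitcher, year): (rate, balls_in_play)}

    def pearson_r(xs, ys):
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
        sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
        return cov / (sd_x * sd_y)

    def year_to_year_r(rates, min_bip=300):
        xs, ys = [], []
        for (pitcher, year), (rate, bip) in rates.items():
            following = rates.get((pitcher, year + 1))
            if following and bip >= min_bip and following[1] >= min_bip:
                xs.append(rate)
                ys.append(following[0])
        return pearson_r(xs, ys)
    ```

    Comparing the two r values side by side is the cleanest way to see how much of the signal the HRs carry.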

  18. Zzyzx on July 26th, 2005 7:12 am

    Itea – I felt that way for a while too (removing HRs seemed kind of unfair), but then I finally got it. This is useful as a predictive tool. If a pitcher is pitching surprisingly well but it turns out that he gives up a batting average on balls in play that’s 100 points lower than his teammates’, it’s more likely that he is riding a hot streak than that he’s discovered a new level of performance.
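
    One crude way to put that in numbers is to regress the observed BABIP toward the league rate in proportion to sample size. This is only a sketch; the .295 league rate and the 1200-ball weight are placeholder assumptions, not published constants.

    ```python
    # Shrink an observed BABIP toward the league mean; small samples get
    # pulled hardest. Both the .295 league rate and the 1200-ball prior
    # weight are assumptions for illustration.

    def regressed_babip(hits_in_play, balls_in_play,
                        league_babip=0.295, prior_weight=1200):
        return (hits_in_play + league_babip * prior_weight) / \
               (balls_in_play + prior_weight)

    # A starter allowing a .230 BABIP over 400 balls in play still projects
    # much closer to the league rate than to his hot streak:
    print(round(regressed_babip(0.230 * 400, 400), 3))   # about .279
    ```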

    Baseball is a weird sport in that there’s no direct relationship between how the batter does his job and what the results are. I’ve been hitting little dribblers all year in softball that I’ve been beating out for infield hits. I finally have been making decent contact the last few weeks, but they’ve all been hit right at people. (That’s why I don’t believe in clutch hitting, by the way; I can’t see a skill that makes a player’s hits somehow avoid the defenders. It’s only a few inches that separate a game-winning single and a game-ending double play.)

    As for using this as a tool to show pitchers’ relative performance rather than to predict it, I suspect that defense-independent slugging percentage would vary quite a bit. The pitcher who gets hit hard would give up a lot more doubles and triples than the one who surrenders bloop singles. I’ve been tempted to run that study myself one day. However, this knowledge does have a use as long as you know what it means.

  19. Steve on July 26th, 2005 8:04 am

    #17: re BABIP.

    I discussed this in comments a few months ago. Most of us who have played Little League ball know very well that there is a great range in the ability of pitchers to control how often and how hard batters hit the ball. That is why so many people have a hard time believing the data about BABIP.

    But there is also a “luck” component as well. You can put almost any guy who can chuck pitches in the strike zone on the mound and eventually get three outs. Occasionally a hitter will not make good contact, or he will hit one directly at a fielder.

    For the average palooka starting out in the low minors or in college, his inability to exert much control over how hard batters hit the ball dwarfs the luck component. That guy gets hammered and never advances to the next level. Note that the luck component is still there, though. It just doesn’t play a significant role.

    The guys who can adequately limit how hard they get hit advance to the next level, where another round of weeding goes on. The process is repeated at each level.

    But note that if the luck component is relatively constant, luck becomes relatively more important in the overall outcome as we continue to select only those pitchers who can control what happens when a batter makes contact. By the time guys get weeded out to reach the MLB level, the “luck” element has become a bigger component of what happens on balls in play than the pitcher’s ability to control what happens to a batted ball. It’s at that point that a guy is ready to pitch in MLB, and only guys who have at least that threshold ability will survive in MLB.

    In essence, we have trimmed the group of pitchers to the select few whose ability to keep batters from hitting the ball is so good that the differences among them in that ability to limit damage when hitters make contact are getting lost in the background noise associated with the random luck element.

    If that hypothesis is true, we ought to expect to observe the following:

    When we examine the data, we ought to occasionally see some residual differences in BABIP just emerging from the background noise for some pitchers.

    We ought to periodically see some pitchers who are not among the select reach the big leagues by dint of luck or circumstance. Those pitchers should get shelled and have very short careers.

    What we actually see is pretty consistent with those expectations.
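
    That weeding-out story is easy to mock up. Here’s a toy version, with every number invented: give each pitcher a fixed contact skill plus some season-to-season luck, keep the better-looking half at each level, and watch the spread in true skill shrink while the luck stays the same size.

    ```python
    import random

    # Toy model of the weeding-out hypothesis above. "Skill" is a pitcher's
    # true BABIP-against; "luck" is season-to-season noise on top of it.
    # All of the numbers are invented for illustration.

    random.seed(1)
    SKILL_SD, LUCK_SD = 0.030, 0.015

    pitchers = [random.gauss(0.300, SKILL_SD) for _ in range(10000)]

    for level in ("Low minors", "High minors", "MLB"):
        observed = [(skill + random.gauss(0, LUCK_SD), skill) for skill in pitchers]
        observed.sort()                              # lowest observed BABIP first
        survivors = [skill for _, skill in observed[: len(observed) // 2]]
        mean = sum(survivors) / len(survivors)
        skill_sd = (sum((s - mean) ** 2 for s in survivors) / len(survivors)) ** 0.5
        print("%-12s true-skill SD among survivors: %.4f (luck SD is still %.3f)"
              % (level, skill_sd, LUCK_SD))
        pitchers = survivors                         # the survivors move up
    ```

    With these made-up numbers, by the third cut the remaining spread in true skill is roughly the size of the luck term, which is the pattern the hypothesis predicts.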

  20. Josh on July 26th, 2005 8:30 am

    The Tom Tippett BABIP article is one of the most informative baseball articles I’ve ever read, thanks for pointing it out!

  21. Itea on July 26th, 2005 9:26 am

    #18 –

    If the point is to gauge whether a pitcher has been “unlucky”, then there is some merit to subtracting the HRs, but even in that context there is a huge amount of what could be called “luck”. The majority of HRs don’t clear the wall by very far, and the majority of HRs hit between the power alley and the foul pole on either side of the field aren’t hit far enough to clear the yard in dead center field. If we want to define the fair playing field as 0 degrees down the first base line and 90 degrees down the third base line, what is the logic in differentiating between the 45 degree sharp grounder (single up the middle) and the 57 degree grounder (6-3 groundout) as luck, but not the 380-foot fly ball in the same two directions?

    That’s ignoring the specifics of ballpark shape and OF defense. I realize that these studies try to take this into account, but different pitchers have very different patterns of where the balls-in-play against them land, and so (e.g.) a poor CF defense can cost Pitcher A a lot more hits than it costs Pitcher B.

    Honestly, if there was one number to be subtracted from the BABIP statistics, I think it would make more sense to take out the foul outs – that is something that is extremely stadium-dependent.

    McCracken’s findings aren’t that surprising, because the more narrowly one defines the action off the bat, the more pitcher-independent it is whether it’s a hit or not. If we define something like “line drives” as “balls hit 145-160 feet in the air that land 1.1-1.2 seconds after contact with the bat” (I’m just guessing here, but you get the idea), then I imagine that whether those line drives are hits or not is very random – either they are hit at someone, or they aren’t. The same goes for “sharply hit ground balls”, “weakly hit ground balls”, etc. The differences between pitchers lie not in the results of these categories, but in how many batted balls of each type they allow. So it would make more sense to come up with a good way to typify these kinds of batted balls than to express BABIP; it would be a more accurate expression of what is happening against that pitcher, and on that level we could consider the difference between “right at the second baseman” and “ten feet to his right” as luck and pitcher-independent.
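
    In code, that sort of typing might look like the sketch below. The thresholds are as much guesses as the 145-160 feet / 1.1-1.2 second definition above; the point is to credit the pitcher with the mix of batted-ball types allowed and to treat what happens within each type as mostly luck.

    ```python
    # Classify batted balls by distance and hang time, then summarize a
    # pitcher by the mix of types allowed. Every threshold here is a guess.

    def classify(distance_ft, hang_time_s):
        if hang_time_s < 0.6:
            return "ground ball"
        if 1.0 <= hang_time_s <= 1.3 and 130 <= distance_ft <= 180:
            return "line drive"
        if hang_time_s > 2.5:
            return "fly ball or popup"
        return "other"

    def batted_ball_mix(balls_in_play):
        """balls_in_play: list of (distance_ft, hang_time_s) for one pitcher."""
        counts = {}
        for distance, hang in balls_in_play:
            kind = classify(distance, hang)
            counts[kind] = counts.get(kind, 0) + 1
        total = sum(counts.values())
        return {kind: n / total for kind, n in counts.items()}
    ```

    Comparing pitchers on those proportions, rather than on what the fielders did with each ball afterward, is the pitcher-dependent part.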

  22. Rusty on July 26th, 2005 9:33 am

    Re: 19 Steve said…
    In essence, we have trimmed the group of [MLB] pitchers to the select few whose ability to keep batters from hitting the ball is so good that the differences among them in that ability to limit damage when hitters make contact are getting lost in the background noise associated with the random luck element.

    This is the best hypothesis that I have seen for the seemingly random fluctuations in BABIP at the major league level. I read somewhere, maybe BP, that researchers understand the necessity of looking at this question in the minor leagues, and from there we might as well look at college ball right down to Little League. If the hypothesis is true, then we should start to see less randomness in the numbers as we go down in level.

    On an intuitive level this makes sense to me. I’ve watched Mark Teixeira smoke 4 of 5 batting practice (75 mph) pitches out of the park, then switch to the other side of the plate and hit 3 more HR’s in a row. The point being that once pitchers obtain and control a major league fastball (85 to 95 mph), we start seeing some randomness in the balls that Teixeira is able to serve up as souvenirs.

  23. Rusty on July 26th, 2005 9:42 am

    start seeing some randomness in the balls that Teixeira is able to serve up as souvenirs

    Oops, I just penned that. I guess it’s better stated… randomness in the balls that Teixeira hits hard, since the pitcher has some control over the balls hit for HR’s.

  24. Mat on July 26th, 2005 9:53 am

    DMZ – RE: #13, and lineup effects

    I reread the article, and I’m still dissatisfied. While I agree with you that lineup effects increase as variance in player skills increases, I remain unconvinced that Tom Ruane’s model gives us a good sense of how big those effects are. The 3 runs/year number is based solely on lineups consisting of league-average guys, looking at traditional vs. optimal. Now, looking at the two specific teams he runs through his model, he gets a difference of 0.5 runs/game between traditional and optimal (not best and worst, which I shouldn’t have focused on in my earlier comment). That’s 81 runs/year, and that certainly would be significant. Personally, I think the truth lies somewhere between 3 and 81 r/y, but using his model, it appears it could be anywhere in that range.

    I guess my point is that, while lineup ordering might not matter much (from traditional to optimal), this model doesn’t show that, and in fact, it shows that for the only two specific lineups examined, the difference is quite large.

  25. Gregor on July 26th, 2005 10:25 am

    #24: This is precisely what bothered me reading the article. The problem with the first part of Ruane’s analysis is that by averaging over lineups for all teams in a league over entire seasons, you remove a lot of the variance that exists on actual teams. Note that the difference in OPS between the best and worst hitter (3 and 9) is .190 for the averaged AL lineup. In the NL, it is .193 between the 3rd and 8th spot (excluding the pitcher’s spot, of course). While I don’t have the time to verify this right now, I would be surprised if the difference between the best- and worst-hitting regular wasn’t higher for most every current actual team.

    That said, both of the actual teams Ruane investigated featured exceptional hitters, and hence, much larger than average spreads, so I think it is fair to assume that the truth is closer to 3 r/y than 81 r/y.

  26. Mat on July 26th, 2005 10:59 am

    Gregor – Given that other studies have seemed to find that lineup ordering doesn’t matter much, I’m inclined to agree that it’s closer to 3 r/y than 81 r/y in most cases. However, just how close is an interesting question. Even if it’s 10 r/y, the general rule of thumb tells us that’s worth an extra win per year. Provided that the optimal lineup isn’t all that wacky, it could be worth investigating.

  27. Colm on July 26th, 2005 1:00 pm

    Hey Gregor

    If the average OPS spread for all AL teams is .190, why would the actual number for most every team be higher? Surely the spread would be bigger for about half the teams, and smaller for the others?

    You could have one or two teams with no spread at all skewing the average, but what makes you suspect that?

  28. Evan on July 26th, 2005 1:29 pm

    I think the principle still holds. You should want your manager to construct a lineup to scratch out every run there is to be had, even if it’s only 3 runs/season.

    If he doesn’t, there’s no big direct negative impact on your team, but it’s indicative of an unwillingness to go after every single available run. And I think that’s a big red flag on a manager.

  29. Adam M on July 26th, 2005 5:29 pm

    In the case of certain players, where they are hit in the lineup matters about 100x more than any potential extra runs you could squeeze out. Exhibit A: Shawn Green 2004. Exhibit B: Hee Seop Choi 2005.

  30. Gregor on July 27th, 2005 11:59 am

    Colm (#27),

    The spread gets reduced by averaging because the highest OPS guy isn’t always going to hit in the # 3 spot (and the lowest OPS guy isn’t always going to hit in the #8/9 spot).

    A better way to conduct the analysis may be to sort the players on all teams (for each season) by OPS, and then perform the analysis in terms of #1 OPS guy to #9 OPS guy.
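
    Something like this, in other words (the data layout is hypothetical; any table of player-season OPS grouped by team would do):

    ```python
    # For each team-season, sort the regulars by OPS and take the gap between
    # the best and worst of the top nine, instead of averaging by lineup slot
    # across the league. `regulars` is a hypothetical mapping:
    #   {(team, year): [OPS, OPS, ...]}

    def best_worst_spread(regulars):
        spreads = {}
        for team_year, ops_list in regulars.items():
            ranked = sorted(ops_list, reverse=True)[:9]
            spreads[team_year] = ranked[0] - ranked[-1]
        return spreads

    # league-wide average spread, for comparison with the .190 figure above:
    # spreads = best_worst_spread(data)
    # print(sum(spreads.values()) / len(spreads))
    ```

    The league-wide average of those spreads is the number to set against the .190 figure from the slot-averaged lineups.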