Derek’s Newest P-I piece
Derek’s latest Off the Wall column is up over at the P-I. He talks about the folly of making assessments of players based on spring training statistics, especially in circumstances where we have years and years of major league data to make an informed evaluation of a player’s ability. Here’s the most important sentence he writes:
The lesson of any serious examination of spring training statistics is that it’s a mistake to put any weight on them.
Now, I want to add on a little bit here, and I don’t think Derek will disagree with me, even if it seems like I might be contradicting his column a bit.
Giving any weight to spring training statistics is folly. Giving weight to spring training performance, in some cases, is not. Some players really do improve or decline in certain aspects of their game over the offseason, and are different players in March than they were when we last saw them in September. Thanks to the nature of spring training, we don’t have any real way to quantify the effects of these changes, but that doesn’t mean they don’t exist. These are the kinds of outliers that scouts can see before they’re quantifiable. If a player has added 4-5 MPH to his velocity, has improved his footwork or range through an offseason of hard work, added a cut fastball to his repertoire, or made other such significant adjustments, it’s worth exploring.
The key is to remember that these adjustments are rare. There might be one guy per organization that takes that kind of leap forward. If your evaluators are telling you that half your team has taken a huge step forward, well, you probably need new scouts. And Derek’s point about fans who want to see the most optimistic view of every player on the roster, using good spring training stats to back up their assertions that Random Scrub is due for a breakout year, is a good one. That’s not analysis, it’s wishcasting.
Taking a guy like Aaron Sele and extrapolating that he’s back to 2001 levels because he has a 1.50 ERA in 12 innings is exactly that kind of wishcasting; someone looking for any reason to get excited about an opinion they want to hold. However, if a scout who has been watching the game for 15 years tells me that Jose Lopez has shown a more balanced approach at the plate this spring and has improved his footwork around the bag, well, that’s something I want to know. That’s not worthless information.
The best example of what I’m talking about is Albert Pujols. In 2000, in the low-A Midwest League, Pujols hit .324/.387/.565, then hit .284/.298/.481 in 81 at-bats after a promotion to high-A. He had established himself as an exciting prospect with a world of potential, but looked exposed at the end of the 2000 season against A-ball pitchers. He looked like a guy who needed another year, maybe 18 months, in the minors. Pujols showed up in the spring of 2001 as a monster, looking nothing like the kid who had been struggling in Potomac the previous fall. He had put on about 20 pounds of muscle and was tearing the cover off the ball. For the whole of spring training, Pujols was the Cardinals’ best hitter. At the end of the month, Tony LaRussa just couldn’t send him down, so they brought him north with the club and found him at-bats wherever they could. He hit .329/.403/.610 and darn near won the NL MVP.
At the end of the 2000 season, the idea of sticking Pujols in the lineup the following opening day was folly. But the guy who showed up to camp was significantly better than the guy who left the club the previous fall. The Cardinals made the right choice in carrying Pujols, because they recognized the shift in his performance. This is legitimate analysis; we just couldn’t have quantified it ahead of time.
Spring training statistics are worthless. But if a player is knocking your socks off with a performance so vastly different from what you thought you had, it’s at least worth reconsidering the previous opinion. It’s the difference between statistics and performance. A narrow difference, but a potentially significant one.