Baseball, sabermetrics and risk intelligence

I’ve just been to see Moneyball, a new film based on the eponymous 2003 book by Michael Lewis. It tells the story of how Billy Beane, the general manager of the Oakland Athletics, led the team to 20 consecutive wins in the 2002 baseball season, an American League record. This feat was apparently due to Beane’s use of sabermetrics.

Sabermetrics is the application of statistical techniques to determining the value of baseball players. The term is derived from the acronym SABR, which stands for the Society for American Baseball Research. It was coined by Bill James, who began developing the approach in the 1970s while working night shifts as a security guard at the Stokely-Van Camp pork and beans cannery.

The drama revolves around the tension between Beane and the team’s scouts, who are first dismissive of, and then hostile towards, his statistical approach. Rather than relying on the scouts’ experience and intuition, Beane selects players based almost exclusively on their on-base percentage (OBP). By finding players with a high OBP but characteristics that lead scouts to dismiss them, Beane assembles a team of undervalued players with far more potential than the Athletics’ poor finances would suggest.
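For readers who don’t follow baseball: OBP is simply the fraction of plate appearances in which a batter reaches base. Here’s a minimal sketch of the standard calculation in Python; the season line in the example is invented purely for illustration:

```python
def on_base_percentage(hits, walks, hit_by_pitch, at_bats, sacrifice_flies):
    """On-base percentage: times reached base divided by the
    plate appearances that count toward OBP (standard definition)."""
    times_on_base = hits + walks + hit_by_pitch
    opportunities = at_bats + walks + hit_by_pitch + sacrifice_flies
    return times_on_base / opportunities

# An invented season line for a patient hitter:
obp = on_base_percentage(hits=150, walks=80, hit_by_pitch=5,
                         at_bats=500, sacrifice_flies=5)
print(round(obp, 3))  # 0.398 -- high OBP, exactly the profile Beane hunted for
```

Note that walks count just as much as hits, which is precisely why hitters who drew lots of walks were undervalued by scouts fixated on batting average.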

There’s something very satisfying about seeing the scouts’ boastful claims about their expertise undermined by newcomers with a more evidence-based approach. The same thing is occurring in other fields too, such as wine tasting. In the 1980s, the economist Orley Ashenfelter found that he could predict the price of Bordeaux wine vintages with a model containing just three variables: the average temperature over the growing season, the amount of rain during harvest time, and the amount of winter rain. This did not go down well with the professional wine tasters, who made a fine living by trading on their expert opinions. All of a sudden, Ashenfelter’s equation threatened to make them obsolete, just as sabermetrics did with the old-fashioned scouts.
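Ashenfelter’s published equation had specific coefficients, which I won’t try to reproduce here. The sketch below just shows the shape of such a model: an ordinary least-squares regression of log price on the three weather variables. All the numbers are invented for illustration:

```python
import numpy as np

# Invented data: one row per vintage.
# Columns: growing-season temperature (C), harvest rain (mm), winter rain (mm)
X = np.array([
    [16.7,  80, 600],
    [17.3,  60, 690],
    [15.4, 180, 500],
    [16.8, 130, 420],
    [17.1,  40, 580],
])
log_price = np.array([-0.99, -0.45, -1.93, -1.30, -0.85])  # invented

# Add an intercept column and fit by ordinary least squares,
# mirroring the structure (not the coefficients) of Ashenfelter's model.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, log_price, rcond=None)
intercept, b_temp, b_harvest, b_winter = coef
print(f"log(price) ~ {intercept:.2f} + {b_temp:.3f}*temp "
      f"+ {b_harvest:.4f}*harvest_rain + {b_winter:.4f}*winter_rain")
```

The point is how little machinery is involved: a handful of weather readings and a linear fit, set against a lifetime of cultivated palates.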

It would be wrong to conclude, however, that we can do away with intuition altogether. For one thing, building reliable statistical models takes a lot of data and time, and in the absence of those resources you have to fall back on intuition. If you have low risk intelligence, you’ll be screwed.

Secondly, risk intelligence is required even when sophisticated models and supercrunching computers are in plentiful supply. An overreliance on computer models can drown out serious thinking about the big questions, such as why the financial system nearly collapsed in 2007–2008 and how a repeat can be avoided. According to the economist Robert Shiller, the accumulation of huge data sets in the 1990s led economists to believe that “finance had become scientific.” Conventional ideas about investing and financial markets—and about their vulnerabilities—seemed out of date to the new empiricists, says Shiller, who worries that academic departments are “creating idiot savants, who get a sense of authority from work that contains lots of data.” To have seen the financial crisis coming, he argues, it would have been better to “go back to old-fashioned readings of history, studying institutions and laws. We should have talked to grandpa.”

Risk management is a complex process that requires both technical solutions and human skill. Mathematical models and computer algorithms are vital, but they can be useless, or even dangerous, in the hands of those with low risk intelligence.