Right + Early II: The Ehrlich/Simon Bet

In the article I mentioned in my last post, the New York Times said Jeremy Grantham was right but early. Jeremy Grantham passed the favor along by saying that Paul Ehrlich was right but early. He referred to a famous 1980 bet between economist Julian Simon and Paul Ehrlich, an entomologist who specializes in being spectacularly wrong about everything.

At the time, Ehrlich was claiming we were running out of commodities and prices would soon soar, destroying global civilization and killing billions of people. Simon challenged him to pick any five commodities and any date more than a year away. Ehrlich picked Chrome, Copper, Nickel, Tin and Tungsten, and 1990 as the date. A basket was created with $200 worth (at 1980 prices) of each of the five metals. In 1990 the basket would be repriced in inflation-adjusted 1980 dollars: if it cost more than $1,000, Simon would pay Ehrlich the difference; if it cost less, Ehrlich would pay Simon the difference.

All five commodity prices went down; the basket was worth an inflation-adjusted $618 in 1990, so Ehrlich paid $382. Ever since, Ehrlich and his fellow travelers have been explaining why he was really right (it’s just those inconvenient facts that got in the way). Now Grantham trumpets that Ehrlich has finally been proven right. I checked the numbers, and as of last Friday the inflation-adjusted prices were Chrome ($196.60), Copper ($241.83), Nickel ($200.52), Tin ($126.42) and Tungsten ($205.36), for a total of $970.73. However, Grantham claims Ehrlich still won because three of the five commodities sell for more than $200. Actually, what I find most impressive is how little commodity prices have moved over the years in real terms.
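For concreteness, here is a minimal sketch of the settlement arithmetic, using the inflation-adjusted 2011 prices quoted above (the prices are simply the figures in this post, not independently re-checked):

```python
# Settle the Simon/Ehrlich basket for a given set of inflation-adjusted prices
# (1980 dollars). These are the 2011 figures quoted above.
prices = {
    "chrome": 196.60,
    "copper": 241.83,
    "nickel": 200.52,
    "tin": 126.42,
    "tungsten": 205.36,
}

basket = sum(prices.values())   # each metal started at $200, so the 1980 basket was $1,000
payoff = basket - 1000.0        # positive: Simon pays Ehrlich; negative: Ehrlich pays Simon

print(f"Basket value: ${basket:,.2f}")
if payoff > 0:
    print(f"Simon pays Ehrlich ${payoff:,.2f}")
else:
    print(f"Ehrlich pays Simon ${-payoff:,.2f}")
```

On these prices the basket comes to $970.73, so even 21 years after the deadline the bet still settles, narrowly, in Simon’s favor.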

Grantham misses the risk intelligence point completely. Ehrlich claimed to be absolutely sure the prices would skyrocket. He was so sure, he pushed for policies that would impoverish or kill more than half the world, and he supported China’s horrendous population control policies (it turned out democracy and economic growth were far more effective at slowing population growth). Moreover, he claimed to understand why prices would increase, meaning he should have been able to pick the five commodities most likely to go up in price, and the time interval most likely to prove him right. Being almost right 20 years after the deadline is WRONG!

Simon did not claim the prices would surely and always be below $1,000, just that it was a good bet given a specific deadline. Nevertheless, it is interesting that commodity prices have not declined in real terms. Since the beginning of the industrial revolution, the ratio of commodity value to finished-good value has fallen, as design and fabrication have become more important (an apple is worth its commodity value, a pot is worth more than a lump of clay, a computer chip is worth far more than the sand and other materials it is constructed from).

I think what Simon perhaps underestimated was the tripling of real global GDP from the greatest economic boom in human history that brought billions out of poverty. Commodity consumption remained roughly constant, so commodities have roughly one-third the relative economic importance in 2011 versus 1980. If GDP had grown more slowly, real commodity prices would have declined. But explosive growth offset the long-term trend toward making labor and intangible assets more valuable than physical stuff; therefore real commodity prices stayed about the same.

The big difference between Simon’s and Ehrlich’s ways of thinking is not that Simon was right and Ehrlich wrong. It’s observing, and learning from observation, versus denying or explaining away all contrary evidence. That’s why Ehrlich is always so certain, even when his story is 180 degrees different from his story last year.

Now, guess which of Simon or Ehrlich’s ideas are shared by more policy experts.

Right + Early = Wrong and in denial

The New York Times had a fawning interview with Jeremy Grantham, describing him as “right but early.” If you tell me it will rain tomorrow, and it doesn’t but it rains next week, you were wrong. If you claim you were just early, you are in denial about being wrong.

It is possible to make a useful prediction when you are uncertain of the timing. I might say, for example, that commodity prices are in a bubble and will decline to half their current values sometime in the next five years. That could be right or wrong. But if I say merely that commodity prices will decline someday, I will either be proven right or the issue will still be open. I cannot be proven wrong, so the prediction has no meaning.

People with poor risk intelligence seize on current trends and extrapolate them to absurd levels. They get a lot of publicity for this. Other people argue against them, pointing to signs that the trend is already slowing, that it generates countervailing forces and that in any case it has to hit some limits.

What happens next? Either the trend does accelerate to cause some disaster, proving the prophet of doom correct. Or the trend quietly slows and reverses, in which case people never think to credit the skeptic with a victory. If they think about it later, they remember the trend and the guy who postdicted it, and misremember the order of events so they think he was right. The skeptic is remembered as a guy who denied all possibility of disaster, and is confused with people who either don’t care about disasters or profit from them.

Reputation tends to go to lucky fools and doomsayers who never remember being wrong (and therefore never learn).

Why do so many unexpected bad things seem to happen?

Think for a moment about the last few surprises you got. How many of them were pleasant versus unpleasant? And if there is a bold headline in the newspaper, how likely is it to be good versus bad news?

If you agree with me there is an asymmetry in the number of good and bad surprises, you might think it is because people selectively expect good things and therefore are more likely to be surprised at bad ones. However, that is the opposite of my experience.

I think the main culprit is precautions. Consider the World Trade Center attacks. We knew there were airplane hijackers, but all previous ones had wanted to survive. We knew there were suicide bombers, but they tended to be troubled young men with few skills and only the crudest of plans. Therefore, we had precautions against life-loving hijackers and death-loving misfits. The September 11th attackers chose their plan precisely because it was unexpected, that is, precisely because we had taken no precautions against it.

Or suppose you want to rob a bank. You could march in with a gun or bomb and demand money, but you can be pretty confident the bank has thought of that and taken precautions. In order to have much chance of success, you need to think of something unexpected. Four Swedish men, for example, found a place to hide in a Danish bank vault. They snuck in just before closing on Friday and emptied out the safe deposit boxes over the weekend. Then they strolled out Monday morning (one of them left a bag of urine behind and was nabbed by a DNA match, but the others, and the loot, remain at large).

Now I’m not against precautions, but it’s important to remember a logical point. Something has to happen. If a precaution reduces the probability of some event, it necessarily increases the probability of some other events. If you don’t know what those other events are, you have increased the probability of an unexpected outcome. Two things tend to make the unexpected events bad. The first is exploitation by other people, like terrorists and robbers. The second is a point I mentioned in my last post: a random occurrence can more easily cause damage by disrupting plans than it can help by advancing them.

One solution is to be proactive. Instead of precautions to reduce the probability of bad things, which increases the probability of unexpected outcomes, try to think of things that increase the probability of good things. That adds to expected welfare as much as a precaution does, but it also helps reduce unexpected outcomes.

Another solution is to design plans that can benefit from unexpected surprises, plans that are open to opportunity. These can work better than plans designed to minimize dangers. Robert Burns taught us that, “the best-laid schemes o’ mice an’ men, gang aft agley.” So try a scheme in which agley is a good place to be.

The average result versus the calculation based on average inputs

One of the most common risk errors is to do a computation assuming average values for uncertain inputs and to treat the result as the average outcome.

For example, suppose we have a fair coin, that is, one that has a 50 percent chance of flipping heads and a 50 percent chance of flipping tails, with each flip independent of prior flips. The probability of flipping one head is, of course, 50 percent. The probability of flipping nine heads in a row is 1/512, or about 0.2 percent.

Suppose instead you hold a coin for which you have no idea of the probability of flipping heads. You think that probability is equally likely to be any number between 0 percent and 100 percent. The chance of flipping one head is still 50 percent. But the probability of flipping nine heads in a row is now 10 percent, not 0.2 percent. The reason is that if the coin has a high probability of flipping heads, say 95 percent, the chance of getting nine heads in a row is 63 percent, while if the coin has a low probability of flipping heads, say 5 percent, the chance of getting nine heads in a row is 0.0000000002 percent. Thus the high-probability coins, the 95 percents, add much more to the 0.2 percent figure than the low-probability coins, the 5 percents, take away.
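These numbers are easy to check, either by integrating over the unknown bias (the exact answer is the integral of p^9 over [0, 1], which is 1/10) or by simulation. A minimal sketch, assuming the uniform prior on the bias described above:

```python
import random

# Known fair coin: the chance of nine heads in a row.
print(f"Fair coin: {0.5 ** 9:.4%}")          # 1/512, about 0.20%

# Unknown coin: the bias p is equally likely to be anything in [0, 1].
# Exact answer: the integral of p^9 over [0, 1], which is 1/10.
print(f"Unknown coin, exact: {1 / 10:.1%}")  # 10%

# Monte Carlo check: draw a bias, then flip nine times with that bias.
trials = 1_000_000
hits = 0
for _ in range(trials):
    p = random.random()
    if all(random.random() < p for _ in range(9)):
        hits += 1
print(f"Unknown coin, simulated: {hits / trials:.1%}")
```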

For a practical example, consider a proposed government program that will tax 0.1 percent of gross domestic product (GDP) to fund some useful service. The real (that is, after-inflation) tax revenues will increase with real GDP growth, and the real program costs will also increase at some rate. Let’s suppose we project average real growth rates of three percent per year for both real GDP and real program costs.

If we assume both growth rates are exactly three percent per year, the program will cost 0.1 percent of GDP. But suppose we instead assume there is some future uncertainty about the growth rates: each month, the annual rates can be 0.05 percentage points higher or lower than the previous month’s. So in the first month, the real GDP growth rate might be 2.95 percent / 12 or 3.05 percent / 12, and the same for the real program costs. Some factors will make the growth rates positively correlated; for example, an expanding population will generally increase both GDP and program costs. Other factors argue for negative correlation; for example, bad economic times mean low GDP growth and increased need for government expenditures. We assume the changes in the two growth rates are independent, with the positive and negative correlations offsetting.

The expected cost of this program is almost 0.2 percent of GDP, not 0.1 percent. Both average growth rate assumptions were correct, but the projected total cost was wildly incorrect. As with the coin, the reason is the asymmetry in costs. If GDP growth is slow and program costs rise quickly, the cost can easily be one percent of GDP or more. In the reverse circumstance, rapid GDP growth and slow growth in program costs, the program costs will likely be something like 0.03 percent or 0.04 percent of GDP. The high scenarios add more to the 0.1 percent projected cost than the low scenarios can subtract. In this case, 11 percent of the time the program costs come in under half the expected 0.1 percent level, with an average of 0.043 percent. And 23 percent of the time the program costs come in over twice the projection, with an average cost of 0.529 percent of GDP.
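To see where numbers like these come from, here is a minimal Monte Carlo sketch of the setup described above. The post does not say what time horizon it used, so I have assumed a 75-year horizon purely for illustration; the exact figures depend on that choice, but the upward pull on the average shows up at any long horizon:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(years=75, paths=100_000, start_rate=0.03, step=0.0005, start_ratio=0.001):
    """Simulate cost/GDP when each annual growth rate starts at 3 percent and
    random-walks by +/- 0.05 percentage points per month, applied as rate/12."""
    gdp = np.ones(paths)
    cost = np.full(paths, start_ratio)            # program starts at 0.1% of GDP
    gdp_rate = np.full(paths, start_rate)
    cost_rate = np.full(paths, start_rate)
    for _ in range(years * 12):
        gdp_rate += step * rng.choice([-1.0, 1.0], size=paths)
        cost_rate += step * rng.choice([-1.0, 1.0], size=paths)  # independent steps
        gdp *= 1 + gdp_rate / 12
        cost *= 1 + cost_rate / 12
    return cost / gdp

ratios = simulate()
print("Cost with certain 3%/3% growth: 0.100% of GDP")
print(f"Mean cost with uncertain growth: {100 * ratios.mean():.3f}% of GDP")
print(f"Paths under half the projection: {np.mean(ratios < 0.0005):.0%}")
print(f"Paths over twice the projection: {np.mean(ratios > 0.002):.0%}")
```

Both growth rates average three percent in every run, yet the average of cost/GDP across runs drifts well above 0.1 percent, for exactly the reason given above: the bad scenarios can add far more than the good scenarios can subtract.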

Or think about a project with a number of inter-related steps. Some will come in early and below budget, others will come in late and above budget. But the early steps won’t reduce total project time much because we usually can’t push up scheduling of later steps. We know, however, that the late steps will delay things, often causing cascading delays so a week late in one step can mean months late to the final deliverable. Also, it’s hard to save more than 10 or 20 percent in a step, but it’s easy to go 100 percent or 200 percent over budget.
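A toy model (my own construction, with made-up numbers) shows the scheduling half of this asymmetry: give each step a symmetric error around its planned length, let a late step push every later step back, but don’t let a step start before its scheduled date even when its predecessor finishes early.

```python
import random

def project_duration(planned=(10, 10, 10, 10, 10), slip=0.3):
    """Total project length when each step's actual length is planned +/- 30%
    (uniform), late finishes cascade, and early finishes buy nothing."""
    scheduled_start = 0.0
    finish = 0.0
    for p in planned:
        actual = p * (1 + random.uniform(-slip, slip))
        start = max(scheduled_start, finish)   # can't start before the scheduled date...
        finish = start + actual                # ...but a late predecessor pushes us back
        scheduled_start += p
    return finish

trials = 100_000
average = sum(project_duration() for _ in range(trials)) / trials
print("Planned total: 50.0")
print(f"Average actual total: {average:.1f}")  # reliably comes in above the plan
```

Every step is unbiased on its own, yet the project as a whole is late on average; add the cost asymmetry (easy to overrun by 100 percent, hard to save more than 20) and the bias gets worse.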

People often do good-expected-bad case analyses to account for these effects, but these seldom capture the effect of genuine uncertainty. Within each good-expected-bad scenario, everything is certain. Beware of any calculation that substitutes averages (or even good, expected and bad values) for uncertain inputs. Your actual results are likely to be worse than the projections.