Thinking, fast and slow, and slower…

Daniel Kahneman’s recent bestseller, Thinking, Fast and Slow, is a brilliant summary of a lifetime’s work in the psychology of decision making. Together with his colleague, Amos Tversky, Kahneman revolutionized the way psychologists think about how people reason and make choices. Before these two young Turks burst onto the scene in the early 1970s, psychologists had a rather rosy view of decision making that owed more to logic and mathematics than to empirical research. People were seen as utility-maximizers, rationally weighing up the pros and cons of the options available to them before opting for the one with the highest payoff. In a series of ingenious experiments, Kahneman and Tversky exposed this picture as overly optimistic, and showed that human decision making is riddled with biases and cognitive short-cuts that work well enough most of the time, but can also lead to some pretty dumb mistakes.

The central thesis of Kahneman’s book is that there are fundamentally two modes of thought, which he calls System 1 and System 2. System 1 is fast, automatic, emotional, and subconscious; System 2 is slower, more effortful, more logical, and more deliberative. The biases and cognitive short-cuts are largely features of System 1, but we can overcome them by employing System 2. It just takes more effort and more time.

This is fine as far as it goes, but it leaves a crucial third kind of thinking out of the picture. This is the meditative, creative mode of thought that the psychologist Guy Claxton calls the “undermind” in his thought-provoking book, Hare Brain, Tortoise Mind. It is much slower than Kahneman’s System 2, and works away quietly in the background, below the level of conscious awareness, helping us to register events, recognize patterns, make connections and be creative. This is the kind of thought that can bubble away beneath the surface for weeks or even months, quietly turning over a problem and looking at it from different perspectives, before suddenly thrusting a solution into consciousness in that exciting Eureka! moment.

I think Claxton is onto something in claiming that the mind possesses three different processing speeds, not two.  Think of it as a kind of “cognitive sandwich” if you like. The top half of the bun is the lightning fast System 1 identified by Kahneman, the world of snap judgments and rapid heuristics.  The bottom half of the bun is the snail-paced undermind identified by Claxton, where thoughts cook slowly in the back oven. Both of these are unconscious processes, operating below the level of conscious awareness. The hamburger in the middle is conscious thought, Kahneman’s System 2.

Where does risk intelligence come into all this? Risk intelligence tends to be domain-specific, and those with high risk intelligence build up models of a given domain slowly, often unconsciously, as they gradually accumulate experience in their specialist field.  These models may involve many different variables.  The expert horse handicappers I describe in chapter one of my book took at least seven different variables into account, including the speed at which a horse ran in its last race, the quality of the jockey, and the current condition of the racetrack.  People with high risk intelligence manage to keep track of all the relevant variables, and perform the complex mathematical task of weighing them up and combining them.  However, they usually do this unconsciously; they need not be mathematical wizards, since most of the computation goes on below the level of awareness.
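
To make that a little more concrete, here is a rough sketch in Python of what such a weighing-up of variables might look like if it were written down explicitly. The variable names and weights are purely illustrative; they are not the model the expert handicappers actually use, which remains largely unconscious.

```python
import math

# Illustrative only: a logistic-style weighted combination of handicapping
# variables. The variables and weights below are invented for this sketch.
def win_probability(horse, weights, bias=-2.0):
    score = bias + sum(weights[k] * horse[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-score))  # squash the score into a 0-1 probability

weights = {
    "last_race_speed": 1.5,       # standardized speed figure from the last race
    "jockey_quality": 0.8,        # standardized rating of the jockey
    "track_condition_fit": 0.6,   # how well the horse suits today's track
}

horse = {"last_race_speed": 0.9, "jockey_quality": 0.4, "track_condition_fit": -0.2}
print(f"Estimated win probability: {win_probability(horse, weights):.0%}")
```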

There are some basic tricks for increasing risk intelligence across the board, and I discuss some of these in the book. Simply taking a general knowledge version of the RQ test can, for example, lead to rapid gains in risk intelligence because it encourages people to make more fine-grained distinctions between different degrees of belief.  Such rapid improvements in risk intelligence may well generalize to any field, so there may be a small domain-general component of risk intelligence.  But these rapid gains are the low-hanging fruit; once you have plucked them, further increases in risk intelligence may be harder to achieve, and will require immersing yourself in a particular field of study for perhaps many years. It is then that, to borrow Claxton’s metaphor, the “hare brain” must stand aside, and let the “tortoise mind” take over.


New “risk intelligence test” is a disappointment

Imagine my surprise, last week, when I read a report announcing that researchers in Germany had created “the first quick test for establishing an individual’s risk intelligence.” After all, I created an online risk intelligence test back in 2009, and I was simply following in the tracks of many researchers before me. What was so special about the new test from Germany, I wondered.

The answer, as I soon found out, is … nothing! In fact the test doesn’t measure risk intelligence at all. To be fair, the authors of the test do not pretend that it does. They claim, instead, that the test measures what they call “risk literacy.” It seems the journalists used poetic license when they described it as a risk intelligence test.

The test is certainly quick. There are only four questions, and it takes only a couple of minutes to answer them. But the questions are the same old probability puzzles that have been the mainstay of books about risk for decades.

For example, one of the questions is as follows:

Out of 1,000 people in a small town 500 are members of a choir. Out of these 500 members in a choir 100 are men. Out of the 500 inhabitants that are not in a choir 300 are men.

 What is the probability that a randomly drawn man is a member of the choir?
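
For readers who want to check the arithmetic, the answer follows directly from the numbers in the question: 400 of the 1,000 inhabitants are men, and 100 of those men are in the choir, so the probability is 100/400, or 25 percent. A few lines of Python make the calculation explicit:

```python
# The choir puzzle, worked directly from the numbers given in the question.
men_in_choir = 100
men_not_in_choir = 300

total_men = men_in_choir + men_not_in_choir      # 400 men in the town
p_choir_given_man = men_in_choir / total_men     # P(choir | man)
print(f"P(choir | man) = {men_in_choir}/{total_men} = {p_choir_given_man:.0%}")  # 25%
```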

Many books that purport to help people think more clearly about risk focus on such analytical puzzles. But although these puzzles can be fun to explore and their solutions are often pleasingly counterintuitive, mastering probability theory is neither necessary nor sufficient for risk intelligence. We know it is not necessary because there are people who have very high risk intelligence yet have never been acquainted with the probability calculus. And mastering probability theory is not sufficient for risk intelligence either, as is demonstrated by the existence of nerds who can crunch numbers effortlessly yet show no flair for estimating probabilities or for judging the reliability of their predictions.

Risk intelligence is not about solving probability puzzles; it is about how to make decisions when your knowledge is uncertain. Outside of some highly structured risk-taking activities, such as are found in casinos and financial markets, dealing with uncertainty is a more useful skill than probability analysis. It is also much easier to learn, primarily because it depends on common sense and simple logic rather than abstract mathematics.

It is depressing to find people still confusing these two things. The journalists who have been waxing lyrical about the German test would do well to read Agatha Christie. Fifty years ago, in The Mirror Crack’d from Side to Side, she showed her disdain for the kind of probability puzzles that the German test regurgitates, when she had Dr Haydock complain: “I can see looming ahead one of those terrible exercises in probability where six men have white hats and six men have black hats and you have to work it out by mathematics how likely it is that the hats will get mixed up and in what proportion. If you start thinking about things like that, you would go round the bend. Let me assure you of that!”

50% = I have no idea

Benjamin’s recent post about differences in risk intelligence between users of various web browsers drew a lot of interest after it was featured on Slashdot. Among the many emails we received, one in particular caught my attention, because it articulated very clearly a common reaction to the whole idea of risk intelligence.

The email noted that our risk intelligence test presents users with a scale with one end marked 0% (false) and the other marked 100% (true), and objected that this implied there was no option for “don’t know.” The email went on to reason (correctly) that “therefore the only logical choice that can be made in the case of not knowing the answer is 50%.”

So, what’s the problem? The instructions for the test state clearly that if you have no idea at all whether a statement is true or false, you should click on the button marked 50%. So why did the author of this email state that there was no option for “don’t know”?

I think the problem may lie in the fact that, while 50% does indeed mean “I have no idea whether this statement is true or false,” it does not necessarily mean “I have no information.” There are in fact two reasons why you could estimate that a statement had a 50% chance of being true:

1. You have absolutely no information that could help you evaluate the probability that this statement is true; OR

2. You have some information, but it is evenly balanced between supporting and undermining the statement.

So, maybe that’s what the email was getting at. But even if this interpretation is correct, it doesn’t justify the claim that there is no option for “don’t know.” There is. It’s the 50% option. That’s what 50% means in this context.
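
One way to picture the difference between those two cases is to think of a whole distribution over the underlying chance, rather than a single number. The sketch below uses beta distributions (my own choice of illustration, not anything built into the RQ test): a flat distribution for “no information at all” and a tightly peaked one for “evenly balanced evidence.” Both have a mean of 50%, but they say very different things.

```python
from scipy import stats

# Two routes to "50%": no information versus evenly balanced information.
# The beta distributions are illustrative, not part of the RQ test itself.
no_information = stats.beta(1, 1)       # flat over 0-100%: any chance is equally plausible
balanced_evidence = stats.beta(50, 50)  # tightly clustered around 50%

for name, dist in [("no information", no_information),
                   ("balanced evidence", balanced_evidence)]:
    print(f"{name}: mean = {dist.mean():.0%}, standard deviation = {dist.std():.1%}")
# Both means are 50%, but the spreads differ enormously: the first says
# "I have no idea", the second says "I know it's a toss-up".
```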

The email went on to add: “It’s very curious that you use a scale; surely someone either believes that they know the correct answer or they don’t know the correct answer. I can’t see that there is any point in using a scale. I would think it far more sensible to present three options of True, False or Pass.”

But this simply begs the question. As I pointed out in a previous post, one of the most revolutionary aspects of risk intelligence is that it challenges the widespread tendency to think of proof, knowledge, belief, and predictions in binary terms: either you prove/know/believe/predict something or you don’t, and there are no shades of gray in between. I call this “the all-or-nothing fallacy,” and I regard it as one of the most stupid and pernicious obstacles to clear thinking.

Why should proof, or knowledge, or belief require absolute certainty? Why should predictions have to be categorical, rather than probabilistic? Surely, if we adopt such an impossibly high standard, we would have to conclude that we can’t prove or know anything at all, except perhaps the truths of pure mathematics. Nor could we be said to believe anything unless we are fundamentalists, or predict anything unless we are clairvoyant. The all-or-nothing fallacy renders notions such as proof, belief, and knowledge unusable for everyday purposes.

In 1690, the English philosopher John Locke noted that “in the greatest part of our concernments, [God] has afforded us only the twilight, as I may so say, of probability.” Yet, as emails like this show, we are still remarkably ill equipped to operate in this twilight zone.

The average result versus the calculation based on average inputs

One of the most common risk errors is to do a computation assuming average values for uncertain inputs and then to treat the result as if it were the average outcome.

For example, suppose we have a fair coin, that is, one that has a 50 percent chance of flipping heads and a 50 percent chance of flipping tails, with each flip independent of prior flips. The probability of flipping one head is, of course, 50 percent. The probability of flipping nine heads in a row is 1/512, or 0.2 percent.

Suppose instead you hold a coin for which you have no idea of the probability of flipping heads. You think that probability is equally likely to be any number between 0 percent and 100 percent. The chance of flipping one head is still 50 percent. But the probability of flipping nine heads in a row is now 10 percent, not 0.2 percent. The reason is that if the coin has a high probability of flipping heads, say 95 percent, the chance of getting nine heads in a row is 63 percent, while if the coin has a low probability of flipping heads, say 5 percent, the chance of getting nine heads in a row is 0.0000000002 percent. Thus the high-probability coins, like the 95 percent one, add much more to the 0.2 percent figure than the low-probability coins, like the 5 percent one, take away.
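
The nine-heads figure is easy to verify: averaging p^9 over all the equally likely values of p gives the integral of p^9 from 0 to 1, which is 1/10, whereas plugging in the average value p = 0.5 gives 0.5^9, or about 0.2 percent. A quick simulation in Python confirms it:

```python
import random

# Plugging in the average bias (p = 0.5) versus averaging over an unknown bias
# that is equally likely to be anything between 0 and 1.
p_average_input = 0.5 ** 9    # about 0.2 percent
p_exact = 1 / 10              # the integral of p**9 over [0, 1]

# Monte Carlo check: draw a bias at random, then try for nine heads in a row.
random.seed(0)
trials = 200_000
hits = 0
for _ in range(trials):
    p = random.random()                              # unknown bias, uniform on [0, 1]
    if all(random.random() < p for _ in range(9)):   # nine heads in a row?
        hits += 1

print(f"average inputs: {p_average_input:.2%}, exact: {p_exact:.0%}, simulated: {hits / trials:.1%}")
```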

For a practical example, consider a proposed government program that will tax 0.1 percent of gross domestic product (GDP) to fund some useful service. The real (that is, after-inflation) tax revenues will increase with real GDP growth, and the real program costs will also increase at some rate. Let’s suppose we project average real growth rates of three percent per year for both GDP and program costs.

If we assume both growth rates are exactly three percent per year, the program will always cost 0.1 percent of GDP. But suppose we instead assume there is some future uncertainty about the growth rates: each month, each rate can be 0.05 percentage points higher or lower than it was the previous month. So in the first month, the real GDP growth rate might be 2.95 percent / 12 or 3.05 percent / 12, and the same for the real program costs. Some factors make the growth rates positively correlated; an expanding population, for example, will generally increase both GDP and program costs. Other factors argue for negative correlation; bad economic times mean low GDP growth and increased need for government expenditure. We will assume the changes in the two growth rates are independent, with the positive and negative correlations offsetting each other.

The expected cost of this program is almost 0.2 percent of GDP, not 0.1 percent. Both average growth rate assumptions were correct, but the projected total cost was wildly incorrect. As with the coin, the reason is the asymmetry in costs. If GDP growth is slow and program costs rise quickly, the cost can easily be one percent of GDP or more. In the reverse circumstance, rapid GDP growth and slow growth in program costs, the cost will likely be something like 0.03 or 0.04 percent of GDP. The high scenarios add more to the 0.1 percent projected cost than the low scenarios can subtract. In this case, 11 percent of the time the program costs come in under half the expected 0.1 percent level, averaging 0.043 percent of GDP, while 23 percent of the time they come in at over twice the projection, averaging 0.529 percent of GDP.
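
The exact figures depend on details the example leaves open, most importantly how many years the projection runs. The rough Monte Carlo sketch below assumes a 75-year horizon (a common choice for long-run fiscal projections) and the monthly random walk in growth rates described above; the precise numbers shift with those assumptions, but the average cost share always comes out well above the 0.1 percent you get by plugging in the average growth rates.

```python
import numpy as np

# A sketch of the random-walk setup described above. The horizon and the number
# of simulation runs are assumptions for illustration only.
rng = np.random.default_rng(0)

runs, years = 20_000, 75
step = 0.0005    # each growth rate drifts by 0.05 percentage points per month

def compounded_level(start_rate=0.03):
    """Cumulative growth factor when the annual rate follows a random walk."""
    rate = np.full(runs, start_rate)
    level = np.ones(runs)
    for _ in range(years * 12):
        rate += rng.choice([-step, step], size=runs)
        level *= 1 + rate / 12
    return level

gdp = compounded_level()
cost = 0.001 * compounded_level()    # program costs start at 0.1 percent of GDP
cost_share = cost / gdp

print("cost share if both rates stay at 3%: 0.100% of GDP")
print(f"average simulated cost share:        {100 * cost_share.mean():.3f}% of GDP")
print(f"runs ending above twice the budget:  {np.mean(cost_share > 0.002):.0%}")
```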

Or think about a project with a number of interrelated steps. Some will come in early and under budget, others will come in late and over budget. But the early steps won’t reduce total project time much, because we usually can’t push up the scheduling of later steps. The late steps, however, will delay things, often causing cascading delays, so that a week lost in one step can mean months added to the final deliverable. Also, it’s hard to save more than 10 or 20 percent on a step, but it’s easy to go 100 or 200 percent over budget.
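
A small simulation makes the scheduling half of that asymmetry visible. In the sketch below (the number of steps and the noise level are made up for illustration), each step’s duration varies symmetrically around plan, but a step cannot start before its scheduled date even when its predecessor finishes early, while any delay pushes everything behind it back:

```python
import random

# Symmetric uncertainty in step durations, asymmetric effect on the finish date:
# early finishes don't propagate, late finishes do. Step count and noise level
# are illustrative.
random.seed(2)

def project_finish(planned_durations, slip=0.25):
    """Finish time when steps can start late but never earlier than scheduled."""
    planned_starts = [sum(planned_durations[:i]) for i in range(len(planned_durations))]
    finish = 0.0
    for start, duration in zip(planned_starts, planned_durations):
        actual = duration * random.uniform(1 - slip, 1 + slip)  # symmetric noise
        finish = max(start, finish) + actual                    # can't start early
    return finish

plan = [10, 10, 10, 10, 10]    # five 10-day steps, 50 days planned in total
results = [project_finish(plan) for _ in range(100_000)]
print(f"planned finish: {sum(plan)} days, "
      f"average simulated finish: {sum(results) / len(results):.1f} days")
```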

People often do good-expected-bad case analyses to account for these effects, but these seldom capture the effect of genuine uncertainty. Within each good-expected-bad scenario, everything is certain. Beware of any calculation that substitutes averages (or even good, expected and bad values) for uncertain inputs. Your actual results are likely to be worse than the projections.