Benjamin’s recent post about differences in risk intelligence between users of various web browsers drew a lot of interest after it was featured on Slashdot. Among the many emails we received, one in particular caught my attention, because it articulated very clearly a common reaction to the whole idea of risk intelligence.

The email noted that our risk intelligence test presents users with a scale with one end marked 0% (false) and the other marked 100% (true), and objected that this implied there was no option for “don’t know.” The email went on to reason (correctly) that “therefore the only logical choice that can be made in the case of not knowing the answer is 50%.”

So, what’s the problem? The instructions for the test state clearly that if you have no idea at all whether a statement is true or false, you should click on the button marked 50%. So why did the author of this email state that there was no option for “don’t know”?

I think the problem may lie in the fact that, while 50% does indeed mean “I have no idea whether this statement is true or false,” it does not necessarily mean “I have no information.” There are in fact *two* reasons why you could estimate that a statement had a 50% chance of being true:

1. You have absolutely no information that could help you evaluate the probability that this statement is true; OR

2. You have some information, but it is evenly balanced between supporting and undermining the statement.

So, maybe that’s what the email was getting at. But even if this interpretation is correct, it doesn’t justify the claim that there is no option for “don’t know.” There is. It’s the 50% option. That’s what 50% *means* in this context.
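The post doesn’t say how the test is scored, but a standard proper scoring rule, the Brier score, makes the logic concrete: if you genuinely have no information about a statement, reporting 50% minimizes your expected penalty, so the scale really does give “don’t know” a home. A minimal sketch (the function name is my own, not part of the test):

```python
def expected_brier(p: float, q: float) -> float:
    """Expected Brier (quadratic) loss for reporting probability p
    when the statement is in fact true with probability q."""
    return q * (1 - p) ** 2 + (1 - q) * p ** 2

# With no information (q = 0.5), reporting 50% gives the lowest
# expected loss; extreme reports of 0% or 100% are penalized most.
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"report {p:.0%}: expected loss {expected_brier(p, 0.5):.4f}")
```

The expected loss bottoms out at 0.25 when you report 50%, which is exactly why 50% is the only defensible answer in the “no information” case.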

The email went on to add: “It’s very curious that you use a scale; surely someone either believes that they know the correct answer or they don’t know the correct answer. I can’t see that there is any point in using a scale. I would think it far more sensible to present three options of True, False or Pass.”

But this simply begs the question. As I pointed out in a previous post, one of the most revolutionary aspects of risk intelligence is that it challenges the widespread tendency to think of proof, knowledge, belief, and predictions in binary terms; either you prove/know/believe/predict something or you don’t, and there are no shades of gray in between. I call this “the all-or-nothing fallacy,” and I regard it as one of the most stupid and pernicious obstacles to clear thinking.

Why should proof, or knowledge, or belief require absolute certainty? Why should predictions have to be categorical, rather than probabilistic? Surely, if we adopt such an impossibly high standard, we would have to conclude that we can’t prove or know anything at all, except perhaps the truths of pure mathematics. Nor could we be said to believe anything unless we are fundamentalists, or predict anything unless we are clairvoyant. The all-or-nothing fallacy renders notions such as proof, belief, and knowledge unusable for everyday purposes.

In 1690, the English philosopher John Locke noted that “in the greatest part of our concernments, [God] has afforded us only the twilight, as I may so say, of probability.” Yet, as emails like this show, we are still remarkably ill equipped to operate in this twilight zone.