What do video games say about you?

I think the way people play video games can tell you a lot about their personality. This thought first came to me in 1999, when I was sharing a flat in London with a friend of mine. To preserve his anonymity, I will not use his real name here. For the purposes of this story, I will refer to him simply as “Christ.”

Back then, Christ and I used to spend hours every day on the PlayStation. We were particularly fond of the various Tomb Raider games, in which the player has to guide the protagonist, Lara Croft, through a series of ancient ruins and underground caverns. Along the way, Lara must fight various foes, and each blow they land on her takes away a certain amount of life. If her life indicator drops to zero, she dies, and the player must go back to the previous checkpoint and start again. Dotted around the game, however, are occasional medi-packs, which can be used to restore a certain amount of life to the wounded heroine.

My approach to playing the game was not particularly sophisticated. I would charge through the tunnels and chambers as fast as I could, hacking away at the enemies who confronted me, and picking up any medi-packs I spotted. If my life dropped to dangerously low levels, I’d use a medi-pack. Often, I would make it through to the next checkpoint by the skin of my teeth, with my life perilously low and no medi-packs left.

Christ took a very different approach. He would proceed very tentatively, exploring each section of tunnel, and every nook and cranny of each room. As a result, he managed to find a lot more medi-packs than I did. But he would never use them. If he took any damage, he’d let Lara die so he could go back to the last checkpoint, and try to make it all the way to the next one without losing any life. By the time he reached the end of the game, he had hundreds of unused medi-packs, like a miser who reaches the end of his life with a pile of banknotes under his deathbed.

At the time, I realized that these different approaches to playing Tomb Raider said something about our different attitudes to risk. Christ was clearly much more risk averse than I was. But since then, I have come to think that our different playing styles reveal much more than just this. They reflect our whole attitude to life.

Take work for example. Christ and I are both writers, but we write very differently. I will write furiously for a few months, finish a book, and then not write again for a year or more. For a while, I live well on my advance, but then I plough it into some crazy project, or invest in a business that goes bust, and before long I’m poor again, and must return to my desk to write another book.

Christ, on the other hand, has written for several hours almost every day for the past ten years. In that time, he has steadily built up a small fortune. He is always talking about retiring to the country, but he never seems to think he has enough money to take the plunge. Even now, with over a million pounds in the bank, and large royalty checks arriving every few months, he is still living in his run-down old house in Catford. Next year, he says, he will finally stop writing and leave London. But I’ve heard that before.

A few years ago, when another one of my hare-brained schemes had failed dismally, and I was crushed and broken, Christ invited me to stay with him in Catford to lick my wounds for a week or so. Each day, as I crawled out of my bed and made my way downstairs for breakfast, I would look up admiringly at the rows of colored paper pinned to the walls of the stairway. Each piece of paper had a list of milestones that Christ had set himself for each book, all dutifully ticked off as he had achieved them. These humble records spoke eloquently of his patient setting of goals, and the diligent accomplishment of each one. At the time, they felt like a silent rebuke of my way of living, and I even caught the occasional gleam of triumph in Christ’s eyes, as if my downtrodden air was the vindication he had always been seeking that his way of life was better than mine.

But now, a few years later, I’m back on my feet again, and I see things differently. Since that sojourn in Catford, I’ve written another book, made a pile of money, and squandered it on setting up a company that went nowhere. I’m poor again now, and have started writing yet another book – my eighth. But in those few years, I’ve also got married, moved to Ireland, and spent several terms abroad teaching at universities in Beirut and Guatemala. I’ve bought a house and redecorated it, created a website that has been visited by over 100,000 people, and taken one of my employers to the High Court (twice) and won (both times). In short, I’ve packed a lot in.

Christ, meanwhile, has done nothing except write. His bank account has grown steadily, but his house is still in the same dilapidated condition it was when he moved in ten years ago. He still hasn’t married his girlfriend, despite promising to do so every year for the past five years. And he still hasn’t moved to the country, though he does say he plans to do it next year. He’s had a heart attack, so I hope he does it soon. Otherwise, I fear he’ll arrive at those pearly gates with a bunch of medi-packs which by then will be completely useless.


Thinking, fast and slow, and slower…

Daniel Kahneman’s recent bestseller, Thinking, Fast and Slow, is a brilliant summary of a lifetime’s work in the psychology of decision making. Together with his colleague, Amos Tversky, Kahneman revolutionized the way psychologists think about how people reason and make choices. Before these two young Turks burst onto the scene in the early 1970s, psychologists had a rather rosy view of decision making that owed more to logic and mathematics than to empirical research. People were seen as utility-maximizers, rationally weighing up the pros and cons of the options available to them before opting for the one with the highest payoff. In a series of brilliant experiments, Kahneman and Tversky exposed this picture as overly optimistic, and showed that human decision making is riddled with biases and cognitive short-cuts that work well enough most of the time, but can also lead to some pretty dumb mistakes.

The central thesis of Kahneman’s book is that there are fundamentally two modes of thought, which he denotes System 1 and System 2. System 1 is fast, automatic, emotional, and subconscious; System 2 is slower, more effortful, more logical, and more deliberative. The biases and cognitive short-cuts are largely features of System 1, but we can overcome these by employing System 2. It just takes more effort and more time.

This is fine as far as it goes, but it leaves a crucial third kind of thinking out of the picture. This is the meditative, creative mode of thought that the psychologist Guy Claxton calls the “undermind” in his thought-provoking book, Hare Brain, Tortoise Mind. It is much slower than Kahneman’s System 2, and works away quietly in the background, below the level of conscious awareness, helping us to register events, recognize patterns, make connections and be creative. This is the kind of thought that can bubble away beneath the surface for weeks or even months, quietly turning over a problem and looking at it from different perspectives, before suddenly thrusting a solution into consciousness in that exciting Eureka! moment.

I think Claxton is onto something in claiming that the mind possesses three different processing speeds, not two. Think of it as a kind of “cognitive sandwich” if you like. The top half of the bun is the lightning-fast System 1 identified by Kahneman, the world of snap judgments and rapid heuristics. The bottom half of the bun is the snail-paced undermind identified by Claxton, where thoughts cook slowly in the back oven. Both of these are unconscious processes, operating below the level of conscious awareness. The hamburger in the middle is conscious thought, Kahneman’s System 2.

Where does risk intelligence come into all this? Risk intelligence tends to be domain-specific, and those with high risk intelligence build up models of a given domain slowly, often unconsciously, as they gradually accumulate experience in their specialist field. These models may involve many different variables. The expert horse handicappers I describe in chapter one of my book took at least seven different variables into account, including the speed at which a horse ran in its last race, the quality of the jockey, and the current condition of the racetrack. People with high risk intelligence manage to keep track of all the relevant variables and perform the complex mathematical task of weighing them up and combining them. However, they usually do this unconsciously; they need not be mathematical wizards, since most of the computation goes on below the level of awareness.

There are some basic tricks for increasing risk intelligence across the board, and I discuss some of these in the book. Simply taking a general knowledge version of the RQ test can, for example, lead to rapid gains in risk intelligence because it encourages people to make more fine-grained distinctions between different degrees of belief. Such rapid improvements in risk intelligence may well generalize to any field, so there may be a small domain-general component of risk intelligence. But these rapid gains are the low-hanging fruit; once you have plucked them, further increases in risk intelligence may be harder to achieve, and will require immersing yourself in a particular field of study for perhaps many years. It is then that, to borrow Claxton’s metaphor, the “hare brain” must stand aside, and let the “tortoise mind” take over.


Priorities and Risk – What Does a Student Do?

Suppose you are studying for two exams, one of which is much harder than the other, but which are both equally important to you. You estimate you have a 45 percent chance of passing the easy exam, but only a 5 percent chance of passing the difficult one. You have enough money to pay for a tutor to help you prepare for one of the exams, and you estimate that the tutor could boost your chances of passing by around 5 percent. Which exam should you spend your money on?

Dylan Evans poses this question in his new book Risk Intelligence (pages 154-55). He thinks the right answer to this question is that it makes no difference:

A person with high risk intelligence would feel indifferent; it wouldn’t matter to her which exam she spent her money on. For her, a 5 percent improvement is a 5 percent improvement, and that’s that.

However, I think he’s wrong about this. The two outcomes are not identical, and the choice depends on the student’s priorities. Specifically, it depends on whether the student wants to maximise the chance of passing at least one exam, or to maximise the chance of passing both exams. I contacted Dylan with my own analysis and he invited me to present it here on the Projection Point blog.

In the scenario, a student is studying for two exams, one hard, one easy. She can get tutoring in one of the subjects and increase her chance of passing that exam by 5%. The exams are equally important. Does it matter in which exam she receives extra tutoring? The text suggests that her probability of passing the two exams changes from (0.45, 0.05) to either (0.5, 0.05) or (0.45, 0.1), i.e. by five percentage points. The analysis states that a highly risk intelligent student should be indifferent between the two options – after all, five percent is five percent. I wasn’t so sure, and decided to work the example out in full.

If the student gets tutoring in the easier exam:

  • Probability of passing both = (0.45 + 0.05) * 0.05 = 0.025
  • Probability of failing both = (1-(0.45+0.05))*(1-0.05) = 0.475
  • Probability of passing one or more = 1 – prob of failing both = 1-0.475 = 0.525

If the student gets tutoring in the harder exam:

  • Probability of passing both = (0.45) * (0.05 + 0.05) = 0.045
  • Probability of failing both = (1-0.45)*(1-(0.05+0.05)) = 0.495
  • Probability of passing one or more = 1 – prob of failing both = 1-0.495 = 0.505

So the outcomes are different, and the choice depends on whether it is more important to increase the chance of passing at least one exam, or more important to pass both exams. For example, both exams might be needed to pass the year, or alternatively passing just one exam might be enough. Note that the two exams are still equally important.

For these two different priorities the student should choose as follows:

  1. Need to pass both exams – get tutoring in the hard exam
  2. Need to pass one (or more) exams – get tutoring in the easy exam
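These figures are easy to check with a short calculation. The sketch below uses my own variable names and assumes the two exam results are independent, which the analysis above implicitly does too:

```python
# Baseline probabilities of passing the easy and hard exams.
p_easy, p_hard = 0.45, 0.05
boost = 0.05  # tutoring adds 5 percentage points to one exam


def outcomes(p1, p2):
    """Assuming independent exams, return (P(pass both), P(pass at least one))."""
    both = p1 * p2
    at_least_one = 1 - (1 - p1) * (1 - p2)
    # Round to suppress floating-point noise in the printed figures.
    return round(both, 6), round(at_least_one, 6)


print(outcomes(p_easy + boost, p_hard))  # tutor the easy exam: (0.025, 0.525)
print(outcomes(p_easy, p_hard + boost))  # tutor the hard exam: (0.045, 0.505)
```

Tutoring the hard exam is better for passing both (0.045 vs 0.025), while tutoring the easy exam is better for passing at least one (0.525 vs 0.505), matching the figures above.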

Benefit of tutoring is proportional to current probability of passing

On the other hand, if the benefit of tutoring is to multiply the probability of passing by a factor of 1.05 (a 5% relative increase), rather than to add 5 percentage points, then the probability of passing both exams is the same whichever exam is tutored, although the probability of passing at least one exam still differs. The 0.45 probability of passing goes to 0.45*1.05, or the 0.05 probability goes to 0.05*1.05.

If the student gets tutoring in the easier exam:

  1. Probability of failing both = (1-(0.45*1.05))*(1-0.05) = 0.501125
  2. Probability of passing both = (0.45 *1.05) * 0.05 = 0.023625
  3. Probability of passing one or more = 1 – prob of failing both = 1 – 0.501125 = 0.498875

If the student gets tutoring in the harder exam:

  1. Probability of failing both = (1-0.45)*(1-(0.05*1.05)) = 0.521125
  2. Probability of passing both = (0.45) * (0.05 * 1.05) = 0.023625
  3. Probability of passing one or more = 1 – prob of failing both = 1 – 0.521125 = 0.478875

Finally, when I contacted Dylan, he suggested another possibility: the student might want to maximise the combined mark across the two exams. If the expected sum of marks without tutoring is 0.45m + 0.05m = 0.5m (where m is the mark available in each exam), then a 5-percentage-point boost raises that to either 0.5m + 0.05m or 0.45m + 0.1m, which is 0.55m in both cases, so on this criterion the student really is indifferent.
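The proportional version can be checked with the same kind of sketch (again with my own variable names, assuming independent exams). Multiplying either probability by 1.05 multiplies their product by 1.05, so the chance of passing both is identical either way, while the chance of passing at least one still favours tutoring the easy exam:

```python
# Baseline probabilities of passing the easy and hard exams.
p_easy, p_hard = 0.45, 0.05
factor = 1.05  # tutoring multiplies one pass probability by 1.05


def outcomes(p1, p2):
    """Assuming independent exams, return (P(pass both), P(pass at least one))."""
    return round(p1 * p2, 6), round(1 - (1 - p1) * (1 - p2), 6)


print(outcomes(p_easy * factor, p_hard))  # tutor the easy exam: (0.023625, 0.498875)
print(outcomes(p_easy, p_hard * factor))  # tutor the hard exam: (0.023625, 0.478875)
```

Note that under this proportional interpretation the expected combined mark is no longer the same for both choices: tutoring the easy exam adds 0.0225m to it, while tutoring the hard exam adds only 0.0025m.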


In summary:

  • The priorities of the student matter. These include:
    • Pass at least one exam
    • Pass both exams
    • Maximise the combined mark
  • The exact way the improvement is calculated matters: a fixed boost of 5 percentage points vs a relative improvement of 5%

How Car Safety Features are Reducing Risk Intelligence on the Road

Driving is one of the riskiest activities that people in the United States routinely engage in. Each year, tens of thousands die as a result of traffic incidents, so risk intelligence behind the wheel is crucial. Cars have been mass-produced in the United States for over a century, but they have changed significantly since the days of the Model T Ford. Many of the developments in both cars and highway law enforcement have centered on safety: crumple zones, seat belt laws, air bags, anti-lock brakes, side-impact protection systems, traction control systems, and speed limits are all designed to save lives and make driving a less risky activity. However, despite all these safety measures, road accidents and fatalities have remained roughly constant over the last few decades.

Currently, over 200 million people hold a driving license in the United States, and annual road fatalities have remained at about 40,000 a year for the last 20 years, despite the improvements in car and road safety mentioned above. Of course, the number of cars on the road increases each year, but this still doesn’t account for why accidents remain at the level they do, and some people suggest that the safety improvements themselves are to blame. It seems that the safer cars become, the more risks drivers are willing to take – an effect that has become known as Smeed’s law, which points out that increases in safety equipment and changes in traffic law are cancelled out by the changes these measures cause in driving behavior.

Commercial drivers

For commercial drivers, fatal accident rates are actually increasing. Commercial drivers, such as truck and van drivers, are far more likely to be involved in road collisions than other drivers, partly because of the number of miles they drive each year. Tight schedules and long working hours also lead van and truck drivers to take more risks, and these vehicles are fitted with all sorts of safety devices.

Commercial vehicle accidents cost the US economy millions of dollars each year through lost working hours, repairs to vehicles, and claims on commercial vehicle insurance. As a result, haulage companies invest huge amounts of time and money looking for ways to reduce the number of accidents their drivers are involved in, from analyzing crash reports to onboard monitoring of driving habits. Yet the safety of the modern commercial vehicle could be the biggest culprit. While companies that provide truck and private van insurance often offer discounts for vehicles with such safety equipment, the practice could be having the opposite of its intended effect, with insurance companies inadvertently encouraging accidents through this incentive.

Peltzman effect

The first person to theorize the negative effect of safety regulations was Sam Peltzman, and his theory has become known as the Peltzman effect. He suggested that people adjust their behavior to counteract improvements in safety: when the government introduced seatbelt laws or car manufacturers installed air bags in their vehicles, people’s risk calculations changed because they recognized that the chance of serious injury in an accident had diminished. The consequence is that people drive faster and more recklessly. Safety equipment in cars may therefore pose even more risk to pedestrians and cyclists, who share the road with these faster, more reckless drivers but enjoy none of the technology’s protection.

Attempts to measure this Peltzman effect have been limited, mainly because of the huge number of variables involved in compiling traffic data. However, one study conducted in 2007 used the number of crashes in NASCAR to determine whether changes to car safety were making racing drivers more reckless. Using track data from 1972 to 1993, the researchers found that as the cars became safer, the drivers became more reckless: while the number of injuries in NASCAR had fallen thanks to safety changes to the cars and tracks, the number of accidents had increased.

A similar effect has been noticed in a UK study of cycle helmets. Helmets are not compulsory in the UK, and research by a leading traffic psychologist found that cyclists who wore helmets were more likely to be involved in an accident with a vehicle than those who didn’t wear them. The suggested explanation is that drivers give more room to, and drive more carefully around, cyclists they perceive as unprotected, compared to cyclists who are wearing helmets.

The Peltzman effect has led to a common witticism among traffic psychologists: if you placed a dagger on everybody’s steering wheel, pointed at the driver, road accidents would fall to virtually zero, as everybody would drive exceptionally carefully or risk serious injury from even the slightest bump.

Jessie Hardcastle is a freelance writer from England who specialises in finance and investment for a number of UK journals and blogs. With a growing following, she has recently been focusing mainly on the problems close to her London home as Europe continues to falter in the face of political indecision.

Risk Intelligence for the Layperson

If you intend to embark on a career as a gambler or a weather forecaster, or if you intend to speculate on the stock exchange or take a punt at your local horse races, improving your risk intelligence would stand you in good stead and improve your odds of winning. However, not everybody is interested in gambling or other ‘risky’ ventures, so why should you care about your risk intelligence?

In everyday situations, improved risk intelligence can be advantageous. Whether we are evaluating what insurance cover we might need, serving on a jury, making business investment decisions, deciding in a recruitment role whether to employ someone with previous convictions, or even choosing whether to move in with a girlfriend or boyfriend, the ability to accurately assess the risks of each circumstance makes a positive final outcome much more likely. The layperson in particular exhibits three key flaws when evaluating risk:

1) Overconfidence – the most common flaw: being more confident in a particular outcome than the evidence warrants. Overconfidence is exceedingly common, and it often comes from the people we trust most to help us make decisions: experts. Relying on ‘experts’ is not always as simple as it might sound. Just because someone knows a lot about a subject doesn’t mean they are aware of their limitations, of how much they don’t know. For evaluating risk, it is often more important to know what you don’t know than to know what you do know. By improving your own risk intelligence, you can evaluate how reliable an expert’s information is, and what risks you are taking in applying it to your own situation. The experts you would meet regularly who fall into this category include doctors and other medical staff, and bank managers and other financial advisers.

2) Worst-Case Scenario Thinking – allowing ourselves to turn low-probability scenarios into near certainties, without any logic to back this up. It often triggers a spiral of demoralising thoughts and imagined ever-worse outcomes, none of which are helpful for making a decision. It prevents genuine risk assessment and logical reasoning, and often results in an emotive, poor decision. An example would be refusing medical treatment, such as surgery to fix a badly broken leg, because of a low-probability outcome such as death. It can be very easy to become irrational and emotionally involved, decide that opting for surgery will result in death, and conclude that it would be better to live with a bad limp or an inability to walk – the likely outcome of leaving the leg to heal by itself.

3) The Availability Heuristic: Imagination Inflation – estimating the likelihood of something happening based on how easily we can remember it happening in the past. The illogicality of this can be seen in the fact that what we remember need not have happened in real life; we could have seen it in a film or a computer game. This can result in grossly inflated probabilities. It can also work the other way, with hard-to-visualise scenarios being underestimated. For example, suppose we watch the film The Towering Inferno, and the following week we come to renew our business’s fire insurance. Vividly remembering the burning building from the film, we are more likely to spend unnecessary money on more comprehensive and expensive cover than we actually require.

By being aware of these flaws, and checking your thought processes against them when assessing risky decisions, you can improve your risk intelligence and the general success of any risk-based decisions you make. Assess how sure you are that something is right or wrong, or will or won’t happen, rather than simply looking at things in black and white; and review the outcomes of the decisions you take so as to improve your thought process in the future. Both habits will help you on your way to becoming a more astute risk taker.
