When eight out of 94 equals 0.4%

In his book, Reckoning with Risk, Gerd Gigerenzer tells the following story about computing risk:

“A few years ago, I enjoyed a guided tour through Daimler-Benz Aerospace (DASA), which produces the Ariane, a rocket that carries satellites into orbit. Standing with my guide in front of a large poster that listed all 94 rockets launched so far (Ariane models 4 and 5), I asked him what the risk of an accident was. He replied that the security factor is around 99.6 percent. That was surprisingly high because on the poster I saw eight stars, which meant eight accidents. . . I asked my guide how eight accidents could translate into 99.6 percent certainty. He replied that DASA did not count the number of accidents, but rather computed the security factor from the design features of the individual parts of the rocket. He added that counting accidents would have included human errors and pointed out that behind one of these stars, for instance, was a misunderstanding between one worker who had not installed a screw, and the worker on the next shift who had assumed that his predecessor had done so.”

Theoretically, the calculation was correct. Combining the risks of the parts can predict the risk of the whole.
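As a rough illustration of how such a design-based estimate is composed (a generic sketch, not DASA's actual method, and the per-part reliabilities below are invented for the example), a series-reliability calculation multiplies the success probabilities of independent parts that must all work:

```python
# Illustrative only: a design-based reliability estimate for a system of
# independent components in series. The component figures are invented
# for this example; they are not DASA's actual numbers.

def system_reliability(component_reliabilities):
    """Probability the whole system works, assuming independent
    components that must all succeed (a series configuration)."""
    reliability = 1.0
    for r in component_reliabilities:
        reliability *= r
    return reliability

# Hypothetical per-part success probabilities chosen so the product
# lands near the 99.6% "security factor" quoted in the story.
parts = [0.9995, 0.9994, 0.9996, 0.9993, 0.9982]
print(f"Design-based security factor: {system_reliability(parts):.1%}")
```

The product can look reassuringly precise while silently excluding everything that isn't a part, such as the assembly errors the guide described.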

However, it described a rocket that doesn't exist. A rocket is not just a collection of parts; it is assembled by people and machines, and human or machine error is a major factor in its safety. The 99.6% security factor simply did not represent the real world. The evidence made that plain: eight failures in 94 launches is a failure rate of roughly 8.5%, about 21 times greater than the computed risk. An accident in one out of every 12 rockets drives a very different set of decisions than one out of every 250.
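The arithmetic behind those comparisons is worth making explicit. A few lines, using only the figures quoted in the story, reproduce the 8.5% rate, the 21-times ratio, and the one-in-12 versus one-in-250 framing:

```python
# Checking the empirical record against the design-based estimate,
# using only the figures quoted in the story.

launches = 94
failures = 8

empirical_rate = failures / launches   # ~8.5%, i.e. about 1 in 12
design_rate = 1 - 0.996                # 0.4% implied by the 99.6% security factor

print(f"Empirical failure rate:    {empirical_rate:.1%} (1 in {launches / failures:.0f})")
print(f"Design-based failure rate: {design_rate:.1%} (1 in {1 / design_rate:.0f})")
print(f"Ratio: {empirical_rate / design_rate:.0f} times greater")
```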

Data should reflect the context in which your decisions and actions play out; otherwise, despite its technical accuracy, it can drive the wrong decisions. When the rocket was first developed, there was no experiential data, so the design-based calculation was probably the best available approximation. But the calculation should have been updated when the context changed and more information became available, especially once the actual data diverged so sharply from the projection. It's okay to change your metrics or your calculations as you learn more about your business.
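One standard way to revise an initial estimate as evidence accumulates (my illustration of the general idea, not anything the story attributes to DASA) is a Beta-Binomial update: treat the design-based failure rate as a prior, then fold in the observed launch record. The prior strength of 250 below is an arbitrary assumption for the sketch:

```python
# A minimal sketch of revising an estimate as evidence arrives, using a
# Beta-Binomial update. This illustrates the general idea; it is not a
# method described in the story.

# Encode the design-based 0.4% failure rate as a Beta prior. A prior
# strength of 250 (the weight given to the design estimate, in
# launch-equivalents) is an arbitrary assumption for this sketch.
prior_strength = 250
alpha = 0.004 * prior_strength   # prior pseudo-failures
beta = 0.996 * prior_strength    # prior pseudo-successes

# Fold in the observed record from the poster: 8 failures in 94 launches.
failures, launches = 8, 94
alpha += failures
beta += launches - failures

posterior_rate = alpha / (alpha + beta)
print(f"Updated failure-rate estimate: {posterior_rate:.1%}")  # ~2.6%
```

Even with a prior that heavily favors the design estimate, the observed failures pull the estimate well away from 0.4%, which is the point: recalculate when reality disagrees with the projection.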

Take a look at the data that you use to make decisions. Does it accurately reflect reality? Do you understand what information it includes and excludes? Do you know who created it and for what purpose? If you answered "no" to any of these questions, you might be working under a false sense of certainty.

Brad Kolar is the President of Kolar Associates, a leadership and workforce productivity consulting firm. He can be reached at brad.kolar@kolarassociates.com.
