This week, blogger Jim Wells has an excellent post about a common myth associated with Six Sigma and other quality techniques. The myth is that these techniques will magically provide knowledge and insight. Jim argues that, in fact, these tools are only as effective as the knowledge of the person using them and the data that is fed into them. Check out Jim’s full post at:
http://qualitypractice.blogspot.com/2009/11/six-sigma-its-no-substitute-for.html
I often have a similar conversation with leaders. Many use data as a substitute for, rather than a complement to, their understanding of their business. That doesn’t work. Leaders should develop a model of the dynamics at work in their business and how those dynamics might show up in the data. They need to do this before reviewing the data so that they can interpret it more effectively.
A famous story illustrates this principle. In World War II, the Allied military had a dilemma: its planes were being shot down, and it had only a limited amount of armor with which to reinforce them. It called in the mathematician Abraham Wald. Wald studied the returning planes and found common patterns in where bullet holes appeared and where they did not. Most people would have recommended fortifying the spots with the most holes. That seemed like the best answer, since the data showed that was where the planes were being hit most often.
However, Wald took a step back and applied his understanding of aircraft and warfare to develop a model to help him interpret the data. His recommendation was to fortify the parts of the planes that didn’t have holes. His reasoning took into account information the data alone could not show: the planes he was looking at were the ones that had returned. Therefore, the locations of the holes he saw were not critical to a plane’s ability to fly. The places where he didn’t see holes must represent where the other planes (the ones that were shot down) had been hit. His solution was simple, but only because he had a model with which to understand his data.
I was recently working with a group of people who were trying to measure the impact of a particular solution on productivity. They were fortunate in that they had rolled out the solution in phases, so at any given time some people were using it and others were not. Week after week their report stated, based on their data, that the solution was helping to drive up productivity.
I challenged them on their conclusion. Clearly, their solution improved productivity; that wasn’t the issue. But their story was somewhat misleading. Impact requires two things: effectiveness and use. A good solution that is unused does not have much impact, nor does a poor solution that is used heavily. In comparing the department-wide average with the averages for users and non-users of the solution, it became clear that not very many people were using the new solution yet. While those users’ productivity improved, there weren’t enough of them to have a material impact on the department as a whole. The “impact” model helped us make better sense of the data, and the story changed from “This solution impacts the business” to “This solution has great potential, but we need to roll it out more aggressively.”
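To make the arithmetic concrete, here is a minimal sketch in Python using hypothetical numbers (not the team’s actual figures) showing how a large productivity gain among a small group of users can barely move a department-wide average:

# Hypothetical illustration of "impact = effectiveness x use".
# All numbers are made up for the sketch, not the team's actual data.

dept_size = 500           # total people in the department
users = 25                # people using the new solution so far
non_users = dept_size - users

baseline_productivity = 100.0   # arbitrary productivity index for non-users
user_productivity = 120.0       # users are 20% more productive (effectiveness)

# The department-wide average is a weighted blend of the two groups.
dept_average = (users * user_productivity + non_users * baseline_productivity) / dept_size

print(f"Users' average:      {user_productivity:.1f}")
print(f"Non-users' average:  {baseline_productivity:.1f}")
print(f"Department average:  {dept_average:.1f}")
# With only 5% adoption, a 20% gain lifts the department average by just 1%.

In this hypothetical, the solution is clearly effective, but with so few people using it the department numbers barely move, which is exactly why the story had to shift from “impact” to “potential.”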
Insights don’t come from data. They come from your understanding of your business applied to data.
Hey Brad,
Thanks for the referral. It is often true that we see what we want to see, as in the case of your example. The team wanted the productivity gain to be there to validate their success. There’s an old saying in statistics: “Figures lie and liars figure.” While it’s a little harsh, it points out that you can make data say almost anything you want it to say.