Syncing your data and your decisions

Is there such a thing as having too much data? Absolutely, and more and more organizations are starting to feel the pain.

Automation, especially over the internet, has allowed even the smallest companies to generate vast amounts of data. Unfortunately, most of it goes unused. Either it sits in databases and on servers known only to a few people in the IT department (whose job is to collect it, not use it), or it makes its way onto reports, creating significant noise and clutter. In either case, it’s not helping.

But the sheer quantity of data isn’t the only problem overwhelming leaders. The bigger problem lies with data that isn’t aligned with decisions. I’m not talking about having the wrong data (which is a problem unto itself). Rather, I’m referring to having the right data reported at a level that is either too granular or too frequent to be useful.

For example, one company reported employee satisfaction scores to two decimal places. The scores represented the percentage of people who answered a given question favorably (4 or 5 out of 5). There were two problems with this level of reporting.

First, on a scale of 0 to 100 (since it was a percentage), the hundredths place is very sensitive to small changes. When changes show up on a report, people have a natural tendency to spend time trying to understand and explain them. Control charts and other Lean/Six Sigma tools can help leaders separate real changes from noise, but the constant fluctuations draw attention away from more important problems.

Second, and more importantly, the decisions that a leader would make from such data don’t change in 0.01-point increments. In fact, a change of 4 or 5 points might not even call for a different decision or action.

From a statistical standpoint, small changes might have meaning. From a practical standpoint, they often don’t (see “It might be significant but that doesn’t mean it matters”).

The leaders who used this satisfaction report would get bogged down in discussions about why a number went from 65.34 to 65.89. They’d mistakenly take credit for “successful” programs, “good” implementation, and “responsiveness” to employee needs. However, from an employee standpoint, there really wasn’t much change. In either case, one out of every three people was still not happy.
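
For leaders who do want to know when a movement like 65.34 to 65.89 deserves attention, a control chart (mentioned above) is one standard tool. Here is a minimal Python sketch of an individuals (XmR) chart, using hypothetical weekly scores and the conventional 2.66 moving-range constant; anything inside the computed limits is treated as routine noise:

```python
# Minimal individuals (XmR) control chart check.
# The weekly scores below are hypothetical, purely for illustration.
scores = [65.34, 65.89, 64.72, 66.10, 65.01, 68.90, 65.40, 64.95]

mean = sum(scores) / len(scores)

# Average moving range between consecutive weeks.
moving_ranges = [abs(b - a) for a, b in zip(scores, scores[1:])]
avg_mr = sum(moving_ranges) / len(moving_ranges)

# Standard XmR limits: mean +/- 2.66 * average moving range.
upper = mean + 2.66 * avg_mr
lower = mean - 2.66 * avg_mr

for week, score in enumerate(scores, start=1):
    status = "worth investigating" if score > upper or score < lower else "routine noise"
    print(f"Week {week}: {score:.2f} ({status})")
```

With these particular numbers, every week falls inside the limits, so none of the week-to-week movement would warrant an explanation.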

Reporting should be aligned with decision-making. If your actions would be the same at 60, 65, or even 70, then there is no reason to report at a finer level of granularity than that. For diagnostic and “drill-down” purposes, having the exact number might be helpful. But even then, it’s probably not necessary to go to the second decimal place.

Context should drive precision. For satisfaction, it is probably fine to report results in quartiles, as that is the level at which most leaders would differentiate their actions. On the other hand, some manufacturing processes might need five or six decimal places of precision to support their decisions and actions.
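
As a rough sketch of what decision-aligned granularity could look like in practice, here is a small Python example that collapses a raw favorable-score percentage into the band that would actually trigger a different response. The cut points (60 and 70) and the actions are hypothetical, borrowed from the illustration above, not a recommendation:

```python
# Hypothetical decision bands for a favorable-score percentage.
# The cut points and actions are illustrative only.
def reporting_band(favorable_pct: float) -> str:
    if favorable_pct < 60:
        return "below 60: intervene"
    elif favorable_pct < 70:
        return "60-69: monitor"
    return "70 or above: sustain"

# 65.34 and 65.89 fall in the same band, so they call for the same action.
for score in (58.2, 65.34, 65.89, 72.5):
    print(f"{score:.2f} -> {reporting_band(score)}")
```

Reported this way, the weekly noise disappears, and only a move across a band boundary prompts a conversation.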

The other type of precision that creates problems for leaders is frequency. In many organizations, reports come out too often. We’ve all heard the advice, whether we’re losing weight or making investments, not to check the numbers every day. The same is true in business. Reporting too frequently can put your leaders into a reactive mode: they take new actions before seeing whether their prior actions are working. In The Fifth Discipline, Peter Senge described this problem:

“Virtually all feedback processes have some form of delay. But often the delays are either unrecognized or not well understood. This can result in “overshoot,” going further than needed to achieve a desired result. The delay between eating and feeling full has been the nemesis of many a happy diner… Unrecognized delays can also lead to instability and breakdown, especially when they are long.”

Senge’s most famous example of this is setting the temperature in an unfamiliar shower. We turn the faucet toward hot and test the water. If it’s still cold, we turn the faucet further toward hot and test again. If it’s still cold or just lukewarm, we turn it further still. Finally, the system catches up with itself, and when we reach in, we are scalded by extremely hot water. Overcompensating in the middle of a feedback loop will often get you burned.

Reporting too frequently can cause dysfunctional behavior. In one organization, customer satisfaction scores were released each week. If the scores went down, leaders would jump into action in an attempt to improve them. If the scores went up, they would often credit the success of their latest program.

The problem with this approach is that the scores the leaders were seeing in any given week often had nothing to do with their most recent decisions. More likely, they reflected programs or interventions that had been put into place weeks or even months prior.

Think about a typical cycle in your organization. How long does it take to identify a problem, determine a solution, implement it, and have that solution start to add value? It’s probably a lot longer than a week.

Because of reporting frequency, many leaders fall into the trap of taking a new action before understanding if their prior actions have had any impact.

The context in which you make decisions should drive frequency just as it drives granularity. Monthly or quarterly reporting might be appropriate for satisfaction-related data. However, it would certainly be inappropriate for a day trader who needs real-time information on the market.

There are times when extra precision is helpful. If you have just launched a new initiative, you might want to report more frequently and at a finer level of detail just to see if and when changes start to occur. But then again, that is a different context, and your decisions and actions would reflect it. For general day-to-day operations, too much granularity and too much frequency will generally create unnecessary (and sometimes counterproductive) actions and effort.

Take a look at your reports and ask yourself these questions:

1) Is my data at the right level of granularity? What are the different “trigger points” in my decision-making? Is my data aligned with those trigger points?

2) How long does it take from the time I recognize a problem to the time a solution might start to show signs of working? Is my report frequency aligned with that cycle?

Synchronizing your data and decisions will help you focus on decision-making and actions. It will reduce the amount of data (and noise) with which you are dealing. It will save you time. And, it will probably drive faster, more efficient decisions and actions.

Brad Kolar is the President of Kolar Associates, a leadership and workforce productivity consulting firm. He can be reached at brad.kolar@kolarassociates.com.
