I recently read a quote from Jeff Bezos in which he said that when anecdotal data and quantitative data don’t match, the quantitative data are usually wrong. I thought that was interesting.
However, it’s an issue that many of us deal with on a regular basis. It’s hard to trust cold impersonal numbers when what you see with your own eyes seems to say the opposite.
So, how can two data sets that are looking at the same thing say different things?
Very often the problem isn’t with the data. It’s with your understanding of what the data are measuring.
You need to look at the assumptions, sample size and makeup, and context of both your experiences and metrics to find out if they are actually measuring the same thing in the same way.
Here are a few examples of experiences and metrics/data that seemed to be looking at the same thing, but really weren’t.
Why aren’t our patients satisfied?
In a hospital where I worked, we were having an issue with our patient satisfaction scores.
The scores were below target and lower than many of the hospitals in our region.
The management kept coming down on the staff telling them to improve the patient experience. However, the staff, who worked with patients every day, said that people seemed pretty satisfied. Management wasn’t convinced. Yet, it was hard to imagine that our nurses, doctors, and other staff members weren’t able to tell the difference between a happy patient and an unhappy patient.
As it turned out, our “patient satisfaction score” didn’t actually measure patient satisfaction, at least not in absolute terms.
The score was calculated by comparing our patients’ average satisfaction rating (on a scale of 1-5) against other hospitals’. The result was a percentile ranking from 0 to 100. An 88 meant that we scored better than 88% of the other hospitals surveyed. A 10 meant that we scored worse than 90% of them.

If every hospital in the region had VERY satisfied patients (which was the case) and we only had satisfied patients, we’d get a low score. Yet, in talking to our patients, they would still seem satisfied. On the other hand, if every hospital had VERY dissatisfied patients and ours were only somewhat dissatisfied, we’d get a high score. So the metric really wasn’t providing insight into satisfaction itself. It’s a bit like class rank in high school: at an elite school, you could be in the 50th percentile and still be really smart.
The disconnect: Our metric was reporting satisfaction relative to other hospitals, but our staff’s experiences reflected absolute satisfaction.
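To make that distinction concrete, here is a minimal sketch in Python. The hospital averages are invented for illustration and don’t come from the actual survey; it simply shows how a percentile-style score can be near zero even when every hospital’s patients are satisfied in absolute terms.

```python
def percentile_rank(our_avg, peer_avgs):
    """Percent of peer hospitals whose average satisfaction is below ours (0-100)."""
    below = sum(1 for avg in peer_avgs if avg < our_avg)
    return 100 * below / len(peer_avgs)

# Hypothetical region: every hospital averages 4.0 or better on the 1-5 scale,
# i.e., patients everywhere are at least "satisfied".
peer_averages = [4.8, 4.7, 4.7, 4.6, 4.5, 4.4, 4.3, 4.2]
our_average = 4.1  # our patients are satisfied in absolute terms...

print(f"Average satisfaction: {our_average} out of 5")                          # looks fine
print(f"Percentile score: {percentile_rank(our_average, peer_averages):.0f}")   # 0 - looks terrible
```

The 4.1 average says our patients are satisfied; the percentile score of 0 says only that every peer hospital did slightly better. Both are “right”; they just measure different things.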
What do you mean they aren’t getting enough training?
One company that I worked with was trying to reduce its training costs. Their strategy, like that of many companies, was to shift a significant amount of their training from in-person to virtual. This would eliminate the massive travel expenses that in-person training can incur.
They spent a couple of years making a huge push to develop and offer more virtual training.
In one year, they delivered over 1,000,000 hours of virtual training.
They were quite surprised when they got the results of their employee engagement survey that year. One of the lowest-rated questions was “In the past year, I got the training that I needed.”
That didn’t make sense. They had plenty of data showing that people were taking the virtual training and taking a lot of it.
The problem became clearer once we dug into the comments.
A common theme on the training question was “I didn’t receive any training this year, only virtual.”
In other words, their employees didn’t consider virtual training to be “training”.
The disconnect: Employees were answering the question based on classroom training but the team was interpreting the question to apply to all training.
Of course, I’m going to get a fee
Several years ago, I ran into a problem making an airline reservation.
After receiving the confirmation email, I noticed that some of my personal information was incorrect. I contacted the airline to make the corrections. However, due to federal regulations, they couldn’t change certain pieces of information after the reservation had been issued.
The customer service agent told me that she’d cancel my reservation and rebook it with the correct information. I said, “Wait a minute! Am I going to be charged a cancellation fee if you do that? Every time I’ve cancelled a reservation, I’ve been charged a fee.”
The agent assured me that I wouldn’t be charged a fee. She said that reservations can be cancelled within 24 hours with no penalty. But I didn’t believe her. I said, “How can that be? I’ve always been charged a fee.”
She was an expert in this area (she did this for a living), had more data than I did (she’d booked thousands of reservations in her career), and was trained in the company’s policies and procedures. Yet I thought she was wrong because what she told me ran counter to my experience.
It’s true that I had always been charged a fee. However, I had probably only cancelled 3-4 reservations in my life. And none of those cancellations occurred within the first 24 hours.
The disconnect: The agent’s explanation was based on a specific context – cancellations within 24 hours. My generalization was based on a set of experiences that didn’t include that scenario.
People often ask me how to convince someone that his or her experiences are incorrect and that he or she should trust the data. My advice is always the same: don’t.
A person’s experiences are always 100% accurate.
What might be wrong are the interpretations and generalizations being made based on those experiences.
Then again, maybe you’re just measuring something different and don’t even know it.
You’re much better off having a conversation about how each of you came to your conclusions and where the difference may be occurring.
What can I do about it?
There are a few things you can do when your data do not match your experiences or observations:
Testing your experience and observational conclusions
- How do you “know”? On what basis did you draw your conclusions?
- What did you specifically ask or see?
- What criteria are you using to draw your conclusions? Are those the same as the criteria that went into developing the quantitative metrics that you are comparing against?
- How many observations do you have? Remember my airline reservation example: I was basing my beliefs on just 3-4 observations.
- Do your observations cover a sufficiently diverse sample? In my airline reservation example, none of my observations involved cancelling within a 24-hour window. So I actually didn’t have enough experience to show me where there might be a difference.
- How did you collect your observational data? Is it from a neutral source or was it collected by someone with a vested interest in a particular result?
Testing your quantitative conclusions
1. What specifically does your metric measure?
By specific, I don’t just mean its name.
For example, a common misperception that kills many small businesses has to do with understanding the meaning of the word “profit”. Many people believe that profit is a measure of how much money you are making. Yet many small businesses go bankrupt even though they are showing a profit. That’s because profit is a theoretical measure. It just tells you the difference between what you sell a product/service for and what it costs to make it.

However, profit doesn’t take into account whether you are actually receiving the money. If you sell 10 computers for $100 each and it cost you $50 to make each one, you’ve made a $500 profit. However, if the customer doesn’t pay the invoice (or doesn’t pay it on time), you don’t actually have the $500. You haven’t made any money yet.
To find out whether you are “making money”, you need to look at cash flow. Incidentally, that’s also why, in the early days of eCommerce, so many businesses were able to operate without making a profit. They were growing so fast that, even though they weren’t profitable, new and increasing revenue was coming in fast enough to cover the old costs.
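Here is a back-of-the-envelope sketch of the computer example in Python. The price and cost come from the example above; the number of invoices actually paid is invented purely for illustration.

```python
PRICE, COST = 100, 50     # sell each computer for $100; it costs $50 to make

units_sold = 10
units_paid_for = 4        # hypothetical: only 4 customers have paid their invoices so far

profit = units_sold * (PRICE - COST)    # accounting view: $500 profit
cash_in = units_paid_for * PRICE        # money actually received: $400
cash_out = units_sold * COST            # money actually spent: $500
cash_flow = cash_in - cash_out          # -$100: "profitable" yet short of cash

print(f"Profit:    ${profit}")
print(f"Cash flow: ${cash_flow}")
```

Same sales, two different questions: profit asks what the sales are worth on paper; cash flow asks whether the money is actually in hand.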
2. What business question does your metric answer? Is that the same business question your qualitative data/observations answered?
3. What’s the formula for the metric? Do you know exactly what goes into the calculation (your criteria)? Are those the same criteria that you used for your qualitative/observational conclusions? Think back to the satisfaction scores: once we understood how the score was computed, it became clear what it was actually measuring.
4. What’s included in or excluded from your sample? Perhaps your metric is measuring more than you are observing. Or perhaps the opposite is true.
5. Is it better for your metric to be high or low? Believe it or not, people sometimes misinterpret this. For example, suppose you have a “quality” metric. Most people would assume that higher is better. However, if you are measuring the number of errors per 1,000 products, then it’s better for that number to be low.
6. What’s the range? What’s the absolute lowest value the metric can take and what’s the highest? I once worked with a company that, for some reason, multiplied its satisfaction scores (1-5) by 20 to put them on a 100-point scale. However, most people didn’t realize that, in doing so, the range became 20 (1*20) to 100 (5*20); they assumed the scale started at 0. A score of 70 out of 100 is never great, but it’s even worse when your scale starts at 20 rather than 0.
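As a rough sketch of that last point (the multiply-by-20 scheme and the score of 70 are from the example above; the helper function is just illustrative), converting a score back to its position within its true range shows the difference:

```python
def position_in_range(score, low, high):
    """Where a score sits within its actual range, as a percentage (0-100)."""
    return 100 * (score - low) / (high - low)

score = 70  # a satisfaction score of "70 out of 100"

# If the scale really started at 0, a 70 would sit 70% of the way up the range.
print(f"Assumed 0-100 range: {position_in_range(score, 0, 100):.1f}%")   # 70.0%
# But a 1-5 average multiplied by 20 can only range from 20 to 100,
# so a 70 is really a 3.5 out of 5: only 62.5% of the way up the true range.
print(f"Actual 20-100 range: {position_in_range(score, 20, 100):.1f}%")  # 62.5%
```

The same number looks noticeably different once you know where the scale actually starts.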
Unless there is a calculation or input error, most data do a pretty good job of accurately telling you what’s happening. That’s true of your experience as well. The problem often isn’t with the data. The problem is usually that we haven’t understood SPECIFICALLY what the data are actually measuring.
——————————————–
Brad Kolar is an executive consultant, speaker, and author with Avail Advisors. Avail’s Rethinking Data workshop will help you learn to close the gap between analysis and action. You can reach Brad at brad.kolar@availadvisors.com.