Most of us who perform statistical analyses to guide ourselves and our organizations in solving problems do not have advanced degrees in
statistics. We’ve attended classes at university,
statistics. We’ve attended classes at university,
we’ve been to varying levels of Six Sigma training, or we’ve done self-study.
But I think it is safe to say that one thing we
have all learned is that statistically evaluating a set of data is complicated
and rife with uncertainty. We choose a statistical tool from among many
possible tools, and numbers ‘pop’ out telling us whether our hypothesis is correct or
not. From those results, we proceed to either take an action or not take an action.
But how many of you finish your analysis and wonder:
what if my analysis is wrong? Did I have enough data? Did I choose the proper statistical tool? Do I
even know the proper statistical tool? Arghh!! (*)
Most of us in decision-making roles that require
analysis of data to determine choices are cautious, risk-averse people. But we had our training. My ANOVA said that part A is better than part
B, so why ask more questions?
I suggest that after any statistical analysis and before taking
an action based on that analysis, we ask two more questions.
- What is my confidence I am right?
- What is my risk of being wrong?
And I don’t mean the statistical definitions of ‘risk’ and ‘confidence.’
I mean just sit back and take a broad overview of your data, where it came
from, and how you evaluated it. Ask yourself how strongly you feel your results
are true, and ask yourself what the impact on your customer would be if your analysis is wrong.
Then you can decide what to do.
But how?
I came up with a simple chart to help guide what action to
take. I don’t know whether this is original or not, but here you go anyway.
Let’s look at each box in a little more detail.
1. Confidence of Being Right is HIGH, Risk of Being Wrong is LOW (green quadrant)
You’ve done your analysis. You’ve used multiple tools, done
your “Practical / Graphical / Analytical” analysis, and you feel very good that
you’ve found something significant and that the benefits are measurable.
You find that the cost to implement is acceptable, and after some thought and study you realize that if you are wrong, the implications for the customer are minimal.
So, you recommend to Do It.
2. Confidence of Being Right is HIGH, Risk of Being Wrong is HIGH (blue quadrant)
You’ve done your analysis. You’ve used multiple tools, done your “Practical / Graphical / Analytical” analysis, and you feel very good that you’ve found something significant and that the benefits are measurable.
However, you find that the cost to implement is very high, or you find that the effect on the customer if you are wrong is high.
Maybe wait and collect some more data. Even if you are pretty certain about your results, more data might help convince management, your customer (and you).
3. Confidence of Being Right is LOW, Risk of Being Wrong is LOW (tan quadrant)
You’ve done your analysis. You’ve used multiple tools and done
your “Practical / Graphical / Analytical” analysis. But you are still not certain
whether you’ve found something significant, and you are not certain that the benefits are measurable.
However, you find that the cost to implement is acceptable and after some thought and study you realize that even if you are wrong, the implications to the customer are minimal.
So, you can decide to make the change. After all, the risk of being wrong is low and the cost to implement is also low. In parallel, you might decide to find someone more experienced than you to check your work and see if they agree.
4. Confidence of Being Right is LOW, Risk of Being Wrong is HIGH (red quadrant)
You’ve done your analysis. You’ve used multiple tools and done your “Practical / Graphical / Analytical” analysis. But you are still not certain whether you’ve found something significant. Maybe you’re uncertain whether the tools you
used apply to this data set. Maybe you are not certain that the data was
collected properly. Or maybe you don't know if you have enough data.
You also see that the cost to implement is very high, or that the effect on the customer if you are wrong is high.
You Don’t Do It. You might return to this sometime if more data
is collected or if something else changes. Or, you might decide to find someone more experienced than you to check your work and see if they have suggestions on how to become more confident.
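For readers who like to see things spelled out, here is a minimal sketch in Python of the four-quadrant logic described above. It is purely illustrative: the function name, the boolean inputs, and the wording of the returned recommendations are my own, not part of the original chart.

```python
def recommended_action(confidence_high: bool, risk_high: bool) -> str:
    """Map the two gut-check questions to the four quadrants of the chart.

    confidence_high: do you feel strongly that your analysis is right?
    risk_high: would the customer be badly hurt if you turn out to be wrong?
    """
    if confidence_high and not risk_high:
        return "Do it (green quadrant)."
    if confidence_high and risk_high:
        return "Wait and collect more data first (blue quadrant)."
    if not confidence_high and not risk_high:
        return "Make the change, but have someone check your work (tan quadrant)."
    return "Don't do it; revisit if more data or new information arrives (red quadrant)."


# Example: high confidence in the result, but a costly impact if wrong
print(recommended_action(confidence_high=True, risk_high=True))
```

The point is simply that the two questions, answered honestly, already tell you which box you are in.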
Please comment below
if your experience is different or if you feel this is way off base.
* I suspect Doctors of Statistical Science also have these 'arghh' moments.