Monday, March 8, 2021

The Myth of the Universal Problem Solving Method

Fellow Statistical Problem Solvers:

I wrote the following after reading the article posted on LinkedIn called "Planning the Abandonment of Six Sigma" by Dr. Gregory Watson. Dr. Watson posits that each of the current problem-solving models falls short of being universal in its own way, and that a truly universal continuous improvement model is needed, particularly to meet the needs of "Quality 4.0".

Okay. I have never been a "buzzword" person, and Quality 4.0, like its parent Industry 4.0, is to me just a fancy way of framing concepts without solving anything. All Quality (or Industry) 4.0 says is that things are changing, including how information is collected and evaluated, and that we need to learn to deal with and use those changes to thrive.

Here is a quote from Juran.com: "The core concept of Quality 4.0 is about aligning the practice of quality management with the emerging capabilities of Industry 4.0; to help drive organizations toward operational excellence."

Lots of pretty words that (to me) boil down to: "Hey! Things are changing. Let's use that and keep making things better!" Which is what you and I have been doing all along.

But back to Dr. Watson's article. First, to his credit, he does a great job outlining many of the top improvement models and bringing out their weaknesses. Also, the new IAQ model does make some of the currently unspoken steps clearer.

I won't dwell on his assertions that DMAIC does not engage in a 'strategy formulation process' or complete the improvement process controls. I strongly disagree. DMAIC's failures are management failures, not process failures - as could be said of any method.

To me, the issue is that any call for a new and improved continuous improvement model includes the assumption, especially in many managers' minds, that a continuous improvement engineer should not need to use their brain: if we are given a list of steps, and we follow those steps, an answer will pop out. Sorry, but any improvement effort requires a trained investigator to think, adjust, and use whatever tools are needed to reach the solution. This may even mean moving outside a specific model into uncharted territory.

As a (primarily) DMAIC practitioner, I have learned that DMAIC is a flexible process that can be pared down or expanded. And it must always include problem identification strategies going into a project and consider how to implement sustainable changes at the end. But this is also true of the other models.

I guess that having a one-size-fits-all model is a great goal. Don't misunderstand. We should all continue to explore "improvements to the improvement process." But remember that we often have different models because each is tuned to a specific need. Why would I ever consider using a financially based model to solve a shop-floor quality issue? But it's damn good for finances.

The new "IAQ" model proposes a  seven step process (Characterize, Investigate, Explore, Solve, Evaluate, Implement, Monitor). 

Think about those seven words and think about what you do every day as you find and solve problems. Sound familiar? To me, the IAQ model is just re-branding current concepts by changing the names. It's DMAIC. It's PDCA. It's the "8 Step Problem Solving Process" and all the others.

All improvement models, whether DMAIC, PDCA, 8-Step, Shainin, or others, are subsets of an unspoken "master" model. This master model could be described as "Find" problems, "Solve" problems, and "Fix" problems. Of the existing models, PDCA approaches this perceived "master" model most closely, and to be honest, that is what the IAQ model really does, and does well: it provides that master concept. I reject it replacing all other models, but I would embrace it being used, especially as a learning tool.
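Just to make that idea concrete, here is a rough sketch in Python of how the phases of DMAIC, PDCA, and the IAQ model might be laid against that "Find / Solve / Fix" master concept. The groupings are purely my own interpretation, not anyone's official crosswalk.

# My own rough, unofficial grouping of common improvement-model phases
# under a "Find / Solve / Fix" master model. Adjust to taste.
master_model = {
    "Find": {
        "DMAIC": ["Define", "Measure"],
        "PDCA": ["Plan"],
        "IAQ": ["Characterize", "Investigate"],
    },
    "Solve": {
        "DMAIC": ["Analyze", "Improve"],
        "PDCA": ["Do", "Check"],
        "IAQ": ["Explore", "Solve", "Evaluate"],
    },
    "Fix": {
        "DMAIC": ["Control"],
        "PDCA": ["Act"],
        "IAQ": ["Implement", "Monitor"],
    },
}

for stage, models in master_model.items():
    print(stage)
    for model, phases in models.items():
        print(f"  {model}: {', '.join(phases)}")

Laid out that way, it is hard to argue that any one of them is doing something fundamentally different from the others.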

Bottom line: as investigators of problems, we have a calling to learn as many tools as possible and to apply the appropriate tool for the particular job at hand. It may be a simple PDCA one time, a full DMAIC another, and a Shainin-based process the next. Or (and most important) it may be an amalgam of two or more models. Maybe I get partway into my PDCA work and realize that a DMAIC SIPOC would be a great way to frame my issue. Then, as I am "Defining" my problem, I pull in some Shainin "FACTUAL" (c) tools to help narrow down my problem definition, at which point I find I need to transition into a Design for Six Sigma process.

Continuous problem solving needs to be fluid, flexible, and adjusted to the situation. That's all. It takes work. It takes training. It takes lots of study on one's own time. It also takes mentoring and coaching. What it does not take is a universal method with checklists that anyone can follow.


Monday, November 2, 2020

My book "The Brass Man and Other Stories" received a very good review from Reedsy.com (Discovery). One phrase from the reviewer, Christopher Rhine, that I really liked was, "The namesake of the collection and entire second half is brilliantly commanded by “The Brass Man,” a multi-generational, Foundation-like septpartite story following a brass-enclosed, robot named Sini during the multiple crises of a small island town."

Yes! "Foundation-like" Woo Hoo. 


http://www.amazon.com/dp/B089M1FBNZ


Sunday, April 19, 2020

Shared Moment

She was his last customer of the day.

He watched her as she unloaded the
grocery cart. She watched him as he
scanned her items past the register.


When he handed her the receipt they
both felt a spark as their fingers
brushed.


Their eyes met and they laughed
realizing that it was a winter day,
with low humidity, and they were
both standing on carpet.

Friday, February 14, 2020

Evaluating the Risk and Reward of your Statistical Analysis


Most of us out there who perform statistical analyses to guide ourselves and our organizations in solving problems do not have advanced degrees in statistics. We've attended classes at university, we've been through varying levels of Six Sigma training, or we've done self-study.

But I think it is safe to say that one thing we have all learned is that statistically evaluating a set of data is complicated and rife with uncertainty. We choose statistical tools from among many possible tools, and numbers 'pop' out telling us whether our hypothesis is correct or not. From those results, we proceed to either take an action or not, depending on the statistics.

But how many of you finish with your analysis and wonder, what if my analysis is wrong? Did I have enough data?  Did I choose the proper statistical tool? Do I even know the proper statistical tool? Arghh!! (*)

Most of us in decision-making roles that require analyzing data to guide choices are cautious, risk-averse people. But we had our training. My ANOVA said that part A is better than part B, so why ask more questions?
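To put a concrete (and entirely made-up) face on that kind of result, here is the sort of analysis I mean, sketched in Python using scipy's one-way ANOVA. The measurements are invented purely for illustration.

# Made-up measurements for part A and part B, just to illustrate the
# kind of number that "pops out" and gets taken at face value.
from scipy.stats import f_oneway

part_a = [10.2, 10.4, 10.1, 10.3, 10.5]
part_b = [10.6, 10.8, 10.7, 10.9, 10.6]

f_stat, p_value = f_oneway(part_a, part_b)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A small p-value says the group means differ. It says nothing about
# whether the data were collected properly, whether ANOVA was even the
# right tool, or what it costs you if the conclusion is wrong.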

I suggest that after any statistical analysis, and before taking an action based on that analysis, we ask two more questions:

  •  What is my confidence that I am right?
  •  What is my risk of being wrong?


And I don't mean the statistical definitions of 'risk' and 'confidence'. I mean just sit back, take a broad overview of your data, where it came from, and how you evaluated it, and ask yourself how strongly you feel your results are true and what the impact on your customer would be if your analysis turns out to be wrong.

Then you can decide what to do.

But how?

I came up with a simple chart to help guide what action to take. I don’t know whether this is original or not, but here you go anyway.




Let’s look at each box in a little more detail.

1. Confidence of Being Right is HIGH, Risk of Being Wrong is LOW (green quadrant)

You've done your analysis. You've used multiple tools, done your "Practical / Graphical / Analytical" analysis, and you feel very good that you've found something significant and that the benefits are measurable.

You find that the cost to implement is acceptable and after some thought and study you realize that if you are wrong, the implications to the customer are minimal.  

So, you recommend to Do It.

2. Confidence of Being Right is HIGH, Risk of Being Wrong is HIGH (blue quadrant)

You've done your analysis. You've used multiple tools, done your "Practical / Graphical / Analytical" analysis, and you feel very good that you've found something significant and that the benefits are measurable.

However, you find that the cost to implement is very high or you find that the effect on the customer if you are wrong is high.

Maybe wait and collect some more data. Even if you are pretty certain about your results, more data might help convince management, your customer (and you).

3. Confidence of Being Right is LOW, Risk of Being Wrong is LOW (tan quadrant)

You've done your analyses. You've used multiple tools, done your "Practical / Graphical / Analytical" analysis. But you are still not certain whether you've found something significant, and you are not certain that the benefits are measurable.

However, you find that the cost to implement is acceptable and after some thought and study you realize that even if you are wrong, the implications to the customer are minimal.

So, you can decide to make the change. After all, the risk of being wrong is low and the cost to implement is also low. In parallel, you might decide to find someone more experienced than you to check your work and see if they agree.


4. Confidence of Being Right is LOW, Risk of Being Wrong is HIGH (red quadrant)

You've done your analyses. You've used multiple tools, done your "Practical / Graphical / Analytical" analysis. But you are still not certain whether you've found something significant. Maybe you're uncertain whether the tools you used apply to this data set. Maybe you are not certain the data was collected properly. Or maybe you don't know if you have enough data.

You also see that the cost to implement is very high or you find that the effect on the customer if you are wrong is high.

You Don't Do It. You might return to this sometime if more data is collected or if something else changes. Or, you might decide to find someone more experienced than you to check your work and see if they have suggestions on how to become more confident.
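If it helps to see the chart another way, here is a minimal sketch of the same four quadrants as a small Python helper. The recommendations come straight from the boxes above; calling something "high" or "low" remains the same gut-level judgment call as before.

def recommend_action(confidence_right: str, risk_wrong: str) -> str:
    """Suggest an action from the confidence/risk quadrant chart.

    Both arguments are "high" or "low" -- broad, gut-level judgments,
    not the formal statistical definitions of confidence and risk.
    """
    quadrants = {
        ("high", "low"): "Do it.",
        ("high", "high"): "Wait and collect more data to convince "
                          "management, your customer, and yourself.",
        ("low", "low"): "Make the change, but have someone more "
                        "experienced check your work in parallel.",
        ("low", "high"): "Don't do it. Revisit if more data is collected "
                         "or something else changes.",
    }
    return quadrants[(confidence_right.lower(), risk_wrong.lower())]

print(recommend_action("high", "low"))   # green quadrant
print(recommend_action("low", "high"))   # red quadrant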


Please comment below if your experience is different or if you feel this is way off base.


* I suspect Doctors of Statistical Science also have these 'argghh' moments

Monday, May 13, 2019

Evaluating the Validity of Data Reported in Social Media and the Press


I was going to write a blog post on this topic, but then I found this excellent article written by The Writing Center at the University of North Carolina. Yes, maybe I wimped out, but this is really a good summary of how to look at data critically.
   

However, just to reinforce a couple of points:

Don't trust data just because it's quoted in a post by one of your social media 'friends'. You may trust Bob, and Bob trusts Alex, who has always trusted Sanjay, who trusts Cindy, who trusts Cal, who has an agenda and is distorting the truth.

As the article points out, there are three ways to calculate the center of a data set (mean, median, mode). Often, those with an agenda choose the one that best helps make their point.
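A quick illustration, with numbers I made up, of just how far apart those three "centers" can land on a skewed data set:

from statistics import mean, median, mode

# Made-up, right-skewed data (think household incomes in $1000s).
incomes = [22, 25, 25, 28, 30, 32, 35, 40, 250]

print(f"Mean:   {mean(incomes):.1f}")  # about 54.1 -- dragged up by the 250
print(f"Median: {median(incomes)}")    # 30 -- the middle value
print(f"Mode:   {mode(incomes)}")      # 25 -- the most frequent value

Same data, three very different "typical" values, and which one gets quoted often depends on the story being told.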

Finally, I am always suspicious when a data set makes me say, "Yes! That's just what I thought. I knew I was right."

Am I falling for a biased study, because it matches my beliefs? Question yourself as much as you question others.

Monday, April 15, 2019

Outlier Identification

It's been a while since my last post. But I can assure you that this post is not an outlier... or is it?

Identifying outliers in a data set is one of the most difficult tasks we face as problem solvers, mostly because there are no definitive tests that absolutely identify whether a data point is unique or a natural, expected part of the data set.

Outlier identification reminds us that being a statistical practitioner requires more than a good handle on statistical tools and good knowledge of the process from which the data was collected. It requires the ability to take in all of this information and make the right decision. Well, at least not make the wrong decision.

The attached is a summary of some methods for looking at outliers. It is not a complete compendium on the subject. Please comment below if you have other methods for outlier identification that you have used, or if you feel my presentation needs to be corrected or adjusted.

Outlier Identification
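As a taste of the kind of method covered there, here is one common screening approach, the Tukey/IQR rule, sketched in Python with made-up data. Like every outlier test, it flags candidates for investigation; it does not make the decision for you.

from statistics import quantiles

# Made-up measurements; the 12.6 is the point we suspect.
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.1, 9.7, 10.0, 12.6]

q1, _, q3 = quantiles(data, n=4)       # first and third quartiles
iqr = q3 - q1
low_fence = q1 - 1.5 * iqr
high_fence = q3 + 1.5 * iqr

flagged = [x for x in data if x < low_fence or x > high_fence]
print(f"Fences: ({low_fence:.2f}, {high_fence:.2f})")
print(f"Potential outliers: {flagged}")   # candidates to investigate, not reject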