The Role of Statistics in Process Improvement Projects: Lean Six Sigma and Statistics

Statistics in Lean Six Sigma Projects

The use of statistical tools in the Lean Six Sigma (LSS) process often causes resistance to adopting the methodology. This resistance typically comes from the fact that statistics is not a familiar tool set for management or for new LSS practitioners. The following briefly looks at the use of statistics in process improvement projects.

First of all, we need to understand what a statistic is, and what it is not. In the field of mathematics, we are looking for truth. We establish a theorem and develop proofs to verify that the relationship is factual and always behaves in a predictable, set manner. In other words, we develop statements of equality.

As examples of mathematical equations we see statements of the form F = ma or E = mc². A statistic, on the other hand, is simply a guess. We then set out to analyze our data to understand how good our guess is.

We see statistical statements all the time. Commonly in elections we hear, “Candidate A leads Candidate B 55 percent to 45 percent with a 3-point margin of error.” Or said another way, we are guessing that A leads B 55 to 45. But how good is our guess? Good to within 3 percentage points.
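As a rough illustration of where a figure like that 3-point margin comes from, the sketch below applies the standard approximation z·√(p(1−p)/n) at 95 percent confidence. The sample size is a hypothetical value chosen only to make the arithmetic concrete; it is not from any particular poll.

```python
import math

# Hypothetical poll result: 55 percent support for Candidate A.
p = 0.55    # observed (guessed) proportion
n = 1000    # assumed sample size -- purely illustrative
z = 1.96    # z-value for roughly 95 percent confidence

# Margin of error for a sample proportion.
margin_of_error = z * math.sqrt(p * (1 - p) / n)

print(f"Guess: {p:.0%}, good to within about {margin_of_error:.1%}")
# With these assumptions the margin works out to roughly 3 percentage points.
```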

Another statistical guess says that “at 95 percent confidence there is no difference between item A and item B.” Again, our guess is that there is no difference. And how good is our guess? 95 percent. That is, 95 out of 100 times when we compared the two, we couldn’t see a difference. The degree of confidence that is applied to the decision is, however, an issue for the LSS team and management. Is 95 percent needed, or do we only need to be 90 percent confident in our decision?
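A claim like “no difference between item A and item B” is typically backed by a significance test, and the confidence level the team chooses sets the cutoff for that test. The sketch below uses a two-sample t-test on invented measurement values (not data from this article) to show how the same result can pass at one confidence level and fail at another.

```python
from scipy import stats

# Invented measurements of the same characteristic on items A and B,
# chosen only to illustrate a borderline comparison.
item_a = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]
item_b = [10.3, 10.0, 10.4, 10.1, 10.0, 10.5, 10.2, 10.2]

t_stat, p_value = stats.ttest_ind(item_a, item_b)

for confidence in (0.95, 0.90):
    alpha = 1 - confidence
    verdict = "difference detected" if p_value < alpha else "no difference detected"
    print(f"{confidence:.0%} confidence (alpha = {alpha:.2f}): {verdict}, p = {p_value:.3f}")

# For this invented data the p-value lands between 0.05 and 0.10, so the
# call flips depending on which confidence level management requires.
```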

Use Caution when Making Decisions Based on Statistics

Early in my career, I was working with a company that had a high-speed manufacturing process. The R&D engineers were working on a major process change and were worried about the effect of the change on a quality defect. They did their testing and came to me for verification of the results. The methodology they used was correct, as were the calculations. They had assumed (guessed) that the change would not increase defects, and they set the risk (alpha) of abandoning a good process change at 5 percent. The calculations from the data indicated that the actual risk was only marginally better than their limit, by 2.5 percent of that limit, and they accepted the change and put it into production. The results were disastrous. The defects increased from 0.08 percent to over 20 percent. The costs to recover ran into the millions of dollars.

So how could we have such a failure if we did the statistics correctly? One reason was that the depth of the evaluation was very shallow; they needed to go further than just the one test. It’s also important to understand what statistics really tells us. Statistical testing for the sake of testing is worthless. It is in understanding what we gather from the analysis that we find value.

Statistical evaluation is only a tool to improve our decisions based on the limited data we can gather. Making decisions on marginal results against subjective limits provides little more certainty than throwing darts at a dartboard. In the case above, making a critical decision based on a 2.5 percent margin on risk as if it were gospel was a mistake. The thing to remember is that all we are trying to do is fix a problem, and applying statistical reasoning in DMAIC projects will aid us in making a good decision, but it is up to the users, not the math itself, to interpret what the analysis means.
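One way to act on this, sketched below as a purely hypothetical rule of thumb rather than a standard method, is to flag any result whose p-value lands close to the chosen alpha as “marginal” and a prompt to gather more data, instead of treating the accept/reject line as gospel.

```python
def decision_with_margin(p_value, alpha=0.05, band=0.5):
    """Classify a test result, treating p-values near alpha as 'marginal'.

    `band` is the fraction of alpha on either side of the cutoff that is
    considered too close to call -- a hypothetical rule of thumb, not a
    standard statistical procedure.
    """
    if p_value <= alpha * (1 - band):
        return "reject the no-difference guess -- clear signal"
    if p_value >= alpha * (1 + band):
        return "keep the no-difference guess -- clearly inside the limit"
    return "marginal -- gather more data before betting production on it"

for p in (0.004, 0.049, 0.052, 0.200):
    print(f"p = {p:.3f}: {decision_with_margin(p)}")
```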