Statistics were first applied to quality control by Walter Shewhart, whose work formed the foundation of modern Six Sigma programs. Perhaps less well known than the concept of Six Sigma is that of Three Sigma. Here we compare 3 Sigma vs. 6 Sigma and identify the key differences between the two terms.
The science of statistics as it relates to quality control came from the mind of Walter Shewhart, who showed that error rates and exceptions can be measured empirically in terms of standard deviations from the mean.
Sigma levels 1 through 6 designate the maximum number of defects per million in a process or system and relate to the overall percentage of accuracy according to the following specifications.
- 1 Sigma: 690K errors per million (31% accuracy).
- 2 Sigma: 308K errors per million (69% accuracy).
- 3 Sigma: 66.8K errors per million (93.3% accuracy).
- 4 Sigma: 6.2K errors per million (99.4% accuracy).
- 5 Sigma: 233 errors per million (99.97% accuracy).
- 6 Sigma: 3.4 errors per million (99.99966% accuracy).
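The defect rates in the list above follow from the normal distribution once the conventional 1.5-sigma long-term shift is applied. A short Python sketch reproduces the table; the `dpmo` helper is illustrative, not part of any standard library:

```python
from math import erf, sqrt

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities for a given sigma level,
    assuming the conventional 1.5-sigma long-term shift and
    counting only the single dominant tail."""
    phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))  # standard normal CDF
    return (1 - phi(sigma_level - shift)) * 1_000_000

for level in range(1, 7):
    print(f"{level} Sigma: {dpmo(level):,.1f} defects per million")
```

Running the loop recovers the familiar figures, e.g. roughly 66,807 defects per million at Three Sigma and 3.4 at Six Sigma.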
Right away you can see that one way to differentiate 3 Sigma vs. 6 Sigma is the defect rate, but that is not the full meaning of the difference between the terms. Take a look at Six Sigma and then see how it compares to Three Sigma in common usage.
6 Sigma Overview
Although one of the key concepts of Six Sigma is striving for near perfection, the practical goal of a Six Sigma program is to continually improve accuracy as it approaches that nearly perfect target. As an enterprise's quality control matures, it passes through the lower, less accurate sigma levels. Six Sigma, however, is not just a measuring stick for performance, nor merely a technique for improving it: Six Sigma as we know it addresses corporate culture and seeks to reshape it into an environment optimized at every point for quality.
Six Sigma, therefore, is an attempt to unite all employees of a corporation into a single team that works together to produce high-quality goods and services.
3 Sigma Overview
One of the major differences between 3 Sigma vs. 6 Sigma is the tolerance for defects. Walter Shewhart considered Three Sigma the demarcation point dividing the ordinary from the extraordinary, the predictable from the unpredictable. Most companies would consider Three Sigma performance unacceptable.
Although the term can apply to defect rates, Three Sigma is more generally used to describe the predictability of outcomes and the sources of deviation from average values. It distinguishes between variation that can be traced to assignable (known) causes and variation due to chance (unknown) causes. Where too much of the error cannot be assigned, Three Sigma assumes that the system itself is to blame and calls for it to be thoroughly redesigned.
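The idea of flagging assignable causes can be sketched as a minimal Shewhart-style control check (the helper names and sample data below are illustrative assumptions, not an established API): limits are set three standard deviations either side of the mean of in-control baseline readings, and new readings outside those limits are treated as signals of assignable causes:

```python
from statistics import mean, stdev

def control_limits(samples):
    """Three-sigma control limits around the baseline mean."""
    m, s = mean(samples), stdev(samples)
    return m - 3 * s, m + 3 * s

# Baseline readings taken while the process was known to be in control.
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.2, 10.1]
lcl, ucl = control_limits(baseline)

# New readings: anything outside the limits suggests an assignable cause.
new_readings = [10.0, 10.3, 11.2, 9.9]
flagged = [x for x in new_readings if not (lcl <= x <= ucl)]
print(flagged)  # the 11.2 reading falls outside the limits
```

Points inside the limits are attributed to chance causes and left alone; redesign or investigation is reserved for the flagged readings.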
3 Sigma vs. 6 Sigma
Although on the surface the 3 Sigma vs. 6 Sigma comparison involves merely different tolerances for defects, the two are used quite differently in practice. Six Sigma deals with desired outcomes and the number of defects permitted. Three Sigma characterizes the factors that influence processes and their outcomes (products and services) and whether those factors are predictable. To summarize 3 Sigma vs. 6 Sigma another way: Three Sigma is used to determine the state of a process, while Six Sigma constitutes a methodology for setting and achieving quality targets.