Design of Experiments (DOE) – a tool for Six Sigma

An "experiment" tests a theory or guides the search for an answer to a question.  For example: "Why are some products defective?"

How Many Factors?  How Many Tests?

One-factor experiments test only one variable at a time.  They are much simpler than multi-factor experiments, but they miss interactions and may require many test cycles.

A simple one-factor experiment might be formalized in Example #1:

  • Theory: defects are due to an unrecognized problem in the raw materials from one of our three suppliers.
  • Test: Have one operator work at one machine.  Process several small batches, where each batch uses raw materials from a single supplier.  Ensure that other conditions (how long the machinery has been running today, etc.) are similar for each batch.  Track the number of defects by batch and supplier.
  • It should be clear whether the defects group by supplier, which would support the theory.  (A surprising result might be that defects are shared equally across suppliers but instead group by time of day, for example.  This would discredit the original theory, but lead to further theories and tests of, say, operator fatigue.)
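The bookkeeping for such a one-factor test can be sketched in a few lines. This is only an illustration: the batch sizes and defect counts below are invented, and the supplier names are placeholders.

```python
# Sketch of Example #1: tally defect rates by supplier from batch records.
# All numbers here are invented for illustration.
from collections import defaultdict

batches = [
    ("Supplier 1", 100, 2),  # (supplier, units produced, defects found)
    ("Supplier 2", 100, 3),
    ("Supplier 3", 100, 9),
    ("Supplier 1", 100, 1),
    ("Supplier 2", 100, 2),
    ("Supplier 3", 100, 8),
]

units = defaultdict(int)
defects = defaultdict(int)
for supplier, produced, bad in batches:
    units[supplier] += produced
    defects[supplier] += bad

for supplier in sorted(units):
    rate = defects[supplier] / units[supplier]
    print(f"{supplier}: {rate:.1%} defect rate")
```

With data shaped like this, a supplier whose rate stands well above the others supports the theory; roughly equal rates point the investigation elsewhere.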

One-factor design is least helpful if interactions cause the defects. For example: most people may consume from 1 to 3 standard drinks of alcohol without risking death.  We might observe that people who have had no alcohol for several days may take a half-dose, full dose, or double dose of a specific medication and survive with no ill effects.  However, people who had 2 or 3 drinks plus a double dose of the medication might die as a result.

Multi-factor experiments test several variables at once.  They are more complex than one-factor experiments.

Example #2:

  • Theory: Either an operator or a supplier is responsible for the majority of defects.

One problem with multi-factor experiments is "confounding": a design that cannot separate the effects of two variables.  From Example #2, consider design #2A:

  • Have operator Adam test materials from Supplier 1 and from Supplier 2.
  • Have operator Betty test materials from Supplier 3.
  • If the result is that Betty has the highest defect rate, it is unclear whether Betty or Supplier 3 is responsible.

Full-factorial orthogonal design avoids the confounding problem: test all levels of each variable against all levels of every other variable.  In Example #2B, both operators use materials from all three suppliers:

Test   Operator   Supplier   Defect Rate
 1     Adam       1          3%
 2     Betty      1          4%
 3     Adam       2          3%
 4     Betty      2          2%
 5     Adam       3          7%
 6     Betty      3          6%

It is clear that Supplier 3 is the major source of the defects. 
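Averaging the table's results along each factor makes the conclusion explicit. The sketch below uses the defect rates from Example #2B; the averaging approach is a common informal way to read main effects off a full-factorial table.

```python
# Average defect rates by operator and by supplier from the Example #2B
# results, to see which factor dominates.
from statistics import mean

results = [  # (operator, supplier, defect_rate)
    ("Adam", 1, 0.03), ("Betty", 1, 0.04),
    ("Adam", 2, 0.03), ("Betty", 2, 0.02),
    ("Adam", 3, 0.07), ("Betty", 3, 0.06),
]

by_operator = {op: mean(r for o, s, r in results if o == op)
               for op in ("Adam", "Betty")}
by_supplier = {sup: mean(r for o, s, r in results if s == sup)
               for sup in (1, 2, 3)}

print(by_operator)  # Adam and Betty are close
print(by_supplier)  # Supplier 3 stands well above the others
```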

An interaction would be indicated if the final row of the table were:

Test   Operator   Supplier   Defect Rate
 ~     ~          ~          ~
 6     Betty      3          15%

Here it seems that Betty has problems with Supplier 3's materials – an example of an interaction which cannot be found using a one-factor experiment.

Full factorial design tests all combinations of all expected factors.  This may be expensive or time-consuming.  For example, a full factorial 3-factor table, where factors a, b and c each have 2 values, would require 2 × 2 × 2 = 8 tests:

Test#   Fa   Fb   Fc   Result
  1     a1   b1   c1   R1
  2     a2   b1   c1   R2
  3     a1   b2   c1   R3
  4     a2   b2   c1   R4
  5     a1   b1   c2   R5
  6     a2   b1   c2   R6
  7     a1   b2   c2   R7
  8     a2   b2   c2   R8
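Enumerating a full-factorial design is mechanical: take the Cartesian product of all factor levels. A minimal sketch, using the factor names from the table above:

```python
# Enumerate a full-factorial design: every combination of every factor level.
# Factor names and levels mirror the 2x2x2 example table.
from itertools import product

factors = {
    "Fa": ["a1", "a2"],
    "Fb": ["b1", "b2"],
    "Fc": ["c1", "c2"],
}

runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(runs))  # 2 * 2 * 2 = 8 tests
for i, run in enumerate(runs, 1):
    print(i, run)
```

The run count multiplies with each added factor or level, which is exactly why full factorial designs become expensive quickly.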

 

Two-level fractional factorial design tests fewer combinations, usually to save cost or time.  For example, suppose the table above represents too much testing.  A fractional 3-factor table might be limited to 4 tests, where we give up information about the interaction between factors a and b:

T#   Fa   Fb   Fc            Result
1    a1   b1   c1            R1
     a2   b1   c1   Ignore   N/A
     a1   b2   c1   Ignore   N/A
2    a2   b2   c1            R2
3    a1   b1   c2            R3
     a2   b1   c2   Ignore   N/A
     a1   b2   c2   Ignore   N/A
4    a2   b2   c2            R4

In this example, there may be a known reason to avoid testing the (a1, b2) and (a2, b1) combinations.  When time or cost constrains an experiment, some combinations will inevitably go untested.

From One Experiment to the Next

The first experiment for an injection molding process might test three variables: temperature, pressure and cool-down time.  Assign a "low", "medium" and "high" value to each variable and run 3 × 3 × 3 = 27 tests.  Suppose the best results come from the 4 tests with medium or high temperature and medium or long cool-down time.

This would indicate that a second experiment should focus on temperature and cool-down time with a different set of medium/high values.
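The screening step can be sketched as follows. The defect rates are simulated with a made-up response formula purely to show how the best region of the design space might be identified; in a real experiment, each rate comes from an actual production run.

```python
# Sketch of the injection-molding screening experiment: 3 levels per factor,
# 27 runs. The response formula below is invented for illustration only.
from itertools import product

levels = ["low", "medium", "high"]
score = {"low": 0, "medium": 1, "high": 2}

def simulated_defect_rate(temp, pressure, cool):
    # Invented response: fewer defects at higher temperature and longer
    # cool-down; pressure has little effect in this toy model.
    return 0.10 - 0.02 * score[temp] - 0.02 * score[cool] + 0.005 * score[pressure]

runs = [(t, p, c, simulated_defect_rate(t, p, c))
        for t, p, c in product(levels, repeat=3)]

best = sorted(runs, key=lambda r: r[3])[:4]
for t, p, c, rate in best:
    print(f"temp={t}, pressure={p}, cool-down={c}: {rate:.3f}")
```

In this toy model the four best runs all have medium or high temperature and a long cool-down, which is the kind of pattern that tells the experimenter where to zoom in next.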

Analysis of Variance: ANOVA

The concept is to structure experiments so that the variables are controlled, then use Analysis of Variance (ANOVA) techniques to determine which variables contribute most to the results.  ANOVA supports multivariate experiments, so any interaction among variables can be observed.
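The core of one-way ANOVA is an F statistic: the ratio of variation BETWEEN groups to variation WITHIN groups. A minimal hand-computed sketch, using invented defect counts per supplier (statistical libraries provide this and the associated p-value; only the F ratio is shown here):

```python
# A minimal one-way ANOVA F statistic, computed by hand.
# The defect counts per supplier below are invented for illustration.
from statistics import mean

groups = {
    "Supplier 1": [3, 4, 2, 3],
    "Supplier 2": [2, 3, 3, 2],
    "Supplier 3": [7, 6, 8, 7],
}

all_values = [x for g in groups.values() for x in g]
grand_mean = mean(all_values)
k = len(groups)              # number of groups
n = len(all_values)          # total observations

# Between-group and within-group sums of squares
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups.values())
ss_within = sum((x - mean(g)) ** 2 for g in groups.values() for x in g)

f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F = {f_stat:.1f}")  # a large F suggests the group means really differ
```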

Summary

Problems with obvious causes should have been resolved already.

To solve the harder problems, or to make other fact-based improvements, it is vital to plan, perform and analyze tests in a systematic way.  Design of Experiments (DOE) provides this methodology.

By Oskar Olofsson
