Stat 148 Experimental Designs Chapters 1&2

Cards (150)

  • An experiment is vital to the scientific method.
  • An experiment is a test or series of runs in which purposeful changes are made to the input variables of a process or system so that changes in the output response can be observed and identified.
  • A phenomenon is attributed largely to natural behavior, while a process is a combination of operations, tangible elements, methods, people, and other resources that transforms inputs into an output whose attributes serve as the response variable.
  • Factors in experiments can be controllable or uncontrollable.
  • Objectives of an experiment may include determining which variables most influence the response, setting influential variables to achieve an optimal response, minimizing variability in the response, and minimizing the effects of uncontrollable variables.
  • Two strategies of experimentation: the best-guess approach and the one-factor-at-a-time (OFAT) approach.
  • The OFAT approach varies each factor over its range while holding the others constant, then selects the combination of factor levels that yields the best result.
  • Interaction among factors is not considered in the OFAT approach.
  • Factorial experiment is an experimental strategy where factors are varied together, not one-at-a-time.
  • Factorial experiments make efficient use of data and allow for the calculation of effects of individual factors and interactions.
  • Factorial experiment allows for studying the joint effect of two factors.
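As a concrete sketch of these ideas, the snippet below (with made-up response values) enumerates a 2^2 factorial design and computes the two main effects and their interaction — the quantities an OFAT experiment cannot estimate:

```python
from itertools import product

# Hypothetical 2^2 factorial: factors A and B, each at coded levels -1 and +1.
# One illustrative response value per treatment combination.
levels = [-1, 1]
runs = list(product(levels, levels))          # all four treatment combinations
response = {(-1, -1): 10.0, (1, -1): 15.0,
            (-1, 1): 12.0, (1, 1): 21.0}

def main_effect(index):
    """Average response at the factor's high level minus at its low level."""
    hi = sum(response[r] for r in runs if r[index] == 1) / 2
    lo = sum(response[r] for r in runs if r[index] == -1) / 2
    return hi - lo

effect_A = main_effect(0)
effect_B = main_effect(1)

# Interaction: half the difference between the effect of A at the high
# level of B and the effect of A at the low level of B.
interaction_AB = ((response[(1, 1)] - response[(-1, 1)])
                  - (response[(1, -1)] - response[(-1, -1)])) / 2
```

With these numbers the design yields a main effect of 7 for A, 4 for B, and an AB interaction of 2.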
  • Analysis of variance (ANOVA) is needed to explain the variability in the response variable.
  • Factorial experiments can be extended to 2^k factorial experiments and fractional factorial experiments.
  • ANOVA decomposes total variability into model variability and residual variability.
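The decomposition can be illustrated with a small one-way ANOVA computation in plain Python (the treatment groups below are hypothetical):

```python
# One-way ANOVA sum-of-squares decomposition (illustrative data).
groups = {
    "T1": [12.0, 14.0, 11.0],
    "T2": [18.0, 17.0, 19.0],
    "T3": [15.0, 16.0, 14.0],
}

all_obs = [y for ys in groups.values() for y in ys]
grand_mean = sum(all_obs) / len(all_obs)

# Total variability: squared deviations about the grand mean.
ss_total = sum((y - grand_mean) ** 2 for y in all_obs)

# Model (between-treatment) variability: group means about the grand mean.
ss_model = sum(len(ys) * (sum(ys) / len(ys) - grand_mean) ** 2
               for ys in groups.values())

# Residual (within-treatment) variability: observations about group means.
ss_resid = sum((y - sum(ys) / len(ys)) ** 2
               for ys in groups.values() for y in ys)

# The identity ss_total == ss_model + ss_resid always holds.
```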
  • Factors in experiments are independent variables that can be manipulated, each with two or more levels.
  • Experimental conditions, i.e., combinations of factor levels, are called treatments or treatment combinations.
  • Experimental units are the objects subjected to specific experimental conditions; when humans are involved, they are called subjects.
  • Four basic principles in experimental design:
    • Randomization
    • Replication
    • Blocking
    • Factorial principle
  • Randomization:
    • Randomization of treatment: randomly assigning experimental units to treatments
    • Run order randomization: randomly determining the order of performing individual runs
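Both kinds of randomization can be sketched in a few lines of Python; the unit names, treatment labels, and group sizes below are hypothetical:

```python
import random

rng = random.Random(42)  # seeded only so the sketch is reproducible

# Hypothetical setup: 12 experimental units, 3 treatments, balanced (4 each).
units = [f"unit{i}" for i in range(1, 13)]
treatments = ["A", "B", "C"]

# Randomization of treatment: shuffle the units, then assign in equal blocks.
shuffled = units[:]
rng.shuffle(shuffled)
assignment = {u: treatments[i // 4] for i, u in enumerate(shuffled)}

# Run-order randomization: perform the 12 runs in a random order.
run_order = list(assignment.items())
rng.shuffle(run_order)
```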
  • Replication:
    • Performing independent runs in the experiment across all treatment combinations
    • Each replicate must undergo randomization processes
    • Replication is a principal tool for dealing with random error
    • Average result for replicated runs is generally closer to the true value than a single observation
    • Allows for statistical testing of differences between treatments and obtaining more precise estimates of parameters
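A quick simulation (with an assumed true value and error standard deviation) illustrates why averages of replicated runs are closer to the true value: the spread of 16-run averages is roughly one-fourth that of single observations:

```python
import random
import statistics

rng = random.Random(0)            # seeded for a reproducible sketch
true_value, sigma = 50.0, 4.0     # assumed true response and error sd

def run_experiment(n_replicates):
    """Average of n independent runs, each subject to random error."""
    runs = [rng.gauss(true_value, sigma) for _ in range(n_replicates)]
    return sum(runs) / n_replicates

# Spread of single observations vs. spread of 16-replicate averages.
singles = [run_experiment(1) for _ in range(2000)]
averaged = [run_experiment(16) for _ in range(2000)]
sd_single = statistics.stdev(singles)   # close to sigma
sd_avg = statistics.stdev(averaged)     # close to sigma / sqrt(16)
```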
  • Balanced design:
    • Sample sizes across all levels are equal
    • Unbalanced design occurs when sample sizes are not equal
  • Blocking:
    • Design technique used to reduce or eliminate the variability transmitted from nuisance factors, i.e., controllable factors that may influence the response but are not of direct interest
    • Experimental units that are similar are grouped together to form a block
  • Factorial principle:
    • Factorial designs are efficient for studying the effects of two or more variables
    • Each replication runs all possible treatment combinations
    • Interaction effects can be investigated in a factorial design, unlike in OFAT designs
  • In experimental design, identifying covariates and adjusting responses before analysis is important, especially with quantitative variables
  • Analysis of covariance is similar to using auxiliary information in survey sampling
  • In an experiment, it is crucial to identify the design factors, determine their ranges, and choose the specific levels of the treatments
  • The Ishikawa diagram, also known as the fishbone diagram, is used in the brainstorming stage to identify factors affecting a response variable
  • Choosing the appropriate experimental design involves considering sample size, run order, design factors like blocking and randomization restrictions, and selecting an empirical model.
  • Empirical models in experiments include first-order or main effects model, interaction model, and second-order model with quadratic effects
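For two coded factors x₁ and x₂, these three empirical models can be written as:

```latex
% First-order (main-effects) model
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \varepsilon

% Interaction model
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{12} x_1 x_2 + \varepsilon

% Second-order model (adds quadratic effects)
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{12} x_1 x_2
    + \beta_{11} x_1^2 + \beta_{22} x_2^2 + \varepsilon
```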
  • Fixed-effects models limit conclusions to the treatment levels actually used, while random-effects models treat the levels as a random sample and allow conclusions to be extended to the population of levels
  • Mixed models are used in multifactor experiments, where some factors are considered fixed effects and others random effects
  • Monitoring the experiment is crucial to ensure adherence to the plan and avoid compromising experimental validity.
  • Experimental error is variability not explained by known influences, including measurement error, analysis error, sampling error, random error, and other factors affecting the response
  • Statistical methods like ANOVA, regression models, and hypothesis tests are vital for analyzing experimental results
  • Conclusions and recommendations must be drawn from the experiment results, followed by validation through follow-up runs and confirmation testing
  • Experimentation is sequential, and all experiments are designed experiments; careful planning and execution are required for reliable results.
  • Statistical methods provide guidelines on the reliability and validity of results, allowing measurement of error and confidence in conclusions
  • In simple comparative experiments, inferences about differences in means are made using randomized designs
  • Power analysis allows us to determine the sample size required to detect an effect of a given size with a given degree of confidence
  • Four essential quantities in power analysis are:
    • Sample size, n
    • Effect size, δ or h
    • Level of significance, α = P(Type I error)
    • Power = 1 – β = 1 – P(Type II error)
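These four quantities can be tied together in a sketch of a sample-size calculation for comparing two means (normal approximation to the two-sample t-test; the function name and defaults below are illustrative):

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_means(delta, sigma, alpha=0.05, power=0.8):
    """Approximate per-group n for a two-sided two-sample test that
    detects a mean difference `delta` when the common sd is `sigma`.
    Normal approximation; the exact t-based answer is slightly larger."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided critical value for alpha
    z_beta = z(power)            # quantile giving the desired power
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return ceil(n)
```

For a standardized effect size δ/σ = 1 at α = 0.05 and power 0.8, this gives 16 units per group, matching the common 16/d² rule of thumb; halving the effect size quadruples the required sample size.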