Internal validity
An experiment is internally valid when the effects on the dependent variable are due to the independent variable. An internally valid experiment is free of confounding.
Manipulation check
Evaluates how well the experimenter manipulated the experimental situation. A manipulation check determines whether subjects followed directions and were appropriately affected by our treatments.
Pact of ignorance
Subjects suspect their data will be discarded if they admit to guessing the experimental hypothesis, so they don't volunteer this information to the experimenter. Experimenters, reluctant to test additional subjects, may take subject reports at face value.
Overcoming the pact of ignorance
1. Debrief subjects after the experiment and convey that you want to know if they guessed the hypothesis
2. Provide incentives for subjects to report any guesses about the hypothesis
Mistakes that could produce threats to internal validity
Selecting the wrong statistical test
Improperly using a statistical test, e.g., violating its assumptions (see the sketch after this list)
Drawing the wrong conclusions from the test
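As a minimal sketch of guarding against the second mistake (the data and variable names here are hypothetical, and SciPy is assumed to be available): check a t test's equal-variance assumption before running it, and fall back to Welch's correction when the assumption fails.

    import numpy as np
    from scipy import stats

    # Hypothetical scores for two independent treatment groups
    group_a = np.array([12.1, 14.3, 11.8, 15.0, 13.2, 12.7, 14.8, 13.9])
    group_b = np.array([15.4, 16.1, 14.9, 17.2, 15.8, 16.5, 14.7, 16.9])

    # Levene's test checks the equal-variance assumption of the pooled t test
    _, levene_p = stats.levene(group_a, group_b)
    equal_var = levene_p > 0.05

    # Use Welch's t test (equal_var=False) when the variances look unequal
    t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=equal_var)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f} (pooled variances: {equal_var})")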
External validity
An experiment is externally valid when its findings can be extended to other situations and populations.
Requirements for an externally valid study
The experiment must be internally valid
The experimental findings must be replicable
Generalizing across subjects
The findings can be extended to a larger group than our sample. Generalizing across subjects is critical to the external validity and usefulness of experimental findings.
Problems preventing generalization across subjects
The samples used in psychological research are often biased and may not represent the larger population
The samples may not even represent college sophomores, since we depend heavily on volunteers
Generalizing from procedures to concepts
Experimental variables like anger may have multiple operational definitions. When we generalize from our experimental results, we move from discussing our specific operational definition of anger to discussing the concept of anger itself.
It is dangerous to generalize from a single experiment's operational definition of anger because we cannot be sure of the reliability or validity of our procedures.
Research significance
A study achieves research significance when its findings clarify or extend knowledge gained from previous studies and raise implications for broader theoretical issues.
We should question novel findings when they contradict prior findings that have been successfully replicated. The burden of proof is on the experimenter who claims novel findings to explain this discrepancy.
We want to generalize beyond the laboratory to increase the external validity of our findings.
Since extraneous variables are uncontrolled in real-world settings and operate in complex combinations, they can modify the influence of our individual variables.
Trade-off between laboratory and field experiments
The trade-off is between the laboratory's more precise control of extraneous variables and the field experiment's greater realism and external validity.
Hanson (1980) found that laboratory studies were more likely than field studies to report a positive correlation between reported attitudes and behavior.
We can't confirm external validity until additional studies are completed in field settings.
Increasing and verifying external validity
1. Aggregation
2. Multivariate designs
3. Nonreactive measurements
4. Field experiments
5. Naturalistic observation
Aggregation
The grouping together and averaging of data to increase external validity. Combining the results of experiments with different subjects and methodologies increases the generality and external validity of our findings.
Meta-analysis
Uses statistical analysis to combine and quantify data from many comparable experiments to calculate an average effect size.
Aggregation establishes external validity by combining the results of experiments performed using different subjects, stimuli and/or situations, trials or occasions, and measures.
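As a minimal sketch of the averaging step, assuming each study's result has already been converted to a standardized effect size (the numbers below are hypothetical), a fixed-effect meta-analysis weights each study by the inverse of its variance:

    import numpy as np

    # Hypothetical standardized effect sizes (Cohen's d) from four comparable studies
    effect_sizes = np.array([0.42, 0.31, 0.55, 0.18])
    variances = np.array([0.040, 0.025, 0.060, 0.030])

    # Fixed-effect model: weight each study by the inverse of its variance
    weights = 1.0 / variances
    mean_effect = np.sum(weights * effect_sizes) / np.sum(weights)
    standard_error = np.sqrt(1.0 / np.sum(weights))

    print(f"Average effect size: {mean_effect:.2f} (SE = {standard_error:.2f})")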
Multivariate design
A design that measures multiple dependent variables (DVs). For example, a study of repetitive strain places a computer keyboard at different distances from the subject (the independent variable, or IV) and measures the effect on three different muscle groups (3 DVs).
Advantage of multivariate designs
Multivariate designs allow us to study the effect of an independent variable on combinations of dependent variables. These designs better simulate the complexity of the real world than univariate designs and provide more detailed information.
Analysis of multivariate experiments
We analyze multivariate experiments with a multivariate analysis of variance (MANOVA).
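A minimal sketch of that analysis with the statsmodels library, using randomly generated stand-ins for the keyboard-distance study described above (the column names and data are hypothetical):

    import numpy as np
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    # Hypothetical data: keyboard distance (IV) and strain scores
    # for three muscle groups (DVs), one row per subject
    rng = np.random.default_rng(seed=1)
    n = 30
    df = pd.DataFrame({
        "distance": rng.choice(["near", "mid", "far"], size=n),
        "forearm": rng.normal(50, 5, size=n),
        "shoulder": rng.normal(40, 5, size=n),
        "neck": rng.normal(30, 5, size=n),
    })

    # MANOVA tests the effect of the IV on the set of DVs considered jointly
    manova = MANOVA.from_formula("forearm + shoulder + neck ~ distance", data=df)
    print(manova.mv_test())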
Handling a nonsignificant outcome
1. Accept the outcome; don't reframe your result as "almost significant"
2. Examine the experimental procedures for design flaws
3. If the design appears sound, decide whether the hypothesis was reasonable
Possible causes of a nonsignificant outcome
Confounding
Extraneous variables that increase within-subjects variability
Weak manipulation of the IV
Inconsistent or flawed procedures
Ceiling and floor effects
Insufficient statistical power (see the power calculation sketch after this list)
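For the last cause, a minimal sketch of a prospective power calculation with statsmodels, assuming an independent-groups design and a hypothetical medium expected effect size:

    from statsmodels.stats.power import TTestIndPower

    # How many subjects per group are needed to detect a medium effect
    # (Cohen's d = 0.5) with alpha = .05 and 80% power?
    n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
    print(f"Required subjects per group: {n_per_group:.0f}")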
Handling a faulty hypothesis
1. If previous studies supported the hypothesis and ours did not, look for differences in experimental design or sample
2. If there was no previous support and our design and execution were good, we may have to revise or discard our hypothesis