Chapter 11

    Cards (22)

    • maturation threat - a change in behaviour that emerges spontaneously over time - e.g., people adapt to changed environments
      • preventing - comparison group
    • internal validity threats in one-group pretest/posttest designs
      1. maturation threats
      2. history threats
      3. regression threats
      4. attrition threats
      5. testing threats
      6. instrumentation threats
    • history threats - 'historical' or external factors that systematically affect most members of the treatment group at the same time as the treatment itself - unclear whether the treatment or the external event caused the change
      • preventing - comparison group
    • regression threat - statistical concept - regression to the mean.
      when a group's average is unusually extreme at time 1, the next time the group is measured it is likely to be less extreme - closer to average performance
      works at both extremes + only occurs when a group is measured twice (see the simulation sketch after this card)
      • preventing - comparison groups + careful inspection of the pattern of results
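A minimal simulation sketch of regression to the mean (assuming Python with NumPy; the numbers and variable names are illustrative, not from the chapter). A group selected for extreme scores at time 1 scores closer to the mean at time 2 even though nothing was done to it:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000
true_ability = rng.normal(50, 10, n)             # stable true score per person
score_t1 = true_ability + rng.normal(0, 10, n)   # time 1 = true score + random error
score_t2 = true_ability + rng.normal(0, 10, n)   # time 2 = fresh, independent random error

# Select the group that looked extreme (lowest 5%) at time 1
extreme = score_t1 < np.percentile(score_t1, 5)

print("extreme group, time 1:", round(score_t1[extreme].mean(), 1))
print("extreme group, time 2:", round(score_t2[extreme].mean(), 1))  # closer to 50
print("overall mean:         ", round(score_t1.mean(), 1))
```

Without a comparison group, this drift back toward the average could be mistaken for a treatment effect.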
    • Attrition threats - when a systematic type of participant drops out of the study before it ends
      • preventing - remove the dropped-out participants' pretest scores and check whether they differed systematically from those who stayed
    • testing threats - a change in participants' scores as a result of taking the same test more than once (e.g., practice or fatigue)
      • preventing - abandon the pretest + use alternative forms of the test for the two measurements + comparison group
    • observer bias - when researchers' expectations influence their interpretation of results - threat to internal + construct validity
    • demand characteristics - when participants change their behaviour to suit what they think researchers are looking for
    • avoiding demand characteristics
      • double-blind study
      • masked design
    • placebo effect - when people receiving a treatment improve only because they believe they're receiving an effective treatment
      • preventing - double-blind placebo control study
    • null effects - finding that an IV didn't make a difference in the DV - no significant covariance between the two
    • ceiling effect - scores squeezed together at the high end of the scale (see the sketch after this card)
    • floor effect - scores cluster at the low end of the scale
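A small sketch (same assumptions: Python with NumPy; the cutoff of 100 and group means are hypothetical) showing how a ceiling on the measurement scale squeezes scores together at the high end and hides a real group difference; a floor effect works the same way at the low end:

```python
import numpy as np

rng = np.random.default_rng(1)

# True scores: the treatment group really is 10 points higher
control = rng.normal(80, 8, 500)
treatment = rng.normal(90, 8, 500)

# An easy test caps observed scores at 100 (the ceiling)
control_obs = np.clip(control, 0, 100)
treatment_obs = np.clip(treatment, 0, 100)

print("true difference:     ", round(treatment.mean() - control.mean(), 1))
print("observed difference: ", round(treatment_obs.mean() - control_obs.mean(), 1))
```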
    • manipulation checks - a separate DV included to verify that the manipulation of the IV actually worked
    • noise - too much unsystematic variability within each group
    • measurement error - a human or instrument factor that can randomly inflate or deflate a person's true score on the DV
    • DV score = participant's true score ± random error of measurement
    • improving measurement error (see the sketch after this list):
      • use reliable, precise tools
      • measure more instances and average them
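A brief sketch of the cards above (assuming Python with NumPy; the standard deviations are illustrative): each observed DV score is the true score plus or minus random error, and averaging more measured instances per person pulls observed scores back toward true scores:

```python
import numpy as np

rng = np.random.default_rng(2)

n_people = 1_000
true_score = rng.normal(100, 15, n_people)

def observed(n_instances):
    # DV score = true score +/- random error; averaging instances shrinks the error
    errors = rng.normal(0, 10, size=(n_people, n_instances))
    return (true_score[:, None] + errors).mean(axis=1)

for k in (1, 5, 20):
    r = np.corrcoef(true_score, observed(k))[0, 1]
    print(f"{k:2d} instance(s): correlation with true score = {r:.2f}")
```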
    • reducing noise from individual differences
      • change the design to within-groups
      • add more participants
    • situation noise - external distractions
      • preventing - carefully control the surroundings
    • power - the likelihood that a study will detect an effect of the IV when the IV really has an effect (see the simulation sketch below)
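A hedged simulation sketch of power (assuming Python with NumPy and SciPy; the effect size of 0.5, alpha of .05, and group sizes are illustrative): power here is the proportion of simulated two-group studies that detect a real IV effect, and it rises with more participants:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def simulated_power(n_per_group, effect=0.5, sims=2_000):
    """Share of simulated two-group studies that reach p < .05 when the effect is real."""
    hits = 0
    for _ in range(sims):
        control = rng.normal(0, 1, n_per_group)
        treatment = rng.normal(effect, 1, n_per_group)
        _, p = stats.ttest_ind(treatment, control)
        hits += p < 0.05
    return hits / sims

for n in (20, 60, 120):
    print(f"n = {n:3d} per group: power ~ {simulated_power(n):.2f}")
```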
    • null effects should be reported transparently