BM CH11

Cards (56)

  • define selection effect
    A threat to internal validity that occurs in an independent-groups design when the two independent variable groups have systematically different kinds of participants in them.
  • define 'one-group, pretest/posttest design'
    An experiment in which a researcher recruits one group of participants; measures them on a pretest; exposes them to a treatment, intervention, or change; and then measures them on a posttest. (has no comparison group).
  • define maturation threat
    A threat to internal validity that occurs when an observed change in an experimental group could have emerged more or less spontaneously over time. People adapt to changed environments over time. (e.g. children get better at walking and talking; plants grow taller —but not because of any outside intervention.)
  • define history threat
    A threat to internal validity that occurs when it is unclear whether a change in the treatment group is caused by the treatment itself or by an external or historical factor that affects most members of the group. Something specific has happened between the pretest and posttest. (e.g. using less heating because the weather has gotten warmer).
  • define regression threat
    A threat to internal validity related to regression to the mean, a phenomenon in which any extreme finding is likely to be closer to its own typical, or mean, level the next time it is measured (with or without the experimental treatment or intervention). An unusually good performance or outcome is likely to regress downward (toward its mean) the next time, and an unusually bad performance or outcome is likely to regress upward (toward its mean) the next time. Either extreme is explainable by an unusually lucky, or an unusually unlucky, combination of random events. Regression threats occur only when a group is measured twice, and only when the group has an extreme score at pretest.
  • define regression to the mean
    A phenomenon in which an extreme finding is likely to be closer to its own typical, or mean, level the next time it is measured, because the same combination of chance factors that made the finding extreme is not present the second time. When a group average (mean) is unusually extreme at Time 1, the next time that group is measured (Time 2), it is likely to be less extreme, closer to its typical or average performance. Extreme scores can arise when chance factors happen to be working in a group's favour, or against it (e.g. weather, a friend's moods, a particularly unlucky day, familiarity with the setting).
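Not from the textbook: a minimal Python simulation (invented numbers, chosen only for the demo) that shows why a group selected for an extreme Time 1 score lands closer to its true level at Time 2.

```python
import random

random.seed(1)

# Each observation = a stable "true" level plus chance factors.
def observe(true_level, chance_sd=10):
    return true_level + random.gauss(0, chance_sd)

# Simulate many people who all share the same true level, measured twice.
true_level = 50
time1 = [observe(true_level) for _ in range(10_000)]
time2 = [observe(true_level) for _ in range(10_000)]

# Select only the people who scored extremely high at Time 1 ...
extreme = [i for i, s in enumerate(time1) if s > 65]
mean_t1 = sum(time1[i] for i in extreme) / len(extreme)
mean_t2 = sum(time2[i] for i in extreme) / len(extreme)

# ... at Time 2 their average falls back toward the true level of 50,
# because the lucky chance factors are unlikely to repeat.
print(round(mean_t1, 1), round(mean_t2, 1))
```

The treatment never changes anything here; the drop from Time 1 to Time 2 is produced entirely by selecting on an extreme score.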
  • define attrition threat
    In a pretest/posttest, repeated-measures, or quasi-experimental study, a threat to internal validity that occurs when a systematic type of participant drops out of the study before it ends.
  • define testing threat
    In a repeated-measures experiment or quasi-experiment, a kind of order effect in which scores change over time just because participants have taken the test more than once; includes practice effects.
  • define instrumentation threat
    A threat to internal validity that occurs when a measuring instrument changes over time. (e.g. In observational research, the people who are coding behaviours are the measuring instrument, and over a period of time, they might change their standards for judging behaviour by becoming stricter or more lenient.)
  • define selection-history threat
    A threat to internal validity in which a historical or seasonal event systematically affects only the participants in the treatment group or only those in the comparison group, not both.

    Only for studies with at least two conditions
  • define selection-attrition threat
    A threat to internal validity in which participants are likely to drop out of either the treatment group or the comparison group, not both.

    Only for studies with at least two conditions
  • define observer bias
    A bias that occurs when observer expectations influence the interpretation of participant behaviours or the outcome of the study. (e.g. Freud in the Little Hans case).
  • define demand characteristic
    A cue that leads participants to guess a study's hypotheses or goals; a threat to internal validity. Also called experimental demand.
  • define double-blind study
    A study in which neither the participants nor the researchers who evaluate them know who is in the treatment group and who is in the comparison group.
  • define masked design
    A study design in which the observers are unaware of the experimental conditions to which participants have been assigned. Also called blind design. Participants know which group they are in, but observers do not.
  • define placebo effect
    A response or effect that occurs when people receiving an experimental treatment experience a change only because they believe they are receiving a valid treatment.
  • define 'double-blind placebo control study'
    A study that uses a treatment group and a placebo group and in which neither the researchers nor the participants know who is in which group.
  • define null effect
    A finding that an independent variable did not make a difference in the dependent variable; there is no significant covariance between the two. Also called null result. The 95% CI for the effect includes zero, such as a 95% CI of [-.21, .18].
  • define weak manipulations
    An experimental design problem in which the differences between conditions aren't dramatic enough to show a difference; there is not enough of a difference, increase, or decrease to matter. A related problem is using an operationalisation of the DV without enough sensitivity to detect the difference. (e.g. giving people $0.00, $0.25, or $1.00: a dollar doesn't seem like enough money to affect most people's mood, so it might be no surprise that the manipulation didn't have a strong effect.)
  • define ceiling effect
    An experimental design problem in which independent variable groups score almost the same on a dependent variable, such that all scores fall at the high end of their possible distribution. (e.g. a test is so easy that everyone scores very high). Can be caused by problems with either the IV or the DV.
  • define floor effect
    An experimental design problem in which independent variable groups score almost the same on a dependent variable, such that all scores fall at the low end of their possible distribution. Can be caused by problems with either the IV or the DV.
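Not from the textbook: a short Python sketch (the abilities, score caps, and noise level are invented for the demo) showing how a ceiling on a DV can hide a real group difference.

```python
import random

random.seed(0)

# Two groups whose true abilities differ by 10 points.
def score_on_test(ability, max_score):
    # Raw performance with some noise, capped at the test's maximum
    # (the "ceiling") and at zero (the "floor").
    raw = ability + random.gauss(0, 5)
    return min(max(raw, 0), max_score)

# On a hard enough test (max 100), the 10-point true difference shows up:
group_a = [score_on_test(70, max_score=100) for _ in range(1000)]
group_b = [score_on_test(80, max_score=100) for _ in range(1000)]
diff_hard = sum(group_b) / 1000 - sum(group_a) / 1000

# On an easy test (max 75), most of group B piles up at the ceiling,
# so the observed difference shrinks:
easy_a = [score_on_test(70, max_score=75) for _ in range(1000)]
easy_b = [score_on_test(80, max_score=75) for _ in range(1000)]
diff_easy = sum(easy_b) / 1000 - sum(easy_a) / 1000

print(round(diff_hard, 1), round(diff_easy, 1))
```

The same logic runs in reverse for a floor effect: an overly hard test squashes both groups' scores against the low end.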
  • define manipulation check
    In an experiment, an extra dependent variable researchers can include to determine how well a manipulation worked.
  • define noise
    Unsystematic variability among the members of a group in an experiment, which might be caused by situation noise, individual differences, or measurement error. Also called error variance or unsystematic variance. (In the salsa analogy, noise refers to the great number of other flavours in the two bowls; noisy within-groups variability can get in the way of detecting a true difference between groups.) The more unsystematic variability there is within each group, the more the scores in the two groups overlap with each other, and the greater the overlap, the less apparent the average difference.
  • define measurement error
    The degree to which the recorded measure for a participant on some variable differs from the true value of the variable for that participant. Measurement errors may be random, such that scores that are too high and too low cancel each other out; or they may be systematic, such that most scores are biased too high or too low. (e.g. a person who is 160 centimetres tall might be measured at 160.25 cm because of the angle of vision of the person using the meter stick, or they might be recorded as 159.75 cm because they slouched a bit.)
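Not from the textbook: a tiny Python simulation of the height example above (the error sizes are invented), illustrating why random errors average out while systematic errors do not.

```python
import random

random.seed(2)

true_height = 160.0  # cm

# Random error: readings scatter around the true value; over many
# measurements, too-high and too-low errors tend to cancel out.
random_readings = [true_height + random.gauss(0, 0.25)
                   for _ in range(10_000)]
mean_random = sum(random_readings) / len(random_readings)

# Systematic error: e.g. every participant slouches a bit, biasing
# every reading low; averaging does NOT remove this bias.
systematic_readings = [true_height - 0.25 + random.gauss(0, 0.25)
                       for _ in range(10_000)]
mean_systematic = sum(systematic_readings) / len(systematic_readings)

print(round(mean_random, 2), round(mean_systematic, 2))
```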
  • define situation noise
    Unrelated events or distractions in the external environment that create unsystematic variability within groups in an experiment. External distractions, usually from the environment (e.g. smells, lighting, noises).
  • define power
    An aspect of statistical validity. The likelihood that a study will show a statistically significant result when an independent variable truly has an effect in the population; the probability of not making a Type II error.
  • define Type II error
    A false negative.
    Failing to reject a null hypothesis that is actually false.
    Concluding there is no significant effect when there really is one.
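Not from the textbook: a short Python simulation (effect size, sample sizes, and the z cutoff of 1.96 are chosen for the demo) showing power as the proportion of studies that detect a true effect, and how small samples raise the Type II error rate.

```python
import random, math

random.seed(3)

def experiment(n, true_effect=0.5):
    """One study: n per group, true effect of 0.5 SD. Returns True if
    the observed difference is significant (two-tailed z test, SD known)."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treatment = [random.gauss(true_effect, 1) for _ in range(n)]
    diff = sum(treatment) / n - sum(control) / n
    se = math.sqrt(1 / n + 1 / n)  # standard error with SD = 1 per group
    return abs(diff / se) > 1.96

def power(n, reps=2000):
    # Fraction of simulated studies that detect the (real) effect.
    return sum(experiment(n) for _ in range(reps)) / reps

# Larger samples -> higher power -> fewer Type II errors (misses).
print(power(20), power(100))
```

Each non-significant result here is a Type II error, since the effect truly exists in every simulated study.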
  • What twelve possible internal validity threats do you need to interrogate a study for when an experiment finds that an independent variable affected a dependent variable?
    - design confounds
    - selection effects
    - order effects

    - maturation threats
    - history threats
    - regression threats
    - attrition threats
    - testing threats
    - instrumentation threats

    - observer bias
    - demand characteristics
    - placebo effects
  • Why might experimenters conduct double-blind studies, measure variables precisely, or put people in controlled environments?
    to eliminate internal validity threats and increase a study's power to avoid false null effects.
  • What six threats to internal validity are especially relevant to the one-group, pretest/posttest design?
    maturation, history, regression, attrition, testing, and instrumentation threats.
  • How can the six threats to internal validity relevant to the one-group, pretest/posttest design be eliminated?
    if an experimenter conducts the study using a comparison group.
  • What is the phenomenon known as spontaneous remission, and what is it classified as?
    When the symptoms of depression or other disorders disappear, for no known reason, with time.

    Spontaneous remission is a specific type of maturation.
  • how can we prevent maturation threats?
    by using a comparison group.
  • What must an external factor affect for it to be a history threat?
    The external factor must affect most people in the group in the same direction (systematically), not just a few people (unsystematically).
  • how can we prevent history threats?
    by using a comparison group.
  • how can we prevent regression threats?
    by using comparison groups and careful inspection of the pattern of results.
  • how can we prevent attrition threats?
    by also removing the pretest scores of participants who drop out from the pretest average.
  • how can we prevent testing threats?
    by abandoning the pretest altogether and using a posttest-only design, or by using alternative forms of the test for the two measurements.
  • how can we prevent instrumentation threats?
    by switching to a posttest-only design, or by taking steps to ensure that the pretest and posttest measures are equivalent.
  • What is the difference between an instrumentation threat and a testing threat?
    An instrumentation threat means the MEASURING INSTRUMENT has changed from Time 1 to Time 2, whereas a testing threat means the PARTICIPANTS change over time from having been tested before.