Cards (61)

  • What are the two broad categories of error associated with measurement?

    Random error and constant or systematic error
  • How do random errors affect experimental results?

    Random errors obscure the results
  • What is the effect of constant errors on experimental results?

    Constant errors bias the results
  • What are extraneous variables?

    Undesirable variables that add error to our experiments
  • What is the aim of research design regarding extraneous variables?

    To eliminate or at least control the influence of extraneous variables
  • What does random allocation or counterbalancing achieve in experiments?

    It results in an even addition of error variance across levels of the IV
  • What are confounding variables?

    Extraneous variables that disproportionately affect one level of the IV more than the other levels
  • How do confounding variables affect internal validity?

    They introduce a threat to the internal validity of our experiments
  • What can confounds result in measuring?

    Either an effect of the IV on the DV when it is not present or no effect of the IV on the DV when it is present
  • What should researchers ideally do regarding confounding variables?

    Eliminate these variables
  • What are some methods to control for confounding variables?

    Random allocation, matching, counterbalancing, control group
  • What are the sources of confounding variables categorized as?

    Selection, history, maturation, instrumentation
  • What does selection bias result from?

    The way participants are selected or assigned to different levels of the IV
  • What is history in the context of threats to internal validity?

    Uncontrolled events that take place between testing occasions
  • What does maturation refer to in threats to internal validity?

    Intrinsic changes in the characteristics of participants between different test occasions
  • What is instrumentation in the context of threats to internal validity?

    Changes in the sensitivity or reliability of measurement instruments during the course of the study
  • What is reactivity in terms of internal validity?

    Awareness that participants are being observed may alter their behavior
  • How can reactivity threaten internal validity?

    If participants are more influenced by reactivity at one level of the IV than at the other levels
  • What are the resulting artifacts of reactivity?

    Subject-related artifacts (e.g., demand characteristics) and experimenter-related artifacts (e.g., experimenter bias)
  • What are blind procedures used for?

    To counteract reactivity
  • What are the key concepts used to assess measurement quality in psychology?

    Precision, accuracy, reliability, and validity
  • What is precision in measurement?

    Exactness (consistency)
  • What is accuracy in measurement?

    Correctness (truthfulness)
  • What is reliability in measurement?

    The extent to which our measure would provide the same results under the same conditions
  • What is validity in measurement?

    The extent to which a measure captures the construct we are interested in
  • What is test-retest reliability?

    Measures fluctuations from one time to another
  • Why is test-retest reliability important?

    It is important for constructs which we expect to be stable
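
    The card above can be sketched in code. A minimal illustration with hypothetical scores (the numbers are invented, not from the deck): test-retest reliability is typically estimated as the Pearson correlation between scores obtained on two testing occasions.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for five participants tested on two occasions.
time1 = [10, 12, 9, 15, 11]
time2 = [11, 13, 9, 14, 12]
r = pearson_r(time1, time2)  # a high r suggests a stable measure
```

    A high correlation is what we expect for a construct assumed to be stable; large fluctuations between occasions would drive r down.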
  • What is inter-rater reliability?

    Measures fluctuations between observers
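
    For categorical ratings, fluctuations between observers are often quantified with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch with hypothetical ratings (the data and labels are invented):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum((ca[lab] / n) * (cb[lab] / n) for lab in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

# Two hypothetical observers coding ten behaviors as aggressive/neutral.
rater_a = ["agg", "agg", "neu", "neu", "agg", "neu", "neu", "agg", "neu", "neu"]
rater_b = ["agg", "agg", "neu", "agg", "agg", "neu", "neu", "agg", "neu", "neu"]
kappa = cohens_kappa(rater_a, rater_b)
```

    Kappa of 1 means perfect agreement; 0 means agreement no better than chance.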
  • What is parallel forms reliability?

    If we administer different versions of our measure to the same participants, would we obtain the same results?
  • What is internal consistency?

    Determines whether all items in a questionnaire are measuring the same construct
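
    Internal consistency is commonly summarized with Cronbach's alpha. A minimal sketch with hypothetical questionnaire data (the formula is the standard one; the numbers are invented):

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one list of participant scores per questionnaire item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-participant totals
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))

# Four hypothetical items answered by five participants (rows = items).
items = [[3, 4, 2, 5, 4],
         [3, 5, 2, 4, 4],
         [2, 4, 3, 5, 3],
         [3, 4, 2, 5, 5]]
alpha = cronbach_alpha(items)  # values near 1 indicate items co-vary strongly
```

    Alpha rises when items co-vary, i.e., when they appear to be measuring the same construct.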
  • What is split-half reliability?

    Questionnaire items are split into two halves from a single administration, and participants' scores on the two halves are correlated
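
    The split-half idea can be sketched with hypothetical half-scores: correlate the two halves, then apply the Spearman-Brown correction to estimate the reliability of the full-length test (each half is only half as long as the real measure, which depresses the raw correlation).

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sqrt(sum((a - mx) ** 2 for a in x)) *
                  sqrt(sum((b - my) ** 2 for b in y)))

# Hypothetical per-participant totals on odd- and even-numbered items.
odd_half = [8, 12, 7, 14, 10]
even_half = [9, 11, 8, 13, 10]
r_half = pearson_r(odd_half, even_half)
full_test = 2 * r_half / (1 + r_half)  # Spearman-Brown corrected estimate
```

    The corrected estimate is always at least as large as the half-test correlation.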
  • What is content validity?

    Does our test measure the construct fully?
  • What is face validity?

    Does it look like a good test?
  • What is criterion validity?

    Does the measure give results which are in agreement with other measures of the same thing?
  • What is construct validity?

    Does the test actually measure the theoretical construct it is intended to measure?
  • What is convergent validity?

    Correlates with tests of the same and related constructs
  • What is discriminant validity?

    Doesn’t correlate with tests of different or unrelated constructs
  • What are necessary and sufficient criteria in causation?

    A necessary cause must be present for the effect to occur; a sufficient cause is, on its own, enough to produce the effect
  • What does it mean if something is necessary but not sufficient?

    It is required but not enough on its own to achieve the outcome
  • What does it mean if something is sufficient but not necessary?

    It is enough to achieve the outcome but not required
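
    The necessary/sufficient distinction can be illustrated with a toy check over hypothetical observations, recorded as (condition present, outcome occurred) pairs: a condition is necessary if the outcome never occurs without it, and sufficient if the outcome always occurs when it is present.

```python
def is_necessary(obs):
    """True if the outcome never occurs without the condition."""
    return all(cond for cond, outcome in obs if outcome)

def is_sufficient(obs):
    """True if the outcome always occurs when the condition is present."""
    return all(outcome for cond, outcome in obs if cond)

# Oxygen-and-fire style example: oxygen present in rows 1-2, fire only in row 1.
obs = [(True, True), (True, False), (False, False)]
necessary = is_necessary(obs)    # no fire without oxygen
sufficient = is_sufficient(obs)  # oxygen alone did not always produce fire
```

    Here the condition comes out necessary but not sufficient, matching the card above.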