reliability + validity

    Cards (9)

    • reliability:
      • how consistent the findings from the investigation/measuring device are
      • a measuring device is reliable if it produces consistent results every time it is used
    • reliability of observational techniques:
      • assessing - repeat the observation, e.g. rewatching a recording, + inter-rater reliability - comparing ratings with another observer
      • improving - behavioural categories have to be operationalised, observers need to be equally experienced
      • graph - observer A on the x-axis, observer B on the y-axis (scattergram; a correlation of +0.8 or above = good reliability - see the worked sketch after the cards)
    • self report techniques:
      • assessing - test-retest reliability, e.g. give the same test again and compare the two sets of scores; inter-interviewer reliability, e.g. the extent to which different interviewers agree
      • improving - reduce ambiguity, e.g. rewrite unclear questions; use the same interviewer each time + good training to reduce leading questions
    • experiments:
      • the DV is often measured using a rating scale/behavioural categories - the method used to measure it should be consistent
      • improving - standardisation: instructions + procedures kept the same each time; the experiment is repeated and the results compared
    • validity:
      • the extent to which the investigation produces a legitimate (genuine) result
      • internal - the study measures what it is meant to measure
      • external - findings can be generalised beyond research setting
    • factors that affect internal validity:
      • investigator effects
      • variables (IV/DV) not operationalised
      • confounding variables
      • social desirability bias
      • demand characteristics
    • ecological validity:
      • to do with how the DV is measured (a form of external validity)
      • example: Godden + Baddeley's field experiment (divers learning word lists) - low mundane realism + participants aware they're being assessed = low ecological validity
    • assessing validity:
      • face validity - the extent to which test items look like they measure what the test is meant to measure
      • concurrent validity - comparing an existing, established test with the one you want to use, e.g. give an existing questionnaire and the new one to the same people and check the scores agree (same correlation check as the sketch after the cards)
    • improving validity:
      • poor face validity - rewrite the questions; poor concurrent validity - remove irrelevant questions
      • demand characteristics + investigator effects - double-blind procedure
      • experiment - control group + standardisation
      • questionnaires - anonymity, so there is less social desirability bias
      • observations - covert + behavioural categories operationalised
      • triangulation - number of different sources used as evidence
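
The cards above treat a scattergram correlation of about +0.8 or higher as the benchmark for good reliability. Below is a minimal Python sketch, not part of the original cards, that computes Pearson's r between two observers' tallies; the same check can be applied to test-retest scores or to concurrent validity (new test against an established one). The tallies and the pearson_r helper are made-up for illustration.

    from math import sqrt

    def pearson_r(x, y):
        """Pearson correlation coefficient between two equal-length lists of scores."""
        n = len(x)
        mean_x, mean_y = sum(x) / n, sum(y) / n
        cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
        sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
        sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
        return cov / (sd_x * sd_y)

    # hypothetical tallies for six behavioural categories
    observer_a = [12, 9, 15, 7, 11, 14]
    observer_b = [11, 10, 14, 6, 12, 15]

    r = pearson_r(observer_a, observer_b)
    print(f"inter-rater correlation r = {r:.2f}")  # about +0.94 here; +0.8 or above = good reliability

Plotting observer_a against observer_b would give the scattergram the cards describe; the correlation coefficient just puts a number on how tightly the points cluster.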