Reliability


    • What is reliability
      refers to the consistency or replicability of research
    • What is internal reliability
      whether the procedure of a study has been standardised so that each participant experiences the same thing, meaning the procedure can be replicated
    • Why is internal reliability important
      it allows the researcher to replicate the study in the future, e.g. to examine further behaviours
    • How can internal reliability be increased
      • use a standardised procedure: all elements of the procedure are kept the same for all participants, e.g. same instructions, same measuring tools, so all participants have the same experience
      • use multiple measuring tools/measures of behaviour, as these should back each other up, allowing us to check the consistency of results
      • (note: in some instances it is difficult to replicate research, e.g. if the researcher isn't directly involved, if it's a natural setting, or if measuring tools aren't being used properly)
    • What are words to look out for for examples of internal reliability
      "every", "always", "all" - we can infer the procedure has been standardised
    • What is external reliability
      whether the results of a study can be replicated from one time to another - we check this to try and support the findings of a study
    • How to increase external reliability
      • use measures which generate quantitative data: findings are easier to compare and to check for consistency, and are less likely to be misinterpreted when being analysed, so there should be no inconsistency. different interpretations or a subjective view decrease consistency, so it's important to check Ps understand the Qs and that measures are fully operationalised
      • use large sample sizes: results are less vulnerable to anomalies and are more likely to show consistent trends and patterns
    • What is inter-rater reliability
      two or more individuals have a high agreement on a score, and therefore the measurement is reliable - at least 80% agreement between observers is needed for a measure to have inter-rater reliability. e.g. if there's more than one person observing the same behaviour/individual, or different observers watching different individuals, they should agree on the behaviour measured
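The 80% agreement check above can be sketched in Python. The observer codings here are hypothetical, purely for illustration:

```python
# Hypothetical codings of the same ten behaviours by two observers
# (illustrative data only).
observer_1 = ["hit", "push", "hit", "kick", "push", "hit", "kick", "hit", "push", "hit"]
observer_2 = ["hit", "push", "hit", "kick", "hit", "hit", "kick", "hit", "push", "hit"]

# Percentage of observations on which the two observers agree.
matches = sum(a == b for a, b in zip(observer_1, observer_2))
agreement = matches / len(observer_1) * 100

# 80%+ agreement = the measure has inter-rater reliability.
print(f"{agreement:.0f}% agreement")  # → 90% agreement
```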
    • How to establish inter-rater reliability
      researchers initially compare results and check whether they match. results from each researcher are compared using a correlation; if there is a strong positive correlation (0.8+/80%+) = inter-rater reliability achieved
    • How to increase inter-rater reliability
      • observe/measure behaviour in exactly the same way, in the same location, for the same amount of time
      • run a pilot test first so researchers are able to measure behaviour in exactly the same way
      • train researchers on using the measuring tools, ensuring they understand exactly how to use them and what they mean
    • What are the ways of checking reliability of research
      split-half method
      test-retest method
    • What is the split-half method
      if the results of two halves of a self-report measure (e.g. questionnaires or interviews) are similar, we can assume the test is reliable. e.g. putting repeat question/s in both halves of a questionnaire (internal reliability), possibly worded slightly differently, to check how consistently Ps answer the Qs/check for consistency in responses. if the answers are similar, you have split-half reliability
    • What is the test-retest method
      testing participants more than once to see if results are consistent, e.g. give the same participants the same measure (external reliability) on two or more separate occasions and compare the results. if the results are similar/the same = high test-retest reliability