Reliability

Cards (13)

  • What is reliability
    refers to the consistency or replicability of research
  • What is internal reliability
    the extent to which a procedure can be replicated
  • Why is internal reliability important
    it allows the researcher to replicate their research in the future, to examine further behaviours and improve our knowledge of behaviour
  • How can internal reliability be increased
    • use a standardised procedure: all elements of the procedure are kept the same for all participants (e.g. same instructions, same measuring tools, so all participants have the same experience), meaning other researchers can replicate it exactly
    • using multiple measures of the same behaviour - allows us to check for consistency
    • there are some instances when it is difficult to replicate research, e.g. if the researcher isn't directly involved, if it's a natural setting, or if measuring tools aren't being used properly
  • What words should we look out for as signs of internal reliability
    "every", "always", "all" - we can infer the procedure has been standardised
  • What is external reliability
    the extent to which the results can be replicated
  • How to increase external reliability
    • use measures which generate quantitative data = so the study can be repeated and the findings compared to check for consistency. numerical data is unlikely to be misinterpreted when being analysed, so there should be no inconsistency; different interpretations or a subjective view would decrease consistency
    • large sample sizes = so data is less vulnerable to anomalies, allowing us to check for consistent trends and patterns
  • What is inter-rater reliability
    the levels of agreement between 2 or more researchers when they are observing the same behaviour in the same way
    • when two or more individuals have a high agreement on a score, the measurement is reliable - need at least 80% agreement between observers for measure to have inter-rater reliability
  • How to establish inter-rater reliability
    researchers initially compare their results to check that they match. the results from each researcher are compared using a correlation; a strong positive correlation (+0.8 or higher) means inter-rater reliability has been achieved
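    The correlation check above can be sketched in a few lines of Python. The tallies below are hypothetical example data (two observers recording the same ten behaviour categories), and the +0.8 cut-off is the convention stated in the card, not a fixed rule:

    ```python
    from statistics import correlation

    # Hypothetical tallies: how often each of 10 behaviour categories
    # was recorded by two observers watching the same session.
    rater_a = [4, 7, 2, 9, 5, 3, 8, 6, 1, 4]
    rater_b = [5, 7, 2, 8, 5, 4, 8, 6, 2, 4]

    r = correlation(rater_a, rater_b)  # Pearson's r between the two raters

    # Inter-rater reliability is accepted when the correlation is
    # strongly positive (+0.8 or higher, by convention).
    print(f"r = {r:.2f}, reliable: {r >= 0.8}")
    ```

    If the two sets of tallies disagreed more, r would fall below the cut-off and the behavioural categories or observer training would need revisiting.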
  • How to increase inter-rater reliability
    • operationalise behaviour (e.g. behavioural categories) to ensure both researchers know what they're looking for
    • conduct a pilot study (trial run of study beforehand with a small sample) to amend/improve behavioural categories for real observation, so researchers are able to measure behaviour in exact same way
    • observe/measure behaviour in exact same way, in same location for same amount of time
    • train researchers on using measuring tools + ensure they understand exactly how to use them + what they mean
  • What are the ways of checking reliability of research
    split-half method - used to check internal
    test-retest method - used to check external
  • What is the split-half method
    the two halves of a self-report measure (e.g. a questionnaire) are compared for similarity. this may involve putting repeated Qs in both halves of a questionnaire to check how reliable (consistent) Ps are when answering. if the results of the two halves are similar, we can assume the test is reliable/has split-half reliability
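    As a worked sketch of the split-half method: score each half of the questionnaire separately per participant, then correlate the two sets of half-scores. The scores below are made-up example data (8 participants, 10 items rated 1–5, split into odd- and even-numbered items):

    ```python
    from statistics import correlation

    # Hypothetical questionnaire data: one row per participant,
    # one column per item, rated 1-5.
    scores = [
        [4, 5, 4, 4, 5, 4, 3, 4, 5, 4],
        [2, 1, 2, 2, 1, 2, 2, 1, 1, 2],
        [3, 3, 4, 3, 3, 3, 4, 3, 3, 4],
        [5, 5, 5, 4, 5, 5, 5, 4, 5, 5],
        [1, 2, 1, 1, 2, 1, 1, 2, 1, 1],
        [4, 4, 3, 4, 4, 3, 4, 4, 3, 4],
        [2, 3, 2, 2, 3, 2, 3, 2, 2, 3],
        [3, 4, 3, 3, 4, 3, 3, 4, 3, 3],
    ]

    half_a = [sum(p[0::2]) for p in scores]  # odd-numbered items
    half_b = [sum(p[1::2]) for p in scores]  # even-numbered items

    r = correlation(half_a, half_b)
    print(f"split-half r = {r:.2f}")  # a high positive r = the halves agree
    ```

    A high positive correlation between the halves suggests Ps answer consistently across the whole measure, i.e. split-half reliability.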
  • What is the test-retest method
    Ps are tested twice or more and the results are checked to see if they are the same/consistent
    e.g. give the same participants the same measure on two or more separate occasions and compare results. if the results are similar/the same = high test-retest reliability