inter-rater reliability

Cards (3)

  • inter-observer reliability is a way of testing the reliability of observational studies: it checks whether different observers record the same behaviour in the same way
  • For example, if your study required observers to assess participants' anxiety levels, you would expect different observers to grade the same behaviour in the same way. If one observer rated a participant's behaviour a 3 for anxiety and another observer rated the exact same behaviour an 8, the results would be unreliable
  • inter-rater reliability can be assessed mathematically by checking the correlation between observers' scores (see the sketch after this list). inter-observer reliability can be improved by setting clearly defined behavioural categories
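
A minimal sketch of that correlation check, assuming two observers each give numeric anxiety ratings to the same participants; the ratings and the 0.8 cut-off below are illustrative assumptions, not part of the cards:

```python
# Sketch: inter-rater reliability as the correlation between two observers' scores.
import numpy as np

# Anxiety ratings (1-10) given by two observers to the same six participants
# (illustrative values, not real data)
observer_a = np.array([3, 5, 7, 2, 8, 4])
observer_b = np.array([4, 5, 6, 2, 7, 4])

# Pearson correlation between the two sets of scores;
# values close to +1 indicate the observers grade behaviour consistently
r = np.corrcoef(observer_a, observer_b)[0, 1]
print(f"inter-rater correlation: r = {r:.2f}")

# A common rule of thumb (an assumption here, not stated on the card)
# is to treat r >= 0.8 as acceptable agreement between observers
print("reliable" if r >= 0.8 else "unreliable")
```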