Reliability & validity

  • Inter-rater reliability:
    Extent to which 2 or more observers agree (produce the same data). Measured by correlating the observations of 2 or more observers. If there's more than 80% agreement on observations, the data have inter-rater reliability (a percent-agreement sketch follows the card list).
  • How do you deal with issues of reliability in observations?
    Observers are trained in the use of a coding system/behaviour checklist, practise using it & discuss their observations. The investigator can then check the reliability of their observations.
  • What is internal reliability?
    Measure of the extent to which something is consistent within itself. E.g., all questions on an IQ test should be measuring the same thing.
  • What is external reliability?
    Measure of consistency over several different occasions. E.g., if an interviewer conducted an interview & then conducted the same one with the same interviewee a week later, the outcome should be the same.
  • What is the split-half method?
    Assesses internal reliability. 1 group of participants is given the test once. Participants' answers to the questions are divided in half & compared. E.g., compare all answers to odd-numbered questions with all answers to even-numbered questions. An individual's scores on both halves of the test should be very similar. The 2 scores can be compared with a correlation coefficient (see the correlation sketch after the card list).
  • What is the test-retest method?
    Assesses external reliability. A group of participants is given the test/questionnaire/interview once & then again some time later. Answers are compared & should be the same. If the tests produce scores, these can be compared with a correlation coefficient (same sketch as split-half).
  • How do you deal with reliability issues in self-reports?
    Low internal reliability: questions can be removed to see if the split-half test then returns a high reliability score. Low external reliability: poorly written questions may cause confusion & need to be rewritten. If an interview has low reliability, the interviewer may need to be retrained to be more consistent.
  • What is face validity?
    Concerns whether a self-report measure looks like it's measuring what the researcher intended to measure. E.g., whether the questions on a stress questionnaire relate to stress. Only requires an intuitive judgement.
  • What is content validity?
    Looking at the method of measurement & deciding whether it measures the intended content.
  • What is concurrent validity?
    Comparing the current method of measuring with a previously validated one on the same topic. Participants are given both measures at the same time & their scores are compared. Expect similar scores on both measures.
  • What is construct validity?
    Assesses the extent to which a test measures the target construct.
  • What is predictive validity?
    Concerned with whether scores on a test predict what you would expect them to predict. E.g., whether an aptitude test predicts later exam performance.
  • How do you deal with issues of validity?
    Low internal validity: items on the test may need to be revised, e.g. to produce a better match between scores on the new test & an established one. Low external validity: e.g., the sampling method may produce an unrepresentative sample & can be improved.
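
For the inter-rater reliability card, the "more than 80% agreement" check is simple arithmetic: the proportion of observation intervals on which both observers recorded the same code. A minimal Python sketch; the function name, observers & behaviour codes are hypothetical, only the 80% rule of thumb comes from the card.

```python
def percent_agreement(obs_a, obs_b):
    """Proportion of intervals on which two observers recorded the same code."""
    if len(obs_a) != len(obs_b):
        raise ValueError("Both observers must code the same number of intervals")
    matches = sum(a == b for a, b in zip(obs_a, obs_b))
    return matches / len(obs_a)

# Hypothetical codings from 2 observers using the same behaviour checklist.
observer_1 = ["play", "aggression", "play", "withdrawal", "play",
              "play", "aggression", "withdrawal", "play", "play"]
observer_2 = ["play", "aggression", "play", "play", "play",
              "play", "aggression", "withdrawal", "play", "play"]

agreement = percent_agreement(observer_1, observer_2)
print(f"Agreement: {agreement:.0%}")  # 90%
print("Inter-rater reliable" if agreement > 0.8 else "Observers need more training")
```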
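The split-half, test-retest & concurrent validity cards all end in the same step: comparing two sets of scores with a correlation coefficient, typically Pearson's r. A minimal sketch, assuming scipy is installed; the participant scores are invented for illustration.

```python
from scipy.stats import pearsonr

# Hypothetical scores for the same 8 participants. The same comparison serves:
#   split-half:          odd-numbered items vs even-numbered items (one sitting)
#   test-retest:         first sitting vs a later sitting
#   concurrent validity: new measure vs a previously validated measure
scores_a = [12, 15, 9, 18, 14, 11, 16, 13]
scores_b = [11, 16, 10, 17, 15, 10, 15, 14]

r, _ = pearsonr(scores_a, scores_b)
print(f"r = {r:.2f}")  # r close to +1 indicates high consistency between the two sets
```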