Investigator Effects, Reliability and Validity

Cards (14)

  • Investigator effects - When the researcher's behaviour/characteristics either consciously or unconsciously influence the outcomes of the research. (1) For example, the researcher's gender or tone of voice may influence how the participant responds to the self-report. (1)
  • Investigator effects can be controlled by:
    • Train experimenters to use a neutral tone of voice in the way they greet participants or ask questions
    • Ensure the researcher is the same gender as the participants
    • Provide a standardised script for the researchers to use so that they are asking questions or giving instructions in the same way
    • If the researcher is aware of the aims of the study, get another interviewer who is unaware of the aims to conduct the self-report (double-blind procedure)
  • Test-retest can be used to assess the reliability of self-reports
  • How to conduct a test-retest check:
    1. Participants are given a questionnaire or interview to complete (AO2)
    2. The same participants are then asked the same questions (AO2) after a time delay, e.g. two weeks
    3. Plot the data on a scattergraph to describe the correlation, then correlate the results from the two administrations of the questionnaire or interview using a stats test (see the Python sketch after this list)
    4. A strong positive correlation of +0.8 or above shows high reliability
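The correlation step can be illustrated with a short Python sketch. This is a minimal illustration only: the participant scores are invented, and Pearson's correlation is used as one example of an appropriate stats test (a real study might prefer Spearman's rho for ordinal questionnaire data).

```python
# Minimal sketch of a test-retest reliability check (illustrative data only)
from scipy.stats import pearsonr

# One total questionnaire score per participant, from each administration
# (the second administration takes place after a delay, e.g. two weeks)
first_scores  = [12, 18, 9, 22, 15, 17, 11, 20]
second_scores = [13, 17, 10, 21, 16, 18, 12, 19]

# Correlate the two sets of scores
r, p_value = pearsonr(first_scores, second_scores)
print(f"Test-retest correlation: r = {r:+.2f}")

# A strong positive correlation of +0.8 or above indicates high reliability
print("High reliability" if r >= 0.8 else "Reliability may be questionable")
```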
  • Operationalising can be used to improve reliability of self-report methods
  • Operationalising: Being specific and clear when defining questions in questionnaires or interviews, so that they can be easily measured (e.g. 'How many hours did you sleep last night?' rather than 'Do you sleep well?')
  • Operationalising is important because if questions are vague, then it would not be possible to repeat the research to check for consistent results
  • Operationalising increases reliability because, if the questions are operationalised, other researchers can repeat the research in the same way to check for consistent results
  • Improving reliability of self-report methods:
    • Questionnaires - Make sure all questions are clear and understandable - this can be checked with a pilot study
    • Interviews - using set questions will improve reliability
  • Face validity can be used to assess the validity of self-reports
  • How to conduct a face validity check:
    1. This is the quickest and most superficial way of assessing validity
    2. An independent psychologist in the same field looks at the questions in the questionnaire or interview (AO2) to see if they look like they measure what they intend to measure (AO2) at first sight/face value
    3. If the independent psychologist says 'yes', then the self-report method has face validity
  • Concurrent validity can also be used to assess validity of self-reports
  • How to conduct a concurrent validity check:
    1. Compare the results of the new questionnaire/interview (AO2) with the results from a similar pre-existing questionnaire/interview whose validity has already been established
    2. Correlate the two sets of results using an appropriate stats test; the correlation should exceed +0.8 (see the Python sketch after this list)
    3. If the results from both tests are similar (a strong positive correlation), we can assume the new test is valid
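As with test-retest, the correlation step can be sketched in Python. The scores below are invented, and Pearson's correlation is again just one possible choice of stats test.

```python
# Minimal sketch of a concurrent validity check (illustrative data only)
from scipy.stats import pearsonr

# One score per participant on the new measure and on an established,
# already-validated measure of the same construct
new_measure         = [34, 28, 41, 22, 37, 30, 25, 39]
established_measure = [36, 27, 43, 20, 38, 31, 26, 40]

r, p_value = pearsonr(new_measure, established_measure)
print(f"Concurrent validity correlation: r = {r:+.2f}")

# A correlation exceeding +0.8 suggests the new measure is valid
print("Concurrent validity supported" if r > 0.8 else "Not supported")
```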
  • To improve validity of self-reports:
    • Lie test - sets of nearly identical questions to test response consistency (“I never regret the things I say” might appear in the same test as “I’ve never said anything I later wished I could take back”) - see the sketch at the end of this section
    • The use of standardised procedures across all participants (reduces chances of researcher bias)
    • Allow participants to remain anonymous
    • Avoid leading questions to ensure participants are not encouraged to respond in a particular way
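The lie-test idea above can be sketched as a simple consistency check. Everything here is hypothetical: the item pair, the 1-5 agreement scale, and the cut-off for flagging inconsistency are all assumptions made for illustration.

```python
# Minimal sketch of a lie-test consistency check (hypothetical items/scale)

# Pairs of nearly identical items; honest respondents should give
# similar answers to both items in a pair
paired_items = [
    ("I never regret the things I say",
     "I've never said anything I later wished I could take back"),
]

# One participant's (made-up) responses on a 1-5 agreement scale
responses = {
    "I never regret the things I say": 5,
    "I've never said anything I later wished I could take back": 1,
}

# Flag pairs where the answers differ by a large margin
# (a cut-off of 3 is an arbitrary choice for this sketch)
for item_a, item_b in paired_items:
    gap = abs(responses[item_a] - responses[item_b])
    status = "inconsistent" if gap >= 3 else "consistent"
    print(f"Gap of {gap} between paired items -> {status}")
```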