assessing and improving reliability

    Cards (18)

    • reliability is a measure of consistency: if researchers replicate their study exactly, a reliable measure will produce similar results
    • what measurement tools do psychologists use to collect data from participants?
      • questionnaires & interviews
      • experimental conditions
      • observations
    • questionnaires collect data by using set lists of closed/open questions to measure opinions, personality traits and past behaviours.
    • observations collect data using clearly defined lists of operationalised behavioural categories to classify observed behaviours
    • External reliability is the extent to which a measure is consistent when repeated. E.g. the results of a study are consistent with an exact replication at a different time and/or with different participants
    • internal reliability is the extent to which different parts of a measure are consistent with each other, e.g. if a 100-question test is divided into two 50-question tests, the results for each set of questions from the same participant would be similar.
    • internal reliability is assessed using the split half method.
    • split half method
      • the researcher splits the test into 2 parts
      • participants complete both parts
      • test the strength of the correlation between the two parts of the test
      • a strong correlation indicates internal reliability
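    • the steps above can be sketched in Python; the answer data below is made up purely for illustration

```python
# Split-half reliability sketch: correlate participants' scores on two
# halves of the same test. All data here is hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Each row: one participant's answers (1 = correct) on a 10-item test.
answers = [
    [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 0, 1, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1, 0, 1, 1, 1],
]

# Split each participant's test into odd- and even-numbered items
# and total each half separately.
odd_scores = [sum(row[0::2]) for row in answers]
even_scores = [sum(row[1::2]) for row in answers]

r = pearson(odd_scores, even_scores)
print(f"split-half correlation: {r:.2f}")
# a strong positive correlation indicates good internal reliability
```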
    • external reliability is assessed using the test-retest method
    • test-retest method
      • The same participants are given the same questionnaire at separate time intervals (e.g. with a 6-month gap in between testing sessions)
      • If the same result is found per participant then external reliability is established
    • another way of assessing reliability is inter-observer/inter-rater reliability.
    • inter-observer reliability is the degree to which two or more observers agree on the same observation
      • before the observation begins, all observers must agree on the behaviour categories and how they are going to record them
      • the observation is conducted separately by each observer to avoid conformity
      • the observers compare their independent tally charts and then test the correlation between the two sets
      • a strong positive correlation between the sets shows good inter-observer reliability and that the behaviour categories are reliable
    • assessing reliability using a test of correlation
      The strength of correlation is assessed using a test of correlation such as Spearman's rho. A correlation coefficient of 0.8 or higher is accepted as a strong correlation
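    • Spearman's rho can be computed by ranking each data set and applying rho = 1 - 6Σd²/(n(n²-1)); the observer tallies below are hypothetical

```python
# Spearman's rho sketch, applied to two observers' tally charts.
# The formula 1 - 6*sum(d^2) / (n*(n^2 - 1)) assumes no tied ranks;
# d is the difference in ranks for each pair of values.

def spearman_rho(xs, ys):
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical tallies from two observers rating the same 6 behaviours.
observer_a = [12, 7, 19, 3, 15, 9]
observer_b = [11, 8, 20, 2, 16, 7]

rho = spearman_rho(observer_a, observer_b)
print(f"Spearman's rho: {rho:.2f}")
# 0.8 or higher is accepted as a strong correlation
print("strong correlation" if rho >= 0.8 else "weak correlation")
```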
    • improving reliability involves changes that psychologists can make to the design of their studies
    • improving observations
      • Make sure that behavioural categories are properly operationalised and that they are measurable and self-evident (not ambiguous).
      • Some observers may need more training using the behavioural categories.
      • Pilot studies can identify poorly defined behavioural categories
    • improving interviews
      • use the same interviewer each time
      • if this isn't possible, structured interviews should be used rather than unstructured interviews. The interview will include a script the interviewer can follow, ensuring each participant has a similar experience
    • improving questionnaires
      • use closed questions to reduce the range of possible responses
      • if there is an established questionnaire that tests for what you need to measure, use it instead of making a new one
    • improving experiments
      • use standardised procedures for each participant so they all have the same experience, for example by using a script
      • use established tests as measures e.g. a certified IQ test, rather than creating a new test