reliability is a measure of consistency: if the results of a measure are not consistent, the measure is not reliable. reliability can be assessed in two ways: test-retest and inter-observer
test-retest reliability
when the same person or group is asked to complete the research measure again on a different occasion
after the measure has been completed on the two separate occasions, the two sets of scores are correlated; if the correlation coefficient is above +0.8, the research measure has strong reliability
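a minimal sketch of the correlation step, assuming two sets of hypothetical questionnaire scores from the same participants (the scores and the pearson_r helper are illustrative; the +0.8 rule of thumb is the one given in these notes):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# hypothetical scores from the same participants on two separate occasions
test_1 = [12, 18, 9, 22, 15, 17, 11, 20]
test_2 = [13, 17, 10, 21, 14, 18, 12, 19]

r = pearson_r(test_1, test_2)
print(f"test-retest coefficient: {r:.2f}")
# rule of thumb from the notes: above +0.8 counts as strong reliability
print("strong reliability" if r > 0.8 else "reliability in doubt")
```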
limitations of test-retest reliability
a limitation of assessing reliability using the test-retest method is that demand characteristics can occur, as participants may recall what they put down previously
it can also be time-consuming, as the second test could take place a year or more later
inter-observer reliability
the extent to which 2 or more observers record behaviour in a consistent way; the behavioural categories used in observations can be interpreted subjectively, so the separate observers' records are correlated and should show strong agreement
example of inter-observer reliability
Ainsworth's strange situation (1978) found 98% agreement across the observers on the behavioural categories observed, which makes the findings more meaningful
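a minimal sketch of how a percentage agreement figure like this might be calculated, assuming two observers independently code the same observation intervals (the category labels and tallies below are made up for illustration):

```python
# hypothetical category codes recorded by two observers for the same 10 intervals
observer_a = ["proximity", "exploration", "crying", "proximity", "exploration",
              "crying", "proximity", "exploration", "proximity", "crying"]
observer_b = ["proximity", "exploration", "crying", "proximity", "exploration",
              "crying", "proximity", "crying", "proximity", "crying"]

# percentage agreement: intervals where both observers recorded the same category
matches = sum(a == b for a, b in zip(observer_a, observer_b))
agreement = 100 * matches / len(observer_a)
print(f"inter-observer agreement: {agreement:.0f}%")  # 90% for this made-up data
```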
improving reliability - interviews
can be improved by reducing researcher bias, for example by having the same interviewer conduct all interviews or by switching to structured interviews
improving reliability - experiments
can be improved by standardising procedures so there is more control over extraneous variables and they do not become confounding variables
improving reliability - observations
can be improved by operationalising behavioural categories (making them measurable) to reduce subjectivity and increase objectivity
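a toy illustration of operationalisation: a vague category versus concrete, countable behaviours that two observers could tally from the same checklist (the specific behaviours are invented examples, not from these notes):

```python
# a vague behavioural category is open to subjective interpretation
vague_category = "aggression"

# operationalised categories: concrete, countable behaviours (invented examples)
tally = {
    "hits another child": 0,
    "pushes another child": 0,
    "shouts at another child": 0,
}

# an observer records one instance of hitting during an observation interval
tally["hits another child"] += 1
print(tally)
```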