Random error is due to chance variation and is not predictable
Systematic error is due to flaws in measurement approach
With systematic error, all measurements are skewed in the same direction, so the average is inaccurate
Random error effects differ for each participant/measurement occasion
Systematic error effects are consistent across all participants/measurement occasions in the same direction
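To make the distinction concrete, here is a minimal simulation sketch, assuming a hypothetical true score of 100, random error with SD 5, and a constant +3 bias: random error averages out, while systematic error shifts the average no matter how many measurements are taken.

```python
import numpy as np

rng = np.random.default_rng(0)
true_score = 100   # hypothetical true value being measured
n = 1000           # number of measurement occasions

# Random error: differs on each occasion and is centered on zero,
# so it averages out across many measurements.
random_only = true_score + rng.normal(0, 5, n)

# Systematic error: a constant bias in the same direction,
# so the average stays shifted no matter how many measurements we take.
biased = true_score + 3 + rng.normal(0, 5, n)

print(f"random error only: mean = {random_only.mean():.2f}")   # ~100
print(f"with systematic error: mean = {biased.mean():.2f}")    # ~103
```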
r (the correlation coefficient) is used to assess the reliability of a measure
A higher positive r indicates greater reliability
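For reference, Pearson's r between two sets of scores x and y (e.g., two tests, two halves, or two raters) is

$$ r = \frac{\sum_{i}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i}(x_i - \bar{x})^2}\;\sqrt{\sum_{i}(y_i - \bar{y})^2}} $$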
Test-Retest Reliability is assessed by measuring the same individuals at two points in time
In test-retest reliability, the correlation measures how similarly each individual performed on the two separate tests
Concerns with test-retest reliability are participant attrition, greater expense and time, test items becoming familiar to participants, and the potential that participants learn between testing occasions
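A minimal sketch of the computation, using hypothetical scores for eight participants measured at two time points:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores for the same eight participants at two time points.
time1 = np.array([12, 18, 15, 22, 9, 17, 20, 14])
time2 = np.array([13, 17, 16, 21, 10, 18, 19, 15])

# Test-retest reliability: correlate each person's time-1 score with
# their time-2 score; a high positive r means scores are stable over time.
r, p = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f}")
```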
Alternate forms reliability requires generating enough items to create two forms/tests and randomly dividing the questions into two sets
Concerns with alternate forms reliability are participant attrition, time and cost, learning new material since time 1, difficulty generating enough items for two tests, and no guarantee that the two sets are equivalent
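A sketch of the form-construction step, assuming a hypothetical 20-item pool split at random into two 10-item forms:

```python
import random

# Hypothetical pool of 20 item IDs, enough for two 10-item forms.
items = [f"Q{i}" for i in range(1, 21)]

random.seed(1)
random.shuffle(items)
form_a, form_b = items[:10], items[10:]  # randomly divided sets

print("Form A:", form_a)
print("Form B:", form_b)
# Reliability is then the correlation between each person's Form A
# total and Form B total (the same computation as test-retest).
```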
Internal Consistency Reliability is assessed at one point in time
Internal Consistency Reliability is the most commonly used measurement of reliability
Correlations between items should be high in Internal Consistency Reliability
Methods for measuring internal consistency reliability include split-half reliability, Cronbach's alpha, and item-total correlations
Split-half reliability is the correlation of individuals' total scores on one half of the test with their total scores on the other half of the test
Limitations of split-half reliability include the difficulty of ensuring the two halves are fully comparable to each other, and it does not take into account each individual item's role in the reliability of the measure
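A sketch of an odd/even split on a hypothetical 6-participant-by-6-item score matrix; the Spearman-Brown correction, a standard companion to split-half reliability, is then applied to estimate full-length reliability from the half-test correlation:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical responses: rows = 6 participants, columns = 6 items.
scores = np.array([
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [5, 4, 5, 4, 5, 5],
    [3, 3, 2, 3, 3, 2],
    [1, 2, 1, 1, 2, 1],
    [4, 4, 5, 5, 4, 4],
])

# Total each person's score on the odd items and on the even items.
half1 = scores[:, ::2].sum(axis=1)   # items 1, 3, 5
half2 = scores[:, 1::2].sum(axis=1)  # items 2, 4, 6

r_half, _ = pearsonr(half1, half2)

# Spearman-Brown correction: estimate full-length reliability
# from the correlation between the two half-length tests.
r_full = 2 * r_half / (1 + r_half)
print(f"split-half r = {r_half:.2f}, corrected = {r_full:.2f}")
```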
Cronbach's alpha is based on the correlation of each item with every other item (the average of all inter-item correlation coefficients)
Advantages of Cronbach's alpha include a more comprehensive measurement than split-half and the ability to indicate which particular items are lowering reliability
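A sketch of the computation, using the standard variance form of alpha, α = k/(k−1) × (1 − Σ item variances / variance of total scores), and reusing the hypothetical score matrix from the split-half sketch:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a participants-by-items score matrix."""
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Reusing the hypothetical 6-by-6 score matrix from the split-half sketch.
scores = np.array([
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [5, 4, 5, 4, 5, 5],
    [3, 3, 2, 3, 3, 2],
    [1, 2, 1, 1, 2, 1],
    [4, 4, 5, 5, 4, 4],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```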
Item-total correlations are a correlation of each item score with the total score based on all items
Advantages of item-total correlations include indicating the reliability of each individual question, being helpful if you need to increase reliability or modify a survey, and being usable in conjunction with Cronbach's alpha
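A sketch computing each item's correlation with the total score on the same hypothetical matrix; note that a "corrected" item-total correlation would exclude the item from the total first:

```python
import numpy as np
from scipy.stats import pearsonr

# Same hypothetical 6-by-6 score matrix as above.
scores = np.array([
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [5, 4, 5, 4, 5, 5],
    [3, 3, 2, 3, 3, 2],
    [1, 2, 1, 1, 2, 1],
    [4, 4, 5, 5, 4, 4],
])

total = scores.sum(axis=1)
for i in range(scores.shape[1]):
    # Correlate each item with the total score; a low r flags an item
    # that is hurting reliability and may need revision.
    r, _ = pearsonr(scores[:, i], total)
    print(f"item {i + 1}: item-total r = {r:.2f}")
```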
Interrater Reliability is the correlation between the observations of raters
A reliable measure in interrater reliability must show high agreement between raters or judges
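A sketch correlating hypothetical ratings of the same eight participants by two raters:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical ratings of the same eight participants by two raters.
rater_a = np.array([3, 5, 2, 4, 4, 1, 5, 3])
rater_b = np.array([3, 4, 2, 4, 5, 1, 5, 2])

# Interrater reliability: correlate the raters' observations; a high
# positive r means the measure does not depend on who does the rating.
r, _ = pearsonr(rater_a, rater_b)
print(f"interrater r = {r:.2f}")
```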
Increasing sample size reduces random error; a bigger sample cuts down measurement error
Measurement error is the basis of our understanding of reliability
Systematic error = consistently incorrect in the same direction
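The reason in one formula: the standard error of the mean shrinks with the square root of the sample size (σ is the standard deviation of the measurements), so quadrupling n halves the expected impact of random error on the average:

$$ \mathrm{SEM} = \frac{\sigma}{\sqrt{n}} $$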