Reliability refers to the consistency and precision of a test.
Errors are factors other than what the test aims to measure that may influence performance or results.
The Reliability Coefficient is the ratio of true score variance to total observed score variance. It is the index of a test's reliability.
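In classical test theory this relationship is commonly written as follows (the symbols are the conventional notation, not taken from these notes):

```latex
r_{XX} = \frac{\sigma^2_T}{\sigma^2_X} = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E}
```

where \(\sigma^2_T\) is true score variance, \(\sigma^2_E\) is error variance, and \(\sigma^2_X\) is the total observed score variance.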
TYPES OF ERROR
Measurement Error
Random Error
Systematic Error
TYPES OF RELIABILITY
Test-Retest Reliability
Parallel or Alternate-Forms Reliability
Internal Consistency
Split-half Reliability
Interscorer Reliability
Test-Retest Reliability is an estimate of reliability obtained by correlating pairs of scores from the same people on two different administrations of the test.
Carryover effect happens when the test-retest interval is short, so that performance on the second administration is influenced by what test takers remembered or practiced from the first.
Practice effect is when a test taker's score on the second administration is higher than on the first because of the experience gained during the first administration.
Parallel or Alternate-Forms Reliability is established when at least two versions of the same test yield nearly the same results.
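Both test-retest and parallel/alternate-forms estimates come down to correlating two sets of scores from the same people. A minimal sketch in Python, using made-up scores for eight examinees (the data and variable names are purely illustrative):

```python
import numpy as np

# Hypothetical scores for the same 8 examinees on two administrations of a test
# (for alternate-forms reliability, the second array would hold Form B scores).
admin_1 = np.array([12, 15, 9, 20, 14, 18, 11, 16])
admin_2 = np.array([13, 14, 10, 19, 15, 17, 12, 18])

# The reliability estimate is the Pearson correlation between the two score sets.
r = np.corrcoef(admin_1, admin_2)[0, 1]
print(f"Reliability estimate (Pearson r): {r:.2f}")
```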
Split-half Reliability is obtained by correlating the scores on two equivalent halves of the same test administered once.
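A minimal sketch of the computation, assuming an odd/even item split and hypothetical half-test scores; the Spearman-Brown correction at the end is the standard adjustment for estimating full-length reliability from a half-test correlation.

```python
import numpy as np

# Hypothetical totals on the odd-numbered and even-numbered items
# for the same 8 examinees (one common way to split a test in half).
odd_half = np.array([6, 8, 5, 10, 7, 9, 6, 8])
even_half = np.array([7, 7, 5, 9, 8, 9, 5, 9])

# Correlate the two halves...
r_half = np.corrcoef(odd_half, even_half)[0, 1]

# ...then apply the Spearman-Brown formula, since the half-test correlation
# underestimates the reliability of the full-length test.
r_full = (2 * r_half) / (1 + r_half)
print(f"Half-test r = {r_half:.2f}, Spearman-Brown corrected = {r_full:.2f}")
```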
Internal Consistency, or Inter-item Reliability, is estimated from a single administration and reflects the degree to which the items of a test measure the same construct.
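Cronbach's alpha is the most widely used index of internal consistency. The sketch below computes it from a hypothetical item-response matrix; the data are invented for illustration.

```python
import numpy as np

# Hypothetical item responses: rows are examinees, columns are items (0 = wrong, 1 = right).
scores = np.array([
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 1, 1, 1],
    [1, 1, 1, 0, 1],
])

k = scores.shape[1]                          # number of items
item_vars = scores.var(axis=0, ddof=1)       # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of examinees' total scores

# Cronbach's alpha: higher values indicate that the items hang together
# as measures of the same construct.
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```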
Inter-scorer Reliability refers to the degree of agreement or consistency between two or more scorers on a particular measure.
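Two common ways to quantify this agreement are percent agreement and Cohen's kappa, which corrects agreement for chance; neither is named in these notes, so the sketch below is an illustrative choice with invented ratings.

```python
import numpy as np

# Hypothetical categorical ratings given by two scorers to the same 10 responses.
scorer_a = np.array([1, 2, 2, 0, 1, 1, 2, 0, 1, 2])
scorer_b = np.array([1, 2, 1, 0, 1, 1, 2, 0, 2, 2])

# Observed agreement: proportion of responses both scorers rated identically.
p_observed = np.mean(scorer_a == scorer_b)

# Chance agreement: sum over categories of the product of the two scorers'
# marginal proportions for that category.
categories = np.union1d(scorer_a, scorer_b)
p_chance = sum(np.mean(scorer_a == c) * np.mean(scorer_b == c) for c in categories)

# Cohen's kappa corrects observed agreement for agreement expected by chance.
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"Percent agreement = {p_observed:.2f}, Cohen's kappa = {kappa:.2f}")
```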
Difficulty is the extent to which a test item is hard to solve or answer correctly.
Discrimination is the degree to which a test item differentiates between people with higher and lower levels of the trait being measured.
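In classical item analysis, difficulty is usually indexed by the proportion of examinees answering an item correctly, and a simple discrimination index compares that proportion between the upper and lower halves of total scorers. The sketch below assumes 0/1-scored items and uses invented responses.

```python
import numpy as np

# Hypothetical 0/1 item responses: rows are examinees, columns are items.
responses = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 0, 0],
])

# Item difficulty: proportion answering each item correctly (higher p = easier item).
difficulty = responses.mean(axis=0)

# Simple discrimination index: proportion correct among the top half of total
# scorers minus proportion correct among the bottom half.
totals = responses.sum(axis=1)
order = np.argsort(totals)
half = len(order) // 2
lower, upper = order[:half], order[-half:]
discrimination = responses[upper].mean(axis=0) - responses[lower].mean(axis=0)

print("Difficulty (p):", np.round(difficulty, 2))
print("Discrimination (D):", np.round(discrimination, 2))
```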