Psychometrist
Tends to use tests merely to obtain data; the task is often perceived as emphasizing the clerical and technical aspects of testing
Psychometrist
Data oriented; the end product is often a series of trait or ability descriptions
Psychological Assessment
Attempts to evaluate an individual in a problem situation so that the information derived from the assessment can somehow help with the problem
Psychological Assessment
Tests are only one method of gathering data, and the test scores are not end products, but merely means of generating hypotheses
Psychological Assessment
Places data in a wide perspective, with its focus being problem-solving and decision-making
Intelligence tests
Appeared to be objective, which would reduce possible interviewer bias
Test battery
If a single test could produce accurate descriptions of an ability or trait, administering a series of tests could create a total picture of the person
Individual differences and trait psychology
They assume that one of the best ways to describe the differences among individuals is to measure their strengths and weaknesses with respect to various traits
Psychological assessment has come to include a wide variety of activities beyond merely the administration and interpretation of traditional tests
One clear change in testing practices has been a relative decrease in the use and status of projective techniques
Criticisms of projective techniques
Overly complex scoring systems
Questionable norms
Subjectivity of scoring
Poor predictive utility
Inadequate or even nonexistent validity
The earliest form of assessment was the clinical interview
Structured interviews
Diagnostic Interview Schedule
Structured Clinical Interview for the DSM
Renard Diagnostic Interview
Theoretical orientation
Clinicians should research the construct that the test is supposed to measure and then examine how the test approaches this construct
Theoretical orientation
Clinicians can frequently obtain useful information regarding the construct being measured by carefully studying the individual test items
Practical considerations
Tests vary in terms of the level of education (especially reading skills) that examinees must have to understand them adequately
Practical considerations
Some tests are too long, which can lead to a loss of rapport with, or extensive frustration on the part of, the examinee
Practical considerations
Clinicians have to assess the extent to which they need training to administer and interpret the instrument
Standardization
The basis on which individual test scores have meaning relates directly to the similarity between the individual being tested and the standardization sample
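To make this concrete, here is a minimal sketch (hypothetical norm values; Python's statistics.NormalDist) of how a raw score gains meaning only by being located within the standardization sample's distribution:

```python
from statistics import NormalDist

# Hypothetical normative statistics from the standardization sample
norm_mean, norm_sd = 100.0, 15.0

raw_score = 112
z = (raw_score - norm_mean) / norm_sd    # location within the norm group
percentile = NormalDist().cdf(z) * 100   # assuming roughly normal norms
print(f"z = {z:.2f}, percentile ≈ {percentile:.0f}")
```

This is why an unrepresentative or undersized standardization group undermines interpretation: the resulting z-score and percentile are only as meaningful as the norms behind them.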
Questions relating to the adequacy of norms
Is the standardization group representative of the population on which the examiner would like to use the test?
Is the standardization group large enough?
Does the test have specialized subgroup norms as well as broad national norms?
Standardization
Standardization of administration should refer not only to the instructions, but also to ensuring adequate lighting, quiet, no interruptions, and good rapport
Reliability
The reliability of a test refers to its degree of stability, consistency, predictability, and accuracy
Reliability
It addresses the extent to which scores obtained by a person are the same if the person is reexamined by the same test on different occasions
Methods of obtaining reliability
Test-retest
Alternate forms
Split half
Interscorer
Test-retest reliability
Administering the test, repeating it on a second occasion, and calculating the reliability coefficient by correlating the scores the same person obtained on the two administrations
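As a worked sketch (hypothetical scores; any correlation routine such as numpy's corrcoef works), the test-retest coefficient is simply the Pearson correlation between the two administrations:

```python
import numpy as np

# Hypothetical scores for the same ten examinees on two occasions
first_administration  = np.array([12, 15, 11, 18, 14, 16, 10, 17, 13, 15])
second_administration = np.array([13, 14, 12, 19, 13, 17, 11, 16, 14, 16])

# Test-retest reliability = Pearson correlation between the two occasions
r_tt = np.corrcoef(first_administration, second_administration)[0, 1]
print(f"Test-retest reliability: {r_tt:.2f}")
```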
Alternate forms
If the trait is measured several times on the same individual by using parallel forms of the test, the different measurements should produce similar results
Split-half reliability
The test is given only once, the items are split in half, and the two halves are correlated
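A minimal sketch of an odd-even split, assuming a small hypothetical item-response matrix. Because each half is only half as long as the full test, the half-test correlation is usually stepped up with the Spearman-Brown formula, r_full = 2·r_half / (1 + r_half):

```python
import numpy as np

# Hypothetical 0/1 item responses: rows = examinees, columns = items
responses = np.array([
    [1, 0, 1, 1, 0, 1, 1, 0],
    [1, 1, 1, 0, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 1, 0],
    [1, 0, 1, 1, 1, 1, 1, 1],
])

odd_half  = responses[:, 0::2].sum(axis=1)   # items 1, 3, 5, 7
even_half = responses[:, 1::2].sum(axis=1)   # items 2, 4, 6, 8

r_half = np.corrcoef(odd_half, even_half)[0, 1]
# Spearman-Brown correction estimates full-length reliability
r_full = 2 * r_half / (1 + r_half)
print(f"Half-test r = {r_half:.2f}, corrected split-half reliability = {r_full:.2f}")
```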
Interscorer reliability
Two different examiners test the same client using the same test and then determine how close their scores or ratings of the person are
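A minimal sketch with hypothetical ratings; for continuous scores a simple correlation between the two examiners is common, while categorical judgments are often summarized instead with agreement indices such as Cohen's kappa:

```python
import numpy as np

# Hypothetical ratings of the same eight clients by two examiners
examiner_a = np.array([4, 3, 5, 2, 4, 3, 5, 1])
examiner_b = np.array([4, 3, 4, 2, 5, 3, 5, 2])

# Interscorer reliability as the correlation between the two sets of ratings
r_ab = np.corrcoef(examiner_a, examiner_b)[0, 1]
print(f"Interscorer reliability: {r_ab:.2f}")
```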
Standard error of measurement (SEM)
A statistical index of the amount of error that can be expected for test scores
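In classical test theory the SEM follows directly from the test's standard deviation and its reliability, SEM = SD·√(1 − r). A short sketch with hypothetical values, including the common use of the SEM to place a confidence band around an obtained score:

```python
import math

sd_test     = 15.0   # hypothetical standard deviation of the test
reliability = 0.91   # hypothetical reliability coefficient

# Standard error of measurement: SEM = SD * sqrt(1 - r)
sem = sd_test * math.sqrt(1 - reliability)

# A 95% confidence band around an obtained score of 110
obtained = 110
low, high = obtained - 1.96 * sem, obtained + 1.96 * sem
print(f"SEM = {sem:.2f}; 95% band: {low:.1f} to {high:.1f}")
```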
Validity
Addresses what the test is to be accurate about: whether it measures what it is intended to measure and produces information useful to clinicians
Methods of establishing validity
Content-related
Criterion-related
Construct-related
Content validity
Refers to the representativeness and relevance of the test content to the construct being measured
Valid test
Measures what it is intended to measure and produces information useful to clinicians
A psychological test cannot be said to be valid in any abstract or absolute sense, but more practically, it must be valid in a particular context and for a specific group of people
A test can be reliable without being valid
A necessary prerequisite for validity is that the test must have achieved an adequate level of reliability
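This prerequisite also has a quantitative side in classical test theory: a test's validity coefficient against any criterion cannot exceed the square root of its reliability, so low reliability caps the validity a test can possibly attain. A sketch with a hypothetical reliability value:

```python
import math

reliability = 0.64   # hypothetical reliability coefficient of the test
# Classical test theory bound: validity <= sqrt(reliability)
max_validity = math.sqrt(reliability)
print(f"With r_xx = {reliability}, validity can be at most {max_validity:.2f}")
```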
Valid test
Accurately measures the variable it is intended to measure
Constructing a test
1. Theoretically evaluate and describe the construct
2. Develop specific operations (test questions) to measure it