Ch. 6

Cards (42)

  • Characteristics of Effective Selection Techniques
    • Reliable
    • Valid
    • Cost-efficient
    • Legally defensible
  • Reliability
    The extent to which a score from a selection measure is stable and free from error
  • Test-Retest Reliability
    1. Each one of several people takes the same test twice
    2. Scores from the first administration of the test are correlated with scores from the second to determine whether they are similar
    3. If they are, it has Temporal Stability
    4. The time interval should be long enough that the specific test answers have been forgotten, but short enough that the person has not changed significantly
    5. Longer time interval = lower reliability coefficient
    6. Not appropriate for all kinds of tests
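The steps above can be sketched numerically. A minimal, hypothetical illustration: the same people take the same test twice, and the two sets of scores are correlated (all scores below are invented for demonstration).

```python
# Test-retest reliability: correlate scores from two administrations
# of the same test to the same people. Scores are invented.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

first_administration  = [82, 75, 93, 68, 88, 71]
second_administration = [80, 78, 90, 70, 85, 74]

# A high coefficient indicates temporal stability.
reliability = pearson_r(first_administration, second_administration)
print(f"Test-retest reliability coefficient: {reliability:.2f}")
```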
  • Trait Anxiety
    The amount of anxiety an individual has all the time
  • State Anxiety
    The amount of anxiety an individual has at any given moment
  • Alternate-Forms Reliability
    1. Two forms of the same test are constructed
    2. This counterbalancing of test-taking order is designed to eliminate any effects that taking one form of the test may have on scores on the second form
    3. If the scores of the two forms are similar when correlated, it has form stability
    4. Applicants retaking the same cognitive ability test will increase their scores about twice as much as applicants taking an alternate form of the cognitive ability test
    5. The time interval should be as short as possible (the test could lack either form stability or temporal stability if longer)
    6. Two forms of a test should also have the same mean and standard deviation
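A small hypothetical sketch of the checks above: correlate the two forms to estimate form stability, and compare their means and standard deviations (all numbers are invented).

```python
# Alternate-forms reliability: each person takes two forms of the test;
# the forms should correlate highly and have similar means and SDs.
from statistics import mean, pstdev

form_a = [55, 62, 70, 48, 66, 59]
form_b = [57, 60, 72, 50, 64, 61]

mean_a, mean_b = mean(form_a), mean(form_b)
sd_a, sd_b = pstdev(form_a), pstdev(form_b)

# Pearson r between the two forms estimates form stability.
cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(form_a, form_b))
form_stability = cov / (len(form_a) * sd_a * sd_b)

print(f"Form A: mean={mean_a:.1f}, SD={sd_a:.1f}")
print(f"Form B: mean={mean_b:.1f}, SD={sd_b:.1f}")
print(f"Form-stability coefficient: {form_stability:.2f}")
```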
  • Internal Reliability
    1. Looking at the consistency with which an applicant responds to items measuring a similar dimension or construct
    2. Extent to which the similar items are answered in similar ways is referred to as internal consistency and measures item stability
    3. Longer tests = higher internal consistency
    4. Item Homogeneity - do all items measure the same thing? Or do they have different constructs?
    5. More homogenous items = higher internal consistency
    6. Split-Half Method - items in the test are split into two groups
    7. Spearman-Brown Prophecy - formula used with the Split-Half Method to adjust the correlation upward, because splitting the test halves the number of items, which lowers the observed reliability
    8. Cronbach's Coefficient Alpha and K-R 20 - more popular and accurate methods of determining internal reliability
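The methods above can be sketched with invented item responses: a split-half correlation corrected with the Spearman-Brown prophecy formula, and Cronbach's coefficient alpha computed over all items.

```python
# Internal reliability sketch: split-half + Spearman-Brown, and
# Cronbach's alpha. Item responses are invented for demonstration.
from statistics import mean, pvariance

# Each row is one applicant's scores on a 6-item test.
responses = [
    [4, 5, 4, 5, 4, 5],
    [2, 3, 2, 2, 3, 2],
    [5, 4, 5, 5, 4, 4],
    [3, 3, 2, 3, 3, 3],
    [1, 2, 1, 2, 1, 2],
]

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Split-Half Method: odd items vs. even items, then correlate half scores.
odd_half  = [sum(row[0::2]) for row in responses]
even_half = [sum(row[1::2]) for row in responses]
half_r = pearson_r(odd_half, even_half)

# Spearman-Brown prophecy: corrects the correlation upward because
# splitting the test halved the number of items.
spearman_brown = (2 * half_r) / (1 + half_r)

# Cronbach's coefficient alpha over all k items.
k = len(responses[0])
item_variances = [pvariance([row[i] for row in responses]) for i in range(k)]
total_variance = pvariance([sum(row) for row in responses])
alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

print(f"Split-half r = {half_r:.2f}, "
      f"Spearman-Brown = {spearman_brown:.2f}, alpha = {alpha:.2f}")
```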
  • Scorer Reliability
    1. Even a test or inventory with homogeneous items and stable scores will not be reliable if the person scoring the test makes mistakes
    2. Interrater Reliability - when human judgement of performance is involved
  • Validity
    The degree to which inferences from scores on tests or assessments are justified by the evidence
  • Content Validity
    The extent to which test items sample the content they are supposed to measure
  • Criterion Validity
    The extent to which a test score is related to some measure of job performance
  • Concurrent Validity
    A test is given to a group of employees who are already on the job
  • Predictive Validity
    The test is administered to a group of job applicants who are going to be hired
  • Validity Generalization
    The extent to which a test found valid for a job in one location is also valid for the same job in a different location
  • Construct Validity
    The extent to which a test actually measures the construct that it purports to measure
  • Known-group validity
    A test is given to two groups of people who are known to be different on the trait in question
  • Face Validity
    The extent to which a test appears to be job related
  • Barnum Statements
    Statements that are so general that they can be true of almost everyone
  • The Seventeenth Mental Measurements Yearbook is the most common source of test information and contains information about thousands of different psychological tests as well as reviews by test experts
  • Computer-Adaptive Testing
    • Fewer items required
    • Less time to complete
    • Finer distinctions in applicant ability can be made
    • Test-takers can receive immediate feedback
    • Test scores can be interpreted not only on the number of questions answered correctly, but on which questions were correctly answered
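A minimal, hypothetical sketch of the adaptive idea: each item presented depends on how the applicant answered the previous one, so fewer items are needed. The item difficulties and the up/down adjustment rule below are invented for illustration (real computer-adaptive tests use item response theory).

```python
# Computer-adaptive testing sketch: pick the unused item closest in
# difficulty to the current ability estimate; move the estimate up
# after a correct answer and down after an incorrect one.

def adaptive_test(item_bank, answers_correctly, n_items=5):
    """Administer n_items adaptively; return (ability estimate, items given)."""
    ability, step = 0.0, 1.0
    administered = []
    remaining = dict(item_bank)  # item name -> difficulty
    for _ in range(n_items):
        item = min(remaining, key=lambda i: abs(remaining[i] - ability))
        difficulty = remaining.pop(item)
        administered.append(item)
        if answers_correctly(difficulty):
            ability += step
        else:
            ability -= step
        step /= 2  # finer adjustments as the estimate converges
    return ability, administered

# Simulated applicant who answers correctly whenever item difficulty <= 1.5.
bank = {"q1": -2.0, "q2": -1.0, "q3": 0.0, "q4": 1.0, "q5": 2.0, "q6": 3.0}
estimate, items = adaptive_test(bank, lambda d: d <= 1.5)
print(f"Estimated ability: {estimate}, items given: {items}")
```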
  • Taylor-Russell Tables
    Designed to estimate the percentage of future employees who will be successful on the job if an organization uses a particular test; requires the test's criterion validity coefficient (obtained by conducting a criterion validity study in which test scores are correlated with some measure of job performance), the selection ratio, and the base rate
  • Establishing the Usefulness of a selection device
    1. Obtain the test's criterion validity coefficient
    2. Obtain the selection ratio
    3. Obtain the base rate
  • Selection Ratio
    The percentage of people an organization may hire; lower selection ratio, higher usefulness
  • Base rate
    The percentage of employees currently on the job who are considered successful
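The two inputs above can be shown with a small worked example using invented numbers: the selection ratio (hires divided by applicants) and the base rate (current employees judged successful divided by total current employees).

```python
# Two of the inputs the Taylor-Russell tables need, with invented figures.

applicants, openings = 200, 20
selection_ratio = openings / applicants  # lower ratio -> more useful test

current_employees, successful_employees = 50, 30
base_rate = successful_employees / current_employees

print(f"Selection ratio: {selection_ratio:.2f}")  # 0.10
print(f"Base rate: {base_rate:.2f}")              # 0.60
```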
  • Proportion of Correct Decisions
    1. Draw lines from the point on the y-axis (criterion score) that represents a successful applicant, and from the point on the x-axis that represents the lowest score of a hired applicant
    2. Quadrant 1: employees who scored poorly on the test but performed well on the job
    3. Quadrant 2: employees who scored well on the test and were successful on the job
    4. Quadrant 3: employees who scored well on the test but performed poorly on the job
    5. Quadrant 4: employees who scored low on the test and did poorly on the job
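With invented counts in each quadrant, the proportion of correct decisions can be computed as a sketch. Using the standard scheme, the correct decisions are the high-score/high-performance and low-score/low-performance quadrants.

```python
# Proportion of correct decisions from invented quadrant counts.

q1 = 10  # scored poorly on the test but performed well on the job
q2 = 45  # scored well on the test and performed well on the job
q3 = 15  # scored well on the test but performed poorly on the job
q4 = 30  # scored low on the test and performed poorly on the job

total = q1 + q2 + q3 + q4
proportion_correct = (q2 + q4) / total

# The test is useful if this proportion exceeds the base rate
# (the percentage of current employees who are successful).
print(f"Proportion of correct decisions: {proportion_correct:.2f}")  # 0.75
```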
  • Lawshe Tables
    Needs validity coefficient, base rate, and applicant's test score
  • Brogden-Cronbach-Gleser Utility Formula
    1. Number of employees hired per year
    2. Average tenure
    3. Test Validity
    4. Standard Deviation of performance in dollars
    5. Mean standardized predictor score of selected applicants
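The five inputs above can be combined in a hedged worked example. All figures are invented, and the cost-of-testing term is included as the formula is commonly presented: savings = (n hired)(tenure)(validity)(SD of performance in dollars)(mean z of selected applicants) minus the cost of testing.

```python
# Brogden-Cronbach-Gleser utility sketch with invented figures.

n_hired = 10            # number of employees hired per year
tenure_years = 2.0      # average tenure of hires
validity = 0.40         # test's criterion validity coefficient
sd_dollars = 10_000.0   # standard deviation of performance in dollars
mean_z_selected = 1.0   # mean standardized test score of those selected
cost_per_applicant = 25.0
n_applicants = 100

savings = (n_hired * tenure_years * validity * sd_dollars * mean_z_selected
           - n_applicants * cost_per_applicant)
print(f"Estimated utility: ${savings:,.2f}")  # $77,500.00
```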
  • Bias
    Refers to the technical aspects of a test; a test is considered biased if there are group differences in test scores that are unrelated to the construct being measured
  • Fairness
    Can include bias, but also includes political and social issues; equal probability of success on a job and have an equal chance of being hired
  • Determining a test's potential bias
    1. Finding out whether it will result in adverse impact (which occurs when the selection rate for any group is less than 80% of the rate for the group with the highest selection rate and the difference is statistically significant)
    2. Comparing the hiring rates of two groups
    3. Three criteria for a minimum qualification: it must be needed to perform the job, it must be formally identified and communicated prior to the start of the selection process, and it must be consistently applied
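The four-fifths (80%) rule above can be checked with a short sketch: each group's selection rate is compared against the rate of the group with the highest rate (applicant counts are invented).

```python
# Adverse impact check via the four-fifths (80%) rule, invented counts.

hired = {"group_a": 40, "group_b": 12}
applied = {"group_a": 100, "group_b": 50}

rates = {g: hired[g] / applied[g] for g in hired}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "potential adverse impact" if ratio < 0.80 else "ok"
    print(f"{group}: selection rate {rate:.2f}, "
          f"{ratio:.0%} of highest -> {flag}")
```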
  • Single-group validity
    The test will significantly predict performance for one group and not others
  • Differential Validity
    A test is valid for two groups but more valid for one than for the other
  • Making the Hiring Decision
    1. Unadjusted Top-Down Selection
    2. Compensatory Approach
    3. Rule of Three
    4. Passing scores
    5. Multiple-cutoff approach
    6. Multiple-Hurdle Approach
    7. Banding
  • Unadjusted Top-Down Selection
    Applicants are rank-ordered on the basis of their test scores; selection is then made by starting with the highest score and moving down until all openings have been filled
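A minimal sketch of this approach with invented applicants: rank-order by test score and hire from the top until the openings are filled.

```python
# Unadjusted top-down selection with invented applicants and scores.

applicants = [("Ann", 92), ("Ben", 78), ("Cal", 85), ("Dee", 88), ("Eli", 70)]
openings = 2

# Rank-order by score, highest first, then take the top `openings`.
ranked = sorted(applicants, key=lambda a: a[1], reverse=True)
hired = [name for name, score in ranked[:openings]]
print(hired)  # ['Ann', 'Dee']
```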
  • Compensatory Approach
    The assumption is that if multiple test scores are used, a low score on one test can be compensated for by a high score on another
  • Rule of Three
    The names of the top three scorers are given to the person making the hiring decision
  • Passing scores
    The lowest score on a test that is associated with acceptable performance on the job
  • Multiple-cutoff approach
    Applicants are administered all of the tests at one time and must pass the cutoff score on each test to be considered
  • Multiple-Hurdle Approach
    The applicant is administered one test at a time and must pass each before taking the next, reducing the costs associated with applicants failing one or more tests
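The cost-saving logic of the multiple-hurdle approach can be sketched as follows. The tests, cutoffs, costs, and cheapest-first ordering below are all invented for illustration: an applicant who fails any hurdle takes no further tests, so the cost of the remaining tests is saved.

```python
# Multiple-hurdle sketch: one test at a time, stop at the first failure.

# Each hurdle: (name, cutoff score, cost to administer).
hurdles = [("application blank", 50, 5),
           ("cognitive test", 70, 25),
           ("work sample", 60, 150)]

def run_hurdles(scores):
    """Return (passed_all, total_cost) for one applicant's scores dict."""
    cost = 0
    for name, cutoff, test_cost in hurdles:
        cost += test_cost
        if scores[name] < cutoff:
            return False, cost  # later, costlier tests never administered
    return True, cost

passed, cost = run_hurdles({"application blank": 80,
                            "cognitive test": 65,
                            "work sample": 90})
print(passed, cost)  # False 30 -- failed the cognitive test, work sample skipped
```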