TECHNIQUE AND DECISION

Cards (38)

  • Taylor-Russell Tables
    A series of tables based on the selection ratio, base rate, and test validity that yield information about the percentage of future employees who will be successful if a particular test is used.
  • Proportion of Correct Decisions
    A utility method that compares the percentage of correct selection decisions made with a test to the base rate of successful employees.
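
A minimal worked example in Python, using hypothetical quadrant counts from plotting test scores against job performance; the test adds value when the proportion of correct decisions exceeds the base rate.

    # Hypothetical quadrant counts, split at the test cutoff and the success criterion
    correct_accepts = 40    # scored above cutoff, performed successfully
    erroneous_accepts = 10  # scored above cutoff, performed poorly
    correct_rejects = 35    # scored below cutoff, performed poorly
    erroneous_rejects = 15  # scored below cutoff, performed successfully

    total = correct_accepts + erroneous_accepts + correct_rejects + erroneous_rejects
    proportion_correct = (correct_accepts + correct_rejects) / total   # 0.75
    base_rate = (correct_accepts + erroneous_rejects) / total          # 0.55
    print(proportion_correct > base_rate)  # True: the test beats the base rate
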
  • Lawshe Tables
    Tables that use the base rate, test validity, and applicant percentile on a test to determine the probability of future success for that applicant.
  • Brogden-Cronbach-Gleser Utility Formula
    A method of ascertaining the extent to which an organization will benefit from the use of a particular selection system.
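
One common textbook form of the formula multiplies the number hired, average tenure, test validity, the dollar value of one standard deviation of job performance (SDy), and the mean standardized test score of those hired, then subtracts the cost of testing. A minimal sketch with entirely hypothetical numbers:

    def bcg_utility(n_hired, tenure_years, validity, sd_y, mean_z_hired,
                    n_applicants, cost_per_applicant):
        """Estimated dollar gain from using the selection system.

        validity     -- criterion validity of the test (r)
        sd_y         -- standard deviation of job performance in dollars
        mean_z_hired -- mean standardized test score of those hired
        """
        gain = n_hired * tenure_years * validity * sd_y * mean_z_hired
        return gain - n_applicants * cost_per_applicant

    # Hire 10 of 100 applicants with a test of validity .35
    print(bcg_utility(10, 2, 0.35, 20_000, 1.40, 100, 50))  # 191000.0
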
  • Measurement Bias
    Group differences in test scores that are unrelated to the construct being measured.
  • Predictive Bias
    A situation in which the predicted level of job success falsely favors one group over another.
  • Multiple Regression
    A statistical procedure in which the scores from more than one criterion-valid test are weighted according to how well each test score predicts the criterion.
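
A minimal sketch of the idea using numpy's least-squares solver; the two predictors, their scores, and the performance ratings are all hypothetical.

    import numpy as np

    # Hypothetical scores for six current employees on two criterion-valid tests
    cognitive = np.array([52, 61, 48, 70, 55, 65])
    interview = np.array([3.1, 4.0, 2.8, 4.5, 3.6, 4.2])
    performance = np.array([2.9, 3.8, 2.5, 4.6, 3.3, 4.1])

    # Least squares finds the weights that best predict the criterion
    X = np.column_stack([np.ones(len(cognitive)), cognitive, interview])
    (b0, b1, b2), *_ = np.linalg.lstsq(X, performance, rcond=None)

    # Weighted composite (predicted performance) for a new applicant
    print(round(b0 + b1 * 58 + b2 * 3.9, 2))
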
  • Top-Down Selection
    Selecting applicants in straight rank order of their test scores.
  • Compensatory Approach
    A method of making selection decisions in which a high score on one test can compensate for a low score on another test. For example, a high GPA might compensate for a low GRE score (contrast this with the sketch after the multiple hurdle card).
  • Passing Score
    The minimum test score that an applicant must achieve to be considered for hire.
  • Multiple Cut-Off Approach
    A selection strategy in which applicants must meet or exceed the passing score on more than one selection test.
  • Multiple Hurdle Approach
    The selection practice of administering one test at a time so that applicants must pass that test before being allowed to take the next test.
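
The sketch below contrasts the two noncompensatory strategies: under the multiple cut-off approach every test is administered and the applicant must clear every passing score, while under the multiple hurdle approach a failed test ends the process before the remaining tests are given. Cutoff values are hypothetical.

    # Hypothetical passing scores for three selection tests
    cutoffs = {"cognitive": 50, "work_sample": 70, "integrity": 60}

    def multiple_cutoff(scores):
        # All tests are administered; the applicant must meet or
        # exceed the passing score on every one of them
        return all(scores[test] >= passing for test, passing in cutoffs.items())

    def multiple_hurdle(take_test):
        # Tests are administered one at a time; failing any test ends
        # the process, so later tests are never given to that applicant
        for test, passing in cutoffs.items():
            if take_test(test) < passing:
                return False
        return True

    applicant = {"cognitive": 62, "work_sample": 65, "integrity": 80}
    print(multiple_cutoff(applicant))               # False: failed the work sample
    print(multiple_hurdle(lambda t: applicant[t]))  # False: stopped at the work sample
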
  • Banding
    A statistical technique based on the standard error of measurement that allows similar test scores to be grouped (see the sketch after the SEM card).
  • Standard Error of Measurement (SEM)
    The number of points that a test score could be off due to test unreliability.
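
A sketch of how the SEM feeds into banding, using the formula SEM = SD * sqrt(1 - reliability) and assuming the common rule that scores within 1.96 * sqrt(2) * SEM of the top score are treated as equivalent (other banding rules exist); the SD and reliability values are hypothetical.

    import math

    def sem(sd, reliability):
        # Standard error of measurement: likely spread of observed
        # scores around an applicant's true score
        return sd * math.sqrt(1 - reliability)

    def band_width(sd, reliability, z=1.96):
        # Scores within z * sqrt(2) * SEM of the top score are treated
        # as not meaningfully different from it
        return z * math.sqrt(2) * sem(sd, reliability)

    top = 94
    width = band_width(sd=10, reliability=0.90)   # about 8.8 points
    print(f"band: {top - width:.1f} to {top}")    # band: 85.2 to 94
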
  • Reliability
    The extent to which a score from a test or from an evaluation is consistent and free from error.
  • Test-Retest Reliability
    The extent to which repeated administration of the same test will achieve similar results.
  • Temporal Stability
    The consistency of test scores across time.
  • Alternate Forms Reliability
    The extent to which two forms of the same test are similar.
  • Counterbalancing
    A method of controlling for order effects by giving half of a sample Test A first, followed by Test B, and giving the other half of the sample Test B first, followed by Test A.
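
A minimal sketch of randomly assigning a hypothetical sample of 20 to the two administration orders:

    import random

    participants = list(range(1, 21))  # hypothetical sample of 20
    random.shuffle(participants)
    half = len(participants) // 2

    # Half take Test A then Test B; the other half take Test B then
    # Test A, so any order effect falls equally on both tests
    order_ab = participants[:half]
    order_ba = participants[half:]
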
  • Form Stability
    The extent to which the scores on two forms of a test are similar.
  • Internal Reliability
    The consistency with which an applicant responds to items measuring a similar dimension or construct (e.g., personality trait, ability, area of knowledge).
  • Item Stability
    The extent to which responses to the same test items are consistent.
  • Item Homogeneity
    The extent to which test items measure the same construct.
  • Split-Half Method
    A form of internal reliability in which the consistency of item responses is determined by comparing scores on half of the items with scores on the other half of the items (see the sketch after the Spearman-Brown card).
  • Kuder-Richardson Formula 20 (K-R 20)
    A statistic used to determine the internal reliability of tests that use items with dichotomous answers (yes/no, true/false).
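
A sketch of the K-R 20 computation, K-R 20 = (k / (k - 1)) * (1 - sum(pq) / variance of total scores), on a small hypothetical response matrix:

    import numpy as np

    # Hypothetical responses: 5 examinees x 4 true/false items (1 = correct)
    responses = np.array([[1, 1, 0, 1],
                          [1, 0, 0, 1],
                          [1, 1, 1, 1],
                          [0, 0, 0, 1],
                          [1, 1, 0, 0]])

    k = responses.shape[1]
    p = responses.mean(axis=0)                     # proportion correct per item
    q = 1 - p
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores

    kr20 = (k / (k - 1)) * (1 - (p * q).sum() / total_var)
    print(round(kr20, 3))  # about 0.595
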
  • Spearman-Brown Prophecy Formula
    A formula used to correct reliability coefficients resulting from the split-half method.
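
A sketch of a split-half estimate followed by the Spearman-Brown correction, r_full = 2r / (1 + r); the odd- and even-half scores are hypothetical.

    import numpy as np

    # Hypothetical totals on the odd items and the even items of one test
    odd_half = np.array([12, 15, 9, 18, 14, 11])
    even_half = np.array([13, 14, 10, 17, 15, 10])

    r_half = np.corrcoef(odd_half, even_half)[0, 1]

    # The split-half r describes a test only half as long, so the
    # Spearman-Brown formula steps it back up to full test length
    r_full = (2 * r_half) / (1 + r_half)
    print(round(r_half, 3), round(r_full, 3))
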
  • Coefficient Alpha
    A statistic used to determine the internal reliability of tests that use interval or ratio scales.
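
A sketch of coefficient alpha, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores), on hypothetical rating-scale data; K-R 20 above is the special case of this statistic for dichotomous items.

    import numpy as np

    # Hypothetical responses: 5 examinees x 3 items on a 1-5 rating scale
    items = np.array([[4, 3, 4],
                      [2, 2, 3],
                      [5, 4, 5],
                      [3, 3, 3],
                      [4, 5, 4]])

    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores

    alpha = (k / (k - 1)) * (1 - item_vars / total_var)
    print(round(alpha, 3))  # about 0.896
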
  • Scorer Reliability
    The extent to which two people scoring a test agree on the test score, or the extent to which a test is scored correctly.
  • Validity
    The degree to which inferences from test scores are justified by the evidence.
  • Content Validity
    The extent to which tests or test items sample the content that they are supposed to measure.
  • Concurrent Validity
    A form of criterion validity that correlates test scores with measures of job performance for employees currently working for an organization.
  • Predictive Validity
    A form of criterion validity in which the test scores of applicants are compared at a later date with a measure of job performance.
  • Restricted Range
    A narrow range of performance scores that makes it difficult to obtain a significant validity coefficient.
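
A simulation sketch of the problem: a predictor correlating about .50 with performance among all applicants shows a much weaker correlation when the sample is restricted to the high scorers who were actually hired. All numbers are simulated.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate 1,000 applicants whose test scores correlate about .50
    # with later job performance
    test = rng.normal(50, 10, 1000)
    performance = 0.5 * (test - 50) / 10 + rng.normal(0, 0.75 ** 0.5, 1000)
    r_full = np.corrcoef(test, performance)[0, 1]

    # A validation study sees only the hired: the top 20% of scorers
    hired = test >= np.quantile(test, 0.80)
    r_restricted = np.corrcoef(test[hired], performance[hired])[0, 1]

    print(round(r_full, 2), round(r_restricted, 2))  # restricted r is much smaller
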
  • Validity Generalization (VG)
    The extent to which inferences from test scores from one organization can be applied to another organization.
  • Construct Validity
    The extent to which a test actually measures the construct that it purports to measure.
  • Face Validity
    The extent to which a test appears to be valid.
  • Mental Measurements Yearbook (MMY)
    A book containing information about the reliability and validity of various psychological tests.
  • Computer Adaptive Testing (CAT)
    A type of test taken on a computer in which the computer adapts the difficulty level of questions to the test taker’s success in answering previous questions.
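
A toy sketch of the adaptive loop only; operational CATs estimate ability with item response theory rather than this simple up/down rule. The examinee model and the 1-10 difficulty scale are hypothetical.

    import random

    def run_cat(ask, n_items=10, difficulty=5):
        # A correct answer raises the next item's difficulty; a miss lowers it
        passed = []
        for _ in range(n_items):
            if ask(difficulty):
                passed.append(difficulty)
                difficulty = min(10, difficulty + 1)
            else:
                difficulty = max(1, difficulty - 1)
        # Crude ability estimate: mean difficulty of items answered correctly
        return sum(passed) / len(passed) if passed else difficulty

    # Simulated examinee whose true ability is 6 on the 1-10 difficulty scale
    examinee = lambda d: random.random() < 1 / (1 + 2 ** (d - 6))
    print(run_cat(examinee))
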