Psychological Assessment

Subdecks (7)

Cards (1324)

  • Statistics are used for purposes of description and to make inferences, which are logical deductions about events that cannot be observed directly.
  • Descriptive statistics are used to provide a concise description of a collection of quantitative information.
  • Inferential statistics are used to make inferences from observations of a sample to the population from which it was drawn.
  • Measurement is the act of assigning numbers or symbols to characteristics of things (people, events, whatever) according to rules.
  • Conditions for revisions include outdated materials, outdated vocabulary, words perceived as inappropriate or offensive, test norms that are no longer adequate to age-related norms, and psychometric soundness that needs to improve significantly.
  • Co-validation is the process of validating two or more tests using the same sample of test takers.
  • Co-norming is the creation of norms, or the revision of existing norms, for two or more tests using the same sample of test takers.
  • A test may also be revised when the theory on which it is based has been improved significantly.
  • Pilot work, pilot study, or pilot research is preliminary research surrounding the creation of a prototype of the test.
  • Selected-Response Format requires test takers to select a response from a set of alternative responses.
  • A scale is a set of numbers (or other symbols) whose properties model empirical properties of the objects to which the numbers are assigned.
  • Item format refers to the form, plan, structure, arrangement, and layout of individual test items.
  • Scaling is the process of setting rules for assigning numbers in measurement.
  • Constructed-Response Format requires test takers to supply or to create the correct answer, not merely selecting it.
  • Item pool is the reservoir or well from which the items will or will not be drawn for the final version of the test.
  • The steps in standardized test development include identifying a need, deciding whether the test will be theory based or empirical data driven, making practical choices, preparing a table of specifications or blueprint, test tryout and initial refinements, gathering reliability and validity evidence, gathering normative data, and making further refinements.
  • Test conceptualization involves brainstorming ideas about what kind of test a developer wants to publish.
  • Test construction is the stage in the process that entails writing test items, making revisions, formatting, and setting scoring rules.
  • Test conceptualization is the first stage in the process of developing tests.
  • Types of scales include the rating scale, summative scale, Likert scale, Thurstone scale, method of paired comparisons, comparative scaling, categorical scaling, and Guttman scale.
  • Magnitude is the measurement or absolute value of a quantity, represented by a positive real number.
  • Equal intervals mean that the differences between numbers (units) anywhere on the scale are the same.
  • Absolute zero refers to the point at which the property being measured does not exist at all; for many psychological variables it is impossible or extremely difficult to define such a point.
  • Content validity explores the appropriateness of the items of a psychological test, meaning that the test covers the content it is supposed to cover.
  • Spearman Brown Formula allows a test developer or user to estimate internal consistency reliability from a correlation of two halves of a test.
  • Face validity only establishes the presentation or physical appearance of the psychological test.
  • Construct validity is a judgment about the appropriateness of inferences drawn from test scores regarding individual standings on a variable called construct.
  • Coefficient Alpha (Cronbach’s Alpha) is the most common measure of internal consistency reliability.
  • Concurrent validity is the extent to which test scores may be used to estimate an individual’s present standing on a criterion.
  • Discriminant validity is when a construct measure diverges from other measures that should be measuring different things.
  • Predictive validity is how well a certain measure can predict future behavior.
  • Kuder-Richardson Formula 20 (KR-20) estimates the internal consistency reliability of tests with dichotomously scored items.
  • Criterion-related validity refers to the ability to draw accurate inferences from test scores to a related behavioral criterion of interest, indicating the extent to which a measure is related to an outcome.
  • Convergent validity is when a measure correlates well with other tests believed to measure the same construct.
  • Practicality means a test must be usable: test selection should weigh effort, affordability, and time frame, and the test should have simple directions and be easy to administer and score.
  • Test development is an umbrella term for all that goes into the process of creating a test.
  • Coefficient Alpha is used when multiple Likert-type items in a survey/questionnaire form a scale and you wish to determine whether the scale is reliable.
  • Validity indicates the extent to which a test measures what it aims or purports to measure.
  • Coefficient Alpha is applicable for personality and attitude scales.
  • Face validity relates more to what a test appears to measure to the person being tested than to what the test actually measures.
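
The Spearman-Brown card above can be made concrete with a short calculation. A minimal Python sketch of the prophecy formula (the function name and the sample half-test correlation of 0.70 are illustrative, not from the deck):

```python
def spearman_brown(r_half: float, n: float = 2.0) -> float:
    """Spearman-Brown prophecy formula: estimated reliability when
    test length changes by factor n (n=2 projects a half-test
    correlation onto the full-length test)."""
    return (n * r_half) / (1 + (n - 1) * r_half)

# Correlation of 0.70 between two test halves → full-test estimate
full = spearman_brown(0.70)
print(round(full, 4))  # 0.8235
```

Note that the estimate (0.82) exceeds the half-test correlation (0.70): lengthening a test with comparable items is expected to raise reliability.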
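
The Coefficient Alpha cards above can likewise be computed directly from the standard formula, alpha = (k/(k-1)) * (1 - sum of item variances / variance of totals). A minimal sketch using only the standard library (the four-respondent Likert data are hypothetical):

```python
from statistics import pvariance

def cronbach_alpha(items: list[list[float]]) -> float:
    """Coefficient alpha from a respondents-by-items score matrix."""
    k = len(items[0])                                  # number of items
    item_vars = [pvariance(col) for col in zip(*items)]
    total_var = pvariance([sum(row) for row in items]) # variance of total scores
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Four respondents answering three Likert-type items (hypothetical data)
scores = [[2, 3, 3], [4, 4, 5], [1, 2, 2], [3, 3, 4]]
print(round(cronbach_alpha(scores), 3))  # 0.971
```

Population variance is used throughout; the key requirement is only that item and total variances be computed the same way.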
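
For the KR-20 card, the formula replaces the item variances with p*q, where p is the proportion passing each dichotomous item. A minimal sketch under the same stdlib-only assumptions (the five-respondent 0/1 data are hypothetical):

```python
from statistics import pvariance

def kr20(items: list[list[int]]) -> float:
    """KR-20 internal consistency for dichotomously scored (0/1) items."""
    n_resp = len(items)
    k = len(items[0])
    # p = proportion passing each item; variance of a 0/1 item is p*(1-p)
    pq = []
    for col in zip(*items):
        p = sum(col) / n_resp
        pq.append(p * (1 - p))
    total_var = pvariance([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(pq) / total_var)

# Five respondents on three dichotomous items (hypothetical data)
scores = [[1, 1, 1], [1, 1, 0], [0, 0, 0], [1, 0, 0], [1, 1, 1]]
print(round(kr20(scores), 3))  # 0.794
```

For 0/1 items, p*(1-p) equals the item variance, so KR-20 is simply coefficient alpha specialized to dichotomous data.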