focuses on the scientific study and application of individual differences in the workplace
Psychometrics
measuring different attributes of individuals
Indirect approach to influencing behaviour
focus on individual differences
Psychological reactions are influenced by individual differences
I-O psychologists' interest =
limited to psychological differences, especially those related to work behaviour and/or performance
implicit science
Implicit learning is defined as a type of long-term human memory that encompasses the subconscious knowledge or experience necessary to carry out a particular task (e.g. phonological processing, literacy skills or driving a car)
MID
measuring individual differences
measuring
Measuring individual differences [psychological testing/ psychometrics] is more complex as it involves measuring latent constructs [unseen or intangible attributes]
Reliability
stability over time and internally consistent
Stability over time/ Temporal stability
Test-retest Reliability
Calculated by correlating measurements taken at time 1 with measurement taken at time 2
want the measure to show the same (or similar) result at 2 different points in time, then calculate the correlation
want people to get the same score
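The test-retest calculation described above can be sketched as a Pearson correlation between the two sets of scores. The data below are invented for illustration, and the `pearson` helper is just the standard correlation formula written out by hand:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for five people who took the same test twice
time1 = [22, 30, 18, 25, 27]
time2 = [21, 31, 17, 26, 28]

r = pearson(time1, time2)  # high r = scores are stable over time
print(round(r, 3))
```

A value near 1 means people kept roughly the same rank order across the two administrations, which is exactly what temporal stability asks for.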
3 methods of estimating internal consistency
Equivalent Forms of reliability
Split-half reliability method
Cronbach's Alpha
IC: equivalent forms of reliability
administering 2 different forms of the same test on 2 occasions
Calculated by correlating measurement from the sample of individuals who complete two different forms of the same test
IC: split half reliability method
Pretend that instead of one test, there are two or more
Problem: the estimate depends on how you split the test up, so it is not always very reliable; but it is cheaper because you only have to administer the test once
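One common version of the split-half method (assuming an odd/even item split and invented item scores) is to correlate the two half-test totals and then apply the Spearman-Brown correction, which estimates the reliability of the full-length test from the half-test correlation:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sqrt(sum((a - mx) ** 2 for a in x)) *
                  sqrt(sum((b - my) ** 2 for b in y)))

# Hypothetical item scores: 5 respondents x 6 items
responses = [
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 3],
    [5, 4, 5, 5, 5, 4],
    [1, 2, 1, 2, 1, 1],
]

odd_half  = [sum(row[0::2]) for row in responses]  # items 1, 3, 5
even_half = [sum(row[1::2]) for row in responses]  # items 2, 4, 6

r_half = pearson(odd_half, even_half)
# Spearman-Brown correction for full test length
split_half = (2 * r_half) / (1 + r_half)
print(round(split_half, 3))
```

Note how the result would change if the items were split first-half/second-half instead of odd/even: that dependence on the split is exactly the problem mentioned above.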
IC: Cronbach's Alpha
Cronbach's alpha is a statistic used to assess the internal consistency or reliability of a scale or questionnaire in industrial psychology. It measures the extent to which items within a scale measure the same underlying construct.
A high Cronbach's alpha (typically above 0.7) indicates that the items in the scale are consistent and reliable in measuring the intended construct. Industrial psychologists use Cronbach's alpha to ensure that the measurement instruments they use in their research or assessments are reliable and valid.
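Cronbach's alpha can be computed directly from the standard formula α = k/(k−1) · (1 − Σσ²_item / σ²_total), where k is the number of items. The sketch below uses invented item data and population variances:

```python
def variance(values):
    """Population variance of a list of scores."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents."""
    k = len(items)
    sum_item_vars = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Hypothetical 4-item scale answered by 5 respondents (columns = respondents)
items = [
    [4, 2, 3, 5, 1],
    [5, 1, 3, 4, 2],
    [4, 2, 4, 5, 1],
    [4, 2, 3, 4, 2],
]

alpha = cronbach_alpha(items)
print(round(alpha, 3))  # above 0.7 = acceptable internal consistency
```

Because the invented items all rank respondents the same way, alpha comes out well above the 0.7 rule of thumb mentioned above.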
Inter-rater reliability
different individuals make judgements about a person, e.g. ratings of a worker's performance made by several different supervisors
Calculate various statistical indices to show the level of agreement among raters: inter-rater reliability
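There are several such indices (percentage agreement, Cohen's kappa, intraclass correlation); a minimal sketch, using invented supervisor ratings, is the average pairwise Pearson correlation among raters:

```python
from math import sqrt
from itertools import combinations

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sqrt(sum((a - mx) ** 2 for a in x)) *
                  sqrt(sum((b - my) ** 2 for b in y)))

# Hypothetical performance ratings of 6 workers by 3 supervisors
ratings = {
    "supervisor_a": [3, 5, 2, 4, 4, 1],
    "supervisor_b": [4, 5, 2, 3, 4, 2],
    "supervisor_c": [3, 4, 1, 4, 5, 2],
}

pairs = list(combinations(ratings.values(), 2))
avg_r = sum(pearson(a, b) for a, b in pairs) / len(pairs)
print(round(avg_r, 3))  # closer to 1 = supervisors agree more
```

In practice the intraclass correlation is usually preferred because it also accounts for raters differing in leniency, not just in rank ordering.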
Methods of Estimating Reliability
- test-retest
- alternate forms
- internal consistency
- inter-rater reliability
does the test measure what it claims to measure?
The difference between validity and reliability
Validity: relates to the measurement measuring what it claims to measure
Reliability: relates to the consistency or stability of measurement
Validity and Reliability
Refers to whether the experiment was repeated to see if it yielded the same results each time. If so, the experiment is said to be "reliable".
Validity framework
Conduct job analysis to identify the important demands of a job and the human attributes necessary to meet these demands
job demands
talk to people, convince people
Job-related attributes
friendly, assertive, persuasive
job constructs
extraversion
predictors
test scores
criteria
job performance
Face validity
how the test scores look:
Validity approach that is demonstrated by the way the test looks in the 'eyes' of the test-takers
face validity relevance
makes test taking worthwhile, thereby increasing test-taking motivation [and, concomitantly, making for a reliable test taker and test taking]
Is face validity a subjective/qualitative judgement?
Yes, it is a qualitative/subjective judgement about whether the test items appear relevant for their purpose
Criterion related validity
demonstrated by correlating the test score with a performance measure; improves researcher's confidence in the inference
that people with higher test scores have higher performance
validity coefficient (the correlation between a test score (predictor) and a performance measure (criterion))
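The validity coefficient is again just a Pearson correlation, this time between the predictor and the criterion. The data below are invented: hypothetical selection-test scores paired with later supervisor performance ratings for six hires:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sqrt(sum((a - mx) ** 2 for a in x)) *
                  sqrt(sum((b - my) ** 2 for b in y)))

# Hypothetical data: selection test scores (predictor) and later
# supervisor performance ratings (criterion) for six hires
test_scores = [55, 72, 63, 80, 48, 67]
performance = [3.1, 4.0, 3.4, 4.5, 2.8, 3.6]

validity_coefficient = pearson(test_scores, performance)
print(round(validity_coefficient, 3))
```

A large positive coefficient supports the inference that people with higher test scores tend to perform better on the job; real validity coefficients are usually much more modest than this toy example.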
What are the two designs of criterion related validity?
predictive validity
concurrent validity
Predictive validity
heavily criticized as not practical
- design in which there is a time lag between collection of the test data and the criterion data
e.g. test all applicants, then hire applicants without using test scores, and go back over time to collect performance data
Concurrent validity
no lag, more realistic, by testing current employees rather than potential employees
test taking motivation may not be as high for those already employed
Content related validity
demonstrated by showing that the content of the selection procedure represents an adequate sample of important work behaviours and activities and/or worker KSAOs defined by the job analysis
KSAOs
knowledge, skills, abilities and other attributes
Steps and uses for content related validity
ask subject matter experts
analyse their answers to identify or develop possible predictors to test