IRE348 - Weeks 6&7

Cards (81)

  • Social media
    • Provides a wider view of applicant
    • Also potentially provides info you shouldn't know and would influence your decision based on prohibited grounds
  • Social media
    Not a good indicator of what kind of worker an applicant is - but it can tip employers off through potential red flags
  • Employers using social media to screen applicants
    • Not supposed to because of PIPEDA etc. but some do (because it's hard to prove that they didn't hire you only because of social media)
    • A Journal of Management study found that social media is a poor predictor of job performance (work experience, cognitive ability, performance, turnover, retention)
    • It predicts sociability, extroversion, and openness to new experiences - but there are other scientific measures that are better predictors of those things
    • There are very few legal ramifications - so lock that stuff down
    • What if they don't even know whether it's you (if the information they find is not accurate)?
  • Job performance
    • Behaviour (the observable things people do) that is relevant to accomplishing the goals of an organization
    • Multidimensional process to determine domains of performance and what they mean
    • Not just simply ability to do things
    • We need to make sure we isolate the measures we use to assess performance (criteria)
  • Criteria
    • Measures of job performance that attempt to capture individual differences among employees with respect to job-related behaviours
    • Can be used to assess individual employees
  • Goal of measuring job performance
    Identify reliable and valid ways of identifying KSAOs linked to performance
  • The trick to measuring job performance
    • Determining a defensible system that will effectively accomplish this (best accomplished in a feedback loop to better inform the process)
    • How to balance different types of performance related to the job, and prioritize them
  • Job performance "domains"
    • The set of job performance dimensions (i.e., behaviours) that are relevant to the goals of the organization, or the unit, in which a person works
    • Includes: task performance, contextual performance, adaptive performance, counterproductive performance
  • Task performance
    • Duties related to the direct contribution to the org that form part of a job
    • These duties are part of the worker's formal job description
  • Contextual performance
    Activities or behaviours that are not part of a worker's formal job description but are important for effectiveness
  • Adaptive performance
    A worker's behavioural reactions to changes in a work system or work role (agility to be able to react to changes in a positive way)
  • Counterproductive performance
    • Voluntary behaviours that violate significant org norms
    • Threaten the well-being of an organization, its members, or both
  • Campbell's theory of performance
    • Declarative knowledge: knowledge of what's needed to do things - rules, context, how things are supposed to be done (what's required to do it)
    • Procedural knowledge and skill: also the knowledge/ability to actually do it
    • Motivation
  • Performance measurement
    • Defines what is meant by performance
    • A measure or set of measures that best captures the essence of the complexities of job-related performance
    • Plays an important role in developing strategies for effective recruitment and selection - scientific system allows for reliability and validity of capturing multiple performance domains
  • A sample of new hires will give you maximal effort but little knowledge
  • A sample of long-tenured employees will give you minimal effort but a lot of knowledge
  • Maximal performance
    • When worker is operating in best possible state of mind (usually in short term)
    • When measuring maximal performance, the worker generally knows they're being observed, so they'll perform at their absolute best (best for getting measures of cognitive ability)
  • Typical performance
    • Typical performance reflects day-to-day work - better for predicting personality-driven performance (contextual, maybe adaptive) - is this person motivated?
    • Can we trust them to perform
  • Usually good idea to do both (maximal, typical) - to get measures of cognitive predictors and qualities and personality of the type of candidate
  • Objective job performance measures
    • Facts and hard data about what an employee produces
    • Quantity (volume of sales, time of completion, speed of production)
    • Quality (errors, mistakes, customer complaints)
    • Trainability (rate of increased production/sales growth)
    • Absenteeism (number of sick days/times late)
    • Tenure (length of time in job, turnover rate)
    • Rate of advancement (number of promotions, increase in salary)
    • Accidents (number and costs, safety violations)
  • Subjective measures
    • Based on others' opinions, done on the basis of ranking/rating
    • Relative rating systems (or comparative rating systems)
    • Absolute rating systems
  • Relative rating systems
    • Comparing employees against one another to create a rank order
    • Rank order: the rater arranges the employees in order of their perceived overall performance level
    • Paired comparisons: the rater compares the overall performance of each worker with that of every other worker who must be evaluated
    • Forced distribution: system sets up a limited number of categories that are tied to performance standards (certain % gets A, B, C) - most problematic
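The relative systems above can be sketched numerically. A minimal illustration of paired comparisons with hypothetical employees and scores (all names and numbers invented): n employees require n*(n-1)/2 comparisons, and a rank order falls out of how many comparisons each worker "wins".

```python
from itertools import combinations

# Hypothetical overall-performance judgments standing in for a rater's opinions.
employees = {"Ana": 4, "Ben": 2, "Chen": 5, "Dia": 3}

# Paired comparisons: each worker is compared with every other worker,
# so n employees require n*(n-1)/2 comparisons.
pairs = list(combinations(employees, 2))
wins = {name: 0 for name in employees}
for a, b in pairs:
    winner = a if employees[a] >= employees[b] else b
    wins[winner] += 1

# Rank order emerges from the number of comparisons each worker wins.
rank_order = sorted(wins, key=wins.get, reverse=True)
print(len(pairs))   # 4 employees -> 6 comparisons
print(rank_order)   # ['Chen', 'Ana', 'Dia', 'Ben']
```

This also shows why paired comparisons get expensive: 20 employees would already require 190 comparisons.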
  • Absolute rating systems
    • Compares the performance of one worker with an absolute standard of performance
    • Provides either an overall evaluation of performance or evaluation of each of the job dimensions
    • Types: Graphic rating scales, Behaviourally anchored rating scales, Behaviour observation scales
  • Graphic rating scales
    • Presents the rater with the name and description of a job dimension
    • A scale showing equal numeric intervals to reflect gradients of low to high performance
    • Verbal labels or "anchors" attached to each numeric scale point
    • Provides the rater with instructions
  • Behaviourally anchored rating scales (BARS)
    • Use CIT statements to derive job behaviours at varying levels of effectiveness
    • Then uses these statements as anchors placed at values along the rating scale - the statements show what good, mediocre, and bad performance look like
    • Point: move through different examples that exemplify what above- or below-average performance means
    • No more guesswork - there's a clear indication
  • Behaviour observation scales (BOS)
    • An attempt to improve upon BARS
    • Behavioural statements focus on examples of positive behaviours for each job
    • Using a numeric scale, the employee is rated on the frequency of the positive behaviour
    • Individual scores then summed for an overall rating
    • Disadvantage - raters don't have eyes on employees all the time, so measurement errors arise when extrapolating from behaviour that is only sometimes observed
    • The observed sample is used to rate the rest of performance
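The BOS scoring step is simple arithmetic; a minimal sketch with invented behaviour statements and frequency ratings (1 = almost never, 5 = almost always):

```python
# Hypothetical BOS items: each positive behaviour is rated on frequency,
# then the item ratings are summed for an overall score.
bos_items = {
    "Greets customers promptly": 4,
    "Documents work accurately": 5,
    "Helps coworkers when asked": 3,
}

overall = sum(bos_items.values())
max_possible = 5 * len(bos_items)
print(f"BOS score: {overall}/{max_possible}")  # BOS score: 12/15
```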
  • Subjective appraisals: The 360
    • By superiors, third parties, customers, peers, subordinates, self-appraisals
    • Includes information from many possible angles
    • Any one of these is going to be biased for a number of reasons
    • If you take in all of these though, and find a way to reconcile, you have more measurements on how to find performance
    • Disadvantage - all perspectives only see components of performance - very rarely are scores consistent between raters (not a holistic view)
    • Many different data points, different viewpoints but no one has all of the information
    • Use it to triangulate between different assessment items
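One way to sketch that triangulation: average the ratings across sources for an overall estimate, and use the spread to flag how much the raters disagree. All sources and numbers below are invented for illustration.

```python
from statistics import mean, stdev

# Hypothetical 360 ratings (1-5 scale) of one employee from different sources.
ratings = {
    "superior": 4.0,
    "peer": 3.5,
    "subordinate": 4.5,
    "customer": 3.0,
    "self": 4.5,
}

# No single perspective sees all of performance; the mean triangulates
# across sources, while the standard deviation flags rater inconsistency.
overall = mean(ratings.values())
disagreement = stdev(ratings.values())
print(round(overall, 2), round(disagreement, 2))  # 3.9 0.65
```

A large spread is a cue to investigate why the perspectives diverge, not just to average it away.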
  • Rater training
    • With subjective performance assessments, it is vital that raters receive rater training
    • Frame-of-reference (FOR) training: calibrate raters so they agree on the level of effectiveness for individual employee behaviours
    • Make sure all raters have same conceptualization of the various scales and measures we use so they're consistent in rating
    • See how far off the scores of individual raters are - and how much variability exists
    • Although rater training has been shown to lead to significant improvements in rating quality, we'll never be perfect - even with the best training approaches!
  • Overall distance accuracy
    The average discrepancy between an individual rater's ratings and a set of expert ratings that serves as the standard for comparison
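That definition translates directly into a calculation; a minimal sketch with invented ratings on a 1-7 scale:

```python
# Hypothetical ratings: one trainee rater vs. the expert standard.
rater = [5, 3, 6, 4, 2]
expert = [4, 3, 7, 5, 2]

# Overall distance accuracy: the average absolute discrepancy between
# the rater's ratings and the expert ratings serving as the standard.
distance = sum(abs(r - e) for r, e in zip(rater, expert)) / len(rater)
print(distance)  # 0.6
```

A smaller value means the trainee's ratings sit closer to the expert standard.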
  • Criterion relevance vs criterion deficiency
    • If we have measures that are unimportant or irrelevant to the performance dimension, we can say we have not maximized criterion relevance
    • If we've left out important dimensions necessary to comprehensively understand employee performance, we are criterion deficient
  • Range restriction
    • Occurs when raters use only a portion of the performance rating scale
    • e.g., a rater believes the highest score anyone can get is an 80 and the lowest is a 50
    • Problematic - raters hold personal beliefs about the usable range of the scale
  • Halo/horn error
    Occurs when appraisers rate an individual either high (halo) or low (horn) on all characteristics because one characteristic is either high or low
  • Leniency/harshness effect
    • Tendency of many appraisers to provide unduly high or low performance appraisals
    • Just a tendency to mark people either very good or very bad
  • Contrast effect
    Tendency for a set of performance appraisals to be influenced upward by the presence of a very low performer or downward by the presence of a very high performer
  • Similarity effect
    Tendency of appraisers to inflate the appraisals of appraisees they see as similar to themselves
  • Central tendency error
    Occurs when appraisers rate all employees as "average" in everything
  • Recency effect
    • Tendency of appraisers to overweight recent events when appraising employee performance
    • Either good or bad recently compared to historical performance
  • Beauty effect
    Tendency for the physical attractiveness of a ratee to affect their performance appraisal
  • The "fair" factor
    • Fair is a social construct - not based on any scientific measure (how we feel about the process)
    • Are managers and employees satisfied with the performance assessments
    • Do they perceive the assessments to be both fair and accurate?
    • Do they perceive the assessments to be useful or practical?
  • Remember: whatever method your org chooses, the defensibility of selection systems and performance measures rests on the ability to demonstrate that they are job related