4+5 = Heuristics and biases

Cards (34)

  • Judgement and decision making (or JDM) is a core area of cognitive psychology, and one of the most lucrative.
    Basically, the study of how people select between options
    Tied closely to the problem of induction
  • Deductive and inductive reasoning
    Deductive = a set of rules for making inferences that are guaranteed to produce true conclusions (given true premises), e.g. all humans are mortal, I am human, ergo I am mortal

    Inductive = inferences from incomplete data; you cannot be certain the thing will happen again, but it probably will.
  • Judgment and decision making - The basic problem (again)

    Problem of induction: it is underconstrained, so there is no guaranteed right inductive answer
    But… not all inferences are equally likely
    Recall: the problem of vision – it has to be solved inductively, because the eye only receives a 2D image from which you must infer the 3D world.
  • What is the best inductive inference?
    As noted previously, given by rules of probability theory
    Rational Inductive Inference: Inductive inference based on the likelihood estimates of probability theory – rational because it is the ideal way of doing things.
    Not right all the time – still not guaranteed to be right, only more likely to be right than a non-rational inductive inference, but it is right most of the time
  • Judgment and decision making - Normative vs actual

    Rational Inductive Inference = ideal way to make inductions
    Do we make rational inductive inferences? No, not most of the time.
    Why not?
    There is no probability module in the brain: gathering the necessary data is too expensive, and shortcuts are very reasonable from a survival standpoint, e.g. ate a mushroom, got sick, don't eat those mushrooms again.
  • What is Rational Inductive Inference: 

    Inductive inference based on the likelihood estimates of probability theory – rational because it is the ideal way of doing things.
  • Judgement and decision making - Normative vs actual

    What we really want is a quick and dirty estimate that is close enough to get us through the day to the next decision.
    Heuristics… give decent inductions most of the time without gathering lots of data, and get you to the next question/decision.
    A heuristic is a quick decision rule.
  • Judgement and decision making - Normative vs actual = Heuristics. 

    Heuristic: A problem solving approach that is not necessarily optimal or guaranteed to be right
    Heuristics are often general purpose – they give answers across a wide range of possible decisions. E.g., trial and error: not necessarily an optimal heuristic, but it can be applied in any circumstance.
    Heuristics can be based on prior experience. E.g., educated guess
  • Judgement and decision making = Normative vs actual
    What you should do vs what you actually do – reality.
  • Main characteristic of heuristics: Usually a method that works pretty well with limited resources
    The term, as used here, comes from Herbert Simon, who used it in the context of satisficing = a good-enough approach to a solution
  • Judgement and decision making - heuristics information
    Heuristics are the foundations of human decision making (per Tversky and Kahneman), and have the properties of:
    • Working well (at least most of the time)
    • Almost always feeling compelling (if people really do make decisions based on these rules, then the answer a rule specifies should feel very compelling).
    Interestingly, they can be wrong in very regular ways – the analogue in the study of vision is visual illusions, which take advantage of something the visual system genuinely assumes to be correct.
  • Basic concepts
    What does it mean, normatively, to make a good or bad decision?
    • What’s “good”?
    • What’s “bad”?
    • How good is “good”?
    • How bad is “bad”?
  • At the core...JDM (Judgement and Decision Making) is about decisions
    Decisions' goal = maximizing good stuff and minimizing bad stuff (the goal of the cognitive system most of the time)
    How do we maximize the good stuff?
  • Utility: 

    A measure of goodness (a made-up unit; something has utility if it is good/useful for you)
  • Maximizing the good stuff (and minimizing the bad)

    First off, how do we decide what’s good?
    Utility as a function of value: gains are good, losses are bad => the relationship is not linear; depending on the situation, the utility of the same gain or loss can change.
  • Gaining or losing when you have very little carries more utility (positive or negative, respectively). Early gains are worth much more in utility than later gains.
  • Operationalisation of utility - Tversky and Kahneman
    Operationalising utility is almost impossible because utility is a subjective concept. The most we can say is "assuming someone likes something, then that thing will have more utility for them". The utility-vs-value graph does not operationalise utility; it only identifies the pattern. However, Tversky and Kahneman found that this is not the case.
  • Note: Gains and losses compound at decreasing rates: u(3) > u(2) > u(1), but u(2) − u(1) > u(3) − u(2)
    Losing is worse than winning is good: |u(−1)| > u(+1) (losing 1 costs more utility than gaining 1 provides). A small sketch of both properties follows this card.
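    A minimal sketch of these two properties, assuming a simple power-law value function of the kind Kahneman and Tversky describe; the exponent and loss-aversion coefficient below are illustrative assumptions, not fitted values:

    # Sketch of a value function with diminishing sensitivity and loss aversion.
    # alpha < 1 gives the concave/convex curvature; lam > 1 makes losses loom larger.
    # The parameter values here are assumptions chosen only for illustration.
    def value(x, alpha=0.9, lam=2.0):
        """Subjective value (in utils) of a gain or loss of size x."""
        if x >= 0:
            return x ** alpha          # gains: concave (diminishing returns)
        return -lam * ((-x) ** alpha)  # losses: convex and steeper

    # Diminishing sensitivity: the step from 1 to 2 is worth more than the step from 2 to 3.
    print(value(2) - value(1) > value(3) - value(2))   # True
    # Loss aversion: losing 1 hurts more than gaining 1 feels good.
    print(abs(value(-1)) > value(1))                   # True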
  • Maximizing the good stuff (and minimizing the bad) — summary

    Our decisions are made with the goal of maximizing utility
    We want to do the thing that will most likely get us positive utility
    How do we decide what is likely?
    We estimate probabilities
    Use these to maximize expected utility
    Expected Utility: The amount of value we can expect from an event
    Our decisions are made with the goal of maximizing expected utility
    How do we calculate expected utility?
  • Expected Utility
    Expected Utility: The product of the utility of an outcome and the probability of that outcome
    e(i) = p(i)u(i),
    where, e(i) = expected utility of event, i; p(i) = probability of event i; u(i) = utility of event i
    To make your choice rationally you must consider both the payoff and the likelihood of that payoff coming (a really great thing that has no possibility of happening won't do you any good)
  • Expected utility of a choice, c, is the sum of the expected utilities of all the events, i, that can occur as a result of c, or
    e(c) = Σ p(i)u(i), summed over all events i that result from choice c
    This is an estimate of how much you can expect to gain from making a specific decision
    Of course, try to make decisions that have good results (a worked example follows this card)
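    A minimal sketch of this calculation; the choices, probabilities, and utilities below are made up purely for illustration:

    # Expected utility of a choice c: e(c) = sum over outcomes i of p(i) * u(i).
    # All numbers below are invented illustrative values, not data from the lecture.
    def expected_utility(outcomes):
        """outcomes: list of (probability, utility) pairs for one choice."""
        return sum(p * u for p, u in outcomes)

    choice_a = [(1.0, 5)]              # a sure small gain
    choice_b = [(0.1, 40), (0.9, -2)]  # a gamble: unlikely big gain, likely small loss

    print(expected_utility(choice_a))  # 5.0
    print(expected_utility(choice_b))  # ~2.2, i.e. 0.1*40 + 0.9*(-2)
    # A chooser maximizing expected utility would pick choice_a here.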
  • JDM (so far...)

    We’re trying to maximize good and minimize bad. We’re doing this by trying to maximize expected utility.
    Expected utility requires:
    1 - A value in utils (that’s easy to get...)
    2 - A probability (that means we have to estimate probability to get Expected utility)
    Problem: We make bad choices all the time...
    Why do we make bad choices? Answer: We’re not so good at estimating probabilities.
    So, how do we estimate probability? Another way of asking:
    What are the heuristics of JDM and how can they go wrong?
  • Heuristics of JDM
    Representativeness
    Availability
    Anchoring and Adjustment
    Most of the time these work very well…
  • Heuristics of JDM - Representativeness: 

    Judge something to be likely to the extent that it is representative of (i.e., similar to) things you are familiar with. I.e. if it looks like something I already know, then odds are that is what it is.
    However, much like visual illusions, this assumption can be exploited and lead us astray.
    Example: Rover looks just like your dog who is a golden retriever; assume p(Rover = golden retriever) is high
  • Heuristics of JDM - Availability: 

    Judge something to be likely to the extent that it is easy to think of examples of that thing              
    Example: Do more English words begin with T or K?
  • Heuristics of JDM - Anchoring and Adjustment: 

    When you estimate magnitudes, start with an initial estimate, and adjust it as new information arrives              
    Example: How big is an elephant? If we see a big elephant first and then a baby elephant next, we adjust our initial estimate rather than concluding that all elephants are small.
  • Heuristics gone awry - Representativeness = Kahneman & Tversky (1983)

    Please rank the following by their probability (1 for most probable)
    (a) Linda is a bank teller
    (b) Linda is an insurance salesperson
    (c) Linda is a bank teller and is active in the feminist movement
    (c) is a special case of (a) – probability theory tells us that (c) cannot be more likely than (a), yet people tend to rank it above (a), because they judge it from a decision-making (representativeness) standpoint, not a probability standpoint. (c) is a subset of (a); a worked check follows this card.
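    A worked check of why (c) cannot outrank (a), with made-up probabilities; only the inequality matters, not the particular numbers:

    # Conjunction rule: p(A and B) = p(A) * p(B|A) <= p(A), because p(B|A) <= 1.
    # The probabilities below are invented for illustration.
    p_teller = 0.05                # p(Linda is a bank teller)
    p_feminist_given_teller = 0.8  # p(active feminist | bank teller) -- even if high...

    p_teller_and_feminist = p_teller * p_feminist_given_teller
    print(p_teller_and_feminist)              # ~0.04
    print(p_teller_and_feminist <= p_teller)  # True, whatever numbers you pick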
  • What is conjunction fallacy?
    Conjunction Fallacy: judging the conjunction of two events as more likely than either event in isolation – one of the two components is a very compelling lure.
    This is the term used to describe heuristics of JDM that have gone awry, particularly Representativeness
  • Heuristics gone awry: Availability
    Notice that all words fitting (i n g) also fit (_ n _), while some words fitting (_ n _) do not fit (i n g)
    Conjunction fallacy again… it is easier to come up with words when we can key on "ing", because we know those words, but trickier when all we have to go on is the "n".

    Seen again in the example about the likelihood of earthquakes in the US vs earthquakes in California: people choose California because they can think of more examples, even though California earthquakes are a subset of US earthquakes.
  • Heuristics gone awry: Anchoring and Adjustment
    From Tversky & Kahneman (1974)
    How much is 1x2x3x4x5x6x7x8 ?
    Median estimate of participants = 512
    How much is 8x7x6x5x4x3x2x1 ?
    Median estimate of participants = 2,250 (correct answer is 40,320; the exact arithmetic is checked after this card)
    So, people are estimating long-string multiplication problems, but why the huge difference in estimates?
    The participants’ estimates are biased by the initial anchor (i.e., the first numbers they processed), even if those anchors are generated by unreliable or biased sources (relates to other A&A experiments)
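    For reference, both orderings are the same product, 8! = 40,320 – far above either median guess:

    import math

    # 1*2*3*4*5*6*7*8 and 8*7*6*5*4*3*2*1 are both 8 factorial.
    print(math.factorial(8))  # 40320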
  • Schemas coming into play with Anchoring and Adjustment:

    Schemas are mental frameworks that help us organize and interpret information based on past experiences and knowledge. Schemas play a key role in Anchoring and Adjustment:
    Schemas shape how we initially anchor our thoughts and influence how we adjust them, often keeping us tethered to our initial perceptions.
  • Heuristics gone awry:

    Our heuristics are so compelling that we do not use standard probability theory even when we have the necessary information
    The phenomenon of ignoring information about the population base rates (or the prior probabilities) is called...
    Base rate neglect
  • Bayes Theorem: P(B|A)

    Bayes' rule follows from the idea of conditional probability.
    In a basic sense, it allows you to update your assessment of probability as you gather more evidence.
    P(B|A) = P(A|B) * P(B) / P(A)

    That is, p(B|A) ≠ p(A|B) whenever p(B) ≠ p(A). A worked example follows this card.
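    A minimal sketch of updating with Bayes' rule, using a made-up screening-test example to show why the base rate P(B) matters; all the numbers are illustrative assumptions:

    # Bayes' rule: p(B|A) = p(A|B) * p(B) / p(A), with p(A) from the law of total probability.
    # Made-up numbers: B = "has the condition", A = "test comes back positive".
    p_B = 0.01             # base rate (prior): 1% of the population has the condition
    p_A_given_B = 0.95     # p(positive | condition)
    p_A_given_notB = 0.05  # p(positive | no condition), i.e. false-positive rate

    # Total probability of a positive test.
    p_A = p_A_given_B * p_B + p_A_given_notB * (1 - p_B)

    # Posterior: probability of the condition given a positive test.
    p_B_given_A = p_A_given_B * p_B / p_A
    print(round(p_B_given_A, 3))  # ~0.161, much lower than p(A|B) = 0.95

    # Ignoring the 1% base rate (base rate neglect) makes the positive test look far
    # more conclusive than it is; note p(B|A) != p(A|B) because p(B) != p(A).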
  • The strategies we use (heuristics) are remarkably effective considering that we have minimal information in most cases, but they do occasionally fail.