the ultimate psy101 reviewer (not rlly Bismillah)


  • The p-value is affected by sample size: with a large sample, a small, unimportant effect can come out significant; with a small sample, a large, important effect can stay hidden (see the sketch below)
  • Non-significant results only tell us that an effect is not big enough to be detected given the sample size
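A quick Python sketch of this pitfall (my own illustration, not from the deck; the data are simulated): the same small true effect is non-significant with a small sample but significant with a large one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
for n in (20, 2000):                    # small vs. large sample per group
    control = rng.normal(0.0, 1.0, n)   # mean 0, SD 1
    treated = rng.normal(0.2, 1.0, n)   # small true effect (d = 0.2)
    t, p = stats.ttest_ind(control, treated)
    print(f"n per group = {n:4d}, p = {p:.4f}")
# Typically: n = 20 gives p > .05 (effect hidden), n = 2000 gives p < .05.
```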
  • Criterion significance level
    The probability level we are willing to accept as the likelihood that the results were due to sampling error (e.g., α = .05)
  • All-or-nothing thinking - p < .05 is merely a rule of thumb, not a threshold that decides a yes-or-no situation; we counter this by looking at confidence intervals
  • The intentions of the scientist
    What the scientist intended before data collection (e.g., how much data to collect) affects the conclusions that can be drawn from NHST
  • Success
    Defined by a scientist's results being significant
  • Researcher degrees of freedom
    The many decisions a scientist has to make when designing and analyzing a study; these allow scientists to selectively report their results, focusing on significant findings and excluding non-significant ones
  • p-hacking
    Practices that lead to the selective reporting of significant p-values, e.g., running several analyses and reporting only the one that yields significant results
  • HARKing (Hypothesizing After the Results are Known)
    Presenting a hypothesis that was made after data collection as though it were made before data collection
  • EMBERS (Ways to counter the pitfalls)
    • Effect sizes
    • Meta-analysis
    • Bayesian estimation
    • Registration
    • Sense
  • p-values can indicate how incompatible the data are with a specified statistical model (i.e., H0)
  • Even when a p-value suggests the data are compatible with a hypothesis, that does not mean the hypothesis is the sole true explanation
  • Open science
    Movement; makes the process, data, and outcomes of research freely available to everyone
  • Pre-registration
    Practice; making all aspects of your research process publicly available before data collection
  • Registered report
    A submission to an academic journal describing the intended research protocol before data collection
  • Peer Reviewers' Openness Initiative
    Scientists who commit to the principles of open science when acting as expert reviewers
  • Effect size
    An objective, usually standardized measure of the magnitude of an observed effect; affected by sample size (larger samples give a sample effect size that more closely matches the population effect size) but not attached to a decision rule
  • Standardized
    Being standardized allows effect sizes to be compared across different studies
  • Cohen's d
    • .2 (small)
    • .5 (medium)
    • .8 (large)
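A minimal sketch (not from the deck) of one common pooled-SD formula for Cohen's d, with invented scores:

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d using the pooled standard deviation (one common variant)."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                  (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

print(cohens_d([5, 6, 7, 8], [3, 4, 5, 6]))  # 2.0 / 1.29 ≈ 1.55, a large effect
```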
  • Pearson's r
    • .10 (small)
    • .30 (medium)
    • .50 (large)
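A similar sketch for Pearson's r using SciPy (the hours/score numbers are invented):

```python
import numpy as np
from scipy import stats

hours = np.array([1, 2, 3, 4, 5, 6, 7, 8])
score = np.array([55, 58, 60, 64, 70, 71, 76, 80])
r, p = stats.pearsonr(hours, score)
print(f"r = {r:.2f}")  # well above .50, i.e., large by the benchmarks above
```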
  • Odds Ratio
    An effect size for counts (frequencies) of categorical variables, typically from a 2x2 contingency table
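A sketch of the sample odds ratio from a 2x2 table (the counts are invented):

```python
import numpy as np

#                  outcome   no outcome
table = np.array([[30, 10],   # exposed group
                  [15, 25]])  # unexposed group

# For a 2x2 table [[a, b], [c, d]], the odds ratio is (a*d) / (b*c)
odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
print(odds_ratio)  # (30*25) / (10*15) = 5.0: exposed odds are 5x the unexposed odds
```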
  • Meta-analysis
    Combines the results of multiple studies to get a more definitive estimate of the effect in the population
  • Weighted average
    Each effect size is weighted by its precision
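A minimal fixed-effect sketch of that weighting, using inverse-variance weights (all numbers invented):

```python
import numpy as np

effects   = np.array([0.30, 0.55, 0.20])  # effect sizes from three studies
variances = np.array([0.02, 0.05, 0.01])  # their sampling variances
weights = 1.0 / variances                 # precision = inverse variance
pooled = np.sum(weights * effects) / np.sum(weights)
print(f"pooled effect = {pooled:.2f}")    # the most precise study counts most
```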
  • Bayesian statistics
    Using the data you collect to update your beliefs
  • Bayes' Theorem
    Expresses the conditional probability of one event given another in terms of the individual probabilities and the inverse conditional probability; used to update a prior distribution, or a prior belief in a hypothesis, based on the observed data
  • Prior probability
    Belief in a hypothesis before considering the data
  • Marginal likelihood/evidence
    Probability of the observed data
  • Likelihood
    The probability that the observed data could be produced given the hypothesis/model
  • A posterior distribution can be used to obtain a point estimate
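A tiny numeric sketch of Bayes' theorem, P(H | data) = P(data | H) × P(H) / P(data), with invented probabilities:

```python
prior_h    = 0.5   # prior probability of the hypothesis
like_h     = 0.8   # likelihood: P(data | H)
like_not_h = 0.3   # likelihood: P(data | not H)

# Marginal likelihood (evidence): total probability of the observed data
evidence = like_h * prior_h + like_not_h * (1 - prior_h)

posterior_h = like_h * prior_h / evidence  # updated belief in H
print(f"P(H | data) = {posterior_h:.2f}")  # 0.40 / 0.55 ≈ 0.73
```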
  • Power
    The ability to detect an effect when one exists; the ability of a test to correctly reject a false H0 (see the sketch after the list of factors below)
  • A power of 0 means you cannot find a difference or relationship between variables/means
  • Power values
    • 0.1-0.3 (low)
    • 0.8-0.9 (high)
  • Factors affecting power
    • Size of the effect expected to be found
    • Criterion significance level
    • No. of participants
    • Type of statistical test used
    • Between-participants/within-participants design
    • Hypothesis is 1-tailed/2-tailed
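A sketch of a power calculation using statsmodels (an assumption on my part; your course may use a different tool): how many participants per group does a between-participants, two-tailed test need for 80% power to detect d = 0.5 at α = .05?

```python
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.5,  # expected d
                                          alpha=0.05,       # criterion level
                                          power=0.8,        # target power
                                          alternative='two-sided')
print(round(n_per_group))  # roughly 64 per group
```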
  • Within-groups variance/within-participants variability
    Variation in scores among identically treated individuals within the same group, i.e., those who experienced the same experimental conditions
  • If a study does not have enough power, you will not be able to find an effect even when one exists
  • If a study has an enormous number of participants and the effect size is still small, there may truly be no effect at all
  • The more power a test has, the narrower the confidence interval
  • Confidence interval
    Statistically determined interval estimate of a population parameter
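A sketch of a 95% confidence interval for a mean, using the t distribution (scores invented):

```python
import numpy as np
from scipy import stats

x = np.array([5.1, 4.8, 6.0, 5.5, 5.9, 4.7, 5.3, 5.6])
mean = x.mean()
sem = stats.sem(x)  # standard error of the mean
lo, hi = stats.t.interval(0.95, df=len(x) - 1, loc=mean, scale=sem)
print(f"95% CI: [{lo:.2f}, {hi:.2f}]")  # narrower with larger n / more power
```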
  • Independent samples t-test
    Compares the mean scores of two different groups of people
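Finally, a sketch of the test itself with SciPy (the group scores are invented):

```python
from scipy import stats

group_a = [23, 25, 28, 30, 27, 24]   # e.g., control group
group_b = [31, 29, 35, 33, 30, 34]   # e.g., treatment group
t, p = stats.ttest_ind(group_a, group_b)  # assumes equal variances
print(f"t = {t:.2f}, p = {p:.4f}")
# For unequal variances, use Welch's version: stats.ttest_ind(a, b, equal_var=False)
```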