PSYC3010 all


  • Factorial Design
    • Has at least two factors (IVs), each with at least two levels
    • Two IVs can be examined simultaneously
  • Advantages of Factorial Design
    • More economical in terms of participants
    • Allows us to examine the interaction of independent variables (assess generalisability)
  • Interactions in Factorial Designs
    • One IV interacts with another when the effects of one are different depending on the level of the other
    • That is, when it changes (moderates or qualifies) the impact of the second IV on the DV
  • Variance
    • Dispersion or spread of scores around a point of central tendency, e.g. mean
    • Error Variance: variability that cannot be explained; its sum of squares grows as more observations are added
    • Treatment Variance: systematic differences due to our IV
  • Three Questions of Two-Way ANOVA
    • Variance due to factor A? (df = a - 1)
    • Variance due to factor B? (df = b - 1)
    • Variance due to the AxB interaction? (df = (a - 1)(b - 1))
  • Structural Model of 2-way ANOVA
    Xijk = μ + αj + βk + αβjk + eijk: each score is the grand mean, plus the effect of its level of factor A, plus the effect of its level of factor B, plus the interaction effect for its cell, plus random error
  • Variance and Significance
    The more variability attributable to the effects (relative to error variance), the more likely they are to be significant
  • Assumptions of ANOVA
    • Population: normally distributed (normality) and have the same variance (homogeneity of variance)
    • Samples: Independent; obtained by random sampling; at least two observations per cell and equal n
    • Data (DV Scores): measured on continuous scale for mathematical operations (mean, SD, variance)
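    These can be spot-checked in software; a minimal sketch (not from the course materials), assuming SciPy and hypothetical group scores:

      import numpy as np
      from scipy import stats

      # hypothetical scores for three independent groups
      g1 = np.array([4.0, 5.0, 6.0, 5.5, 4.5])
      g2 = np.array([6.0, 7.0, 6.5, 8.0, 7.5])
      g3 = np.array([5.0, 5.5, 6.0, 6.5, 7.0])

      print(stats.shapiro(g1))         # normality within a group
      print(stats.levene(g1, g2, g3))  # homogeneity of variance across groups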
  • Effect Sizes
    • Have been proposed as an accompaniment to, if not a replacement for, significance testing, as they relay the implications of findings (the ANOVA decision is only binary)
    • Offer another way of assessing the reliability of results in terms of variance
    • Can compare the size of effects within a factorial design; Cohen's d benchmarks: 0.2 (small), 0.5 (medium), 0.8 (large)
  • Eta-Squared (η²)
    • η² = SSeffect / SStotal
    • Describes the proportion of variance in the SAMPLE'S DV scores that is accounted for by the effect
    • Considered biased (overestimates the population effect size)
  • Omega Squared (ω²)
    • ω² = (SSeffect - dfeffect × MSerror) / (SStotal + MSerror)
    • Describes the proportion of variance in the POPULATION'S DV scores that is accounted for by the effect
    • Less biased
    • The difference between η² and ω² is larger with smaller samples
  • Partial Eta-Squared
    • ηp² = SSeffect / (SSeffect + SSerror): the proportion of residual variance accounted for by the effect (the variance left over to be explained once the other effects are removed)
    • Usually more inflated than η²
    • Can add up to >100% across effects
    • Hard to make meaningful comparisons (all three effect sizes are computed in the sketch below)
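    A minimal sketch of all three effect sizes side by side; every SS and df value below is made up for illustration:

      # hypothetical values from a two-way ANOVA summary table
      ss_a, df_a = 40.0, 1       # effect of interest
      ss_b = 30.0                # other main effect
      ss_axb = 10.0              # interaction
      ss_error, df_error = 100.0, 36
      ss_total = ss_a + ss_b + ss_axb + ss_error
      ms_error = ss_error / df_error

      eta_sq = ss_a / ss_total                   # share of ALL sample variance
      partial_eta_sq = ss_a / (ss_a + ss_error)  # excludes variance due to B and AxB, so larger
      omega_sq = (ss_a - df_a * ms_error) / (ss_total + ms_error)  # less biased population estimate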
  • Following-Up Main Effects
    Use linear contrasts (protected t test) to determine if a set of groups is different from another set using weights (aj)
  • Following-Up Interactions
    Test of simple effects: simple effects test the effects of one factor at each level of the other factor
  • Variance Partitioning of Omnibus Tests
    Variance is partitioned into four parts: the effect due to the first factor, the effect due to the second factor, the effect due to the interaction, and error/residual (within-group) variance; the sketch below works through the partition
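    A minimal sketch of this four-way partition for a balanced two-way design, using hypothetical simulated scores (NumPy assumed); the F ratios at the end answer the three omnibus questions:

      import numpy as np

      # hypothetical balanced 2x3 design: data[i, j, :] holds the n scores in cell (Ai, Bj)
      rng = np.random.default_rng(0)
      a, b, n = 2, 3, 10
      data = rng.normal(loc=5.0, scale=1.0, size=(a, b, n))
      data[1] += 1.0                      # build in a main effect of factor A

      grand = data.mean()
      m_a = data.mean(axis=(1, 2))        # marginal means of A
      m_b = data.mean(axis=(0, 2))        # marginal means of B
      cells = data.mean(axis=2)           # cell means

      ss_a = b * n * ((m_a - grand) ** 2).sum()
      ss_b = a * n * ((m_b - grand) ** 2).sum()
      ss_cells = n * ((cells - grand) ** 2).sum()
      ss_axb = ss_cells - ss_a - ss_b                     # interaction: cell variation beyond main effects
      ss_error = ((data - cells[:, :, None]) ** 2).sum()  # within-cell residual variance

      ms_error = ss_error / (a * b * (n - 1))
      f_a = (ss_a / (a - 1)) / ms_error
      f_b = (ss_b / (b - 1)) / ms_error
      f_axb = (ss_axb / ((a - 1) * (b - 1))) / ms_error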
  • Partitioning of Simple Effects
    • Simple effects re-partition the main effect and interaction variance
    • Summed over the levels of factor 1, the SS for the simple effects of factor 2 equals the SS for factor 2's main effect plus the SS for the interaction
  • Simple Comparisons
    1. Follow up simple effects of interactions, comparing cell means rather than marginal means
    2. Somewhat redundant, explaining the same thing more than once
    3. Increases the family-wise error rate (use a Bonferroni correction, sketched below, or conduct the tests a priori to avoid this)
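    A minimal sketch of the family-wise problem and the Bonferroni fix; the number of comparisons is a made-up example:

      alpha = 0.05
      m = 6                                     # hypothetical number of simple comparisons
      fw_uncorrected = 1 - (1 - alpha) ** m     # family-wise error rate if each test uses alpha (~.26)
      alpha_bonf = alpha / m                    # Bonferroni: stricter criterion per test
      fw_corrected = 1 - (1 - alpha_bonf) ** m  # back near .05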
  • Higher-Order Factorial Designs
    • More than two independent factors
    • Allow for designs with higher external validity
  • Effects in HO Designs
    • Main Effects: Differences between marginal means of one factor averaging over levels of another
    • Two-Way Interactions: The effect of one factor changes depending on the level of another
    • Three-Way Interactions: The two-way interaction between two factors changes depending on the level of a third
  • Partitioning the Variance in Three-Way ANOVA (2x2x2)
    • 7 omnibus tests: 3 main effects, 3 two-way interactions, and the 3-way interaction; error/residual variance is the eighth partition
    • A larger partition for a factor indicates that its marginal means are very different from each other
  • Structural Models in Factorial ANOVA
    Each score is modelled as the grand mean plus one term per effect plus error; for a three-way design: Xijkl = μ + αj + βk + γl + αβjk + αγjl + βγkl + αβγjkl + eijkl
  • Following-Up Significant Omnibus Effects in a 3-Way ANOVA
    A significant interaction is followed by simple effects; if these are significant and the factor has more than 2 levels, by simple simple effects (the effect of factor A at each level of factor B, at each level of factor C); and, if still significant, by simple simple comparisons
  • Type 1 and 2 Errors
    • Type 1: Finding a significant difference in the sample when none actually exists in the population (α)
    • Type 2: Finding no significant difference in the sample when one actually exists in the population (β)
  • Power
    The probability of correctly rejecting the null hypothesis (= 1 - β)
  • Factors that affect power
    • MESS or SALE: Mean differences, Error Variance, Significance Level, Sample Size
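    A minimal sketch of how these factors combine, using SciPy's noncentral F distribution; the group count, per-group n, and effect size (Cohen's f) are all assumed values:

      from scipy.stats import f, ncf

      alpha = 0.05                # significance level
      a, n = 3, 20                # number of groups and per-group sample size
      f_es = 0.25                 # Cohen's f: mean differences relative to error variance

      df1, df2 = a - 1, a * (n - 1)
      lam = f_es ** 2 * a * n                     # noncentrality parameter
      f_crit = f.ppf(1 - alpha, df1, df2)         # critical F under the null
      power = 1 - ncf.cdf(f_crit, df1, df2, lam)  # power = 1 - beta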
  • Increasing Power
    • Improve operationalisation of variables (increases validity)
    • Improve measurement of variables (increases internal validity)
    • Improve design of your study (account for variance from other sources, e.g. blocking designs)
    • Improve methods of analysis (control for variance from other sources)
  • Blocking Designs
    • Introducing a variable into the design to reflect additional sources of variation or pre-existing differences on DV score
    • The blocking variable is a control or concomitant variable: associated with the DV, but the relationship is neither novel nor interesting
    • Generally match participants to blocking variable through stratified random assignment
    • Shouldn't interact with focal IV (often confound if it does)
  • Experimental vs Correlational Research
    • Experimental: Determine causation through manipulation of IVs in a controlled setting, assessing the effect on the DV
    • Correlational: Measures IVs (predictors) and assesses level of association with outcome/DV (criterion)
  • Covariance
    • Average cross-product of the deviation scores
    • Positive value = positive relationship
    • When one variable is above/below its mean, the other is likely to be above/below its mean too
    • Limitation: covariance is scale-dependent (see the sketch below)
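    A minimal sketch, assuming NumPy and hypothetical paired scores, of the average cross-product and its scale dependence:

      import numpy as np

      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
      y = np.array([2.0, 1.0, 4.0, 3.0, 5.0])

      # average cross-product of deviation scores (sample covariance, N - 1 denominator)
      cov_xy = ((x - x.mean()) * (y - y.mean())).sum() / (len(x) - 1)

      # scale dependence: the same relationship in different units gives a different number
      x_cm = 100 * x
      cov_cm = ((x_cm - x_cm.mean()) * (y - y.mean())).sum() / (len(x) - 1)  # 100 times cov_xy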
  • Correlation
    • Standardised covariance
    • Expresses the relationship between two variables in terms of standard deviations
    • Pearson's r, always -1 to +1
    • Known as the ZERO-ORDER CORRELATION
  • Interpreting Pearson's r in terms of variance
    • r² = the coefficient of determination, the proportion of explained variance
    • 1 - r² = the proportion of error or residual variance in the data (see the sketch below)
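    A minimal sketch (same hypothetical scores as the covariance sketch) of standardising the covariance into r and reading off r²:

      import numpy as np

      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
      y = np.array([2.0, 1.0, 4.0, 3.0, 5.0])

      cov_xy = ((x - x.mean()) * (y - y.mean())).sum() / (len(x) - 1)
      r = cov_xy / (x.std(ddof=1) * y.std(ddof=1))  # standardised covariance, always -1 to +1
      r_sq = r ** 2                                 # coefficient of determination
      resid = 1 - r_sq                              # proportion of error/residual variance
      # rescaling x (e.g. 100 * x) changes the covariance but leaves r unchanged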
  • Testing r for Significance
    1. Is r large enough to conclude that there is a non-zero correlation in the population?
    2. t = systematic variance over error variance: t = r√(N - 2) / √(1 - r²) (see the sketch below)
    3. df = N - 2
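    A minimal sketch, with hypothetical scores, checking the hand formula for t against scipy.stats.pearsonr:

      import numpy as np
      from scipy import stats

      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
      y = np.array([2.0, 1.0, 4.0, 3.0, 6.0, 5.0])
      N = len(x)

      r, p = stats.pearsonr(x, y)
      t = r * np.sqrt(N - 2) / np.sqrt(1 - r ** 2)  # systematic over error variance
      p_manual = 2 * stats.t.sf(abs(t), df=N - 2)   # matches the p from pearsonr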
  • r as a population estimate = radj
    • r is a sample statistic and is biased to the sample (like eta-squared)
    • Can calculate rho (ρ), the population correlation coefficient; ρ is estimated by the "adjusted r" (like omega squared)
    • radj is always smaller than r (more conservative)
    • The difference between r and radj becomes smaller as sample size increases
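    A minimal sketch of one standard formula for radj (the bivariate case of adjusted R²); r and N are made-up values:

      import numpy as np

      r, N = 0.50, 20
      r_adj = np.sqrt(1 - (1 - r ** 2) * (N - 1) / (N - 2))  # always smaller than r
      # with N = 200 in place of 20, r_adj lands much closer to r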
  • Correlation and Predictions
    • = REGRESSION, estimating a score on one variable (Y, criterion) on the basis of scores on another variable (X, predictor)
    • Correlational designs can only infer causality from theory, not from the design itself
    • Objective is to find the best fitting line
  • Bivariate Regression Equation
    • Yhat = bX + a
    • Yhat = predicted value of Y (DV)
    • b = slope of regression line (change in Y with 1-unit change in X)
    • X = value of predictor (IV)
    • a = intercept (value of Y when X = 0)
  • Standardising the Regression Slope
    • b becomes a standardised regression coefficient (beta, β)
    • β = the z-score change in Y predicted from a 1 SD increase in X (see the sketch below)
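    A minimal sketch, with hypothetical scores, of fitting Yhat = bX + a by least squares and standardising the slope:

      import numpy as np

      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
      y = np.array([2.0, 1.0, 4.0, 3.0, 6.0, 5.0])

      b = np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)  # slope: covariance over variance of X
      a = y.mean() - b * x.mean()                     # intercept: line passes through both means
      y_hat = b * x + a                               # predicted values

      beta = b * x.std(ddof=1) / y.std(ddof=1)  # standardised slope; equals r in the bivariate case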
  • Error in Regression
    • If a score is different from the average, it is error (lenient)
    • If a score is different from the predicted value (regression line) it is error (conservative)
    • Average deviation of scores from the regression line
    • Called STANDARD ERROR OF THE ESTIMATE
  • Standard Error of the Estimate
    • Sy.x reflects the amount of variability around the regression line (the spread of Y conditional on X)
    • Can be calculated as √(SSerror / df), or as the SD of the DV times √(1 - r²) (see the sketch below)
    • If r² is zero, no variance is explained and there is no association between the IV and the DV; the standard error of the estimate then equals the SD of the DV
    • With a useful predictor, it should be much smaller than the SD of the DV
    • The regression line is fitted according to the least squares criterion: such that Σ(Y - Yhat)² is a minimum, i.e. such that errors of prediction are minimised
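    A minimal sketch, with hypothetical scores, of both routes to Sy.x; the df correction makes the two agree exactly:

      import numpy as np

      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
      y = np.array([2.0, 1.0, 4.0, 3.0, 6.0, 5.0])
      N = len(x)

      b = np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)
      a = y.mean() - b * x.mean()
      ss_error = ((y - (b * x + a)) ** 2).sum()

      see = np.sqrt(ss_error / (N - 2))  # sqrt(SSerror / df)
      r = np.corrcoef(x, y)[0, 1]
      see_from_r = y.std(ddof=1) * np.sqrt((1 - r ** 2) * (N - 1) / (N - 2))  # same value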
  • What SEE tells us
    • Bigger rxy leads to smaller Sy.x
    • A high correlation between X and Y reduces the SEE and enhances the accuracy of the prediction
    • r² is overly liberal with small samples, and so Sy.x is underestimated for small samples
  • Significance of the Regression Slope
    1. b and β, like r, can be tested for significance using a t-test
    2. t = (b × sx × √(N - 1)) / Sy.x (checked in the sketch below)
    3. H0 is that b = 0 (no change in Y when X increases by 1 unit)
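    A minimal sketch, with hypothetical scores, checking the hand formula against scipy.stats.linregress:

      import numpy as np
      from scipy import stats

      x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
      y = np.array([2.0, 1.0, 4.0, 3.0, 6.0, 5.0])
      N = len(x)

      res = stats.linregress(x, y)
      ss_error = ((y - (res.slope * x + res.intercept)) ** 2).sum()
      s_yx = np.sqrt(ss_error / (N - 2))  # standard error of the estimate

      t = res.slope * x.std(ddof=1) * np.sqrt(N - 1) / s_yx  # hand formula
      # res.slope / res.stderr gives the same t; res.pvalue tests H0: b = 0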