Chapter 15

  • Test of Difference
    An investigation of a hypothesis that two (or more) groups differ with respect to measures on a variable
  • Bivariate Tests of Differences
    Involve only two variables: one that acts as the dependent variable and one that acts as a classification variable
  • Choosing the right statistic depends on:
    • Type of measurement
    • Nature of the comparison
    • Number of groups to be compared
  • Common Bivariate Tests
    • Mann-Whitney U-test
    • Wilcoxon test
    • Kruskal-Wallis test
    • Z-test (two proportions)
    • Chi-square test
    • t-test or Z-test
    • One-way ANOVA
  • Cross-Tabulation (Contingency) Table
    A joint frequency distribution of observations on two or more variables
  • Chi-Square Test
    Provides a means for testing the statistical significance of a contingency table
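
A minimal sketch of the test in Python, assuming scipy is available; the 2x2 counts are hypothetical:

```python
# Chi-square test on a hypothetical 2x2 cross-tabulation
# (rows: two groups; columns: yes/no responses).
from scipy.stats import chi2_contingency

observed = [[45, 55],
            [65, 35]]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```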
  • Independent Samples t-Test
    A test of the hypothesis that mean scores on an interval- or ratio-scaled variable are not the same across groups formed by a less-than-interval classificatory variable
  • Pooled Estimate of the Standard Error
    An estimate of the standard error for a t-test of independent means that assumes the variances of both groups are equal
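
A minimal sketch covering the two cards above, assuming scipy; the scores are hypothetical. With equal_var=True, scipy's ttest_ind uses the pooled variance estimate:

```python
import numpy as np
from scipy.stats import ttest_ind

group1 = np.array([24, 27, 31, 29, 25, 28])  # hypothetical scores, group 1
group2 = np.array([22, 20, 25, 23, 21, 24])  # hypothetical scores, group 2

# equal_var=True pools the two sample variances (pooled standard error)
t_stat, p_value = ttest_ind(group1, group2, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# The pooled standard error computed by hand, for comparison
n1, n2 = len(group1), len(group2)
sp2 = ((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2)
se_pooled = np.sqrt(sp2 * (1 / n1 + 1 / n2))
print(f"pooled SE = {se_pooled:.3f}, t by hand = {(group1.mean() - group2.mean()) / se_pooled:.3f}")
```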
  • Paired-Samples t-Test
    An appropriate test for comparing the scores of two interval variables drawn from related populations
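
A minimal sketch of the paired test, assuming scipy; the before/after ratings from the same six respondents are hypothetical:

```python
from scipy.stats import ttest_rel

# Hypothetical before/after ratings from the same respondents
before = [6.1, 5.8, 7.2, 6.5, 6.9, 5.5]
after  = [6.8, 6.0, 7.9, 7.1, 7.4, 6.2]

t_stat, p_value = ttest_rel(before, after)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```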
  • Z-Test for Differences of Proportions
    A technique used to test the hypothesis that proportions are significantly different for two independent samples or groups
    • Z-test for comparing two proportions
    • Z-test statistic for differences in large random samples
  • Sp1-p2
    Pooled estimate of the standard error of the difference between two proportions
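
A minimal sketch of the two-proportion Z-test, computing the pooled standard error Sp1-p2 by hand; the counts are hypothetical, and scipy's norm is used only for the p-value:

```python
import math
from scipy.stats import norm

x1, n1 = 60, 200   # hypothetical successes / sample size, group 1
x2, n2 = 45, 200   # hypothetical successes / sample size, group 2

p1, p2 = x1 / n1, x2 / n2
p_pooled = (x1 + x2) / (n1 + n2)

# Sp1-p2: pooled estimate of the standard error of the difference in proportions
se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))   # two-tailed
print(f"z = {z:.3f}, p = {p_value:.4f}")
```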
  • One-Way Analysis of Variance (ANOVA)
    An analysis of the effects of one treatment variable on an interval-scaled dependent variable; a hypothesis-testing technique to determine whether statistically significant differences in means occur between two or more groups
  • ANOVA tests whether "grouping" observations explains variance in the dependent variable
  • The substantive hypothesis tested in ANOVA is: At least one group mean is not equal to another group mean
  • Grand Mean
    The mean of a variable over all observations
  • Between-Groups Variance
    The sum of squared differences between each group mean and the grand mean, summed over all groups for a given set of observations
  • Within-Group Error or Variance
    The sum of squared differences between observed values and their group mean for a given set of observations; also known as Total Error Variance
  • F-Test
    A procedure used to determine whether there is more variability in the scores of one sample than in the scores of another sample
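
A minimal sketch tying the ANOVA cards together: it computes the grand mean, between-groups and within-group sums of squares, and F by hand, then checks the result against scipy's f_oneway. The three treatment groups are hypothetical:

```python
import numpy as np
from scipy.stats import f_oneway

# Three hypothetical treatment groups
groups = [np.array([18, 20, 22, 21]),
          np.array([25, 27, 24, 26]),
          np.array([30, 29, 31, 28])]

all_obs = np.concatenate(groups)
grand_mean = all_obs.mean()                      # grand mean over all observations

# Between-groups: squared deviations of each group mean from the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Within-group: squared deviations of observations from their own group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between = len(groups) - 1
df_within = len(all_obs) - len(groups)
f_by_hand = (ss_between / df_between) / (ss_within / df_within)

f_stat, p_value = f_oneway(*groups)              # scipy's one-way ANOVA
print(f"F by hand = {f_by_hand:.3f}, scipy F = {f_stat:.3f}, p = {p_value:.4f}")
```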
  • General Linear Model (GLM)
    A way of explaining and predicting a dependent variable based on fluctuations (variation) from its mean due to changes in independent variables
  • Regression Analysis
    A measure of linear association that investigates straight-line relationships between a continuous dependent variable and an independent variable that is usually continuous, but can be a categorical dummy variable
  • Correlation Coefficient
    A statistical measure of the covariation, or association, between two at-least interval variables
  • Correlation does not imply causation
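
A minimal sketch of a Pearson correlation between two interval-scaled variables, assuming scipy; the paired observations are hypothetical:

```python
from scipy.stats import pearsonr

# Hypothetical paired observations on two interval variables
ad_spend = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
sales = [2.1, 3.9, 6.2, 7.8, 9.9, 12.3]

r, p_value = pearsonr(ad_spend, sales)
print(f"r = {r:.3f}, p = {p_value:.4f}")  # measures association, not causation
```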
  • Standardized Regression Coefficient (β)
    Estimated coefficient of the strength of the relationship between the independent and dependent variables, expressed on a standardized scale where higher absolute values indicate stronger relationships (range: -1 to +1)
  • Slope of the Regression Line
    Rise over run: the change in Y associated with a one-unit change in X
  • β
    Indicative of the strength and direction of the relationship between the independent and dependent variables
  • α (Y intercept)
    A constant: the expected value of Y when X equals zero
  • Raw regression estimates (b1)
    Raw regression weights have the advantage of retaining the scale metric, which is also their key disadvantage. If the purpose of the regression analysis is forecasting, then raw parameter estimates must be used.
  • Standardized regression estimates (β1)
    Standardized regression estimates have the advantage of a constant scale. Standardized regression estimates should be used when the researcher is testing explanatory hypotheses.
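
A minimal sketch contrasting the raw weight b1 with the standardized weight β1 (the slope after z-scoring both variables); the data are hypothetical and numpy's polyfit stands in for a full regression routine:

```python
import numpy as np

# Hypothetical interval-scaled X and Y
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 9.9, 12.3])

# Raw estimates retain the original scale metric (use for forecasting)
b1, b0 = np.polyfit(x, y, 1)

# Standardizing both variables first yields the scale-free beta weight
zx = (x - x.mean()) / x.std(ddof=1)
zy = (y - y.mean()) / y.std(ddof=1)
beta1, _ = np.polyfit(zx, zy, 1)

print(f"b0 = {b0:.3f}, b1 = {b1:.3f}, beta1 = {beta1:.3f}")
```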
  • Multiple Regression Analysis
    An analysis of association in which the effects of two or more independent variables on a single, interval-scaled dependent variable are investigated simultaneously.
  • Dummy variable
    The way a dichotomous (two-group) independent variable is represented in regression analysis by assigning 0 to one group and 1 to the other.
  • Regression Equation: Y = b0 + b1X1 + b2X2 + b3X3 + ... + biXi
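
A minimal sketch of multiple regression with a 0/1 dummy variable, assuming statsmodels; the variable names and simulated data are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 50
income = rng.uniform(20, 80, n)                  # continuous independent variable
urban = rng.integers(0, 2, n)                    # dummy: 0 = one group, 1 = the other
spend = 5 + 0.4 * income + 6 * urban + rng.normal(0, 3, n)   # simulated Y

X = sm.add_constant(np.column_stack([income, urban]))
model = sm.OLS(spend, X).fit()
print(model.params)                              # b0, b1 (income), b2 (urban dummy)
print(f"R-squared = {model.rsquared:.3f}")
```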
  • Coefficient of multiple determination (R2): 0.845 (example)
    F-value = 14.6; p < .05
  • Partial correlation
    The correlation between two variables after removing the effects of other variables with which they are both correlated.
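
A minimal sketch of a partial correlation via the residual method (correlate the parts of x and y that a third variable z does not explain); the simulated data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
z = rng.normal(size=n)                 # hypothetical third variable
x = 0.7 * z + rng.normal(size=n)       # x and y are both correlated with z
y = 0.7 * z + rng.normal(size=n)

def residuals(v, w):
    # Residuals of a simple regression of v on w
    # (the part of v that w does not explain)
    slope, intercept = np.polyfit(w, v, 1)
    return v - (intercept + slope * w)

r_raw = np.corrcoef(x, y)[0, 1]
r_partial = np.corrcoef(residuals(x, z), residuals(y, z))[0, 1]
print(f"raw r = {r_raw:.3f}, partial r (controlling for z) = {r_partial:.3f}")
```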
  • R2 in Multiple Regression
    The coefficient of multiple determination in multiple regression indicates the percentage of variation in Y explained by all independent variables.
  • Coefficients of Partial Regression (bn)
    The regression weights for individual independent variables in multiple regression; each expresses the change in the dependent variable associated with a one-unit change in that independent variable, holding the other independent variables constant.
  • F-test
    Tests the statistical significance of a regression model by comparing the variation explained by the regression equation to the residual error variation.
  • F = MSR / MSE = (SSR / k) / (SSE / (n - k - 1))
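
A minimal sketch of the regression F statistic computed by hand, where SSR is the variation explained by the regression and SSE the residual error variation; the data are the same hypothetical simple regression (k = 1) used above:

```python
import numpy as np

# Hypothetical simple regression (k = 1 independent variable)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 9.9, 12.3])

b1, b0 = np.polyfit(x, y, 1)
y_hat = b0 + b1 * x

k = 1                                      # number of independent variables
n = len(y)
ssr = ((y_hat - y.mean()) ** 2).sum()      # variation explained by the regression
sse = ((y - y_hat) ** 2).sum()             # residual error variation
f_stat = (ssr / k) / (sse / (n - k - 1))   # F = MSR / MSE
print(f"F = {f_stat:.2f}")
```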