C5 Variability

Cards (27)

  • Deviation scores are distances from the mean.
  • Deviation scores can be used to measure variability.
  • A deviation score, X − X̄, indicates the distance of a score from the mean.
  • The variance, which we denote with the symbol S², is the mean of the squared deviation scores: S² = Σ(X − X̄)²/n.
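    For example, with the made-up scores 2, 4, and 6: X̄ = 4, the squared deviations are (2 − 4)² = 4, (4 − 4)² = 0, and (6 − 4)² = 4, so S² = (4 + 0 + 4)/3 ≈ 2.67.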
  • SS stands for sum of squares.
  • Sum of squares refers to the sum of squared deviations from the mean, Σ(X − X̄)².
  • The standard deviation, S, is the square root of the variance.
  • Calculating the Standard Deviation
    Step 1: Find X̄. Step 2: Subtract the mean from each score, X − X̄. Step 3: Square each deviation, (X − X̄)². Step 4: Sum the values of (X − X̄)² to obtain SS. Step 5: Take the square root of SS/n to obtain S.
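    A minimal Python sketch of these five steps, using made-up scores (the data are illustrative, not from the text):

      # Five-step standard deviation calculation, S = sqrt(SS / n).
      from math import sqrt

      scores = [2, 4, 6, 8, 10]                  # hypothetical data

      n = len(scores)
      mean = sum(scores) / n                     # Step 1: find X-bar
      deviations = [x - mean for x in scores]    # Step 2: X - X-bar
      squared = [d ** 2 for d in deviations]     # Step 3: (X - X-bar)^2
      ss = sum(squared)                          # Step 4: SS
      s = sqrt(ss / n)                           # Step 5: S = sqrt(SS / n)

      print(f"mean = {mean}, SS = {ss}, S = {s:.3f}")   # mean = 6.0, SS = 40.0, S = 2.828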
  • The standard deviation is expressed in the original units of measurement.
  • The standard deviation is the square root of the mean of the squared deviation scores.
  • The variance and standard deviation appear in the analysis of data more often than any other measure of variability because both are more mathematically tractable than the range (and its mathematical relatives): they lend themselves readily to arithmetic and algebraic manipulation.
  • The variance and standard deviation also have the virtue of greater sampling stability: in repeated random samples, their values tend to jump around less than the range and related indices. This smaller sampling variation is of great importance in statistical inference.
  • To adequately appraise a difference between two means, one must take into account the underlying scale, or metric, on which the means are based.
  • The numerical size of a mean difference often is difficult to interpret without taking into account the standard deviation.
  • When a mean difference is divided by a "pooled" standard deviation, the resulting index is called an effect size (ES).
  • The pooled standard deviation describes the average spread of all data points about their own group mean (not the overall mean). It is obtained from a weighted average of the group variances, and the weighting gives larger groups a proportionally greater effect on the overall estimate.
  • The problem of comparing the means of two distributions occurs frequently in both descriptive and inferential statistics.
  • The difference between descriptive and inferential statistics is that descriptive statistics state facts about the data at hand, whereas inferential statistics analyze samples to make predictions about larger populations.
  • Steps to calculate the effect size that corresponds to the difference between the two means: 1. Calculate the difference between the two means. 2. Calculate each SS from its standard deviation. 3. Determine the pooled standard deviation. 4. Divide the mean difference by the pooled standard deviation.
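    A short Python sketch of these four steps with two made-up groups. Here each SS is computed directly from the raw scores (equivalent to recovering it from the group's standard deviation), and the pooled standard deviation is taken as √((SS₁ + SS₂)/(n₁ + n₂)), consistent with S = √(SS/n) above; some texts divide by n₁ + n₂ − 2 instead.

      # Effect size (ES) = mean difference / pooled standard deviation.
      from math import sqrt

      group1 = [4, 6, 8, 10]     # hypothetical data
      group2 = [3, 5, 5, 7]

      def mean_and_ss(scores):
          m = sum(scores) / len(scores)
          return m, sum((x - m) ** 2 for x in scores)

      m1, ss1 = mean_and_ss(group1)
      m2, ss2 = mean_and_ss(group2)

      diff = m1 - m2                                                # Step 1: mean difference
      s_pooled = sqrt((ss1 + ss2) / (len(group1) + len(group2)))    # Steps 2-3: pooled SD from the SSs
      es = diff / s_pooled                                          # Step 4: effect size

      print(f"difference = {diff}, pooled SD = {s_pooled:.3f}, ES = {es:.3f}")
      # difference = 2.0, pooled SD = 1.871, ES = 1.069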
  • Measures of variability are important in describing distributions, and they play a particularly vital role in statistical inference.
  • Three measures of variability are range, variance, and standard deviation. The range gives the distance between the high score and the low score. The variance is the mean of the squared deviations, and the standard deviation is the square root of that quantity.
  • In comparison to the range, the variance and standard deviation are mathematically more tractable and are more stable from sample to sample.
  • Variance is calculated as the average of the squares of the differences between each data point and the mean.
  • Standard Deviation (SD) is the positive square root of the variance.
  • The margin of error is a statistic expressing the amount of random sampling error in the results of a survey. The larger the margin of error, the less confidence one should have that a poll result would reflect the result of a census of the entire population.
  • margin of error = Zᵧ ⋅ √( p(1 − p) / n )
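    A minimal Python sketch of this formula; the confidence level, proportion, and sample size below are illustrative assumptions (1.96 is the z value for 95% confidence):

      # Margin of error for a sample proportion: z * sqrt(p(1 - p) / n).
      from math import sqrt

      z = 1.96     # critical value for the chosen confidence level
      p = 0.52     # sample proportion (hypothetical)
      n = 1000     # sample size (hypothetical)

      margin_of_error = z * sqrt(p * (1 - p) / n)
      print(f"margin of error = {margin_of_error:.3f}")   # about 0.031, roughly ±3 percentage points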
  • A larger standard deviation shows greater variation.