Measures of dispersion are descriptive statistics that describe how similar the scores in a set are to each other
The more similar the scores are to each other, the lower the measure of dispersion will be
The less similar the scores are to each other, the higher the measure of dispersion will be
In general, the more spread out a distribution is, the larger the measure of dispersion will be
Measures of Dispersion
It is a measure of the extent to which individual items vary.
Indicator of consistency among a data set
Indicates how close data are clustered about a measure of central tendency
When the spread of the data around the central item is high, the mean or median is less meaningful; a low spread enhances the meaningfulness of the mean or median.
Range
It is the simplest measure of dispersion.
It is defined as the difference between the largest score in the set of data and the smallest score in the set of data.
Range = HS − LS
The range is used when you have ordinal data or you are presenting your results to people with little or no knowledge of statistics.
Even though it is simple, the range is not a reliable measure of dispersion, especially when the data contain outliers
Interquartile Range
It tells the range of scores between the 1st and 3rd quartile measures.
IQR = Q3-Q1
Semi-Interquartile Range
It is simply half the interquartile range, measuring the spread between the median and either Q1 or Q3.
SIQR = IQR/2 = (Q3 − Q1)/2
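The three measures above can be sketched in a few lines of Python using the standard library's statistics module; the scores here are made up for illustration, and the quartile method ("inclusive") is one of several conventions for computing Q1 and Q3.

```python
from statistics import quantiles

def dispersion_summary(scores):
    """Return (range, IQR, SIQR) for a list of scores."""
    # "inclusive" treats the data as the whole population; other
    # quartile conventions can give slightly different Q1/Q3 values.
    q1, q2, q3 = quantiles(scores, n=4, method="inclusive")
    rng = max(scores) - min(scores)   # Range = HS - LS
    iqr = q3 - q1                     # IQR = Q3 - Q1
    siqr = iqr / 2                    # SIQR = IQR / 2
    return rng, iqr, siqr

scores = [2, 4, 4, 5, 6, 7, 9]       # hypothetical data
print(dispersion_summary(scores))
```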
Variance
Variance is defined as the average of the squared deviations.
Variance is a measure of dispersion of observations within a data set.
The symbol for population variance is σ², while the symbol for sample variance is s².
Standard Deviation
Standard deviation is defined as the square root of the variance.
It is a numerical value that describes the variability of observations from its central tendency measure.
The symbol for population standard deviation is σ, while the symbol for sample standard deviation is s or SD.
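As a quick check on these definitions, a minimal sketch with Python's standard statistics module and invented scores; the population versions divide the summed squared deviations by n, the sample versions by n − 1.

```python
from statistics import pvariance, pstdev, variance, stdev

data = [2, 4, 4, 4, 5, 5, 7, 9]   # hypothetical scores; mean = 5

# Population variance and standard deviation: divide by n
print(pvariance(data), pstdev(data))

# Sample variance and standard deviation: divide by n - 1
print(variance(data), stdev(data))
```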
Independent Samples t-Test
The independent samples t-test is probably the single most widely used test in statistics.
It is used to compare differences between separate groups.
In Psychology, these groups are often formed by randomly assigning research participants to conditions.
However, this test can also be used to explore differences in naturally occurring groups.
For example, we may be interested in differences of emotional intelligence between males and females.
Any differences between groups can be explored with the independent t-test, as long as the tested members of each group are reasonably representative of the population.
The first step in calculating the independent samples t-test is to calculate the variance and mean in each condition.
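Following that first step, the pooled-variance t statistic can be hand-rolled from the group means and variances; the groups below are invented for illustration, and this sketch assumes equal variances (the classic pooled form).

```python
from statistics import mean, variance
from math import sqrt

def independent_t(x, y):
    """Pooled-variance independent samples t statistic (equal variances assumed)."""
    n1, n2 = len(x), len(y)
    v1, v2 = variance(x), variance(y)       # sample variances (n - 1 denominator)
    # Pool the two variances, weighting by degrees of freedom
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    se = sqrt(pooled * (1 / n1 + 1 / n2))   # standard error of the mean difference
    return (mean(x) - mean(y)) / se

group_a = [5, 7, 8, 9, 6]   # hypothetical condition A
group_b = [3, 4, 5, 4, 4]   # hypothetical condition B
print(round(independent_t(group_a, group_b), 3))
```

In practice a library routine (e.g. SciPy's ttest_ind) would also return the p-value; the sketch above only produces the t statistic itself.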
One-Way ANOVA
The one-way analysis of variance is used to test the claim that three or more population/sample means are equal
This is an extension of the independent samples t-test
The response variable is the variable you're comparing
The factor variable is the categorical variable being used to define the groups
We will assume k samples (groups)
It is called one-way because each value is classified in exactly one way
Examples include comparisons by gender, race, political party, color, etc.
Conditions or Assumptions
The data are randomly sampled
The variances of each sample are assumed equal
The residuals are normally distributed
The null hypothesis is that the means are all equal
The alternative hypothesis is that at least one of the means is different
The ANOVA doesn't test whether one mean is less than another, only whether they're all equal or at least one is different.
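The F statistic behind this test can be sketched directly from the between-group and within-group sums of squares; the three groups below are hypothetical data chosen only to illustrate the computation.

```python
from statistics import mean

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA with k groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean([x for g in groups for x in g])
    # Between-groups sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-groups sum of squares (n - k degrees of freedom)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

print(one_way_anova_f([1, 2, 3], [2, 3, 4], [5, 6, 7]))
```

A large F means the variability between group means is large relative to the variability within groups, which is evidence against the null hypothesis that all means are equal.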
Correlation
The relationship between two variables
Measured with a correlation coefficient
Most popularly seen correlation coefficient: Pearson Product-Moment Correlation
Types of Correlation
Positive correlation
High values of X tend to be associated with high values of Y.
As X increases, Y increases
Negative correlation
High values of X tend to be associated with low values of Y.
As X increases, Y decreases
No correlation
No consistent tendency for values on Y to increase or decrease as X increases
Correlation Coefficient
A measure of degree of relationship.
Ranges between −1 and +1
Sign refers to direction.
Based on covariance
Measure of degree to which large scores on X go with large scores on Y, and small scores on X go with small scores on Y
Think of it as variance, but with 2 variables instead of 1: instead of averaging the squared deviations of X alone, covariance averages the products of paired deviations of X and Y.
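The covariance-based definition above can be sketched directly: the Pearson product-moment coefficient is the sum of paired deviation products divided by the product of the two deviation magnitudes. The data here are invented; the second series is an exact linear function of the first, so r should come out at (essentially) 1.

```python
from statistics import mean
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation: covariance scaled by both spreads."""
    mx, my = mean(x), mean(y)
    # Sum of products of paired deviations (the covariance numerator)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    # Deviation magnitudes of X and Y (shared n cancels, so it is omitted)
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]   # perfectly linear in x, so r is 1
print(pearson_r(x, y))
```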