A measure of the degree of linear relationship between two variables. The emphasis is on the degree to which a linear model may describe the relationship between the variables
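As a quick illustration (not part of the original notes), Pearson's r can be computed with SciPy; the variables and values below are made up for the example.

```python
import numpy as np
from scipy import stats

# Hypothetical data: hours studied vs. exam score
hours = np.array([1, 2, 3, 4, 5, 6, 7, 8])
score = np.array([52, 55, 61, 60, 68, 70, 75, 79])

# Pearson's r measures how well a straight line describes the relationship
r, p = stats.pearsonr(hours, score)
print(f"r = {r:.2f}, p = {p:.3f}")
```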
The difference between the observed value and the predicted value (observed minus predicted). A residual is positive when the observed value is higher than the predicted value, negative when the observed value is lower than the predicted value, and zero when the observed value equals the predicted value
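A minimal sketch of how the sign of a residual follows from observed minus predicted, using hypothetical values:

```python
# Hypothetical observed and predicted values
observed = 68.0
predicted = 65.5

residual = observed - predicted  # observed minus predicted
print(residual)  # 2.5 -> positive: the observed value lies above the line
```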
The line that produces the smallest sum of squared residuals, often referred to as the "least squares" line. It is used to predict the value of one variable from the value of another
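A short sketch, on made-up data, of fitting a least-squares line with scipy.stats.linregress and computing the sum of squared residuals that the fit minimises:

```python
import numpy as np
from scipy import stats

# Hypothetical data
x = np.array([1, 2, 3, 4, 5, 6, 7, 8])
y = np.array([52, 55, 61, 60, 68, 70, 75, 79])

# Fit the least-squares line y = intercept + slope * x
fit = stats.linregress(x, y)
predicted = fit.intercept + fit.slope * x
residuals = y - predicted

print(f"line: y = {fit.intercept:.1f} + {fit.slope:.2f}x")
print(f"sum of squared residuals = {np.sum(residuals ** 2):.2f}")
```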
The scatter plot does not reveal a linear relationship, but it does show a pattern in the data. Correlation analyses can still be conducted on data with a curvilinear relationship if the relationship is monotonic
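For monotonic but non-linear data, one common choice is Spearman's rho, which ranks the scores before correlating them. The sketch below uses made-up data that rise in a curved but steadily increasing pattern:

```python
import numpy as np
from scipy import stats

# Hypothetical curvilinear but monotonic data: y grows roughly as x squared
x = np.array([1, 2, 3, 4, 5, 6, 7, 8])
y = x ** 2 + np.array([0.3, -0.1, 0.4, 0.0, -0.2, 0.5, 0.1, -0.3])

# Spearman's rho works on ranks, so it captures any monotonic trend
rho, p = stats.spearmanr(x, y)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```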
Homoscedasticity: the assumption that the errors of prediction have equal variance across all predicted values. If the variances are unequal, the data are heteroscedastic and a non-parametric test is needed
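A rough, informal check (not a formal test) is to compare the spread of the residuals at low versus high predicted values; the numbers below are hypothetical:

```python
import numpy as np

# Hypothetical residuals, ordered by their predicted values (low to high)
residuals = np.array([-1.2, 0.8, 2.1, -0.5, 1.9, -2.3, 0.4, -1.1])

# Under homoscedasticity, the spread should be similar in both halves
low_half, high_half = residuals[:4], residuals[4:]
print(f"SD at low predicted values:  {low_half.std(ddof=1):.2f}")
print(f"SD at high predicted values: {high_half.std(ddof=1):.2f}")
```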
The p value indicates whether a result is statistically significant. A p value less than .05 means that, if the null hypothesis were true, there would be less than a 5% chance of obtaining a result at least this extreme, i.e., a low risk of rejecting the null hypothesis by mistake
The conventional cut-offs for reporting significance are: .05 (less than 5% chance of error), .01 (less than 1% chance of error), and .001 (less than 0.1% chance of error). Either the exact p value or the appropriate conventional cut-off is reported when stating statistical significance
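A small, purely illustrative helper showing how an exact p value maps onto these conventional cut-offs when reporting:

```python
# Hypothetical helper: map a p value to the conventional reporting cut-off
def significance_label(p):
    if p < 0.001:
        return "p < .001"
    if p < 0.01:
        return "p < .01"
    if p < 0.05:
        return "p < .05"
    return "not statistically significant (p >= .05)"

print(significance_label(0.0004))  # p < .001
print(significance_label(0.032))   # p < .05
```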
Cohen (1988, pp. 79-81) suggests the following guidelines: small r = .10 to .29, medium r = .30 to .49, large r = .50 to 1.0. These guidelines apply to the absolute value of r; the sign indicates the direction of the relationship, not its strength
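A small, purely illustrative helper applying Cohen's guidelines to the absolute value of r:

```python
# Hypothetical helper: label effect size using Cohen's (1988) guidelines for r
def effect_size_label(r):
    r = abs(r)  # sign shows direction, not strength
    if r >= 0.50:
        return "large"
    if r >= 0.30:
        return "medium"
    if r >= 0.10:
        return "small"
    return "below Cohen's small threshold (< .10)"

print(effect_size_label(-0.42))  # medium
```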