Inferential Statistics

  • Differences between inferential and descriptive statistics:
    • descriptive - uses data to describe the sample itself, through numerical calculations, graphs or tables
    • inferential - takes data from a sample and makes inferences about the larger population from which the sample was drawn. These conclusions are generalised to the target population, so we have to be confident that our sample represents that population
  • Experimental and null hypotheses
    when conducting psychological research, we must construct a hypothesis against which to test the significance of our findings
    • experimental - predicts there will be a difference (the IV affects the DV) or a relationship between variables. This can be directional (one-tailed) or non-directional (two-tailed)
    • null - predicts there is no difference (or relationship) between the variables; if the results are non-significant we cannot accept the experimental hypothesis, so we accept the null instead
  • Probability and significance:
    probability is the likelihood that the results are due to chance
    • this is written as p=0.05 (this indicates the significance level)
    • p represents the probability (here 5%) that the results are due to chance, so the number should be low
    • for results to be significant, p<0.05. This means there is less than a 5% probability that the results are due to chance, so we can be at least 95% confident that the results reflect a genuine difference
  • Significance and hypothesis:
    • if the results of the study are significant (p<0.05) then we accept the experimental hypothesis and reject the null
    • if the results aren't significant (p>0.05) then we accept the null hypothesis and reject the experimental
    we use a 5% (0.05) level of significance to balance the risk of making type 1 and type 2 errors
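
A minimal Python sketch (not part of the original cards) of this decision rule; the p-values passed in are invented purely for illustration:

```python
# Hedged sketch of the 0.05 significance rule described above.

ALPHA = 0.05  # conventional 5% significance level


def decide(p_value, alpha=ALPHA):
    """Return which hypothesis the result supports at the given level."""
    if p_value < alpha:
        return "significant: accept the experimental hypothesis, reject the null"
    return "not significant: accept the null hypothesis, reject the experimental"


print(decide(0.03))  # significant at p<0.05
print(decide(0.20))  # not significant
```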
  • Type 1 error
    this is where a psychologist accepts the experimental hypothesis and rejects the null when they should not, because the result was actually due to chance (a false positive)
    • we can see this when someone repeats a study and gets different results from the first study
    • this type of error is less likely with a more stringent significance level of p<0.01, as there is a 99% confidence level
  • Type 2 error
    this is where a psychologist rejects the experimental hypothesis and accepts the null when they should not, because the results were not actually due to chance (a false negative)
  • Level of significance:
    • p<0.01 (1%) - stringent
    • p<0.05 (5%) - just right (most common)
    • p<0.10 (10%) - lenient
  • Deciding on a statistical test (3D method):
    • difference or correlation
    • level of data (nominal/ordinal/interval)
    • experimental design (independent measures/ repeated measures/ matched pairs design)
  • Step 1: difference or correlation:
    • decide whether the study is a test of difference or whether it's looking for a relationship between two variables
    • look out for words such as: link, association, relationship, correlation
  • Step 2: level of data:
    • identify the dependent variable (or the co-variables in the case of correlational research)
    • identify the type of data
  • Interval data
    interval data is measured on a fixed scale, where the difference between each point on the scale is standardised (equidistant). Because of this it normally has a unit of measurement
    • examples include time in seconds, height in centimetres, IQ using a standardised test, number of goals scored
  • Ordinal data
    ordinal data is not on a fixed scale; the difference between each point is not fixed or standardised. It has no unit of measurement and is often found as rankings or ratings, typically from a questionnaire or self-report method, which makes it more subjective
    • examples include mood ratings, aggression rankings, level of agreement to a statement
  • Nominal data
    nominal data is generated by placing participants into discrete categories. It has no unit of measurement and often involves yes/no responses
    • examples include pass/fail, high/low, unhappy/happy
  • Step 3: experimental design:
    • identify whether the study uses an independent measures design (participants take part in only one condition), a repeated measures design (participants take part in both conditions) or a matched pairs design (participants are matched on a key variable and then take part in different conditions)
  • Statistical tests:
    tests of difference;
    • unrelated design (independent measures):
      nominal - chi-squared
      ordinal - Mann-Whitney
      interval - unrelated t-test
    • related design (repeated measures/matched pairs):
      nominal - sign test
      ordinal - Wilcoxon
      interval - related t-test
    tests of relationship (correlation);
      ordinal - Spearman's rho
      interval - Pearson's r
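
The three decisions of the 3D method can be read straight off this card. Below is a minimal Python sketch of that lookup (the function name and argument labels are my own, purely illustrative):

```python
# Illustrative test chooser implementing the '3D method' mapping above.

def choose_test(aim, level, design=None):
    """aim: 'difference' or 'relationship'; level: 'nominal'/'ordinal'/'interval';
    design: 'related' or 'unrelated' (only needed for tests of difference)."""
    if aim == "relationship":
        return {"ordinal": "Spearman's rho", "interval": "Pearson's r"}[level]
    if design == "related":  # repeated measures or matched pairs
        return {"nominal": "sign test", "ordinal": "Wilcoxon",
                "interval": "related t-test"}[level]
    # unrelated design (independent measures)
    return {"nominal": "chi-squared", "ordinal": "Mann-Whitney",
            "interval": "unrelated t-test"}[level]


print(choose_test("difference", "ordinal", "related"))  # Wilcoxon
print(choose_test("relationship", "interval"))          # Pearson's r
```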
  • Using critical value tables:
    statistical analysis produces a calculated value (this is the outcome of the statistical test), which you must compare with a critical value (in a critical value table)
  • Using critical value tables:
    you need to take into account;
    • whether a one-tailed or two-tailed hypothesis has been used
    • the number of participants (N) used
    • what level of significance is being used (0.05 unless otherwise stated)
    • then check whether the calculated value is greater than, less than or equal to the critical value to see whether the data is significant
    • the exact rule will be written under the critical value table
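
Whether the calculated value needs to be larger or smaller than the critical value depends on the test (this is the rule printed under each table): chi-squared, the t-tests, Spearman's rho and Pearson's r are significant when the calculated value is greater than or equal to the critical value, while the sign test, Mann-Whitney and Wilcoxon are significant when it is less than or equal to it. A small illustrative Python check (the numbers are made up):

```python
# Illustrative comparison of a calculated value with a critical value.

def is_significant(calculated, critical, must_exceed):
    """must_exceed=True  -> significant if calculated >= critical (e.g. chi-squared, t-tests)
    must_exceed=False -> significant if calculated <= critical (e.g. sign test, Wilcoxon)."""
    return calculated >= critical if must_exceed else calculated <= critical


print(is_significant(calculated=7.2, critical=5.99, must_exceed=True))   # True
print(is_significant(calculated=5,   critical=2,    must_exceed=False))  # False
```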
  • Degrees of freedom:
    sometimes a critical value table has a df (degrees of freedom) column instead of an N column. You must calculate the df value and use that figure to read across the critical value table, instead of the usual N value
  • Examples of how to work out df values:
    • chi-squared - df=(r-1) x (c-1), where r=number of rows and c=number of columns
    • related t-test - df=N-1 (number of participants - 1)
    • unrelated t-test - df = N1 + N2 - 2 (an independent measures design means you have 2 groups of participants, and therefore 2 N values. Add the number of participants in group 1 to the number in group 2, then subtract 2)
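
These formulas translate directly into arithmetic; a short Python sketch with made-up group sizes as worked examples:

```python
# Worked examples of the degrees-of-freedom formulas from the card above.

def df_chi_squared(rows, columns):
    return (rows - 1) * (columns - 1)


def df_related_t(n_participants):
    return n_participants - 1


def df_unrelated_t(n_group1, n_group2):
    return n_group1 + n_group2 - 2


print(df_chi_squared(2, 3))    # 2x3 contingency table -> df = 2
print(df_related_t(20))        # 20 participants       -> df = 19
print(df_unrelated_t(12, 14))  # 12 + 14 participants  -> df = 24
```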
  • Sign test:
    to determine whether something is significant, we use the inferential test known as the sign test. This is used when the investigation is looking for a difference, a repeated measures design has been used and the data is nominal
  • Calculating the sign test:
    • identify all the scores that have improved (+), got worse (-) or stayed the same (0)
    • ignore/draw a line through any values that remained the same in both conditions
    • add up how many of each sign remains
    • the smallest value of the two is the calculated value ('S value' in the sign test)
    • adjust the N value. Any participants whose scores remained the same are removed from the sample
    • check whether the study is directional or non-directional
    • use the critical value table to check significance
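
A minimal Python walk-through of these steps; the before/after scores are invented example data:

```python
# Illustrative sign test calculation following the steps above.

before = [5, 7, 6, 4, 8, 6, 3, 7]
after  = [7, 9, 6, 6, 7, 8, 5, 7]

signs = []
for b, a in zip(before, after):
    if a > b:
        signs.append("+")   # score improved
    elif a < b:
        signs.append("-")   # score got worse
    # scores that stayed the same are ignored (dropped from the sample)

plus, minus = signs.count("+"), signs.count("-")
s_value = min(plus, minus)  # the calculated value ('S')
n = plus + minus            # adjusted N with ties removed

print(f"S = {s_value}, N = {n}")  # S = 1, N = 6
# Compare S against the critical value for this N (and a one- or two-tailed
# hypothesis) in a sign test table; S must be less than or equal to the
# critical value for the result to be significant.
```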