Probability & Significance

Cards (23)

  • P-value represents the probability of results occurring by chance, ranging from 0 to 1. The closer the p-value is to 0, the lower the likelihood that the results were due to chance factors.
  • Level of significance is the cut-off point at which the p-value is set, used to determine whether results were due to the independent variable rather than chance.
  • In psychological experiments, the level of significance is usually set at p<0.05.
  • Results must reach the 95% level of certainty (no more than a 5% probability of being due to chance) for the experimental hypothesis to be accepted. Therefore they must be significant at p<0.05.
  • It is possible to be more stringent with a level of significance (e.g. p<0.01) or less stringent (e.g. p<0.1).
  • Probability refers to the likelihood of an event occurring. It can be expressed as a number (0.05) or a percentage (5%).
  • Statistical tests allow psychologists to work out the probability that their results could have occurred by chance, and in general psychologists use a probability level of 0.05. This means that there is a 5% probability that the results occurred by chance.
  • The significance level commonly set in psychological experiments is 0.05.
  • The significance level in hypothesis testing is typically determined by the researcher and is often set at 0.05 or 0.01.
  • The p-value represents the likelihood that the observed results were due to chance. A smaller p-value indicates a lower likelihood of chance and stronger evidence against the null hypothesis.
  • The smaller the probability level, the greater the certainty that the results are due to the experimental manipulation rather than chance.
  • If the results are significant at the given probability level, the research hypothesis can be accepted and the null hypothesis rejected.
  • A type 1 error is a false positive. It is where you accept the alternative/experimental hypothesis when it is false.
  • A type 2 error is a false negative. It is where you accept the null hypothesis when it is false.
  • Type 1 error, also known as a false positive, is an error in rejecting a null hypothesis when it is actually true.
  • Type 2 error, also known as a false negative, is an error of not rejecting a null hypothesis when the alternative hypothesis is the true state of nature.
  • A type 1 error is an incorrect rejection of a true null hypothesis (false positive). The researcher believes that there is an effect when actually there is not one.
  • A type 2 error is incorrectly retaining a false null hypothesis (false negative). The researcher believes there is no effect when actually there is.
  • A type 2 error in hypothesis testing is when the null hypothesis is not rejected, even though it is actually false.
  • A critical value is a numerical value which researchers use to determine whether or not their calculated value (from a statistical test) is significant.
  • Some tests are significant when the observed (calculated) value is equal to or greater than the critical value, and for some tests the observed value needs to be less than or equal to the critical value.
  • Critical values are found in tables, which are individual to each statistical test.
  • When reading critical value tables, you must consider the probability level and whether the test is one-tailed or two-tailed. You may also have to look at the number of participants in each condition.
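The p<0.05 decision rule on the cards above can be sketched in code. The sketch below uses a permutation test on two made-up groups of scores (the data, group names, and choice of test are illustrative assumptions, not from the cards): it estimates the probability that a difference in means at least as large as the one observed would occur by chance alone, then compares that p-value to the 0.05 significance level.

```python
import random

ALPHA = 0.05  # conventional significance level in psychology (p < 0.05)

def permutation_p_value(group_a, group_b, n_permutations=10_000, rng=None):
    """Two-tailed permutation test: estimate the probability of a mean
    difference at least as large as the observed one, if group membership
    were actually assigned by chance (i.e. if the null hypothesis is true)."""
    rng = rng or random.Random(0)  # seeded for a reproducible estimate
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # re-deal scores to the two groups at random
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical scores for two conditions of an experiment
condition_a = [12, 14, 11, 15, 13, 16]
condition_b = [8, 9, 10, 7, 11, 9]

p = permutation_p_value(condition_a, condition_b)
if p < ALPHA:
    print(f"p = {p:.4f}: significant at p<0.05 -> reject the null hypothesis")
else:
    print(f"p = {p:.4f}: not significant -> retain the null hypothesis")
```

Note the direction of the comparison: a *smaller* p-value is *stronger* evidence against the null hypothesis, which matches the card stating that the closer p is to 0, the less likely the results are due to chance.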
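The cards on Type 1 errors and critical values can be illustrated with a small exact calculation (the fair-coin scenario is my own illustrative assumption, not from the cards). Suppose we flip a coin 20 times to test the null hypothesis that it is fair. The p<0.05 rule defines critical values for the head count; the total probability of landing in that rejection region when the null is actually true is the Type 1 (false positive) error rate, which comes out just under 5%.

```python
from math import comb

N = 20        # coin flips per experiment
ALPHA = 0.05  # significance level

def binom_pmf(k, n=N):
    """Probability of exactly k heads in n fair flips."""
    return comb(n, k) / 2**n

def p_value(k, n=N):
    """Two-tailed p-value for observing k heads: probability of a head
    count at least as far from n/2 as k, if the coin is actually fair."""
    dev = abs(k - n / 2)
    return sum(binom_pmf(i, n) for i in range(n + 1) if abs(i - n / 2) >= dev)

# The rejection region: head counts the p<0.05 rule calls significant.
rejection_region = [k for k in range(N + 1) if p_value(k) < ALPHA]

# Type 1 error rate: total probability, under a truly fair coin (null true),
# of wrongly rejecting the null hypothesis (a false positive).
type1_rate = sum(binom_pmf(k) for k in rejection_region)

print(f"Rejection region (critical head counts): {rejection_region}")
print(f"Actual Type 1 error rate at alpha={ALPHA}: {type1_rate:.4f}")
```

This also shows what a critical value table encodes: here the observed head count must be at or beyond the critical values (5 or fewer, or 15 or more heads) to be significant, matching the card that some tests require the observed value to equal or exceed the critical value while others require it to be at or below it. A Type 2 error would be the mirror case: the coin really is biased, but the head count falls inside the non-rejection region, so the false null is retained.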