Year 2

Cards (61)

  • Content analysis
    A technique for analysing qualitative data of various kinds; a type of observational research in which people are studied indirectly via the communications they have produced, e.g. texts, emails, TV, film. Data can be placed into categories and counted (quantitative) or analysed for themes (qualitative).
  • Content analysis
    1. Coding/Categories (predetermined)
    2. Observe
    3. Tally
    4. Then draw a conclusion (see the tally sketch below)
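    A minimal sketch of the tallying step above, assuming purely hypothetical coding categories and coded observations (not from any real study):

```python
from collections import Counter

# Hypothetical pre-determined coding categories
categories = ["aggression", "prosocial", "neutral"]

# Hypothetical codes recorded while observing the material (e.g. TV adverts)
observations = ["aggression", "neutral", "aggression", "prosocial", "aggression"]

# Tally how often each category occurs (quantitative content analysis)
tally = Counter(observations)
for category in categories:
    print(f"{category}: {tally[category]}")
```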
  • Thematic analysis
    1. Create a transcript/familiarise yourself with the transcript
    2. Code data as you go through (coding units are more descriptive)
    3. Review coding looking for broader themes (combine codes into larger generalised themes)
    4. A theme is any idea that is recurrent
  • Content analysis
    • Many ethical issues may not apply: the material to study (e.g. TV adverts, films etc.) may already be in the public domain, so there are no issues with obtaining consent.
    • Flexible method: it can produce qualitative or quantitative data as required, meaning it can be adapted to the aims of the research.
    • Content analysis is an unobtrusive means of analysing interactions (doesn't require contact with people)
    • Easy for others to replicate, due to the pre-determined categories.
    • Inexpensive
  • Content analysis
    • Communication is studied out of context; the researcher may attribute opinions and motivations to the speaker or writer that were not intended, thus reducing the validity of the conclusions drawn.
    • May lack objectivity, especially when a thematic analysis has been used, which threatens the validity of the findings and conclusions.
    • Causality cannot be established as it merely describes the data
  • Case studies
    An in-depth analysis of a single individual, group, institution or event.
  • Case studies
    • Often involving unusual/abnormal individuals or events (e.g. a person with a rare disorder or the sequence of events leading up to the London riots)
    • Tend to be longitudinal and may involve gathering data from family and friends as well as the individual.
    • Qualitative data: gathered using interviews, observations and questionnaires.
    • Quantitative data: the person may be subject to experimental or psychological testing.
  • Case studies
    • Rich and detailed insight into unusual or atypical behaviours; this may be preferred to an experiment. Such detail is likely to increase validity.
    • Applications (also in terms of normal functioning), e.g. the case of HM demonstrated the existence of separate stores for STM and LTM.
    • May generate a hypothesis for further study and one solitary contradictory instance may lead to a revision of an entire theory.
  • Case studies
    • Small samples make it difficult to generalise.
    • Risk of bias as researchers can become too involved and lose their objectivity: misinterpreting or influencing outcomes
    • Case studies often depend on the memory of the participants; retrospective data might be inaccurate, thus affecting validity.
    • Often conducted after the event, so it can be difficult to establish cause and effect.
    • Case studies are very difficult to replicate, so they lack reliability.
  • Correlation
    A way of establishing whether there is a relationship between two variables called co-variables. A correlation assesses the strength and direction of an association.
  • Unlike experiments, there is no IV and DV. Correlational studies do not tell you about causal relationships
  • Correlational studies use other methods to collect data, mainly questionnaires; they use quantitative (numerical) data.
  • Correlation does not imply causation
  • Correlation coefficient
    A number between -1 and +1 that represents the direction and strength of a relationship between co-variables. The sign (+ or -) tells you the direction of the correlation (positive or negative); the number (between -1 and +1) tells you the strength.
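    A minimal sketch of calculating a correlation coefficient, assuming two hypothetical sets of scores and using scipy's Pearson correlation:

```python
from scipy.stats import pearsonr

# Hypothetical co-variables, e.g. hours spent revising and test score
hours_revised = [2, 4, 5, 7, 8, 10]
test_score = [35, 48, 50, 62, 70, 79]

r, p = pearsonr(hours_revised, test_score)
# The sign of r gives the direction, its size gives the strength
print(f"correlation coefficient r = {r:+.2f}")
```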
  • Correlations
    • Useful starting point for research (before researchers commit to an experimental study): by assessing the strength and direction of a relationship, correlations provide a precise measure of how two co-variables are related. If the variables are strongly related, this may suggest hypotheses for future research.
    • Quick and relatively economical (cost-effective). Unlike a lab study, there is no need for a controlled environment and no manipulation of variables is required. Data collected by others (secondary data, e.g. government statistics) can be used, meaning correlations are less time-consuming than experiments.
    • Correlations allow the researcher to investigate naturally occurring variables that may be unethical or impractical to test experimentally. For example, it would be unethical to conduct an experiment manipulating hormone levels to see if people feel more or less depressed
  • Correlations
    • Lack of experimental manipulation and control means that correlations cannot establish cause and effect. It may be that another untested variable is causing the relationship between the two co-variables we are interested in (an intervening variable). This is known as the third variable problem.
    • Methods used to measure the variables may be flawed, e.g. Adorno's F-scale has been accused of acquiescence bias, which would reduce the validity of the correlational study.
  • Levels of measurement
    Differences in the precision and nature of the data. Quantitative data can be nominal, ordinal or interval.
  • Nominal data
    Categories that can be counted. Discrete (using whole numerical values); one item can only appear in one of the categories.
  • Ordinal data
    Data that is ranked/ordered in some way; the intervals are of subjective, unequal size. Includes ordered categories, e.g. a Likert scale. Ordinal data lacks precision because it is based on subjective opinion, and for this reason it is sometimes referred to as 'unsafe' data. Therefore, raw scores are converted into ranks (e.g. 1st, 2nd, 3rd) to be used in a statistical test.
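    A minimal sketch of converting raw ordinal scores into ranks, as described above, using scipy's rankdata (tied scores share the average rank):

```python
from scipy.stats import rankdata

# Hypothetical ratings on a 1-5 Likert scale (ordinal data)
scores = [3, 5, 2, 5, 4]

# Convert the raw scores into ranks for use in a statistical test
ranks = rankdata(scores)
print(ranks)  # [2.  4.5 1.  4.5 3. ]
```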
  • Interval data
    Based on numerical scales that include units of equal, fixed, precisely defined size, e.g. time, temperature and weight. The most precise and sophisticated form of data. It is 'better' than ordinal data because more detail is preserved, as scores are not converted into ranks.
  • Choosing a statistical test
    • Three criteria: Are you looking for a difference or a correlation? Is the experimental design related (repeated measures/matched pairs) or unrelated (independent groups)? What is the level of measurement?
  • The range and SD cannot be calculated for nominal data, as it is in the form of frequencies. It is not appropriate to use the mean or SD for ordinal data, as the intervals between units of measurement are not of equal size.
  • As the data becomes more sophisticated, so do the average and the measure of dispersion!
  • Significance level
    The point at which the researcher can claim to have discovered a significant difference or association and therefore can reject the null hypothesis and accept the alternative hypothesis.
  • Usually a significance level of 5% is used, written as p ≤ 0.05, where p stands for probability. This means that the probability that the observed effect happened by chance is equal to or less than 5%. Therefore, we are 95% sure that the results were due to the manipulation of the IV (if it is an experiment).
  • Using statistical tables
    • The calculated value (the result of the statistical test) is compared with a critical value, a number that tells us whether or not we can reject the null hypothesis and accept the alternative hypothesis.
  • Type One (Optimistic) error
    The alternative hypothesis is accepted and the null hypothesis is rejected when actually the null hypothesis is true. A false positive, or 'optimistic' error.
  • Type Two error
    The null hypothesis is accepted when actually the alternative hypothesis is true. A false negative, or 'pessimistic' error.
  • A Type I error is most likely to occur when the selected significance level is too lenient (e.g. 10%), whereas a Type II error is most likely to occur when the selected significance level is too stringent (e.g. 1%). A 5% level is a compromise between too lenient (10%) and too stringent (1%), and therefore a balance between the risk of Type I and Type II errors.
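    A minimal simulation sketch, not from the cards, illustrating why a lenient significance level inflates the Type I error rate: two groups are repeatedly drawn from the same population (so the null hypothesis is true), and any 'significant' result is a false positive.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_simulations = 2000

for alpha in (0.10, 0.05, 0.01):
    false_positives = 0
    for _ in range(n_simulations):
        # Both groups come from the same population, so the null hypothesis is true
        group_a = rng.normal(loc=50, scale=10, size=30)
        group_b = rng.normal(loc=50, scale=10, size=30)
        _, p = ttest_ind(group_a, group_b)
        if p <= alpha:
            false_positives += 1  # Type I error: a false positive
    print(f"alpha = {alpha}: Type I error rate ~= {false_positives / n_simulations:.3f}")
```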
  • Sign test
    1. Subtract the scores in condition A from condition B to produce a sign of difference (+ or -); it doesn't matter which way round you subtract, just do it the same way for each pair of scores
    2. The total number of +s and -s is calculated
    3. Those with no difference should be disregarded and not included in the N value
    4. The S value (the calculated value) is the total of the less frequent sign
    5. Using the N value, the significance level (usually 0.05) and whether the hypothesis is directional (one-tailed) or non-directional (two-tailed), the S value is compared to the critical value
    6. If the S value is less than or equal to the critical value (and the result is in the right direction), then the findings are significant and the alternative hypothesis can be accepted (a worked sketch follows the results statement below)
  • The results are / are not significant as the calculated value of S = ……………. is higher / lower than the critical value ……….., where N = ………………… for a …………….-tailed hypothesis, with a p ≤ 0.05 level of significance; therefore, the null hypothesis can be rejected / accepted and the alternative hypothesis rejected / accepted.
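    A minimal worked sketch of the sign test steps above, using hypothetical paired scores; the critical value shown is illustrative only and would normally be looked up in a statistical table for the given N, significance level and number of tails.

```python
# Hypothetical paired scores from a repeated measures design
condition_a = [12, 15, 9, 14, 10, 11, 13, 8]
condition_b = [14, 18, 9, 16, 9, 13, 15, 11]

# Steps 1-2: sign of each difference (B minus A), then total the +s and -s
differences = [b - a for a, b in zip(condition_a, condition_b)]
plus = sum(1 for d in differences if d > 0)
minus = sum(1 for d in differences if d < 0)

# Step 3: pairs with no difference are disregarded and excluded from N
n = plus + minus

# Step 4: S (the calculated value) is the total of the less frequent sign
s = min(plus, minus)

# Steps 5-6: compare S with the critical value (illustrative value here);
# S must be less than or equal to the critical value to be significant
critical_value = 0
print(f"N = {n}, S = {s}, significant at p <= 0.05: {s <= critical_value}")
```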
  • Chi square contingency table
    d.f. = (rows - 1) x (columns - 1), where the rows and columns are those of the observed frequencies, not the totals or the names of the conditions
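    A minimal sketch, assuming a hypothetical 2 x 3 contingency table of observed frequencies, using scipy's chi-square test, which reports the same degrees of freedom as the formula above:

```python
from scipy.stats import chi2_contingency

# Hypothetical observed frequencies: 2 rows x 3 columns of conditions
observed = [
    [20, 15, 25],
    [10, 30, 20],
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, d.f. = {dof}")  # d.f. = (2 - 1) x (3 - 1) = 2
```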
  • Reliability
    How consistent the findings are or the measuring device is. E.g. a measuring device is said to be reliable if it produces consistent results every time it is used.
  • Assessing reliability
    1. Test-retest: the same test or questionnaire is given to the same person (or people) on two or more occasions. There must be sufficient time between test and re-test to ensure the participants cannot recall the answers, but not so long that their attitudes/abilities have changed.
    2. Inter-observer reliability: the extent to which there is agreement between two or more observers. Involves a pilot study to check observers are correctly applying the behavioural categories; observers watch the same event, record data independently, and the data between observers is then compared and correlated.
  • For both inter-rater reliability and test-retest, the two sets of scores are correlated. The correlation coefficient should be +0.8 or above to be judged as reliable.
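    A minimal sketch of assessing test-retest (or inter-observer) reliability, assuming two hypothetical sets of scores and applying the +0.8 threshold above via a Pearson correlation:

```python
from scipy.stats import pearsonr

# Hypothetical scores from the same participants on two occasions
test_scores = [22, 30, 25, 28, 35, 40, 18]
retest_scores = [24, 29, 27, 27, 36, 38, 20]

r, _ = pearsonr(test_scores, retest_scores)
print(f"r = {r:.2f}, judged reliable: {r >= 0.8}")
```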
  • Improving reliability
    • Questionnaires: A questionnaire that produces low test-retest reliability may need some items reworded or removed.