research methods

  • Case studies often involve longitudinal analysis of unusual individuals or events such as a person with a rare disorder.
  • Strengths of case studies?
    + offer rich, detailed insights and are preferable to more superficial methods of examining behaviour; such detail is likely to increase the validity of the data collected.
    + can study quite unusual behaviours, e.g. HM, that couldn't be studied with other methods, which helps contribute to our understanding of typical functioning.
  • Weaknesses of case studies?
    W - prone to researcher bias, as conclusions rest on the researcher's subjective interpretation of the case, which can reduce the validity of the findings.
    W - small samples with often unique characteristics make it difficult to form generalisations.
  • Content analysis is a type of observational research that indirectly studies the communications people produce.
    Coding involves reviewing the communication and categorising observations into meaningful units, e.g. counting the number of times a particular word or phrase appears.
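    The counting step of coding can be sketched in a few lines of Python (a minimal illustration - the sample text and coding units here are hypothetical, chosen only to show the idea):

    ```python
    from collections import Counter
    import re

    # Hypothetical communication to be coded (e.g. a newspaper excerpt)
    text = ("People with mental disorders are a threat. "
            "The threat to our children is real, experts warn.")

    # Coding units decided in advance by the researcher (hypothetical)
    coding_units = ["threat", "children", "experts"]

    # Split the communication into lowercase words, then tally each unit
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(words)
    tally = {unit: counts[unit] for unit in coding_units}
    print(tally)  # frequency of each coded word in the text
    ```

    Real content analysis would of course use far larger samples of material and carefully piloted coding categories; this only shows the mechanical counting.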
  • Thematic analysis produces qualitative data - it aims to identify themes rather than counts. More descriptive, e.g. the recurring idea that people with mental disorders are 'a threat to our children' might be developed into a broader theme such as control.
  • Evaluate content analysis?
    S - sidesteps many ethical issues, as the materials analysed are often already in the public domain, so there are no issues with obtaining consent.
    S - flexible method - can produce quantitative or qualitative data and can be designed to suit the research aims.
    W - validity may be reduced because communications are examined outside the context in which they occurred, so the analyst may unintentionally attribute the wrong motivations to the speaker.
    W - lacks objectivity, especially thematic analysis, which threatens validity.
  • Test-retest?
    The same test or questionnaire is given to the same pps on two or more occasions; if the test is reliable, the same or similar results will be produced each time.
    • must leave enough time that pps don't remember their answers, but not so long that their attitudes change.
  • Inter-rater reliability?
    Comparing different observers' observations - often checked beforehand in a small-scale pilot study.
  • For both methods of testing reliability, a correlation coefficient of at least +0.80 indicates reliability.
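    As a rough sketch, the test-retest check above amounts to correlating the two sets of scores (the scores below are made up; `statistics.correlation` requires Python 3.10+):

    ```python
    from statistics import correlation  # Pearson's r, Python 3.10+

    # Hypothetical questionnaire scores for the same pps on two occasions
    first_occasion  = [12, 15, 9, 20, 17, 11]
    second_occasion = [13, 14, 10, 19, 18, 12]

    r = correlation(first_occasion, second_occasion)
    print(round(r, 2))
    # a coefficient of +0.80 or above would indicate test-retest reliability
    ```

    The same calculation applies to inter-rater reliability, with the two lists holding the two observers' tallies instead.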
  • How to improve reliability of questionnaires?
    • deselect or rewrite unclear or ambiguous questions
    • may replace open questions with closed questions
  • How to improve reliability of interviews?
    • same interviewer each time
    • trained interviewers to ensure they ask clear questions
    • use a structured interview with fixed questions
  • How to improve reliability of observations?
    • operationalised and measurable behavioural categories
    • categories should not be overlapping
  • How to improve reliability of experiments?
    • standardised procedures
  • What is validity?
    • extent to which the observed effect is genuine.
  • Face validity? (way of assessing validity)
    Eyeballing the test or measure and determining whether it appears to measure what it is supposed to, i.e. on the face of it, is it measuring what it was designed to?
  • Concurrent validity? (way of assessing validity)
    Whether the findings are similar to those obtained on a well-established test. A valid test would produce a correlation with the established test exceeding +0.80.
  • How to improve validity of questionnaires?
    • lie scale can test for social desirability bias
    • respondents assured all results are confidential
  • How to improve validity of observations?
    • well defined, operationalised, unambiguous behavioural categories.
  • How to improve validity of experiments?
    • control groups help researchers be more certain that changes in the DV were due to changes in the IV.
    • single/double blind procedures help to minimise demand characteristics and investigator effects.
  • How to improve validity of qualitative methods?
    • triangulation - comparing data from a number of different sources.
  • What are the conditions for a parametric test?
    1. interval or ratio data
    2. normally distributed data
    3. homogeneity of variances
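    Condition 3 can be eyeballed by comparing the sample variances of each condition. A minimal sketch with made-up scores, using an informal variance-ratio rule of thumb rather than a formal test such as Levene's:

    ```python
    import statistics

    # Hypothetical interval-level scores for two conditions
    group_a = [12, 15, 9, 20, 17, 11]
    group_b = [14, 13, 16, 12, 15, 14]

    var_a = statistics.variance(group_a)  # sample variance of condition A
    var_b = statistics.variance(group_b)  # sample variance of condition B
    ratio = max(var_a, var_b) / min(var_a, var_b)
    print(var_a, var_b, round(ratio, 2))
    # a large ratio suggests the variances are not homogeneous,
    # so a non-parametric test may be the safer choice
    ```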
  • What is a type 1 error?
    False positive - the null hypothesis is rejected when it should be accepted. Occurs when the significance level is too lenient.
  • What is a type 2 error?
    False negative - the null hypothesis is accepted when it should be rejected. Occurs when the significance level is too stringent.
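    The link between significance level and type 1 errors can be illustrated with a small simulation: when the null hypothesis is actually true, a lenient level rejects it (a false positive) more often than a stringent one. A minimal sketch with made-up population parameters:

    ```python
    import random
    import statistics

    random.seed(42)

    # Simulate many experiments in which the null hypothesis is TRUE
    # (both groups drawn from the same population), then count how often
    # each significance level wrongly rejects it (a type 1 error).
    CRITICAL = {0.05: 1.96, 0.01: 2.576}  # two-tailed z critical values
    n, trials = 30, 2000

    false_positives = {alpha: 0 for alpha in CRITICAL}
    for _ in range(trials):
        a = [random.gauss(100, 15) for _ in range(n)]
        b = [random.gauss(100, 15) for _ in range(n)]
        se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
        z = (statistics.mean(a) - statistics.mean(b)) / se
        for alpha, crit in CRITICAL.items():
            if abs(z) > crit:
                false_positives[alpha] += 1

    for alpha, count in false_positives.items():
        print(alpha, count / trials)
    # the lenient 0.05 level should produce noticeably more false
    # positives than the stringent 0.01 level
    ```

    The stringent level pays for its lower type 1 rate with more type 2 errors: real effects are missed more often, which is the trade-off the two cards above describe.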
  • How do you reference a book?
    • author, date, title, place of publication and publisher's name
  • How do you reference an article?
    • author, date, article title, journal name, volume and page numbers
  • How do you reference a website?
    • source, date, title, weblink and date accessed.