RESEARCH METHODS (P2)

Cards (100)

  • CASE STUDIES
    What is the definition of case studies?
    . A research method that involves a detailed study of a single individual, institution or event. Case studies provide a rich record of human events but are hard to generalise from.
  • What are case studies?
    . The case study is a scientific research method and thus aims to use objective and systematic methods. Case studies use information from a range of sources, such as from the person concerned and also from their family and friends.
    . Many techniques may be used-the people may be interviewed/observed while engaging in daily life. Psychologists might use IQ tests/personality tests or some other kind of questionnaire to produce psychological data about the person/group.
    . The findings are organised into themes to represent the individual's thoughts, emotions, experiences and abilities. The data is therefore usually qualitative, although quantitative data in the form of scores from tests may also be included.
    . Case studies are usually longitudinal-they follow the individual/group over an extended period of time.
  • What is Phineas Gage's case study as an individual example?
    . In 1848 Phineas was working on the construction of the American railway. An explosion of dynamite drove a tamping iron right through his skull. He survived and was able to function fairly normally, showing that people can live despite the loss of large amounts of brain matter. However, the accident affected Phineas' personality. A record was kept of events in the rest of his life and people he knew were interviewed. After the accident his friends said he was no longer the same man.
    . This case was important in the development of brain surgery to remove tumours because it showed that part of the brain could be removed without having a fatal effect.
  • What is David Peter Reimer's case study as an individual example?
    . David Peter Reimer (August 22, 1965-May 4, 2004) was a Canadian man born biologically male but reassigned as a girl and raised female following medical advice and intervention after his penis was accidentally destroyed during a botched circumcision in infancy.
    . Psychologist John Money oversaw the case and reported the reassignment as successful and as evidence that gender identity is primarily learned. Academic sexologist Milton Diamond later reported that Reimer failed to identify as female between the ages of 9 and 11, and transitioned to living as a male at age 15. Well known in medical circles for years anonymously as the "John/Joan" case, Reimer later went public with his story to help discourage similar medical practices. He later committed suicide after suffering years of severe depression, financial instability, and a troubled marriage.
  • What is the mob behaviour, London Riots 2011 as an event example?
    . The London riots gave psychologists a chance to look at the apparently unruly behaviour of 'mobs'.
    . However Reicher and Stott (2011) argued their data showed that mob behaviour was not unruly. Mobs don't simply go wild but actually tend to target particular shops/people.
    . The patterns of what they attack and don't attack reveal something about the way they see the world and their grievances about the world.
  • What is the mass suicide of a cult group as an event example?
    . The cult group of the Peoples Temple was run by Reverend Jim Jones. He convinced his congregation to give him all of their money/property. He came to see himself as God and demanded everyone else see him that way too.
    . The US government began to have serious questions about the conduct of the group and so Jones moved it to South America in the 1970s. However he became more paranoid and eventually ordered his 900 followers, including children, to commit suicide by drinking poison mixed with Kool-Aid.
    . The case study was used to reflect on social processes in groups and the effect of leaders on both conformity and obedience.
  • What is the evaluation of case studies?
    +Strengths
    . Offer rich, detailed insights that may shed light on unusual behaviour.
    . Can generate hypotheses for future studies
    . Contribute to understanding of normal functioning e.g. the case of HM shed light on normal memory functioning.
    . One contradictory case study can lead to the revision of an entire theory.

    -Weaknesses
    . Observer bias=Reduced objectivity and validity.
    . Personal accounts from friends/family are prone to inaccuracies (memory decay etc.), which also means lower validity.
    . Difficult to generalise as sample sizes are very small.
    . Psychological harm (continued testing/interviews over decades etc.)
  • CONTENT ANALYSIS
    What is the definition of content analysis?

    A research technique that enables the indirect study of behaviour by examining communications that people produce, e.g. in texts, emails, TV, film and other media. May involve either qualitative or quantitative analysis, or both.
  • What is content analysis?

    . The process involved in conducting a content analysis is similar to any observational study, but instead of observing actual people a researcher usually makes observations indirectly through books, films, advertisements and photographs-any artefact people have produced.
    . There are a few things the researcher has to decide
  • What are the three things a researcher has to decide when conducting a content analysis?
    SAMPLING METHOD
    . If analysing different books, are you going to look at every page or just every 5th page, etc.?
    . If comparing content in various books, does the researcher select random books from the library or select a category e.g. romance/fiction
    . If analysing ads on TV, does the researcher sample behaviours every 30 seconds or just note down each time a behaviour occurs?

    CODING THE DATA
    . The process of putting your data (quantitative or qualitative) into behavioural categories
    . E.g. in adverts, is the main person male or female, or what product type is being advertised (food/drink/body/household)?

    REPRESENTING DATA
    . Once you have your categories you need to record the instances of each category
    . You can count instances=quantitative analysis OR Describe examples in each category=qualitative analysis
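As an illustration of the quantitative route, counting instances per category can be sketched as follows (the adverts and the "gender of main person" coding are invented for this example, not taken from any real study):

```python
from collections import Counter

# Hypothetical coding of 8 TV adverts: for each advert the observer
# records the gender of the main person shown (a behavioural category).
coded_adverts = ["male", "female", "female", "male", "female",
                 "female", "male", "female"]

# Quantitative analysis: tally the instances of each category
tally = Counter(coded_adverts)
print(tally["female"])  # 5
print(tally["male"])    # 3
```

The qualitative alternative would instead record a short description of each advert under its category rather than a count.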
  • What is the evaluation of content analysis?
    + Strengths
    . High ecological validity
    . A flexible method-can be adapted to get qualitative or quantitative data therefore can be adapted to suit the aims of the research
    . Generally avoids ethical issues as material analysed is already in public domain e.g. no consent issues

    - Weaknesses
    . Observer bias=Reduced objectivity and validity.
    . Content analysis may be culture biased (verbal/written content and behavioural categorisation are affected by the language/culture of the observer).
    . Communication is studied out of context-the researcher may attribute meanings to data that weren't intended-reduces validity
  • THEMATIC ANALYSIS
    What is the definition of thematic analysis?
    A technique used when analysing qualitative data. Themes are identified and then data is organised according to these themes.
  • What are the intentions of thematic analysis?

    . To impose some kind of order on the data
    . To ensure that the 'order' represents the pt.'s perspective
    . To ensure that this 'order' emerges from the data rather than any preconceptions
    . To summarise the data so that hundreds of pages of text/hours of videotape can be reduced to a manageable set of themes
  • RELIABILITY
    What is the definition of reliability?
    Reliability refers to how consistent the findings from an investigation or measuring device are. A measuring device is said to be reliable if it produces consistent results every time it is used.
  • RELIABILITY OF OBSERVATIONAL TECHNIQUES
    What are examples of behavioural categories a researcher will keep a record of during an observation?
    Observations made of a young girl in a video
    Hits=3
    Touches=7
    Cuddles=3
    Sits next to=2
    Talks=9
  • What are the two ways we assess reliability?
    1. Second observation
    -It is important that an observer is consistent with their observations-to check this they can complete the observation a second time e.g. view a video recording of the young girl, make observations and compare the 1st observation and the 2nd
    -If results are consistent then the researcher has reliability

    2. Inter-observer reliability
    -HOWEVER-What if this researcher is biased? A better way to assess reliability is to have 2 or more observers making separate observations and then compare results.
    -The extent to which the observers agree on the observations is called inter-observer reliability
    -The agreement is calculated as a correlation coefficient and if this is 0.80 or above then the observations were reliable.
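As a sketch of how that agreement figure is reached, two observers' tallies across the behavioural categories can be correlated using a Pearson correlation coefficient. The counts below are invented for illustration (loosely echoing the earlier hits/touches/cuddles example):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Invented tallies from two observers across five behavioural
# categories (hits, touches, cuddles, sits next to, talks)
observer_1 = [3, 7, 3, 2, 9]
observer_2 = [3, 6, 4, 2, 8]

r = pearson_r(observer_1, observer_2)
print(round(r, 2))   # 0.98
print(r >= 0.80)     # True: observations count as reliable
```

The same calculation underlies test-retest and inter-interviewer reliability checks, with the two lists being the two sets of scores being compared.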
  • What are the two ways we improve reliability in observations?
    1. Make categories clearer...
    -Make sure your categories have been properly operationalised and are self-evident e.g. pushing rather than aggression
    -Categories should not overlap and all behaviours should be covered.

    2. Practice...
    -It may be that some observers just need more practice using behavioural categories so they can respond more quickly.
  • RELIABILITY OF SELF-REPORT TECHNIQUES
    What are the two ways we assess reliability?
    1. TEST-RETEST
    -To check reliability of self-report techniques we use test-retest
    -The researcher gives the test to a group of people and then gives the same people the same test a second time
    -Usually there is a short interval between tests e.g. a week/two weeks so that people don't remember their answers
    -If the test is reliable then the outcome should be the same every time

    2. Inter-interviewer reliability
    -For interviews, a researcher could assess the reliability of one interviewer by comparing answers on one occasion with answers from the same person with the same interviewer a week later
    -Or you could use the same idea as inter-observer reliability. Get 2 people to interview and compare the data from each interview.
  • How do we improve reliability in self-report methods?
    1. Reduce ambiguity
    -Questionnaires: make questions more explicitly clear, or replace open questions with fixed choice/closed alternatives
    -Interviews: Interviewer uses a structured interview as they are more controlled and prevent the interviewer asking questions that are too leading or ambiguous
  • How do we improve reliability in experiments?
    -The DV in an experiment is often measured using a rating scale or behavioural categories; e.g.
    . Bandura's Bobo doll study-the DV in this study was aggression-this was assessed by observing the children's behaviour in a room full of toys and using behavioural categories such as verbal imitation
    . Rutter's orphanage study-used IQ score as one of the DVs
    -So reliability in an experiment is usually concerned with whether the method used to measure the DV is consistent. I.e. the observations or the self-report method

    1. Standardisation
    -To improve reliability in experiments researchers need to make sure that the procedures are exactly the same each time experiments are repeated with different participants
    -This means that we can compare the performance of participants as procedures have been standardised
  • VALIDITY
    What is the definition of validity?
    The extent to which an observed effect is genuine-does your experiment measure what it set out to measure? How accurate and true are the test/results, and can they be generalised beyond the research setting in which they were found?
  • What is internal validity?
    The extent to which the effect found in a study can be taken to be real & caused by experimental manipulation (measured what it set out to measure)
  • What are some factors that can influence internal validity?
    . Demand characteristics
    . Confounding variables
    . Social desirability
    . Poorly operationalised behavioural categories
  • What is external validity?
    Whether the results can be generalised beyond the research situation to other people or situations.
  • What are some factors that can influence external validity?
    . Size of sample
    . Variety in sample (age/gender)
    . Time study was conducted
    . Area study was conducted
  • ECOLOGICAL VALIDITY
    What is the definition of ecological validity?
    The extent to which findings from a research study can be generalised to other settings or situations. A form of external validity.
  • What is temporal validity?
    The extent to which findings from a research study can be generalised to other historical times and eras. A form of external validity.
  • What are the two ways we can assess validity?
    1. Face Validity
    -A basic form of validity in which a measure is scrutinised to determine whether it appears to measure what it is supposed to measure. Where a behaviour appears at 1st sight to represent what is being measured. E.g. does a test of anxiety look like it measures anxiety?

    2. Concurrent Validity
    -The extent to which a psychological measure relates to an existing similar measure. E.g. correlate the findings of your test/questionnaire etc. with a recognised experiment that measured the same as you.
  • How can we improve validity in questionnaires?
    . If a questionnaire is judged to have low face validity then the questions should be revised so they relate more obviously to the topic.
    . If concurrent validity is low then the researchers should remove questions which may seem irrelevant and try checking the concurrent validity again.
    . As social desirability may be an issue when using questionnaires the researchers could get pt.'s to fill them in anonymously
  • How can we improve validity in observations?
    . To help improve the ecological validity of observations the researcher can make sure they are undetected and complete a covert observation-this way the pt.'s are much more likely to behave in a natural manner.
    . Also, make sure your behavioural categories are not too broad/overlapping or ambiguous as this may have a negative impact on the validity of data collected.
  • How can we improve validity in experimental research?
    . Use a control group-this means the researcher can better assess whether changes in the DV were due to the IV-for example if 1 group is given a type of therapy and a control group are not, then the researcher can be confident that changes in the 1st group were down to the therapy.
    . Standardise the procedures-this will ensure all pt.'s experience the same experiment and reduces investigator effects
    . Use single blind or double blind conditions-if the pt.'s don't know the aim of the study then they are less likely to show demand characteristics (helping increase validity). If a double blind condition is used, neither the pt.'s nor the experimenter know the aim and a third party conducts the experiment, reducing both demand characteristics and investigator effects.
  • How can we improve validity in qualitative methods?
    . Coherence-when researchers have interpreted pieces of data e.g. info from case studies/interviews, check the researchers' interpretations against each other for coherence.
    . Triangulation-use a number of different sources as evidence, e.g. data compiled through interviews with friends/family/personal diaries/observations, etc.
  • FEATURES OF SCIENCE
    What is the scientific method and theory construction?
    -The scientific method starts with observations of phenomena in the world
    -The inductive model leads scientists to develop hypotheses. Hypotheses are then tested empirically, which may lead to new questions and a new hypothesis. Eventually data may be used to construct a theory.
    -The deductive model places theory construction at the beginning, after making observations.
    -In both models the process is repeated over and over again to refine knowledge.
  • What is the inductive model?
    Observations-->Testable hypothesis-->Conduct a study to test the hypothesis-->Draw conclusion-->Propose theory
  • What is the deductive model?
    Observations-->Propose theory-->Testable hypothesis-->Conduct a study to test hypothesis-->Draw conclusion.
  • What is the empirical method as a key feature of science?

    -Information is gained through direct observation or experiments rather than from unfounded beliefs or reasoned argument.
    -This is important because people can make claims about anything (such as the truth of a theory/the benefits of a treatment/the taste of a hamburger), but the only way we know such things to be true is through direct testing i.e. empirical evidence.
  • What is objectivity as a key feature of science?
    -When all sources of personal bias/researcher expectations are minimised so as not to distort or influence the research process.
    -Data collected in carefully controlled conditions i.e. within the lab is most likely to be objective.
  • What is replicability as a key feature of science?
    -The extent to which scientific procedures and findings can be repeated by other researchers.
    -If scientific theory is to be trusted the findings from it must be shown to be repeatable across a number of different contexts and circumstances.
    -If you can replicate a study and get the same results, the findings are considered valid.
  • What is theory construction as a key feature of science?
    -Facts alone are meaningless.
    -Explanations or theories must be constructed to make sense of the facts.
    -A theory is a collection of general principles that explain observations and fact.
    -Such theories can then help us understand and predict the natural phenomena around us.
    -Scientists use both the inductive and deductive methods for theory construction.
  • What is hypothesis testing as a key feature of science?
    -Theories are modified through hypothesis testing
    -This helps test the validity of a theory
    -A good theory must be able to generate testable expectations
    -These are stated in the form of a hypothesis
    -If a scientist fails to find support for a hypothesis the theory requires modification.
    -Hypothesis testing was only developed in the 20th century