Research methods

Cards (54)

  • Strengths of a case study
    • Rich, detailed insights
    • Contribute to our understanding of typical functioning
    • Can generate hypotheses for further studies
  • Limitations of a case study
    • small sample size = hard to generalise findings
    • final report is based on the researcher's subjective selection of information
    • personal accounts from the patient + family = risk of inaccuracy
  • content analysis is a research technique that enables the indirect study of behaviour by examining the communications that people produce.
  • coding is the initial stage of content analysis: turning qualitative data into quantitative data, e.g. counting the number of times a word is said (see the sketch below)
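A minimal sketch of the coding stage, assuming the communications have already been transcribed into plain-text strings; the transcripts and coding units below are invented for illustration.

```python
from collections import Counter

# Hypothetical transcripts of the communications being analysed
transcripts = [
    "I felt anxious before the exam but calm afterwards",
    "she said she was anxious and stressed all week",
]

# Coding units (categories) decided in advance, illustrative only
coding_units = {"anxious", "calm", "stressed"}

# Count how many times each coding unit appears across the transcripts,
# turning the qualitative material into quantitative data
counts = Counter()
for transcript in transcripts:
    for word in transcript.lower().split():
        if word in coding_units:
            counts[word] += 1

print(counts)  # e.g. Counter({'anxious': 2, 'calm': 1, 'stressed': 1})
```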
  • thematic analysis is a form of content analysis. The process involves identifying themes in the material being studied. It is more descriptive than coding
  • strengths of content analysis
    • can get around many ethical issues
    • much of the material studied is already in the public domain
    • high external validity
    • can produce both quantitative + qualitative data
  • limitations of content analysis
    • communications are usually analysed outside the context in which they occurred = researchers may make assumptions about that context
    • lack of objectivity
  • reliability is a measure of consistency. If a particular measurement is made twice and produces the same result, it is deemed reliable
  • Test-retest is used to assess reliability - giving the same test or questionnaire to the same people at a different time; if the results are the same = reliable
  • inter-rater reliability is the extent to which there is agreement between two or more observers involved in observing a behaviour
  • inter-rater reliability is tested by doing a small-scale trial run to check whether the observers apply the behavioural categories in the same way; their correlation must exceed +0.8 for high inter-rater reliability
  • to be deemed reliable, the correlation coefficient between the two sets of scores should exceed +0.8 (see the sketch below)
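A minimal sketch of checking reliability against the +0.8 criterion, assuming Python 3.10+ for `statistics.correlation`; the questionnaire scores are invented for illustration.

```python
import statistics

# Hypothetical scores: the same questionnaire given to the same
# participants on two separate occasions (test-retest reliability)
test_1 = [12, 18, 9, 22, 15, 17, 11, 20]
test_2 = [13, 17, 10, 21, 14, 18, 12, 19]

# Pearson correlation coefficient between the two administrations
r = statistics.correlation(test_1, test_2)

# The measure is treated as reliable if the coefficient exceeds +0.8
print(f"r = {r:.2f}, reliable: {r > 0.8}")
```

The same check applies to inter-rater reliability, with the two lists holding the tallies recorded by two observers during the trial run.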
  • if questionnaires have low test-retest reliability, it may be necessary to deselect or rewrite some of the questions.
  • to improve reliability in interviews, use the same interviewer; if that is not possible, all interviewers should be properly trained = fewer ambiguous or leading questions
  • to improve reliability in observations, ensure behavioural categories are operationalised and/or hold a discussion between observers about their decisions on certain behaviours.
  • to improve reliability in experiments there must be standardised procedures
  • validity is whether a psychological test/observation/experiment produces a result that is legitimate - whether it measures what it intends to measure and whether the research can be generalised beyond the study setting
  • internal validity is whether the effects found are due to the manipulation of the independent variable or to some other factor
  • ecological validity is a type of external validity.
  • ecological validity is the extent to which the findings of the study can be generalised to the real world
  • temporal validity is a form of external validity
  • temporal validity is the extent to which findings from a research study can be generalised to other historical eras
  • face validity is whether it appears to measure what it intends to measure
  • concurrent validity is the extent to which a psychological measure relates to an existing similar measure
  • to improve validity of experiments :
    • have a control group
    • standardised procedures - minimise investigator effects
    • single/double blind
  • to improve validity in questionnaires:
    • use lie scales to reduce social desirability bias
    • all data submitted is anonymous
  • to improve validity in observations :
    • covert observation = more ecological validity
    • make behavioural categories specific + non-overlapping
  • to improve validity of qualitative research:
    • use triangulation - the use of a number of different sources of evidence
  • usual significance level in psychology is what?
    0.05 (5%)
  • the calculated value is also known as the observed value
  • critical value tells us whether we can reject the null hypothesis or not.
  • one tailed test = directional hypothesis
  • two tailed test = non-directional hypothesis
  • Type I error is when the null hypothesis is rejected and the alternative hypothesis is accepted when, in reality, the null hypothesis was true (a false positive)
  • Type II error is when the null hypothesis is accepted when, in reality, the alternative hypothesis was true (a false negative)
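A minimal sketch of the significance decision, assuming scipy is installed and using a sign-test-style binomial calculation; the 16-out-of-20 figures are invented. It compares the obtained p-value with the 0.05 level, which gives the same decision as comparing a calculated value with a critical value from a table, and shows the one-tailed vs two-tailed distinction.

```python
from scipy.stats import binomtest

# Hypothetical sign-test data: of 20 participants, 16 improved (+)
n_plus, n_total = 16, 20
alpha = 0.05  # the usual significance level in psychology

# Two-tailed test = non-directional hypothesis
two_tailed = binomtest(n_plus, n_total, p=0.5, alternative="two-sided")
# One-tailed test = directional hypothesis (more participants improve)
one_tailed = binomtest(n_plus, n_total, p=0.5, alternative="greater")

for label, result in [("two-tailed", two_tailed), ("one-tailed", one_tailed)]:
    decision = "reject the null hypothesis" if result.pvalue <= alpha else "retain the null hypothesis"
    print(f"{label}: p = {result.pvalue:.3f} -> {decision}")
```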
  • what section of the scientific report is the abstract?
    the first section
  • what is the abstract in a scientific report?
    short summary --> includes aims + hypothesis + method + results + conclusions
  • the introduction of a scientific report includes relevant theories + concepts
  • the method in a scientific report includes
    • design
    • sample - sampling method + target population
    • apparatus + materials
    • procedure - standardised instructions + (de)briefing
    • ethics
  • results section in a scientific report should be a summary of the key findings
    • descriptive statistics = tables, graphs, charts
    • inferential statistics = statistical test, level of significance
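A minimal sketch of the descriptive statistics that might be summarised in the results section; the scores below are invented for illustration.

```python
import statistics

# Hypothetical scores from one condition of an experiment
scores = [12, 15, 15, 18, 20, 22, 25]

# Measures of central tendency and dispersion for the results section
print("mean:", round(statistics.mean(scores), 2))
print("median:", statistics.median(scores))
print("mode:", statistics.mode(scores))
print("range:", max(scores) - min(scores))
print("standard deviation:", round(statistics.stdev(scores), 2))
```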