Year 2


  • What are correlations and correlation coefficients?
    • Correlations measure the relationship/association between 2 co-variables, plotted on a scattergram
    • Correlation coefficient tells us the strength and direction of the relationship between the 2 variables
    • A value of +1 represents a perfect positive correlation
    • A value of -1 represents a perfect negative correlation
    • The closer the coefficient is to +1 or -1 the stronger the relationship, and the closer it is to 0 the weaker the relationship (a minimal Python sketch of computing one follows this card)
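    A minimal sketch (not from the cards) of computing a correlation coefficient in Python, assuming scipy is available; the two co-variables and their values are invented for illustration:

      from scipy.stats import pearsonr

      hours_revised = [2, 4, 5, 7, 9, 10]      # hypothetical co-variable 1
      exam_score = [35, 48, 52, 60, 71, 80]    # hypothetical co-variable 2

      r, p_value = pearsonr(hours_revised, exam_score)
      print(f"r = {r:+.2f}, p = {p_value:.3f}")
      # r near +1 = strong positive, near -1 = strong negative, near 0 = weak or no correlation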
  • What are case studies?
    • A detailed and in-depth investigation, description and analysis of a single individual or group
    • These are often unusual people such as those with rare disorders
    • Usually involves qualitative data and a case history
    • Tend to be longitudinal and can involve gathering additional data from family and friends as well as the individual themselves
  • What are strengths of case studies?
    • Can offer rich, detailed insights that may shed light on very unusual and atypical forms of behaviour, which may be preferred to the more superficial forms of data from experiments
    • Contribute to our understanding of 'typical' functioning like the case of HM with separate STM and LTM stores
    • May generate hypotheses for future study, and one contradictory instance can lead to the revision of an entire theory
  • What are some limitations of case studies?
    • Generalisation of findings is impossible when dealing with such a small sample
    • Information that makes it to the report is based on the subjective selection and interpretation of the researcher
    • Personal accounts from participants, family and friends may be prone to inaccuracy and memory decay, meaning evidence from case studies tends to have low validity
  • What is content analysis?
    • A type of observational research where people are studied indirectly via the communications they have produced, such as emails, texts, TV programmes and films
    • Aim is to summarise and describe this in a systematic way so we can draw overall conclusions
  • What is coding and quantitative data?
    • Coding - categorising information into meaningful units like counting up the number of times a word appears to produce quantitative data
    • e.g. looking at TV adverts to see how often men and women are depicted in professional roles at work or familial roles at home (a minimal coding sketch follows this card)
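    A minimal sketch of coding in content analysis, assuming pre-defined coding units; the adverts and categories below are hypothetical:

      from collections import Counter

      coding_units = ["professional role", "familial role"]   # assumed categories
      adverts = [
          "man shown in a professional role at the office",
          "woman shown in a familial role at home",
          "woman shown in a professional role at the hospital",
      ]

      tally = Counter()
      for advert in adverts:
          for unit in coding_units:
              if unit in advert:
                  tally[unit] += 1

      print(tally)   # quantitative data: a count per coding unit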
  • What is thematic analysis and qualitative data?
    • A form of content analysis that involves summarising qualitative data by identifying recurring themes and ideas within the data
    • Themes are likely to be more descriptive than coding units
    • Once the researcher is satisfied that their developed themes cover most aspects of the data they are analysing, they may collect a new set of data to test the validity of their themes and categories
    • Assuming these themes explain the new data, the researcher writes up the final report, typically using direct quotes from the data to illustrate each theme
  • What are strengths of content and thematic analysis?
    • Content analysis can circumnavigate (get around) many ethical issues, as much of the material of interest already exists within the public domain, so there are no issues with obtaining permission
    • These communications are high in external validity and may access data of a sensitive nature, provided the authors consent to its use
    • Flexible in the sense that it can produce both qualitative and quantitative data depending on the aims of the research
  • What are limitations of content and thematic analysis?
    • Indirect study of people means the communications they produce are studied outside of the context within which they occurred, meaning the researcher may attribute opinions or motivations that were not originally intended
    • May suffer from a lack of objectivity especially when more descriptive forms of thematic analysis are employed
  • What are the stages of content analysis?
    • Sampling method - how material should be sampled e.g. time sampling or event sampling
    • Recording data - should data be transcribed, recorded, collected by an individual researcher or a team, etc.
    • Analysing and representing data - how material should be categorised or coded to summarise it e.g. qualitative or quantitative data
  • What is reliability?
    • A measure of consistency - if a test or measure in psychology produces a particular result on one day, we would expect it to produce the same result on a different day
    • Includes psychological tests, observations, and questionnaires
  • What are ways of assessing reliability?
    • Test re-test: administering the same test or questionnaire to the same sample on different occasions; if it is reliable it should produce the same or similar results - there must be sufficient time between tests so participants cannot recall their answers, but not so long that their attitudes and opinions have changed
    • Inter-observer reliability: may involve a pilot observation where observers watch the same event but record their data independently, which is then correlated to assess reliability (a minimal sketch follows this card)
    • Inter-rater reliability: same as above but for content analysis
    • Inter-interviewer reliability: same as above but for interviews
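    A minimal sketch of assessing inter-observer (or test-retest) reliability by correlating two sets of scores, assuming scipy; the tallies are hypothetical, and Spearman's rho is used here because tally data is treated as 'at least ordinal':

      from scipy.stats import spearmanr

      observer_a = [3, 5, 2, 8, 6, 4]   # hypothetical tallies per behavioural category
      observer_b = [4, 5, 2, 7, 6, 3]

      rho, p_value = spearmanr(observer_a, observer_b)
      print(f"rho = {rho:+.2f}")
      # a coefficient above about +.80 is conventionally taken as acceptable reliability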
  • How can we improve reliability in questionnaires?
    • Should be measured using the test-retest method and the correlation produced should exceed +.80
    • If it produces low test-retest reliability some of the items may need to be deselected or rewritten
    • This can include making questions simpler and less ambiguous, or replacing open questions with closed, fixed-choice alternatives
  • How can we improve reliability in interviews?
    • Using the same interviewer each time or properly training interviewers so one doesn't ask questions that are too leading or ambiguous
    • This is easily avoided in structured interviews where the interviewer's behaviour is more controlled by the fixed questions
    • Unstructured interviews that are 'free-flowing' are less likely to be reliable
  • How can we improve reliability in observations?
    • Making sure behavioural categories are properly operationalised and that they are measurable and self-evident
    • Categories should not overlap and all possible behaviours should be included on the checklist
    • If this doesn't happen different observers are left to make their own interpretations of data and will end up with inconsistent records
    • Low reliability means observers may need further training in using the behavioural categories, or may wish to discuss decisions with each other so they can apply their categories more consistently
  • What is validity?
    • Whether psychological tests, observations, or experiments produce legitimate results, where the observed effects are genuine and the test measures what it was supposed to measure (internal)
    • Also refers to the extent to which findings can be generalised beyond the research context it occurred in (external)
  • What is internal validity?
    • Whether the effects observed within an experiment are due to manipulation of the independent variable and not other factors
    • Demand characteristics threaten the internal validity of studies, as participants respond to the perceived demands of the situation rather than behaving naturally in line with the aims of the study
  • What is external validity?
    • Relates to factors outside of the investigation such as generalising to other settings, other populations, etc.
  • What is ecological validity?
    • Concerns generalising the findings from a study to other settings, most particularly to 'everyday life'
    • If the task used to measure the DV has low mundane realism and isn't like everyday life then it has low ecological validity
    • We must look at all aspects of the research set-up in order to decide whether findings can be generalised beyond the research setting
  • What is temporal validity?
    • Whether findings from a particular study hold true over time e.g. people now consider Freud's concepts to be outdated and sexist
  • What are ways of assessing validity?
    • Face validity - the extent to which a research instrument appears to measure what it is supposed to measure e.g. a survey designed to measure job satisfaction might have high face validity if it includes questions that are clearly related to job satisfaction
    • Concurrent validity - the extent to which the results of a new assessment or test align with those of a previously validated, established measure, when both are administered at the same time
  • How do we improve validity in experiments?
    • Using a control group means we can better assess whether changes in the DV were due to the effect of the IV
    • Standardised procedures to minimise the impact of participant reactivity and investigator effects on the outcome's validity
    • Single-blind and double-blind procedures to reduce the effect of demand characteristics
  • How do we improve validity in questionnaires?
    • Many incorporate lie scales within the questions in order to assess the consistency of a respondent's answers and to control for the effects of social desirability bias
    • Validity can be further enhanced by assuring respondents that their data will remain anonymous
  • How do we improve validity in observations?
    • Minimal intervention by the researcher may produce findings with high ecological validity
    • Covert observations means the behaviour of those observed is likely to be natural and authentic
    • Behavioural categories that are too broad, overlapping or ambiguous may have a negative impact on validity
  • How do we improve validity in qualitative research?
    • The depth and detail associated with qualitative research is better able to reflect a participant's reality, meaning qualitative research has higher ecological validity than quantitative research
    • Interpretive validity - extent to which researcher's interpretation of events matches that of their participants, which can be demonstrated through coherence of the researcher's narrative and direct quotes from ppts. within the report
    • Triangulation - using a number of different sources as evidence like interviews with family/friends, personal diaries, etc. will further enhance validity
  • How do we choose a statistical test?
    • Is the researcher looking for a difference or a correlation?
    • Independent groups or repeated measures/matched pairs?
    • Nominal, ordinal, or interval data?
  • What is the acronym for choosing a statistical test?
    • Chicken Should Come Mashed With Sweetcorn Under Roast Potatoes
    • Reading down each column of the test-choice table: Nominal - Chi-squared (difference, unrelated), Sign test (difference, related), Chi-squared (correlation/association)
    • Ordinal - Mann-Whitney (difference, unrelated), Wilcoxon (difference, related), Spearman's rho (correlation)
    • Interval - Unrelated t-test (difference, unrelated), Related t-test (difference, related), Pearson's r (correlation); a scipy lookup sketch follows this card
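    A minimal sketch of the same choice expressed as a Python lookup; the scipy.stats names in brackets are rough equivalents only (scipy has no dedicated sign test, so a binomial test on the signs of the differences is a common substitute):

      # keys are (aim, design, level of measurement); values name the test and a
      # roughly corresponding scipy.stats function
      TEST_TABLE = {
          ("difference", "unrelated", "nominal"):  "Chi-squared (scipy.stats.chi2_contingency)",
          ("difference", "related",   "nominal"):  "Sign test (scipy.stats.binomtest on the signs)",
          ("correlation", "-",        "nominal"):  "Chi-squared (scipy.stats.chi2_contingency)",
          ("difference", "unrelated", "ordinal"):  "Mann-Whitney (scipy.stats.mannwhitneyu)",
          ("difference", "related",   "ordinal"):  "Wilcoxon (scipy.stats.wilcoxon)",
          ("correlation", "-",        "ordinal"):  "Spearman's rho (scipy.stats.spearmanr)",
          ("difference", "unrelated", "interval"): "Unrelated t-test (scipy.stats.ttest_ind)",
          ("difference", "related",   "interval"): "Related t-test (scipy.stats.ttest_rel)",
          ("correlation", "-",        "interval"): "Pearson's r (scipy.stats.pearsonr)",
      }

      print(TEST_TABLE[("difference", "unrelated", "ordinal")])   # -> Mann-Whitney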
  • What is nominal data?
    • Most basic/lowest level of measurement
    • Used when data is represented in the form of tally charts/categories
    • Discrete in that each item can only appear in one of the categories
    • Gives little information as it is basically just a headcount
  • What is ordinal data?
    • A type of categorical data with a set order or scale to it e.g. rate how much you like psychology on a scale of 1-10
    • Doesn't have equal intervals between each unit
    • Lacks precision as it is based on subjective opinion rather than objective measures, sometimes referred to as 'unsafe' data
    • 'At least' ordinal - any measurement where we cannot guarantee equal distance between data
  • What is interval data?
    • Based on numerical scales that include units of equal, precisely defined size such as time, temperature, weight, etc.
    • Most precise and sophisticated form of data in psychology
    • e.g. the gap between 2 and 3 cm is exactly the same as the gap between 10 and 11 cm
  • How do we know which critical values to use?
    • One-tailed or two-tailed test? - hypothesis was either directional or non-directional
    • Number of participants in the study - N or df
    • Level of significance - p value, standard is 0.05 or 5% (a worked sketch of checking p against 0.05 follows this card)
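    A minimal sketch (hypothetical scores) of comparing a calculated p-value against the 0.05 level with a related t-test, assuming a reasonably recent scipy; a directional hypothesis uses a one-tailed test via the `alternative` argument:

      from scipy.stats import ttest_rel

      condition_a = [12, 15, 14, 10, 13, 16, 11, 14]
      condition_b = [14, 17, 15, 12, 16, 18, 13, 15]

      t_stat, p_two = ttest_rel(condition_a, condition_b)                      # non-directional (two-tailed)
      t_stat, p_one = ttest_rel(condition_a, condition_b, alternative="less")  # directional (one-tailed)
      alpha = 0.05
      print(f"two-tailed p = {p_two:.3f}, significant: {p_two < alpha}")
      print(f"one-tailed p = {p_one:.3f}, significant: {p_one < alpha}")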
  • What are type I and type II errors?
    • Type I = null hypothesis is rejected and the alternative hypothesis is accepted, often referred to as a false positive as the researcher claims to have found a significant effect when one doesn't exist. More likely to be made if the significance level is too lenient, like 10%
    • Type II = alternative hypothesis is rejected and the null hypothesis is accepted, referred to as a false negative as the researcher concludes there is no significant effect when in fact there is one. More likely to be made if the significance level is too stringent, like 1%, as potentially significant values may be missed
    • The 5% level best balances the risk of making a type I or type II error (a small simulation sketch follows this card)
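    A minimal sketch simulating why the significance level trades off against Type I errors: both samples are drawn from the same population (so the null hypothesis is true and every 'significant' result is a false positive), and a lenient 10% level produces more of them than a stringent 1% level. Assumes numpy and scipy; all numbers are invented:

      import numpy as np
      from scipy.stats import ttest_ind

      rng = np.random.default_rng(0)
      n_studies = 2000

      for alpha in (0.10, 0.05, 0.01):
          false_positives = 0
          for _ in range(n_studies):
              a = rng.normal(0, 1, 20)
              b = rng.normal(0, 1, 20)      # same population: no real effect exists
              _, p = ttest_ind(a, b)
              if p < alpha:
                  false_positives += 1
          print(f"alpha = {alpha:.2f}: Type I error rate = {false_positives / n_studies:.3f}")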
  • What is the abstract as a section of a scientific report?
    • A concise summary of the report, about 150-200 words including all the major elements of the study like the aims and hypotheses, methods and procedures, results and conclusions
    • Means researchers can read these instead of full reports to identify which studies are worthy of further examination
  • What is the introduction as a section of a scientific report?
    • A literature review of the general area of research detailing relevant theories
    • Broad themes are covered first, and these are narrowed down closer and closer to the current piece of research
  • What is the method as a section of a scientific report?
    • Should include sufficient detail so other researchers are able to precisely replicate the study if they wish
    • Clearly states the design, sample, apparatus/materials, procedure, ethics, etc.
  • What are the results as a section of a scientific report?
    • Should summarise the key findings from the investigation, likely to include descriptive statistics
    • Inferential statistics should refer to statistical tests used, calculated and critical values, levels of significance, and which hypothesis was accepted
    • Qualitative data results may involve analysis of themes and/or categories
  • What is the discussion as a section of a scientific report?
    • Results will be verbally summarised, discussed in the context of the evidence presented
    • Limitations should be discussed as well as suggestions of how these may be addressed in future studies
    • Wider implications of the research are considered, including real-world applications and what contribution it has made to the existing knowledge base within the field
  • What is referencing as a section of a scientific report?
    • Full details of any source material cited in the report
    • Journal references: Gupta, S. (1991) Effects of time of day and personality on intelligence test scores. Personality and Individual Differences, 12(11), 1227-1231
    • Book references: Skinner, B.F. (1953) Science and Human Behaviour. New York: Macmillan
    • Web references: NHS (2018) Phobias: https://www.nhs.uk/conditions/phobias/ [Accessed May 2020]
  • What is objectivity as a feature of science?
    • Scientists aim to be objective, keeping a 'critical distance' from their research, and must not allow their personal opinions or biases to discolour the data they collect or influence the behaviour of the participants they are studying
    • Objective methods include high control such as lab experiments
  • What is the empirical method as a feature of science?
    • Empirical methods emphasise the importance of data collection based on direct, sensory experience
    • Experimental and observational methods are examples - a theory cannot claim to be scientific unless it has been empirically tested and verified