Research Methods


  • Experimental method - the manipulation of variables combined with the random allocation of participants to conditions = used to establish cause and effect
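    Random allocation itself is simple to carry out; a minimal Python sketch (participant IDs and group sizes are invented for illustration):

```python
import random

# Hypothetical pool of 20 recruited participants
participants = [f"P{i}" for i in range(1, 21)]

random.shuffle(participants)                  # random allocation
half = len(participants) // 2
experimental_group = participants[:half]      # receives the manipulated IV
control_group = participants[half:]           # baseline for comparison

print("Experimental:", experimental_group)
print("Control:", control_group)
```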
  • Independent variable - manipulated by the researcher (change)
    Dependent variable - measured by the researcher
  • Extraneous variable - any variable other than the IV that affects the DV
    1. participant variables such as age and intelligence
    2. situational variables such as the environment / temperature and noise
    3. Experimenter variables concerning the personality and appearance of the researcher
    Confounding variables - extraneous variables that were not controlled and so affect the results (a negative)
  • Operationalisation of the variables - precisely define the variables into measurable factors (easy to identify)
  • Demand characteristics are features that allow the participants to work out the aim of the study = changing their behaviour to match the aim of the study
    • includes the ‘screw you effect’, where participants deliberately give wrong answers, OR the opposite ‘please you effect’, where they give the answers they think will please the researcher
    • acting unnaturally out of fear or nervousness, or due to social desirability bias (wanting to be seen favourably and so giving the answers expected of them)
  • Investigator effects are features of the researcher that affect the participants’ responses
    • age / gender / tone of voice / ethnicity / unconscious bias
  • LAB performed in controlled settings with standardised procedures = randomly allocated to conditions of the experiment (eg Baddeley encoding of memory)
    +) high degree of control meaning the IV and the DV are precisely operationalised
    +) replication by other researchers
    +) cause and effect can be established due to being controlled
    -) experimenter bias as the researcher’s expectations can affect the results = influence the p’s
    -) low ecological validity due to the artificial setting = DC’s more likely
  • FIELD occur in real world settings with the researcher manipulating the IV with the rest being controlled (Bickman’s study of obedience = uniform)
    +) high ecological validity = generalise findings to real life
    +) fewer demand characteristics as they don’t know they are taking part
    -) ethical issues as p’s do not know they are being observed = no informed consent or right to withdraw
    -) less replicable due to natural setting = environment changes each trial
  • QUASI the IV is naturally occurring and can’t be directly manipulated by the researcher (it already exists = can’t randomly allocate p’s to conditions)
    • typically in artificial settings
    +) more convenient and ethical as no direct manipulation of the IV and allocation to conditions
    +) often carried out in lab conditions = scientific and thorough = replicable
    -) confounding variables may affect the DV as p’s are not randomly allocated to conditions
  • NATURAL where the IV is pre existing like a quasi BUT it varies naturally = the change would have happened even if the researcher had not been interested
    +) more practical and ethical
    +) high external validity as they study real life issues as they happen = effect of natural disasters on stress levels
    -) naturally occurring event of interest might happen rarely = little opportunity to study it = limits the scope for generalising findings
    -) p’s may not be randomly allocated to conditions = less certain that the IV rather than a confounding variable caused the change in the DV
  • Observational technique
    covert = p’s are unaware that they are being observed
    +) less SDB and fewer DC’s = more valid data as it removes the problem of participant reactivity
    -) ethical issues
  • Observational technique
    overt = p’s are aware they are being observed (EG - Zimbardo)
    +) more ethical
    -) knowledge of being observed can influence the behaviour
  • Observational technique
    Participant observation involves the observer being actively involved in the situation being studied = Zimbardo
    +) increased insight into difficult to access behaviours = increases validity
    -) highly unethical if covert
    -) loss of objectivity
  • Observational technique
    Non participant involves the observer NOT being actively involved = Ainsworth’s
    +) more objective and less likely to be biased
    -) loss of valuable insight into behaviours if psychologist is too far removed from the p’s
  • Observational Design
    Naturalistic observation = natural occurring events
    +) high external validity as findings can be generalised to everyday life
    -) replication can be difficult as environment varies
    SCHAFFER AND EMERSON
  • Observational Design
    Controlled observation as the environment is manipulated in some way to observe target behaviour
    +) replication is easier as more controlled
    -) findings cannot be readily applied to everyday life
    MILGRAM OR AINSWORTH
  • Behavioural categories - dividing the target behaviour into observable subsets of behaviours through a coding system = categories should reflect what is being studied
  • Time sampling - recording behaviour at fixed intervals within a set time frame = for example every 30 seconds within a 10 minute time frame
    +) reduces amount of time spent
    +) effective in reducing number of observations that have to be made
    -) behaviour occurring between the sampling points is missed = the sample may be unrepresentative of the observation as a whole
  • Event sampling - counting every time the target behaviour occurs in the target individual or group (e.g. over a 10 minute observation)
    +) higher inter rater reliability due to the behaviour being operationalised
    +) limits the observation to specific behaviours = reduces the chance of infrequent behaviour being missed
    -) problems if the behaviour categories are too ambiguous or poorly defined
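    A short Python sketch contrasting event sampling and time sampling on the same invented observation record (the times and behaviour categories are made up):

```python
# Invented record: (seconds into a 2 minute observation, coded behaviour)
observations = [(5, "smile"), (12, "talk"), (31, "smile"), (48, "cry"),
                (65, "talk"), (92, "smile"), (118, "cry")]

# Event sampling: tally every occurrence of each behaviour category
event_tally = {}
for _, behaviour in observations:
    event_tally[behaviour] = event_tally.get(behaviour, 0) + 1

# Time sampling: only record what is seen at fixed 30 second sampling points
sampling_points = range(0, 120, 30)
time_sample = [b for t, b in observations
               if any(abs(t - point) <= 2 for point in sampling_points)]

print("Event sampling:", event_tally)   # every occurrence counted
print("Time sampling: ", time_sample)   # behaviour between points is missed
```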
  • Inter - observer reliability = when two independent observers have the same behaviour categories and code the behaviour in the same way
    = they should come up with similar scores, which can be checked with a statistical test (typically a correlation of at least +0.8 is taken to show reliability)
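    A minimal Python sketch of that check (the observers’ tallies are invented; the +0.8 cut-off is the rule of thumb from this card):

```python
from statistics import correlation  # available in Python 3.10+

# Invented tallies for the same five behaviour categories, coded independently
observer_a = [12, 7, 3, 9, 5]
observer_b = [11, 8, 2, 10, 5]

r = correlation(observer_a, observer_b)   # Pearson's r between the two sets of scores
print(f"r = {r:.2f} ->", "reliable" if r >= 0.8 else "refine the behaviour categories")
```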
  • p ≤ 0.05 = equal to or less than 0.05 means there is a 95% confidence level and 5% or less probability the results are down to chance or luck = significant = REJECT the null hypothesis
  • p > 0.05 = more than 0.05 means there is less than 95% confidence and more than a 5% probability the results are due to luck or chance = not significant = ACCEPT the null hypothesis
  • If results are NOT significant then we accept the null as this assumes that there will be no significance = results are more than 5% due to chance 
  • If results are significant then we reject the null as this assumes no significance = less than 5% chance 
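    The decision rule on these cards as a short Python sketch (the p value used here is invented):

```python
p_value = 0.03   # hypothetical result from whichever statistical test was used
alpha = 0.05     # conventional significance level in psychology

if p_value <= alpha:
    print("Significant: 5% or less probability the result is due to chance -> reject the null hypothesis")
else:
    print("Not significant: more than 5% probability it is due to chance -> retain the null hypothesis")
```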
  • Type 1 error - a difference or relationship is wrongly accepted as a real one = the null hypothesis is wrongly rejected (a false positive: the test says pregnant when not actually pregnant)
  • Type 2 error - a real difference or relationship is wrongly dismissed as insignificant = the null hypothesis is wrongly accepted (a false negative: the test says not pregnant when you are)
  • normal distribution - most scores cluster around the mean, with the mean, median and mode at the centre (a symmetrical pattern of frequency data = bell shaped curve)
  • Positive (R) skew - data is concentrated towards the left of the graph and the tail of the graph points to the right
    • contains mainly LOW scores with some high outliers = a really hard test produces a lot of low scores
  • Negative (L) skew - data clusters towards the right of the graph and the tail points to the left
    • contains mainly HIGH scores with some low outliers = a really easy test so most people get high scores
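    A quick Python sketch for spotting skew from summary statistics (both score sets are invented): outliers drag the mean towards the tail, so mean > median suggests positive skew and mean < median suggests negative skew.

```python
from statistics import mean, median

hard_test = [2, 3, 3, 4, 4, 5, 5, 6, 18, 20]         # mostly low scores, a few high outliers
easy_test = [1, 3, 14, 15, 16, 16, 17, 18, 18, 19]   # mostly high scores, a few low outliers

for name, scores in [("hard test", hard_test), ("easy test", easy_test)]:
    m, md = mean(scores), median(scores)
    direction = "positive (tail to the right)" if m > md else "negative (tail to the left)"
    print(f"{name}: mean={m:.1f}, median={md}, skew looks {direction}")
```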
  • Nominal data
    Independent measures
    test of difference
    = CHI square
  • Nominal data
    Repeated measures
    test of difference
    = SIGN test
  • Nominal data
    Independent measures
    Test of relationship
    = CHI square
  • Ordinal data
    Independent measures
    Test of difference
    = MANN WHITNEY
  • Ordinal data
    Repeated measure
    Test of difference
    = WILCOXON
  • Ordinal data
    Test for relationship
    = SPEARMAN’S RHO
  • Interval data
    Independent measures
    Test of difference
    = UNRELATED T TEST
  • Interval data
    Repeated measures
    Test of difference
    = RELATED T TEST
  • Interval data
    Test for correlation / relationship
    = PEARSON'S R
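    The nine cards above form a lookup table; a Python sketch of the same table as a dictionary (the key wording is my own):

```python
# (level of measurement, design, purpose) -> test named on the cards above
STAT_TESTS = {
    ("nominal",  "independent", "difference"):   "Chi-square",
    ("nominal",  "repeated",    "difference"):   "Sign test",
    ("nominal",  "independent", "relationship"): "Chi-square",
    ("ordinal",  "independent", "difference"):   "Mann-Whitney",
    ("ordinal",  "repeated",    "difference"):   "Wilcoxon",
    ("ordinal",  "any",         "relationship"): "Spearman's rho",
    ("interval", "independent", "difference"):   "Unrelated t-test",
    ("interval", "repeated",    "difference"):   "Related t-test",
    ("interval", "any",         "relationship"): "Pearson's r",
}

def choose_test(level, design, purpose):
    """Look up the appropriate test; correlations ignore the design."""
    key = (level, design, purpose)
    if key not in STAT_TESTS and purpose == "relationship":
        key = (level, "any", purpose)
    return STAT_TESTS[key]

print(choose_test("ordinal", "repeated", "difference"))     # Wilcoxon
print(choose_test("interval", "repeated", "relationship"))  # Pearson's r
```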
  • What is counterbalancing?

    technique used to control order effects in a repeated measures design = half the p’s complete condition A then B, the other half B then A (ABBA)
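    A minimal Python sketch of ABBA counterbalancing (the participant IDs are invented): half the sample completes condition A then B, the other half B then A.

```python
participants = [f"P{i}" for i in range(1, 13)]   # hypothetical sample of 12

# ABBA counterbalancing: alternate the order of the two conditions across
# participants so practice and fatigue effects fall equally on A and B.
orders = {p: ("A", "B") if i % 2 == 0 else ("B", "A")
          for i, p in enumerate(participants)}

for person, (first, second) in orders.items():
    print(f"{person}: {first} then {second}")
```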
  • Features of science
    1. replicability - being able to repeat a study to check the validity of the results
    2. objectivity - observations made without bias = not based on viewpoint
    3. falsifiability (falsification) - a scientific theory must be testable in a way that could, in principle, show it to be false