Information gained through direct observation rather than reasoned argument or belief
Objective
Data should not be affected by the expectations of the researcher. Data collection should be systematic and free from bias. Without objectivity there is no way of knowing if our findings are valid.
Controlled
All extraneous variables need to be controlled in order to be able to establish cause (IV) and effect (DV).
Replication
Scientists record their methods and standardise them carefully so that the same procedures can be followed in the future (replicated). Repeating a study is the most important way to demonstrate the validity of an observation or experiment. If the outcome is the same, then this indicates that the original findings are valid.
Hypothesis testing
A statement is made at the beginning of an investigation that serves as a prediction and is derived from a theory. There are different types of hypotheses (null and alternative), which need to be stated in a form that can be tested (i.e. operationalized and unambiguous).
Predictability
We should be aiming to be able to predict future behaviour from the findings of our research.
The philosopher Thomas Kuhn suggested that what distinguishes scientific disciplines from non-scientific disciplines is a shared set of assumptions and methods known as a paradigm. He argued that psychology lacked a universally accepted paradigm and was therefore best seen as a pre-science. Natural sciences, on the other hand, have a number of principles at their core, e.g. biology has the theory of evolution. By contrast, psychology has many conflicting approaches.
Paradigm shift
Occurs when a paradigm is challenged more and more over time until there is so much contradictory evidence that it can no longer be accepted, and a new paradigm takes its place. In psychology there were shifts from the psychodynamic approach (psychoanalysis) to the behaviourist approach, then to the cognitive approach, and then to the biological approach.
Reliability
Whether something is consistent. In the case of a study, whether it is replicable.
Types of reliability
Internal reliability - assesses the consistency of results across items within a test
External reliability - the extent to which a measure is consistent from one use to another
Internal reliability
Assessed using the split-half method - comparing scores on two halves of the same test to check that all parts of the test contribute equally to what is being measured
External reliability
Assessed using inter-rater reliability - extent to which two or more observers are observing and recording behaviour in the same way
Assessed using test-retest reliability - involves presenting the same participants with the same test or questionnaire on two separate occasions and seeing whether there is a positive correlation between the two
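Both of these checks come down to computing a correlation between two sets of scores. A minimal sketch, using made-up illustrative scores (the participants and numbers are hypothetical, not from any real study):

```python
# Checking reliability with a correlation coefficient between two
# sets of scores (e.g. two sittings of the same test).

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Test-retest: the same six participants sit the same test twice.
first_sitting = [12, 18, 9, 15, 20, 11]
second_sitting = [13, 17, 10, 14, 21, 12]

r = pearson(first_sitting, second_sitting)
print(f"test-retest correlation: r = {r:.2f}")
```

A strong positive correlation (r close to +1) between the two sittings suggests good external reliability; the same calculation applied to the two halves of a single test assesses internal reliability via the split-half method.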
Validity
Whether something is true (measures what it sets out to measure)
Types of validity
Internal validity - the extent to which the observed effects are due to the manipulation of the IV rather than other factors
External validity - whether it is possible to generalise the results beyond the experimental setting (e.g. to the wider population)
Internal validity
Factors that affect it include participant variables, lack of experimental control, situational variables, and researcher bias
Types of internal validity
Content validity - the extent to which the questions/measurements in the study measure what we think we are measuring
Face validity - a simple check of whether a test appears, on the face of it, to measure what it claims to measure
Construct validity - the extent to which the test measures the theoretical construct it claims to measure
External validity
Types include ecological validity, population validity, temporal validity, and concurrent validity; mundane realism (how closely the task mirrors everyday life) contributes to ecological validity
Aim
The researcher's area of interest - what they are looking at
Hypothesis
A precise, testable statement of what the researchers predict will be the outcome of the study
Types of hypotheses
Alternative hypothesis - states that there is a relationship between the two variables being studied
Null hypothesis - states that there is no relationship between the two variables being studied
Nondirectional hypothesis - predicts that the independent variable will have an effect on the dependent variable, but the direction of the effect is not specified
Directional hypothesis - predicts the direction of the effect of the independent variable on the dependent variable (e.g. scores will be higher in one condition than the other)
Upon analysis of the results, an alternative hypothesis can be rejected or supported, but it can never be proven to be correct.
We must avoid any reference to results proving a theory as this implies 100% certainty, and there is always a chance that evidence may exist which could refute a theory.
How to write a hypothesis
1. Identify the key variables in the study
2. Operationalise the variables being investigated
3. Decide on a direction for your prediction
4. Write your hypothesis
Representative sample
A sample that closely matches the target population as a whole in terms of key variables and characteristics
Target population
The group that the researcher draws the sample from and wants to be able to generalise the findings to
Sampling techniques
Random sampling - every member of the target population has an equal chance of being selected
Stratified sampling - the sample reflects the proportions of subgroups (strata) within the target population
Systematic sampling - every nth member of the target population is selected
Opportunity sampling - selecting whoever happens to be available and willing at the time
Volunteer sampling - participants select themselves, e.g. by responding to an advert
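Three of these techniques can be sketched in a few lines of Python. This is a toy illustration: the population of 100 numbered "participants" and the 60/40 subgroup split are made up for the example.

```python
# Toy sketch of random, systematic, and stratified sampling.
import random

random.seed(42)  # fixed seed so the illustration is reproducible
population = list(range(1, 101))  # 100 hypothetical participant IDs

# Random sampling: every member has an equal chance of selection.
random_sample = random.sample(population, 10)

# Systematic sampling: select every nth member (here every 10th).
systematic_sample = population[::10]

# Stratified sampling: sample from each subgroup (stratum) in
# proportion to its size in the target population.
strata = {"group_a": population[:60], "group_b": population[60:]}  # 60/40 split
stratified_sample = []
for group in strata.values():
    k = round(len(group) * 10 / len(population))  # proportional share of a sample of 10
    stratified_sample += random.sample(group, k)

print(len(random_sample), len(systematic_sample), len(stratified_sample))
```

Opportunity and volunteer sampling have no equivalent sketch because the researcher does not control selection in the same way: participants are chosen by availability or select themselves.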
Pilot study
A small-scale trial run of a study conducted to ensure the method will work according to plan. If it doesn't, amendments can be made.
Experimental designs
Independent groups design - each participant only takes part in one condition of the IV
Repeated measures design - each participant takes part in all conditions of the IV
Matched pairs design - pairs of participants are matched on important characteristics, and one member of each pair is allocated to each condition of the IV
Operationalising variables
Clearly describing the variables (IV and DV) in terms of how they will be manipulated (IV) or measured (DV)
Independent variable
The variable that the experimenter manipulates (changes)
Dependent variable
The variable that is measured to tell you the outcome
Extraneous variable
Variables that, if not controlled, may affect the DV and give a false impression that the IV has produced a change when it hasn't
Confounding variable
An extraneous variable that varies systematically with the IV so we cannot be sure of the true source of the change to the DV
Control of extraneous variables
Order effects
Participant variables
Situational variables
Investigator effects
Order effects
Can occur in a repeated measures design; the order in which tasks are completed influences the outcome, e.g. a practice effect or a boredom effect on the second task. Order effects can be controlled using counterbalancing.
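Counterbalancing can be sketched as follows: half the participants complete condition A then B, the other half B then A, so order effects are spread evenly across the two conditions rather than piling up on one. The participant labels below are hypothetical.

```python
# Toy sketch of counterbalancing in a repeated measures design:
# alternate the order of the two conditions between participants.
participants = ["P1", "P2", "P3", "P4", "P5", "P6"]

orders = {}
for i, p in enumerate(participants):
    # Even-indexed participants do A then B; odd-indexed do B then A.
    orders[p] = ["A", "B"] if i % 2 == 0 else ["B", "A"]

for p, order in orders.items():
    print(p, "->", " then ".join(order))
```

With this scheme any practice or boredom effect on the second task affects conditions A and B equally, so it no longer varies systematically with the IV.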
Participant variables
Participants in one group may differ in a significant way from participants in another group. This risk can be reduced via random allocation or by matched pairs.
Situational variables
Factors in the environment that may affect the DV. These can be reduced by using a standard procedure.
Investigator effects
These result from the effects of a researcher's behaviour and characteristics on the participants and the study.