What are the three councils behind Canada's Tri-Council Policy Statement (TCPS)?
Canadian Institutes of Health Research (CIHR)
Social Sciences and Humanities Research Council (SSHRC)
Natural Sciences and Engineering Research Council (NSERC)
What are the Canadian Psychological Association's (CPA) six ethical principles for psychology research?
Respect for dignity of persons.
Privacy and confidentiality.
Informed consent.
Minimising Harm.
Freedom to Withdraw.
Use of Deception.
What exactly are privacy and confidentiality?
Confidentiality: Individual responses are kept secret and used only for the purposes promised by the researcher.
Anonymity: When information cannot be connected to a person's identity.
What exactly is informed consent?
When subjects are informed of all available information about the study so they can make a rational decision to participate. A consent form contains all the elements of informed consent and a place for the participant to sign.
What issues can arise with informed consent?
Autonomy issues – the extent to which a participant can understand the information provided (e.g. minors, people with disorders, and individuals at risk of coercion).
Deception – Researchers often use passive and active deception when it is called for in an experiment.
Passive Deception
When researchers withhold information out of concern that, if participants knew the purpose of the study, their behaviour would change to fulfil that purpose; the data would thus be useless.
Active Deception
When researchers misinform participants about the purpose of the study or some other aspect of the study.
When researchers use deception, what are their responsibilities to protect participants?
Must be justified.
Cannot withhold information that would affect participants' decision to participate in the study.
Researchers have to debrief participants at the end of the study.
Debriefing
Full disclosure of all aspects of the study, occurring after the study. Issues such as deception and potential harmful effects are disclosed. This ensures that participants do not leave with ill feelings towards the field.
Reliability
Stability or consistency of the measurements produced by a specific measurement procedure. This can be expressed by the following formula:
measured score = true score + error.
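The formula can be illustrated with a short simulation (the scores and error spreads below are hypothetical, made up for illustration): each observed score is a true score plus random error, and the larger the error component, the further measured scores drift from the true scores.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Hypothetical true scores for ten participants (made up for illustration).
true_scores = [50, 55, 60, 65, 70, 75, 80, 85, 90, 95]

def measure(scores, error_sd):
    """Apply the formula: measured score = true score + error."""
    return [t + random.gauss(0, error_sd) for t in scores]

def mean_abs_error(measured, true):
    """Average size of the error component across participants."""
    return sum(abs(m - t) for m, t in zip(measured, true)) / len(true)

precise = measure(true_scores, error_sd=1)   # little error: a reliable measure
noisy = measure(true_scores, error_sd=30)    # much error: an unreliable measure

print(mean_abs_error(precise, true_scores))
print(mean_abs_error(noisy, true_scores))
```

The precise measure stays close to the true scores while the noisy one does not, which is exactly what the error term in the formula captures.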
Operational Definition
A definition of the variable in terms of the operations or techniques used to measure or manipulate it in a specific study.
How do we Achieve Reliability?
Train observers well – be explicit and specific with the instructions and detail regarding the variable being measured.
Word questions well – make sure that participants are not confused about what exactly the question is asking.
Calibrate and place equipment well.
Observe the construct multiple times – we want to have multiple observations of the same variables.
How can we know how reliable a measure is?
Test-Retest Reliability
Internal Consistency Reliability
Split-Half Reliability
Test-Retest Reliability
How consistent a measure is across time; take the measure two times and correlate the score at time one with the score at time two; scores should be similar.
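A test-retest check boils down to a Pearson correlation between the two measurement occasions. A minimal sketch, using hypothetical scores for five participants:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for five participants at time 1 and time 2.
time1 = [10, 12, 15, 18, 20]
time2 = [11, 13, 14, 17, 21]

print(round(pearson_r(time1, time2), 3))  # → 0.964
```

A correlation near 1 means the scores at the two times are very similar, i.e. the measure is stable across time.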
Internal Consistency Reliability
How consistent is the measure across items intended to measure the same concept?
Cronbach's Alpha: Based on the average of all the inter-item correlations and the number of items in the measure; reflects how the items in the scale correlate with each other and whether they correlate highly with each other.
Split-Half Reliability: Splits the test in half, computing a separate score for each half, and then calculates a correlation between the two scores.
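Cronbach's alpha as described above can be computed directly from the item variances and the variance of the total scores. A minimal sketch with a hypothetical 3-item scale answered by five participants (all numbers made up for illustration):

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one list of scores per item (same participants, same order).

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each participant's total
    item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical 3-item scale, five participants.
item1 = [3, 4, 4, 5, 2]
item2 = [3, 5, 4, 5, 1]
item3 = [2, 4, 5, 5, 2]

print(round(cronbach_alpha([item1, item2, item3]), 3))  # → 0.942
```

Alpha rises when items correlate highly with one another and when there are more items, matching the description above.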
Inter-Rater Reliability
How consistent is the measure when different people are rating?; the extent to which raters agree in their observations.
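A simple index of inter-rater reliability is the proportion of observations on which two raters agree. A minimal sketch with hypothetical ratings of ten behaviours:

```python
def percent_agreement(rater_a, rater_b):
    """Share of observations on which two raters give the same rating."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical ratings of ten behaviours by two observers.
rater_a = ["aggressive", "calm", "calm", "aggressive", "calm",
           "aggressive", "calm", "calm", "aggressive", "calm"]
rater_b = ["aggressive", "calm", "aggressive", "aggressive", "calm",
           "aggressive", "calm", "calm", "calm", "calm"]

print(percent_agreement(rater_a, rater_b))  # → 0.8
```

The two raters agree on 8 of 10 observations, giving 80% agreement; well-trained observers with explicit instructions should push this figure higher.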
Reliability indicates what?
It indicates the amount of error but not accuracy; a measure can be highly reliable but not accurate.
Construct Validity
The degree to which the operational definition of a variable accurately reflects the true theoretical meaning of the variable.
What are the indicators of construct validity as a measure?
1. Does the content of the measure reflect the theoretical meaning of the construct?
Face and Content Validity
2. How does this measure relate to othermeasures and behaviour?
Predictive, Concurrent, Convergent, and Discriminant Validity.
Face Validity
The content of the measure appears to reflect the construct being measured.
Content Validity
The content of the measure captures all aspects of the intended construct.
Predictive Validity
Scores on the measure predict behaviour on a criterion measured at a time in the future.
Concurrent Validity
Scores on the measure are related to a criterion measured at the same time.
Convergent Validity
Scores on the measure are related to other measures of the same construct or similar constructs.
Discriminant Validity
Scores on the measure are not related to other measures that are theoretically different.
What are the properties of the four scales of measurement?
Nominal
Ordinal
Interval
Ratio
Nominal
Has no numerical or quantitative properties. Instead, categories or groups simply differ from one another.
Ordinal
Allows us to rank order the levels of the variable being studied: rank ordering with numeric values; values are smaller or larger than the next; the intervals between items are not known.
Interval
Values are smaller or larger than the next; the interval between items is known and meaningful; no true zero point (e.g. temperature in degrees Celsius, where 0° does not mean an absence of temperature).
Ratio
Values are smaller or larger than the next; the interval between items is known and meaningful; has a true zero point (e.g. reaction time in seconds).