self-report measure: a measure in which participants are asked to report their own thoughts, feelings, and behaviors
observational measure: a measure that is based on observations of behaviour or events in the environment
physiological measure: records biological data such as brain activity, hormone levels or heart rate
categorical variable: a variable whose levels are categories (e.g., sex or species); also known as a nominal variable
quantitative variable: a variable whose levels are coded with meaningful numbers that represent amount or degree, such as height, weight, and temperature
ordinal scale: a quantitative scale whose numerals represent a rank order of magnitude; the distances between ranks may not be equal
interval scale: a quantitative scale whose intervals between values are equal in size, but which has no true zero
example: IQ test
ratio scale: a quantitative scale with equal intervals and a true zero, so ratios of values are meaningful (e.g., number of questions answered correctly)
reliability: how consistent the results of a measure are
validity: whether the operationalization is measuring what it is supposed to measure
test-retest reliability: how consistent is the test over time?
interrater reliability: consistent scores are obtained no matter who measures the variable
internal reliability: a study participant gives a consistent pattern of answers, no matter how the researchers phrase the question
correlation coefficient: a measure of the strength of the relationship between two variables. also known as r
slope direction: positive, negative, or zero; the direction of the slope of the line of best fit
strength: how closely the data points cluster along a line of best fit drawn through them
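a minimal sketch (not from these notes) of how r captures slope direction and strength; the study-hours and exam-score values are invented, and numpy's corrcoef is used purely for illustration:

    # hypothetical data; the sign of r gives the slope direction, and a value
    # near +1 or -1 means the points cluster tightly around the line of best fit
    import numpy as np
    hours_studied = np.array([1, 2, 3, 4, 5, 6])
    exam_score = np.array([52, 60, 58, 71, 75, 80])
    r = np.corrcoef(hours_studied, exam_score)[0, 1]   # Pearson's r
    print(round(r, 2))  # positive and close to 1: a strong positive relationship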
face validity: the degree to which a test appears to measure what it is supposed to measure
content validity: the extent to which a measure captures all parts of a defined construct
criterion validity: establishes the extent to which a measure is associated with a behavioral outcome with which it should be associated
known-groups paradigm: researchers test whether scores on the measure can discriminate among two or more groups whose behavior is already confirmed
convergent validity: an empirical test of whether a self-report measure correlates with other measures of a theoretically similar construct
average inter-item correlation: measure of internal reliability for a set of items; mean of all possible correlations computed between each item and the others
Cronbach’s alpha: a correlation-based statistic that measures a scale’s internal reliability
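a minimal sketch (not from these notes) of both internal-reliability statistics, using an invented 5-respondent by 3-item response matrix and numpy:

    import numpy as np
    items = np.array([[4, 5, 4],
                      [2, 3, 2],
                      [5, 5, 4],
                      [3, 4, 3],
                      [1, 2, 2]])   # rows = respondents, columns = scale items
    n_items = items.shape[1]
    # average inter-item correlation: mean of the correlations between each pair of items
    corr = np.corrcoef(items, rowvar=False)
    avg_inter_item_r = corr[np.triu_indices(n_items, k=1)].mean()
    # Cronbach's alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores)
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    alpha = (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)
    print(round(avg_inter_item_r, 2), round(alpha, 2))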
construct validity: an indication of how well a conceptual variable is operationalized, i.e., how well it is measured or manipulated in a study
discriminant validity: an empirical test of whether a self-report measure correlates less strongly with measures of theoretically dissimilar constructs
open-ended question: respondents answer freely, in their own words
forced-choice question: respondents pick the best of two (or more) options
Likert scale: a rating scale anchored by labels such as strongly agree, agree, neither agree nor disagree, disagree, and strongly disagree
leading question: wording leads people to a particular response
double-barreled question: asks two questions in one (poor construct validity, because people may be responding to only one part of the question)
response sets (non-differentiation): a type of shortcut people can take when answering survey questions, answering every item positively, negatively, or neutrally without thinking about each one
response sets weaken construct validity because the respondents aren't reporting their real thoughts
acquiescence: yea-saying, when people say yes or strongly agree to every item instead of thinking carefully about each one
fence-sitting: playing it safe by answering in the middle of the scale, especially when survey items are controversial
socially desirable responding: giving answers that make the respondents look better than they really are
observational research: when a researcher watches people or animals and systematically records how they behave or what they are doing
could be a basis for frequency claims
observer bias: when observers' expectations influence their observations
observer effects (expectancy effects): when observers inadvertently change the behavior of those they are observing so that it matches their expectations; this can occur even in seemingly objective observations
masked design (blind design): the observers are unaware of the conditions to which participants have been assigned; in a double-blind design, neither the participants nor the observers know who is receiving the experimental treatment
reactivity: change in behavior when study participants know another person is watching