experimental psychology 2

Cards (76)

  • Experimental Hypothesis In the simplest experiments, an experimental hypothesis states a potential relationship between two variables: If A occurs, then we expect B to follow.
  • Independent Variable (IV)
    • It is the dimension that the experimenter intentionally manipulates; it is the antecedent the experimenter chooses to vary. This variable is “independent” in the sense that its values are created by the experimenter and are not affected by anything else that happens in the experiment. 
  • Experimental operational definitions

    Explain the precise meaning of the independent variables; describe exactly what was done to create the various treatment conditions of the experiment
  • Hypothetical construct
    Concepts used to explain unseen processes, such as hunger, intelligence, or learning; postulated to explain observable behavior
  • Operational definition

    Specifies the precise meaning of a variable within an experiment in terms of observable operations, procedures, and measurements
  • Measured operational definitions of the dependent variable
    Describe exactly what procedures we follow to assess the impact of different treatment conditions
  • Dependent Variable (DV)

    The particular behavior we expect to change because of our experimental treatment; it is the outcome we are trying to explain
  • Levels of Measurement
    • Nominal: Classifies items into distinct categories that can be named
    • Ordinal: Measures magnitude in ranks
    • Interval: Measures magnitude using measures with equal intervals between values
    • Ratio: Measures magnitude using measures with equal intervals between all values and a true zero point
  • Operational definition

    Called so because it clearly describes the operations involved in manipulating or measuring the variables in an experiment
  • Reliability
    • Means consistency and dependability. Good operational definitions are reliable: if we apply them in more than one experiment, they ought to work in similar ways each time.
  • Test-Retest Reliability
    • Reliability of measures can also be checked by comparing the scores of people who have been measured twice with the same instrument: they take the test once, then take it again after a reasonable interval.
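A computational note (not part of the original card): test-retest reliability is typically quantified as the Pearson correlation between the two administrations. A minimal sketch, using invented scores rather than data from any real study:

```python
# Sketch: test-retest reliability as the Pearson correlation between two
# administrations of the same instrument. All scores are invented.

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [12, 15, 11, 18, 14, 16]  # first administration
time2 = [13, 14, 10, 19, 15, 17]  # same subjects, after an interval

r = pearson_r(time1, time2)
print(f"test-retest reliability: r = {r:.2f}")
```

A correlation near 1 indicates that subjects kept roughly the same standing on the measure across the two administrations.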
  • Interrater Reliability 
    • One way to assess the reliability of measurement procedures is to have different observers take measurements of the same responses; the degree of agreement between their measurements indexes reliability.
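A computational note (not part of the original card): the simplest index of interrater reliability for categorical codings is percentage agreement. A minimal sketch with invented codings:

```python
# Sketch: interrater reliability as simple percentage agreement between
# two observers coding the same set of responses. All codings are invented.

rater_a = ["aggressive", "passive", "aggressive", "passive", "aggressive"]
rater_b = ["aggressive", "passive", "aggressive", "aggressive", "aggressive"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
agreement_rate = agreements / len(rater_a)
print(f"interrater agreement: {agreement_rate:.0%}")
```

For categorical codings, chance-corrected statistics such as Cohen’s kappa are often preferred over raw agreement, since two raters will agree some of the time purely by chance.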
  • Interitem Reliability 
    • It is the extent to which different parts of a questionnaire, test, or other instruments designed to assess the same variable attain consistent results. Scores on different items designed to measure the same construct should be highly correlated. 
  • Statistical tests for evaluating internal consistency
    • Cronbach’s alpha (α)
  • Interitem reliability
    The consistency of results across different items within a test
  • Split-half reliability
    Splitting the test into two halves at random and computing a coefficient of reliability between the scores obtained on the two halves
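A computational note (not part of the original card): the split-half procedure can be sketched as splitting the items at random, correlating the two half-scores, and then applying the Spearman-Brown correction to estimate full-test reliability. The item-response matrix below (rows = subjects, columns = items) is invented illustration data:

```python
import random

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented data: rows = subjects, columns = items on the same test.
responses = [
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [5, 5, 4, 5, 5, 4],
    [3, 2, 3, 3, 2, 3],
    [1, 2, 1, 1, 2, 1],
]

items = list(range(len(responses[0])))
random.seed(0)
random.shuffle(items)              # split the test into two halves at random
half1, half2 = items[:3], items[3:]

score1 = [sum(row[i] for i in half1) for row in responses]
score2 = [sum(row[i] for i in half2) for row in responses]

r_half = pearson_r(score1, score2)
r_full = 2 * r_half / (1 + r_half)  # Spearman-Brown correction
print(f"half correlation = {r_half:.2f}, corrected = {r_full:.2f}")
```

The Spearman-Brown step is needed because each half has only half as many items as the full test; the correction estimates what the reliability of the whole instrument would be.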
  • Cronbach’s alpha is the most widely used method for evaluating interitem reliability because it considers the correlation of each test item with every other item
  • Cronbach’s alpha
    A statistical test used to evaluate the internal consistency of the entire set of items by considering the correlation of each test item with every other item
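A computational note (not part of the original card): the alpha coefficient has a simple closed form, α = k/(k−1) · (1 − Σ item variances / variance of total scores), where k is the number of items. A minimal sketch with an invented response matrix:

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / total variance).

    `responses`: rows = subjects, columns = items. Population variance is
    used throughout; since alpha depends only on a ratio of variances, using
    sample variance consistently would give the same result.
    """
    k = len(responses[0])  # number of items
    item_vars = [pvariance(col) for col in zip(*responses)]
    total_var = pvariance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Invented data: rows = subjects, columns = items.
responses = [
    [4, 5, 4, 5],
    [2, 1, 2, 2],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
    [1, 2, 1, 1],
]

alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha = {alpha:.2f}")
```

Because the invented items all rank subjects in nearly the same order, alpha comes out high; uncorrelated items would drive it toward zero.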
  • Cronbach’s alpha
    Measuring split-half reliability for all possible ways a test could be split up into halves
  • The two halves of the test should correlate strongly
    Indicates that the items are measuring the same variable
  • Assessing interitem reliability
    • Two major approaches: split-half reliability and internal consistency (Cronbach’s alpha)
  • Manipulation Check
    • A check that the experimental manipulation actually worked as intended; provides evidence for the validity of an experimental procedure.
  • Validity
    • Refers to the principle of actually studying the variables that we intend to study.
  • Face validity
    The degree to which an operational definition appears, on its face, to capture the variable it is intended to represent. Validity of operational definitions is least likely to be a problem with variables that can be manipulated and measured fairly directly.
  • Content validity depends on whether we are taking a fair sample of the variable we intend to measure.
  • Concurrent validity is evaluated by comparing scores on the measuring instrument with another known standard for the variable being studied
  • Confounding is a situation when the value of an extraneous variable changes systematically across different conditions of an experiment
  • Internal validity is the degree to which a researcher can state a causal relationship between antecedent conditions and the subsequent observed behavior
  • Construct validity deals with the transition from theory to research application
  • Predictive validity is the degree to which a researcher can use procedures to predict future behavior or performance
  • Concurrent validity compares scores on the measuring instrument with an outside criterion, but concurrent validity is comparative, rather than predictive
  • Physical Variables
    • Aspects of the testing conditions that need to be controlled.
  • Social Variables
    • Qualities of the relations between subjects and experimenters that can influence results.
  • Elimination
    • To make sure that an extraneous variable does not affect an experiment, we sometimes take it out entirely: we eliminate it.
  • Constancy of Conditions
    • Keeps all aspects of the treatment conditions as nearly similar as possible. If we can’t eliminate an extraneous variable, we try to make sure that it stays the same in all treatment conditions.
  • Balancing
    • Used when neither elimination nor constancy is possible: distributing the effects of an extraneous variable across the different treatment conditions of the experiment.
  • Single-Blind Experiment
    • An experiment in which subjects do not know which treatment they’re getting.
  •  Martin Orne (1927-2000) is well known for his programmatic research on social variables in the experimental setting
  • Placebo Effect
    • Subjects expect an effect to occur
    • Subjects’ behavior changes
  • Experimenter Bias
    Experimenter does something that creates confounding in the experiment
  • Cover Stories
    Plausible but false explanation for the procedures used in the study
  • Placebo Effect
    Result of giving subjects a pill, injection, or other treatment that actually contains none of the independent variables; the treatment elicits a change in subjects’ behavior simply because subjects expect an effect to occur
  • Cover Stories
    Disguise the actual research hypothesis so that the subject will not guess what it is