PR

Cards (49)

  • Sampling
    The process of systematically selecting individuals, units, or groups to be analyzed during the conduct of a study
  • Generalizability
    The extent to which your findings can be applied to other contexts
  • Sampling errors
    Differences between what is present in a population and what is present in a sample
  • Determining sample size
    1. Use a formula (Slovin's formula)
    2. Use a sample size table
    3. Use heuristics
    4. Conduct a literature review
  • Using Slovin's formula, find the sample size needed for a survey of soft drink preferences in a population of 1,000 people
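A minimal sketch of this computation, assuming a 5% margin of error (e = 0.05) since the card does not specify one; Slovin's formula is n = N / (1 + N·e²).

```python
# Slovin's formula: n = N / (1 + N * e^2)
# Assumption: margin of error e = 0.05, since the card does not state one.
import math

N = 1000   # population size from the card
e = 0.05   # assumed margin of error

n = N / (1 + N * e ** 2)   # 1000 / (1 + 1000 * 0.0025) = 285.71...
print(math.ceil(n))        # 286 respondents (rounded up)
```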
  • Compute sample sizes for the following populations using Slovin's formula (see the sketch after this list)
    1. 48
    2. 98
    3. 275
    4. 893
    5. 1082
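The same formula applied to each population in the list, again assuming e = 0.05 because no margin of error is given; rounding up is a common convention so the sample is never smaller than the formula requires.

```python
# Slovin's formula applied to each population from the card.
# Assumption: margin of error e = 0.05 in every case.
import math

populations = [48, 98, 275, 893, 1082]
e = 0.05

for N in populations:
    n = math.ceil(N / (1 + N * e ** 2))
    print(f"N = {N:>4}  ->  n = {n}")   # prints 43, 79, 163, 277, 293
```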
  • Probability sampling
    Every member of the population has an equal chance of being selected
  • Probability sampling procedures
    • Simple random sampling
    • Systematic sampling
    • Stratified random sampling
    • Cluster sampling
    • Multi-stage sampling
  • Validity
    Are you actually measuring what you want to measure?
  • Face validity
    The test appears to measure the variables being studied
  • Face validity
    • A mathematical test consisting of problems in which the test taker has to add and subtract numbers
    • A questionnaire on algebra proficiency would lack face validity if it had unsuitable content, irrelevant questions, unclear language, or inconsistent formatting
  • Content validity
    Assessment of whether a test (its items/questions) is representative of all aspects of the construct (no aspect missing)
  • Content validity
    • An English proficiency test with questions covering listening, reading, writing, and speaking
    • A questionnaire gains better content validity when all of its items are relevant to the construct it aims to measure
  • Criterion validity
    The extent to which scores on an inventory or scale correlate with external, non-test criteria
  • Criterion validity
    • A new English writing ability test is compared to an existing valid test of English writing ability
    • Pre-employment tests are validated by correlating test scores with future career success
  • Construct validity
    The instrument is able to detect what should exist theoretically
  • Measuring construct validity
    Group comparison - identify two groups, one experiencing the construct and one not, then run a t-test to see if there is a significant difference
  • Reliability
    The extent to which a scale produces consistent results if the measurements are repeated a number of times
  • Types of reliability
    • Test-retest reliability
    • Interrater reliability
    • Parallel forms reliability
    • Internal consistency reliability
  • Ideation
    The formation of ideas or concepts
  • Construct validity
    • Should be demonstrated empirically
  • Types of validity
    • Face validity
    • Content validity
    • Construct validity
    • Criterion validity
  • Measuring construct validity (group comparison)
    1. Identify two groups, one expected to experience the construct and one not
    2. Administer the test to both groups
    3. Run a t-test for two independent samples
    4. Check for a significant difference between the two groups; a significant difference supports construct validity (see the sketch below)
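A minimal sketch of the group-comparison check using scipy.stats.ttest_ind; the two score lists are made-up illustrative data, not values from the deck.

```python
# Construct validity via group comparison: scores from a group expected to
# exhibit the construct versus a group expected not to. Data are illustrative.
from scipy import stats

group_with_construct = [34, 38, 41, 36, 39, 42, 37, 40]
group_without_construct = [25, 28, 31, 27, 24, 30, 26, 29]

# Independent-samples t-test (two separate groups of participants)
t_stat, p_value = stats.ttest_ind(group_with_construct, group_without_construct)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant difference: evidence supporting construct validity")
else:
    print("No significant difference: construct validity not supported")
```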
  • Stages of stress
    • Alarm
    • Resistance
    • Exhaustion
  • Test-retest reliability
    Consistency of results over time
  • Interrater reliability
    Consistency of agreement among people
  • Parallel forms reliability
    Consistency of equivalent versions of a test
  • Internal consistency reliability
    Consistent correlation between items belonging to one construct
  • Methods of measuring internal consistency reliability (see the sketch after this list)
    • Split-half coefficient
    • Cronbach's alpha
    • Kuder-Richardson 20 or 21
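A minimal sketch of Cronbach's alpha computed directly from its formula, alpha = (k / (k − 1)) · (1 − sum of item variances / variance of total scores); the item-score matrix is made-up illustrative data.

```python
# Cronbach's alpha from an item-score matrix (rows = respondents, columns = items).
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
# The scores below are made-up illustrative data.
import numpy as np

scores = np.array([
    [4, 5, 4, 3],
    [3, 4, 3, 3],
    [5, 5, 4, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])  # 5 respondents x 4 items intended to measure one construct

k = scores.shape[1]
item_variances = scores.var(axis=0, ddof=1)      # sample variance of each item
total_variance = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores

alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.3f}")
```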
  • At the end of the discussion, the students should be able to:
    • Construct an instrument and establish its validity and reliability
    • Describe the intervention, if applicable
    • Plan the data collection and the data analysis
    • Present a written methodology
  • Instrument
    A measurement device or tool (survey, test, questionnaire, etc.)
  • Data collection procedures
    • Survey
    • Observation
    • Interview
  • Types of interviews
    • Structured
    • Nondirective
    • Unstructured
    • Focus
  • Steps in collecting data
    1. Select your sample
    2. Send advance notification letter
    3. Make initial contact
    4. Screen and obtain consent
    5. Collect data
  • Data analysis procedures
    • Descriptive analyses
    • Inferential analyses
  • Descriptive statistics
    • Frequency
    • Percentage
    • Rank
    • Central tendency (mean)
    • Variability (standard deviation)
  • Inferential statistics (see the sketch after this list)
    • Significant differences (t-test, ANOVA)
    • Chi-square
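A minimal sketch of the analyses named above using scipy and numpy; all data are made-up for illustration.

```python
# Descriptive and inferential analyses on made-up illustrative data.
import numpy as np
from scipy import stats

scores_a = [75, 80, 85, 78, 82]
scores_b = [70, 72, 76, 71, 74]
scores_c = [88, 84, 90, 86, 87]

# Descriptive statistics: central tendency and variability
print("mean =", np.mean(scores_a), " sd =", np.std(scores_a, ddof=1))

# t-test: significant difference between two groups
print(stats.ttest_ind(scores_a, scores_b))

# One-way ANOVA: differences among three or more groups
print(stats.f_oneway(scores_a, scores_b, scores_c))

# Chi-square test of independence on a table of observed frequencies
observed = np.array([[30, 10],
                     [20, 25]])
print(stats.chi2_contingency(observed))
```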