RESEARCH INSTRUMENT, VALIDITY AND RELIABILITY

  • Research instruments are the basic tools researchers use to gather data for specific research problems.
  • Common instruments are performance tests, questionnaires, interviews, and observation checklists.
  • Adopting an instrument means using an instrument already utilized in previous related studies.
  • Modifying an existing instrument is done when the available instruments do not yield the exact data needed to answer the research problem.
  • a. Structured
    The conduct of questioning follows a particular sequence and has well-defined content.
  • b. Semi-structured
    There is a specific set of questions, but additional probes may be asked in an open-ended or closed-ended manner.
  • 1.     Questionnaires
    -        Structured. It provides possible answers and respondents just have to select from them.
    -        Unstructured. It does not provide options and the respondents are free to give whatever answer they want.
  • Questionnaires (type of questions):
    • Yes or No Type
    • Recognition Type
    • Completion Type
    • Coding Type
    • Subjective Type
    • Combination Type
  • According to Shelley (1984), the length of a questionnaire must be two to four pages, and the maximum time for answering is ten minutes. A desirable length for each question is fewer than 20 words.
  • Likert Scale This is the most common scale used in quantitative research. Respondents are asked to rate or rank statements according to the scale provided.
  • Semantic Differential In this scale, respondents rate a series of bipolar adjectives (e.g., good vs. bad). This scale is often considered more advantageous since it is more flexible and easier to construct.
  • Validity
    A research instrument is considered valid if it measures what it is supposed to measure.
  • Face Validity It is also known as “logical validity.” It calls for an intuitive judgment of the instrument based on how it “appears.” Just by looking at the instrument, the researcher decides whether it is valid.
  • Content Validity An instrument judged to have content validity meets the objectives of the study. This is done by checking whether the statements or questions elicit the needed information. Experts in the field of interest can also identify specific elements that should be measured by the instrument.
  • Construct Validity It refers to the validity of the instrument as it corresponds to the theoretical construct of the study; it concerns whether a specific measure relates to other measures.
  • Concurrent Validity When the instrument can produce results similar to those of comparable tests that have already been validated, it has concurrent validity.
  • Predictive Validity When the instrument can predict results on similar tests that will be employed in the future, it has predictive validity. This is particularly useful for aptitude tests.
  • Reliability refers to the consistency of the measures or results of the instrument.
  • Equivalent Forms Reliability It is established by administering two tests that are identical except for wording to the same group of respondents.
  • Internal Consistency Reliability It determines how well the items measure the same construct. It is reasonable to expect that a respondent who gets a high score on one item will also get a high score on similar items.
  • There are three ways to measure internal consistency: the split-half coefficient, Cronbach’s alpha, and the Kuder-Richardson formula (a brief computation sketch follows below).
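To make this concrete, here is a minimal sketch of one of the three coefficients, Cronbach’s alpha, computed with NumPy. The respondents-by-items score matrix and all names are hypothetical illustrations rather than data from the source; the split-half coefficient and the Kuder-Richardson formula would likewise be computed from this kind of score matrix.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item across respondents
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the respondents' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses of five respondents to four 5-point Likert items
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```

Values closer to 1 indicate that the items measure the same construct more consistently.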
  • Test-retest Reliability The same test is administered to the same group of respondents twice; the consistency between the two sets of scores indicates reliability (see the sketch below).
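As a companion sketch, test-retest reliability is commonly summarized as the correlation between the two administrations. The scores below are hypothetical, and SciPy’s Pearson correlation is just one common choice of coefficient.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical total scores of the same six respondents on two administrations of one test
first_administration = np.array([20, 15, 18, 22, 17, 19])
second_administration = np.array([21, 14, 17, 23, 18, 20])

# Correlate the two administrations; higher r means more consistent results over time
r, p_value = pearsonr(first_administration, second_administration)
print(f"Test-retest reliability (Pearson r) = {r:.2f}")
```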