finals

Cards (117)

  • The assessment of student learning starts with the institution’s vision, mission and core values, with a clear statement on the kinds of learning that the institution values most for its students.
  • Assessment works best when the program has a clear statement of objectives aligned with the institutional vision, mission and core values, ensuring clear, shared, and implementable objectives.
  • Outcome-based assessment focuses on the student activities that will still be relevant after formal schooling concludes; assessment activities should be observable rather than abstract, e.g., “to determine the student’s ability to write a paragraph” is more observable than “to determine the student’s verbal ability”.
  • Assessment in learning involves comparing, relating cause and effect, justifying, summarizing, generalizing, inferring, classifying, applying, analyzing, evaluating, and creating.
  • Essays are non-objective tests; they allow assessment of higher-order thinking skills and require students to organize their thoughts on the subject matter in coherent sentences.
  • Rules for completion-type items: (1) avoid over-mutilated sentences; give enough clues; (2) avoid open-ended items; there should be only one acceptable answer; (3) the blank should be at or near the end of the sentence; (4) ask questions on significant matter, not on trivial details; (5) the length of the blanks must not suggest the answer; make the blanks uniform in size.
  • Assessment requires attention not only to outcomes but also, and equally, to the activities and experiences that lead to the attainment of learning outcomes; these are the supporting student activities.
  • Assessment works best when it is continuous and ongoing, not episodic; it should be cumulative, because improvement is best achieved through a linked series of activities done over time in an instructional cycle.
  • The index of difficulty is calculated by the formula p = (RU + RL) / T × 100, where RU and RL are the numbers of correct answers in the upper and lower groups and T is the total number of students (see the item-analysis sketch after this list).
  • The standard deviation, the square root of the variance, is one of the most frequently used statistics.
  • The discrimination index is calculated by the formula D = DU − DL (equivalently PH − PL, as proportions), rounded off to the nearest tenth.
  • The index of discrimination is calculated by the formula D = (RU − RL) / (½ × T).
  • The difficulty index of a group (DU or DL) is calculated as the number of students in the group with the correct answer divided by the total number of students in that group (upper or lower).
  • Sampling is done mostly for ease and to reduce cost.
  • There are formulas for calculating the number of items needed and the percentage of items needed.
  • The overall difficulty index of an item is calculated as (PH + PL) / 2, the average of the upper-group and lower-group proportions of correct answers.
  • Begin assessment by specifying clearly and exactly what you want to assess; what you want to assess is stated in your learning outcomes/lesson objectives.
  • The range is the simplest measure of variability; it is the highest score minus the lowest score (see the descriptive-statistics sketch after this list).
  • The median is the middle score for a set of scores arranged from lowest to highest.
  • Measures of dispersion or variability indicate how spread out a group of scores is or how varied the scores are.
  • Central tendency in a distribution refers to the center of the distribution.
  • Reliability of an instrument is the consistency of the scores it yields; it can be estimated with the split-half method or a Kuder-Richardson formula (see the reliability sketch after this list).
  • Variance is a measure of variability; it is the average squared difference of the scores from the mean.
  • The intended learning outcome/lesson objective, NOT CONTENT, is the basis of the assessment task. You use content in developing the assessment tool and task, but it is the attainment of your learning outcome, not the content, that you want to assess; this is outcome-based teaching and learning.
  • Standard deviation is a measure of dispersion; it is the square root of variance.
  • A measure of central tendency is a single value that attempts to describe a set of data by identifying the central position within that set of data.
  • Predictive validity is when test scores in an instrument are correlated with scores on a later performance (criterion measure) of students.
  • The most popular and well-known measure of central tendency is the mean, which is also known as the average or arithmetic mean.
  • Construct-related evidence of validity refers to the nature of the psychological construct or characteristics being measured by the test.
  • Traditional assessment includes paper-and-pencil tests and is inadequate to measure all forms of learning.
  • Types of portfolio include the working, development, display, and assessment (or evaluation) portfolio.
  • Authentic assessment includes non-paper-and-pencil tests and is also known as alternative assessment.
  • A test is valid when it is aligned with the learning outcome.
  • Constructive alignment is based on the constructivist theory that learners use their own activity to construct their knowledge or other outcomes.
  • There are three types of evidence of validity: content-related, criterion-related, and construct-related (concurrent validity is one kind of criterion-related evidence).
  • Portfolio can be classified according to purpose.
  • Criterion-related evidence of validity refers to the relationship between scores obtained using the instrument and scores obtained using one or more other tests (criterion measures).
  • Concurrent validity is shown when test scores correlate highly with an external criterion measured at about the same time, such as national math exam scores compared with course grades in grade 12 math (see the validity sketch after this list).
  • Assessment tools for the cognitive domain (declarative knowledge) include selected-response formats (multiple-choice type, matching type) and constructed-response formats (completion type).
  • Validation and validity are used to determine the characteristics of the test as a whole.
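
A minimal sketch of the item-analysis formulas on the cards above (difficulty index p, group difficulty DU/DL, and discrimination index D), written in Python. All group sizes and counts of correct answers are made-up illustration values, not data from the deck.

```python
# Item analysis for one test item, following the card formulas above.
# RU / RL = number of correct answers in the upper / lower group; T = total students.
# All numbers below are made-up illustration data.

def difficulty_index(r_upper: int, r_lower: int, total: int) -> float:
    """p = (RU + RL) / T * 100 -- percent of students answering correctly."""
    return (r_upper + r_lower) / total * 100

def group_difficulty(correct_in_group: int, group_size: int) -> float:
    """DU or DL = correct answers in a group / number of students in that group."""
    return correct_in_group / group_size

def discrimination_index(r_upper: int, r_lower: int, total: int) -> float:
    """D = (RU - RL) / (T / 2) -- equals DU - DL when the groups are equal halves."""
    return (r_upper - r_lower) / (total / 2)

# Example: 40 students split into an upper half and a lower half of 20 each.
r_u, r_l, n = 18, 8, 40
du, dl = group_difficulty(r_u, n // 2), group_difficulty(r_l, n // 2)
print(f"p  = {difficulty_index(r_u, r_l, n):.1f}%")     # 65.0%
print(f"DU = {du:.2f}, DL = {dl:.2f}")                  # 0.90, 0.40
print(f"(PH + PL) / 2 = {(du + dl) / 2:.2f}")           # 0.65, matches p / 100
print(f"D  = {discrimination_index(r_u, r_l, n):.2f}")  # 0.50, equals DU - DL
```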
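
A minimal sketch of the descriptive-statistics cards (mean, median, range, variance, standard deviation) using Python's standard statistics module; the score list is invented for illustration. The population forms pvariance/pstdev are used because the cards define variance as the average squared difference from the mean.

```python
# Descriptive statistics matching the cards above, via Python's statistics module.
# The score list is made-up illustration data.
import statistics

scores = [70, 75, 80, 85, 90, 95, 100]

mean = statistics.mean(scores)            # arithmetic mean (the "average"): 85
median = statistics.median(scores)        # middle score of the ordered set: 85
score_range = max(scores) - min(scores)   # highest minus lowest score: 30
variance = statistics.pvariance(scores)   # mean squared deviation from the mean: 100
std_dev = statistics.pstdev(scores)       # square root of the variance: 10

print(mean, median, score_range, variance, std_dev)
```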
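
A minimal split-half reliability sketch, assuming Python 3.10+ for statistics.correlation. The Spearman-Brown step-up applied at the end is standard practice for split-half estimates but is not stated on the card; the response matrix is invented for illustration.

```python
# Split-half reliability: split the items into odd- and even-numbered halves,
# correlate the two half scores (Pearson r), then step the result up to
# full-test length with the Spearman-Brown formula.
# The 0/1 response matrix is made-up illustration data.
import statistics

responses = [          # rows = students, columns = items (1 = correct, 0 = wrong)
    [1, 1, 0, 1, 1, 0],
    [1, 0, 1, 1, 0, 0],
    [0, 1, 1, 0, 1, 1],
    [1, 1, 1, 1, 1, 0],
    [0, 0, 1, 0, 0, 1],
]

odd = [sum(row[0::2]) for row in responses]    # each student's score on odd items
even = [sum(row[1::2]) for row in responses]   # each student's score on even items

r_half = statistics.correlation(odd, even)     # consistency between the two halves
r_full = 2 * r_half / (1 + r_half)             # Spearman-Brown step-up
print(f"half-test r = {r_half:.2f}, estimated full-test reliability = {r_full:.2f}")
```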
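
A minimal sketch of criterion-related validity (predictive or concurrent) as a Pearson correlation between instrument scores and a criterion measure, again assuming Python 3.10+ for statistics.correlation; both score lists are invented for illustration.

```python
# Criterion-related validity as a correlation between instrument scores and a
# criterion measure: predictive validity when the criterion comes later,
# concurrent validity when both are measured at about the same time.
# Both score lists are made-up illustration data.
import statistics

instrument_scores = [78, 85, 62, 90, 70, 88]  # e.g., scores on the test being validated
criterion_scores = [80, 88, 65, 93, 72, 85]   # e.g., grade 12 math course grades

validity_coefficient = statistics.correlation(instrument_scores, criterion_scores)
print(f"validity coefficient r = {validity_coefficient:.2f}")
```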