|Posted on November 14, 2015 at 8:15 AM|
It’s actually impossible to determine whether any rubric, in isolation, is valid. Its validity depends on how it is used. What may look like a perfectly good rubric to assess critical thinking is invalid, for example, if used to assess assignments that ask only for descriptions. A rubric assessing writing mechanics is invalid for drawing conclusions about students’ critical thinking skills. A rubric assessing research skills is invalid if used to assess essays that students are given only 20 minutes to write.
A rubric is thus valid only if the entire assessment process—including the assignment given to students, the circumstances under which students complete the assignment, the rubric, the scoring procedure, and the use of the findings—is valid. Valid rubric assessment processes have seven characteristics. How well do your rubric assessment processes stack up?
Usability of the results. They yield results that can be and are used to make meaningful, substantive decisions to improve teaching and learning.
Match with intended learning outcomes. They use assignments and rubrics that systematically address meaningful intended learning outcomes.
Clarity. They use assignments and rubrics written in clear and observable terms, so they can be applied and interpreted consistently and equitably.
Fairness. They enable inferences that are meaningful, appropriate, and fair to all relevant subgroups of students.
Consistency. They yield consistent or reliable results, a characteristic that is affected by the clarity of the rubric’s traits and descriptions, the training of those who use it, and the degree of detail provided to students in the assignment.
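One common way to check a rubric's consistency is to have two raters score the same set of papers with it and measure their agreement. As a rough illustration (the rater scores below are invented, and the helper names are my own), percent agreement and chance-corrected agreement (Cohen's kappa) can be computed like this:

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Fraction of papers on which two raters gave the same rubric level."""
    assert len(r1) == len(r2)
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Agreement corrected for the agreement expected by chance."""
    n = len(r1)
    p_obs = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Expected chance agreement from each rater's marginal distribution
    p_exp = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical rubric levels (1-4) assigned by two raters to eight papers
rater_a = [3, 4, 2, 4, 3, 1, 4, 2]
rater_b = [3, 4, 2, 3, 3, 1, 4, 3]

print(percent_agreement(rater_a, rater_b))        # 0.75
print(round(cohens_kappa(rater_a, rater_b), 2))   # 0.66
```

Low agreement is usually a signal to sharpen the rubric's descriptions or retrain the raters, not merely to rescore.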
Appropriate range of outcome levels. The rubrics’ “floors” and “ceilings” are appropriate to the students being assessed. A rubric whose lowest level is beyond many students’ reach, or whose highest level is too easily attained, cannot distinguish among students at the extremes.
Generalizability. They enable you to draw overall conclusions about student achievement. The problem here is that any single assignment may not be a representative, generalizable sample of what students have learned. Any one essay question, for example, may elicit an unusually good or poor sample of a student’s writing skill. Increasing the quantity and variety of student work that is assessed, perhaps through portfolios, increases the generalizability of the findings.
Sources for these ideas are cited in my chapter, “Rubric Development,” in the forthcoming second edition of the Handbook on Measurement, Assessment, and Evaluation in Higher Education to be published by Taylor & Francis.