Linda Suskie

A Common Sense Approach to Assessment in Higher Education


Blog

Rubrics: Not too broad, not too narrow

Posted on April 3, 2016 at 6:50 AM

Last fall I drafted a chapter, “Rubric Development,” for the forthcoming second edition of the Handbook on Measurement, Assessment, and Evaluation in Higher Education. My literature review for the chapter was an eye-opener! I’ve been joking that everything I had been saying about rubrics was wrong. Not quite, of course!


One of the many things I learned is that what rubrics assess varies according to the decisions they inform, falling along a continuum from narrow to broad uses.


Task-specific rubrics, at the narrow end, are used to assess or grade one assignment, such as an exam question. They are so specific that they apply only to that one assignment. Because their specificity may give away the correct response, they cannot be shared with students in advance.


Primary trait scoring guides or primary trait analysis are used to assess a family of tasks rather than one specific task. Primary trait analysis recognizes that the essential or primary traits of a successful outcome such as writing vary by type of assignment. The most important writing traits of a science lab report, for example, are different from those of a persuasive essay. Primary trait scoring guides therefore focus attention only on the traits relevant to the task at hand.


General rubrics are used with a variety of assignments. They list traits that are generic to a learning outcome and are thus independent of topic, purpose, or audience.


Developmental rubrics or meta-rubrics are used to show growth or progression over time. They are general rubrics whose performance levels cover a wide span of performance. The VALUE rubrics are examples of developmental rubrics.


The lightbulb that came on for me as I read about this continuum is that rubrics toward the middle of the continuum may be more useful than those at either end. Susan Brookhart has written powerfully about avoiding task-specific rubrics: “If the rubrics are the same each time a student does the same kind of work, the student will learn general qualities of good essay writing, problem solving, and so on… The general approach encourages students to think about building up general knowledge and skills rather than thinking about school learning in terms of getting individual assignments done.”


At the other end of the spectrum, developmental rubrics have a necessary lack of precision that can make them difficult to interpret and act upon. In particular, they're ill-suited to assessing student growth within any one course.


Overall, I’ve concluded that one institution-wide developmental rubric may not be the best way to assess student learning, even of generic skills such as writing or critical thinking. As Barbara Walvoord has noted, “You do not need institution-wide rubric scores to satisfy accreditors or to get actionable information about student writing institution-wide.” Instead of using one institution-wide developmental rubric to assess student work, I’m now advocating using that rubric as a framework from which to build a family of related analytic rubrics: some for first-year work, some for senior capstones, some for disciplines or families of disciplines such as the natural sciences, engineering, and the humanities. Results from all these rubrics can then be aggregated qualitatively rather than quantitatively, by looking for patterns across rubrics. Yes, this approach is a little messier than using just one rubric, but it’s a whole lot more meaningful.

Categories: Rubrics, How to assess