Linda Suskie

  A Common Sense Approach to Assessment in Higher Education


Rubrics: Not too broad, not too narrow

Posted on April 3, 2016 at 6:50 AM

Last fall I drafted a chapter, “Rubric Development,” for the forthcoming second edition of the Handbook on Measurement, Assessment, and Evaluation in Higher Education. My literature review for the chapter was an eye-opener! I’ve been joking that everything I had been saying about rubrics was wrong. Not quite, of course!


One of the many things I learned is that what a rubric assesses varies according to the decisions it informs, falling on a continuum from narrow to broad uses.


Task-specific rubrics, at the narrow end, are used to assess or grade one assignment, such as an exam question. They are so specific that they apply only to that one assignment. Because their specificity may give away the correct response, they cannot be shared with students in advance.


Primary trait scoring guides or primary trait analysis are used to assess a family of tasks rather than one specific task. Primary trait analysis recognizes that the essential or primary traits or characteristics of a successful outcome such as writing vary by type of assignment. The most important writing traits of a science lab report, for example, are different from those of a persuasive essay. Primary trait scoring guides focus attention only on those traits that are relevant to the particular task at hand.


General rubrics are used with a variety of assignments. They list traits that are generic to a learning outcome and are thus independent of topic, purpose, or audience.


Developmental rubrics or meta-rubrics are used to show growth or progression over time. They are general rubrics whose performance levels cover a wide span of performance. The VALUE rubrics are examples of developmental rubrics.


The lightbulb that came on for me as I read about this continuum is that rubrics toward the middle of the continuum may be more useful than those at either end. Susan Brookhart has written powerfully about avoiding task-specific rubrics: “If the rubrics are the same each time a student does the same kind of work, the student will learn general qualities of good essay writing, problem solving, and so on… The general approach encourages students to think about building up general knowledge and skills rather than thinking about school learning in terms of getting individual assignments done.”


At the other end of the spectrum, developmental rubrics have a necessary lack of precision that can make them difficult to interpret and act upon. In particular, they're inappropriate for assessing student growth within any one course.


Overall, I’ve concluded that one institution-wide developmental rubric may not be the best way to assess student learning, even of generic skills such as writing or critical thinking. As Barbara Walvoord has noted, “You do not need institution-wide rubric scores to satisfy accreditors or to get actionable information about student writing institution-wide.” Instead of using one institution-wide developmental rubric to assess student work, I’m now advocating using that rubric as a framework from which to build a family of related analytic rubrics: some for first year work, some for senior capstones, some for disciplines or families of disciplines such as the natural sciences, engineering, and humanities. Results from all these rubrics are aggregated qualitatively rather than quantitatively, by looking for patterns across rubrics. Yes, this approach is a little messier than using just one rubric, but it’s a whole lot more meaningful.

Categories: Rubrics, How to assess



Reply Glen Rogers
6:52 PM on May 1, 2016 
I agree with the messier approach you suggest. It is not new, but does involve a larger change in how we as assessment practitioners, together with a whole faculty, think about what we are doing in assessment, namely integrating assessment of student learning outcomes into classroom learning across the curriculum.
I wrote about how Alverno College rated such a system of assessment in an article entitled "Measurement and judgment in curriculum assessment systems", (Assessment Update, 1994, vol. 6). Since then, I have spent considerable time assisting institutions with taking on this kind of approach to fostering and ensuring student learning outcomes.

Alverno's approach has been influential and appreciated by faculty, though it has required an intense focus on student learning and has been less influential where research as a scholarship of discovery is a major faculty commitment.
Reply Mary Herrinton-Perry
9:46 AM on April 4, 2016 
Hi, Linda!

I am working on a rubric to assess my assessment plans. It certainly is a PTA rubric, but it also includes guidance meant to help the report writers move from novice to expert.

Just curious--what would you call this one?