Posted on November 8, 2017 at 10:05 AM
I was struck by Nicholas Kristof’s November 6 New York Times article, How to Reduce Shootings. No, I’m not talking here about the politics of the issue, and I’m not writing this blog post to advocate any stance on the issue. What struck me—and what’s relevant to assessment—is how effectively Kristof and his colleagues brought together and compellingly presented a variety of data.
Here are some of the lessons from Kristof’s article that we can apply to assessment reports.
Focus on using the results rather than sharing the results, starting with the report title. Kristof could have titled his piece something like, “What We Know About Gun Violence,” just as many assessment reports are titled something like, “What We’ve Learned About Student Achievement of Learning Outcomes.” But Kristof wants this information used, not just shared, and so do (or should) we. Focus both the title and content of your assessment report on moving from talk to practical, concrete responses to your assessment results.
Focus on what you’ve learned from your assessments rather than on the assessments themselves. Every subheading in Kristof’s article states a conclusion drawn from his evidence. There’s no “Summary of Results” heading like the ones we see in so many assessment reports. Include subheadings in your report that will entice everyone to keep reading.
Go heavy on visuals, light on text. My estimate is that about half the article is visuals, half text. This makes the report a fast read, with points literally jumping out at us.
Go for graphs and other visuals rather than tables of data. Every single set of data in Kristof’s report is accompanied by graphs or other visuals that immediately let us see his point.
Order results from highest to lowest. There’s no law that says you must present the results for rubric criteria or a survey rating scale in their original order. Ordering results from highest to lowest—especially when accompanied by a bar graph—lets the big point literally pop out at the reader.
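As a minimal sketch of this tip: suppose you have mean rubric scores for a few criteria (the criterion names and numbers here are invented for illustration). Sorting them from highest to lowest before building the bar graph turns an arbitrary list into a ranking:

```python
# Hypothetical rubric results: criterion -> mean score on a 4-point scale
results = {
    "Thesis": 3.4,
    "Evidence": 2.1,
    "Organization": 3.0,
    "Citations": 1.8,
}

# Sort from highest to lowest before charting, so the bar graph
# reads as a ranking rather than the rubric's original order.
ranked = sorted(results.items(), key=lambda kv: kv[1], reverse=True)

for criterion, score in ranked:
    print(f"{criterion:<14}{score:.1f}")
```

The sorted pairs can then be fed directly to whatever charting tool you use (a spreadsheet, matplotlib, and so on); the strongest and weakest areas are obvious before the chart is even drawn.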
Use color to help drive home key points. Look at the section titled “Fewer Guns = Fewer Deaths” and see how adding just one color drives home the point of the graphics. I encourage what I call traffic light color-coding, with green for good news and red for results that, um, need attention.
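The traffic-light coding can even be automated. Here is a small sketch; the thresholds are purely illustrative, and you would set your own cutoffs based on your rubric and standards:

```python
def traffic_light(score, good=3.0, warn=2.5):
    """Map a mean rubric score to a traffic-light color.

    Thresholds are hypothetical: at or above `good` is green (good news),
    at or above `warn` is yellow (watch), anything lower is red
    (results that, um, need attention).
    """
    if score >= good:
        return "green"
    if score >= warn:
        return "yellow"
    return "red"

# Color-code a few invented criterion means
for criterion, score in [("Thesis", 3.4), ("Organization", 2.7), ("Citations", 1.8)]:
    print(criterion, traffic_light(score))
```

Applying one consistent rule like this keeps the coloring honest: readers learn that green always means the same thing across every chart in the report.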
Pull together disparate data on student learning. Kristof and his colleagues pulled together data from a wide variety of sources. The visual of public opinions on guns, toward the end of the article, brings together results from a variety of polls into one visual. Yes, the polls may not be strictly comparable, but Kristof acknowledges their sources. And the idea (that should be) behind assessment is not to make perfect decisions based on perfect data but to make somewhat better decisions based on somewhat better information than we would make without assessment evidence. So if, say, you’re assessing information literacy skills, pull together not only rubric results but relevant questions from surveys like NSSE, students’ written reflections, and maybe even relevant questions from student evaluations of teaching (anonymous and aggregated across faculty, obviously).
Breakouts can add insight, if used judiciously. I’m firmly opposed to inappropriate comparisons across student cohorts (of course humanities students will have weaker math skills than STEM students). But the state-by-state comparisons that Kristof provides help make the case for concrete steps that might be taken. Appropriate, relevant, meaningful comparisons can similarly help us understand assessment results and figure out what to do.
Get students involved. I don’t have the expertise to easily generate many of the visuals in Kristof’s article, but many of today’s students do, or they’re learning how in a graphic design course. Creating these kinds of visuals would make a great class project. But why stop student involvement there? Just as Kristof intends his article to be discussed and used by just about anyone, write your assessment report so it can be used to engage students as well as faculty and staff in the conversation about what’s going on with student learning and what action steps might be appropriate and feasible.
Distinguish between annual updates and periodic mega-reviews. Few of us have the resources to generate a report of Kristof’s scale annually—and in many cases our assessment results don’t call for this, especially when the results indicate that students are generally learning what we want them to. But this kind of report would be very helpful when results are, um, disappointing, or when a program is undergoing periodic program review, or when an accreditation review is coming up. Flexibility is the key here. Rather than mandating a particular report format for everyone, match the scope of the report to the scope of issues uncovered by assessment evidence.
Categories: Sharing & using results