Linda Suskie

  A Common Sense Approach to Assessment & Accreditation


What can an article on gun control tell us about creating good assessment reports?

Posted on November 8, 2017 at 10:05 AM

I was struck by Nicholas Kristof’s November 6 New York Times article, “How to Reduce Shootings.” No, I’m not talking here about the politics of the issue, and I’m not writing this blog post to advocate any stance on it. What struck me—and what’s relevant to assessment—is how effectively Kristof and his colleagues brought together and compellingly presented a variety of data.


Here are some of the lessons from Kristof’s article that we can apply to assessment reports.


Focus on using the results rather than sharing the results, starting with the report title. Kristof could have titled his piece something like, “What We Know About Gun Violence,” just as many assessment reports are titled something like, “What We’ve Learned About Student Achievement of Learning Outcomes.” But Kristof wants this information used, not just shared, and so do (or should) we. Focus both the title and content of your assessment report on moving from talk to practical, concrete responses to your assessment results.


Focus on what you’ve learned from your assessments rather than the assessments themselves. Every subheading in Kristof’s article states a conclusion drawn from his evidence. There’s no “Summary of Results” heading like the ones we see in so many assessment reports. Include in your report subheadings that will entice everyone to keep reading.


Go heavy on visuals, light on text. My estimate is that about half the article is visuals, half text. This makes the article a fast read, with points literally jumping out at us.


Go for graphs and other visuals rather than tables of data. Every single set of data in Kristof’s article is accompanied by graphs or other visuals that immediately let us see his point.


Order results from highest to lowest. There’s no law that says you must present the results for rubric criteria or a survey rating scale in their original order. Ordering results from highest to lowest—especially when accompanied by a bar graph—lets the big point literally pop out at the reader.
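
If you (or a tech-savvy colleague) build your report graphics in Python, here's a minimal sketch of the idea using matplotlib. The criteria and percentages below are made-up placeholders, not real results.

```python
# Minimal sketch (hypothetical data): sort rubric results from highest to
# lowest, then plot them as a horizontal bar graph so the pattern pops out.
import matplotlib.pyplot as plt

# Hypothetical percentages of students meeting each rubric criterion
results = {
    "Thesis & focus": 82,
    "Organization": 74,
    "Use of evidence": 61,
    "Citation accuracy": 48,
}

# Order from highest to lowest
ordered = sorted(results.items(), key=lambda item: item[1], reverse=True)
labels, values = zip(*ordered)

fig, ax = plt.subplots()
ax.barh(labels, values)
ax.invert_yaxis()  # put the highest result at the top
ax.set_xlabel("% of students meeting the criterion")
ax.set_title("Rubric results, ordered from highest to lowest")
plt.tight_layout()
plt.show()
```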


Use color to help drive home key points. Look at the section titled “Fewer Guns = Fewer Deaths” and see how adding just one color drives home the point of the graphics. I encourage what I call traffic light color-coding, with green for good news and red for results that, um, need attention.
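
Here is the same kind of sketch with traffic-light color-coding added: green for results at or above a target, red for results that need attention. Again, the numbers and the 70% target are hypothetical, purely for illustration.

```python
# Minimal sketch (hypothetical data and target): traffic-light color-coding
# on a bar graph of rubric results.
import matplotlib.pyplot as plt

results = {
    "Thesis & focus": 82,
    "Organization": 74,
    "Use of evidence": 61,
    "Citation accuracy": 48,
}
target = 70  # hypothetical benchmark

labels = list(results.keys())
values = list(results.values())
colors = ["green" if v >= target else "red" for v in values]

fig, ax = plt.subplots()
ax.barh(labels, values, color=colors)
ax.axvline(target, linestyle="--", color="gray", label=f"Target: {target}%")
ax.set_xlabel("% of students meeting the criterion")
ax.legend()
plt.tight_layout()
plt.show()
```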


Pull together disparate data on student learning. Kristof and his colleagues pulled together data from a wide variety of sources. The graphic on public opinion about guns, toward the end of the article, brings together results from a variety of polls into one visual. Yes, the polls may not be strictly comparable, but Kristof acknowledges their sources. And the idea (that should be) behind assessment is not to make perfect decisions based on perfect data but to make somewhat better decisions, based on somewhat better information, than we would make without assessment evidence. So if, say, you’re assessing information literacy skills, pull together not only rubric results but relevant items from surveys like NSSE, students’ written reflections, and maybe even relevant questions from student evaluations of teaching (anonymous and aggregated across faculty, obviously).
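
If some of that evidence already lives in spreadsheets, a few lines of pandas can pull it into one summary table. This is only a sketch of the shape of the idea; every source name and number below is a hypothetical placeholder.

```python
# Minimal sketch (hypothetical sources and numbers): gather disparate
# evidence about one learning outcome into a single summary table.
import pandas as pd

evidence = pd.DataFrame([
    {"source": "Capstone rubric",     "measure": "% meeting 'evaluates sources' criterion",       "result": "68%"},
    {"source": "NSSE item",           "measure": "% who often evaluated source quality",          "result": "55%"},
    {"source": "Written reflections", "measure": "% describing difficulty judging sources",       "result": "40%"},
    {"source": "Course evaluations",  "measure": "Mean rating: library instruction helped (1-5)", "result": "4.1"},
])

# One table, many sources: imperfect evidence, but more informative
# than any single measure on its own.
print(evidence.to_string(index=False))
```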


Breakouts can add insight, if used judiciously. I’m firmly opposed to inappropriate comparisons across student cohorts (of course humanities students will have weaker math skills than STEM students). But the state-by-state comparisons that Kristof provides help make the case for concrete steps that might be taken. Appropriate, relevant, meaningful comparisons can similarly help us understand assessment results and figure out what to do.


Get students involved. I don’t have the expertise to easily generate many of the visuals in Kristof’s article, but many of today’s students do, or they’re learning how in a graphic design course. Creating these kinds of visuals would make a great class project. But why stop student involvement there? Just as Kristof intends his article to be discussed and used by just about anyone, write your assessment report so it can be used to engage students as well as faculty and staff in the conversation about what’s going on with student learning and what action steps might be appropriate and feasible.


Distinguish between annual updates and periodic mega-reviews. Few of us have the resources to generate a report of Kristof’s scale annually—and in many cases our assessment results don’t call for this, especially when the results indicate that students are generally learning what we want them to. But this kind of report would be very helpful when results are, um, disappointing, or when a program is undergoing periodic program review, or when an accreditation review is coming up. Flexibility is the key here. Rather than mandate a particular report format for everyone, match the scope of the report to the scope of issues uncovered by assessment evidence.

Categories: Practical Tips


6 Comments

Linda Suskie (Owner)
6:06 AM on November 15, 2017 
What a great resource, Marlene! Thank you for sharing!
Marlene Clapp
2:59 PM on November 9, 2017 
I recently read the book, Creating a Data-Informed Culture in Community Colleges. I actually work for a Master's-level institution, but many of the tips offered in the book are useful to any higher ed institution. It's made me rethink how to approach the sharing of assessment findings and other institutional data.
Linda Suskie (Owner)
6:18 AM on November 9, 2017 
Karen, thank you so much for your kind words!
Linda Suskie (Owner)
6:17 AM on November 9, 2017 
Thank you for your thoughts, Camellia. Of course there's a lot happening in journalism these days that wouldn't be good models for our assessment reports. But this one article struck me as a "teachable moment."
Camellia Moses Okpodu
12:06 PM on November 8, 2017 
Thank you for this wonderful post. I have been noticing a trend in reporting. They are posting headline titles to clearly get a response and/or coverage, but rarely have data to support their conclusions. Yesterday, I was struck by an article that alleged that a person had made a statement, which if you took time to read the article the person had not said what was used as the headline title, but someone else had. I think we have dangerously crossed over to the 15 sec sound bite. We look to skew the data to our favor. The data is not driving the question. The questions are driving the data. Thanks for this blog. It is one that I will continue to think about as I work with my colleagues in Assessment.
Karen DiGiacomo
11:12 AM on November 8, 2017 
Another great blog, Linda. Our institution is still stuck in tables instead of graphs. Just that one change could make assessment data more accessible and understandable. I don't always comment on your blogs, but I always read them and find them oh, so valuable. Thank you.