Posted on June 19, 2017 at 9:30 AM
Someone on the ASSESS listserv recently asked how to advise a faculty member who wanted to collect more assessment evidence before using it to try to make improvements in what he was doing in his classes. Here's my response, based on what I learned from a book called How to Measure Anything, which I discussed in my last blog post.
First, we think of doing assessment to help us make decisions (generally about improving teaching and learning). But think instead of doing assessment to help us make better decisions than we would make without it. Yes, faculty are always making informal decisions about changes to their teaching. Assessment should simply help them make somewhat better informed decisions.
Second, think about the risks of making the wrong decision. I'm going to assume, rightly or wrongly, that the professor is assessing student achievement of quantitative skills in a gen ed statistics course, and the results aren't great. There are five possible decision outcomes:
1. He decides to do nothing, and students in subsequent courses do just fine without any changes. (He was right; this was an off sample.)
2. He decides to do nothing, and students in subsequent courses continue to have, um, disappointing outcomes.
3. He changes things, and subsequent students do better because of his changes.
4. He changes things, but the changes don't help; despite his best efforts, outcomes remain disappointing.
5. He changes things, and subsequent students do better, but not because of his changes--they're simply better prepared than this year's students.
So the risk of doing nothing is getting Outcome 2 instead of Outcome 1: Yet another class of students doesn't learn what they need to learn. The consequence is that even more students run into trouble in later classes, on the job, wherever, until the eventual decision is made to make some changes.
The risk of changing things, meanwhile, is getting Outcome 4 or 5 instead of Outcome 3: He makes changes but they don't help. The consequence here is his wasted time and, possibly, wasted money, if his college invested in something like an online statistics tutoring module or gave him some released time to work on this.
The question then becomes, "Which is the worse consequence?" Normally I'd say the first consequence is worse: continuing to pass or graduate students with inadequate learning. If so, it makes sense to go ahead with changes even without a lot of evidence. But if the second consequence involves a sizable investment of time or resources, then it may make sense to wait for more corroborating evidence before making that investment.
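The trade-off above can be sketched as a simple expected-cost comparison, in the spirit of How to Measure Anything. All the numbers below are hypothetical, purely for illustration; the point is only that the decision flips when the cost of acting gets large relative to the cost of doing nothing:

```python
# Hypothetical inputs -- none of these values come from the post.
p_real = 0.7        # chance the disappointing results reflect a real problem (not an off sample)
p_helps = 0.6       # chance a change in teaching actually improves outcomes
cost_harm = 100.0   # cost of another cohort graduating with inadequate learning
cost_change = 20.0  # instructor time / resources invested in making a change

# Option A: do nothing -> the harm occurs whenever the problem is real
ev_do_nothing = p_real * cost_harm

# Option B: change now -> always pay the change cost; the harm remains
# only when the problem is real AND the change fails to help
ev_change = cost_change + p_real * (1 - p_helps) * cost_harm

print(f"Expected cost of doing nothing: {ev_do_nothing:.1f}")
print(f"Expected cost of changing now:  {ev_change:.1f}")
```

With these made-up numbers, changing now is cheaper in expectation (48 vs. 70). But raise cost_change to, say, 60 (a big curricular investment or released time), and waiting for more evidence comes out ahead, which is exactly the intuition in the paragraph above.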
One final thought: Charles Blaich and Kathleen Wise wrote a paper for NILOA a few years ago on their research, in which they noted that our tradition of scholarly research does not include a culture of using research. Think of the research papers you've read: they generally conclude either by suggesting how other people might use the research or by suggesting areas for further research. So sometimes the argument to wait and collect more data is simply a stalling tactic by people who don't want to change.