Posted on June 10, 2018 at 8:45 AM
Architecture critic Kate Wagner recently said, “All buildings are interesting. There is not a single building that isn’t interesting in some way.” I think we can say the same thing about assessment: All assessment is interesting. There is not a single assessment that isn’t interesting in some way.
Kate points out that what makes seemingly humdrum buildings interesting are the questions we can ask about them—in other words, how we analyze them. She suggests a number of questions that can be easily adapted to assessment:
- How do these results compare to other assessment results? We can compare results against results for other students (at our institution or elsewhere), against results for other learning goals, against how students did when they entered (value-added), against past cohorts of students, or against an established standard. Each of these comparisons can be interesting. (See Chapter 22 of my book Assessing Student Learning for more information on perspectives for comparing results.)
- Are we satisfied with the results? Why or why not?
- What do these results say about our students at this time? Students, curricula, and teaching methods are rapidly changing, which makes them—and assessment—interesting. Assessment results are a piece of history: what students learned (and didn’t learn) at this time, in this setting.
- What does this assessment say about what we and our institution value? What does it say about the world in which we live?
Why do so many faculty and staff fail to find assessment interesting? I’ve alluded to a number of possible reasons in past blog posts (such as here and here), but let me throw out a few that I think are particularly relevant.
1. Sometimes assessment simply isn’t presented as something that’s supposed to be interesting. It’s a chore to get through accreditation, nothing more. Just as Kate felt obliged to point out that even humdrum buildings are interesting, sometimes faculty and staff need to be reminded that assessment should be designed to yield interesting results.
2. Sometimes faculty and staff aren’t particularly interested in the learning goal being assessed. If a faculty member focuses on basic conceptual understanding in her course, she’s not going to be particularly interested in the assessment of critical thinking that she’s obliged to do. Rethinking key learning goals, and helping faculty and staff rethink their curricula accordingly, can go a long way toward generating assessment results that faculty and staff find interesting.
3. Some faculty and staff find results mildly interesting, but not interesting enough to be worth all the time and effort that’s gone into generating them. A complex, time-consuming assessment whose results show that students are generally doing fine, and are not all that different from past years, is interesting but not compellingly so. The cost-benefit balance isn’t there. Here the key is to scale back less-interesting assessments—perhaps repeating them every two or three years just to make sure results aren’t slipping—and focus instead on assessments that faculty and staff will find more interesting and useful.
4. Some faculty and staff aren’t really that interested in teaching—they’re far more engaged with their research agenda. And some faculty and staff aren’t really that interested in improving their teaching. Institutional leaders can help here by rethinking incentives and rewards to encourage faculty and staff to try to improve their teaching.
Kate says, “All of us have the potential to be nimble interpreters of the world around us. All we need to do is look around.” Similarly, all of us have the potential to be nimble interpreters of evidence of student learning. All we need to do is apply the analytical skills we learned in college—and teach to our students—to find what’s interesting.
Categories: Good assessment