Linda Suskie

A Common Sense Approach to Assessment in Higher Education



Are our assessment processes broken?

Posted on April 25, 2016 at 11:30 AM

Wow…my response to Bob Shireman’s paper on how we assess student learning really touched a nerve. Typically about 50 people view my blog posts, but my response to him got close to 1000 views (yes, there’s no typo there). I’ve received a lot of feedback, some on the ASSESS listserv, some on LinkedIn, some on my blog page, and some in direct e-mails, and I’m grateful for all of it. I want to acknowledge in particular the especially thoughtful responses of David Dirlam, Dave Eubanks, Lion Gardiner, Joan Hawthorne, Jeremy Penn, Ephraim Schechter, Jane Souza, Claudia Stanny, Reuben Ternes, Carl Thompson, and Catherine Wehlburg.


The feedback I received reinforced my views on some major issues with how we now do assessment:


Accreditors don’t clearly define what constitutes acceptable assessment practice. Because of the diversity of institutions they accredit, regional accreditors are deliberately flexible. HLC, for example, says only that assessment processes should be “effective” and “reflect good practice,” while Middle States now says only that they should be “appropriate.” Most of the regionals offer training on assessment to both institutions and accreditation reviewers, but the training often doesn’t distinguish between best practice and acceptable practice. As a result, I heard stories of institutions getting dinged because, say, their learning outcomes didn’t start with action verbs or their rubrics used fuzzy terms, even though no regional requires that learning outcomes be expressed in a particular format or that rubrics be used at all.


And this leads to the next two major issues…


We in higher education—including government policymakers—don’t yet have a common vocabulary for assessment. This is understandable—higher ed assessment is still in its infancy, after all, and what makes this fun to me is that we all get to participate in developing that vocabulary. But right now terms such as “student achievement,” “student outcomes,” “learning goal,” and even “quantitative” and “qualitative” mean very different things to different people.


We in the higher ed assessment community have not yet come to consensus on what we consider acceptable, good, and best assessment practices. Some assessment practitioners, for example, think that assessment methods should be validated in the psychometric sense (with evidence of content and construct validity, for example), while others consider assessment to be a form of action research that needs only evidence of consequential validity (are the results of good enough quality to be used to inform significant decisions?). Some assessment practitioners think that faculty should be able to choose to focus on assessment “projects” that they find particularly interesting, while others think that, if you’ve established something as an important learning outcome, you should be finding out whether students have indeed learned it, regardless of whether or not it’s interesting to you.


Is all our assessment work making a difference? Assessment and accreditation share two key purposes: first, to ensure that our students are indeed learning what we want them to learn and, second, to make evidence-informed improvements in what we do, especially in the quality of teaching and learning. Too many of us—institutions and reviewers alike—are focusing too much on how we do assessment and not enough on its impact.


We’re focusing too much on assessment and not enough on teaching and curricula. While virtually all accreditors talk about teaching quality, for example, few expect faculty to use research-informed teaching methods, expect institutions to actively encourage experimentation with new teaching methods or curriculum designs, or expect institutions to invest significantly in professional development to help faculty improve their teaching.


What can we do about all of this? I have some ideas, but I’ll save them for my next blog post.

 
