Linda Suskie

  A Common Sense Approach to Assessment & Accreditation

Are our assessment processes broken?

Posted on April 25, 2016 at 11:30 AM

Wow…my response to Bob Shireman’s paper on how we assess student learning really touched a nerve. Typically about 50 people view my blog posts, but my response to him got close to 1000 views (yes, there’s no typo there). I’ve received a lot of feedback, some on the ASSESS listserv, some on LinkedIn, some on my blog page, and some in direct e-mails, and I’m grateful for all of it. I want to acknowledge in particular the especially thoughtful responses of David Dirlam, Dave Eubanks, Lion Gardiner, Joan Hawthorne, Jeremy Penn, Ephraim Schechter, Jane Souza, Claudia Stanny, Reuben Ternes, Carl Thompson, and Catherine Wehlburg.


The feedback I received reinforced my views on some major issues with how we now do assessment:


Accreditors don’t clearly define what constitutes acceptable assessment practice. Because of the diversity of institutions they accredit, regional accreditors are deliberately flexible. HLC, for example, simply says that assessment processes should be “effective” and “reflect good practice,” while Middle States now simply says that they should be “appropriate.” Most of the regionals offer training on assessment to both institutions and accreditation reviewers, but the training often doesn’t distinguish between best practice and acceptable practice. As a result, I heard stories of institutions getting dinged because, say, their learning outcomes didn’t start with action verbs or their rubrics used fuzzy terms, even though no regional requires that learning outcomes be expressed in a particular format or that rubrics be used.


And this leads to the next major issues…


We in higher education—including government policymakers—don’t yet have a common vocabulary for assessment. This is understandable—higher ed assessment is still in its infancy, after all, and what makes this fun to me is that we all get to participate in developing that vocabulary. But right now terms such as “student achievement,” “student outcomes,” “learning goal,” and even “quantitative” and “qualitative” mean very different things to different people.


We in the higher ed assessment community have not yet come to consensus on what we consider acceptable, good, and best assessment practices. Some assessment practitioners, for example, think that assessment methods should be validated in the psychometric sense (with evidence of content and construct validity, for example), while others consider assessment to be a form of action research that needs only evidence of consequential validity (are the results of good enough quality to be used to inform significant decisions?). Some assessment practitioners think that faculty should be able to choose to focus on assessment “projects” that they find particularly interesting, while others think that, if you’ve established something as an important learning outcome, you should be finding out whether students have indeed learned it, regardless of whether or not it’s interesting to you.


Is all our assessment work making a difference? Assessment and accreditation share two key purposes: first, to ensure that our students are indeed learning what we want them to learn and, second, to make evidence-informed improvements in what we do, especially in the quality of teaching and learning. Too many of us—institutions and reviewers alike—are focusing too much on how we do assessment and not enough on its impact.


We’re focusing too much on assessment and not enough on teaching and curricula. While virtually all accreditors talk about teaching quality, for example, few expect that faculty use research-informed teaching methods, that institutions actively encourage experimentation with new teaching methods or curriculum designs, or that institutions invest significantly in professional development to help faculty improve their teaching.


What can we do about all of this? I have some ideas, but I’ll save them for my next blog post.

 

Categories: Ideas


6 Comments

Linda Suskie (Owner)
9:31 AM on May 7, 2016 
Thank you all for your thoughtful comments! See my new 5/7/2016 post on "Fixing assessment in American higher education" for five ideas I have on ways to improve what we do.
David Onder
12:04 PM on May 2, 2016 
I have been wondering how long assessment needs to be done on a campus before it is no longer "in its infancy." Assessment on our campus has been going on for almost 20 years, but it has only recently (in the past 5-10 years) moved more toward assessment of OUTCOMEs rather than INPUTs. I think the move to outcomes was a good one, but it makes for a longer time to results. With too much focus on inputs like teaching and curricula, we lose sight of what we all hoped for anyway - what did the student learn, and how well did they retain what they learned?

As for the proposal to use "preferred" practices - it seems like semantics. What you prefer and what I prefer may still get the same response - "it depends." I think, since "best practices" is plural, the implication is that there is more than one practice that could be considered best and that these practices have been shown to be superior to other practices (at least under certain conditions). If those conditions apply, then great. Otherwise, experiment.
Eric Landrum
8:32 AM on April 29, 2016 
I wonder -- and I am not kidding here -- what if we were to switch from "best practices" to "preferred practices"? "Best" does imply an absolute best, but I agree with another comment that one size does not fit all, and the "best practice" answer on a particular campus might be "it depends." There is some progress being made on institutional "transformational" changes, often with the help of NSF grant funding. And perhaps another alternative to "best practices" might be "evidence-based instructional practices," or EBIPs.
Catherine Wehlburg
11:53 AM on April 28, 2016 
Linda, thank you for your wonderful framing and excellent questions! The assessment community does need to make some decisions about what is "good" and what is not acceptable. We have many different perspectives, and that becomes confusing to everyone. Because of this, I don't think that assessment has made a lot of difference at the macro level, but I do believe that good assessment has made a huge difference at the course level. Faculty make changes based on their data, and that makes an almost immediate difference in their teaching and then in their students' learning. So I don't believe that it is all for show or for accountability. I do think we make a difference. But -- we could do so much more!!! I look forward to hearing your ideas!
Jane Marie Souza
9:19 AM on April 28, 2016 
Your statement "Too many of us—institutions and reviewers alike—are focusing too much on how we do assessment and not enough on its impact" is true enough, but I am encouraged that I see progress on this front. As assessment in higher education matures and assessment reviewers become more seasoned, there is a shift from "how" to "impact." Even over the last couple of years the new focus is becoming clearer. I have witnessed less of an emphasis on prescriptive ways to discover whether students are learning intended outcomes, and more significance placed on use of the data collected. Very encouraging indeed. Then again, I'm a glass-half-full kind of person :)
Mary Herrington-Perry
4:38 PM on April 25, 2016 
I've been thinking about this fact--"We in the higher ed assessment community have not yet come to consensus on what we consider acceptable, good, and best assessment practices"--for some time now. My tentative conclusion is that what we assessment directors consider acceptable depends chiefly on our own academic background, the academic program in question, the size of our operating budget, and the extent to which assessment is embraced on our campus. I think we need the flexibility to answer the question of what's acceptable accordingly, even if it requires us to preface every answer with, "Well, it depends"!