Linda Suskie

  A Common Sense Approach to Assessment & Accreditation

Blog

A New Paradigm for Assessment

Posted on May 21, 2017 at 6:10 AM

I was impressed with—and found myself in agreement with—Douglas Roscoe’s analysis of the state of assessment in higher education in “Toward an Improvement Paradigm for Academic Quality” in the Winter 2017 issue of Liberal Education. Like Douglas, I think the assessment movement has lost its way, and it’s time for a new paradigm. And Douglas’s improvement paradigm—which focuses on creating spaces for conversations on improving teaching and curricula, making assessment more purposeful and useful, and bringing other important information and ideas into the conversation—makes sense. Much of what he proposes is in fact echoed in Using Evidence of Student Learning to Improve Higher Education by George Kuh, Stanley Ikenberry, Natasha Jankowski, Timothy Cain, Peter Ewell, Pat Hutchings, and Jillian Kinzie.


But I don’t think his improvement paradigm goes far enough, so I propose a second, concurrent paradigm shift.


I’ve always felt that the assessment movement tried to do too much, too quickly. The assessment movement emerged from three concurrent forces. One was the U.S. federal government, which through a series of Higher Education Acts required Title IV gatekeeper accreditors to require the institutions they accredit to demonstrate that they were achieving their missions. Because the fundamental mission of an institution of higher education is, well, education, this was essentially a requirement that institutions demonstrate that their intended student learning outcomes were being achieved by their students.


The Higher Education Acts also required Title IV gatekeeper accreditors to require the institutions they accredit to demonstrate “success with respect to student achievement in relation to the institution’s mission, including, as appropriate, consideration of course completion, state licensing examinations, and job placement rates” (1998 Amendments to the Higher Education Act of 1965, Title IV, Part H, Sect. 492(b)(4)(E)). The examples in this statement imply that the federal government defines student achievement as a combination of student learning, course and degree completion, and job placement.


A second concurrent force was the movement from a teaching-centered to a learning-centered approach to higher education, encapsulated in Robert Barr and John Tagg’s 1995 landmark article in Change, “From Teaching to Learning: A New Paradigm for Undergraduate Education.” The learning-centered paradigm advocates, among other things, making undergraduate education an integrated learning experience—more than a collection of courses—that focuses on the development of lasting, transferable thinking skills rather than just basic conceptual understanding.


The third concurrent force was the growing body of research on practices that help students learn, persist, and succeed in higher education. Among these practices: students learn more effectively when they integrate and see coherence in their learning, when they participate in out-of-class activities that build on what they’re learning in the classroom, and when new learning is connected to prior experiences.


These three forces led to calls for a lot of concurrent, dramatic changes in U.S. higher education:

  • Defining quality by impact rather than effort—outcomes rather than processes and intent
  • Looking on undergraduate majors and general education curricula as integrated learning experiences rather than collections of courses
  • Adopting new research-informed teaching methods that are a 180-degree shift from lectures
  • Developing curricula, learning activities, and assessments that focus explicitly on important learning outcomes
  • Identifying learning outcomes not just for courses but for entire programs, general education curricula, and even across entire institutions
  • Framing what we used to call extracurricular activities as co-curricular activities, connected purposefully to academic programs
  • Using rubrics rather than multiple choice tests to evaluate student learning
  • Working collaboratively, including across disciplinary and organizational lines, rather than independently


These are well-founded and important aims, but they are all things that many in higher education had never considered before. Now everyone was being asked to accept the need for all these changes, learn how to make them, and implement them all at the same time. No wonder there’s been so much foot-dragging on assessment! And no wonder that, a generation into the assessment movement and unrelenting accreditation pressure, there are still great swaths of the higher education community who have not yet done much of this and who indeed remain oblivious to it.


What particularly troubles me is that we’ve spent too much time and effort on trying to create—and assess—integrated, coherent student learning experiences and, in doing so, left the grading process in the dust. Requiring everything to be part of an integrated, coherent learning experience can lead to pushing square pegs into round holes. Consider:

  • The transfer associate degrees offered by many community colleges, for example, aren’t really programs—they’re a collection of general education and cognate requirements that students complete so they’re prepared to start a major after they transfer. So identifying—or assessing—program learning outcomes for them frankly doesn’t make much sense.
  • The courses available to fulfill some general education requirements don’t really have much in common, so their shared general education outcomes become so broad as to be almost meaningless.
  • Some large universities are divided into separate colleges and schools, each with their own distinct missions and learning outcomes. Forcing these universities to identify institutional learning outcomes applicable to every program makes no sense—again, the outcomes must be so broad as to be almost meaningless.
  • The growing numbers of students who swirl through multiple colleges before earning a degree aren’t going to have a really integrated, coherent learning experience no matter how hard any of us tries.


At the same time, we have given short shrift to helping faculty learn how to develop and use good assessments in their own classes and how to use grading information to understand and improve their own teaching. In the hundreds of workshops and presentations I’ve done across the country, I often ask for a show of hands from faculty who routinely count how many students earned each score on each rubric criterion of a class assignment, so they can understand what students learned well and what they didn’t learn well. Invariably a tiny proportion raises their hands. When I work with faculty who use multiple choice tests, I ask how many use a test blueprint to plan their tests so they align with key course objectives, and it’s consistently a foreign concept to them.


In short, we’ve left a vital part of the higher education experience—the grading process—in the dust. We invest more time in calibrating rubrics for assessing institutional learning outcomes, for example, than we do in calibrating grades. And grades have far more serious consequences to our students, employers, and society than assessments of program, general education, co-curricular, or institutional learning outcomes. Grades decide whether students progress to the next course in a sequence, whether they can transfer to another college, whether they graduate, whether they can pursue a more advanced degree, and in some cases whether they can find employment in their discipline.


So where should we go? My paradigm springs from visits to two Canadian institutions a few years ago. At that time Canadian quality assurance agencies did not have any requirements for assessing student learning, so my workshops focused solely on assessing learning more effectively in the classroom. The workshops were well received because they offered very practical help that faculty wanted and needed. And at the end of the workshops, faculty began suggesting that perhaps they should collaborate to talk about shared learning outcomes and how to teach and assess them. In other words, discussion of classroom learning outcomes began to flow into discussion of program learning outcomes. It’s a naturalistic approach that I wish we in the United States had adopted decades ago.


What I now propose is moving to a focus on applying everything we’ve learned about curriculum design and assessment to the grading process in the classroom. In other words, my paradigm agrees with Roscoe’s that “assessment should be about changing what happens in the classroom—what students actually experience as they progress through their courses—so that learning is deeper and more consequential.” My paradigm emphasizes the following.

  1. Assessing program, general education, and institutional learning outcomes remains an assessment best practice. Those who have found value in these assessments would be encouraged to continue to engage in them and honored through mechanisms such as NILOA’s Excellence in Assessment designation.
  2. Teaching excellence is defined in significant part by four criteria: (1) the use of research-informed teaching and curricular strategies, (2) the alignment of learning activities and grading criteria to stated course objectives, (3) the use of good quality evidence, including but not limited to assessment results from the grading process, to inform changes to one’s teaching, and (4) active participation in and application of professional development opportunities on teaching including assessment.
  3. Investments in professional development on research-informed teaching practices exceed investments in assessment.
  4. Assessment work is coordinated and supported by faculty professional development centers (teaching-learning centers) rather than offices of institutional effectiveness or accreditation, sending a powerful message that assessment is about improving teaching and learning, not fulfilling an external mandate.
  5. We aim to move from a paradigm of assessment, not just to one of improvement as Roscoe proposes, but to one of evidence-informed improvement—a culture in which the use of good quality evidence to inform discussions and decisions is expected and valued.
  6. If assessment is done well, it’s a natural part of the teaching-learning process, not a burdensome add-on responsibility. The extra work is in reporting it to accreditors. This extra work can’t be eliminated, but it can be minimized and made more meaningful by establishing the expectation that reports address only key learning outcomes in key courses (including program capstones), on a rotating schedule, and that course assessments are aggregated and analyzed within the program review process.


Under this paradigm, I think we have a much better shot at achieving what’s most important: giving every student the best possible education.

Categories: Ideas


5 Comments

Linda Suskie (Owner)
12:13 PM on May 29, 2017 
Thank you all for your feedback and thank you, Jillian, for pointing us to NILOA's response. I am an enormous fan of Dr. Mary-Ann Winkelmes and her transparency project at UNLV, which greatly increases the meaning and relevance of the learning activities and assignments we give our students: https://www.unlv.edu/provost/teachingandlearning
Jillian Kinzie
11:42 AM on May 25, 2017 
I appreciate the feistiness of this post, and am most interested in Linda's idea of considering a new paradigm for the grading process. The focus on "process" is noteworthy to me. A new level of rigor, reflection, feedback, and revision should be part of a sound grading process. A revised approach to the grading process also suggests rethinking the tasks we ask students to do to demonstrate their learning. By the way, NILOA posted a viewpoint responding to the trio of AAC&U Liberal Education pieces here: https://illinois.edu/blog/view/915/489500. Let's keep the discussion animated!
Fiona Chrystall
12:47 PM on May 22, 2017 
I totally agree. Having started my work in assessment as a Director of Teaching and Learning sixteen years ago, and having now been Director of Curriculum Quality Assurance and Assessment at a community college for the past four years, I can say that it seemed a much easier task in my former position and institution. I attribute that mostly to the direct connection to T & L. I have argued that my "bargaining currency" with faculty when working with them on assessment is my long experience as a faculty member, not as an assessment administrator. Invariably, I get asked "do you think this would be an appropriate assessment measure?" and my response is "what is it you are hoping your students have learned, and how best can they demonstrate that to you?". General education assessment is one of my current challenges - for exactly the reason you state - it's not a "program". Then I have a conversation with a new chair overseeing one of our newest associate degree programs in Aviation & Career Pilot Technology (just this past Thursday), where we discuss whether or not the multiple choice exam questions dictated by the FAA truly are the best measures of our student learning outcomes for the program. When one of the outcomes is about making sound decisions based on available evidence in a given context, not so much! The discussion flowed to considering that the first set of students through the program to graduation knew their stuff but were not proficient in articulating their problem-solving and critical thinking abilities in writing, but when provided problem-based case studies to solve and present orally, they were clearly capable of high quality work in this area. Conclusion - what they were doing as a course assignment for a grade was a much better measure of their capabilities in relation to this learning outcome than any set of questions on the required FAA exam!
That's just one recent example of where the meaningful program assessment and classroom assessment converge.
Vicki Stieha
12:14 PM on May 22, 2017 
You are 100% on target from my perspective!

For example, I recently served as an external reviewer for a community college and we ran head on into the difficulty of assessing the "program" outcomes for a course of study that was highly suited to help students transfer to a university upon completion of the Associate's degree. The courses do not comprise a "program" but there was terrific evidence of student learning emerging from various courses. The metric of "how do you assess your academic and co-curricular outcomes relative to a set of program learning outcomes" was not one that would serve the continuous improvement of student learning.
Ruth Slotnick
7:46 AM on May 21, 2017 
Spot on feisty rant! Now to push this forward as a collective.