Linda Suskie

  A Common Sense Approach to Assessment & Accreditation


A response to Bob Shireman on "inane" SLOs

Posted on April 10, 2016 at 8:55 AM

You may have seen Bob Shireman's essay "SLO Madness" in the April 7 issue of Inside Higher Ed or his report, "The Real Value of What Students Do in College." I sent him the following response today:


I first want to point out that I agree wholeheartedly with a number of your observations and conclusions.


1. As you point out, policy discussions too often “treat the question of quality—the actual teaching and learning—as an afterthought or as a footnote.” The Lumina Foundation and the federal government use the term “student achievement” to discuss only retention, graduation, and job placement rates, while the higher ed community wants to use it to discuss student learning as well.


2. Extensive research has confirmed that students’ engagement in their learning affects both learning and persistence. You cite Astin’s 23-year-old study; it has since been validated and refined by Vincent Tinto, Patrick Terenzini, Ernest Pascarella, and the staff of the National Survey of Student Engagement, among many others.


3. At many colleges and universities, there’s little incentive for faculty to try to become truly great teachers who engage and inspire their students. Teaching quality is too often judged largely by student evaluations that may have little connection to research-informed teaching practices, and promotion and tenure decisions are too often based more on research productivity than on teaching quality. This is because there’s more grant money for research than for teaching improvement. A report from Third Way noted that “For every $100 the federal government spends on university-led research, it spends 24 cents on teaching innovation at universities.”


4. We know through neuroscience research that memorized knowledge is quickly forgotten; thinking skills are the lasting learning of a college education.


5. “Critical thinking” is a nebulous term that, frankly, I’d like to banish from the higher ed lexicon. As you suggest, it’s an umbrella term for an array of thinking skills, including analysis, evaluation, synthesis, information literacy, creative thinking, problem solving, and more.


6. The best evidence of what students have learned is in their coursework—papers, projects, performances, portfolios—rather than what you call “fabricated outcome measures” such as published or standardized tests.


7. You call for accreditors to “validate colleges’ own quality-assurance systems,” which is exactly what they are already doing. Many colleges and universities offer hundreds of programs and thousands of courses; it’s impossible for any accreditation team to review them all. So evaluators often choose a random or representative sample, as you suggest.


8. Our accreditation processes are far from perfect. The decades-old American higher education culture of operating in independent silos and evaluating quality by looking at inputs rather than outcomes has proved remarkably difficult to turn around, despite twenty years of earnest effort by accreditors. There are many reasons for this, which I discuss in my book Five Dimensions of Quality, but let me share two here. First, US News & World Report’s rankings are based overwhelmingly on inputs rather than outcomes, and they correlate strongly with institutional age and wealth. Second, most accreditation evaluators are volunteers, and training resources for them are limited. (Remember that everyone in higher education is trying to keep costs down.)


9. Thus, despite a twenty-year focus by accreditors on requiring useful assessment of learning, there are still plenty of people at colleges and universities who don’t see merit in looking at outcomes meaningfully. They don’t engage in the process until accreditors come calling; they continue to have misconceptions about what they are to do and why; and they focus blindly on trying to give the accreditors whatever they think the accreditors want rather than using assessment as an opportunity to look at teaching and learning usefully. This has led to some of your sad anecdotes about convoluted, meaningless processes. Using Evidence of Student Learning to Improve Higher Education, a book by George Kuh and his colleagues, is full of great ideas on how to turn this culture around and make assessment work truly meaningful and useful to faculty.


10. Your call for reviews of majors and courses is sound and, indeed, a number of regional accreditors and state systems already require academic programs to engage in periodic “program review.” There’s room for improvement, however. Many program reviews follow the old “inputs” model, counting library collections, faculty credentials, lab facilities, and the like, and do not yet focus sufficiently on student learning.

 

Your report has some fundamental misperceptions, however. Chief among them is your assertion that the three-step assessment process—declare goals, seek evidence of student achievement of them, and improve instruction based on the results—“hasn’t worked out that way. Not even close.” Today there are faculty and staff at colleges and universities throughout the country who have completed these three steps successfully and meaningfully. Some of these stories are documented in the periodical Assessment Update, some are documented on the website of the National Institute for Learning Outcomes Assessment (www.learningoutcomesassessment.org), some are documented by the staff of the National Survey of Student Engagement, and many more are documented in reports to accreditors.


In dismissing student learning outcomes as “meaningless blurbs” that are the key flaw in this three-step process, you are dismissing what a college education is all about and what we need to verify. Student learning outcomes are simply an attempt to articulate what we most want students to get out of their college education. Contrary to your assertion that “trying to distill the infinitely varied outcomes down to a list… likely undermines the quality of the educational activities,” research has shown that students learn more effectively when they understand course and program learning outcomes.


Furthermore, without a clear understanding of what we most want students to learn, assessment is meaningless. You note that “in college people do gain ‘knowledge’ and they gain ‘skills,’” but are they gaining the right knowledge and skills? Are they acquiring the specific abilities they most need “to function in society and in a workspace,” as you put it? While, as you point out, every student’s higher education experience is unique, there is nonetheless a core of competencies that we should expect of all college graduates and whose achievement we should verify. Employers consistently say that they want to hire college graduates who can:

• Collaborate and work in teams

• Articulate ideas clearly and effectively

• Solve real-world problems

• Evaluate information and conclusions

• Be flexible and adapt to change

• Be creative and innovative

• Work with people from diverse cultural backgrounds

• Make ethical judgments

• Understand numbers and statistics

 

Employers expect colleges and universities to ensure that every student, regardless of his or her unique experience, can do these things at an appropriate level of competency.


You’re absolutely correct that we need to focus on examining student work (and we do), but how should we decide whether the work is excellent or inadequate? For example, everyone wants college graduates to write well, but what exactly are the characteristics of good writing at the senior level? Student learning outcomes, articulated in rubrics (scoring guides) that spell out the outcomes and define excellent, adequate, and unsatisfactory performance levels, are vital to making this determination.


You don’t mention rubrics in your paper, so I can’t tell if you’re familiar with them, but in the last twenty years they have revolutionized American higher education. When student work is evaluated according to clearly articulated criteria, the evaluations are fairer and more consistent. Higher education curriculum and pedagogy experts such as Mary-Ann Winkelmes, Barbara Walvoord, Virginia Anderson, and L. Dee Fink have shown that, when students understand what they are to learn from an assignment (the learning outcomes), when the assignment is designed to help them achieve those outcomes, and when their work is graded according to how well they demonstrate achievement of those outcomes, they learn far more effectively. When faculty collaborate to identify shared learning outcomes that students develop in multiple courses, they develop a more cohesive curriculum that again leads to better learning.


Beyond having clear, integrated learning outcomes, there’s another critical aspect of excellent teaching and learning: if faculty aren’t teaching something, students probably aren’t learning it. This is where curriculum maps come in; they’re a tool to ensure that students do indeed have enough opportunity to achieve a particular outcome. One college that I worked with, for example, identified (and defined) ethical reasoning as an important outcome for all its students, regardless of major. But a curriculum map revealed that very few students took any courses that helped them develop ethical reasoning skills. The faculty changed curricular requirements to correct this and ensure that every student, regardless of major, graduated with the ethical reasoning skills that both they and employers value.


I appreciate anyone who tries to come up with solutions to the challenges we face, but I must point out that your thoughts on program review may be impractical. External reviews are difficult and expensive. Keep in mind that larger universities may offer hundreds of programs and thousands of courses, and for many programs it can be remarkably hard—and expensive—to find a truly impartial, well-trained external expert.


Similarly, while a number of colleges and universities already subject student work to separate, independent reviews, this can be another difficult, expensive endeavor. With college costs skyrocketing, I question the cost-benefit: are these colleges learning enough from these reviews to make the time, work, and expense worthwhile?


I would add one item to your wish list, by the way: I’d like to see every accreditor require its colleges and universities to expect faculty to use research-informed teaching practices, including engagement strategies, and to evaluate faculty teaching effectiveness on their use of those practices.


But my chief takeaway from your report is not about its shortcomings but about how the American higher education community has failed to tell you, other policy thought leaders, and government policy makers what we do and how well we do it. Part of the problem is that, because American higher education is so huge and complex, we have a complicated, messy story to tell. None of you has time to do a thorough review of the many books, reports, conferences, and websites that explain what we are trying to do and how effective we are. We have to figure out a way to tell our very complex story in short, simple ways that busy people can digest quickly.

 


Categories: Ideas, Clearing the Fog


10 Comments

Kerrie-Anne Sommerfeld
5:57 PM on April 22, 2016
I'm a training and assessment consultant in Australia, and I work with teachers in the vocational education sector (similar to community colleges). I'm also a great admirer of your work, Linda. This article is a fantastic summary of what good and not-so-good practice in assessment is and of the challenges faced by teachers. The issues bear striking similarities to those we face here in Australia.
Cliff Adelman
9:25 PM on April 19, 2016
One has to read Bob's first outing spitting at "SLO blurbs" in the essay he wrote for the Century Foundation prior to the Inside piece. Quite frankly, it's embarrassing, and I'm not going to count the ways here or I'll get in trouble; that's a measure of just how political this thing is in some circles. Let's just hope it all blows over and disappears beneath the waves.
Mary Herrington-Perry
9:20 AM on April 12, 2016 
Yours is a very thoughtful and gracious response, Linda. (But I am still chuckling at the allusion to "Reefer Madness" in the title of Shireman's article.)
Claudia Stanny
12:53 PM on April 11, 2016 
Thank you for this commentary, Linda.

How can we tell our story about what we learn from assessment and how programs get better when they use assessment evidence to guide decisions for improvement?

More importantly, how can we protect developing assessment practices on campuses when external forces continually change the expectations and reward systems? Accreditation bodies create new reporting mandates that throw programs off balance: suddenly programs must document a new area of assessment when they have barely had time to reflect on or use the assessment evidence they've already collected. Worse, as you note, the external demands increasingly focus on outcomes that are at best tangentially related to the quality of learning (graduation rates, first-year starting salaries of graduates).

Again, thank you for your good work and leadership on assessment.
Melissa Simnitt
12:17 PM on April 11, 2016 
Hi Linda!

Well said. I agree that we are not doing a good enough job explaining what we do and what we've accomplished. I'm adding that to my list of areas I can work to improve on my campus.

Thank you!
Melissa
Joseph Hoey
11:49 AM on April 11, 2016 
Hi Linda, what a thoughtful and comprehensive response to Shireman. Thanks so much for taking the time to put it together. I too hope Inside Higher Ed will publish it.
Barbara Rodriguez
10:16 AM on April 11, 2016 
Linda, I appreciate your response as well. When I read the original article, I thought a response was needed and was going to offer to write one, but your response exceeds what I could have done. Thank you, and I hope Inside Higher Ed will publish your response. It is well balanced and acknowledges the importance of assessing student learning outcomes. I have witnessed firsthand, both anecdotally and through data, the positive impact of outcomes-based assessment on the scholarship of teaching and learning.
E Cook
8:50 AM on April 11, 2016 
Linda, thank you for taking the time to thoughtfully outline this response in a manner that promotes a better understanding of assessment. Through your respectful appreciation of many of the points made by the original author, you are helping to encourage dialogue that will, ultimately, help us all continue to strengthen practices that are promising and address areas for improvement. Although I don't share your wish to banish the concept of critical thinking skills, and I also don't quite understand the penchant nowadays for researchers to claim measures or studies are "validated" (which in my opinion tends to spread a general misunderstanding about the process of validity testing), I greatly enjoyed reading your essay. Thank you!!
Linda Suskie
6:17 AM on April 11, 2016 
Thank you, Nhung! What struck me as I read Shireman's essays was what a poor job we in higher education do of educating people like him about how good we are. We have to figure out a better way.
Nhung Pham
11:41 AM on April 10, 2016 
I am an emerging scholar in assessment. I am looking for a position as an assessment specialist, so I am very familiar with the information Ms. Suskie mentioned in the response. Outsiders often have such misconceptions about assessment. However, the more I read and learn about assessment of SLOs, the more I believe in its impact on quality improvement. Thank you for your response. It is exactly what I thought.