Linda Suskie

  A Common Sense Approach to Assessment & Accreditation

Blog

What do faculty really think about assessment?

Posted on March 4, 2018 at 8:05 AM

The vitriol in some recent op-ed pieces and the comments that followed them might leave the impression that faculty hate assessment. Well, some faculty clearly do, but a national survey suggests that they’re in the minority.


The Faculty Survey of Assessment Culture, directed by Dr. Matthew Fuller at Sam Houston State University, can give us some insight. Its key drawback is that, because it's still a relatively new survey, it drew only about 1,155 responses in its last reported administration in 2014. So the survey may not represent what faculty throughout the U.S. really think, but I think it's worth a look nonetheless.


Most of the survey is a series of statements to which faculty respond by choosing Strongly Agree, Agree, Only Slightly Agree, Only Slightly Disagree, Disagree, or Strongly Disagree.


Here are the percentages who agreed or strongly agreed with each statement. Statements that are positive about assessment are in green; those that are negative about assessment are in red.

80% The majority of administrators are supportive of assessment.

77% Faculty leadership is necessary for my institution’s assessment efforts.

76% Assessment is a good thing for my institution to do.

70% I am highly interested in my institution’s assessment efforts.

70% Assessment is vital to my institution’s future.

67% In general I am eager to work with administrators.

67% Assessment is a good thing for me to do.

64% I am actively engaged in my institution’s assessment efforts.

63% Assessments of programs are typically connected back to student learning.

62% My academic department or college truly values faculty involvement in assessment.

61% I engage in institutional assessment efforts because it is the right thing to do for our students.

60% Assessment is vital to my institution’s way of operating.

57% Discussions about student learning are at the heart of my institution.

57% In general a recommended change is more likely to be enacted by administrators if it is supported by assessment data.

53% I clearly understand assessment processes at my institution.

52% Assessment supports student learning at my institution.

51% Assessment is primarily the responsibility of faculty members.

51% Change occurs more readily when supported by assessment results.

50% It is clear who is ultimately in charge of assessment.

50% I am familiar with the office that leads student assessment efforts for accreditation purposes.

50% Assessment for accreditation purposes is prioritized above other assessment efforts.

49% Assessment results are used for improvement.

49% The majority of administrators primarily emphasize assessment for the improvement of student learning.

49% I engage in institutional assessment because doing so makes a difference to student learning at my institution.

48% Assessment processes yield evidence of my institution’s effectiveness.

48% I have a generally positive attitude toward my institution’s culture of assessment.

47% Senior leaders, i.e., President or Provost, have made clear their expectations regarding assessment.

47% Administrators are supportive of making changes.

46% I am familiar with the office that leads student assessment efforts for student learning.

45% Assessment data are used to identify the extent to which student learning outcomes are met.

44% My institution is structured in a way that facilitates assessment practices focused on improved student learning.

44% The majority of administrators only focus on assessment in response to compliance requirements.

43% Student assessment results are shared regularly with faculty members.

41% I support the ways in which administrators have used assessment on my campus.

40% Assessment is an organized coherent effort at my institution.

40% Assessment results are available to faculty by request.

38% Assessment data are available to faculty by request.

37% Assessment results are shared regularly throughout my institution.

35% Faculty are in charge of assessment at my institution.

33% Engaging in assessment also benefits my research/scholarship agenda.

32% Budgets can be negatively impacted by assessment results.

32% Administrators share assessment data with faculty members using a variety of communication strategies (i.e., meetings, web, written correspondence, presentations).

31% Assessment data are regularly used in official institutional communications.

30% There are sufficient financial resources to make changes at my institution.

29% Assessment is a necessary evil in higher education.

28% Communication of assessment results has been effective.

28% Assessment results are criticized for going nowhere (i.e., not leading to change).

27% Assessment results in a fair depiction of what I do as a faculty member.

27% Administrators use assessment as a form of control (i.e., to regulate institutional processes).

26% Assessment efforts do not have a clear focus.

26% I enjoy engaging in institutional assessment efforts.

24% Assessment success stories are formally shared throughout my institution.

23% Assessment results in an accurate depiction of what I do as a faculty member.

22% Assessment is conducted based on the whims of the people in charge.

21% If assessment was not required I would not be doing it.

21% Assessment is primarily the responsibility of administrators.

21% I am aware of several assessment success stories (i.e., instances of assessment resulting in important changes).

20% I do not have time to engage in assessment efforts.

19% Assessment results have no impact on resource allocations.

18% Assessment results are used to scare faculty into compliance with what the administration wants.

18% There is pressure to reveal only positive results from assessment efforts.

17% I avoid doing institutional assessment activities if I can.

17% I engage in assessment because I am afraid of what will happen if I do not.

14% I perceive assessment as a threat to academic freedom.

10% Assessment results are used to punish faculty members (i.e., not rewarding innovation or effective teaching, research, or service).

4% Assessment is someone else’s problem, not mine.


Overall, there’s good news here. Most faculty agreed with most positive statements about assessment, and most disagreed with most negative statements. I was particularly heartened that about three-quarters of respondents agreed that “assessment is a good thing for my institution to do,” about 70% agreed that “assessment is vital to my institution’s future,” and about two-thirds agreed that “assessment is a good thing for me to do.”


But there’s also plenty to be concerned about here. Only 35% agree that faculty are in charge of assessment and, by several measures, only a minority see assessment results shared and used. Almost 30% view assessment as a necessary evil.


Survey researchers know that people are more apt to agree than disagree with a statement, so I also looked at the percentages of faculty who disagreed or strongly disagreed with each statement. These responses do not mirror the agreed/strongly agreed results above, because on some items a larger proportion of faculty marked Only Slightly Agree or Only Slightly Disagree. Again, the positive statements are in green and the negative ones in red.

3% The majority of administrators are supportive of assessment.

6% Faculty leadership is necessary for my institution’s assessment efforts.

6% Assessment is a good thing for my institution to do.

7% Assessment is vital to my institution’s future.

8% I am highly interested in my institution’s assessment efforts.

8% In general a recommended change is more likely to be enacted by administrators if it is supported by assessment data.

9% I am actively engaged in my institution’s assessment efforts.

9% In general I am eager to work with administrators.

9% My academic department or college truly values faculty involvement in assessment.

10% Change occurs more readily when supported by assessment results.

10% Assessment is a good thing for me to do.

12% Assessment results are available to faculty by request.

13% Assessment is vital to my institution’s way of operating.

13% Assessment data are available to faculty by request.

13% The majority of administrators primarily emphasize assessment for the improvement of student learning.

13% I engage in institutional assessment efforts because it is the right thing to do for our students.

14% Discussions about student learning are at the heart of my institution.

14% I clearly understand assessment processes at my institution.

14% Assessment data are used to identify the extent to which student learning outcomes are met.

15% Assessments of programs are typically connected back to student learning.

15% Assessment results are used for improvement.

16% Assessment is primarily the responsibility of faculty members.

16% Administrators are supportive of making changes.

17% Assessment supports student learning at my institution.

18% Assessment processes yield evidence of my institution’s effectiveness.

18% I support the ways in which administrators have used assessment on my campus.

19% It is clear who is ultimately in charge of assessment.

19% Assessment is an organized coherent effort at my institution.

19% I have a generally positive attitude toward my institution’s culture of assessment.

20% Senior leaders, i.e., President or Provost, have made clear their expectations regarding assessment.

20% My institution is structured in a way that facilitates assessment practices focused on improved student learning.

20% I engage in institutional assessment because doing so makes a difference to student learning at my institution.

21% I am familiar with the office that leads student assessment efforts for accreditation purposes.

21% Budgets can be negatively impacted by assessment results.

22% The majority of administrators only focus on assessment in response to compliance requirements.

23% Student assessment results are regularly shared with faculty members.

24% I am familiar with the office that leads student assessment efforts for student learning.

24% Assessment for accreditation purposes is prioritized above other assessment efforts.

24% Assessment data are regularly used in official institutional communications.

28% Faculty are in charge of assessment at my institution.

29% Assessment results have no impact on resource allocations.

29% Assessment results are regularly shared throughout my institution.

29% I enjoy engaging in institutional assessment efforts.

31% Administrators share assessment data with faculty members using a variety of communication strategies (i.e., meetings, web, written correspondence, presentations).

31% Communication of assessment results has been effective.

31% Administrators use assessment as a form of control (i.e., to regulate institutional processes).

32% Assessment results are criticized for going nowhere (i.e., not leading to change).

32% Assessment results in a fair depiction of what I do as a faculty member.

33% There are sufficient financial resources to make changes at my institution.

34% Assessment success stories are formally shared throughout my institution.

34% Assessment results in an accurate depiction of what I do as a faculty member.

35% Assessment is primarily the responsibility of administrators.

36% I am aware of several assessment success stories (i.e., instances of assessment resulting in important changes).

36% Engaging in assessment also benefits my research/scholarship agenda.

41% Assessment efforts do not have a clear focus.

41% I do not have time to engage in assessment efforts.

42% Assessment is a necessary evil in higher education.

50% Assessment is conducted based on the whims of the people in charge.

50% There is pressure to reveal only positive results from assessment efforts.

53% Assessment results are used to scare faculty into compliance with what the administration wants.

55% I avoid doing institutional assessment activities if I can.

56% If assessment was not required I would not be doing it.

56% I engage in assessment because I am afraid of what will happen if I do not.

60% Assessment results are used to punish faculty members (i.e., not rewarding innovation or effective teaching, research, or service).

62% I perceive assessment as a threat to academic freedom.

78% Assessment is someone else’s problem, not mine.


Here there's more good news. We want only small proportions of faculty to disagree with the positive statements about assessment, and for the most part that's what happened. About a third disagree that assessment results and success stories are shared, but that matches what we saw in the agree/strongly agree results above.


But there are also areas of concern here. We want large proportions of faculty to disagree with the negative statements about assessment, and that doesn't always happen. Fewer than a quarter disagree that budgets can be negatively impacted by assessment results and that administrators focus on assessment only in response to compliance requirements. Fewer than a third disagree that assessment results go nowhere (don't lead to change) or have no impact on resource allocations. The results that concerned me most? Only 42% disagree that assessment is a necessary evil; only half disagree that there is pressure to reveal only positive assessment results; and only a bit over half disagree that "If assessment was not required I would not be doing it."


So, while most faculty “get” assessment, there are sizable numbers who don’t yet see value in it. We've come a long way, but there's still plenty of work to do!


(Some notes on the presentation of these results: I sorted them from highest to lowest, rounded percentages to the nearest whole percent, and color-coded the "good" and "bad" statements. All of these choices help the key points of a very lengthy survey pop out at the reader.)
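
(If your survey results live in a spreadsheet, that sorting and rounding takes only a few lines of code. Here is a minimal sketch in Python with pandas, using made-up statements, percentages, and column names rather than the actual survey data.)

```python
# Minimal sketch (hypothetical data, not the actual survey results):
# sort agreement percentages from highest to lowest and round to whole percents.
import pandas as pd

results = pd.DataFrame({
    "statement": [
        "Assessment is a good thing for my institution to do",
        "Assessment is a necessary evil in higher education",
        "Assessment results are used for improvement",
    ],
    "pct_agree": [75.6, 29.4, 48.9],                  # % Agree or Strongly Agree
    "valence": ["positive", "negative", "positive"],  # used later for color-coding
})

report = (
    results
    .assign(pct_agree=lambda df: df["pct_agree"].round().astype(int))
    .sort_values("pct_agree", ascending=False)
)
print(report.to_string(index=False))
```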

Why do (some) faculty hate assessment?

Posted on February 28, 2018 at 10:25 AM

Two recent op-ed pieces in the Chronicle of Higher Education and the New York Times –and the hundreds of online comments regarding them—make clear that, 25 years into the assessment movement, a lot of faculty really hate assessment.


It’s tempting for assessment people to spring into a defensive posture and dismiss what these people are saying. (They’re misinformed! The world has changed!) But if that’s our response, aren’t we modeling the fractures deeply dividing the US today, with people existing in their own echo chambers and talking past each other rather than really listening and trying to find common ground on which to build? And shouldn’t we be practicing what we preach, using systematic evidence to inform what we say and do?


So I took a deeper dive into those comments. I did a content analysis of the articles and many of the comments that followed. (The New York Times article had over 500 comments—too many for me to handle—so I looked only at NYT comments with at least 12 recommendations.)


If you’re not familiar with content analysis, it’s looking through text to identify the frequency of ideas or themes. For example, I counted how many comments mentioned that assessment is expensive. I do content analysis by listing all the comments as bullets in a Word document, then cutting and pasting the bulleted comments to group similar comments together under headings. I then cut and paste the groups so the most frequently mentioned themes are at the top of the document. There is qualitative analysis software that can help if you don’t want to do this manually.
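
(If you'd rather not do all that cutting and pasting by hand, the counting step can be roughed out in a few lines of code. Here is a minimal sketch in Python; the theme labels, keywords, and the comments.txt file name are hypothetical placeholders, and simple keyword matching is only a crude stand-in for the judgment involved in real content analysis.)

```python
# Minimal sketch of keyword-based theme counting (hypothetical themes and file name).
# A rough first pass, not a substitute for reading and grouping comments yourself.
from collections import Counter

THEMES = {
    "waste of time and resources": ["waste", "time", "cost", "expensive", "resources"],
    "faculty not valued or respected": ["respect", "disempower", "jargon", "bureaucrat"],
    "external and economic forces": ["legislature", "industry", "business model", "adjunct"],
    "blamed for student learning": ["blame", "responsible", "k-12", "poverty"],
}

def count_themes(comments):
    """Count how many comments mention at least one keyword for each theme."""
    counts = Counter()
    for comment in comments:
        text = comment.lower()
        for theme, keywords in THEMES.items():
            if any(keyword in text for keyword in keywords):
                counts[theme] += 1
    return counts

# Hypothetical input file: one comment per line.
with open("comments.txt", encoding="utf-8") as f:
    comments = [line.strip() for line in f if line.strip()]

for theme, n in count_themes(comments).most_common():
    print(f"{n:4d}  {theme}")
```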


A caveat: Comments don't always fall into neat, discrete categories; judgment is needed to decide where to place some. I did this analysis quickly, and it's entirely possible that, had you done this analysis instead of me, you would have come up with somewhat different results. But assessment is not rigorous research; we just need information good enough to help inform our thinking, and I think my analysis is fine for the purpose of figuring out how we might deal with this.


Why take the time to do a content analysis instead of just reading through the comments? Because, when we simply read through a list of comments, there's a good chance we won't identify the most frequently mentioned ideas accurately. As I was doing my content analysis, I was struck by how many faculty complained that assessment is (I'm being snarky here) either a vast right-wing conspiracy or a vast left-wing conspiracy, simply because I'd never heard that claim before. It turned out, however, that other themes emerged far more frequently. This is a good lesson for faculty who think they don't need to formally assess because they "know" what their students are struggling with. Maybe they do…but maybe not.


So what did I find? As I’d expected, there are many reasons why faculty may hate assessment. I found that most of their complaints fall into just four broad categories:


It’s a waste of an enormous amount of time and resources that could be better spent on teaching. Almost 40% of the comments fell into this category. Some examples:

  • We faculty are angry over the time and dollars wasted.
  • The assessment craze is not only of little value, but it saps the meager resources of time and money available for classroom instruction.
  • Faced with outrage over the high cost of higher education, universities responded by encouraging expensive administrative bloat.
  • It is not that the faculty are not trying, but the data and methods in general use are very poor at measuring learning.
  • Our “assessment expert” told us to just put down as a goal the % of students we wanted to rate us as very good or good on a self-report survey. Which we all know is junk.


Neither I nor what I think is important is valued or respected. Over 30% of the comments fell into this category. Some examples:

  • Assessment of student learning outcomes is an add-on activity that says your standard examination and grading scheme isn’t enough so you need to do a second layer of grading in a particular numerical format.
  • The fundamental, flawed premise of most of modern education is that teaching is a science.
  • Bureaucratic jargon subtly shapes the expectations of students and teachers alike.
  • When the effort to reduce learning to a list of job-ready skills goes too far, it misses the point of a university education.
  • Learning outcomes have disempowered faculty.
  • The only learning outcomes I value: students complete their formal education with a desire to learn more.
  • Assessment reflects a misguided belief that learning is quantifiable.


External and economic forces are behind this. About 15% of comments fell into this category, including those right-wing/left-wing conspiracy comments. Some examples:

  • There’s a whole industry out there that’s invested in outcomes assessment.
  • The assessment boom coincided with the decision of state legislatures to reduce spending on public universities.
  • Educational institutions have been forced to operate out of a business model.
  • It is the rise of adjuncts and online classes that has led to the assessment push.


I’m unfairly held responsible for student learning. About 10% of comments fell into this category. Some examples:

  • Students, not faculty, are responsible for student learning.
  • It is much more profitable to skim money from institutions of higher learning than fixing the underlying causes of the poverty and lack of focus that harm students.
  • The root cause is lack of a solid foundation built in K-12.


Two things struck me about these four broad categories. The first was that they don't quite align with what I've heard as I've worked with literally thousands of faculty at hundreds of colleges over the last two decades. Yes, I've heard plenty about assessment being useless, and I've written about faculty feeling devalued and disrespected by assessment, but I'd never heard the external-forces or blame-game reasons before. The second was that I've heard plenty about other reasons that weren't mentioned in these comments, especially finding time to work on assessment, not understanding how to assess (or how to teach), and moving from a culture of silos to one of collaboration. I think the reason for the disconnect between what I've heard and what was expressed here is that these comments reflect the angriest faculty, not all faculty. But their anger is legitimate and something we should all work to address.


[UPDATED 2/28/2018 4:36 PM EST] So what should we do? First, we clearly need better information on faculty experiences and views regarding assessment so we can understand which issues are most pervasive and address them. The Surveys of Assessment Culture developed by Matt Fuller at Sam Houston State University are an important start.


In the meantime, the good news is that the comments in and accompanying these two pieces all represent solvable problems. (No, we can't solve all of society's ills, but we can help faculty deal with them.) I'll share some ideas in upcoming blog posts. If you don't want to wait, you'll find plenty of practical suggestions in the new third edition of my book Assessing Student Learning: A Common Sense Guide.

Is higher ed assessment changing? You bet!

Posted on February 13, 2018 at 9:10 AM

Today marks the release of the third edition of my book Assessing Student Learning: A Common Sense Guide. I approached Jossey-Bass about doing a third edition in response to requests from some faculty who used it as a textbook but were required to use more recent editions. The second edition had been very successful, so I figured I’d update the references and a few chapters and be done. But as I started work on this edition, I was immediately struck by how outdated the second edition had become in just a few short years. The third edition is a complete reorganization and rewrite of the previous edition.


How has the world of higher ed assessment changed?


We are moving from Assessment 1.0 to Assessment 2.0: from getting assessment done—and in many cases not doing it very well—to getting assessment used. Many faculty and administrators still struggle to grasp that assessment is all about improving how we help students learn, not an end in itself, and that assessments should be planned with likely uses in mind. The last edition talked about using results, of course, but the new edition adds a chapter on using assessment results to the beginning of the book. And throughout the book I talk not about "assessment results" but about "evidence of student learning," which is what this is really all about.


We have a lot of new resources. Many new assessment resources have emerged since the second edition was published, including the VALUE rubrics published by AAC&U, the many white papers published by NILOA, and the Degree Qualifications Profile sponsored by Lumina. Learning management systems and assessment information management systems are far more prevalent and sophisticated. This edition talks about these and other valuable new resources.


We are recognizing that different settings require different approaches to assessment. The more assessment we’ve done, the more we’ve come to realize that assessment practices vary depending on whether we’re assessing learning in courses, programs, general education curricula, or co-curricular experiences. The last edition didn’t draw many distinctions among assessment in these settings. This edition features a new chapter on the many settings of assessment, and several chapters discuss applying concepts to specific settings.


We’re realizing that curriculum design is a big piece of the assessment puzzle. We’ve found that, when faculty and staff struggle with assessment, it’s often because the learning outcomes they’ve identified aren’t addressed sufficiently—or at all—in the curriculum. So this book has a brand new chapter on curriculum design, and the old chapter on prompts has been expanded into one on creating meaningful assignments.


We have a much better understanding of rubrics. Rubrics are now so widespread that we have a much better idea of how to design and use them. A couple of years ago I did a literature review of rubric development that turned on a lot of lightbulbs for me, and this edition reflects my fresh thinking.


We’re recognizing that in some situations student learning is especially hard to assess. This edition has a new chapter on assessing the hard-to-assess, such as performances and learning that can’t be graded.


We’re increasingly appreciating the importance of setting appropriate standards and targets in order to interpret and use results appropriately. The chapter on this is completely rewritten, with a new section on setting standards for multiple choice tests.


We're fighting the constant pull to make assessment too complicated. Some accreditors' overly complex requirements, some highly structured assessment information management systems, and some assessment practitioners with psychometric training all pull strongly toward making things much more complicated than they need to be. That this new edition is well over 400 pages says a lot! This book has a whole chapter on keeping assessment cost-effective, especially in terms of time.


We're starting to recognize that, if assessment is to have real impact, results need to be synthesized into an overall picture of student learning. This edition stresses the need to sit back after looking through reams of assessment reports and ask, from a qualitative rather than a quantitative perspective: What are we doing well? In what ways is student learning most disappointing?


Pushback to assessment is moving from resistance to foot-dragging. The voices saying assessment can’t be done are growing quieter because we now have decades of experience doing assessment. But while more people are doing assessment, in too many cases they’re doing it only to comply with an accreditation mandate. Helping people move from getting assessment done to using it in meaningful ways remains a challenge. So the two chapters on culture in the second edition are now six.


Data visualization and learning analytics are changing how we share assessment results. These things are so new that this edition only touches on them. I think that they will be the biggest drivers in changes to assessment over the coming decade.

A New Paradigm for Assessment

Posted on May 21, 2017 at 6:10 AM

I was impressed with—and found myself in agreement with—Douglas Roscoe’s analysis of the state of assessment in higher education in “Toward an Improvement Paradigm for Academic Quality” in the Winter 2017 issue of Liberal Education. Like Douglas, I think the assessment movement has lost its way, and it’s time for a new paradigm. And Douglas’s improvement paradigm—which focuses on creating spaces for conversations on improving teaching and curricula, making assessment more purposeful and useful, and bringing other important information and ideas into the conversation—makes sense. Much of what he proposes is in fact echoed in Using Evidence of Student Learning to Improve Higher Education by George Kuh, Stanley Ikenberry, Natasha Jankowski, Timothy Cain, Peter Ewell, Pat Hutchings, and Jillian Kinzie.


But I don’t think his improvement paradigm goes far enough, so I propose a second, concurrent paradigm shift.


I've always felt that the assessment movement tried to do too much, too quickly. The assessment movement emerged from three concurrent forces. One was the U.S. federal government, which through a series of Higher Education Acts required Title IV gatekeeper accreditors to require the institutions they accredit to demonstrate that they were achieving their missions. Because the fundamental mission of an institution of higher education is, well, education, this was essentially a requirement that institutions demonstrate that their students were achieving their intended student learning outcomes.


The Higher Education Acts also required Title IV gatekeeper accreditors to require the institutions they accredit to demonstrate “success with respect to student achievement in relation to the institution’s mission, including, as appropriate, consideration of course completion, state licensing examinations, and job placement rates” (1998 Amendments to the Higher Education Act of 1965, Title IV, Part H, Sect. 492(b)(4)(E)). The examples in this statement imply that the federal government defines student achievement as a combination of student learning, course and degree completion, and job placement.


A second concurrent force was the movement from a teaching-centered to a learning-centered approach to higher education, encapsulated in Robert Barr and John Tagg's landmark 1995 article in Change, "From Teaching to Learning: A New Paradigm for Undergraduate Education." The learning-centered paradigm advocates, among other things, making undergraduate education an integrated learning experience—more than a collection of courses—that focuses on the development of lasting, transferable thinking skills rather than just basic conceptual understanding.


The third concurrent force was the growing body of research on practices that help students learn, persist, and succeed in higher education. Among these practices: students learn more effectively when they integrate and see coherence in their learning, when they participate in out-of-class activities that build on what they’re learning in the classroom, and when new learning is connected to prior experiences.


These three forces led to calls for a lot of concurrent, dramatic changes in U.S. higher education:

  • Defining quality by impact rather than effort—outcomes rather than processes and intent
  • Looking on undergraduate majors and general education curricula as integrated learning experiences rather than collections of courses
  • Adopting new research-informed teaching methods that are a 180-degree shift from lectures
  • Developing curricula, learning activities, and assessments that focus explicitly on important learning outcomes
  • Identifying learning outcomes not just for courses but for entire programs, general education curricula, and even entire institutions
  • Framing what we used to call extracurricular activities as co-curricular activities, connected purposefully to academic programs
  • Using rubrics rather than multiple choice tests to evaluate student learning
  • Working collaboratively, including across disciplinary and organizational lines, rather than independently


These are well-founded and important aims, but they are all things that many in higher education had never considered before. Now everyone was being asked to accept the need for all these changes, learn how to make these changes, and implement all these changes—and all at the same time. No wonder there’s been so much foot-dragging on assessment! And no wonder that, a generation into the assessment movement and unrelenting accreditation pressure, there are still great swaths of the higher education community who have not yet done much of this and who indeed remain oblivious to much of this.


What particularly troubles me is that we’ve spent too much time and effort on trying to create—and assess—integrated, coherent student learning experiences and, in doing so, left the grading process in the dust. Requiring everything to be part of an integrated, coherent learning experience can lead to pushing square pegs into round holes. Consider:

  • The transfer associate degrees offered by many community colleges, for example, aren’t really programs—they’re a collection of general education and cognate requirements that students complete so they’re prepared to start a major after they transfer. So identifying—or assessing—program learning outcomes for them frankly doesn’t make much sense.
  • The courses available to fulfill some general education requirements don’t really have much in common, so their shared general education outcomes become so broad as to be almost meaningless.
  • Some large universities are divided into separate colleges and schools, each with its own distinct mission and learning outcomes. Forcing these universities to identify institutional learning outcomes applicable to every program makes no sense—again, the outcomes must be so broad as to be almost meaningless.
  • The growing numbers of students who swirl through multiple colleges before earning a degree aren’t going to have a really integrated, coherent learning experience no matter how hard any of us tries.


At the same time, we have given short shrift to helping faculty learn how to develop and use good assessments in their own classes and how to use grading information to understand and improve their own teaching. In the hundreds of workshops and presentations I’ve done across the country, I often ask for a show of hands from faculty who routinely count how many students earned each score on each rubric criterion of a class assignment, so they can understand what students learned well and what they didn’t learn well. Invariably a tiny proportion raises their hands. When I work with faculty who use multiple choice tests, I ask how many use a test blueprint to plan their tests so they align with key course objectives, and it’s consistently a foreign concept to them.
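
(For what it's worth, the rubric tally I'm describing doesn't have to be done by hand. Here is a minimal sketch in Python, assuming the scores for one assignment sit in a CSV file with one row per student and one column per rubric criterion; the file name and criterion names are hypothetical.)

```python
# Minimal sketch (hypothetical file and column names): count how many students
# earned each score on each rubric criterion of a single class assignment.
import pandas as pd

# One row per student; columns such as thesis, evidence, organization, mechanics,
# each holding a rubric score (e.g., 1-4).
scores = pd.read_csv("essay_rubric_scores.csv")

for criterion in scores.columns:
    tally = scores[criterion].value_counts().sort_index()
    print(f"\n{criterion}")
    for score, n in tally.items():
        print(f"  score {score}: {n} student(s)")
```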


In short, we’ve left a vital part of the higher education experience—the grading process—in the dust. We invest more time in calibrating rubrics for assessing institutional learning outcomes, for example, than we do in calibrating grades. And grades have far more serious consequences to our students, employers, and society than assessments of program, general education, co-curricular, or institutional learning outcomes. Grades decide whether students progress to the next course in a sequence, whether they can transfer to another college, whether they graduate, whether they can pursue a more advanced degree, and in some cases whether they can find employment in their discipline.


So where should we go? My paradigm springs from visits to two Canadian institutions a few years ago. At that time Canadian quality assurance agencies did not have any requirements for assessing student learning, so my workshops focused solely on assessing learning more effectively in the classroom. The workshops were well received because they offered very practical help that faculty wanted and needed. And at the end of the workshops, faculty began suggesting that perhaps they should collaborate to talk about shared learning outcomes and how to teach and assess them. In other words, discussion of classroom learning outcomes began to flow into discussion of program learning outcomes. It's a naturalistic approach that I wish we in the United States had adopted decades ago.


What I now propose is moving to a focus on applying everything we’ve learned about curriculum design and assessment to the grading process in the classroom. In other words, my paradigm agrees with Roscoe’s that “assessment should be about changing what happens in the classroom—what students actually experience as they progress through their courses—so that learning is deeper and more consequential.” My paradigm emphasizes the following.

  1. Assessing program, general education, and institutional learning outcomes remains an assessment best practice. Those who have found value in these assessments would be encouraged to continue to engage in them and honored through mechanisms such as NILOA's Excellence in Assessment designation.
  2. Teaching excellence is defined in significant part by four criteria: (1) the use of research-informed teaching and curricular strategies, (2) the alignment of learning activities and grading criteria to stated course objectives, (3) the use of good quality evidence, including but not limited to assessment results from the grading process, to inform changes to one’s teaching, and (4) active participation in and application of professional development opportunities on teaching including assessment.
  3. Investments in professional development on research-informed teaching practices exceed investments in assessment.
  4. Assessment work is coordinated and supported by faculty professional development centers (teaching-learning centers) rather than offices of institutional effectiveness or accreditation, sending a powerful message that assessment is about improving teaching and learning, not fulfilling an external mandate.
  5. We aim to move from a paradigm of assessment, not just to one of improvement as Roscoe proposes, but to one of evidence-informed improvement—a culture in which the use of good quality evidence to inform discussions and decisions is expected and valued.
  6. If assessment is done well, it’s a natural part of the teaching-learning process, not a burdensome add-on responsibility. The extra work is in reporting it to accreditors. This extra work can’t be eliminated, but it can be minimized and made more meaningful by establishing the expectation that reports address only key learning outcomes in key courses (including program capstones), on a rotating schedule, and that course assessments are aggregated and analyzed within the program review process.


Under this paradigm, I think we have a much better shot at achieving what’s most important: giving every student the best possible education.

What does a new CAO survey tell us about the state of assessment?

Posted on January 26, 2017 at 8:40 AM

A new survey of chief academic officers (CAOs) conducted by Gallup and Inside Higher Ed led me to the sobering conclusion that, after a generation of work on assessment, we in U.S. higher education remain very, very far from pervasively conducting truly meaningful and worthwhile assessment.


Because we've been working on this for so long, I was deliberately tough as I reviewed the results of this survey. The survey asked CAOs to rate the effectiveness of their institutions on a variety of criteria using a scale of very effective, somewhat effective, not too effective, and not effective at all. The survey also asked CAOs to indicate their agreement with a variety of statements on a five-point scale, where 5 = strongly agree, 1 = strongly disagree, and the other points are undefined. By now I would have liked to see most CAOs rate their institutions at the top of the scale: either "very effective" or "strongly agree." So these are the results I focused on and, boy, are they depressing.


Quality of Assessment Work

Less than a third (30%) of CAOs say their institution is very effective in identifying and assessing student outcomes. ‘Nuff said on that! :(


Value of Assessment Work

Here the numbers are really dismal. Less than 10% (yes, ten percent, folks!) of CAOs strongly agree that:

  • Faculty members value assessment efforts at their college (4%).
  • The growth of assessment systems has improved the quality of teaching and learning at their college (7%).
  • Assessment has led to better use of technology in teaching and learning (6%). (Parenthetically, that struck me as an odd survey question; I had no idea that one of the purposes of assessment was to improve the use of technology in T&L!)


And just 12% strongly disagree that their college’s use of assessment is more about keeping accreditors and politicians happy than it is about teaching and learning.

 

And only 6% of CAOs strongly disagree that faculty at their college view assessment as requiring a lot of work on their parts. Here I’m reading something into the question that might not be there. If the survey asked if faculty view teaching as requiring a lot of work on their parts, I suspect that a much higher proportion of CAOs would disagree because, while teaching does require a lot of work, it’s what faculty generally find to be valuable work--it's what they are expected to do, after all. So I suspect that, if faculty saw value in their assessment work commensurate with the time they put into it, this number would be a lot higher.

 

Using Evidence to Inform Decisions

Here’s a conundrum:

  • Over two thirds (71%) of CAOs say their college makes effective use of data used to measure student outcomes,
  • But only about a quarter (26%) say their college is very effective in using data to aid and inform decision making.
  • And only 13% strongly agree that their college regularly makes changes in the curriculum, teaching practices, or student services based on what it finds through assessment.


 So I’m wondering what CAOs consider effective uses of assessment data!


 Furthermore,

  • About two thirds (67%) of CAOs say their college is very effective in providing a quality undergraduate education.
  • But less than half (48%) say it’s very effective in preparing students for the world of work,
  • And only about a quarter (27%) say it's very effective in preparing students to be engaged citizens.
  • And (as I've already noted) only 30% say it’s very effective in identifying and assessing student outcomes.


How can CAOs who admit their colleges are not very effective in preparing students for work or engaged citizenship, or in assessing student learning, nonetheless think their colleges are very effective in providing a quality undergraduate education? What evidence are they using to draw that conclusion?


And,

  • While less than half of CAOs say their colleges are very effective in preparing students for work,
  • Only about a third (32%) strongly agree that their institution is increasing attention to the ability of its degree programs to help students get a good job.


My Conclusions

After a quarter century of work to get everyone to do assessment well:

  • Assessment remains spotty; it is the very rare institution that is doing assessment pervasively and consistently well.
  • A lot of assessment work either isn’t very useful or takes more time than it’s worth.
  • We have not yet transformed American higher education into an enterprise that habitually uses evidence to inform decisions.

Fixing assessment in American higher education

Posted on May 7, 2016 at 9:00 AM

In my April 25 blog post, “Are our assessment processes broken?” I listed five key problems with assessment in the United States. Can we fix them? Yes, we can, primarily because today we have a number of organizations and entities that can tackle them, including (in no particular order):


 

Here are five steps that I think will dramatically improve the quality and effectiveness of student learning assessment in the United States.

 

1. Develop a common vocabulary. So much time is wasted debating the difference between a learning outcome and a learning objective, for example. The assessment movement is now mature enough that we can develop a common baseline glossary of those terms that continue to be muddy or confusing.

 

2. Define acceptable, good, and best assessment practices. Yes, many accreditors provide professional development for their membership and reviewers on assessment, but trainers often focus on best practices rather than minimally acceptable practices. This leads to reviewers unnecessarily “dinging” institutions on relatively minor points (say, learning outcomes don’t start with action words) while missing the forest of, say, piles of assessment evidence that aren’t being used meaningfully.

 

Specifically, we need to practice what we preach with one or more rubrics that list the essential elements of assessment and define best, good, acceptable, and unacceptable performance levels for each criterion. Fortunately, we have some good models to work from: NILOA's Excellence in Assessment designation criteria, CHEA's awards for Effective Institutional Practice in Student Learning Outcomes, recognition criteria developed by the (now defunct) New Leadership Alliance for Student Learning and Accountability, and rubrics developed by some accreditors. Then we need to educate institutions and accreditation reviewers on how to use the rubric(s).

 

3. Focus less on student learning assessment (and its cousins, student achievement, student success, and completion) and more on teaching and learning. I would love to see Lumina focus on excellent teaching (including engagement) as a primary strategy to achieve its completion agenda—getting more faculty to adopt research-informed pedagogies that help students learn and succeed. I’d also like to see accreditors include the use of research- and evidence-informed teaching practices in their definitions of educational excellence.

 

4. Communicate clearly and succinctly with various audiences what our students are learning and what we're doing to improve learning. I haven't yet found any institution that I think is doing this really well. Capella is the only one I’ve seen that impresses me, and even Capella only presents results, not what they're doing to improve learning. I'm intrigued by the concept of infographics and wish I'd studied graphic design! A partnership with some graphic designers (student interns or class project?) might help us come up with some effective ways to tell our complicated stories.

 

5. Focus public attention on student learning as an essential part of student success. As Richard DeMillo recently pointed out, we need to find meaningful alternatives to US News rankings that focus on what’s truly important—namely, student learning and success. The problem has always been that student learning and success are so complex that they can’t be summarized into a brief set of metrics.

But the U.S. Department of Education has opened an intriguing possibility. At the very end of his April 22 letter to accreditors, Undersecretary Ted Mitchell noted that “Accreditors may…develop tiers of recognition, with some institutions or programs denoted as achieving the standards at higher or lower levels than others.” Accreditors thus now have an opportunity to commend publicly those institutions that achieve certain standards at a (clearly defined) “best practice” level. Many standards would not be of high interest to most members of the public, and input-based standards (resources) would only continue to recognize the wealthiest institutions. But commendations for best practices in things like research- and evidence-informed teaching methods and student development programs, serving the public good, and meeting employer needs with well-prepared graduates (documented through assessment evidence and rigorous standards) could turn this around and focus everyone on what’s most important: making sure America’s college students get great educations.

A response to Bob Shireman on "inane" SLOs

Posted on April 10, 2016 at 8:55 AM

You may have seen Bob Shireman's essay "SLO Madness" in the April 7 issue of Inside Higher Ed or his report, "The Real Value of What Students Do in College." I sent him the following response today:


I first want to point out that I agree wholeheartedly with a number of your observations and conclusions.


1. As you point out, policy discussions too often “treat the question of quality—the actual teaching and learning—as an afterthought or as a footnote.” The Lumina Foundation and the federal government use the term “student achievement” to discuss only retention, graduation, and job placement rates, while the higher ed community wants to use it to discuss student learning as well.


2. Extensive research has confirmed that students' engagement in their learning impacts both learning and persistence. You cite Astin's 23-year-old study; it has since been validated and refined by research by Vincent Tinto, Patrick Terenzini, Ernest Pascarella, and the staff of the National Survey of Student Engagement, among many others.


3. At many colleges and universities, there’s little incentive for faculty to try to become truly great teachers who engage and inspire their students. Teaching quality is too often judged largely by student evaluations that may have little connection to research-informed teaching practices, and promotion and tenure decisions are too often based more on research productivity than teaching quality. This is because there’s more grant money for research than for teaching improvement. A report from Third Way noted that “For every $100 the federal government spends on university-led research, it spends 24 cents on teaching innovation at universities.”


4. We know through neuroscience research that memorized knowledge is quickly forgotten; thinking skills are the lasting learning of a college education.


5. “Critical thinking” is a nebulous term that, frankly, I’d like to banish from the higher ed lexicon. As you suggest, it’s an umbrella term for an array of thinking skills, including analysis, evaluation, synthesis, information literacy, creative thinking, problem solving, and more.


6. The best evidence of what students have learned is in their coursework—papers, projects, performances, portfolios—rather than what you call “fabricated outcome measures” such as published or standardized tests.


7. You call for accreditors to “validate colleges’ own quality-assurance systems,” which is exactly what they are already doing. Many colleges and universities offer hundreds of programs and thousands of courses; it’s impossible for any accreditation team to review them all. So evaluators often choose a random or representative sample, as you suggest.


8. Our accreditation processes are far from perfect. The decades-old American higher education culture of operating in independent silos and evaluating quality by looking at inputs rather than outcomes has proved to be a remarkably difficult ship to turn around, despite twenty years of earnest effort by accreditors. There are many reasons for this, which I discuss in my book Five Dimensions of Quality, but let me share two here. First, US News & World Report’s rankings are based overwhelmingly on inputs rather than outcomes; there’s a strong correlation with institutional age and wealth. Second, most accreditation evaluators are volunteers, and training resources for them are limited. (Remember that everyone in higher education is trying to keep costs down.)


9. Thus, despite a twenty-year focus by accreditors on requiring useful assessment of learning, there are still plenty of people at colleges and universities who don’t see merit in looking at outcomes meaningfully. They don’t engage in the process until accreditors come calling; they continue to have misconceptions about what they are to do and why; and they focus blindly on trying to give the accreditors whatever they think the accreditors want rather than using assessment as an opportunity to look at teaching and learning usefully. This has led to some of your sad anecdotes about convoluted, meaningless processes. Using Evidence of Student Learning to Improve Higher Education, a book by George Kuh and his colleagues, is full of great ideas on how to turn this culture around and make assessment work truly meaningful and useful to faculty.


10. Your call for reviews of majors and courses is sound and, indeed, a number of regional accreditors and state systems already require academic programs to engage in periodic “program review.” There’s room for improvement, however. Many program reviews follow the old “inputs” model, counting library collections, faculty credentials, lab facilities, and the like and do not yet focus sufficiently on student learning.

 

Your report has some fundamental misperceptions, however. Chief among them is your assertion that the three step assessment process—declare goals, seek evidence of student achievement of them, and improve instruction based on the results—“hasn’t worked out that way. Not even close.” Today there are faculty and staff at colleges and universities throughout the country who have completed these three steps successfully and meaningfully. Some of these stories are documented in the periodical Assessment Update, some are documented on the website of the National Institute for Learning Outcomes Assessment (www.learningoutcomeassessment.org), some are documented by the staff of the National Survey of Student Engagement, and many more are documented in reports to accreditors.


In dismissing student learning outcomes as “meaningless blurbs” that are the key flaw in this three-step process, you are dismissing what a college education is all about and what we need to verify. Student learning outcomes are simply an attempt to articulate what we most want students to get out of their college education. Contrary to your assertion that “trying to distill the infinitely varied outcomes down to a list… likely undermines the quality of the educational activities,” research has shown that students learn more effectively when they understand course and program learning outcomes.


Furthermore, without a clear understanding of what we most want students to learn, assessment is meaningless. You note that “in college people do gain ‘knowledge’ and they gain ‘skills,’” but are they gaining the right knowledge and skills? Are they acquiring the specific abilities they most need “to function in society and in a workspace,” as you put it? While, as you point out, every student’s higher education experience is unique, there is nonetheless a core of competencies that we should expect of all college graduates and whose achievement we should verify. Employers consistently say that they want to hire college graduates who can:

• Collaborate and work in teams

• Articulate ideas clearly and effectively

• Solve real-world problems

• Evaluate information and conclusions

• Be flexible and adapt to change

• Be creative and innovative

• Work with people from diverse cultural backgrounds

• Make ethical judgments

• Understand numbers and statistics

 

Employers expect colleges and universities to ensure that every student, regardless of his or her unique experience, can do these things at an appropriate level of competency.


You’re absolutely correct that we need to focus on examining student work (and we do), but how should we decide whether the work is excellent or inadequate? For example, everyone wants college graduates to write well, but what exactly are the characteristics of good writing at the senior level? Student learning outcomes, explicated into rubrics (scoring guides) that elucidate the learning outcomes and define excellent, adequate, and unsatisfactory performance levels, are vital to making this determination.


You don’t mention rubrics in your paper, so I can’t tell if you’re familiar with them, but in the last twenty years they have revolutionized American higher education. When student work is evaluated according to clearly articulated criteria, the evaluations are fairer and more consistent. Higher education curriculum and pedagogy experts such as Mary-Ann Winkelmes, Barbara Walvoord, Virginia Anderson, and L. Dee Fink have shown that, when students understand what they are to learn from an assignment (the learning outcomes), when the assignment is designed to help them achieve those outcomes, and when their work is graded according to how well they demonstrate achievement of those outcomes, they learn far more effectively. When faculty collaborate to identify shared learning outcomes that students develop in multiple courses, they develop a more cohesive curriculum that again leads to better learning.


Beyond having clear, integrated learning outcomes, there’s another critical aspect of excellent teaching and learning: if faculty aren’t teaching something, students probably aren’t learning it. This is where curriculum maps come in; they’re a tool to ensure that students do indeed have enough opportunity to achieve a particular outcome. One college that I worked with, for example, identified (and defined) ethical reasoning as an important outcome for all its students, regardless of major. But a curriculum map revealed that very few students took any courses that helped them develop ethical reasoning skills. The faculty changed curricular requirements to correct this and ensure that every student, regardless of major, graduated with the ethical reasoning skills that both they and employers value.


I appreciate anyone who tries to come up with solutions to the challenges we face, but I must point out that your thoughts on program review may be impractical. External reviews are difficult and expensive. Keep in mind that larger universities may offer hundreds of programs and thousands of courses, and for many programs it can be remarkably hard—and expensive—to find a truly impartial, well-trained external expert.


Similarly, while a number of colleges and universities already subject student work to separate, independent reviews, this can be another difficult, expensive endeavor. With college costs skyrocketing, I question the cost-benefit: are these colleges learning enough from these reviews to make the time, work, and expense worthwhile?


I would add one item to your wish list, by the way: I’d like to see every accreditor require its colleges and universities to expect faculty to use research-informed teaching practices, including engagement strategies, and to evaluate faculty teaching effectiveness on their use of those practices.


But my chief takeaway from your report is not about its shortcomings but about how the American higher education community has failed to tell you, other policy thought leaders, and government policy makers what we do and how well we do it. Part of the problem is that, because American higher education is so huge and complex, we have a complicated, messy story to tell. None of you has time to do a thorough review of the many books, reports, conferences, and websites that explain what we are trying to do and how effective we are. We have to figure out a way to tell our very complex story in short, simple ways that busy people can digest quickly.

A lot to be thankful for

Posted on December 21, 2014 at 8:00 AM

I am so grateful to Paul Fain at Inside Higher Ed for interviewing me on my new book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability. I’m also deeply grateful to Taskstream for letting me share highlights of the book with hundreds of people through a webinar in October. And I am honored and humbled by the many people who have shared the news of my new book through LinkedIn, Twitter, Amazon reviews, and thoughtful comments on Paul’s interview.

 

But these are only the tip of the iceberg. As this year comes to a close, there’s a lot that all of us working on assessment and accreditation have to be thankful for. Compared to when I became actively involved with the assessment movement 15 years ago:

 

  • Faculty and administrators have an increasingly good understanding of assessment basics. Many know what learning outcomes and rubrics are, for example.
  • As more people understand assessment and are doing assessment, there’s less pushback. It’s harder for people to say this can’t be done or isn’t relevant when their colleagues are doing this and finding it relevant and useful.
  • Most accreditors are continuing to focus more on higher education outcomes than inputs.
  • We now have an increasingly impressive array of resources to help with assessment, accreditation, and accountability, including tools, technologies, research, and networks of practitioners who are always generous with help, advice, and support.
  • Because of all of this, most colleges now have quite a mass of data and information--perhaps not yet the best quality, but often appreciably better than what they had 10 or 20 years ago.

 

One thing hasn’t changed over the years, however, and that’s the dedication of people working in higher education. Yes, I’ve written about assessment bullies and stonewallers, but the vast majority of people with whom I’ve worked are fully committed to helping their students learn and succeed. They often accomplish miracles despite being overworked, underpaid, and underresourced. They are the reason that American higher education is as good as it is, and I am thankful to have the opportunity to work with them.

Is the word "assessment" hurting us?

Posted on October 26, 2014 at 6:00 AM

I am so grateful to Taskstream for hosting Thursday’s webinar, “Five Dimensions of Quality: A Common Sense Webinar on Accreditation and Accountability.” In the webinar, I explained the model of higher education quality that I’m advocating in my new book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability, released last week by Jossey-Bass.

 

One of the things I’m trying to do in the book is shift the vocabulary we are using. I’m getting tired of the word “assessment” because to many people it means the assessment process rather than the purpose and results of assessment. So in my book I refer not to a culture of assessment but to a culture of evidence. Similarly, I’m avoiding the word “improvement” because to many people it means tweaking what we’re doing around the edges, not rethinking what we’re doing to become more responsive and relevant. So in my book I refer not to a culture of improvement but to a culture of betterment. And, while almost all accreditors require goals and plans, I’ve found that many colleges have too many goals and too many plans and that they emphasize preserving the status quo over moving to new levels of excellence. So one of the dimensions of quality in my book is not “goals and plans” but focus and aspiration.


If you weren't able to join the webinar, you can view it at https://www1.taskstream.com/webinars/five-dimensions-of-quality-webinar/.


For more information on the book, including the table of contents, and to order a copy, visit Jossey-Bass.

Let's stop the jargon!

Posted on June 24, 2014 at 7:05 AM

I recently read something about assessment that mentioned “andragogy.” Huh? I went scurrying to my dictionary and (fess-up time) Wikipedia, and I learned that the “ped” in pedagogy refers to children, just as in “pediatrics.” So even though the dictionary defines pedagogy as the art and science of teaching—no mention of children—some are advocating using the term “andragogy” to refer to teaching adults, which is what most colleges do.


I read this while I was reviewing the copyedit of my new book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability, which used the term “pedagogy” several times. Did I substitute “andragogy”? Heck, no! I did remove “pedagogy”—I don’t want to offend anyone who’s touchy about this—but I substituted plain old “teaching methods.”

 

We use so much jargon that my new book is sprinkled with “Jargon Alerts”: sidebars that explain much of the obfuscating jargon we use: “direct evidence,” “curriculum alignment,” “performance indicators,” “information literacy,” and “reliability,” to name just a few. Now that I think of it, I didn’t include “artifact,” which always makes me chuckle—how many faculty hear it and think they have to be archeological experts in order to do assessment?


I wrote a recent blog post about assessment bullies, and I think the use of jargon can be a bullying tactic. It’s a way of saying, “I know more than you, so you should defer to me.” I’m not saying all assessment practitioners are bullies, of course, but many of us, however well-intended, are guilty of using too much jargon. Can we all pledge to use plain English as much as possible?