Linda Suskie

  A Common Sense Approach to Assessment & Accreditation

Blog

Should assessments be conducted on a cycle?

Posted on July 30, 2018 at 8:20 AM

I often hear questions about how long an “assessment cycle” should be. Fair warning: I don’t think you’re going to like my answer.


The underlying premise of the concept of an assessment cycle is that assessment of key program, general education, or institutional learning goals is too burdensome to be completed in its entirety every year, so it’s okay for assessments to be staggered across two or more years. Let’s unpack that premise a bit.


First, know that if an accreditor finds an institution or program out of compliance with even one of its standards—including assessment—Federal regulations mandate that the accreditor can give the institution no more than two years to come into compliance. (Yes, the accreditor can extend those two years for “good cause,” but let’s not count on that.) So an institution that has done nothing with assessment has a maximum of two years to come into compliance, which often means not just planning assessments but conducting them, analyzing the results, and using the results to inform decisions. I’ve worked with institutions in this situation and, yes, it can be done. So an assessment cycle, if there is one, should generally run no longer than two years.


Now consider the possibility that you’ve assessed an important learning goal, and the results are terrible. Perhaps you learn that many students can’t write coherently, or they can’t analyze information or make a coherent argument. Do you really want to wait two, three, or five years to see if subsequent students are doing better? I’d hope not! I’d like to see learning goals with poor results put on red alert, with prompt actions so students quickly start doing better and prompt re-assessments to confirm that.


Now let’s consider the premise that assessments are too burdensome for them all to be conducted annually. If your learning goals are truly important, faculty should be teaching them in every course that addresses them. They should be giving students learning activities and assignments on those goals; they should be grading students on those goals; they should be reviewing the results of their tests and rubrics; and they should be using the results of their review to understand and improve student learning in their courses. So, once things are up and running, there really shouldn’t be much extra burden in assessing important learning goals. The burdens are cranking out those dreaded assessment reports and finding time to get together with colleagues to review and discuss the results collaboratively. Those burdens are best addressed by minimizing the work of preparing those reports and by helping faculty carve out time to talk.


Now let’s consider the idea that an assessment cycle should stagger the goals being assessed. That implies that every learning goal is discrete and that it needs its own, separate assessment. In reality, learning goals are interrelated; how can one learn to write without also learning to think critically? And we know that capstone assignments—in which students work on several learning goals at once—are not only great opportunities for students to integrate and synthesize their learning but also great assessment opportunities, because we can look at student achievement of several learning goals all at once.


Then there’s the message we send when we tell faculty they need to conduct a particular assessment only once every three, four, or five years: assessment is a burdensome add-on, not part of our normal everyday work. In reality, assessment is (or should be) part of the normal teaching-learning process.


And then there are the practicalities of conducting an assessment only once every few years. Chances are that the work done a few years ago will have vanished or at least collective memory will have evaporated (why on earth did we do that assessment?). Assessment wheels must be reinvented, which can be more work than tweaking last year’s process.


So should assessments be conducted on a fixed cycle? In my opinion, no. Instead:


  • Use capstone assignments to look at multiple goals simultaneously.
  • If you’re getting started with assessment, assess everything, now. You’ve been dragging your feet too long already, and you’re risking an accreditation action. Remember you must not only have results but be using them within two years.
  • If you’ve got disappointing results, move additional assessments of those learning goals to a front burner, assessing them frequently until you get results where you want them.
  • If you’ve got terrific results, consider moving assessments of those learning goals to a back burner, perhaps every two years or so, just to make sure results aren’t slipping. This frees up time to focus on the learning goals that need time and attention.
  • If assessment work is widely viewed as burdensome, it’s because its cost-benefit is out of whack. Perhaps assessment processes are too complicated, or people view the learning goals being assessed as relatively unimportant, or the results aren’t adding useful insight. Do all you can to simplify assessment work, especially reporting. If people don't find a particular assessment useful, stop doing it and do something else instead.
  • If assessment work must be staggered, stagger some of your indirect assessment tools, not the learning goals or major direct assessments. An alumni survey or student survey might be conducted every three years, for example.
  • For programs that “get” assessment and are conducting it routinely, ask for less frequent reports, perhaps every two or three years instead of annually. It’s a win-win reward: less work for them and less work for those charged with reviewing and offering feedback on assessment reports.

Categories: How to assess, Good assessment, Assessment culture


2 Comments

Linda Suskie says...
2:34 PM on August 5, 2018 
Clifton, you're on the right track here. The time and effort put into assessment should be in proportion to the potential consequences. If an assessment may lead to a decision to make a major investment, such as a lot of new hires, it should be approached more thoughtfully than one that might lead, at most, to a decision to modify homework assignments. But I don't agree that some assessments, such as institutional-level assessment, should occur only once every five years or so. A good college will want to collect interim measures to make sure it's on track to meet its targets at the end of the strategic plan. I wouldn't want to wait until the end of the plan to find out that the institution failed in achieving it.

Clifton Franklund says...
10:00 AM on July 31, 2018 
You are definitely correct about the need to stagger assessment. Faculty and other stakeholders need time to reflect upon the findings and probe the data a bit before reacting to them. This is difficult if several different outcomes are all simultaneously evaluated. I also think that the speed of the assessment "cycle" ought to be directly related to the scale of the issue being addressed: small-scale questions can be evaluated more quickly than large-scale questions. For example, if you want to know if a new assignment in English 150 is better than the old one (and you have 24 sections of the course), you could easily complete the entire cycle in one semester. If, however, you want to know how the entire course is doing with regard to student learning, it might take a couple of semesters. Program-level assessment is where you will likely end up using two years to get good data. This is due to the scale of the problems, the number of stakeholders involved, the number of students involved, and the inherent variance in the data. Institutional-level assessment would probably only occur every five years or so (coincident with the strategic plan). I'm not saying that this is a hard and fast rule, but I think that the pattern holds fairly well.