Linda Suskie

  A Common Sense Approach to Assessment & Accreditation

Blog


I'm not a fan of Bloom's

Posted on November 13, 2018 at 6:50 AM

I’m mystified by how Bloom’s taxonomy has pervaded the higher education assessment landscape. I’ve met faculty who have no idea what a rubric or a test blueprint or a curriculum map is, but it’s been burned into their brains that they must follow Bloom’s taxonomy when developing learning goals. This frustrates me no end, because I don’t think Bloom’s is the best framework for considering learning outcomes in higher education.


Bloom’s taxonomy of educational objectives is probably older than you are. It was developed by Benjamin Bloom and his colleagues in the 1950s. It divides learning goals into three domains: cognitive, affective (attitudinal), and psychomotor. Within the cognitive domain, it has six levels. Originally these were knowledge, comprehension, application, analysis, synthesis, and evaluation. A 2001 update renamed these levels and swapped the positions of the last two: remember, understand, apply, analyze, evaluate, and create. The last four levels are called higher-order thinking skills because they require students to do more than understand.


So why don’t I like Bloom’s? One reason is that I’ve seen too many faculty erroneously view the six cognitive levels as a hierarchy of prerequisites. Faculty have told me, for example, that first-year courses can only address knowledge and comprehension because students must thoroughly understand a subject before they can begin to think about it. Well, any elementary school teacher can tell you that’s bunk, but the misperception persists.


Even more important is that Bloom’s doesn’t highlight many of the skills and dispositions needed today. Teamwork, ethical judgment, professionalism, and metacognition are all examples of learning goals that don’t fit neatly into Bloom’s. That’s because they’re a combination of the cognitive and affective domains: what educators such as Costa & Kallick and Marzano and his colleagues call habits of mind.


I’m especially concerned about professionalism: coming to work or class on time, coming to work or class prepared to work, completing work on time, planning one’s time, giving work one’s best effort, self-evaluating one’s work, etc. Employers very much want these skills, but they get short shrift in Bloom’s.


So what do I recommend instead? In my workshops I suggest five categories of learning goals:

  • knowledge and understanding
  • career-specific thinking and performance skills
  • transferable thinking and performance skills (the kinds developed in the liberal arts)
  • attitudes and values
  • habits of mind

But I also like the taxonomies developed by Dee Fink and by Marzano et al.


I wouldn’t expect every course or program to have learning goals in all five of these categories, of course. But I do suggest that no more than half of a course or program’s learning goals be in the knowledge and understanding category.


For more information, see Chapter 4 (Learning Goals: Articulating What You Most Want Students to Learn) in the new 3rd edition of my book Assessing Student Learning: A Common Sense Guide.

Grading group work

Posted on October 27, 2018 at 10:30 AM

Collaborative learning, better known as group work, is an important way for students to learn. Some students learn better with their peers than by working alone. And employers very much want employees who bring teamwork skills.


But group work, such as a group presentation, is one of the hardest things for faculty to grade fairly. One reason is that many student groups include some slackers and some overactive eager beavers. When viewing the product of a group assignment—say they’ve been asked to work together to create a website—it can be hard to discern the quality of individual students’ achievements fairly.


Another reason is that group work is often more about performances than products—the teamwork skills each student demonstrates. As I note in Chapter 21 of Assessing Student Learning: A Common Sense Guide, performances such as working in a team or delivering a group presentation are harder to assess than products such as a paper.


In their book Collaborative Learning Techniques: A Handbook for College Faculty, Elizabeth Barkley, Claire Major, and K. Patricia Cross acknowledge that grading collaborative learning fairly and validly can be challenging. But it’s not impossible. Here are some suggestions.


Have clear learning goal(s) for the assignment. If your key learning goal is for students to develop teamwork skills, your assessment strategy will be very different than if your learning goal is for them to learn how to create a well-designed website.


Make sure your curriculum includes plenty of opportunities for students to develop and achieve your learning goal. If your key learning goal is for students to develop teamwork skills, for example, you’ll need to provide lessons, classwork, and homework that help them learn what good and poor teamwork skills are and practice those skills. Just putting students into a group and letting them fend for themselves won’t cut it—students will just keep using whatever bad teamwork habits they brought with them.


Deal with the slackers--and the overactive eager beavers--proactively. Barkley, Major and Cross suggest several ways to do this. Design a group assignment in which each group member must make a discrete contribution for which they’re held accountable. Make these contributions equitable, so all students must participate evenly. Make clear to students that they’ll be graded for their own contribution as well as for the overall group performance or product. And check in with each group periodically and, if necessary, speak individually with any slackers and also those eager beavers who try to do everything themselves.


Consider observing student groups working together. This isn’t always practical, of course—your presence may stifle the group’s interactions—but it’s one way to assess each student’s teamwork skills. Use a rubric to record what you see. Since you’re observing several students simultaneously, keep the rubric simple enough to be manageable—maybe a rating scale rubric or a structured observation guide, both of which are discussed in the rubrics chapter of Assessing Student Learning.


Consider asking students to rate each other. Exhibit 21.1 in Assessing Student Learning is a rating scale rubric I’ve used for this purpose. I tell students that their groupmates’ ratings of them will be averaged and be 5% of their final grade. I weight peer ratings very low because I don’t want students’ grades to be advantaged or disadvantaged by any biases of their peers.


Give each student two grades: one grade for the group product or performance and one for his or her individual contribution to it. This only works when it’s easy to discern each student’s contribution. You can weight the two grades however you like—perhaps equally, or perhaps weighting the group product or performance more heavily than individual contributions, or vice versa.
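
To make the weighting arithmetic concrete, here is a minimal sketch in Python. The 50/45/5 split and the sample scores are purely hypothetical, not a recommendation; the 5% peer-rating weight simply echoes the example above.

  # Illustrative only: hypothetical weights for combining a group grade, an
  # individual-contribution grade, and averaged peer ratings (all on a 0-100 scale).
  GROUP_WEIGHT = 0.50       # grade for the group product or performance
  INDIVIDUAL_WEIGHT = 0.45  # grade for the student's own contribution
  PEER_WEIGHT = 0.05        # averaged groupmate ratings, deliberately weighted low

  def assignment_grade(group_grade, individual_grade, peer_ratings):
      """Combine the three components into one assignment grade."""
      peer_average = sum(peer_ratings) / len(peer_ratings)
      return (GROUP_WEIGHT * group_grade
              + INDIVIDUAL_WEIGHT * individual_grade
              + PEER_WEIGHT * peer_average)

  # Example: strong group product, weaker individual contribution, mixed peer ratings
  print(round(assignment_grade(92, 80, [85, 70, 90]), 1))  # prints 86.1

Whatever split you choose, telling students the weights up front reinforces the point that both the group’s work and their own contributions count.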


Give the group a total number of points, and let them decide how to divide those points among group members. Some faculty have told me they’ve used this approach and it works well.


Barkley, Major and Cross point out that there’s a natural tension between promoting collaborative learning and teamwork and assigning individual grades. Whatever approach you choose, try to minimize this tension as much as you can.

Consider professionalism as a learning goal

Posted on September 23, 2018 at 10:35 AM

A recent Inside Higher Ed piece, “The Contamination of Student Assessment” by Jay Sterling Silver, argued that behaviors such as class attendance and class participation shouldn’t be factored into grades because grades should be “unadulterated measurements of knowledge and skills that we represent them to be—and that employers and graduate admissions committees rely on them to be.” In other words, these behaviors are unrelated to key learning goals.


He’s got a point; a grade should reflect achievement of key learning goals. (That’s what competency-based education tries to achieve, as I discussed in a blog post several years ago.) But I think behaviors like coming to class, submitting work on time, giving assignments one’s best effort, and participating in class discussions are important. They fall under what I call professionalism: traits that include coming to work on time and prepared to work, dependably completing assigned work thoroughly and on time, giving one’s work one’s best effort, and managing one’s time.


Surveys of employers confirm that these are important traits in the people they hire. Every few years, for example, Hart Research Associates conducts a survey for AAC&U on how well employers think college graduates are prepared on a number of key learning outcomes. The 2018 survey added two learning outcomes that weren’t in previous surveys:

  • Self-motivated: ability to take initiative and be proactive
  • Work independently: set priorities, manage time and deadlines

Of the 15 learning outcomes in the 2018 survey, hiring managers rated these two as tied for #4 in importance.


So I think the answer is to add professionalism as an additional learning goal. Of course “professionalism” isn’t a well-stated learning goal; it’s a category. I leave it up to college and program faculty to decide how best to articulate what forms of professionalism are most important to their students and prospective employers.


Then an assignment like a library research paper might have three learning goals—information literacy, writing, and professionalism—and be graded on all three. Professionalism might be demonstrated by how well students followed the directions, whether the assignment was turned in on time, and whether the student went above and beyond the bare minimum requirements for the assignment.


Professionalism, by the way, isn’t just a skill and isn’t just an attitude. It’s a combination of both, similar to what Arthur Costa and Bea Kallick call habits of mind, which include things like persisting, managing impulsivity, taking responsible risks, and striving for accuracy. One of the reasons I’m not a fan of Bloom’s taxonomy is that it doesn’t really address habits of mind, which—as evidenced by Hart’s new survey—are becoming increasingly important learning goals of a college education.

Should rubrics be assignment-specific?

Posted on September 2, 2018 at 8:25 AM

In a recent guest post in Inside Higher Ed, “What Students See in Rubrics,” Denise Krane explained her dissatisfaction with rubrics, which can be boiled down to this statement toward the end of her post: “Ideally, rubrics are assignment specific.”


I don’t know where Denise got this idea, but it’s flat-out wrong. As I’ve mentioned in previous blog posts on rubrics, a couple of years ago I conducted a literature review for a chapter on rubric development that I wrote for the second edition of the Handbook of Measurement, Assessment, and Evaluation in Higher Education. The rubric experts I found (for example, Brookhart; Lane; Linn, Baker & Dunbar; and Messick) are unanimous in advocating what they call general rubrics over what they call task-specific rubrics: rubrics that assess achievement of the assignment’s learning outcomes rather than achievement of the task at hand.


Their reason is exactly what Denise advocates: we want students to focus on long-term, deep learning—in the case of writing, to develop the tools to, as Denise says, grapple with writing in general. Indeed, some experts such as Lane posit that one of the criteria of a valid rubric is its generalizability: it should tell you how well students can write (or think, or solve problems) across a range of tasks, not just the one being assessed. If you use a task-specific rubric, students will learn how to do that one task but not much more. If you use a general rubric, students will learn skills they can use in whole families of tasks.


To be fair, the experts also caution against general rubrics that are too general, such as one writing rubric used to assess student work in courses and programs across an entire college. Many experts (for example, Cooper, Freedman, Lane, and Lloyd-Jones) suggest developing rubrics for families of related assignments—perhaps one for academic writing in the humanities and another for business writing. This lets the rubric include discipline-specific nuances. For example, academic writing in the humanities is often expansive, while business writing must be succinct.


How do you move from a task-specific rubric to a general rubric? It’s all about the traits being assessed—those things listed on the left side of the rubric. Those things should be traits of the learning outcomes being assessed, not the assignment. So instead of listing each element of the assignment (I’ve seen rubrics that literally list “opening paragraph,” “second paragraph,” and so on), list each key trait of the learning goals. When I taught writing, for example, my rubric included traits like focus, organization, and sentence structure.


Over the last few months I’ve worked with a lot of faculty on creating rubrics, and I’ve seen that moving from a task-specific to a general rubric can be remarkably difficult. One reason is that faculty want students to complete the assignment correctly: Did they provide three examples? Did they cite five sources? If this is important, I suggest making “Following directions” one of the learning outcomes of the assignment and including it as a trait assessed by the rubric. Then create a separate checklist of all the components of the assignment. Ask students to complete the checklist themselves before submitting the assignment. Also consider asking students to pair up and complete checklists for each other’s assignments.


To identify the other traits assessed by the rubric, ask yourself, “What does good writing/problem solving/critical thinking/presenting look like?” Focus not on this assignment but on why you’re giving students the assignment. What do you want them to learn from this assignment that they can use in subsequent courses or after they graduate?


Denise mentioned two other things about rubrics that I’d also like to address. She surveyed her students about their perceptions of rubrics, and one complaint was that faculty expectations vary from one professor to another. The problem here is lack of collaboration. Faculty teaching sections of the same course--or related courses--should collaborate on a common rubric that they all use to grade student work. This lets students work on the same important skill over and over again in varying course contexts and see connections in their learning. If one professor wants to emphasize something above and beyond the common rubric, fine. The common elements can be the top half of the rubric, and the professor-specific elements can be the bottom half.


Denise also mentioned that her rubric ran three pages, and she hated that. I would too! Long rubrics focus on the trees rather than the forest of what we’re trying to help students learn. A shorter rubric (I recommend that rubrics fit on one page) focuses students on the most important things they’re supposed to be learning. If it frustrates you that your rubric doesn’t include everything you want to assess, keep in mind that no assessment can assess everything. Even a comprehensive final exam can’t ask every conceivable question. Just make sure that your rubric, like your exam, focuses on the most important things you want students to learn.


If you’re interested in a deeper dive into what I learned about rubrics, here are some of my past blog posts. My book chapter in the Handbook has the full citations of the authors I've mentioned here.

Is This a Rubric? 

Can Rubrics Impede Learning? 

Rubrics: Not Too Broad, Not Too Narrow 

What is a Good Rubric? 

What is a Rubric? 

Is assessment worth it?

Posted on August 14, 2018 at 8:50 AM

A while back, a faculty member teaching in a community college career program told me, “I don’t need to assess. I know what my students are having problems with—math.”


Well, maybe so, but I’ve found that my perceptions often don’t match reality, and systematic evidence gives me better insight. Let me give you a couple of examples.


Example #1: you may have noticed that my website blog page now has an index of sorts on the right side. I created it a few months ago, and what I found really surprised me. I aim for practical advice on the kinds of assessment issues that people commonly face. Beforehand I’d been feeling pretty good about the range and relevance of assessment topics that I’d covered. The index showed that, yes, I’d done lots of posts on how to assess and specifically on rubrics, a pet interest of mine. I was pleasantly surprised by the number of posts I’d done on sharing and using results.


But what shocked me was how little I’d written on assessment culture: only four posts in five years! Compare that with seventeen posts on curriculum design and teaching. Assessment culture is an enormous issue for assessment practitioners. Now that I know the short shrift I’d been giving it, I’ve written several more blog posts related to assessment culture, bringing the total to ten (including this post).


(By the way, if there’s anything you’d like to see a blog post on, let me know!)


Example #2: Earlier this summer I noticed that some of the flowering plants in my backyard weren’t blooming much. I did a shade study: one sunny day when I was home all day, every hour I made notes on which plants were in sun and which were in shade. I’d done this about five years ago but, as with the blog index, the results shocked me; some trees and shrubs had grown a lot bigger in five years and consequently some spots in my yard were now almost entirely in shade. No wonder those flowers didn’t bloom! I’ll be moving around a lot of perennials this fall to get them into sunnier spots.


So, yes, I’m a big fan of using systematic evidence to inform decisions. I’ve seen too often that our perceptions may not match reality.


But let’s go back to that professor whose students were having problems with math and give him the benefit of the doubt—maybe he’s right. My question to him was, “What are you doing about it?” The response was a shoulder shrug. His was one of many institutions with an assessment office but no faculty teaching-learning center. In other words, they’re investing more in assessment than in teaching. He had nowhere to turn for help.


My point here is that assessment is worthwhile only if the results are used to make meaningful improvements to curricula and teaching methods. Furthermore, assessment work is worthwhile only if the impact is in proportion to the time and effort spent on the assessment. I recently worked with an institution that undertook an elaborate assessment of three general education learning outcomes, in which student artifacts were sampled from a variety of courses and scored by a committee of trained reviewers. The results were pretty dismal—on average only about two thirds of students were deemed “proficient” on the competencies’ traits. But the institutional community is apparently unwilling to engage with this evidence, so nothing will be done beyond repeating the assessment in a couple of years. Such an assessment is far from worthwhile; it’s a waste of everyone’s time.


This institution is hardly alone. When I was working on the new 3rd edition of my book Assessing Student Learning: A Common Sense Guide, I searched far and wide for examples of assessments whose results led to broad-based change and found only a handful. Overwhelmingly, the changes I see are what I call minor tweaks, such as rewriting an assignment or adding more homework. These changes can be good—collectively they can add up to a sizable impact. But the assessments leading to these kinds of changes are worthwhile only if they’re very simple, quick assessments in proportion to the minor tweaks they bring about.


So is assessment worth it? It’s a mixed bag. On one hand, the time and effort devoted to some assessments aren’t worth it—the findings don’t have much impact. On the other hand, however, I remain convinced of the value of using systematic evidence to inform decisions affecting student learning. Assessment has enormous potential to move us from providing a good education to providing a truly great education. The keys to achieving this are commitments to (1) making that good-to-great transformation, (2) using systematic evidence to inform decisions large and small, and (3) doing only assessments whose impact is likely to be in proportion to the time, effort, and resources spent on them.

Should assessments be conducted on a cycle?

Posted on July 30, 2018 at 8:20 AM

I often hear questions about how long an “assessment cycle” should be. Fair warning: I don’t think you’re going to like my answer.


The underlying premise of the concept of an assessment cycle is that assessment of key program, general education, or institutional learning goals is too burdensome to be completed in its entirety every year, so it’s okay for assessments to be staggered across two or more years. Let’s unpack that premise a bit.


First, know that if an accreditor finds an institution or program out of compliance with even one of its standards—including assessment—Federal regulations mandate that the accreditor can give the institution no more than two years to come into compliance. (Yes, the accreditor can extend those two years for “good cause,” but let’s not count on that.) So an institution that has done nothing with assessment has a maximum of two years to come into compliance, which often means not just planning assessments but conducting them, analyzing the results, and using the results to inform decisions. I’ve worked with institutions in this situation and, yes, it can be done. So an assessment cycle, if there is one, should generally run no longer than two years.


Now consider the possibility that you’ve assessed an important learning goal, and the results are terrible. Perhaps you learn that many students can’t write coherently, or they can’t analyze information or make a coherent argument. Do you really want to wait two, three, or five years to see if subsequent students are doing better? I’d hope not! I’d like to see learning goals with poor results put on red alert, with prompt actions so students quickly start doing better and prompt re-assessments to confirm that.


Now let’s consider the premise that assessments are too burdensome for them all to be conducted annually. If your learning goals are truly important, faculty should be teaching them in every course that addresses them. They should be giving students learning activities and assignments on those goals; they should be grading students on those goals; they should be reviewing the results of their tests and rubrics; and they should be using the results of their review to understand and improve student learning in their courses. So, once things are up and running, there really shouldn’t be much extra burden in assessing important learning goals. The burdens are cranking out those dreaded assessment reports and finding time to get together with colleagues to review and discuss the results collaboratively. Those burdens are best addressed by minimizing the work of preparing those reports and by helping faculty carve out time to talk.


Now let’s consider the idea that an assessment cycle should stagger the goals being assessed. That implies that every learning goal is discrete and that it needs its own, separate assessment. In reality, learning goals are interrelated; how can one learn to write without also learning to think critically? And we know that capstone assignments—in which students work on several learning goals at once—are not only great opportunities for students to integrate and synthesize their learning but also great assessment opportunities, because we can look at student achievement of several learning goals all at once.


Then there’s the message we send when we tell faculty they need to conduct a particular assessment only once every three, four, or five years: assessment is a burdensome add-on, not part of our normal everyday work. In reality, assessment is (or should be) part of the normal teaching-learning process.


And then there are the practicalities of conducting an assessment only once every few years. Chances are that the work done a few years ago will have vanished or at least collective memory will have evaporated (why on earth did we do that assessment?). Assessment wheels must be reinvented, which can be more work than tweaking last year’s process.


So should assessments be conducted on a fixed cycle? In my opinion, no. Instead:


  • Use capstone assignments to look at multiple goals simultaneously.
  • If you’re getting started with assessment, assess everything, now. You’ve been dragging your feet too long already, and you’re risking an accreditation action. Remember you must not only have results but be using them within two years.
  • If you’ve got disappointing results, move additional assessments of those learning goals to a front burner, assessing them frequently until you get results where you want them.
  • If you’ve got terrific results, consider moving assessments of those learning goals to a back burner, perhaps every two years or so, just to make sure results aren’t slipping. This frees up time to focus on the learning goals that need time and attention.
  • If assessment work is widely viewed as burdensome, it’s because its cost-benefit is out of whack. Perhaps assessment processes are too complicated, or people view the learning goals being assessed as relatively unimportant, or the results aren’t adding useful insight. Do all you can to simplify assessment work, especially reporting. If people don't find a particular assessment useful, stop doing it and do something else instead.
  • If assessment work must be staggered, stagger some of your indirect assessment tools, not the learning goals or major direct assessments. An alumni survey or student survey might be conducted every three years, for example.
  • For programs that “get” assessment and are conducting it routinely, ask for less frequent reports, perhaps every two or three years instead of annually. It’s a win-win reward: less work for them and less work for those charged with reviewing and offering feedback on assessment reports.

Should we abolish the word "demonstrate" from our assessment lexicon?

Posted on July 15, 2018 at 7:45 AM

The word “demonstrate” in learning goals raises a red flag for me. Consider these (real) learning goals:

  • Demonstrate fundamental business and entrepreneurship skills
  • Demonstrate critical and creative thinking.
  • Demonstrate information literacy skills.
  • Demonstrate teamwork and collaboration.
  • Demonstrate ethical self-awareness.
  • Demonstrate personal responsibility.


Clearly the people who wrote these learning goals were told that they had to start with an action word. So they plopped the word “demonstrate” in front of a fuzzy goal. But adding “demonstrate” doesn’t make the goal any less fuzzy. What are “fundamental business and entrepreneurship skills”? What is “personal responsibility”? Until these concepts are stated more clearly, these learning goals remain fuzzy and therefore difficult to assess meaningfully.


Now consider these (real) learning goals:

  • Demonstrate proficiency in analyzing work-related scenarios, taking appropriate action and evaluating results of the action.
  • Demonstrate proficiency in the use of technology for collecting and analyzing information
  • Demonstrate the ability to work cooperatively with others
  • Demonstrate enhanced competencies in time management


Here the phrase “demonstrate proficiency/ability/competencies” is simply superfluous, making the learning goals unnecessarily wordy. Consider these restatements:

  • Analyze work-related scenarios, take appropriate action, and evaluate the results of the action.
  • Use technology to collect and analyze information.
  • Work cooperatively with others.
  • Manage time effectively.


Not only are they clearer but, because they’re shorter, they pack a punch; they have a better chance of engaging students and getting them enthused about their learning.


So should we abolish the word “demonstrate” from our assessment lexicon? Well, consider this (real) learning outcome:

  • Demonstrate appropriate pitch, tone and demeanor in professional settings.


If we make clear what we want students to demonstrate, using observable terms, “demonstrate” may be fine.


Now consider these (real) learning outcomes:

  • Demonstrate appropriate, professional conduct.
  • Demonstrate professionalism and cultural sensitivity while interacting and communicating with others.


It could be argued that these learning outcomes are a bit fuzzy. What is appropriate, professional conduct, after all? What is cultural sensitivity? But if we clarified these terms in the learning outcome, we’d come up with a pretty long list of traits—so many that the learning outcome would be too cumbersome to be effective. In these cases, I’m okay with leaving these learning outcomes as is, provided that the rubrics used to assess them explicate these terms into traits with clear, concrete language that students easily understand.


So, no, I don't think we should abolish the word "demonstrate" altogether, but think twice--or even three times--before using it.

Getting started with meeting your professional development needs

Posted on June 24, 2018 at 4:30 PM

A recent paper co-sponsored by AALHE and Watermark identified some key professional development needs of assessment practitioners. 


While a book is no substitute for a rich, interactive professional development experience, some of the things that assessment practitioners want to learn about are discussed in my books Assessing Student Learning: A Common Sense Guide (new 3rd edition) and Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability. Perhaps they’re a good place to kick off your professional development.


Analyzing and Interpreting Assessment Data


See Chapter 24 (Analyzing Evidence of Student Learning) of Assessing Student Learning (3rd ed.).


Analyzing and Interpreting Qualitative Results


See “Summarizing qualitative evidence” on pages 313-316 of Chapter 23 (Summarizing and Storing Evidence of Student Learning) of Assessing Student Learning (3rd ed.)


Reporting Assessment Results


See Chapter 25 (Sharing Evidence of Student Learning) of Assessing Student Learning (3rd ed.) and Chapter 16 (Transparency: Sharing Evidence Clearly and Readily) of Five Dimensions of Quality.


Assessment Culture


This is such a big issue that the 3rd edition of Assessing Student Learning devotes six chapters to it. See Part 3, which includes the following chapters:

Chapter 9 (Guiding and Coordinating Assessment Efforts)

Chapter 10 (Helping Everyone Learn What to Do)

Chapter 11 (Supporting Assessment Efforts)

Chapter 12 (Keeping Assessment Cost-Effective)

Chapter 13 (Collaborating on Assessment)

Chapter 14 (Valuing Assessment and the People Who Contribute)


A good place to start is Chapter 14, because it begins with a section titled, “Why is this so hard?” Even better, see the chapter that section summarizes: Chapter 4 (Why Is This So Hard?) of Five Dimensions of Quality.


Also see Chapter 17 (Using Evidence to Ensure and Advance Quality and Effectiveness) in Five Dimensions of Quality.


Culture of Change


See Chapter 18 (Sustaining a Culture of Betterment) of Five Dimensions of Quality along with the aforementioned Chapter 4 (Why Is This So Hard?) in the same book. For a briefer discussion, see “Value innovation, especially in improving teaching” on pages 180-181 of Chapter 14 (Valuing Assessment and the People Who Contribute) of Assessing Student Learning (3rd ed.).


Effective/Meaningful/Best Assessment Practices


See Chapter 3 (What Are Effective Assessment Practices?) of Assessing Student Learning (3rd ed.) and Chapter 14 (Good Evidence Is Useful) of Five Dimensions of Quality.


Co-Curricular Learning Outcomes and Assessment


Information on co-curricula is scattered throughout the new 3rd edition of Assessing Student Learning. See the following:

“Learning goals for co-curricular experiences” on pages 57-58 of Chapter 4 (Learning Goals: Articulating What You Most Want Students to Learn)

“Planning assessments in co-curricula” on pages 110-112 of Chapter 8 (Planning Assessments in Other Settings)

Chapter 20 (Other Assessment Tools) 

Chapter 21 (Assessing the Hard-to-Assess) 


Rubrics


See Chapter 15 (Designing Rubrics to Plan and Assess Assignments) of Assessing Student Learning (3rd ed.).


Establishing Standards


See Chapter 22 (Setting Meaningful Standards and Targets) of Assessing Student Learning (3rd ed.) and Chapter 15 (Setting and Justifying Targets for Success) of Five Dimensions of Quality.


Program Review


See Chapter 20 (Program Reviews: Drilling Down into Programs and Services) of Five Dimensions of Quality.

All assessment is interesting

Posted on June 10, 2018 at 8:45 AM

Architecture critic Kate Wagner recently said, “All buildings are interesting. There is not a single building that isn’t interesting in some way.” I think we can say the same thing about assessment: All assessment is interesting. There is not a single assessment that isn’t interesting in some way.


Kate points out that what makes seemingly humdrum buildings interesting are the questions we can ask about them—in other words, how we analyze them. She suggests a number of questions that can be easily adapted to assessment:


  • How do these results compare to other assessment results? We can compare results against results for other students (at our institution or elsewhere), against results for other learning goals, against how students did when they entered (value-added), against past cohorts of students, or against an established standard. Each of these comparisons can be interesting. (See Chapter 22 of my book Assessing Student Learning for more information on perspectives for comparing results.)
  • Are we satisfied with the results? Why or why not?
  • What do these results say about our students at this time? Students, curricula, and teaching methods are rapidly changing, which makes them--and assessment--interesting. Assessment results are a piece of history: what students learned (and didn’t learn) at this time, in this setting.
  • What does this assessment say about what we and our institution value? What does it say about the world in which we live?


Why do so many faculty and staff fail to find assessment interesting? I’ve alluded to a number of possible reasons in past blog posts (such as here and here), but let me throw out a few that I think are particularly relevant.


1. Sometimes assessment simply isn’t presented as something that’s supposed to be interesting. It’s a chore to get through accreditation, nothing more. Just as Kate felt obliged to point out that even humdrum buildings are interesting, sometimes faculty and staff need to be reminded that assessment should be designed to yield interesting results.


2. Sometimes faculty and staff aren’t particularly interested in the learning goal being assessed. If a faculty member focuses on basic conceptual understanding in her course, she’s not going to be particularly interested in the assessment of critical thinking that she's obliged to do. Rethinking key learning goals and helping faculty and staff rethink their curricula can go a long way toward generating assessment results that faculty and staff find interesting.


3. Some faculty and staff find results mildly interesting, but not interesting enough to be worth all the time and effort that’s gone into generating them. A complex, time-consuming assessment whose results show that students are generally doing fine and are not all that different from past years is interesting but not terribly interesting. The cost-benefit isn’t there. Here the key is to scale back less-interesting assessments—maybe repeat the assessment every two or three years just to make sure results aren’t slipping—and focus on assessments that faculty and staff will find more interesting and useful.


4. Some faculty and staff aren’t really that interested in teaching—they’re far more engaged with their research agenda. And some faculty and staff aren’t really that interested in improving their teaching. Institutional leaders can help here by rethinking incentives and rewards to encourage faculty and staff to try to improve their teaching.


Kate says, “All of us have the potential to be nimble interpreters of the world around us. All we need to do is look around.” Similarly, all of us have the potential to be nimble interpreters of evidence of student learning. All we need to do is use the analytical skills we learned in college, and that we teach our students, to find what’s interesting.

What are the characteristics of well-stated learning goals?

Posted on May 27, 2018 at 7:40 AM

When I help faculty and co-curricular staff move ahead with their assessment efforts, I probably spend half our time on helping them articulate their learning goals. As the years have gone by, I’ve become ever more convinced that learning goals are the foundation of an assessment structure…and without a solid foundation, a structure can’t be well-constructed.


So what are well-stated learning goals? They have the following characteristics:


They are outcomes: what students will be able to do after they successfully complete the learning experience, not what they will do or learn during the learning experience. Example: Prepare effective, compelling visual summaries of research.


They are clear, written in simple, jargon-free terms that everyone understands, including students, employers, and colleagues in other disciplines. Example: Work collaboratively with others.


They are observable, written using action verbs, because if you can see it, you can assess it. Example: Identify and analyze ethical issues in the discipline.


They focus on skills more than knowledge, conceptual understanding, or attitudes and values, because thinking and performance skills are what employers seek in new hires. I usually suggest that at least half the learning goals of any learning experience focus on skills. Example: Integrate and properly cite scientific literature.


They are significant and aspirational: things that will take some time and effort for students to learn and that will make a real difference in their lives. Example: Identify, articulate, and solve problems in [the discipline or career field].


They are relevant, meeting the needs of students, employers, and society. They focus more on what students need to learn than what faculty want to teach. Example: Interpret numbers, data, statistics, and visual representations of them appropriately.


They are short and therefore powerful. Long, qualified or compound statements get everyone lost in the weeds. Example: Treat others with respect.


They fit the scope of the learning activity. Short co-curricular learning experiences have narrower learning goals than an entire academic program, for example.


They are limited in number. I usually suggest no more than six learning goals per learning experience. If you have 10, 15, or 20 learning goals—or more—everyone focuses on trees rather than the forest of the most important things you want students to learn.


They help students achieve bigger, broader learning goals. Course learning goals help students achieve program and/or general education learning goals; co-curricular learning goals help students achieve institutional learning goals; program learning goals help students achieve institutional learning goals.


For more information on articulating well-stated learning goals, see Chapter 4 of the new 3rd edition of my book Assessing Student Learning: A Common Sense Guide.

Some learning goals are promises we can't keep

Posted on May 2, 2018 at 6:55 AM

I look on learning goals as promises that we make to students, employers, and society: If a student passes a course or graduates, he or she WILL be able to do the things we promise in our learning goals.


But there are some things we hope to instill in students that we can’t guarantee. We can’t guarantee, for example, that every graduate will be a passionate lifelong learner, appreciate artistic expressions, or make ethical decisions. I think these kinds of statements are important aims that might be expressed in a statement of values, but they’re not really learning goals, because they’re something we hope for, not something we can promise. Because they’re not really learning goals, they’re very difficult if not impossible statements to assess meaningfully.


How can you tell if a learning goal is a true learning goal—an assessable promise that we try to keep? Ask yourself the following questions.


Is the learning goal stated clearly, using observable action verbs? Appreciate diversity is a promise we may not be able to keep, but Communicate effectively with people from diverse backgrounds is an achievable, assessable learning goal.


How have others assessed this learning goal? If someone else has assessed it meaningfully and usefully, don’t waste time reinventing the wheel.


How would you recognize people who have achieved this learning goal? Imagine that you run into two alumni of your college. As you talk with them, it becomes clear that one appreciates artistic expressions and the other doesn’t. What might they say about their experiences and views that would lead you to that conclusion? This might give you ideas on ways to express the learning goal in more concrete, observable terms, which makes it easier to figure out how to assess it.


Is the learning goal teachable? Ask faculty who aim to instill this learning goal to share how they help students achieve it. If they can name specific learning activities, the goal is teachable—and assessable, because they can grade the completed learning activities. But if the best they can say is something like, “I try to model it” or “I think they pick it up by osmosis,” the goal may not be teachable—or assessable. Don’t try to assess what can’t be taught.


What knowledge and skills are part of this learning goal? We can’t guarantee, for example, that all graduates will make ethical decisions, but we can make sure that they recognize ethical and unethical decisions, and we can assess their ability to do so.


How important is this learning goal? Most faculty and colleges I work with have too many learning goals—too many to assess well and, more important, too many to help students achieve well in the time we have with them. Ask yourself, “Can our students lead happy and fulfilling lives if they graduate without having achieved this particular learning goal?”


But just because a learning goal is a promise we can’t keep doesn’t mean it isn’t important. A world in which people fail to appreciate artistic expressions or have compassion for others would be a dismal place. So continue to acknowledge and value hard-to-assess learning goals even if you’re not assessing them.


For more information on assessing the hard-to-assess, see Chapter 21 of the new 3rd edition of Assessing Student Learning: A Common Sense Guide.

Value and respect: The keys to assessment success

Posted on March 28, 2018 at 6:25 AM

In my February 28 blog post, I noted that many faculty express frustration with assessment along the following lines:


  • What I most want students to learn is not what’s being assessed.
  • I’m being told what and how to assess, without any input from me.
  • I’m being told what to teach, without any input from me.
  • I’m being told to assess skills that employers want, but I teach other things that I think are more important.
  • A committee is doing a second review of my students’ work. I’m not trusted to assess student work fairly and accurately through my grading processes.
  • I’m being asked to quantify student learning, but I don’t think that’s appropriate for what I’m teaching.
  • I’m being asked to do this on top of everything else I’m already doing.
  • Assessment treats learning as a scientific process, when it’s a human endeavor; every student and teacher is different.


The underlying theme here is that these faculty don’t feel that they and their views are valued and respected. When we value and respect people:


  • We design assessment processes so the results are clearly useful in helping to make important decisions, not paper-pushing exercises designed solely to get through accreditation.
  • We make assessment work worthwhile by using results to make important decisions, such as on resource allocations, as discussed in my March 13 blog post.
  • We truly value great teaching and actively encourage the scholarship of teaching as a form of scholarship.
  • We truly value innovation, especially in improving one’s teaching because, if no one wants to change anything, there’s no point in assessing.
  • We take the time to give faculty and staff clear guidance and coordination, so they understand what they are to do and why.
  • We invest in helping them learn what to do: how to use research-informed teaching strategies as well as how to assess.
  • We support their work with appropriate resources.
  • We help them find time to work on assessment and to keep assessment work cost-effective, because we respect how busy they are.
  • We take a flexible approach to assessment, recognizing that one size does not fit all. We do not mandate a single institution-wide assessment approach but instead encourage a variety of assessment strategies, both quantitative and qualitative. The more choices we give faculty, the more they feel empowered.
  • We design assessment processes so faculty are leaders rather than providers of assessment. We help them work collaboratively rather than in silos, inviting them to contribute to decisions on what, why, and how we assess. We try to assess those learning outcomes that the institutional community most values. More than anything else, we spend more time listening than telling.
  • We recognize and honor assessment work in tangible ways, perhaps through a celebratory event, public commendations, or consideration in promotion, tenure, and merit pay applications.


For more information on these and other strategies to value and respect people who work on assessment, see Chapter 14, “Valuing Assessment and the People Who Contribute,” in the new third edition of my book Assessing Student Learning: A Common Sense Guide.

Making assessment worthwhile

Posted on March 13, 2018 at 9:50 AM

In my February 28 blog post, I noted that many faculty have been expressing frustration that assessment is a waste of an enormous amount of time and resources that could be better spent on teaching. Here are some strategies to help make sure your assessment activities are meaningful and cost-effective, all drawn from the new third edition of Assessing Student Learning: A Common Sense Guide.


Don’t approach assessment as an accreditation requirement. Sure, you’re doing assessment because your accreditor requires it, but cranking out something only to keep an accreditor happy is sure to be viewed as a waste of time. Instead approach assessment as an opportunity to collect information on things you and your colleagues care about and that you want to make better decisions about. Then what you’re doing for the accreditor is summarizing and analyzing what you’ve been doing for yourselves. While a few accreditors have picky requirements that you must comply with whether you like them or not, most want you to use their standards as an opportunity to do something genuinely useful.


Keep it useful. If an assessment hasn’t yielded useful information, stop doing it and do something else. If no one’s interested in assessment results for a particular learning goal, you’ve got a clue that you’ve been assessing the wrong goal.


Make sure it’s used in helpful ways. Design processes to make sure that assessment results inform things like professional development programming, resource allocations for instructional equipment and technologies, and curriculum revisions. Make sure faculty are informed about how assessment results are used so they see its value.


Monitor your investment in assessment. Keep tabs on how much time and money each assessment is consuming…and whether what’s learned is useful enough to make that investment worthwhile. If it isn’t, change your assessment to something more cost-effective.


Be flexible. A mandate to use an assessment tool or strategy that’s inappropriate for a particular learning goal or discipline is sure to be viewed as a waste of everyone’s time. In assessment, one size definitely does not fit all.


Question anything that doesn’t make sense. If no one can give a good explanation for doing something that doesn’t make sense, stop doing it and do something more appropriate.


Start with what you have. Your college has plenty of direct and indirect evidence of student learning already on hand, from grading processes, surveys, and other sources. Squeeze information out of those sources before adding new assessments.


Think twice about blind-scoring and double-scoring student work. The costs in terms of both time and morale can be pretty steep (“I’m a professional! Why can’t they trust me to assess my own students’ work?”). Start by asking faculty to submit their own rubric ratings of their own students’ work. Only move to blind- and double-scoring if you see a big problem in their scores of a major assessment.


Start at the end and work backwards. If your program has a capstone requirement, students should be demonstrating achievement in many key program learning goals in it. Start assessment there. If students show satisfactory achievement of the learning goals, you’re done! If you’re not satisfied with their achievement of a particular learning goal, you can drill down to other places in the curriculum that address that goal.


Help everyone learn what to do. Nothing galls me more than finding out what I did wasn’t what was wanted and has to be redone. While we all learn from experience and do things better the second time, help everyone learn what to do so their first assessment is a useful one.


Minimize paperwork and bureaucratic layers. Faculty are already routinely assessing student learning through the grading process. What some resent is not the work of grading but the added workload of compiling, analyzing, and reporting assessment evidence from the grading process. Make this process as simple, intuitive, and useful as possible. Cull from your assessment report template anything that’s “nice to know” versus absolutely essential.


Make assessment technologies an optional tool, not a mandate. Only a tiny number of accreditors require using a particular assessment information management system. For everyone else, assessment information systems should be chosen and implemented to make everyone’s lives easier, not for the convenience of a few people like an assessment committee or a visiting accreditation team. If a system is hard to learn, creates more work, or is expensive, it will create resentment and make things worse rather than better. I recently encountered one system for which faculty had to tally and analyze their results, then enter the tallied results into the system. Um, shouldn’t an assessment system do the work of tallying and analysis for the faculty?


Be sensible about staggering assessments. If students are not achieving a key learning goal well, you’ll want to assess it frequently to see if they’re improving. But if students are achieving another learning goal really well, put it on a back burner, asking for assessment reports on it only every few years, to make sure things aren’t slipping.


Help everyone find time to talk. Lots of faculty have told me that they “get” assessment but simply can’t find time to discuss with their colleagues what and how to assess and how best to use the results. Help them carve out time on their calendars for these important conversations.


Link your assessment coordinator with your faculty teaching/learning center, not an accreditation or institutional effectiveness office. This makes clear that assessment is about understanding and improving student learning, not just a hoop to jump through to address some administrative or accreditation mandate.

What do faculty really think about assessment?

Posted on March 4, 2018 at 8:05 AM

The vitriol in some recent op-ed pieces and the comments that followed them might leave the impression that faculty hate assessment. Well, some faculty clearly do, but a national survey suggests that they’re in the minority.


The Faculty Survey of Assessment Culture, directed by Dr. Matthew Fuller at Sam Houston State University, can give us some insight. Its key drawback is that, because it’s still a relatively nascent survey, it has only about 1,155 responses from its last reported administration in 2014. So the survey may not represent what faculty throughout the U.S. really think, but I nonetheless think it’s worth a look.


Most of the survey is a series of statements to which faculty respond by choosing Strongly Agree, Agree, Only Slightly Agree, Only Slightly Disagree, Disagree, or Strongly Disagree.


Here are the percentages who agreed or strongly agreed with each statement. Statements that are positive about assessment are in green; those that are negative about assessment are in red.

80% The majority of administrators are supportive of assessment.

77% Faculty leadership is necessary for my institution’s assessment efforts.

76% Assessment is a good thing for my institution to do.

70% I am highly interested in my institution’s assessment efforts.

70% Assessment is vital to my institution’s future.

67% In general I am eager to work with administrators.

67% Assessment is a good thing for me to do.

64% I am actively engaged in my institution’s assessment efforts.

63% Assessments of programs are typically connected back to student learning

62% My academic department or college truly values faculty involvement in assessment.

61% I engage in institutional assessment efforts because it is the right thing to do for our students.

60% Assessment is vital to my institution’s way of operating.

57% Discussions about student learning are at the heart of my institution.

57% In general a recommended change is more likely to be enacted by administrators if it is supported by assessment data.

53% I clearly understand assessment processes at my institution.

52% Assessment supports student learning at my institution.

51% Assessment is primarily the responsibility of faculty members.

51% Change occurs more readily when supported by assessment results.

50% It is clear who is ultimately in charge of assessment.

50% I am familiar with the office that leads student assessment efforts for accreditation purposes.

50% Assessment for accreditation purposes is prioritized above other assessment efforts.

49% Assessment results are used for improvement.

49% The majority of administrators primarily emphasize assessment for the improvement of student learning.

49% I engage in institutional assessment because doing so makes a difference to student learning at my institution.

48% Assessment processes yield evidence of my institution’s effectiveness.

48% I have a generally positive attitude toward my institution’s culture of assessment.

47% Senior leaders, i.e., President or Provost, have made clear their expectations regarding assessment.

47% Administrators are supportive of making changes.

46% I am familiar with the office that leads student assessment efforts for student learning.

45% Assessment data are used to identify the extent to which student learning outcomes are met.

44% My institution is structured in a way that facilitates assessment practices focused on improved student learning.

44% The majority of administrators only focus on assessment in response to compliance requirements.

43% Student assessment results are shared regularly with faculty members.

41% I support the ways in which administrators have used assessment on my campus.

40% Assessment is an organized coherent effort at my institution.

40% Assessment results are available to faculty by request.

38% Assessment data are available to faculty by request.

37% Assessment results are shared regularly throughout my institution.

35% Faculty are in charge of assessment at my institution.

33% Engaging in assessment also benefits my research/scholarship agenda.

32% Budgets can be negatively impacted by assessment results.

32% Administrators share assessment data with faculty members using a variety of communication strategies (i.e., meetings, web, written correspondence, presentations).

31% Assessment data are regularly used in official institutional communications.

30% There are sufficient financial resources to make changes at my institution.

29% Assessment is a necessary evil in higher education.

28% Communication of assessment results has been effective.

28% Assessment results are criticized for going nowhere (i.e., not leading to change).

27% Assessment results in a fair depiction of what I do as a faculty member.

27% Administrators use assessment as a form of control (i.e., to regulate institutional processes).

26% Assessment efforts do not have a clear focus.

26% I enjoy engaging in institutional assessment efforts.

24% Assessment success stories are formally shared throughout my institution.

23% Assessment results in an accurate depiction of what I do as a faculty member.

22% Assessment is conducted based on the whims of the people in charge.

21% If assessment was not required I would not be doing it.

21% Assessment is primarily the responsibility of administrators.

21% I am aware of several assessment success stories (i.e. instances of assessment resulting in important changes).

20% I do not have time to engage in assessment efforts.

19% Assessment results have no impact on resource allocations.

18% Assessment results are used to scare faculty into compliance with what the administration wants.

18% There is pressure to reveal only positive results from assessment efforts.

17% I avoid doing institutional assessment activities if I can.

17% I engage in assessment because I am afraid of what will happen if I do not.

14% I perceive assessment as a threat to academic freedom.

10% Assessment results are used to punish faculty members (i.e., not rewarding innovation or effective teaching, research, or service).

4% Assessment is someone else’s problem, not mine.


Overall, there’s good news here. Most faculty agreed with most positive statements about assessment, and most disagreed with most negative statements. I was particularly heartened that about three-quarters of respondents agreed that “assessment is a good thing for my institution to do,” about 70% agreed that “assessment is vital to my institution’s future,” and about two-thirds agreed that “assessment is a good thing for me to do.”


But there’s also plenty to be concerned about here. Only 35% agree that faculty are in charge of assessment and, by several measures, only a minority see assessment results shared and used. Almost 30% view assessment as a necessary evil.


Survey researchers know that people are more apt to agree than disagree with a statement, so I also looked at the percentages of faculty who disagreed or strongly disagreed with each statement. These percentages don't simply mirror the agreed/strongly agreed results above, because on some items a larger proportion of faculty marked Only Slightly Agree or Only Slightly Disagree. (On "Assessment is a necessary evil in higher education," for example, 29% agreed and 42% disagreed, leaving roughly 29% in the only-slightly middle.) Again, the positive statements are in green and the negative ones in red.

3% The majority of administrators are supportive of assessment.

6% Faculty leadership is necessary for my institution’s assessment efforts.

6% Assessment is a good thing for my institution to do.

7% Assessment is vital to my institution’s future.

8% I am highly interested in my institution’s assessment efforts.

8% In general a recommended change is more likely to be enacted by administrators if it is supported by assessment data.

9% I am actively engaged in my institution’s assessment efforts.

9% In general I am eager to work with administrators.

9% My academic department or college truly values faculty involvement in assessment.

10% Change occurs more readily when supported by assessment results.

10% Assessment is a good thing for me to do.

12% Assessment results are available to faculty by request.

13% Assessment is vital to my institution’s way of operating.

13% Assessment data are available to faculty by request.

13% The majority of administrators primarily emphasize assessment for the improvement of student learning.

13% I engage in institutional assessment efforts because it is the right thing to do for our students.

14% Discussions about student learning are at the heart of my institution.

14% I clearly understand assessment processes at my institution.

14% Assessment data are used to identify the extent to which student learning outcomes are met.

15% Assessments of programs are typically connected back to student learning.

15% Assessment results are used for improvement.

16% Assessment is primarily the responsibility of faculty members.

16% Administrators are supportive of making changes.

17% Assessment supports student learning at my institution.

18% Assessment processes yield evidence of my institution’s effectiveness.

18% I support the ways in which administrators have used assessment on my campus.

19% It is clear who is ultimately in charge of assessment.

19% Assessment is an organized coherent effort at my institution.

19% I have a generally positive attitude toward my institution’s culture of assessment.

20% Senior leaders, i.e., President or Provost, have made clear their expectations regarding assessment.

20% My institution is structured in a way that facilitates assessment practices focused on improved student learning.

20% I engage in institutional assessment because doing so makes a difference to student learning at my institution.

21% I am familiar with the office that leads student assessment efforts for accreditation purposes.

21% Budgets can be negatively impacted by assessment results.

22% The majority of administrators only focus on assessment in response to compliance requirements.

23% Student assessment results are regularly shared with faculty members.

24% I am familiar with the office that leads student assessment efforts for student learning.

24% Assessment for accreditation purposes is prioritized above other assessment efforts.

24% Assessment data are regularly used in official institutional communications.

28% Faculty are in charge of assessment at my institution.

29% Assessment results have no impact on resource allocations.

29% Assessment results are regularly shared throughout my institution.

29% I enjoy engaging in institutional assessment efforts.

31% Administrators share assessment data with faculty members using a variety of communication strategies (i.e., meetings, web, written correspondence, presentations).

31% Communication of assessment results has been effective.

31% Administrators use assessment as a form of control (i.e., to regulate institutional processes).

32% Assessment results are criticized for going nowhere (i.e., not leading to change).

32% Assessment results in a fair depiction of what I do as a faculty member.

33% There are sufficient financial resources to make changes at my institution.

34% Assessment success stories are formally shared throughout my institution.

34% Assessment results in an accurate depiction of what I do as a faculty member.

35% Assessment is primarily the responsibility of administrators.

36% I am aware of several assessment success stories (i.e., instances of assessment resulting in important changes).

36% Engaging in assessment also benefits my research/scholarship agenda.

41% Assessment efforts do not have a clear focus.

41% I do not have time to engage in assessment efforts.

42% Assessment is a necessary evil in higher education.

50% Assessment is conducted based on the whims of the people in charge.

50% There is pressure to reveal only positive results from assessment efforts.

53% Assessment results are used to scare faculty into compliance with what the administration wants.

55% I avoid doing institutional assessment activities if I can.

56% If assessment was not required I would not be doing it.

56% I engage in assessment because I am afraid of what will happen if I do not.

60% Assessment results are used to punish faculty members (i.e., not rewarding innovation or effective teaching, research, or service).

62% I perceive assessment as a threat to academic freedom.

78% Assessment is someone else’s problem, not mine.


Here there's more good news. We want small proportions of faculty to disagree with the positive statements about assessment, and for the most part they do. About a third disagreed that assessment results and success stories are shared, but that matches what we saw in the agreed/strongly agreed results above.


But there are also areas of concern here. We want large proportions of faculty to disagree with the negative statements about assessment, and that doesn't always happen. Less than a quarter disagreed that budgets can be negatively impacted by assessment results or that administrators focus on assessment only in response to compliance requirements. Less than a third disagreed that assessment results go nowhere or that they have no impact on resource allocations. The results that concerned me most? Only 42% disagreed that assessment is a necessary evil; only half disagreed that there is pressure to reveal only positive assessment results; and only a bit over half disagreed that "If assessment was not required I would not be doing it."


So, while most faculty “get” assessment, there are sizable numbers who don’t yet see value in it. We've come a long way, but there's still plenty of work to do!


(Some notes on the presentation of these results: I sorted the results from highest to lowest, rounded percentages to the nearest whole percent, and color-coded "good" and "bad" statements. All of that helps the key points of a very lengthy survey pop out at the reader.)
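If you're compiling results like these yourself, here's a minimal sketch in Python of the rounding and sorting steps. The statements and proportions below are hypothetical stand-ins, not the survey's actual data, and the color-coding would be applied in whatever tool you use to publish the list.

```python
# Hypothetical agreement rates for a few survey statements (not the survey's data).
results = {
    "Assessment is a good thing for my institution to do": 0.763,
    "Assessment results are used for improvement": 0.488,
    "Assessment is someone else's problem, not mine": 0.042,
}

# Sort from highest to lowest agreement and round to the nearest whole percent,
# so the extremes stand out at a glance.
for statement, share in sorted(results.items(), key=lambda item: item[1], reverse=True):
    print(f"{round(share * 100)}%  {statement}")
```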

Why do (some) faculty hate assessment?

Posted on February 28, 2018 at 10:25 AM Comments comments (14)

Two recent op-ed pieces in the Chronicle of Higher Education and the New York Times, and the hundreds of online comments regarding them, make clear that, 25 years into the assessment movement, a lot of faculty really hate assessment.


It’s tempting for assessment people to spring into a defensive posture and dismiss what these people are saying. (They’re misinformed! The world has changed!) But if that’s our response, aren’t we modeling the fractures deeply dividing the US today, with people existing in their own echo chambers and talking past each other rather than really listening and trying to find common ground on which to build? And shouldn’t we be practicing what we preach, using systematic evidence to inform what we say and do?


So I took a deeper dive into those comments. I did a content analysis of the articles and many of the comments that followed. (The New York Times article had over 500 comments—too many for me to handle—so I looked only at NYT comments with at least 12 recommendations.)


If you’re not familiar with content analysis, it’s looking through text to identify the frequency of ideas or themes. For example, I counted how many comments mentioned that assessment is expensive. I do content analysis by listing all the comments as bullets in a Word document, then cutting and pasting the bulleted comments to group similar comments together under headings. I then cut and paste the groups so the most frequently mentioned themes are at the top of the document. There is qualitative analysis software that can help if you don’t want to do this manually.
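For those who would rather script the tallying than cut and paste, here's a minimal sketch in Python of the same idea. The theme labels and comments are hypothetical stand-ins of my own; a human still has to code each comment, and the script only counts and sorts the themes.

```python
from collections import Counter

# Hypothetical comments, each already coded with a theme label by a human reader.
coded_comments = [
    ("Assessment saps time and money from teaching.", "waste of time/resources"),
    ("Outcomes language devalues what faculty do.", "faculty not valued"),
    ("Legislative budget cuts drove the assessment push.", "external forces"),
    ("Students, not faculty, are responsible for learning.", "blame for learning"),
    ("Another report nobody will ever read.", "waste of time/resources"),
]

# Count how many comments fall under each theme.
theme_counts = Counter(theme for _, theme in coded_comments)

# List the themes from most to least frequently mentioned.
for theme, count in theme_counts.most_common():
    share = count / len(coded_comments)
    print(f"{theme}: {count} comments ({share:.0%})")
```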


A caveat: Comments don’t always fall into neat, discrete categories; judgement is needed to decide where to place some. I did this analysis quickly, and it’s entirely possible that, if you’d done this instead of me, you might have come up with somewhat different results. But assessment is not rigorous research; we just need information good enough to help inform our thinking, and I think my analysis is fine for the purpose of figuring out how we might deal with this.


Why take the time to do a content analysis instead of just reading through the comments? Because, when we process a list of comments, there’s a good chance we won’t identify the most frequently mentioned ideas accurately. As I was doing my content analysis, I was struck by how many faculty complained that assessment is (I’m being snarky here) either a vast right-wing conspiracy or a vast left-wing conspiracy, simply because I’d never heard that before. It turned out, however, that there were other themes that emerged far more frequently. This is a good lesson for faculty who think they don’t need to formally assess because they “know” what their students are struggling with. Maybe they do…but maybe not.


So what did I find? As I’d expected, there are many reasons why faculty may hate assessment. I found that most of their complaints fall into just four broad categories:


It’s a waste of an enormous amount of time and resources that could be better spent on teaching. Almost 40% of the comments fell into this category. Some examples:

  • We faculty are angry over the time and dollars wasted.
  • The assessment craze is not only of little value, but it saps the meager resources of time and money available for classroom instruction.
  • Faced with outrage over the high cost of higher education, universities responded by encouraging expensive administrative bloat.
  • It is not that the faculty are not trying, but the data and methods in general use are very poor at measuring learning.
  • Our “assessment expert” told us to just put down as a goal the % of students we wanted to rate us as very good or good on a self-report survey. Which we all know is junk.


I and what I think is important are not valued or respected. Over 30% of the comments fell into this category. Some examples:

  • Assessment of student learning outcomes is an add-on activity that says your standard examination and grading scheme isn’t enough so you need to do a second layer of grading in a particular numerical format.
  • The fundamental, flawed premise of most of modern education is that teaching is a science.
  • Bureaucratic jargon subtly shapes the expectations of students and teachers alike.
  • When the effort to reduce learning to a list of job-ready skills goes too far, it misses the point of a university education.
  • Learning outcomes have disempowered faculty.
  • The only learning outcomes I value: students complete their formal education with a desire to learn more.
  • Assessment reflects a misguided belief that learning is quantifiable.


External and economic forces are behind this. About 15% of comments fell into this category, including those right-wing/left-wing conspiracy comments. Some examples:

  • There’s a whole industry out there that’s invested in outcomes assessment.
  • The assessment boom coincided with the decision of state legislatures to reduce spending on public universities.
  • Educational institutions have been forced to operate out of a business model.
  • It is the rise of adjuncts and online classes that has led to the assessment push.


I’m unfairly held responsible for student learning. About 10% of comments fell into this category. Some examples:

  • Students, not faculty, are responsible for student learning.
  • It is much more profitable to skim money from institutions of higher learning than fixing the underlying causes of the poverty and lack of focus that harm students.
  • The root cause is lack of a solid foundation built in K-12.


Two things struck me about these four broad categories. The first was that they don't quite align with what I've heard as I've worked with literally thousands of faculty at hundreds of colleges over the last two decades. Yes, I've heard plenty about assessment being useless, and I've written about faculty feeling devalued and disrespected by assessment, but I'd never heard the external-forces or blame-game reasons before. And I've heard plenty about other reasons that weren't mentioned in these comments, especially finding time to work on assessment, not understanding how to assess (or how to teach), and moving from a culture of silos to one of collaboration. I think the reason for the disconnect between what I've heard and what was expressed here is that these comments reflect the angriest faculty, not all faculty. The second was that, even so, their anger is legitimate and something we should all work to address.


[UPDATED 2/28/2018 4:36 PM EST] So what should we do? First, we clearly need better information on faculty experiences and views regarding assessment so we can understand which issues are most pervasive and address them. The Surveys of Assessment Culture developed by Matt Fuller at Sam Houston State University are an important start.


In the meantime, the good news is that the concerns raised in and accompanying these two pieces all represent solvable problems. (No, we can't solve all of society's ills, but we can help faculty deal with them.) I'll share some ideas in upcoming blog posts. If you don't want to wait, you'll find plenty of practical suggestions in the new 3rd edition of my book Assessing Student Learning: A Common Sense Guide.

An example of closing the loop...and ideas for doing it well

Posted on February 22, 2018 at 7:00 PM Comments comments (0)

I was intrigued by an article in the September 23, 2016, issue of Inside Higher Ed titled “When a C Isn’t Good Enough.” The University of Arizona found that students who earned an A or B in their first-year writing classes had a 67% chance of graduating, but those earning a C had only a 48% chance. The university is now exploring a variety of ways to improve the success of students earning a C, including requiring C students to take a writing competency test, providing resources to C students, and/or requiring C students to repeat the course.

 

I know nothing about the University of Arizona beyond what’s in the article. But if I were working with the folks there, I’d offer the following ideas to them, if they haven’t considered them already.

 

1. I’d like to see more information on why the C students earned a C. Which writing skills did they struggle most with: basic grammar, sentence structure, organization, supporting arguments with evidence, etc.? Or was there another problem? For example, maybe C students were more likely to hand in assignments late (or not at all).

 

2. I’d also like to see more research on why those C students were less likely to graduate. How did their GPAs compare to A and B students? If their grades were worse, what kinds of courses seemed to be the biggest challenge for them? Within those courses, what kinds of assignments were hardest for them? Why did they earn a poor grade on them? What writing skills did they struggle most with: basic grammar, organization, supporting arguments with evidence, etc.? Or, again, maybe there was another problem, such as poor self-discipline in getting work handed in on time.

 

And if their GPAs were not that different from those of A and B students (or even if they were), what else was going on that might have led them to leave? The problem might not be their writing skills per se. Perhaps, for example, students with work or family obligations found it harder to devote the study time necessary to get good grades. Providing support for that issue might help more than helping them with their writing skills.

 

3. I’d also like to see the faculty responsible for first-year writing articulate a clear, appropriate, and appropriately rigorous standard for earning a C. In other words, they could use the above information on the kinds and levels of writing skills that students need to succeed in subsequent courses to articulate the minimum performance levels required to earn a C. When I taught first-year writing at a public university in Maryland, the state system had just such a statement, the “Maryland C Standard.”

 

4. I'd like to see the faculty adopt a policy that, in order to pass first-year writing, students must meet the minimum standard on every writing criterion. Thus, if student work is graded using a rubric, the grade isn't determined by averaging the scores on the various rubric criteria; that would let a student with A arguments but F grammar pass with a C despite the failing grammar. Instead, students must earn at least a C on every rubric criterion in order to pass the assignment. Then the As, Bs, and Cs can be averaged into an overall grade for the assignment.
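As a minimal sketch of this policy (my own illustration, not the University of Arizona's practice), here's how the gate-then-average logic might look in Python. The criterion names and the four-point scale are assumptions made only for the example.

```python
# Illustrative 4-point scale: A=4, B=3, C=2, D=1, F=0. The C threshold gates passing.
C_THRESHOLD = 2.0

def assignment_grade(criterion_scores):
    """Average the criterion scores, but only if every criterion is at least a C."""
    if any(score < C_THRESHOLD for score in criterion_scores.values()):
        # At least one criterion falls below the C standard: the paper does not pass,
        # no matter how strong the other criteria are.
        return None
    return sum(criterion_scores.values()) / len(criterion_scores)

# A arguments but F grammar: simple averaging would give roughly a C (2.33),
# but under the per-criterion minimum the paper does not pass.
paper = {"arguments": 4.0, "organization": 3.0, "grammar": 0.0}
print(assignment_grade(paper))  # None -> does not pass
```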

 

(If this sounds vaguely familiar to you, what I’m suggesting is the essence of competency-based education: students need to demonstrate competence on all learning goals and objectives in order to pass a course or graduate. Failure to achieve one goal or objective can’t be offset by strong performance on another.)

 

5. If they haven’t done so already, I’d also like to see the faculty responsible for first-year writing adopt a common rubric, articulating the criteria they’ve identified, that would be used to assess and grade the final assignment in every section, no matter who teaches it. This would make it easy to study student performance across all sections of the course and identify pervasive strengths and weaknesses in their writing. If some faculty members or TAs have additional grading criteria, they could simply add those to the common rubric. For example, I graded my students on their use of citation conventions, even though that was not part of the Maryland C Standard. I added that to the bottom of my rubric.

 

6. Because work habits are essential to success in college, I’d also suggest making this a separate learning outcome for first-year writing courses. This means grading students separately on whether they turn in work on time, put in sufficient effort, etc. This would help everyone understand why some students fail to graduate—is it because of poor writing skills, poor work habits, or both?

 

These ideas all move responsibility for addressing the problem from administrators to the faculty. That responsibility can’t be fulfilled unless the faculty commit to collaborating on identifying and implementing a shared strategy so that every student, no matter which section of writing they enroll in, passes the course with the skills needed for subsequent success.

Is higher ed assessment changing? You bet!

Posted on February 13, 2018 at 9:10 AM Comments comments (0)

Today marks the release of the third edition of my book Assessing Student Learning: A Common Sense Guide. I approached Jossey-Bass about doing a third edition in response to requests from faculty who used the book as a course text but were required to assign recent editions. The second edition had been very successful, so I figured I'd update the references and a few chapters and be done. But as I started work on this edition, I was immediately struck by how outdated the second edition had become in just a few short years. The third edition is a complete reorganization and rewrite of the previous edition.


How has the world of higher ed assessment changed?


We are moving from Assessment 1.0 to Assessment 2.0: from getting assessment done (and in many cases not doing it very well) to getting assessment used. Many faculty and administrators still struggle to grasp that assessment is all about improving how we help students learn, not an end in itself, and that assessments should be planned with likely uses in mind. The last edition talked about using results, of course, but the new edition adds a chapter on using assessment results at the beginning of the book. And throughout the book I talk not about "assessment results" but about "evidence of student learning," which is what this is really all about.


We have a lot of new resources. Many new assessment resources have emerged since the second edition was published, including the VALUE rubrics published by AAC&U, the many white papers published by NILOA, and the Degree Qualifications Profile sponsored by Lumina. Learning management systems and assessment information management systems are far more prevalent and sophisticated. This edition talks about these and other valuable new resources.


We are recognizing that different settings require different approaches to assessment. The more assessment we’ve done, the more we’ve come to realize that assessment practices vary depending on whether we’re assessing learning in courses, programs, general education curricula, or co-curricular experiences. The last edition didn’t draw many distinctions among assessment in these settings. This edition features a new chapter on the many settings of assessment, and several chapters discuss applying concepts to specific settings.


We’re realizing that curriculum design is a big piece of the assessment puzzle. We’ve found that, when faculty and staff struggle with assessment, it’s often because the learning outcomes they’ve identified aren’t addressed sufficiently—or at all—in the curriculum. So this book has a brand new chapter on curriculum design, and the old chapter on prompts has been expanded into one on creating meaningful assignments.


We have a much better understanding of rubrics. Rubrics are now so widespread that we have a much better idea of how to design and use them. A couple of years ago I did a literature review of rubric development that turned on a lot of lightbulbs for me, and this edition reflects my fresh thinking.


We’re recognizing that in some situations student learning is especially hard to assess. This edition has a new chapter on assessing the hard-to-assess, such as performances and learning that can’t be graded.


We’re increasingly appreciating the importance of setting appropriate standards and targets in order to interpret and use results appropriately. The chapter on this is completely rewritten, with a new section on setting standards for multiple choice tests.


We're fighting the constant pull to make assessment too complicated. The pull to make things much more complicated than they need to be is strong, coming from some accreditors' overly complex requirements, some highly structured assessment information management systems, and some assessment practitioners with psychometric training. That this new edition is well over 400 pages says a lot! This book has a whole chapter on keeping assessment cost-effective, especially in terms of time.


We're starting to recognize that, if assessment is to have real impact, results need to be synthesized into an overall picture of student learning. This edition stresses the need to sit back after looking through reams of assessment reports and ask, from a qualitative rather than quantitative perspective: What are we doing well? In what ways is student learning most disappointing?


Pushback to assessment is moving from resistance to foot-dragging. The voices saying assessment can’t be done are growing quieter because we now have decades of experience doing assessment. But while more people are doing assessment, in too many cases they’re doing it only to comply with an accreditation mandate. Helping people move from getting assessment done to using it in meaningful ways remains a challenge. So the two chapters on culture in the second edition are now six.


Data visualization and learning analytics are changing how we share assessment results. These things are so new that this edition only touches on them. I think that they will be the biggest drivers in changes to assessment over the coming decade.

Is this a rubric?

Posted on January 28, 2018 at 7:25 AM Comments comments (0)

A couple of years ago I did a literature review on rubrics and learned that there's no consensus on what a rubric is. Some experts define rubrics very narrowly, as only analytic rubrics: the kind formatted as a grid, listing traits down the left side and performance levels across the top, with the boxes filled in. But others define rubrics more broadly, as written guides for evaluating student work that, at a minimum, list the traits you're looking for.
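To make the narrow, analytic-rubric definition concrete, here's a minimal sketch (my own illustration, not drawn from any particular expert) of that grid as a simple data structure: traits on one axis, performance levels on the other, and a descriptor in each box.

```python
# An analytic rubric as a grid: each trait maps performance levels to descriptors.
analytic_rubric = {
    "Organization": {
        "Excellent": "Ideas follow a logical sequence with smooth transitions.",
        "Adequate": "Ideas are mostly ordered logically; some transitions are abrupt.",
        "Poor": "Ideas appear in no discernible order.",
    },
    "Grammar": {
        "Excellent": "Virtually error-free.",
        "Adequate": "Occasional errors that do not impede meaning.",
        "Poor": "Frequent errors that interfere with meaning.",
    },
}

# Looking up one box: what does 'Adequate' organization look like?
print(analytic_rubric["Organization"]["Adequate"])
```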


But what about something like the following, which I’ve seen on plenty of assignments?


70% Responds fully to the assignment (length of paper, double-spaced, typed, covers all appropriate developmental stages)

15% Grammar (including spelling, verb conjugation, structure, agreement, voice consistency, etc.)

15% Organization


Under the broad definition of a rubric, yes, this is a rubric. It is a written guide for evaluating student work, and it lists the three traits the faculty member is looking for.


The problem is that it isn't a good rubric. Effective assessments, including rubrics, have the following traits:


Effective assessments yield information that is useful and used. Students who earn less than 70 points for responding to the assignment have no idea where they fell short. Those who earn less than 15 points on organization have no idea why. If the professor wants to help the next class do better on organization, there’s no insight here on where this class’s organization fell short and what most needs to be improved.


Effective assessments focus on important learning goals. You wouldn’t know it from the grading criteria, but this was supposed to be an assignment on critical thinking. Students focus their time and mental energies on what they’ll be graded on, so these students will focus on following directions for the assignment, not developing their critical thinking skills. Yes, following directions is an important skill, but critical thinking is even more important.


Effective assessments are clear. Students have no idea what this professor considers an excellently organized paper, what’s considered an adequately organized paper, and what’s considered a poorly organized paper.


Effective assessments are fair. Here, because there are only three broad, ill-defined traits, the faculty member can be (unintentionally) inconsistent in grading the papers. How many points are taken off for an otherwise fine paper that’s littered with typos? For one that isn’t double-spaced?


So the debate about an assessment should be not whether it is a rubric but rather how well it meets these four traits of effective assessment practices.


If you’d like to read more about rubrics and effective assessment practices, the third edition of my book Assessing Student Learning: A Common Sense Guide will be released on February 13 and can be pre-ordered now. The Kindle version is already available through Amazon.

Why are learning outcomes a good idea?

Posted on January 9, 2018 at 7:25 AM Comments comments (3)

Just before the holidays, the Council of Graduate Schools released Articulating Learning Outcomes in Higher Education. The title is a bit of a misnomer; the paper focuses not on how to articulate learning outcomes but on why it's a good idea to articulate them, and why it might be a good idea to have a learning outcome framework such as the Degree Qualifications Profile to articulate shared learning outcomes across doctoral programs.


What I found most useful about the paper was the strong case it makes for the value of articulating learning outcomes. It offers some reasons I hadn’t thought of before, and they apply to student learning at all higher education levels, not just doctoral education. If you work with someone who doesn't see the value of articulating learning outcomes, maybe this list will help.


Clearly defined learning outcomes can:


• Help students navigate important milestones by making implicit program expectations explicit, especially to first-generation students who may not know the “rules of the game.”


• Help prospective students weigh the costs and benefits of their educational investments.


• Help faculty prepare students more purposefully for a variety of career paths (at the doctoral level, for teaching as well as research careers).


• Help faculty ensure that students graduate with the knowledge and skills they need for an increasingly broad range of career options, which at the doctoral level may include government, non-profits, and startups as well as higher education and industry.


• Help faculty make program requirements and milestones more student-centered and intentional.


• Help faculty, programs, and institutions define the value of a degree or other credential and improve public understanding of that value.


• Put faculty, programs, and institutions in the driver’s seat, defining the characteristics of a successful graduate rather than having a definition imposed by another entity such as an accreditor or state agency.

Balancing regional and specialized accreditation demands

Posted on December 22, 2017 at 7:15 AM Comments comments (0)

Virtually all U.S. accreditors (and some state agencies) require the assessment of student learning, but the specifics (what, when, how) can vary significantly. How can programs with multiple accreditations (say, regional and specialized) serve two or more accreditation masters without killing themselves in the process?


I recently posted my thoughts on this on the ASSESS listserv, and a colleague asked me to make my contribution into a blog post as well.


Bottom line: I advocate a flexible approach.


Start by thinking about why your institution's assessment coordinator or committee asks these programs for reports on student learning assessment. This leads to the question of why they're asking everyone to assess student learning outcomes.


The answer is that we all want to make sure our students are learning what we think is most important and, if they're not, we want to take steps to try to improve that learning. Any reporting structure should be designed to help faculty and staff achieve those two purposes without being unnecessarily burdensome to anyone involved. In other words, reports should be designed primarily to help decision-makers at your college.


At this writing, I'm not aware of any regional accreditor that mandates that every program's assessment efforts and results be reported on a common institution-wide template. When I was an assessment coordinator, I encouraged flexibility in report formats (and deadlines, for that matter). Yes, it was more work for me and the assessment committee to review apples-and-oranges reports, but it was less work, and more meaningful, for faculty, and I've always felt they're more important than I am.


So with this as a framework, I would suggest sitting down with each program with specialized accreditation and working out what's most useful for them.


  • Some programs are doing for their specialized accreditor exactly what your institution and your regional accreditor want. If so, I'm fine with asking for a cut-and-paste of whatever they prepare for their accreditor.
  • Some programs are doing for their specialized accreditor exactly what your institution and your regional accreditor want, but only every few years, when the specialized review takes place. In these cases, if the last review was a few years ago, I think it's appropriate to ask for an interim update.
  • Some programs assess certain learning goals for their specialized accreditor but not others that either the program or your institution views as important. For example, some health/medical accreditors want assessments of technical skills but not "soft" skills such as teamwork and patient interactions. In these cases, you can ask for a cut-and-paste of the assessments done for the specialized accreditor but then an addendum of the additional learning goals.
  • At least a few specialized accreditors expect student learning outcomes to be assessed but not that the results be used to improve learning. In these cases, you can ask for a cut-and-paste of the assessments done but then an addendum on how the results are being used.
  • Some specialized accreditors, frankly, aren't particularly rigorous in their expectations for student learning assessment. I've seen some, for example, that seem happy with surveys of student satisfaction or student self-ratings of their skills. Programs with these specialized accreditations need to do more if their assessment is to be meaningful and useful.


Again, this flexible approach meant more work for me, but I always felt faculty time was more precious than mine, so I always worked to make their jobs as easy as possible and their work as useful and meaningful as possible.

