Linda Suskie

  A Common Sense Approach to Assessment & Accreditation

Blog


What are the characteristics of well-stated learning goals?

Posted on May 27, 2018 at 7:40 AM

When I help faculty and co-curricular staff move ahead with their assessment efforts, I probably spend half our time helping them articulate their learning goals. As the years have gone by, I’ve become ever more convinced that learning goals are the foundation of an assessment structure…and without a solid foundation, a structure can’t be well-constructed.


So what are well-stated learning goals? They have the following characteristics:


They are outcomes: what students will be able to do after they successfully complete the learning experience, not what they will do or learn during the learning experience. Example: Prepare effective, compelling visual summaries of research.


They are clear, written in simple, jargon-free terms that everyone understands, including students, employers, and colleagues in other disciplines. Example: Work collaboratively with others.


They are observable, written using action verbs, because if you can see it, you can assess it. Example: Identify and analyze ethical issues in the discipline.


They focus on skills more than knowledge, conceptual understanding, or attitudes and values, because thinking and performance skills are what employers seek in new hires. I usually suggest that at least half the learning goals of any learning experience focus on skills. Example: Integrate and properly cite scientific literature.


They are significant and aspirational: things that will take some time and effort for students to learn and that will make a real difference in their lives. Example: Identify, articulate, and solve problems in [the discipline or career field].


They are relevant, meeting the needs of students, employers, and society. They focus more on what students need to learn than what faculty want to teach. Example: Interpret numbers, data, statistics, and visual representations of them appropriately.


They are short and therefore powerful. Long, qualified or compound statements get everyone lost in the weeds. Example: Treat others with respect.


They fit the scope of the learning activity. Short co-curricular learning experiences have narrower learning goals than an entire academic program, for example.


They are limited in number. I usually suggest no more than six learning goals per learning experience. If you have 10, 15, or 20 learning goals—or more—everyone focuses on trees rather than the forest of the most important things you want students to learn.


They help students achieve bigger, broader learning goals. Course learning goals help students achieve program and/or general education learning goals; co-curricular learning goals help students achieve institutional learning goals; program learning goals help students achieve institutional learning goals.


For more information on articulating well-stated learning goals, see Chapter 4 of the new 3rd edition of my book Assessing Student Learning: A Common Sense Guide.

Some learning goals are promises we can't keep

Posted on May 2, 2018 at 6:55 AM

I look on learning goals as promises that we make to students, employers, and society: If a student passes a course or graduates, he or she WILL be able to do the things we promise in our learning goals.


But there are some things we hope to instill in students that we can’t guarantee. We can’t guarantee, for example, that every graduate will be a passionate lifelong learner, appreciate artistic expressions, or make ethical decisions. I think these kinds of statements are important aims that might be expressed in a statement of values, but they’re not really learning goals, because they’re something we hope for, not something we can promise. Because they’re not really learning goals, they’re very difficult if not impossible statements to assess meaningfully.


How can you tell if a learning goal is a true learning goal—an assessable promise that we try to keep? Ask yourself the following questions.


Is the learning goal stated clearly, using observable action verbs? Appreciate diversity is a promise we may not be able to keep, but Communicate effectively with people from diverse backgrounds is an achievable, assessable learning goal.


How have others assessed this learning goal? If someone else has assessed it meaningfully and usefully, don’t waste time reinventing the wheel.


How would you recognize people who have achieved this learning goal? Imagine that you run into two alumni of your college. As you talk with them, it becomes clear that one appreciates artistic expressions and the other doesn’t. What might they say about their experiences and views that would lead you to that conclusion? This might give you ideas on ways to express the learning goal in more concrete, observable terms, which makes it easier to figure out how to assess it.


Is the learning goal teachable? Ask faculty who aim to instill this learning goal to share how they help students achieve it. If they can name specific learning activities, the goal is teachable—and assessable, because they can grade the completed learning activities. But if the best they can say is something like, “I try to model it” or “I think they pick it up by osmosis,” the goal may not be teachable—or assessable. Don’t try to assess what can’t be taught.


What knowledge and skills are part of this learning goal? We can’t guarantee, for example, that all graduates will make ethical decisions, but we can make sure that they recognize ethical and unethical decisions, and we can assess their ability to do so.


How important is this learning goal? Most faculty and colleges I work with have too many learning goals—too many to assess well and, more important, too many to help students achieve well in the time we have with them. Ask yourself, “Can our students lead happy and fulfilling lives if they graduate without having achieved this particular learning goal?”


But just because a learning goal is a promise we can’t keep doesn’t mean it isn’t important. A world in which people fail to appreciate artistic expressions or have compassion for others would be a dismal place. So continue to acknowledge and value hard-to-assess learning goals even if you’re not assessing them.


For more information on assessing the hard-to-assess, see Chapter 21 of the new 3rd edition of Assessing Student Learning: A Common Sense Guide.

Value and respect: The keys to assessment success

Posted on March 28, 2018 at 6:25 AM

In my February 28 blog post, I noted that many faculty express frustration with assessment along the following lines:


  • What I most want students to learn is not what’s being assessed.
  • I’m being told what and how to assess, without any input from me.
  • I’m being told what to teach, without any input from me.
  • I’m being told to assess skills that employers want, but I teach other things that I think are more important.
  • A committee is doing a second review of my students’ work. I’m not trusted to assess student work fairly and accurately through my grading processes.
  • I’m being asked to quantify student learning, but I don’t think that’s appropriate for what I’m teaching.
  • I’m being asked to do this on top of everything else I’m already doing.
  • Assessment treats learning as a scientific process, when it’s a human endeavor; every student and teacher is different.


The underlying theme here is that these faculty don’t feel that they and their views are valued and respected. When we value and respect people:


  • We design assessment processes so the results are clearly useful in helping to make important decisions, not paper-pushing exercises designed solely to get through accreditation.
  • We make assessment work worthwhile by using results to make important decisions, such as on resource allocations, as discussed in my March 13 blog post.
  • We truly value great teaching and actively encourage the scholarship of teaching as a form of scholarship.
  • We truly value innovation, especially in improving one’s teaching, because if no one wants to change anything, there’s no point in assessing.
  • We take the time to give faculty and staff clear guidance and coordination, so they understand what they are to do and why.
  • We invest in helping them learn what to do: how to use research-informed teaching strategies as well as how to assess.
  • We support their work with appropriate resources.
  • We help them find time to work on assessment and to keep assessment work cost-effective, because we respect how busy they are.
  • We take a flexible approach to assessment, recognizing that one size does not fit all. We do not mandate a single institution-wide assessment approach but instead encourage a variety of assessment strategies, both quantitative and qualitative. The more choices we give faculty, the more they feel empowered.
  • We design assessment processes so faculty are leaders rather than providers of assessment. We help them work collaboratively rather than in silos, inviting them to contribute to decisions on what, why, and how we assess. We try to assess those learning outcomes that the institutional community most values. More than anything else, we spend more time listening than telling.
  • We recognize and honor assessment work in tangible ways, perhaps through a celebratory event, public commendations, or consideration in promotion, tenure, and merit pay applications.


For more information on these and other strategies to value and respect people who work on assessment, see Chapter 14, “Valuing Assessment and the People Who Contribute,” in the new third edition of my book Assessing Student Learning: A Common Sense Guide.

Making assessment worthwhile

Posted on March 13, 2018 at 9:50 AM

In my February 28 blog post, I noted that many faculty have been expressing frustration that assessment is a waste of an enormous amount of time and resources that could be better spent on teaching. Here are some strategies to help make sure your assessment activities are meaningful and cost-effective, all drawn from the new third edition of Assessing Student Learning: A Common Sense Guide.


Don’t approach assessment as an accreditation requirement. Sure, you’re doing assessment because your accreditor requires it, but cranking out something only to keep an accreditor happy is sure to be viewed as a waste of time. Instead approach assessment as an opportunity to collect information on things you and your colleagues care about and that you want to make better decisions about. Then what you’re doing for the accreditor is summarizing and analyzing what you’ve been doing for yourselves. While a few accreditors have picky requirements that you must comply with whether you like them or not, most want you to use their standards as an opportunity to do something genuinely useful.


Keep it useful. If an assessment hasn’t yielded useful information, stop doing it and do something else. If no one’s interested in assessment results for a particular learning goal, you’ve got a clue that you’ve been assessing the wrong goal.


Make sure it’s used in helpful ways. Design processes to make sure that assessment results inform things like professional development programming, resource allocations for instructional equipment and technologies, and curriculum revisions. Make sure faculty are informed about how assessment results are used so they see its value.


Monitor your investment in assessment. Keep tabs on how much time and money each assessment is consuming…and whether what’s learned is useful enough to make that investment worthwhile. If it isn’t, change your assessment to something more cost-effective.


Be flexible. A mandate to use an assessment tool or strategy that’s inappropriate for a particular learning goal or discipline is sure to be viewed as a waste of everyone’s time. In assessment, one size definitely does not fit all.


Question anything that doesn’t make sense. If no one can give a good explanation for doing something that doesn’t make sense, stop doing it and do something more appropriate.


Start with what you have. Your college has plenty of direct and indirect evidence of student learning already on hand, from grading processes, surveys, and other sources. Squeeze information out of those sources before adding new assessments.


Think twice about blind-scoring and double-scoring student work. The costs in terms of both time and morale can be pretty steep (“I’m a professional! Why can’t they trust me to assess my own students’ work?”). Start by asking faculty to submit their own rubric ratings of their own students’ work. Only move to blind- and double-scoring if you see a big problem in their scores of a major assessment.


Start at the end and work backwards. If your program has a capstone requirement, students should be demonstrating achievement in many key program learning goals in it. Start assessment there. If students show satisfactory achievement of the learning goals, you’re done! If you’re not satisfied with their achievement of a particular learning goal, you can drill down to other places in the curriculum that address that goal.


Help everyone learn what to do. Nothing galls me more than finding out what I did wasn’t what was wanted and has to be redone. While we all learn from experience and do things better the second time, help everyone learn what to do, so their first assessment is a useful one.


Minimize paperwork and bureaucratic layers. Faculty are already routinely assessing student learning through the grading process. What some resent is not the work of grading but the added workload of compiling, analyzing, and reporting assessment evidence from the grading process. Make this process as simple, intuitive, and useful as possible. Cull from your assessment report template anything that’s “nice to know” versus absolutely essential.


Make assessment technologies an optional tool, not a mandate. Only a tiny number of accreditors require using a particular assessment information management system. For everyone else, assessment information systems should be chosen and implemented to make everyone’s lives easier, not for the convenience of a few people like an assessment committee or a visiting accreditation team. If a system is hard to learn, creates more work, or is expensive, it will create resentment and make things worse rather than better. I recently encountered one system for which faculty had to tally and analyze their results, then enter the tallied results into the system. Um, shouldn’t an assessment system do the work of tallying and analysis for the faculty?


Be sensible about staggering assessments. If students are not achieving a key learning goal well, you’ll want to assess it frequently to see if they’re improving. But if students are achieving another learning goal really well, put it on a back burner, asking for assessment reports on it only every few years, to make sure things aren’t slipping.


Help everyone find time to talk. Lots of faculty have told me that they “get” assessment but simply can’t find time to discuss with their colleagues what and how to assess and how best to use the results. Help them carve out time on their calendars for these important conversations.


Link your assessment coordinator with your faculty teaching/learning center, not an accreditation or institutional effectiveness office. This makes clear that assessment is about understanding and improving student learning, not just a hoop to jump through to address some administrative or accreditation mandate.

What do faculty really think about assessment?

Posted on March 4, 2018 at 8:05 AM

The vitriol in some recent op-ed pieces and the comments that followed them might leave the impression that faculty hate assessment. Well, some faculty clearly do, but a national survey suggests that they’re in the minority.


The Faculty Survey of Assessment Culture, directed by Dr. Matthew Fuller at Sam Houston State University, can give us some insight. Its key drawback is that, because it’s still a relatively nascent survey, it has only about 1,155 responses from its last reported administration in 2014. So the survey may not represent what faculty throughout the U.S. really think, but I nonetheless think it’s worth a look.


Most of the survey is a series of statements to which faculty respond by choosing Strongly Agree, Agree, Only Slightly Agree, Only Slightly Disagree, Disagree, or Strongly Disagree.


Here are the percentages who agreed or strongly agreed with each statement. Statements that are positive about assessment are in green; those that are negative about assessment are in red.

80% The majority of administrators are supportive of assessment.

77% Faculty leadership is necessary for my institution’s assessment efforts.

76% Assessment is a good thing for my institution to do.

70% I am highly interested in my institution’s assessment efforts.

70% Assessment is vital to my institution’s future.

67% In general I am eager to work with administrators.

67% Assessment is a good thing for me to do.

64% I am actively engaged in my institution’s assessment efforts.

63% Assessments of programs are typically connected back to student learning.

62% My academic department or college truly values faculty involvement in assessment.

61% I engage in institutional assessment efforts because it is the right thing to do for our students.

60% Assessment is vital to my institution’s way of operating.

57% Discussions about student learning are at the heart of my institution.

57% In general a recommended change is more likely to be enacted by administrators if it is supported by assessment data.

53% I clearly understand assessment processes at my institution.

52% Assessment supports student learning at my institution.

51% Assessment is primarily the responsibility of faculty members.

51% Change occurs more readily when supported by assessment results.

50% It is clear who is ultimately in charge of assessment.

50% I am familiar with the office that leads student assessment efforts for accreditation purposes.

50% Assessment for accreditation purposes is prioritized above other assessment efforts.

49% Assessment results are used for improvement.

49% The majority of administrators primarily emphasize assessment for the improvement of student learning.

49% I engage in institutional assessment because doing so makes a difference to student learning at my institution.

48% Assessment processes yield evidence of my institution’s effectiveness.

48% I have a generally positive attitude toward my institution’s culture of assessment.

47% Senior leaders, i.e., President or Provost, have made clear their expectations regarding assessment.

47% Administrators are supportive of making changes.

46% I am familiar with the office that leads student assessment efforts for student learning.

45% Assessment data are used to identify the extent to which student learning outcomes are met.

44% My institution is structured in a way that facilitates assessment practices focused on improved student learning.

44% The majority of administrators only focus on assessment in response to compliance requirements.

43% Student assessment results are shared regularly with faculty members.

41% I support the ways in which administrators have used assessment on my campus.

40% Assessment is an organized coherent effort at my institution.

40% Assessment results are available to faculty by request.

38% Assessment data are available to faculty by request.

37% Assessment results are shared regularly throughout my institution.

35% Faculty are in charge of assessment at my institution.

33% Engaging in assessment also benefits my research/scholarship agenda.

32% Budgets can be negatively impacted by assessment results.

32% Administrators share assessment data with faculty members using a variety of communication strategies (i.e., meetings, web, written correspondence, presentations).

31% Assessment data are regularly used in official institutional communications.

30% There are sufficient financial resources to make changes at my institution.

29% Assessment is a necessary evil in higher education.

28% Communication of assessment results has been effective.

28% Assessment results are criticized for going nowhere (i.e., not leading to change).

27% Assessment results in a fair depiction of what I do as a faculty member.

27% Administrators use assessment as a form of control (i.e., to regulate institutional processes).

26% Assessment efforts do not have a clear focus.

26% I enjoy engaging in institutional assessment efforts.

24% Assessment success stories are formally shared throughout my institution.

23% Assessment results in an accurate depiction of what I do as a faculty member.

22% Assessment is conducted based on the whims of the people in charge.

21% If assessment was not required I would not be doing it.

21% Assessment is primarily the responsibility of administrators.

21% I am aware of several assessment success stories (i.e. instances of assessment resulting in important changes).

20% I do not have time to engage in assessment efforts.

19% Assessment results have no impact on resource allocations.

18% Assessment results are used to scare faculty into compliance with what the administration wants.

18% There is pressure to reveal only positive results from assessment efforts.

17% I avoid doing institutional assessment activities if I can.

17% I engage in assessment because I am afraid of what will happen if I do not.

14% I perceive assessment as a threat to academic freedom.

10% Assessment results are used to punish faculty members (i.e., not rewarding innovation or effective teaching, research, or service).

4% Assessment is someone else’s problem, not mine.


Overall, there’s good news here. Most faculty agreed with most positive statements about assessment, and most disagreed with most negative statements. I was particularly heartened that about three-quarters of respondents agreed that “assessment is a good thing for my institution to do,” about 70% agreed that “assessment is vital to my institution’s future,” and about two-thirds agreed that “assessment is a good thing for me to do.”


But there’s also plenty to be concerned about here. Only 35% agree that faculty are in charge of assessment and, by several measures, only a minority see assessment results shared and used. Almost 30% view assessment as a necessary evil.


Survey researchers know that people are more apt to agree than disagree with a statement, so I also looked at the percentages of faculty who disagreed or strongly disagreed with each statement. These responses do not mirror the agreed/strongly agreed results above, because on some items a larger proportion of faculty marked Only Slightly Agree or Only Slightly Disagree. Again, the positive statements are in green and the negative ones in red.

3% The majority of administrators are supportive of assessment.

6% Faculty leadership is necessary for my institution’s assessment efforts.

6% Assessment is a good thing for my institution to do.

7% Assessment is vital to my institution’s future.

8% I am highly interested in my institution’s assessment efforts.

8% In general a recommended change is more likely to be enacted by administrators if it is supported by assessment data.

9% I am actively engaged in my institution’s assessment efforts.

9% In general I am eager to work with administrators.

9% My academic department or college truly values faculty involvement in assessment.

10% Change occurs more readily when supported by assessment results.

10% Assessment is a good thing for me to do.

12% Assessment results are available to faculty by request.

13% Assessment is vital to my institution’s way of operating.

13% Assessment data are available to faculty by request.

13% The majority of administrators primarily emphasize assessment for the improvement of student learning.

13% I engage in institutional assessment efforts because it is the right thing to do for our students.

14% Discussions about student learning are at the heart of my institution.

14% I clearly understand assessment processes at my institution.

14% Assessment data are used to identify the extent to which student learning outcomes are met.

15% Assessments of programs are typically connected back to student learning.

15% Assessment results are used for improvement.

16% Assessment is primarily the responsibility of faculty members.

16% Administrators are supportive of making changes.

17% Assessment supports student learning at my institution.

18% Assessment processes yield evidence of my institution’s effectiveness.

18% I support the ways in which administrators have used assessment on my campus.

19% It is clear who is ultimately in charge of assessment.

19% Assessment is an organized coherent effort at my institution.

19% I have a generally positive attitude toward my institution’s culture of assessment.

20% Senior leaders, i.e., President or Provost, have made clear their expectations regarding assessment.

20% My institution is structured in a way that facilitates assessment practices focused on improved student learning.

20% I engage in institutional assessment because doing so makes a difference to student learning at my institution.

21% I am familiar with the office that leads student assessment efforts for accreditation purposes.

21% Budgets can be negatively impacted by assessment results.

22% The majority of administrators only focus on assessment in response to compliance requirements.

23% Student assessment results are regularly shared with faculty members.

24% I am familiar with the office that leads student assessment efforts for student learning.

24% Assessment for accreditation purposes is prioritized above other assessment efforts.

24% Assessment data are regularly used in official institutional communications.

28% Faculty are in charge of assessment at my institution.

29% Assessment results have no impact on resource allocations.

29% Assessment results are regularly shared throughout my institution.

29% I enjoy engaging in institutional assessment efforts.

31% Administrators share assessment data with faculty members using a variety of communication strategies (i.e., meetings, web, written correspondence, presentations).

31% Communication of assessment results has been effective.

31% Administrators use assessment as a form of control (i.e., to regulate institutional processes).

32% Assessment results are criticized for going nowhere (i.e., not leading to change).

32% Assessment results in a fair depiction of what I do as a faculty member.

33% There are sufficient financial resources to make changes at my institution.

34% Assessment success stories are formally shared throughout my institution.

34% Assessment results in an accurate depiction of what I do as a faculty member.

35% Assessment is primarily the responsibility of administrators.

36% I am aware of several assessment success stories (i.e., instances of assessment resulting in important changes).

36% Engaging in assessment also benefits my research/scholarship agenda.

41% Assessment efforts do not have a clear focus.

41% I do not have time to engage in assessment efforts.

42% Assessment is a necessary evil in higher education.

50% Assessment is conducted based on the whims of the people in charge.

50% There is pressure to reveal only positive results from assessment efforts.

53% Assessment results are used to scare faculty into compliance with what the administration wants.

55% I avoid doing institutional assessment activities if I can.

56% If assessment was not required I would not be doing it.

56% I engage in assessment because I am afraid of what will happen if I do not.

60% Assessment results are used to punish faculty members (i.e., not rewarding innovation or effective teaching, research, or service).

62% I perceive assessment as a threat to academic freedom.

78% Assessment is someone else’s problem, not mine.


Here there’s more good news. We want small proportions of faculty to disagree with the positive statements about assessment, and for the most part they do. About a third disagree that assessment results and success stories are shared, but that matches what we saw with the agree-strongly agree results.


But there are also areas of concern here. We want large proportions of faculty to disagree with the negative statements about assessment, and that doesn’t always happen. Less than a quarter disagreed that budgets can be negatively impacted by assessment results and that administrators look at assessment only through a compliance lens. Less than a third disagreed that assessment results don’t lead to change or resource allocations. The results that concerned me most? Only 42% disagreed that assessment is a necessary evil; only half disagreed that there is pressure to reveal only positive assessment results; and only a bit over half disagreed that “If assessment was not required I would not be doing it.”


So, while most faculty “get” assessment, there are sizable numbers who don’t yet see value in it. We've come a long way, but there's still plenty of work to do!


(Some notes on the presentation of these results: I sorted results from highest to lowest, rounded percentages to the nearest whole percent, and color-coded "good" and "bad" statements. Those choices all help the key points of a very lengthy survey pop out at the reader.)
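If you keep survey data like this in a spreadsheet, a few lines of code can handle the sorting and rounding for you. Here’s a minimal sketch in Python; the file name and column names are placeholders I made up for illustration, not part of the actual survey:

import pandas as pd

# A sketch of the presentation steps described above, assuming the survey items
# and their agreement percentages live in a CSV with (hypothetical) columns
# "statement", "pct_agree", and "tone" ("positive" or "negative").
results = pd.read_csv("survey_results.csv")
results["pct_agree"] = results["pct_agree"].round().astype(int)  # whole percents
results = results.sort_values("pct_agree", ascending=False)      # highest first

for _, row in results.iterrows():
    # Green for statements that are positive about assessment, red for negative.
    color = "green" if row["tone"] == "positive" else "red"
    print(f'{row["pct_agree"]}% [{color}] {row["statement"]}')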

Why do (some) faculty hate assessment?

Posted on February 28, 2018 at 10:25 AM

Two recent op-ed pieces in the Chronicle of Higher Education and the New York Times—and the hundreds of online comments regarding them—make clear that, 25 years into the assessment movement, a lot of faculty really hate assessment.


It’s tempting for assessment people to spring into a defensive posture and dismiss what these people are saying. (They’re misinformed! The world has changed!) But if that’s our response, aren’t we modeling the fractures deeply dividing the US today, with people existing in their own echo chambers and talking past each other rather than really listening and trying to find common ground on which to build? And shouldn’t we be practicing what we preach, using systematic evidence to inform what we say and do?


So I took a deeper dive into those comments. I did a content analysis of the articles and many of the comments that followed. (The New York Times article had over 500 comments—too many for me to handle—so I looked only at NYT comments with at least 12 recommendations.)


If you’re not familiar with content analysis, it’s looking through text to identify the frequency of ideas or themes. For example, I counted how many comments mentioned that assessment is expensive. I do content analysis by listing all the comments as bullets in a Word document, then cutting and pasting the bulleted comments to group similar comments together under headings. I then cut and paste the groups so the most frequently mentioned themes are at the top of the document. There is qualitative analysis software that can help if you don’t want to do this manually.
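If you’d rather not do all that cutting and pasting by hand, a rough first pass of the counting can also be scripted. Here’s a minimal sketch in Python; the theme labels, keywords, and file name are hypothetical placeholders for illustration, not the coding scheme I actually used:

from collections import Counter

# A sketch of a first-pass theme count, assuming each comment has been pasted
# into a plain-text file, one comment per line. The themes and keywords below
# are made-up examples; a real analysis would refine them after reading the
# comments, since keyword matching is cruder than human coding.
themes = {
    "waste of time and resources": ["waste", "time", "cost", "expensive", "resources"],
    "not valued or respected": ["respect", "trust", "valued", "disempowered"],
    "external and economic forces": ["legislature", "business model", "adjunct"],
    "blamed for student learning": ["responsible", "blame", "k-12"],
}

counts = Counter()
with open("comments.txt", encoding="utf-8") as f:
    for comment in f:
        text = comment.lower()
        for theme, keywords in themes.items():
            if any(word in text for word in keywords):
                counts[theme] += 1

# List themes from most to least frequently mentioned, as in the manual analysis.
for theme, n in counts.most_common():
    print(f"{theme}: {n}")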


A caveat: Comments don’t always fall into neat, discrete categories; judgement is needed to decide where to place some. I did this analysis quickly, and it’s entirely possible that, if you’d done this instead of me, you might have come up with somewhat different results. But assessment is not rigorous research; we just need information good enough to help inform our thinking, and I think my analysis is fine for the purpose of figuring out how we might deal with this.


Why take the time to do a content analysis instead of just reading through the comments? Because, when we process a list of comments, there’s a good chance we won’t identify the most frequently mentioned ideas accurately. As I was doing my content analysis, I was struck by how many faculty complained that assessment is (I’m being snarky here) either a vast right-wing conspiracy or a vast left-wing conspiracy, simply because I’d never heard that before. It turned out, however, that there were other themes that emerged far more frequently. This is a good lesson for faculty who think they don’t need to formally assess because they “know” what their students are struggling with. Maybe they do…but maybe not.


So what did I find? As I’d expected, there are many reasons why faculty may hate assessment. I found that most of their complaints fall into just four broad categories:


It’s a waste of an enormous amount of time and resources that could be better spent on teaching. Almost 40% of the comments fell into this category. Some examples:

  • We faculty are angry over the time and dollars wasted.
  • The assessment craze is not only of little value, but it saps the meager resources of time and money available for classroom instruction.
  • Faced with outrage over the high cost of higher education, universities responded by encouraging expensive administrative bloat.
  • It is not that the faculty are not trying, but the data and methods in general use are very poor at measuring learning.
  • Our “assessment expert” told us to just put down as a goal the % of students we wanted to rate us as very good or good on a self-report survey. Which we all know is junk.


I and what I think is important is not valued or respected. Over 30% of the comments fell into this category. Some examples:

  • Assessment of student learning outcomes is an add-on activity that says your standard examination and grading scheme isn’t enough so you need to do a second layer of grading in a particular numerical format.
  • The fundamental, flawed premise of most of modern education is that teaching is a science.
  • Bureaucratic jargon subtly shapes the expectations of students and teachers alike.
  • When the effort to reduce learning to a list of job-ready skills goes too far, it misses the point of a university education.
  • Learning outcomes have disempowered faculty.
  • The only learning outcomes I value: students complete their formal education with a desire to learn more
  • Assessment reflects a misguided belief that learning is quantifiable.


External and economic forces are behind this. About 15% of comments fell into this category, including those right-wing/left-wing conspiracy comments. Some examples:

  • There’s a whole industry out there that’s invested in outcomes assessment.
  • The assessment boom coincided with the decision of state legislatures to reduce spending on public universities.
  • Educational institutions have been forced to operate out of a business model.
  • It is the rise of adjuncts and online classes that has led to the assessment push.


I’m unfairly held responsible for student learning. About 10% of comments fell into this category. Some examples:

  • Students, not faculty, are responsible for student learning.
  • It is much more profitable to skim money from institutions of higher learning than fixing the underlying causes of the poverty and lack of focus that harm students.
  • The root cause is lack of a solid foundation built in K-12.


Two things struck me about these four broad categories. The first was that they don’t quite align with what I’ve heard as I’ve worked with literally thousands of faculty at hundreds of colleges over the last two decades. Yes, I’ve heard plenty about assessment being useless, and I’ve written about faculty feeling devalued and disrespected by assessment, but I’d never heard the external-forces or blame-game reasons before. The second was that I’ve heard plenty about other reasons that weren’t mentioned in these comments, especially finding time to work on assessment, not understanding how to assess (or how to teach), and moving from a culture of silos to one of collaboration. I think the reason for the disconnect between what I’ve heard and what was expressed here is that these comments reflect the angriest faculty, not all faculty. But their anger is legitimate and something we should all work to address.


[UPDATED 2/28/2018 4:36 PM EST] So what should we do? First, we clearly need better information on faculty experiences and views regarding assessment so we can understand which issues are most pervasive and address them. The Surveys of Assessment Culture developed by Matt Fuller at Sam Houston State University are an important start.


In the meanwhile, the good news is the comments in and accompanying these two pieces all represent solvable problems. (No, we can’t solve all of society’s ills, but we can help faculty deal with them.) I’ll share some ideas in upcoming blog posts. If you don’t want to wait, you’ll find plenty of practical suggestions in the new 3rd edition of my book Assessing Student Learning: A Common Sense Guide.

An example of closing the loop...and ideas for doing it well

Posted on February 22, 2018 at 7:00 PM

I was intrigued by an article in the September 23, 2016, issue of Inside Higher Ed titled “When a C Isn’t Good Enough.” The University of Arizona found that students who earned an A or B in their first-year writing classes had a 67% chance of graduating, but those earning a C had only a 48% chance. The university is now exploring a variety of ways to improve the success of students earning a C, including requiring C students to take a writing competency test, providing resources to C students, and/or requiring C students to repeat the course.

 

I know nothing about the University of Arizona beyond what’s in the article. But if I were working with the folks there, I’d offer the following ideas to them, if they haven’t considered them already.

 

1. I’d like to see more information on why the C students earned a C. Which writing skills did they struggle most with: basic grammar, sentence structure, organization, supporting arguments with evidence, etc.? Or was there another problem? For example, maybe C students were more likely to hand in assignments late (or not at all).

 

2. I’d also like to see more research on why those C students were less likely to graduate. How did their GPAs compare to A and B students? If their grades were worse, what kinds of courses seemed to be the biggest challenge for them? Within those courses, what kinds of assignments were hardest for them? Why did they earn a poor grade on them? What writing skills did they struggle most with: basic grammar, organization, supporting arguments with evidence, etc.? Or, again, maybe there was another problem, such as poor self-discipline in getting work handed in on time.

 

And if their GPAs were not that different from those of A and B students (or even if they were), what else was going on that might have led them to leave? The problem might not be their writing skills per se. Perhaps, for example, students with work or family obligations found it harder to devote the study time necessary to get good grades. Providing support for that issue might help more than helping them with their writing skills.

 

3. I’d also like to see the faculty responsible for first-year writing articulate a clear, appropriate, and appropriately rigorous standard for earning a C. In other words, they could use the above information on the kinds and levels of writing skills that students need to succeed in subsequent courses to articulate the minimum performance levels required to earn a C. When I taught first-year writing at a public university in Maryland, the state system had just such a statement, the “Maryland C Standard.”

 

4. I’d like to see the faculty adopt a policy that, in order to pass first-year writing, students must meet the minimum standard on every writing criterion. Thus, if student work is graded using a rubric, the grade isn’t determined by averaging the scores on the various rubric criteria—that lets a student with A arguments but F grammar still earn a C. Instead, students must earn at least a C on every rubric criterion in order to pass the assignment. Then the As, Bs, and Cs can be averaged into an overall grade for the assignment. (There’s a small worked example of this arithmetic just after this list.)

 

(If this sounds vaguely familiar to you, what I’m suggesting is the essence of competency-based education: students need to demonstrate competence on all learning goals and objectives in order to pass a course or graduate. Failure to achieve one goal or objective can’t be offset by strong performance on another.)

 

5. If they haven’t done so already, I’d also like to see the faculty responsible for first-year writing adopt a common rubric, articulating the criteria they’ve identified, that would be used to assess and grade the final assignment in every section, no matter who teaches it. This would make it easy to study student performance across all sections of the course and identify pervasive strengths and weaknesses in their writing. If some faculty members or TAs have additional grading criteria, they could simply add those to the common rubric. For example, I graded my students on their use of citation conventions, even though that was not part of the Maryland C Standard. I added that to the bottom of my rubric.

 

6. Because work habits are essential to success in college, I’d also suggest making this a separate learning outcome for first-year writing courses. This means grading students separately on whether they turn in work on time, put in sufficient effort, etc. This would help everyone understand why some students fail to graduate—is it because of poor writing skills, poor work habits, or both?
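To make the arithmetic in point 4 concrete, here’s a minimal sketch in Python. The 0–4 point scale, the C-level threshold of 2, and the sample scores are my own assumptions for illustration, not anything from the University of Arizona article:

# A sketch of the "meet the minimum on every criterion" policy from point 4.
# Assumed scale: each rubric criterion is scored 0-4, with 2 representing
# C-level work. The assignment earns the average of the criterion scores only
# if every criterion is at least at the C level.
C_LEVEL = 2

def assignment_grade(criterion_scores):
    """Return the averaged score, or None if any criterion falls below a C."""
    if any(score < C_LEVEL for score in criterion_scores):
        return None  # strong work on one criterion can't offset failing work on another
    return sum(criterion_scores) / len(criterion_scores)

# A paper with A-level arguments (4) but failing grammar (1) does not pass,
# even though its simple average (2.67) might otherwise look like a C.
print(assignment_grade([4, 1, 3]))   # None
print(assignment_grade([3, 2, 4]))   # 3.0 -> passes, averaged into a B-level grade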

 

These ideas all move responsibility for addressing the problem from administrators to the faculty. That responsibility can’t be fulfilled unless the faculty commit to collaborating on identifying and implementing a shared strategy so that every student, no matter which section of writing they enroll in, passes the course with the skills needed for subsequent success.

Is higher ed assessment changing? You bet!

Posted on February 13, 2018 at 9:10 AM

Today marks the release of the third edition of my book Assessing Student Learning: A Common Sense Guide. I approached Jossey-Bass about doing a third edition in response to requests from some faculty who used it as a textbook but were required to use more recent editions. The second edition had been very successful, so I figured I’d update the references and a few chapters and be done. But as I started work on this edition, I was immediately struck by how outdated the second edition had become in just a few short years. The third edition is a complete reorganization and rewrite of the previous edition.


How has the world of higher ed assessment changed?


We are moving from Assessment 1.0 to Assessment 2.0: from getting assessment done—and in many cases not doing it very well—to getting assessment used. Many faculty and administrators still struggle to grasp that assessment is all about improving how we help students learn, not an end in itself, and that assessments should be planned with likely uses in mind. The last edition talked about using results, of course, but the new edition adds a chapter on using assessment results to the beginning of the book. And throughout the book I talk not about “assessment results” but “evidence of student learning,” which is what this is really all about.


We have a lot of new resources. Many new assessment resources have emerged since the second edition was published, including the VALUE rubrics published by AAC&U, the many white papers published by NILOA, and the Degree Qualifications Profile sponsored by Lumina. Learning management systems and assessment information management systems are far more prevalent and sophisticated. This edition talks about these and other valuable new resources.


We are recognizing that different settings require different approaches to assessment. The more assessment we’ve done, the more we’ve come to realize that assessment practices vary depending on whether we’re assessing learning in courses, programs, general education curricula, or co-curricular experiences. The last edition didn’t draw many distinctions among assessment in these settings. This edition features a new chapter on the many settings of assessment, and several chapters discuss applying concepts to specific settings.


We’re realizing that curriculum design is a big piece of the assessment puzzle. We’ve found that, when faculty and staff struggle with assessment, it’s often because the learning outcomes they’ve identified aren’t addressed sufficiently—or at all—in the curriculum. So this book has a brand new chapter on curriculum design, and the old chapter on prompts has been expanded into one on creating meaningful assignments.


We have a much better understanding of rubrics. Rubrics are now so widespread that we have a much better idea of how to design and use them. A couple of years ago I did a literature review of rubric development that turned on a lot of lightbulbs for me, and this edition reflects my fresh thinking.


We’re recognizing that in some situations student learning is especially hard to assess. This edition has a new chapter on assessing the hard-to-assess, such as performances and learning that can’t be graded.


We’re increasingly appreciating the importance of setting appropriate standards and targets in order to interpret and use results appropriately. The chapter on this is completely rewritten, with a new section on setting standards for multiple choice tests.


We’re fighting the constant pull to make assessment too complicated. Some accreditors’ overly complex requirements, some highly structured assessment information management systems, and some assessment practitioners with psychometric training all exert a strong pull toward making things much more complicated than they need to be. That this new edition is well over 400 pages says a lot! This book has a whole chapter on keeping assessment cost-effective, especially in terms of time.


We’re starting to recognize that, if assessment is to have real impact, results need to be synthesized into an overall picture of student learning. This edition stresses the need to sit back after looking through reams of assessment reports and ask, from a qualitative rather than quantitative perspective: What are we doing well? In what ways is student learning most disappointing?


Pushback to assessment is moving from resistance to foot-dragging. The voices saying assessment can’t be done are growing quieter because we now have decades of experience doing assessment. But while more people are doing assessment, in too many cases they’re doing it only to comply with an accreditation mandate. Helping people move from getting assessment done to using it in meaningful ways remains a challenge. So the two chapters on culture in the second edition are now six.


Data visualization and learning analytics are changing how we share assessment results. These things are so new that this edition only touches on them. I think that they will be the biggest drivers in changes to assessment over the coming decade.

Is this a rubric?

Posted on January 28, 2018 at 7:25 AM

A couple of years ago I did a literature review on rubrics and learned that there’s no consensus on what a rubric is. Some experts define rubrics very narrowly, as only analytic rubrics—the kind formatted as a grid, listing traits down the left side and performance levels across the top, with the boxes filled in. But others define rubrics more broadly, as written guides for evaluating student work that, at a minimum, list the traits you’re looking for.


But what about something like the following, which I’ve seen on plenty of assignments?


70% Responds fully to the assignment (length of paper, double-spaced, typed, covers all appropriate developmental stages)

15% Grammar (including spelling, verb conjugation, structure, agreement, voice consistency, etc.)

15% Organization


Under the broad definition of a rubric, yes, this is a rubric. It is a written guide for evaluating student work, and it lists the three traits the faculty member is looking for.


The problem is that it isn’t a good rubric. Effective assessments, including rubrics, have the following traits:


Effective assessments yield information that is useful and used. Students who earn less than 70 points for responding to the assignment have no idea where they fell short. Those who earn less than 15 points on organization have no idea why. If the professor wants to help the next class do better on organization, there’s no insight here on where this class’s organization fell short and what most needs to be improved.


Effective assessments focus on important learning goals. You wouldn’t know it from the grading criteria, but this was supposed to be an assignment on critical thinking. Students focus their time and mental energies on what they’ll be graded on, so these students will focus on following directions for the assignment, not developing their critical thinking skills. Yes, following directions is an important skill, but critical thinking is even more important.


Effective assessments are clear. Students have no idea what this professor considers an excellently organized paper, what’s considered an adequately organized paper, and what’s considered a poorly organized paper.


Effective assessments are fair. Here, because there are only three broad, ill-defined traits, the faculty member can be (unintentionally) inconsistent in grading the papers. How many points are taken off for an otherwise fine paper that’s littered with typos? For one that isn’t double-spaced?


So the debate about an assessment should be not whether it is a rubric but rather how well it meets these four traits of effective assessment practices.


If you’d like to read more about rubrics and effective assessment practices, the third edition of my book Assessing Student Learning: A Common Sense Guide will be released on February 13 and can be pre-ordered now. The Kindle version is already available through Amazon.

Why are learning outcomes a good idea?

Posted on January 9, 2018 at 7:25 AM

Just before the holidays, the Council of Graduate Schools released Articulating Learning Outcomes in Higher Education. The title is a bit of a misnomer; the paper focuses not on how to articulate learning outcomes but on why it’s a good idea to articulate learning outcomes and why it might be a good idea to have a learning outcome framework such as the Degree Qualifications Profile to articulate shared learning outcomes across doctoral programs.


What I found most useful about the paper was the strong case it makes for the value of articulating learning outcomes. It offers some reasons I hadn’t thought of before, and they apply to student learning at all higher education levels, not just doctoral education. If you work with someone who doesn't see the value of articulating learning outcomes, maybe this list will help.


Clearly defined learning outcomes can:


• Help students navigate important milestones by making implicit program expectations explicit, especially to first-generation students who may not know the “rules of the game.”


• Help prospective students weigh the costs and benefits of their educational investments.


• Help faculty prepare students more purposefully for a variety of career paths (at the doctoral level, for teaching as well as research careers).


• Help faculty ensure that students graduate with the knowledge and skills they need for an increasingly broad range of career options, which at the doctoral level may include government, non-profits, and startups as well as higher education and industry.


• Help faculty make program requirements and milestones more student-centered and intentional.


• Help faculty, programs, and institutions define the value of a degree or other credential and improve public understanding of that value.


• Put faculty, programs, and institutions in the driver’s seat, defining the characteristics of a successful graduate rather than having a definition imposed by another entity such as an accreditor or state agency.

Balancing regional and specialized accreditation demands

Posted on December 22, 2017 at 7:15 AM

Virtually all U.S. accreditors (and some state agencies) require the assessment of student learning, but the specifics--what, when, how--can vary significantly. How can programs with multiple accreditations (say, regional and specialized) serve two or more accreditation masters without killing themselves in the process?


I recently posted my thoughts on this on the ASSESS listserv, and a colleague asked me to make my contribution into a blog post as well.


Bottom line: I advocate a flexible approach.


Start by thinking about why your institution's assessment coordinator or committee asks these programs for reports on student learning assessment. This leads to the question of why they're asking everyone to assess student learning outcomes.


The answer is that we all want to make sure our students are learning what we think is most important, and if we're not, we want to take steps to try to improve that learning. Any reporting structure should be designed to help faculty and staff achieve those two purposes--without being unnecessarily burdensome to anyone involved. In other words, reports should be designed primarily to help decision-makers at your college.


At this writing, I'm not aware of any regional accreditor that mandates that every program's assessment efforts and results must be reported on a common institution-wide template. When I was an assessment coordinator, I encouraged flexibility in report formats (and deadlines, for that matter). Yes, it was more work for me and the assessment committee to review apples-and-oranges reports but less work and more meaningful for faculty--and I've always felt they're more important than me.


So with this as a framework, I would suggest sitting down with each program with specialized accreditation and working out what's most useful for them.


  • Some programs are doing for their specialized accreditor exactly what your institution and your regional accreditor want. If so, I'm fine with asking for a cut-and-paste of whatever they prepare for their accreditor.
  • Some programs are doing for their specialized accreditor exactly what your institution and your regional accreditor want, but only every few years, when the specialized review takes place. In these cases, if the last review was a few years ago, I think it's appropriate to ask for an interim update.
  • Some programs assess certain learning goals for their specialized accreditor but not others that either the program or your institution views as important. For example, some health/medical accreditors want assessments of technical skills but not "soft" skills such as teamwork and patient interactions. In these cases, you can ask for a cut-and-paste of the assessments done for the specialized accreditor but then an addendum of the additional learning goals.
  • At least a few specialized accreditors expect student learning outcomes to be assessed but not that the results be used to improve learning. In these cases, you can ask for a cut-and-paste of the assessments done but then an addendum on how the results are being used.
  • Some specialized accreditors, frankly, aren't particularly rigorous in their expectations for student learning assessment. I've seen some, for example, that seem happy with surveys of student satisfaction or student self-ratings of their skills. Programs with these specialized accreditations need to do more if their assessment is to be meaningful and useful.


Again, this flexible approach meant more work for me, but I always felt faculty time was more precious than mine, so I always worked to make their jobs as easy as possible and their work as useful and meaningful as possible.

Seminal readings on assessing student learning

Posted on December 8, 2017 at 7:00 AM

Someone on the ASSESS listserv recently asked for recommendations for a good basic book for those getting started with assessment. Here are eight books I recommend for every assessment practitioner's bookshelf (in addition, of course, to my own Assessing Student Learning: A Common Sense Guide, whose third edition is coming out on February 4, 2018).


Assessment Essentials: Planning, Implementing, and Improving Assessment in Higher Education by Trudy Banta and Catherine Palomba (2014): This is a soup-to-nuts primer on student learning assessment in higher education. The authors especially emphasize organizing and implementing assessment.


Learning Assessment Techniques: A Handbook for College Faculty by Elizabeth Barkley and Claire Major (2016): This successor to the classic Classroom Assessment Techniques (Angelo & Cross, 1993) expands and reconceptualizes CATs into a fresh set of Learning Assessment Techniques (LATs)—simple tools for learning and assessment—that faculty will find invaluable.


How to Create and Use Rubrics for Formative Assessment and Grading by Susan Brookhart (2013): This book completely changed my thinking about rubrics. Susan Brookhart has a fairly narrow vision of how rubrics should be developed and used, but she offers persuasive arguments for doing things her way. I’m convinced that her approach will lead to sounder, more useful rubrics.


Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses by L. Dee Fink (2013): Dee Fink is an advocate of backwards curriculum design: identifying course learning goals, identifying how students will demonstrate achievement of those goals by the end of the course, then designing learning activities that prepare students to demonstrate achievement successfully. His book presents an important context for assessment: its role in the teaching process.


Using Evidence of Student Learning to Improve Higher Education by George Kuh, Stan Ikenberry, Natasha Jankowski, Timothy Cain, Peter Ewell, Pat Hutchings, and Jillian Kinzie (2015): The major theme of this book is that, if assessment is going to work, it has to be for you, your colleagues, and your students, not your accreditor. This book is a powerful argument for moving from a compliance approach to one that makes assessment meaningful and consequential. If you feel your college is simply going through assessment motions, this book will give you plenty of practical ideas to make it more useful.


Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability by Linda Suskie (2014): I wrote this book after working for one of the U.S. regional accreditors for seven years and consulting for colleges in all the other U.S. accreditation regions. In that work, I found myself repeatedly espousing the same basic principles, including principles for obtaining and using meaningful, useful assessment evidence. Those principles are the foundation of this book.


Assessment Clear and Simple: A Practical Guide for Institutions, Departments, and General Education by Barbara Walvoord (2010): The strength of this book is its size: this slim volume is a great introduction for anyone feeling overwhelmed by all he or she needs to learn about assessment.


Effective Grading by Barbara Walvoord and Virginia Anderson (2010): This is my second favorite assessment book after my own! With its simple language and its focus on the grading process, it’s a great way to help faculty develop or improve assessments in their courses. It introduces them to many important assessment ideas that apply to program and general education assessments as well.

So you want to be a consultant?

Posted on November 21, 2017 at 8:25 AM Comments comments (1)

From time to time people contact me for advice, not on assessment or accreditation but for tips on how to build a consulting business. In case you’re thinking the same thing, I’m sorry to tell you that I really can’t offer much advice.


My consulting work is the culmination of 40 years of work in higher education. So if you want to spend the next 40 years preparing to get into consulting work, I can tell you my story, but if you want to build a business more quickly, I can’t help.


I began my career in institutional research, then transitioned into strategic planning and quality improvement. These can be lonely jobs, so I joined relevant professional organizations. Some of the institutions where I worked would pay for travel to conferences only if I was presenting, so I presented as often as I could. And I became actively involved in the professional organizations I joined—I was treasurer of one and organized a regional conference for another, for example. All these things helped me network and make connections with people in higher education all over the United States.


All institutional researchers deal with surveys, and early in my career I found people asking me for advice on surveys they were developing. Writing a good survey isn’t all that different from writing a good test, which I’d learned how to do in grad school. (My master’s is in educational measurement and statistics from the University of Iowa.) After finding myself giving the same advice over and over, I wrote a little booklet, which gradually evolved into a monograph on questionnaire surveys published by the Association for Institutional Research. I started doing workshops around the country on questionnaire design.


I love to teach, so concurrently throughout my career I’ve taught as an adjunct at least once a year—all kinds of courses, from developmental mathematics to graduate courses. That’s made a huge difference in my consulting work, because it’s given me credibility with both the teaching and administrative sides of the house.


Then I had a life-changing experience: a one-year appointment in 1999-2000 as director of the Assessment Forum at the old American Association for Higher Education. People often asked me for recommendations for a good soup-to-nuts primer on assessment. At that time, there wasn’t one (there were good books on assessment, but with narrower focuses). So I wrote one, applying what I learned in my graduate studies to the higher education environment, and was lucky enough to get it published. The book, along with conference sessions, continued networking, and simply having that one-year position at AAHE, built my reputation as an assessment expert.


When I went into full-time consulting about six years ago, I did read up a little on how to build a consulting business. I built a website so people could find me, and I built a social media presence and a blog on my website to drive people to the website. But I don’t really do any other marketing. My clients tell me that they contact me because of my longstanding reputation, my book, and my conference sessions.


So if you want to be a consultant, here's my advice. Take 40 years to build your reputation. Start with a graduate degree from a really good, relevant program. Be professionally active. Teach. Get published. Present at conferences. And get lucky enough to land a job that puts you on the national stage. Yes, there are plenty of people who build a successful consulting business more quickly, but I’m not one of them, and I can’t offer you advice on how to do it.

What can an article on gun control tell us about creating good assessment reports?

Posted on November 8, 2017 at 10:05 AM Comments comments (6)

I was struck by Nicholas Kristof’s November 6 New York Times article, How to Reduce Shootings. No, I’m not talking here about the politics of the issue, and I’m not writing this blog post to advocate any stance on the issue. What struck me—and what’s relevant to assessment—is how effectively Kristof and his colleagues brought together and compellingly presented a variety of data.


Here are some of the lessons from Kristof’s article that we can apply to assessment reports.


Focus on using the results rather than sharing the results, starting with the report title. Kristof could have titled his piece something like, “What We Know About Gun Violence,” just as many assessment reports are titled something like, “What We’ve Learned About Student Achievement of Learning Outcomes.” But Kristof wants this information used, not just shared, and so do (or should) we. Focus both the title and content of your assessment report on moving from talk to practical, concrete responses to your assessment results.


Focus on what you’ve learned from your assessments rather than the assessments themselves. Every subheading in Kristof’s article states a conclusion drawn from his evidence. There’s no “Summary of Results” heading like what we see in so many assessment reports. Include in your report subheadings that will entice everyone to keep reading.


Go heavy on visuals, light on text. My estimate is that about half the article is visuals, half text. This makes the report a fast read, with points literally jumping out at us.


Go for graphs and other visuals rather than tables of data. Every single set of data in Kristof’s report is accompanied by graphs or other visuals that let us immediately see his point.


Order results from highest to lowest. There’s no law that says you must present the results for rubric criteria or a survey rating scale in their original order. Ordering results from highest to lowest—especially when accompanied by a bar graph—lets the big point literally pop out at the reader.


Use color to help drive home key points. Look at the section titled “Fewer Guns = Fewer Deaths” and see how adding just one color drives home the point of the graphics. I encourage what I call traffic light color-coding, with green for good news and red for results that, um, need attention.
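To make the last two points concrete, here is a minimal sketch in Python using matplotlib. The rubric criteria, percentages, and traffic-light cutoffs are all invented for illustration; a real report would substitute its own results and thresholds.

```python
# A minimal sketch of a sorted, traffic-light-coded bar chart for rubric results.
# The criteria names, percentages, and 75/60 cutoffs below are made up for illustration.
import matplotlib.pyplot as plt

results = {                      # percent of students meeting each rubric criterion
    "Thesis & focus": 88,
    "Organization": 81,
    "Use of evidence": 74,
    "Citation practices": 52,
}

# Order from highest to lowest so the weakest criterion stands apart.
ordered = sorted(results.items(), key=lambda item: item[1], reverse=True)
labels = [name for name, _ in ordered]
values = [pct for _, pct in ordered]

# Traffic-light coding: green for good news, red for results that need attention.
colors = ["green" if pct >= 75 else ("gold" if pct >= 60 else "red") for pct in values]

plt.barh(labels, values, color=colors)
plt.gca().invert_yaxis()         # put the highest result at the top
plt.xlabel("Percent of students meeting the criterion")
plt.title("Written communication rubric results")
plt.tight_layout()
plt.savefig("rubric_results.png")
```

Even a simple chart like this lets the strongest and weakest results pop out in a way a table of numbers never will.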


Pull together disparate data on student learning. Kristof and his colleagues pulled together data from a wide variety of sources. The visual of public opinions on guns, toward the end of the article, brings together results from a variety of polls into one visual. Yes, the polls may not be strictly comparable, but Kristof acknowledges their sources. And the idea (that should be) behind assessment is not to make perfect decisions based on perfect data but to make somewhat better decisions based on somewhat better information than we would make without assessment evidence. So if, say, you’re assessing information literacy skills, pull together not only rubric results but relevant questions from surveys like NSSE, students’ written reflections, and maybe even relevant questions from student evaluations of teaching (anonymous and aggregated across faculty, obviously).


Breakouts can add insight, if used judiciously. I’m firmly opposed to inappropriate comparisons across student cohorts (of course humanities students will have weaker math skills than STEM students). But the state-by-state comparisons that Kristof provides help make the case for concrete steps that might be taken. Appropriate, relevant, meaningful comparisons can similarly help us understand assessment results and figure out what to do.


Get students involved. I don’t have the expertise to easily generate many of the visuals in Kristof’s article, but many of today’s students do, or they’re learning how in a graphic design course. Creating these kinds of visuals would make a great class project. But why stop student involvement there? Just as Kristof intends his article to be discussed and used by just about anyone, write your assessment report so it can be used to engage students as well as faculty and staff in the conversation about what’s going on with student learning and what action steps might be appropriate and feasible.


Distinguish between annual updates and periodic mega-reviews. Few of us have the resources to generate a report of Kristof’s scale annually—and in many cases our assessment results don’t call for this, especially when the results indicate that students are generally learning what we want them to. But this kind of report would be very helpful when results are, um, disappointing, or when a program is undergoing periodic program review, or when an accreditation review is coming up. Flexibility is the key here. Rather than mandate a particular report format from everyone, match the scope of the report to the scope of issues uncovered by assessment evidence.

An easy, inexpensive, meaningful way to close the assessment loop

Posted on October 29, 2017 at 9:50 AM Comments comments (2)

Assessment results are often used to make tweaks to individual courses and sometimes individual programs. It can be harder to figure out how to use assessment results to make broad, meaningful change across a college or university. But here’s one way to do so: Use assessment results to drive faculty professional development programming.


Here’s how it might work.


An assessment committee or some other appropriate group reviews annual assessment reports from academic programs and gen ed requirements. As they do, they notice some repeated concerns about shortcomings in student learning. Perhaps several programs note that their students struggle to analyze data. Perhaps several others note that quite a few students aren’t citing sources properly. Perhaps several others are dissatisfied with their students’ writing skills.


Note that the committee doesn’t need reports to be in a common format or share a common assessment tool in order to make these observations. This is a qualitative, not quantitative, analysis of the assessment reports. The committee can make a simple list of the single biggest concern with student learning mentioned in each report, then review the list and see what kinds of concerns are mentioned most often.
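If it helps to picture that tally, here is a minimal sketch in Python. The concerns listed are invented placeholders; in practice the committee would jot them down while reading the actual reports.

```python
# A minimal sketch of the tally described above; the entries are invented examples.
from collections import Counter

# One entry per program report: the single biggest concern noted in that report.
biggest_concerns = [
    "data analysis",
    "citing sources",
    "writing",
    "data analysis",
    "writing",
    "data analysis",
]

# Count how often each concern appears and list the most common ones first.
tally = Counter(biggest_concerns)
for concern, count in tally.most_common():
    print(f"{concern}: mentioned in {count} report(s)")
```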


The assessment committee then shares what they’ve noticed with whoever plans faculty professional development programming—what’s often called a teaching-learning center. The center can then plan workshops, brown-bag lunch discussions, learning communities, or other professional development opportunities to help faculty improve student achievement of these learning goals.


There needn’t be much if any expense in offering such opportunities. Assessment results are used to decide how existing professional development resources are deployed, not necessarily to increase those resources.

Assessing the right things, not the easy things

Posted on October 7, 2017 at 8:20 AM Comments comments (2)

One of the many things I’ve learned by watching Ken Burns’ series on Vietnam is that Defense Secretary Robert McNamara was a data geek. A former Ford Motor Company executive, he routinely asked for all kinds of data. Sounds great, but there were two (literally) fatal flaws with his approach to assessment.


First, McNamara asked for data on virtually anything measurable, compelling staff to spend countless hours filling binders with all kinds of metrics—too much data for anyone to absorb. And I wonder what his staff could have accomplished had they not been forced to spend so much time on data collection.


And McNamara asked for the wrong data. He wanted to track progress in winning the war, but he focused on the wrong measures: body counts, weapons captured. He apparently didn’t have a clear sense of exactly what it would mean to win this war or how to measure progress toward that end. I’m not a military scientist, but I’d bet that more important measures would have included the attitudes of Vietnam’s citizens and the capacity of the South Vietnamese government to deal with insurgents on its own.


There are three important lessons here for us. First, worthwhile assessment requires a clear goal. I often compare teaching to taking our students on a journey. Our learning goal is where we want them to be at the end of the learning experience (be it a course, program, degree, or co-curricular experience).


Second, worthwhile assessment measures track progress toward that destination. Are our students making adequate progress along their journey? Are they reaching the destination on time?


Third, assessment should be limited—just enough information to help us decide if students are reaching the destination on time and, if not, what we might do to help them on their journey. Assessment should never take so much time that it detracts from the far more important work of helping students learn.

What's a good schedule for assessing program learning outcomes?

Posted on August 26, 2017 at 8:20 AM Comments comments (9)

Chris Coleman recently asked the Accreditation in Southern Higher Education listserv about schedules for assessing program learning outcomes. Should programs assess one or two learning outcomes each year, for example? Or should they assess everything once every three or four years? Here are my thoughts from my forthcoming third edition of Assessing Student Learning: A Common Sense Guide.


If a program isn’t already assessing its key program learning outcomes, it needs to assess them all, right away, in this academic year. All the regional accreditors have been expecting assessment for close to 20 years. By now they expect implemented processes with results, and with those results discussed and used. A schedule to start collecting data over the next few years—in essence, a plan to come into compliance—doesn’t demonstrate compliance.


Use assessments that yield information on several program learning outcomes. Capstone requirements (senior papers or projects, internships, etc.) are not only a great place to collect evidence of learning, but they’re also great learning experiences, letting students integrate and synthesize their learning.


Do some assessment every year. Assessment is part of the teaching-learning process, not an add-on chore to be done once every few years. Use course-embedded assessments rather than special add-on assessments; this way, faculty are already collecting assessment evidence every time the course is taught.


Keep in mind that the burden of assessment is not assessment per se but aggregating, analyzing, and reporting it. Again, if faculty are using course-embedded assessments, they’re already collecting evidence. Be sensitive to the extra work of aggregating, analyzing, and reporting. Do all you can to keep the burden of this extra work to a bare-bones minimum and make everyone’s jobs as easy as possible.


Plan to assess all key learning outcomes within two years—three at most. You wouldn’t use a bank statement from four years ago to decide if you have enough money to buy a car today! Faculty similarly shouldn’t be using evidence of student learning from four years ago to decide if student learning today is adequate. Assessments conducted just once every several years also take more time in the long run, as chances are good that faculty won’t find or remember what they did several years earlier, and they’ll need to start from scratch. This means far more time is spent planning and designing a new assessment—in essence, reinventing the wheel. Imagine trying to balance your checking account once a year rather than every month—or your students cramming for a final rather than studying over an entire term—and you can see how difficult and frustrating infrequent assessments can be, compared to those conducted routinely.


Keep timelines and schedules flexible rather than rigid, adapted to meet evolving needs. Suppose you assess students’ writing skills and they are poor. Do you really want to wait two or three years to assess them again? Disappointing outcomes call for frequent reassessment to see if planned changes are having their desired effects. Assessments that have yielded satisfactory evidence of student learning are fine to move to a back-burner, however. Put those reassessments on a staggered schedule, conducting them only once every two or three years just to make sure student learning isn’t slipping. This frees up time to focus on more pressing matters.

Is It Time to Abandon the Term "Liberal Arts"?

Posted on August 20, 2017 at 6:35 AM Comments comments (1)

Scott Jaschik at Inside Higher Ed just wrote an article tying together two studies showing that many higher ed stakeholders don’t understand—and therefore misinterpret—the term liberal arts.


And who can blame them? It’s an obtuse term that I’d bet many in higher ed don’t understand either. When I researched my 2014 book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability, I learned that the term liberal comes from liber, the Latin word for free. In the Middle Ages in Europe, a liberal arts education was for the free individual, as opposed to an individual obliged to enter a particular trade or profession. That paradigm simply isn’t relevant today.


Today the liberal arts are those studies that address knowledge, skills, and competencies that cross disciplines, yielding a broadly-educated, well-rounded individual. Many people use the term liberal arts and sciences or simply arts and sciences to try to make clear that the liberal arts comprise study of the sciences as well as the arts and humanities. The Association of American Colleges & Universities (AAC&U), a leading advocate of liberal arts education, refers to liberal arts as liberal education. Given today’s political climate, that may not have been a good decision!


So what might be a good synonym for the liberal arts? I confess I don’t have a proposal. Arts and sciences is one option, but I’d bet many stakeholders don’t understand that this includes the humanities and social sciences, and the term doesn’t convey the value of studying these things. Some of the terms I think would resonate with the public are broad, well-rounded, transferrable, and thinking skills. But I’m not sure how to combine these terms meaningfully and succinctly.


What we need here is evidence-informed decision-making, including surveys and focus groups of various higher education stakeholders to see what resonates with them. I hope AAC&U, as a leading advocate of liberal arts education, might consider taking on a rebranding effort including stakeholder research. But if you have any ideas, let me know!

Assessing learning in co-curricular experiences

Posted on August 8, 2017 at 10:35 AM Comments comments (2)

Assessing student learning in co-curricular experiences can be challenging! Here are some suggestions from the (drum roll, please!) forthcoming third edition of my book Assessing Student Learning: A Common Sense Guide, to be published by Jossey-Bass on February 4, 2018. (Pre-order your copy at www.wiley.com/WileyCDA/WileyTitle/productCd-1119426936.html)


Recognize that some programs under a student affairs, student development, or student services umbrella are not co-curricular learning experiences. Giving commuting students information on available college services, for example, is not really providing a learning experience. Neither are student intervention programs that contact students at risk for poor academic performance to connect them with available services.


Focus assessment efforts on those co-curricular experiences where significant, meaningful learning is expected. Student learning may be a very minor part of what some student affairs, student development, and student services units seek to accomplish. The registrar’s office, for example, may answer students’ questions about registration but not really offer a significant program to educate students on registration procedures. And while some college security operations view educational programs on campus safety as a major component of their mission, others do not. Focus assessment time and energy on those co-curricular experiences that are large or significant enough to make a real impact on student learning.


Make sure every co-curricular experience has a clear purpose and clear goals. An excellent co-curricular experience is designed just like any other learning experience: it has a clear purpose, with one or more clear learning goals; it is designed to help students achieve those goals; and it assesses how well students have achieved those goals.


Recognize that many co-curricular experiences focus on student success as well as student learning—and assess both. Many co-curricular experiences, including orientation programs and first-year experiences, are explicitly intended to help students succeed in college: to earn passing grades, to progress on schedule, and to graduate. So it’s important to assess both student learning and student success in order to show that the value of these programs is worth the college’s investment in them.


Recognize that it’s often hard to determine definitively the impact of one co-curricular experience on student success, because there may be other confounding factors. Students may successfully complete a first-year experience designed to prepare them to persist, for example, then leave because they’ve decided to pursue a career that doesn’t require a college degree.


Focus a co-curricular experience on an institutional learning goal such as interpersonal skills, analysis, professionalism, or problem solving.


Limit the number of learning goals of a co-curricular experience to perhaps just one or two.


State learning goals so they describe what students will be able to do after and as a result of the experience, not what they’ll do during the experience.


For voluntary co-curricular experiences, start but don’t end by tracking participation. Obviously if few students participate, impact is minimal no matter how much student learning takes place. So participation is an important measure. Set a rigorous but realistic target for participation, count the number of students who participate, and compare your count against your target.


Consider assessing student satisfaction, especially for voluntary experiences. Student dissatisfaction is an obvious sign that there’s a problem! But student satisfaction levels alone are insufficient assessments because they don’t tell us how well students have learned what we value.


Voluntary co-curricular experiences call for fun, engaging assessments. No one wants to take a test or write a paper to assess how well they’ve achieved a co-curricular experience’s learning goals. Group projects and presentations, role plays, team competitions, and Learning Assessment Techniques (Barkley & Major, 2016) can be more fun and engaging.


Assessments in co-curricular experiences work only if students give them reasonably serious thought and effort. This can be a challenge when there's no grade to provide an incentive. Explain how the assessment will impact something students will find interesting and important.


Short co-curricular experiences call for short assessments. Brief, simple assessments such as minute papers, rating scales, and Learning Assessment Techniques can all yield a great deal of insight.


Attitudes and values can often only be assessed with indirect evidence such as rating scales, surveys, interviews, and focus groups. Reflective writing may be a useful, direct assessment strategy for some attitudes and values.


Co-curricular experiences often have learning goals such as teamwork that are assessed through processes rather than products. And processes are harder to assess than products. Direct observation (of a group discussion, for example), student self-reflection, peer assessments, and short quizzes are possible assessment strategies.

Should you collect more assessment data before using it?

Posted on June 19, 2017 at 9:30 AM Comments comments (1)

Someone on the ASSESS listserv recently asked how to advise a faculty member who wanted to collect more assessment evidence before using it to try to make improvements in what he was doing in his classes. Here's my response, based on what I learned from How to Measure Anything, a book I discussed in my last blog post.


First, we think of doing assessment to help us make decisions (generally about improving teaching and learning). But think instead of doing assessment to help us make better decisions than we would make without it. Yes, faculty are always making informal decisions about changes to their teaching. Assessment should simply help them make somewhat better-informed decisions.


Second, think about the risks of making the wrong decision. I'm going to assume, rightly or wrongly, that the professor is assessing student achievement of quantitative skills in a gen ed statistics course, and the results aren't great. There are five possible decision outcomes:

1. He decides to do nothing, and students in subsequent courses do just fine without any changes. (He was right; this was an off sample.)

2. He decides to do nothing, and students in subsequent courses continue to have, um, disappointing outcomes.

3. He changes things, and subsequent students do better because of his changes.

4. He changes things, but the changes don't help; despite his best effort, changes in his teaching didn't help improve the disappointing outcomes.

5. He changes things, and subsequent students do better, but not because of his changes--they're simply better prepared than this year's students.


So the risk of doing nothing is getting Outcome 2 instead of Outcome 1: Yet another class of students doesn't learn what they need to learn. The consequence is that even more students run into trouble in later classes, on the job, wherever, until the eventual decision is made to make some changes.


The risk of changing things, meanwhile, is getting Outcome 4 or 5 instead of Outcome 3: He makes changes but they don't help. The consequence here is his wasted time and, possibly, wasted money, if his college invested in something like an online statistics tutoring module or gave him some released time to work on this.


The question then becomes, "Which is the worst consequence?" Normally I'd say the first consequence is the worst: continuing to pass or graduate students with inadequate learning. If so, it makes sense to go ahead with changes even without a lot of evidence. But if the second consequence involves a major investment of sizable time or resources, then it may make sense to wait for more corroborating evidence before making that major investment.
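To make that trade-off concrete, here is a rough back-of-the-envelope sketch in Python, in the spirit of How to Measure Anything. Every probability and cost figure in it is an invented placeholder, not anything from the scenario above; the point is only to show how the two risks can be weighed side by side.

```python
# A rough sketch of weighing the two risks with made-up numbers; the probabilities
# and relative "costs" below are illustrative guesses, not real data.

p_off_sample = 0.2        # chance this year's poor results were just an off sample
p_changes_help = 0.6      # chance that changes, if made, actually improve learning

cost_students_underprepared = 100   # relative cost of another cohort learning less than it should
cost_wasted_effort = 20             # relative cost of changes that turn out not to help

# Expected cost of doing nothing: if the results weren't a fluke, students lose out.
expected_cost_wait = (1 - p_off_sample) * cost_students_underprepared

# Expected cost of acting now: if the changes don't help, the effort is wasted.
expected_cost_act = (1 - p_changes_help) * cost_wasted_effort

print(f"Expected cost of waiting: {expected_cost_wait:.0f}")
print(f"Expected cost of acting now: {expected_cost_act:.0f}")
# With these guesses, acting now is the smaller risk. A big planned investment
# (released time, a pricey tutoring module) would raise cost_wasted_effort and
# could tip the balance toward gathering more corroborating evidence first.
```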


One final thought: Charles Blaich and Kathleen Wise wrote a paper for NILOA a few years ago on their research, in which they noted that our tradition of scholarly research does not include a culture of using research. Think of the research papers you've read--they generally conclude either by suggesting how some other people might use the research and/or by suggesting areas for further research. So sometimes the argument to wait and collect more data is simply a stalling tactic by people who don't want to change.

