|Posted on January 31, 2019 at 7:45 AM|
Last year was not one of the best for higher ed assessment. A couple of very negative opinion pieces got a lot of traction among higher ed people who had been wanting to say, “See? Assessment is really as stupid and pointless as I’ve always thought it was.” At some American universities, this was a major setback for assessment progress.
The higher ed assessment community came together quickly with a response that I was proud to contribute to. But now that we’re in 2019, perhaps it would help if each of us in the assessment community reflects on why we’re here. Here’s my story, in three parts, about why assessment is my passion.
The first part is that I’m a data geek, so I find assessment fun. My first job out of grad school was in institutional research, and my favorite part of the job was getting a printout of student survey results and poring over it, trying to find the story in the numbers (to me it’s a treasure hunt), and sharing that story with others in ways that would get them excited about either feeling good about what’s going well or doing something about areas of concern.
The second part is that I love to teach. I’m not a great teacher, but I want to be the best teacher I can. I’ve always looked forward to seeing how my students do on tests and assignments. I can’t wait to tally up how they did on each test question or rubric criterion (that’s the data geek part of me). I cheer the parts they did well on and reflect on the parts where they didn’t. Why did so many miss Question 12? Can I do anything to help them do better, either during what’s left of this class or in the next one? If I can’t figure out what happened, I ask my students at the next class and, trust me, they’re happy to tell me how I screwed up!
The final reason that assessment is my passion is that I’m convinced that part of the answer to the world’s problems today is to help everyone get the best possible education. This dawned on me about 25 years ago, when I was on an accreditation team visiting a seminary. The seminary’s purpose was to educate church pastors (as opposed to, say, researchers or scholars). It was doing a thorough job educating students on doctrine, but there was very little in the curriculum on preparing students to help church members and others hear what Christians call the Good News. There was little attention to helping students develop skills to listen to and counsel church members, communicate with people of diverse backgrounds, and assess community needs, not to mention the practical skills of running a church such as budgeting and fundraising. While I’m not one to push my faith on others, I think the world might be a better place if people truly understood and truly followed the teachings of many faiths. If that’s the case, the world needs pastors well-prepared to do this. The seminary I visited had, I thought, a moral obligation to ensure—through assessment—that its graduates are prepared to be the best possible pastors, with all the skills that pastors need.
Since then, I’ve felt the same about many other colleges and many other disciplines. The world needs great teachers, nurses, lawyers, accountants, and artists. When I’ve visited U.S. service academies, I’m reminded that the U.S. needs great military officers.
Even more, the world needs people who can do all the things we promise in our gen ed curricula. The world needs people who can think critically, who recognize and avoid unethical behavior, who are open to new ideas, who can work with people from diverse backgrounds, who can evaluate the quality of evidence, arguments, or claims, and who are committed to serving their communities. Again I’m convinced that the world would be a far better place if everyone could do these things well.
None of us can change the world alone. But each of us can do our best with the students in our orbit, trying our best to make sure—through decent-quality assessments—that they’ve really learned what’s most important. Whenever anyone looks at the results of any assessment, be it a class quiz or a college-wide assessment, and uses those results to change what or how they teach, at least some students will get a better education as a result.
We need those better-educated students. This is what drives me. This is why I am devoting my life to helping others learn how to assess student learning. Assessment is one way each of us can help make the world a better place.
|Posted on August 14, 2018 at 8:50 AM|
A while back, a faculty member teaching in a community college career program told me, “I don’t need to assess. I know what my students are having problems with—math.”
Well, maybe so, but I’ve found that my perceptions often don’t match reality, and systematic evidence gives me better insight. Let me give you a couple of examples.
Example #1: You may have noticed that my website blog page now has an index of sorts on the right side. I created it a few months ago, and what I found really surprised me. I aim for practical advice on the kinds of assessment issues that people commonly face. Beforehand I’d been feeling pretty good about the range and relevance of assessment topics that I’d covered. The index showed that, yes, I’d done lots of posts on how to assess and specifically on rubrics, a pet interest of mine. I was pleasantly surprised by the number of posts I’d done on sharing and using results.
But what shocked me was how little I’d written on assessment culture: only four posts in five years! Compare that with seventeen posts on curriculum design and teaching. Assessment culture is an enormous issue for assessment practitioners. Now knowing the short shrift I’d been giving it, I’ve written several more blog posts related to assessment culture, bringing the total to ten (including this post).
(By the way, if there’s anything you’d like to see a blog post on, let me know!)
Example #2: Earlier this summer I noticed that some of the flowering plants in my backyard weren’t blooming much. I did a shade study: one sunny day when I was home all day, every hour I made notes on which plants were in sun and which were in shade. I’d done this about five years ago but, as with the blog index, the results shocked me; some trees and shrubs had grown a lot bigger in five years and consequently some spots in my yard were now almost entirely in shade. No wonder those flowers didn’t bloom! I’ll be moving around a lot of perennials this fall to get them into sunnier spots.
So, yes, I’m a big fan of using systematic evidence to inform decisions. I’ve seen too often that our perceptions may not match reality.
But let’s go back to that professor whose students were having problems with math and give him the benefit of the doubt—maybe he’s right. My question to him was, “What are you doing about it?” The response was a shoulder shrug. His was one of many institutions with an assessment office but no faculty teaching-learning center. In other words, they’re investing more in assessment than in teaching. He had nowhere to turn for help.
My point here is that assessment is worthwhile only if the results are used to make meaningful improvements to curricula and teaching methods. Furthermore, assessment work is worthwhile only if the impact is in proportion to the time and effort spent on the assessment. I recently worked with an institution that undertook an elaborate assessment of three general education learning outcomes, in which student artifacts were sampled from a variety of courses and scored by a committee of trained reviewers. The results were pretty dismal—on average only about two thirds of students were deemed “proficient” on the competencies’ traits. But the institutional community is apparently unwilling to engage with this evidence, so nothing will be done beyond repeating the assessment in a couple of years. Such an assessment is far from worthwhile; it’s a waste of everyone’s time.
This institution is hardly alone. When I was working on the new third edition of my book Assessing Student Learning: A Common Sense Guide, I searched far and wide for examples of assessments whose results led to broad-based change and found only a handful. Overwhelmingly, the changes I see are what I call minor tweaks, such as rewriting an assignment or adding more homework. These changes can be good—collectively they can add up to a sizable impact. But the assessments leading to these kinds of changes are worthwhile only if they’re very simple, quick assessments in proportion to the minor tweaks they bring about.
So is assessment worth it? It’s a mixed bag. On one hand, the time and effort devoted to some assessments aren’t worth it—the findings don’t have much impact. On the other hand, however, I remain convinced of the value of using systematic evidence to inform decisions affecting student learning. Assessment has enormous potential to move us from providing a good education to providing a truly great education. The keys to achieving this are commitments to (1) making that good-to-great transformation, (2) using systematic evidence to inform decisions large and small, and (3) doing only assessments whose impact is likely to be in proportion to the time, effort, and resources spent on them.
|Posted on July 30, 2018 at 8:20 AM|
I often hear questions about how long an “assessment cycle” should be. Fair warning: I don’t think you’re going to like my answer.
The underlying premise of the concept of an assessment cycle is that assessment of key program, general education, or institutional learning goals is too burdensome to be completed in its entirety every year, so it’s okay for assessments to be staggered across two or more years. Let’s unpack that premise a bit.
First, know that if an accreditor finds an institution or program out of compliance with even one of its standards—including assessment—Federal regulations mandate that the accreditor can give the institution no more than two years to come into compliance. (Yes, the accreditor can extend those two years for “good cause,” but let’s not count on that.) So an institution that has done nothing with assessment has a maximum of two years to come into compliance, which often means not just planning assessments but conducting them, analyzing the results, and using the results to inform decisions. I’ve worked with institutions in this situation and, yes, it can be done. So an assessment cycle, if there is one, should generally run no longer than two years.
Now consider the possibility that you’ve assessed an important learning goal, and the results are terrible. Perhaps you learn that many students can’t write coherently, or they can’t analyze information or make a coherent argument. Do you really want to wait two, three, or five years to see if subsequent students are doing better? I’d hope not! I’d like to see learning goals with poor results put on red alert, with prompt actions so students quickly start doing better and prompt re-assessments to confirm that.
Now let’s consider the premise that assessments are too burdensome for them all to be conducted annually. If your learning goals are truly important, faculty should be teaching them in every course that addresses them. They should be giving students learning activities and assignments on those goals; they should be grading students on those goals; they should be reviewing the results of their tests and rubrics; and they should be using the results of their review to understand and improve student learning in their courses. So, once things are up and running, there really shouldn’t be much extra burden in assessing important learning goals. The burdens are cranking out those dreaded assessment reports and finding time to get together with colleagues to review and discuss the results collaboratively. Those burdens are best addressed by minimizing the work of preparing those reports and by helping faculty carve out time to talk.
Now let’s consider the idea that an assessment cycle should stagger the goals being assessed. That implies that every learning goal is discrete and that it needs its own, separate assessment. In reality, learning goals are interrelated; how can one learn to write without also learning to think critically? And we know that capstone assignments—in which students work on several learning goals at once—are not only great opportunities for students to integrate and synthesize their learning but also great assessment opportunities, because we can look at student achievement of several learning goals all at once.
Then there’s the message we send when we tell faculty they need to conduct a particular assessment only once every three, four, or five years: assessment is a burdensome add-on, not part of our normal everyday work. In reality, assessment is (or should be) part of the normal teaching-learning process.
And then there are the practicalities of conducting an assessment only once every few years. Chances are that the work done a few years ago will have vanished or at least collective memory will have evaporated (why on earth did we do that assessment?). Assessment wheels must be reinvented, which can be more work than tweaking last year’s process.
So should assessments be conducted on a fixed cycle? In my opinion, no. Instead:
- Use capstone assignments to look at multiple goals simultaneously.
- If you’re getting started with assessment, assess everything, now. You’ve been dragging your feet too long already, and you’re risking an accreditation action. Remember you must not only have results but be using them within two years.
- If you’ve got disappointing results, move additional assessments of those learning goals to a front burner, assessing them frequently until you get results where you want them.
- If you’ve got terrific results, consider moving assessments of those learning goals to a back burner, perhaps every two years or so, just to make sure results aren’t slipping. This frees up time to focus on the learning goals that need time and attention.
- If assessment work is widely viewed as burdensome, it’s because its cost-benefit is out of whack. Perhaps assessment processes are too complicated, or people view the learning goals being assessed as relatively unimportant, or the results aren’t adding useful insight. Do all you can to simplify assessment work, especially reporting. If people don’t find a particular assessment useful, stop doing it and do something else instead.
- If assessment work must be staggered, stagger some of your indirect assessment tools, not the learning goals or major direct assessments. An alumni survey or student survey might be conducted every three years, for example.
- For programs that “get” assessment and are conducting it routinely, ask for less frequent reports, perhaps every two or three years instead of annually. It’s a win-win reward: less work for them and less work for those charged with reviewing and offering feedback on assessment reports.
|Posted on March 28, 2018 at 6:25 AM|
In my February 28 blog post, I noted that many faculty express frustration with assessment along the following lines:
- What I most want students to learn is not what’s being assessed.
- I’m being told what and how to assess, without any input from me.
- I’m being told what to teach, without any input from me.
- I’m being told to assess skills that employers want, but I teach other things that I think are more important.
- A committee is doing a second review of my students’ work. I’m not trusted to assess student work fairly and accurately through my grading processes.
- I’m being asked to quantify student learning, but I don’t think that’s appropriate for what I’m teaching.
- I’m being asked to do this on top of everything else I’m already doing.
- Assessment treats learning as a scientific process, when it’s a human endeavor; every student and teacher is different.
The underlying theme here is that these faculty don’t feel that they and their views are valued and respected. When we value and respect people:
- We design assessment processes so the results are clearly useful in helping to make important decisions, not paper-pushing exercises designed solely to get through accreditation.
- We make assessment work worthwhile by using results to make important decisions, such as on resource allocations, as discussed in my March 13 blog post.
- We truly value great teaching and actively encourage the scholarship of teaching as a form of scholarship.
- We truly value innovation, especially in improving one’s teaching because, if no one wants to change anything, there’s no point in assessing.
- We take the time to give faculty and staff clear guidance and coordination, so they understand what they are to do and why.
- We invest in helping them learn what to do: how to use research-informed teaching strategies as well as how to assess.
- We support their work with appropriate resources.
- We help them find time to work on assessment and to keep assessment work cost-effective, because we respect how busy they are.
- We take a flexible approach to assessment, recognizing that one size does not fit all. We do not mandate a single institution-wide assessment approach but instead encourage a variety of assessment strategies, both quantitative and qualitative. The more choices we give faculty, the more they feel empowered.
- We design assessment processes so faculty are leaders rather than providers of assessment. We help them work collaboratively rather than in silos, inviting them to contribute to decisions on what, why, and how we assess. We try to assess those learning outcomes that the institutional community most values. More than anything else, we spend more time listening than telling.
- We recognize and honor assessment work in tangible ways, perhaps through a celebratory event, public commendations, or consideration in promotion, tenure, and merit pay applications.
For more information on these and other strategies to value and respect people who work on assessment, see Chapter 14, “Valuing Assessment and the People Who Contribute,” in the new third edition of my book Assessing Student Learning: A Common Sense Guide.
|Posted on March 13, 2018 at 9:50 AM|
In my February 28 blog post, I noted that many faculty have been expressing frustration that assessment is a waste of an enormous amount of time and resources that could be better spent on teaching. Here are some strategies to help make sure your assessment activities are meaningful and cost-effective, all drawn from the new third edition of Assessing Student Learning: A Common Sense Guide.
Don’t approach assessment as an accreditation requirement. Sure, you’re doing assessment because your accreditor requires it, but cranking out something only to keep an accreditor happy is sure to be viewed as a waste of time. Instead approach assessment as an opportunity to collect information on things you and your colleagues care about and that you want to make better decisions about. Then what you’re doing for the accreditor is summarizing and analyzing what you’ve been doing for yourselves. While a few accreditors have picky requirements that you must comply with whether you like them or not, most want you to use their standards as an opportunity to do something genuinely useful.
Keep it useful. If an assessment hasn’t yielded useful information, stop doing it and do something else. If no one’s interested in assessment results for a particular learning goal, you’ve got a clue that you’ve been assessing the wrong goal.
Make sure it’s used in helpful ways. Design processes to make sure that assessment results inform things like professional development programming, resource allocations for instructional equipment and technologies, and curriculum revisions. Make sure faculty are informed about how assessment results are used so they see its value.
Monitor your investment in assessment. Keep tabs on how much time and money each assessment is consuming…and whether what’s learned is useful enough to make that investment worthwhile. If it isn’t, change your assessment to something more cost-effective.
Be flexible. A mandate to use an assessment tool or strategy that’s inappropriate for a particular learning goal or discipline is sure to be viewed as a waste of everyone’s time. In assessment, one size definitely does not fit all.
Question anything that doesn’t make sense. If no one can give a good explanation for doing something that doesn’t make sense, stop doing it and do something more appropriate.
Start with what you have. Your college has plenty of direct and indirect evidence of student learning already on hand, from grading processes, surveys, and other sources. Squeeze information out of those sources before adding new assessments.
Think twice about blind-scoring and double-scoring student work. The costs in terms of both time and morale can be pretty steep (“I’m a professional! Why can’t they trust me to assess my own students’ work?”). Start by asking faculty to submit their own rubric ratings of their own students’ work. Only move to blind- and double-scoring if you see a big problem in their scores of a major assessment.
Start at the end and work backwards. If your program has a capstone requirement, students should be demonstrating achievement in many key program learning goals in it. Start assessment there. If students show satisfactory achievement of the learning goals, you’re done! If you’re not satisfied with their achievement of a particular learning goal, you can drill down to other places in the curriculum that address that goal.
Help everyone learn what to do. Nothing galls me more than finding out what I did wasn’t what was wanted and has to be redone. While we all learn from experience and do things better the second time, help everyone learn what to do so that their first assessment is a useful one.
Minimize paperwork and bureaucratic layers. Faculty are already routinely assessing student learning through the grading process. What some resent is not the work of grading but the added workload of compiling, analyzing, and reporting assessment evidence from the grading process. Make this process as simple, intuitive, and useful as possible. Cull from your assessment report template anything that’s “nice to know” versus absolutely essential.
Make assessment technologies an optional tool, not a mandate. Only a tiny number of accreditors require using a particular assessment information management system. For everyone else, assessment information systems should be chosen and implemented to make everyone’s lives easier, not for the convenience of a few people like an assessment committee or a visiting accreditation team. If a system is hard to learn, creates more work, or is expensive, it will create resentment and make things worse rather than better. I recently encountered one system for which faculty had to tally and analyze their results, then enter the tallied results into the system. Um, shouldn’t an assessment system do the work of tallying and analysis for the faculty?
Be sensible about staggering assessments. If students are not achieving a key learning goal well, you’ll want to assess it frequently to see if they’re improving. But if students are achieving another learning goal really well, put it on a back burner, asking for assessment reports on it only every few years, to make sure things aren’t slipping.
Help everyone find time to talk. Lots of faculty have told me that they “get” assessment but simply can’t find time to discuss with their colleagues what and how to assess and how best to use the results. Help them carve out time on their calendars for these important conversations.
Link your assessment coordinator with your faculty teaching/learning center, not an accreditation or institutional effectiveness office. This makes clear that assessment is about understanding and improving student learning, not just a hoop to jump through to address some administrative or accreditation mandate.
|Posted on March 4, 2018 at 8:05 AM|
The vitriol in some recent op-ed pieces and the comments that followed them might leave the impression that faculty hate assessment. Well, some faculty clearly do, but a national survey suggests that they’re in the minority.
The Faculty Survey of Assessment Culture, directed by Dr. Matthew Fuller at Sam Houston State University, can give us some insight. Its key drawback is that, because it’s still a relatively nascent survey, it drew only about 1,155 responses in its last reported administration in 2014. So the survey may not represent what faculty throughout the U.S. really think, but I nonetheless think it’s worth a look.
Most of the survey is a series of statements to which faculty respond by choosing Strongly Agree, Agree, Only Slightly Agree, Only Slightly Disagree, Disagree, or Strongly Disagree.
Here are the percentages who agreed or strongly agreed with each statement, listed from highest to lowest. The list includes both statements that are positive about assessment and statements that are negative about it.
80% The majority of administrators are supportive of assessment.
77% Faculty leadership is necessary for my institution’s assessment efforts.
76% Assessment is a good thing for my institution to do.
70% I am highly interested in my institution’s assessment efforts.
70% Assessment is vital to my institution’s future.
67% In general I am eager to work with administrators.
67% Assessment is a good thing for me to do.
64% I am actively engaged in my institution’s assessment efforts.
63% Assessments of programs are typically connected back to student learning.
62% My academic department or college truly values faculty involvement in assessment.
61% I engage in institutional assessment efforts because it is the right thing to do for our students.
60% Assessment is vital to my institution’s way of operating.
57% Discussions about student learning are at the heart of my institution.
57% In general a recommended change is more likely to be enacted by administrators if it is supported by assessment data.
53% I clearly understand assessment processes at my institution.
52% Assessment supports student learning at my institution.
51% Assessment is primarily the responsibility of faculty members.
51% Change occurs more readily when supported by assessment results.
50% It is clear who is ultimately in charge of assessment.
50% I am familiar with the office that leads student assessment efforts for accreditation purposes.
50% Assessment for accreditation purposes is prioritized above other assessment efforts.
49% Assessment results are used for improvement.
49% The majority of administrators primarily emphasize assessment for the improvement of student learning.
49% I engage in institutional assessment because doing so makes a difference to student learning at my institution.
48% Assessment processes yield evidence of my institution’s effectiveness.
48% I have a generally positive attitude toward my institution’s culture of assessment.
47% Senior leaders, i.e., President or Provost, have made clear their expectations regarding assessment.
47% Administrators are supportive of making changes.
46% I am familiar with the office that leads student assessment efforts for student learning.
45% Assessment data are used to identify the extent to which student learning outcomes are met.
44% My institution is structured in a way that facilitates assessment practices focused on improved student learning.
44% The majority of administrators only focus on assessment in response to compliance requirements.
43% Student assessment results are shared regularly with faculty members.
41% I support the ways in which administrators have used assessment on my campus.
40% Assessment is an organized coherent effort at my institution.
40% Assessment results are available to faculty by request.
38% Assessment data are available to faculty by request.
37% Assessment results are shared regularly throughout my institution.
35% Faculty are in charge of assessment at my institution.
33% Engaging in assessment also benefits my research/scholarship agenda.
32% Budgets can be negatively impacted by assessment results.
32% Administrators share assessment data with faculty members using a variety of communication strategies (i.e., meetings, web, written correspondence, presentations).
31% Assessment data are regularly used in official institutional communications.
30% There are sufficient financial resources to make changes at my institution.
29% Assessment is a necessary evil in higher education.
28% Communication of assessment results has been effective.
28% Assessment results are criticized for going nowhere (i.e., not leading to change).
27% Assessment results in a fair depiction of what I do as a faculty member.
27% Administrators use assessment as a form of control (i.e., to regulate institutional processes).
26% Assessment efforts do not have a clear focus.
26% I enjoy engaging in institutional assessment efforts.
24% Assessment success stories are formally shared throughout my institution.
23% Assessment results in an accurate depiction of what I do as a faculty member.
22% Assessment is conducted based on the whims of the people in charge.
21% If assessment was not required I would not be doing it.
21% Assessment is primarily the responsibility of administrators.
21% I am aware of several assessment success stories (i.e. instances of assessment resulting in important changes).
20% I do not have time to engage in assessment efforts.
19% Assessment results have no impact on resource allocations.
18% Assessment results are used to scare faculty into compliance with what the administration wants.
18% There is pressure to reveal only positive results from assessment efforts.
17% I avoid doing institutional assessment activities if I can.
17% I engage in assessment because I am afraid of what will happen if I do not.
14% I perceive assessment as a threat to academic freedom.
10% Assessment results are used to punish faculty members (i.e., not rewarding innovation or effective teaching, research, or service).
4% Assessment is someone else’s problem, not mine.
Overall, there’s good news here. Most faculty agreed with most positive statements about assessment, and most disagreed with most negative statements. I was particularly heartened that about three-quarters of respondents agreed that “assessment is a good thing for my institution to do,” about 70% agreed that “assessment is vital to my institution’s future,” and about two-thirds agreed that “assessment is a good thing for me to do.”
But there’s also plenty to be concerned about here. Only 35% agree that faculty are in charge of assessment and, by several measures, only a minority see assessment results shared and used. Almost 30% view assessment as a necessary evil.
Survey researchers know that people are more apt to agree than disagree with a statement, so I also looked at the percentages of faculty who disagreed or strongly disagreed with each statement. These responses do not mirror the agreed/strongly agreed results above, because on some items a larger proportion of faculty marked Only Slightly Agree or Only Slightly Disagree. Again, the list includes both positive and negative statements about assessment.
3% The majority of administrators are supportive of assessment.
6% Faculty leadership is necessary for my institution’s assessment efforts.
6% Assessment is a good thing for my institution to do.
7% Assessment is vital to my institution’s future.
8% I am highly interested in my institution’s assessment efforts.
8% In general a recommended change is more likely to be enacted by administrators if it is supported by assessment data.
9% I am actively engaged in my institution’s assessment efforts.
9% In general I am eager to work with administrators.
9% My academic department or college truly values faculty involvement in assessment.
10% Change occurs more readily when supported by assessment results.
10% Assessment is a good thing for me to do.
12% Assessment results are available to faculty by request.
13% Assessment is vital to my institution’s way of operating.
13% Assessment data are available to faculty by request.
13% The majority of administrators primarily emphasize assessment for the improvement of student learning.
13% I engage in institutional assessment efforts because it is the right thing to do for our students.
14% Discussions about student learning are at the heart of my institution.
14% I clearly understand assessment processes at my institution.
14% Assessment data are used to identify the extent to which student learning outcomes are met.
15% Assessments of programs are typically connected back to student learning.
15% Assessment results are used for improvement.
16% Assessment is primarily the responsibility of faculty members.
16% Administrators are supportive of making changes.
17% Assessment supports student learning at my institution.
18% Assessment processes yield evidence of my institution’s effectiveness.
18% I support the ways in which administrators have used assessment on my campus.
19% It is clear who is ultimately in charge of assessment.
19% Assessment is an organized coherent effort at my institution.
19% I have a generally positive attitude toward my institution’s culture of assessment.
20% Senior leaders, i.e., President or Provost, have made clear their expectations regarding assessment.
20% My institution is structured in a way that facilitates assessment practices focused on improved student learning.
20% I engage in institutional assessment because doing so makes a difference to student learning at my institution.
21% I am familiar with the office that leads student assessment efforts for accreditation purposes.
21% Budgets can be negatively impacted by assessment results.
22% The majority of administrators only focus on assessment in response to compliance requirements.
23% Student assessment results are regularly shared with faculty members.
24% I am familiar with the office that leads student assessment efforts for student learning.
24% Assessment for accreditation purposes is prioritized above other assessment efforts.
24% Assessment data are regularly used in official institutional communications.
28% Faculty are in charge of assessment at my institution.
29% Assessment results have no impact on resource allocations.
29% Assessment results are regularly shared throughout my institution.
29% I enjoy engaging in institutional assessment efforts.
31% Administrators share assessment data with faculty members using a variety of communication strategies (i.e., meetings, web, written correspondence, presentations).
31% Communication of assessment results has been effective.
31% Administrators use assessment as a form of control (i.e., to regulate institutional processes).
32% Assessment results are criticized for going nowhere (i.e., not leading to change).
32% Assessment results in a fair depiction of what I do as a faculty member.
33% There are sufficient financial resources to make changes at my institution.
34% Assessment success stories are formally shared throughout my institution.
34% Assessment results in an accurate depiction of what I do as a faculty member.
35% Assessment is primarily the responsibility of administrators.
36% I am aware of several assessment success stories (i.e., instances of assessment resulting in important changes).
36% Engaging in assessment also benefits my research/scholarship agenda.
41% Assessment efforts do not have a clear focus.
41% I do not have time to engage in assessment efforts.
42% Assessment is a necessary evil in higher education.
50% Assessment is conducted based on the whims of the people in charge.
50% There is pressure to reveal only positive results from assessment efforts.
53% Assessment results are used to scare faculty into compliance with what the administration wants.
55% I avoid doing institutional assessment activities if I can.
56% If assessment was not required I would not be doing it.
56% I engage in assessment because I am afraid of what will happen if I do not.
60% Assessment results are used to punish faculty members (i.e., not rewarding innovation or effective teaching, research, or service).
62% I perceive assessment as a threat to academic freedom.
78% Assessment is someone else’s problem, not mine.
Here there’s more good news. We want small proportions of faculty to disagree with the positive statements about assessment, and for the most part they do. About a third disagree that assessment results and success stories are shared, but that matches what we saw with the agree-strongly agree results.
But there are also areas of concern here. We want large proportions of faculty to disagree with the negative statements about assessment, and that doesn’t always happen. Fewer than a quarter disagreed that budgets can be negatively impacted by assessment results and that administrators focus on assessment only in response to compliance requirements. Fewer than a third disagreed that assessment results don’t lead to change or resource allocations. The results that concerned me most? Only 42% disagreed that assessment is a necessary evil; only half disagreed that there is pressure to reveal only positive assessment results; and only a bit over half disagreed that “If assessment was not required I would not be doing it.”
So, while most faculty “get” assessment, there are sizable numbers who don’t yet see value in it. We've come a long way, but there's still plenty of work to do!
(Some notes on the presentation of these results: I sorted results from highest to lowest, rounded percentages to the nearest whole percent, and color-coded "good" and "bad" statements. All of these help the key points of a very lengthy survey pop out at the reader.)
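The presentation steps described in that parenthetical (sort from highest to lowest, round to whole percents, flag positive versus negative items) can be sketched in a few lines of Python. The items and percentages below are illustrative placeholders, not the survey's actual data.

```python
# Sketch: sort survey items by agreement, round, and flag valence.
# Items and percentages are invented placeholders, not real survey data.
items = [
    ("Assessment is a good thing for my institution to do", 74.6, "positive"),
    ("Assessment is a necessary evil in higher education", 29.3, "negative"),
    ("Assessment results are used for improvement", 48.1, "positive"),
]

# Sort descending so the strongest agreement pops out first;
# a +/- marker stands in for the green/red color coding.
formatted = [
    f"{'+' if valence == 'positive' else '-'} {round(pct):>3d}% {text}"
    for text, pct, valence in sorted(items, key=lambda it: it[1], reverse=True)
]

for line in formatted:
    print(line)
```

The same idea scales to the full item list: keep the wording, percentage, and valence together, and let the sort and the marker do the visual work.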
|Posted on February 28, 2018 at 10:25 AM|
Two recent op-ed pieces in the Chronicle of Higher Education and the New York Times, along with the hundreds of online comments on them, make clear that, 25 years into the assessment movement, a lot of faculty really hate assessment.
It’s tempting for assessment people to spring into a defensive posture and dismiss what these people are saying. (They’re misinformed! The world has changed!) But if that’s our response, aren’t we modeling the fractures deeply dividing the US today, with people existing in their own echo chambers and talking past each other rather than really listening and trying to find common ground on which to build? And shouldn’t we be practicing what we preach, using systematic evidence to inform what we say and do?
So I took a deeper dive into those comments. I did a content analysis of the articles and many of the comments that followed. (The New York Times article had over 500 comments—too many for me to handle—so I looked only at NYT comments with at least 12 recommendations.)
If you’re not familiar with content analysis, it’s looking through text to identify the frequency of ideas or themes. For example, I counted how many comments mentioned that assessment is expensive. I do content analysis by listing all the comments as bullets in a Word document, then cutting and pasting the bulleted comments to group similar comments together under headings. I then cut and paste the groups so the most frequently mentioned themes are at the top of the document. There is qualitative analysis software that can help if you don’t want to do this manually.
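As a minimal sketch, the tally-and-sort step of that workflow might look like this in Python. The comments and theme labels here are invented stand-ins, not the actual coded data.

```python
from collections import Counter, defaultdict

# Sketch: group hand-coded comments under their themes, then list
# themes most-frequent first. All data below are invented examples.
tagged_comments = [
    ("Assessment wastes time we could spend teaching.", "waste of time/resources"),
    ("The data are very poor at measuring learning.", "waste of time/resources"),
    ("Learning outcomes have disempowered faculty.", "faculty not respected"),
    ("Legislatures cut funding, then demanded assessment.", "external/economic forces"),
]

# Group similar comments under headings (the cut-and-paste step).
by_theme = defaultdict(list)
for comment, theme in tagged_comments:
    by_theme[theme].append(comment)

# Count and sort so the most frequent themes rise to the top.
counts = Counter(theme for _, theme in tagged_comments)
for theme, n in counts.most_common():
    print(f"{theme}: {n} comment(s)")
```

The hard part, of course, is the human judgment of assigning each comment to a theme; the code only automates the bookkeeping.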
A caveat: Comments don’t always fall into neat, discrete categories; judgment is needed to decide where to place some. I did this analysis quickly, and it’s entirely possible that, had you done this analysis instead of me, you would have come up with somewhat different results. But assessment is not rigorous research; we just need information good enough to help inform our thinking, and I think my analysis is fine for the purpose of figuring out how we might deal with this.
Why take the time to do a content analysis instead of just reading through the comments? Because, when we process a list of comments, there’s a good chance we won’t identify the most frequently mentioned ideas accurately. As I was doing my content analysis, I was struck by how many faculty complained that assessment is (I’m being snarky here) either a vast right-wing conspiracy or a vast left-wing conspiracy, simply because I’d never heard that before. It turned out, however, that there were other themes that emerged far more frequently. This is a good lesson for faculty who think they don’t need to formally assess because they “know” what their students are struggling with. Maybe they do…but maybe not.
So what did I find? As I’d expected, there are many reasons why faculty may hate assessment. I found that most of their complaints fall into just four broad categories:
It’s a waste of an enormous amount of time and resources that could be better spent on teaching. Almost 40% of the comments fell into this category. Some examples:
- We faculty are angry over the time and dollars wasted.
- The assessment craze is not only of little value, but it saps the meager resources of time and money available for classroom instruction.
- Faced with outrage over the high cost of higher education, universities responded by encouraging expensive administrative bloat.
- It is not that the faculty are not trying, but the data and methods in general use are very poor at measuring learning.
- Our “assessment expert” told us to just put down as a goal the % of students we wanted to rate us as very good or good on a self-report survey. Which we all know is junk.
I am not valued or respected, and neither is what I think is important. Over 30% of the comments fell into this category. Some examples:
- Assessment of student learning outcomes is an add-on activity that says your standard examination and grading scheme isn’t enough so you need to do a second layer of grading in a particular numerical format.
- The fundamental, flawed premise of most of modern education is that teaching is a science.
- Bureaucratic jargon subtly shapes the expectations of students and teachers alike.
- When the effort to reduce learning to a list of job-ready skills goes too far, it misses the point of a university education.
- Learning outcomes have disempowered faculty.
- The only learning outcomes I value: students complete their formal education with a desire to learn more.
- Assessment reflects a misguided belief that learning is quantifiable.
External and economic forces are behind this. About 15% of comments fell into this category, including those right-wing/left-wing conspiracy comments. Some examples:
- There’s a whole industry out there that’s invested in outcomes assessment.
- The assessment boom coincided with the decision of state legislatures to reduce spending on public universities.
- Educational institutions have been forced to operate out of a business model.
- It is the rise of adjuncts and online classes that has led to the assessment push.
I’m unfairly held responsible for student learning. About 10% of comments fell into this category. Some examples:
- Students, not faculty, are responsible for student learning.
- It is much more profitable to skim money from institutions of higher learning than fixing the underlying causes of the poverty and lack of focus that harm students.
- The root cause is lack of a solid foundation built in K-12.
Two things struck me about these four broad categories. The first one was that they don’t quite align with what I’ve heard as I’ve worked with literally thousands of faculty at hundreds of colleges over the last two decades. Yes, I’ve heard plenty about assessment being useless, and I’ve written about faculty feeling devalued and disrespected by assessment, but I’d never heard the external-forces or blame-game reasons before. And I’ve heard plenty about other reasons that weren’t mentioned in these comments, especially finding time to work on assessment, not understanding how to assess (or how to teach), and moving from a culture of silos to one of collaboration. I think the reason for the disconnect between what I’ve heard and what was expressed here is that these comments reflect the angriest faculty, not all faculty. But their anger is legitimate and something we should all work to address.
[UPDATED 2/28/2018 4:36 PM EST] So what should we do? First, we clearly need better information on faculty experiences and views regarding assessment so we can understand which issues are most pervasive and address them. The Surveys of Assessment Culture developed by Matt Fuller at Sam Houston State University are an important start.
In the meantime, the good news is that the problems raised in these two pieces and the comments accompanying them are all solvable. (No, we can’t solve all of society’s ills, but we can help faculty deal with them.) I’ll share some ideas in upcoming blog posts. If you don’t want to wait, you’ll find plenty of practical suggestions in the new 3rd edition of my book Assessing Student Learning: A Common Sense Guide.
|Posted on March 16, 2015 at 8:10 AM|
One of my favorite chapters in my book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability is “Why Is This So Hard?” It was my “venting chapter,” with a pretty long list of the barriers to advancing in quality, and it was very cathartic to write.
One item on that list is succinct: The money’s not there. A new report by Third Way states the issue beautifully: “Federal policy incentivizes research first, second, and third—and student instruction last.” It goes on to explain, “For every $100 the federal government spends on university-led research, it spends twenty-four cents on teaching innovation at universities.” Its conclusion? “If one took its cues entirely from the federal government, the conclusion would be that colleges exist to conduct research and publish papers with student instruction as an afterthought.”
One professor at a regional comprehensive university put it to me this way: “I know I could be a better teacher. But my promotions are based on the research dollars I bring in and my publications, so that’s where I have to focus all my time. As long as my student evaluations are decent, there’s no incentive or reward for me to try to improve my teaching, and any time I spend on that is time taken away from my research, which is where the rewards are.”
The one bright spot here is that more and more colleges and universities are recognizing the need to invest in helping faculty improve their teaching. The last 20 years have seen a growth in “teaching and learning centers” designed to do this, along with other incentives and support, such as those at the University of Michigan reported by the Chronicle of Higher Education. But so far we are only scratching the surface. Colleges, universities, and government policymakers all need to do more to put their money where their mouth is, actively encouraging and supporting the great teaching and learning that is supposed to be higher education’s fundamental purpose.
|Posted on January 18, 2015 at 7:05 AM|
When I work with faculty on curriculum design, teaching strategies, or assessment methods, one of the most common reactions is, “This is great, but when am I going to find the time to do this?” It’s a legitimate question. Especially since the Great Recession, everyone in higher education has been asked to wear more and more hats, to do more with less. At some colleges I visit, the exhaustion is palpable.
There are only so many hours in a week, and we can’t create more time. So the only way to find time to work on the quality of what we do is to stop doing something else. If faculty are expected to bring new approaches to curricula, teaching strategies, and assessment on top of everything else, the message is that everything else is more important.
What can you stop or scale back? My first suggestion is to look at your committees; most colleges I visit have too many, and committee work expands to fill the time allotted. What would happen if a particular committee didn’t meet for the rest of the year?
Next, carve out times in the academic calendar when faculty can get together to talk. Some colleges don’t schedule any classes on, say, Wednesdays at noon, giving departments and committees time to meet. Some set aside professional development days at the beginning and/or end of the semester. Think twice about filling these days with a program that everyone is expected to attend; today it’s the rare college where everyone has the same professional development needs and will benefit from the same program. Instead consider asking each department to design its own agenda for the day.
Finally, look at your array of curricular offerings: your degree and certificate programs, your array of general education offerings, and so on. Each of those courses and programs needs to be reviewed, updated, planned, taught, and assessed. Three course preparations each semester don’t take as much time as four. Look at student enrollment patterns, then ask yourself if a course or program that attracts relatively few students is more important than the time freed up if it were no longer offered.
|Posted on November 10, 2014 at 8:40 AM|
‘Fess up time: my initial reaction to Jeffrey Alan Johnson’s recent Inside Higher Ed piece On Assessing Student Learning: Faculty Are Not the Enemy was “guilty as charged.” I’ve done plenty of presentations, workshops, and discussions on building a culture of assessment, and the conversation invariably turns to “getting faculty on board.” In fact, over the years, this has been far and away the biggest complaint/question I’ve heard about assessment, so it’s been easy for me to slide into this. Fortunately, my good friend Ginny Anderson, co-author of Effective Grading and a marvelous biology professor, sets me straight from time to time.
I try to point out a few things during these discussions. First, very often the issue is not getting faculty on board but getting institutional leadership on board, providing support for assessment and for using evidence for betterment. Second, it’s important to figure out why there’s foot-dragging on assessment—the reasons vary widely, and different reasons point to different solutions. My new book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability has a whole chapter titled, “Why is This So Hard?”
But Johnson’s piece homes in on one particular issue: the need for a culture of respect. One of the five dimensions of quality in my new book is a culture of community, including cultures of respect, communication, and collaboration, among others. An institution with a culture of respect treats everyone fairly and equitably. It trusts members of the institutional community, stepping in only when necessary. It taps the expertise of faculty (and staff) and the energy of students. It lets people learn from their mistakes. And it recognizes that communication is a two-way street, listening and not just telling.
In a recent blog I talked about assessment bullies. I’ve seen assessment coordinators who might not be bullies but who certainly have an anal-retentive streak! The best people to provide assessment coordination and guidance, as I explain in Assessing Student Learning: A Common Sense Guide, are those who are sensitive, open-minded, flexible, and ready to encourage and facilitate multiple approaches.
|Posted on April 1, 2014 at 7:30 AM|
One of the things I love about working in higher education is the people. Far from the old stereotypical image of stuffy, arrogant professors, at least 99% of them are warm, friendly, and caring—a joy to work with.
But every college seems to have a very small number of people who seem to view their life role as throwing up obstacles to others. They hold up department or committee work with pontificating or argument, making their colleagues uncomfortable. For example, I recently saw one faculty member hold up a college’s work to define critical thinking by insisting that a generalized definition couldn’t fit his own definition, which used the specialized jargon of his discipline…which of course no one else in the room understood.
If these people have any real or perceived authority—say, they teach the courses that are the focus of a department’s assessment, or they’re longstanding members of the college’s general education committee, or they’re on a tenure committee—the discomfort level grows exponentially. These people can intimidate those around them, keeping colleagues from accomplishing whatever they set out to do. They approach the definition of bullying (www.stopbullying.gov): unwanted, aggressive behavior that involves a real or perceived power imbalance.
Why do we tolerate these people? One reason, I think, is because most of us are basically nice people; we don’t want confrontations. Another is the high value we place on academic freedom and freedom of speech; it’s easy to stretch these concepts to say that anyone can say or do whatever one wants at any time, no matter how disrespectful or intimidating it is. Yet another is that sometimes these people are truly in positions of authority or have the support of people in authority.
Assessment bullies are of course not grade school bullies; they’re not going to beat anyone up in the school yard. But they can still do some damage, keeping a college from moving to where it needs to be on assessment. What can we do?
1. Try to figure out why the bully acts aggressively. I’ve often found that these people have real misunderstandings about assessment. Inform your committee’s work with readings on research and good practices…and invite the bully to bring his or her own. I’ve also found that these people, ironically, feel disrespected; they feel they’re being told that what they’ve been doing in their classes for years is wrong. Sometimes—not always—some private one-on-one conversations about their concerns and figuring out gentle, respectful strategies to address them can help.
2. Continue to respect academic freedom by giving these people appropriate venues to express their views, such as special meetings and open forums. Then limit the agendas of committee meetings to accomplishing the work at hand.
3. Don’t ignore bullying behavior. As www.stopbullying.gov says, be more than a bystander. Respectfully tell these people that calling the assessment coordinator the “assessment czar” is inappropriate, and ask them to stop. Be persistent and consistent about this.
Do you have any other ideas? I’d love to hear them!