|Posted on November 21, 2017 at 8:25 AM||comments (1)|
From time to time people contact me for advice, not on assessment or accreditation but for tips on how to build a consulting business. In case you’re thinking the same thing, I’m sorry to tell you that I really can’t offer much advice.
My consulting work is the culmination of 40 years of work in higher education. So if you want to spend the next 40 years preparing to get into consulting work, I can tell you my story, but if you want to build a business more quickly, I can’t help.
I began my career in institutional research, then transitioned into strategic planning and quality improvement. These can be lonely jobs, so I joined relevant professional organizations. Some of the institutions where I worked would pay for travel to conferences only if I was presenting, so I presented as often as I could. And I became actively involved in the professional organizations I joined—I was treasurer of one and organized a regional conference for another, for example. All these things helped me network and make connections with people in higher education all over the United States.
All institutional researchers deal with surveys, and early in my career I found people asking me for advice on surveys they were developing. Writing a good survey isn’t all that different from writing a good test, which I’d learned how to do in grad school. (My master’s is in educational measurement and statistics from the University of Iowa.) After finding myself giving the same advice over and over, I wrote a little booklet, which gradually evolved into a monograph on questionnaire surveys published by the Association for Institutional Research. I started doing workshops around the country on questionnaire design.
I love to teach, so concurrently throughout my career I’ve taught as an adjunct at least once a year—all kinds of courses, from developmental mathematics to graduate courses. That’s made a huge difference in my consulting work, because it’s given me credibility with both the teaching and administrative sides of the house.
Then I had a life-changing experience: a one-year appointment in 1999-2000 as director of the Assessment Forum at the old American Association for Higher Education. People often asked me for recommendations for a good soup-to-nuts primer on assessment. At that time, there wasn’t one (there were good books on assessment, but with narrower focuses). So I wrote one, applying what I learned in my graduate studies to the higher education environment, and was lucky enough to get it published. The book, along with conference sessions, continued networking, and simply having that one-year position at AAHE, built my reputation as an assessment expert.
When I went into full-time consulting about six years ago, I did read up a little on how to build a consulting business. I built a website so people could find me, and I built a social media presence and a blog on my website to drive people to the website. But I don’t really do any other marketing. My clients tell me that they contact me because of my longstanding reputation, my book, and my conference sessions.
So if you want to be a consultant, here's my advice. Take 40 years to build your reputation. Start with a graduate degree from a really good, relevant program. Be professionally active. Teach. Get published. Present at conferences. And get lucky enough to land a job that puts you on the national stage. Yes, there are plenty of people who build a successful consulting business more quickly, but I’m not one of them, and I can’t offer you advice on how to do it.
|Posted on November 8, 2017 at 10:05 AM||comments (6)|
I was struck by Nicholas Kristof’s November 6 New York Times article, “How to Reduce Shootings.” No, I’m not talking here about the politics of the issue, and I’m not writing this blog post to advocate any stance on the issue. What struck me—and what’s relevant to assessment—is how effectively Kristof and his colleagues brought together and compellingly presented a variety of data.
Here are some of the lessons from Kristof’s article that we can apply to assessment reports.
Focus on using the results rather than sharing the results, starting with the report title. Kristof could have titled his piece something like, “What We Know About Gun Violence,” just as many assessment reports are titled something like, “What We’ve Learned About Student Achievement of Learning Outcomes.” But Kristof wants this information used, not just shared, and so do (or should) we. Focus both the title and content of your assessment report on moving from talk to practical, concrete responses to your assessment results.
Focus on what you’ve learned from your assessments rather than the assessments themselves. Every subheading in Kristof’s article states a conclusion drawn from his evidence. There’s no “Summary of Results” heading like those we see in so many assessment reports. Include in your report subheadings that will entice everyone to keep reading.
Go heavy on visuals, light on text. My estimate is that about half the article is visuals, half text. This makes the report a fast read, with points literally jumping out at us.
Go for graphs and other visuals rather than tables of data. Every single set of data in Kristof’s report is accompanied by graphs or other visuals that let us immediately see his point.
Order results from highest to lowest. There’s no law that says you must present the results for rubric criteria or a survey rating scale in their original order. Ordering results from highest to lowest—especially when accompanied by a bar graph—lets the big point literally pop out at the reader.
Use color to help drive home key points. Look at the section titled “Fewer Guns = Fewer Deaths” and see how adding just one color drives home the point of the graphics. I encourage what I call traffic light color-coding, with green for good news and red for results that, um, need attention.
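The two ideas above, ordering results from highest to lowest and traffic-light color-coding, are easy to prototype before anyone opens a graphics tool. Here is a minimal sketch in Python; the criteria, percentages, and thresholds are all hypothetical:

```python
# Hypothetical rubric results: criterion -> percent of students meeting the standard.
results = {
    "Thesis": 62,
    "Evidence": 85,
    "Organization": 74,
    "Citation": 48,
    "Mechanics": 91,
}

def traffic_light(percent, good=80, poor=60):
    """Green for good news, red for results that need attention, yellow between."""
    if percent >= good:
        return "green"
    if percent < poor:
        return "red"
    return "yellow"

# Order results from highest to lowest so the big point pops out first.
ranked = sorted(results.items(), key=lambda kv: kv[1], reverse=True)
for criterion, pct in ranked:
    print(f"{criterion:12s} {pct:3d}%  [{traffic_light(pct)}]")
```

The same ranked list, fed to any charting tool as a horizontal bar graph in those three colors, gives readers the kind of at-a-glance view Kristof's graphics achieve.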
Pull together disparate data on student learning. Kristof and his colleagues pulled together data from a wide variety of sources. The visual of public opinions on guns, toward the end of the article, brings together results from a variety of polls into one visual. Yes, the polls may not be strictly comparable, but Kristof acknowledges their sources. And the idea (that should be) behind assessment is not to make perfect decisions based on perfect data but to make somewhat better decisions based on somewhat better information than we would make without assessment evidence. So if, say, you’re assessing information literacy skills, pull together not only rubric results but relevant questions from surveys like NSSE, students’ written reflections, and maybe even relevant questions from student evaluations of teaching (anonymous and aggregated across faculty, obviously).
Breakouts can add insight, if used judiciously. I’m firmly opposed to inappropriate comparisons across student cohorts (of course humanities students will have weaker math skills than STEM students). But the state-by-state comparisons that Kristof provides help make the case for concrete steps that might be taken. Appropriate, relevant, meaningful comparisons can similarly help us understand assessment results and figure out what to do.
Get students involved. I don’t have the expertise to easily generate many of the visuals in Kristof’s article, but many of today’s students do, or they’re learning how in a graphic design course. Creating these kinds of visuals would make a great class project. But why stop student involvement there? Just as Kristof intends his article to be discussed and used by just about anyone, write your assessment report so it can be used to engage students as well as faculty and staff in the conversation about what’s going on with student learning and what action steps might be appropriate and feasible.
Distinguish between annual updates and periodic mega-reviews. Few of us have the resources to generate a report of Kristof’s scale annually—and in many cases our assessment results don’t call for this, especially when the results indicate that students are generally learning what we want them to. But this kind of report would be very helpful when results are, um, disappointing, or when a program is undergoing periodic program review, or when an accreditation review is coming up. Flexibility is the key here. Rather than mandate a particular report format from everyone, match the scope of the report to the scope of issues uncovered by assessment evidence.
|Posted on October 29, 2017 at 9:50 AM||comments (2)|
Assessment results are often used to make tweaks to individual courses and sometimes individual programs. It can be harder to figure out how to use assessment results to make broad, meaningful change across a college or university. But here’s one way to do so: Use assessment results to drive faculty professional development programming.
Here’s how it might work.
An assessment committee or some other appropriate group reviews annual assessment reports from academic programs and gen ed requirements. As they do, they notice some repeated concerns about shortcomings in student learning. Perhaps several programs note that their students struggle to analyze data. Perhaps several others note that quite a few students aren’t citing sources properly. Perhaps several others are dissatisfied with their students’ writing skills.
Note that the committee doesn’t need reports to be in a common format or share a common assessment tool in order to make these observations. This is a qualitative, not quantitative, analysis of the assessment reports. The committee can make a simple list of the single biggest concern with student learning mentioned in each report, then review the list and see what kinds of concerns are mentioned most often.
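The committee's tally can be sketched in a few lines. This is purely an illustration; the program names and concern labels below are invented:

```python
from collections import Counter

# Hypothetical: the single biggest concern noted in each program's annual report.
biggest_concerns = {
    "Biology": "data analysis",
    "Business": "data analysis",
    "Sociology": "data analysis",
    "History": "writing",
    "English": "writing",
    "Psychology": "citing sources",
}

# Tally how often each concern appears across programs.
tally = Counter(biggest_concerns.values())
for concern, count in tally.most_common():
    print(f"{concern}: flagged in {count} report(s)")
```

Nothing here requires a common report format or a shared assessment tool, just a reader who can name the top concern in each report.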
The assessment committee then shares what they’ve noticed with whoever plans faculty professional development programming—what’s often called a teaching-learning center. The center can then plan workshops, brown-bag lunch discussions, learning communities, or other professional development opportunities to help faculty improve student achievement of these learning goals.
There needn’t be much, if any, expense in offering such opportunities. Assessment results are used to decide how existing professional development resources are deployed, not necessarily to increase those resources.
|Posted on October 7, 2017 at 8:20 AM||comments (1)|
One of the many things I’ve learned by watching Ken Burns’ series on Vietnam is that Defense Secretary Robert McNamara was a data geek. A former Ford Motor Company executive, he routinely asked for all kinds of data. Sounds great, but there were two (literally) fatal flaws with his approach to assessment.
First, McNamara asked for data on virtually anything measurable, compelling staff to spend countless hours filling binders with all kinds of metrics—too much data for anyone to absorb. And I wonder what his staff could have accomplished had they not been forced to spend so much time on data collection.
Second, McNamara asked for the wrong data. He wanted to track progress in winning the war, but he focused on the wrong measures: body counts, weapons captured. He apparently didn’t have a clear sense of exactly what it would mean to win this war and how to measure progress toward that end. I’m not a military scientist, but I’d bet that more important measures would have included the attitudes of Vietnam’s citizens and the capacity of the South Vietnamese government to deal with insurgents on its own.
There are three important lessons here for us. First, worthwhile assessment requires a clear goal. I often compare teaching to taking our students on a journey. Our learning goal is where we want them to be at the end of the learning experience (be it a course, program, degree, or co-curricular experience).
Second, worthwhile assessment measures track progress toward that destination. Are our students making adequate progress along their journey? Are they reaching the destination on time?
Third, assessment should be limited—just enough information to help us decide if students are reaching the destination on time and, if not, what we might do to help them on their journey. Assessment should never take so much time that it detracts from the far more important work of helping students learn.
|Posted on August 26, 2017 at 8:20 AM||comments (1)|
Chris Coleman recently asked the Accreditation in Southern Higher Education listserv (ACCSHE@listserv.uhd.edu) about schedules for assessing program learning outcomes. Should programs assess one or two learning outcomes each year, for example? Or should they assess everything once every three or four years? Here are my thoughts from my forthcoming third edition of Assessing Student Learning: A Common Sense Guide.
If a program isn’t already assessing its key program learning outcomes, it needs to assess them all, right away, in this academic year. All the regional accreditors have been expecting assessment for close to 20 years. By now they expect implemented processes with results, and with those results discussed and used. A schedule to start collecting data over the next few years—in essence, a plan to come into compliance—doesn’t demonstrate compliance.
Use assessments that yield information on several program learning outcomes. Capstone requirements (senior papers or projects, internships, etc.) are not only a great place to collect evidence of learning, but they’re also great learning experiences, letting students integrate and synthesize their learning.
Do some assessment every year. Assessment is part of the teaching-learning process, not an add-on chore to be done once every few years. Use course-embedded assessments rather than special add-on assessments; this way, faculty are already collecting assessment evidence every time the course is taught.
Keep in mind that the burden of assessment is not collecting evidence per se but aggregating, analyzing, and reporting it. Again, if faculty are using course-embedded assessments, they’re already collecting evidence. Be sensitive to the extra work of aggregating, analyzing, and reporting. Do all you can to keep that extra work to a bare-bones minimum and make everyone’s jobs as easy as possible.
Plan to assess all key learning outcomes within two years—three at most. You wouldn’t use a bank statement from four years ago to decide if you have enough money to buy a car today! Faculty similarly shouldn’t be using evidence of student learning from four years ago to decide if student learning today is adequate. Assessments conducted just once every several years also take more time in the long run, as chances are good that faculty won’t find or remember what they did several years earlier, and they’ll need to start from scratch. This means far more time is spent planning and designing a new assessment—in essence, reinventing the wheel. Imagine trying to balance your checking account once a year rather than every month—or your students cramming for a final rather than studying over an entire term—and you can see how difficult and frustrating infrequent assessments can be, compared to those conducted routinely.
Keep timelines and schedules flexible rather than rigid, adapted to meet evolving needs. Suppose you assess students’ writing skills and they are poor. Do you really want to wait two or three years to assess them again? Disappointing outcomes call for frequent reassessment to see if planned changes are having their desired effects. Assessments that have yielded satisfactory evidence of student learning are fine to move to a back-burner, however. Put those reassessments on a staggered schedule, conducting them only once every two or three years just to make sure student learning isn’t slipping. This frees up time to focus on more pressing matters.
|Posted on August 20, 2017 at 6:35 AM||comments (1)|
Scott Jaschik at Inside Higher Ed just wrote an article tying together two studies showing that many higher ed stakeholders don’t understand—and therefore misinterpret—the term liberal arts.
And who can blame them? It’s an opaque term that I’d bet many in higher ed don’t understand either. When I researched my 2014 book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability, I learned that the term liberal comes from liber, the Latin word for free. In the Middle Ages in Europe, a liberal arts education was for the free individual, as opposed to an individual obliged to enter a particular trade or profession. That paradigm simply isn’t relevant today.
Today the liberal arts are those studies that address knowledge, skills, and competencies that cross disciplines, yielding a broadly-educated, well-rounded individual. Many people use the term liberal arts and sciences or simply arts and sciences to try to make clear that the liberal arts comprise study of the sciences as well as the arts and humanities. The Association of American Colleges & Universities (AAC&U), a leading advocate of liberal arts education, refers to liberal arts as liberal education. Given today’s political climate, that may not have been a good decision!
So what might be a good synonym for the liberal arts? I confess I don’t have a proposal. Arts and sciences is one option, but I’d bet many stakeholders don’t understand that this includes the humanities and social sciences, and the term doesn’t convey the value of studying these things. Some of the terms I think would resonate with the public are broad, well-rounded, transferable, and thinking skills. But I’m not sure how to combine these terms meaningfully and succinctly.
What we need here is evidence-informed decision-making, including surveys and focus groups of various higher education stakeholders to see what resonates with them. I hope AAC&U, as a leading advocate of liberal arts education, might consider taking on a rebranding effort including stakeholder research. But if you have any ideas, let me know!
|Posted on August 8, 2017 at 10:35 AM||comments (2)|
Assessing student learning in co-curricular experiences can be challenging! Here are some suggestions from the (drum roll, please!) forthcoming third edition of my book Assessing Student Learning: A Common Sense Guide, to be published by Jossey-Bass on February 4, 2018. (Pre-order your copy at www.wiley.com/WileyCDA/WileyTitle/productCd-1119426936.html)
Recognize that some programs under a student affairs, student development, or student services umbrella are not co-curricular learning experiences. Giving commuting students information on available college services, for example, is not really providing a learning experience. Neither are student intervention programs that contact students at risk for poor academic performance to connect them with available services.
Focus assessment efforts on those co-curricular experiences where significant, meaningful learning is expected. Student learning may be a very minor part of what some student affairs, student development, and student services units seek to accomplish. The registrar’s office, for example, may answer students’ questions about registration but not really offer a significant program to educate students on registration procedures. And while some college security operations view educational programs on campus safety as a major component of their mission, others do not. Focus assessment time and energy on those co-curricular experiences that are large or significant enough to make a real impact on student learning.
Make sure every co-curricular experience has a clear purpose and clear goals. An excellent co-curricular experience is designed just like any other learning experience: it has a clear purpose, with one or more clear learning goals; it is designed to help students achieve those goals; and it assesses how well students have achieved those goals.
Recognize that many co-curricular experiences focus on student success as well as student learning—and assess both. Many co-curricular experiences, including orientation programs and first-year experiences, are explicitly intended to help students succeed in college: to earn passing grades, to progress on schedule, and to graduate. So it’s important to assess both student learning and student success in order to show that the value of these programs is worth the college’s investment in them.
Recognize that it’s often hard to determine definitively the impact of one co-curricular experience on student success, because other confounding factors may be at work. Students may successfully complete a first-year experience designed to prepare them to persist, for example, then leave because they’ve decided to pursue a career that doesn’t require a college degree.
Focus a co-curricular experience on an institutional learning goal such as interpersonal skills, analysis, professionalism, or problem solving.
Limit the number of learning goals of a co-curricular experience to perhaps just one or two.
State learning goals so they describe what students will be able to do after and as a result of the experience, not what they’ll do during the experience.
For voluntary co-curricular experiences, start but don’t end by tracking participation. Obviously if few students participate, impact is minimal no matter how much student learning takes place. So participation is an important measure. Set a rigorous but realistic target for participation, count the number of students who participate, and compare your count against your target.
Consider assessing student satisfaction, especially for voluntary experiences. Student dissatisfaction is an obvious sign that there’s a problem! But student satisfaction levels alone are insufficient assessments because they don’t tell us how well students have learned what we value.
Voluntary co-curricular experiences call for fun, engaging assessments. No one wants to take a test or write a paper to assess how well they’ve achieved a co-curricular experience’s learning goals. Group projects and presentations, role plays, team competitions, and Learning Assessment Techniques (Barkley & Major, 2016) can be more fun and engaging.
Assessments in co-curricular experiences work only if students give them reasonably serious thought and effort. This can be a challenge when there's no grade to provide an incentive. Explain how the assessment will affect something students find interesting and important.
Short co-curricular experiences call for short assessments. Brief, simple assessments such as minute papers, rating scales, and Learning Assessment Techniques can all yield a great deal of insight.
Attitudes and values can often only be assessed with indirect evidence such as rating scales, surveys, interviews, and focus groups. Reflective writing may be a useful, direct assessment strategy for some attitudes and values.
Co-curricular experiences often have learning goals such as teamwork that are assessed through processes rather than products. And processes are harder to assess than products. Direct observation (of a group discussion, for example), student self-reflection, peer assessments, and short quizzes are possible assessment strategies.
|Posted on June 19, 2017 at 9:30 AM||comments (1)|
Someone on the ASSESS listserv recently asked how to advise a faculty member who wanted to collect more assessment evidence before using it to try to make improvements in what he was doing in his classes. Here's my response, based on what I learned in a book I discussed in my last blog post called How to Measure Anything.
First, we tend to think of assessment as helping us make decisions (generally about improving teaching and learning). Think instead of assessment as helping us make better decisions than we would make without it. Yes, faculty are always making informal decisions about changes to their teaching. Assessment should simply help them make somewhat better-informed decisions.
Second, think about the risks of making the wrong decision. I'm going to assume, rightly or wrongly, that the professor is assessing student achievement of quantitative skills in a gen ed statistics course, and the results aren't great. There are five possible decision outcomes:
1. He decides to do nothing, and students in subsequent courses do just fine without any changes. (He was right; this was an off sample.)
2. He decides to do nothing, and students in subsequent courses continue to have, um, disappointing outcomes.
3. He changes things, and subsequent students do better because of his changes.
4. He changes things, but the changes don't help; despite his best efforts, the disappointing outcomes persist.
5. He changes things, and subsequent students do better, but not because of his changes—they're simply better prepared than this year's students.
So the risk of doing nothing is getting Outcome 2 instead of Outcome 1: Yet another class of students doesn't learn what they need to learn. The consequence is that even more students run into trouble in later classes, on the job, wherever, until the eventual decision is made to make some changes.
The risk of changing things, meanwhile, is getting Outcome 4 or 5 instead of Outcome 3: He makes changes but they don't help. The consequence here is his wasted time and, possibly, wasted money, if his college invested in something like an online statistics tutoring module or gave him some released time to work on this.
The question then becomes, "Which consequence is worse?" Normally I'd say the first: continuing to pass or graduate students with inadequate learning. If so, it makes sense to go ahead with changes even without a lot of evidence. But if the second consequence involves a major investment of time or resources, then it may make sense to wait for more corroborating evidence before making that investment.
One final thought: Charles Blaich and Kathleen Wise wrote a paper for NILOA a few years ago in which they noted that our tradition of scholarly research does not include a culture of using research. Think of the research papers you've read—they generally conclude either by suggesting how other people might use the research or by suggesting areas for further research. So sometimes the argument to wait and collect more data is simply a stalling tactic by people who don't want to change.
|Posted on May 30, 2017 at 12:10 AM||comments (13)|
I stumbled across a book by Douglas Hubbard titled How to Measure Anything: Finding the Value of “Intangibles” in Business. Yes, I was intrigued, so I splurged on it and devoured it.
The book should really be titled How to Measure Anything Without Killing Yourself, because it focuses as much on limiting what you measure as on how to measure it. Here are some of the great ideas I came away with:
1. We are (or should be) assessing because we want to make better decisions than we would make without assessment results. If assessment results don’t help us make better decisions, they’re a waste of time and money.
2. Decisions are made with some level of uncertainty. Assessment results should reduce uncertainty but won’t eliminate it.
3. One way to judge the quality of assessment results is to think about how confident you are in them by pretending to make a money bet. Are you confident enough in the decision you’re making, based on assessment results, that you’d be willing to make a money bet that the decision is the right one? How much money would you be willing to bet?
4. Don’t try to assess everything. Focus on goals that you really need to assess and on assessments that may lead you to change what you’re doing. In other words, assessments that only confirm the status quo should go on a back burner. (I suggest assessing them every three years or so, just to make sure results aren’t slipping.)
5. Before starting a new assessment, ask how much you already know, how confident you are in what you know, and why you’re confident or not confident. Information you already have on hand, however imperfect, may be good enough. How much do you really need this new assessment?
6. Don’t reinvent the wheel. Almost anything you want to assess has already been assessed by others. Learn from them.
7. You have access to more assessment information than you might think. For fuzzy goals like attitudes and values, ask how you observe the presence or absence of the attitude or value in students and whether it leaves a trail of any kind.
8. If you know almost nothing, almost anything will tell you something. Don’t let anxiety about what could go wrong with assessment keep you from just starting to do some organized assessment.
9. Assessment results have both cost (in time as well as dollars) and value. Compare the two and make sure they’re in appropriate balance.
10. Aim for just enough results. You probably need less data than you think, and an adequate amount of new data is probably more accessible than you first thought. Compare the expected value of perfect assessment results (which are unattainable anyway), imperfect assessment results, and sample assessment results. Is the value of sample results good enough to give you confidence in making decisions?
11. Intangible does not mean immeasurable.
12. Attitudes and values are about human preferences and human choices. Preferences revealed through behaviors are more illuminating than preferences stated through rating scales, interviews, and the like.
13. Dashboards should be at-a-glance summaries. Just like your car’s dashboard, they should be mostly visual indicators such as graphs, not big tables that require study. Every item on the dashboard should be there with specific decisions in mind.
14. Assessment value is perishable. How quickly it perishes depends on how quickly our students, our curricula, and the needs of our students, employers, and region are changing.
15. Something we don’t ask often enough is whether a learning experience was worth the time students, faculty, and staff invested in it. Do students learn enough from a particular assignment or co-curricular experience to make it worth the time they spent on it? Do students learn enough from writing papers that take us 20 hours to grade to make our grading time worthwhile?
|Posted on May 21, 2017 at 6:10 AM||comments (5)|
I was impressed with—and found myself in agreement with—Douglas Roscoe’s analysis of the state of assessment in higher education in “Toward an Improvement Paradigm for Academic Quality” in the Winter 2017 issue of Liberal Education. Like Douglas, I think the assessment movement has lost its way, and it’s time for a new paradigm. And Douglas’s improvement paradigm—which focuses on creating spaces for conversations on improving teaching and curricula, making assessment more purposeful and useful, and bringing other important information and ideas into the conversation—makes sense. Much of what he proposes is in fact echoed in Using Evidence of Student Learning to Improve Higher Education by George Kuh, Stanley Ikenberry, Natasha Jankowski, Timothy Cain, Peter Ewell, Pat Hutchings, and Jillian Kinzie.
But I don’t think his improvement paradigm goes far enough, so I propose a second, concurrent paradigm shift.
I’ve always felt that the assessment movement tried to do too much, too quickly. The assessment movement emerged from three concurrent forces. One was the U.S. federal government, which through a series of Higher Education Acts required Title IV gatekeeper accreditors to require the institutions they accredit to demonstrate that they were achieving their missions. Because the fundamental mission of an institution of higher education is, well, education, this was essentially a requirement that institutions demonstrate that their students were achieving the institution’s intended learning outcomes.
The Higher Education Acts also required Title IV gatekeeper accreditors to require the institutions they accredit to demonstrate “success with respect to student achievement in relation to the institution’s mission, including, as appropriate, consideration of course completion, state licensing examinations, and job placement rates” (1998 Amendments to the Higher Education Act of 1965, Title IV, Part H, Sect. 492(b)(4)(E)). The examples in this statement imply that the federal government defines student achievement as a combination of student learning, course and degree completion, and job placement.
A second concurrent force was the movement from a teaching-centered to learning-centered approach to higher education, encapsulated in Robert Barr and John Tagg’s 1995 landmark article in Change, “From Teaching to Learning: A New Paradigm for Undergraduate Education.” The learning-centered paradigm advocates, among other things, making undergraduate education an integrated learning experience—more than a collection of courses—that focuses on the development of lasting, transferrable thinking skills rather than just basic conceptual understanding.
The third concurrent force was the growing body of research on practices that help students learn, persist, and succeed in higher education. Among these practices: students learn more effectively when they integrate and see coherence in their learning, when they participate in out-of-class activities that build on what they’re learning in the classroom, and when new learning is connected to prior experiences.
These three forces led to calls for a lot of concurrent, dramatic changes in U.S. higher education:
- Defining quality by impact rather than effort—outcomes rather than processes and intent
- Looking on undergraduate majors and general education curricula as integrated learning experiences rather than collections of courses
- Adopting new research-informed teaching methods that are a 180-degree shift from lectures
- Developing curricula, learning activities, and assessments that focus explicitly on important learning outcomes
- Identifying learning outcomes not just for courses but for entire programs, general education curricula, and even across entire institutions
- Framing what we used to call extracurricular activities as co-curricular activities, connected purposefully to academic programs
- Using rubrics rather than multiple-choice tests to evaluate student learning
- Working collaboratively, including across disciplinary and organizational lines, rather than independently
These are well-founded and important aims, but they are all things that many in higher education had never considered before. Now everyone was being asked to accept the need for all these changes, learn how to make them, and implement them—all at the same time. No wonder there’s been so much foot-dragging on assessment! And no wonder that, a generation into the assessment movement and unrelenting accreditation pressure, there are still great swaths of the higher education community who have not yet done much of this work and who indeed remain oblivious to much of it.
What particularly troubles me is that we’ve spent too much time and effort on trying to create—and assess—integrated, coherent student learning experiences and, in doing so, left the grading process in the dust. Requiring everything to be part of an integrated, coherent learning experience can lead to pushing square pegs into round holes. Consider:
- The transfer associate degrees offered by many community colleges, for example, aren’t really programs—they’re a collection of general education and cognate requirements that students complete so they’re prepared to start a major after they transfer. So identifying—or assessing—program learning outcomes for them frankly doesn’t make much sense.
- The courses available to fulfill some general education requirements don’t really have much in common, so their shared general education outcomes become so broad as to be almost meaningless.
- Some large universities are divided into separate colleges and schools, each with its own distinct mission and learning outcomes. Forcing these universities to identify institutional learning outcomes applicable to every program makes no sense—again, the outcomes must be so broad as to be almost meaningless.
- The growing numbers of students who swirl through multiple colleges before earning a degree aren’t going to have a really integrated, coherent learning experience no matter how hard any of us tries.
At the same time, we have given short shrift to helping faculty learn how to develop and use good assessments in their own classes and how to use grading information to understand and improve their own teaching. In the hundreds of workshops and presentations I’ve done across the country, I often ask for a show of hands from faculty who routinely count how many students earned each score on each rubric criterion of a class assignment, so they can understand what students learned well and what they didn’t. Invariably only a tiny proportion raise their hands. When I work with faculty who use multiple-choice tests, I ask how many use a test blueprint to plan their tests so they align with key course objectives, and it’s consistently a foreign concept to them.
In short, we’ve left a vital part of the higher education experience—the grading process—in the dust. We invest more time in calibrating rubrics for assessing institutional learning outcomes, for example, than we do in calibrating grades. And grades have far more serious consequences to our students, employers, and society than assessments of program, general education, co-curricular, or institutional learning outcomes. Grades decide whether students progress to the next course in a sequence, whether they can transfer to another college, whether they graduate, whether they can pursue a more advanced degree, and in some cases whether they can find employment in their discipline.
So where should we go? My paradigm springs from visits to two Canadian institutions a few years ago. At that time Canadian quality assurance agencies did not have any requirements for assessing student learning, so my workshops focused solely on assessing learning more effectively in the classroom. The workshops were well received because they offered very practical help that faculty wanted and needed. And at the end of the workshops, faculty began suggesting that perhaps they should collaborate to talk about shared learning outcomes and how to teach and assess them. In other words, discussion of classroom learning outcomes began to flow into discussion of program learning outcomes. It’s a naturalistic approach that I wish we in the United States had adopted decades ago.
What I now propose is moving to a focus on applying everything we’ve learned about curriculum design and assessment to the grading process in the classroom. In other words, my paradigm agrees with Roscoe’s that “assessment should be about changing what happens in the classroom—what students actually experience as they progress through their courses—so that learning is deeper and more consequential.” My paradigm emphasizes the following.
- Assessing program, general education, and institutional learning outcomes remains an assessment best practice. Those who have found value in these assessments would be encouraged to continue to engage in them and honored through mechanisms such as NILOA’s Excellence in Assessment designation.
- Teaching excellence is defined in significant part by four criteria: (1) the use of research-informed teaching and curricular strategies, (2) the alignment of learning activities and grading criteria to stated course objectives, (3) the use of good quality evidence, including but not limited to assessment results from the grading process, to inform changes to one’s teaching, and (4) active participation in and application of professional development opportunities on teaching including assessment.
- Investments in professional development on research-informed teaching practices exceed investments in assessment.
- Assessment work is coordinated and supported by faculty professional development centers (teaching-learning centers) rather than offices of institutional effectiveness or accreditation, sending a powerful message that assessment is about improving teaching and learning, not fulfilling an external mandate.
- We aim to move from a paradigm of assessment, not just to one of improvement as Roscoe proposes, but to one of evidence-informed improvement—a culture in which the use of good quality evidence to inform discussions and decisions is expected and valued.
- If assessment is done well, it’s a natural part of the teaching-learning process, not a burdensome add-on responsibility. The extra work is in reporting it to accreditors. This extra work can’t be eliminated, but it can be minimized and made more meaningful by establishing the expectation that reports address only key learning outcomes in key courses (including program capstones), on a rotating schedule, and that course assessments are aggregated and analyzed within the program review process.
Under this paradigm, I think we have a much better shot at achieving what’s most important: giving every student the best possible education.