Linda Suskie

  A Common Sense Approach to Assessment & Accreditation

Blog

Is It Time to Abandon the Term "Liberal Arts"?

Posted on August 20, 2017 at 6:35 AM Comments (0)

Scott Jaschik at Inside Higher Ed just wrote an article tying together two studies showing that many higher ed stakeholders don’t understand—and therefore misinterpret—the term liberal arts.


And who can blame them? It’s an opaque term that I’d bet many in higher ed don’t understand either. When I researched my 2014 book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability, I learned that the term liberal comes from liber, the Latin word for free. In the Middle Ages in Europe, a liberal arts education was for the free individual, as opposed to an individual obliged to enter a particular trade or profession. That paradigm simply isn’t relevant today.


Today the liberal arts are those studies that address knowledge, skills, and competencies that cross disciplines, yielding a broadly educated, well-rounded individual. Many people use the term liberal arts and sciences or simply arts and sciences to try to make clear that the liberal arts comprise study of the sciences as well as the arts and humanities. The Association of American Colleges & Universities (AAC&U), a leading advocate of liberal arts education, refers to the liberal arts as liberal education. Given today’s political climate, that may not have been a good decision!


So what might be a good synonym for the liberal arts? I confess I don’t have a proposal. Arts and sciences is one option, but I’d bet many stakeholders don’t understand that this includes humanities and social sciences, and this term doesn’t convey the value of studying these things. Some of the terms I think would resonate with the public are broad, well-rounded, transferable, and thinking skills. But I’m not sure how to combine these terms meaningfully and succinctly.


What we need here is evidence-informed decision-making, including surveys and focus groups of various higher education stakeholders to see what resonates with them. I hope AAC&U, as a leading advocate of liberal arts education, might consider taking on a rebranding effort including stakeholder research. But if you have any ideas, let me know!

A New Paradigm for Assessment

Posted on May 21, 2017 at 6:10 AM Comments (5)

I was impressed with—and found myself in agreement with—Douglas Roscoe’s analysis of the state of assessment in higher education in “Toward an Improvement Paradigm for Academic Quality” in the Winter 2017 issue of Liberal Education. Like Douglas, I think the assessment movement has lost its way, and it’s time for a new paradigm. And Douglas’s improvement paradigm—which focuses on creating spaces for conversations on improving teaching and curricula, making assessment more purposeful and useful, and bringing other important information and ideas into the conversation—makes sense. Much of what he proposes is in fact echoed in Using Evidence of Student Learning to Improve Higher Education by George Kuh, Stanley Ikenberry, Natasha Jankowski, Timothy Cain, Peter Ewell, Pat Hutchings, and Jillian Kinzie.


But I don’t think his improvement paradigm goes far enough, so I propose a second, concurrent paradigm shift.


I’ve always felt that the assessment movement tried to do too much, too quickly. The assessment movement emerged from three concurrent forces. One was the U.S. federal government, which through a series of Higher Education Acts required Title IV gatekeeper accreditors to require the institutions they accredit to demonstrate that they were achieving their missions. Because the fundamental mission of an institution of higher education is, well, education, this was essentially a requirement that institutions demonstrate that their intended student learning outcomes were being achieved by their students.


The Higher Education Acts also required Title IV gatekeeper accreditors to require the institutions they accredit to demonstrate “success with respect to student achievement in relation to the institution’s mission, including, as appropriate, consideration of course completion, state licensing examinations, and job placement rates” (1998 Amendments to the Higher Education Act of 1965, Title IV, Part H, Sect. 492(b)(4)(E)). The examples in this statement imply that the federal government defines student achievement as a combination of student learning, course and degree completion, and job placement.


A second concurrent force was the movement from a teaching-centered to a learning-centered approach to higher education, encapsulated in Robert Barr and John Tagg’s 1995 landmark article in Change, “From Teaching to Learning: A New Paradigm for Undergraduate Education.” The learning-centered paradigm advocates, among other things, making undergraduate education an integrated learning experience—more than a collection of courses—that focuses on the development of lasting, transferable thinking skills rather than just basic conceptual understanding.


The third concurrent force was the growing body of research on practices that help students learn, persist, and succeed in higher education. Among these practices: students learn more effectively when they integrate and see coherence in their learning, when they participate in out-of-class activities that build on what they’re learning in the classroom, and when new learning is connected to prior experiences.


These three forces led to calls for a lot of concurrent, dramatic changes in U.S. higher education:

  • Defining quality by impact rather than effort—outcomes rather than processes and intent
  • Looking on undergraduate majors and general education curricula as integrated learning experiences rather than collections of courses
  • Adopting new research-informed teaching methods that are a 180-degree shift from lectures
  • Developing curricula, learning activities, and assessments that focus explicitly on important learning outcomes
  • Identifying learning outcomes not just for courses but for entire programs, general education curricula, and even entire institutions
  • Framing what we used to call extracurricular activities as co-curricular activities, connected purposefully to academic programs
  • Using rubrics rather than multiple choice tests to evaluate student learning
  • Working collaboratively, including across disciplinary and organizational lines, rather than independently


These are well-founded and important aims, but they are all things that many in higher education had never considered before. Now everyone was being asked to accept the need for all these changes, learn how to make these changes, and implement all these changes—and all at the same time. No wonder there’s been so much foot-dragging on assessment! And no wonder that, a generation into the assessment movement and unrelenting accreditation pressure, there are still great swaths of the higher education community who have not yet done much of this and who indeed remain oblivious to much of this.


What particularly troubles me is that we’ve spent too much time and effort on trying to create—and assess—integrated, coherent student learning experiences and, in doing so, left the grading process in the dust. Requiring everything to be part of an integrated, coherent learning experience can lead to pushing square pegs into round holes. Consider:

  • The transfer associate degrees offered by many community colleges, for example, aren’t really programs—they’re a collection of general education and cognate requirements that students complete so they’re prepared to start a major after they transfer. So identifying—or assessing—program learning outcomes for them frankly doesn’t make much sense.
  • The courses available to fulfill some general education requirements don’t really have much in common, so their shared general education outcomes become so broad as to be almost meaningless.
  • Some large universities are divided into separate colleges and schools, each with its own distinct mission and learning outcomes. Forcing these universities to identify institutional learning outcomes applicable to every program makes no sense—again, the outcomes must be so broad as to be almost meaningless.
  • The growing numbers of students who swirl through multiple colleges before earning a degree aren’t going to have a really integrated, coherent learning experience no matter how hard any of us tries.


At the same time, we have given short shrift to helping faculty learn how to develop and use good assessments in their own classes and how to use grading information to understand and improve their own teaching. In the hundreds of workshops and presentations I’ve done across the country, I often ask for a show of hands from faculty who routinely count how many students earned each score on each rubric criterion of a class assignment, so they can understand what students learned well and what they didn’t learn well. Invariably a tiny proportion raises their hands. When I work with faculty who use multiple choice tests, I ask how many use a test blueprint to plan their tests so they align with key course objectives, and it’s consistently a foreign concept to them.
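
For the tallying I have in mind, nothing fancy is needed. Here is a minimal sketch in Python (the rubric criteria and scores are invented for illustration, not anyone's real data) of counting how many students earned each score on each criterion of a class assignment:

    from collections import Counter

    # Hypothetical rubric scores for one class assignment.
    # Each student's record maps a rubric criterion (names invented for
    # illustration) to a score from 1 (beginning) to 4 (exemplary).
    scores = [
        {"Thesis": 4, "Evidence": 2, "Organization": 3, "Mechanics": 4},
        {"Thesis": 3, "Evidence": 2, "Organization": 4, "Mechanics": 3},
        {"Thesis": 4, "Evidence": 1, "Organization": 3, "Mechanics": 4},
        {"Thesis": 2, "Evidence": 2, "Organization": 3, "Mechanics": 3},
    ]

    # Count how many students earned each score on each criterion.
    for criterion in ["Thesis", "Evidence", "Organization", "Mechanics"]:
        tally = Counter(student[criterion] for student in scores)
        summary = ", ".join(f"{n} scored {s}" for s, n in sorted(tally.items()))
        print(f"{criterion}: {summary}")

    # The output makes the pattern plain: every student scored 1 or 2 on
    # Evidence, so that is where to focus teaching improvements.

A spreadsheet can do the same thing; the point is simply to look at results criterion by criterion instead of stopping at the overall grade.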


In short, we’ve left a vital part of the higher education experience—the grading process—in the dust. We invest more time in calibrating rubrics for assessing institutional learning outcomes, for example, than we do in calibrating grades. And grades have far more serious consequences to our students, employers, and society than assessments of program, general education, co-curricular, or institutional learning outcomes. Grades decide whether students progress to the next course in a sequence, whether they can transfer to another college, whether they graduate, whether they can pursue a more advanced degree, and in some cases whether they can find employment in their discipline.


So where should we go? My paradigm springs from visits to two Canadian institutions a few years ago. At that time Canadian quality assurance agencies did not have any requirements for assessing student learning, so my workshops focused solely on assessing learning more effectively in the classroom. The workshops were well received because they offered very practical help that faculty wanted and needed. And at the end of the workshops, faculty began suggesting that perhaps they should collaborate to talk about shared learning outcomes and how to teach and assess them. In other words, discussion of classroom learning outcomes began to flow into discussion of program learning outcomes. It’s a naturalistic approach that I wish we in the United States had adopted decades ago.


What I now propose is moving to a focus on applying everything we’ve learned about curriculum design and assessment to the grading process in the classroom. In other words, my paradigm agrees with Roscoe’s that “assessment should be about changing what happens in the classroom—what students actually experience as they progress through their courses—so that learning is deeper and more consequential.” My paradigm emphasizes the following.

  1. Assessing program, general education, and institutional learning outcomes remains an assessment best practice. Those who have found value in these assessments would be encouraged to continue to engage in them and honored through mechanisms such as NILOA’s Excellence in Assessment designation.
  2. Teaching excellence is defined in significant part by four criteria: (1) the use of research-informed teaching and curricular strategies, (2) the alignment of learning activities and grading criteria to stated course objectives, (3) the use of good quality evidence, including but not limited to assessment results from the grading process, to inform changes to one’s teaching, and (4) active participation in and application of professional development opportunities on teaching including assessment.
  3. Investments in professional development on research-informed teaching practices exceed investments in assessment.
  4. Assessment work is coordinated and supported by faculty professional development centers (teaching-learning centers) rather than offices of institutional effectiveness or accreditation, sending a powerful message that assessment is about improving teaching and learning, not fulfilling an external mandate.
  5. We aim to move from a paradigm of assessment, not just to one of improvement as Roscoe proposes, but to one of evidence-informed improvement—a culture in which the use of good quality evidence to inform discussions and decisions is expected and valued.
  6. If assessment is done well, it’s a natural part of the teaching-learning process, not a burdensome add-on responsibility. The extra work is in reporting it to accreditors. This extra work can’t be eliminated, but it can be minimized and made more meaningful by establishing the expectation that reports address only key learning outcomes in key courses (including program capstones), on a rotating schedule, and that course assessments are aggregated and analyzed within the program review process.


Under this paradigm, I think we have a much better shot at achieving what’s most important: giving every student the best possible education.

What does a new CAO survey tell us about the state of assessment?

Posted on January 26, 2017 at 8:40 AM Comments (6)

A new survey of chief academic officers (CAOs) conducted by Gallup and Inside Higher Ed led me to the sobering conclusion that, after a generation of work on assessment, we in U.S. higher education remain very, very far from pervasively conducting truly meaningful and worthwhile assessment.


Because we've been working on this for so long, I was deliberately tough as I reviewed the results of this survey. The survey asked CAOs to rate the effectiveness of their institutions on a variety of criteria using a scale of very effective, somewhat effective, not too effective, and not effective at all. The survey also asked CAOs to indicate their agreement with a variety of statements on a five-point scale, where 5 = strongly agree, 1 = strongly disagree, and the other points are undefined. At this point I would have liked to see most CAOs rate their institutions at the top of the scale: either “very effective” or “strongly agree.” So these are the results I focused on and, boy, are they depressing.


Quality of Assessment Work

Less than a third (30%) of CAOs say their institution is very effective in identifying and assessing student outcomes. ‘Nuff said on that! :(


Value of Assessment Work

Here the numbers are really dismal. Less than 10% (yes, ten percent, folks!) of CAOs strongly agree that:

  • Faculty members value assessment efforts at their college (4%).
  • The growth of assessment systems has improved the quality of teaching and learning at their college (7%).
  • Assessment has led to better use of technology in teaching and learning (6%). (Parenthetically, that struck me as an odd survey question; I had no idea that one of the purposes of assessment was to improve the use of technology in T&L!)


And just 12% strongly disagree that their college’s use of assessment is more about keeping accreditors and politicians happy than it is about teaching and learning.

 

And only 6% of CAOs strongly disagree that faculty at their college view assessment as requiring a lot of work on their parts. Here I’m reading something into the question that might not be there. If the survey asked if faculty view teaching as requiring a lot of work on their parts, I suspect that a much higher proportion of CAOs would disagree because, while teaching does require a lot of work, it’s work that faculty generally find valuable; it’s what they are expected to do, after all. So I suspect that, if faculty saw value in their assessment work commensurate with the time they put into it, this number would be a lot higher.

 

Using Evidence to Inform Decisions

Here’s a conundrum:

  • Over two thirds (71%) of CAOs say their college makes effective use of data to measure student outcomes.
  • But only about a quarter (26%) say their college is very effective in using data to aid and inform decision making.
  • And only 13% strongly agree that their college regularly makes changes in the curriculum, teaching practices, or student services based on what it finds through assessment.


 So I’m wondering what CAOs consider effective uses of assessment data!


 Furthermore,

  • About two thirds (67%) of CAOs say their college is very effective in providing a quality undergraduate education.
  • But less than half (48%) say it’s very effective in preparing students for the world of work.
  • And only about a quarter (27%) say it’s very effective in preparing students to be engaged citizens.
  • And (as I’ve already noted) only 30% say it’s very effective in identifying and assessing student outcomes.


How can CAOs who admit their colleges are not very effective in preparing students for work or citizenship engagement or assessing student learning nonetheless think their college is very effective in providing a quality undergraduate education? What evidence are they using to draw that conclusion?


And,

  • While less than half of CAOs say their colleges are very effective in preparing students for work,
  • only about a third (32%) strongly agree that their institution is increasing attention to the ability of its degree programs to help students get a good job.


My Conclusions

After a quarter century of work to get everyone to do assessment well:

  • Assessment remains spotty; it is the very rare institution that is doing assessment pervasively and consistently well.
  • A lot of assessment work either isn’t very useful or takes more time than it’s worth.
  • We have not yet transformed American higher education into an enterprise that habitually uses evidence to inform decisions.

Lessons from the Election for Higher Education

Posted on November 28, 2016 at 7:25 AM Comments (0)

If you share my devastation at the results of the U.S. presidential election and its implications for our country and our world, and if you are struggling to understand what has happened and wondering what you can do as a member of the higher education community, this blog post is for you. I don’t have answers, of course, but I have some ideas.

 

Why did Trump get so many votes? While the reasons are complex, and people will be debating them for years, there seem to be two fundamental factors. One can be summed up in that famous line from Bill Clinton’s campaign: It’s the economy, stupid.  Jed Kolko at fivethirtyeight.com found that people who voted for Trump were more likely to feel under economic threat, worried about the future of their jobs.

 

The other reason is education. Nate Silver at fivethirtyeight.com has tweeted that Clinton won all 18 states where an above-average share of the population has advanced degrees, but she lost 29 of the other 32. Education and salary are highly correlated, but Nate Silver has found signs that education was a stronger predictor of who voted for Trump than salary.

 

Why is education such a strong predictor of how people voted? Here’s where we need more research, but I’m comfortable speculating that reasons might include any of the following:

  • People without a college education have relatively few prospects for economic security. In my book Five Dimensions of Quality I noted that the Council on Foreign Relations found that, “going back to the 1970s, all net job growth has been in jobs that require at least a bachelor’s degree.” I also noted a statistic from Anthony Carnevale and his colleagues: “By 2020, 65 percent of all jobs will require postsecondary education and training, up from 28 percent in 1973.”
  • Colleges do help students learn to think critically: to distinguish credible evidence from what I call “incredible” evidence, to weigh evidence carefully when making difficult decisions, and to make decisions based more on good quality evidence than on emotional response.
  • College-educated citizens are more likely to have attended good schools from kindergarten on, learning to think critically not just in college but throughout their schooling.
  • College-educated citizens are more optimistic because their liberal arts studies give them the open-mindedness and flexibility to handle changing times, including changing careers.

We do have a tremendous divide in this country—an education divide—and it is growing. While college degree holders have always earned more than those without a college degree, the income disparity has grown; college graduates now earn 80% more than high school graduates, up from 40% in the 1970s.

 

If we want a country that offers economic security, whose citizens feel a sense of optimism, whose citizens make evidence-informed decisions, and whose citizens are prepared for changes in their country and their lives, we need to work on closing the education divide by helping as many people as possible get a great postsecondary education.

 

What can we do?

  1. Welcome the underprepared. They are the students who really need our help in obtaining not only economic security but the thinking skills that are the hallmark of a college education and a sense of optimism about their future. The future of our country is in their hands.
  2. Make every student want to come back, as Ken O’Donnell has said, until they complete their degree or credential. Every student we lose hurts his or her economic future and our country.
  3. Encourage actively what Ernest Boyer called the scholarship of application: using research to solve real-life problems such as regional social and economic issues.
  4. Partner with local school systems and governments to improve local grade schools. Many regions of the country need new businesses, but new businesses usually want to locate in communities with good schools for their employees and their families.
  5. Create more opportunities for students to listen to and learn from others with different backgrounds and perspectives. Many colleges seek to attract international students and encourage students to study abroad. I’d like to go further. Do we encourage our international students to share their backgrounds and experiences with our American students, both in relevant classes and in co-curricular settings? Do we encourage returning study abroad students to share what they learned with their peers? Do we encourage our students to consider not only a semester abroad but a semester at another U.S. college in a different part of the country?
  6. Create more opportunities for students to learn about the value of courtesy, civility, respect, compassion, and kindness and how to practice these in their lives and careers.

 


Lessons from the Election for Assessment

Posted on November 21, 2016 at 2:45 PM Comments (0)

The results of the U.S. presidential election have lessons both for American higher education and for assessment. Here are the lessons I see for meaningful assessment; I’ll tackle implications for American higher education in my next blog post.

 

Lesson #1: Surveys are a difficult way to collect meaningful information in the 21st century. If your assessment plan includes telephone or online surveys of students, alumni, employers, or anyone else, know going in that it’s very hard to get a meaningful, representative sample.

 

A generation ago (when I wrote the monograph Questionnaire Survey Research: What Works for the Association for Institutional Research), most people had landline phones with listed numbers and without caller ID or voice mail. So it was easy to find their phone number, and they usually picked up the phone when it rang. Today many people don’t have landline phones; they have cell phones with unlisted numbers and caller ID. If the number calling is unfamiliar to them, they let the call go straight to voice mail. Online surveys have similar challenges, partly because databases of e-mail addresses aren’t as readily available as phone books and partly because browsing habits affect the validity of pop-up polls such as those conducted by SurveyMonkey. And all survey formats are struggling with survey fatigue (how many surveys have you been asked to complete in the last month?).

 

Professional pollsters have ways of adjusting for all these factors, but those strategies are difficult and expensive and often beyond our capabilities.

 

Lesson #2: Small sample sizes may not yield meaningful evidence. Because of Lesson #1, many of the political polls we saw were based on only a few hundred respondents. A sample of 250 has an error margin of 6% (meaning that if, for example, you find that 82% of the student work you assessed meets your standard, the true percentage is probably somewhere between 76% and 88%). A sample of 200 has an error margin of 7%. And these error margins assume that the samples of student work you’re looking at are truly representative of all student work. Bottom line: We need to look at a lot of student work, from a broad variety of classes, in order to draw meaningful conclusions.
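
If you want to check figures like these yourself, here is a quick back-of-the-envelope sketch in Python. It uses the standard 95% margin-of-error approximation for a proportion, and it assumes a simple random sample and the worst-case 50/50 split:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Approximate 95% margin of error for a proportion from a simple random sample."""
        return z * math.sqrt(p * (1 - p) / n)

    for n in (200, 250, 1000):
        print(f"A sample of {n} has an error margin of about {margin_of_error(n):.0%}")
    # A sample of 200 has an error margin of about 7%
    # A sample of 250 has an error margin of about 6%
    # A sample of 1000 has an error margin of about 3%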

 

Lesson #3: Small differences aren’t meaningful. I was struck by how many reporters and pundits talked about Clinton having, say, a 1- or 2-percentage-point lead without mentioning that the error margin made these leads too close to call. I know everyone likes to have a single number—it’s easiest to grasp—but I wish we could move to the practice of reporting ranges of likely results, preferably in graphs that show overlaps and convey visually when differences aren’t really significant. That would help audiences understand, for example, whether students’ critical thinking skills really are worse than their written communication skills, or whether their information literacy skills really are better than those of their peers.
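
To illustrate what reporting ranges might look like, here is a small sketch along the same lines (the percentages and sample sizes are hypothetical, not anyone's actual assessment results). It turns each result into a range and flags comparisons that are too close to call:

    import math

    def interval(p, n, z=1.96):
        """Approximate 95% confidence interval for a proportion p observed in a sample of n."""
        moe = z * math.sqrt(p * (1 - p) / n)
        return p - moe, p + moe

    # Hypothetical results: proportion of sampled student work meeting the standard.
    results = {
        "Critical thinking": (0.72, 220),       # (proportion, sample size)
        "Written communication": (0.78, 240),
    }

    intervals = {skill: interval(p, n) for skill, (p, n) in results.items()}
    for skill, (low, high) in intervals.items():
        print(f"{skill}: somewhere between {low:.0%} and {high:.0%}")

    (low1, high1), (low2, high2) = intervals.values()
    if low1 <= high2 and low2 <= high1:  # the two ranges overlap
        print("The ranges overlap, so this difference is too close to call.")
    else:
        print("The ranges do not overlap, so the difference is probably real.")

Even without graphs, reporting each result as a range makes it much harder to over-interpret a two- or three-point gap.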

 

Lesson #4: Meaningful results are in the details. Clinton won the popular vote by well over a million votes but still lost enough states to lose the Electoral College. Similarly, while students at our college may be doing well overall in terms of their analytic reasoning skills, we should be concerned if students in a particular program or cohort aren’t doing that well. Most colleges and universities are so diverse in terms of their offerings and the students they serve that I’m not sure overall institution-wide results are all that helpful; the overall results can mask a great deal of important variation.

 

Lesson #5: We see what we want to see. With Clinton the odds-on favorite to win the race, it was easy to see Trump’s chances of winning (anywhere from 10-30%, depending on the analysis) as insignificant, when in fact these probabilities meant he had a realistic chance of winning. Just as it was important to take a balanced view of poll results, it’s important to bring a balanced view to our assessment results. Usually our assessment results are a mixed bag, with both reasons to cheer and reasons to reflect and try to improve. We need to make sure we see—and share—both the successes and the areas for concern.

 

Fixing assessment in American higher education

Posted on May 7, 2016 at 9:00 AM Comments (4)

In my April 25 blog post, “Are our assessment processes broken?” I listed five key problems with assessment in the United States. Can we fix them? Yes, we can, primarily because today we have a number of organizations and entities that can tackle them, including (in no particular order):

 

Here are five steps that I think will dramatically improve the quality and effectiveness of student learning assessment in the United States.

 

1. Develop a common vocabulary. So much time is wasted debating the difference between a learning outcome and a learning objective, for example. The assessment movement is now mature enough that we can develop a common baseline glossary of those terms that continue to be muddy or confusing.

 

2. Define acceptable, good, and best assessment practices. Yes, many accreditors provide professional development for their membership and reviewers on assessment, but trainers often focus on best practices rather than minimally acceptable practices. This leads to reviewers unnecessarily “dinging” institutions on relatively minor points (say, learning outcomes don’t start with action words) while missing the forest of, say, piles of assessment evidence that aren’t being used meaningfully.

 

Specifically, we need to practice what we preach with one or more rubrics that list the essential elements of assessment and define best, good, acceptable, and unacceptable performance levels for each criterion. Fortunately, we have some good models to work from: NILOA’s Excellence in Assessment designation criteria, CHEA’s awards for Effective Institutional Practice in Student Learning Outcomes, recognition criteria developed by the (now defunct) New Leadership Alliance for Student Learning and Accountability, and rubrics developed by some accreditors. Then we need to educate institutions and accreditation reviewers on how to use the rubric(s).

 

3. Focus less on student learning assessment (and its cousins, student achievement, student success, and completion) and more on teaching and learning. I would love to see Lumina focus on excellent teaching (including engagement) as a primary strategy to achieve its completion agenda—getting more faculty to adopt research-informed pedagogies that help students learn and succeed. I’d also like to see accreditors include the use of research- and evidence-informed teaching practices in their definitions of educational excellence.

 

4. Communicate clearly and succinctly with various audiences what our students are learning and what we're doing to improve learning. I haven't yet found any institution that I think is doing this really well. Capella is the only one I’ve seen that impresses me, and even Capella only presents results, not what they're doing to improve learning. I'm intrigued by the concept of infographics and wish I'd studied graphic design! A partnership with some graphic designers (student interns or class project?) might help us come up with some effective ways to tell our complicated stories.

 

5. Focus public attention on student learning as an essential part of student success. As Richard DeMillo recently pointed out, we need to find meaningful alternatives to US News rankings that focus on what’s truly important—namely, student learning and success. The problem has always been that student learning and success are so complex that they can’t be summarized into a brief set of metrics.

But the U.S. Department of Education has opened an intriguing possibility. At the very end of his April 22 letter to accreditors, Undersecretary Ted Mitchell noted that “Accreditors may…develop tiers of recognition, with some institutions or programs denoted as achieving the standards at higher or lower levels than others.” Accreditors thus now have an opportunity to commend publicly those institutions that achieve certain standards at a (clearly defined) “best practice” level. Many standards would not be of high interest to most members of the public, and input-based standards (resources) would only continue to recognize the wealthiest institutions. But commendations for best practices in things like research- and evidence-informed teaching methods and student development programs, serving the public good, and meeting employer needs with well-prepared graduates (documented through assessment evidence and rigorous standards) could turn this around and focus everyone on what’s most important: making sure America’s college students get great educations.

 

Are our assessment processes broken?

Posted on April 25, 2016 at 11:30 AM Comments (6)

Wow…my response to Bob Shireman’s paper on how we assess student learning really touched a nerve. Typically about 50 people view my blog posts, but my response to him got close to 1000 views (yes, there’s no typo there). I’ve received a lot of feedback, some on the ASSESS listserv, some on LinkedIn, some on my blog page, and some in direct e-mails, and I’m grateful for all of it. I want to acknowledge in particular the especially thoughtful responses of David Dirlam, Dave Eubanks, Lion Gardiner, Joan Hawthorne, Jeremy Penn, Ephraim Schechter, Jane Souza, Claudia Stanny, Reuben Ternes, Carl Thompson, and Catherine Wehlburg.


The feedback I received reinforced my views on some major issues with how we now do assessment:


Accreditors don’t clearly define what constitute acceptable assessment practices. Because of the diversity of institutions they accredit, regional accreditors are deliberately flexible. HLC, for example, simply says that assessment processes should be “effective” and “reflect good practice,” while Middle States now simply says that they should be “appropriate.” Most of the regionals offer training on assessment to both institutions and accreditation reviewers, but the training often doesn’t distinguish between best practice and acceptable practice. As a result, I heard stories of institutions getting dinged because, say, their learning outcomes didn’t start with action verbs or their rubrics used fuzzy terms, even though no regional requires that learning outcomes be expressed in a particular format or that rubrics must be used.


And this leads to the next major issues…


We in higher education—including government policymakers—don’t yet have a common vocabulary for assessment. This is understandable—higher ed assessment is still in its infancy, after all, and what makes this fun to me is that we all get to participate in developing that vocabulary. But right now terms such as “student achievement,” “student outcomes,” “learning goal,” and even “quantitative” and “qualitative” mean very different things to different people.


We in the higher ed assessment community have not yet come to consensus on what we consider acceptable, good, and best assessment practices. Some assessment practitioners, for example, think that assessment methods should be validated in the psychometric sense (with evidence of content and construct validity, for example), while others consider assessment to be a form of action research that needs only evidence of consequential validity (are the results of good enough quality to be used to inform significant decisions?). Some assessment practitioners think that faculty should be able to choose to focus on assessment “projects” that they find particularly interesting, while others think that, if you’ve established something as an important learning outcome, you should be finding out whether students have indeed learned it, regardless of whether or not it’s interesting to you.


Is all our assessment work making a difference? Assessment and accreditation share two key purposes: first, to ensure that our students are indeed learning what we want them to learn and, second, to make evidence-informed improvements in what we do, especially in the quality of teaching and learning. Too many of us—institutions and reviewers alike—are focusing too much on how we do assessment and not enough on its impact.


We’re focusing too much on assessment and not enough on teaching and curricula. While virtually all accreditors talk about teaching quality, for example, few expect that faculty use research-informed teaching methods, that institutions actively encourage experimentation with new teaching methods or curriculum designs, or that institutions invest significantly in professional development to help faculty improve their teaching.


What can we do about all of this? I have some ideas, but I’ll save them for my next blog post.

 

A response to Bob Shireman on "inane" SLOs

Posted on April 10, 2016 at 8:55 AM Comments (11)

You may have seen Bob Shireman's essay "SLO Madness" in the April 7 issue of Inside Higher Ed or his report, "The Real Value of What Students Do in College." I sent him the following response today:


I first want to point out that I agree wholeheartedly with a number of your observations and conclusions.


1. As you point out, policy discussions too often “treat the question of quality—the actual teaching and learning—as an afterthought or as a footnote.” The Lumina Foundation and the federal government use the term “student achievement” to discuss only retention, graduation, and job placement rates, while the higher ed community wants to use it to discuss student learning as well.


2. Extensive research has confirmed that student engagement in their learning impacts both learning and persistence. You cite Astin’s 23-year-old study; it has since been validated and refined by research by Vincent Tinto, Patrick Terenzini, Ernest Pascarella, and the staff of the National Survey of Student Engagement, among many others.


3. At many colleges and universities, there’s little incentive for faculty to try to become truly great teachers who engage and inspire their students. Teaching quality is too often judged largely by student evaluations that may have little connection to research-informed teaching practices, and promotion and tenure decisions are too often based more on research productivity than teaching quality. This is because there’s more grant money for research than for teaching improvement. A report from Third Way noted that “For every $100 the federal government spends on university-led research, it spends 24 cents on teaching innovation at universities.”


4. We know through neuroscience research that memorized knowledge is quickly forgotten; thinking skills are the lasting learning of a college education.


5. “Critical thinking” is a nebulous term that, frankly, I’d like to banish from the higher ed lexicon. As you suggest, it’s an umbrella term for an array of thinking skills, including analysis, evaluation, synthesis, information literacy, creative thinking, problem solving, and more.


6. The best evidence of what students have learned is in their coursework—papers, projects, performances, portfolios—rather than what you call “fabricated outcome measures” such as published or standardized tests.


7. You call for accreditors to “validate colleges’ own quality-assurance systems,” which is exactly what they are already doing. Many colleges and universities offer hundreds of programs and thousands of courses; it’s impossible for any accreditation team to review them all. So evaluators often choose a random or representative sample, as you suggest.


8. Our accreditation processes are far from perfect. The decades-old American higher education culture of operating in independent silos and evaluating quality by looking at inputs rather than outcomes has proved to be a remarkably difficult ship to turn around, despite twenty years of earnest effort by accreditors. There are many reasons for this, which I discuss in my book Five Dimensions of Quality, but let me share two here. First, US News & World Report’s rankings are based overwhelmingly on inputs rather than outcomes; there’s a strong correlation with institutional age and wealth. Second, most accreditation evaluators are volunteers, and training resources for them are limited. (Remember that everyone in higher education is trying to keep costs down.)


9. Thus, despite a twenty-year focus by accreditors on requiring useful assessment of learning, there are still plenty of people at colleges and universities who don’t see merit in looking at outcomes meaningfully. They don’t engage in the process until accreditors come calling; they continue to have misconceptions about what they are to do and why; and they focus blindly on trying to give the accreditors whatever they think the accreditors want rather than using assessment as an opportunity to look at teaching and learning usefully. This has led to some of your sad anecdotes about convoluted, meaningless processes. Using Evidence of Student Learning to Improve Higher Education, a book by George Kuh and his colleagues, is full of great ideas on how to turn this culture around and make assessment work truly meaningful and useful to faculty.


10. Your call for reviews of majors and courses is sound and, indeed, a number of regional accreditors and state systems already require academic programs to engage in periodic “program review.” There’s room for improvement, however. Many program reviews follow the old “inputs” model, counting library collections, faculty credentials, lab facilities, and the like and do not yet focus sufficiently on student learning.

 

Your report has some fundamental misperceptions, however. Chief among them is your assertion that the three-step assessment process—declare goals, seek evidence of student achievement of them, and improve instruction based on the results—“hasn’t worked out that way. Not even close.” Today there are faculty and staff at colleges and universities throughout the country who have completed these three steps successfully and meaningfully. Some of these stories are documented in the periodical Assessment Update, some are documented on the website of the National Institute for Learning Outcomes Assessment (www.learningoutcomeassessment.org), some are documented by the staff of the National Survey of Student Engagement, and many more are documented in reports to accreditors.


In dismissing student learning outcomes as “meaningless blurbs” that are the key flaw in this three-step process, you are dismissing what a college education is all about and what we need to verify. Student learning outcomes are simply an attempt to articulate what we most want students to get out of their college education. Contrary to your assertion that “trying to distill the infinitely varied outcomes down to a list… likely undermines the quality of the educational activities,” research has shown that students learn more effectively when they understand course and program learning outcomes.


Furthermore, without a clear understanding of what we most want students to learn, assessment is meaningless. You note that “in college people do gain ‘knowledge’ and they gain ‘skills,’” but are they gaining the right knowledge and skills? Are they acquiring the specific abilities they most need “to function in society and in a workspace,” as you put it? While, as you point out, every student’s higher education experience is unique, there is nonetheless a core of competencies that we should expect of all college graduates and whose achievement we should verify. Employers consistently say that they want to hire college graduates who can:

  • Collaborate and work in teams
  • Articulate ideas clearly and effectively
  • Solve real-world problems
  • Evaluate information and conclusions
  • Be flexible and adapt to change
  • Be creative and innovative
  • Work with people from diverse cultural backgrounds
  • Make ethical judgments
  • Understand numbers and statistics

 

Employers expect colleges and universities to ensure that every student, regardless of his or her unique experience, can do these things at an appropriate level of competency.


You’re absolutely correct that we need to focus on examining student work (and we do), but how should we decide whether the work is excellent or inadequate? For example, everyone wants college graduates to write well, but what exactly are the characteristics of good writing at the senior level? Student learning outcomes, explicated into rubrics (scoring guides) that elucidate the learning outcomes and define excellent, adequate, and unsatisfactory performance levels, are vital to making this determination.


You don’t mention rubrics in your paper, so I can’t tell if you’re familiar with them, but in the last twenty years they have revolutionized American higher education. When student work is evaluated according to clearly articulated criteria, the evaluations are fairer and more consistent. Higher education curriculum and pedagogy experts such as Mary-Ann Winkelmes, Barbara Walvoord, Virginia Anderson, and L. Dee Fink have shown that, when students understand what they are to learn from an assignment (the learning outcomes), when the assignment is designed to help them achieve those outcomes, and when their work is graded according to how well they demonstrate achievement of those outcomes, they learn far more effectively. When faculty collaborate to identify shared learning outcomes that students develop in multiple courses, they develop a more cohesive curriculum that again leads to better learning.


Beyond having clear, integrated learning outcomes, there’s another critical aspect of excellent teaching and learning: if faculty aren’t teaching something, students probably aren’t learning it. This is where curriculum maps come in; they’re a tool to ensure that students do indeed have enough opportunity to achieve a particular outcome. One college that I worked with, for example, identified (and defined) ethical reasoning as an important outcome for all its students, regardless of major. But a curriculum map revealed that very few students took any courses that helped them develop ethical reasoning skills. The faculty changed curricular requirements to correct this and ensure that every student, regardless of major, graduated with the ethical reasoning skills that both they and employers value.
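
If it helps to picture one, a curriculum map is essentially a table of courses by outcomes. Here is a minimal sketch in Python (the courses, outcomes, and codes are invented, not that college's actual map) that flags outcomes that too few required courses address:

    # Hypothetical curriculum map: which required courses address which outcomes.
    # "I" = introduced, "R" = reinforced, "A" = assessed at the target level.
    curriculum_map = {
        "BUS 101": {"Written communication": "I"},
        "BUS 210": {"Ethical reasoning": "I", "Written communication": "R"},
        "BUS 350": {"Quantitative analysis": "R"},
        "BUS 490": {"Written communication": "A", "Quantitative analysis": "A"},
    }

    outcomes = ["Written communication", "Quantitative analysis", "Ethical reasoning"]

    # Flag outcomes that students have too little opportunity to develop or demonstrate.
    for outcome in outcomes:
        levels = [row[outcome] for row in curriculum_map.values() if outcome in row]
        if "A" not in levels or len(levels) < 2:
            print(f"Gap: {outcome} appears in only {len(levels)} course(s), at levels {levels}")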


I appreciate anyone who tries to come up with solutions to the challenges we face, but I must point out that your thoughts on program review may be impractical. External reviews are difficult and expensive. Keep in mind that larger universities may offer hundreds of programs and thousands of courses, and for many programs it can be remarkably hard—and expensive—to find a truly impartial, well-trained external expert.


Similarly, while a number of colleges and universities already subject student work to separate, independent reviews, this can be another difficult, expensive endeavor. With college costs skyrocketing, I question the cost-benefit: are these colleges learning enough from these reviews to make the time, work, and expense worthwhile?


I would add one item to your wish list, by the way: I’d like to see every accreditor require its colleges and universities to expect faculty to use research-informed teaching practices, including engagement strategies, and to evaluate faculty teaching effectiveness on their use of those practices.


But my chief takeaway from your report is not about its shortcomings but how the American higher education community has failed to tell you, other policy thought leaders, and government policy makers what we do and how well we do it. Part of the problem is, because American higher education is so huge and complex, we have a complicated, messy story to tell. None of you has time to do a thorough review of the many books, reports, conferences, and websites that explain what we are trying to do and our effectiveness. We have to figure out a way to tell our very complex story in short, simple ways that busy people can digest quickly.

 


Making a liberal arts degree relevant and employable

Posted on February 10, 2016 at 8:10 AM Comments (0)

One of the reasons I’m a passionate advocate of the liberal arts is that my own undergraduate liberal arts degree has served me so well…but then again, it was an unusual interdisciplinary program. Hopkins coded its liberal arts courses according to area of study: natural sciences courses were coded N, social and behavioral sciences courses were coded S, and humanities courses H. My Quantitative Studies major required a couple of entry-level courses (probability and statistics) plus electives chosen from courses coded Q, with a certain number in the upper division.

 

I had a ball! In addition to math, I took courses in engineering, physics, economics, computer science, and psychology, where I discovered an unexpected passion for educational testing and measurement that led me to graduate study and my work today. At the same time, while Hopkins didn’t offer formal minors, I earned 18 credits in English. 

 

Memories of all this came back to me as I read Matthew Sigelman’s piece in Inside Higher Ed on creating liberal arts programs that combine foundational liberal arts skills such as writing and critical thinking with the entry level technical skills that employers seek. My knowledge of statistical analyses and computer programming got me my first positions. But my writing skills and interdisciplinary studies helped me move out of them, into a career in higher education that has required working with people from all kinds of academic backgrounds, speaking a bit of their language, and applying the concepts I’ve learned to their disciplines. I wouldn’t be where I am today without the combination of technical skills, writing skills, and broad liberal arts foundation that Sigelman advocates.

 

So here’s an idea. Many colleges today label “writing-intensive” courses with a W and require students to take a certain number of them. Why not do something similar with other skills that today’s employers are seeking? Label leadership- and teamwork-intensive courses L, data-intensive courses D, problem-solving-intensive courses P, technology-intensive courses T, analysis-intensive courses A, ethics-intensive courses E, and so on. Develop clear institutional guidelines on how to qualify for each label; some courses might earn multiple labels. Then encourage students in the liberal arts to take courses with whatever labels best fit their career interests—perhaps as an interdisciplinary major, perhaps as a minor, or perhaps as electives in a major or general education.

 

This will only work, of course, if curricula have enough flexibility to allow students to fit these courses in. But that’s a solvable challenge, and I think this is an idea worth considering.

 

Making assessment consequential

Posted on January 25, 2016 at 7:25 AM Comments (0)

Of course as soon as I posted and announced my last blog on helpful assessment resources, I realized I’d omitted two enormous ones: AAC&U, which has become an amazing resource and leader on assessment in general education and the liberal arts, and the National Institute for Learning Outcomes Assessment (NILOA), which has generated and published significant scholarship that is advancing assessment practice. I’ve edited that blog to add these two resources.

 

Last year the folks at NILOA wrote what I consider one of eight essential assessment books: Using Evidence of Student Learning to Improve Higher Education. It’s co-authored by one of the greatest collections of assessment minds on the planet: George Kuh, Stan Ikenberry, Natasha Jankowski, Timothy Cain, Peter Ewell, Pat Hutchings, and Jillian Kinzie. They make a convincing case for rebooting our approach to assessment, moving from what they call a culture of compliance, in which we focus on doing assessment largely to satisfy accreditors, to what they call consequential assessment, the kind that truly impacts student success and institutional performance.


Here’s my favorite line from the book: “Good assessment is not about the amount of information amassed, or about the quality of any particular facts or numbers put forth. Rather, assessment within a culture of evidence is about habits of question asking, reflection, deliberation, planning, and action based on evidence” (p. 46). In other words, the most important kind of validity for student learning assessments is consequential validity.

 

The book presents compelling arguments for making this transformational shift, discusses the challenges in making it, and offers practical, research-informed strategies for overcoming those challenges, based on real examples of good practice. This book turned on so many light bulbs for me! As I noted in my earlier blog on eight essential assessment books, it’s a worthwhile addition to every assessment practitioner’s bookshelf.

 

I’ll be publishing a more thorough review of the book in an upcoming issue of the journal Assessment & Evaluation in Higher Education.