Linda Suskie

  A Common Sense Approach to Assessment in Higher Education



How to assess anything without killing yourself...really!

Posted on May 30, 2017 at 12:10 AM. Comments (34)

I stumbled across a book by Douglas Hubbard titled How to Measure Anything: Finding the Value of “Intangibles” in Business. Yes, I was intrigued, so I splurged on it and devoured it.

The book should really be titled How to Measure Anything Without Killing Yourself because it focuses as much on limiting measurement as on doing it. Here are some of the great ideas I came away with:

1. We are (or should be) assessing because we want to make better decisions than what we would make without assessment results. If assessment results don’t help us make better decisions, they’re a waste of time and money.

2. Decisions are made with some level of uncertainty. Assessment results should reduce uncertainty but won’t eliminate it.

3. One way to judge the quality of assessment results is to think about how confident you are in them by pretending to make a money bet. Are you confident enough in the decision you’re making, based on assessment results, that you’d be willing to make a money bet that the decision is the right one? How much money would you be willing to bet?

4. Don’t try to assess everything. Focus on goals that you really need to assess and on assessments that may lead you to change what you’re doing. In other words, assessments that only confirm the status quo should go on a back burner. (I suggest assessing them every three years or so, just to make sure results aren’t slipping.)

5. Before starting a new assessment, ask how much you already know, how confident you are in what you know, and why you’re confident or not confident. Information you already have on hand, however imperfect, may be good enough. How much do you really need this new assessment?

6. Don’t reinvent the wheel. Almost anything you want to assess has already been assessed by others. Learn from them.

7. You have access to more assessment information than you might think. For fuzzy goals like attitudes and values, ask how you observe the presence or absence of the attitude or value in students and whether it leaves a trail of any kind.

8. If you know almost nothing, almost anything will tell you something. Don’t let anxiety about what could go wrong with assessment keep you from just starting to do some organized assessment.

9. Assessment results have both cost (in time as well as dollars) and value. Compare the two and make sure they’re in appropriate balance.

10. Aim for just enough results. You probably need less data than you think, and an adequate amount of new data is probably more accessible than you first thought. Compare the expected value of perfect assessment results (which are unattainable anyway), imperfect assessment results, and sample assessment results. Is the value of sample results good enough to give you confidence in making decisions?

11. Intangible does not mean immeasurable.

12. Attitudes and values are about human preferences and human choices. Preferences revealed through behaviors are more illuminating than preferences stated through rating scales, interviews, and the like.

13. Dashboards should be at-a-glance summaries. Just like your car’s dashboard, they should be mostly visual indicators such as graphs, not big tables that require study. Every item on the dashboard should be there with specific decisions in mind.

14. Assessment value is perishable. How quickly it perishes depends on how quickly our students, our curricula, and the needs of our students, employers, and region are changing.

15. Something we don’t ask often enough is whether a learning experience was worth the time students, faculty, and staff invested in it. Do students learn enough from a particular assignment or co-curricular experience to make it worth the time they spent on it? Do students learn enough from writing papers that take us 20 hours to grade to make our grading time worthwhile?

A New Paradigm for Assessment

Posted on May 21, 2017 at 6:10 AM. Comments (6)

I was impressed with—and found myself in agreement with—Douglas Roscoe’s analysis of the state of assessment in higher education in “Toward an Improvement Paradigm for Academic Quality” in the Winter 2017 issue of Liberal Education. Like Douglas, I think the assessment movement has lost its way, and it’s time for a new paradigm. And Douglas’s improvement paradigm—which focuses on creating spaces for conversations on improving teaching and curricula, making assessment more purposeful and useful, and bringing other important information and ideas into the conversation—makes sense. Much of what he proposes is in fact echoed in Using Evidence of Student Learning to Improve Higher Education by George Kuh, Stanley Ikenberry, Natasha Jankowski, Timothy Cain, Peter Ewell, Pat Hutchings, and Jillian Kinzie.

But I don’t think his improvement paradigm goes far enough, so I propose a second, concurrent paradigm shift.

I’ve always felt that the assessment movement tried to do too much, too quickly. The assessment movement emerged from three concurrent forces. One was the U.S. federal government, which through a series of Higher Education Acts required Title IV gatekeeper accreditors to require the institutions they accredit to demonstrate that they were achieving their missions. Because the fundamental mission of an institution of higher education is, well, education, this was essentially a requirement that institutions demonstrate that their intended student learning outcomes were being achieved by their students.

The Higher Education Acts also required Title IV gatekeeper accreditors to require the institutions they accredit to demonstrate “success with respect to student achievement in relation to the institution’s mission, including, as appropriate, consideration of course completion, state licensing examinations, and job placement rates” (1998 Amendments to the Higher Education Act of 1965, Title IV, Part H, Sect. 492(b)(4)(E)). The examples in this statement imply that the federal government defines student achievement as a combination of student learning, course and degree completion, and job placement.

A second concurrent force was the movement from a teaching-centered to learning-centered approach to higher education, encapsulated in Robert Barr and John Tagg’s 1995 landmark article in Change, “From Teaching to Learning: A New Paradigm for Undergraduate Education.” The learning-centered paradigm advocates, among other things, making undergraduate education an integrated learning experience—more than a collection of courses—that focuses on the development of lasting, transferrable thinking skills rather than just basic conceptual understanding.

The third concurrent force was the growing body of research on practices that help students learn, persist, and succeed in higher education. Among these practices: students learn more effectively when they integrate and see coherence in their learning, when they participate in out-of-class activities that build on what they’re learning in the classroom, and when new learning is connected to prior experiences.

These three forces led to calls for a lot of concurrent, dramatic changes in U.S. higher education:

  • Defining quality by impact rather than effort—outcomes rather than processes and intent
  • Looking on undergraduate majors and general education curricula as integrated learning experiences rather than collections of courses
  • Adopting new research-informed teaching methods that are a 180-degree shift from lectures
  • Developing curricula, learning activities, and assessments that focus explicitly on important learning outcomes
  • Identifying learning outcomes not just for courses but for entire programs, general education curricula, and even across entire institutions
  • Framing what we used to call extracurricular activities as co-curricular activities, connected purposefully to academic programs
  • Using rubrics rather than multiple choice tests to evaluate student learning
  • Working collaboratively, including across disciplinary and organizational lines, rather than independently

These are well-founded and important aims, but they are all things that many in higher education had never considered before. Now everyone was being asked to accept the need for all these changes, learn how to make these changes, and implement all these changes—and all at the same time. No wonder there’s been so much foot-dragging on assessment! And no wonder that, a generation into the assessment movement and unrelenting accreditation pressure, there are still great swaths of the higher education community who have not yet done much of this and who indeed remain oblivious to much of this.

What particularly troubles me is that we’ve spent too much time and effort on trying to create—and assess—integrated, coherent student learning experiences and, in doing so, left the grading process in the dust. Requiring everything to be part of an integrated, coherent learning experience can lead to pushing square pegs into round holes. Consider:

  • The transfer associate degrees offered by many community colleges, for example, aren’t really programs—they’re a collection of general education and cognate requirements that students complete so they’re prepared to start a major after they transfer. So identifying—or assessing—program learning outcomes for them frankly doesn’t make much sense.
  • The courses available to fulfill some general education requirements don’t really have much in common, so their shared general education outcomes become so broad as to be almost meaningless.
  • Some large universities are divided into separate colleges and schools, each with their own distinct missions and learning outcomes. Forcing these universities to identify institutional learning outcomes applicable to every program makes no sense—again, the outcomes must be so broad as to be almost meaningless.
  • The growing numbers of students who swirl through multiple colleges before earning a degree aren’t going to have a really integrated, coherent learning experience no matter how hard any of us tries.

At the same time, we have given short shrift to helping faculty learn how to develop and use good assessments in their own classes and how to use grading information to understand and improve their own teaching. In the hundreds of workshops and presentations I’ve done across the country, I often ask for a show of hands from faculty who routinely count how many students earned each score on each rubric criterion of a class assignment, so they can understand what students learned well and what they didn’t learn well. Invariably a tiny proportion raises their hands. When I work with faculty who use multiple choice tests, I ask how many use a test blueprint to plan their tests so they align with key course objectives, and it’s consistently a foreign concept to them.
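The test-blueprint idea mentioned above can be sketched in a few lines of code: decide up front how many items should address each course objective, then check a draft test against that plan. This is only an illustrative sketch; the objectives and item counts below are invented, and real blueprints often also specify cognitive level (recall vs. application).

```python
from collections import Counter

# Hypothetical blueprint: objective -> planned number of test items.
blueprint = {"Define key terms": 5, "Interpret data": 8, "Apply concepts": 7}

def check_blueprint(item_objectives, blueprint):
    """Compare a draft test against the blueprint.

    item_objectives: a list with one entry per draft test item, naming the
    course objective that item addresses.
    Returns a dict of objectives where the draft misses the plan,
    mapped to (planned, actual) counts. Empty dict = draft matches plan.
    """
    actual = Counter(item_objectives)
    gaps = {}
    for objective, planned in blueprint.items():
        if actual.get(objective, 0) != planned:
            gaps[objective] = (planned, actual.get(objective, 0))
    return gaps
```

A draft made up of twenty recall items, for example, would come back with gaps on "Interpret data" and "Apply concepts", making the misalignment with the course objectives visible before the test is ever given.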

In short, we’ve left a vital part of the higher education experience—the grading process—in the dust. We invest more time in calibrating rubrics for assessing institutional learning outcomes, for example, than we do in calibrating grades. And grades have far more serious consequences to our students, employers, and society than assessments of program, general education, co-curricular, or institutional learning outcomes. Grades decide whether students progress to the next course in a sequence, whether they can transfer to another college, whether they graduate, whether they can pursue a more advanced degree, and in some cases whether they can find employment in their discipline.

So where should we go? My paradigm springs from visits to two Canadian institutions a few years ago. At that time Canadian quality assurance agencies did not have any requirements for assessing student learning, so my workshops focused solely on assessing learning more effectively in the classroom. The workshops were well received because they offered very practical help that faculty wanted and needed. And at the end of the workshops, faculty began suggesting that perhaps they should collaborate to talk about shared learning outcomes and how to teach and assess them. In other words, discussion of classroom learning outcomes began to flow into discussion of program learning outcomes. It’s a naturalistic approach that I wish we in the United States had adopted decades ago.

What I now propose is moving to a focus on applying everything we’ve learned about curriculum design and assessment to the grading process in the classroom. In other words, my paradigm agrees with Roscoe’s that “assessment should be about changing what happens in the classroom—what students actually experience as they progress through their courses—so that learning is deeper and more consequential.” My paradigm emphasizes the following.

  1. Assessing program, general education, and institutional learning outcomes remains an assessment best practice. Those who have found value in these assessments would be encouraged to continue to engage in them and honored through mechanisms such as NILOA’s Excellence in Assessment designation.
  2. Teaching excellence is defined in significant part by four criteria: (1) the use of research-informed teaching and curricular strategies, (2) the alignment of learning activities and grading criteria to stated course objectives, (3) the use of good quality evidence, including but not limited to assessment results from the grading process, to inform changes to one’s teaching, and (4) active participation in and application of professional development opportunities on teaching including assessment.
  3. Investments in professional development on research-informed teaching practices exceed investments in assessment.
  4. Assessment work is coordinated and supported by faculty professional development centers (teaching-learning centers) rather than offices of institutional effectiveness or accreditation, sending a powerful message that assessment is about improving teaching and learning, not fulfilling an external mandate.
  5. We aim to move from a paradigm of assessment, not just to one of improvement as Roscoe proposes, but to one of evidence-informed improvement—a culture in which the use of good quality evidence to inform discussions and decisions is expected and valued.
  6. If assessment is done well, it’s a natural part of the teaching-learning process, not a burdensome add-on responsibility. The extra work is in reporting it to accreditors. This extra work can’t be eliminated, but it can be minimized and made more meaningful by establishing the expectation that reports address only key learning outcomes in key courses (including program capstones), on a rotating schedule, and that course assessments are aggregated and analyzed within the program review process.

Under this paradigm, I think we have a much better shot at achieving what’s most important: giving every student the best possible education.

How hard should a multiple choice test be?

Posted on March 18, 2017 at 8:25 AM. Comments (27)

My last blog post on analyzing multiple choice test results generated a good bit of feedback, mostly on the ASSESS listserv. Joan Hawthorne and a couple of other colleagues thoughtfully challenged my “50% rule”—that any question that more than 50% of your students get wrong may indicate a problem and should be reviewed carefully.


Joan pointed out that my 50% rule shouldn’t be used with tests that are so important that students should earn close to 100%. She’s absolutely right. Some things we teach—healthcare, safety—are so important that if students don’t learn them well, people could die. If you’re teaching and assessing must-know skills and concepts, you might want to look twice at any test items that more than 10% or 15% of students got wrong.


With other tests, how hard the test should be depends on its purpose. I was taught in grad school that the purpose of some tests is to separate the top students from the bottom—distinguish which students should earn an A, B, C, D, or F. If you want to maximize the spread of test scores, an average item difficulty of 50% is your best bet—in theory, you should get test scores ranging all the way from 0 to 100%. If you want each test item to do the best possible job discriminating between top and bottom students, again you’d want to aim for a 50% difficulty.


But in the real world I’ve never seen a good test with an overall 50% difficulty for several good reasons.


1. Difficult test questions are incredibly hard to write. Most college students want to get a good grade and will at least try to study for your test. It’s very hard to come up with a test question that assesses an important objective but that half of them will get wrong. Most difficult items I’ve seen are on minutiae, are “trick” questions on some nuanced point, or are more tests of logical reasoning skill than of course learning objectives. In my whole life I’ve written maybe two or three difficult multiple choice questions that I’ve been proud of: questions that truly focused on important learning outcomes and didn’t require a careful nuanced reading or logical reasoning skills. In my consulting work, I’ve seen no more than half a dozen difficult but effective items written by others. This experience has led me to suggest that “50% rule.”


2. Difficult tests are demoralizing to students, even if you “curve” the scores and even if they know in advance that the test will be difficult.


3. Difficult tests are rarely appropriate, because it’s rare for the sole or major purpose of a test to be to maximize the spread of scores. Many tests have dual purposes. There are certain fundamental learning objectives we want to make sure (almost) every student has learned, or they’re going to run into problems later on. Then there are some learning objectives that are more challenging—that only the A or maybe B students will achieve—and those test items will separate the A from B students and so on.


So, while I have great respect for those who disagree with me, I stand by my suggestion in my last blog post. Compare each item’s actual difficulty (the percent of students who answered incorrectly) against how difficult you wanted that item to be, and carefully evaluate any items that more than 50% of your students got wrong.
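The comparison I'm suggesting is easy to automate. Here is a minimal sketch, following this post's definition of difficulty as the percent of students who answered incorrectly. The data format (one list of answers per student), the intended-difficulty targets, and the 0.15 tolerance are all assumptions for illustration, not part of any standard report.

```python
def item_difficulty(responses, answer_key):
    """Fraction of students answering each item incorrectly.

    responses: one list of answers per student, in item order.
    answer_key: the correct answer for each item.
    """
    n_students = len(responses)
    return [
        sum(1 for r in responses if r[i] != correct) / n_students
        for i, correct in enumerate(answer_key)
    ]

def flag_items(difficulties, intended, threshold=0.50, tolerance=0.15):
    """Apply the 50% rule and the actual-vs-intended comparison.

    Flags any item where more than `threshold` of students answered wrong,
    or where actual difficulty exceeds the intended difficulty by more than
    `tolerance` (an arbitrary cushion chosen for this sketch).
    Returns (item index, actual, intended) tuples for review.
    """
    return [
        (i, actual, target)
        for i, (actual, target) in enumerate(zip(difficulties, intended))
        if actual > threshold or actual > target + tolerance
    ]
```

For must-know material, where the post above suggests looking twice at items that more than 10% or 15% of students miss, you would simply call `flag_items` with a much lower `threshold`.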

What to look for in multiple choice test reports

Posted on February 28, 2017 at 8:15 AM. Comments (2)

Next month I’m doing a faculty professional development workshop on interpreting the reports generated for multiple choice tests. Whenever I do one of these workshops, I ask the sponsoring institution to send me some sample reports. I’m always struck by how user-unfriendly they are!


The most important thing to look at in a test report is the difficulty of each item—the percent of students who answered each item correctly. Fortunately these numbers are usually easy to find. The main thing to think about is whether each item was as hard as you intended it to be. Most tests have some items on essential course objectives that every student who passes the course should know or be able to do. We want virtually every student to answer those items correctly, so check those items and see if most students did indeed get them right.


Then take a hard look at any test items that a lot of students got wrong. Many tests purposefully include a few very challenging items, requiring students to, say, synthesize their learning and apply it to a new problem they haven’t seen in class. These are the items that separate the A students from the B and C students. If these are the items that a lot of students got wrong, great! But take a hard look at any other questions that a lot of students got wrong. My personal benchmark is what I call the 50 percent rule: if more than half my students get a question wrong, I give the question a hard look.


Now comes the hard part: figuring out why more students got a question wrong than we expected. There are several possible reasons including the following:


  • The question or one or more of its options is worded poorly, and students misinterpret them.
  • We might have taught the question’s learning outcome poorly, so students didn’t learn it well. Perhaps students didn’t get enough opportunities, through classwork or homework, to practice the outcome.
  • The question might be on a trivial point that few students took the time to learn, rather than a key course learning outcome. (I recently saw a question on an economics test that asked how many U.S. jobs were added in the last quarter. Good heavens, why do students need to memorize that? Is that the kind of lasting learning we want our students to take with them?)



If you’re not sure why students did poorly on a particular test question, ask them! Trust me, they’ll be happy to tell you what you did wrong!


Test reports provide two other kinds of information: the discrimination of each item and how many students chose each option. These are the parts that are usually user-unfriendly and, frankly, can take more time to decipher than they’re worth.


The only thing I’d look for here is any items with negative discrimination. The underlying theory of item discrimination is that students who get an A on your test should be more likely to get any one question right than students who fail it. In other words, each test item should discriminate between top and bottom students. Imagine a test question that all your A students get wrong but all your failing students answer correctly. That’s an item with negative discrimination. Obviously there’s something wrong with the question’s wording—your A students interpreted it incorrectly—and it should be thrown out. Fortunately, items with negative discrimination are relatively rare and usually easy to identify in the report.
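For readers who want to see what is behind the discrimination number in a report, here is a sketch of one common way it is computed: the upper-minus-lower index, which compares how a high-scoring group and a low-scoring group did on each item. The 27% group size is a common convention, and the data format here is hypothetical; commercial reports may use a point-biserial correlation instead, but the sign means the same thing.

```python
def discrimination_index(scores_by_item, totals, group_frac=0.27):
    """Upper-minus-lower discrimination index for each item.

    scores_by_item: for each item, a list of 0/1 correctness per student.
    totals: each student's total test score, in the same student order.
    Returns, per item, (proportion correct in the top group) minus
    (proportion correct in the bottom group). Negative values mean the
    weakest students outperformed the strongest on that item: review it.
    """
    n = len(totals)
    k = max(1, int(n * group_frac))  # size of the top and bottom groups
    order = sorted(range(n), key=lambda s: totals[s])  # ascending by total
    bottom, top = order[:k], order[-k:]
    return [
        sum(item[s] for s in top) / k - sum(item[s] for s in bottom) / k
        for item in scores_by_item
    ]
```

An item that all the top students get right and all the bottom students get wrong scores +1.0; the pathological item described above, wrong for the A students and right for the failing students, scores -1.0.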

What does a new CAO survey tell us about the state of assessment?

Posted on January 26, 2017 at 8:40 AM. Comments (6)

A new survey of chief academic officers (CAOs) conducted by Gallup and Inside Higher Ed led me to the sobering conclusion that, after a generation of work on assessment, we in U.S. higher education remain very, very far from pervasively conducting truly meaningful and worthwhile assessment.

Because we've been working on this so long, as I reviewed the results of this survey, I was deliberately tough. The survey asked CAOs to rate the effectiveness of their institutions on a variety of criteria using a scale of very effective, somewhat effective, not too effective, and not effective at all. The survey also asked CAOs to indicate their agreement with a variety of statements on a five-point scale, where 5 = strongly agree, 1 = strongly disagree, and the other points are undefined. At this point I would have liked to see most CAOs rate their institutions at the top of the scale: either “very effective” or “strongly agree.” So these are the results I focused on and, boy, are they depressing.

Quality of Assessment Work

Less than a third (30%) of CAOs say their institution is very effective in identifying and assessing student outcomes. ‘Nuff said on that! :(

Value of Assessment Work

Here the numbers are really dismal. Less than 10% (yes, ten percent, folks!) of CAOs strongly agree that:

  • Faculty members value assessment efforts at their college (4%).
  • The growth of assessment systems has improved the quality of teaching and learning at their college (7%).
  • Assessment has led to better use of technology in teaching and learning (6%). (Parenthetically, that struck me as an odd survey question; I had no idea that one of the purposes of assessment was to improve the use of technology in T&L!)

And just 12% strongly disagree that their college’s use of assessment is more about keeping accreditors and politicians happy than it is about teaching and learning.


And only 6% of CAOs strongly disagree that faculty at their college view assessment as requiring a lot of work on their parts. Here I’m reading something into the question that might not be there. If the survey asked if faculty view teaching as requiring a lot of work on their parts, I suspect that a much higher proportion of CAOs would disagree because, while teaching does require a lot of work, it’s what faculty generally find to be valuable work--it's what they are expected to do, after all. So I suspect that, if faculty saw value in their assessment work commensurate with the time they put into it, this number would be a lot higher.


Using Evidence to Inform Decisions

Here’s a conundrum:

  • Over two thirds (71%) of CAOs say their college makes effective use of data used to measure student outcomes,
  • But only about a quarter (26%) said their college is very effective in using data to aid and inform decision making.
  • And only 13% strongly agree that their college regularly makes changes in the curriculum, teaching practices, or student services based on what it finds through assessment.

 So I’m wondering what CAOs consider effective uses of assessment data!


  • About two thirds (67%) of CAOs say their college is very effective in providing a quality undergraduate education.
  • But less than half (48%) say it’s very effective in preparing students for the world of work,
  • And only about a quarter (27%) say it’s very effective in preparing students to be engaged citizens.
  • And (as I've already noted) only 30% say it’s very effective in identifying and assessing student outcomes.

How can CAOs who admit their colleges are not very effective in preparing students for work or citizenship engagement or assessing student learning nonetheless think their college is very effective in providing a quality undergraduate education? What evidence are they using to draw that conclusion?


  • While less than half of CAOs say their colleges are very effective in preparing students for work,
  • Only about a third (32%) strongly agree that their institution is increasing attention to the ability of its degree programs to help students get a good job.

My Conclusions

After a quarter century of work to get everyone to do assessment well:

  • Assessment remains spotty; it is the very rare institution that is doing assessment pervasively and consistently well.
  • A lot of assessment work either isn’t very useful or takes more time than it’s worth.
  • We have not yet transformed American higher education into an enterprise that habitually uses evidence to inform decisions.

What are the characteristics of effective curricula?

Posted on January 6, 2017 at 8:20 PM. Comments (9)

I'm working on a book chapter on curriculum design, and I've come up with eight characteristics of effective curricula, whether for a course, program, general education, or co-curricular experience:

• They treat a learning goal as a promise.

• They are responsive to the needs of students, employers, and society.

• They are greater than the sum of their parts.

• They give students ample and diverse opportunities to achieve key learning goals.

• They have appropriate, progressive rigor.

• They conclude with an integrative, synthesizing capstone experience.

• They are focused and simple.

• They use research-informed strategies to help students learn and succeed, including high-impact practices.


What do you think? Do these make sense? Have I missed anything? Do the curricula you work with have these characteristics?

Making a habit of using classroom assessment information to inform our own teaching

Posted on December 20, 2016 at 10:50 AM. Comments (2)

Given my passion for assessment, you might not be surprised to learn that, whenever I teach, the most fun part for me is analyzing how my students have done on the tests and assignments I’ve given them. Once tests or papers are graded, I can’t wait to count up how many students got each test question right or how many earned each possible score on each rubric criterion. When I teach workshops, I rely heavily on minute papers, and I can’t wait to type up all the comments and do a qualitative analysis of them. I love to teach, and I really want to be as good a teacher as I can. And, for me, an analysis of what students have and haven’t learned is the best possible feedback on how well I’m teaching, much more meaningful and useful than student evaluations of teaching.


I always celebrate the test questions or rubric criteria that all my students did well on. I make a point of telling the class and, no matter how jaded they are, you should see their faces light up!


And I always reflect on the test questions or rubric criteria for which my students did poorly. Often I can figure out on my own what happened. Often it’s simply a poorly written question or assignment, but sometimes I have to admit to myself that I didn’t teach that concept or skill particularly well. If I can’t figure out what happened, I ask the class and, trust me, they’re happy to tell me how I screwed up! If it’s a really vital concept or skill and we’re not at the end of the course, I’ll often tell them, “I screwed up, but I can’t let you out of here not knowing how to do this. We’re going to go over it again, you’re going to get more homework on it, and you’ll submit another assignment (or have more test questions) on this.” If it's the end of the course, I make notes to myself on what I'll do differently next time.


I often share this story at the faculty workshops I facilitate. I then ask for a show of hands of how many participants do this kind of analysis in their own classes. The number of hands raised varies—sometimes there will be maybe half a dozen hands in a room of 80, sometimes more—but rarely do more than a third or half of those present raise their hands. This is a real issue, because if faculty aren’t in the habit of analyzing and reflecting on assessment results in their own classes, how can we expect them to do so collaboratively on broader learning outcomes? In short, it’s a troubling sign that the institutional community is not yet in the habit of using systematic evidence to understand and improve student learning, which is what all accreditors want.


Here, then, is my suggestion for a New Year’s resolution for all of you who teach or in any way help students learn: Start doing this! You don’t have to do this for every assignment in every course you teach, but pick at least one key test or assignment in one course whose scores aren’t where you’d like them. Your analysis and reflection on that one test or assignment will lead you into the habit of using the assessment evidence in front of you more regularly, and it will make you an even better teacher than you are today.

Lessons from the Election for Higher Education

Posted on November 28, 2016 at 7:25 AM. Comments (1)

If you share my devastation at the results of the U.S. presidential election and its implications for our country and our world, and if you are struggling to understand what has happened and wondering what you can do as a member of the higher education community, this blog post is for you. I don’t have answers, of course, but I have some ideas.


Why did Trump get so many votes? While the reasons are complex, and people will be debating them for years, there seem to be two fundamental factors. One can be summed up in that famous line from Bill Clinton’s campaign: It’s the economy, stupid. Jed Kolko found that people who voted for Trump were more likely to feel under economic threat, worried about the future of their jobs.


The other reason is education. Nate Silver has tweeted that Clinton won all 18 states where an above-average share of the population has advanced degrees, but she lost 29 of the other 32. Education and salary are highly correlated, but Nate Silver has found signs that education appears to be a stronger predictor of who voted for Trump than salary.


Why is education such a strong predictor of how people voted? Here’s where we need more research, but I’m comfortable speculating that reasons might include any of the following:

  • People without a college education have relatively few prospects for economic security. In my book Five Dimensions of Quality I noted that the Council of Foreign Relations found that, “going back to the 1970s, all net job growth has been in jobs that require at least a bachelor’s degree.” I also noted a statistic from Anthony Carnevale and his colleagues: “By 2020, 65 percent of all jobs will require postsecondary education and training, up from 28 percent in 1973.”
  • Colleges do help students learn to think critically: to distinguish credible evidence from what I call “incredible” evidence, to weigh evidence carefully when making difficult decisions, and to make decisions based more on good quality evidence than on emotional response.
  • College-educated citizens are more likely to have attended good schools from kindergarten on, learning to think critically not just in college but throughout their schooling.
  • College-educated citizens are more optimistic because their liberal arts studies give them the open-mindedness and flexibility to handle changing times, including changing careers.

We do have a tremendous divide in this country—an education divide—and it is growing. While college degree holders have always earned more than those without a college degree, the income disparity has grown; college graduates now earn 80% more than high school graduates, up from 40% in the 1970s.


If we want a country that offers economic security, whose citizens feel a sense of optimism, whose citizens make evidence-informed decisions, and whose citizens are prepared for changes in their country and their lives, we need to work on closing the education divide by helping as many people as possible get a great postsecondary education.


What can we do?

  1. Welcome the underprepared. They are the students who really need our help in obtaining not only economic security but the thinking skills that are the hallmark of a college education and a sense of optimism about their future. The future of our country is in their hands.
  2. Make every student want to come back, as Ken O’Donnell has said, until they complete their degree or credential. Every student we lose hurts his or her economic future and our country.
  3. Actively encourage what Ernest Boyer called the scholarship of application: using research to solve real-life problems such as regional social and economic issues.
  4. Partner with local school systems and governments to improve local grade schools. Many regions of the country need new businesses, but new businesses usually want to locate in communities with good schools for their employees and their families.
  5. Create more opportunities for students to listen to and learn from others with different backgrounds and perspectives. Many colleges seek to attract international students and encourage students to study abroad. I’d like to go further. Do we encourage our international students to share their backgrounds and experiences with our American students, both in relevant classes and in co-curricular settings? Do we encourage returning study abroad students to share what they learned with their peers? Do we encourage our students to consider not only a semester abroad but a semester at another U. S. college in a different part of the country?
  6. Create more opportunities for students to learn about the value of courtesy, civility, respect, compassion, and kindness and how to practice these in their lives and careers.

Lessons from the Election for Assessment

Posted on November 21, 2016 at 2:45 PM Comments comments (0)

The results of the U.S. presidential election have lessons both for American higher education and for assessment. Here are the lessons I see for meaningful assessment; I’ll tackle implications for American higher education in my next blog post.


Lesson #1: Surveys are a difficult way to collect meaningful information in the 21st century. If your assessment plan includes telephone or online surveys of students, alumni, employers, or anyone else, know going in that it’s very hard to get a meaningful, representative sample.


A generation ago (when I wrote a monograph Questionnaire Survey Research: What Works for the Association of Institutional Research), most people had land line phones with listed numbers and without caller ID or voice mail. So it was easy to find their phone number, and they usually picked up the phone when it rang. Today many people don’t have land line phones; they have cell phones with unlisted numbers and caller ID. If the number calling is unfamiliar to them, they let the call go straight to voice mail. Online surveys have similar challenges, partly because databases of e-mail addresses aren’t as readily available as phone books and partly because browsing habits affect the validity of pop-up polls such as those conducted by Survey Monkey. And all survey formats are struggling with survey fatigue (how many surveys have you been asked to complete in the last month?).


Professional pollsters have ways of adjusting for all these factors, but those strategies are difficult and expensive and often beyond our capabilities.


Lesson #2: Small sample sizes may not yield meaningful evidence. Because of Lesson #1, many of the political polls we saw were based on only a few hundred respondents. A sample of 250 has an error margin of 6% (meaning that if, for example, you find that 82% of the student work you assessed meets your standard, the true percentage is probably somewhere between 76% and 88%). A sample of 200 has an error margin of 7%. And these error margins assume that the samples of student work you’re looking at are truly representative of all student work. Bottom line: We need to look at a lot of student work, from a broad variety of classes, in order to draw meaningful conclusions.
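Those error margins come from the standard formula for a proportion drawn from a simple random sample. Here's a quick sketch (an illustration I've added, using the usual worst case of a 50/50 split and a 95% confidence level):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a
    simple random sample of size n (worst case: p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (250, 200):
    print(f"Sample of {n}: +/- {margin_of_error(n):.0%}")
```

Notice how slowly the margin shrinks: you have to quadruple the sample just to cut the error in half, which is another reason to favor a broad, representative sample of student work over a marginally bigger one.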


Lesson #3: Small differences aren’t meaningful. I was struck by how many reporters and pundits talked about Clinton having, say, a 1% or 2% point lead without mentioning that the error margin made these leads too close to call. I know everyone likes to have a single number—it’s easiest to grasp—but I wish we could move to the practice of reporting ranges of likely results, preferably in graphs that show overlaps and convey visually when differences aren’t really significant. That would help audiences understand, for example, whether students’ critical thinking skills really are worse than their written communication skills, or whether their information literacy skills really are better than those of their peers.


Lesson #4: Meaningful results are in the details. Clinton won the popular vote by well over a million votes but still lost enough states to lose the Electoral College. Similarly, while students at our college may be doing well overall in terms of their analytic reasoning skills, we should be concerned if students in a particular program or cohort aren’t doing that well. Most colleges and universities are so diverse in terms of their offerings and the students they serve that I’m not sure overall institution-wide results are all that helpful; the overall results can mask a great deal of important variation.


Lesson #5: We see what we want to see. With Clinton the odds-on favorite to win the race, it was easy to see Trump’s chances of winning (anywhere from 10-30%, depending on the analysis) as insignificant, when in fact these probabilities meant he had a realistic chance of winning. Just as it was important to take a balanced view of poll results, it’s important to bring a balanced view to our assessment results. Usually our assessment results are a mixed bag, with both reasons to cheer and reasons to reflect and try to improve. We need to make sure we see—and share—both the successes and the areas for concern.

Using Professional Development Funds Wisely

Posted on November 9, 2016 at 5:45 AM Comments comments (0)


I recently suggested to a college that it invest its professional development funds in helping faculty learn more about how to teach and assess. The response? We already do plenty—we give every faculty member funds to use however they like on professional development.


The problem with this approach is that there can be a difference between what people want to do and what they should do. If you gave me funds for my own personal growth and development, I’d probably use it to visit some fine restaurants instead of the gym membership that I really should get. If you gave me funds for professional development, I’d probably use it to go to a research conference in a nice location rather than organize a visit with a team of my colleagues to a college that’s doing a great job teaching and assessing writing in our discipline.


One of the themes of my book Five Dimensions of Quality is “put your money where your mouth is.” Does your college really do this when it comes to professional development?


  • Does your college focus its professional development resources on the things your college says it’s focusing on? For example, if one of your college’s strategic goals is to be student-centered, do you focus professional development funds on helping faculty and staff learn what it means to be student-centered and how to incorporate student-centered practices into their teaching and other responsibilities?
  • Does your college give priority to funding sabbatical leave requests that, again, address your college’s top priorities? If your college’s mission or strategic goals include teaching excellence, for example, do you give high priority to sabbatical leaves that address the scholarship of teaching?
  • Does your college prioritize travel funding for conferences and events that will help faculty and staff develop the knowledge and skills to address your college’s top priorities, such as student success?
  • Does your college prioritize sabbatical and travel funding for requests that include plans to disseminate what’s been learned to colleagues across your college?
  • Does your teaching-learning center use systematic evidence of what faculty and student development staff most need to learn when it plans its professional development offerings? For example, if assessments show that students across a variety of disciplines struggle to cite sources, does the TLC collaborate with librarians to offer programming on how to teach students to cite sources?
  • Does your assessment committee periodically review department assessment reports to identify what faculty and staff are doing well with assessment and what remains a struggle? Does it publicize successes, such as useful rubrics and prompts, to help others learn what good practices look like? Does it sponsor or recommend professional development to help faculty and staff with whatever aspects of assessment are most challenging?

An example of closing the loop...and ideas for doing it well

Posted on September 24, 2016 at 7:35 AM Comments comments (6)

I was intrigued by an article in the September 23, 2016, issue of Inside Higher Ed titled “When a C Isn’t Good Enough.” The University of Arizona found that students who earned an A or B in their first-year writing classes had a 67% chance of graduating, but those earning a C had only a 48% chance. The university is now exploring a variety of ways to improve the success of students earning a C, including requiring C students to take a writing competency test, providing resources to C students, and/or requiring C students to repeat the course.


I know nothing about the University of Arizona beyond what’s in the article. But if I were working with the folks there, I’d offer the following ideas to them, if they haven’t considered them already.


1. I’d like to see more information on why the C students earned a C. Which writing skills did they struggle most with: basic grammar, sentence structure, organization, supporting arguments with evidence, etc.? Or was there another problem? For example, maybe C students were more likely to hand in assignments late (or not at all).


2. I’d also like to see more research on why those C students were less likely to graduate. How did their GPAs compare to A and B students? If their grades were worse, what kinds of courses seemed to be the biggest challenge for them? Within those courses, what kinds of assignments were hardest for them? Why did they earn a poor grade on them? What writing skills did they struggle most with: basic grammar, organization, supporting arguments with evidence, etc.? Or, again, maybe there was another problem, such as poor self-discipline in getting work handed in on time.


And if their GPAs were not that different from those of A and B students (or even if they were), what else was going on that might have led them to leave? The problem might not be their writing skills per se. Perhaps, for example, students with work or family obligations found it harder to devote the study time necessary to get good grades. Providing support for that issue might help more than helping them with their writing skills.


3. I’d also like to see the faculty responsible for first-year writing articulate a clear, appropriate, and appropriately rigorous standard for earning a C. In other words, they could use the above information on the kinds and levels of writing skills that students need to succeed in subsequent courses to articulate the minimum performance levels required to earn a C. (When I taught first-year writing at a public university in Maryland, the state system had just such a statement, the “Maryland C Standard.”)


4. I’d like to see the faculty adopt a policy that, in order to pass first-year writing, students must meet the minimum standard on every writing criterion. Thus, if student work is graded using a rubric, the grade isn’t determined by averaging the scores on the rubric’s criteria—averaging would let a student with A arguments but F grammar pass with a C. Instead, students must earn at least a C on every rubric criterion in order to pass the assignment. Then the As, Bs, and Cs can be averaged into an overall grade for the assignment.


(If this sounds vaguely familiar to you, what I’m suggesting is the essence of competency-based education: students need to demonstrate competence on all learning goals and objectives in order to pass a course or graduate. Failure to achieve one goal or objective can’t be offset by strong performance on another.)
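The all-criteria passing rule can be stated very compactly. Here's a minimal sketch, with hypothetical rubric criteria and letter grades (not any particular university's rubric):

```python
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

def passes_assignment(criterion_grades, minimum="C"):
    """Pass only if every rubric criterion meets the minimum standard;
    a strong score on one criterion can't offset a failing score on another."""
    floor = GRADE_POINTS[minimum]
    return all(GRADE_POINTS[g] >= floor for g in criterion_grades.values())

# A arguments, B organization, F grammar: averaging would yield roughly a C,
# but the all-criteria rule fails the assignment.
print(passes_assignment({"arguments": "A", "organization": "B", "grammar": "F"}))  # False
```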


5. If they haven’t done so already, I’d also like to see the faculty responsible for first-year writing adopt a common rubric, articulating the criteria they’ve identified, that would be used to assess and grade the final assignment in every section, no matter who teaches it. This would make it easy to study student performance across all sections of the course and identify pervasive strengths and weaknesses in their writing. If some faculty members or TAs have additional grading criteria, they could simply add those to the common rubric. For example, I graded my students on their use of citation conventions, even though that was not part of the Maryland C Standard. I added that to the bottom of my rubric.


6. Because work habits are essential to success in college, I’d also suggest making this a separate learning outcome for first-year writing courses. This means grading students separately on whether they turn in work on time, put in sufficient effort, etc. This would help everyone understand why some students fail to graduate—is it because of poor writing skills, poor work habits, or both?


These ideas all move responsibility for addressing the problem from administrators to the faculty. That responsibility can’t be fulfilled unless the faculty commit to collaborating on identifying and implementing a shared strategy so that every student, no matter which section of writing they enroll in, passes the course with the skills needed for subsequent success.

Helping Students Evaluate the Credibility of Sources

Posted on September 16, 2016 at 6:05 AM Comments comments (0)

Like many Americans, I have been appalled by this year’s presidential election train wreck. I am dismayed in so many ways, but perhaps no more so than by the many American citizens who either can’t or choose not to distinguish between credible and what I like to call incredible sources of information. Clearly we as educators are not doing enough to help our students learn how to do this.


I think part of problem is that we in higher education have historically focused on teaching our students to use only academic library resources, which have been vetted by professionals and are therefore credible. But today many of our students will never access a college library after they graduate—they’ll be turning to what I call the Wild West of the internet for information. So today it’s vital that we teach our students how to vet information themselves.


A number of years ago, I had my students in my first-year writing courses write a research paper using only online sources. Part of their assignment was to identify both credible and non-credible sources and explain why they found some credible and others not. Here’s the guidance I gave them:


Evaluating sources is an art, not an exact science, so there is no one set of rules that will help you definitively separate credible sources from non-credible sources. Instead, you have to use thinking skills such as analysis and evaluation to judge for yourself whether a source is sufficiently credible for you to use in your research. The following questions will help you decide.


What is the purpose of the source? Serious sources are more credible than satiric or humorous ones. Sources intended to inform, such as straight news stories, may be more credible than those intended to persuade, such as editorials, commentaries, and letters to the editor, which may be biased.


Is the author identified? A source with an identified author(s) may be more credible than one without an author, although authoritative organizations (e.g., news organizations, professional associations) may publish credible material without an identified author.


Who is the author? A credible source is written by someone with appropriate education, training, or experience to write with authority on the topic. An unknown writer is less credible than a frequently published writer, and a student is less credible than a professor. If you feel you need more information on the author, do a database search and/or Google search for the author’s name.


Who published or sponsored the source? A scholarly journal is generally more credible than a popular magazine or newspaper. Sources whose purpose is to sell a product or point of view—including many “news” organizations and websites—may be less credible than those whose purpose is to provide impartial information and services. A website with a URL extension of .edu, .gov, or .org may be more credible than one ending in .com (but note that .org sites often exist to promote a particular point of view). A source published by a reputable publisher or organization is often more credible than one published independently by the author or one published by a fly-by-night organization, because a reputable publisher or organization provides additional review and quality control.


How complete is the source’s information? Sources with more complete coverage of a topic may be more credible than those that provide limited coverage.


Is the content balanced or biased? Sources that present a balanced point of view are often more credible than those that clearly have a vested interest in the topic. If the author argues for one point of view, does he or she present opposing views fairly and refute them persuasively?


Are information, statements, and claims documented or unsupported? Sources that provide thorough, complete documentation for their information and claims are generally more credible than those that make unsupported or scantily-supported statements or claims. For example, information based on a carefully-designed research project is more credible than information based only on the author’s personal observations.


Has the source, author, publisher, and/or sponsor been recognized by others as credible? Sources found through academic databases such as Lexis Nexis or Infotrac are more credible than those only found through Google. Sources frequently reviewed, cited, or linked by others are more credible than those that no other expert or authority mentions or uses. You can do a database search and/or Google search for reviews of a source and to see how often it has been cited or linked by others. To look for links to a source, search on Google for “link:” followed by the source’s URL and see how many links are found.


Is the material well-written? Material that is clear, well-organized and free of spelling and grammatical errors is more credible than poorly-written material.


What is the date the material was published or last updated? Material with a clear publication date is more credible than undated material. For time-sensitive research topics, recent information is more credible than older information. Web sources that are updated regularly and well-maintained (e.g., no broken links) may be more credible than those that are posted and then neglected.


What are your own views and opinions? Don’t bring prejudices to your search. It’s easy to think that sources with which you agree are more credible than those with which you disagree. Keep an open, critical mind throughout your search, and be willing to modify your thesis or hypothesis as you learn more about your topic.

Rubric events this fall!

Posted on August 28, 2016 at 8:40 AM Comments comments (0)

This fall is the Semester of the Rubric for me. I'm doing sessions called "Everything I Thought I Knew About Rubrics Was Wrong" at the Assessment Institute in Indianapolis on October 17 and at the NEEAN Fall Forum at the College of the Holy Cross in Worcester, Massachusetts, on November 4. If you are in the Middle East, Africa, Europe, or Asia, I'm doing a workshop on "Building Rubrics" at the Assessment Leadership Conference at United Arab Emirates University in Al Ain on November 14-15.

On another note, on January 17 I'm doing "Building a Culture of Quality," a retreat for institutional leaders sponsored by WASC at the Kellogg West Conference Center in Pomona, California.

For more information on any of these events, visit the sponsoring organizations' websites. I hope to see you!

Can rubrics impede learning?

Posted on August 18, 2016 at 12:40 AM Comments comments (6)

Over the last couple of years, I’ve started to get some gentle pushback from faculty on rubrics, especially those teaching graduate students. Their concern is whether rubrics might provide too much guidance, serving as a crutch when students should be figuring out things on their own. One recent question from a faculty member expressed the issue well: “If we provide students with clear rubrics for everything, what happens when they hit the work place and can’t figure out on their own what to do and how to do it without supervisor hand-holding?”


It’s a valid point, one that ties into the lifelong learning outcome that many of us have for our students: we want to prepare them to self-evaluate and self-correct their work. I can think of two ways we can help students develop this capacity without abandoning rubrics entirely. One possibility would be to make rubrics less explicit as students progress through their program. First-year students need a clear explanation of what you consider good organization of a paper; seniors and grad students shouldn’t. The other possibility—which I like better—would be to have students develop their own rubrics, either individually or in groups, subject, of course, to the professor’s review.


In either case, it’s a good idea to encourage students to self-assess their work by completing the rubric themselves—and/or have a peer review the assignment and complete the rubric—before turning it in. This can help get students in the habit of self-appraising their work and taking responsibility for its quality before they hit the workplace.


Do you have any other thoughts or ideas about this? Let me know!

When might a nationally-accredited school be a good fit for regional accreditation?

Posted on July 25, 2016 at 11:10 AM Comments comments (0)

American accreditors fall into three broad groups: regional, national, and specialized. Of the three, regional accreditation is often seen as the most desirable for several reasons. First, regional accreditors are among the oldest accreditors in the U.S. and accredit the most prestigious institutions, giving them an image of quality. Second, employers are increasingly requiring job applicants to hold degrees from regionally accredited institutions. Third, some specialized accreditors require accredited programs to be in a regionally accredited institution. And finally, despite Federal regulations to the contrary, students from nationally-accredited institutions sometimes find it hard to transfer their credits elsewhere or to pursue a more advanced degree.


For all these reasons, nationally-accredited institutions sometimes consider pursuing regional accreditation. Unfortunately, in many instances regional accreditation is simply not a good fit—it’s like trying to fit a square peg into a round hole. Then the institution may either fail in its efforts to earn regional accreditation or, once accredited, run into problems maintaining its accreditation.


When might regional accreditation be a good fit?


1. Regional accreditation is only open to institutions that award at least one degree. If your institution offers only certificates and/or diplomas, it isn’t eligible.


2. Regional accreditors require all undergraduate degree programs to include certain components, including a general education or core curriculum studying the liberal arts and the development of certain skills and competencies.


3. Regional accreditors require a system of shared collegial governance. While none prescribes a particular governance system, all require that the respective roles, responsibilities, and authority of the board, leadership, administration, and faculty be clearly articulated. And an implicit expectation is that the institutional culture be one of communication and collaboration; regional accreditation simply becomes very difficult without these.


4. Because regional accreditors accredit a vast array of institutions, their standards are relatively imprecise, more a set of principles that are applied within the context of each institution’s mission. Regional accreditation is therefore a process that requires considerable time, thought, and effort by many members of the institutional community, not a task to be delegated to someone.


5. Regional accreditors expect a commitment to ongoing improvement beyond the minimum required for accreditation. Regional accreditation is not appropriate for an institution content to teeter on the edge of the bare minimum required for compliance.


6. Regional accreditors expect a commitment to collegiality within and across institutions. Volunteer peers from other institutions will work with your institution, and the accreditor expects your institution to return the favor once accredited, providing volunteer peer evaluators, presenting at conferences, and so on.


7. Regional accreditors expect a board that is empowered and committed to act in the best interests of the institution and its students. Again, regional accreditors are not prescriptive about board make-up and duties, but they want to see a board that has the commitment, capacity and authority to act in the institution’s best interests. Suppose, for example, that the president/CEO/owner develops early-onset Alzheimer’s and begins to make irrational decisions that are not in the best interest of the institution. Can the board bring about a change in leadership? If the board heads a corporation, can it put institutional quality ahead of immediate shareholder return on investment? If the board oversees other entities that are troubled, such as a church, hospital, or another educational institution, can it put the best interests of the accredited institution first, or will it be tempted to rob Peter to pay Paul?


Some shameless self-promotion here: my book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability aims to explain what regional accreditors are looking for in plain terms. If your nationally-accredited institution is considering moving to regional accreditation, I think the book is a worthwhile investment.

Meaningful assessment of AA/AS transfer programs

Posted on July 9, 2016 at 7:45 AM Comments comments (2)

I often describe the teaching-learning-assessment process as a four-step cycle:

1. Clear learning outcomes

2. A curriculum and pedagogies designed to provide students with enough learning opportunities to achieve those outcomes

3. Assessment of those outcomes

4. Use of assessment results to improve the other parts of the cycle: learning outcomes, curriculum, pedagogies, and assessment

I also often point out that, if faculty are struggling to figure out how to assess something, the problem is often not assessment per se but the first two steps. After all, if you have clear outcomes and you’re giving students ample opportunity to achieve them, you should be grading students on their achievement of those outcomes, and there’s your assessment evidence. So the root cause of assessment struggles is often poorly articulated learning outcomes, a poorly designed curriculum, or both.

I see this a lot in the transfer AA/AS degrees offered by community colleges. As I explained in my June 20 blog entry, these degrees, designed for transfer into a four-year college major, typically consist of 42-48 credits of general education courses plus 12-18 credits related to the major. The general education and major-related components are often what I call “Chinese menu” curricula: Choose one course from Column A, two from Column B, and so on. (Ironically, few Chinese restaurants have this kind of menu anymore, but people my age remember them.)


The problem with assessing these programs is the second step of the cycle, as I explained in my June 20 blog: in many cases these aren’t really programs; they’re simply collections of courses without coherence or progressive rigor. That makes it almost impossible both to define meaningful program learning outcomes (the first step of the cycle) and to assess them (the third step of the cycle).


How can you deal with this mess? Here are my suggestions.


1. Clearly define what a meaningful “program” is. As I explained in my June 20 blog entry, many community colleges are bound by state or system definitions of a “program” that aren’t meaningful. Regardless of the definition to which you may be bound, I think it makes the most sense to think of the entire AA/AS degree as the program, with the 12-18 credits beyond gen ed requirements as a concentration, specialization, track or emphasis of the program.

2. Identify learning outcomes for both the degree and the concentration, recognizing that there should often be a relation between the two. In gen ed courses, students develop important competencies such as writing, analysis, and information literacy. In their concentration, they may achieve some of those competencies at a deeper or broader level, or they may achieve additional outcomes. For example, students in social science concentrations may develop stronger information literacy and analysis skills than students in other concentrations, while students in visual arts concentrations may develop visual communication skills in addition to the competencies they learn in gen ed.

Some community colleges offer AA/AS degrees in which students complete gen ed requirements plus 12-18 credits of electives. In these cases, students should work with an advisor to identify their own, unique program/concentration learning outcomes and select courses that will help them achieve those outcomes.

3. Use the following definition of a program (or concentration) learning outcome: Every student in the program (or concentration) takes at least two courses with learning activities that help him or her achieve the program learning outcome. This calls for fairly broad rather than course-specific learning outcomes.

If you’re struggling to find outcomes that cross courses, start by looking at course syllabi for any common themes in course learning outcomes. Also think about why four-year colleges want students to take these courses. What are students learning, beyond content, that will help them succeed in upper-division courses in the major? In a pre-engineering program, for example, I’d like to think that the various science and math courses students take help them graduate with stronger scientific reasoning and quantitative skills than students in non-STEM concentrations.

4. Limit the number of learning outcomes; quality is more important than quantity here. Concentrations of 12-18 credits might have just one or two.


5. Also consider limiting your course options by consolidating Chinese-menu options into more focused pathways, which research increasingly suggests improve student success and completion. I’m intrigued by what Alexandra Waugh calls “meta-majors”: focused pathways that prepare students for a cluster of four-year college majors, such as health sciences, engineering, or the humanities, rather than just one.

6. Review your curricula to make sure that every student, regardless of the courses he or she elects, will graduate with a sufficiently rigorous achievement of every program (and concentration) learning outcome. An important principle here: There should be at least one course in which students can demonstrate achievement of the program learning outcome at the level of rigor expected of an associate degree holder prepared to begin junior-level work. In many cases, an entry-level course cannot be sufficiently rigorous; your program or concentration needs at least one course that cannot be taken the first semester. If you worry that prerequisites may be a barrier to completion, consider Passaic County Community College’s approach, described in my June 20 blog.

7. Finally, you’ve got meaningful program learning outcomes and a curriculum designed to help students achieve them at an appropriate level of rigor, so you're ready to assess those outcomes. The course(s) you’ve identified in the last step are where you can assess student achievement of the outcomes. But one additional challenge faces community colleges: many students transfer before taking this “capstone” course. So also identify a program/concentration “cornerstone” course: a key course that students often take before they transfer and that helps them begin to achieve one or more key program/concentration learning outcomes. Here you can assess whether students are on track to achieve the program/concentration learning outcome, though at this point they probably won’t yet be where you want them to be by the end of the sophomore year.

A big community college issue: Degree programs that really aren't

Posted on June 20, 2016 at 11:30 AM Comments comments (3)

Over the years I’ve worked with myriad community colleges, large and small, in dozens of states throughout the United States. More than many other higher ed sectors, community colleges truly focus on helping students learn, making assessment a relatively easy sell and making community colleges some of my favorites to work with.


But I’m seeing an issue at community colleges throughout the United States that deeply troubles me and can make assessment of program learning outcomes almost impossible. The issue flows from the two kinds of associate degree programs that community colleges offer. One kind is what many call “career and technical education” (CTE) programs. Often A.A.S. degrees, these are designed to prepare students for immediate employment. The other kind is what many call “transfer programs”: A.A. or A.S. programs, often named something like “General Studies” or “Liberal Education,” that are designed to prepare students to transfer into baccalaureate programs at four-year colleges.


The problem I’m seeing is that many of these programs, especially on the transfer side, aren’t really programs. Here’s how the regional accreditors’ standards define programs:


  • ACCJC: “Appropriate breadth, depth, rigor, sequencing, time to completion, and synthesis of learning”
  • HLC: “Require levels of performance by students appropriate to the degree or certificate awarded”
  • MSCHE: “Characterized by rigor and coherence… designed to foster a coherent student learning experience and to promote synthesis of learning”
  • NEASC effective July 1, 2016: “Coherent design and… appropriate breadth, depth, continuity, sequential progression, and synthesis of learning”
  • NWCCU: “Rigor that [is] consistent with mission… A coherent design with appropriate breadth, depth, sequencing of courses, and synthesis of learning”
  • SACS: “A coherent course of study”
  • WSCUC: “Appropriate in content, standards of performance, [and] rigor”



There’s a theme here: A collection of courses is not a program and, conversely, a program is more than a collection of courses. A true program has both coherence and rigor. In order for this to happen, some courses must be more advanced than others and build on what’s been learned in earlier courses. That means that some program courses should be at the 200-level and have prerequisites.


But many community college degree “programs” are in fact collections of courses, nothing more.


  • Many transfer degree “programs” consist of 42 or 45 credits of general education courses—virtually all introductory 100-level courses—plus another 12-18 credits of electives, sometimes in an area of specialization, sometimes not.
  • At virtually every community college I’ve visited, it’s entirely possible for students to complete an associate degree in at least one “program” by taking only 100-level courses.
  • In some disciplines, “program” courses are largely cognate requirements (such as physics for an engineering program) with perhaps only one course in the program discipline itself.
  • And on top of all this, any 200-level courses in the “program” are often sophomore-level in name only; they have no prerequisites and appear no more rigorous than 100-level courses.



Two years of 100-level study does not constitute an associate degree and does not prepare transfer students for the junior-level work they will face when they transfer. And a small handful of introductory courses does not constitute an associate degree program.


Turning community college associate degree programs into true programs with rigor and coherence is remarkably difficult. Among the barriers:


  • Some systems and states prohibit community colleges from offering associate degrees in liberal arts or business disciplines, as Charlene Nunley, Trudy Bers, and Terri Manning note in NILOA’s Occasional Paper #10, “Learning Outcomes Assessment in Community Colleges.”
  • In other systems and states, plenty of community college faculty have told me that their counterparts at local four-year colleges don’t want them to teach anything beyond introductory courses—the four-year faculty want to teach the rest themselves. (My reaction? What snobs.)
  • Yet other community college faculty have told me that they have felt pressure from the Lumina Foundation’s completion agenda to eliminate all course prerequisites.



So at some community colleges nothing can be done until laws, regulations, or policies are changed, leaving thousands of students in the meanwhile with a shortchanged education. But there are plenty of other community colleges that can do something. I’m particularly impressed with Passaic County Community College’s approach. Every degree program, even those in the liberal arts, has designated one 200-level course as its program capstone. The course is open only to students who have completed at least 45 credits and have taken at least one other course in the discipline. For the English AA, for example, this course is “Topics in Literature,” and for the Psychology option in the Liberal Arts AA, this course is “Social Psychology.” It’s a creative solution to a pervasive problem.

Join me in Indiana, Nebraska, California, and the Middle East

Posted on June 11, 2016 at 7:05 AM Comments comments (0)

Over the coming months I'll be speaking or doing workshops in a variety of public venues. If your schedule permits, please join me--I'd love to see you! For more information on any of these events, visit

  • On June 23, I'll be doing a post-conference workshop on "Meaningful Assessment of Program Learning Outcomes" at the Innovations 2016 in Faith-Based Nursing Conference at Indiana Wesleyan University in Marion.
  • On August 8, I'll be doing a workshop on "Using Assessment Results to Understand and Improve Student Learning" at Nebraska Wesleyan University in Omaha, sponsored by Nebraska Wesleyan University and Concordia, Doane, and Union Colleges.
  • On October 17 or 18 (date and time TBA), I'll be doing a session titled "Everything I Thought I Knew About Rubrics Was Wrong" at the 2016 Assessment Institute in Indianapolis.
  • On November 15 or 16 (date and time TBA), I'll be doing a session or workshop (topic to be announced) at the inaugural Assessment Leadership Conference sponsored by United Arab Emirates University in Al Ain.
  • On January 17, I'll be facilitating "Building a Culture of Quality: A Retreat for Institutional Leaders," hosted by the WASC Senior College and University Commission, at the Kellogg West Conference Center in Pomona, California.

What does your website say about your institution?

Posted on May 27, 2016 at 12:40 AM Comments comments (3)

Part of my preparation for working with or visiting any college is visiting its website. I’m looking for basic “get acquainted” info to help me understand the college and therefore do a better job helping it. The information I’m looking for often includes things like the following:

  • How big is the institution? This helps me because large institutions may need different assessment or accreditation support structures than small ones.
  • What are its mission, vision, and strategic goals? This helps me because assessment and accreditation work should focus on institutional achievement of its mission, vision, and strategic goals.
  • Who “owns” the institution? Is it public, private non-profit, or private for-profit? Who founded it, and how long ago? This gives me insight into possible unstated values of the institution. For example, an institution founded by a religious denomination may still abide by some of the denomination’s tenets, even if it is now independent. A public institution is typically under pressure to be all things to all people and may therefore be stretched too thin.
  • Who accredits the institution? Helpful for obvious reasons!
  • What kinds of programs does it offer? This helps me because professional/career programs often need different assessment approaches or support than liberal arts programs.
  • How are the institution’s academic programs organized? Sometimes there are several schools within a college or several colleges within a university.
  • How many programs does it offer? An institution offering 250 programs needs a different assessment structure than one offering 25.
  • What is its gen ed curriculum, and what are its gen ed learning outcomes? This can be helpful because I often work with colleges on identifying and assessing gen ed learning outcomes.


(Ironically, I never look for any assessment information on the college’s website. I know it’s not there. Yes, there may be a home page for the assessment office, usually full of guidelines on how to fill out report templates, and perhaps with links to some assessment reports. But I haven’t yet found a college website that tells me and others clearly, “What are the most important things we want students to learn here, and how well are they learning them?” So I don’t bother looking anymore.)


Yes, I could ask my contacts at the institution for all this information (and if I can’t find it on the website, I do), but poking around the website gives me additional insight:

  • Does the institution have a clear sense of its identity and priorities? I worry about colleges with incredibly cluttered home pages, full of announcements about recent and upcoming events, maybe some research, registration reminders, and links to intranet portals. It’s the throw-everything-but-the-kitchen-sink-and-see-what-works approach, and I worry if that’s the approach they take to everything else they do.
  • Most colleges publish their mission, but a remarkable number don’t publish their strategic plan. This gives me the impression that they don’t want public stakeholders (community members, businesses, government policymakers) to get on board and support their plans.
  • Some university websites list their programs by school or college—in order to find the Visual Communications program, you have to first somehow discern if it’s offered in the College of Business, the College of Art, or the College of Liberal Arts. Most prospective students are interested in particular programs and don’t care which college they’re housed in, so this raises a concern that the institution may be more faculty-centered than student-centered.
  • Sometimes the colleges/schools and the programs within them are just plain odd. I remember one institution that had a visual communication program in the business college and a graphic design program in the art college—and of course they offered completely separate curricula and didn’t talk to each other! These oddities often suggest silos and turf wars.
  • Sometimes a college offers, say, 150 programs for 2500 students. This is a college that’s stretching its resources too far—probably some of those programs are too small to be effective.
  • I sometimes need Indiana Jones to track down gen ed requirements. Here’s one (sadly typical) example: from the home page, I clicked on Academics, then Academic Catalogs, then 2015-2016 Undergraduate Catalog, then Colleges and Schools, then College of Liberal Arts & Sciences, then (finally) General Education Requirements. The only conclusion I can draw is that colleges and universities are embarrassed by their gen ed requirements, which doesn’t say much about the real value we place on the liberal arts.


Now, I know I’m not a typical visitor to your college’s website, but I’m sure I’m not the only stakeholder interested in these kinds of information…and perhaps drawing these kinds of conclusions about your college. At a minimum, your accreditation reviewers will probably visit your website looking for things very similar to what I look for.


For more ideas on common flaws in college websites, visit

Fixing assessment in American higher education

Posted on May 7, 2016 at 9:00 AM Comments comments (4)

In my April 25 blog post, “Are our assessment processes broken?” I listed five key problems with assessment in the United States. Can we fix them? Yes, we can, primarily because today we have a number of organizations and entities that can tackle them, including (in no particular order):


Here are five steps that I think will dramatically improve the quality and effectiveness of student learning assessment in the United States.


1. Develop a common vocabulary. So much time is wasted debating the difference between a learning outcome and a learning objective, for example. The assessment movement is now mature enough that we can develop a common baseline glossary of those terms that continue to be muddy or confusing.


2. Define acceptable, good, and best assessment practices. Yes, many accreditors provide professional development for their membership and reviewers on assessment, but trainers often focus on best practices rather than minimally acceptable practices. This leads to reviewers unnecessarily “dinging” institutions on relatively minor points (say, learning outcomes don’t start with action words) while missing the forest of, say, piles of assessment evidence that aren’t being used meaningfully.


Specifically, we need to practice what we preach with one or more rubrics that list the essential elements of assessment and define best, good, acceptable, and unacceptable performance levels for each criterion. Fortunately, we have some good models to work from: NILOA’s Excellence in Assessment designation criteria, CHEA’s awards for Effective Institutional Practice in Student Learning Outcomes, recognition criteria developed by the (now defunct) New Leadership Alliance for Student Learning and Accountability, and rubrics developed by some accreditors. Then we need to educate institutions and accreditation reviewers on how to use the rubric(s).


3. Focus less on student learning assessment (and its cousins, student achievement, student success, and completion) and more on teaching and learning. I would love to see Lumina focus on excellent teaching (including engagement) as a primary strategy to achieve its completion agenda—getting more faculty to adopt research-informed pedagogies that help students learn and succeed. I’d also like to see accreditors include the use of research- and evidence-informed teaching practices in their definitions of educational excellence.


4. Communicate clearly and succinctly with various audiences what our students are learning and what we're doing to improve learning. I haven't yet found any institution that I think is doing this really well. Capella is the only one I’ve seen that impresses me, and even Capella only presents results, not what they're doing to improve learning. I'm intrigued by the concept of infographics and wish I'd studied graphic design! A partnership with some graphic designers (student interns or class project?) might help us come up with some effective ways to tell our complicated stories.


5. Focus public attention on student learning as an essential part of student success. As Richard DeMillo recently pointed out, we need to find meaningful alternatives to US News rankings that focus on what’s truly important—namely, student learning and success. The problem has always been that student learning and success are so complex that they can’t be summarized into a brief set of metrics.

But the U.S. Department of Education has opened an intriguing possibility. At the very end of his April 22 letter to accreditors, Undersecretary Ted Mitchell noted that “Accreditors may…develop tiers of recognition, with some institutions or programs denoted as achieving the standards at higher or lower levels than others.” Accreditors thus now have an opportunity to commend publicly those institutions that achieve certain standards at a (clearly defined) “best practice” level. Many standards would not be of high interest to most members of the public, and input-based standards (resources) would only continue to recognize the wealthiest institutions. But commendations for best practices in things like research- and evidence-informed teaching methods and student development programs, serving the public good, and meeting employer needs with well-prepared graduates (documented through assessment evidence and rigorous standards) could turn this around and focus everyone on what’s most important: making sure America’s college students get great educations.