Linda Suskie

  A Common Sense Approach to Assessment in Higher Education

Blog


What to look for in multiple choice test reports

Posted on February 28, 2017 at 8:15 AM | Comments (2)

Next month I’m doing a faculty professional development workshop on interpreting the reports generated for multiple choice tests. Whenever I do one of these workshops, I ask the sponsoring institution to send me some sample reports. I’m always struck by how user-unfriendly they are!

 

The most important thing to look at in a test report is the difficulty of each item—the percent of students who answered each item correctly. Fortunately these numbers are usually easy to find. The main thing to think about is whether each item was as hard as you intended it to be. Most tests have some items on essential course objectives that every student who passes the course should know or be able to do. We want virtually every student to answer those items correctly, so check those items and see if most students did indeed get them right.
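If you ever want to check these numbers outside the packaged report, item difficulty is simple to compute yourself. Here's a minimal sketch in Python, using a made-up 0/1 matrix of scored responses rather than any particular vendor's file format:

```python
# A minimal sketch of item difficulty: the proportion of students who
# answered each item correctly. The responses matrix is invented:
# one row per student, one column per item, 1 = correct, 0 = incorrect.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 1],
]

num_students = len(responses)
num_items = len(responses[0])

for item in range(num_items):
    correct = sum(row[item] for row in responses)
    difficulty = correct / num_students  # higher = easier item
    print(f"Item {item + 1}: {difficulty:.0%} answered correctly")
```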

 

Then take a hard look at any test items that a lot of students got wrong. Many tests purposefully include a few very challenging items, requiring students to, say, synthesize their learning and apply it to a new problem they haven’t seen in class. These are the items that separate the A students from the B and C students. If these are the items that a lot of students got wrong, great! But take a hard look at any other questions that a lot of students got wrong. My personal benchmark is what I call the 50 percent rule: if more than half my students get a question wrong, I give the question a hard look.

 

Now comes the hard part: figuring out why more students got a question wrong than we expected. There are several possible reasons, including the following:

 

  • The question or one or more of its options is worded poorly, and students misinterpret them.
  • We might have taught the question’s learning outcome poorly, so students didn’t learn it well. Perhaps students didn’t get enough opportunities, through classwork or homework, to practice the outcome.
  • The question might be on a trivial point that few students took the time to learn, rather than a key course learning outcome. (I recently saw a question on an economics test that asked how many U.S. jobs were added in the last quarter. Good heavens, why do students need to memorize that? Is that the kind of lasting learning we want our students to take with them?)

 

 

If you’re not sure why students did poorly on a particular test question, ask them! Trust me, they’ll be happy to tell you what you did wrong!

 

Test reports provide two other kinds of information: the discrimination of each item and how many students chose each option. These are the parts that are usually user-unfriendly and, frankly, can take more time to decipher than they’re worth.

 

The only thing I’d look for here is any items with negative discrimination. The underlying theory of item discrimination is that students who get an A on your test should be more likely to get any one question right than students who fail it. In other words, each test item should discriminate between top and bottom students. Imagine a test question that all your A students get wrong but all your failing students answer correctly. That’s an item with negative discrimination. Obviously there’s something wrong with the question’s wording—your A students interpreted it incorrectly—and it should be thrown out. Fortunately, items with negative discrimination are relatively rare and usually easy to identify in the report.
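If you're curious where a discrimination figure comes from, here's a rough sketch of one common approach (comparing the top- and bottom-scoring groups). The student data are invented, and your testing software may well compute discrimination differently (for example, as a point-biserial correlation):

```python
# A rough sketch of an upper-lower discrimination index for one item:
# (proportion correct in the top-scoring group) minus (proportion correct
# in the bottom-scoring group). Negative values flag items that the weakest
# students got right more often than the strongest students did.
def discrimination_index(total_scores, item_correct, group_fraction=0.27):
    """total_scores: each student's total test score.
    item_correct: 1/0 for whether each student got this item right.
    Both lists are in the same student order (hypothetical data)."""
    paired = sorted(zip(total_scores, item_correct), key=lambda p: p[0])
    n = max(1, int(len(paired) * group_fraction))
    bottom = [correct for _, correct in paired[:n]]
    top = [correct for _, correct in paired[-n:]]
    return sum(top) / n - sum(bottom) / n

# Example: top students mostly missed this item; bottom students got it right.
scores = [95, 90, 88, 75, 70, 62, 55, 50]
item = [0, 0, 1, 1, 1, 1, 1, 1]
print(discrimination_index(scores, item))  # negative: give the item a hard look
```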

What does a new CAO survey tell us about the state of assessment?

Posted on January 26, 2017 at 8:40 AM | Comments (6)

A new survey of chief academic officers (CAOs) conducted by Gallup and Inside Higher Ed led me to the sobering conclusion that, after a generation of work on assessment, we in U.S. higher education remain very, very far from pervasively conducting truly meaningful and worthwhile assessment.


Because we've been working on this for so long, I was deliberately tough as I reviewed the results of this survey. The survey asked CAOs to rate the effectiveness of their institutions on a variety of criteria using a scale of very effective, somewhat effective, not too effective, and not effective at all. The survey also asked CAOs to indicate their agreement with a variety of statements on a five-point scale, where 5 = strongly agree, 1 = strongly disagree, and the other points are undefined. At this point I would have liked to see most CAOs rate their institutions at the top of the scale: either “very effective” or “strongly agree.” So these are the results I focused on and, boy, are they depressing.


Quality of Assessment Work

Less than a third (30%) of CAOs say their institution is very effective in identifying and assessing student outcomes. ’Nuff said on that! :(


Value of Assessment Work

Here the numbers are really dismal. Less than 10% (yes, ten percent, folks!) of CAOs strongly agree that:

  • Faculty members value assessment efforts at their college (4%).
  • The growth of assessment systems has improved the quality of teaching and learning at their college (7%).
  • Assessment has led to better use of technology in teaching and learning (6%). (Parenthetically, that struck me as an odd survey question; I had no idea that one of the purposes of assessment was to improve the use of technology in T&L!)


And just 12% strongly disagree that their college’s use of assessment is more about keeping accreditors and politicians happy than it is about teaching and learning.

 

And only 6% of CAOs strongly disagree that faculty at their college view assessment as requiring a lot of work on their parts. Here I’m reading something into the question that might not be there. If the survey asked if faculty view teaching as requiring a lot of work on their parts, I suspect that a much higher proportion of CAOs would disagree because, while teaching does require a lot of work, it’s what faculty generally find to be valuable work--it's what they are expected to do, after all. So I suspect that, if faculty saw value in their assessment work commensurate with the time they put into it, this number would be a lot higher.

 

Using Evidence to Inform Decisions

Here’s a conundrum:

  • Over two thirds (71%) of CAOs say their college makes effective use of the data it collects to measure student outcomes,
  • But only about a quarter (26%) say their college is very effective in using data to aid and inform decision making.
  • And only 13% strongly agree that their college regularly makes changes in the curriculum, teaching practices, or student services based on what it finds through assessment.


 So I’m wondering what CAOs consider effective uses of assessment data!


 Furthermore,

  • About two thirds (67%) of CAOs say their college is very effective in providing a quality undergraduate education.
  • But less than half (48%) say it’s very effective in preparing students for the world of work,
  • And only about a quarter (27%) say it’s very effective in preparing students to be engaged citizens.
  • And (as I've already noted) only 30% say it’s very effective in identifying and assessing student outcomes.


How can CAOs who admit their colleges are not very effective in preparing students for work or engaged citizenship, or in assessing student learning, nonetheless think their college is very effective in providing a quality undergraduate education? What evidence are they using to draw that conclusion?


And,

  • While less than half of CAOs say their colleges are very effective in preparing students for work,
  • Only about a third (32%) strongly agree that their institution is increasing attention to the ability of its degree programs to help students get a good job.


My Conclusions

After a quarter century of work to get everyone to do assessment well:

  • Assessment remains spotty; it is the very rare institution that is doing assessment pervasively and consistently well.
  • A lot of assessment work either isn’t very useful or takes more time than it’s worth.
  • We have not yet transformed American higher education into an enterprise that habitually uses evidence to inform decisions.

What are the characteristics of effective curricula?

Posted on January 6, 2017 at 8:20 PM | Comments (9)

I'm working on a book chapter on curriculum design, and I've come up with eight characteristics of effective curricula, whether for a course, program, general education, or co-curricular experience:

• They treat a learning goal as a promise.

• They are responsive to the needs of students, employers, and society.

• They are greater than the sum of their parts.

• They give students ample and diverse opportunities to achieve key learning goals.

• They have appropriate, progressive rigor.

• They conclude with an integrative, synthesizing capstone experience.

• They are focused and simple.

• They use research-informed strategies to help students learn and succeed, including high-impact practices.

 

What do you think? Do these make sense? Have I missed anything?

And...do the curricula you work with have these characteristics?

Making a habit of using classroom assessment information to inform our own teaching

Posted on December 20, 2016 at 10:50 AM | Comments (2)

Given my passion for assessment, you might not be surprised to learn that, whenever I teach, the most fun part for me is analyzing how my students have done on the tests and assignments I’ve given them. Once tests or papers are graded, I can’t wait to count up how many students got each test question right or how many earned each possible score on each rubric criterion. When I teach workshops, I rely heavily on minute papers, and I can’t wait to type up all the comments and do a qualitative analysis of them. I love to teach, and I really want to be as good a teacher as I can. And, for me, an analysis of what students have and haven’t learned is the best possible feedback on how well I’m teaching, much more meaningful and useful than student evaluations of teaching.

 

I always celebrate the test questions or rubric criteria that all my students did well on. I make a point of telling the class and, no matter how jaded they are, you should see their faces light up!

 

And I always reflect on the test questions or rubric criteria for which my students did poorly. Often I can figure out on my own what happened. Often it’s simply a poorly written question or assignment, but sometimes I have to admit to myself that I didn’t teach that concept or skill particularly well. If I can’t figure out what happened, I ask the class and, trust me, they’re happy to tell me how I screwed up! If it’s a really vital concept or skill and we’re not at the end of the course, I’ll often tell them, “I screwed up, but I can’t let you out of here not knowing how to do this. We’re going to go over it again, you’re going to get more homework on it, and you’ll submit another assignment (or have more test questions) on this.” If it's the end of the course, I make notes to myself on what I'll do differently next time.

 

I often share this story at the faculty workshops I facilitate. I then ask for a show of hands of how many participants do this kind of analysis in their own classes. The number of hands raised varies—sometimes there will be maybe half a dozen hands in a room of 80, sometimes more—but rarely do more than a third or half of those present raise their hands. This is a real issue, because if faculty aren’t in the habit of analyzing and reflecting on assessment results in their own classes, how can we expect them to do so collaboratively on broader learning outcomes? In short, it’s a troubling sign that the institutional community is not yet in the habit of using systematic evidence to understand and improve student learning, which is what all accreditors want.

 

Here, then, is my suggestion for a New Year’s resolution for all of you who teach or in any way help students learn: Start doing this! You don’t have to do this for every assignment in every course you teach, but pick at least one key test or assignment in one course whose scores aren’t where you’d like them. Your analysis and reflection on that one test or assignment will lead you into the habit of using the assessment evidence in front of you more regularly, and it will make you an even better teacher than you are today.

Lessons from the Election for Higher Education

Posted on November 28, 2016 at 7:25 AM | Comments (1)

If you share my devastation at the results of the U.S. presidential election and its implications for our country and our world, and if you are struggling to understand what has happened and wondering what you can do as a member of the higher education community, this blog post is for you. I don’t have answers, of course, but I have some ideas.

 

Why did Trump get so many votes? While the reasons are complex, and people will be debating them for years, there seem to be two fundamental factors. One can be summed up in that famous line from Bill Clinton’s campaign: It’s the economy, stupid.  Jed Kolko at fivethirtyeight.com found that people who voted for Trump were more likely to feel under economic threat, worried about the future of their jobs.

 

The other reason is education. Nate Silver at fivethirtyeight.com has tweeted that Clinton won all 18 states where an above average share of the population has advanced degrees, but she lost 29 of the other 32.  Education and salary are highly correlated, but Nate Silver has found signs that education appears to be a stronger predictor of who voted for Trump than salary.

 

Why is education such a strong predictor of how people voted? Here’s where we need more research, but I’m comfortable speculating that reasons might include any of the following:


  • People without a college education have relatively few prospects for economic security. In my book Five Dimensions of Quality I noted that the Council on Foreign Relations found that, “going back to the 1970s, all net job growth has been in jobs that require at least a bachelor’s degree.” I also noted a statistic from Anthony Carnevale and his colleagues: “By 2020, 65 percent of all jobs will require postsecondary education and training, up from 28 percent in 1973.”
  • Colleges do help students learn to think critically: to distinguish credible evidence from what I call “incredible” evidence, to weigh evidence carefully when making difficult decisions, and to make decisions based more on good quality evidence than on emotional response.
  • College-educated citizens are more likely to have attended good schools from kindergarten on, learning to think critically not just in college but throughout their schooling.
  • College-educated citizens are more optimistic because their liberal arts studies give them the open-mindedness and flexibility to handle changing times, including changing careers.


We do have a tremendous divide in this country—an education divide—and it is growing. While college degree holders have always earned more than those without a college degree, the income disparity has grown; college graduates now earn 80% more than high school graduates, up from 40% in the 1970s.

 

If we want a country that offers economic security, whose citizens feel a sense of optimism, whose citizens make evidence-informed decisions, and whose citizens are prepared for changes in their country and their lives, we need to work on closing the education divide by helping as many people as possible get a great postsecondary education.

 

What can we do?


  1. Welcome the underprepared. They are the students who really need our help in obtaining not only economic security but the thinking skills that are the hallmark of a college education and a sense of optimism about their future. The future of our country is in their hands.
  2. Make every student want to come back, as Ken O’Donnell has said, until they complete their degree or credential. Every student we lose hurts his or her economic future and our country.
  3. Encourage actively what Ernest Boyer called the scholarship of application: using research to solve real-life problems such as regional social and economic issues.
  4. Partner with local school systems and governments to improve local grade schools. Many regions of the country need new businesses, but new businesses usually want to locate in communities with good schools for their employees and their families.
  5. Create more opportunities for students to listen to and learn from others with different backgrounds and perspectives. Many colleges seek to attract international students and encourage students to study abroad. I’d like to go further. Do we encourage our international students to share their backgrounds and experiences with our American students, both in relevant classes and in co-curricular settings? Do we encourage returning study abroad students to share what they learned with their peers? Do we encourage our students to consider not only a semester abroad but a semester at another U.S. college in a different part of the country?
  6. Create more opportunities for students to learn about the value of courtesy, civility, respect, compassion, and kindness and how to practice these in their lives and careers.

Lessons from the Election for Assessment

Posted on November 21, 2016 at 2:45 PM | Comments (0)

The results of the U.S. presidential election have lessons both for American higher education and for assessment. Here are the lessons I see for meaningful assessment; I’ll tackle implications for American higher education in my next blog post.

 

Lesson #1: Surveys are a difficult way to collect meaningful information in the 21st century. If your assessment plan includes telephone or online surveys of students, alumni, employers, or anyone else, know going in that it’s very hard to get a meaningful, representative sample.

 

A generation ago (when I wrote a monograph, Questionnaire Survey Research: What Works, for the Association for Institutional Research), most people had landline phones with listed numbers and without caller ID or voice mail. So it was easy to find their phone number, and they usually picked up the phone when it rang. Today many people don’t have landline phones; they have cell phones with unlisted numbers and caller ID. If the number calling is unfamiliar to them, they let the call go straight to voice mail. Online surveys have similar challenges, partly because databases of e-mail addresses aren’t as readily available as phone books and partly because browsing habits affect the validity of pop-up polls such as those conducted by Survey Monkey. And all survey formats are struggling with survey fatigue (how many surveys have you been asked to complete in the last month?).

 

Professional pollsters have ways of adjusting for all these factors, but those strategies are difficult and expensive and often beyond our capabilities.

 

Lesson #2: Small sample sizes may not yield meaningful evidence. Because of Lesson #1, many of the political polls we saw were based on only a few hundred respondents. A sample of 250 has an error margin of 6% (meaning that if, for example, you find that 82% of the student work you assessed meets your standard, the true percentage is probably somewhere between 76% and 88%). A sample of 200 has an error margin of 7%. And these error margins assume that the samples of student work you’re looking at are truly representative of all student work. Bottom line: We need to look at a lot of student work, from a broad variety of classes, in order to draw meaningful conclusions.
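If you want to check an error margin yourself, the standard 95-percent formula takes one line; this quick sketch uses the conservative worst-case proportion of 50%:

```python
# A quick sketch of the 95% margin of error for a sample proportion,
# using the conservative worst case p = 0.5.
import math

def margin_of_error(sample_size, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / sample_size)

for n in (200, 250, 400):
    print(f"n = {n}: +/- {margin_of_error(n):.1%}")
# n = 250 gives roughly +/- 6% and n = 200 roughly +/- 7%,
# matching the figures above.
```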

 

Lesson #3: Small differences aren’t meaningful. I was struck by how many reporters and pundits talked about Clinton having, say, a 1% or 2% point lead without mentioning that the error margin made these leads too close to call. I know everyone likes to have a single number—it’s easiest to grasp—but I wish we could move to the practice of reporting ranges of likely results, preferably in graphs that show overlaps and convey visually when differences aren’t really significant. That would help audiences understand, for example, whether students’ critical thinking skills really are worse than their written communication skills, or whether their information literacy skills really are better than those of their peers.
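For what it's worth, here is one hypothetical way to report ranges rather than single numbers, with invented results and error margins, so overlapping intervals are obvious at a glance:

```python
# A hypothetical sketch of reporting results as ranges: each estimate is
# plotted with its error margin, so overlapping intervals stand out.
import matplotlib.pyplot as plt

skills = ["Critical thinking", "Written communication", "Information literacy"]
estimates = [0.74, 0.78, 0.81]  # invented proportions meeting the standard
margins = [0.06, 0.06, 0.07]    # invented 95% error margins

plt.errorbar(estimates, range(len(skills)), xerr=margins, fmt="o", capsize=4)
plt.yticks(range(len(skills)), skills)
plt.xlabel("Proportion of student work meeting the standard")
plt.title("Results reported as ranges, not single numbers")
plt.tight_layout()
plt.show()
```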

 

Lesson #4: Meaningful results are in the details. Clinton won the popular vote by well over a million votes but still lost enough states to lose the Electoral College. Similarly, while students at our college may be doing well overall in terms of their analytic reasoning skills, we should be concerned if students in a particular program or cohort aren’t doing that well. Most colleges and universities are so diverse in terms of their offerings and the students they serve that I’m not sure overall institution-wide results are all that helpful; the overall results can mask a great deal of important variation.

 

Lesson #5: We see what we want to see. With Clinton the odds-on favorite to win the race, it was easy to see Trump’s chances of winning (anywhere from 10-30%, depending on the analysis) as insignificant, when in fact these probabilities meant he had a realistic chance of winning. Just as it was important to take a balanced view of poll results, it’s important to bring a balanced view to our assessment results. Usually our assessment results are a mixed bag, with both reasons to cheer and reasons to reflect and try to improve. We need to make sure we see—and share—both the successes and the areas for concern.

Using Professional Development Funds Wisely

Posted on November 9, 2016 at 5:45 AM | Comments (0)

 

I recently suggested to a college that it invest its professional development funds in helping faculty learn more about how to teach and assess. The response? We already do plenty—we give every faculty member funds to use however they like on professional development.

 

The problem with this approach is that there can be a difference between what people want to do and what they should do. If you gave me funds for my own personal growth and development, I’d probably use it to visit some fine restaurants instead of the gym membership that I really should get. If you gave me funds for professional development, I’d probably use it to go to a research conference in a nice location rather than organize a visit with a team of my colleagues to a college that’s doing a great job teaching and assessing writing in our discipline.

 

One of the themes of my book Five Dimensions of Quality is “put your money where your mouth is.” Does your college really do this when it comes to professional development?

 

  • Does your college focus its professional development resources on the things your college says it’s focusing on? For example, if one of your college’s strategic goals is to be student-centered, do you focus professional development funds on helping faculty and staff learn what it means to be student-centered and how to incorporate student-centered practices into their teaching and other responsibilities?
  • Does your college give priority to funding sabbatical leave requests that, again, address your college’s top priorities? If your college’s mission or strategic goals include teaching excellence, for example, do you give high priority to sabbatical leaves that address the scholarship of teaching?
  • Does your college prioritize travel funding for conferences and events that will help faculty and staff develop the knowledge and skills to address your college’s top priorities, such as student success?
  • Does your college prioritize sabbatical and travel funding for requests that include plans to disseminate what’s been learned to colleagues across your college?
  • Does your teaching-learning center use systematic evidence of what faculty and student development staff most need to learn when it plans its professional development offerings? For example, if assessments show that students across a variety of disciplines struggle to cite sources, does the TLC collaborate with librarians to offer programming on how to teach students to cite sources?
  • Does your assessment committee periodically review department assessment reports to identify what faculty and staff are doing well with assessment and what remains a struggle? Does it publicize successes, such as useful rubrics and prompts, to help others learn what good practices look like? Does it sponsor or recommend professional development to help faculty and staff with whatever aspects of assessment are most challenging?

An example of closing the loop...and ideas for doing it well

Posted on September 24, 2016 at 7:35 AM | Comments (6)

I was intrigued by an article in the September 23, 2016, issue of Inside Higher Ed titled “When a C Isn’t Good Enough.” The University of Arizona found that students who earned an A or B in their first-year writing classes had a 67% chance of graduating, but those earning a C had only a 48% chance. The university is now exploring a variety of ways to improve the success of students earning a C, including requiring C students to take a writing competency test, providing resources to C students, and/or requiring C students to repeat the course.

 

I know nothing about the University of Arizona beyond what’s in the article. But if I were working with the folks there, I’d offer the following ideas to them, if they haven’t considered them already.

 

1. I’d like to see more information on why the C students earned a C. Which writing skills did they struggle most with: basic grammar, sentence structure, organization, supporting arguments with evidence, etc.? Or was there another problem? For example, maybe C students were more likely to hand in assignments late (or not at all).

 

2. I’d also like to see more research on why those C students were less likely to graduate. How did their GPAs compare to A and B students? If their grades were worse, what kinds of courses seemed to be the biggest challenge for them? Within those courses, what kinds of assignments were hardest for them? Why did they earn a poor grade on them? What writing skills did they struggle most with: basic grammar, organization, supporting arguments with evidence, etc.? Or, again, maybe there was another problem, such as poor self-discipline in getting work handed in on time.

 

And if their GPAs were not that different from those of A and B students (or even if they were), what else was going on that might have led them to leave? The problem might not be their writing skills per se. Perhaps, for example, students with work or family obligations found it harder to devote the study time necessary to get good grades. Providing support for that issue might help more than helping them with their writing skills.

 

3. I’d also like to see the faculty responsible for first-year writing articulate a clear, appropriate, and appropriately rigorous standard for earning a C. In other words, they could use the above information on the kinds and levels of writing skills that students need to succeed in subsequent courses to articulate the minimum performance levels required to earn a C. (When I taught first-year writing at a public university in Maryland, the state system had just such a statement, the “Maryland C Standard.”)

 

4. I’d like to see the faculty adopt a policy that, in order to pass first-year writing, students must meet the minimum standard of every writing criterion. Thus, if student work is graded using a rubric, the grade isn’t determined by averaging the scores on various rubric criteria—that lets a student with A arguments but F grammar earn a C with failing grammar. Instead, students must earn at least a C on every rubric criterion in order to pass the assignment. Then the As, Bs, and Cs can be averaged into an overall grade for the assignment.
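Here's a small sketch of that grading logic, with a hypothetical 4-point scale, showing how a single failing criterion blocks a passing grade even when the average alone looks acceptable:

```python
# A small sketch of "meet the minimum on every criterion, then average."
# Rubric scores use a hypothetical 4-point scale: A=4, B=3, C=2, D=1, F=0.
PASSING_MINIMUM = 2  # a C

def assignment_grade(rubric_scores):
    # Gate first: any criterion below a C means the assignment doesn't pass.
    if any(score < PASSING_MINIMUM for score in rubric_scores.values()):
        return "Not passing: at least one criterion is below a C"
    # Only then average the criterion scores into an overall grade.
    average = sum(rubric_scores.values()) / len(rubric_scores)
    return f"Passing, average score {average:.1f}"

# A arguments but F grammar: a plain average would look like roughly a C,
# but the minimum-on-every-criterion rule catches the failing grammar.
print(assignment_grade({"arguments": 4, "organization": 3, "grammar": 0}))
print(assignment_grade({"arguments": 3, "organization": 2, "grammar": 2}))
```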

 

(If this sounds vaguely familiar to you, what I’m suggesting is the essence of competency-based education: students need to demonstrate competence on all learning goals and objectives in order to pass a course or graduate. Failure to achieve one goal or objective can’t be offset by strong performance on another.)

 

5. If they haven’t done so already, I’d also like to see the faculty responsible for first-year writing adopt a common rubric, articulating the criteria they’ve identified, that would be used to assess and grade the final assignment in every section, no matter who teaches it. This would make it easy to study student performance across all sections of the course and identify pervasive strengths and weaknesses in their writing. If some faculty members or TAs have additional grading criteria, they could simply add those to the common rubric. For example, I graded my students on their use of citation conventions, even though that was not part of the Maryland C Standard. I added that to the bottom of my rubric.

 

6. Because work habits are essential to success in college, I’d also suggest making this a separate learning outcome for first-year writing courses. This means grading students separately on whether they turn in work on time, put in sufficient effort, etc. This would help everyone understand why some students fail to graduate—is it because of poor writing skills, poor work habits, or both?

 

These ideas all move responsibility for addressing the problem from administrators to the faculty. That responsibility can’t be fulfilled unless the faculty commit to collaborating on identifying and implementing a shared strategy so that every student, no matter which section of writing they enroll in, passes the course with the skills needed for subsequent success.

Helping Students Evaluate the Credibility of Sources

Posted on September 16, 2016 at 6:05 AM | Comments (0)

Like many Americans, I have been appalled by this year’s presidential election train wreck. I am dismayed in so many ways, but perhaps no more so than by the many American citizens who either can’t or choose not to distinguish between credible and what I like to call incredible sources of information. Clearly we as educators are not doing enough to help our students learn how to do this.

 

I think part of the problem is that we in higher education have historically focused on teaching our students to use only academic library resources, which have been vetted by professionals and are therefore credible. But today many of our students will never access a college library after they graduate—they’ll be turning to what I call the Wild West of the internet for information. So today it’s vital that we teach our students how to vet information themselves.

 

A number of years ago, I had my students in my first-year writing courses write a research paper using only online sources. Part of their assignment was to identify both credible and non-credible sources and explain why they found some credible and others not. Here’s the guidance I gave them:

 

Evaluating sources is an art, not an exact science, so there is no one set of rules that will help you definitively separate credible sources from non-credible sources. Instead, you have to use thinking skills such as analysis and evaluation to judge for yourself whether a source is sufficiently credible for you to use in your research. The following questions will help you decide.

 

What is the purpose of the source? Serious sources are more credible than satiric or humorous ones. Sources intended to inform, such as straight news stories, may be more credible than those intended to persuade, such as editorials, commentaries, and letters to the editor, which may be biased.

 

Is the author identified? A source with an identified author(s) may be more credible than one without an author, although authoritative organizations (e.g., news organizations, professional associations) may publish credible material without an identified author.

 

Who is the author? A credible source is written by someone with appropriate education, training, or experience to write with authority on the topic. An unknown writer is less credible than a frequently published writer, and a student is less credible than a professor. If you feel you need more information on the author, do a database search and/or Google search for the author’s name.

 

Who published or sponsored the source? A scholarly journal is generally more credible than a popular magazine or newspaper. Sources whose purpose is to sell a product or point of view—including many “news” organizations and websites—may be less credible than those whose purpose is to provide impartial information and services. A website with a URL extension of .edu, .gov, or .org may be more credible than one ending in .com (but not necessarily--.edu, .gov, and .org sites often exist to promote a particular point of view). A source published by a reputable publisher or organization is often more credible than one published independently by the author or one published by a fly-by-night organization, because a reputable publisher or organization provides additional review and quality control.

 

How complete is the source’s information? Sources with more complete coverage of a topic may be more credible than those that provide limited coverage.

 

Is the content balanced or biased? Sources that present a balanced point of view are often more credible than those that clearly have a vested interest in the topic. If the author argues for one point of view, does he or she present opposing views fairly and refute them persuasively?

 

Are information, statements, and claims documented or unsupported? Sources that provide thorough, complete documentation for their information and claims are generally more credible than those that make unsupported or scantily-supported statements or claims. For example, information based on a carefully-designed research project is more credible than information based only on the author’s personal observations.

 

Has the source, author, publisher, and/or sponsor been recognized by others as credible? Sources found through academic databases such as Lexis Nexis or Infotrac are more credible than those only found through Google. Sources frequently reviewed, cited, or linked by others are more credible than those that no other expert or authority mentions or uses. You can do a database search and/or Google search for reviews of a source and to see how often it has been cited or linked by others. To look for links to a source, search on Google for “link:” and the URL (e.g., link:www.towson.edu) and see how many links are found.

 

Is the material well-written? Material that is clear, well-organized and free of spelling and grammatical errors is more credible than poorly-written material.

 

What is the date the material was published or last updated? Material with a clear publication date is more credible than undated material. For time-sensitive research topics, recent information is more credible than older information. Web sources that are updated regularly and well-maintained (e.g., no broken links) may be more credible than those that are posted and then neglected.

 

What are your own views and opinions? Don’t bring prejudices to your search. It’s easy to think that sources with which you agree are more credible than those with which you disagree. Keep an open, critical mind throughout your search, and be willing to modify your thesis or hypothesis as you learn more about your topic.

Rubric events this fall!

Posted on August 28, 2016 at 8:40 AM | Comments (0)

This fall is the Semester of the Rubric for me. I'm doing sessions called "Everything I Thought I Knew About Rubrics Was Wrong" at the Assessment Institute in Indianapolis on October 17 and at the NEEAN Fall Forum at the College of the Holy Cross in Worcester, Massachusetts, on November 4. If you are in the Middle East, Africa, Europe, or Asia, I'm doing a workshop on "Building Rubrics" at the Assessment Leadership Conference at United Arab Emirates University in Al Ain on November 14-15.


On another note, on January 17 I'm doing "Building a Culture of Quality," a retreat for institutional leaders sponsored by WASC at the Kellogg West Conference Center in Pomona, California.


For more information on any of these events, visit www.lindasuskie.com/upcoming-events. I hope to see you!

Can rubrics impede learning?

Posted on August 18, 2016 at 12:40 AM | Comments (6)

Over the last couple of years, I’ve started to get some gentle pushback from faculty on rubrics, especially from those teaching graduate students. Their concern is whether rubrics might provide too much guidance, serving as a crutch when students should be figuring things out on their own. One recent question from a faculty member expressed the issue well: “If we provide students with clear rubrics for everything, what happens when they hit the workplace and can’t figure out on their own what to do and how to do it without supervisor hand-holding?”

 

It’s a valid point, one that ties into the lifelong learning outcome that many of us have for our students: we want to prepare them to self-evaluate and self-correct their work. I can think of two ways we can help students develop this capacity without abandoning rubrics entirely. One possibility would be to make rubrics less explicit as students progress through their program. First-year students need a clear explanation of what you consider good organization of a paper; seniors and grad students shouldn’t. The other possibility—which I like better—would be to have students develop their own rubrics, either individually or in groups, subject, of course, to the professor’s review.

 

In either case, it’s a good idea to encourage students to self-assess their work by completing the rubric themselves—and/or have a peer review the assignment and complete the rubric—before turning it in. This can help get students in the habit of self-appraising their work and taking responsibility for its quality before they hit the workplace.

 

Do you have any other thoughts or ideas about this? Let me know!

When might a nationally-accredited school be a good fit for regional accreditation?

Posted on July 25, 2016 at 11:10 AM | Comments (0)

American accreditors fall into three broad groups: regional, national, and specialized. Of the three, regional accreditation is often seen as the most desirable for several reasons. First, regional accreditors are among the oldest accreditors in the U.S. and accredit the most prestigious institutions, giving them an image of quality. Second, employers are increasingly requiring job applicants to hold degrees from regionally accredited institutions. Third, some specialized accreditors require accredited programs to be in a regionally accredited institution. And finally, despite Federal regulations to the contrary, students from nationally-accredited institutions sometimes find it hard to transfer their credits elsewhere or to pursue a more advanced degree.

 

For all these reasons, nationally-accredited institutions sometimes consider pursuing regional accreditation. Unfortunately, in many instances regional accreditation is simply not a good fit—it’s like trying to fit a square peg into a round hole. Then the institution may either fail in its efforts to earn regional accreditation or, once accredited, run into problems maintaining its accreditation.

 

When might regional accreditation be a good fit?

 

1. Regional accreditation is only open to institutions that award at least one degree. If your institution offers only certificates and/or diplomas, it isn’t eligible.

 

2. Regional accreditors require all undergraduate degree programs to include certain components, including a general education or core curriculum in which students study the liberal arts and develop certain skills and competencies.

 

3. Regional accreditors require a system of shared collegial governance. While none prescribes a particular governance system, all require that the respective roles, responsibilities, and authority of the board, leadership, administration, and faculty be clearly articulated. And an implicit expectation is that the institutional culture be one of communication and collaboration; regional accreditation simply becomes very difficult without these.

 

4. Because regional accreditors accredit a vast array of institutions, their standards are relatively imprecise, more a set of principles that are applied within the context of each institution’s mission. Regional accreditation is therefore a process that requires considerable time, thought, and effort by many members of the institutional community, not a task to be delegated to someone.

 

5. Regional accreditors expect a commitment to ongoing improvement beyond the minimum required for accreditation. Regional accreditation is not appropriate for an institution content to teeter on the edge of the bare minimum required for compliance.

 

6. Regional accreditors expect a commitment to collegiality within and across institutions. Volunteer peers from other institutions will work with your institution, and the accreditor expects your institution to return the favor once accredited, providing volunteer peer evaluators, presenting at conferences, and so on.

 

7. Regional accreditors expect a board that is empowered and committed to act in the best interests of the institution and its students. Again, regional accreditors are not prescriptive about board make-up and duties, but they want to see a board that has the commitment, capacity and authority to act in the institution’s best interests. Suppose, for example, that the president/CEO/owner develops early-onset Alzheimer’s and begins to make irrational decisions that are not in the best interest of the institution. Can the board bring about a change in leadership? If the board heads a corporation, can it put institutional quality ahead of immediate shareholder return on investment? If the board oversees other entities that are troubled, such as a church, hospital, or another educational institution, can it put the best interests of the accredited institution first, or will it be tempted to rob Peter to pay Paul?

 

Some shameless self-promotion here: my book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability aims to explain what regional accreditors are looking for in plain terms. If your nationally-accredited institution is considering moving to regional accreditation, I think the book is a worthwhile investment.

Meaningful assessment of AA/AS transfer programs

Posted on July 9, 2016 at 7:45 AM | Comments (2)

I often describe the teaching-learning-assessment process as a four-step cycle:

1. Clear learning outcomes

2. A curriculum and pedagogies designed to provide students with enough learning opportunities to achieve those outcomes

3. Assessment of those outcomes

4. Use of assessment results to improve the other parts of the cycle: learning outcomes, curriculum, pedagogies, and assessment


I also often point out that, if faculty are struggling to figure out how to assess something, the problem is often not assessment per se but the first two steps. After all, if you have clear outcomes and you’re giving students ample opportunity to achieve them, you should be grading students on their achievement of those outcomes, and there’s your assessment evidence. So the root cause of assessment struggles is often poorly articulated learning outcomes, a poorly designed curriculum, or both.


I see this a lot in the transfer AA/AS degrees offered by community colleges. As I explained in my June 20 blog entry, these degrees, designed for transfer into a four-year college major, typically consist of 42-48 credits of general education courses plus 12-18 credits related to the major. The general education and major-related components are often what I call “Chinese menu” curricula: Choose one course from Column A, two from Column B, and so on. (Ironically, few Chinese restaurants have this kind of menu anymore, but people my age remember them.)

 

The problem with assessing these programs is the second step of the cycle, as I explained in my June 20 blog. In many cases these aren’t really programs; they’re simply collections of courses without coherence or progressive rigor. That makes it almost impossible either to define meaningful program learning outcomes (the first step of the cycle) or to assess them (the third step of the cycle).

 

How can you deal with this mess? Here are my suggestions.

 

1. Clearly define what a meaningful “program” is. As I explained in my June 20 blog entry, many community colleges are bound by state or system definitions of a “program” that aren’t meaningful. Regardless of the definition to which you may be bound, I think it makes the most sense to think of the entire AA/AS degree as the program, with the 12-18 credits beyond gen ed requirements as a concentration, specialization, track or emphasis of the program.


2. Identify learning outcomes for both the degree and the concentration, recognizing that there should often be a relation between the two. In gen ed courses, students develop important competencies such as writing, analysis, and information literacy. In their concentration, they may achieve some of those competencies at a deeper or broader level, or they may achieve additional outcomes. For example, students in social science concentrations may develop stronger information literacy and analysis skills than students in other concentrations, while students in visual arts concentrations may develop visual communication skills in addition to the competencies they learn in gen ed.


Some community colleges offer AA/AS degrees in which students complete gen ed requirements plus 12-18 credits of electives. In these cases, students should work with an advisor to identify their own, unique program/concentration learning outcomes and select courses that will help them achieve those outcomes.


3. Use the following definition of a program (or concentration) learning outcome: Every student in the program (or concentration) takes at least two courses with learning activities that help him or her achieve the program learning outcome. This calls for fairly broad rather than course-specific learning outcomes.


If you’re struggling to find outcomes that cross courses, start by looking at course syllabi for any common themes in course learning outcomes. Also think about why four-year colleges want students to take these courses. What are students learning, beyond content, that will help them succeed in upper division courses in the major? In a pre-engineering program, for example, I’d like to think that the various science and math courses students take help them graduate with stronger scientific reasoning and quantitative skills than students in non-STEM concentrations.


4. Limit the number of learning outcomes; quality is more important than quantity here. Concentrations of 12-18 credits might have just one or two.

 

5. Also consider limiting your course options by consolidating Chinese-menu options into more focused pathways, which we are learning improve student success and completion. I’m intrigued by what Alexandra Waugh calls “meta-majors”: focused pathways that prepare students for a cluster of four-year college majors, such as health sciences, engineering, or the humanities, rather than just one.


6. Review your curricula to make sure that every student, regardless of the courses he or she elects, will graduate with a sufficiently rigorous achievement of every program (and concentration) learning outcome. An important principle here: There should be at least one course in which students can demonstrate achievement of the program learning outcome at the level of rigor expected of an associate degree holder prepared to begin junior-level work. In many cases, an entry-level course cannot be sufficiently rigorous; your program or concentration needs at least one course that cannot be taken the first semester. If you worry that prerequisites may be a barrier to completion, consider Passaic County Community College’s approach, described in my June 20 blog.


7. Finally, you’ve got meaningful program learning outcomes and a curriculum designed to help students achieve them at an appropriate level of rigor, so you're ready to assess those outcomes. The course(s) you’ve identified in the last step are where you can assess student achievement of the outcomes. But one additional challenge faces community colleges: many students transfer before taking this “capstone” course. So also identify a program/concentration “cornerstone” course: a key course that students often take before they transfer that helps students begin to achieve one or more key program/concentration learning outcomes. Here you can assess whether students are on track to achieve the program/concentration learning outcome, though at this point they probably won’t be where you want them by the end of the sophomore year.

A big community college issue: Degree programs that really aren't

Posted on June 20, 2016 at 11:30 AM | Comments (3)

Over the years I’ve worked with myriad community colleges, large and small, in dozens of states throughout the United States. More than many other higher ed sectors, community colleges truly focus on helping students learn, making assessment a relatively easy sell and making community colleges some of my favorites to work with.

 

But I’m seeing an issue at community colleges throughout the United States that deeply troubles me and can make assessment of program learning outcomes almost impossible. The issue flows from the two kinds of associate degree programs that community colleges offer. One kind is what many call “career and technical education” (CTE) programs. Often A.A.S. degrees, these are designed to prepare students for immediate employment. The other kind is what many call “transfer programs”: A.A. or A.S. programs, often named something like “General Studies” or “Liberal Education,” that are designed to prepare students to transfer into baccalaureate programs at four-year colleges.

 

The problem I’m seeing is that many of these programs, especially on the transfer side, aren’t really programs. Here’s how the regional accreditors’ standards define programs:

 

  • ACCJC: “Appropriate breadth, depth, rigor, sequencing, time to completion, and synthesis of learning”
  • HLC: “Require levels of performance by students appropriate to the degree or certificate awarded”
  • MSCHE: “Characterized by rigor and coherence… designed to foster a coherent student learning experience and to promote synthesis of learning”
  • NEASC effective July 1, 2016: “Coherent design and… appropriate breadth, depth, continuity, sequential progression, and synthesis of learning”
  • NWCCU: “Rigor that [is] consistent with mission… A coherent design with appropriate breadth, depth, sequencing of courses, and synthesis of learning”
  • SACS: “A coherent course of study”
  • WSCUC: “Appropriate in content, standards of performance, [and] rigor”

 

 

There’s a theme here: A collection of courses is not a program and, conversely, a program is more than a collection of courses. A true program has both coherence and rigor. In order for this to happen, some courses must be more advanced than others and build on what’s been learned in earlier courses. That means that some program courses should be at the 200-level and have prerequisites.

 

But many community college degree “programs” are in fact collections of courses, nothing more.

 

  • Many transfer degree “programs” consist of 42 or 45 credits of general education courses—virtually all introductory 100-level courses—plus another 12-18 credits of electives, sometimes in an area of specialization, sometimes not.
  • At virtually every community college I’ve visited, it’s entirely possible for students to complete an associate degree in at least one “program” by taking only 100-level courses.
  • In some disciplines, “program” courses are largely cognate requirements (such as physics for an engineering program) with perhaps only one course in the program discipline itself.
  • And on top of all this, any 200-level courses in the “program” are often sophomore-level in name only; they have no prerequisites and appear no more rigorous than 100-level courses.

 

 

Two years of 100-level study does not constitute an associate degree and does not prepare transfer students for the junior-level work they will face when they transfer. And a small handful of introductory courses does not constitute an associate degree program.

 

Turning community college associate degree programs into true programs with rigor and coherence is remarkably difficult. Among the barriers:

 

  • Some systems and states prohibit community colleges from offering associate degrees in liberal arts or business disciplines, as Charlene Nunley, Trudy Bers, and Terri Manning note in NILOA’s Occasional Paper #10, “Learning Outcomes Assessment in Community Colleges.”
  • In other systems and states, plenty of community college faculty have told me that their counterparts at local four-year colleges don’t want them to teach anything beyond introductory courses—the four-year faculty want to teach the rest themselves. (My reaction? What snobs.)
  • Yet other community college faculty have told me that they have felt pressure from the Lumina Foundation’s completion agenda to eliminate all course prerequisites.

 

 

So at some community colleges nothing can be done until laws, regulations, or policies are changed, leaving thousands of students in the meanwhile with a shortchanged education. But there are plenty of other community colleges that can do something. I’m particularly impressed with Passaic County Community College’s approach. Every degree program, even those in the liberal arts, has designated one 200-level course as its program capstone. The course is open only to students who have completed at least 45 credits and have taken at least one other course in the discipline. For the English AA, for example, this course is “Topics in Literature,” and for the Psychology option in the Liberal Arts AA, this course is “Social Psychology.” It’s a creative solution to a pervasive problem.

Join me in Indiana, Nebraska, California, and the Middle East

Posted on June 11, 2016 at 7:05 AM | Comments (0)

Over the coming months I'll be speaking or doing workshops in a variety of public venues. If your schedule permits, please join me--I'd love to see you! For more information on any of these events, visit www.lindasuskie.com/upcoming-events.

  • On June 23, I'll be doing a post-conference workshop on "Meaningful Assessment of Program Learning Outcomes" at the Innovations 2016 in Faith-Based Nursing Conference at Indiana Wesleyan University in Marion.
  • On August 8, I'll be doing a workshop on "Using Assessment Results to Understand and Improve Student Learning" at Nebraska Wesleyan University in Omaha, sponsored by Nebraska Wesleyan University and Concordia, Doane, and Union Colleges.
  • On October 17 or 18 (date and time TBA), I'll be doing a session titled "Everything I Thought I Knew About Rubrics Was Wrong" at the 2016 Assessment Institute in Indianapolis.
  • On November 15 or 16 (date and time TBA), I'll be doing a session or workshop (topic to be announced) at the inaugural Assessment Leadership Conference sponsored by United Arab Emirates University in Al Ain.
  • On January 17, I'll be facilitating "Building a Culture of Quality: A Retreat for Institutional Leaders," hosted by the WASC Senior College and University Commission, at the Kellogg West Conference Center in Pomona, California.

What does your website say about your institution?

Posted on May 27, 2016 at 12:40 AM | Comments (3)

Part of my preparation for working with or visiting any college is visiting its website. I’m looking for basic “get acquainted” info to help me understand the college and therefore do a better job helping it. The information I’m looking for often includes things like the following:


  • How big is the institution? This helps me because large institutions may need different assessment or accreditation support structures than small ones.
  • What are its mission, vision, and strategic goals? This helps me because assessment and accreditation work should focus on institutional achievement of its mission, vision, and strategic goals.
  • Who “owns” the institution? Is it public, private non-profit, or private for-profit? Who founded it, and how long ago? This gives me insight into possible unstated values of the institution. For example, an institution founded by a religious denomination may still abide by some of the denomination’s tenets, even if it is now independent. A public institution is typically under pressure to be all things to all people and may therefore be stretched too thin.
  • Who accredits the institution? Helpful for obvious reasons!
  • What kinds of programs does it offer? This helps me because professional/career programs often need different assessment approaches or support than liberal arts programs.
  • How are the institution’s academic programs organized? Sometimes there are several schools within a college or several colleges within a university.
  • How many programs does it offer? An institution offering 250 programs needs a different assessment structure than one offering 25.
  • What is its gen ed curriculum, and what are its gen ed learning outcomes? This can be helpful because I often work with colleges on identifying and assessing gen ed learning outcomes.

 

(Ironically, I never look for any assessment information on the college’s website. I know it’s not there. Yes, there may be a home page for the assessment office, usually full of guidelines on how to fill out report templates, and perhaps with links to some assessment reports. But I haven’t yet found a college website that clearly answers, for me and other visitors, “What are the most important things we want students to learn here, and how well are they learning them?” So I don’t bother looking anymore.)

 

Yes, I could ask my contacts at the institution for all this information (and if I can’t find it on the website, I do), but poking around the website gives me additional insight:


  • Does the institution have a clear sense of its identity and priorities? I worry about colleges with incredibly cluttered home pages, full of announcements about recent and upcoming events, maybe some research, registration reminders, and links to intranet portals. It’s the throw-everything-but-the-kitchen-sink-and-see-what-works approach, and I worry that this is the approach they take to everything else they do.
  • Most colleges publish their mission, but a remarkable number don’t publish their strategic plan. This gives me the impression that they don’t want public stakeholders (community members, businesses, government policymakers) to get on board and support their plans.
  • Some university websites list their programs by school or college—to find the Visual Communications program, you first have to discern whether it’s offered in the College of Business, the College of Art, or the College of Liberal Arts. Most prospective students are interested in particular programs and don’t care which college houses them, so this raises a concern that the institution may be more faculty-centered than student-centered.
  • Sometimes the colleges/schools and the programs within them are just plain odd. I remember one institution that had a visual communication program in the business college and a graphic design program in the art college—and of course they offered completely separate curricula and didn’t talk to each other! These oddities often suggest silos and turf wars.
  • Sometimes a college offers, say, 150 programs for 2500 students. This is a college that’s stretching its resources too far—probably some of those programs are too small to be effective.
  • I sometimes need Indiana Jones to track down gen ed requirements. Here’s one (sadly typical) example: from the home page, I clicked on Academics, then Academic Catalogs, then 2015-2016 Undergraduate Catalog, then Colleges and Schools, then College of Liberal Arts & Sciences, then (finally) General Education Requirements. The only conclusion I can draw is that colleges and universities are embarrassed by their gen ed requirements, which doesn’t say much about the real value we place on the liberal arts.

 

Now, I know I’m not a typical visitor to your college’s website, but I’m sure I’m not the only stakeholder interested in this kind of information…and perhaps drawing these kinds of conclusions about your college. At a minimum, your accreditation reviewers will probably visit your website looking for things very similar to what I look for.

 

For more ideas on common flaws in college websites, visit www.ecampusnews.com/featured/featured-on-ecampus-news/college-website-mistakes/.

Fixing assessment in American higher education

Posted on May 7, 2016 at 9:00 AM Comments comments (4)

In my April 25 blog post, “Are our assessment processes broken?” I listed five key problems with assessment in the United States. Can we fix them? Yes, we can, primarily because today we have a number of organizations and entities that can tackle them.


 

Here are five steps that I think will dramatically improve the quality and effectiveness of student learning assessment in the United States.

 

1. Develop a common vocabulary. So much time is wasted debating the difference between a learning outcome and a learning objective, for example. The assessment movement is now mature enough that we can develop a common baseline glossary of those terms that continue to be muddy or confusing.

 

2. Define acceptable, good, and best assessment practices. Yes, many accreditors provide professional development on assessment for their membership and reviewers, but trainers often focus on best practices rather than minimally acceptable practices. This leads reviewers to unnecessarily “ding” institutions on relatively minor points (say, learning outcomes that don’t start with action verbs) while missing the forest: say, piles of assessment evidence that aren’t being used meaningfully.

 

Specifically, we need to practice what we preach with one or more rubrics that list the essential elements of assessment and define best, good, acceptable, and unacceptable performance levels for each criterion. Fortunately, we have some good models to build on: NILOA’s Excellence in Assessment designation criteria, CHEA’s awards for Effective Institutional Practice in Student Learning Outcomes, recognition criteria developed by the (now defunct) New Leadership Alliance for Student Learning and Accountability, and rubrics developed by some accreditors. Then we need to educate institutions and accreditation reviewers on how to use the rubric(s).

 

3. Focus less on student learning assessment (and its cousins, student achievement, student success, and completion) and more on teaching and learning. I would love to see Lumina focus on excellent teaching (including engagement) as a primary strategy to achieve its completion agenda—getting more faculty to adopt research-informed pedagogies that help students learn and succeed. I’d also like to see accreditors include the use of research- and evidence-informed teaching practices in their definitions of educational excellence.

 

4. Communicate clearly and succinctly with various audiences what our students are learning and what we're doing to improve learning. I haven't yet found an institution that does this really well. Capella comes closest to impressing me, but even Capella presents only results, not what it's doing to improve learning. I'm intrigued by the concept of infographics and wish I'd studied graphic design! A partnership with some graphic designers (student interns or class project?) might help us come up with some effective ways to tell our complicated stories.

 

5. Focus public attention on student learning as an essential part of student success. As Richard DeMillo recently pointed out, we need to find meaningful alternatives to US News rankings that focus on what’s truly important—namely, student learning and success. The problem has always been that student learning and success are so complex that they can’t be summarized into a brief set of metrics.

But the U.S. Department of Education has opened an intriguing possibility. At the very end of his April 22 letter to accreditors, Undersecretary Ted Mitchell noted that “Accreditors may…develop tiers of recognition, with some institutions or programs denoted as achieving the standards at higher or lower levels than others.” Accreditors thus now have an opportunity to commend publicly those institutions that achieve certain standards at a (clearly defined) “best practice” level. Many standards would not be of high interest to most members of the public, and input-based standards (resources) would only continue to recognize the wealthiest institutions. But commendations for best practices in things like research- and evidence-informed teaching methods and student development programs, serving the public good, and meeting employer needs with well-prepared graduates (documented through assessment evidence and rigorous standards) could turn this around and focus everyone on what’s most important: making sure America’s college students get great educations.

Are our assessment processes broken?

Posted on April 25, 2016 at 11:30 AM Comments comments (6)

Wow…my response to Bob Shireman’s paper on how we assess student learning really touched a nerve. Typically about 50 people view my blog posts, but my response to him got close to 1000 views (yes, there’s no typo there). I’ve received a lot of feedback, some on the ASSESS listserv, some on LinkedIn, some on my blog page, and some in direct e-mails, and I’m grateful for all of it. I want to acknowledge in particular the especially thoughtful responses of David Dirlam, Dave Eubanks, Lion Gardiner, Joan Hawthorne, Jeremy Penn, Ephraim Schechter, Jane Souza, Claudia Stanny, Reuben Ternes, Carl Thompson, and Catherine Wehlburg.


The feedback I received reinforced my views on some major issues with how we now do assessment:


Accreditors don’t clearly define what constitutes acceptable assessment practice. Because of the diversity of institutions they accredit, regional accreditors are deliberately flexible. HLC, for example, says only that assessment processes should be “effective” and “reflect good practice,” while Middle States now says only that they should be “appropriate.” Most of the regionals offer training on assessment to both institutions and accreditation reviewers, but the training often doesn’t distinguish between best practice and acceptable practice. As a result, I heard stories of institutions getting dinged because, say, their learning outcomes didn’t start with action verbs or their rubrics used fuzzy terms, even though no regional requires that learning outcomes be expressed in a particular format or that rubrics be used at all.


And this leads to the next major issues…


We in higher education—including government policymakers—don’t yet have a common vocabulary for assessment. This is understandable—higher ed assessment is still in its infancy, after all, and what makes this fun to me is that we all get to participate in developing that vocabulary. But right now terms such as “student achievement,” “student outcomes,” “learning goal,” and even “quantitative” and “qualitative” mean very different things to different people.


We in the higher ed assessment community have not yet come to consensus on what we consider acceptable, good, and best assessment practices. Some assessment practitioners, for example, think that assessment methods should be validated in the psychometric sense (with evidence of content and construct validity, for example), while others consider assessment to be a form of action research that needs only evidence of consequential validity (are the results of good enough quality to be used to inform significant decisions?). Some assessment practitioners think that faculty should be able to choose to focus on assessment “projects” that they find particularly interesting, while others think that, if you’ve established something as an important learning outcome, you should be finding out whether students have indeed learned it, regardless of whether or not it’s interesting to you.


Is all our assessment work making a difference? Assessment and accreditation share two key purposes: first, to ensure that our students are indeed learning what we want them to learn and, second, to make evidence-informed improvements in what we do, especially in the quality of teaching and learning. Too many of us—institutions and reviewers alike—are focusing too much on how we do assessment and not enough on its impact.


We’re focusing too much on assessment and not enough on teaching and curricula. While virtually all accreditors talk about teaching quality, for example, few expect that faculty use research-informed teaching methods, that institutions actively encourage experimentation with new teaching methods or curriculum designs, or that institutions invest significantly in professional development to help faculty improve their teaching.


What can we do about all of this? I have some ideas, but I’ll save them for my next blog post.

 

A response to Bob Shireman on "inane" SLOs

Posted on April 10, 2016 at 8:55 AM Comments comments (11)

You may have seen Bob Shireman's essay "SLO Madness" in the April 7 issue of Inside Higher Ed or his report, "The Real Value of What Students Do in College." I sent him the following response today:


I first want to point out that I agree wholeheartedly with a number of your observations and conclusions.


1. As you point out, policy discussions too often “treat the question of quality—the actual teaching and learning—as an afterthought or as a footnote.” The Lumina Foundation and the federal government use the term “student achievement” to discuss only retention, graduation, and job placement rates, while the higher ed community wants to use it to discuss student learning as well.


2. Extensive research has confirmed that students’ engagement in their learning affects both learning and persistence. You cite Astin’s 23-year-old study; it has since been validated and refined by the research of Vincent Tinto, Patrick Terenzini, Ernest Pascarella, and the staff of the National Survey of Student Engagement, among many others.


3. At many colleges and universities, there’s little incentive for faculty to try to become truly great teachers who engage and inspire their students. Teaching quality is too often judged largely by student evaluations that may have little connection to research-informed teaching practices, and promotion and tenure decisions are too often based more on research productivity than teaching quality. This is because there’s more grant money for research than for teaching improvement. A report from Third Way noted that “For every $100 the federal government spends on university-led research, it spends 24 cents on teaching innovation at universities.”


4. We know through neuroscience research that memorized knowledge is quickly forgotten; thinking skills are the lasting learning of a college education.


5. “Critical thinking” is a nebulous term that, frankly, I’d like to banish from the higher ed lexicon. As you suggest, it’s an umbrella term for an array of thinking skills, including analysis, evaluation, synthesis, information literacy, creative thinking, problem solving, and more.


6. The best evidence of what students have learned is in their coursework—papers, projects, performances, portfolios—rather than what you call “fabricated outcome measures” such as published or standardized tests.


7. You call for accreditors to “validate colleges’ own quality-assurance systems,” which is exactly what they are already doing. Many colleges and universities offer hundreds of programs and thousands of courses; it’s impossible for any accreditation team to review them all. So evaluators often choose a random or representative sample, as you suggest.


8. Our accreditation processes are far from perfect. The decades-old American higher education culture of operating in independent silos and evaluating quality by looking at inputs rather than outcomes has proved to be a remarkably difficult ship to turn around, despite twenty years of earnest effort by accreditors. There are many reasons for this, which I discuss in my book Five Dimensions of Quality, but let me share two here. First, US News & World Report’s rankings are based overwhelmingly on inputs rather than outcomes; there’s a strong correlation with institutional age and wealth. Second, most accreditation evaluators are volunteers, and training resources for them are limited. (Remember that everyone in higher education is trying to keep costs down.)


9. Thus, despite a twenty-year focus by accreditors on requiring useful assessment of learning, there are still plenty of people at colleges and universities who don’t see merit in looking at outcomes meaningfully. They don’t engage in the process until accreditors come calling; they continue to have misconceptions about what they are to do and why; and they focus blindly on trying to give the accreditors whatever they think the accreditors want rather than using assessment as an opportunity to look at teaching and learning usefully. This has led to some of your sad anecdotes about convoluted, meaningless processes. Using Evidence of Student Learning to Improve Higher Education, a book by George Kuh and his colleagues, is full of great ideas on how to turn this culture around and make assessment work truly meaningful and useful to faculty.


10. Your call for reviews of majors and courses is sound and, indeed, a number of regional accreditors and state systems already require academic programs to engage in periodic “program review.” There’s room for improvement, however. Many program reviews follow the old “inputs” model, counting library collections, faculty credentials, lab facilities, and the like, and do not yet focus sufficiently on student learning.

 

Your report has some fundamental misperceptions, however. Chief among them is your assertion that the three-step assessment process—declare goals, seek evidence of student achievement of them, and improve instruction based on the results—“hasn’t worked out that way. Not even close.” Today there are faculty and staff at colleges and universities throughout the country who have completed these three steps successfully and meaningfully. Some of these stories are documented in the periodical Assessment Update, some are documented on the website of the National Institute for Learning Outcomes Assessment (www.learningoutcomeassessment.org), some are documented by the staff of the National Survey of Student Engagement, and many more are documented in reports to accreditors.


In dismissing student learning outcomes as “meaningless blurbs” that are the key flaw in this three-step process, you are dismissing what a college education is all about and what we need to verify. Student learning outcomes are simply an attempt to articulate what we most want students to get out of their college education. Contrary to your assertion that “trying to distill the infinitely varied outcomes down to a list… likely undermines the quality of the educational activities,” research has shown that students learn more effectively when they understand course and program learning outcomes.


Furthermore, without a clear understanding of what we most want students to learn, assessment is meaningless. You note that “in college people do gain ‘knowledge’ and they gain ‘skills,’” but are they gaining the right knowledge and skills? Are they acquiring the specific abilities they most need “to function in society and in a workspace,” as you put it? While, as you point out, every student’s higher education experience is unique, there is nonetheless a core of competencies that we should expect of all college graduates and whose achievement we should verify. Employers consistently say that they want to hire college graduates who can:

  • Collaborate and work in teams
  • Articulate ideas clearly and effectively
  • Solve real-world problems
  • Evaluate information and conclusions
  • Be flexible and adapt to change
  • Be creative and innovative
  • Work with people from diverse cultural backgrounds
  • Make ethical judgments
  • Understand numbers and statistics

 

Employers expect colleges and universities to ensure that every student, regardless of his or her unique experience, can do these things at an appropriate level of competency.


You’re absolutely correct that we need to focus on examining student work (and we do), but how should we decide whether the work is excellent or inadequate? For example, everyone wants college graduates to write well, but what exactly are the characteristics of good writing at the senior level? Student learning outcomes, explicated into rubrics (scoring guides) that elucidate the learning outcomes and define excellent, adequate, and unsatisfactory performance levels, are vital to making this determination.
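
If it helps to see this concretely, here is a minimal sketch of what explicating a senior-level writing outcome into a rubric might look like, with each criterion given short descriptors for excellent, adequate, and unsatisfactory work. The criteria, descriptors, and sample evaluation below are invented for illustration; they are not drawn from any real institution’s rubric.

```python
# Illustrative sketch only: an invented senior-level writing rubric, represented as
# criteria mapped to performance-level descriptors.
senior_writing_rubric = {
    "Thesis and focus": {
        "excellent":      "States a clear, arguable thesis and sustains it throughout.",
        "adequate":       "States a thesis but drifts from it in places.",
        "unsatisfactory": "Has no discernible thesis or controlling idea.",
    },
    "Use of evidence": {
        "excellent":      "Integrates credible, well-chosen sources to support claims.",
        "adequate":       "Uses sources, but some are weak or loosely tied to claims.",
        "unsatisfactory": "Leaves claims largely unsupported.",
    },
    "Mechanics": {
        "excellent":      "Is virtually free of errors in grammar and usage.",
        "adequate":       "Has occasional errors that do not impede meaning.",
        "unsatisfactory": "Has frequent errors that interfere with meaning.",
    },
}

# Evaluating one piece of student work then amounts to recording a level per criterion.
sample_evaluation = {
    "Thesis and focus": "excellent",
    "Use of evidence": "adequate",
    "Mechanics": "adequate",
}

for criterion, level in sample_evaluation.items():
    print(f"{criterion}: {level} -- {senior_writing_rubric[criterion][level]}")
```

The point is simply that the descriptors, not the evaluator’s general impression, do the work of defining what “excellent” and “inadequate” mean.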


You don’t mention rubrics in your paper, so I can’t tell if you’re familiar with them, but in the last twenty years they have revolutionized American higher education. When student work is evaluated according to clearly articulated criteria, the evaluations are fairer and more consistent. Higher education curriculum and pedagogy experts such as Mary-Ann Winkelmes, Barbara Walvoord, Virginia Anderson, and L. Dee Fink have shown that, when students understand what they are to learn from an assignment (the learning outcomes), when the assignment is designed to help them achieve those outcomes, and when their work is graded according to how well they demonstrate achievement of those outcomes, they learn far more effectively. When faculty collaborate to identify shared learning outcomes that students develop in multiple courses, they develop a more cohesive curriculum that again leads to better learning.


Beyond having clear, integrated learning outcomes, there’s another critical aspect of excellent teaching and learning: if faculty aren’t teaching something, students probably aren’t learning it. This is where curriculum maps come in; they’re a tool to ensure that students do indeed have enough opportunity to achieve a particular outcome. One college that I worked with, for example, identified (and defined) ethical reasoning as an important outcome for all its students, regardless of major. But a curriculum map revealed that very few students took any courses that helped them develop ethical reasoning skills. The faculty changed curricular requirements to correct this and ensure that every student, regardless of major, graduated with the ethical reasoning skills that both they and employers value.
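
For readers who like to see the mechanics, here is a minimal sketch of a curriculum map as a simple data structure with an automated coverage check. The course titles, outcome labels, and two-course minimum are made-up assumptions, not the curriculum of the college described above.

```python
# Illustrative sketch only: map each required course to the program learning outcomes
# it gives students substantive opportunity to develop, then flag thin coverage.
curriculum_map = {
    "PHI 101 Introduction to Ethics": {"ethical reasoning", "written communication"},
    "ENG 102 Composition II":         {"written communication", "information literacy"},
    "BUS 310 Business Law":           {"ethical reasoning"},
    "XYZ 495 Senior Capstone":        {"written communication", "ethical reasoning"},
}

program_outcomes = {
    "ethical reasoning",
    "written communication",
    "information literacy",
    "quantitative reasoning",
}

MIN_COURSES = 2  # assumed minimum number of required courses addressing each outcome

for outcome in sorted(program_outcomes):
    courses = [c for c, addressed in curriculum_map.items() if outcome in addressed]
    flag = "OK" if len(courses) >= MIN_COURSES else "TOO FEW COURSES"
    print(f"{outcome:25} {len(courses)} course(s)  {flag}")
```

Real curriculum maps are usually built in spreadsheets through faculty discussion, and the conversation is where the value lies; the check above simply shows the kind of gap (here, quantitative reasoning and information literacy) that a map is meant to expose.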


I appreciate anyone who tries to come up with solutions to the challenges we face, but I must point out that your thoughts on program review may be impractical. External reviews are difficult and expensive. Keep in mind that larger universities may offer hundreds of programs and thousands of courses, and for many programs it can be remarkably hard—and expensive—to find a truly impartial, well-trained external expert.


Similarly, while a number of colleges and universities already subject student work to separate, independent reviews, this can be another difficult, expensive endeavor. With college costs skyrocketing, I question the cost-benefit: are these colleges learning enough from these reviews to make the time, work, and expense worthwhile?


I would add one item to your wish list, by the way: I’d like to see every accreditor require its colleges and universities to expect faculty to use research-informed teaching practices, including engagement strategies, and to evaluate faculty teaching effectiveness on their use of those practices.


But my chief takeaway from your report is not about its shortcomings but how the American higher education community has failed to tell you, other policy thought leaders, and government policy makers what we do and how well we do it. Part of the problem is, because American higher education is so huge and complex, we have a complicated, messy story to tell. None of you has time to do a thorough review of the many books, reports, conferences, and websites that explain what we are trying to do and our effectiveness. We have to figure out a way to tell our very complex story in short, simple ways that busy people can digest quickly.

Rubrics: Not too broad, not too narrow

Posted on April 3, 2016 at 6:50 AM Comments comments (2)

Last fall I drafted a chapter, “Rubric Development,” for the forthcoming second edition of the Handbook on Measurement, Assessment, and Evaluation in Higher Education. My literature review for the chapter was an eye-opener! I’ve been joking that everything I had been saying about rubrics was wrong. Not quite, of course!

 

One of the many things I learned is that what rubrics assess varies according to the decisions they inform, falling along a continuum from narrow to broad uses.

 

Task-specific rubrics, at the narrow end, are used to assess or grade one assignment, such as an exam question. They are so specific that they apply only to that one assignment. Because their specificity may give away the correct response, they cannot be shared with students in advance.

 

Primary trait scoring guides or primary trait analysis are used to assess a family of tasks rather than one specific task. Primary trait analysis recognizes that the essential or primary traits or characteristics of a successful outcome such as writing vary by type of assignment. The most important writing traits of a science lab report, for example, are different from those of a persuasive essay. Primary trait scoring guides therefore focus attention only on the traits that are relevant to a particular kind of task.

 

General rubrics are used with a variety of assignments. They list traits that are generic to a learning outcome and are thus independent of topic, purpose, or audience.

 

Developmental rubrics or meta-rubrics are used to show growth or progression over time. They are general rubrics whose performance levels cover a wide span of performance. The VALUE rubrics are examples of developmental rubrics.

 

The lightbulb that came on for me as I read about this continuum is that rubrics toward the middle of the continuum may be more useful than those at either end. Susan Brookhart has written powerfully about avoiding task-specific rubrics: “If the rubrics are the same each time a student does the same kind of work, the student will learn general qualities of good essay writing, problem solving, and so on… The general approach encourages students to think about building up general knowledge and skills rather than thinking about school learning in terms of getting individual assignments done.”

 

At the other end of the spectrum, developmental rubrics have a necessary lack of precision that can make them difficult to interpret and act upon. In particular, they’re inappropriate for assessing student growth within any one course.

 

Overall, I’ve concluded that one institution-wide developmental rubric may not be the best way to assess student learning, even of generic skills such as writing or critical thinking. As Barbara Walvoord has noted, “You do not need institution-wide rubric scores to satisfy accreditors or to get actionable information about student writing institution-wide.” Instead of using one institution-wide developmental rubric to assess student work, I’m now advocating using that rubric as a framework from which to build a family of related analytic rubrics: some for first year work, some for senior capstones, some for disciplines or families of disciplines such as the natural sciences, engineering, and humanities. Results from all these rubrics are aggregated qualitatively rather than quantitatively, by looking for patterns across rubrics. Yes, this approach is a little messier than using just one rubric, but it’s a whole lot more meaningful.
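
For those wondering what “aggregated qualitatively” might look like in practice, here is a minimal sketch that lays level-by-level results from two related rubrics side by side so patterns can be spotted by eye. The rubric names, criteria, and counts are invented for illustration.

```python
# Illustrative sketch only: tabulate the share of students at each performance level for
# each criterion of each rubric in the family, so reviewers can scan for patterns (say,
# "use of evidence" lagging at both levels) instead of forcing everything into one score.
from collections import Counter

results = {
    "First-year writing rubric": {
        "Organization":    Counter(excellent=12, adequate=41, unsatisfactory=17),
        "Use of evidence": Counter(excellent=8, adequate=35, unsatisfactory=27),
    },
    "Senior capstone writing rubric": {
        "Organization":    Counter(excellent=25, adequate=30, unsatisfactory=5),
        "Use of evidence": Counter(excellent=14, adequate=33, unsatisfactory=13),
    },
}

for rubric, criteria in results.items():
    print(rubric)
    for criterion, counts in criteria.items():
        total = sum(counts.values())
        shares = ", ".join(f"{level}: {n / total:.0%}" for level, n in counts.items())
        print(f"  {criterion:15} {shares}")
```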

