Linda Suskie

  A Common Sense Approach to Assessment & Accreditation

Blog

An example of closing the loop...and ideas for doing it well

Posted on February 22, 2018 at 7:00 PM

I was intrigued by an article in the September 23, 2016, issue of Inside Higher Ed titled “When a C Isn’t Good Enough.” The University of Arizona found that students who earned an A or B in their first-year writing classes had a 67% chance of graduating, but those earning a C had only a 48% chance. The university is now exploring a variety of ways to improve the success of students earning a C, including requiring C students to take a writing competency test, providing resources to C students, and/or requiring C students to repeat the course.

 

I know nothing about the University of Arizona beyond what’s in the article. But if I were working with the folks there, I’d offer the following ideas, if they haven’t already considered them.

 

1. I’d like to see more information on why the C students earned a C. Which writing skills did they struggle most with: basic grammar, sentence structure, organization, supporting arguments with evidence, etc.? Or was there another problem? For example, maybe C students were more likely to hand in assignments late (or not at all).

 

2. I’d also like to see more research on why those C students were less likely to graduate. How did their GPAs compare with those of A and B students? If their grades were worse, what kinds of courses seemed to be the biggest challenge for them? Within those courses, what kinds of assignments were hardest for them? Why did they earn a poor grade on them? What writing skills did they struggle most with: basic grammar, organization, supporting arguments with evidence, etc.? Or, again, maybe there was another problem, such as poor self-discipline in getting work handed in on time.

 

And if their GPAs were not that different from those of A and B students (or even if they were), what else was going on that might have led them to leave? The problem might not be their writing skills per se. Perhaps, for example, students with work or family obligations found it harder to devote the study time necessary to get good grades. Providing support for that issue might help more than helping them with their writing skills.

 

3. I’d also like to see the faculty responsible for first-year writing articulate a clear, appropriate, and appropriately rigorous standard for earning a C. In other words, they could use the above information on the kinds and levels of writing skills that students need to succeed in subsequent courses to articulate the minimum performance levels required to earn a C. When I taught first-year writing at a public university in Maryland, the state system had just such a statement, the “Maryland C Standard.”

 

4. I’d like to see the faculty adopt a policy that, in order to pass first-year writing, students must meet the minimum standard on every writing criterion. Thus, if student work is graded using a rubric, the grade isn’t determined by averaging the scores on the various rubric criteria—that approach lets a student with A arguments but F grammar earn a C despite the failing grammar. Instead, students must earn at least a C on every rubric criterion in order to pass the assignment. Then the As, Bs, and Cs can be averaged into an overall grade for the assignment.
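
If it helps to picture this policy, here is a minimal sketch of the logic in Python. The 4-point scale, the criterion names, and the mapping back to letter grades are my own illustrative assumptions, not the Maryland C Standard or any particular rubric.

```python
# A sketch of the "at least a C on every criterion" rule described above.
# Hypothetical 4-point scale and criteria; adjust to your own rubric.
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}
PASSING_MINIMUM = GRADE_POINTS["C"]

def assignment_grade(criterion_grades: dict[str, str]) -> str:
    """Average the criterion grades only if every criterion is at least a C."""
    points = {name: GRADE_POINTS[grade] for name, grade in criterion_grades.items()}
    below = [name for name, p in points.items() if p < PASSING_MINIMUM]
    if below:
        return "Not passing: below C on " + ", ".join(below)
    average = sum(points.values()) / len(points)
    # Map the average back to the nearest letter grade.
    return min(GRADE_POINTS, key=lambda g: abs(GRADE_POINTS[g] - average))

# A student with A arguments but F grammar does not pass, no matter the average.
print(assignment_grade({"arguments": "A", "organization": "B", "grammar": "F"}))
print(assignment_grade({"arguments": "A", "organization": "B", "grammar": "C"}))
```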

 

(If this sounds vaguely familiar to you, what I’m suggesting is the essence of competency-based education: students need to demonstrate competence on all learning goals and objectives in order to pass a course or graduate. Failure to achieve one goal or objective can’t be offset by strong performance on another.)

 

5. If they haven’t done so already, I’d also like to see the faculty responsible for first-year writing adopt a common rubric, articulating the criteria they’ve identified, that would be used to assess and grade the final assignment in every section, no matter who teaches it. This would make it easy to study student performance across all sections of the course and identify pervasive strengths and weaknesses in their writing. If some faculty members or TAs have additional grading criteria, they could simply add those to the common rubric. For example, I graded my students on their use of citation conventions, even though that was not part of the Maryland C Standard. I added that to the bottom of my rubric.

 

6. Because work habits are essential to success in college, I’d also suggest making this a separate learning outcome for first-year writing courses. This means grading students separately on whether they turn in work on time, put in sufficient effort, etc. This would help everyone understand why some students fail to graduate—is it because of poor writing skills, poor work habits, or both?

 

These ideas all move responsibility for addressing the problem from administrators to the faculty. That responsibility can’t be fulfilled unless the faculty commit to collaborating on identifying and implementing a shared strategy so that every student, no matter which section of writing they enroll in, passes the course with the skills needed for subsequent success.

What can an article on gun control tell us about creating good assessment reports?

Posted on November 8, 2017 at 10:05 AM

I was struck by Nicholas Kristof’s November 6 New York Times article, “How to Reduce Shootings.” No, I’m not talking here about the politics of the issue, and I’m not writing this blog post to advocate any stance on the issue. What struck me—and what’s relevant to assessment—is how effectively Kristof and his colleagues brought together and compellingly presented a variety of data.


Here are some of the lessons from Kristof’s article that we can apply to assessment reports.


Focus on using the results rather than sharing the results, starting with the report title. Kristof could have titled his piece something like, “What We Know About Gun Violence,” just as many assessment reports are titled something like, “What We’ve Learned About Student Achievement of Learning Outcomes.” But Kristof wants this information used, not just shared, and so do (or should) we. Focus both the title and content of your assessment report on moving from talk to practical, concrete responses to your assessment results.


Focus on what you’ve learned from your assessments rather than the assessments themselves. Every subheading in Kristof’s article states a conclusion drawn from his evidence. There’s no “Summary of Results” heading like what we see in so many assessment reports. Include in your report subheadings that will entice everyone to keep reading.


Go heavy on visuals, light on text. My estimate is that about half the article is visuals, half text. This makes the report a fast read, with points literally jumping out at us.


Go for graphs and other visuals rather than tables of data. Every single set of data in Kristof’s report is accompanied by graphs or other visuals that immediately let us see his point.


Order results from highest to lowest. There’s no law that says you must present the results for rubric criteria or a survey rating scale in their original order. Ordering results from highest to lowest—especially when accompanied by a bar graph—lets the big point literally pop out at the reader.


Use color to help drive home key points. Look at the section titled “Fewer Guns = Fewer Deaths” and see how adding just one color drives home the point of the graphics. I encourage what I call traffic light color-coding, with green for good news and red for results that, um, need attention.
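
To picture those last two points, here is a minimal matplotlib sketch that sorts results from highest to lowest and uses a single accent color to flag the result that needs attention. The rubric criteria, percentages, and 70% threshold are invented for illustration.

```python
# A sketch of two ideas from this post: sort results from highest to lowest,
# then use one accent color so the key point pops out. Invented example data.
import matplotlib.pyplot as plt

percent_satisfactory = {
    "Thesis": 88,
    "Organization": 81,
    "Use of evidence": 74,
    "Citing sources": 52,
}

# Sort from highest to lowest before plotting.
items = sorted(percent_satisfactory.items(), key=lambda kv: kv[1], reverse=True)
labels = [name for name, _ in items]
values = [pct for _, pct in items]
colors = ["gray" if pct >= 70 else "red" for pct in values]  # one accent color

plt.barh(labels, values, color=colors)
plt.gca().invert_yaxis()  # strongest result at the top
plt.xlabel("Percent of students scoring satisfactory or better")
plt.title("Writing rubric results, strongest to weakest")
plt.savefig("sorted_results.png")
```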


Pull together disparate data on student learning. Kristof and his colleagues pulled together data from a wide variety of sources. The graphic of public opinion on guns, toward the end of the article, brings together results from a variety of polls into one visual. Yes, the polls may not be strictly comparable, but Kristof acknowledges their sources. And the idea (that should be) behind assessment is not to make perfect decisions based on perfect data but to make somewhat better decisions based on somewhat better information than we would make without assessment evidence. So if, say, you’re assessing information literacy skills, pull together not only rubric results but also relevant questions from surveys like NSSE, students’ written reflections, and maybe even relevant questions from student evaluations of teaching (anonymous and aggregated across faculty, obviously).


Breakouts can add insight, if used judiciously. I’m firmly opposed to inappropriate comparisons across student cohorts (of course humanities students will have weaker math skills than STEM students). But the state-by-state comparisons that Kristof provides help make the case for concrete steps that might be taken. Appropriate, relevant, meaningful comparisons can similarly help us understand assessment results and figure out what to do.


Get students involved. I don’t have the expertise to easily generate many of the visuals in Kristof’s article, but many of today’s students do, or they’re learning how in a graphic design course. Creating these kinds of visuals would make a great class project. But why stop student involvement there? Just as Kristof intends his article to be discussed and used by just about anyone, write your assessment report so it can be used to engage students as well as faculty and staff in the conversation about what’s going on with student learning and what action steps might be appropriate and feasible.


Distinguish between annual updates and periodic mega-reviews. Few of us have the resources to generate a report of Kristof’s scale annually—and in many cases our assessment results don’t call for this, especially when the results indicate that students are generally learning what we want them to. But this kind of report would be very helpful when results are, um, disappointing, or when a program is undergoing periodic program review, or when an accreditation review is coming up. Flexibility is the key here. Rather than mandating a particular report format for everyone, match the scope of the report to the scope of the issues uncovered by assessment evidence.

An easy, inexpensive, meaningful way to close the assessment loop

Posted on October 29, 2017 at 9:50 AM

Assessment results are often used to make tweaks to individual courses and sometimes individual programs. It can be harder to figure out how to use assessment results to make broad, meaningful change across a college or university. But here’s one way to do so: Use assessment results to drive faculty professional development programming.


Here’s how it might work.


An assessment committee or some other appropriate group reviews annual assessment reports from academic programs and gen ed requirements. As they do, they notice some repeated concerns about shortcomings in student learning. Perhaps several programs note that their students struggle to analyze data. Perhaps several others note that quite a few students aren’t citing sources properly. Perhaps several others are dissatisfied with their students’ writing skills.


Note that the committee doesn’t need reports to be in a common format or share a common assessment tool in order to make these observations. This is a qualitative, not quantitative, analysis of the assessment reports. The committee can make a simple list of the single biggest concern with student learning mentioned in each report, then review the list and see what kinds of concerns are mentioned most often.
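
If it helps to picture this kind of tally, here is a minimal sketch in Python. The program names and concern labels are hypothetical, invented only to show the counting step.

```python
# A sketch of the tally described above: note each report's single biggest
# concern, then count how often each kind of concern comes up. Invented data.
from collections import Counter

biggest_concern_by_program = {
    "Biology": "analyzing data",
    "History": "citing sources properly",
    "Nursing": "analyzing data",
    "English": "analyzing data",
    "Business": "writing skills",
}

concern_counts = Counter(biggest_concern_by_program.values())
for concern, count in concern_counts.most_common():
    print(f"{concern}: mentioned in {count} report(s)")
```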


The assessment committee then shares what they’ve noticed with whoever plans faculty professional development programming—what’s often called a teaching-learning center. The center can then plan workshops, brown-bag lunch discussions, learning communities, or other professional development opportunities to help faculty improve student achievement of these learning goals.


There needn’t be much, if any, expense in offering such opportunities. Assessment results are used to decide how professional development resources are deployed, not necessarily to increase those resources.

Assessing the right things, not the easy things

Posted on October 7, 2017 at 8:20 AM

One of the many things I’ve learned by watching Ken Burns’ series on Vietnam is that Defense Secretary Robert McNamara was a data geek. A former Ford Motor Company executive, he routinely asked for all kinds of data. Sounds great, but there were two (literally) fatal flaws in his approach to assessment.


First, McNamara asked for data on virtually anything measurable, compelling staff to spend countless hours filling binders with all kinds of metrics—too much data for anyone to absorb. And I wonder what his staff could have accomplished had they not been forced to spend so much time on data collection.


Second, McNamara asked for the wrong data. He wanted to track progress in winning the war, but he focused on the wrong measures: body counts, weapons captured. He apparently didn’t have a clear sense of exactly what it would mean to win this war, or of how to measure progress toward that end. I’m not a military scientist, but I’d bet that more important measures would have included the attitudes of Vietnam’s citizens and the capacity of the South Vietnamese government to deal with insurgents on its own.


There are three important lessons here for us. First, worthwhile assessment requires a clear goal. I often compare teaching to taking our students on a journey. Our learning goal is where we want them to be at the end of the learning experience (be it a course, program, degree, or co-curricular experience).


Second, worthwhile assessment measures track progress toward that destination. Are our students making adequate progress along their journey? Are they reaching the destination on time?


Third, assessment should be limited—just enough information to help us decide if students are reaching the destination on time and, if not, what we might do to help them on their journey. Assessment should never take so much time that it detracts from the far more important work of helping students learn.

What to look for in multiple choice test reports

Posted on February 28, 2017 at 8:15 AM

Next month I’m doing a faculty professional development workshop on interpreting the reports generated for multiple choice tests. Whenever I do one of these workshops, I ask the sponsoring institution to send me some sample reports. I’m always struck by how user-unfriendly they are!

 

The most important thing to look at in a test report is the difficulty of each item—the percent of students who answered each item correctly. Fortunately these numbers are usually easy to find. The main thing to think about is whether each item was as hard as you intended it to be. Most tests have some items on essential course objectives that every student who passes the course should know or be able to do. We want virtually every student to answer those items correctly, so check those items and see if most students did indeed get them right.

 

Then take a hard look at any test items that a lot of students got wrong. Many tests purposefully include a few very challenging items, requiring students to, say, synthesize their learning and apply it to a new problem they haven’t seen in class. These are the items that separate the A students from the B and C students. If these are the items that a lot of students got wrong, great! But take a hard look at any other questions that a lot of students got wrong. My personal benchmark is what I call the 50 percent rule: if more than half my students get a question wrong, I give the question a hard look.
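
If you want to check the difficulty numbers yourself (or compute them for a test your software doesn’t report on), here is a minimal sketch in Python. The answer data are invented, and the 50 percent flag is simply the rule of thumb described above.

```python
# A sketch of item difficulty (percent correct) and the 50 percent rule
# described above. Invented answer data; real test reports compute this for you.

# responses[student] is a list: True if that student answered the item correctly.
responses = {
    "student_1": [True, True, False, True],
    "student_2": [True, False, False, True],
    "student_3": [True, True, False, False],
    "student_4": [True, False, True, True],
}

num_items = len(next(iter(responses.values())))
num_students = len(responses)

for item in range(num_items):
    percent_correct = 100 * sum(r[item] for r in responses.values()) / num_students
    flag = "  <-- more than half got this wrong" if percent_correct < 50 else ""
    print(f"Item {item + 1}: {percent_correct:.0f}% correct{flag}")
```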

 

Now comes the hard part: figuring out why more students got a question wrong than we expected. There are several possible reasons, including the following:

 

  • The question or one or more of its options is worded poorly, and students misinterpret them.
  • We might have taught the question’s learning outcome poorly, so students didn’t learn it well. Perhaps students didn’t get enough opportunities, through classwork or homework, to practice the outcome.
  • The question might be on a trivial point that few students took the time to learn, rather than a key course learning outcome. (I recently saw a question on an economics test that asked how many U.S. jobs were added in the last quarter. Good heavens, why do students need to memorize that? Is that the kind of lasting learning we want our students to take with them?)

 

 

If you’re not sure why students did poorly on a particular test question, ask them! Trust me, they’ll be happy to tell you what you did wrong!

 

Test reports provide two other kinds of information: the discrimination of each item and how many students chose each option. These are the parts that are usually user-unfriendly and, frankly, can take more time to decipher than they’re worth.

 

The only thing I’d look for here is any items with negative discrimination. The underlying theory of item discrimination is that students who get an A on your test should be more likely to get any one question right than students who fail it. In other words, each test item should discriminate between top and bottom students. Imagine a test question that all your A students get wrong but all your failing students answer correctly. That’s an item with negative discrimination. Obviously there’s something wrong with the question’s wording—your A students interpreted it incorrectly—and it should be thrown out. Fortunately, items with negative discrimination are relatively rare and usually easy to identify in the report.
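
If your report doesn’t flag these items for you, here is a minimal sketch of one common way to compute an upper-lower discrimination index in Python. The scores and responses are invented, and the 27% grouping is just a conventional choice; many test reports use a point-biserial correlation instead.

```python
# A sketch of a simple upper-lower discrimination index: the fraction of top
# scorers who got an item right minus the fraction of bottom scorers who did.
# A negative value means weaker students outperformed stronger ones on the item.

def discrimination_index(item_correct: list[bool], total_scores: list[float],
                         group_fraction: float = 0.27) -> float:
    """P(correct | top group) minus P(correct | bottom group)."""
    ranked = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    n = max(1, int(len(ranked) * group_fraction))
    bottom, top = ranked[:n], ranked[-n:]
    p_top = sum(item_correct[i] for i in top) / n
    p_bottom = sum(item_correct[i] for i in bottom) / n
    return p_top - p_bottom

# Invented example: low scorers tend to get this item right, high scorers get it wrong.
total_scores = [95, 90, 88, 72, 65, 58, 40, 35]
item_correct = [False, False, True, True, True, True, True, True]
print(discrimination_index(item_correct, total_scores))  # negative, so review the item
```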

Making a habit of using classroom assessment information to inform our own teaching

Posted on December 20, 2016 at 10:50 AM

Given my passion for assessment, you might not be surprised to learn that, whenever I teach, the most fun part for me is analyzing how my students have done on the tests and assignments I’ve given them. Once tests or papers are graded, I can’t wait to count up how many students got each test question right or how many earned each possible score on each rubric criterion. When I teach workshops, I rely heavily on minute papers, and I can’t wait to type up all the comments and do a qualitative analysis of them. I love to teach, and I really want to be as good a teacher as I can. And, for me, an analysis of what students have and haven’t learned is the best possible feedback on how well I’m teaching, much more meaningful and useful than student evaluations of teaching.

 

I always celebrate the test questions or rubric criteria that all my students did well on. I make a point of telling the class and, no matter how jaded they are, you should see their faces light up!

 

And I always reflect on the test questions or rubric criteria for which my students did poorly. Often I can figure out on my own what happened. Often it’s simply a poorly written question or assignment, but sometimes I have to admit to myself that I didn’t teach that concept or skill particularly well. If I can’t figure out what happened, I ask the class and, trust me, they’re happy to tell me how I screwed up! If it’s a really vital concept or skill and we’re not at the end of the course, I’ll often tell them, “I screwed up, but I can’t let you out of here not knowing how to do this. We’re going to go over it again, you’re going to get more homework on it, and you’ll submit another assignment (or have more test questions) on this.” If it's the end of the course, I make notes to myself on what I'll do differently next time.

 

I often share this story at the faculty workshops I facilitate. I then ask for a show of hands of how many participants do this kind of analysis in their own classes. The number of hands raised varies—sometimes there will be maybe half a dozen hands in a room of 80, sometimes more—but rarely do more than a third or half of those present raise their hands. This is a real issue, because if faculty aren’t in the habit of analyzing and reflecting on assessment results in their own classes, how can we expect them to do so collaboratively on broader learning outcomes? In short, it’s a troubling sign that the institutional community is not yet in the habit of using systematic evidence to understand and improve student learning, which is what all accreditors want.

 

Here, then, is my suggestion for a New Year’s resolution for all of you who teach or in any way help students learn: Start doing this! You don’t have to do this for every assignment in every course you teach, but pick at least one key test or assignment in one course whose scores aren’t where you’d like them. Your analysis and reflection on that one test or assignment will lead you into the habit of using the assessment evidence in front of you more regularly, and it will make you an even better teacher than you are today.

An example of closing the loop...and ideas for doing it well

Posted on September 24, 2016 at 7:35 AM

I was intrigued by an article in the September 23, 2016, issue of Inside Higher Ed titled “When a C Isn’t Good Enough.” The University of Arizona found that students who earned an A or B in their first-year writing classes had a 67% chance of graduating, but those earning a C had only a 48% chance. The university is now exploring a variety of ways to improve the success of students earning a C, including requiring C students to take a writing competency test, providing resources to C students, and/or requiring C students to repeat the course.

 

I know nothing about the University of Arizona beyond what’s in the article. But if I were working with the folks there, I’d offer the following ideas, if they haven’t already considered them.

 

1. I’d like to see more information on why the C students earned a C. Which writing skills did they struggle most with: basic grammar, sentence structure, organization, supporting arguments with evidence, etc.? Or was there another problem? For example, maybe C students were more likely to hand in assignments late (or not at all).

 

2. I’d also like to see more research on why those C students were less likely to graduate. How did their GPAs compare with those of A and B students? If their grades were worse, what kinds of courses seemed to be the biggest challenge for them? Within those courses, what kinds of assignments were hardest for them? Why did they earn a poor grade on them? What writing skills did they struggle most with: basic grammar, organization, supporting arguments with evidence, etc.? Or, again, maybe there was another problem, such as poor self-discipline in getting work handed in on time.

 

And if their GPAs were not that different from those of A and B students (or even if they were), what else was going on that might have led them to leave? The problem might not be their writing skills per se. Perhaps, for example, students with work or family obligations found it harder to devote the study time necessary to get good grades. Providing support for that issue might help more than helping them with their writing skills.

 

3. I’d also like to see the faculty responsible for first-year writing articulate a clear, appropriate, and appropriately rigorous standard for earning a C. In other words, they could use the above information on the kinds and levels of writing skills that students need to succeed in subsequent courses to articulate the minimum performance levels required to earn a C. (When I taught first-year writing at a public university in Maryland, the state system had just such a statement, the “Maryland C Standard.”)

 

4. I’d like to see the faculty adopt a policy that, in order to pass first-year writing, students must meet the minimum standard on every writing criterion. Thus, if student work is graded using a rubric, the grade isn’t determined by averaging the scores on the various rubric criteria—that approach lets a student with A arguments but F grammar earn a C despite the failing grammar. Instead, students must earn at least a C on every rubric criterion in order to pass the assignment. Then the As, Bs, and Cs can be averaged into an overall grade for the assignment.

 

(If this sounds vaguely familiar to you, what I’m suggesting is the essence of competency-based education: students need to demonstrate competence on all learning goals and objectives in order to pass a course or graduate. Failure to achieve one goal or objective can’t be offset by strong performance on another.)

 

5. If they haven’t done so already, I’d also like to see the faculty responsible for first-year writing adopt a common rubric, articulating the criteria they’ve identified, that would be used to assess and grade the final assignment in every section, no matter who teaches it. This would make it easy to study student performance across all sections of the course and identify pervasive strengths and weaknesses in their writing. If some faculty members or TAs have additional grading criteria, they could simply add those to the common rubric. For example, I graded my students on their use of citation conventions, even though that was not part of the Maryland C Standard. I added that to the bottom of my rubric.

 

6. Because work habits are essential to success in college, I’d also suggest making this a separate learning outcome for first-year writing courses. This means grading students separately on whether they turn in work on time, put in sufficient effort, etc. This would help everyone understand why some students fail to graduate—is it because of poor writing skills, poor work habits, or both?

 

These ideas all move responsibility for addressing the problem from administrators to the faculty. That responsibility can’t be fulfilled unless the faculty commit to collaborating on identifying and implementing a shared strategy so that every student, no matter which section of writing they enroll in, passes the course with the skills needed for subsequent success.

Making assessment consequential

Posted on January 25, 2016 at 7:25 AM

Of course as soon as I posted and announced my last blog on helpful assessment resources, I realized I’d omitted two enormous ones: AAC&U, which has become an amazing resource and leader on assessment in general education and the liberal arts, and the National Institute for Learning Outcomes Assessment (NILOA), which has generated and published significant scholarship that is advancing assessment practice. I’ve edited that blog to add these two resources.

 

Last year the folks at NILOA wrote what I consider one of eight essential assessment books: Using Evidence of Student Learning to Improve Higher Education. It’s co-authored by one of the greatest collections of assessment minds on the planet: George Kuh, Stan Ikenberry, Natasha Jankowski, Timothy Cain, Peter Ewell, Pat Hutchings, and Jillian Kinzie. They make a convincing case for rebooting our approach to assessment, moving from what they call a culture of compliance, in which we focus on doing assessment largely to satisfy accreditors, to what they call consequential assessment, the kind that truly impacts student success and institutional performance.


Here’s my favorite line from the book: “Good assessment is not about the amount of information amassed, or about the quality of any particular facts or numbers put forth. Rather, assessment within a culture of evidence is about habits of question asking, reflection, deliberation, planning, and action based on evidence” (p. 46). In other words, the most important kind of validity for student learning assessments is consequential validity.

 

The book presents compelling arguments for making this transformational shift, discusses the challenges involved in making it, and offers practical, research-informed strategies for overcoming those challenges, grounded in real examples of good practice. This book turned on so many light bulbs for me! As I noted in my earlier blog on eight essential assessment books, it’s a worthwhile addition to every assessment practitioner’s bookshelf.

 

I’ll be publishing a more thorough review of the book in an upcoming issue of the journal Assessment & Evaluation in Higher Education.

Making student evaluations of teaching useful

Posted on September 15, 2015 at 7:10 AM

On September 24, I’ll be speaking at the CoursEval User Conference on “Using Student Evaluations to Improve What We Do,” sharing five principles for making student evaluations of teaching useful in improving teaching and learning:

 

1. Ask the right questions: ones that ask about specific behaviors that we know through research help students learn. Ask, for example, how much tests and assignments focus on important learning outcomes, how well students understand the characteristics of excellent work, how well organized their learning experiences are, how much of their classwork is hands-on, and whether they receive frequent, prompt, and concrete feedback on their work.

 

2. Use student evaluations before the course’s halfway point. This lets the faculty member make mid-course corrections.

 

3. Use student evaluations ethically and appropriately. This includes using multiple sources of information on teaching effectiveness (teaching portfolios, actual student learning results, etc.) and addressing only truly meaningful shortcomings.

 

4. Provide mentoring. Just giving a faculty member a summary of student evaluations isn’t enough; faculty need opportunities to work with colleagues and experts to come up with fresh approaches to their teaching. This calls for an investment in professional development.

 

5. Provide supportive, not punitive, policies and practices. Define a great teacher as one who is always improving. Define teaching excellence not as student evaluations but what faculty do with them. Offer incentives and rewards for faculty to experiment with new teaching approaches and allow them temporary freedom to fail.

 

My favorite resource on evaluating teaching is the IDEA center in Kansas. It has a wonderful library of short, readable research papers on teaching effectiveness. A particularly helpful paper (that includes the principles I’ve presented here) is IDEA Paper No. 50: Student Ratings of Teaching: A Summary of Research and Literature.

Overcoming barriers to using assessment results

Posted on June 20, 2015 at 8:40 AM

In my June 6 blog, I identified several barriers to using the assessment evidence we’ve amassed. How can we overcome these barriers? I don’t have a magic answer—what works at one college may not work at another—but here are some ideas. Keep in mind that I’m talking only about barriers to using assessment results, not barriers to getting people to do assessment, which is a whole different ball of wax, as my grandmother used to say.


Define satisfactory results. There’s a seven-step process to do this, which I laid out in my March 23 blog.

 

Share assessment results clearly and readily. I’m a big fan of simple bar graphs showing the proportions of students who earned each rubric rating on each criterion. I like to use what I call “traffic light” color coding: students with unsatisfactory results are coded red, those with satisfactory results are coded yellow, and those with exemplary results are coded green. Both good results and areas that need improvement pop out at readers.
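
Here is a minimal matplotlib sketch of that kind of graph: for each rubric criterion, a stacked bar showing the percent of students at each level, coded red, yellow, and green. The criteria and percentages are invented for illustration.

```python
# A sketch of traffic-light color coding for rubric results, as described above.
# Invented criteria and percentages.
import matplotlib.pyplot as plt
import numpy as np

criteria = ["Thesis", "Evidence", "Organization", "Grammar"]
levels = {  # percent of students at each level, per criterion
    "Unsatisfactory": np.array([10, 25, 15, 40]),
    "Satisfactory":   np.array([55, 50, 60, 45]),
    "Exemplary":      np.array([35, 25, 25, 15]),
}
colors = {"Unsatisfactory": "red", "Satisfactory": "gold", "Exemplary": "green"}

bottom = np.zeros(len(criteria))
for level, values in levels.items():
    plt.bar(criteria, values, bottom=bottom, color=colors[level], label=level)
    bottom += values

plt.ylabel("Percent of students")
plt.legend()
plt.title("Rubric results with traffic-light color coding")
plt.savefig("rubric_results.png")
```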

 

Nurture a culture of evidence-based change. Institutional leaders need to create a culture that encourages innovation, including the willingness to take some degree of risk in trying new things that might not work. Indeed Michael Meotti just published a LinkedIn post on seven attributes of successful higher education leaders, one of which is to support risk-taking.


In my book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability, I explain that concrete, tangible incentives, recognition, and rewards can help nurture such a culture.

 

  • Offer a program of mini-grants that are open only to faculty who have unsatisfactory assessment results and want to improve them.
  • Include in the performance evaluation criteria of vice presidents and deans the expectation that they build a culture of evidence in their units.
  • Include in faculty review criteria the expectation that they use student learning assessment evidence from their classes to reflect on and improve their own teaching.
  • Give priority to budget requests that are supported by systematic evidence. For significant proposals, such as for a new program or service, ask for a “business plan” comparable to what an investor might want to see from an entrepreneur.
  • Make pervasive shortcomings an institutional priority. For example, if numerous academic programs are dissatisfied with their students’ writing skills, set a university goal to make next year “the year of writing,” with an investment in professional development on teaching and assessing writing, speakers or consultants, faculty retreats to rethink curricula and their emphasis on developing writing skills, and a fresh look at support systems to help faculty teach and assess writing. As I noted in my last blog post, this means a real investment of resources, and this cannot happen without leadership commitment to a culture of evidence-based improvement.

 

I’ll be talking about the first two of these strategies—defining satisfactory results and sharing results clearly and readily—at two upcoming events: Taskstream’s CollabEx Live! in New York City on June 22 and LiveText’s Assessment & Collaboration Conference in Nashville on July 14. I hope to see you!

Barriers to using assessment results

Posted on June 6, 2015 at 6:40 AM

I’ve heard assessment scholar George Kuh say that most colleges are now sitting on quite a pile of assessment data, but they don’t know what to do with it. Why is it often so hard to use assessment data to identify and implement meaningful changes? In my new book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability (Jossey-Bass), I talk about several barriers that colleges may face. In order to use results…

 

We need a clear sense of what satisfactory results are and aren’t. Let’s say you’ve used one of the AAC&U VALUE rubrics to evaluate student work. Some students score a 4, others a 3, others a 2, and some a 1. Are these results good enough or not? Not an easy question to answer!

 

Compounding this is the temptation I’ve seen many colleges face: setting standards so low that all students “succeed.” After all, if all students are successful, we don’t have to spend any time or money changing anything. I’ve seen health and medicine programs, for example, set standards that students must score at least 70% on exams in order to pass. I don’t know about you, but I don’t want to be treated by a health care professional who can diagnose diseases or read test reports correctly only 70% of the time!

 

Assessment results must be shared clearly and readily. No one today has time to study a long assessment report and try to figure out what it means. Results need to “pop out” at decision makers, so they can quickly understand where students are succeeding and where they are not.

 

Change must be part of academic culture. Charles Blaich and Kathleen Wise have noted that the tradition of scholarly research calls for researchers to conclude their work with calls for further research, leaving it to others to make changes rather than acting on the findings themselves. So it’s not surprising that many assessment reports conclude with recommendations for modifications to assessment tools or methods rather than to teaching methods or curriculum design.

 

Institutional leaders must commit to and support evidence-based change. Meaningful advances in quality and effectiveness require resource investments. In my book I share several examples of institutions that have used assessment to make meaningful changes. In every example, the changes required significant resource investments: in redesigning curricula, offering professional development to faculty, restructuring support programs, and so on. But at many other colleges, the culture is one of maintaining the status quo and not rocking the boat.

 

I am honored and pleased to be one of the featured speakers at Taskstream's CollabExLive! on June 22 at New York University's Kimmel Center. My session will focus on the first two of these barriers: having clear, justifiable standards and sharing results clearly. This session will be especially fun because we'll be working with simulated TaskStream reports. I hope to see you!

Infographics: A great way to share assessment results

Posted on September 1, 2013 at 8:40 AM

In this visual age, “infographics,” which combine graphics and text to convey key points of complex information, can be a great way to share assessment results and other evidence. If you’re not familiar with the term, visit http://dailyinfographics.com  or http://visual.ly and click on the Education link for hundreds of examples (some good, some not so good).

 

Infographics work only if you have key points to convey, though. For much assessment evidence, your key points will be answers to, “Are we achieving our goals? Are our students achieving our learning outcomes? How do we know?”

 

Graphics, IT, and marketing students can be a great resource for creating infographics. They’ll look at your results with fresh eyes and bring a new perspective.