Linda Suskie

  A Common Sense Approach to Assessment & Accreditation

Blog

Rubric events this fall!

Posted on August 28, 2016 at 8:40 AM Comments (0)

This fall is the Semester of the Rubric for me. I'm doing sessions called "Everything I Thought I Knew About Rubrics Was Wrong" at the Assessment Institute in Indianapolis on October 17 and at the NEEAN Fall Forum at the College of the Holy Cross in Worcester, Massachusetts, on November 4. If you are in the Middle East, Africa, Europe, or Asia, I'm doing a workshop on "Building Rubrics" at the Assessment Leadership Conference at United Arab Emirates University in Al Ain on November 14-15.


On another note, on January 17 I'm doing "Building a Culture of Quality," a retreat for institutional leaders sponsored by WASC at the Kellogg West Conference Center in Pomona, California.


For more information on any of these events, visit www.lindasuskie.com/upcoming-events. I hope to see you!

Join me in Indiana, Nebraska, California, and the Middle East

Posted on June 11, 2016 at 7:05 AM Comments (0)

Over the coming months I'll be speaking or doing workshops in a variety of public venues. If your schedule permits, please join me--I'd love to see you! For more information on any of these events, visit www.lindasuskie.com/upcoming-events.

  • On June 23, I'll be doing a post-conference workshop on "Meaningful Assessment of Program Learning Outcomes" at the Innovations 2016 in Faith-Based Nursing Conference at Indiana Wesleyan University in Marion.
  • On August 8, I'll be doing a workshop on "Using Assessment Results to Understand and Improve Student Learning" at Nebraska Wesleyan University in Omaha, sponsored by Nebraska Wesleyan University and Concordia, Doane, and Union Colleges.
  • On October 17 or 18 (date and time TBA), I'll be doing a session titled "Everything I Thought I Knew About Rubrics Was Wrong" at the 2016 Assessment Institute in Indianapolis.
  • On November 15 or 16 (date and time TBA), I'll be doing a session or workshop (topic to be announced) at the inaugural Assessment Leadership Conference sponsored by United Arab Emirates University in Al Ain.
  • On January 17, I'll be facilitating "Building a Culture of Quality: A Retreat for Institutional Leaders," hosted by the WASC Senior College and University Commission, at the Kellogg West Conference Center in Pomona, California.

What should boards be monitoring?

Posted on March 14, 2016 at 8:35 AM Comments (0)

On April 17, 2016, I’m doing a pre-conference workshop at AGB’s National Conference on Trusteeship on “Creating and Using Dashboards to Monitor and Improve Institutional Performance.”

 

The most important question about dashboards is what your board should be tracking. I see two broad categories. The first is your institution’s health and well-being. Boards should be tracking answers to the following questions:

• Is your college community safe and healthy?

• Do you have enough resources: financial, human, capital, and technological?

• Do you have the right resources: financial, human, capital, and technological?

• Are your revenue sources sufficiently diverse?

• Is your college financially healthy?

 

The second broad category is how well your institution is keeping its promises—implicit as well as explicit—to its students and their families, your region, and taxpayers and others who support it. Boards should be tracking answers to the following questions:

• Are your students learning what you promise?

• Do your students succeed?

• How well does your college help students learn, develop, and succeed?

• Does your college contribute to economic development and to the public good?

• Is your college achieving what you promise in your mission and goals?

• Do you put your money where your mouth is—investing in keeping your promises?

• How efficiently do you deploy your resources?
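As a loose illustration of how these two categories of questions might be organized for a board, here is a minimal sketch of a dashboard structure with simple on-track/needs-attention flags. All indicator names, values, and targets below are invented for illustration; they are not from the workshop.

```python
# Hypothetical board dashboard sketch: each indicator pairs a current value
# with a target, grouped under the two broad categories described above.
# Every name, value, and target here is invented for illustration.

def status(value, target, higher_is_better=True):
    """Flag an indicator as 'on track' or 'needs attention' against its target."""
    ok = value >= target if higher_is_better else value <= target
    return "on track" if ok else "needs attention"

dashboard = {
    "Institutional health and well-being": [
        # (indicator, current value, target, higher_is_better)
        ("Days of cash on hand", 95, 90, True),
        ("Share of revenue from tuition", 0.78, 0.70, False),
    ],
    "Keeping promises": [
        ("Six-year graduation rate", 0.58, 0.60, True),
        ("Graduates employed or in grad school", 0.86, 0.80, True),
    ],
}

for category, indicators in dashboard.items():
    print(category)
    for name, value, target, hib in indicators:
        print(f"  {name}: {value} (target {target}) -> {status(value, target, hib)}")
```

The point of the structure is simply that each number a board sees is paired with a target, so the dashboard answers "compared to what?" at a glance.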

 

I hope you or someone else from your institution will join me at this workshop!

 

Making student evaluations of teaching useful

Posted on September 15, 2015 at 7:10 AM Comments (0)

On September 24, I’ll be speaking at the CoursEval User Conference on “Using Student Evaluations to Improve What We Do,” sharing five principles for making student evaluations of teaching useful in improving teaching and learning:

 

1. Ask the right questions: ones that ask about specific behaviors that we know through research help students learn. Ask, for example, how much tests and assignments focus on important learning outcomes, how well students understand the characteristics of excellent work, how well organized their learning experiences are, how much of their classwork is hands-on, and whether they receive frequent, prompt, and concrete feedback on their work.

 

2. Use student evaluations before the course’s halfway point. This lets the faculty member make mid-course corrections.

 

3. Use student evaluations ethically and appropriately. This includes using multiple sources of information on teaching effectiveness (teaching portfolios, actual student learning results, etc.) and addressing only truly meaningful shortcomings.

 

4. Provide mentoring. Just giving a faculty member a summary of student evaluations isn’t enough; faculty need opportunities to work with colleagues and experts to come up with fresh approaches to their teaching. This calls for an investment in professional development.

 

5. Provide supportive, not punitive, policies and practices. Define a great teacher as one who is always improving. Define teaching excellence not as student evaluations but what faculty do with them. Offer incentives and rewards for faculty to experiment with new teaching approaches and allow them temporary freedom to fail.

 

My favorite resource on evaluating teaching is the IDEA center in Kansas. It has a wonderful library of short, readable research papers on teaching effectiveness. A particularly helpful paper (that includes the principles I’ve presented here) is IDEA Paper No. 50: Student Ratings of Teaching: A Summary of Research and Literature.

 

Why is American higher education under fire?

Posted on August 23, 2015 at 7:35 AM Comments (0)

There are three fundamental reasons:

 

Economic development. The U.S. is increasingly dependent on college-educated workers to drive its economy. The proportion of U.S. jobs requiring post-secondary education or training has grown from about 35% in 1973 to a projected 65% in 2020. All net jobs growth since the 1970s has been in jobs requiring at least a bachelor’s degree.

 

Affordability and return on investment. Eighty percent of today’s students are going to college to “be very well off financially,” up from just 40% in the 1970s. They want their investment in college to pay off. It generally does; the average college “wage premium” today is 80%, up from 40% in the 1970s. But averages don’t reflect everyone’s experience, and today 40% of 25-year-olds have student loan debt, up from about 25% ten years ago. When students and their families pay so much and incur so much debt, they start to question the value of anything that doesn’t seem to them to contribute to that return on investment, like gen ed requirements. (I’m not criticizing the liberal arts here at all, just pointing out that we don’t always communicate their value well.)

 

The changing American college student. Today 43% of U.S. undergraduates are over 24 years old, and only 25% attend a residential college full-time. Today’s entering students are generally less prepared to succeed in college and increasingly “stop out” and “swirl” on their way to a college degree.

 

Why isn’t American higher education addressing these forces better? And what can we do to meet these needs better? I share some ideas in the last chapter of my book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability, and at the beginning of the book I give the sources of all the figures I've quoted here. I’ll be talking about all this in my September 10 plenary at the Drexel Regional Assessment Conference. I hope to see you there!

 

Join me for a cool virtual symposium in September!

Posted on August 18, 2015 at 6:50 AM Comments (0)

Gavin Henning and his colleagues at ACPA have come up with a really cool idea: a virtual symposium on September 29 that lets people who can't travel to a face-to-face conference participate right from their home campuses. I'm hoping to inspire some of the conversation with a mini-talk on documenting student learning. For more information, visit www.myacpa.org/events/2015-presidential-symposium-fulfilling-our-promises-students-fostering-and-demonstrating.

If you can't join the ACPA symposium, I hope to see you at two other events in September: the Drexel Regional Assessment Conference on September 10 or the CoursEval User Conference in Chicago on September 24.

See you at Drexel or CoursEval conferences?

Posted on July 29, 2015 at 8:15 AM Comments (0)

I'm speaking at two events in September. On September 10, I'll be delivering a plenary on "The Future of Higher Education" at the Drexel Regional Assessment Conference in Philadelphia. For more information, visit www.drexel.edu/aconf.


And on September 24, I'll be delivering the keynote, "Using Student Evaluations to Improve What We Do," at the CoursEval User Conference at Columbia College Chicago. For more information, visit course-evaluation.com/user-conference-2015.


I hope to see you at one of these events!

Overcoming barriers to using assessment results

Posted on June 20, 2015 at 8:40 AM Comments (0)

In my June 6 blog, I identified several barriers to using the assessment evidence we’ve amassed. How can we overcome these barriers? I don’t have a magic answer—what works at one college may not work at another—but here are some ideas. Keep in mind that I’m talking only about barriers to using assessment results, not barriers to getting people to do assessment, which is a whole different ball of wax, as my grandmother used to say.


Define satisfactory results. There’s a seven-step process to do this, which I laid out in my March 23 blog.

 

Share assessment results clearly and readily. I’m a big fan of simple bar graphs showing the proportions of students who earned each rubric rating on each criterion. I like to use what I call “traffic light” color coding: students with unsatisfactory results are coded red, those with satisfactory results are coded yellow, and those with exemplary results are coded green. Both good results and areas that need improvement pop out at readers.
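A minimal sketch of this kind of summary, in code rather than a chart: tally the share of students in each traffic-light band for each rubric criterion. The criterion names, ratings, and the mapping of rubric levels to colors below are invented for illustration; a real report would plot these shares as color-coded stacked bars.

```python
# Hypothetical traffic-light summary of rubric results.
# Assumed mapping (illustrative only): levels 1-2 unsatisfactory (red),
# level 3 satisfactory (yellow), level 4 exemplary (green).
from collections import Counter

COLORS = {1: "red", 2: "red", 3: "yellow", 4: "green"}

def traffic_light_summary(ratings):
    """Return the share of students in each color band for one criterion."""
    counts = Counter(COLORS[r] for r in ratings)
    total = len(ratings)
    return {color: counts.get(color, 0) / total
            for color in ("red", "yellow", "green")}

# Invented sample: each criterion maps to a list of student rubric ratings.
scores = {
    "Organization": [4, 3, 3, 2, 4, 3, 1, 3],
    "Evidence":     [2, 2, 3, 1, 2, 3, 2, 4],
}

for criterion, ratings in scores.items():
    summary = traffic_light_summary(ratings)
    shares = ", ".join(f"{color}: {share:.0%}" for color, share in summary.items())
    print(f"{criterion}: {shares}")
```

Because each criterion reduces to three shares, both the good news (lots of green) and the trouble spots (lots of red) pop out immediately.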

 

Nurture a culture of evidence-based change. Institutional leaders need to create a culture that encourages innovation, including the willingness to take some degree of risk in trying new things that might not work. Indeed, Michael Meotti just published a LinkedIn post on seven attributes of successful higher education leaders, one of which is to support risk-taking.


In my book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability, I explain that concrete, tangible incentives, recognition, and rewards can help nurture such a culture.

 

  • Offer a program of mini-grants that are open only to faculty who have unsatisfactory assessment results and want to improve them.
  • Include in the performance evaluation criteria of vice presidents and deans the expectation that they build a culture of evidence in their units.
  • Include in faculty review criteria the expectation that they use student learning assessment evidence from their classes to reflect on and improve their own teaching.
  • Give budget priority to budget requests that are supported by systematic evidence. For significant proposals, such as for a new program or service, ask for a “business plan” comparable to what an investor might want to see from an entrepreneur.
  • Make pervasive shortcomings an institutional priority. For example, if numerous academic programs are dissatisfied with their students’ writing skills, set a university goal to make next year “the year of writing,” with an investment in professional development on teaching and assessing writing, speakers or consultants, faculty retreats to rethink curricula and their emphasis on developing writing skills, and a fresh look at support systems to help faculty teach and assess writing. As I noted in my last blog post, this means a real investment of resources, and this cannot happen without leadership commitment to a culture of evidence-based improvement.

 

I’ll be talking about the first two of these strategies--defining satisfactory results and sharing results clearly and readily--at two upcoming events: Taskstream’s CollabEx Live! in New York City on June 22 and LiveText’s Assessment & Collaboration Conference in Nashville on July 14. I hope to see you!

 

 

Barriers to using assessment results

Posted on June 6, 2015 at 6:40 AM Comments (2)

I’ve heard assessment scholar George Kuh say that most colleges are now sitting on quite a pile of assessment data, but they don’t know what to do with it. Why is it often so hard to use assessment data to identify and implement meaningful changes? In my new book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability (Jossey-Bass), I talk about several barriers that colleges may face. In order to use results…

 

We need a clear sense of what satisfactory results are and aren’t. Let’s say you’ve used one of the AAC&U VALUE rubrics to evaluate student work. Some students score a 4, others a 3, others a 2, and some a 1. Are these results good enough or not? Not an easy question to answer!

 

Compounding this is the temptation I’ve seen many colleges face: setting standards so low that all students “succeed.” After all, if all students are successful, we don’t have to spend any time or money changing anything. I’ve seen health and medicine programs, for example, set standards that students must score at least 70% on exams in order to pass. I don’t know about you, but I don’t want to be treated by a health care professional who can diagnose diseases or read test reports correctly only 70% of the time!

 

Assessment results must be shared clearly and readily. No one today has time to study a long assessment report and try to figure out what it means. Results need to “pop out” at decision makers, so they can quickly understand where students are succeeding and where they are not.

 

Change must be part of academic culture. Charles Blaich and Kathleen Wise have noted that the tradition of scholarly research calls for researchers to conclude their work with calls for further research, leaving it to others to make changes rather than acting on findings themselves. So it’s not surprising that many assessment reports conclude with recommendations for modifications to assessment tools or methods rather than to teaching methods or curriculum design.

 

Institutional leaders must commit to and support evidence-based change. Meaningful advances in quality and effectiveness require resource investments. In my book I share several examples of institutions that have used assessment to make meaningful changes. In every example, the changes required significant resource investments: in redesigning curricula, offering professional development to faculty, restructuring support programs, and so on. But at many other colleges, the culture is one of maintaining the status quo and not rocking the boat.

 

I am honored and pleased to be one of the featured speakers at Taskstream's CollabEx Live! on June 22 at New York University's Kimmel Center. My session will focus on the first two of these barriers: having clear, justifiable standards and sharing results clearly. This session will be especially fun because we'll be working with simulated Taskstream reports. I hope to see you!

 

 

 

Join me to talk about using assessment results!

Posted on May 10, 2015 at 7:10 AM Comments (0)

One of the hottest assessment topics today is not collecting assessment information but understanding and using the information we've already collected. I'll be talking about this at two conferences this summer. 

On June 22, I'll be doing a session at Taskstream's CollabEx Live! at NYU's Kimmel Center in New York City. The topic is "Understanding & Using Assessment Evidence." A book signing will follow. Visit www1.taskstream.com/user-events/ for more information.

On July 14, I'll be doing two sessions at LiveText's Assessment & Collaboration Conference at Opryland in Nashville. I'm participating in a panel on rubrics with Lance Tomei, Barbara Walvoord, Belle Wheelan, and Peter Jonas. Then I'm doing a session on "Understanding & Interpreting Rubric Results." Visit livetextconference.com for more information.

These are both terrific conferences, jam-packed with great speakers, inspiring ideas, and practical solutions. I hope to see you at them!