Linda Suskie

A Common Sense Approach to Assessment in Higher Education


Blog

Lessons from the Election for Assessment

Posted on November 21, 2016 at 2:45 PM

The results of the U.S. presidential election have lessons both for American higher education and for assessment. Here are the lessons I see for meaningful assessment; I’ll tackle implications for American higher education in my next blog post.

 

Lesson #1: Surveys are a difficult way to collect meaningful information in the 21st century. If your assessment plan includes telephone or online surveys of students, alumni, employers, or anyone else, know going in that it’s very hard to get a meaningful, representative sample.

 

A generation ago (when I wrote a monograph Questionnaire Survey Research: What Works for the Association of Institutional Research), most people had land line phones with listed numbers and without caller ID or voice mail. So it was easy to find their phone number, and they usually picked up the phone when it rang. Today many people don’t have land line phones; they have cell phones with unlisted numbers and caller ID. If the number calling is unfamiliar to them, they let the call go straight to voice mail. Online surveys have similar challenges, partly because databases of e-mail addresses aren’t as readily available as phone books and partly because browsing habits affect the validity of pop-up polls such as those conducted by Survey Monkey. And all survey formats are struggling with survey fatigue (how many surveys have you been asked to complete in the last month?).

 

Professional pollsters have ways of adjusting for all these factors, but those strategies are difficult and expensive and often beyond our capabilities.

 

Lesson #2: Small sample sizes may not yield meaningful evidence. Because of Lesson #1, many of the political polls we saw were based on only a few hundred respondents. A sample of 250 has an error margin of 6% (meaning that if, for example, you find that 82% of the student work you assessed meets your standard, the true percentage is probably somewhere between 76% and 88%). A sample of 200 has an error margin of 7%. And these error margins assume that the samples of student work you’re looking at are truly representative of all student work. Bottom line: We need to look at a lot of student work, from a broad variety of classes, in order to draw meaningful conclusions.
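For anyone who wants to check those figures, here is a minimal sketch of the standard margin-of-error calculation behind them, assuming a simple random sample and the conservative worst case of a 50/50 split:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a simple random sample.

    p = 0.5 is the conservative worst case; z = 1.96 gives roughly 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (250, 200):
    print(f"n = {n}: +/- {margin_of_error(n):.1%}")
# n = 250: +/- 6.2%
# n = 200: +/- 6.9%
```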

 

Lesson #3: Small differences aren’t meaningful. I was struck by how many reporters and pundits talked about Clinton having, say, a one- or two-percentage-point lead without mentioning that the error margin made these leads too close to call. I know everyone likes to have a single number—it’s easiest to grasp—but I wish we could move to the practice of reporting ranges of likely results, preferably in graphs that show overlaps and convey visually when differences aren’t really significant. That would help audiences understand, for example, whether students’ critical thinking skills really are worse than their written communication skills, or whether their information literacy skills really are better than those of their peers.
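As a rough sketch of how such ranges could be compared, the snippet below computes approximate 95% intervals for two hypothetical rubric results (the proportions and sample sizes are invented purely for illustration) and checks whether the intervals overlap:

```python
import math

def interval(p_hat, n, z=1.96):
    """Approximate 95% confidence interval for a proportion."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# Hypothetical results: share of sampled work rated proficient or better
critical_thinking = interval(0.74, 220)   # about (0.68, 0.80)
written_comm = interval(0.78, 240)        # about (0.73, 0.83)

overlap = (critical_thinking[1] >= written_comm[0]
           and written_comm[1] >= critical_thinking[0])
print("too close to call" if overlap else "likely a real difference")
# Prints "too close to call": the 4-point gap is within the error margins.
```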

 

Lesson #4: Meaningful results are in the details. Clinton won the popular vote by well over a million votes but still lost enough states to lose the Electoral College. Similarly, while students at our college may be doing well overall in terms of their analytic reasoning skills, we should be concerned if students in a particular program or cohort aren’t doing that well. Most colleges and universities are so diverse in terms of their offerings and the students they serve that I’m not sure overall institution-wide results are all that helpful; the overall results can mask a great deal of important variation.

 

Lesson #5: We see what we want to see. With Clinton the odds-on favorite to win the race, it was easy to dismiss Trump’s chances of winning (anywhere from 10-30%, depending on the analysis) as negligible, when in fact those probabilities meant a Trump victory was a realistic possibility. Just as it was important to take a balanced view of poll results, it’s important to bring a balanced view to our assessment results. Usually our assessment results are a mixed bag, with both reasons to cheer and reasons to reflect and try to improve. We need to make sure we see—and share—both the successes and the areas for concern.

Can rubrics impede learning?

Posted on August 18, 2016 at 12:40 AM

Over the last couple of years, I’ve started to get some gentle pushback on rubrics from faculty, especially those teaching graduate students. Their concern is whether rubrics might provide too much guidance, serving as a crutch when students should be figuring things out on their own. One recent question from a faculty member expressed the issue well: “If we provide students with clear rubrics for everything, what happens when they hit the workplace and can’t figure out on their own what to do and how to do it without supervisor hand-holding?”

 

It’s a valid point, one that ties into the lifelong learning outcome that many of us have for our students: we want to prepare them to self-evaluate and self-correct their work. I can think of two ways we can help students develop this capacity without abandoning rubrics entirely. One possibility would be to make rubrics less explicit as students progress through their program. First-year students need a clear explanation of what you consider good organization of a paper; seniors and grad students shouldn’t. The other possibility—which I like better—would be to have students develop their own rubrics, either individually or in groups, subject, of course, to the professor’s review.

 

In either case, it’s a good idea to encourage students to self-assess their work by completing the rubric themselves—and/or have a peer review the assignment and complete the rubric—before turning it in. This can help get students in the habit of self-appraising their work and taking responsibility for its quality before they hit the workplace.

 

Do you have any other thoughts or ideas about this? Let me know!

Meaningful assessment of AA/AS transfer programs

Posted on July 9, 2016 at 7:45 AM

I often describe the teaching-learning-assessment process as a four-step cycle:

1. Clear learning outcomes

2. A curriculum and pedagogies designed to provide students with enough learning opportunities to achieve those outcomes

3. Assessment of those outcomes

4. Use of assessment results to improve the other parts of the cycle: learning outcomes, curriculum, pedagogies, and assessment


I also often point out that, if faculty are struggling to figure out how to assess something, the problem is often not assessment per se but the first two steps. After all, if you have clear outcomes and you’re giving students ample opportunity to achieve them, you should be grading students on their achievement of those outcomes, and there’s your assessment evidence. So the root cause of assessment struggles is often poorly articulated learning outcomes, a poorly designed curriculum, or both.


I see this a lot in the transfer AA/AS degrees offered by community colleges. As I explained in my June 20 blog entry, these degrees, designed for transfer into a four-year college major, typically consist of 42-48 credits of general education courses plus 12-18 credits related to the major. The general education and major-related components are often what I call “Chinese menu” curricula: Choose one course from Column A, two from Column B, and so on. (Ironically, few Chinese restaurants have this kind of menu anymore, but people my age remember them.)

 

The problem with assessing these programs is the second step of the cycle, as I explained in my June 20 blog. In many cases these aren’t really programs; they’re simply collections of courses without coherence or progressive rigor. That makes it almost impossible both to define meaningful program learning outcomes (the first step of the cycle) and to assess them (the third step of the cycle).

 

How can you deal with this mess? Here are my suggestions.

 

1. Clearly define what a meaningful “program” is. As I explained in my June 20 blog entry, many community colleges are bound by state or system definitions of a “program” that aren’t meaningful. Regardless of the definition to which you may be bound, I think it makes the most sense to think of the entire AA/AS degree as the program, with the 12-18 credits beyond gen ed requirements as a concentration, specialization, track or emphasis of the program.


2. Identify learning outcomes for both the degree and the concentration, recognizing that there should often be a relation between the two. In gen ed courses, students develop important competencies such as writing, analysis, and information literacy. In their concentration, they may achieve some of those competencies at a deeper or broader level, or they may achieve additional outcomes. For example, students in social science concentrations may develop stronger information literacy and analysis skills than students in other concentrations, while students in visual arts concentrations may develop visual communication skills in addition to the competencies they learn in gen ed.


Some community colleges offer AA/AS degrees in which students complete gen ed requirements plus 12-18 credits of electives. In these cases, students should work with an advisor to identify their own, unique program/concentration learning outcomes and select courses that will help them achieve those outcomes.


3. Use the following definition of a program (or concentration) learning outcome: Every student in the program (or concentration) takes at least two courses with learning activities that help him or her achieve the program learning outcome. This calls for fairly broad rather than course-specific learning outcomes.


If you’re struggling to find outcomes that cross courses, start by looking at course syllabi for any common themes in course learning outcomes. Also think about why four-year colleges want students to take these courses. What are students learning, beyond content, that will help them succeed in upper-division courses in the major? In a pre-engineering program, for example, I’d like to think that the various science and math courses students take help them graduate with stronger scientific reasoning and quantitative skills than students in non-STEM concentrations.


4. Limit the number of learning outcomes; quality is more important than quantity here. Concentrations of 12-18 credits might have just one or two.

 

5. Also consider limiting your course options by consolidating Chinese-menu options into more focused pathways, which we are learning improve student success and completion. I’m intrigued by what Alexandra Waugh calls “meta-majors”: focused pathways that prepare students for a cluster of four-year college majors, such as health sciences, engineering, or the humanities, rather than just one.


6. Review your curricula to make sure that every student, regardless of the courses he or she elects, will graduate with a sufficiently rigorous achievement of every program (and concentration) learning outcome. An important principle here: There should be at least one course in which students can demonstrate achievement of the program learning outcome at the level of rigor expected of an associate degree holder prepared to begin junior-level work. In many cases, an entry-level course cannot be sufficiently rigorous; your program or concentration needs at least one course that cannot be taken the first semester. If you worry that prerequisites may be a barrier to completion, consider Passaic County Community College’s approach, described in my June 20 blog.


7. Finally, you’ve got meaningful program learning outcomes and a curriculum designed to help students achieve them at an appropriate level of rigor, so you're ready to assess those outcomes. The course(s) you’ve identified in the last step are where you can assess student achievement of the outcomes. But one additional challenge faces community colleges: many students transfer before taking this “capstone” course. So also identify a program/concentration “cornerstone” course: a key course that students often take before they transfer that helps students begin to achieve one or more key program/concentration learning outcomes. Here you can assess whether students are on track to achieve the program/concentration learning outcome, though at this point they probably won’t be where you want them by the end of the sophomore year.

Rubrics: Not too broad, not too narrow

Posted on April 3, 2016 at 6:50 AM

Last fall I drafted a chapter, “Rubric Development,” for the forthcoming second edition of the Handbook on Measurement, Assessment, and Evaluation in Higher Education. My literature review for the chapter was an eye-opener! I’ve been joking that everything I had been saying about rubrics was wrong. Not quite, of course!

 

One of the many things I learned is that what rubrics assess varies according to the decisions they inform, falling on a continuum from narrow to broad uses.

 

Task-specific rubrics, at the narrow end, are used to assess or grade one assignment, such as an exam question. They are so specific that they apply only to that one assignment. Because their specificity may give away the correct response, they cannot be shared with students in advance.

 

Primary trait scoring guides or primary trait analysis are used to assess a family of tasks rather than one specific task. Primary trait analysis recognizes that the essential or primary traits or characteristics of a successful outcome such as writing vary by type of assignment. The most important writing traits of a science lab report, for example, are different from those of a persuasive essay. Primary trait scoring guides focus attention on only those traits of a particular task that are relevant to the task.

 

General rubrics are used with a variety of assignments. They list traits that are generic to a learning outcome and are thus independent of topic, purpose, or audience.

 

Developmental rubrics or meta-rubrics are used to show growth or progression over time. They are general rubrics whose performance levels cover a wide span of performance. The VALUE rubrics are examples of developmental rubrics.

 

The lightbulb that came on for me as I read about this continuum is that rubrics toward the middle of the continuum may be more useful than those at either end. Susan Brookhart has written powerfully about avoiding task-specific rubrics: “If the rubrics are the same each time a student does the same kind of work, the student will learn general qualities of good essay writing, problem solving, and so on… The general approach encourages students to think about building up general knowledge and skills rather than thinking about school learning in terms of getting individual assignments done.”

 

At the other end of the spectrum, developmental rubrics have a necessary lack of precision that can make them difficult to interpret and act upon. In particular, they’re not well suited to assessing student growth in any one course.

 

Overall, I’ve concluded that one institution-wide developmental rubric may not be the best way to assess student learning, even of generic skills such as writing or critical thinking. As Barbara Walvoord has noted, “You do not need institution-wide rubric scores to satisfy accreditors or to get actionable information about student writing institution-wide.” Instead of using one institution-wide developmental rubric to assess student work, I’m now advocating using that rubric as a framework from which to build a family of related analytic rubrics: some for first year work, some for senior capstones, some for disciplines or families of disciplines such as the natural sciences, engineering, and humanities. Results from all these rubrics are aggregated qualitatively rather than quantitatively, by looking for patterns across rubrics. Yes, this approach is a little messier than using just one rubric, but it’s a whole lot more meaningful.

Making assessment consequential

Posted on January 25, 2016 at 7:25 AM

Of course as soon as I posted and announced my last blog on helpful assessment resources, I realized I’d omitted two enormous ones: AAC&U, which has become an amazing resource and leader on assessment in general education and the liberal arts, and the National Institute for Learning Outcomes Assessment (NILOA), which has generated and published significant scholarship that is advancing assessment practice. I’ve edited that blog to add these two resources.

 

Last year the folks at NILOA wrote what I consider one of eight essential assessment books: Using Evidence of Student Learning to Improve Higher Education. It’s co-authored by one of the greatest collections of assessment minds on the planet: George Kuh, Stan Ikenberry, Natasha Jankowski, Timothy Cain, Peter Ewell, Pat Hutchings, and Jillian Kinzie. They make a convincing case for rebooting our approach to assessment, moving from what they call a culture of compliance, in which we focus on doing assessment largely to satisfy accreditors, to what they call consequential assessment, the kind that truly impacts student success and institutional performance.


Here’s my favorite line from the book: “Good assessment is not about the amount of information amassed, or about the quality of any particular facts or numbers put forth. Rather, assessment within a culture of evidence is about habits of question asking, reflection, deliberation, planning, and action based on evidence” (p. 46). In other words, the most important kind of validity for student learning assessments is consequential validity.

 

The book presents compelling arguments for making this transformational shift, discusses challenges in making this shift and offers practical, research-informed strategies on how to overcome those challenges based on real examples of good practices. This book turned on so many light bulbs for me! As I noted in my earlier blog on eight essential assessment books, it’s a worthwhile addition to every assessment practitioner’s bookshelf.

 

I’ll be publishing a more thorough review of the book in an upcoming issue of the journal Assessment & Evaluation in Higher Education.

What is a rubric?

Posted on November 2, 2015 at 6:55 AM

I’ve finished a draft of my chapter, “Rubric Development,” for the forthcoming second edition of the Handbook on Measurement, Assessment, and Evaluation in Higher Education. Of course the chapter had to explain what a rubric is as well as how to develop one. My research quickly showed that there’s no agreement on what a rubric is! There are at least five formats for guides to score or evaluate student work, but there is no consensus on which of the formats should be called a rubric.

 

The simplest format is a checklist: a list of elements present in student work. It is used when elements are judged to be either present or not; it does not assess the frequency or quality of those items.

 

Then comes a rating scale: a list of traits or criteria for student work accompanied by a rating scale marking the frequency or quality of each trait. Here we start to see disagreements on vocabulary; I’ve seen rating scales called minimal rubrics, performance lists, expanded checklists, assessment lists, or relative rubrics.

 

Then comes the analytic rubric, which fills in the rating scale’s boxes with clear descriptions of each level of performance for each trait or criterion. Here again there’s disagreement on vocabulary; I’ve seen analytic rubrics called analytical rubrics, full rubrics or descriptive rubrics.

 

Then there is the holistic rubric, which describes how to make an overall judgment about the quality of work through narrative descriptions of the characteristics of work at each performance level. These are sometimes called holistic scoring guides.

 

Finally, there’s what I’ve called a structured observation guide: a rubric without a rating scale that lists traits with spaces for comments on each trait.

 

So what is a rubric? Opinions fall into three camps.

 

The first camp defines rubrics broadly and flexibly as guides for evaluating student work. This camp would consider all five formats to be rubrics.

 

The second camp defines rubrics as providing not just traits but also standards or levels of quality along a continuum. This camp would consider rating scales, analytic rubrics, and holistic rubrics to be rubrics.

 

The third camp defines rubrics narrowly as only those scoring guides that include traits, a continuum of performance levels, and descriptions of each trait at each performance level. This camp would consider only analytic rubrics and holistic rubrics to be rubrics.

 

I suspect that in another 20 years or so we’ll have a common vocabulary for assessment but, in the meanwhile, if you and your colleagues disagree on what a rubric is, take comfort in knowing that you’re not alone!

Two simple steps to better assessment

Posted on October 16, 2015 at 7:45 AM

I recently came across two ideas that struck me as simple solutions to an ongoing frustration I have with many rubrics: too often they don't make clear, in compelling terms, what constitutes minimally acceptable performance. This is a big issue, because you need to know whether or not student work is adequate before you can decide what improvements in teaching and learning are called for. And your standards need to be defensibly rigorous, or you run the risk of passing and graduating students who are unprepared for whatever comes next in their lives.


My first "aha!" insight came from a LinkedIn post by Clint Schmidt. Talking about ensuring the quality of coding "bootcamps," he suggests, "set up a review board of unbiased experienced developers to review the project portfolios of bootcamp grads."


This basic idea could be applied to almost any program. Put together a panel of the people who will be dealing with your students after they pass your course, after they complete your gen ed requirements, or after they graduate. For many programs, including many in the liberal arts, this might mean workplace supervisors from the kinds of places where your graduates typically find jobs after graduation. For other programs, this might mean faculty in the bachelor's or graduate programs your students move into. The panels would not necessarily need to review full portfolios; they might review samples of senior capstone projects or observe student presentations or demonstrations.


The cool thing about this approach is that many programs are already doing this. Internship, practicum, and clinical supervisors, local artists who visit senior art exhibitions, local musicians who attend senior recitals--they are all doing a variation of Schmidt's idea. The problem, however, is that the rating scales they're asked to complete are often so vaguely defined that it's unclear which rating constitutes what they consider minimally acceptable performance.

And that's where my second "aha!" insight comes into play. It's from a ten-year-old rubric developed by Andi Curcio to assess a civil complaint assignment in a law school class. (Go to lawteaching.org/teaching/assessment/rubrics/, then scroll down to Civil Complaint: Rubric (Curcio) to download the PDF.) Her rubric has three columns with typical labels (Exemplary, Competent, Developing), but each label goes further.

  •  "Exemplary" is "advanced work at this time in the course - on a job the work would need very little revision for a supervising attorney to use." 
  • "Competent" is "proficient work at this time in the course - on a job the work would need to be revised with input from supervising attorney." 
  • And "Developing" is "work needs additional content or skills to be competent - on a job, the work would not be helpful and the supervising attorney would need to start over." 

Andi's simple column labels make two things clear: what is considered adequate work at this point in the program, and how student performance measures up to what employers will eventually be looking for.


If we can craft rubrics that define clearly the minimal level that students need to reach to succeed in their next course, their next degree, their next job, or whatever else happens next in their lives, and bring in the people who actually work with our students at those points to help assess student work, we will go a long way toward making assessment even more meaningful and useful.

Making student evaluations of teaching useful

Posted on September 15, 2015 at 7:10 AM

On September 24, I’ll be speaking at the CoursEval User Conference on “Using Student Evaluations to Improve What We Do,” sharing five principles for making student evaluations of teaching useful in improving teaching and learning:

 

1. Ask the right questions: ones that ask about specific behaviors that we know through research help students learn. Ask, for example, how much tests and assignments focus on important learning outcomes, how well students understand the characteristics of excellent work, how well organized their learning experiences are, how much of their classwork is hands-on, and whether they receive frequent, prompt, and concrete feedback on their work.

 

2. Use student evaluations before the course’s halfway point. This lets the faculty member make mid-course corrections.

 

3. Use student evaluations ethically and appropriately. This includes using multiple sources of information on teaching effectiveness (teaching portfolios, actual student learning results, etc.) and addressing only truly meaningful shortcomings.

 

4. Provide mentoring. Just giving a faculty member a summary of student evaluations isn’t enough; faculty need opportunities to work with colleagues and experts to come up with fresh approaches to their teaching. This calls for an investment in professional development.

 

5. Provide supportive, not punitive, policies and practices. Define a great teacher as one who is always improving. Define teaching excellence not as student evaluation results but as what faculty do with them. Offer incentives and rewards for faculty to experiment with new teaching approaches, and allow them temporary freedom to fail.

 

My favorite resource on evaluating teaching is the IDEA center in Kansas. It has a wonderful library of short, readable research papers on teaching effectiveness. A particularly helpful paper (that includes the principles I’ve presented here) is IDEA Paper No. 50: Student Ratings of Teaching: A Summary of Research and Literature.

Rethinking the assessment of gen ed and institution-wide learning outcomes

Posted on July 31, 2015 at 8:25 AM

I’ve been working with a number of colleges on assessing their gen ed or institution-wide learning outcomes, and concluded that what many are doing is way too complicated. Typically colleges decide to use rubrics (often the AAC&U VALUE rubrics or a modification of them) to assess gen ed or institution-wide learning outcomes. Then they have faculty submit samples of student work. Then one or more groups of faculty use the rubrics to score the student work samples.

 

If this strategy works, there’s nothing wrong with it. But I’m seeing too many colleges where this process isn’t working well.

 

  • At some colleges, faculty submitting samples are largely disconnected from the assessment process, so they don’t feel ownership. Assessment is something “done” to them.
  • It’s hard to come up with a rubric that’s meaningfully applicable to student work taken from many different courses. So, at some colleges, the rubric results don’t have meaning to many faculty. That makes it hard to use the rubric results to make meaningful, broad improvements in teaching and learning.
  • At many of these colleges, student work samples are submitted into an assessment data management system. These systems, chosen and implemented wisely, can be great time-savers. But too often I’m seeing faculty required, rather than encouraged, to use these systems. They’re required to use rubrics, or to use rubrics with a particular format, or to report on what they’ve done in a particular way—all of which may not fit well with what they’re doing. Square pegs are being pushed into round holes.
  • Using standard assessment structures encourages comparisons that may be inappropriate. Should we really compare students’ critical thinking skills in literature courses with those in chemistry courses?

 

What I’m increasingly recommending is a bottom-up, qualitative approach to assessing gen ed and institution-wide learning outcomes. Let faculty in each course or program develop a rubric or other assessment that is meaningful to them—that reflects college-wide learning outcomes through the lens of what they are trying to teach. That kind of rubric can be used both for grading and for broader assessment.


(An important caveat here: I said "course" and not "class." Faculty teaching sections of the same course should be collaborating to identify and implement an appropriate strategy to assess key gen ed or institutional learning outcomes in all sections of the course.)


Then have a faculty group review the reports of these assessments holistically and qualitatively for recurring themes. I’ve done this myself, and things always pop out. At one college I visited, students repeatedly struggled to integrate their learning—pull the pieces together and see the big picture. At another, students repeatedly struggled with analysis, especially with data. The findings, gleaned from human rather than system review, were clear and “actionable”—they could lead to institution-wide discussions and decisions on strategies to improve students’ integration or data analysis skills.


So if a standardized, centralized approach to assessing gen ed or institutional outcomes is working for your institution, don’t mess with success. But if it seems cumbersome, time consuming, and not all that helpful, consider a less structured, decentralized approach.

 

For more thoughts on assessing institution-wide and gen ed learning outcomes, see my blog posts on tips for assessing gen ed learning outcomes, tips on assessing institution-wide learning outcomes and the various levels of assessment: class, course, program, and gen ed.

Five Big Ideas about Assessing General Education

Posted on May 23, 2015 at 9:15 AM

I recently had the pleasure to speak to faculty and administrators at a college in New England on assessing their gen ed curriculum. Here are the five big ideas I shared with them.

 

Big Idea #1: Gen Ed Assessment is Hard! It’s harder than assessing student learning in programs (majors) or individual courses, for several reasons.


  • American colleges and universities are frankly embarrassed by their gen ed requirements. The requirements are typically buried deep on the college’s website or in its catalog, and academic advisors typically talk about gen ed requirements as something to “get out of the way.”
  • There’s often no ownership of gen ed. Who’s in charge of the humanities or social sciences requirement, for example, making sure it delivers on its intentions?
  • Gen ed outcomes are often fuzzy, and it’s hard to assess fuzzy goals meaningfully.
  • Gen ed assessment requires collaboration, and many colleges operate in a culture of isolation.

 

Big Idea #2: It’s All about Goals. I take gen ed learning outcomes very seriously. They are a promise that the college is making: Every undergraduate who completes the gen ed requirements, no matter which gen ed courses or sections he or she has chosen, is competent at every gen ed outcome. But a lot of gen ed curricula aren’t designed to ensure this. Yes, many students graduate competent in all gen ed learning outcomes, but it’s possible for some students to fall through the net and graduate without some of these important competencies.

 

Big Idea #3: Gen Ed Assessment Shouldn’t Be All that Different from What You’re Already Doing. If you’re teaching it and grading it, you’re assessing it. Gen ed assessment is often the biggest struggle for faculty who haven’t been addressing key gen ed competencies in their courses.


Big Idea #4: Keep This as Easy as You Can.

  • Start at the end and work backwards; if your gen ed curriculum has a sophomore or junior capstone, start there. The capstone projects should demonstrate achievement of a number of gen ed outcomes. If the projects show great communication, critical thinking, and information literacy skills (or whatever your gen ed outcomes are), you’re done!
  • Look for the biggest return on investment. At many colleges, the 80-20 rule applies: 80% of undergraduates enroll in only 20% of gen ed course offerings. Start by assessing student learning in those courses.
  • Keep your gen ed outcomes and curriculum lean. The more learning outcomes you have and the more courses you offer, the more work you have to keep everything updated, aligned, and assessed. I’m also seeing research that lean community college gen ed curricula actually increase student success rates.


Big Idea #5: Make Accountability Pressures Work for You. Rather than view calls for accountability as a threat, look on them as an opportunity.

  • Demonstrate that you are doing what everyone wants: for students to get the best possible education.
  • Tell the world how good you are and, when assessment results are disappointing, what you’re doing to get even better.
  • Show that you use your limited resources wisely—that the investments by students, taxpayers, and donors are making a difference, in a cost-effective way.
  • Show that you are keeping your promises, especially that your students are indeed learning what you promise.

Assessing high impact practices

Posted on April 13, 2015 at 8:20 AM

“High impact practices”—one of those buzzwords getting a lot of attention these days. What exactly are high impact practices (HIPs), and how should they be assessed?

 

HIPs are educational experiences that make a significant difference in student learning, persistence, and success. Research by the Association of American Colleges & Universities (AAC&U) and the National Survey of Student Engagement (NSSE) has found that the following can all be HIPs:

• First-year experiences

• Learning communities

• Writing-intensive courses

• Collaborative learning experiences

• Service learning

• Undergraduate research

• Internships

• Capstone courses and projects

 

What makes these experiences so effective? In a word: engagement. Students are more likely to learn, persist, and succeed if they are actively engaged in their learning. Gallup, for example, found that college graduates who feel their college prepared them for life and helped them graduate on time were more likely to agree with the following:

• I had at least one professor who made me excited about learning.

• My professors cared about me as a person.

• I had a mentor who encouraged me to pursue my goals and dreams.

• I worked on a project that took a semester or more to complete.

• I had an internship or job that allowed me to apply what I was learning in the classroom.

• I was extremely active in extracurricular activities and organizations while I attended college.

(Sadly, only 3% of all college graduates reported having all six of these experiences.)

 

So how should HIPs be assessed? As I often say about assessment, it’s all about goals. Because HIPs are intended to help students learn, persist, and succeed, your assessments should focus on student retention and graduation rates, perhaps grades in subsequent coursework (if applicable), and how well students have achieved the HIP’s key learning outcomes. Check your institution’s strategic goals too. There may be a goal to, for example, improve the success of a particular student cohort. If your HIP is intended to help achieve this kind of goal, track that as well.

Finding time

Posted on January 18, 2015 at 7:05 AM

When I work with faculty on curriculum design, teaching strategies, or assessment methods, one of the most common reactions is, “This is great, but when am I going to find the time to do this?” It’s a legitimate question. Especially since the Great Recession, everyone in higher education has been asked to wear more and more hats, to do more with less. At some colleges I visit, the exhaustion is palpable.

 

There are only so many hours in a week, and we can’t create more time. So the only way to find time to work on the quality of what we do is to stop doing something else. If faculty are expected to bring new approaches to curricula, teaching strategies, and assessment on top of everything else, the message is that everything else is more important.

 

What can you stop or scale back? My first suggestion is to look at your committees; most colleges I visit have too many, and committee work expands to fill the time allotted. What would happen if a particular committee didn’t meet for the rest of the year?

 

Next, carve out times in the academic calendar when faculty can get together to talk. Some colleges don’t schedule any classes on, say, Wednesdays at noon, giving departments and committees time to meet. Some set aside professional development days at the beginning and/or end of the semester. Think twice about filling these days with a program that everyone is expected to attend; today it’s the rare college where everyone has the same professional development needs and will benefit from the same program. Instead consider asking each department to design its own agenda for the day.

 

Finally, look at your array of curricular offerings: your degree and certificate programs, your array of general education offerings, and so on. Each of those courses and programs needs to be reviewed, updated, planned, taught, and assessed. Three course preparations each semester don’t take as much time as four. Look at student enrollment patterns, then ask yourself if a course or program that attracts relatively few students is more important than the time freed up if it were no longer offered.

Five tips on assessing institution-wide learning outcomes

Posted on September 26, 2014 at 12:10 AM

Over the summer, student learning expert Dee Fink asked me for some suggestions on how to assess student achievement of institution-wide learning outcomes. I told him this is hard to answer in generalities—a lot depends on things like the institution’s curricula, size, and complexity—but here are some tips.

 

1. How do students learn these things? In other words, where (in what courses) and how (through what learning activities) do students achieve each of these learning outcomes? My point is the one I made in my last blog post: if faculty are teaching an outcome, they are (or should be) grading students on it, so they should already have assessment evidence in hand. If so, you’re down to aggregating evidence and looking for overall trends.

 

2. One size does not fit all. Math application skills might be assessed through a multiple choice test, while appreciation of diverse perspectives might be assessed through a reflective essay. There’s no law that says everything has to be assessed the same way.

 

3. Keep it simple. Complicated assessment processes, such as submitting samples of student work that are scored by a committee, can collapse under their own weight. Rank-and-file faculty members come to view themselves as providers of assessment information, not consumers of it. For many U.S. institutions, no matter how many general education courses the institution offers, there are probably no more than 20 that most students take to complete their general education requirements. Just focus your assessments on those high-enrollment courses to start.

 

4. Focus on capstones: projects or experiences that students complete as they approach graduation. Done right, they should give you a lot of information on a number of key institutional learning outcomes—a good embodiment of “keep it simple.” I’m also a big fan of reflective writing, so I like to see capstones accompanied by reflective papers on what and how students have learned.

 

5. Approach portfolios slowly and carefully. While they're considered the gold standard of assessment, if not carefully planned they can be a huge amount of work; someone needs to sift through all that stuff and make sense of it. I recommend portfolios for low-enrollment programs and individually designed majors.

Meaningful learning activities and assignments: A key to successful assessment

Posted on August 22, 2014 at 6:40 AM

If you’re giving students plenty of opportunities to achieve your key outcomes, assessment is easy: you’re already grading their work on those learning activities, so you already have assessment evidence in hand. When faculty struggle with assessing key learning outcomes, the problem is often that they’re not giving students meaningful learning activities to help them achieve those outcomes. If you want students to learn how to analyze information, for example, what kinds of learning activities do you give them to help them learn how to analyze information? Here are some tips:

• Start with the assignment’s key learning outcomes: what you want students to learn by completing the assignment.

• Explain to students why you are giving them the assignment—how the assignment will help prepare them to succeed in later courses, in the workplace, and/or in their lives. (Some students do better if they understand the relevance of what they’re doing.)

• Create a rubric to grade the assignment that reflects those key outcomes, with appropriate emphasis on the most important outcomes. (A recent study found that many faculty emphasize grammar at the expense of other skills.)

• Give the rubric to students with the assignment, so they know where to focus their time and energies.

• Consider alternatives to traditional papers. Students can share their analysis of information through a chart, graph, or other visual, which can be faster to grade and fairer to students who struggle with written communication skills.

• Point students in the right direction by giving them appropriate guidelines: length and format of the assignment, what resources they can use, who the assignment’s audience is, etc.

• Break large assignments into smaller pieces. Ask students to submit first just their research paper topics—if the topic won’t work well, you can get them back on track before they go too far down the wrong road.

 

The clearer your guidelines to students, the better some students will do, and we all know that an A assignment is a lot faster and easier to grade than a poor assignment. So this is a win-win strategy: as Barbara Walvoord and Virginia Anderson say in their book Effective Grading, your students work harder and learn more, and you spend less time grading!

The way to better rubrics: Start by looking at student work

Posted on August 14, 2014 at 6:15 AM

In my July 30 blog post, I discussed the key findings of a study on rubric validity reported in the June/July 2014 issue of Educational Researcher. In addition to the study’s major findings, a short statement on how the rubric under study was developed caught my attention:

 

“A team…developed the new rubric based on a qualitative analysis of approximately 100 exemplars. The team compared the exemplars to identify and articulate observed, qualitative differences…”

 

I wish the authors had fleshed this out a bit more, but here’s my take on how the rubric was developed. The process began, not with the team brainstorming rubric criteria, but by looking at a sample of 100 student papers. I’d guess that team members simply took notes on each paper: What in each paper struck them as excellent? Mediocre but acceptable? Unacceptably poor? Then they probably compiled all the notes and looked through them for themes. From these themes came the rubric criteria and the performance levels for each criterion…which, as I explained in my July blog post, varied in number.

 

I’ve often advised faculty to take a similar approach. Don’t begin the work of developing a rubric with an abstract brainstorming session or by looking at someone else’s rubric. Start by reviewing a sample of student work. You don’t need to look at 100 papers—just pick one paper, project or performance that is clearly outstanding, one that is clearly unacceptable, and some that are in between. Take notes on what is good and not-so-good about each and why you think they fall into those categories. Then compile the notes and talk. At that point—once you have some basic ideas of your rubric criteria and performance levels of each criterion—you may want to consider looking at other rubrics to refine your thinking (“Yes, that rubric has a really good way of stating what we’re thinking!”).

 

Bottom line: A rubric that assesses what you and your colleagues truly value will be more valid, useful, and worthwhile.

A vital skill: Seeing the 30,000-foot picture

Posted on July 10, 2014 at 5:35 AM

I love Alison Head and John Wihbey’s piece, “At Sea in a Deluge of Data” in this week’s Chronicle of Higher Education. They talk about a particular skill that’s growing in importance in the 21st century, what I call seeing the 30,000-foot picture: taking a lot of information, seeing the big ideas from all that information, and communicating the big points clearly and understandably.

 

Many colleges have a hard time helping their students develop this skill. Traditional library research papers may help, but they don’t give students the real-world integrative skills that employers are looking for: separating the information wheat from the chaff (the relevant from the irrelevant and the credible from what I like to call the incredible) and communicating big points in short, succinct ways that people can quickly and easily understand (see my earlier blog on infographics).


One reason that I think we have a hard time helping students develop this skill is because so many of us struggle with this ourselves. Seeing the 30,000-foot picture doesn’t come naturally to most people. David Keirsey has found that only about 5-10% of the population has the inherent temperament for big-picture analysis; people are far more likely to be detail-oriented. (You can take the Keirsey Temperament Sorter at www.keirsey.com and see where you fit.)


I see this a lot in work on assessment and accreditation. People are good at saying, “We used this rubric and here are the scores,” “Students took this survey and here are their responses,” “Here are grade distributions from key gateway courses.” But people often struggle to connect those pieces. What do your rubric, survey, and grade distribution results each say about students’ writing skills, for example? What are they telling you overall about students’ writing skills? Are the survey results and grades helping you understand why you’re getting your rubric results? Accreditors are less interested in a table of results than in what the results are saying to you. What overall conclusions can you draw about your students’ writing skills?

 

We need both detail and 30,000-foot people working on assessment and accreditation activities. Make sure you’ve got both on your team.

Why aren't grades good enough?

Posted on November 17, 2013 at 6:55 AM

Why aren't grades sufficient evidence of student learning? 


1. Grades alone do not usually provide meaningful information on exactly what students have and have not learned. So it's hard to use grades alone to decide how to improve teaching and learning.


2. Grading and assessment criteria sometimes differ. Some components of grades reflect classroom management strategies (attendance, timely submission of assignments) rather than achievement of key learning outcomes.


3. Grading standards are sometimes vague or inconsistent. They may weight relatively unimportant (but easier to assess) outcomes more heavily than some major (but harder to assess) outcomes.


4. Grades do not reflect all learning experiences. They provide information on student performance in individual courses and assignments but not student progress in achieving program-wide or institution-wide outcomes.


That said, the grading process can provide excellent evidence of achievement of key learning outcomes, and using information from the grading process in this way can make assessment faster, easier, and more meaningful. NILOA (the National Institute for Learning Outcomes Assessment) has recently published a paper on how Prince George's Community College in Maryland is doing exactly this: http://learningoutcomesassessment.org/OccasionalPapernineteen.html.


You'll see from the NILOA paper that using the grading process to collect assessment evidence works only when faculty are willing to collaborate and agree on at least some common baseline grading criteria. I often suggest a two-part rubric: the top half lists the common criteria everyone agrees to, and the bottom half lists class-specific criteria that individual faculty want to factor into grades.

Writing effective multiple choice questions

Posted on November 1, 2013 at 7:20 AM

Unlike many people involved with higher education assessment, I'm a fan of multiple choice tests...under the right circumstances, of course.

 

Multiple choice tests can give us a broader picture of student learning than "authentic" assessments, and they can be scored and evaluated very quickly. And, yes, they can assess application and analysis skills as well as memory and comprehension.

 

The key is to ask questions that can be answered in an open-book, open-note format...ones that require students to think and apply their knowledge rather than just recall. My favorite way to do this is with what I call "interpretive exercises" and others call "vignettes," "context-dependent items" or "enhanced multiple choice." You've seen these on published tests. Students are given material they haven't seen before: a chart, a description of a scenario, a diagram, a literature excerpt. The multiple choice questions that follow ask students to interpret this new material.

 

The key to a good multiple choice test is to start with a "test blueprint": a list of the learning objectives you want to assess. Then write items for each of those learning objectives.
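As a minimal illustration, a blueprint can be as simple as a table or data structure mapping each learning objective to the number of items planned for it; the objectives and counts below are hypothetical, just to show the idea of writing items against each objective:

```python
# Hypothetical test blueprint: each learning objective mapped to the number of
# items planned at each cognitive level.
blueprint = {
    "Explain core concepts and terminology": {"recall": 2, "application": 3},
    "Interpret data presented in charts and tables": {"recall": 1, "application": 4},
    "Apply concepts to an unfamiliar scenario": {"recall": 0, "application": 5},
}

# Check the planned test length before writing any items.
total_items = sum(sum(levels.values()) for levels in blueprint.values())
print(f"Planned test length: {total_items} items")  # Planned test length: 15 items
```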

 

There are just two other precepts for writing good multiple choice items. First, remove all barriers that will keep a knowledgeable student from getting the item right. (For example, don't make the item unnecessarily wordy.) Second, remove all clues that will help a less-than-knowledgeable student get the item right. (For example, use common misconceptions as incorrect options.)

Course vs program vs gen ed assessment

Posted on September 28, 2013 at 7:35 AM

I get a lot of questions on the differences among assessment at various levels:

 

Student-level assessment is assessing learning of individual students, generally on course learning objectives. This is the kind of assessment that faculty have done for literally thousands of years. Its primary purposes are to grade students and give them feedback on their learning.

 

Class-level assessment is assessing learning of an entire class (section) of students, again on course learning objectives. Class-level assessments look at the same evidence used to grade students and give them feedback on their learning (student-level assessment, above) but aggregate results for all students in a class or section to get an overall picture of their collective strengths and weaknesses in their learning. A faculty member might tally, for example, how many students got Question 12 right and what rubric ratings students earned regarding the organization of their papers. The primary purposes are to reflect on and improve individual teaching practice.
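Here is a quick sketch of that kind of class-level tally, using hypothetical records of whether each student answered Question 12 correctly and the organization rating each earned:

```python
from collections import Counter

# Hypothetical class-level records: (answered Question 12 correctly?, organization rating)
records = [
    (True, "Exemplary"),
    (True, "Competent"),
    (False, "Developing"),
    (True, "Competent"),
    (False, "Competent"),
]

q12_correct = sum(1 for correct, _ in records if correct)
organization_ratings = Counter(rating for _, rating in records)

print(f"Question 12: {q12_correct}/{len(records)} students answered correctly")
print(organization_ratings)
# Question 12: 3/5 students answered correctly
# Counter({'Competent': 3, 'Exemplary': 1, 'Developing': 1})
```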

 

Course-level assessment is assessing learning of all students in a (multi-section) course, again on course learning objectives. This is just like class-level assessment except that faculty teaching sections of a course identify common course objectives and common means of assessing them. They then summarize results across sections to get an overall picture of students’ collective strengths and weaknesses. Faculty might, for example, agree to use the same rubric to grade the final paper or to include the same set of five questions on the final. If they see areas of weakness across sections, they work together to identify ways to collaboratively address those weaknesses.

 

Program-level assessment is assessing learning of all students in a program on program-level learning outcomes. Program level outcomes are generally addressed over multiple courses and are broader than course learning objectives. Course-level objectives generally contribute to program-level learning outcomes. For example, several courses in a program may each address specific technological skills. Those courses collectively contribute to an overall program-level learning outcome that students use technologies appropriately and effectively. Program-level outcomes are often best assessed with a significant assignment or project completed shortly before students graduate.

 

General education assessment is assessing learning of all undergraduates on general education outcomes. Faculty teaching courses that address a particular general education outcome collaborate to identify how that outcome will be assessed. They then aggregate results across courses to get an overall picture of students’ collective strengths and weaknesses regarding that learning outcome. There are many ways to do this, including using a shared rubric to assess a key assignment or project, using a common set of test questions, or using portfolios.