Linda Suskie

  A Common Sense Approach to Assessment & Accreditation

Blog

An example of closing the loop...and ideas for doing it well

Posted on September 24, 2016 at 7:35 AM

I was intrigued by an article in the September 23, 2016, issue of Inside Higher Ed titled “When a C Isn’t Good Enough.” The University of Arizona found that students who earned an A or B in their first-year writing classes had a 67% chance of graduating, but those earning a C had only a 48% chance. The university is now exploring a variety of ways to improve the success of students earning a C, including requiring C students to take a writing competency test, providing resources to C students, and/or requiring C students to repeat the course.

 

I know nothing about the University of Arizona beyond what’s in the article. But if I were working with the folks there, I’d offer the following ideas to them, if they haven’t considered them already.

 

1. I’d like to see more information on why the C students earned a C. Which writing skills did they struggle most with: basic grammar, sentence structure, organization, supporting arguments with evidence, etc.? Or was there another problem? For example, maybe C students were more likely to hand in assignments late (or not at all).

 

2. I’d also like to see more research on why those C students were less likely to graduate. How did their GPAs compare to A and B students? If their grades were worse, what kinds of courses seemed to be the biggest challenge for them? Within those courses, what kinds of assignments were hardest for them? Why did they earn a poor grade on them? What writing skills did they struggle most with: basic grammar, organization, supporting arguments with evidence, etc.? Or, again, maybe there was another problem, such as poor self-discipline in getting work handed in on time.

 

And if their GPAs were not that different from those of A and B students (or even if they were), what else was going on that might have led them to leave? The problem might not be their writing skills per se. Perhaps, for example, students with work or family obligations found it harder to devote the study time necessary to get good grades. Providing support for that issue might help more than helping them with their writing skills.

 

3. I’d also like to see the faculty responsible for first-year writing articulate a clear, appropriate, and appropriately rigorous standard for earning a C. In other words, they could use the above information on the kinds and levels of writing skills that students need to succeed in subsequent courses to articulate the minimum performance levels required to earn a C. (When I taught first-year writing at a public university in Maryland, the state system had just such a statement, the “Maryland C Standard.”)

 

4. I’d like to see the faculty adopt a policy that, in order to pass first-year writing, students must meet the minimum standard on every writing criterion. Thus, if student work is graded using a rubric, the grade isn’t determined by averaging the scores on the various rubric criteria—that would let a student with A arguments but F grammar pass with failing grammar. Instead, students must earn at least a C on every rubric criterion in order to pass the assignment. Then the As, Bs, and Cs can be averaged into an overall grade for the assignment.

 

(If this sounds vaguely familiar to you, what I’m suggesting is the essence of competency-based education: students need to demonstrate competence on all learning goals and objectives in order to pass a course or graduate. Failure to achieve one goal or objective can’t be offset by strong performance on another.)
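The grading rule in points 4–5 can be expressed very simply. Here is a minimal sketch; the rubric criteria, letter-to-point mapping, and rounding rule are my own illustrative assumptions, not anything from the Maryland C Standard:

```python
# Minimum-competency grading: a student passes an assignment only if every
# rubric criterion is scored at least C. One failing criterion cannot be
# offset by strong performance on another. If the minimum is met, the
# criterion scores are averaged into an overall assignment grade.
# The criteria and 4.0-style point values below are illustrative assumptions.

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}
PASSING_MINIMUM = GRADE_POINTS["C"]

def assignment_grade(criterion_grades):
    """criterion_grades: dict mapping rubric criterion -> letter grade."""
    points = [GRADE_POINTS[g] for g in criterion_grades.values()]
    if min(points) < PASSING_MINIMUM:
        return "F"  # a failing criterion fails the whole assignment
    average = sum(points) / len(points)
    # Round down to the nearest whole letter, for simplicity.
    for letter, value in sorted(GRADE_POINTS.items(), key=lambda kv: -kv[1]):
        if average >= value:
            return letter

# A arguments but F grammar: fails despite a high average of the scores.
print(assignment_grade({"arguments": "A", "organization": "B", "grammar": "F"}))  # F
# Same work with C grammar: passes, and the scores average to a B.
print(assignment_grade({"arguments": "A", "organization": "B", "grammar": "C"}))  # B
```

Under simple averaging, the first student's 4.0 + 3.0 + 0.0 would round to a passing C; the minimum-competency rule is what blocks that outcome.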

 

5. If they haven’t done so already, I’d also like to see the faculty responsible for first-year writing adopt a common rubric, articulating the criteria they’ve identified, that would be used to assess and grade the final assignment in every section, no matter who teaches it. This would make it easy to study student performance across all sections of the course and identify pervasive strengths and weaknesses in their writing. If some faculty members or TAs have additional grading criteria, they could simply add those to the common rubric. For example, I graded my students on their use of citation conventions, even though that was not part of the Maryland C Standard. I added that to the bottom of my rubric.

 

6. Because work habits are essential to success in college, I’d also suggest making this a separate learning outcome for first-year writing courses. This means grading students separately on whether they turn in work on time, put in sufficient effort, etc. This would help everyone understand why some students fail to graduate—is it because of poor writing skills, poor work habits, or both?

 

These ideas all move responsibility for addressing the problem from administrators to the faculty. That responsibility can’t be fulfilled unless the faculty commit to collaborating on identifying and implementing a shared strategy so that every student, no matter which section of writing they enroll in, passes the course with the skills needed for subsequent success.

 

Categories: Practical Tips


5 Comments

Reply Manar Sabry
3:59 AM on November 10, 2016 
Great suggestions, Linda. I also want to see the main characteristics of the students before they entered college or started the class. Maybe these C students are significantly different, and the remedy lies in a prerequisite. Maybe their high school GPAs or their SAT scores differ from those of the A and B students. I want to know whether they were admitted through a special program and what their intended majors are.
Reply Debbie Kell
11:34 AM on September 27, 2016 
C says...
Regarding suggestion 1, are there any inexpensive tools faculty can use to capture this information, or suggestions for setting up a database with this information?


I think the most helpful process and tool involves the collaborative development and utilization of a rubric. Faculty members teaching this course would meet and develop the criteria or traits that define desired writing behaviors. They similarly develop descriptive standards of performance for each of these criteria. Then they agree upon an assignment to which the rubric would be applied across all sections/all instructors. Faculty members would hold norming sessions so that there is a shared consensus about what constitutes each level of student performance.

After the student work is evaluated using this rubric, there are a number of approaches that could be used to aggregate your data. You could develop a form in Google Docs. You could develop a form in Survey Monkey. Email the links to all faculty members involved in this assessment, asking them to "enter" their student performance data. Then you could study student performance by any field that you built into your form. Of course, you would filter by trait or criterion. But you could also filter and study outcomes by modality (online, on campus, etc.), by campus, and so on. Keep it simple as you get out of the gate.
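Once the form responses are exported (say, as a spreadsheet), the filtering Debbie describes takes only a few lines. A minimal sketch with made-up rows; the field names, modalities, and scores are assumptions for illustration:

```python
# Aggregating rubric scores collected across sections via a shared form.
# Each row is one criterion score for one student artifact; the rows here
# are invented sample data standing in for a Google Forms / Survey Monkey export.
from collections import defaultdict
from statistics import mean

rows = [
    {"section": "001", "modality": "on campus", "criterion": "organization", "score": 3},
    {"section": "001", "modality": "on campus", "criterion": "grammar", "score": 2},
    {"section": "002", "modality": "online", "criterion": "organization", "score": 4},
    {"section": "002", "modality": "online", "criterion": "grammar", "score": 1},
]

def average_by(rows, field):
    """Mean rubric score for each value of `field` (e.g., criterion, modality, section)."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[field]].append(row["score"])
    return {value: mean(scores) for value, scores in groups.items()}

print(average_by(rows, "criterion"))  # which writing skills are weakest overall
print(average_by(rows, "modality"))   # the same scores sliced by delivery mode
```

The same grouping works for any field built into the form, which is the point of agreeing on a common rubric and common fields up front.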
Reply Debbie Kell
11:24 AM on September 27, 2016 
Thank you for your response, Linda.

I have occasionally been asked whether this whole focus on assessment has just created another industry which pulls energy, funding, and staffing away from the real process of teaching/learning. My response is that, simply put, assessment works. I have seen programs with low passing rates, low retention and graduation rates, and weak student performance on professional licensing exams turn themselves around when they study their curriculum, strategically and intentionally study student performance, and build in effective assessment practices throughout. Yes, drill down to specific learning outcomes. Yes, maintain consistency as appropriate across sections/instructors. Yes, look at leverage points, the points in the learning process at which we can most effectively intervene to make a difference. Your comments seize a truly actionable finding and point to steps that can result in improved student outcomes.

Debbie Kell consult@dkell.us
Reply Catherine Wehlburg
5:05 PM on September 25, 2016 
Thank you, Linda! Your insights are always so very good to read. I do want to really think through what a university imperative to "not" give grades of "C" in foundational courses will actually do. There are often unintended consequences of policies that are created to do good things. In this instance, faculty who teach this course could modify their grading so that fewer students get a grade of "C" and more get "A" or "B." Presto-chango -- no problem anymore. But, of course, we have seen grade inflation as an issue on many of our campuses for many years. And so, this would not be a good way to combat less learning by some students (as evidenced by their lower grades).
Reply C
10:55 AM on September 24, 2016 
Regarding suggestion 1, are there any inexpensive tools faculty can use to capture this information, or suggestions for setting up a database with this information?