Linda Suskie

  A Common Sense Approach to Assessment & Accreditation


What are the characteristics of effective curricula?

Posted on January 6, 2017 at 8:20 PM Comments (9)

I'm working on a book chapter on curriculum design, and I've come up with eight characteristics of effective curricula, whether for a course, program, general education, or co-curricular experience:

• They treat a learning goal as a promise.

• They are responsive to the needs of students, employers, and society.

• They are greater than the sum of their parts.

• They give students ample and diverse opportunities to achieve key learning goals.

• They have appropriate, progressive rigor.

• They conclude with an integrative, synthesizing capstone experience.

• They are focused and simple.

• They use research-informed strategies to help students learn and succeed, including high-impact practices.

 

What do you think? Do these make sense? Have I missed anything?

And...do the curricula you work with have these characteristics?

Helping Students Evaluate the Credibility of Sources

Posted on September 16, 2016 at 6:05 AM Comments (0)

Like many Americans, I have been appalled by this year’s presidential election train wreck. I am dismayed in so many ways, but perhaps no more so than by the many American citizens who either can’t or choose not to distinguish between credible and what I like to call incredible sources of information. Clearly we as educators are not doing enough to help our students learn how to do this.

 

I think part of the problem is that we in higher education have historically focused on teaching our students to use only academic library resources, which have been vetted by professionals and are therefore credible. But today many of our students will never access a college library after they graduate—they’ll be turning to what I call the Wild West of the internet for information. So today it’s vital that we teach our students how to vet information themselves.

 

A number of years ago, I had my students in my first-year writing courses write a research paper using only online sources. Part of their assignment was to identify both credible and non-credible sources and explain why they found some credible and others not. Here’s the guidance I gave them:

 

Evaluating sources is an art, not an exact science, so there is no one set of rules that will help you definitively separate credible sources from non-credible sources. Instead, you have to use thinking skills such as analysis and evaluation to judge for yourself whether a source is sufficiently credible for you to use in your research. The following questions will help you decide.

 

What is the purpose of the source? Serious sources are more credible than satiric or humorous ones. Sources intended to inform, such as straight news stories, may be more credible than those intended to persuade, such as editorials, commentaries, and letters to the editor, which may be biased.

 

Is the author identified? A source with an identified author or authors may be more credible than one without, although authoritative organizations (e.g., news organizations, professional associations) may publish credible material without an identified author.

 

Who is the author? A credible source is written by someone with appropriate education, training, or experience to write with authority on the topic. An unknown writer is less credible than a frequently published writer, and a student is less credible than a professor. If you feel you need more information on the author, do a database search and/or Google search for the author’s name.

 

Who published or sponsored the source? A scholarly journal is generally more credible than a popular magazine or newspaper. Sources whose purpose is to sell a product or point of view—including many “news” organizations and websites—may be less credible than those whose purpose is to provide impartial information and services. A website with a URL extension of .edu, .gov, or .org may be more credible than one ending in .com (but not necessarily--.edu, .gov, and .org sites often exist to promote a particular point of view). A source published by a reputable publisher or organization is often more credible than one published independently by the author or one published by a fly-by-night organization, because a reputable publisher or organization provides additional review and quality control.

 

How complete is the source’s information? Sources with more complete coverage of a topic may be more credible than those that provide limited coverage.

 

Is the content balanced or biased? Sources that present a balanced point of view are often more credible than those that clearly have a vested interest in the topic. If the author argues for one point of view, does he or she present opposing views fairly and refute them persuasively?

 

Are information, statements, and claims documented or unsupported? Sources that provide thorough, complete documentation for their information and claims are generally more credible than those that make unsupported or scantily-supported statements or claims. For example, information based on a carefully-designed research project is more credible than information based only on the author’s personal observations.

 

Has the source, author, publisher, and/or sponsor been recognized by others as credible? Sources found through academic databases such as Lexis Nexis or Infotrac are more credible than those only found through Google. Sources frequently reviewed, cited, or linked by others are more credible than those that no other expert or authority mentions or uses. You can do a database search and/or Google search for reviews of a source and to see how often it has been cited or linked by others. To look for links to a source, search on Google for “link:” and the URL (e.g., link:www.towson.edu) and see how many links are found.

 

Is the material well-written? Material that is clear, well-organized and free of spelling and grammatical errors is more credible than poorly-written material.

 

What is the date the material was published or last updated? Material with a clear publication date is more credible than undated material. For time-sensitive research topics, recent information is more credible than older information. Web sources that are updated regularly and well-maintained (e.g., no broken links) may be more credible than those that are posted and then neglected.

 

What are your own views and opinions? Don’t bring prejudices to your search. It’s easy to think that sources with which you agree are more credible than those with which you disagree. Keep an open, critical mind throughout your search, and be willing to modify your thesis or hypothesis as you learn more about your topic.
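
If you want to turn these questions into a structured exercise, here is a minimal sketch, in Python, of a checklist a student might tally for each source. It is purely illustrative: the question wording, the yes/no format, and the simple count are my own simplifications of the guidance above, not part of the original assignment.

    # Illustrative only: a simple yes/no checklist a student might complete for
    # each source. The questions paraphrase the guidance above; the tally is a
    # rough heuristic, not a substitute for the student's own judgment.
    CREDIBILITY_QUESTIONS = [
        "Is the purpose to inform rather than to entertain or persuade?",
        "Is the author identified?",
        "Does the author have relevant education, training, or experience?",
        "Is the publisher or sponsor reputable and impartial?",
        "Is the coverage of the topic reasonably complete?",
        "Is the content balanced rather than one-sided?",
        "Are statements and claims documented?",
        "Has the source been reviewed, cited, or linked by others?",
        "Is the material clearly written and free of errors?",
        "Is the material dated and reasonably current?",
    ]

    def credibility_summary(answers):
        """answers: one True/False response per question above."""
        yes = sum(1 for answer in answers if answer)
        return f"{yes} of {len(CREDIBILITY_QUESTIONS)} credibility indicators present"

    # Example: a source that satisfies 8 of the 10 questions.
    print(credibility_summary([True] * 8 + [False] * 2))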

Meaningful assessment of AA/AS transfer programs

Posted on July 9, 2016 at 7:45 AM Comments (2)

I often describe the teaching-learning-assessment process as a four-step cycle:

1. Clear learning outcomes

2. A curriculum and pedagogies designed to provide students with enough learning opportunities to achieve those outcomes

3. Assessment of those outcomes

4. Use of assessment results to improve the other parts of the cycle: learning outcomes, curriculum, pedagogies, and assessment


I also often point out that, if faculty are struggling to figure out how to assess something, the problem is often not assessment per se but the first two steps. After all, if you have clear outcomes and you’re giving students ample opportunity to achieve them, you should be grading students on their achievement of those outcomes, and there’s your assessment evidence. So the root cause of assessment struggles is often poorly articulated learning outcomes, a poorly designed curriculum, or both.


I see this a lot in the transfer AA/AS degrees offered by community colleges. As I explained in my June 20 blog entry, these degrees, designed for transfer into a four-year college major, typically consist of 42-48 credits of general education courses plus 12-18 credits related to the major. The general education and major-related components are often what I call “Chinese menu” curricula: Choose one course from Column A, two from Column B, and so on. (Ironically, few Chinese restaurants have this kind of menu anymore, but people my age remember them.)

 

The problem with assessing these programs is the second step of the cycle, as I explained in my June 20 blog. In many cases these aren’t really programs; they’re simply collections of courses without coherence or progressive rigor. That makes it almost impossible either to define meaningful program learning outcomes (the first step of the cycle) or to assess them (the third step of the cycle).

 

How can you deal with this mess? Here are my suggestions.

 

1. Clearly define what a meaningful “program” is. As I explained in my June 20 blog entry, many community colleges are bound by state or system definitions of a “program” that aren’t meaningful. Regardless of the definition to which you may be bound, I think it makes the most sense to think of the entire AA/AS degree as the program, with the 12-18 credits beyond gen ed requirements as a concentration, specialization, track or emphasis of the program.


2. Identify learning outcomes for both the degree and the concentration, recognizing that there should often be a relation between the two. In gen ed courses, students develop important competencies such as writing, analysis, and information literacy. In their concentration, they may achieve some of those competencies at a deeper or broader level, or they may achieve additional outcomes. For example, students in social science concentrations may develop stronger information literacy and analysis skills than students in other concentrations, while students in visual arts concentrations may develop visual communication skills in addition to the competencies they learn in gen ed.


Some community colleges offer AA/AS degrees in which students complete gen ed requirements plus 12-18 credits of electives. In these cases, students should work with an advisor to identify their own, unique program/concentration learning outcomes and select courses that will help them achieve those outcomes.


3. Use the following definition of a program (or concentration) learning outcome: Every student in the program (or concentration) takes at least two courses with learning activities that help him or her achieve the program learning outcome. This calls for fairly broad rather than course-specific learning outcomes.


If you’re struggling to find outcomes that cross courses, start by looking at course syllabi for any common themes in course learning outcomes. Also think about why four-year colleges want students to take these courses. What are students learning, beyond content, that will help them succeed in upper division courses in the major? In a pre-engineering program, for example, I’d like to think that the various science and math courses students take help them graduate with stronger scientific reasoning and quantitative skills than students in non-STEM concentrations.


4. Limit the number of learning outcomes; quality is more important than quantity here. Concentrations of 12-18 credits might have just one or two.

 

5. Also consider limiting your course options by consolidating Chinese-menu options into more focused pathways, which we are learning improve student success and completion. I’m intrigued by what Alexandra Waugh calls “meta-majors”: focused pathways that prepare students for a cluster of four-year college majors, such as health sciences, engineering, or the humanities, rather than just one.


6. Review your curricula to make sure that every student, regardless of the courses he or she elects, will graduate with a sufficiently rigorous achievement of every program (and concentration) learning outcome. An important principle here: There should be at least one course in which students can demonstrate achievement of the program learning outcome at the level of rigor expected of an associate degree holder prepared to begin junior-level work. In many cases, an entry-level course cannot be sufficiently rigorous; your program or concentration needs at least one course that cannot be taken the first semester. If you worry that prerequisites may be a barrier to completion, consider Passaic County Community College’s approach, described in my June 20 blog.


7. Finally, you’ve got meaningful program learning outcomes and a curriculum designed to help students achieve them at an appropriate level of rigor, so you're ready to assess those outcomes. The course(s) you’ve identified in the last step are where you can assess student achievement of the outcomes. But one additional challenge faces community colleges: many students transfer before taking this “capstone” course. So also identify a program/concentration “cornerstone” course: a key course that students often take before they transfer that helps students begin to achieve one or more key program/concentration learning outcomes. Here you can assess whether students are on track to achieve the program/concentration learning outcome, though at this point they probably won’t be where you want them by the end of the sophomore year.

A big community college issue: Degree programs that really aren't

Posted on June 20, 2016 at 11:30 AM Comments (3)

Over the years I’ve worked with myriad community colleges, large and small, in dozens of states throughout the United States. More than many other higher ed sectors, community colleges truly focus on helping students learn, making assessment a relatively easy sell and making community colleges some of my favorites to work with.

 

But I’m seeing an issue at community colleges throughout the United States that deeply troubles me and can make assessment of program learning outcomes almost impossible. The issue flows from the two kinds of associate degree programs that community colleges offer. One kind is what many call “career and technical education” (CTE) programs. Often A.A.S. degrees, these are designed to prepare students for immediate employment. The other kind is what many call “transfer programs”: A.A. or A.S. programs, often named something like “General Studies” or “Liberal Education,” that are designed to prepare students to transfer into baccalaureate programs at four-year colleges.

 

The problem I’m seeing is that many of these programs, especially on the transfer side, aren’t really programs. Here’s how the regional accreditors’ standards define programs:

 

  • ACCJC: “Appropriate breadth, depth, rigor, sequencing, time to completion, and synthesis of learning”
  • HLC: “Require levels of performance by students appropriate to the degree or certificate awarded”
  • MSCHE: “Characterized by rigor and coherence… designed to foster a coherent student learning experience and to promote synthesis of learning”
  • NEASC effective July 1, 2016: “Coherent design and… appropriate breadth, depth, continuity, sequential progression, and synthesis of learning”
  • NWCCU: “Rigor that [is] consistent with mission… A coherent design with appropriate breadth, depth, sequencing of courses, and synthesis of learning”
  • SACS: “A coherent course of study”
  • WSCUC: “Appropriate in content, standards of performance, [and] rigor”

 

 

There’s a theme here: A collection of courses is not a program and, conversely, a program is more than a collection of courses. A true program has both coherence and rigor. In order for this to happen, some courses must be more advanced than others and build on what’s been learned in earlier courses. That means that some program courses should be at the 200-level and have prerequisites.

 

But many community college degree “programs” are in fact collections of courses, nothing more.

 

  • Many transfer degree “programs” consist of 42 or 45 credits of general education courses—virtually all introductory 100-level courses—plus another 12-18 credits of electives, sometimes in an area of specialization, sometimes not.
  • At virtually every community college I’ve visited, it’s entirely possible for students to complete an associate degree in at least one “program” by taking only 100-level courses.
  • In some disciplines, “program” courses are largely cognate requirements (such as physics for an engineering program) with perhaps only one course in the program discipline itself.
  • And on top of all this, any 200-level courses in the “program” are often sophomore-level in name only; they have no prerequisites and appear no more rigorous than 100-level courses.

 

 

Two years of 100-level study does not constitute an associate degree and does not prepare transfer students for the junior-level work they will face when they transfer. And a small handful of introductory courses does not constitute an associate degree program.

 

Turning community college associate degree programs into true programs with rigor and coherence is remarkably difficult. Among the barriers:

 

  • Some systems and states prohibit community colleges from offering associate degrees in liberal arts or business disciplines, as Charlene Nunley, Trudy Bers, and Terri Manning note in NILOA’s Occasional Paper #10, “Learning Outcomes Assessment in Community Colleges.”
  • In other systems and states, plenty of community college faculty have told me that their counterparts at local four-year colleges don’t want them to teach anything beyond introductory courses—the four-year faculty want to teach the rest themselves. (My reaction? What snobs.)
  • Yet other community college faculty have told me that they have felt pressure from the Lumina Foundation’s completion agenda to eliminate all course prerequisites.

 

 

So at some community colleges nothing can be done until laws, regulations, or policies are changed, leaving thousands of students in the meanwhile with a shortchanged education. But there are plenty of other community colleges that can do something. I’m particularly impressed with Passaic County Community College’s approach. Every degree program, even those in the liberal arts, has designated one 200-level course as its program capstone. The course is open only to students who have completed at least 45 credits and have taken at least one other course in the discipline. For the English AA, for example, this course is “Topics in Literature,” and for the Psychology option in the Liberal Arts AA, this course is “Social Psychology.” It’s a creative solution to a pervasive problem.

Making a liberal arts degree relevant and employable

Posted on February 10, 2016 at 8:10 AM Comments (0)

One of the reasons I’m a passionate advocate of the liberal arts is because my own undergraduate liberal arts degree has served me so well…but then again, it was an unusual interdisciplinary program. Hopkins coded its liberal arts courses according to area of study: natural sciences courses were coded N, social and behavioral sciences courses S, and humanities courses H. My Quantitative Studies major required a couple of entry-level courses (probability and statistics) plus electives chosen from courses coded Q, with a certain number in the upper division.

 

I had a ball! In addition to math, I took courses in engineering, physics, economics, computer science, and psychology, where I discovered an unexpected passion for educational testing and measurement that led me to graduate study and my work today. At the same time, while Hopkins didn’t offer formal minors, I earned 18 credits in English. 

 

Memories of all this came back to me as I read Matthew Sigelman’s piece in Inside Higher Ed on creating liberal arts programs that combine foundational liberal arts skills such as writing and critical thinking with the entry level technical skills that employers seek. My knowledge of statistical analyses and computer programming got me my first positions. But my writing skills and interdisciplinary studies helped me move out of them, into a career in higher education that has required working with people from all kinds of academic backgrounds, speaking a bit of their language, and applying the concepts I’ve learned to their disciplines. I wouldn’t be where I am today without the combination of technical skills, writing skills, and broad liberal arts foundation that Sigelman advocates.

 

So here’s an idea. Many colleges today label “writing-intensive” courses with a W and require students to take a certain number of them. Why not do something similar with other skills that today’s employers are seeking? Label leadership- and teamwork-intensive courses L, data-intensive courses D, problem-solving-intensive courses P, technology-intensive courses T, analysis-intensive courses A, ethics-intensive courses E, and so on. Develop clear institutional guidelines on how to qualify for each label; some courses might earn multiple labels. Then encourage students in the liberal arts to take courses with whatever labels best fit their career interests—perhaps as an interdisciplinary major, perhaps as a minor, or perhaps as electives in a major or general education.

 

This will only work, of course, if curricula have enough flexibility to allow students to fit these courses in. But that’s a solvable challenge, and I think this is an idea worth considering.

Challenging students to do their best work

Posted on December 7, 2015 at 9:50 AM Comments (1)

I always look forward to the annual report of the National Survey of Student Engagement (NSSE), and this year is no different. While the report packs plenty of information into a few pages, I was especially intrigued by the section on motivating students to do their best work. NSSE has found that only 54% of first-year students and 61% of seniors were “highly challenged” to do their best.

 

What can faculty do to challenge students to do their best? NSSE found that students who felt challenged to do their best had coursework with “complex cognitive tasks.” Their courses were clearer and better organized, and they received prompt, formative feedback. Course learning activities included “success-oriented learning strategies” such as active reading, reviewing notes after class, and summarizing what they learned.

 

But what really struck me was NSSE’s finding that the extent to which students are challenged is unrelated to the institution’s selectivity and, indeed, inversely related for seniors. NSSE concludes that “admission selectivity is neither a prerequisite for nor a guarantee of a high-quality educational experience.”

 

Ironically, right after this report was released, the Association of American Universities, the American Council on Education, and the Association of Public and Land-Grant Universities released a joint letter to the U.S. Secretary of Education to request consideration of “differential accreditation, which would allow institutions with a consistent record of strong academic programs to go through a less burdensome review process than institutions with a less proven track record and weaker outcomes.”

 

This sounds reasonable, but what exactly are “strong academic programs”? NSSE’s research makes clear that the quality of undergraduate education cannot be determined by admissions selectivity alone. Indeed, I’ve got a whole collection of articles from various pundits asking whether the focus of many highly selective universities on research productivity undermines the quality of teaching. Simply put, at these universities faculty may see no incentive to improve their teaching beyond the merely adequate. No, I'm not aware of any substantive research on this, but that's the point.

 

My undergraduate mentor was the late Julian Stanley, an expert in psychometrics who turned his research attention in later life to educating the gifted and talented. His position was that if we give these students a challenging education, they will make even greater contributions to society--contributions that we greatly need from them. As much as our most capable students end up contributing, how much more might they contribute if they were given a truly great education that challenged them to stretch and do their very best work—more than even they thought they were capable of?

 

I’m all for fast-tracking the accreditation of institutions with “strong academic programs,” but only if we clearly define those as programs that we know through systematic evidence truly give undergraduates the best possible education through consistently great teaching and learning experiences.

Assessing high impact practices

Posted on April 13, 2015 at 8:20 AM Comments (1)

“High impact practices”—one of those buzzwords getting a lot of attention these days. What exactly are high impact practices (HIPs), and how should they be assessed?

 

HIPs are educational experiences that make a significant difference in student learning, persistence, and success. Research by the Association of American Colleges & Universities (AAC&U) and the National Survey of Student Engagement (NSSE) has found that the following can all be HIPs:

• First-year experiences

• Learning communities

• Writing-intensive courses

• Collaborative learning experiences

• Service learning

• Undergraduate research

• Internships

• Capstone courses and projects

 

What makes these experiences so effective? In a word: engagement. Students are more likely to learn, persist, and succeed if they are actively engaged in their learning. Gallup, for example, found that college graduates who feel their college prepared them for life and helped them graduate on time were more likely to agree with the following:

• I had at least one professor who made me excited about learning.

• My professors cared about me as a person.

• I had a mentor who encouraged me to pursue my goals and dreams.

• I worked on a project that took a semester or more to complete.

• I had an internship or job that allowed me to apply what I was learning in the classroom.

• I was extremely active in extracurricular activities and organizations while I attended college.

(Sadly, only 3% of all college graduates reported having all six of these experiences.)

 

So how should HIPs be assessed? As I often say about assessment, it’s all about goals. Because HIPs are intended to help students learn, persist, and succeed, your assessments should focus on student retention and graduation rates, perhaps grades in subsequent coursework (if applicable), and how well students have achieved the HIP’s key learning outcomes. Check your institution’s strategic goals too. There may be a goal to, for example, improve the success of a particular student cohort. If your HIP is intended to help achieve this kind of goal, track that as well.
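
As a rough illustration of that kind of tracking, here is a minimal sketch using pandas that compares retention and graduation rates for students who did and did not complete a HIP. The file name and column names (completed_hip, retained_year2, graduated_3yr) are hypothetical placeholders, not a prescribed data format.

    # A minimal sketch: compare retention and graduation rates for students who
    # did and did not complete a high-impact practice (HIP).
    # File and column names are hypothetical placeholders.
    import pandas as pd

    students = pd.read_csv("student_outcomes.csv")  # one row per student

    rates = (
        students
        .groupby("completed_hip")[["retained_year2", "graduated_3yr"]]
        .mean()  # proportion of students in each group with each outcome
        .rename(columns={"retained_year2": "retention rate",
                         "graduated_3yr": "3-year graduation rate"})
    )
    print(rates)

The same grouping could be repeated for grades in subsequent coursework or for the particular student cohorts named in your institution’s strategic goals.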

Encouraging great teaching

Posted on March 16, 2015 at 8:10 AM Comments (0)

One of my favorite chapters in my book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability is “Why Is This So Hard?” It was my “venting chapter,” with a pretty long list of the barriers to advancing in quality, and it was very cathartic to write.

 

One item on that list is succinct: The money’s not there. A new report by Third Way states the issue beautifully: “Federal policy incentivizes research first, second, and third—and student instruction last.” It goes on to explain, “For every $100 the federal government spends on university-led research, it spends twenty-four cents on teaching innovation at universities.” Its conclusion? “If one took its cues entirely on the federal government, the conclusion would be that colleges exist to conduct research and publish papers with student instruction as an afterthought.”

 

One professor at a regional comprehensive university put it to me this way: “I know I could be a better teacher. But my promotions are based on the research dollars I bring in and my publications, so that’s where I have to focus all my time. As long as my student evaluations are decent, there’s no incentive or reward for me to try to improve my teaching, and any time I spend on that is time taken away from my research, which is where the rewards are.”

 

The one bright spot here is that more and more colleges and universities are recognizing the need to invest in helping faculty improve their teaching. The last 20 years have seen a growth in “teaching learning centers” designed to do this along with other incentives and support, such as those at the University of Michigan reported by the Chronicle of Higher Education. But so far we are only scratching the surface. Colleges, universities, and government policymakers all need to do more to put their money where their mouth is, actively encouraging and supporting the great teaching and learning that is supposed to be higher education’s fundamental purpose.

Getting a good grade by doing what the teacher says

Posted on February 5, 2015 at 7:55 AM Comments (0)

A recent study by Hart Research Associates for the Association of American Colleges & Universities found, among many other things, that only about a quarter of employers are satisfied with the creative and innovative skills of recent college graduates. Why are college graduates so dismal in this respect? Throughout their education, from grade school through college, in most classes, the way to get a good grade is to do what the teacher says: read this assignment, do this homework, write a paper on this topic with these sections, develop a class presentation with these elements. Faculty who teach general education courses in the creative arts—art, theater, creative writing, even graphic design—have told me that students hate taking those courses because they have no experience in “thinking outside the box.”

 

How can we encourage creativity and innovative thinking? Simply building it into our grading expectations can help. The first time I used a rubric, many, many years ago, I gave it to my class with their assignment, and the papers I received were competent but flat and uninspired. I had to give the best papers A’s because that was what the rubric indicated they should earn, but I was disappointed.

 

The next time I taught the course, I changed the rubric so that all the previous elements earned only 89 points. The remaining points were for a fairly vague category I labeled “Creative or innovative ideas or insight.” Problem solved! The A papers were exactly what I was hoping for.

 

Now this was a graduate course, and just putting something on a rubric won’t be enough to help many first-year students. This is where collaborative learning comes into play. Put students into small groups with a provocative, inspiring question for them to discuss, and watch the ideas start to fly.

A simple chart for every syllabus

Posted on January 27, 2015 at 7:40 AM Comments (0)

Most colleges and universities require course syllabi to include a list of course objectives. For years I’ve had a fantasy about accreditors requiring something more: a simple 3-column chart.

 

The first column would be titled, “This is what you’ll learn how to do in this course.” Under it would be listed the key learning objectives of the course.

 

The second column would be titled, “This is how you’ll learn how to do this.” For each learning objective, this column would list the learning activities (classwork, homework, projects, etc.) that would help students achieve the objective.

 

The third column would be titled, “This is how you’ll show me that you’ve learned this.” For each learning objective, this column would list what students will submit to show they’ve achieved the objective: a paper, project, presentation, demonstration, exam, etc.

 

In essence, this is a simple curriculum map for a course. I’ve been sharing this fantasy for years, and only a couple of years ago did someone say, “Ah… Dee Fink!” It turned out that Dee, author of Creating Significant Learning Experiences, came up with the same idea long before I did (with somewhat different column headings), so I always give him full credit. I had the honor of doing a workshop with Dee last fall on creating and assessing significant learning experiences. It was wonderful to have the opportunity to work with someone who shares my belief that teaching, learning, and assessment are inextricably linked, not separate processes.

 

Dee points out that, when faculty fill out this chart, they should start with the first column (This is what you’ll learn how to do) and then move to the third column (This is how you’ll show me that you’ve learned it). Then they should move to the middle column (This is how you’ll learn how to do this), designing learning experiences that will enable students to demonstrate their learning successfully.

 

Today I suggest just one tweak to Dee’s 3-column model. After the first column (This is what you’ll learn how to do), I suggest inserting one more column, titled “And learning this will help you learn how to…” In this column, faculty would list the program or general education learning outcome that the course objective helps students achieve.

 

Here’s an example:

 

This is what you’ll learn how to do: Analyze your own and others’ responses to a work of art.

And learning this will help you learn how to: Think critically and analytically

This is how you’ll learn how to do this: Small group discussions and presentations of critical responses to artworks

This is how you’ll show me that you’ve learned this: Paper comparing your response to a work of art to those of critics
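
For anyone who would rather keep this chart as structured data than draw it by hand, here is a minimal sketch that stores the example row above and prints it for a syllabus; the layout is my own illustration, not Dee Fink’s or anyone else’s required format.

    # Illustrative only: the four-column syllabus chart as structured data,
    # using the example row above. One dictionary per course learning objective.
    course_chart = [
        {
            "This is what you'll learn how to do":
                "Analyze your own and others' responses to a work of art.",
            "And learning this will help you learn how to":
                "Think critically and analytically.",
            "This is how you'll learn how to do this":
                "Small group discussions and presentations of critical responses to artworks.",
            "This is how you'll show me that you've learned this":
                "Paper comparing your response to a work of art to those of critics.",
        },
        # ...one entry per course learning objective
    ]

    for row in course_chart:
        for column_heading, text in row.items():
            print(f"{column_heading}: {text}")
        print()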

Are our courses and programs appropriately rigorous?

Posted on January 3, 2015 at 7:50 AM Comments (0)

It was decades ago that someone first told me higher education’s dirty little secret: higher education is the one commodity where the consumer wants the least for his or her money. According to the latest CIRP freshman survey, the most common reason first-year students are going to college is to get a better job. A lot of students want the credential but not necessarily the learning that goes with it.

Unfortunately, with recent pushes to get students to persist and graduate, plus the efforts of some private colleges to avoid a “death spiral” of declining enrollments and revenues, I see way too many colleges today playing into this. Some examples I’ve seen here and there over the last couple of years:

• Course numbering systems that are meaningless; 200-level courses are no more advanced than 100-level courses; 300- and 400-level courses are no more advanced than 100- and 200-level courses. First-year students and seniors are in the same courses.

• Community colleges that let students take any course without any prerequisites, including an internship in their very first semester

• Associate degrees that can be completed with all 100-level courses; more advanced study at the 200-level is not required

• Graduate programs that offer graduate credit for courses that should be undergraduate prerequisites

• Courses with undergraduate curricula that earn graduate credit (for example, a first-year business statistics course that’s part of an MBA program)

• Degree “programs” at the associate, bachelor’s, and master’s levels that are simply collections of elective courses, without coherence. As I’ve said many times, a collection of courses is not a program.

Unfortunately, there’s no easy way to turn this around. Accreditors and state higher education agencies do their best, but they don’t have the capacity to carefully examine every course and program curriculum. (Some specialized accreditors do have this capacity, because they examine limited numbers of programs in depth.) The answer is, instead, making this a matter of integrity throughout the higher education community, a national priority, and a focus of national conversations. The Degree Qualifications Profile can be a starting point for the conversation, but at this point many colleges are still ignoring it.

In my book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability, the first dimension is Relevance and the third is Focus and Aspiration. Both dimensions call for rigorous curricula that give our students the education they need.

Strategies that help students learn

Posted on November 3, 2014 at 12:50 AM Comments (4)

I have often said that, while in many ways these are challenging times for American higher education, in some ways we are living in a golden age, because we are coming off a quarter century of good research on practices that help students learn.

 

In my 2009 book Assessing Student Learning: A Common Sense Guide, I tried to distill that research into a list of strategies. In my new book Five Dimensions of Quality: A Common Sense Guide to Accreditation and Accountability, I’ve updated that list. Today research indicates that students learn most effectively when:

1. They see clear relevance and value in their learning activities.

2. They are instilled with a “can do” attitude.

3. They are academically challenged and given high but attainable expectations, such as through assignments with scaffolding.

4. Learning activities and grades focus on important learning outcomes. Faculty organize curricula, teaching practices, and assessments to help students achieve important learning outcomes. Students spend their time and energy learning what they will be graded on.

5. They understand course and program learning outcomes and the characteristics of excellent work, often through a rubric.

6. They spend significant time and effort studying and practicing.

7. They interact meaningfully with faculty—face-to-face and/or online.

8. They collaborate with other students—face-to-face and/or online—including those unlike themselves.

9. New learning is related to their prior experiences and what they already know, through both concrete, relevant examples and challenges to their existing paradigms.

10. They learn by doing, through hands-on practice engaging in multidimensional “real world” tasks, rather than by listening to lectures.

11. They use their learning to explore, apply, analyze, justify, and evaluate, because facts memorized in isolation are quickly forgotten.

12. They participate in out-of-class activities that build on what they are learning in the classroom.

13. They can obtain support when they need it: academic, social, personal, and financial.

14. They receive frequent, prompt, concrete feedback on their work, followed by opportunities to revise their work.

15. They integrate and see coherence in their learning by reflecting on what and how they have learned, by constructing their own learning into meaningful frameworks, and through synthesizing capstone experiences such as first-year experiences, field experiences, community-based or service learning experiences, independent study, and research projects.

16. Their college and its faculty and staff truly focus on helping students learn and succeed and on improving student learning and success.

If you're not teaching it, there's not much point in assessing it

Posted on September 16, 2014 at 12:20 AM Comments (0)

Curriculum maps—those charts that list a program’s key learning outcomes on one side and the program’s courses across the top—are everywhere these days. Why? One reason is that, if you’re teaching something, you’re probably grading students on it. And if you’re grading students on it, you’ve probably got assessment information already in hand, making the assessment job easier.

 

Another reason is that, if your students aren’t doing well on a particular assessment, curriculum maps can help you figure out whether students are getting enough opportunities to learn that competency. If their quantitative skills aren’t what you’d like to see, and they only take one course that helps them develop their quantitative skills, you may want to talk about addressing quantitative skills in some other courses.

 

Curriculum maps work only if they identify courses where students really work on a particular goal or competency: where they have homework, classwork, and other assignments to develop their learning, not just readings and lectures. I’d love to see a requirement that a course can be checked off on a map as helping students to achieve a particular learning outcome only if a certain proportion of course grades are based on their achievement of that outcome. In other words, a course counts as contributing toward a quantitative skills learning outcome only if at least, say, 5% of the final grade is based on students’ quantitative skills. Otherwise, students may not be getting the deep, rich learning opportunities they need to achieve the outcome well.
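
Here is a minimal sketch of what that check might look like, under my own assumptions about how the curriculum map is stored: each course records the share of its final grade tied to each program learning outcome, a course “counts” toward an outcome only at or above the 5% threshold, and each outcome should be addressed by at least two courses. The courses, weights, and two-course minimum are invented for illustration.

    # Illustrative sketch: a curriculum map where each course records the share
    # of its final grade based on each program learning outcome. The data,
    # threshold, and minimum are assumptions, not a prescribed standard.
    GRADE_WEIGHT_THRESHOLD = 0.05   # a course "counts" only if >= 5% of the grade
    MIN_COURSES_PER_OUTCOME = 2     # each outcome should be addressed in at least two courses

    curriculum_map = {
        # course: {outcome: share of final grade based on that outcome}
        "MATH 101": {"quantitative skills": 0.80},
        "ECON 102": {"quantitative skills": 0.10, "written communication": 0.20},
        "ENGL 101": {"written communication": 0.60, "information literacy": 0.03},
    }

    outcomes = {o for weights in curriculum_map.values() for o in weights}
    for outcome in sorted(outcomes):
        courses = [course for course, weights in curriculum_map.items()
                   if weights.get(outcome, 0) >= GRADE_WEIGHT_THRESHOLD]
        flag = "" if len(courses) >= MIN_COURSES_PER_OUTCOME else "  <-- too few courses"
        print(f"{outcome}: {courses}{flag}")

In this invented example, information literacy would be flagged: the only course that touches it bases just 3% of the grade on it, so it would not be checked off on the map.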

Meaningful learning activities and assignments: A key to successful assessment

Posted on August 22, 2014 at 6:40 AM Comments (0)

If you’re giving students plenty of opportunities to achieve your key outcomes, assessment is easy: you’re already grading their work on those learning activities, so you already have assessment evidence in hand. When faculty struggle with assessing key learning outcomes, the problem is often that they’re not giving students meaningful learning activities to help them achieve those outcomes. If you want students to learn how to analyze information, for example, what kinds of learning activities do you give them to help them learn how to analyze information? Here are some tips:

• Start with the assignment’s key learning outcomes: what you want students to learn by completing the assignment.

• Explain to students why you are giving them the assignment—how the assignment will help prepare them to succeed in later courses, in the workplace, and/or in their lives. (Some students do better if they understand the relevance of what they’re doing.)

• Create a rubric to grade the assignment that reflects those key outcomes, with appropriate emphasis on the most important outcomes. (A recent study found that many faculty emphasize grammar at the expense of other skills.)

• Give the rubric to students with the assignment, so they know where to focus their time and energies.

• Consider alternatives to traditional papers. Students can share their analysis of information through a chart, graph, or other visual, which can be faster to grade and fairer to students who struggle with written communication skills.

• Point students in the right direction by giving them appropriate guidelines: length and format of the assignment, what resources they can use, who the assignment’s audience is, etc.

• Break large assignments into smaller pieces. Ask students to submit first just their research paper topics—if the topic won’t work well, you can get them back on track before they go too far down the wrong road.

 

The clearer your guidelines to students, the better some students will do, and we all know that an A assignment is a lot faster and easier to grade than a poor assignment. So this is a win-win strategy: as Barbara Walvoord and Virginia Anderson say in their book Effective Grading, your students work harder and learn more, and you spend less time grading!

A degree or program is more than a collection of courses

Posted on January 15, 2014 at 8:10 AM Comments (0)

I was struck by a statement in an article by Mark Salisbury last month in Inside Higher Ed: "a college experience should approach learning as a process -- one that is cumulative, iterative, multidimensional and, most importantly, self-sustaining long beyond graduation." Many accreditors echo this thought.  SACS-COC, for example, requires accredited colleges to offer "degree programs that embody a coherent course of study," while NEASC stipulates that "Each educational program demonstrates coherence through its goals, structure, and content."


The way I put this is, "A degree or program is more than a collection of courses."  In a true program, or a truly meaningful degree, the whole is greater than the sum of the parts. A program expects students to develop key skills and competencies in multiple courses or experiences throughout their studies, with students developing more advanced skills as the program progresses and pulling the pieces together into a coherent whole by the time they graduate. 


Am I saying that curricula should be limited to traditional fields of study or that programs should be tightly prescribed? Not at all. My undergraduate major was an amorphous interdisciplinary entity called Quantitative Studies, and it has served me extremely well. I like the idea of individualized majors, and I'm intrigued by competency-based programs that allow students to progress toward a degree by demonstrating competency rather than sitting in a classroom for so many hours.


But I worry about colleges that let students cobble together an assortment of unrelated courses or experiences into a degree. I worry about associate degrees that can be earned by enrolling only in 100-level courses. I worry about degrees that let students choose their courses without approved goals or plans, because without guidance some students will inevitably choose courses that fit their schedule or whose professor they like rather than courses that help them move ahead purposefully.


General Education at Community Colleges

Posted on November 10, 2013 at 6:20 AM Comments (0)

Over the last few months I’ve visited a number of community colleges across the country, and I’m seeing some common themes emerge. While my visits are purportedly about assessment, the conversations invariably turn to learning outcomes and curriculum design. (And this doesn’t surprise me; I’ve been saying for years that, if your college is struggling with assessment, the cause is likely either unclear goals or curricula that aren’t designed to help students achieve those goals.) A lot of community college curricula are constrained by state requirements but, if you have some flexibility, here’s the advice I’ve found myself giving most frequently on community college gen ed curricula:


1. Limit the number of gen ed learning outcomes. Two examples: the Texas Higher Education Coordinating Board stipulates just six; the Community College of Baltimore County has just four. If you have a lot of learning outcomes, you have a much harder time designing a curriculum so that every student achieves each one of them, and you have a lot more work assessing them.


2. Keep your gen ed requirements simple and traditional: communication, math, social sciences, natural sciences, humanities, fine and performing arts. Not exciting, but these courses have the best chance of transferring and meeting gen ed requirements at four-year colleges.


3. Keep your gen ed course offerings limited and traditional: Introduction to Biology, Introduction to Psychology, U.S. Government, 20th Century American Literature. Again, these courses have the best chance of transferring and meeting gen ed requirements at four-year colleges. Limiting the number of offerings can lead to huge savings in faculty time in course planning, monitoring/review, and assessment. Traditional doesn’t mean boring or irrelevant, of course. You can focus a 20th century American literature course on works particularly likely to engage your students and meet their interests and needs.


4. Address each gen ed learning outcome in more than one core requirement. Some community colleges require that every gen ed course address critical thinking, for example, and some require quantitative skills to be addressed in gen ed social science courses as well as in math courses. This helps ensure that, no matter how long students enroll at your college or what they take, they’ll leave with stronger skills than when they arrived.

What is a competency-based degree?

Posted on October 19, 2013 at 7:45 AM Comments (0)

The idea of competency-based education has been around for generations. Here's what a course would look like if it were competency-based.

1. Your course would have a clear list of what students must know and be able to do in order to pass the course--in other words, learning objectives.

2. Every course lesson and activity is designed to help students achieve those learning objectives.

3. Everything you grade students on is based on how well students have achieved those learning objectives. Things unrelated to those objectives, such as attendance, class participation, or submitting assignments on time, don't count toward grades unless things like time management are part of the course's learning objectives.

4. Students must get a passing grade on every course learning objective in order to pass the course. In other words, if you are teaching writing and correct grammar is one of your learning objectives, they must get a passing grade on grammar; an A for a persuasive argument can't average out with an F for grammar to yield a C. (See the sketch below.)
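
Here is a minimal sketch of that grading rule; the objectives, scores, and 70-point passing threshold are invented for illustration.

    # Illustrative sketch of competency-based course grading: a student passes
    # the course only if every learning objective is passed; strength on one
    # objective cannot offset weakness on another. Objectives, scores, and the
    # passing threshold are invented.
    PASSING = 70  # hypothetical passing score on a 100-point scale

    def course_result(objective_scores):
        """objective_scores: {learning objective: score out of 100}"""
        failed = [obj for obj, score in objective_scores.items() if score < PASSING]
        if failed:
            return "Not yet passed; objectives to revisit: " + ", ".join(failed)
        return "Course passed: all learning objectives met"

    # An A on persuasive argument cannot average out an F on grammar.
    print(course_result({"persuasive argument": 95, "correct grammar": 50}))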

In theory, competency-based degrees could consist of courses designed this way, but in practice they usually take these ideas a step further. If every course in your program is designed according to these principles, students would earn a degree in your program by completing and passing every assignment in every course. Competency-based degrees consist of all those assignments without chunking them into courses; students do not earn credits or grades for courses passed. Students might also have the option of working through the assignments at their own pace.

Obviously competency-based degrees are not for everyone. But the idea of competency-based education--evaluating or grading students based on whether or not they have achieved important learning outcomes, and not letting an A on one key outcome balance an F on another--is simply good educational practice and something that any faculty member can apply.