A recent episode of NPR’s “Weekend Edition” featured a guest who told listeners that research shows that sleeping on your back can be hazardous to your health. He also said that research shows that sleeping on your front can be hazardous to your health, and that sleeping on your left side can be hazardous to your health, and that sleeping on your right side can be hazardous to your health. Each conclusion, reached through analysis of data from one or more independent studies, was presumably compelling enough to warrant publication somewhere; hence its inclusion in a chronology of fears and phobias. And when the guest was asked how he slept, he said, “… fitfully, because I’m constantly plagued by fears of the risks I’m taking.”
“Research shows…,” eh? Wow! If the only safe alternative is to spend the rest of my life sleeping vertically, I think I’d like to know more about just how, and by whom, those data were collected and analyzed before committing to the change! In fact, I would argue that whenever someone asserts that one particular course of action is better than another because “…research shows…,” that observation alone ought to raise the red flag of caution, rather than the white flag of surrender, before substantive decisions are made.
Case in point: student course evaluations. Talk about a hot-button item, with a seemingly endless number of editorial pieces decrying or extolling the merits of these instruments based on what the “research shows.” As one who has enjoyed considerable success in the classroom (that success, by the way, measured largely by student feedback), I’ve wondered for a long time just what the research actually does show. And if, in fact, those evaluations aren’t worth the paper they’re printed on—as some peers would have me believe—then I’d kind of like to know where the holes are. So I took a look.
What I didn’t expect to find is that the vast majority of people who have taken time to look (yup, at data) agree that the criticisms leveled at course evaluations perpetuate little more than myth.
Authors of Paper #50 from the IDEA Center, winnowing 15 years’ worth of literature (2,875 articles!) down to 542 actual research reports, put emotion on the shelf and draw some important conclusions. None of the following popular misconceptions is supported by data from contemporary research:
- Students cannot make consistent judgments.
- Student ratings are just popularity contests.
- Student ratings are unreliable and invalid.
- The time of day that a course is offered affects ratings.
- Students will not appreciate good teaching until they are out … a few years.
- Students just want easy courses.
- Student feedback cannot be used to help improve instruction.
- Emphasis on student ratings has led to grade inflation.
At the end of the day, if there is any credible evidence to suggest that course evaluations are without value in measuring the quality of a course or the teacher who has the privilege of presenting it, it’s dwarfed by evidence to the contrary.
To be sure, there is still much to be gained by (1) improving teacher evaluation instruments, (2) implementing efficacious use of mid-term feedback, peer evaluation, and teaching portfolios, and (3) finding reliable ways to measure what students are actually learning. Recent efforts to measure how content delivery styles can improve (or not) attention spans, subject matter mastery, and problem solving skills also hold promise for raising the bar.
But I’m not seeing anything to suggest that what we’re doing now is all that bad if the feedback is credible and the teacher is willing to use it constructively. In fact, I’m so convinced of the positive value of student opinion that I’ve made significant changes in my own courses this year specifically to address issues raised the last time around. And I’m already drafting supplemental questions for upcoming evals in a continuing effort to get better at what I thought was already pretty good. What is the reaction to some change I’ve made in delivery or content? Are there suggestions for better use of class time? How have their perceptions of the natural world changed, and what have they learned that might impact their lives outside of the classroom? For me, those latter points are crucial, for if someone doesn’t leave my class a different person than they were when they walked in, then they may have wasted their time, and I’ve definitely failed to achieve my goals.
Now for the touchy part. If those course evaluations have the merit that I and others contend they do, do they have any place in issues related to promotion and tenure? And if so, what? Yikes! I hope the people who are making those decisions tread lightly as they think long and hard about the implications of saddling faculty of any rank with an obligation to meet a minimum score on a course evaluation.
Colleagues whom I’ve been lucky enough to get to know better in CTE workshops over the past four months are great. They’re enthusiastic about opportunities to shape young minds; they’re apprehensive about finding a comfortable middle ground between content delivery, theater, and student achievement; and they’re concerned about managing the delicate balance between teaching, research, and whatever else they are expected to excel at. If a grant proposal falls short, they know that review panel comments and critiques from mentors will help them make it better the next time. If unexpected experimental results or other rogue events lead to a scholarly dead end, we’ll applaud their ability to back away and seek new paths to success. Aren’t missteps in the teaching mission also to be expected? My strong plea is to let our faculty (especially new faculty) teach the same course or courses for several semesters so they can learn from their mistakes. Let glances over their shoulders be for visions of where they’ve been, not fear of who’s sneaking up on them. Encourage them to use those first few rounds of numbers and comments as fertilizers rather than herbicides (Oh, give him a break; he’s an Aggie!) to cultivate the next generation of educators who will ensure the greatness that is Cornell.
Trust me; it will pay off.
And now, if I can just get off of this bully pulpit in one piece….
Addendum: Some additional pieces on this subject from the CTE website that are worth a look:
Brower, Aaron. 2008. Myths and realities about student course evaluations. University of Wisconsin. https://tle.wisc.edu/solutions/evaluation/myths-and-realities-about-student-course-evaluations
Lilienfeld, S. 1999. Student course evaluations and what research teaches us. Emory University. http://www.emory.edu/EMORY_REPORT/erarchive/1999/February/erfebruary.15/2_15_99lilienfeld.html
Boggs, A., et al. 2009. The validity of student course evaluations: an eternal debate? University of Toronto. http://www.academia.edu/184452/The_Validity_of_Student_Course_Evaluations_An_Eternal_Debate