December 28, 2016

Moving Away from the Course Evaluation Numbers Game

This content was previously published by Campus Labs, now part of Anthology. Product and/or solution names may have changed.

Is your course evaluation data being used to develop or to evaluate your faculty? That was the key question in my recent conversation with Ken Ryalls, PhD, President of IDEA. In episode 4 of our Data in Higher Education podcast, the two of us discussed a more holistic approach to assessing teaching and learning, one that fosters classroom engagement, risk-taking, and innovation.

Teaching and learning: where we are today

The idea that most faculty aren’t committed to teaching and learning is a common misperception. Most instructors do want to improve, and administrators do want to support them, but the system itself is outdated and limiting. Rarely is there a comprehensive view of a faculty member’s effectiveness in the classroom. Instead, most approaches to understanding teaching effectiveness fixate on snapshots of performance, broken down by discrete courses and reduced to a magic number on a five-point scale for reporting purposes. It’s easy, then, for avoiding low student ratings to become an instructor’s primary motivation, especially when those ratings can negatively affect a career. A system ostensibly designed for quality improvement becomes potentially punitive because it serves two competing goals: professional development and job performance evaluation. What’s needed is a careful review of current approaches to evaluating classroom interactions, a focus on holistic assessment, and a more thoughtful use of actionable data for better teaching and learning.

Consider the three-legged stool

For feedback to be meaningful, the student voice is critically important. After all, students spend the most time observing and interacting with faculty in the classroom. The problem occurs when the collective voice of students is the only voice (or even the loudest one), and the feedback is boiled down to a number. A poor score can discredit a faculty member, while a satisfactory one can thwart innovation, since any change in approach risks lowering the ratings. The number becomes the end goal, and the focus is no longer on improvement and growth. In other words, the system may incentivize faculty not to innovate in their instructional methods. The familiar three-legged stool metaphor suggests a better evaluation model: three equally valid “legs” supporting a balanced structure, namely student feedback, a faculty self-assessment, and feedback from a peer (a fellow instructor or an administrator).

Ask better questions and delve into the data

A holistic assessment approach should reflect multiple data points, including student feedback on a classroom activity, a peer evaluation of a course, and student performance on a specific learning outcome. The goal is to incorporate different points of view into the evaluative process. Asking a few key internal questions can help clarify the underlying goals of an effective approach:

Why are you doing the evaluation? 
Why and how are you using student ratings? 
What questions are you asking? 
What outcomes are you trying to achieve? 
What problems are you trying to solve? 
What answers are you hoping to gain?

The primary purpose of feedback should be quality improvement, and an effective evaluation will use questions that align with learning outcomes and yield insights into student engagement and performance. Ideally, questions should focus on how instructional techniques support successful learning, not prompt opinions about personal characteristics, such as how nice a professor is. For peer feedback, rubrics can help ensure consistency and fairness.

Make the information consumable

Data serves no one if it’s difficult to use. The feedback from an evaluation process should be organized so that it’s easy to consume without being overly simplistic. A careful analysis of holistic assessment data shouldn’t result in a magic number. Rather, it should produce an accurate synopsis of what, and how, a faculty member teaches in service of student learning. This analysis can also support and guide actions for improvement and professional growth. It’s a challenging balance to strike, but not an impossible one if the right data tools are used.

What will progress look like?

Ideally, a more comprehensive approach to faculty evaluation will focus on faculty development and foster a culture of learning for all stakeholders. Students will learn the course content, instructors will learn how to adjust their teaching delivery for the best outcomes, and administrators will learn new approaches for supporting and guiding faculty.

JD White, Ph.D.

Chief Product Officer
Anthology

John “JD” White, Ph.D., leads the Anthology product development team as Chief Product Officer. His areas of expertise include assessment in higher education, student success and retention efforts, the use of analytics in higher education, and the development of technology to support institutional effectiveness. Before joining Campus Labs, he managed assessment initiatives for the Department of University Housing at the University of Georgia. He has also held student affairs roles at Georgia Tech, Virginia Tech, and Northern Arizona University.