Enlighten Your Institutional Initiatives with Course Evaluation Data
This content was previously published by Campus Labs, now part of Anthology. Product and/or solution names may have changed.
Take a second and reflect on the ways your institution actively uses data from your course evaluation processes.
Now, let me guess the examples you came up with:
- Adjustments to Teaching Methodology
- Determining and Predicting Course Effectiveness
- State Mandates
Are we on the same wavelength with our examples? These are, of course, completely valid uses of your course evaluation data—but what if you pushed beyond the “norm”? Imagine the possibilities when conversations around this data aren’t focused on personnel decisions and response rates alone. Course evaluation data is one of the few authentic data sets that represent students’ perceptions of their own success and growth as learners. Alongside direct measures of success, this data set creates strong key performance indicators for retention while informing strategic planning and addressing enrollment and equity issues on campus.
There are countless sources of data at any institution, but course evaluation data stands out as an underutilized, incredibly valuable and authentic data set. The best part? The process of collecting this data is already built into your institution’s end-of-term processes and easily captures a student’s perception of learning—which is more valuable than, for example, their indication of satisfaction with an on-campus event.
By including these more diverse course evaluation data sets, you can dive deeper to boost your institutional effectiveness efforts.
Retention and Persistence
It is no secret that retention and persistence numbers show up in every campus’s key performance indicators. Two factors that play a role in student satisfaction at your institution are a sense of community and the perceived quality of the academic curriculum. Combining data on student participation outside the classroom (e.g., event attendance, co-curricular pathway participation, organization membership) with your students’ perception of learning from their course evaluations yields an indicator of persistence. By analyzing these two data sets together, your institution can move closer to proactively identifying students at risk of not returning—and, in turn, positively impact your retention numbers. Once specific students are identified, appropriate interventions can be put in place to improve the student experience and satisfaction both inside and outside the classroom.
Taking this a step further, you should regularly assess the use of your campus support services and interventions to identify their impact on your students’ educational goals. If you combine course evaluation data with data on the use of campus resources (e.g., advisors, academic support services, career services), you can see whether those resources affect how students rate their learning experience. Analyzing this data will also let you draw conclusions about how to improve programming and student services.
Budget and Operations
What about using teaching and learning data to justify or modify courses that carry a higher facilitation cost? Course evaluation data combined with program and course operational data will show whether a student’s perception of learning is positively or negatively affected by courses offered at certain times of day, on certain days of the week, in certain terms or modalities, or taught by tenured versus adjunct faculty. Analyzing these data sets can play a critical role in allocating funds within specific programs. This data gives institutional leaders the capability to have more informed conversations about what it takes to operate programs, matched with students’ perception of learning—leading to intentional and pragmatic decisions. It will also help inform program review, particularly the portion that asks a program to reflect on how its budget aligns with the institution’s larger priorities. Additionally, the combined data sets will yield insights into what other adjustments course offerings may need, such as location, building or delivery method.
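The comparison described above can be sketched as a simple group-and-average over course-section records. The section attributes and ratings below are entirely invented to show the shape of the analysis, not real results.

```python
# Hypothetical sketch: average course-eval ratings by delivery attribute
# (modality, instructor status) to see where perception of learning differs.
# All records are illustrative.
from collections import defaultdict

course_sections = [
    {"modality": "online",    "instructor": "adjunct", "rating": 3.2},
    {"modality": "online",    "instructor": "tenured", "rating": 3.6},
    {"modality": "in_person", "instructor": "adjunct", "rating": 4.1},
    {"modality": "in_person", "instructor": "tenured", "rating": 4.3},
]

def mean_rating_by(records, key):
    """Average the perception-of-learning rating for each value of `key`."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key]].append(rec["rating"])
    return {k: round(sum(v) / len(v), 2) for k, v in groups.items()}

print(mean_rating_by(course_sections, "modality"))
# → {'online': 3.4, 'in_person': 4.2}
```

Joining these grouped averages with per-section cost data would then let leaders weigh facilitation cost against perceived learning for each delivery format.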
Student Learning Outcomes
In the past five to ten years, student learning achievement has become a buzzworthy data set for Institutional Effectiveness offices around the country, as student learning outcomes (SLOs) are now tracked at the institutional, general education, program and course levels. SLOs are meant to provide a more authentic representation of the learning that is and isn’t occurring in and out of the classroom. SLOs can add a formative lens to typical course success rates, but we are still missing a critical piece of the learning journey—the student’s voice. Course evaluation data paired with SLO data tells a far more relevant story about what engaging in the learning experience was like. Knowing that 90% of a class is proficient in critical thinking is helpful, but knowing that students in that class reported on their course evaluations that the class was rigorous, the content was meaningful and the teacher was accessible starts to tell a story about added value.
Accreditation
This may have come up as one of your examples of common uses of teaching and learning data, but I would be remiss if I didn’t note the large role this data set plays in an institution’s accreditation (regional or programmatic) and cyclical reporting processes. Regardless of which regional accreditor your institution reports to, they all call for information on the quality of your educational programs while examining important factors such as student learning assessment, educational environment, student support services and financial reporting. Course evaluation data is instrumental in telling your campus’s continuous improvement story. It can help illuminate the growth and success of your programs, courses, faculty and departments to external and internal stakeholders:
“Institutional leaders are in a position to successfully adapt to a changing higher education landscape by engaging student leaders, listening to real student needs, and collaborating in decision making.”
One of the core concerns of institutional leaders is whether students return to the institution. This means student success key performance indicators need to authentically include, highlight and respond to student voices while sitting at the forefront of every institutional initiative. By harnessing this data, you can push beyond the usual course evaluation conversations about response rates or tenure and promotion and move toward conversations about impact, improvement and enlightenment.
Discover more about how Anthology Course Evaluations can help you progress with diverse data and allow you to spend more time and energy on impact, improvement and enlightenment.
MacCracken, Smith, & Templeton (2019). “A Study of Student Voice in Higher Education.” https://www.aacu.org/diversitydemocracy/2019/winter/templeton
Jenna Ralicki is an emerging scholar in the field of student affairs assessment analytics. She specializes in crafting institutional learning outcomes frameworks that blend classroom foundations with experiential application and 21st-century transferable skill development. Jenna has collaborated with more than 200 campuses to grow a systemic culture around student learning that is holistic and driven by centralized and innovative technology infrastructures. Her efforts support higher education institutions in using a model of continuous improvement that results in high-quality program review, accreditation and strategic planning processes. She is passionate about focusing on the student experience as the main driver for institutional and analytic initiatives. She holds a bachelor’s degree in communication arts from Gannon University and a master’s in higher education from Buffalo State College.