Friday, June 20, 2008

Online Course Evaluations and Response Rate Considerations

The following is an email exchange between Gary Brown and Nils Peterson of the Center for Teaching, Learning and Technology at WSU and members of the TLT Group.

Ehrmann @ TLT: Nils,

I’ve gotten a couple of questions from a subscriber.

Do any WSU colleges conduct student course evaluations exclusively online? All of them?

What kind of response rate does WSU get to online surveys and what strategies seem to work best for that purpose?



Nils: WSU has several colleges that do online surveys exclusively (Engineering & Architecture; Agriculture and Natural & Human Resources; Pharmacy). Response rates vary by course from very low to 100%. Gary Brown can take up this conversation to talk about what we do and don’t know about what drives response rates.

Gary: As Nils notes, we have several colleges doing online evaluations, some exclusively, with more joining all the time. Response rates vary, but perhaps more importantly, so do the instruments and, more importantly yet, the way the evaluations are used. I won’t go into detail about the differences in the evaluation instruments we’ve encountered, but online or not, their quality and fit for a variety of pedagogies is for me much more of a concern than the mode of delivery. The way they are used bears directly on validity, because response rates matter little if results are ignored by faculty, misunderstood or difficult to interpret, or, all too commonly, boiled down to a single number for ranking purposes. It is hard to make arguments about the validity of an instrument and process if it is all capped by use that is itself invalid. But that points to the more important argument: it isn’t the response rate, or the subsequent issue of response bias, that matters as much as making sure the responses are representative of and appropriate for the purpose of the process, which is, hopefully, improving students’ learning experiences.

All that aside, response rates:

In our College of Agriculture, the overall response rate was 53%, but that number varies widely across departments. Here is a picture of response rates across departments from about a year ago:


Needless to say, the variance across departments is mirrored by similarly dramatic variance among courses and faculty, so it is hard for us to attribute the variance exclusively to the medium of delivery. We make other conjectures in our analysis in an article we published a while back. A key to response rates, we note in the article, is that in the departments with higher response rates, the chairs were involved in the design of the instrument and the decision to put it online. So there is something important to be said for leadership and for that leadership’s engagement in the process. We also point to other factors associated with the higher rates we tracked in certain classes, mostly the engagement of faculty in the process: their demonstration throughout the term that they listen and respond (not necessarily capitulate) to students’ concerns, and their overt work to engage students in the teaching/learning/assessment process.

The issue is pretty hot, too, and there are a number of discussions about response rates:
http://www.utexas.edu/academic/diia/assessment/iar/teaching/gather/method/survey-Response.php

http://books.google.com/books?id=zrjGUewMWHEC&pg=PA92&lpg=PA92&dq=adequate+survey+response+rates&source=web&ots=Q-1Sj0ntID&sig=XvzUTqM5dv5NjHIC0FX4CAr2LxM#PPA92,M1

http://books.google.com/books?id=H0Uexcg9xBcC&pg=PA42&lpg=PA42&dq=adequate+survey+response+rates&source=web&ots=aLzsrkerPO&sig=Prza517KiMb_Cf2jGenNXhKG5Dk#PPA46,M1

http://www.aapor.org/bestpractices

Most of these suggest, as you will see, that 50% is adequate, if not stellar. (The most authoritative is the last link, and they, too, say that 50% is OK.) The larger concern I infer from your note is the utility of responses at low rates (we’ll let others worry for the moment about the implications of comparing results, as some chairs do, when the response rates differ significantly).
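As a rough illustration of what is at stake in the "utility of responses at low rates" question, here is a small sketch (not part of the original exchange; the class sizes and response counts are hypothetical) of the margin of error different response rates buy in a single class, using the standard finite population correction:

    import math

    def margin_of_error(class_size, respondents, p=0.5, z=1.96):
        """Approximate 95% margin of error for a proportion, with finite population correction."""
        se = math.sqrt(p * (1 - p) / respondents)
        fpc = math.sqrt((class_size - respondents) / (class_size - 1))
        return z * se * fpc

    # Hypothetical classes: a small seminar and a large lecture, at 50% and 90% response.
    for class_size, rate in [(30, 0.5), (30, 0.9), (120, 0.5), (120, 0.9)]:
        n = round(class_size * rate)
        print(f"class of {class_size}, {rate:.0%} responding ({n} students): +/- {margin_of_error(class_size, n):.1%}")

On numbers like these, raising the response rate does narrow the interval, but even a near-census of a small class is still a small sample, which is part of why who responds, and how the results are used, matters as much as the raw rate.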

But our own work here at WSU with the College of Engineering suggests that response bias may be less salient than one would presume.

We have not written this up yet, but here is a comparison of online versus paper evaluations done with the College of Engineering at WSU. We have shared this with a work group from the American Evaluation Association (AEA) and are finding others who report the same phenomenon. The response rate online was about 51%; for paper in class it was about 71% (much lower than most people believe is the case for traditional paper-based evaluations, which are presumed to run closer to the mid-90s). The samples are convenience samples based on faculty preference for using paper or trying the online option. The graph reflects 26 student evaluations randomly drawn from each of the three groups. If there is some kind of response bias, the picture here does not reveal it. We have been monitoring this as we move more and more online, and we remain interested in exploring the distinctions we may get (or not) when populations complete the instruments voluntarily, for extra credit, or when they are required to do so.
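For readers who want to go beyond eyeballing a graph, here is a minimal sketch of one way to check equal-size random draws from three groups for a mode effect. It is not the actual WSU analysis: the ratings below are made-up 5-point scores, the group labels are generic, and a Kruskal-Wallis test is just one reasonable choice for ordinal ratings like these.

    import numpy as np
    from scipy.stats import kruskal

    rng = np.random.default_rng(2008)
    # Hypothetical 5-point ratings standing in for 26 randomly drawn evaluations per group.
    group_a = rng.integers(1, 6, size=26)
    group_b = rng.integers(1, 6, size=26)
    group_c = rng.integers(1, 6, size=26)

    # Kruskal-Wallis compares the three rating distributions without assuming normality.
    stat, p = kruskal(group_a, group_b, group_c)
    print(f"H = {stat:.2f}, p = {p:.3f}")
    # A large p-value is consistent with "no detectable mode effect" in these draws;
    # it does not by itself rule out response bias.

With real data, the same comparison could be repeated item by item across the instrument, which is closer in spirit to looking for differences across the whole evaluation rather than in a single summary number.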
