Document Type

Report

Publication Date

9-1-2010

Keywords

electronic course evaluations, paper course evaluations

Abstract

Electronic course evaluations are becoming a popular, inexpensive substitute for traditional paper course evaluations. Electronic evaluations are easy to implement, reduce the impact on instructor time, are more uniform in their administration, and can reduce printing and paper costs. Further, some unexpected benefits can accrue from electronic evaluations. For instance, students appear to respond in more detail to open-ended electronic questions than they would to the same questions posed in paper format. While there are clear benefits from electronic course evaluations, there also exist pitfalls. Research suggests students view electronic evaluations as less anonymous, thereby bringing into question the validity of student responses. Two other common and related concerns are that electronic course evaluations receive fewer student responses and that those who do respond are not representative of the population of enrolled students. Student response rates and the impact of electronic course evaluations on instructor ratings are the focus of this report. The Office of Survey Research (OSR) conducted a controlled pilot of electronic course evaluations during Spring Quarter, 2010. This pilot provided the opportunity to learn about OSR’s ability to implement large-scale electronic evaluations and simultaneously investigate the impact of these evaluations relative to traditional paper evaluations. OSR piloted electronic evaluations with 21 WWU instructors teaching 23 different CRNs. Of these 23 CRNs, 3 were part of large, multiple-CRN courses whose other CRNs were evaluated with traditional paper evaluations, thus providing a control group with which to measure the impact of electronic course evaluations. Seven CRNs were taught by instructors who were simultaneously teaching at least one different section of the same course. These other CRNs serve as a control group. 
Thirteen CRNs were taught by instructors who taught the same course in a previous quarter; the courses in the prior quarters serve as a control group for these instructors. Student response rates on the electronic evaluations were considerably lower than the response rate in the paper evaluation control groups: 74.2% of enrolled students completed the paper evaluations while 56.8% completed electronic evaluations. This lower response rate is quantitatively consistent with the best peer-reviewed research estimate OSR could locate (an estimated decline of about 12%) and qualitatively consistent with the findings of institutional research directors at local colleges and universities. When within-instructor response rate estimates are computed, the student response rate difference rises to almost 20%; thus OSR’s best estimate of the impact of electronic evaluations on student responses is that an additional one in five students will choose not to complete an electronic evaluation relative to a traditional paper evaluation. Given that student responses to any evaluation system are voluntary, it is interesting to ask whether student participation (or lack thereof) in electronic evaluations is random or systematic. One can think of arguments why a decline in participation is not random. OSR’s electronic evaluations were completed on a student’s own time. Students who felt strongly (either positively or negatively) would be more likely to use their time to complete an evaluation. Students who feel less strongly about a course would be less likely to complete an evaluation. As a result, the student evaluations may become bimodal. 
While OSR did not link individual student responses with identifying student information, OSR did track responses to specific evaluation questions, such as question #20 of the teaching evaluation form: “{The} Instructor’s contribution overall to the course was:” Relative to their control groups, the overall variance of responses to this question was considerably larger for electronic evaluations, a result consistent with response distributions becoming more bimodal. Further, the average electronic response to question #20 was two-tenths of a point lower (on a five-point scale) than on the paper evaluations. Similar differences occurred in the other questions investigated. In summary, it appears that electronic evaluations reduce response rates by about 20%, reduce average instructor scores by a small amount (two-tenths of a point), and increase the variance of the responses. While these differences may be attributable to the electronic format, some care should be taken in using these numbers. First, the psychological literature on the Hawthorne effect points out that individuals are more likely to participate in an experiment because they believe they are helping in the creation of knowledge. If this occurred in our pilot, then one might expect even lower response rates after electronic evaluations are adopted. Further, the instructors participating in the experiment may not be representative of the population. If these instructors volunteered to participate because of their enthusiasm for electronic evaluations, then their enthusiasm may have been transmitted to their students, thus increasing response rates. A less enthusiastic instructor might receive fewer responses and possibly different ratings on items like question #20. The remainder of this report documents a listserv discussion regarding electronic course evaluations that took place between members of the Pacific Northwest Association of Institutional Researchers. 
This discussion involves many local institutions that have experimented with or implemented electronic course evaluations. It is followed by a literature review and a complete discussion of the Western Washington University pilot. This report concludes with an estimate of what it would take OSR to implement a campus-wide electronic course evaluation system. To summarize the final section, OSR estimates that it would require a technically skilled employee to spend about 40 hours in initial setup time and about 50 hours per quarter to implement electronic course evaluations. However, this time commitment would serve only to program and e-mail the electronic evaluations to students. Additional time and computing storage space would be needed to store and disseminate results. Of course, these costs may be offset by the elimination of paper surveys.

Identifier

317

Publisher

Digital object produced by Office of Survey Research, Western Washington University, and made available by University Archives, Heritage Resources, Western Libraries, Western Washington University.

Genre/Form

Reports

Subjects - Topical (LCSH)

Universities and colleges--Evaluation--Data processing; Universities and colleges--Evaluation

Title of Series

Technical and research reports (Western Washington University. Office of Survey Research) ; 2010-03

Type

Text

Rights

This resource is provided for educational purposes only and may be subject to U.S. and international copyright laws. For more information about rights or obtaining copies of this resource, please contact University Archives, Heritage Resources, Western Libraries.

Language

English

Format

application/pdf
