Canadian Journal of Higher Education / Revue canadienne d'enseignement supérieur
Volume 47, No. 2, 2017, pages 106–120

The Online Evaluation of Courses: Impact on Participation Rates and Evaluation Scores

Jovan F. Groen & Yves Herry
University of Ottawa

Abstract

At one of Ontario's largest universities, the University of Ottawa, course evaluations involve about 6,000 course sections and over 43,000 students every year. The current paper-based format requires over 1,000,000 sheets of paper, 20,000 envelopes, and the support of dozens of administrative staff members. To examine the impact of a shift to an online system for the evaluation of courses, the following study compared the participation rates and evaluation scores of an online and a paper-based course evaluation system. Results from a pilot group of 10,417 students registered in 318 courses suggest an average decrease in participation rate of 12–15% when using an online system. No significant differences in evaluation scores were observed. Instructors and students alike shared positive reviews of the online system; however, they suggested that an in-class period be maintained for the electronic completion of course evaluations.

Résumé

At the University of Ottawa, one of Ontario's largest universities, course evaluations involve some 6,000 courses and more than 43,000 students each year. More than one million sheets of paper, 20,000 envelopes, and the support of a few dozen staff members are required for the task. To study the impact of a transition to an online evaluation system, the present study compared the participation rates and evaluation results of an online system and a manual (paper) system. Results from a pilot group of 10,417 students registered in 318 courses suggest a decrease of 12 to 15% in average participation rates with the use of the online system, but no significant difference in evaluation results was observed. Professors and students alike offered positive comments on the online system but suggested that a short in-class period be maintained so that students could complete their course evaluations online.

Course evaluations and the promotion of this practice are important parts of any system that aims to enhance teaching and contribute to the quality of the student experience. Not to be overlooked, feedback from course evaluations is an important tool for professors. Beyond the feedback they receive, the results also matter for the development of their careers, since they form part of the portfolio required for tenure, promotion, and contract renewals. This paper presents the results of a pilot project conducted at the University of Ottawa to evaluate the impact of the online evaluation of courses on student participation rates and evaluation scores, compared with the paper-based course evaluation completed in class.

Context and Review of the Literature

At the University of Ottawa, course evaluations involve about 6,000 course sections and over 43,000 students every year. Paper-based course evaluations consisting of 13 questions are distributed in class and completed by the attending students.
Near the end of the semester, a 20-minute period at the beginning of a class is set aside for the completion of evaluation forms. Students are also given an additional sheet to share their comments with the course instructor. The forms are scanned by an optical reader, and the results are sent to the professor, the program director, and the dean. Comment sheets are sent only to the professor. Every year, course evaluations require over 1,000,000 sheets of paper and 20,000 envelopes. In recent years, the average participation rate for the paper-based evaluations has been about 64%, ranging from 60% to 81% depending on the faculty. The University of Ottawa has for several years sought to implement a new system to evaluate all courses online; before doing so, however, the administration tested the online evaluation format within the institutional context.

In North America, interest in the online evaluation of courses increased significantly in the early 2000s. The percentage of American universities using online evaluations rose from 2% to 33% between 2000 and 2005 (Anderson, Brown, & Spaeth, 2006). In recent years, several Canadian universities have adopted, either entirely or partially, the online evaluation of courses. Many other universities, much like the University of Ottawa, are discussing the possibility of implementing such a system. Universities offering the online evaluation of courses make their evaluation forms available throughout the evaluation period (about two weeks near the end of each semester). This provides students with more time to complete the evaluation and allows them to do so at a time convenient to them. Some institutions have maintained a mandatory in-class period to allow students to complete the evaluation in class with their laptops or mobile devices, while other universities have not retained the in-class system.

Research into the online evaluation of courses has attempted to address the university community's main concerns in this area. These include participation rates, course evaluation scores, comments provided by students, and incentives to complete the evaluations. The following sections highlight literature related to each of these areas of concern.

Participation Rates

The participation rate for online evaluations is the foremost concern identified in the literature (Adams & Umbach, 2012). Indeed, most studies on the subject point to a lower participation rate for the online evaluation of courses compared with paper-based evaluations (Gamliel & Davidovitz, 2005; Hativa, 2013; Nevo, McClean, & Nevo, 2010; Stowell, Addison, & Smith, 2012). The participation rate for online evaluations varies among institutions, with rates between 30% and 53% (Nowell, Lewis, & Handley, 2010), the majority hovering around the 50% mark. Some universities have achieved participation rates of nearly 70% by offering student incentives. Universities that have moved from paper-based to online evaluations often note a slight increase in the participation rate of online evaluations over a multi-year period (Nevo et al., 2010). However, research has shown that technology alone is not the sole factor influencing the participation rate; rather, it is the engagement of academic leaders that has a significant impact on the level of participation (Pitre-Hayes, 2013). The more students see the effects of these evaluations on the quality of teaching, the higher participation rates will be.
Two factors reduce the rate of participation: first, the student perception that an evaluation is useful only to the professor; and second, the belief that students will not benefit from their own course evaluation results because they will have finished the course by the time results are available (Nevo et al., 2010). There is, therefore, continued work to do in terms of explaining and demonstrating the benefits of course evaluations. Despite the factors listed above, it is understood that certain groups of students are more likely to complete course evaluations, whether online or paper-based. Generally, these are female students and academically strong students (based on course marks and cumulative GPA) (Adams & Umbach, 2012; Hativa, 2013).

Course Evaluation Scores

Lower participation rates in course evaluations raise the following question: how representative are the students who complete the questionnaire of the overall student body taking the course? Professors and researchers wonder whether certain groups, such as the least satisfied students, the lowest achieving students, or the students least engaged in the course, are more likely than others to answer the questionnaire online (Hativa, 2013). A larger proportion of these students would have a negative impact on the evaluation scores. If one subgroup of students is more likely to participate than another, it could be presumed that online evaluations introduce a bias at the expense of the less-represented group. However, research on this subject shows that this is not the case and that these ideas represent misconceptions about the online evaluation of courses (University of Saskatchewan, n.d.). In fact, according to several studies, the judgements made by students about courses through online evaluations appear to be similar to those made using paper-based evaluations, despite a lower participation rate (Gamliel & Davidovitz, 2005; Legg & Wilson, 2012; Nowell et al., 2010; Stowell et al., 2012; University of Saskatchewan, n.d.; Venette, Sellnow, & McIntyre, 2010). Burton, Civitano, and Steiner-Grossman (2012) conducted a meta-analysis of research on the subject, identifying 16 studies that compared the results of online evaluations with those of paper-based evaluations. Fourteen of these studies found no significant difference between the two methods of evaluation, and two studies found a slight increase in positive scores for online evaluations. Burton et al. (2012) also found increased positive scores in online evaluations, as did Nowell et al. (2010) and Morrison (2013). The research results do not support the idea that the online format negatively influences course evaluation scores.

Comments Provided by Students

One of the positive effects of online evaluations is the quality and quantity of comments from students regarding their educational experience (Crews & Curtis, 2011; Legg & Wilson, 2012). Studies have shown that lengthier comments are associated with online evaluations (Venette et al., 2010). The comments were more developed, better constructed (Morrison, 2011), more positive, and more useful (Burton et al., 2012; Heath, Lawyer, & Rasmussen, 2007) than those provided in the paper-based evaluations. In addition, a larger number of students were said to provide comments when using the online evaluation format.
Incentives to Increase Participation Rates

Researchers have studied cases where incentives were offered to boost course evaluation participation rates. Among the incentives analyzed, maintaining a mandatory period of class time for the completion of course evaluations confirmed, in the eyes of students, that completing the evaluation was important enough to merit class time (Crews & Curtis, 2011; Nevo et al., 2010). In addition, students noted that completing course evaluations outside of class time represented an extra task for them. Other incentives are used by some universities, but implementing them involves a number of challenges in terms of both management and ethics. Among the more extreme examples are the mandatory evaluation of courses and the imposition of penalties such as withholding final marks until the course evaluation has been completed (Linse, 2012). Other universities have used more positive methods, such as adding marks (between one and five) to the final marks of students who have completed the course evaluation (Pitre-Hayes, 2013; University of Saskatchewan, n.d.).

Crews and Curtis (2011) compiled a list of the most common strategies used by professors who obtained course evaluation participation rates of over 80%. It should be noted that these professors often combined several strategies. Some 70% of this group had sent a personal email to their students to highlight the importance of the course evaluations, and 52% had given extra marks for completing them. In another study (University of Saskatchewan, n.d.), professors who made the completion of the course evaluation an assignment and gave students additional marks obtained an average participation rate of 87%. This percentage fell slightly to 77% for professors who gave the evaluation as an assignment but did not give marks; professors whose only strategy was to mention the course evaluations to their students had an average participation rate of 32%, and those who failed to mention them at all had a 20% participation rate. Another strategy that encourages student participation in course evaluations is the use of an informal midterm course evaluation. Research shows that professors who ask their students' opinions on the course at midterm and who take their comments into account during the rest of the course have high participation rates during the final course evaluation (Davis, 2009; University of Ottawa, n.d.; University of Saskatchewan, n.d.).

In conclusion, it must be said that all of the strategies that involve requiring evaluations to be completed or allocating additional marks are very difficult to manage; they can endanger the confidentiality of student responses and can introduce bias in the evaluation scores, because students may complete the form only to obtain the additional marks without giving the questions their fullest attention. However, no study to date has looked at the impact of these strategies on evaluation scores.

Pilot Project Research Questions

Based on the results of this literature review, the University of Ottawa put in place a pilot project to address the following questions:
• What is the impact of an online evaluation format on participation rates and course evaluation scores?
• What is the level of satisfaction of users?
• When did students complete the course evaluations, and what type of device did they use?
The methodology of the study was developed to answer these questions.

Methodology

Instruments

The instruments used to collect data from students were the course evaluation form and a separate student questionnaire on their level of satisfaction with the online process and the way in which they completed the course evaluation. Data were collected from professors using a questionnaire regarding their level of satisfaction with the evaluation process and with the results obtained. This questionnaire was administered after professors received their students' course evaluation results.

Course evaluation form. The standard course evaluation form at the University of Ottawa consists of 13 questions. The majority of course evaluations are carried out using a paper-based format distributed to students in class. However, the university offers an online version of the questionnaire to students in all its distance education courses and to students with special needs. The online questionnaire includes the same questions as the paper-based version and is identically formatted. As with the printed version, it contains 13 questions with an additional comment section. The online version of the form was used for the purposes of this project.

Student questionnaire. In addition to the course evaluation form, each student was asked to answer six questions on how the evaluation was carried out, their level of satisfaction with the online evaluation process, and the device they used to complete the questionnaire. Students responded to a separate online questionnaire immediately after completing the course evaluation. The questions were as follows:
1. I completed the online evaluation of this course (when I received the email invite; during the in-class period; when I received the email reminder; at another time).
2. What device did you use to complete the online evaluation of the course? (Tablet [iPad, etc.]; smartphone; personal computer; campus computer.)
3. I found it easy to use the online evaluation format. (Four-point Likert scale.)
4. I was able to give more detail about my experience in the course using the online evaluation format than the paper-based format. (Four-point Likert scale.)
5. I would feel comfortable using the online evaluation format for all of my course evaluations. (Four-point Likert scale.)
6. If you could improve one thing about the online evaluation format, what would you suggest? (Access to the questionnaire via uoZone; on-campus Wi-Fi connection; navigation using a smartphone/tablet; other [please elaborate below]; no improvement needed.)
Lastly, students were given a section to provide additional comments.

Questionnaire for professors. After receiving the online evaluation results (scores and student comments), professors who participated in the project were sent a five-question survey to be answered using a four-point Likert scale ranging from strongly agree to strongly disagree. The questions focused on how the online evaluation was carried out and the professors' level of satisfaction. The questions were as follows:
1. Students provided more detail in the open-ended comments using the online evaluation format compared to the paper-based format.
2. Student comments were of higher quality in the open-ended comments using the online evaluation format compared to the paper-based format.
3. Student comments were more useful to my teaching in the open-ended comments using the online evaluation format compared to the paper-based format.
4. I would feel comfortable using the online evaluation format for all of my course evaluations.
5. Should the in-class evaluation completion period be maintained? (Yes/No.)
Lastly, professors were given a section to provide their comments.

Participants

The aim of the pilot project was to target a representative sample of the courses offered by the university. To compile this sample, the registrar's office provided a list of 400 courses for the fall 2013 session and a list of 325 courses for the winter 2014 session. The sampling took into consideration the course year (first year through fourth year and graduate studies) and the number of students registered in each course. Because participation in the project was voluntary, the professor of each course in the sample was sent an email asking them to carry out an online evaluation of their course rather than using the paper-based method. Professors of 180 of the 400 fall-session courses agreed to take part in the project, as did professors of 138 of the 325 winter-session courses. In total, 10,417 students were enrolled in the participating courses, and 4,508 completed the six-item questionnaire regarding their perception of how the online evaluation was carried out. Every faculty (except education, which has a different course evaluation period) and every year of study was represented among the study's respondents. During the data analysis, the averages were adjusted to take into account the distribution of the participating classes relative to their representativeness of all courses offered over the two sessions.
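The article does not detail the exact adjustment procedure. A minimal sketch of one standard approach, post-stratification weighting, is shown below: each stratum's mean is weighted by that stratum's share of all courses offered rather than by its share of the pilot sample. The figures are drawn from Table 1 (two strata shown for brevity), but the choice of strata and the code itself are illustrative assumptions, not the study's actual implementation.

```python
# Post-stratification sketch: reweight per-stratum means from the pilot
# sample so each stratum counts in proportion to its share of all courses
# offered university-wide. Two strata shown for brevity; the procedure
# is an assumed, illustrative reconstruction, not the study's own code.

pilot_means = {"1st year": 0.48, "3rd year": 0.53}  # mean online participation rate
offered = {"1st year": 312, "3rd year": 963}        # courses offered in 2013-2014

total_offered = sum(offered.values())
adjusted = sum(pilot_means[s] * offered[s] / total_offered for s in pilot_means)
print(f"Adjusted mean participation rate: {adjusted:.2f}")  # ~0.52
```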
Data Collection

Data collection was carried out between November 2013 and April 2014. The evaluation form was made accessible online through a secure website throughout the regular course evaluation period: two weeks in November for the fall session and two weeks in March for the winter session. During this period, students were able to complete the form at a time convenient to them. The mandatory in-class period was also maintained, and students who had not yet completed the course evaluation had the choice of using this in-class time to do so with a personal computer, smartphone, or tablet while the course instructor stepped out of the room. After answering the 13 questions on the course evaluation form and saving their responses, students could then answer the six additional questions regarding how the evaluation was carried out. At the beginning of the regular course evaluation period, the system sent an email to students inviting them to complete the evaluation for their course, followed by an email reminder and a thank-you message at the end of the evaluation period. Student responses were confidential. Anonymous student comments were made available to professors via their personal faculty profile accounts.

Data Analysis Plan

The analysis of results can be divided into three main sections. The first is the comparison of the results obtained by professors in the online evaluation project with the results obtained using paper-based evaluations of the same course taught by the same professor at least once over the last three years. During this stage of the data analysis, 182 courses participating in the pilot project were identified as having carried out a paper-based course evaluation in the three years preceding the project. Each of these courses was taught by the same professor, was subject to the online evaluation as part of this project, and had at least one paper-based evaluation in the past three years. The participation rates and course evaluation scores obtained during the online evaluation of the 182 courses were compared with those obtained when the same courses were evaluated using the paper-based method. This comparison is the most statistically robust, since it controls for a large number of associated variables, including the course code; the course level; the faculty and department offering the course; the professor teaching the course; the professor's gender, status, and academic ranking; and the students registered in the course.

The second major section of the data analysis compared the results obtained by all courses in the 2013–2014 online project with the results from courses that carried out the paper-based evaluation in the same academic year. During this stage, the participation rates and evaluation scores of 318 courses evaluated online were compared with those of 3,516 courses evaluated using the paper-based method during the same period. This comparison is statistically less robust than the previous one, since it does not control for as many associated variables, but it does allow control of the period of evaluation, something the previous comparison did not permit.

A common statistical method used to test differences between two or more groups is the analysis of variance (ANOVA). However, when the hypotheses of independence and homogeneity are not fulfilled, neither ANOVA nor the Kruskal-Wallis test (a non-parametric version of ANOVA) applies, due to the high risk of false positive or false negative results. Given the absence of normality and homogeneity and the non-independent nature of the sample in this study, the Wilcoxon signed-rank test was used to compare the differences between the two groups (a brief illustration follows this section).

The third major section of the data analysis focused on the presentation of frequency distributions of the responses provided by students and professors in their respective questionnaires. This helped review trends that informed the formulation of several recommendations.
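For readers who wish to reproduce this kind of paired comparison, the sketch below runs a Wilcoxon signed-rank test on per-course participation rates using scipy. The input arrays are invented for illustration; the study's own course-level data are not published here, and the S statistics reported below come from the authors' analysis, not from this sketch.

```python
# Minimal sketch of the paired, non-parametric comparison described above.
# The per-course participation rates below are hypothetical.
from scipy.stats import wilcoxon

online = [0.51, 0.47, 0.55, 0.49, 0.62, 0.44]  # a course's online participation rate
paper = [0.63, 0.58, 0.66, 0.61, 0.70, 0.57]   # same course, paper-based rate

# The signed-rank test works on paired differences and assumes neither
# normality nor homogeneity of variance.
stat, p = wilcoxon(online, paper)
print(f"W = {stat}, p = {p:.4f}")
```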
Results

The presentation of the results is divided into four main sections. The first outlines a comparison between the participation rates of the online evaluation and the paper-based evaluation. The second outlines a comparison between the evaluation scores obtained by professors from the online and paper-based evaluations. The third and fourth sections outline the frequency distributions of responses provided by students and professors in their respective questionnaires.

Comparison of Online and Paper-Based Evaluation Participation Rates

Table 1 outlines the participation rates achieved by all courses, based on the level of study (undergraduate or graduate) and the year of study (first, second, third, or fourth year). The results reveal statistically significantly lower participation rates for online evaluations than for paper-based evaluations [S = 5,308, p < .0001]. In the comparison of evaluations for the 182 courses (comparison 1), the online participation rate was 51%, versus 63% for the paper-based evaluations. In the comparison between the 318 courses participating in the pilot project and all other courses (3,516 in total) evaluated during that same period (comparison 2), participation rates were 51% and 66% respectively.

Table 1. Comparison of the Participation Rate by Level and Year of Study
Comparison 1 pairs the 182 online course evaluations with the most recent paper-based evaluations of the same 182 courses; comparison 2 pairs the 318 online course evaluations with all 3,516 courses evaluated in 2013–2014.

                           Comparison 1                Comparison 2
                       Online      Paper           Online      All courses
                       (182)       (182)           (318)       (3,516)
Total
  Courses                182         182             318         3,516
  Students             5,842       7,370          10,417       174,863
  % respondents (SD)   51% (0.20)  63% (0.16)     51% (0.20)   66% (0.27)
1st Year
  Courses                 35          35              56           312
  Students             1,729       2,309           2,935        55,651
  % respondents (SD)   51% (0.14)  58% (0.16)     48% (0.16)   61% (0.18)
2nd Year
  Courses                 49          49              77           691
  Students             1,576       2,040           3,032        47,921
  % respondents (SD)   49% (0.19)  65% (0.17)     48% (0.18)   65% (0.43)
3rd Year
  Courses                 58          58              98           963
  Students             1,894       2,144           2,981        39,780
  % respondents (SD)   53% (0.18)  62% (0.16)     53% (0.18)   69% (0.18)
4th Year
  Courses                 24          24              55           705
  Students               471         685           1,060        17,872
  % respondents (SD)   50% (0.23)  75% (0.07)     56% (0.22)   69% (0.18)
Graduate Studies
  Courses                 16          16              32           845
  Students               172         192             409        13,639
  % respondents (SD)   72% (0.22)  84% (0.12)     72% (0.19)   80% (0.19)

The comparison between levels of study shows that a similar difference of 12–14 percentage points between the online and paper-based participation rates is present across undergraduate levels and in graduate studies. The participation rate is about 20 percentage points higher for graduate courses than for undergraduate courses, regardless of whether the evaluations were conducted online or on paper. Online participation rates are relatively constant across years of study (first, second, third, or fourth), varying from 48% to 56%, whereas paper-based participation rates fluctuate more by year of study, varying from 58% to 75%. An analysis by faculty shows that participation rates for online evaluations are lower there as well, varying from 43% to 82%, whereas participation rates for paper-based evaluations vary from 53% to 84%.

Comparison of Online and Paper-Based Evaluation Scores

Table 2 outlines the results from both comparison groups based on the level of study (undergraduate or graduate) and year of study (first, second, third, or fourth). The results reveal similar evaluation scores for online evaluations (average of 4.0/5) and for paper-based evaluations (average of 3.9/5). When examining scores based on the level of study, year of study, or faculty, no significant difference is revealed between the scores obtained using the online format and the paper-based format [S = –933.5, p = 0.2014].
Table 2. Comparison of the Course Evaluation Scores by Level and Year of Study
Comparison 1 pairs the 182 online course evaluations with the most recent paper-based evaluations of the same 182 courses; comparison 2 pairs the 318 online course evaluations with all 3,516 courses evaluated in 2013–2014.

                           Comparison 1                Comparison 2
                       Online      Paper           Online      All courses
                       (182)       (182)           (318)       (3,516)
Total
  Courses                182         182             318         3,516
  Students             5,842       7,370          10,417       174,863
  Mean score (SD)      4.0 (0.60)  3.9 (0.57)     4.0 (0.61)   4.0 (0.59)
1st Year
  Courses                 35          35              56           312
  Students             1,729       2,309           2,935        55,651
  Mean score (SD)      3.9 (0.56)  3.9 (0.55)     4.0 (0.56)   4.0 (0.58)
2nd Year
  Courses                 49          49              77           691
  Students             1,576       2,040           3,032        47,921
  Mean score (SD)      3.9 (0.57)  3.9 (0.57)     3.9 (0.59)   4.0 (0.58)
3rd Year
  Courses                 58          58              98           963
  Students             1,894       2,144           2,981        39,780
  Mean score (SD)      4.1 (0.66)  3.9 (0.61)     4.0 (0.64)   4.0 (0.59)
4th Year
  Courses                 24          24              55           705
  Students               471         658           1,060        17,872
  Mean score (SD)      4.2 (0.53)  4.2 (0.45)     4.0 (0.61)   4.1 (0.57)
Graduate Studies
  Courses                 16          16              32           845
  Students               172         192             409        13,639
  Mean score (SD)      4.3 (0.61)  4.4 (0.48)     4.2 (0.62)   4.2 (0.60)

Student Questionnaire Responses Regarding Level of Satisfaction and the Evaluation Process

We learned from students' questionnaire responses that 97% of them found the online evaluation easy to use and that 91% felt comfortable evaluating all of their courses online. As for the written comments, 65% found that online evaluations allowed them to write more detailed comments. Regarding when the questionnaire was completed, almost half of the students (47%) completed the course evaluation within the dedicated in-class period. Almost a quarter (23%) completed the questionnaire online when they received the initial email inviting them to do so, and 8% completed it when they received the reminder email. The rest of the students completed the questionnaire at other times during the two-week course evaluation period. Eighty percent of students used their personal computers to complete the questionnaire, while the rest used a tablet, smartphone, or university computer. Regarding possible improvements to the evaluation system, 60% of students had no recommendations. However, 10% of students asked for improved navigation on smartphones, a percentage corresponding to the proportion of students who used this device to complete the questionnaire. Lastly, 13% of students asked for an improved Wi-Fi connection at the university, and another 13% asked for easier access to the questionnaire through the student portal.

Professor Questionnaire Responses Regarding Level of Satisfaction and the Evaluation Process

Professors' questionnaire responses showed that 80% are open to or in favour of the online evaluation format. Regarding the comments received on their teaching, 37% found them more detailed, 33% found them of better quality, and 29% found them more useful than the comments provided with paper-based evaluations. Depending on the question, between 34% and 50% of professors saw no difference. On the question of maintaining the in-class period to complete the online evaluation, 62% of professors from the fall session were in favour of keeping it, compared to only 15% from the winter session.
This was the only difference found in the professors' responses between the two sessions.

Summary, Discussion, and Recommendations

The goal of this project was to evaluate the potential impact of an online format for the evaluation of courses on student participation rates and to compare online and paper-based course evaluation scores. This project aimed to answer the following questions:
• What is the impact of the online evaluation of courses on participation rates and evaluation scores?
• What is the level of satisfaction of users?
• When did students complete the evaluation, and what type of device did they use?

The project included unique elements such as a large sample of 318 courses, each offered by a different professor; representation across every faculty and nearly all programs; and representation of all the course offerings of the university during the fall 2013 and winter 2014 sessions. The large number of participating students, more than 10,400, representing a quarter of the university's student population, also distinguished it from previous studies. Much of the research analyzed in the literature review had smaller samples and often focused on a limited number of courses, frequently offered in one particular program or faculty. Another unique element of the study was the comparison between 182 courses evaluated online and the paper-based evaluations for the same course taught by the same instructor within the past three years.

The pilot project results revealed significantly lower participation rates for online evaluations than for paper-based evaluations. These results are consistent with the results of other studies on the subject (Adams & Umbach, 2012; Gamliel & Davidovitz, 2005; Hativa, 2013; Nevo et al., 2010; Stowell et al., 2012) that have revealed lower participation rates for the online evaluation of courses. However, the rates observed in this project sit in the upper part of the spectrum reported in those studies, where participation rates vary from 30% to 53% (Nowell et al., 2010). It should be noted that despite lower participation rates, the results revealed similar teaching evaluation scores for online evaluations (average 4.0/5) and paper-based evaluations (average 3.9/5). When examining the scores based on the level of study, year of study, or faculty, no significant differences were found between the scores of online and paper-based course evaluations. These results are also consistent with findings from previous studies indicating that students' judgements in online and paper-based course evaluations are similar, despite a decrease in participation rates (Burton et al., 2012; Gamliel & Davidovitz, 2005; Legg & Wilson, 2012; Nowell et al., 2010; Stowell et al., 2012; University of Saskatchewan, n.d.; Venette et al., 2010).

One of the positive effects of the online evaluation of courses identified in the literature review is the quality and quantity of comments provided by students on teaching (Crews & Curtis, 2011; Legg & Wilson, 2012). Using the online evaluation format, the quantity of student comments increased (Stowell et al., 2012; Venette et al., 2010). Comments were also more detailed, better constructed (Morrison, 2011), more positive, and deemed more useful (Burton et al., 2012; Heath et al., 2007) than comments from paper-based evaluations. In addition, a larger number of students provided comments using the online evaluation.
The results of the pilot project align closely with these listed benefits, as seen in the feedback provided in the survey administered to professors.

Keeping the in-class period to complete the online evaluation did not receive strong support from professors. In the comments provided by professors, we identified two main reasons for this result: first, professors felt that because some students had already completed the questionnaire, they showed up for class late, which was disruptive; and second, some professors felt that the time reserved for completing course evaluations would be better used for teaching and learning. However, we have to keep in mind that the research clearly indicates that the in-class period is a strong incentive for students to complete the questionnaire and that, in the pilot project, close to half of the students (47%) completed the questionnaire during this period. Students saw this time as an incentive because it confirms that course evaluations are important enough to set class time aside for them, and they view having to complete course evaluations outside of class time as an extra chore. Many studies recommend keeping the compulsory in-class period for the online evaluation of courses. This recommendation was included in the report to the University of Ottawa senate.

The encouraging results from this pilot project raise several questions to which we do not have complete answers. For example, many of the universities that have adopted the online evaluation of courses have noted a slight increase in participation rates over the years (Nevo et al., 2010). Would this increase also occur at the University of Ottawa? Moreover, for the purpose of this project, the vast majority of students had only one course to evaluate online; their other courses were evaluated in class using the paper-based method. Would having to evaluate five courses instead of just one have an impact on the participation rate and on the quantity and quality of the comments provided? On this last point, several studies have found that the online evaluation method has a positive impact on student comments (Crews & Curtis, 2011; Legg & Wilson, 2012). These are important questions. We would remind readers that although the participation rate in the current pilot project was lower for online evaluations than for paper-based evaluations, course evaluation scores were similar between evaluation methods, and a positive effect was observed on the comments provided to professors. Recommendations made and passed by the university senate were informed by the findings of this study.
The senate recommended that:
• the evaluation of courses remain voluntary for students and that no incentive related to the allocation of additional marks be put in place;
• the results of course evaluations be made more readily available on the University of Ottawa website;
• a significant marketing campaign for the online evaluation of courses be put in place to encourage students to complete the evaluation forms, to explain and demonstrate the benefits of course evaluations, and to emphasize that course evaluations benefit students as much as the faculty and the institution;
• professors adopt voluntary strategies to promote student participation in the online evaluation of courses, such as using an informal midterm evaluation, sending a personal email to students to make them aware of the importance of course evaluations, assigning the completion of a course evaluation as homework without the provision of extra points, integrating the course evaluation into the course outline, and demonstrating the changes implemented based on the feedback received; and
• the compulsory in-class evaluation period be maintained for two years after the implementation of the online system, with an assessment conducted at the end of this period.

Acknowledgments

We would like to thank the staff at the University of Ottawa's Centre for University Teaching, Office of Institutional Research, and the university registrar, as well as the members of the Senate Committee on Teaching and Teaching Evaluation, for their contributions to the report that inspired this paper.

References

Adams, M. J. D., & Umbach, P. D. (2012). Nonresponse and online student evaluations of teaching: Understanding the influence of salience, fatigue, and academic environments. Research in Higher Education, 53(5), 576–591.

Anderson, J., Brown, G., & Spaeth, S. (2006). Online student evaluations and response rates reconsidered. Innovate, 2(6). Retrieved from http://nsuworks.nova.edu/cgi/viewcontent.cgi?article=1124&context=innovate

Burton, W., Civitano, A., & Steiner-Grossman, P. (2012). Online versus paper evaluations: Differences in both quantitative and qualitative data. Journal of Computing in Higher Education, 24(1), 58–69.

Crews, T. B., & Curtis, D. F. (2011). Online course evaluations: Faculty perspective and strategies for improved response rates. Assessment & Evaluation in Higher Education, 36(7), 865–878.

Davis, B. G. (2009). Tools for teaching (2nd ed.). San Francisco, CA: Jossey-Bass.

Gamliel, E., & Davidovitz, L. (2005). Online versus traditional teaching evaluation: Mode can matter. Assessment & Evaluation in Higher Education, 30(6), 581–592.

Hativa, N. (2013). Answers to faculty concerns about online versus in-class administration of student ratings of instruction. In Student ratings of instruction: A practical approach to designing, operating, and reporting. Lexington, KY: Oron Publications. Retrieved from http://cgi.stanford.edu/~dept-ctl/tomprof/posting.php?ID=1246

Heath, N. M., Lawyer, S. R., & Rasmussen, E. B. (2007). Web-based versus paper-and-pencil course evaluations. Teaching of Psychology, 34(4), 259–261.

Legg, A. M., & Wilson, J. H. (2012). RateMyProfessors.com offers biased evaluations. Assessment & Evaluation in Higher Education, 37(1), 89–97.
Linse, A. R. (2012). Early release of the final course grade for students who have completed the SRI form for that course. Professional and Organizational Development (POD) Network in Higher Education. Posted April 27, 2012.

Morrison, K. (2013). Online and paper evaluation of courses: A literature review and case study. Educational Research and Evaluation, 19(7), 585–604.

Morrison, R. (2011). A comparison of online versus traditional student end-of-course critiques in resident courses. Assessment & Evaluation in Higher Education, 36(6), 627–641.

Nevo, D., McClean, R., & Nevo, S. (2010). Harnessing information technology to improve the process of students' evaluations of teaching: An exploration of students' critical success factors of online evaluations. Journal of Information Systems Education, 21(1), 99–109.

Nowell, C., Lewis, R. G., & Handley, B. (2010). Assessing faculty performance using student evaluations of teaching in an uncontrolled setting. Assessment & Evaluation in Higher Education, 35(4), 463–475.

Pitre-Hayes, C. (2013). Student evaluation of teaching and courses: The teaching and courses evaluation project final report. Burnaby, BC: Simon Fraser University. Retrieved from http://www.sfu.ca/content/dam/sfu/teachingandcourseeval/documents/TCEP%20Final%20Report%201.7.pdf

Stowell, J. R., Addison, W. E., & Smith, J. L. (2012). Comparison of online and classroom-based student evaluations of instruction. Assessment & Evaluation in Higher Education, 37(4), 465–473.

University of Ottawa. (n.d.). Mid-term course evaluations. Centre for University Teaching. Retrieved from http://tlss.uottawa.ca/site/en/resources/805-mid-term-course-evaluations

University of Saskatchewan. (n.d.). Online course evaluations and response rates. Retrieved from http://www.usask.ca/vpteaching/documents/seeq/online_course_evals.pdf

Venette, S., Sellnow, D., & McIntyre, K. (2010). Charting new territory: Assessing the online frontier of student ratings of instruction. Assessment & Evaluation in Higher Education, 35(1), 101–115.

Contact Information

Jovan F. Groen
Centre for University Teaching
University of Ottawa
[email protected]

As the acting director of the Centre for University Teaching, Jovan Groen provides strategic and operational leadership in the design, development, and implementation of a wide variety of teaching and learning support initiatives for University of Ottawa faculty and instructional staff. With a background in curriculum design, Jovan has worked closely with various faculties and departments in different processes of curriculum assessment, development, and review. Stemming from his research interests, Jovan teaches course design and evaluation of learning and serves on various provincial and national committees related to educational development.

Yves Herry obtained his PhD in educational psychology from Laval University. From 1982 to 1987, he was an assistant professor at Laurentian University's École des sciences de l'éducation. He joined the Faculty of Education at the University of Ottawa in 1987; he served as Vice-Dean, Research from 2002 to 2008 and has since served as Associate Vice-President, Teaching and Learning. A distinguished researcher, Yves Herry has published numerous articles, book chapters, books, and research reports. His research in educational psychology has been funded by SSHRC and other government agencies such as the Ontario Ministry of Education.