The Canadian Journal of Higher Education / La revue canadienne d'enseignement supérieur
Volume XXXV, No. 1, 2005, pages 61-84

Reflections of Researchers Involved in the Evaluation of Pedagogical Technological Innovations in a University Setting

CINDY IVES, McGill University
KATHERINE MCWHAW, Dawson College
CHRISTINA DE SIMONE, University of Ottawa

ABSTRACT

It is widely assumed that developments in information and communication technologies are fundamentally transforming and improving higher education. As part of an ongoing evaluation of technology-supported pedagogy in one university, our three-year research project was designed, on the one hand, to determine if and how selected technologies were beneficial for learning and, on the other hand, to offer professional development for faculty members. In this paper, we reflect on our participation in a pedagogy and technology (referred to as PedTech) pilot project, describe some of the relationships that developed between ourselves as researchers and evaluators and our faculty collaborators, and share what we have learned from this experience. We suggest that a scholarship of teaching approach to evaluating innovations in teaching and learning is one way to support institution-wide adoption.

RÉSUMÉ

Developments in information and communication technologies are assumed to be transforming and improving postsecondary education. Through an ongoing evaluation of pedagogical technologies at one university, our research project had two basic objectives. The first was to determine whether the selected technologies improved learning. The second was to offer professional development to the participating professors. In this article, we reflect on our participation in this techno-pedagogical pilot project. We describe the relationships that developed between us, the researcher-evaluators, and the collaborating professors, in order to share lessons learned and successful practices. We believe that an institutional approach to evaluating innovative pedagogical technologies is one way to extend their adoption.

INTRODUCTION

Universities in Canada have the responsibility to meet society's needs by preparing citizens for a rapidly changing, knowledge-based world. It is widely assumed that developments in information and communication technologies are fundamentally transforming and improving higher education (Advisory Committee for Online Learning, 2001). To address the challenges posed by these changes, many universities have implemented pilot projects to test the effectiveness of new technologies for learning. These pilot projects usually involve providing training and support for faculty members' innovations in teaching with technology. As part of an ongoing evaluation of technology-supported pedagogy in classrooms at one university, our research was designed to determine if and how selected technologies were beneficial for learning, and to offer professional development for faculty members. In this article, we reflect on our participation in an institution-wide pedagogy and technology (referred to as PedTech) pilot project, describe some of the relationships that developed between ourselves as researchers and evaluators and our faculty collaborators, and share what we have learned from this experience.
BACKGROUND AND THEORETICAL FRAMEWORK

In 1999, our university received external funding for a three-year university-wide pilot project aimed at supporting the specific classroom-based PedTech projects of individual faculty members. The goal was for faculty to transform teaching and learning in their courses through pedagogically sound applications of various computer communication technologies. It was expected that these innovations would lead to institutional-level adoption of technology. As members of an on-campus research center, we were involved in two aspects of the pilot project: (a) evaluating the effectiveness of teaching and learning strategies using technology, and (b) providing training and support to faculty. Our involvement took the form of designing and implementing the evaluations of specific innovations and helping faculty to learn about evaluation as a tool for ongoing teaching improvement.

In our work with faculty we were influenced by theoretical frameworks in teaching and learning, instructional design, technology implementation, and educational research. For example, with respect to teaching and learning, we drew from constructivist theories, which emphasize strategies that promote cognitive engagement and meaningful involvement with tasks. Active learning—interaction with teachers, classmates, and course materials—allows learners to construct their own meaning (Jonassen, 1999), preferably within authentic learning situations (Brown, Collins, & Duguid, 1989). The implication of this perspective for instruction is that teachers become guides or coaches who provide learners with appropriate scaffolding (Vygotsky, 1978) so that students can maximize their ability to apply their newly acquired knowledge in personally meaningful contexts.

There is a pervasive view in the higher education literature that technology can effectively support constructivist learning environments (e.g., Hannafin & Land, 1997; Harasim, 1999; Hiltz, 1994; Rogers, 2000; Twigg, 2000). Research has recognized the value of approaching instructional design through a number of learning-centered principles that focus on cognitive, metacognitive, motivational, and affective factors, and individual differences (Lambert & McCombs, 1998). Good practice in undergraduate education involves interaction and active learning, is organized to support a range of learning preferences (Chickering & Gamson, 1987), and includes a variety of instructional materials (Brown, 2000). For technology to play an effective role in this process, it must be wisely integrated with pedagogy and institutional infrastructure (Bates & Poole, 2003; Daniel, 1996; Richey, 1997). This means that technologies should be chosen appropriately, based on their features (Laurillard, 2002), and their use should be designed to support specific learning outcomes (Gandell, Weston, Finkelstein, & Winer, 2000; Sharpe & Bailey, 1999).

We approached the PedTech evaluations from an educational research perspective that was grounded in quasi-experimental studies using quantitative methods (Campbell & Stanley, 1963; Gall, Borg, & Gall, 1996). We were also influenced by our colleagues in the research center, who had an established reputation for quantitative meta-analyses (e.g., Azevedo & Bernard, 1995; Lou, Abrami, & d'Apollonia, 2001).
THE PILOT PROJECT

The pilot project was originally envisaged as centrally coordinated, with pedagogical, technical, and research expertise provided separately by three on-campus support centers: (a) teaching and learning services, (b) instructional technology services, and (c) the research center with which we were affiliated. We were expected to design research studies in keeping with our traditions that would be acceptable for publication in peer-reviewed education journals. On the other hand, we were also expected to collaborate with faculty in designing the evaluations of their technology-enhanced learning environments. This implied that we had a mandate for both research and evaluation.

Eighteen months into the pilot project, it became evident that a radical change in focus had occurred. Each Faculty within our university was at a very different stage of development with respect to implementing technology: some had established well-staffed centers to support PedTech development, while others had no experience and no infrastructure. Many individual professors were reluctant to change their teaching practices, which reduced the pool of potential collaborators. The group of willing associates was further limited by the fact that many other professors were uncomfortable with the notion of research in their classrooms. They seemed particularly concerned by the possibility that educational researchers would evaluate their experiments with using technologies for teaching and learning.

In light of profound objections to the centralized model, we designed an alternate approach to studying the innovations funded by the grant. We proposed two types of research activities (referred to as Tier 1 and Tier 2) with which faculty could choose to become involved, and a third institutional-level approach (Tier 3) which we and other members of the research center undertook independently. This proposal was generally accepted by the pilot project's steering committee and formed the structure of the final project.

In Tier 1 activities we concentrated on the evaluation—both formative and summative—of each of the funded PedTech interventions, with a focus on student learning and motivation, as well as on whether or not the technology worked as expected. The extent of each faculty member's involvement in this process depended on their interest in the results and their willingness to collaborate. The aim of this evaluative process was to provide timely, constructive feedback over the course of one semester to improve and extend the interventions (or indeed to change or discontinue their use as the data suggested). Our role in this case was advisory: first to find or design evaluation instruments, and second to clarify for the participating faculty members and discuss with them the implications of the data collected.

Tier 2 research involved a relatively small number of in-depth investigations in areas of instructional concern to associated faculty members. We identified the research questions and formulated the research designs in collaboration with them. Our role here was negotiated individually, but involved working with faculty to design and implement a more traditional research study of a PedTech innovation. Typically the collaboration lasted more than one semester.
In Tier 3 research we explored areas of interest to ourselves, or areas that we felt would make the kind of larger-scale impact on the university that the pilot project was originally intended to have. We initiated investigations that were broader in scope, and that transcended the particular interests of one or a few faculty members. We hoped that the results of Tier 3 initiatives would be directly useful for teaching and learning at a pan-university level. With this in mind, we planned to disseminate our results as broadly as possible.

A sample listing of the studies conducted in the pilot project is available in Table 1. We elaborate below on one study at each level to illustrate the relationships that evolved with faculty members as their studies unfolded.

Tier 1 Example

Tier 1 activities generally evaluated computer-based interventions designed to enhance the acquisition of specific learning outcomes. In this example, an instructor using graphical authoring software (Inspiration®) for presentation purposes was interested in evaluating electronic concept mapping as a learning tool in her subject area. She wanted to know how the technology could be used in her classes to improve learning. To acquire this information, we undertook a full formative evaluation by collecting data prior to, during, and following the use of the tool in the class. Our relationship with this faculty member was most often one of gathering information, then summarizing and synthesizing the data so that we could all understand its implications. She could then apply these new insights in her teaching. In response to her need for instruction in the use of the tool before she could proceed with the implementation, we trained her and the students. When she required technical support we facilitated it.

The third author was one of the researchers who worked closely with this instructor, and her reflections illustrate that the instructor seemed to enjoy brainstorming about conducting evaluations of her teaching with technology and working together on pedagogical solutions.

In retrospect, she seemed to be very comfortable with the larger team and with me. I believed that my major role in working with this faculty member was to assist her in using the tool in her context. I hoped to help construct an evaluation design that would be of minimal interference to her teaching and students' learning, yet suitably sophisticated to yield meaningful and valid data. As well, my role was to fuel her already existing enthusiasm—she was making a significant change in her teaching so that her students could learn the discipline, her discipline, better. More importantly, she was deliberately looking to me for ways to help her students understand content that was very difficult for them, and she knew that her past attempts at trying to guide them had not been as successful as she had hoped.

However, she chose to stay within the parameters of Tier 1 because she did not have release time to spend developing modified materials and testing them. In the end, she was satisfied that the tool helped her students learn better. This time constraint was typical of most faculty members involved in Tier 1 projects.
Tier 2 Example

In Tier 2 research, faculty members embraced an in-depth study that was of key interest to them, and typically we assisted them in designing, developing, investigating, and reporting the results of their research questions. Our relationship with faculty members in Tier 2 projects developed into extended partnerships. Although faculty members generated the initial ideas, we contributed our classroom-based research experience and assisted them with implementation. In these projects, we were invited to share our expertise in educational research and instructional design in support of investigations that were well grounded in the educational research tradition.

One faculty member wished to use technology (digital animation) to enhance critical thinking. The first and third authors met with the faculty member regularly, observing her in the classroom to become acquainted with her teaching styles, as well as with the technology she wanted to use and the questions she wanted to address. We undertook a literature search on critical thinking on her behalf, and met frequently as a research team with the faculty member as an active participant, examining together strategies for teaching critical thinking in her discipline.

Table 1. Sample Listing of PedTech Projects

Tier 1: Evaluation of PedTech interventions
Project: Concept mapping using Inspiration® software as a learning tool (Arts and Science)
Activities: Formative evaluation of the use of the software in a classroom setting; interpretation of results and discussion with the professor
Our roles: Advisor; pedagogical consultant; technical consultant; evaluator
Time frame: Short term (usually one semester)

Tier 2: In-depth investigations
Project: Use of digital animation to enhance critical thinking skills of accounting students (School of Business)
Activities: Formulation of research hypotheses; literature review on critical thinking; design of student assessment using a critical thinking protocol; data collection; data analysis and interpretation; course revisions and follow-up evaluations; reflections of professor and education specialists (publication expected)
Our roles: Pedagogical consultant; technical consultant; partner; evaluator; researcher; collaborator
Time frame: Long term (up to 3 years)

Tier 3: Pan-university investigations
Project: Technology Integration and Students with Disabilities (university-wide investigation into the university experiences with technology of students with disabilities)
Activities: Development of a questionnaire for faculty; development of a questionnaire for students with disabilities; collection of data; analysis and interpretation of results; dissemination of findings to university leaders and faculty and beyond
Project: Student Perceived Effectiveness of Technology Use (university-wide investigation of student perspectives on technology use within classes)
Activities: Development of a student questionnaire; collection of data; analysis of student data; dissemination of findings at conferences
Project: Successful Practices (comprehensive review of the literature on successful classroom-based pedagogical and technical practices in higher education)
Activities: Collection of articles on classroom-based pedagogical and technical practices; reading and coding of all articles in a database; qualitative synthesis of results found in the articles; dissemination of findings to faculty within and outside of the university
Our roles: Researcher
Time frame: Long term (from 1 to 3 years)

We also assisted her with the design of assessments of students' learning. We arranged for a doctoral student with expertise in critical thinking protocols to analyze students' assignments for evidence of critical thinking. We then summarized and helped the instructor interpret the findings so she could reconceptualize her course for the future and modify her instructional strategies where necessary. This partnership has continued through three more academic years, to the point where the professor now feels her course has reached a level of effectiveness that satisfies her. The first author has been working with her throughout the project's life and is currently helping to write a reflective article on the process for publication in a journal devoted to excellence in teaching and learning at the postsecondary level. Her reflections focus on the longer-term impact of the pilot project.

One of the interesting aspects of this project is that the availability of seed funding for technological interventions has inspired a course redesign effort that has lasted several terms. We've seen significant improvements in student satisfaction over time, and, in addition, the professor has learned so much about using varied instructional strategies for improved learning of difficult concepts. Her confidence with using technology appropriately has also been enhanced.

Tier 3 Examples

By exploring the systemic impact of technology implementation, we intended to identify variables influencing institutional transformation. Tier 3 activities included two different institution-wide surveys of classroom-based technology use and a comprehensive and synthetic review of the literature on technology integration. These studies did not depend upon establishing relationships between ourselves and individual faculty members. Rather, Tier 3 research was concerned with examining issues and concerns that affected the broader university community. Tier 3 research had the potential to speak to a large audience, transcending the particular context of specific university classrooms. Conducting it made no demands on individual faculty members. Tier 3 comprised three research studies, described as follows.

1. Technology Integration and Students With Disabilities. The second author was involved in an institution-wide project initiated with the office of services for disabled students. The study examined how the integration of technology within the various Faculties affected students with disabilities, who represent about 10% of the total student population at our university.
As computer-mediated technologies become increasingly incorporated into students' university experience, there is a growing concern (e.g., Burgstahler, 2002; Fichten, Asuncion, Barile, Généreux, Fossey, Judd, Robillard, De Simone, & Wells, 2001; Gibson, 2002) that students with disabilities may have difficulty accessing or using these technologies to their advantage. This team also developed a questionnaire for professors on their use of technology, asking especially whether and how they had adapted the technology to meet the needs of their students with disabilities. The second author described the process and findings as follows.

Working with the Coordinator for Services for Disabled Students, I collaborated on the development of a questionnaire for students with disabilities, asking them about their experience with technology in their classes, labs, the library, and at home. Services for Disabled Students coordinated the preparation, distribution, and collection of the student questionnaires. Together we analyzed and interpreted the findings. We found that while professors did not take students with disabilities into account when implementing technology, they were more than willing to be trained, on a just-in-time basis, to adapt their use of technology in the classroom.

Results were compiled and analyzed for public presentations (Bissonnette, Schmid, & McWhaw, 2002; McWhaw, 2002) on the needs and experiences of the faculty. Our university now has the data to make informed decisions about how to ensure access to technology-based teaching and learning tools for disabled students. Improved institutional policies and procedures, as well as approaches for faculty development in designing or adapting technology-based courses for students with disabilities, will also benefit other university communities with respect to accessibility to technologies that meet particular learning needs.

2. Student Perceived Effectiveness of Technology Use. In a second project, other researchers within our center conducted a university-wide survey of student perspectives on the effectiveness of using computer technologies for learning. Some of the critical questions addressed by the study were (a) what role does technology play in students' perceived effectiveness of a class; (b) what is the relationship between technology use, how that technology is used, and students' perceived effectiveness of the class; and (c) is the use of technology seen by students as an essential component of learning? The variables under investigation included degree of learner control, specific and general benefits to students, previous experience, technical support, access, and demographics. The results suggested a positive relationship between global course evaluations and the learning experiences that students engaged in. Students also indicated that they valued the use of computer technologies for learning (Lowerison, Sclater, Schmid, & Abrami, 2003).

The next year, the same group of researchers conducted a follow-up study that investigated the role computer technology plays in transforming the learning process in higher education. Results suggested a relationship between computer technology, active learning, and perceived course effectiveness. Students who use computer technology extensively appear to benefit the most from active learning (Lowerison et al., 2004).

3. Successful Practices.
In the third Tier 3 study, the first and third authors conducted a comprehensive review of recent (1995-2001) literature on technology integration in higher education. We aimed to identify and understand successful classroom-based pedagogical and technological practices, in order to share them with other faculty members. We planned to extract the lessons learned from other studies as a basis for encouraging faculty members to adapt the ideas to their own contexts, make their own discoveries, and, in turn, share them with colleagues, eventually constructing communities of practice in technology-supported teaching and learning. These learning communities would offer faculty a virtual space, an arena where they could engage in exploration and creative pursuit of technology-supported teaching and learning practices. The key findings of this synthesis are described below.

We found that successful teaching and learning practices using technology in the postsecondary classroom could be categorized into one of three groups: (a) technical practices, (b) support and training practices, and (c) instructional practices. Technical practices (e.g., Mitra, Steffensmeier, Lenzmeier, & Massoni, 1999; Oliva & Pollastrini, 1995) include reliable, universal access to appropriate hardware and software, with regular equipment upgrades. Standardization is highly recommended. Support and training practices (e.g., Gibson & Nocente, 1998; Sprague, Kopfman, & de Levante Dorsey, 1998; Strudler, McKinney, & Jones, 1995; Wager, Heye, & Tsai, 1995) include both pedagogical and technical training offered by competent and available instructional and IT specialists. As well, institutional planning efforts are essential. Instructional practices (e.g., Carlson & Gooden, 1999; Halpin, 1999; Peters, O'Brien, Briscoe, & Korth, 1995; Siegel, Good, & Moore, 1996; Smith, 2000) include constructivist teaching strategies, modeling, group work, practice activities, and the choice of appropriate software.

We found it interesting that many research reports purporting to describe evaluations of the effectiveness of technology-enhanced teaching and learning focused on administrative rather than on teaching and learning issues, and we note here that two of the three categories we identified in the literature were not directly related to teaching activities. Few of the studies reported learning gains related to the use of technology.

The successful practices review raised our awareness of other postsecondary initiatives in technology integration. Because this review was conducted at the same time as the Tier 1 and Tier 2 projects, we found in the literature descriptions of many of the challenges we were facing. The more we read, the more we came to understand that we were not alone in our efforts to help professors use technology effectively.

DISCUSSION

After three years on the various PedTech evaluation and research projects, we felt that our work was of value to the university in encouraging a culture of evaluation among some professors. Although relatively few faculty members identified projects at either level, there were some who viewed the idea of Tier 1 evaluation or Tier 2 research with enthusiasm, considering it an opportunity to investigate their technology-based instructional strategies.
Our participating professors fit into this group. We know that they appreciated the opportunity to explore the impact of their use of technologies on their students. They tended to be confident about their teaching, but were looking for ways to better support their students' learning. They were also comfortable with the process of evaluation.

There was, however, substantial resistance among professors generally to getting involved in either Tier 1 or Tier 2 projects. We speculate now that there were a number of reasons for this resistance, and our reading of the successful practices literature has helped us make sense of it. Some professors may not have understood the potential of evaluation to improve not only teaching but also student learning. They may also have been unclear about our mandate or roles (see Cross, 2000; Gelzheiser & Meyers, 1996). The discomfort may have been exacerbated by the suggestion that researchers from within an educational research center should conduct evaluations in other disciplines. Some professors seemed to be concerned about having their performance as instructors assessed by outsiders. In a culture where teaching quality contributes to decisions on promotion and tenure, this lack of confidence is understandable, but it misses the point that building formative evaluation into teaching innovation is a tool for improvement that can save time and resources in the long run. Other professors seemed to view assessment of their teaching as contrary to their academic freedom (Gray, 1997). According to Wiggins (1998), this uneasiness with evaluation—even formative evaluation—is fostered because evaluation is perceived as something punitive, rather than as a tool for improving teaching and learning. This may be especially true in large-scale projects such as ours with centralized or administrative control.

In some cases, we were not able to carry out our evaluation mandate. It became the responsibility of individual grant recipients to conduct and report their own evaluations. This development highlighted for us the tendency of institutional politics, interdepartmental competitiveness, and a "technology push" attitude to complicate efforts to improve teaching and learning. We also noted the lack of value accorded educational research in the postsecondary environment. As well, we observed the importance of clarifying the responsibilities of the various organizational units up front, at the beginning of an institution-wide pilot project.

The Tier 3 studies were conceived to answer some of the broader questions we were faced with in the pilot project, specifically those related to the organizational weaknesses in planning for technology implementation that are widely reported in the literature and that we experienced as well. Barriers to the adoption of innovations are well documented, as are the factors involved in the resistant behaviours of faculty and students (e.g., Anderson, Varnhagen, & Campbell, 1998; Ertmer, 1999; Fabry & Higgs, 1997; Rogers, 1999; Schuell & Farber, 2001). Through the identification of successful practices we have been able to bring the results of other studies to bear on our understanding of what went on in this project, including the larger questions related to policy and planning for technology integration.
On reflection, and now that the project is over, we realize that we did not explicitly draw on theoretical frameworks in evaluation, faculty development, or technology integration. Though our practice as educational researchers had provided experience with the tools and methods we would use to design studies and to collect and analyze data, we were not completely prepared for the challenges we would face. In retrospect, we feel we were not familiar enough with the politics of evaluation. Levin-Rosalis (2003) points out that research and evaluation are separate disciplines with different purposes, and that though the roles of researchers and evaluators overlap, they are distinct. The purpose of research is to acquire knowledge, while evaluation is a tool for decision makers. In our context, evaluations of PedTech innovations were to help inform the university about where to invest institutional resources for technology in the future. As such, we were searching for evidence about how to implement technology in ways that supported enhanced student learning. To do this we needed faculty members who were willing to examine their practice using at least Tier 1 approaches. Even Tier 2 projects were limited in scope and impact to the individual professors' personal circles of colleagues and influence. We would not be able to generalize our results to the broader university community. Institutional change, an important goal of the PedTech Project, depends on broadly applicable results, rather than on individual accomplishments in a few specific courses. We do not feel, however, that our research mandate was compromised by the reluctance of professors to participate in the pilot project evaluations. The Tier 3 projects all contributed to a broader understanding of technology use in postsecondary teaching and learning, and we have begun to successfully disseminate this new knowledge.

One area that troubled us was that our obligations as evaluators seemed to be in conflict with our faculty development goals. Ramsden (1992) points out that evaluating teaching effectiveness is best done by instructors rather than to them. In Tier 1, although we offered to collaborate with professors, we were frequently perceived as external auditors who would report back to the university leadership on what they were doing "wrong."

The Tier 2 example is for us an excellent illustration, not only of thoughtful use of technology and the ongoing nature of course design, but also of the value of a scholarship of teaching (Boyer, 1990; Cambridge, 1999; Hakim, 2002) and action research (Carson & Sumara, 1997) approach to improving instructional practice over time. We interpret the continuing relationship with this professor as evidence that research can help improve teaching practice and student learning as well. And we appreciate the additional perspective this study provides on faculty development as an ongoing activity (Weimer, 2002).

In terms of technology integration frameworks, we were aware that a major hurdle in institution-wide implementations is that they require simultaneous changes in vision, curriculum, instruction, and assessment practices.
Such changes can be uncomfortable for academics, and they tend to occur at different rates for different groups and individuals (Anderson, Varnhagen, & Campbell, 1998; Ertmer, 1999; Powers, Anderson, & Love, 2000; Rogers, 1999). We were not in a position to affect university policy on, for example, reward and recognition of professors' initiatives with technology. Nor were we able to help our collaborators find the time in their busy schedules to design evaluation instruments or analyze data. Although we were concerned about the potential for an over-emphasis on technology and an under-emphasis on the pedagogical aspects of the pilot project (Cognition and Technology Group at Vanderbilt, 1996), we were not able to address this except with individual faculty members.

REFLECTIONS

There is some evidence in the evaluations we conducted, and in instructor perceptions, that student learning of difficult concepts was enhanced in the Tier 1 and Tier 2 projects. We cannot tell whether or not the technology alone was responsible, but we believe that the process of evaluation had an impact on the teaching, which led to improved student learning. One important question for universities to address is how to evaluate the direct effects of technology on learning as separate from course design and/or teaching approaches. We wonder whether this is even possible or desirable. What methodologies, for example, are appropriate in evaluation? We found that multiple methods of data collection and analysis gave us more confidence in our results than a strictly quasi-experimental approach.

Our experience confirms the value of grants for getting professors involved in PedTech innovations, as they allow for experimentation with new technologies and teaching methods. We see these seed grants as the first step in setting up a system for ongoing teaching improvement in universities. However, the expectation that university-wide technology adoption would follow from a few individual implementations was unrealistic. Technology integration needs more than seed funding to be institutionalized (Ely, 1997). Our question, and the question that comes from the literature review, is how to put systems in place in universities to support widespread technology implementation and pedagogical improvement. Although we have some idea of what professors need to be motivated to adopt unfamiliar learning technologies (e.g., rewards, recognition, support, financing, and more time), we wonder how universities can reallocate their resources to sustain a community of academics who value technology as a tool for improving teaching and learning. Some of the resources that need to be provided include technical expertise and infrastructure, training in technology and pedagogy, and pedagogical consultants to offer ongoing support. Decisions about the nature of this support (e.g., centralized or decentralized structures) need to take into account the particular culture of each institution (see Ives, 2002).

The question that underlies all of the above is how universities can promote and build a culture of evaluation as a tool for improving teaching. Our experience in this project suggests that a scholarship of teaching approach is one way to do this.
This approach is based on sharing individual teaching experiences with colleagues, being open to critiques and suggestions for improvement, and building in a systematic way upon the lessons about learning offered by others. Since we modeled this behaviour in our relationships with our participating faculty members, we recognize the need for system-wide encouragement in order to expand the community beyond the few professors with whom we worked. We think that this is particularly important in the area of technology because determining appropriate choices is such a complex matter. Other people's experiences, both failures and successes, are helpful in this regard.

We also note that pedagogical consultants and faculty developers need special skills to work with large numbers of faculty from many different disciplines. These skills include excellent interpersonal communication and conceptual flexibility, in addition to evaluation, research, and technical skills. An additional, critically important skill is the ability to reflect upon one's experiences and to apply the lessons learned from them in future practice (Schön, 1987).

It has taken us a few years after the completion of the pilot project to be able to reflect upon our experiences. Each of us has moved on to a new role—in faculty development, program evaluation, or teaching and research—in a different institution. Our new environments have provided us with the frameworks (e.g., Cambridge, 1999; Chism, 2004; Levin-Rosalis, 2003; Ramsden, 1992; Saroyan & Amundsen, 2004; Weimer, 2002) we lacked when we first became involved in the PedTech projects. This has given us new language and new understanding of our roles and our contributions. As we face similar challenges now, we can draw on the lessons we learned. We share these experiences in the spirit of ongoing improvement of postsecondary teaching and learning with technology.

References

Advisory Committee for Online Learning. (2001). The e-learning e-volution in colleges and universities: A pan-Canadian challenge. Ottawa: Report prepared for the Council of Ministers of Education, Canada, and Industry Canada.

Anderson, T., Varnhagen, S., & Campbell, K. (1998). Faculty adoption of teaching and learning technologies: Contrasting earlier adopters and mainstream faculty. The Canadian Journal of Higher Education, XXVIII(2-3), 71-98.

Azevedo, R., & Bernard, R.M. (1995). A meta-analysis of the effects of feedback in computer-based instruction. Journal of Educational Computing Research, 13(2), 111-127.

Bates, A.W., & Poole, G. (2003). Effective teaching with technology in higher education: Foundations for success. San Francisco: Jossey-Bass.

Bissonnette, L., Schmid, R.F., & McWhaw, K. (2002, May). Meeting the evolving needs for faculty in providing access to university students with disabilities. Paper presented at the annual meeting of the Canadian Society for the Study of Education, Toronto, ON.

Boyer, E.L. (1990). Scholarship reconsidered: Priorities of the professoriate. Princeton, NJ: Carnegie Foundation for the Advancement of Teaching.

Brown, D.G. (Ed.). (2000). Interactive learning. Bolton, MA: Anker Publishing.

Brown, J.S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning.
Educational Researcher, 18(1), 32-42.

Burgstahler, S. (2002). Bridging the digital divide in postsecondary education: Technology access for youth with disabilities. Information brief. Minneapolis, MN: National Center on Secondary Education and Transition.

Cambridge, B. (1999). The scholarship of teaching and learning: Questions and answers from the field. AAHE Bulletin, December. Retrieved October 11, 2004, from http://aahebulletin.com/public/archive/dec99f2.asp?pf=l

Campbell, D., & Stanley, J. (1963). Experimental and quasi-experimental designs for research. Chicago, IL: Rand-McNally College Publishing Company.

Carlson, R.D., & Gooden, J.S. (1999). Are teacher preparation programs modeling technology use for pre-service teachers? ERS Spectrum, 17(3), 11-15.

Carson, T.R., & Sumara, D.J. (Eds.). (1997). Action research as a living practice. New York: Peter Lang.

Chickering, A.W., & Gamson, Z.F. (1987). The seven principles for good practice in undergraduate education. Washington, DC: American Association for Higher Education.

Chism, N. (2004). Using a framework to engage faculty in instructional technologies. EDUCAUSE Quarterly, 27(2). Retrieved October 18, 2004, from http://www.educause.net/apps/eq/eqm04/eqm0424.asp

Cognition and Technology Group at Vanderbilt. (1996). Looking at technology in context: A framework for understanding technology and education research. In D. Berliner & R. Calfee (Eds.), Handbook of educational psychology. New York: Simon & Schuster/Macmillan.

Cross, K.P. (2000). The educational role of researchers. New Directions in Higher Education, 28(2), 63-74.

Daniel, J.S. (1996). The mega-universities and knowledge media: Technology strategies for higher education. London: Kogan Page.

De Simone, C., Ives, C.A., & McWhaw, K. (2002, April). Researchers and institutional change: Our roles, experiences, and recommendations for conducting university-wide research. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.

Ely, D.P. (1997). Emerging paradigms in diffusion and implementation. In C.R. Dills & A.J. Romiszowski (Eds.), Instructional development paradigms (pp. 155-161). Englewood Cliffs, NJ: Educational Technology Publications.

Ertmer, P.A. (1999). Addressing first- and second-order barriers to change: Strategies for technology integration. Educational Technology Research and Development, 47(4), 47-61.

Fabry, D.L., & Higgs, J.R. (1997). Barriers to the effective use of technology in education: Current status. Journal of Educational Computing Research, 17(4), 385-395.

Fichten, C.S., Asuncion, J., Barile, M., Généreux, C., Fossey, M.E., Judd, D., Robillard, C., De Simone, C., & Wells, D. (2001). Technology integration for students with disabilities: Empirically based recommendations for faculty. Educational Research and Evaluation, 7, 185-221.

Gall, M.D., Borg, W.R., & Gall, J.P. (1996). Educational research: An introduction (6th ed.). White Plains, NY: Longman.

Gandell, T., Weston, C., Finkelstein, A., & Winer, L.R. (2000). Appropriate use of the web in teaching in higher education. In B. Mann (Ed.), Perspectives in web course management (pp. 61-68). Ottawa: Canadian Scholars' Press.

Gelzheiser, L.M., & Meyers, J. (1996). The role of the researcher as change agent. Exceptionality, 6(2), 125-128.

Gibson, C.C. (2002, May).
Comment offered at the closing plenary session, Annual Conference of the Canadian Association for Distance Education, Calgary, AB.

Gibson, S., & Nocente, N. (1998). Addressing instructional technology needs in faculties of education. Alberta Journal of Educational Research, 44(3), 320-331.

Gray, P.J. (1997). Viewing assessment as an innovation: Leadership and the change process. New Directions for Higher Education, No. 100. San Francisco: Jossey-Bass.

Hakim, M.A. (2002). Navigating the web of discourse on the scholarship of teaching and learning: An annotated webliography. C&RL News, 63(1). Retrieved October 11, 2004, from http://www.ala.org/ala/acrl/acrlpubs/crlnews/backissues2002/julyaugust/scholarshipteaching.htm

Halpin, R. (1999). A model of constructivist learning in practice: Computer literacy integrated into elementary mathematics and science teacher education. Journal of Research on Computing in Education, 32(1), 128-138.

Hannafin, M.J., & Land, S.M. (1997). The foundations and assumptions of technology-enhanced student-centered learning environments. Instructional Science, 25, 167-202.

Harasim, L. (1999). A Canadian virtual university: Models for an online national learning network. Ottawa: Industry Canada.

Hiltz, S.R. (1994). The virtual classroom: Learning without limits via computer networks. Norwood, NJ: Ablex Publishing.

Ives, C. (2002). Designing and developing an educational systems design model for technology integration in universities. Unpublished doctoral dissertation, Concordia University, Montreal, QC.

Jonassen, D.H. (1999). Computers as mindtools for schools: Engaging critical thinking (2nd ed.). Upper Saddle River, NJ: Merrill.

Lambert, N.M., & McCombs, B.L. (Eds.). (1998). How students learn: Reforming schools through learner-centered education. Washington, DC: APA.

Laurillard, D. (2002). Rethinking university teaching: A conversational framework for the effective use of learning technologies. London: RoutledgeFalmer.

Levin-Rosalis, M. (2003). Evaluation and research: Differences and similarities. The Canadian Journal of Program Evaluation, 18(2), 1-31.

Lou, Y., Abrami, P.C., & d'Apollonia, S. (2001). Small group and individual learning with technology: A meta-analysis. Review of Educational Research, 71(3), 449-521.

Lowerison, G., Sclater, J., Schmid, R., & Abrami, P. (2003, April). Student perceived effectiveness of computer technology use in higher education. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.

Lowerison, G., Sclater, J., Schmid, R., & Abrami, P. (2004, April). Are we using technology for learning? Paper presented at the annual meeting of the American Educational Research Association, San Diego, CA.

McWhaw, K. (2002, May). L'intégration des technologies à l'Université Concordia: L'expérience des étudiants handicapés et de leurs professeurs. Paper presented at the annual meeting of ACFAS, Quebec, QC.

Mitra, A., Steffensmeier, T., Lenzmeier, S., & Massoni, A. (1999). Changes in attitudes toward computers and use of computers by university faculty. Journal of Research on Computing in Education, 32(1), 189-202.

Oliva, M., & Pollastrini, Y. (1995). Internet resources and second language acquisition: An evaluation of virtual immersion. Foreign Language Annals, 28(4), 551-563.

Peters, J.M., O'Brien, G.E., Briscoe, C., & Korth, W.W. (1995).
A long-term assessment of an integrated microcomputer component for preservice secondary science teachers. Journal of Computers in Mathematics and Science Teaching, 14(4), 499-520.

Powers, W.G., Anderson, S., & Love, D. (2000). Overcoming resistance to instructional technology. Journal of the Association for Communication Administration, 29, 286-292.

Ramsden, P. (1992). Learning to teach in higher education. New York: Routledge.

Richey, R. (1997). Research on instructional development. Educational Technology Research and Development, 45(3), 91-100.

Rogers, D.L. (2000). A paradigm shift: Technology integration for higher education in the new millennium. Educational Technology Review, Spring/Summer(13), 19-33.

Rogers, P.L. (1999). Barriers to adopting emerging technologies in education. Journal of Educational Computing Research, 22(4), 455-472.

Saroyan, A., & Amundsen, C. (Eds.). (2004). Rethinking teaching in higher education: From a course design workshop to a faculty development framework. Sterling, VA: Stylus.

Schön, D.A. (1987). Educating the reflective practitioner. San Francisco: Jossey-Bass.

Schuell, T.J., & Farber, S.L. (2001). Students' perceptions of technology use in college courses. Journal of Educational Computing Research, 24(2), 119-138.

Sharpe, R., & Bailey, P. (1999). Evaluation and design of technologies to meet learning outcomes. Journal of Computer Assisted Learning, 15(3), 179-188.

Siegel, J., Good, K., & Moore, J. (1996). Integrating technology into educating preservice special education teachers. Action in Teacher Education, 17, 53-63.

Smith, S.J. (2000). Graduate student mentors for technology success. Teacher Education and Special Education, 23(2), 167-182.

Sprague, D.M., Kopfman, K., & de Levante Dorsey, S. (1998). Faculty development in the integration of technology in teacher education courses. Journal of Computing in Teacher Education, 14(2), 24-28.

Strudler, N.B., McKinney, M.O., & Jones, W.P. (1995). Integrating technology into teacher education courses: Longitudinal perspectives on overcoming impediments. Journal of Computing in Teacher Education, 11(3), 15-20.

Twigg, C.A. (2000). Institutional readiness criteria. EDUCAUSE Review, March/April, 43-51.

Vygotsky, L.S. (1978). Mind in society (M. Cole, V. John-Steiner, S. Scribner, & E. Souberman, Eds.). Cambridge, MA: Harvard University Press.

Wager, W., Heye, P.F., & Tsai, C. (1995). Technology needs assessment in higher education. College & University Media Review, Fall, 9-24.

Weimer, M. (2002). Learner-centered teaching: Five key changes to practice. San Francisco: Jossey-Bass.

Wiggins, G. (1998). Educative assessment: Designing assessment to inform and to improve performance. San Francisco: Jossey-Bass.