The Canadian Journal of Higher Education, Vol. XXIV-3, 1994
La revue canadienne d'enseignement supérieur, Vol. XXIV-3, 1994

Perspectives on Improving Teaching in Canadian Universities

W. ALAN WRIGHT* & M. CAROL O'NEIL*
* Dalhousie University

Abstract

As a result of increasing concerns about the quality of higher education in Canada, many universities have implemented programs and policies aimed at improving teaching. This study examines the perceptions of those individuals who are primarily responsible for teaching-improvement activities at fifty-one Canadian degree-granting institutions. Respondents indicated the potential of each of thirty-six practices to improve teaching at their respective institutions. The findings reveal a widespread belief that the greatest potential for improvement lies in the provision of incentives to faculty in the form of employment rewards (appointment, tenure, and promotion). The role of department heads, deans, and senior administrators in creating an institutional culture which encourages effective teaching is also seen as an important component of a teaching-improvement strategy. Other areas considered include activities and support structures which provide opportunities for faculty to develop their teaching abilities. Practices which seek to evaluate instruction for the purposes of making personnel decisions were seen as having the least potential to improve teaching.

Résumé

Afin de répondre aux lacunes perçues quant à la qualité de l'éducation postsecondaire au Canada, un nombre grandissant d'universités ont implanté des programmes et des politiques visant à l'amélioration de l'enseignement. La présente étude relève la perception des responsables de la pédagogie universitaire dans cinquante-et-une institutions qui décernent des diplômes universitaires. Les répondants ont estimé le potentiel de chacune des trente-six pratiques ayant pour but d'améliorer la pédagogie dans leurs institutions respectives. Les résultats révèlent la croyance répandue que le potentiel le plus important concernant l'amélioration de l'enseignement réside dans la stimulation du corps professoral sous la forme de compensations professionnelles (nomination, permanence d'emploi et promotion). Le rôle critique des directeurs de département, des doyens et des cadres supérieurs dans la création d'une culture institutionnelle qui favoriserait une pédagogie efficace est également perçu comme faisant partie intégrante d'une politique globale d'amélioration de l'enseignement. L'étude évalue également l'importance des activités et des structures d'encadrement facilitant le développement des compétences pédagogiques des professeurs. Les pratiques ayant pour objectif d'évaluer l'enseignement à des fins de permanence d'emploi et de promotion ont, selon les répondants, le moins d'impact sur la qualité de l'enseignement.

Introduction

In recent years Canadian universities have increasingly focused attention on the educational process, the enhancement of teaching, and the overall improvement of instruction. This trend responds in some measure to the concerns of groups and individuals outside the university about the direction and quality of university teaching and learning.
Calls for greater attention to teaching have also come from within the university: from faculty members who argue that teaching accomplishment is not appropriately promoted, recognized, and rewarded; from those charged with providing a sound education in spite of severe financial constraints; and from students. Such issues are not new to universities in Canada and elsewhere. Sibley (1993) notes that "some academic concerns are hardy perennials" and cites presentations at the 1927 and 1930 National Conferences of Canadian Universities as examples: "The Weakness of English in Large Numbers of Graduates and Undergraduates" and "Is Canadian Education Fulfilling its Purpose?" (p. 115). Since that time, universities have periodically returned to these and other questions of quality as they responded to challenges represented by changes in enrollment and funding patterns and in the needs and expectations of those both within and outside the institution.

More recently, there has been a wide-ranging debate about the nature and quality of university education (particularly undergraduate education) and a number of thoughtful suggestions for ways to improve it (see, for example, Astin, 1985; Bok, 1986; Boyer, 1987, 1990; Ramsden, 1992; Mayhew et al., 1990; Schaefer, 1990). Many of these suggestions centre around the crucial role of the university teacher and the supporting structures which can enhance the quality of instruction and the academic achievements of students.

In order to deal with these and other concerns about the educational mission of universities, in 1990 the Association of Universities and Colleges of Canada (AUCC) established an independent commission of inquiry with a mandate to:

...examine the ability of university education to adapt rapidly to the needs of a Canada that is and will continue to be increasingly dependent on the essential national resource of well-educated citizens (Smith, 1991, p. 3).

The Report of the Commission of Inquiry on Canadian University Education (Smith Commission Report) concluded that "[t]eaching is seriously undervalued at Canadian universities and nothing less than a total re-commitment to it is required" (Smith, 1991, p. 63). Among the Commission's recommendations for actions aimed at improving the quality and status of teaching were increased training in teaching methods for graduate students; the expansion of faculty development opportunities, instructional development offices, and funding for pedagogical innovations; and improved methods of evaluating and rewarding teaching effectiveness (pp. 64-65).

The current widespread enthusiasm for improving university teaching in Canada might lead one to believe that the instructional development movement is new to higher education in this country. In fact, such efforts have long been a part of our tertiary education system. Reporting on his study of instructional development in higher education twenty years ago, Bruce Shore concluded that "Canadian universities and colleges are in the forefront of worldwide efforts by educational institutions to improve the quality of teaching and learning" (Shore, 1974, p. 45). Ten years later, Abram Konrad observed: "For at least a decade now, specific efforts have been made to facilitate the quest for excellence in Canadian higher education" (Konrad, 1983, p. 14). In 1988, two other studies on the state of instructional development in Canada were conducted.
A University of Calgary professor reported on the perspectives of faculty developers regarding a range of teaching improvement activities in order to establish priorities for a proposed instructional development centre (Schulz, 1988). A task force of the Association of Atlantic Universities (AAU) compiled an inventory of instructional development activities in fourteen Atlantic universities, recommending the formation of a permanent regional committee to stimulate and coordinate faculty development in four Canadian provinces (AAU, 1988). Thus, there has been, over a number of years, a sustained search for ways to improve the caliber of educational offerings in Canadian universities through instructional development structures and activity.

As a result of the ongoing pressures to enhance university teaching and learning, many new programs and policies have been implemented or are planned at campuses across the country. In making decisions about what to do to improve teaching on a particular campus, instructional developers, interested faculty, and administrative planners can consult a large and growing body of literature on the subject. But informed decisions should also rely upon the practical knowledge and experience of those in the field. As agents of change, campus individuals responsible for instructional development or faculty development programs play a key role in determining the ultimate success or failure of such initiatives. As well, their experiences can provide decision-makers and others with important insights about the effectiveness of various teaching improvement practices.

This article examines the results of a survey of those responsible for instructional development at Canadian universities regarding the activities which they think have the greatest potential to improve teaching on their campuses. The survey results provide both an opportunity to examine changes in perceptions over time and up-to-date information on promising directions for teaching improvement efforts based on the practical observations of key actors on the campuses of Canadian universities.

Method

Participants

This study was designed to gather and assess the views of instructional development practitioners and other relevant individuals on the potential of a number of practices to improve the quality of teaching on their respective university campuses. For each of fifty-eight degree-granting institutions in Canada, where possible, a single participant was identified by name through directories of teaching improvement personnel, lists of chairs of university teaching committees, and institutional representatives to organizations concerned with teaching in higher education. In those cases where the relevant individual could not be identified, administrative officers (vice-presidents academic, for example) with more general responsibility for teaching activities were requested to forward the survey to the appropriate participant, described as "a director of a faculty development centre, a head of a committee on teaching and learning, or an academic whose specific responsibility is faculty development." A question on the survey asking the respondent to indicate the nature of his/her involvement in teaching improvement activities provided a means to ensure that the response group included only salient campus actors.
Instrument

An earlier version of the survey instrument and the categories which formed the basis for analysis of the data were adapted by a panel of experts in the field of instructional development¹ from a number of similar surveys, which are discussed below. The combined expertise of this group and that of the researchers upon whose work the instrument is based provided the means to ensure the validity of the final instrument, which required only minor modifications to improve linguistic clarity and consistency.

Our instrument is derived from four earlier studies. Centra's (1976) investigation of faculty development practices in the United States provided the foundational apparatus for later adaptations of the survey instrument, which include Konrad's (1983) survey of faculty development practices in Canada and Erickson's (1986) survey of practices at U.S. colleges and universities. These studies asked for information on the existence of approximately 40 to 45 faculty development practices and (with the exception of Erickson) for an estimation by the respondent of the degree of effectiveness of each practice. We included many of these items on our questionnaire, eliminating both practices not related to teaching and items which were redundant. Portions of Cochran's (1989) study of administrative commitment to teaching were also included or adapted in our questionnaire.

In addition, we adopted from these studies the approach of grouping related survey items into categories. The Erickson and Konrad surveys utilized five categories: workshops and seminars; assessment practices; media, technology, and course development; institution-wide practices (grants, leaves, etc.); and miscellaneous practices. Cochran's categories were somewhat better defined: instructional development activities; instructional enhancement efforts; employment policies and practices; strategic administrative actions; and campus environment and culture. In our survey, the categories are further refined in order to provide a framework for analysis which not only groups related practices, but also identifies the appropriate place in the institutional hierarchy for such activities to be initiated and maintained. The nine categories in our analysis are: employment policies and practices; leadership of deans and heads; leadership of the senior administration; structure and organizations; educational events; development opportunities and grants; developmental resources; formative evaluation of instruction; and summative evaluation of instruction.

The questionnaire comprised two sections. The first asked for background information on the specific role of the respondent in teaching improvement activities, the structures designed to enhance teaching (offices, committees, etc.), and the size of the student population (full-time equivalent enrollment). The second section of the survey instrument included a list of thirty-six items (activities, policies, practices) about which the respondent was asked to "rate each item to indicate the confidence you have in its potential to improve the quality of teaching in your university." The respondent then gave a rating for each item based on a numeric scale of 1 ("least confident") to 10 ("most confident"). The responses were compiled and each item was ranked according to its mean rating.

Procedures

Questionnaires were sent to 58 degree-granting institutions in Canada.
Each institution received one questionnaire in either French or English, with the exception of one university with a number of semi-autonomous campuses across the province; in this case, a survey was sent to each of seven campuses. For each institution, the respondent was a person responsible for teaching improvement activities, identified through the process described above. Follow-up letters were sent to non-respondents, and telephone calls were made to those not responding to either the initial request or the follow-up letter. Of the total population of 58 degree-granting institutions in Canada, 51 completed questionnaires were received, a response rate of 87.9%. This rate of response suggests a high degree of reliability of the survey results.

In the data analysis, each of the thirty-six teaching improvement activities was rank-ordered according to its mean rating on the 10-point scale. The items were then grouped into the nine previously defined categories of activities, with four related items in each category. The categories were themselves ranked according to the aggregate mean of the four items. This method of rank-ordering both individual items and categories facilitates both comparison with other research and interpretation of the results.

Analysis of the data reveals important information about the respondents' perceptions of a variety of teaching improvement practices. In interpreting the results, it is important to remember that not all respondents will have had direct experience with all of the practices listed in the survey. For this and other reasons, the results should not be taken as an absolute measure of the relative success of individual practices. Nonetheless, the respondents' practical experience, professional expertise, and influential roles in their institutions and, indeed, in the instructional development movement in Canada make their opinions and insights valuable to those planning ways to improve the quality of university teaching. In addition, recording the perceptions of those responsible for instructional development in Canadian universities in the early 1990s makes it possible to track changes over time.

Results

The survey was directed to one instructional development role player on each of fifty-eight university campuses across Canada. Table I (appendix) provides information about the respondents' roles and the size, language of instruction, and geographic location of their institutions. The high response rate (87.9%) resulted in data which reflect a representative profile of Canadian universities by type, size, location, and principal language of instruction. Readers will note that the relatively small sizes of the response sub-groups discourage us from attempting to advance any generalized observations attributable to differences among institutions or respondents' roles.

Three other tables are included in the appendix. Table II details the mean rating and standard deviation for each teaching improvement practice, from the highest rated to the lowest. Mean ratings range from a high of 8.68 to a low of 4.96 on the ten-point scale. Table III lists the nine categories defined by the researchers, rank-ordered by the aggregate mean of the component items. The results are discussed in more detail below within the framework of these categories. Table IV outlines the various institutional structures related to teaching and teaching improvement at the responding Canadian universities.
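To make the scoring and ranking procedure concrete, the following is a minimal sketch (ours, not part of the original study) of how item and category rankings of this kind can be computed from raw ratings. The item names, category groupings, and ratings shown are illustrative placeholders, not the survey data reported in Tables II and III.

```python
from statistics import mean, stdev

# Illustrative placeholder data: each item maps to the list of 1-10 ratings
# it received (the real survey had 36 items and 51 respondents).
ratings = {
    "Recognition of teaching in career decisions": [9, 8, 10, 9],
    "Hiring requires demonstration of teaching ability": [8, 7, 9, 8],
    "Campus teaching and learning centre": [8, 7, 8, 8],
    "Required annual report on teaching": [6, 5, 6, 5],
}

# Hypothetical category groupings (the study grouped the 36 items into
# nine categories of four related items each).
categories = {
    "Employment policies and practices": [
        "Recognition of teaching in career decisions",
        "Hiring requires demonstration of teaching ability",
    ],
    "Summative evaluation of instruction": [
        "Required annual report on teaching",
    ],
}

# Rank items by mean rating, highest first (as in Table II).
item_means = {item: mean(vals) for item, vals in ratings.items()}
for rank, (item, m) in enumerate(
        sorted(item_means.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(f"{rank}. {item}: mean={m:.2f}, sd={stdev(ratings[item]):.2f}")

# Rank categories by the aggregate (average) mean of their component items
# (as in Table III).
category_means = {
    cat: mean(item_means[i] for i in items) for cat, items in categories.items()
}
for rank, (cat, m) in enumerate(
        sorted(category_means.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(f"Category {rank}: {cat} (aggregate mean {m:.2f})")
```

Applied to the full data set, this is essentially the computation underlying the orderings reported in Tables II and III.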
Employment Policies and Practices

Respondents indicated that the activities with the greatest potential for improving teaching were those within the category "employment policies and practices." "Recognition of teaching effectiveness and its evaluation as a significant and integral aspect of all career decisions (e.g., tenure and promotion)" was the highest-rated item (mean 8.68). Hiring practices which require a demonstration of teaching ability ranked second (mean 7.98). This result points to the importance of a reward system which values effective teaching.

Researchers have long chronicled the widespread view of faculty members that their research achievements are the primary determinants of career advancement and that teaching accomplishments are seldom, if ever, taken into full consideration. "Recognition of effective teaching in the reward system" was one of the "most pressing development needs" found in Konrad's 1983 study. According to Schulz's 1988 survey, one of the most effective strategies to improve teaching was to ensure that "the university merit and promotion committee carefully scrutinizes teaching" (p. 9). The Smith Commission reported that "the reputation and mobility of the professor is far more dependent upon his/her articles and the like than upon the professor's local fame as an inspiring teacher" (1991, p. 36). Typically, faculty feel that they are faced with a choice between research and teaching: "I don't spend time on teaching because there is absolutely no payoff to me" (Ian Gomme, cited in Smith, 1991, p. 39). The Smith Commission Report also referred to Lennard's 1986 survey, which showed that Canadian university professors thought that, in tenure decisions, research productivity weighed much more heavily in the balance than did teaching (cited in Smith, 1991, p. 38).

A broader notion of scholarship would include not only research and publication but also teaching. Advocates of this redefined notion argue that if faculty members are to give greater attention to their pedagogical activities, universities must provide incentives (Boyer, 1987, 1990; O'Neil & Wright, 1993; Seldin & Associates, 1990; Smith, 1991). For most, the first step in improving teaching is to ensure that faculty are rewarded for teaching effectiveness:

What's really being called into question is the reward system and the key issue is this: what activities of the professoriate are most highly prized? After all, it's futile to talk about improving the quality of teaching if, in the end, faculty are not given recognition for the time they spend with students (Boyer, 1990, p. xi).

The Smith Commission Report recommends that universities articulate a definition of scholarship which includes more than research activities leading to refereed publications and that faculty members be given a regular opportunity to decide the primary basis (teaching or research) on which they will be evaluated (Smith, 1991, pp. 63-64). Survey respondents may or may not agree with the specific recommendations of the Smith Report, but it is abundantly clear that they agree that faculty teaching accomplishments must be recognized and rewarded if we hope to improve the quality of teaching and learning at Canadian universities.

The need for hiring practices which require a demonstration of teaching ability was also considered extremely important by survey respondents.
The Smith Commission Report bemoans the fact that in hiring new faculty there is "nothing to guarantee that the Ph.D. recipient has demonstrated skill in teaching" and that "[w]hile new candidates are rarely, if ever, screened for teaching abilities, they generally need not apply unless they have already established a fairly impressive record of research publication" (Smith, 1991, p. 59). According to Graham Skanes, of Memorial University of Newfoundland, Canadian universities are guilty of a "form of institutional madness" as they "hire people to do a job" - to teach - "for . . . which they are largely unprepared" (Skanes, 1988, p. 1).

The two additional survey items regarding employment policies and practices, which describe teacher evaluation policies, received relatively modest support from the respondents. Keeping a teaching dossier or portfolio as the recognized system of recording teaching accomplishments ranked twenty-first (mean 6.71), and regularly reviewing faculty members' teaching effectiveness ranked twenty-third (mean 6.66). Within the context of a comprehensive institutional system of teacher evaluation and instructional development, these two practices may play a more important teaching improvement role than is reflected in the mean scores they achieved in the survey.

Most universities have systems for regular review of faculty performance; sometimes this takes the form of an informal annual report on activities to the department head. Other institutions have very detailed systems for documentation and assessment. Survey respondents had only moderate confidence that performance review would in itself improve the quality of instruction in their institutions - a perception backed up by research evidence (Weimer, 1991). However, research evidence also suggests that reviews undertaken for developmental purposes and coupled with developmental activities for faculty can have a positive impact on teaching effectiveness (Trask, 1989).

A policy which designates the teaching dossier as the accepted method of documenting teaching could have an impact on teaching improvement at a number of levels. Writing the dossier and compiling evidence of effective teaching provides an opportunity for faculty members to reflect upon their teaching experience in a systematic and focused way. In preparing an account of teaching activities, it is necessary to consider teaching goals and values, their relationship to teaching methods, and which evidence (in terms of learning outcomes, for example) best supports claims of teaching effectiveness. This process of reflection and writing can have a positive effect on teaching. Thorough documentation of teaching also provides important evidence in personnel decisions, and could lead to career advancement in an institution which values teaching. In a foreword to an institutional guide to compiling a teaching dossier, Dr. Howard Clark, President of Dalhousie University, wrote:

[The guide] will help us not only to document teaching accomplishment more effectively, but also to encourage improvements in our teaching performance, and make it possible to recognize and reward those who display particular excellence in their teaching abilities (O'Neil & Wright, 1993, p. vii).

Adoption of the teaching dossier approach has a potential impact at the departmental level, as well.
Appointments, tenure, and promotion committees would require criteria for evaluating the materials presented in a teaching dossier. The development of these criteria would encourage widespread discussion about educational values, departmental teaching goals, and appropriate teaching evaluation mechanisms (O'Neil & Wright, 1993, pp. 10-16).

Leadership: Deans & Heads

Survey participants expressed a strong belief that deans and heads can play a key role in improving teaching on campus. Recognition by deans and heads of teaching as an important aspect of academic responsibility ranked fourth (mean 7.60). The willingness of deans and heads to commit funds for classroom research on teaching ranked sixth (mean 7.45), and praise and rewards by deans and department heads for good teaching ranked eighth (mean 7.31). The respondents were less confident that the creation, by the deans and heads, of a "climate of trust" supporting classroom observation would improve teaching (rank 26, mean 6.43).²

Schulz's (1988) findings also underline the important role played by department heads. Tied for a ranking of third of fifteen items were "Department Head rewards/praises good teachers" and "Department Head says teaching is important" (p. 9). Because deans and heads interact with faculty members on a daily basis, their attitudes and actions help to shape departmental and institutional cultures and priorities. We know from a variety of other sources that by expressing and demonstrating a commitment to teaching, deans and heads can encourage the teaching improvement efforts of faculty. An evaluation of the Lilly Teaching Fellow Program at universities in the United States found that interviews with faculty:

...clearly indicate that faculty are supported in their teaching efforts when they receive informal encouragement and positive recognition from their departmental chairs and deans. This kind of incentive is informal, cost free, and requires little time, yet is effective (Rice & Austin, 1990, p. 37).

Rice and Austin (1990) outline a number of specific areas where heads can have a significant impact: in scheduling teaching assignments; in tenure and promotion considerations; in providing information on teaching and the valuing of teaching; and in offering guidance to junior faculty, encouraging them to devote time to enhancing their teaching performance. The authors further argue that without the active support of the head "many incentives to encourage good teaching may be fruitless" (p. 39).

The organization and funding of classroom research is another important area in which deans and, to a lesser extent, heads can have a significant impact on the improvement of teaching. Our respondents ranked this item sixth (mean 7.45), acknowledging the need to provide more opportunity to research issues of education. However, it may be the case that the deans and heads themselves remain unaware or unconvinced of the importance of research on teaching. The Smith Commission concluded:

[I]t is in everyone's interest to foster growth and maturation in the field of educational research . . . [b]ut it is all too clear that research in the whole field of education is not highly regarded in most Canadian universities (Smith, 1991, p. 89).

The survey results clearly indicate that respondents see the role of deans and department heads in efforts to improve teaching as crucial.
This suggests that instructional development strategies should not fail to take into account the influential role of deans and heads.

Leadership: Senior Administrators

Senior administrators can also have an impact on efforts to improve teaching, although their involvement may be more indirect. Senior administrators illustrating the importance of teaching improvement activities by giving them high visibility ranked tenth (mean 7.30), and a related item, administrators' public articulation of the importance of teaching, ranked thirteenth (mean 6.84). Respondents had only moderate confidence in the potential impact of senior administrators emphasizing how research and scholarly activities can support effective teaching and fostering the kind of institutional pride which stimulates effective instruction. These items ranked nineteenth and twenty-fifth, with mean scores of 6.74 and 6.45, respectively.

Even though they are removed from the daily concerns of teaching and related activities at the departmental level, senior administrators nonetheless have a significant role to play in a comprehensive teaching improvement program. Derek Bok argues that while administrators in the late twentieth century may be reluctant to undertake this role in the face of greater power vested in the faculty, they still retain considerable influence over the distribution of resources and can create an atmosphere which supports and encourages educational change.

If anyone is to have a vision for a university and communicate its basic directions and priorities, that person is likely to be a president or some other official with broad academic responsibilities. It is easy to be cynical about the influence such individuals actually wield, especially over the faculty. But most professors, like everyone else, are not totally committed to a fixed intellectual agenda. . . . They too have secret doubts about how much of what they do really matters. It is this residuum of flexibility and uncertainty that gives to presidents and deans the chance to use their persuasive powers to create new priorities and mobilize faculty energies behind them. (Bok, 1986, p. 193)

Educational Events

Workshops on teaching methods for targeted groups such as new faculty and graduate teaching assistants were ranked fifth by respondents, with a mean score of 7.55. Workshops for faculty are often the most visible activities sponsored by a teaching centre. While some observers may believe that workshops have a limited impact on teaching practice, past surveys of faculty developers in Canada rank workshops the most effective teaching improvement practice (Konrad, 1983; Schulz, 1988). At Dalhousie University, the Office of Instructional Development and Technology considers workshops to be a valuable component of the ongoing faculty development program. Attendance at major workshops has consistently surpassed 100 professors, participant response forms draw many favourable comments concerning the usefulness of the material presented, and faculty regularly suggest relevant follow-up activities to explore in greater detail an approach to teaching or an implementation strategy (Dalhousie University, 1992).
Workshops should be well-planned, well-organized, and well-publicized; should cover topics of current interest to a broad range of faculty; and should be delivered by knowledgeable and skilled presenters in an appropriate environment. Such events help revitalize university pedagogy and contribute to the notion that quality teaching is taken seriously on campus. Furthermore, a workshop series on a given theme - such as teaching large classes, inequity in the classroom, teaching critical thinking, cooperative learning, and writing across the curriculum - allows faculty to examine a topic in some depth, helps develop an institutional approach to a problem, and inevitably serves to identify hitherto unknown campus experts in the field. In the case of Dalhousie's "writing across the curriculum" series, the workshops led to the publication of a compendium of writing assignments and techniques featuring original contributions from thirty-six faculty (Herteis & Wright, 1992).

Two other types of educational events scored moderately well in the survey. Conferences on teaching and learning conducted on campus and open to faculty from all disciplines ranked fourteenth (mean 6.82), and seminars on understanding student learning ranked sixteenth (mean 6.78). By contrast, hosting speakers on issues or trends in higher education was ranked only thirty-third of the thirty-six items. Educational events with a practical purpose, and which are interactive and involving, enjoy a high degree of confidence for their potential to improve instruction. Events which feature speakers in the more traditional lecture role are considered less effective.

Structure and Organizations

Ranked third (mean 7.70) in the survey was the presence on campus of a centre for teaching and learning which promotes effective instruction and the development of faculty as teachers. This result indicates the importance of a structured university service with a specific mandate to improve and enhance teaching. Some Canadian universities established teaching centres in the nineteen seventies: McGill University (1970), McMaster University (1972), Concordia University (1974), and the University of Waterloo (1976) are examples. In the francophone milieu, both the Université de Montréal and Université Laval had established a service pédagogique by 1974 (Shore, 1974). Many other institutions have followed suit. Of the 51 universities participating in our 1992 survey, 22 reported having offices dedicated to the quality of teaching, with 16 of these having committees for this purpose as well. Of the remaining 29 without teaching centres, 18 indicated that the faculty development function was carried out solely by a committee.

The range of names given to units responsible for the quality of teaching is broad: the office of instructional development, the centre for the support of teaching, the centre for university teaching and learning, the centre for teaching enhancement, the learning development office, university teaching services, the teaching resource office, the educational development office. The work of these centres, as well as their resources and reporting structures, varies, but not as a function of their names.
When structured approaches to improving university teaching emerged as an area of interest two decades ago, there were attempts to clearly distinguish between terms such as "faculty development" and "instructional development": the former was described as focusing "primarily on the skills and knowledge of the teaching staff," while the latter included "other parts of the educational process . . . such as the quality of instructional materials, the physical plant, or the relative value placed on teaching by the institution" (Shore, 1974, p. 45). Over time, these distinctions have been blurred and the terms are often used interchangeably to describe similar programs and initiatives in centres across the country.

Given that many of the respondents in our study are instructional developers, it is hardly surprising that the existence of a centre is very high on the list of items with potential to improve university teaching. The results underline the acknowledged role of the instructional development centre and point out that centres constitute the preferred solution for organizing instructional development initiatives.

In contrast, the respondents had less confidence in the potential of other structures to improve teaching. For example, when asked their degree of confidence in "a broadly-based faculty committee with a mandate for improving the quality of instruction," our survey participants ranked the item twenty-eighth out of thirty-six items. In practice, "faculty development co-ordinators and committees are frequently found in association" (Lunde & Healy, 1991, p. 15). Without a centre and a recognized director, a faculty development program may nevertheless have a major impact when committee members have "both the responsibility and the authority to carry out the work" (p. 15). But directors of centres tend to be the "passionate champions and dedicated change agents" who ensure the success of a faculty development program (p. 15).

Teaching awards ranked thirty-first (mean 6.00) among the thirty-six practices which have the potential to improve university teaching. In a 1988 survey, teaching awards ranked at the bottom of a list of fifteen teaching improvement practices (Schulz, 1988). Teaching awards at the departmental, institutional, regional, and national levels recognize and draw attention to outstanding teachers and outstanding instruction. In this sense, these honours serve a useful purpose. But they are not seen to have a major impact on the quality of instruction.

Development Opportunities & Grants

The survey included four items in a category called development opportunities and grants. They ranked between seventeenth and twenty-fourth. Two types of faculty development grants are provided to individual professors, sometimes by an instructional development centre or committee, sometimes by other faculty or institutional structures. Funds for faculty members to attend conferences or take courses related to their improvement as teachers ranked eighteenth, with a mean score of 6.76. The Instructional Development Travel Grant Program at Wilfrid Laurier University and the Committee on Educational Development at Trent University are among several structures in Canadian universities which provide such funds (Wright, 1993).
One argument for creating a faculty development fund is that professors competing for limited departmental travel funds are reluctant to give priority to teaching conferences over discipline-based research meetings: a separate fund allows professors to indulge their pedagogical pursuits without sacrifice in their areas of disciplinary specialization.

Grants for faculty members developing new or different approaches to teaching their university courses ranked twenty-second, with a mean score of 6.67. These funds are administered either by teaching improvement centres or faculty committees, by senate committees on instructional development, or by the office of the vice-president, academic. Over twenty Canadian universities responding to a recent survey have established teaching development funds (Wright, 1993). The Teaching/Learning Grants Program at the University of Ottawa offers funds to help faculty to "test teaching strategies," to "develop innovative teaching materials," and to facilitate student learning. The Instructional Development Grants program at the University of Guelph is designed to "encourage the development and enhancement of credit course materials and supporting materials designed to make student learning more effective." The University of Waterloo cites three examples of projects supported by its Instructional Development Grants:

...the development of a training manual for teaching assistants, sponsorship of a conference on methods of teaching foreign languages, and a study of independent learning methods in environmental studies (Wright, 1993).

In addition to providing funds for teaching improvement to individual professors, some Canadian universities have programs designed to encourage pedagogical innovations at the level of departments and faculties. The Teaching and Learning Enhancement Fund at the University of British Columbia stipulates that more than one department or faculty must participate in funded projects, while the Université Laval's Développement Pédagogique plan is intended to:

...inciter les unités (académiques) à poser des gestes pédagogiques spéciaux et différents de ceux inhérents aux activités de gestion pédagogique habituelle [to encourage academic units to undertake special pedagogical initiatives distinct from those inherent in routine program administration] (Wright, 1993).

While grants made to individuals typically range from $250 to $2,000, the departmental and faculty awards can be as high as $50,000 (Wright, 1993).

Another means of improving teaching in this category is temporary workload reduction for the purpose of developing new courses or making major course revisions. This approach, normally under the auspices of the department or faculty, ranked seventeenth with a mean score of 6.76. At York University, the Senate Committee on Teaching and Learning administers Release-Time Teaching Fellowships. "These fellowships are intended," according to the terms of reference of the Fellowships committee, "to provide recipients with the opportunity to develop innovative teaching and learning projects or to enhance their own skills (as opposed to disciplinary competence), when such development or enhancement could not take place in the context of a full teaching load" (Wright, 1993).

Sabbatical leaves for the purpose of improving one's teaching ranked twenty-fourth, with a mean score of 6.60.
A survey of institutional sabbatical leave policies in Atlantic Canada showed that Acadia University explicitly mentions sabbatical projects "directed primarily toward enhancement of teaching" (Brooks, 1993, p. 1). The collective agreements and policies in force at several other universities in the region "make some reference to teaching, either explicitly or implicitly" (p. 11). Valuing teaching and recognizing the scholarship of teaching appear to constitute the prerequisites to making teaching improvement the basis of sabbaticals for university faculty.

Formative Evaluation of Instruction

Instructional development centres and committees are often catalysts for the introduction of formative evaluation techniques. Ideally, they work with departments and faculty structures to this end. The survey instrument contained four items dealing with assessment of teaching performance for formative (improvement) purposes. Rankings for the four items ranged from eighth to twenty-seventh, and "formative evaluation of instruction" ranked only seventh of the nine categories.

Consultation with faculty peers regarding course materials (outlines, readings, methods of evaluating student work, etc.) ranked eighth. This form of assessment is, perhaps, relatively unthreatening as it relies on an intellectual, and rather distant, process of review. The professor has every opportunity to reflect upon, prepare, and revise materials submitted for evaluation. The professor is on the reassuring, solid ground of print. And formative assessment, by definition, leads to suggestions for improvement rather than judgment and ranking. Finally, academic practices have often required faculty to submit course outlines and other materials for departmental records or for program committee review. This practice evolves in familiar academic territory.

Recognizing the teaching improvement potential of formative assessment of course materials, Weimer, Parrett, and Kerns (1988) have published forms to facilitate materials review by peers and students. Although this practice can function on an ad hoc basis, a structured program supervised by the instructional development office may have a greater impact on improving teaching (Fleming, 1993).

If the examination of course materials can be seen as quite unthreatening, the same view is not generally held with regard to the videotaping of classroom practice for the analysis and improvement of instruction. Yet this technique ranked in the top third (twelfth among the thirty-six survey items, with a mean score of 6.90). Videotape analysis is frequently offered by instructional development centres (Wilfrid Laurier University, University of New Brunswick) as a service to faculty. The instructional developer records classes and then reviews the videotape with the professor. The resulting dialogue between the developer and the faculty member can lead to a personalized plan for teaching improvement.

Midterm student ratings of teaching and observation by faculty peers to assist in the improvement of instruction ranked twentieth and twenty-seventh (means 6.72 and 6.41, respectively).
While all of the formative assessment techniques ranked higher than the summative evaluation techniques, it is somewhat surprising that midterm student ratings and peer observation both ranked in the bottom half of strategies having potential to improve instruction. Various forms of midterm student ratings are often reported to lead to measurable improvements in teaching performance, especially when they are administered as part of a comprehensive program involving a diagnostic process and follow-up (Aleamoni, 1978; Arreola & Aleamoni, 1990; Cohen, 1990; Franklin & Theall, 1990; Gil, 1987; McKeachie, 1987; Stevens, 1987). Small group instructional diagnosis (SGID), in which a few class members provide midterm feedback on their professor's teaching, is considered by some authors to be particularly effective. High student satisfaction with the SGID process has also been reported (Abbott et al., 1990; Tiberius & Janzen, 1990). Midterm rating procedures can be instituted informally at the initiative of the professor, they can be coordinated by the department, or they can be conducted or facilitated by the staff of the office of instructional development. In any case, the process is intended as a diagnostic means to teaching enhancement: the data collected are strictly the property of the instructor and in no case should results be disclosed to departmental administrators or other parties.

Why is peer review of teaching materials ranked highly while classroom observation is ranked relatively poorly (twenty-seventh)? Is less insight on teaching ability gained by observing a class than by studying teaching materials for that class? Is class observation considered too invasive a method of assessing teaching performance? Would class observation be favoured if it were conducted in conjunction with a review of teaching materials? Whatever the ranking in this survey, a growing number of Canadian universities have established a system of peer consultation at the heart of which lies the conviction that faculty colleagues can help one another to improve their teaching through class observation and subsequent dialogue. The University of Alberta has conducted a highly successful peer consultation program since 1984. The program is coordinated by the university's faculty development office (Stanford, 1990).

Two of the four formative evaluation items did not achieve particularly high rankings in our survey, despite the growing use of techniques designed to diagnose the quality of teaching. Let us conclude that these approaches all have a place in a comprehensive instructional development strategy, but that some techniques currently enjoy greater confidence among instructional developers for their potential to improve university teaching.

Developmental Resources

Developmental resources made available by instructional development centres and committees include mentoring programs, expert consultation, newsletters, and teaching resource centres. While respondents gave fairly high ratings to support systems based on colleague and professional advice, print resources rated near the bottom of the thirty-six items. These findings are consistent with the results of earlier Canadian surveys cited above (Konrad, 1983; Schulz, 1988).

Mentoring programs, including support systems for new professors, were rated seventh for their potential to improve instruction (mean 7.39).
These programs are based on notions of collegiality, openness, and the sharing of expertise. Centres often play a coordinating role in establishing and maintaining these peer networks. Faculty draw on the richness and diversity of their experience both to revitalize seasoned colleagues and to support and advise newcomers to the academic community. The New Faculty Mentor Program at Wilfrid Laurier University pairs beginning professors with more senior colleagues to facilitate discussion of problems and concerns such as teaching techniques, grading, student ratings, grants programs, and tenure and promotion policies. New faculty report that the informal dialogue is "extremely helpful" and "supportive," allowing them to "land smoothly, avoid mistakes, and realize they are important to the school." What is more, the mentor system is said to effectively address concerns related to "teaching styles," "classroom problems," and "course planning" (Wilfrid Laurier University, 1993a, 1993b).

Another developmental resource offered to faculty by instructional development centres is an expert consultation service which provides assistance in course planning, constructing tests, and the development of specific teaching skills. The availability of an expert teaching consultant ranked eleventh in the survey, with a mean score of 7.12. We conclude that expert advice has an important place in an overall strategy to improve university teaching, but that peer support, perhaps organized by faculty developers, is seen to have even greater potential to enhance instruction.

The practice of making print resources available, usually carried out by an instructional development office or committee, is ranked near the bottom of the list of activities which have a potential to improve teaching. The circulation to all faculty of newsletters and articles that are pertinent to teaching improvement or faculty development ranked thirtieth, with a mean score of 6.10. A readily accessible professional library concerned with such topics as instructional methodology, teaching skills, and the psychology of learning fared even worse: the item ranked second to last, with a mean score of only 5.14. In spite of the relatively poor showing of the provision of print materials to faculty as a means of improving teaching, many faculty developers are likely to maintain that such materials do serve a purpose in the context of a comprehensive instructional development strategy. While the accent should be placed on active, involving programs such as mentoring and consultation, print materials can reinforce efforts, publicize programs, and lend credibility to initiatives in a milieu in which the printed word serves to legitimize thought and action.

Summative Evaluation of Instruction

One of the most interesting results of the survey is that, while respondents expressed the highest degree of support for changes in the reward system, they had little confidence in the teaching improvement potential of activities devoted to the evaluation of teaching for purposes of making personnel decisions such as tenure and promotion (summative evaluation). The four practices in this category ranked at the bottom of the list, between twenty-ninth and thirty-sixth place. The reason may be simply that summative evaluation is primarily concerned with assessing teaching, not improving it.
However, methods of evaluating instruction can have an indirect impact on the quality of instruction by engendering confidence that rewards are distributed on the basis of fair and appropriate assessments of teaching performance. Faculty are unlikely to respond positively to the incentives offered by a teaching reward system in which personnel decisions are based on what are perceived to be faulty data or procedures.

Respondents saw relatively little teaching-improvement potential in the review of course materials as part of university review procedures, ranking this item twenty-ninth with a mean score of 6.20. This practice may nonetheless play an important role in a comprehensive teaching improvement strategy. By instituting review procedures which require a careful examination of educational practices (such as the choice and usage of course materials), universities can make clear the importance of teaching and learning on campus and provide an incentive for faculty and others to achieve a high level of teaching effectiveness.

According to a study done for the Smith Commission, student ratings of instruction are used by 94% of Canadian universities to assess the quality of teaching (Donald & Saroyan, 1991). Yet this practice ranked near the bottom of the thirty-six items on the survey (thirty-second, with a mean score of 6.00). Research shows that student ratings data, gathered using appropriate instruments and procedures, are one important source of valid and reliable evidence of teaching effectiveness. As with other items in this category, the influence of student ratings on the quality of instruction may be indirect. That is, student ratings have an impact on the reward system, which in turn is seen to have a more direct impact on the quality of teaching.

Requiring faculty to prepare an annual report on their teaching accomplishments also ranked near the bottom (thirty-third, with a mean score of 5.71). The assessment of classroom performance by peers and heads for summative purposes was seen as having even less potential to improve teaching, ranking last in the list of thirty-six practices with a mean score of 4.96.

These results illustrate one dilemma facing those working to enhance the teaching role of faculty and to improve the quality of instruction. On the one hand, key instructional development role players strongly support rewards for teaching effectiveness; on the other hand, they have very little confidence in the teaching improvement potential of activities designed to assess the quality of teaching performance. However, reward systems require the use of techniques for determining the quality of instruction so that rewards can be distributed appropriately. Even though summative evaluation practices are perceived by respondents as having little potential to improve instruction, they should nonetheless play an important, facilitative role within a reward system which provides incentives for faculty to achieve high levels of teaching effectiveness.
Faculty developers responding to the Schulz survey of 1988 showed great faith in the teaching improvement potential of careful scrutiny of teaching by merit and promotion committees (ranked first of fifteen items), moderate faith in promotion to full professor based on teaching and service (seventh of fifteen items), and relatively little faith in the practice of termination for poor teaching and department chastisement of poor teaching (thirteenth and fourteenth of fifteen items) (Schulz, 1988). Instructional development professionals want the reward system to recognize the importance of effective teaching, but they have little faith in punitive measures or sanctions within the framework of teaching improvement.

Conclusion

There is a long and rich history of teaching improvement activity in Canadian universities, but there remains much to be done to support the teaching function and to provide opportunities for faculty to develop and to improve their skills as educators. Instructional development practice is still evolving, with many institutions just beginning the systematic implementation of policies and programs designed to acknowledge, reward, develop, and sustain effective teaching in higher education. The results of this survey suggest that the best strategies for improving university teaching and learning recognize the complex interplay of influential structures and attitudes which can promote or inhibit an overriding commitment to educational excellence. Ideally, teaching improvement activities would be planned within the framework of a comprehensive scheme based on a thorough understanding of both the broader field of instructional development and the specific characteristics of individual institutions. The task of devising such a scheme is even more difficult and pressing in a period when financial constraints and rapid economic and social change threaten to erode employee morale and public confidence in the ability of the university to meet the educational needs of individuals and of Canadian society.

The survey points to a widespread belief in the fundamental importance of employment policies and practices in the improvement of teaching and learning. Instructional developers have long held that the quality of university instruction can be improved through the provision of extrinsic rewards for effective teachers. Twenty years ago, Shore (1974) argued that increasing incentives for effective teaching should be at the centre of any approach to instructional development. In 1983, Konrad reported that coordinators of instructional development activities in Canadian universities maintained the belief in the importance of a reward system which recognizes effective teaching. More recently, the Smith Commission Report recommended substantial changes in the reward system in order to encourage and reward teaching effectiveness. Smith (1991) argued that, while faculty members overwhelmingly report that teaching effectively is important to them personally (p. 38), personal satisfaction is not sufficiently motivating, especially in light of the greater rewards given for research accomplishments. The survey respondents echoed the belief that a reward system which recognizes teaching performance has the greatest potential to improve instruction in Canadian universities.
This view suggests that changes in employment policies and practices should be a primary ingredient of any teaching improvement plan. But what would be the outcome of a quality-improvement scheme centred on rewards and performance assessment? What impact do extrinsic rewards such as those recommended by Shore (1974), Konrad (1983), and Smith (1991) have on the quality of instruction? The work of Ramsden (1992) and others cautions against overestimating the impact of an improved reward system as the centrepiece of an instructional development plan. Indeed, there is evidence which suggests that a primary focus on rewards and assessment could not only fail to improve teaching but could have a negative impact on quality. Citing the work of McKeachie (1982), Miller (1988), and others, Ramsden (1992) reports that research does not support the widespread belief that "there is a link between poor teaching and lack of incentives to perform" (p. 253). He argues that extrinsic rewards for individual professors have little positive impact on the quality of instruction and may even produce negative results. Such rewards may reinforce faculty members' perception that teaching is a private activity and that the examination of performance could result in penalties, making them reluctant to share experiences and seek advice.
Ramsden's critique of the rewards-appraisal-accountability approach to improving teaching is a cautionary tale which deserves careful study. It must be weighed, however, against the strong belief of key instructional development officials that a reward system which recognizes teaching accomplishments has the greatest potential to improve instruction in Canadian universities. This belief (shared by many faculty, as evidenced in submissions to the Commission of Inquiry on Canadian University Education) suggests that changes in employment policies and practices are a necessary part of any comprehensive teaching improvement strategy. While not ruling out changes in the reward system, Ramsden suggests shifting the emphasis away from the individual faculty member, concentrating "more on good teaching rather than good teachers" (p. 254). Such an approach focuses on the creation of an environment where the improvement of teaching and learning is seen as a collective responsibility, best achieved by cooperative effort and the exchange of ideas; where innovation can be undertaken without fear that failure will result in penalty; and where supervisors of faculty undertake to create such an environment.
The survey respondents agree that supervisors (deans and department heads) play a crucial role in a comprehensive teaching improvement strategy. These campus leaders are in a unique position to make the improvement of teaching and learning an important part of the collective activities of academic departments and other teaching units. Such a goal will not be achieved by fiat or heavy-handed managerial techniques, but rather by building on the valuable traditions of academic life: democratic decision-making, the sharing of expertise and information, intellectual curiosity, and a devotion to discovery. These two important arenas of teaching improvement activity (employment policies and practices, and the leadership of deans and department heads) are not normally within the purview of teaching centres, raising questions about what role instructional developers can play in these areas.
While not directly involved in the decision-making structures of either sphere, instructional developers can be influential by virtue of their expertise in the field of university teaching and learning. They can, for example, provide advice and training to both administrators and faculty groups about ways of structuring personnel policies so that teaching performance is adequately rewarded. They can educate deans and department heads about the qualities of good teaching and about how to create an environment which supports and encourages effective instruction. They can help build linkages among campus leaders devoted to improving the quality of university education.
Many of the remaining teaching improvement practices in the survey represent the panoply of activities usually organized and promoted by teaching centres. In the absence of a teaching centre, these activities are sometimes arranged by other groups or individuals on campus. In any case, respondents generally saw the greatest teaching improvement potential in those activities (workshops, mentoring, videotaping and analysis, consultation with a teaching expert) which give faculty members the opportunity to learn actively about ways to enhance the quality of their teaching performance. Decisions about such activities should be based on the expressed needs of an institution's faculty.
Summative evaluation of teaching received little support from respondents as a way of improving teaching on campus. It seems clear, however, that summative evaluation should constitute an important component of a comprehensive teaching improvement plan which includes increased rewards for effective teaching. What should be the role of instructional developers in the evaluation of university teachers for summative purposes? Views on this issue differ widely. Many instructional developers feel that they should maintain a strictly hands-off approach to summative evaluation in order to preserve the trust of faculty. Others feel that the expertise of instructional developers should be used extensively to advise personnel committees on matters related to the quality of an individual's teaching performance. Still others maintain that instructional developers should work closely with others to devise appropriate, just, and accurate systems of evaluating teaching performance, while refraining from commenting on individual cases. Continuing discussion among instructional developers is needed to delineate their appropriate professional responsibility in this area.
The survey results provide useful information on the perceptions of key instructional development role players at Canadian universities. Further investigation is needed to determine how their beliefs about teaching improvement practices compare to those of other relevant campus groups such as faculty, academic administrators, and students. It would also be useful to compare responses by size of institution, geographical region, language of instruction, and role of respondent. The authors are extending the survey to other countries, enabling an international comparison of the attitudes of instructional developers. Overall, the findings indicate that a successful comprehensive teaching improvement strategy should aim to have an impact on the educational environment of the entire institution.
To accomplish this goal, the notion of instructional development as training in teaching technique and individual faculty development must be expanded to include partnerships with a wide variety of relevant campus groups and structures to devise programs and policies which reflect an overriding commitment to the pursuit of excellence in teaching and learning. Long-lasting and substantive improvement in university teaching and learning will occur through fundamental institutional change.
Notes
1. The panel included Roger Barnsley of St. Thomas University, Graham Skanes of Memorial University of Newfoundland, and Alan Wright of Dalhousie University. The questionnaire was first used in June 1991 in the context of an instructional development seminar for senior university administrators in the Association of Atlantic Universities (AAU).
2. Some ambiguity in the terms used to describe this item (the lack of a clearly defined purpose for the "classroom observation") may account for its low ranking. It is possible that respondents were concerned about the element of administrative control or supervisory powers suggested by the description of the activity in the questionnaire.
Appendix
Table 1  Respondent profile
Language of Instruction (N): English 41; French 10
Region (N): Atlantic 15; Quebec 11; Ontario 13; West 12
Size (N): <1,000: 4; 1,000-2,000: 7; 2,501-5,000: 8; 5,001-10,000: 9; 10,001-20,000: 10; >20,000: 12
Respondent's Role* (N): FT Dir 8; PT Dir 13; FT faculty/Chair 9; Non-faculty 5; Other** 15
* FT Dir: "full-time director of instructional development office"; PT Dir: "part-time director of instructional development office"; FT faculty/Chair: "full-time faculty member and Chair of faculty development committee or committee on teaching and learning"; Non-faculty: "person responsible for faculty development among other responsibilities"
** Other: Respondents gave the following descriptions: Vice-President, Associate Vice-President, Vice-Recteur (n=4); Dean (n=4); Directeur de l'Enseignement & de la Recherche ou d'Affaires Professorales (n=2); Non-teaching staff professional (n=2); Chair of Faculty (n=1); Did not specify (n=2)
Table 2  Items by rank
"Rate each item to indicate the confidence you have in its potential to improve the quality of teaching in your university." Scale: 1 = least confident; 10 = most confident
Rank  Practice  Mean  S.D.**
1.  Recognition of teaching in tenure & promotion decisions  8.68  1.64
2.  Hiring practices require demonstration of teaching ability  7.98  1.64
3.  Centre to promote effective instruction  7.70  1.52
4.  Deans/Heads foster importance of teaching responsibilities  7.60  1.94
5.  Workshops on teaching methods for targeted groups  7.55  1.64
6.  Deans/Heads provide funds/opportunity to improve class instruction  7.45  1.79
7.  Mentoring programs & support for new professors  7.39  1.46
8.*  Deans/Heads praise & reward good teaching  7.31  2.03
8.*  Consultation on course materials with faculty peers (formative)  7.31  1.63
10.  Senior admin. gives visibility to teaching improvement activities  7.30  1.84
11.  Availability of expert teaching consultant  7.12  1.85
12.  Videotaping classroom teaching for analysis & improvement  6.90  2.04
13.  Importance of teaching made public by senior administrators  6.84  2.58
14.*  Conference on teaching and learning held on campus  6.82  1.73
14.*  Faculty review of academic program to improve instruction  6.82  1.89
16.  Seminars on understanding student learning  6.78  1.65
17.  Temporary work load reduction for course improvement/revision  6.77  1.95
18.  Funds for faculty to attend conference/course on teaching  6.76  1.88
19.  Senior admin. emphasizes how research supports teaching  6.74  2.14
20.  Mid-term student feedback to instructor (formative)  6.73  2.01
21.  Teaching dossier recognized record of teaching accomplishments  6.71  2.10
22.  Grants to faculty to devise new approaches to teaching  6.67  1.85
23.  Regular (non t&p) review of faculty teaching effectiveness  6.66  2.02
24.  Sabbatical leaves for improving teaching  6.60  2.17
25.  Institutional pride (by senior admin.) stimulates effective instruction  6.45  2.13
26.  Deans/Heads promote climate of trust for classroom observation  6.43  2.35
27.  Classroom observation by peers for improvement purposes  6.41  1.99
28.  Faculty committee with mandate for improving instruction  6.36  1.77
29.  Course materials reviewed in university review process (summative)  6.20  1.97
30.  Circulation of articles & newsletter on teaching  6.10  1.65
31.  Teaching recognition programs (e.g. awards)  6.00  2.18
32.  End-of-term student feedback for summative purposes  5.73  2.52
33.*  Speakers on issues in higher education  5.71  1.83
33.*  Annual report on teaching accomplishments (summative)  5.71  2.23
35.  Readily accessible professional library  5.14  1.99
36.  Classroom observation by peers/heads for summative purposes  4.96  2.18
* Denotes tie
** S.D.: Standard Deviation
Table 3  Categories by rank
1. "Employment Policies & Practices" (Mean 29.73, S.D. 5.68)
   Recognition of teaching in tenure & promotion decisions (1)
   Hiring practices require demonstration of teaching ability (2)
   Teaching dossier recognized record of teaching accomplishments (21)
   Regular (non t&p) review of faculty teaching effectiveness (23)
2. "Leadership: Deans & Heads" (Mean 28.39, S.D. 6.45)
   Deans/Heads foster importance of teaching responsibilities (4)
   Deans/Heads provide funds/opportunity for improving classroom instruction (6)
   Deans/Heads praise & reward good teaching (8)*
   Deans/Heads promote climate of trust for classroom observation (26)
3. "Leadership: Senior Administrators" (Mean 27.06, S.D. 7.36)
   Senior admin. gives visibility to teaching improvement activities (10)
   Importance of teaching made public by senior administrators (13)
   Senior admin. emphasizes how research supports teaching (19)
   Institutional pride fostered by senior admin. stimulates effective instruction (25)
4. "Educational Events" (Mean 26.73, S.D. 5.71)
   Workshops on teaching methods for targeted groups (5)
   Conference on teaching and learning held on campus (14)*
   Seminars on understanding student learning (16)
   Speakers on issues in higher education (33)*
5. "Structure & Organizations" (Mean 26.61, S.D. 5.52)
   Centre to promote effective instruction (3)
   Faculty review of academic program to improve instruction (14)*
   Faculty committee with mandate for improving instruction (28)
   Teaching recognition programs (e.g. awards) (31)
6. "Formative Evaluation of Instruction" (Mean 26.53, S.D. 5.86)
   Consultation on course materials with faculty peers (formative) (8)*
   Videotaping classroom teaching for analysis & improvement (12)
   Mid-term student feedback to instructor (formative) (20)
   Classroom observation by peers for improvement purposes (27)
7. "Development Opportunities & Grants" (Mean 26.49, S.D. 5.56)
   Temporary work load reduction for course improvement/revision (17)
   Funds for faculty to attend conference/course on teaching (18)
   Grants to faculty to devise new approaches to teaching (22)
   Sabbatical leaves for improving teaching (24)
8. "Developmental Resources" (Mean 25.75, S.D. 5.04)
   Mentoring programs & support for new professors (7)
   Availability of expert teaching consultant (11)
   Circulation of articles & newsletters on teaching (30)
   Readily accessible professional library (35)
9. "Summative Evaluation of Instruction" (Mean 22.47, S.D. 6.53)
   Course materials reviewed in university review process (summative) (29)
   End-of-term student feedback for summative purposes (32)
   Annual report on teaching accomplishments (summative) (33)*
   Classroom observation by peers/heads for summative purposes (36)
Numbers in parentheses are item ranks from Table 2.
* Denotes tie
Table 4  Institutional structures devoted to teaching at Canadian universities
Of 51 universities responding:
Structure  n*
A centre or office devoted primarily to the improvement of teaching  22
A standing faculty committee on teaching  22
An ad hoc faculty committee on teaching  17
Other**  9
* Some institutions have more than one of these structures.
** Includes planning bodies for a teaching centre or standing committee, pedagogical resource centres, advisory panels, teaching award committees, and structures related to curriculum development and student needs.
References
Abbott, R. D., Wulff, D. H., Nyquist, J. D., Ropp, V. A., & Hess, C. W. (1990). Satisfaction with processes of collecting student opinions about instruction: The student perception. Journal of Educational Psychology, 82(2), 201-206.
Aleamoni, L. M. (1978). The usefulness of student evaluations in improving college teaching. Instructional Science, 7, 95-105.
Aleamoni, L. M. (Ed.). (1987). Techniques for evaluating and improving instruction. New Directions for Teaching and Learning, 31. San Francisco, CA: Jossey-Bass Publishers.
Arreola, R. A., & Aleamoni, L. M. (1990). Practical decisions in developing and operating a faculty evaluation system. In M. Theall & J. Franklin (Eds.), Student ratings of instruction: Issues for improving practice (pp. 37-55). New Directions for Teaching and Learning, 43. San Francisco, CA: Jossey-Bass Publishers.
Association of Atlantic Universities. (1988). Final report: AAU Sub-committee on faculty development. Halifax, NS: Author.
Astin, A. W. (1985). Achieving educational excellence: A critical assessment of priorities and practices in higher education. San Francisco, CA: Jossey-Bass Publishers.
Bok, D. (1986). Higher education. Cambridge, MA: Harvard University Press.
Boyer, E. L. (1987). College: The undergraduate experience in America. New York: Harper & Row.
Boyer, E. L. (1990). Scholarship reconsidered: Priorities of the professoriate. Princeton, NJ: The Carnegie Foundation for the Advancement of Teaching.
Brooks, G. (1993). Sabbatical policies in Atlantic universities: Will leave be granted for teaching related projects? Report to the Association of Atlantic Universities Coordinating Committee on Faculty Development. Halifax, NS: Association of Atlantic Universities.
Cochran, L. H. (1989). Administrative commitment to teaching. Cape Girardeau, MO: Step Up.
Cohen, P. A. (1990). Bringing research into practice. In M. Theall & J. Franklin (Eds.), Student ratings of instruction: Issues for improving practice (pp. 123-132). New Directions for Teaching and Learning, 43. San Francisco, CA: Jossey-Bass Publishers.
Dalhousie University. (1992). Report of activities: 1991-92. Halifax, NS: Dalhousie University, Office of Instructional Development and Technology.
Diamond, R. M. (1988). Faculty development, instructional development, and organizational development: Options and choices. In E. C. Wadsworth, L. Hilsen, & M. A. Shea (Eds.), A handbook for new practitioners (pp. 9-12). Professional and Organizational Development Network in Higher Education (POD Network). Stillwater, OK: New Forums Press.
Donald, J., & Saroyan, A. (1991). Assessing the quality of teaching in Canadian universities. Report to the Commission of Inquiry on Canadian University Education. Ottawa, ON: Association of Universities and Colleges of Canada.
Erickson, G. (1986). A survey of faculty development practices. To Improve the Academy, 182-197.
Fleming, N. D. (1993). Faculty developers: Are they giving away the x-rays? Journal of Staff, Program, & Organization Development, 11(1), 5-9.
Franklin, J., & Theall, M. (1990). Communicating student ratings to decision makers: Design for good practice. In M. Theall & J. Franklin (Eds.), Student ratings of instruction: Issues for improving practice (pp. 75-95). New Directions for Teaching and Learning, 43. San Francisco, CA: Jossey-Bass Publishers.
Gil, D. H. (1987). Instructional evaluation as a feedback process. In L. M. Aleamoni (Ed.), Techniques for evaluating and improving instruction (pp. 57-64). New Directions for Teaching and Learning, 31. San Francisco, CA: Jossey-Bass Publishers.
Herteis, E. M., & Wright, W. A. (Eds.). (1992). Learning through writing: A compendium of assignments and techniques. Halifax, NS: Dalhousie University, Office of Instructional Development and Technology.
Konrad, A. G. (1983). Faculty development practices in Canadian universities. The Canadian Journal of Higher Education, XIII(2), 13-24.
Lunde, J. P., & Healey, M. M. (1991). Doing faculty development by committee. POD Network. Stillwater, OK: New Forums Press.
Mayhew, L. B., Ford, P. J., & Hubbard, D. L. (1990). The quest for quality: The challenge for undergraduate education in the 1990's. San Francisco, CA: Jossey-Bass Publishers.
McKeachie, W. J. (1982). The rewards of teaching. In J. Bess (Ed.), Motivating professors to teach effectively. New Directions for Teaching and Learning. San Francisco, CA: Jossey-Bass Publishers.
McKeachie, W. J. (1987). Can evaluating instruction improve teaching? In L. M. Aleamoni (Ed.), Techniques for evaluating and improving instruction (pp. 3-8). New Directions for Teaching and Learning, 31. San Francisco, CA: Jossey-Bass Publishers.
Miller, R. I. (1988). Merit pay in United States post-secondary institutions. Higher Education, 17, 219-232.
O'Neil, M. C., & Wright, W. A. (1993). Recording teaching accomplishment: A Dalhousie guide to the teaching dossier (4th ed.). Halifax, NS: Dalhousie University, Office of Instructional Development and Technology.
Ramsden, P. (1992). Learning to teach in higher education. New York, NY: Routledge.
Rice, R. E., & Austin, A. E. (1990). Organizational impacts on faculty morale and motivation to teach. In P. Seldin & Associates, How administrators can improve teaching: Moving from talk to action in higher education. San Francisco, CA: Jossey-Bass Publishers.
Schaefer, W. D. (1990). Education without compromise: From chaos to coherence in higher education. San Francisco, CA: Jossey-Bass Publishers.
Schulz, R. A. (1988, June). Possible successful strategies for teaching development offices (TDOs). Unpublished paper based on a presentation at the Eighth Annual Conference on Teaching and Learning in Higher Education, University of Calgary, Calgary, AB.
Seldin, P., & Associates. (1990). How administrators can improve teaching: Moving from talk to action in higher education. San Francisco, CA: Jossey-Bass Publishers.
Shore, B. M. (1974). Instructional development in Canadian higher education. The Canadian Journal of Higher Education, 4, 45-53.
Sibley, W. M. (1993). The university in the 1990's: Crisis or predicament? The Canadian Journal of Higher Education, XXIII(1).
Skanes, G. R. (1988, October). Madness. Unpublished paper based on a presentation at the meeting of the Association of Universities and Colleges of Canada, Winnipeg, MB.
Smith, S. (1991). The report of the Commission of Inquiry on Canadian University Education. Ottawa, ON: Association of Universities and Colleges of Canada.
Sorcinelli, M. D. (1988). Encouraging excellence: Long-range planning for faculty development. In E. C. Wadsworth (Ed.), A handbook for new practitioners (pp. 27-34). POD Network. Stillwater, OK: New Forums Press.
Stanford, L. (1990, April). Peer consultation: A collegial approach to teaching enhancement. Unpublished materials from a workshop presented at Dalhousie University, Halifax, NS.
Stevens, J. J. (1987). Using student ratings to improve instruction. In L. M. Aleamoni (Ed.), Techniques for evaluating and improving instruction (pp. 33-38). New Directions for Teaching and Learning, 31. San Francisco, CA: Jossey-Bass Publishers.
Theall, M., & Franklin, J. (1990). Student ratings in the context of complex evaluation systems. In M. Theall & J. Franklin (Eds.), Student ratings of instruction: Issues for improving practice (pp. 17-34). New Directions for Teaching and Learning, 43. San Francisco, CA: Jossey-Bass Publishers.
Theall, M., & Franklin, J. (Eds.). (1990). Student ratings of instruction: Issues for improving practice. New Directions for Teaching and Learning, 43. San Francisco, CA: Jossey-Bass Publishers.
Tiberius, R. G., & Janzen, K. (1990, November). Faculty helping faculty: A peer consulting procedure based on small group interaction. Unpublished paper based on a presentation to the Fifteenth Annual Conference of the Professional and Organizational Development Network in Higher Education, Lake Tahoe, CA.
Trask, K. A. (1989). The chairperson and teaching. In A. F. Lucas (Ed.), The department chairperson's role in enhancing college teaching (pp. 99-107). New Directions for Teaching and Learning, 37. San Francisco, CA: Jossey-Bass Publishers.
Wadsworth, E. C., et al. (Eds.). (1988). A handbook for new practitioners. Professional development in higher education. Stillwater, OK: New Forums Press.
Weimer, M. (1991). Improving college teaching: Strategies for developing instructional effectiveness. San Francisco, CA: Jossey-Bass Publishers.
Weimer, M., Parrett, J. L., & Kerns, M.-M. (1988). How am I teaching? Forms and activities for acquiring instructional input. Madison, WI: Magna Publications.
Wilfrid Laurier University. (1993a). Guidelines for mentors. Waterloo, ON: Wilfrid Laurier University, Office of Instructional Development.
Wilfrid Laurier University. (1993b). WLU new faculty mentor program. Waterloo, ON: Wilfrid Laurier University, Office of Instructional Development.
Wright, W. A., & O'Neil, M. C. (1992). Improving summative student ratings of instruction practices. Journal of Staff, Program, & Organization Development, 10(2), 75-85.
Wright, W. A. (1993). Teaching development funds in universities in Canada. Unpublished raw data. Halifax, NS: Dalhousie University, Office of Instructional Development and Technology.