The Canadian Journal of Higher Education, Vol. IX-1, 1979

The McGill Faculty and Course Evaluation System
Le corps enseignant de l'université McGill et le processus d'évaluation des cours

P.A. CRANTON*
*Centre for Learning and Development, McGill University

The McGill Faculty and Course Evaluation System has been developed over the past year and a half in response to specific needs expressed by both administrators and faculty at McGill University. The Centre for Learning and Development at McGill receives a number of distinct but closely related requests for assistance: with evaluation for the improvement of teaching skills; with course or curriculum development; with program evaluations or the evaluation of innovative educational projects; and with the design of evaluation procedures for promotion and tenure decision-making. At the same time, many resources are available for conducting evaluations, including evaluation "models" or plans and a variety of questionnaires and handbooks (cf. Berquist & Phillips, 1975). These resources have tended to be unavailable or unknown to the average faculty member, or too abstract or theoretical for the non-specialist in educational evaluation to use. The development of the McGill System has the general goal of utilizing the best of the already available materials ("translating" where necessary), developing materials where none exist, and presenting them in the context of a systematic and practical "system" or procedure to the faculty member who is interested in evaluating any aspect of the educational process for any purpose.

The system has been influenced by the views of evaluation held by both the Centre for Learning and Development and the pilot users; there is now what one might term a "philosophy of evaluation," or a set of principles which defines the general nature of the system. Evaluation is viewed as an on-going, interactive and developmental process which requires a commitment of time and effort from the faculty member. It is never a quick, one-judgement procedure; improvement of the educational process is always the long-term goal, even when the evaluation is done for the purpose of making an administrative decision. The responsibility for conducting the evaluation, making the decisions and disseminating the results lies solely with the professor(s) or administrator(s) using the system. The system itself is merely a tool, a resource, or a vehicle for this process; it contains no standards and does not dictate or prescribe. Rather, it purports to "teach" the professor how to do the evaluation.

Background

As mentioned above, many articles and books have been written on evaluation in recent years. The move toward university accountability began in the 1960s with the beginnings of budget shrinkage, dropping enrollment, unemployment among graduates and student unrest. With this move came course evaluations (questionnaires, the results of which were published in student guides to course selection) and then university-funded centres for the improvement of instruction. Evaluation became an area of interest for many educators. The jump from research design to evaluation was difficult. Scriven (1967) defined formative and summative evaluation, the former being on-going and developmental, the latter being judgmental. Many "models" were proposed in the education literature (cf. Stake, 1967).
Gradually the work became directed at the user rather than at other evaluation writers, at the public school level (cf. Luft, Lujan & Bemis, 1977) and in higher education (cf. Pierce & Schroeder, 1974). "Faculty development" and "teaching improvement" became the latest trend in higher education. Handbooks were written and "clinics" set up to guide a professor through a relatively structured and prescriptive improvement process. Faculty members themselves, however, rarely learned how to conduct their own evaluations. Recently the emphasis has moved away from individual professors' development and course questionnaires to program and project evaluation, as the university administration becomes more aware of the need for accountability. The present system was developed to meet both the administrative needs of the university and the needs of the individual faculty member.

Format of the McGill System

The system has two major "branches" or routes, one for administrative decision-making and one for the improvement of teaching and/or courses. Program and project evaluation are included in the administrative branch since the process tends to be similar (e.g., involving groups of individuals and decisions about budget, resources, etc.). Each "branch" has a separate User Guide, but utilizes many of the same resources. There are four components to the system, two "physical" and two "personnel."

(1) The User's Guides are a practical, "how-to" series of steps for the evaluation process, beginning with stating the purpose of the evaluation and ending with the interpretation of the results and the implementation of decisions. The guides are rather like self-instructional modules; that is, they are interactive, relying on the user performing certain tasks, utilizing resources, reading articles, and contacting consultants. On one level they are a guide to the diverse information that is available on evaluation, but at the same time detailed information is provided on the necessary steps in the evaluation process.

(2) The Evaluation Resources are a carefully selected portion of the available material on evaluation, chosen on the basis of readability and practicality. Resources are of three types: articles or books containing information on some aspect of the evaluation process; techniques or procedures actually used in the process (e.g., questionnaires, checklists, simulations); and services also offered through other channels (e.g., computer services, a teaching improvement clinic). At each step of the evaluation process the available resources are described in the User Guide and are referred to when (and if) the user needs them.

(3) Workshops are currently used to introduce users to the concepts involved in the evaluation process and to the use of the system.

(4) Consultants are available for the user who is faced with a unique evaluation problem, or who encounters any difficulties in the evaluation process.

The Evaluation Process

Whatever type of evaluation is being done, it is suggested that seven basic steps be included in the process. The time spent at each step and the extent to which the resources are utilized will vary considerably depending on the purpose of the evaluation; hence the separation into two paths. Within each path, much "branching" is possible and is even encouraged: individuals in different disciplines, at different levels, and/or with different goals will be doing different types of evaluation within this general framework.
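Before the steps are described in detail, a schematic view of the structure may help. The following sketch (in Python, and entirely hypothetical: the system itself consists of printed guides, resource files, workshops and consultants, not software) simply records the two branches and seven steps as data, for illustration only.

```python
# Hypothetical sketch only: the McGill system is a set of printed User
# Guides and resource files, not a program. This merely records its
# two-branch, seven-step structure as data for illustration.

BRANCHES = ("administrative decision-making", "teaching/course improvement")

STEPS = [
    "Describe the purpose of the evaluation",
    "Specify what will be evaluated (roles, functions, objectives)",
    "Select sources of information",
    "Determine or set standards or criteria for judgements",
    "Decide on actions to follow the evaluation",
    "Select or develop information gathering techniques",
    "Gather, summarize, and interpret information",
]

def new_plan(branch: str, purpose: str) -> dict:
    """Create an empty evaluation plan: one entry per step, to be
    filled in as the user works through the User Guide."""
    if branch not in BRANCHES:
        raise ValueError(f"unknown branch: {branch!r}")
    return {
        "branch": branch,
        "purpose": purpose,
        "steps": {n + 1: {"description": s, "notes": None}
                  for n, s in enumerate(STEPS)},
    }

plan = new_plan("teaching/course improvement",
                "improve discussion in a second-year seminar")
print(plan["steps"][4]["description"])
```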
Step One. The user describes the purpose of the evaluation on three levels: why the evaluation is being done or what is prompting it; the nature of the program, project, course or teaching situation being evaluated; and the types of decisions that will be made as a result of the evaluation. The objective of this step is to have the professor or the department seriously consider the nature of the evaluation they plan to undertake. Techniques are provided for focusing discussion and for specifying or "narrowing down" goals or objectives. For a program evaluation, or any evaluation involving a group of faculty members, agreement should be reached as to the purpose and scope of the evaluation and the commitment required from the individuals involved.

Step Two. The user further specifies what will be evaluated by examining teaching roles or course functions and their priorities in the department. At this step the User Guides "branch" into the evaluation of teaching, courses or programs. For the evaluation of teaching, the professor prioritizes his or her teaching roles (lecturer, seminar leader, course manager, etc.) or teaching skills (ability to promote discussion, speaking ability, ability to relate to students, etc.). In the course evaluation branch, the professor considers the functions of the course (a prerequisite to further courses, a research seminar, a basic skills course, a practicum) and the course objectives or goals (what will students know or be able to do upon completion of the course?). In program evaluation, each course is considered in the light of the overall program goals and program structure. Within each branch, the handbook provides guidelines, checklists, and referrals to resources that may assist in the analysis being done.

Step Three. The user now moves away from describing or analyzing the object of the evaluation and begins to plan the evaluation itself. Appropriate sources of information are selected (professors, students, professional associations, course notes and outlines, etc.) and the techniques for gathering the information are planned (interviews, observations, comments, videotapes). It is recommended, for any administrative decision-making, that at least two separate sources of information be used. The sources and techniques chosen depend to some extent on the purposes and functions specified in the previous two steps; guidelines are given. The user may also refer to summaries of the research comparing various sources and techniques (e.g., the correlation between professor and student ratings of a course, the validity and reliability of questionnaires, observation schemes, and so on).

Step Four. The next step is to consider the standards or criteria that will be used in making any decisions based upon the evaluation results. What if, for example, two sources of information are used and they totally contradict one another? Determining in advance how decisions will be made (weights, priorities, cut-off points) is a necessary component of a fair and systematic evaluation procedure. The User Guide suggests a number of variables that can be considered at this point: cost, the flexibility of the decision, psychological "cost." Guidelines are then given for determining standards based on these variables. It may not be possible, or even desirable, to attach numbers to these standards at this point, depending on the purpose of the evaluation, the type of information being used and the complexity of the situation.
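To make the notion of predetermined weights and cut-off points concrete, consider the following sketch. It is hypothetical: the User Guide prescribes no formula, and the sources, weights and numbers here are invented purely for illustration of how contradictory sources might be reconciled under rules fixed in advance.

```python
# Hypothetical illustration of Step Four: weights and a cut-off point
# agreed upon BEFORE the information is gathered. The guides prescribe
# no such formula; these sources and numbers are invented.

WEIGHTS = {"student_ratings": 0.6, "peer_review": 0.4}  # priorities set in advance
CUTOFF = 3.5  # on a 1-5 scale: below this, revision of the course is planned

def combined_score(scores: dict) -> float:
    """Weighted average of the sources, using the predetermined weights."""
    return sum(WEIGHTS[src] * val for src, val in scores.items())

# Two sources of information that partly contradict one another:
scores = {"student_ratings": 4.2, "peer_review": 2.9}
overall = combined_score(scores)
decision = "retain course design" if overall >= CUTOFF else "plan revisions"
print(f"combined score {overall:.2f} -> {decision}")
```

Because the weights and the cut-off were fixed before the data came in, the decision rule, not the stronger personality in the department meeting, settles the contradiction.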
Step Five. In the fifth step of the evaluation process the faculty member moves towards a more detailed plan of what will be done as a result of the evaluation; basically, this is a further expansion of Steps One and Four. The user reviews the time and resources available in the department to implement changes. The User Guide breaks this step into an analysis of faculty, student and assistant time, and the availability of equipment, physical facilities, books and materials, printing services, and learning aids. Services available at McGill University are described, and examples of completed analyses are given. The extent to which such an analysis needs to be done is entirely dependent on the purpose of the evaluation: for example, an evaluation of an individual's teaching for the purpose of improvement does not require the thorough survey of departmental resources that a program evaluation requires. However, the detailed specification of a "plan of action" to follow the evaluation is considered essential in all kinds of evaluation planning.

Step Six. In the early planning, the faculty member considered the sources of information available and appropriate in the situation and selected the techniques for gathering information. Now, at this step, the actual instruments and/or techniques are selected or, if necessary, developed for the evaluation. Questionnaire items may be selected from one or more item "banks" or collections of items. Guidelines are given for conducting interviews, making observations, analyzing videotapes, and so on. The many resources available in this area are listed and described; the user is directed to the resources which are most appropriate to his or her situation.

Step Seven. If the previous steps have been conscientiously followed, Step Seven should be relatively straightforward. Actual procedures will depend upon the particular evaluation process: the information is collected; the results are analyzed or summarized; and the information is then applied to the plans and decisions outlined earlier. The resource files contain information in areas such as techniques of statistical analysis, methods for implementing change, etc.

The evaluation process can be summarized in the following figure. At each step, the user is in touch with resources and consultants, but is responsible for his or her own decisions. The User Guide provides the structure necessary for a non-evaluation expert to assume this responsibility.

[Figure: The evaluation process. An orienting workshop leads into the seven steps (1. Describe the purpose; 2. Specify what will be evaluated; 3. Select sources of information; 4. Determine or set standards or criteria for judgements; 5. Decide on actions; 6. Select or develop information gathering techniques; 7. Gather, summarize, and interpret information), with resources and consultants available throughout, leading to decisions, actions, and their implementation.]
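As a final illustration, tied to Step Seven of the figure, the following hypothetical sketch shows how gathered questionnaire ratings might be summarized before the standards set in Step Four are applied. Nothing in the system requires a computer (though computer services are among the listed resources), and the items and ratings here are invented; a real user would draw items from an item "bank."

```python
# Hypothetical illustration of Step Seven: summarizing questionnaire
# results before applying the standards set in Step Four. Items and
# ratings are invented for the example.

from statistics import mean, stdev

responses = {  # item -> ratings from individual students (1-5 scale)
    "The lectures were well organized":   [4, 5, 4, 3, 5],
    "Discussion was encouraged in class": [2, 3, 2, 4, 3],
    "Course objectives were made clear":  [5, 4, 4, 4, 5],
}

for item, ratings in responses.items():
    print(f"{item}: mean {mean(ratings):.2f}, "
          f"s.d. {stdev(ratings):.2f}, n = {len(ratings)}")
```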
Implementation of the System

The Evaluation System is currently being implemented on a pilot basis in several McGill departments. A wide range of academic disciplines is represented in this pilot group (e.g., medicine, English, education), and the purposes of the evaluation vary with each user. Data is being collected on a number of levels to assess the effectiveness of the system.

1. Descriptive data is recorded for each use of the system. This includes: academic discipline; type of evaluation (program, course, teaching); number of individuals involved in the evaluation; amount of time spent in conducting the evaluation; the involvement of consultants; the extent of the use of resource files; and records of the steps followed and decisions made in the process.

2. Reactions to the system are obtained from all involved individuals: the faculty users, students, assistants, Centre for Learning and Development staff who contribute to the process, and any involved resource persons. Through questionnaires, interviews and comments, reactions to the usefulness of each of the four components (workshops, User Guides, resource files, and consultants) and suggestions for improvement are recorded. Emphasis is placed on the content, format, organization, understandability, and writing style or "tone" of the guides and resource files, and on the role of the consultants and workshops. Different types of information are available from the different sources; at this level, the faculty members' reactions will be the most useful in planning revisions to the system.

3. The effect of the Evaluation System will also be judged. Two types of information will be used here: the changes that take place in the nature of the administrative decision-making process, and the effect that the system has on the teaching and course improvement process. Comparisons will be made with both processes as conducted without the system. Faculty, resource persons, administrators and Centre for Learning and Development staff will provide information through observations, questionnaires, interviews and recorded comments.

Basically, the principles of the Evaluation System are being used to assess the system itself.

Summary

The McGill Faculty and Course Evaluation System was developed in response to needs expressed by both university administrators and individual faculty members. Rather than re-inventing the wheel (or perhaps the course questionnaire), an attempt was made to coordinate the existing evaluation models, plans, and techniques, and to present them in a comprehensible form to the faculty members who are interested in using them. To this end, the two User's Guides were developed, containing a systematic procedure for going through an evaluation process. The Evaluation System is now being pilot tested at McGill University and is being used to evaluate itself. In the coming year necessary revisions will be made and information will be collected on the effect that such a system has on the evaluation process in an institution of higher education.

REFERENCES

Berquist, W. & Phillips, S. Handbook for Faculty Development. Washington: Council for the Advancement of Small Colleges, 1975.

Luft, M., Lujan, J. & Bemis, K. "A quality assurance model for process evaluation." In Borich, G. (Ed.), Evaluating Educational Programs and Products. Englewood Cliffs, N.J., 1974.

Pierce, H.B. & Schroeder, L.L. "An objectives-based participatory evaluation plan for teaching faculty." Educational Technology, 1974, 28-32.

Scriven, M. "The methodology of evaluation." Perspectives on Curriculum Evaluation, AERA Monograph Series on Curriculum Evaluation No. 1. Chicago: Rand McNally and Co., 1967.

Stake, R. "The countenance of educational evaluation." Teachers College Record, 1967, 68, 523-540.