Canadian Journal of Higher Education / Revue canadienne d'enseignement supérieur
Volume 49, No. 2, 2019, pages 54–71

Practical Measures for Institutional Program Reviews: A Case Study of a Small Post-Secondary Institution

John Jayachandran, Concordia University of Edmonton
Colin Neufeldt, Concordia University of Edmonton
Elizabeth Smythe, Concordia University of Edmonton
Oliver Franke, Concordia University of Edmonton

Abstract

Post-secondary institutions carry out cyclical program reviews (CPRs) to assess educational effectiveness. CPRs often use both qualitative and quantitative data analyses with the aim of improving teaching and learning. Though most CPR studies identify various factors for this purpose, they fail to identify measures/indicators that are relevant and practical for the institutional decision-making process. Our objectives for this article are twofold: first, we identify and list variables that are measurable and sort them into clusters/groups that are relevant to all programs; second, we critically assess the relevance of these indicators to program review in a small post-secondary institution.

Résumé

Les établissements universitaires conduisent des évaluations cycliques de leurs programmes afin de connaître leur efficacité et leur valeur pédagogique. Dans ces évaluations cycliques, on se sert de méthodologies qualitatives et quantitatives dans le but d'améliorer l'enseignement et l'apprentissage. Bien que ces évaluations permettent souvent de déterminer un certain nombre de facteurs, elles n'aident pas à trouver des indicateurs ou des mesures pertinents et pratiques pour la prise de décision interne des établissements universitaires. Notre article poursuit deux objectifs principaux. Premièrement, nous dressons une liste de facteurs mesurables et nous les catégorisons en thèmes ou groupes pertinents pour tous les programmes pédagogiques universitaires. Deuxièmement, nous évaluons l'intérêt que ces indicateurs peuvent avoir pour l'évaluation des programmes pédagogiques dans une petite université.

Over the last two decades, program review has become a topic of interest and debate among higher education professionals and within higher education institutions (Halpern, 2013). Program review is a critical component of self-examination, reflection, and continuous improvement in teaching and learning. It could be considered one of the most powerful and effective tools to shape and reshape an institution. The review process, in general, allows faculty in a particular program to evaluate the program's effectiveness in serving students and achieving educational excellence. According to Bok (2006), "[t]hough the process of program review may not be perfect . . . program review, when thoughtfully carried out, is more reliable than hunches or personal opinions" (p. 320). Provincial accreditation agencies have imposed rigorous assessment requirements on post-secondary institutions as part of their responsibilities to monitor degree programs and to ensure that standards of quality continue to be met.
As in many provinces, Alberta has a regulatory body, the Campus Alberta Quality Council (CAQC), an arm's-length quality assurance agency that reviews all Alberta post-secondary degree programs and recommends them to the Minister of Advanced Education for approval (Campus Alberta Quality Council, 2019). This case study draws upon the experiences of the article's authors as faculty members who were participant observers in the process of conducting cyclical program reviews of three academic programs at a post-secondary institution, Concordia University of Edmonton (CUE), in 2015.

Institutional Profile

For almost 100 years, CUE has operated in Edmonton, Alberta, on Treaty 6 territory. The university is a small, liberal arts institution established in 1921. After the purchase of the property from the Hudson's Bay Company in 1925, the first building on the current campus was constructed in 1926 (Concordia University of Edmonton, n.d.). The university employs 10 senior administrators, 60 full-time faculty members, and a pool of 255 sessional instructors. CUE is committed to a student-centered approach to learning, with a focus on small class sizes, student engagement in research and scholarship, and support for active learning in the context of the overall mission of the university. The university offers over 45 majors and minors in the faculties of Arts, Science, and Management, as well as After-Degree programs (in Education and Environmental Health), master's degrees, and a suite of post-baccalaureate certificates and diplomas in high-demand areas such as information security management. The university has an annual enrolment of approximately 2,000 students from across Canada and from over forty countries. As a small institution, CUE may have particular challenges that impact the process of program review. However, many of the issues and challenges discussed below in undertaking meaningful program reviews that provide useful information to evaluate and improve programs could well apply to many post-secondary institutions, both large and small.

Formal Review Process Overview

CUE's 2014 Academic Program Cyclical Review Policy and Procedure governed the CPR process. The program review schedule is set by the Vice-President Academic (VPA), in consultation with the Dean of Graduate Studies and Program Development (DGSPD). The DGSPD convenes a Review Committee to track the process and ensure accurate, timely, and effective outcomes. The Dean of the relevant faculty then convenes a Working Committee (WC) to create the Review Report in accord with the CAQC institutional self-study guidelines. The report is reviewed by the VPA, who then arranges a site visit by the external evaluators; the evaluators subsequently provide a report to the DGSPD, which is shared with the WC. The WC develops a response to the external evaluators' comments, and the completed report goes to the Review Committee and the VPA for final approval. The DGSPD then forwards the completed Review Report to the CAQC. Our programs go through this formal review process every five years.

Overview of Measures/Indicators for Program Review

One major debate in the program review literature centers on the appropriate role of qualitative and quantitative measures (Gustafson, Daniels, & Smulski, 2014).
Experts and scholars in the program-review discipline tend to focus on quantitative assessment and underestimate the value of qualitative data. However, relying solely on quantitative measures will have the effect of skewing the program-review process away from concerns over educational quality or student success (Academic Senate for California Community Colleges, 2009). On the other hand, those academics who recognize the value of program reviews have argued that there is a need to include more qualitative assessments in these reviews because they provide a more balanced and richer perspective (e.g., Contreras-McGavin & Kezar, 2007; Den Outer, Handley, & Price, 2013; Fifolt, 2013; Harper & Kuh, 2007; Museus, 2007; Van Note Chism & Banta, 2007). Qualitative measures, such as student abilities, ethical reasoning, and critical thinking, may be difficult to measure, but these skills are central to preparing students for lifelong learning and effective citizenship (Academic Senate for California Community Colleges, 2009).

Regardless of which measures are chosen, most program reviews, at a minimum, require data on program demands, program resources, program efficiency, and program outcomes. The various components required for a review are well documented (Office for Academic Programs and Program Review Panel, 2011; Office of Educational Effectiveness and Institutional Research, 2014; Ontario Universities Council on Quality Assurance, 2016). In the following section, we discuss the various components that play a vital role in an effective program evaluation.

Key Components of an Effective Evaluation

Curriculum Review

Curriculum represents the heart and soul of instructional programs in post-secondary institutions, and it is therefore critical to undertake a comprehensive examination of the curriculum when conducting a program review. One essential item for a successful curriculum review is a curriculum map. This allows for the scrutiny necessary to evaluate the structure of individual courses, academic programs, and the institution's curriculum as a whole, including a review of general education requirements and outcomes. A program review process is almost certainly incomplete if the curriculum has not been reviewed for several years (Academic Senate for California Community Colleges, 2009).

Figure 1. Student learning outcomes (SLOs) at the course, program, and institutional levels. Source: Authors' conception based on Academic Senate for California Community Colleges (2009), Program review: Setting a standard (http://files.eric.ed.gov/fulltext/ED510580.pdf).

As illustrated in Figure 1 above, the review of discipline-based assessment leads to a greater understanding of how individual courses and institution-wide student services are aligned with program and institutional learning outcomes. This alignment is often illustrated through the curriculum map. This exercise helps curriculum designers observe, measure, and assess teaching and learning activities, as well as the learning outcomes at all three levels (i.e., course, program, and institution), a relationship known as "constructive alignment" (Banta & Pike, 2012; Biggs & Tang, 2011).
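A curriculum map is, at its core, a mapping from each course to the program learning outcomes it addresses. The minimal sketch below (in Python, using hypothetical course codes and outcome labels rather than any actual CUE curriculum) shows how such a map can be queried for two patterns discussed later in this article: outcomes not covered by any course, and "orphan" courses that address no program outcome.

```python
# A minimal sketch of a curriculum map as a data structure.
# Course codes and program learning outcomes (PLOs) are hypothetical.

curriculum_map = {
    "SOC 100": {"PLO1", "PLO2"},          # introductory theory
    "SOC 210": {"PLO2", "PLO3"},          # research methods
    "SOC 495": {"PLO1", "PLO3", "PLO4"},  # capstone
    "SOC 366": set(),                     # course mapped to no outcome
}

program_outcomes = {"PLO1", "PLO2", "PLO3", "PLO4", "PLO5"}

# Outcomes addressed by no course reveal gaps in the curriculum.
covered = set().union(*curriculum_map.values())
gaps = program_outcomes - covered

# Courses that address no program outcome are "orphan" courses.
orphans = [course for course, plos in curriculum_map.items() if not plos]

print("Uncovered outcomes:", sorted(gaps))  # ['PLO5']
print("Orphan courses:", orphans)           # ['SOC 366']
```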
Teaching and Learning

Student learning outcomes should guide curriculum development, effective teaching methodologies, and methods of assessment. As part of the review process, both direct and indirect evidence is collected at each level of student learning outcomes. While direct evidence is based on objective measures, such as a student's actual performance (e.g., exams, essays, oral presentations), indirect evidence is based on subjective measures, such as a student's learning experiences and his or her perceived achievement of learning outcomes. Indirect evidence is usually derived from student survey responses (e.g., exit surveys, alumni surveys) to questions about their learning experiences or other aspects of the program (Breslow, 2007). These findings from indirect measures (qualitative, subjective) complement and enrich the findings from direct measures (quantitative, objective) of student learning.

Some specific student learning outcomes, such as leadership abilities, ethical reasoning, critical thinking, and the extent to which the institution itself is fulfilling its mission, cannot be easily or efficiently measured quantitatively (Contreras-McGavin & Kezar, 2007; Furman, 2013; Germaine et al., 2013). For example, in a sociology program a student's critical thinking could be assessed based on his or her demonstrated ability to apply sociological knowledge to understand human society within the context of a globalized world. Such an assessment may be difficult to quantify, but it provides very valuable qualitative information for evaluating student learning outcomes. While these outcomes are important goals of higher education, they require lengthy self-reported surveys for quantification (Furman, 2013). Detailed information about program effectiveness, student learning, and student satisfaction is often more easily obtained through the use of qualitative methods (Contreras-McGavin & Kezar, 2007; Harper & Kuh, 2007; Van Note Chism & Banta, 2007).

Resources

An important element of CPRs is a comprehensive and thorough analysis of the institution's physical facilities and of the technical and other supports available to students, faculty, and staff in the program. Studies that examine institutional resources in CPRs suggest that it is not necessarily a matter of total program spending but rather of how funding is allocated to enhance the program (Wellman, 2010). Analysis of expenditures on resources can be used to determine whether instructional resources allocated in the past are currently adequate and/or appropriate to achieve the program goals. More specifically, what should be analyzed in the CPRs is how the administration of the institution plans to utilize existing human, physical, and financial resources, and whether an institutional commitment to increase those resources to support the program is needed.

Quality Indicators

Quality assurance measures for university academic programs have been adopted around the world and are widely recognized as a vital component of every viable educational system. Academics who have investigated useful and reputable approaches to reviewing and assessing university programs generally agree that most CPRs should include quality-assurance management principles, to a greater or lesser degree, in the review process (Ontario Universities Council on Quality Assurance, 2016).
The general consensus is that a quality assurance component must be an integral part of the teaching and learning process throughout the program, and that quality assurance cannot be assessed only through exit surveys of students who graduate from the program. Empirical studies consistently report faculty disenchantment with formal notions of quality assurance (Anderson, 2006; Newton, 2010). This is mainly due to disagreement about what constitutes quality education and lingering doubts about the use of metrics and the quantification of complex areas. Nevertheless, there has been a steady increase in interest by governments and other agencies in more direct indicators of learning quality in post-secondary educational institutions. Though measures of student performance and achievement are often used as a proxy for quality learning, there are other measures that are known to have a strong association with quality outcomes (Ontario Universities Council on Quality Assurance, 2016).

Methodology

Probably the most controversial aspect of designing academic program reviews concerns the methods used to evaluate programs (Conrad & Wilson, 1985). There is no real consensus among academics as to how assessments ought to be conducted within an institution. At least partly as a consequence of this controversy, institutions frequently employ a combination of both qualitative and quantitative methods and techniques. Though quantitative assessment methods have historically been preferred within higher education, many researchers question the validity and reliability of using a single methodology for this purpose (Commander & Ward, 2009; Van Note Chism & Banta, 2007). Therefore, CPRs should include both quantitative and qualitative methods; these methods should not be considered mutually exclusive. Since most academic institutions and governmental agencies prefer a data-driven approach to the review process, it is crucial that valid indicators of the quality of teaching and learning are developed in advance of the review process and implemented in order to produce practical and useful data, as well as meaningful information that can be used to inform institutional decisions (Coates, 2006b; Hattie, 2005). One such attempt was undertaken by Tan (1992), who, through an extensive literature review, developed a comprehensive list of variables used by previous researchers in quality assessment studies, which provides a basis for developing a broad list of measures.

Criteria for Selecting Indicators/Measures

The first purpose of this article is to identify and list variables that are measurable and sort them into clusters/groups that are relevant to all programs. Based on the literature review above, in the following section we cluster variables into four main areas, with a brief explanation for the selection of these variables. The criteria used to guide selection include the availability, relevancy, and measurability of indicators and whether multiple measures are available to evaluate each area. The use of multiple measures is important because it yields more valid results than single measures of assessment. Very few evaluative questions can be answered with any degree of certainty on the basis of a single indicator. Most questions have multiple dimensions, and multiple indicators will be required for their assessment.
Both the quantitative and qualitative data collected should enable institutions to make decisions that lead to improved instruction, stronger curricula, and more effective and efficient policies about learning outcomes assessment, with the overall goal of improving teaching and learning.

Table 1. Sample Assessment Matrix

Curriculum
1. Curriculum map
2. Number, type, depth, and breadth of the courses
3. Cross-listing, overlapping content, or shared resources
4. Course demand/enrolment
5. Maintaining currency with respect to curricular changes and course offerings in the academic field
6. Curriculum compared with that of comparable programs at other universities

Teaching and learning
1. Methods of delivery for courses in the program
2. Methods of evaluation
3. Ratio of students to faculty
4. Average/median class size
5. Measures of student achievement and average pass rate
6. Opportunities available to improve teaching
7. Institutional resources available for teaching

Resources
1. Analysis of physical facilities
2. Availability of technical support, and support for students, faculty, and staff
3. Resources available and cost efficiency of the program

Quality indicators: Faculty
1. Workload: number of courses taught by full-/part-time faculty
2. Academic accomplishment(s): (a) professional involvement; (b) research grant proposals written, submitted, or awarded; (c) refereed publications; (d) innovation in curriculum development
3. Average number of standing committees served (internal and external)
4. Engagement in professional development activities

Quality indicators: Students
1. Demand: the number of students declaring the department/program major at the time of university census (percent change from the prior year)
2. Student retention, attrition rates, program completion rates, and average completion time
3. First-year students' continuation rates (percent that return the following fall)
4. Number of full-time students per program
5. Average GPA of full-time students enrolled in the program
6. Perceived student satisfaction with the program (capstone survey, exit survey, early-leaver survey)

Quality indicators: Graduates
1. Rates of graduation from the program
2. Alumni: student perception of the department or program and its value in their future careers/vocations
3. Employment rates (appropriate employment two years after graduation)
4. Survey of employer satisfaction

In an ideal situation, it would be beneficial for CPRs to have an assessment matrix (Table 1) to assess all areas of the institution. Because CUE is a smaller institution, resources such as non-academic support are limited for the faculty undertaking the review, who, given their other teaching and service commitments, have limited time to carry out the required assessments.

Curriculum

The review of an academic program should include: (a) an effective curriculum map to link core content, concepts, methods, and skills to particular courses and experiences; (b) an examination of how the number, type, depth, and breadth of the courses support the student learning outcomes (SLOs) and goals of the program; and (c) a review of how courses in the program interact with other programs on campus (e.g., cross-listing, overlapping content, or shared resources).
It should also include an honest examination of whether the courses offered appropriately meet student demand as well as the learning objectives and outcomes. Whether the program is maintaining currency with respect to curricular changes and course offerings in the academic field must also be assessed; this might include a comparison of the curriculum with those of comparable programs at other universities or colleges. CPRs based on these measures provide an opportunity to review the effects of past changes made to the program's curriculum. In addition, the integrity of the curriculum could be validated by internal and external agencies through these measures. For example, the information from an effective CPR could permit administrators and program chairs to make an honest assessment of how courses offered in a particular program complement or impact other programs, thus facilitating institutional financial and program planning. These measures similarly permit meaningful comparisons of a particular program's curriculum with those of other institutions to ensure the program remains relevant and competitive.

Teaching and Learning

In an ideal situation, a program review should include: (a) methods of course delivery (e.g., lecture, visual display, online resources, labs, and discussion groups); (b) methods of evaluation (i.e., grading philosophy and standards); (c) the ratio of students to faculty in the program; (d) average/median class size; (e) measures of student achievement (e.g., grade point average [GPA] and the average pass rate); (f) opportunities available to improve teaching (e.g., institutional resources supporting innovation in teaching, the use of teaching evaluations, and recognizing/rewarding quality teaching); and (g) institutional resources available for teaching (e.g., space, equipment, library resources, and institutional support services).

Most of the measures listed above are objective and usually readily available to institutions. They provide an opportunity to examine the various pedagogical designs adopted by professors, departments, and universities. Such assessments will help to improve the quality of teaching and learning and reveal whether feedback from students has been incorporated into improved teaching.

Resources

The CPRs should include the following evaluation tools to assess the resources that are available to faculty, staff, and students and how efficiently they are used in the program: (a) an analysis of physical facilities (e.g., laboratories, equipment, teaching aids, and library resources); (b) the availability of technical and other support for students, faculty, and staff; and (c) the resources available and the cost efficiency of the program (e.g., faculty time required to offer the necessary courses for majors, course overload, etc.).

The resource indicators listed above should be used to determine whether available resources are adequate and/or appropriate to achieve program goals. CPRs must address resource and reallocation issues, if any, to maintain a high quality of teaching and learning. Fiscal measures, such as the cost-effectiveness of a program, are very valuable for institutions during their planning, budget, and curriculum reviews. The library staff, partnering with faculty conducting program reviews, have also become an integral part of the process.
They provide information on library collection development plans (part of the resources available for the program) by reviewing the library's collection and resources on a regular basis. They also provide important support for students and programs through their instructional role, ensuring student and faculty information literacy through periodic seminars and workshops to improve teaching and learning.

Quality Indicators

The CPRs should include the following measures to assess the quality of teaching and learning:

Faculty measures. Some of the measures to include are: (a) the workload of a faculty member (i.e., the number of courses taught by full-/part-time faculty); (b) the academic accomplishment(s) of the faculty (e.g., professional involvement; research grant proposals written, submitted, or awarded; refereed publications; and innovation in curriculum development); (c) the engagement of the faculty member in professional development activities (e.g., workshops, seminars, etc.); and (d) the number of standing committees on which the faculty member has served (internal and external). In the case of the final measure regarding standing committees, such service may take away time that could be spent on other activities such as teaching, research, and scholarship.

Student measures. Some of the measures to include are: (a) student demand (the number of students declaring the department/program major at the time of the university census each year); (b) student retention and attrition rates (see Yorke & Longden, 2007, for measurement problems), program completion rates, transfer rates, and the average time that students require to complete the program; (c) the continuation rates of first-year students (the percentage of students who return to university the following term); (d) the number of full-time equivalent students per program; (e) the average GPA of full-time students enrolled in the program; and (f) perceived student satisfaction with the program (capstone-student surveys, student-exit surveys, early-leaver surveys).

Graduate measures. Some of the measures to include are: (a) the rates of graduation from the program; (b) student perception of the department or program and its value in their future careers/vocations (alumni); (c) employment rates (appropriate employment two years after graduation); and (d) surveys of employer satisfaction with program graduates.

The indicators listed under each of the areas above have an impact on the quality of student learning. The input measures, such as teaching qualifications and student demand, provide valuable insight into the student-learning environment. The output measures, such as graduation and employment rates, are necessary to assess the success of a program. Finally, outcome measures, such as student or employer satisfaction, provide an overall assessment of the quality of teaching and learning. These proposed measures apply equally well to both undergraduate and graduate programs. However, a few additional indicators might be required for CPRs of graduate programs, including: (a) the percentage of graduates employed in a field related to the program (related employment); (b) the quality and availability of graduate supervision; (c) faculty research funding, honours and awards, and commitment to student mentoring; and (d) students' scholarly output and success rates in provincial and national scholarships.
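Several of the student measures listed above (continuation, completion, and average time to completion) reduce to simple rates computed over cohort enrolment records. The sketch below, in Python, shows one way such rates might be calculated; the record layout, field names, and sample values are hypothetical and are not drawn from CUE's data systems.

```python
# A minimal sketch of computing student indicators from enrolment records.
# The record structure and sample values are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StudentRecord:
    entry_year: int
    returned_second_fall: bool      # re-enrolled the following fall
    graduated_year: Optional[int]   # None if the student has not graduated

records = [
    StudentRecord(2015, True, 2019),
    StudentRecord(2015, True, None),
    StudentRecord(2015, False, None),
    StudentRecord(2016, True, 2020),
]

cohort = [r for r in records if r.entry_year == 2015]

# First-year continuation rate: share of the cohort returning the next fall.
continuation_rate = sum(r.returned_second_fall for r in cohort) / len(cohort)

# Program completion rate and average time to completion for the cohort.
grads = [r for r in cohort if r.graduated_year is not None]
completion_rate = len(grads) / len(cohort)
avg_completion_time = sum(r.graduated_year - r.entry_year for r in grads) / len(grads)

print(f"Continuation rate: {continuation_rate:.0%}")                 # 67%
print(f"Completion rate: {completion_rate:.0%}")                     # 33%
print(f"Average completion time: {avg_completion_time:.1f} years")   # 4.0 years
```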
Data

When we prepared our CPRs, CUE's administration provided us with a CPR template that outlined the requirements for our evaluations and reports. These included a detailed description of the program under review: a student-demand analysis, a labour-market analysis, anticipated employment outcomes for graduates of the program, the cost-effectiveness of the program, and financial support for students admitted to the program. The template also required an analysis of the quality of faculty and of program support, including adequate physical resources. An explanation of the current state of the program, including its strengths and weaknesses as well as future challenges, was also required. The CPR template closely matched the list of measures/indicators discussed above.

The program coordinators for the History, Sociology, and Political Economy programs were tasked with undertaking CPRs of their respective programs using this template. CUE faculty, staff, and administrators gathered much of the data required. While much of this was quantitative data, such as class size and cost per full-load equivalent (FLE), a common measurement of enrolment in post-secondary institutions, a limited amount of qualitative data (e.g., alumni testimonials) was also included in the CPRs (see Table 1).

Other indicators were rejected in our study, including the average instructional salaries of part-time and full-time faculty. Program cost-effectiveness is often assessed by this measure, but it could be artificially inflated in programs where a teaching faculty member is also part of the administration, which is not uncommon in smaller institutions. We also rejected revenue generated by external sources (e.g., grants, foundation awards), which might not be readily available to all departments or programs. Application/registration ratios were also excluded because they could be misleading: some students apply to several institutions, and their decision to enrol in a particular program may be based on many factors, such as the cost of tuition, the reputation of the institution, and economic conditions. Finally, course failure rates might not be suitable for courses with small enrolments.

Findings

Based on the overview of measures/indicators for program review discussed earlier, we analyzed CUE data in four areas: curriculum, teaching and learning, resources, and quality indicators. In the following section, we discuss some of the experiences and challenges we faced in assessing each of the components identified as vital for the evaluation of our respective programs.

Curriculum

Curriculum is an essential component of program evaluation, and it must be thoroughly examined during a CPR. We were able to gather information on most of the items listed under "Curriculum" in Table 1, except for the last two items: the uniqueness of the program and institutional comparisons. This was mainly due to the difficulties in locating institutions comparable to CUE and the delays we encountered in receiving such information prior to the deadline for the completion of the CPRs for our respective programs. Programs in post-secondary institutions often face many pressures that impact the development of their curriculum. While our focus is on our institution, we recognize that these pressures are not unique or exclusive to one institution.
These include the push to cross-list courses in order to reduce operational costs, and to offer courses based on their popularity with students rather than on whether they contribute to achieving the program's learning objectives. Faculty often develop new courses reflecting their interests and expertise; if there is turnover and they leave the institution, these courses become orphans. This can result in a significant number of courses that are listed in the institution's academic calendar but are not offered on a regular basis. The value of preparing a curriculum map is that it often reveals this pattern. In our analyses, we found that developing curriculum maps was a very useful exercise in identifying: (a) gaps and redundancies in the course offerings of our programs; (b) courses that could be cross-listed with other programs and therefore be more cost-effective; (c) courses that had not been offered for some time and were no longer relevant to, or current with, the program discipline (e.g., service courses applicable to other disciplines); and (d) courses that were no longer in demand but still required for degree completion (e.g., capstone courses).

Teaching and Learning

The central focus of the CPRs is to demonstrate that quality teaching and learning takes place in an academic institution. The two main partners involved in this process are the faculty and the students. The evaluation of the teaching component of the CPR deals primarily with methods of delivery and assessment, the quality of faculty, and the resources available to achieve program learning objectives. In our analysis, we also found that the most common types of teaching methods, course delivery, and assessment employed in the Social Science undergraduate programs were the following: instructor-led (e.g., lectures, discussion groups, debates) and technology-led (e.g., Moodle [CUE's online course-management system], online/hybrid courses). All of our instructors in the Social Science department utilized multiple methods, and the specific teaching and learning strategies used for delivery and assessment were determined by course level and content. With few exceptions, most junior-level courses employed lectures and objective-type assessments; in most senior-level courses, on the other hand, there was a greater emphasis on group discussions, debates, and class presentations.

During the last decade, ethnic and cultural diversity have greatly expanded, especially in Western universities, with increasing numbers of international students studying abroad (Northedge, 2003). While international students undoubtedly have special needs with regard to language and social support, learning in a second language, homesickness, and cultural or social isolation, these needs are best addressed by a team approach that includes student services, such as counsellors, learning accommodations, the international student office, and students' associations, in cooperation with classroom instructors. CUE's student population is diverse in terms of basic skills and preparation for post-secondary education. Our programs often have a number of students with unique learning issues and challenges, making it more difficult for them to achieve the intended learning objectives of the respective programs.
In these cases, the information gleaned from the CPRs provides invaluable resources to faculty and administration to better meet the needs of these students. We also noticed that the interaction between students and faculty improved significantly as students progressed from junior- to senior-level courses. This often stimulates greater student motivation, interest, and success in the class and in learning experiences. While this is an advantage for students, this interaction can also increase the challenges for faculty if they have a heavy teaching workload. For CUE faculty teaching in undergraduate programs, for example, four courses per semester is the norm.

Resources

Our CPR analyses revealed that institutional supports and resources are available for the respective programs, but in certain areas these resources and supports are limited, which led to gaps in providing support to students, some of which have to be filled by faculty. We noted that some measures (such as the cost per FLE) commonly employed to evaluate the cost-effectiveness of a particular program appear to be of little practical use in that effort. For example, the cost of delivery for programs with fewer faculty members was often artificially high. This is because some CUE administrators also share responsibility for teaching courses in some programs and, given their higher salaries, these additional expenses inflate the cost per FLE of delivering their respective programs (a simplified illustration follows at the end of this section). Furthermore, we also noticed a lack of clarity in how institution-wide costs are reflected in the cost per student in a particular program (i.e., program cost vs. institutional cost).

Quality Indicators

Our initial plan was to assess the quality of teaching and learning across three groups in our respective programs: faculty, students, and graduates. Using data from annual evaluations of program faculty conducted by CUE's administration (based on annual reports submitted by the faculty), we were able to obtain information on faculty teaching, research, and scholarly activity for our CPRs. Unfortunately, we had very little reliable information to assess the experiences and learning outcomes of students and graduates, which are required for our CPRs. This was primarily due to a lack of adequate time and resources to collect relevant and current information on the retention of students, student withdrawals, successful student completion of the programs, and the employment of students after graduation. Looking back, we now recognize that it would have been easier to gather this information if there had been an ongoing institution-wide data-gathering process already in place. For faculty with heavy workloads, acquiring the necessary data to complete their program reviews in a timely manner is a challenge. A process of regularly conducting exit surveys of new graduates and of students in the programs' capstone courses, as well as surveys of alumni, would undoubtedly have provided data for better assessments of the quality of our programs, as well as of any changes over time. These challenges point to a need for institutions to implement institution-wide data-gathering policies that are intentionally organized and operational to better support the ongoing cyclical program review process.
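As a simplified illustration of the cost-per-FLE distortion noted under Resources above, the sketch below compares the same small program costed two ways: with a regular faculty member teaching a two-course assignment, and with an administrator teaching the same assignment at a higher salary. The salaries, teaching loads, enrolment figure, and the rule of attributing salary in proportion to teaching load are all assumptions made for illustration; they do not represent CUE's actual costing method or data.

```python
# Hypothetical illustration: how an administrator's higher salary can inflate
# the cost per full-load equivalent (FLE) in a small program. All figures are
# invented, and the proportional salary-attribution rule is an assumption.

def cost_per_fle(teaching_staff, fle_count):
    """Instructional cost per FLE, attributing each person's salary in
    proportion to the share of a full teaching load carried in the program."""
    instructional_cost = sum(salary * load_share for salary, load_share in teaching_staff)
    return instructional_cost / fle_count

# The same program and the same two-course assignment (a quarter of an
# eight-course annual load), taught either by a regular faculty member or by
# an administrator with a higher salary.
with_regular_instructor = [(95_000, 1.0), (90_000, 1.0), (84_000, 0.25)]
with_administrator      = [(95_000, 1.0), (90_000, 1.0), (180_000, 0.25)]

print(f"Regular instructor: ${cost_per_fle(with_regular_instructor, 100):,.0f} per FLE")  # $2,060
print(f"Administrator:      ${cost_per_fle(with_administrator, 100):,.0f} per FLE")       # $2,300
```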
Conclusion

The purpose of this article is to identify relevant and practical measures for the cyclical review process in a small institutional setting by focussing on a single case study with which we are most familiar. Given the very different regulation of post-secondary institutions internationally, cross-national comparisons of how quality assurance is done in other countries may not make sense. While the process of accreditation and quality assurance developed earliest in the United States, and much of the literature on program review is American, the process of accreditation there is very decentralized and conducted by non-governmental organizations and agencies (El-Khawas, 2001). More relevant might be cross-institutional comparisons within a common quality assurance or regulatory framework. For example, comparisons across Alberta's post-secondary institutions would provide many insights.

Such an undertaking would pose challenges. The first is access to information about internal program review processes. While the templates and requirements of the CAQC are readily available to the public, the actual reviews of programs are not. Institutions are sensitive about sharing this information, sometimes with reason, as critical evaluations of a program are controversial within the institution itself and may also be seen as having negative impacts on reputation and enrolment. Second, the comparisons would need to cover institutions with similar characteristics and missions, which, in the case of CUE, would be a small number of institutions. Third, it is clear that the government's "one-size-fits-all" approach to cyclical program reviews may work well for some institutions, but it does not work equally well for all institutions, especially those whose programs have a small number of full-time faculty and a paucity of resources, given the considerable labour intensity required for CPRs.

Our hope, however, is that by sharing the insights of our experience we can shed light on the question of what measures are most useful and meaningful in a good program review, and perhaps stimulate and encourage the sharing of information on the process and experiences of faculty and administrators across institutions. Most of the measures/indicators discussed here have been used by many post-secondary institutions in the past, but we have tried to place them into groups/clusters that we believe will be more manageable and useful for institutional decision-making processes. While the list of measures identified here is extensive, it is not exhaustive. The key point is that it is essential for institutions to select measures based on an understanding of what works well and provides useful and meaningful information for their own institutional mission and culture. The proposed measures/indicators must link the CPRs not only to the institution's vision and mission but also to the program's specific goals and objectives. Finally, since external accreditation agencies often seek measurable evidence of student learning, the program review process must incorporate both qualitative and quantitative measures/indicators. This evidence could be obtained through periodic student exit and alumni surveys.
While some limited comparison with programs at other institutions is required as part of the CPR process, there are limitations to the insights that might be produced. Given limitations of resources and time, it may be more useful to rely on measures that demonstrate trends within the institution to improve programs. Furthermore, ongoing and persistent financial pressures necessitate that every department demonstrate the usefulness and legitimacy of its programs, a process to which CPRs can contribute if the information gathered is meaningful and useful.

Implications

Our program reviews have had a number of implications for CUE. Having gone through a number of CPRs over the past few years, we have gained valuable experience in refining how we conduct our program reviews, resulting in several policy changes to better streamline the process. First, as a result of the demands of the CPR process and other institutional needs, CUE retained an independent institutional researcher to oversee the gathering and analysis of our data. The institutional researcher conducts the student and graduate satisfaction surveys, provides labour/employment market information, and gathers comparative student data from other institutions that offer similar programs. As a result of this hire, the data are now of better quality and more consistent across departments. Our completed CPR reports now serve as a model and guide for departments preparing their own CPR reports; in some cases, this has resulted in a reduction in the time required to prepare a CPR report. The average time required to complete CPRs, based on three recent reports, has been reduced from 12 to 15 months to seven months. Many CUE CPRs are now passing the external evaluations, and none of the CPRs submitted to the CAQC thus far have been returned or received negative feedback. Finally, our work in developing appropriate learning outcomes and curriculum maps for our programs has made it easier for many CUE programs to employ learning outcomes that are relevant to their students. This, in turn, ensures that CUE students acquire the appropriate educational experience to achieve success in their chosen vocational fields.

References

Academic Senate for California Community Colleges. (2009). Program review: Setting a standard. Based on the original paper by Educational Policies Committee 1995-1996. Retrieved from http://files.eric.ed.gov/fulltext/ED510580.pdf

Anderson, G. (2006). Assuring quality/resisting quality assurance: Academics' responses to 'quality' in some Australian universities. Quality in Higher Education, 12(2), 161–173.

Banta, T. W., & Pike, G. R. (2012). The bottom line: Will faculty use assessment findings? In C. Secolsky & D. B. Denison (Eds.), Handbook on measurement, assessment, and evaluation in higher education (pp. 47–56). New York, NY: Routledge.

Biggs, J., & Tang, C. (2011). Teaching for quality learning at university. Maidenhead, U.K.: McGraw-Hill and Open University Press.

Bok, D. (2006). Our underachieving colleges. Princeton, NJ: Princeton University Press.

Breslow, L. (2007). Methods of measuring learning outcomes and value added. Cambridge, MA: Teaching and Learning Laboratory, Massachusetts Institute of Technology.
Retrieved from https://tll.mit.edu/sites/default/files/guidelines/a-e-toolsmethods-of-measuring-learning-outcomes-grid-2.pdf

Campus Alberta Quality Council. (2019). Handbook: Quality assessment and quality assurance. Retrieved from https://caqc.alberta.ca/media/6083/handbook_withrevisions-to-feb-2019.pdf

Coates, H. (2006b). Universities on the catwalk: Modeling performance in higher education. Paper presented at the Australasian Association for Institutional Research Annual Forum, Coffs Harbour, NSW.

Commander, N. E., & Ward, T. (2009). Assessment matters: The strength of mixed research methods for the assessment of learning communities. About Campus, 14, 25–28. doi: 10.1002/abc.292

Concordia University of Edmonton. (n.d.). History. Retrieved from https://concordia.ab.ca/about/who-we-are/history

Concordia University of Edmonton. (2014). Academic program cyclical review policy and procedure. Retrieved from https://documents.concordia.ab.ca/s/Xp7YgeeIQ6atIBwAvjuyzg

Conrad, C. F., & Wilson, R. F. (1985). Academic program reviews: Institutional approaches, expectations, and controversies. ASHE-ERIC Higher Education Report No. 5. Retrieved from https://files.eric.ed.gov/fulltext/ED264806.pdf

Contreras-McGavin, M., & Kezar, A. J. (2007). Using qualitative methods to assess student learning in higher education. New Directions for Institutional Research, 136, 69–79. doi: 10.1002/ir.232

Den Outer, B., Handley, K., & Price, M. (2013). Situational analysis and mapping for use in education research: A reflective methodology. Studies in Higher Education, 38, 1504–1521. doi: 10.1080/03075079.2011.641527

El-Khawas, E. (2001). Accreditation in the USA: Origins, developments and future prospects. International Institute for Educational Planning. Retrieved from http://unesdoc.unesco.org/images/0012/001292/129295e.pdf

Fifolt, M. M. (2013). Applying qualitative techniques to assessment in student affairs. Assessment Update, 25, 5–12. doi: 10.1002/au

Furman, T. (2013). Assessment of general education. The Journal of General Education, 62, 129–136. doi: 10.1353/jge.2013.0020

Germaine, R., Barton, G., & Bustillos, T. (2013). Program review: Opportunity for innovation and change. Journal of Innovative Teaching, 6, 28–34.

Gustafson, J. N., Daniels, J. R., & Smulski, R. J. (2014). Case study: One institution's application of a multiple methods assessment framework. Journal of Research & Practice in Assessment, 9, 58–73. Retrieved from http://www.rpajournal.com/case-study-oneinstitutions-application-of-a-multiple-methods-assessment-framework

Halpern, D. F. (2013). A is for assessment: The other scarlet letter. Teaching of Psychology, 40, 358–362. doi: 10.1177/0098628313501050

Harper, S. R., & Kuh, G. D. (2007). Myths and misconceptions about using qualitative methods in assessment. New Directions for Institutional Research, 136, 5–14. doi: 10.1002/ir.227

Hattie, J. (2005). What is the nature of evidence that makes a difference to learning? Paper presented at the Australian Council for Educational Research Annual Conference on Using Data to Support Learning, Melbourne, Australia. Retrieved from https://www.acer.org

Museus, S. E. (2007). Using qualitative research to assess diverse institutional cultures. New Directions for Institutional Research, 136, 29–40. doi: 10.1002/ir

Newton, J. (2010). Views from below: Academics coping with quality. Quality in Higher Education, 8(1), 39–61. doi: 10.1080/13538320220127434
Northedge, A. (2003). Rethinking teaching in the context of diversity. Teaching in Higher Education, 8(1), 17–32. doi: 10.1080/1356251032000052302

Office for Academic Programs and Program Review Panel. (2011). California State University: Program review guide. Retrieved from https://www4.csudh.edu/Assets/CSUDH-Sites/IEA/docs/program-review/Program_rev

Office of Educational Effectiveness and Institutional Research. (2017). Program review handbook: California Lutheran University. Retrieved from https://www.callutheran.edu/offices/institutional-research/program-reviews/ProgramReviewHandbook2017.pdf

Ontario Universities Council on Quality Assurance. (2016). Quality assurance framework. Retrieved from http://oucqa.ca/wp-content/uploads/2017/11/QC-AnnualReport-2016-17.pdf

Tan, D. L. (1992). A multivariate approach to the assessment of quality. Research in Higher Education, 33(2), 205–226. Retrieved from http://www.jstor.org/stable/40196036

Van Note Chism, N., & Banta, T. W. (2007). Enhancing institutional assessment efforts through qualitative methods. New Directions for Institutional Research, 136, 15–28. doi: 10.1002/ir.228

Wellman, J. V. (2010, January). Connecting the dots between learning and resources. NILOA Occasional Paper 3. Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment. Retrieved from http://www.learningoutcomeassessment.org/occasionalpaperthree.htm

Yorke, M., & Longden, B. (2007). The first year experience in higher education in the UK: Report on phase 1 of a project funded by the Higher Education Academy. Higher Education Academy. Retrieved from http://www.improvingthestudentexperience.com/library/UG_documents/FYE_in_HE_in_the_UK_FinalReport_Yorke_and_Longden.pdf

Contact Information

John Jayachandran
Concordia University of Edmonton
[email protected]

John Jayachandran is Professor and Program Coordinator of the Department of Sociology at Concordia University of Edmonton, where he has been since 1990. He received his Ph.D. in Sociology (Demography) from the University of Alberta in 1990. His research interests span both sociology and demography. His current research interests include structural equation modelling of balancing work and family, and subjective well-being. His recent publications include "Balancing Work and Family in Canada: A Causal Modelling Approach" and "Determinants of Life Satisfaction in Canada." He teaches courses on statistics, research methods, aging, and population studies.

Colin P. Neufeldt is Professor of History, Dean of Graduate Studies, and Assistant Vice President Academic at Concordia University of Edmonton. Colin researches the history of Mennonites in Eastern Europe and the Soviet Union. His publications include "The Public and Private Lives of Mennonite Kolkhoz Chairmen in the Khortytsia and Molochansk German National Raĭony in Ukraine (1928–1934)" and "Collectivizing the Mutter Ansiedlungen: The Role of Mennonites in Organizing Kolkhozy in the Khortytsia and Molochansk German National Districts in Ukraine in the late 1920s and early 1930s." He lives in Edmonton, Alberta, where he also practices law.
Elizabeth Smythe is Professor of Political Science at Concordia University of Edmonton, where she teaches international and comparative politics courses as well as Canadian public policy. Her research interests include international trade and investment agreements, food standards, social movements, and global justice. Her most recent publications are The Role of Religion in Struggles for Global Justice (2018), co-edited with Peter J. Smith, Katharina Glaab, and Claudia Baumgart-Ochse (Routledge), and "Food for Thought: How Trade Agreements Impact the Prospects of a National Food Policy" (2018), Canadian Food Studies, 5(3), 76–99.

Oliver Franke is Assistant Professor of Political Economy at Concordia University of Edmonton, where he teaches introductory and intermediate macroeconomics and microeconomics courses as well as senior-level economics courses related to topics such as globalization, the environment, and money and banking. He has also been Chair of the Department of Social Sciences since October 2013. His interests and research include global economics, macroeconomics, and public finance.