The Impact of Quality Assurance Policies on Curriculum Development in Ontario Postsecondary Education

Qin Liu
University of Toronto

Abstract

Two trends in the evolution of quality assurance in Canadian postsecondary education have been the emergence of outcomes-based quality standards and the demand for balancing accountability and improvement. Using a realist, process-based approach to impact analysis, this study examined four quality assurance events at two universities and two colleges in Ontario to identify how system-wide quality assurance policies have impacted the curriculum development process of academic programs within postsecondary institutions. The study revealed different approaches that postsecondary institutions chose to use in response to quality assurance policies and the mechanisms that may account for different experiences. These mechanisms include endeavours to balance accountability and continuous improvement, leadership support, and the emerging quality assurance function of teaching and learning centres. These findings will help address the challenges in quality assurance policy implementation within Canadian postsecondary education and enrich international discussions on the accountability-improvement dichotomy in the context of quality assurance.

Keywords: internal quality assurance, external quality assurance, accountability, continuous improvement, learning outcomes

Résumé

Deux tendances quant à l'évolution de l'assurance de la qualité dans l'enseignement postsecondaire canadien sont l'émergence de normes de qualité fondées sur les résultats et la demande d'équilibrage entre la responsabilité et l'amélioration. Utilisant une approche de l'analyse d'impact à la fois réaliste et axée sur les processus, cette étude examine quatre cas d'assurance de la qualité dans deux universités et deux collèges en Ontario afin de déterminer l'impact de politiques d'assurance de la qualité à l'échelle du système sur le processus d'élaboration des programmes. L'étude révèle différentes approches choisies par les établissements en réponse aux politiques d'assurance de la qualité ainsi que les mécanismes pouvant expliquer les expériences différentes. Ces mécanismes comprennent les tentatives d'équilibrer la responsabilité et l'amélioration continue, le soutien à la direction et le rôle émergent des centres d'enseignement et d'apprentissage dans l'assurance de la qualité. Ces conclusions aideront à relever les défis de la mise en œuvre de politiques d'assurance de la qualité au sein des établissements postsecondaires canadiens et à enrichir les discussions internationales sur la dichotomie responsabilité/amélioration dans le contexte de l'assurance de la qualité.

Mots-clés : assurance de la qualité interne, assurance de la qualité externe, responsabilité, amélioration continue, résultats d'apprentissage

Introduction

Quality assurance (QA) has been "a rapidly growing concern" in postsecondary education around the world (Altbach, Reisberg, & Rumbley, 2010, p. 53). Comprehensive and systematic approaches to QA have been introduced in many countries (Martin & Stella, 2007). Canada is no exception to this reform movement. Since 2000, a series of QA mechanisms and policies have been established at the national and provincial levels (Weinrib & Jones, 2014). Two related trends need to be noted about the recent evolution of QA in Canadian postsecondary education. One trend is the emergence of outcomes-based quality standards as accountability schemes.
The province of Ontario spearheaded this in the development of a degree qualifications framework that outlines overall program design and outcome emphasis for qualifications from certificates to doctoral degrees. In 2007, the Council of Ministers of Education, Canada (CMEC) released the Canadian Degree Qualifications Framework, which articulates what bachelor's, master's, and doctoral degrees are intended to achieve in terms of general learning outcomes. This outcomes-based approach has also been applied to non-degree credentials. For example, the Credentials Framework in Ontario requires Ontario colleges to meet all specific vocational learning outcomes as defined in provincial program standards for credentials ranging from certificates to graduate certificates. These frameworks, although exerting different power in practice, serve as criteria for standardizing the outcomes of postsecondary education. Indeed, increasingly outcomes-oriented quality assurance and assessment represent a global trend in higher education (Aamodt, Frølich, & Stensaker, 2018; Coates, 2010; Finnie & Usher, 2005).

The other trend is the demand for balancing accountability and continuous improvement in QA schemes. Accountability and improvement are the dual purposes of QA (Kis, 2005; Vroeijenstijn, 1995). In the Canadian context, CMEC enshrined the accountability-improvement balance in its definition of quality assurance:

The criteria and processes that are employed in reviews of institutions and/or programs to determine whether standards set for postsecondary curriculum, outcomes, and input are being met and maintained [i.e., accountability, Author added], and whether they encourage continuous improvement in the quality of higher education. (as cited in Liu, 2015, p. 56)

Province-level QA policies, such as those in Ontario, have also aimed to balance accountability to established standards and quality improvement within postsecondary institutions. This goal arguably serves as a benevolent external conditioning factor for QA policy implementation at individual institutions.

Given these two trends, it is important to examine how QA policies have impacted postsecondary institutions, particularly in curriculum development. This kind of process-based impact analysis essentially addresses two concerns in QA policy implementation: how postsecondary institutions respond to the dual needs of accountability to external expectations and internal commitment to continuous improvement, and whether QA brings about quality improvement. This research explored these concerns by examining four institutional QA processes at two universities and two colleges in Ontario. The study revealed different approaches that postsecondary institutions chose to use in response to QA policies and the mechanisms behind those approaches that may account for different experiences. The findings will not only help address the challenges in QA policy implementation within Canadian postsecondary education but also enrich international discussions on the accountability-improvement dichotomy in the context of QA.
The Ontario Context for Quality Assurance in Postsecondary Education

The province of Ontario has 24 public universities and 23 public colleges, including five institutes of technology and advanced learning which are authorized to offer up to 15% of their programming for baccalaureate studies. Since 2010, a quality assurance system consisting of three distinctive QA frameworks overseen by three separate QA agencies has been in effect. The Ontario Universities Council on Quality Assurance (OUCQA) implements a QA framework that involves Ontario public universities in an eight-year cycle of auditing of the institutional QA process (known as the IQAP). The Ontario College Quality Assurance Service (OCQAS) plays a similar role within the college sector, implementing processes for validating new certificate and diploma programs and auditing program review processes (known as the Program Quality Assurance Process Audit, or PQAPA, prior to 2016) at public colleges. The Postsecondary Education Quality Assessment Board (PEQAB) assesses the quality of degree programs offered outside Ontario public universities, including those offered by public colleges. Notably, the governance structures of these three agencies differ: the OUCQA and the OCQAS are university/college consortium agencies whereas the PEQAB is a governmental agency (Baker & Miosi, 2010). In this sense, the QA frameworks that the OUCQA and the OCQAS implement are internal to the university and college communities but external to individual institutions.

There are two noteworthy commonalities among these three QA frameworks. One is that they all utilize learning outcomes-based standards as accountability schemes. These standards are known as the Degree Level Expectations in the OUCQA framework, the Credentials Framework in the OCQAS framework, and the Ontario Qualifications Framework used in PEQAB assessments. The application of learning outcomes to these qualification frameworks aligns well with, and is partly a response to, broader international developments toward adopting learning outcomes in higher education QA policies, particularly the Bologna Process to create the European Higher Education Area (Adam, 2008). The other commonality is that all the frameworks are intended to balance accountability and improvement by articulating the importance of both objectives in their policy documents (see more details in Liu, 2015). This balance aligns with a principle of good practice specified by the International Network for Quality Assurance Agencies in Higher Education (INQAAHE, 2016): that quality is primarily the responsibility of the postsecondary education institutions themselves. Thus, QA frameworks for Ontario postsecondary education are contextually grounded in the broad international context of QA development.

Literature Review

The literature offers three insights into the impact of external quality assurance (EQA) policies and mechanisms on postsecondary education. Firstly, EQA mechanisms exert impacts on both governance structures and the quality of teaching practices.
However, their impacts on the improvement of teaching infrastructure, the emergence of new routines and procedures, and the centralization and formalization of quality management have been found to be stronger than those on the improvement of teaching methods and quality (Langfeldt, Stensaker, Harvey, Huisman, & Westerheijden, 2010; Liu & Yu, 2014; Stensaker, 1999), except for mechanisms focussed on the curriculum (Vilgats & Heidmets, 2011).

Secondly, both external and internal factors condition EQA impacts on postsecondary institutions and their local academic units. As Martin (2018) observed in case studies of eight universities across the world, external conditioning factors include international accreditations, national quality assurance frameworks, and other regulatory obligations; these factors have provided opportunities to increase institutional capacity for internal quality assurance (IQA) and led to changes in IQA policies and structures within postsecondary institutions. Internal conditioning factors encompass leadership support, active stakeholder participation (including faculty, staff, and students), internal governance, the visibility of IQA processes, and the dependence of the institution on the government (Csizmadia, Enders, & Westerheijden, 2008; Martin, 2018; Veiga, Rosa, Dias, & Amaral, 2013).

A third insight is that discussions about EQA impact are grounded in contested views about the relationship between accountability and improvement. As Newton (2012) contended, EQA impact studies essentially address the relationship between accountability and improvement. The literature has presented contrasting views about this relationship. One view, often discussed by earlier researchers, polarized the two by arguing that accountability and improvement are independent of each other (Middlehurst & Woodhouse, 1995) and mutually exclusive (Thune, 1996; Vroeijenstijn, 1995). Accountability and improvement were found to be two equally unworkable polar scenarios: when accountability was dominant, improvement was ineffectual; when improvement was dominant, accountability was suppressed (Massy, 1997). There was a simple causality in which IQA processes were seen as related to improvement and EQA processes as associated with accountability (Stensaker, 2003). Inside a postsecondary institution, accountability may divert academic staff's attention away from the improvement of learning when they try to comply with bureaucratic imperatives, thus damaging the potential for quality improvement (Harvey, 1997, as cited in Kis, 2005). On the other hand, recent literature recognized a more nuanced accountability-improvement relationship (Williams & Harvey, 2015). QA agency representatives such as Woodhouse (1999) contended that accountability can always be reframed into the perspective of quality improvement. Faculty representatives also argued that although the point of departure of EQA was accountability, follow-up and follow-through on improvement recommendations may have emphasized an improvement approach, thus tilting the QA process toward the improvement end of the accountability-improvement continuum (Genis, 2002). The term quality enhancement has been used as a stepping stone between quality assurance and quality improvement (Filippakou & Tapper, 2008; Newton, 2012), thus positioning IQA practices on a continuum between accountability and improvement.
Conceptual Framework

Lattuca and Stark's (2009) academic plan model resonated with the present study as it presumes that curriculum development is subject to both internal and external influences. In particular, in this model faculty members, as an internal influence, play a key role in developing the curriculum, and external factors such as accreditation and QA standards also affect the curriculum development process. The present study focussed on program-level (rather than course-level) curriculum development.

The study was also grounded in two EQA impact models. The first was generated from an empirical study (Brennan & Shah, 2000a, 2000b) that drew upon the experiences of 29 institutions in 14 European countries. The model posits that QA impact falls on four levels: the system, the institution, the basic (organizational) unit, and individuals. It identifies three mechanisms of impact: rewards (i.e., the effects of the published results of quality assessment on funding, reputation, and influence), policies/structures (i.e., IQA processes that are created in response to EQA requirements and recommendations), and cultures (i.e., changed cultures that arise from the effects of IQA processes). The latter three of the four levels and the latter two of the three mechanisms were relevant to the present study. The other model (Scheuthle & Leiber, 2015) was conceptually derived from Coleman's (1990) social theory that explains the behaviour of social systems. This process-based impact model contends that EQA impact begins and ends at the institutional or macro level, but in between dips to the individual or micro level. Specifically, when postsecondary institutions encounter EQA policies, the individuals who implement the policies adopt certain kinds of actions, depending on their values and preferences, and those individual actions in turn bring about institutional change. Thus, QA impact entails interactions between the institutional macro level and the individual micro level of actors. These two models suggested to the present study that QA impact be analyzed on various levels through an examination of IQA processes and institutional cultures, with a view to uncovering the impact process.

Methodology

This study adopted a realist approach to impact analysis. Realists view a stratified reality and "see events as being produced by the interaction of a multitude of underlying causal entities operating at different levels" (House, 1991, p. 7). The stratified reality is exhibited in what Bhaskar (1978, 2008) called the three domains of reality, which consist of events, observable experiences, and generative mechanisms or structures that exist independently with a tendency to produce patterns of events. The realist approach is qualitative and process-based because it focuses on events and the processes that connect them and sees the analysis as a matter of identifying the actual generative mechanisms that result in a particular outcome in a certain context (Maxwell, 2012; Mohr, 1996; Sayer, 2010). As such, the realist approach fit well with the conceptual framework of this study.
This study employed case study methodology, which is recognized for its capacities to uncover processes linking inputs and outputs within a system (Hammersley, Gomm, & Foster, 2000) and for its value in qualitative impact analysis (Maxwell, 2004). Four events, in realist terms, at two universities and two colleges in Ontario that demonstrated the implementation of the system-wide QA policies were selected as the cases. These cases were selected to balance institutional distinctiveness and common IQA practices. This selection criterion allowed the study to demonstrate different scenarios at different institutions on one hand and to yield findings that were analytically applicable to other institutional contexts on the other.

The data sources for each case were relevant institutional documents and interviews with academic administrators and faculty members involved in the events. All the interviewed faculty members also served as program coordinators or program chairs within their academic units; therefore, in their interviews, they spoke about the experiences of their academic units in relation to the event as well as their own beliefs and experiences. Most of these interviews took place between August 2013 and May 2014. The interviews focussed on two main areas: how the implementation of system-wide QA policies affected the educational practices in the interviewee's institution or academic unit, and how the process intersected with their own beliefs about postsecondary education. The four events, along with the numbers of reviewed documents and interviewed individuals, are presented in Table 1. Altogether, these data revealed the events and experiences—at the institutional, academic unit, and individual levels—related to impacts of QA policies. The generative mechanisms that produced those events and experiences were identified by triangulating interviews and institutional documents within a case and comparing patterns between cases within and across postsecondary sectors.

Table 1. Data collection for the study

Research sites | Cases/Events | Academic administrators interviewed | Faculty members interviewed | Related documents
University A | Technology-supported curriculum mapping process at a university | 2 | 4 | 5
University B | Senate-approved University Learning Outcomes of a university | 9 | 8 | 6
College A | New program development process at a college | 4 | 2 | 6
College B | Program review process at a college | 6 | 3 | 6
Total | | 21 | 17 | 23

There were two limitations in the data. One was that, although the interviewees were purposefully recruited from different academic units at each institution, their perspectives and experiences were only partially representative of those within the institution. The other was that most interviewees tended to be receptive to the institutional changes they had experienced and amenable to the outcomes-based standards in QA frameworks. These generally positive attitudes could be ascribed partly to the interviewees' administrative roles and partly to the self-selection effects of interviewee recruitment. To alleviate this bias, during interviews the researcher intentionally probed whether the participants had observed any resistance among their colleagues to the event and, if so, what the experience was like.

In the next two sections on findings, brief case descriptions are first provided.
These are followed by the themes generated from these case studies, first about the two university cases and then about the two college cases.

Findings from the Two University Cases

Brief Case Descriptions – the Events

University A was an urban university with over 90% undergraduate student enrolment. In 2011, when the QA framework requiring each public university to develop its own IQA process started to be implemented, University A introduced a web-based curriculum mapping tool and created a technology-supported curriculum mapping process. The process was intended to ensure the alignment of university courses and programs with the undergraduate Degree Level Expectations (DLEs) in the processes of program review, curriculum renewal, and new program development.

University B was a medium-sized university in a rural area; undergraduate students constituted approximately 90% of the student population. The university began to use learning objectives to inform its academic programming in the 1980s. Since then, those learning objectives had become an integral part of its institutional practice. After a two-year development and consultation process, in the academic year 2012-13, the university's senate approved two sets of University Learning Outcomes (ULOs) for undergraduate programs and graduate programs and asked all university programs to incorporate the ULOs. Thus, rather than adopting the DLEs as University A did, University B used its home-grown ULOs to inform its educational practices.

Generated Themes

Two themes emerged from the analysis of the reported experiences at the institutional, academic unit, and individual levels with respect to the two university cases.

Theme 1

The two universities utilized different approaches to create IQA processes for curriculum development in response to EQA policies.

The QA framework expects public universities "to design and implement their own Institutional Quality Assurance Process (IQAP) that is consistent not just with their own mission statements and their university Degree Level Expectations [DLEs], but also with the protocols of this Framework" (Ontario Universities Council on Quality Assurance, 2018, p. 2). At University A, the curriculum mapping initiative was a direct response to the QA framework. The university essentially adopted the DLEs specified in the QA framework; the function of the web-based tool was to map departmental learning outcomes onto the DLEs. Therefore, the QA framework had a direct impact on the curriculum development of academic programs at University A. The interviewed senior administrator shared that

The background is the introduction of the learning-outcomes based approach to the quality assurance for curriculum… I think in 2010 or something like that, they [the Ontario Council of Academic Vice-Presidents] created this series of Undergraduate and Graduate Degree Expectations, and all the universities in Ontario were meant to adopt these in their curriculum development and program review policies. So [University A], of course, did that. (UA1)

In contrast, the DLEs in the QA framework had virtually no direct impact on University B when it designed its own learning outcomes that superseded the DLEs. The different impacts on the two universities can be ascribed to the different strategies they undertook.
While University A employed a responsive approach, University B used an improvement-oriented, pre-emptive institutional strategy. This was made clear by an interviewed educational developer: "they've [the Provost and the associate Vice-President] taken a very positive approach to focusing, advancing, and strengthening the quality of teaching and learning on campus, rather than from a quality assurance perspective" (UB1). The interviewed senior university administrator also explained: "So this notion of call for accountability, [name of the Provost] was ahead of our time in predicting the increased demands for this. That's when it [the ULO initiative] started" (UB2). The pre-emptive approach was well recognized within the university community. An associate dean of a school commented that the ULO initiative was a "pre-emptive response" to the system-wide requirement for quality assurance and accountability (UB5). A senior educational developer stated: "I think it was a response to it [accountability]. I think it was a recognition that that's the way that higher education is going and we need to help to bring that as a leader as opposed to do the minimum or be the follower" (UB4).

Further, it is important to note that the curriculum mapping initiative at University A was not only a response to the pressure from external accountability but also an institutional endeavour to address the internal need for enhancing faculty engagement with the curriculum. As the senior administrator stated,

Our internal purpose of the tool was really to facilitate departmental engagement with curriculum analyses for the learning outcomes approach, in particular around program review. That was sort of our motivation, to try to make faculty engagement with the process more efficient. (UA1)

For University B, the system-wide QA framework did play a role in its initiative of creating and implementing the ULOs. The external framework served as a catalyst to the university's internal process of implementing the ULOs, as the senior administrator commented:

I think there was an understanding of the need to have an appropriate quality assurance process that would reflect the quality of the system on one hand while internally we were working on or talking about curriculum innovation, and appropriate pedagogical practices, and assessing what we do. So in a way these two things are independent but they are connected. For us there was no disassociation between the two; for us, we saw it as one enabled or assisted the other one… What quality assurance did is to give us a little leverage to help us do it faster. (UB2)

The external framework also made the university more committed to the learning outcomes approach. The educational developer stated that

The IQAP provided increased motivation for people to more intentionally take an outcomes-based approach because of the requirements communicated in IQAP… They've provided another resource for us to engage in a conversation with faculties regarding processes that most faculties are already doing. (UB1)

The two cases illustrated that the two universities stood at different points on the accountability-improvement continuum. University A started its curriculum mapping initiative from the accountability end and moved toward the improvement end as the initiative was implemented.
The accountability-oriented view taken at the university level was also shared by some faculty members at the academic unit level. A faculty member on the curriculum committee of a department made this comment:

There's a correlation between the UDLEs [Undergraduate Degree Level Expectations], the learning outcomes, and the curriculum because it's great to say, 'We adhere to UDLEs', and you can even go as far as saying 'We have learning outcomes that we can identify or correlate with those UDLEs.' … What this [curriculum mapping] does is this provides the evidence, the concrete evidence. It's just not subjective—what I'm saying. This shows very clearly, not only that we are achieving it but exactly how we are achieving it. … It allows us to present a very clear, concrete correlatable result. (UA5)

In comparison, many departments at University B took an improvement-oriented, ground-up approach to learning outcomes development; that is, their motivation for using the learning outcomes approach mainly lay in improving the curricular and pedagogical practices within the department, as shown in the quote below. However, this improvement-oriented view was contextually situated within, and interacted with, the IQA process that ultimately ensured the EQA expectations were met.

That [development of program learning outcomes] was never driven by IQAP. It's never been a top-down; this is always bottom-up. It is our [discipline]-driven. … There is internal influence, which comes from us, as a sense of where we are as a discipline. So definitely internal. In terms of institutional influence, there has been an expectation in the last year that we explicitly integrate what the university wants into our own assessment plans, which is what we've done. (UB6)

Theme 2

Leadership support and individual values played important roles in enabling the two initiatives at the two universities.

At both universities, there was championship for initiating the event. At University A, a senior university administrator endorsed an educational developer's idea of using a web-based tool for curriculum mapping. At University B, five interviewees expressed their high regard for the forward-thinking senior leadership of the university. One of them shared that "at the leadership level, both [names of the Provost and an associate Vice-President] have taken a forward-thinking but collaborative approach to developing these [university] outcomes. That has both increased capacity and built awareness" (UB1). Conceivably, without leadership support, the two initiatives would not have gone far.

All the individuals interviewed for the study played a leadership role at the university or academic unit level. Strikingly, the interviewees' own perspectives on the outcomes-based approach seemed to have, to a large extent, influenced how they motivated others to implement the outcomes-based institutional processes. For example, a department head (also a faculty member) shared that

I really felt that this would improve the clarity of our goal. I think it provides more transparency to students. I actually embrace it as an important element of good teaching practice. I just felt we needed to do it because it was the best thing for the quality of our programs. (UB14)

With his genuine belief about the benefits of the outcomes-based approach, he did not encounter resistance when he was trying to bring his colleagues on board. He said,

To me, when I talk to faculty, I tend to minimize the accountability piece, although if it's appropriate, I'll bring it up. First and foremost, it's all about, everybody here cares about the quality of the programs, and if you can convince them that this will have an effect on the quality, they'll be there.

His experience corroborated a connection between individual beliefs and actions in addressing challenges to the outcomes-based approach. In contrast, another department head (also a professor) who expressed strong skepticism about the outcomes-based approach emphasized the accountability rationale when trying to engage her colleagues: "We gave them the message that we had no choice, so we just had to do it. That is what convinced them that they just had to do it" (UA3). This accountability-focused communication approach, when coupled with a lack of buy-in from the departmental leader herself, ultimately became a barrier to making positive change to the curriculum and the program.

Findings from the Two College Cases

Brief Case Descriptions – the Events

College A was one of the five institutes of technology and advanced learning in Ontario; thus, the development of degree-granting programs constituted an important component in its IQA processes. The new program development process consisted of two phases and was recognized as having greater clarity and efficiency than the previous four-phase process. At the end of Phase 1, a program proposal was expected; at the end of Phase 2, a full document with all the details about the new program would need to be in place. A college-wide program QA committee provided support and formative peer review during Phase 1 and approved the full new program document toward the end of Phase 2. By the time of data collection, the committee had been in operation for two years.

College B was a typical Ontario community college that offered certificate and diploma programs with few degree programs. Its program review process was introduced in 2010 as a major revamp after the college's PQAPA review in 2008. The process consisted of eight components: identifying the program for formal review; a process orientation meeting; program mapping; a review of program data and an internal assessment report; a focus group meeting with external stakeholders; creating the final report; presenting the final report to the college's governing body; and follow-up actions. Compared with the previous process, the new model provided clearer instructions, was more data-based, and had greater depth, but it demanded a higher time commitment. Another change was that all program reviews became centralized under the oversight of the teaching and learning centre of the college, whereas previously each school had conducted its own program reviews.

Generated Themes

Two themes emerged from the analysis of the reported experiences at the institutional, academic unit, and individual levels with respect to the two college cases.

Theme 1

The QA process at both colleges exhibited institutional endeavours to balance accountability to EQA policies with internal continuous improvement.

Academic programs offered by Ontario public colleges were accountable to two QA frameworks.
The OCQAS framework fulfilled two expectations in the government's 2003 Framework for Programs of Instruction: the Ontario college sector is to establish a system-wide credentials validation service that will provide "reasonable assurance" for the (non-degree) credentials under the Credentials Framework, and public colleges are "to establish mechanisms for the review of their programs of instruction to ensure ongoing quality, relevancy, and currency" (Ontario Ministry of Training, Colleges and Universities, 2003, pp. 4–5). The framework required all college non-degree programs to conform to the outcomes-based Credentials Framework and Program Standards that the government began to produce in the 1990s. It also required individual colleges to conform to six PQAPA criteria when implementing IQA to ensure the quality of their programs (see the six criteria in Liu, 2015). These criteria asked all college academic programs to set learning outcomes in accordance with the 2003 policy directive. The colleges were also to formulate academic policies and allocate resources (human, physical, and financial support) to support student achievement of the program learning outcomes. In addition, college degree programs were to be assessed against a set of 13 quality criteria and related benchmarks, including the outcomes-based degree level standards under the PEQAB framework.

Although neither of these QA frameworks was intended to drive the curriculum development process, their impacts on the actual work of curriculum development in colleges were tangible. These impacts arose from the compliance mechanisms that had been set up in the government's policy directive and funding model. Both colleges in this study were quite amenable to the EQA obligations. This was reflected in both college policy documents and practices. All the interviewees indicated the importance of complying with the QA frameworks. For instance, a senior administrator at College A highlighted,

We make sure that we've got a solid framework and that whatever they're building meets Ministry guidelines and standards because you can build a beautiful program that doesn't meet all the standards and that's a problem. That's also not fair to your program. (CA3)

However, it was also evident from the interviews at both colleges that meeting the expectations in the QA frameworks was only part of the colleges' considerations; this was often partnered with their internal needs in QA implementation. At College A, satisfying both the system-wide expectations and the college's own internal needs was the most important consideration in the new program development process. An administrator at the teaching and learning centre made this clear up front in the interview by saying,

The major components [of the process] would be ensuring that each step we take in terms of proposing an idea, researching the idea, and moving forward with development is keeping in mind the right parameters to abide both by provincial expectations but also college expectations. (CA1)
The senior administrator also stressed the importance of balancing external and internal expectations by saying,

At the end of the day, when we do our submissions, we know that we've met the legislative requirements, [the college's] internal practices or pieces that are important to us, and we value in our mission and vision that we've engaged our entire community. (CA3)

In practice, the college deliberately created templates for new program development, which embedded the learning outcomes requirements from the OCQAS and PEQAB frameworks as well as the college's expectations articulated in its strategic plan and academic strategy documents. These templates demonstrated the college's commitment to both the provincial mandate and the college's own mission.

At College B, its board policies explicitly addressed the objectives of accountability and continuous improvement. One board policy stated that the president of the college "shall not fail to ensure" that all programming is consistent with the Ministry's requirements and policy for programs of instruction. Another affirmed continuous improvement as a goal for the quality work of the college. In practice at College B, EQA standards were highly important considerations in program reviews. The program review coordinator at the teaching and learning centre brought up the expectations from both inside and outside the college when commenting on what factors had shaped its program review process:

Obviously the PQAPA did [shape the review process]. I carry this [PQAPA manual] with me all the time. There is a balance between or amongst Ministry requirements, college requirements, and stakeholders. I think that those three elements have to be embraced within the program review process and I think that our current model has done a very good job of that. (CB2)

At the academic unit level, administrators of those programs that were subject to both accreditation requirements and the internal program review process liked to start the internal process two years before the accreditation review so that one could build upon the other. In the meantime, interviewees also shared that the internal program review process functioned as a continuous improvement mechanism for accreditors. As such, institutional program review and program accreditation were brought together to serve the dual purposes of accountability and improvement.

Experiences within academic units also showed the impact of PEQAB policies on the development of degree programs when the developers tried to meet the degree standards. A proposed computer science program incorporated a capstone project for all students in their fourth year and an option of completing a thesis for graduation to fulfill the requirement of "research and scholarship" (CA6), on the advice of university academics in this field. Regarding the same standard, a business program team shared, "we spent a lot of time on that… We have to really think about [it] because the general college curriculum would not meet that standard" (CA4).

Theme 2

The teaching and learning centre played an instrumental role in implementing IQA processes.

A common factor that enabled the two processes at the two colleges was the significant role that their teaching and learning centres played in helping academic program teams navigate the IQA processes.
At College A, the staff from the teaching and learning centre supported program teams from the beginning of the process until the program proposal was submitted internally and externally. New program development and curriculum development were part of the portfolio of the teaching and learning centre at College A, along with program review and faculty development. An administrator at the teaching and learning centre commented: "Quality is built right into everything we do"; part of the centre's function was quality assurance (CA1). Curriculum consultants at the centre saw themselves as "a process person" and "a catalyst" for new program development while recognizing that the locus of control lay within the academic unit (CA2).

Another noteworthy feature of the new program development process at College A was a long-standing institutional practice in which all the college programs and courses were required to develop a summative and cumulative statement about their students' performance by the end of the program (known as the "critical performance statement") before articulating the program/course learning outcomes. This practice was carried on and remained an integral part of the streamlined new program development process, demonstrating a continuous improvement from an existing college tradition. In practice, the new program team was encouraged to seek support from the curriculum consultants at the teaching and learning centre to help them develop critical performance statements and program/course learning outcomes.

At College B, the whole program review process was overseen by the college's teaching and learning centre. An important mandate of the centre was being "responsible for providing coordination and support for each formal program review," according to the college's academic policies. A senior college administrator highly commended the work that the teaching and learning centre did and emphasized, "It is important to set up the right team with a singular focus on quality assurance" (CB1). In practice, the program review coordinator was responsible for making sure that the eight components of a review process would happen as scheduled, and curriculum consultants supported processes in curriculum mapping. Thus, the role of the teaching and learning centre in supporting program review was well recognized in college policies and integrated into daily practice.

Discussion

The four case studies attested to Lattuca and Stark's (2009) academic plan model in that the QA policies and standards for the Ontario postsecondary educational system constituted external factors that influenced the curriculum development process of academic programs in Ontario postsecondary institutions. The impacts were revealed through four IQA processes at four postsecondary institutions, which constituted the four events investigated in this study. The four case studies uncovered the stratified realities of the impacts of EQA policies and standards; that is, the impacts varied as a result of interactions between different generative mechanisms at play and experiences at the individual, departmental, and institutional levels. Part of these processes was the significant role of individuals, more specifically individual leaders, in enabling or impeding departmental or institutional changes, which corroborated Scheuthle and Leiber's (2015) EQA impact model.
The interactions between generative mechanisms and experiences at different levels emerged as the four themes generated from the analysis. The four themes demonstrated the generative mechanisms that were in operation when system-wide QA policies were implemented in Ontario postsecondary institutions. The mechanisms functioned as internal conditioning factors of the impact of system-wide QA policies on curriculum development.

The most prominent mechanism, which emerged from all four cases, was the institutional endeavour to harmonize accountability to the EQA expectations and the need for internal continuous improvement. This means that the dual purposes of accountability and improvement can be compatible and well integrated into one initiative, at least from a design perspective. From an implementation perspective, the dynamics between accountability and improvement varied from case to case, thus positioning each case differently on the accountability-improvement continuum. The technology-supported curriculum mapping process at University A was initiated as a response to meeting the outcomes-based standards in the system-wide QA framework, which was the accountability end of the continuum. However, as the process evolved, faculty engagement became another major purpose, tilting toward the improvement end. For University B, the improvement dimension was far stronger than the accountability dimension, as the university proactively designed its own institutional-level learning outcomes that aligned with and superseded the standards in the system-wide expectations. For the two college cases, the accountability dimension was given priority over the improvement dimension as these colleges were mandated to ensure that the system-wide QA requirements would be satisfied first in their institutional practices. However, the accountability dimension did not outweigh institutional needs for improvement. Instead, accountability-oriented components were well integrated into the institutional process and worked seamlessly together with internal procedures for continuous improvement. These variations among the four cases demonstrated the autonomy Ontario postsecondary institutions had in implementing system-wide quality assurance policies.

A second generative mechanism occurred at the individual level: leadership support and individual values. While this theme was reported only for the two university cases, it also applied to the two college cases. At College A, the leadership residing in the Program Quality Assurance Committee ensured harmonization between external and internal expectations in the implementation of the new program development process. At College B, the director of its teaching and learning centre played an instrumental role in implementing the program review process. In all these cases, leaders at the academic unit and institutional levels became agents of change; their values and actions facilitated or impeded the changes at different levels. This finding supported a realist view that participants' intentions and beliefs are essential parts of the causal mechanisms operating in a particular setting (Maxwell, 2012; Sayer, 2010).
A third generative mechanism was organizational: the instrumental role of teaching and learning centres in implementing institutional processes that involve both quality assurance and curriculum development. This role was prominent at the two Ontario colleges. The functions of the teaching and learning centres expanded from curricular and pedagogical support to leading the implementation of IQA processes; outcomes-based standards in EQA policies became indispensable knowledge for educational developers and curriculum consultants and were well incorporated into curricular and pedagogical practices. In the two university cases, educational developers also served as proponents and facilitators in the process of implementing institutional changes that were oriented toward an outcomes-based curriculum development process. While this organizational change entailed centralized management in some cases, the institutionalized QA function of the teaching and learning centre has raised the profile of educational developers as change agents and has helped cultivate an internal culture of quality within postsecondary institutions. This demonstrates a quality development model in which the functions of quality assurance and educational development in postsecondary institutions are combined into one office so that accountability and improvement can be balanced (Gosling & D'Andrea, 2001). Arguably, the educational developers who engaged in the QA function in the four cases became catalysts for reconciling the accountability and improvement purposes of institutional QA processes.

In addition, a generative mechanism that was applicable to University B and College A was institutional culture, which is a mechanism of QA impact according to Brennan and Shah's (2000a, 2000b) model. At University B, institutional learning objectives had been in place for decades before the introduction of the University Learning Outcomes; at College A, academic programs benefited from a long-standing practice of using "critical performance statements" as summative learning outcomes. These tools embodied institutional culture, which could make a difference to how EQA policies impacted institutional processes. For example, University B developed quite a different approach to meeting EQA requirements than other universities, like University A, that did not have a similar institutional tradition. As Martin (2018) observed, "IQA tools are not effective per se; rather, their effectiveness depends largely on the way in which they are organized and used" (p. 253).

Further, the four case studies revealed two types of observable experiences: positive changes to the curriculum processes and negative experiences of some faculty members. Improvements were identified in three areas. First, faculty members and administrators observed a number of benefits from those institutional QA processes, including better coherence in the curriculum and specific curricular changes. These benefits promised to improve the quality of academic programs. This suggests that the institutional processes in the four cases created beneficial opportunities to enhance the curriculum and academic programs. Second, engagement with the institutional QA process was transformative for some individuals. For example, one faculty member and one curriculum consultant at College B shared that getting involved in the new program development process was a significant professional development experience for them.
Third, improvement occurred to the quality assurance process itself. All four cases represented institutional efforts to enhance the existing IQA practices that aimed to strengthen the curriculum of academic programs. This improvement aligns with what other studies (Langfeldt et al., 2010; Liu & Yu, 2014; Stensaker, 1999) have found; that is, EQA mechanisms exert a strong impact on the administrative operations of postsecondary institutions. These experiences suggest that the positive effects of EQA policies did penetrate to the academic unit and individual levels within Ontario postsecondary institutions in the realm of curriculum development.

On the other hand, implementation of EQA policies also brought about confusion and frustration on the part of departments and individual faculty members. Notably, the sources of confusion and frustration were different for faculty and staff at colleges and universities. At the universities, some faculty members resisted the learning outcomes approach required by the QA mechanism, believing that the approach was misaligned with their discipline. These faculty members were more likely to feel that accountability schemes were imposed upon them, and less likely to willingly implement any process related to the accountability scheme. The sense of accountability and lack of buy-in among faculty members became a barrier to continuous improvement. In contrast, common challenges for college faculty members mainly lay in operational areas such as workload and time management, and they often did not challenge the outcomes-based approach behind the processes. This suggests that Ontario colleges were more readily compliant with outcomes-based QA standards than Ontario universities and that the relationship between outcomes-based accountability mechanisms and intrinsic interest in institutional improvement was less contentious for the college sector than for the university sector. Thus, there was a sectoral difference in the types of challenge encountered when postsecondary institutions responded to outcomes-based QA policies.

Concluding Thoughts

Although the four cases took place in the context of the Ontario QA frameworks, the implications of this study extend beyond the Ontario context in three conceivable ways.

First, the question of the accountability-improvement relationship remains relevant to other Canadian provinces and other parts of the world. There has been a significant shift from a focus on accountability to evidence-based continuous improvement in quality assurance schemes (Houston & Paewai, 2013; Morest, 2009) and student learning assessment (Liu, 2017; Russell & Markle, 2017). Similarly, this study demonstrates a palpable tendency within postsecondary institutions to balance accountability and improvement to enable institutional changes. It illustrates that accountability-oriented QA standards can be incorporated into institutional initiatives for internal improvement, thus contributing to the reconciliation view (rather than the polarization view) of the accountability-improvement relationship.

Second, the study reveals that system-wide QA frameworks exert impacts on the curriculum via their learning outcomes-based standards.
This may be because, when the outcomes-based approach is used, the goals of QA processes and curriculum development are aligned under the learning outcomes priority. In this sense, outcomes-based QA standards reinforce the implementation of outcomes-based education. In Ontario, the different outcomes-based standards for universities and colleges, which constitute important parts of a set of outcomes-oriented QA criteria, have penetrated into the realm of the curriculum in postsecondary education in what Sigler (2007) called the Age of Outcomes. It can be anticipated that the increasing orientation toward learning outcomes in quality assurance and postsecondary education will deepen the impact of QA policies and mechanisms on curriculum development. This impact of outcomes-based quality assurance processes on the academic realm of postsecondary education shows that learning outcomes constitute a distinct scheme of quality assurance (Aamodt et al., 2018).

Finally, the findings suggest that the QA impact on curriculum development is made possible via different IQA processes that are ingrained in institutional cultures at postsecondary institutions. The four case studies illustrated different QA tools, procedures, and policies in relation to curriculum development, including institutional learning outcomes, curriculum mapping tools and processes, program development and review processes, and support from the teaching and learning centre of the institution. The effectiveness of these tools, procedures, and policies may depend on a democratic society and a decentralized postsecondary education system where postsecondary institutions enjoy a considerable amount of autonomy and the system-wide QA policies embrace a balance of accountability and improvement. All these societal factors place a limit on the generalizability of the findings of this study to other parts of the world.

Acknowledgements

This study was supported by funding from the Doctoral Fellowship of the Social Sciences and Humanities Research Council of Canada and the Opportunities to Innovate Fund of the Higher Education Quality Council of Ontario (Advisor: Professor Glen Jones).

References

Aamodt, P. O., Frølich, N., & Stensaker, B. (2018). Learning outcomes–A useful tool in quality assurance? View from academic staff. Studies in Higher Education, 43(4), 614–624. https://doi.org/10.1080/03075079.2016.1185776

Adam, S. (2008, November). Learning outcomes current development in Europe: Update on the issues and applications of learning outcomes associated with the Bologna Process. Paper presented at the Bologna Seminar, Edinburgh, Scotland.

Altbach, P. G., Reisberg, L., & Rumbley, L. E. (2010). Trends in global higher education: Tracking an academic revolution. Rotterdam, the Netherlands: Sense Publications.

Baker, D., & Miosi, T. (2010). The quality assurance of degree education in Canada. Research in Comparative and International Education, 5(1), 32–57. https://journals.sagepub.com/doi/pdf/10.2304/rcie.2010.5.1.32

Bhaskar, R. (1978). A realist theory of science. Sussex, England: Harvester Press.

Bhaskar, R. (2008). A realist theory of science (2nd ed.). New York, NY: Routledge.

Brennan, J., & Shah, T. (2000a). Quality assessment and institutional change: Experiences from 14 countries. Higher Education, 40, 331–349. https://link.springer.com/article/10.1023/A:1004159425182
Brennan, J., & Shah, T. (2000b). Managing quality in higher education: An international perspective on institutional assessment and change. Buckingham, England: Open University Press.
Coates, H. (2010). Defining and monitoring academic standards in higher education. Higher Education Management and Policy, 22(1), 1–17. http://www.oecd.org/education/imhe/50310012.pdf
Coleman, J. (1990). Foundations of social theory. Cambridge, MA: Harvard University Press.
Csizmadia, T., Enders, J., & Westerheijden, D. F. (2008). Quality management in Hungarian higher education: Organisational responses to governmental policy. Higher Education, 56(4), 439–455. https://doi.org/10.1007/s10734-007-9103-3
Filippakou, O., & Tapper, T. (2008). Quality assurance and quality enhancement in higher education: Contested territories? Higher Education Quarterly, 62(1/2), 84–100. https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1468-2273.2008.00379.x
Finnie, R., & Usher, A. (2005). Measuring the quality of post-secondary education: Concepts, current practices and a strategic plan. Ottawa, ON: Canadian Policy Research Networks.
Genis, E. (2002). A perspective on tensions between external quality assurance requirements and institutional quality assurance development: A case study. Quality in Higher Education, 8(1), 63–70. https://doi.org/10.1080/13538320220127443
Gosling, D., & D'Andrea, V. M. (2001). Quality development: A new concept for higher education. Quality in Higher Education, 7(1), 7–17. https://doi.org/10.1080/13538320120045049
Hammersley, M., Gomm, R., & Foster, P. (2000). Case study and theory. In R. Gomm, M. Hammersley, & P. Foster (Eds.), Case study method: Key issues, key texts (pp. 234–258). Thousand Oaks, CA: Sage Publications.
House, E. R. (1991). Realism in research. Educational Researcher, 20(6), 2–9. https://journals.sagepub.com/doi/10.3102/0013189X020006002
Houston, D., & Paewai, S. (2013). Knowledge, power and meanings shaping quality assurance in higher education: A systemic critique. Quality in Higher Education, 19(3), 261–282.
International Network for Quality Assurance Agencies in Higher Education. (2016). Guidelines of good practice (revised edition). Retrieved from the INQAAHE website: https://www.inqaahe.org/sites/default/files/INQAAHE_GGP2016.pdf
Kis, V. (2005). Quality assurance in tertiary education: Current practices in OECD countries and a literature review on potential effects. Retrieved from the website of the Organization for Economic Co-operation and Development: http://www.oecd.org/education/skills-beyond-school/38006910.pdf
Langfeldt, L., Stensaker, B., Harvey, L., Huisman, J., & Westerheijden, D. F. (2010). The role of peer review in Norwegian quality assurance: Potential consequences for excellence and diversity. Higher Education, 59(4), 391–405. https://doi.org/10.1007/s10734-009-9255-4
Lattuca, L. R., & Stark, J. A. (2009). Shaping the college curriculum: Academic plans in context. San Francisco, CA: Jossey-Bass.
Liu, Q. (2015). The quality assurance system for Ontario post-secondary education: 2010–2014. Higher Education Evaluation and Development, 9(2), 55–79. https://doi.org/10.6197/HEED.2015.0902.04
Liu, O. L. (2017). Ten years after the Spellings Commission: From accountability to internal improvement. Educational Measurement: Issues and Practice, 36(2), 34–41. https://doi.org/10.1111/emip.12139
Liu, S., & Yu, H. (2014). Study of the impacts of the quality assessment of undergraduate education policy in China: Students' perceptions. Higher Education Studies, 4(2), 52–60. https://eric.ed.gov/?id=EJ1076480
Martin, M. (Ed.). (2018). Internal quality assurance: Enhancing higher education quality and graduate employability. Paris, France: International Institute for Educational Planning.
Martin, M., & Stella, A. (2007). External quality assurance in higher education: Making choices. Paris, France: International Institute for Educational Planning.
Massy, W. F. (1997). Teaching and learning quality-process review: The Hong Kong programme. Quality in Higher Education, 3(3), 249–262. https://www.tandfonline.com/doi/abs/10.1080/1353832970030305
Maxwell, J. (2004). Using qualitative methods for causal explanation. Field Methods, 16, 243–264. https://doi.org/10.1177%2F1525822X04266831
Maxwell, J. (2012). The importance of qualitative research for causal explanation in education. Qualitative Inquiry, 18(8), 655–661. https://doi.org/10.1177%2F1077800412452856
Middlehurst, R., & Woodhouse, D. (1995). Coherent systems for external quality assurance. Quality in Higher Education, 1(3), 257–268. https://doi.org/10.1080/1353832950010307
Mohr, L. B. (1996). The causes of human behavior: Implications for theory and method in the social sciences. Ann Arbor, MI: University of Michigan Press.
Morest, V. S. (2009). Accountability, accreditation, and continuous improvement: Building a culture of evidence. New Directions for Institutional Research, 143, 17–27. https://doi.org/10.1002/ir.302
Newton, J. (2012). Is quality assurance leading to enhancement? In F. Crozier, M. Kelo, T. Loukkola, B. Michalk, A. Päll, F. M. Galán Palomares, N. Ryan, B. Stensaker, & L. Van de Velde (Eds.), How does quality assurance make a difference? A selection of papers from the 7th European Quality Assurance Forum (pp. 8–14). Brussels, Belgium: European University Association.
Ontario Ministry of Training, Colleges and Universities. (2003). Minister's binding policy directive: Framework for programs of instruction. Retrieved from http://www.tcu.gov.on.ca/pepg/documents/FrameworkforPrograms.pdf
Ontario Universities Council on Quality Assurance. (2018). Quality assurance framework and guide. Retrieved from http://oucqa.ca/wp-content/uploads/2018/10/Quality-Assurance-Framework-and-Guide-Updated-Guide-Oct-2018-Compressed-Version.pdf
Postsecondary Education Quality Assessment Board. (2017). Handbook for Ontario colleges. Retrieved from http://www.peqab.ca/Publications/Handbooks%20Guidelines/2017HNDBKCAAT.pdf
Russell, J., & Markle, R. (2017). Continuing a culture of evidence: Assessment for improvement (ETS Research Report No. RR-17-08). Princeton, NJ: Educational Testing Service. Retrieved from https://files.eric.ed.gov/fulltext/EJ1168504.pdf
Sayer, A. (2010). Method in social science: A realist approach (2nd ed.). New York, NY: Routledge.
Scheuthle, H., & Leiber, T. (2015, March–April). Impact evaluation of quality assurance in higher education: Methodology, design and results. Paper presented at the biannual conference of the International Network for Quality Assurance Agencies in Higher Education, Chicago, IL.
Sigler, W. (2007). Managing for outcomes: Shifting from process-centric to results-oriented operations. Washington, DC: American Association of Collegiate Registrars and Admission Officers.
Stensaker, B. (1999). External quality auditing in Sweden: Are departments affected? Higher Education Quarterly, 53(4), 353–368. https://doi.org/10.1111/1468-2273.00136
Stensaker, B. (2003). Trance, transparency and transformation: The impact of external quality monitoring in higher education. Quality in Higher Education, 9, 151–159. https://doi.org/10.1080/13538320308158
Thune, C. (1996). The alliance of accountability and improvement: The Danish experience. Quality in Higher Education, 2, 21–32. https://doi.org/10.1080/1353832960020103
Veiga, A., Rosa, M. J., Dias, D., & Amaral, A. (2013). Why is it difficult to grasp the impacts of the Portuguese quality assurance system? European Journal of Education, 48(3), 454–470. https://doi.org/10.1111/ejed.12040
Vilgats, B., & Heidmets, M. (2011). The impact of external quality assessment on universities: The Estonian experience. Higher Education Policy, 24(3), 331–346. https://link.springer.com/article/10.1057/hep.2011.7
Vroeijenstijn, A. I. (1995). Improvement and accountability: Navigating between Scylla and Charybdis. Guide for external quality assessment in higher education. London, England: Jessica Kingsley Publishers Ltd.
Weinrib, J., & Jones, G. (2014). Largely a matter of degrees: Quality assurance and Canadian universities. Policy and Society, 33(3), 225–236. https://doi.org/10.1016/j.polsoc.2014.07.002
Williams, J., & Harvey, L. (2015). Quality assurance in higher education. In J. Huisman, H. de Boer, D. D. Dill, & M. Souto-Otero (Eds.), The Palgrave international handbook of higher education policy and governance (pp. 506–525). New York, NY: Palgrave Macmillan.
Woodhouse, D. (1999). Quality and quality assurance. In Organisation for Economic Co-operation and Development (Ed.), Quality and internationalisation in higher education (pp. 29–44). Paris, France: Organization for Economic Co-operation and Development. Retrieved from http://www.oecd-ilibrary.org/education/quality-and-internationalisation-in-higher-education_9789264173361-en

Contact Information

Qin Liu
[email protected]

Notes

1 All three system-wide QA frameworks have made the dual purposes of accountability and improvement explicit in their documents. The framework for the Ontario university sector is meant to "support innovation and improvement while cultivating a culture of transparency and accountability – i.e., quality assurance that produces quality enhancement [italics in the original document]" (Ontario Universities Council on Quality Assurance, 2018, p. 1). The intention of the PQAPA process for the college sector is to "contribute to the continuous improvement of the educational programs of the college system" (cited in Liu, 2015, p. 62). Since January 2016, the College Quality Assurance Audit Process has replaced the PQAPA; however, the intent remains the same. Similarly, the intended aim of the program evaluation standard for the PEQAB degree program quality review process is also continuous improvement of the program (Postsecondary Education Quality Assessment Board, 2017, p. 29).