Canadian Journal of Higher Education / Revue canadienne d'enseignement supérieur
Volume 47, No. 1, 2017, pages 80–96

The Representation of Library Value in Extra-Institutional Evaluations of University Quality

Brian Jackson
Mount Royal University

Abstract

The ways in which university quality assessments are developed reveal a great deal about value constructs surrounding higher education. Measures developed and consumed by external stakeholders, in particular, indicate which elements of academia are broadly perceived to be most reflective of quality. This paper examines the historical context of library quality assessment and reviews the literature related to how library value is framed in three forms of external evaluation: accreditation, university rankings, and student surveys. The review finds that the library's contribution to university quality, when it is considered at all, continues to be measured in terms of collections, spaces, and expenditures, despite significant expansion of library services into non-traditional arenas, including teaching and research, scholarly communications, and data management and visualization. These findings are contrasted with the frequently invoked notion of the library as the heart of the university.

In 1994, historian Shelby Foote said "a university is just a group of buildings gathered around a library" (Chepesiuk, 1994, p. 984). Foote's frequently quoted statement plays on the much older idea of the library as the metaphorical heart of the university. The quotation and the metaphor on which it is based both evoke a notion of centrality with respect to the library: that the library is fundamental to the fulfillment of the university's mission. As with any indispensable institutional organ, ongoing assessment of the library's contribution is crucial to healthy functioning. If the broader community of higher education stakeholders holds the library in the esteem suggested in these and other adages, it should follow that measures of university quality take appreciable account of library quality.
This study will review the historical context of library quality assessment and examine the literature on, and positioning of, libraries in institutional quality evaluations to determine the extent to which this is currently the case.

The ways in which library quality is measured have evolved substantially since the first collection of university library statistics was published more than a century ago. That first attempt to compare academic libraries, called the Gerould Statistics (Molyneux, 1986), provided metrics on collection size, collection growth, expenditures on acquisitions, number of staff, and staff salaries. It is a testament to the choices made by James Gerould that all of these data are still collected by most major library associations about their member libraries. They provide an important glimpse into changes in the support for and operation of academic libraries. As a means of measuring library quality, however, the usefulness of these figures is questionable. The size of a library's collection, for example, says little about either the suitability of the collection for the research needs of its users or the ability of users to find needed materials, although use of that indicator persists in most library quality assessments. Like other statistics collected by Gerould, these provide no definitive insight into the library's contribution to the mission of the institution.

The metrics gathered by library associations have expanded in number and depth but continue to focus on collections, expenditures, library use, and staffing. The primary use of these data has been to observe trends and behaviours related to libraries and library use. At least two associations have used data gathered about member libraries to develop rankings. The Association of Research Libraries' (ARL) Library Investment Index, published annually in the Chronicle of Higher Education, ranks libraries based on a combination of (i) total expenditures, (ii) expenditures on salaries, wages, and collections, and (iii) number of staff. Der Bibliotheksindex (BIX) is a library ranking operated jointly by the German Library Association and the North Rhine-Westphalian Library Service Centre. The BIX ranking relies on a larger number of indicators than does the ARL Index, dividing metrics into categories related to services, usage, efficiency, and development.

The step taken by these two organizations from simply collecting data to ranking libraries moved the assessment landscape from observation to value judgment. It also amplified the weight attributed to particular characteristics to a sufficient extent to conclude that one library is outperforming another, based on those attributes. Neither ranking, however, provides any indication of the library's real contributions to the university. The use of inputs and outputs to rank performance assumes that the chosen indicators will confer some benefit on stakeholders, most likely to learning and research objectives. Not all stakeholders, though, necessarily share those assumptions. In response to the limitations of input and output metrics, the ARL launched a New Measures Initiative in the late 1990s, in which it explored new ways to measure library quality.
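Before turning to those newer measures, it is worth making concrete how input-based indexes of the kind described above collapse a library's profile into a single rank. The following sketch is illustrative only: the libraries, figures, indicator names, and equal weights are all invented, and neither the ARL Investment Index nor BIX publishes this exact method.

```python
# Illustrative sketch of an input-based composite library index.
# All data and weights are hypothetical; this is not the ARL or BIX formula.
from statistics import mean, stdev

libraries = {
    "Library A": {"total_exp": 42.0, "salary_exp": 18.0, "staff": 310},
    "Library B": {"total_exp": 35.5, "salary_exp": 16.5, "staff": 275},
    "Library C": {"total_exp": 28.0, "salary_exp": 11.0, "staff": 190},
}
indicators = ["total_exp", "salary_exp", "staff"]
weights = {"total_exp": 1.0, "salary_exp": 1.0, "staff": 1.0}  # arbitrary choice

def composite_scores(data):
    # Standardize each indicator across libraries (z-score), then take a
    # weighted sum, so libraries above the mean on an input score positively.
    stats = {
        ind: (mean(lib[ind] for lib in data.values()),
              stdev(lib[ind] for lib in data.values()))
        for ind in indicators
    }
    return {
        name: sum(
            weights[ind] * (metrics[ind] - stats[ind][0]) / stats[ind][1]
            for ind in indicators
        )
        for name, metrics in data.items()
    }

# Higher composite score means higher rank.
for rank, (name, score) in enumerate(
    sorted(composite_scores(libraries).items(), key=lambda kv: kv[1], reverse=True),
    start=1,
):
    print(f"{rank}. {name}: {score:+.2f}")
```

Whatever the precise formula, the ordering such an index produces depends entirely on which inputs are counted and how they are weighted, which is why the move from data collection to ranking amounts to a value judgment.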
Two of the most widely used tools to come out of the New Measures Initiative, both drawn from concepts developed in the business community, are LibQUAL+, a survey used to measure library service quality, and the Balanced Scorecard, an approach to planning and measuring the success of performance objectives. Although a large number of libraries have incorporated these initiatives into their assessment activities, the New Measures have not to any marked extent been adopted into institutional quality frameworks.

Concurrent with the development of the New Measures, and with libraries under increasing pressure to justify their cost, the literature on performance measurement in libraries increasingly focused on the ways in which libraries contribute to the achievement of institutional objectives. There is now a growing body of work that explores the value of libraries at an institutional level. Megan Oakleaf's seminal work, The Value of Academic Libraries (2010), provided a detailed summary of the ways in which library impact on institutional outcomes is and could be measured. Other, more focused studies have examined the library's impact on institutional reputation (Weiner, 2009), student retention (Mezick, 2007), faculty and graduate research (King & Tenopir, 2013; Smith, 2003; Wilson & Tenopir, 2008), and student success as measured by grades (Wong & Cmor, 2011; Zhong & Alexander, 2007). In some cases—the Library Cube at the University of Wollongong, in Australia, for example—libraries have collaborated with institutional analysis departments to link library use data with other performance indicators on an ongoing basis (Jantti & Cox, 2013). While these efforts have provided some evidence of the library's impact on students and faculty, the data are correlational, and very few consider potential impacts of non-traditional library services such as those related to scholarly communications, research data management, and legal aspects of information use.

Library quality has often been framed from the perspective of the library or the parent institution. Rarely does the literature on library quality go beyond immediate stakeholders—students, faculty, and university administration. If libraries do indeed factor significantly into university quality, then library services should also be of concern to external stakeholders—governments, accreditation agencies, research funders, prospective employers of graduates, prospective students and their families, and other community members with a vested interest in the quality of postsecondary education. These parties are responsible for political, financial, and moral support for higher education, but the ways in which they receive information relevant to making value judgments on a university may differ from those of internal stakeholders.

For the purposes of this study, a distinction will be made between information aimed at external stakeholders but produced by institutions themselves (annual reports, financial statements, advertising, and other sources) and quality measures that are designed external to the university—program and institutional accreditation results, university rankings, and student surveys. The latter are under scrutiny here because they provide a perspective on institutional quality that may differ significantly from the perspective of stakeholders within the university.
And while none of these assessments perfectly defines university quality, all are critically important for monitoring performance, ensuring compliance with standards, making institutional comparisons, and informing changes to planning and policy. It is from that broader, external perspective that this study will explore the notion of the centrality of the library with respect to university quality.

Accreditation

Accreditation has become the definitive means of assessing university and program quality in many parts of the world. What was once an informal, voluntary system of performance monitoring is now typically mandatory for institutions to receive government funding or recognition by professional bodies. Recent moves toward public accountability for universities and the standardization of educational quality indicators, as well as a perception of higher education as an economic driver, have placed a greater emphasis on the government's role in regulating the evaluation of postsecondary institutions (Eaton, 2012). This change has perhaps been felt more strongly in the United States, with its history of academic independence, than in nations where government oversight of educational matters has been greater (Neal, 2008). Quality assurance in many countries, though, continues to be conducted through a process of self-evaluation and peer review, guided by regional and professional accreditation standards. The effect of these changes has been that a system designed primarily for internal monitoring has evolved into one that is performed increasingly for the benefit of external audiences (Eaton, 2009). Accreditation is now expected to provide assurance to students and their families that their degrees will be recognized by employers and graduate schools. It is used by governments to ensure financial accountability. And although universities self-evaluate adherence to their own missions, their policies and practices are expected to fall within the framework outlined by accreditors.

The library has long had a role in accreditation, although that too has evolved. In 1922, among its principles for accrediting colleges, the North Carolina State Department of Public Instruction advised that "a college should have a live well [sic] distributed professionally administered library of at least 8,000 volumes" (Allen, 1922, p. 13). A 1935 guide to higher education in the United States (Elliott, Ashbrook, & Chambers) advised trustees that "use and usableness of the library rather than total number of volumes is stressed" (pp. 91–92) in accreditation. And in the 1950s, the New England Association of Colleges and Secondary Schools evaluated "the extent to which the library is actually used by both students and faculty; the number, the variety, the recency of publication, and suitability of the books; the sufficiency of space set aside for quiet study and leisure-time reading; the accessibility of other library materials . . . and the amount of the annual appropriation for new books" (Association of College and Research Libraries, 1958, p. 9).

Although qualitative elements were present in some of these assessments, the dominant measures of library quality were inputs (volumes, expenditures, space) and outputs (use). The 1990s and 2000s saw a change in approach to accreditation in the United States, from one that looked primarily at basic metrics to one that incorporated outcomes as an element of quality.
Accreditation agencies began to challenge assumptions that sufficient resources naturally lead to positive outcomes and wanted to explore the real impact that the university and its departments were having on the work of students and academic staff. Within some accreditation standards, the way that library quality was evaluated was part of this shift (Dalrymple, 2001).

For library researchers concerned with accreditation, the incorporation of information literacy outcome measures in accreditation guidelines has been the major focus. Information literacy has been defined by the American Library Association (1989) as an outcome through which students "recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information." The concept of information literacy was coined in the 1970s and was refined as a learning outcome, primarily by libraries and library organizations, throughout the 1990s and 2000s (Saunders, 2010). Those who work in libraries have been the most vocal advocates for general acceptance of information literacy as an indispensable skill. The advent of the information age brought with it the recognition of the importance of information skills for which libraries had been striving.

The majority of the literature written on accreditation and libraries in the past 15 years has focused on the ways in which American regional and programmatic accreditation agencies treat, or fail to treat, information literacy (Bradley, 2013; Gratch-Lindauer, 2002; Saunders, 2007; Thompson, 2002). This is not surprising. Programs of information literacy instruction are as much a part of the mission of the library as is the provision of learning resources. It is the foremost arena in which the expertise of library staff directly contributes to student learning outcomes. Accreditation standards, though, are inconsistent in their treatment of information literacy as a learning outcome. In the United States, for example, the Middle States Commission once prescribed a highly collaborative environment between librarians and teaching faculty to incorporate information literacy into the larger curriculum (Middle States Commission on Higher Education, 2009). These standards were modified in 2014 so that information literacy is listed as an outcome, but without discussion of collaborative efforts (Middle States Commission on Higher Education, 2014). The North Central Association of Colleges and Schools, as another example, has been silent about higher-level instruction and collaboration involving library staff (Higher Learning Commission, 2014). In the UK, the Quality Assurance Agency for Higher Education (2013) singles out information literacy as a crucial set of skills but leaves the responsibility for teaching those skills unaccounted for. Information literacy does not appear in any of the quality assurance documents issued by the multiple Canadian agencies responsible for accreditation, or in the documents of the Australian Tertiary Education Quality and Standards Agency.

While librarians have celebrated the recognition of information literacy as a core outcome of general higher education programs, it should be emphasized that some accreditors are chiefly concerned that students develop the related skills and thought processes but make no recommendation on who should facilitate that development (Saunders, 2010). To be certain, some agencies strongly endorse librarian/faculty collaboration toward this outcome, but these are a minority, at least in Western nations.
There is evidence, as well, that faculty see information literacy instruction as primarily their responsibility, with some support from library staff (Jackson, MacMillan, & Sinotte, 2014; Stanger, 2012; Weiner, 2014). All of this suggests that although crucial outcomes related to information literacy have rightfully made their way into accreditation standards, and there is some recognition that the library has an instructional role, it is not an automatic assumption that accredited institutions deploy the expertise of librarians to help students develop these skills.

Information literacy is a topic of significant concern to library staff, but it is of course not the only accreditation standard related to libraries. Collections remain the most consistent library criterion for accreditation; the availability of program-appropriate learning resources is the only guideline related to libraries outlined by all accreditation agencies, although most of these agencies also address provisions for adequate library spaces. In some British, Canadian, and Australian standards in particular, library spaces and resources appear within inventories of campus facilities of concern to accreditors. The Saskatchewan Higher Education Quality Assurance Board (2014), for example, requires that "physical, learning and information resources (both start-up and continuing) are in place to assure a quality degree program. These include classrooms, shops, laboratories and other facilities, equipment, libraries and other information resources, computing facilities, as well as cooperative work placements/practica/internships" (p. 16). These items do typically receive detailed individual evaluation as part of the accreditation process, but there is no indication of their relative standing, of whether library quality is more or less important than that of classrooms or shops, for example. Additional elements of library quality that appear infrequently in accreditation documents include "professionally qualified and numerically adequate staff" (Commission on Institutions of Higher Education, 2011, p. 20) and evidence that the university "regularly and systematically evaluates the quality, adequacy, utilization, and security of library and information resources and services" (Northwest Commission on Colleges and Universities, 2010).

Accreditation agencies want to see evidence of library capacity to support academic programs, but relatively few have substantially modified their written standards to reflect evolving notions of the library's impact (learning outcomes) or to require evidence of higher-level administrative activities (ongoing evaluation, data collection). No accreditation standards discuss the non-traditional library services, such as scholarly communications and research data management, that have become de rigueur in academic libraries; inputs remain the sole or primary measures of library quality in most standards. Undoubtedly, institutions themselves expand upon the library's roles and responsibilities where appropriate within self-assessments, and these additional functions are surely considered by bodies conducting institutional evaluations, but for most accreditors, an adequate library is one that has relevant collections and sufficiently modern spaces with a reasonable capacity to house students.
The Association of College and Research Libraries (2011) has developed a comprehensive set of standards for academic libraries, with "accrediting practices" in mind. Many of the principles in the document, though, including those under the headings institutional effectiveness, professional values, educational role, discovery, and external relations, go far beyond the accrediting practices currently outlined in the documents provided by most accreditation bodies. These omissions indicate that the accreditation process does not view library quality through the same lens used by the library community. The library as a source of learning materials, study spaces, and information technology is crucial to the university enterprise, but most aspects of library quality as conceived of by librarians are not considered central to university quality as understood by accreditors.

University Rankings

Of the measures of university quality under examination here, rankings are the most widely distributed. As a genre, they are targeted to a broad spectrum of stakeholders, including potential students and their families, funding bodies, policy makers, and university administrators, although individual ranking publications may have niche markets. They are also the least ambiguous of the three evaluation types under consideration with respect to the perceived value of each facet of university quality. The choice and weighting of variables used in rankings speak volumes about expectations for universities and their functions. Taken together, rankings tell us which elements of academia—teaching, learning, research, outcomes, the experience—we as a society value, or should value, according to the rankers.

University rankings have been criticized for decades, primarily for methodological issues (Kehm, 2014), but it was with the advent of global rankings in 2003, when the Academic Ranking of World Universities was established, that the discourse on rankings began to address their widespread influence. As well-established information organizations—The Times, Quacquarelli Symonds, Thomson Reuters—joined the ranking business, observers noted an increasing reliance on rankings to benchmark, and even guide, performance (Hazelkorn, 2011). All stakeholder levels, from potential students to university and government policy makers, were paying attention and responding to rankings. The influence rankings appeared to have on institutional decisions, along with concerns about flawed methodologies, has generated misgivings about the process that have only increased over time.

An inventory of the most prominent global rankings (Academic Ranking of World Universities, Times Higher Education World University Rankings, QS World University Rankings, Leiden Rankings) and domestic rankings (from the US News & World Report, The Times, The Guardian, Maclean's) would have changed little over the past decade. There has, however, been tremendous growth in the number of smaller, niche rankings available (Usher, 2009). Data sources used in these smaller analyses range from open surveys to public data on postsecondary institutions to profiles on the networking website LinkedIn. Together, the number of indicators used in both established and transient rankings is considerable.
Elements of university quality under consideration may include research outputs, expenditures, teaching quality, graduate employment, campus services, dorms and residences, intramural sports, and drinking establishments, among many others. Although the role of any of these indicators in defining university quality is debatable, they do represent perceptions of the characteristics that make a good university, for those who develop the schemes and, presumably, for the readers who continue to consume rankings.

The role of libraries within rankings is minimal. None of the global ranking systems include in their analyses any measures related to libraries. In a few cases, supplementary materials produced by the ranking agencies explore library quality, but these typically are not measured with the same rigour as are the primary indicators. The QS World University Rankings, as an example, gives star ratings to additional components of university life, including libraries, but the ratings are compiled based on unsystematic online scoring. Instead, global rankings focus primarily on research, reputation, and, to a lesser extent, teaching.

Libraries fare mildly better in some domestic or national ranking systems. Only two of the 14 major ranking publications in the United States, the UK, Canada, and Australia analyzed in a previous study (Jackson, 2015) included direct measures related to libraries. One, Maclean's magazine in Canada, included four library indicators: expenditures as a percentage of institutional budget, new acquisitions, holdings per student, and total holdings. These accounted for 12–15% of the total score, varying by year and institutional category, which was by far the most weight given to libraries in any major ranking. Although the data used by Maclean's about libraries are questionable in terms of currency and comprehensiveness (holdings data do not include electronic materials), the chosen indicators reflect the perspective that library collections are a relatively integral part of university quality. The other publication to include libraries was The Princeton Review, which provided a score for libraries based on online surveys of users who registered to participate. No guidance was provided to participants other than a request to rate the quality of an institution's library on a Likert scale. It is impossible to determine, based on this method, which elements of libraries influenced the final scores.

An additional five publications from the same study (Jackson, 2015) included library-related measures indirectly. The Complete University Guide, The Guardian's League Tables, and The Sunday Times University Guide in the UK, as well as the US News & World Report and College Prowler in the United States, included budgetary indicators that encompassed spending on libraries. These were based on spending on academic services, expenditures on facilities, or total spending per student. The Sunday Times University Guide also included data from the National Student Survey related to libraries, although library-specific data were subsumed under measures of general satisfaction.

There is a growing body of research that attempts to identify links between library services and university performance as measured by other indicators, including several that are typically used in rankings.
Correlational analyses have attempted to link library services to institutional reputation (Weiner, 2009), research outputs (Noh, 2012), student retention (Mezick, 2007), student grades (Wong & Cmor, 2011), and overall rank (Oppenheim & Stuart, 2004). The library variables used to make such comparisons chiefly include expenditures, library staff, collections, and direct student interactions with professional library staff. While there is little doubt that the library contributes in some fashion to success in these areas of concern for universities, there is also little doubt that scores on each indicator used in rankings are subject to a host of additional forces.

Regardless of the degree to which library services influence other measures of university quality, libraries are generally left out of the discussion of university quality when it comes to rankings. In those rankings that do include libraries, the quality of library services is gauged almost exclusively by the amount of money spent on them, and those data are combined with expenditures on other services. There may be an assumption on the part of ranking agencies that success in some areas—research outputs, for example—cannot occur in the absence of good libraries. Also possible, though, is that libraries are excluded because the publishers do not believe that stakeholders will consider the library when making decisions. If this is the case, then at least in the eyes of rankers, libraries do not constitute a core element of university quality.

Student Surveys

Student surveys form an appreciable part of the performance measurement activities of any postsecondary institution. Regardless of whether they purport to measure student engagement, student satisfaction, or student experience, surveys provide data on a scale that is not possible to obtain by other means, particularly those surveys that are well established and widely distributed. For those established surveys that have been thoroughly tested, there is evidence that the elements under scrutiny—behaviours that contribute to student engagement, for example—do contribute to positive academic outcomes (Carini, Kuh, & Klein, 2006; Webber, Bauer Krylow, & Zhang, 2013). Thus, they can inform administrators, with reasonable precision, how institutions are faring in supporting positive outcomes for students.

The limitations of student surveys have been well documented. The data gleaned from surveys are limited to the experience of students in a particular program at a particular institution, and these students often lack the contextual knowledge to make comparative assessments (Mavondo, Tsarenko, & Gabbott, 2004). Broad-based surveys themselves do not account for institutional or national attributes that may impact student engagement (Hagel, Carr, & Devlin, 2012). This is particularly problematic when surveys are used for comparative purposes. Benchmarking against other universities for the purposes of internal monitoring and ranking is of limited utility when student surveys are the primary basis for comparison (Gordon, Ludlum, & Hoey, 2008). Both of these exercises are regularly done, though, despite the contextual shortcomings. Broader criticisms see policy development formed around student satisfaction as undue reliance on consumerist elements that are anathema to the traditional administration of postsecondary education (Varnava & Broadbent, 2007).
Still, few would argue that university quality evaluations or policy development should be based on student feedback alone. Student surveys are only one of many performance measures used in institutional monitoring.

The ways that surveys address student experiences with the library fit into two broad categories. The first involves the use of direct questions in which the quality of the library as a whole or some aspect of library service is evaluated. Inherent, and at times explicit, in these questions are indicators of library use, in addition to quality. The Canadian University Survey Consortium (CUSC) surveys of first-year, middle-years, and graduating students, for example, include check boxes to indicate that students have used physical books and electronic resources and then ask respondents to rate their satisfaction with each. The now-defunct College Student Experiences Questionnaire was more comprehensive in its treatment of library use, containing eight questions addressing specific experiences with the library and its resources. In an unfortunately large number of surveys, multiple aspects of library quality are rolled into single questions, which makes responding difficult and interpreting data nearly impossible. These problematic questions can be seen in the UK's National Student Survey, which asks students the degree to which they agree that "library resources and services are good enough for my needs"; the University Experience Survey, in Australia, which asks students to rate "library resources and facilities"; and the question from The Times Higher Education Student Experience Survey that asks respondents whether they agree that their school has a "good library and good library opening hours." Because it is not possible to determine whether respondents are rating resources, services, facilities, or opening hours, the real utility of these questions lies in the fact that any response other than "not applicable" reasonably indicates that students have some experience with the library or library resources.

The second category of questions measures student behaviours that are linked to information literacy outcomes. These questions vary between those that are explicit about the use of library resources in developing information literacy skills and those that inquire about information literacy outcomes without identifying a library-specific context. The CUSC survey, for example, asks students to rate the degree to which their institution contributes to growth in their "ability to find and use information" and "reading to absorb information accurately," while the Australasian Survey of Student Engagement asks respondents how frequently they "worked on an essay or assignment that required integrating ideas or information from various sources." The National Survey of Student Engagement (NSSE) includes questions about information evaluation and integration in its core survey and, as of 2014, also offers an optional module that measures behaviours linked to information literacy. The information literacy module consists of 14 questions concerning an array of activities (information finding, evaluation, integration, ethics, etc.) that, together, indicate a level of experience with practices that contribute to the development of information literacy, as outlined by the Association of College and Research Libraries.
Only one of the 14 questions asks about the library specifically.

There is an appreciable level of agreement among established student surveys that library quality contributes to student engagement or student satisfaction, or at least that a line of inquiry to that effect is worthwhile. The limited empirical research that has looked at libraries and student engagement in the context of surveys has produced mixed results. Kuh and Gonyea (2003), looking at the College Student Experiences Questionnaire, found that self-reported library use does not contribute to the development of information literacy skills, overall gains from postsecondary education, or satisfaction. The College Student Surveys Project Group, on the other hand, found connections between information literacy development and other NSSE engagement scales (Gratch-Lindauer, 2008). Of course, information literacy outcomes are not achieved exclusively through use of the library. Students could conceivably develop an advanced set of information literacy skills without ever using the library or communicating with library staff, although those activities are certainly beneficial to most students. As others have noted, it is likely that myriad factors influence academic success in varying ways for different students (Carini et al., 2006; Klein, Kuh, Chun, Hamilton, & Shavelson, 2005).

All universities are concerned with the achievement of student learning outcomes, and student surveys can play a substantial role in monitoring the conditions that are most likely to bring about success in this area. Neither the library nor information literacy, however, is central to most surveys of student experiences. Considering the immense number of components to and influences on the student experience, there is no reason to believe that the library should be central. Even NSSE founder George Kuh, who, along with Robert Gonyea, described the library as "the physical manifestation of the core values and activities of academic life" (2003, p. 256) and claimed that the "library's central role in the academic community is unquestioned" (p. 256), did not make the library central to NSSE. There is, however, ample evidence outside of the literature related to student surveys that engagement with the library is connected to positive outcomes for students (Soria, Fransen, & Nackerud, 2013; Wong & Cmor, 2011). The near-universal, albeit brief, inclusion of libraries in surveys that attempt to define such expansive concepts as student engagement and experience suggests that, in this realm, libraries are considered important elements of university quality.

Discussion

The notion of the library as central to the functioning of the university remains ubiquitous, despite numerous disruptive shifts in higher education. The library as the heart of the university is a metaphor repeated over and over again—in the titles of books, in numerous articles, and as the mottos of several academic libraries, to name only a few examples. It may indeed be the most frequently employed phrase for writers wishing to refer to the library in glowing terms. The concept of the centrality of the library was explored in depth by Deborah Grimes (1993), with a follow-up examination by Lynch et al. (2007). Both studies used interviews with university administrators to explore perceptions of the conceptually central role of the library.
In both cases, CEOs and provosts acknowledged the emotional capital of the library, but in neither study would they commit to the concept fully. The idea of the library as the heart of the university works well as a slogan, but the library's budget is no more sacrosanct than that of any other university department. The data on library expenditures bear out these conclusions: as a percentage of institutional budgets, institutional spending on ARL libraries decreased from 3.7% in 1984 to less than 2% in 2009 (Association of Research Libraries, 2013). It is difficult to imagine, were the library truly the heart of the university, that budgetary reductions of that degree would be possible without greater ramifications for the institution.

The same is true of performance measurement. The library is, of course, not immune to evaluative oversight through its regular reporting channels. Extra-institutional assessment, though, demonstrates broader attitudes about the library's importance to the university. Although the results of this analysis certainly varied, in no case could it be said that library quality is a dominant factor in the measurement of university quality. With the exception of university rankings, assessments designed by external stakeholders generally measure library performance, particularly in the realm of traditional library indicators, as one important but relatively small piece of institutional operations. University rankings in general pay very little attention to libraries as a measure of university quality.

Although the three types of university evaluations considered here were chosen because of their development and use by external stakeholders, there is a marked difference between university rankings and the other two forms of evaluation: accreditation and student surveys. It was noted earlier that rankings are used by a diverse spectrum of stakeholders, but most were created by or at the behest of media organizations whose primary function is to sell information. For that reason, they are designed in part to appeal to the largest possible audiences, most notably current and future students, their parents, and other concerned citizens, dispensing factoids through which media outlets can report on the performance of local universities. In most cases, libraries do not fit into these conceptualizations of university quality, which emphasize some combination of research, reputation, teaching, and the student experience, even though the library has some impact on all of those elements. Detailed accreditation and student survey reports, on the other hand, may be designed externally but are not burdened with the need to appeal to large audiences. They are constructed to ensure that policy makers and administrators have the information needed to conduct their business effectively, with the added benefit that successful assessments provide assurance to a broader category of stakeholders that includes students, parents, and concerned citizens. Traditional library metrics, including those related to collections, use, spaces, and satisfaction, play a regular, albeit minor, role in these evaluations because of a shared understanding that they measure attributes of university quality that contribute to student and program success. Research from the library community, in turn, has attempted to demonstrate that this is indeed the case.
What libraries actually do, though, has shifted beyond the provision of research materials and study spaces. As critical as those basic services are, librarians at most universities are heavily engaged in teaching and research, and they typically offer a suite of additional services in the areas of scholarly communications, research data management and visualization, and legal aspects of information use. Once considered value-added services, many of these are now seen as core functions of the library. It should follow that as additional services become standard procedure, they are subject to scrutiny, as are other university operations. It is inconsistent to compare library quality across institutions if central functions are left out of the equation. With some exceptions, including the minority of accreditors who consider the work of libraries in the area of information literacy and the inclusion of the workload outputs of faculty librarians in the processes of ranking and accreditation, the onus for evaluation of non-traditional operations has been placed on the library. The result has been a steady increase in the number of resources and staff dedicated exclusively to assessment (Wright & White, 2007) and a burgeoning body of literature that explores new ways of assessing library services and impact (see Dugan, Hernon, & Nitecki, 2009; Hernon, Dugan, & Schwartz, 2013; Oakleaf, 2010). Rarely, however, do results of this continuous evaluation go further than the library or university administration. Although ongoing assessment is crucial to service improvement, as a solely internal exercise it does little to contribute to perceptions of university quality. As this survey has demonstrated, evidence of the quality of non-traditional library services is neither required nor evidently desired in extra-institutional assessments of university quality.

Conclusion

How institutional and program quality measures are developed reveals a great deal about value constructs. Of the many elements that contribute to university quality, those that are expected to have the largest impact on institutional missions are, and should be, assessed in greater detail. These constructs will no doubt vary among groups of stakeholders. Assessments developed and used by parties external to institutional operations—government, media, potential students, and research bodies—provide insight into broader perceptions of what higher education should be and do. In the case of libraries, often described as the heart of the institution, external evaluations depict value primarily in the work that has always been done by libraries: providing resources and spaces that contribute to teaching, learning, and research. Library value, however, also exists in non-traditional contributions to the institution. That these contributions are largely absent from higher-level scrutiny suggests that there is a disparity between what internal and external stakeholders view as valuable to higher education.

References

Allen, A. T. (1922). Institutions of higher learning in North Carolina. Raleigh, NC: State Superintendent of Public Instruction.
American Library Association. (1989). Presidential committee on information literacy: Final report. Retrieved from http://www.ala.org/acrl/publications/whitepapers/presidential
Association of College and Research Libraries. (1958). College and university library accreditation standards, 1957. Chicago, IL: Author.
Association of College and Research Libraries. (2011). Standards for libraries in higher education. Retrieved from http://www.ala.org/acrl/standards/standardslibraries
Association of Research Libraries. (2013). Library expenditures as a percent of total university expenditures, 1982–2011 (40 universities) [Data file]. Retrieved from http://www.arl.org/focus-areas/statistics-assessment/statistical-trends#.VMqwpkfF98F
Bradley, C. (2013). Information literacy in the programmatic university accreditation standards of select professions in Canada, the United States, the United Kingdom, and Australia. Journal of Information Literacy, 7(1), 44–69. doi:10.11645/7.1.1785
Carini, R. M., Kuh, G. D., & Klein, S. P. (2006). Student engagement and student learning: Testing the linkages. Research in Higher Education, 47(1), 1–32. doi:10.1007/s11162-005-8150-9
Chepesiuk, R. (1994). Writers at work: How libraries shape the muse. American Libraries, 25(11), 984–987.
Commission on Institutions of Higher Education. (2011). Standards for accreditation. Retrieved from https://cihe.neasc.org/sites/cihe.neasc.org/files/downloads/Standards/Standards_for_Accreditation.pdf
Cowan, S. M. (2013). Information literacy: The battle we won that we lost? portal: Libraries and the Academy, 14(1), 23–32. doi:10.1353/pla.2013.0049
Dalrymple, P. W. (2001). Understanding accreditation: The librarian's role in educational evaluation. portal: Libraries and the Academy, 1(1), 23–32. doi:10.1353/pla.2001.0004
Dugan, R. E., Hernon, P., & Nitecki, D. A. (2009). Viewing library metrics from different perspectives: Inputs, outputs, and outcomes. Santa Barbara, CA: Libraries Unlimited.
Eaton, J. S. (2009). Accreditation in the United States. New Directions for Higher Education, (145), 79–87. doi:10.1002/he.337
Eaton, J. S. (2012). The future of accreditation. Planning for Higher Education, 40(3), 8–15.
Elliott, E. C., Ashbrook, W. A., & Chambers, M. M. (1935). The government of higher education. New York, NY: American Book Company.
Gordon, J., Ludlum, J., & Hoey, J. J. (2008). Validating NSSE against student outcomes: Are they related? Research in Higher Education, 49(1), 19–39. doi:10.1007/s11162-007-9061-8
Gratch-Lindauer, B. (2002). Comparing the regional accreditation standards: Outcomes assessment and other trends. The Journal of Academic Librarianship, 28(1–2), 14–25. doi:10.1016/S0099-1333(01)00280-4
Gratch-Lindauer, B. (2008). College student engagement surveys: Implications for information literacy. New Directions for Teaching and Learning, 2008(114), 101–114. doi:10.1002/tl.320
Grimes, D. J. (1993). Centrality and the academic library. Retrieved from ProQuest Dissertations and Theses. (304044825)
Hagel, P., Carr, R., & Devlin, M. (2012). Conceptualising and measuring student engagement through the Australasian Survey of Student Engagement (AUSSE): A critique. Assessment & Evaluation in Higher Education, 37(4), 475–486. doi:10.1080/02602938.2010.545870
Hazelkorn, E. (2011). Rankings and the reshaping of higher education: The battle for world-class excellence. Basingstoke, UK: Palgrave Macmillan. doi:10.1057/9780230306394
Hernon, P., Dugan, R. E., & Schwartz, C. (Eds.). (2013). Higher education outcomes assessment for the twenty-first century. Santa Barbara, CA: Libraries Unlimited.
Higher Education Funding Council for England. (2014). UK review of the provision of information about higher education: National Student Survey results and trends analysis 2005–2013. Retrieved from http://www.hefce.ac.uk/media/hefce/content/pubs/2014/201413/HEFCE2014_13%20-%20corrected%2012%20December%202014.pdf
Higher Learning Commission. (2014). Higher learning commission policy book. Retrieved from http://policy.ncahlc.org/
Jackson, B. (2015). What do university rankings tell us about perceptions of library value? In S. Durso, S. Hiller, M. Kyrillidou, & A. Pappalardo (Eds.), Proceedings of the 2014 Library Assessment Conference (pp. 431–437). Washington, DC: Association of Research Libraries. Retrieved from http://libraryassessment.org/bm~doc/proceedingslac-2014.pdf
Jackson, B., MacMillan, M., & Sinotte, M. (2014). Great expectations: Results from a faculty survey of students' information literacy proficiency. Paper presented at the IATUL Conference, Espoo, Finland. Retrieved from http://docs.lib.purdue.edu/iatul/2014/infolit/1/
Jantti, M., & Cox, B. (2013). Measuring the value of library resources and student academic performance through relational datasets. Evidence Based Library and Information Practice, 8(2), 163–171.
Kehm, B. M. (2014). Global university rankings—Impacts and unintended side effects. European Journal of Education, 49(1), 102–112. doi:10.1111/ejed.12064
King, D. W., & Tenopir, C. (2013). Linking information seeking patterns with purpose, use, value, and return on investment of academic library journals. Evidence Based Library and Information Practice, 8(2), 153–162.
Klein, S. P., Kuh, G. D., Chun, M., Hamilton, L., & Shavelson, R. (2005). An approach to measuring cognitive outcomes across higher education institutions. Research in Higher Education, 46(3), 251–276.
Kuh, G. D., & Gonyea, R. M. (2003). The role of the academic library in promoting student engagement in learning. College & Research Libraries, 64(4), 256–282.
Lynch, B. P., Murray-Rust, C., Parker, S. E., Turner, D., Walker, D. P., Wilkinson, F. C., & Zimmerman, J. (2007). Attitudes of presidents and provosts on the university library. College & Research Libraries, 68(3), 213–228.
Mavondo, F. T., Tsarenko, Y., & Gabbott, M. (2004). International and local student satisfaction: Resources and capabilities perspective. Journal of Marketing for Higher Education, 14(1), 41–60. doi:10.1300/J050v14n01_03
Mezick, E. M. (2007). Return on investment: Libraries and student retention. The Journal of Academic Librarianship, 33(5), 561–566. doi:10.1016/j.acalib.2007.05.002
Middle States Commission on Higher Education. (2009). Characteristics of excellence in higher education: Requirements of affiliation and standards for accreditation. Retrieved from http://www.msche.org/publications/CHX06_Aug08REVMarch09.pdf
Middle States Commission on Higher Education. (2014). Standards for accreditation and requirements of affiliation (13th ed.). Retrieved from http://www.msche.org/documents/RevisedStandardsFINAL.pdf
Molyneux, R. E. (1986). The Gerould statistics: 1907/08–1961/62. Washington, DC: Association of Research Libraries. Retrieved from http://www.arl.org/storage/documents/publications/gerould-statistics.pdf
Neal, A. D. (2008). Seeking higher ed accountability: Ending federal accreditation. Change, 40(5), 24–31.
Noh, Y. (2012). The impact of university library resources on university research achievement outputs. Aslib Proceedings, 64(2), 109–133. doi:10.1108/00012531211215150
Northwest Commission on Colleges and Universities. (2010). Standards for accreditation. Retrieved from http://www.nwccu.org/Pubs%20Forms%20and%20Updates/Publications/Standards%20for%20Accreditation.pdf
Oakleaf, M. (2010). The value of academic libraries: A comprehensive research review and report. Chicago, IL: American Library Association.
Oppenheim, C., & Stuart, D. (2004). Is there a correlation between investment in an academic library and a higher education institution's ratings in the Research Assessment Exercise? Aslib Proceedings, 56(3), 156–165.
Quality Assurance Agency for Higher Education. (2013). The UK quality code for higher education. Retrieved from http://www.qaa.ac.uk/assuring-standards-and-quality/the-quality-code
Saskatchewan Higher Education Quality Assurance Board. (2014). Quality assurance review process: Program review standards and criteria. Retrieved from http://www.quality-assurance-sk.ca/program-review-standards-criteria
Saunders, L. (2007). Regional accreditation organizations' treatment of information literacy: Definitions, collaboration, and assessment. The Journal of Academic Librarianship, 33(3), 317–326. doi:10.1016/j.acalib.2007.01.009
Saunders, L. (2010). Information literacy as a student learning outcome: As viewed from the perspective of institutional accreditation. Retrieved from ProQuest Dissertations and Theses. (3452631)
Smith, E. T. (2003). Assessing collection usefulness: An investigation of library ownership of the resources graduate students use. College & Research Libraries, 64(5), 344–355.
Soria, K. M., Fransen, J., & Nackerud, S. (2013). Library use and undergraduate student outcomes: New evidence for students' retention and academic success. portal: Libraries and the Academy, 13(2), 147–164. doi:10.1353/pla.2013.0010
Stanger, K. (2012). Whose hands ply the strands? Survey of Eastern Michigan University psychology faculty regarding faculty and librarian roles in nurturing psychology information literacy. Behavioral & Social Sciences Librarian, 31(2), 112–127. doi:10.1080/01639269.2012.713845
Thompson, G. B. (2002). Information literacy accreditation mandates: What they mean for faculty and librarians. Library Trends, 51(2), 218–261.
Usher, A. (2009). New frontiers in institutional comparisons. Australian Universities' Review, 51(2), 87–90.
Varnava, T., & Broadbent, G. (2007). The National Student Survey. The Law Teacher, 41(3), 330–334. doi:10.1080/03069400.2007.9959752
Webber, K. L., Bauer Krylow, R., & Zhang, Q. (2013). Does involvement really matter? Indicators of college student success and satisfaction. Journal of College Student Development, 54(6), 591–611. doi:10.1353/csd.2013.0090
Weiner, S. (2009). The contribution of the library to the reputation of a university. The Journal of Academic Librarianship, 35(1), 3–13. doi:10.1016/j.acalib.2008.10.003
Weiner, S. A. (2014). Who teaches information literacy competencies? Report of a study of faculty. College Teaching, 62(1), 5–12. doi:10.1080/87567555.2013.803949
Wilson, C. S., & Tenopir, C. (2008). Local citation analysis, publishing and reading patterns: Using multiple methods to evaluate faculty use of an academic library's research collection. Journal of the American Society for Information Science and Technology, 59(9), 1393–1408. doi:10.1002/asi.20812
Wong, S. H. R., & Cmor, D. (2011). Measuring association between library instruction and graduation GPA. College & Research Libraries, 72(5), 464–473.
Wright, S., & White, L. S. (2007). SPEC kit 303: Library assessment. Washington, DC: Association of Research Libraries.
Zhong, Y., & Alexander, J. (2007). Academic success: How library services make a difference. ACRL Thirteenth National Conference Proceedings. Retrieved from http://www.ala.org/acrl/sites/ala.org.acrl/files/content/conferences/confsandpreconfs/national/baltimore/papers/141.pdf

Contact Information

Brian Jackson
Mount Royal University
[email protected]

Brian Jackson is an assistant professor in the library at Mount Royal University. He holds a Master of Library and Information Studies degree from the University of Alberta. His research focuses on the ways in which library impact and value are constructed in both internal and external evaluations of library services.