The Emperor's New Clothes: Maclean's, NSSE, and the Inappropriate Ranking of Canadian Universities

J. Paul Grayson, York University

Abstract

Most Canadian universities participate in the US-based National Survey of Student Engagement (NSSE), which measures various aspects of "student engagement." The higher the level of engagement, the greater the probability of positive outcomes and the better the quality of the school. Maclean's magazine publishes some of the results of these surveys. Institutions are ranked in terms of their scores on 10 engagement categories and four outcomes. The outcomes considered are how students in the first and senior years evaluate their overall experiences (satisfaction) and whether or not students would return to their campuses. Universities frequently use their scores on measures reported by Maclean's in a self-congratulatory way. In this article, I deal with the levels of satisfaction provided by Maclean's. Based on multiple regression, I show that of the 10 engagement variables regarded as important by NSSE, at the institutional level, only one explains most of the variance in first-year student satisfaction. The others are of limited consequence. I also demonstrate, via a cluster analysis, that, rather than there being a hierarchy of Canadian institutions as suggested by the way in which Maclean's presents NSSE findings, Canadian universities can most adequately be divided into a limited number of different satisfaction clusters. Findings such as these might serve as a caution to parents and students who consider Maclean's satisfaction rankings when assessing the merits of different universities. Overall, in terms of first-year satisfaction, the findings suggest more similarities than differences between and among Canadian universities.

Keywords: NSSE, Maclean's, Canadian university rankings, student engagement, student satisfaction
Introduction

Every year Maclean's magazine publishes an issue devoted to providing Canadians with information on their universities. At the beginning of its 2019 disquisition, Maclean's writes, "Here's everything you need to know to choose the right school" (Maclean's, 2020). "Everything" included information, provided by Statistics Canada, on the numbers of students and faculty on different campuses; the number and nature of research grants awarded by agencies like the Social Sciences and Humanities Research Council (SSHRC); the results of surveys of students, faculty, and administrators carried out by private research firms; and information from the US-based National Survey of Student Engagement (NSSE). While the algorithms Maclean's employs in processing this information are not always clear, the end product is a rank-ordering of universities along a number of dimensions, including reputation, student satisfaction, the likelihood of students returning to the same university, and a number of practices and outcomes derived from the NSSE. In addition, Maclean's provides an overall ranking of universities (Dwyer, 2018).

Research conducted in the United States has shown that approximately 40% of first-year students utilized rankings such as those provided by Maclean's in selecting a place to study; 17% believed that such rankings were "very important" (Zilvinskis & Rocconi, 2018, p. 257). Comparable national data for Canadian students are unavailable; however, a study in Ontario indicated that although rankings did not affect the attraction of students to high-profile universities, it was a different matter for small institutions: the attractiveness of the latter was enhanced by positive Maclean's rankings (Drewes & Michael, 2006, p. 799).

Given the possible importance of Maclean's rankings to some university-bound Canadian students, their parents, potential donors, governments, and universities themselves, it is essential to ensure that the information presented by Maclean's is neither intentionally nor unintentionally misleading. The cost of misleading information to these various parties could be considerable. With this consideration in mind, in this article I analyse data provided by Maclean's on first-year satisfaction, one of the four outcomes derived from NSSE data. More specifically, I am interested in the degree to which Maclean's presents information on this phenomenon in a way that reflects what actually occurs in Canadian institutions of higher education. By focusing on student satisfaction, I am not arguing that it is the gold standard to be used in evaluating universities. I am simply working with the reality that Maclean's itself regards satisfaction as an important criterion in distinguishing among institutions of higher learning.
However, even if we accept the legitimacy of this criterion, I will show that the simple ranking provided by Maclean's gives a distorted picture of what happens on Canadian campuses. In addition, on the basis of my analysis of satisfaction, I will argue that in some important ways the similarities among Canadian universities far outweigh their differences. In making this argument I will rely on the data provided by Maclean's in 2018.

As this article utilizes information provided by Maclean's on institutions, it is important to distinguish between individual and aggregate (in the current context, institutional) levels of analysis. The former examines the characteristics of individuals, who might, for example, get high grades. The latter deals with the characteristics of social bodies, such as universities, and might examine the relationship between average class size and the number of students winning prestigious national and international scholarships. I mention this for one simple reason: relationships found at the individual level are sometimes not replicated when researchers focus on aggregates. In the current context, this means that we should not assume that relationships among variables revealed when researchers study students can be generalized to the study of universities per se. The reverse is also true.

Student Engagement

For over a decade, most Canadian universities have participated in the US-based National Survey of Student Engagement (NSSE, 2019c). Administered to students in their first and senior years of study, the survey focuses on students' backgrounds and various aspects of "student engagement." According to NSSE:

Student engagement represents two critical features of collegiate quality. The first is the amount of time and effort students put into their studies and other educationally purposeful activities. The second is how the institution deploys its resources and organizes the curriculum and other learning opportunities to get students to participate in activities that decades of research studies show are linked to student learning. (NSSE, 2019a)

Consistent with this definition, many questions asked in the survey have the following format. Respondents are asked, "During the current school year, how often have you done the following?" Possible activities include "connected ideas from your courses to your prior experiences and knowledge." In keeping with the definition above, this question is one measure of the time and effort students expend on their studies. Other questions deal with how the institution deploys its resources and organizes the curriculum. By way of example, survey respondents are questioned on the frequency with which they "participate in an internship, co-op, field experience, student teaching, or clinical placement." The full text of the Canadian version of the NSSE is available on the web (NSSE, 2019d).

For analysis purposes, responses to questions such as the foregoing are subdivided into 10 categories (hereafter referred to as engagement categories): higher-order learning; reflective and integrative learning; learning strategies; quantitative reasoning; collaborative learning; discussions with diverse others; student-faculty interaction; effective teaching practices; quality of interactions; and supportive environment (NSSE, 2019b).
For each category, question responses are combined into a single variable with scores ranging from 0 to 60. Although the NSSE questions have changed over time, with a major revision occurring in 2013 (NSSE, 2019c), the current questionnaire is consistent with the original objectives (Fosnacht & Gonyea, 2018, p. 63). As a result, the ways in which researchers characterized the pre-2013 surveys still have resonance.

The fundamental assumption underlying the survey is that different aspects of student engagement contribute to outcomes such as learning and degree completion. As a result, should high levels of student engagement be detected through surveys, it is reasonable to assume positive outcomes.

Despite its concentration on learning outcomes, research utilizing NSSE data also clearly demonstrates a connection among student engagement, learning, and student satisfaction. As satisfaction is linked to student engagement, this finding in some ways supports the attention given to the phenomenon by Maclean's. Cheong and Ong (2016) summarize the link as follows:

Participation in campus activities such as student organizations and clubs also leads to commitment and positive perception of experiences, which are correlated with greater satisfaction... Higher levels of engagement with faculty, staffs [sic], and students together with effort contribute to not only a higher cumulative grade point average (GPA) but also perception of satisfaction with one's entire academic experience. (p. 411)

The clear implication of findings such as these is that, in addition to learning outcomes, at the individual level, student satisfaction can be viewed as a possible outcome of student engagement. For neither this nor other relationships among variables utilized by NSSE do I automatically assume similar dynamics at both the individual and aggregate levels.

Overall, NSSE argues that its data can be, are, and should be used by universities in planning processes. Consistent with this possibility, NSSE, upon request, will provide individual institutions with anonymous comparator groups of similar universities. The availability of this option enables institutions to put the results of their surveys in perspective. Should standings along one measure be lower than in comparator institutions, resources can be allocated to correct the imbalance.

The Underlying Principle

The original principle underlying the NSSE was simple. As noted previously, in the United States, decades of research have established a link (sometimes tenuous) between student engagement as defined above and positive university outcomes (Astin, 1993; Rockenbach et al., 2016). These include high grades and retention. As a result, it is assumed that if responses to the NSSE reveal high levels of student engagement, positive outcomes are a likely concomitant. As NSSE puts it:

Survey items…represent empirically confirmed "good practices" in undergraduate education. That is, they reflect behaviors by students and institutions that are associated with desired outcomes of college [emphasis added]. (NSSE, 2019a)

In other words, you don't have to eat the meal; you simply have to look at the recipe to judge its taste.

Importantly, in formulating its assumptions, among other sources, NSSE relied on American studies at the individual level in which it was possible to examine the effect of NSSE engagement practices on grades and
persistence after controlling for pre-entry characteristics. Such controls are essential if the objective is to assess the net effect of the post-secondary experience. In these American studies, pre-entry characteristics included measures like family income, race, and ACT and SAT scores.

One important study on which NSSE relied, and which met the foregoing conditions, involved 18 institutions, 6,193 first-year students, and 5,227 senior students. Importantly, this study used objective measures of achievement: students were not simply asked to state their grades. A conclusion that emerged from this analysis was that, "while pre-college characteristics, such as academic achievement, predict first-year grades and persistence, student engagement during college also has modest positive effects" (Kuh et al., 2007, p. 2). Just how modest were these effects? Overall, "a one-standard deviation increase in 'engagement' during the first year of college increased a student's GPA by about .04 points" (Kuh et al., 2007, p. 17). Put differently, after adjustments for pre-entry characteristics, engagement variables explained 13% of the variance in GPA; the total amount of variance explained by the model was 42% (Kuh et al., 2007, p. 17). Modest effects were also evident for retention: "Students who are engaged at a level that is one standard deviation below the average have a probability of returning of .85." By contrast, "students who are engaged at a level that is one standard deviation above the average have a probability of returning of .91" (Kuh et al., 2007, p. 21). Like the authors of the study, I view these figures as quite modest. Subsequent research conducted at the institutional level is consistent with the overall findings of Kuh et al. (Pascarella et al., 2010). Overall, research at both the individual and institutional levels indicates that the effect of student engagement on learning outcomes is modest.

How NSSE Results Are Used

Although by his own admission one of the architects of NSSE (George Kuh) considers the influence of engagement on certain outcomes to be modest, he nonetheless writes that "faculty and administrators would do well to arrange the curriculum and other aspects of the college experience in accord with these good practices [as embodied in NSSE]." He further contends that "those institutions that more fully engage their students in the variety of activities that contribute to valued outcomes of college can claim to be of higher quality compared with other colleges and universities where students are less engaged" (Kuh, 2003, p. 1). Based on these assumptions, NSSE "was designed from its inception to serve as a benchmarking tool that institutional leaders can use to gauge the effectiveness of their programs by comparing first-year and senior students separately to those at comparison institutions" (NSSE, 2019c, p. 13). By way of example, McGill was interested in the number of its students who "wrote more than 10 papers or reports of fewer than 5 pages." The NSSE told McGill that the figure was 22%. The university was also able to obtain comparators from the NSSE. For example, the figure for the G13 (research-intensive universities in Canada) was a lower 16%; however, for AAU universities (Association of American Universities), a group that includes McGill, the figure was a far higher 31% (Planning and Institutional Analysis, 2010, p. 14).
Presumably, on the basis of information such as this, McGill would be able to promote the writing of increased numbers of short papers. The extent to which any increase would have a positive effect is open to question.

In addition to supplying information to institutions, NSSE results are disseminated to the Canadian public via Maclean's. For the past several years, Maclean's has obtained from individual universities the information collected by the NSSE in the previously mentioned categories: higher-order learning; reflective and integrative learning; learning strategies; quantitative reasoning; collaborative learning; discussions with diverse others; student-faculty interaction; effective teaching practices; quality of interactions; and supportive environment. It then lists, in descending order, each institution in terms of its senior-year standings for each of these categories. Information is also collected on student satisfaction and whether or not enrolees would return to the same institution. As a measure of satisfaction, NSSE asks students, "How would you evaluate your entire educational experience at this institution?" Maclean's provides information on the percentage saying excellent and good.

Table 1 shows the first 13 institutions ranked in 2018 in terms of the level of overall student satisfaction with their university experience. Quest, in the number one spot, had 94% of its first-year students categorizing their experience as excellent or good. With a score of 86%, Brescia is in 13th place. The problem with treating the data in this way is that we do not know if there are any statistically significant differences in the rankings. Perhaps the score for Quest (94%) is sufficiently high to distinguish it from Brescia (86%). But what about the 86% and 83% for Brescia and Trent respectively? Are these differences sufficient to rank institutions in this way? As will be seen later, the answer is no.

Despite, in some cases, the incredibly small differences between one institution and the next, universities are often swift to rejoice in their positive standings in any given year. For example, Acadia boasted, "Acadia University has been named one of the top undergraduate schools in Canada by Maclean's magazine" (Acadia University, 2010). Similar pride was evident at Mount Allison University. As stated on its website, "Mount Allison consistently ranks as Canada's top undergraduate university" (Mount Allison University, 2019). Institutions toward the bottom of the list tend to be more reserved.

Qualifications

Given the modest contribution that engagement factors make to outcomes at both the individual and institutional levels, it is important to ask whether the likely modest gains in outcomes resulting from implementing measures to enhance student engagement are worth their cost. In answer, it would be prudent for each institution to conduct a cost-benefit analysis prior to introducing potential changes based on NSSE results.

There is another concern. The fact that in multi-institutional studies in the United States a link has been established between student engagement and positive outcomes does not mean that in any given Canadian institution we should expect the same.
For example, one Canadian longitudinal study carried out at York University at the individual level found that measures from a precursor of NSSE, the College Student Experiences Questionnaire (CSEQ), explained only 3.1% of the variance in students' grades after three years of study (Grayson, 1999, p. 698). Note that grades were derived from academic records. Other individual-level studies have pointed in the same direction. For example, a longitudinal study based on students from the University of British Columbia, York, McGill, and Dalhousie discovered that some engagement factors (not the full roster found in NSSE) were statistically significant in the explanation of grades derived from administrative records for second-generation students; however, they were not statistically significant for the first generation (Grayson, 2008, 2011). Also, in an examination of retention at York, engagement variables were of no statistically significant explanatory value (Grayson, 1998).

Table 1
How Would You Evaluate Your Entire Educational Experience at This Institution? (First-Year Students)

Institution             % Excellent   % Good
Quest                   61            33
Tyndale                 61            28
Ambrose                 49            44
Trinity Western         48            42
Queen's                 44            44
Sherbrooke              41            48
Mount Royal             40            50
Saint Paul (Ottawa)     40            54
St. Francis Xavier      39            43
Wilfrid Laurier         39            47
Trent                   38            45
Briercrest              37            49
Brescia (Western)       36            50

In view of findings such as these, before allocating resources consistent with their NSSE results, Canadian universities should conduct their own studies in which they link NSSE findings and data in administrative records. The latter would provide objective information on students' backgrounds, academic performance, completed credits, and persistence. On the basis of the information thereby obtained, universities would be in a position to determine if the model underlying the NSSE is appropriate to their institution and to assess the potential impact of certain NSSE-inspired changes on their campuses. Research of this nature, which should be conducted at least once in each university, would not be a costly venture and could potentially lead to informed allocation of resources.

Consistent with the foregoing, a recent individual-level study carried out at Western University, the University of Waterloo, York, and the University of Toronto found that students' generic skill levels (writing, test taking, analysis, time and group management, research, presentation, and numeracy skills) were as good a predictor of students' university grades as was their level of high school achievement (beta = 0.23 for each). (Recall that in the American study by Kuh et al. [2007] cited earlier, the amount of variance in GPA explained by factors other than high school grades was slight.) Slightly weaker but still statistically significant effects were also found for thoughts of leaving prior to degree completion and satisfaction with the university experience (Grayson et al., 2019). Under conditions such as these, universities might better spend their resources on developing students' generic skill levels than on increasing engagement. Of course, these two allocations are not mutually exclusive.
Analysis

Analyses such as the foregoing require access to more data from NSSE than is published in Maclean's annual issue on higher education; however, as the current objective is only to assess the way in which Maclean's uses, and might use, the institutional-level information at its disposal, lack of access to additional data is unimportant. Consistent with this understanding, in column 2 of Table 2, for each university, I have listed the percentage of students who thought that their overall first-year experiences were either good or excellent. (For the time being, ignore the other columns.)

Multiple regressions and two-step cluster analyses were used in the examination of these data. Multiple regression analysis allows researchers to estimate the unique effect of a particular independent (or causal) variable on a dependent (or caused) variable. In this paper, utilization of this procedure allows, for example, the estimation of the unique effect of one of the engagement categories on satisfaction after the removal of the effects of the other nine categories. In the current analyses, the regression scores presented are betas. Only those variables that make a statistically significant contribution to the dependent variables will be reported.

Betas are standardized measures of the effect of independent variables on dependent variables. Although a simplification, for purposes of the current discussion, we can consider them to range from -1.0 to +1.0. A negative sign indicates that the higher the value of the independent variable, the lower the value of the dependent variable. A positive sign signifies that as the value of the independent variable increases, so does the value of the dependent variable. An important feature of betas is that, because they are standardized, it is possible to compare the effects of different independent variables. For example, if the beta for one independent variable is .30, and for another it is .60, we know that the second variable has twice the impact of the first on the dependent variable. Most importantly, the beta is a measure of impact after the influence of all other variables has already been considered.

There are different opinions on the minimum number of cases required for each independent variable used in regression analyses. What all have in common is that the greater the number of cases for each independent variable, the better. As a result, in the current endeavour, given that Maclean's reported on only 60 of the 72 Canadian institutions that participated in the NSSE in 2017 (published in 2018), I strove for parsimony in the selection of independent variables. With this in mind, I first determined the overall effect of the 10 engagement categories on satisfaction. Theoretically, there was no reason to assume that one category would be more important than another. As a result, I employed stepwise regression. At the end of the regression analysis, only one variable, quality of interactions, was identified as statistically significant. The associated beta was .82. The sum of the betas of all remaining non-statistically significant variables was .78, with an average beta of .09. Overall, this one engagement category explained 67% of the overall variance. Given that the highest Variance Inflation Factor (VIF) for an independent variable in the regression was 1.5, and the lowest 1.0, multicollinearity was not an issue.
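For readers who wish to experiment with an analysis of this kind, the workflow can be approximated in a few lines of Python. The sketch below is not the SPSS procedure used in this study: the column names are hypothetical, and forward selection on p-values stands in for SPSS's stepwise rule.

```python
# A minimal sketch, assuming a DataFrame with one row per institution,
# a 'satisfaction' column, and ten engagement-category columns
# (hypothetical names). Not the author's SPSS code.
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def forward_select(df, outcome, candidates, alpha=0.05):
    """Enter predictors one at a time, stopping when the best remaining
    candidate is no longer statistically significant."""
    selected, remaining = [], list(candidates)
    while remaining:
        # p-value each remaining candidate would have if entered now
        pvals = {}
        for p in remaining:
            X = sm.add_constant(df[selected + [p]])
            pvals[p] = sm.OLS(df[outcome], X).fit().pvalues[p]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical usage; z-scoring all columns first would make the
# fitted coefficients betas (standardized coefficients):
# categories = ['higher_order', 'reflective', ..., 'quality_int']
# chosen = forward_select(data, 'satisfaction', categories)
# X = sm.add_constant(data[categories])
# vifs = [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])]
```

The VIF check at the end mirrors the multicollinearity diagnostic reported above: values near 1 indicate that the engagement categories are not redundant with one another.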
In essence, only one of the 10 engagement categories identified as important by NSSE was of consequence for an institution's first-year satisfaction score. At the institutional level, this finding alone is sufficient to call into question NSSE's underlying model of student engagement.

In view of the importance of this engagement category, it is worth noting the way in which it was operationalized. According to the NSSE, quality of interactions reflects the quality of encounters students have with:

1. Other students
2. Academic advisors
3. Faculty
4. Student services staff
5. Other administrative staff and offices

Given its operationalization, the quality of interactions variable is perhaps more a measure of client care than of student engagement. If this is the case, there is nothing new in the finding: the importance of client care for success has been recognized for decades in business organizations (Peters & Waterman, 1982).

Even if the information on satisfaction presented by Maclean's is accepted at face value, is it appropriate for the magazine to rank institutions in descending order? The short answer is, perhaps not. Why? Because differences in some cases are extremely small, and listings of this nature possibly detract from a recognition of this fact. To illustrate, looking at column 2 in Table 2, we see that the satisfaction scores, with excellent and good combined, for Acadia, Brescia, and ACAD are 85%, 86%, and 87% respectively. (Categories were combined because satisfaction is an ordinal variable and overlap at each end of the scale is a distinct possibility.) Are these differences significant? Should these institutions be given the same or a different rank?

An answer to this question is provided by two-step cluster analysis. This procedure groups cases in terms of commonality: all institutions placed in one group have more in common with one another than with universities in any other group. In the current endeavour, independent of their specific dimensions, the mere finding of any clusters is important. The existence of any groups points to the limitations of simply ranking institutions as Maclean's does.

Before proceeding with the analysis, it is necessary to make four points. First, univariate cluster analysis has been utilized in a number of studies (Fournier et al., 2007; Sriwanna et al., 2016). Second, there is no consensus on the most appropriate technique for cluster analyses (Dolnicar, 2002). Third, different techniques can result in the identification of different clusters (Kent et al., 2015). Fourth, cluster analysis is used with both large and small samples. For example, in an overview of 243 studies, it was found that 52 utilized samples of less than 100 (Dolnicar, 2002).

Consistent with the foregoing, in the current undertaking, I utilized two-step cluster analysis as available in SPSS. I chose this technique rather than procedures such as k-means and Jenks natural breaks for two reasons. First, although the researcher can experiment with different numbers of clusters, the default setting for two-step automatically calculates the optimal number of groups for the given sample (IBM, n.d.). In both k-means and Jenks this option is unavailable: the number of groups must be specified. Second, two-step provides useful and graphic output.
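SPSS's two-step procedure is not available in open-source libraries, but its key feature, letting an information criterion choose the number of groups rather than specifying it up front, can be approximated. The sketch below fits univariate Gaussian mixtures for several candidate cluster counts and keeps the one with the lowest BIC; it uses the satisfaction scores from column 2 of Table 2 and is an illustration of the idea, not a reproduction of the SPSS algorithm, so its groupings may differ from those reported below.

```python
# A rough open-source stand-in for SPSS two-step clustering: fit
# Gaussian mixtures for k = 1..6 and keep the BIC-optimal k, so the
# number of groups is chosen by the data.
import numpy as np
from sklearn.mixture import GaussianMixture

# First-year satisfaction scores from Table 2, column 2 (60 institutions,
# listed in the table's alphabetical order).
satisfaction = np.array([
    87, 85, 84, 93, 82, 86, 86, 81, 78, 86, 83, 78, 83, 82, 88,
    73, 81, 87, 83, 85, 71, 83, 84, 76, 84, 82, 90, 84, 83, 72,
    79, 88, 94, 77, 79, 94, 79, 89, 85, 68, 82, 86, 82, 73, 83,
    90, 89, 87, 79, 82, 82, 84, 86, 79, 80, 84, 86, 79, 79, 70,
]).reshape(-1, 1)

fits = [GaussianMixture(n_components=k, random_state=0).fit(satisfaction)
        for k in range(1, 7)]
best = min(fits, key=lambda m: m.bic(satisfaction))   # lowest BIC wins
labels = best.predict(satisfaction)
print(best.n_components, np.bincount(labels))         # group count and sizes
```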
The results of the two-step procedure, when applied to the original satisfaction scores, are found in Table 3. As seen in the table, the statistical procedure identifies only two groups of institutions! Forty-two are classified as high satisfaction; the remaining 18 manifest low satisfaction. A discriminant analysis showed these differences to be statistically significant, and the silhouette measure of cohesion and separation was good (the highest category).

Table 2
First-Year Satisfaction Figures

Institution             (2) % Satisfied   (3) % Satisfied, adjusted   (4) Col. 2 minus col. 3
ACAD                    87                82                          5
Acadia                  85                84                          1
Alberta                 84                83                          1
Ambrose                 93                90                          3
Brandon                 82                82                          0
Brescia (Western)       86                87                          -1
Briercrest              86                89                          -3
Brock                   81                81                          0
Calgary                 78                82                          -4
Cape Breton             86                89                          -3
Carleton                83                79                          4
Concordia               78                75                          3
Dalhousie               83                83                          0
Guelph                  82                81                          1
King's (Edmonton)       88                86                          2
Lakehead                73                79                          -6
Laurentian              81                83                          -2
Laval                   87                84                          3
Lethbridge              83                83                          0
MacEwan                 85                79                          6
Manitoba                71                75                          -4
McGill                  83                79                          4
McMaster                84                84                          0
Memorial                76                81                          -5
Moncton                 84                83                          1
Mount Allison           82                84                          -2
Mount Royal             90                85                          5
Mount Saint Vincent     84                83                          1
Nipissing               83                84                          -1
OCAD U                  72                78                          -6
Ottawa                  79                76                          3
Queen's                 88                84                          4
Quest                   94                93                          1
Ryerson                 77                78                          -1
Saint Mary's            79                80                          -1
Saint Paul (Ottawa)     94                92                          2
Saskatchewan            79                80                          -1
Sherbrooke              89                89                          0
Sheridan                85                85                          0
Simon Fraser            68                76                          -8
St. Francis Xavier      82                89                          -7
St. Thomas              86                83                          3
Thompson Rivers         82                83                          -1
Toronto                 73                78                          -5
Trent                   83                84                          -1
Trinity Western         90                87                          3
Tyndale                 89                96                          -7
UBC (Okanagan)          87                83                          4
UBC (Vancouver)         79                80                          -1
UNB                     82                83                          -1
UOIT                    82                83                          -1
UPEI                    84                79                          5
UQAM                    86                83                          3
Victoria                79                82                          -3
Waterloo                80                80                          0
Western                 84                80                          4
Wilfrid Laurier         86                86                          0
Windsor                 79                77                          2
Winnipeg                79                76                          3
York                    70                74                          -4
Mean                    83                83                          0.0
S.D.                    5.5               4.6                         3.3

Note. Column 3 is the satisfaction score adjusted for quality of interactions; column 4 is the unadjusted score minus the adjusted score.

Table 3
Satisfaction Group Placement

High satisfaction (42 institutions): ACAD, Acadia, Alberta, Ambrose, Brandon, Brescia (Western), Briercrest, Brock, Cape Breton, Carleton, Dalhousie, Guelph, King's (Edmonton), Laurentian, Laval, Lethbridge, MacEwan, McGill, McMaster, Moncton, Mount Allison, Mount Royal, Mount Saint Vincent, Nipissing, Queen's, Quest, Saint Paul (Ottawa), Sherbrooke, Sheridan, St. Francis Xavier, St. Thomas, Thompson Rivers, Trent, Trinity Western, Tyndale, UBC (Okanagan), UNB, UOIT, UPEI, UQAM, Western, Wilfrid Laurier

Low satisfaction (18 institutions): Calgary, Concordia, Lakehead, Manitoba, Memorial, OCAD U, Ottawa, Ryerson, Saint Mary's, Saskatchewan, Simon Fraser, Toronto, UBC (Vancouver), Victoria, Waterloo, Windsor, Winnipeg, York

To what extent do these categorizations reflect attendant differences in the NSSE student engagement measures? The answer is provided in Table 4. From the table, two things are apparent. First, differences between the high and low satisfaction groups are statistically significant for reflective learning, effective teaching, quality of interactions, total satisfaction, and number of undergraduates (size) (Universities Canada, 2018). Second, with the exception of size, differences between the high and low groups are small. For example, for reflective learning, the high and low satisfaction groups have respective means of 34 and 33. (Keep in mind that scores are out of 60.)
The figures for effective teaching are 36 and 34. Even for quality of interactions, the variable contributing most to student satisfaction, the score for the high group is 41 while that for the low group is 37. As might be expected, only the difference in student satisfaction itself could be termed even modest: the average score for high-satisfaction institutions is 85%, while for universities with low satisfaction it is 76%. From these figures we can conclude that although universities in the high and low groups manifest different satisfaction scores, there are virtually no differences among them in terms of the engagement categories NSSE deems relevant. In other words, in Canada, at the institutional level, high levels of satisfaction say nothing about the other aspects of student engagement that, according to NSSE, define the quality of an institution.

A finding that warrants independent comment is institutional size. Table 4 shows that while the average number of full-time undergraduates in universities in the high satisfaction group was 10,831, the mean number in the low group was 20,784. This difference was statistically significant. This finding suggests that, all else being equal, students would do well to enroll in the smallest university to which they have access. Of course, all else is seldom equal.

Table 4
Engagement Category Scores by Satisfaction Group

Variable                    Group    Number   Mean Score   S.D.
Higher Order Learning       High     42       36.3         2.6
                            Low      18       35.2         1.5
                            Total    60       36.0         2.4
Reflective Learning*        High     42       34.4         2.6
                            Low      18       33.0         1.3
                            Total    60       34.0         2.3
Learning Strategies         High     42       35.4         1.8
                            Low      18       34.9         1.1
                            Total    60       35.2         1.7
Quantitative Reasoning      High     42       23.3         3.4
                            Low      18       23.4         2.4
                            Total    60       23.3         3.1
Collaborative Learning      High     42       33.2         3.1
                            Low      18       31.7         3.3
                            Total    60       32.7         3.2
Discussions Others          High     42       37.0         3.6
                            Low      18       38.2         2.5
                            Total    60       37.4         3.3
Faculty Interactions        High     42       14.7         2.7
                            Low      18       13.1         1.5
                            Total    60       14.2         2.5
Effective Teaching*         High     42       36.2         2.6
                            Low      18       34.2         1.0
                            Total    60       35.6         2.4
Quality of Interactions*    High     42       41.1         2.3
                            Low      18       37.1         1.5
                            Total    60       39.9         2.8
Supportive Environment      High     42       32.2         2.4
                            Low      18       29.8         1.4
                            Total    60       31.5         2.4
Total Satisfaction*         High     42       85.4         3.4
                            Low      18       76.1         3.8
                            Total    60       82.6         5.5
# FT Undergrads*            High     32       10,831       8,831
                            Low      17       20,784       15,679
                            Total    49       14,284       12,459

* F test, p < .05

The Effect of Adjustments

Some readers may be unfamiliar with the idea of statistical adjustment. For this reason, I will give a simplified example. Assume that research has confirmed that females are usually more satisfied with their university experiences than males. University A is 50% female; university B is 75% female. Suppose that university A's NSSE survey indicated that 70% of students identified their experiences as good or excellent, while in university B the corresponding figure was 80%. What did the score for university B indicate? That conditions at B were more conducive to satisfaction, or simply that it numbered more females in its student body? To resolve this ambiguity, we conduct a procedure that, statistically, holds the percentage of females in both institutions constant. Once we do this, we find that university A's score increases to 75% while that of university B drops slightly to 78%. As a result of this procedure, we can argue that even when adjustments are made for the number of females in each of the universities, the score of institution B is slightly higher than that of A.
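In code, an adjustment of this kind amounts to regressing satisfaction on the background variable and then asking what each institution would be expected to score if that variable were held at the sample mean. The sketch below uses the hypothetical two-university example above, padded with made-up values for four more schools, so it illustrates the logic rather than reproducing the figures in the prose or the procedure behind Table 2.

```python
# A minimal sketch of statistical adjustment via regression residuals.
# The % female and satisfaction values are hypothetical (the first two
# correspond to universities A and B in the example above).
import numpy as np
import statsmodels.api as sm

pct_female = np.array([50, 75, 60, 55, 65, 70])     # hypothetical
satisfaction = np.array([70, 80, 74, 72, 77, 78])   # hypothetical

fit = sm.OLS(satisfaction, sm.add_constant(pct_female)).fit()

# Adjusted score: the prediction with % female set to the sample mean,
# plus the school's own residual (the part of its score the covariate
# cannot explain).
at_mean = fit.params[0] + fit.params[1] * pct_female.mean()
adjusted = at_mean + fit.resid

# The residuals play the same role as column 4 of Table 2: positive
# means more satisfied than the covariate predicts, negative means less.
print(np.round(adjusted, 1), np.round(fit.resid, 1))
```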
What is the importance of possibilities such as this in the current context? In answer, we can start with the proposition that along some dimensions few campuses are alike. Differences can be found in entry requirements, race, ethnicity, first language, campus cultures, class backgrounds, and so on. After adjustments are made for possibilities such as these, in many instances it is the pre-entry characteristics of students, rather than the university itself, that account for differences in outcomes. In recognition of this reality, in 1991, in their now-classic summary of American research on university outcomes, Pascarella and Terenzini wrote, "the dimensions along which American colleges are typically categorized, ranked, and studied (such as size, type of control, curricular emphasis, and selectivity) are simply not linked with major differences in net impacts on students" (1991, p. 589). In the 2016 revision of their book, the authors reached a similar conclusion: "we conclude that between-college effects are relatively modest" (Rockenbach et al., 2016, p. 533). In the revision, the authors also specified that "it is important to note that the single strongest predictor of a student's outcomes at the end of college is that student's characteristics on the same construct when entering college." As a result, "while college can (and often does) profoundly shape learning, growth, and development, the precollege environment has a substantial impact on the attributes of college graduates" (Rockenbach et al., 2016, p. 571).

The importance of this observation is brought home in Canadian studies. In an examination, at the individual level, of first-year students at the University of British Columbia (UBC), York, McGill, and Dalhousie, respondents were asked a number of demographic questions. When asked their origins, 36% of the students at UBC stated China, Hong Kong, or Taiwan; at York the figure was only 5%. At UBC only 4% of international students were from the United States; the number for McGill was 43%. While 47% of those at McGill had fathers with at least a BA, the figure for York was 38% (Grayson, 2011, p. 611). In essence, there were considerable differences in the background characteristics of students in each institution.

In addition to reporting on demographics, the study above examined students' satisfaction with their programs. In the analyses, objective information on entry grades, first-year grades, sex, and student status (domestic and international) was obtained from administrative records. This information was linked to data collected in the survey on type of residence, in-class experiences, academic involvement, contacts with faculty and staff, event involvement, and friendships. When satisfaction was adjusted for these variables, it was clear that there was no difference in the satisfaction of students on the different campuses (Grayson, 2008, p. 226). Why is this important?
In its 2018 special issue on Canadian universities, Maclean's reported that on the Vancouver campus of the University of British Columbia, 79% of first-year students rated their experiences (satisfaction) as excellent or good. The corresponding figures for York, McGill, and Dalhousie were 70%, 83%, and 83% respectively. On the basis of the above study, it is doubtful that these differences would remain were the appropriate adjustments made. In a more recent study of York, the University of Toronto, Waterloo, and Western, a satisfaction question comparable to NSSE's was asked of over 2,200 students. After adjustments had been made for students' level of cultural capital, sex, first language, domestic or international status, first-generation status, and year of study, no between-institution differences were found in overall satisfaction (Grayson et al., 2019). These Canadian studies show that making distinctions among universities on satisfaction scores without adjusting for other important variables results in a distorted picture.

Even if satisfaction results are adjusted for only one of the potential control variables included in Maclean's special edition, the allocation of universities to satisfaction groups changes drastically. This is brought home by the figures on satisfaction adjusted for quality of interactions that are summarized in column 3 of Table 2. The differences between the unadjusted and adjusted satisfaction scores are found in column 4. These figures represent variations in satisfaction that cannot be attributed to quality of interactions. A positive score in column 4 indicates that the institution's number of satisfied students was higher than predicted by the university's quality of interactions; in other words, factors other than quality of interactions were at work. For example, ACAD's unadjusted satisfaction score was 5% higher than predicted on the basis of the quality of interactions on campus. A negative sign shows that the percentage of satisfied students was lower than warranted by the quality of interactions. For example, based on its quality of interactions score, it could have been expected that the percentage of Lakehead's first-year students who were satisfied with their first-year experience would have been 6% higher than observed. In other words, Lakehead's satisfaction score inadequately reflected the quality of interactions on campus. A score of zero indicates congruency between the quality of interactions and satisfaction scores, as found at Brandon, McMaster, and Waterloo.

Using the standard deviation of 5.5 from column 2 as the criterion, it is clear that after adjustments for quality of interactions, six institutions had satisfaction scores that inadequately reflected the nature of campus interactions. Lakehead (-6), OCAD U (-6), Simon Fraser (-8), St. Francis Xavier (-7), and Tyndale (-7) were short-changed: in their satisfaction scores, they did not receive full recognition for their quality of interactions. By contrast, MacEwan (+6) scored higher than would be expected.

When the values in column 3 were subjected to a two-step cluster analysis, three groups were identified: low, medium, and high satisfaction. A discriminant analysis showed these differences to be statistically significant, and the silhouette measure of cohesion and separation was good. The numbers of universities falling into each group were 11, 20, and 29 respectively. The mean adjusted satisfaction scores for these groups were 78%, 83%, and 90%.
Post hoc tests (Bonferroni) indicated that the overall and between-group differences were statistically significant. The relationship between membership in the satisfaction and adjusted satisfaction groups is summarized in Table 5. The table shows that of the universities originally grouped in the high category, 26% dropped into the low group once their satisfaction scores were adjusted for quality of interactions. A further 12% fell into the medium category. None of the universities in the low satisfaction category retained this status once satisfaction scores had been adjusted. Instead, 83% and 17% found positions in the medium and high categories respectively.

Table 5
Adjusted Satisfaction Placement by Unadjusted Placement

Adjusted Group Placement    Unadjusted High    Unadjusted Low
Low                         26%                0%
Medium                      12%                83%
High                        62%                17%
Total                       100%               100%
Cases                       42                 18

Fisher's p < .05

This amount of churning is further confirmation of the limitations of simply ordering institutions in terms of unadjusted satisfaction. Simply controlling for quality of interactions, the only NSSE engagement category found to be statistically significant in explaining institutional levels of satisfaction, completely changed the ordering of Canadian universities.

The specific institutions that changed their satisfaction status once adjustments were made for quality of interactions can be found by comparing columns 2 and 3 of Table 6. For example, we see that McGill fell from high to medium. The status of Memorial went from low to high. York's position went from low to medium. In presenting the results of the analysis in which satisfaction is adjusted for quality of interactions, I am not arguing that the latter should always be used in all analyses. It all depends on the intent of the researcher. I am simply trying to drive home the fact that an institution's placement varies depending upon the amount of available information. This said, either of the two clusterings discussed in this article would be superior to the current rank-ordering of universities.

Table 6
Original and Adjusted Group Placement

Institution             Original Group    Adjusted Group
ACAD                    High              High
Acadia                  High              High
Alberta                 High              High
Brandon                 High              High
Dalhousie               High              High
King's (Edmonton)       High              High
Laval                   High              High
Lethbridge              High              High
Moncton                 High              High
Mount Allison           High              High
Mount Royal             High              High
Mount Saint Vincent     High              High
Sheridan                High              High
St. Thomas              High              High
Thompson Rivers         High              High
UBC (Okanagan)          High              High
UNB                     High              High
UQAM                    High              High
Brock                   High              High
Guelph                  High              High
Laurentian              High              High
McMaster                High              High
Nipissing               High              High
Queen's                 High              High
Trent                   High              High
UOIT                    High              High
MacEwan                 High              Medium
McGill                  High              Medium
UPEI                    High              Medium
Carleton                High              Medium
Western                 High              Medium
Ambrose                 High              Low
Brescia (Western)       High              Low
Briercrest              High              Low
Cape Breton             High              Low
Quest                   High              Low
Saint Paul (Ottawa)     High              Low
Sherbrooke              High              Low
St. Francis Xavier      High              Low
Trinity Western         High              Low
Tyndale                 High              Low
Wilfrid Laurier         High              Low
Calgary                 Low               High
Memorial                Low               High
Victoria                Low               High
Concordia               Low               Medium
Manitoba                Low               Medium
Saint Mary's            Low               Medium
Saskatchewan            Low               Medium
Simon Fraser            Low               Medium
UBC (Vancouver)         Low               Medium
Winnipeg                Low               Medium
OCAD U                  Low               Medium
Ottawa                  Low               Medium
Ryerson                 Low               Medium
Windsor                 Low               Medium
York                    Low               Medium
Lakehead                Low               Medium
Toronto                 Low               Medium
Waterloo                Low               Medium

Discussion

I began this report with two questions. First, how credible are NSSE's claims? Second, does Maclean's make the best possible use of the data at its disposal?

In answer to the first question, NSSE contends that student engagement contributes to desired outcomes. This is true; however, as its own research shows, at best, after the imposition of appropriate controls at both individual and institutional levels, the net effects are modest. Pre-entry characteristics are of far more consequence. The results of Canadian research also point to the limited effect of student engagement on important outcomes.

NSSE further believes that in the absence of information on university outcomes, its measures of student engagement can be used as proxies for desiderata such as student learning and retention. This is a fallacious argument: if engagement variables are at best weakly connected to some important outcomes, how can they be used to identify them? A related claim is that high scores on 10 engagement categories can be used as an indication of institutional quality. If there were a one-to-one correspondence between engagement practices and outcomes like GPA, this would be a reasonable argument.
As shown earlier, however, in the United States, at the individual level, engagement variables explained only 13% of the variance in GPA. While effects of this magnitude are common in the social sciences, in my opinion they are insufficient to support claims that universities should be rated in terms of student engagement or that benefits will likely accrue to students who enroll in high-ranking institutions. Overall, institutional quality should not be reduced to the limited measures of student engagement operationalized by NSSE.

In this study it was clear that only one engagement category, quality of interactions, was of consequence for first-year satisfaction. Apparently, many practices that researchers using NSSE data found important to this outcome at the individual level are of little consequence when aggregate data are used. As a result, it is possible that institutional practices, like providing better client care, could increase rates of first-year satisfaction more than the encouragement of more student-faculty interaction. In view of these considerations, it is reasonable to conclude that in Canada many claims made by Maclean's based on NSSE's underlying model are exaggerated.

It is equally clear that Maclean's does not make sufficient use of the available data. By simply listing institutional standings on the 10 engagement categories, student satisfaction, and thoughts of return, the magazine likely contributes to a misunderstanding by parents and potential students of the best places in Canada to study. As a result, parents may spend money on sending their children to universities at a distance when closer ones might suffice.
As shown in this article, when organized in terms of first-year satisfaction, depending upon the adjustments made, there are only two or three groups of Canadian universities. Moreover, despite some differences in satisfaction, in the unadjusted analysis yielding two groups, each cluster supported roughly equal levels of student engagement. It was also seen that many studies show that once controls have been imposed for background characteristics such as prior achievement, levels of cultural capital, and so on, differences in satisfaction among institutions are drastically reduced. Even on the basis of the limited information made available by Maclean's, we saw some confirmation of this possibility: when satisfaction was adjusted for quality of interactions, a number of institutions moved from the high to the low satisfaction group, while others moved in the opposite direction. It is highly likely that were it possible to utilize NSSE questions, link them to administrative data, and control for further variables, additional changes would occur. Such measures would possibly further diminish inter-university differences.

Overall, the picture left by Maclean's rankings of Canadian universities is highly problematic. On the basis of the same raw data made available to readers, rather than simply ranking schools, it would have been possible to see what unites as well as what separates Canadian universities. Given the results of the current study, had this strategy been followed, we likely would have seen little variation in the engagement practices found on Canadian campuses.

What are the implications of these findings for students? In reply, students should first identify universities offering programs in which they are interested. Having done this, they should take other considerations into account: what are the costs of attending one university compared to another; how do the campuses "feel"; what kind of student housing is available; and so on. Their last concern should be with Maclean's rankings.

Nothing I have said is particularly new. In 1991, Pascarella and Terenzini wrote of American colleges and universities that:

there are clear and unmistakable differences among postsecondary institutions in a wide variety of areas, including size and complexity, control, mission, financial and educational resources, the scholarly productivity of faculty, reputation and prestige, and the characteristics of the students enrolled. (p. 589)

They then qualify their position. "Despite their structural and organizational differences," Pascarella and Terenzini consider the possibility that universities':

similarities in curricular content, structures and sequencing; instructional practices; overall educational goals; faculty values; out-of-class experiences; and other areas do in fact produce essentially similar effects on students, although the "start" and "end" points may be very different across institutions. (p. 589)

In the years since they wrote these words, little has changed.

I must stress that in this article I have concentrated on institutional scores of first-year student satisfaction. On the basis of the NSSE data provided by Maclean's, it would be possible to conduct similar analyses of final-year satisfaction and of the thoughts of first- and final-year students regarding returning to the same institution. I will not prejudge the findings of such inquiries. Also, by definition, I have focused on undergraduate education. Dynamics at post-graduate levels may be totally different.
I might also mention that analyses similar to those utilized in this article could be conducted on other dimensions utilized by Maclean's that are not derived from the NSSE. Were such examinations carried out, it is possible that they too would reveal the inadequacy of simply ranking universities. A finding such as this would further contribute to the realization that Canadian universities have more in common than suggested by Maclean's.

Conclusion

A large number of Canadian universities participate in the NSSE. The results of this activity supposedly contribute to an understanding of university outcomes and provide a basis for university decision making. In addition, institutional scores on the various engagement categories are taken as a manifestation of a university's overall quality. Once a year, among many other measures, Maclean's publishes, in rank order, the scores obtained by Canadian universities on 10 engagement categories and four university outcomes. The data are presented in an uncritical fashion. No attempt is made to inquire into similarities and differences between and among institutions. The likely result of this practice is that many readers perceive a hierarchy of institutions. Certainly many universities do, as shown by the way in which some exult in their status.

Despite Maclean's practice, if we focus only on first-year student satisfaction, we see that there is no simple hierarchy of institutions. Without adjustments, there are two statistically significant groups of Canadian universities; the satisfaction scores of one group are 9% higher than those of the other. With adjustments, the number of groups increases to three.

By focusing on student satisfaction, I am not suggesting that it is the standard to be used in the evaluation of Canadian universities. Nothing could be further from the truth. I am merely arguing that if we focus on first-year satisfaction, it is clear that the simple ranking provided by Maclean's paints a distorted picture of what happens on Canadian campuses. (At this point we do not know if the analysis of other outcomes would lead to similar conclusions.) In addition, based on the two-group analysis, it is evident that for all student engagement categories, differences among universities are slight. In other words, despite differences in first-year satisfaction, Canadian institutions of higher learning have more or less equal measures of student engagement. Were it possible to control for several confounding variables, even the existing differences could potentially decrease. It is possible that examinations of other outcomes would be consistent with this conclusion.

In view of these findings, students and parents would do well to avoid making enrollment decisions on the basis of Maclean's satisfaction, and possibly other, rankings. Instead, they should consider the types of programs offered by different institutions, locations, costs, and so on. No matter where they attend, students' experiences will have many common elements.

Universities should also exercise caution. No matter how tempting it is to view one's institution as ahead of the pack, NSSE results cannot be used to support assumptions of this nature. It follows that when policy decisions are made, rankings based on NSSE data should not be the first consideration.

Does all of this mean that there is no merit in the NSSE? Not at all.
In view of these findings, students and parents would do well to avoid making enrollment decisions on the basis of Maclean’s satisfaction rankings, and possibly its other rankings as well. Instead, they should consider the types of programs offered by different institutions, their locations, their costs, and so on. No matter where students enroll, their experiences will have many common elements.

Universities should also exercise caution. No matter how tempting it is to view one’s institution as ahead of the pack, NSSE results cannot be used to support assumptions of this nature. It follows that when policy decisions are made, rankings based on NSSE data should not be the first consideration.

Does all of this mean that there is no merit in the NSSE? Not at all. What is in question is the way in which it is used. As noted earlier, each university should, at least once, link its NSSE results to administrative records so that objective measures of grades and retention can be obtained (an agency other than NSSE should conduct this analysis!). Each institution could then assess the degree to which the model underlying the NSSE applies to its circumstances. Should the model be validated, future NSSE results could be used and interpreted in an informed way. If the model were not validated, money allocated to the survey would be better spent elsewhere.
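A validation exercise of this kind might look like the following sketch. The student-level file linking NSSE responses to administrative records is hypothetical, and the predictor names are invented for illustration; they are not actual NSSE item or indicator labels.

```r
# A minimal sketch, assuming a hypothetical student-level file in which
# NSSE responses have been linked to administrative records.
linked <- read.csv("nsse_admin_linked.csv")  # hypothetical file

# Do the engagement indicators predict objective outcomes here?
# Retention (0/1) is modelled with logistic regression, GPA with OLS.
retention_fit <- glm(retained ~ higher_order_learning +
                       collaborative_learning + quality_interactions,
                     family = binomial, data = linked)
gpa_fit <- lm(gpa ~ higher_order_learning + collaborative_learning +
                quality_interactions, data = linked)

summary(retention_fit)
summary(gpa_fit)  # weak coefficients would cast doubt on the NSSE model
```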
What about Maclean’s? Is there any merit to its undertaking? It is very important that Canadian students, and parents who may not themselves have benefited from a higher education, have access to an inexpensive, readily accessible, and easily read source of information on the universities to which their sons and daughters might aspire. Maclean’s could help meet this need. This said, the magazine would have more credibility if it departed from its current practice of presenting information in a way that suggests a hierarchy of institutions. Instead, Maclean’s should take steps to ensure that, in addition to showing differences among universities, similarities are not ignored. Furthermore, the magazine should exercise caution with respect to the dimensions on which it relies in ranking institutions. Not to do so belies reality and potentially complicates the university decision-making process for young Canadians.

Acknowledgements

I would like to thank James Côté and two reviewers for comments made on an earlier version of this paper.

References

Acadia University. (2010). Acadia among the best in Canada. https://www2.acadiau.ca/home/news-reader-page/acadia-among-the-best-in-canada.983.html

Astin, A. (1993). What matters in college? Jossey-Bass.

Cheong, K. C., & Ong, B. (2016). An evaluation of the relationship between student engagement, academic achievement, and satisfaction. In S. F. Tang & L. Logonnathan (Eds.), Assessment for learning within and beyond the classroom (pp. 409–416). Springer.

Dolnicar, S. (2002). A review of unquestioned standards in using cluster analysis for data-driven market segmentation. https://ro.uow.edu.au/commpapers/273

Drewes, T., & Michael, C. (2006). How do students choose a university? An analysis of applications to universities in Ontario, Canada. Research in Higher Education, 47(7), 781–800. https://doi.org/10.1007/s11162-006-9015-6

Dwyer, M. (2018, December 21). National Survey of Student Engagement: Results for Canadian universities. Maclean’s. https://www.macleans.ca/education/national-survey-of-student-engagement-results-for-canadian-universities/

Fosnacht, K., & Gonyea, R. M. (2018). The dependability of the updated NSSE: A generalizability study. Research and Practice in Assessment, 13(Summer/Fall), 62–74. https://www.rpajournal.com/dev/wp-content/uploads/2019/01/RPA_Summer_Fall_Issue_2018_A5.pdf

Fournier, M., Massei, N., Bakalowicz, M., & Dupont, J. P. (2007). Use of univariate clustering to identify transport modalities in karst aquifers. Comptes Rendus Geoscience, 339(9), 622–631. https://doi.org/10.1016/j.crte.2007.07.009

Fraley, C., Raftery, A. E., Murphy, T. B., & Scrucca, L. (2012). MCLUST version 4 for R: Normal mixture modeling for model-based clustering, classification, and density estimation. University of Washington. https://www.researchgate.net/publication/257428214_MCLUST_Version_4_for_R_Normal_Mixture_Modeling_for_Model-Based_Clustering_Classification_and_Density_Estimation

Grayson, J. P. (1998). Racial origin and student retention in a Canadian university. Higher Education, 36, 323–352. https://doi.org/10.1023/A:1003229631240

Grayson, J. P. (1999). The impact of university experiences on self-assessed skills. Journal of College Student Development, 40(2), 687–700.

Grayson, J. P. (2008). The experiences and outcomes of domestic and international students at four Canadian universities. Higher Education Research and Development, 27(3), 215–230. https://doi.org/10.1080/07294360802183788

Grayson, J. P. (2011). Cultural capital and academic achievement of first generation domestic and international students in Canadian universities. British Educational Research Journal, 37(4), 605–630. https://doi.org/10.1080/01411926.2010.487932

Grayson, J. P., Côté, J., Chen, L., Kenedy, R., & Roberts, S. (2019). A call to action: Academic skill deficiencies in four Ontario universities. https://skillsforuniversitysuccess.info.yorku.ca

IBM. (n.d.). TwoStep cluster analysis. https://www.ibm.com/support/knowledgecenter/en/SSLVMB_24.0.0/spss/base/idh_twostep_main.html

Kent, P., Jensen, B. K., & Kongstead, A. (2015). A comparison of three clustering methods for finding subgroups in MRI, SMS or clinical data: TwoStep Cluster analysis, Latent Gold and SNOB. BMC Medical Research Methodology, 14(113), 1–14. https://doi.org/10.1186/1471-2288-14-113

Kuh, G. (2003). The National Survey of Student Engagement: Conceptual framework and overview of psychometric properties. https://centerofinquiry.org/wp-content/uploads/2017/04/conceptual_framework_2003.pdf

Kuh, G., Kinzie, J., Cruce, T., Shoup, R., & Gonyea, R. M. (2007). Connecting the dots: Multi-faceted analyses of the relationships between student engagement results from the NSSE, and the institutional practices and conditions that foster student success. https://scholarworks.iu.edu/dspace/handle/2022/23684

Maclean’s. (2020). Maclean’s university guide 2020. https://www.macleans.ca/education-hub/

Mount Allison University. (2019). Canada's #1 undergraduate university. https://admissions.mta.ca/topreasons/?gclid=CjwKCAjw-OHkBRBkEiwAoOZqlx-b1qobDyPAfPX_H4_QmvigJpyaNBqkyozrCYE1drGiAYKn27nkIRoCQQwQAvD_BwE

National Survey of Student Engagement (NSSE). (2019a). About NSSE. http://nsse.indiana.edu/html/about.cfm

National Survey of Student Engagement (NSSE). (2019b). Engagement indicators & high-impact practices. http://nsse.indiana.edu/pdf/EIs_and_HIPs_2015.pdf

National Survey of Student Engagement (NSSE). (2019c). NSSE's conceptual framework (2013). http://nsse.indiana.edu/html/conceptual_framework_2013.cfm

National Survey of Student Engagement (NSSE). (2019d). Survey instrument. http://nsse.indiana.edu/html/survey_instruments.cfm
North, M. (2009, August 14–16). A method for implementing a statistically significant number of data classes in the Jenks algorithm [Paper presentation]. International Conference on Fuzzy Systems and Knowledge Discovery, Tianjin, China. https://doi.org/10.1109/FSKD.2009.319

Pascarella, E. T., Seifert, T. A., & Blaich, C. (2010). How effective are the NSSE benchmarks in predicting important educational outcomes? Change: The Magazine of Higher Learning, 42(1), 16–22. https://doi.org/10.1080/00091380903449060

Pascarella, E. T., & Terenzini, P. (1991). How college affects students. Jossey-Bass.

Peters, T. J., & Waterman, R. H. (1982). In search of excellence. Harper & Row.

Planning and Institutional Analysis. (2010). Student life and learning focus on students, measuring our success on the undergraduate experience at McGill: Results from the National Survey of Student Engagement (NSSE). McGill University. https://www.mcgill.ca/apb/files/apb/NSSE_Measuring_our_Success_Report_Final.pdf

Rockenbach, A. N., Pascarella, E. T., Wolniak, G. C., Mayhew, M. J., Bowman, N. A., Terenzini, P. T., & Seifert, T. A. D. (2016). How college affects students: 21st century evidence that higher education works. Jossey-Bass.

SAS. (2015). The MODECLUS procedure. SAS Institute Inc. https://support.sas.com/documentation/onlinedoc/stat/141/modeclus.pdf

Sriwanna, K., Boongoen, T., & Iam-On, N. (2016). An enhanced univariate discretization based on cluster ensembles. In K. Lavangnananda, S. Phon-Amnuaisuk, W. Engchuan, & J. Chan (Eds.), Intelligent and evolutionary systems: Proceedings in adaptation, learning and optimization (Vol. 5). Springer. https://doi.org/10.1007/978-3-319-27000-5_7

Universities Canada. (2018). Enrolment by university. https://www.univcan.ca/universities/facts-and-stats/enrolment-by-university/

Zilvinskis, J., & Rocconi, L. (2018). Revisiting the relationship between institutional rank and student engagement. The Review of Higher Education, 41(2), 253–280. https://doi.org/10.1353/rhe.2018.0003

Contact Information

J. Paul Grayson
[email protected]

Notes

1. See Appendix A for a more detailed explanation.

2. For certainty, I did the following with the satisfaction variable. Two-step clustering had identified two clusters; as a result, I specified two clusters for k-means. The outcome was the same as that achieved using two-step. (A sketch of this check appears after these notes.)

3. The logic behind, and procedures involved in, univariate cluster analyses are available for statistical programs like SAS (SAS, 2015) and R (Fraley et al., 2012).

4. With reference to this limitation, North (2009, p. 1) writes of the Jenks procedure that, “without a mechanism for determining the appropriate number of classes for a given dataset, the results of Jenks classification may be inaccurate, or worse, arbitrary.”
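The robustness check described in note 2 can be sketched as follows, continuing the hypothetical scores data frame from the earlier examples. SPSS's two-step procedure is not available in R, so the model-based solution from mclust stands in for it here; that substitution is an assumption, not the procedure used in this article.

```r
# Approximate the note 2 check: let model-based clustering choose the
# number of groups, then run k-means with that same number and compare.
library(mclust)

mb <- Mclust(scores$satisfaction)  # BIC selects the number of clusters (mb$G)
km <- kmeans(scores$satisfaction, centers = mb$G, nstart = 25)

# Agreement between the two partitions; 1 indicates identical groupings.
adjustedRandIndex(mb$classification, km$cluster)
```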
Appendix A: NSSE Engagement Indicators

Source: http://nsse.indiana.edu/pdf/EIs_and_HIPs_2015.pdf