ERIC ED594960: A Case Study to Examine Three Peer Grouping Methodologies. Professional File. Article 142, Summer 2017


PROFESSIONAL FILES | SUMMER 2017 VOLUME
Supporting quality data and decisions for higher education.
© Copyright 2017, Association for Institutional Research

Letter from the Editor

Summer brings time to reflect and recharge. The Summer 2017 volume of AIR Professional Files presents four articles with intriguing ideas to consider as you plan for the next academic year.

Data governance is a pressing issue for many IR professionals, as sources of data proliferate and challenge our ability to control data integrity. In her article, Institutional Data Quality and the Data Integrity Team, McGuire synthesizes and interprets results from 172 respondents to an AIR-administered survey of postsecondary institutions on their data integrity efforts. She describes the current state of data governance and offers strategies to encourage institutional leaders to invest in data quality.

Those of us who work in assessment often take it for granted that assessment results will be used for learning improvement. Fulcher, Smith, Sanchez, and Sanders challenge this assumption by analyzing information from program assessment reports at their own institution. Needle in a Haystack: Finding Learning Improvement in Assessment Reports uncovers many possible reasons for the gap between obtaining evidence of student learning and using that evidence for improvement. The authors suggest ways to promote learning improvement initiatives and share a handy rubric for evaluating assessment progress.

Institutional researchers are beset with requests to form peer groups, and it seems that no one is ever satisfied with the results. Two articles in this volume present very different methodologies for forming sets of comparison institutions. In her article, A Case Study to Examine Three Peer Grouping Methodologies, D'Allegro compares peer sets generated by different selection indices. She offers guidance for applying each index and encourages cautious interpretation of results.
Rather than rummaging around for the perfect peer set, Chatman proposes creating a clone, or doppelganger university, one that is constructed from disaggregated components drawn from diverse data sources. In Constructing a Peer Institution: A New Peer Methodology, he walks us through the process of creating peers for faculty salaries, instructional costs, and faculty productivity. While the constructed peer approach has its challenges, the appeal of achieving a perfect-fit peer is undeniable.

I hope your summer "reflection" inspires you to share your work with your IR colleagues through AIR Professional Files.

Sincerely,
Sharron L. Ronco

IN THIS ISSUE...

Article 140: Institutional Data Quality and the Data Integrity Team. Author: Katherine A. McGuire
Article 141: Needle in a Haystack: Finding Learning Improvement in Assessment Reports. Authors: Keston H. Fulcher, Kristen L. Smith, Elizabeth R. H. Sanchez, and Courtney B. Sanders
Article 142: A Case Study to Examine Three Peer Grouping Methodologies. Author: Mary Lou D'Allegro
Article 143: Constructing a Peer Institution: A New Peer Methodology. Author: Steve Chatman

EDITORS

Sharron Ronco, Coordinating Editor, Marquette University
Leah Ewing Ross, Managing Editor, Association for Institutional Research
Lisa Gwaltney, Editorial Assistant, Association for Institutional Research

ISSN 2155-7535

PROFESSIONAL FILE ARTICLE 142

A CASE STUDY TO EXAMINE THREE PEER GROUPING METHODOLOGIES

Mary Lou D'Allegro

About the Author
Mary Lou D'Allegro is associate provost at Paul Smith's College.

Acknowledgments
This paper is an update to a previous study published in AIR Professional Files: M. L. D'Allegro and K. Zhou, "A Case Study to Examine Peer Grouping and Aspirant Selection," Professional Files (Fall 2013), Association for Institutional Research. The following faculty inspired the author to develop additional novel peer selection indices not noted in previously published studies: Aaron Pacitti, Douglas T. Hickey Chair in Business and Associate Professor of Economics, and John Cummings, Dean of Science and Professor of Physics, both at Siena College. Thank you for intelligent and imaginative approaches in selecting institutional peers.

Abstract
This study considered three selection indices to choose institutional peers: (a) proximity, (b) percentile, and (c) normative. Although the three are conceptually similar, only the proximity selection index had been previously studied. The purpose of this paper is threefold. First, the procedures used to generate the peer sets for each selection index are provided. Second, an empirical investigation was conducted to compare the institutional peers chosen by each selection index using those procedures. Third, the stability of peer selection over time was ascertained from that inquiry. Compiled separately from two data sets extracted three years apart, the three selection indices under investigation yielded remarkably different sets of peers. Fewer than half of the institutions used in this study were identified as peers at both points in time. Additional analyses revealed that the underlying distributions of the characteristics used to select peers might be just as influential as the characteristics themselves. The results did not produce sufficient evidence to endorse any one of the selection indices, but instead suggest that a combination of selection indices might be superior to any one selection index alone.

BACKGROUND

The continued increase in public scrutiny of higher education, the expanded demands of accountability, and the overall cynicism about the value of higher education have put colleges and universities on high alert. To counter this skepticism, colleges and schools have increased their efforts to evaluate their quality, efficiency, and effectiveness (Ruben, 2004). A growing and important segment of that evaluation is comparison and benchmarking against like institutions (Qayoumi, 2012). Therefore, peer selection has become more prevalent. Moreover, higher education has seen the benefit of using peer comparisons and benchmarking to inform decision making and strategic planning.

This research builds on previous work that examined the methodology to choose a set of institutional peers. Specifically, that research investigated the usefulness of the proximity selection index and proposed a standardized equation to foster ease of replication. In that work, the proximity selection index was deemed an appropriate methodology for the selection of a generic set of institutional peers (D'Allegro & Zhou, 2013). For this research, institutional peers are defined as institutions that are similar with regard to certain delineating factors (Anderes, 1999; Trainer, 2008). A selection index is a numerical designation system that indicates the extent to which an institution is a potential peer.

Faculty proposed to the researcher two different approaches to peer selection indices that were not considered in the researcher's previously published work. The faculty's suggestions seemed rational because their methodologies might temper potential irregularities in the data. In particular, their proposed selection indices either (a) relied on statistics that were less susceptible to the perils of non-normal distributions than the standard deviation used in the proximity selection index or (b) standardized the distribution so that imperfections in the data were minimized. As will be discussed in the "Methodology" and "Results" sections, non-normal distributions can acutely affect the set of peer institutions that are selected. This further confirmed that the process for determining peers seems to be arbitrary (Anderes, 1999). Accordingly, there is little or no evidence of the quality or adeptness of processes to select a set of peers. Careful planning and investigation of the criteria used to select a set of institutional peers is still advised, but the researcher realized the frailty of even the most careful undertaking of selecting a set of institutional peers, including the conclusions of previous research.

At the heart of the paper is the description of three different selection indices and the ensuing peer sets created by each. Those selection indices were similar to the nearest neighbor rationale (McLaughlin, Howard, & McLaughlin, 2011). For all three selection indices, the distance between any given institution, or comparison institution, and the target institution on predetermined parameters was calculated. The divergence among selection indices is their underlying distributions. Correspondingly, the primary purposes of the study were to (a) determine and document the differences, if any, in the institutional peer sets produced by each selection index; (b) conclude, from any differences, which index is best; and (c) ascertain the stability of peer selection over time.

METHODOLOGY

This study does not abandon previously applied principles and, as such, uses a variety of sources and methods to maintain a practical balance between stakeholder judgment and statistical analysis (Trainer, 2008). Credibility of the institutional peer sets relies on constituent input. Not only were faculty and staff consulted for this compilation, but in addition the concept for the alternative selection indices arose from the propositioned reasoning of two faculty members. Hence, selection methodologies were based primarily on constituent suggestions and on other documented peer selections.

In the original research, an attempt to find a quick, pragmatic method to choose a set of peers from two or three institutional characteristics was unsuccessful. Using different combinations of those institutional characteristics, it was discovered that the resulting peer sets were similar to the target institution with respect to some data elements but different with respect to others. Those differences were substantial enough to render the selection process ineffectual. This reinforces previous findings that institutional characteristics alone are not sufficient for choosing institutional peers (Shin, 2009).

Instead, a more-informed and more-comprehensive process was tested. The selection process entailed five steps outlined by D'Allegro and Zhou (2013): (a) identifying an initial set of peers, (b) choosing the preliminary set of variables, (c) transforming and standardizing variables, (d) determining the best set of variables to use, and (e) establishing the best selection strategy. This research is fundamentally indistinguishable from that research except for the last step. Therefore, a pithy summary of Steps 1–4 is provided, along with a comprehensive description of Step 5.

1. Identifying an Initial Set of Peers

The initial set of peers was selected a priori to this study. To recap, an initial set of institutional characteristics was identified to eliminate from further analysis institutions that would not realistically be considered a peer of the target institution. The initial set of institutions was chosen from an original list of private, nonprofit institutions that submitted data to the Integrated Postsecondary Education Data System (IPEDS), from the Data Compare Institutions website. The list was generated using the EZ Group option (National Center for Education Statistics [NCES], 2012). Data for these institutions were collected for 2010 and 2011; these were the most recent data available at the time of the previous study. An updated data set was identically assembled using 2014 and 2015 information; these were the most recent data available at the time of this study. Note that for the target institution, the 2015 Basic Carnegie Classification did not change from 2010 (Carnegie Foundation, 2015). Furthermore, only the 2015 Basic Carnegie Classification was currently available on the EZ Group option.

Lists for both time periods were generated using the following criteria: (a) private not-for-profit institution, 4-year or above; (b) highest degree awarded either a bachelor's degree, master's degree, or both; (c) baccalaureate college for arts and sciences, or baccalaureate college balanced arts and sciences, diverse fields; (d) enrolled full-time undergraduate students; (e) institution size between 1,000 and 9,999 students; (f) Title IV participant (federal financial aid eligibility); (h) located in the United States or designated as a U.S. Service School (e.g., U.S. Naval Academy); and (i) not a tribal college. These parameters align with the characteristics of the target institution. This is also on par with selection parameters recommended by previous studies (Anderes, 1999). As a result of applying these criteria, 285 institutions were selected for the previous study, while the updated list yielded 232 institutions.

2. Choosing the Preliminary Set of Variables

Other pertinent information was collected for each of these institutions. Relevant, in the context of selecting peers, are those data points that indicate the institution's priorities (Anderes, 1999; Cohodes & Goodman, 2012). For the most part, an institution's focus is on quality. As such, the target institution's own Key Performance Indicators (KPIs) were the starting point. KPIs are a mix of approximately 20 output, or direct, measures of quality and input measures, or influencers, of quality. Therefore, the initial set of variables chosen either had some influence on quality or included direct measures of institutional performance. Faculty and staff were also asked to rate the importance of each KPI, being mindful of the importance of using both input and output variables in the peer selection process.

The data also had to be easy to access for all or most institutions. Several sources were considered, including (a) National Survey of Student Engagement (NSSE) benchmarks, (b) the American Association of University Professors (AAUP) Faculty Compensation Survey (2012), (c) the Noel Levitz Student Satisfaction Inventory (NLSSI), and (d) U.S. News & World Report rankings (U.S. News & World Report, 2015). Nevertheless, not all institutions participate in the NSSE or NLSSI or administer these surveys within a reasonable time period to allow comparisons. Also, detailed AAUP faculty salary data are not available for many institutions. Consequently, data were obtained from IPEDS or the U.S. News & World Report rankings.

The preliminary set of 28 variables is shown in Appendix A, along with the institutional characteristics used to select the initial set of peers. Note that the KPIs have remained the same and, therefore, the faculty were not consulted again for this study. Therefore, no adjustments were needed for the updated data set.

3. Transforming and Standardizing Variables

There was a fair amount of variability in enrollment among the initial set of institutions. Moreover, the enrollment of the target institution was twice the size of most of the institutions in both data sets. Therefore, some of the data elements were standardized to mitigate differences due to institutional size (Gater, 2003; Huxley, 2009). This was accomplished by using the full-time equivalent (FTE) enrollment as the divisor. Examples of data elements that were standardized by dividing by the FTE included the number of conferred bachelor's degrees, number of applicants, unduplicated annual enrollment, instructional expenses, and endowment.

Full-time and part-time faculty counts were combined into one data element. In effect, the proportion of full-time faculty was calculated by dividing the number of full-time faculty by the sum of full-time and part-time faculty.

4. Determining the Best Set of Variables to Use

Of the 28 variables identified in Step 2, three were both output measures and among the target institution's KPIs: (a) ratio of conferred bachelor's degrees to FTE, (b) 1-year retention rate, and (c) 6-year graduation rate. These variables were also student centered (specifically, student success focused) and aligned with the target institution's mission. To augment the data analysis and simplify its interpretation, the remaining variables were classified into one of the following five groups: (a) admissions, (b) faculty, (c) enrollment, (d) institutional characteristics, and (e) finance.

As described in our previous research, several regression analyses, single-step ordinary least squares (OLS), were used to identify the best variables to select a set of peers. In the first phase, regression models were compiled separately for the five variable categories for each of the three output measures, a total of 15 models. Because the analysis was still exploratory at this stage, the single-step enter method was preferred over other models. Distributing the variables into five groups allowed the inclusion of all variables in the model for that category (SPSS, 2012). Informed by previous research, the standardized beta weights were the determinants of which data elements would be used for peer selection (Hom, 2008).

In the second phase, an overall regression model for each output variable was computed using the best predictor(s) from each of the five category regression models. The best predictor(s) had the smallest significance level associated with the standardized beta coefficient. The standardized beta weight's significance level indicates whether a variable is, in fact, a predictor of the output variable (Cohen & Cohen, 1983). Although there were some exceptions, only one predictor from each category was chosen for the three overall models. This was deliberate because there were high correlations among predictors in any given category. In addition, the inclusion of only one or two predictors from each category forced a balance of institutional metrics for peer selection. The best predictors for each KPI regression model by category for the original and updated data sets are listed in Table 1.

Table 1. Overall OLS Regression Models for the Three Performance Indicators: Ratio of Conferred Bachelor's Degrees to FTE, 1-Year Retention Rates, and 6-Year Graduation Rates (standardized beta coefficients)

Original Data Set

Ratio of Conferred Bachelor's Degrees to FTE
  Admissions: 25th Percentile Mathematics SAT, .348*
  Faculty: Average Faculty Salary, –.142
  Enrollment: Estimated Fall Enrollment per FTE, –.053
  Institutional Characteristics: Selectivity, –.282**
  Finance: Instructional Expenses per FTE, .166

1-Year Retention Rates
  Admissions: 25th Percentile Mathematics SAT, .465***
  Faculty: Average Faculty Salary, .135
  Enrollment: FTE, .064
  Institutional Characteristics: Selectivity, .301***
  Finance: Instructional Expenses per FTE, .065

6-Year Graduation Rates
  Admissions: Percent of Students Receiving Federal Grant Aid, –.145**
  Faculty: Average Faculty Salary, .211**
  Enrollment: FTE, .090
  Institutional Characteristics: Selectivity, .274**; Proportion of Transfer Students, –.104**
  Finance: Total Price of Attendance, .007; Instructional Expenses per FTE, .224***

Updated Data Set

Ratio of Conferred Bachelor's Degrees to FTE
  Admissions: Applicants per FTE, –.141*
  Faculty: Percent of Faculty with Terminal Degree, .254**
  Enrollment: 12-Month Enrollment per FTE, .065
  Institutional Characteristics: Selectivity, .038
  Finance: Total Price of Attendance, .393***

1-Year Retention Rates
  Admissions: 75th Percentile Mathematics SAT, .383***
  Faculty: Average Faculty Salary, .086; Percent of Faculty with Terminal Degree, .054
  Enrollment: FTE, .131*; 12-Month Enrollment per FTE, –.050
  Institutional Characteristics: Selectivity, .130
  Finance: Total Price of Attendance, .089; Alumni Giving Rate, 2.229*

6-Year Graduation Rates
  Admissions: 75th Percentile Mathematics SAT, .350***
  Faculty: Average Faculty Salary, –.015; Percent of Faculty with Terminal Degree, .127**
  Enrollment: FTE, .158***; 12-Month Enrollment per FTE, –.040
  Institutional Characteristics: Selectivity, .132*
  Finance: Total Price of Attendance, .181**; Alumni Giving Rate, .202***

Note: * p ≤ .05, ** p ≤ .01, *** p ≤ .001.

5. Establishing the Best Selection Strategy

Peer institutions are determined by having metrics that are close to the target institution's (McLaughlin et al., 2011). This is manifested in the computation of a selection index. Three selection indices were examined: (a) proximity, (b) percentile, and (c) normative. The calculation of each selection index also involves several steps, but the steps are basically the same for each: (a) identifying the most relevant parameters, (b) computing the numerical difference between the comparison and target institutions on each of those parameters, (c) averaging those differences across parameters, and (d) determining range cut-scores to delineate a peer from an almost-peer. The first step, identifying the most relevant parameters, has already been decided by the three overall OLS models mentioned in Step 4. Descriptions of Steps b–c are provided for each index below. The determination of range cut-scores is further described in the "Results" section.

[Figure 1. Selection Index Numeric Assignments for Differences Between Target College and Each Institution in the Initial Data Sets]

Proximity selection index. As mentioned, the numeric differences between the target and each comparison institution were computed for each predictor. The mean of these differences determines an institution's propinquity to the target institution. For the proximity selection index, the unit of measurement is the standard deviation for each predictor. This is depicted in Figure 1, with the assumption, for this illustration, that the underlying data distribution for each predictor is normally distributed. For each predictor, a proximity index score of 1 was assigned to a comparison institution that was between one-half and one standard deviation of the target institution's metric; a score of 2 was given if the comparison institution was within one-half a standard deviation. Equally weighted, the average of the proximity index scores derives the proximity selection index. The two equations that compose the proximity selection index calculation are shown in Appendix B. An example of how to calculate the proximity selection index is provided in Appendix C.

Percentile selection index. For the percentile selection index, differences between the target and each comparison institution were determined for each predictor as they were for the proximity selection index. Moreover, the logic is the same and is shown in Figure 1. However, the boundaries for each percentile index score are determined by the first and third quartile cut-scores, and not by the data distribution's standard deviation as they were for the proximity selection index. In effect, the percentile selection index ensures an equal number of comparison institutions in each partition.

A slight diversion is in order. Normal distributions are not assumed, and skewed variables can still produce accurate results (Smith, 2012). Yet extreme values, or outliers, on the low end or high end of the distribution can affect or skew the distribution and drag the mean away from a true measure of central tendency. Outliers on both ends might also affect the distribution's kurtosis. Kurtosis refers to the width of the peak of the distribution around the measure of central tendency (Hembree, 2013). In turn, this exaggerated dispersion could unduly increase the standard deviation and, thus, stretch the distribution segments. Consequently, a disproportionate number of comparison institutions would receive larger index scores than they deserve because they would be more likely to fall in a subdivision closer to the mean. This might not be a problem per se, but it could compromise the ability of the selection index to distinguish a peer from a non-peer.

On the other hand, the percentile selection index distribution is partitioned with an equal number of comparison institutions in each section. Unlike the proximity selection index, outliers are less likely to affect the percentile selection index because it relies on the median as the center of the distribution and not a potentially displaced mean. Therefore, the percentile selection index could be advantageous relative to the proximity selection index, especially for skewed data distributions.

For each predictor, a percentile index score of 1 was assigned to a comparison institution that was within 25 percentile points of the target institution's metric, and a score of 2 was given if the comparison institution was within 12.5 percentile points of the target institution. This is a smaller partition than the proximity selection index's, given that a percentile index score greater than 0 is awarded if the comparison institution is within 50 percentile points, or half the percentile selection index distribution, versus approximately 68% of the proximity selection index distribution. Equally weighted, the average of the percentile index scores derives the percentile selection index. The two equations used for computing the percentile selection index are shown in Appendix B. An example of how to calculate a percentile selection index is provided in Appendix C.

Normative selection index. Before the boundaries for each normative selection index were established, values for each predictor were converted to z-scores. Each predictor was standardized, with the resulting distribution having a mean of 0 and standard deviation of 1 (SPSS, 2012). That said, the standard normal distributions were derived using the original distribution's mean and standard deviation. Therefore, the effects of the outliers and resulting asymmetrical distributions were not completely eradicated. However, the advantage of these transformed distributions is that the new distributions were symmetrical. In essence, the normative selection index is a hybrid of both the proximity and percentile selection indices. As with the proximity selection index, the mean and standard deviation determine distance or probability. However, as with the percentile selection index, the use of the standard normal distribution ensures that the distribution is sectioned into equal parts.

Another benefit of transforming the original distribution to the standard normal distribution is that the cut-points are easier to compute and conceptualize. As mentioned, the curve created by the z-scores, represented on the x-axis, and the resulting probabilities, plotted on the y-axis, is symmetrical in a standard normal distribution (Weiss, 2015). The difference in the proportion of the total area under the curve that is to the right of the z-score between the comparison institution and the target institution was used to determine distance from the target institution.

For each predictor, a normative index score of 1 was assigned to a comparison institution that was within one-fourth the distance of the total standard normal distribution's area from the target institution. As with the percentile selection index, a score of 2 was given if the comparison institution was within one-eighth of the area, or distance, from the target institution's probability corresponding to the z-score. Equally weighted, the average of the normative index scores derives the normative selection index. The equations used to compute the normative selection index are shown in Appendix B. An example of how to calculate a normative selection index is provided in Appendix C.

RESULTS

For the original data set, there were 58 peers and 47 almost-peers across the three peer selection indices. There were fewer peers in the updated data set, 34; there were 55 almost-peers. Across data sets, the normative selection index in the original data set produced the largest number of peers, 51. The percentile selection index in the updated data set produced the fewest peers, 26, just slightly more than half the size of the largest set of peers or set of almost-peers.

Table 2. Index Score Peer and Almost-Peer Classifications for the Three Selection Indices

Original Data Set
  Proximity: Peer N = 813 (65.6%); Almost-Peer N = 426 (34.4%)
  Percentile: Peer N = 638 (53.8%); Almost-Peer N = 547 (46.2%)
  Normative: Peer N = 756 (60.7%); Almost-Peer N = 487 (39.3%)

Updated Data Set
  Proximity: Peer N = 750 (60.1%); Almost-Peer N = 498 (39.9%)
  Percentile: Peer N = 595 (64.5%); Almost-Peer N = 327 (35.5%)
  Normative: Peer N = 606 (60.2%); Almost-Peer N = 400 (39.8%)

Note: N is the count of index scores for each predictor for each peer and almost-peer; the percent is the share of index scores that were 2 (Peer) or 1 (Almost-Peer).

Selection Index Ranges

Proximity selection index. For the original data, the range of the resulting proximity selection index was 1.33 to 1.78 for the peers and almost-peers. The updated data set posted a range that was slightly more compressed, ProxI Range = 1.44 to 1.78, for the peers and almost-peers. The cutoff for the peer set was the 95th percentile, while the almost-peers were institutions between the 90th and 95th percentiles.

The set of proximity peers and proximity almost-peers changed between the original data set and the updated data set. In part this was due to the smaller set of initial peers in 2016 compared to 2013 (N = 232 and N = 285, respectively). The smaller number of initial peers in the updated data set was the result of several circumstances. For 46 institutions in the original data set's initial group, the Basic Carnegie Classification level changed in 2015 to a master's level. The enrollment of six of these original data set initial institutions dropped below 1,000, and one institution closed.

Examining the individual proximity index scores for each predictor in the original data set, the proximity index scores were more likely to classify a comparison institution as a peer than as an almost-peer (65.6%), although the numbers of peers and almost-peers were the same. This is seen in Table 2. The updated data set was similar in that 60.1% of the proximity index scores categorized a comparison institution as a peer, although peers make up only two-fifths (42.3%) of both sets (11 vs. 15, respectively).

Percentile selection index. For the original data set, the percentile selection index range used to determine the peer institutions and almost-peer institutions was the same as the proximity selection index range for the updated data set (PercI Range = 1.44 to 1.78) but more compressed than the percentile selection index range for the updated data set (PercI Range = 1.11 to 1.56). For comparative purposes, the same cutoffs used for the proximity selection index were also applied to the percentile selection index: 95th percentile or higher for peers and
