Education Policy Analysis Archives

Volume 6, Number 17. September 6, 1998. ISSN 1068-2341.

A peer-reviewed scholarly electronic journal. Editor: Gene V Glass, Glass@ASU.EDU, College of Education, Arizona State University, Tempe AZ 85287-2411. Copyright 1998, the EDUCATION POLICY ANALYSIS ARCHIVES. Permission is hereby granted to copy any article provided that EDUCATION POLICY ANALYSIS ARCHIVES is credited and copies are not sold.

Performance Indicators: Information in Search of a Valid and Reliable Use

E. Raymond Hackett, Auburn University
Sarah D. Carrigan, University of North Carolina at Greensboro

Abstract

Measures of overall institutional performance were explored from a decision support perspective with twenty similar Carnegie Classification Baccalaureate II institutions. The study examined the usefulness of performance indicators in campus decision making following both a hypothesis-testing and a case study approach. Two conclusions were reached: first, that the performance measures most commonly cited in the literature as measures of institutional financial viability are of limited use for institution-specific policy development; and second, that performance indicators are most effectively used within an institution-specific, whole-system framework.

Introduction

State-defined performance indicators for institutions of postsecondary education are rapidly becoming the hallmark of the 1990s. By 1993 over one-third of the states had some form of performance indicator legislation enacted (Bogue, Creech & Folger, 1993), and with each legislative session since the number has increased. Discussion at the state level has begun to shift toward funding the enterprise based on outcomes, effectiveness, and efficiency (Gaither, Nedwek, and Neal, 1994). Significant attempts at operationalizing these concepts and weaving them into the fabric of planning, policy and budget development were given license in several states during the 1997 and 1998 legislative sessions.
In the 1994 Education Commission of the States publication Charting Higher Education Accountability (Ruppert, 1994), a case study of ten states indicated that the adoption of state-level performance indicators most often was done rapidly, relied on existing data, and usually was driven by legislative initiative. This report implied that few states have accomplished the analysis necessary to define measures appropriate for systemic decision making and public reporting.

With the advent of student right-to-know legislation, federally defined performance indicators for institutions of postsecondary education became a larger part of the institutional reporting cycle. In 1996, the Department of Education proposed a far more explicit use of performance indicators, and this proposal led to a national debate. The belief that a unique equation could provide an indication of institutional financial and programmatic health, and that institutional scores on a specific set of indicators should impact the disbursement of federal funds, was outlined in the Federal Register, Volume 61, Number 184, on September 20, 1996. In this Notice of Proposed Rulemaking, the Secretary of Education proposed to amend the Student Assistance General Provision regulations by revising the requirements for compliance audits and adding a new subpart establishing financial responsibility standards. The proposed regulations would require institutions participating in programs authorized by Title IV of the Higher Education Act of 1965, as amended, to meet cutoff scores on certain calculated financial ratios to avoid a compliance audit.

Certainly, institutions of postsecondary education should be held accountable to their constituents, their service area, and the public that provides monetary and other support. However, there is a concomitant reality: the reality of the deans, administrators, faculty and staff attempting to manage real institutions in a real world. At this level there is only one question: How do I make good decisions? And that is a powerful question.
For it is the sum of the decisions made during the campus year that creates the future for an institution. It is the sum of these decisions that leads to outcomes, effectiveness, and efficiency. It is at the decision point where institutional research finds its home and performance indicators have meaning. Offices of institutional research conduct studies and convert data into information for two primary purposes: to support the decision making process by providing analyses that serve to reduce uncertainty prior to making a decision; and to assess how effective the institution has been at meeting the goals and objectives outlined in the campus plan. The former is for internal constituents and the latter for both internal and external constituents.

While there is significant research in postsecondary education on the development of information to support an understanding of the operation and outcomes of the enterprise, further research must be focused on defining decision points. With a taxonomy of decision points, and an understanding of how they interrelate, research can be focused on the amount of uncertainty that is reduced by various performance indicators at given decision points. A clearer understanding of performance indicators and their relationship to decision support must be developed.

This article approaches the use of performance indicators from two perspectives. In the first study, eleven frequently cited performance indicators were used to explore the implications of enrollment stability and financial viability with twenty similar Carnegie Classification Baccalaureate II institutions. This study examined issues addressed in the Federal Register Volume 61 proposal to amend the Student Assistance General Provision regulations by revising the requirements for compliance audits and adding a new subpart establishing financial responsibility standards.
The implication here was that institutional scores on a specific set of indicators define the financial viability of an institution and should impact the disbursement of federal funds.

The second study used a case study approach to focus on a campus included in the sample of institutions used in the first study. In this particular case study, the institution had decided that challenges on two fronts were threatening the institution. The institution moved to change both the population of students served and the focus of the academic program. The use of information and performance indicators to support decisions related to the repositioning was explored.
Study One

Review of the Problem and Literature

Measures of academic programs, staffing, enrollment level, student and faculty characteristics, and revenue and expense can help define an institution's programmatic and financial strengths and weaknesses. At independent institutions, particularly the smaller liberal arts institutions, it is essential that the campus leadership understand the implications of these numeric indicators and their interrelationships. A significant change in the value of key performance indicators at smaller institutions can signify changes that will impact the campus for a given year, or a number of years. With the publishing of the National Association of College and University Business Officers (NACUBO) Financial Self-Assessment: A Workbook for Colleges and Universities in the early 1980s, a move began to understand the campus and campus policy in terms of performance indicators. Certainly, the total quality improvement concept of benchmarks falls along the continuum of work that has been conducted on performance indicators.

There has been significant discussion on the development of performance indicators and their use in higher education. Among the extant models are: NACUBO's Financial Self-Assessment Workbook (1987); Performance Measurement Systems for Higher Education (Kidwell and Long, 1995); Strategic Indicators for Higher Education (Taylor, Myerson and Massy, 1993); and Measuring Up: The Promises and Pitfalls of Performance Indicators (Gaither, Nedwek and Neal, 1994). The Joint Commission on Accountability Reporting (JCAR), a project of the American Association of State Colleges and Universities, the American Association of Community Colleges, and the National Association of State Universities and Land-Grant Colleges, has produced a framework for accountability reporting recently summarized in the 1996 publication JCAR Technical Conventions Manual.
Currently in progress is the NACUBO Benchmarking Project, which is developing quantitative measures to set as a point of reference and standard for basic operations. However, most of the analysis and literature on the development of state-defined, institution-level performance indicators describes a pattern of implementation with little prior conceptual development and a focus on interinstitutional comparison (Bogue, Creech & Folger, 1993). A 1994 Education Commission of the States study found that performance indicator initiatives in the various states contain many of the same measures (Ruppert, 1994). Most of the states studied used 20 or so indicators that were collected by a governing board and reported in a tabular form. The indicators most commonly used reflected some measure of: instructional inputs; instructional process and use of resources; instructional outcomes; efficiency and productivity; diversity and access; articulation; and relation to state needs.

In the 1987 revision of Financial Self-Assessment: A Workbook for Colleges and Universities (Dickmeyer & Hughes), the concept of an overall institutional equation, defined in terms of key performance indicators, was again emphasized. It was strongly implied in this volume that there were ranges within the various indicators presented that indicated good, moderate or poor performance on a given indicator. It was also implied, in this major work of a standing NACUBO committee, that a certain equation could be inferred for an institution from a combination of these indicators. It was further implied that this unique equation could provide an indication of institutional health, and of areas of institutional strength and weakness. Since 1987 a number of institutions have adopted the self-assessment strategy put forth in this volume, and a modest research literature has developed.
A noticeable addition to this strategy was put forth by Mary Sapp (1994) in the AIR Professional File document, Setting Up a Key Success Index Report: A How-To Manual. A recasting of standard financial ratios to accommodate the Financial Accounting Standards Board's Statement of Financial Accounting Standards No. 116
Methods

Data were collected from twenty institutions of the College Information Systems Association for a five-year period, from FY 1992-93 through FY 1996-97, and included 265 measures. Most of these measures were data already being supplied by the colleges to the National Center for Education Statistics and other national organizations such as the Council for the Advancement and Support of Education. Data were collected on revenues, private support, expenditures and transfers, balance sheet items, plant, personnel, faculty development, instruction, student characteristics, financial aid, library holdings, and data processing equipment. A data element dictionary recapitulating and refining national definitions was prepared and taught to the institutions through a series of workshops. Institution-level performance indicators were developed from primary data and took the form of primary data; totals of primary data; percentages of totals; ratios; and appropriate algorithmic transformations. At the time of the study, only twenty of a possible 26 institutions had reported and verified data for the five years under study. The association staff have discovered through this project the difficulty of collecting accurate, timely and comparable data from a number of institutions, even with national data standards. Each campus has a number of primary data providers, and that will confound any study over multiple campuses.

For the purposes of this preliminary investigation, performance indicators carry the maximum of information when they provide the decision making process with insight into whether an institution is maintaining a steady level of viability, losing viability, or gaining viability. Institutions were defined as viable if they maintained enrollment and maintained financial viability. Of course, an essential element of institutional viability is whether an institution is meeting the goals and measurable objectives outlined in the campus plan.
Assessing institutional outcomes in terms of consistency with institutional goals was outside the scope of this study.

The first step in this investigation was to develop a set of core indicators that could provide an indication of institutional viability. In Measuring Up: The Promises and Pitfalls of Performance Indicators (Gaither, Nedwek and Neal, 1994), ten core indicators that are found in use and cited most frequently as measures of institutional viability are listed. In the present study that list was modified slightly to focus on the institutional viability construct outlined in Dickmeyer and Hughes (1987). This construct focused on enrollment stability and flexibility in managing available revenues, funds, and expenditures. An eleventh indicator, percent change in fall full-time equivalent students (FTE), was included with the core indicators in order to explore the concept of institutional viability as defined in this study. The eleven indicators used are defined in Table 1.

Table 1. Definitions of the eleven performance indicators used in Study One.

1. Covered Expenditures: Excess (deficit) of current fund revenues over (under) current fund expenditures.
2. FTE: Fall full-time equivalent students.
3. Percent Change FTE: Percent change in fall full-time equivalent students over the previous year.
4. Constant Dollar Net Student Revenue: Total tuition and fee revenues minus unrestricted current fund scholarships and fellowships, adjusted by the HEPI.
5. Constant Dollar Net Expenditures per Student: Total current fund expenditures and transfers, adjusted by the HEPI index, divided by fall FTE.
6. Tuition Discount Percentage: ((Tuition and fee revenues minus unrestricted current fund scholarships and fellowships) divided by the full-time tuition and fee rate) divided by fall FTE students.
7. Available Funds Ratio: (Sum of the unrestricted current fund balance, quasi-endowment at market value, and unexpended plant fund balance) divided by unrestricted education and general expenditures plus mandatory transfers.
8. Liquidity of the Current Fund Balance: Cash in the unrestricted current fund plus investments in the unrestricted current fund, divided by liabilities in the unrestricted current fund.
9. Average Faculty Salary: Average salary for all full-time faculty.
10. Acceptance Ratio: Number accepted divided by number applied.
11. Matriculant Ratio: Number matriculated divided by number accepted.

The identified indicators were first examined using descriptive statistics and analysis of variance across all the institutions for five years. The institutions were then divided into two groups defined in terms of their viability, based on the stability of the student population and the institution's financial position. For the student population, stability was defined in terms of the number of enrolled students and the change in the number of enrolled students. Financial viability was defined in terms of the institution's ability to meet its financial obligations without significantly changing fund balances, and by the NACUBO ratio-level definition for liquidity of the current fund balance and availability of fund balances to meet current obligations. The performance indicators defined were then compared within the new groupings of institutions, and financial viability and enrollment stability were examined using descriptive and multivariate statistics.

Results

The twenty institutions for which valid and reliable data were available were all Carnegie Classification Baccalaureate II and similar in academic program offerings.
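Several of the indicators in Table 1 are simple arithmetic on audited financial figures. As a minimal sketch of the ratio definitions (the function names and sample figures below are invented for illustration, not drawn from the study's data), indicators 6, 7, and 8 might be computed as:

```python
def tuition_discount_pct(tuition_fee_rev, unrestricted_scholarships,
                         fulltime_tuition_fee_rate, fall_fte):
    """Indicator 6: net tuition relative to the full-time sticker rate,
    per fall FTE student (definition as given in Table 1)."""
    return ((tuition_fee_rev - unrestricted_scholarships)
            / fulltime_tuition_fee_rate) / fall_fte

def available_funds_ratio(unrestricted_cf_balance, quasi_endowment_mkt,
                          unexpended_plant_balance, e_and_g_expend,
                          mandatory_transfers):
    """Indicator 7: funds that could be marshaled, over E&G spending
    plus mandatory transfers."""
    return ((unrestricted_cf_balance + quasi_endowment_mkt
             + unexpended_plant_balance)
            / (e_and_g_expend + mandatory_transfers))

def current_fund_liquidity(cash, investments, liabilities):
    """Indicator 8: liquid assets over liabilities, all within the
    unrestricted current fund."""
    return (cash + investments) / liabilities

# Invented example figures, chosen only to exercise the formulas:
print(tuition_discount_pct(1_000_000.0, 200_000.0, 8_000.0, 100.0))
print(available_funds_ratio(1.0, 1.0, 1.0, 2.0, 1.0))
print(current_fund_liquidity(100.0, 100.0, 50.0))
```

The point of expressing the definitions this way is only to make the arithmetic unambiguous; the study computed these from each campus's verified data element dictionary submissions.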
A Pearson Product Moment Correlation Coefficient was calculated for each of the chosen core indicators in relationship with each other, for all the institutions for all years. That matrix, found in Figure 1, indicated only four relationships of any magnitude: (1) enrollment was positively related to net student revenues; (2) average faculty salary was positively related to net student revenues; (3) the matriculant ratio was positively related to the applicant ratio; (4) the available funds ratio was negatively related to expenditures per student.

Figure 1. Pearson product moment correlation coefficients for the ten core indicators.
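A correlation matrix of this kind is straightforward to reproduce with standard tools. A sketch using `numpy.corrcoef` (the toy values below are invented; the study used the institutions' verified indicator data):

```python
import numpy as np

# Each array holds one indicator across five hypothetical
# institution-year observations (invented values for illustration).
fte        = np.array([900.0, 1100.0, 1000.0, 1300.0, 800.0])
net_rev    = np.array([4.1, 5.0, 4.6, 6.2, 3.5])       # $ millions
fac_salary = np.array([38.0, 41.0, 40.0, 45.0, 36.0])  # $ thousands

# Rows of the stacked matrix are variables, so corrcoef returns the
# Pearson product moment correlation matrix across the indicators.
r = np.corrcoef(np.vstack([fte, net_rev, fac_salary]))
print(r.round(2))  # symmetric, with ones on the diagonal
```

With real data, a matrix like Figure 1 falls out directly; relationships "of any magnitude" can then be screened by thresholding the off-diagonal entries.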
The first three of these relationships might have been expected. The implication that institutions with a stronger available funds position were expending less per student, though understandable, certainly warranted further study. The lack of other relationships was considered the strongest indication that further study was warranted.

In terms of an overall profile, Figure 2 details five years of percent change in fall FTE data for the institutions. Figure 3 details five years of covered expenditures, or the excess (deficit) of current fund revenues over (under) current fund expenditures. As can be seen in Figure 2, in almost every case the institutions managed to maintain or expand enrollment over the five-year period. The financial data presented in Figure 3 suggest that two distinct groups could be developed based on ability to meet expenditure demands with available revenues.

Figure 2. Percent change in fall FTE from previous year.

            FY 92-93   FY 93-94   FY 94-95   FY 95-96   FY 96-97
College A      0.14%      6.60%      9.35%      0.00%      4.02%
College B      1.22%      6.65%      5.29%     -9.87%      0.82%
College C     -0.98%     -1.23%      2.75%      4.87%      1.35%
College D      7.66%     -3.20%      4.50%      2.82%      2.95%
College E    -20.21%     -8.39%     37.94%      2.45%      2.95%
College F     18.63%      3.14%      1.52%     -3.75%      4.89%
College G     -8.47%     -9.72%     -0.31%     -0.56%      4.77%
College H     -6.79%      0.13%      6.60%      3.03%      0.75%
College I     -0.36%      2.61%     -7.18%     -3.40%     -2.08%
College J      0.95%     -0.42%      1.29%    117.53%     29.84%
College K      9.58%      1.73%    -10.88%      8.94%      2.34%
College L        N/A        N/A        N/A        N/A        N/A
College M      6.54%     -0.60%      0.35%       .62%      1.98%
College N        N/A        N/A        N/A        N/A        N/A
College O     11.07%     10.08%     13.73%     -1.25%      8.41%
College P     26.77%      6.99%      8.56%     16.18%     14.62%
College Q      5.04%     -2.40%     -2.69%     -2.40%     -0.61%
College R      3.65%     -3.44%     -3.57%      3.82%      0.11%
College S     -0.59%      4.30%     -1.69%     -2.17%     -0.04%
College T     -3.14%     -6.78%      3.38%     -9.59%     -4.04%

Figure 3. Covered expenditures, FY 1992-93 to FY 1996-97.
             FY 92-93      FY 93-94      FY 94-95      FY 95-96      FY 96-97
College A      $24,071    ($222,895)     $322,656    ($305,060)   $1,967,207
College B     $487,342      $931,612   $1,075,006     ($93,044)   $2,421,350
College C   ($191,674)     ($83,843)       $5,298       $85,799   $1,019,395
College D      $15,881    ($295,349)    ($109,014)   ($301,853)     ($22,474)
College E   ($180,268)    ($955,771)     $118,237      $135,813   $1,105,218
College F   ($376,915)    $1,185,143     $128,121       ($2,205)    ($118,892)
College G   ($408,189)    $1,654,871    ($359,433)  ($1,263,470)    ($689,241)
College H   $1,159,043       $28,428   $1,009,956      $619,496     $678,227
College I      $49,512       $10,728    ($300,306)      $12,480     ($456,712)
College J   $1,570,039    $2,090,371   $1,578,264    $1,533,880     $210,909
College K         $217        $2,707       $1,068        $3,692     $102,458
College L     $808,590      $770,227     $361,301      $452,010     ($398,757)
College M     $753,339      $372,701     $198,771      $468,484     ($179,087)
College N  ($1,065,616)   ($202,832)  ($1,861,550)  ($1,699,522) ($1,173,749)
College O     $434,207      $707,564     $300,052    $1,772,279     $951,086
College P     $265,177      $154,010     $136,554      $264,129      ($74,804)
College Q     ($56,006)   ($325,000)     ($68,577)      $33,890     ($115,278)
College R    ($291,984)   ($389,348)    ($343,438)    ($994,877) ($1,892,805)
College S     $221,497      $212,235     $799,562      $878,162     $245,226
College T      $66,847    ($395,514)    ($783,982)       $5,523      $68,381

After reviewing the covered expenditure data together with the available funds ratio and liquidity of the current fund balance, the institutions were divided into two groups. One group was designated the strong group and consisted of ten institutions that were able to consistently maintain financial viability as indicated by covered expenditures, available funds ratio and liquidity of the current fund balance. The second group was designated the weak group and consisted of ten institutions that were not able to consistently maintain financial viability as indicated by those same measures. These two groups were used to explore the relationship between the construct of institutional viability, defined in terms of the three financial measures and FTE, and the information provided by the selected performance indicators.

It was decided to use multiple linear regression to begin to define sets of information that might be related to institutional viability using the two groups identified.
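The screening step that follows, regressing a dependent measure on one indicator at a time and keeping indicators with a significant F-test, can be sketched in plain numpy. This is a hedged illustration on synthetic data, not the study's computation; the critical value is taken from a standard F table rather than computed:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=40)            # one performance indicator (toy data)
y = 2.0 * x + rng.normal(size=40)  # one dependent measure (toy data)

# Simple OLS fit: y = b0 + b1 * x
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

n, k = len(y), 1                   # k = regressors besides the intercept
ss_tot = float(np.sum((y - y.mean()) ** 2))
ss_res = float(np.sum(resid ** 2))
r2 = 1.0 - ss_res / ss_tot
adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
f_stat = (r2 / k) / ((1.0 - r2) / (n - k - 1))

F_CRIT = 4.10  # approximate F(1, 38) critical value at alpha = .05
significant = f_stat > F_CRIT
print(round(r2, 3), round(adj_r2, 3), significant)
```

Indicators passing this screen for a given group would then be entered together as independent variables in a multiple linear equation, which is the procedure the next section reports.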
The three financial measures and FTE were used as dependent variables, and each of the ten indicators was compared individually as an independent variable, for all institutions in each group, for all five years. Independent variables with a significant R² (F-test) and P-value were placed in a multiple linear equation as independent variables with the related dependent variable. The eight dependent variables were:

1. Covered expenditures, strong group.
2. Covered expenditures, weak group.
3. Liquidity of the current fund balance, strong group.
4. Liquidity of the current fund balance, weak group.
5. Available funds ratio, strong group.
6. Available funds ratio, weak group.
7. Full-time equivalent students, strong group.
8. Full-time equivalent students, weak group.

As can be seen in Figure 4, for the institutions that were able to consistently maintain financial viability, the dependent variable, covered expenditures, was positively related with the matriculant ratio. This would indicate that the size of the freshman class over the five-year period had a significant though small (adjusted R² = .164) impact on a balanced budget. For the institutions that were not able to consistently maintain financial viability, the dependent variable, covered expenditures, was only positively related with the acceptance ratio. The implication here is that becoming less selective over the five-year period had a significant though inconsequential (adjusted R² = .066) impact on decreasing
budgetary imbalances. What was interesting in this analysis was not only the minimal impact of the noted effects, but also that none of the other independent variables had an effect for this dependent variable for either group.

Figure 4. Significant results for the dependent variable: covered expenditures.

Strong group.
  Regression statistics: Multiple R 0.432; R Square 0.186; Adjusted R Square 0.164; Standard Error 591099.732; Observations 39.

  ANOVA         df          SS          MS       F   Significance F
  Regression     1   2.963E+12   2.963E+12   8.481        6.050E-03
  Residual      37   1.293E+13   3.494E+11
  Total         38   1.589E+13

                      Coefficients   Standard Error   t Stat   P-value
  Intercept            -455867.482       376535.084   -1.211     0.234
  Matriculant Ratio    2092804.983       718621.433    2.912     0.006

Weak group.
  Regression statistics: Multiple R 0.298; R Square 0.089; Adjusted R Square 0.067; Standard Error 605017.940; Observations 44.

  ANOVA         df          SS          MS       F   Significance F
  Regression     1   1.496E+12   1.496E+12   4.087        4.961E-02
  Residual      42   1.537E+13   3.660E+11
  Total         43   1.687E+13

                      Coefficients   Standard Error   t Stat   P-value
  Intercept             678212.270       399274.801    1.699     0.097
  Acceptance Ratio    -2333811.738      1154382.406   -2.022     0.050

For the institutions that were able to consistently maintain financial viability, the dependent variable, liquidity of the current fund balance, was positively related to the three independent variables percent change FTE, constant dollar net expenditures per student, and average faculty salary (adjusted R² = .288), as seen in Figure 5. This would indicate that a consistent growth in the size of the student population is related to financial strength in these institutions. For the institutions that were not able to consistently maintain financial viability, the liquidity of the current fund balance was positively related to the independent variables covered expenditures and constant dollar net student revenue. The positive relationship evidenced by these two financial variables would be expected. What is interesting is the modest amount of variance that is accounted for (adjusted R² = .196) by two financial variables that should have a strong relationship with this measure of institutional viability. This could be construed as a fairly explicit indication that other expenditure-related pressures must be considered in reviewing the financial viability of these institutions, institutions that have been unable to balance revenue to expense on a consistent basis. As with covered expenditures, what was interesting in this analysis was not only the modest impact of the noted effects, but also that none of the other independent variables had an effect for this dependent variable for either group.

Figure 5. Significant results for the dependent variable: liquidity of the current fund balance.

Strong group.
  Regression statistics: Multiple R 0.598; R Square 0.357; Adjusted R Square 0.288; Standard Error 5.834; Observations 32.

  ANOVA         df         SS        MS       F   Significance F
  Regression     3    529.239   176.413   5.183            0.006
  Residual      28    953.016    34.036
  Total         31   1482.255

                             Coefficients   Standard Error   t Stat   P-value
  Intercept                        13.687            8.186    1.672     0.106
  Percent Change in FTE            38.001           15.238    2.494     0.019
  Expenditures per Student         -0.001            0.001   -1.517     0.141
  Faculty Salary                    0.000            0.000   -0.716     0.480

Weak group.
  Regression statistics: Multiple R 0.479; R Square 0.230; Adjusted R Square 0.197; Standard Error 1.417; Observations 50.

  ANOVA         df        SS       MS       F   Significance F
  Regression     2    28.131   14.065   7.007            0.002
  Residual      47    94.347    2.007
  Total         49   122.478

                         Coefficients   Standard Error   t Stat   P-value
  Intercept                     1.789            0.478    3.740     0.000
  Covered Expenditures          0.000            0.000   -2.866     0.006
  Student Revenues              0.000            0.000   -1.861     0.069

As can be seen in Figure 6 below, for the institutions that were able to consistently maintain financial viability, the dependent variable, available funds ratio, was positively related with full-time equivalent students and tuition discount percentage. This seems to imply that the size of the student population, maintained by leveraging tuition, is related to overall institutional financial strength. This effect was one of the larger effects seen in this study (adjusted R² = .349). For the institutions that were not able to consistently maintain financial viability, constant dollar net expenditures per student was the only independent variable positively related with the available funds ratio (adjusted R² = .197).

Figure 6. Significant results for the dependent variable: available funds ratio.

Strong group.
  Regression statistics: Multiple R 0.614; R Square 0.376; Adjusted R Square 0.349; Standard Error 1.686; Observations 49.

  ANOVA         df        SS       MS        F   Significance F
  Regression     2    78.886   39.443   13.884            0.000
  Residual      46   130.682    2.841
  Total         48   209.568

              Coefficients   Standard Error   t Stat   P-value   Lower 95%
  Intercept          4.513            0.682    6.623     0.000       3.142
  FTE               -0.002            0.000   -3.582     0.001      -0.002
  TDP               -5.120            1.236   -4.143     0.000      -7.607

Weak group.
  Regression statistics: Multiple R 0.439; R Square 0.193; Adjusted R Square 0.173; Standard Error 72.699; Observations 42.

  ANOVA         df           SS          MS       F   Significance F
  Regression     1    50506.737   50506.737   9.556            0.004
  Residual      40   211405.735    5285.143
  Total         41   261912.472

                             Coefficients   Standard Error   t Stat   P-value
  Intercept                        81.084           24.070    3.369     0.002
  Expenditures per Student         -0.014            0.004   -3.091     0.004

The implication of the above results is that, within this group, those institutions that have a higher expenditure level are also financially more viable. The available funds ratio is perhaps the single best measure of an institution's financial viability in that it accounts for all funds that could be marshaled to meet institutional financial obligations. What stands out, using this most inclusive of financial measures, is that the noted effects are due to so few independent variables.

For the institutions that were able to consistently maintain financial viability, the dependent variable, full-time equivalent students, was positively related to the two independent variables constant dollar net expenditures per student and acceptance ratio (adjusted R² = .439), as can be seen in Figure 7. This result implied that less selective entrance requirements led to a larger student population and so to a larger expenditure base. An alternative explanation for the acceptance ratio effect would be that the market niche of each of these institutions is clearly understood by potential students. The lack of a relationship with tuition discount percentage and constant dollar net student revenues might also suggest that, in this group, the larger institutions provide scholarships with restricted funds as opposed to leveraging with unrestricted current funds. For the institutions that were not able to consistently maintain financial viability, the independent variables tuition discount percentage and acceptance ratio were positively related to the dependent variable full-time equivalent students (adjusted R² = .323).

Figure 7. Significant results for the dependent variable: full-time equivalent students.

Strong group.
  Regression statistics: Multiple R 0.687; R Square 0.473; Adjusted R Square 0.439; Standard Error 453.587; Observations 34.

  ANOVA         df            SS           MS        F   Significance F
  Regression     2   5716405.062   2858202.53   13.892            0.000
  Residual      31   6377963.398   205740.755
  Total         33   12094368.46

                             Coefficients   Standard Error   t Stat   P-value
  Intercept                      2884.694          495.518    5.822     0.000
  Expenditures per Student         -0.348            0.067   -5.163     0.000
  Acceptance Ratio               -869.179          777.033   -1.119     0.272

Weak group.
  Regression statistics: Multiple R 0.599; R Square 0.359; Adjusted R Square 0.323; Standard Error 420.206; Observations 39.

  ANOVA         df            SS           MS        F   Significance F
  Regression     2   3556039.554   1778019.78   10.070            0.000
  Residual      36   6356641.902   176573.386
  Total         38   9912681.456

                                  Coefficients   Standard Error   t Stat   P-value
  Intercept                           -131.099          314.892   -0.416     0.680
  Tuition Discount Percentage         1427.638          642.304    2.223     0.033
  Acceptance Ratio                    2458.980         1026.606    2.395     0.022

For this group, the implication is that the institutions with a larger student population accept more potential matriculants and leverage the cost to attend with unrestricted current fund dollars. Taken together, these two results clearly suggest that some combination of less selectivity, or identification with market niche, combined with a higher level of financial aid, or leveraged tuition, was related to a larger student population in both groups.

Discussion

Taken together, the results related to these performance indicators suggest that the recruitment and retention program is an important source of institutional financial viability. The results indicate that leveraging the cost to attend is integral to maintaining and expanding the student population for these institutions. The implication was that all the institutions discount tuition, though the financially more viable institutions were seen to rely less on discounting and more on funded scholarships. The performance indicators used for this study are among the most frequently cited as measures of institutional viability, and the results did provide information related to institutional financial viability.

This study did demonstrate that there are unique groupings of liberal arts institutions and that unique financial equations for these groups might be defined in terms of several performance indicators. However, what does stand out is that there are few policy-related implications. These institutions, most of which have been in existence for over a century, are maintaining enrollment and graduating students. Some have more financial flexibility than others, and that can be traced to size of enrollment and cost to attend.
These standard financial ratios were being considered in a number of states as triggers for audits during deliberations related to the State Postsecondary Review Entities (SPRE). Equations involving these financial viability indicators are being considered in the proposal to amend the Student Assistance General Provision regulations by revising the requirements for compliance audits, as detailed in the Federal Register, Volume 61. Certainly these results did not imply that these frequently cited performance indicators should trigger federal policy and institutional sanctions.

The results of Study One did suggest that serious consideration should be given to questions of institutional viability, unique institutional profiles, and the use of performance indicators in institutional management. Study Two explored a whole-system approach developed around the concept of decision support as suggested by Kaufman in Educational System Planning (1972). One of the institutions included in the first study was used for the case study approach employed in Study Two.

Study Two

Review of the Problem and Literature

Institutions of higher education are, by any standard, complex entities. Even the least complex of institutions, the small liberal arts college, provides an enormous number of pedagogical, social, behavioral and economic phenomena to study. As campus decision-makers begin to understand these phenomena they become more effective at defining and creating the information needed to support decision making. The campus year might be envisioned as multiple threads woven together. Among these threads would be a recruitment and retention thread, an academic programs thread, a student life thread, a staffing thread, a physical plant thread, and a fiscal thread. Along each of the threads lie decision points. The sum of the decisions at these points is instrumental in creating the fabric and design of an
institution's future.

It is a fairly straightforward task to list some of the critical decision points in the campus year and the questions they raise. What decision rule will we use for admitting students? How will financial aid be apportioned? Will there be unfunded financial aid, and if so how much? Will there be a raise? Can maintenance be deferred? What programs will be targeted for excellence, and at what expense?

The answers to these questions, and a myriad more that confront the campus administrative and planning team, will be cast in terms of decisions. At the very least, the leadership of every campus must ask the following two questions at the beginning of each academic and planning year. First, will we be intentional in making decisions for this campus? And second, will we use the best possible information to reduce uncertainty before we make decisions? Assuming that decisions are to be intentional, the primary concern is then the need to reduce uncertainty before the decision is made. It is the role of institutional research to provide the information that reduces uncertainty prior to making decisions.

There are a number of decision points during the campus year, encompassing a number of dimensions from departmental decisions to decisions with campus-wide implications. From a temporal perspective, there are decisions that are made daily, weekly, each academic term, and yearly. Almost all of the literature related to decision support focuses on the for-profit business and industry sector. This literature began to call for, and then examine, integrated decision support systems (DSS) starting in the early 1970s (Van Gundy, 1988). These analytical software engines were intended to provide the necessary decision support information at the appropriate desk for everything from daily to annual decisions throughout the firm (Alavi and Joachimsthaler, 1992).
Implementation of completely integrated decision support systems in the for-profit sector has been marked by mixed results, and the implementation of such systems remains a complex issue (Lucas, Ginzberg, and Schultz, 1990). The control of operations and support for marketing have seen widespread acceptance and use of decision support tools, primarily for daily, weekly and quarterly decisions (Alavi and Joachimsthaler, 1992). The literature on the use of decision support systems for major policy and direction related issues has shown that there is far less consensus on the use of DSS by top-level management (Reagan-Cirincione et al., 1991).

The acceptance of decision support systems in postsecondary education is similar to the experience of the for-profit sector, though the literature is not as rich. Most of the administrative software systems in use by institutions provide adequate support for daily, weekly, and academic term decisions. The marketing function, embodied in the admissions and development programs, has become quite sophisticated. However, the use of information to support decisions related to the major policy, performance, and direction related issues faced by institutions leaves much to be desired (Gaither, Nedwek and Neal, 1994; Kidwell and Long, 1995).

For the purposes of this research, those decisions will be defined as decision points. Information developed to support those decisions is defined as a performance indicator (PI). For example, the decision to admit or not admit a student is, in fact, a daily or weekly decision. However, setting a decision rule that some measure, such as school class rank, will be used as an admission criterion is probably done only once a year. This is a key decision point. The information used to make that decision, probably developed from a retention study and related descriptive statistics, would be defined as a key performance indicator.
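The class-rank example can be made concrete with a small sketch. Everything below is hypothetical — the quartile labels, retention rates, and target are invented for illustration, not drawn from the article — but it shows the shape of a once-a-year decision rule built from a retention study:

```python
# Hypothetical illustration of turning a retention study into a
# yearly admission decision rule. All numbers are invented.
retention_by_rank_quartile = {
    "top quartile":    0.78,   # share returning for the second year
    "second quartile": 0.66,
    "third quartile":  0.52,
    "bottom quartile": 0.41,
}

TARGET = 0.60   # assumed campus-wide retention goal

# The decision rule: admit from the class-rank quartiles whose
# observed retention meets the target.
admit = [q for q, rate in retention_by_rank_quartile.items() if rate >= TARGET]
print(admit)   # → ['top quartile', 'second quartile']
```

The retention rates here are the key performance indicators; the cutoff they imply is the decision rule set at the key decision point.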
Though there is a large body of institutional research literature, that literature should be strengthened in three areas:

1. There is a need to develop a taxonomy of key decision points within the campus year;
2. There is a need to understand which key performance indicators reduce uncertainty prior to making a given decision, and the impact of the information on decision making;
3. There is a need to understand the campus as a system defined by decision points and sets of decision points that are interrelated.

From a practical perspective, decisions are approached, and usually made, within the context of the institution's program structure. A framework for program structure was established nationally in the 1960s and has evolved into the current national program classification structure defined in NACUBO's Administrative Service and implicit in the National Center for Education Statistics' Integrated Postsecondary Education Data System. Specific decisions will be made relative to the goals or budgetable objectives of a specific program, or the cost centers defined at the sub-program or sub-sub-program level of the Program Classification Structure. Decisions also lie within a temporal plane, related to specific times within the academic or fiscal year. A decision point is defined here as related to a specific program
at a specific time in the academic or fiscal year. A decision model of the campus could be made that resembles a PERT chart, with each line representing a program and the action points representing decision points.

Decision points can also be characterized in terms of the type of decisions that are made. Most will be regular and identifiable, located within the aegis of a program and at a specified time within the year or academic term. Other decisions will be unexpected and will encompass either new opportunities or decisions that need to be revisited. Decisions that need to be revisited are inevitable; even the best plans will require mid-course corrections.

Specific decisions are made and there are discrete decision points. However, decisions are rarely made in a vacuum. Specific decision points group together within decision sets. The information that is developed for the reduction of uncertainty at each decision point within a decision set is often reviewed together. Specific decisions are made within the context of the decision set.

If the most appropriate framework for decisions is the decision set, the nature of decision sets can best be described as a cascade. Even single decisions can lead to a cascade of additional discrete decisions. Multiple measures of outcome can be impacted in the same way. Perhaps the most important skill in policy analysis is being able to understand and predict the cascade effect.

This case study focuses on a campus that was included in the sample of institutions used above to explore the use of performance indicators as measures or predictors of institutional viability. In this particular case study the institution decided, in FY 1991-92, that challenges on two fronts were threatening the institution. The first challenge was in the retention of students, with only 40% of entering freshmen returning for the second year.
The institution was convinced that this was unacceptable, both in terms of the cost to the institution of recruiting a large freshman class and in relation to the mission of the institution. The second challenge was the construction of two new state-supported branch community college campuses serving nearby counties. These counties had traditionally been a source of students for the institution, though many of these local students required remediation. The institution had a number of remedial courses included within the academic program.

Decision Sets: A Case Study

Retention Challenge

The first step was to collect information from the student record files and an entering freshman survey the institution had been administering, and to conduct a probability regression analysis to determine factors that correlate with retention into the second year. The results of that study are outlined in Figure 8.

Figure 8. Factors related to retention the second year.

Dependent Variable: First-time freshmen returning for second year.

Independent Variables investigated with probit analysis: ACT English; Amount of Loans; ACT Math; Amount of Workstudy Hours; ACT Social Studies or Reading; Distance from Home; ACT Science; Dorm Student; ACT Composite; College Grade Point Average; Graduation Quartile;
Gender; High School Grade Point Average; Married; Amount Non-institutional Aid; Elected Major; Amount Institutional Aid; Religious Preference.

Groupings of variables found to significantly increase the probability of returning for the second year with probit analysis:

Group One: College Grade Point Average; Institutional Aid; ACT Composite; Elected Major; Dorm Student.
Group Two: Dorm Student; Non-institutional Aid; College Grade Point Average; High School Grade Point Average; Loans.

Given the results of the retention study, four policy-related decisions were made:

1. All freshmen were required to live in a college dorm except those living with a relative.
2. Admissions standards were refined, and evaluation of all applicants was moved to a faculty committee using a multiple-criteria best-fit model.
3. Policy for awarding financial aid was changed to focus on students most likely to be retained.
4. Faculty began to work with students on electing a major before arriving on campus.

Figure 9. Percent of freshmen returning for the second year.

As can be seen in Figure 9 above, the percent of freshmen returning for the second year rose dramatically from Fall 1992 to Fall 1996. Also, an intentional decision was made to increase the use of
financial aid to recruit students who were more likely to be retained. Figure 10 shows the increase in scholarship and fellowship aid per full-time student from FY 1992-93 to FY 1996-97. As significant as these changes are, they should be explored within the context of a related, yet separate, decision set that was being addressed at the same time.

Figure 10. Scholarship and fellowship aid per full-time student.

Community College Challenge

A significant segment of the institution's overall enrollment profile was from the counties surrounding the campus. Two community college branch campuses were opening in counties adjacent to the campus. Further examination of the student data set revealed that many of the students in the institution's freshman-year remedial program were, in fact, students who would be candidates for these branch campuses and their open-door policies. The institution made four key decisions:

1. to focus on students that have the highest chance of retention;
2. to withdraw from remedial programs and leave those students to the community colleges;
3. to reduce the size of the student body concomitant with the withdrawal from remedial programs;
4. to reduce the size of the faculty relative to the size of the reduction in the student population.

As can be seen in Figure 11, the student population was significantly reduced between Fall 1991 and Fall 1996. There was an equivalent reduction in the faculty during that period of time. Faculty positions were reduced by 10% over the five-year period, with reductions related to the remedial courses that were dropped. These faculty were in several disciplines, and attrition and early retirement were the chief reduction-in-force strategies followed.

Figure 11. Fall enrollment 1991-1996.
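A probit analysis of the kind behind Figure 8 regresses a binary outcome (returned / did not return) on student characteristics through the standard normal CDF. The sketch below is illustrative only: the data are synthetic, the single "GPA" predictor and the plain gradient-ascent fitter are stand-ins, and none of it reproduces the institution's actual data or model.

```python
import math
import random

def norm_cdf(z):
    """Standard normal CDF (the probit link)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def fit_probit(X, y, iters=400, lr=0.05):
    """Fit probit coefficients by plain gradient ascent on the
    log-likelihood -- a toy fitter for illustration only."""
    k = len(X[0])
    beta = [0.0] * k
    for _ in range(iters):
        grad = [0.0] * k
        for xi, yi in zip(X, y):
            z = sum(b * x for b, x in zip(beta, xi))
            p = min(max(norm_cdf(z), 1e-9), 1.0 - 1e-9)
            w = norm_pdf(z) * (yi - p) / (p * (1.0 - p))
            for j in range(k):
                grad[j] += w * xi[j]
        beta = [b + lr * g / len(y) for b, g in zip(beta, grad)]
    return beta

# Synthetic "retention" data: returning depends on first-year GPA.
random.seed(0)
X, y = [], []
for _ in range(600):
    gpa = random.uniform(0.0, 4.0)
    p_return = norm_cdf(-2.0 + 1.0 * gpa)   # assumed true model
    X.append([1.0, gpa])                    # intercept + GPA
    y.append(1 if random.random() < p_return else 0)

b0, b_gpa = fit_probit(X, y)
print("fitted intercept and GPA slope:", round(b0, 2), round(b_gpa, 2))
```

A positive fitted GPA coefficient is exactly the kind of result that, in the case study, fed the decision to target financial aid at students most likely to be retained.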
A series of other decisions and results cascaded from these initial decisions. Some of these are outlined below.

1. To attract and retain the caliber of student that had been identified as most likely to persist, the retention studies suggested some increase in financial aid.
2. Though the reduction in faculty offset most of the loss in student tuition and fee revenues, substantial increases in tuition and fees were necessary. Analysis had indicated that the institution was underselling its product.
3. The ACT scores of new freshmen increased dramatically, as did related measures of previous academic success.

Through intentional analysis and decision making the institution had changed the profile of its student body and reduced the size of the faculty. Though only one measure of the entering freshman class, the changing ACT profile, as seen in Figure 12, is indicative of the new, more rigorous decision model for admitting students applied by the new admissions procedures.

Figure 12. Fall 1991-1996 new freshman ACT composite scores.
Perhaps the cascading nature of decision sets is also seen in the impact on the current funds. Figure 13 below shows the change in current fund expenditures as a percent of total from FY 1992-93 to FY 1996-97. The most significant feature of this period is the shift in expenditures from instruction to scholarships and fellowships. An additional impact is seen in reviewing tuition increases and the behavior of tuition and fee revenues at this institution during the five years being studied. Figure 14 indicates that tuition and fee revenue per student, net of scholarships, rose over the five-year period being studied. This was due to significant increases in tuition and fees and a decrease in the number of students recruited. The institution was successful at recruiting a more academically prepared and affluent student population.

Figure 13. Current fund expenditures as a percent of total from FY 1992-93 to FY 1996-97.

Expenditures by program              FY 92-93     FY 93-94     FY 94-95     FY 95-96     FY 96-97
Instruction                        $4,357,598   $4,452,393   $4,267,199   $4,343,119   $4,393,923
Academic Support                     $734,794     $751,267     $757,292     $822,828     $887,878
Student Services                   $3,209,702   $3,042,999   $3,686,927   $4,258,749   $3,648,876
Institutional Support              $2,742,729   $2,613,368   $2,581,843   $2,743,601   $2,477,247
Operation & Maintenance of Plant   $1,295,546   $1,445,542   $1,336,703   $1,556,446   $1,579,062
Scholarships & Fellowships         $3,512,193   $3,498,727   $3,278,937   $3,702,188   $5,512,980
Mandatory Transfers                   $10,603      $11,971      $11,971      $19,031      $35,741
Auxiliary Services                 $2,526,940   $1,909,342   $1,908,835   $2,297,643   $3,528,274
Total Expenditures                $18,390,105  $17,725,609  $17,829,707  $19,743,605  $22,063,981

Expenditures by program as a percent of total expenditures
Instruction                             23.7%        25.1%        23.9%        22.0%        19.9%
Academic Support                         4.0%         4.2%         4.2%         4.2%         4.0%
Student Services                        17.5%        17.2%        20.7%        21.6%        16.5%
Institutional Support                   14.9%        14.7%        14.5%        13.9%        11.2%
Operation & Maintenance of Plant         7.0%         8.2%         7.5%         7.9%         7.2%
Scholarships & Fellowships              19.1%        19.7%        18.4%        18.8%        25.0%
Mandatory Transfers                      0.1%         0.1%         0.1%         0.1%         0.2%
Auxiliary Services                      13.7%        10.8%        10.7%        11.6%        16.0%

Figure 14. Net tuition and fee revenue per student.

Discussion
This article approached the use of performance indicators from two perspectives. In the first study, eleven frequently cited performance indicators were used to explore the implications of enrollment stability and financial viability with twenty similar Carnegie Classification Baccalaureate II institutions. This study examined issues addressed in the Federal Register Volume 61 proposal to amend the Student Assistance General Provision regulations by revising the requirements for compliance audits and adding a new subpart establishing financial responsibility standards. The implication there was that institutional scores on a specific set of indicators define the programmatic and financial viability of an institution and should impact the disbursement of federal funds. These results did not imply that these frequently cited performance indicators should trigger federal policy and institutional sanctions. What did stand out is that there are few policy-related implications that can be drawn from these internationally accepted institutional viability measures. These institutions, most of which have been in existence for over a century, are maintaining enrollment and graduating students. Some have more financial flexibility than others, and that can be traced to size of enrollment and cost to attend.

The second study used a case study approach to focus on a campus included in the sample of institutions used in the first study. In this particular case study, the institution had decided that challenges on two fronts were threatening the institution. The institution moved to change both the population of students served and the focus of the academic program. The institution was successful over a five-year period in changing both the character of the student body and the academic program mix while improving its overall financial position.
The institution used performance indicators within a whole-system context, as suggested by Kaufman in Educational System Planning (1972), to reduce uncertainty before changing institutional policy and to measure the outcomes of those changes.

This case study was seen by the authors to reinforce their belief that specific decision points group together within decision sets. Information that was developed for the reduction of uncertainty at each decision point was reviewed, and decisions made, within the context of the decision set. The belief that decision sets exhibit cascade effects was also reinforced. In this case study, single decisions led to a cascade of additional discrete decisions. As well, multiple measures of outcome were impacted in the same way.

Three overall conclusions were reached as a result of these two studies. First, the performance measures most commonly cited in the literature as measures of institutional financial viability are of limited use for institution-specific policy development. Second, performance indicators are most effectively used within an institution-specific, whole-system framework. Third, being able to understand and predict the cascade effect in the use of performance indicators is essential for effective policy analysis.

References

Alavi, M. & Joachimsthaler, E. A. (1992). Revisiting DSS implementation research: a meta-analysis of the literature and suggestions for researchers. MIS Quarterly, 16(1), 95-116.

Bogue, G., Creech, J. & Folger, J. (1993). Assessing quality in higher education: policy actions in SREB states. Atlanta, GA: Southern Regional Education Board.

Dickmeyer, N. & Hughes, K. S. (1987). Financial self-assessment workbook: a workbook for colleges and universities. Washington, DC: National Association of College and University Business Officers.

Gaither, G., Nedwek, B. P. & Neal, J. E. (1994).
Measuring up: the promises and pitfalls of performance indicators in higher education (ASHE-ERIC Higher Education Report No. 5). Washington, DC: The George Washington University, Graduate School of Education and Human Development.

Joint Commission on Accountability Reporting. A need answered. Washington, DC: American Association of State Colleges and Universities.

Lucas, H. C., Ginzberg, M. J. & Schultz, R. L. (1990). Information systems implementation: testing a
structural model. Norwood, NJ: Ablex Publishing Corporation.

Kaufman, R. A. (1972). Educational System Planning. Englewood Cliffs, NJ: Prentice Hall.

Kidwell, J. J. & Long, L. (1995). Performance measurement systems for higher education (NACUBO's Effective Management Series). Washington, DC: National Association of College and University Business Officers.

Notice of Proposed Rule Making, Number 184, Volume 61, Federal Register (1996).

Reagan-Cirincione, P., Shuman, S., Richardson, G. P. & Dorf, S. A. (1991). Decision modeling: tools for strategic thinking. Interfaces, 21(6), 52-65.

Ruppert, S. (Ed.) (1994). Charting higher education accountability. Boulder, CO: Education Commission of the States.

Prager, McCarthy & Sealy, Inc. (1995). Ratio analysis in higher education. New York, NY: Prager, McCarthy & Sealy.

Sapp, M. M. (1994). Setting up a key success index report: a how-to manual (AIR Professional File Number 51, Winter, 1994). Tallahassee, FL: Association for Institutional Research.

Taylor, B. E., Myerson, J. W. & Massy, W. F. (1993). Strategic indicators in higher education: improving performance. Princeton, NJ: Peterson's Guides.

Van Gundy, A. B., Jr. (1988). Techniques for Structured Problem Solving (second edition). New York: Van Nostrand Reinhold.

About the Authors

E. Raymond Hackett is Executive Director of the College Information Systems Association, a data-sharing consortium of liberal arts colleges and universities. He is also an Assistant Professor in the Educational Leadership program at Auburn University. Dr. Hackett has served as a state system-level policy analyst and as a campus vice president for administration and finance. Email: firstname.lastname@example.org

Sarah D. Carrigan is Coordinator of Institutional Studies at the University of North Carolina at Greensboro and formerly served as the Research Director for the College Information Systems Association. Dr.
Carrigan has also served as a campus residence life director at a liberal arts college.

Copyright 1998 by the Education Policy Analysis Archives

The World Wide Web address for the Education Policy Analysis Archives is http://olam.ed.asu.edu/epaa

General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, email@example.com, or reach him at College of Education, Arizona State University, Tempe, AZ 85287-2411 (602-965-2692). The Book Review Editor is Walter E. Shepherd: firstname.lastname@example.org. The Commentary Editor is Casey D. Cobb of the University of New Hampshire: email@example.com.

EPAA Editorial Board
Michael W. Apple, University of Wisconsin
Greg Camilli, Rutgers University
John Covaleskie, Northern Michigan University
Andrew Coulson, firstname.lastname@example.org
Alan Davis, University of Colorado, Denver
Sherman Dorn, University of South Florida
Mark E. Fetler, California Commission on Teacher Credentialing
Richard Garlikov, email@example.com
Thomas F. Green, Syracuse University
Alison I. Griffith, York University
Arlen Gullickson, Western Michigan University
Ernest R. House, University of Colorado
Aimee Howley, Marshall University
Craig B. Howley, Appalachia Educational Laboratory
William Hunter, University of Calgary
Richard M. Jaeger, University of North Carolina--Greensboro
Daniel Kallós, Umeå University
Benjamin Levin, University of Manitoba
Thomas Mauhs-Pugh, Rocky Mountain College
Dewayne Matthews, Western Interstate Commission for Higher Education
William McInerney, Purdue University
Mary McKeown-Moak, MGT of America (Austin, TX)
Les McLean, University of Toronto
Susan Bobbitt Nolen, University of Washington
Anne L. Pemberton, firstname.lastname@example.org
Hugh G. Petrie, SUNY Buffalo
Richard C. Richardson, Arizona State University
Anthony G. Rud Jr., Purdue University
Dennis Sayers, Ann Leavenworth Center for Accelerated Learning
Jay D. Scribner, University of Texas at Austin
Michael Scriven, email@example.com
Robert E. Stake, University of Illinois--UC
Robert Stonehill, U.S. Department of Education
Robert T. Stout, Arizona State University