Education Policy Analysis Archives, Vol. 12, No. 22 (May 26, 2004). ISSN 1068-2341.
Effects of Performance Budgeting and Funding Programs on Graduation Rate in Public Four-Year Colleges and Universities / Jung-cheol Shin and Sande Milton.
Published by Arizona State University (Tempe, Ariz.) and the University of South Florida (Tampa, Fla.).
A peer-reviewed scholarly journal. Editor: Gene V Glass, College of Education, Arizona State University. Copyright is retained by the first or sole author, who grants right of first publication to the EDUCATION POLICY ANALYSIS ARCHIVES. EPAA is a project of the Education Policy Studies Laboratory. Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education.

Volume 12, Number 22, May 26, 2004. ISSN 1068-2341

The Effects of Performance Budgeting and Funding Programs on Graduation Rate in Public Four-Year Colleges and Universities

Jung-cheol Shin
Sande Milton
Florida State University

Citation: Shin, J., & Milton, S. (2004, May 26). The effects of performance budgeting and funding programs on graduation rate in public four-year colleges and universities. Education Policy Analysis Archives, 12(22). Retrieved [Date] from http://epaa.asu.edu/epaa/v12n22/.

Abstract

This study was conducted to determine whether states with performance budgeting and funding (PBF) programs had improved institutional performance of higher education over the five years (1997 through 2001) considered in this study. First Time in College (FTIC) graduation rate was used as the measure of institutional performance. In this study, the unit of analysis is the institution, and the study population is all public four-or-more-year institutions in the United States. To test PBF program effectiveness, Hierarchical Linear Modeling (HLM) growth analysis was applied. According to the HLM analysis, the growth of graduation rates in states with PBF programs was not greater than in states without PBF programs. The lack of growth in institutional graduation rates, however, does not mean that
PBF programs failed to achieve their goals. Policymakers are advised to sustain PBF programs long enough until such programs bear fruit or are proven ineffective.

Introduction

The purpose of this study was to examine the effects of performance budgeting and funding (PBF) programs on the performance of public higher education institutions in the United States using longitudinal performance data. As of 2001, thirty-six states in the U.S. utilized either performance budgeting or performance funding programs, or both, in institutions of higher education (Burke & Minassians, 2001). This trend began during the 1990s, as state policy-makers sought to enhance accountability in higher education, find more effective budget allocation methods, and in some cases, trim budgets. Although performance budgeting (PB) and performance funding (PF) are different in focus, both create a link, to different degrees, between budget allocation and institutional performance. In addition, both types of programs are based on the underlying premise that higher education institutions are motivated to improve their performance when performance is linked to budget allocation.

The trend toward increased use of performance budgeting and funding programs is not limited to the U.S. The United Kingdom led other European countries in initiating a market-driven higher education accountability program, first introduced by the Thatcher government. Influenced by trends in the U.S. and Europe, the Canadian provinces of Alberta and Ontario have also adopted performance-based funding mechanisms to assess university efficiency and quality (Barnetson, 1999). In Australia, 40 public research universities and two private institutions are subject to an accountability framework that links their funding to performance requirements (Atkinson-Grosjean & Grosjean, 2000). In spite of vigorous protests from students and faculty, New Zealand established a new accountability system in 1998.
In the early 1990s, South Korea adopted a performance-based funding program consisting of nine independent sub-programs, each with a particular purpose, to which funding was tied (Kim, 2001).

Current Status of PBF Programs

Major Stakeholders: State Politics and Business Involvement

The major stakeholders in higher education reform of the 1990s have been state legislatures, state chancellors and state agencies, governors, and higher education institutions (Blackwell & Ciston, 1998). Among these stakeholders, governors and state legislatures play the primary role in the adoption of PBF programs (Burke & Minassians, 2001). In contrast, a relatively small number of initiatives (in 24 states) have been conducted within state agencies or university systems themselves.

A more complete understanding of the widespread adoption of PBF programs since the 1990s must encompass more than the vested interests of state legislatures and governors. In the 1990s, higher education reform has been
guided by economic values (competition, productivity, and efficiency) more than ever before. This shift in priorities might be explained in part by the growing interest of business leaders in state higher education policy-making (Usdan, 2002; Zernike, 2002; Zumeta, 2000).

Business interests have changed the picture of higher education. As Levin (2001) argues, higher education is becoming like a "globally competitive business" (p. 237). In some cases, business leaders are actively involved in PBF program development. For example, in South Carolina, which links 100% of higher education funding to institutional performance, two of the 12 members of the Higher Education Joint Legislative Study Committee represented the business community. [Not one of the 12 members of the committee came from higher education (China, 1998).]

However, other major stakeholders in higher education (institutional administrators and faculty) have always been more concerned about institutional improvement than accountability, unlike politicians, who emphasize external accountability through the implementation of PBF programs (Burke, 1998). Because of these perceptual and value differences, tensions almost invariably exist between the two groups around the issue of PBF programs.

Type of Program: Performance Budgeting or Funding?

Performance budgeting and funding are similar in that both programs link institutional performance with budget allocation, but the methods differ in the way each ties institutional performance to state funding. While performance funding (PF) programs link budget to institutional performance in a direct, automatic, and formulaic manner, the link in performance budgeting (PB) programs is loose, indirect, and uncertain (Burke et al., 2000).

Because of the loose link between performance and budget allocation, PB programs may be less effective in enhancing institutional performance.
Notwithstanding this limitation, state policy-makers typically prefer performance budgeting to performance funding because of the flexibility in implementation (Burke et al., 2000).

Although more states adopted PB programs than PF programs, recent trends indicate that "both programs are borrowing elements from the other approach to gain its benefits while evading their own problems" (Burke et al., 2000, p. 3). Further, in order to minimize the limitations and maximize the strengths of each program, an increasing number of states have adopted both programs at the same time (Burke et al., 2000).

Commonly-Used Indicators for Measuring Institutional Performance

Ruppert (1995) conducted an in-depth case study on performance indicators (PIs) and performance reporting in ten states that have been among the leaders in designing and using PIs in higher education. (Note 1) He found that most states share a common core of performance indicators. Graduation outcome data, for instance, is one measure that responds to policy concerns: rising
college costs and the economic return to the state of college-educated citizens. A survey by the State Higher Education Executive Officers (SHEEO) conducted between 1996 and 1997 identifies some indicators used frequently in the U.S. at that time: graduation rate (32 states), transfer rate (25 states), faculty workload/productivity (24 states), follow-up satisfaction (23 states), and externally sponsored research funds (23 states) (Christal, 1998). Another study (Burke, 1998) of 10 states conducted in 1997 found less commonality among PIs in different PBF programs. Among the states surveyed, the following performance indicators were in use: graduation and retention rates (10 states), two-to-four-year transfers (6 states), faculty workload (5 states), institutional choice (5 states), graduation credits and time to degree (4 states), licensure test scores (4 states), transfer graduation rates (4 states), workforce training and development (4 states), and external research fund ratio (3 states). Overall, the common performance indicators across these studies were graduation rate, retention rate, transfer rate, faculty workload, and sponsored research funds.

Previous Evaluations of the Impacts of PBF Programs

Most evaluative studies of the impacts of PBF programs focus on a specific state or set of institutions within a state. As many studies assert, the effectiveness of PBF programs is a critical concern of policy-makers. Nevertheless, few academic evaluation studies of PBF program impacts have been conducted, save the occasional evaluation project or dissertation. State-level evaluations for budget allocation purposes mainly use quantitative data on specific performance indicators. Florida and South Carolina, for example, annually evaluate each institution's performance in order to guide budget allocation.

In a book on PBF programs, Bogue (2002) explores the impact and effectiveness of the PBF program in Tennessee.
From the actual 1994-1998 performance data, he found that the universities have consistently scored above the national norm on the College BASE. One interesting outcome of the Tennessee PBF program is the clear increase in the proportion of academic program accreditations, which improved from 65 percent to 100 percent (Bogue, 2002). He found, however, that institutions generally failed to significantly raise persistence-to-graduation rates and overall job-placement rates over the same period.

Florida's Office of Program Policy Analysis and Government Accountability (OPPAGA) (2001) evaluated the performance budgeting program of the State University System in Florida as part of an evaluation of performance budgeting programs at each state agency. In the evaluation, OPPAGA reported that graduation and retention rates have increased since the state adopted a performance budgeting program, and externally-funded financial support for the research program has significantly increased as well. Additional confirmation of these findings exists in Florida's relatively high score on completion indicators versus other evaluation indicators in the 'Measuring Up 2000' study. OPPAGA claims this result occurred because the Florida Legislature decided to make raising the completion or graduation rates a priority in its indicator system.
The South Carolina Commission on Higher Education (CHE) (2002) has been studying the impacts of the performance funding program in South Carolina as a three-year project under a Fund for the Improvement of Postsecondary Education (FIPSE) grant. In this evaluation, the South Carolina CHE used diverse methods (journal analysis, surveys, interviews, and historical data) to review its PF program. According to preliminary findings, institutional policies have changed to accommodate the PF program, and some outcomes are changing as well. Interestingly, the impacts of the PF program seem to vary depending on which evaluation method is used in the evaluation process. For example, respondents to the survey answered that the PF program has had no impact on institutional quality or efficiency. Yet historical data on institutional performance show that performance on SAT scores, minority enrollment, and externally funded research have all increased, and that graduation rates are slowly rising as well. These results suggest that the actual performance of South Carolina higher education institutions has been increasing since South Carolina adopted the PF program.

In a review of performance data from Missouri higher education institutions, Marcus et al. (2000) found that student assessment scores, licensure exam pass rates, first time in college (FTIC) graduation rates, minority graduation rates, and student job placement rates have all increased since Missouri adopted a performance funding program. While the causal relationship between PF and increased performance was not conclusive, the PF program was determined to be "clearly responsible for the identification of priorities for funding, for the establishment of assessment measures, and for helping institutions to accept that part of their state allocation is linked to results" (p. 215).

The studies mentioned above indicate that PBF programs have had a positive impact on institutional performance.
However, these results are based on limited time periods, and the interpretation of the performance data can differ from one researcher to another. For example, Dallet et al. (2002) analyzed the same performance data used in an earlier study by OPPAGA (2001) to discuss the impact of Florida's PBF program. While OPPAGA reported that the PBF program had had a positive impact on Florida's higher education institutions, the conclusion of Dallet and his colleagues was quite different. They argued that, although the graduation rate and retention rate in Florida's higher education institutions increased somewhat after the inception of the PBF program in 1994, this growth actually occurred at a level similar to that of the 1990-91 academic year, when no such policy was in place. In Colorado, which operated a PF program from 1994/95 to 1996/97, the effect of performance level on budget reward was found in a regression analysis to be lower than expected (Bridges, 1999). Instead, there was a higher correlation between institution size and performance funding than between performance funding and performance level.

Research Questions

The overarching purpose of this study is to explore the impacts that PBF programs have had on institutional performance. Thus, there are two basic research questions addressed in this study:
1. Has institutional performance in states with PBF programs (PBF states) improved more than in states without PBF programs (non-PBF states)?
2. Has institutional performance in states with both programs (PB and PF) improved more than in states with only a PB or PF program?

Research Design and Method

Unit of Analysis and Population

The unit of analysis for this study is the higher education institution, rather than the state, because the target of PBF programs is institution-level change, and state-level performance is simply the aggregation of each state's institution-level performance. The study is limited to public higher education institutions because private institutions are not the main objects of state policy. Further, although some states have the same PBF programs for public four-year institutions and community colleges, many states (e.g., Florida, California, New York) have different PBF programs for public four-year institutions and community college systems (Burke, 1998). In addition, as Hoyt (2001) notes, the missions of community colleges are different from those of four-year colleges. Therefore, for this study the object of interest is limited to public four-year institutions, and excludes public community colleges. Also, because PBF programs focus mainly on undergraduate education, graduate-only institutions are excluded from the analysis.

Thus, the target population of this study is all public four-year higher education institutions in the United States. Based on the criteria above, 456 institutions were selected for the study population. (Note 2) Information on the annual PBF program status of each state is available from surveys conducted each year from 1997 through 2001 (Burke & Minassians, 2001; Burke et al., 2000; Burke & Modarresi, 1999; Burke & Serban, 1998; Burke & Serban, 1997).
Dependent Variable

The dependent variable used in this study is the First Time in College (FTIC) graduation rate of each institution during the years 1997 through 2001. This dependent variable was selected based on three criteria: 1) it is the most commonly-used performance indicator nationwide; 2) nationwide data are readily available; (Note 3) and 3) there is internal validity in using graduation rate as a measure of performance in higher education. The term is also in accordance with the Student Right-To-Know Act (SRTK), which mandated institutions to report the graduation rate of only "first-time" and "full-time" students who spent "up to 150 percent of normal time to complete their degrees" (PL 101-542; Federal Register, 1992).

In conducting a broad policy study such as this, one must consider the question of whether a given policy is accomplishing its goals (Affholter, 1994). In PBF programs, the attainment of program goals is measured using performance indicators that are chosen by state legislatures, coordinating boards, or in some cases, higher education institutions themselves. According to an analysis of
performance indicators nationwide (Christal, 1998), graduation rate is the most commonly-used indicator in U.S. higher education institutions with PBF programs. Of the 33 states with PBF programs in 1997, 32 states used graduation rate as one of their performance indicators.

In addition, when an inappropriate or weak indicator is chosen as a dependent variable, it can invalidate the results of a policy study. Each institution admits applicants who are deemed qualified to study at the institution. These students study approximately four years to satisfy graduation requirements that are set forth by the department, college, university, or state. If many students do not graduate in a timely manner, these students have either not yet satisfied the graduation requirements, have chosen to leave, or have transferred to another institution. Any of these cases poses a problem for the institution, which, due to the very nature of its mission, seeks to maintain a high graduation rate. As a result, low graduation rates may imply that the institution is falling short in performing some of its functions. Therefore, graduation rate is a persuasive and valid indicator of how well the institution performs on at least one of its many goals. (Note 4)

The FTIC graduation rate is thus calculated as the number of completions within 150% of normal time to degree divided by the total number in a particular year's cohort. The Integrated Postsecondary Education Data System (IPEDS) provides total cohort information, completions within 150% of normal time to degree, and the final FTIC graduation rate. Therefore, for the purpose of this study, the final graduation rate provided by IPEDS is used without any adjustment.

Independent Variables

Independent variables in the statistical model are of two types: program (or treatment) variables and control variables. The program variables are the PBF program-related factors.
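The calculation just described (completions within 150% of normal time to degree, divided by the size of that year's entering cohort) is simple enough to sketch. The function name and arguments below are illustrative, not IPEDS field names:

```python
def ftic_graduation_rate(completions_within_150pct: int, cohort_size: int) -> float:
    """FTIC graduation rate, in percent: completers within 150% of normal
    time to degree divided by the size of that year's entering cohort."""
    if cohort_size <= 0:
        raise ValueError("cohort size must be positive")
    return 100.0 * completions_within_150pct / cohort_size
```

For example, an institution with 1,597 completers out of a 4,000-student cohort would report a rate of 39.925 percent, close to the national average this study later estimates for 1997.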
The control variables include factors other than program-related factors that influence graduation rate. PBF program variables include:

1. PBF program adoption
   a. PBF states are states that adopted PBF programs in at least three of the five years 1997-2001.
   b. Non-PBF states are states that either did not adopt a PBF program or had adopted one for only one year during 1997-2001. (Note 5)
2. PBF type
   a. PB & PF states are states that have concurrent performance funding and performance budgeting programs.
   b. PB or PF states are states that have only one type of program: PB or PF.

Control variables are used to control for other factors that affect FTIC graduation rate. Based on the literature, several variables were chosen to control for exogenous influences on the graduation rate in order to accurately capture the effects of PBF programs. These control variables are divided into institution-level control variables and state-level control variables.
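The adoption rule above (a PBF state has a program in place in at least three of the five survey years) can be sketched as follows. The helpers and their inputs are hypothetical, standing in for the Burke survey data:

```python
STUDY_YEARS = range(1997, 2002)  # the five survey years, 1997 through 2001

def pbf_status(years_with_program) -> str:
    """Classify a state as 'PBF' or 'non-PBF' from the years its program
    was in place, per the study's at-least-three-of-five-years rule."""
    n = len(set(years_with_program) & set(STUDY_YEARS))
    return "PBF" if n >= 3 else "non-PBF"

def pbf_type(has_pb: bool, has_pf: bool) -> str:
    """Distinguish states running both program types from single-program states."""
    if has_pb and has_pf:
        return "PB & PF"
    return "PB or PF" if (has_pb or has_pf) else "none"
```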
Institution-level control variables are variables that have been shown to affect graduation rate at the institution level. Average tuition has been associated with graduation rates (Heller, 1997; Hsing & Chang, 1996; Dayhoff, 1991). The availability of financial aid (grants, loans, work study, etc.) has a positive impact on enrollment (John et al., 2001; Braunstein et al., 1999; Sheridan et al., 1994; Cabrera et al., 1992). The opportunity to live in a campus residence hall, especially during the freshman year, has also been shown to influence retention rates and time to completion (Astin, 1997; Sheridan et al., 1994).

a. Average tuition is defined as [(in-state tuition + out-of-state tuition)/2].
b. Student dormitory ratio is defined as (space availability in dormitory facilities)/(full-time students).
c. Financial aid student ratio is the percentage of full-time, first-time degree and certification seeking students who receive any financial aid (grants, loans, assistantships, fellowships, tuition waivers, tuition discounts, veterans benefits, employer aid, and other monies).

State-level control variables are state-level characteristics which have been shown to influence graduation rate, and which must be included in the causal model in order to capture accurately the effects of PBF programs. Astin (1997), Schmitz (1993), and Kahn and Nauta (2001) found that college preparation factors (mean entrance test score on the ACT or SAT, and freshmen's high school grades) are positively associated with graduation rate. Also, unemployment rate has been shown to be negatively associated with graduation rate (Heller, 1999; Hsing & Chang, 1996). Statewide family ability to pay college costs is a variable that reflects financial support from the family to the students (Braunstein et al., 1999; John, 1993) as well as the economic conditions of each state.
a. College preparation is defined as the "Overall score" of college preparation (Note 6) as described in "Measuring Up 2000" (NCPPHE, 2001).
b. Family ability to pay college costs is defined as the student's family's ability to pay at public four-year colleges, as described in "Measuring Up 2000" (NCPPHE, 2001).
c. Unemployment rate is defined as the number of unemployed as a percent of the labor force (U.S. Department of Labor, 2002).

Data Sources

Study data are available directly or indirectly from diverse sources. Data on PBF program variables have been collected in annual surveys conducted by Burke and associates in the years 1997 through 2001 (Burke & Minassians, 2001; Burke et al., 2000; Burke & Modarresi, 1999; Burke & Serban, 1998; Burke & Serban, 1997). Nationwide data on state-level control variables were included in the nationwide performance evaluation study, "Measuring Up 2000" (NCPPHE, 2001). Data on unemployment rate are available from the U.S. Department of Labor (2002). All the other control variables, and the dependent variable, graduation rate, are available from the IPEDS database.
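The three institution-level control variables defined earlier reduce to simple derived quantities. A sketch, with hypothetical argument names:

```python
def average_tuition(in_state: float, out_of_state: float) -> float:
    """Average tuition: [(in-state tuition + out-of-state tuition) / 2]."""
    return (in_state + out_of_state) / 2

def student_dormitory_ratio(dorm_spaces: int, full_time_students: int) -> float:
    """Dormitory space availability per full-time student."""
    return dorm_spaces / full_time_students

def financial_aid_student_ratio(aided: int, first_time_full_time: int) -> float:
    """Percent of full-time, first-time degree-seeking students receiving any aid."""
    return 100.0 * aided / first_time_full_time
```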
Method and Analysis Procedures

Identifying the data structure of the dependent variable and independent variables is the first step in choosing the most appropriate statistical method. In this study, the dependent variable, FTIC graduation rate, is a continuous variable, while the independent variables consist of continuous variables and dichotomous variables. The data to be analyzed have two important characteristics. First, the data related to the dependent variable are longitudinal in nature, and allow for an analysis of changes in the FTIC graduation rate in the years 1997 through 2001. Second, as discussed in a previous section, the data related to the independent variables have two hierarchical structures: institution-level variables and state-level variables.

Considering the data structure and research questions, the recently developed Hierarchical Linear Modeling (HLM) growth model was applied. HLM growth analysis makes it possible to describe changes over time in longitudinal data (unlike a pre-test and post-test design), and analyzes program impacts within a particular period of time specified by the researcher (unlike time-series analysis). What is more, when the data structure is hierarchical, the HLM growth model can better estimate the contribution of variables at each level (Arnold, 1992; Bryk & Raudenbush, 1992). Thus, under a hierarchical data structure, the HLM growth model is more relevant and useful than other methods such as pre- and post-test designs or interrupted time series designs.

One strength of the HLM growth model is that it considers each variable at its own level, and each level is formally represented by its own sub-model (institution-level and state-level in this study). These sub-models express relationships between variables at each level, and specify how variables at one level influence relations occurring at another (Bryk & Raudenbush, 1992).
When the change in FTIC graduation rate (growth trajectory) is included in the analysis, this study has three different hierarchical structures. Accordingly, three different types of sub-models are generated. The first is the institutional growth model, which includes each individual institution's growth trajectory for FTIC graduation rate. The second is the within-state model, which reflects institutional characteristics. The third is the between-states model, which analyzes the effects of state-level policy variables.

The first step in model building is to identify the growth trajectory of each institution as either linear or polynomial. To identify the growth trajectory, a visual inspection of all the institutions' growth trajectories was conducted, and the average growth trajectory was generated and visually inspected, as Bryk and Raudenbush (1992) recommend. Based on the identified growth trajectory, a linear within-institution model (Level-1 model) was generated.

Prior to specifying institution-level and state-level models, it is useful to fit the unconditional model, which does not include explanatory variables at the institution and state levels. The major purpose of the unconditional model is to collect information about the growth trajectory and point of origin (i.e., the estimated FTIC graduation rate in 1997). Based on the information obtained
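In the notation of Bryk and Raudenbush (1992), a three-level unconditional linear growth model of the kind described above might be written as follows. This is a generic sketch of the standard specification, not necessarily the authors' exact model:

```latex
% Level 1 (within institution): linear growth, with a_t = year - 1997
Y_{tij} = \pi_{0ij} + \pi_{1ij}\, a_{t} + e_{tij}

% Level 2 (between institutions within a state)
\pi_{0ij} = \beta_{00j} + r_{0ij}, \qquad
\pi_{1ij} = \beta_{10j} + r_{1ij}

% Level 3 (between states)
\beta_{00j} = \gamma_{000} + u_{00j}, \qquad
\beta_{10j} = \gamma_{100} + u_{10j}
```

Here \(Y_{tij}\) is the FTIC graduation rate in year \(t\) for institution \(i\) in state \(j\); \(\gamma_{000}\) and \(\gamma_{100}\) correspond to the state-level average initial status and average growth rate reported in the Results section.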
through the unconditional models, the first step is to determine if sufficient variability among institution- and state-level variables exists. For example, if the growth rate within a given state is the same for every institution, the parameter is constant across all the institutions within that state (Level-2). In this case, the parameter is retained at Level-2 and the corresponding random effect term is set to zero, but no Level-2 predictors are included in the conditional model.

Based on the results of the analysis of the unconditional model, the conditional model, which includes explanatory variables at Level-2 and Level-3, is considered. When the coefficients of the unconditional model are significant at each level, the variables are included in the conditional model. By introducing explanatory variables at each level, the total variation in FTIC graduation rate can be apportioned by levels. At Level-2, institution-level control variables are included, and state-level control variables and the program variables of interest are included at Level-3.

Results

Results of the Unconditional Model

Fitting the unconditional model at Level-2 and Level-3, exclusive of any Level-2 or Level-3 predictors, provided useful information about the general pattern of growth and the difference in growth rates between institutions and states.

Fixed Effects

The state-level fixed effect of graduation rate at the beginning of the five-year period indicated that, averaged across all institutions in the 41 states included in the analysis, the estimated graduation rate in 1997 was 39.92 percent (t = 28.23; p < .001). The estimated state-level fixed effect of slope indicated that, averaged across all institutions in the 41 states, the growth in graduation rates was estimated to be 0.62 percentage points per year. This growth trajectory was significantly different from zero (t = 4.47; p < .001). (See Table 1.)
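Taken at face value, the two fixed effects reported above imply the following average trajectory over the study period. This is a simple extrapolation from the reported estimates, not a re-fit of the model:

```python
# State-level fixed effects from the unconditional model reported above
INITIAL_RATE = 39.92   # estimated mean FTIC graduation rate in 1997 (%)
ANNUAL_GROWTH = 0.62   # estimated mean growth (percentage points per year)

def predicted_mean_rate(year: int) -> float:
    """Implied average graduation rate for 1997-2001 under linear growth."""
    return INITIAL_RATE + ANNUAL_GROWTH * (year - 1997)

trajectory = {year: round(predicted_mean_rate(year), 2) for year in range(1997, 2002)}
```

Over five years this amounts to roughly a 2.5-percentage-point rise in the average graduation rate nationwide, regardless of PBF status.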
Random Effects

The estimated variance components appear in the lower panel of Table 1. At the institution level (Level-2), the large chi-square values (1545.11 and 585.84) indicate that there were large amounts of random variation (p < .001) among institutions for both initial graduation rate and the growth in graduation rate. Therefore, further conditional analyses at the institution level were warranted. Across the states (Level-3), a significant amount of random variation also existed in the state mean initial graduation rate and state mean growth rates. The chi-square values for the two parameters were 132.72 (p < .001) and 121.10 (p < .001), respectively. Again, further conditional analysis for each parameter at the state level was also indicated.

Table 1. Fixed Effects and Random Effects (Unconditional Model)
Model Specification at Institutional Level and State Level

Based on the results of the unconditional model, the next step was to fit the institution-level model (Level-2 model) in order to account for the random variation found in the Level-1 growth parameters. To search for Level-2 predictors with sufficient predictive power, all Level-3 models were temporarily left unconditional. Then, the Level-3 predictors were considered once the Level-2 model was specified.

Institution-Level Model (Level-2 Model)

To explain the variability between institutions, institutional characteristics were considered at Level-2. Among the three potential Level-2 variables, average tuition and financial aid student ratio were excluded from the analysis because both variables were highly correlated with other included variables.

For the institution's initial status, the variance component associated with student dormitory ratio (277.36) was relatively large, and also had statistically significant predictive power (t = 6.598; p < .001). Also, the residual variance for student dormitory ratio was significant (chi-square = 63.70, p = 0.008). For the institution's growth rate, dormitory ratio was the only significant predictor (chi-square = 85.40, p < 0.001). Therefore, student dormitory ratio was included for the institution's initial status and for the institution's growth rate. The final Level-2 conditional models were specified as:

State-Level Model (Level-3 Model)

To explain the differences between state variations, state characteristics were included in the models. The results of a correlation analysis suggest that family ability to pay college costs was the strongest potential predictor. (College
preparation and unemployment rate were highly correlated with family ability to pay, and were thus excluded from the analysis because of multicollinearity.) Therefore, the final state-level models were:

The Effects of PBF Programs

The next step was to test the PBF program effects by including the program-related variables in the model. The first research question was to compare the growth in institutional graduation rates between PBF states and non-PBF states. To test for PBF program effects, the final model was expanded by adding a PBF program variable. This program variable was included only in the state average growth rate parameter, not in the initial status parameter, because the research question concerned the effects of the PBF program on the growth in graduation rates. Therefore, the final models to test program effects were:

In the model, "PBF" represented a contrast between PBF states and non-PBF states. The effect of PBF programs on FTIC graduation rate was tested in two ways. First, model fit improvement was tested to determine whether introducing the PBF program variable explained significantly more state-level variance in graduation rate. This test was conducted using a model fit test with D-statistics. Second, single parameter tests of the contrast of interest (PBF states vs. non-PBF states) were conducted.

The model fit was not improved significantly by adding the PBF program variable (chi-square = 0.188; df = 2; p > .500). This result suggests that the PBF program variable did not help to reduce the unexplained variance at the state level (see Table 2); whether states had PBF programs or not did not significantly influence the growth in graduation rates. In addition, the results for the single parameter tests showed that the contrast for PBF states vs.
non-PBF states was not statistically significant (for state mean growth rate, t = -0.50, p > .500; for Dorm Rate, t = 0.044, p > .500). (See Table 3.)

Table 2
Fixed Effects of Final Model (Testing PBF Program Effects)
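The D-statistic test compares the deviances of the nested models: the deviance difference is referred to a chi-square distribution with degrees of freedom equal to the number of parameters added. The reported non-significant result can be checked directly from the values in the text (only the chi-square of 0.188 and df = 2 come from the paper; the computation itself is an illustration):

```python
from scipy.stats import chi2

# Change in deviance from adding the PBF variable, with 2 added parameters.
dev_diff, df_added = 0.188, 2
p_value = chi2.sf(dev_diff, df_added)  # upper-tail (survival) probability
print(round(p_value, 2))               # 0.91, consistent with the reported p > .500
```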
Table 3
Random Effects of Final Model (Testing PBF Program Effects)

The Effects of Program Types

Research question 2 concerned the effects of PBF types on growth in graduation rates within the PBF states. That is, did states with both PB and PF programs have higher growth in graduation rates than states with only a PB or PF program? Therefore, a new data file with only PBF states (30 states) was obtained by splitting the original data file. Fitting the model followed the same steps described above.

The variance for the Level-2 model indicates that there were significant amounts of random variation (p < .001) among institutions for both initial graduation rate and growth of graduation rate. Therefore, further conditional analysis at the institution level was warranted. Across states, too, a significant
amount of random variation existed in the state mean initial status and in the state mean growth rate.

Based on the unconditional model, institutional characteristic variables were included in the Level-2 models. Each predictor variable for an institution's initial status and growth rate was the same as in the previous model specification. Through the model specification procedures, student dormitory ratio was selected as the predictor for both the institution's initial status and the institution's growth rate.

Based on the results of the Level-2 conditional model, state-level characteristics were included to construct a conditional Level-3 model. The model specification procedures were similar to those of the previous model, which was based on 41 states. Family ability to pay for college proved significant and was included in the final model. In addition, the performance program type variable was included in the final model to address the research question. Therefore, the final models were:

Adding a PBF program type variable to the models improved the model fit significantly (chi-square = 8.12; df = 2; p = 0.017). In addition, single parameter test results showed that the PBF program type was significant overall. The effect of PBF program type on the state mean growth in graduation rates was statistically significant (t = 2.36; p = 0.026). Also, PBF program type had a significant effect through student dormitory ratio: in states with both PB and PF programs, the ratio of dormitory beds had a greater impact on graduation rate than in states with only one program (t = 2.61; p = 0.015), as shown in Table 4. These results demonstrate that the states with both PB and PF programs performed better than the states with only a PB or PF program.

Table 4
Fixed Effects of Final Model (Testing PBF Program Types)
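The same deviance check applies to the program-type model: a chi-square of 8.12 on 2 degrees of freedom reproduces the reported p-value exactly (the two input values come from the text; the computation is an illustration):

```python
from scipy.stats import chi2

# Improvement in model fit from adding the program-type variable.
p_value = chi2.sf(8.12, 2)
print(round(p_value, 3))  # 0.017, matching the reported p = 0.017
```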
Summary and Discussion

The purpose of this study was to determine whether states with performance budgeting and funding (PBF) programs had increased the effectiveness of their public institutions of higher education over the five years considered in this study. To explore this question, two primary research questions were generated. An HLM analysis made it possible to test PBF program effectiveness: 1) between PBF states and non-PBF states; and 2) between states with both PB and PF programs and states with only a PB or PF program. The following is a summary of the research findings:

(1) During the five years under study, the growth of graduation rates in PBF states was not greater than in non-PBF states.

(2) The growth in graduation rates in the states with both PB and PF programs was higher than in the states with only one (PB or PF) program.

These results could be disappointing for state policy-makers who support performance-based reforms, or might be good news to those who disapprove of state government-initiated reforms in higher education. The following discussion explores the results and considers their implications.

The first finding to consider is that the growth in graduation rate in PBF states was not found to be different from that in non-PBF states. In interpreting these results, it is important to consider: a) the nature of change in graduation rates at institutions of higher education; b) the amount of funding tied to PBF in a given state; and c) the degree to which PBF programs influence college- and department-level decision making.

As many higher education administrators have recognized, graduation rates at colleges and universities generally change very slowly. This tendency is confirmed by a review of a decade of changes in graduation rates in Florida (Florida Board of Education, 2001), where the graduation rate at state colleges and universities varied by only four points
from 1992 through 1999 (57.2% in 1992, 58.7% in 1993, 59.5% in 1994, 59.4% in 1995, 59.5% in 1996, 59.5% in 1997, 61.1% in 1998, 59.6% in 1999, and 59.9% in 2000).

Despite changes in budgeting and other policies made by the state during this decade, the graduation rate at Florida institutions of higher education remained very stable. This stability could be attributed to the intrinsic nature of change in higher education institutions, and it could have limited the impact of PBF programs on graduation rate. The slow growth of graduation rates in PBF states may thus account for the lack of difference seen between PBF and non-PBF states, which would otherwise suggest that PBF programs are ineffective.

The tendency of graduation rates to change slowly calls into question the use of FTIC graduation rate as a program indicator, particularly for performance funding (PF) programs, which tightly link budget with institutional performance. When the graduation rate is selected as a performance indicator for a PF program, policy-makers should not expect instant results. Thus, the findings of this study suggest that PF states which use graduation rate as a performance indicator should take into account the caveats described above.

Another possible explanation for the ineffectiveness of the PBF programs might be the small amount of funding tied to institutional performance. Although some researchers (Hoyt, 2001; Serban, 1998; MGT of America, Inc., 1999) argued that the amount of money may not be crucial in PBF programs, the SHEEO survey results show that the proportion of money linked to performance is small: from 0.5% to 4% in all PBF states except South Carolina, the only state that allocates 100% of its higher education budget based on institutional performance (Christal, 1998).
A considerable amount of program ineffectiveness might therefore be explained by the low monetary stakes tied to institutional performance.

In addition, although many states adopted PBF programs, these states still allocate most of their budgets based on traditional criteria. As Wildavsky (1988) argued, even when state governments adopt new budgeting policies, the tradition of incrementalism remains entrenched in many state agencies and state legislatures. As is well known, the main budget allocation criterion in higher education has traditionally been the number of full-time-equivalent (FTE) students. Therefore, one possible scenario in budget allocation is that low-performing institutions with high student enrollments might receive more money than high-performing institutions with lower student enrollments, as Bridges (1999) found in his dissertation on the PF program in Colorado.

From the point of view of program implementation, one critical issue related to the ineffectiveness of the PBF programs is the level at which the PBF program has an impact within the institution. Expectations related to a PBF program might be communicated and tied to funding at the institutional, departmental, or faculty level. Considering that faculty play a large part in the performance of higher education institutions, it is important that they be 'on board' in order to improve institutional performance. As Burke and his associates (2000) found, however, performance funding tends to become invisible on campus below the level of the vice president. In some extreme cases, Poisel (1998) found that
presidents of community colleges did not fully understand the programs at the beginning of PBF program implementation.

The second finding to consider is the greater growth of the graduation rate in states with both PB and PF programs than in states with only one of these programs. To complement the strengths and accommodate the weaknesses of the two programs, ten states adopted both programs at the same time, thereby combining the flexibility of performance budgeting and the certainty of performance funding. The states with both programs might convey strong political intentions to their colleges and universities through the PF program, while allowing for needed flexibility through the PB program.

Regardless of whether the PBF programs were mandated by the state legislature, and whether performance indicators are prescribed or not, the flexibility and participation of institutional administrators and faculty in the design and implementation of PBF programs is crucial. Participants in the program implementation process tend to feel more responsible for the outcomes (Coulson-Clark, 1999). As Van Vught (1994) argued, some of the failures of government-initiated changes in higher education may be attributed to the independent nature of the academic profession. This characteristic of higher education institutions ensures that government-initiated reforms in higher education systems will fail unless they are flexible in their implementation and take faculty culture into account.

Limitations

This study has two limitations in generalizing its findings. First, although the most common performance indicator, FTIC graduation rate, is used to compare institutional performance, some states do not use FTIC graduation rate as a performance indicator. However, there is no comprehensive study of performance indicators nationwide, except a State Higher Education Executive Officers (SHEEO) survey which found that 32 of 33 states in the U.S.
include graduation rate as one of their performance indicators (Christal, 1998). Second, FTIC graduation rate is only one of the definitions of graduation rate. The report 'Measuring Up 2000' (NCPPHE, 2000) used two different measures in its calculation of completion. One was FTIC graduation rate, and the other was the proportion of total completions to total enrollment. Only one of these measures, FTIC graduation rate, was used for this study, because most states use FTIC graduation rate as their graduation rate indicator.

Conclusions

In the 1990s, higher education institutions faced the pressure of externally imposed reforms designed to link budgets with institutional performance. Among these reforms were performance budgeting and performance funding programs. As traditionally autonomous institutions, however, colleges and universities were slow in responding to the demand for change posed by such programs. As the results of this study show, the implementation of PBF programs did not have the immediate or dramatic impact on higher education institutions that policy-makers may have expected.
Institutional performance (FTIC graduation rate in this study) did not improve markedly after states adopted PBF programs. This outcome might be attributed to the tendency of higher education institutions to change slowly, to the use of graduation rate as a measure of institutional performance or effectiveness, or to problems intrinsic to the PBF programs themselves.

The lack of growth in institutional graduation rates, however, does not mean that PBF programs failed to achieve their goals. More time may have been necessary for changes to become apparent, or changes might have appeared indirectly rather than directly. From a strategic point of view, then, legislatures might do well to encourage institutions to engage in actions that will lead to long-term change. Concluding that PBF programs are not useful based simply on changes in graduation rate over a short period is therefore not advised.

Another issue comes to light when evaluating the effectiveness of PBF programs: the common practice of state legislatures of allocating monies for higher education annually. As long as state governments measure institutional performance annually and allocate budgets based on this annual measurement, institutions will direct more money and effort towards short-term rather than long-term fundamental changes. The pressure of annual budgetary decisions is accentuated when states review, and sometimes change, their performance indicators annually. In such cases, institutions are not sure whether today's performance indicators will be next year's indicators. Under these circumstances, it is not surprising that institutions tend to focus more on short-term than long-term efforts to increase their institutional performance.

In consideration of the findings of this study and the evaluation issues discussed above, policy-makers are advised to sustain PBF programs long enough for such programs to bear fruit or prove ineffective.
In addition, a distinction between short-term and long-term performance indicators is essential. Long-term indicators should be considered in the budget allocation process after a reasonable time has passed, or weighted in such a way as to guide institutions to focus more on long-term than short-term changes. If policy-makers attend to such details, PBF programs may prove effective in fostering the sort of institutional change that benefits all involved with higher education.

Notes

1. This study was done during 1993 to 1994, and the ten states included were Colorado, Florida, Illinois, Kentucky, New York, South Carolina, Tennessee, Texas, Virginia, and Wisconsin.

2. Branch campuses will generally be excluded from the analysis, except in cases where adequate information is available.

3. In 2001, the Integrated Postsecondary Education Data System (IPEDS) began to provide nationwide graduation rate data, which had been collected since 1991 using a student tracking system.

4. In some cases, a low graduation rate reflects the initial intentions of students
who ultimately wish to transfer to another institution, a scenario most frequently seen at community colleges (Whigham, 2000).

5. We excluded states with two years of PBF experience. This excluded nine states from the analysis.

6. The "overall score" of preparation is calculated as a weighted score based on students' high school completion, K-12 course taking, and K-12 student achievement.

References

Affholter, D. P. (1994). Outcome monitoring. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation. San Francisco: Jossey-Bass Inc.

Arnold, C. L. (1992). An introduction to hierarchical linear models. Measurement and Evaluation in Counseling and Development, 25, 58-89.

Astin, A. W. (1997). How good is your institution's retention rate? Research in Higher Education, 38(6), 647-658.

Atkinson-Grosjean, J., & Grosjean, G. (2000). The use of performance models in higher education: A comparative international review. Education Policy Analysis Archives, 8(30). Retrieved from http://epaa.asu.edu/epaa/v8n30.htm

Barnetson, B. (1999). Alberta's performance-based funding mechanism. Paper presented at the Coalition for the Advancement of Applied Education Meeting. (ERIC Document Reproduction Service No. ED 430 428)

Blackwell, E. A., & Cistone, P. J. (1998). Power and influence in higher education: The case of Florida. Higher Education Policy, 12, 111-122.

Bogue, E. G. (2002). Twenty years of performance funding in Tennessee: A case study of policy intent and effectiveness. In Joseph C. Burke and Associates, Funding public colleges and universities: Popularity, problems, and prospects. Albany, NY: The Rockefeller Institute Press.

Braunstein, A., McGrath, M., & Pescatrice, D. (1999). Measuring the impacts of income and financial aid offers on college enrollment decisions. Research in Higher Education, 40(3), 247-259.

Bridges, G. L. (1999). Performance-funding of higher education: A critical analysis of performance funding in the state of Colorado.
Unpublished doctoral dissertation, University of Colorado.

Bryk, A. S., & Raudenbush, S. W. (1992). Hierarchical linear models. Newbury Park, CA: Sage Publications, Inc.

Burke, J. C., & Minassians, H. (2001). Linking state resources to campus results: From fad to trend. The fifth annual survey. New York: The Nelson A. Rockefeller Institute of Government.

Burke, J. C., Rosen, J., Minassians, H., & Lessard, T. (2000). Performance funding and budgeting: An emerging merger? The fourth annual survey. New York: The Nelson A. Rockefeller Institute of Government.

Burke, J. C., & Modarresi, S. (1999). Performance funding and budgeting: Popularity and volatility. The third annual survey. Albany, NY: Rockefeller Institute of Government.

Burke, J. C. (1998). Performance funding indicators: Concerns, values, and models for state colleges and universities. New Directions for Institutional Research, 97, 49-67.

Burke, J. C., & Serban, A. M. (1998). Current status and future prospects of performance funding and performance budgeting for public higher education: The second survey. Albany, NY: Rockefeller Institute of Government.

Burke, J. C., & Serban, A. M. (1997). Performance funding and budgeting for public higher education: Current status and future prospects. Albany, NY: Rockefeller Institute of Government.
Cabrera, A. F., Nora, A., & Castaneda, M. B. (1992). The role of finance in the persistence process: A structural model. Research in Higher Education, 33(5), 571-593.

China, J. W. (1998). Legislating quality: The impacts of performance based funding on public South Carolina technical colleges. Unpublished doctoral dissertation, University of Texas at Austin.

Christal, M. E. (1998). State survey on performance measures: 1996-97. Denver, CO: State Higher Education Executive Officers/Education Commission of the States.

Coulson-Clark, M. (1999). From process to outcome: Performance funding policy in Kentucky public higher education, 1994-1997. Unpublished doctoral dissertation, University of Kentucky.

Dallet, P. H., Wright, D. L., & Copa, J. C. (2002). Ready, fire, aim: Performance funding policies for public postsecondary education in Florida. In Joseph C. Burke and Associates, Funding public colleges and universities: Popularity, problems, and prospects. Albany, NY: The Rockefeller Institute Press.

Dayhoff, D. C. (1991). High school and college freshmen enrollments: The role of job displacement. Quarterly Review of Economics and Business, 31(1), 91-103.

Florida Board of Education (2001). State University System Accountability Report. Tallahassee, FL: Author.

Florida Office of Program Policy Analysis and Government Accountability (2001). Justification review of the state university system (Report No. 01-28). Tallahassee, FL: Author.

Heller, D. E. (1997). Student price response in higher education: An update to Leslie and Brinkman. The Journal of Higher Education, 68(6), 624-655.

Hoyt, J. E. (2001). Performance funding in higher education: The effects of student motivation on the use of outcomes tests to measure institutional effectiveness. Research in Higher Education, 42(1), 71-85.

Hsing, Y., & Chang, H. S. (1996). Testing increasing sensitivity of enrollment at private institutions to tuition and other costs. The American Economist, 40(1), 45-56.
St. John, E. P., Hu, S., & Weber, J. (2001). State policy and the affordability of public higher education: The influence of state grants on persistence in Indiana. Research in Higher Education, 42(4), 401-428.

Kahn, J. H., & Nauta, M. M. (2001). Social cognitive predictors of first-year college persistence: The importance of proximal assessment. Research in Higher Education, 42(6), 633-652.

Kim, K. B. (2001). A study of institutional policy-makers' perceptions of performance funding indicators for higher education in Korea and the United States. Unpublished doctoral dissertation, University of Iowa.

Levin, J. S. (2001). Public policy, community colleges, and the path to globalization. Higher Education, 42, 237-262.

Marcantonio, R. J., & Cook, T. D. (1994). Convincing quasi-experiments: The interrupted time series and regression-discontinuity designs. In Joseph S. Wholey, Harry P. Hatry, & Kathryn E. Newcomer (Eds.), Handbook of practical program evaluation. San Francisco: Jossey-Bass Inc.

Marcus, D., Cobb, E. B., & Shoenberg, R. E. (2000). Missouri coordinating board for higher education: Funding for results. Lessons learned from FIPSE project IV. District of Columbia, U.S.: 05/2000. (ERIC Document Reproduction Service No. ED 443 300)

National Center for Public Policy and Higher Education. (2000). Measuring up 2000: The state-by-state report card for higher education. Retrieved from http://measuringup2000.highereducation.org

Poisel, M. A. (1998). An evaluation of performance-based budgeting in the Florida Community College System. Unpublished doctoral dissertation, Florida State University.
Ruppert, S. S. (1995). Roots and realities of state-level performance indicator systems. New Directions for Higher Education, 91, 11-23.

Schmitz, C. C. (1993). Assessing the validity of higher education indicators. The Journal of Higher Education, 64(5), 503-521.

Sheridan, P. M., & Pyke, S. W. (1994). Predictors of time to completion of graduate degrees. The Canadian Journal of Higher Education, XXIV(2), 68-88.

South Carolina Commission on Higher Education (2002). Studying the effects of performance funding on planning, budgeting and student outcomes. Paper presented at the FIPSE South Carolina conference on performance funding, February 2002.

U.S. Department of Labor (2002, September). Local area unemployment statistics. Retrieved from http://www.bls.gov/au/astch98htm; http://www.bls.gov/lau/astrk98.htm; and http://www.bls.gov/lau/lauastrk.htm

Usdan, M. D. (2002, Spring). The new state politics of education. The State Education Standard, 14-18.

Van Vught, F. A. (1994). Policy models and policy instruments in higher education: The effects of governmental policy-making on the innovative behavior of higher education institutions. Higher education: Handbook of theory and research, X.

Wildavsky, A. (1988). The dance of dollars. In The new politics of the budgetary process. Glenview, IL: Scott, Foresman/Little Brown College Division.

Zernike, K. (2002, August 4). Accountability hits higher education. The Chronicle of Higher Education, p. B20.

Zumeta, W. (2000). Accountability: Challenges for higher education. Retrieved from http://www.nea.org/he/mea/ma2k/a00p57.pdf

About the Authors

Jung-cheol Shin, Ph.D.
Research Associate, Center for Educational Research and Policy Studies
Florida State University
Stone Building 312-F
Tallahassee, FL 32306-4463
Phone: 850-668-3962
Fax: 850-644-1592
Email: firstname.lastname@example.org

Sande Milton, Ph.D.
Professor, Educational Leadership and Policy Studies
Florida State University
Stone Building 113
Tallahassee, FL 32306-4451
Phone: 850-644-6777
Fax: 850-644-1258
Email: email@example.com

The World Wide Web address for the Education Policy Analysis Archives is epaa.asu.edu

Editor: Gene V Glass, Arizona State University
Production Assistant: Chris Murrell, Arizona State University

General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, firstname.lastname@example.org, or reach him at College of Education, Arizona State University, Tempe, AZ 85287-2411. The Commentary Editor is Casey D. Cobb: email@example.com.

EPAA is published by the Education Policy Studies Laboratory, Arizona State University.