
Educational policy analysis archives

Material Information

Title:
Educational policy analysis archives
Physical Description:
Serial
Language:
English
Creator:
Arizona State University
University of South Florida
Publisher:
Arizona State University
University of South Florida.
Place of Publication:
Tempe, Ariz
Tampa, Fla
Publication Date:
June 29, 2000

Subjects

Subjects / Keywords:
Education -- Research -- Periodicals (lcsh)
Genre:
non-fiction (marcgt)
serial (sobekcm)

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
usfldc doi - E11-00174
usfldc handle - e11.174
System ID:
SFS0024511:00174




Full Text



Education Policy Analysis Archives
Volume 8, Number 30
June 29, 2000
ISSN 1068-2341

A peer-reviewed scholarly electronic journal

Editor: Gene V Glass, College of Education, Arizona State University

Copyright 2000, the EDUCATION POLICY ANALYSIS ARCHIVES. Permission is hereby granted to copy any article if EPAA is credited and copies are not sold. Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education.

The Use of Performance Models in Higher Education: A Comparative International Review

Janet Atkinson-Grosjean, University of British Columbia
Garnet Grosjean, University of British Columbia

Abstract

Higher education (HE) administrators worldwide are responding to performance-based state agendas for public institutions. Largely ideologically driven, this international fixation on performance is also advanced by the operation of isomorphic forces within HE's institutional field. Despite broad agreement on the validity of performance goals, there is no "one best" model or predictable set of consequences. Context matters. Responses are conditioned by each nation's historical and cultural institutional legacy. To derive a generalized set of consequences, issues, and impacts, we used a comparative international format to examine the way performance models are applied in the United States, England, Australia, New Zealand, Sweden, and the Netherlands. Our theoretical framework draws on understandings of performance measures as normalizing instruments of governmentality in the "evaluative state,"
supplemented by field theory of organizations. Our conclusion supports Gerard Delanty's contention that universities need to redefine accountability in a way that repositions them at the heart of their social and civic communities.

I. Introduction

In recent years, the imposition of performance models on institutions of higher education has become a widespread practice. National systems are in place in France, Britain, the Netherlands, Scandinavia, Australia, and New Zealand. In federations like Germany, the US, and Canada, individual Länder, states, and provinces have taken the initiative (Brennan, 1999; Woodhouse, 1996).

Performance models include, but are not limited to, social technologies like performance indicators. They are situated within broader, ideological mechanisms variously characterized as public sector reform, new public management (NPM), or what Neave, in the context of higher education (HE), calls "the evaluative state" (Neave, 1998; 1988). These mechanisms attempt to impose accountability on public sector institutions and improve service provision, by measuring performance against managerial, corporate, and market criteria.

Accountability and service improvement are common goals of all HE performance models. But different national systems adopt different combinations of supplementary goals. These include stimulating internal and external institutional competition; verifying the quality of new institutions; assigning institutional status; justifying transfers of state authority to institutions; and facilitating international comparisons (Brennan, 1999:223). The particular combination of goals depends on specific national contexts, and the balance within them of accountability, markets, and trust (Brennan, 1999; Trow, 1998).

But the foundations of these structural changes extend beyond ideological reform of public-sector institutions. They are rooted, as well, in the post-war transition from elite to mass systems of higher education (Scott, P. 1995). Arguably, the momentum of massification alone would have enforced restructuring of the HE system in most jurisdictions (Neave, 1998; Dill, 1998). The combination of HE expansion and the emergence of the evaluative state produces international convergence around the implementation of performance models. However, convergence proceeds at a far-from-uniform rate. It is modulated by path-dependent national institutions, entrenched cultural traditions, and the divergent starting points of each national system.

Broadly speaking, public universities in the Anglo-Saxon countries are moving from a position of strong autonomy to one of subordination to centralized state control. For continental Europe and Scandinavia, where strong state control was the norm, more control of higher education is being ceded to the institutions. These apparently contradictory trajectories converge at the level of institutional performance and accountability (Henkel and Little, 1999) where, as Newson (1998:113) has pointed out, "criteria such as 'efficiency,' 'productivity,' and 'accountability' are becoming embedded in the routine day-to-day decision-making that takes place in 'local' units throughout the university." At this level, the proliferation of a few dominant models can be explained, in part, by the operation of isomorphic forces within
institutional fields, whereby "lead" organizations set the pace for "followers" (Powell & DiMaggio, 1983).

Performance models have now been in place long enough for studies of consequences to be undertaken (Neave, 1998; Dill, 1998). For example, a recent 15-country OECD study, under the direction of John Brennan and Tarla Shah of Britain's Open University, considers the impact of performance models in 40 participating institutions. On the basis of early analyses, Brennan (1999) reports that while impacts are conditioned by the nature of the individual institution and the distribution of authority in the HE system, performance mechanisms appear to have raised the profile of teaching and learning in HE institutions. He finds that overall impact is increased when the mechanisms gain legitimacy at the faculty and department level, and that increased centralization and managerialism is characteristic at the level of the institution. In some countries, Brennan suggests, evaluation and assessment mechanisms tilt the distribution of power away from faculty and towards senior managers and administrators. But in other countries, where the management layer is traditionally weak, the impacts of external evaluations are more important.

A potential weakness of this otherwise exhaustive study is its reliance on institutional self-reports. By surveying a wide range of methodologically diverse studies from different national contexts, we hope to distil a robust set of findings. We first construct the theoretical framework of the "evaluative state," through which to view the policy and administrative implications of performance models. We then consider the theoretical importance of accounting tools in performance measurement, before defining the terms and trends in performance-based HE management. Next, utilizing a comparative international format, we summarize the impact of HE performance models in the United States, England, Australia, New Zealand, Sweden, and the Netherlands. Where appropriate, we add the results of cross-national studies. Finally, we attempt to synthesize our findings into a generalized set of consequences, identifying system-level effects, technical performance issues, institutional effects and management issues, and impacts on teaching and research and on faculty and academic departments.

II. The Evaluative State

Fundamental changes in the policies and practices of most OECD countries have followed a cultural shift in the public management paradigm over the last two decades. Public sector reforms induced fundamental changes, not only in policies and practices, but also in the culture underlying the public administration of nation-states (Strange, 1996; Aucoin, 1995; Charih and Daniels, 1997; OECD, 1995; Keating, 1998). This new culture took as axiomatic market-like principles of cost-recovery, competitiveness, and entrepreneurship in the provision of public services (Power, 1996; Charih and Rouillard, 1997). Criteria of economy and efficiency were supported by "broad accusations of waste, inefficiency, excessive staffing, unreasonable compensations, freeloading, and so forth" (Harris, 1998:137). "Rational" corporate management techniques were installed, incorporating accounting, auditing, accountability, and performance criteria. The intent was not only to make public institutions less costly and more effective, but also to normalize and entrench private sector principles (Hood, 1991, 1995; Savoie, 1995; Harris, 1998).
The application of these criteria to HE produced elaborate exercises in "visioning," "re-engineering," and "quality assurance," structured on the basis of
transparent and auditable accountability for performance (Power, 1996).

International convergence around these ideals renders the putative retreat of the state somewhat illusory (Dominelli and Hoogvelt, 1996; Strange, 1996; Dale, 1997). Rather than regulating directly, however, the state now regulates from a distance, assuring accountability through refined forms of "remote control" or steering (Burchell et al., 1991; Barry et al., 1996; Power, 1995). Neave neatly points to the paradox: "what some regard as a lighter form of surveillance…goes hand in hand with a veritable orgy of procedures, audits, [and] instruments of administrative intelligence which, in their scope and number…make those which upheld the state-control model appear rustic" (1998:266). By using these mechanisms to steer from a distance, the state ensures its performance agenda is internalized by the institution. Thus regulation becomes self-regulation, and state control becomes self-control—a type of self-disciplining Foucault (1978) called "governmentality."

In his study of Continental European HE systems, Maassen (1997) empirically identified this move. In the countries Maassen studied, detailed regulation of the inputs and processes of HE is no longer practiced. Instead, institutions themselves create the conditions for achieving the outcomes required by the state, thereby demonstrating the effects of "remote steering" (Maassen, 1997:125). To induce self-regulation and self-surveillance in institutions, Maassen found that European governments are also abandoning existing rigid legal frameworks—a move Neave (1998) calls "dejuridification"—in favour of "framework laws." Maassen suggests that European HE is undergoing the most far-reaching transition since that from elite to mass systems. What we are seeing, he speculates, might be "only the beginning of a long-term trend that will change HE far more fundamentally than we can imagine" (1997:125).

According to Neave, the beginning of this long-term trend was the emergence of the evaluative state "from two very different discourses, the one European and political, the other mainly American and economic" (1998:278). In the first discourse, control of universities mirrored broader democratic issues, while the second was a direct bid to substitute market control for state control. The former tended to predominate in France, Sweden, Belgium, and Spain, according to Neave, while the latter dominated in the UK and the Netherlands and rooted itself earlier. Both discourses converged, Neave says, around three major displacements in HE.

One displacement is increasing concentration on strategic planning and systems development. Another marks the emergence of powerful, intermediary "buffer bodies" to serve as the state's agents in evaluation and surveillance. The third is the proliferation of increasingly demanding performance models, including quality assessment and assurance; continuous improvement; performance-based funding, budgeting, and management; strategic planning and budgeting; and total quality management. In one way or another, all these models rely on measurements or "indicators" of performance.

III. Issues in Measuring Performance

Paradoxically, the evaluative state's self-regulating "governmentality" requires fidelity devices to measure and induce compliance.
Largely, these calculative practices (Miller, 1994) or rituals of verification (Power, 1995) employ accounting tools, such as budgets, cost/benefit analyses, cost-centre comparisons, financial audits, and an increasing array of performance and compliance audits (Power, 1995; Porter, 1995;
Harris, 1998). Accounting tools enable "actions on the actions of others…to remedy deficits of rationality and responsibility" (Miller, 1994:29). They are characterized by their surveillance and control capacities, i.e., the ability to determine norms, then discipline performance against them (Hoskin and Macve, 1993).

Despite appearances, accounting techniques and numbers are not neutral reflections of "reality." Rather, they selectively construct reality from complex webs of social and economic negotiations. An accounting "fact" is actually a contingent and partial accomplishment. Yet contingency and partiality disappear in inscription. Tabulated, calculated, and double-underlined, accounting "facts" appear incontrovertible—the very essence of stability, objectivity, and impartiality.

In a university setting, the apparent objectivity of such "facts" can undermine autonomy, "open[ing] up the routine evaluation of academic activities to other than academic considerations, and…mak[ing] it possible to replace substantive judgements with formulaic and algorithmic representation" (Polster and Newson, 1998:175). A financial calculus thus underpins the discourse of performance in HE, and constitutes its instrumental logic. The instrumentalities include performance indicators, quality indices, and benchmarking standards. In a detailed study of institutions in three Commonwealth countries, Miller (1995:1) found that these market-based, managerial instrumentalities "have modified or come to dominate the governance and culture of universities in Australia, the United Kingdom, and Canada". Commenting on the lack of faculty resistance, Miller argues that as academics become constrained, monitored, and documented by performance criteria, they come to collude in the construction of their own fate (cf. Harley and Lowe, 1999).

Performance indicators (PIs) are the key instrumentality. Watts (1992) studied the major OECD countries, looking at accountability and performance measures. Of the eight commonalities he found, PIs were by far the most significant. PIs replace traditional input measures, like the number of students enrolled, with goal- or result-oriented estimates of outcomes or value-added, such as the quality and employability of graduates. Identifying one of their most contentious aspects, Watts (1992:87) comments that "many of these efforts have found...real problems in trying to measure quantitatively the unmeasurable."

Harris (1998:136) reminds us that despite their objectified and factual appearance, much of the accounting and other data used to construct PIs derives from the subjective exercise of judgement. Similar judgements are also exercised on the indicators themselves, which are interpreted to infer "facts" that then "create the domain of the factual" (Harris, 1998:136). Because PIs focus on readily quantifiable inputs and outputs, they tend to neglect the more complex social variables that resist measurement (Newson, 1992; Harris, 1998). And, because of the difficulty of linking measurable outputs to inputs and processes, there is a danger that "targeted goals, as reflected in indicators, often become ends rather than means" (Harris, 1998:136).

El-Khawas and colleagues note that "academics have resisted the move towards performance indicators, arguing that [they] are reductionist, offer inaccurate comparisons, and are unduly burdensome" (1998:9).
As a result, she notes, some governments are introducing PIs incrementally, requiring universities to generate an increasing amount of quantitative data for intermediary bodies. Others have embedded PIs in institutional contracts or other forms of conditional funding. While debate continues on their appropriate use, she says, in most countries public officials advocate the development of a few relevant performance indicators, together with comparisons among institutions and over time. She differentiates England, which "took a further step by linking the amount of research funding to performance scores of academic
departments" (El-Khawas et al., 1998:9). In the studies cited later, we will find more variation than El-Khawas suggests in the numbers and types of indicators tracked. We will also see that the pattern of linking funding to performance extends beyond research to HE budgets more generally. And we will find performance-linked funding in, for example, the United States, Australia, and New Zealand as well as in England.

While there is no single, agreed-upon definition of PIs, the one developed by Cave, Hanney, and Kogan (1991:24) is still applicable:

    a performance indicator is an authoritative measure—usually in quantitative form—of an attribute of the activity of a higher education institution. The measure may be ordinal or cardinal, absolute or comparative. It thus includes both the mechanical applications of formulae (where the latter are imbued with value or interpretative judgements) and such informal and subjective procedures as peer evaluation or reputational rankings.

One of the principal causes of controversy surrounding the use of PIs is their link to performance-related funding and budgeting. It is important to differentiate between these terms. According to Burke and Serban (1998:2), "the advantages and disadvantages of each are the reverse of the other. In performance funding, the tie between results and resources is clear but inflexible. In performance budgeting, the link is flexible but unclear." Performance funding ties separate and usually small allocations of funding directly to institutional performance against a normally limited number of indicators. In performance budgeting, a longer list of indicators provides an overall picture of institutional performance; this then supplies the context in which a decision on the institution's total budget allocation is made. The former enhances the incentive to improve performance, but punishes circumstances beyond institutional control. Further, the small sums allocated are disproportionate to the effort required to generate the data. The flexibility of the latter allows for extenuating circumstances, but diminishes specific incentives to improve (Burke and Serban, 1998). A schematic sketch of the two mechanisms appears below.

Johnstone (1998) confirms these differences and notes that both are rooted in conceptions of administrators as "rational actors" who will maximize whatever is rewarded. According to Johnstone, conventional budget drivers—particularly full-time equivalent enrollments—induce institutions to "over-enroll" at the cost of quality and can lead to a concentration on popular programs that can be taught cheaply (1998:16). In contrast, performance-based budgets use criteria such as degrees awarded, time to completion, graduates' external performance, faculty success in attracting competitive research grants, and faculty reputations with peers. However, says Johnstone, proponents of performance criteria are beginning to realize that there is a need to balance "multiple, difficult-to-measure, and not always compatible goals" (Johnstone, 1998:16). For example, to maximize student accessibility, institutions are encouraged to accept promising but less-qualified students. This goal is incompatible with maximizing completion rates or postgraduate examination performance.

The offsetting advantages and disadvantages of performance funding and performance budgeting help to explain why increasing numbers of states in the U.S.A. are adopting both systems (Burke and Serban, 1998).
While examples of performance models could be found in some states (e.g., Tennessee) as early as the 1970s, by 1998 they were utilized in half the states in the U.S.A. Reported intentions predict that 70% of states will have adopted performance funding or budgeting models by 2002 (Burke and Serban, 1998). There is more than rational judgement at work here; a "bandwagon" is rolling.
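
To make Burke and Serban's contrast concrete, the following sketch renders the two mechanisms in Python. It is purely illustrative: the indicator names, weights, and dollar figures are invented, and only the structural distinction between a formulaic bonus and an indicator-informed discretionary budget comes from the source.

```python
# Illustrative contrast between performance funding and performance
# budgeting (after Burke and Serban, 1998). All indicator names,
# weights, and dollar amounts are hypothetical.

def performance_funding(scores, bonus_pool):
    """Performance funding: a small, separate allocation tied directly
    and inflexibly to performance on a few indicators (scored 0..1)."""
    achieved = sum(scores.values()) / len(scores)
    return bonus_pool * achieved          # formulaic; no discretion

def performance_budgeting(scores, base_budget, max_swing=0.05):
    """Performance budgeting: a longer list of indicators paints an
    overall picture that merely informs a discretionary decision on
    the institution's total allocation."""
    picture = sum(scores.values()) / len(scores)
    # Mimic budgetary discretion: the overall picture nudges the base
    # budget up or down within a capped range, rather than fixing it.
    return base_budget * (1 + (picture - 0.5) * 2 * max_swing)

few_indicators = {"graduation_rate": 0.8, "job_placement": 0.6}
many_indicators = {"graduation_rate": 0.8, "job_placement": 0.6,
                   "minority_enrollment": 0.7, "grant_income": 0.5,
                   "time_to_degree": 0.9}

print(performance_funding(few_indicators, bonus_pool=1_000_000))      # 700000.0
print(performance_budgeting(many_indicators, base_budget=50_000_000)) # 51000000.0
```

The formulaic variant rewards exactly what it measures but cannot absorb context; the discretionary variant absorbs context at the price of a blurred incentive. This is precisely the offsetting trade-off described above.
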

Organizational theory assists our understanding of this bandwagon phenomenon. Powell and DiMaggio (1983), for example, have pointed to the role of isomorphic forces in stabilizing institutional and organizational fields around a dominant model. The forces at work may be regulative, normative, cognitive, or any combination thereof, depending on the nature of the field (Scott, R. 1995). Thus the particular combinations of state policy, programs, and funding (regulative); academic values and norms of accountability (normative); and the way the social purpose of HE is framed (cognitive) might be expected to produce fairly similar institutional responses to performance criteria that may, nevertheless, differ in important respects in different national and sub-national contexts.

Further, formal organizations like universities and colleges tend to adopt prevailing "rituals of rationality" to increase their legitimacy and chances for survival (Meyer and Rowan, 1977; Kaghan, 1998). These rituals of rationality increasingly include principles of profitability and "good management" derived from the private sector. Public universities and colleges, therefore, can be situated in a larger institutional framework where the system of organizations is isomorphically aligned around ideological commitments to private sector principles of rationality.

But as Kaghan (1998:172) points out, institutional theories tend to focus at the macrostructural level and pay little attention to the "microdynamics" of specific practices. To attend to this level of detail, we now consider the way performance models are enacted in different national contexts. A comprehensive examination of US and UK experiences is followed by less detailed analyses of Australia, New Zealand, Sweden, and the Netherlands.

IV. Performance Models in Context

1. State Models in the United States

Policy-makers in the U.S.A. were among the first to experiment with monitoring the performance of publicly funded institutions of higher education. In the 1960s and 1970s, state officials began examining possibilities of allocating resources to institutions according to how well they achieved state objectives and outcomes (Layzell, 1998).

Tennessee was the first state to implement performance funding in higher education. Well regarded in the US, the program is considered a success. The Tennessee State Higher Education Board initiated a pilot program in 1975. By 1979, state officials, working with advisory groups, had developed a set of ten performance criteria. These, and the associated measurement and reporting procedures, were applied to all public universities and colleges (El-Khawas, 1998). During 1980-81, public institutions were able to earn up to 2 percent above formula allocations, based on performance against these criteria (Albright, 1997). The plan has been reviewed and updated at five-year intervals since then. Today, the amount of discretionary funding available to reward good performance stands at 5.5 percent of an institution's overall budget. Explicit goals are targeted over an extended period of time, allowing institutional behaviour to be shaped towards desired ends.

Because of isomorphic forces, the success of the Tennessee program led to the development of similar programs in Arkansas, Missouri, and Ohio (El-Khawas, 1998). But conformity is far from total. Texas is among several states that have studied, proposed, and rejected performance funding—largely because of a lack of support from
state legislators, combined with cumbersome reporting requirements and reduced institutional autonomy (Albright, 1997). On the other hand, the State of South Carolina has adopted measures that tie allocation of the state's entire budget for public higher education to institutional performance against 37 specific indicators (Burke and Serban, 1998).

One notable characteristic of Tennessee-style performance funding is that it is non-competitive. All institutions can access these supplemental "bonus" funds. If one fails to obtain its share of the supplementary funds, the others do not benefit. Generally, however, policy-makers today are less favourably inclined to voluntary institutional improvement; systems of mandated public accountability are becoming the norm. As with the introduction of the Tennessee model, we see a tendency to copy other states' systems, in an attempt to develop a common core of indicators to address common problems.

A study by the National Association of State Budget Offices (NASBO, 1996) reviews measures adopted by 38 states in addressing calls for HE improvement and accountability. These include budget reforms, restructuring of governance, performance-based funding, and privatization of teaching hospitals. We cannot report on this study in detail, or present the responses of all the participating states. However, certain states can be considered "indicators" of the changes induced by performance models in all states.

Arizona's Budget Reform Act of 1993 resulted in the development of a master list of state government programs in 1995, complete with mission statements of institutions, functional program descriptions, goals, performance measures, and funding and staff information. This was the first opportunity for state analysts to determine budgets and funding sources for higher education. Subsequently, in an attempt to increase graduation rates without increasing the budget, a "short" (three-year) Bachelor's Degree program was implemented at Northern Arizona University. As well, certain programs implemented a twelve-month academic year. Faculty could elect to take their break in either fall or spring instead of summer. To ensure a steady supply of enrollees, the Arizona Legislature introduced a bill to provide HE scholarships to students who graduated high school in three consecutive academic years and retained a GPA of at least 3.0 (out of 4.0). State funding would be shifted from the K-12 system to the HE system to fund the new measures.

In 1995, Arkansas moved from an enrollments-based funding policy to one focused on productivity outcomes. The Institutional Productivity Committee and the State Board of Education developed sixteen performance measures. Amendments to the Revenue Stabilization Law resulted in the creation of a Higher Education Institutions Productivity Fund, authorized to provide an additional $5 million and $10 million in fiscal years 1996 and 1997 respectively, on the basis of institutional performance on these measures.

Also in 1995, the Governor of California agreed to provide lump-sum funding to the University of California and California State University for a period of three years for general support, capital outlays, and to service debt requirements.
In exchange, the universities were required to increase enrollments and the portability of courses between institutions; implement new productivity and efficiency increases each year; improve student graduation times; and restore faculty salaries to competitive levels. Meanwhile, in the Kansas fiscal 1997 budget and the Kentucky 1994-1996 Appropriations Bill, appropriation increases to higher education were based on performance funding concepts and principles.

On July 1, 1995, Minnesota merged three of the state's public, post-secondary systems under a single governance structure. For 1995 and 1996, a portion of state
appropriations to the University of Minnesota and the state's colleges and universities was made contingent upon achievement of performance goals. For example, for the University of Minnesota, $5 million of the 1996 appropriation was placed in a performance incentive account, to be released in $1 million increments for achieving each of five performance measures. The measures related to: a) recruitment and retention of freshman students with high academic averages in 1995; b) increase in the intake of minority students in 1996; c) increase in the number of women and minority faculty hired in 1995-96; d) increase in graduation rates between 1994 and 1996; and e) increase in the number of credits offered through telecommunications between 1995 and 1996.

Missouri adopted policies that ensure the recognition of institutional performance through appropriate incentive funding. In fiscal years 1995 and 1996, funding was appropriated to reward institutions based on their attainment of certain goals: a) assessment of graduates; b) graduation of minority students; c) number of students pursuing graduate education; d) teacher-education graduates scoring in the upper half of national exams; and e) job placement rates in major field. In fiscal year 1996, more than $7 million of the ongoing untargeted funding for four-year institutions was distributed according to these performance goals.

While other states, including New Mexico, New York, North Carolina, North Dakota, Oklahoma, South Carolina, Utah, Washington, and Wyoming, have all undergone budget reform, restructuring, and the implementation of performance measures, none has gone to the extreme of South Carolina. In 1996, at the urging of a group of prominent business leaders, the State Commission for Higher Education implemented the most significant performance-based funding program to date. The program was phased in. By the 2000 fiscal year, as stated earlier, 100% of state HE funding will be allocated on the basis of institutional performance on 37 specific indicators. This high number of indicators, as well as the total linking of funding to performance, runs counter to conventional wisdom on performance models.

Agendas beyond Performance

The above review of performance models makes evident the extent to which they can be used to advance state agendas other than those strictly concerned with accountability and performance. In the case of Minnesota and Missouri, for example, performance models are used to address state requirements for equity and equality in public institutions. Thus the state can use these models to force HE institutions to advance compliance with long-range state objectives. If the institutions successfully comply, they are rewarded. Otherwise, there is an implicit threat that the state will step in and take control of budgets and governance structures. But state policy is subject to change with each election. In between, there may be insufficient time for political objectives to be fully integrated into an institution's governance and funding structure. The sketch below illustrates, with the Minnesota incentive account, the mechanics of tying such objectives to money.
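
As an illustration of those mechanics, the sketch below models the Minnesota incentive account described earlier: a $5 million pool released in $1 million increments as each of five measures is achieved. The pool, the increment, and the measure list come from the account described above; which measures count as achieved is hypothetical.

```python
# Sketch of the Minnesota-style incentive account: $5M held back,
# released in $1M increments per performance measure achieved.
# The True/False achievement flags are hypothetical.

INCREMENT = 1_000_000
measures = {
    "freshman recruitment and retention": True,
    "minority student intake":            True,
    "women and minority faculty hires":   False,
    "graduation rate increase":           True,
    "telecommunications course credits":  False,
}

released = INCREMENT * sum(measures.values())   # True counts as 1
withheld = INCREMENT * len(measures) - released
print(f"released ${released:,}, withheld ${withheld:,}")
# released $3,000,000, withheld $2,000,000
```

Because each tranche is all-or-nothing, the account rewards compliance with each discrete state objective rather than overall performance, which is how such models can carry agendas beyond performance narrowly construed.
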
A recent study by the State Higher Education Executive Officers (SHEEO, 1997) provides a snapshot of the experience of 48 states in implementing performance measures. The study indicates that:

- thirty-seven states used performance measures in some way, more than double the number three years previously
- twenty-six states plan to expand or refine current efforts
- most states adopt performance measures for accountability purposes
- twenty-three states use performance measures to inform consumers about higher education
- twenty-three states use performance measures to distribute state funds to higher education institutions (Network News, 1998:1-2)

Most of the performance models referred to in this study fail to differentiate between longer-term state interests and short-term public demands. As well, in the twenty-three states where performance measures supply information to consumers of HE, the information reported is deemed more useful to policy-makers than for assisting individual consumers to make informed educational choices.

Responses to US Performance Models

The SHEEO and the NASBO studies cited above seem to indicate a shared understanding between state officers and HE institutions about the importance of performance models. This may not be the case. In a survey of higher education policy issues (Ruppert, 1998), a total of 1008 respondents, consisting of political leaders (n=519) and higher education leaders (n=489) from 12 Midwestern states, were asked to identify the most critical issues facing post-secondary education in the approach to the 21st century. Keeping higher education affordable was a major concern for both groups, but political leaders ranked it as their first priority, while higher education leaders ranked it second.

Overall, how to pay for higher education (funding policies) was considered the Midwest's second highest priority. For higher education leaders this was the number one priority, while political leaders ranked it sixth out of nine issues. Capacity for change was the third priority for higher education leaders, while political leaders ranked this item fifth. Not surprisingly, political leaders ranked ensuring accountability second, and productivity and cost efficiency third, while higher education leaders ranked these sixth and eighth respectively. With such disparities on the relative priorities of key issues, will the two groups support one another? Or is the stage set for increased tensions, in the form of either active or passive resistance to state-mandated measures?

In analyzing responses to the SHEEO survey, Albright (1998) reports that in states implementing performance-based funding, HE institutions accrue certain advantages. They benefit from increased communication with, and support from, political leaders; the funding provides an alternative to enrollment-based subsidies, and acts as an incentive to improve performance. By aligning planning goals with budgets, institutions can respond to calls for accountability and reinforce confidence in higher education. However, the design and implementation of a performance model is not accomplished without difficulty. Ways must also be found to balance decreasing institutional autonomy and increasing state review and control. Qualitative methods must be used to supplement quantitative measures when studying institutional processes. There is a need to overcome the complexities of measuring "quality," particularly as it pertains to student learning, and to find measures that adequately reflect differences in institutional missions. While some states have been more successful than others in introducing performance measures, it is still too early to attempt to identify a single "best" US model.

In terms of future prospects, a survey of state finance officers reports data on legislative action plans for 1999 (McKeown-Moak, 1999). From the perspective of state officials, the financial outlook for US higher education is better now than it has been in years.
State appropriations reached the highest level ever in FY 99, increasing four times faster than the Consumer Price Index. HE's share of state general funds increased for the first time in over a decade. Average tuition fees are rising steeply. State officials proclaim that such positive economic conditions for higher education have not existed in the last two
decades. At the same time, administrators in HE institutions prepare for reduced appropriations and increases in the use of performance models. Student debt loads continue to rise at an alarming rate, and institutions that originally welcomed new federal tax credits now face the added costs of compliance and record keeping. Added to this are increased competition for state resources; demands for up-to-date curricula that keep pace with economic and market change; approaching reelection campaigns for state legislators; tensions with faculty and staff about internal restructuring to accommodate performance criteria; and threats to restructure HE governance. Taken together, these factors indicate the prospect of continuing struggle for US higher education leaders.

A final note: Congress enacted changes to the Higher Education Act in October 1998. Beginning in the 2001 academic year, colleges and universities must submit comprehensive reports on attendance costs for students to the National Committee on the Cost of Higher Education. NCHE will then publish trend information on tuition fees and financial aid by institution, and compare this information with the Consumer Price Index. Failure to comply will net the recalcitrant institution a fine of $25,000. Compared to the burgeoning costs of reporting, some might consider the fine the more fiscally prudent option for financially starved institutions.

2. England

In England, performance models were first introduced in the early 1980s as an ideological initiative of the Thatcher government. Continuing under Thatcher's successor, John Major, they then, as in other countries, transcended the partisan divide into Tony Blair's New Labour administration.

A number of intermediary agencies are responsible for administering the performance agenda. These include the Higher Education Funding Councils of England (HEFCE), Wales (HEFCW), and Scotland (SHEFC), which administer the Research Assessment Exercise (RAE), and the Higher Education Quality Council (HEQC). Under the recommendations of the Dearing Report, the latter was succeeded by the Quality Assurance Agency for Higher Education (QAAHE) in 1997. QAAHE administers quality audits and the Teaching Quality Assessment (TQA).

The Research Assessment Exercises and Teaching Quality Assessments represent longstanding programs of performance assessment. Both are controversial, for various reasons. The purpose of the former is the highly selective distribution of funding in support of high-quality research. It evaluates on the basis of perceived national and international standards. The latter justifies public support on the basis of quality and quality improvement, and rewards "excellence" in these areas. TQA evaluations are mission-dependent. They inform rather than determine funding, and are less oriented to quantitative data than the RAE, although both programs use performance indicators.

TQA indicators include student entry profiles; expenditures per student; progression and completion rates; qualifications obtained; and subsequent destinations. Institutions are assessed on six core aspects rated on a four-point scale. The RAE looks for indicators relating to research publications; research grant income; numbers of assistants and students employed; and the research environment.
It rates seven categories and relies on the subjective judgements of peer panels concerning the national and international standing of the research departments assessed (Stanley and Patrick, 1998). In contrast to this "arm's length" determination, TQAs involve site visits by external assessors and encourage critical self-assessment of weaknesses as well as strengths.
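
How such panel judgements translate into money can be sketched in stylized form. The grade-to-weight mapping and the departments below are invented, and this is not the funding councils' actual formula; only the ordinal scale running from 1 through 5 to 5*, and the principle of aggregating panel determinations into selective funding, are described in the RAE discussion later in this section.

```python
# Stylized RAE-type allocation: ordinal panel grades are mapped to
# funding weights, scaled by research-active staff, and used to split
# a fixed research budget. Weights and departments are hypothetical.

GRADE_WEIGHT = {"1": 0.0, "2": 0.0, "3b": 1.0, "3a": 1.5,
                "4": 2.25, "5": 3.375, "5*": 4.05}

departments = [          # (unit, panel grade, research-active staff)
    ("History",   "5*", 20),
    ("Physics",   "4",  35),
    ("Sociology", "3a", 15),
]

BUDGET = 10_000_000
volume = {unit: GRADE_WEIGHT[grade] * staff
          for unit, grade, staff in departments}
total = sum(volume.values())

for unit, v in volume.items():
    print(f"{unit:10s} £{BUDGET * v / total:,.0f}")
```

Since low grades carry zero weight, funding under such a scheme is sharply selective: within a flat overall budget, one unit's improved grade is another's lost income, the zero-sum competition noted later.
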

Much of the criticism focused on the RAE stems from the statistical ranking of institutional performance and the publication of those rankings in the media, with subsequent reputational and funding effects. Criticism is also leveled at the underlying methodology, the emphasis on outputs, and the reliance on statistical data rather than qualitative assessments, as well as the additional workload institutions face in complying with performance models.

The 1997 National Committee of Enquiry into Higher Education (Dearing, 1997) made performance requirements even more explicit. Dearing recommended the development of performance indicators and benchmarks for "families" of institutions with similar characteristics, on the principle that the interpretation of performance should take account of sector context and diversity. In response, the Higher Education Funding Council (HEFCE) set up a Performance Indicators Study Group (PISG) to develop indicators and benchmarks of performance, rather than descriptive statistics. The latter, while they are "helpful in the management of institutions, can only be judged in the light of the missions of institutions and do not purport to measure performance" (PISG, 1999:8). In this regard, the group comments disparagingly on the publication of "misleading and inaccurate" league tables.

In the first stage of its study, the group focused on producing indicators for the government and funding councils that would also inform institutional management and governance. Its immediate priority was the publication of institutional-level, output-based indicators for research and teaching. Process indicators, such as the results of TQAs, were rejected. By the time of its first report (PISG, 1999), the group had prepared proposals for indicators relating to: participation of under-represented groups; student progression; learning outcomes and non-completion; efficiency of learning and teaching; student employment; research output; and HE links with industry. All except the latter related to both institutional and sector levels. Responding to Dearing's concerns about interpretive contexts, the group developed a set of "context statistics" for each indicator to take account, for example, of an institution's student intake, its particular subject mix, and the educational backgrounds of students. These will allow "the results for any institution to be compared not with all institutions in the sector, but with the average for similar institutions" (PISG, 1999:6).

The next stage of the study will look at the information needs of other stakeholders, particularly students and their advisers. The third stage will respond to a call from the Chancellor of the Exchequer to improve the indicators on student employment outcomes. The PISG acknowledges that PIs in HE are "complicated and often controversial" and that "the interpretation of indicators is generally at least as difficult as their construction" (1999:12). They note that PIs require agreement about the values (inputs) that make up the ratio, reliable data collection, and a consensus that a higher ratio is "better" or "worse" than a lower ratio. The literature suggests that none of these is easily negotiated nor guaranteed in advance. The sketch below illustrates how sensitive such a ratio can be to the choice of inputs.
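
A worked example makes the point about input values concrete. The figures are hypothetical; the same cohort yields three different "completion rates" depending on which entrants are counted in the denominator.

```python
# Hypothetical completion-rate PI: the ratio's value depends on which
# students count in the denominator, so parties must first agree on
# input definitions before comparing ratios across institutions.

graduates = 1_800
entrants_all = 3_000          # every student who enrolled
entrants_fulltime = 2_400     # excluding part-time entrants
entrants_active = 2_000       # excluding early withdrawals too

for label, denom in [("all entrants", entrants_all),
                     ("full-time entrants", entrants_fulltime),
                     ("active entrants", entrants_active)]:
    print(f"completion rate ({label}): {graduates / denom:.1%}")

# completion rate (all entrants): 60.0%
# completion rate (full-time entrants): 75.0%
# completion rate (active entrants): 90.0%
```

Until the parties agree on the denominator, comparing such ratios across institutions, or over time, settles nothing.
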
Faculty Responses to Performance Models in the UK

Among faculty and at the institutional level, responses to performance mechanisms tend to follow a "strategy of accommodation" that focuses on technical rather than normative aspects, and involves participation in the development of measures to make them "more meaningful or less harmful" (Polster and Newson, 1998). Consequences of this strategy in the UK include: the imposition of performance accounting systems for rating faculty productivity; favouring of research that attracts funding; a competitive transfer market in the CVs of "high performing" researchers; heavier and lighter teaching loads for "less productive" and "more productive" researchers respectively; an associated deterioration in teaching conditions; and a reordered system of state-appointed buffer
bodies to allocate funding on the basis of externally determined criteria (Polster and Newson, 1998:177). These elements recur in the following detailed discussion of the findings of two UK studies. Each examines the implications of performance models for faculty in English universities.

Henkel (1997) studied seven disciplines across six different types of universities, interviewing 105 administrators and academics at various levels in the hierarchy. The study sought the implications of three performance policies: the research assessment exercise (RAE); the Higher Education Quality Council's (HEQC) academic audits for quality assurance; and the Higher Education Funding Council for England's (HEFCE) teaching quality assessments (TQA). In five of the universities studied, Henkel found a significant trend to "centralized decentralization"—strong central management coupled with maximum devolution of responsibility. This involved the creation of well-defined new roles at the centre, and the proliferation of non-academic support units. In part, these were to mediate the state's performance expectations and policies, now interpreted as corporate standards. Budgets were being devolved, usually to the department level, and the iteration between the centre and departments was deemed increasingly important. The new challenges were creating adaptation and status problems for administrators in some universities. But in others, administrative roles were expanding to meet the requirements of the new state policies. One administrator referred to his new authority to "open the black box of academic decision making" (Henkel, 1997:140).

While those at the centre spoke of iteration, individual faculty and the basic units were more aware of centralized authority. Many academics expressed "bitter resentment" about the inordinate administrative requirements necessary to comply with performance models, and strongly objected to the amount of time taken away from academic work (141). Many expressed nostalgia for the elite system, and saw the new models as attempting to compensate for the consequences of that system's disappearance. Thus, performance models were viewed as connected with "an undervaluing of individualization, excellence, and risk, espousing instead a 'predictable mediocrity'" (ibid.). Some also saw the new models as facilitating instrumentalism and "satisficing" behaviour on the part of students, as well as linking with market values of consumerism and customer-led education. At issue as well was the emergence of differentiated contracts "based on competitiveness, insecurity, the casualization of academic employment, and…the attenuation of institutional loyalty" (142).

Henkel's findings are affirmed in a study of what Dominelli and Hoogvelt (1996) describe as the "Taylorization" of academic labour. Taylorization is achieved through the fragmentation, sequencing, and commodification of faculty work "into component parts or activities, each part being translated or 'operationalized' into empirically identifiable and quantifiable indicators or measures" (79). These discrete "technical competencies" may then be "subject to cost-efficiency scrutiny and put up for tender" (79). The elimination of professional autonomy is another key aspect. Functional analysis defines "competences," which are then further defined by performance criteria—the assessable outcomes.

What are the consequences of "Taylorization" and performance models for academics?
Dominelli and Hoogvelt describe increased workloads; shrinking resources; dramatic declines in social status; and truncation of functions. They cite the following statistics:

- between 1987 and 1993, student numbers in HE increased by 50%, while academic staff numbers increased by only 10% and total spending per student fell by 50% (p. 82 and fns. 35 and 36)
- in the same period, core staff increased by 1.2%, while staff employed on temporary and short-term contracts increased 23% (p. 83)
- in the OECD, between 1980 and 1990, the UK was the only country with real negative growth in pay (-3.8%) for academic teachers (p. 83)

Echoing Henkel's findings, these writers suggest that the English performance model is built on the following characteristics: (1) decentralized budget management; (2) peer pressure and peer scrutiny of "performance"; and (3) flexible production techniques.

The UK's Research Assessment Exercise (RAE)

The RAE is a major and recurring evaluation of research performance. For a comprehensive Foucauldian analysis of the RAE as a routine operation of surveillance and assessment dependent on coercion and consent, see Broadhead and Howard (1998). The last RAE was in 1996; the next will be in 2001. The RAE directly affects the allocation of funds from the higher education funding councils. Council research budgets have not increased for some years, so, for institutions, competition for research funds is a zero-sum game with winners and losers. And, since the binary system of universities and polytechnics was unified in 1992, this "flat" amount of funding now has to be allocated to more than 40 institutions—twice the original number (McNay, 1999). Reporting on the consequences of the 1992 and 1996 RAEs, McNay found that "money was a great driver in participating in the RAE and the money that flows from it was the main means by which it exercised influence for behaviour change" (1999:192).

Institutional submissions to the RAE describe research performance and plans for each academic area, and list by area all "research-active" staff, together with details of their research output—publications, discoveries, patents, and so on. A series of panels then judge performance—by a variety of different and not necessarily compatible means—against approximately 70 criteria. The scale runs from 1 (research of little consequence) through 5 (research of international renown), to 5* (outstanding) (Williams, 1998). Funds to support research in a particular institution are subsequently calculated from an aggregation of these determinations. Units that do well have funding for the next five years, while poorly rated units try to limit the damage resulting from lost income (ibid.).

To discover the impacts of the RAE, McNay conducted 30 institutional case studies; surveyed administrative and academic staff in 15 institutions; and interviewed external stakeholders in the funding councils, industry, learned societies, and professional bodies. Overall, he finds that the RAE's impacts extend beyond funding, to affect "institutional strategies, priorities, and use of general resources, not just those flowing from RAE" (1999:199).

He reports the following institutional-level impacts (1999:195-6). First, he found more refinement of research policy and strategy, with research now focused in a smaller number of priority areas. Next, the research function is better managed and more efficient, but administrative requirements have increased, with an increase in centralized research management and the number of committees. Third, these changes are primarily expressed through strategic policies and practices relating to research staffing.
For example, some universities adopted more exclusionary recruitment criteria favouring "proven" researchers, and used the same exclusionary criteria to designate some existing research staff "non-active." Contradicting other studies, McNay finds "some spending on attracting 'stars' [the CV transfer market] but this was marginal" (1999:196).

Next, participation in the RAE caused an organizational restructuring that gradually but effectively separated research from teaching. Research centres freed staff from teaching responsibilities, and graduate schools focused on research, leaving undergraduate teaching responsibilities to the departments. Overall, 71% of unit heads reported the RAE's positive impact on research, while 62% reported its negative impact on teaching. These results are hardly surprising since, as McNay states, "the Dearing enquiry takes the breach [between teaching and research] as a fait accompli" (1999:198).

Finally, and paradoxically, the RAE generated a virement (reallocation) of funds from higher-graded to lower-graded departments. This reallocation was policy in several of the institutions studied. Largely, the virement is a strategic response to an anomaly in the RAE framework. RAE funding flows from "improvement." Top-rated departments have no room for improvement on the RAE scale, so receive no increase in funding. But lower-rated areas can improve their performance and increase their funding. Therefore, "financially, improvers were better than star performers at the funding ceiling" (McNay, 1999:196). McNay also found internal reallocations of teaching funds to support research activities.

At the unit level, heads of research units were generally positive about the impact of the RAE on productivity but expressed concerns about the related increase in stress. Other concerns included: inhibition of new research areas and interdisciplinary research; increasingly conservative approaches to research; and the aforementioned rupture between teaching and research. Two other issues were important at the unit level. First, concern was expressed at the rewarding of publication rather than dissemination. It was felt that the RAE focused too exclusively on prestige journals "mainly read by other academics, including panel members making RAE judgements", whereas dissemination could often be more effectively achieved through professional and popular journals read by end-users (1999:198). McNay points out that there is a risk of "the academic world…talking only to itself and so sterilising its work" (201). Second, staff management was a major issue for unit heads—both the determination of researcher status (active or inactive), and the reorganization of individual researchers into teams.

At the individual researchers' level, only 34% in McNay's study believed the RAE had improved the quality of their research. Most said the exercise had had little or no impact on them, apart from the stress and time-loss associated with the administration of performance exercises. Nevertheless, half now worked more in teams, and about a third reported some constraint on choice of research topics. About 58% believed that the research agenda and priorities were defined by people other than researchers, "despite the peer-review process of RAE and the prominence of academics in committees of the research councils and other funding bodies" (199).

Williams (1998:1079), a medical researcher involved in leading the RAE exercise for his research group, takes a more combative stance. He believes the RAE uses "restrictive, flawed, and unscientific criteria" and produces "a distorted picture of research activity that can threaten the survival of active and productive research units". He says the exercise is "unaccountable, time-consuming, and expensive" and should be made more objective.
Williams identifies a number of major flaws in the RAE: restrictive survey criteria; dubious performance indicators; loopholes and abuses; inefficiencies and unnecessary expense; subjective, unaccountable panel reviews; bias towards established groups; and damage to other aspects of scholarship like teaching.

McNay finally considers a number of system-level impacts of the RAE. Through what Williams (1998:1079) calls "the double blessing of money and prestige", and the RAE's competitive nature, the state seems to have succeeded in increasing research achievements in exchange for little if any growth in the overall research budget.


However, the costs are no less real. McNay believes the research/teaching split was at least anticipated and probably intended. Each was funded and assessed separately and held separately accountable. Staff could be designated "teaching only" as well as "research only." And, increasingly, research and teaching were organized in different forms. McNay notes that in the 1996 RAE, the education panel was the only one that would accept teaching material as evidence of research output, and that "the teaching curriculum is being affected as senior staff in universities withdraw support from [departments] with low RAE grades, so that taught courses close" (200). Increasingly, staff rewards are research driven and some teaching funds are being reallocated ("raided") to finance research. Yet, as McNay points out, 80% of HE funding is for teaching. He questions the privileging of the "scholarship of discovery" over the "scholarship of transmission."

Another empirically based study investigated the RAE's impact on academic work in two social science and two business disciplines (Harley and Lowe 1999). In the study, some 80% of respondents identified changes in recruitment patterns in their discipline generally. Of these, three-quarters attributed the changes directly to the RAE and a further 18% held the RAE partly responsible. A quarter of the sample characterized the changes in terms of less emphasis on teaching skills; just under two-thirds in terms of greater emphasis on research; and just over two-thirds in terms of greater emphasis on publication. More than three-quarters of the sample cited changes in recruitment and selection policies in their own departments as a result of the RAE. Asked about the changes taking place in their disciplines, 52% characterized them as "bad," 18% as "good and bad," and 23% as "good." In terms of impacts on their own work, 53% said the RAE had influenced it and only 10% indicated no influence whatsoever.

3. Australia

In Australia, the country's 40 public research universities and two private institutions are subject to a common framework of funding and regulation that provides some 60% of their total funding and subjects them to the performance requirements of the Higher Education Funding Act (Marginson, 1998). Reform commenced in 1988, with the abolition of the binary divide between universities and colleges of advanced education, and has continued since that time. Reform included a number of early initiatives: a system of discipline reviews conducted by panels of experts reporting to the minister; the development and testing of a system of performance indicators; allocation of special funds to support performance initiatives; and establishment of a fund to improve teaching (Harman, 1998). There was strong emphasis on managerial modes of operation, adequate levels of accountability, and maximum flexibility in decision-making (Meek and Wood, 1998). The resulting changes have proved so extensive that the process is often referred to as the "Australian Experiment."

During 1993-95, a number of innovative performance features were introduced under the rubric of an annual academic audit focused on processes and outcomes (Harman, 1998). Participating universities would conduct a self-evaluation and prepare a detailed portfolio.
Peer-review panels would visit and assess the institution's effectiveness in performance outcomes and processes. Universities would be ranked on the basis of effectiveness and outcome excellence, and the rankings, together with detailed reports, would be published annually. As in England's RAE, these rankings and their publication were by far the most controversial element of the scheme. Results were widely reported in the media. High-ranked universities found their prestige had increased, while those that performed poorly experienced reputational damage. Finally, the process would be driven by the incentive of incremental performance funding, allocated according to the rankings, to a maximum of 5% of annual budgets for the top-ranked institutions (Harman, 1998).

Institutions have welcomed the additional funding, and the program has garnered the support of institutional leadership and others who saw a need for management reforms and a greater client focus. Criticism has been severe, however, much of it focused, as in England, on the contentious ranking system, which favours the older, more-established universities; the underlying methodology and the reliance on narrow statistical data; the additional workload; and the negative effects on less-favoured institutions. Some have argued that, especially in teaching and learning, results are temporary. Others share Dill's (1998) opinion that the cost/benefit ratio of the whole exercise is flawed, especially for the lower-ranked institutions, where the consumption of scarce resources on these initiatives has bred staff resentment.

Nevertheless, the new government elected in August 1996 committed itself to continuing performance models, albeit with a 5% reduction in operating grants and other funding restraints (Meek and Wood, 1998). The Higher Education Council was made responsible for the government's new program, which includes the integration of various models; institutional reviews of performance improvements every three to four years; and public reporting of performance improvements. As of 1997, universities had been asked to submit a copy of their strategic plan, together with information on the key indicators they used to judge their own performance; current outcomes and intended improvements; and improvements since the last evaluation (Harman 1998:345).

A survey by Taylor and colleagues (Taylor et al., 1998) of Australian academics in three universities sought perceptions of the impacts of these and earlier reforms. The survey revealed a high level of concern in many areas and a fairly dismal assessment of future prospects for teaching and research, as well as of the standard of undergraduate students and the extent of academic freedom. The quality of new students, teaching, and research are all identified as being in decline, while the undervaluing of teaching in comparison with research persists. Changes in university management towards a more corporate style are seen as a threat to academic freedom. More established research universities are concerned that scarce research funds are being stretched too widely. This perception is leading to new divisions in the unified higher education sector. The writers believe that "the tension between staff desire for academic freedom—with its often time-consuming collegial decision-making—and management's need for flexibility is set to continue" (269). Academics' entrenched distrust of administration "will not be ameliorated by the growing managerial desire to conceive of higher education as a corporate service industry". They conclude that "there is a real danger that management and academic staff will polarize" (ibid.).

Another study (Marginson, 1998) coined the term "new university" to capture the institutional impact of the constellation of changes introduced under the reform agenda.
This extensive study of 17 universities found: the emergence of a new kind of strategic leader in the presidential office; eclipse of collegial decision-making and emergence of management-controlled, "post-collegial" mechanisms; changes in research management with consequent effects on academic work; commonalities and variations among the "new universities"; and that the changes corresponded with systems of "new public management." These results are confirmed in the study of governance and management by Meek & Wood (1998).

Currie and colleagues (Currie, 1998; Currie and Vidovich 1998) conducted a qualitative study based on interviews of 153 Australian and 100 American academics at six universities: Sydney, Murdoch, and Edith Cowan in Australia; Arizona, Florida State, and Louisville in the US. Additional data were drawn from studies and interviews in Canada and New Zealand. Currie's theoretical framework was constructed around Foucault's concept of governmentality; Lyotard's ideas on performativity; and theories of globalization and pervasive neoliberal market ideals. The focus was managerialism in Australian and US universities. A large majority (more than 85%) of respondents in the study reported increases in accountability and surveillance over the last five years. There was a sense that performance data were being gathered without any clear perception of how they were to be used.

Other perceptions included: declining budgetary control by faculty; predominance of private-sector approaches to management; the sense that universities no longer thought of themselves as primarily educational institutions; and a suspicion that salary and administrative costs for senior and middle management were burgeoning. Divisions between faculty and central administration were reported to be widening, with the academic function becoming subordinated to the administrative function. Full-cost recovery was a major theme (Fisher and Rubenson, 1998), as were efforts to run the university like a business. Those areas closer to the market flourished while the rest had to battle for survival. A majority of faculty (73% in the US; 59% in Australia) said decision-making had become "more bureaucratic, top-down, centralized, autocratic, and managerial" (Currie, 1998:26). Of the rest, 19% in the US and 17% in Australia identified democratic decision-making as present at the unit level, while bureaucratic and corporate managerial procedures predominated at the institutional level.

4. New Zealand

New Zealand's 32 post-secondary institutions currently enroll some 200,000 students, just over half at the seven national universities. In September 1997, the New Zealand government released a green paper on tertiary (higher) education. The proposals were radical enough to prompt student protests in the streets of Auckland, Christchurch, and Wellington. Some 74 students were arrested attempting to break through a police barricade at the Parliament Buildings in Wellington. A student leader said that the proposals, if enacted, would turn New Zealand into the "most right-wing country in the world" in terms of HE funding (Cohen, 1997:A44). An earlier, leaked version of the document used the term "corporatization," and painted a picture of "voucher-bearing students attend[ing] higher education institutions that were more private than public. The institutions would be expected to turn a profit" (ibid.). The language of the official version was more temperate.

Its release was followed by a year of extensive consultation and policy development—almost 400 submissions were received—culminating in a November 1998 white paper. In substance, the new policies have been compared to the UK's Dearing Report. Both the UK and NZ documents "suggest a future in which institutions will bear much more responsibility for their own affairs, particularly their financial affairs" (Cohen, 1997:A44). The white paper establishes the ground rules for what the government calls "a high-performing tertiary sector" (Creech, 1998). The policy direction follows the "evaluative state" model long established in New Zealand. It calls on universities to "lock-in quality" and sets up a number of mechanisms to ensure performance will occur. A new intermediary body—Quality Assurance Authority New Zealand (QAANZ)—will "rigorously test" the teaching and research of every institution in the sector. Funding will depend on performance tests being met.
As well, university governance will be reformed. Governing councils will be limited to twelve members, including faculty, outside experts, and students. The government reserves the right to intervene in the affairs of any institution deemed at risk, whether academically or financially, "to protect the taxpayers' investment". All institutions will have to demonstrate their financial viability before receiving further government funding.

The awarding of government funds for research will also be modified, along the lines of Britain's RAE, to introduce competition. Of the $100 million annual research budget, 20% will be set aside initially as a "contestable pool". To qualify, researchers will need a demonstrated track record in their fields and a "strategic" focus that both benefits the national interest and is cost-effective. In 2001, after a review of the country's research requirements, the plan is to increase the contestable portion of the annual budget to 80%.

These recent moves continue the process of cultural change in the New Zealand higher education system that began with the "neoliberal experiment" in 1984. In a program of radical social and economic restructuring, successive governments have reconfigured the country once called "the welfare capital of the world" (Roberts, 1998:3). As in Australia, and in Britain under Thatcher and Major, welfare benefits were slashed, user-pay systems were introduced in the public sector, and state assets were privatized. The public sphere was transformed by the introduction of quasi-markets (Marginson, 1997). The trend towards devolution with strong state steering is that of the "evaluative state." Bureaucrats now talk the language of "inputs," "outputs," and "throughputs" (Roberts, 1998). Students pay a higher proportion of their educational costs and are designated as "customers." The teacher-student relationship has become contractual rather than pedagogic (Codd, 1997). The emphasis on performance and accountability for results is pervasive. The discourse is of "international competitiveness" and "enterprise culture" (Roberts, 1998:3). Transforming educational institutions into corporate entities "geared toward the ideal of making a profit or at least minimizing losses and efficiencies" has been an important objective (Roberts, 1998:3). Regular performance reviews—based on a variety of performance indicators—are mandated for all levels of the institution, to ensure efficiency objectives are met. The development of a National Qualifications Framework, which breaks down the "educational product" into "unit standards," facilitates the Taylorization (Dominelli and Hoogvelt, 1996) and commodification (Peters and Marshall, 1996) of higher education in New Zealand.

5. Sweden

The evaluation movement arrived in Sweden later than elsewhere in Europe, with performance models first appearing on the political agenda towards the end of the 1980s (Nilsson and Näslund, 1997). It is also developing somewhat differently than in other Nordic countries, with a clear trend linking program reviews, institutional evaluations, and national evaluations. Considerable movement can be detected away from the system of highly centralized state control of HE that saw the country through the expansive period of the 1960s and 1970s. Decentralization was the motif of the 1980s. In 1989, the Minister of Education appointed a national commission to begin investigating the quality of higher education. The Liberal-Conservative government of 1991-94 signalled continuing commitment to deregulation of HE policy with their 1992 proposition, Universities and Colleges of Higher Education—Freedom for Quality. They disbanded the central HE authority (Universitets- och högskoleämbetet—UHÄ) and allowed individual institutions to communicate directly with the Ministry of Education regarding funding.
Infused with neoliberal ideology, the new government sought to provide institutions with more autonomy in their dealings with the state. They established a national Secretariat for Evaluation of Universities and Colleges (subsequently to become the Office of the Chancellor) with a mandate to determine "various indicators of quality which can be used as the basis for allocating funds for undergraduate education" (SFS, 1992, cited in Nilsson & Näslund, 1997:7). When this proved unrealistic at a national level, each institution was given responsibility for establishing a program of quality development. With the institution of the 1994 proposition (Teaching and Research—Quality and Competitiveness), 5% of each institution's resource allocation was based on an evaluation of its quality development program and implementation efforts (Nilsson and Näslund, 1997). When the Social Democratic government assumed power in 1994 they did away with this premium, declaring that "quality enhancement is not simply something that is expressed in special programmes but is basically an attitude which must characterize the day-to-day work of each institution" (Nilsson and Näslund, 1997:7).

The Social Democratic government also restructured the intermediate authority into separate free-standing units—including the National Agency for Higher Education (Högskoleverket)—to ensure that institutional performance programs were reviewed regularly. Thus, beginning in 1995, efforts to improve the quality of performance, rather than the quality of education, became the focus of assessment. Concurrent with this decision came the announcement that total funding of undergraduate education was being cut by 10%. Bauer and Kogan (1997) argue that while there appears to be a general trend in devolution of authority from the state to institutions, and while the notion of a national system of performance indicators has been abandoned, the state has actually increased its performance requirements. Feedback of results is an important function in the new steering system. Greater autonomy has thus been obtained at the cost of increased demands for accountability and a more systematic approach to quality assurance. This is described by Wahlén (1998) as a shift from a system of management by rule to one of management by goals or results. The system includes the evaluation of individual educational subjects at a national level, the evaluation of education programs for accreditation, and an emphasis on the development of a professional culture in which university staff take responsibility for their work and its results. Recently, as well, a new requirement calls on universities to report student outcomes according to class, ethnicity, and gender. In performance models generally, social engineering ambitions are never far away.

Finally, all 36 institutions of higher education in Sweden must undergo a quality audit to ensure that mechanisms are in place, before the year 2000, for the efficient use of resources. From early indications, university reactions to these moves are mostly positive (Wahlén, 1998:38).

In a study of performance systems in the Nordic countries, Smeby & Stensaker (1999) found evidence in all four countries of balance between internal institutional needs and external societal needs. None of the countries link assessment with resource allocation, nor are there direct attempts at political steering. Rather, the intent seems to be ameliorative and, as such, may bolster academics' trust in these systems (1999:13). Despite surface similarities, however, differences in design and practice are apparent, reflecting the differing institutional and political endowments of each country.
While the authors accept that performance models represent the new "meta-discourse" of HE policy, they suggest that "the processes involved imply, at least in the Nordic countries, very incremental changes to existing structures of power within higher education" (1999:13). In Norway and Finland, for example, these systems are considered "policy experiments." In Denmark, the process is undergoing reassessment at the end of the first round, while in Sweden the history of decentralization and delegation predates the new meta-discourse, extending back to 1977. The authors conclude that "changes to the existing external and internal 'power balance' between state and institutions…occur very slowly in all four countries" (ibid.). This study therefore supports a "historical institutionalist" interpretation of path-dependent policy change (Hall, 1997).

6. The Netherlands

Together with France and Great Britain, the Netherlands was among the first European countries to institute a formal performance model system in the mid-1980s. The original approach combined self-evaluation with peer review by visiting expert committees. The focus was the program, rather than the institution. The state strongly advocated performance indicators, but these were resisted by universities. The model was refined in the Ministry of Science and Education's 1985 publication Higher Education: Autonomy and Quality, which set out a new coordination relationship between the HE sector and the state (Maassen 1998). More autonomy would be granted, but in exchange for cooperation in the development of a comprehensive system designed to regularly assess university performance. The state would not completely devolve its authority, but would be selective about the arenas of its involvement. As well, the coordination relationship was open to other stakeholders such as employers and local authorities. According to Maassen, the system incorporated a drift towards market-oriented criteria (1998:20). Universities were to develop strategic, performance-based self-knowledge—institutional profiles—and were encouraged to adopt managerial modes of behavior and business principles.

Originally, the state intended the Inspectorate of Higher Education (IHO) to administer the performance model. But through a compromise deal in 1986, the universities and higher professional schools (the Netherlands has a dual system) were able to involve their own representative organizations in the process, and the IHO was bypassed. In practice, two separate systems were developed: one for universities, coordinated by the Association of Cooperating Universities in the Netherlands (VSNU); the other for the higher professional sector, coordinated by the HBO-Council (Maassen, 1998:21-2). Both emphasized the dual performance goals of quality improvement and accountability. The VSNU's pilot project began in 1988 and the full system became operational in 1989.

While adapted from the North American model, the Dutch system differs because it is collectively owned by the institutions. Largely because of this, over time, the emphasis has shifted from the accountability end of the spectrum towards the improvement end. As well, evaluation results do not feed into the policy or funding process; there are no political consequences. It is felt that direct links would lead to strategic behaviour and tend to undermine the improvement process (Maassen, 1998:25). This creates something of a dilemma, since real incentives are lacking, yet if incentives were introduced, power games would prevail. According to Maassen, the Ministry's response has been to abstain from short-term interventions, but with the threat of medium- to long-term consequences in the absence of results. Thus the IHO plays a meta-evaluative, monitoring role. So far, the trust invested in institutions appears not to have been misplaced. Faculties and departments seem to take their responsibilities under the system seriously.

But, in the absence of incentives, what does "taking responsibilities seriously" mean? Has the low-key approach to performance produced any real change? A study of Dutch higher education by Frederiks & Westerheijden (1994) concluded that the quality of teaching is receiving considerably more attention than before the reforms.
Many programs and faculties now have "special committees or specially appointed staff members for the quality management of education" and the topic "has certainly gained an important place on the agenda of [university] decision makers" (1994:200). As well, in contrast to the former singular focus on pedagogy, the input and output characteristics of education—informing potential students, and investigating the labour market prospects for graduates—are now receiving attention. Frederiks & Westerheijden suggest that a "quality culture" is emerging in Dutch higher education.

In terms of responses to self-evaluations and the recommendations of visiting peer-review committees, the authors find that while measures are taken to address outstanding issues, the relation between taking measures and observing improvement is obscure. There is no evidence that "the large amount of resources invested leads immediately to an equally large improvement in the quality of education" (ibid.). Nevertheless, the authors find a surprisingly high level of satisfaction with the Dutch performance model. Surprising for two reasons: the traditional reluctance of autonomous organizations to submit to external scrutiny, and the heavy administrative burden involved in constructing an adequate self-evaluation.

Despite generally high levels of satisfaction, however, Maassen forecasts change. Specifically, this relates to Holland's role in the EU, and the general harmonization of HE under EU rules. Some type of accreditation approach may well replace the peer review system in the coming decade.

V. Summary and Conclusions

The politics of performance is deeply embedded in the "evaluative state" and the trend to performance measurement is unlikely to be reversed. Indeed, with the normalization of performance expectations and the broadening of knowledge missions beyond teaching and research, accountability and performance criteria are likely to become ever more complex and embedded. Gibbons predicts "new bench-marking methodologies and the production of a range of bench-marking studies right across the higher education sector" and the use of quality indicators to rank universities "by region, by country and even globally" (1998:50).

With the globalization of performance in prospect, our study shows deep flaws in the conceptualization, measurement criteria, and impacts of these models (see Appendix for more details). At the technical level, for example, we report lack of clarity in definitions of what constitutes "good performance," and absence of agreement on the adequacy of specific indicators. At the broad system level, we identify increasing differentiation and stratification as universities are defined by their performance rankings as "good," "bad," or "indifferent" performers, and as either "research" or "teaching" institutions. Increasingly, teaching and research are being defined as measurable products rather than processes of learning or enquiry. The proliferation of buffer bodies to mediate compliance with performance models was a feature of all systems studied.

In terms of institutional effects, we find a performance-linked focus on missions and visions that promotes increased efficiency and calls for more effective, centralized management. Funding is increasingly linked to performance on various measures, variously defined, few of which account for traditional moral or social imperatives. A consistent complaint is the amount of time and expense involved in conforming to proliferating compliance requirements. Individual departments and faculty members report erosion of disciplinary boundaries and decline of collegiality, as well as polarization between departments and the locus of administrative control. Throughout, we find a strong consensus that the costs of compliance with performance regimes far outweigh the benefits.


Our review of the experience of different states and institutions raises a number of empirical questions deserving of further study. Is there any evidence that performance-based funding will actually improve institutional performance in the long run? Is the money allocated in these programs a large enough incentive for participation, or is the implied threat of greater state intervention and the loss of autonomy sufficient motivation? Does compliance indicate agreement with the concept and process? Are the ways states deal with non-compliance effective? Do attempts to meet general, institution-level performance measures create goal dissonance and other difficulties at different internal levels? To what extent is the increased demand for detailed reporting an additional burden? Will institutions engage in aggressive competition in attempts to demonstrate compliance? If funding is at stake, is there a possibility that quality of education will be sacrificed in the rush to meet external standards and access additional funds?

Only longitudinal empirical research can answer questions like these, and determine whether performance models have enduring value for the conduct of higher education. Further study is clearly needed. Given the evidence to date, there seems to be no "ideal" model or mix. However, if one country stands out, it is the Netherlands. Of those national systems reviewed here, the Dutch seem to have mastered the positive aspects of performance models while avoiding many of the more negative consequences. This is the reason, no doubt, that many countries in Continental Europe follow a "softer" Dutch-style model, involving qualitative measures and far less prominence for performance indicators than in the UK and US. States, territories, and provinces that have yet to implement these models might want to consider the contrasting understandings of "performance" in the European and Anglo-Saxon systems, and review relative strengths and weaknesses, before committing resources.

In conclusion, few would argue against the ethic of accountability that animates performance models, nor would they disagree that what performance models measure is important. But the "fatal flaw" of performance models is that they reduce performance to what is measurable, when so much of importance is not. Because performance models focus on instrumental and utilitarian concerns, the fear is that the intrinsic value of education may be lost.

As it becomes more accountable in a "knowledge society," can the university survive in its traditional form? Survival may depend on a much broader definition of accountability, according to Delanty (1999); one that encompasses public and civic commitment. The best way to guarantee the future of the university, he says, is to reposition it at the heart of the public sphere, "establish[ing] strong links with the public culture, providing the public with enlightenment about the mechanisms of power and seeking alternative forms of social organization." Further, with university knowledge becoming such a central social, economic and political resource, why be "a tool of the state and market forces"? Why not, instead, become an agent of social and political change? (ibid.)
The central task, we would argue, is to embrace a social mission, banish lingering elitism, and advance the democratization of knowledge.

Appendix: Summary of issues and impacts of performance models internationally

In the tables below, we itemize the consequences, impacts, and issues attached to the performance models we reviewed. As this article makes clear, some of these effects are more pronounced in Anglo-Saxon systems, others in European systems. We do not differentiate among the systems, nor do we make a determination whether the consequences are good, bad, or indifferent, since these are open to interpretation and will be conditioned by the reader. We have organized the effects into five categories: (i) overall system-level effects; (ii) technical performance issues; (iii) institutional effects and management issues; (iv) impacts on teaching and research; and (v) impacts on faculty and academic departments. Clearly, many of the effects "spill over" into other categories and may even appear mutually contradictory. It is worth reiterating that, whatever the commonalities, legacies count. Whether cultural, institutional, national, or ideological, the differences between systems are as great as the convergence among them. Finally, the classification scheme is both provisional and heuristic and should not be read otherwise. No attempt is made to rank-order the effects or to exhaustively reproduce every element previously discussed. We try, instead, to convey generalities.

System-level effects

- possible differentiation of universities into research institutions and teaching institutions
- increased stratification, as rankings differentiate "good," "bad," and "indifferent" performers
- more isomorphism, as valid differences are erased by conformance to a limited number of indicators
- "newcomers" have to compete with established institutions for limited funds
- established institutions have to share "steady state" funding with newcomers
- proliferation of external intermediary bodies to administer performance and quality programs and mandate consequences of noncompliance and "poor performance"
- more "rational" basis for funding decisions, therefore better justifications for HE funding
- bilateral systems unified
- social engineering ambitions
- broad frameworks replace regulation (dejuridification)
- proliferation of stakeholders to be accommodated

Technical performance issues

- lack of agreed-on definitions of what constitutes "good performance" (quality)
- lack of agreement concerning the adequacy of specific performance indicators
- incompatibilities between performance measures, so that maximizing some means underperforming on others
- inability of quantitative measures to capture contextual and institutional differences
- use of dubious proxies of performance
- reduction of complexity
- subjective bias in construction and interpretation of measures
- appearance of "objective" neutrality
- more, and more directly useful, data; revelations about previously unknown aspects of performance
- increased ability to "prove" accountability for public funds
- susceptibility of measures to changing political agendas


Institutional effects and management issues

- increased efficiency and more effective management
- focus on "missions," priorities, and identification of strengths
- growth of non-academic management-support functions with the power to intervene in academic decisions
- funding increasingly linked to performance, on various measures, variously defined
- increased competition, both within and between institutions
- increased surveillance, both internal and external
- centralized, corporate decision making, supported by budgetary and performance-based criteria
- increased time and costs to administer and conform to proliferating compliance requirements
- possibility that short-term gains from compliance will produce "long-term pain"
- possibility that the "short-term pain" of compliance will produce long-term gains
- evidence that universities are becoming more market-like; strategic behaviour to maximize market gains
- evidence that universities are abandoning traditional societal and moral imperatives
- better understandings of institutional missions and new, more dynamic perspectives on the management of institutions
- better responsiveness to the needs of public, political, and other stakeholders
- limited financial incentives

Impacts on teaching and research

- performance defined as measurable product (publications; external research funding; job-ready graduates) rather than process (learning; inquiry)
- separation of research and teaching
- more-rigorous definitions of "active research"
- focus on quantity rather than quality of research
- focus on quantity rather than quality of publications
- devaluation of teaching in some systems, with shift of resources to research
- less time for performing teaching and research due to conforming with compliance procedures
- peer-reviewer "burn-out" as more are called on to participate in assessments and audits
- preference for research with measurable outcomes, within a defined time frame, that carries external funding
- shift in pedagogical emphasis as students demand more "relevance"
- value-for-money approach: students are no longer learners in pursuit of understanding, but customers taking delivery of a commodity
- impact of cost/benefit and cost-recovery constraints on course diversity
- narrow definitions of research performance discourage risk-taking and innovation

Impacts on faculty and academic departments


- erosion of disciplinary boundaries
- decline of collegiality
- individual projects discouraged in favour of "team efforts"
- polarization between faculties/departments and central administration
- detrimental effect of compliance exercises on faculty workloads
- decreased faculty time for students and community service
- increased stress, anxiety, uncertainty, and resentment
- resistance to the measures, although this tends to be passive rather than active
- "Taylorization" of faculty work means more short-term contracts and less security
- loss of autonomy over individual work
- demands for more productivity

Note

The authors wish to acknowledge the financial support of the Humanities and Social Science Federation of Canada, which funded the foundational study for this paper. The authors also thank Professors Kjell Rubenson and Donald Fisher, co-directors of the Policy Centre, for their guidance.

References

Albright, B. N. (1997) Of Carrots, Sticks, and State Budgets. In Accountability and Regulation on Public Higher Education, August 16. Available online.

Albright, B. N. (1998) Performance-based Funding. Network News, 17(1), February.

Aucoin, P. (1995) The New Public Management: Canada in Comparative Perspective. Montreal: The Institute for Research on Public Policy.

Baker, J. (1997) Conflicting Conceptions of Quality: Policy Implications for Tertiary Education. Paper presented at the AIC Tertiary Education in New Zealand Conference, 27 & 28 May 1997.

Banta, T., Rudolph, L. B., Van Dyke, J. & Fisher, H. S. (1996) Performance Funding Comes of Age in Tennessee. Journal of Higher Education, 67(1).

Barry, A., Osborne, T. & Rose, N., eds. (1996) Foucault and Political Reason: Liberalism, Neoliberalism, and Rationalities of Government. Chicago: University of Chicago Press.

Bauer, M. & Kogan, M. (1997) Evaluation Systems in the UK and Sweden: Successes and Difficulties. European Journal of Education, 32(2).

Bjarnason, S. (1998) "Buffer" Organizations in Higher Education: Illustrative Examples in the Commonwealth. Study supported by the Commonwealth Fund for Technical Cooperation. London: Association of Commonwealth Universities. Available online.

Brennan, J. (1999) Evaluation of Higher Education in Europe. In Changing Relationships Between Higher Education and the State, ed. M. Henkel & B. Little, pp. 219-35. London: Jessica Kingsley Publishers.

Broadhead, L. and Howard, S. (1998) "The Art of Punishing": The Research Assessment Exercise and the Ritualisation of Power in Higher Education. Education Policy Analysis Archives, 6(8). (Entire issue.) Available online at http://epaa.asu.edu/epaa/v6n8.html.

Burchell, G., Gordon, C. & Miller, P., eds. (1991) The Foucault Effect: Studies in Governmentality. Chicago: University of Chicago Press.

Burke, J. C. & Serban, A. M. (1998) Funding Public Higher Education for Results: Fad or Trend? Results from the second annual survey, July 1998. State University of New York: The Nelson A. Rockefeller Institute of Government.

Cave, M., Hanney, S. & Kogan, M. (1991) The Use of Performance Indicators in Higher Education: A Critical Analysis of Developing Practice, Second Edition. London: Jessica Kingsley Publishers.

Charih, M. & Daniels, A. (1997) Introduction: Canadian Public Administration at the Crossroads. In New Public Management and Public Administration in Canada, ed. M. Charih & A. Daniels, pp. 13-24. Toronto: The Institute of Public Administration of Canada.

Charih, M. & Rouillard, L. (1997) The New Public Management. In New Public Management and Public Administration in Canada, ed. M. Charih & A. Daniels, pp. 27-45. Toronto: The Institute of Public Administration of Canada.

Codd, J. (1997) Knowledge, Qualifications, and Higher Education: A critical view. In Education Policy in New Zealand: The 1990s and Beyond, ed. K. M. Matthews & M. Olssen. Palmerston North: Dunmore Press.

Cohen, D. (1997) New Zealand prepares for a major shakeup of its higher education system. Chronicle of Higher Education, December 19, A44, International section.

Creech, W. (1998) Students, Quality, and Fairness—Key to Future for Tertiary Education. Minister of Education's Press Statement on the Tertiary Education White Paper, November 18. Wellington: Government of New Zealand.

Currie, J. (1998) Globalization Practices and the Professoriate in Anglo-Pacific and North American Universities. Comparative Education Review, 42(1), February, 15-29.

Currie, J. & Vidovich, L. (1998) Microeconomic Reform through Managerialism in American and Australian Universities. In Universities and Globalization: Critical Perspectives, ed. J. Currie & J. Newson. Thousand Oaks: Sage.

Dale, R. (1997) The State and the Governance of Education: An Analysis of the Restructuring of the State-Education Relationship. In Education: Culture, Economy, and Society, ed. A. Halsey, H. Lauder, P. Brown & A. S. Wells. Oxford: Oxford University Press.

Dearing, Lord (1997) Report of the National Committee of Enquiry into Higher Education. Hayes, Middlesex: NCIHE.


Delanty, G. (1999) The transformation of knowledge and the ethos of the university: Outline of a theory of epistemic shifts. Plenary paper. Re-Organizing Knowledge/Transforming Institutions: The University in the XXI Century, University of Massachusetts-Amherst, September 17-19.

Dill, D. D. (1998) Evaluating the "Evaluative State": Implications for Research in Higher Education. European Journal of Education, 33(3), September, 361-78.

Dominelli, L. & Hoogvelt, A. (1996) Globalization, Contract Government and the Taylorization of Intellectual Labour in Academia. Studies in Political Economy, 49 (Spring), 71-100.

El-Khawas, E. (1998) Strong State Action but Limited Results: Perspectives on University Resistance. European Journal of Education, 33(3), September, 317-31.

El-Khawas, E., DePietro-Jurand, R. & Holm-Nielsen, L. (1998) Quality Assurance in Higher Education: Recent Progress; Challenges Ahead. Preparation supported by the World Bank. UNESCO World Conference on Higher Education, Paris, France, October 5-9, 1998. Available online.

Fisher, D. & Rubenson, K. (1998) The Changing Political Economy: The Private and Public Lives of Canadian Universities. In Universities and Globalization: Critical Perspectives, ed. J. Currie & J. Newson. Thousand Oaks: Sage.

Foucault, M. (1978) Governmentality. In The Foucault Effect: Studies in Governmentality, 1991, ed. G. Burchell, C. Gordon & P. Miller, pp. 87-104. Chicago: University of Chicago Press.

Frederiks, M. & Westerheijden, D. (1994) Effects of Quality Assessment in Dutch Higher Education. European Journal of Education, 29(2), 181-200.

Gibbons, M. (1998) Higher Education Relevance in the 21st Century. Paper prepared for UNESCO World Conference on Higher Education, Paris, France, October 5-9, 1998. Secretary General, Association of Commonwealth Universities, supported by the World Bank. 70 pages.

Hall, P. (1997) The Role of Interests, Institutions, and Ideas in Comparative Political Economy of the Industrialized Nations. In Comparative Politics: Rationality, Culture, and Structure, ed. M. I. Lichbach & A. S. Zuckerman. Cambridge: Cambridge University Press.

Harley, S. & Lowe, P. (1999) Academics Divided: The Research Assessment Exercise and the Academic Labour Process. Paper presented at Critical Perspectives on Accounting, Baruch, CUNY, April 25-27.

Harman, G. (1998) Quality Assurance Mechanisms and their Use as Policy Instruments: Major International Approaches and the Australian Experience since 1993. European Journal of Education, 33(3), September, 331-49.

Harris, J. (1998) Performance Models. Public Productivity & Management Review, 22(2), December, 135-40.


Henkel, M. (1997) Academic Values and the University as a Corporate Enterprise. Higher Education Quarterly, 51(2), April, 134-43.

Henkel, M. & Little, B. (1999) Introduction. In Changing Relationships Between Higher Education and the State, ed. M. Henkel & B. Little, pp. 9-22. London: Jessica Kingsley Publishers.

Hood, C. (1991) A Public Management for All Seasons? Public Administration, 69 (Spring), 3-19.

Hood, C. (1995) The "New Public Management" in the 1980s: Variations on a Theme. Accounting, Organizations and Society, 20(2/3), 93-109.

Hoskin, K. W. & Macve, R. H. (1993) Accounting As Discipline: The Overlooked Supplement. In Knowledges: Historical and Critical Studies in Disciplinarity, ed. E. Messer-Davidov, D. R. Shumway & D. J. Sylvan, pp. 25-53. Charlottesville, VA: University Press of Virginia.

Johnstone, D. B. (1998) The Financing and Management of Higher Education: A Status Report on Worldwide Reforms. Preparation supported by the World Bank. UNESCO World Conference on Higher Education, Paris, France, October 5-9, 1998. Available online.

Kaghan, W. N. (1998) Court and Spark: Studies in Professional University Technology Transfer Management. Diss., Seattle, WA: University of Washington.

Keating, M. (1998) Public Management Reform and Economic and Social Development. Paris: OECD.

Maassen, P. A. M. (1997) Quality in European Higher Education: Recent Trends and their Historical Roots. European Journal of Education, 32(2), 111-25.

Maassen, P. A. M. (1998) Quality Assurance in the Netherlands. In Quality Assurance in Higher Education: An International Perspective, ed. G. H. Gaither, pp. 19-27. San Francisco: Jossey-Bass.

Marginson, S. (1997) Markets in Education. Sydney: Allen and Unwin.

Marginson, S. (1998) The Best of Times and the Worst of Times; Research Managed as a Performance Economy—the Australian Case. Paper presented at the Association for the Study of Higher Education, Miami, FL, November 4-5.

McDaniel, O. C. (1996) The Theoretical and Practical Use of Performance Indicators. Higher Education Management, 8(3), November.

McKeown-Moak, M. P. (1999) Financing Higher Education: An Annual Report from the States. SHEEO. Available online.

McNay, I. (1999) The Paradoxes of Research Assessment and Funding. In Changing Relationships Between Higher Education and the State, ed. M. Henkel & B. Little, pp. 191-218. London: Jessica Kingsley Publishers.

Meek, V. L. & Wood, F. (1998) Higher Education Governance and Management: Australia. Higher Education Policy, 11(2-3), June, 165-81.

Meyer, J. W. and Rowan, B. (1977) Institutionalized Organizations: Formal Structure as Myth and Ceremony. In W. W. Powell and P. DiMaggio (eds.), The New Institutionalism in Organizational Analysis, 1991. Chicago: University of Chicago Press.

Miller, H. D. (1995) The Management of Change in Universities: Universities, State and Economy in Australia, Canada, and the United Kingdom. Buckingham: Open University Press and Society for Research into Higher Education.

Miller, P. (1994) Accounting as a Social and Institutional Practice: An Introduction. In Accounting as a Social and Institutional Practice, ed. A. G. Hopwood & P. Miller, pp. 1-39. Cambridge: Cambridge University Press.

NASBO (1996) State Innovations in Higher Education Finance and Governance. National Association of State Budget Officers Information Brief, vol. 4, no. 1, April. Washington, DC: NASBO.

Neave, G. (1988) On the Cultivation of Quality, Efficiency, and Enterprise: An Overview of Recent Trends in Higher Education in Western Europe 1986-1988. European Journal of Education, 23, 7-23.

Neave, G. (1992) On Bodies Vile and Bodies Beautiful: The Role of "Buffer" Institutions between Universities and State. Higher Education Policy, 5(3).

Neave, G. (1998) The Evaluative State Reconsidered. European Journal of Education, 33(3), September, 265-85.

Network News (1998) State Survey on Performance Measures: 1996-7. Network News: A Quarterly Bulletin of the SHEEO/NCES Communication Network, 17(1), February.

Newson, J. (1992) The Decline of Faculty Influence: Confronting the Effects of the Corporate Agenda. In Fragile Truths: Twenty-Five Years of Sociology and Anthropology in Canada, ed. W. K. Carroll, L. Christiansen-Ruffman, R. F. Currie & D. Harrison, pp. 227-46. Ottawa, ON: Carleton University Press.

Newson, J. (1998) The Corporate-Linked University: From Social Project to Market Force. Canadian Journal of Communication, 23(1), Winter, 107-24.

Nilsson, K. A. & Näslund, H. (1997) Towards a Swedish Evaluation and Quality Assurance System in Higher Education. http://www.evaluat.lu.se/dokument/evaluation.htm. August 15, 1999.

OECD (1995) Governance in Transition: Summary. PUMA Committee, 1990-95. Paris: OECD.

OECD (1999) IMHE: Institutional Experiences of Quality Assessment in Higher Education. For information contact John Brennan. August 14, 1999. Available online.

Peters, M. & Marshall, J. (1996) The Politics of Curriculum: Busnocratic Rationality and Enterprise Culture. Delta, 48(1), 33-46.

PISG (1999) Performance Indicators in Higher Education. First report, Performance Indicators Steering Group, February. London: Higher Education Funding Council for England.

Polster, C. & Newson, J. (1998) Don't Count your Blessings: The Social Accomplishments of Performance Indicators. In Universities and Globalization: Critical Perspectives, ed. J. Currie & J. Newson, pp. 173-91. Thousand Oaks: Sage.

Porter, T. M. (1995) Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton: Princeton University Press.

Powell, W. W. and DiMaggio, P. (1983) The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields. In W. W. Powell and P. DiMaggio (eds.), The New Institutionalism in Organizational Analysis, 1991. Chicago: University of Chicago Press.

Power, M. (1995) The Audit Society: Rituals of Verification. Oxford: Oxford University Press.

Power, M. (1996) Making Things Auditable. Accounting, Organizations and Society, 21(2/3), 289-315.

Roberts, P. (1998) Rereading Lyotard: Knowledge, Commodification and Higher Education. Electronic Journal of Sociology, 3(3). http://www.sociology.org.

Ruppert, S. (1998) Survey Results of State Political and Higher Education Leaders. Available online.

Savoie, D. J. (1995) What is Wrong with the New Public Management? Canadian Public Administration, 38(1), Spring, 112-21.

Scott, P. (1995) The Meanings of Mass Higher Education. Buckingham: Open University Press.

Scott, W. R. (1995) Institutions and Organizations. Thousand Oaks, CA: Sage.

SHEEO (1997) Focus on Performance Measures. Available online.

Smeby, J.-C. & Stensaker, B. (1999) National Quality Assessment Systems in the Nordic Countries: Developing a Balance between External and Internal Needs. Higher Education Policy, 12(1), March, 3-14.

Stanley, E. C. & Patrick, W. J. (1998) Quality Assurance in American and British Higher Education: A Comparison. In Quality Assurance in Higher Education: An International Perspective, ed. G. H. Gaither. San Francisco: Jossey-Bass.


Strange, S. (1996) The Retreat of the State: The Diffusion of Power in the World Economy. Cambridge: Cambridge University Press.

Taylor, T., Gough, J., Bundrock, V. & Winter, R. (1998) A Bleak Outlook: Academic Staff Perceptions of Changes in Core Activities in Australian Higher Education. Studies in Higher Education, 23(3), October, 255-69.

Trow, M. (1998) American Perspectives on British Higher Education under Thatcher and Major. Oxford Review of Education, 24(1), March, 111-30.

Wahlén, S. (1998) Is there a Scandinavian Model of Higher Education? Higher Education Management, 10(3).

Watts, R. (1992) Universities and Public Policy. In Public Purse, Public Purpose: Autonomy and Accountability in the Groves of Academe, ed. J. Cutt & R. Dobell, pp. 75-91. Ottawa, ON: Institute for Research on Public Policy and Canadian Comprehensive Auditing Foundation.

Williams, G. (1998) Misleading, Unscientific, and Unjust: The United Kingdom's Research Assessment Exercise. BMJ: British Medical Journal, 316(7137), 4 April, 1079-83.

Woodhouse, D. (1996) Quality Assurance: International Trends, Preoccupations, and Features. Assessment and Evaluation in Higher Education, 21(4), December, 347-57.

About the Authors

Janet Atkinson-Grosjean
Centre for Policy Studies in Higher Education & Training
2125 Main Mall
University of British Columbia
Vancouver, BC, Canada, V6T 1Z4
604-377-8155
Email: janetat@interchange.ubc.ca

Janet Atkinson-Grosjean is conducting interdisciplinary research at the intersection of science and innovation policy, higher education policy, science studies, and institutional and organizational sociology. Her dissertation examines the status of "post-academic" science at the public/private divide, where intellectual property rights move publicly funded discoveries into the private domain. Her fieldwork concerns Canada's Network of Centres of Excellence program. Other research interests include accountability, performance, and governance in higher education, and the sociology of the professions. She holds a doctoral fellowship from the Social Sciences and Humanities Research Council of Canada and has presented her work at national and international venues.

Garnet Grosjean
Research Coordinator


Centre for Policy Studies in Higher Education and Training

Garnet Grosjean's research draws on the policy and practice of adult and higher education to study co-operative education as a bridge between the academy and the workplace. The central focus is the effect of learning context on university students' conceptions of learning and work. Other research interests include governance and performance of higher education, and the changing vocational role of the university. He is a member of Canada's Western Research Network on Education and Training.
