Education Policy Analysis Archives
Volume 3, Number 20
December 22, 1995
ISSN 1068-2341

A peer-reviewed scholarly electronic journal. Editor: Gene V Glass (Glass@ASU.EDU), College of Education, Arizona State University, Tempe, AZ 85287-2411.

Copyright 1995, the EDUCATION POLICY ANALYSIS ARCHIVES. Permission is hereby granted to copy any article provided that EDUCATION POLICY ANALYSIS ARCHIVES is credited and copies are not sold.

Possible Indicators of Research Quality for Colleges and Universities

Ronald H. Nowaczyk
Clemson University

David G. Underwood
Clemson University
udavid@CLEMSON.EDU

Abstract: The move toward more public accountability of institutions of higher education has focused primarily on undergraduate education. Yet many institutions view research as an important component of their mission. Much of the literature on assessing research quality has relied on quantitative measures such as level of outside funding and number of publications generated. Focus groups consisting of research faculty were conducted at a land-grant university. Faculty were asked to evaluate current indicators of research quality as well as to suggest additional measures. While faculty recognized the need for the traditional measures, they cautioned against over-reliance on these indicators. Additional indicators focusing on graduate education as well as on external peer reviews were recommended. Developing indicators that provide evidence of long-term impact on social and scientific advancement was also suggested.

The public desire for accountability of programs at institutions of higher education has contributed greatly to the assessment movement in higher education. Regional accrediting associations (e.g., Middle States, New England, North Central, Northwest, Southern, and Western) expect colleges and universities to demonstrate their institutional effectiveness through an ongoing program of self-examination.
Considerable work in the area of assessing the undergraduate teaching mission has led to a growing body of literature (e.g., Astin, 1985, 1991; Bogue & Saunders, 1992; Erwin, 1991; Ewell, 1983). Less attention has been directed toward other aspects of an institution's mission, including research. Adding to the body of knowledge by engaging in research and related scholarly activities is an important component of the mission for many institutions, especially those with graduate programs or a substantial level of external research funding. The development of effective assessment programs for the research mission is in its early stages. A good assessment program should include quality indicators that are acceptable to the academic community.

Review of the Literature on Research Productivity

The role of research in a university environment must not be underestimated. As Dill (1986) points out, the vast majority of discoveries are made in a higher education environment. In recent years the emphasis for research universities seems to be focused on productivity. The system used by the Carnegie Foundation (1987) to classify colleges and universities relies on the dollars generated by research and the number of Ph.D. degrees granted as the major criteria for sorting institutions into categories. Additionally, the National Science Foundation annually ranks institutions based on Total and Federally Financed Research and Development Expenditures. In each of these cases, "more" is interpreted to mean "better." The research produced by those ranked higher is considered to be of better quality than that produced by those who are ranked lower.

The issue of "quality" in research has been a topic of great debate on university campuses, but it has produced very little in the way of literature that identifies the meaning of the term as it applies specifically to research.
Several indicators have been identified as being useful in determining the quality of research programs. Among them are the number of dollars generated, number of publications, number of citations, and peer review (Kogan, 1989).

Productivity: Dollars. Snyder et al. (1991) focused on aspects of strategic management to determine factors that were equated with research excellence. They selected a sample of institutions from the top 100 in the previously mentioned National Science Foundation rankings. The implication is that if an institution is ranked highly, then excellence in research is the reason. Their finding that the number of dollars generated by research was the most often cited measurement of success should come as no surprise. They also found that while most of those surveyed could identify factors, such as dollars, which were used to measure the success of the research program, "it was not clear whether or not these factors were selected consciously as factors necessary for the attainment of objectives, or because they were the easiest factors to measure" (Snyder et al., 1991, p. 52). Directly related is their finding that "Those universities that are ranked higher, have faculty that are adept at obtaining research grants" (p. 55). The emphasis on dollars is further illustrated by Archambault (1989), who identifies the need for quality in research but points out that, in addition, it must be "profitable."

Productivity: Publications. With the emphasis on productivity, the number of publications is frequently used as an indicator of quality in research. The fact that the research is published is taken as an indication of its quality. This indicator is often further categorized and weighted by identifying the type of publication (book or research article) and, if it is an article, the type of journal (refereed or non-refereed).
Although this is a good indicator of how prolific the researcher or the department is in producing acceptable articles, it does not address the impact of those articles. A study conducted by Moed et al. (1989) attempted to make a distinction between what they saw as "output" (the number of publications) and the "impact" of those publications. The impact was determined by checking citations of the articles over a period of years. Their determination was that one should use caution in adopting such indicators because "Citation practices appear to differ significantly from field to field" and "Citation practices within fields
can also change during the decade" (p. 190).

Peer Evaluation. Peer evaluations of research and research programs are often cited as a method to ensure the quality of the research effort. The objective of such a review is to assemble a group of peers to review the research efforts and make a determination of the quality of those efforts. Studies conducted by Henkel (1989) list several of the concerns related to the peer review process which are often heard on a university campus. One concern focuses on the idea that "group judgements of scientific quality were thought to be insufficiently acknowledged" (p. 179). The implication here is that the visiting group does not fully understand the work of the unit or individual being reviewed. These concerns are even greater when the work is multidisciplinary or covers a wide span of interests. An additional concern, also noted by Henkel (1989), is that "Scientists do not feel they belong to a republic of equal citizens" (p. 179). This perception does not readily lend itself to being reviewed. The more diverse the span of focus, the greater this concern becomes.

These concerns associated with the normal methods of determining quality in research, coupled with the pressures from our regional accrediting body, the Southern Association of Colleges and Schools (SACS), and our state Commission on Higher Education (CHE), require a closer look at how quality of research is determined. Since most of the concerns associated with how research quality is determined originate with faculty, our intent is to use those faculty, through focus groups, to identify their concerns and develop other indicators which might be more appropriate.

Assessment of Research Programs at Clemson University

Clemson University is a state-assisted, coeducational land grant university located in the northwest corner of South Carolina. The Fall 1992 enrollment included 13,197 undergraduates and 4,263 graduate students.
The University has nine colleges and awards bachelor's degrees in 72 programs, master's degrees in 68 programs, and doctoral degrees in 34 programs. Historically, its mission has emphasized the agricultural and engineering programs.

In response to SACS requirements and state requirements on assessment, Clemson requires each of its academic departments with a research mission to have an action plan in place to evaluate its progress in the area of research and related scholarly activities. The action plan is a two- to three-page document outlining the departmental mission and expected results in the area of research. Each department is expected to include outcome measures that would demonstrate the success of the department in meeting its research goals. Narrative reports based on the plan are expected every three years.

These action plans were submitted during the 1990-91 academic year. It was clear at that time that faculty were struggling with the process of developing indicators to assess the progress of the department regarding its research mission. A variety of indicators were proposed to assess research. These indicators were grouped into one of four categories: productivity, departmental commitment to research, faculty participation, and awards and expenditures. Table 1 lists the percentage of departments listing indicators from these categories.

Table 1
Percent of Clemson University Departments Listing Indicators from Four Categories of Research Activities

Category or Indicator                    Percent of Departments
Productivity Measures                    100
  (Journal publications)                 100
  (Conference presentations)             78
Authored or edited books                 27
Faculty Participation in Research        41
Departmental Commitment to Research      30
Awards & Expenditures                    35

Productivity indicators include journal publications, conference presentations, artistic exhibits and performances, and authored or edited books. All departments included one or more indicators from this category. Fewer departments listed indicators in the other three categories. Departmental Commitment focused on the level of departmental resources devoted to research. These indicators included the level of faculty release time for research, the level of financial resources devoted to research (e.g., equipment expenditures, space allocation, computer time), the number of graduate students in the program, and the level of faculty involvement in supervising graduate research. These indicators are clearly more process than outcome oriented. Faculty Participation, which was listed by more than a third of the departments, included the number of proposals submitted by faculty, the level of faculty involvement in professional organizations, and the number of faculty on sabbatical or involved in an exchange program. The category of Awards and Expenditures included the level of sponsored research from grants and contracts, the level of funding for graduate students, and the number of faculty receiving professional awards.

While a number of these indicators are consistent with those cited in the literature on academic research, we sensed a frustration on the part of our faculty with the adequacy of these indicators to represent completely the research activities at Clemson. At the same time, there was a concern voiced by the faculty that the University might opt for the easiest method of reporting research activity by solely providing quantitative measures that do not necessarily reflect research quality.
The number of research publications and the dollar level of external funding were most often mentioned. In fact, the first report to the state focused on quantitative measures that were readily available. In the most recent report the University included a measure of student participation in research, both sponsored and unsponsored.

Given the uneasiness faculty were expressing with these action plans in research, the Office of Assessment and the University Assessment Committee endorsed this project, which was designed to elicit faculty input at a broader level regarding assessment of research. Rather than developing and distributing a campus-wide questionnaire, we decided to bring groups of faculty together for discussion on assessment of research. Focus groups were conducted to answer questions faculty may have about the assessment program itself as well as to have the faculty brainstorm about additional indicators that might be developed and used to assess research at Clemson.

Method

Focus Groups

Four focus groups were conducted during the Spring and Fall of 1992. Groups consisted of 6 to 10 faculty members along with the two authors, who served as facilitators. Faculty elected to the University Faculty Senate who served on the Senate's Research Committee participated in one focus group. Collegiate deans nominated two individuals from their colleges, who were respected by their peers as researchers, to serve in the other focus groups. Eighteen of the 20 faculty nominated attended one of the groups. An additional faculty member who serves on the
University Assessment Committee also participated.

Procedure

Each focus group lasted approximately one hour. Following an introduction of all participants, the authors described the purpose of the focus groups, which was to solicit opinions regarding evaluation of research at a departmental level. The focus on departmental productivity rather than individual productivity was stressed. Then a series of four questions were presented, one at a time. The questions were:

1. Currently, how would your department head report the overall quality of research in your department?
2. Where do you feel faculty within your department would feel dissatisfied with that type of report?
3. What could be offered as indicators to improve that type of report?
4. What do you see as the distinguishing characteristics of quality research within your discipline?

Each question was shown on an overhead projector, and comments were recorded on the overhead as the discussion proceeded. One of the authors also took notes during the sessions. The results are based on a compilation of both the comments on the overheads and the notes that were recorded.

Results

The first two questions asked the faculty to describe how quality of research would currently be reported in their department, and with what aspects of that report they feel faculty would express dissatisfaction. There was considerable consistency among the groups in terms of their responses. Table 2 lists the primary indicators of quality research that faculty feel their departments are currently using. As one would expect at a research university, publications and grantsmanship were the two main areas of focus. Within each area, however, the faculty made a number of distinctions and expressed reservations they held concerning the misinterpretation of or over-reliance on various indicators.

Table 2
Frequently Mentioned Quality Indicators Currently Used by Departments for Reporting Research Activities

Research Publications:
Number of publications (in some cases, ratio per faculty member)
Types of publications (journal articles, monographs, chapters, books)
Quality of publication
Reputation of publication in discipline
Distribution of publication (e.g., regional, national, international)
Refereed vs. non-refereed journals
Invited chapters or papers
Citation statistics of research publications (number of citations as well as who is citing the work and the frequency of citation)

Research Grants & External Funding:
Number of grants submitted and funded
Competitiveness of grant process
Reputation of granting agency (e.g., NSF)
Total dollars generated
Granting agency response to grant reports (i.e., satisfaction with work)
Success rate of grant renewals

Other Indicators:
Papers presented at conferences and professional meetings
Number of papers presented
Quality of conference
External research awards, fellowships, and recognitions of faculty
External and peer reviews of research programs
Creative and scientific research "products" (e.g., art exhibitions, patents, new research applications and methods)
Amount and quality of interdisciplinary research, including collaboration as well as consultative support
Proportion of graduate students who complete the terminal degree, as well as level of graduate funding for research
Number of completed research projects
Customer satisfaction with research product

A concern expressed with using research publications as an indicator of research quality was the temptation to overemphasize the number of publications generated. Faculty perceive that developing, and striving to maintain (or exceed), a departmental publication:faculty ratio could lead to a reduction in the quality of research programs as emphasis shifts to increasing the number of manuscripts submitted. Measuring quality, not quantity, was a major recommendation from the faculty. They also felt that the ranking of the quality of journals could be somewhat subjective.
Some faculty also felt there needed to be a place for scholarly research publications that are not in refereed journals. Lastly, they felt there could be difficulty in judging the quality of interdisciplinary and multidisciplinary research. Determining the contributions to one's discipline was perceived as more difficult when the research includes investigators and studies from multiple disciplines.

The highest level of concern was expressed regarding the use of funding indicators. While the faculty agreed funding of research was an appropriate indicator, the temptation to use funding as the primary
indicator of research quality needed to be avoided. The fact that funding is related not only to the quality of the research proposal and the investigators but also to the topic, the existence of a graduate program, and the dollars available from the agency makes the use of funding as a sole indicator suspect. Faculty were also concerned that research that generates indirect costs for the institution would be perceived as higher quality research when that is not necessarily the case.

The listing of other quality indicators is in response to the participants' opinions that judging research quality is more than counting the number of publications or the amount of funding generated. Faculty expressed a desire to avoid using easily accessible quantitative indicators as the only measures of research quality. They sense that overuse of quantitative measures may be occurring now. They also felt additional indicators were needed as research becomes more of a collaborative effort with investigators from different disciplines.

When the third question was presented, faculty offered several new indicators of research quality. The more frequently mentioned indicators are shown in Table 3. A number of responses focused on graduate education. A close relationship between graduate education and research and scholarly activity is assumed in research universities offering terminal degrees. Within each focus group, faculty recommended quality indicators that are based on the progress of graduate students while they are at Clemson as well as where they go and what they do once they complete their graduate education. Graduate students coauthoring papers and making presentations are viewed as positive indicators of research quality. Placement of graduate students into appropriate career positions at prestigious institutions was also cited as an important indicator of research quality.
Table 3
Additional Quality Indicators of Research Productivity Recommended by the Focus Groups

Indicators Related to Graduate Education:
Graduate student placement:
Quality of institutions that hire graduate students
Proportion of graduate students receiving graduate degrees
Salary information and job titles of graduate students after completing the program
Graduate student participation in research:
Coauthorship on publications
Paper presentations at conferences or professional meetings
External graduate awards for research

External Reviews:
External evaluations by professional organizations
Granting agency editor's review of program
External peer reviews of research program

Other Indicators:
Evidence of research having a societal benefit or impact
Evidence of long-term peer use of research findings
Undergraduate involvement in research
Departmental faculty serving as peer reviewers or journal editors
Election of departmental faculty to reputable positions due to recognition of scholarly activities
Evidence of ongoing, sustained research programs by individual departmental faculty
Adopting a renewable tenure system

Many of the faculty felt comfortable with some type of external review of the research program if it were conducted by peers or professional organizations within the discipline. The exact form of such a review varied from sending representative publications to peers for their evaluation to inviting a team of peers to campus for a review. Most faculty felt such reviews should be done regularly, every several years.

Among the other indicators were several that could be viewed as long-term outcome measures. These include evidence that the research has had a benefit or application in society or that the research has had a long-term impact in the discipline. The activities of departmental faculty as reviewers of grants or editors of journals were considered a positive indicator of research quality, as was faculty being elected to positions within professional organizations. One faculty member suggested that the concept of renewable tenure (and faculty being able to secure a renewed term of tenure) would be evidence of a quality research program.

The fourth question asked faculty to identify the distinguishing characteristics of quality research programs within their discipline. Many of the responses included items listed in Table 2 that were given in response to Question 3. Quality research programs were associated with individuals within the programs. Reference was made to departments that have "stars," top faculty, and well-rounded and well-respected individuals. These individuals were viewed as having established a reputation for consistently high quality research. Their publications are often found in the top journals within the discipline. These individuals are also sought by others for collaborative work, whether it be with other academic institutions or industry.

The quality of the graduate programs at these institutions was also viewed as an important component of high quality.
They attract the best undergraduates into their programs (selective admission), involve graduate students in research, and produce graduate students who establish a reputation for themselves within the discipline. Some faculty felt that the top programs were associated with considerable support in terms of funded faculty chairs, lower teaching loads, and superior facilities and equipment. However, in at least one focus group, several faculty felt that in their discipline some of the top programs are not characterized by such levels of support.

Discussion

The process of conducting focus groups with faculty validated the use of many of the current quality indicators and also provided some additional indicators for departmental consideration. Faculty in the focus groups mentioned most of the indicators that have been listed in departmental assessment plans. A comparison of Table 1, which is based on the assessment plans, with Table 2, which lists the indicators frequently cited by faculty, reveals a high degree of similarity.

Faculty believe that two of the major areas of research quality are publication of research findings and grantsmanship. Within each, however, they were careful to note that multiple measures exist, some of which may not apply across all disciplines. This is a point noted by Moed et al. (1989). The quality of the publication was an important consideration for faculty. Quality might be defined in a variety of ways, including the reputation of the journal within the discipline as well as whether it is refereed or not. The faculty also emphasized the impact of the publications. Citations in other work and evidence that previous research has had a significant impact in the discipline were given as possible indicators. This finding is also consistent with the report by Moed et al. Our faculty also noted some concerns about simply counting citations.
They emphasized the value of research that is cited over an extended period of time.

The discussion on grantsmanship centered more on the grants themselves rather than the level of external funding. Faculty attached more significance to the competitiveness of the grant, the reputation and prestige of the granting agency, and the outcome of the grant rather than the dollars required to fund the grant.

A major concern of the faculty was the temptation to emphasize the easily quantifiable measures associated with these two areas. This concern echoes comments in Snyder et al. (1991). Relying on simple statistics such as the number of publications written per faculty member or the number of dollars
generated per faculty member might distort the true research quality within the department. While the faculty recognized the importance of including measures involving dissemination of research findings and levels of external support, they felt other indicators were also appropriate.

The faculty suggested the inclusion of indicators based on graduate education as well as possible external peer reviews. The importance of success in the graduate program as a quality indicator of research surfaced twice. The faculty included measures of graduate student success when describing distinguishing characteristics of top research programs within their discipline. They also listed specific indicators involving graduate education that they felt should be part of their departmental plans. Most of the discussion centered on the accomplishments of their graduate students while in the program and after they receive their terminal degree. Students from prestigious institutions were likely to show participation in research publications and presentations and would be more likely to acquire attractive academic and industry positions upon graduation.

The faculty seemed amenable to external reviews as long as they were conducted by peers within their discipline. Faculty felt the best evaluators would be individuals familiar with the research process within their particular disciplines. In some instances, they felt an external review process already exists when competing for external grants.

Faculty proposals for possible inclusion of indicators based on graduate education and external reviews are among the major findings of this study. The recognition by faculty that education and research are closely related argues somewhat against the popular view that research activities detract from teaching. While the focus here has been on graduate education, several faculty did mention the importance of undergraduate participation in research when appropriate.
Including measures of graduate education when describing research activities is one way institutions can educate the general public on the relationship of research and education. Students have noted that they value contacts with faculty outside of the traditional classroom environment (Light, 1990). Including indicators of graduate (and undergraduate) student involvement when assessing the quality of the research program would be a statement of the importance the department places on education as part of the research process.

Faculty concern about external reviews as part of the assessment process is not new (e.g., Henkel, 1989). Within the research endeavor, however, external reviews are a common and expected component. Faculty submit their publications and grant proposals for external reviews. It appears that they would also value external reviews of departmental programs, as long as they are conducted by knowledgeable individuals. In most instances, they feel comfortable with reviews by peers or professional organizations within their discipline.

We were pleased with this process for several reasons. First, the use of focus groups enabled the Office of Assessment to develop a list of indicators of research quality that will be provided to departments. The list is intended to aid departments as they revise their assessment plans. None of the indicators will be required. Instead, the faculty will be encouraged to review the list to identify those that they feel would best measure their progress toward the departmental research mission. The institution as well as faculty in the focus groups recognized the fact that departments vary in terms of their maturity as research units and their research goals, and that not all indicators are appropriate for all departments.

Second, the use of focus groups allowed for more detailed input on some indicators than would have been possible with the use of a questionnaire.
Faculty were able to express their views and outline their concerns fairly easily. Consensus on many of the indicators was obvious. Identifying acceptance of some indicators would have been more difficult if a written questionnaire had been used.

Third, the discussion during the focus groups outlined areas where the institution needs to improve communication about the assessment process itself and also identified areas for further study. We found that faculty had some difficulty separating evaluation of an individual's research accomplishments from the assessment of a department's success in meeting its research goals. On several occasions, discussions shifted from reporting departmental progress to measuring the individual faculty member's ability to meet tenure and promotion requirements. The institution will need to continue to explain the purpose of program assessment to the faculty.

We also may have identified a potential problem in assessing research quality as multidisciplinary
research increases. Many of the current and proposed indicators rely on evaluation by knowledgeable peers. Consistent with Henkel's (1989) findings, faculty involved in multidisciplinary research recognized the difficulty in using some of these indicators when judging research that does not fit neatly into one discipline. Identifying "peers" may be difficult for some research projects. While faculty voiced these concerns, no solution was apparent. The short-term solution seems to involve identifying individuals with the breadth to evaluate particular multidisciplinary efforts. As multidisciplinary work increases, this solution may work in that faculty will be able to identify peers who possess the requisite knowledge and expertise for such evaluations.

Lastly, we were pleasantly surprised as the discussions within the focus groups developed. The participants appeared to enjoy the opportunity to interact with colleagues from different disciplines across the campus. The composition of the focus groups was intentionally heterogeneous. One group, for instance, included faculty from engineering, visual arts, accounting, nursing, chemistry, and the library. The faculty were very accepting of the variety of research and scholarly endeavors represented by various disciplines. Faculty recognized that the wide variety of such activities precluded the adoption of a standard set of indicators across the institution.
References

Astin, A. (1985). Achieving educational excellence: A critical assessment of priorities and practices in higher education. San Francisco: Jossey-Bass.

Astin, A. (1991). Assessment for excellence. New York: Macmillan.

Bogue, E. G., & Saunders, R. L. (1992). The evidence for quality. San Francisco: Jossey-Bass.

The Carnegie Foundation for the Advancement of Teaching. (1987). A classification of institutions of higher education (Technical Report). Princeton, NJ.

Dill, D. D. (1986). Research as a scholarly activity: Context and culture. In J. W. Creswell (Ed.), New Directions for Institutional Research, 50 (pp. 7-23). San Francisco: Jossey-Bass.

Erwin, T. (1991). Assessing student learning and development. San Francisco: Jossey-Bass.

Ewell, P. T. (1983). Information on student outcomes: How to get it and how to use it. Boulder, Colorado: National Center for Higher Education Management Systems.

Henkel, M. (1989). Excellence versus relevance: The evaluation of research. In M. Kogan (Ed.), Evaluating Higher Education (pp. 173-182). London: Jessica Kingsley Publishers Ltd.

Kogan, M. (Ed.). (1989). Evaluating Higher Education. London: Jessica Kingsley Publishers Ltd.

Light, R. J. (1990). The Harvard assessment seminars: First report. Explorations with students and faculty about teaching, learning, and student life. (Available from R. J. Light, Harvard Graduate School of Education, Larsen Hall, Cambridge, MA 02138.)

Moed, H. F., Burger, W. J. M., Frankfort, J. G., & van Raan, A. F. G. (1989). The use of bibliometric data as tools for university research. In M. Kogan (Ed.), Evaluating Higher Education (pp. 183-192). London: Jessica Kingsley Publishers Ltd.

Copyright 1995 by the Education Policy Analysis Archives

EPAA can be accessed either by visiting one of its several archived forms or by subscribing to the LISTSERV known as EPAA at LISTSERV@asu.edu.
(To subscribe, send an e-mail letter to LISTSERV@asu.edu whose sole contents are SUB EPAA your-name.) As articles are published by the Archives, they are sent immediately to the EPAA subscribers and simultaneously archived in three forms. Articles are archived on EPAA as individual files under the name of the author and the volume and article number. For example, the article by Stephen Kemmis in Volume 1, Number 1 of the Archives can be retrieved by sending an e-mail letter to LISTSERV@asu.edu and making the single line in the letter read GET KEMMIS V1N1 F=MAIL. For a table of contents of the entire ARCHIVES, send the following e-mail message to LISTSERV@asu.edu: INDEX EPAA F=MAIL; that is, send an e-mail letter and make its single line read INDEX EPAA F=MAIL.

The World Wide Web address for the Education Policy Analysis Archives is http://seamonkey.ed.asu.edu/epaa

Education Policy Analysis Archives are "gophered" at olam.ed.asu.edu

To receive a publication guide for submitting articles, see the EPAA World Wide Web site or send an e-mail letter to LISTSERV@asu.edu and include the single line GET EPAA PUBGUIDE F=MAIL. It will be sent to you by return e-mail.
General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, Glass@asu.edu, or reach him at College of Education, Arizona State University, Tempe, AZ 85287-2411. (602-965-2692)

Editorial Board

John Covaleski jcovales@nmu.edu
Andrew Coulson firstname.lastname@example.org
Alan Davis email@example.com
Mark E. Fetler mfetler@ctc.ca.gov
Thomas F. Green tfgreen@mailbox.syr.edu
Alison I. Griffith agriffith@edu.yorku.ca
Arlen Gullickson firstname.lastname@example.org
Ernest R. House ernie.email@example.com
Aimee Howley ess016@marshall.wvnet.edu
Craig B. Howley firstname.lastname@example.org
William Hunter hunter@acs.ucalgary.ca
Richard M. Jaeger email@example.com
Benjamin Levin levin@ccu.umanitoba.ca
Thomas Mauhs-Pugh thomas.firstname.lastname@example.org
Dewayne Matthews dm@wiche.edu
Mary P. McKeown iadmpm@asuvm.inre.asu.edu
Les McLean lmclean@oise.on.ca
Susan Bobbitt Nolen sunolen@u.washington.edu
Anne L. Pemberton apembert@pen.k12.va.us
Hugh G. Petrie prohugh@ubvms.cc.buffalo.edu
Richard C. Richardson richard.email@example.com
Anthony G. Rud Jr. firstname.lastname@example.org
Dennis Sayers dmsayers@ucdavis.edu
Jay Scribner jayscrib@tenet.edu
Robert Stonehill rstonehi@inet.ed.gov
Robert T. Stout stout@asu.edu