
Educational policy analysis archives


Material Information

Title: Educational policy analysis archives
Physical Description: Serial
Language: English
Creator: Arizona State University; University of South Florida
Publisher: Arizona State University; University of South Florida
Place of Publication: Tempe, Ariz.; Tampa, Fla.
Publication Date:

Subjects

Subjects / Keywords: Education -- Research -- Periodicals (lcsh)
Genre: non-fiction (marcgt); serial (sobekcm)

Record Information

Source Institution: University of South Florida Library
Holding Location: University of South Florida
Rights Management: All applicable rights reserved by the source institution and holding location.
Resource Identifier: usfldc doi - E11-00137; usfldc handle - e11.137
System ID: SFS0024511:00137


Bibliographic Record

Educational policy analysis archives. Vol. 7, no. 25 (August 25, 1999).
Tempe, Ariz.: Arizona State University; Tampa, Fla.: University of South Florida, August 25, 1999.
Contents: Quality of researchers' searches of the ERIC database / Scott Hertzberg [and] Lawrence Rudner.
Host item: Education Policy Analysis Archives (EPAA), ISSN 1068-2341; volume 7, issue 25 (1999-08-25).
URL: http://digital.lib.usf.edu/?e11.137

Full Text



Education Policy Analysis Archives

Volume 7 Number 25    August 25, 1999    ISSN 1068-2341

A peer-reviewed scholarly electronic journal. Editor: Gene V Glass, College of Education, Arizona State University. Copyright 1999, the EDUCATION POLICY ANALYSIS ARCHIVES. Permission is hereby granted to copy any article if EPAA is credited and copies are not sold. Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education.

The Quality of Researchers' Searches of the ERIC Database

Scott Hertzberg
ERIC Clearinghouse on Assessment and Evaluation
University of Maryland

Lawrence Rudner
ERIC Clearinghouse on Assessment and Evaluation
University of Maryland

Abstract

During the last ten years, end-users of electronic databases have become progressively less dependent on librarians and other intermediaries. This is certainly the case with the Educational Resources Information Center (ERIC) database, a resource once accessed by passing a paper query form to a librarian and now increasingly searched directly by end-users. This article empirically examines the search strategies currently being used by researchers and other groups. College professors and educational researchers appear to be doing a better job searching the database than other ERIC patrons. However, the study suggests that most end-users should be using much better search strategies.

A critical component of conducting almost any kind of research is to examine the literature for both related content and previously employed research methods. By reviewing the related literature, researchers are better able to formulate their research questions, build on past research, and design more effective studies. In the field of education, a usual first step in identifying related literature is to search the more than 950,000 citations included in the Educational Resources Information Center (ERIC) database. With its availability on the Internet and on CD-ROM, the ERIC database is now accessed by a wider, more diverse, and less specialized audience. In May 1999, the ERIC Clearinghouse on Assessment and Evaluation alone had over 3,500 users searching the ERIC database daily. This is quite a change from 10 years ago, when access to the ERIC database was typically restricted to trained reference librarians who had accounts with commercial information service organizations such as Dialog.

The question studied in this paper is the quality of the search strategies of today's end-users. We present effective strategies for searching the ERIC database, a brief summary of the literature on end-user searching, and empirical information on the quality of end-users' searches of the ERIC database installed at the ERIC Clearinghouse on Assessment and Evaluation web site.

Effective Strategies for Searching the ERIC Database

The Educational Resources Information Center is the largest source of educational information in the world. The most well-known and frequently used body of information produced by the ERIC system is the ERIC database, which contains close to one million citations and abstracts reflecting both published and "gray" literature (conference papers, contractor reports, etc.) gathered by the 16 ERIC subject area clearinghouses. For over thirty years, the database has been a widely used and well-known research tool.

The ERIC database can be accessed through various media. Researchers may search the database via Dialog, the Internet, or CD-ROMs produced by several vendors. Although the database is still searchable by way of paper indexes, electronic formats are the concern here because they are largely responsible for the surge in end-user searching.

Some good search practices apply to all electronic versions of the database. One of the most important tactics is the use of Boolean operators (AND, OR, NOT) to refine queries. One-word and one-phrase searches are rarely sufficient. When using Boolean operators, avoid the common mistake of confusing the functions of "AND" and "OR". The query "Portfolios AND Nongraded Evaluation" retrieves only documents containing both descriptors, while a search for "Portfolios OR Nongraded Evaluation" retrieves the set of documents that have either or both of the descriptors.

Another fundamental rule for successful searching is to use all relevant descriptors (ERIC indexing terms). Find all related and narrower terms that apply and link them into the search with the Boolean operator "OR". Using all relevant descriptors increases recall (i.e., comprehensiveness of retrieval) and often reveals useful citations not found when searching with only one or two descriptors. The ERIC database is very well indexed, but it has not been constructed with perfect consistency over the past 30 years. Further, the terms preferred by any individual end-user may not be the same as the terms preferred by the ERIC indexers. For example, ERIC uses "Test Wiseness", "Student Evaluation", and "Disadvantaged Youth". The terms "Test Preparation", "Student Assessment", and "Disadvantaged Students" are not ERIC descriptors. Failing to use the controlled vocabulary terms will result in a search that misses highly relevant documents. A minimal sketch of these Boolean semantics follows.
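To make the AND/OR distinction concrete, the following minimal sketch (ours, not part of the ERIC software) models citations as sets of descriptors; the document identifiers and descriptor sets are invented for illustration:

citations = {
    "ED001": {"Portfolios", "Student Evaluation"},
    "ED002": {"Nongraded Evaluation", "Elementary Education"},
    "ED003": {"Portfolios", "Nongraded Evaluation"},
}

# AND: only documents indexed with every descriptor in the query.
def search_and(descriptors):
    return {doc for doc, terms in citations.items()
            if set(descriptors) <= terms}

# OR: documents indexed with at least one descriptor in the query.
def search_or(descriptors):
    return {doc for doc, terms in citations.items()
            if set(descriptors) & terms}

print(search_and(["Portfolios", "Nongraded Evaluation"]))  # {'ED003'} only
print(search_or(["Portfolios", "Nongraded Evaluation"]))   # all three documents

ORing additional related and narrower descriptors into the query list raises recall in exactly the way described above.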
Because of these gaps between the database's controlled vocabulary and natural language, use of The Thesaurus of ERIC Descriptors (Houston, 1995) is essential to successful searching. The thesaurus, which has been published in paper form since the creation of the database, is now available on many CD-ROM versions of the database and, uniquely, at the website of the ERIC Clearinghouse on Assessment and Evaluation (ERIC/AE).
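As a toy illustration of why the controlled vocabulary matters, the sketch below maps a few natural-language phrases to the ERIC descriptors named above; the dictionary is a hypothetical stand-in for a real thesaurus lookup, which the article recommends doing in print, on CD-ROM, or at the ERIC/AE site:

# Toy stand-in for a thesaurus lookup; the real Thesaurus of ERIC
# Descriptors is far larger and organizes related and narrower terms.
PREFERRED_DESCRIPTOR = {
    "test preparation": "Test Wiseness",
    "student assessment": "Student Evaluation",
    "disadvantaged students": "Disadvantaged Youth",
}

def to_descriptor(term: str) -> str:
    # Fall back to the user's own term when no mapping is known.
    return PREFERRED_DESCRIPTOR.get(term.lower(), term)

for phrase in ["Test Preparation", "Student Assessment", "Portfolios"]:
    print(f"{phrase!r} -> {to_descriptor(phrase)!r}")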

The thesaurus is incorporated in the Search ERIC Wizard, one of the user interfaces for the ERIC/AE's Internet version of the database (http://ericae.net/scripts/ewiz/amain2.asp). The ERIC Wizard interacts with users to indicate whether a search term is an actual ERIC descriptor. If a term entered by a user is not a descriptor, the Wizard suggests alternatives. When the correct descriptor is located, the Wizard displays an array of related and narrower terms. The user may then choose from the first term or the related terms to construct a search of the database.

Hints for Effective Searching

Use Boolean operators (AND, OR, NOT) to craft good queries.
Expand the query by ORing appropriate narrower and related terms.
Use the print or an electronic ERIC thesaurus to find useful descriptors.
Use the Building Block or Pearl Building methods.
Conduct multiple searches.

An added feature of the search engine installed on the ERIC/AE website is a Find Similar link. The Find Similar feature performs a popular search strategy known as Pearl Building. Pearl Building involves constructing new searches around descriptors found in the good results of preliminary searches. The Find Similar link for a particular citation will produce a new set of documents based on the first document's descriptors. This function often retrieves useful documents not found in the first search. You can choose the best documents from the second set of citations and continue to re-circulate the search until you no longer find any new, relevant hits. You may also edit the descriptors of a selected document to search only for the descriptors judged relevant to your needs. A toy sketch of the Pearl Building loop appears below.

Another good technique for organizing a complex search, applicable to all search situations, is the Building Blocks method. On a piece of paper, write out the two or three most essential components of a given question. These are the building blocks of the search. Construct a search by linking the building blocks with what you believe are the correct Boolean operators. If the resultant search is not very successful, expand it by attaching related descriptors to one or more of the building blocks. Continue to add to the building blocks and, if necessary, rearrange the Boolean operators until you achieve satisfactory results. Inherent in this method is the necessity of conducting multiple queries for a given search.
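The Pearl Building loop can be sketched in a few lines under the same toy descriptor-set model used earlier; the stopping rule (no new, relevant hits) is exactly the re-circulation described above:

citations = {
    "ED001": {"Portfolios", "Student Evaluation"},
    "ED002": {"Nongraded Evaluation", "Elementary Education"},
    "ED003": {"Portfolios", "Nongraded Evaluation"},
}

def search_or(descriptors):
    return {doc for doc, terms in citations.items() if terms & descriptors}

def pearl_build(seed):
    # Start from one good "pearl" and re-search on its descriptors,
    # folding each round's new hits back into the query.
    found = {seed}
    while True:
        query = set().union(*(citations[d] for d in found))
        new_hits = search_or(query) - found
        if not new_hits:      # no new, relevant hits: stop re-circulating
            return found
        found |= new_hits

print(pearl_build("ED001"))  # grows from ED001 to ED003, then ED002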

Literature Review

This section summarizes some of the literature on end-user searching, with particular attention to the quality of end-user results, quality of search strategies, time spent on a search, use of thesauri, the frequency of multiple searches, and experience. Since this study is concerned with end-user searching of an electronic database through an Internet interface, both studies of users of on-line databases and studies of users of Internet search engines are relevant. Studies of the first type are quite numerous, as on-line databases have been widely used for over 20 years. Relevant literature on the search behavior of Internet users, on the other hand, is still rather scarce.

Quality of end-user results

There is a large body of literature claiming that most end-users obtain poor results when searching for themselves (Lancaster, Elzy, Zeter, Metzler and Yuen, 1994; Bates and Siegfried, 1993; Tolle and Hah, 1985; Teitelbaum and Sewell, 1986). Lancaster, Elzy, Zeter, Metzler and Yuen, for example, compared faculty and student searches of ERIC on CD-ROM to searches conducted by librarians. They noted that most of the end-users found only a third of the relevant articles found by the librarians.

There are several studies, however, in which end-users were able to search online databases with good results. Sullivan, Borgman and Wippern (1990) compared the searching of 40 doctoral students given minimal training with searches done by 20 librarians. The 40 students were no less satisfied with their searches of ERIC and Inspec than with the results retrieved by the librarians, and, in fact, found their own searches to be more precise. Similarly, the patent attorneys in Vollaro and Hawkins (1986) felt that intermediaries could have done a better job, but were largely satisfied with their own searches. Both studies observed that the end-users still had trouble searching databases. Sullivan, Borgman and Wippern noted that the end-users "made more errors, prepared less well than intermediaries and had less complete results."

There are a few explanations for why some end-users may search more successfully than others. Yang (1997) observed that certain concepts and metaphors used by novice users to construct searches were beneficial to searching. Marchionini, Dwiggins and Katz (1993) suggested that subject expertise helps end-users search more effectively.

Strategies

Several studies have concluded that end-users use poor searching techniques, marked by overly simple statements and limited use of Boolean operators or other commands (Bates and Siegfried, 1993; Tolle and Hah, 1985; Teitelbaum and Sewell, 1986). In their study of 27 humanities scholars, Bates and Siegfried (1993) observed that 63% of the searches contained only one or two terms and 25% included no Boolean operators at all.

Nims and Rich (1998) studied over 1,000 searches conducted on the Search Voyeur webpage hosted by Magellan. The Search Voyeur site allows users to spy on the searches of other users. The researchers found a profusion of poorly constructed searches. Searchers performed one-word searches when more complex queries linked with Boolean operators were necessary. Overall, a mere 13% of the searchers used Boolean operators. The study, which observed how the general public searches the entire World Wide Web, suggests that end-users may have more trouble searching Internet databases than older online databases. End-users of Internet databases may be less familiar with the search protocols and may have higher expectations of the technology's ability to make up for their poor searching techniques.

Time Spent Searching

Looking at the transaction logs of 11,067 search sessions on computers linked to Medline at the National Library of Medicine, Tolle and Hah (1985) found that end-users averaged significantly less time searching than librarians. Patrons in the study averaged 15 minutes of searching per session, while librarians in the control group averaged 20 to 25 minutes.

Use of a Thesaurus

In their study of 41 patent attorneys searching Inspec, Vollaro and Hawkins (1986) observed that the majority of the end-users did not utilize the database's thesaurus. Interviews revealed that most of the subjects did not feel familiar enough with the main functions of the database to effectively use the thesaurus (which they considered an advanced feature).

The study suggests that end-users may be under-utilizing online thesauri, but the subject remains largely unexamined.

Number of Queries

Conducting multiple searches is often essential to successful searching. Yet studies suggest that only around half of all end-users perform more than one search per session (Spink, 1996; Huang, 1992). Spink conducted 100 interviews with academic end-users at Rutgers University and found that only 44% conducted multiple searches per session.

Experience

The most significant factor determining searching success appears to be experience using a database. In a recent study of law school students searching Quicklaw, Yuan (1997) showed that the search repertoires of students became more complex and effective over time. Tolle and Hah (1985) found a correlation between experience and the frequency of multiple searches. Only 8% of the experienced users in the study stopped searching after a failed search, while the rate of stopping was 11% for moderately experienced users and 20% for inexperienced users.

Summary

The quality of end-user searching appears to vary depending on the individual end-user. Some searchers are stronger than others because of skills they bring to searching or gain from using an online database over time. However, the literature suggests that most end-users could be doing better. Even the studies that recorded a high level of end-user satisfaction observed that end-users rely on overly simple searches, make frequent errors, and fail to attain comprehensive results.

Method

For two days in early November 1998, all patrons wanting to search the ERIC database installed at the ERIC/AE website were required to complete a 10-item background questionnaire. For each patron, we then tracked a) the maximum number of OR's in their searches as a measure of search quality, b) the number of queries per session, c) whether they used the thesaurus or free-text search engine, d) the number of hits examined, and e) the amount of time devoted to searching the ERIC database per session. (A small sketch of these per-session measures follows this section.)

Data were collected on 4,086 user sessions. Because some browsers were not set to accept identifiers, we were not always able to relate background data to session information. Accordingly, our analysis is based on the 3,420 users with background and corresponding session information.

Participation in the study was entirely voluntary; patrons could go elsewhere to search the ERIC database. However, our questionnaire was short and our data collection was unobtrusive. Based on the prior week's log, we estimate our retention rate was over 90%.
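A sketch of how such per-session measures might be computed from a transaction log follows; the session records and field names are hypothetical, not the actual ERIC/AE log format:

import re

sessions = [
    {"queries": ["portfolios", "portfolios OR nongraded evaluation"],
     "used_thesaurus": True, "hits_examined": 5, "seconds": 410},
    {"queries": ["classroom management"],
     "used_thesaurus": False, "hits_examined": 1, "seconds": 95},
]

def max_ors(session):
    # Search-quality proxy: the most ORs used in any single query.
    return max(len(re.findall(r"\bOR\b", q, re.IGNORECASE))
               for q in session["queries"])

for s in sessions:
    print(max_ors(s), len(s["queries"]), s["used_thesaurus"],
          s["hits_examined"], s["seconds"])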

Results

We asked our end-users, "What is the primary purpose of your search today?" As shown in Table 1, most patrons were searching in connection with preparing a research report.

Table 1
Purpose of Searching the ERIC Database

Purpose                        N     Percent
Research report preparation    1825  53.4
Class assignment               601   17.6
Professional interest          554   16.2
Lesson planning                177   5.2
Background for policy making   175   5.1
Classroom management           88    2.6
TOTAL                          3420  100.0

Some searching characteristics of the entire sample and of groups of individuals who identified themselves as college librarians, college professors, and researchers are presented in Table 2. College librarians are presumably the most trained and most experienced user group, while college professors and researchers are presumably the most diligent user group.

Most variables were fairly normally distributed. Accordingly, means and standard deviations (std dev) are presented in the table. The amount of time spent searching, however, was quite skewed. Central tendency and variability for time are therefore represented by medians and semi-interquartile ranges (sir); a small computational sketch follows the table.

Table 2
Searching Characteristics for Select User Groups

                         Quality        N queries      Thesaurus  Hits Examined   Time (seconds)
Group              n     Mean  Std dev  Mean  Std dev  Use (%)    Mean  Std dev   Median  sir
College Librarian  96    .91   3.89     2.66  3.26     46.8       3.11  5.41      207     240
Researcher         445   .42   1.26     3.04  3.69     37.6       4.85  10.23     376     408
College Professor  209   .37   1.10     2.49  2.46     44.6       5.58  15.09     361     345
All users          3420  .44   1.77     2.75  2.95     38.7       3.65  8.65      352     351
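Since Table 2 summarizes time with medians and semi-interquartile ranges, here is a short sketch of both statistics (sir = (Q3 - Q1) / 2) computed on made-up, skewed session times:

import statistics

seconds = [95, 120, 207, 352, 410, 900, 2400]   # illustrative, skewed times

q1, _, q3 = statistics.quantiles(seconds, n=4)  # quartile cut points
median = statistics.median(seconds)
sir = (q3 - q1) / 2                             # semi-interquartile range

print(f"median = {median} s, sir = {sir:.1f} s")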

A good search incorporates Boolean operators to capture appropriate terms. As a measure of search quality, we noted the maximum number of OR's used in any query during a patron's session. The data indicate that there is about one OR in every two search sessions. College librarians tend to conduct the most complicated searches, and college professors conducted the simplest searches. To provide an additional perspective on these numbers, we computed the number of OR's used in the 84 pre-packaged search strategies at http://ericae.net/scripts/ewiz/expert.htm. These search strategies were developed by the top reference librarians across the entire ERIC system. The mean number of OR's used in these high-quality, general-purpose searches was 2.9, with a standard deviation of 2.8. Thus, the data show that online users tend to be conducting very simple searches that do not take account of subject matter nuances.

The typical user performs 2 to 3 queries per search session, and there is little variability across groups. In contrast, the reference staff at the ERIC Clearinghouse on Assessment and Evaluation typically conduct 3 to 6 searches when responding to patron inquiries.

Not using the ERIC thesaurus to guide a search is equivalent to guessing which terms are used by the ERIC indexers. Using the thesaurus, one can employ the proper terms in a search. College librarians and college professors use the thesaurus much more often than most users. Yet less than half of the searches at the ERIC/AE site take advantage of this unique feature.

For any given topic in education, there is typically a large number of related papers and resources. To find all the resources that meet their specific purposes, users need to examine a large number of citations. College professors and researchers are much more diligent than other users in examining citations. Further, as noted by the variance, some professors and researchers are looking at a very large number of citations. Still, the average number of citations examined is quite small, typically about 5 or 6 hits for the most diligent groups. It appears that most patrons, especially those who are not trained researchers, are not looking beyond the first page of hits.

The study showed that the median amount of time spent searching the ERIC/AE site is about 6 minutes. College professors and researchers spend slightly more time than the typical user searching for information. College librarians spend considerably less time searching.

At a minimum, we would like to see at least one OR in the query, more than one query, and at least four hits examined. Only 153 (4.5%) of our 3,420 examined users met these criteria.
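Applied to the hypothetical session records sketched earlier, these minimal criteria (at least one OR, more than one query, at least four hits examined) reduce to a simple filter:

import re

sessions = [
    {"queries": ["portfolios", "portfolios OR nongraded evaluation"],
     "hits_examined": 5},
    {"queries": ["classroom management"], "hits_examined": 1},
]

def meets_minimum(session):
    has_or = any(re.search(r"\bOR\b", q, re.IGNORECASE)
                 for q in session["queries"])
    return (has_or and len(session["queries"]) > 1
            and session["hits_examined"] >= 4)

passing = [s for s in sessions if meets_minimum(s)]
print(f"{len(passing)} of {len(sessions)} sessions meet the minimal criteria")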
Discussion

Our findings with regard to Internet searching of the ERIC database are consistent with the broader literature on end-user database searching. Some researchers may be doing a better job than most patrons. Nevertheless, most end-users are conducting few searches, crafting poor searches, not using the thesaurus, and examining only a few potential hits. While there are times an end-user may want to quickly look up something, such as finding a reference, research report preparation usually involves finding a collection of several relevant, high-quality studies. This work cannot be done quickly. Ninety-five percent of the searches we examined do not meet our minimal criteria. From our point of view, these results are very disappointing. Patrons are not using effective search strategies and cannot possibly find the best and most relevant articles in the database being searched.

We have reason to believe that most end-users are satisfied with any somewhat-relevant hit and are not looking for the best citations. After we added the Find Similar option to our search engine, we noted that few end-users were taking advantage of the feature. We posted a short survey for a few hours asking why. The vast majority of users (80%) told us they were able to find what they wanted on the first page of hits. The reality is that, with the default search options, hits are presented in what is basically chronological order. The ranked relevance option does not necessarily present the best quality documents first. Users may be satisfied, but they are not finding the best.

We cannot place enough emphasis on the need to use the Thesaurus of ERIC Descriptors when constructing a search strategy. In addition to the need to include related and narrower terms, the philosophy behind the ERIC Thesaurus and its structure necessitate added diligence on the part of the searcher. The ERIC Thesaurus is designed to reflect the terms used in the professional and scholarly education literature. It is not a strictly hierarchical thesaurus with a rigid set of mutually exclusive term arrays. Thus, the ERIC Thesaurus is populated with terms that partially overlap, and its structure sometimes necessitates variable search strategy design. For example, to find the documents that address the evaluation of instructional methods or activities, one should search "Course Evaluation" OR "Curriculum Evaluation". This is a problem with the social sciences in general, as terms are less well defined, more fluid, and less strictly hierarchical than in the physical sciences.

We occasionally hear frustration from the research community with regard to the ERIC database. The data imply that much of the end-user frustration is due to poor end-user searches. This is not to say that the ERIC database is without its faults. The ERIC system has been essentially level-funded for the past 20 years, and there has been no system-wide examination of ERIC's acquisition and processing efforts in that time. As a result, there are gaps in ERIC coverage. At our own clearinghouse, we have noted that the 39 journals we process for inclusion in the ERIC database produce 1,100 articles per year. Yet, due to our budget, we have usually been limited to entering 700 articles per year. We process few international journals and are slow to add new journals, regardless of their quality or prominence.

We believe there has also been a steady decline in the "gray" literature portion of the ERIC database. Of the approximately 5,500 papers presented at the annual meetings of the American Educational Research Association, for example, only about 1,200 are entered into the ERIC database. Many authors do not have prepared papers, and many that have papers do not respond to solicitation requests. Authors should view ERIC as a reproduction service. We make copies of papers available to others. Inclusion in the ERIC database only means that a paper has met some minimal acceptability criteria; it is not equivalent to peer-reviewed publishing, and it should not preclude an author from submitting the paper to a refereed journal. Accordingly, we do not see any reason an author should not submit a paper to ERIC. In fact, submitting high-quality papers can result in more people seeing the research and more people submitting their papers. Thus, we believe many authors are not assuming their share of the responsibility in building the ERIC resource.
While ERIC database content has its limitations, we believe the lack of end-user search skills is the major impediment to locating the best and most relevant resources. Poorly formed searches and poor search strategies cannot possibly find the best citations. We are encouraged by the conclusions of Sullivan, Borgman and Wippern (1990). With minimal training and a bit of diligence, end-users can attain satisfactory results. It is our hope that readers of this article will follow the suggestions outlined at the beginning of this paper and, concomitantly, increase their chances of finding the best and most relevant documents in the ERIC database.

Note

We wish to thank Dagobert Soergel, Jim Houston, and Ted Brandhorst for their useful suggestions on an earlier version of this paper.

References

Bates, M. J., Siegfried, S. L., and Wilde, D. N. (1993). An Analysis of Search Terminology Used by Humanities Scholars: The Getty Online Searching Project Report No. 1. Library Quarterly, 63(1), 1-39.

Houston, J. (1995). The Thesaurus of ERIC Descriptors (13th ed.). Phoenix, AZ: Oryx Press.

Huang, M. H. (1992). Pausing Behavior of End Users in Online Searching. Unpublished doctoral dissertation, University of Maryland.

Lancaster, F. W., Elzy, C., Zeter, M. J., Metzler, L., and Yuen, M. L. (1994). Comparison of the Results of End-User Searching with Results of Two Searches by Skilled Intermediaries. RQ, 33(3), 370-387.

Marchionini, G., Dwiggins, S., and Katz, A. (1993). Information Seeking in a Full-Text End-User-Oriented Search System: The Roles of Domain and Search Expertise. Library and Information Science Research, 15 (Winter), 35-69.

Nims, M. and Rich, L. (1998, March). How Successfully Do Users Search the Web? College and Research Libraries News, 155-158.

Spink, A. (1996). Multiple Search Sessions: A Model of End-User Behavior. Journal of the American Society for Information Science, 47(3), 603-609.

Sullivan, M. V., Borgman, C. L., and Wippern, D. (1990). End-Users, Mediated Searches, and Front-End Assistance Programs on Dialog: A Comparison of Learning, Performance and Satisfaction. Journal of the American Society for Information Science, 41(1), 27-42.

Teitelbaum, S. and Sewell, W. (1986). Observations of End-User Online Searching Behavior Over Eleven Years. Journal of the American Society for Information Science, 37(7), 234-245.

Tolle, J. E. and Hah, S. (1985). Online Search Patterns. Journal of the American Society for Information Science, 36(3), 82-93.

Vollaro, A. J. and Hawkins, D. T. (1986). End-User Searching in a Large Library Network. Online, 10(7), 67-72.

Yang, S. C. (1997). Qualitative Exploration of Learners' Information Seeking Processes Using the Perseus Hypermedia System. Journal of the American Society for Information Science, 48(7), 667-669.

Yuan, W. (1997). End-User Searching Behavior in Information Retrieval: A Longitudinal Study. Journal of the American Society for Information Science, 48(3), 218-234.

About the Authors

Scott Hertzberg is a Research Assistant at the ERIC Clearinghouse on Assessment and Evaluation, College of Library and Information Services, 1129 Shriver Laboratory, University of Maryland, College Park, Maryland 20742. He specializes in social science information services.

Lawrence Rudner is the Director of the ERIC Clearinghouse on Assessment and Evaluation, College of Library and Information Services, 1129 Shriver Laboratory, University of Maryland, College Park, Maryland 20742. He specializes in assessment and information services. He can be reached at rudner@ericae.net.

Copyright 1999 by the Education Policy Analysis Archives

The World Wide Web address for the Education Policy Analysis Archives is http://epaa.asu.edu

General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass (glass@asu.edu), or reach him at College of Education, Arizona State University, Tempe, AZ 85287-0211 (602-965-9644). The Book Review Editor is Walter E. Shepherd (shepherd@asu.edu). The Commentary Editor is Casey D. Cobb (casey.cobb@unh.edu).

EPAA Editorial Board

Michael W. Apple, University of Wisconsin
Greg Camilli, Rutgers University
John Covaleskie, Northern Michigan University
Andrew Coulson, a_coulson@msn.com
Alan Davis, University of Colorado, Denver
Sherman Dorn, University of South Florida
Mark E. Fetler, California Commission on Teacher Credentialing
Richard Garlikov, hmwkhelp@scott.net
Thomas F. Green, Syracuse University
Alison I. Griffith, York University
Arlen Gullickson, Western Michigan University
Ernest R. House, University of Colorado
Aimee Howley, Ohio University
Craig B. Howley, Appalachia Educational Laboratory
William Hunter, University of Calgary
Richard M. Jaeger, University of North Carolina, Greensboro
Daniel Kallós, Umeå University
Benjamin Levin, University of Manitoba

Thomas Mauhs-Pugh, Green Mountain College
Dewayne Matthews, Western Interstate Commission for Higher Education
William McInerney, Purdue University
Mary McKeown-Moak, MGT of America (Austin, TX)
Les McLean, University of Toronto
Susan Bobbitt Nolen, University of Washington
Anne L. Pemberton, apembert@pen.k12.va.us
Hugh G. Petrie, SUNY Buffalo
Richard C. Richardson, New York University
Anthony G. Rud Jr., Purdue University
Dennis Sayers, Ann Leavenworth Center for Accelerated Learning
Jay D. Scribner, University of Texas at Austin
Michael Scriven, scriven@aol.com
Robert E. Stake, University of Illinois at Urbana-Champaign
Robert Stonehill, U.S. Department of Education
Robert T. Stout, Arizona State University
David D. Williams, Brigham Young University

EPAA Spanish Language Editorial Board

Associate Editor for Spanish Language:
Roberto Rodríguez Gómez, Universidad Nacional Autónoma de México, roberto@servidor.unam.mx

Adrián Acosta (México), Universidad de Guadalajara, adrianacosta@compuserve.com
J. Félix Angulo Rasco (Spain), Universidad de Cádiz, felix.angulo@uca.es
Teresa Bracho (México), Centro de Investigación y Docencia Económica-CIDE, bracho@dis1.cide.mx
Alejandro Canales (México), Universidad Nacional Autónoma de México, canalesa@servidor.unam.mx
Ursula Casanova (U.S.A.), Arizona State University, casanova@asu.edu
José Contreras Domingo, Universitat de Barcelona, Jose.Contreras@doe.d5.ub.es
Erwin Epstein (U.S.A.), Loyola University of Chicago, Eepstein@luc.edu
Josué González (U.S.A.), Arizona State University, josue@asu.edu
Rollin Kent (México), Departamento de Investigación Educativa, DIE/CINVESTAV, rkent@gemtel.com.mx, kentr@data.net.mx
María Beatriz Luce (Brazil), Universidad Federal de Rio Grande do Sul, UFRGS, lucemb@orion.ufrgs.br

Javier Mendoza Rojas (México), Universidad Nacional Autónoma de México, javiermr@servidor.unam.mx
Marcela Mollis (Argentina), Universidad de Buenos Aires, mmollis@filo.uba.ar
Humberto Muñoz García (México), Universidad Nacional Autónoma de México, humberto@servidor.unam.mx
Angel Ignacio Pérez Gómez (Spain), Universidad de Málaga, aiperez@uma.es
Daniel Schugurensky (Argentina-Canadá), OISE/UT, Canada, dschugurensky@oise.utoronto.ca
Simon Schwartzman (Brazil), Fundação Instituto Brasileiro de Geografia e Estatística, simon@openlink.com.br
Jurjo Torres Santomé (Spain), Universidad de A Coruña, jurjo@udc.es
Carlos Alberto Torres (U.S.A.), University of California, Los Angeles, torres@gseisucla.edu