
Educational policy analysis archives


Material Information

Title:
Educational policy analysis archives
Physical Description:
Serial
Language:
English
Creator:
Arizona State University
University of South Florida
Publisher:
Arizona State University
University of South Florida.
Place of Publication:
Tempe, Ariz
Tampa, Fla
Publication Date:

Subjects

Subjects / Keywords:
Education -- Research -- Periodicals   ( lcsh )
Genre:
non-fiction   ( marcgt )
serial   ( sobekcm )

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
usfldc doi - E11-00323
usfldc handle - e11.323
System ID:
SFS0024511:00323




A peer-reviewed scholarly journal
Editor: Gene V Glass, College of Education, Arizona State University

Copyright is retained by the first or sole author, who grants right of first publication to the EDUCATION POLICY ANALYSIS ARCHIVES. EPAA is a project of the Education Policy Studies Laboratory. Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education.

Volume 11, Number 25. August 4, 2003. ISSN 1068-2341

Re-analysis of NAEP Math and Reading Scores in States with and without High-stakes Tests: Response to Rosenshine

Audrey Amrein-Beardsley
David C. Berliner
Arizona State University

Citation: Amrein-Beardsley, A. A. & Berliner, D. C. (2003, August 4). Re-analysis of NAEP math and reading scores in states with and without high-stakes tests: Response to Rosenshine. Education Policy Analysis Archives, 11(25). Retrieved [Date] from http://epaa.asu.edu/epaa/v11n25/.

Abstract

Here we address the criticism of our NAEP analyses by Rosenshine (2003). On the basis of his thoughtful critique we redid some of the analyses on which he focused. Our findings contradict his. This is no fault of his, the reasons for which are explained in this paper. Our findings do support our position that high-stakes tests do not do much to improve academic achievement. The extent to which states with high-stakes tests outperform states without high-stakes tests is, at best, indeterminable. Using 1994-1998 NAEP reading and 1996-2000 NAEP math data and accounting for NAEP exemption rates for the same years, we found that states with high-stakes tests are not outperforming states without high-stakes tests in reading in the 4th grade or math in the 8th grade at a statistically significant level. States with high-stakes tests are, however, outperforming states without high-stakes tests in math in the 4th grade at a


statistically significant level. Our findings also support our earlier stance that states with high-stakes tests are exempting more students from participating in the NAEP than are states without high-stakes tests. This is more prevalent the more recent the NAEP test administration. This is illustrated in the tables below.

Introduction

In our research, we were concerned that scores on high-stakes state tests could easily be manipulated through narrowing of the curriculum, drilling on items similar to the test, increasing exclusion rates of students, increases in dropouts and push-outs, and the like. To judge whether that concern was valid, we looked at audit tests: tests that might have some overlap with a state's own test but where school personnel were under much less intense pressure to achieve higher scores. We chose a series of audit tests to examine: the SAT, ACT, and AP tests, as well as all administrations of the NAEP reading and mathematics tests. We also studied whether some unanticipated side effects were present when high-stakes tests were introduced, such as increased GED taking, increased reporting of cheating, problems of teacher morale, problems with student motivation to learn, and so forth.

Substantive criticism of our work, thus far, has been limited to the NAEP data we reported. To our knowledge, the other conclusions we reached have not yet been subject to the same kinds of thoughtful criticism. So for now, given the methods that we used for analyses, our findings in those other areas stand. We concluded that there was no systematic pattern of gains on SATs, ACTs, or AP exams. That is, we found no evidence of transfer from the state tests to these other tests, tests that can be considered as audit measures.
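The composite ("OVERALL") rows in the tables that follow are the mean NAEP scale score across the states with data in a given administration, so a state with an n/a entry simply drops out of that year's mean. A minimal sketch of that computation, using the high-stakes group from Table 1 below (the scores come from that table; the significance test the paper applies to such changes is not detailed in this excerpt, so none is shown here):

```python
from statistics import mean

# NAEP grade 4 reading scale scores, 1994 and 1998, for states with
# high-stakes tests (Table 1). None marks an n/a administration.
scores = {
    "Alabama": (208, 211), "Kentucky": (212, 218), "Louisiana": (197, 204),
    "Maryland": (210, 215), "Michigan": (None, 217), "Mississippi": (202, 204),
    "Missouri": (217, 216), "New Mexico": (205, 206),
    "North Carolina": (214, 217), "Oklahoma": (None, 220),
    "South Carolina": (203, 210), "Tennessee": (213, 212),
    "Texas": (212, 217), "West Virginia": (213, 216),
}

def composite(year_index):
    # Mean over only those states with a score for that administration.
    return mean(s[year_index] for s in scores.values() if s[year_index] is not None)

m94, m98 = composite(0), composite(1)
# Rounds to 208.8 and 213.1, matching the OVERALL row of Table 1;
# the table's +4.3 change is the difference of the rounded composites.
print(round(m94, 1), round(m98, 1))
```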
In addition, we found increased drop-out rates and decreased high school graduation rates, increased rates by which students participated in the GED program, and a host of troubling negative effects associated with high-stakes testing.

Here we address the criticism of our NAEP analyses by Rosenshine (2003). On the basis of his thoughtful criticism, we redid some of the analyses on which he focused and now have a different view of the findings. What we found contradicts what we found in both of our earlier papers (Amrein & Berliner, 2002a; Amrein & Berliner, 2002b), but the data analyzed below are for different years of data from those used in the earlier papers. Following the form of the analyses done by Rosenshine, the data analyzed below are only for the years 1994-1998 for the NAEP reading test and 1996-2000 for the NAEP mathematics tests. In addition, in our earlier work we used the national trend line as the contrast or control group for our analyses. In this analysis, we use the composite score for states without high-stakes tests as the control. In addition, our findings contradict the findings reported by Rosenshine. This is no fault of his. Rosenshine used our designation of clear and unclear states with and without high-stakes tests from the second of our two papers.[2] We communicated many times and approved the states he used in his analysis. Given more consideration, however, we noticed the distinctions we made between clear and unclear states were based on our overall findings, which were based on all of the available NAEP data. In other words, Rosenshine analyzed the latest two NAEP administrations in reading and math using the


distinctions we made between clear and unclear states when we used all of the available NAEP data, approximately 10 years of NAEP data per subject. To complicate things more, because we used the national trend line as our control group, our clear/unclear distinctions were also made factoring in the national average. Rosenshine did not do this, which makes for differences in the findings. He used the states without high-stakes tests as the control. This makes for a better analysis and we have followed his lead here. In short, Rosenshine should not be faulted for his findings, nor should he be considered wrong in what he did. He did a fine reanalysis of our NAEP examination given the information he had at that point, and here we are redoing his.

NAEP Reading Grade 4 1994-1998

Taking Table 1 from Amrein & Berliner (2002b) and the states in which high-stakes tests were implemented before 1994 and between 1994 and 1998, we re-ran our analyses, as Rosenshine did, using all states with high-stakes tests and, as the control group, all states without high-stakes tests for which NAEP data were available. What we found with regard to reading grade 4 achievement from 1994-1998 is as follows:

Table 1. Fourth grade 1994-1998 NAEP reading scores (raw data)

States without high-stakes tests   NAEP 1994   NAEP 1998
Arizona                               206         207
Arkansas                              209         209
California                            197         202
Colorado                              213         222
Connecticut                           222         232
Delaware                              206         212
Florida                               205         207
Georgia                               207         210
Hawaii                                201         200
Iowa                                  223         223
Kansas                                n/a         222
Maine                                 228         225
Massachusetts                         223         225
Minnesota                             218         222
Montana                               222         226
Nevada                                n/a         208
New Hampshire                         223         226
New York                              212         216
Oregon                                n/a         214
Rhode Island                          220         218
Utah                                  217         215
Virginia                              213         218
Washington                            213         217
Wisconsin                             224         224
Wyoming                               221         219
OVERALL                             214.7       216.8    (change +2.1*)

States with high-stakes tests      NAEP 1994   NAEP 1998
Alabama                               208         211
Kentucky                              212         218
Louisiana                             197         204
Maryland                              210         215
Michigan                              n/a         217
Mississippi                           202         204
Missouri                              217         216
New Mexico                            205         206
North Carolina                        214         217
Oklahoma                              n/a         220
South Carolina                        203         210
Tennessee                             213         212
Texas                                 212         217
West Virginia                         213         216
OVERALL                             208.8       213.1    (change +4.3*)

* Significant at p < .05

Table 1 illustrates that the states with high-stakes tests outperformed those states without high-stakes tests on the NAEP grade 4 reading tests over the period 1994-1998. However, as shown in our earlier research (Amrein & Berliner, 2002a; Amrein & Berliner, 2002b), the rates by which students are excluded from the NAEP must be taken into consideration to determine whether gains and losses are clear (interpretable) or unclear (not interpretable).

Clear gains can be determined if a state's scores increase while the rates by which students are exempted from the NAEP stay the same or decrease. In other words, when the pool of students sampled to participate in the NAEP is less selective, the likelihood that their scores would increase artificially is nullified. Under these conditions such gains are clear. Clear losses can be determined if a state's scores decrease at the same time the rates by which students are exempted from the NAEP increase. In this case, the pool of students sampled was more selective and yet the scores still went down. Under these conditions it is reasonable to interpret those findings as a clear loss.

Unclear gains are the case when a state's scores increase while the rates by which


students are exempted from the NAEP increase. In other words, the pool of students sampled to participate in the NAEP is more selective and therefore likely to have biased the resulting gains. If lower-scoring students are pulled from the NAEP sample, scores on the NAEP will increase. This makes for unclear results. Unclear losses are the case when a state's scores decrease at the same time the rates by which students are exempted from the NAEP sample decrease. In this case, the pool of students sampled was less selective, so it is difficult to determine whether the addition of more lower-scoring students or an actual decrease in achievement caused the resulting losses.

We believe that it is absolutely necessary to make these kinds of judgments about each state because states with high-stakes tests are those states that increasingly are exempting more students from participating in the NAEP. "In states with high-stakes tests, between 0%–49% of the gains in NAEP scores can be explained by increases in rates of exclusion" (Amrein & Berliner, 2002a).

Looking simply at those states for which clear gains or losses are applicable, an analysis of the data yields the results given in Table 2. In this table, states shaded in green are those for which clear results were evident, states shaded in red are those for which unclear results were illustrated, and states shaded in yellow are those for which there were not enough data to analyze gains or losses appropriately. As can be seen, only two states included in the states with high-stakes column can be counted as states with "clear" effects.
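The clear/unclear decision rule laid out above amounts to a simple function of two quantities: the change in a state's score and the change in its NAEP exemption rate. A minimal sketch of that rule (the function name, and the treatment of an exactly unchanged exemption rate, are our own assumptions rather than code from the original analysis):

```python
def classify(score_change, exemption_change):
    """Classify a state's NAEP result as a clear or unclear gain or loss.

    score_change:     change in the state's NAEP scale score over the period.
    exemption_change: change in the state's NAEP exemption rate over the period.

    A gain is clear only if exemption rates stayed the same or fell; a loss
    is clear only if exemption rates stayed the same or rose.
    """
    if score_change > 0:
        return "clear gain" if exemption_change <= 0 else "unclear gain"
    if score_change < 0:
        return "clear loss" if exemption_change >= 0 else "unclear loss"
    return "no change"

# Scores rose while exemption rates also rose: the gain is unclear.
print(classify(+5, +2.0))   # unclear gain
print(classify(+5, -1.0))   # clear gain
print(classify(-3, +1.5))   # clear loss
```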
The composite data are not significant, but the table illustrates the extent to which states with high-stakes tests are not gaining in score simply because of their high-stakes testing policies.

Table 2. Fourth grade 1994-1998 NAEP reading scores with states coded as clear or unclear in their gains and losses

(State-by-state scores are identical to those in Table 1. The original shading, green for clear results, red for unclear results, and yellow for insufficient data, could not be preserved in this version. The composite rows, computed as described in the text, were:)

Composite                          NAEP 1994   NAEP 1998
States without high-stakes tests     215.4       217.0    (change +1.6*)
States with high-stakes tests        209.5       210.0    (change +0.5)

* Significant at p < .05

The composite data are important in that they nullify what one might conclude looking simply at Table 1. States with high-stakes tests are not outperforming states without high-stakes tests in reading grade 4 performance. Rather, as illustrated in Table 2, states without high-stakes tests gained in reading grade 4 performance at a statistically significant level. Given that only two states are included as clear states with high-stakes tests, states with high-stakes tests made insignificant gains and the differences between the two mean gains are not statistically significant.

Most importantly, what can be drawn from Table 2 is that states with high-stakes tests are exempting more students from participating in the reading grade 4 NAEP. Ninety percent of the states with "unclear" gains are states with increases in the rates by which students were exempted from the test. This supports the notion that states with high-stakes tests are not gaining in NAEP scores simply because of


their high-stakes testing policies.

NAEP Math Grade 4 1996-2000

Taking Table 1 from Amrein & Berliner (2002b) and the states in which high-stakes tests were implemented before 1996 and between 1996 and 2000, we re-ran our analyses using all states with high-stakes tests and, as the control group, all states without high-stakes tests for which NAEP data were available. What we found with regard to math grade 4 achievement from 1996-2000 is as follows:

Table 3. Fourth grade 1996-2000 NAEP mathematics scores (raw data)

States without high-stakes tests   NAEP 1996   NAEP 2000
Alaska                                224         n/a
Arizona                               218         219
Arkansas                              216         217
Colorado                              226         n/a
Connecticut                           232         234
Georgia                               216         220
Hawaii                                215         216
Idaho                                 n/a         227
Illinois                              n/a         225
Iowa                                  229         233
Kansas                                n/a         232
Maine                                 233         231
Minnesota                             232         235
Montana                               228         230
Nebraska                              228         226
North Dakota                          231         231
Oregon                                224         227
Rhode Island                          221         225
Tennessee                             219         220
Utah                                  226         227
Vermont                               225         232
Washington                            225         n/a
Wisconsin                             231         n/a
Wyoming                               223         229
OVERALL                             224.9       226.8    (change +1.9*)

States with high-stakes tests      NAEP 1996   NAEP 2000
Alabama                               212         218
California                            209         214
Delaware                              215         n/a
Florida                               216         n/a
Indiana                               229         234
Kentucky                              220         221
Louisiana                             209         218
Maryland                              220         222
Massachusetts                         229         235
Michigan                              226         231
Mississippi                           208         211
Missouri                              225         229
Nevada                                217         220
New Jersey                            227         n/a
New Mexico                            214         214
New York                              223         227
North Carolina                        224         232
Ohio                                  n/a         231
Oklahoma                              n/a         225
Pennsylvania                          226         n/a
South Carolina                        213         220
Texas                                 229         233
Virginia                              222         230
West Virginia                         224         225
OVERALL                             219.9       224.5    (change +4.6*)

* Significant at p < .05

Table 3 illustrates that the states with high-stakes tests outperformed those states without high-stakes tests on the math NAEP grade 4 tests over the time period 1996-2000. However, as argued earlier, the rates by which students are excluded from the NAEP must be taken into consideration to determine whether gains and losses are clear or unclear. Using the same rules as outlined above to determine clear and unclear gains and losses, we looked at only those states for which clear gains or losses are relevant. (For 4th grade reading, Rosenshine included the following states: Arizona, Arkansas, California, Connecticut, Hawaii, Iowa, Maine, Montana, New Hampshire, Rhode Island, Utah, Washington, Wisconsin, and Wyoming. There are notable differences in the states he included and the states we included that likely came from the fact that we drew our states directly out of Table 1 of the original document.) An analysis of the data yields the following:

Table 4. Fourth grade 1996-2000 NAEP mathematics scores with states coded as clear or unclear in their gains and losses

(State-by-state scores are identical to those in Table 3; the original green/red/yellow shading marking clear, unclear, and insufficient-data states could not be preserved in this version. The composite rows, computed as described in the text, were:)

Composite                          NAEP 1996   NAEP 2000
States without high-stakes tests     224.5       225.6    (change +1.1)
States with high-stakes tests        210.4       215.0    (change +4.6*)

* Significant at p < .05

Compared to the reading data above, we now find the opposite when we look at the math grade 4 NAEP composite data. When states with clear effects are pulled out and analyzed, it is apparent that states with high-stakes tests are outperforming states without high-stakes tests at a statistically significant level. The scores posted by the clear states with high-stakes tests are significantly different from the scores posted by the clear states without high-stakes tests.

Again, however, what can also be drawn from Table 4 is that states with high-stakes tests are exempting more students from participating in the math grade 4 NAEP. Two times as many states with high-stakes tests exempted students and realized gains in grade 4 math achievement from 1996-2000 than did states without high-stakes tests. This, again, supports the notion that states with high-stakes tests


are not all gaining in NAEP scores simply because of their high-stakes testing policies.

NAEP Math Grade 8 1996-2000

Taking Table 1 from Amrein & Berliner (2002b) and the states in which high-stakes tests were implemented before 1996 and between 1996 and 2000, we re-ran our analyses using all states with high-stakes tests and, as the control group, all states without high-stakes tests for which NAEP data were available. What we found with regard to math grade 8 achievement from 1996-2000 is as follows:

Table 5. Eighth grade 1996-2000 NAEP mathematics scores (raw data)

States without high-stakes tests   NAEP 1996   NAEP 2000
Alaska                                278         n/a
Arizona                               268         271
Arkansas                              261         261
Colorado                              276         n/a
Connecticut                           280         282
Georgia                               262         266
Hawaii                                262         263
Idaho                                 n/a         278
Illinois                              n/a         277
Iowa                                  284         n/a
Kansas                                n/a         284
Maine                                 284         284
Minnesota                             284         288
Montana                               283         287
Nebraska                              283         281
North Dakota                          284         283
Oregon                                277         281
Rhode Island                          268         273
Tennessee                             263         263
Utah                                  276         275
Vermont                               279         283
Washington                            276         n/a
Wisconsin                             283         n/a
Wyoming                               275         277
OVERALL                             275.5       276.7    (change +1.2*)

States with high-stakes tests      NAEP 1996   NAEP 2000
Alabama                               256         262
California                            263         262
Delaware                              267         n/a
Florida                               264         n/a
Indiana                               275         283
Kentucky                              267         272
Louisiana                             252         259
Maryland                              270         276
Massachusetts                         277         283
Michigan                              276         278
Mississippi                           250         254
Missouri                              274         274
Nevada                                n/a         268
New Mexico                            262         260
New York                              270         276
North Carolina                        268         280
Ohio                                  n/a         283
Oklahoma                              n/a         272
South Carolina                        260         266
Texas                                 270         275
Virginia                              270         277
West Virginia                         265         271
OVERALL                             266.1       271.6    (change +5.4*)

* Significant at p < .05

Table 5 illustrates that the states with high-stakes tests outperformed those states without high-stakes tests on the math NAEP grade 8 over 1996-2000. Again, we argue that the rates by which students are excluded from the NAEP must be taken into consideration to determine whether gains and losses are clear or unclear. Using the same rules as outlined above to determine clear and unclear gains and losses, we looked at only those states for which clear gains or losses are apparent (Note 3). An analysis of the data yields the following:

Table 6. Eighth grade 1996-2000 NAEP mathematics scores with states coded as clear or unclear in their gains and losses

(State-by-state scores are identical to those in Table 5; the original shading marking clear, unclear, and insufficient-data states could not be preserved in this version. The composite rows, computed as described in the text, were:)

Composite                          NAEP 1996   NAEP 2000
States without high-stakes tests     271.1       271.9    (change +0.7)
States with high-stakes tests        258.8       261.8    (change +3.0)

After the states with clear effects are pulled out and analyzed, it seems that states with high-stakes tests are outperforming states without high-stakes tests. They are not, however, outperforming states without high-stakes tests at a statistically significant level. In addition, the scores posted by the clear states with high-stakes tests are not significantly different from the scores posted by the clear states without high-stakes tests. States with high-stakes tests are not outperforming states without high-stakes tests in math grade 8 performance.

Again, what can also be drawn from Table 6 is that states with high-stakes tests are exempting more students from participating in the math grade 8 NAEP. Thirty-three percent of the states without high-stakes tests exempted more students and realized gains in math grade 8 NAEP scores. Fifty percent of the states with high-stakes tests exempted more students and realized gains in math grade 8 NAEP scores. This, again, supports our assertion that states with high-stakes tests are not gaining in NAEP scores simply because of their high-stakes testing policies.

Conclusion

In short, states with high-stakes tests seem to have outperformed states without


high-stakes tests on the grade 4 math NAEP at a statistically significant level. However, gains between states with and without high-stakes tests were not statistically different on the grade 4 reading or the grade 8 math NAEP. States with high-stakes tests are not outperforming states without high-stakes tests on either of these measures.

In addition, the rates by which personnel in states with high-stakes tests are exempting students are increasing at a faster rate than they are in states without high-stakes tests. There may be an underlying characteristic other than high-stakes tests that is causing this phenomenon, but this would take further analyses. What we do know, however, is that for the most part the gains posted by states with high-stakes tests on two of the three NAEP tests are more related to the rates by which students are exempted from the tests than they are related to high-stakes tests themselves.

We thank Professor Rosenshine for suggesting these alternative analytic techniques to us. In the end, for now, we remain unconvinced that the NAEP tests are showing much in the way of transfer effects. Given all the data we reported in our previous reports, we remain unconvinced that the high-stakes tests used by states are showing systematic positive effects on audit tests used to assess transfer.

References

Amrein, A. L. & Berliner, D. C. (2002a, March 28). High-stakes testing, uncertainty, and student learning. Education Policy Analysis Archives, 10(18). Retrieved July 24, 2003 from http://epaa.asu.edu/epaa/v10n18/.

Amrein, A. L. & Berliner, D. C. (2002b). The impact of high-stakes tests on student academic performance: An analysis of NAEP results in states with high-stakes tests and ACT, SAT, and AP test results in states with high school graduation exams. Tempe, AZ: Education Policy Studies Laboratory, Arizona State University. Retrieved July 24, 2003 from http://www.asu.edu/educ/epsl/EPRU/documents/EPSL-0211-126-EPRU.pdf.

Rosenshine, B. (2003, August 4). High-stakes testing: Another analysis. Education Policy Analysis Archives, 11(24). Retrieved August 4, 2003 from http://epaa.asu.edu/epaa/v11n24/.

About the Authors

Audrey Amrein-Beardsley
Email: audrey.beardsley@cox.net

Audrey Amrein-Beardsley is a part-time Research and Evaluation Associate at a private foundation in Scottsdale, Arizona and a part-time researcher at Arizona State University. She received her PhD from Arizona State University in 2002 in Education Policy with an emphasis in Research Methodology. Her scholarly interests include the study of the intended and unintended consequences of high-stakes testing policies. To date, she has focused on the effects of high-stakes testing policies at a macro level given the frequency with which these policies are


being implemented across the nation.

David C. Berliner
Regents' Professor of Education
College of Education
Arizona State University
Tempe, AZ 85287-2411
Email: berliner@asu.edu

David C. Berliner is Regents' Professor of Education at the College of Education of Arizona State University, in Tempe, AZ. He received his Ph.D. in 1968 from Stanford University in educational psychology, and has also worked at the University of Massachusetts, WestEd, and the University of Arizona. He has served as president of the American Educational Research Association (AERA), president of the Division of Educational Psychology of the American Psychological Association, and as a fellow of the Center for Advanced Study in the Behavioral Sciences and a member of the National Academy of Education. Berliner's publications include The Manufactured Crisis (Addison-Wesley, 1995, with B. J. Biddle) and The Handbook of Educational Psychology (Macmillan, 1996, edited with R. C. Calfee). Special awards include the Research into Practice Award of AERA, the National Association of Secondary School Principals Distinguished Service Award, and the Medal of Honor from the University of Helsinki. His scholarly interests include research on teaching and education policy analysis.

The World Wide Web address for the Education Policy Analysis Archives is epaa.asu.edu

Editor: Gene V Glass, Arizona State University
Production Assistant: Chris Murrell, Arizona State University

General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, glass@asu.edu, or reach him at College of Education, Arizona State University, Tempe, AZ 85287-2411. The Commentary Editor is Casey D. Cobb: casey.cobb@unh.edu.

EPAA Editorial Board

Michael W. Apple, University of Wisconsin
David C. Berliner, Arizona State University
Greg Camilli, Rutgers University
Linda Darling-Hammond, Stanford University
Sherman Dorn, University of South Florida
Mark E. Fetler, California Commission on Teacher Credentialing


Gustavo E. Fischman, California State University–Los Angeles
Richard Garlikov, Birmingham, Alabama
Thomas F. Green, Syracuse University
Aimee Howley, Ohio University
Craig B. Howley, Appalachia Educational Laboratory
William Hunter, University of Ontario Institute of Technology
Patricia Fey Jarvis, Seattle, Washington
Daniel Kallós, Umeå University
Benjamin Levin, University of Manitoba
Thomas Mauhs-Pugh, Green Mountain College
Les McLean, University of Toronto
Heinrich Mintrop, University of California, Los Angeles
Michele Moses, Arizona State University
Gary Orfield, Harvard University
Anthony G. Rud Jr., Purdue University
Jay Paredes Scribner, University of Missouri
Michael Scriven, University of Auckland
Lorrie A. Shepard, University of Colorado, Boulder
Robert E. Stake, University of Illinois, Urbana-Champaign
Kevin Welner, University of Colorado, Boulder
Terrence G. Wiley, Arizona State University
John Willinsky, University of British Columbia

EPAA Spanish Language Editorial Board

Associate Editor for Spanish Language:
Roberto Rodríguez Gómez, Universidad Nacional Autónoma de México, roberto@servidor.unam.mx

Adrián Acosta (México), Universidad de Guadalajara, adrianacosta@compuserve.com
J. Félix Angulo Rasco (Spain), Universidad de Cádiz, felix.angulo@uca.es
Teresa Bracho (México), Centro de Investigación y Docencia Económica–CIDE, bracho@dis1.cide.mx
Alejandro Canales (México), Universidad Nacional Autónoma de México, canalesa@servidor.unam.mx
Ursula Casanova (U.S.A.), Arizona State University, casanova@asu.edu
José Contreras Domingo, Universitat de Barcelona, Jose.Contreras@doe.d5.ub.es
Erwin Epstein (U.S.A.), Loyola University of Chicago, Eepstein@luc.edu
Josué González (U.S.A.), Arizona State University, josue@asu.edu


Rollin Kent (México), Universidad Autónoma de Puebla, rkent@puebla.megared.net.mx
María Beatriz Luce (Brazil), Universidad Federal de Rio Grande do Sul–UFRGS, lucemb@orion.ufrgs.br
Javier Mendoza Rojas (México), Universidad Nacional Autónoma de México, javiermr@servidor.unam.mx
Marcela Mollis (Argentina), Universidad de Buenos Aires, mmollis@filo.uba.ar
Humberto Muñoz García (México), Universidad Nacional Autónoma de México, humberto@servidor.unam.mx
Ángel Ignacio Pérez Gómez (Spain), Universidad de Málaga, aiperez@uma.es
Daniel Schugurensky (Argentina-Canadá), OISE/UT, Canada, dschugurensky@oise.utoronto.ca
Simon Schwartzman (Brazil), American Institutes for Research–Brazil (AIR Brasil), simon@sman.com.br
Jurjo Torres Santomé (Spain), Universidad de A Coruña, jurjo@udc.es
Carlos Alberto Torres (U.S.A.), University of California, Los Angeles, torres@gseisucla.edu

EPAA is published by the Education Policy Studies Laboratory, Arizona State University

