
Educational policy analysis archives


Material Information

Title:
Educational policy analysis archives
Physical Description:
Serial
Language:
English
Creator:
Arizona State University
University of South Florida
Publisher:
Arizona State University
University of South Florida.
Place of Publication:
Tempe, Ariz
Tampa, Fla
Publication Date:
June 12, 2003
Subjects

Subjects / Keywords:
Education -- Research -- Periodicals   ( lcsh )
Genre:
non-fiction   ( marcgt )
serial   ( sobekcm )

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
usfldc doi - E11-00316
usfldc handle - e11.316
System ID:
SFS0024511:00316




Full Text


A peer-reviewed scholarly journal
Editor: Gene V Glass
College of Education
Arizona State University

Copyright is retained by the first or sole author, who grants right of first publication to the EDUCATION POLICY ANALYSIS ARCHIVES. EPAA is a project of the Education Policy Studies Laboratory. Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education.

Volume 11, Number 18. June 12, 2003. ISSN 1068-2341.

An Examination of the Longitudinal Effect of the Washington Assessment of Student Learning (WASL) on Student Achievement

Donald C. Orlich
Washington State University

Citation: Orlich, Donald C. (2003, June 12). An examination of the longitudinal effect of the Washington Assessment of Student Learning (WASL) on student achievement. Education Policy Analysis Archives, 11(18). Retrieved [date] from http://epaa.asu.edu/epaa/v11n18/.

Abstract

Linn, Baker and Betebenner (2002) suggested using the effect size statistic as the measure of the adequate yearly progress target (AYPT) required by PL 107-110. This paper analyzes a four-year data set from the required high-stakes test, the Washington Assessment of Student Learning, using effect size as the AYPT metric. Mean scale scores for 4th, 7th and 10th grade reading and mathematics were examined. Nominal descriptors suggested by Cohen (1988) were applied and showed no yearly effect on student achievement as a function of the WASL. Comparing the 1998 scale scores to those of 2001 showed a small effect. However, lowering the effect size criterion from 0.20 to 0.05 did show small yearly effects on student achievement. Meeting AYPT objectives will be a problem of defining the standard as yearly score fluctuations occur. The educational research community should challenge the statistical logic associated with setting AYPTs.

A statistical and accountability dilemma has emerged with the passage of the "No Child Left Behind Act of 2001" (PL 107-110). States are now forced by federal law to show adequate student yearly progress targets, which will be met through


high-stakes testing. Several states have constructed their own accountability systems that feature criterion-referenced assessments. However, student test scores tend to display characteristics of norm-referenced tests, i.e., normal distributions. The use of effect size statistics, which is herein applied to one state, has been suggested as one means of determining the required adequate yearly progress targets (Linn, Baker & Betebenner, 2002).

The Washington State Model (WASL)

The State of Washington established the Washington Assessment of Student Learning (WASL) as its accountability tool. The WASL is keyed to the state's standards, called "Essential Academic Learning Requirements." The WASL is used to test all 4th, 7th and 10th graders in mathematics, reading, and writing. The 5th, 8th and 10th graders will be assessed in science, and listening is also assessed. Using the data collected from the 1998 through 2001 WASL administrations, the writer calculated effect sizes (see Cohen, 1988) to observe trends.

The purpose of this study is to determine the effect on student achievement of the longitudinal administration of the Washington Assessment of Student Learning (WASL), the state-mandated high-stakes test. The WASL scale score means and standard deviations were available for the years 1998, 1999, 2000 and 2001 for mathematics and reading and are shown in Table 1. The average numbers of students taking the WASL math and reading tests during the four-year period for grades 4, 7 and 10, respectively, are 70,431, 72,864 and 66,856. These three combined totals account for 21 percent of the state's total 2001-02 K-12 student population of 1,010,424 (Education Profile, 2002). The number of WASL test-takers is significant.

Table 1
Yearly Means and Standard Deviations for 4th, 7th, and 10th Grade Mathematics and Reading Scale Scores on the Washington Assessment of Student Learning (1998-2001)

Grade Level      Spring 1998  Spring 1998  Spring 1999  Spring 1999
and Subject      Mean         SD           Mean         SD
4 Mathematics    383.5        32.2         386.5        33.9
4 Reading        402.1        19.3         404.2        19.5
7 Math           357.4        46.4         364.7        52.0
7 Reading        390.1        20.1         393.1        20.2
10 Math          N/R          N/R          382.2        42.8
10 Reading       N/R          N/R          402.8        29.5


Grade Level      Spring 2000  Spring 2000  Spring 2001  Spring 2001
and Subject      Mean         SD           Mean         SD
4 Mathematics    391.2        34.9         393.3        34.9
4 Reading        407.3        19.6         405.7        18.6
7 Math           369.1        53.6         368.7        51.6
7 Reading        393.8        20.9         394.5        20.6
10 Math          387.6        40.0         390.8        41.1
10 Reading       407.3        30.2         410.0        30.5

All means and standard deviations are from files of the Office of State Superintendent of Public Instruction, Olympia, Washington.

An inspection of the scale score means shows a rather small incremental increase in most means. However, there is a scale-point decline of 0.4 in the mean of grade 7 math scores in 2001 compared to 2000. A similar decline is noted in 2001 for grade 4 reading, where the mean scale score dropped by 1.6 points compared to 2000. These patterns have been praised by state policy makers as showing evidence of student progress. However, are the scores truly reflective of student achievement? To answer that question, I use the statistical test of effect size.

Effect Size

The effect size is a method by which to judge the relative learning worth from independent samples (see Bloom, 1984; Cohen, 1988; Glass, 1980; Marzano et al., 2001; Walberg, 1999). In this case, what evidence is there that administering and teaching to the WASL has a positive impact on student achievement?

Cohen (1988) defined an effect size as the difference between two means divided by the standard deviation of either group. With independent samples, such as the WASL, one can determine the effect sizes by comparing the means of two different years. Cohen also suggested that the relative efficacy of an effect could be stated in nominal terms: an effect size (ES) of at least 0.2 was labeled as small; an ES of at least 0.5 as medium; and an ES of 0.8 or greater as large. Thus, an effect size of 0.2 is required to show efficacy of learning. Table 2 shows the effect size calculations and nominal descriptors for this study. In all calculations a uniform method was used: the earlier year of each pair is the control, the later year is the experimental group, and standard deviations are taken from the control years.
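To make the uniform method concrete, the calculation can be written as a short Python script. This is a minimal sketch, not code from the original study; the function names are mine, and the thresholds follow Cohen's (1988) nominal descriptors as applied in Table 2.

def effect_size(mean_earlier, sd_earlier, mean_later):
    # Difference between two yearly means, divided by the
    # control (earlier) year's standard deviation.
    return (mean_later - mean_earlier) / sd_earlier

def cohen_label(es):
    # Nominal descriptors per Cohen (1988); negative values
    # are flagged separately, as in Table 2.
    if es < 0:
        return "Negative"
    if es >= 0.8:
        return "Large"
    if es >= 0.5:
        return "Medium"
    if es >= 0.2:
        return "Small"
    return "None"

# Grade 4 mathematics from Table 1: 1998 mean 383.5 (SD 32.2), 1999 mean 386.5
es = effect_size(383.5, 32.2, 386.5)
print(f"{es:.2f} -> {cohen_label(es)}")  # 0.09 -> None, matching Table 2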


Table 2
Effect Size Calculations for 4th, 7th, and 10th Grade Mathematics and Reading Scores on the Washington Assessment of Student Learning (1998-2001)

Grade Level    1999/1998        2000/1999        2001/2000
and Subject    ES     Effect    ES     Effect    ES      Effect
4 Math         0.09   None      0.14   None       0.06   None
4 Reading      0.11   None      0.16   None      -0.08   Negative
7 Math         0.16   None      0.08   None      -0.01   Negative
7 Reading      0.15   None      0.03   None       0.05   None
10 Math        N/R    N/R       0.13   None       0.08   None
10 Reading     N/R    N/R       0.15   None       0.09   None

The effect is described in nominal terms as per Jacob Cohen's (1988) definitions.

Discussion of Data Sets

Table 2 shows the effect sizes for the 4th, 7th and 10th grade mathematics and reading scores, year by year, from 1998 to 2001. Examining Table 2, note that at the 4th grade level five scores show no effect on achievement, while there is one negative learning effect, on grade 4 reading in 2001, that is, a decline in achievement. The grade 7 pattern is similar, showing no effect on five of the six scores and one negative effect, in mathematics for 2001. The grade 10 results show no effect on mathematics and reading scores in all cases.

If one were to use an effect size of 0.05, which would account for a two-percentile gain, as the Adequate Yearly Progress Target (AYPT) suggested by Linn, Baker, and Betebenner (2002), then 13 of the 16 scores would meet that target. However, using Cohen's (1988) definitions, the 16 scores would show no effect and not meet the target. Setting the criterion measure of an adequate yearly progress target (AYPT) may become a major problem of definition. This situation may further complicate the implementation of AYPT policy. It appears that further analysis of setting AYPTs, and field studies, are essential.

The average percentile gain during the four-year period of this study (1998-2001) for grades 4, 7 and 10 in math and reading was 3.3. That number corresponds to an effect size of 0.08. That effect size would exceed the AYPT suggested by Linn et al. (2002), but not the nominal ES suggested by Cohen. These differences will be explored further.

Now examine Tables 3 and 4, which display the data and effect sizes comparing the 1998 WASL scores for grades 4 and 7 to those of 2001, and the grade 10 scores for 1999 and 2001. Using Cohen's classifications, five of the six comparisons show a small effect and only one shows no effect. It appears to take several years to show any effect. The WASL math results released in the SRI International report (2002) showed that for grade 4 from 1997-2002 the ES was 0.79; for grade 7 from 1998-2002 the ES was 0.31; and for grade 10 from 1999-2002 the ES was 0.14. These data are inconclusive, but do suggest that, like national norm-referenced tests, state criterion-referenced tests may take several years to show a positive impact on student achievement.
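The percentile equivalences used above (an ES of 0.05 as roughly a two-percentile gain, and a 3.3-percentile gain as roughly an ES of 0.08) are consistent with the usual conversion through the standard normal distribution. The following is a minimal sketch of that conversion, under the assumption that this is the conversion the cited authors intend; it is not taken from the original paper.

from statistics import NormalDist

def es_to_percentile_gain(es):
    # Percentile points gained by a student at the mean who moves up `es` SDs.
    return 100 * (NormalDist().cdf(es) - 0.5)

def percentile_gain_to_es(gain):
    # Inverse conversion: a percentile-point gain back to an effect size.
    return NormalDist().inv_cdf(0.5 + gain / 100)

print(round(es_to_percentile_gain(0.05), 1))  # ~2.0, the Linn et al. AYPT
print(round(percentile_gain_to_es(3.3), 2))   # ~0.08, the four-year average gain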


Camara and Powers (1999) concluded that coaching students for the SAT does in fact increase a student's SAT score. Washington State teachers have had at least four years of experience in preparing for and teaching to the WASL. Further, between June 21 and June 30, 2001, the State Superintendent of Public Instruction selected 175 classroom teachers to attend a special, all-expenses-paid WASL assessment training program in Mesa, Arizona (Brown, 2001, May 17). The impact of that training and of student familiarity with the WASL appears to be similar to the SAT coaching findings reported by Camara and Powers (1999), especially at grade 4, where teachers have had over six years of "practice." Additionally, the state superintendent of public instruction initiated a "School Improvement Specialist" program in 2001-02. About 200 selected individuals are being paid $30,000 a school year to work 1.5 days per week (or up to $90,000 for 4.5 days) to help teachers and schools "improve student performance." No independent evaluation of this multi-million-dollar expenditure has yet been conducted.

Table 3
Means, Standard Deviations, and Effect Sizes for 4th and 7th Grade Mathematics and Reading Scale Scores on the Washington Assessment of Student Learning, Comparing 1998 and 2001

Grade Level    Spring 1998  Spring 1998  Spring 2001  Spring 2001  Effect
and Subject    Mean         SD           Mean         SD           Size    Effect
4 Mathematics  383.5        32.2         393.3        34.9         0.30    Small
4 Reading      402.1        19.3         405.7        18.6         0.19    None
7 Math         357.4        46.4         368.7        51.6         0.24    Small
7 Reading      390.1        20.1         394.5        20.6         0.22    Small

Table 4
Means, Standard Deviations, and Effect Sizes for 10th Grade Mathematics and Reading Scores on the Washington Assessment of Student Learning, Comparing 1999 and 2001

Grade Level    Spring 1999  Spring 1999  Spring 2001  Spring 2001  Effect
and Subject    Mean         SD           Mean         SD           Size    Effect
10 Math        382.2        42.8         390.8        41.1         0.20    Small
10 Reading     402.8        29.5         410.0        30.5         0.24    Small
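As a check on Tables 3 and 4, the six long-range comparisons can be reproduced from the Table 1 scale scores with the same method used for Table 2. Again, this is a minimal sketch of my own, not the author's code; the grade 10 baseline is 1999 because no 1998 grade 10 scores were reported.

rows = {
    "4 Math":     (383.5, 32.2, 393.3),  # 1998 mean, 1998 SD, 2001 mean
    "4 Reading":  (402.1, 19.3, 405.7),
    "7 Math":     (357.4, 46.4, 368.7),
    "7 Reading":  (390.1, 20.1, 394.5),
    "10 Math":    (382.2, 42.8, 390.8),  # 1999 baseline
    "10 Reading": (402.8, 29.5, 410.0),  # 1999 baseline
}
for name, (mean_0, sd_0, mean_1) in rows.items():
    es = (mean_1 - mean_0) / sd_0        # control-year SD in the denominator
    label = "Small" if es >= 0.20 else "None"
    print(f"{name}: ES = {es:.2f} ({label})")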


It appears that nearly four years would be required to show a small effect on student test performance; thus AYPTs may be elusive and troublesome. And how does a state account for a year with a negative effect size, as the Washington data illustrate? The Washington State reform movement was legislated into existence in 1993. Thus, one could argue that during an eight-year period little impact on student achievement is shown for the estimated $1 billion cost of the state's total school reform package.

A question needs resolution: "Is the small effect shown in the four-year comparisons a function of increased teacher knowledge and possible compromise of WASL questions, or is the effect a function of growth in student academic achievement?" Obviously that question is not answered in this paper, but it must be explored.

Further Implications

This article only examines the effect size of student achievement as a consequence of longitudinal high-stakes testing in one state. To illustrate the gravity of the problem, let us also examine the percent of students meeting the arbitrarily set standard scale score in math and reading for grades 4, 7 and 10 from 1998-2002 (see Table 5).

Table 5
Percent of Students Meeting Standard for 4th, 7th, and 10th Grade Mathematics and Reading on the Washington Assessment of Student Learning (1998-2002)

Grade Level    1998-99     1999-2000   2000-2001   2001-2002
and Subject    % Meeting   % Meeting   % Meeting   % Meeting
4 Math         37.3        41.8        43.4        51.8
4 Reading      59.1        65.8        66.1        65.6
7 Math         24.2        28.2        27.4        30.4
7 Reading      40.8        41.5        39.8        44.5
10 Math        33.0        35.0        38.9        37.3
10 Reading     51.4        59.8        62.4        59.2

Source: State Superintendent of Public Instruction, Education Profiles, Olympia, WA, 1998-2002.

The range of percentages of students meeting the standard for math in grade 4 is


from 37.3 percent in 1998-99 to 51.8 percent in 2001-02. For grade 7, the range is 24.2 to 30.4 percent. And in grade 10, the range in math is from 33.0 to 37.3 percent. Considering that about one-half of the fourth graders and almost two-thirds of the seventh and tenth graders still do not meet the standard (fail), policy makers should be alarmed. Indeed, on November 30, 2002, the writers of the SRI International report concluded, "The analyses further suggest that the grade 7 test was more challenging for the 7th graders than the grade 10 test was for the 10th graders" (p. ii). The report also noted that several WASL test items on the 4th grade math test were not aligned with the Essential Academic Learning Requirements.

The state superintendent of public instruction informed the standard-setting groups that the WASL cut scores and standard-setting are guided by the belief that, "In all content areas the standard should reflect what a well taught, hardworking student should know and be able to do near the end of grade [4, 7, or 10]" (SRI, 2002, p. 20). It is obvious that developmental, cognitive or behavioral perspectives are not being reflected in that guiding principle. Is faculty psychology now in vogue? Orlich (2000) analyzed the 4th grade practice WASL items using the developmental scales published by Epstein (2002, p. 184), Bloom's Taxonomy (Bloom et al., 1956), and the NAEP scales (Campbell et al., 1998) and concluded that the bulk of items on the 4th grade WASL were well beyond the students' developmental level. The 1998-99 WASL results confirmed that conclusion. Thus, the very high percentages of students "not meeting the WASL standard" may be traced to developmentally inappropriate items.

All children must take the WASL, with very few exceptions. The math effect size differences between grade 4 minority children and white/Caucasian children range from 0.75 to 0.80, or about 28 percentile points. For special education students, the effect size difference between that population and white/Caucasian children is 0.96, or a 33 percentile-point difference (see Taylor & Lee, 2001, Tables 8-10 and 8-12). The latter is nearly one full standard deviation on the WASL. Finally, there is a correlation of 0.73 between reading and math WASL scores (Abbott & Joireman, 2001). Is math or reading being tested?

The cost-effectiveness of the WASL may be gauged by the actual contract costs of the WASL with the Riverside Publishing Company. The initial contract was for about $40 million. The renewal (2001-05) is $61,673,910 (Contract No. 120-761, 2001). Thus the WASL per se costs about $102 million. That figure does not include teacher salaries for time spent preparing students for or administering the WASL, plus other bureaucratic costs associated with its administration. With an average 3.3-percentile gain per year, the cost per one percentile of gain is about $11 million per year. Obviously, the cost-benefit calculation is challengeable; but no matter, because the cost to meet any AYPT, as mandated by federal law (PL 107-110), will be staggering. More importantly, the money goes only to the test publishers. Not one dollar of the WASL reform expenditure goes to teachers to aid their instructional efforts.

Are the WASL scores aberrations? Apparently not. Linn and Haug (2002) examined Colorado school buildings' test scores over a four-year period and concluded the following:


"The performance of successive cohorts of students is used in a substantial number of states to estimate the improvement of schools for purposes of accountability. The estimates of improvement, however, are quite volatile. This volatility results in some schools being recognized as outstanding and other schools identified as in need of improvement simply as the result of random fluctuations. It also means that strategies of looking to schools that show large gains for clues of what other schools should do to improve student achievement will have little chance of identifying those practices that are most effective. On the other hand, schools that are identified as 'in need of improvement' will generally show increases in scores the year after they are identified simply because of the noise in the estimates of improvement and not because of the effectiveness of the special assistance provided to the schools or pressure that is put on them to improve." (p. 35)

Similarly, Darling-Hammond (2003) reported that doubt must be cast on state test gain scores because in Texas, students showed gains on the state-mandated assessment but did not make comparable gains on national standardized tests or the Texas college entrance test.

Conclusion

Using an effect size measurement and Cohen's (1988) nominal definitions, there is no effect, that is, no positive impact on yearly student achievement, as a consequence of the longitudinal administration of the Washington Assessment of Student Learning (WASL). However, over a four-year period a small effect size does emerge. The results of this study parallel the findings of Amrein and Berliner (2002a), who analyzed the consequences of high-stakes tests in 18 states. They reported that in 17 of the 18 states, student learning remained at the same level as it was before the policy of high-stakes tests was instituted.

In two separate studies, the first covering 28 states with high-stakes tests, Amrein and Berliner (2002b) concluded that these tests do little to improve student achievement. In a second study, of 17 states (2002c), they concluded that high-stakes tests may actually worsen academic performance and exacerbate dropout rates. The affective dimensions of high-stakes tests should be of great concern to policy makers and educators alike.

Washington State policy makers must re-examine the intent of the WASL and the empirical data sets that analyze it to determine its educational worthiness and continued fiscal support (Orlich, 2000; Abbott & Joireman, 2001; Basarab, 2001; Fouts, 2002; Keim, 2002). At the federal level there is a need to examine the practicality, reasonableness and statistical logic of setting adequate yearly progress targets. The experience in the state of Washington apparently shows that setting AYPTs may be not only an assessment fallacy but a gross misapplication of the banking practice of compound-interest calculation to human cognition. Is educational reform anything you can get away with?

Note

The author expresses appreciation to colleagues at Washington State University,


V. S. Manaranjan and Michael S. Trevisan, for their critique, insights, and suggestions relating to this paper; and to the anonymous referees for their constructive feedback.

References

Abbott, M. L. & Joireman, J. (2001, July). The relationships among achievement, low income, and ethnicity across six groups of Washington state students. Lynnwood, WA: Washington School Research Center, Technical Report #1.

Amrein, A. L. & Berliner, D. C. (2002a, March 28). High-stakes testing, uncertainty, and student learning. Education Policy Analysis Archives, 10(18). Retrieved April 10, 2002 from http://epaa.asu.edu/epaa/v10n18/

Amrein, A. L. & Berliner, D. C. (2002b, December). The impact of high-stakes tests on student academic performance: An analysis of NAEP results in states with high-stakes tests and ACT, SAT, and AP test results in states with high school graduation exams. Education Policy Research Unit, Education Policy Studies Laboratory, Arizona State University, Tempe, AZ. (EPSL-0211-126-EPRU).

Amrein, A. L. & Berliner, D. C. (2002c, December). An analysis of some unintended and negative consequences of high-stakes testing. Education Policy Research Unit, Education Policy Studies Laboratory, Arizona State University, Tempe, AZ. (EPSL-0211-125-EPRU).

Basarab, S. (2001, February). An overview of student assessment in Washington state. Unpublished report. Citizens United for Responsible Education (CURE), Burien, WA. Web site: http://www.eskimo.com/~cure/

Bloom, B. S. et al. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York: David McKay.

Bloom, B. S. (1984). The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13(6), 4-16.

Brown, D. (2001, May 17). Staffing paper: MEA for Mesa, Arizona trip. Olympia, WA: Office of State Superintendent of Public Instruction files.

Camara, W. J. & Powers, D. (1999, July). Coaching and the SAT. Society for Industrial Psychology Inc. Retrieved August 27, 2002, from http://stop.org/tip/backissues/tipjuly99/4camara.htm

Campbell, J. R., Voelke, K. E., & Donahue, P. L. (1998, August). Report in brief: NAEP trends in academic progress, revised. Washington, DC: National Center for Education Statistics. (NCES No. 998-530).

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Contract No. 120-761. (2001, September 20). Riverside Publishing contract with Superintendent of Public Instruction for WASL implementation.


Darling-Hammond, L. (2003). Standards and assessments: Where we are and what we need. Teachers College Record. Retrieved February 27, 2003 from http://www.tcrecord.org/Content.asp?ContentID=11109

Education Profile. State Superintendent of Public Instruction, Olympia, WA. Retrieved from http://www.k12.wa.us/edprofile/state

Epstein, H. T. (2002). Biopsychological aspects of memory and education. In S. P. Shohov (Ed.), Advances in psychology research, Vol. 11 (pp. 181-186). New York: Nova.

Fouts, J. T. (2002, April). The power of early success: A longitudinal study of student performance on the Washington assessment of student learning, 1998-2001. Lynnwood, WA: Washington School Research Center, Research Report #1.

Glass, G. V. (1980). Summarizing effect sizes. In R. Rosenthal (Ed.), New directions for methodology of social and behavioral science: Quantitative assessment of research domain. San Francisco: Jossey-Bass.

Keim, W. G. (2002). School accountability and fairness: A policy study of the 2000 Washington state school accountability criteria. Unpublished doctoral dissertation, Washington State University, Pullman, WA.

Linn, R. L., Baker, E. V., & Betebenner, D. W. (2002). Accountability systems: Implications of requirements of the No Child Left Behind Act of 2001. Educational Researcher, 31(6), 3-16.

Marzano, R. J., Pickering, D. J., & Pollock, J. E. (2001). Classroom instruction that works: Research-based strategies for increasing student achievement. Alexandria, VA: Association for Supervision and Curriculum Development.

No Child Left Behind Act of 2001, Public Law No. 107-110, 115 Stat. 1425 (2002).

Orlich, D. C. (2000). A critical analysis of the grade four Washington assessment of student learning. Curriculum in Context, 27(2), 10-14.

SRI International. (2002, November 30). A review of the Washington assessment of student learning in mathematics: Grades 7 and 10. Final report. SRI Project Number P12089. Olympia, WA: The Office of Superintendent of Public Instruction.

Taylor, C. S. & Lee, Y. (2001, October 18). Washington assessment of student learning grade 4, 2000 technical report. Olympia, WA: Office of the State Superintendent of Public Instruction.

Walberg, H. J. (1999). Productive teaching. In H. C. Waxman & H. J. Walberg (Eds.), New directions for teaching practice and research (pp. 75-104). Berkeley, CA: McCutchan Publishing Corporation.

About the Author


Donald C. Orlich
Science Mathematics Engineering Education Center
Washington State University
Pullman, Washington 99164-4237
Telephone: (509) 335-4844
Fax: (509) 335-7389
E-mail: dorlich@wsu.edu

Donald C. Orlich is Professor Emeritus, Science Mathematics Engineering Education Center, at Washington State University, Pullman. His academic focus is science education and instruction. He is senior coauthor, with R. Harder, R. Callahan, M. Trevisan and A. Brown, of the forthcoming 7th edition of Teaching Strategies: A Guide to Effective Instruction (Houghton Mifflin, 2004). He authored Designing Successful Grant Proposals (ASCD, 1996/2002). In March 2001, the 160,000-member ASCD honored him with the "Affiliate Publications Award for Outstanding Affiliate Article" for "A Critical Analysis of the Grade Four Washington Assessment of Student Learning."

The World Wide Web address for the Education Policy Analysis Archives is epaa.asu.edu

Editor: Gene V Glass, Arizona State University
Production Assistant: Chris Murrell, Arizona State University

General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass (glass@asu.edu), or reach him at College of Education, Arizona State University, Tempe, AZ 85287-2411. The Commentary Editor is Casey D. Cobb (casey.cobb@unh.edu).

EPAA Editorial Board

Michael W. Apple, University of Wisconsin
David C. Berliner, Arizona State University
Greg Camilli, Rutgers University
Linda Darling-Hammond, Stanford University
Sherman Dorn, University of South Florida
Mark E. Fetler, California Commission on Teacher Credentialing
Gustavo E. Fischman, California State University–Los Angeles
Richard Garlikov, Birmingham, Alabama
Thomas F. Green, Syracuse University
Aimee Howley, Ohio University
Craig B. Howley, Appalachia Educational Laboratory
William Hunter, University of Ontario Institute of Technology


Patricia Fey Jarvis, Seattle, Washington
Daniel Kallós, Umeå University
Benjamin Levin, University of Manitoba
Thomas Mauhs-Pugh, Green Mountain College
Les McLean, University of Toronto
Heinrich Mintrop, University of California, Los Angeles
Michele Moses, Arizona State University
Gary Orfield, Harvard University
Anthony G. Rud Jr., Purdue University
Jay Paredes Scribner, University of Missouri
Michael Scriven, University of Auckland
Lorrie A. Shepard, University of Colorado, Boulder
Robert E. Stake, University of Illinois–UC
Kevin Welner, University of Colorado, Boulder
Terrence G. Wiley, Arizona State University
John Willinsky, University of British Columbia

EPAA Spanish Language Editorial Board

Associate Editor for Spanish Language:
Roberto Rodríguez Gómez, Universidad Nacional Autónoma de México, roberto@servidor.unam.mx

Adrián Acosta (México), Universidad de Guadalajara, adrianacosta@compuserve.com
J. Félix Angulo Rasco (Spain), Universidad de Cádiz, felix.angulo@uca.es
Teresa Bracho (México), Centro de Investigación y Docencia Económica–CIDE, bracho@dis1.cide.mx
Alejandro Canales (México), Universidad Nacional Autónoma de México, canalesa@servidor.unam.mx
Ursula Casanova (U.S.A.), Arizona State University, casanova@asu.edu
José Contreras Domingo, Universitat de Barcelona, Jose.Contreras@doe.d5.ub.es
Erwin Epstein (U.S.A.), Loyola University of Chicago, Eepstein@luc.edu
Josué González (U.S.A.), Arizona State University, josue@asu.edu
Rollin Kent (México), Universidad Autónoma de Puebla, rkent@puebla.megared.net.mx
María Beatriz Luce (Brazil), Universidad Federal de Rio Grande do Sul–UFRGS, lucemb@orion.ufrgs.br
Javier Mendoza Rojas (México), Universidad Nacional Autónoma de México, javiermr@servidor.unam.mx
Marcela Mollis (Argentina), Universidad de Buenos Aires, mmollis@filo.uba.ar


Humberto Muñoz García (México), Universidad Nacional Autónoma de México, humberto@servidor.unam.mx
Ángel Ignacio Pérez Gómez (Spain), Universidad de Málaga, aiperez@uma.es
Daniel Schugurensky (Argentina-Canadá), OISE/UT, Canada, dschugurensky@oise.utoronto.ca
Simon Schwartzman (Brazil), American Institutes for Research–Brazil (AIR Brasil), simon@sman.com.br
Jurjo Torres Santomé (Spain), Universidad de A Coruña, jurjo@udc.es
Carlos Alberto Torres (U.S.A.), University of California, Los Angeles, torres@gseisucla.edu

EPAA is published by the Education Policy Studies Laboratory, Arizona State University.

