
Educational policy analysis archives


Material Information

Title:
Educational policy analysis archives
Physical Description:
Serial
Language:
English
Creator:
Arizona State University
University of South Florida
Publisher:
Arizona State University
University of South Florida.
Place of Publication:
Tempe, Ariz
Tampa, Fla
Publication Date:

Subjects

Subjects / Keywords:
Education -- Research -- Periodicals (lcsh)
Genre:
non-fiction (marcgt)
serial (sobekcm)

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
usfldc doi - E11-00245
usfldc handle - e11.245
System ID:
SFS0024511:00245



Full Text
Education Policy Analysis Archives
Volume 9, Number 48. November 19, 2001. ISSN 1068-2341

A peer-reviewed scholarly journal. Editor: Gene V Glass, College of Education, Arizona State University. Copyright 2001, the EDUCATION POLICY ANALYSIS ARCHIVES. Permission is hereby granted to copy any article if EPAA is credited and copies are not sold. Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education.

Information Technology and the Goals of Standards-Based Instruction: Advances and Continuing Challenges

Douglas A. Archbald
University of Delaware

Citation: Archbald, D. A. (2001, November 19). Information Technology and the Goals of Standards-Based Instruction: Advances and Continuing Challenges. Education Policy Analysis Archives, 9(48). Retrieved [date] from http://epaa.asu.edu/epaa/v9n48/.

Abstract

This article examines goals of standards-based reform in education and ways in which developments in information technology have facilitated those goals. Since standards-based reform is a rather general concept, I begin by developing a more specific formulation which I refer to as the "standards-based instruction and assessment" model. Developments in information technology over the last fifteen years have contributed in important ways to the goals of standards-based reform at the policy level, but difficult organizational and technical challenges still have to be overcome to realize more fully the goal of standards-based instruction and assessment in instructional management and practice within schools and classrooms.

Introduction

It is axiomatic that effective education requires monitoring students' academic progress and using this information to design appropriate instruction. Planning, instruction, and assessment should be closely interconnected and cyclical (Angelo, 1998; Linn & Gronlund, 1995; Otto, Wolf & Eldridge, 1984; Tanner, 2001). (Note 1) Standards-based instruction and assessment, as the word standard implies, means consistency in expectations for student achievement and curriculum coverage, and consistency in the standards by which students are evaluated. This requires that standards be clear, specific, and uniform, not fluctuating depending upon which teacher a student gets, what week of the school year it is, or what grade, course, or school s/he happens to be in. Teaching toward clear and consistent standards, assessing student performance to monitor progress, and utilizing assessment information for instructional planning: these are the elements of what I shall refer to as "standards-based instruction and assessment" (SBIA).

SBIA is an ideal, a set of principles to strive for. There are many obstacles to realizing the ideals of SBIA in schools, but progress has occurred because of both education reform initiatives and developments in information technology in education: namely, state standards-based reform policies and web-based systems for reporting standards and assessment results. After elaborating more on SBIA, this paper explains and provides examples of how progress is occurring at the state, district, and school levels. At the same time, there is still far to go and obstacles to overcome before more fully realized versions of SBIA operate effectively at the classroom level. The last section of this paper examines some of these obstacles.

The Standards-based Instruction and Assessment Model

Ideally, teachers teach toward a clear set of instructional objectives. This statement does not imply a mechanistic pursuit of minutely specified behavioral objectives. While educators disagree on the specifics of good objectives-based instruction, there is virtually universal agreement with the principle that good teachers have a very clear understanding of the instructional outcomes they seek (Angelo, 1998; Smith & Ragan, 1999; Wiggins, 1998). They are clear on what they want their students to know and be able to do at the culmination of instruction. Good lessons and units have clear and specific academic aims. Effective instructional planning entails thoughtful design and sequencing of lessons and units.

Assessment is integral to effective instruction. Good teachers assess the outcomes of their instruction frequently, by questioning students and with more formal assessments of student learning, and they use the assessment information in the design of subsequent instruction. On a moment-to-moment basis, in the context of instruction, good teachers use a variety of questioning techniques to assess student comprehension and developing proficiencies (Angelo, 1998; Tanner, 2001). Over the longer term, from lesson to lesson, unit to unit, and marking period to marking period, good teachers systematically collect and manage assessment data on their students. Good teachers also evaluate their impact on students over the course of the year, by examining individual and aggregate growth of their students based on appropriate local, norm-referenced, and criterion-referenced assessments. Appropriate assessment data include scores on tests, quizzes, and other diagnostic and standardized assessments, as well as grades and other forms of student information useful for instruction. When assessments reveal students are not achieving desired learning outcomes, instruction is revised accordingly. Planning, instruction, and assessment should constitute an ongoing, integrated cycle.

Standards-based instruction and assessment requires that teachers teaching at the same grade, whether in different classrooms or schools, be guided by the same content and performance standards. As asserted earlier, this is an ideal, a set of principles to strive for. Ideally, expectations for student achievement and standards of curriculum content should not depend upon what classroom or school a student happens to be in. The planning and delivery of instruction should be guided by and aim toward clearly specified standards. Whether standards prescribe how well students should be able to read or write by certain grade levels, what topics they should cover in particular courses, or what competencies they should demonstrate at graduation, in principle the standard is the same for all students; that is what "standard" means. The antithesis of standards-based instruction is each teacher independently deciding what will be covered in his/her classroom and the level of achievement expected of the students. Without standards, expectations for student knowledge and achievement and instructional goals are necessarily based on the idiosyncratic preferences of individual teachers.

Fully realized at an organizational level, standards-based instruction and assessment (SBIA) requires that the planning, instruction, and assessment cycle be vertically integrated among the school's different levels: student, classrooms, supra-classroom groups, and whole school (McEwan, 1998; NEC, 1985; Otto, Wolf & Eldridge, 1984). Standards of achievement should be clearly understood and operate school-wide, with adequate data to monitor performance and processes at organizational and sub-organizational levels. The model assumes that teachers have a common vocabulary to plan instruction and reflect on practice; plan instruction from a shared understanding of content and achievement standards; evaluate students based on principles and expectations that are consistent across students, groups of students, and periods of time; and participate in instructional management, such as monitoring the effectiveness of instructional programs and identifying instructional and resource allocation priorities for short- and long-term planning. Research shows that schools vary widely in their ability to achieve SBIA at an organizational level, with leadership being a critical variable, and that schools that are able to implement principles of SBIA effectively are likely to be better schools (Archbald & Newmann, 1992; Kaufmann, 1992; Levine & Lawrence, 1992; Leithwood & Aitken, 1995; McEwan, 1998; Sanders, 1999; Senge, 2000; Wohlstetter, Kirk, Robertson & Mohrman, 1997).

A Gap Between the Standards-based Instruction and Assessment Model and Practice

Planning, instruction, and assessment occur as a cycle to some degree in all classrooms and schools, but to what degree is this process standards-based, informed by assessment data, and integrated at the organizational level? That is, to what degree does it conform to the principles of SBIA?

The standards-based reform movement has occurred in large part because many people believe there has been, and perhaps continues to be, a significant gap between instruction in practice and the principles of SBIA. Building in the mid-80s and continuing throughout the 90s, the education standards movement resulted in the widespread adoption, in almost all states, of written standards of content and performance and standards-based testing programs to measure student achievement and school performance. Before turning to some of the accomplishments of standards-based reform

in education, let us examine some of the conditions that led to it.

Certainly one of the most significant factors has been the glaring disparities in achievement, and presumably academic standards, among schools, especially between schools serving lower-income and/or minority populations and schools with more middle-income, non-minority populations. While these disparities have been fodder for numerous newspaper articles and government reports, one notable study with evidence of disparities in standards among schools was reported by the U.S. Department of Education's Office for Educational Research and Improvement. This study (OERI, 1994) found students in low-income schools getting "mostly As" in reading and math, while students matched on reading and mathematics ability but enrolled in schools with low percentages of kids in poverty were getting "mostly Cs." The obvious implication is that reading and mathematics ability levels sufficient to get high grades in lower-income schools would likely produce only "Cs" in schools with more higher-income children.

Other studies have examined more directly variation in teacher expectations and instructional practices among schools and classrooms. A review of observational research on reading instruction found "substantial variation across teachers" on such variables as academic focus, classroom management problems, and time-on-task in reading, with predictable relationships with reading achievement (Rosenshine & Stevens, 1984: 782). Likewise, a national survey of reading practices found "considerable variation" among teachers in reported teaching philosophies, learning goals, and instructional practices (Baumann, Hoffman, Duffy-Hester & Ro, 2000). (Note 2) A study of a large sample of teacher-made tests and quizzes from middle school general mathematics courses found teachers' tests varying widely in their level of alignment with the standards published by the National Council of Teachers of Mathematics. Some teachers' tests were composed of more than two-thirds "single-step/single-solution" computation problems, while for other teachers the percentage was as low as one-third, with the majority of their test questions involving higher levels of mathematical reasoning and problem solving. Teachers varied even more in the extent to which test questions involved contextualized problems, mathematical representations, and written explanations of mathematical reasoning (Archbald & Grant, 2000). A similar study by Senk, Beckman and Thompson (1997) found that teachers ranged greatly in their reliance on multiple-choice formats for their tests, from a low of 3% to a high of 42% of tests used in their courses. The percentage of "low-level" items (answers can be computed in one or two steps) among tests ranged from 55% to 90%. The data clearly suggest that teachers' views of what constitutes adequate achievement and their instructional and assessment practices vary widely.

As the above studies indicate, the particular teacher to which a student is assigned matters a lot. A growing body of research indicates that "differential teacher effectiveness is a strong determinant of differences in student learning, far outweighing the effects of differences in class size and heterogeneity" (Darling-Hammond, 2000). The teacher to which a student is assigned to a large degree determines the content s/he will cover, the pedagogy s/he will experience, and the academic standards s/he will be held to. In the absence of clear and consistent standards and standards-based assessment within the school, it stands to reason that there is likely to be more idiosyncratic, "standards-free and data-free," instructional planning and assessment, contributing to disparities in achievement expectations and the quality of instruction (Otto, Wolf & Eldridge, 1984; Valencia & Wixson, 2000). SBIA should be able to

reduce the number of students held to substandard academic expectations, subjected to poorly planned and sequenced instruction, and socially promoted through the grades with inadequate skills and undiscovered learning deficiencies.

Another factor fostering support for standards-based education reform policies has been the perception of deficient instructional management at the school and district level, perceived in part to be a product of the absence of agreed-upon standards and measurement of results, and limited accountability for results. (Note 3) An example of deficient measurement systems is revealed in this anecdote from an administrator in a New York district. She describes the following attempt to use data to make an important decision related to the effectiveness of a sixth grade accelerated math program:

    To find out about the students in accelerated math, I had to pursue several data sources: the current roster of students (middle school database), sixth grade scores on state exams (cumulative folders in the middle school guidance office), designation of mastery for eighth-graders on their sixth grade tests (testing and assessment database, located in the high school), and analysis of the scores against the current class lists. That work, which required several weeks of effort, suggested some interesting findings. The initial insight was technological in nature: We lack an integrated database that could help us answer questions such as who has access to accelerated courses. …A question from a parent opened the door to review of our math program and the realization that otherwise helpful data were too scattered to be immediately useful (King, 2000: 19).

This administrator, who appears to possess an exceptional commitment to using data for instructional decision-making purposes, is not in an exceptional situation in terms of her ability to access the data needed. Her story would characterize many districts.
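The administrator's account is, at bottom, a data-integration problem. The short Python sketch below illustrates the kind of cross-source join she performed by hand over several weeks; every table, field name, and score threshold here is hypothetical, invented for illustration rather than taken from any real district system.

```python
# Hypothetical sketch of the cross-source join described above.
# In the anecdote, these records lived in separate offices and
# databases; here they are just two in-memory structures.

roster = [  # middle school database: accelerated-math placements
    {"student_id": 1, "name": "A. Jones", "accelerated_math": True},
    {"student_id": 2, "name": "B. Smith", "accelerated_math": False},
    {"student_id": 3, "name": "C. Lee", "accelerated_math": True},
]

state_exam = {1: 92, 2: 88, 3: 71}  # testing database: 6th-grade scores

def access_report(roster, scores):
    """Join placement records with prior test scores by student id."""
    return [
        {
            "name": s["name"],
            "score": scores.get(s["student_id"]),
            "accelerated": s["accelerated_math"],
        }
        for s in roster
    ]

report = access_report(roster, state_exam)

# The question a parent might raise: which high scorers were not placed?
high_scorers_not_placed = [
    r["name"] for r in report
    if r["score"] is not None and r["score"] >= 85 and not r["accelerated"]
]
```

With an integrated student database this amounts to a single query; with the data scattered across buildings and file cabinets, as in the anecdote, the same join took weeks.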
Lacking clear standards and good assessment data, the concern is that too many teachers and administrators fall prey to easy, but unexamined, assumptions that standards are high, teachers are teaching effectively, and students are achieving at acceptable levels (Litow, 1999; Powell, Farrar & Cohen, 1985; Sanders, 1999; Sizer, 1984). If a school has neither clear grade-level achievement standards nor standards-based assessments of achievement, it is hard to know whether instruction is effective. This also creates conditions in which ineffective teachers can be buffered from accountability or pressures to improve, while exemplary teachers go unrecognized, with the outcomes of their efforts remaining invisible within the organization. Substandard instruction and learning outcomes are not easily changed if they are not easily identified. Without benchmarks against which to gauge performance and data to support analysis of practice, many schools are rudderless and unable to do much about it.

Whether standards-based reforms can fully resolve these problems remains unclear, and is probably unlikely. There is little research specifically on this question, but findings of studies that can speak to it support the view that the absence of clear standards and instructional planning information is an obstacle to school effectiveness. A study comparing successful and unsuccessful schools implementing school-based management found that the successful managers of reform had "access to a wide variety of information on student, staff, and school performance, and used the information to guide decision making, to provide feedback to school constituents, and to enhance organizational learning" (Wohlstetter et al., 1997: 213). Other studies emphasize the costs of deficient information and bad management: a large study of a systemwide school-based management initiative in Chicago delegating more authority and management responsibility to schools found that most schools did not improve, and a very large proportion, as much as one-third, actually got worse, with planning and decision-making in schools crippled by conflict, inadequate information, and poor decision-making (Bryk, 1999). The efficacy of standards-based reform is still more assumed than documented, but there is hope that the gap between SBIA and practice in schools may be lessened by state laws implementing clearer and more ambitious standards, improved assessments of achievement, and stronger accountability sanctions.

Standards-based Reform and Information Technology: Making Standards and Performance Outcomes Clearer

Few would dispute that the standards-based reform movement has far to go. But at the same time we must not underestimate the scope of the challenge or overlook progress. The United States' education system is enormously decentralized and has been characterized by leading scholars as fragmented in its governance (Cohen & Spillane, 1993), loosely coupled in its organizational structure (Weick, 1976), and suffused with cultural traditions making management practice and instructional methods highly resistant to change (Sarason, 1990; Stigler & Hiebert, 1999). Faced with these conditions, it will be neither quick nor easy to engender principles of SBIA in schools. Still, standards-based reform has produced some noteworthy systemwide changes.

Educational System Goals Are Clearer

Twenty years ago few states had educational goal statements specific enough to provide instructional guidance.
Now virtually all states have extensive sets of written standards for curriculum content and student achievement, referred to as content and performance standards (Gandal, 1997; Joftus & Berman, 1998). Content standards prescribe topics that must be covered, and performance standards prescribe skills and abilities and specific expectations by certain grades.

As states have rewritten and revised these standards over time, the progression has been inexorably one way: states have made content and performance standards more comprehensive and more specific and strengthened their statutory authority. Over the past several decades state-prescribed education goals have become more comprehensive and specific. Content and performance standards have been prescribed at increasing numbers of grades in the K-12 sequence, in increasing numbers of subjects, and with broader scope of coverage at each grade level. In addition, state curriculum documents and policies are increasingly prescribing instructional procedures, with examples to illustrate exemplary forms of student achievement (Archbald, 1999). To illustrate, compare a set of goal statements excerpted from U.S. History in Texas's 1985 standards with the current standards:

1985

Emergence of the U.S. as a world power.
A) describe the causes and effects of United States involvement in foreign affairs and in international conflicts
B) describe the United States international political, humanitarian, economic, and military cooperative efforts
C) analyze the foreign policies of the United States and their impact on the nation

Current Standards

The student understands the emergence of the United States as a world power between 1898 and 1920. The student is expected to:
(A) explain why significant events and individuals, including the Spanish-American War, U.S. expansionism, Henry Cabot Lodge, Alfred Thayer Mahan, and Theodore Roosevelt, moved the United States into the position of a world power;
(B) identify the reasons for U.S. involvement in World War I, including unrestricted submarine warfare;
(C) analyze significant events such as the battle of Argonne Forest and the impact of significant individuals including John J. Pershing during World War I; and
(D) analyze major issues raised by U.S. involvement in World War I, Wilson's Fourteen Points, and the Treaty of Versailles.

This kind of change has occurred in virtually all states (additional examples below). Educational goals, then, have been made clearer, more specific, and more uniform throughout state education systems. In fact, research indicates goals have become more uniform among states within the country (Gandal, 1997; Joftus & Berman, 1998; Raimi & Braden, 1998). A major component of standards-led education reform has been the production of influential national curriculum reports. (Note 4) As state curriculum committees across the country worked on revising and upgrading their own documents and standards, they often looked to the national curriculum reports for guidance while sharing information through networking, conferences and the like, activities which many observers believe have led to greater uniformity of standards among states.
Performance Measurement Has Improved

The amount of student achievement testing has increased steadily over the years, but more significant for the goals of SBIA, the sophistication of performance measurement has improved. During the 80s, statewide testing programs expanded in scope, with testing expanding to cover more grade levels and more subjects (CCSSO, 1998; Clotfelter & Ladd, 1996; Linn, 2000). The 90s brought about four important changes.

First, there was growing recognition of the limitations of exclusive reliance on norm-referenced comparisons (Linn, 2000; Shepard, 1990). State testing programs became more criterion-referenced, with scores of students and school mean scores reported in relation to fixed benchmarks derived from published content and performance standards. In addition to reporting, "we are at the 55th percentile," a school would report, "our average score is 3.2 on the 5-point scale, where 3.0 is the threshold described as 'at standard'."

Second, exclusive reliance on multiple-choice formats gave way to mixed formats: tests with both multiple-choice items and constructed-response items. This in theory makes the tests more valid for measuring problem solving and knowledge integration. Many more states also started using writing tests in which students produce short essays graded by readers.

Third, states moved toward more analytical measures of school performance reporting. Educators and researchers have long understood that a school's test scores are a function not just of the quality and effectiveness of its staff and programs, but also of the socioeconomic backgrounds of its students. Therefore, comparing schools on "raw" descriptive statistics, such as the mean of their students' scores on a particular test, is not particularly useful (Meyers, 2000; Willms, 1992). High-scoring schools almost always have low percentages of low-income students; vice versa for the low-performing schools.
By the late 80s many states began using more analytical measures of school performance. One approach is to adjust for student socioeconomic background and other characteristics considered exogenous to the school, so schools' scores are compared only to those of other demographically similar schools. Another approach (not mutually exclusive with demographic adjustments) is to report and evaluate schools' scores on the basis of achievement gains (or losses) over time. Within these broad analytical reporting strategies there are a variety of more specific statistical methods. The main point, though, is that over the last fifteen years state education agencies shifted toward the reporting of measures of school performance that are significantly more advanced than mere reporting of "raw" descriptive statistics (Linn, 2000).

Fourth, school performance measurement has increasingly included information beyond test scores. Other measures of performance have also been added to states' systems for evaluating schools. These include graduation rates, percentages of students taking the SAT, and enrollments in college prep courses.

Clarifying Goals, Disseminating Performance Information, and Creating More Accessible Information

Educational standards and assessment results can aid decision-making and influence practice only if people know the standards and can access the assessment results. One obstacle to standards-based reform has been that teachers sometimes do not know much in detail about new standards documents (Archbald, 1997; Cohen, 1997); teachers have also confronted difficulties in accessing assessment results. The Internet, local computer networking, and electronic data interchange are overcoming some of these obstacles to access and understanding, and therefore increasing the number of teachers and principals using information on standards and results for instruction and instructional management.

Prior to the advent of the Internet and computer networking for electronic data interchange, information about goals and performance was distributed exclusively in paper reports. Curriculum guides were printed and bound by state education agencies and then distributed to school district office personnel by mail or at conferences of district curriculum coordinators; district personnel in turn had the responsibility of distributing these documents to schools. It was generally expected that at least one copy would be available in schools' administrative offices, and it was hoped that each teacher would have his/her own set, but this often depended upon the administrators' predilections and photocopying resources. The same general approach operated with respect to reports of school and school district performance on state tests, although unlike curriculum documents, test data (typically in the form of magnetic tape) would eventually also be sent to school districts.

Paper-based, manual distribution of information still occurs; however, there is a parallel process today of electronic information exchange. Virtually all school- and district-level decision-makers now have access to electronic information. Over 95% of schools have computers connected to the Internet and local area networks (NCES, 2000; NCES, 2001). (Note 5) The infrastructure is in place, then, to profoundly improve access to decision-support information about educational goals, standards, and performance. Throughout the nation, states and districts are using this IT infrastructure to do just that.
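The demographic-adjustment idea behind the analytical reporting strategies described earlier can be sketched in a few lines of Python: regress school mean scores on a demographic predictor and compare schools by their residuals rather than their raw means. Everything here, from the school names to the percent-low-income predictor, is invented for illustration; real state systems use more elaborate statistical models.

```python
# Illustrative sketch of demographically adjusted school comparison.
# All data are hypothetical; states use more sophisticated models.

def ols_fit(x, y):
    """Least-squares fit of y = a + b*x (one predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# (percent low-income students, school mean test score)
schools = {
    "North": (10, 82),
    "South": (60, 70),
    "East": (35, 79),
    "West": (85, 66),
}

pcts = [p for p, _ in schools.values()]
means = [m for _, m in schools.values()]
a, b = ols_fit(pcts, means)

# Adjusted performance: how far each school sits above or below the
# score predicted for its demographic profile (the regression residual).
adjusted = {name: m - (a + b * p) for name, (p, m) in schools.items()}
```

On these invented data, raw means rank North first, but the adjusted comparison credits East and West for outperforming demographically similar schools, which is exactly the shift in interpretation the analytical reporting strategies aim for.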
The examples below are illustrative of typical web-based EDI concerning educational standards and school performance.

Information on state content and performance standards is typically accessed through a four-step process of web navigation: (1) the state education agency website; (2) the office responsible for curriculum/instruction; (3) the link(s) to "standards"; (4) the link(s) to particular content areas and grade levels. Increasingly, standards documents are displayed in portable document format, but they are also available in rich text and ASCII text formats. This maximizes their accessibility.

Figure 1 shows Virginia's web-based menu box for specifying the subject area and grade level standards.

Figure 1. Typical Menu Box to Access Content/Performance Standards

Figure 2 shows a selection from the "Grade Eight Standards for Learning" from Virginia's content/performance standards documents, which are accessed on the web in

portable document format.

Figure 2. Excerpt from Virginia's Web-based Standards Document

Most states with web-based access to standards documents have relatively similar processes of access.

Distributing, revising, accessing, and using standards documents on the web is far easier and more efficient than managing paper reports. There are obvious efficiencies of distribution and revision. The information moves electronically instead of in cartons in trucks, and can be revised without incurring large re-distribution costs. (This is not to say frequent revision is a good idea; far from it.) More important from the perspective of SBIA is that users always have access to the information. Teachers planning lessons or

committees planning curriculum can immediately access the specific standards and areas of content they need for guidance. They can do it from home. Parents can also access the information. No one has to try to remember where their curriculum document is stored and ask for a new copy when they cannot find it.

Information on student and school performance is also increasingly on-line. Following are a few examples. Figure 3 shows the Delaware State Testing Program home page. The home page provides a great deal of background information on the state testing program, including an explanation of the scales used to represent student and school performance, sample items used in the state tests, and an on-line newsletter about the testing program and related assessment topics. Figure 4 shows the menu of selections for users to produce on-line reports of school-level test scores. A subsequent menu (not shown) allows users to select particular schools and to disaggregate results by student categories (race, gender, special ed, etc.). Figure 5 shows a report from a similar system in Arizona. The portable document format report for a selected school provides not only test score information, but also information on staff, student enrollments, curriculum specialties, the school academic calendar, expenditures, discipline, and other items of information. Figure 6 shows a hypertext markup language report produced from data for the Chicago school district. This web-based system produces reports reflecting the results of value-added analyses computed at the school level. The reports show whether a school's productivity trend over the last nine years is upward, downward, or flat. These examples are illustrative of the kind of school performance information that has become widely available on the web.

Figure 3. Web Homepage for Delaware State Testing Program (DSTP)
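The disaggregation offered by report menus of this kind reduces, at its core, to grouping student records by a chosen category and summarizing each group. Here is a toy Python sketch; the records and field names are invented, and a real system would pull these from the state assessment database.

```python
from collections import defaultdict

# Invented student records for illustration only.
records = [
    {"school": "Lincoln", "gender": "F", "score": 78},
    {"school": "Lincoln", "gender": "M", "score": 72},
    {"school": "Lincoln", "gender": "F", "score": 84},
    {"school": "Lincoln", "gender": "M", "score": 70},
]

def disaggregate(records, category):
    """Mean score for each value of the chosen student category."""
    groups = defaultdict(list)
    for r in records:
        groups[r[category]].append(r["score"])
    return {value: sum(scores) / len(scores)
            for value, scores in groups.items()}

by_gender = disaggregate(records, "gender")  # {'F': 81.0, 'M': 71.0}
```

The same function serves any category in the record (school, race, special ed status), which is how one menu can drive many different disaggregated reports.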


Figure 4. Menu to Select Delaware School Level Test Score Results

Figure 5. School Report Card: Arizona


Figure 6. Gain Score Report from Chicago Public School System

The web-based information is intended for anyone with an interest in the performance of schools in a given locale (region or district) or in the performance of a particular school. Teachers, principals, and parents can evaluate the performance of "their" school in relation to standards and in relation to the performance of other schools; school board members, civic leaders, and policymakers can evaluate the performance of groups of schools in which they have an interest. The information reveals how well students are achieving in schools, identifies needs for improvement, and, depending upon the quality of the measurements, shows which schools are performing adequately and which schools are not. This creates a more efficient and rational basis for instructional management decision-making, and for allocating rewards, sanctions, and resources aimed at school improvement.

Before computer networking and electronic information exchange, performance data on schools were locked exclusively into paper reports that moved slowly through the organization, typically down the organizational hierarchy, from one office to the next. Often it was four or five months after testing before test score reports and other school reports were distributed throughout the organization. At the school level, administrators would typically have responsibility for disseminating test results among school faculty, a process carried out in a wide variety of ways, with widely varying efficiency, and often involving a lot of photocopying. The same tasks and difficulties of paper report management described earlier with standards documents also affect the distribution and use of test score reports. The web and EDI have changed this.
Now, routinely, information from statewide testing is available over the web at the same time to everyone: teachers, parents, school principals, guidance counselors, district office curriculum coordinators, personnel supervisors, superintendents, and school board members. Many states and districts also make individual student achievement information available to authorized users (e.g., the child's teacher, guidance counselor, principal, and district data specialists). This has created substantially improved access to information for teacher-parent conferencing, for IEP meetings for special education students, and for instructional planning.

Some Conclusions About Progress Toward SBIA

Education critics are often quick to declare that reforms have failed when dramatic results are not seen after a few years. This perspective reflects an inadequate grasp of the scale and complexity of the education system, and of the fact that schools' academic influence is shaped greatly by the level of support from home and the values of local communities and wider society. The progress described above in specificity of standards, performance measurement, and information access is significant and should be recognized. The quality of state content and performance standards for curriculum has unquestionably improved, owing to leadership among national curriculum organizations and to the growing stock of experience among dozens of state and district level standards committees. The quality of performance measurement has improved as a result of scholarly advocacy, advances in the psychometric field, and heightened awareness among policymakers and measurement specialists in education agencies. And the revolution in information technology has brought to schools networked information systems that have given teachers and administrators much improved access to information on curriculum standards, school performance reports, and student and school profile characteristics.

The Next Challenge: Integrating SBIA Into Instructional Practice

The education policy changes described above have resulted from standards-based reform and developments in IT.
Have these changes at the policy level affected instructional practice in schools? In its ideal formulation, SBIA requires that the planning, instruction, and assessment cycle be integrated among classrooms and grade levels. Standards of achievement should be clearly understood, consistent throughout the school, and guide unit and lesson planning. There should be easy access to student, instructional, and organizational data to monitor performance, to diagnose student needs and problems, and to develop improvement strategies. Undoubtedly, standards-based reform policies have made teachers and principals far more aware of prescribed standards and school performance outcomes today than a decade ago. But are standards-based reform and IT changing practices of instruction and instructional management within schools in ways more consistent with the SBIA model? Less is known about this.

We do know that IT can play an important role in facilitating standards-based reform and the implementation of SBIA within schools at the level of instructional practice. Relational database management technology and computer-based instructional management systems have gone far toward making SBIA technically feasible. The rapid rise of vendors and software programs for instructional management attests to this. There are a variety of instructional management and information technology systems now available. The goal of these systems is to use computer technology to improve the development and management of lesson plans, student academic records, and information about students' instructional experiences. Reflecting principles of standards-based reform, these instructional management systems almost always have features to link lesson and unit objectives to larger curriculum content and performance standards, whether these standards come from district, state, or national sources. These systems are also designed to manage student achievement records, such as grades and test scores, as well as to record instructional experiences such as lessons and tasks completed, books read, essays written, and the like.

While there is very little research specifically on the extent to which instructional management systems are effective in achieving the goals of SBIA within schools and classrooms, available research and theory suggest effectiveness will depend upon several factors.

Leadership And School Culture

Values and practices of leaders in the school can either encourage or discourage SBIA. Support for SBIA is created by values and practices such as: norms of collegial conversation that reflect on practice; frequent meetings to examine evidence on instructional practice, staff performance, or student achievement; commitments to professional development and resource support for CRA; and statements, behaviors, and symbols that communicate values of trust, experimentation, and open communication. (Note 6) School leaders must be experienced, committed, and knowledgeable in order to create these conditions.

Often, these conditions do not occur. Instead of trust, teachers or principals may be suspicious that information will be used against them, or simply unmotivated to pursue the advantages computers may bring (KHEC, 2000; Stiggins, 2001). Such fears are reinforced when test scores are publicly reported and lead to simplistic media proclamations about "poor teaching and failing schools," or are used inappropriately by uninformed district officials.
Another factor militating against the development of cultures conducive to SBIA is past negative experience with instructional management or student information technologies (Rosen & Weil, 1995). It was not until well into the 1990s that faster CPUs, greater storage capacity, faster connections, better graphical user interfaces, and other IT advances made more user-friendly systems possible. During the developmental days of computers in the 1980s, and even into the 1990s, untold numbers of schools and districts across the country hastily adopted computer-based instructional management and student information systems sold with promises and attractive presentations, only to find the systems actually complicated, cumbersome, time-consuming, and difficult to use. Frustrating early experiences with computers have left many personnel in schools skeptical about new "technology solutions" promised by technology advocates outside the school.

Personnel Technical Proficiency

There is no getting around the fact that for SBIA to be integrated into instructional practice, all or nearly all of the school's faculty and administrators must have a certain level of computer proficiency. While SBIA does not technically require computers, neither does organizing and retrieving information in libraries. But card catalogues and 800-page indexes of serials are essentially obsolete because of computers. Computers and well-designed instructional management software are needed to fully realize the potential of SBIA to promote uniform standards within a school, to track student progress through the year and among grades, to report information to users, and to analyze and evaluate results.

Despite the ubiquity of computers in schools, teachers and administrators are still predominantly of the pre-computer generation. Many are still reluctant computer users, limited to fairly simple procedures such as word processing or web browsing (Cooley, 1998; OTA, 1995). Compared with prescribed standards of proficiency (e.g., Coughlin, 1999; ISTE, 2000), the proportion of teachers and principals proficient in more advanced computing uses remains relatively small. With respect to the proficiencies required for data-driven decision-making, for instance, a prominent UCLA education research center describes school decision-makers as "woefully underprepared to use data [to document results] and for planning and decision-making purposes" (Linn & Baker, 1999). (Note 7) Several reports indicate that school districts generally give insufficient attention and resources to training teachers to gain needed proficiencies with educational technology, spending about half as much as they should (MDR, 2000; OTA, 1995).

Technical Obstacles

Information management systems themselves may have shortcomings that frustrate and impede the goals of users. Administrators and teachers have very little "down time" during the day, so systems must be designed to provide needed information quickly and easily. While a reasonable level of computer proficiency can be expected, teachers and administrators cannot all be expected to be advanced computer users. The systems must be designed, to the maximum extent feasible, around the habits and needs of users, rather than users having to adapt inadequately designed systems to the unique conditions in schools and needs of school-based users.
Here are a few issues that designers of these systems must consider.

All schools have some turnover of students over the course of the school year, and many schools have high levels of student turnover. According to one national estimate, one out of six children will attend three schools by the time they are in third grade (GAO, 1994). It is therefore essential to develop procedures to ensure that student databases are current and easy to update. When a new student arrives in a school, the registration information must be entered immediately into the data system and the appropriate records updated. It is a waste of time, and discourages reliance on data for planning and decision making, if teachers or administrators attempt to use a system and frequently encounter missing or out-of-date information.

A closely related issue is linkages between databases. Functionality decreases to the degree that an information management system is unable to pull together current information from different databases. If student demographic and registration information cannot be easily linked with student achievement information, and student achievement information cannot be linked with curricular or discipline information, the usefulness of the data declines markedly. It is not enough that the data exist; they must be cross-referenced and easily queried by users.

A third issue concerns the quality of instructional and achievement data. Relatively detailed information is needed for instructional planning. Ideally, SBIA requires at least three types of information: standards-based assessment information, at least once per marking period, from tests and tasks closely aligned with the school's curriculum; state/district achievement test information; and instructional information at the student level related to curriculum objectives. State testing information by itself is inadequate. As Clements (2000) observed:

The decision support systems now under development generally contain the accountability data, because they are the data that are available. These data are considered to be of sufficient quality to warrant widespread use for school and district comparisons and to highlight where effective practice seems to be occurring. While these data may be useful to some decision-makers looking for places where assistance is needed, the data appear to have limited use to teachers and principals seeking to improve what they are doing. (p. 3)

A fourth set of issues concerns the accessibility of the information. Teachers and administrators must be able to use the systems easily. As stated earlier, users should be expected to be proficient with common software programs and commands and basic principles of data analysis and management. At the same time, the systems must be tailored to the needs and proficiency levels of this audience. This means that users must be able to access and query the system from their own computers and produce useful reports as needs arise, without having to struggle with manuals, engage in frequent trial and error, navigate through dozens of screens, or frequently solicit technical consultation. If these standards of ease and functionality cannot be achieved, teachers and principals will of course continue to make instructional and management decisions; only the decisions will be made on the information at hand, however anecdotal, unsystematic, or incomplete it may be.

The variables above are some of the most important that will determine the benefits to be derived from the "next generation" of IT-based instructional management systems for schools.
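The database-linkage issue discussed above can be made concrete with a small hypothetical sketch. The table names, fields, and records below are illustrative assumptions, not the schema of any actual district system; the point is simply that a shared student identifier lets registration and achievement records be cross-referenced, and immediately exposes students for whom data are missing.

```python
import sqlite3

# Hypothetical sketch: registration and achievement records kept in
# separate tables become useful only when they share a key (student_id).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE registration (
        student_id INTEGER PRIMARY KEY,
        name       TEXT,
        grade      INTEGER
    );
    CREATE TABLE achievement (
        student_id INTEGER,
        test_name  TEXT,
        score      INTEGER
    );
""")
conn.executemany("INSERT INTO registration VALUES (?, ?, ?)",
                 [(1, "Student A", 3),
                  (2, "Student B", 3)])   # a mid-year transfer, no scores yet
conn.executemany("INSERT INTO achievement VALUES (?, ?, ?)",
                 [(1, "State Reading", 430)])

# A LEFT JOIN keeps every enrolled student and shows at a glance who
# still lacks assessment data (the score comes back as NULL/None).
rows = conn.execute("""
    SELECT r.name, a.score
    FROM registration r
    LEFT JOIN achievement a ON r.student_id = a.student_id
    ORDER BY r.student_id
""").fetchall()
# rows == [("Student A", 430), ("Student B", None)]
```

Without the shared identifier, the two tables could not be joined at all, which is exactly the condition under which, as the text puts it, the usefulness of the data declines markedly.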
Most schools and districts are still struggling with the challenges of converting existing paper-based data systems to electronic formats and connecting multiple, separate databases to create more integrated, warehoused data management systems. Today in most districts, school-based personnel can access electronically (a) information on state curriculum content standards and schoolwide profile reports, and (b) information on individual students from student records. However, information of type (a) is in aggregate form and has been compiled by others, while information of type (b) is typically limited to registration, scheduling, and report card information and is difficult to aggregate and analyze. Further, it is typically not the case that all teachers can access information of types (a) or (b) easily from their classrooms on an "as needed" basis. The goal of "next generation" IT systems is to surmount the school-level leadership, personnel proficiency, and technical obstacles and to implement systems that can place at teachers' fingertips the information necessary to fully realize the vision of SBIA (Means, 2000). This can bring standards-based reform more effectively to the classroom level. Then, perhaps, we will see payoffs in rising student achievement.

Notes

1. Otto, Wolf & Eldridge (1984), based on a review of reading research, conclude that "when teachers do more ongoing diagnosis and utilize information in planning appropriate instruction, achievement scores tend to be higher" (p. 814).


2. See also Elmore, Peterson & McCarthy (1996) for a study finding highly varied adaptations of reading instruction among elementary teachers in "restructured" schools. Also, see CCSSO (2000).

3. For studies documenting management dysfunctions in public education systems, see Ascher (1996); Bryk (1999); Hess (1999); Hula, Jelier & Schauer (1997); Olson (1997); Litow (1999); Mattoon & Testa (1995); Ravitch & Viteritti (1997); and Ravitch (1999). These studies describe problems of educational management and instructional planning stemming from poor information, inability to conduct strategic planning, and political influences.

4. In many ways 1989 was the apogee of standards-based reform advocacy. President Bush and the nation's governors at the Charlottesville Education Summit called for a nationwide commitment to higher academic standards, and 1989 brought the publication of three key national "standards" reports: Everybody Counts (National Research Council), Science for All Americans (American Association for the Advancement of Science), and Curriculum and Evaluation Standards for School Mathematics (National Council of Teachers of Mathematics). Reports in other subjects followed shortly: Building a History Curriculum: Guidelines for Teaching History in Schools (Bradley Commission on History in Schools), Curriculum Guidelines (National Council for the Social Studies), and Charting a Course: Social Studies for the 21st Century (National Commission on Social Studies in the Schools).

5. Becker in 1994 reported that over the three years between 1989 and 1992, the number of computers in U.S. schools grew by 300,000 to 400,000 per year.

6. For more on this see Barth (1990); Leithwood & Aitken (1995); and Senge (2000).

7. For more on this, see Hurst (1997), President's Committee (1997), and Streifer (1999).
Computer availability and information technology are most inadequate in districts on the low-income, low-tech side of the "digital divide" (President's Committee, 1997).

References

Angelo, T. (1998). Classroom assessment and research: Uses, approaches and research findings. San Francisco: Jossey-Bass.

Archbald, D. (1997). Curriculum control policies and curriculum standardization: Teachers' reports of policy effects. International Journal of Educational Reform, 6(2), 155-173.

Archbald, D. (1999). The reviews of state content standards in English language arts and mathematics: A summary and review of their methods and findings and implications for future standards development. Washington, DC: National Education Goals Panel. [http://www.negp.gov/page9-3.htm#Std]

Archbald, D. & Grant, T. (2000). What's on the test? An analytical framework and findings from an examination of teachers' mathematics tests. Educational Assessment, 6(4).

Ascher, C., Fruchter, N., & Berne, R. (1996). Hard lessons. NY: The Twentieth Century Fund Press.

Baumann, J., Hoffman, J., Duffy-Hester, A. & Ro, J. (2000). The First R yesterday and today: U.S. elementary reading instruction practices reported by teachers and administrators. Reading Research Quarterly, 35(3), 338-377.

Becker, H. (1994). Analysis and trends of school use of new information technologies. Prepared for the U.S. Congress Office of Technology Assessment. Washington, DC: U.S. Government Printing Office.

Bryk, A. (1999). Policy lessons from Chicago's experience with decentralization. In D. Ravitch (Ed.), Brookings papers on education policy (pp. 67-98). Washington, DC: Brookings Institution Press.

CCSSO (1998). State education accountability reports and indicator reports: Status of reports across the states 1998. Washington, DC: CCSSO State Education Assessment Center.

CCSSO (2000). Using data on enacted curriculum in mathematics and science: Sample results from a study of classroom practices and subject content. A joint study by Council of Chief State School Officers, State Education Assessment Center; Wisconsin Center for Education Research at the University of Wisconsin, Madison; and the State Collaborative. www.ccsso.org/Publications

Clements, B. (2000). Background paper on ideal education information for improving classroom practice. Paper commissioned for Conference on Ideal Education Information for Improving Classroom Practice, by Office of Chief Information Officer, U.S. Department of Education, Washington, DC, September 28-29.

Cohen, D. (1996). Standards-based school reform: Policy, practice, and performance. In H. Ladd (Ed.), Holding schools accountable: Performance-based reform (pp. 99-127). Washington, DC: Brookings Institution.

Cohen, D. & Spillane, J. (1993). Policy and practice: The relations between governance and instruction. In S. Fuhrmann (Ed.), Designing coherent education policy (pp. 35-95). San Francisco: Jossey-Bass Publishers.

Cooley, V. (1998). Technology lessons. Electronic School [Online]. Available: http://www.electronic-school.com/0698f3.html

Coughlin, E. (1999). Professional competencies for the digital age classroom. Learning and Leading with Technology, 27(3), 22-27.

Clotfelter, C. & Ladd, H. (1996). Recognizing and rewarding success in public schools. In H. Ladd (Ed.), Holding schools accountable: Performance-based reform (pp. 23-64). Washington, DC: Brookings Institution.

Darling-Hammond, L. (2000). Teacher quality and student achievement: A review of state policy evidence. Education Policy Analysis Archives, 8(1). [http://olam.ed.asu.edu/epaa/vol8.html]

Elmore, R., Peterson, P. & McCarthy, S. J. (1996). Restructuring in the classroom: Teaching, learning, and school organization. San Francisco: Jossey-Bass.

GAO (1994). Elementary school children: Many change schools frequently, harming their education. Washington, DC: General Accounting Office. [GAO/HEHS-94-45]

Gandal, M. (1997). Making standards matter: An annual fifty-state report on efforts to raise academic standards. Washington, DC: American Federation of Teachers.

Hess, F. (1999). Spinning wheels: The politics of urban school reform. Washington, DC: Brookings Institution.

Hula, R., Jelier, R., & Schauer, M. (1997). Making educational reform: Hard times in Detroit, 1988-1995. Urban Education, 32(2), 202-232.

Hurst, D. (1997). Teaching technology to teachers. Educational Leadership.

ISTE (2000). National educational technology standards for students: Connecting curriculum and technology. Eugene, OR: International Society for Technology in Education.

Joftus, S. & Berman, I. (1998). Great expectations? Defining and assessing rigor in state standards for mathematics and English language arts. Washington, DC: Council for Basic Education.

Jones, B. (2000). Educational leadership: Policy dimensions in the 21st century. Stamford, CT: Ablex Publishing Corporation.

KHEC (2000). The data made me do it. Policy Perspectives, 9(2), 1-12 (publication of the Knight Higher Education Collaborative, University of Pennsylvania, Philadelphia, PA).

King, S. (2000). Tracking data on student achievement. In B. Kallick and J. Wilson (Eds.), Information technology for schools (pp. 17-32). San Francisco: Jossey-Bass.

Leithwood, K. (1990). The principal's role in teacher development. In Changing school culture through staff development. Alexandria, VA: ASCD Press.

Leithwood, K. & Aitken, R. (1995). Making schools smarter: A system for monitoring school and district progress. Thousand Oaks, CA: Sage.

Levine, D. & Lawrence, L. (1990). Unusually effective schools: A review and analysis of research and practice. Madison, WI: The National Center for Effective Research and Development.

Linn, R. (2000). Assessment and accountability. Educational Researcher, 29(2), 4-16.

Linn, R. & Baker, E. (1999). Techies, trekkies, and luddites. The CRESST Line, Winter, pp. 1, 6.


Litow, S. (1999). Problems of managing a big city school system. In D. Ravitch (Ed.), Brookings papers on education policy (pp. 185-216). Washington, DC: Brookings Institution Press.

MDR (Market Data Retrieval). (2000). Technology in education 2000. [Online]. Available: http://www.schooldata.com/publications3.html

Mattoon, R. & Testa, W. (1995). Midwest approaches to school reform. Economic Perspectives, 19, Jan/Feb, 2-19.

McEwan, E. (1998). The principal's guide to raising reading achievement. Thousand Oaks, CA: Corwin.

Means, B. (2000). Technology use in tomorrow's schools. Educational Leadership, 58(4), 57-61.

Meyers, R. (2000). Value-added indicators: A powerful tool for evaluating science and mathematics programs and policies. NISE Brief, 3(3), June.

NEC (1985). Principals who produce results: How they think, what they do. Arlington Heights, IL: Northwest Educational Cooperative.

NCES (National Center for Education Statistics). (2000). Teachers' tools for the 21st century: A report on teachers' use of technology (NCES 2000-102). Washington, DC: U.S. Department of Education. [Online]. Available: http://nces.ed.gov/spider/webspider/2000102.html

NCES (National Center for Education Statistics). (2001). Internet access in U.S. public schools and classrooms: 1994-2000 (NCES 2001-071). Washington, DC: U.S. Department of Education. [Online]. Available: http://nces.ed.gov/pubs2001/internet/

OTA (Office of Technology Assessment, U.S. Congress). (1995). Teachers and technology: Making the connection (OTA-EHR-616). Washington, DC: U.S. Government Printing Office.

Otto, W., Wolf, A. & Eldridge, R. G. (1984). Managing instruction. In P. D. Pearson (Ed.), Handbook of reading research (pp. 799-827). New York: Longman.

Olson, L. (1997). "Annenberg challenge" proves to be just that. Education Week (Web), June 25, 2-3.

Powell, A., Farrar, E., & Cohen, T. (1985). The shopping mall high school: Winners and losers in the educational marketplace. Boston: Houghton Mifflin.
President's Committee. (1997). Report to the President on the use of technology to strengthen K-12 education in the United States. President's Committee of Advisors on Science and Technology, Panel on Educational Technology, March 1997. [www.whitehouse.gov/WH/EOP/OSTP/NSTC/PCAST/k-12ed.html]

Raimi, R. & Braden, L. (1998). State mathematics standards: An appraisal of math standards in 46 states, the District of Columbia, and Japan. Washington, DC: Fordham Foundation.


Ravitch, D. & Viteritti, J. (1997). New York: The obsolete factory. In D. Ravitch & J. Viteritti (Eds.), New schools for a new century (pp. 17-36). New Haven: Yale University Press.

Rosen, L. D., & Weil, M. (1995). Computer availability, computer experience and technophobia among public school teachers. Computers and Human Behavior, 11(1), 9-31.

Rosenshine, B. & Stevens, R. (1984). Classroom instruction in reading. In P. D. Pearson (Ed.), Handbook of reading research (pp. 745-798). New York: Longman.

Sanders, E. (1999). Urban school leadership: Issues and strategies. Larchmont, NY: Eye on Education.

Sanders, W. L. & Rivers, J. C. (1996). Cumulative and residual effects of teachers on future student academic achievement. Knoxville: University of Tennessee Value-Added Research and Assessment Center.

Sarason, S. (1990). The predictable failure of educational reform. San Francisco: Jossey-Bass.

Senge, P., Cambron-McCabe, N., Lucas, T., Smith, B., Dutton, J., & Kleiner, A. (2000). Schools that learn. NY: Doubleday.

Senk, S. L., Beckman, C. E., & Thompson, D. R. (1997). Assessment and grading in high school mathematics classrooms. Journal for Research in Mathematics Education, 28(2), 187-215.

Shepard, L. (1990). Inflated test score gains: Is the problem old norms or teaching to the test? Educational Measurement: Issues and Practice, 9(3), 15-22.

Sizer, T. R. (1984). Horace's compromise: The dilemma of the American high school. Boston, MA: Houghton Mifflin.

Smith, P. & Ragan, T. (1999). Instructional design. New York: John Wiley and Sons.

Stiggins, R. (2001). The principal's leadership role in assessment. National Association of Secondary School Principals' Bulletin, 85(621), 13-26.

Stigler, J. & Hiebert, J. (1999). The teaching gap. New York: The Free Press.

Stotsky, S. (1997). State English language arts standards: An appraisal of English language arts/reading standards in 28 states. Washington, DC: Fordham Foundation.

Streifer, P. (1999). Data-based decision making through fast-track evaluation: What is it and why do it? Schools in the Middle, 9(1).

Tanner, D. (2001). Assessing academic achievement. Boston: Allyn and Bacon.

Valencia, S. & Wixson, K. (2000). Policy-oriented research on literacy standards. In M. Kamil, P. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research, Vol. III (pp. 909-936). Mahwah, NJ: Lawrence Erlbaum Associates.


Weick, K. (1976). Educational organizations as loosely-coupled systems. Administrative Science Quarterly, 21, 1-19.

Wiggins, G. (1998). Educative assessment: Designing assessments to inform and improve student performance. San Francisco: Jossey-Bass.

Wilms, J. D. (1992). Monitoring school performance. London: Falmer Press.

Wohlstetter, P., Kirk, A., Robertson, P. & Mohrman, S. (1997). Organizing for successful school-based management. Arlington, VA: Association for Supervision and Curriculum Development.

About the Author

Dr. Douglas A. Archbald is an Associate Professor in Educational Leadership and Policy in the doctoral program at the University of Delaware in the College of Human Services, Education, and Public Policy. Dr. Archbald teaches courses in Education Policy, Educational Evaluation, Curriculum, and Legal Issues in Education. As a core faculty member of the Delaware Academy for School Leadership, Dr. Archbald works with education leaders on the use and management of education data for planning and decision-making. Dr. Archbald has been a principal investigator for several national research studies and has published more than 35 articles, book chapters, and commissioned research reports, including Beyond Standardized Testing: Assessing Authentic Academic Achievement in the Secondary School (co-authored with Dr. Fred Newmann in 1988).

Copyright 2001 by the Education Policy Analysis Archives

The World Wide Web address for the Education Policy Analysis Archives is epaa.asu.edu. General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, glass@asu.edu, or reach him at College of Education, Arizona State University, Tempe, AZ 85287-0211 (602-965-9644). The Commentary Editor is Casey D. Cobb: casey.cobb@unh.edu.

EPAA Editorial Board

Michael W. Apple, University of Wisconsin
Greg Camilli, Rutgers University
John Covaleskie, Northern Michigan University
Alan Davis, University of Colorado, Denver
Sherman Dorn, University of South Florida
Mark E. Fetler, California Commission on Teacher Credentialing
Richard Garlikov, hmwkhelp@scott.net
Thomas F. Green, Syracuse University
Alison I. Griffith, York University
Arlen Gullickson, Western Michigan University


Ernest R. House, University of Colorado
Aimee Howley, Ohio University
Craig B. Howley, Appalachia Educational Laboratory
William Hunter, University of Calgary
Daniel Kallós, Umeå University
Benjamin Levin, University of Manitoba
Thomas Mauhs-Pugh, Green Mountain College
Dewayne Matthews, Education Commission of the States
William McInerney, Purdue University
Mary McKeown-Moak, MGT of America (Austin, TX)
Les McLean, University of Toronto
Susan Bobbitt Nolen, University of Washington
Anne L. Pemberton, apembert@pen.k12.va.us
Hugh G. Petrie, SUNY Buffalo
Richard C. Richardson, New York University
Anthony G. Rud Jr., Purdue University
Dennis Sayers, California State University—Stanislaus
Jay D. Scribner, University of Texas at Austin
Michael Scriven, scriven@aol.com
Robert E. Stake, University of Illinois—UC
Robert Stonehill, U.S. Department of Education
David D. Williams, Brigham Young University

EPAA Spanish Language Editorial Board

Associate Editor for Spanish Language: Roberto Rodríguez Gómez, Universidad Nacional Autónoma de México, roberto@servidor.unam.mx

Adrián Acosta (México), Universidad de Guadalajara, adrianacosta@compuserve.com
J. Félix Angulo Rasco (Spain), Universidad de Cádiz, felix.angulo@uca.es
Teresa Bracho (México), Centro de Investigación y Docencia Económica-CIDE, bracho@dis1.cide.mx
Alejandro Canales (México), Universidad Nacional Autónoma de México, canalesa@servidor.unam.mx
Ursula Casanova (U.S.A.), Arizona State University, casanova@asu.edu
José Contreras Domingo, Universitat de Barcelona, Jose.Contreras@doe.d5.ub.es
Erwin Epstein (U.S.A.), Loyola University of Chicago, Eepstein@luc.edu
Josué González (U.S.A.), Arizona State University, josue@asu.edu
Rollin Kent (México), Departamento de Investigación Educativa-DIE/CINVESTAV, rkent@gemtel.com.mx, kentr@data.net.mx
María Beatriz Luce (Brazil), Universidad Federal de Rio Grande do Sul-UFRGS, lucemb@orion.ufrgs.br
Javier Mendoza Rojas (México), Universidad Nacional Autónoma de México, javiermr@servidor.unam.mx
Marcela Mollis (Argentina), Universidad de Buenos Aires, mmollis@filo.uba.ar
Humberto Muñoz García (México), Universidad Nacional Autónoma de México, humberto@servidor.unam.mx
Angel Ignacio Pérez Gómez (Spain), Universidad de Málaga, aiperez@uma.es
Daniel Schugurensky (Argentina-Canadá), OISE/UT, Canada, dschugurensky@oise.utoronto.ca
Simon Schwartzman (Brazil), Fundação Instituto Brasileiro de Geografia e Estatística, simon@openlink.com.br
Jurjo Torres Santomé (Spain), Universidad de A Coruña, jurjo@udc.es
Carlos Alberto Torres (U.S.A.), University of California, Los Angeles, torres@gseisucla.edu