USF Libraries
USF Digital Collections

Educational policy analysis archives

MISSING IMAGE

Material Information

Title:
Educational policy analysis archives
Physical Description:
Serial
Language:
English
Creator:
Arizona State University
University of South Florida
Publisher:
Arizona State University
University of South Florida.
Place of Publication:
Tempe, Ariz
Tampa, Fla
Publication Date:

Subjects

Subjects / Keywords:
Education -- Research -- Periodicals   ( lcsh )
Genre:
non-fiction   ( marcgt )
serial   ( sobekcm )

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
usfldc doi - E11-00365
usfldc handle - e11.365
System ID:
SFS0024511:00364


This item is only available as the following downloads:


Full Text
xml version 1.0 encoding UTF-8 standalone no
record xmlns http:www.loc.govMARC21slim xmlns:xsi http:www.w3.org2001XMLSchema-instance xsi:schemaLocation http:www.loc.govstandardsmarcxmlschemaMARC21slim.xsd
leader nam a22 u 4500
controlfield tag 008 c20049999azu 000 0 eng d
datafield ind1 8 ind2 024
subfield code a E11-00365
0 245
Educational policy analysis archives.
n Vol. 12, no. 16 (April 10, 2004).
260
Tempe, Ariz. :
b Arizona State University ;
Tampa, Fla. :
University of South Florida.
c April 10, 2004
505
Education and alternate assessment for students with significant cognitive disabilities : implications for educators / Mary C. Zatta [and] Diana C. Pullin.
650
Education
x Research
v Periodicals.
2 710
Arizona State University.
University of South Florida.
1 773
t Education Policy Analysis Archives (EPAA)
4 856
u http://digital.lib.usf.edu/?e11.365



PAGE 1

1 of 27 A peer-reviewed scholarly journal Editor: Gene V Glass College of Education Arizona State University Copyright is retained by the first or sole author, who grants right of first publication to the EDUCATION POLICY ANALYSIS ARCHIVES EPAA is a project of the Education Policy Studies Laboratory. Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education Volume 12 Number 16April 10, 2004ISSN 1068-2341Education and Alternate Assessment for Students wit h Significant Cognitive Disabilities: Implications fo r Educators Mary C. Zatta Perkins School for the Blind Diana C. Pullin Boston CollegeCitation: Zatta, M., Pullin, D., (2004, April 10). Education and alternate assessment for students with significant cognitive disabilities: Implicatio ns for educators. Education Policy Analysis Archives, 12 (16). Retrieved [Date] from http://epaa.asu.edu/epa a/v12n16/.AbstractState and federal mandates for education reform cal l for increased accountability and the inclusion of stude nts with disabilities in all accountability efforts. In the rush to implement high-stakes education reforms, particularly those i nvolving tests or assessments, the particular needs of students with severe cognitive disabilities are only now being addressed by policymakers and educators. For students with signi ficant cognitive disabilities, implementation of alternate approaches to education accountability is increasing. At the same time, the challenges associated with successfully implementin g alternate assessment programs are becoming more obvious. This paper describes some of the ways in which alternate asses sment as

PAGE 2

2 of 27 part of standards-based education reform may impact students with significant cognitive disabilities. It provide s an overview of state efforts to implement alternate assessments fo r students with significant cognitive disabilities, followed by an example of how one state has begun to implement alternate assessme nt through the Massachusetts Alternate Assessment (MCAS-Alt/Massachusetts Comprehensive Assessment System Alter nate). It reviews issues educators in all states will face in the participation of students with significant disabilities in altern ate assessment programs, the content and form of alternate assessm ents, the validity and reliability of the assessments, and th e role of teachers in the implementation of alternate assessment progr ams. Education reform has become one of the paramount pu blic policy issues in the nation. As policymakers and educators rush to recti fy the many perceived shortcomings of our educational system by requiring more accountability, it is increasingly clear that many reforms have not, in f act, fully taken into consideration the particular needs of students with significant cognitive disabilities. For these students, the implementatio n of alternate approaches to education accountability is increasing. At the same time, there is limited guidance from research on how to appropriately impl ement alternate assessment and local educators have limited prepara tion in alternate assessment practices. This paper describes some of the ways in which alternate assessment as part of standards-based edu cation reform may impact students with significant cognitive disabilities. I t provides an overview of state efforts to implement alternate assessments for stud ents with significant cognitive disabilities, followed by an example of h ow one state has begun to implement alternate assessment through the Massachu setts Alternate Assessment (MCAS-Alt / Massachusetts Comprehensive Assessment System Alternate). Then, it reviews some of the potential issues researchers and educators in all states will face in the participat ion of students with significant disabilities in alternate assessment programs, the content and form of alternate assessments, the validity and reliability of the as sessments, and the role of teachers in the implementation of alternate assessm ent programs.Standards-Based Education Reform: Mandates forAccountabilityThe current wave of education reform initiatives ex tends back to the mid-1980s, when national calls for dramatic change began to dr aw considerable public attention to the quality of schools and the need fo r increased accountability for educational outcomes (National Commission on Educat ion,1983). Eventually, a movement calling for systemic reform of the nation' s schools was born. This initiative focused upon an effort to impact all com ponents of the educational process in an effort to achieve pervasive and meani ngful change. The dissatisfaction with American education led to a sh ift in focus “from the process of education to the outcomes of the educational pro cess” (Geenen, Thurlow, & Ysseldyke, 1995, p. 2). By the mid-1990s, the state s began to establish educational standards and outcomes, often relying h eavily upon the use of high-stakes tests to both define and measure educat ional progress. The U.S.

PAGE 3

3 of 27 Congress declared the importance of embracing the g oal of ensuring that “all children can learn and achieve to high standards" a nd set out incentives to insure that all states pursued this goal (Goals 200 0: Educate America Act of 1994 (P.L. 103-227). At about the same time, and fo r the first time, Congress declared in both its special education laws and its general legal requirements for elementary and secondary education that high standa rds and accountability should apply to all students, including students wi th disabilities (U.S. P.L. 103-227, Section 3(1), 1994; Title I of the Improvi ng America’s Schools Act (IASA) of 1994; Individuals with Disabilities Educa tion Act (IDEA) of 1997). The 1997 amendments to the IDEA mandated the alignment of general and special education reform efforts (Guy, Shin, Lee, & Thurlow 1999). The IDEA’97 requires that children with disabilitie s be included in general state and district-wide assessment programs. The mandate underscores that accommodations be provided for students with disabi lities to ensure appropriate participation in the assessment. Further, for those students with significant disabilities, IDEA '97 requires that each state pro vide an alternate assessment for those children who cannot participate in the st andard State and district-wide assessment programs. Finally, the law places the r esponsibility upon each state for developing the participation guidelines a nd gives the IEP team responsibility for making determinations on the par ticipation of each student in state assessment programs based on the state guidel ines. In the No Child Left Behind Act of 2001 (NCLBA) (P. L. 107-110), Congress reaffirmed and expanded its commitment to standards -based education reform. The new law requires annual testing of students in grades three through eight, calls for determinations whether schools are making "adequate yearly progress" in meeting academic standards, and encourages great er accountability for educational progress, including the use of sanction s and rewards. The NCLBA also addresses the participation of students with d isabilities in these programs. In assessing adequate yearly progress, it calls for participation of no less than 95% of students with disabilities in either regular assessment or alternate assessment programs, reasonable adaptations and acc ommodations for students with disabilities, the use of valid and re liable measures for students with disabilities, disaggregated accountability rep orting to focus on outcomes for students with disabilities, and meaningful reportin g to parents of individual student results.The essential components of all these recent reform mandates rest upon the use of content standards, performance assessments, and accountability. Initially, content standards were the main politica l tools of standards-based reform: “They define the breadth and depth of value d knowledge that students are expected to learn, and they are intended to red uce the curriculum disparities existing across schools and school districts” (McDo nnell et al.,1997, p. 114; see also Ysseldyke, Thurlow, & Shriner, 1994). Performa nce assessment, however, became the mechanism for ensuring accountability in meeting academic content standards.Accountability is central to standards-based reform and takes two forms: student accountability (assigns responsibility to t he student) and system accountability (assigns responsibility to the educa tional system or individuals

PAGE 4

4 of 27 within that system). “System accountability is des igned to improve educational programs whereas student accountability is designed to motivate students to do their best" (National Center on Educational Outcome s, 2001). System accountability, defined as “a system activity desig ned to assure those inside and outside the educational system that schools are moving in desired directions” (p. 2), is most often measured by large -scale standardized tests (Ysseldyke, Olsen and Thurlow (1997). Student accou ntability is also most often attained through standardized tests and is many tim es linked to high school graduation or grade-to-grade promotion requirements According to the National Center for Educational Outcomes, “all states have s ome type of system accountability, but not all states have student acc ountability” (National Center on Educational Outcomes, 2001).Until recently, there has generally been a dual sys tem of accountability one for general education and one for special education (Se bba, Thurlow, & Goertz, 2000). Indeed, some would argue that for students w ith disabilities there was no systemic accountability at all (McDonnell, et al.,1 997). Now, there is a push for a unified educational accountability system based upo n the realization that “accountability is only realized when all children, including students with disabilities, are considered in the planning, devel opment, and implementation” (Erickson & Thurlow, 1997, p. 1).For students with disabilities, inclusion in the ge neral system for student and system accountability is intended to insure full pa rticipation in the content and performance standards of general education. These g oals began to be addressed as students with disabilities were includ ed in state and local large-scale testing programs. For some, this partic ipation required some accommodations or modifications to allow participat ion. However, for the much smaller population of students with significant dis abilities, participation in large-scale assessment programs, even with accommod ations or modifications is not appropriate. For the population of students with significant disabilities, alternate assessment systems are now being implemen ted to address the mandates for inclusion of all students in assessment and accountability programs. There are, however, significant challenge s associated with the implementation of these alternate assessments.Some of these challenges have been deliberated in t he courts. Even the federal courts have become involved in struggles over alter nate assessment. The courts have previously upheld the right of states a nd local districts to make high-stakes decisions, such as the award of a high school diploma contingent upon student test performance ( Debra P.v. Turlington 1981; Brookhart v. Illinois State Board of Education 1982; Board of Education v. Ambach 1983). However, the courts also specified that tests used for these purposes had to be valid and based upon content that students had a fa ir opportunity to learn. They also required, for students with disabilities, that IEPs should create appropriate opportunities for students to prepare for tests. Re cently, a federal district court mandated that the State of California must insure t hat students with learning disabilities, including those under both IEPs and S ection 504 plans, must be provided alternate assessments if they are unable t o access the general test due to a disability ( Chapman v. California Dept. of Ed. Feb 21, 2002).

PAGE 5

5 of 27 Alternate Assessments – What are they?For students with disabilities for whom participati on in the general assessment program with accommodations is not appropriate, edu cators have turned to alternate assessment programs. The term "alternate assessment" has been defined by Ysseldyke, et al. (1997) as “any assessm ent that is a substitute way of gathering information on the performance and pro gress of students who do not participate in the typical state assessment use d with the majority of students who attend school” (p. 2). Alternate assessment is seen as an "approach to enable the educational outcomes of students with th e most significant disabilities to be included in school and district accountability measures” (Kleinert, Haig, Kearns, & Kennedy, 2000, p. 53; se e also Coutinho and Malouf (1993). Thompson, Quenemoen, Thurlow, and Ysseldyke (2001) provide examples of alternate assessments explaining that “ alternate assessments typically involve some variation of what is sometim es called performance-based assessment, authentic assessment, or ‘alternative’ assessment, or with a collection of these tools, portfolio assessment” (p p. 80-81). As portfolio assessments have become more common for performance assessment, they have become more systematic. Student accomplishment s are systemically sampled or collected over a period of time to asses s student growth and attainment in content areas (Baker, 1993). Portfoli os are now being measured against predetermined scoring criteria (Thompson, e t al., 2001). Most states have adopted a portfolio assessment mod el as their method of alternate assessment for students with disabilities (Thompson et al., 2001, ). Kentucky and Maryland have led the way in the imple mentation of alternate assessments. “Both of these states have used the id ea of portfolio assessment as a means of gathering achievement information whe n students cannot participate in the general state assessments” (Rous e, Shriner, & Danielson, 2000, p. 89). However, the format for these assessm ents has been variable across the country (Thompson, et. al, 2001) and the research on implementation of these practices is thus far somew hat limited. Carpenter, Ray & Bloom (1995) describe the benefit of portfolios in terms of their ability to provide concrete evidence of stude nt work and progress toward annual goals and objectives. “The goal of these new er assessments is to more accurately depict what students can do, in more aut hentic or real-life contexts, and to focus classroom instruction on the developme nt of problem-solving and higher-order thinking and writing skills” (Kleinert Kennedy, & Kearns, 1999, p. 93). According to Thompson et al., (2001) and Choat e & Evans (1992) there are numerous advantages to using a portfolio assessment model. These advantages include an increased ability for school districts to be accountable for all students, the ability to demonstrate student gr owth, an assessment process that is able to include all students on an individu alized basis, a demonstration of student progress toward standards, and a “means of incorportating assessment and instruction relevant to functioning in the real world” (Choate & Evans, 1992, p. 9).At the same time, there is growing recognition of s ome of the challenges posed by the use of portfolio assessments – difficulty wi th the implementation process,

PAGE 6

6 of 27 scoring difficulty, problems with generalizability and comparability of results, and validity and reliability issues. Ysseldyke and Olse n (1997) warn that “there is little consensus on what constitutes a portfolio or how portfolios should be used in large-scale assessment” (p. 11). Another comment ator (Maurer, 1996) speaks to the need for clarity regarding four speci fic issues about portfolio assessment: the purpose of portfolio assessment (wh y assess?), participation guidelines for portfolio assessment (who to assess? ), alignment of the assessment with what is being taught (what to asses s?), and the validity and reliability of the assessment (how to assess and sc ore?). Each of these issues frames an essential set of questions for educators implementing alternate assessments.Why Assess? The purpose or purposes of any assessment must be established at the outset. “Many of the technical i ssues presented by the conceptions of portfolio assessment in the literatu re could likely be resolved by clarifying the purpose of portfolios” (Nolet, 1992, p. 11). However, as Olsen (1998) noted in a review of state practices, “one o f the common threads that runs through these documents is the need for states to establish a solid philosophical basis for alternate assessments befor e moving too far in to the details of development” (p. 1).According to the National Center on Educational Out comes, “the primary purpose for alternate assessments is to increase th e capacity of large-scale accountability systems to create information about how a school, district, or state is doing in terms of overall student performa nce” (NCEO, 2000). In addition to these systemic accountability purposes, however, assessment results provide judgment or accountability informat ion to the student and the parent (Maurer, 1996). These goals are not necessar ily easily reconciled. For either systemic or student accountability, the basi c premises of alternate assessments are the same. These assessments must be "designed to provide information relative to key performance indicators that represent the most essential features of the educational experience of students with disabilities” (Ysseldyke, Thurlow, Kozleski, & Reachly, 1998b, p. 14). Warlick (2000) discusses the importance of alignment of alternate assessments with each state’s general assessment: “the purpose of an alte rnate assessment should reasonably match, at a minimum, the purpose of the assessment for which it is an alternate” (p. 18).In most programs, assessments including alternate a ssessments are seen as "a matter of school accountability more than student a ccountability” (Kleinert et al., 2000, p. 53). However, in many states and local sch ool districts, there are also high-stakes accountability consequences for student s, such as the determination of the type of exit credential a stud ent may receive. And, even when high-stakes consequences may be limited for in dividual students, the availability of alternate assessment evidence can b e expected to play a key role in such critical activities as the formulation or r evision of IEPs. Multiple uses of alternate assessments may be significant particular ly if there are high stakes involved. States must ensure that portfolio assessm ents measure what they are intended to measure and recognize that if they are being used for multiple purposes (e.g., student accountability and school a ccountability) that what they measure is consistent with the purposes of the asse ssment. Failure to meet

PAGE 7

7 of 27 these requirements may have a significant impact on the validity of an assessment.Who to Assess? States must develop specific guidelines regarding participation in alternate assessment. Consistent w ith IDEA '97 requirements, Warlick and Olson’s (1998) report demonstrates that in all 12 states they surveyed, the IEP teams are called upon to make the decisions regarding whether students will participate in the general ed ucation test or the alternate assessment and to document justification for this d ecision in the IEP. Appropriately, the task of specifying the criteria to be used in making these decisions are left up to the states. To date, numer ous states have established participation guidelines. However, these guidelines are not consistent from state to state. Warlick and Olson’s (1998) examined the p ractices in twelve states and found: 75% of the states use a curriculum focus criterion (i.e., unable to participate fully in the general curriculum, pursui t of functional or livings skills oriented curriculum, etc.) in determining participa tion. Sixty-seven per cent of the states cited the student’s need for “intensive individualized instruction in order to acquire, maintain, or generalize skills” a s a criterion for alternate participation (Warlick & Olsen, 1998, p. 10). In so me states (59%) older students are permitted to participate in an alterna te assessment “only if they are unable to complete the regular diploma program even with program adaptations” (Warlick & Olsen, 1998, p. 10).There is an overall concern about how to institute an alternate assessment process without once again creating a mechanism tha t promotes a dual educational system or other unintended consequences One challenge focuses upon weighing the balance between the systemic and the individual accountability goals associated with a program. At the ground level, when individual IEP participants are making decisions ab out whether to include a student in the standard or the alternate assessment system, the primary consideration is probably the individual needs of t he student. However, the influences associated with systemic accountability also must be in play. This is particularly true when there is a high-stakes impac t on the school, the district, or even the individual educators who work with the stu dent, as is the case in the growing number of states now seeking to measure tea cher accountability on the basis of student assessment performance.When the costs associated with systemic accountabil ity are high, there might be a press to have larger numbers of students with dis abilities included in alternate assessment as a means of preventing their scores fr om being factored in with the rest of the scores from the standard assessment This practice might make overall system performance seem higher. But, “placi ng a large number of students with disabilities in an alternate assessme nt program.... could help perpetuate the separate system that has been a conc ern for many” (Warlick & Olsen, 1998, p. 3). And, certainly far from clear a t this time is the impact of what might be viewed as a slight Congressional pull-back in the No Child Left Behind Act of 2001 from the previous commitment to partici pation of all children to allow only 95% participation in determining systemi c accountability, or "adequate yearly progress".What to Assess ? The advocacy for curriculum standardization is a critical

PAGE 8

8 of 27 component in the current reform movement. Yet, this point of view is not without problems. McIntyre (1992) saw the emphasis on curri culum standardization as a problem for special education in that it “would hin der individualization in special classes” (p. 7). Ysseldyke, Thurlow, & Geenen (1994 ) emphasize that the successful participation of students with disabilit ies is dependent on states developing “outcomes that are comprehensive and bro ad enough to be meaningful for all students” (p. 5). McDonnell, et al. (1997) also articulate a need for attention to the specific curricular needs of students with significant cognitive disabilities: “the degree to which a set of content standards is relevant to their valued educational outcomes and consistent with proven instructional practices will determine how successfully they will participate in standards-based reform” ( p. 114).In order to achieve comprehensive and broad outcome s without lowering standards, consensus must be reached among stakehol ders on both standards and outcomes. McDonnell et al. (1997) describe the conflicts resulting from the differing assumptions of standards-based reform and special education and conclude that the successful participation of stude nts with disabilities in standards-based reform will depend on the alignment between these assumptions. Standards-based reform has been built around a specific set of assumptions about curriculum and instruction, embod ied in the content and performance standards that are central to the refor ms. Special education, for its part, has been built around a set of assumptions ab out valued post-school outcomes, curricula, and instruction that reflect t he diversity of students with disabilities and their educational needs. (McDonnel l et al., 1997). Most parents and special educators agree that a functional curri culum approach is essential for students with severe cognitive disabilities. If the alternate assessment system can align with the general curriculum withou t precluding a simultaneous focus on functional life skills, how do we ensure t hat alternate assessment is appropriate and comprehensive and maintains a philo sophical focus geared toward a unified education approach (i.e., no separ ate focus for special education)?While there is a strong sentiment against the devel opment of “separate standards” for the small percentage of the student population composed of students with significant disabilities (Ysseldyke & Thurlow, 1999), states have taken a range of approaches to alternate assessment s. “Some states and districts focus very narrowly on specific academic standards, whereas others take a broader approach and include many functional or life skills within their standards for all students” (Thompson, et al., 2001 p. 22). One of the most prevalent concerns is about the “cost” of an academ ic focus for students who have participated in a more “functional” or “practi cal” program. Guy, et. al. (1999) addresses this concern “that students with d isabilities may be merged into a system that has a heavy focus on academics, often to the exclusion of more applied and vocational kinds of skills, (the r esult of which) threatens what has been working for students with disabilities” (p 78). Two leaders in the implementation of alternate assessment, the states of Kentucky and Maryland, while basing the assessment criteria on the core le arning outcomes identified for all students, “clearly attempted to address the functional skill needs of students in their respective alternate assessments” (Kleinert et al., 2000, p. 57). A national study in 2000 reported this range of app roaches by states:

PAGE 9

9 of 27 • alternate assessments encompass general education standards in 28 states; • alternate assessment in 7 states assess standards with an additional set of functional skills;• two states have two alternate assessments one t hat assesses general education standards at lower levels and one that as sesses functional skills; • alternate assessments in 3 states were developed based on functional skills and then linked back to state standards; and• nine states based their alternate assessments on functional skills only with no alignment to state standards (Warlick, 2000).The different possibilities open in selecting the c ontent of alternate assessments present several challenges for educator s. The possible tensions between student accountability purposes and systemi c accountability purposes must be addressed. The extent to which inclusion fo r students with significant disabilities in the content standards of general ed ucation must be determined. States must continue to address these issues as the y refine their standards-based reform efforts. States must continu e to evaluate whether or not a dual education system is being perpetuated wh ile at the same time examining the impact of content standards on studen ts with significant disabilities.How to Assess and Score? For any assessment, it is important to ensure that the resulting scores are accurate, reflect the info rmation the assessment was intended to collect, and are meaningfully linked to teaching practice. In a report compiled by Quenemoen, Thompson and Thurlow (2003), comparing the assumptions and values embedded in the scoring crit eria used in five states for their alternate assessments, discuss the importance of teachers having an understanding of “the stated and embedded scoring c riteria” (p. 41). They caution states to keep in mind that “alternate asse ssments are a much more recent development than regular assessments (Quenem oen et al., 2003, p. 41)” and as such, advocate the necessity of ongoing deba te and discussion regarding the underlying assumptions as they relate to students with significant cognitive disabilities and the impact of those assu mptions on the scoring criteria.The struggles involved in establishing reliable and valid test results are evidenced throughout the literature. Even without t he particular complications associated with the alternate assessment of student s with disabilities, one leading commentator on testing and assessment has n oted that all types of performance assessment "present a number of validit y problems not easily handled with traditional approaches and criteria fo r validity research” (Moss, 1992, p. 230). Other commentators have noted politi cal problems associated with performance assessments: “If performance asses sments are to gain any credibility with students, parents, and the communi ty, they need to be reliable, valid, and generalizable. If we as a profession do not establish these traits, then performance assessments will, in time, come under t he same type of attack that standardized tests receive today” (Maurer, 1996, p. 111).

PAGE 10

10 of 27 Clearly, the concerns regarding validity and reliab ility have a critical impact for systemic and student accountability. Given the time lines involved in meeting federal mandates concerning both accountability and the inclusion of students with disabilities, the time required to establish r eliability and validity has been short and the expertise on how to do so not widely available (Heaney & Pullin, 1998). The American Educational Research Associatio n (AERA), American Psychological Association (APA), and the National C ouncil on Measurement in Education (NCME) have set the professional standard s of practice for educational and psychological testing in their publ ication Standards for Educational and Psychological Testing (1999). While these requirements do not include extensive discussion of performance assessm ent issues, they do establish benchmarks for validity and reliability d eterminations that should be taken into account by educators implementing altern ate assessment systems. The Test Standards define validity as “the degree to which evidence a nd theory support the interpretations of test scores entailed by proposed uses of tests...the proposed interpretation refers to the c onstruct or concepts the test is intended to measure” (AERA, APA, & NCME, 1999, p. 9 ). Caution must be taken when determining the types of evidence that m ight be incorporated into a portfolio or other performance assessment. “Importa nt validity evidence can be obtained from an analysis of the relationship betwe en a test’s content and the construct it is intended to measure” (AERA, APA, & NCME, 1999, p. 11). The evidence or work samples included in an assessment must support the construct or concepts being measured and they must be sufficient and relevant. Miller and Legg (1993) reference “eight criteria th at need to be studied for serious validation of alternative assessments: inte nded and unintended consequences of test use, fairness, transfer and ge neralizability, cognitive complexity, content quality, content coverage, mean ingfulness, and cost and efficiency” (p. 10).The Test Standards (AERA, APA, & NCME, 1999) define reliability as “t he consistency of such measurements when the testing p rocedure is repeated on a population of individuals or groups” (AERA, APA, & NCME, 1999, p. 25). After performance assessment results are collected, someo ne has to judge student responses and determine whether they meet the requi site educational standards. In scoring portfolio assessments, judges determine an individual’s score based on defined criteria or scoring rubrics. “Inter-rater reliability is also necessary in alternative assessments because the sc oring procedures are usually subjective” (Miller & Legg, 1993, p. 11). I nter-rater scoring reliability plays an important role in establishing the validit y of an assessment and is therefore, subject to rigorous technical requiremen ts. “In such cases relevant validity evidence includes the extent to which the processes of the observers or judges are consistent with the intended interpretat ion of scores” (AERA, APA, & NCME, 1999, p. 13). Establishing the reliability of such judgments on a large-scale assessment program has already been ide ntified as a significant challenge (Shepard, 1992); many more issues arise w hen alternate assessments are being administered.Vermont was one of the first states to use portfoli o assessments on a large-scale basis for all students, including those with disabilities. Koretz,

PAGE 11

11 of 27 McCaffrey, Klein, Bell, & Stecher (1993) evaluated the 1992 Vermont Portfolio Assessment program and found disappointing reliabil ity coefficients. In Kentucky, another state on line early with these as sessments, there was an early finding that “there remains much work to be d one around issues of reliability of scoring procedures” (Elliott, 1997 p 106; see also Koretz & Hamilton, 2000). Sailor (1997) found that “the Kent ucky experiment with Alternate Portfolios is plagued with predictable pr oblems of reliability of judgment across independent scorers” (p.103). In Ke ntucky, portfolios were scored initially by the teachers administering them This led to a concern regarding subjectivity, especially because Kentucky ’s statewide assessment system was a high stakes system. Schools in Kentuck y are subject to rewards and sanctions based on the assessment scores. When an assessment system is a high stakes system, it is subject to even grea ter scrutiny regarding validity and reliability because of the ultimate “cost”, or consequences, of the assessment results. The inter-rarer reliability in Kentucky has shown a substantial increase since the mandate that every a lternate portfolio “be blindly and separately scored by two trained scorers and th at all disagreements be reconciled through a third, state-level scoring” (K leinert, et al., 2000, p. 60). Another significant issue regarding the validity an d reliability of the alternate assessment are issues of whether or not the portfol io is a reflection of the student’s work or the teacher’s abilities. A statew ide teacher survey conducted by Kleinert, et al., (1999) noted a concern regardi ng “the extent to which the alternate assessment was more of a teacher assessme nt than a student assessment” (p. 93). In portfolio assessment the re sulting product to be judged for accountability purposes is a compilation of the student’s work. Students with significant disabilities are typically reliant on t eachers to assemble their portfolio. The question arises as to the degree the resulting product is more reflective of the teacher’s expertise in assembling a portfolio t hat meets the requirements of the scoring rubric than the capabilities of the stu dent. Is the resulting score a measure of the student’s ability and achievement or the teacher’s ability to assemble a portfolio to meet the specifications of the assessment? In the Kentucky statewide teacher survey, teachers’ commen ts indicated a concern that “teacher creativity/work is a greater factor i n determining the ultimate score than is student learning” (Kleinert et al., 1999, p 98). The mandates for available and persuasive validity and reliability evidence are clear. But it is also evident, given the scientific complexity of obtaining such evidence, that there would be problems in this rega rd. The press of limited time to implement the new systems, coupled with lack of guidance on how to obtain defensible validity and reliability evidence, place d educators in the position of proceeding without appropriate safeguards in place. The professional standards of practice call for validity and reliability evide nce before a program is made operational (AERA/APA/NCME, 1999). Without such per suasive evidence, the research community and professional vendors are obl igated to mobilize quickly to address the need for this information. This rese arch will probably require the combined efforts of both the special education comm unity and testing and assessment professionals. The lack of persuasive te chnical data on the defensibility of alternate assessments at present s uggests the need for great caution in implementing any high-stakes consequence s for either individual or systemic accountability as a result of alternate as sessments.

PAGE 12

12 of 27 Challenges Faced by Teachers Administering Portfoli o AssessmentDespite the fact that the intent is that an alterna te assessment portfolio be assembled as much as possible with the input of the student, it is clear that the students for whom the portfolio assessment is appro priate (e.g., students with significant cognitive disabilities) may be limited in their ability to provide such input. As a result, the composition of each student ’s portfolio is likely to be highly reliant on the expertise and training of the student’s teacher. Teacher background can impact student performance in two wa ys: teacher capacity in providing instruction covered in the assessment and teacher capability in assembling student portfolios. Either or both facto rs have a powerful impact on student performance.Studies of the assessment of students with disabili ties indicate that special educators often lack familiarity with the content a nd knowledge, or content standards, covered on assessments (DeStefano, Shrin er, and Lloyd, 2001). Content coverage in a high-stakes assessment contex t can be a challenge for all teachers. However, it can be a particular chall enge when the inclusion of students with disabilities, particularly those with significant disabilities, have had limited prior exposure to the general education cur riculum. According to research conducted elsewhere by Kleine rt, et al., (1999) “the alternate portfolio process seems more focused on a n assessment of the teacher than on the student.” (p. 97) This study hi ghlights the need for further analysis regarding the “extent to which teacher exp erience, scope, and recency of teacher training, or other salient teacher chara cteristics were related to reported adoption of instructional practices and te acher perceptions of the benefits of the alternate assessment to their stude nts.” (Kleinert, et. al, 1999, p. 97)There does appear to be some evidence that teachers with greater experience, expertise and training are likely to produce a port folio which receives a higher score than a teacher new to the process of producin g an alternate assessment for the first time. Kleinert, et al., (2000) raised this question in their research: “to what extent did teacher (e.g., experience, amount o f training) and instructional (amount of student involvement in the construction of the portfolio) variables predict the portfolio score?” Thompson et al., (200 1) identify the issue of teacher training and experience regarding performan ce assessment as the key to improved results for teachers and students. Num erous authors have discussed the importance of teacher experience and training in portfolio use (Thurlow et al., 1998, Coutinho & Malouf, 1993, Har ris & Curran, 1998). Harris and Curran’s (1998) study regarding the impa ct of knowledge, attitudes and concerns about portfolio assessment looked spec ifically at the impact on special educators. Their research findings indicate “if special educators are to use portfolios in ways that provide maximum benefit s to their students, then they need to have greater knowledge about portfolio s” (Harris & Curran, 1998, p. 92). According to Worthen (1993) “the classroom teacher is the gatekeeper of effective alternative assessment.” (p. 447) Wort hen (1993) further states: “to

PAGE 13

13 of 27 a much greater degree than in traditional assessmen t, the quality of alternative assessments will be directly affected by how well t eachers are prepared in the relevant assessment skills.” (p. 448)In addition, teacher attitudes toward the use of po rtfolio assessment may be impacted by training and experience ( Harris & Curr an, 1998, Cheong, 1993). According to Harris and Curran (1998), “teachers wh o are trained and experienced in portfolio use have highly positive a ttitudes towards them” (p. 84). Given the current, and growing, critical shortage o f qualified special educators (Donovan &Cross, 2002; McLaughlin, Artiles & Pullin 2001), the extent of teacher expertise in both special education and alt ernate assessment will be a problem with growing implications.Turner, Baldwin, Kleinert, and Kearns (2000), discu ss the impact of teacher understanding of the scoring rubric and the resulti ng impact on student scores. According to Turner, et al., (2000), “understanding the scoring rubric may allow some teachers to represent quality indicators that are not actually apparent in the classroom” (p. 74). These authors articulated a possibility that teachers could inflate performance on a portfolio assessment (Turner et al., 2000). This possibility raises significant concern regarding bo th validity and reliability issues arising from the fact that a portfolio assessment c ould be administered to the same student by two different teachers and result i n entirely different scores. These two widely different scores could result from simple fundamental differences in the teachers' understanding of the r equirements in the scoring rubric, as well as the teachers' familiarity with t he individual student. All of these factors present considerable questions about the va lidity and reliability of inferences made about portfolio assessment.Harris and Curran (1998) also articulate a number o f “practical” problems affecting teachers using portfolio assessment. They identify these “practical problems as “the time involved, the cost, problems with planning portfolios, organizing and managing their contents, and selecti on of containers and storage” (Harris & Curran, 1998, p. 84; see also Ka mpfer, Horvath, Kleinert, and Kearns; Cheong, 1993). Turner et. al, (2000) offer an observation regarding the typical length of an alternate assessment when it i s conducted in a portfolio format and the demand on teacher time. “As such, so me teachers may not be willing to put forth the effort required to create a portfolio that accurately represents the student’s current program” (Turner, et.al, 2000, p. 74). States must recognize that support must be provided for ed ucators to ensure that the “practical” problems do not negatively impact the p ortfolio score. Educators at the ground level are instrumental in t he success of alternate assessment programs. They must know how to identify potential candidates for alternate assessment, the content standards covered in the assessment and how to teach that content, how to address participa tion issues in IEP meetings, how to compile portfolios, and how to make appropri ate judgments about student performance. They must find a way to do thi s when the consequences of alternate assessment are linked to both student and systemic accountability and perhaps as well their own individual accountabi lity. They must also find ways to accommodate the time and intellectual deman ds associated with alternate assessment in their already busy days. An d, as the critical shortage of

PAGE 14

14 of 27 qualified special educators continues to grow, ther e will probably be fewer and fewer local educators who have even a rudimentary s pecial education background (McLaughlin, Artiles & Pullin, 2001), in dependent of an understanding of the assessment issues discussed he re.Massachusetts' Implementation of an Alternate Asses sment System: One State's Response In response to national initiatives for education reform, many states passed their own reform legislation. A closer look at one state's efforts at alternative assessment, provides useful examples of the challen ges educators face in the implementation of an alternate assessment program.On June 18, 1993 the Massachusetts legislature enac ted the Massachusetts Education Reform Act (MERA), which called for the c reation of a statewide general curriculum in the major academic discipline s, school improvement plans and a new high-stakes assessment test tied to high school graduation (French, 1998). In response to federally imposed timelines, the Massachusetts State Board of Education began an ambitious implementatio n process for the MERA. A Five Year Master Plan organized five strategic go als which included eighty new initiatives. Among these initiatives was the de velopment of the Massachusetts Curriculum Frameworks and the Massach usetts Comprehensive Assessment System (MCAS). Similar to other states s tatewide assessment systems, the MCAS is used for both systemic account ability (school and district performance indicators and potential state take-ove r of low performing schools or districts) and student accountability (individua l student performance reports and high school graduation contingent upon acceptab le MCAS performance). The MCAS is a large-scale, criterion-referenced tes ting system with provisions for accommodations for students with most disabilit ies. For a student with disabilities, the IEP team is ch arged with determining whether the student 1) can take the standard MCAS under rou tine conditions, 2) can take the standard MCAS with accommodations, or 3) r equires an alternate assessment. State guidelines instruct IEP teams in their decision-making based on the characteristics of a student’s instructional program and local assessment (Mass. Dept. of Ed, 2002).Massachusetts began the early stages of implementat ion of an alternate assessment system for students with significant dis abilities in 1999. The state developed a portfolio-based assessment which was de signed to measure student’s knowledge of the key concepts and skills articulated by the general learning standards for all students set forth in th e Massachusetts Curriculum Frameworks. This portfolio-based alternate assessme nt is known as the Massachusetts Comprehensive Assessment System – Alt ernate (MCAS-Alt). “The alternate assessment is intended for the very small number of students who are unable to participate in the standard MCAS due to the nature and severity of their disabilities” (Mass. Dept. of Ed, 2002, p. 16). For students with disabilities, “the purpose of the MCAS Alternate As sessment is to measure the achievement of these students on the Massachusetts Curriculum Framework learning standards in English Language Arts, Mathem atics, Science and Technology/Engineering, and History and Social Scie nce” (Mass. Dept. of Ed,

PAGE 15

15 of 27 2000, p. 3).The MCAS-Alt requires the collection of a body of e vidence that may include student work samples, instructional data on the stu dent, videotapes, and other supporting information linked to instruction in the subject being assessed. The training materials for educators provided by the Ma ssachusetts Department of Education include a scoring guide which is intended “to help teachers and students prepare high-quality portfolio entries.” ( Mass. Dept. of Ed, 2000, p. 23) According to the Massachusetts Department of Educat ion, “the portfolio is developed over the course of the school year by the student, the student’s teacher, and other adults in the school or program who work with the student” (Mass. Dept. of Ed, 2002, p. 16).The Massachusetts alternate assessment system has b een described by one of the leading researchers on the testing of individua ls with disabilities as “leading the way in the assessment and reporting of students with significant disabilities who require alternate assessments” (Thurlow, as quo ted by Mass. Dept. of Ed.,2003). An examination of this system provides t he opportunity to highlight some of the particular challenges confronting educa tors in implementing these reforms for students with significant disabilities. In terms of Maurer's call for clarity, the goals of Massachusetts' alternate asse ssment seem, on their face, to be clear. But, the question remains whether the assessment can meet the validity and reliability requirements regarding ali gnment of the “assessment content and the construct it is intended to measure ” (AERA, APA, NCME, 1999, P. 11).When the state of Massachusetts began to initiate i ts alternate assessment program in 1999, there were short timelines for imp lementation of the new assessments mandated by the federal government in t he 1997 IDEA amendments. A system of assessment had to be develo ped and a large number of educators that had to be trained to admin ister the MCAS-Alt. Massachusetts field tested the MCAS Alternate Asses sment during the 1999-2000 school year. During the 2000-2001 school year the alternate assessment was officially implemented for the first time, with the first portfolio assessments due at the beginning of May 2001.Between October 2000 and January 2001, the Massachu setts Department of Education trained 3300 administrators and teachers in the implementation process of the MCAS-Alt. The deadlines of the feder al mandates had a significant impact on the effectiveness of this tra ining. According to Dan Wiener, Project Coordinator of the MCAS-ALT for the Massach usetts Department of Education, “it became clear that we needed to train teachers very intensively and give them much more time than we gave them, whi ch we had every intention of doing but the law gave us such a short brief, turnaround time” (Wiener, 2002a).Additional challenges associated with the implement ation of alternate assessment were concerned with how the evidence wou ld be assessed and scored (Weiner, 2002b). The scoring rubric for the MCAS-Alt developed by its private testing contractor is used to review, evalu ate and score student portfolios. Scorers examine each portfolio strand f or evidence of the student’s performance in the following categories: completene ss of materials submitted;

PAGE 16

16 of 27 demonstration of the level of complexity at which t he student addresses the learning standards in each content area; demonstrat ion of the accuracy of the student’s responses and performance on each product ; evidence of the degree of independence the student demonstrated in perform ing each task or activity; and evidence of the student’s ability to make decis ions and/or self-evaluate as they engage in the task or activity (Mass. Dept. of Ed, 2002). The scoring rubric is used to generate a numerical score for each portfolio strand and then the three scores of the three portf olio strands submitted in each content area are averaged in order to determine an overall score. The overall scores are translated into performance levels by th e Massachusetts Department of Education in conjunction with its assessment con tractor. The performance levels used to report student results in each conte nt area in which the MCAS-Alt is administered include the three performance level s used in the standard MCAS (needs improvement, proficient, and advanced) as well as three additional areas (awareness, emerging, and progress ing). A description of the performance levels for the MCAS Alt is as follows: awareness (student demonstrates very little understanding of learning standards), emerging (student demonstrates a rudimentary understanding of a limit ed number of learning standards and addresses the standards at substantia lly below grade level expectations), progressing (student demonstrates a partial understanding of some learning standards and address the standards a t below grade level expectations), needs improvement (student demonstra tes a partial understanding of the content area at grade level ex pectations), proficient (student demonstrates a solid understanding of the content area at grade level expectations), and advanced (student demonstrates a comprehensive and in-depth understanding of the content area at grade level expectations). The scoring criteria for the rubric were determined with the assistance and feedback of hundreds of teachers who participated i n the implementation of 1999-2000 field test. The scorers of the alternate assessments are recruited and trained by the Massachusetts Department of Educ ation and its contractor. As the state itself confirmed, the difficulties of scoring alternate assessments represent a challenge “ to use methods other than t raditional testing to portray what a student has learned and to do this in a way that allows others who may not work directly with the student to interpret thi s evidence correctly” (Mass. Dept. of Ed, 2000, p. 23).During the first year of implementation it became c lear that ”there were in some cases, different interpretations of the ways in whi ch we told people to score” (D. Wiener, personal communication, Feb. 26, 2002). As a result, the state reevaluated the training system for scorers and mad e changes in the training plan for scorers for the next round of portfolio sc oring. The 2002 MCAS-Alt portfolios were scored during a t hree week scoring institute that was conducted in July 2002 during which 5300 M CAS-Alt portfolios were scored by 125 Massachusetts educators. Educators fr om across the state were recruited to participate in the scoring institute a nd preference was given to educators who could commit to the full three weeks of scoring. To prepare the scorers for the task of scoring the MCAS-Alt portfo lios, scorers received a set of written scoring guidelines two to three weeks prior to the scoring institute. In

PAGE 17

17 of 27 addition, the scorers participated in one and one-h alf days of training at the beginning of the scoring institute. Calibrated trai ning strands were used to “qualify” scorers for the task of scoring the MCASAlt (Mass. Dept. of Ed, 2002). As a means of establishing reliability in the scori ng, approximately 25% of the MCAS-Alt’s were scored by two different scorers. In addition, due to the significant consequences (award of a regular high s chool diploma) attached to the 10th grade score, all grade 10 MCAS-Alt’s were scored by two different scorers (Mass. Dept. of Ed 2003, p. 2).A similar scoring process was implemented in the 20 03 administration. 5118 portfolios were scored by approximately 150 scorers during a three week scoring institute using a similar process as the 20 02 scoring institute (Mass. Dept. of Ed, 2003).According to the Massachusetts Department of Educat ion, “It is anticipated that scores may be modest in the first few administratio ns of the MCAS Alternate Assessment, but scores are generally expected to im prove…as educators become increasingly familiar with these requirement s” (Mass. Dept. of Ed, 2002, p. 27). In fact, the data support this statem ent. Although changes in scoring make it impossible to clearly establish yea r-to-year trends, in each of the three years of administration of the MCAS Alt, approximately 1% of all the students in the state (about 6.5% of the students w ith disabilities in the state) participated in the alternate assessment. In 2001, 75% of the portfolios submitted scored in the lowest performance category “awareness”. In 2002, only 5% were scored at the “awareness” level, due i n large part to a change in scoring. In 2003, only 3.5% were scored at the “awa reness” level. Changes in how the data was recorded from Year 1 (2 001) to Year 2 (2002) are important to note. In the recording/categorization of the Year 1 data, those portfolios which were unable to be scored because t here was insufficient evidence were included in the data for the awarenes s category. In the Year 2 data presentation, this data was separated out and an incomplete section was included in the data display. In Year 2, 44% of the portfolios were incomplete in at least one subject area. In the Year 2 results ho wever, the combination of the incomplete data and the awareness data (49%) is low er than the Year 1 awareness data (75%) Also of note, in Year 2, 34% of the portfolios scored in the progressing category an increase of 21% from Ye ar 1. In Year 3 (2003) the percentage of portfolios which received incompletes dropped to 19% and the percentage of portfolios which scored in the progre ssing category increased to almost 65%, (D. Weiner, personal communication, 9/0 3). The state reported in 2002 that it did include MCAS Alt data within its reports on the overall performance of all students in the stat e and all students with disabilities. Overall, on the Grade 10 MCAS, used t o determine high school diploma awards, 14% of all students across the stat e failed and 45 % of students with disabilities failed (Mass. Dept. of E d., 2002, August). Among the students participating in the alternate assessment, only 12 students across the state received a passing score (needs improvement o r higher) on the Grade 10 level. (Mass. Dept of Ed., 2003). However, in 2003 the number of students that received a passing score increased to 26. "This num ber represents a dramatic increase over the previous two years" (Mass. Dept. of Ed., 2004).

PAGE 18

18 of 27 Massachusetts is currently making an attempt to add ress requirements in the NCLB legislation regarding reporting of student ass essment results. The state has made a plan for reporting the aggregated result s in a manner which attempts to minimize the potential negative impact of the inclusion of student alternate assessment scores by assigning a point va lue system to the portfolios based on the scored performance level for each port folio. The points would be assigned to the MCAS-Alt performance levels ( 0 poi nts = portfolio not submitted, 25 points = incomplete, 50 points = awar eness, 75 points = emerging, 100 points = progressing) in a similar ma nner as the regular MCAS (0 points – failing, 25 points – needs improvement, 50 points – proficient, and 100 points – advanced). The plan is for this reporting system to be implemented in the 2004 administration of the MCAS and MCAS-Alt.In addition to challenges associated with scoring t he MCAS Alt, there are also issues concerning content coverage for the assessme nt. In Massachusetts the alternate assessment is linked directly to the gene ral education standards in the Massachusetts Curriculum Frameworks and is intended to assess student’s mastery of skills, concepts and information regardi ng the general curriculum. Consistent with the state's regular assessment, the MCAS Alternate Assessment requires assessment in English Language Arts, Mathematics, History and Social Science, and Science and Technol ogy/Engineering. However, the MCAS Alternate Assessment does not inc lude assessment in essential life areas or functional skills as has be en the practice in some other states such as Maryland and Kentucky. According to Dan Wiener, Project Coordinator of the MCAS Alt at the Massachusetts De partment of Education, “I think we’re in the minority in that we haven’t…but many access skills are embedded in the entry points to our Curriculum Fram eworks” (personal communication, Feb. 26, 2002).In response to the need to make the general curricu lum accessible to all students, a resource guide was developed by the Mas sachusetts Department of Education which includes “instructional and assessm ent strategies [that] provide opportunities to teach students with disabilities t he same standards as general education students, and to promote greater ‘access to the general curriculum’ for students with disabilities, as required by law” (Mass. Dept. of Ed, 2002). The educator’s manual describes four ways that stud ents with disabilities can participate in the general curriculum. Those four a reas are: (1) addressing the standard as written for the grade level of the stud ent; (2) addressing the standard as written but using a different method of presentation and/or student response; (3) addressing the standard at lower leve ls of complexity and difficulty than grade-level peers, and (4) addressing the stan dard through social, communication, and motor “access skills” that are “ incorporated and embedded in standards-based learning activities” (Mass. Dept of Ed, 2002, p. 56). Jacqueline Farmer Kearns, Project Director of the I nterdisciplinary Human Development Institute at the University of Kentucky states that “access skills are a way that students with disabilities can participa te in the general curriculum” (J. Farmer Kearns, personal communication, April 21, 20 00). In the 2003 Educator’s Manual for the MCAS Alternate Assessment (Mass. Dept. of Ed,

PAGE 19

19 of 27 2003), the state describes access skills in the fol lowing manner: “skills become ‘access skills’ when they are practiced as a natura l part of instruction based on learning standards. When students practice their sk ills during daily academic instruction, they are participating in the general curriculum, though at a very basic level” (p. 57).Administering an alternate assessment based on alig nment with the general curriculum has added yet another layer of difficult y in the quest for education reform. It is well recognized that the federal mand ate to adapt and align the general curriculum for all students including stude nts with significant disabilities has presented a challenge for school districts acro ss the country. A recent study of Massachusetts teachers of students with signific ant disabilities who participated in MCAS Alt elicited evidence from tea chers that their students’ participation in the assessment process did cause t eachers to pay attention to state curriculum frameworks they had previously ign ored These teachers also indicated the importance of the provision of approp riate and ongoing professional development activities at the state an d building level which address the issues related to administering the MCAS-Alt wi th students with significant disabilities including assistance with curriculum a lignment for this population. The study concludes that school districts should se ek to use trainers/consultants who have experience with admin istering the MCAS-Alt and with aligning curriculum for students with signific ant disabilities. (Zatta, 2003) In the past four years of administration of the MCA S Alt (one pilot year and three statewide administrations), it has become cle ar that the resources to assist teachers with the administration of an alter nate assessment have increased but still have failed to adequately addre ss the needs of students with significant cognitive disabilities and the educator s who serve them. This is particularly true in the area of professional devel opment. As Richard Elmore (2002) asserts, “the pedagogy of professional devel opers [must] be as consistent as possible with the pedagogy that they expect from educators. It has to involve professional developers who, through expert practice, can model what they expect of the people with whom they are w orking (p. 8).” Effective training efforts serve to increase capacity not onl y on an individual teacher-by-teacher basis but at the building and sy stem level as well. Building capacity not only serves to ensure effective implem entation but supports sustained reform as well. Several variables related to professional developm ent activities were found to impact the effectiveness of the administration of t he MCAS-Alt. These variables included: teacher understanding, teacher willingnes s, commitment from school leadership and availability of resources. Developin g understanding and willingness amongst the individuals responsible for the administration of the MCAS-Alt is important to the resulting student outc omes. The resources identified as having an impact on the administratio n of the MCAS-Alt include the availability of consultants experienced in the asse ssment system, peer support, sufficient time to implement the program, and adequ ate materials and equipment (Zatta, 2003).Training in the specifics of the scoring guidelines of the alternate assessment has also been identified as important in terms of t he potential impact on student

PAGE 20

20 of 27 scores. Teachers in Massachusetts indicated that ex perience with the scoring rubric of the MCAS-Alt gave them a clearer understa nding of the specific requirements. Those who had participated in pilot s tudies during the development of MCAS Alt and in scoring sessions for the assessment felt the most competent to effectively participate in the as sessment system (Zatta, 2003). “Of course, as teachers also gain familiarit y with portfolio management techniques, submission requirements, curriculum ali gnment, and instructional improvements, the scores of all students will rise” (Weiner, 2002b, p. 9). Training specifically targeted to the teachers of s tudents with significant disabilities and experience with the scoring rubric were regarded by teachers as critical in providing them with the information nee ded to effectively administer the MCAS-Alt (Zatta, 2003).In addition, the issue of training for scorers and the impact of training on the resulting student scores was also identified as an area of importance. Teachers questioned the reliability of their students’ score s based on a comparison of the comments made by different scorers regarding simila r portfolio evidence. The issues of scorer training must be carefully attende d to in order to maximize inter-rater reliability. This issue is not unique t o Massachusetts. A study conducted in Kentucky in 1999 also called for more research regarding the “development of performance-based measures for stud ents with significant disabilities to meet the rigorous technical require ments of inter-rater scoring reliability” (Kleinert et al., 1999, p. 100).The 2003 annual training for administrators respons ible for the implementation of the MCAS-Alt in their respective schools undersc ored the importance of support from school leadership as well as an emphas is on training for teachers (Mass. Dept. of Ed, 2003). This shift in emphasis f rom previous yearly training focused exclusively on teachers may be indicative o f the state’s recognition of the importance of leadership issues in the alternat e assessment program. The Massachusetts alternate assessment system is bu t one approach to the challenges associated with including students with disabilities in education reform and accountability efforts. At this juncture the state is only in the early stages of implementing its system. The evidence rep orted here point to further areas for future efforts to enhance the quality of alternate assessments and associated educational practices for students with significant disabilities.ConclusionThe Congress set out a laudable series of goals whe n it required that students with disabilities be fully included in state and lo cal standards-based education reform initiatives. It is clear that the intent of the federal and state legislation is to improve current practices within the entire educ ation system. It is also clear that the current initiatives may not yet be fully a nd appropriately including the low incidence population of students with significa nt disabilities. In their zeal to call for a unified system of educational accountabi lity and correct the problems of exclusion in the past, legislators and policymak ers alike have not always recognized the individual and intensive needs of ch ildren with significant cognitive disabilities. Nor have they recognized th e many unresolved issues associated with alternate assessment. As a result, significant further efforts are

PAGE 21

21 of 27 needed to develop and refine the processes for asse ssing students with significant disabilities. These efforts must involv e both educators and policy-makers at the ground level, as well as the p rivate vendors who design and deliver assessment systems. Equally important, the research community faces considerable challenges in both assessing the effects of these assessments as well as offering scientifically-base d solutions to the challenges associated with alternate assessment.The goals of education reform are substantial and c omplex. It is no wonder that there are such daunting issues related to how to ef fectively achieve full participation for low incidence populations such as individuals with significant cognitive disabilities. Yet, at the same time, thes e students must not be overlooked. Now is the time to begin to consider ho w to better include and account for their abilities. As one disability advo cate has noted, “we have moved from access to the schoolhouse to access to h igh expectations and access to the general curriculum” (Warlick, 2000, p 11). The challenge ahead is to realize the goal of full and effective partic ipation for students with significant disabilities.ReferencesAmerican Educational Research Association, American Psychological Association, and National Council for Measurement in Education (1999) Standards for Educational and Psychological Testing : Washington, D.C.: American Research Association. Board of Education v. Ambach, 458 N.Y.S. 2d 680 (19 82). Brookhart v. Illinois State Board of Education, 697 F2d 179, 182 (7th Circuit, 1983). Carpenter, C. D., Ray, M., & Bloom, L. (1995). Port folio assessment: Opportunities and challenges. Intervention in School and Clinic, 31 (1), 34-41. Chapman v. California Department of Education, 229 F. Supp. 981 (N.D. Calif., 2002) Cheong, J. (1993). Portfolios: A window on student achievement. Thrust for Educational Leadership (November/December), 27-29. Choate, J., & Evans, S. (1992). Authentic assessmen t of special learners: Problem or promise? Preventing School Failure, 37 (1), 6-9. Coutinho, M., & Malouf, D. (1993). Performance asse ssment and children with disabilities: Issues and possibilities. Teaching Exceptional Children, 25 (4), 63-67. Debra P. v. Turlington, 644 F2d 397 (5th Cir. 1981) also F2d. (11th Cir. 1984). DeStefano, L, J. Shriner, and C. Lloyd (2001, Fall) Teacher decision-making in participation of students with disabilities in large-scale assessmen t. Exceptional Children 68:1, pp. 7-22. Donovan, M.S. & Cross, C. (eds.) (2002). Minority S tudents in Gifted and Special Education. Washington, D.C.: National Academy Press. Elliott, J. (1997). Invited commentary. TASH, 22 (2), 104-106. Erickson, R., & Thurlow, M. (1997). State special education outcomes, 1997: A report on state activities during educational reform Minneapolis: National Center for Educational Outc omes. French, D. (1998). The state's role in shaping a pr ogressive vision of public education. Phi Delta Kappan (November), 185-195. Geenen, K., Thurlow, M., & Ysseldyke, J. (1995). A disability perspective on five years of education

PAGE 22

22 of 27 reform Minneapolis, MN: University of Minnesota, Nationa l Center on Educational Outcomes. Goals 2000: Educate America Act, P.L. 103-227 (1994 ). Guy, B., Shin, H., Lee, S.-Y., & Thurlow, M. (1999) State graduation requirements for students with and without disabilities (Technical Report 24). Minneapolis: National Cente r for Educational Outcomes. Harris, M., & Curran, C. (1998). Knowledge, attitud es, and concerns about portfolio assessment: An exploratory study. Teacher Education and Special Education, 21 (2), 83-94. Heaney, K. & D. Pullin (1998). Accommodations, Flag s, and Other Dilemmas: Disability Rights and Admissions Testing: Educational Assessment, 5 (2) 71-93. Kampfer, S., L. Horvath, H. Kleinert, and J. Kearns (2001, Spring). Teachers’ perceptions of one state’s alternate assessment: Implications for prac tice and preparation. Exceptional Children 67:1, pp. 361-374. Kleinert, H., Haig, J., Kearns, J. F., & Kennedy, S (2000). Alternate assessments: Lessons learned and roads to be taken. Exceptional Children, 67 (1), 51-66. Kleinert, H., & Kearns, J. F. (1999). A validation study of the performance and learner outcomes of Kentucky alternate assessment for students with sig nificant disabilities. The Association for Persons with Severe Handicaps, 24 (2), 100-110. Koretz, D., McCaffrey, D., Klein, S., Bell, R., & S techer, B. (1993). The reliability of scores from the 1992 Vermont portfolio assessment program (CSE Technical Report 355). Los Angeles: University of California. Maurer, R. (1996). Designing alternative assessments for interdiscipli nary curriculum in middle and secondary schools Needham, MA: Allyn & Bacon. Massachusetts Department of Education (2003). MCAS Alternate Assessment Summary of 2001 and 2002 Results. Malden, MA: The Commonwealth of M assachusetts. Massachusetts Department of Education (2004). MCAS Alternate Assessment (MCAS-Alt): Summary of 2003 State Results. Malden, MA: The Comm onwealth of Massachusetts. Massachusetts Department of Education, (2003). MCAS Alternate Assessment 2004 Administrator’s Training, Framingham, MA. Massachusetts Department of Education, (2002). 2003 MCAS Alternate Assessment Educators’ Training, Brookline, MA.. Massachusetts Department of Education (2002, August ). Spring 2002 MCAS Tests: Summary of State Results. Malden, MA: The Commonwealth of Mass achusetts. Massachusetts Department of Education (2002). Requi rements for the Participation of Students with Disabilities in MCAS – Including Test Accommodation s and Alternate Assessment, Malden, MA: The Commonwealth of Massachusetts. Massachusetts Department of Education (2001). Educa tor’s Manual for the MCAS Alternate Assessment, Malden, MA: The Commonwealth of Massach usetts. The Massachusetts Education Reform Act of 1993: Res earch and Evaluation Mapping Project (1999). Amherst, MA: University of Massachusetts Am herst School of Education. McDonnell, L., McLaughlin, M., & Morison, P. (1997) Educating one and all Washington, D.C.: National Academy Press. McLaughlin, M., Artiles, A. & Pullin, D. (2001). Ch allenges for transformation of special education in the 21st century: Rethinking culture in school refo rm. Journal of Special Education Leadership Miller, M. D., & Legg, S. M. (1993). Alternative as sessment in a high-stakes environment. Educational Measurement: Issues and Practice (2), 9-15. Moss, P. (1992). Shifting conceptions of validity i n educational measurement: Implications for

PAGE 23

23 of 27 performance assessment. Review of Educational Research, 62 (3), 229-258. National Center on Educational Outcomes. (1999). 1999 state special education outcomes: A report on state activities at the end of the century Minneapolis, MN: University of Minnesota. National Center on Educational Outcomes. (2000). IEP's and standards: What they say for students with disabilities Minneapolis, MN: University of Minnesota. National Center on Educational Outcomes, (2000). Special topic area: Accountability for students with disabilities. Minneapolis, MN: University of Minnesota. National Commission on Excellence in Education (198 3). A nation at risk: The imperative for education reform. Washington, D.C.: U.S. Government Printing Office. No Child Left Behind Act of 2001, P.L. 107-110.Nolet, V. (1992). Clasroom-based measurement and po rtfolio assessment. Diagnostique, 18 (1), 5-24. Olsen, K. (1998). What principles are driving development of state al ternate assessments? : Mid-South Regional Resource Center. Quenemoen, R., Thompson, S. and Thurlow, M. (2003). Measuring academic achievement of students with significant cognitive disabilities: B uilding understanding of alternate assessment scoring criteria (Synthesis Report 50). Minneapolis MN: University of Minnesota, National Center on Educational Outcomes. Rouse, M., Shriner, J., & Danielson, L. (2000). Nat ional assessment and special education in the United States and England and Wales: Towards a comm on system for all? In M. McLaughlin & M. Rouse (Eds.), Special Education and School Reform in the United S tates and Britain (pp. 67-97). London and New York: Routledge. Sailor, W. (1997). Invited Commentary. TASH, 22 (2), 102-114. Sebba, J., Thurlow, M., & Goertz, M. (2000). Educat ional accountabiloity and Students with Disabilities in the United States and Canada. In M. McLaughlin & M. Rouse (Eds.), Special Education and School Reform in the United States an d Britain New York: Routledge. Shepard, L. (1992). Will national tests improve student learning? (CSE Technical Report 342). Los Angeles: University of California. Standards for Educational and Psychological Testing (1999). Washinton, D.C.: American Educational Research Association. Thompson, S., Quenemoen, R., Thurlow, M., & Ysseldy ke, J. (2001). Alternate Assessments for Students with Disabilities Thousand Oaks, California: Corwin Press, Inc. Turner, M., Baldwin, L., Kleinert, H., & Kearns, J. F. (2000). The relation of statewide alternate assessment for students with severe disabilities to other measures of instructional effectiveness. The Journal of Special Education, 34 (2), 69-76. Warlick, K. (2000, June 23-24, 2000). Proceedings Report. Paper presented at the Alternate Assessment Forum: Connecting into a Whole, Salt Lak e City, Utah. Warlick, K., & Olsen, K. (1998). Who to assess? State practices in determining eligi bility for alternate assessment .: Mid-South Regional Resource Center. Weiner, D. (2002a, February). Personal interview wi th M. Zatta. Wiener, D. (2002b). Massachusetts: One state's approach to setting perf ormance levels on the alternate assessment (Synthesis Report 48). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved July 14, 2003, from the World Wide Web: http://education.umn.edu/NCEO/OnlinePubs/Synthesis4 8.html Worthen, B. (1993). Critical issues that will deter mine the future of alternative assessment. Phi Delta Kappan 444-456.

PAGE 24

24 of 27 Ysseldyke, J., Olsen, K., & Thurlow, M. (1997). Issues and considerations in alternate assessments (Synthesis Report No. 27). Minneapolis, MN: Univers ity of Minnesota: National Center on Educational Outcomes. Ysseldyke, J., & Olson, K. (1997). Putting alternate assessments into practice: What t o measure and possible sources of data Minneapolis: National Center for Educational Outc omes. Ysseldyke, J., & Thurlow, M. (1999). Assessment and accountability for student with disa bilities: 1999 status update and emergin issues Minneapolis: National Center for Educational Outcomes. Ysseldyke, J., Thurlow, M., & Geenen, K. (1994). Educational accountability for students with disabilities Minneapolis, MN: University of Minnesota, Nationa l Center on Educational Outcomes. Ysseldyke, J., Thurlow, M., Kozleski, E., & Reachly D. (1998). Accountability for the results of educating students with disabilities: Assessment pr ovisions of the 1997 Ammendments to the Individuals wiht Disabilities Education Act Minneapolis: National Center for Educational Outcomes. Ysseldyke, J., Thurlow, M., & Shriner, J. (1994). Students with disabilities & educational standards: Recommendations for policy & practice Minneapolis, MN: University of Minnesota, Nationa l Center on Educational Outcomes. Zatta, M. (2003). Is there a relationship between teacher experience and training and student scores on the MCAS Alernate Assessment? (Unpublished doctoral dissertation, Boston College Chestnut Hill, Massachusetts). About the AuthorsMary Zatta Perkins School for the BlindEmail: mary.Zatta@Perkins.orgMary Zatta received her Ph.D. from Boston College. She is administrator in the Deafblind Program at Perkins School for the Blind i n Watertown, Massachusetts. At the Perkins School, she is respon sible for educational and residential programming for deafblind adolescents. In addition to her work at Perkins, she has served as an international consult ant in several nations on issues related to the instruction of deafblind chil dren and adolescents. In addition, she is an adjunct faculty member at the U niversity of Massachussetts-Boston / Center for Social Developme nt and Education. Diana PullinLynch School of EducationSchool of LawBoston CollegeEmail: pullin@bc.eduDiana Pullin is Professor of Education Law and Publ ic Policy at the Lynch School of Education and the School of Law at Boston College. She holds a law degree and a Ph.D. from The University of Iowa. She has published extensively in the area of education law and public policy and has served as a consultant to numerous professional associations, research center s, advocacy groups, attorneys, and education officials on issues concer ning law, testing, and disability.

PAGE 25

25 of 27 The World Wide Web address for the Education Policy Analysis Archives is epaa.asu.edu Editor: Gene V Glass, Arizona State UniversityProduction Assistant: Chris Murrell, Arizona State University General questions about appropriateness of topics o r particular articles may be addressed to the Editor, Gene V Glass, glass@asu.edu or reach him at College of Education, Arizona State Un iversity, Tempe, AZ 85287-2411. The Commentary Editor is Casey D. Cobb: casey.cobb@unh.edu .EPAA Editorial Board Michael W. Apple University of Wisconsin David C. Berliner Arizona State University Greg Camilli Rutgers University Linda Darling-Hammond Stanford University Sherman Dorn University of South Florida Mark E. Fetler California Commission on TeacherCredentialing Gustavo E. Fischman Arizona State Univeristy Richard Garlikov Birmingham, Alabama Thomas F. Green Syracuse University Aimee Howley Ohio University Craig B. Howley Appalachia Educational Laboratory William Hunter University of Ontario Institute ofTechnology Patricia Fey Jarvis Seattle, Washington Daniel Kalls Ume University Benjamin Levin University of Manitoba Thomas Mauhs-Pugh Green Mountain College Les McLean University of Toronto Heinrich Mintrop University of California, Los Angeles Michele Moses Arizona State University Gary Orfield Harvard University Anthony G. Rud Jr. Purdue University Jay Paredes Scribner University of Missouri Michael Scriven University of Auckland Lorrie A. Shepard University of Colorado, Boulder Robert E. Stake University of Illinois—UC Kevin Welner University of Colorado, Boulder Terrence G. Wiley Arizona State University John Willinsky University of British Columbia

PAGE 26

26 of 27 EPAA Spanish and Portuguese Language Editorial BoardAssociate Editors for Spanish & Portuguese Gustavo E. Fischman Arizona State Universityfischman@asu.eduPablo Gentili Laboratrio de Polticas Pblicas Universidade do Estado do Rio de Janeiro pablo@lpp-uerj.netFounding Associate Editor for Spanish Language (199 8-2003) Roberto Rodrguez Gmez Universidad Nacional Autnoma de Mxico Adrin Acosta (Mxico) Universidad de Guadalajaraadrianacosta@compuserve.com J. Flix Angulo Rasco (Spain) Universidad de Cdizfelix.angulo@uca.es Teresa Bracho (Mxico) Centro de Investigacin y DocenciaEconmica-CIDEbracho dis1.cide.mx Alejandro Canales (Mxico) Universidad Nacional Autnoma deMxicocanalesa@servidor.unam.mx Ursula Casanova (U.S.A.) Arizona State Universitycasanova@asu.edu Jos Contreras Domingo Universitat de Barcelona Jose.Contreras@doe.d5.ub.es Erwin Epstein (U.S.A.) Loyola University of ChicagoEepstein@luc.edu Josu Gonzlez (U.S.A.) Arizona State Universityjosue@asu.edu Rollin Kent (Mxico) Universidad Autnoma de Puebla rkent@puebla.megared.net.mx Mara Beatriz Luce (Brazil) Universidad Federal de Rio Grande do Sul-UFRGSlucemb@orion.ufrgs.br Javier Mendoza Rojas (Mxico)Universidad Nacional Autnoma deMxicojaviermr@servidor.unam.mx Marcela Mollis (Argentina)Universidad de Buenos Airesmmollis@filo.uba.ar Humberto Muoz Garca (Mxico) Universidad Nacional Autnoma deMxicohumberto@servidor.unam.mx Angel Ignacio Prez Gmez (Spain)Universidad de Mlagaaiperez@uma.es DanielSchugurensky (Argentina-Canad) OISE/UT, Canadadschugurensky@oise.utoronto.ca Simon Schwartzman (Brazil) American Institutes forResesarch–Brazil (AIRBrasil) simon@sman.com.br

PAGE 27

27 of 27 Jurjo Torres Santom (Spain) Universidad de A Coruajurjo@udc.es Carlos Alberto Torres (U.S.A.) University of California, Los Angelestorres@gseisucla.edu EPAA is published by the Education Policy Studies Laboratory, Arizona State University


xml version 1.0 encoding UTF-8 standalone no
mods:mods xmlns:mods http:www.loc.govmodsv3 xmlns:xsi http:www.w3.org2001XMLSchema-instance xsi:schemaLocation http:www.loc.govmodsv3mods-3-1.xsd
mods:relatedItem type host
mods:identifier issn 1068-2341mods:part
mods:detail volume mods:number 12issue 16series Year mods:caption 20042004Month April4Day 1010mods:originInfo mods:dateIssued iso8601 2004-04-10