USF Libraries
USF Digital Collections

Evaluation of the first year of a statewide problem solving/response to intervention initiative


Material Information

Title:
Evaluation of the first year of a statewide problem solving/response to intervention initiative: preliminary findings
Physical Description:
Book
Language:
English
Creator:
Castillo, Jose Michael
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla
Publication Date:
2009
Subjects

Subjects / Keywords:
Data-Based Decision-Making
Systems Change
Educators
Implementation
Professional development
Dissertations, Academic -- Psychological and Social Foundations -- Doctoral -- USF   ( lcsh )
Genre:
non-fiction   ( marcgt )

Notes

Abstract:
ABSTRACT: This program evaluation study examined the relationship between Problem Solving/Response to Intervention (PS/RtI) training and technical assistance and educator and implementation outcomes following the first year of a 3-year project. Educators from 40 pilot schools in eight districts participating in the study received ongoing professional development targeting the rationale for the initiative, systems change issues, and the steps of the PS/RtI model. Data on educator beliefs, educator perceived and demonstrated PS/RtI skills, and PS/RtI implementation were collected throughout the year from the 40 pilot schools as well as 33 comparison schools. To examine the relationships between PS/RtI training and technical assistance and preliminary outcomes, a series of multi-level models were conducted. Results of the analyses suggested that the ongoing professional development provided during the first year related to some outcomes. Specifically, PS/RtI training and technical assistance appeared to be positively related to increases in the beliefs and perceived skills of educators. The relationship between professional development activities and other outcomes targeted during the first year (i.e., demonstrated skills and implementation) was unclear. Potential explanations for the findings from this study and implications for future research are discussed.
Thesis:
Dissertation (Ph.D.)--University of South Florida, 2009.
Bibliography:
Includes bibliographical references.
System Details:
Mode of access: World Wide Web.
System Details:
System requirements: World Wide Web browser and PDF reader.
Statement of Responsibility:
by Jose Michael Castillo.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 368 pages.
General Note:
Includes vita.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 002063926
oclc - 558648071
usfldc doi - E14-SFE0003020
usfldc handle - e14.3020
System ID:
SFS0027337:00001




Full Text

PAGE 1

Evaluation of the First Year of a Statewide Problem Solving/Response to Intervention Initiative: Preliminary Findings

by

Jose Michael Castillo, Educational Specialist

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy
Department of Psychological & Social Foundations
College of Education
University of South Florida

Major Professor: George Batsche, Ed.D.
Michael Curtis, Ph.D.
Jeffrey Kromrey, Ph.D.
Roger Boothroyd, Ph.D.

Date of Approval: June 15, 2009

Keywords: data-based decision-making, systems change, educators, implementation, professional development

Copyright 2009, Jose Castillo

PAGE 2

Dedication

I want to thank my beautiful wife Monica for her patience and support over the years as I worked to complete graduate school (and particularly this dissertation). I also would like to thank my mom for always putting my brother and me (including our education) first when raising us as a single mother. Last but not least, thank you to George and Mike for their mentorship throughout the years and to Roger and Jeff for their support during my doctoral studies.

PAGE 3

Table of Contents

List of Tables iv
List of Figures v
Abstract vi

Chapter I: Introduction 1
    Overview of Service Delivery in the PS/RtI Model 6
    Outcomes in the Traditional Model Versus the PS/RtI Model 10
    Implementation Challenges to be Faced 12
    Evaluating Implementation of the PS/RtI Model 15
    Purpose 18

Chapter II: Literature Review 20
    Overview of the PS/RtI Model 21
    Student and Systemic Outcomes in the Traditional Model Versus the PS/RtI Model 27
        Traditional Model 27
        The PS/RtI Model 39
    Implementation Challenges to be Faced 58
        Consensus 58
        Implementation Integrity 62
    Research on Program Evaluation Models 73
    Conclusions 81

Chapter III: Method 83
    Participants 83
        Pilot Schools 83
        Comparison Schools 89
    Project Description 91
    Measures 94
        Beliefs Survey 98
        Perceptions of RtI Skills Survey 99
        Self-Assessment of Problem Solving Implementation 100
        PS/RtI Direct Skill Assessments 101
        Tier I and II Critical Components Checklist 102
    Procedures 103
        Personnel Orientation and Training 103

PAGE 4

        Baseline Data Collection 104
        Demonstration Site Training and Technical Assistance Year 1 105
        Year 1 Data Collection 107
    Data Analysis 114

Chapter IV: Results 120
    Research Question 1 123
        Educators' Beliefs About Student Learning and Service Delivery 124
            Assumptions 124
            Descriptive Data 125
            Educator Beliefs Multilevel Model Results 132
        Educators' Perceived RtI Academic (RTI-A) Skills 141
            Assumptions 141
            Descriptive Data 142
            Educator Perceived RTI-A Skills Multilevel Model Results 148
        Educators' Perceived RtI Behavior (RTI-B) Skills 155
            Assumptions 155
            Descriptive Data 156
            Educator Perceived RTI-B Skills Multilevel Model Results 158
    Research Question 2 160
        Educators' Demonstrated PS/RtI Skills 165
            Assumptions 165
            Descriptive Data 167
            Educator Demonstrated Skills Multilevel Model Results 171
    Research Question 3 177
        Self-Report of PS/RtI Implementation in Pilot Schools 177
            Assumptions 178
            Descriptive Data 179
            SAPSI Multilevel Model Results 182
        PS/RtI Implementation Levels Evident from Permanent Products 188
            Assumptions 188
            Descriptive Data 189
            Tier I and II Critical Components Checklist Multilevel Model Results 193
    Summary of Results 198

Chapter V: Discussion 201
    Potential Explanations for Year 1 Findings 204
        Educator Beliefs and Perceived RtI Skills 204

PAGE 5

        Educator Demonstrated Skills 211
        Implementation of a PS/RtI Model 213
        Other Variables Related to Year 1 Project Outcomes 215
    Implications for Future Project Activities 223
    Potential Implications for Future Research 228
    Limitations 230
    Conclusions 232

References 234

Appendices 244
    Appendix A Demonstration District Mini-Grant Application and Scoring Rubric 244
    Appendix B Problem-Solving/Response-to-Intervention Project Implementation Plan 267
    Appendix C Problem-Solving/Response-to-Intervention Project Evaluation Rubric 287
    Appendix D Example Validation Forms 297
    Appendix E Copies of Measures 306
    Appendix F Data Collection, Entry, and Analysis Rubric 338
    Appendix G Statistical Models 343
    Appendix H Residual Variance Assumption Analyses Summary 349

About the Author End Page

PAGE 6

List of Tables

Table 1 Size, Location, and Student Demographics of Selected Demonstration Districts 86
Table 2 Descriptive Statistics for Pilot and Comparison Schools for School Size, Student Demographics, and Student Achievement 88
Table 3 Beliefs Multi-Level Model Data 127
Table 4 Perceptions of RTI-A Skills Multi-Level Model Data 143
Table 5 Perceptions of RTI-B Skills Multi-Level Model Data 157
Table 6 Skill Assessment Multi-Level Model Data 167
Table 7 Self-Assessment of Problem Solving Implementation Multi-Level Model Data 179
Table 8 Tier I & II Critical Components Checklist Multi-Level Model Data 190

PAGE 7

List of Figures

Figure 1. Problem-Solving/Response-to-Intervention (PS/RtI) Diagram 7
Figure 2. Three-Tiered Response-to-Intervention Model 8

PAGE 8

Evaluation of the First Year of a Statewide Problem Solving/Response to Intervention Initiative: Preliminary Findings

Jose Castillo

ABSTRACT

This program evaluation study examined the relationship between Problem Solving/Response to Intervention (PS/RtI) training and technical assistance and educator and implementation outcomes following the first year of a 3-year project. Educators from 40 pilot schools in eight districts participating in the study received ongoing professional development targeting the rationale for the initiative, systems change issues, and the steps of the PS/RtI model. Data on educator beliefs, educator perceived and demonstrated PS/RtI skills, and PS/RtI implementation were collected throughout the year from the 40 pilot schools as well as 33 comparison schools. To examine the relationships between PS/RtI training and technical assistance and preliminary outcomes, a series of multi-level models were conducted. Results of the analyses suggested that the ongoing professional development provided during the first year related to some outcomes. Specifically, PS/RtI training and technical assistance appeared to be positively related to increases in the beliefs and perceived skills of educators. The relationship between professional development activities and other outcomes targeted during the first year (i.e., demonstrated skills and implementation) was unclear. Potential explanations for the findings from this study and implications for future research are discussed.

PAGE 9

Chapter I
Introduction

Public schools, as government-funded institutions, are expected to abide by federal and state statutes governing educational services (Jacob & Hartshorne, 2003). Despite decades of federal and state educational reform mandates focusing on improving the processes of teaching (e.g., strengthening curricula, upgrading the quality of teachers and instruction, improving instructional resources and materials; Passow, 1990), significant proportions of students continue to struggle to achieve academic and behavior benchmarks. Recent estimates indicate that approximately 20-40% of school-age children experience reading difficulties (Fletcher & Lyon, 1998; Grigg, Donahue, & Dion, 2007), while approximately 20-30% struggle with basic math skills (Grigg et al., 2007). Furthermore, epidemiological study estimates indicate that 16-22% of school-age children exhibit diagnosable mental health problems (Hoagwood & Johnson, 2003), many of which may be moderated by academic and behavioral problems encountered in schools (Kellam, Mayer, Rebok, & Hawkins, 1998). In addition, significant achievement gaps continue to exist between racial/ethnic minorities, low socio-economic status (SES) students, and English Language Learners (ELLs) and their high-SES, Caucasian peers (Grigg et al., 2007).

The aforementioned issues, along with studies demonstrating that students in the United States perform lower on standardized achievement tests than their same-age peers

PAGE 10

from other industrialized nations (National Center for Education Statistics, 2005), are the catalyst for the school accountability movement. The cornerstone of the accountability movement, the No Child Left Behind Act of 2001 (NCLB, 2002), shifts the focus of educational reform away from improving the processes of education and towards providing services that improve outcomes for all students. NCLB requires that every student perform at grade level in reading and math by the 2013-2014 school year. States are required to develop intermediate goals that establish the percentage of students that must meet standards each year for a school to make Adequate Yearly Progress (AYP). These goals must be raised at least every 3 years and progress must be monitored using statewide assessments. Results from the statewide assessments must be disaggregated by race/ethnicity, socio-economic status (SES), English Language Learner (ELL) status, and disability (SWD) status when determining AYP. NCLB further stipulates that evidence-based practices be used to instruct students and has allocated over 1 billion dollars in funding to help schools improve the quality of reading instruction in general education through implementation of programs such as Reading First and Early Reading First. Thus, NCLB holds schools accountable for the progress of all students by mandating that schools use evidence-based instruction and data to inform decision-making.

Although schools are now being held accountable for the aggregated and disaggregated outcomes of all students, many questions remain about how schools can meet the mandates of NCLB (2002). To address these questions adequately, the reasons for the failure of schools to help significant proportions of students achieve grade-level standards must be examined. Many researchers purport that one of the contributing factors behind the high levels of academic and behavior difficulties is that the traditional

PAGE 11

3 3 educational system is not structured to re spond to students with diverse learning needs (Tilly, 2002; Torgesen, 2002). In structional options for student s in the tradi tional system are often bifurcated into two distinct categories, general and special education. Students who do not respond to the core general educa tion curriculum are ofte n referred for special education services with little or no attempt to provide eviden ce-based interventions in the general education environment (Batsche Elliott, Schrag, & Tilly, 2005). Importantly, relying on special education as the primary mechanism for providing services to underachieving st udents is wrought with technica l and logistical problems. Invalid identification procedures and increasing referral rates have resulted in a “wait-tofail” service delivery model (Batsche, E lliott, & Graden et al., 2005; President’s Commission on Excellence in Special Educa tion [PCESE], 2002). Researchers have raised concerns over the persis tent use of invalid identification criteria that rely on discrepancies between norm-referenced cognitive and academic achievement test scores to determine which students have learning di sabilities that enable them to qualify for additional services (e.g., Flet cher, Francis, Morris, & Lyon, 2005). Critics of this approach have argued that requiring significant discrepa ncies between scores on normreferenced tests of cognitive processing and achievement (e.g., one standard deviation) force struggling students to wait for the gap be tween themselves and their peers to widen (Fletcher, Coulter, Reschly, & Vaughn, 2004). Despite these concerns, eligibility for special education services for students susp ected of having a lear ning disability has typically been tied to discrepa ncy and regression models. T hus, students must fall months or even years behind their peers to be f ound eligible, while many referred students never receive services despite remaining behind their peers (Stanovich, 1999).

PAGE 12

4 4 One of the major factors that contributed to widespread use of the “wait-to-fail” traditional model was the previous iterations of the Individuals with Disabilities Education Act of 2004 (IDEIA, 2004). The prev ious iterations of ID EIA (starting with the Education of the Handicapped Act of 1975) created and shaped the bifurcated traditional system through a categorical f unding mechanism requiri ng all IDEIA monies to be spent directly on special education services. Therefore, although Title I of the NCLB Act (formerly the Elementary and Secondary Education Act, 1965) provides funding for remedial services in reading and math to thos e schools with significant proportions of students eligible for freereduced lunch, many sc hools perceived that labeling students with a disability, despite the evidence suggesting that the identification procedures used were invalid, would secure additional services for struggling students (Fletcher et al., 2005). The 2004 reauthorization of IDEIA allows a maximum of 15% of IDEIA funding to be allocated to strengthening general ed ucation instruction th rough the provision of early intervening services to non-special e ducation students. Speci fically, the provision states: “A local educational agency may not use more than 15 percent of the amount such agency receives under this part for any fi scal year, less any amount reduced by the agency pursuant to subsection (a)(2)(C), if any, in combination with other amounts (which may include amounts other than e ducation funds), to develop and implement coordinated, early in tervening services, which may include interagency financing structures, for students in kindergarten th rough grade 12 (with a particular emphasis on students in kindergarten through gr ade 3) who have not been identified as needing special

PAGE 13

5 5 education or related services but who need additional academic and behavioral support to succeed in a general education envi ronment” (IDEIA, 2004, Sec. 613(f)). Thus, the 15% clause provides schools with additional funds to strengthen the quantity and quality of evidence-based interventions (early intervening services) available to general education students who fall behind. In addition, schools can spend money on assessments that allow educators to reliably and validly monitor th e progress of student response to intervention. The exp ectations evident from IDEIA, therefore, are for schools to prevent problems through evidence-based, ear ly intervening servic es in the general education environment and decrease relian ce on special education services as the mechanism for remediation of student learning difficulties. In sum, NCLB (2002) and IDEIA (2004) mandate that schools use evidencebased practices to improve student outcomes Schools are required to provide researchbased instruction and use assessment to ma ke data-based decisions about student progress. Importantly, both laws include language that emphasizes improving student performance in relationship to state approved standards. Through the two laws, mandates and funding are provided to schools in an effo rt to improve the quantity and quality of assessment and instructional options in gene ral education with the goal of improving the performance of all students, re gardless of whether they are identified with disabilities. The question of how educators are expected to meet the requirement of improving the performance of all students, however, rema ins unclear. Despite uncertainty regarding how to meet these expectations, referen ces to the Problem Solving/Response to Intervention (PS/RtI) model, an approach to organizing services supported in the educational literature, occur throughout IDEIA.

PAGE 14

Overview of Service Delivery in the PS/RtI Model

Consistent with the expectations of NCLB (2002) and IDEIA (2004), a PS/RtI model uses assessment to facilitate the development and implementation of evidence-based interventions in the general education environment and to determine the extent to which students respond to the interventions through continuous progress monitoring (Batsche, Elliott, Graden, et al., 2005). Although a number of examples of PS/RtI models exist in the literature, the process typically involves progressing through four major stages referred to as the problem-solving process: problem identification, problem analysis, plan development and implementation, and program evaluation/response to intervention (Bergan & Kratochwill, 1990). When addressing problems for a student or group of students, educators involved in group problem-solving teams use the four stages of the problem-solving process to systematically (1) identify the expected replacement behavior (i.e., the skill the student or students is/are expected to perform), (2) determine what factors are inhibiting performance of the replacement behavior, (3) develop and implement a plan to remove barriers to learning, and (4) evaluate student RtI (Batsche, Elliott, Graden, et al., 2005). Research on group problem-solving teams suggests that implementing problem-solving procedures improves student (e.g., academic performance, on-task behavior) and systemic outcomes (e.g., special education referrals and placements; Burns & Symington, 2002). See Figure 1 below for a diagram of the PS/RtI model.
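To make the four stages concrete, the sketch below walks a single case through one cycle of the problem-solving process. It is a minimal illustration only: the stage names and the four decisions come from the description above, while the data structure, field names, and example values are hypothetical and are not part of Batsche, Elliott, Graden, et al.'s (2005) model or of this study.

```python
# Illustrative sketch of one pass through the problem-solving process.
# Stage names follow the text above; everything else is invented for illustration.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ProblemSolvingCase:
    replacement_behavior: str                  # 1. skill the student is expected to perform
    barriers: List[str] = field(default_factory=list)   # 2. verified inhibiting factors
    intervention_plan: Optional[str] = None    # 3. evidence-based plan to remove barriers
    responding: Optional[bool] = None          # 4. student response to intervention (RtI)

def problem_solving_cycle(case: ProblemSolvingCase) -> None:
    """Walk one case through identification, analysis, plan implementation, and RtI."""
    # Problem identification: define the replacement behavior in measurable terms.
    print(f"Target replacement behavior: {case.replacement_behavior}")
    # Problem analysis: retain only hypotheses supported by data as barriers.
    print(f"Verified barriers: {case.barriers}")
    # Plan development and implementation: link the plan directly to the barriers.
    print(f"Plan implemented: {case.intervention_plan}")
    # Program evaluation / RtI: continue or modify based on progress-monitoring data.
    print("Decision:", "continue plan" if case.responding else "modify or intensify plan")

problem_solving_cycle(ProblemSolvingCase(
    replacement_behavior="reads grade-level text at benchmark fluency",
    barriers=["insufficient practice with connected text"],
    intervention_plan="small-group repeated reading, four times per week",
    responding=False,
))
```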

PAGE 15

Figure 1. Problem-Solving/Response-to-Intervention (PS/RtI) Diagram.

In addition to providing a framework for making decisions about student performance, the PS/RtI model contains mechanisms to help schools use their limited resources more efficiently. To increase the efficiency with which schools provide services, interventions are available for both individual and groups of students. Interventions available to students are typically categorized into three tiers that intensify and focus the interventions (Batsche, Elliott, Graden, et al., 2005). Although the procedures vary somewhat for academics and behavior, the three-tier conceptual model is similar across both domains (see Figure 2 below). A brief description of the three-tier model based on Batsche, Elliott, Graden, et al.'s (2005) conceptualization follows.

PAGE 16

8 8 Figure 2 Three-Tiered Responseto-Intervention Model. Tier I instruction involves pr oviding scientific, research -based instruction to all students (i.e., universal intervention). Educators administer universal screening assessments 3-4 times per year and examine existing data to determine the overall impact of Tier I instruction and scr een for individual students not responding to the curriculum. Research examining the impact of implemen ting Tier I interven tion procedures has demonstrated improvements in academic, beha vioral, and socio-emotional outcomes for students ( Dolan, Kellam, Brown, et al., 1993; Kellam, et al., 1998; Kellam, Rebok, Mayer, Ialongo, & Kalodner, 1994; Kellam, Werthamer-Larsson, Dolan, et al., 1991) Tier II intervention (i.e., supplemental inte rvention) involves additional time and/or skill focus in the curriculum targ eting the content area of c oncern (e.g., reading). Students

PAGE 17

9 9 receiving Tier II interventions are monitored mo re frequently (e.g., monthly) to facilitate decision-making regarding the effectiveness of the intervention plan developed through the problem-solving process. Examination of the impact of interventions consistent with Tier II procedures has de monstrated that supplementa l intervention improves the academic performance of students (Kamps & Greenwood 2003; Torgesen et al., 1999; Torgesen et al., 2001; Vaughn, 2003; Vellutino et al., 1996). Although the majority of students should respond to Tier I and II instru ction, estimates indicate that approximately 5% will require more intense, targeted interventions availa ble through Tier III procedures. Tier III interventions typically involve highly idios yncratic, intensive services that require the expe rtise of a diverse team of tr ained individuals. Educators monitor progress frequently (e.g., weekly) to make decisions regarding student RtI. Research examining the impact of Tier III services is sparse and difficult to interpret because the majority of studies examining idio syncratic, intensive interventions have not demonstrated that the participants failed to respond to systematically administered Tier I and II interventions. Interventions developed for students receiving Tier II I services may or may not involve resources outside of wh at can be realistica lly expected in the general education setting. When the resources (e.g., time, materi als, personnel) required exceed what is available through general educa tion, then the student is cons idered for spec ial education eligibility. Thus, in the PS/RtI model, special education becomes a mechanism for providing additional, intensive services to students, not a location where students diagnosed with disabilities go to receive in struction. In addition, the PS/RtI model moves

PAGE 18

10 10 the requirements for special education eligib ility away from tradi tional norm-referenced assessments and towards the level of res ources needed to improve student RtI. In summary, the PS/RtI model serves seve ral functions. First, PS/RtI serves as a decision-making framework for determining wh at services should be provided to students. Learning problems can be systematica lly identified early in the problem cycle, analyzed, and addressed to improve student outcomes at the group and individual levels. Second, PS/RtI functions as an indicator of the frequency and intensity of services needed for all students to be successf ul. By evaluating student RtI at three tiers of intervention, educators are able to more efficiently use their limited resources and improve student performance in the general education environmen t. In other words, a tiered system of intervention allows educators to solve less severe problems in the general education environment and invest additional resources in those students who require more intensive intervention to achieve educational benchmar ks, thereby meeting the mandates of NCLB (2002) and IDEIA (2004). Finally, the PS/RtI model is used to determine e ligibility for special education by identifyi ng what students require serv ices beyond the capacity of general education. Outcomes in the Traditional Model Versus the PS/RtI Model To date, research on impl ementation of the PS/RtI model has demonstrated improved student and systemic outcomes when compared to the traditional model. As was previously mentioned, significant proportions of students, particularly students from traditionally disadvantaged backgrounds (e.g., racial/ethnic minorities, low-SES students, ELLs), continue to demonstrate academic and behavioral difficulties (Hoagwood & Johnson, 2003; National Center for Education St atistics, 2005) in the traditional system.

PAGE 19

11 11 Compounding the problem of significant proportions of struggling students is the fact that relying on special education as the prim ary mechanism for providing services to underachieving students has demonstrated littl e efficacy. Evidence suggests that special education has done little to improve th e academic or professional outcomes (e.g., proportion of students who remained employe d following graduation) of students found eligible for services (Forness, 2001; Kavale & Forness, 1999; PCESE, 2002). In addition, increases in the number of students referre d and placed in special education programs (PCESE, 2002; Vaughn & Fuchs, 2003) and overrepresentation of students from racially/ethnically di verse backgrounds, males, student s from low-SES backgrounds, and ELLs (Heller, Holoma, & Messick, 1982; Donovan & Cross, 2002) are systemic problems associated with the traditional m odel that predominantly relies on special educators to provide additional services. Together, these data suggest that th e traditional model does not result in impr oved performance for all students nor equitable outcomes for disaggregated subgroups (requirements for schools in the accountability context set forth by NCLB [2002] and IDEIA [2004]). Conversely, evaluations of implementati on of the PS/RtI model at the building, district, intermediate unit, and state levels suggest that the model leads to improved student and systemic outco mes. Findings regarding student outcomes include improvements in reading and math perfor mance (Burns, Appleton, & Stehouwer, 2005; Callender, 2006; Knoff & Batsche, 1995; Marston, Muyskes, Lau, & Canter, 2003; McGlinchey, Schallmo, & Goodman, 2006; O’ Conner, 2000; Stollar & Graden, 2006; Tilly, 2003; Torgesen, 2005; VanDerHe yden & Burns, 2005; VanDerHeyden & Jimmerson, 2005). In terms of systemic out comes, reductions in special education

PAGE 20

12 12 referrals and placements, decreases in disp roportional representation among traditionally disadvantaged groups, and decreases in office discipline referrals have been reported (Burns, Appleton, & Stehouwer, 2005; Knoff & Batsche, 1995; Marston, Muyskes, Lau, & Canter, 2003; Tilly, 2003; Torges en, 2007; VanDerHeyden & Jimerson, 2005; VanDerHeyden, Witt, & Gilberston, 2007). Theref ore, evaluations of the PS/RtI model suggest that the preventive approach to service delivery results in improved academic performance for students and equitable out comes for disaggregated subgroups, outcomes that are consistent with the manda tes of NLCB (2002) and IDEIA (2004). Although these data suggest that positive outcomes resulted from implementation of PS/RtI, the evaluations have occurred in a small number of sites that varied in terms of the unit of analysis (i.e., build ing, district, intermediate unit, or state level). Therefore, additional data are needed to help educator s make decisions about the efficacy of the PS/RtI model for improving student outcomes. In addition, before widespread adoption and evaluation of the model can occur, a number of fact ors impacting implementation must be considered. Implem entation of any new service delivery model in schools, including PS/RtI, is dependent on a number of factors including ove rcoming a history of educational reform failure. Implementation Challenges to Be Faced For decades, educational reform move ments have been commonplace in schools (Passow, 1990). Whether through legislati on, administrative polic y, or some other mechanism, schools have attempted a number of large-scale educational reforms with limited success (Sarason, 1990). According to Sarason (1990), meaningful educational reform has failed because legislators, policymakers, and administrators paid little

PAGE 21

13 13 attention to schools in the c ontext of their histories or larger social systems (e.g., communities, districts, states, mandates). In many instances, initiatives were launched without investing the time and resources need ed to investigate the problem and redesign the system in a coordinated, systematic manner. The result has been a myriad of initiatives, often targeting the same probl ems, but requiring conflicting actions from educators. When one initiative did not demons trate results, another was often attempted without examination of why the previous re form did not produce the desired results. Consequently, what has resulted is a culture in which educators expect that reform movements that are launched will be replaced by another, often conflicting, initiative. Sarason (1990) purports that the reason many in itiatives fail is because schools are left unchecked to implement the initiatives. Sara son argues that when provided with multiple, often competing, initiatives and little or no support, schools will respond in ways that minimize the effort required to change, ther eby limiting meaningful educational reform. In fact, Sarason (1982) has demonstrated th at teachers typically do not implement new practices that require more than a few skills that are outside of their existing skill set. Given that implementation of the PS/RtI model requires a major conceptual and practical shift from the traditional mode l, Sarason’s (1982) findings are cause for concern. The PS/RtI model requires educators to administer assessments and link the data to evidence-based interventions implemented in the general educa tion environment. In addition, educators must learn to make data-based decisions to determine the effectiveness of interventions implemente d. Because of NCLB (2002), educators must shift more of their focus from what servic es are provided to how the services provided are improving student performance. The skil ls required to make decisions about the

PAGE 22

14 14 effectiveness of services are often different from the requirements of the traditional model in which struggling st udents are referred for speci al education and uniform procedures are followed to determine eligibility. In other words, the shift from prescribed procedures to using data to develop, imple ment, and monitor interventions requires new skill sets that may be outside of the exis ting skill sets of most educators. Consistent with Sarason’s (1982) findings research on intervention integrity has demonstrated that many intervention plans ar e not implemented by teachers with fidelity (Noell, et al., 2005). Given that intervention implementation is but one component of the PS/RtI model, concerns have been raised re garding the extent to which educators can implement PS/RtI as intended (Noell et al., 2005; VanDerHeyden, Witt, & Gilbertson, 2007). Drift from 100% implementation integrity appears inevitable; however, questions remain about the degree to which the model can be implemented with fidelity and to what extent the level of fidelity impacts stude nt outcomes (Burns, Appleton, & Stehouwer, 2005). Although these questions remain unans wered, there is reason to believe that implementation integrity can be improved by following effective professional development practices while providing direct training in problem-solving procedures. According to Showers, Joyce & Be nnett (1987) effective professional development practices contain four major stages; theory, de monstration, opportunities to practice, and immediate corrective feedback. First, the theoretical basis and rationale behind the skills being taught must be provided. The purpose of providing this information is for educators to obtain a knowledge base on which to draw when implementing the new practices and to achie ve consensus that the new practices are important to implement. Next, individua ls with experience implementing the new

PAGE 23

15 15 practices model the required skills. Finally, ed ucators learning the new skills are provided multiple opportunities to practice followed by immediate corrective feedback after each opportunity. Joyce and Showers (1996) pur ported that subsequent research on professional development models indicated that the inclusion of this final step did not appear to be necessary for new practices to be implemented. Regardless of whether corrective feedback is include d in a professional developm ent plan, the purpose of the latter stages is for educators to become pr oficient with the new skills through observation, repeated practice, and refinement of what is being practiced ( potentially through the provision of corrective f eedback). Showers, Joyce, and co lleagues have demonstrated that professional development models that in clude these stages result in improved implementation of new practices. Importantl y, researchers examining implementation of problem-solving procedures have demonstrated that using direct training methods and providing opportunities to prac tice results in increased us e of problem-solving methods (Curtis & Metz, 1986; Zins & Ponti, 1996). Evaluating Implementation of the PS/RtI Model Given the questions regarding the feasib ility of implementing the PS/RtI model with integrity and the degree to which impl ementation integrity impacts student outcomes (Noell et al., 2005; VanDerHeyden et al., 2007 ), professional development models that lead to high levels of fidelity are necessary. Therefore, it will be important for educators implementing PS/RtI to evaluate the impact of their training programs in terms of levels of consensus and the knowledge and skills acquired by participants. If educators are expected to implement the model with integr ity, high levels of agr eement with the core principles associated with PS/RtI as well as mastery of PS/RtI knowledge and skills will

PAGE 24

16 16 be required. Previous evaluations of impl ementation of the PS/RtI have examined teacher, administrator, and parent satisfac tion (Batsche, Elliott, Schrag, et al., 2005; Callender, 2006) and the perceived knowledge an d skills of practitione rs relative to the PS/RtI (Callender, 2006; Stollar & Graden, 2006) These evaluations have demonstrated high levels of satisfaction, but mixed result s in terms of the perceived knowledge and skills required to fully implement the model. None of these evaluations, however, examined other variables likely to impact the degree to which educators successfully implement PS/RtI practices nor their impact on student outcomes. Variables such as core beliefs that educators hold about educati ng students and what skills they demonstrate mastery of may explain some variation in th e implementation of the model and ultimately the outcomes of students. Given that drift in implementation of the PS/RtI model is likely to occur, the degree to which the knowledge and skills acqui red by participants are implemented with integrity in schools should be examined. Prev ious evaluations of the PS/RtI model have included analyses examining implementation in tegrity as part of the evaluation model (Callender, 2006; Stollar & Graden, 2006; Va nDerHeyden, Witt, & Gilbertson, 2007). In an example of a district level evalua tion, VanDerHeyden, Witt, & Gilbertson (2007) reported that educators were able to impleme nt a version of the PS/RtI model with high levels of fidelity. Conversely, Callender ( 2006) and Stollar and Gr aden (2006) reported that evaluations at the state level showed in consistent implementation of the components of the PS/RtI model. Importantly, no previ ous evaluations were found that examined implementation integrity in terms of its imp act on student outcomes However, research

PAGE 25

17 17 on the impact of implementation integrity on student outcomes will be important to determine what levels of fidelity predict improvements in academics and behavior. The emphasis placed on improving student performance in NCLB (2002) and IDEIA (2004) necessitates that any mode l implemented in schools demonstrates improvements in measurable student outcomes. Therefore, data that assess academic and behavioral performance of students in school s implementing the PS/RtI model will need to be a part of any evaluation model. Data that examine student growth in reading and math skills that are tied to mandated goals fr om NCLB will be partic ularly important to collect. Data examining the impact of the model on system outcomes related to academic performance such as office discipline re ferrals, special education referrals and placements, and disproportional representation also will need to be collected. As was previously mentioned, evaluati ons conducted at the building, district, intermediate unit, and state levels have examined these out come variables and found improvements as a result of implementation of the PS/RtI m odel (Burns, Appleton, & Stehouwer, 2005; Calender, 2006; Knoff & Batsche, 1995; Marston, Muyskes, Lau, & Canter, 2003; McGlinchey, Schallmo, & Goodman, 2006; O’ Conner, 2000; Stollar & Graden, 2006; Tilly, 2003; VanDerHeyden & Burns, 2005; VanDerHeyden & Jimmerson, 2005; VanDerHeyden, Witt, & Gilberston, 2007). Howe ver, given that these studies were conducted at a limited number of sites, additi onal data collection and analyses are needed to determine whether the positive impact of implementing the PS/RtI model can be generalized to other sites as well as what conditions facilitate improved student and systemic outcomes.

PAGE 26

Purpose

Schools, districts, and states are in the process of piloting and/or implementing the PS/RtI model (Batsche, Elliott, Graden, et al., 2005). However, more systematic research and evaluation of the impact of implementing the PS/RtI model is needed. Because implementation of the model requires approximately 4-6 years (Batsche, Elliott, Graden, et al., 2005), research on both the proximal and distal (i.e., short- and long-term, respectively) impact of the model on important implementation variables (e.g., training impact, implementation integrity, beliefs) and student outcomes should be conducted. The purpose of the study discussed below was to examine the proximal relationship between PS/RtI training and technical assistance and a number of variables associated with implementation.

Schools participating in the first year of a state initiative to implement PS/RtI practices were used to evaluate the relationship between the training and technical assistance provided and educator and implementation outcomes. First, the study evaluated the relationship between a multi-layered professional development model and the beliefs and perceived skills of educators relative to the PS/RtI model. Second, the study examined the extent to which educators who received targeted professional development demonstrated the knowledge and skills necessary for PS/RtI implementation. Finally, the study investigated the extent to which professional development activities were associated with educators implementing the PS/RtI model with integrity. Given the 4-6 year estimate for full implementation provided by Batsche, Elliott, and Graden, et al. (2005), student outcomes were not examined in this study. Thus, the research questions addressed were:

PAGE 27

1) What is the relationship between initial and repeated PS/RtI training and technical assistance and the beliefs and perceived skills of educators?
2) What is the relationship between targeted professional development and the demonstrated PS/RtI knowledge and skills of educators?
3) What is the relationship between PS/RtI training and technical assistance and implementation integrity?
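The abstract notes that these questions were addressed with a series of multi-level models (the study's actual specifications appear in Appendix G, Statistical Models). As a rough illustration of that general approach, and not of the study's analysis, the sketch below fits a two-level growth model in which educators' survey scores are clustered within schools and the pilot-by-occasion interaction carries the question of interest. All column names and the simulated data are hypothetical assumptions made only for this example.

```python
# Illustrative two-level (educators nested in schools) growth model relating
# measurement occasion and pilot/comparison status to a survey outcome.
# Column names and data are hypothetical; see the dissertation's Appendix G
# for the actual statistical models.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

rows = []
for school in range(40):
    pilot = 1 if school < 20 else 0           # hypothetical pilot vs. comparison flag
    school_effect = rng.normal(0, 0.3)        # school-level deviation in starting point
    for educator in range(8):
        baseline = 3.5 + school_effect + rng.normal(0, 0.4)
        for occasion in range(3):             # e.g., fall, winter, spring administrations
            growth = 0.15 * occasion * pilot  # assume pilot schools change faster
            score = baseline + growth + rng.normal(0, 0.25)
            rows.append({"school_id": school, "pilot": pilot,
                         "occasion": occasion, "belief_score": score})

df = pd.DataFrame(rows)

# Fixed effects: occasion, pilot status, and their interaction (the term of interest);
# a random intercept for school accounts for educators clustering within buildings.
model = smf.mixedlm("belief_score ~ occasion * pilot", data=df, groups=df["school_id"])
result = model.fit()
print(result.summary())
```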

PAGE 28

20 20 Chapter II Literature Review NCLB (2002) and IDEIA (2004) are shif ting the focus of se rvice delivery in schools from process accountability to outco me accountability. Both laws mandate that schools use evidence-based practices to improve student outcomes. Mandates and funding are provided to schools in an effort to improve the quantity and quality of assessment and instructional options in gene ral education with the goal of improving the performance of all students, including those id entified with disabilities. The question of how educators are expected to meet the requi rement of improving th e performance of all students, however, remains unclear. Despite uncertainty regarding how to meet these expectations, proposals for how to organize service delivery to maximize student performance exist in the literature. One poten tial mechanism for meeting the expectations of NCLB and IDEIA that has received attention in the literature is the PS/RtI model. The PS/RtI model advocates purport that implementing the model results in improved student and systemic outcomes. Ho wever, before educators begin full-scale adoption of PS/RtI in schools, the model s hould be examined systematically and the information obtained disseminated to key stakeholders. Information on the components of the PS/RtI model, studies evaluating the im pact of the model on important educational outcomes, and the aspects of the model that need further evaluating are necessary before decisions can be made regarding implemen tation. Therefore, the following literature

PAGE 29

21 21 review provides information on (1) the PS/Rt I model, (2) student and systemic outcomes reported in studies of the traditional model and the PS/RtI model, (3) other outcomes studied by researchers evaluating the impact of PS/RtI, (4) aspects of PS/RtI that require additional research, and (5) pot ential uses of program evalua tion techniques to conduct a comprehensive evaluation of PS/RtI implementation. Overview of the PS/RtI Model According to Batsche, Elliott, Graden, et al. (2005), the PS/RtI model uses assessment to facilitate the developmen t and implementation of evidence-based interventions in the general education enviro nment and to determine the extent to which students respond to the interv entions through continuous pr ogress monitoring. In this model, the problem-solving proce ss guides decisions about what skills to target and how to intervene while student res ponse to intervention is used to determine th e effectiveness of interventions. Although a number of exam ples of the PS/RtI model exist in the literature, the process typically involves progressing thr ough four major stages referred to as the problem-solving pro cess; problem identificati on, problem analysis, plan development and implementation, and progr am evaluation/response to intervention (Bergan & Kratochwill, 1990). The description of the PS/Rt I model below is based on Batsche, Elliott, Graden, et al.’s conceptualization. The problem-solving process is initiate d when a student or group of students is/are identified for not meeting academic and/or behavioral expectations. During problem identification, the replacement behavior(s ) (i.e., the skill(s) students are expected to perform) is identified and defined in conc rete, measurable terms. Next, educators use assessments to determine the (1) current level of student performance, (2) current level of

PAGE 30

22 22 peer performance, (3) and exp ected level of performance (i .e., the benchmark) for the target skill. Once the data are collected and organized, educators conduct a gap analysis to determine how far (1) the student(s) is/a re from the benchmark, (2) the peers are from the benchmark, and (3) the student(s) is/are from the peers. The results of the gap analysis are used to determin e the appropriate unit of analysis for intervention (described below). Regardless of what unit of analysis is c hosen, a systematic as sessment of student strengths and weaknesses follows. The purpose of problem analysis is to determine what factors may be contributing to the student(s) not achieving the benchmark for the target skill. Educators develop and examine hypothe ses across instructional, curricular, environmental, and learner domains to deter mine the extent to wh ich environmental and student variables may be contributing to the problem. For each hypothesis developed, personnel collect data to confirm or reje ct its validity. Only hypotheses for which evidence suggests that the variable is a barr ier to student performa nce of the replacement behavior(s) are consider ed for intervention. Intervention plans are deve loped and implemented to re duce or remove barriers to performing the replacement behavi or(s). Intervention plans must be scientifically-based and link directly to the cause of the probl em. Interventions can be implemented for individual students or groups of students depending on how ma ny students are not performing the desired replaceme nt behavior (described be low). Regardless of how many students receive interventions, the impact of the plan is examined during the program evaluation/response to intervention stage. Du ring this stage, educators monitor the progress of students receiving intervention using assessment s that can be frequently

PAGE 31

administered and are sensitive to small changes in the replacement behavior. Administering assessments that meet these criteria allows educators to formatively calculate student performance in terms of the gap between the identified student(s), the peers, and the benchmark (i.e., level) and the rate of growth.

Determining (1) the gap between identified students, their peers, and the benchmark and (2) the rate of growth for identified students and their peers compared to changing benchmarks is important for two reasons. First, educators can make decisions about how far the student(s) currently is/are from peers and the benchmark as well as the distance between the peers and the benchmark. Second, the rate of growth for identified students and their peers compared to changes in the benchmark provides information on if/when the students will catch up with the benchmarks for the replacement behavior. Both pieces of data provide educators with the information necessary to make decisions about whether the current intervention plan will improve student performance within a time frame that will allow them to ultimately be successful. Intervention plans that predict that students will reach benchmarks are typically continued, while intervention plans that do not sufficiently improve growth rates are typically modified. Thus, student RtI guides decisions regarding the extent to which the current intervention plan is effective for improving performance of the replacement behavior.
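A small worked example may help make the level and rate-of-growth decisions above concrete. The sketch below computes the three gaps examined during problem identification and projects if/when a student's trend line would meet a rising benchmark. The measure (words correct per minute) and every number in it are hypothetical and are not drawn from this study or from the PS/RtI literature.

```python
# Illustrative sketch of the gap analysis and rate-of-growth projection described
# above. The measure and all values are hypothetical; the model itself does not
# prescribe specific numbers or cut points.

def gap_analysis(student, peers, benchmark):
    """Return the three gaps examined during problem identification."""
    return {
        "student_vs_benchmark": benchmark - student,
        "peers_vs_benchmark": benchmark - peers,
        "student_vs_peers": peers - student,
    }

def weeks_to_catch_up(student, student_growth, benchmark, benchmark_growth):
    """Project if/when a student's trend line reaches the (rising) benchmark."""
    if student_growth <= benchmark_growth:
        return None  # the gap is not closing, so the plan would typically be modified
    return (benchmark - student) / (student_growth - benchmark_growth)

# Hypothetical oral reading fluency data (words correct per minute).
student_level, peer_level, benchmark_level = 42.0, 68.0, 75.0
student_rate, benchmark_rate = 1.6, 0.9  # gain per week during intervention

print(gap_analysis(student_level, peer_level, benchmark_level))
weeks = weeks_to_catch_up(student_level, student_rate, benchmark_level, benchmark_rate)
if weeks is None:
    print("Growth rate does not exceed the benchmark's rate of change; modify the plan.")
else:
    print(f"At current rates the student closes the gap in about {weeks:.0f} weeks.")
```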

PAGE 32

In addition to providing a framework for making decisions about student performance, the PS/RtI model contains mechanisms for helping schools to more efficiently use their limited resources. To increase the efficiency with which schools provide services, interventions are available for both individual and groups of students. Interventions available to students are typically categorized into three tiers. Although the procedures vary somewhat for academics and behavior, the three-tier conceptual model is similar across both domains. A brief description of the three-tier model follows.

Tier I instruction involves providing scientific, research-based instruction to all students (i.e., universal intervention). For academics, educators administer universal screening assessments 3-4 times per year to examine the overall impact of Tier I instruction and screen for individual students not responding to the curriculum. For behavior, office discipline referrals (ODRs) are often used as a mechanism for examining the impact of behavioral instruction and to screen for students exhibiting significant behavior problems. Regardless of the instructional domain, when at-risk students are identified through schoolwide data or referred for problem-solving, determinations must be made regarding whether (1) the classroom environment is effective and (2) the student had sufficient access to instruction. If either the classroom environment is ineffective (i.e., approximately 20% or more of students did not attain benchmark) or the student(s) did not have sufficient access to the curriculum (e.g., a significant number of absences), then Tier I interventions are attempted. Tier I interventions often include modifications to the core curriculum and working with parents to increase student attendance. If the classroom environment is effective and the student(s) had access to the curriculum, then the student(s) receives Tier II intervention.

Tier II intervention (i.e., supplemental intervention) involves additional time and/or focus in the curriculum targeting the content area of concern (e.g., reading). Additional time in the curriculum includes strategies such as requiring students to participate in instruction across multiple classrooms and providing small group instruction. Typically, the additional exposure to target content area instruction occurs for

PAGE 33

25 25 30-60 minutes. Additional focus in the curricul um often involves limiting the additional instruction to one or two skills within the c ontent area identified as specific concerns. Students receiving Tier II interventions are monitored more frequently (e.g., monthly) to facilitate decision-making regarding the effe ctiveness of the interv ention plan. For those students who respond to Tier II interventions (i.e., the student’s RtI has eliminated the gap between the student’s performance and the benchmark or will eliminate it within an accepted time frame), educators make decisi ons regarding whether to continue the interventions or provide Tier I instruction only. For those students who do not respond to Tier II interventions (i.e., th e student’s RtI has not reduced the gap between the student’s performance and the benchmark or will not eliminate the gap within an acceptable time frame), Tier III interventions are typically initiated. Although the majority of students will respond to Tier I and II instruction, a pproximately 5% will require more intense, targeted interventions available through Tier III. Tier III intervention is often where the problem-solving process is initiated at the individual student level. Tier III interventions typically involve highly idiosyncratic, intensive services that require the expertise of a diverse team of tr ained individuals. Sixty minutes plus of additional inst ruction in one or two target skills identified through the problem-solving process is often provided. In terventions developed for students receiving Tier III services may or may not involve reso urces outside of what can be realistically expected in the general education setting. When the resources (e .g., time, materials, personnel) required exceed what is availabl e through general educat ion, then the student is considered for special education eligib ility. Thus, in the PS/RtI model, special education becomes a mechanism for providing additional services to students, not a

PAGE 34

26 26 location where students diagnosed with disabi lities go to receive instruction. In addition, the PS/RtI model moves the requirements fo r special education e ligibility away from traditional norm-referenced assessments and towards the level of resources needed to improve the student’s RtI. Whether students are receiving Tier III interventions that require general or special education resources, e ducators monitor progress frequ ently (e.g., weekly) to make decisions regarding student RtI. For those st udents who respond to Tier III services, educators make decisions regarding whether to continue the interven tions or to reduce them to Tier II levels. For those students that repeatedly do no t respond to Tier III interventions, educators and parents must ma ke decisions regarding whether to continue highly intensive services targe ting the replacement behavior or to invest the resources on other important skills that the student may need to acquire. Thus, the PS/RtI model serves several functions. First, the PS/RtI model serves as a decision-making framework fo r determining what servic es should be provided to students. Learning problems can be systemati cally identified, analyzed, and addressed to improve student outcomes at the group and individual levels. Second, the PS/RtI model functions as an indicator of th e frequency and intensity of se rvices needed for all students to be successful. By evaluating student RtI at three tiers of inte rvention, educators are able to more efficiently use their limited re sources. In other words, a tiered system of intervention allows educators to solve less severe problems in the general education environment and invest additional resources in those students who require more intensive intervention. Finally, the PS/RtI model is us ed to determine eligibility for special

PAGE 35

27 27 education by identifying what students requ ire services beyond the capacity of general education. Batsche, Elliott, Graden, et al.’s ( 2005) description of the PS/RtI model is intuitively appealing. A data-b ased decision-making mechanis m based on the scientific method that allows educators to more efficiently use thei r resources should improve student and systemic outcomes. Ho wever, it is the job of rese archers to ensure that such claims possess validity. In fact a growing body of literature exists examining student and systemic outcomes in the traditional and PS/Rt I models. Data on the effectiveness of the traditional model is presented below followe d by data from studies examining PS/RtI. Student and Systemic Outcomes in the Tr aditional Model Versus the PS/RtI Model The Traditional Model. Researchers examining the eff ectiveness of the traditional model have investigated a va riety of student and systemic outcomes. Studies on student reading and math achievement, behavioral and socio-emotional outcomes of students, special education referral a nd placement rates, and disp roportionality have been conducted. These studies focus on the quality of both general and special education and employ varying methodologies to address the effectiveness of the traditional service delivery model. The following studies provi ded student and/or sy stemic data on the traditional model. In an examination of the overall effec tiveness of education, The National Center for Education Statistics administered the National Assessment of Educational Progress (NAEP) in reading and math to a na tionally representative sample of 4th and 8th grade students (Grigg et al., 2007). For reading, the NAEP was administered to approximately 165,700 4th graders and 159,400 8th graders across the country. The reading section

PAGE 36

28 28 measured various contexts (i.e., reading for literary experience, reading for information, and reading to perform a ta sk) and aspects (i.e., forming a general understanding, developing interpretation, making reader/tex t connections, and examining content and structure) of reading. Scor es ranged from 0-500 and were used to determine the achievement level of a student. The four possi ble achievement levels were below basic, basic, proficient, and advanced. Results from the NAEP reading section i ndicated that approximately 36% of 4th graders and 27% of 8th graders performed below basic in terms of reading skills (Grigg et al., 2007). The results also demonstrated di sproportionate repres entation among students who performed below basic. Disproportional re presentation was evident for gender (i.e., 41% of males and 34% of females performed below basic), race/ethnicity (i.e., 25% of Caucasians, 59% of Blacks, and 58% of Hi spanics performed below basic), SES (i.e., 54% of students eligible for fr ee-reduced lunch and 23% of st udents not eligible for freereduced lunch performed below basic), student s with disabilities (i .e., 66% of students with disabilities and 33% of students without disabilities performed below basic), and ELLs (i.e., 73% of ELLs and 33% of nonELLs performed below basic) among 4th grade students. For 8th grade students, disproportional re presentation among students who performed below basic was evident for race/et hnicity (i.e., 19% of Caucasians, 49% of Blacks, and 45% of Hispanics performed below basic), SES (i.e., 43% of students eligible for free-reduced lunch and 19% of students not eligible for free-reduced lunch performed below basic), students with disabilities (i.e., 66% of students with di sabilities and 24% of students without disabilities performed belo w basic), and ELLs (i.e., 71% of ELLs and 25% of non-ELLs performed below basic).

Importantly, comparisons between previous administrations of the NAEP and the current version demonstrated limited change over time in the aggregated or disaggregated performance of students (Grigg et al., 2007). From 1992 to 2005, the average scale score of students increased from 217 to 219 and from 260 to 262 for 4th and 8th graders respectively. Small improvements in the proportion of students performing below basic were evident as well. In 1992, 38% and 31% of students taking the test performed below basic in 4th and 8th grades respectively. In 2005, 36% and 27% of 4th and 8th grade students respectively performed below basic. In addition, small improvements were evident in the performance of the aforementioned disaggregated subgroups; however, substantial achievement gaps among racial/ethnic minorities, males, students eligible for free-reduced lunch, students with disabilities, and ELLs remained (i.e., the average scale score was consistently lower and the proportion of students performing below basic was consistently higher for the traditionally disadvantaged subgroups).

For mathematics, the NAEP was administered to approximately 172,000 4th graders and 161,600 8th graders (Grigg et al., 2007). The test measured the mathematics performance of students across two dimensions, content and complexity. The content areas examined were number properties and operations, measurement, geometry, data analysis and probability, and algebra. The complexity of the problems in each of these content areas varied from low to high. The scale ranged from 0-500 and resulted in four possible achievement levels: below basic, basic, proficient, and advanced.

Results of the math section of the NAEP indicated that approximately 20% of 4th graders and 31% of 8th graders performed below basic (Grigg et al., 2007). Disproportional representation among low achieving students was evident for several
disaggregated subgroups as well. For 4th grade students, disproportional representation of students who performed below basic was evident for race/ethnicity (i.e., 11% of Caucasians, 40% of Blacks, and 33% of Hispanics performed below basic), SES (i.e., 33% of students eligible for free-reduced lunch and 10% of students not eligible for free-reduced lunch performed below basic), students with disabilities (i.e., 43% of students with disabilities and 17% of students without disabilities performed below basic), and ELLs (i.e., 46% of ELLs and 17% of non-ELLs performed below basic). Higher rates of students performed below basic for each subgroup among 8th graders; however, disproportional representation continued to be evident for race/ethnicity (i.e., 21% of Caucasians, 59% of Blacks, and 50% of Hispanics performed below basic), SES (i.e., 49% of students eligible for free-reduced lunch and 21% of students not eligible for free-reduced lunch performed below basic), students with disabilities (i.e., 68% of students with disabilities and 27% of students without disabilities performed below basic), and ELLs (i.e., 71% of ELLs and 29% of non-ELLs performed below basic).

Comparisons between previous administrations of the NAEP and the current iteration demonstrated some improvement in the overall math achievement of students over time. The average scale score from 1990 to 2005 increased from 213 to 238 for 4th graders and from 263 to 279 for 8th graders. Decreases in the proportion of students performing below basic occurred as well. From 1990-2005, the proportion of students performing below basic decreased from 50% to 20% in 4th grade and 48% to 31% in 8th grade. Although improvements were evident for all disaggregated subgroups, racial/ethnic minorities, students eligible for free-reduced lunch, students with disabilities, and ELLs continued to lag behind their same grade peers in mathematics
achievement (i.e., the average scale score was consistently lower and the proportion of students performing below basic was consistently higher for the aforementioned traditionally disadvantaged subgroups).

Overall, the data from the NAEP suggest that a significant proportion of students are not attaining basic reading and math skills, although the performance of students is higher for math than reading. In addition, disproportional numbers of those students not attaining basic skills are from traditionally disadvantaged subgroups. Racial/ethnic minorities, low-SES students (i.e., students eligible for free-reduced lunch), students with disabilities, and ELLs are more likely to perform below basic than their same-grade peers. Fourth grade males also are more likely to perform below basic in reading than same-grade females. Finally, despite small improvements, a longitudinal analysis of the performance of aggregated and disaggregated groups revealed that the 2005 findings for reading are largely consistent with previous administrations. Although some improvement over time in the math achievement of aggregated and disaggregated students was evident, significant achievement gaps remain among the aforementioned disaggregated subgroups.

One limitation to the NAEP that warrants consideration when interpreting the findings is that the assessment is not directly linked to state standards. NCLB (2002) requires that states monitor their progress toward all students demonstrating proficient performance on state-approved grade-level standards by 2013-14. State-approved grade-level standards, however, vary from state to state. The variation in these standards has resulted in statewide assessments that vary in a number of characteristics including what is assessed and the difficulty of the items. Examination of how students are performing
on statewide assessments that link more directly to each state's standards may provide different trends of student performance than examining one national assessment that does not take differences in expectations into account.

One study that accounted for performance on statewide assessments was conducted by the Center on Education Policy (CEP, 2008). The CEP studied trends in statewide assessment data across the country in reading and math from 2002 through 2007. The center gathered results of the statewide assessments from all states and included those states for which comparable data were available for multiple years (i.e., results were available utilizing the same assessments across multiple years) in the analyses conducted. Two indicators of performance were used in the analyses: the percentage of students who scored proficient on a state's outcome assessment and effect sizes. The CEP used these indicators to examine progress toward increasing the overall proficiency of students as well as determining the extent to which the achievement gap among demographic subgroups has closed.
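Because effect sizes serve as one of the two performance indicators here and recur throughout this review, it may be useful to note the general form such an index takes. The CEP's exact computation is not reproduced here; the expression below is the conventional standardized mean difference on which such indicators are typically based:

\[ ES = \frac{\bar{X}_{2} - \bar{X}_{1}}{SD} \]

where the numerator is the difference between two mean scale scores (e.g., two years, or two demographic subgroups) and SD is a standard deviation of the score distribution (commonly the baseline or a pooled value). Expressing changes in standard deviation units allows gains and gaps to be compared across assessments that use different score scales.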

Results of the review of statewide assessment data suggest that increasing numbers of students are scoring at the proficient level in most states. The CEP (2008) reported moderate to large gains at the elementary and middle school levels with smaller gains observed at the high school level. Specifically, 133 and 121 instances (an instance was defined as one specific content area and grade level for which scores were available) of increases in the percent of students performing proficiently and effect sizes respectively were observed. Conversely, only nine and 11 instances of decreases in the percent of students performing proficiently and effect sizes were observed respectively. The CEP's examination of the achievement gap demonstrated that the gap between some demographic subgroups narrowed. Gaps in the percentage of students performing proficiently narrowed in 327 instances, widened in 76 instances, and did not show a net change in 20 instances. Gaps in effect sizes narrowed in 184 instances, widened in 76 instances, and showed no net changes in 30 instances. The authors noted that narrowing of the achievement gap was particularly evident for African American students and somewhat evident for Latino students (although changes in the percentages of students who comprised this subgroup in many states made the results difficult to interpret).

The CEP (2008) also compared trend data from statewide assessments across the country to trend data from the NAEP. One comparison involved whether increasing trends on both indicators (i.e., percent proficient and effect sizes) were evident on statewide assessments and the NAEP. Results indicated that increases on both indicators occurred for both types of assessment more often than not. Increases on both indicators for statewide assessments and the NAEP were observed in 17 of the 28 states and 21 out of 27 states for which data were available for reading and math respectively. When comparisons between the two assessment types were conducted by content area and grade level, similar results emerged. Comparisons between the statewide assessment and NAEP trends revealed 108 instances of increases on both sources, two instances of decreases on both sources, and 24 instances where the trends diverged (i.e., one source showed increases and the other source showed decreases). The CEP (2008) concluded that the results of the statewide assessments were mostly consistent with NAEP results from 2002 to 2007. Exceptions cited included smaller observed gains on the NAEP and that some states did not show any growth on the NAEP for 8th grade reading.
Assessment of the academic achievement of students has been complemented by researchers examining the behavioral and socio-emotional outcomes of children across the country. Hoagwood and Johnson (2003) reviewed several population-based epidemiological studies that examined the prevalence of mental health problems. Across the reviewed studies, approximately 16-22% of children and adolescents up to the age of 18 had a diagnosable psychological disorder. Approximately 5-9% could be classified as seriously emotionally disturbed. Additionally, 4-8% of children and adolescents ages 9-17 had severe psychiatric disorders. Unfortunately, only approximately 20% of children with serious mental health problems obtained mental health services. Although these epidemiological data may not be directly linked to instruction occurring in schools, Adelman, Taylor, and colleagues (Adelman & Taylor, 2000) have suggested that schools play a central role in the behavioral and socio-emotional outcomes of students because much social and emotional learning occurs during school hours.

Research on the academic and mental health outcomes of all school-age children has been complemented by investigations of the traditional service-delivery model, including the provision of special education services. In the first national investigation of special education services, Heller et al. (1982) reported findings from a panel's investigation of disproportionate representation of minority students in special education. The panel was comprised of members with expertise in disproportionality, special education, assessment, and school administration as well as unrelated fields such as law, statistics, and psychology. The panel's task was to determine the extent to which disproportionate representation existed among students identified as educably mentally retarded (EMR) and then review the literature to determine possible causes. Panel
members then made recommendations for improving the equity and effectiveness of special education services for these students. Following their analyses of Office of Civil Rights' data derived from a survey of elementary and secondary schools, Heller et al. concluded that disproportionate representation of minority students in the EMR category occurred across the country. From the literature review that followed, Heller et al. reported that two likely reasons for these findings were the use of invalid assessment procedures and inadequate instruction in the general education environment. Panel recommendations derived from these findings included rethinking and restructuring assessment and instructional procedures in schools. Thus, Heller et al.'s recommendations focused on improving the quality and efficacy of practices in the general education environment rather than solely focusing on special education services.

In a more recent investigation of the provision of special education services, the PCESE (2002) provided recommendations for improving the educational performance of children and adolescents diagnosed with disabilities. The report compiled by the panel of researchers and educators represented the thoughts and suggestions of more than 100 special education and educational finance experts, educational and medical researchers, and key stakeholders (e.g., teachers, parents) of education. The PCESE derived the following findings from their investigation:

1) The traditional model places compliance with procedures over student and systemic outcomes;
2) The traditional model places too little emphasis on prevention and early intervening services, resulting in a wait-to-fail model in which important educational services are often delayed;
3) Educators in the traditional model treat students in special education as separate in terms of funding sources, prohibiting the pooling of all available resources to improve outcomes and creating incentives for misidentification and isolation from general education;
4) Parents do not have sufficient options and recourse when a student does not make progress in special education;
5) The threat of litigation has resulted in a culture of compliance which diverts schools from their primary mission of educating every child;
6) The current procedures for placing students in special education are invalid, which results in the misidentification of thousands of students;
7) Children with disabilities require more highly qualified teachers;
8) More rigorous and systematic research on special education practices is needed, and educators under the traditional model do not always implement practices that have been shown to be effective; and
9) The traditional model fails too many students identified with disabilities, resulting in too few students graduating from high school and transitioning to postsecondary opportunities or full-time employment.

Based on the nine findings from their investigation of the traditional model, the PCESE included three major recommendations for how to improve the outcomes of students identified with a disability. The panel recommended that educators:

1) Make decisions on the efficacy of special education based on the opportunities provided to each student and the resulting outcomes, not compliance with procedures;
2) Implement a model of service delivery based on prevention and intervention that employs evidence-based practices; and
3) Create a seamless funding system that bases evaluations of educational spending on all the expenditures for the student, including general education.

Recommendations from Heller et al. (1982) and the PCESE (2002) focused on improving the quantity and quality of assessment and instructional practices in general and special education as well as increasing accountability for the outcomes of students with disabilities. Data reported by Forness (2001) provide an indication of the outcomes of students with disabilities in the traditional model. Forness reviewed 24 meta-analyses on the effectiveness of special education. The meta-analyses covered 20 intervention topics delivered through special education programming. One topic examined through meta-analysis was the impact of being placed in a special education classroom on student outcomes. Across 50 articles comparing outcomes of students receiving special education services and their general education peers, Forness reported an overall average effect size of -.12 for special class placements. These data suggest that placement in a special education classroom was associated with students acquiring skills at slower rates than their same-age peers. When only considering high-incidence disabilities (i.e., learning disabilities and behavioral disorders), however, the average effect size was .29, suggesting that special education placement resulted in small improvements in student performance. Forness concluded that his review of meta-analyses suggests that placement in special education may be harmful to student outcomes, but that caution must be exercised when interpreting the results because many of the studies included students who received services for mental retardation. Forness stated that because of the nature of
mental retardation, intervention studies involving these students may have demonstrated smaller effects than those studies only targeting high-incidence disabilities.

Data on disproportional representation among students receiving special education services reported by Donovan and Cross (2002) provide further evidence regarding negative student outcomes associated with relying on special education as the primary source of additional services for struggling students. Using data provided to the Office of Civil Rights by state departments of education, the authors calculated odds ratios (ORs), risk indices (RIs), and composition indices (CIs) for race/ethnicity and gender by subgroup. Across the Mentally Retarded (MR), Learning Disabled (LD), and Emotionally Disturbed (ED) categories, black students were typically at more risk for being placed (RIs = 2.64, 6.49, and 1.45 respectively). In addition, when compared to Caucasian students, black students were more likely to be placed in the MR (OR = 2.24), LD (OR = 1.08), and ED (OR = 1.59) categories. When examining risk longitudinally, the authors noted that the risk for being labeled MR, LD, and ED increased across years for multiple groups of students. Specifically, RIs increased for black students for the MR category, for all groups except Asians for the LD category, and for all students, particularly blacks, when examining the ED category. Disproportionality also was reported for gender, with males comprising approximately 58.31%, 67.83%, and 77.73% of students labeled MR, LD, and ED respectively.
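To make these indices concrete, the minimal Python sketch below computes a risk index, odds ratio, and composition index from hypothetical counts; the numbers and variable names are invented for illustration and are not drawn from Donovan and Cross's (2002) dataset or analyses.

# Hypothetical placement counts for one disability category (illustrative only).
group_enrolled = 10_000     # subgroup enrollment in the jurisdiction
group_placed = 300          # subgroup students placed in the category
other_enrolled = 40_000     # comparison group enrollment
other_placed = 600          # comparison group students placed

# Risk index (RI): percentage of the subgroup placed in the category.
risk_index = 100 * group_placed / group_enrolled                        # 3.0

# Odds ratio (OR): odds of placement for the subgroup relative to the comparison group.
group_odds = group_placed / (group_enrolled - group_placed)
other_odds = other_placed / (other_enrolled - other_placed)
odds_ratio = group_odds / other_odds                                    # about 2.0

# Composition index (CI): subgroup share of all students placed in the category.
composition_index = 100 * group_placed / (group_placed + other_placed)  # about 33.3

print(round(risk_index, 2), round(odds_ratio, 2), round(composition_index, 1))

Under this reading, a risk index of 3.0 means 3% of the subgroup was placed in the category, and an odds ratio near 2 means the subgroup's odds of placement were about twice those of the comparison group; these formulas are one standard operationalization and are not necessarily identical to Donovan and Cross's computations in every table.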

However, the authors cautioned that the data should be interpreted cautiously because of issues such as questions about the accuracy of state databases and the variability in special education eligibility criteria across states. Despite the problems with the dataset, the authors stated that the observed differences in representation of minority groups and males/females may be related to the achievement gap among these groups. Consistent with the findings of Heller et al. (1982) and the PCESE (2002), Donovan and Cross discussed the issues related to general education service delivery when explaining potential reasons for the disproportionate representation of minority students in special education.

In sum, the existing data on student outcomes suggest that the traditional model of service delivery is not meeting the needs of a significant proportion of students. Although national studies examining student performance have demonstrated some increases in the percentage of students performing proficiently in reading and math, results from statewide assessments and the NAEP differed on the extent to which increases have occurred. In addition, both assessment sources, as well as a meta-analysis examining the efficacy of special education services, suggest that many students continue to have difficulty acquiring basic academic and behavioral skills. Furthermore, the risk of not acquiring the necessary academic and behavioral skills is higher for racial/ethnic minorities, males, low-SES students, ELLs, and students with disabilities. Importantly, potential explanations and recommendations of many researchers who have studied the outcomes of struggling students suggest that school reform efforts must focus on how services are delivered within and across general and special education.

The PS/RtI Model. Studies of the efficacy of the traditional model for serving students have been complemented by recent research examining the PS/RtI model. A review of the literature revealed a number of studies investigating the impact of implementing the PS/RtI model on student and systemic outcomes. Researchers have examined implementation of the model at a variety of units of analysis, ranging from grade levels within a building to state-level initiatives. Although the models examined
contained some variability in terms of how the PS/RtI model was operationalized and implemented, all the models examined contained the key elements outlined by Batsche, Elliott, Graden, et al. (2005). In other words, all the models used assessment to facilitate the identification of at-risk students, implemented increasingly intense interventions based on student needs, and progress monitored the response of students to the interventions in a formative manner. Consistent with the review of studies examining the traditional model, the following studies evaluated the impact of implementing the PS/RtI model on student and/or systemic outcomes.

O'Conner (2000) implemented four tiers of reading intervention in three buildings in an urban area. O'Conner's version of PS/RtI targeted kindergarten students (n = 146) through their first grade year. Seventy percent of the students were eligible for free-reduced lunch. In terms of race/ethnicity, 44-73% were African American across the three schools while the majority of the remaining students were Caucasian. Tier I of the model involved whole-class instruction using evidence-based reading strategies. Tier II included additional one-to-one instruction using tasks that paralleled those from whole-class instruction. Students receiving Tier III interventions were provided instruction for an additional 30 minutes four times per week. Finally, Tier IV interventions consisted of 15-minute sessions of one-to-one instruction, four times per week.

O'Conner (2000) reported the effects of implementation of the model by tier. The results indicated that the majority of students responded to Tier I instruction. Of the students who did not respond, 25 returned permission to participate in the additional tiers. The remaining students comprised the comparison group used in the study. When students who received both Tier I and II services were compared to the comparison
students who only received Tier I instruction, a MANOVA indicated that the intervention group significantly outperformed the comparison group (Wilk's Λ = .604, p < .05). Despite significantly higher levels of performance, some students who received Tier II services did not improve and began receiving Tier III intervention. A MANOVA comparing those students who participated in Tier III intervention to the comparison students revealed that the Tier III intervention group significantly outperformed the comparison group (Wilk's Λ = .390, p < .01). Once again, a few students from the Tier III intervention group required Tier IV services due to lack of response. Consistent with the previous analyses, a MANOVA revealed that the students who received Tier IV services significantly outperformed the comparison students (Wilk's Λ = .417, p < .05). O'Conner concluded that tiered interventions improved student reading outcomes. Interestingly, O'Conner noted that no reduction in special education placements was found at the participating schools, suggesting that other variables besides reading achievement may play a role in special education placement rates.
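As background on the statistic reported in these comparisons (offered here as a general definition rather than a description of O'Conner's specific analysis), Wilk's Λ can be read as the proportion of generalized variance in the set of outcome measures that is not accounted for by group membership, so smaller values indicate larger multivariate group differences:

\[ \Lambda = \frac{\det(\mathbf{E})}{\det(\mathbf{E} + \mathbf{H})} \]

where E is the within-groups (error) sums-of-squares and cross-products matrix and H is the between-groups (hypothesis) matrix. On this reading, the Λ of .390 for the Tier III comparison reflects a larger multivariate difference than the Λ of .604 for the Tier II comparison.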

Dickson and Bursuck (1999), in another investigation of implementing tiered reading intervention, reported data on the reading outcomes of 72 students (69 of the students were Caucasian) following the first year of implementation in two rural elementary schools. Although the authors presented a five-tier model for preventing reading difficulties in kindergarten and first grade, only the first two tiers were implemented in first grade during the first year. The two tiers consisted of general education instruction and small group, intensive instruction for students who did not respond to general education instruction. First, general education instruction was enhanced and the progress of the 72 students monitored. Next, 20 students identified as at-risk by screening assessments received additional intervention. Finally, the researchers compared the performance of the two groups of students on a number of reading skills.

In an initial screening of the 72 participants, 30 students were identified as at-risk based on the screening criteria employed by Dickson and Bursuck (1999). Following changes to the core curriculum implemented by teachers, 11 of the 30 students responded and were no longer considered at-risk. One student not initially identified by the first assessment was considered at-risk following the changes to the core curriculum. Thus, the instructional changes reduced the number of at-risk students from 30 to 20. Prior to implementation of the small group, intensive interventions, Dickson and Bursuck found significant differences between the students who responded to general education instruction (n = 52) and those students identified as requiring additional intervention (n = 20) by screening assessments. Significant differences occurred on measures of phonological awareness (F = 26.98, p < .05), rapid letter naming (F = 42.84, p < .05), segmenting (F = 17.77, p < .05), letter-sound correspondence (F = 38.89, p < .05), invented spelling (F = 72.95, p < .05), and word attack (F = 20.91, p < .05). Following implementation of additional intervention, significant differences between the two groups no longer existed on measures of segmenting (F = 2.47, p > .05), letter-sound correspondence (F = .08, p > .05), and invented spelling (F = 1.86, p > .05).

In addition to tests of significance, the authors examined the effect size for the additional intervention provided to at-risk students. Prior to implementation of the small group, intensive intervention, the effect sizes for the 20 at-risk students ranged from -1.0 to .2 with only one positive effect size. Following implementation, the effect sizes for the 20 students ranged from -.79 to 2.0 with five of the six effects positive. These results
indicated that students receiving more intensive intervention narrowed the gap between themselves and students only receiving general education instruction. Although significant differences persisted between the groups on some measures of reading achievement, the effect sizes following the implementation of additional intervention indicated that the at-risk students closed the gap on the majority of reading measures used in the study. Dickson and Bursuck (1999) cautiously concluded that enhancing the general education curriculum and providing additional intervention can improve student reading outcomes, but mentioned that studies with additional experimental control are required to evaluate the impact of tiered service delivery.

In another study examining reading outcomes, O'Conner, Fulmer, and Harty (2003) examined the effectiveness of a three-tiered PS/RtI model at two elementary schools across a 4-year span. Tier I services consisted of universal reading instruction and data-based decision making. Tier II consisted of flexible, small group direct instruction that targeted areas of weakness three days per week. Finally, Tier III services consisted of flexible, individualized instruction that targeted specific areas of weakness five days per week. The researchers trained kindergarten and first grade staff in year 1, second grade staff in year 2, and third grade staff in year 3. O'Conner et al. reported that students attending school one were from low- to mid-income neighborhoods and relatively homogeneous in terms of race/ethnicity (i.e., 12% African American, 7% Hispanic, 9% Native American, and 72% Caucasian) while the students attending school two were mostly from high income neighborhoods and relatively heterogeneous in terms of race/ethnicity (i.e., 15% African American, 57% Caucasian, and 27% Other). Approximately 400 students were in grades K-3 (i.e., approximately 100 students per
grade) at the two schools at the beginning of the study. Of these 400 students, 92 received Tier II and/or Tier III intervention on an as-needed basis. Outcomes for these students were examined for second and third grade because the second graders in year 1 and the third graders in years 1 and 2 could serve as controls.

O'Conner et al. (2003) reported data on the effectiveness of Tier I instruction and instruction for students with reading disabilities. Results indicated that the students receiving Tier I instruction following implementation of the model outperformed the control students from previous years on measures of decoding (F(2, 283) = 16.24, p < .01), fluency (F(2, 283) = 36.96, p < .01), and comprehension (F(2, 283) = 9.97, p < .01) in second and third grade. Effect sizes at the end of third grade for the enhanced Tier I instruction were .19, .34, .52, and .29 for measures of word identification, decoding, fluency, and comprehension respectively, indicating small to moderate effects.

O'Conner et al. (2003) reported improvements in reading achievement for students with disabilities as well. Analyses of the data for the end of third grade demonstrated moderate to large effects (Cohen's d equaled .40, 1.8, 1.4, and 1.0 for word identification, decoding, fluency, and comprehension respectively) in favor of the tiered service delivery model. Also, students in the experimental group demonstrated reduced special education identification rates. Students in the historical control group were placed at an average rate of 15%. Following the fourth year of implementation of the model, placement rates were 12% for the students who only received Tier I instruction and 8% for students who received Tier II and/or Tier III services. The authors concluded that tiered models of intervention appear to improve the reading performance of students, but
noted that the lack of control schools in the study is a limitation that necessitates caution when interpreting the results.

VanDerHeyden and Burns (2005) examined the impact of a PS/RtI model as well. VanDerHeyden and Burns's PS/RtI model involved a series of evidence-based intervention procedures and sequentially applied decision rules at each stage for reading and math. The four sequential stages were (1) universal screening using curriculum-based measurement (CBM) reading and math probes, (2) class-wide intervention for classes with a large proportion of students performing below a functional instruction criterion, (3) brief performance/skill deficit assessment for those students who do not respond to evidence-based universal intervention, and (4) the response to a short-term, individualized intervention delivered in the classroom for those students identified with a skill deficit. Unlike the previous studies, however, the authors examined the impact of the model on mathematics achievement. One elementary school was the focus of this investigation. The elementary school had approximately the same number of males and females. In terms of race/ethnicity, 79% of the students were Caucasian, 16% were Hispanic, and 4% were African American. In addition, approximately 3% of the students were eligible for Title I services (a proxy for low-SES), 1.7% were ELLs, and 11% received services through special education.

Both within-year and across-year student growth were examined for grades 3-5 by VanDerHeyden and Burns (2005). The researchers examined the within-year effects of their PS/RtI model by randomly selecting one Curriculum-Based Measurement (CBM) math computation probe from each month (January through April) and examining growth in scores using a repeated measures ANOVA. The analysis revealed significant effects
for each grade and the total sample (F's ranged from 13.45 to 64.29, p's < .001). Cohen's d ranged from .49 to .97, indicating moderate to large effects. In addition, the percentage of students identified as frustrational (i.e., at-risk) decreased from 38% to 24% from January to April, while the percentage of students achieving mastery increased from 9% to 29%. Statistical analyses revealed that these changes were reliable (z = 6.89, p < .001).

VanDerHeyden and Burns (2005) examined the across-year data by comparing data on a published, norm-referenced test of math achievement from the 2001-02 school year (pre-implementation) and the 2002-03 school year (post-implementation). Post-implementation scores were significantly higher for all grades and the total sample (t's ranged from 2.01 to 3.42, p's < .05). Cohen's d ranged from .29 to .45, demonstrating small to moderate effects. In addition, the percentage of students scoring below average reliably decreased from 15% to 13%, while the percentage of students scoring above average reliably increased from 24% to 30% (z = 3.37, p < .001). The authors concluded that implementation of the PS/RtI model resulted in significant improvements in the math performance of students in grades 3-5 who attended the school.
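For readers less familiar with the d statistics cited above, the minimal Python sketch below computes Cohen's d as a standardized mean difference with a pooled standard deviation. The scores are invented for illustration, and the pooled-denominator form is an assumption about the general technique rather than a reproduction of VanDerHeyden and Burns's computation.

from statistics import mean, stdev

# Hypothetical CBM math computation scores (digits correct) for one grade.
pre_scores = [14, 18, 22, 17, 25, 20, 16, 19]   # e.g., a January probe
post_scores = [21, 24, 28, 22, 30, 26, 23, 25]  # e.g., an April probe

def cohens_d(group1, group2):
    # Standardized mean difference using a pooled standard deviation.
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group2) - mean(group1)) / pooled_sd

print(round(cohens_d(pre_scores, post_scores), 2))  # positive d indicates growth

With these hypothetical probes the function returns a d of roughly 1.8; by convention, values near .5 and .8 are described as moderate and large, which is the scale implied by the ranges reported above.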

Knoff and Batsche (1995) also examined the impact of implementing a PS/RtI model in one building. Unlike the previously reviewed studies, however, the researchers used a comparison school and a within-school multiple baseline design to examine the model's impact on a number of student and systemic outcomes. At the target elementary school, Knoff and Batsche collected three years of baseline data and three years of implementation data. Data were only available for the second year of implementation at the comparison school. The target school was a Title I school with 87% of its students eligible for free-reduced lunch and was heterogeneous in terms of race/ethnicity (i.e., 59% Caucasian, 38% African American, and 19% Other). The comparison school was similar in terms of SES (i.e., 91% of students were eligible for free-reduced lunch) and race/ethnicity (i.e., 41% Caucasian, 54% African American, and 6% Other).

When examining student outcomes, Knoff and Batsche (1995) investigated the impact of the PS/RtI model from baseline to year three of implementation. The researchers used the proportion of students at-or-above grade level (i.e., the 50th percentile) on a published, norm-referenced test of achievement (i.e., the Comprehensive Test of Basic Skills [CTBS]) to examine the impact of implementing the model. The analyses were disaggregated by younger students (i.e., students in first grade at the beginning of implementation) and older students (i.e., students in third grade at the beginning of implementation). For younger students, the researchers found a 2%, 20%, and 18% increase in the proportion of students performing at grade level in reading, language, and math achievement respectively. For older students, Knoff and Batsche reported no increase for reading achievement, but a 13% and 2% increase in the proportion of students performing at grade level in language and math achievement respectively. For the CTBS total battery score, a 15% increase for younger students and a 12% increase for older students in terms of the proportion of students performing at grade level were detected. The findings related to the comparison school component of the design provided additional evidence for the effectiveness of the PS/RtI model. At year 2 of implementation, the target school had 33% of its students performing at-or-above grade level, while the comparison school had 28% of its students performing at the same criterion.
Knoff and Batsche (1995) reported improvements after implementation of the PS/RtI model when examining systemic outcomes as well. Following implementation of the model, the proportion of students referred for and placed in special education decreased from baseline (i.e., 10% for referrals and 6% for placements) to year 3 (i.e., 2% for referrals and 2% for placements). Data from the comparison school component of the design were once again consistent with the within-school component findings. At year 2 of implementation, the proportion of students referred for and placed in special education was 3% and 2% respectively at the target school. Referral and placement rates at the comparison school were 10% and 7% respectively following year 2 of implementation. Discipline referrals also decreased following implementation of the model at the target school. Discipline referrals decreased from 73 incidents per 100 students at baseline to 53 incidents per 100 students at year 3 of implementation. In addition, the proportion of students in the population who received referrals decreased from 37% at baseline to 28% at year 3. Thus, implementation of the model resulted in decreases of 75%, 67%, and 28% for special education referrals, special education placements, and total discipline referrals respectively. Improvements in these systemic outcomes as well as the aforementioned student outcomes led the researchers to conclude that a student-focused, intervention-based model is an effective way to deliver educational services to students.

Marston et al. (2003) implemented a PS/RtI model in a heterogeneous Midwestern school district. According to Marston et al., the model used consisted of applying the problem-solving process to identify and provide interventions to academically struggling students with a focus on general education. While phasing in implementation of the PS/RtI model across the schools within the district beginning in 1994,
Marston et al. evaluated the model's impact on several student and systemic variables. Data were reported through the 2001-02 school year on the response of kindergarten students to early intervention, the proportion of students placed in high-incidence disability categories, and disproportional representation of minority students in special education.

Marston et al. (2003) reported the impact of using problem-solving data to implement evidence-based reading strategies in kindergarten classrooms. As a result of the problem-solving process, the district began training kindergarten teachers from schools not making AYP to implement recommendations from the National Reading Panel (2000) report. The training occurred during the 2001-02 school year. In addition to the training, some schools offered full-day kindergarten as an option for students. Marston et al. reported effect sizes comparing schools not meeting AYP to their district counterparts, schools offering full-day kindergarten to schools offering a half-day, and schools not meeting AYP and offering full-day kindergarten to schools meeting AYP and not offering full-day kindergarten. Effect sizes for words read correctly were .43, .44, and .74 for schools not meeting AYP, schools that offered all-day kindergarten, and schools not meeting AYP that offered all-day kindergarten respectively. The researchers reported that student growth curves accelerated following the instructional changes across a variety of pre-reading skills and vocabulary as well.

In terms of systemic outcomes, Marston et al. (2003) reported special education placement and disproportionality data. For special education placements, the proportion of students identified with learning disabilities decreased from 6% in 1994 to 3% in 2001. The proportion of students identified with a mild mental impairment decreased from 1%
in 1994 to .5% in 2001 as well. The researchers also reported a decrease in disproportionality for African American students. At the conclusion of the 1997-98 school year, African Americans comprised 44.33% of students in the district. However, African Americans represented 64.4%, 69%, and 68.9% of special education referrals, evaluations, and placements in the district. At the conclusion of the 2000-01 school year, African American students comprised approximately the same proportion of the student population (i.e., 45%), but represented a smaller proportion of the special education referrals (59%), evaluations (57.7%), and placements (55.4%). Although disproportionality for African Americans in special education still existed, it is noteworthy that the odds ratios for African American students receiving services through the learning disabilities category, mild mental impairment category, or problem-solving category in the district (odds ratios ranged from 1.9 to 2.1) were lower than the average odds ratio across the state (the average odds ratio for African American students was 2.7). Thus, implementation of the PS/RtI model not only appeared to result in improved reading outcomes for kindergarten students, but also reduced risk for special education placements, particularly for African American students. Marston et al. concluded that the available data suggest that the implementation of the PS/RtI model in their school district improved student and systemic outcomes.

In another district-level study, VanDerHeyden, Witt, and Gilbertson (2007) reported data on the PS/RtI model described by VanDerHeyden and Burns (2005) above. The district was located in a southwestern, suburban community. Students attending the five schools in the district were somewhat homogeneous in terms of race/ethnicity (67-81% Caucasian, 15-24% Hispanic, 2-6% African American, and 2-5% Other) and SES
(14-37% of students were eligible for free-reduced lunch with a median of 21%). To examine the effects of the implementation of their PS/RtI model, VanDerHeyden et al. used a multiple baseline design across the four elementary schools in the district with baseline data from either one or two years prior to implementation of the model.

VanDerHeyden et al. (2007) reported that the number of evaluations for special education and the total number of students who qualified for services were reduced in the year(s) following implementation. During baseline years, the number of initial special education evaluations and students who qualified for services ranged from approximately 10 to 30 and 3 to 20 respectively. In the years following implementation of their PS/RtI model, the number of initial evaluations and students who qualified for services ranged from approximately 6 to 9 and 2 to 5 respectively. Disproportional representation of males evaluated for special education services was also reduced from 62% during baseline years to 59% following implementation. Interestingly, a reversal component was included in one of the schools in which PS/RtI procedures were withdrawn during the first year of implementation and reinstated for the following school year. In this school, both the number of evaluations and the number of students who qualified for services returned to levels above baseline following withdrawal of the PS/RtI procedures. The number of evaluations and students who qualified for services declined significantly following reintroduction of the model during the subsequent school year. Consistent with the findings reported by VanDerHeyden et al., VanDerHeyden and Jimerson (2005) reported that the proportion of children identified as having a specific learning disability (SLD) was reduced from 6% to 3.5% following implementation of the model in the
district. VanDerHeyden and Jimerson also reported a greater than 50% decrease in the number of students being evaluated for special education during the same time span.

Tilly (2003), in a larger-scale evaluation, examined the impact of implementing the PS/RtI model at the intermediate unit level. The Heartland Early Literacy Project (HELP), a multi-tiered version of the PS/RtI model targeting early literacy skills, was implemented across a number of schools within the intermediate unit's jurisdiction. Implementation began in 1999 with 36 schools. The number of schools increased each year, with 121 schools included during the 2003-04 school year. Training focused on administration and scoring of screening measures, implementing evidence-based interventions, and data-based decision-making. The dependent variables used in the evaluation of the project were the Phoneme Segmentation Fluency (PSF), Nonsense Word Fluency (NWF), and Oral Reading Fluency (ORF) subtests of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) as well as special education placement rates.

Tilly (2003) used cross-sectional data across the 4 years of implementation to evaluate the impact of the project on reading outcomes. For kindergarten students, the median performance increased each year on the PSF and NWF subtests of the DIBELS. For grades 1-3, the median performance of first graders increased for ORF across the first 3 years of implementation with a slight decrease in year 4. No noticeable differences were evident in the median performance of second and third graders on the ORF subtests. Tilly also reported z-scores comparing the year 1 and year 3 PSF and ORF performance of students in schools participating in at least 3 years of the project. For the PSF subtest, z's ranged from -.77 to 3.29 (n = 36) with a mean of 1.08 and a median of 1.25, indicating
a large effect for implementation of the model. For the ORF subtest, z's ranged from -.68 to 2.47 (n = 32) with a mean of .39 and a median of .36, indicating a small to moderate effect for implementation of the model.

Tilly (2003) also examined the rate of new special education placements in grades K-3 across the 36 initial project schools. The three years prior to implementation and the four years post implementation were compared to determine the impact of implementing the model on special education placement rates. Reductions in new special education placements of 41%, 34%, 25%, and 19% were found in kindergarten, first grade, second grade, and third grade respectively. Thus, Tilly reported improvements in both student and systemic outcomes as a result of implementing a PS/RtI model. The author concluded that schools should begin moving away from the traditional service delivery model and toward a results-based model such as the PS/RtI model.

Consistent with the recommendation made by Tilly (2003), several states have begun attempting to scale up implementation of a PS/RtI model. Callender (2006) reported data at the annual National Association of School Psychologists (NASP) Convention on the impact of a state initiative in Idaho to implement a PS/RtI model. Statewide training and support were provided to schools to implement Idaho's version of the model. According to Callender, training focused on systemic problem-solving processes and procedures, and research-based assessment and intervention practices. The assistance provided consisted of on-site formative feedback and support of efforts made by school personnel. At the time of the presentation, the project had been implemented in 152 schools (128 elementary, 24 secondary) across 43 school districts (38% of the state's
total). Callender reported data on the impact of interventions on reading outcomes and special education placements.

Callender (2006) compared the performance of students in grades K-3 in PS/RtI and non-PS/RtI schools (n = 1400). Findings indicated that students with a documented intervention plan (i.e., a document used in PS/RtI schools to facilitate implementation of interventions) performed higher on measures of reading achievement than those students who did not. Callender also reported a large effect size (ES = 1.10) for having an intervention plan from the fall semester to the spring semester of the 2002-03 school year as measured by the Idaho Reading Inventory. Finally, Callender examined special education placements from the fall of the 2002-03 school year to the fall of the 2004-05 school year. Although statewide enrollment had increased 3% over this time span, statewide special education enrollment decreased 1% while a 3% decrease was observed in PS/RtI schools. Together, these data suggest that implementation of a PS/RtI model in Idaho schools may have had a positive impact on some student and systemic outcomes; however, more information on the procedures and statistical analyses used by the initiative is needed before more definitive conclusions can be reached.

McGlinchey, Schallmo, and Goodman (2006) also presented data from a state initiative to implement a PS/RtI model in schools. The initiative focused on reading and behavioral issues in Michigan public schools. Regional teams were selected by the project staff and were provided training to help them implement and sustain evidence-based assessment and intervention practices in local elementary schools. The number of participating schools started at 21 in 2004 and increased to 102 schools in 2006. McGlinchey et al. reported data on student reading achievement and the number of office
discipline referrals at participating schools. For reading, the researchers reported increases in the proportion of students achieving benchmarks on the DIBELS. For 22 schools that began implementing the model during the first year of the project, increases in the proportion of kindergarteners, first graders, and second graders who achieved benchmark on the DIBELS ranged from approximately 5 to 15% across a one-year period. In terms of systemic behavior outcomes, the researchers reported a decrease in the average number of office discipline referrals per day per 100 students. At the beginning of the project, the average number of discipline referrals per day per 100 students at 18 schools that began implementing the model was .79. Following one year of implementation at the 18 schools, the average number of discipline referrals per day per 100 students was reduced to .47. The data presented by McGlinchey et al. suggest that implementation of a PS/RtI model may have improved student reading and systemic behavioral outcomes; however, longitudinal data including comparisons to schools not implementing the model would provide stronger evidence in terms of the impact of the PS/RtI procedures implemented.

Stollar and Graden (2006) presented data from another state-level initiative to implement a PS/RtI model in schools. The model targeted the academic and behavioral performance of students in Ohio schools through systems- and individual-level collaborative problem-solving procedures. Coaches were trained to facilitate implementation of the model in participating schools through training and technical assistance. To examine the impact of the model on student outcomes, the researchers used performance on subtests from the DIBELS as dependent variables. Results indicated increases across a 4-year time span in the number of kindergarten students at-risk in the
fall who were no longer at-risk in the spring. In the 2001-02, 2002-03, 2003-04, and 2004-05 school years, 270, 758, 1,078, and 1,233 students improved risk status respectively.

Finally, Burns, Appleton, and Stehouwer (2005) provided the most comprehensive review of outcomes associated with implementation of the PS/RtI model found in the literature. In a review of four large-scale PS/RtI projects as well as other research projects, the researchers used meta-analytic procedures to examine the overall impact of the PS/RtI model on student and systemic outcomes. From the 21 studies that were reviewed, a total of 24 effect sizes and unbiased estimates of effect (UEE) were calculated. The researchers calculated effect sizes using Cohen's d and the UEEs using Cohen's d weighted by the sample size used in the studies.
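The weighting described by Burns et al. is consistent with a sample-size-weighted mean effect size; the expression below states that idea explicitly but is an assumed form rather than a formula quoted from their article:

\[ \mathrm{UEE} = \frac{\sum_{i} n_i d_i}{\sum_{i} n_i} \]

where d_i is the effect size from study i and n_i is that study's sample size, so larger studies contribute proportionally more to the aggregate estimate than smaller ones.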

Overall effect sizes ranged from .18 to 3.04 with a mean of 1.27 (SD = .94) and a median of 1.02, suggesting strong effects for implementing the PS/RtI model (Burns et al., 2005). The authors also disaggregated the results by student and systemic outcomes. For student outcomes (n = 11), the researchers found an average effect size of .96 (SD = .77) and a median effect size of .72, suggesting moderate to strong effects for implementing the model. Effects in favor of implementing the PS/RtI model were stronger for systemic outcomes. The average effect size (n = 13) was 1.53 (SD = 1.02) and the median effect size was 1.28. When calculating UEEs, strong effects were found for both student (UEE = 1.02) and systemic (UEE = 1.54) outcomes. The researchers also disaggregated the UEEs by whether the PS/RtI model was field-based or implemented by researchers. For field-based implementation of the model, the researchers found strong effects on both student (UEE = .94) and systemic (UEE = 1.80) outcomes. For models implemented by researchers, the analysis revealed a strong effect on student outcomes (UEE = 1.14) and a small to moderate effect on systemic outcomes (UEE = .47). Finally, the researchers examined the proportion of students placed into special education in schools implementing the service-delivery model. Across the studies reviewed, the researchers found that approximately 1.24% of students were referred for special education while approximately 1.68% were placed in special education. Both referral and placement rates were well below the national average. Based on these data, the authors concluded that both field-based and research-based implementation of the PS/RtI model resulted in strong effects on student and systemic outcomes.

In summary, a number of studies have been conducted to examine the impact of implementing PS/RtI models in schools. The unit of analysis in these studies has ranged from specific grade levels to state-level initiatives. Research on the model across units of analysis has demonstrated improvements in student and systemic outcomes. Increases in reading, math, and language achievement; decreases in office discipline referrals; decreases in special education referrals, evaluations, and placements; and decreases in disproportional representation of racial/ethnic minority and male students in special education have been reported. In addition, a review of the demographic profiles of the students sampled across the studies suggests that these improvements occurred in schools with both homogeneous and heterogeneous student populations, an important consideration given the accountability for disaggregated subgroup performance in NCLB (2002). However, tremendous variability existed in the unit of analysis, control over manipulation of the independent variables, and how the dependent variables were measured. Therefore, additional studies examining the impact of implementing
the model across different units of analysis, the differences between field-based and research-based implementation, and the model's impact on a variety of dependent variables are needed.

Implementation Challenges to be Faced

Findings from the studies reviewed above provide evidence that implementing the PS/RtI model results in improved student and systemic outcomes in schools. However, educators interested in implementing PS/RtI should consider other variables as well when evaluating the model. Outcomes derived as a result of implementing a new service delivery model are often moderated or mediated by a number of variables. Educators considering PS/RtI should identify these variables and determine the extent to which they may impact desired outcomes in their schools. What follows is a brief review of variables identified in the literature that have been examined in conjunction with PS/RtI outcomes.

Consensus

Initiatives to implement change in schools have been in existence for decades with little evidence for meaningful improvements in student outcomes. According to Sarason (1990), one of the reasons for persistent failure to improve outcomes is that facilitators of reform initiatives do not understand schools in the context of their histories or larger social systems. Long-existing power relationships result in top-down reform initiatives in which schools are expected to comply with directives from legislators, policy makers, and administrators. Educators typically are not involved in decision-making related to the reform initiatives, inhibiting collaboration from these key stakeholders who play a vital role in implementing any changes. Administrators and policy makers do not understand that, left unchecked, school personnel will often respond to reform initiatives in ways that minimize the effort required for real change to occur.
The avoidance response of school personnel, however, is directly related to the amount of powerlessness felt by the individuals most responsible for implementing the reform process. Thus, according to Sarason, not involving school personnel in decision-making about educational reform functions as a barrier to meaningful change.

Curtis, Castillo, and Cohen (2008) purport that achieving consensus among key stakeholders in a school (e.g., principal, teachers, support personnel) regarding the implementation of an innovation is a fundamental principle of engaging in effective systems change. Curtis et al. suggest that a commitment from the majority (80% is often suggested but not universally agreed upon) of stakeholders in a building should be obtained before proceeding with implementation of an innovation. Given that the level of commitment from school personnel regarding a reform initiative is likely to influence the extent to which implementation occurs, it is important to consider the nature of educator beliefs and how they change as a function of training. The degree of malleability of educator beliefs would be important for individuals interested in implementing the PS/RtI model to understand when disseminating information and initiating training on the model.

Pajares (1992) states that teachers hold beliefs about topics such as the nature of knowledge, roles and responsibilities of educators, causes of teacher and student performance, and confidence to perform specific tasks. According to Pajares, these beliefs are often developed early in the educational careers of teachers and, once formed, are difficult to change. In fact, Guskey (1986) purports that conventional staff development programs are typically unsuccessful in terms of bringing about change in teacher beliefs. However, Guskey reports that changes in attitudes often occur when
teachers practice a new procedure, particularly when it results in improved student performance. Thus, Guskey concluded that changes in teacher beliefs follow changes in teacher behavior. Although beliefs regarding implementing new practices appear to be resistant to change through in-services alone, it is not clear whether these findings would generalize to all beliefs core to a PS/RtI model. It also is unclear whether a combination of approaches including in-services, opportunities to practice, and feedback through coaching would result in changes in teacher beliefs.

Given the lack of clarity regarding whether research on teacher beliefs (Guskey, 1986; Pajares, 1992) would generalize to beliefs core to a PS/RtI model, change agents facilitating implementation of the model may need to target both the perceived need and the skills of educators simultaneously. Evaluators examining changes in beliefs after initiating training to implement the PS/RtI model might expect changes to vary as a function of time exposed to training and/or the degree to which educators have practiced PS/RtI skills. More frequent assessment of educator beliefs, and of the extent to which educators received direct training targeting beliefs versus practice with PS/RtI skills, may be required to determine which training activities lead to changes in beliefs relevant to PS/RtI practices. In addition to examining how beliefs change as a function of training, evaluators should examine the extent to which these changes relate to skill development, implementation of PS/RtI practices, and student outcomes. The current literature base provides little information on how changes in beliefs impact the skill development of educators and implementation of a PS/RtI model, and how these factors relate to changes in student outcomes. Research focusing on the impact of changes in beliefs on these factors
might provide change agents with information regarding how much to focus on altering the beliefs of educators.

A review of the literature indicates that a few researchers have begun examining key stakeholder consensus and beliefs as part of their evaluations of the PS/RtI model. Batsche, Elliott, Schrag, and Tilly (2005) presented data from two evaluations of satisfaction with the PS/RtI model. The Heartland Area Education Agency 11, an intermediate unit located in Iowa, administered a survey to principals, general education teachers, and special education teachers regarding their satisfaction with the model. When asked whether they agreed with the statement that the problem-solving process improves the performance of students whose academic skills or behaviors are a concern, the majority of respondents (approximately 87-97%) agreed that the model improved student outcomes. Batsche, Elliott, Schrag, et al. also reported data from a longitudinal evaluation of the implementation of a PS/RtI model in Illinois. Results of this evaluation indicated that teacher and parent satisfaction with the PS/RtI model was superior to satisfaction with the traditional model. Although these data suggest that educators and parents were satisfied with the services provided as a result of implementing the model, no data were presented regarding whether educators reached consensus prior to initiating implementation, how their beliefs changed as a function of training, or the extent to which educators continued to be invested in implementing PS/RtI policies and procedures. Nor were data found linking the degree of consensus to implementation integrity. Thus, more research is needed to investigate key stakeholder consensus and its relationship with implementation of the PS/RtI model.

Implementation Integrity. Noell and Gansle (2006) state that treatment integrity (hereafter referred to as implementation integrity) is critical to assessing student RtI. According to Noell and Gansle, educators cannot make sound judgments about the extent to which students respond to intervention without data demonstrating that the intervention plan was implemented as intended. The argument is that, without data demonstrating that the intervention plan was implemented with some acceptable level of integrity, one cannot rule out that poor student RtI was due to factors such as failure to implement key components of the intervention. However, what is unclear is exactly how implementation integrity should be defined and measured.

According to Noell and Gansle (2006), there are three critical dimensions that should be considered when measuring implementation integrity. The first dimension to consider is the degree to which the intervention plan is implemented as intended. Factors such as how the intervention is defined and what components of the plan are implemented precisely play major roles in determining integrity. What is clear from the limited research base is that as the degree to which the treatment plan is implemented with integrity decreases, the likelihood that the intervention plan will be ineffective increases.

The second dimension is how implementation integrity is to be defined and measured (Noell & Gansle, 2006). Educators must determine the critical elements of an intervention and at what level of detail to assess those elements. In other words, one must choose between assessing the critical elements globally (e.g., the four major steps in the problem-solving process), focusing on micro-level steps in which every potential action is assessed (e.g., every step of a scripted reading lesson), or examining an intermediate level in which important elements are clearly specified, but are defined
globally enough to be feasible to assess (e.g., critical components of each step of the problem-solving process). According to researchers, focusing on critical elements at an intermediate level appears to result in the optimal combination of reliably assessing implementation integrity and making assessment feasible for educators. The critical steps at this level are sensitive enough to pick up on variations in implementation and link levels of implementation to outcomes (Noell et al., 2005).

In addition to defining what elements of an intervention are critical, practitioners also must determine how to assess the critical steps. According to Noell and Gansle (2006), the most practical strategy might include using both observations and permanent products. Observations allow practitioners to record the degree to which key elements of the intervention plan are present, but are subject to reactivity biases and are often difficult to conduct on a frequent basis. Permanent products are another potentially valuable source regarding implementation integrity because some intervention plans or components of an intervention plan leave products that can be used to assess critical elements. However, not all intervention plans lend themselves to readily available permanent products. Although teacher self-report of implementation also has been considered in the literature, research has demonstrated that self-reports tend to be biased upward and often conflict with observation or permanent product data (Noell et al., 2005). Thus, the current literature on implementation integrity suggests that a combination of observation and permanent product review is the best method currently available to educators for assessing implementation integrity. The third and final dimension suggested by Noell and Gansle (2006) is ensuring that training results in an improved probability of adequate implementation integrity.
According to Showers, Joyce, and colleagues (Showers & Joyce, 1996; Showers, Joyce, & Bennett, 1987), effective professional development practices contain several major stages: theory, demonstration, opportunities to practice, and immediate corrective feedback. First, the theoretical basis and rationale behind the skills being taught must be provided. The purpose of providing this information is for educators to obtain a knowledge base on which to draw when implementing the new practices and to achieve consensus that the new practices are important to implement. Next, individuals with experience implementing the new practices model the required skills. Finally, educators learning the new skills are provided multiple opportunities to practice followed by immediate corrective feedback after each opportunity. The purpose of the final three stages is for educators to become proficient with the new skills through observation, repeated practice, and feedback on their performance. Showers et al. (1987) demonstrated that professional development models that include these four stages result in improved implementation of new practices. Showers and Joyce (1996) reported that later studies suggest that omitting the feedback component of the model did not result in decreases in implementation of new practices. However, researchers have demonstrated that including direct instruction and immediate corrective feedback as part of training increases implementation of PS/RtI procedures over didactic instruction alone (Curtis & Metz, 1986; Noell et al., 2005; Zins & Ponti, 1996).

Evaluations of the integrity of PS/RtI procedures have varied in terms of what components of the model were examined and how integrity was defined and measured. Noell, Duhon, Gatti, and Connell (2002) examined implementation integrity across four teachers who implemented behavioral interventions for eight elementary school students.
The students were referred for behavioral problems in the classroom. A nonconcurrent multiple baseline design across participants was used to examine implementation integrity. Following the development and implementation of the behavior management interventions, the researchers collected integrity data via behavior records used as part of the intervention (i.e., permanent products). The proportion of correct steps included in the behavioral records was used to determine the degree to which the interventions were implemented as intended. Data review meetings were held on a daily basis once implementation integrity was low and stable, or trending downward. The meetings consisted of reviewing student behavior, plan implementation, and strategies for implementing the intervention the next day. When accurate implementation was achieved, the data review meetings were faded to every other day. For those teachers who did not meet the accuracy criterion, performance feedback on implementation integrity was added using two graphs. One graph displayed student outcomes while the other graph displayed the proportion of intervention steps implemented correctly by the teacher. Performance feedback meetings were systematically faded when the intervention was implemented with integrity across multiple days.

Results indicated that initial implementation varied across teachers, but decreased or became unstable quickly in the absence of follow-up from the consultants (Noell et al., 2002). The brief data review meetings initiated following low or decreasing levels of integrity resulted in improvements in implementation integrity for one teacher, some improvement for two teachers, and no improvement for the fourth teacher. The addition of performance feedback to the data review meetings, however, resulted in high, stable integrity data. Finally, fading of the performance feedback sessions resulted in less
stability, but integrity data remained relatively high. Interestingly, implementation integrity was higher for subsequent referrals than for the initial referrals to the teachers. These results suggested that implementation integrity varied by teacher, type of feedback provided to teachers, and the teachers' experience with the intervention procedures. The degree of variance in implementation integrity and the number of factors that contribute to the variance provide evidence for the need to carefully assess integrity when evaluating student RtI.

Noell et al. (2005) expanded on the previous study by examining three different consultation strategies to determine their impact on levels of implementation integrity. Interventions were implemented for 45 elementary school students requiring services for academic and/or behavioral concerns. Following the development and initial implementation of the intervention plan for each student, one of three consultative follow-up procedures was used for a period of 3 weeks. The follow-up procedures examined were brief weekly interviews, weekly interviews with an emphasis on commitment to follow the intervention protocol, and performance feedback. Permanent products from the intervention plans were used to assess the degree of implementation integrity. Results from a one-way ANOVA indicated that the performance feedback condition was superior to the other two conditions in terms of implementation integrity [F(2, 42) = 9.0, p = .001; Noell et al., 2005]. The average percentage of intervention components found in the permanent products used to assess integrity for the performance feedback condition was approximately 80% across the 3 weeks. The average percentage of intervention components implemented for the other two conditions ranged from
approximately 20-65%, with integrity decreasing as a function of time. In fact, a significant main effect was found across the three conditions for time [F(2, 42) = 10.0, p < .001], indicating that implementation integrity decreased across the three weeks. No interaction effects were found. Noell et al. also examined student change in performance for each of the consultation conditions. Results indicated that significant differences existed among the groups [F(2, 38) = 10.7, p < .001], with students in the performance feedback condition outperforming students in the other two groups. Thus, Noell et al. demonstrated that implementation integrity could be improved through performance feedback; however, the degree to which the intervention was implemented with integrity decreased over time. The data also demonstrated that students in the group with the highest level of implementation integrity improved the most, thereby reinforcing the importance of evaluating implementation integrity when making decisions about student RtI.

Flugum and Reschly (1994) examined implementation integrity and its relationship to student outcomes as well. Flugum and Reschly used quality indicators of interventions to determine the degree to which problem-solving procedures were implemented with integrity. The quality indicators were intermediate-level components of intervention-based service delivery that are considered critical within a problem-solving model (see Upah & Tilly, 2002, for a description of quality indicators of problem solving). The sample consisted of 360 general education teachers and 422 student support service personnel who participated in intervention development and implementation for 470 randomly selected Iowa students. The selected students had been referred for special education and received a comprehensive evaluation, but were found ineligible for services. A
survey asking the respondents to provide information regarding whether or not quality indicators were present during the intervention was used to examine integrity. Questions about student performance were also asked on the survey. The data were collected over a 3-year period. Of those sampled, only 175 teachers and 123 support personnel indicated that an intervention was implemented prior to the special education referral. Only those responses in which an intervention was implemented prior to referral were included in the analyses.

Results indicated that the majority of pre-referral interventions were deficient in terms of the quality indicators of problem solving (Flugum & Reschly, 1994). A significant amount of variation in the proportion of teachers and student support service personnel who implemented problem-solving procedures was evident across the quality indicators. The proportion of teachers who responded that they implemented a critical component of problem solving ranged from 7-78% (median = 39.5%) across the quality indicators. Student support service personnel implementation of each critical component ranged from 2-71% (median = 35.5%) across the quality indicators. Despite the lack of implementation integrity reported by respondents, significant positive correlations were found between the number of quality indicators present and positive student outcomes. Correlations of .17 (p < .05) and .29 (p < .05) were found between the number of quality indicators present and improvement in the target behavior for teachers and student support personnel, respectively. Thus, Flugum and Reschly concluded that the presence of quality indicators varied tremendously by case and that increasing implementation integrity may lead to improved outcomes for students. Although the use of self-report data suggested that these conclusions should be interpreted with caution, the findings reported by
Flugum and Reschly were consistent with other studies examining implementation of the PS/RtI model (see below).

The studies conducted by Noell and colleagues (Noell & Gansle, 2006) and Flugum and Reschly (1994) provided support for including implementation integrity in evaluations that examine the impact of intervention-based service delivery models on educational outcomes. Consistent with the findings of these studies, several researchers have examined implementation integrity as part of their evaluation of the impact of the PS/RtI model on student and systemic outcomes. In an evaluation of district-wide implementation of a PS/RtI model reviewed above, VanDerHeyden et al. (2007) examined the degree to which assessment and intervention procedures were implemented as intended. An integrity checklist that specified each observable step of the assessment procedures was used to examine assessment integrity. As part of their training protocol, the observers reminded teachers prior to administration to follow instructions from the available script when conducting screenings. When steps were implemented incorrectly, teachers were prompted to complete those steps in the script. The total number of correctly (i.e., unprompted) implemented steps was divided by the total number of steps possible and multiplied by 100% to estimate assessment integrity. For all schools, 54 observations were conducted and average integrity for the assessment procedures was 98.76%. During the 54 observations, three teachers required 1-2 prompts to correctly complete omitted steps.
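Stated as a formula, the integrity estimate just described is as follows (this simply restates the calculation in the preceding sentences and is not notation drawn from VanDerHeyden et al.):

\[ \text{Assessment integrity (\%)} = \frac{\text{steps implemented correctly without prompting}}{\text{total steps possible}} \times 100 \]

For example, a teacher completing 24 of 25 scripted steps without a prompt would receive an integrity score of 96%.
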
VanDerHeyden et al. (2007) also examined the integrity of decisions regarding intervention success. To examine the integrity of the decisions reached by educators, the criterion used to determine intervention success in their PS/RtI model was provided to an untrained observer. The untrained observer received the individual intervention data from 44% of the intervention cases at the schools. Agreement between the untrained observer and the school personnel responsible for making decisions regarding adequate or inadequate response to intervention exceeded 87%. Thus, it appeared that educators were able to implement assessment and intervention decision protocols with high levels of integrity. Decreases in special education evaluations and disproportional representation of males reported by VanDerHeyden et al. are consistent with the findings from Noell et al. (2005) and Flugum and Reschly (1994) regarding the positive relationship between implementation integrity and outcomes.

Callender (2006) examined implementation integrity using multiple methods as part of his evaluation of the PS/RtI model in the state of Idaho. Callender collected self-report data from PS/RtI teams across the state and reviewed intervention plans to attain an index of fidelity. Self-report data were collected by administering surveys to 55 PS/RtI teams (i.e., 359 teachers, principals, student support services personnel, etc.) who attended PS/RtI trainings. Results of the survey revealed that although PS/RtI teams indicated that they implemented some components of effective problem-solving meetings at a high level (e.g., positive team atmosphere, parents encouraged to attend), the teams rated implementation of key RtI steps (e.g., data were collected, progress graph was discussed, changes in aimlines/interventions were made) as the lowest in terms of implementation integrity.

Reviews of intervention plans during the first year of implementation were consistent with the RtI teams' perceptions (Callender, 2006). During year 1 of implementation, intervention plans varied tremendously in terms of the proportion of key
problem-solving steps (e.g., defining the problem) included. The number of intervention plans that contained evidence of a critical component of problem solving ranged from 9-72% (median = 35.5%) across the key steps. Following additional training during year 2 of implementation, the percentage of intervention plans that included evidence of key components of problem solving increased. The number of intervention plans that contained evidence of a critical component of problem solving ranged from 60-91% (median = 86.5%) across the key steps. Thus, both self-report data and reviews of permanent products revealed that implementation integrity of core problem-solving steps was low during the first year of implementation. Reviews of permanent products during the second year of implementation revealed higher levels of implementation integrity. Callender did not present any data on the relationship between implementation integrity and student outcomes.

In another state-level evaluation of implementation of the PS/RtI model, Stollar and Graden (2006) used multiple methods to examine implementation integrity as well. Consistent with Callender (2006), Stollar and Graden collected self-report data and reviewed permanent products to determine the degree to which problem-solving practices had been implemented with fidelity. In terms of the self-report data collected, surveys were administered to participants in problem-solving trainings assessing their perceived knowledge, skills, and levels of implementation of problem-solving practices. Respondents reported high levels of learning on key problem-solving components across the school year (i.e., the average score ranged from 4.65 to 5.23 on a 6-point Likert-type scale across the year), indicating that participants felt they had the knowledge and skills to implement the model. Respondents also reported high levels of use of systems-level
problem solving (5.02 out of 6 on a Likert-type scale), suggesting that educators perceived they were implementing problem-solving procedures to address classroom- and/or building-wide issues.

Stollar and Graden (2006) also examined permanent products from intervention cases in participating schools to determine the degree to which quality indicators of problem solving were present. The presence of quality indicators (i.e., critical components) was examined using a measure developed by Upah and Tilly (2002) that employs a 5-point Likert-type scale. The percentage of cases examined with ratings of 4-5 (i.e., the two highest ratings) on indicators of problem-solving quality ranged from 0-100%, with a median of 74%, across the problem-solving steps examined. These data revealed variability in the degree to which problem-solving components were implemented with integrity; however, the median of 74% suggests that many of the problem-solving steps were implemented with relatively high levels of integrity. This finding appeared to be consistent with the integrity data reported by the problem-solving training participants and with student outcome data demonstrating improvements in reading achievement.

In sum, researchers examining implementation integrity have demonstrated that educators do not always implement assessment and intervention procedures as intended. Although implementation integrity is often low in schools, methods to improve fidelity exist in the literature. Thus, it appears necessary to monitor levels of integrity given the degree of drift in implementation that has been found in the literature despite the use of evidence-based training procedures. The importance of monitoring implementation integrity when evaluating outcomes is more evident when the relationship between
fidelity and student outcomes is considered. Both micro- and macro-level evaluations of PS/RtI procedures have demonstrated that a positive relationship exists between levels of integrity and student and systemic outcomes. The research examining this relationship is limited, however, and requires more stringent statistical analyses, particularly for macro-level evaluations of PS/RtI implementation.

Research on Program Evaluation Models

Researchers examining the PS/RtI model have focused on a number of variables including key stakeholder satisfaction with the model, implementation integrity, and student and systemic outcomes. Although improving student and systemic outcomes is arguably the ultimate criterion for success, variables such as key stakeholder consensus and implementation integrity have impacted evaluators' ability to make statements about the degree to which the PS/RtI model affected those outcomes. Evaluations that examine such variables separately, however, may not be of much use to educators implementing PS/RtI in schools. Because studies of the impact of the PS/RtI model occur in complex, real-world settings in which researchers often have limited control over the variables studied, a clear understanding of the relationship between potential independent, moderating, mediating, and extraneous variables, and their impact on the dependent variables of interest (e.g., student and systemic outcomes), is crucial for researchers to accurately interpret their findings. Program evaluation models are the vehicle through which individuals conducting applied research on the impact of innovations can organize variables relevant to implementation of an innovation such as the PS/RtI model.

Stufflebeam (2001) defined program evaluation as "a study designed and conducted to assist some audience to assess an object's merit and worth" (p. 11). According to Stufflebeam, program evaluation approaches can be broken down into four main categories: pseudoevaluations, questions/methods oriented, improvement/accountability oriented, and social agenda/advocacy oriented. Pseudoevaluations fail to provide valid assessment data to all audiences that have an interest in the evaluation. Such evaluations include public relations studies that seek to provide a favorable view of a program regardless of its actual merit, or politically controlled studies that do not provide equal access to findings for all interested groups.

Questions/methods, improvement/accountability, and social agenda/advocacy oriented approaches are more valid evaluation approaches in the sense that they seek to attain accurate information about the merit of a program and disseminate it to all interested parties (Stufflebeam, 2001). Questions/methods oriented approaches typically employ a set of well-defined research questions and/or methods to evaluate a program. When answering particular questions is the focus of the evaluation, the methods are secondary in that the appropriate method to address each question is chosen. Questions are often derived from the program's objectives, accountability requirements from a funding agency, and/or an expert's beliefs about what the evaluative criteria should be. Methods-oriented approaches typically emphasize technical adequacy of the evaluation by choosing particular methods (e.g., controlled experimental procedures, program models, case study procedures) to evaluate components of a program. Stufflebeam asserts that questions/methods oriented approaches are quasi-evaluation studies in that they can
provide answers to important questions; however, the focus of the approaches is often too narrow to provide an overall view of a program's merit or worth.

Improvement/accountability approaches, according to Stufflebeam (2001), consider the full range of questions and criteria required for assessing the merit of a program. Such approaches often examine the needs of program stakeholders and use them as the foundational criteria for determining merit. Evaluators employing these approaches also examine the technical and economic aspects of a program in conjunction with all relevant outcomes. Thus, these approaches emphasize improvement through data-based decision-making, providing consumers with assessments of various programs and services, and assisting consumers to investigate the merits of competing programs.

Social agenda/advocacy approaches, the final category of approaches reviewed by Stufflebeam (2001), are used by evaluators to attempt to make a difference in society. Approaches in this category typically seek to ensure that all segments of society have equal access to opportunities and services. These approaches are often constructivist in orientation and employ qualitative methodology. Evaluators encourage the engagement of key stakeholders in obtaining and interpreting findings. Stufflebeam states that the social agendas of evaluators and the involvement of key stakeholders in decision-making may make such approaches vulnerable to the biases of all involved; however, the principles of fairness and equity in terms of program goals and involvement of stakeholders in decision-making make such approaches appealing in a democratic society.

The studies examining PS/RtI implementation reviewed above suggest that researchers typically relied on a combination of questions/methods and
improvement/accountability oriented approaches. The researchers examining the model often asked research questions investigating how implementing the PS/RtI model would impact students, educators, and the buildings that contain them (e.g., VanDerHeyden et al., 2007). These research questions were used to derive the methods employed to evaluate implementation of the model and its impact on a number of dependent variables. In many cases, the results were shared with key stakeholders to improve the quality of services provided in the schools (e.g., Callender, 2006; Noell et al., 2002), thereby introducing an improvement/accountability component to many of the evaluation models. The majority of the studies examined, however, focused on a fairly narrow set of independent and dependent variables. Because few comprehensive evaluations of the PS/RtI model have been conducted, it is important for researchers interested in examining implementation to gain an understanding of the PS/RtI model and its intended impact on various levels of the school system. One way that program evaluators can accomplish the task of evaluating outcomes at the student, staff, and building levels is by developing logic models. According to McLaughlin and Jordan (1999), logic models are tools used by program evaluators to examine the hypothesized impact of a program across various levels of a system. By examining the outcomes targeted by a program in the context of inputs, processes, and outcomes, evaluators are able to display the relationships among myriad variables to aid in interpretation of program results.

Inputs within a logic model are often divided into two categories: resources and characteristics (Boothroyd, 2005). In school settings, resources include the time, the number and type of personnel, and the funding available to support implementation of a program. Examinations of characteristics in school-based evaluations often involve investigations
of student demographics (e.g., race/ethnicity, gender, SES), previous knowledge and skills of school staff, and the organizational and financial structure of the building(s). Processes are what occur after implementation of the program (Boothroyd, 2005). Processes include the content, frequency, and intensity of services delivered to students; the content, frequency, and intensity of training provided to school staff; and the organizational and structural changes made at a building level to support implementation of the program. Outputs are synonymous with the outcomes produced as a result of implementing the program (Boothroyd, 2005). Outputs are often organized by short-, intermediate-, and long-term goals to facilitate interpretation and increase the capacity of evaluators to use findings to help make formative changes to program implementation. Outcomes examined in educational evaluations typically include student academic and behavioral performance; changes in the beliefs, knowledge, and skills of educators; and systemic variables such as costs, disproportionality, and referrals and placements associated with special education services.

Logic models also may include consideration of the goals/objectives of key stakeholders, external factors that impact the target organization, and contextual factors within the organization. Goals/objectives of schools can vary but tend to revolve around facilitating the academic, behavioral, and/or socio-emotional success of their students. External factors include legislation, regulations, funding shifts, and demographic shifts in the surrounding neighborhoods. Examples of contextual factors are leadership, school climate, motivation for change, and key stakeholder buy-in. Because schools are complex social systems that operate within larger social systems (Curtis et al., 2008), understanding school goals/objectives, external pressures that shape those
goals/objectives, and the contextual variables that impact implementation of the innovation is important for capturing an accurate picture of an innovation and its impact on educational outcomes.

Although Stufflebeam's (2001) emphasis on attaining a comprehensive picture of an innovation and its impact on key stakeholders is important, evaluators must make practical decisions regarding which variables to assess within their program evaluation model. In complex systems such as schools, a myriad of inputs, processes, outputs, contextual factors, and external factors contribute to the student and systemic outcomes of interest to educators. Once the outcomes of interest and the relevant variables that may contribute to those outcomes are identified, evaluators must make decisions about which variables to assess. These decisions are typically driven by two factors: parsimony and resources (Boothroyd, 2005).

Evaluation models provide feedback to service providers, funding agencies, and consumers regarding the effectiveness of a program being implemented (Stufflebeam, 2001). For the evaluation model to be useful, consumers of the evaluation should be able to use the formative and summative data collected to make decisions about how to proceed with service delivery. Funding agencies must make decisions regarding which projects or components of projects to continue funding. Stakeholders of the services must be able to make decisions regarding which services to advocate for or use. For these types of decisions to be made, the complexity of the evaluation model cannot exceed the evaluators' and stakeholders' ability to interpret and use the results. Therefore, evaluators should consider including variables in the evaluation model that they can assess reliably and that lead to a better understanding of the merit of the program.

Resources play an important role in determining what variables to assess within an evaluation model as well. The time, funding, and personnel available should all be considered when developing an evaluation model. Many funding agencies require that evaluations be completed within a specified time frame to ensure that the program continues to receive funding. In addition, the amount of money provided by funding agencies or organizations for the assessment materials, travel, and technology required to conduct a comprehensive evaluation varies tremendously. The number of personnel available to assist in data activities (e.g., collection, analysis) varies as well. Therefore, evaluators with more time and funding typically are better able to provide the personnel and materials required to conduct a comprehensive evaluation, whereas those individuals with more limited resources will likely have to make decisions regarding which variables are the most important to assess.

Evaluations of PS/RtI implementation are affected by the need for parsimony and resources as well. Much of the literature on implementation of the PS/RtI model has focused on outputs (i.e., outcomes). Student achievement (i.e., reading and math performance), systemic (i.e., office discipline referrals, referrals for and placements in special education), and educator (i.e., knowledge, skills, beliefs, and satisfaction) outcomes have been the focus of both small-scale (e.g., grade- and school-level) and large-scale (e.g., district- and state-level) evaluations of the PS/RtI model (Batsche, Elliott, Schrag, et al., 2005; Burns, Appleton, & Stehouwer, 2005). Some studies have focused on the processes that occur when implementing PS/RtI practices, such as the degree to which procedures were implemented as intended (i.e., implementation integrity; Noell & Gansle, 2006). Examination of inputs often has focused on the
demographics of students; however, a few evaluations have included information on teacher and community variables (see review of PS/RtI evaluations above). Interestingly, the published studies on PS/RtI implementation tend to focus on a fairly prescribed set of research questions. Although the results derived from addressing the research questions suggest that implementing the PS/RtI model leads to improved outcomes, a more comprehensive evaluation that examines the relationships among inputs, processes, contextual, and external factors, and their impact on student and systemic outcomes, would allow stakeholders to attain an understanding of the circumstances in which the model is likely to be successful.

To provide a comprehensive picture of the circumstances that lead to improved student and systemic outcomes, variables in addition to those typically studied must be included in evaluations of the PS/RtI model. In addition to student demographics, inputs such as resources and organizational structures at the building, district, and state levels should be assessed to determine the characteristics and resources of stakeholders and organizations implementing the model, as well as how they change over time. Processes assessed should include the training and technical assistance provided along with implementation integrity. The inclusion of such variables in an evaluation model would allow researchers not only to determine the degree to which the model was implemented, but also what factors lead to higher levels of fidelity. Contextual factors (e.g., leadership, school climate, staff consensus) and external factors (e.g., federal legislation, state and district policies) also should be examined, as research has demonstrated their importance in terms of facilitating systemic change in schools (Curtis et al., 2008). Finally, the goals and objectives of the key stakeholders implementing the model should be examined.
Although the PS/RtI model can be implemented to address academic and behavioral outcomes, schools implementing the model often choose to focus on specific content areas (e.g., reading). Thus, evaluation models that include the types of variables highlighted in this paragraph are more aligned with the criteria for comprehensiveness outlined by Stufflebeam (2001).

Conclusions

Data on implementation of the PS/RtI model suggest positive results across a number of processes and outcomes. High levels of implementation integrity for components of the PS/RtI model (e.g., accurate administration of screening assessments, implementation of the majority of critical components of interventions) have been reported by investigators. Positive outcomes have been reported for educator (e.g., improvements in perceived knowledge and skills, high levels of satisfaction), student (e.g., increases in reading and math achievement), and systemic (e.g., decreases in the number of ODRs, decreases in referrals to and placements in special education) outcomes as well. However, the evaluations have varied in terms of the unit of analysis examined, the evaluation questions/methods used, and the comprehensiveness of the evaluations.

The findings described above suggest that more comprehensive evaluations of implementation of the PS/RtI model are needed. Additional evaluations are needed across classroom-, building-, district-, and state-level initiatives to implement the model. More detailed identification and analysis of inputs, processes, outputs, contextual factors, and external factors that impact implementation are necessary across these units of analysis to provide a more comprehensive picture of the circumstances in which PS/RtI tends to be successful. Although parsimony and resources should play a role in evaluators' decisions
regarding evaluating implementation of the PS/RtI model, the more relevant variables that are reliably assessed and interpreted, the more information should be available for key stakeholders of the evaluation to use to inform decision-making.

Chapter III

Method

A longitudinal, quasi-experimental research design was used to address the research questions proposed for this program evaluation study. This study proposed to formatively evaluate the impact of the first year of a 3-year statewide school reform initiative (the Florida PS/RtI Project). Data were collected on a number of input, process, and outcome variables from pilot schools implementing the model and from matched comparison schools to evaluate the Project's impact on important educational outcomes following the first year of implementation.

Participants

Pilot Schools. Eight districts and a total of 40 schools within those districts were selected to begin implementing the PS/RtI model during the 2007-08 school year. These districts and schools were selected through a competitive application process. All 67 school districts in the state of Florida were encouraged to submit applications proposing up to six pilot schools to begin implementation of the PS/RtI model (see Appendix A for a copy of the application). The application was sent to district personnel in leadership positions (i.e., Superintendents, Associate Superintendents for Curriculum and Instruction, and Exceptional Student Education Directors), and three informational Bidders' Conferences were held to provide a detailed overview of the requirements for
submitting the applications to the Project. Of the potential 67 applicants, 12 school districts applied (approximately 18% of Florida's school districts).

A minimum of two reviewers from the Florida PS/RtI Project Leadership Team independently evaluated each of the 12 submitted applications. Each application was scored using a standard evaluation rubric (see Appendix A for a copy of the rubric used). The rubric contained 11 items that assessed the extent to which the district's proposal clearly articulated overall commitment to the Project, commitment of resources and personnel, inclusion of the district- and school-level data requested, and previous experience with other programs or initiatives. Decisions regarding the selection of districts were made based on two criteria: the average score received on the application from the two independent reviewers and the extent to which the districts were representative of other Florida school districts. District size, geographical location, and student demographic profiles were used as the primary indices of the degree to which districts were representative of other Florida school districts.

The specific protocol used to select demonstration districts involved several steps. First, districts were grouped by size (i.e., the number of students in the district was used to organize districts into small, medium small, medium, large, and very large districts). Next, the average score received on the application was used to rank the 12 districts' applications from highest to lowest within each size grouping. Then, a discussion occurred regarding the extent to which the highest scoring district within each of the five groups would provide schools that were demographically and geographically representative of Florida schools. The Project Leadership Team decided that the demographic profiles provided in the applications and the geographic location of the five districts
suggested that the top scoring districts within each size group would provide a representative sample. Finally, the next three highest scoring districts were selected to participate based on the resources that were available to fund and provide PS/RtI training and technical assistance.
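
The grouping-and-ranking portion of this protocol can be sketched in a few lines of code. The sketch below is purely illustrative: the district labels, size groups, and scores are hypothetical, and the selection itself was carried out by the Leadership Team rather than by software; the code only mirrors the logic described above.

# Illustrative only: hypothetical districts and reviewer scores, not Project data.
applications = [
    {"district": "District W", "size_group": "small", "mean_score": 38.5},
    {"district": "District X", "size_group": "small", "mean_score": 41.0},
    {"district": "District Y", "size_group": "medium", "mean_score": 36.5},
    {"district": "District Z", "size_group": "medium", "mean_score": 40.5},
]

# Group applications by size category, then rank by mean reviewer score within each group.
by_group = {}
for app in applications:
    by_group.setdefault(app["size_group"], []).append(app)

for group, apps in by_group.items():
    apps.sort(key=lambda a: a["mean_score"], reverse=True)  # highest-scoring first
    print(group, [a["district"] for a in apps])
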
See Table 1 below for an overview of the size, location, and student demographics of the eight demonstration districts selected.

Table 1
Size, Location, and Student Demographics of Selected Demonstration Districts

District    Size     Location  White            Black           Hispanic         FRL              ELL             Disability(a)
District A  35,723   North     27,218 (76.2%)   4,364 (12.2%)   2,319 (6.5%)     8,916 (25.0%)    404 (1.1%)      7,490 (21.0%)
District B  353,831  South     33,274 (9.4%)    95,075 (26.9%)  216,543 (61.2%)  208,795 (59.0%)  57,455 (16.2%)  71,531 (20.2%)
District C  8,377    South     5,069 (60.5%)    828 (9.9%)      2,022 (24.1%)    3,014 (36.0%)    458 (5.5%)      1,773 (21.2%)
District D  64,680   Central   49,512 (76.5%)   3,225 (5.0%)    8,067 (12.5%)    27,543 (42.6%)   2,235 (3.5%)    13,468 (20.8%)
District E  110,006  Central   70,287 (63.9%)   20,292 (18.4%)  9,520 (8.7%)     44,530 (40.5%)   3,610 (3.3%)    23,042 (20.9%)
District F  92,809   Central   49,207 (53.0%)   19,882 (21.4%)  19,520 (21.0%)   53,213 (57.3%)   7,103 (7.7%)    15,687 (16.9%)
District G  26,971   North     22,425 (83.1%)   2,352 (8.7%)    1,070 (4.0%)     4,726 (17.5%)    143 (0.5%)      4,778 (17.7%)
District H  6,699    North     5,677 (84.7%)    534 (8.0%)      287 (4.3%)       3,010 (44.9%)    140 (2.1%)      1,111 (16.6%)

Note. Size is the number of students in the Pre-kindergarten through 12th grade population. Disability represents the number of students identified with disabilities ages 6-21 (Florida Department of Education, 2008). Values in parentheses represent the percentage of the district population that the subgroup represents. ELL = English Language Learners; FRL = Free-Reduced Lunch.
(a) Values include students receiving gifted education services.

The eight selected districts contain a total of 40 pilot schools. The number of pilot schools that participated within each of the eight districts ranged from three to seven. The selected schools varied within and across districts in terms of school size (i.e., the number of students in the school), student demographics, and student achievement. See Table 2 for a summary of descriptive data for the pilot schools from the 2007-08 school year.

Table 2
Descriptive Statistics for Pilot and Comparison Schools for School Size, Student Demographics, and Student Achievement

Variable                            Pilot(a)           Comparison(b)      F-Value
# of Students                       673.70 (232.19)    756.39 (212.85)    2.53 (d = -0.46)
% Caucasian                         54.13 (26.98)      57.22 (30.69)      0.21 (d = -0.11)
% Black                             23.98 (24.30)      25.49 (30.49)      0.06 (d = -0.06)
% Hispanic                          14.90 (11.01)      11.22 (8.39)       2.47 (d = 0.37)
% Male                              52.02 (2.47)       51.63 (3.32)       0.33 (d = 0.13)
% FRL                               53.34 (24.44)      51.05 (26.71)      0.14 (d = 0.09)
% ELLs                              11.50 (13.04)      10.92 (13.75)      0.03 (d = -0.04)
% SWDs                              16.19 (5.81)       17.12 (6.53)       0.44 (d = 0.16)
Average FCAT Reading standard score 311.74 (18.51)     313.78 (17.42)     0.23 (d = -0.11)
Average FCAT Math standard score    327.91 (19.99)     330.39 (18.31)     0.30 (d = -0.13)

Note. Data for school size, student demographic, and student achievement variables represent the mean and standard deviation (in parentheses) for each variable. ELL = English Language Learner; FCAT = Florida Comprehensive Assessment Test; FRL = Free-Reduced Lunch; SWD = Students with Disabilities.
(a) n = 40. (b) n = 33.

Comparison Schools. To provide a referent against which to evaluate implementation of a PS/RtI model, districts were asked to propose a matched comparison school for each pilot school proposed in their applications. A total of 36 matched comparison schools were proposed by the eight selected districts. Following the selection of the demonstration districts, the Project Leadership Team examined each of the proposed pilot and matched comparison schools to determine the extent to which each set of schools was similar. Project Leadership Team members believed that statistical analyses to determine whether significant differences existed were not appropriate at the time of the preliminary comparison because of concerns over the accuracy of data reported by the pilot districts in their applications (some discrepancies between data provided by districts in their applications and data available through the Florida Department of Education website were observed). Therefore, Project staff conducted a visual analysis of the differences between the sets of schools on a number of variables.

School philosophy, school size, student demographics, student achievement, and the presence of other state initiatives (i.e., Reading First, Positive Behavior Support, and Voluntary Pre-Kindergarten) were examined to determine the degree to which the comparison schools were appropriate matches for the pilot schools. School philosophy (e.g., standards-based education versus Montessori) and the number of grade levels served were the primary foci of the visual analysis. Project staff decided that statistical analyses including the demographic and achievement variables would be conducted following collection of these data from district and state databases during Year 1 of the Project. Following the visual analysis, Project staff determined that three of the proposed 36 comparison schools were not appropriate matches because of their status as specialty
schools (i.e., their philosophical orientation toward educating students or the inclusion of high school grade levels made them different from the vast majority of elementary schools). Because of the small number of schools in the two districts that proposed the three specialty schools, no additional comparison schools could be provided, resulting in a total of 33 comparison schools for this study. Refer to Table 2 for summary descriptive data for the comparison schools.

Upon receiving Year 1 demographic and achievement data on the participating schools from the Florida Department of Education Data Warehouse (described below in more detail), Project staff conducted inferential analyses to determine the extent to which the pilot and comparison schools were similar. A series of one-way ANOVAs was conducted on a number of demographic and achievement variables to determine whether significant differences between pilot and comparison schools existed. Specifically, Project staff compared pilot and comparison schools on size (i.e., the number of students in a school), racial/ethnic composition (i.e., the proportions of White, Black, Hispanic, Asian, Native American, and multi-racial students attending a school were analyzed separately), gender (i.e., the proportion of male students attending a school), the proportion of students eligible for free or reduced lunch, the proportion of English Language Learners, the proportion of students with disabilities, and average FCAT performance (i.e., the average standard scores for the reading and math subtests were examined separately). Each of these variables was examined in a separate one-way ANOVA. Results of the ANOVAs indicated that no significant differences between pilot and comparison schools existed for any of the aforementioned variables (all p-values exceeded .05).
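
To illustrate the form of these analyses, the sketch below runs a one-way ANOVA on hypothetical school-level enrollment figures using SciPy; with only two groups, this is equivalent to an independent-samples t test. The numbers and group sizes are invented for illustration and are not the Project's data.

from scipy import stats  # assumes SciPy is installed

# Hypothetical enrollment counts for a handful of schools (not the Project's data)
pilot_enrollment = [612, 745, 530, 688, 701, 655]
comparison_enrollment = [720, 810, 590, 760, 700, 745]

f_stat, p_value = stats.f_oneway(pilot_enrollment, comparison_enrollment)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# As in the comparisons reported above, a p-value greater than .05 would be read
# as no statistically significant difference between the two groups of schools.
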
Because of concerns regarding limits to the statistical power available to detect differences between the 40 pilot and 33 comparison schools, Project staff calculated Cohen's (1988) d for each demographic and outcome comparison. Cohen's d provides an index of the size of any discrepancies between groups by dividing the difference between the group means by the pooled standard deviation. Effect sizes between .2 and .5 are considered small. Effect sizes between .5 and .8 are considered medium. Effect sizes of .8 or above are considered large. When Cohen's d was calculated for each of the comparisons outlined above, only two small effects were found: -.46 for school size and .37 for the proportion of Hispanic students. All other estimates ranged from -.13 to .16. These findings provide additional evidence that pilot and comparison schools appeared to be comparable across a number of demographic and achievement variables during Year 1 of the Project. The small effect sizes observed for school size and the proportion of Hispanic students attending the schools suggest that differences on these variables may have been detected if more power were available. See Table 2 for the results of the ANOVAs and the effect sizes for each variable used to compare pilot and comparison schools.
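
For reference, the calculation described above can be written out as follows (standard notation for Cohen's d; the symbols are not taken from the Project's materials):

\[ d = \frac{\bar{X}_{1} - \bar{X}_{2}}{s_{pooled}}, \qquad s_{pooled} = \sqrt{\frac{(n_{1} - 1)s_{1}^{2} + (n_{2} - 1)s_{2}^{2}}{n_{1} + n_{2} - 2}} \]

Interpreted this way, the reported d of -.46 for school size indicates that the mean enrollment of the pilot schools fell a little less than half a pooled standard deviation below that of the comparison schools.
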
92 92 and skills needed to implement the PS/RtI m odel. The training modules developed for the project focus on data-based decision-making practices that improve student outcomes in the general education and sp ecial education environments Districts send school-based teams to participate in the trai ning. Participation in the traini ng is voluntary, and technical assistance and follow-up by Project staff is limite d, as is data collec tion to evaluate the impact of statewide training. The demonstration site component of the Project, on the other hand, is intended to provide a comprehensive evaluation of the impact of implementing a PS/RtI model on districts, buildings, educators, and students. Funding, traini ng, technical assistance, and follow-up support are being provided to demons tration districts and pilot schools for a period of 3 years to facilitate implementati on of the model. Initi ally, the Project is focusing on elementary schools. Pilot schools are able to choose to implement PS/RtI practices and procedures for reading, ma th, and/or behavior. Matched comparison schools are being used as a re ferent against which the impa ct of the Project is being evaluated. The comparison schools have been asked to delay school-wide implementation of PS/RtI practices until the conc lusion of the 3-year project. However, federal and state legislation and regulations recently enacted require that all schools begin implementing practices associated with a PS/RtI model when considering eligibility for students suspected of having a disability (e.g., Flor ida Administrative Code, 2009; IDEIA, 2004). Importantly, Project staff are expected to provide no profession al development or technical support to comparis on schools attempting implementation of a PS/RtI model. Implementation of the PS/RtI model across the demonstration districts and pilot schools is overseen by the Project’s Leadership Team. The Leadership Team is composed

Members of this team are responsible for Project planning, administrative duties, and providing training, technical assistance, and support to demonstration sites to facilitate implementation and evaluation of PS/RtI practices. District Leadership Teams, SBLTs, and district-based PS/RtI Coaches are the primary focus of professional development provided by the three Regional Coordinators and Project staff in the identified pilot schools. The Project Evaluators provide ongoing assistance to the aforementioned demonstration site personnel to facilitate data collection for the Project's evaluation model (see Appendices B and C for the Project's Implementation Plan and Evaluation Model Summary Rubric respectively).

In addition to the professional development and support received from Project staff, each demonstration district is receiving funding for one full-time PS/RtI Coach for every three pilot schools. The PS/RtI Coaches are employees of the participating school districts, but are supported by funding provided by the Project (i.e., $50,000 per coach). The coaches have received training and will continue to receive training by Project staff on PS/RtI practices and strategies for facilitating implementation of the model in schools. Each coach is responsible for data collection and for providing supplemental training, technical assistance, and follow-up support to the District Leadership Teams and SBLTs at the demonstration sites. Coaches also may provide training on PS/RtI practices and procedures to school staff in each of the buildings for which they are responsible. Coaches work directly with the Project's Regional Coordinators and Evaluators to facilitate the implementation and evaluation of PS/RtI practices.

Measures

System-wide applications of the PS/RtI model have only recently been attempted in schools. As such, empirically validated measures of the PS/RtI process are not available in the literature. Therefore, Project staff identified existing district and state initiatives from the available research and scholarly presentations in order to collect and examine existing instruments. The instruments collected from other initiatives were used, in part, as the basis for creating instruments for the Florida PS/RtI Project.

In addition to collecting instruments from other state initiatives, Project staff examined the literature on facilitating systems change and implementing the PS/RtI model to determine what variables to assess. Curtis et al. (2008) discussed several key principles for facilitating systems change in schools. Key stakeholder consensus regarding the change process, the use of needs assessments to identify strengths and weaknesses, the use of a structured planning and problem-solving process, and evaluating progress toward identified goals were identified by the authors as key components of facilitating systems change. Implementation integrity was identified by other authors as a critical component to consider for PS/RtI implementation (Noell & Gansle, 2006). Based on this review and items found on other instruments, Project staff began creating a number of instruments. The instruments described below are those that were administered and collected for use during the first year of the Project.

To address consensus issues (e.g., beliefs, perceived needs), two surveys were developed that examine (1) what participants believe about student learning and service delivery and (2) educators' perceived skills with PS/RtI practices. Because these measures examined educators' beliefs and perceived skills associated with the model, each of the measures was reviewed by an Educator Expert Validation Panel (EEVP) composed of educators from a neighboring school district with exposure to PS/RtI practices.

Project staff discussed categories of educators who would be likely to be involved in implementation of the PS/RtI model and attempted to create a representative sample for the panel. After identifying the number and types of educators that would comprise the panel, a district-level contact provided the names and contact information for individuals who fit the descriptions provided.

Validation panel forms for the two surveys were sent to five general education teachers, two special education teachers, three school administrators, two school psychologists, two guidance counselors, two social workers, one reading specialist, one behavior specialist, three district administrators, and three program supervisors for a total of 24 sets of surveys disseminated. Panel members were charged with providing feedback on the content and clarity of each item on the survey as well as providing suggestions for adding or subtracting items (see Appendix D for blank copies of the validation forms filled out by panel members). For returning completed validation panel forms for all the surveys mailed, panel members were paid a $100 stipend by the Project. One general education teacher, two special education teachers, one school administrator, two school psychologists, two guidance counselors, two social workers, three district administrators, and one program supervisor returned completed validation forms (for a total of 14 validation forms). Following completion of the validation panel process, Project staff reviewed the feedback from the EEVP members and made revisions to the surveys.

Revisions to the surveys based on EEVP feedback were made using a structured process. Descriptive statistics were run on each survey to determine the proportion of respondents who agreed that the content of a given item was relevant and that the item was written clearly (i.e., selected that the item was good; see Appendix D).

Project staff considered 80% agreement (i.e., 80% of panel members selected good when reviewing a given item) the criterion for retaining an item as it was written. When agreement from the panel members was below 80%, Project staff reviewed and discussed feedback from the respondents who disagreed with the item as it was currently written (i.e., selected one of the four responses that indicated that some change was needed in terms of how the item was written; see Appendix D). Discussions of the feedback from panel members for the reviewed items occurred until Project staff reached consensus regarding how to proceed with revising the item. Criteria used to determine whether suggestions should be incorporated into revisions included the extent to which recommended changes would improve the clarity of the item, change the intended meaning of the item, allow educators from other school districts to understand the item (i.e., terms suggested needed to be common to most school districts), and, when feedback was provided about grammar, whether that feedback was accurate. Following any changes that were made, the suggested changes provided by EEVP members were compared to the revised item to determine if the disagreements had been resolved. Any members whose disagreements had been resolved were added to the members who initially agreed in order to calculate the percentage of agreement with an item following revisions.
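
The item-retention rule described above amounts to a simple per-item agreement calculation. The sketch below illustrates that logic; the data structure and example ratings are hypothetical, while the 80% threshold mirrors the criterion described in the text.

```python
# Sketch of the per-item agreement screen used to decide whether an item is retained.
# Each inner list holds one panel member's judgment for every item:
# True = "good" (content relevant and clearly written), False = some change requested.
panel_ratings = [
    [True, True, False, True],   # member 1 (hypothetical)
    [True, False, False, True],  # member 2
    [True, True, True, True],    # member 3
]

RETAIN_CRITERION = 0.80  # 80% of members must rate the item as "good"

n_members = len(panel_ratings)
n_items = len(panel_ratings[0])

for item in range(n_items):
    agreement = sum(member[item] for member in panel_ratings) / n_members
    status = "retain as written" if agreement >= RETAIN_CRITERION else "review feedback"
    print(f"Item {item + 1}: {agreement:.0%} agreement -> {status}")
```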

Feedback from the EEVP on the Beliefs Survey suggested that some revisions to items were necessary. Prior to any revisions, 80% or more of EEVP members agreed with 55% (i.e., 11 out of 20) of the proposed items as they were currently written. The percent of EEVP members who agreed with the other 9 items as they were written typically approximated 80% but did not meet the criterion. For each of these 9 items, the suggestions for revisions provided by the respondents who disagreed with the item on the Beliefs Survey – Item Content and Clarification Rating Form (see Appendix D) were examined and discussed in terms of the criteria outlined above. Using the criteria, Project staff revised five of the items to reflect feedback provided by EEVP members. Following these revisions and a determination of whether disagreements had been resolved, 80% of EEVP members agreed with three more items, resulting in 70% agreement with items (i.e., 14 out of 20 items) as they were written. The other two item revisions resulted in 77% and 79% member agreement, thus approximating the 80% criterion. Four items that did not meet the initial 80% criterion were not revised due to disagreements with the EEVP members' rationale for requesting changes. Feedback for two of these items indicated that revisions were necessary because the item was grammatically incorrect. Project staff decided not to make revisions to these items after reviewing the initial versions because the items met common grammatical standards (e.g., some EEVP members indicated that "data were" did not meet subject-verb agreement criteria although "data were" is technically accurate in terms of subject-verb agreement). The other two items were not revised because Project staff agreed that the changes requested would have introduced terminology not commonly used by the majority of school districts in the State of Florida.

Feedback from EEVP members for the Perceptions of RtI Skills Survey suggested that major revisions to the survey did not need to occur. A minimum of 80% of members agreed with the item as it was initially written for 100% of the items. Although the criterion for keeping an item as written was met for all items, Project staff reviewed any feedback provided by respondents to determine if the suggestions would improve the clarity of the items. Minor wording changes were made to clarify items or make the wording more succinct, but no substantive changes occurred from this discussion.

Two instruments were developed to provide data on the ongoing needs of pilot schools and the extent to which PS/RtI procedures were being implemented with integrity during Year 1 of the Project. As was previously mentioned, Project staff reviewed the literature on PS/RtI model implementation integrity to help generate items for the instruments. Attempts were made to set up a PS/RtI Expert Validation Panel to review the Project's integrity instruments and provide feedback on their content validity. National experts who have written and presented about implementing PS/RtI practices were contacted and six agreed to participate. Although the six experts were sent the instruments and forms on which to provide feedback, no validation panel forms were returned. Thus, content validity for the implementation integrity measures used in this study was derived from the literature base (e.g., Batsche, Elliott, Graden, et al., 2005; Bergan & Kratochwill, 1990). What follows is a description of the measures developed by Project staff that were used as part of this study.

Beliefs Survey. The Beliefs Survey contained items that assess educator beliefs about student learning and service delivery. More specifically, the measure was developed by Project staff to assess educators' service delivery philosophy and their beliefs regarding assessment practices, core instruction, intervention, and special education eligibility determination. To determine educator beliefs, the following 5-point Likert-type scale was used (see Appendix E for a copy of the Beliefs Survey):

1 = Strongly Disagree
2 = Disagree
3 = Neutral
4 = Agree
5 = Strongly Agree

Content validity was examined through the EEVP discussed above. Reliability was examined by analyzing the internal consistency of items on the survey at two time points. Surveys administered to pilot and comparison school educators in the Fall of 2007 and Spring of 2008 were analyzed separately to derive Cronbach alpha estimates. Internal consistency analyses resulted in Cronbach alpha coefficients of .76 and .78 for the Fall and Spring respectively.
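
The internal consistency estimates reported for this and the following measures are Cronbach alpha coefficients. A minimal sketch of that calculation is shown below; the item-response matrix is hypothetical and the computation assumes complete responses on a small set of Likert-type items.

```python
# Sketch of Cronbach's alpha:
# alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score)
def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / (len(values) - 1)

def cronbach_alpha(responses):
    """responses: list of respondents, each a list of item scores (e.g., 1-5 ratings)."""
    k = len(responses[0])                          # number of items
    items = list(zip(*responses))                  # transpose to item-wise lists
    item_variances = sum(variance(list(item)) for item in items)
    total_scores = [sum(person) for person in responses]
    return (k / (k - 1)) * (1 - item_variances / variance(total_scores))

# Hypothetical survey responses (rows = educators, columns = items)
responses = [
    [4, 5, 4, 3],
    [3, 4, 4, 4],
    [5, 5, 4, 4],
    [2, 3, 3, 2],
    [4, 4, 5, 4],
]
print(round(cronbach_alpha(responses), 2))
```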

Perceptions of RtI Skills Survey. The Perceptions of RtI Skills Survey contained items that assessed educator perceptions of the extent to which they possess skills necessary in a PS/RtI model. Project staff developed the measure to assess educators' perceived skills in data-based decision-making, tiered service delivery, the problem-solving process, data collection procedures, technology use, and special education eligibility determination. Each of the items within these domains measured educators' perceptions of their skills using the following 5-point Likert-type scale (see Appendix E for a copy of the Perceptions of RtI Skills Survey):

1 = I do not have this skill at all (NS)
2 = I have minimal skills in this area; need substantial support to use it (MnS)
3 = I have this skill, but still need some support to use it (SS)
4 = I can use this skill with little support (HS)
5 = I am highly skilled in this area and could teach others this skill (VHS)

Content validity was examined through the EEVP process discussed above. Reliability was examined by analyzing the internal consistency of items on the survey at two time points. Surveys administered to pilot and comparison school educators in the Fall of 2007 and Spring of 2008 were analyzed separately to derive Cronbach alpha estimates. Separate Cronbach alphas were derived for items assessing skills related to academic issues and items assessing skills related to behavior issues. Internal consistency analyses resulted in Cronbach alpha coefficients of .98 for both the Fall of 2007 and Spring of 2008 for items assessing skills related to academic issues. Cronbach alphas of .97 were derived for both the Fall of 2007 and Spring of 2008 for items assessing skills related to behavior issues.

Self-Assessment of Problem Solving Implementation. The Self-Assessment of Problem Solving Implementation (SAPSI) was a needs assessment and progress monitoring tool designed to inform implementation of a PS/RtI model. The SAPSI contained items that require educators to report the extent to which their school had reached consensus regarding implementing a PS/RtI model, had the infrastructure in place to implement the model, and had begun actual implementation of PS/RtI practices. The following 4-point Likert-type scale was used to complete each item (see Appendix E for a copy of this measure):

N (0) = Not Started
I (1) = In Progress
A (2) = Achieved
M (3) = Maintaining

Content validity was examined by a comparison of the measure to a pre-existing needs assessment used as part of a state PS/RtI initiative in Illinois. The SAPSI used as part of Florida's PS/RtI Project was adapted from the version used in Illinois' statewide project (see Appendix E for a copy of the Illinois version of the SAPSI). Reliability was examined by analyzing the internal consistency of items on the survey at two time points. Surveys administered to pilot and comparison school educators in the Fall of 2007 and Spring of 2008 were analyzed separately to derive Cronbach alpha estimates. Internal consistency analyses resulted in Cronbach alpha coefficients of .96 and .94 for the Fall and Spring respectively.

PS/RtI Direct Skill Assessments. Analogue assessments of critical PS/RtI skills were used to assess participants' skill development. Project staff created a series of case studies that target critical PS/RtI skills within the domains of Problem Identification, Problem Analysis, Intervention Development and Implementation, and Program Evaluation/RtI. The skills assessed on each case study aligned with the content of each primary training session. Some assessments were individually administered to participants while some were completed in groups by SBLTs. Because the group-administered skill assessments were added during the middle of Year 1 and were not administered at all trainings, only the individually administered skill assessments were examined during this study. Participant performance on each case study was scored using a standard rubric that utilized a Likert-type scale for each item (see Appendix E for an example of an individually administered skill assessment and the standard rubric). The content and range of the scales varied across skill assessments as a function of the skill being assessed. Content validity was examined through a review of the literature on steps of the PS/RtI process (e.g., Batsche, Elliott, Graden, et al., 2005; Bergan & Kratochwill, 1990).

Reliability was examined by analyzing the internal consistency of items on each skill assessment administered. Each skill assessment administered to SBLT members was analyzed separately to derive Cronbach alpha estimates. Internal consistency analyses resulted in Cronbach alpha coefficients ranging from .39 to .67.

Tier I and II Critical Components Checklist. The Tier I and II Critical Components Checklist contained items that assessed the extent to which critical PS/RtI steps were present when educators evaluated core and/or supplemental instruction. Project PS/RtI Coaches examined permanent products from meetings targeting Tier I and II instruction and assessed the degree to which critical components were present using a standard rubric. Each item was assessed using the following 3-point Likert-type scale (see Appendix E for a copy of the instrument and the standard scoring rubric):

0 = Absent
1 = Partially Present
2 = Present

The standard rubric included specific criteria for scoring each item using the scale provided. Content validity was examined by comparing the items on the checklists to the steps of the PS/RtI process discussed in the literature (e.g., Batsche, Elliott, Graden, et al., 2005; Bergan & Kratochwill, 1990). Reliability was examined by analyzing the internal consistency of items on the checklists at three time points. Checklists completed by PS/RtI Coaches for the Fall of 2007, Winter of 2008, and Spring of 2008 were analyzed separately to derive Cronbach alpha estimates. Internal consistency analyses resulted in Cronbach alpha coefficients of .90, .91, and .90 for the Fall, Winter, and Spring respectively.

Procedures

Personnel Orientation and Training. During the summer of 2007, Project staff held three regional Administrators' Orientation meetings for the demonstration district and pilot school administrators. In each region, members of the District Leadership Teams and the principals at the pilot schools attended one of the regional meetings. Project staff provided an overview of the Project, information intended to be used by principals to begin preparing pilot schools for implementation of the model at the beginning of the school year, and timelines for upcoming meetings and trainings. Participants at the meetings also were provided an opportunity for input into the scheduling of future activities and to ask clarification questions regarding Project requirements.

PS/RtI Coaches hired by the districts participated in a 5-day training facilitated by Project staff in July of 2007. The training consisted of an overview of the Project, legislative and policy issues driving implementation of the PS/RtI model, how to use systems change principles to increase the probability of successful implementation, effective coaching practices, procedures for collecting Project evaluation data, and the steps of the PS/RtI model. The content provided was intended to allow coaches to begin facilitating implementation of the model during Year 1 of the Project. Three of the 15 coaches were unable to attend the 5-day training in July. These coaches attended three and one-half days of training in the middle of August 2007. This training contained the same information as the 5-day session, but the time allotted for activities and questions was shortened because of the small number of coaches participating. More information on the content of the training is available at floridarti.usf.edu.

Baseline Data Collection. Three years of baseline data were collected from pilot and comparison schools. Student (i.e., race/ethnicity, gender, free-reduced lunch status, ELL status, and disability status) and staff (i.e., number of educators by position in full-time equivalents) demographic data were collected for the 2004-05, 2005-06, and 2006-07 school years. Student achievement data on third through fifth graders as measured by the Florida Comprehensive Assessment Test Sunshine State Standards (FCAT SSS) were collected during these same years as well. All data were collected at the person level (e.g., student, educator). Data at the person level allowed for manipulations at the grade, building, and district levels when necessary. These data were collected from the Florida Department of Education's Data Warehouse. All Florida school districts were required to submit the above data throughout the baseline years to the FL DOE electronically. These data were then provided to Project staff for all pilot and comparison schools in remotely submitted data files.
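
Because records were provided at the person level, school- or grade-level summaries had to be produced by aggregation. The sketch below illustrates one way such a roll-up could be performed; the column names and the pandas-based approach are illustrative assumptions rather than the Project's actual data-processing code.

```python
import pandas as pd

# Hypothetical person-level records similar in structure to the Data Warehouse extracts
students = pd.DataFrame({
    "district": ["A", "A", "A", "B", "B"],
    "school":   ["Elem1", "Elem1", "Elem2", "Elem3", "Elem3"],
    "grade":    [3, 4, 5, 3, 4],
    "frl":      [1, 0, 1, 1, 0],           # free/reduced-price lunch status
    "fcat_ss":  [298, 310, 275, 330, 301]  # hypothetical FCAT scale scores
})

# Roll person-level records up to the school level
school_summary = students.groupby(["district", "school"]).agg(
    n_students=("fcat_ss", "size"),
    pct_frl=("frl", "mean"),
    mean_fcat=("fcat_ss", "mean"),
)
print(school_summary)
```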

Three years of baseline data also were collected on the extent to which the pilot and comparison schools implemented PS/RtI practices prior to initiation of the Project. PS/RtI Coaches reviewed records (e.g., meeting notes, data reports and displays) to determine the degree to which permanent products suggested that PS/RtI practices occurred at the Tier I and II levels. The Project Evaluators, district data contacts, and PS/RtI Coaches determined what records existed in the districts and schools. Once viable records were located, the PS/RtI Coaches completed the Tier I and II Critical Components Checklists to determine the degree to which PS/RtI practices occurred across the tiers. Upon completion of the protocols, the coaches mailed the instruments to Project staff to be entered into a Project database.

Demonstration Site Training and Technical Assistance – Year 1. Project staff were responsible for providing primary training to the pilot schools. Specifically, three Regional Coordinators, with assistance from the Project Leader, were responsible for providing PS/RtI training to the SBLTs as well as the PS/RtI Coaches. The primary trainings followed an established training format (i.e., a 2-1-1-1 format, with 2 days of training provided early in the fall, 1 day provided later in the fall, 1 day provided in the winter, and 1 day provided in the spring). Content covered during the primary trainings in Year 1 included an overview of the PS/RtI model, legislative and policy issues driving implementation of the model, facilitating systems change, the four-step problem-solving process, and improving Tier I assessment and instruction. More information on the content of the SBLT trainings is available at floridarti.usf.edu.

PS/RtI Coaches in the demonstration districts provided some additional PS/RtI training. The frequency and content of trainings as well as the target audience varied by school. Trainings that were provided tended to include an overview of the PS/RtI model, and legislative and policy issues driving implementation. Some PS/RtI Coaches also reported providing skill training on the PS/RtI process, assessment practices and procedures, intervention practices and procedures, and using databases to organize and display data for decision-making. These trainings were provided to SBLT members, school staff, or a combination of the two groups. Factors such as the goals and objectives of the individual schools and districts, and needs assessment and outcome data were used to determine the individual targets of training at the building level. From December 2007 to May 2008, PS/RtI Coaches reported participating in the provision of 244 training sessions.

Data on coaching activities from August through November of Year 1 were not available because the remote data collection system used to log coach activities was not functional until December 2007.

Technical assistance to participants was provided at various levels. The Regional Coordinators were responsible for providing technical assistance to the PS/RtI Coaches. The content and focus of these meetings varied according to the particular needs of the coaches and the schools and districts they served. Data on the beliefs and skills of the coaches collected at coaches' trainings and coaching process evaluations were used to determine coaching needs. Needs assessment and outcome data from the coaches' schools were used to help determine the needs of districts and schools. Finally, two Project technical assistance meetings in which all 15 coaches participated to discuss issues and receive additional training and/or technical assistance were facilitated by the three Regional Coordinators, Project Leader, and Project Evaluators. Regional Coordinators and one of the Project Evaluators provided 1-day technical assistance sessions by region of the state in late October through early November 2007. The purpose of this session was to provide coaches additional training on data collection procedures and support addressing implementation issues in the pilot schools. Project staff also provided a 2-day session in March of 2008 for all coaches to problem solve issues occurring in the pilot schools and receive additional training on Project data collection.

PS/RtI Coaches provided technical assistance to the SBLTs and school staff. Technical assistance at each of the schools was driven by a number of variables. Coaches were encouraged to use data from a variety of needs (i.e., SAPSI), student (e.g., FCAT), and systemic (e.g., ODRs) assessments to determine which skills educators might need additional support to master.

Discussions that occurred during SBLT meetings, Problem-Solving Team meetings, consultations with educators, and informal discussions with school staff also were likely sources of information on which skills required technical assistance from coaches. From December 2007 to May 2008, coaches reported 933 technical assistance sessions with demonstration site personnel. Data on coach technical assistance activities from August through November of Year 1 were not available because the remote log system used for coaches to record their activities was not functional until December 2007.

Year 1 Data Collection. Data to address the research questions for this program evaluation study were collected by multiple individuals from multiple sources. The individual responsible for collecting a given data element, the source from which it was derived, and the frequency with which it was collected varied. Instruments designed to measure the impact of the trainings on participants' beliefs and perceptions of skills (i.e., the Beliefs Survey and Perception of RtI Skills Survey) were administered at a number of venues (e.g., SBLT trainings, staff trainings, staff meetings) on scantron forms. Each instrument was administered at the beginning and end of the school year to provide longitudinal data on the impact of the trainings. The Regional Coordinators and PS/RtI Coaches were trained to provide directions to respondents and to answer questions that might arise during administration. Trainings occurred via conference calls by region of the state for the Regional Coordinators and PS/RtI Coaches. These conference calls ranged from 30 minutes to 1 hour in duration. In addition, PS/RtI Coaches received guidance on preferred administration venues (i.e., staff meetings and grade-level meetings were preferred administration venues that should have been used prior to putting surveys in mailboxes).

Graduate Assistants trained by Project staff were responsible for uploading each completed survey via scantron software into a database created by the Project. Because the Project used these data to inform formative decision-making, Graduate Assistants performed inter-rater agreement checks on a regular basis. Ten percent of scanned surveys, selected randomly, were checked by a Graduate Assistant who did not scan the set of surveys being rated. Inter-rater agreement estimates were calculated each time data entry accuracy was examined. When inter-rater agreement estimates were below 90%, the Graduate Assistants rechecked all the data entered via scanning and corrected entry mistakes. Throughout Year 1, only one inter-rater agreement estimate below 90% was derived (i.e., 85% agreement on the items entered from 85 of the over 1700 Beliefs Surveys entered). Graduate Assistants reviewed the data entered for this set of surveys and corrected all discrepancies. The remainder of the estimates derived exceeded .90, with the majority of estimates exceeding .98.

Direct skill assessments were administered by Regional Coordinators at the SBLT trainings only during Year 1. Regional Coordinators were expected to administer the measures and answer clarification questions that arose during administration but not to provide any technical assistance to respondents. Scoring and data entry for the instrument were completed by Graduate Assistants. Graduate Assistants were trained to score each instrument using a standard rubric and enter scores into a Project database. Trainings provided by one of the Project Evaluators for each set of skill assessments administered (i.e., skill assessments administered at a given day of training) approximated 2 hours in duration and followed a similar format. First, the Project Evaluator reviewed the content and format of the skill assessment(s) and the scoring rubric(s).

Next, the Project Evaluator modeled scoring of the items on a completed skill assessment(s) using the scoring rubric. Following each item, Graduate Assistants were provided the opportunity to ask questions or get clarification on how to score the item. After scoring of the skill assessment was modeled, the Project Evaluator and Graduate Assistants scored each item on a different completed protocol together while discussing any questions or clarifications needed. Finally, Graduate Assistants scored a third completed protocol independently and calculated inter-rater agreement estimates. A criterion of 80% agreement was necessary before Graduate Assistants were allowed to begin scoring the skill assessments. Any discrepancies in scoring noted following inter-rater agreement procedures during the training were discussed until consensus was reached on how to score the item on future protocols. All inter-rater agreement estimates calculated at trainings throughout the year equaled or exceeded .80.

Inter-rater agreement estimates for item scoring were conducted on approximately 15% of the skill assessments completed by SBLT members during Year 1. Because Project staff used the data from skill assessments to formatively inform decision-making, Graduate Assistants conducted inter-rater agreement procedures on an ongoing basis. For each skill assessment used, 15% of the protocols (randomly selected) were independently scored by two Graduate Assistants using the standard rubric. The proportion of agreements across items on the skill assessments was calculated to determine inter-rater agreement estimates. The target level of agreement was .80 for scoring of the instruments. Throughout Year 1, only one inter-rater agreement estimate was below .80 (i.e., an estimate of .72 proportion of agreements for items on 21 of the approximately 270 protocols scored for the Day 5 skill assessment administered).

Graduate Assistants discussed the discrepancies in scoring that occurred and reported achieving consensus regarding scoring those items on future assessments. Changes were not made to the data entered into the Project database because consensus was reached that the primary scorer's decisions were accurate given the scoring rubric criteria (i.e., the primary scorer's protocol was used to enter scores into the database). All other estimates exceeded .80 on the Day 5 skill assessment as well as the other skill assessments administered. The majority of estimates across the year exceeded .90.

Graduate Assistants also checked the data entered from the skill assessments for data entry accuracy. The proportion of agreements was used to estimate inter-rater agreement for data entry. The target level for inter-rater agreement was 90% for data entry. When inter-rater agreement estimates were below the 90% criterion for entry, all scores were rechecked and any scores entered incorrectly were changed. Only one estimate was below .90 (i.e., a .85 inter-rater agreement estimate for items on 41 of the 280 protocols entered for one of the Day 4 skill assessments). All data for the applicable skill assessments were rechecked and discrepancies corrected. All other estimates exceeded .90 across the skill assessments used, with the majority of estimates approximating 100% agreement. See Appendix F for a summary of each instrument that was administered to measure the impact of trainings, who was responsible for data collection and entry, and approximate timelines for administration.
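
The inter-rater agreement index referenced repeatedly above is the simple proportion of item-level agreements between two independent scorers. A minimal sketch of that calculation follows; the protocol scores are hypothetical, and the .80 threshold mirrors the scoring criterion described in the text.

```python
# Sketch of the proportion-of-agreements index used for scoring and data-entry checks.
def proportion_agreement(rater_a, rater_b):
    """Proportion of items on which two independent scorers assigned the same value."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both scorers must rate the same set of items.")
    agreements = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return agreements / len(rater_a)

# Hypothetical item scores from two Graduate Assistants for one protocol
primary_scorer = [2, 1, 2, 0, 2, 1, 2]
second_scorer  = [2, 1, 1, 0, 2, 1, 2]

estimate = proportion_agreement(primary_scorer, second_scorer)
print(f"Inter-rater agreement: {estimate:.2f}")
if estimate < 0.80:
    print("Below the .80 criterion: discuss discrepancies and rescore.")
```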

The FL DOE was responsible for facilitating the collection of student demographic and achievement data, and staff data. Protocols explaining the type of data and categories requested by the Project were provided to a contact at the FL DOE. The FL DOE contact pulled the data from their centralized database and provided the data to the Project at the individual student and educator levels. These data were collected from school districts through a standardized electronic survey system. Data files provided for baseline years had been reviewed and discrepancies addressed through the standardized electronic survey process. The data file provided to the Project for the 2007-08 school year was the preliminary file used by the FL DOE before the data could be reviewed and discrepancies addressed through the aforementioned standardized process. The final file for the 2007-08 school year will not be available until August 2009, necessitating the use of the temporary file in the analyses used in this study. See Appendix F for additional information on the collection and entry of school-level student and staff demographic data.

PS/RtI Coaches were responsible for collecting data derived from needs assessment and implementation integrity measures (i.e., the SAPSI and Tiers I and II Critical Components Checklist). Trainings on SAPSI administration procedures occurred regionally through approximately 90-minute conference calls. One of the Project Evaluators reviewed administration procedures and what each item on the SAPSI assessed. PS/RtI Coaches asked questions and requested clarification on items at each training as well. The SAPSI was completed by PS/RtI Coaches in conjunction with the SBLTs at the pilot schools at the beginning and end of the school year. Following completion of the SAPSI, PS/RtI Coaches mailed one completed protocol to Project staff and Graduate Assistants entered the data into a Project database. The criterion for inter-rater agreement for data entry was .90 for the SAPSI. Graduate Assistants conducted ongoing inter-rater agreement checks on sets of surveys entered.

All estimates calculated during Year 1 exceeded .90, with the majority indicating 100% agreement.

The Tiers I and II Critical Components Checklist was completed by the PS/RtI Coaches throughout Year 1 of the Project. Each checklist was completed three times per year for each content area and grade level targeted by the pilot school to provide longitudinal data on PS/RtI implementation integrity. For each measure, coaches provided a score on each item using the standard scoring rubric. Training on the Tiers I & II Critical Components Checklists was provided by one of the Project Evaluators across two sessions. The Project Evaluator provided training on administration, scoring, and inter-rater agreement procedures for the instrument. In addition, PS/RtI Coaches were provided with opportunities to practice completing the instrument. PS/RtI Coaches examined two examples of permanent products (e.g., data review meeting notes, data printouts and graphs) and completed the checklists during the first session. Following the completion of the checklist for each example, the Project Evaluator provided feedback to the coaches on their responses. Finally, the Project Evaluator discussed the inter-rater agreement procedures for each instrument, provided an opportunity to practice calculating inter-rater agreement estimates using the protocols they completed independently, and addressed questions asked by the coaches.

During the second training session, PS/RtI Coaches were asked to bring documentation from their schools. Tiers I & II Critical Components Checklist procedures were reviewed and PS/RtI Coaches then scored two sets of examples independently in dyads. Inter-rater agreement estimates were calculated for both sets of permanent products. Inter-rater agreement estimates for the first set of permanent products for the 8 dyads ranged from .44 to 1.0, with five of the eight estimates exceeding .80.

PS/RtI Coaches discussed differences in scoring with their dyad partner first, followed by a group discussion of items on which differences occurred. The goal of these discussions was to achieve consensus regarding how to score items on which discrepancies occurred during subsequent completion of the checklists. Inter-rater agreement estimates calculated for the second set of permanent products from the coaches' districts ranged from .75 to 1.0, with all but one of the estimates exceeding .80.

On-site technical assistance provided by the Project Evaluator followed the two training sessions. The Project Evaluator traveled to each PS/RtI Coach's district to provide the coaches with additional practice and feedback on completing the checklists with actual permanent products from their schools. Each coach received approximately 2-4 hours of on-site technical assistance on completing the checklists during Year 1. In total, approximately 10-15 hours of training and technical assistance was provided to PS/RtI Coaches to facilitate accurate completion of the Tier I and II Critical Components Checklists.

Inter-rater agreement estimates for the scoring of items were calculated for randomly selected schools (i.e., one pilot and one comparison school per coach) during the second data collection window of each baseline and implementation year. To complete inter-rater agreement estimates, the PS/RtI Coach contacted another PS/RtI Coach or his/her Regional Coordinator to complete the checklists using the same permanent products. The target level for inter-rater agreement estimates was .80 for all checklists. When this criterion was not met, the two individuals completing the assessments were asked to discuss the items for which differences occurred and reach consensus regarding how to score the items on future checklists.

At the time analyses were conducted, 22 of the 29 randomly selected schools had inter-rater agreement forms completed. The overall level of inter-rater agreement across the four years for these 22 schools exceeded .80.

Project Graduate Assistants calculated inter-rater agreement estimates for data entry. Graduate Assistants randomly selected 20% of the protocols and rechecked the data entered for those protocols. The target level for data entry was 90% agreement. Inter-rater agreement checks were conducted as data were entered. All estimates exceeded .95, with the majority of estimates approximating 100% agreement.

Data Analysis

Descriptive and inferential analyses were conducted to address each research question. Research question one examined the relationship between PS/RtI training and technical assistance and the beliefs and perceived skills of educators. Research question two investigated the actual skills demonstrated by educators. Research question three examined the relationship between PS/RtI training and technical assistance and implementation integrity at the school level. For each question, means and standard deviations were calculated for continuous variables to facilitate data interpretation. Frequency data were used to provide descriptive information on all categorical variables.

Multi-level modeling was the inferential analysis used to address each research question. Multi-level modeling allows researchers to analyze nested data by examining the relationship between variables at multiple levels and the dependent variable(s) of interest. Models are built hierarchically, with variables entered at higher levels used to indirectly predict outcomes at the lower levels of the model. The levels examined can range from multiple observations within individuals (i.e., time) to macro variables (e.g., societal/political variables).

Variables entered into regression equations across multiple levels improve the capability of researchers to consider context variables that impact real-world outcomes. In addition, multi-level models provide researchers with the opportunity to examine fixed or random effects for intercepts and slopes, whereas many traditional regression models force effects that may vary across units to be fixed. The number of levels, the predictors entered across the levels, and decisions regarding whether to allow intercepts and slopes to vary across units are typically based on theory and the availability of data (Luke, 2004).

Research question one was examined using a separate three-level model for each dependent measure. Dependent measures used to address educators' beliefs and perceived skills were the Beliefs Survey and the Perceptions of RtI Skills Survey. Three separate models were conducted to predict the educators' (1) beliefs, and their perceptions of their (2) Response to Intervention – Academic (RTI-A) and (3) Response to Intervention – Behavior (RTI-B) skills. For each model, the average item score (i.e., the values of each educator's responses were added together and divided by the total number of items) was entered for the surveys. Time (i.e., beginning versus end of Year 1) was the unit of analysis for Level 1 of the multilevel models. In other words, for each survey administered across the year, the educators' average item scores were entered into the regression model. Thus, a given administration of a measure was used to predict an individual's average item score. No additional predictors were entered at Level 1.
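
To make the structure of these analyses concrete, a general form of such a three-level model is sketched below in notation adapted from Raudenbush and Bryk (2002). The symbols and the specific predictors shown (e.g., SBLT membership at Level 2, pilot status at Level 3) are illustrative only and preview the predictors elaborated in the following paragraphs; they do not reproduce the Project's full statistical models, which appear in Appendix G.

```latex
% Level 1 (time within educator): average item score at time t for educator i in school j
Y_{tij} = \pi_{0ij} + \pi_{1ij}(\text{Time})_{tij} + e_{tij}

% Level 2 (educator within school): educator characteristics predict the Level 1 intercept
\pi_{0ij} = \beta_{00j} + \beta_{01j}(\text{SBLT})_{ij} + \dots + r_{0ij}

% Level 3 (school): school characteristics such as pilot status predict the school intercept
\beta_{00j} = \gamma_{000} + \gamma_{001}(\text{Pilot})_{j} + \dots + u_{00j}
```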

Educator variables were examined at Level 2 of the models. Position (e.g., teacher, administrator), years of experience, highest degree earned, and status as an SBLT member comprised the variables entered for each educator. These data were derived from the demographic information collected from the Beliefs Survey. Each position was dummy coded as a 0 or 1 in the database. Zeros indicated that an educator did not hold that job title, while a 1 indicated that the educator held that position. Years of experience was treated as ordinal data on the survey (i.e., educators were asked to select which range of years their experience was within). Thus, years of experience was treated as ordinal-level data in the models. The first possible range of experience (i.e., less than 1 year) was coded as zero in the model. Each successive range of experience was provided a value of 1 higher than the previous range until all possible responses had been assigned a value. The highest degree earned for educators was treated as ordinal-level data in the models as well. Bachelors, Masters, Specialist, and Doctorate degrees were entered as 0, 1, 2, and 3 respectively. Finally, membership on an SBLT was dummy coded as well. Values of 0 indicated non-membership on an SBLT. Conversely, values of 1 indicated membership on an SBLT.
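
A brief illustration of the Level 2 coding scheme described above is given below. The column names and category labels are hypothetical stand-ins for the survey's demographic fields; pandas is used here purely for convenience.

```python
import pandas as pd

# Hypothetical educator demographic records drawn from the Beliefs Survey
educators = pd.DataFrame({
    "position": ["teacher", "administrator", "school_psychologist", "teacher"],
    "experience_range": ["<1 year", "1-4 years", "5-9 years", "10-14 years"],
    "highest_degree": ["Bachelors", "Masters", "Specialist", "Doctorate"],
    "sblt_member": ["no", "yes", "yes", "no"],
})

# Dummy code position (one 0/1 indicator per job title)
position_dummies = pd.get_dummies(educators["position"], prefix="pos").astype(int)

# Ordinal code experience and degree, starting at zero as described in the text
experience_order = ["<1 year", "1-4 years", "5-9 years", "10-14 years"]
degree_order = ["Bachelors", "Masters", "Specialist", "Doctorate"]
educators["experience_code"] = educators["experience_range"].apply(experience_order.index)
educators["degree_code"] = educators["highest_degree"].apply(degree_order.index)

# Dummy code SBLT membership (1 = member, 0 = non-member)
educators["sblt_code"] = (educators["sblt_member"] == "yes").astype(int)

print(pd.concat([educators, position_dummies], axis=1))
```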

Level 3 of the multilevel models included school variables. School size, staff size, student demographics, school status (i.e., status as a pilot or comparison school), district membership (school affiliation with a particular district), SBLT attendance at the Project trainings, the number and duration (i.e., hours) of coach-provided trainings and technical assistance sessions received by the school, and baseline FCAT achievement levels were entered into the final level. School size, staff size, student demographics (i.e., the proportion of Caucasians, African Americans, Hispanics, Asians, Native Americans, multi-racial students, males, students on free-reduced lunch, students identified as ELLs, and students identified with disabilities were entered into the model as separate variables), the average proportion of days SBLT members attended the 5 days of training, the number and duration of coach-provided training and technical assistance sessions received by the school, and FCAT achievement levels were entered as continuous variables. School status and district membership were entered as dummy coded categorical variables. School status values of 0 indicated that educators worked in comparison schools while values of 1 indicated that educators worked in pilot schools. Each of the eight districts was entered as a separate dummy coded variable. Zeros indicated that a school did not belong to a given district. Conversely, values of 1 indicated that the school resided within the district.

In addition to the main effects examined at Levels 2 and 3, interactions between each of the predictors and time were entered into the model. These potential interaction effects were examined to determine if changes in any of the educator or school variables across time significantly predicted responses on the dependent measures. Decision rules regarding allowing intercepts and slopes to vary are described below in the Results section. Appendix G contains the full statistical models (i.e., the models for each dependent measure when all variables are entered at Levels 1, 2, and 3) that were examined using Statistical Analysis Software – Version 9.2 (SAS v. 9.2).

Research question two addressed the relationship between PS/RtI training and technical assistance and SBLT educators' demonstrated skills. A three-level model was used to address this research question. The individually administered skill assessments completed by SBLT members across trainings were used as the dependent variable for this analysis. The percent of points possible for each respondent was entered for the skill assessment scores. Time, educator variables, and school variables comprised Levels 1, 2, and 3 respectively.

Application of skills over time was examined at Level 1. Position, years of experience, and highest degree earned were entered into the multilevel model at Level 2 to predict educator skills. School size and demographics, staff size, the proportion of SBLT members present at the trainings, and district membership were entered as Level 3 predictors. All educator and school level variables were entered in the same manner as described above for research question one. Decision rules regarding allowing intercepts and slopes to vary are described below in the Results section. Appendix G contains the full statistical model that was examined using SAS v. 9.2.

Research question three addressed implementation integrity at the school level. To examine implementation integrity at the building level, two separate 2-level models were conducted. The SAPSI and the Tiers I & II Critical Components Checklist were entered as the dependent variables in these models. For each model, implementation integrity across time (i.e., administration of the instrument) was examined at Level 1. Level 2 of the models examined school-level variables as predictors of implementation integrity. School size (i.e., the number of students); staff size; student demographics; the proportion of SBLT members who attended trainings; the number and duration of training and technical assistance sessions provided by PS/RtI Coaches; average FCAT performance from previous years; and district membership were entered as Level 2 predictors for the SAPSI. All school-level variables were entered in the same manner as described for research questions one and two.

For the model that included the Tiers I & II Critical Components Checklist as the dependent measure, status as a pilot or comparison school and baseline implementation level were included as Level 2 predictors in addition to the variables listed above for the SAPSI.

School status was entered as a dummy coded variable consistent with research question one. Baseline implementation level was entered as the average item score received on the Tiers I and II Critical Components Checklist across baseline years and windows. In addition to including these two variables, PS/RtI Coach-provided training and technical assistance sessions were entered differently. Both the number and duration of training and technical assistance sessions were entered by window (i.e., administration) rather than across the year.

Consistent with the previous research questions, interactions between each predictor and time were entered into both models. Decision rules regarding allowing intercepts and slopes to vary are described below in the Results section. Appendix G includes the full statistical models examined using SAS v. 9.2.

Chapter IV: Results

Three types of analyses were conducted to answer the research questions investigated during this study. First, the data used to address each research question were examined to determine the degree to which the assumptions of multilevel modeling procedures were met. Next, descriptive statistics were calculated for all data elements. Finally, multilevel models were used to examine the extent to which PS/RtI outcomes could be predicted by factors within and across the participating schools (e.g., time, educators, schools).

Statistical assumptions of multilevel models examined were the degree to which the data were (1) normally distributed, (2) randomly distributed when data were missing, and (3) nested. Skewness and kurtosis values were calculated and examined for all dependent measures as well as predictors entered into the multilevel models. These statistics were used to investigate the degree to which the data met the normality assumption. Values close to zero indicated relatively normally distributed data while values further away from zero indicated non-normally distributed data. Although the degree to which the data were normally distributed is discussed below for each model examined, multilevel modeling procedures are relatively robust to violations of this assumption (Raudenbush & Bryk, 2002).

Correlations between present and missing data were calculated to examine the assumption of randomly distributed missing data. For all variables included in the analyses, present data received values of 1 while missing data received values of 0.

Correlations were calculated within levels of the school system (i.e., educator variables were included in one set of correlations and school variables were included in a separate set of correlations). Significant correlations within or across data sources indicated related missing data clusters. Conversely, nonsignificant correlations indicated random missing data. Because multilevel modeling procedures are less robust to violations of this assumption (Raudenbush & Bryk, 2002), analyses discussed below that include non-randomly distributed missing data must be interpreted with caution.

The degree to which the data were nested was examined by calculating Intra-Class Correlations (ICCs). ICCs provided an estimate of the extent of shared variance across levels of the model. ICCs were calculated by dividing the amount of shared variance that could be explained by the amount of total explained variance in outcomes. Given the assumption of multilevel models analyses that data are nested (Raudenbush & Bryk, 2002), higher ICCs indicated that multilevel modeling procedures were appropriate to use.
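
The ICC described above can be obtained from the variance components of an unconditional (intercept-only) model: the between-cluster variance divided by the total variance. The sketch below shows that arithmetic for a two-level case; the variance estimates are hypothetical values of the kind an unconditional model might return.

```python
# Sketch of an intraclass correlation (ICC) computed from unconditional model variance components.
# tau_00: variance between clusters (e.g., schools); sigma_sq: residual variance within clusters.
def intraclass_correlation(tau_00, sigma_sq):
    """Share of total outcome variance attributable to the clustering level."""
    return tau_00 / (tau_00 + sigma_sq)

# Hypothetical variance components from an unconditional beliefs model
tau_00 = 0.056    # between-school variance (assumed value)
sigma_sq = 0.058  # within-school (residual) variance (assumed value)
print(round(intraclass_correlation(tau_00, sigma_sq), 2))  # a value near .50 suggests nesting
```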

In addition to examining the aforementioned multi-level model assumptions, the assumption of normality of residual variances also was examined. For each multi-level model, two visual analyses were conducted to investigate the extent to which residual variances were distributed normally. First, a scatterplot of the predicted residuals was analyzed. Second, a stem-and-leaf plot was analyzed to determine the extent to which the residual variances across schools were normally distributed. The results of these analyses for each model are reported in Appendix H.

Prior to conducting multilevel modeling analyses, descriptive statistics were derived for the dependent and predictor variables. Means and standard deviations, and frequency counts, were calculated for continuous and categorical variables respectively. These descriptive analyses were further disaggregated by (1) pilot versus comparison schools and (2) SBLT members versus staff for any dependent measures for which these data were available. Disaggregated data were included for these groups when available because of the differences in the frequency and intensity of PS/RtI training received by pilot schools, particularly SBLT members.

Finally, multilevel models were conducted for each research question. Separate models were examined for research questions that included multiple dependent measures (i.e., one model was examined for each dependent measure used to address the research question). For each dependent measure examined, the model with time as a Level 1 predictor was examined first to determine if the outcome assessed significantly changed throughout the school year. Then, the variables included across other levels in the multilevel models were added to determine what factors predicted outcomes. Both main effects (i.e., intercepts of the predictors) and interaction terms (i.e., slopes of the predictors) were included in the models to determine what factors significantly predicted the outcome examined.

Each model examined required decisions to be made regarding the extent to which intercepts and slopes would be allowed to vary. The researcher hypothesized that intercepts and slopes across the predictors included in all analyses would likely vary across levels (i.e., educators and schools). Therefore, all models were first examined with an unstructured covariance matrix that allowed intercepts and slopes to vary freely.

However, none of the models examined were able to converge with fully unstructured covariance matrices, necessitating a more restricted approach. To facilitate convergence of each model, the following steps were used to determine the extent to which intercepts and slopes would be allowed to vary:

1) First, a Variance Components matrix was used that allowed intercepts and slopes to vary but forced covariances to be zero.

2) If the model would not converge using a Variance Components matrix, intercepts were allowed to vary while slopes remained fixed.

3) If the model still did not converge, both intercepts and slopes remained fixed.

Using this decision tree, all models examined in this study converged. Continuous and categorical predictors were grand mean and zero centered respectively to facilitate interpretation of the estimates produced by the multilevel models that converged. Alpha was set at .05 for all models.
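
Although the Project's models were estimated in SAS, the logic of the fallback steps above can be illustrated with any mixed-model routine. The sketch below fits a random-intercept, fixed-slope model (the second fallback in the decision tree) with Python's statsmodels, using a two-level simplification; the data frame, column names, and values are hypothetical and are not the Project's analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format survey data: one row per educator survey administration
data = pd.DataFrame({
    "avg_beliefs": [3.5, 3.7, 3.4, 3.6, 3.2, 3.3, 3.8, 3.9, 3.5, 3.6, 3.4, 3.4],
    "time":        [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1],   # 0 = beginning, 1 = end of year
    "pilot":       [1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0],   # 1 = pilot school, 0 = comparison
    "school_id":   ["s1", "s1", "s2", "s2", "s3", "s3",
                    "s4", "s4", "s5", "s5", "s6", "s6"],
})

# Random intercept for school; main effects plus a time-by-pilot interaction as fixed effects
model = smf.mixedlm("avg_beliefs ~ time * pilot", data=data, groups=data["school_id"])
result = model.fit()
print(result.summary())
```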

Research Question 1

Research question 1 examined the relationship between PS/RtI training and technical assistance and the beliefs and perceived skills of educators. Surveys assessing the beliefs of educators regarding student learning and the organization of service delivery as well as their perceived skills with PS/RtI practices were administered at the beginning and end of the school year. Both surveys were completed by educators in pilot and comparison schools. The surveys were administered separately to SBLT members receiving direct training from the Project and other instructional staff in the pilot schools to examine differences across these two groups. Using these two surveys, three models were examined to address research question 1. Specifically, the dependent variable for each model was:

1) the overall beliefs of educators regarding student learning and how services should be delivered, as measured by the average item score on the Beliefs Survey;

2) the educators' perception of their skills applying PS/RtI practices to academic issues, as measured by the average response on items that assess academically relevant skills on the Perception of RtI Skills Survey; and

3) the educators' perception of their skills applying PS/RtI practices to behavior issues, as measured by the average response on items that assess behaviorally relevant skills on the Perception of RtI Skills Survey.

Educators' Beliefs About Student Learning and Service Delivery

Assumptions. Assumptions of multilevel modeling procedures were examined before conducting any inferential analyses. The normality assumption was examined for the beliefs data and the Level 2 and 3 predictors to be entered into the model. Skewness and kurtosis values for the average item beliefs score of educators were -.22 and 1.34 respectively, indicating a relatively normal distribution. Skewness values for Level 2 and 3 predictors ranged from -.91 to 5.6, with the majority of estimates less than 2. Kurtosis values for these predictors ranged from -1.62 to 29.35. These two statistics indicated variability in the distribution of the data for Level 2 and 3 predictors; however, it should be noted that the majority of values exceeding 2 were associated with categorical variables (e.g., district membership). Although the variability in the distribution of data for these predictors should be noted, the large sample size in this study suggests that the multilevel model procedures should be robust to this violation (Raudenbush & Bryk, 2002).


multilevel model procedures should be robust to this violation (Raudenbush & Bryk, 2002).

The assumption that missing data were randomly distributed was examined next using the procedures described previously. Significant correlations as high as .99 (p < .01) among items on one administration of the Beliefs Survey were found. Although still significant, lower correlations (approximating -.10, p < .01) among items across administrations of the Beliefs Survey were found. These findings indicate that missing data at the educator level were related, resulting in a violation of the randomly distributed missing data assumption. Given that multilevel models are sensitive to violations of this assumption, findings from the multilevel model procedures discussed below should be interpreted with caution (Raudenbush & Bryk, 2002). All data were present at the school level, indicating that the assumption of random missing data was met for Level 3 variables.

Finally, the assumption that the data were nested was examined by calculating the ICC from the unconditional beliefs model. The ICC estimate derived was .49, indicating a nested data structure. Therefore, the multilevel models assumption of a nested data structure was met, suggesting that multilevel model procedures were appropriate for this model.

Descriptive Data. Educators' average beliefs were derived by calculating the average rating across items on the Beliefs Survey. These average beliefs scores were calculated at the beginning and end of the year to determine what changes occurred in the educators' reported beliefs. Average belief scores also were calculated for educators in pilot versus comparison schools as well as SBLT members versus staff. These educator


pairings were examined to investigate what changes occurred in beliefs for groups with differential exposure to PS/RtI training and technical assistance.

Table 3a includes average beliefs item score data for the aforementioned groups. The beliefs of all educators included in the study increased from the beginning (Mean = 3.57, SD = .34) to the end (Mean = 3.62, SD = .34) of the school year. The beliefs of educators in pilot and comparison schools, as well as educators who were SBLT members and staff, also increased across time. However, the average level of beliefs at the beginning of the year, as well as the amount of change in scores observed at the end of the year, differed by group.
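The group means summarized above and in Table 3a are simple averages of per-respondent average item scores. The following is a minimal sketch of that computation in Python/pandas, using toy data and hypothetical column names (the study's actual software and file layout are not shown here).

```python
import pandas as pd

# Toy stand-in for the survey file: one row per respondent per administration,
# with belief items in columns "item_1"..."item_3" (the real survey has more items).
df = pd.DataFrame({
    "time":         [0, 1, 0, 1, 0, 1],     # 0 = beginning, 1 = end of year
    "pilot_school": [1, 1, 1, 1, 0, 0],
    "sblt_member":  [1, 1, 0, 0, 0, 0],
    "item_1":       [4, 5, 3, 4, 3, 3],
    "item_2":       [4, 4, 4, 4, 3, 4],
    "item_3":       [3, 5, 3, 3, 4, 3],
})
item_cols = [c for c in df.columns if c.startswith("item_")]
df["avg_beliefs"] = df[item_cols].mean(axis=1)   # average item score per respondent

# Group means by administration and by pilot/comparison and SBLT status,
# mirroring the breakdowns reported in Table 3a.
print(df.groupby(["time", "pilot_school"])["avg_beliefs"].agg(["count", "mean", "std"]))
print(df.groupby(["time", "sblt_member"])["avg_beliefs"].agg(["count", "mean", "std"]))
```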


127 127 Table 3a Beliefs Multi-Level Model Data Beliefs Survey Descriptive Data from Beginning and End of Year Administrations for Total Sample, Pilot versus Comparison Schools, and SBLT Members versus All Other Staff Members Level 1 Variables naMean ( SD) Skewness Kurtosis Average Beliefs Item Score 4830 (68) 3.60 (0.34) -0.24 1.13 Beginning of Year 2401 (62) 3.57 (0.34) -0.35 1.19 End of Year 2429 (68) 3.62 (0.34) -0.14 1.03 Average Beliefs Item Score: Pilot versus Comparison Schools Pilot Schools 3127 (40) 3.63 (0.34) -0.15 1.06 Beginning of Year 1603 (40) 3.60 (0.33) -0.21 0.85 End of Year 1524 (40) 3.67 (0.34) -0.12 1.28 Comparison Schools 1703 (28) 3.54 (0.34) -0.42 1.19 Beginning of Year 798 (22) 3.52 (0.35) -0.55 1.57 End of Year 905 (28) 3.55 (0.32) -0.26 0.63 Average Beliefs Item Score: Pilot School SBLT versus Staff b SBLT 544 (40) 3.85 (0.34) -0.16 -0.09 Beginning of Year 283 (40) 3.76 (0.32) -0.18 -0.05 End of Year 261 (40) 3.95 (0.33) -0.28 0.02 Staff 4286 (68) 3.57 (0.32) -0.37 1.38 Beginning of Year 2118 (62) 3.55 (0.33) -0.39 1.38 End of Year 2168 (62) 3.58 (0.31) -0.33 1.34 Note. a Number in parentheses represents the numb er of schools from which educators responded. b Staff includes members from pilot and comparison schools. SBLT = School-Based Leadership Team. Pilot school educators (Mean=3.60, SD =.33) started with a higher level of average beliefs than their comparison school counterparts (Mean=3.52, SD =.35). Although both groups increased, pilot school educators av erage beliefs at the end of the year (Mean=3.67, SD= .34) increased more than educator s in comparison schools (Mean=3.55, SD =.32). A similar pattern emerged for SBLT me mbers versus other instructional staff.


SBLT members indicated higher average beliefs (Mean = 3.76, SD = .32) at the beginning of the year than their staff counterparts (Mean = 3.55, SD = .33). Across the year, SBLT members' average beliefs (Mean = 3.95, SD = .33) increased more than those of educators who were not members of a SBLT receiving training from the Project (Mean = 3.58, SD = .31).

Descriptive data also were examined for the Level 2 and 3 variables to be entered into the model predicting educators' beliefs. Level 2 predictors (i.e., educator level predictors) included position, years of experience, the highest degree earned, and SBLT membership. The number and percent of educators in each of these groups was calculated at the beginning and end of the year due to differences in the individuals completing the surveys. Educators from 62 and 68 of the 73 participating schools completed the survey at the beginning and end of the year respectively. All schools at which surveys were not administered at either or both time points were comparison schools in two of the eight districts. District policies and leadership commitment to Project requirements were the two primary reasons for delays in administering surveys. Table 3b includes data for all educator level predictors at the beginning and end of the school year.


129 129 Table 3b Beliefs Multi-Level Model Data Level 2 Predictor Frequencies by Time Level 2 Predictors Frequencies (%)aSkewness Kurtosis Beginning of Year b End of Year b Position General Education Teacher 1683 (70.10 ) 1714 (70.56) -0.89 -1.21 Special Education Teacher 297 (12.37) 321 (13.22) 2.23 2.97 Administrator 75 (3.12) 66 ( 2.72) 5.60 29.35 Student Support Services 91 (3.79) 97 (3.99) 4.77 20.78 Other 221 (9.20) 215 (8.85) 2.86 6.20 Years of Experience 0.23 -1.19 Less than 1 year 128 (5.37) 105 (4.34) 1-4 years 478 (20.06) 483 (19.98) 5-9 years 507 (21.28) 486 (20.11) 10-14 years 375 (15.74) 364 (15.06) 15-19 years 270 (11.33) 273 (11.29) 20-24 271 (11.37) 313 (12.95) 25 or more years 354 (14.86) 393 (16.26) Highest Degree Earned 1.20 1.58 Bachelor of Arts/Bachelor of Science 1437 (61.81) 1441 ( 60.50) Master of Arts/Master of Science 824 ( 35.44) 857 (35.98) Educational Specialist 49 (2.11) 66 (2.77) Doctor of Philosophy/Doctor of Education 15 (0.65) 18 (0.76) School Based Leadership Team Member Status 2.45 4.01 School Based Leadership Team Member 283 (11.79) 261 (10.75) Non-School Based Leadership Team Member 2118 (88.21) 2168 (89.25) Note. a Percent of educators in the corresponding category is included in parentheses. b Educators from 62 and 68 schools completed the Beliefs Survey at the beginning and end of the year respectively. General education teachers comprised the majority of educators completing the surveys (approximately 70% of respondents at the beginning and end of the school year). Special education teachers, administrato rs, student support service personnel, and


individuals with other positions comprised the remaining 30% of respondents. The years of experience among these educators was relatively normally distributed, with experience ranging from less than 1 year to over 25 years. The majority of respondents' highest degrees were at the bachelor's or master's level; approximately 60% and 35% of respondents had bachelor's and master's degrees respectively. Finally, slightly more than 10% of the educators sampled were members of a SBLT. The remaining 90% of respondents were non-SBLT members at pilot and comparison schools.

Level 3 predictor (i.e., school level variable) descriptive statistics were calculated differentially for continuous versus categorical variables. Table 3c includes the means and standard deviations of continuous school level variables at the observation level (i.e., means and standard deviations take into account the number of educators responding from each school). Table 3d includes frequency data for categorical school level variables at the observation level. Overall, the data indicate variability in the school level variables (e.g., school demographics, staff size, district membership) associated with the educators from the 68 schools who completed Beliefs Surveys.
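As a concrete illustration of what "observation level" means here, the minimal sketch below (hypothetical variable names, not the study's actual code) repeats each school's values once per responding educator before describing them, so schools with more respondents carry more weight in the means and standard deviations.

```python
import pandas as pd

# Hypothetical inputs: one row per educator response and one row per school.
educators = pd.DataFrame({"school": ["A", "A", "A", "B", "B"],
                          "avg_beliefs": [3.6, 3.5, 3.8, 3.4, 3.7]})
schools = pd.DataFrame({"school": ["A", "B"],
                        "school_size": [900, 600],
                        "prop_frl": [0.40, 0.65]})

# Merge school-level variables onto educator-level records, then describe:
# each school is represented once per responding educator.
merged = educators.merge(schools, on="school", how="left")
print(merged[["school_size", "prop_frl"]].agg(["count", "mean", "std"]))
```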


131 131 Table 3c Beliefs Multi-Level Model Data Level 3 Con tinuous Predictors Descriptive Statistics Level 3 Predictors Mean ( SD )ana Skewness b Kurtosis b School Demographics School Size 752.86 (241.14) 4835 0.72 0.42 Staff Size 55.49 (16.89) 4835 0.57 0.14 Proportion White Students 0.56 (0.28) 4835 0.77 -0.55 Proportion Black Students 0.24 (0.26) 4835 1.38 0.81 Proportion Hispanic Students 0.13 (0.10) 4835 1.45 1.46 Proportion Asian Students 0.03 (0.03) 4835 1.74 3.09 Proportion Native Students 0.00 (0.00) 4835 1.87 4.82 Proportion Multiracial Students 0.04 (0.02) 4835 0.31 -0.68 Proportion Male 0.52 (0.03) 4835 -0.91 3.35 Proportion Free-Reduced Lunch 0.51 (0.25) 4835 -0.16 -1.14 Proportion English Language Learners 0.11 (0.13) 4835 1.75 2.60 Proportion Students with Disab ilities 0.16 (0.06) 4835 0.37 0.33 Average % SBLT Members Days Presentc 0.85 (0.11) 1524 0.86 -1.19 Coaching Variables Number Coach Trainingsc 5.85 (5.44) 1524 2.58 6.02 Coach Training Hoursc 20.64 (16.40) 1524 2.33 5.59 Number Coach Technical Assistance Sessionsc23.62 (14.92) 1524 1.90 2.68 Coach Technical Assistance Session Hoursc57.07 (33.32) 1524 1.72 1.96 Average FCAT Score from Baseline Years 315.89 (19.48) 4760 0.20 -0.50 Note. a n represents the number of observations with da ta associated with the corresponding variable. b Skewness and kurtosis values calcula ted from data across time points. c Means, SD s, and n s based on pilot school data entered at Time 2. All Time 1 and comparison values equal 0. FCAT=Florida Comprehensive Assessment Te st; SBLT = School Based Leadership Team.


132 132 Table 3d Beliefs Multi-Level Model Data Level 3 Ca tegorical Predictors Descriptive Data Level 3 Predictors Frequencies (%)aSkewness Kurtosis School Status -0.61 -1.62 Pilot School 3127 (64.67) Comparison School 1708 (35.33) District Membership District A 483 (9.99) 2.67 5.13 District B 745 (15.41) 1.92 1.68 District C 510 (10.55) 2.57 4.60 District D 626 (12.95) 2.21 2.88 District E 827 (17.10) 1.75 1.06 District F 429 (8.87) 2.89 6.38 District G 823 (17.02) 1.76 1.08 District H 392 (8.11) 3.07 7.43 Note. a Percent of observations in corresponding category is included in parentheses. Educator Beliefs Multilevel Model Results A 3-Level multilevel model was examined to determine what factors predicte d educator beliefs rega rding student learning and how resources should be organi zed. The average item score on the Beliefs Survey was entered as the dependent variable in the analysis. Time (i.e., beginning versus end of the year belief scores) was entered as the Level 1 predictor of educator beliefs. Time was zero centered to facilitate interpretation of the results. Level 2 predictors included educator variables. Each e ducators position, years of experience, highest degree earned, and whet her s/he was a member of a SBLT was entered into the model. Each educators posi tion was entered as a series of dummy coded variables. General education teacher, speci al education teacher, administrator (i.e., principal or assistant prin cipal), student support serv ice personnel (i.e., school


psychologist, guidance counselor, or social worker), or other position received a value of 1 when the respondent indicated s/he held that position. All non-selected positions received a value of zero for each educator. Years of experience and highest degree earned were entered as ordinal variables, with higher values assigned to each successive step indicated on the Beliefs Survey. For example, bachelor's, master's, educational specialist, and doctoral degrees were coded 0, 1, 2, and 3 respectively in the data set. Finally, SBLT membership was dummy coded with values of 1 representing membership and values of zero representing non-membership. The interactions between each educator level predictor and time also were entered into the model.

Level 3 predictors included school level variables. School demographics (e.g., size, racial composition by group, poverty levels), staff size, pilot versus comparison school status, district membership, previous student performance, and the amount of coaching received were predictors entered at Level 3. School demographic variables entered into the model were the number of students attending the school and the proportion of students from various demographic groups attending the school. Specifically, the proportion of white, black, Hispanic, Asian/Pacific Islander, Native American, multiracial, male, and English Language Learner students, as well as the proportion of students with disabilities and students eligible for free or reduced lunch, were entered as separate, continuous variables. The number of staff, the average FCAT score of students in the school from the three previous school years (or however many years the school had been open if fewer than 3 years), the number of trainings and technical assistance sessions provided by coaches, and the total number of hours dedicated to trainings and technical assistance sessions provided by coaches also were entered as continuous variables.
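A minimal sketch of the educator-level coding scheme described above, using Python/pandas with toy records and hypothetical column names (the study's actual data files and software are not shown here):

```python
import pandas as pd

# Toy educator records (hypothetical column names and labels).
df = pd.DataFrame({
    "position": ["General Education Teacher", "Administrator",
                 "Special Education Teacher"],
    "highest_degree": ["Bachelors", "Doctorate", "Masters"],
    "sblt_member": ["Yes", "No", "Yes"],
})

# Dummy-code position: one 0/1 indicator per title (non-selected titles get 0).
position_dummies = pd.get_dummies(df["position"], prefix="pos", dtype=int)

# Ordinal-code highest degree: each successive degree gets a higher value.
degree_order = {"Bachelors": 0, "Masters": 1, "Specialist": 2, "Doctorate": 3}
df["degree_ord"] = df["highest_degree"].map(degree_order)

# Dummy-code SBLT membership (1 = member, 0 = non-member).
df["sblt"] = (df["sblt_member"] == "Yes").astype(int)

print(pd.concat([df, position_dummies], axis=1))
```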


Importantly, data on coach trainings and technical assistance sessions represent activities from December through the end of the school year because data were not available for August through November due to technical problems with the data system used to collect those data. Finally, pilot school status and district membership were entered as a series of dummy coded variables. Values of 1 represented membership in a pilot school or a particular district (i.e., District A, District B, District C, District D, District E, District F, District G, or District H). Values of 0 represented non-membership. Interactions between each school level variable and time also were entered into the model. Using the steps discussed above to find a model that would converge, intercepts were allowed to vary while slopes remained fixed.

Prior to running the full 3-Level model, time was entered as a Level 1 predictor to determine if the increases in the reported beliefs of educators noted in the descriptive analyses were statistically significant. Time, when entered into the model without any Level 2 or 3 predictors, was a significant predictor of beliefs (Estimate = 0.06, t = 8.15, p < .01). These findings indicated that educator beliefs increased from the beginning to the end of the school year. When Level 2 and 3 predictors were added into the model predicting educator beliefs, however, time was no longer a significant predictor (Estimate = 0.21, t = 1.64, p = .10) after controlling for the other variables in the model. Although the main effect of time was no longer a significant predictor, significant interaction effects between time and some Level 2 and 3 predictors occurred. The significant interaction effects described below suggest that changes in beliefs across time differed depending on the values of other variables.
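To make the model-building sequence concrete, the sketch below fits an unconditional model (for the ICC), a time-only growth model, and a fuller model with educator- and school-level predictors and their interactions with time, treating educators as nested within schools via random intercepts. It uses Python's statsmodels on simulated data and is an illustration only: the study's actual software, data, and full predictor set are not reproduced, and only random intercepts are modeled (matching the specification that converged in the text).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for school in range(20):
    pilot = int(school < 10)                       # half the schools are "pilot" schools
    u_school = rng.normal(0, 0.05)                 # school-level random intercept
    for educator in range(30):
        sblt = int(pilot and educator < 5)         # a few SBLT members per pilot school
        u_educ = rng.normal(0, 0.20)               # educator-level random intercept
        for time in (0, 1):                        # 0 = beginning, 1 = end of year
            beliefs = (3.5 + 0.05 * time + 0.15 * sblt + 0.15 * sblt * time
                       + u_school + u_educ + rng.normal(0, 0.25))
            rows.append(dict(school=school, educator=educator, time=time,
                             pilot=pilot, sblt=sblt, beliefs=beliefs))
df = pd.DataFrame(rows)

# Random intercepts for schools (groups) and for educators nested within schools
# (a variance component evaluated within each school).
nested = dict(groups="school", re_formula="1",
              vc_formula={"educator": "0 + C(educator)"})

# Unconditional model: used to estimate the ICC (share of variance between clusters).
m_null = smf.mixedlm("beliefs ~ 1", df, **nested).fit()
between = float(m_null.cov_re.iloc[0, 0]) + float(m_null.vcomp[0])
print("ICC =", round(between / (between + m_null.scale), 2))

# Step 1: time as the only (Level 1) predictor.
m_time = smf.mixedlm("beliefs ~ time", df, **nested).fit()
print(m_time.fe_params)

# Step 2: add educator- and school-level predictors plus their interactions with
# time (only two of the study's many predictors are shown here).
m_full = smf.mixedlm("beliefs ~ time * (sblt + pilot)", df, **nested).fit()
print(m_full.summary())
```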


Several Level 2 variables significantly contributed to the model predicting beliefs. Educator level variables that significantly contributed to the model were years of experience (Estimate = -.01, t = -3.76, p < .01), highest degree earned (Estimate = 0.03, t = 2.35, p = .02), membership on a SBLT (Estimate = 0.14, t = 5.66, p < .01), and a position as an administrator (Estimate = 0.30, t = 3.90, p < .01) or special education teacher (Estimate = 0.14, t = 1.99, p = .05). These results indicated that, controlling for other predictors, having more years of experience in education was associated with slightly lower beliefs, whereas having earned a higher degree was associated with slightly higher beliefs. Membership on a SBLT and being an administrator or special education teacher also were predictors of higher levels of beliefs when other predictors were controlled. No other position significantly predicted belief levels. When the interactions between time and each of the Level 2 predictors were examined, only the interactions between being a member of a SBLT and time (Estimate = 0.14, t = 4.65, p < .01) and holding a position as an administrator and time (Estimate = -0.27, t = -2.12, p = .03) were significant. While controlling for other predictors, membership on a SBLT predicted increasing beliefs from the beginning to the end of the year. Thus, the interaction between time and membership on a SBLT contributed to the higher levels of beliefs for SBLT members predicted by the main effect. Conversely, holding a position as an administrator predicted decreasing beliefs from the beginning to the end of the year. Although holding a position as an administrator predicted higher levels of beliefs, the data suggest that administrators' beliefs decreased across the year.

Level 3 variables entered into the model also produced significant predictors of an educator's beliefs. Significant school level predictors of belief levels were working in a


pilot school (Estimate = 0.07, t = 2.79, p < .01), and the interactions between time and the proportion of male students (Estimate = -0.78, t = -1.97, p = .05), the number of technical assistance sessions provided by coaches (Estimate = 0.04, t = 2.12, p = .03), and the hours of technical assistance provided by coaches (Estimate = -0.02, t = -2.65, p = .01). These results indicated that working in a pilot school predicted higher levels of beliefs while controlling for other predictors. When the interactions between school level variables and time were examined, higher proportions of male students and more hours of technical assistance provided to a school predicted decreasing beliefs across the school year while controlling for other predictors. Conversely, higher numbers of technical assistance sessions provided to a school predicted increasing beliefs across the year. No other main or interaction effects significantly contributed to predictions of beliefs. See Table 3e below for data on the degree to which each predictor entered into the 3 level model contributed to educator beliefs.
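Before turning to the full set of estimates in Table 3e, it may help to see how a dummy-coded main effect and its interaction with time combine. Assuming time was coded 0 at the beginning of the year and 1 at the end, and holding all other predictors constant, the SBLT estimates above imply a model-predicted difference between SBLT members and non-members of roughly

\[
\hat{Y}_{\mathrm{SBLT}} - \hat{Y}_{\mathrm{non\text{-}SBLT}} \approx 0.14 + 0.14 \times \mathrm{Time} =
\begin{cases}
0.14, & \mathrm{Time} = 0 \ (\text{beginning of year})\\
0.28, & \mathrm{Time} = 1 \ (\text{end of year}).
\end{cases}
\]

That is, under these coding assumptions, SBLT members were predicted to report average beliefs about 0.14 points higher than otherwise comparable non-members at the beginning of the year and about 0.28 points higher by the end of the year.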


137 137 Table 3e Beliefs Multi-Level Model Data 3 Level Multi-Level Model Predicting Educator Beliefs Predictors Estimate SE t p Level 1 Beliefs Intercept 3.44 0.08 40.85* <.01 Time (Slope) 0.15 0.13 1.13 .25 Level 2 Intercepts General Education Teacher 0.07 0.07 1.11 .27 Special Education Teacher 0.14 0.07 1.99* .05 Administrator 0.30 0.08 3.90* <.01 Student Support Services Personnel 0.08 0.08 1.05 .29 Other Position 0.10 0.07 1.35 0.18 Years of Experience -0.01 0.00 -3.76* <.01 Highest Degree Earned 0.03 0.01 2.35* .02 School-Based Leadership Team Membership 0.14 0.02 5.66* <.01 Slopes General Education Teacher*Time -0.20 0.12 -1.69 .09 Special Education Teacher*Time -0. 22 0.12 -1.82 .07 Administrator*Time -0.27 0.13 -2.12* .03 Student Support Services Personnel*Time -0. 15 0.13 -1.15 .25 Other Position*Time -0.16 0.12 -1.34 .18 Years of Experience*Time 0.01 0.00 1.31 .19 Highest Degree Earned*Time -0.01 0.02 -0.66 .51 School-Based Leadership Team Membership*Time 0.14 0.03 4.65* <.01


138 138 Table 3e continued Beliefs Multi-Level Model Data 3 Level Multi-Level Model Predicting Educator Beliefs Predictors Estimate SE t p Level 3 Intercepts School Size -0.00 0.00 -0.97 .33 Staff Size 0.01 0.02 0.29 .78 Proportion White Students Atte nding School -1.08 0.92 -1.17 .24 Proportion Black Students Atte nding School -1.04 0.96 -1.08 .28 Proportion Hispanic Students A ttending School -1.14 0.97 -1.18 .24 Proportion Asian Students Atte nding School -0.89 1.17 -0.77 .44 Proportion Native American Student s Attending School 0.08 4.62 0.02 .99 Proportion Multiracial Students Attending School 0 . Proportion Male Students Atte nding School 0.06 0.44 0.13 .90 Proportion Students Eligible for Free-Reduced Lunch 0.07 0.12 0.55 .58 Proportion English Language Lear ner Students 0.20 0.15 1.35 .18 Proportion Students with Disabilities Attending School -0. 45 0.34 -1.33 .18 Pilot School Membership 0.07 0.03 2.79* <.01 District A Membership -0.01 0.07 -0.09 .93 District B Membership 0.07 0.11 0.64 .52 District C Membership -0.04 0.07 -0.57 .57 District D Membership -0.06 0.06 -1.07 .28 District E Membership -0.07 0.07 -1.00 .32 District F Membership -0.00 0.06 -0.01 .99 District G Membership 0.08 0.05 1.67 .09 District H Membership 0 . Average FCAT Baseline Years Score 0.01 0.01 0.78 .44


139 139 Table 3e continued Beliefs Multi-Level Model Data 3 Level Multi-Level Model Predicting Educator Beliefs Predictors Estimate SE t p Slopes School Size*Time -0.00 0.00 -0.32 .75 Staff Size*Time 0.03 0.02 1.85 .06 Proportion White Students A ttending School*Time 0.87 0.84 1.04 .30 Proportion Black Students A ttending School*Time 0.94 0.89 1.06 .29 Proportion Hispanic Students Attending School*Time 1.05 0.91 1.16 .25 Proportion Asian Students A ttending School*Time 0.73 1.07 0.68 .49 Proportion Native American Stude nts Attending School*Time 4.32 3.92 1.10 .27 Proportion Multiracial Students Attending School*Time 0 . Proportion Male Students A ttending School*Time -0.78 0.39 -1.97* .05 Proportion Students on Free-Reduced Lunch Attending School*Time 0.13 0.10 1.21 .23 Proportion English Language Learner Students Attending School *Time -0.25 0.14 -1.85 .06 Proportion Students with Disabilities Attending School*Time 0.21 0.29 0.74 .46 Pilot School Membership*Time 0.18 0.11 1.60 .11 District A Membership*Time 0.04 0.06 0.80 .43 District B Membership*Time -0.02 0.09 -0.17 .87 District C Membership*Time 0.07 0.07 0.98 .33 District D Membership*Time 0.13 0.07 1.81 .07 District E Membership*Time 0.05 0.06 0.81 .42 District F Membership*Time -0.02 0.05 -0.32 .75 District G Membership*Time -0.03 0.04 -0.66 .51 District H Membership*Time 0 . Average FCAT Baseline Years Score*Time 0.00 0.01 0.39 .70


140 140 Table 3e continued Beliefs Multi-Level Model Data 3 Level Multi-Level Model Predicting Educator Beliefs Predictors Estimate SE t p Proportion of Days SBLT Members Attended Training*T ime -0.14 0.12 -1.19 .24 Number of Coach Trainings*Time -0.00 0.01 -0.18 .86 Coach Training Hours*Time -0.00 0.00 -0.71 .47 Number of Coach Technical Assistance Sessions*Time 0.04 0.02 2.12* .03 Coach Technical Assistance Session Hours*Time -0.02 0.01 -2.65* .01 Note. p<. 05. SBLT = School-Based Leadership Team. Random effects for intercepts at the educat or and school levels were examined to determine if the average beliefs item score significantly varied. Inte rcepts at the school level significantly varied (Estimate=0.003, SE =0.001, z =2.68, p<.01) indicating that the average item beliefs score differe d across participating schools. Intercepts at the educator level significantly varied (Estimate=0.042, SE =0.003, z =16.04, p<.01) as well, indicating that the average item beliefs score differed across educators nested within participating schools. Thus, significant diffe rences in the reported belief s of educators within and across schools occurred. Differences in the ch anges in educator beliefs across time could not be examined because slopes remained fixed to allow the model to converge. Residual variance also was examined to determine the extent to which unexplained variance in educator beliefs existed after predictors were added to the model. Residual variance was significant in the full 3-Level model (Estimate=.054, SE =0.002, z =25.66, p=<.01) indicating that the multilevel model did not explain all of the variance in educator beliefs. However, the amount of unexplained variance decreased each time predictors were added to account for educator beliefs. The estimate of residual variance


decreased from .058 in the unconditional model to .054 when all Level 1, 2, and 3 predictors were included in the multilevel model. The decrease in residual variance suggests that the addition of variables improved the predictive utility of the model.

Educators' Perceived RtI Academic (RTI-A) Skills

Assumptions. Assumptions of multilevel models were examined using procedures consistent with the examination of the beliefs model. Skewness and kurtosis values for the average item academic skills score of educators were -0.54 and 0.43 respectively, indicating a relatively normal distribution. Skewness values for Level 2 and 3 predictors ranged from -.86 to 6.02, with the majority of estimates less than 2. Kurtosis values for these predictors ranged from -1.75 to 34.29. Consistent with the beliefs model, these two statistics indicated variability in the distribution of the data for Level 2 and 3 predictors; however, the majority of values exceeding 2 were associated with categorical variables (e.g., district membership). Given the large sample size in this study, the multilevel modeling procedures used to examine perceived RTI-A skills should be robust to violations of the normality assumption (Raudenbush & Bryk, 2002).
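The normality screening used throughout this chapter (inspecting skewness and kurtosis for each outcome and predictor) can be reproduced with standard library calls. The sketch below uses toy values; note that pandas reports excess kurtosis and that bias conventions differ across libraries, so the specific convention used in the study is an assumption here.

```python
import pandas as pd
from scipy import stats

# Toy stand-in for one outcome or predictor column (hypothetical values).
x = pd.Series([3.1, 3.4, 3.6, 3.2, 3.9, 3.5, 3.3, 3.8, 3.0, 3.7])

# pandas: sample skewness and *excess* kurtosis (a normal distribution yields 0).
print(x.skew(), x.kurt())

# scipy equivalents; values may differ slightly from the study's tables because
# of differing bias-correction and kurtosis conventions.
print(stats.skew(x), stats.kurtosis(x))
```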


142 142 discussed below should be in terpreted with caution (Raude nbush & Bryk, 2002). All data were present at the school level indicating that the assumption of random missing data was met for Level 3 variables. Finally, the assumption that the data were nested was examined by calculating the ICC from the unconditional Rt I-A model. The ICC estimate derived was .57 indicating a nested data structure. Therefore, the multilevel models assumption of a nested data structure was met suggesting that multilevel models procedures were appropriate for this model (Raudenbush & Bryk, 2002). Descriptive Data. Educators’ average perceived RTI-A skills were derived by calculating the average rating across items relevant to academ ic issues on the Perceptions of RtI Skills Survey These average RTI-A skills scores were calculated at the beginning and end of the year to determine what cha nges occurred in the educators’ perceived skills. Average RTI-A skills scores also were calculated for educators in pilot versus comparison schools as well as SBLT members vers us staff. These educator pairings were examined to investigate what changes occu rred in perceived skills for groups with differential exposure to PS/RtI tr aining and technical assistance. Table 4a includes average RTI-A skills item score data for the aforementioned groups. The perceived RTI-A skills of all e ducators included in th e study increased from the beginning (Mean=3.28; SD =0.78) to the end (Mean=3.44, SD =0.75) of the school year. The perceived skills of educators in pilot versus comparison schools as well as educators who were SBLT members versus sta ff also increased across time. However, the average reported level of RTI-A skills at the beginning of the year as well as the amount of change in scores observed at the end of the year differed by group.


143 143 Table 4a Perceptions of RTI-A Skills Multi-Level M odel Data Descriptive Data from Beginning and End of Year Administrations for Total Sample, Pilot versus Comparison Schools, and SBLT Members versus All Other Staff Members Level 1 Variables naMean ( SD) Skewness Kurtosis Average Academic Skills Item Score 4629 (68) 3.36 (0. 77) -0.54 0.43 Beginning of Year 2236 (62) 3.28 (0.78) -0.52 0.26 End of Year 2393 (68) 3.44 (0.75) -0.55 0.63 Average Skills Item Score: Pilot versus Comparison Schools Pilot Schools 2961 (40) 3.35 (0.75) -0.50 0.44 Beginning of Year 1463 (40) 3.24 (0.76) -0.44 0.14 End of Year 1498 (40) 3.45 (0.73) -0.55 0.84 Comparison Schools 1668 (28) 3.39 (0.79) -0.60 0.44 Beginning of Year 773 (22) 3.34 (0.80) -0.67 0.53 End of Year 895 (28) 3.43 (0.79) -0.53 0.32 Average Skills Item Score: P ilot School SBLT versus Staff b SBLT 533 (40) 3.62 (0.75) -0.51 0.31 Beginning of Year 278 (40) 3.44 (0.79) -0.50 0.12 End of Year 255 (40) 3.81 (0.65) -0.19 -0.32 Staff 4096 (68) 3.33 (0.76) -0.55 0.45 Beginning of Year 1958 (62) 3.25 (0.77) -0.53 0.29 End of Year 2138 (68) 3.39 (0.75) -0.57 0.63 Note. a Number in parentheses represents the numb er of schools from which educators responded. b Staff includes members from pilot and comparison schools. RTI-A= Response to Intervention Academic; SBLT = School-Based Leadership Team. Pilot school educators (Mean=3.24, SD =0.76) started with a lower level of average perceived RTI-A skills than thei r comparison school counterparts (Mean=3.34, SD =0.80). Despite a lower level at the begi nning of the year, pilot school educators reported slightly higher RTI-A skills at the end of the year (Mean=3.45, SD= 0.72) than educators in comparison schools (Mean=3.43, SD =0.79). A similar pattern of larger increases emerged for SBLT members versus other instructional staff. SBLT members


144 144 indicated higher RTI-A skills (Mean=3.44, SD =0.79) at the beginning of the year than their staff counterparts (Mean=3.25, SD= 0.77). Across the year, SBLT members reported average RTI-A skills (Mean=3.81, SD =0.65) increased more than educators who were not a member of a SBLT receiving tr aining from the Project (Mean=3.39, SD =0.75). Descriptive data also were examined for the Level 2 and 3 variables to be entered into the model predicting edu cators’ perceived RTI-A skills. Level 2 (i.e., educator level predictors) and 3 (i.e., school le vel variables) predictors en tered for the perceived RTI-A skills model were the same as were entere d for the beliefs model. The number and percent of educators by each of the demogra phic groups was calculated at the beginning and end of the year due to differences in individuals completing the surveys. Educators from 62 and 68 of the 73 participating schools completed the survey at the beginning and end of the year respectively. All schools at wh ich surveys were not administered at either or both time points were the same comparis on schools described for the beliefs model. Overall, the demographics of the educators completing the Perceptions of RtI Skills Survey were similar to the demographics of educators who completed the Beliefs Survey Table 4b includes data for all educator level predictors at the be ginning and end of the school year for the Perceptions of RtI Skills Survey


145 145 Table 4b Perceptions of RTI-A Skills Multi-Level Model Da ta Level 2 Predictor Frequencies by Time Level 2 Predictors Frequencies (%)aSkewness Kurtosis Beginning of Year b End of Year b Position General Education Teacher 1290 (57.69) 1585 (66.23 ) -0.50 -1.75 Special Education Teacher 216 (9.66) 308 (12.87) 2.44 3.97 Administrator 59 (2.64) 59 (2.47) 6.02 34.29 Student Support Services 85 (3.80) 87 (3.64) 4.90 21.98 Other 141 (6.31) 186 (7.77) 3.35 9.24 Years of Experience 0.23 -1.17 Less than 1 year 93 (5.18) 99 (4.44) 1-4 years 359 (19.98) 449 (20.13) 5-9 years 381 (21.20) 463 (20.75) 10-14 years 277 (15.41) 336 (15.06) 15-19 years 227 (12.63) 248 (11.12) 20-24 201 (11.19) 282 (12.64) 25 or more years 259 (14.41) 354 (15.87) Highest Degree Earned 1.19 1.52 Bachelor of Arts/Bachelor of Science 1078 (60.97) 1342 (61.00) Master of Arts/Master of Science 634 (35.86) 790 (35.91) Educational Specialist 44 (2.49) 53 (2.41) Doctor of Philosophy/Doctor of Education 12 (0.68) 15 (0.68) School Based Leadership Team Member Status 2.41 3.82 School Based Leadership Team Member 1958 (87.57) 2138 (89.34) Non-School Based Leadership Team Member 278 (12.43) 255 (10.66) Note. a Percent of educators in the corresponding category is included in parentheses. b Educators from 62 and 68 schools completed the survey at the beginning and end of the year respectively. Level 3 predictor (i.e., school level variables) descriptive statistics were calculated differentially for continuous versus ca tegorical variables. Table 4c includes the means and standard deviations of continuous school level variab les at the observation


146 146 level (i.e., means and standard deviations take into account the number of educators responding from each school). Table 4d include s frequency data for categorical school level variables at the observation level. Overall, the data indicate vari ability in the school level variables (e.g., school demographics, st aff size, district membership) associated with the educators from th e 68 schools who completed the Perceptions of RtI Skills Survey


147 147 Table 4c Perceptions of RTI-A Skills Multi-Level Model Data Level 3 Continuous Predictor Descriptive Statistics Level 3 Predictors Mean ( SD )ana Skewness b Kurtosis b School Demographics School Size 757.04 (243.44) 4629 0.68 0.31 Staff Size 55.77 (17.18) 4629 0.55 0.02 Proportion White Students 0.56 (0.28) 4629 -0.80 -0.51 Proportion Black Students 0.24 (0.26) 4629 1.40 0.87 Proportion Hispanic Students 0.13 (0.10) 4629 1.52 1.75 Proportion Asian Students 0.03 (0.02) 4629 1.75 3.24 Proportion Native Students 0.00 (0.00) 4629 1.83 4.59 Proportion Multiracial Students 0.04 (0.02) 4629 0.33 -0.61 Proportion Male 0.52 (0.03) 4629 -0.86 3.19 Proportion Free-Reduced Lunch 0.51 (0.25) 4629 -0.15 -1.15 Proportion English Language Learners 0.11 (0.13) 4629 1.81 2.82 Proportion Students with Disab ilities 0.16 (0.06) 4629 0.38 0.25 Average % SBLT Members Days Presentc 0.27 (0.40) 4629 0.82 -1.25 Coaching Variables Number Coach Trainingsc 1.91 (4.16) 4629 2.52 5.67 Coach Training Hoursc 6.76 (13.55) 4629 2.28 5.30 Number Coach Technical Assistance Sessionsc7.67 (13.99) 4629 1.86 2.49 Coach Technical Assistance Session Hoursc18.44 (32.72) 4629 1.68 1.79 Average FCAT Score from Baseline Years 316.22 (19.47) 4555 0.18 -0.49 Note. a n represents the number of observations with da ta associated with the corresponding variable. b Skewness and kurtosis values calcula ted from data across time points. c Means, SD s, and n s based on pilot school data entered at Time 2. All Time 1 and comparison values equal 0. FCAT=Florida Comprehensive Assessment Te st; SBLT = School Based Leadership Team.


148 148 Table 4d Perceptions of RTI-A Skills Multi-Level Model Data Level 3 Categorical Predictor Descriptive Data Level 3 Predictors Frequencies (%)aSkewness Kurtosis School Status -0.58 -1.66 Pilot School 2961 (63.97) Comparison School 1668 (36.03) District Membership District A 481 (10.39) 2.60 4.75 District B 705 (15.23) 1.94 1.75 District C 476 (10.28) 2.62 4.85 District D 602 (13.00) 2.20 2.84 District E 757 (16.35) 1.82 1.31 District F 413 (8.92) 2.88 6.31 District G 799 (17.26) 1.73 1.00 District H 396 (8.55) 2.96 6.79 Note. a Percent of observations in corresponding category is included in parentheses. Educator Perceived RTI-A Skills Multilevel Model Results A 3-Level multilevel model was examined to determine what f actors predicted educator perceived RTI-A skills. The average item score on items related to academic issues from the Perceptions of RtI Skills Survey was entered as the dependent variable in the analysis. The same Level 1, 2, and 3 variables (both main effects and inte ractions) that were entered into the model predicting educator beliefs were entered in this model. Decisions regarding allowing intercepts and slopes to vary and centering were consistent with the beliefs model described above as well. Prior to running the full 3 Level model, time was entered as a Level 1 predictor to determine if increases in the perceived RTI-A skills of educators noted in the descriptive analyses were statistically significant. Time, when entered into the model without any


Level 2 or 3 predictors, was a significant predictor of perceived RTI-A skills (Estimate = 0.19, t = 11.04, p < .01). These findings indicated that educator-reported academic skills increased from the beginning to the end of the school year. When Level 2 and 3 predictors were added into the model, however, time was no longer a significant predictor (Estimate = 0.15, t = 0.44, p = .66) after controlling for the other predictors. Although the main effect of time was no longer significant, significant interaction effects between time and some Level 2 and 3 predictors occurred. The significant interaction effects described below suggest that time contributed to predictions of educator perceived RTI-A skills when associated with some variables.

Several Level 2 variables significantly contributed to the model predicting perceived RTI-A skills. Educator level variables that significantly contributed to the model were highest degree earned (Estimate = 0.13, t = 4.09, p < .01), and a position as an administrator (Estimate = 0.90, t = 3.80, p < .01), general education teacher (Estimate = 0.47, t = 2.18, p = .03), or special education teacher (Estimate = 0.44, t = 2.01, p = .04). These results indicated that having earned a higher degree was associated with higher levels of perceived RTI-A skills while controlling for other predictors. Holding a position as an administrator, general education teacher, or special education teacher also predicted higher levels of perceived RTI-A skills while controlling for other predictors. No other educator level variables produced significant main effects. When the interactions between time and each of the Level 2 predictors were examined, only the interaction between being a member of a SBLT and time (Estimate = 0.39, t = 5.52, p < .01) was significant. When controlling for other predictors, membership on a SBLT predicted increasing perceived RTI-A skills from the beginning to the end of the year.


Level 3 variables entered into the model produced several significant predictors of an educator's perceived RTI-A skills. School demographic variables that significantly predicted perceived educator RTI-A skills included the proportion of Black (Estimate = 5.04, t = 2.00, p = .05), Hispanic (Estimate = 5.31, t = 2.11, p = .04), Asian (Estimate = 7.93, t = 2.64, p = .01), and Native American (Estimate = 26.58, t = 2.26, p = .02) students attending the schools. Working in District E also was a significant predictor of educator perceived RTI-A skills (Estimate = -0.34, t = -2.04, p = .04). These results indicated that working in a school with higher proportions of Black, Hispanic, Asian, or Native American students predicted higher levels of perceived RTI-A skills while controlling for other predictors. Conversely, working in a school in District E predicted lower perceived RTI-A skills.

When the interactions between school level variables and time were examined, only the interactions between time and working in a pilot school (Estimate = 0.57, t = 2.09, p = .04) and the average proportion of SBLT members present at trainings (Estimate = -0.62, t = -2.13, p = .03) were significant. These results indicated that working in a pilot school predicted increasing perceived RTI-A skills across the year while controlling for other predictors. Conversely, higher average proportions of SBLT members attending trainings provided by the Project predicted decreasing reported skills across the year. Other school level variables did not differentially predict changing perceived RTI-A skills across time. See Table 4e below for data on the degree to which each predictor entered into the 3 level model predicted educator perceptions of their RTI-A skills.


151 151 Table 4e Perceptions of RTI-A Skills Multi-Level Model Data 3 Level Model Predicting Educator Skills Predictors Estimate SE t p Level 1 Academic Skills Intercept 2.81 0.25 11.26* <.01 Time (Slope) 0.15 0.33 0.44 .66 Level 2 Intercepts General Education Teacher 0.47 0.21 2.18* .03 Special Education Teacher 0.44 0.22 2.01* .04 Administrator 0.90 0.24 3.80* <.01 Student Support Services Personnel 0.36 0.23 1.54 .12 Other Position 0.39 0.22 1.75 .08 Years of Experience 0.01 0.01 1.34 .18 Highest Degree Earned 0.13 0.03 4.09* <.01 School-Based Leadership Team Membership 0.11 0.06 1.72 .09 Slopes General Education Teacher*Time -0. 19 0.31 -0.62 .54 Special Education Teacher*Time -0.23 0.31 -0.74 .46 Administrator*Time -0.33 0.33 -1.00 .32 Student Support Services Personnel*Time -0. 26 0.33 -0.79 .43 Other Position*Time -0.40 0.32 -1.26 .21 Years of Experience*Time -0.01 0.01 -0.75 .45 Highest Degree Earned*Time -0.06 0.04 -1.58 .11 School-Based Leadership Team Membership*Time 0.39 0.07 5.52* <.01


152 152 Table 4e continued Perceptions of RTI-A Skills Multi-Level Model Data 3 Level Model Predicting Educator Skills Predictors Estimate SE t p Level 3 Intercepts School Size -0.00 0.00 -1.28 .20 Staff Size 0.00 0.01 0.88 .38 Proportion White Students A ttending School 4.30 2.39 1.80 .07 Proportion Black Students A ttending School 5.04 2.51 2.00* .05 Proportion Hispanic Students Attending School 5.31 2.52 2.11* .04 Proportion Asian Students A ttending School 7.93 3.01 2.64* .01 Proportion Native American St udents Attending School 26.58 11.76 2.26* .02 Proportion Multiracial Students Attending School 0 . Proportion Male Students A ttending School 0.16 1.14 0.14 .89 Proportion Students Eligible for Free-Reduced Lunch Atte nding School 0.11 0.31 0.36 .72 Proportion English Language Learne r Students Attending School -0.16 0.42 -0.39 .70 Proportion Students with Disabilities Attending School -0.65 0.91 -0.72 .47 Pilot School Membership -0.01 0.07 -0.13 .90 District A Membership 0.03 0.18 0.17 .87 District B Membership -0.44 0.27 -1.60 .11 District C Membership 0.01 0.19 0.05 .96 District D Membership -0.18 0.16 -1.14 .26 District E Membership -0.34 0.17 -2.04* .04 District F Membership 0.19 0.15 1.32 .19 District G Membership 0.15 0.13 1.17 .24 District H Membership 0 . Average FCAT Baseline Years Score 0.00 0.00 1.28 20


153 153 Table 4e continued Perceptions of RTI-A Skills Multi-Level Model Data 3 Level Model Predicting Educator Skills Predictors Estimate SE t p Slopes School Size*Time 0.00 0.00 1.66 .10 Staff Size*Time -0.00 0.00 -0.98 33 Proportion White Students Atte nding School*Time -1.90 2.06 -0.92 .36 Proportion Black Students Atte nding School*Time -2.23 2.19 -1.02 .31 Proportion Hispanic Students A ttending School*Time -2.24 2.22 -1.01 .31 Proportion Asian Students Atte nding School*Time -3.58 2.61 -1.37 .17 Proportion Native American Stude nts Attending School*Time -10.38 9.36 -1.11 .27 Proportion Multiracial Students Attending School*Time 0 . Proportion Male Students Atte nding School*Time -1.42 0.97 -1.46 .14 Proportion Students on Free-Reduced Lunch Attending School*Time 0.32 0.25 1.27 .21 Proportion English Language Learner Students Attending School *Time -0.39 0.37 -1.06 .29 Proportion Students with Disabilities Attending School*Time 0.47 0.73 0.64 .52 Pilot School Membership*Time 0.57 0.27 2.09* .04 District A Membership*Time 0.03 0.13 0.20 .84 District B Membership*Time 0.25 0.22 1.11 .27 District C Membership*Time 0.07 0.17 0.39 .70 District D Membership*Time 0.05 0.18 0.30 .77 District E Membership*Time 0.11 0.152 0.70 .49 District F Membership*Time -0.07 0.11 -0.63 .53 District G Membership*Time -0.05 0.10 -0.46 .65 District H Membership*Time 0 . Average FCAT Baseline Years Score*Time 0.00 0.00 0.28 .78 Proportion of Days SBLT Members Attended Training*T ime -0.62 0.29 -213* .03


154 154 Table 4e continued Perceptions of RTI-A Skills Multi-Level Model Data 3 Level Model Predicting Educator Skills Predictors Estimate SE t p Number of Coach Trainings*Time 0.01 0.02 0.28 .78 Coach Training Hours*Time -0.00 0.01 -0.01 .99 Number of Coach Technical Assistance Sessions*Time -0.00 0.00 -0.48 .63 Coach Technical Assistance Session Hours*Time -0.00 0.00 -0.09 0.93 Note. p<. 05; df = 3678. SBLT = School-Based Leadership Team. Random effects for intercepts at the educat or and school levels were examined to determine if the average perceived RTI-A skills item score significantly varied. Intercepts at the school level significantly varied (Estimate=0.02, SE =0.01, z =2.71, p<.01) indicating that the aver age score differed across participa ting schools. Intercepts at the educator level significan tly varied (Estimate=0.28, SE =0.02, z =17.99, p<.01) as well indicating that the average score differed ac ross educators nested within participating schools. Thus, the average scores of e ducators within and across schools differed significantly. Variance in the scores of e ducators across time could not be examined because slopes remained fixed to allow the model to converge. Residual variance also was examined to determine the extent to which unexplained variance in educator perceived RTI-A skills existed after entering predictors into the model. Residual va riance was significant in the full model (Estimate=0.23, SE =0.01, z =21.95, p=<.01) indicating that th e multilevel model did no t explain all of the variance in educator perceived RTI-A sk ills. However, the amount of unexplained variance decreased each time predictors were added to account for educator reported


skills. The estimate of residual variance decreased from 0.26 in the unconditional model to 0.23 when all Level 1, 2, and 3 predictors were included in the multilevel model. The decrease in residual variance suggests that the addition of predictors across levels improved the predictive utility of the model.

Educators' Perceived RtI Behavior (RTI-B) Skills

Assumptions. Assumptions of multilevel model procedures were examined before conducting any inferential analyses, consistent with the models discussed previously. The normality assumption was examined for the perceived RTI-B skills data. Skewness and kurtosis values for the average item RTI-B skills score of educators were -0.42 and 0.13 respectively, indicating a relatively normal distribution. Skewness values for Level 2 and 3 predictors were the same as described above for the RTI-A skills model; however, multilevel model procedures should be robust to violations of the normality assumption (Raudenbush & Bryk, 2002).

The assumption that missing data were randomly distributed was violated at the educator level. Significant correlations (as high as .99, p < .01) paralleled the estimates found for the RTI-A related items. Given that multilevel models are sensitive to violations of this assumption, findings from the multilevel model procedures discussed below should be interpreted with caution (Raudenbush & Bryk, 2002). Because all Level 3 variables were the same as for the RTI-A skills model, the assumption of random missing data was met for Level 3 predictors.

Finally, the assumption that the data were nested was examined by calculating the ICC from the unconditional RTI-B skills model. The ICC estimate derived was .52. The estimate indicated that the data were nested, suggesting that the assumption of a nested


156 156 data structure was met. Therefore, multi-l evel modeling procedures appeared to be appropriate to determine factors that pr edict educators’ perceived RTI-B skills. Descriptive Data. Educators’ average perceive d RTI-B skills were derived by calculating the average rating across items relevant to behavior issues on the Perceptions of RtI Survey These average RTI-B skills scores we re calculated for the same groups as the RTI-A skills scores. Table 5a includes aver age RTI-B skills item score data for the aforementioned groups. The perceived RTI-B sk ills of all educators included in the study increased from the beginning (Mean=3.11; SD =0.79) to the end (Mean=3.27, SD =0.76) of the school year. The perceived skills of educators in pilot versus comparison schools as well as educators who were SBLT members versus staff also increased across time. However, the average reported level of RTI-B skills at the beginning of the year as well as the amount of change in scores observed at the end of the year differed by group.


157 157 Table 5a Perceptions of RTI-B Skills Multi-Level Model Data Beginning a nd End of Year Administrations for Total Sample, Pilot versus Comparison Schools, and SBLT Members versus All Other Staff Members Level 1 Variables naMean ( SD) Skewness Kurtosis Average Behavior Skills Item Sc ore 4629 (68) 3.20 (0.78) -0.42 0.13 Beginning of Year 2236 (62) 3.11 (0.79) -0.34 -0.09 End of Year 2393 (68) 3.27 (0.76) -0.49 0.41 Average Skills Item Score: Pilot versus Comparison Schools Pilot Schools 2961 (40) 3.17 (0.76) -0.36 0.05 Beginning of Year 1463 (40) 3.06 (0.77) -0.23 -0.21 End of Year 1498 (40) 3.28 (0.73) -0.47 0.47 Comparison Schools 1668 (28) 3.24 (0.80) -0.53 0.26 Beginning of Year 773 (22) 3.20 (0.80) -0.57 0.24 End of Year 895 (28) 3.27 (0.81) -0.50 0.28 Average Skills Item Score: P ilot School SBLT versus Staff b SBLT 533 (40) 3.38 (0.71) -0.28 0.04 Beginning of Year 278 (40) 3.22 (0.74) -0.21 -0.07 End of Year 255 (40) 3.54 (0.64) -0.17 -0.03 Staff 4096 (68) 3.17 (0.78) -0.42 0.11 Beginning of Year 1958 (62) 3.09 (0.79) -0.35 -0.11 End of Year 2138 (68) 3.24 (0.76) -0.49 0.38 Note. a Number in parentheses represents the numb er of schools from which educators responded. b Staff includes members from pilot and comparison schools. RTI-B = Response to Intervention – Behavior; SBLT = School-Based Leadership Team. Pilot school educators (Mean=3.06, SD =0.77) started with a lower level of average perceived RTI-B skills than thei r comparison school counterparts (Mean=3.20, SD =0.80). Despite a lower level at the begi nning of the year, pilot school educators reported slightly higher RTI-B skills at the end of the year (Mean=3.28, SD= 0.73) than educators in comparison schools (Mean=3.27, SD =0.81). A similar pattern of larger


increases emerged for SBLT members versus other instructional staff. SBLT members indicated higher RTI-B skills (Mean = 3.22, SD = 0.74) at the beginning of the year than their staff counterparts (Mean = 3.09, SD = 0.79). Across the year, SBLT members' reported average RTI-B skills (Mean = 3.54, SD = 0.64) increased more than those of educators who were not members of a SBLT receiving training from the Project (Mean = 3.24, SD = 0.76). Thus, although the levels of average reported RTI-B skills were lower, a similar pattern of increases occurred for the RTI-B and RTI-A perceived skills across groups.

Level 2 and 3 variables to be entered into the model predicting educators' perceived RTI-B skills were the same as the variables for the academic skills model. Descriptive data for these variables were the same because the sample was derived from the same set of educators who took the Perceptions of RtI Skills Survey (i.e., the academic and behavior items used to derive the models came from the same survey). Refer back to Tables 4b, 4c, and 4d for the data for all educator and school level predictors at the beginning and end of the school year for the Perceptions of RtI Skills Survey.

Educator Perceived RTI-B Skills Multilevel Model Results

A 3-Level multilevel model was examined to determine what factors predicted educator perceived RTI-B skills. The average item score on items related to behavior issues from the Perceptions of RtI Skills Survey was entered as the dependent variable in the analysis. The same Level 1, 2, and 3 variables (both main effects and interactions) that were entered into the models predicting educator beliefs and perceived RTI-A skills were entered in this model. Decisions regarding allowing intercepts and slopes to vary and centering were consistent with the previous models described above as well.


Prior to running the full 3-Level model, time was entered as a Level 1 predictor to determine if the increases in the perceived RTI-B skills of educators noted in the descriptive analyses were statistically significant. Time, when entered into the model without any Level 2 or 3 predictors, was a significant predictor of perceived RTI-B skills (Estimate = 0.19, t = 10.05, p < .01). These findings indicated that educator-reported RTI-B skills increased from the beginning to the end of the school year. When Level 2 and 3 predictors were added into the model predicting educator perceived RTI-B skills, however, time was no longer a significant predictor (Estimate = 0.19, t = 0.55, p = .58) by itself. Although the main effect of time was no longer a significant predictor, significant interaction effects between time and some Level 2 and 3 predictors occurred. The significant interaction effects described below suggest that time contributed to predictions of educator perceived RTI-B skills when associated with some variables.

Several Level 2 variables significantly contributed to the model predicting perceived RTI-B skills. Educator level variables that significantly contributed to the model were highest degree earned (Estimate = 0.10, t = 2.94, p < .01), and a position as an administrator (Estimate = 0.94, t = 3.85, p < .01), general education teacher (Estimate = 0.51, t = 2.33, p = .02), special education teacher (Estimate = 0.58, t = 2.56, p = .01), or student support services person (Estimate = 0.73, t = 3.05, p < .01). These results indicated that having earned a higher degree was associated with higher levels of perceived RTI-B skills while controlling for other predictors. Holding a position as an administrator, general education teacher, special education teacher, or student support services person (i.e., school psychologist, social worker, or guidance counselor) also predicted higher levels of perceived RTI-B skills while controlling for other predictors. No other educator level main effects


predicted perceived skill levels. When the interactions between time and each of the Level 2 predictors were examined, only the interaction between being a member of a SBLT and time (Estimate = 0.35, t = 4.62, p < .01) was significant. When controlling for other predictors, membership on a SBLT predicted increasing perceived RTI-B skills from the beginning to the end of the year.

Level 3 variables entered into the model produced several significant predictors of an educator's perceived RTI-B skills. School demographic variables that significantly predicted perceived educator RTI-B skills included the proportion of Asian (Estimate = 8.31, t = 2.55, p = .01) and Native American (Estimate = 34.03, t = 2.67, p = .01) students attending a school. Working in District E also was a significant predictor of educator perceived RTI-B skills (Estimate = -0.41, t = -2.29, p = .02). These results indicated that working in a school with higher proportions of Asian or Native American students predicted higher levels of perceived RTI-B skills while controlling for other predictors. Conversely, working in a school in District E predicted lower perceived skills.

When the interactions between school level variables and time were examined, only the interactions between time and working in a pilot school (Estimate = 0.66, t = 2.26, p = .02) and the average proportion of SBLT members present at trainings (Estimate = -0.66, t = -2.13, p = .03) were significant. These results indicated that working in a pilot school predicted increasing perceived RTI-B skills across the year while controlling for other predictors. Conversely, higher average proportions of SBLT members attending trainings provided by the Project predicted decreasing reported skills across the year. Other school level variables did not differentially predict changing perceived RTI-B


skills across time. See Table 5b below for data on the degree to which each predictor entered into the 3-level model predicted educator perceived RTI-B skills.

Table 5b
Perceptions of RTI-B Skills Multi-Level Model Data: 3-Level Model Predicting Educator Skills

Predictors  Estimate  SE  t  p
Level 1
Behavior Skills Intercept  2.66  0.26  10.20*  <.01
Time (Slope)  0.19  0.35  0.55  .58
Level 2
Intercepts
General Education Teacher  0.51  0.22  2.33*  .02
Special Education Teacher  0.58  0.23  2.56*  .01
Administrator  0.94  0.24  3.85*  <.01
Student Support Services Personnel  0.73  0.24  3.05*  <.01
Other Position  0.35  0.23  1.52  .13
Years of Experience  0.01  0.01  0.99  .32
Highest Degree Earned  0.10  0.03  2.94*  <.01
School-Based Leadership Team Membership  -0.00  0.06  -0.02  .98
Slopes
General Education Teacher*Time  -0.15  0.32  -0.48  .63
Special Education Teacher*Time  -0.18  0.32  -0.57  .57
Administrator*Time  -0.27  0.34  -0.78  .44
Student Support Services Personnel*Time  -0.28  0.34  -0.84  .40
Other Position*Time  -0.34  0.33  -1.04  .30
Years of Experience*Time  -0.01  0.01  -1.11  .27
Highest Degree Earned*Time  -0.03  0.04  -0.82  .41
School-Based Leadership Team Membership*Time  0.35  0.08  4.62*  <.01
Table 5b continued
Perceptions of RTI-B Skills Multi-Level Model Data: 3-Level Model Predicting Educator Skills

Predictors  Estimate  SE  t  p
Level 3
Intercepts
School Size  -0.00  0.00  -1.23  .22
Staff Size  0.01  0.01  0.86  .39
Proportion White Students Attending School  4.18  2.59  1.61  .11
Proportion Black Students Attending School  5.03  2.72  1.85  .06
Proportion Hispanic Students Attending School  5.11  2.72  1.88  .06
Proportion Asian Students Attending School  8.31  3.25  2.55*  .01
Proportion Native American Students Attending School  34.03  12.76  2.67*  .01
Proportion Multiracial Students Attending School  0  .  .
Proportion Male Students Attending School  0.89  1.23  0.72  .47
Proportion Students Eligible for Free-Reduced Lunch Attending School  0.27  0.34  0.81  .42
Proportion English Language Learner Students Attending School  -0.09  0.45  -0.20  .84
Proportion Students with Disabilities Attending School  -0.39  0.98  -0.39  .69
Pilot School Membership  -0.02  0.07  -0.32  .75
District A Membership  -0.04  0.20  -0.23  .82
District B Membership  -0.49  0.30  -1.65  .10
District C Membership  -0.10  0.20  -0.50  .62
District D Membership  -0.08  0.17  -0.46  .64
District E Membership  -0.41  0.18  -2.29*  .02
District F Membership  0.20  0.16  1.28  .20
District G Membership  0.09  0.14  0.66  .51
District H Membership  0  .  .
Average FCAT Baseline Years Score  0.01  0.00  1.93  >.05
Table 5b continued
Perceptions of RTI-B Skills Multi-Level Model Data: 3-Level Model Predicting Educator Skills

Predictors  Estimate  SE  t  p
Slopes
School Size*Time  0.00  0.00  1.64  .10
Staff Size*Time  -0.00  0.00  -0.90  .37
Proportion White Students Attending School*Time  -0.78  2.18  -0.36  .72
Proportion Black Students Attending School*Time  -0.88  2.32  -0.38  .70
Proportion Hispanic Students Attending School*Time  -0.99  2.35  -0.42  .67
Proportion Asian Students Attending School*Time  -2.56  2.77  -0.92  .36
Proportion Native American Students Attending School*Time  -14.32  9.94  -1.44  .15
Proportion Multiracial Students Attending School*Time  0  .  .
Proportion Male Students Attending School*Time  -1.07  1.03  -1.04  .30
Proportion Students on Free-Reduced Lunch Attending School*Time  -0.04  0.27  -0.14  .89
Proportion English Language Learner Students Attending School*Time  -0.32  0.39  -0.82  .41
Proportion Students with Disabilities Attending School*Time  0.36  0.77  0.47  .64
Pilot School Membership*Time  0.66  0.29  2.26*  .02
District A Membership*Time  -0.12  0.14  -0.83  .41
District B Membership*Time  0.03  0.24  0.12  .91
District C Membership*Time  -0.08  0.18  -0.42  .67
District D Membership*Time  -0.10  0.19  -0.51  .61
District E Membership*Time  0.04  0.16  0.23  .82
District F Membership*Time  -0.20  0.12  -1.70  .09
District G Membership*Time  -0.12  0.11  -1.08  .28
District H Membership*Time  0  .  .
Average FCAT Baseline Years Score*Time  -0.00  0.00  -0.73  .46
Proportion of Days SBLT Members Attended Training*Time  -0.66  0.31  -2.13*  .03
Table 5b continued
Perceptions of RTI-B Skills Multi-Level Model Data: 3-Level Model Predicting Educator Skills

Predictors  Estimate  SE  t  p
Number of Coach Trainings*Time  0.00  0.02  0.22  .82
Coach Training Hours*Time  -0.00  0.01  -0.00  >.99
Number of Coach Technical Assistance Sessions*Time  0.00  0.00  0.53  .60
Coach Technical Assistance Session Hours*Time  -0.00  0.00  -1.07  .28
Note. * p < .05; df = 3678. SBLT = School-Based Leadership Team.

Random effects for intercepts at the educator and school levels were examined to determine if the average perceived RTI-B skills item score significantly varied. Intercepts at the school level significantly varied (Estimate=0.02, SE=0.01, z=2.93, p<.01), indicating that the average score differed across participating schools. Intercepts at the educator level significantly varied (Estimate=0.27, SE=0.02, z=16.22, p<.01) as well, indicating that the average score differed across educators nested within participating schools. Thus, the reported average RTI-B skills of educators appeared to vary within and across schools. Variance in slopes across educators could not be examined because slopes remained fixed to allow the model to converge.

Residual variance also was examined to determine the extent to which unexplained variance in educator perceived RTI-B skills existed after predictors were added to the model. Residual variance was significant in the full 3-level model (Estimate=0.27, SE=0.01, z=22.08, p<.01), indicating that the multilevel model did not explain all of the variance in educator perceived RTI-B skills. However, the amount of unexplained variance decreased each time predictors were added to account for educator
reported skills. The estimate of residual variance decreased from 0.29 in the unconditional model to 0.27 when all Level 1, 2, and 3 predictors were included in the multilevel model. The decrease in residual variance suggests that the addition of variables across levels increased the predictive utility of the model.

Research Question 2

Research question 2 examined the relationship between PS/RtI training and the demonstrated skills of educators. Skill assessments examining the extent to which educators could demonstrate application of the skills on which they were trained were administered at the end of the Day 2, 3, 4, and 5 trainings. The skill assessments administered on each day varied as a function of the training focus. Specifically, the skills assessed, the number of items, the number of points possible, and the number of assessments administered during a given day varied. Differences in the skills assessed were not controlled statistically; however, the other differences referenced (i.e., the number of items, the number of points possible, and the number of assessments administered) were controlled by calculating the percent of possible points that an educator could have been awarded each day. Because only SBLT members attended the 5 days of Project-provided trainings, the analyses discussed below pertain only to SBLT members in the 40 pilot schools.

Educators' Demonstrated PS/RtI Skills

Assumptions. Assumptions of multilevel modeling procedures were examined before conducting inferential analyses. The normality assumption was examined for the skill assessment data and the Level 2 and 3 predictors to be entered into the model. Skewness and kurtosis values for the percent of possible points earned by educators were
-0.65 and 0.13, respectively, indicating a relatively normal distribution. Skewness values for Level 2 and 3 predictors ranged from -1.69 to 3.81 with the majority of estimates less than 2. Kurtosis values for these predictors ranged from -1.15 to 12.55 with the majority of estimates less than 2. These two statistics indicated some variability in the distribution of the data for Level 2 and 3 predictors; however, the relatively large sample size used to conduct the analyses suggests that the multilevel model procedures should be robust to violations of this assumption (Raudenbush & Bryk, 2002).

The assumption that missing data were randomly distributed was examined next using the procedures described for research question 1. Significant correlations as high as 1.0 (p<.01) among items on the same skill assessment protocol were found. Significant moderate correlations (the majority of the significant correlations ranged from .3 to .7, p values <.01) among items within and across protocols were found as well. These findings indicated that missing data at the educator level were related, resulting in a violation of the randomly distributed missing data assumption. Given that multilevel models are sensitive to violations of this assumption, findings from the multilevel model procedures discussed below should be interpreted with caution (Raudenbush & Bryk, 2002). All data were present at the school level, indicating that the assumption of random missing data was met for Level 3 variables.

The assumption that the data were nested could not be examined because intercepts and slopes were fixed to allow the model to converge. Despite this limitation, multilevel model procedures were used to examine this research question. ICC estimates from the unconditional models examining educators' perceptions of their RTI-A and RTI-B skills suggested a nested data structure. Although research question two examined
demonstrated skills, the ICC estimates for perceived skills suggested the possibility of a nested data structure.

Descriptive Data. Educators' demonstrated skills were derived by dividing the total points earned by the total number of available points across skill assessments administered on a given training day. The percent of possible points earned was calculated for the Day 2, 3, 4, and 5 SBLT trainings to determine what changes occurred in the educators' demonstrated skills. Table 6a includes the average percentage of possible points earned across the training days. The data indicated that the average percentage of points earned decreased from the beginning (Mean=0.84, SD=0.13) to the end of the year (Mean=0.77, SD=0.13). A noteworthy decrease in the average percentage of points earned occurred on the skill assessments completed at the Day 4 training (Mean=0.54, SD=0.13). Thus, the data from all 4 training days suggested a decreasing trend as well as variability in points earned by educators.

Table 6a
Skill Assessment Multi-Level Model Data: Descriptive Data from SBLT Training Day Administrations

Level 1 Variables  n  Mean (SD)  Skewness  Kurtosis
Average Percent of Points Possible  924  0.74 (0.17)  -0.65  -0.13
Day 2  212  0.84 (0.13)  -0.94  0.89
Day 3  223  0.81 (0.11)  -0.91  0.92
Day 4  230  0.54 (0.12)  -0.25  0.56
Day 5  259  0.77 (0.13)  -1.29  2.63
Note. SBLT = School-Based Leadership Team.

Descriptive data also were examined for the Level 2 and 3 variables to be entered into the model predicting educators' demonstrated skills. Level 2 predictors (i.e., educator-level predictors) included position, years of experience, and the highest degree
earned. The number and percent of educators in each of these groups were calculated at the beginning and end of the year due to differences in the individuals completing the skill assessments. SBLT members from all 40 pilot schools completed the skill assessments across all trainings. Table 6b includes data for all educator-level predictors at the beginning and end of the school year.

Table 6b
Skill Assessment Multi-Level Model Data: Level 2 Predictor Frequencies from Day 2 to Day 5 of SBLT Trainings

Level 2 Predictors  Day 2 Training Frequencies (%)  Day 5 Training Frequencies (%)  Skewness  Kurtosis
Position
General Education Teacher  70 (22.65)  61 (20.89)  1.45  0.09
Special Education Teacher  33 (10.68)  32 (10.96)  2.67  5.12
Administrator  47 (15.21)  38 (13.01)  2.12  2.48
Student Support Services  71 (22.98)  64 (21.92)  1.41  -0.00
Other  61 (19.74)  60 (20.55)  1.62  0.62
Years of Experience  —  —  0.12  -1.15
Less than 1 year  5 (1.77)  5 (1.94)
1-4 years  38 (13.43)  31 (12.02)
5-9 years  59 (20.85)  56 (21.71)
10-14 years  57 (20.14)  50 (19.38)
15-19 years  37 (13.07)  39 (15.12)
20-24 years  33 (11.66)  30 (11.63)
25 or more years  54 (19.08)  47 (18.22)
Highest Degree Earned  —  —  0.74  0.96
B.A./B.S.  85 (30.91)  75 (29.64)
M.A./M.S.  158 (57.45)  143 (56.52)
Ed.S.  24 (8.73)  27 (10.67)
Ph.D./Ed.D.  8 (2.91)  8 (3.16)
Note. Percent of educators in the corresponding category is included in parentheses. B.A./B.S. = Bachelor of Arts/Bachelor of Science; Ed.S. = Educational Specialist; M.A./M.S. = Master of Arts/Master of Science; Ph.D./Ed.D. = Doctor of Philosophy/Doctor of Education; SBLT = School-Based Leadership Team.
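As described above, differences in the number of items, points possible, and assessments administered across training days were handled by converting each educator's raw scores to the percent of possible points earned on a given day. The sketch below illustrates one way to perform that conversion; the file name and column names (educator, training_day, points_earned, points_possible) are hypothetical, and the original analyses were not necessarily conducted in Python.

```python
# Minimal sketch: convert raw skill-assessment scores to the percent of
# possible points earned per educator per training day.
# File and column names are hypothetical.
import pandas as pd

items = pd.read_csv("skill_assessment_items.csv")  # one row per educator per item

per_day = (
    items.groupby(["educator", "training_day"])
    .agg(earned=("points_earned", "sum"), possible=("points_possible", "sum"))
)
per_day["pct_possible"] = per_day["earned"] / per_day["possible"]

# Average percent of possible points earned on each training day (cf. Table 6a)
print(per_day["pct_possible"].groupby(level="training_day").agg(["count", "mean", "std"]))
```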

The composition of SBLT members completing the skill assessments was relatively evenly distributed across positions. None of the 5 positions examined (i.e., general education teachers, special education teachers, administrators, student support services personnel, or other position) represented less than 10% or more than 23% of respondents. The years of experience among these educators were relatively normally distributed, with experience ranging from less than 1 year to over 25 years. The majority of respondents' highest degrees were at the bachelor's or master's level; however, more SBLT members held master's (approximately 57%) than bachelor's (approximately 30%) degrees.

Level 3 predictor (i.e., school-level variable) descriptive statistics were calculated differentially for continuous versus categorical variables. Table 6c includes the means and standard deviations of continuous school-level variables at the observation level (i.e., means and standard deviations take into account the number of educators responding from each school). Table 6d includes frequency data for categorical school-level variables at the observation level. Overall, the data indicate variability in the school-level variables (e.g., school demographics, staff size, district membership) associated with the educators from the 40 schools who participated in Year 1 trainings.
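Because the Level 3 statistics in Tables 6c and 6d are reported at the observation level, each school's value is counted once for every educator who responded from that school. A minimal sketch of that weighting is shown below; the file and column names are hypothetical and are offered only to illustrate the idea, not the Project's actual data files.

```python
# Minimal sketch: observation-level descriptive statistics for school (Level 3)
# predictors, weighting each school's value by the number of responding educators.
# File and column names are hypothetical.
import pandas as pd

educators = pd.read_csv("educator_responses.csv")  # one row per educator, includes a school id
schools = pd.read_csv("school_predictors.csv")     # one row per school (size, staff, proportions)

# Merging expands each school-level value to one record per educator observation.
obs = educators.merge(schools, on="school", how="left")

level3_vars = ["school_size", "staff_size", "prop_frl"]  # hypothetical predictor columns
summary = obs[level3_vars].agg(["count", "mean", "std", "skew", "kurt"]).T
print(summary.round(2))
```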

Table 6c
Skill Assessment Multi-Level Model Data: Level 3 Continuous Predictors Descriptive Statistics

Level 3 Predictors  Mean (SD)  n  Skewness  Kurtosis
School Demographics
School Size  678.99 (230.48)  1307  0.84  1.75
Staff Size  49.78 (16.00)  1307  0.60  1.58
Proportion White Students  0.54 (0.26)  1307  -0.76  -0.39
Proportion Black Students  0.24 (0.23)  1307  1.33  0.96
Proportion Hispanic Students  0.14 (0.10)  1307  1.22  0.81
Proportion Asian Students  0.03 (0.03)  1307  1.67  2.60
Proportion Native Students  0.00 (0.00)  1307  1.98  6.17
Proportion Multiracial Students  0.05 (0.02)  1307  0.06  -1.01
Proportion Male  0.52 (0.02)  1307  -0.84  0.82
Proportion Free-Reduced Lunch  0.55 (0.23)  1307  -0.46  -0.68
Proportion English Language Learners  0.11 (0.12)  1307  1.69  2.59
Proportion Students with Disabilities  0.16 (0.06)  1307  0.43  0.25
Average % SBLT Members Days Present  0.85 (0.15)  1271  -1.69  3.84
Note. n represents the number of observations with data associated with the corresponding variable. Skewness and kurtosis values were calculated from data across time points. SBLT = School-Based Leadership Team.
Table 6d
Skill Assessment Multi-Level Model Data: Level 3 Categorical Predictor Descriptive Data

Level 3 Predictors  Frequencies (%)  Skewness  Kurtosis
District Membership
District A  81 (6.19)  3.64  11.26
District B  177 (13.53)  2.13  2.56
District C  128 (9.79)  2.71  5.35
District D  302 (23.09)  1.28  -0.37
District E  244 (18.65)  1.61  0.60
District F  124 (9.48)  2.77  5.68
District G  176 (13.46)  2.14  2.60
District H  75 (5.73)  3.81  12.55
Note. Percent of observations in the corresponding category is included in parentheses.

Educator Demonstrated Skills Multilevel Model Results. A 3-level multilevel model was examined to determine what factors predicted educator demonstrated skills. The percent of possible points earned was entered as the dependent variable in the analysis. Time (i.e., SBLT training days across the year) was entered as the Level 1 predictor of educator skills. Time was zero-centered to facilitate interpretation of the results.

Level 2 predictors included educator variables. Each SBLT member's position, years of experience, and highest degree earned were entered into the model. Level 3 predictors included school-level variables. School demographics (e.g., size, racial composition by group, poverty levels), staff size, district membership, and the proportion of SBLT members who attended each training were predictors entered at Level 3. Interactions between each educator- and school-level variable and time also were entered into the model. All predictors entered into the model examining demonstrated skills were
entered using the same values and procedures described above to address research question 1. Intercepts and slopes remained fixed to allow the model to converge.

Prior to running the full 3-level model, time was entered as a Level 1 predictor to determine if decreases in the demonstrated skills of educators noted in the descriptive analyses were statistically significant. Time, when entered into the model without any Level 2 or 3 predictors, was a significant predictor of skills (Estimate=-0.04, t=-9.31, p<.01). These findings indicated that educator demonstrated skills decreased from the beginning to the end of the school year. When Level 2 and 3 predictors were added into the model predicting demonstrated skills, time remained a significant predictor (Estimate=-0.27, t=-2.58, p=.01) while controlling for other predictors. Although the main effect of time remained significant, significant interaction effects between time and some Level 2 and 3 predictors occurred. The significant interaction effects described below suggest that time contributed to predictions of educator skills when associated with some variables.

Three Level 2 interaction variables significantly contributed to the model predicting demonstrated skills. Educator-level variables that significantly contributed to the model were the interactions between holding a position as a general education teacher (Estimate=0.21, t=2.05, p=.04), special education teacher (Estimate=0.20, t=1.99, p=.05), or student support person (Estimate=0.20, t=2.00, p=.05) and time. These results indicated that working as a general education teacher, special education teacher, or student support person predicted increasing skills across trainings when controlling for other predictors. No Level 2 main effects significantly contributed to the model.
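Growth models of this kind can be approximated by a mixed-effects specification in which time interacts with the educator- and school-level predictors. The sketch below is illustrative only: the data file, column names, and the use of Python/statsmodels are assumptions rather than the software or variable names used in the Project's analyses, and the random-effects structure shown (a school intercept plus an educator variance component) is one common way to approximate a 3-level model rather than the fixed-intercept specification described above.

```python
# Minimal sketch: a three-level growth model approximated with statsmodels MixedLM.
# File, column names, and the predictor set are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("skills_long.csv")  # one row per educator per training day

# Fixed effects: time (zero-centered), educator- and school-level predictors,
# and their cross-level interactions with time.
formula = (
    "pct_possible ~ time + gen_ed + sped + support + highest_degree + "
    "time:gen_ed + time:sped + time:support + time:highest_degree"
)

model = smf.mixedlm(
    formula,
    data=df,
    groups=df["school"],                        # Level 3: random intercept for schools
    vc_formula={"educator": "0 + C(educator)"}, # Level 2: variance component for educators within schools
)
result = model.fit()
print(result.summary())
```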

Level 3 variables entered into the model produced one significant predictor of an educator's demonstrated skills. The interaction between the proportion of SBLT members present at a training and time (Estimate=-0.12, t=-1.96, p=.05) significantly predicted demonstrated skills. These results indicated that having a higher percentage of SBLT members attending trainings predicted decreasing demonstrated skills across time while controlling for other predictors. Neither school demographics (e.g., size, student demographic profile), staff size, working in a school in any of the eight demonstration districts, nor the interactions among these variables and time predicted demonstrated skills. See Table 6e below for data on the degree to which each predictor entered into the 3-level model predicted educator demonstrated skills.
Table 6e
Skill Assessment Multi-Level Model Data: 3-Level Model Predicting Educator Skill Assessment Performance

Predictors  Estimate  SE  t  p
Level 1
Skills Intercept  1.38  0.26  5.28*  <.01
Time (Slope)  -0.27  0.11  -2.58*  .01
Level 2
Intercepts
General Education Teacher  -0.47  0.25  -1.88  .06
Special Education Teacher  -0.41  0.25  -1.63  .10
Administrator  -0.44  0.25  -1.75  .08
Student Support Services Personnel  -0.46  0.25  -1.83  .07
Other Position  -0.44  0.25  -1.75  .08
Years of Experience  -0.00  0.01  -0.38  .70
Highest Degree Earned  0.00  0.02  0.04  .97
Slopes
General Education Teacher*Time  0.21  0.10  2.05*  .04
Special Education Teacher*Time  0.20  0.10  1.99*  .05
Administrator*Time  0.17  0.10  1.66  .10
Student Support Services Personnel*Time  0.20  0.10  2.00*  .05
Other Position*Time  0.18  0.10  1.82  .07
Years of Experience*Time  0.00  0.00  0.06  .95
Highest Degree Earned*Time  0.01  0.01  1.12  .26
Table 6e continued
Skill Assessment Multi-Level Model Data: 3-Level Model Predicting Educator Skill Assessment Performance

Predictors  Estimate  SE  t  p
Level 3
Intercepts
School Size  0.00  0.00  0.37  .71
Staff Size  -0.00  0.00  -0.64  .52
Proportion White Students Attending School  -0.37  266540  -0.00  1.00
Proportion Black Students Attending School  -0.09  266540  -0.00  1.00
Proportion Hispanic Students Attending School  -0.23  266540  -0.00  1.00
Proportion Asian Students Attending School  -0.25  266540  -0.00  1.00
Proportion Native American Students Attending School  -5.52  266540  -0.00  1.00
Proportion Multiracial Students Attending School  -0.02  266540  -0.00  1.00
Proportion Male Students Attending School  -0.54  0.68  -0.79  .43
Proportion Students Eligible for Free-Reduced Lunch Attending School  -0.18  0.12  -1.50  .13
Proportion English Language Learner Students Attending School  0.21  0.19  1.15  .25
Proportion Students with Disabilities Attending School  0.65  0.42  1.55  .12
District A Membership  -0.10  0.10  -1.00  .32
District B Membership  -0.21  0.14  -1.53  .13
District C Membership  -0.08  0.09  -0.87  .39
District D Membership  -0.12  0.08  -1.48  .14
District E Membership  -0.10  0.08  -1.15  .25
District F Membership  -0.06  0.08  -0.78  .44
District G Membership  -0.11  0.09  -1.34  .18
District H Membership  0  .  .
Proportion of Days SBLT Members Attended Training  0.14  0.10  1.41  .16
Table 6e continued
Skill Assessment Multi-Level Model Data: 3-Level Model Predicting Educator Skill Assessment Performance

Predictors  Estimate  SE  t  p
Slopes
School Size*Time  0.00  0.00  -0.07  .94
Staff Size*Time  0.00  0.00  0.03  .97
Proportion White Students Attending School*Time  -0.53  0.57  -0.92  .36
Proportion Black Students Attending School*Time  -0.71  0.62  -1.15  .25
Proportion Hispanic Students Attending School*Time  -0.51  0.61  -0.84  .40
Proportion Asian Students Attending School*Time  -0.56  0.71  -0.79  .43
Proportion Native American Students Attending School*Time  0.08  3.01  0.03  .98
Proportion Multiracial Students Attending School*Time  0  .  .
Proportion Male Students Attending School*Time  0.49  0.37  1.35  .18
Proportion Students on Free-Reduced Lunch Attending School*Time  0.05  0.06  0.74  .46
Proportion English Language Learner Students Attending School*Time  -0.15  0.13  -1.15  .25
Proportion Students with Disabilities Attending School*Time  -0.17  0.24  -0.72  .47
District A Membership*Time  0.01  0.05  0.26  .79
District B Membership*Time  0.05  0.08  0.67  .51
District C Membership*Time  -0.01  0.05  -0.12  .90
District D Membership*Time  0.02  0.04  0.46  .64
District E Membership*Time  0.01  0.04  0.13  .90
District F Membership*Time  0.02  0.04  0.46  .65
District G Membership*Time  0.02  0.04  0.47  .64
District H Membership*Time  0  .  .
Proportion of Days SBLT Members Attended Training*Time  -0.12  0.06  -1.96*  .05
Note. * p < .05; df = 691. SBLT = School-Based Leadership Team.

Random effects for intercepts and slopes could not be calculated because both were held constant to allow the model to converge. However, residual variance was examined to determine the extent to which unexplained variance in educator
demonstrated skills existed after adding predictors to the model. Residual variance was significant in the full model (Estimate=0.025, SE=0.001, z=18.59, p<.01), indicating that the multilevel model did not explain all of the variance in educator demonstrated skills. However, the amount of unexplained variance decreased when predictors were added to account for educator demonstrated skills. The estimate of residual variance decreased from 0.028 in the unconditional model to 0.025 when all Level 1, 2, and 3 predictors were included in the multilevel model. The decrease in residual variance suggested that the addition of variables across levels increased the predictive utility of the model.

Research Question 3

Research question 3 examined the relationship between training and technical assistance and implementation of a PS/RtI model. Two measures were used to address this research question. The SAPSI, an implementation monitoring tool completed by SBLT members, was used to determine self-reported implementation levels in the 40 pilot schools. The SAPSI was completed by the pilot schools at the beginning and end of the school year. The Tier I and II Critical Components Checklist, a permanent product review protocol completed by PS/RtI Coaches, was used to determine implementation levels in the pilot and comparison schools. The Tier I and II Critical Components Checklist was completed by the coaches at the beginning, middle, and end of the year. Two separate models were conducted using these two measures. Specifically, the dependent variable for each model was:

1) The overall implementation levels of pilot schools as measured by the average item score on the SAPSI
2) The overall implementation levels across pilot and comparison schools as measured by the average item score on the Tier I and II Critical Components Checklist

Self-Report of PS/RtI Implementation in Pilot Schools

Assumptions. Assumptions of multilevel modeling procedures were examined before conducting inferential analyses. The normality assumption was examined for the SAPSI data and the Level 2 predictors to be entered into the model. Skewness and kurtosis values for the average SAPSI item score were 0.70 and 0.44, respectively, indicating a relatively normal distribution. Skewness values for Level 2 predictors ranged from -0.85 to 3.29 with the majority of estimates less than 2. Kurtosis values for these predictors ranged from -1.94 to 9.04. These two statistics indicated variability in the distribution of the data for Level 2 predictors; however, it should be noted that the majority of values exceeding 2 were associated with categorical variables (e.g., district membership). The relatively smaller sample size used to address this research question suggests that the results of the multilevel modeling procedures described below should be interpreted with some caution given the violation of the normality assumption noted for some variables (Raudenbush & Bryk, 2002).

The assumption that missing data were randomly distributed was examined next using the procedures described for research questions 1 and 2. Although the majority of data points were present, a few significant correlations as high as 1.0 (p<.01) among items within and across administrations of the SAPSI were found. These findings indicate that some missing data from the SAPSI were related, resulting in a violation of the randomly distributed missing data assumption. Given that multilevel models are sensitive
to violations of this assumption, these findings provide additional evidence that results from the multilevel model procedures discussed below should be interpreted with caution (Raudenbush & Bryk, 2002). All data for Level 2 predictors were present, indicating that the assumption of random missing data was met for Level 2 variables.

Finally, the assumption that the data were nested was examined by calculating the ICC from the unconditional SAPSI model. The ICC estimate derived was .04, indicating that a small amount of variance in scores on the SAPSI was associated with the school. Although this estimate was smaller than estimates derived from the previously discussed models, multilevel model procedures were used because the ICC suggested that the data were not completely independent.

Descriptive Data. Pilot school self-reported implementation levels were derived by calculating the average rating across items on the SAPSI. These average SAPSI scores were calculated at the beginning and end of the year to determine what changes occurred in the reported implementation levels. Table 7a includes average SAPSI item score data. The reported levels of implementation in pilot schools increased from the beginning (Mean=0.82, SD=0.43) to the end (Mean=1.49, SD=0.49) of the school year.

Table 7a
Self-Assessment of Problem Solving Implementation Multi-Level Model Data: Descriptive Data for the Total Sample and Beginning to End of Year Comparisons

Level 1 Variables  n  Mean (SD)  Skewness  Kurtosis
Average SAPSI Item Score  80  1.16 (0.57)  0.70  0.44
Beginning of Year  40  0.82 (0.43)  0.85  0.15
End of Year  40  1.49 (0.50)  1.13  0.69
Note. SAPSI = Self-Assessment of Problem Solving Implementation.
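The nesting check described above computes the ICC from an unconditional (intercept-only) model as the share of total variance attributable to schools. A minimal sketch of that computation follows; the data file and column names are hypothetical, and the value of .04 reported above came from the Project's own analyses, not from this sketch.

```python
# Minimal sketch: intraclass correlation from an unconditional two-level model.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

sapsi = pd.read_csv("sapsi_long.csv")  # one row per school per administration

unconditional = smf.mixedlm(
    "avg_item_score ~ 1", data=sapsi, groups=sapsi["school"]
).fit()

between = unconditional.cov_re.iloc[0, 0]  # variance of school intercepts
within = unconditional.scale               # residual (within-school) variance
icc = between / (between + within)
print(f"ICC = {icc:.2f}")                  # small values indicate little school-level clustering
```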

Descriptive data also were examined for the Level 2 variables to be entered into the model predicting self-reported implementation levels. Level 2 predictor (i.e., school-level variable) descriptive statistics were calculated differentially for continuous versus categorical variables. Table 7b includes the means and standard deviations of continuous school-level variables at the observation level (i.e., means and standard deviations take into account the number of schools for which data were available). Table 7c includes frequency data for district membership (a categorical variable). Overall, the data indicated variability in the school-level variables (e.g., school demographics, staff size, district membership) associated with the 40 pilot schools that completed the SAPSI.
Table 7b
Self-Assessment of Problem Solving Implementation Multi-Level Model Data: Level 2 Continuous Predictor Descriptive Statistics

Level 2 Predictors  Mean (SD)  n  Skewness  Kurtosis
School Demographics
School Size  673.70 (230.72)  80  0.85  1.82
Staff Size  49.73 (15.66)  80  0.58  1.78
Proportion White Students  0.54 (0.27)  80  -0.74  -0.49
Proportion Black Students  0.24 (0.24)  80  1.39  1.06
Proportion Hispanic Students  0.15 (0.11)  80  1.20  0.50
Proportion Asian Students  0.02 (0.03)  80  1.87  3.80
Proportion Native Students  0.00 (0.00)  80  2.24  7.32
Proportion Multiracial Students  0.04 (0.02)  80  0.15  -0.80
Proportion Male  0.52 (0.02)  80  -0.85  0.90
Proportion Free-Reduced Lunch  0.53 (0.24)  80  -0.34  -0.85
Proportion English Language Learners  0.11 (0.13)  80  1.69  2.52
Proportion Students with Disabilities  0.16 (0.06)  80  0.47  0.05
Average % SBLT Members Days Present  0.84 (0.11)  80  0.09  -1.94
Average FCAT Baseline Score  312.51 (19.92)  76  0.25  -0.34
Coaching Variables
Number of Coach Provided Trainings  5.45 (5.19)  80  1.95  3.18
Coach Provided Training Hours  20.53 (16.84)  80  1.74  3.04
Number of Coach TA Sessions  24.28 (16.31)  80  1.35  0.89
Coach Provided TA Session Hours  57.35 (34.83)  80  1.17  0.58
Note. n represents the number of observations with data associated with the corresponding variable. Skewness and kurtosis values were calculated from data across time points. Means and standard deviations for Average % SBLT Members Days Present and the coaching variables are based on end-of-year data only. FCAT = Florida Comprehensive Assessment Test; SAPSI = Self-Assessment of Problem Solving Implementation; TA = Technical Assistance.
Table 7c
Self-Assessment of Problem Solving Implementation Multi-Level Model Data: Level 2 Categorical Predictor Descriptive Data

Level 2 Predictors  Frequencies (%)  Skewness  Kurtosis
District Membership
District A  6 (7.50)  3.29  9.04
District B  12 (15.00)  2.00  2.04
District C  12 (15.00)  2.00  2.04
District D  14 (17.50)  1.74  1.07
District E  12 (15.00)  2.00  2.04
District F  6 (7.50)  3.29  9.04
District G  12 (15.00)  2.00  2.04
District H  6 (7.50)  3.29  9.04
Note. Percent of observations in the corresponding category is included in parentheses. SAPSI = Self-Assessment of Problem Solving Implementation.

SAPSI Multilevel Model Results. A 2-level multilevel model was examined to determine what factors predicted reported levels of implementation. The average item score on the SAPSI was entered as the dependent variable in the analysis. Time (i.e., beginning versus end of the year SAPSI scores) was entered as the Level 1 predictor of reported implementation. Time was zero-centered to facilitate interpretation of the results.

Level 2 predictors included school-level variables. School demographics (e.g., size, racial composition by group, poverty levels), staff size, district membership, the average proportion of SBLT members attending trainings, previous student performance, and the amount of coaching received were predictors entered at Level 2. All variables were entered into the model for the 40 pilot schools using the same procedures described above for the previously examined models. Using the steps discussed above to find a
model that would converge, intercepts and slopes were allowed to vary; however, the covariance between intercepts and slopes was forced to be zero.

Prior to running the full 2-level model, time was entered as a Level 1 predictor to determine if increases in reported implementation noted in the descriptive analyses were statistically significant. Time, when entered into the model without any Level 2 predictors, was a significant predictor of reported implementation (Estimate=0.67, t=10.03, p<.01). These findings indicated that implementation as reported by SBLT members increased from the beginning to the end of the school year. When Level 2 predictors were added into the model predicting reported implementation, time remained a significant predictor (Estimate=2.41, t=2.70, p=.01) while controlling for other variables. The addition of Level 2 variables to the model resulted in significant interactions between time and some predictors. The results discussed below suggested that time significantly contributed to the model when associated with some school-level variables as well.

Level 2 variables entered into the model produced several significant predictors of reported implementation levels. Significant school-level predictors of implementation were the proportion of students eligible for free-reduced lunch attending the school (Estimate=1.00, t=2.12, p=.04) and being a school in District C (Estimate=0.70, t=2.36, p=.02) or District F (Estimate=-0.56, t=-2.42, p=.02). These results indicated that higher proportions of students eligible for free-reduced lunch and being a school in District C predicted higher levels of reported implementation while controlling for other predictors. Conversely, being a school in District F predicted lower reported levels of implementation while controlling for other predictors. No other school demographic
(e.g., size) variables, staff size, nor working in a school in any of the other six demonstration districts produced significant main effects.

When the interactions between school-level variables and time were examined, the interactions between time and the proportion of male students (Estimate=8.26, t=2.37, p=.02), the proportion of students eligible for free-reduced lunch (Estimate=-2.41, t=-3.12, p<.01), and being a school in District C (Estimate=-1.39, t=-2.60, p=.01) were significant. Higher proportions of male students predicted increasing levels of implementation across the school year while controlling for other predictors. Conversely, higher proportions of students eligible for free-reduced lunch and being a school in District C predicted decreasing implementation levels across the year while controlling for other predictors. Thus, although significant main effects suggested that higher proportions of students eligible for free-reduced lunch and being a school in District C predicted higher levels of reported implementation, the significant interaction effects for these variables suggest predictions of decreasing levels of reported implementation across time. Other school demographic variables, staff size, membership in the other seven demonstration districts, coach training provided to schools, and previous FCAT performance did not differentially predict changing implementation levels across time. See Table 7d below for data on the extent to which each predictor entered into the 2-level model predicted pilot school reported implementation levels.
Table 7d
Self-Assessment of Problem Solving Implementation Multi-Level Model Data: 2-Level Model Predicting Implementation

Predictors  Estimate  SE  t  p
Level 1
SAPSI Intercept  0.74  0.20  3.64*  <.01
Time (Slope)  2.41  0.89  2.70*  .01
Level 2
Intercepts
School Size  -0.00  0.00  -0.47  .64
Staff Size  0.01  0.01  1.12  .27
Proportion White Students Attending School  4.28  3.74  1.14  .26
Proportion Black Students Attending School  5.19  4.02  1.29  .21
Proportion Hispanic Students Attending School  4.13  3.94  1.05  .30
Proportion Asian Students Attending School  5.81  4.46  1.30  .20
Proportion Native American Students Attending School  -12.39  22.06  -0.56  .58
Proportion Multiracial Students Attending School  0  .  .
Proportion Male Students Attending School  -2.05  2.50  -0.82  .42
Proportion Students Eligible for Free-Reduced Lunch Attending School  1.00  0.47  2.12*  .04
Proportion English Language Learner Students Attending School  -0.63  0.59  -1.08  .29
Proportion Students with Disabilities Attending School  -1.98  1.43  -1.39  .18
District A Membership  0.12  0.31  0.38  .70
District B Membership  0.07  0.45  0.15  .88
District C Membership  0.70  0.30  2.36*  .02
District D Membership  0.01  0.21  0.03  .97
District E Membership  -0.13  0.24  -0.54  .59
District F Membership  -0.56  0.23  -2.42*  .02
Table 7d continued
Self-Assessment of Problem Solving Implementation Multi-Level Model Data: 2-Level Model Predicting Implementation

Predictors  Estimate  SE  t  p
District G Membership  0.18  0.23  0.78  .44
District H Membership  0  .  .
Average FCAT Baseline Score  0.01  0.00  1.12  .27
Slopes
School Size*Time  -0.00  0.00  -1.11  .27
Staff Size*Time  0.00  0.01  0.20  .84
Proportion White Students Attending School*Time  3.41  5.43  0.63  .53
Proportion Black Students Attending School*Time  2.84  5.94  0.48  .64
Proportion Hispanic Students Attending School*Time  5.27  5.90  0.89  .38
Proportion Asian Students Attending School*Time  1.59  6.23  0.25  .80
Proportion Native American Students Attending School*Time  -16.68  30.75  -0.54  .59
Proportion Multiracial Students Attending School*Time  0  .  .
Proportion Male Students Attending School*Time  8.26  3.48  2.37*  .02
Proportion Students on Free-Reduced Lunch Attending School*Time  -2.41  0.77  -3.12*  <.01
Proportion English Language Learner Students Attending School*Time  -0.16  0.77  -0.21  .84
Proportion Students with Disabilities Attending School*Time  3.51  1.97  1.78  .08
District A Membership*Time  -0.81  0.41  -1.98  .06
District B Membership*Time  0.79  0.65  1.21  .24
District C Membership*Time  -1.39  0.53  -2.60*  .01
District D Membership*Time  -0.34  0.41  -0.83  .42
District E Membership*Time  0.41  0.41  1.01  .32
District F Membership*Time  0.33  0.31  1.06  .30
District G Membership*Time  -0.47  0.34  -1.37  .18
District H Membership*Time  0  .  .
Table 7d continued
Self-Assessment of Problem Solving Implementation Multi-Level Model Data: 2-Level Model Predicting Implementation

Predictors  Estimate  SE  t  p
Proportion of Days SBLT Members Attended Training*Time  -1.78  0.88  -2.02  >.05
Average FCAT Baseline Score*Time  -0.01  0.01  -1.00  .33
Coach Provided Training Number*Time  0.05  0.05  1.13  .27
Coach Provided Training Hours*Time  -0.03  0.02  -1.87  .07
Coach Provided TA Session Number*Time  0.02  0.01  1.70  .10
Coach Provided TA Session Hours*Time  -0.00  0.00  -1.00  .32
Note. * p < .05; df = 31. FCAT = Florida Comprehensive Assessment Test; SAPSI = Self-Assessment of Problem Solving Implementation; SBLT = School-Based Leadership Team; TA = Technical Assistance.

Random effects for intercepts and slopes were examined to determine if the average SAPSI item score significantly varied across schools and within schools across time, respectively. Neither intercepts (Estimate=0.02, SE=0.02, z=1.05, p=.15) nor slopes (Estimate=0.01, SE=0.03, z=0.38, p=.35) significantly varied. These results indicated that neither the average SAPSI item score nor the change in scores over time significantly varied across participating schools. Because the model would not converge using an unstructured covariance matrix, the covariance between intercepts and slopes remained at zero.

Residual variance also was examined to determine the extent to which unexplained variance in reported implementation levels existed after adding predictors to the model. Residual variance was not significant in the full model (Estimate=0.03, SE=0.02, z=1.59, p=.06), indicating that the multilevel model may have accounted for the majority of variance in implementation levels across schools and time nested within
schools. In addition, the amount of unexplained variance decreased each time predictors were added to account for implementation levels. The estimate of residual variance decreased from 0.31 in the unconditional model to 0.03 when all Level 1 and 2 predictors were included in the multilevel model. The decrease in residual variance suggests that the addition of variables across levels improved the predictive utility of the model.

PS/RtI Implementation Levels Evident from Permanent Products

Assumptions. Assumptions of multilevel modeling procedures were examined before conducting inferential analyses. The normality assumption was examined for the Tier I and II Critical Components Checklist data from Year 1 and the Level 2 predictors to be entered into the model. Skewness and kurtosis values for the average Tier I and II Critical Components Checklist item score were 0.65 and -0.57, respectively, indicating a relatively normal distribution. Skewness values for Level 2 predictors ranged from -0.82 to 3.44 with the majority of estimates less than 2. Kurtosis values for these predictors ranged from -1.98 to 9.93. These two statistics indicated variability in the distribution of the data for Level 2 predictors. Given the relatively small sample size used to address this research question, the results of the multilevel modeling procedures described below should be interpreted with some caution (Raudenbush & Bryk, 2002).

The assumption that missing data were randomly distributed was examined next using the procedures described previously. Although the majority of data points were present, a few significant correlations as high as 1.0 (p<.01) among items within and across administrations of the Tier I and II Critical Components Checklist were found. These findings indicated that some missing data from the Tier I and II Critical Components Checklist were related, resulting in a violation of the randomly distributed
missing data assumption. Given that multilevel models are sensitive to violations of this assumption, these findings provide additional evidence that results from the multilevel model procedures discussed below should be interpreted with caution (Raudenbush & Bryk, 2002).

Finally, the assumption that the data were nested was examined by calculating the ICC from the unconditional Tier I and II Critical Components Checklist model. The ICC estimate derived was .85, indicating a nested data structure. This statistic indicates that the assumption of nested data was met, providing support for the use of multilevel modeling procedures.

Descriptive Data. Implementation levels were derived by calculating the average score across items on the Tier I and II Critical Components Checklist. Average item scores were calculated for checklists completed assessing implementation for academic content areas only. These average Tier I and II Critical Components Checklist scores were calculated at the beginning, middle, and end of the year to determine what changes occurred in implementation levels. Scores were available for 61, 64, and 61 of the participating schools at the beginning, middle, and end of the year, respectively. Checklists for the remaining 9 to 13 schools had not yet been submitted at the time analyses were conducted. Missing checklists were from both pilot and comparison schools. The primary reasons for missing checklists were coach turnover (i.e., one coach moved prior to the conclusion of the school year) and incorrectly completed checklists (i.e., one coach submitted checklists that did not follow standardized procedures). Table 8a includes average Tier I and II Critical Components Checklist item score data as well as the number of pilot and comparison schools for which checklists were available.
Table 8a
Tier I and II Critical Components Checklist Multi-Level Model Data: Descriptive Data from the Beginning, Middle, and End of Year 1 for the Total Sample and Pilot versus Comparison Schools

Level 1 Variables  n  Mean (SD)  Skewness  Kurtosis
Average Item Score  186  0.47 (0.42)  0.65  -0.57
Beginning of Year  61  0.44 (0.40)  0.51  -0.89
Middle of Year  64  0.49 (0.44)  0.69  -0.56
End of Year  61  0.48 (0.43)  0.73  -0.40
Average Item Score: Pilot versus Comparison Schools
Pilot Schools  103  0.63 (0.45)  0.11  -1.10
Beginning of Year  33  0.60 (0.44)  -0.15  -1.26
Middle of Year  36  0.66 (0.47)  0.19  -1.14
End of Year  34  0.65 (0.47)  0.19  -1.05
Comparison Schools  83  0.26 (0.25)  0.71  -0.75
Beginning of Year  28  0.25 (0.24)  0.75  -0.34
Middle of Year  28  0.27 (0.27)  0.75  -0.79
End of Year  27  0.27 (0.26)  0.67  -0.96

Levels of implementation in participating schools increased slightly from the beginning (Mean=0.44, SD=0.40) to the middle (Mean=0.49, SD=0.44) of the year. A slight decrease occurred from the middle to the end (Mean=0.48, SD=0.43) of the school year; however, the level of implementation remained slightly higher than at the beginning of the year. Descriptive data also were disaggregated by school status to examine implementation levels in pilot versus comparison schools. Implementation levels in pilot schools increased slightly from the beginning (Mean=0.60, SD=0.44) to the middle (Mean=0.66, SD=0.47) of the year. Consistent with the overall trend, a slight decrease in level occurred from the middle to the end (Mean=0.65, SD=0.47) of the year, although the end-of-year level remained higher than the beginning-of-year level. Comparison school
implementation data indicated lower levels of, and smaller increases in, PS/RtI implementation than in pilot schools. Implementation levels in comparison schools increased slightly from the beginning (Mean=0.25, SD=0.24) to the middle (Mean=0.27, SD=0.27) of the year. Products examined at the end of the year suggested that the implementation level remained the same from the middle to the end (Mean=0.27, SD=0.26) of the year.

Descriptive data examined for the Level 2 variables to be entered into the model predicting implementation levels were calculated differentially for continuous versus categorical variables. Table 8b includes the means and standard deviations of continuous school-level variables at the observation level (i.e., means and standard deviations take into account the number of schools for which data are available). Table 8c includes frequency data for school status and district membership (categorical variables). Overall, the data indicate variability in the school-level variables (e.g., school demographics, staff size, district membership) associated with the 64 schools on which Tier I and II Critical Components Checklists were completed correctly during Year 1.
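Disaggregating the checklist scores by school status and data-collection window, as summarized above and in Table 8a, amounts to a grouped summary of the average item scores. A minimal sketch under hypothetical file and column names follows.

```python
# Minimal sketch: average checklist item scores by school status and time window.
# File and column names are hypothetical.
import pandas as pd

checklists = pd.read_csv("tier1_tier2_checklists.csv")  # one row per school per window

summary = (
    checklists.groupby(["school_status", "window"])["avg_item_score"]
    .agg(["count", "mean", "std"])
    .round(2)
)
print(summary)  # e.g., pilot vs. comparison means at the beginning, middle, and end of the year
```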

Table 8b
Tier I and II Critical Components Checklist Multi-Level Model Data: Level 2 Continuous Predictor Descriptive Statistics

Level 2 Predictors  Mean (SD)  n  Skewness  Kurtosis
School Demographics
School Size  711.08 (224.92)  219  0.51  0.59
Staff Size  52.27 (15.85)  219  0.40  0.59
Proportion White Students  0.56 (0.28)  219  -0.82  -0.57
Proportion Black Students  0.25 (0.27)  219  1.38  0.68
Proportion Hispanic Students  0.13 (0.10)  219  1.47  1.67
Proportion Asian Students  0.02 (0.02)  219  2.19  5.56
Proportion Native Students  0.00 (0.00)  219  1.98  5.05
Proportion Multiracial Students  0.04 (0.02)  219  0.23  -0.77
Proportion Male  0.52 (0.03)  219  -0.81  3.07
Proportion Free-Reduced Lunch  0.52 (0.25)  219  -0.19  -1.10
Proportion English Language Learners  0.11 (0.13)  219  1.72  2.40
Proportion Students with Disabilities  0.17 (0.06)  219  0.41  0.54
Average % SBLT Members Days Present  0.84 (0.11)  120  -0.10  -1.90
Coaching Variables
Number of Coach Trainings  2.73 (3.04)  80  2.22  5.02
Coach Training Hours  10.26 (10.83)  80  2.06  4.24
Number of Coach Technical Assistance Sessions  12.14 (10.10)  80  1.75  3.07
Coach Technical Assistance Session Hours  28.68 (22.76)  80  1.71  3.08
Average FCAT Score from Baseline Years  314.23 (19.68)  213  0.22  -0.48
Previous Years' Implementation Level  0.15 (0.20)  186  1.94  3.71
Note. n represents the number of observations with data associated with the corresponding variable. Skewness and kurtosis values were calculated from data across time points. Means, standard deviations, and ns for Average % SBLT Members Days Present and the coaching variables are based on pilot school data entered; all comparison school values equal 0. FCAT = Florida Comprehensive Assessment Test; SBLT = School-Based Leadership Team.
Table 8c
Tier I and II Critical Components Checklist Multi-Level Model Data: Level 2 Categorical Predictors Descriptive Data

Level 2 Predictors  Frequencies (%)  Skewness  Kurtosis
School Status  —  -0.19  -1.98
Pilot School  120 (54.79)
Comparison School  99 (45.21)
District Membership
District A  18 (8.22)  3.06  7.45
District B  36 (16.44)  1.82  1.34
District C  24 (10.96)  2.52  4.37
District D  36 (16.44)  1.82  1.34
District E  36 (16.44)  1.82  1.34
District F  18 (8.22)  3.06  7.45
District G  36 (16.44)  1.82  1.34
District H  15 (6.85)  3.44  9.93
Note. Percent of observations in the corresponding category is included in parentheses.

Tier I and II Critical Components Checklist Multilevel Model Results. A 2-level multilevel model was examined to determine what factors predicted PS/RtI implementation. The average item score on the Tier I and II Critical Components Checklist was entered as the dependent variable in the analysis. Time (i.e., beginning, middle, and end of the year Tier I and II Critical Components Checklist scores) was entered as the Level 1 predictor of implementation. Time was zero-centered to facilitate interpretation of the results.

Level 2 predictors included school-level variables. School demographics (e.g., size, racial composition by group, poverty levels), staff size, district membership, the average proportion of SBLT members attending trainings, previous student performance, the amount of coaching received, previous years' implementation level, and school status
were predictors entered at Level 2. All variables were entered into the model for the schools using the same procedures described for the previously examined models. Previous years' implementation level, a predictor unique to this model, was calculated by averaging Tier I and II Critical Components Checklist scores across the three baseline years and data collection windows (i.e., one score was entered for each school). Using the steps discussed above to find a model that would converge, intercepts and slopes were allowed to vary; however, the covariance between intercepts and slopes remained at zero.

Prior to running the full 2-level model, time was entered as a Level 1 predictor to determine if increases in implementation noted in the descriptive analyses were statistically significant. Time, when entered into the model without any Level 2 predictors, did not significantly predict implementation (Estimate=0.02, t=0.95, p=.34). These findings indicated that the slight increases in implementation noted from the beginning to the end of the school year did not occur beyond chance. When Level 2 predictors were added into the model predicting implementation, no interaction effects between time and the predictors were found, providing further evidence that time did not significantly contribute to the model.

Level 2 predictors entered into the model produced three significant main effects. Significant school-level predictors of implementation were being a school in District D (Estimate=0.73, t=2.13, p=.04) or District G (Estimate=0.96, t=3.02, p<.01) and the average proportion of SBLT members attending trainings across all participating schools (Estimate=2.01, t=2.34, p=.02). These results indicated that being a school in District D or District G, or having higher average proportions of SBLT members attend trainings, predicted higher levels of implementation while controlling for other predictors.
No other main or interaction effects significantly contributed to the model. See Table 8d below for data on all predictors entered into the 2-level model predicting implementation levels.

Table 8d
Tier I and II Critical Components Checklist Multi-Level Model Data: 2-Level Model Predicting Implementation

Predictors  Estimate  SE  t  p
Level 1
Implementation Intercept  0.92  0.47  1.98  >.05
Time (Slope)  -0.38  0.29  -1.32  .19
Level 2
Intercepts
School Size  0.00  0.00  0.79  .43
Staff Size  -0.01  0.01  -1.12  .27
Proportion White Students Attending School  -3.94  5.10  -0.77  .44
Proportion Black Students Attending School  -1.65  5.13  -0.32  .75
Proportion Hispanic Students Attending School  -4.07  5.30  -0.77  .45
Proportion Asian Students Attending School  -9.20  6.49  -1.42  .16
Proportion Native American Students Attending School  -6.59  26.11  -0.25  .80
Proportion Multiracial Students Attending School  0  .  .
Proportion Male Students Attending School  2.95  2.75  1.07  .29
Proportion Students Eligible for Free-Reduced Lunch Attending School  -0.50  0.64  -0.77  .44
Proportion English Language Learner Students Attending School  0.21  0.84  0.25  .80
Proportion Students with Disabilities Attending School  -0.45  1.83  -0.25  .80
Pilot School Membership  -1.36  0.79  -1.72  .09
District A Membership  0.44  0.37  1.17  .25
District B Membership  -0.55  0.60  -0.93  .36
District C Membership  -0.23  0.51  -0.45  .65
District D Membership  0.73  0.34  2.13*  .04
Table 8d continued
Tier I and II Critical Components Checklist Multi-Level Model Data: 2-Level Model Predicting Implementation

Predictors  Estimate  SE  t  p
District E Membership  0.25  0.38  0.66  .51
District F Membership  0  .  .
District G Membership  0.96  0.32  3.02*  <.01
District H Membership  0  .  .
Coach Variables
Number of Trainings Provided  0.07  0.08  0.90  .37
Training Hours Provided  -0.02  0.02  -0.83  .41
Number of Technical Assistance Sessions Provided  0.00  0.02  0.02  .98
Technical Assistance Hours Provided  0.01  0.01  0.47  .64
Average Proportion of SBLT Members Attending Trainings  2.01  0.86  2.34*  .02
Average FCAT Baseline Years Score  0.00  0.01  0.41  .68
Previous Years' Implementation Level  -0.57  0.56  -1.01  .32
Slopes
School Size*Time  -0.00  0.00  -0.21  .84
Staff Size*Time  0.00  0.01  0.53  .60
Proportion White Students Attending School*Time  3.50  3.13  1.12  .27
Proportion Black Students Attending School*Time  2.78  3.14  0.88  .38
Proportion Hispanic Students Attending School*Time  3.92  3.29  1.19  .24
Proportion Asian Students Attending School*Time  5.26  3.89  1.35  .18
Proportion Native American Students Attending School*Time  10.54  16.86  0.63  .53
Proportion Multiracial Students Attending School*Time  0  .  .
Proportion Male Students Attending School*Time  -1.67  1.68  -0.99  .32
Proportion Students on Free-Reduced Lunch Attending School*Time  0.15  0.42  0.35  .73
Proportion English Language Learner Students Attending School*Time  -0.28  0.50  -0.56  .58
Proportion Students with Disabilities Attending School*Time  0.48  1.08  0.44  .66
Table 8d continued
Tier I and II Critical Components Checklist Multi-Level Model Data: 2-Level Model Predicting Implementation

Predictors  Estimate  SE  t  p
Pilot School Membership*Time  0.82  0.46  1.77  .08
District A Membership*Time  0.01  0.23  0.06  .95
District B Membership*Time  0.19  0.36  0.52  .60
District C Membership*Time  0.06  0.39  0.14  .89
District D Membership*Time  -0.25  0.20  -1.26  .21
District E Membership*Time  -0.17  0.23  -0.77  .44
District F Membership*Time  0  .  .
District G Membership*Time  -0.19  0.19  -1.01  .31
District H Membership*Time  0  .  .
Coach Variables
Number of Trainings Provided*Time  -0.03  0.06  -0.49  .62
Training Hours Provided*Time  0.01  0.02  0.66  .51
Number of Technical Assistance Sessions Provided*Time  -0.01  0.02  -0.39  .70
Technical Assistance Hours Provided*Time  -0.00  0.01  -0.05  .96
Average Proportion of SBLT Members Attending Trainings*Time  -0.91  0.50  -1.83  .07
Average FCAT Baseline Years Score*Time  -0.00  0.01  -0.28  .78
Previous Years' Implementation Level*Time  0.39  0.40  0.97  .33
Note. * p < .05; df = 67. FCAT = Florida Comprehensive Assessment Test; SBLT = School-Based Leadership Team.

Random effects for intercepts and slopes were examined to determine if the average Tier I and II Critical Components Checklist item score significantly varied across participating schools and across time nested within schools, respectively. Intercepts significantly varied (Estimate=0.04, SE=0.02, z=1.96, p=.03) while slopes (Estimate=0.01, SE=0.01, z=0.92, p=.18) did not. These results indicated that the average Tier I and II Critical Components Checklist item score significantly varied across schools; however, the change in scores across the year did not significantly vary
across schools. Because the model would not converge using an unstructured covariance matrix, the covariance between intercepts and slopes remained at zero.

Residual variance also was examined to determine the extent to which unexplained variance in implementation levels existed after adding predictors to the model. Residual variance was significant in the full 2-level model (Estimate=0.02, SE=0.01, z=2.76, p<.01), indicating that the multilevel model did not account for all of the variance in implementation scores. However, the amount of unexplained variance decreased each time predictors were added to account for implementation levels. The estimate of residual variance decreased from 0.03 in the unconditional model to 0.02 when all Level 1 and 2 predictors were included in the multilevel model. The decrease in residual variance suggests that the addition of variables across levels increased the predictive utility of the model.

Summary of Results

The three research questions addressed examined the relationship between training and technical assistance and Year 1 outcomes targeted by the Project. Research question 1 investigated the relationship between training and technical assistance and the reported beliefs and perceived RTI-A and RTI-B skills of participating educators. One common variable associated with Project activities that significantly contributed to predictions of the average item scores on the Beliefs Survey and Perceptions of RtI Skills Survey (divided into RTI-A and RTI-B scores) was the interaction between SBLT membership (i.e., the group of educators who received 5 full-day PS/RtI trainings from Project staff) and time. Regardless of the outcome measure used to address research question 1, membership on an SBLT predicted increases in scores across the school

year while controlling for other variables. Working in a pilot school also contributed to the predictions of the average item scores for all outcome measures. The interaction between working in a pilot school and time significantly predicted increases in perceived RTI-A and RTI-B skills while controlling for other predictors. Although working in a pilot school did not predict increases over time in belief scores, the variable did predict higher levels of beliefs. Project activity variables that contributed differentially to the three models used to address research question one were the interactions between time and the number of coach provided technical assistance sessions, the total hours of coach provided technical assistance sessions, and the average proportion of SBLT members present at Project trainings. The number of coach provided technical assistance sessions significantly predicted increasing belief scores from the beginning to the end of the year while controlling for other predictors. Conversely, the total hours of technical assistance sessions provided predicted decreasing belief scores. Neither interaction term significantly contributed to either of the perception of skills models. The proportion of SBLT members present at Project trainings predicted decreases in average RTI-A and RTI-B skill scores across the year while controlling for other predictors but did not significantly contribute to the beliefs model.

Research question two examined the relationship between training and technical assistance and the demonstrated skills of educators. Two variables associated with Project activities significantly contributed to the model predicting demonstrated skills. The main effect of time and the interaction between time and the average proportion of SBLT members predicted demonstrated skills while controlling for other predictors. The main


effect of time significantly predicted lower levels of demonstrated skills. Higher proportions of SBLT members attending trainings predicted decreasing skills across the year.

Finally, research question three examined the relationship between training and technical assistance and PS/RtI implementation. For the model examining self-reported implementation in pilot schools only, the main effect of time significantly contributed to predictions of reported implementation level. Time significantly predicted increases in SAPSI average item scores across the year while controlling for other predictors. No variables associated with Project activities (i.e., the number or duration of coach provided trainings and technical assistance sessions, the average proportion of SBLT members attending Project trainings, nor the interactions among these variables and time) significantly contributed to the model.

The model examining implementation levels evident in permanent products from pilot and comparison schools produced one significant predictor associated with Project activities. The average percentage of SBLT members attending Project trainings significantly contributed to predictions of the average item score on the Tier I and II Critical Components Checklist. Higher proportions of SBLT members attending trainings predicted higher scores on the checklists while controlling for other predictors. No other predictors associated with Project activities (i.e., school status, the number or duration of coach provided trainings and technical assistance sessions, nor the interactions among these variables and time) nor the main effect of time significantly contributed to the model.


Chapter V: Discussion

The three research questions addressed in this study examined the relationship between PS/RtI training and technical assistance and several Year 1 outcomes targeted by the PS/RtI Project. Specifically, Year 1 targets included the (1) beliefs and perceived skills (RTI-A and RTI-B) of educators exposed to a PS/RtI model, (2) the demonstrated skills of SBLT members receiving direct training from Project staff, and (3) implementation of a PS/RtI model with a particular focus on Tier I. Training and technical assistance provided to pilot schools to facilitate attainment of these goals occurred at two levels. These levels differed in terms of delivery and intensity.

Project staff provided 5 full-day trainings on PS/RtI concepts and skills to SBLT members selected by each pilot school. These trainings occurred throughout the school year. Topics covered included the rationale for implementation, systems change principles, and the four steps of the PS/RtI model. Consistent with Showers, Joyce, and colleagues’ research on effective professional development models (Joyce & Showers, 1996; Showers et al., 1987), Project staff delivering these trainings provided the rationale for each skill taught, modeled the skills, allowed SBLT members practice opportunities, and provided feedback following SBLT skill practice.

PS/RtI Coaches provided the second level of training and technical assistance to pilot schools. PS/RtI Coaches engaged in supplemental training of SBLT members as well as training of pilot school staff members. PS/RtI Coaches were the primary


202 202 providers of technical assistance to SBLT members and pilot school staff as well. Because the Project adopted a systems change perspective based on the current literature (e.g., Curtis et al., 2008), the second level of training and t echnical assistance provided to pilot schools varied. PS/RtI Coaches were instru cted to engage in tr aining and technical assistance activities that matched the goals and needs of the schools they supported. Although the specific activ ities coaches engaged in differe d across sites, Project staff trained the coaches to use the systems cha nge and professional development models adopted by the Project to facilitate identification of school needs. The analyses conducted in this study were intended to provide information on the extent to which the two levels of PS/RtI trai ning and technical assist ance provided related to Year 1 targets identified by the Project. Interpreta tions of the findings discussed in the Results section were considered in the context of two factors. One factor that influenced interpretations was the quasi-experimental de sign used to address the research question. Although comparison schools were included in the design and attempts were made to measure differences between the services de livered to pilot versus comparison schools, Project staff could not contro l all the extraneous variable s (see Johnson & Christensen, 2004 for a discussion of quasi-experimental desi gns and extraneous va riables) that could potentially impact the outcomes examined as part of this study (e.g., district provided training and technical assist ance opportunities, polic ies and procedures across sites). Therefore, significant relationships between va riables associated w ith PS/RtI training and technical assistance activities and Year 1 Proj ect outcomes are not discussed in terms of cause and effect. Rather potential explan ations for the findings are provided and discussed in the context of the current research-base supporting the findings.


The other factor impacting interpretations was that the study examined the relationship between PS/RtI training and technical assistance and outcomes at the end of the first year of a multi-year project. Previous attempts to implement a PS/RtI model suggest a minimum of 4-6 years is required for full implementation to occur (Batsche, Elliott, Schrag, et al., 2005). However, little is known about what incremental outcomes should be expected (e.g., expectations for progress at the end of the first year) to predict successful implementation in 4-6 years. Thus, in addition to considering the research design used and the current literature base, all findings should be considered preliminary and not be used to make summative statements regarding the effectiveness of the training and technical assistance provided.

Given the quasi-experimental research design used and the preliminary nature of the study’s results, the discussion below is organized into six sections. First, potential explanations for the extent to which PS/RtI training and technical assistance activities related to Year 1 outcomes are discussed. Second, educator and school demographic variables’ relationships with Project outcomes are explored. Third, potential implications for future PS/RtI training, technical assistance, and other Project activities are provided. Fourth, potential implications for future research are explored. Fifth, limitations to the study conducted are discussed in terms of potential impact on the analyses conducted and interpretation of the results. Finally, general conclusions following Year 1 of the Project are provided.


Potential Explanations for Year 1 Findings

Educator Beliefs and Perceived RtI Skills

Three multi-level models examined the relationships among educators’ beliefs and perceived skills (RTI-A and RTI-B) and PS/RtI training and technical assistance. PS/RtI training and technical assistance variables entered into the multi-level models included membership on a SBLT receiving training from Project staff, status as a pilot school receiving two levels of training and technical assistance from the Project, the number and duration (total hours) of coach provided training sessions at each school, the number and duration (total hours) of coach provided technical assistance sessions, and the average proportion of SBLT members who attended the 5 full-day trainings. The extent to which each of these variables contributed to predictions of (1) the overall levels of educator beliefs and perceived skills and (2) changes in these outcomes across time was examined to determine relationships. Variables that were related to changes from the beginning to the end of the year provided stronger evidence that Project activities related to the outcomes.

Results of the multi-level models examined suggested some relationship between PS/RtI training and technical assistance activities and educator beliefs and perceived skills. Membership on a SBLT receiving 5 full-day trainings from Project staff was associated with increasing beliefs and perceived skills (RTI-A and RTI-B) core to a PS/RtI model. Status as a pilot school was related to increases in perceived skills (RTI-A and RTI-B) across the year. Although working in a pilot school did not predict increases over time in beliefs, the variable was positively related to a higher level of beliefs. Coach provided trainings (both the number of and duration) did not relate to educator beliefs or


perceived skills; however, coach provided technical assistance sessions related to educator beliefs. Higher numbers of coach provided technical assistance sessions received by a school were related to increasing beliefs of educators. Conversely, the total duration (hours) of the technical assistance sessions provided was related to decreasing beliefs across the year. Finally, higher average proportions of SBLT members attending the 5 full-day Project trainings predicted decreases in perceived skills (RTI-A and RTI-B); however, SBLT attendance was not related to beliefs.

The finding that membership on a SBLT was related to increases in beliefs and perceived skills (RTI-A and RTI-B) across the school year provides strong evidence for the relationship between PS/RtI training and technical assistance and these educator outcomes. Importantly, SBLT members received 5 full-day trainings across the year provided by Project staff. Project staff did not provide these trainings to other pilot school staff or to any comparison school educators. Although activities that occurred with SBLT members could not be controlled between trainings, the fact that SBLT membership predicted increasing beliefs and perceived skills when controlling for other variables suggests that the trainings may have contributed to increases beyond those noted for other educators (refer back to Tables 3a, 4a, and 5a for changes in the average beliefs, perceived RTI-A skills, and RTI-B skills, respectively).

One hypothesis for the larger increases in beliefs and perceived skills observed for SBLT members is the intensity, focus, and format of the trainings. Five days of training across the school year resulted in approximately 35 hours of professional development targeting PS/RtI concepts and skills. The Day 1 training module provided at the beginning of the year included content that explicitly targeted the belief systems of SBLT


members. Activities, discussions, and content intended to review major concepts that focused on core beliefs were infused throughout the remaining four days of training. Days 2-5 of the SBLT trainings focused on the four steps of the problem solving process with applications to academic and behavioral content areas. The skills needed to complete the four steps were discussed and modeled, and participants were provided opportunities to practice and receive feedback consistent with the professional development model espoused by Showers et al. (1987).

Although the relationship between SBLT membership and the perceived skills of educators appears to be consistent with research demonstrating that a four-step professional development model impacts the skills of educators (Showers et al., 1987), it is less clear how consistent the findings regarding changes in SBLT members’ beliefs are with previous research. Pajares (1992) purports that educators develop their beliefs regarding student learning and practices early in their careers. Furthermore, Pajares contends that, once developed, the beliefs of educators are resistant to change. The finding that more years of experience significantly predicted decreasing beliefs across the year provides some evidence to support Pajares’ assertion. However, the fact that SBLT members’ beliefs, on average, increased from the beginning to the end of the year suggests that educator beliefs may be malleable.

Guskey (1986) contends that educator attitudes change following practicing a new behavior, particularly when that behavior results in improved student outcomes. Although this study did not examine whether improvements in student outcomes occurred following Year 1 of the Project, SBLT members were provided with multiple opportunities to practice the skills on which they were trained as well as receive


corrective feedback on their performance. One hypothesis for the increases in beliefs observed for SBLT members, therefore, is that opportunities to practice and develop PS/RtI skills throughout the year and receive feedback on performance resulted in increases in beliefs. However, the relationship between increases in perceived skills and beliefs was not examined in this study. Additional research would be needed to determine if evidence to support this hypothesis exists.

Another finding that provides evidence that PS/RtI training and technical assistance was related to educator beliefs and perceived skills involved the pilot school status variable. When controlling for other variables, pilot school educators reported higher increases of perceived skills (RTI-A and RTI-B) across the year than their comparison school counterparts (refer back to Tables 4a and 5a for the average perceived RTI-A and RTI-B skills, respectively, reported by educators at the beginning and end of the year). Unlike membership on a SBLT, the activities that differentiated pilot versus comparison school membership were less clear. Project staff provided 5 full-day trainings to SBLT members; however, the training and technical assistance provided by the PS/RtI Coaches varied across the 40 pilot schools. The content and quality of coach provided activities as well as what professional development activities SBLT members facilitated are less clear. Anecdotal reports from coaches suggest that presentations to staff regarding the rationale for the PS/RtI model, and technical assistance focusing on data-based decision-making skills and implementation of the model, were common examples of coach delivered activities. Despite the lack of clarity regarding the specific activities that differentiated pilot versus comparison school membership, what is clear is that pilot schools received some level of training and


208 208 technical assistance from PS/RtI Coaches th at was not received by comparison schools. Thus, the positive relationship between increases in educators’ perceived skills and working in a pilot school provides some a dditional evidence that PS/RtI training and technical assistance may have contributed to the positive outcomes. Working in a pilot school did not significantly relate to increases in educator beliefs; however, the variable was associated with higher levels of beli efs. In other words, pilot school educators did not report greater increases in beliefs but did report higher overall beliefs than their comparison school counterparts when controlling for other predictors. Higher beliefs among educators in pilot schools s uggests that factors associated with these schools may have cont ributed to higher belief levels; however, it is more difficult to attribute these levels to PS/ RtI training and technical assistance activities provided by the Project. Increa ses from the beginning to the end of the year coinciding with the introduction of Projec t activities would have provi ded more evidence for the contribution of PS/RtI training and technical assistance. Two variables that more directly assesse d coaching activities re lated to educator beliefs. The number and duration of coach provided PS/RtI technical assistance sessions were related to increasing and decreasing beliefs respectively. The reason for the differential relationship between changes in be liefs and the number and hours of technical assistance provided is unknown. Anecdotal repor ts provided by Project staff and PS/RtI Coaches suggest that many educators needed to hear repeated, consistent messages for changes in beliefs to occur. Thus, one poten tial hypothesis for the finding of increased beliefs being associated with higher numbers of technical assistance sessions is that repeated exposure to consistent messa ges results in changes in beliefs.


209 209 Support for this hypothesis may be derived from research examining the impact of mass versus distributed practice. Years of research on teaching behaviors suggests that providing frequent opportunities to practice ne w behaviors within short time frames results in immediate proficiency; however, without additional oppor tunities to practice the new behavior may not be maintain ed (e.g., Lee & Genovese, 1988). Conversely, providing frequent opportunities to practice th at are distributed throughout a longer time frame do not tend to provide as powerful immedi ate results but the results are more likely to be maintained. Although these findings we re derived for behaviors, Guskey’s (1986) assertion that teacher’s beliefs can change following practice opportunities that facilitate the acquisition of new skills suggests that a link between increasing beliefs and frequent coaching opportunities may exis t. If educators received fr equent practice opportunities and perceived the need for less support to apply skills, then it is possi ble that a collateral effect may have occurred for beliefs. Ho wever, more information on the specific activities coaches engaged in and more res earch regarding the re lationship between the beliefs and skills of educators is needed to evaluate this hypothesis. The number and duration of technical assi stance sessions provided by coaches to schools was not significantly related to perc eived skills. Coach pr ovided trainings (both number and duration) were not significantly related to educ ator beliefs or perceived skills. One hypothesis for the l ack of relationship noted is that indices beyond frequency and duration are required to adequately assess the role that coachi ng plays. Brown, Stroh, Fouts, and Baker (2005) reported that the lite rature on coaching that targets educational systemic reform is limited; however, the information that is available suggests that coaches need to possess diverse skill sets to be effective. Brown et al. report that effective


210 210 coaches are experts in their content area and possess strong consultation skills (e.g., ask questions, actively listen to stakeholders create honest and trusting relationships, understand the importance of clients identify ing their own problems). These findings suggest that the expertise of PS/RtI Coaches and their use of consu ltation skills should be examined in addition to indices of freque ncy and duration of co aching opportunities. PS/RtI Coaches received training on the PS/RtI model and consultation skills prior to Year 1; however, the models examined in th is study did not include any coaching quality variables in the analyses. Therefore, the m odels may not have been sensitive to other dimensions of coaching beyond the frequenc y and duration of interactions with educators. Another potential hypothesis rela tes to the preliminary natu re of the analyses ran. Brown et al. (2005) reported that research on effective co aching for systemic reform suggests that the initial goal of a coach should be to build trusting and strong individual relationships with staff prior to engaging in difficult reform efforts. Thus, determining what the goals of PS/RtI Coaches were during Year 1 in addition to examining quality indicators appear to be important when ex amining the relationship between coaching and outcomes targeted by the Project. Because implementation of a PS/RtI model takes multiple years, it is plausible that more coaching opportunities are required before relationships between coaching activities a nd outcomes can be detected. The models examined included data from the first year of a 3-year projec t. Data points from subsequent years may produce different resu lts given more exposure to concepts and skills that occur through coaching.


The accuracy of the coaching data used in this analysis could be a third reason for the results. As was previously stated, the coaching data available for Year 1 were collected from December through May. Missing data on coaching activities from August through November could have masked differences in the relative status of schools in terms of exposure to PS/RtI coaching activities. In addition to missing data, the fact that PS/RtI Coaches self-reported their activities should be considered. Although the coaches were trained on how to code their activities, self-report data can be biased by a number of factors (e.g., social desirability, impression management) that should be considered when interpreting the data provided by the PS/RtI Coaches (Anastasi & Urbina, 1997).

Finally, the average proportion of SBLT members who attended Project trainings was related to perceived skills. Higher SBLT attendance was related to decreasing perceived (RTI-A and RTI-B) skills across the year when controlling for other predictors. The reason for this relationship is unclear. More investigation of this relationship is needed to determine potential explanations for this finding.

Educators’ Demonstrated Skills

Membership on a SBLT was positively related to the beliefs and perceived skills of educators. Despite increases in beliefs and perceived skills observed for SBLT members, decreases in their demonstrated skills were observed. Skill assessments administered to the SBLT members following each day of training suggested that the average percent of points possible earned by SBLT members significantly decreased across the year. Although the decrease in possible points earned was significant, a few factors must be considered when interpreting these results.


Only skill assessments administered to SBLT members at the end of each training during Year 1 were examined. The skill assessment scores derived for each training represent average performance of SBLT members following training on and practice of the skills targeted. No baseline scores demonstrating the skills of SBLT members prior to training on the skills assessed were available. Thus, the scores across the year represent comparisons of the degree of mastery of the skills assessed following training each day rather than changes in their skills prior to and after receiving any Project delivered training. Given that the scores represent the degree of mastery demonstrated by SBLT members on the skills trained that day, one hypothesis for the decrease in performance across the year is that the skill assessments administered examined different steps of the PS/RtI model that may have varied in difficulty.

Because the training focus shifted to different steps of the model across the year, skill assessments examined whatever skills were trained on that day. Thus, differences in scores may have been an artifact of the instrumentation rather than the skill development of the educators. In other words, differences in scores across the year may have been due to error variance associated with content sampling rather than differences in the trainings (i.e., the assessments were not controlled or equated for difficulty level; Anastasi & Urbina, 1997). For instance, SBLT members performed the highest on the skill assessment requiring educators to identify problems in a school from a sample data set (Day 2). Conversely, SBLT members scored the lowest on the skill assessments that required the educators to evaluate the extent to which example intervention plans included the components of a comprehensive plan (Day 4). The lack of baseline data on the educators’ skills as well as the differences in the skills assessed necessitates the


213 213 gathering of more information before dete rmining likely reasons for the declines in performance. SBLT members possessing lower levels of skills in areas such as intervention planning prior to training, some skills being more difficult to master than others, and the quality of the training provi ded are all potential explanations for the results. These hypotheses need to be examin ed more thoroughly prior to determining likely reasons for the decreases in mastery noted from the beginning to the end of the school year. In addition to potential issues with the instrumentation used, the methodology used to address the research question also ma kes it difficult to tease out the relationship between PS/RtI training and t echnical assistance and demons trated skills. The skill assessments used to address this question onl y were administered to SBLT members. The lack of a comparison group included in the analyses necessitates more caution when generating potential hypotheses for the resu lts. Although including comparison schools in the analyses used to address research ques tion one did not rule out the influence of extraneous variables, the findi ng that pilot school status was associated with increases in perceived skills provided stronger evidence for the potential impact of PS/RtI training and technical assistance. The lack of a comparison group included in the analyses of the demonstrated skills of educators does not rule out that the trainings were associated with decreases in skill mastery; however, it is more difficult to provide evidence that extraneous variables did not contribute to the findings. Implementation of a PS/RtI Model Similar to the demonstrated skills of SBLT members, the relationship between PS/RtI training and technical assistance a nd implementation of a PS/RtI model was


214 214 difficult to determine from the results of the multi-level models examined. Pilot schools’ self-reports of PS/RtI implem entation significantly increased from the beginning to the end of Year 1. Although the pilot schools re ported higher levels of implementation following one year of training and technica l assistance, no variables assessing Project activities significantly contributed to the multi-level model examining the SAPSI results. Potential reasons for these findi ngs include who completed the SAPSI and the preliminary nature of the analyses following Year 1. Comparison schools did not complete the SAPSI to provide self-reports of implementation. Project staff decided not to require comparison schools to complete the instrument because of the potential that discussions among comparison school educators while completing the instrument might lead to activities to facilitate increased implementation. This decision, however, made it more difficult to relate PS/RtI training and technical assistance occu rring in pilot schools to the reported increases in implementation. Other factors such as distri ct initiatives and policies and procedures introduced by the state, among others, could ha ve contributed to th e increases observed. In fact, some school demographic and distri ct variables did signi ficantly relate to implementation (potential explanations for these relationships are discussed below). In addition to only receiving SAPSI data from pilot schools, indicators of the frequency and duration of coachi ng activities did not significantly relate to increases in PS/RtI implementation. The number of and to tal hours of coach provided training or technical assistance sess ions did not predict SAPSI score levels or in creases across time. One potential explanation for non-significant relationships between coaching indicators and self-reported implementation is lower th an optimal levels of power to detect


215 215 significant relationships in the statistical an alyses. Level 1 (time) of the 2-Level multilevel model examining SAPSI scores included 2 time point s. Level 2 (school-level variables) consisted of 40 pilot schools. Raudenbush and Bryk (2002) suggest that higher numbers of Level 1 and 2 units increases th e power available to detect relationships among variables entered into multi-level models particularly for Level 2 units. Given the number of variables entered into the models in this st udy, more time points and schools included in the analyses may have increased the probability of detecting relationships among the variables, including the coaching indicators. In fact, p -values as low as .07 were detected for coaching indicators given th e power available in th e analyses described above. It is plausible that more observations across units (particularly more schools) may have detected significant relationships among some of these variables that were not detected with the current power levels available. Other potential explanations for divergen t coaching findings in clude the lack of coaching quality indices (as de scribed by Brown et al., 2005) included in the model (see above for a discussion of this hypothesis) and the preliminar y nature of the analyses conducted. Coaching on PS/RtI implementation issu es may be related to higher levels of implementation as schools move beyond their first year. Perhaps co aching becomes more important to levels of implementation as schools begin to encounter more advanced implementation issues. Batsche, Elliott, Schr ag et al.’s (2005) a ssertion that PS/RtI implementation takes 4-6 years suggests that schools will encounter myriad barriers to facilitating full implementation across the year s. Brown et al.’s ( 2005) report on coaching for educational systemic reform suggests that coaching from PS/RtI experts may help reduce or eliminate these barriers as individua l relationships with school staff develop.


216 216 Marston et al.’s (2003) di scussion of some of the barriers to facilitating implementation of a PS/RtI model in the Minnea polis school district s uggests that simply capitalizing on relationships may not be suffi cient. Marston et al described different approaches to providing tr aining and follow-up support to schools attempting to implement the PS/RtI model over an 8-year pe riod. Marston et al. described an initial approach to providing professi onal development that involve d a cadre of three master trainers who provided both th e training and ongoing support to schools. Strengths of this approach cited by the authors included a c onsistent message provided from individuals with expertise and experience implementing th e model. The weakness of this approach was that the trainers had less time to suppor t schools following trainings as schools were added. A second approach described by Marst on et al. to provide additional support to schools was a trainer of trainers model. The authors reported training a set of additional trainers to provide more opportunities for schools to receive coaching on implementation issues. Marston et al. suggeste d that this approach allowed the trainers to capitalize on pre-existing relationships with school staff and engage in more frequent coaching; however, the possibility for inconsistencies in the prof essional development provided increased as trainers were added. Thus, both the relati onship developed by PS/RtI Coaches and their knowledge and skills as coaches may be important when assisting schools confronting barriers to implementati on. Analyses including data from subsequent years of the Project may help determine if this potential explanation is viable. Unlike reports provided by pilot schools, increases in PS/RtI implementation evident in permanent products were not signifi cant. In addition, stat us as a pilot school nor the coaching indicators mentioned above signif icantly related to levels of or increases


in implementation. However, the average proportion of SBLT members attending the 5 full-day trainings positively related to increases in implementation. One potential explanation for this relationship is the degree of commitment by SBLT members being trained. SBLT attendance varied across trainings and schools, suggesting that the individuals attending the training were not consistent across the year. It is plausible that more consistent attendance across members of the SBLT is an indicator of school commitment to PS/RtI implementation. In other words, schools with higher attendance across the year may have been more committed to implementation of a PS/RtI model than schools that had lower average attendance.

The lack of relationship between pilot school status and coaching activities and PS/RtI implementation evident in permanent products may be explained by a number of factors. Potential explanations for the lack of relationship among these variables are similar to those provided above for the SAPSI model. Given the proposition that PS/RtI implementation takes multiple years (Batsche, Elliott, Schrag, et al., 2005), it is plausible that future increases in implementation might result in relationships with PS/RtI training and technical assistance indicator variables. The average level of implementation in pilot schools across the year was .66 out of a possible score of 2.0. These data suggest that schools, on average, only implemented a portion of the PS/RtI model during Year 1. If increases in implementation of a PS/RtI model occur following additional training and technical assistance, it is plausible that significant relationships between these Project activities and implementation would result.

Lower than optimal levels of statistical power may have impacted the multi-level model as well. The 2-Level model predicting implementation as measured by the Tier I


and II Critical Components Checklist had three Level 1 units (i.e., time points) and 61 Level 2 units (i.e., schools). Again, more time points and schools may have increased the probability of detecting significant relationships among variables. The interaction between pilot school status and time produced a p-value of .08 with the current number of units; however, interactions between coaching variables and time did not produce a p-value of less than .5. Although it is impossible to tell exactly what impact more time points and schools would have had on the analyses, it is plausible that more statistical power, holding other variables constant, would have resulted in the detection of more significant predictors (Raudenbush & Bryk, 2002).

Another potential explanation is the way in which the model predicting PS/RtI implementation evident in permanent products was constructed. The model examined included three time points that all occurred during Year 1 of implementation. The average level of the 3 previous years of implementation (i.e., one score that averaged the level of implementation across the three baseline years and all three data collection windows within those years) also was included in the model as a predictor. Importantly, the first time point from the August through December data collection window during Year 1 reflected data from meetings that may have occurred following the first 2 days of training received by the SBLTs as well as 2-3 months of training and technical assistance provided by PS/RtI Coaches. Thus, pilot schools may have received information and support that impacted data collected during the first collection window. Restricting time in the model to Year 1 rather than including baseline time points as Level 1 units, therefore, may not have allowed the model to detect changes over time that may have occurred between the conclusion of the baseline years and the beginning of Year 1.
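As a rough illustration of the power point raised above, the sketch below simulates data from a two-level growth model and estimates how often a school-level predictor-by-time effect of a given size would be detected under different numbers of time points. All values (effect size, variance components, number of schools) are hypothetical placeholders rather than estimates from the Project's data, and the sketch is not a reproduction of the Project's models.

    # Illustrative Monte Carlo power sketch -- hypothetical parameter values, not Project data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def simulated_power(n_schools=61, n_times=3, effect=0.15, n_sims=200, alpha=0.05, seed=1):
        rng = np.random.default_rng(seed)
        detections = 0
        for _ in range(n_sims):
            rows = []
            for s in range(n_schools):
                pilot = 1 if s < n_schools // 2 else 0   # roughly half pilot, half comparison
                u0 = rng.normal(0, 0.20)                 # school random intercept
                u1 = rng.normal(0, 0.05)                 # school random slope
                for t in range(n_times):
                    score = 0.5 + u0 + (0.05 + u1 + effect * pilot) * t + rng.normal(0, 0.15)
                    rows.append({"school": s, "time": t, "pilot": pilot, "score": score})
            df = pd.DataFrame(rows)
            fit = smf.mixedlm("score ~ time + pilot:time", df,
                              groups=df["school"], re_formula="~time").fit()
            term = [name for name in fit.pvalues.index if "pilot" in name][0]  # interaction term
            if fit.pvalues[term] < alpha:
                detections += 1
        return detections / n_sims

    # Compare a design like Year 1 (three windows) with a longer follow-up.
    # Note: fitting several hundred models can take a minute or two, and occasional
    # convergence warnings are expected with few time points per school.
    print(simulated_power(n_times=3))
    print(simulated_power(n_times=6))

With only two or three occasions per school, detection depends heavily on the number of schools and the size of the slope difference, which is consistent with the Raudenbush and Bryk (2002) point cited above; the sketch is intended only to make that logic concrete.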


219 219 Other Variables Related to Year 1 Project Outcomes PS/RtI training and technical assistance activities differentially related to the outcomes examined by the multi-level models. Several educator and school demographic variables entered into the models related to Year 1 Project outcomes as well. Educator variables were examined in models pr edicting the beliefs, and perceived and demonstrated (SBLT members only) skills of educators. Educator demographic variables examined were years of experience, highest degree earned, and position. Findings suggested that these demographic variable s were differentially related by outcome measure. Years of experience related to the beliefs of educators in the sample. Specifically, educators with more years of experience tended to report lower levels of beliefs. Conversely, holding a higher degree wa s positively related to belief levels as well as perceived skills (RTI-A and RTI-B). The posi tion an educator held also was related to beliefs and perceived skills. Holding a pos ition as a special education teacher or administrator was associated with higher levels of beliefs and perceived skills. Holding a position as a general education teacher was asso ciated with higher perceived skills while holding a position as a student service person was related to higher perceived RTI-B skills. Finally, holding a position as an administ rator related to decreasing beliefs across time. Conversely, holding a posit ion as a general education teacher, special education teacher or student support role was related to increasing demonstrated skills across the year. Years of experience or hi ghest degree earned did not app ear to be related to changes in any of the educator outcomes across the year. Overall, these findings sugge st that educator demographics may play differential roles in the beliefs, perceived skills, and demonstrated skil ls of educators. The finding


that more years of experience as an educator related to lower overall levels of beliefs, but not changes over time, appears to be somewhat consistent with Pajares’ (1992) proposition that teachers develop beliefs early in their careers that are resistant to change. Educators with more years of experience in the sample appeared to hold beliefs more consistent with a traditional service delivery model than their counterparts with less experience. Importantly, the interaction between years of experience and time did not significantly contribute to the beliefs model, suggesting that the observed changes in beliefs across the year did not relate to years of experience. Although the main effect observed for years of experience may have been a result of more experienced educators starting with lower belief levels than their counterparts with fewer years invested in the education system, it is important to note that the lack of interaction effect between years of experience and time suggests that it may be possible to change the beliefs of educators with more experience in the system.

Holding a higher degree also was positively related to belief levels as well as perceived skill (RTI-A and RTI-B) levels. These results suggest that educators with more education tended to possess higher levels of beliefs and perceived skills. One potential hypothesis for this finding is that the professional development received as part of preservice university training programs results in educators who are better prepared to deliver services consistent with a PS/RtI model. Another potential explanation is that the educators who seek out higher degrees are more willing to learn new skills than those who do not. Consistent with years of experience, earning a higher degree was not related to changes in beliefs or perceived skills over time. Thus, at least following 1 year of the


221 221 Project, the amount of education received by pa rticipants did not appear to be related to differential change in edu cator outcomes examined. Holding certain positions also was diff erentially related to educator outcomes. Holding a position as an administrator, ge neral education teacher, special education teacher, or student support person predicted hi gher levels of beliefs and/or perceived skills. One hypothesis for these findings is th at different roles and responsibilities provide varying perspectives and opportunities for skill development. However, whether any differences in educator outcomes target ed by the Project remain as roles and responsibilities change as a f unction of implementation of PS/ RtI remains to be seen. In fact, being an administrator significantly predicted decreasing beliefs while holding a position as a general education teacher, specia l education teacher or student support role was related to increasing demonstrated sk ills across the year. The reasons for these particular relationships are unc lear and require further invest igation to derive potential explanations. School level demographic variables also differentially relate d to educator and implementation outcomes examined in this study. Racial/ethnic composition related to the perceived skills of educat ors. The proportion of blac k, Hispanic, Asian, and Native American students attending the schools were positively related to perceived RTI-A and/or RTI-B skill levels. The proportions of these groups of students attending the schools were not associated with demonstr ated skills nor implementation of a PS/RtI model. The proportions of white and multirac ial students attending the schools were not related to any of the outcomes examined.


222 222 The proportion of males and students elig ible for free-reduced lunch also were related with some outcomes examined. The proportion of males a ttending the schools was associated with decreasing beliefs but increasing levels of self-reported implementation across the year. The proporti on of students eligible for free-reduced lunch was associated with higher levels but decreasing self-re ported implementation across the year. Results indica ted that the proportions of ELL or ESE students attending the schools were not related to a ny outcomes examined in this study. District membership appeared to be rela ted to some outcomes as well. Working in a school in District E was rela ted to lower levels of percei ved skills but not beliefs or demonstrated skills. Nor was working in a school in District E associated with any changes in educator outcom es across the year. Being a school in District C was associated with higher levels of self-re ported implementation but decreases in selfreported implementation across the year. Distri ct F schools were associated with lower levels of self-reported implem entation. Being a school in Dist rict D or District G was related to higher levels of evidence of implementation in permanent products. Membership in any the other district was not rela ted to levels of or changes in educator or implementation outcomes. Several potential hypotheses for the findi ngs related to the school demographic variables exist. Higher levels of perceived skills among sc hools with higher proportions of minority students (i.e., black, Hispanic, Asian, and Native American) could be due to pre-existing experience with di fferentiated instructional tec hniques. Decreases in beliefs in schools with high proportions of male st udents and in reported implementation among schools with high proportions of student eligible for free-red uced lunch may be due to


difficulties in working with male students (i.e., male students tend to be disproportionately represented among students in high-incidence special education categories; Donovan & Cross, 2002) and better understanding of PS/RtI implementation requirements, respectively. Differences in level and changes in implementation across the year related to district membership could be due to a number of factors such as district policies and procedures, and data availability and technology to support graphing, among others. Anecdotal reports from Project staff and PS/RtI Coaches suggest that differences in such issues across districts are impacting implementation; however, more research is needed before these reports are confirmed.

Implications for Future Project Activities

Given the quasi-experimental design used, the variability in the inclusion of comparison groups in the models, and the preliminary nature of the analyses conducted, the discussion above should be considered potential explanations of the relationships derived rather than cause-effect chains. Despite the need for caution, the results of this study may have implications for future Project implementation and evaluation activities. Year 1 PS/RtI training and technical assistance activities’ relationship with increases in educators’ beliefs and perceived skills suggests that activities focusing on these outcomes should continue. Batsche, Elliott, Graden, et al. (2005) report that educators must perceive the need for new practices, and perceive that they have the skills and/or support to implement them, prior to adopting them. Although increases in these outcomes occurred, mixed results regarding implementation were found. Pilot schools reported increases in implementation across the year, but these increases were not evident in permanent products from meetings where PS/RtI practices should have occurred. Given that self-report can be


224 224 biased (Anatasi & Urbina, 1997) and the issues discussed above regarding the construction of the model that used permanent products to measure implementation, more analysis is needed to determine the extent to which implementation increased across the year. Regardless of the true amount of increases in implementation th at occurred, it is clear that optimal levels of implementation were not evident from either data source. Activities specifically focusing on implementa tion issues (e.g., pro cedures, feedback on the extent to which the steps occurred accura tely) should be consid ered to facilitate increases in implementation during Year 2; how ever, activities targeting beliefs and skills of educators should not be en tirely abandoned. The assertion that educators must see the need and perceive they have the skills and/or support to implement the new model suggests that some activities ta rgeting educators’ beliefs and skills core to a PS/RtI model may be required to facilitate continued increas es in these outcomes. Continued increases in these consensus issues could result in increases in implementation in the future. In addition to the focus of PS/RtI traini ng and technical assistance, the frequency and format in which they are delivered should be considered. Membership on a SBLT predicted increases in beliefs and perceived skills while pilot school status predicted increases in perceived skills from the beginni ng to the end of the year. The fact that SBLT membership predicted increasing beliefs and perceived skills and the estimate for perceived skill increases was higher than fo r pilot school status suggests that more intensive trainings may be an effective format to use when targeti ng educator outcomes. However, five full days of intensive PS/RtI training and technical assistance may not be realistic for all staff in a school. Thus, creativ e ways to provide professional development


225 225 in PS/RtI knowledge and skills may be requi red. The fact that the number of coaching sessions provided to schools positively predic ted increases in beliefs across the year suggests that more frequent coaching inter actions may be one methodology to consider. Strategies to facilitate in creases in PS/RtI implementation should be considered as well. Neither pilot school status nor any coaching indicators we re significantly related to increases in implementation evident from permanent product review s conducted across Year 1. Project staff should consider a num ber of factors prior to making decisions regarding strategies to increase implementa tion based on these findings. The degree to which activities focus on beliefs, skills, implementation, and district issues, among others, will depend on answers to a number of questions. One issue to be addressed is the reason(s) for the decreasing trend in skill mastery demonstrated by SBLT members. Project staff should investigate the extent to which the decreasing trends were due to skill difficulty across training days, the effectiveness of the trainings, and/or the measurement tools and procedures used. Decreases due to skill difficulty would suggest the need to provide additional training targe ting the skills with which the majority of educators stru ggled (e.g., Intervention Development and Implementation components). Conversely, decr eases due to the effectiveness of the training would require that th e frequency, format, and/or delivery of the trainings be addressed. The percent of points possibl e earned by SBLT members on the skill assessments administered at the end of Days 2, 3, and 5 approximated or exceeded 80%. SBLT members earned 54% of the points av ailable on the Day 4 skill assessments measuring skills evaluating the extent to whic h intervention plans included the necessary components. These data suggest skills involve d in the development of intervention plans


226 226 may have been more difficult for educator s. Although more analys is of the potential factors contributing to the decr easing trend is necessary, thes e data suggest that Project staff should, at a minimum, c onsider revisiting the developm ent of intervention plans in subsequent SBLT trainings. Another issue for Project staff to consid er is how to facilitate increases in implementation of the PS/RtI model. One potential strategy would be to provide additional training to SBLT members on the st eps of the PS/RtI model. After reviewing data from permanent product reviews following th e first year of a stat e PS/RtI initiative, Callender (2006) reported that additional traini ng on the steps of the PS/RtI model was provided during the second year in an effo rt to increase implementation. Following the provision of additional traini ng, Callender reported increa ses in PS/RtI implementation from the previous year. This precedent fo r increasing implementation across a number of schools suggests that providing additional tr aining focusing on the steps of the PS/RtI model may be an effective strategy for f acilitating use of the model in schools. Importantly, Callender reported that the additio nal trainings were pr ovided across a large number of schools suggesting that this approach may be an e ffective strategy to consider when attempting to scale-up implementation of the model. The provision of additional training to SBLT members could be followed up by additional opportunities for coaching. From December through May of Year 1, PS/RtI Coaches, on average, reported providing approx imately 5 training sessions for a total of approximately 21 hours. PS/RtI Coaches, on average, reported approximately 24 technical assistance sessions for a total of approximately 57 hours. Standard deviations for both training and technical assistance in dicated high levels of variability across


coaches in the number and duration of the support provided. Although the number and duration of training and technical assistance sessions did not relate to implementation following Year 1 of the Project, potential power issues and the preliminary nature of the analyses necessitate that those findings be interpreted with caution. More opportunities to provide professional development to pilot school staff through coaching may result in increased PS/RtI implementation.

Research from Showers, Joyce, and colleagues (Joyce & Showers, 1996; Showers et al., 1987) suggests that professional development that includes the rationale, modeling of skills, practice opportunities, and immediate corrective feedback results in implementation of new practices. As was previously stated, it is unclear to what extent PS/RtI Coaches used this model when working with pilot school staff. It also is unclear what the focus of training and technical assistance was during Year 1. Project staff should investigate the extent to which PS/RtI Coaches used this model and what the foci of the sessions were in schools that demonstrate higher levels of implementation. In addition, Project staff should consider directing PS/RtI Coaches to use data on PS/RtI implementation to provide feedback following meetings in which PS/RtI practices were used.

Providing performance feedback to teachers on implementation of interventions was one component that was associated with higher levels of integrity according to Noell and colleagues (Noell et al., 2002; Noell et al., 2005). As part of regular meetings to support teachers implementing interventions, Noell and colleagues provided feedback on the implementation of the interventions using permanent products generated as part of the plan. The permanent products were used to identify components of the intervention plan


228 228 implemented and provide assistance to improve integrity for components not implemented. Results of the studies suggest ed that performance feedback was an effective method for improving integrity. Alt hough implementation of interventions is only one component of the PS/Rt I model, the results could po tentially generalize to other steps of the model. Finally, Project staff should investigate the ex tent to which district factors such as policies and procedures and support from distri ct staff should be targ eted. Being a school in several districts was associated with some educator and implementation outcomes. Most of the significant relati onships between Year 1 outcome s and district membership occurred for overall levels; however, membersh ip in one district was associated with decreasing levels of self-reported implemen tation across the year. These relationships suggest that Project staff should examine f actors such as how district policies and procedures align with PS/RtI implementation, what professional development is available to schools, and what other di strict issues could potentia lly influence educator and implementation outcomes. Determining what factors may have an influence would be important to determining what steps would need to be taken when working with district personnel to suppor t pilot schools. Potential Implications for Future Research The potential implications for future Pr oject activities discussed above are based on preliminary findings following Year 1. Ho wever, the preponderance of evidence suggests that education reform initiatives requ ire years before full implementation occurs (e.g., Batsche, Elliott, Schrag, 2005; Brow n et al., 2005, Sarason, 1990). Given the literature base on education reform, findings following Year 1 should continue to be


examined to determine how the relationships between PS/RtI training and technical assistance and educator and implementation outcomes change across time. In addition to continuing to monitor Year 1 findings in subsequent years of the Project, the results of this study suggest other research questions that should be considered.

One component of PS/RtI training and technical assistance examined in this study was coaching. The number and duration of coach-provided technical assistance sessions received by schools differentially related to educator beliefs; coach-provided training and technical assistance did not relate to any other outcomes examined. Several potential explanations for these findings were discussed above. Research questions that would help address the extent to which those explanations are viable include:

1) What specific coaching activities relate to educator and implementation outcomes?

2) How do the consultation and PS/RtI knowledge and skills of PS/RtI Coaches relate to educator and implementation outcomes?

Potential explanations for the educator outcomes associated with PS/RtI training and technical assistance also were provided. The beliefs, perceived skills, and demonstrated skills of educators were examined to determine what factors were related to levels of and changes in these outcomes. PS/RtI training and technical assistance appeared to be related to the beliefs and perceived skills of educators, but demonstrated skills decreased throughout the year. Questions remain, however, about how the beliefs and perceived skills of educators interact to impact each other, as well as whether the decreases in demonstrated skills were an artifact of measurement issues. To provide more information on what factors accounted for the results, the following research questions should be considered:


3) What is the relationship between educator beliefs regarding student learning and service delivery and skill development?

4) What is the relationship between PS/RtI training and technical assistance and the demonstrated skills of educators on PS/RtI tasks controlled for difficulty level?

Evidence for the relationship between PS/RtI training and technical assistance and implementation was mixed following Year 1. In addition to examining these relationships following subsequent years of the Project, other variables that may be associated with implementation should be considered. How implementation of a PS/RtI model relates to student and systemic outcomes also must be examined. Reform continues to be a focus in education because of the need to improve student academic and behavioral outcomes. Ultimately, the extent to which a PS/RtI model contributes to the education of students will be judged by the impact of implementation on important educational outcomes. Thus, questions to be addressed regarding the implementation of a PS/RtI model and its impact on educational outcomes include:

5) How do educator beliefs and skills relate to implementation outcomes?

6) How do educator beliefs, skills, and implementation levels relate to student and systemic outcomes?

Limitations

A few limitations to the study must be considered when interpreting the findings and their implications for future Project activities. First, the quasi-experimental design, in which demonstration sites (including pilot and comparison schools) were selected through a competitive application process, did not allow cause-and-effect relationships to be determined definitively. The lack of random assignment and control groups did not


allow extraneous variables beyond the training and technical assistance provided by the Project to be ruled out. In addition, comparison groups were not available for some models to differentiate outcomes for groups who received training and technical assistance versus those who did not. Although the quasi-experimental design did constrain the extent to which the author could determine cause-and-effect relationships, conducting pure experimental research tends to be unrealistic in school settings. Given the inherent difficulty of conducting pure experimental research in schools, the results of quasi-experimental studies such as this one should be considered when examining outcomes associated with large-scale initiatives. In fact, the external validity of the study may have been improved by selecting schools with some level of pre-existing capacity to support PS/RtI implementation rather than randomly assigning pilot or comparison school status to schools in Florida.

A second limitation to be considered is that the data collected for the analyses violated the multi-level model assumption that missing data were randomly distributed. Because multi-level models can be sensitive to violations of this assumption (Raudenbush & Bryk, 2002), the results must be interpreted with caution. Results of the analyses involving educators' beliefs and perceptions of skills may have been impacted by the large number of missing surveys from comparison schools. Missing data from educators within schools also may have impacted the results of models examining educator outcomes. Missing Tier I and II Critical Components Checklists for some pilot and comparison schools, as well as missing items within the checklists and SAPSIs, may have subjected the implementation analyses to the same limitation.
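To make this limitation concrete, the sketch below illustrates one way the distribution of missing survey data across schools could be examined and a simplified two-level growth model could be fit. It is purely illustrative and was not part of the Project's analyses; the file name, column names (school_id, time, pilot, beliefs_score), and model specification are assumptions introduced only for the example.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format file: one row per educator survey per administration window.
    # Assumed columns: school_id, time (0, 1, 2, ...), pilot (1 = pilot school, 0 = comparison),
    # and beliefs_score (NaN when a survey was not returned).
    df = pd.read_csv("beliefs_survey_long.csv")

    # Proportion of missing belief scores by condition and school. Large, systematic
    # differences (e.g., far more missing surveys in comparison schools) cast doubt on
    # the assumption that missing data are randomly distributed.
    missing_by_school = (
        df.assign(missing=df["beliefs_score"].isna())
          .groupby(["pilot", "school_id"])["missing"]
          .mean()
          .sort_values(ascending=False)
    )
    print(missing_by_school.head(10))

    # Simplified two-level growth model fit to complete cases: fixed effects for time,
    # condition, and their interaction; random intercepts and time slopes for schools.
    complete = df.dropna(subset=["beliefs_score"])
    model = smf.mixedlm(
        "beliefs_score ~ time * pilot",
        data=complete,
        groups="school_id",
        re_formula="~time",
    )
    result = model.fit()
    print(result.summary())

Summarizing missingness by school in this way would not correct a violation of the missing-at-random assumption, but it would document where missing data are concentrated and could inform sensitivity analyses in subsequent years.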


A related limitation involves the extent to which the training and technical assistance activities engaged in by PS/RtI Coaches were measured. Data on the number and duration of coach-provided training and technical assistance sessions were collected, but less is known regarding the specifics and quality of these activities. This lack of clarity makes it difficult to determine what types of activities related to the outcomes examined in this study. Information on the specific activities engaged in and their quality might be used to determine the extent to which differences in PS/RtI Coaches' activities beyond frequency and duration relate to educator and implementation outcomes.

Another limitation is that the analyses reflect findings from the first year of a 3-year Project. The findings provided information on the preliminary relationship between PS/RtI training and technical assistance and educator and implementation outcomes. However, it is important that these results be considered in the context of research suggesting that PS/RtI implementation takes a minimum of 4-6 years (Batsche, Elliot, Schrag, et al., 2005). Given this timeline and other research suggesting that systemic reform is a multi-year process (e.g., Brown et al., 2005), the findings should continue to be examined to determine whether the initial results are maintained.

Conclusions

Analyses following the first year of a 3-year project to evaluate PS/RtI implementation in schools suggest some relationship between the PS/RtI training and technical assistance activities delivered and educator and implementation outcomes. PS/RtI training and technical assistance appeared to be positively related to increases in educators' beliefs and perceived skills core to a PS/RtI model. Although increases in


these outcomes were observed, significant decreases in the demonstrated skills of SBLT members occurred. However, measurement issues regarding the difficulty of the skills assessed must be investigated prior to drawing any conclusions regarding the impact of trainings on skill development. Finally, preliminary data on implementation of a PS/RtI model suggested mixed results during Year 1. Pilot schools reported increases in PS/RtI implementation across the year; however, permanent product reviews did not reveal increases. In addition, variables associated with the PS/RtI training and technical assistance provided by the Project did not relate to increases in implementation. Importantly, these findings represent results following Year 1 of the Project. All results discussed should be considered preliminary and should continue to be examined in subsequent years.


References

Adelman, H.S., & Taylor, L. (2000). Shaping the future of mental health in schools. Psychology in the Schools, 37(1), 49-60.

Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Upper Saddle River, NJ: Prentice Hall.

Batsche, G.M., Elliot, J., Graden, J.L., Grimes, J., Kovaleski, J.F., Prasse, D., Reschly, D.J., Schrag, J., & Tilly, W.D. (2005). Response to intervention: Policy considerations and implementation. Alexandria, VA: National Association of State Directors of Special Education, Inc.

Batsche, G.M., Elliot, J., Schrag, J., & Tilly, W.D. (2005, October 25). Response-to-intervention: The opportunity and the reality. Presented at the National Association of State Directors of Special Education Annual Conference, Minneapolis, MN.

Bergan, J.R., & Kratochwill, T.R. (1990). Behavioral consultation and therapy. New York: Plenum.

Boothroyd, R. (2005). Personal communication. Tampa, FL.

Brown, C.J., Stroh, H.R., Fouts, J.T., & Baker, D.B. (2005, February). Learning to change: School coaching for systemic reform. Fouts & Associates. Retrieved May 5, 2009, from http://www.gatesfoundation.org/nr/downloads/ed/researchevaluation/SchoolCoachingStudy05.pdf


Burns, M., Appleton, J.J., & Stehouwer, J.D. (2005). Meta-analytic review of responsiveness-to-intervention research: Examining field-based and research-implemented models. Journal of Psychoeducational Assessment, 23, 381-394.

Burns, M.K., & Symington, T. (2002). A meta-analysis of pre-referral intervention teams: Student and systemic outcomes. Journal of School Psychology, 40, 437-447.

Callender, W.C. (2006, March). Summary evaluation of Idaho's statewide RTI approach. Paper presented at the National Association of School Psychologists Annual Convention, Anaheim, CA.

Center on Education Policy (2008, June). Has student achievement increased since 2002?: State test score trends through 2006-07. Retrieved May 5, 2009, from http://www.cep-dc.org/index.cfm?fuseaction=Page.viewPage&pageId=498&parentID=481

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Curtis, M.J., & Metz, L.W. (1986). System-level intervention in a school for handicapped children. School Psychology Review, 15, 510-518.

Curtis, M.J., Castillo, J.M., & Cohen, R.C. (2008). Best practices in system-level change. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 887-902). Bethesda, MD: National Association of School Psychologists.

Dickson, S.V., & Bursuck, W.D. (1999). Implementing a model for preventing reading failure: A report from the field. Learning Disabilities Research & Practice, 14(4), 191-202.


Dolan, L.J., Kellam, S.G., Brown, C.H., Werthamer-Larsson, L., Rebok, G.W., Mayer, L.S., Laudolff, J., Turkkan, J.S., Ford, C., & Wheeler, L. (1993). The short-term impact of two classroom-based preventive interventions on aggressive and shy behaviors and poor achievement. Journal of Applied Developmental Psychology, 14, 317-345.

Donovan, M.S., & Cross, C.T. (Eds.). (2002). Minority students in special and gifted education. Washington, DC: National Academy Press.

Education of All Handicapped Children Act, P.L. 94-142 (1975).

Elementary and Secondary Education Act, P.L. 89-10 (1965).

Fletcher, J.M., Coulter, W.A., Reschly, D.J., & Vaughn, S. (2004). Alternative approaches to definition and identification of learning disabilities: Some questions and answers. Annals of Dyslexia, 54(2), 304-331.

Fletcher, J.M., Francis, D.J., Morris, R.D., & Lyon, G.R. (2005). Evidence-based assessment of learning disabilities in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34(3), 506-522.

Fletcher, J.M., & Lyon, G.R. (1998). Reading: A research-based approach. In W.M. Evers (Ed.), What's gone wrong in America's classrooms (pp. 50-72). Stanford: Hoover Institution Press.

Florida Administrative Code (2009). Retrieved from https://www.flrules.org/gateway/ruleNo.asp?ID=6A-6.0331

Florida Department of Education (2008, May). Profiles of Florida school districts 2006-07: Student and staff data. Retrieved May 5, 2009, from http://www.fldoe.org/eias/eiaspubs/


Flugum, K.R., & Reschly, D.J. (1994). Prereferral interventions: Quality indices and outcomes. Journal of School Psychology, 32(1), 1-14.

Forness, S.R. (2001). Special education and related services: What have we learned from meta-analysis? Exceptionality, 9(4), 187-195.

Grigg, W., Donahue, P., & Dion, G. (2007). The Nation's Report Card: 12th-Grade Reading and Mathematics 2005 (NCES 2007-468). U.S. Department of Education, National Center for Education Statistics. Washington, DC: U.S. Government Printing Office.

Guskey, T.R. (1986). Staff development and the process of teacher change. Educational Researcher, 15(5), 5-12.

Heller, K.A., Holtzman, W.H., & Messick, S. (Eds.). (1982). Placing children in special education: A strategy for equity. Washington, DC: National Academy Press.

Hoagwood, K., & Johnson, J. (2003). School psychology: A public health framework I. From evidence-based practices to evidence-based policies. Journal of School Psychology, 41, 3-21.

Individuals with Disabilities Education Improvement Act, U.S.C. H.R. 1350 (2004).

Jacob, S., & Hartshorne, T.S. (2003). Ethics and law for school psychologists (4th ed.). Hoboken, NJ: John Wiley & Sons, Inc.

Johnson, B., & Christensen, L. (2004). Educational research: Quantitative, qualitative, and mixed approaches (2nd ed.). Boston, MA: Pearson Education, Inc.

Kamps, D.M., & Greenwood, C.R. (2003, December). Formulating secondary level reading interventions. Paper presented at the National Research Center on Learning Disabilities Responsiveness-to-Intervention Symposium, Kansas City, MO.


Kavale, K.A., & Forness, S.R. (1999). Effectiveness of special education. In C.R. Reynolds & T.B. Gutkin (Eds.), Handbook of school psychology (pp. 984-1024). Austin, TX: Pro-Ed.

Kellam, S.G., Mayer, L.S., Rebok, G.W., & Hawkins, W.E. (1998). The effects of improving achievement on aggressive behavior and of improving aggressive behavior on achievement through two prevention interventions: An investigation of causal pathways. In B. Dohrenwend (Ed.), Adversity, stress, and psychopathology (pp. 486-505). Oxford: Oxford University Press.

Kellam, S.G., Rebok, G.W., Mayer, L.S., Ialongo, N., & Kalodner, C.R. (1994). Depressive symptoms over first grade and their response to a developmental epidemiologically based preventive trial aimed at improving achievement. Development and Psychopathology, 6, 463-481.

Kellam, S.G., Werthamer-Larsson, L., Dolan, L., Brown, C.H., Mayer, L., Rebok, G., Anthony, J., Laudolff, J., Edelsohn, G., & Wheeler, L. (1991). Developmental epidemiologically-based preventive trials: Baseline modeling of early target behaviors and depressive symptoms. American Journal of Community Psychology, 19, 563-584.

Knoff, H., & Batsche, G.M. (1995). Project ACHIEVE: Analyzing a school-reform process for at-risk and underachieving students. School Psychology Review, 24(4), 579-603.

Lee, T.D., & Genovese, E.D. (1988). Distribution of practice in motor skill acquisition: Learning and performance effects reconsidered. Research Quarterly for Exercise and Sport, 59, 277-287.


Luke, D.A. (2004). Multilevel modeling. Thousand Oaks, CA: Sage Publications, Inc.

Marston, D., Muyskens, P., Lau, M., & Canter, A. (2003). Problem-solving model for decision making with high incidence disabilities: The Minneapolis experience. Learning Disabilities Research and Practice, 18, 187-200.

McGlinchey, M., Schallmo, K., & Goodman, S. (2006, March). Michigan's integrated behavior & learning support initiative. Paper presented at the National Association of School Psychologists Annual Convention, Anaheim, CA.

McLaughlin, J.A., & Jordan, G.B. (1999). Logic models: A tool for telling your program's performance story. Evaluation and Program Planning, 22, 65-72.

National Center for Education Statistics (2005). The condition of education 2005 (NCES 2005-094). Washington, DC: U.S. Government Printing Office.

No Child Left Behind Act, U.S.C. 115 STAT. 1426 (2002).

Noell, G.H., Duhon, G.J., Gatti, S.L., & Connell, J.E. (2002). Consultation, follow-up, and behavior management intervention implementation in general education. School Psychology Review, 31, 217-234.

Noell, G.H., & Gansle, K.A. (2006). Assuring the form has substance: Treatment plan implementation as the foundation of assessing response to intervention. Assessment for Effective Intervention, 32(1), 32-39.

Noell, G.H., Witt, J.C., Slider, N.J., Connell, J.E., Gatti, S.L., Williams, K.L., et al. (2005). Treatment implementation following behavioral consultation in schools: A comparison of three follow-up strategies. School Psychology Review, 34, 87-106.

O'Connor, R. (2000). Increasing the intensity of intervention in kindergarten and first grade. Learning Disabilities Research & Practice, 15(1), 43-54.


O'Connor, R.E., Fulmer, D., & Harty, K. (2003, December). Tiers of intervention in kindergarten through third grade. Paper presented at the National Research Center on Learning Disabilities Responsiveness-to-Intervention Symposium, Kansas City, MO.

Pajares, M.F. (1992). Teachers' beliefs and educational research: Cleaning up a messy construct. Review of Educational Research, 62(3), 307-332.

Passow, A.H. (1990). How it happened, wave by wave: Whither (or wither?) school reform? In S.B. Bacharach (Ed.), Educational reform: Making sense of it all (pp. 10-19). Needham Heights, MA: Allyn and Bacon.

President's Commission on Excellence in Special Education (2002). A new era: Revitalizing special education for children and their families (U.S. Department of Education Contract No. ED-02-PO-0791). Washington, DC: U.S. Department of Education.

Raudenbush, S.W., & Bryk, A.S. (2002). Hierarchical linear models (2nd ed.). Thousand Oaks, CA: Sage Publications.

Sarason, S.B. (1982). The culture of the school and the problem of change (Rev. ed.). Boston: Allyn and Bacon.

Sarason, S.B. (1990). The predictable failure of school reform. San Francisco: Jossey-Bass.

Showers, B., & Joyce, B. (1996). The evolution of peer coaching. Educational Leadership, 53(6), 12-16.

Showers, B., Joyce, B., & Bennett, B. (1987). Synthesis of research on staff development: A framework for future study and state-of-the-art analysis. Educational Leadership, 45(3), 77-87.


Stanovich, K.E. (1999). The sociopsychometrics of learning disabilities. Journal of Learning Disabilities, 32(4), 350-361.

Stollar, S., & Graden, J. (2006, March). Evaluation of RTI within the Ohio Integrated Systems Model. Paper presented at the National Association of School Psychologists Annual Convention, Anaheim, CA.

Stufflebeam, D.L. (2001). Evaluation models. New Directions for Evaluation, 89, 7-98.

Tilly, W.D. (2002). Best practices in school psychology as a problem-solving enterprise. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 21-36). Bethesda, MD: National Association of School Psychologists.

Tilly, W.D. (2003, December). How many tiers are needed for successful prevention and early intervention?: Heartland Area Education Agency's evolution from four to three tiers. Paper presented at the National Research Center on Learning Disabilities Responsiveness-to-Intervention Symposium, Kansas City, MO.

Torgesen, J.K. (2002). The prevention of reading difficulties. Journal of School Psychology, 40, 7-26.

Torgesen, J.K. (2005, August). Two years of data from Reading First: What we have achieved and what remains to be done. Presented at the Reading Coaches Conference, Orlando, FL.

Torgesen, J.K. (2007). Report on reduction of students identified as learning disabled in Reading First schools. Tallahassee, FL: Florida State University, Florida Center for Reading Research.


Torgesen, J.K., Alexander, A.W., Wagner, R.K., Rashotte, C.A., Voeller, K.S., & Conway, T. (2001). Intensive remedial instruction for children with severe reading disabilities: Immediate and long-term outcomes from two instructional approaches. Journal of Learning Disabilities, 34(1), 33-58, 78.

Torgesen, J.K., Wagner, R.K., Rashotte, C.A., Rose, E., Lindamood, P., Conway, T., & Garvan, C. (1999). Preventing reading failure in young children with phonological processing disabilities: Group and individual response to instruction. Journal of Educational Psychology, 91(4), 579-593.

Upah, K.R., & Tilly, W.D. (2002). Best practices in designing, implementing, and evaluating quality interventions. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 483-502). Bethesda, MD: National Association of School Psychologists.

VanDerHeyden, A.M., & Burns, M.K. (2005). Using curriculum-based assessment and curriculum-based measurement to guide elementary mathematics instruction: Effect on individual and group accountability scores. Assessment for Effective Intervention, 30(3), 15-31.

VanDerHeyden, A.M., & Jimerson, S.R. (2005). Using response to intervention to enhance outcomes for children: Screening, service delivery, and systems change. Manuscript submitted for publication.

VanDerHeyden, A.M., Witt, J.C., & Gilbertson, D. (2007). A multi-year evaluation of the effects of a Response to Intervention (RTI) model on identification of children for special education. Journal of School Psychology, 45, 225-256.


Vaughn, S. (2003, December). How many tiers are needed for response to intervention to achieve acceptable prevention outcomes? Paper presented at the National Research Center on Learning Disabilities Responsiveness-to-Intervention Symposium, Kansas City, MO.

Vaughn, S., & Fuchs, L.S. (2003). Redefining learning disabilities as inadequate response to instruction: The promise and potential problems. Learning Disabilities Research & Practice, 18(3), 137-146.

Vellutino, F.R., Scanlon, D.M., Sipay, E.R., Small, S.G., Pratt, A., Chen, R., & Denckla, M.B. (1996). Cognitive profiles of difficult-to-remediate and readily remediated poor readers: Early intervention as a vehicle for distinguishing between cognitive and experiential deficits as basic causes of specific reading disability. Journal of Educational Psychology, 88(4), 601-638.

Zins, J.E., & Ponti, C.R. (1996). The influence of direct training in problem solving on consultee problem clarification skills and attributions. Remedial and Special Education, 17, 370-376.


Appendix A
Demonstration District Mini-Grant Application and Scoring Rubric


245 245 TO: School Districts, State of Florida FROM: Florida Problem Solving/Response to Intervention Statewide Project SUBJECT: Problem-Solving/Response to Interv ention (PS/RtI) Demonstration Site Mini-Grant Application Procedures Background The No Child Left Behind Act (NCLB) and th e Individuals with Di sabilities Education Improvement Act (IDEIA) of 2004 embrace the use of Problem-Solving and Response to Intervention (Instruction) (PS/RtI) to ensu re that ALL students achieve state-approved grade-level benchmarks. In addition, the PS/RtI method has become part of the eligibility requirements for st udents with disabilities (eff ective October 13, 2006). The Florida Department of Education (F LDOE) has funded the Florida ProblemSolving/Response to Intervention Project to ensu re that all districts in Florida have access to high quality training in the skills necessary to implement this model. The Florida Problem Solving/Response to Intervention Proj ect is funded by a grant from the Florida Department of Education and is administered through the University of South Florida. The purposes of the FLDOE PS/RtI Project are twofold: 1) organize and deliver statewide training in PS/RtI and 2) evaluate the impact of the PS/RtI model on district, building and student outcomes. The evaluation of the impact of PS/RtI will take place in pilot school sites in demonstrati on districts thro ughout Florida. Demonstration districts will be selected fr om among those districts completing a MiniGrant Application. The purpose of this memo is to disseminate information regarding the Mini-Grant Application process. General Information Eligible Applicants: Any Florida public school district is eligible to apply to become a PS/RtI Demonstration District. Pilot Schools: Each district may request funding to support a maximum of six (6) pilot schools within the district. Proposed pilot school s within the district must house at least grades K-3. Demonstration dist ricts may include Reading First schools, Positive Behavior Supports schools, or schools par ticipating in other st ate or local initiativ es. The district must identify one (1) comparison school for eac h pilot school proposed in the application. The comparison school must contain the same grade levels and share similar student demographics as the pilot school(s). The comp arison school data will be used to compare the impact of the PS/RtI Project in school s with and without project implementation.


246 246 Start Date: It is estimated that initial implemen tation activities with the demonstration sites will begin in the spri ng of 2007, with full implementa tion starting with the 20072008 school year. Application Deadline: Complete applications must be received by April 1, 2007. Mail the original and 5 copies to: Judith Hyde University of South Florida 4202 E. Fowler Avenue, EDU 162 Tampa, FL 33620 No FAX or email copies of proposals will be accepted. Informational Meetings: All districts interested in co mpleting a mini-gra nt application to become a demonstration district are invited to attend one of three orientation/informational meetings to be held in the north, central, and south regions of the state (see Appendix A). Each district may send up to three people, including the individual who will be primarily responsible for facilitating the grant writing team, one administrative representative from general education and one administrative representative from special education. Each meeting is scheduled from 9:00 a.m. to 1:00 p.m. The meeting agenda will include presentations on the Florida Problem Solv ing/Response to Intervention Project, the responsibilities of participa ting districts and procedures for completing the mini-grant application. Mini-grant application requi rements are described below. District representatives are encouraged to review the application requirements prior to the meeting. A question and answer (Q and A) session will be included in each meeting. NOTE: Pre-registration is required in or der to attend one of the Informational Meetings. To pre-register, go to http://floridarti.usf.edu/biddersconference/ click on Registration, complete the form and clic k on Submit Registrati on. If you encounter any difficulties with pre-regist ration, contact Judi Hyde at JHyde@tempest.coedu.usf.edu or 813-974-7448. The schedule for these meetings is as follows: Monday, February 26 Ft. Lauderdale Embassy Suites 1100 Southeast 17th Street Directions: http://www.embassysuites.com/en/es/hotels/ mapsdirections.jhtml?ctyhocn=FLLSOES 954-527-2700 Thursday, March 1 Tallahassee Doubletree Hotel 101 S. Adams St.


247 247 Directions: http://doubletree.hilton.c om/en/dt/hotels/index.jhtml?ctyhocn=THLAPDT 850-224-5000 Monday, March 5 Orlando Orlando Airport Marriott 7499 Augusta National Drive Directions: http://marriott.com/property/propertypage/mcoap 407-851-9000 Attendance at one of the regional meetings is strongly encouraged but not required of districts planning to subm it a mini-grant application. Contact Person: For more information about applic ation procedures, contact Clark Dorman, Project Leader at Dorman@coedu.usf.edu or 813-391-3059


248 248 Overview of the Demons tration Site Project The demonstration site component of the Stat ewide PS/RtI Project is designed to provide training, technical assistance and implementa tion support to indivi dual schools within school districts. Statewide Project sta ff will conduct the training, provide technical assistance and provide other tr aining and implementation sup ports to the pilot schools. Pilot schools, in turn, will serv e as evaluation sites to determine the impact of this project on student and other distri ct and building outcomes. The demonstration site component of the Pr oject will rely on a “coaching” and “trainers” method for implementation. State Project staff will serve as the “exte rnal coaches” to the schools. Funding will be provided for districts to hire one “internal” coach for up to three (3) pilot schools. Each school will cr eate a “school-based” implementation team consisting of six to eight members that in cludes representatives of general education, special education, instructiona l support and student services The building administrator must be included as a member of the team Building teams will learn how to develop a building implementation plan. The school-b ased team and the building coach will become “trainers” and “coaches” for the building staff and will be responsible for building-wide implementation. I. Services Provided to Demonstration Sc hools by the Statewide Project Staff 1. Training and technical assistance for school-based teams to implement the Problem Solving/Response to Inte rvention model in pilot schools 2. Funding for each selected demonstration di strict for up to two coaches (one for each three schools) to complement traini ng and provide technical assistance to pilot school sites in implementing PS/R tI, data collection and analysis, and dissemination of student outcome data 3. Training of and technical assistance and support for the coaches and building administrators 4. Training, technical assistan ce and support for the use of school-based data to develop, implement and evaluate core, supplemental and intensive instruction/intervention 5. Training and technical assist ance in the use of technology to organize and display building, classroom and student-based data 6. Training and technical assist ance in the use of technol ogy to monitor intervention implementation, support data-based decisi on making and track student progress 7. Support integration of exis ting and potential state-le vel, district and school initiatives to facilitate implementation of DOE Strategic Imperative #3-Improve students’ rates of learning and Strategic Imperative #5Increase the quantity and improve the quality of education options 8. Provide web-based programs to collect a nd organize data from the demonstration sites. Internal coaches will be responsible for submitting demonstration site data to the web-based programs


249 249 II. Expectations of Demonstration Districts and Pilot Sites Each demonstration district may identify up to six (6) pilot schools and an equal number of comparison sc hools within the district In order to receive the services delineated above, districts and their pilot schools submitting an application under this project initiative must agree to the require ments set forth in “Commitments Needed for Success” in Appendix B. These incl ude certain districtand school-level administrative, curricular, financial, and personnel commitments, as well as parent involvement, data collection and reporting requirements. Each proposed pilot school must have a co mparison school that is similar to it on key demographic variables. Comparison schools will be asked only to participate in certain data collection activities, and must agree to participate in these activities. Coaches will support the collection of da ta in both pilot and comparison schools. III. Funding Each district may submit a mini-grant application for up to $100,000.00 per year in funding for a maximum of three years. The mini-grant is inte nded to support the employment of district-based coaches and tr aining activities. Districts must commit to a minimum of three years of project impleme ntation. Each application is for one year of funding. Continuing applic ations will be required each year for years 2 and 3 of the funding cycle. Continuation of funding for years 2 and 3 will be contingent on fulfillment of expectations by the district and pilot and comparison schools. Mini-Grant Application Requirements Each proposal must address each of the five components specified below in a narrative format, in the order in which they are presen ted for a) the demonstr ation district, and b) each of up to six (6) proposed pilot schools within the district. The total narrative (excluding demographic data required in item 2 below) must be double-spaced, using a 12-point font and should not exceed 25 pages in length. Documentation required in 1 and 2 below should be included in appendices to the application and do not count against the 25 page limit. 1. District and Pilot Schools Commitment : Proposals must outline specific commitments to implementing PS/RtI as a way of work and the activities (i) the district, and (ii) pilot schools will car ry out in order to meet the requirements specified in Appendix B. Letters of agreement/commitment from the following individuals must be included in the gr ant application. (See Appendix B for the minimum require d content of these letters). a) District Superintendent b) Assistant Superintendent fo r Curriculum and Instruction


250 250 c) Director of Elementary Education d) Director of Exceptiona l Student Education e) Director(s) of district/s chool-wide Reading First and Positive Behavior Support Programs (if applicable) f) Principal of each of the proposed pilot schools g) Principal of each comparison school to provide data requested by Project Staff 2. District, Pilot and Comparison Schools Demographic Data : Proposals must include an outline of the a) District demographic data (see A ppendix C“Demonstration District Demographic Profile”) b) Each proposed pilot school’s de mographic data (see Appendix D – “Demonstration Pilot School’s Demographic Profile”), and c) Each comparison school’s demographic data (see Appendix E“Comparison School Demographic Profile”) (Appendices C, D, and E outline the minimum required content for this section.) 3. Statement of Need and Expected Outcomes : Proposals must, for each pilot school a) Describe the school’s needs (parti cularly student academic and/or behavioral needs) that will be ad dressed through participation in the PS/RtI project, including specifi c gaps, barriers, or weaknesses b) Indicate how implementation of th e PS/RtI model would impact the academic and/or behavioral outcomes of students in each pilot school c) Identify measurable student and scho ol outcomes, tied to the identified needs, that will result from par ticipation as a pilot school site d) Identify outcomes for specific target populations or school goals, including over-representa tion of minority students in special programs, low-SES and LEP students a nd/or D/F school status 4. District and Pilot Schools’ Experi ence with Initiatives and Programs : Proposals must describe the district’s and each pilot school’s current and/or previous level of involvement in and extent of implementation (e.g., beginning, intermediate, fully implementing) of academic and/or beha vioral initiatives and programs (e.g., Just Read Florida, Positive Behavioral Suppor t). Include information for any reading initiatives implemented within the last five years in the district and in each proposed pilot school. Specify any existing curric ulum-based measures (e.g., DIBELS, CBMMath) or data collection tools (e.g., PMRN, SWIS, AIMSweb) currently in use. In addition, discuss any involvement the district and each proposed pilot school has had with the following FLDOE projects/initiatives:


251 251 Continuous Improvement Model (CIM) Reading First Just Read Florida Voluntary Pre-K (VPK) programs Positive Behavior Support PS/RtI Describe any other educational reform initiativ es or elements of the above initiatives in which the district or school has been involved within the past five years. 5. District Personnel Resources and Technology: Proposals must, for the district and each proposed pilot school: a) Identify personnel (e.g., teachers, stud ent support staff, and administrative staff) who will be assigned to this specific initiative at the district level and in each specific pilot school site; identify one coach for each three pilot schools b) Identify percent FTE each will be assigned c) Identify experience/qualifications to support implementation of the PS/RtI initiative d) Include a brief vita for each of the individuals identified as a potential coaches in (a) above in an appendix to the application e) Briefly describe the tech nology resources at the build ing or district levels that will be used in support of this initiative. In partic ular, describe any data management systems that will be used (See Appendix B) The Application Process Only one (1) mini-grant application will be accepted from each district. The Application Packet should include: 1) A Cover Letter from the District Superi ntendent indicating a desire for the district to participat e in the PS/RtI Project 2) The School District’s response to rele vant components of the proposal as specified under Proposal Requirements: Component 1 Dist rict Commitment Component 2 District Demographic Data Component 4 District and School Experience with Initiatives and Programs Component 5 Personnel Resources and Technology


252 252 Letters of Agreement/ Commitment as described above in sections 1.a) through 1.g) 3) Pilot Schools’ Responses – A response for each proposed pilot school (up to six schools) to relevant components of the proposal as specified under Proposal Requirements: Component 1 Pilot School Commitment Component 2 Pilot School De mographic Data and Comparison School Demographic Data Component 3 Statement of Need and Expected Outcomes for the Pilot School Component 4 Pilot School’s E xperience with In itiatives and Programs Component 5 Personnel Resources and Technology Proposal Evaluation Scoring Guide Total points awarded will be an important cons ideration in the selec tion of demonstration districts. However, it also is important that a diversity of students, schools, and districts be represented in the demonstration districts and their pilot schools. Therefore, after all applications have been evaluated against th e criteria below and have received a final score of from 0 to 175, additional factors will be considered prior to th e selection of sites. Districts and pilot schools will be selected to include sites that are diverse with respect to: 1. Size of districts (i.e., sm all, medium, and large) 2. Geographic location 3. Student population demographics 4. Inclusion of D/F schools The application from each district will be evaluated using the Proposal Evaluation Form according to the following criteria: 1. District and Pilot Schools Commitment ( 50 points ): The proposal demonstrates clear administrative, programmatic and fiscal commitment (including the required letters of commi tment) to fully implementing PS/RtI and a capacity to fulfill the demonstration site’s requirements as outlined in Appendix B. ( Note: District=20, mean ratin g across pilot schools = 30 ) 2. District and Pilot and Comparison Schools’ Demographic Data ( 30 points ): The proposal provides detailed and current demographic data for the district and each proposed pilot school as required in Appendices C, D and E respectively. It provides a clear pict ure of the district’s and pilot and


253 253 comparison schools’ status on the indicators given. (Note: District=10, mean rating across pilot schools =15, mean rating across comparison schools =5 ) 3. Statement of Need and Expected Outcomes ( 35 points ): The proposal clearly defines each pilot school’s needs that will be addressed through participation as demonstration sites and provides convincing evidence that without assistance from the project, these needs would not be met. The proposal also delineates projected st udent and school outcomes, including outcomes for specific target populations th at: a) are measurable, b) are clearly linked to the identified needs, and c) th at demonstrate an increased capacity to support students’ academic and behavioral performance in the general education environment. (Note: Mean rating ac ross pilot schools=35) 4. District and School Experience with Initiatives and Programs ( 20 points ): The proposal describes in detail the leve l of district and school involvement in academic and/or behavioral initiatives and programs, resulting in a comprehensive picture of the district’s and each pilot school’s current systemic capacity. (Note: District=10, mean ratin g across pilot schools =10) 5. District Personnel Resources and Technology ( 15 points ). The proposal clearly identifies personnel assigned to the PS/RtI initiative at a) the district level, and b) each proposed pilot school site and the percent FTE each is assigned to the initiative. It provides a clear picture of personnel qualifications and experience to support implementa tion of PS/RtI. Technology resources and a data management system to suppor t the initiative at the district and school site level are clearly delineated. ( Note: District = 6, mean rating across pilot schools =9 ) 6. Inclusion of D/F Schools ( 25 points ). D or F schools are represented among the proposed pilot school sites. Total Possible Score = 175 points


APPENDIX A
PS/RtI Regional Areas


255 APPENDIX B Commitments Required for Success Demonstration District Administration will commit to: 1. Developing and implementing a plan to en sure that general education, special education and other program personnel work together at the district level to effectuate the successful implementation of PS/RtI in the district pilot schools 2. Assigning district personnel w ith the requisite qualifications and experience to the PS/RtI initiative to support district coordination and implementation of the initiative across the pilot school sites 3. Putting in place a district-lev el leadership team to he lp pilot schools with the implementation of the PS/RtI initiative 4. Implementing evidenced-based practices to support learning of all students, including those at risk and ESE stude nts, to achieve AYP and Florida’s A+ Education Plan 5. Designating funds/resources to implement re search-based supplemental instruction and interventions to suppor t students who do not attain expected grade-level outcomes in reading and math 6. Designating resources to adequately s upport PS/RtI implementation at both the district and pilot school le vel, including faculty and staff, time, materials for screening, assessment and interventions, a nd financial support for scientificallybased progress monitoring software (e.g., AIMSweb or DIBELS) 7. Providing funds/resources (including tim e) for professional development of district-level personnel and pilot school teac hers and staff in PS/RtI, data collection and management, data analysis and interpretation 8. Having in place the technological resources and infrastructure, including personnel, and a data management system to ensure ease of access to student performance data by school level and project personnel and to s upport the PS/RtI initiative 9. Providing access to district and state-level student perf ormance data for schoollevel and project reporting purposes 10. Developing and implementing a plan to en sure parent involvement with PS/RtI efforts at the district and pilot school levels 11. Reviewing the district’s policies and pr ocedures for general and exceptional student education to ensure that they are consistent with PS/RtI Pilot School Principal and Administrative Team will commit to: 1. Implementing PS/RtI as a way of work at the pilot school site 2. Assigning personnel with the requisite quali fications and experi ence to the PS/RtI initiative to support its impleme ntation at the school site 3. Putting in place a school leadership team th at is representative of the school’s grade level faculty, support staff and pa rents (consisting of individuals with


256 collective knowledge and experience in leadership, curriculum, data-based decision-making and systems change) 4. Being active participants in the school leadership team (attend PS/RtI trainings and team meetings) 5. Providing for a regularly scheduled time and place for team meetings 6. Securing agreement from the school faculty to commit to PS/RtI Project Initiative training and practices (including identif ication and selection of appropriate scientifically-based interv entions, continuous monitoring of student progress and the systematic review of academic a nd discipline data for decision-making) 7. Developing and implementing a plan to en sure that general education, special education and other program personnel work together to effectua te the successful implementation of PS/RtI at the pilot school site 8. Allocating required resources (funds, designated time, staff) to facilitate professional development of teachers a nd other professional personnel at the school site 9. Working collaboratively with the Proj ect Coach and Regional Coordinator in implementing PS/RtI at the school site 10. Providing dedicated time and resources for the Project Coach to work with classroom teachers and other school-ba sed support personnel (as needed) to effectively support PS/RtI implem entation at the school site 11. Allocating required personnel and other re sources (e.g., teachers, administrative staff, time, materials ) for full implem entation of PS/RtI at the school site 12. Having in place adequate technology in frastructure and a data management system to support the PS/RtI initia tive at the pilot school site 13. Reallocating resources based on data outcomes 14. Budgeting funds for PS/RtI supplies, materi als, travel and substitutes for team trainings/meetings, etc. School Leadership Team will commit to: 1. Implementing a team-based, problem-solving process to provide interventions for all students at the universal, targeted and intensive levels 2. Participating in PS/RtI traini ngs and networking meetings 3. Working collaboratively with the Proj ect Coach and Regional Coordinator (as needed) to effectively implement PS/RtI at the school site 4. Meeting on a regular basis at specified tim es for school leadersh ip team meetings 5. Collecting and using student outcome data for decision-making purposes 6. Working collaboratively with parents to ensure their involvement in PS/RtI planning, training and im plementation activities 7. Using and submitting required student performance and other data (e.g., satisfaction surveys) 8. Developing an annual action plan for PS/ RtI activities based on analysis of collected data


257 APPENDIX C District Demographic Data Outline 1. Total student enrollment 2. Student enrollment By grade level By race/ethnicity By SES (use eligibility for free and reduced lunch) 3. Number and percent (of student population) of LEP students Overall By grade level 4. Number and percent of students with disabilities (elementary level) By grade By race/ethnicity By disability type Analysis of disproportionali ty in the identification of students eligible for special education, if available 5. Student performance on FCAT in reading and mathematics For all elementary level students o By grade level o By race/ethnicity For elementary level students with disabilities o By grade level o By race/ethnicity o By disability For LEP students o By grade level 6. Percent of students (at elementary level) who attained AYP in AY 2004-05 and AY 2005-06 overall by grade level by race/ethnicity SES LEP status 7. Number and percent of students retained in grade 3 based on performance on FCAT reading in AY 2004-05 AY 2005-06


258 APPENDIX D Pilot School Demographic Data Outline (To be completed for each Proposed Pilot School) 1. Grade levels served (school site mu st at least house grades K – 3) 2. Total student enrollment (re port number and percent) By grade level By race/ethnicity By SES (based on eligibility for free and reduced lunch) 3. Number and percent (of student population) of LEP students Overall By grade level 4. Number and percentage of students with disabilities By grade level By disability type By race/ethnicity Analysis of disproportionality in the id entification of student s as eligible for special education, if available 5. Number and percent of students plac ed in ESE in AY 2004-05 and AY 2005-06 By grade level By disability type By race/ethnicity 6. Educational environment/leas t restrictive environment data for students with disabilities By grade level By disability type By race/ethnicity Analysis of disproportionality in pl acement of students, if available 7. Title I status (non-Title I, Title I targeted assistan ce, or Title I school-wide) 8. Student performance on FCAT in reading and mathematics For all students By grade level By race/ethnicity For students with disabilities By grade level By race/ethnicity By disability


259 Analysis of performance gap between st udents with and without disabilities 9. Percent of students who attained AYP in AY 2004-05 and AY 2005-06 for reading and mathematics overall by grade level by race/ethnicity SES LEP status 10. Number and percent of students retained in Grade 3 based on performance on FCAT reading in AY 2004-05 AY 2005-06 11. School Grade (i.e., A through F) assigne d by FLDOE based on 2005-06 school year: _____ 12. Does your school currently have or ever had a Reading First Grant? _____Yes _____No 13. Does your school have a positive behavior support (PBS) program in place? ____ Yes ____No


260 APPENDIX E Comparison School Demographic Data Outline (To be completed for each Comparison School) 1. Identify pilot school for which school will serve as comparison 2. Grade levels served (school site mu st at least house grades K – 3) 3. Total student enrollment (re port number and percent) By grade level By race/ethnicity By SES (based on eligibility for free and reduced lunch) 4. Number and percent (of student population) of LEP students Overall By grade level 5. Number and percentage of students with disabilities By grade level By disability type By race/ethnicity Analysis of disproportionality in the id entification of student s as eligible for special education, if available 6. Number and percent of students plac ed in ESE in AY 2004-05 and AY 2005-06 By grade level By disability type By race/ethnicity 7. Educational environment/leas t restrictive environment data for students with disabilities By grade level By disability type By race/ethnicity Analysis of disproportionality in pl acement of students, if available 8. Title I status (non-Title I, Title I targeted assistan ce, or Title I school-wide) 9. Student performance on FCAT in reading and mathematics For all students By grade level By race/ethnicity For students with disabilities By grade level


261 By race/ethnicity By disability Analysis of performance gap between st udents with and without disabilities 10. Percent of students who attained AYP in AY 2004-05 and AY 2005-06 for reading and mathematics overall by grade level by race/ethnicity SES LEP status 10. Number and percent of students retained in Grade 3 based on performance on FCAT reading in AY 2004-05 AY 2005-06 11. School Grade (i.e., A through F) assigne d by FLDOE based on 2005-06 school year: _____ 12. Does your school currently have or ever had a Reading First Grant? _____Yes _____No 13. Does your school have a positive behavior support (PBS) program in place? _____Yes _____No


262 Proposal Evaluation Scoring Guide Total points awarded will be an important cons ideration in the selec tion of demonstration districts. However, it also is important that a diversity of students, schools, and districts be represented in the demonstration districts and their pilot schools. Therefore, after all applications have been evaluated against th e criteria below and have received a final score of from 0 to 175, additional factors will be considered prior to the selection of sites. Districts and pilot schools will be selected to include sites that are diverse with respect to: 1. Size of districts (i.e., small, medium, and large), 2. Geographic location, 3. Student population demographics 4. Inclusion of D/F schools Evaluate the application from each district on the Proposal Evaluation Form according to the following criteria: 1. District and Pilot Schools Commitment (50 points ): The proposal demonstrates clear administrative, programmatic and fiscal commitment (i ncluding the required letters of commitment) to fully implemen ting PS/RtI and a capacity to fulfill the demonstration site’s requirements as outlined in Appendix B. ( Note: District=20, mean rating across pilot schools = 30 ) 2. District and Pilot and Comparis on Schools’ Demographic Data ( 30 points ): The proposal provides detailed and current demographic data for the district and each proposed pilot school as required in Appendices C, D and E respectively. It provides a clear picture of the district’s and pilot and comparison schools’ status on the indicators given. (Note: District=10, mean rating across pilot schools =15, mean rating across, comparison schools =5 ) 3. Statement of Need and Expected Outcomes ( 35 points ): The proposal clearly defines each pilot school’s needs that will be addressed through participation as demonstration sites and provides convincing evidence that without assistance from the project, these needs would not be met. The proposal also delineates projected student and school outcomes, including outcomes for specific target populations that: a) are measurable, b) ar e clearly linked to th e identified needs, and c) that demonstrate an increased capacity to support students’ academic and behavioral performance in th e general education environment (Note: Mean rating across pilot schools=35) 4. District and School Experience with Initiatives and Programs (20 points): The proposal describes in detail the level of district and school involvement in academic and/or behavioral initiatives and programs, resulting in a comprehensive


263 picture of the district’s and each pilo t school’s current systemic capacity. (Note: District=10, mean rating across pilot schools =10) 5. District Personnel Re sources and Technology ( 15 points ). The proposal clearly identifies personnel assi gned to the PS/RtI initiative at a) the district level, and b) each proposed pilot school site a nd the percent FTE each is assigned to the initiative. It provides a clear picture of personnel qualifications and experience to support implementation of PS/RtI. Technology resources and a data management system to support the initiativ e at the district an d school site level are clearly delineated ( Note: District = 6, mean rating across pilot schools =9 ) 6. Inclusion of D/F Schools ( 25 points ). D or F schools are represented among the proposed pilot schools sites. Total Possible Score = 175 points
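The point structure above combines a district-level rating with mean ratings across pilot schools (and, for the demographic criterion, across comparison schools) for each criterion, plus a flat award for D/F school inclusion. The short sketch below, which is illustrative only and not part of the original application packet, shows that arithmetic with invented ratings.

    from statistics import mean

    # Hypothetical reviewer ratings for a single application; every value is invented
    # for illustration and stays within the maximum allowed for its criterion.
    district = {"commitment": 18, "demographics": 9, "experience": 8, "personnel": 5}
    pilot_schools = [
        {"commitment": 27, "demographics": 13, "need": 30, "experience": 9, "personnel": 8},
        {"commitment": 25, "demographics": 14, "need": 32, "experience": 8, "personnel": 7},
    ]
    comparison_demographics = [4, 5]   # 0-5 rating for each comparison school
    df_schools_points = 25             # awarded when D/F schools are among the proposed pilot sites

    subtotals = {
        "1. Commitment (max 50)": district["commitment"] + mean(s["commitment"] for s in pilot_schools),
        "2. Demographic data (max 30)": district["demographics"]
            + mean(s["demographics"] for s in pilot_schools)
            + mean(comparison_demographics),
        "3. Need and expected outcomes (max 35)": mean(s["need"] for s in pilot_schools),
        "4. Experience with initiatives (max 20)": district["experience"] + mean(s["experience"] for s in pilot_schools),
        "5. Personnel and technology (max 15)": district["personnel"] + mean(s["personnel"] for s in pilot_schools),
        "6. Inclusion of D/F schools (max 25)": df_schools_points,
    }
    total = sum(subtotals.values())    # 0-175 points

    for criterion, points in subtotals.items():
        print(f"{criterion}: {points:.1f}")
    print(f"Total points awarded: {total:.1f}")

Subtotals computed this way correspond to the hand-tallied subtotals on the Proposal Evaluation Form that follows.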


264 Proposal Evaluation Form School District: ____________________ Reviewer: ____________________ Date of Review: ____________________ Refer to the Proposal Evaluation Scoring Guide for an explanation of factors to be considered in evaluating each of the following areas: 1. District and Pilot Schools Commitment (Total Possible Points = 50) District Rating (0 to 20 Points) _____ Pilot Schools (0 to 30 Points Each) 1. _____ 2. _____ 3. _____ 4. _____ 5. _____ 6. _____ Mean Pilot School Rating (0 to 30 Points) _____ Subtotal Points Awarded (District plus Mean Pilot Schools) = Comments : 2. District and Pilot and Comparison Schools’ Demographic Data (Total Possible Points = 30) District Rating (0 to 10 Points) _____ Pilot Schools (0 to 15 Each) Comparison Schools (0 to 5 Each) 1. _____ 1. _____ 2. _____ 2. _____ 3. _____ 3. _____ 4. _____ 4. _____ 5. _____ 5. _____ 6. _____ 6. _____


Mean Pilot School Rating (0 to 15) _____
Mean Comparison School Rating (0 to 5) _____
Subtotal Points Awarded (District, plus Mean Pilot, plus Mean Comparison) = _____
Comments:

3. Statement of Need and Expected Outcomes (Total Possible Points = 35)
Pilot School Ratings (0 to 35 Each): 1. _____ 2. _____ 3. _____ 4. _____ 5. _____ 6. _____
Subtotal Points Awarded (Mean Rating for Pilot Schools) = _____
Comments:

4. District and School Experience with Initiatives and Programs (Total Possible Points = 20)
District Rating (0 to 10 Points) _____
Pilot School Ratings (0 to 10 Points Each): 1. _____ 2. _____ 3. _____ 4. _____ 5. _____ 6. _____
Mean Pilot School Rating (0 to 10) _____
Subtotal Points Awarded (District plus Mean for Pilot Schools) = _____
Comments:

5. District Personnel Resources and Technology (Total Possible Points = 15)
District Rating (0 to 6 Points) _____
Pilot School Ratings (0 to 9 Points Each): 1. _____ 2. _____ 3. _____ 4. _____ 5. _____ 6. _____
Mean Pilot School Rating (0 to 9) _____
Subtotal Points Awarded (District plus Mean for Pilot Schools) = _____
Comments:

6. Inclusion of D/F Schools (Total Possible Points = 25)
Subtotal Points Awarded = _____

Total Application Points Awarded:
Criterion Area: 1. _____ 2. _____ 3. _____ 4. _____ 5. _____ 6. _____
TOTAL POINTS AWARDED (0 to 175) = _____

SIZE OF DISTRICT (Small, Medium, Large) _________
GEOGRAPHIC REGION _________

Appendix B

Problem-Solving/Response-to-Intervention Project Implementation Plan

Project Administration Components, by year: Year 1 (7/1/06 – 9/30/07); Year 2 (8/1/07 – 7/31/08), Pilot Year 1; Year 3 (7/1/08 – 6/30/09), Pilot Year 2; Year 4 (7/1/09 – 6/30/10), Pilot Year 3; Year 5 (7/1/10 – 6/30/11)

1. Infrastructure
Hired personnel (As Needed in Years 2-5):
Project Leaders 7/06
Graduate Assistants 8/06
Program Evaluator 8/06
Technical Support 8/06
3 Regional Coordinators 1/07
Program Assistant 3/07
Coaches hired/identified by districts 6/07
DOE Leadership team identified 6/07
Personnel Evaluations 6/07; 6/08; 6/09; 6/10; 6/11

2. District Finance & Administration
Minigrants: Establish application process 1/07

Project Administration Components (continued)

Conduct Bidder's Conferences 2-3/07
Review District/school applications and select districts 4/07
Establish contracts 5-7/07; 5-7/08; 5-7/09
Establish billing schedule and criteria for district payments 6/07
Reapplication process (Years 3 and 4):
Develop Application Protocol 3/08 (NA thereafter)
Notify districts 3/08; 3/09
Review reapplications 4/08; 4/09
Finalize renewal of district/school grants 5/08; 5/09

Project Administration Components (continued)

3. DOE Submissions & Reports
Quarterly reports 3/31, 6/30, 9/30, 12/31 (each year)
Renewal of DOE grant 6/06; 6/07; 6/08; 6/09; 6/10

Training and Technical Assistance Components, by year: Year 1 (7/1/06 – 6/30/07); Year 2 (7/1/07 – 6/30/08), Pilot Year 1; Year 3 (7/1/08 – 6/30/09), Pilot Year 2; Year 4 (7/1/09 – 6/30/10), Pilot Year 3; Year 5 (7/1/10 – 6/30/11)

1. Training
Gather/review modules from other states 3/07
Conduct Regional Coordinators Coaching Training 6/07
Develop coaches' training modules – Year 1, 6/07
Organize summer training for coaches 6/07
Deliver 5-day coaches training 7/9-13/07; 7/08; 07/09
Develop Needs Assessment (school sites) 6/07
Conduct Needs Assessment (school sites) 8/07; 8/08; 8/09
District- and school-based personnel trainings – Session 1 (each pilot year):
Develop school- and district-based personnel training modules for first 3 days – Year 1, 08/07; Year 2, 08/08; Year 3, 08/09

Training and Technical Assistance Components (continued)

Schedule and arrange training sessions for each district – Session 1: 07/07; 07/08; 07/09
Deliver Session 1 training (3 days) – 09/07; 09/08; 09/09
District- and school-based trainings – Session 2 (each pilot year):
Develop school- and district-based personnel training modules for day 4 (Session 2) – Year 1, 12/07; Year 2, 12/08; Year 3, 12/09
Schedule and arrange training sessions for each district – Session 2: 11/07; 11/08; 11/09
Deliver Session 2 training (1 day) – 1/08; 1/09; 1/10

Training and Technical Assistance Components (continued)

District- and school-based training – Session 3 (each pilot year):
Develop school- and district-based personnel trainings for day 5 (Session 3) – Year 1, 3/08; Year 2, 3/09; Year 3, 3/10
Schedule and arrange training sessions for each district – Session 3: 1/08; 1/09; 1/10
Deliver Session 3 training (1 day) 3/08; 3/09; 3/10
Organize summer training for coaches 6/08; 6/09
Develop coaches' training modules – Year 2, 6/08; Year 3, 6/09
Supplemental trainings for new personnel – As Needed (each pilot year)

Training and Technical Assistance Components (continued)

2. Technical Assistance (N/A in Year 1)
Monthly regional TA meetings with coaches facilitated by Regional Coordinators (each pilot year):
Schedule and arrange TA sessions with coaches – by the 15th of the preceding month
Determine TA focus/content for sessions
Deliver TA session
Quarterly district TA meetings with district leadership and coaches facilitated by Regional Coordinators (each pilot year)

Training and Technical Assistance Components (continued)

Schedule and arrange TA sessions with district team members and coaches:
Year 2 – Schedule first meeting at AO meetings 06/07, schedule next 3 at 09/07 meeting, attempt to schedule first meeting for Year 3 at fourth quarter meeting
Year 3 – Schedule last 3 quarterly meetings at first quarter meeting, attempt to schedule first meeting for Year 4 at fourth quarter meeting
Year 4 – Schedule last 3 quarterly meetings at first quarter meeting
Determine TA focus/content for sessions (each pilot year)
Deliver TA session (each pilot year)

Training and Technical Assistance Components (continued)

Weekly TA meetings with school-based leadership facilitated by coaches (Regional Coordinator attendance optional) (each pilot year):
Schedule and arrange TA sessions with school-based teams
Determine TA focus/content for sessions
Deliver TA session
Quarterly statewide coaches meetings (each pilot year)

Training and Technical Assistance Components (continued)

Schedule and arrange TA sessions with coaches – Immediately following scheduling of the quarterly district leadership meetings, schedule quarterly meetings for coaches for the remainder of the year (each pilot year)
Provide technology training and determine other TA focus/content for sessions (each pilot year)
Deliver TA session (each pilot year)

Training and Technical Assistance Components (continued)

Check with district leadership teams at AO meetings regarding the possibility of having a statewide meeting of district leadership teams (first pilot year); Statewide district leadership meetings? (subsequent years)
Ask school administrators about the helpfulness of district and/or regional school administrator meetings (first pilot year); Regional school administrator meetings? (subsequent years)

Communications Components, by year: Year 1 (7/1/06 – 6/30/07); Year 2 (7/1/07 – 6/30/08), Pilot Year 1; Year 3 (7/1/08 – 6/30/09), Pilot Year 2; Year 4 (7/1/09 – 6/30/10), Pilot Year 3; Year 5 (7/1/10 – 6/30/11)

1. Quarterly Newsletter
Developed plan for distribution – 5/07
Write and distribute first newsletter – 6/15/07
Contact Project staff for newsletter content and commitments to write sections (Judi) – 08/01, 11/01, 02/01, and 05/01 each year (Years 2-5)
Project staff writes and sends sections to Judi for preparation – 09/01, 12/01, 03/15, and 06/01 each year (Years 2-5)
Dissemination of newsletter to stakeholder groups (see Communication Matrix; Judi) – 09/15, 12/15, 03/15, and 06/15 each year (Years 2-5)

2. Weekly Email Updates
Developed plan for distribution 5/07
Contact Project staff for email update content (Judi) – Monday of each week (Years 2-5)

Communications Components (continued)

Suggestions for content to Judi – Wednesday of each week (Years 2-5)
Email update written and distributed to stakeholders (see Communications Matrix; Judi) – Thursday of each week (Years 2-5)

3. Website
Initial website created and operational – 03/07
Content updated periodically; redesign of website started
Create plan for review and update of website – 5/07
Review and revise website content by the 15th of each month (Judi) (Years 2-5)

Communications Components (continued)

4. List Serves
Plan developed for creation of list serves – 5/07
Create list serves (see Communications Matrix; Judi) – 07/08
Update list serves (see Communications Matrix; Judi) – 07/09; 07/10; 07/11

5. Boilerplate Articles
Make contacts with state associations by 6/15/07 (see Communications Matrix; Judi)
Send article providing overview of Project and demonstration districts to state associations by 6/30/07 (see Communications Matrix; Mike)
Determine focus of annual article and identify author – 5/01/08; 5/01/09; 5/01/10; 5/01/11
Write and send articles to Judi – 6/1/08; 6/1/09; 6/1/10; 6/1/11
Disseminate articles to stakeholders – 6/15/08; 6/15/09; 6/15/10; 6/15/11

6. Statewide PS/RtI Conference
Create Conference Planning Team 10/07
Develop plan for statewide conference – 11/08; 11/09; 11/10

Communications Components (continued)

6. Statewide PS/RtI Conference (continued)
Develop plan for statewide conference – 11/07
Schedule and organize statewide conference (each year)
Hold conference – 6/08? 6/09? 6/10? 6/11?

7. Other Conferences
Develop comprehensive conference presentation plan with DOE staff 7/07
Team participation in Innovations Conference – 09/07; 09/08; 09/09; 09/10
Present at AMM – 09/07; 09/08; 09/09; 09/10
Discussion of priorities for presentation of Project information – 11/07; 11/08; 11/09; 11/10

Communications Components (continued)

8. Collaboration with other State Projects
On-going meetings held with FCRR, PBS, and VPK (Year 1); continue on-going meetings with FCRR, PBS, and VPK (Years 2-5)
Have Project Leadership Team meeting to discuss collaboration with other State Projects – 09/07

Evaluation Components, by year: Year 1 (7/1/06 – 6/30/07); Year 2 (7/1/07 – 6/30/08), Pilot Year 1; Year 3 (7/1/08 – 6/30/09), Pilot Year 2; Year 4 (7/1/09 – 6/30/10), Pilot Year 3; Year 5 (7/1/10 – 6/30/11)

1. Planning
Drafted evaluation plan – 12/06
Review and update evaluation plan – 6/08; 6/09; 6/10

2. Instrumentation
Gathered instruments from other states' evaluation models – 4/07
Developed drafts of measures (see Evaluation Tool List) – 5/07
Complete Expert Validation Panel process for Project participant surveys (see Evaluation Tool List) – 6/07
Complete Validation Panel process for parent survey & RtI Needs Assessment – 06/07
Finalize drafts of evaluation measures (see Evaluation Tool List) – 7/07
Revise and/or develop new evaluation measures – 7/08; 7/09

Evaluation Components (continued)

Pilot test instruments developed and revised as needed – 7/07
Complete web-based databases – 6/07
Update web-based databases (As Needed, Years 2-5):
School level data
Training survey data
Training/TA logs
Student level outcome data
Intervention integrity?

3. Data Collection & Analysis
Developed timeline for data collection – 5/07
Discuss baseline data elements to be gathered from pilot districts, pilot schools & comparison schools – 6/07
Collect baseline data from pilot & comparison schools

Evaluation Components (continued)

Collect data from coaches training (each pilot year)
Collect data from pilot and comparison schools (see Data Collection Rubric) (each pilot year)
Develop plan for conducting data analyses – 6/07
Conduct and interpret analyses (see Data Analysis Plan) (Years 2-5)

4. Reporting
Identify stakeholders who will receive reports
Develop plan for reporting data to stakeholders – 6/07
Provide reports to stakeholders (see Data Reporting Plan) (Years 2-5):
Project Leadership Team (by 3/31, 6/30, 9/30, 12/31)
DOE Project Liaison (Quarterly report data; 3/15, 6/15, 9/15, 12/15)

Evaluation Components (continued)

Regional Coordinators (by end of each month; each pilot year)
Statewide conference participants (Years 2-5)
Annual report (6/30) – Years 2-4
Final report (7/30) – Year 5

Appendix C

Problem-Solving/Response-to-Intervention Project Evaluation Rubric

Demonstration Site Evaluation Rubric Draft – 8/6/07

Component | Evaluation Questions | Data Source | Method | Collection Timeline | Personnel Responsible

Component: Input – Pilot Districts and Schools
Evaluation Questions:
1. What were the demographic profiles of students attending the pilot (1) districts and (2) schools? Categories to be examined by grade level include:
a. Race/ethnicity (i.e., Caucasian, Black, Hispanic, Asian/Pacific Islander, Native American/Alaskan Native, & Multiracial)?
b. Gender?
c. Free-reduced lunch status?
d. Disability status?
e. English language learner status?
2. To what degree did pilot (1) districts and (2) schools reach consensus regarding participation in the PS/RtI Project?
3. What was the demographic profile of staff at the project and comparison schools and to what extent did turnover occur?
4. To what degree was the infrastructure necessary to support implementation of PS/RtI (e.g., personnel, technology, financial resources, professional development structures, academic and behavioral programs, policies/procedures) present in pilot: a. Districts? b. Schools?
Data Sources: 1. School records 2. District and school personnel 3. Coaches and GAs 4. District leadership teams, school-based teams, and coaches
Methods: 1. Records review; district application 2. District application; Modified RtI Needs Assessment 3. Records review from district and school records 4. District application; Modified RtI Needs Assessment; Interviews
Collection Timeline: 1-4. See Data Collection Rubric
Personnel Responsible: 1. District data contact 2. Coaches collect data and provide to a GA to upload 3. District data contact 4. Coaches collect data and provide to a GA to upload

Component: Input – Coaches
Evaluation Questions:
5. To what degree did coaches in the pilot districts meet the requisite qualifications?
6. To what extent did coaches demonstrate coaching and PS/RtI skills?
Data Sources: 5. Coaches and district personnel 6. Coaches
Methods: 5. Coaches' vita; district application 6. Coaching Analogue Assessment; Direct Skill Assessments
Collection Timeline: 5. See Data Collection Rubric 6. Coaches Training
Personnel Responsible: 5. TBD 6. Regional coordinators collect data; scoring and entry TBD

Component: Process – PS/RtI Training
Evaluation Questions:
7. To what extent was training provided to each of the following key stakeholders: a. District leadership teams? b. School-based teams? c. Coaches?
8. To what extent were the following key stakeholders satisfied with the quality of the training: a. District leadership teams? b. School-based teams? c. Coaches?
9. To what extent were the following key stakeholders satisfied with the training content/materials: a. District leadership teams? b. School-based teams? c. Coaches?
Data Sources: 7. Regional coordinators and coaches 8. District leadership teams, school-based teams, and coaches 9. District leadership teams, school-based teams, and coaches
Methods: 7. Regional Coordinator Training Log; Coaches Training Log; Attendance Log 8. Training Evaluation Survey 9. Training Evaluation Survey
Collection Timeline: 7-9. See Data Collection Rubric
Personnel Responsible: 7. Regional coordinators & coaches track and upload data via web-based screen 8. Regional coordinators & coaches collect data and provide to a GA to upload 9. Regional coordinators & coaches collect data and provide to a GA to upload

Component: Process – Technical Assistance & Communication
Evaluation Questions:
10. To what extent was technical assistance provided to: a. District leadership teams? b. School-based teams? c. Coaches?
11. To what extent were the following key stakeholders satisfied with the technical assistance and communication provided by the project: a. District leadership teams? b. School-based teams? c. Coaches?
Data Sources: 10. Regional coordinators and coaches 11. District leadership teams, school-based teams, and coaches
Methods: 10. Regional Coordinator Technical Assistance Log; Coaches Technical Assistance Log 11. Technical Assistance Evaluation Survey; Coaches Evaluation Survey
Collection Timeline: 10-11. See Data Collection Rubric
Personnel Responsible: 10. Regional coordinators & coaches track and upload data via web-based screen 11. Regional coordinators & coaches collect data and provide to a GA to upload

Component: Output – Consensus
Evaluation Questions:
12. What was the impact of the Project on the level of consensus for: a. District leadership teams? b. School-based teams? c. Other school personnel?
13. What was the impact of the project on the following key stakeholders' beliefs about PS/RtI: a. District leadership teams? b. School-based teams? c. Other school personnel?
14. To what extent were the following key stakeholders satisfied with service delivery in the PS/RtI model? a. District leadership teams? b. School-based teams? c. Other school personnel? d. Parents?
15. To what extent were the following key stakeholders satisfied with student and systemic outcomes in the PS/RtI model? a. District leadership teams? b. School-based teams? c. Other school personnel? d. Parents?
Data Sources: 12-15. District leadership teams, school-based teams, and school personnel
Methods: 12. Modified RtI Needs Assessment 13. Beliefs Survey 14. School Personnel Satisfaction Survey; Parent Satisfaction Survey 15. School Personnel Satisfaction Survey; Parent Satisfaction Survey
Collection Timeline: 12-15. See Data Collection Rubric
Personnel Responsible: 12. Coaches collect data and provide to GAs to upload 13. Regional coordinators & coaches collect data and provide to a GA to upload 14. Regional coordinators & coaches collect data and provide to a GA to upload 15. Regional coordinators & coaches collect data and provide to a GA to upload

Component: Output – Infrastructure
Evaluation Questions:
16. What was the impact of the project on creating the infrastructure to support implementation of PS/RtI at the: a. District level? b. School level?
Data Sources: 16. District leadership teams, school-based teams, and coaches
Methods: 16. Modified RtI Needs Assessment; Interviews
Collection Timeline: 16. See Data Collection Rubric
Personnel Responsible: 16. Coaches collect data and provide to a GA to upload

Component: Output – Implementation
Evaluation Questions:
17. What was the impact of the project on the PS/RtI skills of the following key stakeholders: a. Coaches? b. District leadership teams? c. School-based teams? d. Other school personnel?
18. What was the impact of the project on pilot school implementation of PS/RtI practices (e.g., core curriculum fidelity, intervention practices and fidelity, problem-solving team procedures, assessment practices)?
Data Sources: 17. Coaches, district leadership teams, school-based teams, and other school personnel 18. Coaches, school-based teams, and other school personnel
Methods: 17. Perceptions of Skills Survey; Direct Skill Assessments; Neutral Interviews; Taped observation 18. Perceptions of Practices Survey; Modified RtI Needs Assessment; Critical Components Checklists; Problem-Solving Team Checklists; Intervention Integrity Log; Anecdotal records
Collection Timeline: 17-18. See Data Collection Rubric
Personnel Responsible: 17. Regional coordinators & coaches collect data and provide to a GA to upload 18. Regional coordinators & coaches collect data and provide to a GA to upload

Component: Output – Student Outcomes
Evaluation Questions:
19. What was the impact of implementing PS/RtI on (1) reading and (2) math achievement: a. For all students? b. By race/ethnicity (i.e., Caucasian, Black, Hispanic, Asian/Pacific Islander, American Indian/Alaskan Native, & Multiracial)? c. By gender? d. By free-reduced lunch status? e. By disability status? f. By English language learner status?
20. What was the impact of implementing PS/RtI on behavioral outcomes: a. For all students? b. By race/ethnicity (i.e., Caucasian, Black, Hispanic, Asian/Pacific Islander, American Indian/Alaskan Native, & Multiracial)? c. By gender? d. By free-reduced lunch status? e. By disability status? f. By English language learner status?
Data Sources: 19. School records 20. School records
Methods: 19. FCAT; SAT10; CBM; DIBELS; District assessments 20. Permanent products from interventions
Collection Timeline: 19-20. See Data Collection Rubric
Personnel Responsible: 19. District data contact will provide to Project staff 20. TBD

Component: Output – Systemic Outcomes
Evaluation Questions:
21. What was the impact of implementing PS/RtI on office discipline referrals: a. For all students? b. By race/ethnicity (i.e., Caucasian, Black, Hispanic, Asian/Pacific Islander, American Indian/Alaskan Native, & Multiracial)? c. By gender? d. By free-reduced lunch status? e. By disability status? f. By English language learner status?
22. What was the impact of implementing PS/RtI on special education referrals, evaluations, and placements: a. For all students? b. By race/ethnicity? c. By gender? d. By free-reduced lunch status? e. By disability status? f. By English language learner status?
23. What was the impact of implementing PS/RtI on student attendance: a. For all students? b. By race/ethnicity? c. By gender? d. By free-reduced lunch status? e. By disability status? f. By English language learner status?
24. What was the impact of implementing PS/RtI on retention rates: a. For all students? b. By race/ethnicity? c. By gender? d. By free-reduced lunch status? e. By disability status? f. By English language learner status?
25. What was the impact of implementing PS/RtI on costs for: a. Training? b. Materials? c. Personnel? d. Technology? e. Other?
Data Sources: 21. School records 22. School records 23. School records 24. School records 25. District, school, and project records
Methods: 21. Records review of ODRs 22. Records review 23. Records review 24. Records review 25. Records review
Collection Timeline: 21-25. See Data Collection Rubric
Personnel Responsible: 21-24. District contact or coach will collect and provide to Project staff 25. TBD

Component: Contextual Factors
Evaluation Questions:
26. How does school climate/culture impact implementation of PS/RtI?
27. How does leadership impact implementation of PS/RtI?
Data Sources: 26. School personnel, coaches, and school records 27. District and school administrators, and school records
Methods: 26. Beliefs Survey; Interviews; RtI Needs Assessment; Critical Components Checklists; Problem-Solving Team Checklists 27. Beliefs Survey; Interviews; RtI Needs Assessment; Critical Components Checklists; Problem-Solving Team Checklists
Collection Timeline: 26-27. See Data Collection Rubric
Personnel Responsible: 26-27. Coaches and Regional Coordinators

Component: External Factors
Evaluation Questions:
28. How does legislation (e.g., NCLB, IDEIA) impact implementation of PS/RtI?
29. How do state and district policies impact implementation of PS/RtI?
Data Sources: 28. District and school personnel, school records, legislation 29. District and school personnel, state and district policy records
Methods: 28. NCLB and IDEIA; RtI Needs Assessment; Critical Components Checklists; Problem-Solving Team Checklists 29. State and district regulations; RtI Needs Assessment; Critical Components Checklists; Problem-Solving Team Checklists; Questionnaire
Collection Timeline: 28-29. See Data Collection Rubric
Personnel Responsible: 28-29. Coaches and Regional Coordinators; Other?

Component: Goals & Objectives
Evaluation Questions:
30. How do the goals and objectives of schools (i.e., content area and grade levels targeted) impact implementation of PS/RtI?
31. How do the goals and objectives of schools (i.e., content area and grade levels targeted) impact student and systemic outcomes?
Data Sources: 30. District and school personnel, and school records 31. District and school personnel, and school records
Methods: 30. Grant applications; Interviews; RtI Needs Assessment; Critical Components Checklist; Coaches Observation Checklist 31. FCAT; SAT10; CBM; DIBELS; District assessments; ODRs; Grant application; Interviews; RtI Needs Assessment
Collection Timeline: 30-31. See Data Collection Rubric
Personnel Responsible: 30-31. Coaches and Regional Coordinators; Others?
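Evaluation questions 19 through 24 of the rubric call for student and systemic outcomes to be reported for all students and then disaggregated by race/ethnicity, gender, free/reduced lunch status, disability status, and English language learner status. The sketch below shows one minimal way such a disaggregation could be computed from a student-level file. It is illustrative only: the column names (e.g., "fcat_reading_dss") and the shape of the input data are hypothetical, and the actual student records, scales, and identifiers are district-specific.

# Illustrative sketch of the subgroup disaggregation described in
# evaluation questions 19-24. Column names are hypothetical.
import pandas as pd

def disaggregate(df: pd.DataFrame, outcome: str) -> pd.DataFrame:
    """Return the mean outcome and count for all students and for each
    subgroup defined by the rubric's demographic categories."""
    groups = ["race_ethnicity", "gender", "frl_status",
              "disability_status", "ell_status"]
    rows = [{"group": "all students", "level": "all",
             "mean": df[outcome].mean(), "n": len(df)}]
    for g in groups:
        summary = df.groupby(g)[outcome].agg(["mean", "count"]).reset_index()
        for _, r in summary.iterrows():
            rows.append({"group": g, "level": r[g],
                         "mean": r["mean"], "n": r["count"]})
    return pd.DataFrame(rows)

# Example (hypothetical column name):
# disaggregate(student_df, "fcat_reading_dss")

The same pattern applies to the behavioral and systemic indicators (office discipline referrals, special education referrals, attendance, retention) by substituting the relevant outcome column.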


Appendix D

Example Validation Forms

Problem-Solving/Response-to-Intervention Beliefs Survey
Content Validation – Item Content and Clarification Rating Form

Directions: The Problem-Solving/Response-to-Intervention Beliefs Survey is intended to capture the degree to which school and district personnel possess the beliefs necessary for successful implementation of the Problem-Solving/Response-to-Intervention (PS/RtI) model. The items on the survey are designed to assess the beliefs of school and district personnel in one or more of the following domains: overall educational philosophy, assessment practices, core instruction, intervention, and special education eligibility determination. Florida PS/RtI Project staff will use the data derived from the survey to inform the services provided to schools.

A good survey is concise, contains clearly and accurately written items that relate to the purpose of the survey, and avoids duplicate items. To evaluate the degree to which the attached survey meets these criteria, please rate each item on the basis of appropriateness of content, necessity, and clarity. Read each question carefully and rate it by circling one or more of the following descriptors:

G = Good (Item is clearly and accurately written)
R = Redundant (There are items with similar content and meaning)
N = Nonessential (The content is not related to any of the five PS/RtI belief domains)
PW = Poorly Written (Item has semantic or grammatical errors)
A = Ambiguous (Item has abstract or vague content, or double-barreled items that ask two questions in one statement)

If you have found an item to be problematic (i.e., you circled R, N, PW, or A), please provide suggestions by rewriting the item in the space below, or write "Delete item" if you believe the item does not address beliefs related to PS/RtI.

This survey will be completed by school and district personnel participating in PS/RtI training across the state of Florida. Respondents will be asked to rate the degree to which they agree with each PS/RtI belief on a 5-point continuum of strongly disagree to strongly agree. For your information, school and district personnel will use the following ratings:

1 = Strongly Disagree
2 = Disagree
3 = Neutral
4 = Agree
5 = Strongly Agree

Problem-Solving/Response-to-Intervention Beliefs Survey

G = Good   R = Redundant   N = Nonessential   PW = Poorly Written   A = Ambiguous

Essential PS/RtI Beliefs | Content and Clarity Ratings

1. I believe in the philosophy of No Child Left Behind (NCLB) even if I disagree with some of the requirements. G R N PW A
Rewrite: ___________________________________________________________________
2. Core instruction should be effective enough to result in 80% of the students achieving benchmarks in reading and math. G R N PW A
Rewrite: ___________________________________________________________________
3. The primary function of supplemental instruction is to ensure that students meet grade-level benchmarks in reading and math. G R N PW A
Rewrite: ___________________________________________________________________
4. The majority of students with learning disabilities achieve grade-level benchmarks in reading and math. G R N PW A
Rewrite: ___________________________________________________________________
5. The majority of students with behavioral problems (EH/SED) achieve grade-level benchmarks in reading and math. G R N PW A
Rewrite: ___________________________________________________________________
6. Students with disabilities who are receiving special education services are capable of achieving grade-level benchmarks in reading and math. G R N PW A
Rewrite: ___________________________________________________________________
7. General education teachers should implement more differentiated and flexible curricula to address the needs of a more diverse student body. G R N PW A
Rewrite: ___________________________________________________________________
8. General education classroom teachers would be able to implement more differentiated and flexible interventions if they had additional staff support. G R N PW A
Rewrite: ___________________________________________________________________
9. The availability of additional interventions in the general education classroom would result in success for more students. G R N PW A
Rewrite: ___________________________________________________________________
10. Prevention activities and early intervention strategies in schools would result in fewer referrals to problem-solving teams and placements in special education. G R N PW A
Rewrite: ___________________________________________________________________
11. The severity of a student's problem is determined not by how far behind (or inappropriate) a student is but by how quickly a student responds to intervention. G R N PW A
Rewrite: ___________________________________________________________________
12. The results of IQ and achievement testing can be used to identify effective interventions for students with learning and behavior problems. G R N PW A
Rewrite: ___________________________________________________________________

13. Many students currently identified as "LD" do not have a disability, but came to school "not ready" or got too far behind for the available interventions to close the gap sufficiently. G R N PW A
Rewrite: ___________________________________________________________________
14. Using student-based data to determine intervention effectiveness is more accurate than using "teacher judgment." G R N PW A
Rewrite: ___________________________________________________________________
15. Evaluating a student's response to interventions is a more effective way of determining what a student is capable of than using scores from "tests" (e.g., IQ/Achievement). G R N PW A
Rewrite: ___________________________________________________________________
16. Time and resources should be given first to students who are not reaching benchmarks before significant time and resources are directed to students who are at or above benchmark. G R N PW A
Rewrite: ___________________________________________________________________
17. It is easier for me to make decisions about student performance and needed interventions when the student data are graphed. G R N PW A
Rewrite: ___________________________________________________________________
18. Parents should be involved in the problem-solving process as soon as a teacher has a concern about a particular student. G R N PW A
Rewrite: ___________________________________________________________________
19. Students respond better to interventions when the parent is involved in the development and implementation of those interventions. G R N PW A
Rewrite: ___________________________________________________________________
20. All students can achieve grade-level benchmarks if they have sufficient support. G R N PW A
Rewrite: ___________________________________________________________________

If you believe that there are other important questions not addressed in this survey that would help identify the degree to which school and district personnel possess the beliefs necessary to implement the PS/RtI model, please list them below and state the domain (i.e., overall educational philosophy, assessment practices, core instruction, intervention, and special education eligibility determination) that it characterizes:
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________

Thank you for your assistance with this important step in validating a measure to capture the beliefs of school and district personnel as they relate to PS/RtI.

Perceptions of Skills Survey
Content Validation – Item Content and Clarification Rating Form

Directions: The Perceptions of Skills Survey is intended to capture the degree to which school and district personnel perceive that they have the skills needed to function within a Problem-Solving/Response-to-Intervention (PS/RtI) model. The items on the survey are designed to assess school and district personnel perceptions about their skills in one or more of the following domains: data-based decision-making, tiered service delivery, the problem-solving process, data collection procedures, technology use, and special education eligibility determination. Florida PS/RtI Project staff will use the data derived from the survey to inform the services provided to schools.

A good survey is concise, contains clearly and accurately written items that relate to the purpose of the survey, and avoids duplicate items. To evaluate the degree to which the attached survey meets these criteria, please rate each item on the basis of appropriateness of content, necessity, and clarity. Read each question carefully and rate it by circling one or more of the following descriptors:

G = Good (Item is clearly and accurately written)
R = Redundant (There are items with similar content and meaning)
N = Nonessential (The content is not related to any of the PS/RtI skill domains)
PW = Poorly Written (Item has semantic or grammatical errors)
A = Ambiguous (Item has abstract or vague content, or double-barreled items that ask two questions in one statement)

If you have found an item to be problematic (i.e., you circled R, N, PW, or A), please provide suggestions by rewriting the item in the space below, or write "Delete item" if you believe the item does not address skills needed in a PS/RtI model.

This survey will be completed by school and district personnel participating in PS/RtI training across the state of Florida. Respondents will be asked to rate the degree to which they possess each skill on a 5-point continuum of "I do not have this skill at all" to "I could teach others this skill." For your information, school and district personnel will use the following ratings:

1 = I do not have this skill at all
2 = I need substantial support to use this skill
3 = I have this skill, but still need some support
4 = I can use this skill with little support
5 = I could teach others this skill

Perceptions of Skills Survey

G = Good   R = Redundant   N = Nonessential   PW = Poorly Written   A = Ambiguous

Skills | Content and Clarity Ratings

1. I know how to access the data necessary to determine the percent of students in core instruction who are achieving benchmarks in: a. Academics b. Behavior — G R N PW A
Rewrite: ___________________________________________________________________
2. I have the skill to use the data to make decisions about the effectiveness of the core curriculum for individuals and groups of students for: a. Academics b. Behavior — G R N PW A
Rewrite: ___________________________________________________________________
3. Please rate your skill level on each of the following steps in the problem identification (i.e., referral reason) stage of problem-solving:
a. Defining the referral concern in terms of a replacement behavior (what you want the student to be able to do) instead of a referral problem for: 1. Academics 2. Behavior — G R N PW A
Rewrite: ___________________________________________________________________
b. Using data to define the current level of performance for the target student for: 1. Academics 2. Behavior — G R N PW A
Rewrite: ___________________________________________________________________
c. Determining the desired level of performance (i.e., benchmark) for: 1. Academics 2. Behavior — G R N PW A
Rewrite: ___________________________________________________________________
d. Determining current level of peer performance on the same behavior as the target student for: 1. Academics 2. Behavior — G R N PW A
Rewrite: ___________________________________________________________________
e. Calculating the gap between student performance and the benchmark for: 1. Academics 2. Behavior — G R N PW A
Rewrite: ___________________________________________________________________
f. Using gap data to determine whether core instruction should be modified or whether supplemental instruction should be directed to the target student for: 1. Academics 2. Behavior — G R N PW A
Rewrite: ___________________________________________________________________
4. I have the skill to identify the appropriate supplemental intervention in my building for a student identified as at-risk for: a. Academics b. Behavior — G R N PW A
Rewrite: ___________________________________________________________________

5. I have the skill to develop potential reasons (i.e., hypotheses) why a student or group of students is/are not achieving desired levels of performance (i.e., benchmarks) for: a. Academics b. Behavior — G R N PW A
Rewrite: ___________________________________________________________________
6. I have the skill to determine the most appropriate type(s) of data to use to determine which reasons (i.e., hypotheses) are likely to be contributing to the problem for: a. Academics b. Behavior — G R N PW A
Rewrite: ___________________________________________________________________
7. I have the skills to access sources (e.g., myself, internet sources, professional journals) to develop evidence-based interventions for: a. Academic core curricula b. Behavioral core curricula c. Academic supplemental curricula d. Behavioral supplemental curricula e. Academic individualized intervention plans f. Behavioral individualized intervention plans — G R N PW A
Rewrite: ___________________________________________________________________
8. I have the skill to ensure that any supplemental and/or intensive interventions are integrated with core instruction in the general education classroom: a. Academics b. Behavior — G R N PW A
Rewrite: ___________________________________________________________________
9. I have the skill to ensure that the proposed intervention plan is supported by the data that were collected: a. Academics b. Behavior — G R N PW A
Rewrite: ___________________________________________________________________
10. I have the skill to provide the support necessary to ensure that the intervention is implemented appropriately for: a. Academics b. Behavior — G R N PW A
Rewrite: ___________________________________________________________________
11. I have the skill to determine if an intervention was implemented the way it was supposed to be for: a. Academics b. Behavior — G R N PW A
Rewrite: ___________________________________________________________________
12. I have the skill to select appropriate data (e.g., CBM, DIBELS, FCAT, behavioral observations) to use to progress monitor student performance during interventions: a. Academics b. Behavior — G R N PW A
Rewrite: ___________________________________________________________________
13. I have the skill(s) to demonstrate the following graphing skills for large group, small group, and individual students: a. Graph target student data

b. Graph benchmark data c. Graph peer data d. Draw an aimline e. Draw a trendline — G R N PW A
Rewrite: ___________________________________________________________________
14. I have the skill to use progress monitoring data displayed on a graph to make decisions about the degree to which a student is responding to intervention (e.g., positive, questionable, or poor response). G R N PW A
Rewrite: ___________________________________________________________________
15. I have the skill to make intervention recommendations based on the type of student(s) response to intervention. G R N PW A
Rewrite: ___________________________________________________________________
16. I have the skill to differentiate between students who have not learned skills (e.g., wait to fail, not ready, got too far behind) from those who have barriers to learning due to a disability. G R N PW A
Rewrite: ___________________________________________________________________
17. I have the skills to conduct the following data collection procedures: a. CBM b. DIBELS c. Accessing data from appropriate district- or school-wide assessments d. Standard behavioral observations e. Disaggregating data by race, gender, free/reduced lunch, language proficiency, and disability status — G R N PW A
Rewrite: ___________________________________________________________________
18. I have skills to use technology in the following ways: a. Access the internet to locate sources of academic and behavioral evidence-based interventions b. Use electronic data collection tools (e.g., PDAs) c. Use the Progress Monitoring and Reporting Network (PMRN) d. Use the School-Wide Information System (SWIS) for Positive Behavior Support e. Graph and display student and school data — G R N PW A
Rewrite: ___________________________________________________________________
19. I have the skills to facilitate a PS/RtI meeting. G R N PW A
Rewrite: ___________________________________________________________________

If you believe that there are other important questions not addressed in this survey that would help identify the degree to which school and district personnel perceive they possess the skills needed in a PS/RtI model, please list them below and state the domain (i.e., data-based decision-making, tiered service delivery, the problem-solving process, data collection procedures, technology use, and special education eligibility determination) that it characterizes:
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________
________________________________________________________________________

Thank you for your assistance with this important step in validating a measure to capture school and district personnel perceptions about the degree to which they possess skills needed in a PS/RtI model.
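Several of the skill items above reference two concrete computations: the gap between a student's current performance and the benchmark, and a judgment about whether graphed progress-monitoring data reflect a positive, questionable, or poor response to intervention. The sketch below illustrates one way these could be computed. It is an assumption-laden example: the gap definition, the slope-based comparison to the aimline, and the cutoffs are illustrative only and are not the Project's decision rules.

# Illustrative only: a simple gap calculation and a slope-based reading of
# progress-monitoring data. Cutoffs are hypothetical, not Project criteria.
def performance_gap(benchmark: float, current: float) -> float:
    """How many times deficient the student is relative to the benchmark."""
    return benchmark / current if current else float("inf")

def trend_slope(scores):
    """Ordinary least-squares slope of equally spaced monitoring scores."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den if den else 0.0

def response_type(scores, aimline_slope):
    """Label the response by comparing the student's trendline slope to the
    aimline slope (illustrative cutoffs)."""
    slope = trend_slope(scores)
    if slope >= aimline_slope:
        return "positive"
    if slope >= 0.5 * aimline_slope:
        return "questionable"
    return "poor"

For example, a student reading 40 words correct per minute against a benchmark of 100 has a gap of 2.5, and a trendline that rises more slowly than half of the aimline's slope would be labeled a poor response under these illustrative cutoffs.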


Appendix E

Copies of Measures

Beliefs Survey

1. Your PS/RtI Project ID:
[Bubble grid: six columns of digits 0-9 for recording the six-digit ID]
Your PS/RtI Project ID was designed to assure confidentiality while also providing a method to match an individual's responses across instruments. In the space provided (first row), please write in the last four digits of your Social Security Number and the last two digits of the year you were born. Then, shade in the corresponding circles.

Directions: For items 2-5 below, please shade in the circle next to the response option that best represents your answer.

2. Job Description: PS/RtI Coach / Teacher-General Education / Teacher-Special Education / School Counselor / School Psychologist / School Social Worker / Principal / Assistant Principal / Other (Please specify):
3. Years of Experience in Education: Less than 1 year / 1-4 years / 5-9 years / 10-14 years / 15-19 years / 20-24 years / 25 or more years / Not applicable
4. Number of Years in your Current Position: Less than 1 year / 1-4 years / 5-9 years / 10-14 years / 15-19 years / 20 or more years
5. Highest Degree Earned: B.A./B.S. / M.A./M.S. / Ed.S. / Ph.D./Ed.D. / Other (Please specify):

Directions: Using the scale below, please indicate your level of agreement or disagreement with each of the following statements by shading in the circle that best represents your response.

1 = Strongly Disagree (SD)   2 = Disagree (D)   3 = Neutral (N)   4 = Agree (A)   5 = Strongly Agree (SA)

6. I believe in the philosophy of No Child Left Behind (NCLB) even if I disagree with some of the requirements. 1 2 3 4 5
7. Core instruction should be effective enough to result in 80% of the students achieving benchmarks in
7.a. reading 1 2 3 4 5
7.b. math 1 2 3 4 5
8. The primary function of supplemental instruction is to ensure that students meet grade-level benchmarks in
8.a. reading 1 2 3 4 5
8.b. math 1 2 3 4 5
9. The majority of students with learning disabilities achieve grade-level benchmarks in
9.a. reading 1 2 3 4 5
9.b. math 1 2 3 4 5
10. The majority of students with behavioral problems (EH/SED or EBD) achieve grade-level benchmarks in
10.a. reading 1 2 3 4 5
10.b. math 1 2 3 4 5
11. Students with high-incidence disabilities (e.g., SLD, EBD) who are receiving special education services are capable of achieving grade-level benchmarks (i.e., general education standards) in
11.a. reading 1 2 3 4 5
11.b. math 1 2 3 4 5
12. General education classroom teachers should implement more differentiated and flexible instructional practices to address the needs of a more diverse student body. 1 2 3 4 5
13. General education classroom teachers would be able to implement more differentiated and flexible interventions if they had additional staff support. 1 2 3 4 5
14. The use of additional interventions in the general education classroom would result in success for more students. 1 2 3 4 5

310 SD D N A SA 15. Prevention activities and early intervention st rategies in schools would result in fewer referrals to problem-solving teams a nd placements in special education. 1 2 3 4 5 16. The “severity” of a student’s academic problem is determined not by how far behind the student is in terms of his/her academic pe rformance but by how quickly the student responds to intervention. 1 2 3 4 5 17. The “severity” of a student’s behavioral probl em is determined not by how inappropriate a student is in terms of his/her behavioral performance but by how quickly the student responds to intervention. 1 2 3 4 5 18. The results of IQ and achievement testing can be used to identify e ffective interventions for students with learning and behavior problems. 1 2 3 4 5 19. Many students currently identified as “LD” do not have a disability, rather they came to school “not ready” to learn or fell too far behind academically for the available interventions to close the gap sufficiently. 1 2 3 4 5 20. Using student-based data to determine interv ention effectiveness is more accurate than using only “teacher judgment.” 1 2 3 4 5 21. Evaluating a student’s response to interventions is a more effective way of determining what a student is capable of achieving than using scores from “tests” (e.g., IQ/Achievement test). 1 2 3 4 5 22. Additional time and resources should be allo cated first to students who are not reaching benchmarks (i.e., general edu cation standards) before signi ficant time and resources are directed to students who are at or above benchmarks. 1 2 3 4 5 23. Graphing student data makes it easier for one to make decisions about student performance and need ed interventions. 1 2 3 4 5 24. A student’s parents (guardian) should be involved in the problem-solving process as soon as a teacher has a concern about the student. 1 2 3 4 5 25. Students respond better to interventions when th eir parent (guardian) is involved in the development and implementati on of those interventions. 1 2 3 4 5 26. All students can achieve grade-level benchmar ks if they have sufficient support. 1 2 3 4 5 27. The goal of assessment is to genera te and measure effectiveness of instruction/intervention. 1 2 3 4 5 THANK YOU! *Code* School_ID


311 Perceptions of RtI Skills Survey 0 0 0 0 0 0 1 1 1 1 1 1 2 2 2 2 2 2 3 3 3 3 3 3 4 4 4 4 4 4 5 5 5 5 5 5 6 6 6 6 6 6 7 7 7 7 7 7 8 8 8 8 8 8 9 9 9 9 9 9 Directions: Please read each statement about a skill related to assessment, instruction, and/or intervention below, and then evaluate YOUR skill level within the context of work ing at a school/building level. Where indicated, rate your skill separately for academics (i.e., reading and math) and behavior. Please use the following response scale : 1 = I do not have this skill at all (NS) 2 = I have minimal skills in this area; n eed substantial support to use it (MnS) 3 = I have this skill, but still n eed some support to use it (SS) 4 = I can use this skill with little support (HS) 5 = I am highly skilled in this area and could teach others this skill (VHS) The skill to: NS Mn S SS HS V HS 2. Access the data necessary to determine the percen t of students in core instruction who are achieving benchmarks (district grade-level standards) in: a. Academics 1 2 3 4 5 b. Behavior 1 2 3 4 5 3. Use data to make decisions about individuals and grou ps of students for the: a. Core academic curriculum 1 2 3 4 5 b. Core/Building discipline plan 1 2 3 4 5 1. Your PS/RtI Project ID: Your PS/RtI Project ID was designed to assure confidentiality while also providing a method to match an individual’s responses across instruments. In the space provided (first row), please write in the last four digits of your Social Security Number and the last two digits of the year you were born. Then, shade in the corresponding circles.


312 The skill to: NS Mn S SS HS V HS 4. Perform each of the following steps when id entifying the problem for a student for whom concerns have been raised: a. Define the referral concern in terms of a replacement behavior (i.e., what the student should be able to do) instead of a referral problem for: Academics 1 2 3 4 5 Behavior 1 2 3 4 5 b. Use data to define the current level of perf ormance of the target student for: Academics 1 2 3 4 5 Behavior 1 2 3 4 5 c. Determine the desired level of performance (i.e., benchmark) for: Academics 1 2 3 4 5 Behavior 1 2 3 4 5 d. Determine the current level of peer perf ormance for the same skill as the target student for: Academics 1 2 3 4 5 Behavior 1 2 3 4 5 e. Calculate the gap between student current performance and the benchmark (district grade level standard) for: Academics 1 2 3 4 5 Behavior 1 2 3 4 5 f. Use gap data to determine whether core in struction should be adjusted or whether supplemental instruction should be di rected to the target student for: Academics 1 2 3 4 5 Behavior 1 2 3 4 5 5. Develop potential reasons (hypotheses) that a student or group of students is/are not achieving desired levels of perfor mance (i.e., benchmarks) for: a. Academics 1 2 3 4 5 b. Behavior 1 2 3 4 5 6. Identify the most appropriate type(s) of data to use for determining reasons (hypotheses) that are likely to be contributing to the problem for: a. Academics 1 2 3 4 5 b. Behavior 1 2 3 4 5 7. Identify the appropriate supplemental interven tion available in my building for a student identified as at-risk for: a. Academics 1 2 3 4 5 b. Behavior 1 2 3 4 5


313 The skill to: NS Mn S SS HS V HS 8. Access resources (e.g., internet sources, professional literature) to develop evidencebased interventions for: a. Academic core curricula 1 2 3 4 5 b. Behavioral core curricula 1 2 3 4 5 c. Academic supplemental curricula 1 2 3 4 5 d. Behavioral supplementa l curricula 1 2 3 4 5 e. Academic individualized intervention plans 1 2 3 4 5 f. Behavioral individualized intervention plans 1 2 3 4 5 9. Ensure that any supplemental and/or intensive interventions are integrated with core instruction in the general education classroom: a. Academics 1 2 3 4 5 b. Behavior 1 2 3 4 5 10. Ensure that the proposed intervention plan is supported by the data that were collected for: a. Academics 1 2 3 4 5 b. Behavior 1 2 3 4 5 11. Provide the support necessary to ensure that the intervention is implemented appropriately for: a. Academics 1 2 3 4 5 b. Behavior 1 2 3 4 5 12. Determine if an intervention was implemen ted as it was intended for: a. Academics 1 2 3 4 5 b. Behavior 1 2 3 4 5 13. Select appropriate data (e.g., Curricu lum-Based Measurement, DIBELS, FCAT, behavioral observations) to use for progre ss monitoring of student performance during interventions: a. Academics 1 2 3 4 5 b. Behavior 1 2 3 4 5 14. Construct graphs for large group, small group, and individual students: a. Graph target student data 1 2 3 4 5 b. Graph benchmark data 1 2 3 4 5 c. Graph peer data 1 2 3 4 5 d. Draw an aimline 1 2 3 4 5 e. Draw a trendline 1 2 3 4 5 15. Interpret graphed progress monitoring data to make decisions about the degree to which a student is responding to intervention (e.g., positive, questionable or poor response). 1 2 3 4 5 16. Make modifications to intervention plans based on student response to intervention. 1 2 3 4 5 17. Use appropriate data to differentiate between students who have not learned skills (e.g., did not have adequate exposure to effective instruction, not ready, got too far behind) from those who have barriers to learning due to a disability. 1 2 3 4 5


314 The skill to: NS Mn S SS HS V HS 18. Collect the following types of data: a. Curriculum-Based Measurement 1 2 3 4 5 b. DIBELS 1 2 3 4 5 c. Access data from appropriate districtor school-wide assessments 1 2 3 4 5 d. Standard behavioral observations 1 2 3 4 5 19. Disaggregate data by race, gender, free/re duced lunch, language proficiency, and disability status 1 2 3 4 5 20. Use technology in the following ways: a. Access the internet to locate sources of academic and behavioral evidence-based interventions. 1 2 3 4 5 b. Use electronic data collection tools (e.g., PDAs) 1 2 3 4 5 c. Use the Progress Monitoring and Reporting Network (PMRN) 1 2 3 4 5 d. Use the School-Wide Information System (SWI S) for Positive Behavior Support 1 2 3 4 5 e. Graph and display student a nd school data 1 2 3 4 5 21. Facilitate a Problem Solving Team (Stude nt Support Team, Intervention Assistance Team, School-Based Intervention T eam, Child Study Team) meeting. 1 2 3 4 5 THANK YOU!*Code* SchoolID


315 Self-Assessment of Problem Solv ing Implementation (SAPSI)* School Name School Date of Report District Name District District & School ID SchoolID INSTRUCTIONS The members of your School-Based Leadersh ip Team should complete this needs assessment as a group. We ask that all members of the team participate in this process. Each group member will receiv e a copy of the needs assessment; however, only one form should be returned to Project staff. Your Problem Solving/Response to Intervention (PS/RtI) Coach will work with your team to facilitate completion of the SAPSI and will serve as the recorder for the version to be se nt to Project staff. This needs assessment will be completed three times per school year to monitor activities for implementation of PS/RtI in your school. The items on the SAPSI are meant to assess the degree to which schools implementing the PS/RtI model are (1) achieving and main taining consensus among key stakeholders, (2) creating and maintaining th e infrastructure necessary to support implementation, and (3) implementing practices and procedures c onsistent with the m odel. Members of the team should not be discouraged if your school has not achieved many of the criteria listed under the Consensus, Infrastructure, and Impl ementation domains. This instrument is intended to help your team identify needs at your school for which action plans can be developed. Whenever possible, data should be collected and/or reviewed to determine if evidence exists that suggests that a given activity is occurring. Please complete all pages on this needs assessment and mail to the following address by Friday, October 15th, 2007 Stevi Schermond Problem Solving/Response to Intervention Project 4202 E. Fowler Ave., EDU 162 Tampa, FL 33620


Problem-Solving Team Members (Name & Position)

Person(s) Completing Report (Name & Position)


317 PS/RtI Im p lementation Assessment Directions: In responding to each item below, pl ease use the following response scale: N ot Started ( N ) — (The activity occurs less than 25% of the time) I n Progress ( I ) — (The activity occurs approxim ately 25% to 74% of the time) A chieved ( A) — (The activity occurs approximately 75% to 100% of the time) M aintaining ( M ) — (The activity was rated as achieved last time and continues to occur approximately 75% to 100% of the time) For each item below, please write the letter of the option (N, I, A, M) that best represents your School-Based Leadership Team’s response in the co lumn labeled “Status”. In the column labeled “Comments/Evidence”, please write any comments, ex planations and/or eviden ce that are relevant to your team’s response. When completing the items on the SAPSI, the team should base its responses on the grade levels being targeted for implementation by the school. Additional Comments/Evidence: Consensus : Comprehensive Commitment and Support Status Comments/Evidence 1. District level leadership provides active commitment and support (e.g., meets to review data and issues at least twice each year.). 2. The school leadership provides training, support and active involvement. (e.g., principal is actively involved in School-Based Leadership Team meetings). 3. Faculty/staff support and are actively involved with problem solving/RtI (e.g., one of top three goals of the School Improvement Plan, 80% of faculty document support, three-year timeline for implementation available). 4. A School-Based Leadership Team is established, represents the roles of an administrator, facilitator, data mentor, content specialist and teachers from representative areas (i.e., general education & special education), and has a plan for involving parents. 5. Data are collected (e.g., beliefs survey, satisfaction survey) to assess level of commitment and impact of PS/RtI on faculty/staff.


318 PS/RtI Implementation Assessment (Cont’d) Scale: N ot Started ( N ) — (The activity occurs less than 25% of the time) I n Progress ( I ) — (The activity occurs approxim ately 25% to 74% of the time) A chieved ( A) — (The activity occurs approximately 75% to 100% of the time) M aintaining ( M ) — (The activity was rated as achieved last time and continues to occur approximately 75% to 100% of the time) Infrastructure Development : Data Collection and Team Structure Status Comments/Evidence 6. School-wide data (e.g., DIBELS, Curriculum-Based Measures, Office Discipline Referrals) are collected through an efficient and effective systematic process. 7. Statewide and other databases (e.g., Progress Monitoring and Reporting Network [PMRN], School-Wide Information System [SWIS]) are used to make data-based decisions. 8. School-wide data are presented to staff after each benchmarking session (e.g., staff meetings, team meetings, grade-level meetings). 9. School-wide data are used to evaluate the effectiveness of core academic programs. 10. School-wide data are used to evaluate the effectiveness of core behavior programs 11. Curriculum-Based Measurement (e.g., DIBELS) data are used in conjunction with other data sources to identify students needing targeted group interventions and individualized interventions for academics. 12. Office Disciplinary Referral data are used in conjunction with other data sources to identify students needing targeted group interventions and individualized interventions for behavior. 13. Data are used to evaluate the effectiveness (RtI) of Tier 2 intervention programs. 14. Individual student data are utilized to determine response to Tier 3 interventions.


319 PS/RtI Implementation Assessment (Cont’d) Scale: N ot Started ( N ) — (The activity occurs less than 25% of the time) I n Progress ( I ) — (The activity occurs approxim ately 25% to 74% of the time) A chieved ( A) — (The activity occurs approximately 75% to 100% of the time) M aintaining ( M ) — (The activity was rated as achieved last time and continues to occur approximately 75% to 100% of the time) Infrastructure Development : Data Collection and Team Structure (Cont’d) Status Comments/Evidence 15. Special Education Eligibility determination is made using the RtI model for the following ESE programs: a. Emotional/Behavioral Disabilities (EBD) b. Specific Learning Disabilities (SLD) 16. The school staff has a process to select evidencebased practices. a. Tier 1 b. Tier 2 c. Tier 3 17. The School-Based Leadership Team has a regular meeting schedule for problem-solving activities. 18. The School-Based Leadership Team evaluates target student(s) RtI at regular meetings. 19. The School-Based Leadership Team involves parents. 20. The School-Based Leadership Team has regularly scheduled data day meetings to evaluate Tier 1 and Tier 2 data. Additional Comments/Evidence:


320 PS/RtI Implementation Assessment (Cont’d) Scale: N ot Started ( N ) — (The activity occurs less than 25% of the time) I n Progress ( I ) — (The activity occurs approxim ately 25% to 74% of the time) A chieved ( A) — (The activity occurs approximately 75% to 100% of the time) M aintaining ( M ) — (The activity was rated as achieved last time and continues to occur approximately 75% to 100% of the time) Implementation : Three-Tiered Intervention System and Problem-Solving Process Status Comments/Evidence 21. The school has established a three-tiered system of service delivery. a. Tier 1 Academic Core Instruction clearly identified. b. Tier 1 Behavior Core Inst ruction clearly identified. c. Tier 2 Academic Supplemental Instruction/Programs clearly identified. d. Tier 2 Behavior Supplemental Instruction/Programs clearly identified. e. Tier 3 Academic Intensive Programs are evidencebased. f. Tier 3 Behavior Intensiv e Programs are evidencebased. 22. Teams (e.g., School-Based Leadership Team, ProblemSolving Team, Grade-Level Teams) implement effective problem solving procedures including: a. Problem is defined as a data-based discrepancy (GAP Analysis) between what is expected and what is occurring (includes peer and benchmark data). b. Replacement behaviors (e.g., reading performance targets, homework completion targets) are clearly defined. c. Problem analysis is conducted using available data and evidence-bas ed hypotheses. d. Intervention plans include evidence-based (e.g., research-based, data-based) strategies. e. Intervention support personnel are identified and scheduled for all interventions.


321 Additional Comments/Evidence: PS/RtI Implementation Assessment (Cont’d) Scale: N ot Started ( N ) — (The activity occurs less than 25% of the time) I n Progress ( I ) — (The activity occurs approxim ately 25% to 74% of the time) A chieved ( A) — (The activity occurs approximately 75% to 100% of the time) M aintaining ( M ) — (The activity was rated as achieved last time and continues to occur approximately 75% to 100% of the time) Implementation : Three-Tiered Intervention System and Problem-Solving Process (Cont’d) Status Comments/Evidence f. Intervention integrity is documented. g. Response to intervention is evaluated through systematic data collection h. Changes are made to intervention based on student response i. Parents are routinely involved in implementation of interventions Additional Comments/Evidence:


322 PS/RtI Implementation Assessment (Cont’d) Scale: N ot Started ( N ) — (The activity occurs less than 25% of the time) I n Progress ( I ) — (The activity occurs approxim ately 25% to 74% of the time) A chieved ( A) — (The activity occurs approximately 75% to 100% of the time) M aintaining ( M ) — (The activity was rated as achieved last time and continues to occur approximately 75% to 100% of the time) Implementation : Monitoring and Action Planning Status Comments/Evidence 23. A strategic plan exists and is used by the School-Based Leadership Team to guide implementation of PS/RtI. 24. The School-Based Leadership Team meets at least twice each year to review data and implementation issues. 25. The School-Based Leadership Team meets at least twice each year with the District Leadership team to review data and implementation issues. 26. Changes are made to the implementation plan based on school and district leadership team decisions. 27. Feedback on the outcomes of the PS/RtI Project is provided to school-based facu lty and staff at least yearly. Additional Comments/Evidence:
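In the Project's analyses, SAPSI responses are treated quantitatively (the multi-level model in Appendix G predicts a building's SAPSI score). The sketch below shows one plausible way to convert the N/I/A/M status ratings into numbers and summarize them by domain; the 0–3 point mapping, the column names, and the example ratings are illustrative assumptions rather than the Project's documented scoring procedure.

```python
import pandas as pd

# Illustrative mapping of SAPSI status codes to a 0-3 scale (assumed, not the
# Project's official scoring): N = Not Started, I = In Progress, A = Achieved,
# M = Maintaining.
STATUS_POINTS = {"N": 0, "I": 1, "A": 2, "M": 3}

# Hypothetical item-level ratings for one school, keyed by SAPSI item number.
ratings = pd.DataFrame({
    "item": [1, 2, 3, 4, 5, 6, 7],
    "domain": ["Consensus"] * 5 + ["Infrastructure"] * 2,
    "status": ["A", "M", "I", "A", "N", "I", "A"],
})

# Convert the letter ratings to points and summarize by domain and overall.
ratings["points"] = ratings["status"].map(STATUS_POINTS)
domain_means = ratings.groupby("domain")["points"].mean()
overall_mean = ratings["points"].mean()

print(domain_means)
print(f"Overall SAPSI mean: {overall_mean:.2f}")
```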


Self-Assessment of Problem Solving Implementation (SAPSI) – Illinois Version

School Name / Date of Report
District Name & Number / County

INSTRUCTIONS
Complete and submit at least three times per school year. The problem-solving team should complete this checklist three times per school year to monitor activities for implementation of problem solving in the school. Completed forms can be faxed or emailed to your Regional Evaluation Coordinator.

Problem-Solving Team Members
Person(s) Completing Report


324 Checklist #1: Start-Up Activity Complete and submit at least three times per school year. Status: N ot Started ( 0 to 25% ) I n Progress ( 25 to 74% ) A chieved (75 to 100%) M aintaining ( Rated as achieved last time) Comprehensive Commitment and Support Date (MM/DD/Y Y) Date (MM/DD/Y Y) Date (MM/DD/Y Y) 1. District level leadership provides active commitment and support. Status: 2. The building leadership provides support and active involvement. (i.e. principal actively involved in leadership team meetings). Status: 3. Faculty/staff support and are actively involved with problem solving (One of top 3 goals of the SIP, 80% of faculty document support, 3 year timeline). Status: 4. A school leadership team is established and represents the roles of an administrator, facilitator, data mentor content specialist, parent, and representative teachers. Status:


325 Checklist #1: Start-Up Activity Complete and submit at least three times per school year. Status: N ot Started ( 0 to 25% ) I n Progress ( 25 to 74% ) A chieved (75 to 100%) M aintaining ( Rated as achieved last time) Establish and Maintain Team Process Date (MM/DD/Y Y) Date (MM/DD/Y Y) Date (MM/DD/Y Y) 5. Building has established a three-tiered system of service delivery. (this item may need to be removed because this may appear to be a simple question, but it is actually very complex. This may lead to high variability in answers.) Status: 6. School-wide data is collected through an efficient and effective systematic process. 7. School-wide data are presented to staff after each benchmarking session. Status: 8. CBM and/or Office Disciplinary Referral data are used in conjunction with other data sources to identify students needing targeted group interventions and individualized interventions. Status 9. Individual student data are utilized to determine the response to interventions. Status: 10. The building staff has a process to select evidence-based practices. Status: 11. Comprehensive and on-going training is provided to all key people including parents. Status: 12. Team has regular meeting schedule. Status: 13. Team is established and is representative of general education, special education and related service personnel. Status: 14. Team includes parents. Status 15. Team has regular meeting schedule. Status:


326 Checklist #1: Start-Up Activity Complete and submit at least three times per school year. Status: N ot Started ( 0 to 25% ) I n Progress ( 25 to 74% ) A chieved (75 to 100%) M aintaining ( Rated as achieved last time) Three-Tiered System Date (MM/DD/Y Y) Date (MM/DD/Y Y) Date (MM/DD/Y Y) 16. Teams implement effective problem solving procedures including: Status: a. Problem is defined as a discrepancy between what is expected and what is occurring. Status: b. Problem is described using measurable and observable terms c. Replacement behaviors (e.g., reading performance targets, homework completion targets) are clearly defined Status: d. Evidence-based interventions are implemented Status: e. Response to intervention is evaluated through systematic data collection Status: f. Changes are made to intervention based on student response Status:


Checklist #1: Start-Up Activity
Complete and submit at least three times per school year.
Status: Not Started (0 to 25%), In Progress (25 to 74%), Achieved (75 to 100%), Maintaining (Rated as achieved last time)
Three-Tiered System / Date (MM/DD/YY) / Date (MM/DD/YY) / Date (MM/DD/YY)

Self-Assessment
17. School-wide team/faculty completes Self-Assessment of Problem Solving Implementation (SAPSI). Status:
18. School-wide team summarizes existing school-wide assessment data for decision making. Status:
19. Strengths, areas of immediate focus, and action plan are identified. Status:

Implementing Evidence-Based Practice
20. A school-wide assessment system for identifying and monitoring progress of all students is implemented. Status:
21. All building-level resources are utilized in the development of instruction/interventions. Status:
22. Parents are routinely involved in implementation of interventions. Status:
23. Personnel with problem-solving and intervention expertise are identified and involved. Status:


328 Checklist #2: On-going Activity Monitoring Complete and submit at least three times per school year. Status: N ot Started ( 0 to 25% ) I n Progress ( 25 to 74% ) A chieved (75 to 100%) M aintaining ( Rated as achieved last time) Monitoring and Action Planning Date (MM/DD/Y Y) Date (MM/DD/Y Y) Date (MM/DD/Y Y) 24. The problem solving team meets frequently enough to follow decision-rules and make necessary instructional changes. Status: 25. The problem solving t eam provides a status report to faculty. Status: 26. Action plan based on the SAPSI is implemented. Status: 27. The SAPSI action plan is continually monitored for integrity of implementation. Status: 28. Effectiveness of SAPSI action plan implementation is assessed. Status: 29. Problem Solving data are analyzed. Status:


329 Skill Assessment Example School Level Data Review Worksheet 0 0 0 0 0 0 1 1 1 1 1 1 2 2 2 2 2 2 3 3 3 3 3 3 4 4 4 4 4 4 5 5 5 5 5 5 6 6 6 6 6 6 7 7 7 7 7 7 8 8 8 8 8 8 9 9 9 9 9 9 Case Study You are asked by your school principal to revi ew school-level data and answer a number of questions for her. The da ta that are provided are 3rd grade FCAT Reading data and represent the % of st udents in each demographic categ ory who achieved “proficient” levels (a score of 3 or better on the FCAT). The three sets of data that are provided are for: 1) All students in 3rd grade, 2) The subset of students in 3rd grade who are receiving supple mental instruction (Tier 2) in addition to core instruction (Tier 1) and 3) The subset of students who are receiving in tensive instruction (Tier 3) in addition to core instruction. After reviewing the data below, answer the qu estions that follow. Please provide as much detail in your responses as you feel is necessary to explain your position. 1. Your PS/RtI Project ID: Your PS/RtI Project ID was designed to assure confidentiality while also providing a method to match an individual’s responses across instruments. In the space provided (first row), please write in the last four digits of your Social Security Number and the last two digits of the year you were born. Then, shade in the corresponding circles.


1) The Following Data Are for All 3rd Grade Students in the School

Disaggregated Student Group: % Proficient
Caucasian: 82
African American: 43
Hispanic: 56
Low SES: 52
Students with Disabilities: 40
LEP: 42

2) The Following Data Are for 3rd Grade Students Receiving Supplemental Instruction (Tier 2) in Addition to the Core Curriculum

Disaggregated Student Group: % Proficient
Caucasian: 67
African American: 32
Hispanic: 40
Low SES: 59
Students with Disabilities: 50
LEP: 60

3) The Following Data Are for 3rd Grade Students Receiving Intensive Instruction (Tier 3) in Addition to the Core Curriculum

Disaggregated Student Group: % Proficient
Caucasian: 31
African American: 30
Hispanic: 55
Low SES: 25
Students with Disabilities: 37
LEP: 45


Case Study Questions

1. Is the Core Instruction effective? Justify your decision.
2. Which group(s) of students is likely to improve the most with positive changes in core instruction? Justify your decision.
3. Which group(s) of students responded best to supplemental instruction? Justify your decision.
4. Who is most likely to be referred for Tier 3 interventions in this school setting? Justify your decision.
5. Which group of students is most at-risk for literacy failure in this building? Justify your decision.
6. What, in general, can you say about the effectiveness of the different instruction tiers in this building? Justify your decision.

*Code* School_ID


332 Tier I Problem ID Scoring Rubric Draft – 9/17/07 1. Is the Core Instruction effec tive? Justify your decision. a. 0 points = mentions that the core curriculum is effective or the individual’s position on the effectivene ss of the core curriculum cannot be determined from the information provided b. 1 point = mentions that the core curr iculum is not effective, but does not provide any rationale for his/her response c. 2 points = mentions that the core curri culum is not effective and refers to one or more demographic groups not performing well, but does not use data to justify the decision (e.g., le ss than 80% of a demographic group attaining benchmarks) d. 3 points = mentions that the core cu rriculum is not effective and provides data to justify the decision (e.g., le ss than 80% of a demographic group attaining benchmarks) 2. Which group(s) of students is likely to im prove the most with positive changes in core instruction? Justify your decision. a. 0 points = mentions that Caucasian st udents are the most likely to improve or that the group(s) that is the most likely to improve cannot be determined from the information provided b. 1 point = mentions that one or more of the demographic groups other than Caucasian students are the most likely to improve, but does not use data to justify the decision c. 2 point = mentions that one or more of the demographic groups other than Caucasian students is the most likel y to improve and states that the group(s) is most likely to improve because of low levels of current performance 3. Which group(s) of students responded best to supplemental instruction? Justify your decision. a. 0 points = does not mention that Caucasian, Low-SES, and/or LEP students were among the demographic groups for whom supplemental instruction was the most effective b. 1 point = mentions that Caucasian, Low-SES, and/or LEP students were among the demographic groups for whom supplemental instruction was the most effective, but does not us e data to justify his/her decision c. 2 points = mentions that Caucasian, Low-SES, and/or LEP students were among the demographic groups for whom supplemental instruction was the most effective and referen ces the proportion of students who responded in the groups incl uded in his/her answer 4. Who is most likely to be referred for Tier 3 interventions in this school setting? a. 0 points = responds that Caucasian st udents are the most likely to be referred for Tier 3 interventions or th e individual’s position on who is the


333 most likely to be referred for Tier 3 interventions is unclear from the information provided b. 1 point = responds that one or more of the following demographic groups are the most likely to be referred for Tier 3 interventions: Hispanic, Low SES, Students with Disabilities, or LEP students (but not African Americans) c. 2 points = responds that African Americ an students are the most likely to be referred for Tier 3 interventions 5. Which group of students is most at-risk for literacy failure in this building? a. 0 points = responds that Caucasian students are the most at-risk for literacy failure in this building or the individual ’s position on who is the most at-risk for reading failure is unclear from the information provided b. 1 point = responds that one or more of the following demographic groups are the most at-risk for reading failure in the building: Hispanic, Low SES, Students with Disabilities, or LEP st udents (but not African Americans) c. 2 points = responds that African Americ an students are the most at-risk for reading failure 6. What, in general, can you say about the e ffectiveness of the di fferent instruction tiers in this building? a. 0 points = mentions that the different in structional tiers in this building are effective for all students or the indi vidual’s position on the effectiveness of the instructional tiers in this bu ilding is unclear from the information provided b. 1 point = mentions that instruction at Ti ers I, II, or III is ineffective, but does not mention that instruction is ineffective across all three tiers c. 2 point = mentions that instruction across all three tiers is ineffective
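The scoring criteria for item 1 hinge on a concrete decision rule: core instruction is considered effective only when roughly 80% of students in each demographic group attain benchmarks. As a minimal illustration, the sketch below applies that rule to the Tier 1 (core instruction) percentages from the case-study worksheet; the function name and output format are hypothetical, and the 80% threshold is taken from the rubric's examples.

```python
# Tier 1 (core instruction) percent-proficient data from the case-study worksheet.
tier1_percent_proficient = {
    "Caucasian": 82,
    "African American": 43,
    "Hispanic": 56,
    "Low SES": 52,
    "Students with Disabilities": 40,
    "LEP": 42,
}

BENCHMARK = 80  # rubric criterion: roughly 80% of a group attaining benchmarks


def core_instruction_effective(group_proficiency, benchmark=BENCHMARK):
    """Return (is_effective, groups_below) using the 80% rule from the rubric."""
    groups_below = {g: p for g, p in group_proficiency.items() if p < benchmark}
    return len(groups_below) == 0, groups_below


effective, below = core_instruction_effective(tier1_percent_proficient)
print("Core instruction effective?", effective)
for group, pct in sorted(below.items(), key=lambda kv: kv[1]):
    print(f"  {group}: {pct}% proficient (gap of {BENCHMARK - pct} points)")
```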


334 Tier I and II Critical Components Checklist Directions : For each selected grade-level, please use the scale provided to indicate the degree to which each critical component of problem-solving is present in the problemsolving team paperwork. See the attached rubric for the criteria for determining the degree to which each critical component is present. Component 1 = Present 2 = Partially Present 3 = Absent N/A = Not applicable Evidence/Comments Problem Identification 1. Data were used to determin e the effectiveness of core academic and behavior instruction 1 2 3 2. Decisions were made to modi fy core instruction or to develop supplemental (Tier II) interventions 1 2 3 3. Universal screening (e.g., DIBELS, ODRs) or other data sources (e.g., district-wide assessments) were used to identify groups of students in need of supplemental intervention 1 2 3 Problem Analysis 4. The school-based team generated hypotheses to identify potential reasons for students not meeting benchmarks 1 2 3 5. Data were used to determine viable or active hypotheses for why student s were not attaining benchmarks 1 2 3 Intervention Development and Implementation 6. Modifications to core instruction a. A plan for implementation of modifications to core instruction was documented 1 2 3 N/A b. Support for implementation of modifications to core instruction was documented 1 2 3 N/A c. Documentation of implementation of modifications to core instruction was provided 1 2 3 N/A 7. Supplemental (Tier II) inst ruction development or modification a. A plan for implementation of supplemental instruction was documented 1 2 3 N/A b. Support for implementation of supplemental instruction was documented 1 2 3 N/A c. Documentation of implementation of supplemental instruction was provided 1 2 3 N/A Program Evaluation/RtI 8. Criteria for positive response to intervention defined 1 2 3 9. Progress monitoring data were collected/scheduled 1 2 3 10. A decision regarding student RtI was docum ented 1 2 3 11. A plan for continuing, modi fying, or terminating the intervention plan was provided 1 2 3


335 Tiers I and II Critical Components Checklist Rubric 1. Data were used to determine the effectiveness of core academic and behavior instruction a. Present = Data quantifying the effectiveness of core academic and/or behavior instruction for all students, and for demographic subgroups of students are documented b. Partially Present = Data quantifying the effectiveness of core academic and/or behavior instruction for all students, or for demographi c subgroups of students are documented c. Absent = No data quantifying the effectiveness of core academic and/or behavior instruction are document 2. Decisions were made to modify core instru ction or to develop supplemental (Tier II) interventions a. Present = A decision to modify core in struction or to develop supplemental interventions was indicated and the decision was appropria te given the data used to evaluate the effectiven ess of core instruction b. Partially Present = A decision to modi fy core instruction or to develop supplemental interventions was indicate d, but the decision wa s not appropriate given the data used to evaluate the effectiveness of core instruction c. Absent = No decision regarding modify ing core instruction or developing supplemental interventions was indicated 3. Universal screening (e.g., DIBELS, ODRs) or other data sources (e.g., district-wide assessments) were used to identify groups of students in need of supplemental intervention a. Present = Data from universal screening assessments or other data sources were factored into the decision to iden tify students as needing supplemental intervention b. Partially Present = Students were identif ied for supplemental intervention based on data; however, the data used to make the decision came from outcome assessments such as the SAT-10 or FCAT c. Absent = Data were not used to iden tify students in need of supplemental intervention 4. The school-based team generated hypotheses to identify potential reasons for students not meeting benchmarks a. Present = Reasons for the students not meeting benchmarks were developed. The reasons provided span multiple hypotheses domains (e.g., child, curriculum, peers, family/community, classroom, teacher) b. Partially Present = Reasons for the students not meeting benchmarks were developed, but the reasons do not span multiple hypotheses domains (e.g., curriculum hypotheses only). c. Absent = Reasons for the students not m eeting benchmarks were not developed


336 5. Data were used to determine viable or active hypotheses for why students were not attaining benchmarks a. Present = Data collected using RIOT (Review, Interview, Observe, Test) procedures for all hypotheses to determin e the reasons that are likely to be barriers to the students attaining benchmarks b. Partially Present = Data collected using RIOT (Review, Interview, Observe, Test) procedures for some hypotheses to determ ine the reasons that are likely to be barriers to the students attaining benchmarks c. Absent = Data not collected to determine the reasons that are likely to be barriers to the students attaining benchmarks 6a. A plan for implementation of modificat ions to core instruction was documented a. Present = A plan for implementing modi fications to core instruction was documented, and included the personnel res ponsible, the actions to be completed and the deadline for completing those actions b. Partially Present = A plan for implementi ng modifications to core instruction was documented, but the personnel responsible, the actions to be completed or the deadline for completing those actions was not included c. Absent = No plan for implementing the modifications to core instruction was documented d. N/A = The data used to evaluate the effectiveness of the core curriculum suggested that the development or modi fication of supplemental instruction was appropriate 6b. Support for implementation of modificatio ns to core instruction was documented a. Present = A plan for providing s upport to the personnel implementing modifications to core instruction was documented, and included the personnel responsible, the actions to be complete d and the deadline for completing those actions b. Partially Present = A plan for providing support to the personnel implementing modifications to core instruction was documented, but the personnel responsible, the actions to be completed or the dead line for completing those actions was not included c. Absent = No plan for providing sup port to the personnel implementing the modifications to core instruction was documented d. N/A = The data used to evaluate the effectiveness of the core curriculum suggested that the development or modi fication of supplemental instruction was appropriate 6c. Documentation of impleme ntation of modifications to core instruction was provided a. Present = Data were documented demonstr ating that the modifications to core instruction were implemented and at leas t some of the data were quantifiable b. Partially Present = Data were documented demonstrating that the modifications to core instruction were implemented, but none of the data were quantifiable


337 c. Absent = No information on the degree to which the modifications to core instruction were implemented was documented d. N/A = The data used to evaluate the effectiveness of the core curriculum suggested that the development or modi fication of supplemental instruction was appropriate 7a. A plan for implementation of supplemental instruction was documented a. Present = A plan for implementati on of supplemental instruction was documented, and included the personnel res ponsible, the actions to be completed and the deadline for completing those actions b. Partially Present = A plan for implementation of supplemental instruction was documented, but the personnel responsible, the actions to be completed or the deadline for completing those actions was not included c. Absent = No plan for implementation of supplemental instruction was documented d. N/A = The data used to evaluate the effectiveness of the core curriculum suggested that modification of co re instruction was appropriate 7b. Support for implementation of suppl emental instruction was documented a. Present = A plan for providing s upport to the personnel implementing supplemental instruction was docum ented, and included the personnel responsible, the actions to be complete d and the deadline for completing those actions b. Partially Present = A plan for providing support to the personnel implementing supplemental instruction was documente d, but the personnel responsible, the actions to be completed or the deadline for completing those actions was not included c. Absent = No plan for providing su pport to the personnel implementing supplemental instruction was documented d. N/A = The data used to evaluate the effectiveness of the core curriculum suggested that modifications to co re instruction were appropriate 7c. Documentation of implementation of supplemental instruction was provided a. Present = Data were documented demonstr ating that the supplemental instruction protocol was implemented and at least some of the data were quantifiable b. Partially Present = Data were documente d demonstrating that the supplemental instruction protocol was implemented, but none of the data were quantifiable c. Absent = No information on the degree to which supplemental instruction was implemented was documented d. N/A = The data used to evaluate the effectiveness of the core curriculum suggested that modifications to co re instruction were appropriate 8. Criteria for determ ining positive RtI defined a. Present = The rate at which improvement on the target skill is needed for student RtI to be considered positive was provided in measurable terms


338 b. Partially Present = Quantifiable data de fining improvement in the target skill needed for positive RtI was provided, but the data did not include a rate index c. Absent = No criteria for determining positive RtI were provided 9. Progress monitoring data collected/scheduled a. Present = Progress monitoring data were collected at an appropriate frequency using measures that are sensitive to small changes in the target skill b. Partially Present = Progress monitoring data were collected, but were not collected frequently enough or were colle cted using measures that were are not sensitive to small changes in the target skill c. Absent = Little or no progress monitoring data were collected 10. Decisions regarding student RtI documented a. Present = Documented decisions regardi ng whether the students demonstrated positive, questionable, or poor RtI were ma de based on progress monitoring data b. Partially Present = A discussion of stude nt RtI was provided, but no decisions regarding positive, questionable, or poor RtI were made c. Absent = No discussion of the students RtI was provided 11. Plan for continuing, modifying, or te rminating the intervention plan provided a. Present = A plan for continuing, modifying, or terminating the intervention plan was provided based on the students’ RtI b. Partially Present = A plan for continuing, modifying, or terminating the intervention plan was provided, but it did not link directly to the students’ RtI c. Absent = No plan for continuing, modifying, or terminating the intervention plan was provided


339 Appendix F Data Collection, Entry, and Analysis Rubric


340 Data Collection, Entry, and Analysis Rubric Year 1 Measure Collection Timeline Collection Method & Responsible Personnel Data Entry Method & Responsible Personnel Analysis Frequency Aug Sep Oct Nov Dec Jan Feb Mar Apr May Jun Jul Primary Training & Staff Surveys & Skill Assessments Beliefs Survey SBLT Day 1 & 2 & Staff Pre SBLT Day 5 & Staff Post (3/30-5/15) Administered by RCs & Coaches Uploaded via scantron by Project staff 1 x year Direct Skill Assessments SBLT Day 2 & Staff Pre SBLT Day 3 SBLT Day 4 SBLT Day 5 Administered by RCs & Coaches Scored & Entered by Project staff 2-4 x year Tied to training schedule for SBLTs Perceptions of Practices Survey SBLT & Staff Pre Administered by RCs & Coaches Uploaded via scantron by Project staff 1 x year Perceptions of Skills Survey SBLT & Staff Pre SBLT Day 5 & Staff Post Administered by RCs & Coaches Uploaded via scantron by Project staff 1 x year School Personnel Satisfaction Survey SBLT & Staff Pre Administered by RCs & Coaches Uploaded via scantron by Project staff 1 x year Training Evaluation Survey** SBLT Day 1 & Day 2 SBLT Day 3 SBLT Day 4 SBLT Day 5 Administered by RCs & Coaches Uploaded via scantron by Project staff 4 x year Tied to training schedule


341 Measure Collection Timeline Collection Method & Responsible Personnel Data Entry Method & Responsible Personnel Analysis Frequency Aug Sep Oct Nov Dec Jan Feb Mar Apr May Jun Jul Training & Technical Assistance Logs Regional Coordinator Training & Technical Assistance Logs X X X X X X X X X X X RCs track activities and hours RCs enter into remote database (minimum of monthly) Monthly Coaches Training & Technical Assistance Logs* X X X X X X X X X X X Coaches track activities and hours Coaches enter into remote database (minimum of monthly) Monthly Implementation Integrity Measures Tiers I & II Critical Components Checklist* T1 Window T2 Window T3 Window Coaches complete checklists from permanent products Project staff enter into database 3 x year Tiers I & II Observation Checklist* NOT COLLECTED DURING YEAR 1 Tier III Critical Components Checklist* NOT COLLECTED DURING YEAR 1 Problem-Solving Team Meeting Checklists: Initial & Follow-Up* NOT COLLECTED DURING YEAR 1 Self Assessment of Problem Solving Implementation (SAPSI) Pre Post SBLT completes while coach facilitates Project staff enter 2 x year


342 Measure Collection Timeline Collection Method & Responsible Personnel Data Entry Method & Responsible Personnel Analysis Frequency Aug Sep Oct Nov Dec Jan Feb Mar Apr May Jun Jul School Demographics School Demographics (See “School Demographics Data Protocol”)* X PE collects from FL DOE Data Warehouse Project staff download files 1 x year School Staff Demographics (See “School Staff Data Protocol”)* X PE collects from FL DOE Data Warehouse Project staff download files 1 x year School Level Student and Systemic Outcomes SAT-10/FCAT* (See “Individual Student Data Protocol”) X PE collects from FL DOE Warehouse Project staff download files 1 x year DIBELS/CBM* (See “Individual Student Data Protocol”) X PE collects from FCRR Project staff download files 1 x year ODRs (See “Systemic Outcome Data Protocol”)* X PE collects from FL DOE Warehouse Project staff download files 1 x year PST Referrals (See “Systemic Outcome Data Protocol”)* X PE collects from districts Project staff download files 1 x year ESE Referrals (See “Systemic Outcome Data Protocol”)* X PE collects from FL DOE Warehouse Project staff download files 1 x year ESE Evaluations (See “Systemic Outcome Data Protocol”)* X PE collects from FL DOE Warehouse Project staff download files 1 x year ESE Placements (See “Systemic Outcome Data Protocol”)* X PE collects from FL DOE Warehouse Project staff download files 1 x year


343 Measure Collection Timeline Collection Method & Responsible Personnel Data Entry Method & Responsible Personnel Analysis Frequency Aug Sep Oct Nov Dec Jan Feb Mar Apr May Jun Jul Absences (See “Individual Student Data Protocol”)* X PE collects from FL DOE Warehouse Project staff download files 1 x year Retentions (See “Individual Student Data Protocol”)* X PE collects from FL DOE Warehouse Project staff download files 1 x year Other Process Measures Coaching Evaluation Survey** X Mailed to principals to be completed by SBLTs Uploaded via scantron by Project staff 1 x year Technical Assistance Evaluation Survey – Statewide Training Versions? NOT COMPLETED DURING YEAR 1 Other Outcome Measures Parent Satisfaction Survey* NOT COMPLETED DURING YEAR 1


344 Appendix G Statistical Models


Research Question 1

Multi-Level Model for Predicting an Educator's Average Item Score on the Beliefs Survey

Individual's Beliefs Score = γ000 + γ001(School Size) + γ002(Staff Size) + γ003(% White) + γ004(% Black) + γ005(% Hispanic) + γ006(% Asian) + γ007(% Native American) + γ008(% Multi-Racial) + γ009(% Male) + γ010(% Free-Reduced Lunch) + γ011(% English Language Learners) + γ012(% Students with Disabilities) + γ013(Pilot School Status) + γ014(District A) + γ015(District B) + γ016(District C) + γ017(District D) + γ018(District E) + γ019(District F) + γ020(District G) + γ021(District H) + γ022(Average FCAT Baseline)
+ γ100(Time) + γ101(School Size × Time) + γ102(Staff Size × Time) + γ103(% White × Time) + γ104(% Black × Time) + γ105(% Hispanic × Time) + γ106(% Asian × Time) + γ107(% Native American × Time) + γ108(% Multi-Racial × Time) + γ109(% Male × Time) + γ110(% Free-Reduced Lunch × Time) + γ111(% English Language Learners × Time) + γ112(% Students with Disabilities × Time) + γ113(Pilot School Status × Time) + γ114(District A × Time) + γ115(District B × Time) + γ116(District C × Time) + γ117(District D × Time) + γ118(District E × Time) + γ119(District F × Time) + γ120(District G × Time) + γ121(District H × Time) + γ122(% SBLT Attendance × Time) + γ123(# Coach Trainings × Time) + γ124(Coach Training Hours × Time) + γ125(# Coach TA × Time) + γ126(Coach TA Hours × Time) + γ127(Average FCAT Baseline × Time)
+ β001(General Education Teacher) + β002(Special Education Teacher) + β003(Administrator) + β004(Student Support Services) + β005(Other) + β006(Experience) + β007(Degree) + β008(SBLT Membership)
+ β101(General Education Teacher × Time) + β102(Special Education Teacher × Time) + β103(Administrator × Time) + β104(Student Support Services × Time) + β105(Other × Time) + β106(Experience × Time) + β107(Degree × Time) + β108(SBLT Membership × Time)
+ e000 + r000 + r001 + r002

Note. γ000 = School-level intercept; γ001–γ022 = School-level predictors of an educator's average item score; γ100 = School-level slope; γ101–γ127 = School-level predictors of an educator's slope; β001–β008 = Educator-level predictors of an educator's average item score; β101–β108 = Educator-level predictors of an educator's slope; e000 = Error; r000–r002 = Error associated with random intercepts across levels.
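The model above is a three-level growth model: repeated survey administrations nested within educators, who are nested within schools. As a rough sketch of how a model of this form might be fit, the example below uses Python's statsmodels with a random intercept for schools and a variance component giving educators within schools their own random intercepts; only a handful of the predictors listed above are included, and the file and column names are assumptions. This is an illustration of the model family, not the software or exact specification used in the study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format file: one row per educator per survey administration,
# with columns such as beliefs (average item score), time, pilot, school_size,
# sblt_member, school_id, and educator_id. All names are assumptions.
df = pd.read_csv("beliefs_long.csv")

# Random intercept for schools (the grouping factor) plus a variance component
# giving educators nested within schools their own random intercepts.
model = smf.mixedlm(
    "beliefs ~ time + pilot + pilot:time + school_size + sblt_member",
    data=df,
    groups="school_id",
    vc_formula={"educator": "0 + C(educator_id)"},
)
result = model.fit(reml=True)
print(result.summary())
```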


Multi-Level Model for Predicting an Educator's Average Item Score on the Perceptions of Skills Survey (Response to Intervention – Academic and Response to Intervention – Behavior Skills)

Individual's Perceptions of (RTI-A or RTI-B) Skills Score = γ000 + γ001(School Size) + γ002(Staff Size) + γ003(% White) + γ004(% Black) + γ005(% Hispanic) + γ006(% Asian) + γ007(% Native American) + γ008(% Multi-Racial) + γ009(% Male) + γ010(% Free-Reduced Lunch) + γ011(% English Language Learners) + γ012(% Students with Disabilities) + γ013(Pilot School Status) + γ014(District A) + γ015(District B) + γ016(District C) + γ017(District D) + γ018(District E) + γ019(District F) + γ020(District G) + γ021(District H) + γ022(Average FCAT Baseline)
+ γ100(Time) + γ101(School Size × Time) + γ102(Staff Size × Time) + γ103(% White × Time) + γ104(% Black × Time) + γ105(% Hispanic × Time) + γ106(% Asian × Time) + γ107(% Native American × Time) + γ108(% Multi-Racial × Time) + γ109(% Male × Time) + γ110(% Free-Reduced Lunch × Time) + γ111(% English Language Learners × Time) + γ112(% Students with Disabilities × Time) + γ113(Pilot School Status × Time) + γ114(District A × Time) + γ115(District B × Time) + γ116(District C × Time) + γ117(District D × Time) + γ118(District E × Time) + γ119(District F × Time) + γ120(District G × Time) + γ121(District H × Time) + γ122(% SBLT Attendance × Time) + γ123(# Coach Trainings × Time) + γ124(Coach Training Hours × Time) + γ125(# Coach TA × Time) + γ126(Coach TA Hours × Time) + γ127(Average FCAT Baseline × Time)
+ β001(General Education Teacher) + β002(Special Education Teacher) + β003(Administrator) + β004(Student Support Services) + β005(Other) + β006(Experience) + β007(Degree) + β008(SBLT Membership)
+ β101(General Education Teacher × Time) + β102(Special Education Teacher × Time) + β103(Administrator × Time) + β104(Student Support Services × Time) + β105(Other × Time) + β106(Experience × Time) + β107(Degree × Time) + β108(SBLT Membership × Time)
+ e000 + r000 + r001 + r002

Note. γ000 = School-level intercept; γ001–γ022 = School-level predictors of an educator's average item score; γ100 = School-level slope; γ101–γ127 = School-level predictors of an educator's slope; β001–β008 = Educator-level predictors of an educator's average item score; β101–β108 = Educator-level predictors of an educator's slope; e000 = Error; r000–r002 = Error associated with random intercepts across levels.


Research Question 2

Multi-Level Model for Predicting an Educator's Percent of Points Possible on Skill Assessments Administered

Individual's Skill Assessment Score = γ000 + γ001(School Size) + γ002(Staff Size) + γ003(% White) + γ004(% Black) + γ005(% Hispanic) + γ006(% Asian) + γ007(% Native American) + γ008(% Multi-Racial) + γ009(% Male) + γ010(% Free-Reduced Lunch) + γ011(% English Language Learners) + γ012(% Students with Disabilities) + γ013(District A) + γ014(District B) + γ015(District C) + γ016(District D) + γ017(District E) + γ018(District F) + γ019(District G) + γ020(District H) + γ021(% SBLT Attendance)
+ γ100(Time) + γ101(School Size × Time) + γ102(Staff Size × Time) + γ103(% White × Time) + γ104(% Black × Time) + γ105(% Hispanic × Time) + γ106(% Asian × Time) + γ107(% Native American × Time) + γ108(% Multi-Racial × Time) + γ109(% Male × Time) + γ110(% Free-Reduced Lunch × Time) + γ111(% English Language Learners × Time) + γ112(% Students with Disabilities × Time) + γ113(District A × Time) + γ114(District B × Time) + γ115(District C × Time) + γ116(District D × Time) + γ117(District E × Time) + γ118(District F × Time) + γ119(District G × Time) + γ120(District H × Time) + γ121(% SBLT Attendance × Time)
+ β001(General Education Teacher) + β002(Special Education Teacher) + β003(Administrator) + β004(Student Support Services) + β005(Other) + β006(Experience) + β007(Degree)
+ β101(General Education Teacher × Time) + β102(Special Education Teacher × Time) + β103(Administrator × Time) + β104(Student Support Services × Time) + β105(Other × Time) + β106(Experience × Time) + β107(Degree × Time)
+ e000

Note. γ000 = School-level intercept; γ001–γ021 = School-level predictors of an educator's skill assessment score; γ100 = School-level slope; γ101–γ121 = School-level predictors of an educator's slope; β001–β007 = Educator-level predictors of an educator's skill assessment score; β101–β107 = Educator-level predictors of an educator's slope; e000 = Error.


Research Question 3

Multi-Level Model for Predicting Implementation Integrity as Measured by the Self-Assessment of Problem Solving Implementation (SAPSI)

Building's SAPSI Score = γ000 + γ001(School Size) + γ002(Staff Size) + γ003(% White) + γ004(% Black) + γ005(% Hispanic) + γ006(% Asian) + γ007(% Native American) + γ008(% Multi-Racial) + γ009(% Male) + γ010(% Free-Reduced Lunch) + γ011(% English Language Learners) + γ012(% Students with Disabilities) + γ013(District A) + γ014(District B) + γ015(District C) + γ016(District D) + γ017(District E) + γ018(District F) + γ019(District G) + γ020(District H) + γ021(Average FCAT Baseline)
+ γ100(Time) + γ101(School Size × Time) + γ102(Staff Size × Time) + γ103(% White × Time) + γ104(% Black × Time) + γ105(% Hispanic × Time) + γ106(% Asian × Time) + γ107(% Native American × Time) + γ108(% Multi-Racial × Time) + γ109(% Male × Time) + γ110(% Free-Reduced Lunch × Time) + γ111(% English Language Learners × Time) + γ112(% Students with Disabilities × Time) + γ113(District A × Time) + γ114(District B × Time) + γ115(District C × Time) + γ116(District D × Time) + γ117(District E × Time) + γ118(District F × Time) + γ119(District G × Time) + γ120(District H × Time) + γ121(% SBLT Attendance × Time) + γ122(# Coach Trainings × Time) + γ123(Coach Training Hours × Time) + γ124(# Coach TA × Time) + γ125(Coach TA Hours × Time) + γ126(Average FCAT Baseline × Time)
+ e000 + r000 + r001 + r100 + r101

Note. γ000 = School-level intercept; γ001–γ021 = School-level predictors of a building's score; γ100 = School-level slope; γ101–γ126 = School-level predictors of a building's slope; e000 = Error; r000–r001 = Error associated with random intercepts; r100–r101 = Error associated with random slopes.


Multi-Level Model for Predicting a Building's Implementation Integrity as Measured by the Tier I & II Critical Components Checklist

Building's Tier I & II Critical Components Checklist Score = γ000 + γ001(School Size) + γ002(Staff Size) + γ003(% White) + γ004(% Black) + γ005(% Hispanic) + γ006(% Asian) + γ007(% Native American) + γ008(% Multi-Racial) + γ009(% Male) + γ010(% Free-Reduced Lunch) + γ011(% English Language Learners) + γ012(% Students with Disabilities) + γ013(Pilot School Status) + γ014(District A) + γ015(District B) + γ016(District C) + γ017(District D) + γ018(District E) + γ019(District F) + γ020(District G) + γ021(District H) + γ022(% SBLT Attendance) + γ023(# Coach Trainings) + γ024(Coach Training Hours) + γ025(# Coach TA) + γ026(Coach TA Hours) + γ027(Average FCAT Baseline) + γ028(Average Implementation Baseline)
+ γ100(Time) + γ101(School Size × Time) + γ102(Staff Size × Time) + γ103(% White × Time) + γ104(% Black × Time) + γ105(% Hispanic × Time) + γ106(% Asian × Time) + γ107(% Native American × Time) + γ108(% Multi-Racial × Time) + γ109(% Male × Time) + γ110(% Free-Reduced Lunch × Time) + γ111(% English Language Learners × Time) + γ112(% Students with Disabilities × Time) + γ113(Pilot School Status × Time) + γ114(District A × Time) + γ115(District B × Time) + γ116(District C × Time) + γ117(District D × Time) + γ118(District E × Time) + γ119(District F × Time) + γ120(District G × Time) + γ121(District H × Time) + γ122(% SBLT Attendance × Time) + γ123(# Coach Trainings × Time) + γ124(Coach Training Hours × Time) + γ125(# Coach TA × Time) + γ126(Coach TA Hours × Time) + γ127(Average FCAT Baseline × Time) + γ128(Average Implementation Baseline × Time)
+ e000 + r000 + r001 + r100 + r101

Note. γ000 = School-level intercept; γ001–γ028 = School-level predictors of a building's score; γ100 = School-level slope; γ101–γ128 = School-level predictors of a building's slope; e000 = Error; r000–r001 = Error associated with random intercepts; r100–r101 = Error associated with random slopes.


350 Appendix H Residual Variance Assumption Analyses Summary


Normality of Residuals Assumption – Beliefs Model

Multi-level models assume that residuals of predicted values are normally distributed. To examine this assumption, two analyses were conducted. First, a visual analysis of a scatterplot of the residuals from predicted average item beliefs scores was conducted to determine the extent to which the residuals appeared to be normally distributed. Second, the homogeneity of the variance across units was examined by visually analyzing the distribution of residual variances across schools. A stem and leaf plot was created from the residual variances across schools to determine the extent to which these residual variances were normally distributed.

Figure 1 below includes the scatterplot of the residuals from predicted average item beliefs scores. A visual inspection of the scatterplot revealed relatively normally distributed residuals. Figure 2 below includes a stem and leaf plot of the residual variances of average item beliefs scores across schools. A visual inspection of the stem and leaf plot suggested that the residual variances across schools were slightly skewed, with one significant outlier appearing to contribute to the skewness observed. Although these residual variances were slightly skewed, the visual analysis did not suggest that multi-level modeling procedures should be abandoned.
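The two checks described above can be reproduced for any fitted model: a scatterplot of residuals against predicted values, and the distribution of within-school residual variances. The sketch below continues the hypothetical statsmodels example from Appendix G (the `result` and `df` objects are assumed to exist from that sketch) and substitutes a histogram for the stem-and-leaf plot; it is an illustration of the diagnostics, not the output summarized in Figures 1 and 2.

```python
import matplotlib.pyplot as plt

# Level-1 residuals and predicted values from the fitted model (hypothetical
# objects carried over from the Appendix G sketch).
predicted = result.fittedvalues
residuals = result.resid

# Check 1: scatterplot of residuals against predicted scores (cf. Figure 1).
plt.figure()
plt.scatter(predicted, residuals, s=8, alpha=0.4)
plt.axhline(0, color="gray", linewidth=1)
plt.xlabel("Predicted")
plt.ylabel("Residual")
plt.title("Residuals vs. predicted average item scores")

# Check 2: distribution of within-school residual variances (cf. Figure 2).
resid_var_by_school = residuals.groupby(df["school_id"]).var()
plt.figure()
plt.hist(resid_var_by_school, bins=15)
plt.xlabel("Within-school residual variance")
plt.ylabel("Number of schools")
plt.title("Distribution of Level-1 residual variance across schools")

plt.show()
```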


Figure 1. Scatterplot of Predicted Beliefs Score Residuals (residuals plotted against predicted average item beliefs scores).


Stem Leaf
13 0
12
12
11
11
10
10
 9
 9
 8
 8
 7
 7
 6
 6 1
 5 5
 5 12
 4 556
 4 00112344
 3 5556666667889
 3 000000112344
 2 556666778888888
 2 011123444
 1 9
Multiply Stem.Leaf by 10**-2

Figure 2. Distribution of Level 1 Residual Variance Across Level 3 Units for Beliefs Model.


Normality of Residuals Assumption – RTI-A Model

Multi-level models assume that residuals of predicted values are normally distributed. To examine this assumption, two analyses were conducted. First, a scatterplot of the residuals from predicted average item RTI-A scores was examined visually to determine the extent to which the residuals appeared to be normally distributed. Second, the homogeneity of the variance across units was examined by visually analyzing the distribution of residual variances across schools. A stem and leaf plot was created from the residual variances across schools to determine the extent to which these residual variances were normally distributed.

Figure 3 below includes the scatterplot of the residuals from predicted average item RTI-A scores. A visual inspection of the scatterplot revealed relatively normally distributed residual variances. Figure 4 below includes a stem and leaf plot of the residual variances of average item RTI-A scores across schools. A visual inspection of the stem and leaf plot suggested that the residual variances across schools were slightly skewed, with one outlier appearing to contribute to the skewness observed. Although these residuals were slightly skewed, the visual analysis did not suggest that multi-level modeling procedures should be abandoned.


Figure 3. Scatterplot of Predicted RTI-A Score Residuals (residuals plotted against predicted average item RTI-A scores).


Stem Leaf
28 6
27
26
25
24
23
22 35
21 19
20 3
19 1
18 89
17 1
16 0149
15 0779
14 033
13 267799
12 033348
11 002466678
10 013356
 9 145689
 8 2445
 7 35699
 6 27
Multiply Stem.Leaf by 10**-2

Figure 4. Distribution of Level 1 Residual Variance Across Level 3 Units for RTI-A Model.


Normality of Residuals Assumption – RTI-B Model

Multi-level models assume that residuals of predicted values are normally distributed. To examine this assumption, two analyses were conducted. First, a scatterplot of the residuals from predicted average item RTI-B scores was examined visually to determine the extent to which the residuals appeared to be normally distributed. Second, the homogeneity of the variance across units was examined by visually analyzing the distribution of residual variances across schools. A stem and leaf plot was created from the residual variances across schools to determine the extent to which these residual variances were normally distributed.

Figure 5 below includes the scatterplot of the residuals from predicted average item RTI-B scores. A visual inspection of the scatterplot revealed relatively normally distributed residual variances. Figure 6 below includes a stem and leaf plot of the residual variances of average item RTI-B scores across schools. A visual inspection of the stem and leaf plot suggested that the residual variances across schools were slightly skewed, with one significant outlier appearing to contribute to the skewness observed. Although these residuals were slightly skewed, the visual analysis did not suggest that multi-level modeling procedures should be abandoned.


Figure 5. Scatterplot of Predicted RTI-B Score Residuals (residuals plotted against predicted average item RTI-B scores).


Stem Leaf
40 2
38
36
34
32
30
28
26 589
24 0
22 94
20 458906
18 22406
16 12581223468
14 0204
12 12255714556666678
10 6912566677
 8 46678
Multiply Stem.Leaf by 10**-2

Figure 6. Distribution of Level 1 Residual Variance Across Level 3 Units for RTI-B Model.


Normality of Residuals Assumption – Skills Model

Multi-level models assume that residuals of predicted values are normally distributed. To examine this assumption, two analyses were conducted. First, a scatterplot of the residuals from predicted percent of points possible skills scores was examined visually to determine the extent to which the residuals appeared to be normally distributed. Second, the homogeneity of the variance across units was examined by visually analyzing the distribution of residual variances across schools. A stem and leaf plot was created from the residual variances across schools to determine the extent to which these residual variances were normally distributed.

Figure 7 below includes the scatterplot of the residuals from predicted percent of points possible skills scores. A visual inspection of the scatterplot revealed that the residual variances appeared to be somewhat skewed, with more predictions occurring above the observed value than below. Figure 8 below includes a stem and leaf plot of the residual variances of percent of points possible skills scores across schools. A visual inspection of the stem and leaf plot suggested that the residual variances across schools were relatively normally distributed. Although the predicted value residuals were somewhat skewed, the visual analysis did not suggest that multi-level modeling procedures should be abandoned.


Figure 7. Scatterplot of Predicted Skill Assessment Score Residuals (residuals plotted against predicted percent of points possible skills scores).


Stem Leaf
4 6
4 2
3 7789
3 1123
2 556789
2 00222334
1 55689
1 0023444
0 5579
Multiply Stem.Leaf by 10**-2

Figure 8. Distribution of Level 1 Residual Variance Across Level 3 Units for Skill Assessment Model.


Normality of Residuals Assumption – SAPSI Model

Multi-level models assume that residuals of predicted values are normally distributed. To examine this assumption, two analyses were conducted. First, a scatterplot of the residuals from predicted average item SAPSI scores was examined visually to determine the extent to which the residuals appeared to be normally distributed. Second, the homogeneity of the variance across units was examined by visually analyzing the distribution of residual variances across schools. A stem and leaf plot was created from the residual variances across schools to determine the extent to which these residual variances were normally distributed.

Figure 9 below includes the scatterplot of the residuals from predicted average item SAPSI scores. A visual inspection of the scatterplot revealed relatively normally distributed residual variances. Figure 10 below includes a stem and leaf plot of the residual variances of average item SAPSI scores across schools. A visual inspection of the stem and leaf plot suggested that the residual variances across schools were slightly skewed, with two outliers appearing to contribute to the skewness observed. Although these residuals were slightly skewed, the visual analysis did not suggest that multi-level modeling procedures should be abandoned.


Figure 9. Scatterplot of Predicted SAPSI Score Residuals (residuals plotted against predicted average item SAPSI scores).


Stem Leaf
6 1
5
5
4 7
4 3
3
3
2 5589
2 23
1 67
1 00224
0 5779
0 0000000111222233334
Multiply Stem.Leaf by 10**-2

Figure 10. Distribution of Level 1 Residual Variance Across Level 2 Units for the SAPSI Model.


Normality of Residuals Assumption – Tier I and II Critical Components Checklist Model

Multi-level models assume that residuals of predicted values are normally distributed. To examine this assumption, two analyses were conducted. First, a scatterplot of the residuals from predicted average item Tier I and II Critical Components Checklist scores was examined visually to determine the extent to which the residuals appeared to be normally distributed. Second, the homogeneity of the variance across units was examined by visually analyzing the distribution of residual variances across schools. A stem and leaf plot was created from the residual variances across schools to determine the extent to which these residual variances were normally distributed.

Figure 11 below includes the scatterplot of the residuals from predicted average item Tier I and II Critical Components Checklist scores. A visual inspection of the scatterplot revealed relatively normally distributed residual variances. Figure 12 below includes a stem and leaf plot of the residual variances of average item checklist scores across schools. A visual inspection of the stem and leaf plot suggested that the residual variances across schools were slightly skewed, with three outliers appearing to contribute to the skewness observed. Although these residuals were slightly skewed, the visual analysis did not suggest that multi-level modeling procedures should be abandoned.


Figure 11. Scatterplot of Predicted Tier I and II Critical Components Checklist Score Residuals (residuals plotted against predicted average item checklist scores).


Stem Leaf
44 3
42
40 5
38
36
34 4
32
30 7
28
26
24 06
22
20 46
18 9279
16 45
14 8
12 8
10 3
 8 56714
 6 12067
 4 1378049
 2 0124561168
 0 01111344679037
Multiply Stem.Leaf by 10**-3

Figure 12. Distribution of Level 1 Residual Variance Across Level 2 Units for the Tier I and II Critical Components Checklist Model.


About the Author

Jose Michael Castillo received his bachelor's degree in psychology from Florida State University. He received a Presidential Fellowship to attend the University of South Florida to attain his doctorate in the field of school psychology. Jose is currently employed as a Project Evaluator on the Florida Problem Solving/Response to Intervention (PS/RtI) Project. As a Project Evaluator, the author engaged in the provision of training and technical assistance to PS/RtI Coaches on instrumentation and data collection procedures. The author also worked collaboratively with Project staff and PS/RtI Coaches to use the data collected to inform Year 1 Project activities. The author's employment by the Florida PS/RtI Project and collaborative relationships with Project staff suggest that the results of the study should be interpreted as an internal rather than independent evaluation.

