USF Libraries
USF Digital Collections

The effects of professional development efforts on educator beliefs and perceptions of competency within a school-based ...


Material Information

Title:
The effects of professional development efforts on educator beliefs and perceptions of competency within a school-based response to intervention model
Physical Description:
Book
Language:
English
Creator:
Nadeau, Joshua M
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla
Publication Date:
2010
Subjects

Subjects / Keywords:
Systems change
School policy
Teachers
Self-efficacy
Rti
Dissertations, Academic -- Psychological & Social Foundations -- Masters -- USF   ( lcsh )
Genre:
non-fiction   ( marcgt )

Notes

Abstract:
ABSTRACT: The purpose of this study was to identify and understand relationships between educators' perceived skills, observed practices, and stated beliefs - as well as the impact of evidence-based professional development upon those relationships - during the first year of ongoing school-based implementation for Florida's Statewide Problem-Solving/Response to Intervention (PS/RtI) Project. The PS/RtI model is conceptualized as providing a data-based template to drive student service delivery decisions; as providing a tiered framework of assessment and evaluation to maximize efficiency of allocated school funds; and as defining the determination of eligibility for special education services to be based solely upon a continuous necessity of resources/services beyond those typically available in the general education setting. The current study analyzed existing data from Florida's statewide PS/RtI project, collected during the 2007-2008 school year. During specified professional development sessions, educators provided responses to various questions about their beliefs regarding, perceptions of competency within, and estimated observational frequency of, critical components constituting the PS/RtI model. Three specific research questions were investigated from analysis of these responses; specifically: (1) What is the relationship between beliefs about a training objective, and the self-rated perception of skills and frequency of observed practices associated with that objective, (2) What are the effects of specific skills training on the relationship between self-reported beliefs, and associated perceptions of skills and frequency of observed practices, and (3) What is the relationship between initial (pre-training) and time two (post-training) measures of self-reported beliefs and perceived skills related to data usage, and of self-reported beliefs, perceived skills, and observed practices related to academic instruction? This study found that, for the first year of implementation, initial educator beliefs regarding evidence-based instruction and data-based decision making were only slightly related to self-perceived competence in these areas; furthermore, independent of any effect that skills training may have had upon educator survey scores, the relationship between beliefs, skills, and practices scores did not significantly differ over the first year of implementation. Implications of the findings for practice, including scaling up of RtI implementation efforts, are discussed.
Thesis:
Thesis (Ed.S.)--University of South Florida, 2010.
Bibliography:
Includes bibliographical references.
System Details:
Mode of access: World Wide Web.
System Details:
System requirements: World Wide Web browser and PDF reader.
Statement of Responsibility:
by Joshua M Nadeau.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains X pages.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
usfldc doi - E14-SFE0003334
usfldc handle - e14.3334
System ID:
SFS0027650:00001


This item is only available as the following downloads:


Full Text

PAGE 1

The Effects of Professional Development Efforts on Educator Beliefs and Perceptions of Competency Within a School-Based Response to Intervention Model

by

Joshua Nadeau

A thesis submitted in partial fulfillment of the requirements for the degree of Education Specialist
Department of Psychological and Social Foundations
College of Education
University of South Florida

Major Professor: George Batsche, Ed.D.
Shannon Suldo, Ph.D.
John Ferron, Ph.D.
Rance Harbor, Ph.D.

Date of Approval: October 2, 2009

Keywords: systems change, school policy, teachers, self-efficacy, rti

Copyright 2010, Joshua Nadeau

PAGE 2

Dedication

This thesis is dedicated to the two most important people in my life: my wife and son.

To Catherine: your belief in me continually astounds and humbles me, particularly after countless evenings spent ignoring you to work on various projects. I could not have finished this thesis without your patience on such evenings, without your willingness to help me find the right word, without your unflagging support on the good and bad days alike. I love you more than I could ever say, and your support and belief are blessings of which I strive to be worthy.

To Lucas: you are an amazing child, and the love, faith and trust you have in me are truly breath-taking. Someday, when you're older, you'll blow the dust off of this book and read these words. You'll know that on the most difficult days, when I was bone-tired and thoroughly discouraged, the knowledge that I was coming home to see and play with you made everything else irrelevant. You are the best thing that ever happened to your mom and me, and you are an inspiration to me as your Daddy and as an educator. I love you. Getting to read books and tell bedtime stories to you is something I will always cherish. On that day in the future, when you're reading these words, know that this story is also for you. Come on, you know how it starts: Once upon a time...

PAGE 3

Acknowledgments

I would like to take this opportunity to thank a small group of professionals, without whom this ship would have foundered on the rocks many times over. First of all, to Dr. George Batsche: you were willing to let me use data from a project that is personally and professionally meaningful to you, and for that I am grateful; you took countless moments to patiently explain some of the more subtle points of our profession, and for that I am touched; three years ago, you looked past an arrogant and sarcastic exterior and saw someone on whom you were willing to take a chance, and for that I am deeply humbled.

To Dr. Shannon Suldo: how does one put into words how incredibly empowering it is to have you in my corner? For helping me to communicate more clearly, for impromptu counseling sessions (some thesis-related, most not), for always having just the right twist in phrasing to make the writing pop, and for perpetually being willing to point me in the right direction when I was lost: thank you.

To Dr. Rance Harbor: the time you have spent in helping me avoid some of the pitfalls both in theses and in grad school has been quite profitable to me professionally, and quite enjoyable personally. Thank you for graciously sharing your knowledge, your experiences, your time, your insights, and your feedback.

Last but certainly not least, to Dr. John Ferron: the depth of knowledge and love of learning that you bring to our conversations has been simply astounding. While I have always had an affinity for the quietly powerful language of mathematics, your patience and kindness have been instrumental in helping me to translate this passion into the world of Education. It has been an extreme privilege to work with each of you on this project, and I consider myself fortunate to be able to refer to all four of you as my thesis committee.

PAGE 4

Table of Contents

List of Figures  vi
Abstract  vii
Chapter 1: Introduction  1
    Comparison of Traditional and RtI Service Delivery Models  3
    Overview of Systems Change Process  6
    Conflicts between Educator Self-Efficacy and Large-Scale Change  8
    Rationale for the Study  11
    Purpose of the Study  12
Chapter 2: Literature Review  14
    Legislation  14
    Introduction to RtI  17
    Comparison of Outcomes in the Traditional and RtI Models  24
        The Traditional Model  24
        The PS/RtI Model  31
    Research on Systems Change  34
        Consensus  35
        Infrastructure  37
        Implementation  39
    Research on Educator Self-Efficacy and Change  40

PAGE 5

    Summary  45
Chapter 3: Methods  47
    Participants  47
        Demonstration Schools  47
        Comparison Schools  48
    Design of the Study  50
    Measures  51
        Beliefs Survey  53
        Perceptions of Skills Survey  55
        Perceptions of Practices Survey  57
    Procedures  58
        Professional Development Training  58
        Data Collection and Data Entry  60
    Analyses  61
    Delimitations  66
    Limitations  67
Chapter 4: Results  69
    Research Question One  69
        Descriptive Statistics  70
        Correlation Coefficients  71
    Research Question Two  72
        Descriptive Statistics  73
        Correlation Coefficients  74

PAGE 6

    Research Question Three  75
        Descriptive Data  76
        Multiple Regression  76
Chapter 5: Discussion  80
    Study Overview  81
    Conclusions and Discussion  82
        Research Question One  82
        Research Question Two  84
        Research Question Three  86
    Limitations and Considerations  88
        Internal Validity  88
        External Validity  90
    Recommendations for Future Research  91
References  95
Appendices  111
    Appendix A: Demonstration District Mini-Grant Application and Scoring Rubric  112
    Appendix B: Example Validation Forms  138
    Appendix C: Beliefs Survey  151
    Appendix D: Perceptions of RtI Skills Survey  154
    Appendix E: Perceptions of Practices Survey  159
    Appendix F: Data Collection, Entry, and Analysis Rubric  164

PAGE 7

List of Tables

Table 1   Categorical proportionality of fourth grade students below Basic in reading  25
Table 2   Categorical proportionality of eighth grade students below Basic in reading  26
Table 3   Categorical proportionality of fourth grade students below Basic in math  27
Table 4   Categorical proportionality of eighth grade students below Basic in math  28
Table 5   Student achievement trends across NAEP mathematics administrations  28
Table 6   Comparison of universal and personal sense of teaching inefficacy  41
Table 7   Demographics for demonstration and comparison buildings  48
Table 8   School and student demographics for selected demonstration schools  49
Table 9   School and student demographics for selected comparison schools  50
Table 10  Beliefs Survey Factor Descriptions  55
Table 11  Perceptions of RtI Skills Survey Factor Descriptions  56
Table 12  Perceptions of Practices Survey Factor Descriptors  58
Table 13  Operational definitions for Research Question One  62
Table 14  Operational definitions for Research Question Two  65
Table 15  Operational definitions for Research Question Three  66
Table 16  Training objective operational definitions  70

PAGE 8

Table 17  Relevant Factor score descriptives for participating schools  71
Table 18  Correlation matrices for Research Question One  72
Table 19  Relevant Factor score Means and SDs for demonstration schools  73
Table 20  Correlation Matrix and ZPF values for Use of Data objective  74
Table 21  Correlation Matrix and ZPF values for Academic Instruction (Skills) objective  75
Table 22  Correlation Matrix and ZPF values for Academic Instruction (Practices) objective  75
Table 23  MRA Results for Time One Data Usage Variables Predicting Time Two Data Skills  77
Table 24  MRA Results for Time One Academic Variables Predicting Time Two Academic Skills  78
Table 25  MRA Results for Time One Academic Variables Predicting Time Two Academic Practices  79

PAGE 9

List of Figures

Figure 1  Problem Solving Method/Response to Intervention Model  18
Figure 2  Tri-tiered academic and behavioral intervention delivery model  21

PAGE 10

The Effects of Professional Development Efforts on Educator Beliefs and Perceptions of Competency Within a School-Based Response to Intervention Model

Joshua Nadeau

ABSTRACT

The purpose of this study was to identify and understand relationships between educators' perceived skills, observed practices, and stated beliefs - as well as the impact of evidence-based professional development upon those relationships - during the first year of ongoing school-based implementation for Florida's Statewide Problem-Solving/Response to Intervention (PS/RtI) Project. The PS/RtI model is conceptualized as providing a data-based template to drive student service delivery decisions; as providing a tiered framework of assessment and evaluation to maximize efficiency of allocated school funds; and as defining the determination of eligibility for special education services to be based solely upon a continuous necessity of resources/services beyond those typically available in the general education setting. The current study analyzed existing data from Florida's statewide PS/RtI project, collected during the 2007-2008 school year. During specified professional development sessions, educators provided responses to various questions about their beliefs regarding, perceptions of competency within, and estimated observational frequency of, critical components constituting the PS/RtI model. Three specific research questions were investigated from analysis of these responses; specifically: (1) What is the relationship

PAGE 11

between beliefs about a training objective, and the self-rated perception of skills and frequency of observed practices associated with that objective, (2) What are the effects of specific skills training on the relationship between self-reported beliefs, and associated perceptions of skills and frequency of observed practices, and (3) What is the relationship between initial (pre-training) and time two (post-training) measures of self-reported beliefs and perceived skills related to data usage, and of self-reported beliefs, perceived skills, and observed practices related to academic instruction? This study found that, for the first year of implementation, initial educator beliefs regarding evidence-based instruction and data-based decision making were only slightly related to self-perceived competence in these areas; furthermore, independent of any effect that skills training may have had upon educator survey scores, the relationship between beliefs, skills, and practices scores did not significantly differ over the first year of implementation. Implications of the findings for practice, including scaling up of RtI implementation efforts, are discussed.

PAGE 12

Chapter 1: Introduction

Mandates from federal and state agencies have continued to focus upon the improvement of educational service delivery; specifically, enhancing instructional methods and resources, requiring higher levels of performance from teachers, increasing the effectiveness of allowable curricula (Passow, 1990), and expecting higher levels of student performance (NCLB, 2002). Jacob and Hartshorne (2003) have argued that it is the nature of the relationship between public schools and government funding institutions - that is to say, the implicit expectation of compliance with federal and state statutes - which results in the use of educational mandates as the vehicle for change in schools. Recent studies have found that 20-40% of school-age children experience reading difficulties (Fletcher & Lyon, 1998), while 20-30% struggle with basic math skills (Lee, Grigg, & Dion, 2007). Hoagwood and Johnson (2003) estimate that 16-22% of school-age children exhibit diagnosable mental health problems. In addition to these statistics, Lee reports that there continue to be large gaps in achievement between low and high SES categories, as well as between Caucasian and ethnic minority students (Lee, Grigg, & Dion, 2007; Lee, Grigg, & Donahue, 2007). As a corollary to these systemic basic skill deficiencies among our school-age children, American students continue to receive lower scores on standardized achievement tests than their foreign peers (National Center for Education Statistics, 2005).

PAGE 13

A theme emerging from educational research is that the traditional system of education in the United States is not designed to provide effective services to students with diverse needs (Tilly, 2002; Torgeson, 2002). Instead, referral for special education evaluation often becomes the default action for students who, due to the nature of their needs, do not respond to the general education curriculum (Batsche, Elliott, Schrag, & Tilly, 2005). What is particularly problematic in such a pathognomonic model is the generalized lack of effort to provide evidence-based interventions within the general education classroom (Stanovich & Jordan, 1998). It is this heightened, discriminatory (to students with special needs) rate of eligibility referrals, combined with a lack of valid student identification procedures (Batsche et al., 2005), which has invited criticism leveled at the traditional service delivery model (Fletcher, Coulter, Reschly, & Vaughn, 2004), colloquially referred to as the "wait to fail" method (Batsche et al., 2005; President's Commission on Excellence in Special Education, 2002). The model is thus termed due to the significant discrepancy that is required between norm-referenced scores of achievement and cognitive ability, which results in students who may be a full school year (or more) behind their same-age peers before being deemed eligible for special services. Conversely, this method often results in students who, though consistently behind their peers, do not receive needed services or achieve desired outcomes (Stanovich, 1999). The preferred vehicle for change in schools has been the use of educational mandates (Jacob & Hartshorne, 2003). It is therefore not surprising to see legislation enacted in an effort to address the academic shortcomings previously noted. The No Child Left Behind Act of 2001 (NCLB, 2002) represents a focal shift from improvement

PAGE 14

of educational processes to provision of services intended to improve outcomes for all students. Furthermore, NCLB also contains an embedded mechanism for requiring schools to maintain accountability for the progress of all of their students; specifically, the Act calls for the exclusive use of data, both in the form of evidence-based instruction and to inform decision making. One of the more well-researched models for data-based decision making is Response to Intervention (RtI; Gresham, 2001). RtI is a method for using student-centered data to develop and implement evidence-based instruction and interventions in the general education environment, as well as to determine the nature of students' response to these interventions via ongoing progress monitoring (Batsche et al., 2005). In this fashion, the use of RtI serves many roles in our schools. Examples include the ability to determine the effectiveness of core instruction, as well as to consistently identify, analyze, and address learning problems at an early stage, both of which allow for the implementation of empirically tested efforts aimed at improving student outcomes in whole-class, small-group, and individual settings. In facilitating such efforts, RtI will also reveal the magnitude and frequency of those services necessary to achieve desired student outcomes. By evaluating student service requirements at the whole-class, small-group, and individual levels (known as Tiers I, II, and III, respectively), RtI allows for more effective use of resources by providing these services in the general education setting.

Comparison of Traditional and RtI Service Delivery Models

Reliance upon special education as a place for helping students who are underachieving has historically shown poor effectiveness when examining impact on student academic outcomes (Kavale & Forness, 1999; President's Commission on

PAGE 15

Excellence in Special Education, 2002). When coupled with the heightened referral and placement rates (Vaughn & Fuchs, 2003) and racial/ethnic overrepresentation receiving special education services (Donovan & Cross, 2002), it is easy to see the large-scale problems inherent in the wait-to-fail model (Heller, Holtzman, & Messick, 1982; Batsche et al., 2005). In contrast, research focused upon outcomes within the RtI model has found increased student performance in reading and math scores (Callender, 2006; Knoff & Batsche, 1995; Stollar & Graden, 2006). In addition, large-scale decreases in referral and placement for special education services, as well as a lessened overrepresentation among various racial/ethnic groups, have consistently been found (Knoff & Batsche, 1995; Tilly, 2003; Torgeson, 2007). Florida's statewide Problem Solving/Response to Intervention (PS/RtI) project (Batsche, Curtis, Dorman, Castillo, & Porter, 2007) stands as an example of one such model, focused tightly upon student outcomes and educator accountability. This comprehensive implementation plan was designed to synchronize efforts at the state level, while simultaneously providing the support for each district to develop local implementation plans (Florida Department of Education, 2008). This was considered to be of paramount importance in Florida, where multiple initiatives (e.g., Positive Behavior Support, Reading First) are addressed by various offices within the Florida Department of Education. The result of this is that each such effort has access only to the resources available to its associated office. Furthermore, each initiative generates or is held answerable to its own requirements for data collection and professional development,

PAGE 16

which translates as a lack of service integration and of data usable for comparison across all efforts (Florida Department of Education, 2008). This statewide consolidation of efforts within an RtI model was brought together as the Florida Problem Solving/RtI State Pilot Project (Batsche, Curtis, Dorman, Castillo, & Porter, 2007). In an attempt to address the disparities across various efforts at RtI as discussed above, the PS/RtI Project has two purposes: first, to provide professional development and design assistance to any and all districts implementing the PS/RtI model in Florida; second, to provide training and school-based coaching personnel to pilot districts for the purpose of conducting program evaluation to inform scaling up of the RtI model across the state of Florida (Batsche, Curtis, Dorman, Castillo, & Porter, 2007). This initiative involves 40 pilot and 32 comparison schools across eight districts, each of which was awarded a mini-grant to follow a building-based plan for implementation of RtI with Project support, thereby assisting in evaluating the impact of the PS/RtI model's implementation. This process of evaluation is facilitated through the services of PS/RtI coaches, who are trained in use of the PS/RtI model and provide site-based training and direct support to their schools. As mentioned earlier, one purpose of the PS/RtI Pilot Project is to evaluate the overall implementation effort, which is accomplished through examination of outcome measures such as daily school practices, school-based team member beliefs and perceptions of PS/RtI skills, and direct observation of how each school uses the PS/RtI process on a daily basis (Batsche, Curtis, Dorman, Castillo, & Porter, 2007). In this fashion, a determination can be made as to

PAGE 17

whether or not, and to what extent, the implementation effort is successfully changing the manner in which Florida schools support their students.

Overview of Systems Change Process

An examination of change or reform efforts in America's educational system yields numerous movements that sought to effect large- and small-scale improvements in public schools (e.g., Crum & Hellman, 2009; Noddings, 2007). A closer look at these successful and unsuccessful attempts at change yields an interesting pattern. Many proposed changes, which appear effective during the planning and initial stages of implementation, are redacted or hamstrung by unforeseen problems with legislation or necessary supports (Crum & Hellman, 2009). Historically, two methods have been used to implement policy changes at the school district/county and individual building levels: top-down and bottom-up (Hsia & Beyer, 2000). The top-down approach refers to instances when legislative changes increase the demands upon district and school administrators, who in turn increase demands upon the individual teachers and staff. The bottom-up method is characterized by educators sharing newly learned behavioral or classroom management techniques with their colleagues, in an effort to change the way in which their school and/or district schools students. These two methods reflect the manner in which we, as a society in general, and as educational policy makers specifically, conceptualize our educational system. Paralleling our societal level of technological sophistication, views of what constitutes systemic change have shifted slowly from straight and direct linear change (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005), through increasingly complex

PAGE 18

flowcharts and fields of force (Lewin, 1943), to the current conception of our schools as true systems that transact - that is, bi-directionally interact - with countless other complex systems (Bronfenbrenner, 1977, 1992; Christenson, Abery, & Weinberg, 1986). While these ideas are of interest from a standpoint of explaining how we have moved from "there" to "here," this progression of increasingly complex conceptualizations is also important because the manner in which a system is viewed is integral to an understanding of how to effect change in that system (Christenson & Anderson, 2002). It is in the transactional nature of Bronfenbrenner's models that the interplay between the various educational entities can be considered for the first time, for it is here that the educational process is truly seen as a system - a complex, dynamic, and interconnected network of people, processes, materials, and non-tangible resources that continuously impact, and are impacted by, each other (Christenson & Anderson, 2002). In considering the best practices for implementing systems change in education, the accepted methodology (Curtis & Stollar, 2002) consists of a series of large-scale and complex stages: Consensus, Infrastructure, and Implementation. Consensus is generally considered to consist of a shared belief or set of beliefs, a common vision, and an understanding of what must be done in order to implement a proposed change (Batsche, Elliott, Graden, et al., 2005; Lau, Sieler, Muyskens, Canter, VanKeuren, & Marston, 2006). Infrastructure includes all of the support systems (e.g., policies, procedures, training, data collection and analysis methods, communication methods, universal and small-group intervention systems, established decision-making criteria) necessary to allow change to occur, be maintained, and flourish (Graden, Stollar, & Poth, 2007). Implementation is the initiation of the proposed change, which is put into place by

PAGE 19

personnel using the designed infrastructure, in an attempt to attain the common goal agreed upon during the initial building and maintenance of consensus. It is vital to understand that while Implementation depends upon Infrastructure, which depends upon Consensus, the systems change method discussed above is not linear in nature. Each stage of the process depends upon the other two stages. For example, if the necessary infrastructure cannot be achieved, and the implementation is therefore more difficult than anticipated, the level of consensus originally achieved may decline as a result of these combined effects. Consensus building is a process that is embedded in the Infrastructure and Implementation stages. This point is of particular importance, and represents a problem which must be addressed (Fullan, 1997): if the relationship between these stages is not linear, how do they impact each other?

Conflicts between Educator Self-Efficacy and Large-Scale Change

According to Sarason (1990), meaningful educational reform has failed because legislators and administrators have historically ignored the history and social networks surrounding their schools. This restriction of focus led to a rapid-fire launching of educational reform initiatives, with little to no investigation of the problem being addressed, or of the supports and services necessary to maximize effectiveness of the initiatives. The failure of an initiative resulted in the immediate implementation of another, without examination of why the previous reform(s) did not produce the desired results. As a result, teachers learned that if they simply go about business as usual, a new initiative will come along. Interestingly, Sarason has also demonstrated (1982) that teachers do not routinely implement new practices that require more than a few skills that

PAGE 20

are outside of their existing skill set - a phenomenon Sarason refers to as "behavioral regularity." This is somewhat discouraging, when one realizes that the implementation of RtI requires a significant paradigm shift from the traditional model. Specifically, educators must administer assessments and link data to evidence-based interventions within the general education environment, they must learn to make data-based decisions about intervention implementations, and services must improve student performance, instead of being based upon what educators think are best for the students. It is important to understand that these skills are different from those required by the traditional model (e.g., process focus) and thus represent tasks outside of the existing behavioral regularity for most educators. As a result, research indicates that many teachers do not implement planned interventions with integrity (Noell et al., 2005; Sarason, 1982). While perfect fidelity of implementation is likely impossible (and not necessary), these findings have led to questions about expectations for teachers (Noell et al., 2005), and about the impact of poor integrity upon student outcomes (Burns, Appleton, & Stehouwer, 2005). However, research into educator training (Showers, Joyce, & Bennett, 1987; Zins & Ponti, 1996) indicates that the observed level of treatment fidelity can be improved through the use of evidence-based professional development (PD) procedures. Specifically, effective PD consists of four stages: theory, demonstration, opportunities to practice, and immediate corrective feedback (Showers, Joyce, & Bennett, 1987). It is intriguing to note research into problem-solving procedure implementation that has consistently found a relationship between effective PD practices and increased use of problem-solving methods (Curtis & Metz, 1986; Zins & Ponti, 1996).

PAGE 21

The importance of PD can be seen in research findings indicating that two conditions must exist before educators will embrace new ideas: they must understand the need for the idea, and they must perceive that they either have the skills to implement the idea or have the support to develop those skills (Joyce & Showers, 1988, 1995). While the more reliable predictor of self-efficacy in student teachers has been found to be the perceived level of guidance their cooperating teacher offers during training (Hamman, Olivarez, Lesley, Button, Chan, Griffith, & Elliott, 2006), more experienced teachers have been found to link their perceived level of self-efficacy with the level of acceptance and/or flexibility of key administrators within their schools (Ashton & Webb, 1986). It is therefore easy to see that the role of mentoring is critical to building new teacher self-efficacy: teachers who believe in themselves and their abilities to teach also believe in their students' abilities to learn (Yost, 2002). Mentoring utilizes the four pathways to self-efficacy: mastery experiences, physiological and emotional states, vicarious experiences, and social persuasions (Bandura, 1986, 1997). Research findings indicate that special education teachers are generally considered to be experts in individualized instruction strategies and assessments, even though general education teachers are far more likely to communicate with parents and students (Leyser, 2002). This observation resonates with the finding that there is a strong relationship between self-efficacy and classroom instructional behaviors, as well as that self-efficacy is routinely significantly higher for special education than for general education teachers (Leyser, 2002). Another important consideration is the finding that secondary teachers exhibit significantly lower ratings of self-efficacy with respect to classroom management for challenging student behavior when compared to their primary

PAGE 22

school counterparts (Baker, 2005). This information directly pertains to the relevance of standards for teacher preparation, as well as for considering the target audience of professional development efforts. Teacher viewpoints with respect to policy change implementation efforts focus on four factors: purpose of change, clarity of relevant implementation methods, effort required of teachers, and rewards (Janney, Snell, Beers, & Raynes, 1995). Further, any increase in knowledge and skills involving implementation of change occurs concomitantly with alterations in attitudes and behaviors of general education teachers, from hesitation/resistance to cooperation/support (Janney et al., 1995). Research supports the idea that when consensus is sought, achieved, and actively maintained, when the necessary infrastructure is determined, designed, and put into place, when implementation is performed with integrity, and when implementation is monitored with an eye toward modifications that are (or become) necessary, then meaningful and lasting change can be achieved in a number of complex systems (Curtis & Stollar, 2002; Fullan, 1997). However, there are a number of questions that have not been investigated extensively; specifically, what does it look like when consensus has been achieved? What is the effect on infrastructure design and initiation when consensus is low, high, or erratic? It is this line of inquiry which, when slightly narrowed in scope, leads to the questions to be addressed in this study.

Rationale for the Study

Investigating the relationship between consensus and infrastructure on levels of implementation is of value from the standpoint of adding to the systems change and professional development knowledge bases; however, this becomes critically important

PAGE 23

when viewed through the lens of a shift in educational focus, from process improvement to improving student outcomes. In trying to bring about change at a statewide level, failure to obtain an adequate level of consensus could be disastrous to the implementation process. Similarly, failing to understand changes in infrastructure as a result of low or erratic levels of consensus could greatly increase the complexity and level of resources required to successfully implement systems change efforts. As such, there is a clear need for research exploring the nature of this relationship between consensus and infrastructure.

Purpose of the Study

The purpose of this study was to investigate the relationship between educator beliefs and educator perceptions of competency on the implementation of response to intervention. Scaling up any reform initiative begins with the development of Consensus about the initiative. Joyce and Showers (1995) demonstrated the importance that the perceived need for the initiative, and the perceptions of teachers regarding their skills or the support available to attain those skills, have on the success of scale-up. Specifically, this study sought to answer the following research questions:

1. What is the relationship between beliefs about a training objective and the self-rated perception of skills and frequency of observed practices associated with that objective?

2. What are the effects of specific skills training on the relationship between self-reported beliefs and associated perception of skills and frequency of observed practices?

PAGE 24

3. What is the relationship between initial (pre-training) and time two (post-training) measures of self-reported beliefs and perceived skills related to data usage, and of self-reported beliefs, perceived skills, and observed practices related to academic instruction?
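Read together, these questions call for bivariate correlations among survey factor scores, comparisons of those correlations across training conditions, and multiple regression of time-two scores on time-one measures (the analyses behind the ZPF and MRA tables listed in the front matter). The Python fragment below is a minimal, editorial sketch using simulated data; the variable names and all numbers are hypothetical, and the simplified independent-samples Fisher z stands in for the dependent-correlation tests the thesis actually reports. It is not the study's analysis code.

```python
# Hypothetical sketch: simulated stand-ins for the Beliefs, Perceptions of
# Skills, and Perceptions of Practices factor scores.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 120  # illustrative number of educator respondents

df = pd.DataFrame({
    "beliefs_t1":   rng.normal(3.5, 0.6, n),
    "skills_t1":    rng.normal(3.0, 0.7, n),
    "practices_t1": rng.normal(2.8, 0.8, n),
})
# Fabricate a time-two skills score loosely related to the time-one measures
df["skills_t2"] = (0.5 * df["skills_t1"] + 0.2 * df["beliefs_t1"]
                   + rng.normal(0, 0.5, n))

# Question 1: bivariate relationship between beliefs and perceived skills
r, p = stats.pearsonr(df["beliefs_t1"], df["skills_t1"])
print(f"beliefs-skills: r = {r:.2f}, p = {p:.3f}")

# Question 2 (simplified): compare a correlation across two groups with a
# Fisher z test; the thesis tables report ZPF values, a related statistic.
r1, n1 = 0.30, 60   # e.g., demonstration schools (illustrative)
r2, n2 = 0.35, 60   # e.g., comparison schools (illustrative)
z = (np.arctanh(r1) - np.arctanh(r2)) / np.sqrt(1/(n1 - 3) + 1/(n2 - 3))
print(f"difference in correlations: z = {z:.2f}")

# Question 3: time-one measures predicting time-two skills (multiple regression)
X = np.column_stack([np.ones(n), df["beliefs_t1"], df["skills_t1"]])
coef, *_ = np.linalg.lstsq(X, df["skills_t2"].to_numpy(), rcond=None)
print("intercept, b_beliefs, b_skills:", np.round(coef, 2))
```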

PAGE 25

Chapter 2: Literature Review

The purpose of this chapter is to provide the reader with a concise overview of the extant research in areas germane to the focus of this study. Included are an examination of modern educational reform legislation, an explanation of the framework and purposes of Response to Intervention, a comparison of student outcomes between traditional and RtI models of school-based instruction and intervention delivery, a brief history of systems change paradigms, and a model for conceptualizing educator self-efficacy.

Legislation

Among the many examples of federal legislation that impact our educational system are the No Child Left Behind Act (NCLB, 2002) and the Individuals with Disabilities Education Improvement Act (IDEIA, 2004). NCLB is intended to produce accountability within our school system, at the level of the county or district, the school administration, the classroom teacher, and the individual student. This intention is pursued via the use of evidence-based practices in classroom instruction and student assessment, strict adherence to state-approved benchmarks for student academic progress expectations, and disaggregation of school-level data when considering the status and progress of each school toward meeting adequate yearly progress (AYP). While this intention is sweeping in nature, there are several distinct points of impact within the school itself. For example, the mandatory disaggregation of data has

PAGE 26

resulted in a reprioritization of what data are necessary and valued when assessing the status of a school. Additionally, the use of evidence-based practices and state-approved benchmarks has brought about a situation where the only discrepancy that matters with regard to individual students is that of their performance compared to the standards/benchmarks; further, the indiscriminate labeling of a child with disabilities has become secondary to tracking that child's progress toward meeting the academic benchmarks. Perhaps the most important point made clear through NCLB has been the urgent need by schools for additional services and supports at the level of the whole class and small groups, which is in stark contrast to the view espoused within traditional (individual student focus) special education eligibility determination methods. In a similar fashion, IDEIA has demanded wholesale change in the manner by which student disabilities are conceptualized. From a strict insistence on using effective, evidence-based instruction methods in general education classrooms, through requirements for the use of assessments aimed at prevention rather than placement determination, to strong discouragement regarding the reliance upon so-called "processing" measures in the assessment of students experiencing academic or behavioral difficulties, IDEIA has forced educators, administrators and parents to reconsider what does and does not meet the definition of disability in schools, communities, and homes. As with NCLB, IDEIA has had multiple large-scale impacts within the schools, such as a shift in focus regarding the data considered on a day-to-day basis when evaluating student performance, and an increased need for services related to the development, implementation, and integrity of interventions at the level of the classroom and small groups (Porter, Batsche, Curtis, Castillo, & Witte, 2006). What is more

PAGE 27

staggering than the changes brought about by the passage of NCLB and IDEIA is the knowledge that there are still more changes being written to these pieces of legislation, which will further alter the environment and daily routine within the schools. One of the most far-reaching of these proposed changes has to do with the calculation of AYP, which is currently based upon the percentage of students attaining proficiency within academic areas. The method proposed would have schools (and the state) use these disaggregated student growth rates - defined as the proportion of actual progress compared to standard progress - as an equivalent to proficiency when calculating AYP. In this fashion, the response of a student to an intervention becomes of equal importance when compared to actual proficiency, as the response represents the progress of that student toward closing the gap between their academic performance and that required by the state-approved benchmarks. While these changes are large in scale and demanding of the scant available resources, the support of state and local agencies is strong and ever increasing. In the foreword to Florida's Department of Education Response to Instruction/Intervention Implementation Plan (FLDOE, 2008), the Commissioner of Education stated the following: "It is the responsibility of every educator, organization, and parent to actively engage in collaborative efforts to meet Florida's goals. In the unified effort, all schools in Florida should ensure evidence-based practices, instructionally relevant assessments, systematic problem solving to meet all students' needs, data-based decision making, effective professional development, supportive leadership, and meaningful family involvement. These are the foundation principles of a Response to Instruction/Intervention (RtI) system

PAGE 28

which provides us the framework to elevate the efficacy of our statewide improvement efforts." When considered separately, NCLB and IDEIA are interesting and somewhat alarming documents that concern many educators and parents. Considered together, these two pieces of legislation are a manifestation of philosophical and epistemological change, on a scale beyond that of any large-scale educational policy effort in recent history. The impact of this change has been immediate and severe, while the long-term outcomes within our school system are not yet fully understood. It is therefore understandable that change on such a large scale must be approached and implemented with great care and planning, using the lessons learned from previous attempts at systems change.

Introduction to RtI

As a result of a shift in schooling focus, from process improvement to accountability for student outcomes, requirements such as NCLB (2002) and IDEIA (2004) have been put into place. Additionally, federal- and state-level funding has been provided to facilitate the curricular and assessment changes necessary to meet the goal of outcome accountability for all students, regardless of disability. While the motivation and the goal have been made quite clear, the matter of how to get from here to there is another matter entirely. Amongst the multiple examples of research-based possibilities for reaching this goal, the use of a problem-solving method incorporating response to intervention (RtI) is one of the most commonly discussed. Batsche and colleagues (2005) state that the PS/RtI model uses assessment for two fundamental purposes, the first of which is to facilitate the development and

PAGE 29

implementation of evidence-based interventions within the general education environment. Secondly, this model provides a means of determining the extent to which students respond to these interventions through continuous progress monitoring. By virtue of the fact that the problem-solving method uses this progress monitoring data to drive decisions about what skills to target, and how to intervene, it can be stated that student response to intervention is used to determine the effectiveness of interventions. As shown in Figure 1 below, the typical PS/RtI model is a cyclical progression through four major stages: problem identification, problem analysis, plan development and implementation, and program evaluation/response to intervention (Bergan & Kratochwill, 1990).

Figure 1. Problem Solving Method/Response to Intervention Model (a cycle: Step 1, Problem Identification; Step 2, Problem Analysis; Step 3, Intervention Design; Step 4, Response to Intervention)

In one conceptualization of this model (Batsche et al., 2005), it is within the problem identification phase that four distinct yet related actions are taken. First, the desired (or replacement) behavior is identified and defined in concrete and measurable terms, in order to better facilitate accurate and effective intervention efforts. Next, a variety of authentic assessments are conducted to determine the current level of student performance, the current level of peer performance, and the expected level of

PAGE 30

performance, or benchmark. The last facet of problem identification involves conducting a gap analysis to determine the distance between the target student and benchmark, the peers and benchmark, and the student and their peers. A key point in problem identification is the sole use of this gap analysis data to reveal the intervention and analysis point at the whole-class tier, within small groups, or on an individual basis. Within the problem analysis phase of the model, the thrust of all activity is to definitively state the alterable factors contributing to the student's failure to achieve the expected level of performance. This manifests as the generation of hypotheses across six domains of contextual impact; namely, those factors associated with quality of instruction, level and quality of curricular materials, the classroom environment, the learner (target student), the target student's classroom peers, or the student's home and family environment. Once hypotheses are identified, data collection is used to validate or refute these hypotheses, and only those hypotheses supported by collected data are considered for intervention. The third stage of the PS/RtI model is that of plan implementation, where interventions are designed and implemented with the primary aim of overcoming the identified barriers hindering the student in performing the previously defined replacement behavior. As stated above, the gap analysis data are used to determine whether the selected intervention will be targeted at the individual, group, or whole-class level; however, regardless of the hierarchical location, any intervention to be used must be directly tied to the hypothesized cause of the problem, as well as empirically supported for use in addressing such cause.
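As a concrete illustration of the gap analysis step described above, the sketch below uses hypothetical numbers (framed as oral reading fluency, words correct per minute); the 1.5x threshold is an illustrative decision rule, not a value taken from the Project.

```python
# Minimal sketch of a gap analysis; all values are hypothetical.
benchmark = 90        # expected level of performance
peer_median = 85      # current level of peer performance
student_score = 45    # target student's current level of performance

student_gap = benchmark / student_score   # student vs. benchmark
peer_gap = benchmark / peer_median        # peers vs. benchmark

print(f"student gap: {student_gap:.1f}x, peer gap: {peer_gap:.1f}x")
if peer_gap > 1.5:
    # peers are also far below benchmark: the problem looks classwide
    print("consider intervention and analysis at the whole-class tier")
elif student_gap > 1.5:
    # the gap is specific to the student relative to peers and benchmark
    print("consider small-group or individual analysis")
else:
    print("no substantial gap identified")
```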

PAGE 31

The fourth stage in the PS/RtI cycle is program evaluation, where the effectiveness of the implemented plan is tracked and analyzed via the student's measured progress in response to intervention. It is here that progress monitoring is conducted, using frequently administered measures that are sensitive to small changes in the student's behavior, in order to formatively evaluate the student's progress, both with respect to the previously identified performance gap and with regard to the student's rate of growth. Those interventions that result in a narrowing of the gap are maintained (or increased in intensity), while those interventions displaying a low, zero, or negative rate of growth are modified or discontinued. The secondary use of the PS/RtI model is to facilitate school resource management, through the use of a tiered (or differentiated) system of academic and behavioral intervention delivery (Batsche et al., 2005), as shown in Figure 2 below. This conceptualization begins at the universal, or Tier I, level, which focuses upon the whole class, all students, or the school as a whole, depending upon the nature and scope of the identified problem. Whether discussing academic or behavioral problems, a very similar process is followed when progressing through the tiers of the model. Within academics, Tier I interventions are characterized by the use of periodic screening assessments, used both to determine the impact of schoolwide instruction and to identify those students who display poor response to academic intervention. Similarly, when discussing Tier I behavioral interventions, the barometer of choice is typically office discipline referrals (ODRs), which measure the impact of the schoolwide behavioral management program, while also revealing those students displaying a poor response to behavioral intervention. When non-responsive students are revealed through

PAGE 32

academic or behavioral screeners, or via teacher or staff referral, there are two fundamental questions used to determine how next to proceed.

Figure 2. Tri-tiered academic and behavioral intervention delivery model

The first question pertains to the adequacy of the classroom environment; specifically, are 80% or more of students achieving the targeted benchmark? The issue addressed here is whether the student represents an outlier in the class, or is simply an indicator of the average student's performance. The second question deals with the amount of exposure to adequate instruction that the student has experienced. Here, the thrust of the question is whether or not an excessive number of absences is a contributing factor to the student's difficulty in class. If either of these questions reveals a possible problem, then interventions at Tier I are indicated, which could take the form of curricular modifications to improve the level of classroom instruction, and/or increasing the level of parental involvement to address excessive absences from class. If, on the other hand, neither question reveals such a problem, consideration is indicated for Tier II, or supplemental, interventions.
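A minimal sketch of these two screening questions as a decision rule follows. The 80% criterion comes from the text above; the absence threshold and the sample scores are illustrative assumptions.

```python
# Hypothetical sketch of the two Tier I screening questions.
def tier_one_check(class_scores, benchmark, student_absences, max_absences=15):
    """Return (core_ok, exposure_ok); True means the question raises no concern."""
    pct_at_benchmark = sum(s >= benchmark for s in class_scores) / len(class_scores)
    core_ok = pct_at_benchmark >= 0.80              # Q1: are 80%+ of students at benchmark?
    exposure_ok = student_absences <= max_absences  # Q2: adequate exposure to instruction?
    return core_ok, exposure_ok

core_ok, exposure_ok = tier_one_check(
    class_scores=[92, 88, 75, 95, 83, 60, 91, 89, 94, 79],  # hypothetical class
    benchmark=80,
    student_absences=4,
)
if core_ok and exposure_ok:
    print("Neither question flags a problem -> consider Tier II (supplemental) supports")
else:
    print("A Tier I issue is indicated -> address core instruction and/or attendance first")
```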

PAGE 33

At the supplemental level of intervention, there are three conditions being addressed, all of which are in addition to core instruction. That is to say, Tier II interventions are not a replacement for classroom instruction, but instead are meant to augment such instruction. The first condition common to supplemental interventions is an increase in exposure to instruction, which typically manifests as small-group instruction in multiple settings. The second condition is an increase in the intensity of instruction, normally understood as narrowing the focus to encompass a smaller number of skills than that addressed at the whole-class level of instruction. The third and final piece endemic to supplemental intervention within the PS/RtI model is an increase in the frequency of progress monitoring. Instead of the Tier I periodic benchmark testing, which typically occurs three to four times per year, the periodicity of assessment increases to occur on a monthly basis, which helps to determine the effectiveness of the more intense intervention. If the student displays a positive response to supplemental interventions, which is operationalized as a sufficient increase in growth rate to narrow the performance gap, then they will either continue Tier II or receive only Tier I instruction. If, however, the student exhibits a poor response to supplemental interventions, as defined by a low, zero, or negative growth rate, the student will begin to receive individualized and intensive, or Tier III, interventions. At this point, the school's intervention and assessment team will select those interventions deemed appropriate based upon data collected throughout the entire assessment process to date. These interventions include increased instruction time (occurring in addition to core and

PAGE 34

supplementary time), individualized materials, and the presence of additional instructional personnel as necessary. It is important to note that the frequency of progress monitoring at Tier III again increases, typically to a weekly or bi-weekly basis. If the student displays a positive response to these individualized and intensive interventions, the interventions are maintained or faded to the supplemental level. If the student displays a poor response to Tier III interventions, an important decision must be made with respect to student interventions. In essence, the question becomes whether to continue using Tier III resources to target the replacement behavior or skill, or to instead use those resources (e.g., the large amounts of time, additional personnel, and specialized materials) to work on other skills which, while lower in priority than the initially targeted behavior, are nevertheless important to school and social functioning. A point of particular interest among educators and parents is the question of eligibility for special education services within the PS/RtI model of service delivery. Stepping through the model as described above, it becomes apparent that this question relies upon two separate but related questions at the Tier III level. The first question involves the amount and nature of those resources required to attain a positive student response to intervention; specifically, do these resources exceed what is reasonably accessible within the general education classroom? If so, then the second question investigates the impact of fading such atypical resources on the student's response to intervention. Stated plainly, are these gains in student growth, which are experienced when providing an abundance of resources not normally available in the classroom, maintained when the auxiliary resources are removed? In the PS/RtI model, if such

A point of particular interest among educators and parents is the question of eligibility for special education services within the PS/RtI model of service delivery. Stepping through the model as described above, it becomes apparent that this question relies upon two separate but related questions at the Tier III level. The first question involves the amount and nature of the resources required to attain a positive student response to intervention; specifically, do these resources exceed what is reasonably accessible within the general education classroom? If so, then the second question investigates the impact of fading such atypical resources on the student's response to intervention. Stated plainly, are the gains in student growth, experienced when providing an abundance of resources not normally available in the classroom, maintained when the auxiliary resources are removed? In the PS/RtI model, if such surplus resources are required to achieve and maintain a positive response to intervention, then a student may be found eligible for special education services.

To summarize, this conceptualization of the PS/RtI model plays three vital roles in using available funds to meet mandated requirements within NCLB and IDEIA. First, the model provides an algorithm to drive decisions of student service delivery, via evidence-based interventions linked directly to skill or behavioral deficits. Second, PS/RtI uses a tiered framework of assessment and evaluation to more effectively prioritize allocated funds, by tackling systemic problems within the universal tier while reserving increased resources for the more selective, severe, and/or intense problems within a supplemental or individualized setting. Last, the model requires that the determination of eligibility be based solely upon the continuing necessity of resources and/or services beyond those available in the general education setting.

Comparison of Outcomes in the Traditional and RtI Models

The Traditional Model. Most studies looking at traditional-model outcomes have investigated student academic achievement (Lee, Grigg, & Dion, 2007; Lee, Grigg, & Donahue, 2007), behavioral and socio-emotional student outcomes (Hoagwood & Johnson, 2003), referral and eligibility rates for special education services (Forness, 1981; Heller, Holtzman, & Messick, 1982), or gender and/or ethnicity biases (e.g., disproportionality) across all of the previous items (Donovan & Cross, 2002; Lee, Grigg, & Dion, 2007). The National Assessment of Educational Progress (NAEP) in reading and math was administered in 2007 to a nationally representative sample of fourth- and eighth-grade students (Lee, Grigg, & Dion, 2007; Lee, Grigg, & Donahue, 2007).

Results from the NAEP reading section indicated that approximately 34% of fourth graders and 27% of eighth graders performed below Basic in terms of reading skills (Lee, Grigg, & Dion, 2007). The results also demonstrated disproportional representation among students who performed below Basic (see Tables 1 and 2 below) for both fourth- and eighth-grade students. Of particular interest in any discussion of outcomes is the indication that no meaningful change in student reading achievement scores was evident between current and prior administrations of the NAEP, as evidenced by the approximately 38% of fourth graders and 26-29% of eighth graders performing below Basic in reading skills during the 2002, 2003, and 2005 administrations of the NAEP (Lee, Grigg, & Dion, 2007).

Table 1
Categorical proportionality of fourth-grade students below Basic in reading

  Student Category                    Percentage scoring below Basic
  Gender
    Female                            31
    Male                              38
  Race
    Caucasian                         23
    Black                             54
    Hispanic                          51
  Free/reduced lunch status
    Eligible                          50
    Not eligible                      21
  Disability status
    Students with disabilities        64
    Students without disabilities     31
  ELL status
    ELL students                      70
    Non-ELL students                  31

Table 2
Categorical proportionality of eighth-grade students below Basic in reading

  Student Category                    Percentage scoring below Basic
  Gender
    Female                            23
    Male                              32
  Race
    Caucasian                         17
    Black                             46
    Hispanic                          43
  Free/reduced lunch status
    Eligible                          42
    Not eligible                      18
  Disability status
    Students with disabilities        66
    Students without disabilities     24
  ELL status
    ELL students                      71
    Non-ELL students                  25

The NAEP mathematics section investigated student performance across the domains of content and item complexity (Lee, Grigg, & Donahue, 2007). Results from the 2007 NAEP mathematics section indicated that approximately 19% of fourth graders and 30% of eighth graders performed below Basic in terms of mathematics skills. As was previously discussed in reading, there was clear evidence of disproportional representation in the 2007 NAEP administration, for both fourth-grade (see Table 3, below) and eighth-grade (see Table 4, below) students. Of particular interest is the contrast with the aforementioned reading achievement scores, in that change in student mathematics achievement scores was observed between current and prior administrations of the NAEP, as shown in Table 5, below.

Table 3
Categorical proportionality of fourth-grade students below Basic in math

  Student Category                    Percentage scoring below Basic
  Gender
    Female                            19
    Male                              18
  Race
    Caucasian                          9
    Black                             37
    Hispanic                          31
  Free/reduced lunch status
    Eligible                          30
    Not eligible                       9
  Disability status
    Students with disabilities        40
    Students without disabilities     16
  ELL status
    ELL students                      44
    Non-ELL students                  16

While disproportionality was common to both reading and math, it is interesting to note that there was some narrowing of the White-Black gap in mathematics achievement scores across NAEP administrations (e.g., 2002, 2003, and 2005), although all other rates of disproportionality remained relatively constant (Lee, Grigg, & Donahue, 2007). Overall, the data from the NAEP suggest that a significant proportion of students are not attaining basic reading and math skills. In addition, disproportional numbers of those students not attaining basic skills are from traditionally disadvantaged subgroups. Although some improvement across NAEP administrations in the math achievement of aggregated and disaggregated students was evident, significant achievement gaps remain among the aforementioned disaggregated subgroups.

Table 4
Categorical proportionality of eighth-grade students below Basic in math

  Student Category                    Percentage scoring below Basic
  Gender
    Female                            30
    Male                              29
  Race
    Caucasian                         19
    Black                             53
    Hispanic                          46
  Free/reduced lunch status
    Eligible                          45
    Not eligible                      19
  Disability status
    Students with disabilities        67
    Students without disabilities     26
  ELL status
    ELL students                      70
    Non-ELL students                  27

Table 5
Student achievement trends across NAEP mathematics administrations
(percentage of students scoring below Basic)

  NAEP Year                2000   2003   2005   2007
  Fourth-grade students      36     24     21     19
  Eighth-grade students      38     33     32     30

As an example of assessing behavioral and socio-emotional outcomes of children, Hoagwood and Johnson (2003) reviewed multiple epidemiological studies that examined the prevalence of mental health problems. Across the reviewed studies, psychological disorders were found in approximately 16-22% of children up to the age of 18, one quarter to one half of whom could be classified as seriously emotionally disturbed. Additionally, severe psychiatric disorders were found in 4-8% of children ages 9-17.

However, only approximately 20% of children with serious mental health problems managed to obtain mental health services. Put another way, in a group of 100 school-aged children, about 19 would meet diagnostic criteria for a psychological disorder, although at most 5 of those 19 would receive mental health services.

The first nationwide examination of special education service efficacy was conducted in 1982 (Heller, Holtzman, & Messick), analyzing the findings of an investigative panel assembled to look into the overrepresentation of minority students in special education, specifically those identified as educably mentally retarded (EMR), as well as to determine possible causes for such a bias in service delivery. The study concluded that this disproportionality was systemic in nature (i.e., it occurred nationwide), and that it was likely due to inappropriate assessment and inadequate instruction within general education (Heller, Holtzman, & Messick, 1982).

In 2002, the PCESE generated a report with recommendations for improving academic outcomes for children and adolescents with disabilities. Of particular interest is that the panel included key personnel (e.g., parents and teachers) for the first time. The PCESE was critical of the traditional model, particularly with respect to the apparent disparity found between the purported aims of this model and its actual outcomes. Specifically, the PCESE broadcast nine findings, the most notable of which include: faulty prioritization of procedural compliance over student education; little to no emphasis on early intervention or prevention, leading to the "wait to fail" situation; invalid evaluation procedures, which lead to increased misidentification of students; an insufficient research base to support current educational practices, as well as a lack of access to those practices supported by research; and poor outcomes for those students identified and found eligible under the traditional model (e.g., failing to graduate from high school or to find full-time employment).

Based on the panel's findings, three major recommendations were offered in an effort to improve the outcomes of students identified with a disability. The first of these was using student data to drive eligibility decisions, rather than relying upon a blanket eligibility formula. Second, it was recommended that service delivery be focused upon prevention or early intervention, through the use of empirically supported practices. Last, funding for educational entities should be determined by considering the total expenditures for all students, instead of providing financial bonuses for the number of children in special education.

A meta-analysis of meta-analyses (in essence, a mega-analysis) on the effectiveness of special education was conducted in 2001, the results of which suggest that placement in special education, with the possible exception of those students receiving services for mental retardation, can actually be deleterious to student outcomes (Forness, 2001). Compounding this issue is the overrepresentation of ethnic minorities and female students receiving special education services (Donovan & Cross, 2002). The authors found that Black students were at greater risk of being found eligible for services under a label of mental retardation, learning disability, and/or emotional disturbance than Caucasian students.

The research to date on student outcomes suggests that the traditional model of service delivery is not effective for a large proportion of students, whether in general or special education settings.

Additionally, the data suggest that the traditional model is resulting in inequitable outcomes for students, as well as steadily increasing the risk of student failure, particularly with respect to ethnic minority students.

The PS/RtI Model. In a study examining disproportionality and special education placement within a PS/RtI model, Marston (2003) reported that the proportion of students identified with mental impairment and learning disabilities was cut in half from 1994 to 2001; similarly, there was also a reported decrease in the overrepresentation of African American students receiving special education services (Marston et al., 2003). Not surprisingly, it was concluded that student and systemic outcomes improved as a result of implementing the PS/RtI model in this school district.

In addressing the question of learning disability identification within a response to intervention framework, a number of important studies focus upon various aspects of this question. Case, Speece, and Molloy (2003) investigated the validity of RtI while examining the discriminating ability of individual student differences and environmental factors. The study found that the RtI model was accurate in identifying students who require more intensive services, and that the children so identified were consistently different from other at-risk students on a number of individual differences (Case, Speece, & Molloy, 2003).

A similar study by Vaughn, Linan-Thompson, and Hickman (2003) sought to determine whether or not school districts can reliably use the RtI model to identify students with a learning disability. This study is an important example in that it set exit criteria for reading interventions, which were used as an independent variable within which to categorize resulting student outcomes. The results demonstrated that follow-on student academic performance was strongly related to the point at which exit criteria were met, indicating that RtI was effective in identifying students who need intensive intervention and support (Vaughn, Linan-Thompson, & Hickman, 2003). This underscores the utility and importance of progress monitoring in data-based decision making.

In an attempt to address the disproportionality of base rates across race, sex, and student achievement, VanDerHeyden and Witt (2005) directly compared RtI screening with teacher referral on the basis of accuracy, consistency, and disproportionality in eligibility determination. The findings were unequivocal in that teacher referral was less accurate, less consistent, and more disproportionate in comparison to RtI screening under all circumstances (VanDerHeyden & Witt, 2005).

Two complications identified within the ethnic minority population of students include the difficulty associated with differentiating issues with a student's ability to learn from issues of acquiring a new language, and the dearth of research into the development of interventions for English Language (EL) learners (Burns, Griffiths, Parson, Tilly, & VanDerHayden, 2007). Given that a major assumption of RtI is that all students can learn, a 2005 study by Healy, Vanderwood, and Edelston investigated the impact of phonological awareness interventions on at-risk EL learners; specifically, whether or not these students' RtI effectively identified those most in need. The results indicated that the use of student RtI to establish goals and track progress was effective in identifying those students who do not have a disability; further, the results support the idea that phonological awareness interventions delivered in English are beneficial for EL learners (Healy, Vanderwood, & Edelston, 2005).

Similarly, Gerber and colleagues (2004) examined the impact of intensive small-group direct instruction in Spanish upon EL learners who performed most poorly on measures of phonological processing skills. The research question of particular interest was whether or not these at-risk students could close the achievement gap between themselves and their typically performing peers. The results of this study indicate that supplemental interventions are effective for EL students, and that the intensive interventions were effective in supplementing the core curriculum, as evidenced by the significant closing of the performance gap on almost all measures (Gerber, Jimenez, Leafstedt, Villaruz, Richards, & English, 2004).

Examples of larger-scale PS/RtI implementation include an evaluation (Tilly, 2003) of the Iowa Heartland Early Literacy Project (HELP), a report (McGlinchey, Schallmo, & Goodman, 2006) of Michigan's statewide implementation project, and an examination (Stollar & Graden, 2006) of Ohio's statewide initiative. In all of these cases, the focus of efforts was to educate stakeholders in the use of problem-solving skills and data-based decision making. Although on a larger scale than Marston's (2003) study, the overall outcome data look remarkably similar, particularly with respect to issues of eligibility and disproportionality; in all cases, percentages of students found eligible for special education services noticeably decreased, as did the measured disproportionality rate for minority students.

From a meta-analytic standpoint, PS/RtI implementation efforts have been examined with a view toward systemic and student results (Burns, Appleton, & Stehouwer, 2005). Although variance was large in the studies reviewed, the overall mean and median effect sizes (1.27 and 1.02, respectively) suggest that these efforts have generally had clinically significant effects. For systemic outcomes of PS/RtI implementation, the mean and median effect sizes were 1.53 and 1.28, the strongest of all outcomes reported and appreciably larger than the mean and median effect sizes for student outcomes of implementation efforts (0.96 and 0.72, respectively; Burns et al., 2005).
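For context on the magnitudes reported here, the sketch below shows how such meta-analytic summary statistics are computed once per-study effect sizes are in hand. The per-study values are hypothetical placeholders, not the actual studies reviewed by Burns et al. (2005); by Cohen's conventional benchmarks, a d of 0.8 or above is considered a large effect, so mean effects above 1.0 indicate substantial change.

    # Sketch of meta-analytic aggregation; the per-study effect sizes below
    # are hypothetical, not the studies reviewed by Burns et al. (2005).
    from statistics import mean, median

    systemic_effects = [0.6, 1.1, 1.3, 1.9, 2.4]   # hypothetical study-level d values
    student_effects = [0.3, 0.7, 0.8, 1.2, 1.6]

    for label, effects in (("systemic", systemic_effects), ("student", student_effects)):
        print(f"{label}: mean d = {mean(effects):.2f}, median d = {median(effects):.2f}")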

To review, the body of research investigating the implementation of PS/RtI models in schools is steadily growing, and has examined impact points ranging from individual classrooms to statewide efforts. Regardless of the research unit, outcomes at the system and student levels are consistently positive.

Research on Systems Change

The educational system is a social system, consisting of a staggering collection of interconnected elements, from large-scale legislation to individual classrooms. This becomes of particular importance when considering the dynamic and transactional nature of the reciprocal influence within and among these elements (Bronfenbrenner, 1977). The unique aspect of any system (its fingerprint, if you will) is the manner in which that system responds to forces, whether internal or external (Curtis & Stollar, 2002). Those systems we consider to be effective are characterized by their increased capacity to assess, understand, and address these forces in a manner conducive to achieving systemic goals. If the purpose of any system change project is to make the system more effective, it then stands to reason that any such effort should be focused on improving that system's problem-solving ability.

Several factors necessary for successful systems change have been identified through numerous studies of successful (and unsuccessful) large-scale implementation projects (Fixsen et al., 2005). One common factor among the more successful (large-scale) projects is the use of evidence-based implementation strategies from the systems theory and systems change literature. According to the literature on best practices for methods of systems change (Curtis & Stollar, 2002; Fixsen et al., 2005), the most commonly accepted model consists of three distinct yet interrelated and interdependent stages: Consensus, Infrastructure, and Implementation.

Consensus. Consensus is generally considered to consist of a shared belief or set of beliefs, a common vision, and an understanding of what must be done in order to implement a proposed change. Educators will embrace new ideas when two conditions exist: they understand the need for the idea, and they perceive that they either have the skills to implement the idea or have the support to develop those skills (Joyce & Showers, 1988, 1995). Chapter four of the National Association of State Directors of Special Education (NASDSE) RtI policy manual (Batsche, Elliott, Graden, Grimes, Kovaleski, Prasse, et al., 2005) identifies the personnel critical to successful implementation, who include district-level leaders, building leaders, facilitators, teachers and student services personnel, parents, and students. Insofar as what constitutes a shared vision and a common set of beliefs, requirements can depend greatly upon the role that a given person plays in the educational process (Lau, Sieler, Muyskens, Canter, VanKeuren, & Marston, 2006).

For example, it is important for all persons to have a basic understanding of national, state, and district policies regarding RtI; the link between NCLB, IDEIA, adequate yearly progress (AYP), and RtI; the basic beliefs, knowledge, and skills that support implementation of RtI; the steps in the PSM and the multilevel RtI model; how eligibility is determined using RtI; and the fundamental utility of progress monitoring. In addition to this "everyman" level of understanding and belief, teachers and student services staff must have a clear understanding of the need for universal, supplemental, and intensive instructional strategies and interventions; the components of a successful professional development plan (PDP); the need for (and skills in) data-based decision making and the need to share outcome data frequently; the need to publicly recognize the relationship between staff efforts and student outcomes; and the need to involve and inform parents of the essential elements of RtI and their role in the process. The point to be understood here is that, in addition to the overwhelming number of issues and concepts that must be understood for effective implementation, there is a clear differentiation of necessary skills and beliefs according to the role of the individual, whether parent, teacher, or superintendent of schools (Sarason, 1982).

Research by Curtis and Stollar (2002) suggests that achieving consensus among key stakeholders regarding proposed innovations is a cornerstone of effective systems change, so much so that they suggest a commitment from at least 80% of the building stakeholders be obtained before proceeding with implementation. Knowing that this level of personnel commitment with respect to a proposed initiative is a key factor in determining the degree and success of implementation, gaining an understanding of the nature of educator beliefs becomes important, as does investigating how these beliefs change in response to training.

Sarason (1990) stated that a major reason for the pervasive failure of school reform initiatives lies in change agents' lack of understanding of how schools fit within the fabric of larger society. In mandated, or top-down, efforts, there is an expectation that school personnel will follow the legislated directives; however, the powerlessness experienced by school personnel as a result of this method of change typically results in a paucity of expended effort toward the very change desired. In this manner, the failure to seek consensus from school personnel acts to block or impede progress toward school change (Sarason, 1990). Of particular interest is research by Guskey (1986), which indicates that traditional staff training programs are grossly unsuccessful at changing the existing beliefs of teachers. However, Guskey also found that when teachers practice new skills, and when these skills result in improved student performance, changes in teacher attitudes often occur. As a result, Guskey stated that, for teachers, beliefs follow behavior.

Despite its purported importance in driving the success of implemented system change efforts, very few researchers have examined the role of stakeholder consensus and beliefs when evaluating the PS/RtI model. Batsche, Elliott, Schrag, and Tilly (2005) conducted two satisfaction evaluations, with the Iowa Heartland Area Education Agency 11 as well as with a model implemented in Illinois. While both evaluations indicated that principals, teachers, and parents experienced higher levels of satisfaction with the PS/RtI model than with the traditional model, neither evaluation addressed the attainment of educator consensus prior to implementation, the relationship between training topics and corresponding educator beliefs, or the impact of consensus level on implementation integrity.

Infrastructure. Infrastructure includes all of the support systems necessary for change to occur, be maintained, and flourish. These supports include (but are not limited to) policies, procedures, training, data collection and analysis methods, communication methods, universal and small-group intervention systems, and established decision-making criteria. A particularly important theme emerging from the literature is the idea that, while practicing skills will certainly increase their application, coaching and feedback provide the most significant leap in transfer of learning for the one being coached (Joyce & Showers, 1988, 1995). Specifically, the finding was that 80% of the teachers who had received coaching implemented new strategies, versus only 10% of the teachers who received instruction without follow-up coaching. This finding is generally interpreted as making the transfer of new learning to the participant's active repertoire the ultimate goal of any staff development effort.

It becomes logical here to ask what constitutes coaching. Joyce and Showers (1988) suggest that coaching is the combination of many activities, including the provision of professional development, consistent collaboration with staff, working to improve school instruction and decision making, supporting staff to develop the capacity of school and district facilities, and ensuring treatment fidelity. Stollar and colleagues (2008) define a coach as "a person internal or external to the school and/or district who provides leadership for implementing a three-tier model." Cameron (2005) has similarly defined the role of a coach as "[building] teacher capacity to implement effective instructional practices to improve student learning and performance." The consistent theme represented here is the idea of building capacity via the transfer of skills, which is something qualitatively different than the traditional view of training.

An important point to bear in mind is that the review of research conducted by Joyce and Showers (1988, 1995) was intended to investigate the effect of staff training on classroom practice. The finding of this review was that the traditional model of staff training has no effect on the classroom practice of the participants. Interestingly, many (if not most) facilities maintain the traditional staff training model, despite the findings that point to its lack of effectiveness. As a result, the recommended professional development session includes a theory element, in which a rationale is provided for the training itself and some background content knowledge is gained by the participants. A demonstration segment gives the participants the opportunity to see a practical application of the skill to be transferred, coupled with a practice session to allow the new skill to be tried out by the educators. It is important to remember that, during the practice segment, consistent and immediate corrective feedback is provided by the coaches, to ensure that poor or unintentional use of the skill set is not reinforced. Accordingly, the most important part of any professional development session is the presence of coaches, as evidenced by the professional development study (Joyce & Showers, 1988, 1995). This study showed that training consisting solely of a theory element resulted in skill transfer for approximately 10% of educators; the addition of demonstration increased this amount by approximately two to five percent. Similarly, adding practice and feedback increased the skill transfer result by two to three percent each. However, the addition of coaching to professional development sessions boosted this skill transfer result to approximately 95% of educators, making the point that, while staff training alone does not often change teaching, a coaching process that includes the use of demonstration usually does change teaching.

Implementation. Implementation is the initiation of the proposed change, which is put into place by personnel using the designed infrastructure, in an attempt to attain the common goal agreed upon during the initial building and maintenance of consensus (Curtis & Stollar, 2002). While traditional views of system change efforts focused solely upon the method of implementation, the literature has consistently shown that actual implementation cannot be accurately understood or evaluated without intensive investigation into the efforts and techniques used to build consensus and install infrastructure supports (Fullan, 1997; Senge, Kleiner, Roberts, Ross, & Smith, 1994).

Research on Educator Self-Efficacy and Change

In a study examining the construct of teachers' sense of efficacy, defined as the situation-specific expectation that teachers can help students to learn (Ashton & Webb, 1986), the construct was expanded into two independent dimensions: an objective sense of teaching efficacy (Bloom, 1981), and a subjective sense of personal teaching efficacy, which refers to a teacher's estimation of their own competence in teaching. Notably, the sense of personal teaching efficacy was observed to influence teachers' choices of instructional strategies and techniques for classroom and behavior management. Furthermore, any failure to teach a student that was attributed (by the teacher) to their personal teaching efficacy resulted in debilitating stress (Ashton & Webb, 1986). In comparing universal and personal (i.e., teaching and personal teaching) inefficacy, an interesting chain of beliefs, expectations, and resulting deficits was observed (Ashton & Webb, 1986). This chain is described below and displayed in Table 6.

Beginning with a low sense of teacher efficacy, it can be seen that the inability of the teacher to motivate the student may be perceived as either a personal failure, or a failure of the nature of teaching in general (i.e., a belief that teachers universally cannot motivate students). This perception is a corollary to a sense of helplessness, whether universal or personal, that generates negative expectations regarding future ability to motivate students. It is important to note that, to this point, regardless of whether the perceived sense of inefficacy is personal or universal, the expectations for future student motivation are identical. Similarly, the cognitive and motivational deficits resulting from these negative expectations (specifically, a difficulty in understanding that students can be motivated, and little or no effort to motivate students, respectively) appear identical across the universal and personal domains of perceived inefficacy. The key difference observed between these domains is the presence or absence of an affective deficit. When the expectations are tied to a sense of universal helplessness, there is no affective impact, as the teacher has no sense of responsibility regarding student motivation. However, when there is a sense of personal inefficacy, the teacher is comparing their own failure to motivate students against their belief that teachers should be able to motivate students; as a result, there is a sense of shame or guilt tied to this perceived discrepancy in personal teaching competence (Ashton & Webb, 1986).

Table 6
Comparison of universal and personal sense of teaching inefficacy

  Low Sense of Teacher Efficacy
                           Universal Sense of Inefficacy     Personal Sense of Inefficacy
                           (belief in the universal          (belief in one's personal
                           inability of teachers to          incompetence)
                           motivate students)
  Negative expectations    Universal helplessness            Personal helplessness
  due to:
  Cognitive deficit?       Yes: difficulty in learning       Yes: difficulty in learning
                           that teachers can motivate        that one can motivate
                           students                          students
  Motivational deficit?    Yes: little effort expended       Yes: little effort expended
                           in attempting to motivate         in attempting to motivate
                           students                          students
  Affective deficit?       No: little or no stress, as       Yes: stress, depression,
                           there is no sense of              guilt, and/or shame
                           responsibility for motivating
                           students

This subtle yet key difference between domains of teacher efficacy represents a pathway to understanding the importance of teacher development, as well as the utility of ecological analysis for investigating the environmental processes that promote this development (Bronfenbrenner, 1976). It is of note that there are four basic assumptions operating during ecological analysis, two of which are of particular importance in understanding the interplay between teacher development and systems change (Ashton & Webb, 1986). The first such assumption is that the observed behavior is a function of the individual's perception. This assumption leads logically to the observation that one must understand the relevant (though subjective) definition of the existing situation in order to understand the individual's behavior. The second assumption of importance is that the observed behavior is a function of the context within which individual interactions occur. Again, a logical conclusion of this assumption is that behavior is in large part dependent upon the environment in which it occurs.

In attempting to address the first of these assumptions, the analysis of subjective perceptions within teachers' sense of efficacy, there are many important findings in the research. Medley (1978) found that most studies claiming to examine teacher effectiveness did not include teacher goals or the perceptions teachers have of their environment. This resulted in a majority of studies evaluating teachers using effectiveness standards that may or may not have matched the standards teachers were using for their own judgment of effectiveness (Medley, 1978).

One example of this mismatch was Jackson's study (1968), which used student achievement scores as the sole outcome measure of teacher effectiveness, even though most teachers do not define their effectiveness in terms of such scores (Ashton & Webb, 1986). Interestingly, more recent research indicates that not only is this mismatch persistent (e.g., Ellett & Garland, 1987; Kyriakides & Creemers, 2008; Loup, Garland, Ellett, & Rugutt, 1997), it would also appear to be pervasive, in that administrators and other school personnel continue to use subjective measures (e.g., classroom observations, values-based rating scales) to determine teacher effectiveness (Kyriakides, 2005; Peterson, 1987; Stronge, Helm, & Tucker, 1995; Stronge & Ostrander, 1997). Another issue with failure to understand subjective perceptions is the difficulty of altering inappropriate behaviors when such behaviors are maintained by subjective beliefs that they are appropriate (Fenstermacher, 1979). Examples of such anomalous behaviors abound in the research on teachers' classroom control, from success expectancies of instructional interaction (Cooper, Burger, & Seymour, 1979) to teacher attribution of classroom management problems (Metz, 1978). These two studies in particular promote the idea that classroom context is a major factor in teachers' perceived self-efficacy.

The second assumption of ecological assessment, that behavior is a function of the context within which individual interactions take place, resonates with Bronfenbrenner's (1976) description of the educational environment. It is here that we find the critical framework for identifying the variables that impact teachers' sense of efficacy, within the nested nature of educational structures (Brim, 1975).

The first such structures are those which make up the microsystem, or the immediate setting for the teacher (typically the classroom), including student and teacher characteristics (e.g., Maccoby & Jacklin, 1974; Garrett, 1977), class size (e.g., Glass & Smith, 1979; Cahen, Filby, McCutcheon, & Kyle, 1983), activity structures (e.g., Bossert, 1979; Carew & Lightfoot, 1979), teacher ideology (e.g., Bernier, 1981; Mosenthal, 1984), and role definitions (e.g., Dreeben, 1973; Gehrke, 1981; Metz, 1978). The next layer of structures includes those which constitute Brim's (1975) mesosystem, the interrelations between the major settings of teachers. Examples of such mesosystems include school size and demographics (e.g., Anderson, 1968; Larkin, 1973), school norms (e.g., Leacock, 1969; Cohen, 1972), collegial relations (e.g., Holland, 1973; Super, 1970), principal-teacher relations (e.g., Leithwood & Montgomery, 1982), decision-making structures (e.g., Hornstein, Callahan, Fisch, & Benedict, 1968; Goodlad, 1975), and home-school relations (e.g., Laosa, 1982). The third stratum of educational structures is the exosystem: those social structures influencing the teacher's settings. These include legislative and judicial mandates (e.g., Sarason, 1982; Wise, 1979), as well as the nature of the school district itself (Bidwell & Kasarda, 1975; Gross & Herriott, 1965; Cichon & Koff, 1978; Kalis, 1980). The last, and most pervasive, layer of structures is the macrosystem (Brim, 1975), which comprises those cultural beliefs or ideologies which impact the thought and behavior of teachers, as well as the systems which directly impact teachers. These include conceptions of the learner (e.g., Weiner, 1980; Dweck, 1976) and popular conceptions of the role of education (e.g., Ashton & Webb, 1986; Jackson, 1968).

Within the overarching beliefs which make up the macrosystem, a key point related to teacher self-efficacy is to be found. If the basic assumption of our educational system is that the process of education results in success and advancement for those persons who are (1) motivated, and (2) able to make use of the opportunities offered, then the immediate and direct assumption when people fail is that the person lacks motivation, ability, or both (Ashton & Webb, 1986). The paramount point here is that failure is being defined as a fault within the student, meaning that the teacher plays no role in this process. If this is so, then the teacher has no reason to question or challenge their perceptions of efficacy or educational equity (Ashton & Webb, 1986; Jackson, 1968).

Summary

The research examined in this review has shown a need for a conceptual change within the educational system, as evidenced by the contemporary movement in legislation from a focus on streamlining the educational process to improving and supporting student outcomes. While recent research into the use of Response to Intervention (RtI) supports its validity and utility in promoting the desired outcomes in achievement for all students, RtI implementation is still in its infancy within our nation's schools. In trying to scale up implementation efforts, from individual schools and districts to coordinated and comprehensive initiatives at the state level, the extant research into systems change theory and processes indicates the importance of fostering consensus (e.g., shared beliefs and foundational knowledge) among the key stakeholders. Given the structure and nature of our educational framework, classroom teachers represent the primary and most visible interface between students and schools. As such, any efforts at building consensus for change in our educational system will hinge upon the ability of teachers to understand, accept, and enact a major conceptual change in their behavioral repertoire.

Examination of the research on teacher self-efficacy reveals a gap in the literature surrounding the nature of the relationship between educator beliefs regarding a necessary skill and their self-perceived competency with that skill. For instance, does believing a skill or trait to be valuable hinge upon a corresponding perception of competency in that skill? Conversely, does perception of competency in a particular skill relate to an assignation of value or worth to that skill? To follow this line of questioning, does any such relationship between beliefs and self-perceived competency change in nature depending upon the preexisting level of beliefs? In other words, does having a higher (or lower) level of belief in the value of a skill impact the effectiveness of professional development aimed at increasing competency in that skill?

Chapter 3: Methods

The purpose of this chapter is to provide a detailed description of the current study's methodology, including information about the participants, instruments used, data collection procedures and timelines, and data analysis procedures.

Participants

Demonstration Schools. A total of 40 schools within 8 school districts were selected, via a competitive application process, to participate in the PS/RtI Project during the 2007-08 school year. All of Florida's 67 school districts were invited to submit proposals for up to six schools to begin model implementation. District leadership personnel (e.g., Exceptional Student Education Directors, Superintendents) received the Requests for Proposals (RFPs) and attended one of three Bidder Conferences, each of which provided an overview of the Project's application submission requirements. Twelve of the eligible 67 school districts (18%) submitted applications to the Project, each of which was evaluated and scored independently by two or more reviewers from the Project Leadership Team. A standard evaluation rubric (see Appendix A) was used for scoring, which used 11 items to assess each application's conveyance of previous experience with similar initiatives, level of commitment with respect to the Project, willingness to collect and disseminate district- and school-level data, and commitment to providing the necessary resources and personnel.

Final decisions as to district participation were based upon the average application score across the independent reviewers, and the degree of match with other applicant districts on school and student demographics (see Table 7 below). A total of 40 demonstration schools were selected from the eight represented districts, with the number of demonstration schools within each district ranging from three to seven. School size and student demographics varied across districts, as shown in Table 8 below.

Table 7
School and student demographics for demonstration and comparison buildings

  District   Student      % Ethnic   % Female   % LEP   % Free/Reduced   % Primary
  (Code)     Population   Minority                      Lunch            Exceptionality
  10           6036        29.2        47.1      2.3        30.3             30.9
  13          10409        97.2        47.3     34.9        90.0             19.3
  44           5463        41.9        47.4     10.8         3.2             25.0
  51           9182        26.2        47.9      8.7        57.4             21.9
  52           8268        43.5        47.3     10.9        51.9             24.5
  53           4951        45.2        50.5      7.2        58.1             18.1
  55          10239        19.8        51.8      1.3        26.6             24.0
  66           3543        18.2        47.6      3.5        50.7             16.5

Comparison Schools. In order to provide a method of comparison for model implementation efforts, each district was required to propose a matched comparison school for each demonstration school proposed. This resulted in 36 comparison schools proposed by the eight districts, each of which was examined by the Project Leadership Team for degree of similarity between matched sets of schools. This similarity judgment was based upon visual analysis of data for school and student demographics, leadership philosophy, and participation in other state projects or initiatives.

Table 8
School and student demographics for selected demonstration schools
(averages across demonstration schools within each district)

  District   Avg School   % Ethnic   % Female   % LEP   % Free/Reduced   % Primary
  (Code)     Population   Minority                      Lunch            Exceptionality
  10           929.6        35.0       47.4      2.6        29.9             27.4
  13           738.5        96.4       46.4     35.9        89.6             17.6
  44           683.7        45.2       47.7     11.5         2.8             24.6
  51           908.4        27.6       47.3      9.1        55.2             22.3
  52           875.0        43.1       46.8      9.7        51.3             24.7
  53           730.7        45.5       48.3      7.8        56.3             18.5
  55           774.7        25.0       48.3      1.2        35.1             23.6
  66           781.0        17.2       48.8      4.2        46.1             14.9

After analysis, three of the proposed 36 comparisons were deemed not to be appropriate, due to being specialty schools, which include additional grade levels of students, as well as incorporating substantively different leadership philosophies. Additional comparison matches could not be provided by the two districts containing these inappropriate comparison schools, due to their small size, which resulted in a total of 33 matched comparison schools. Table 9 below provides summative district-level data for matched comparison schools.

Table 9
School and student demographics for selected comparison schools
(averages across comparison schools within each district)

  District   Avg School   % Ethnic   % Female   % LEP   % Free/Reduced   % Primary
  (Code)     Population   Minority                      Lunch            Exceptionality
  10          1082.3        24.2       46.9      2.0        28.6             31.4
  13           996.3        97.8       48.0     34.2        90.2             20.6
  44           680.5        32.0       46.7      8.5         4.5             26.1
  51           928.0        24.7       48.6      8.3        59.5             21.4
  52           648.8        43.9       47.8     12.1        52.4             24.2
  53           919.7        44.9       52.2      6.8        59.6             17.8
  55           931.8        15.5       48.2      1.3        19.5             24.3
  66           600.0        20.2       45.3      2.1        59.7             19.7

Design of the Study

A quasi-experimental design was used to address the research questions for this study, which examined the relationship between educator beliefs and perceived competence, as well as how the administration of professional development impacts this relationship. The research questions were addressed via examination and analysis of existing data; specifically, previously collected survey response data within a database from the first year of the Florida PS/RtI Project, an ongoing 3-year statewide school reform initiative that was reviewed and approved by the University of South Florida Institutional Review Board (IRB). The independent variables for this study included the administration of professional development and the level of educator self-reported beliefs. The dependent variables included the level of educator self-reported skills and the self-reported frequency of associated skill usage in the schools.
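Because the analyses pair each educator's pre- and post-training responses, it may help to picture the shape of the underlying records. The sketch below shows one plausible layout; the column names and values are hypothetical illustrations, not the Project's actual database schema.

    # Hypothetical layout for paired survey records; column names and values
    # are illustrative, not the Project's actual database schema.
    import pandas as pd

    records = pd.DataFrame({
        "respondent_id": ["A17", "A17", "B04", "B04"],    # self-generated IDs
        "school_code": ["104402", "104402", "135101", "135101"],
        "condition": ["pilot", "pilot", "comparison", "comparison"],
        "time": ["fall_2007", "spring_2008", "fall_2007", "spring_2008"],
        "beliefs_data_usage": [3.2, 3.9, 3.0, 3.1],       # factor-level mean scores
    })

    # Pre/post comparison requires pairing the two administrations per respondent.
    wide = records.pivot(index="respondent_id", columns="time",
                         values="beliefs_data_usage")
    change = wide["spring_2008"] - wide["fall_2007"]      # per-respondent change
    print(change)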

Measures

As systemic implementation of a PS/RtI model has only recently been attempted in schools, empirically validated measures of the PS/RtI process were not available in the literature. Therefore, instruments were collected from existing district and state initiatives, to be used as a primary source for creating instruments for the Florida PS/RtI Project. In addition, the systems change and PS/RtI implementation literature was investigated to determine key variables for assessment within the Project. Examples of these variables include stakeholder consensus with respect to change, identification of strengths and weaknesses via needs assessments, use of a problem-solving process for planning and decision making, and monitoring of progress toward desired goals (Curtis & Stollar, 2002). In addition, Noell and Gansle (2006) suggest that integrity of implementation is a critical piece when implementing a PS/RtI model. Based upon the literature review and examination of existing instruments, the Florida PS/RtI Project created four survey instruments to access and measure several constructs identified as key variables by systems change and PS/RtI research (e.g., Curtis & Stollar, 2002; Noell & Gansle, 2006). Because these measures purported to examine educators' beliefs, perceived skills, and practices associated with the model, each instrument was reviewed by an Educator Expert Validation Panel (EEVP) composed of educators from a neighboring school district with experience regarding PS/RtI practices. The number and types of educators comprising the EEVP were determined through discussion, by Project staff, as to the categories of educators who would be likely to be involved in implementation of PS/RtI.

After creating a representative sample framework for the panel, a district contact provided educators fitting the provided descriptions. Validation panel response forms (see Appendix B) for the surveys were disseminated to two special education teachers, five general education teachers, two school psychologists, two guidance counselors, two social workers, one reading specialist, one behavior specialist, three school administrators, three district administrators, and three program supervisors. The 24 panel members were asked to provide feedback regarding the content and clarity of each survey item, and to offer suggestions for adding or subtracting items. Upon return of the completed validation panel forms for all surveys, a $100 payment was made to each panel member by the Project. Completed validation forms were received from 14 panel members: one general education teacher, two special education teachers, one school administrator, two school psychologists, two guidance counselors, two social workers, three district administrators, and one program supervisor.

Upon completion of the validation process, feedback received from the EEVP members was reviewed by Project staff and revisions to the surveys were made. For each survey, descriptive statistics were used to determine the proportion of respondent agreement as to item content and clarity. A threshold of 80% agreement (i.e., 80% of panel members selected "good" when reviewing a given item) was used as the criterion for retaining an item as written. When agreement from the panel members was below 80%, Project staff reviewed and discussed feedback from disagreeing respondents (i.e., those who selected one of the four responses indicating that some change was needed in terms of how the item was written; see Appendix B).

This feedback was discussed until Project staff reached consensus regarding how to proceed with revising the item. Revisions were made to the majority of items where agreement was below 80%, after which agreement was recalculated and typically exceeded 80%. It should be noted that a few items displaying less than 80% EEVP agreement were not revised; this occurred when panel members incorrectly stated that the item was grammatically incorrect (e.g., for items where the subject of the sentence was the word "data," some panel members provided feedback that the word "are" should be changed to "is").
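The 80% retention criterion is straightforward to express computationally. The sketch below illustrates the arithmetic with hypothetical panel ratings; the item labels and vote counts are invented for the example.

    # Illustrative computation of the 80% agreement retention criterion.
    # Panel ratings are hypothetical; "good" indicates the item is acceptable
    # as written, any other response indicates that some change is needed.
    ratings = {
        "item_07": ["good"] * 11 + ["needs_change"],
        "item_12": ["good"] * 7 + ["needs_change"] * 5,
    }

    THRESHOLD = 0.80
    for item, votes in ratings.items():
        agreement = votes.count("good") / len(votes)
        decision = "retain as written" if agreement >= THRESHOLD else "review feedback"
        print(f"{item}: {agreement:.0%} agreement -> {decision}")
    # item_07: 92% agreement -> retain as written
    # item_12: 58% agreement -> review feedback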

Of the instruments created by the Project, three were used for the purposes of the current study: the Beliefs Survey, the Perceptions of RtI Skills Survey, and the Perceptions of Practices Survey. A descriptive overview of each instrument follows.

Beliefs Survey. The purpose of the Beliefs Survey (BS) was to assess the beliefs of educators regarding the provision of services to students within an RtI model. The Beliefs Survey has 27 items assessing philosophy of service delivery, as well as beliefs about core instruction, student assessment, planned intervention, and determination of eligibility for special education services. A 5-point Likert-type scale was used, with response choices ranging from "strongly disagree" to "strongly agree" (see Appendix C for a copy of the Beliefs Survey). The survey was administered during the Fall of 2007 and Spring of 2008 (see Appendix F for the Year 1 Survey Administration Rubric) to members of each of the School-Based Leadership Teams in all 40 pilot schools, as well as to all instructional staff in 62 pilot and comparison schools involved in the Florida Problem Solving/Response to Intervention Project. It should be noted that instructional staff data were not received from 5 of the pilot and comparison schools, due to failure of trainers to administer required surveys and/or submit the completed surveys to Project staff for collection, coding, entry, and analysis. As a result, a total of 2,430 Beliefs Surveys were collected and analyzed for the purposes of the following analyses.

In order to determine the pattern of relationships among the items on the Beliefs Survey, an exploratory factor analysis (EFA) was conducted. An examination of the eigenvalues, the percent of variance explained by each factor, and the scree plot indicated that three factors best illustrate the relationships among the items. In addition, the standardized regression coefficients were examined to determine which items were best described by each of the three factors. The results of this EFA are represented in Table 10 below. As shown in Table 10, the item content of the Beliefs Survey was conceptualized as falling within one of three categories: Factor One, Student Ability, which related to the ability of students with disabilities to achieve academic benchmarks; Factor Two, Data Usage, which related to data-based decision making; and Factor Three, Instruction, which related to the functions of core and supplemental instruction. Additionally, items 6, 18, 19, and 26 were not accounted for by any of the three factors. Following factor analysis, further analyses were conducted to determine internal consistency for the Beliefs Survey items constituting the three factors. The Cronbach alpha coefficients for the factors were 0.87 for Factor One, 0.79 for Factor Two, and 0.85 for Factor Three.

Table 10
Beliefs Survey Factor Descriptions

  Beliefs Survey Exploratory Factor Analysis (EFA)
  Factor-Related Content                   Constituent Survey Items         Factor Nomenclature
  Factor One: Ability of students with     9A, 9B, 10A, 10B, 11A, 11B       Student Ability
  disabilities to achieve academic         (6 items total)
  benchmarks
  Factor Two: Data-based decision          12, 13, 14, 15, 16, 17, 20,      Data Usage
  making                                   21, 22, 23, 24, 25, 27
                                           (13 items total)
  Factor Three: Functions of core and      7A, 7B, 8A, 8B                   Instruction
  supplemental instruction                 (4 items total)
  Unrelated                                6, 18, 19, 26 (4 items total)    N/A
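The analytic sequence reported for each survey (an exploratory factor analysis guided by eigenvalues and the scree plot, followed by per-factor internal consistency) can be sketched as follows. This is a minimal illustration, assuming a respondents-by-items matrix of 1-5 Likert scores; the file name and item labels are hypothetical, and the factor_analyzer package is one common tool rather than necessarily the one the Project used.

    # Minimal sketch of EFA followed by per-factor Cronbach's alpha, assuming
    # a respondents-by-items matrix of 1-5 Likert scores. The file name, item
    # labels, and choice of the factor_analyzer package are all assumptions.
    import pandas as pd
    from factor_analyzer import FactorAnalyzer

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    beliefs = pd.read_csv("beliefs_survey_scores.csv")    # hypothetical file

    # Eigenvalues and the scree plot guide how many factors to retain.
    fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
    fa.fit(beliefs)
    eigenvalues, _ = fa.get_eigenvalues()
    print(eigenvalues[:5])                                # inspect the scree
    print(pd.DataFrame(fa.loadings_, index=beliefs.columns))  # item loadings

    # Internal consistency for the items loading on one factor (cf. Table 10)
    data_usage = ["i12", "i13", "i14", "i15", "i16", "i17", "i20",
                  "i21", "i22", "i23", "i24", "i25", "i27"]
    print(cronbach_alpha(beliefs[data_usage]))

The same sequence applies to the Perceptions of RtI Skills and Perceptions of Practices Surveys described next, with the number of retained factors (three and two, respectively) chosen by the same eigenvalue and scree criteria.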

Perceptions of Skills Survey. The Perceptions of Skills (PS) Survey was intended to assess the perceptions of educators regarding the degree to which they possess skills consistent with an RtI process. The 57-item Perceptions of RtI Skills Survey uses a 5-point frequency scale for each item, with response choices ranging from "I do not have this skill at all" to "I am highly skilled in this area and could teach others this skill" (see Appendix D for a copy of the PS Survey). During the Fall of 2007 and Spring of 2008 (see Appendix F for the Year 1 Survey Administration Rubric), the Perceptions of RtI Skills Survey was administered to members of each of the School-Based Leadership Teams in all 40 pilot schools, as well as to all instructional staff in 62 pilot and comparison schools involved in the Florida Problem Solving/Response to Intervention Project. A total of 2,184 Perceptions of RtI Skills Surveys were collected and analyzed for the purpose of the following analyses.

In order to determine the pattern of item relationships on the Perceptions of RtI Skills Survey, an EFA was conducted. An examination of the eigenvalues, the percent of variance explained by each factor, and the scree plot indicated that the item interrelationships are best illustrated through the use of three factors. In addition, the standardized regression coefficients were examined to determine which items were best described by each factor. The results of the PS EFA are presented in Table 11 below.

Table 11
Perceptions of RtI Skills Survey Factor Descriptions

  Perceptions of RtI Skills (PS) Survey EFA
  Factor-Related Content                  Constituent Survey Items              Factor Nomenclature
  Factor One: Educators' perceptions      2A, 3A, 4A1, 4B1, 4C1, 4D1, 4E1,      Academic Skills
  of RtI skills in academics              4F1, 5A, 6A, 7A, 8A, 8C, 8E, 9A,
                                          10A, 11A, 12A, 13A, 16, 17, 18A,
                                          18B, 18C, 20C (25 items total)
  Factor Two: Educators' perceptions      2B, 3B, 4A2, 4B2, 4C2, 4D2, 4E2,      Behavior Skills
  of RtI skills in behavior               4F2, 5B, 6B, 7B, 8B, 8D, 8F, 9B,
                                          10B, 11B, 12B, 13B, 18D
                                          (20 items total)
  Factor Three: Educators' perceptions    14A, 14B, 14C, 14D, 14E, 15, 19,      Data Skills
  of RtI skills in accessing,             20A, 20B, 20D, 20E, 21
  interpreting, and graphing data         (12 items total)

As can be seen, the items from the PS Survey were considered to fall within one of three content categories: Factor One, Academic Skills, which related to educators' perceptions of RtI skills in academics; Factor Two, Behavior Skills, which related to educators' perceptions of RtI skills in behavior; and Factor Three, Data Skills, which related to educators' perceptions of skills in accessing, interpreting, and graphing data. It is of note that all items on the Perceptions of RtI Skills Survey were accounted for by the three factors. Follow-on internal consistency reliability analyses yielded Cronbach alpha coefficients of 0.98 for Factor One, 0.97 for Factor Two, and 0.94 for Factor Three.

Perceptions of Practices Survey. The Perceptions of Practices (PP) Survey was designed to assess the perceptions of educators with respect to the presence and frequency of critical PS/RtI practices occurring in their schools. The 42-item Perceptions of Practices Survey used a 5-point frequency scale for all items, with response choices ranging from "never occurs" to "always occurs" (see Appendix E for a copy of the PP Survey). During the Fall of 2007 and Spring of 2008 (see Appendix F for the Year 1 Survey Administration Rubric), the Perceptions of Practices Survey was administered to members of each of the School-Based Leadership Teams in all 40 pilot schools, as well as to all instructional staff in 62 pilot and comparison schools involved in the Florida Problem Solving/Response to Intervention Project. A total of 2,140 Perceptions of Practices Surveys were collected and analyzed for the purpose of the following analyses.

In order to determine the pattern of item interrelationships on the Perceptions of Practices Survey, an EFA was conducted. Examination of the eigenvalues, the percent of variance explained by each factor, and the scree plot indicated that the item interrelationships are best illuminated through the use of two factors. In addition, the standardized regression coefficients were examined to determine which items best fit each of the two factors. The results of the EFA are presented in Table 12 below.

PAGE 69

58 observed, the items contained within the Perceptions of Practices Survey are conceptualized as falling within one of two categorie s: Factor One Behavioral Practices which relate d to educators perceptions of educational practices in behavior ; and Factor T wo Academic Practices which relate d to educators perceptions of practices in academics. Table 12 Perceptions of Practices Survey Factor Descriptors Perceptions of Practices (PP) Survey EFA Factor Related Content Constituent Survey Items Factor Nomenclature Factor One: Educators perceptions of educational practices in behavior. 2B, 3B, 4B, 5B, 6B, 7B, 8B, 9B, 10A2, 10B2, 10C2, 11B, 12B, 13B, 14B, 15B, 16B, 17A2, 17B2, 17C2, 18B (21 items total) Behavioral Practices Factor Two: Educators perceptions of educational practices in academics. 2A, 3A, 4A, 5A, 6A, 7A, 8A, 9A, 10A1, 10B1, 10C1, 11A, 12A, 13A, 14A, 15A, 16A, 17A1, 17B1, 17C1, 18A (21 items total) Academic Practices Follow on a nalyses investigating internal consistency reliability for the PP survey resulted in a Cronbach alpha coefficient = 0.97 for Factor One, and r =0.96 for Factor Two. Procedures Professional Development Training. Project staff provided initial training to the demonstration districts and schools S pecifically, the three Regional Project Coordinators and the Project Leader provided PS/RtI training to all district a nd school leadership
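The internal consistency coefficients reported above can be computed directly from item-level responses. The following is a small sketch of Cronbach's alpha, assuming a DataFrame containing only the items that load on a single factor; the function and data names are illustrative.

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha for a set of items assumed to measure one factor."""
        items = items.dropna()
        k = items.shape[1]                              # number of items
        item_variances = items.var(axis=0, ddof=1)      # variance of each item
        scale_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        return (k / (k - 1)) * (1 - item_variances.sum() / scale_variance)

Applied separately to the items of each factor, this yields coefficients of the kind reported here (e.g., .98, .97, and .94 for the three PS factors).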

Procedures

Professional Development Training. Project staff provided initial training to the demonstration districts and schools. Specifically, the three Regional Project Coordinators and the Project Leader provided PS/RtI training to all district and school leadership teams, as well as to the school coaches. The trainings followed a 2-1-1-1 format for Year 1, with 2 days of training provided early in the fall, 1 day provided later in the fall, 1 day provided in the winter, and 1 day provided in the spring. Content covered during the first year of training included an overview of the PS/RtI model, legislative and policy issues affecting model implementation, systems change procedures, problem identification, Tier I assessment and instruction interventions, and data collection and progress monitoring.

In addition to scheduled training sessions, a two-tiered system of technical assistance provision was implemented. At the top tier, Regional Coordinators provided need-based assistance to the PS/RtI Coaches with respect to the unique needs of the schools and districts served by the Coach in question. These needs were determined via the Coaches' responses on Beliefs and Skills assessments administered during Coaches' training, as well as from needs assessments and various outcome data from the Coaches' schools. The secondary tier of technical assistance was that provided by the Coaches to the School-Based Leadership Teams and instructional staff, again based upon the unique needs identified at the school level. Determination of school, student, and systemic needs was based upon a variety of data assessing skills that educators may need additional support to master.

Pilot school Coaches provided PS/RtI training to the remainder of the school staff. This training included an overview of the PS/RtI model, as well as policy and legislative issues impacting model implementation. In addition, Coaches provided as-needed supplemental training to district leadership teams, school leadership teams, and school staff. This supplemental training was generated to address the goals and objectives of the individual schools and districts, as determined by needs assessment and outcome data. Content of this training included specific components of the PS/RtI model, practices and procedures of assessment and intervention, and using databases to facilitate data-based decision making.

Data Collection and Data Entry. Data collected during the ongoing Project's first year of implementation were used to address the research questions for this study. This information was collected by multiple individuals from multiple sources. The instruments relevant to this study (i.e., the Beliefs Survey, Perceptions of Practices Survey, and Perceptions of RtI Skills Survey) were administered to members of each of the School-Based Leadership Teams in all 40 pilot schools, as well as to all instructional staff in 62 pilot and comparison schools involved in the Florida Problem Solving/Response to Intervention Project. As mentioned earlier, data were collected from instructional staff in 62 (of 67) schools. The five instances of missing data represent buildings where (a) school Coaches failed to administer surveys during day 1 training, or (b) completed surveys were not submitted by Coaches to the Project for collection, entry, and analysis.

Survey administrations occurred during School-Based Leadership Team trainings, staff trainings, and staff meetings (see Appendix F for the Year 1 Survey Administration Rubric). The surveys were printed using a format that permitted direct scanning of each participant's survey. To allow for comparison at the individual respondent level, each survey contained a space for participants to enter a self-generated identification number, as well as a request that they continue to use this ID number on all subsequent surveys. In addition, the surveys ensured confidentiality by using a six-digit code corresponding to an individual district and school, thereby removing the necessity for collecting identifying information from the responding individual. Each survey was administered during the Fall of 2007 (pre-training) and the Spring of 2008 (post-training) to provide training impact data.

The Regional Coordinators and Coaches were trained to administer the surveys and to answer questions arising during survey administration. Graduate Assistants were trained by Project staff to scan each completed survey into a database created by the Project. The integrity of the scanning process was monitored by routinely selecting surveys for scan checks. Fifteen percent of randomly selected surveys were checked for accuracy of entry by a Graduate Assistant who did not scan the surveys, and inter-rater agreement estimates were calculated. Inter-rater agreement was estimated by dividing the number of accurate data entries by the total number of data entries made, then multiplying by 100. When inter-rater agreement estimates (Range: 80-100%; M = 96.75%) were below 90%, the relevant batch of data entered was rechecked by the Graduate Assistants.
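As a sketch, the agreement estimate described above amounts to the following; the list structure is an assumption for illustration.

    def interrater_agreement(original: list, reentered: list) -> float:
        """Percentage agreement between originally scanned and re-entered values."""
        if len(original) != len(reentered):
            raise ValueError("Entry lists must be the same length")
        matches = sum(a == b for a, b in zip(original, reentered))
        return 100.0 * matches / len(original)

    # A batch with 4 erroneous entries out of 200 yields 98% agreement;
    # any batch below the 90% threshold triggered a full recheck.
    assert interrater_agreement([1] * 200, [1] * 196 + [2] * 4) == 98.0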

Analyses

Descriptive and inferential analyses were conducted to address each research question. For all questions, the individual buildings were considered as the unit of analysis.

Research Question One: What is the relationship between beliefs about a training objective, and the self-rated perception of skills and frequency of observed practices associated with that objective? To examine this relationship, the data used for the training objective included calculated averages of the items constituting the Data Usage and Instruction Factors from the Fall 2007 Beliefs Survey administration; the data used for self-perception of skills and practices included calculated averages of the items constituting the Academic Skills and Data Skills Factors from the Fall 2007 PS Survey, as well as the Academic Practices Factor from the Fall 2007 PP Survey. Table 13 below presents a graphical representation of the various Survey Factors used in Research Question One.

The descriptive data included the means and standard deviations, at the building level, for the relevant Factor scores from the administered Beliefs, Perceptions of Skills (PS), and Perceptions of Practices (PP) surveys. Inferential analyses included calculation of the Pearson product-moment correlation coefficient (PPMCC) between mean building Beliefs Survey Factor scores and their corresponding Perceptions of Skills Survey mean Factor scores, as well as between mean Beliefs Survey Factor scores and Perceptions of Practices Survey Factor scores.

Table 13
Operational definitions for Research Question One
  Use of Data: Educator Belief = Data Usage (Beliefs Survey, Factor Two); Self-rated Skill Level = Data Skills (PS Survey, Factor Three); Frequency of Observed Practices = N/A.
  Academic Instruction: Educator Belief = Instruction (Beliefs Survey, Factor Three); Self-rated Skill Level = Academic Skills (PS Survey, Factor One); Frequency of Observed Practices = Academic Practices (PP Survey, Factor Two).
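A brief sketch of the PPMCC computation at the building level follows; the column names and randomly generated values are stand-ins for the Project's survey data (the generation parameters loosely follow the descriptives later reported in Table 17).

    import numpy as np
    import pandas as pd
    from scipy.stats import pearsonr

    # Illustrative building-level data: one row per building, columns hold
    # that building's mean Factor scores.
    rng = np.random.default_rng(0)
    buildings = pd.DataFrame({
        "data_usage_beliefs": rng.normal(3.76, 0.15, 62),
        "data_skills": rng.normal(2.85, 0.31, 62),
    })

    r, p = pearsonr(buildings["data_usage_beliefs"], buildings["data_skills"])
    print(f"PPMCC: r = {r:.5f}, p = {p:.4f} (test of the null of zero correlation)")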

Research Question Two: What are the effects of specific skills training on the relationship between self-reported beliefs, and associated perceptions of skills and frequency of observed practices? To investigate these effects, the data for skills training included calculated averages of constituent items from the Data Usage and Instruction Factors of the Fall 2007 and Spring 2008 Beliefs Surveys; similarly, the data used for change in self-reported skill level and observed practices included calculated averages of the items constituting the Academic Skills and Data Skills Factors of the Fall 2007 and Spring 2008 PS Surveys, as well as the Academic Practices Factor of the Fall 2007 and Spring 2008 Perceptions of Practices Surveys. Table 14 presents a graphic organizer for the Survey Factors utilized in Research Question Two.

The descriptive data included the means and standard deviations (for Fall 2007 and Spring 2008), at the building level, for the relevant Factor scores from the administered Beliefs, PS, and PP surveys. Inferential analyses for this research question consisted of Time One (Fall 2007) and Time Two (Spring 2008) sets of PPMCCs between mean building Beliefs Survey Factor scores and their corresponding PS and PP Survey mean Factor scores. In addition, a comparison of correlations (in essence, a correlation of correlations) was used. The notion of examining differences in specified correlations across time required calculating the differences in Fisher r-to-Z transformed rs within the Pearson-Filon statistic (ZPF; e.g., Raghunathan, Rosenthal, & Rubin, 1996). While a detailed description of the ZPF statistic is beyond the scope of this study, a brief explanation follows. The interested reader is referred to an elegant overview of the ZPF statistic offered by Raghunathan, Rosenthal, and Rubin (1996). For the purposes of the current study, the calculation of the ZPF statistic is expressed in Equations 1 and 2 below:

    ZPF = (Z_r1 - Z_r2) * sqrt[(N - 3) / (2 - 2A)]    (1)

where Z_r represents Fisher's Z transformation of r:

    Z_r = (1/2) ln[(1 + r) / (1 - r)]    (2)

Furthermore, because the two administrations were dependent by definition, an adjustment for non-independence, normally signified by A, was necessary for comparison of these correlated but non-overlapping correlations. Equation 3 below presents a commonly accepted approximation of A, termed A_approx:

    A_approx = [2 * ave(r²_other) * (1 - ave(r_test))²] / [1 - ave(r²_test)]²    (3)

where ave(r²_other) is the average r² among the non-tested correlations, ave(r²_test) is the average r² of the two correlations being tested, and ave(r_test) is the average r of the two correlations being tested (Raghunathan, Rosenthal, & Rubin, 1996).

As displayed in Table 14, the correlations of interest for this research question can be broken down by training objective. For the Use of Data training objective, the relevant correlations are those between Time One and Time Two Data Usage Beliefs scores, and between Time One and Time Two Data Skills scores. For Academic Instruction (Skills), the relevant correlations include Time One and Time Two Instruction Beliefs scores, and Time One and Time Two Academic Skills scores; similarly, the Academic Instruction (Practices) objective makes use of correlations between Time One and Time Two Instruction Beliefs scores, and Time One and Time Two Academic Practices scores.

Table 14
Operational definitions for Research Question Two
  Use of Data:
    Educator Belief: Time One Data Usage (Fall 2007) and Time Two Data Usage (Spring 2008); Data Usage PPMCC.
    Self-Rated Skill Level: Time One Data Skills and Time Two Data Skills; Data Skills PPMCC.
    Comparison: Use of Data ZPF.
  Academic Instruction:
    Educator Belief: Time One Instruction and Time Two Instruction; Instruction PPMCC.
    Self-Rated Skill Level: Time One Academic Skills and Time Two Academic Skills; Academic Skills PPMCC.
    Frequency of Observed Practice: Time One Academic Practices and Time Two Academic Practices; Academic Practices PPMCC.
    Comparisons: Academic Instruction ZPFs.

Research Question Three: What is the relationship between initial (pre-training) and time two (post-training) measures of self-reported beliefs and perceived skills related to data usage, and of self-reported beliefs, perceived skills, and observed practices related to academic instruction? To investigate these relationships, the data used for initial measures of belief included calculated averages of items constituting the Data Usage and Instruction Factors from the Fall 2007 Beliefs Survey.

As with Research Question Two, the data used for change in self-reported skill level and observed practices included calculated averages of the items constituting the Academic Skills and Data Skills Factors of the Fall 2007 and Spring 2008 PS Surveys, as well as the Academic Practices Factor of the Fall 2007 and Spring 2008 Perceptions of Practices Surveys. Table 15 below presents the Survey Factors used for Research Question Three.

The descriptive data included the means and standard deviations (for Fall 2007 and Spring 2008), at the building level, for the relevant Factor scores from the administered Beliefs, PS, and PP surveys. Inferential analyses for this research question included two multiple regression analyses using initial Perceptions of RtI Skills, Perceptions of Practices, and Beliefs Survey scores as predictor variables against the criterion variable of final Perceptions of RtI Skills and Perceptions of Practices Survey scores.

Table 15
Operational definitions for Research Question Three
  Educator Belief: Use of Data objective = Data Usage (Fall 2007; no Time Two measure); Academic Instruction objective = Instruction (Fall 2007; no Time Two measure).
  Self-Rated Skill Level: Use of Data objective = Data Skills (Fall 2007 and Spring 2008); Academic Instruction objective = Academic Skills (Fall 2007 and Spring 2008).
  Frequency of Observed Practice: Use of Data objective = N/A; Academic Instruction objective = Academic Practices (Fall 2007 and Spring 2008).

Delimitations

This study focused upon the relationship between elementary school educator beliefs about the various components of the PS/RtI model, and self-rated levels of perceived skills and observed practices regarding those same components.

Additionally, the study sought to document the impact of professional development administered in related topics upon the above relationship. Research indicates that the PS/RtI model is applicable to secondary schools; however, the resources available for conducting this study (i.e., the existing Project data) necessitated that the focus of this study remain upon the elementary school educator population.

Limitations

There were potential threats to internal and external validity for the current study. One such threat to internal validity revolved around the limited control the Project exercised with respect to integrity of PS/RtI implementation within demonstration schools. Given the PS/RtI model's complexity and number of relevant variables, conclusive statements about control over the independent variables must be made with caution. Similarly, the Project had no control over non-Project PS/RtI implementation efforts initiated by comparison schools during the period of data collection. This is a very real concern, as changes in state regulations have imposed expectations upon all educators to begin implementation of the PS/RtI model as soon as is practicable. The nature of the Project application process made it possible that some of the matched comparison schools proposed by school districts would differ significantly in terms of certain variables (e.g., student and staff demographics, resources available).

There were two general threats to external validity for this study. The first issue was the amount of resources, support, and training offered to demonstration schools by Project staff. This assistance represented a level of power and reassurance that would not typically be available to average schools throughout the state.

The second threat concerned the previously mentioned differences in demographic characteristics between demonstration schools and other districts/schools throughout the state, which limited the degree of applicability of this study's results to other areas.

Chapter 4: Results

This study sought to identify and understand relationships between educators' perceived skills, observed practices, and stated beliefs, as well as the impact of evidence-based professional development upon those relationships, during the first year of ongoing school-based implementation for Florida's Statewide Problem Solving/Response to Intervention (PS/RtI) Project. This chapter of the study describes the participating schools from which data were collected, as well as the results of the data analyses selected to answer each research question.

Research Question One

What is the relationship between beliefs about a training objective, and the self-rated perception of skills and frequency of observed practices associated with that objective? For the purposes of this question, various factors from the Time One (Fall 2007) Beliefs, Perceptions of Skills (PS), and Perceptions of Practices (PP) Surveys were used to measure two training objectives, as well as their associated self-perceived skill levels and observed practices. Table 16 below gives a graphical representation of the defined training objectives, skill levels, and observations of practice discussed herein. As shown, the Use of Data training objective included calculated building-level averages of items constituting the Data Usage Factor of the Beliefs Survey, as well as averages of the constituent items from the Data Skills Factor of the PS Survey. Similarly, the Academic Instruction training objective encompassed building-level averages of the 4 items on the Instruction Factor of the Beliefs Survey, as well as averages of the constituent items from the Academic Skills and Academic Practices Factors of the PS and PP Surveys, respectively.

Table 16
Training objective operational definitions
  Use of Data:
    Beliefs (Data Usage Factor): Items 12, 13, 14, 15, 16, 17, 20, 21, 22, 23, 24, 25, 27 (13 items total).
    Skill Levels (Data Skills Factor): Items 14A, 14B, 14C, 14D, 14E, 15, 19, 20A, 20B, 20D, 20E, 21 (12 items total).
  Academic Instruction:
    Beliefs (Instruction Factor): Items 7A, 7B, 8A, 8B (4 items total).
    Skill Levels (Academic Skills Factor): Items 2A, 3A, 4A1, 4B1, 4C1, 4D1, 4E1, 4F1, 5A, 6A, 7A, 8A, 8C, 8E, 9A, 10A, 11A, 12A, 13A, 16, 17, 18A, 18B, 18C, 20C (25 items total).
    Observed Practices (Academic Practices Factor): Items 2A, 3A, 4A, 5A, 6A, 7A, 8A, 9A, 10A1, 10B1, 10C1, 11A, 12A, 13A, 14A, 15A, 16A, 17A1, 17B1, 17C1, 18A (21 items total).

Descriptive Statistics

Descriptive data included the means and standard deviations for the relevant Factor scores from the Beliefs, PS, and PP surveys, a description of which appears in Table 17 below.

To determine whether any relationships exist between training objective beliefs and their associated skills and observed practices, Pearson product-moment correlation coefficients (PPMCC) were calculated by building, for each training objective.

Table 17
Relevant Factor score descriptives for participating schools
  Use of Data:
    Data Usage (Beliefs): Overall Mean = 3.7585, Observed Range = 3.2846-4.0661, SD = 0.15321.
    Data Skills (Skills): Overall Mean = 2.8549, Observed Range = 2.2417-3.7755, SD = 0.31078.
  Academic Instruction:
    Instruction (Beliefs): Overall Mean = 3.9201, Observed Range = 3.475-4.4444, SD = 0.1814.
    Academic Skills (Skills): Overall Mean = 3.4172, Observed Range = 2.8079-4.0056, SD = 0.2714.
    Academic Practices (Practices): Overall Mean = 4.215, Observed Range = 3.7406-4.6803, SD = 0.2277.

Correlation Coefficients

The Pearson product-moment correlation coefficient (PPMCC) was calculated to investigate the relationships between beliefs about a training objective (i.e., mean building Beliefs Survey Factor scores) and associated perceptions of skills and frequencies of observed practices (i.e., mean building Perceptions of Skills and Perceptions of Practices Survey Factor scores). The results are reported in Table 18 and indicated that weak to moderate relationships were observed (N = 62; range r = .199 to r = .256; Data Usage: Beliefs to Skills and Academics: Beliefs to Practices, respectively).

Interestingly, the only strong relationship (r = .623) was observed between the Academic Practices and Academic Skills Factors. This strong intercorrelation, particularly when juxtaposed with the weak correlations involving Beliefs Factors as discussed above, raises questions as to the utility of conceptualizing Academic Practices and Skills factor scores as separate contributors to Academic Beliefs.

Table 18
Correlation matrices for Research Question One (N = 62; p values test the null hypothesis of zero correlation)
  Data Usage Beliefs with Data Skills: r = .19879, p = .1214.
  Instruction Beliefs with Academic Skills: r = .22531, p = .0783.
  Instruction Beliefs with Academic Practices: r = .25632, p = .0443.
  Academic Skills with Academic Practices: r = .62283.

Research Question Two

What are the effects of specific skills training on the relationship between self-reported beliefs, and associated perception of skills and frequency of observed practices? As with the first Question, training objectives on Use of Data and Academic Instruction were defined via the previously defined Beliefs, PS, and PP Factors; however, there are two key modifications to the question being asked. First, the acknowledgment of specific skills training indicates that only the 40 demonstration school responses can be used in data analysis. Second, the notion of change over time necessitates the use of Time One (Fall 2007) and Time Two (Spring 2008) survey administrations.

Descriptive Statistics

Table 19
Relevant Factor score means and SDs for demonstration schools
  Time One (Fall 2007):
    Use of Data: Data Usage (Beliefs) Mean = 4.09, Range = 3.5296-4.4923, SD = 0.2166; Data Skills (Skills) Mean = 3.135, Range = 2.1771-3.875, SD = 0.4218.
    Academic Instruction: Instruction (Beliefs) Mean = 4.1701, Range = 3.25-4.75, SD = 0.2898; Academic Skills (Skills) Mean = 3.639, Range = 2.5142-4.4852, SD = 0.4301; Academic Practices (Practices) Mean = 3.86, Range = 2.2116-4.7798, SD = 0.5751.
  Time Two (Spring 2008):
    Use of Data: Data Usage (Beliefs) Mean = 4.3306, Range = 3.8154-4.9231, SD = 0.2282; Data Skills (Skills) Mean = 3.4923, Range = 1.8542-4.7917, SD = 0.5227.
    Academic Instruction: Instruction (Beliefs) Mean = 4.4859, Range = 3.7778-5.0, SD = 0.3135; Academic Skills (Skills) Mean = 3.9997, Range = 3.364-4.9, SD = 0.3637; Academic Practices (Practices) Mean = 4.1792, Range = 3.5038-4.8286, SD = 0.2982.

The descriptive data included the demonstration school means and standard deviations (for Fall 2007 and Spring 2008), at the building level, for the relevant Factor scores from the administered Beliefs, PS, and PP surveys. A description of these data is presented in Table 19 above. To determine whether skills training had any influence on relationships between training objective beliefs and their associated skills and observed practices, dual PPMCCs and their non-overlapping correlations were calculated for each training objective.

Correlation Coefficients

The intercorrelations and ZPF calculation results are shown in Tables 20, 21, and 22 below; the tested correlations appear with their Fisher Z_r transformations in parentheses. None of the tested differences in correlations differed significantly from zero.

Table 20
Correlation Matrix and ZPF values for Use of Data objective
  Y1 Data Usage Beliefs with: Y1 Data Skills, r = .33784; Y2 Data Usage Beliefs, r = .23146 (Z_r = .23573); Y2 Data Skills, r = .26475.
  Y1 Data Skills with: Y2 Data Usage Beliefs, r = .06496; Y2 Data Skills, r = .58462 (Z_r = .66946).
  Y2 Data Usage Beliefs with: Y2 Data Skills, r = .20859.
  ZPF = 0.98; ZPF critical = 4.92.

Table 21
Correlation Matrix and ZPF values for Academic Instruction (Skills) objective
  Y1 Instruction Beliefs with: Y1 Academic Skills, r = .09354; Y2 Instruction Beliefs, r = .22417 (Z_r = .22804); Y2 Academic Skills, r = .09970.
  Y1 Academic Skills with: Y2 Instruction Beliefs, r = .18878; Y2 Academic Skills, r = .61066 (Z_r = .70997).
  Y2 Instruction Beliefs with: Y2 Academic Skills, r = .24765.
  ZPF = 2.10; ZPF critical = 4.92.

Table 22
Correlation Matrix and ZPF values for Academic Instruction (Practices) objective
  Y1 Instruction Beliefs with: Y1 Academic Practices, r = .05904; Y2 Instruction Beliefs, r = .22417 (Z_r = .22804); Y2 Academic Practices, r = .07480.
  Y1 Academic Practices with: Y2 Instruction Beliefs, r = .29768; Y2 Academic Practices, r = .38156 (Z_r = .40189).
  Y2 Instruction Beliefs with: Y2 Academic Practices, r = .19436.
  ZPF = 0.76; ZPF critical = 4.92.

Research Question Three

What is the relationship between initial (pre-training) and time two (post-training) measures of self-reported beliefs and perceived skills related to data usage, and of self-reported beliefs, perceived skills, and observed practices related to academic instruction?

As with the first two Questions, the Use of Data and Academic Instruction training objectives were used, consisting of the previously defined Beliefs, PS, and PP Factor scores. Given that the Question involves change over time, Time One and Time Two factor scores were used again; however, a key conceptual change within this Question is the use of Time One Beliefs, PS, and PP Factor scores to predict Time Two PS and PP Factor scores. Again, the inclusion of training as an independent variable requires that data analysis be limited to the smaller sample of demonstration schools.

Descriptive Data

The descriptive data included the means and standard deviations for Time One (Fall 2007) and Time Two (Spring 2008), at the building level, for the relevant Factor scores from the administered Beliefs, PS, and PP surveys. These data were previously reported in Table 19 above. To determine the predictive ability of pre-training factor scores, a multiple regression analysis was conducted for each training objective.

Multiple Regression

Regression Analysis for Time Two Data Skills Factor. Results of the multiple regression analysis for the Use of Data training objective are presented in Table 23 below. Calculation of the coefficient of multiple correlation, R, was performed to indicate the strength of relationship between the predictor variables and the criterion variable. The R value was 0.59. The coefficient of determination, R², indicates the proportion of unique and shared variability explained by all variables, and was calculated as 0.35, which is statistically significant (adjusted R² = 0.31). The proportion of unexplained variability, 1 - R², was calculated as 0.65.
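A sketch of one such regression follows, using statsmodels; the data are randomly generated stand-ins (loosely matched to the Table 19 descriptives), so the printed statistics will not reproduce the reported values.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Illustrative building-level records for the 39 demonstration schools.
    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "t1_data_usage_beliefs": rng.normal(4.09, 0.22, 39),
        "t1_data_skills": rng.normal(3.14, 0.42, 39),
    })
    df["t2_data_skills"] = 0.7 * df["t1_data_skills"] + rng.normal(0.9, 0.3, 39)

    # Time One scores predict the Time Two criterion.
    X = sm.add_constant(df[["t1_data_usage_beliefs", "t1_data_skills"]])
    fit = sm.OLS(df["t2_data_skills"], X).fit()
    print(fit.rsquared, fit.rsquared_adj)  # R^2 and adjusted R^2
    print(fit.params)                      # unstandardized B weights
    print(fit.pvalues)                     # per-coefficient p values

Standardized beta weights, as reported in Tables 23 through 25, would come from the same model fit to z-scored variables.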

The effect size was calculated to be .54 using the formula f² = R² / (1 - R²). The effect size of .54 is considered to be large (Cohen, 1969). A review of the standardized coefficients indicated that Time One Data Skills had a strong unique contribution, with a significant coefficient (see Table 23).

Table 23
MRA Results for Time One Data Usage Variables Predicting Time Two Data Skills
  Data Usage Beliefs: B = 0.18, SE B = 0.34, β = 0.08, p = .59.
  Data Skills: B = 0.69, SE B = 0.17, β = 0.56, p = .0003.
  Note: R² = .35. n = 39 schools. Data Skills had a significant beta weight.

Regression Analysis for Time Two Academic Skills. Results of the multiple regression analysis for the Academic Instruction (Skills) training objective are presented in Table 24 below. Calculation of the coefficient of multiple correlation, R, was performed to indicate the strength of relationship between the predictor variables and the criterion variable. The R value was 0.65. The coefficient of determination, R², was calculated as 0.43, which is statistically significant, F(3,36) = 8.97, p = .0001; adjusted R² = 0.38. The proportion of unexplained variability, 1 - R², was calculated as 0.57. The effect size was calculated to be .75 using the formula f² = R² / (1 - R²). The effect size of .75 is considered to be large. A review of the standardized coefficients indicated that Time One Academic Skills had a strong unique contribution, with a significant coefficient (see Table 24).
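The reported effect sizes follow directly from Cohen's f² = R² / (1 - R²); a quick check:

    # Check of the reported effect sizes using Cohen's f^2 = R^2 / (1 - R^2).
    for r2 in (0.35, 0.43, 0.30):
        print(f"R^2 = {r2:.2f} -> f^2 = {r2 / (1 - r2):.2f}")
    # Prints f^2 = 0.54 (large), 0.75 (large), and 0.43 (moderate), matching
    # the values reported for the three regression models.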

Table 24
MRA Results for Time One Academic Variables Predicting Time Two Academic Skills
  Academic Beliefs: B = 0.10, SE B = 0.16, β = 0.08, p = .55.
  Academic Skills: B = 0.37, SE B = 0.13, β = 0.44, p = .0086.
  Academic Practices: B = 0.18, SE B = 0.10, β = 0.29, p = .076.
  Note: R² = 0.41. n = 39 schools. Academic Skills had a significant beta weight.

Regression Analysis for Time Two Academic Practices. Results of the multiple regression analysis for the Academic Instruction (Practices) training objective are presented in Table 25 below. Calculation of the coefficient of multiple correlation, R, was performed to indicate the strength of relationship between the predictor variables and the criterion variable. The R value was 0.55. The coefficient of determination, R², was calculated as 0.30, which is not statistically significant, F(3,30) = 4.34, p = .0118; adjusted R² = 0.2328. The proportion of unexplained variability, 1 - R², was calculated as 0.70. The effect size was calculated to be .43 using the formula f² = R² / (1 - R²). The effect size of .43 is considered to be moderate. A review of the standardized coefficients indicated that none of the variables had a strong unique contribution, and none of the coefficients was significant (see Table 25).

Table 25
MRA Results for Time One Academic Variables Predicting Time Two Academic Practices
  Academic Beliefs: B = 0.006, SE B = 0.166, β = 0.006, p = .97.
  Academic Skills: B = 0.32, SE B = 0.12, β = 0.48, p = .0159.
  Academic Practices: B = 0.05, SE B = 0.09, β = 0.11, p = .56.
  Note: R² = 0.09. n = 33 schools. None of the beta values were statistically significant.

Chapter 5: Discussion

Interpretation of the findings from the Results section must occur against the backdrop of two important considerations. First, this study addressed its research questions via a quasi-experimental design. While similar to a pure experimental design in that a comparison (or control) condition existed, there are a few critical differences within the quasi-experimental design (e.g., lack of random assignment, insufficient control over implementation of the independent variable) that affect interpretation of study findings (Johnson & Christensen, 2004). As a direct result of these differences in study design, the findings regarding relationships between various educator variables and, where applicable, effects of training must be conceptualized as supporting or not supporting the existing professional development and systems change research literature, rather than via the experimental design paradigm in which an independent variable is seen as causing an effect within the dependent variables. In addition, it is worth noting that professional development in the context of the PS/RtI Project was conceptualized as a training-to-mastery approach; therefore, while not reported extensively, checks and balances were in place to ensure the integrity of professional development administration and, by extension, integrity of implementation.

The second consideration impacting interpretation of study results is to be found within the PS/RtI implementation literature. Prior attempts at implementation (e.g., Batsche, Elliott, Schrag, et al., 2005) suggest that school-based PS/RtI models cannot be considered fully implemented until a minimum of four years from initiation.

In addition, the paucity of implementation exemplars in this area lends little guidance as to expectations for progress markers or benchmarks that would predict later success in implementation at the 4-6 year mark. Thus, given that the data used for this study came from the first year of Project implementation, findings of this study should not be considered as conclusive or final; rather, these findings must be seen as preliminary, and any observed trends as formative or incremental in nature.

This study concludes by offering a summary and discussion of the results in four sections. The first section represents the study overview, the second offers conclusions and a discussion of analysis results, the third section presents the strengths and limitations of the study, and the last section provides future research recommendations.

Study Overview

This study was designed to identify the relationships between educator beliefs, self-perceptions of skills, and observations of critical practices. The specific characteristics identified in this study are self-reported variables associated with two discrete skill domains; specifically, the beliefs and critical skills related to data-based decision making, as well as those beliefs and skills encompassed by academic instruction. Additionally, the study was designed to determine the impact of evidence-based professional development upon the aforementioned relationships. There is limited research examining how professional development impacts consensus within a school-based systems change model (e.g., Joyce, Showers, & Bennett, 1987). While we have research indicating the potential for evidence-based professional development to increase educator proficiency and self-rated skill levels (e.g., Joyce & Showers, 1988, 1995), few studies examine the factors endemic to existing consensus levels that may facilitate or impede the impact of professional development upon self-perceptions of critical skills.

Therefore, this study was designed to examine the ability of preexisting educator beliefs to predict changes in self-reported skills and observed practices, in response to the administration of evidence-based professional development.

Conclusions and Discussion

Research Question One. What is the relationship between beliefs about a training objective, and the self-rated perception of skills and frequency of observed practices associated with that objective? With respect to the descriptive data obtained from the instructional staff survey responses, it is recognized that a discussion of means could be seen as somewhat misleading when considering educator beliefs; nevertheless, there are some interesting points to glean from the descriptive data. For example, a closer examination of the Use of Data training objective building-level ranges reveals that data usage beliefs were above the midpoint (Range: 3.28-4.07; SD = 0.15); that is to say, all building-level scores for this factor were "Neutral" or "Agree," with no buildings indicating disagreement regarding the importance of, and need for, increased data usage. It is therefore not surprising to find that the observed range for corresponding building-level data skills (Range: 2.24-3.78; SD = 0.31) indicates the need for some level (little to substantial) of support, as well as some existing level of proficiency (minimal to highly skilled) being reported. Similarly, the Academic Instruction training objective showed above-midpoint instructional beliefs (Range: 3.48-4.44; SD = 0.18), as well as academic skills ranges (Range: 2.81-4.01; SD = 0.27) indicating a need for support alongside an existing basal level of competency.

However, notice that the observed ranges (Range: 3.74-4.68; SD = 0.23) for academic practices, a measure of the frequency with which educators observe the academic instruction skills being applied in their building, indicate these applications as occurring "often" to "always," raising the question as to whether the perceived need for support and improvement in the area of academic instruction indicated by beliefs and skills responses was conceptualized by respondents as a need for improvement in the frequency of skill usage, in the quality of skills being applied, or in some combination of the two.

In order to address the thrust of the research question, the Pearson product-moment correlation coefficient (PPMCC) was calculated between each of the factor variables. Results for relationships within the Data Usage Beliefs-Skills, Academics Beliefs-Skills, and Academics Beliefs-Practices variable pairings showed weak to moderate correlations (r = .199, r = .225, and r = .256, respectively). These findings can be interpreted to indicate that, for the first year of Project implementation, preexisting levels of agreement as to the importance of PS/RtI components were very weakly, if at all, related to perceived levels of skill in using such components. Furthermore, in the case of academic instruction, the perceived importance of empirically derived instruction showed little to no relationship with the frequency of applied evidence-based practices observed within Project schools. Another finding of interest, though ancillary, is the observed strong correlation (r = .623) between the Academic Skills and Academic Practices Factors. While this relationship supports the use of both survey factors in describing a singular component of the PS/RtI school-based model (namely, the environmental, ideological, and practical aspects of evidence-based academic instruction), the high intercorrelation of these factors raises some question as to the utility of considering the factors as representing meaningfully different indicators of academic instruction.

The notion that initial educator beliefs regarding evidence-based instruction and data-based decision making were related only slightly to self-perceived competence in these areas is important, particularly when considering findings (e.g., Joyce & Showers, 1988, 1995) that educators will openly embrace new ideas when they understand the need for such change (operationalized here as beliefs specific to data usage and academic instruction) and possess the necessary skills (as signified by data skills and academic skills), or feel that the school supports them in gaining such skills. In addition, the finding that academic practices were higher than expected (given the expressed need for additional support) dovetails with research by Guskey (1986), who stated that as teachers practice new skills, and when these skills actually improve the performance of their students, teacher attitudes will change. In this manner, at least for teachers, beliefs can be seen as following observed behavior.

Research Question Two. What are the effects of specific skills training on the relationship between self-reported beliefs, and associated perception of skills and frequency of observed practices? After calculating the differences in Fisher r-to-Z transformed rs (ZPF) to examine differences in the dependent correlated correlations, there were no statistically significant relationships overall. The reader is again cautioned against causal or summative statements, particularly in a case such as this, where it is not scores but relationships which are being compared.

These findings are therefore interpreted as indicating that, independent of the effects skill training may have had on educator survey scores, the relationship between beliefs, skills, and practices factor scores did not differ, from Time One to Time Two, more than the amount expected due to chance alone. There are several corollaries to these findings within the research on systems change and implementation of school-based RtI efforts. For example, VanDerHeyden and Witt (2005) stated that, even though teacher referral practices were found to be less accurate, consistent, and proportionate than RtI, teachers remained reluctant to change existing methods of decision making. Similarly, the finding that teacher behaviors can often be maintained by beliefs that these behaviors are appropriate (Fenstermacher, 1979) resonates with the above finding from this study. Finally, this finding dovetails with teacher self-efficacy conceptualizations (e.g., Ashton & Webb, 1986; Jackson, 1968) in which teachers' beliefs regarding the role of education directly impacted their behavior; in essence, if the student is faulted (for academic failure), then there is no reason to change the educational process.

Interestingly, note that strong correlations were found from Time One to Time Two survey administrations for Data Usage Skills (r = .58), as well as for Academic Skills (r = .61). Similarly, Academic Practices (r = .38) showed a moderate relationship from Time One to Time Two administrations. The interpretation here is simply that initial survey ratings for educator skills within the Data Usage and Academic factors were strongly related to Time Two educator skills survey scores for the same factors, and moderately related to educator practices survey scores for the same factor.

Research Question Three. What is the relationship between initial (pre-training) and time two (post-training) measures of self-reported beliefs and perceived skills related to data usage, and of self-reported beliefs, perceived skills, and observed practices related to academic instruction? The results of the Use of Data training objective multiple regression analysis indicated that the proposed model accounted for a large amount (adjusted R² = 0.31) of the observed change in educators' self-rated Data Skills from Time One to Time Two survey administrations. As mentioned in the second research question findings, it is not surprising to find that the Time One Data Usage skills factor score was a strong predictor of the Time Two Data Usage skills factor score. In similar fashion, the Academic Instruction (Skills) training objective multiple regression analysis indicated that the existing model accounted for a large amount (adjusted R² = 0.38) of the educators' self-rated change in Academic Skills from Time One to Time Two survey administrations. As with the Use of Data objective, the Time One Academic Skills factor score was found to be a strong predictor of the Time Two Academic Skills factor score. In the Academic Instruction (Practices) training objective multiple regression analysis, it was found that the model accounted for a moderate amount (adjusted R² = 0.23) of the observed change in educators' Academic Practices from Time One to Time Two survey administrations. In line with the first two multiple regression analyses, the Time One Academic Skills factor score was the only significant predictor of the Time Two Academic Practices score (see Table 25).

Looking at results from the multiple regression analyses as a whole, it can be seen that the combination of preexisting beliefs about a training objective, perceived existing skills in that training area, and observed practices in this area were significant predictors of changes in perceived skill levels from Time One to Time Two administrations. Furthermore, it appears that the strongest observed predictor of Time Two skills and practices factor scores is the corresponding Time One skills and practices factor scores, and that Time One beliefs factor scores are not significantly related to the other factors; however, interpretation is problematic for several reasons (e.g., small sample size, insufficient control over administration of professional development). These findings are consistent with research indicating that, even when presented with evidence that a new way is more effective and/or efficient than traditional instruction or decision-making processes, educators are often slow to embrace efforts at process change (e.g., VanDerHeyden & Witt, 2005; Joyce & Showers, 1995).

Of interest is the restriction in range observed in all factor scores from Time One to Time Two administrations, particularly when considering the limiting effect of range restriction on possible correlation coefficients; specifically, as one variable's range decreases with no change in the second variable's range, the maximum possible correlation coefficient is observed to decrease. Put another way, it becomes appropriate to ask whether the observed relationships would be similar if Time One building-level beliefs factor scores fell within the "Disagree" or "Strongly Disagree" range; similarly, would these relationships hold for building-level skills factor scores indicating that no preexisting skill level existed or, conversely, indicating that skill levels were sufficiently high that support was not warranted?

Replication of analyses across implementation efforts, as well as comparison across years of implementation, will be critical to determining the role that initial educator beliefs play in predicting changes in skills and practices.

Limitations and Considerations

Numerous threats to internal and external validity form the lens through which results from this study should be interpreted. Internal validity refers to the degree of control maintained over extraneous variables; thus, threats to internal validity appear in the area of social desirability. External validity relates to the generalizability of this study's results to the population at large; therefore, threats to external validity manifest as populational and sampling biases.

Internal Validity

Social Desirability. Data used for this study came exclusively from multiple self-report instruments, which leads to the possibility of biased participant scores due to social desirability, or the influence of respondents' perceptions of what is socially acceptable upon their survey responses. Indeed, this effect was evident in that all survey administrations displayed a negatively skewed distribution, indicating that the participants selected higher ratings on most items to describe the degree of beliefs, the skill levels, and the frequency of observed practices within their schools. However, given that this is a frequently observed phenomenon in research using self-report scales (e.g., Pallant, 2005), it is unlikely that this effect invalidates the results of this study.

Integrity of Implementation. The amount of control exercised by the Florida PS/RtI Project with respect to integrity of PS/RtI implementation within comparison schools was unavoidably limited during survey administration windows. While to some extent this is true with almost any consideration of controlled policy implementation, the imposition of state-level requirements regarding immediate implementation of PS/RtI made this a large concern for the Project.

Sample Size. While the number of completed surveys was quite large (Beliefs N = 2,430; Skills N = 2,184; and Practices N = 2,140), the reality of error manifesting within nested extraneous variables required that the individual buildings be considered as the unit of analysis. As a result, the applicable sample sizes were N = 62 for Research Question One, and N = 40 for Research Questions Two and Three. It is important to recognize the reduction in statistical power represented by this drastic drop in sample size. This translates as a need for caution when making statements as to the statistical significance of findings, due to the reduced ability to reliably discern the presence of real relationships between variables.

Duration of Implementation. The current study examined relationships and effects observed during the first year of Project implementation; however, the observation that PS/RtI implementation takes 4-6 years (Batsche, Elliott, Graden, Grimes, et al., 2005) implies that a change in the variables critical to successful implementation must occur at some point during those four to six years. This suggests a movement from baseline conditions toward desired outcomes (in essence, a progression of effects) as well as some variability among (and within) those schools attempting implementation, whether due solely to demographic differences or to additional variables acting as barriers to change.

External Validity

Implementation Support. The amount of resources, support, and training offered to demonstration schools by Project staff represents a level of power and reassurance that would not typically be available to average schools throughout the state. As a result, any statements as to the impact of implementation support, particularly professional development, must be made with caution, as generalization of these effects may be exceedingly difficult without adequate support measures (Batsche, Elliott, Graden, Grimes, et al., 2005).

Sampling Issues. The nature of the Project application process makes it possible that some of the matched comparison schools proposed by school districts differ significantly in terms of certain variables (e.g., student and staff demographics, resources available). These differences in demographic characteristics between demonstration schools and other districts/schools throughout the state limit the degree of applicability of this study's results to other areas.

Survey Response Range. As mentioned previously, the building-level survey response ranges for all buildings were positive; that is to say, Beliefs Survey values averaged "Neutral" to "Agree," PS Survey responses indicated a need for some level of support coupled with some level of proficiency, and PP Survey responses indicated that applications of skills occurred from "often" to "always." In addition to the aforementioned issue with social desirability, it is interesting to consider whether or not the findings from this study would be appreciably altered were the building-level score ranges more representative of the allowable response range. Put another way, would the hypothesized relationships between educator variables have been stronger (or weaker) if the survey ranges included lower values, particularly at Time One, or baseline?

Recommendations for Future Research

As stated many times throughout this discussion, the quasi-experimental design used and the preliminary nature of the analyses conducted require that any attempt at interpretation can only be seen as a possible explanation of relationships, and not as a causal link between variables. However, there are some implications for further research that were revealed during the course of this study. The unexpected pattern of survey responses for the Academic Instruction training objective (i.e., above-midpoint response ranges for Beliefs, indicated need for support for Skills, and unexpectedly high frequency reports for Practices) raises questions as to what specific supports were perceived as necessary to improve academic instruction, and in what capacity improvement was being conceptualized by respondents. It is worth noting at this point that some proportion of these inflated initial Beliefs scores can be assumed to originate from selection bias within the schools. Put another way, participation in the PS/RtI Project represented an opportunity to receive high-quality professional development and supports to facilitate implementation of a complex process; however, selection by the parent district for inclusion in the application process would obviously be influenced to a great degree by the perceived willingness of a given school to implement new policies. It is a safe assumption that this willingness would be reflected in initial Beliefs scores for the demonstration schools selected.

Despite the impact of selection bias upon baseline Beliefs scores, there remains a curious pattern to the relationship between scores and requested supports. As such, the following research questions are recommended for consideration:

1) Is there a relationship between educators' self-rated beliefs and indicated support with respect to academic instruction, and the types of support requested and/or considered necessary for successful application of evidence-based instruction?

2) Are the types of support requested by educators, with respect to academic instruction, intended to increase the frequency with which their skills are being applied, or to improve the quality of skills they are expected to employ?

The results from the multiple regression analyses indicate that the combination of baseline beliefs, perceived skill level, and frequency of observed skill application was a strong predictor of perceived skill level at the Time Two survey administration. However, a closer examination seemed to show that the real power in predicting Time Two Skills and Practices scores could be attributed to Time One Skills and Practices, with baseline beliefs factor scores showing little relation to the other factors. Given the design and sample structure of the current study, conclusive interpretation on this point is problematic. As a result, the following research questions should be considered in an effort to investigate this point:

3) Do educators' self-rated beliefs regarding a specific training objective act as a modifier or moderator for the relationship between baseline and post-training perceptions of skills critical to the training objective?

4) Do educators' self-rated beliefs regarding a specific training objective act as a modifier or moderator for the relationship between baseline and post-training reported application frequency of skills critical to the training objective?

The issue of survey response range was of particular interest in the current study; specifically, the observation that, at the building level, Beliefs Survey values averaged "Neutral" to "Agree," PS Survey responses indicated a need for some level of support coupled with some level of proficiency, and PP Survey responses indicated that applications of skills occurred from "often" to "always." Although outside the boundary of data available during the first year of implementation, there is a point of particular importance that should be addressed here. Despite high average Practices scores at baseline and small change from Time One to Time Two administrations, investigation of initial Practices scores during Year Two (after the window of data for the current study) revealed a significant drop in average scores. It is believed that this phenomenon was due to an initial lack of widespread understanding as to the extent of components that constituted a given practice, which resulted in many observers overestimating the occurrences of listed practices during the first year. It is further noted that, as a common understanding of practice definitions was attained throughout Year Two, average frequencies of observed occurrences increased. Regardless of origin, there is a question as to whether or not the observed relationships between educator variables would be stronger (or weaker) if the survey ranges included lower values, particularly at Time One, or baseline. Accordingly, the following research question is recommended to further investigate this issue:

5) How do changes in self-reported levels of educators' beliefs regarding a specific training objective impact the relationship between perceived competence in skills critical to this training objective, from baseline to post-training time points?

The final point of importance is the impact of PS/RtI implementation upon student outcomes. The continuing push for educational reform is driven by the desire to improve academic and behavioral outcomes for students. Tying this important issue to the points addressed within this study, the following research questions are recommended:

6) Is there a relationship between educators' self-rated beliefs and self-rated skills with respect to RtI core training objectives, and students' academic and behavioral outcomes?

7) How do the relationships between educators' self-rated beliefs and skills, and students' academic and behavioral outcomes, change across the first three years of PS/RtI model implementation?

PAGE 106

95 References Anderson, J. (1968). Bureaucracy in education. Baltimore: Johns Hopkins University Press. Ashton, P. T., & Webb, R. B. (1986). Making a Difference: Teachers Sense of Efficacy and Student Achievement Research on Teaching Monograph Series. New York: Longman. Baker, P. (2005). Managing Student Behavior : How Ready are Teachers to Meet the Challenge? American Secondary Education, 33 (3), 51 64. Bandura, A. (1982). Self efficacy mechanism in human agency. American Psychologist, 37 (2), 122 147. Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory Englewood Cliffs, NJ: Prentice Hall. Bandura, A. (1997). Self efficacy in changing societies New York: W. H. Freeman. Batsche, G., Curtis, M. J., Dorman, C., Castillo, J. M., & Porter, L. J. (2007). The Florida Problem Solving/Response to Intervention Model: Implementing a Statewide Initiative. In S.R. Jimerson, M.K. Burns, & A.V. Vanderheyden (Eds.) Handbook of response to intervention: The science and practice of assessment and intervention (pp 3 78 396). New York, NY: Springer.

PAGE 107

96 Batsche, G., Elliott, J., Graden, J.L., Grimes, J., Kovaleski, J.F., Prasse, D., et al. (2005). Response to Intervention: Policy Considerations and Implementation. Alexandria, VA: National Association of State Directors of Special Education. Batsche, G.M., Elliot, J., Schrag, J., & Tilly, W.D. (2005, October 25). Response to intervention: The opportunity and the reality Presented at the National Association of State Directors of Special Education Annual Conference, Minneap olis, MN. Berman, P., McLaughlin, M., Bass, G., Pauly, E., & Zellman, G. (1977). Federal programs supporting educational change. Vol. 7: Factors affecting implementation and continuation Santa Monica, CA: The Rand Corporation. Bernier, N. (1981). Beyond instructional context identification: Some thoughts for extending the analysis of deliberate education. In J. L. Green & C. Wallat (Eds.), Ethnography and language in educational settings (pp. 291 302). Norwood, NJ: Ablex. Bidwell, C., & Kasarda, J. (197 5). School district organization and student achievement. American Sociological Review, 40, 55 70. Bloom, B. (1981). All our children learning. A primer for parents, teachers, and other educators. New York: McGraw Hill. Bossert, S. T. (1979). Tasks and soc ial relationships in classrooms: A study of instructional organization and its consequences. New York: Cambridge University Press. Brim, O. G. (1975). Macro structural influences on child development and the need for childhood social indicators. American J ournal of Orthopsychiatry, 45 516 524.


Bronfenbrenner, U. (1976). The experimental ecology of education. Educational Researcher, 5, 5-15.
Bronfenbrenner, U. (1977). Toward an experimental ecology of human development. American Psychologist, 32, 513-531.
Bronfenbrenner, U. (1992). Ecological systems theory. In R. Vasta (Ed.), Annals of child development. Six theories of child development: Revised formulations and current issues (pp. 187-249). London: Jessica Kingsley.
Brookover, W. B., Beady, C., Flood, P., Schweitzer, J., & Wisenbaker, J. (1979). School social systems and student achievement: Schools can make a difference. New York: Praeger.
Burns, M., Appleton, J. J., & Stehouwer, J. D. (2005). Meta-analytic review of responsiveness-to-intervention research: Examining field-based and research-implemented models. Journal of Psychoeducational Assessment, 23, 381-394.
Burns, M. K., Griffiths, A., Parson, L. B., Tilly, W. D., & VanDerHeyden, A. M. (2007). Response to intervention: Research for practice. Alexandria, VA: National Association of State Directors of Special Education.
Cahen, L., Filby, N., McCutcheon, G., & Kyle, D. (1983). Class size and instruction. White Plains, NY: Longman.
Callender, W. C. (2006, March). Summary evaluation of Idaho's statewide RTI approach. Paper presented at the National Association of School Psychologists Annual Convention, Anaheim, CA.
Cameron, M. (2005). The coach in the classroom. Northwest Education, 10(4), 11-12.


Carew, J. V., & Lightfoot, S. L. (1979). Beyond bias: Perspectives on classrooms. Cambridge, MA: Harvard University Press.
Case, L. P., Speece, D. L., & Molloy, D. E. (2003). The validity of a response-to-instruction paradigm to identify reading disabilities: A longitudinal analysis of individual differences and contextual factors. School Psychology Review, 32, 557-582.
Christenson, S., Abery, B., & Weinberg, R. A. (1986). An alternative model for the delivery of psychology in the school community. In S. N. Elliott & J. C. Witt (Eds.), The delivery of psychological services in schools: Concepts, processes, and issues (pp. 349-391). Hillsdale, NJ: Lawrence Erlbaum.
Christenson, S. L., & Anderson, A. R. (2002). Commentary: The centrality of the learning context for students' academic enabler skills. School Psychology Review, 31, 378-393.
Cichon, D., & Koff, R. H. (1978, March). The teaching events stress inventory. Paper presented at the meeting of the American Educational Research Association, Toronto, Canada.
Cohen, E. G. (1979, September). The desegregated school: Problems in status, power and interracial climate. Paper presented at the meeting of the American Psychological Association, New York.
Cohen, E. G. (1972). Sociology and the classroom: Setting the conditions for teacher-student interaction. Review of Educational Research, 42, 441-452.
Cohen, J. (1969). Statistical power analysis for the behavioral sciences (1st ed.). Hillsdale, NJ: Lawrence Erlbaum Associates. (2nd ed., 1988.)


Cooper, H. M., Burger, J. M., & Seymour, G. E. (1979). Classroom context and student ability influences on teacher perceptions of classroom control. American Educational Research Journal, 16, 189-196.
Crum, C. S., & Hellman, G. V. (2009). School board decision making in the era of No Child Left Behind. Educational Planning, 18(1), 11-25.
Curtis, M. J., & Metz, L. W. (1986). System-level intervention in a school for handicapped children. School Psychology Review, 15, 510-518.
Curtis, M. J., & Stollar, S. A. (2002). Best practices in system-level change. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (Vol. 1, pp. 223-234). Bethesda, MD: National Association of School Psychologists Publications.
Donovan, M. S., & Cross, C. T. (Eds.). (2002). Minority students in special and gifted education. Washington, DC: National Academy Press.
Dreeben, R. (1973). The school as a workplace. In R. Travers (Ed.), Second handbook of research on teaching (pp. 450-473). Chicago: Rand McNally.
Dweck, C. (1976). Children's interpretation of evaluative feedback: The effect of social cues on learned helplessness. Merrill-Palmer Quarterly, 22, 105-110.
Dweck, C. S., Davidson, W., Nelson, S., & Enna, B. (1978). Sex differences in learned helplessness: II. The contingencies of evaluative feedback in the classroom; III. An experimental analysis. Developmental Psychology, 14, 268-276.
Ellett, C. D., & Garland, J. S. (1987). Teacher evaluation practices in our largest school districts: Are they measuring up to state-of-the-art systems? Journal of Personnel Evaluation in Education, 1(1), 69-92.


Ellett, C. D., & Masters, J. A. (1977). The structure of teacher attitude toward dimensions of their working environment: A factor analysis of the School Survey and its implications for instrument validity. Paper presented at the meeting of the Georgia Educational Research Association, Atlanta.
Fenstermacher, G. D. (1979). A philosophical consideration of recent research on teacher effectiveness. In L. S. Shulman (Ed.), Review of research in education (Vol. 6, pp. 157-185). Itasca, IL: F. E. Peacock.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231).
Fletcher, J. M., Coulter, W. A., Reschly, D. J., & Vaughn, S. (2004). Alternative approaches to definition and identification of learning disabilities: Some questions and answers. Annals of Dyslexia, 54(2), 304-331.
Fletcher, J. M., & Lyon, G. R. (1998). Reading: A research-based approach. In W. M. Evers (Ed.), What's gone wrong in America's classrooms (pp. 50-72). Stanford: Hoover Institution Press.
Florida Department of Education (2008). Response to Instruction/Intervention (RtI) implementation plan. Tallahassee, FL: Author.
Frieze, I., Fisher, J., Hanusa, B., McHugh, M., & Valle, V. (1979). Attributions of the causes of success and failure as internal and external barriers to achievement in women. In J. Sherman & F. Denmark (Eds.), Psychology of women: Future directions of research (pp. 124-162). New York: Psychological Dimensions.


Fullan, M. (1997). The challenge of school change. Arlington Heights, IL: IRI/SkyLight Training and Publishing.
Garrett, G. (1977). The effect of sex as a variable in teacher perception.
Gehrke, N. (1981). A grounded theory study of beginning teachers' role personalization through reference group relations. Journal of Teacher Education, 32(6), 34-38.
Gerber, M., Jimenez, T., Leafstedt, J., Villaruz, J., Richards, C., & English, J. (2004). English reading effects of small-group intensive intervention in Spanish for K-1 English learners. Learning Disabilities Research & Practice, 19, 239-251.
Glass, G., & Smith, M. (1979). Meta-analysis of research on class size and achievement. Educational Evaluation and Policy Analysis, 1(1), 2-16.
Goodlad, J. I. (1975). The dynamics of educational change: Toward responsive schools. New York: McGraw-Hill.
Gresham, F. (2001, August). Responsiveness to intervention: An alternative approach to the identification of learning disabilities. In R. Bradley, L. Danielson, & D. P. Hallahan (Eds.), Identification of learning disabilities: Research into practice (pp. 467-519). Mahwah, NJ: Erlbaum.
Gross, N., & Herriott, R. E. (1965). Staff leadership in public schools. New York: Wiley.
Hall, G. E., & Hord, S. M. (2006). Implementing change: Patterns, principles, and potholes. Boston: Allyn and Bacon.
Hamman, D., Olivarez, A., Jr., Lesley, M., Button, K., Chan, Y.-M., Griffith, R., & Elliott, S. (2006). Pedagogical influence of interaction with cooperating teachers on the efficacy beliefs of student teachers. The Teacher Educator, 42(1), 15-29.
Hargreaves, D. H. (1972). Staffroom relationships. New Society, 32, 434-437.


Healy, K., Vanderwood, M., & Edelston, D. (2005). Early literacy interventions for English language learners: Support for an RtI model. The California School Psychologist, 10, 55-63.
Heller, K. A., Holtzman, W. H., & Messick, S. (Eds.). (1982). Placing children in special education: A strategy for equity. Washington, DC: National Academy Press.
Hoagwood, K., & Johnson, J. (2003). School psychology: A public health framework I. From evidence-based practices to evidence-based policies. Journal of School Psychology, 41, 3-21.
Holland, J. (1973). Making vocational choices: A theory of careers. Englewood Cliffs, NJ: Prentice Hall.
Hornstein, H. A., Callahan, D. M., Fisch, E., & Benedict, B. A. (1968). Influence and satisfaction in organization: A replication. Sociology of Education, 41(4), 380-389.
Hsia, H. M., & Beyer, M. (2000). System change through state challenge activities: Approaches and products. Juvenile Justice Bulletin. Washington, DC: Department of Justice, Office of Juvenile Justice and Delinquency Prevention (NCJ 177625).
Ikeda, M., Rahn-Blakeslee, A., Niebling, B., Allison, R., & Stumme, J. (2006). Evaluating evidence-based practice in response to intervention systems. Communiqué, 34(8).
Individuals with Disabilities Education Improvement Act, U.S.C. H.R. 1350 (2004).
Jackson, P. W. (1968). Life in classrooms. New York: Holt, Rinehart and Winston.


Jacob, S., & Hawthorne, T. S. (2003). Ethics and law for school psychologists (4th ed.). Hoboken, NJ: John Wiley & Sons, Inc.
Janney, R., Snell, M., Beers, M., & Raynes, M. (1995). Integrating students with moderate and severe disabilities: Classroom teachers' beliefs and attitudes about implementing an educational change. Educational Administration Quarterly, 31(1), 86-114.
Johnson, D. W., & Johnson, R. T. (1974). Instructional goal structure: Cooperative, competitive or individualistic. Review of Educational Research, 44, 213-240.
Joyce, B., & Showers, B. (1988, 1995). Student achievement through staff development. White Plains, NY: Longman Press.
Kalis, M. C. (1980). Teaching experience: Its effect on school climate, teacher morale. NASSP Bulletin, 64, 89-102.
Kavale, K. A., & Forness, S. R. (1999). Effectiveness of special education. In C. R. Reynolds & T. B. Gutkin (Eds.), Handbook of school psychology (pp. 984-1024). Austin, TX: Pro-Ed.
Knoff, H., & Batsche, G. M. (1995). Project ACHIEVE: Analyzing a school reform process for at-risk and underachieving students. School Psychology Review, 24(4), 579-603.
Kyriakides, L. (2005). Drawing from teacher effectiveness research and research into teacher interpersonal behaviour to establish a teacher evaluation system: A study on the use of student ratings to evaluate teacher behavior. Journal of Classroom Interaction, 40(2), 44-66.


Kyriakides, L., & Creemers, B. P. M. (2008). A longitudinal study on the stability over time of school and teacher effects on student outcomes. Oxford Review of Education, 34(5), 521-545.
Laosa, L. M. (1982). School, occupation, culture, and family: The impact of parental schooling on the parent-child relationship. Journal of Educational Psychology, 74(6), 791-827.
Larkin, R. W. (1973). Contextual influences on teacher leadership styles. Sociology of Education, 46, 471-479.
Lau, M. Y., Sieler, J. D., Muyskens, P., Canter, A., VanKeuren, B., & Marston, D. (2006). Perspectives on the use of the Problem Solving Model from the viewpoint of a school psychologist, administrator, and teacher from a large Midwest urban district. Psychology in the Schools, 43, 117-127.
Leacock, E. (1969). Teaching and learning in city schools: A comparative study. New York: Basic Books.
Lee, J., Grigg, W., & Dion, G. (2007). The Nation's Report Card: Mathematics 2007 (NCES 2007-494). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.
Lee, J., Grigg, W., & Donahue, P. (2007). The Nation's Report Card: Reading 2007 (NCES 2007-496). Washington, DC: National Center for Education Statistics, Institute of Education Sciences, U.S. Department of Education.
Leithwood, K. A., & Montgomery, D. J. (1982). The role of the elementary school principal in program improvement. Review of Educational Research, 52(3), 309-339.


Lewin, K. (1943). Defining the "field at a given time." Psychological Review, 50, 292-310. Republished in Resolving social conflicts & field theory in social science. Washington, DC: American Psychological Association, 1997.
Leyser, Y. (2002). Choices of instructional practices and efficacy beliefs of Israeli general and special educators: A cross-cultural research initiative. Teacher Education and Special Education, 25(2), 154-167.
Lightfoot, S. (1973). Politics and reasoning: Through the eyes of teachers and children. Harvard Educational Review, 43(2), 197-224.
Loup, K. S., Garland, J. S., Ellett, C. D., & Rugutt, J. K. (1997). Ten years later: Findings from a study of teacher evaluation practices in our 100 largest school districts. Journal of Personnel Evaluation in Education, 10(3), 203-226.
Maccoby, E. E., & Jacklin, C. N. (1974). The psychology of sex differences. Stanford, CA: Stanford University Press.
McDermott, R. P. (1977). Social relations as contexts for learning in school. Harvard Educational Review, 47, 202-215.
McPherson, G. H. (1972). Small town teacher. Cambridge, MA: Harvard University Press.
Medley, D. M. (1978). Alternative assessment strategies. Journal of Teacher Education, 29, 38-42.
Metz, M. H. (1978). Classrooms and corridors: The crisis of authority in desegregated secondary schools. Berkeley, CA: University of California Press.


Meyer, J., & Cohen, E. (1971). The impact of the open space school upon teacher influence and autonomy: The effects of an organizational innovation. Stanford, CA: Stanford University.
Mosenthal, P. (1984). The effect of classroom ideology on children's production of narrative text. American Educational Research Journal, 21(3), 679-689.
National Center for Education Statistics (2005). The condition of education 2005 (NCES 2005-094). Washington, DC: U.S. Government Printing Office.
NCLB (2002). No Child Left Behind Act, U.S.C., 115 STAT. 1426.
Noddings, N. (2007). When school reform goes wrong. New York: Teachers College Press.
Noell, G. H., Witt, J. C., Slider, N. J., Connell, J. E., Gatti, S. L., Williams, K. L., et al. (2005). Treatment implementation following behavioral consultation in schools: A comparison of three follow-up strategies. School Psychology Review, 34, 87-106.
Passow, A. H. (1990). How it happened, wave by wave: Whither (or wither?) school reform? In S. B. Bacharach (Ed.), Educational reform: Making sense of it all (pp. 10-19). Needham Heights, MA: Allyn and Bacon.
Peterson, K. D. (1987). Teacher evaluation with multiple and variable lines of evidence. American Educational Research Journal, 24, 311-317.
Porter, L. J., Batsche, G., Curtis, M. J., Castillo, J. M., & Witte, R. (2006, March). Problem solving and response to intervention: School psychologists' beliefs, practices, and training needs. Paper presented at the National Association of School Psychologists Annual Convention, Anaheim, CA.


President's Commission on Excellence in Special Education (2002). A new era: Revitalizing special education for children and their families (U.S. Department of Education Contract No. ED-02-PO-0791). Washington, DC: U.S. Department of Education.
Raghunathan, T. E., Rosenthal, R., & Rubin, D. B. (1996). Comparing correlated but nonoverlapping correlations. Psychological Methods, 1(1), 178-183.
Rosenholtz, S. J., & Wilson, B. (1980). The effect of classroom structure on shared perceptions of ability. American Educational Research Journal, 17, 75-82.
Rutter, M., Maughan, B., Mortimore, P., & Ouston, J., with Smith, A. (1979). Fifteen thousand hours: Secondary schools and their effects on children. Cambridge, MA: Harvard University Press.
Sarason, S. (1982). Problems of change and the culture of the school. New York: Allyn & Bacon.
Sarason, S. B. (1990). The predictable failure of school reform. San Francisco: Jossey-Bass.
Senge, P. M., Kleiner, A., Roberts, C., Ross, R. B., & Smith, B. J. (1994). The fifth discipline fieldbook. New York: Doubleday.
Showers, B., Joyce, B., & Bennett, B. (1987). Synthesis of research on staff development: A framework for future study and a state-of-the-art analysis. Educational Leadership, 45(3), 77-87.
Shulman, L. S., & Lanier, J. E. (1977). The Institute for Research on Teaching: An overview. Journal of Teacher Education, 28, 44-49.


Stanovich, K. E. (1999). The sociopsychometrics of learning disabilities. Journal of Learning Disabilities, 32(4), 350-361.
Stanovich, P., & Jordan, A. (1998). Canadian teachers' and principals' beliefs about inclusive education as predictors of effective teaching in heterogeneous classrooms. The Elementary School Journal, 98(3), 221-238.
Stollar, S., & Graden, J. (2006, March). Evaluation of RTI within the Ohio Integrated Systems Model. Paper presented at the National Association of School Psychologists Annual Convention, Anaheim, CA.
Stollar, S. A., Schaeffer, K. R., Skelton, S. M., Stine, K. C., Lateer-Huhn, A., & Poth, R. L. (2008). Best practices in professional development: An integrated three-tier model of academic and behavior supports. In A. Thomas & J. P. Grimes (Eds.), Best practices in school psychology V (pp. 875-886). Bethesda, MD: National Association of School Psychologists.
Stronge, J. H., & Ostrander, L. P. (1997). Client surveys in teacher evaluation. In J. Stronge (Ed.), Evaluating teaching: A guide to current thinking and best practice (pp. 129-161). California: Corwin Press.
Stronge, J. H., Helm, V. M., & Tucker, P. D. (1995). Evaluation handbook for professional support personnel. Kalamazoo: Western Michigan University, Center for Research on Educational Accountability and Teacher Evaluation.
Super, D. E. (1970). Work values inventory. Boston: Houghton Mifflin.


Tilly, W. D. (2002). Best practices in school psychology as a problem-solving enterprise. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 21-36). Bethesda, MD: National Association of School Psychologists.
Tilly, W. D. (2003, December). How many tiers are needed for successful prevention and early intervention? Heartland Area Education Agency's evolution from four to three tiers. Paper presented at the National Research Center on Learning Disabilities Responsiveness-to-Intervention Symposium, Kansas City, MO.
Torgesen, J. K. (2002). The prevention of reading difficulties. Journal of School Psychology, 40, 7-26.
Torgesen, J. K. (2007). Report on reduction of students identified as learning disabled in Reading First schools. Tallahassee, FL: Florida State University, Florida Center for Reading Research.
VanDerHeyden, A. M., & Witt, J. C. (2005). Quantifying context in assessment: Capturing the effect of base rates on teacher referral and a problem-solving model of identification. School Psychology Review, 34, 161-183.
Vaughn, S., & Fuchs, L. S. (2003). Redefining learning disabilities as inadequate response to instruction: The promise and potential problems. Learning Disabilities Research & Practice, 18(3), 137-146.
Vaughn, S., Linan-Thompson, S., & Hickman, P. (2003). Response to instruction as a means of identifying students with reading/learning disabilities. Exceptional Children, 69, 391-409.
Weiner, B. (1980). The role of affect in rational (attributional) approaches to human motivation. Educational Researcher, 9, 4-11.
Wise, A. (1979). Legislated learning: The bureaucratization of the American classroom. Berkeley: University of California Press.
Yost, R. (2002). "I think I can": Mentoring as a means of enhancing teacher efficacy. The Clearing House, 75(4), 195-197.


Zins, J. E., & Ponti, C. R. (1996). The influence of direct training in problem solving on consultee problem clarification skills and attributions. Remedial and Special Education, 17, 370-376.


Appendices


Appendix A: Demonstration District Mini-Grant Application and Scoring Rubric

TO: School Districts, State of Florida
FROM: Florida Problem Solving/Response to Intervention Statewide Project
SUBJECT: Problem Solving/Response to Intervention (PS/RtI) Demonstration Site Mini-Grant Application Procedures

Background

The No Child Left Behind Act (NCLB) and the Individuals with Disabilities Education Improvement Act (IDEIA) of 2004 embrace the use of Problem Solving and Response to Intervention (Instruction) (PS/RtI) to ensure that ALL students achieve state-approved grade-level benchmarks. In addition, the PS/RtI method has become part of the eligibility requirements for students with disabilities (effective October 13, 2006). The Florida Department of Education (FLDOE) has funded the Florida Problem Solving/Response to Intervention Project to ensure that all districts in Florida have access to high-quality training in the skills necessary to implement this model. The Florida Problem Solving/Response to Intervention Project is funded by a grant from the Florida Department of Education and is administered through the University of South Florida.

The purposes of the FLDOE PS/RtI Project are twofold: 1) organize and deliver statewide training in PS/RtI, and 2) evaluate the impact of the PS/RtI model on district, building, and student outcomes. The evaluation of the impact of PS/RtI will take place in pilot school sites in demonstration districts throughout Florida. Demonstration districts will be selected from among those districts completing a Mini-Grant Application. The purpose of this memo is to disseminate information regarding the Mini-Grant Application process.

General Information

Eligible Applicants: Any Florida public school district is eligible to apply to become a PS/RtI Demonstration District.

Pilot Schools: Each district may request funding to support a maximum of six (6) pilot schools within the district. Proposed pilot schools within the district must house at least grades K-3. Demonstration districts may include Reading First schools, Positive Behavior Supports schools, or schools participating in other state or local initiatives. The district must identify one (1) comparison school for each pilot school proposed in the application. The comparison school must contain the same grade levels and share similar student demographics as the pilot school(s).


The comparison school data will be used to compare the impact of the PS/RtI Project in schools with and without project implementation.

Start Date: It is estimated that initial implementation activities with the demonstration sites will begin in the spring of 2007, with full implementation starting with the 2007-2008 school year.

Application Deadline: Complete applications must be received by April 1, 2007. Mail the original and 5 copies to:

Judith Hyde
University of South Florida
4202 E. Fowler Avenue, EDU 162
Tampa, FL 33620

No FAX or email copies of proposals will be accepted.

Informational Meetings: All districts interested in completing a mini-grant application to become a demonstration district are invited to attend one of three orientation/informational meetings to be held in the north, central, and south regions of the state (see Appendix A). Each district may send up to three people, including the individual who will be primarily responsible for facilitating the grant-writing team, one administrative representative from general education, and one administrative representative from special education. Each meeting is scheduled from 9:00 a.m. to 1:00 p.m. The meeting agenda will include presentations on the Florida Problem Solving/Response to Intervention Project, the responsibilities of participating districts, and procedures for completing the mini-grant application. Mini-grant application requirements are described below. District representatives are encouraged to review the application requirements prior to the meeting. A question-and-answer (Q and A) session will be included in each meeting.

NOTE: Pre-registration is required in order to attend one of the Informational Meetings. To pre-register, go to http://floridarti.usf.edu/biddersconference/, click on Registration, complete the form, and click on Submit Registration. If you encounter any difficulties with pre-registration, contact Judi Hyde at JHyde@tempest.coedu.usf.edu or 813-974-7448.

The schedule for these meetings is as follows:

Monday, February 26 - Ft. Lauderdale
Embassy Suites
1100 Southeast 17th Street
Directions: http://www.embassysuites.com/en/es/hotels/maps_directions.jhtml?ctyhocn=FLLSOES


954-527-2700

Thursday, March 1 - Tallahassee
Doubletree Hotel
101 S. Adams St.
Directions: http://doubletree.hilton.com/en/dt/hotels/index.jhtml?ctyhocn=THLAPDT
850-224-5000

Monday, March 5 - Orlando
Orlando Airport Marriott
7499 Augusta National Drive
Directions: http://marriott.com/property/propertypage/mcoap
407-851-9000

Attendance at one of the regional meetings is strongly encouraged but not required of districts planning to submit a mini-grant application.

Contact Person: For more information about application procedures contact Clark Dorman, Project Leader, at Dorman@coedu.usf.edu or 813-391-3059.


Overview of the Demonstration Site Project

The demonstration site component of the Statewide PS/RtI Project is designed to provide training, technical assistance, and implementation support to individual schools within school districts. Statewide Project staff will conduct the training, provide technical assistance, and provide other training and implementation supports to the pilot schools. Pilot schools, in turn, will serve as evaluation sites to determine the impact of this project on student and other district and building outcomes.

The demonstration site component of the Project will rely on a coaching and trainers method for implementation. State Project staff will serve as the external coaches to the schools. Funding will be provided for districts to hire one internal coach for up to three (3) pilot schools. Each school will create a school-based implementation team consisting of six to eight members that includes representatives of general education, special education, instructional support, and student services. The building administrator must be included as a member of the team. Building teams will learn how to develop a building implementation plan. The school-based team and the building coach will become trainers and coaches for the building staff and will be responsible for building-wide implementation.

I. Services Provided to Demonstration Schools by the Statewide Project Staff

1. Training and technical assistance for school-based teams to implement the Problem Solving/Response to Intervention model in pilot schools
2. Funding for each selected demonstration district for up to two coaches (one for each three schools) to complement training and provide technical assistance to pilot school sites in implementing PS/RtI, data collection and analysis, and dissemination of student outcome data
3. Training of, and technical assistance and support for, the coaches and building administrators
4. Training, technical assistance, and support for the use of school-based data to develop, implement, and evaluate core, supplemental, and intensive instruction/intervention
5. Training and technical assistance in the use of technology to organize and display building, classroom, and student-based data
6. Training and technical assistance in the use of technology to monitor intervention implementation, support data-based decision making, and track student progress
7. Support integration of existing and potential state-level, district, and school initiatives to facilitate implementation of DOE Strategic Imperative #3 ("Improve students' rates of learning") and Strategic Imperative #5 ("Increase the quantity and improve the quality of education options")
8. Provide web-based programs to collect and organize data from the demonstration sites. Internal coaches will be responsible for submitting demonstration site data to the web-based programs


II. Expectations of Demonstration Districts and Pilot Sites

Each demonstration district may identify up to six (6) pilot schools and an equal number of comparison schools within the district. In order to receive the services delineated above, districts and their pilot schools submitting an application under this project initiative must agree to the requirements set forth in Commitments Needed for Success in Appendix B. These include certain district- and school-level administrative, curricular, financial, and personnel commitments, as well as parent involvement, data collection, and reporting requirements. Each proposed pilot school must have a comparison school that is similar to it on key demographic variables. Comparison schools will be asked only to participate in certain data collection activities, and must agree to participate in these activities. Coaches will support the collection of data in both pilot and comparison schools.

III. Funding

Each district may submit a mini-grant application for up to $100,000.00 per year in funding for a maximum of three years. The mini-grant is intended to support the employment of district-based coaches and training activities. Districts must commit to a minimum of three years of project implementation. Each application is for one year of funding. Continuing applications will be required each year for years 2 and 3 of the funding cycle. Continuation of funding for years 2 and 3 will be contingent on fulfillment of expectations by the district and pilot and comparison schools.

Mini-Grant Application Requirements

Each proposal must address each of the five components specified below in a narrative format, in the order in which they are presented, for a) the demonstration district, and b) each of up to six (6) proposed pilot schools within the district. The total narrative (excluding demographic data required in item 2 below) must be double-spaced using a 12-point font and should not exceed 25 pages in length. Documentation required in 1 and 2 below should be included in appendices to the application and does not count against the 25-page limit.


1. District and Pilot Schools' Commitment: Proposals must outline specific commitments to implementing PS/RtI as a way of work and the activities (i) the district, and (ii) pilot schools will carry out in order to meet the requirements specified in Appendix B. Letters of agreement/commitment from the following individuals must be included in the grant application. (See Appendix B for the minimum required content of these letters.)
a) District Superintendent
b) Assistant Superintendent for Curriculum and Instruction
c) Director of Elementary Education
d) Director of Exceptional Student Education
e) Director(s) of district/school-wide Reading First and Positive Behavior Support Programs (if applicable)
f) Principal of each of the proposed pilot schools
g) Principal of each comparison school, to provide data requested by Project Staff

2. District, Pilot, and Comparison Schools' Demographic Data: Proposals must include an outline of:
a) District demographic data (see Appendix C, Demonstration District Demographic Profile)
b) Each proposed pilot school's demographic data (see Appendix D, Demonstration Pilot Schools Demographic Profile), and
c) Each comparison school's demographic data (see Appendix E, Comparison School Demographic Profile)
(Appendices C, D, and E outline the minimum required content for this section.)

3. Statement of Need and Expected Outcomes: Proposals must, for each pilot school,
a) Describe the school's needs (particularly student academic and/or behavioral needs) that will be addressed through participation in the PS/RtI project, including specific gaps, barriers, or weaknesses
b) Indicate how implementation of the PS/RtI model would impact the academic and/or behavioral outcomes of students in each pilot school
c) Identify measurable student and school outcomes, tied to the identified needs, that will result from participation as a pilot school site
d) Identify outcomes for specific target populations or school goals, including over-representation of minority students in special programs, low-SES and LEP students, and/or D/F school status


4. District and Pilot Schools' Experience with Initiatives and Programs: Proposals must describe the district's and each pilot school's current and/or previous level of involvement in, and extent of implementation (e.g., beginning, intermediate, fully implementing) of, academic and/or behavioral initiatives and programs (e.g., Just Read Florida, Positive Behavioral Support). Include information for any reading initiatives implemented within the last five years in the district and in each proposed pilot school. Specify any existing curriculum-based measures (e.g., DIBELS, CBM Math) or data collection tools (e.g., PMRN, SWIS, AIMSweb) currently in use. In addition, discuss any involvement the district and each proposed pilot school has had with the following FLDOE projects/initiatives:

- Continuous Improvement Model (CIM)
- Reading First
- Just Read Florida
- Voluntary Pre-K (VPK) programs
- Positive Behavior Support
- PS/RtI

Describe any other educational reform initiatives or elements of the above initiatives in which the district or school has been involved within the past five years.

5. District Personnel Resources and Technology: Proposals must, for the district and each proposed pilot school:
a) Identify personnel (e.g., teachers, student support staff, and administrative staff) who will be assigned to this specific initiative at the district level and in each specific pilot school site; identify one coach for each three pilot schools
b) Identify percent FTE each will be assigned
c) Identify experience/qualifications to support implementation of the PS/RtI initiative
d) Include a brief vita for each of the individuals identified as potential coaches in (a) above in an appendix to the application
e) Briefly describe the technology resources at the building or district levels that will be used in support of this initiative. In particular, describe any data management systems that will be used (see Appendix B)


The Application Process

Only one (1) mini-grant application will be accepted from each district. The Application Packet should include:

1) A Cover Letter from the District Superintendent indicating a desire for the district to participate in the PS/RtI Project

2) The School District's response to relevant components of the proposal as specified under Proposal Requirements:
- Component 1: District Commitment
- Component 2: District Demographic Data
- Component 4: District and School Experience with Initiatives and Programs
- Component 5: Personnel Resources and Technology
- Letters of Agreement/Commitment as described above in sections 1.a) through 1.g)

3) Pilot Schools' Responses: A response for each proposed pilot school (up to six schools) to relevant components of the proposal as specified under Proposal Requirements:
- Component 1: Pilot School Commitment
- Component 2: Pilot School Demographic Data and Comparison School Demographic Data
- Component 3: Statement of Need and Expected Outcomes for the Pilot School
- Component 4: Pilot School's Experience with Initiatives and Programs
- Component 5: Personnel Resources and Technology


Proposal Evaluation Scoring Guide

Total points awarded will be an important consideration in the selection of demonstration districts. However, it also is important that a diversity of students, schools, and districts be represented in the demonstration districts and their pilot schools. Therefore, after all applications have been evaluated against the criteria below and have received a final score from 0 to 175, additional factors will be considered prior to the selection of sites. Districts and pilot schools will be selected to include sites that are diverse with respect to:

1. Size of districts (i.e., small, medium, and large)
2. Geographic location
3. Student population demographics
4. Inclusion of D/F schools

The application from each district will be evaluated using the Proposal Evaluation Form according to the following criteria:

1. District and Pilot Schools' Commitment (50 points): The proposal demonstrates clear administrative, programmatic, and fiscal commitment (including the required letters of commitment) to fully implementing PS/RtI and a capacity to fulfill the demonstration site requirements as outlined in Appendix B. (Note: District = 20, mean rating across pilot schools = 30)

2. District and Pilot and Comparison Schools' Demographic Data (30 points): The proposal provides detailed and current demographic data for the district and each proposed pilot school as required in Appendices C, D, and E respectively. It provides a clear picture of the district's and pilot and comparison schools' status on the indicators given. (Note: District = 10, mean rating across pilot schools = 15, mean rating across comparison schools = 5)

3. Statement of Need and Expected Outcomes (35 points): The proposal clearly defines each pilot school's needs that will be addressed through participation as demonstration sites and provides convincing evidence that without assistance from the project, these needs would not be met. The proposal also delineates projected student and school outcomes, including outcomes for specific target populations, that: a) are measurable, b) are clearly linked to the identified needs, and c) demonstrate an increased capacity to support students' academic and behavioral performance in the general education environment. (Note: Mean rating across pilot schools = 35)


4. District and School Experience with Initiatives and Programs (20 points): The proposal describes in detail the level of district and school involvement in academic and/or behavioral initiatives and programs, resulting in a comprehensive picture of the district's and each pilot school's current systemic capacity. (Note: District = 10, mean rating across pilot schools = 10)

5. District Personnel Resources and Technology (15 points): The proposal clearly identifies personnel assigned to the PS/RtI initiative at a) the district level, and b) each proposed pilot school site, and the percent FTE each is assigned to the initiative. It provides a clear picture of personnel qualifications and experience to support implementation of PS/RtI. Technology resources and a data management system to support the initiative at the district and school site level are clearly delineated. (Note: District = 6, mean rating across pilot schools = 9)

6. Inclusion of D/F Schools (25 points): D or F schools are represented among the proposed pilot school sites.

Total Possible Score = 175 points
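The point weights above combine a district-level rating with mean ratings across the proposed pilot schools (and, for criterion 2, the comparison schools). The following sketch (Python) makes that arithmetic concrete; the dictionary keys and data structure are invented for illustration, and only the point weights come from the rubric itself.

    from statistics import mean

    def score_application(district, pilots, comparisons, df_points):
        """Combine rubric ratings into criterion subtotals and a 0-175 total.

        district:    dict of district-level ratings (hypothetical key names)
        pilots:      list of per-pilot-school rating dicts (up to six schools)
        comparisons: list of comparison-school demographic ratings, 0-5 each
        df_points:   points awarded for inclusion of D/F schools, 0-25
        """
        subtotals = {
            # 1. Commitment: district (0-20) + mean pilot (0-30) = 50 points
            "commitment": district["commitment"] + mean(p["commitment"] for p in pilots),
            # 2. Demographics: district (0-10) + mean pilot (0-15) + mean comparison (0-5) = 30
            "demographics": (district["demographics"]
                             + mean(p["demographics"] for p in pilots)
                             + mean(comparisons)),
            # 3. Need and expected outcomes: mean pilot rating only (0-35)
            "need_outcomes": mean(p["need_outcomes"] for p in pilots),
            # 4. Experience with initiatives: district (0-10) + mean pilot (0-10) = 20
            "experience": district["experience"] + mean(p["experience"] for p in pilots),
            # 5. Personnel and technology: district (0-6) + mean pilot (0-9) = 15
            "personnel": district["personnel"] + mean(p["personnel"] for p in pilots),
            # 6. Inclusion of D/F schools: flat 0-25 rating
            "df_schools": df_points,
        }
        return subtotals, sum(subtotals.values())

For instance, a district rated 18 on commitment whose three pilot schools average 25 would earn 43 of the 50 available commitment points.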


APPENDIX A

PS/RtI Regional Areas


APPENDIX B

Commitments Required for Success

Demonstration District Administration will commit to:

1. Developing and implementing a plan to ensure that general education, special education, and other program personnel work together at the district level to effectuate the successful implementation of PS/RtI in the district pilot schools
2. Assigning district personnel with the requisite qualifications and experience to the PS/RtI initiative to support district coordination and implementation of the initiative across the pilot school sites
3. Putting in place a district-level leadership team to help pilot schools with the implementation of the PS/RtI initiative
4. Implementing evidence-based practices to support learning of all students, including those at risk and ESE students, to achieve AYP and Florida's A+ Education Plan
5. Designating funds/resources to implement research-based supplemental instruction and interventions to support students who do not attain expected grade-level outcomes in reading and math
6. Designating resources to adequately support PS/RtI implementation at both the district and pilot school level, including faculty and staff, time, materials for screening, assessment, and interventions, and financial support for scientifically based progress monitoring software (e.g., AIMSweb or DIBELS)
7. Providing funds/resources (including time) for professional development of district-level personnel and pilot school teachers and staff in PS/RtI, data collection and management, and data analysis and interpretation
8. Having in place the technological resources and infrastructure, including personnel and a data management system, to ensure ease of access to student performance data by school-level and project personnel and to support the PS/RtI initiative
9. Providing access to district- and state-level student performance data for school-level and project reporting purposes
10. Developing and implementing a plan to ensure parent involvement with PS/RtI efforts at the district and pilot school levels
11. Reviewing the district's policies and procedures for general and exceptional student education to ensure that they are consistent with PS/RtI

Pilot School Principal and Administrative Team will commit to:

1. Implementing PS/RtI as a way of work at the pilot school site
2. Assigning personnel with the requisite qualifications and experience to the PS/RtI initiative to support its implementation at the school site
3. Putting in place a school leadership team that is representative of the school's grade-level faculty, support staff, and parents (consisting of individuals with collective knowledge and experience in leadership, curriculum, data-based decision making, and systems change)


4. Being active participants in the school leadership team (attending PS/RtI trainings and team meetings)
5. Providing for a regularly scheduled time and place for team meetings
6. Securing agreement from the school faculty to commit to PS/RtI Project Initiative training and practices (including identification and selection of appropriate scientifically based interventions, continuous monitoring of student progress, and the systematic review of academic and discipline data for decision making)
7. Developing and implementing a plan to ensure that general education, special education, and other program personnel work together to effectuate the successful implementation of PS/RtI at the pilot school site
8. Allocating required resources (funds, designated time, staff) to facilitate professional development of teachers and other professional personnel at the school site
9. Working collaboratively with the Project Coach and Regional Coordinator in implementing PS/RtI at the school site
10. Providing dedicated time and resources for the Project Coach to work with classroom teachers and other school-based support personnel (as needed) to effectively support PS/RtI implementation at the school site
11. Allocating required personnel and other resources (e.g., teachers, administrative staff, time, materials) for full implementation of PS/RtI at the school site
12. Having in place adequate technology infrastructure and a data management system to support the PS/RtI initiative at the pilot school site
13. Reallocating resources based on data outcomes
14. Budgeting funds for PS/RtI supplies, materials, travel, and substitutes for team trainings/meetings, etc.

School Leadership Team will commit to:

1. Implementing a team-based, problem-solving process to provide interventions for all students at the universal, targeted, and intensive levels
2. Participating in PS/RtI trainings and networking meetings
3. Working collaboratively with the Project Coach and Regional Coordinator (as needed) to effectively implement PS/RtI at the school site
4. Meeting on a regular basis at specified times for school leadership team meetings
5. Collecting and using student outcome data for decision-making purposes
6. Working collaboratively with parents to ensure their involvement in PS/RtI planning, training, and implementation activities
7. Using and submitting required student performance and other data (e.g., satisfaction surveys)
8. Developing an annual action plan for PS/RtI activities based on analysis of collected data


Appendix C: District Demographic Data Outline

1. Total student enrollment
2. Student enrollment
   - By grade level
   - By race/ethnicity
   - By SES (use eligibility for free and reduced lunch)
3. Number and percent (of student population) of LEP students
   - Overall
   - By grade level
4. Number and percent of students with disabilities (elementary level)
   - By grade
   - By race/ethnicity
   - By disability type
   - Analysis of disproportionality in the identification of students eligible for special education, if available
5. Student performance on FCAT in reading and mathematics
   - For all elementary-level students, by grade level and by race/ethnicity
   - For elementary-level students with disabilities, by grade level, by race/ethnicity, and by disability
   - For LEP students, by grade level
6. Percent of students (at elementary level) who attained AYP in AY 2004-05 and AY 2005-06
   - Overall
   - By grade level
   - By race/ethnicity
   - By SES
   - By LEP status
7. Number and percent of students retained in grade 3 based on performance on FCAT reading in


   - AY 2004-05
   - AY 2005-06
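Each outline item above asks for counts reported as both a number and a percent of enrollment, broken out by subgroup. As a small illustration of how a district data office might produce such tables from a student-level extract (Python; all column names and values are hypothetical), one option is:

    import pandas as pd

    # Hypothetical student-level extract; column names are invented for illustration
    students = pd.DataFrame({
        "grade":     ["K", "1", "2", "3", "1", "2"],
        "ethnicity": ["White", "Black", "Hispanic", "White", "Black", "Hispanic"],
        "frl":       [True, True, False, False, True, False],  # free/reduced lunch (SES proxy)
    })

    def number_and_percent(df, by):
        """Tabulate enrollment count and percent of total by a grouping column."""
        counts = df.groupby(by).size().rename("number")
        return counts.to_frame().assign(percent=lambda t: 100 * t["number"] / len(df))

    print(number_and_percent(students, "grade"))
    print(number_and_percent(students, "ethnicity"))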


Appendix D: Pilot School Demographic Data Outline
(To be completed for each Proposed Pilot School)

1. Grade levels served (school site must at least house grades K-3)
2. Total student enrollment (report number and percent)
   - By grade level
   - By race/ethnicity
   - By SES (based on eligibility for free and reduced lunch)
3. Number and percent (of student population) of LEP students
   - Overall
   - By grade level
4. Number and percentage of students with disabilities
   - By grade level
   - By disability type
   - By race/ethnicity
   - Analysis of disproportionality in the identification of students as eligible for special education, if available
5. Number and percent of students placed in ESE in AY 2004-05 and AY 2005-06
   - By grade level
   - By disability type
   - By race/ethnicity
6. Educational environment/least restrictive environment data for students with disabilities
   - By grade level
   - By disability type
   - By race/ethnicity
   - Analysis of disproportionality in placement of students, if available
7. Title I status (non-Title I, Title I targeted assistance, or Title I school-wide)


8. Student performance on FCAT in reading and mathematics
   - For all students, by grade level and by race/ethnicity
   - For students with disabilities, by grade level, by race/ethnicity, and by disability
   - Analysis of performance gap between students with and without disabilities
9. Percent of students who attained AYP in AY 2004-05 and AY 2005-06 for reading and mathematics
   - Overall
   - By grade level
   - By race/ethnicity
   - By SES
   - By LEP status
10. Number and percent of students retained in Grade 3 based on performance on FCAT reading in
   - AY 2004-05
   - AY 2005-06
11. School Grade (i.e., A through F) assigned by FLDOE based on 2005-06 school year: _____
12. Does your school currently have, or has it ever had, a Reading First Grant? _____ Yes _____ No
13. Does your school have a positive behavior support (PBS) program in place? _____ Yes _____ No


Appendix E: Comparison School Demographic Data Outline
(To be completed for each Comparison School)

1. Identify pilot school for which school will serve as comparison
2. Grade levels served (school site must at least house grades K-3)
3. Total student enrollment (report number and percent)
   - By grade level
   - By race/ethnicity
   - By SES (based on eligibility for free and reduced lunch)
4. Number and percent (of student population) of LEP students
   - Overall
   - By grade level
5. Number and percentage of students with disabilities
   - By grade level
   - By disability type
   - By race/ethnicity
   - Analysis of disproportionality in the identification of students as eligible for special education, if available
6. Number and percent of students placed in ESE in AY 2004-05 and AY 2005-06
   - By grade level
   - By disability type
   - By race/ethnicity
7. Educational environment/least restrictive environment data for students with disabilities
   - By grade level
   - By disability type
   - By race/ethnicity
   - Analysis of disproportionality in placement of students, if available


8. Title I status (non-Title I, Title I targeted assistance, or Title I school-wide)
9. Student performance on FCAT in reading and mathematics
   - For all students, by grade level and by race/ethnicity
   - For students with disabilities, by grade level, by race/ethnicity, and by disability
   - Analysis of performance gap between students with and without disabilities
10. Percent of students who attained AYP in AY 2004-05 and AY 2005-06 for reading and mathematics
   - Overall
   - By grade level
   - By race/ethnicity
   - By SES
   - By LEP status
11. Number and percent of students retained in Grade 3 based on performance on FCAT reading in
   - AY 2004-05
   - AY 2005-06
12. School Grade (i.e., A through F) assigned by FLDOE based on 2005-06 school year: _____
13. Does your school currently have, or has it ever had, a Reading First Grant? _____ Yes _____ No
14. Does your school have a positive behavior support (PBS) program in place? _____ Yes _____ No


Proposal Evaluation Scoring Guide

Total points awarded will be an important consideration in the selection of demonstration districts. However, it also is important that a diversity of students, schools, and districts be represented in the demonstration districts and their pilot schools. Therefore, after all applications have been evaluated against the criteria below and have received a final score from 0 to 175, additional factors will be considered prior to the selection of sites. Districts and pilot schools will be selected to include sites that are diverse with respect to:

1. Size of districts (i.e., small, medium, and large)
2. Geographic location
3. Student population demographics
4. Inclusion of D/F schools

Evaluate the application from each district on the Proposal Evaluation Form according to the following criteria:

1. District and Pilot Schools' Commitment (50 points): The proposal demonstrates clear administrative, programmatic, and fiscal commitment (including the required letters of commitment) to fully implementing PS/RtI and a capacity to fulfill the demonstration site requirements as outlined in Appendix B. (Note: District = 20, mean rating across pilot schools = 30)

2. District and Pilot and Comparison Schools' Demographic Data (30 points): The proposal provides detailed and current demographic data for the district and each proposed pilot school as required in Appendices C, D, and E respectively. It provides a clear picture of the district's and pilot and comparison schools' status on the indicators given. (Note: District = 10, mean rating across pilot schools = 15, mean rating across comparison schools = 5)

3. Statement of Need and Expected Outcomes (35 points): The proposal clearly defines each pilot school's needs that will be addressed through participation as demonstration sites and provides convincing evidence that without assistance from the project, these needs would not be met. The proposal also delineates projected student and school outcomes, including outcomes for specific target populations, that: a) are measurable, b) are clearly linked to the identified needs, and c) demonstrate an increased capacity to support students' academic and behavioral performance in the general education environment. (Note: Mean rating across pilot schools = 35)


4. District and School Experience with Initiatives and Programs (20 points): The proposal describes in detail the level of district and school involvement in academic and/or behavioral initiatives and programs, resulting in a comprehensive picture of the district's and each pilot school's current systemic capacity. (Note: District = 10, mean rating across pilot schools = 10)

5. District Personnel Resources and Technology (15 points): The proposal clearly identifies personnel assigned to the PS/RtI initiative at a) the district level, and b) each proposed pilot school site, and the percent FTE each is assigned to the initiative. It provides a clear picture of personnel qualifications and experience to support implementation of PS/RtI. Technology resources and a data management system to support the initiative at the district and school site level are clearly delineated. (Note: District = 6, mean rating across pilot schools = 9)

6. Inclusion of D/F Schools (25 points): D or F schools are represented among the proposed pilot school sites.

Total Possible Score = 175 points


Proposal Evaluation Form

School District: ____________________
Reviewer: ____________________
Date of Review: ____________________

Refer to the Proposal Evaluation Scoring Guide for an explanation of factors to be considered in evaluating each of the following areas:

1. District and Pilot Schools' Commitment (Total Possible Points = 50)

District Rating (0 to 20 Points) _____

Pilot Schools (0 to 30 Points Each)
1. _____
2. _____
3. _____
4. _____
5. _____
6. _____

Mean Pilot School Rating (0 to 30 Points) _____

Subtotal Points Awarded (District plus Mean Pilot Schools) = _____

Comments:


2. District and Pilot and Comparison Schools' Demographic Data (Total Possible Points = 30)

District Rating (0 to 10 Points) _____

Pilot Schools (0 to 15 Each)        Comparison Schools (0 to 5 Each)
1. _____                            1. _____
2. _____                            2. _____
3. _____                            3. _____
4. _____                            4. _____
5. _____                            5. _____
6. _____                            6. _____

Mean Pilot School Rating (0 to 15) _____
Mean Comparison School Rating (0 to 5) _____

Subtotal Points Awarded (District, plus Mean Pilot, plus Mean Comparison) = _____

Comments:

3. Statement of Need and Expected Outcomes (Total Possible Points = 35)

Pilot School Ratings (0 to 35 Each):
1. _____


2. _____
3. _____
4. _____
5. _____
6. _____

Subtotal Points Awarded (Mean Rating for Pilot Schools) = _____

Comments:

4. District and School Experience with Initiatives and Programs (Total Possible Points = 20)

District Rating (0 to 10 Points) _____

Pilot School Ratings (0 to 10 Points Each):
1. _____
2. _____
3. _____
4. _____
5. _____
6. _____

Mean Pilot School Rating (0 to 10) _____

Subtotal Points Awarded (District plus Mean for Pilot Schools) = _____

Comments:


5. District Personnel Resources and Technology (Total Possible Points = 15)

District Rating (0 to 6 Points) _____

Pilot School Ratings (0 to 9 Points Each):
1. _____
2. _____
3. _____
4. _____
5. _____
6. _____

Mean Pilot School Rating (0 to 9) _____

Subtotal Points Awarded (District plus Mean for Pilot Schools) = _____

Comments:

6. Inclusion of D/F Schools (Total Possible Points = 25)

Subtotal Points Awarded = _____

Total Application Points Awarded:


Criterion Area
1. _____
2. _____
3. _____
4. _____
5. _____
6. _____

TOTAL POINTS AWARDED (0 to 175) = _____

SIZE OF DISTRICT (Small, Medium, Large) _________

GEOGRAPHIC REGION _________


Appendix B: Example Validation Forms

Problem Solving/Response to Intervention Beliefs Survey
Content Validation: Item Content and Clarification Rating Form

Directions: The Problem Solving/Response to Intervention Beliefs Survey is intended to capture the degree to which school and district personnel possess the beliefs necessary for successful implementation of the Problem Solving/Response to Intervention (PS/RtI) model. The items on the survey are designed to assess the beliefs of school and district personnel in one or more of the following domains: overall educational philosophy, assessment practices, core instruction, intervention, and special education eligibility determination. Florida PS/RtI Project staff will use the data derived from the survey to inform the services provided to schools.

A good survey is concise, contains clearly and accurately written items that relate to the purpose of the survey, and avoids duplicate items. To evaluate the degree to which the attached survey meets these criteria, please rate each item on the basis of appropriateness of content, necessity, and clarity. Read each question carefully and rate it by circling one or more of the following descriptors:

G = Good (Item is clearly and accurately written);
R = Redundant (There are items with similar content and meaning);
N = Nonessential (The content is not related to any of the five PS/RtI belief domains);
PW = Poorly Written (Item has semantic or grammatical errors);
A = Ambiguous (Item has abstract or vague content, or double-barreled items that ask two questions in one statement).


If you have found an item to be problematic (i.e., you circled R, N, PW, or A), please provide suggestions by rewriting the item in the space below, or write "Delete item" if you believe the item does not address beliefs related to PS/RtI.

This survey will be completed by school and district personnel participating in PS/RtI training across the state of Florida. Respondents will be asked to rate the degree to which they agree with each PS/RtI belief on a 5-point continuum from "strongly disagree" to "strongly agree." For your information, school and district personnel will use the following ratings:

1 = Strongly Disagree
2 = Disagree
3 = Neutral
4 = Agree
5 = Strongly Agree


Problem Solving/Response to Intervention Beliefs Survey

G = Good   R = Redundant   N = Nonessential   PW = Poorly Written   A = Ambiguous

Essential PS/RtI Beliefs / Content and Clarity Ratings

1. I believe in the philosophy of No Child Left Behind (NCLB) even if I disagree with some of the requirements. G R N PW A
Rewrite: ___________________________________________________________________

2. Core instruction should be effective enough to result in 80% of the students achieving benchmarks in reading and math. G R N PW A
Rewrite: ___________________________________________________________________

3. The primary function of supplemental instruction is to ensure that students meet grade level benchmarks in reading and math. G R N PW A
Rewrite: ___________________________________________________________________

4. The majority of students with learning disabilities achieve grade level benchmarks in reading and math. G R N PW A
Rewrite: ___________________________________________________________________

5. The majority of students with behavioral problems (EH/SED) achieve grade level benchmarks in reading and math. G R N PW A
Rewrite: ___________________________________________________________________

6. Students with disabilities who are receiving special education services are capable of achieving grade level benchmarks in reading and math. G R N PW A
Rewrite: ___________________________________________________________________


7. General education teachers should implement more differentiated and flexible curricula to address the needs of a more diverse student body. G R N PW A
Rewrite: ___________________________________________________________________

8. General education classroom teachers would be able to implement more differentiated and flexible interventions if they had additional staff support. G R N PW A
Rewrite: ___________________________________________________________________

9. The availability of additional interventions in the general education classroom would result in success for more students. G R N PW A
Rewrite: ___________________________________________________________________

10. Prevention activities and early intervention strategies in schools would result in fewer referrals to problem solving teams and placements in special education. G R N PW A
Rewrite: ___________________________________________________________________

11. The severity of a student's problem is determined not by how far behind (or inappropriate) a student is but by how quickly a student responds to intervention. G R N PW A
Rewrite: ___________________________________________________________________

12. The results of IQ and achievement testing can be used to identify effective interventions for students with learning and behavior problems. G R N PW A
Rewrite: ___________________________________________________________________

13. Many students currently identified as LD do not have a disability, but came to school not ready or got too far behind for the available interventions to close the gap sufficiently. G R N PW A
Rewrite: ___________________________________________________________________

14. Using student based data to determine intervention effectiveness is more accurate than using teacher judgment. G R N PW A


Rewrite: ___________________________________________________________________

15. Evaluating a student's response to interventions is a more effective way of determining what a student is capable of than using scores from tests (e.g., IQ/Achievement). G R N PW A
Rewrite: ___________________________________________________________________

16. Time and resources should be given first to students who are not reaching benchmarks before significant time and resources are directed to students who are at or above benchmark. G R N PW A
Rewrite: ___________________________________________________________________

17. It is easier for me to make decisions about student performance and needed interventions when the student data are graphed. G R N PW A
Rewrite: ___________________________________________________________________

18. Parents should be involved in the problem solving process as soon as a teacher has a concern about a particular student. G R N PW A
Rewrite: ___________________________________________________________________

19. Students respond better to interventions when the parent is involved in the development and implementation of those interventions. G R N PW A
Rewrite: ___________________________________________________________________

20. All students can achieve grade level benchmarks if they have sufficient support. G R N PW A
Rewrite: ___________________________________________________________________

If you believe that there are other important questions not addressed in this survey that would help identify the degree to which school and district personnel possess the beliefs necessary to implement the PS/RtI model,


please list them below and state the domain (i.e., overall educational philosophy, assessment practices, core instruction, intervention, or special education eligibility determination) that each characterizes:

_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________

Thank you for your assistance with this important step in validating a measure to capture the beliefs of school and district personnel as they relate to PS/RtI.


Perception of Skills Survey
Content Validation Item Content and Clarification Rating Form

Directions: The Perception of Skills Survey is intended to capture the degree to which school and district personnel perceive that they have the skills needed to function within a Problem Solving/Response to Intervention (PS/RtI) model. The items on the survey are designed to assess school and district personnel perceptions about their skills in one or more of the following domains: data based decision making, tiered service delivery, the problem solving process, data collection procedures, technology use, and special education eligibility determination. Florida PS/RtI Project staff will use the data derived from the survey to inform the services provided to schools.

A good survey is concise, contains clearly and accurately written items that relate to the purpose of the survey, and avoids duplicate items. To evaluate the degree to which the attached survey meets these criteria, please rate each item on the basis of appropriateness of content, necessity, and clarity. Read each question carefully and rate it by circling one or more of the following descriptors:

G = Good (Item is clearly and accurately written)
R = Redundant (There are items with similar content and meaning)
N = Nonessential (The content is not related to any of the six PS/RtI skill domains)
PW = Poorly Written (Item has semantic or grammatical errors)
A = Ambiguous (Item has abstract or vague content, or is double-barreled, asking two questions in one statement)

If you have found an item to be problematic (i.e., you circled R, N, PW, or A), please provide suggestions by rewriting the item in the space below, or


write "Delete item" if you believe the item does not address skills needed in a PS/RtI model.

This survey will be completed by school and district personnel participating in PS/RtI training across the state of Florida. Respondents will be asked to rate the degree to which they possess each skill on a 5-point continuum from "I do not have this skill at all" to "I could teach others this skill." For your information, school and district personnel will use the following ratings:

1 = I do not have this skill at all
2 = I need substantial support to use this skill
3 = I have this skill, but still need some support
4 = I can use this skill with little support
5 = I could teach others this skill


Perceptions of Skills Survey

G = Good   R = Redundant   N = Nonessential   PW = Poorly Written   A = Ambiguous

Skills / Content and Clarity Ratings

1. I know how to access the data necessary to determine the percent of students in core instruction who are achieving benchmarks in: a. Academics b. Behavior G R N PW A
Rewrite: ___________________________________________________________________

2. I have the skill to use the data to make decisions about the effectiveness of the core curriculum for individuals and groups of students for: a. Academics b. Behavior G R N PW A
Rewrite: ___________________________________________________________________

3. Please rate your skill level on each of the following steps in the problem identification (i.e., referral reason) stage of problem solving:

a. Defining the referral concern in terms of a replacement behavior (what you want the student to be able to do) instead of a referral problem for: 1. Academics 2. Behavior G R N PW A
Rewrite: ___________________________________________________________________

b. Using data to define the current level of performance for the target student for: 1. Academics 2. Behavior G R N PW A
Rewrite: ___________________________________________________________________

c. Determining the desired level of performance (i.e., benchmark) for: 1. Academics 2. Behavior G R N PW A


Rewrite: ___________________________________________________________________

d. Determining current level of peer performance on the same behavior as the target student for: 1. Academics 2. Behavior G R N PW A
Rewrite: ___________________________________________________________________

e. Calculating the gap between student performance and the benchmark for: 1. Academics 2. Behavior G R N PW A
Rewrite: ___________________________________________________________________

f. Using gap data to determine whether core instruction should be modified or whether supplemental instruction should be directed to the target student for: 1. Academics 2. Behavior G R N PW A
Rewrite: ___________________________________________________________________

4. I have the skill to identify the appropriate supplemental intervention in my building for a student identified as at risk for: a. Academics b. Behavior G R N PW A
Rewrite: ___________________________________________________________________

5. I have the skill to develop potential reasons (i.e., hypotheses) why a student or group of students is/are not achieving desired levels of performance (i.e., benchmarks) for: a. Academics b. Behavior G R N PW A
Rewrite: ___________________________________________________________________

6. I have the skill to determine the most appropriate type(s) of data to use to determine which reasons (i.e., hypotheses) are likely to be contributing to the problem for: a. Academics b. Behavior G R N PW A
Rewrite: ___________________________________________________________________


7. I have the skills to access sources (e.g., myself, internet sources, professional journals) to develop evidence based interventions for: a. Academic core curricula b. Behavioral core curricula c. Academic supplemental curricula d. Behavioral supplemental curricula e. Academic individualized intervention plans f. Behavioral individualized intervention plans G R N PW A
Rewrite: ___________________________________________________________________

8. I have the skill to ensure that any supplemental and/or intensive interventions are integrated with core instruction in the general education classroom: a. Academics b. Behavior G R N PW A
Rewrite: ___________________________________________________________________

9. I have the skill to ensure that the proposed intervention plan is supported by the data that were collected: a. Academics b. Behavior G R N PW A
Rewrite: ___________________________________________________________________

10. I have the skill to provide the support necessary to ensure that the intervention is implemented appropriately for: a. Academics b. Behavior G R N PW A
Rewrite: ___________________________________________________________________

11. I have the skill to determine if an intervention was implemented the way it was supposed to be for: a. Academics b. Behavior G R N PW A
Rewrite: ___________________________________________________________________

12. I have the skill to select appropriate data (e.g., CBM, DIBELS, FCAT, behavioral observations) to use to progress monitor student performance during interventions: a. Academics b. Behavior G R N PW A
Rewrite: ___________________________________________________________________


13. I have the skill(s) to demonstrate the following graphing skills for large group, small group, and individual students: a. Graph target student data b. Graph benchmark data c. Graph peer data d. Draw an aimline e. Draw a trendline G R N PW A
Rewrite: ___________________________________________________________________

14. I have the skill to use progress monitoring data displayed on a graph to make decisions about the degree to which a student is responding to intervention (e.g., positive, questionable or poor response). G R N PW A
Rewrite: ___________________________________________________________________

15. I have the skill to make intervention recommendations based on the type of student(s) response to intervention. G R N PW A
Rewrite: ___________________________________________________________________

16. I have the skill to differentiate between students who have not learned skills (e.g., wait to fail, not ready, got too far behind) from those who have barriers to learning due to a disability. G R N PW A
Rewrite: ___________________________________________________________________

17. I have the skills to conduct the following data collection procedures: a. CBM b. DIBELS c. Accessing data from appropriate district or school wide assessments d. Standard behavioral observations e. Disaggregating data by race, gender, free/reduced lunch, language proficiency, and disability status G R N PW A
Rewrite: ___________________________________________________________________

18. I have skills to use technology in the following ways: a. Access the internet to locate sources of academic and behavioral evidence based interventions b. Use electronic data collection tools (e.g., PDAs) c. Use the Progress Monitoring and Reporting Network (PMRN) d. Use the School Wide Information System (SWIS) for Positive Behavior Support e. Graph and display student and school data G R N PW A


Rewrite: ___________________________________________________________________

19. I have the skills to facilitate a PS/RtI meeting. G R N PW A
Rewrite: ___________________________________________________________________

If you believe that there are other important questions not addressed in this survey that would help identify the degree to which school and district personnel perceive they possess the skills needed in a PS/RtI model, please list them below and state the domain (i.e., data based decision making, tiered service delivery, the problem solving process, data collection procedures, technology use, or special education eligibility determination) that each characterizes:

_______________________________________________________________
_______________________________________________________________
_______________________________________________________________
_______________________________________________________________

Thank you for your assistance with this important step in validating a measure to capture school and district personnel perceptions about the degree to which they possess skills needed in a PS/RtI model.


Appendix C: Beliefs Survey

1. Your PS/RtI Project ID: Your PS/RtI Project ID was designed to assure confidentiality while also providing a method to match an individual's responses across instruments. In the space provided (first row), please write in the last four digits of your Social Security Number and the last two digits of the year you were born. Then, shade in the corresponding circles.

[Scantron-style grid: six columns of circles for the digits 0 through 9, used to record the six-digit Project ID]

Directions: For items 2-5 below, please shade in the circle next to the response option that best represents your answer.

2. Job Description: PS/RtI Coach / Teacher, General Education / Teacher, Special Education / School Counselor / School Psychologist / School Social Worker / Principal / Assistant Principal / Other (Please specify): __________

3. Years of Experience in Education: Less than 1 year / 1-4 years / 5-9 years / 10-14 years / 15-19 years / 20-24 years / 25 or more years / Not applicable

4. Number of Years in your Current Position: Less than 1 year / 1-4 years / 5-9 years / 10-14 years / 15-19 years / 20 or more years

5. Highest Degree Earned: B.A./B.S. / M.A./M.S. / Ed.S. / Ph.D./Ed.D. / Other (Please specify): __________
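The ID scheme described in item 1 is simply a six-digit concatenation, and the matching it supports can be illustrated in a few lines. The following Python sketch is a hypothetical illustration of that matching-key protocol; the function name and sample values are invented and do not come from the Project's materials.

    # Hypothetical sketch of the matching-key construction described in item 1:
    # the last four digits of the SSN followed by the last two digits of the
    # respondent's birth year, yielding one six-digit ID per respondent.
    def project_id(ssn_last4: str, birth_year: int) -> str:
        if len(ssn_last4) != 4 or not ssn_last4.isdigit():
            raise ValueError("expected the last four digits of the SSN")
        return ssn_last4 + f"{birth_year % 100:02d}"

    # The same ID recorded on each instrument lets responses be matched
    # across the Beliefs, Skills, and Practices surveys without names.
    assert project_id("4321", 1975) == "432175"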


Directions: Using the scale below, please indicate your level of agreement or disagreement with each of the following statements by shading in the circle that best represents your response.

1 = Strongly Disagree (SD)   2 = Disagree (D)   3 = Neutral (N)   4 = Agree (A)   5 = Strongly Agree (SA)

6. I believe in the philosophy of No Child Left Behind (NCLB) even if I disagree with some of the requirements. 1 2 3 4 5

7. Core instruction should be effective enough to result in 80% of the students achieving benchmarks in
7.a. reading 1 2 3 4 5
7.b. math 1 2 3 4 5

8. The primary function of supplemental instruction is to ensure that students meet grade level benchmarks in
8.a. reading 1 2 3 4 5
8.b. math 1 2 3 4 5

9. The majority of students with learning disabilities achieve grade level benchmarks in
9.a. reading 1 2 3 4 5
9.b. math 1 2 3 4 5

10. The majority of students with behavioral problems (EH/SED or EBD) achieve grade level benchmarks in
10.a. reading 1 2 3 4 5
10.b. math 1 2 3 4 5

11. Students with high incidence disabilities (e.g., SLD, EBD) who are receiving special education services are capable of achieving grade level benchmarks (i.e., general education standards) in
11.a. reading 1 2 3 4 5
11.b. math 1 2 3 4 5

12. General education classroom teachers should implement more differentiated and flexible instructional practices to address the needs of a more diverse student body. 1 2 3 4 5


13. General education classroom teachers would be able to implement more differentiated and flexible interventions if they had additional staff support. 1 2 3 4 5

14. The use of additional interventions in the general education classroom would result in success for more students. 1 2 3 4 5

15. Prevention activities and early intervention strategies in schools would result in fewer referrals to problem solving teams and placements in special education. 1 2 3 4 5

16. The severity of a student's academic problem is determined not by how far behind the student is in terms of his/her academic performance but by how quickly the student responds to intervention. 1 2 3 4 5

17. The severity of a student's behavioral problem is determined not by how inappropriate a student is in terms of his/her behavioral performance but by how quickly the student responds to intervention. 1 2 3 4 5

18. The results of IQ and achievement testing can be used to identify effective interventions for students with learning and behavior problems. 1 2 3 4 5

19. Many students currently identified as LD do not have a disability; rather, they came to school not ready to learn or fell too far behind academically for the available interventions to close the gap sufficiently. 1 2 3 4 5

20. Using student based data to determine intervention effectiveness is more accurate than using only teacher judgment. 1 2 3 4 5

21. Evaluating a student's response to interventions is a more effective way of determining what a student is capable of achieving than using scores from tests (e.g., IQ/Achievement test). 1 2 3 4 5

22. Additional time and resources should be allocated first to students who are not reaching benchmarks (i.e., general education standards) before significant time and resources are directed to students who are at or above benchmarks. 1 2 3 4 5

23. Graphing student data makes it easier for one to make decisions about student performance and needed interventions. 1 2 3 4 5

24. A student's parents (guardian) should be involved in the problem solving process as soon as a teacher has a concern about the student. 1 2 3 4 5

25. Students respond better to interventions when their parent (guardian) is involved in the development and implementation of those interventions. 1 2 3 4 5

26. All students can achieve grade level benchmarks if they have sufficient support. 1 2 3 4 5

27. The goal of assessment is to generate and measure effectiveness of instruction/intervention. 1 2 3 4 5

THANK YOU


Appendix D: Perceptions of RtI Skills Survey

1. Your PS/RtI Project ID: Your PS/RtI Project ID was designed to assure confidentiality while also providing a method to match an individual's responses across instruments. In the space provided (first row), please write in the last four digits of your Social Security Number and the last two digits of the year you were born. Then, shade in the corresponding circles.

[Scantron-style grid: six columns of circles for the digits 0 through 9, used to record the six-digit Project ID]

Directions: Please read each statement about a skill related to assessment, instruction, and/or intervention below, and then evaluate YOUR skill level within the context of working at a school/building level. Where indicated, rate your skill separately for academics (i.e., reading and math) and behavior. Please use the following response scale:

1 = I do not have this skill at all (NS)
2 = I have minimal skills in this area; need substantial support to use it (MnS)
3 = I have this skill, but still need some support to use it (SS)
4 = I can use this skill with little support (HS)
5 = I am highly skilled in this area and could teach others this skill (VHS)

The skill to:

2. Access the data necessary to determine the percent of students in core instruction who are achieving benchmarks (district grade level standards) in:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

3. Use data to make decisions about individuals and groups of students for the:
a. Core academic curriculum 1 2 3 4 5
b. Core/Building discipline plan 1 2 3 4 5


4. Perform each of the following steps when identifying the problem for a student for whom concerns have been raised:

a. Define the referral concern in terms of a replacement behavior (i.e., what the student should be able to do) instead of a referral problem for:
Academics 1 2 3 4 5
Behavior 1 2 3 4 5

b. Use data to define the current level of performance of the target student for:
Academics 1 2 3 4 5
Behavior 1 2 3 4 5

c. Determine the desired level of performance (i.e., benchmark) for:
Academics 1 2 3 4 5
Behavior 1 2 3 4 5

d. Determine the current level of peer performance for the same skill as the target student for:
Academics 1 2 3 4 5
Behavior 1 2 3 4 5

e. Calculate the gap between student current performance and the benchmark (district grade level standard) for:
Academics 1 2 3 4 5
Behavior 1 2 3 4 5

f. Use gap data to determine whether core instruction should be adjusted or whether supplemental instruction should be directed to the target student for:
Academics 1 2 3 4 5
Behavior 1 2 3 4 5

5. Develop potential reasons (hypotheses) that a student or group of students is/are not achieving desired levels of performance (i.e., benchmarks) for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5


6. Identify the most appropriate type(s) of data to use for determining reasons (hypotheses) that are likely to be contributing to the problem for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

7. Identify the appropriate supplemental intervention available in my building for a student identified as at risk for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

8. Access resources (e.g., internet sources, professional literature) to develop evidence based interventions for:
a. Academic core curricula 1 2 3 4 5
b. Behavioral core curricula 1 2 3 4 5
c. Academic supplemental curricula 1 2 3 4 5
d. Behavioral supplemental curricula 1 2 3 4 5
e. Academic individualized intervention plans 1 2 3 4 5
f. Behavioral individualized intervention plans 1 2 3 4 5

9. Ensure that any supplemental and/or intensive interventions are integrated with core instruction in the general education classroom:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

10. Ensure that the proposed intervention plan is supported by the data that were collected for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

11. Provide the support necessary to ensure that the intervention is implemented appropriately for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5


12. Determine if an intervention was implemented as it was intended for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

13. Select appropriate data (e.g., Curriculum Based Measurement, DIBELS, FCAT, behavioral observations) to use for progress monitoring of student performance during interventions:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

14. Construct graphs for large group, small group, and individual students:
a. Graph target student data 1 2 3 4 5
b. Graph benchmark data 1 2 3 4 5
c. Graph peer data 1 2 3 4 5
d. Draw an aimline 1 2 3 4 5
e. Draw a trendline 1 2 3 4 5

15. Interpret graphed progress monitoring data to make decisions about the degree to which a student is responding to intervention (e.g., positive, questionable or poor response). 1 2 3 4 5

16. Make modifications to intervention plans based on student response to intervention. 1 2 3 4 5

17. Use appropriate data to differentiate between students who have not learned skills (e.g., did not have adequate exposure to effective instruction, not ready, got too far behind) from those who have barriers to learning due to a disability. 1 2 3 4 5

18. Collect the following types of data:
a. Curriculum Based Measurement 1 2 3 4 5
b. DIBELS 1 2 3 4 5
c. Access data from appropriate district or school wide assessments 1 2 3 4 5
d. Standard behavioral observations 1 2 3 4 5

19. Disaggregate data by race, gender, free/reduced lunch, language proficiency, and disability status. 1 2 3 4 5


20. Use technology in the following ways:
a. Access the internet to locate sources of academic and behavioral evidence based interventions 1 2 3 4 5
b. Use electronic data collection tools (e.g., PDAs) 1 2 3 4 5
c. Use the Progress Monitoring and Reporting Network (PMRN) 1 2 3 4 5
d. Use the School Wide Information System (SWIS) for Positive Behavior Support 1 2 3 4 5
e. Graph and display student and school data 1 2 3 4 5

21. Facilitate a Problem Solving Team (Student Support Team, Intervention Assistance Team, School Based Intervention Team, Child Study Team) meeting. 1 2 3 4 5

THANK YOU
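Items 14 and 15 hinge on a small amount of arithmetic: an aimline is the straight line from a baseline score to a goal, and a trendline summarizes observed growth. The Python sketch below makes that concrete; the weekly scores, the 50% threshold, and the decision rule are hypothetical illustrations rather than the Project's actual procedure, though the positive/questionable/poor labels follow item 15.

    # Minimal sketch (hypothetical data and thresholds): drawing an aimline and
    # trendline from weekly progress monitoring scores, then judging response.
    def slope(xs, ys):
        # Ordinary least-squares slope, enough for a simple trendline.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))

    weeks = [1, 2, 3, 4, 5, 6]
    wcpm = [42, 45, 44, 49, 52, 55]      # words correct per minute (hypothetical)
    baseline, goal, total_weeks = 42, 90, 18
    aimline_slope = (goal - baseline) / total_weeks   # growth needed per week
    trend_slope = slope(weeks, wcpm)                  # observed growth per week

    # One simple decision rule (an assumption, not the Project's rule):
    if trend_slope >= aimline_slope:
        response = "positive"
    elif trend_slope >= 0.5 * aimline_slope:
        response = "questionable"
    else:
        response = "poor"
    print(f"aimline {aimline_slope:.2f}, trend {trend_slope:.2f} -> {response} response")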


Appendix E: Perceptions of Practices Survey

1. Your PS/RtI Project ID: Your PS/RtI Project ID was designed to assure confidentiality while also providing a method to match an individual's responses across instruments. In the space provided (first row), please write in the last four digits of your Social Security Number and the last two digits of the year you were born. Then, shade in the corresponding circles.

[Scantron-style grid: six columns of circles for the digits 0 through 9, used to record the six-digit Project ID]

Directions: For each item on this survey, please indicate how frequently or infrequently the given practice occurs in your school for both academics (i.e., reading and math) and behavior. Please use the following response scale:

1 = Never Occurs (NO)
2 = Rarely Occurs (RO)
3 = Sometimes Occurs (SO)
4 = Often Occurs (OO)
5 = Always Occurs (AO)
DK = Do Not Know

In my School:

2. Data (e.g., Curriculum Based Measurement, DIBELS, FCAT, Office Discipline Referrals) are used to determine the percent of students receiving core instruction (general education classroom only) who achieve benchmarks (district grade level standards) in:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

3. Data are used to make decisions about necessary changes to the core curriculum or discipline procedures to increase the percent of students achieving benchmarks (district grade level standards) in:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5


4. Data (e.g., Curriculum Based Measurement, DIBELS, Office Discipline Referrals) are used to identify at risk students in need of supplemental and/or intensive interventions for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

5. The students identified as at risk routinely receive additional (i.e., supplemental) intervention(s) for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

6. Progress monitoring occurs for all students receiving supplemental and/or intensive interventions for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

7. Progress monitoring data (e.g., Curriculum Based Measurement, DIBELS, behavioral observations) are used to determine the percent of students who receive supplemental and/or intensive interventions who achieve grade level benchmarks for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

8. A standard protocol intervention (i.e., the same type of intervention used for similar problems) is used initially for all students who require supplemental instruction for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

Directions: Items 9-18 refer to the typical Problem Solving Team (i.e., Student Support Team, Intervention Assistance Team, School Based Intervention Team, Child Study Team) meeting in your school that includes a student who has been referred for problem solving or a special education evaluation. While addressing each item for academics (math and reading), think of a typical case in which a student has been referred for an academic concern. While addressing each question for behavior, think of a typical case in which a student has been referred for a behavioral concern. Then, please indicate how frequently each of the given practices occurs in your school using the same scale.


9. The target behavior is routinely defined in terms of the desired behavior (e.g., Johnny will raise his hand to ask a question, Susie will read 90 correct words per minute) instead of the problem behavior (e.g., Johnny talks out of turn, Susie reads below grade level) for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

10. Quantifiable data (e.g., reading fluency score, percent compliance, percent on task behavior) are used to

a. identify the target student's current performance in the area of concern for:
Academics 1 2 3 4 5
Behavior 1 2 3 4 5

b. identify the desired level of performance (i.e., the benchmark) in the area of concern for:
Academics 1 2 3 4 5
Behavior 1 2 3 4 5

c. identify the current performance of same age peers using the same data as the target student for:
Academics 1 2 3 4 5
Behavior 1 2 3 4 5

11. The Problem Solving Team routinely develops hypotheses (i.e., proposed reasons) explaining why the target student is not demonstrating the desired behavior for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

12. Data are collected to confirm the reasons that the student is not achieving the desired level of performance for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5


13. Intervention plans are routinely developed based on the confirmed reasons that the student is not achieving the desired level of performance for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

14. The teacher of a student referred for problem solving routinely receives staff support to implement the intervention plan developed by the Problem Solving Team for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

15. Data are collected routinely to determine the degree to which the intervention plans are being implemented as intended for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

16. Data are graphed routinely to simplify interpretation of student performance for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

17. Progress monitoring data are used to determine

a. the degree to which the target student's rate of progress has improved for:
Academics 1 2 3 4 5
Behavior 1 2 3 4 5

b. whether the gap has decreased between the target student's current performance and the desired level of performance (i.e., benchmark) for:
Academics 1 2 3 4 5
Behavior 1 2 3 4 5


c. whether the gap has decreased between the target student's current performance and the performance of same age peers for:
Academics 1 2 3 4 5
Behavior 1 2 3 4 5

18. A student's response to intervention data (e.g., rate of improvement) are used routinely to determine whether a student is simply behind and can learn new skills or whether the student's performance is due to a disability for:
a. Academics 1 2 3 4 5
b. Behavior 1 2 3 4 5

THANK YOU
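Item 17's gap comparisons reduce to simple subtraction, which the following Python sketch illustrates. All of the numbers are hypothetical (none come from the study), and the gap convention used here, benchmark minus student score, is an assumption rather than the Project's stated formula.

    # Hypothetical illustration of the gap comparisons in item 17.
    def gap(reference, student):
        return reference - student   # positive = student is behind

    # Words correct per minute before and after intervention (all invented).
    student_pre, student_post = 40, 58
    benchmark_pre, benchmark_post = 70, 80   # benchmarks rise over the year
    peers_pre, peers_post = 65, 76

    benchmark_gap_closed = gap(benchmark_post, student_post) < gap(benchmark_pre, student_pre)
    peer_gap_closed = gap(peers_post, student_post) < gap(peers_pre, student_pre)
    rate_improved = (student_post - student_pre) > (benchmark_post - benchmark_pre)

    print(f"Benchmark gap decreased: {benchmark_gap_closed}")             # 22 < 30 -> True
    print(f"Peer gap decreased: {peer_gap_closed}")                       # 18 < 25 -> True
    print(f"Rate of progress exceeded expected growth: {rate_improved}")  # 18 > 10 -> True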


Appendix F: Data Collection, Entry, and Analysis Rubric (Year 1)

For each measure, the rubric lists the Year 1 collection timeline, the collection method and responsible personnel, the data entry method and responsible personnel, and the analysis frequency.

Primary Training & Staff Surveys & Skill Assessments

Beliefs Survey
Timeline: SBLT Days 1 & 2 and Staff (pre); SBLT Day 5 and Staff (post, 3/30-5/15)
Collection: Administered by RCs & Coaches
Data entry: Uploaded via scantron by Project staff
Analysis: 1x per year

Direct Skill Assessments
Timeline: SBLT Day 2 and Staff (pre); SBLT Day 3; SBLT Day 4; SBLT Day 5
Collection: Administered by RCs & Coaches
Data entry: Scored & entered by Project staff
Analysis: 2-4x per year (tied to the training schedule for SBLTs)

Perceptions of Practices Survey
Timeline: SBLT & Staff (pre)
Collection: Administered by RCs & Coaches
Data entry: Uploaded via scantron by Project staff
Analysis: 1x per year

Perceptions of Skills Survey
Timeline: SBLT & Staff (pre); SBLT Day 5 & Staff (post)
Collection: Administered by RCs & Coaches
Data entry: Uploaded via scantron by Project staff
Analysis: 1x per year

School Personnel Satisfaction Survey
Timeline: SBLT & Staff (pre)
Collection: Administered by RCs & Coaches
Data entry: Uploaded via scantron by Project staff
Analysis: 1x per year

Training Evaluation Survey**
Timeline: SBLT Days 1 & 2; SBLT Day 3; SBLT Day 4; SBLT Day 5
Collection: Administered by RCs & Coaches
Data entry: Uploaded via scantron by Project staff
Analysis: 4x per year (tied to the training schedule)

Training & Technical Assistance Logs

Regional Coordinator Training & Technical Assistance Logs
Timeline: ongoing throughout the school year
Collection: RCs track activities and hours
Data entry: RCs enter into remote database (minimum of monthly)
Analysis: Monthly

Coaches Training & Technical Assistance Logs*
Timeline: ongoing throughout the school year
Collection: Coaches track activities and hours
Data entry: Coaches enter into remote database (minimum of monthly)
Analysis: Monthly

Implementation Integrity Measures

Tiers I & II Critical Components Checklist*
Timeline: T1, T2, and T3 windows
Collection: Coaches (checklists from permanent products)
Data entry: Project staff enter into database
Analysis: 3x per year

Tiers I & II Observation Checklist*: NOT COLLECTED DURING YEAR 1
Tier III Critical Components Checklist*: NOT COLLECTED DURING YEAR 1
Problem Solving Team Meeting Checklists (Initial & Follow-Up)*: NOT COLLECTED DURING YEAR 1

Self-Assessment of Problem Solving Implementation (SAPSI)
Timeline: Pre and Post
Collection: SBLT completes while coach facilitates
Data entry: Project staff enter
Analysis: 2x per year

School Demographics

School Demographics (See School Demographics Data Protocol)*
Timeline: 1x per year
Collection: PE collects from FL DOE Data Warehouse
Data entry: Project staff download files
Analysis: 1x per year

School Staff Demographics (See School Staff Data Protocol)*
Timeline: 1x per year
Collection: PE collects from FL DOE Data Warehouse
Data entry: Project staff download files
Analysis: 1x per year

School Level Student and Systemic Outcomes
Each of the following measures is collected 1x per year; Project staff download the files, and each is analyzed 1x per year.

SAT 10/FCAT (See Individual Student Data Protocol)*: PE collects from FL DOE Warehouse
DIBELS/CBM (See Individual Student Data Protocol)*: PE collects from FCRR
ODRs (See Systemic Outcome Data Protocol)*: PE collects from FL DOE Warehouse
PST Referrals (See Systemic Outcome Data Protocol)*: PE collects from districts
ESE Referrals (See Systemic Outcome Data Protocol)*: PE collects from FL DOE Warehouse
ESE Evaluations (See Systemic Outcome Data Protocol)*: PE collects from FL DOE Warehouse
ESE Placements (See Systemic Outcome Data Protocol)*: PE collects from FL DOE Warehouse
Absences (See Individual Student Data Protocol)*: PE collects from FL DOE Warehouse
Retentions (See Individual Student Data Protocol)*: PE collects from FL DOE Warehouse

Other Process Measures

Coaching Evaluation Survey**
Timeline: 1x per year
Collection: Mailed to principals to be completed by SBLTs
Data entry: Uploaded via scantron by Project staff
Analysis: 1x per year

Technical Assistance Evaluation Survey (Statewide Training Versions?): NOT COMPLETED DURING YEAR 1

Other Outcome Measures

Parent Satisfaction Survey*: NOT COMPLETED DURING YEAR 1

