
Evaluation of a self-monitoring program to increase treatment integrity of behavior intervention plans

Material Information

Title:
Evaluation of a self-monitoring program to increase treatment integrity of behavior intervention plans
Physical Description:
Book
Language:
English
Creator:
Taylor, Lela E
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla
Publication Date:
2009
Subjects

Subjects / Keywords:
Behavioral consultation
Performance feedback
Self-management
Treatment fidelity
Sustainability
Dissertations, Academic -- Child and Family Studies -- Masters -- USF   ( lcsh )
Genre:
non-fiction   ( marcgt )

Notes

Abstract:
ABSTRACT: The growing number of school-aged children displaying challenging behavior has increased the need for effective interventions. School-based consultants (SBCs) report using behavioral consultation to assist teachers in designing behavior intervention plans (BIPs) that help students engage in appropriate behavior in the classroom. Research indicates that direct training methods increase teachers' implementation of the BIP. One commonly used direct training method, performance feedback (PF), is used to assess teachers' treatment integrity. Research also indicates that checklists (non-direct measures) are more cost-efficient methods. The purpose of this paper was to evaluate a direct training method used to train teachers to self-monitor their own implementation of their student's BIP, in an effort to increase the accuracy of self-report and produce sustainable treatment integrity outcomes. Two educators who worked with children with challenging behavior participated in this study. The effect of self-monitoring on both educators' implementation of BIPs was evaluated. Results indicated that both educators' implementation increased and was maintained during the maintenance phase. Results also indicated that the educators' accuracy of reporting was similar to that of independent observers.
Thesis:
Thesis (M.A.)--University of South Florida, 2009.
Bibliography:
Includes bibliographical references.
System Details:
Mode of access: World Wide Web.
System Details:
System requirements: World Wide Web browser and PDF reader.
Statement of Responsibility:
by Lela E. Taylor.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 54 pages.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 002069426
oclc - 608474017
usfldc doi - E14-SFE0003241
usfldc handle - e14.3241
System ID:
SFS0027557:00001


Full Text

Evaluation of a Self-Monitoring Program to Increase Treatment Integrity of Behavior Intervention Plans

by

Lela E. Taylor

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Arts, College of Graduate Studies, University of South Florida

Major Professor: Carie L. English, Ph.D.
Roger A. Boothroyd, Ph.D.
Bryon R. Neff, M.S.

Date of Approval: October 16, 2009

Keywords: Behavioral Consultation, Performance Feedback, Self-Management, Treatment Fidelity, Sustainability

Copyright 2009, Lela E. Taylor

Table of Contents

List of Tables
List of Figures
Abstract
CHAPTER 1. Introduction and Literature
  Behavioral Consultation
  Treatment Integrity
  Sustainability of School-based Interventions
  Self-Report
  Self-Monitoring
CHAPTER 2. Method
  Participants
  Response Definitions and Reliability
  Adherence of Implementation
  Accuracy of Implementation
  Interobserver Agreement
  Prevent-Teach-Reinforce Process
  Experimental Design and Procedure
  Baseline
  Self-Monitoring Training
  Maintenance
CHAPTER 3. Results
CHAPTER 4. Discussion
References
Appendix A: Informed Consent
Appendix B: Maria's Fidelity Sheet
Appendix C: Lenora's Fidelity Sheet
Appendix D: Maria's Checklist
Appendix E: Lenora's Checklist

List of Tables

Table 1. Reliability Scores

List of Figures

Figure 1. Percentage of Implementation of Behavior Intervention Plans

Evaluation of a Self-Monitoring Program to Increase Treatment Integrity of Behavior Intervention Plans

Lela E. Taylor

ABSTRACT

The growing number of school-aged children displaying challenging behavior has increased the need for effective interventions. School-based consultants (SBCs) report using behavioral consultation to assist teachers in designing behavior intervention plans (BIPs) that help students engage in appropriate behavior in the classroom. Research indicates that direct training methods increase teachers' implementation of the BIP. One commonly used direct training method, performance feedback (PF), is used to assess teachers' treatment integrity. Research also indicates that checklists (non-direct measures) are more cost-efficient methods. The purpose of this paper was to evaluate a direct training method used to train teachers to self-monitor their own implementation of their student's BIP, in an effort to increase the accuracy of self-report and produce sustainable treatment integrity outcomes. Two educators who worked with children with challenging behavior participated in this study. The effect of self-monitoring on both educators' implementation of BIPs was evaluated. Results indicated that both educators' implementation increased and was maintained during the maintenance phase. Results also indicated that the educators' accuracy of reporting was similar to that of independent observers.

Chapter 1: Introduction and Literature

According to a 2005 national report from the Centers for Disease Control and Prevention (CDC), one in every twenty children between the ages of 4 and 17 has been identified (by their parents) as having difficulties behaviorally, academically, or socially (Simpson, Bloom, Cohen, Blumberg, & Bourdon, 2005). With the growing number of school-aged children displaying challenging behavior, the need for effective methods to reduce such behaviors increases. Functional behavior assessment (FBA) is one method that has proven successful in determining the variables affecting challenging behavior (e.g., Blair, Liaupsin, Umbreit, & Kweon, 2006; Burke, Hagan-Burke, & Sugai, 2003; Hughes, Alberto, & Fredrick, 2006; Stahr, Cushing, Lane, & Fox, 2006; Umbreit, Lane, & Dejud, 2004). Furthermore, behavior intervention plans (BIPs) based on the results of FBAs have resulted in reductions in challenging behavior (Ingram, Lewis-Palmer, & Sugai, 2005). Within schools, the FBA can be an effective tool for teachers to use when challenging behaviors are exhibited; however, many teachers have not received formal training to conduct an FBA (Scott et al., 2004). School-based consultants (SBCs; e.g., behavior analysts and school psychologists) have the knowledge and skills to conduct an FBA and develop a BIP; however, the FBA/BIP process is implemented differently in different settings. Some SBCs utilize an expert model: they conduct the FBA/BIP process independently and then provide the information to the teacher, leaving the teacher to implement the plan alone (Witt & Martens, 1988). Others use behavioral consultation to involve teachers and assist them through the FBA process (Sheridan, Welch, & Orme, 1996; Wickstrom, Jones, LaFleur, & Witt, 1998; Wilkinson, 2006).

Behavioral Consultation

Behavioral consultation is a team-based process in which the SBC problem-solves with the teacher to increase student engagement and appropriate behavior in the classroom. This process consists of identifying and assessing the challenging behavior and implementing and evaluating the plan (Kratochwill & Bergan, 1990). Because a team approach is utilized, at least one person with expertise in the FBA/BIP process works with the teacher to develop a BIP that matches the function of the challenging behavior, as well as the contextual fit of the classroom (Benazzi, Horner, & Good, 2006). By utilizing a team-based approach, greater teacher ownership of the BIP is observed, increasing the likelihood of implementation in the classroom (Witt & Martens, 1988).

While having all necessary stakeholders on the FBA/BIP team is likely to result in implementation of the BIP in the classroom, it does not guarantee the accuracy with which the plan is implemented. Failure to accurately implement the plan may limit its effectiveness or, worse, may result in an increase in challenging behavior. Thus, student behavior is not the only behavior targeted for change in the behavioral consultation process; teacher behavior also must change. Research has shown that different variables are associated with teacher implementation (e.g., Han & Weiss, 2005; Hughes, Grossman, & Barker, 1990), including school-based, teacher-related, and plan-specific variables.

Han and Weiss (2005) discussed different variables that affect a program's implementation by a teacher. One school-related variable that may influence implementation is support from the principal; the principal's involvement has been shown to increase the teacher's likelihood of implementing the intervention (Gillat & Sulzer-Azaroff, 1994). Additional variables that may influence the teacher's likelihood of implementing a BIP include a prior history of successes and failures. Teachers who have not been successful in producing behavior change in the past, or those who do not believe school procedures facilitate behavior change, are less likely to implement intervention plans with efficacy. Other variables specifically related to the plan include the teacher's buy-in to the program, the perceived level of difficulty, the anticipated effectiveness, the time it takes to implement the plan, and the plan's compatibility with the teacher's own beliefs about student behavior. Additional variables may include resources that are not available in the classroom setting, lack of adequate training, and the need for several people to implement the plan (Gresham, 1989; Yeaton & Sechrest, 1981). Given that many variables may influence a teacher's motivation to implement a BIP, SBCs should strive to address these issues prior to implementation. Providing adequate teacher training prior to implementation is one method that may decrease the effects of plan-related variables on implementation (Kratochwill & Bergan, 1990). Indirect and direct training methods are used when training the teacher on the implementation of the BIP (Sterling-Turner, Watson, Wildmon, & Watkins, 2001).

Indirect training consists of written or spoken instructions that describe the intervention. Direct training involves the SBC demonstrating the specific skills via role-playing, modeling, and/or rehearsal, with the trainee receiving corrective or positive feedback from the trainer (Kratochwill & Bergan, 1990; Sterling-Turner et al., 2001; Watson & Robinson, 1996).

In 2001, Sterling-Turner and colleagues examined which training method (direct vs. indirect) led to better treatment integrity. The participants were 64 undergraduate students who were trained to implement a treatment protocol for a confederate who exhibited a facial tic. Participants were trained using one of three training methods: didactic training (DT), modeling training (MT), or rehearsal/feedback training (RFT). Participants trained with didactic training received a verbal explanation of the treatment procedures, which was considered an indirect training method. Participants trained with modeling, a more direct method, watched a videotape of a person implementing the treatment protocol while being given a verbal explanation of the treatment components. Participants trained with rehearsal/feedback training, the most direct method, practiced the actual protocol with the experimenter and confederate and received positive or corrective feedback on their implementation of the treatment protocol. Results indicated that participants who received the two most direct training methods, RFT and MT, obtained higher mean integrity scores than participants who received the indirect training, DT. The highest mean integrity scores were obtained by participants who received the most direct training method, RFT. These results suggest that direct training methods lead to higher treatment integrity scores.

Sterling-Turner and colleagues' (2001) results suggest that merely discussing the BIP during the behavioral consultation process may not lead to sufficient implementation. More direct training methods are needed to increase the likelihood of sufficient implementation (Watson & Robinson, 1996). Additional strategies that could be used with direct training methods to increase teacher implementation of BIPs are training in the natural environment (Kratochwill & Bergan, 1990) and using prompts (Petscher & Bailey, 2006). Utilizing direct training methods increases the likelihood not only that teachers will implement the BIP but also that the plan will be implemented as intended.

Treatment Integrity

The extent to which the teacher implements the planned intervention as intended is called treatment integrity (Elliott & Busse, 1993; Gresham, 1989; Kratochwill & Bergan, 1990) or fidelity (Moncher & Prinz, 1991). When measuring treatment integrity, the consistency and accuracy of implementation are examined (Gresham, 1989; Lane, Bocian, MacMillan, & Gresham, 2004), providing further support that changes in student behavior are related to the intervention rather than to extraneous variables (Kratochwill & Bergan, 1990). In addition, treatment integrity scores lend greater support to the external validity of an intervention because they demonstrate that changes in student behavior were a result of the intervention (Peterson, Homer, & Wonderlich, 1982).

While research suggests that it is necessary to measure treatment integrity, it is unclear what level of integrity is needed to yield desired changes in behavior. For example, if a prescribed intervention instructs the teacher not to attend to the challenging behavior and to praise the appropriate behavior, but the teacher only intermittently implements this strategy, the plan may result in a decrease in challenging behavior, but the decrease may not be sufficient due to the inconsistency in teacher implementation. Research has yielded mixed results on the level of treatment integrity necessary to produce desired behavior change. For example, Gresham (1989) suggests that training participants to a high level of integrity (e.g., 80% or higher) is needed to produce changes in student behavior. However, Gansle and McMahon (1997) examined three different levels of integrity of a self-monitoring (SM) procedure that included three components: self-monitoring, feedback and reward, and graphing of behavior. The purpose of the study was to determine whether different levels of teacher integrity predicted change in student behavior. Participants were 21 third- to sixth-grade student-teacher dyads assigned to one of three treatment integrity groups: 100% integrity (SM with feedback, reward, and graphing), 83.3% integrity (SM with feedback and reward), and 66.7% integrity (SM only). All student participants were trained to self-monitor their behavior; however, teachers in the higher treatment integrity groups were taught how to provide feedback and rewards and to graph behavior based on student self-monitoring records. Teachers in the 66.7% integrity group were not trained to provide any information to students on their self-monitoring records. Prior to the commencement of the intervention, one appropriate and one inappropriate classroom behavior was selected for each student to monitor. The accuracy of teacher implementation of treatment components was assessed by teacher self-report and by collection of permanent products, including records of data sheets, rewards earned by the student, and graphs of behavior, contingent on the condition. Results indicated that mean decreases in recorded inappropriate behavior were similar regardless of integrity level. However, higher levels of treatment integrity (i.e., implementing the feedback, reward, and graphing components) resulted in higher means of appropriate behavior recorded in the classroom. The results of the Gansle and McMahon (1997) study indicate that different levels of treatment integrity result in different levels of behavior change. While reports of inappropriate behavior were similar across groups, reports of appropriate behavior were higher for groups receiving some type of feedback on their reporting. These results suggest that implementing an intervention with greater treatment integrity yields greater changes in behavior; however, it remains unclear what level of treatment integrity is necessary to produce a given level of behavior change.

Despite the importance of the monitoring process in behavioral consultation, treatment integrity has not been assessed and/or reported adequately in experimental studies (Gresham, 1989; Gresham et al., 1993, 2000; Peterson et al., 1982). For instance, Peterson and colleagues (1982) examined 539 experimental studies published in the Journal of Applied Behavior Analysis from 1968 to 1980 and analyzed the operational definition(s) of, and reported adherence for, the independent variables (IV). Both variables were classified into three categories: 1) IV reliability and operational definition were included; 2) IV reliability and operational definition were not included but were classified as low risk for error or unnecessary; and 3) IV reliability and operational definition were not reported but were considered high risk for inaccurate reporting or in need of further information. High and low risk of inaccuracy was determined by the measurement tool used to measure implementation of treatment components. For instance, one study categorized as low risk included permanent products, while another used a machine that automatically distributed the treatment components. An operational definition was considered unnecessary when the definition was clear and concise (i.e., the machine gave a child one M&M) or when a citation was provided that led to further detail. Results indicated that only 16% of articles operationally defined and reported reliability assessment of the IV. Other meta-analytic studies reported similar findings: the IV was operationally defined in approximately 35% of studies and assessed in fewer than 20% (Gresham et al., 1993, 2000). In the discussion section, Peterson and colleagues (1982) noted a "curious double standard" (p. 478) because more methodological rigor was applied to the dependent variable (DV) than to the IV in both assessment and operational definitions. Even though the dependent variables demonstrate the effectiveness of the intervention, if the independent variable is not clearly defined and assessed, internal and external validity problems may occur (Moncher & Prinz, 1991). Research demonstrates that numerous studies have not adequately reported assessment of the independent variable, raising questions about the internal and external validity of those studies. Although few studies report treatment integrity scores, treatment integrity is a critical variable to measure when evaluating the effectiveness of BIPs implemented in classroom settings because it allows the SBC to identify the level of accuracy of implementation, strengthening the internal and external validity of the intervention.

To make sure that the plan is being implemented with integrity, one must monitor the implementation process. There are different ways to measure treatment integrity. Direct methods (systematic observation) require someone to see the plan being implemented in its environment. In contrast, indirect methods obtain information on implementation via self-reports, questionnaires, and behavior rating scales completed by persons in the environment (Gresham, 1989).

Direct observation requires someone, such as the SBC, a data collector, or someone else in the natural environment, to observe the implementation of the prescribed plan. When using direct observation to monitor treatment integrity, Gresham (1989) identified three steps that should be considered. Step one is to make sure that the components of the intervention are clearly defined. Step two is to have a way to measure both the occurrence and nonoccurrence of the components of the intervention. Step three is to use percentages to measure the treatment integrity of each person implementing the plan. By using these three steps during direct observation, the observer is able to focus on the variables targeted for the intervention, report the integrity of those variables, and assess change in integrity over time (Gresham, 1989).
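To make Gresham's three steps concrete, the following sketch shows one way an observer's record could be turned into an integrity percentage. It is an illustration only: the component names and the yes/no record are hypothetical, and the dictionary representation is an assumption, not a procedure drawn from the literature.

    # Minimal sketch of Gresham's (1989) three steps for direct observation.
    # Hypothetical data: the components and the observed occurrences below
    # are invented for illustration.

    # Step 1: clearly defined intervention components.
    COMPONENTS = ["reduce assignment", "prompt break request", "praise on-task behavior"]

    def integrity_percentage(observation: dict) -> float:
        """Steps 2 and 3: count occurrence vs. nonoccurrence of each
        component and express integrity as a percentage."""
        occurrences = sum(bool(observation[c]) for c in COMPONENTS)
        return 100.0 * occurrences / len(COMPONENTS)

    # One observation of one implementer: 2 of 3 components occurred.
    session = {"reduce assignment": True,
               "prompt break request": False,
               "praise on-task behavior": True}
    print(f"Integrity: {integrity_percentage(session):.0f}%")  # Integrity: 67%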

Even though direct observation allows the SBC to observe what is occurring in the environment, this method has limitations, including the implementer's reactivity to being observed (Gresham, 1989). Reactivity is defined as a change in behavior due to the presence of an observer (Johnston & Pennypacker, 1993). Brackett, Reid, and Green (2007) examined the effects of being observed on staff performance. The participants were two job coaches assigned to work with three support workers who were unable to walk, had limited upper body movement, and communicated via gestures and vocalizations. Each job coach was trained to prompt the support workers to complete the steps of requesting a work break. The dependent variable, whether the job coach prompted the worker to take a break, was measured during direct observations, inconspicuous observations, and inconspicuous observations during which the job coach was required to self-report on their implementation. Results indicated that during direct observation, neither job coach correctly prompted the support workers to complete the steps for work breaks. In fact, during inconspicuous observations, the job coaches were completing the steps of the work breaks for the support workers rather than prompting them to complete the steps. These results lend further support to the idea that direct observation may produce reactivity and, thus, an inaccurate representation of the actual implementation of an intervention.

Gresham (1989) discusses practical ways in which an observer could minimize reactivity. Three suggestions include: 1) varying the observation schedule and using "spot checks" on the implementation of intervention plans, 2) being inconspicuous with observational procedures (e.g., sitting in the back of the room or hiding measurement tools), and 3) not stating the purpose of the observation until it is finished. Even though reactivity is a natural reaction, it can misrepresent the actual performance of the person being observed. More specifically, when treatment integrity is influenced by the SBC's presence in the classroom, the result is an unclear picture of what is implemented when the SBC is not present, weakening the BIP's treatment integrity results.

In conclusion, there are advantages and disadvantages to using direct observation as a measurement tool. The primary advantage is that it allows the SBC to observe what is occurring within the classroom. The disadvantages are that it can be very time-consuming and, due to possible reactivity, it may not provide an accurate measure of implementation.

After direct observation of an intervention plan, performance feedback is often used to inform the implementer of their performance (DiGennaro, Martens, & McIntyre, 2005; Jones, Wickstrom, & Friman, 1997; Mortenson & Witt, 1998; Noell, Duhon, Gatti, & Connell, 2002). During performance feedback, the observer reviews the data, praises correct implementation, addresses incorrect implementation if needed, and discusses questions or comments (Codding, Feinberg, Dunn, & Pace, 2005). Mortenson and Witt (1998) assessed the effects of performance feedback on implementation of a reinforcer-based classroom intervention. The participants were four classroom teachers. The experimental conditions were teacher training, no assistance after training, performance feedback, and maintenance (no assistance/feedback). The criterion that initiated the performance feedback condition was a decline in implementation to less than 70% accuracy. Teachers who dropped below the criterion level participated in weekly meetings with the consultant. During the weekly meetings, discussion consisted of (1) a review of treatment integrity data and student academic performance, (2) positive feedback for correct implementation of treatment components and corrective feedback for missed or incorrect implementation, (3) verbal agreement of the teachers' commitment to the plan, and (4) a reminder to continue submitting data summaries and of the upcoming week's meeting. Results indicated that performance feedback increased three of the four teachers' implementation. Only one of the four teachers did not meet the criterion for the performance feedback condition, because her implementation remained at or above 80%. During the maintenance condition, only two teachers participated because the third teacher's student was absent for the remainder of the study. The results indicated that one teacher displayed a slight decrease in implementation from 80% to 72%, while the other teacher demonstrated more stable and higher levels of implementation. The authors noted that the teacher who demonstrated a higher and more stable level of implementation received more performance feedback sessions, which may have contributed to better treatment integrity outcomes.

These results yield two implications. First, performance feedback can produce an immediate increase in implementation, but the removal of the consultant may lead to a decrease in implementation. This suggests that the consultant's presence is necessary for continued high implementation. It also poses practical concerns because some consultants are not permanently stationed in the school. The second implication is that performance feedback can become a time-consuming process for the consultant and the teacher, as the results indicated that more performance feedback sessions produced better implementation. This may pose a practical challenge for consultants who cannot continually meet with teachers due to insufficient time (Wilczynski, Mandal, & Fusilier, 2000). While performance feedback is an effective tool to increase the implementation of treatment interventions, the time frame needed to fade performance feedback can pose a limitation, especially if implementation decreases when consultants leave the environment. Such decreased implementation affects the sustainability of behavior change.

Sustainability of School-based Interventions

Sustainability is defined as the continuation of implementation of the intervention after the training and supports have been removed. While some authors have argued that the factors linked to sustainability are limited (Gersten, Chard, & Baker, 2000), other investigators have identified factors that either hinder or support sustainability. For example, Horner, Sugai, Lewis-Palmer, and Todd (2001) identified unclear curricular expectations, ineffective instructional delivery, inadequate staff and administrative support, underfunded budgets, and the failure to provide ongoing and meaningful feedback as factors that hinder sustainability. Factors identified as supporting sustainability include district commitment to the intervention (Klingner, Arguelles, Hughes, & Vaughn, 2001; Vaughn, Klingner, & Hughes, 2000), leadership (Klingner et al., 2001; Greenberg, Weissberg, O'Brien, Zins, Fredericks, Resnik, & Elias, 2003), and teachers' acceptance of the intervention (Gersten et al., 2000; Klingner et al., 2001; Vaughn et al., 2000). Researchers argue that addressing such factors builds a system in which program implementation is more likely to be successful (Grimes, Kurns, & Tilly III, 2006; Klingner et al., 2001; Massey, Armstrong, Boroughs, Henson, & McCash, 2005).

In trying to address the school- and district-level factors that affect sustainability, researchers have used different approaches. One approach is the PAR model, which stands for Prevent, Action, and Resolution (Rosenberg & Jackman, 2003). PAR is a consensus-based team approach in which teachers, administrators, family members, and other service providers share responsibility for making decisions about the rules and consequences established. Another approach, proposed by Sugai, Horner, Sailor, Dunlap, Eber, Lewis, et al. (2005), outlined nine steps for promoting the successful implementation and sustainability of positive behavior supports in schools. These steps are explained further in the School-wide Positive Behavior Support (PBS) Implementers' Blueprint and Self-Assessment, which depicts the critical implementation elements to be addressed when implementing PBS. These elements include: 1) leadership, 2) coordination, 3) funding, 4) visibility, 5) political support, 6) training capacity, 7) coaching capacity, 8) demonstrations, and 9) evaluation.

Although school and district factors are critical to the successful long-term sustainability of interventions, addressing them takes ample time and effort. Because teachers typically implement interventions, research on classroom strategies and teacher supports related to sustainability should be explored further. Classroom factors that might hinder or improve classroom-level implementation include the teacher's acceptance of the program, the time it takes to implement the plan, resources that are or are not available in the classroom setting, and the need for additional staff for implementation (Gresham, 1989; Yeaton & Sechrest, 1981). Researchers have examined indirect methods to address classroom factors, including self-reports and self-monitoring. Indirect methods provide documentation of the strategies implemented by the teacher. Unlike direct observation, reactivity is less likely to occur. Furthermore, the document itself may serve as a prompt for implementation, which helps make implementation of the plan sustainable. Despite these advantages, indirect methods have one main limitation: the SBC is relying on the teacher's self-report that the data provided are an accurate representation of what is occurring in the environment. Research has indicated that this can lead to biased reporting of what is actually occurring in the environment (Wickstrom et al., 1998).

Self-Report

Self-report is a measure in which the implementer reports their level of implementation of each treatment component (Gresham et al., 2000; Wilkinson, 2006). One common format for the implementer to report integrity of implementation is a questionnaire (Jensen & Haynes, 1986). The questionnaire can vary in the responses used to measure treatment integrity. For example, the consultant may use a dichotomous response measure, asking whether the treatment component was implemented or not, or a Likert scale ranging from 1 to 5 (strongly disagree to strongly agree) that measures the extent to which the treatment component was implemented (Gresham, 1989). One of the main advantages of self-report measures is that they are cost-efficient and require little time from the implementer and consultant (Hartman, Roper, & Bradford, 1979; Jensen & Haynes, 1986). Despite the benefits of self-report, one main disadvantage is that certain biases, such as "social desirability," can occasion inaccurate reporting of implementation (Jensen & Haynes, 1986). For example, a teacher may report that she is implementing the plan as intended but in actuality has not implemented it to the extent reported, because she is trying to please the consultant or other authorities such as administrators (Robbins & Gutkin, 1994; Wickstrom et al., 1998). Wickstrom and colleagues (1998) examined the relation between selected treatments and actual teacher implementation of the treatment. The participants were 29 consultant-teacher dyads in regular education classrooms. Treatment integrity was monitored by teacher self-report and direct observation. Results indicated that the mean teacher self-report was 54%; however, the mean observer score of integrity was 4%. Thus, a large disparity was seen between teacher self-report and observed measures of treatment integrity. Similar findings were reported by Robbins and Gutkin (1994), who examined three teachers' implementation of a recommended intervention. The teachers self-reported that they implemented the intervention, but actual observation showed little to no change in teacher behavior. Due to the over-reporting that has occurred on self-report measures, additional monitoring measures may be needed to accurately measure treatment integrity. However, this requires additional time and resources that are often not available in school settings.

Self-Monitoring

Self-monitoring is a measure that requires the individual to self-report their own behavior but also trains the person to observe and record the targeted behavior (Bornstein, Hamilton, & Bornstein, 1986). Research suggests that self-monitoring can result in behavior change (Frith & Armstrong, 1985). When training the individual to self-monitor, two behaviors considered are reactivity and accuracy (Shapiro, Durnan, Post, & Levinson, 2002). Reactivity is lessened because the individual is examining, and receiving immediate results on, their own behavior. In addition, the consultant can take less responsibility for prompting the desired behavior (Richman, Riordan, Reiss, Pyles, & Bailey, 1988). The accuracy of self-monitoring also can be assessed by internal agents such as supervisors (Richman et al., 1988), teachers (DiGangi & Rutherford, 1991), or students in the classroom (Gilberts, Agran, Hughes, & Wehmeyer, 2001), which increases cost efficiency. Self-monitoring has been used with various populations, including students (Gureasko-Moore, DuPaul, & White, 2006), residential staff (Richman et al., 1988; Suda & Miltenberger, 1993), and persons with disabilities (Gilberts et al., 2001). To date, self-monitoring has been used most frequently throughout the educational literature as a tool to help students who are exhibiting academic and challenging behavior (e.g., DiGangi & Rutherford, 1991; Gilberts et al., 2001; Maag & Reid, 1993). It also has been used with teachers in an effort to change their behavior in the classroom (e.g., Allinder, Bolling, Oats, & Gagnon, 2000; Browder, Liberty, Heller, & D'Huyvetters, 1986; Kalis, Vannest, & Parker, 2007).

In 2004, Munton examined the effects of three consultation follow-up methods (tip sheet, checklist, and performance feedback) on treatment integrity and student disruptive behavior. The participants consisted of 9 teacher-student dyads. The tip sheet condition consisted of a sheet that provided examples and non-examples of how to handle student disruptive behavior. The checklist condition required the teacher to record their implementation of each intervention component. The performance feedback session included data on treatment integrity, student progress, and positive and corrective feedback from the consultant. All participants' initial training consisted of reading through the tip sheet, providing examples of each step, suggesting a review of the tip sheet once per week, and keeping the tip sheet at the teacher's desk. The variables assessed were student behavior, treatment integrity, teacher and consultant time, school time, and social validity. Results indicated that rates of student disruptive behavior were lowest and treatment integrity scores were highest during the checklist and performance feedback conditions. In addition, both interventions were rated as acceptable by teachers. According to these results, the checklist was the most cost-efficient method in terms of cost to implement and benefits produced.

Research has demonstrated that self-monitoring is an effective strategy for changing the behavior of the implementer (e.g., Allinder et al., 2000; Browder et al., 1986; Kalis et al., 2007) and allows the individual to take responsibility for their own behavior (Gilberts et al., 2001). In addition, self-monitoring has been reported to be a "non-intrusive intervention, easy to implement, allows for immediate feedback, and can be effective in changing behavior" (Kalis et al., 2007, p. 26). Given that research has shown direct training methods are more effective when teaching implementation (Sterling-Turner, Watson, Wildmon, & Watkins, 2001), utilizing a direct training method to train educators to self-report may increase the accuracy of implementation and decrease the reactivity associated with direct observation procedures. No studies were found that used direct training methods to teach teachers to self-report on their implementation of behavior intervention plans. Given this, the first purpose of this study was to use a self-monitoring procedure to increase and sustain implementation. The second purpose was to determine whether the teachers' level of reporting remained consistent with the consultant's. To evaluate this, educators were trained to evaluate their self-report sheets to determine the accuracy of implementation of the students' individualized behavior intervention plans. Training consisted of direct methods, including modeling, role-playing, and performance feedback. Teachers were asked to observe their behavior and record the occurrence of each intervention component. In addition, at the end of implementation, teachers were required to calculate their treatment integrity scores, providing immediate feedback on their implementation for that day.

Chapter 2: Method

Participants

Participants in this study were two special educators from the Tampa Bay area who had nominated students with challenging behavior in their classrooms for participation in the Prevent-Teach-Reinforce (PTR) individual behavior support project. The first participant, Maria, was a second-grade educator. The second participant, Lenora, was a paraprofessional who provided one-on-one services for a fifth-grade student with developmental disabilities. Both educators signed an informed consent form (see Appendix A) to participate in the PTR project, including participation in this research project. Both educators qualified for participation in this study due to low treatment integrity scores during the PTR process (see Procedure below). This study took place in the respective educators' public school classrooms. Participants were provided with all materials needed to complete the study.

Response Definitions and Reliability

Adherence of implementation. For purposes of this study, adherence of implementation was defined as implementing the specific steps of each intervention component required to demonstrate a minimal effect. The criterion for each intervention component was individualized for each student's behavior intervention plan. For example, Maria's BIP (see Appendix B) included a curricular modification, which involved two steps. The first step was reducing the student's assignment. The second step was reducing the assignment by 25 to 50% immediately (within one minute) after presenting the assignment and before problem behavior began. For adherence of implementation of this student's BIP, the educator had to at least reduce the student's assignment by some amount for the intervention to function as an effective prevention strategy. Examples of the fidelity sheets designed for the participants' students are provided in Appendices B and C. During implementation, if the educator reduced the student's assignment, she received a "yes," indicating that the intervention component was implemented with adherence, because the step needed to effectively prevent challenging behavior was implemented. The total percentage of adherence of implementation was calculated for each participant by dividing the number of components scored as adhered to by the total number of intervention components to be adhered to, multiplied by 100.

Accuracy of implementation. For purposes of this study, accuracy of implementation served as a measure of the quality of implementation and was defined as implementing all of the specific steps under each intervention component in order to demonstrate an optimal effect. The criterion for each intervention component was individualized for each student's BIP. For example, as previously mentioned, the curricular modification component involved two steps: reducing the assignment, and immediately reducing it by 25 to 50% prior to problem behavior (see Appendix B). To obtain the optimal effect, both steps had to be implemented. Thus, if the educator completed both steps, a score of "yes" was obtained for accuracy of implementation of that component. The total percentage of accuracy of implementation was calculated for each participant by dividing the number of components scored as accurate by the total number of intervention components, multiplied by 100.
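The adherence/accuracy distinction can be expressed as a small two-level scoring rule. The sketch below is a hypothetical rendering of the definitions above; the component names, the step counts, and the assumption that the first task-analyzed step constitutes the minimal requirement are invented for illustration and are not taken from the appendices.

    # Hypothetical sketch of the adherence vs. accuracy scoring described
    # above: a component is "adhered to" if its minimal step was implemented,
    # and "accurate" only if every step was implemented.

    from dataclasses import dataclass

    @dataclass
    class ComponentRecord:
        name: str
        steps_done: list  # one True/False flag per task-analyzed step, in order

        @property
        def adhered(self) -> bool:
            # Assumption: the first step is the minimal requirement.
            return bool(self.steps_done[0])

        @property
        def accurate(self) -> bool:
            # Optimal effect: every step implemented.
            return all(self.steps_done)

    def percentage(flags) -> float:
        return 100.0 * sum(flags) / len(flags)

    session = [
        ComponentRecord("curricular modification", [True, False]),  # reduced, not within 1 min
        ComponentRecord("break prompt", [True, True]),
    ]
    print(f"Adherence: {percentage([c.adhered for c in session]):.0f}%")  # 100%
    print(f"Accuracy:  {percentage([c.accurate for c in session]):.0f}%")  # 50%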

Interobserver Agreement. During the baseline and maintenance phases, an independent observer and the researcher scored the adherence and accuracy of implementation. Prior to observation of educator implementation, the independent observer was trained in scoring implementation of the intervention components. The training consisted of instructions on which intervention components to look for, examples of adherence and accuracy of implementation, and practice scoring sessions on videotape. The independent observer scored 100% on three consecutive sessions during training to become eligible to score BIPs in the classroom. During classroom observations, each observer was provided with a sheet that contained each task-analyzed intervention component for each participant.

Reliability scores obtained during observations included 1) treatment integrity and 2) educator reporting. Treatment integrity was measured by calculating the adherence and accuracy scores of implementation. The reliability score for treatment integrity between the independent observer and the researcher was calculated as agreements on prescribed intervention components observed divided by agreements plus disagreements on prescribed intervention components implemented during an observation period. Reliability checks on treatment integrity were conducted during at least 30% of all sessions. Reliability scores for participant one averaged 88% [range, 75 to 100] in baseline, 85% [range, 75 to 94] in intervention, and 88% [range, 88] in maintenance. Participant one received 8 sessions of reliability checks and participant two received 7 sessions of reliability checks. Reliability scores for participant two averaged 90% [range, 90] in baseline, 79% [range, 60 to 90] in intervention, and 90% [range, 79 to 100] in maintenance.

During intervention, the accuracy of educator reporting was measured by comparing the researcher's scores with the scores of the educator. Reliability scores for educator reporting were calculated from the researcher's or the independent observer's scores and the educator's scores (see Table 1). Educator reporting was calculated as the number of intervention components accurately reported when compared to the researcher or independent observer. A percentage of reliability accuracy was obtained by dividing the total number of agreements on prescribed intervention components implemented by the agreements plus disagreements on prescribed intervention components implemented during an observation period, multiplied by 100. The researcher's, Maria's, and Lenora's reliability results are depicted in Table 1. Maria's reliability with the researcher averaged 84% [range, 80 to 90]; thus, Maria's accuracy of reporting was highly reliable with the researcher. Lenora's scores also indicated high reliability with the researcher, averaging 85% [range, 70 to 94]. Thus, both educators were highly accurate when reporting their level of implementation during an observation.
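Both reliability scores reduce to the same formula: agreements divided by agreements plus disagreements, multiplied by 100. A minimal sketch, assuming two parallel component-by-component yes/no records (the records shown are invented):

    # Sketch of the agreement calculation described above. The two records
    # are hypothetical component-by-component scores from two observers
    # (or from an observer and the educator).

    def percent_agreement(record_a, record_b) -> float:
        if len(record_a) != len(record_b):
            raise ValueError("records must cover the same components")
        agreements = sum(a == b for a, b in zip(record_a, record_b))
        disagreements = len(record_a) - agreements
        return 100.0 * agreements / (agreements + disagreements)

    researcher = [True, True, False, True, False]
    educator = [True, True, True, True, False]
    print(f"Agreement: {percent_agreement(researcher, educator):.0f}%")  # 80%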

Table 1. Reliability Scores (percentage agreement between the consultant and each participant, per observation)

                 Maria    Lenora
Observation 1     90%      94%
Observation 2     82%      81%
Observation 3     80%      94%
Observation 4     83%      70%

Prevent-Teach-Reinforce Process

The PTR process contained five steps: team building, goal setting, assessment, intervention, and evaluation. The first step, team building, encouraged everyone who worked or had worked with the student to be part of the student-specific team and included discussion of how to work together as a team to ensure effective team functioning. The second step, goal setting, determined both the appropriate and inappropriate behaviors targeted for the intervention. After the goals were set, baseline data collection on the student's targeted behaviors commenced. The third step, assessment, involved conducting a functional behavior assessment (FBA) to gather information on events in the environment that may have an effect on the student's behavior and, ultimately, to determine the function(s) of the behavior(s) targeted for reduction. The fourth step, intervention, involved determining the specific strategies used in the BIP. The fifth step was the primary focus of this study and required training the educator in the implementation of the BIP and evaluating treatment integrity to determine the level of implementation.

During the fifth step of the PTR process, the educator was taught how to implement the student's individualized behavior intervention plan. Initially, each educator was provided with verbal instructions on how to implement the BIP. Next, the educator was given an opportunity to role-play and to implement with the student the intervention components written in the plan. Once the educator demonstrated that she understood how to implement the BIP, the original consultant withdrew from the educator's classroom. If more training was needed, as indicated by integrity levels falling below 70% on three consecutive days (see below for specific procedures), further support was provided as part of this study. From the PTR sample, two educators qualified to participate in this study.

Experimental Design and Procedure

To examine the effect of self-monitoring on educators' adherence and accuracy of implementation, this study used a multiple baseline design (Kazdin, 1982) across two educators, including three potential phases: baseline, self-monitoring training, and maintenance (see Appendix E).

Baseline. During this phase, the researcher observed the participants' implementation of the targeted student's BIP. The researcher did not provide feedback to the educator. Following three consecutive days of treatment integrity scores at 70% or below, the participant entered the self-monitoring training phase.

Self-monitoring training. The self-monitoring training procedure was used to provide further assistance and training to the participant on implementation of the BIP in order to increase treatment integrity scores. The self-monitoring training occurred in the classroom and included two steps: 1) training in the absence of the student and 2) training in the presence of the student.

Initially, the researcher provided the participant with a rationale and an explanation of the purpose of self-monitoring. The rationale statement included three explanations: 1) this is a tool to help the researcher understand what is occurring in the environment, 2) educator recording is a tool that will help you determine your level of implementation of the student's BIP, and 3) educator scores will be used only for this study and will not affect your job. Next, the educator was provided with a verbal explanation of self-monitoring, modeling, and positive and corrective feedback on the steps of the self-monitoring process. The researcher discussed the task-analyzed steps for each intervention component. In addition, the researcher and educator discussed prompts that could assist the educator in implementing an intervention component. For example, if an educator was to prompt a student to ask for a break, the break pass on the student's desk could serve as a visual cue reminding the educator to ask whether the student needed a break. Finally, a discussion on how to self-monitor one's own implementation of the intervention components occurred. A checklist with all individualized intervention components was provided to the participant (see Appendices D and E). The checklist included two columns labeled 1) Yes (I implemented this step of the plan) and 2) NA (changes in schedule prevented using this component, e.g., fire drill, exam, standardized test). When the educator did not implement a step, she was trained to put a dash mark in the "Yes" column, which represented "No, did not occur." The educator was taught to record on the checklist each step that was or was not implemented immediately after the occurrence of the intervention component or at the next available moment. Additionally, at the bottom of the checklist, the educator was taught how to tally her responses and place a percentage in the box labeled "total score." The percentage score was calculated as the number of prescribed intervention components implemented divided by the number of possible intervention components for that day, multiplied by 100.
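As a rough illustration of the checklist arithmetic, the sketch below assumes each step is marked "yes", "no" (the dash mark), or "NA", and that NA steps are excluded from the day's possible components; the marks themselves are invented.

    # Hypothetical sketch of the educator's daily checklist tally:
    # "yes" = implemented, "no" = dash mark (not implemented),
    # "na" = a schedule change prevented the component (excluded).

    def total_score(marks) -> float:
        possible = [m for m in marks if m != "na"]
        implemented = sum(m == "yes" for m in possible)
        return 100.0 * implemented / len(possible) if possible else 0.0

    day = ["yes", "yes", "no", "na", "yes"]
    print(f"Total score: {total_score(day):.0f}%")  # 3 of 4 possible -> 75%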

Once the educator agreed that she understood the self-monitoring process, the researcher talked through the procedure with her, using the previous observation to demonstrate how to self-monitor implementation. The educator was asked which of the steps she had implemented. Once an answer was reached, the researcher demonstrated how to provide a rating on implementation. For example, if the educator indicated that she did not implement an intervention step, the researcher made a dash mark for that component; if she did implement it, the researcher placed a check beside the intervention step. This process was completed until all steps had been assessed. When the educator's accuracy ratings matched the consultant's at 80% or higher, she had successfully self-monitored her implementation.

Once the educator's ratings matched the consultant's, the educator was allowed to self-monitor her implementation in the classroom. This was step two of the training, in which the educator demonstrated the acquired skills in the classroom with the student. The researcher observed the educator implementing the BIP and the self-monitoring steps. If the educator's and researcher's ratings did not match during an observation, the researcher provided verbal feedback. Verbal feedback included asking the educator to reflect on when the intervention step may have occurred, determining whether the intervention step was or was not implemented, and discussing what she could do to make sure the step was implemented at the next opportunity. Once the educator obtained a consistent score of 80% or higher on accuracy of self-monitoring and a treatment integrity score of 70% or higher on implementation, the consultant removed verbal feedback. After verbal feedback was removed, the participant was still required to self-monitor her implementation. If she demonstrated a consistent level of implementation when feedback was removed, she entered the maintenance phase. However, if implementation dropped below 70%, verbal feedback was reinstated until a consistent level of implementation was reached, at which point verbal feedback was removed again.
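The phase-change rules above amount to a small decision procedure. The sketch below encodes one reading of them, with a loud caveat: the study does not define "consistent" numerically, so three consecutive qualifying sessions is an invented stand-in, and these functions are an interpretation, not the study's actual protocol.

    # Hypothetical encoding of the phase-change rules described above.
    # Assumption: three consecutive sessions operationalize both entry
    # into training and "consistent" performance.

    def needs_training(baseline_scores) -> bool:
        """Enter self-monitoring training after three consecutive
        baseline sessions at 70% integrity or below."""
        recent = baseline_scores[-3:]
        return len(recent) == 3 and all(s <= 70 for s in recent)

    def remove_feedback(accuracy, integrity) -> bool:
        """Verbal feedback is removed once self-monitoring accuracy
        reaches 80% or higher and treatment integrity 70% or higher."""
        return accuracy >= 80 and integrity >= 70

    print(needs_training([65, 70, 68]))  # True
    print(remove_feedback(85, 72))       # True
    print(remove_feedback(85, 65))       # False -> feedback reinstated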

Maintenance. During this phase, the researcher did not provide any feedback to the educators, and the educator was instructed to stop self-monitoring. However, the educator was asked to continue implementing the behavior intervention plan.

Chapter 3: Results

Both participants' implementation data are displayed in Figure 1. The graph depicts the participants' adherence and accuracy of implementation during each phase of the study. The results of Lenora's implementation level are represented at the top of Figure 1. In baseline, Lenora's adherence and accuracy of implementation averaged 48% [range, 33 to 57] and 41% [range, 33 to 57], respectively. In the self-monitoring phase, Lenora's average level of implementation increased to 70% [range, 33 to 88] for adherence and 63% [range, 33 to 75] for accuracy. Lenora notified the researcher of her withdrawal from the study prior to meeting the aforementioned criteria for moving into the maintenance phase. The checklist was removed immediately upon notification of withdrawal. Nevertheless, in maintenance, Lenora continued to implement the intervention plan with high fidelity, averaging 86% [range, 86] for adherence and 71% [range, 71] for accuracy. Overall, Lenora's implementation improved and was maintained until she withdrew from the study.

PAGE 36

30 Figure 1. Percentage of Implementation of Behavior Intervention Plans The bottom graph in Figure 1 displays participant two's (Maria) adherence and accuracy of implementation. In baseline Maria's adherence and accuracy level averaged 66% [range, 57 to 75] and 35% [range, 25 to 43 ], respectively. When self monitoring was implem ented, Maria's average level of implementation for adherence was 95% [range, 80 to 100 ] and accuracy increased to 71% [range, 60 to 80 ]. Once the participant reached stability, with and without verbal feedback, the fidelity checklist was removed. During t he maintenance phase, the results indicated that Maria's implementation remained high. The average level of impl ementation was 100% [range, 100] f or adherence and 87% [range, 71 to 100 ] for accuracy Overall, self monitoring implementation increased Educator 1 (Lenora)
Maria's implementation level for both adherence and accuracy.

In conclusion, both educators' implementation increased after self-monitoring was implemented. However, self-monitoring had differential effects on adherence and quality of implementation: once self-monitoring was implemented, adherence immediately increased to higher levels, whereas quality of implementation increased gradually into the maintenance phase. This suggests an upward trend in learning and/or correct responding. In addition to high implementation scores, both educators demonstrated high levels of accurate reporting in the classroom. Accuracy of reporting averaged 85% [range, 70 to 94] for participant 1 and 84% [range, 80 to 90] for participant 2, demonstrating that the participants were highly truthful when reporting their implementation.
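The summary statistics above (phase means, ranges, and accuracy of reporting) are simple computations over session-level scores. The sketch below illustrates them with hypothetical session data; the helper names and sample values are assumptions, not the study's raw data.

```python
def summarize(scores):
    """Mean and range of a phase's session percentages, in the
    "mean% [range, low to high]" format used in the Results above."""
    mean = round(sum(scores) / len(scores))
    return f"{mean}% [range, {min(scores)} to {max(scores)}]"


def reporting_accuracy(educator, observer):
    """Percent of checklist items on which the educator's self-rating
    matched the independent observer's rating."""
    matches = sum(e == o for e, o in zip(educator, observer))
    return 100.0 * matches / len(educator)


print(summarize([33, 50, 57, 50]))  # 48% [range, 33 to 57]

educator = ["Y", "Y", "N", "Y", "Y", "N", "Y", "Y", "Y", "Y"]
observer = ["Y", "Y", "N", "Y", "N", "N", "Y", "Y", "Y", "Y"]
print(reporting_accuracy(educator, observer))  # 90.0
```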
Chapter 4: Discussion

Self-monitoring has been shown to be an effective procedure with different populations. However, little research has examined using direct training methods to teach educators how to self-monitor their own treatment integrity of behavior intervention plans. This study utilized self-monitoring as a means to help educators monitor implementation of students' individualized intervention plans. Two educators were trained, in and out of the classroom, on how to use self-monitoring in their classrooms. Once self-monitoring was implemented, both participants demonstrated an increase in their level of treatment integrity. In addition, when self-monitoring was removed, treatment integrity continued to rise. These results are important because they demonstrate that self-monitoring may be a sustainable tool for maintaining treatment integrity.

There are several possible reasons why self-monitoring was an effective tool. One reason was that the checklists were readily available and could be placed anywhere in the classroom environment. For example, one educator put the checklist on her desk and the other put it on her blackboard. Having the checklist easily accessible allowed the educators to review or glance at it frequently, thus increasing the likelihood that they would implement interventions as intended. In addition to having the checklist in the environment, each educator was required to assess her implementation after an intervention component was completed. Requiring the educators
to assess themselves against the checklist criteria may have increased their accountability for implementation, and thus the likelihood of performing components as intended, because the checklist served as an environmental prompt for implementation in the classroom.

Another reason self-monitoring may have been an effective tool is that the participants took ownership of monitoring their implementation. Once the checklist was assessed, the educators were given immediate feedback on implementation. In addition to the score she received after implementation, one participant, Maria, took it upon herself to personalize the checklist, either reminding herself to "Be proactive" or commending herself for reaching a goal by writing "I rock." These written statements demonstrate her level of commitment to implementing the intervention plan, further underscoring the importance of involving the educator throughout the implementation process. Allowing the participants to assess themselves and determine whether they were successful may have aided their continual improvement.

Another possible reason self-monitoring could have been effective is that it taught the educators how to evaluate their own treatment integrity. Traditionally, the consultant provides feedback to the educator on correct and incorrect implementation. In this study, however, the checklist provided immediate feedback to the educator on implementation. Evaluating one's own implementation may have taught the educators what to do better next time, prompting them to change their behavior in order to improve their implementation. Another factor that may have
influenced change in the educators' behavior was the comparison of scores between the consultant and the educator. Discovering a mismatch between the ratings may have increased correct teacher responding while implementing the intervention. Teaching the educators how to assess their implementation may have increased the likelihood of implementation in the maintenance stage.

Yet another reason self-monitoring could have been effective was the use of direct training methods. The purpose of this study was to use this method of training to increase the likelihood of implementation. As Sterling-Turner and colleagues (2001) demonstrated, direct training methods allow for practice and feedback on implementation. In this study, most direct training occurred in the intervention phase, and scores in the maintenance phase slightly increased over, or maintained at, intervention levels. Thus, using direct training methods during intervention may have led to more sustainable implementation outcomes, as demonstrated in the maintenance phase for both educators.

Despite the intervention's effectiveness, this study had limitations. First, implementation was measured through direct observation, so the participants could have been reactive to the presence of observers in the classroom, as this method has been associated with reactivity. This is unlikely given each participant's steadily ascending improvement into the maintenance phase; nevertheless, reactivity may have influenced the participants' implementation performance. The second limitation was that the participants were instructed to
engage in repeated practice of self-monitoring. Although this study required each participant to self-monitor during only one period of the day, equivalent to an hour of data collection, some educators may view this as highly repetitive or time intensive. Thus, the results may not generalize to other participants who find repetitive or time-intensive tasks aversive. This further underscores the point made by Han and Weiss (2005) about factors that influence teachers' implementation. The third limitation was the number of participants: this study included only two educators in the public school system, one of whom withdrew during the study. As a result, these findings may not be generalizable to other educators, and future research should replicate this study to see whether the results generalize. The fourth limitation of self-monitoring was its differential effects on adherence and accuracy. Self-monitoring had a quicker effect on adherence than on accuracy; more specifically, both participants' adherence changed quickly when self-monitoring was introduced, whereas quality of implementation improved gradually throughout the study. One reason may be that self-monitoring is a learned behavior. The self-monitoring premise of providing oneself immediate feedback on one's own behavior may have led to the results found in this study, in which some intervention components were easier to implement than others. For instance, adherence required the participants to implement only the minimum number of steps in a component, whereas quality required performing multiple component steps, which may have taken longer to learn.
In summary, using a direct training method to train teachers to self-monitor was an effective intervention whose effects sustained once the researcher and the self-monitoring tools were removed. Although the results are limited to two participants, both individuals demonstrated progress in both accuracy and quality of their implementation. Future research is needed to evaluate the sustained effectiveness of the training method used in this study, as well as its generality to other educators.
References

Allinder, R. M., Bolling, R. M., Oats, R. G., & Gagnon, W. A. (2000). Effects of teacher self-monitoring on implementation of curriculum-based measurement and mathematics computation achievement of students with disabilities. Remedial and Special Education, 21(4), 219-226.

Benazzi, L., Horner, R. H., & Good, R. H. (2006). Effects of behavior support team composition on the technical adequacy and contextual fit of behavior support plans. Journal of Special Education, 40(3), 160-170.

Blair, K. C., Liaupsin, C. J., Umbreit, J., & Kweon, G. (2006). Function-based intervention to support the inclusive placements of young children in Korea. Education and Training in Developmental Disabilities, 41(1), 48-57.

Bornstein, P. H., Hamilton, S. B., & Bornstein, M. T. (1986). Self-monitoring procedures. In A. R. Ciminero, K. S. Calhoun, & H. E. Adams (Eds.), Handbook of behavioral assessment (pp. 176-222). New York: John Wiley & Sons.

Brackett, L., Reid, D. H., & Green, C. W. (2007). Effects of reactivity to observations on staff performance. Journal of Applied Behavior Analysis, 40, 191-195.

Browder, Liberty, Heller, & D'Huyvetters. (1986). Self-management by teachers: Improving instructional decision making. Professional School Psychology, 1(3), 165-175.
Burke, M. D., Hagan-Burke, S., & Sugai, G. (2003). The efficacy of function-based interventions for students with learning disabilities who exhibit escape-maintained problem behaviors: Preliminary results from a single-case experiment. Learning Disability Quarterly, 26(1), 15-25.

Codding, R. S., Feinberg, A. B., Dunn, E. K., & Pace, G. M. (2005). Effects of immediate performance feedback on implementation of behavior support plans. Journal of Applied Behavior Analysis, 38(2), 205-219.

DiGangi, S. A., Maag, J. W., & Rutherford, R. B., Jr. (1991). Self-graphing of on-task behavior: Enhancing the reactive effects of self-monitoring on on-task behavior and academic performance. Learning Disability Quarterly, 14(3), 221-230.

DiGennaro, F. D., Martens, B. K., & McIntyre, L. L. (2005). Increasing treatment integrity through negative reinforcement: Effects on teacher and student behavior. School Psychology Review, 34(2), 220-231.

Elliott, S. N., & Busse, R. T. (1993). Effective treatments with behavioural consultation. In J. E. Zins, T. R. Kratochwill, & S. N. Elliott (Eds.), Handbook of consultation services for children: Applications in educational and clinical settings (pp. 179-203). San Francisco, CA: Jossey-Bass.

Frith, G. H., & Armstrong, S. W. (1985). Self-monitoring for behavior disordered students. Exceptional Children, 18, 144-148.

Gansle, K. A., & McMahon, C. M. (1997). Component integrity of teacher intervention management behavior using a student self-monitoring treatment: An experimental analysis. Journal of Behavioral Education, 7(4), 405-419.
Gersten, R., Chard, D., & Baker, S. (2000). Factors enhancing sustained use of research-based instructional practices. Journal of Learning Disabilities, 33, 445-457.

Gilberts, G. H., Agran, M., Hughes, C., & Wehmeyer, M. (2001). The effects of peer-delivered self-monitoring strategies on the participation of students with severe disabilities in general education classrooms. The Journal of the Association for Persons with Severe Handicaps, 26, 25-36.

Gillat, A., & Sulzer-Azaroff, B. (1994). Promoting principals' managerial involvement in instructional improvement. Journal of Applied Behavior Analysis, 27, 115-129.

Greenberg, M., Weissberg, R., O'Brien, M., Zins, J., Fredericks, L., Resnik, H., & Elias, M. (2003). Enhancing school-based prevention and youth development through coordinated social, emotional, and academic learning. American Psychologist, 58, 466-474.

Gresham, F. M. (1989). Assessment of treatment integrity in school consultation and prereferral intervention. School Psychology Review, 18, 37-50.

Gresham, F. M., Gansle, K., Noell, G. H., Cohen, S., & Rosenblum, S. (1993). Treatment integrity in school-based intervention studies: 1980-1990. School Psychology Review, 22, 254-272.

Gresham, F. M., MacMillan, D. L., Beebe-Frankenberger, M. E., & Bocian, K. B. (2000). Treatment integrity in learning disabilities intervention research: Do we really know how treatments are implemented? Learning Disabilities Research & Practice, 15, 198-205.

Grimes, J., Kurns, S., & Tilly, W. D., III. (2006). Sustainability: An enduring commitment to success. School Psychology Review, 35(2), 224-244.
Gureasko-Moore, S., DuPaul, G. J., & White, G. P. (2006). The effects of self-management in general education classrooms on the organizational skills of adolescents with ADHD. Behavior Modification, 30(2), 159-183.

Han, S. S., & Weiss, B. (2005). Sustainability of teacher implementation of school-based mental health programs. Journal of Abnormal Child Psychology, 33(6), 665-679.

Hartman, D. P., Roper, B. L., & Bradford, D. C. (1979). Some relationships between behavioral and traditional assessment. Journal of Behavioral Assessment, 1, 3-21.

Horner, R. H., Sugai, G., Lewis-Palmer, T., & Todd, A. W. (2001). Teaching school-wide behavioral expectations. Emotional & Behavioral Disorders in Youth, 1, 77-96.

Hughes, J. N., Grossman, P., & Barker, D. (1990). Teachers' expectations, participation in consultation, and perceptions of consultant helpfulness. School Psychology Quarterly, 5, 167-179.

Hughes, M. A., Alberto, P. A., & Fredrick, L. L. (2006). Self-operated auditory prompting systems as a function-based intervention in public community settings. Journal of Positive Behavior Interventions, 8(4), 230-243.

Ingram, K., Lewis-Palmer, T., & Sugai, G. (2005). Function-based intervention planning: Comparing the effectiveness of FBA function-based and non-function-based intervention plans. Journal of Positive Behavior Interventions, 7(4), 224-236.

Jensen, B. J., & Hayes, S. H. (1986). Self-report questionnaires and inventories. In A. R. Ciminero, K. S. Calhoun, & H. E. Adams (Eds.), Handbook of behavioral assessment (pp. 150-222). New York: John Wiley and Sons.
Johnston, J. M., & Pennypacker, H. S. (1993). Strategies and tactics of behavioral research (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.

Jones, K. M., Wickstrom, K. F., & Friman, P. C. (1997). The effects of observational feedback on treatment integrity in school-based behavioral consultation. School Psychology Quarterly, 12(4), 316-326.

Kalis, T. M., Vannest, K. J., & Parker, R. (2007). Praise counts: Using self-monitoring to increase effective teaching practices. Preventing School Failure, 51(3), 20-27.

Kazdin, A. E. (1982). Single-case research designs: Methods for clinical and applied settings. New York: Oxford University Press.

Klingner, J. K., Arguelles, M. E., Hughes, M. T., & Vaughn, S. (2001). Examining the school-wide "spread" of research-based practices. Learning Disability Quarterly, 24, 221-254.

Kratochwill, T. R., & Bergan, J. R. (1990). Behavioral consultation: An overview. In A. S. Bellack & M. Hersen (Eds.), Behavioral consultation in applied settings: An individual guide (pp. 15-44). New York: Plenum Press.

Kratochwill, T. R., & Bergan, J. R. (1990). Treatment implementation. In A. S. Bellack & M. Hersen (Eds.), Behavioral consultation in applied settings: An individual guide (pp. 143-156). New York: Plenum Press.

Lane, K. L., Bocian, K. M., MacMillan, D. L., & Gresham, F. M. (2004). Treatment integrity: An essential but often forgotten component of school-based interventions. Preventing School Failure, 48, 36-43.
Maag, J. W., Reid, R., & DiGangi, S. A. (1993). Differential effects of self-monitoring attention, accuracy, and productivity. Journal of Applied Behavior Analysis, 26(3), 329-344.

Massey, O. T., Armstrong, K., Boroughs, M., Henson, K., & McCash, L. (2005). Mental health services in schools: A qualitative analysis of challenges to implementation, operation, and sustainability. Psychology in the Schools, 42(4), 361-372.

Moncher, F. J., & Prinz, R. J. (1991). Treatment fidelity in outcome studies. Clinical Psychology Review, 11(3), 247-266.

Mortenson, B., & Witt, J. C. (1998). The use of weekly performance feedback to increase teacher implementation of a prereferral intervention. School Psychology Review, 27, 613-627.

Munton, S. M. (2004). Uncovering the most cost-beneficial methods of consultation in terms of disruptive behavior, treatment integrity, and social validity. Unpublished doctoral dissertation, University of California, Riverside.

Noell, G. H., Duhon, G. J., Gatti, S. L., & Connell, J. E. (2002). Consultation, follow-up, and implementation of behavior management interventions in general education. School Psychology Review, 31(2), 217-234.

Peterson, L., Homer, A. L., & Wonderlich, S. A. (1982). The integrity of independent variables in behavior analysis. Journal of Applied Behavior Analysis, 15(4), 477-492.

Petscher, E. S., & Bailey, J. S. (2006). Effects of training, prompting, and self-monitoring on staff behavior in a classroom for students with disabilities. Journal of Applied Behavior Analysis, 39(2), 215-226.
Richman, G. S., Riordan, M. R., Reiss, M. L., Pyles, D. A. M., & Bailey, J. S. (1988). The effects of self-monitoring and supervisor feedback on staff performance in a residential setting. Journal of Applied Behavior Analysis, 21(4), 401-409.

Robbins, J. R., & Gutkin, T. B. (1994). Consultee and client remedial and preventive outcomes following consultation: Some mixed empirical results and directions for future researchers. Journal of Educational and Psychological Consultation, 5(2), 149-167.

Rosenberg, M. S., & Jackman, L. (2003). Development, implementation, and sustainability of comprehensive school-wide behavior management systems. Intervention in School and Clinic, 39(1), 10-21.

Scott, T. M., Bucalor, A., Liaupsin, C., Nelson, C. M., Jolivette, K., & DeShea, L. (2004). Using functional behavior assessment in general education settings: Making a case for effectiveness and efficiency. Behavioral Disorders, 29(2), 190-201.

Shapiro, E. S., Durnan, S. L., Post, E. E., & Levinson, T. S. (2002). Self-monitoring procedures for children and adolescents. In M. A. Shinn, H. M. Walker, & G. Stoner (Eds.), Interventions for academic and behavior problems II: Preventive and remedial approaches (pp. 433-454). New York: National Association of School Psychologists.

Sheridan, S. M., Welch, M., & Orme, S. F. (1996). Is consultation effective? A review of outcome research. Remedial and Special Education, 17(6), 341-354.
Simpson, G. A., Bloom, B., Cohen, R. A., Blumberg, S., & Bourdon, K. H. (2005). U.S. children with emotional and behavioral difficulties: Data from the 2001, 2002, and 2003 National Health Interview Surveys. Advance Data from Vital and Health Statistics, No. 360. Hyattsville, MD: National Center for Health Statistics.

Stahr, B., Cushing, D., Lane, K., & Fox, J. (2006). Efficacy of a function-based intervention in decreasing off-task behavior exhibited by a student with ADHD. Journal of Positive Behavior Interventions, 8(4), 201-211.

Sterling-Turner, H. E., Watson, T. S., Wildmon, M., & Watkins, C. (2001). Investigating the relationship between training type and treatment integrity. School Psychology Quarterly, 16(1), 56-67.

Suda, K. T., & Miltenberger, R. (1993). Evaluation of staff management strategies to increase positive interactions in a vocational setting. Behavioral Residential Treatment, 8(2), 69-88.

Sugai, G., Horner, R., Sailor, W., Dunlap, G., Eber, L., Lewis, T., et al. (2005). School-wide positive behavior support: Implementers' blueprint and self-assessment. Washington, DC: Technical Assistance Center on Positive Behavioral Interventions and Supports.

Sugai, G., Lewis-Palmer, T., Todd, A. W., & Horner, R. H. (2001). School-wide evaluation tool. Eugene: University of Oregon, Educational and Community Supports.

Umbreit, J., Lane, K. L., & Dejud, C. (2004). Improving classroom behavior by modifying task difficulty: Effects of increasing the difficulty of too-easy tasks. Journal of Positive Behavior Interventions, 6(1), 13-20.
Vaughn, S., Klingner, J., & Hughes, M. (2000). Sustainability of research-based practices. Exceptional Children, 66, 163-171.

Watson, T. S., & Robinson, S. L. (1996). Direct behavioral consultation: An alternative to traditional behavioral consultation. School Psychology Quarterly, 11(3), 267-278.

Wickstrom, K. F., Jones, K. M., LaFleur, L. H., & Witt, J. C. (1998). An analysis of treatment integrity in school-based behavioural consultation. School Psychology Quarterly, 13, 141-154.

Wilczynski, S. M., Mandal, R. L., & Fusilier, I. (2000). Bridges and barriers in behavioral consultation. Psychology in the Schools, 37(6), 495-504.

Wilkinson, L. A. (2006). Monitoring treatment integrity: An alternative to the "consult and hope" strategy in school-based behavioural consultation. School Psychology International, 27(4), 426-438.

Witt, J. C., & Martens, B. K. (1988). Problems with problem-solving consultation: A re-analysis of assumptions, methods, and goals. School Psychology Review, 17, 211-226.

Yeaton, W. H., & Sechrest, L. (1981). Critical dimensions in the choice and maintenance of successful treatments: Strength, integrity, and effectiveness. Journal of Consulting and Clinical Psychology, 49(2), 156-167.
Appendix A
Informed Consent

Sample Informed Consent for an Adult
Social and Behavioral Sciences
University of South Florida
Information for People Who Take Part in Research Studies

Researchers at the University of South Florida (USF) study many topics. We want to show the effectiveness of behavioral supports for students with problem behavior. To do this, we need the help of people who agree to take part in a research study.

Title of research study: Evidence-Based Interventions for Severe Behavior Problems: The Prevent-Teach-Reinforce Model

Person in charge of study: Don Kincaid, Ed.D.

Where the study will be done: We will conduct this study within the student's school settings. We want to learn more about the student's behavior and their interactions with others at school. We will visit the school to see how the student interacts with others and engages in activities. You will be asked to attend meetings, complete data collection forms, and learn new strategies to prevent the student's behavior problems and support positive development.

Who is paying for it: This study is being funded by the U.S. Department of Education.

Should you take part in this study? This form tells you about this research study. You can decide if you want to take part in it. You do not have to take part. Reading this form can help you decide.

Before you decide: Read this form. Talk about this study with Dr. Kincaid or the person explaining the study. You can have someone with you when you talk about the study.

You can ask questions: You may have questions this form does not answer. If you do, ask Dr. Kincaid or study staff as you go along. You don't have to guess at things you don't understand. Ask the people doing the study to explain things in a way you can understand.
Appendix B
Maria's Fidelity Sheet

For each intervention, two ratings are recorded: Was the intervention implemented? (Adherence: Y / N / NA) and Was the intervention done accurately? (Quality: Y / N / NA). Fidelity score key: Y/Y = 2, Y/N = 1, N/N = 0, NA/NA = NA. A comments column is provided for notes.

1. Curricular Modification (Eliminating Triggers): Immediately (within 1 minute) after presenting the assignment and before problem behavior begins, go to Jason and reduce his assignment by 25-50%. Ex. "I am crossing out these 3 problems so you don't even have to do them. Do your best on the other 7 problems."

2. Adult Verbal Behavior (Just Be Nice): Maintain a ratio of 4:1 praise/comments to demands/requests. Respond to inappropriate behavior by redirecting with a calm voice and using simple language to tell Jason what to do.

3. Teach Asking for a Break: Teach Jason to ask for a break using the break pass through direct instruction. Model for Jason how to raise the break pass for the educator to see it, then sit quietly. Have him role-play the skill with specific feedback.

4. Prompt Asking for a Break: Immediately prior to the student's minimum sustained engaged time and BEFORE challenging behavior occurs (about X minutes into the assignment), prompt Jason: "Do you need a break? If so, show me your break pass."

5. Teach Asking for Help: Teach Jason the appropriate way to ask for help using the reminder card on his desk (see steps). Model the skill for him and provide an opportunity for him to practice the skill with specific feedback.

6. Prompt Asking for Help: Immediately after presenting the assignment and BEFORE challenging behavior occurs, prompt Jason: "If you come to a hard question, raise your hand and I will help you." Give another prompt at least one time per assignment.

7. Reinforce Asking for a Break: When Jason appropriately requests a break, IMMEDIATELY (within 30 seconds) honor his request and give specific praise for using the break pass. Ex. "Great job using the break pass Jason! That is the correct thing to do!"
Appendix B-1
Maria's Fidelity Sheet (continued)

8. Discontinue Reinforcement of Problem Behavior: If Jason begins a class disruption, do not let problem behavior "pay off" for Jason by letting him escape the activity or go to time-out. Give the prompts, "Do you need a break?" and "Do you need help?" Matter-of-factly provide assistance to complete the task and ensure the task is completed.

9. Reinforce Asking for Help: When Jason raises his hand, IMMEDIATELY (within 30 seconds) acknowledge the hand raise and give specific praise. Ex. "Great job raising your hand quietly Jason! I will be right with you." Provide assistance immediately (within 1 minute) and give Jason a check on his sheet for asking for help, with specific praise.

10. Reinforce Starting Assignment: About 2 minutes after presenting the assignment, give Jason a check on his work skills sheet with specific praise. Ex. "Nice job getting started right away Jason! Keep it up and you will earn your computer time." If Jason does not earn one of the checks, respond with minimal attention and low affect.

11. Reinforce Completing Assignment: When Jason completes his assignment, give him a check on his work skills sheet with specific praise. If Jason does not earn one of the checks, respond with minimal attention and low affect.

12. Checklist Reinforcement: If Jason has earned at least 2 checks for that subject period, he will be allowed 5 minutes of computer time (educator-approved activity). If Jason earns 8 checks for the day (to be increased), he will earn something from his menu of reinforcers.

Implementation Scores (Total Y's / Total Y's + N's in column)
Total Implementation/Fidelity Score (Total Y's / Total Y's + N's across 2 domains)
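The fidelity sheet's arithmetic can be summarized in a short sketch. This is illustrative only, assuming one (adherence, quality) pair of Y/N/NA ratings per intervention row; the function names are hypothetical, not part of the study's materials.

```python
def row_fidelity(adherence, quality):
    """Per-row score from the sheet's key: Y/Y = 2, Y/N = 1, N/N = 0,
    NA/NA = NA (returned here as None)."""
    if adherence == "NA":
        return None  # NA rows are excluded from scoring
    return (adherence == "Y") + (quality == "Y")


def column_score(ratings):
    """Implementation score for one column: total Y's / (total Y's + N's)."""
    return ratings.count("Y") / (ratings.count("Y") + ratings.count("N"))


rows = [("Y", "Y"), ("Y", "N"), ("N", "N"), ("NA", "NA"), ("Y", "Y")]
adherence = [a for a, _ in rows]
quality = [q for _, q in rows]
print(row_fidelity("Y", "N"))   # 1
print(column_score(adherence))  # 0.75 (3 Y's out of 4 scored rows)
print(column_score(quality))    # 0.5

# The total implementation/fidelity score pools Y's and N's across domains:
pooled = [r for pair in rows for r in pair if r != "NA"]
print(pooled.count("Y") / len(pooled))  # 0.625
```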
Appendix C
Lenora's Fidelity Sheet

For each intervention, two ratings are recorded: Was the intervention implemented? (Adherence: Y / N / NA) and Was the intervention done accurately? (Quality: Y / N / NA). Fidelity score key: Y/Y = 2, Y/N = 1, N/N = 0, NA/NA = NA.

PREVENT

1. Environmental Support: Hallway Transition
- Transition visual is available and easily accessible for student's use prior to start of transition
- Staff reviews visual with student prior to each transition
- Staff verbally and physically prompts student to use visual during transition, as needed
- Staff immediately reviews support upon return to class

2. Environmental Support: Cafeteria Routine
- Step card present and provided to student upon arrival
- Staff reviews card with student prior to getting in line
- Staff provides verbal and physical prompts for using card through routine

3. First-Then & Choices for Academic Activities
- First-then strip present on student's desk
- Staff reviews "first work (folder), then choice" strip with student prior to start of each work session
- Staff offers student choice between two academic folders and immediately honors choice
- Staff offers student two choice items for the "then" activity and honors student's choice

4. Environmental Support: Choice Board for Wait/Down Time
- Choice board present with 2 highly preferred, hands-on activities
- Choice board provided prior to start of wait/down time
- Staff immediately honors student choice of activity
- Timer set for 3 minutes

TEACH
Appendix C-1
Lenora's Fidelity Sheet (continued)

1. Functional: Requesting a Break
- Break card is present and easily accessible to student
- Staff reviews the use of the card with student prior to non-preferred activities
- Staff verbally prompts student to card prior to problem behavior and points to visual
- Staff immediately releases student to break upon request; timer set for 1 minute
- Staff prompts student to return to activity immediately after break

REINFORCE

1. Environmental Support for Hallway Transition
- Educator provides "Visitor" pass with 50% success on transition card and immediately allows student to visit preferred staff

2. First-Then & Choices for Academic Activities
- Staff immediately provides or releases student to choice activity/reinforcer upon completion of folder task
- If activity (vs. edible), timer is set for 3 minutes

3. Functional: Requesting a Break
- Immediately provides praise every time card is used
- Immediately releases student to break area for 1 minute

Implementation Scores (Total Y's / Total Y's + N's in column)
Total Implementation/Fidelity Score (Total Y's / Total Y's + N's across 2 domains)
Appendix D
Maria's Checklist

Each item is scored Yes or NA.

Teach Asking for a Break
1. Did you model for J how to raise the break pass or ask for a break, and then sit quietly, during a time when he is calm?
2. Did you allow J to demonstrate the steps on how to ask for a break and provide feedback? For example, "I like how you asked for a break appropriately."

Teach Asking for Help
3. Did you demonstrate for J how to ask for help during a time when he is calm?
4. Did you allow J to demonstrate raising his hand appropriately and provide feedback? For example, "Great job, raising your hand! How can I help you?"

Prompt Asking for a Break
5. Did you prompt J, "Do you need a break?"
6. Did you prompt J within 4 minutes of presenting the assignment and before problem behavior occurred?

Prompt Asking for Help
7. Did you prompt J to raise his hand and ask for help?
8. Did you prompt J within 1-2 minutes of presenting the assignment and before problem behavior occurred?

Curricular Modification
9. Did you reduce J's assignment?
10. Did you do so by 25-50% of the assignment?
11. Did you do so within 1-2 minutes after presenting the assignment and before problem behavior occurred?

Adult Verbal Behavior
12. Did you use a praise/comment and then present a request?
13. Did you redirect inappropriate behavior in a calm voice?
14. Did you maintain a ratio of 4:1 praise/comments to demands/requests?

Reinforce Asking for a Break
15. When J requested a break, did you honor and praise his request for a break? For example, "Great job asking for a break. You may take one."
16. Did you provide the break within 1-2 minutes of the request?

Discontinue Reinforcement of Problem Behavior
17. If J engaged in a classroom disruption, did you provide the prompts, "Do you need a break?" and/or "Do you need help?"
18. Did you provide the above prompts in a calm manner?

Reinforce Asking for Help
Appendix D-1
Maria's Checklist (continued)

19. When J raises his hand, did you acknowledge him and provide praise? For example, "Thank you for raising your hand. What can I help you with?"
20. Did you do so within 30 seconds of J's raising his hand?

Reinforce Starting Assignment
21. If earned, did you give J a check on his work skills sheet?
22. Did you give specific praise for starting the assignment? For example, "I love how you got started on your work."

Reinforce Completing Assignment
23. If earned, did you give J a check on his sheet for completing the assignment?
24. Did you give specific praise for completing the assignment? For example, "You are finished. Awesome job!"

Checklist Reinforcement
25. If J earned the set number of checks, did you allow him 5 minutes of computer time or a preferred activity?

Total
Total number of Yes's
Total number of components implemented = 24
Total number of NAs
Total number of Yes's / Total number of components implemented
Appendix E
Lenora's Checklist

Each item is scored Yes or NA.

Environmental Support: Hallway Transition
1. Did you have the visual available for the student's use?
2. Did you review the visual with DB?
3. Prior to going down the hallway, did you review the visual support?
4. Did you verbally and physically prompt the student to use the visual during transition?
5. Did you review the visual when you returned to class?
6. Did you immediately review the visual when you returned to class?

Environmental Support: Cafeteria Routine
7. Did you present the step card when the student arrived at the cafeteria?
8. Did you review the steps with the student?
9. Did you review the steps before going in line?
10. Did you verbally and/or physically prompt the student to use the steps on the card through the routine?

Prevent: First-Then & Choices for Academic Activities
11. Did you review the "First work (folder), then choice" strip with DB?
12. Before you began the work session, did you review the "First work, then choice" strip with the student?
13. Did you offer the student a choice between two academic folders and then honor the choice?
14. Did you provide the "then" activity or edible?
15. Did you immediately honor the "then" activity or edible?
16. Did you offer the student two choice items?

Environmental Support: Choice Board for Wait/Down Time
17. Did you provide 2 choices of activities on the choice board?
18. Did you provide 2 choices of highly preferred, hands-on activities?
19. Did you honor the student's choice?
20. Did you set the timer for 3 minutes?
21. Did you immediately honor the student's choice of activity?

Teach: Functional Request for a Break
22. Did you have the break card available?
23. Did you review the break card?
24. Did you review the break card prior to non-preferred activities?
25. Did you prompt the student to take the card prior to problem behavior?
26. When DB requested a break, did you verbally or physically prompt the student to take the card?
27. Did you set the timer for 30 seconds when the student arrived at the break area?
28. Did you release the student to the break area?
29. Did you release the student immediately to the break area?
30. Did you verbally or physically prompt the student to return to the activity?
31. Did you prompt the student to return immediately after the break?

Reinforce: Environmental Support for Hallway Transition
32. When DB successfully completed 50% of steps in the transition, did you provide the "Visitor" pass?
33. Did you allow the student to immediately visit preferred staff?

Reinforce: First-Then & Choices for Academic Activities
34. When the task was completed, did you honor the student's choice of activity/edible?
35. Did you honor the choice immediately?

Reinforce: Functional Requesting a Break
Appendix E-1
Lenora's Checklist (continued)

36. When the student requested a break, did you provide praise occasionally? For example, "Great job requesting a break!"
37. Did you immediately provide praise every time the student used the break card?

Total
Total Number of Y's
Total Number of Components Implemented = 37 minus Total Number of NA's
Total Number of Y's divided by Total Number of Components Implemented
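Both checklists (Appendices D and E) compute the same total: the number of "Yes" responses divided by the number of listed components minus those marked NA, as stated explicitly on Lenora's sheet. A minimal sketch under those assumptions; the responses below are hypothetical, not the study's data.

```python
def checklist_total(responses, n_components):
    """Fraction of applicable components implemented.

    responses: mapping of item number -> "Y" or "NA"; the denominator is
    n_components (24 for Maria's checklist, 37 for Lenora's) minus the
    number of NA items.
    """
    n_yes = sum(v == "Y" for v in responses.values())
    n_na = sum(v == "NA" for v in responses.values())
    return n_yes / (n_components - n_na)


# Hypothetical scoring of Lenora's 37-item checklist: 30 yes, 3 NA.
responses = {i: "Y" for i in range(1, 31)}
responses.update({35: "NA", 36: "NA", 37: "NA"})
print(checklist_total(responses, 37))  # 30 / 34, about 0.88
```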