
The effects of graphic display and training in visual inspection on teachers' detection of behavior change


Material Information

Title:
The effects of graphic display and training in visual inspection on teachers' detection of behavior change
Physical Description:
Book
Language:
English
Creator:
Luquette, Allana Duncan
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla.
Publication Date:
2004
Subjects

Subjects / Keywords:
graphs
visual analysis
teacher training
education
effectiveness
Dissertations, Academic -- Applied Behavior Analysis -- Masters -- USF
Genre:
government publication (state, provincial, territorial, dependent) (marcgt)
bibliography (marcgt)
theses (marcgt)
non-fiction (marcgt)

Notes

Summary:
ABSTRACT: Although a number of researchers have attempted to evaluate the variables that affect teachers' acceptance of behavioral interventions, few have examined the influence of treatment effectiveness on teacher decision making. Interestingly, effectiveness information in the form of graphic feedback has been shown to improve treatment integrity; however, little has been done to assess the effects of graphic feedback on teachers' ability to accurately recognize behavior change. This study assessed the effects of graphic display and training in visual inspection of graphed data on the ability of teachers to accurately recognize and report changes in student behavior. In addition, the researcher sought to evaluate the effects of the independent variables on participant decisions to continue behavioral interventions. Following baseline, two experimental treatments (graphic display and training in visual inspection plus graphic display) were implemented using a multiple baseline design across teachers. The dependent variables included accurate detection of behavior change and appropriate persistence with intervention choices. Teachers were shown a series of video clips depicting student problem behavior and were asked to make a determination of behavior change. They were also asked to decide, based on the video, whether the current intervention should be continued. The results indicated that viewing the graph of student behavior during the graphic display condition improved participant performance on the accuracy measure. Additionally, viewing the graph immediately improved appropriate persistence, although further effects were not observed with the addition of training.
Thesis:
Thesis (M.A.)--University of South Florida, 2004.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Allana Duncan Luquette.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 93 pages.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001505495
oclc - 60410923
notis - AJV6093
usfldc doi - E14-SFE0000584
usfldc handle - e14.584
System ID:
SFS0025275:00001




Full Text
The Effects of Graphic Display and Training in Visual Inspection on Teachers' Detection of Behavior Change

by

Allana Duncan Luquette

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Arts in Applied Behavior Analysis, College of Graduate Studies, University of South Florida

Major Professor: Jennifer Austin, Ph.D.
Trevor Stokes, Ph.D.
Kevin Murdock, Ph.D.

Date of Approval: November 5, 2004

Keywords: graphs, visual analysis, teacher training, education, effectiveness

Copyright 2004, Allana Duncan Luquette

Dedication

I would like to dedicate this manuscript to Dr. Jennifer Austin. This research would not have been possible without her unwavering support and guidance. Her dedication to her students and to the field of Applied Behavior Analysis is truly inspiring. I am honored to call her my mentor and friend. Jenn, I am forever grateful. Thank you.

Acknowledgements

I would like to recognize the director of TSR, Scott Seifreit, for his assistance in the editing of the videos used in this study. I sincerely appreciate his patience and willingness to help. I would also like to thank my committee members for their invaluable contributions to my research and their continued dedication to the field. A special thanks to my husband and family for their continuous support and encouragement throughout the years. Thank you all.

Table of Contents

List of Figures
Abstract
Chapter One: Introduction
Chapter Two: Method
    Participants and Setting
    IRB Procedures
    Videotaped Sessions
    Dependent Variables and Data Collection
    Observer Training
    Inter-observer Agreement
    Experimental Conditions
    Independent Variable Integrity
    Participant Beliefs and Social Validity
Chapter Three: Results
Chapter Four: Discussion
References
Appendices
    Appendix A: Participant Selection Questionnaire
    Appendix B: Participant Informed Consent
    Appendix C: Parent Informed Consent
    Appendix D: Video Graphs
    Appendix E: Accuracy Scorecard
    Appendix F: Persistence Scorecard
    Appendix G: Quiz - Data Collection Methods
    Appendix H: Outline - Participant Training
    Appendix I: Participant Independent Practice
    Appendix J: Participant Beliefs Questionnaire (pre-exp)
    Appendix K: Participant Beliefs Questionnaire (post-exp)
    Appendix L: Participant Beliefs Questionnaire (before training)
    Appendix M: Social Validity Questionnaire

List of Figures

Figure 1. Accurate detection of behavior change scores
Figure 2. Appropriate persistence scores

The Effects of Graphic Display and Training in Visual Inspection on Teachers' Detection of Behavior Change

Allana Duncan Luquette

ABSTRACT

Although a number of researchers have attempted to evaluate the variables that affect teachers' acceptance of behavioral interventions, few have examined the influence of treatment effectiveness on teacher decision making. Interestingly, effectiveness information in the form of graphic feedback has been shown to improve treatment integrity; however, little has been done to assess the effects of graphic feedback on teachers' ability to accurately recognize behavior change. This study assessed the effects of graphic display and training in visual inspection of graphed data on the ability of teachers to accurately recognize and report changes in student behavior. In addition, the researcher sought to evaluate the effects of the independent variables on participant decisions to continue behavioral interventions. Following baseline, two experimental treatments (graphic display and training in visual inspection plus graphic display) were implemented using a multiple baseline design across teachers. The dependent variables included accurate detection of behavior change and appropriate persistence with intervention choices. Teachers were shown a series of video clips depicting student problem behavior and were asked to make a determination of behavior change. They were also asked to decide, based on the video, whether the current intervention should be continued. The results indicated that viewing the graph of student behavior during the graphic display condition improved participant performance on the accuracy measure. Additionally, viewing the graph immediately improved appropriate persistence, although further effects were not observed with the addition of training.

Chapter One

Introduction

Behavior analysts working in classrooms often encounter significant resistance to the strategies they propose to produce behavior change. Because teachers usually play a critical role in the choice and implementation of the strategies used in their classrooms, it is crucial that they accept those strategies that have a high likelihood of producing meaningful changes in behavior. Acceptability has been defined as "judgments by laypersons, clients, and others of whether treatment procedures are appropriate, fair, and reasonable for the problem or client" (Kazdin, 1981). Interestingly, interventions found to be equally effective may vary greatly in their acceptability. Well-accepted interventions tend to be initiated and adhered to better than interventions that are less acceptable (Kazdin, 1981; Von Brock & Elliott, 1987). In fact, teachers are unlikely even to try an intervention if they do not believe it to be an acceptable way to deal with the problem behavior, regardless of whether that intervention has been "proven" to be effective (Elliott, Witt, Galvin, & Peterson, 1984). If behavioral interventions are to be sought, utilized, and implemented with integrity in classrooms, teacher acceptance is clearly a critical variable.

Research suggests that consultants working in schools need to be aware of teacher perceptions of intervention acceptability. In doing so, they will be better able to suggest interventions with a higher probability of being initiated and properly implemented (Martens, Witt, Elliott, & Darveaux, 1985). Witt (1986) described four main factors that appear to influence teachers' decisions to use and to continue an intervention: the reported effectiveness of the intervention, the time and resources required to implement it, the theoretical orientation of the intervention, and the ecological intrusiveness of its implementation. Although one might expect that information on the effectiveness of an intervention would always be a salient factor in teacher decision-making, there is little evidence to support this assumption. In fact, the influence of treatment effectiveness on teacher decision-making has rarely been the focus of empirical research.

One notable exception is Von Brock and Elliott (1987), who conducted an analog study to evaluate whether the type of effectiveness information (i.e., research-based versus anecdotal evidence from applied settings) influenced teacher acceptance of interventions. To measure this variable, a revised version of the Intervention Rating Profile (IRP) called the Behavior Intervention Rating Scale (BIRS) was used, which included an effectiveness rating and provided an indication of teacher acceptance. A subscale called the Effectiveness Rating Profile (ERP) also was developed, which included nine new items intended to operationalize treatment effectiveness. Teacher participants in the study were given cases to read, which included variations of three factors: type of effectiveness information, type of intervention, and severity of the child's behavior. The effectiveness information variable had three levels: participants received either no effectiveness information, consumer satisfaction ratings (from teachers who had utilized the intervention and found it to be effective), or outcome information supported by research (from articles published in professional journals). Specific information concerning degree of effectiveness was not provided; only general statements about effectiveness were used. The intervention variable consisted of one positive (i.e., reinforcement) and two negative (i.e., punishment) procedures. Problem severity was presented as either high or low. Participants were asked to complete a BIRS for each case, which provided an acceptability rating, an effectiveness rating, and a time rating. The results of the study indicated that both consumer satisfaction information and research-based efficacy information increased the acceptability ratings of teachers when compared to the no effectiveness information condition. Interestingly, research-based information only influenced teacher acceptance when the problem behaviors were considered to be mild. In addition, when teachers rated an intervention as less acceptable, they also rated it as less effective.

In a related study, Kazdin (1981) evaluated the influence of therapeutic effects of treatment on the acceptability of that treatment. Participants heard case studies describing one of two children with a specific behavior problem. After hearing the case, the participants heard a description of four treatments: reinforcement, time out from reinforcement, positive practice, and medication. A rationale was given for each intervention and the procedures were presented in detail. To evaluate the influence of treatment efficacy on acceptability ratings, statements about the efficacy of the treatment in changing the problem behavior were included. The treatment for each case was individually described as producing either weak or strong effects. Weak effects were described as less rapid and less pronounced, although clear improvements were noticeable. Strong effects were characterized by rapid change and nearly complete or complete elimination of the problem behaviors. Interestingly, the results of this study did not suggest that treatment effectiveness information influenced the acceptability ratings of the treatments.

Several limitations may have affected the results of these two studies. First, the studies did not address the actual effects of the interventions on real children in real classrooms. In addition, the definitions of effectiveness were very broad. Also, the participants were unable to see the behaviors or the changes in those behaviors following treatment; rather, they were limited to the descriptions given by the researchers. Different effects might have been observed had participants been evaluating the treatment acceptability of interventions used in their own classrooms, which actually affected the behavior of their students.

When discussing the effectiveness of interventions or treatments, it is important to distinguish between empirical validation and applied success. When researchers refer to the efficacy of an intervention, they are usually referring to empirical support that the intervention is effective (i.e., empirical validation) (Witt, 1986). It is important to note that, outside of the context of research studies, teachers generally do not have access to that type of effectiveness data (Witt, 1986). Moreover, interventions demonstrated to be effective in the research literature might not necessarily be accepted by consumers (Elliott, Witt, Galvin, & Peterson, 1984). Clearly, teachers often accept interventions even though they have no empirical evidence to support their effectiveness. In fact, some widely accepted interventions, such as reality therapy and assertive discipline, appeal to values and common sense rather than relying on empirical research (Martens, Peterson, Witt, & Cerone, 1986; Witt & Elliott, 1985). Therefore, the effectiveness of an intervention as determined by actual application in a real-world setting (i.e., applied success) may be the more influential variable in teachers' acceptance of interventions.

It is logical to conclude that if a teacher perceives or judges an intervention to be effective, she will be more likely to continue implementing that intervention (i.e., acceptance increases treatment integrity). If this is so, a critical variable appears to be whether teachers are able to accurately judge the effectiveness of behavioral interventions implemented in their classrooms. Besalel-Azrin, Azrin, and Armstrong (1977) conducted a study to improve classroom behavior in a fifth grade class. The researchers then asked teachers to report how much the problem behaviors (both conduct and academic) had been reduced as a result of the treatment. Interestingly, the independent observers found a reduction of about 50% from baseline, while the teachers reported a reduction of more than 90%. This study clearly illustrates discrepancies between actual changes measured by independent observers and those perceived by teachers.

It appears that teachers' overestimations of behavior change are not limited solely to observations of their students. Wickstrom, Jones, LaFleur, and Witt (1998) investigated variables affecting teacher ratings of acceptability and integrity of implementation in a classroom setting. In this study, teachers were asked to report how often they implemented the suggested intervention with integrity. Data were also collected by independent observers to compare teacher reports of integrity to actual implementation integrity. The independent data showed that the integrity with which teachers implemented the interventions as instructed was only between 1% and 6%, yet the teachers estimated that they implemented the treatments an average of 54% of the time. This discrepancy between actual behavior and teachers' perceptions of their behavior could be explained in at least two ways. One possibility is that the teachers overestimated the integrity with which they implemented the intervention in an effort to please the researchers (i.e., reactivity). Another plausible explanation is that the teachers were not able to accurately evaluate their own behavior. A skill deficit such as this could potentially affect a teacher's ability to make correct decisions with regard to intervention choices. For example, if the teacher is unable to accurately judge treatment effects, he or she may be inclined to abandon effective interventions or continue to implement ineffective ones. Therefore, teachers may benefit from utilizing more systematic methods for determining whether behavior change has taken place and whether an intervention is effective or needs to be discontinued.

In order for teachers to make decisions about whether to continue an intervention, they must first determine whether the current intervention is effective. Providing feedback to teachers on intervention effects, especially in graphed form, might assist teachers with effectiveness decisions. The effects of graphic feedback have been demonstrated frequently in the behavior analytic and school psychology literature. However, most studies have assessed the effects of feedback on treatment integrity as opposed to intervention acceptance, efficacy, and perseverance with prescribed procedures. Mortenson and Witt (1998) conducted a study to investigate the effects of graphic feedback on the integrity with which four teachers implemented a reinforcer-based intervention. The teachers were provided with performance feedback weekly by a consultant. These meetings consisted of presentation of a graph of the teacher's implementation of the intervention along with data on student academic performance. In addition, positive verbal feedback was given for all completed intervention steps, and corrective feedback was given when intervention steps were either omitted or incorrectly implemented. During the feedback sessions, researchers also addressed questions and comments from the teachers, obtained a verbal commitment from the teachers to implement the intervention correctly, and prompted the teachers to continue sending daily summaries of student performance. The results showed immediate increases in treatment integrity for each teacher after the implementation of the feedback condition. In addition, although the data on student behavior were somewhat variable, academic performance did improve when treatment integrity increased.

Witt, Noell, LaFleur, and Mortenson (1997) provided performance feedback to four elementary school teachers in order to increase the integrity with which they implemented an academic intervention with a targeted student. The performance feedback condition included daily graphic presentation of student performance as well as the teachers' treatment integrity scores. The results indicated an increase in treatment integrity during the graphic feedback condition. In addition, increased treatment integrity appeared to have a positive effect on the academic performance of three of the four children.

In order to further investigate the effects of graphic feedback on treatment integrity in schools, Jones, Wickstrom, and Friman (1997) compared traditional consultation to consultation involving graphic feedback on student on-task behavior as well as teacher implementation of the intervention. The primary dependent variable, treatment integrity, was defined as the percentage of intervals during which the teacher delivered a positive consequence following the on-task behavior of the student. The results indicated substantial improvements in treatment integrity after the implementation of graphic feedback, but only moderate improvements in on-task behavior for two of the three students. Based on these findings, the authors suggested that feedback on the teacher's performance might have been less important for improving treatment integrity than feedback on the child's behavior. In all three cases, the child's behavior did not improve significantly during the condition when treatment implementation was low (i.e., the consultation alone condition); therefore, the teachers may have believed that the treatment was ineffective before graphic feedback was provided. This belief may have led to the inadequate levels of treatment integrity observed prior to the feedback condition. Without graphic feedback, it appears unlikely that teachers will be able to accurately recognize small changes in student behavior. As the authors suggest, this can lead to a decline in treatment integrity and possibly the termination of an effective intervention. Clearly, graphed data depicting changes in student behavior can influence teachers' decisions regarding continued implementation of behavioral interventions.

Ingham and Greer (1992) conducted a two-study investigation to assess the effects of specific and non-specific feedback on teachers and their students. A procedure called teacher performance rate and accuracy (TPRA) feedback was compared to the general feedback procedures typically used by supervisors. During observation sessions for both studies, teachers ran at least 20 instructional trials or task-analysis steps with one student. Teacher performance was calculated by totaling all correct reinforcements and corrections of student behavior given by the teacher, subtracting errors in reinforcement or correction, and dividing that number by the duration of the session to determine a rate and accuracy score. Student responding was scored as incorrect if the student did not respond to an antecedent stimulus within five seconds of its presentation. The totals for student correct and incorrect responses also were divided by the duration of the session.
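Ingham and Greer's rate and accuracy measure reduces to simple arithmetic. A minimal sketch in Python, with illustrative session values that are not taken from the study:

```python
def tpra_rate(correct_consequations, errors, session_minutes):
    """Teacher performance rate and accuracy: total correct reinforcements
    and corrections, minus errors in reinforcement or correction, divided
    by the duration of the session."""
    return (correct_consequations - errors) / session_minutes

# Hypothetical 10-minute session with 20 instructional trials:
# 18 correctly consequated responses and 2 consequation errors.
print(tpra_rate(correct_consequations=18, errors=2, session_minutes=10))  # 1.6 per minute
```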

During baseline in both experiments, the supervisor (who was the first author of the study) observed the classroom and recorded teacher and student responses. Following the observation session, the supervisor met with the teacher and provided nonspecific feedback in the form of praise, comments on student behavior, instructional tasks, and materials. Graphic feedback was not provided during this phase. In the treatment phase of Study 1, the supervisor met with each teacher following each session and provided praise, verbal feedback, and written feedback. The written feedback included graphed data on the rate and accuracy of teacher responses (i.e., presentation of instructional trials and accurate application of consequences to student behavior) in the classroom, as well as the rate of student correct and incorrect responses. The data were explained to teachers in detail during the sessions. Results indicated that teacher performance increased during the rate and accuracy feedback phase for both participants. Student rates of correct responses increased during this phase as well.

Study 2 added the collection of data throughout the day by the teachers in an effort to determine whether their performance generalized to periods of the day when the supervisor was not present. The teachers were asked to collect data on individual students' correct and incorrect responses to instructional trials throughout the day. The supervisor collected the data at the end of each day, although teachers were not given feedback on the accuracy of their data collection. Lessons were videotaped so that the accuracy of teacher recording could be determined by an independent observer. Procedures used in the previous study (i.e., feedback on accuracy and rate of teacher and student responses) were also used during Study 2. As in the first experiment, both teacher and student performance improved significantly following rate and accuracy feedback. According to teacher-collected data, these improvements appeared to maintain even in the absence of the supervisor.

Although the influence of graphic feedback on teacher and student behavior has been demonstrated repeatedly in the behavior analytic literature, little has been done to evaluate the influence of graphic feedback on teachers' abilities to accurately recognize behavior change. Also, there is virtually no research investigating whether applied success (determined by data on student behavior) influences teachers' decisions to continue behavioral interventions. Small changes in student behavior may be difficult for teachers to recognize given the distracters present in most classroom environments. Typically, teachers rely strictly on their perceptions of behavior change when evaluating the effects of an intervention instead of taking into account actual changes in student behavior. Access to graphed data on the frequency or rate of student behavior may improve their ability to recognize such changes. In order for teachers to make appropriate decisions about intervention effectiveness and implementation, it is important that they have access to a graphic display depicting student behavior. This study assessed the effects of graphic display and training in visual inspection of graphed data on the ability of teachers to accurately recognize and report changes in student behavior. In addition, it evaluated the effects of the independent variables on participant decisions to continue behavioral interventions.

Chapter Two

Method

Participants and Setting

Three elementary school teachers volunteered to participate in this study. All three participants were acquainted with the researcher through prior observation in their classes; therefore, a high degree of cooperation was anticipated. Mrs. Ashton was the most experienced of the three participants. During the study she taught third grade in a regular education classroom and had five years of teaching experience prior to participating in this study. Ms. Flower was also a third grade teacher in a regular education classroom. She was a new teacher and had less than one year of teaching experience when the study began. Ms. Katch was also a first year teacher in a regular education setting, but unlike Ms. Flower and Mrs. Ashton, she taught at the second grade level. All three participants reported having no specific training in visual analysis of graphs when given a brief questionnaire (Appendix A).

All sessions took place during summer school. For two of the participants, data were collected in their classrooms during their lunch periods and after school. For the third participant, data were collected in her home and in a conference room at her elementary school. This participant did not teach summer school and therefore did not have a classroom in which to meet during the course of this study.

Institutional Review Board Procedures

The University of South Florida's Institutional Review Board and the Hillsborough County School Board approved all procedures prior to data collection. All participants were given an informed consent form explaining the study (Appendix B), and all returned the signed form before data collection began.

Videotaped Sessions

To provide stimulus materials for data collection, two elementary school classrooms were filmed during regular classroom activities. All students taped as part of this study were given parental informed consent forms (Appendix C). Only those students who returned the signed consent forms and agreed to participate in the study were taped. The students who did not return consent forms were moved out of view of the camera during the taping sessions. Two students from a first grade class and one student from a third grade class were chosen as the target students. Each of these students was chosen based on the amount of time they engaged in out-of-seat behavior. Out-of-seat behavior was chosen as the target behavior because it is frequently observed in classrooms but considered to be non-severe. For student R (3rd grade), out-of-seat behavior was defined as any incidence in which the student's bottom was completely separated from the chair and parallel to or above the top of the chair back for 2 or more seconds. Examples of out of seat included kneeling on one or more knees in the seat, standing at or near the desk, walking around the room, crawling under the desk, leaning across the desk, leaning over in the seat so that the feet were parallel with the chair, or standing with one foot in the chair and one foot on the floor. Out of seat was not scored if the student was sitting on his or her feet (unless the bottom was parallel to or above the height of the chair back), sitting with legs crossed, or leaving the seat with verbal permission from the teacher (e.g., to go to the bathroom following permission or direction by the teacher). For students K and L (1st graders), the definition of out of seat was identical to that for student R with the exception of chair tilting, which was added to the definition based on the frequency with which this behavior occurred in the classroom. Chair tilting was included in out of seat when two or more legs of the student's chair left the ground. This did not include a brief adjustment of the chair in an effort to get closer to the desk.

Each videotaped session was divided into one-minute segments and scored independently by the researcher and a second trained observer. Data were collected on the target behaviors using a ten-second partial-interval recording system. The observers' data were compared interval by interval to determine agreement on the percentage of intervals in which the target behaviors occurred. Inter-observer agreement was scored by dividing the number of intervals in which the observers obtained an agreement by the total number of intervals observed. Only those video clips for which an agreement score of 83% or better was attained (e.g., 5 of 6 agreement intervals in a minute) were used as stimulus material for data collection in this study. Agreement was calculated on the entire video for each student, and low-agreement segments were then discarded.
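The interval-by-interval agreement score described above is straightforward to compute. A minimal sketch, assuming each observer's record for a one-minute clip is a list of six booleans (True if out-of-seat behavior was scored in that ten-second interval); the example data are invented:

```python
def interval_ioa(observer1, observer2):
    """Interval-by-interval inter-observer agreement: the percentage of
    intervals on which both observers scored the same way (occurrence or
    non-occurrence of the target behavior)."""
    if len(observer1) != len(observer2):
        raise ValueError("Observers must score the same number of intervals.")
    agreements = sum(a == b for a, b in zip(observer1, observer2))
    return 100.0 * agreements / len(observer1)

# A one-minute clip holds six ten-second intervals; clips were retained
# only when agreement reached 83% (i.e., at least 5 of 6 intervals).
clip_obs1 = [True, True, False, False, True, False]
clip_obs2 = [True, True, False, True, True, False]
ioa = interval_ioa(clip_obs1, clip_obs2)
print(f"IOA = {ioa:.0f}%, keep clip: {ioa >= 83}")  # IOA = 83%, keep clip: True
```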

After all one-minute video segments were scored and coded, they were arranged into 28 "sets" of five clips each. The five video clips within each set were ordered to depict variable uptrending, invariable uptrending, variable downtrending, invariable downtrending, no trend with variability, or no trend without variability. In unstable data paths, the minimum change between data points was always at least five percent. Once sets were ordered by the researcher, trend was calculated using Microsoft Excel. A trend line with an angle greater than zero degrees and less than 90 degrees was labeled an uptrend; a trend line with an angle less than zero degrees was labeled a downtrend; and the absence of a trend in the data was labeled a no-trend graph.
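The study calculated trend in Microsoft Excel; the same classification can be sketched with an ordinary least-squares slope, where a positive slope corresponds to the "angle greater than zero degrees" rule. This Python version, including its optional flat-slope tolerance parameter, is an illustration rather than the study's actual procedure:

```python
def classify_trend(percentages, tolerance=0.0):
    """Fit a least-squares line to the per-clip percentages and label the
    set 'uptrend', 'downtrend', or 'no trend' by the sign of the slope."""
    n = len(percentages)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(percentages) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, percentages))
             / sum((x - x_mean) ** 2 for x in xs))
    if slope > tolerance:
        return "uptrend"
    if slope < -tolerance:
        return "downtrend"
    return "no trend"

print(classify_trend([20, 30, 35, 45, 50]))  # uptrend (invariable)
print(classify_trend([50, 40, 42, 30, 25]))  # downtrend (variable)
```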

Ten sets were constructed for student K: two depicting uptrending graphs with variability, two uptrending without variability, two downtrending with variability, two downtrending without variability, one no trend with variability, and one no trend without variability. Ten sets were constructed for student L: two depicting uptrending variable graphs, two uptrending without variability, two downtrending with variability, one downtrending without variability, one no trend with variability, and two no trend without variability. Eight sets were constructed for student R: two depicting uptrending graphs with variability, one uptrending without variability, one downtrending without variability, two downtrending with variability, one no trend with variability, and one no trend without variability. Graphs for each set can be found in Appendix D. All clips within a given set showed the same student; however, some sets included clips that had been used in previous sets. The sets were varied systematically within each condition to ensure that participants viewed a representative sample of both trending and non-trending graphs as well as variable and invariable graphs. Selection without replacement was used within each condition to prevent practice effects.

In addition to trend calculations, each video clip in a set was individually compared to the preceding clip to arrive at a determination of behavior change (i.e., behavior is better, worse, or not changed). A rating of "better" was scored if there was a decrease of at least 5% in the percentage of intervals in which the behavior occurred. A rating of "worse" was scored if there was an increase in the percentage of intervals in which the behavior occurred; the same 5% minimum applied. No change was scored if there was no change in the percentage of intervals in which the behavior occurred.

Dependent Variables and Data Collection

Measures of accurate detection of behavior change and appropriate persistence with intervention choices were included as dependent measures in the study. Accurate detection of behavior change was defined as an agreement between the participant's rating of behavior change and the actual change as determined by prior analysis of the graphs by trained scorers. To determine the participant's rating of behavior, a scorecard was administered after each clip in a set (Appendix E) and required the participant to compare each new video segment to the last. The scorecard prompted the participant to rate the student's behavior as better, a little better, a lot better, worse, a little worse, a lot worse, or no change. A choice of "I don't know" was also included as an option so that participants were not required to guess if they were unsure. Only ratings of "better" or "worse" were scored for the accuracy calculation during the study. Ratings of "a little" and "a lot" were used only as supplemental information and therefore were not operationalized; these ratings were scored equally as either better or worse. The percentage of accurate detection of behavior change during each session was calculated and graphed for each participant by dividing the number of accurate ratings by the total number of possible ratings available during the session.
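The clip-to-clip rating rules and the session accuracy score can be expressed compactly. A sketch under one assumption the thesis leaves implicit: changes smaller than the 5% minimum are treated as "no change." The percentages are invented for illustration:

```python
def rate_change(previous_pct, current_pct, minimum=5):
    """Compare a clip to the preceding one: 'better' if the percentage of
    intervals with the behavior dropped by at least the 5% minimum, 'worse'
    if it rose by at least the minimum, otherwise 'no change'."""
    delta = current_pct - previous_pct
    if delta <= -minimum:
        return "better"   # behavior targeted for deceleration decreased
    if delta >= minimum:
        return "worse"
    return "no change"    # assumption: sub-minimum shifts count as no change

def accuracy(participant_ratings, actual_ratings):
    """Percentage of accurate detections: matches between participant
    ratings and the change determined from the graphed data."""
    hits = sum(p == a for p, a in zip(participant_ratings, actual_ratings))
    return 100.0 * hits / len(actual_ratings)

set_pcts = [50, 40, 42, 30, 25]  # one five-clip set; clips 2-5 are rated
actual = [rate_change(a, b) for a, b in zip(set_pcts, set_pcts[1:])]
print(actual)  # ['better', 'no change', 'better', 'better']
print(accuracy(["better", "worse", "better", "better"], actual))  # 75.0
```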

Appropriate persistence with intervention choices was defined as a match between the trend of behavior change within a set of clips and the participant's choice to continue or discontinue the intervention. Data on appropriate persistence were collected by giving participants a scorecard after each set of videos (Appendix F). The scorecard presented a question presupposing that a teacher-selected or "teacher-friendly" intervention was in place to decrease the behavior, and asked the participant to determine whether the intervention should be continued. A match was scored if the behavior targeted for deceleration was downtrending and the participant stated that the intervention should continue, if the behavior was uptrending and the participant stated that the intervention should not be continued, or if the data were not trending and the participant stated that the intervention should not be continued. A no-match was scored if the behavior targeted for deceleration was downtrending and the participant stated that the intervention should not be continued, if the behavior was uptrending and the participant stated that the intervention should be continued, or if the behavior had no visible trend and the participant stated that the intervention should be continued. Three sets of five clips each were viewed by each participant during each session. The percentage of matches was calculated and graphed for each participant by dividing the number of matches by the total number of sets rated for each session.
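The match/no-match rules reduce to a single comparison: for a behavior targeted for deceleration, continuing is appropriate only when the data are downtrending. A minimal sketch with invented session data:

```python
def appropriate_persistence(trend, continue_choice):
    """A 'match' means the continue/discontinue decision fits the trend:
    continue only if the decelerating target behavior is downtrending;
    discontinue for uptrending or non-trending data."""
    should_continue = (trend == "downtrend")
    return continue_choice == should_continue

# Example session: three sets, scored as matches / total sets x 100.
sets = [("downtrend", True), ("uptrend", True), ("no trend", False)]
matches = sum(appropriate_persistence(t, c) for t, c in sets)
print(f"Appropriate persistence: {100 * matches / len(sets):.0f}%")  # 67%
```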

Observer Training

The primary data collector for this study was a research assistant trained by the principal investigator. The procedures for data collection were described in detail by the principal investigator, including an explanation of the participant scorecards, possible participant responses, and scoring procedures. The observer was given a copy of the scorecard for both the persistence and the accuracy measures, and the procedure for scoring both correct and incorrect participant responses was explained in detail. A quiz was given to the data collector prior to the practice scoring sessions. This quiz consisted of multiple choice questions pertaining to participant responses, scorecards, and scoring procedures for each dependent measure (Appendix G). The observer was required to score a minimum of 90% on the quiz before beginning practice scoring sessions (actual observer score = 100%).

Three practice sessions were conducted prior to data collection using mock participant scorecards for the persistence and accuracy measures. The observer compared the data on each scorecard to the actual change as determined by prior analysis by the researcher and determined whether the participant rating was accurate or inaccurate for the accuracy measure and whether there was a match or no-match for the persistence measure. During the practice sessions, the observer was given fifteen completed participant scorecards (12 accuracy and 3 persistence). She was required to score the twelve accuracy scorecards (scoring clips 2-5 only for each set) and calculate the percentage of accurate detection of behavior change for each set of five scorecards (three sets total). The observer then scored the remaining three persistence scorecards, which corresponded to the three sets of accuracy scorecards. IOA for the accuracy measure during the practice sessions was calculated by dividing the number of agreements by the total number of accuracy scorecards scored and multiplying by 100 (X/12 x 100). IOA for the persistence measure during the practice sessions was calculated by dividing the number of agreements by the total number of persistence scorecards scored in the session and multiplying by 100 (X/3 x 100). The observer was required to score three practice sessions for both dependent measures and obtain an IOA of 90% or higher with the principal investigator on each in order to begin data collection. The observer obtained a score of 100% after three practice sessions.

Inter-observer Agreement

A second trained observer scored 60% of all participant data. IOA checks were distributed across all phases of the study to prevent observer drift. For Mrs. Ashton and Ms. Katch, IOA was scored three times during baseline, five times during the graphic display condition, and once during the final condition of the study. For Ms. Flower, the final participant to receive treatment, IOA was scored six times during baseline, twice during the graphic display condition, and once during the final condition. Agreement scores for accuracy were calculated by dividing the number of agreements by the number of clips rated in the session and multiplying by 100. Agreement scores for persistence were calculated by dividing the number of agreements by the total number of sets rated in the session and multiplying by 100. Inter-observer agreement for persistence was 100% across all phases of the study. Agreement scores for accuracy ranged from 92% to 100% for Mrs. Ashton and Ms. Katch (M = 99%). Agreement scores for Ms. Flower remained consistent at 100%.

Experimental Conditions

Baseline. During the baseline condition, data were collected on the dependent variables in the absence of specific procedures to assist participants in assessing the behavior presented in the videotaped sets. When participant data on both accuracy and persistence were stable or showed a clear trend in the opposite direction of desired behavior change, the next phase of the study was initiated.

Graphic display. During the graphic display condition, a graph depicting the percentage of intervals in which the target behavior occurred in each clip was shown to the participants following each videotaped segment within a set. Subsequent graphs within a set included all data points for the current clip and any preceding clips (i.e., for each set, a total of four graphs were shown to participants, each updated based on the preceding clip). Graphs were generated using Microsoft Excel and were displayed on a sheet of paper. During this condition, participants were required to fill out their accuracy scorecard twice for each clip: once before they viewed the graph and once after they viewed the graph. This procedure was used to determine whether exposure to the graphic displays affected participants' ratings of behavior independent of concurrent viewing of the graph. Persistence scorecards were completed after the participants viewed the fifth segment in the set and the graph containing all five data points. When participant data on both accuracy and persistence were stable or showed a clear trend in the opposite direction of desired behavior change, the next phase of the study began.
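The study's graphs were produced in Excel; for readers who want to reproduce the cumulative display (one updated graph after each clip), here is a hypothetical matplotlib equivalent. The axis labels, styling, and file names are guesses, not the study's actual materials:

```python
import matplotlib.pyplot as plt

def plot_cumulative(percentages, upto, filename):
    """Render the graph shown after a given clip: all data points for the
    current clip and any preceding clips in the set."""
    xs = list(range(1, upto + 1))
    plt.figure()
    plt.plot(xs, percentages[:upto], marker="o", color="black")
    plt.xlabel("Clip")
    plt.ylabel("% of intervals with out-of-seat behavior")
    plt.ylim(0, 100)
    plt.xticks(range(1, len(percentages) + 1))
    plt.savefig(filename)
    plt.close()

set_pcts = [50, 40, 42, 30, 25]      # invented per-clip percentages
for clip in range(2, 6):             # four graphs, shown after clips 2-5
    plot_cumulative(set_pcts, clip, f"set_after_clip_{clip}.png")
```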

Training in visual inspection plus graphic display. During this phase of the study, participants were given a short training session, led by the principal investigator, on visual inspection of graphed data. Training included explanations of trend and variability, as well as a discussion of their importance in evaluating treatment effects (see Appendix H for an outline of training content). Magnitude of change was not included in the training protocol because the participants were not required to view data across phase changes. Trend was the first dimension discussed in the training. The participants were shown several examples and non-examples of each type of trend and were asked to report whether the behavior depicted was increasing/uptrending, decreasing/downtrending, or whether there was no trend in the data. The participants were also asked to draw a line of best fit on several graphs depicting clear trends in the data. Corrective feedback for incorrect answers and positive feedback for correct answers were given during the training session. Variability was the second dimension discussed during participant training. The participants were shown several graphs depicting variability in performance. Examples included graphs depicting no trend with variability, no trend without variability, uptrending variable data, and downtrending variable data. Participants were then asked to view several graphs and determine whether the data were stable or variable and whether the data were trending. Additionally, they were asked to make judgments similar to those made during the graphic display condition, including whether the intervention should be continued or discontinued. Corrective feedback for incorrect answers and positive feedback for correct answers were given, and any questions or concerns were addressed during this session. Training was conducted in one-to-one sessions, which took place in each participant's classroom or home and lasted approximately 30 minutes. After training, experimental sessions and data collection were conducted in a manner identical to the previous condition (i.e., graphic display).

Independent Variable Integrity

Participant knowledge of graphing conventions was assessed using an independent quiz/practice to ensure that the participant understood the information presented during training (Appendix I). The quiz was administered immediately following the training session, and its items covered the material discussed during training. A score of 90% or higher was considered sufficient for mastery. The investigator discussed any incorrect items with the participants following completion of the quiz to ensure that the participants understood the answers.

Participant Beliefs and Social Validity

A questionnaire was given pre-baseline, before training, and post-experimentation (Appendices J, K, and L). These questionnaires were used to assess participant beliefs about the importance of graphing in determining intervention effectiveness. In addition, they assessed participants' beliefs about the length of time required to determine whether an intervention is effective.

Chapter Three

Results

Participants' detection of behavior change scores across conditions are depicted in Figure 1. During the baseline condition, Mrs. Ashton's accurate detection of behavior change (top panel) averaged 61% (range, 42 to 75%). In the graphic display condition, there was an immediate change in level both before and after viewing the graph. Accurate detection before viewing the graph was initially high but began to decrease across time, although mean performance was higher during this condition than during baseline (M = 77%; range, 67 to 92%). Further improvements in accuracy were observed when Mrs. Ashton viewed the graph, and accuracy remained high and stable with only two data points falling below 100% (M = 96%; range, 83 to 100%). Following training, Mrs. Ashton's accurate ratings before viewing the graph increased immediately, but overall her data were downtrending as in the previous condition (M = 88%; range, 75 to 100%). Mrs. Ashton's data after viewing the graph remained stable and high at 100%.

The second panel in Figure 1 represents Ms. Katch's accurate detection of behavior change scores. During baseline, Ms. Katch's accuracy data were relatively stable (M = 50%; range, 42 to 58%). During the graphic display condition, Ms. Katch's accurate detection immediately increased both before and after viewing the graph of the student's behavior. Before the graph, her performance was quite variable until session 10, although mean performance was higher during this condition (M = 70%; range, 42 to 83%). Accuracy after viewing the graph remained at 100% throughout the entire condition. Following training in visual inspection, Ms. Katch's accuracy decreased noticeably before viewing the graph of the student's behavior, and mean performance decreased to below baseline levels (M = 42%; range, 25 to 58%). With the assistance of the graph in this condition, Ms. Katch's accuracy remained at 100%.

Ms. Flower's accuracy data are shown in the bottom panel of Figure 1. During the baseline condition, Ms. Flower's data were relatively stable with the exception of the first two data points (M = 60%; range, 42 to 75%). Before viewing the graph in the second condition, there were no immediate changes in her behavior, although her performance did increase as the condition progressed. There was very little change in mean performance from baseline levels during the graphic display condition prior to viewing the graph (M = 64%; range, 50 to 83%). Viewing the graphs appeared to improve Ms. Flower's accuracy, as evidenced by the upward trend observed in data points collected after the presentation of the graphs. Her performance on the accuracy measure steadily increased across this condition until maintaining at 100% for three consecutive sessions (M = 91%; range, 75 to 100%). Following training, before seeing the graph, Ms. Flower's accuracy and mean performance increased from the previous condition (M = 71%; range, 58 to 83%). After viewing the graph in this condition, her performance maintained at a high, stable level as in the previous condition (M = 96%; range, 92 to 100%).

Figure 1. Accurate detection of behavior change scores for Mrs. Ashton (top panel), Ms. Katch (middle panel), and Ms. Flower (bottom panel). [Line graphs: sessions on the x-axis, percentage of accurate ratings on the y-axis, across Baseline, Graphic Display, and GD + Training phases.]

Data on the percentage of appropriate persistence decisions across conditions are depicted in Figure 2. During baseline, Mrs. Ashton's data (top panel) displayed a downward trend with an average performance of 78% (range, 67 to 100%). During the graphic display condition, there was an increase in the overall level of the data. Mrs. Ashton's matches with the actual trend increased and maintained at 100% for all but one day during this condition (M = 95%; range, 67 to 100%). During the training condition, there was an initial decrease during session 12; however, during the final three sessions, Mrs. Ashton's matches with trend remained at 100% (M = 92%; range, 67 to 100%).

The middle panel of Figure 2 represents Ms. Katch's persistence data. During baseline, Ms. Katch's data were somewhat variable (M = 75%; range, 67 to 100%). After seeing the graph in the graphic display condition, Ms. Katch's matching immediately increased to 100% and remained high and stable across this condition, with the exception of two data points. Average performance also increased from the baseline condition (M = 93%; range, 67 to 100%). Following training, during session 15, Ms. Katch's data remained at the previous level; however, her performance decreased during the final session.

The bottom panel of Figure 2 shows Ms. Flower's persistence data across all three conditions. During baseline, her behavior was considerably variable (M = 72%; range, 33 to 100%). Ms. Flower's matches with the actual trend during the graphic display condition increased immediately, but overall, seeing the graph did not seem to affect her persistence choices. Her data were downtrending across the condition, and mean performance decreased to 62% (range, 0 to 100%). Training in visual inspection produced no noticeable changes in her behavior, and matches with trend remained stable at 60% during this condition.

Figure 2. Appropriate persistence scores for Mrs. Ashton (top panel), Ms. Katch (middle panel), and Ms. Flower (bottom panel). [Line graphs: sessions on the x-axis, percentage of matches with trend on the y-axis, across Baseline, Graphic Display, and GD + Training phases.]

Responses on the participant selection questionnaire, beliefs about graphs survey, and social validity survey yielded interesting results. On the selection questionnaire, all three participants reported that they had never taken a course or in-service addressing methods for analyzing the effects of a behavior change strategy. Additionally, when asked how they knew when a strategy was working, none listed methods involving data collection or analysis. One participant said she used "informal" observations of student behavior, while another looked at whether more work was completed. All participants reported that they were somewhat confident in their abilities to determine whether a behavior management strategy was working, and all reported that they never graphed student behavior when making decisions about behavioral interventions. It is also interesting to note that two of the three teachers who participated in this study said they were confident in their ability not only to collect data on student behavior, but also to evaluate it correctly.

A questionnaire assessing participant beliefs about graphs was given to each participant prior to collecting baseline data, just before the training in visual inspection, and after completing the final session. These surveys were given to evaluate whether participant beliefs and opinions about graphs changed after viewing the graphs and after training. Each of these questionnaires contained similar questions about the usefulness of graphs, but they were not identical. On each of these questionnaires, all three participants agreed that graphs were useful tools for evaluating whether a behavior change strategy was effective. When asked before training whether they planned to graph student behavior in the future, two participants said "seldom," while Ms. Flower chose "sometimes." On the final questionnaire, two participants said they would use graphs sometimes. On this survey, Ms. Flower said she would seldom use graphs when evaluating a behavior change strategy, the reason being that it takes too long. When asked what other methods they used to determine whether a strategy was effective, two participants stated that they "just know," while one said she would get someone else's opinion or use proof of work progress. Before baseline, two of the three participants said they generally try an intervention for 5-6 days before making decisions about effectiveness, while the other participant chose 3-4 days. On the final questionnaire, all three participants said they planned to try behavioral interventions for 5-6 days before making decisions about effectiveness.

The questionnaires given before training and post-experimentation inquired about the usefulness of the presented graphs in helping participants make decisions about behavior change and about continuing the intervention. All participants stated that viewing the graph helped them when making both decisions, and two participants found it difficult to make decisions about whether the intervention was working prior to viewing the graphic display. It is interesting to note that, on the final beliefs survey, only one participant thought it was difficult to determine whether the students' behavior was better or worse prior to viewing the graph. Clearly, participants felt more confident making decisions about whether a behavior had changed than they did making decisions regarding the overall effectiveness of a strategy. In addition, participants were asked whether they found it easier to make decisions about behavior change as the study progressed. Two of the participants said no to this question; however, all agreed that after training, it was easier to make decisions about continuing or discontinuing the intervention.


In addition to the beliefs questionnaires, a social validity questionnaire was given to all three participants (Appendix M). All participants said the training was useful and that the information provided in this study contributed to their goals as teachers. They also stated that graphing was a useful tool when making decisions about student behavior and intervention effectiveness and that they would be more inclined to use graphing procedures when making decisions in the future. Unfortunately, these responses did not correspond to those made on the beliefs questionnaires (even those given at the same time). In terms of video quality, which was a concern for the researcher, only one of the participants agreed that it was “sometimes” difficult to see the students’ behaviors due to the quality of the videos. All three participants said they enjoyed participating in this study.


Chapter Four

Discussion

One goal of this study was to assess the effects of graphic display and a brief training in visual inspection of graphed data on the ability of teachers to accurately recognize and report changes in student problem behavior. Viewing the graph of student behavior during the graphic display condition did seem to affect participants’ performance on the accuracy measure. All three participants were more accurate when rating behavior change after viewing the graph than when making judgments with no visual representation of behavior change. It is interesting to note that accuracy for Ms. Ashton and Ms. Katch improved dramatically from baseline after being shown the students’ data, whereas Ms. Flower’s accuracy increased gradually over time. It is also interesting to note that on two occasions early in the graphic display condition (sessions 8 and 9), Ms. Flower chose to keep her original answers on the accuracy scorecard even when they did not correspond to the data shown on the graph. For example, she rated the behavior in a clip as “worse” prior to viewing the graph and chose to keep this answer even after seeing that there was a decrease in out-of-seat behavior. Ms. Ashton also chose to disregard the graph on two occasions during the graphic display condition (sessions 5 and 8). When participant answers did not correspond to the graph, they seemed surprised and made comments such as, “I could have sworn the behavior was better.” These comments and their subsequent


answers suggest that the teachers may not have believed that the data on the graph actually represented student behavior, or perhaps they were responding to other features of the behavior not captured in the operational definition. In any event, these findings raise interesting questions for behavioral consultants who share data on student behavior with teachers, especially with regard to the “believability” of graphs and the consistency between teacher and consultant perceptions of student behavior. Unlike Ms. Flower and Ms. Ashton, Ms. Katch immediately used the graph when making judgments about behavior change and continued to do so throughout the study. She also commented that “the graph doesn’t lie” when one of her answers differed from the graphic display. Interestingly, the social validity and beliefs questionnaires did not reveal that Ms. Katch had more positive opinions about graphing than the other two participants. Another interesting finding was the change in participant performance prior to being shown the corresponding graphs for each clip. In other words, it appears as though simply viewing the graphs improved the accuracy with which participants rated behavior even in the absence of a graphic display. Although accuracy was somewhat variable without the graph, all participants eventually showed clear improvements when compared to baseline measures (although Ms. Ashton’s and Ms. Flower’s improvements were less impressive than those of Ms. Katch). One explanation for increased performance under these conditions may have been an increase in participants’ attending to relevant stimuli on the videos as a function of seeing the graph. However, this explanation does not account for the immediate increases in accuracy observed just after the baseline phase. Another possibility is that the graph of student behavior served to reinforce


correct responding prior to viewing the graph. In fact, Ms. Flower commented several times that her goal was to “beat the graph”. It was clear from her verbal behavior that she enjoyed seeing the graph when her answers corresponded to the data and disliked seeing it when her answers did not. Although it is difficult to fully explain these results, it is clear that the participants were better able to accurately detect and report behavior change when viewing the graph in this condition than when relying on subjective opinions alone. Although conclusions are limited due to the lack of data during the final condition, training in visual inspection of graphed data did not seem to influence accuracy. Because performance was very high and stable prior to the third condition, absolute effects of training are difficult to discern. This study also sought to evaluate the effects of the independent variables on participant decisions to continue or discontinue behavioral interventions. For all three participants, viewing the graph immediately improved their persistence scores, although further effects were not observed with the addition of training. Additionally, Ms. Flower’s behavior returned to baseline levels after an initial increase in appropriate responding. This decrease may be due to a faulty learning history with respect to interpreting graphs. For instance, Ms. Flower viewed several uptrending graphs with two low data points. In these instances, she chose to continue the intervention. Although she was not asked to explain her decisions after answering, she commented that because the student’s behavior was lower on two of the five days, the intervention should be continued. She stated that “these points show he can do it.” It was clear that Ms. Flower was not taking into account the overall direction of the data, but was instead looking


only at individual data points. As mentioned previously, training did not improve teacher performance on the persistence measure. It may be that a one-time training is not sufficient to overcome a faulty learning history. More time and practice may be needed to effectively train teachers to interpret graphs and subsequently make decisions, especially when graphs include data paths with considerable variability. It is interesting to note that the majority of inappropriate persistence choices for all participants occurred when the data were highly variable. The participants seemed to have more difficulty making decisions about persistence when the data were not clearly increasing or decreasing. In other words, they were more accurate when there was minimal or no variability in the data. This finding suggests that researchers or practitioners working with teachers may need to provide additional explanations of graphs with variability. When viewing the persistence data, it is also important to take into account the probability of participants answering correctly on the persistence scorecard by chance. Participants completed three scorecards during each condition, and for each they had a 50/50 chance of answering correctly unless they chose the “I don’t know” option (which was rarely chosen). Due to the limited range of responses in this condition, caution is warranted when drawing conclusions from these data. Some limitations concerning participant accuracy in the absence of the graph during all three conditions should be noted. First, although participants were asked to focus on the amount of time the students spent out of seat, they seemed to focus more on the intensity of the behavior. For instance, they were more likely to rate the behavior as worse if the student left his seat and walked around the room versus standing at his seat


for an identical number of intervals. In this study, no attempts were made to control for intensity variables due to the limited number of children and sessions available for videotaping. Future researchers might alleviate this problem by taping for longer periods of time to gain more clips of the desired behavior, thereby allowing them to “match” intensities within a set of clips. This would also allow researchers to choose clips within sets that depicted roughly consistent engagement in on-task behavior, which could also influence teacher ratings of behavior. Given these constraints and the difficulty of controlling them within natural settings, it may be advisable for future researchers to tape actors in a simulated classroom setting in order to control for more variables. Another potential limitation of the study involves the quality of the video and sound. Because the videos were compressed onto CDs, it was occasionally difficult to see the student in the clip engage in the target behaviors. Although the teachers did not find the quality too poor to view the clips and make decisions concerning behavior change, this may have affected some of their answers on the accuracy and persistence scorecards. However, only one participant (Ms. Flower) reported that it was sometimes difficult to see the students’ behaviors on the videos due to the video quality. One clear weakness of the current research is the analogue nature of the study. Although it was clear that teacher accuracy improved when participants were shown graphs of student performance, viewing short clips of behavior is very different from dealing with a problem behavior on a daily basis. Therefore, future research should investigate whether graphs and training in visual inspection affect teacher decision making in teachers’ own classrooms.


Clearly, future research is needed to determine what variables influence teacher decision making with regard to behavioral interventions. It would be interesting to investigate teacher responses to different topographies of disruptive behavior. This study focused only on out-of-seat behavior in order to reduce the possible confounds of introducing multiple behaviors, but it is important to note that each teacher has his or her own beliefs and opinions about whether a behavior is annoying or disruptive. These beliefs may very well influence teacher responses. It would also be interesting to investigate whether viewing a baseline graph influences teacher responses to persistence questions. For instance, if they were to first view a baseline graph depicting high, stable data and then a treatment graph similar to those shown in this study, would their answers stay the same? They may view the data very differently depending on the trend and variability of the baseline data. Future researchers may also attempt to improve the training protocol used in this study in order to better facilitate appropriate decisions. One option would be to use programmed instruction to increase participant fluency in reading graphs with variability. This type of training environment would allow for the presentation of more difficult graphs only after a participant has mastered graphs with less variability. The tutorial should also include feedback frames for correct and incorrect responses. This type of feedback was provided during the training in this study; however, a computer tutorial would allow for more rapid presentation of graphs and feedback, allowing for more practice. Participants could then build their fluency by completing timed quizzes requiring them to make decisions about changes on the graph and persistence.
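To make the proposed tutorial concrete, the sketch below shows one way such a mastery-based sequence could be structured in Python. This is a hypothetical illustration only: the item pools, the 80% mastery criterion, the function names (run_tutorial, MASTERY, LEVELS), and the feedback wording are assumptions of the sketch, not part of this study's training protocol.

# Hypothetical programmed-instruction sketch: graphs with more variability
# are presented only after a mastery criterion is met on simpler graphs,
# and every response is followed by a feedback frame.

MASTERY = 0.8  # assumed criterion: 80% correct to advance a level

# Each item: (data path in percentage of intervals, correct decision when
# the intervention is meant to DECREASE the behavior).
LEVELS = {
    1: [((80, 70, 55, 40, 30), "continue"),      # clear downtrend
        ((20, 35, 50, 65, 80), "discontinue")],  # clear uptrend
    2: [((75, 50, 65, 35, 30), "continue"),      # variable downtrend
        ((30, 60, 40, 70, 80), "discontinue")],  # variable uptrend
}

def run_tutorial(answer_fn):
    """answer_fn(data) returns 'continue' or 'discontinue' for a data path."""
    for level in sorted(LEVELS):
        correct = 0
        for data, key in LEVELS[level]:
            if answer_fn(data) == key:
                correct += 1
                print("Correct: the overall trend supports", repr(key))
            else:
                print("Incorrect: judge the whole data path, not single points.")
        if correct / len(LEVELS[level]) < MASTERY:
            print("Level", level, "not mastered; repeat before adding variability.")
            return
    print("All levels mastered; ready for timed fluency quizzes.")

# Example run with a naive responder that compares the last point to the first.
run_tutorial(lambda d: "continue" if d[-1] < d[0] else "discontinue")

The key design feature is simply that the more variable item pool is withheld until the participant meets the criterion on the less variable pool, mirroring the progression described above.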


Despite some methodological and logistical shortcomings, this study adds to the behavioral education literature in at least two important ways. First, given participant responses to the data, the results support the validity of using graphs when consulting with teachers. Teachers were more accurate when making decisions about changes in behavior when they were viewing a graph than when they were relying solely on their subjective views of behavior change. Additionally, training in visual inspection did not appear to be necessary to maintain high performance on the accuracy measure. Second, although several researchers have investigated the role of graphs in treatment integrity (Ingham & Greer, 1992; Jones, Wickstrom, & Friman, 1997; Witt, Noell, LaFleur, & Mortenson, 1997; Mortenson & Witt, 1998), this study was the first to show that graphs can be used to increase teachers’ abilities to accurately detect and report changes in student behavior. Additionally, for two of the three participants, appropriate persistence did improve as a result of viewing graphed data of student behavior. These effects were apparent even without specific training in visual inspection. These are critical findings considering that teachers are often the ones responsible for making decisions about intervention effectiveness and implementation. Consultants working in schools should be aware of these findings and continue to provide teachers with objective data (and training if necessary) so they are better able to make these important decisions.


References

Besalel-Azrin, V., Azrin, N.H., & Armstrong, P.M. (1977). The student-oriented classroom: A method of improving student conduct and satisfaction. Behavior Therapy, 8, 193-204.

Elliott, S.N., Witt, J.C., Galvin, G.A., & Peterson, R. (1984). Acceptability of positive and reductive behavioral interventions: Factors that influence teachers’ decisions. Journal of School Psychology, 22, 353-360.

Ingham, P., & Greer, R.D. (1992). Changes in student and teacher responses in observed and generalized settings as a function of supervisor observations. Journal of Applied Behavior Analysis, 25, 153-164.

Jones, K.M., Wickstrom, K.F., & Friman, P.C. (1997). The effects of observational feedback on treatment integrity in school-based behavioral consultation. School Psychology Quarterly, 12, 316-326.

Kazdin, A.E. (1981). Acceptability of child treatment techniques: The influence of treatment efficacy and adverse side effects. Behavior Therapy, 12, 493-506.

Martens, B.K., Peterson, R.L., Witt, J.C., & Cirone, S. (1986). Teacher perceptions of school-based interventions. Exceptional Children, 53, 213-223.

Martens, B.K., Witt, J.C., Elliott, S.N., & Darveaux, D.X. (1985). Teacher judgments concerning the acceptability of school-based interventions. Professional Psychology: Research and Practice, 16, 191-198.


Mortenson, B.P., & Witt, J.C. (1998). The use of weekly performance feedback to increase teacher implementation of a prereferral academic intervention. School Psychology Review, 27, 613-627.

Von Brock, M.B., & Elliott, S.N. (1987). Influence of treatment effectiveness information on the acceptability of classroom interventions. Journal of School Psychology, 25, 131-144.

Wickstrom, K.F., Jones, K.M., LaFleur, L.H., & Witt, J.C. (1998). An analysis of treatment integrity in school-based consultation. School Psychology Quarterly, 13, 141-154.

Witt, J.C. (1986). Teachers’ resistance to the use of school-based interventions. Journal of School Psychology, 24, 37-44.

Witt, J.C., & Elliott, S.N. (1985). Acceptability of classroom intervention strategies. Advances in School Psychology, 4, 251-288.

Witt, J.C., Noell, G.H., LaFleur, L.H., & Mortenson, B.P. (1997). Teacher use of interventions in general education settings: Measurement and analysis of the independent variable. Journal of Applied Behavior Analysis, 30, 693-696.


Appendices


Appendix A: Participant Selection Questionnaire

1. Have you ever taken a class or attended an in-service that addressed methods for analyzing the effectiveness of behavior change strategies in your classroom?
a. yes b. no

2. In general, how do you know whether a strategy you are using to improve a student’s behavior is working?
a. I observe the student and note any changes in behavior
b. I collect data on the student’s behavior and graph changes

3. How confident do you feel that you are able to determine whether a behavior change strategy is working?
Very confident   Somewhat confident   Confident   Somewhat unconfident   Very unconfident

4. Have you ever taken a class or attended an in-service that addressed how to graph data on student behavior in your classroom?
a. yes b. no

5. Do you ever graph data on the behavior of students in your classroom?
a. yes b. no
If you answered yes, would you be willing to share some of those graphs with the researcher of this study?
a. yes b. no

6. How confident do you feel that you are able to collect data on the behaviors of students in your classroom and correctly graph it?
Very confident   Somewhat confident   Confident   Somewhat unconfident   Very unconfident


Appendix B: Participant Informed Consent

Informed Consent
Social and Behavioral Sciences
University of South Florida
Information for People Who Take Part in Research Studies

The following information is being presented to help you decide whether or not you want to take part in a minimal risk research study. Please read this carefully. If you do not understand anything, ask the person in charge of the study.

Title of Study: The Effects of Graphic Display and Training on Teachers’ Detection of Behavior Change
Principal Investigator: Allana Duncan
Study Location(s): Graham Elementary

You are being asked to participate because we are trying to understand what affects teachers’ perceptions of behavior change in the classroom. We are also trying to understand what impacts teachers’ decisions to continue behavior change procedures in the classroom. You are an ideal candidate for this study because you have reported experience with disruptive behavior either in the past or in your current classroom and would like to better understand how to evaluate behavior change procedures.

General Information about the Research Study
The purpose of this research study is to examine the effects of systematic methods for determining behavior change (graphs) on teachers’ perceptions of behavior in the classroom. Similar studies have looked at the effects of graphic feedback on the integrity with which teachers implement behavior change procedures and whether these methods affect teacher and student behaviors; however, no studies have looked at the effects graphs may have on teacher decision making (i.e., whether student behavior is improving and whether the intervention should be continued). By conducting this study, we hope to find ways to promote effective evaluation of behavior change procedures in the classroom.

Plan of Study
As a participant in this study, you would be required to meet with the researcher weekly over the course of the study. The first meeting will consist of a brief questionnaire which should take no more than 10 minutes for you to complete. During each subsequent meeting, you will be asked to view several videotaped segments depicting disruptive behavior in a classroom setting. Following each segment, you will be asked to complete a rating form. After viewing a set of five clips, you will be asked to complete a similar rating form. During one of the sessions, you’ll be trained on an effective way to evaluate whether the interventions used in your class are making a difference in student behavior. Each of these meetings will last 30-40 minutes. At the end of the study, you will be asked to complete a brief questionnaire that should take no more than 5-10 minutes to complete.


Appendix B: (Continued)

Payment for Participation
You will not be paid for your participation in this study.

Benefits of Being a Part of this Research Study
By taking part in this research study, you may learn ways to evaluate whether a behavior change procedure is effective in your classroom. You will also receive a copy of the training materials used in the study.

Risks of Being a Part of this Research Study
There are no known risks associated with participation in this study.

Confidentiality of Your Records
Your privacy and research records will be kept confidential to the extent of the law. Authorized research personnel, employees of the Department of Health and Human Services, the USF Institutional Review Board and its staff, and other individuals acting on behalf of USF may inspect the records from this research project. The results of this study may be published. However, the data obtained from you will be combined with data from others in the publication. The published results will not include your name or any other information that would personally identify you in any way. All data will be held by the primary researcher at her home office. Only the primary researcher, her supervisor, and the trained observers who will score the data will have access to the data.

Volunteering to Be Part of this Research Study
Your decision to participate in this research study is completely voluntary. You are free to participate in this research study or to withdraw at any time. If you choose not to participate, or if you decide to withdraw at any time, there will be no penalty or loss of benefits you are entitled to receive. Your decision about participation will in no way affect your job status.

Questions and Contacts
• If you have any questions about this research study, contact Allana Duncan at (813) 789-7163 or Dr. Jennifer Austin at (813) 974-6496.
• If you have questions about your rights as a person who is taking part in a research study, you may contact the Division of Research Compliance of the University of South Florida at (813) 974-5638.


Appendix B: (Continued)

Consent to Take Part in This Research Study
By signing this form I agree that:
• I have fully read, or have had read and explained to me, this informed consent form describing this research project.
• I have had the opportunity to question one of the persons in charge of this research and have received satisfactory answers.
• I understand that I am being asked to participate in research. I understand the risks and benefits, and I freely give my consent to participate in the research project outlined in this form, under the conditions indicated in it.
• I have been given a signed copy of this informed consent form, which is mine to keep.

_________________________ _________________________ _______________
Signature of Participant   Printed Name of Participant   Date

Investigator Statement
I have carefully explained to the subject the nature of the above research study. I hereby certify that to the best of my knowledge the subject signing this consent form understands the nature, demands, risks, and benefits involved in participating in this study.

_________________________ _________________________ _______________
Signature of Investigator   Printed Name of Investigator   Date
Or authorized research investigator designated by the Principal Investigator

Institutional Approval of Study and Informed Consent
This research project/study and informed consent form were reviewed and approved by the University of South Florida Institutional Review Board and its staff, and other individuals acting on behalf of USF, for the protection of human subjects. This approval is valid until the date provided below. The board may be contacted at (813) 974-5638.

Approval Consent Form Expiration Date: ________   Revision Date: _______   IRB # _______


Appendix C: Parent Informed Consent

Parental Informed Consent
Social and Behavioral Sciences
University of South Florida
Information for People Whose Children Are Being Asked to Take Part in a Research Study

The following information is being presented to help you decide whether or not you want to allow your child to be a part of a minimal risk research study. Please read this carefully. If you do not understand anything, ask the person in charge of the study.

Title of research study: The Effects of Graphic Display and Training in Visual Inspection on Teachers’ Detection of Behavior Change
Person in charge of study: Allana Duncan Luquette
Where the study will be done: Pizzo Elementary School, Tampa, FL

Your child is being asked to participate because he/she is an elementary school student in a regular education classroom.

General Information about the Research Study
The purpose of this research study is to help teachers recognize changes in student behavior in the classroom. Teachers will view short video clips of your child’s class and make decisions about changes in student behavior on those clips. Your child may or may not be one of the students targeted for observation by the teachers in the study.

Plan of Study
As a participant, your child will be video-taped in their regular classroom. This taping will take several hours across several days, but your child will not be asked to do anything outside their regular classroom routine. The tapes will be recorded during regular school hours. These video tapes will only be shown to a small research group at the University of South Florida and the three to five teachers who will be participants in this study. The tapes will be shown to elementary school teachers for evaluation.

Payment for Participation
You and your child will not be paid for your child’s participation in this study.

Risks of Being a Part of this Research Study
There are no known risks associated with this study.

Confidentiality of Your Child’s Records
You and your child’s privacy and research records will be kept confidential to the full extent required by law. Authorized research personnel, employees of the Department of Health and Human Services, the USF Institutional Review Board and its staff, and other individuals acting on behalf of USF may inspect the records from this research project, including the video tapes. The results of this study may be published. However, the published results will not include your child’s name or any other information that would personally identify your child in any way. Code names will be used in place of your child’s real name in order to protect you and your child’s privacy. The principal investigator and a small research group will have access to and view the video tapes. The videos will be kept in a locked office at the University of South Florida. The tapes will not be used for any purpose outside this research study. The tapes will be destroyed following completion of the study.

Volunteering to Take Part in this Research Study
Your decision to allow your child to participate in this research study is completely voluntary. You are free to allow your child to participate in this research study or to withdraw him/her at any time. If you choose not to allow your child to participate or if you remove your child from the study, there will be no penalty or loss of benefits that you or your child are entitled to receive. There will be no grade penalty for your child if you do not allow your child to participate.

Questions and Contacts
• If you have any questions about this research study, contact Allana Duncan Luquette at (813) 789-7163 or Dr. Jennifer Austin at (813) 974-6496.
• If you have questions about your rights as a person who is taking part in a research study, you may contact the Division of Research Compliance of the University of South Florida at (813) 974-5638.

Consent for Child to Take Part in this Research Study
I freely give my consent to let my child take part in this study. I understand that this is research. I have received a copy of this consent form.

________________________ ________________________ ___________
Signature of Parent of child taking part in study   Printed Name of Parent   Date

Investigator Statement
I have carefully explained to the subject the nature of the above protocol. I hereby certify that to the best of my knowledge the subject signing this consent form understands the nature, demands, risks, and benefits involved in participating in this study.


Appendix C: (Continued)

___________________ _______________ ________
Signature of Investigator   Printed Name of Investigator   Date
Or authorized research investigator designated by the Principal Investigator

Child’s Assent Statement
Allana Duncan Luquette has explained to me this research study called The Effects of Graphic Display and Training in Visual Inspection on Teachers’ Detection of Behavior Change. I agree to take part in this study.

________________________ ________________________ ___________
Signature of Child taking part in study   Printed Name of Child   Date

________________________ ________________________ ___________
Signature of Parent of child taking part in study   Printed Name of Parent   Date

________________________ ________________________ ___________
Signature of person obtaining consent   Printed Name of person obtaining consent   Date

If child is unable to give assent, please explain the reasons here:

________________________ ________________________ ___________
Signature of Parent of child taking part in study   Printed Name of Parent   Date

________________________ ________________________ ___________
Signature of person obtaining consent   Printed Name of person obtaining consent   Date

Investigator Statement:
I certify that participants have been provided with an informed consent form that has been approved by the University of South Florida’s Institutional Review Board and that explains the nature, demands, risks, and benefits involved in participating in this study. I further certify that a phone number has been provided in the event of additional questions.

____________________ _________________________ _______________
Signature of Investigator   Printed Name of Investigator   Date


Appendix D: Video Graphs

[Figure residue: three pages of graphs corresponding to the video sets. Page 1 (student K, sets 1-10): uptrending with variability (sets 1-2), uptrending with no variability (set 3), set 4, downtrending with variability (sets 5-6), downtrending with no variability (sets 7-8), no trend with variability (set 9), and no trend/stable (set 10). Page 2 (student L, sets 1-10): uptrending variable (set 1), uptrending no variability (set 2), downtrending variable (set 3), downtrending no variability (set 4), no trend variable (set 5), sets 6-7, downtrend with variability (set 8), no trend/stable (set 9), and uptrend no variability (set 10). Page 3 (sets 1-8): uptrend variable, down variable, no trend stable, no trend variable, downtrend no variability, down variable, uptrend no variability, and up variable. All graphs plot percentage of intervals out of seat (or out of seat/tilting) on the y-axis (0-100) against sessions on the x-axis.]


Appendix E: Accuracy Scorecard

Accuracy Scorecard

Date: ________   Behavior: __________   Participant: __________
Set #: ____   Condition: ___________   Clip #: ___

Compared to the previous video clip, the behavior in this clip was:
{ } Better      [ ] A little better   [ ] A lot better
{ } Worse       [ ] A little worse    [ ] A lot worse
{ } No change
{ } I don’t know

Score: _________

(The scorecard is printed twice per page.)


Appendix F: Persistence Scorecard

Persistence Scorecard

Date: _______   Behavior: ____________   Participant: __________
Set #: _____   Condition: ___________

Based on all the clips I’ve seen today, I think the teacher should:
{ } Continue the current intervention because it is working.
{ } Discontinue the current intervention because it is not working.
{ } I don’t know

Score: _____________

(The scorecard is printed three times per page.)




Appendix G: Quiz - Data Collection Methods

1. If a participant rates a behavior on the accuracy scorecard as “better” when the key indicates there is an increase in the percentage of the target behavior, you would score their response as
a. Accurate b. Inaccurate

2. If a participant rates a behavior on the accuracy scorecard as “worse” when the key indicates there is an increase in the percentage of the target behavior, you would score their response as
a. Accurate b. Inaccurate

3. If a participant rates a behavior on the accuracy scorecard as “worse” when the key indicates there is a decrease in the target behavior or there is no change in the target behavior, you would score their response as
a. Accurate b. Inaccurate

4. If a participant views a set of videos and responds by saying the intervention should be continued when the data are stable according to the key, you would score their response as a
a. Match b. No-match

5. If a participant views a set of videos and responds by saying the intervention should not be continued when the data are uptrending according to the key, you would score their response as a
a. Match b. No-match

6. If a participant views a set of videos and responds by saying the intervention should not be continued when the data are downtrending according to the key, you would score their response as a
a. Match b. No-match


Appendix G: (Continued)

7. If a participant rates a behavior as being “a little better” on the accuracy scorecard when the key indicates there is a large increase in the percentage of the target behavior, you would score their response as
a. Accurate b. Inaccurate

8. If a participant views a set of videos and responds by saying the intervention should be continued when the data are downtrending according to the key, you would score their response as a
a. Match b. No-match

9. After a participant has filled out all accuracy scorecards for the session, you calculate the percentage of accurate detection of behavior change by
a. dividing the number of accurate ratings by the total number of ratings during the session.
b. dividing the number of accurate ratings by the number of inaccurate ratings during the session.
c. dividing the number of accurate ratings by the total number of ratings per week.

10. After a participant has filled out all necessary persistence scorecards for the session, you calculate the percentage of matches by
a. dividing the number of matches with actual trend by the number of no-matches rated for the session
b. dividing the number of matches with actual trend by the total number of “sets” rated for the session
c. dividing the number of matches with actual trend by the total number of “sets” rated for the week.
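The scoring rules tested in items 9 and 10 reduce to simple proportions. The following minimal Python sketch shows that arithmetic; the function names and sample numbers are illustrative assumptions, not materials from the study.

def percent_accurate(ratings):
    # Item 9: accurate ratings divided by total ratings for the session.
    # ratings: list of booleans, True when a scorecard rating matched the key.
    return 100.0 * sum(ratings) / len(ratings)

def percent_matches(set_decisions):
    # Item 10: matches with the actual trend divided by the total number of
    # "sets" rated for the session; one boolean per set, True on a match.
    return 100.0 * sum(set_decisions) / len(set_decisions)

# Example: 4 of 5 accurate ratings and 2 of 3 matching persistence choices.
print(percent_accurate([True, True, False, True, True]))  # 80.0
print(percent_matches([True, False, True]))               # 66.66...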


Appendix H: Outline - Participant Training

Outline: Participant training in visual analysis of graphed data

• In order to determine whether an intervention/behavior management strategy is effective, and therefore determine whether it should be continued or whether you should try something new, a systematic and objective form of examination can be used. This technique is called visual analysis of graphs.
• There are several properties of data that should be considered when interpreting a graph. We will only talk about two of these properties: trend and variability.

1. Trend:
• Definition: “the overall direction taken by a data path, described in terms of direction (increasing, decreasing, or zero/no trend), degree of trend, and the extent of variability of data points around the trend.” Also called uptrend, downtrend, or stable/variable with zero trend.
• Three point rule: at least three data points are needed to determine whether there is a trend in the data.
• Although some data points may be lower than others, the overall picture of the graph must be used when making decisions about whether interventions are working or not.
• Examples: graphs depicting uptrending, downtrending, and no trend data.
• Drawing a line of best fit is very helpful in determining the direction and degree of trend in the data.
• The participants will be shown how to draw a line of best fit on several graphs (like the ones shown during the graphic display condition).
• The participants will practice determining whether the data are trending on several graphs depicting uptrends, downtrends, and no trends (by drawing lines of best fit). They will also practice making decisions about whether an intervention should be continued based on the graphs shown. An emphasis will be placed on looking at the entire graph versus making judgments based on a few data points.

2. Variability:
• Definition: the extent to which measures of behavior under the same environmental conditions differ from one another.
• A lot of variability makes it difficult to determine whether an intervention is effective.
• Variable data can also be trending, which is important because small changes in behavior can be hidden in variable data. In other words, a behavior can be gradually increasing or decreasing over time even though the data are variable. This could affect decisions to continue interventions. This is why it is critical to consider trend and variability in the data when determining whether behavior is improving across time.


Appendix H: (Continued)

• Participants will be shown several graphs depicting no trend stable, no trend variable, uptrending variable, and downtrending variable data.
• The participants will practice determining whether the data on a graph are variable or relatively stable. They will also identify trend in the data by drawing a line of best fit and make determinations as to whether interventions should be continued based on the graph shown.

3. Quiz
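The judgments trained above (trend direction via a line of best fit, plus a variability check) can also be expressed numerically. The sketch below is offered under stated assumptions: a least-squares slope stands in for the hand-drawn line of best fit, and the flat-slope threshold and the variability index are arbitrary illustrative choices, not part of the training protocol.

def classify_trend(data, flat_slope=2.0):
    # Classify a data path as uptrending, downtrending, or no trend by
    # fitting an ordinary least-squares line (the numeric analogue of a
    # hand-drawn line of best fit). flat_slope is an assumed threshold
    # (percentage points per session) below which we call it "no trend".
    n = len(data)
    if n < 3:                      # three-point rule from the outline
        return "not enough data"
    xs = list(range(n))
    mean_x = sum(xs) / n
    mean_y = sum(data) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, data))
             / sum((x - mean_x) ** 2 for x in xs))
    if abs(slope) <= flat_slope:
        return "no trend"
    return "uptrending" if slope > 0 else "downtrending"

def variability(data):
    # Crude variability index: mean absolute deviation from the mean.
    mean_y = sum(data) / len(data)
    return sum(abs(y - mean_y) for y in data) / len(data)

# Example: a variable downtrend, like several of the study's graph sets.
path = [75, 50, 65, 35, 30]
print(classify_trend(path), variability(path))  # downtrending 15.2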


Appendix H: (Continued)

Part 1: Trend

Examples of uptrending graphs
[Figure residue: three example graphs; axes: percentage of intervals out of seat/tilting by sessions.]

Examples of downtrending graphs
[Figure residue: three example graphs.]

Examples of no trend
[Figure residue: three example graphs.]

Participant practice drawing lines of best fit in the data and determining whether to continue or discontinue the intervention:
A. Draw a line of best fit
B. Circle Continue or Discontinue
[Figure residue: six practice graphs, each followed by the prompt "Continue or Discontinue".]

Part 2: Variability

Examples of variable graphs
[Figure residue: variable uptrending, variable downtrending, and no trend variable example graphs.]

Participant practice drawing lines of best fit and making decisions to continue or discontinue the intervention based on the graph:
1. Draw a line of best fit
2. Variable or not?
3. Continue or discontinue the intervention?
[Figure residue: six practice graphs, each followed by the three prompts above.]


Appendix I: Participant Independent Practice

1. When the overall direction of a data path is increasing, you would label the data as
a. downtrending b. no trend c. uptrending

2. When the overall direction of a data path is neither systematically increasing nor decreasing, you would label the data as
a. uptrending b. downtrending c. no trend

3. Please draw a line of best fit on the graph below. The graph shown below
a. has no trend b. is downtrending c. is uptrending
[Figure residue: graph of percentage of intervals out of seat/tilting across sessions.]

4. The graph in number 3 represents Charlie’s out-of-seat behavior. His teacher has been implementing an intervention to decrease his behavior for five days. Should she continue the intervention or not?
a. continue b. discontinue


Appendix I: (Continued)

5. Please draw a line of best fit on the graph. The graph shown below
a. has no trend b. is uptrending c. is downtrending
[Figure residue: graph of behavior across sessions.]

6. Based on the graph, is this student’s talking out behavior increasing or decreasing across time?
a. increasing b. decreasing c. don’t know

7. The graph in number 5 represents Susie’s talking out behavior. Her teacher has been trying an intervention to decrease her behavior for five days. Given the data, should her teacher continue the intervention or not?
a. continue b. discontinue

8. Draw a line of best fit on the graph below. The graph shown below
a. is uptrending b. is downtrending c. has no trend
[Figure residue: graph of percentage of intervals across sessions.]

9. The graph in number 8 represents Alison’s out-of-seat behavior during class. Her teacher has been trying an intervention to decrease her behavior for five days. Based on the graph, Alison’s behavior is
a. increasing b. decreasing

10. Based on the graph, should Alison’s teacher continue the intervention or not?
a. continue b. discontinue c. don’t know

11. How many data points are usually necessary to determine whether there is a trend or not?
a. 4 b. 2 c. 3


Appendix I: (Continued)

12. Draw a line of best fit on the graph below. If the goal of an intervention is to decrease behavior and you collect data that looks something like this, you would probably determine that the intervention should be
a. continued b. discontinued c. don’t know
[Figure residue: graph of percentage of intervals out of seat across sessions.]


Appendix I: (Continued)

13. Draw a line of best fit on the graph below. If the goal of an intervention is to decrease behavior and you collect data that looks something like this, you would probably determine that the intervention should be
a. continued b. discontinued c. don’t know
[Figure residue: graph of percentage of intervals across sessions.]

14. Variability refers to
a. the overall direction taken by a data path, described in terms of direction
b. the extent to which behavior under the same environmental conditions differs from one measure to another
c. the number of observations conducted


Appendix I: (Continued)

15. Draw a line of best fit on the graph below. The data are
a. variable with no trend b. downtrending c. stable with no trend
[Figure residue: graph of percentage of intervals across sessions.]

16. Based on this graph, would you continue the intervention if it was designed to decrease out-of-seat behavior?
a. Continue b. Discontinue

17. Draw a line of best fit on the graph below. The data shown are
a. variable downtrending b. variable uptrending c. not trending
[Figure residue: graph of percentage of intervals out of seat across sessions.]

18. Based on the graph above, would you continue the intervention if it was designed to decrease out-of-seat behavior?
a. Continue b. Discontinue

19. Draw a line of best fit on the graph below. The data shown are
a. variable downtrending b. variable uptrending c. not trending
[Figure residue: graph of percentage of intervals across sessions.]


Appendix I: (Continued)

20. Based on this graph, would you continue the intervention if it was designed to decrease out-of-seat behavior?
a. Continue b. Discontinue


Appendix J: Participant Beliefs Questionnaire (pre-exp)

Participant beliefs about graphs: Pre-experimentation

Graphs are useful tools to determine whether a behavior management strategy is working.
1 2 3 4
Strongly Agree   Agree   Disagree   Strongly Disagree

I use graphs when determining whether a behavior management strategy is effective.
1 2 3 4
Frequently   Sometimes   Seldom   Never

Do you currently graph behavior change when deciding whether a behavior management strategy is working?
1 2 3 4
Frequently   Sometimes   Seldom   Never
If you answered seldom or never to this question, why not?
a. it takes too long
b. it is too difficult
c. it is not helpful

I usually try a new intervention for ______ days before I decide whether it is effective
Less than 1   1-2   3-4   5-6   7 or more

What other methods do you use to determine whether a behavior management strategy is working?
a. I get someone else’s opinion (principal, another teacher, behavior specialist, etc.)
b. I just know.
c. Other (please explain) ________________________________________________

If you chose b. as your answer, how do you know?
a. The behavior stops
b. I didn’t have to correct the behavior as much
c. Other (please explain) ________________________________________________


Appendix K: Participant Beliefs Questionnaire (post-exp)

Participant beliefs about graphs: Post-experimentation

1. Graphs are useful tools to determine whether a behavior management strategy is working.
1 2 3 4
Strongly Agree   Agree   Disagree   Strongly Disagree

2. Graphs are useful tools when making decisions about whether behavior is improving across time.
1 2 3 4
Strongly Agree   Agree   Disagree   Strongly Disagree

3. Graphs are useful tools when making decisions about whether to continue or discontinue interventions.
1 2 3 4
Strongly Agree   Agree   Disagree   Strongly Disagree

4. Prior to viewing the graph, it was difficult to determine whether the students’ behavior on the videos was better or worse.
1 2 3 4
Strongly Agree   Agree   Disagree   Strongly Disagree

5. Prior to viewing the graph, it was difficult to determine whether the intervention was working or not (after watching the video clips).
1 2 3 4
Strongly Agree   Agree   Disagree   Strongly Disagree

6. In this study: Viewing the graphs after watching the videos helped me make decisions about whether the behavior in the videos was better, worse, or whether there was no change in the behavior.
1 2 3 4
Strongly Agree   Agree   Disagree   Strongly Disagree

7. In this study: Viewing the graphs after watching the videos helped me make decisions about whether the intervention should be continued or discontinued.
1 2 3 4
Strongly Agree   Agree   Disagree   Strongly Disagree

8. As the study progressed, I found it easier to determine changes in student behavior from clip to clip.
1 2 3 4
Strongly Agree   Agree   Disagree   Strongly Disagree

9. After the training, it was easier to make decisions about continuing or discontinuing the intervention.
1 2 3 4
Strongly Agree   Agree   Disagree   Strongly Disagree

10. I plan to use graphs when determining whether a behavior management strategy is effective…
1 2 3 4
Frequently   Sometimes   Seldom   Never

11. In the future, I plan to graph changes in my students’ behavior when deciding whether a behavior management strategy is working…
1 2 3 4
Frequently   Sometimes   Seldom   Never
If you chose seldom or never to this question, why not?
a. it takes too long
b. it is too difficult
c. it is not helpful


Appendix K: (Continued)

I plan to try a new intervention for ______ days before I decide whether it is effective
Less than 1   1-2   3-4   5-6   7 or more


Appendix L: Participant Beliefs Questionnaire (before training)

Participant beliefs about graphs: Before training

Graphs are useful tools to determine whether a behavior management strategy is working.
1 2 3 4
Strongly Agree   Agree   Disagree   Strongly Disagree

Graphs are useful tools when making decisions about whether behavior is improving across time.
1 2 3 4
Strongly Agree   Agree   Disagree   Strongly Disagree

Graphs are useful tools when making decisions about whether to continue or discontinue interventions.
1 2 3 4
Strongly Agree   Agree   Disagree   Strongly Disagree

In this study: Viewing the graphs after watching the videos helped me make decisions about whether the behavior in the videos was better, worse, or whether there was no change in the behavior.
1 2 3 4
Strongly Agree   Agree   Disagree   Strongly Disagree

In this study: Viewing the graphs after watching the videos helped me make decisions about whether the intervention should be continued or discontinued.
1 2 3 4
Strongly Agree   Agree   Disagree   Strongly Disagree

In the future, I plan to graph changes in my students’ behavior when deciding whether a behavior management strategy is working…
1 2 3 4
Frequently   Sometimes   Seldom   Never
If you chose seldom or never to this question, why not?
a. it takes too long
b. it is too difficult
c. it is not helpful
other: __________________________________

I plan to try a new intervention for ______ days before I decide whether it is effective
Less than 1   1-2   3-4   5-6   7 or more

Comments:


Appendix M: Social Validity Questionnaire

Social Validity Questionnaire

1. Seeing graphs of student behavior as a method for making decisions about behavior change was
a. very useful b. useful c. not very useful d. not useful at all

2. Seeing graphs of student behavior as a method for making decisions about whether the intervention should be continued was
a. very useful b. useful c. not very useful d. not useful at all

3. The information provided during training was
a. very useful b. useful c. not very useful d. not useful at all

4. Overall, I felt that the training in visual inspection was
a. very useful b. useful c. not very useful d. not useful at all

5. I will be more inclined to use graphing to make decisions about the effectiveness of behavior management strategies in the future.
a. Strongly agree b. Agree c. Disagree d. Strongly disagree

6. I found the videos presented during the study to be realistic.
a. strongly agree b. agree c. disagree d. strongly disagree

7. It was sometimes difficult to see the students’ behaviors on the videos due to the video quality.
a. strongly agree b. agree c. disagree d. strongly disagree

8. Participation in this study and the information provided has contributed to my overall goals as a teacher.
a. strongly agree b. agree c. disagree d. strongly disagree

9. I enjoyed participating in this study.
a. strongly agree b. agree c. disagree d. strongly disagree

10. Please list the number of years you have been employed as a teacher: _____________

Other Comments: