USF Libraries
USF Digital Collections


Material Information

Title:
Qualitative analysis of teacher perceptions and use of the dynamic indicators of basic early literacy skills (DIBELS) within a district-wide Reading First program
Physical Description:
Book
Language:
English
Creator:
Gaunt, Brian T
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla
Publication Date:
2008

Subjects

Subjects / Keywords:
Data utilization
Education
Assessment
Data-based decisions
Evaluation
Dissertations, Academic -- Psychological & Social Foundations -- Doctoral -- USF (lcsh)
Genre:
non-fiction (marcgt)

Notes

Abstract:
ABSTRACT: The aim of the Reading First grant program was to (a) increase quality and consistency of instruction in K-3 classrooms; (b) conduct timely and valid assessments of student reading growth in order to identify students experiencing reading difficulties; and (c) provide high quality, intensive interventions to help struggling readers catch up with their peers (Torgesen, 2002). In the State of Florida, school districts must incorporate the use of an assessment tool called the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) to qualify for Reading First grant funding. Though DIBELS has been found to be a valid and reliable assessment for screening, monitoring, and evaluating student outcomes in early literacy skills, very little discussion or research has been conducted concerning teacher use and attitudes about DIBELS within a Reading First program. The present study involved a qualitative analysis of teachers' perceptions and use of the DIBELS within a Reading First context. Fourteen teachers (seven kindergarten and seven first grade teachers), Reading Coaches, non-teaching Specialists, and DIBELS experts participated in the present study. Results were aggregated for comparisons across multiple data sources. Results suggest teachers' perceptions may not be easily classified on a simple dichotomous range; rather, their reported benefits and concerns about the use of the DIBELS were found to be varied and highly situational. Results were further interpreted in the context of research literature on data utilization and analysis in schools.
Thesis:
Dissertation (Ph.D.)--University of South Florida, 2008.
Bibliography:
Includes bibliographical references.
System Details:
Mode of access: World Wide Web.
System Details:
System requirements: World Wide Web browser and PDF reader.
Statement of Responsibility:
by Brian T. Gaunt.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 334 pages.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 002000957
oclc - 319169250
usfldc doi - E14-SFE0002519
usfldc handle - e14.2519
System ID:
SFS0026836:00001




Full Text



Qualitative Analysis of Teacher Perceptions and Use of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) Within a District-Wide Reading First Program

by

Brian T. Gaunt

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy
Department of Psychological and Social Foundations
College of Education
University of South Florida

Co-Major Professor: Kelly A. Powell-Smith, Ph.D.
Co-Major Professor: Michael J. Curtis, Ph.D.
Robert M. Friedman, Ph.D.
Roger A. Boothroyd, Ph.D.

Date of Approval: April 25, 2008

Keywords: data utilization, education, assessment, data-based decisions, evaluation

Copyright 2008, Brian T. Gaunt

Acknowledgements

In acknowledgement of all those who provided me with so much support, guidance, and encouragement throughout this process, I wish to thank the following: Kelly Powell-Smith of Dynamic Measurement Group; Michael Curtis of the USF School Psychology Program; Roger Boothroyd of the Louis De La Parte Florida Mental Health Institute; Robert Friedman of the Louis De La Parte Florida Mental Health Institute; Sharon Hodges of the Louis De La Parte Florida Mental Health Institute; Nancy Deane of Pinellas County Schools; Deanna Texel of Pinellas County Schools; my colleagues Bradley Beam and Janine Sansosti; my parents Susan and Thomas Gaunt; and of course my wife Maria and my daughters Viktoria and Natalia, whose love and support were a foundation for all that I have accomplished.

Table of Contents

Table of Contents i
List of Tables vi
List of Figures vii
Abstract viii

Chapter One – Introduction 1
    Federal Response to National Reading Concerns 2
    Reading First Program 3
    Dynamic Indicators of Basic Early Literacy Skills (DIBELS) 4
    Progress Monitoring and Reporting Network 7
    Teacher Decision Making and Utilization of DIBELS Data 12
    Purpose of Present Study 13

Chapter Two – Research Review 16
    History of Evaluation Research 16
    Program Evaluation Research 17
    Process Evaluation/Formative Evaluation 22
    Rationale for Conducting a Process Evaluation 25
    Process Evaluation in Education 29
    Qualitative Research Methodology 35
    Value and Nature of Qualitative Research Approaches 35
    Purpose and Research Questions 39
    Qualitative Research Methods 41
    Researcher's Biography and Role 41
    Selection Methods 42
    Data Collection and Analysis 45
    Reliability and Validity 49
    Dynamic Indicators of Basic Early Literacy Skills (DIBELS) 51
    Reliability and Validity of DIBELS 59
    DIBELS Best Practice 68
    Teacher Training and Data Utilization 86
    Response to Intervention 88

Chapter Three – Method 90

    Purpose 90
    Research Questions 90
    Research Design 91
    Participants and Sites 92
    Description of Sites 92
    Description of Participants 92
    Sampling Methods and Rationales Used for Selecting Participants and Sites 93
    Selecting the School District 93
    Selecting School Sites 94
    Selecting Teachers, Reading Coaches, and Specialists 94
    Sample Size 95
    Teachers 95
    Experts 97
    Focus Groups 97
    Data Collection Instruments 97
    Semi-Structured Interview Guide – Teachers 97
    Case Study – Teachers and Experts 98
    Description of PMRN Reports 99
    Focus Group Guide 100
    Instrument Validation Procedures 101
    Data Collection and Analysis Procedures 102
    Site Entry 102
    Obtaining Consent to Participate 104
    General Procedures for Obtaining Consent 104
    Obtaining Consent from Reading First Supervisor 104
    Obtaining Consent from Teacher Participants 105
    Obtaining Consent from DIBELS Experts 105
    Obtaining Consent from Reading Coaches 106
    Obtaining Consent from Specialists 106
    Researcher Biography 106
    Background Experience 107
    Researcher Beliefs and Expectations 107
    Data Collection Procedures 109
    General Procedures 109
    Teacher Interviews 110
    Teacher Case Study Review 111
    Expert Opinion Interviews 113
    Focus Groups 113
    Data Management and Storage 114
    Data Analysis 115
    General Data Analysis Procedures 115
    Teacher Interview Concurrent Data Analyses 116
    Teacher Interview Formal Data Analyses 117

    Expert/Teacher Case Reviews 124
    Focus Group Analyses 126
    Member Checks and Peer Reviews 127

Chapter Four – Results 129
    General Introduction 129
    Description of Participants and Data Obtained 129
    Teachers' Perceptions and Understandings About DIBELS and PMRN 130
    Climate and Culture of Schools 130
    Support/Resources for Teachers' Use of DIBELS 136
    Knowledge of DIBELS 141
    Correspondence With Other Assessments 146
    DIBELS and Progress Monitoring 154
    Using Assessment Data 155
    Comparison of Teachers and Experts on the Use of PMRN Reports Case Study Review 161
    Expert Review of Kindergarten Case Study 162
    Kindergarten Teachers' Review of Kindergarten Case Study 164
    Expert Review of First Grade Case Study 165
    First Grade Teachers' Review of First Grade Case Study 166
    Expert Opinions on the Use of DIBELS at Reading First Schools 168
    Attitudes and Perceptions Among Persons Other Than Teachers 171
    Climate and Culture of Schools 171
    Supports/Resources Available to Support Use of DIBELS 174
    Teachers' Knowledge of DIBELS 177
    Collecting and Using Assessment Data 181
    Analysis of Hypotheses and Researcher Expectations 188
    Analysis of Unanticipated Topics 191
    Analysis of Qualitative Research Process – Researcher's Reflections 195
    General Introduction 195
    Research Proposal and Preparations for Conducting Study 195
    Accessing the Research Site and Participants 197
    Data Collection and Data Analysis 202

Chapter Five – Discussion 214
    Discussion of Primary Results 216

    Context/Culture/Climate 216
    Resources/Supports/Infrastructures 225
    Limitations 236
    Contributions of the Present Study 240
    Final Summary 241
    Create a Culture of Change and Shared Vision 241
    Create Capacity and Infrastructure to Support Use of DIBELS 243
    Provide Additional and Ongoing Training on the Use of DIBELS/Formative Assessments 245
    Provide Educators with Ongoing Supports for Data Analysis and Utilization 246

References 249

Appendix A – Figures 261
    Figure 1 – Example of Class Status Report 262
    Figure 2 – Example of Student Grade Summary Report 263
    Figure 3 – Example of Reading Progress Monitoring Student Cumulative Report 264
    Figure 4 – Instructionally Interpretable Zones of Performance 265
Appendix B – Tables 266
    Table 3 – Number of Schools Sampled Across the Three Groups of Participants 267
    Table 4 – Probability Sampling Table Developed by DePaulo (2000) 268
    Table 5 – Microsoft Excel Format Used for Organizing, Coding, and Sorting Transcript Data 269
    Table 6 – Comparison of Themes Identified Across Multiple Sources to Organize Findings for Presentation 270
    Table 7 – School District Assessment Schedule for Kindergarten and First Grade 271
Appendix C 272
    Appendix C1 – Teacher Interview Guide 273
Appendix D 275
    Appendix D1 – Case Study Questions 276
    Appendix D2 – Case Studies 277
Appendix E 283
    Appendix E1 – Focus Group Guide 284

Appendix F 285
    Appendix F1 – Field Notes Teacher Interviews Format 286
Appendix G 288
    Appendix G1 – Teacher Interview Code Draft 1 289
    Appendix G2 – Teacher Interview Codes Draft 2 290
    Appendix G3 – Teacher Interview Codes Draft 3 295
    Appendix G4 – Teacher Interview Codes Draft 4 300
Appendix H 305
    Appendix H1 – Preliminary Results for Member Checks and Peer Review 306

List of Tables

Table 1  DIBELS benchmark screening for each grade level 6
Table 2  General PMRN reports for use 10
Table 3  Number of schools sampled across the three groups of participants 272
Table 4  Probability Sampling Table for Selection of Participants 273
Table 5  Microsoft Excel format used for organizing, coding, and sorting transcript data 274
Table 6  Comparison of themes across multiple sources to organize findings for presentation 275
Table 7  School district assessment schedule for kindergarten and first grade 276

List of Figures

Figure 1  Example of a Class Status Report 267
Figure 2  Example of a Student Grade Summary Report 268
Figure 3  Example of a Reading Progress Monitoring Student Cumulative Report 269
Figure 4  Graph developed by Good et al. (2000) for use in a Benchmark Linkage Report 270

A Qualitative Analysis of Teacher Perceptions and Use of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) Within a District-Wide Reading First Program

Brian T. Gaunt

ABSTRACT

The aim of the Reading First grant program was to (a) increase quality and consistency of instruction in K-3 classrooms; (b) conduct timely and valid assessments of student reading growth in order to identify students experiencing reading difficulties; and (c) provide high quality, intensive interventions to help struggling readers catch up with their peers (Torgesen, 2002). In the State of Florida, school districts must incorporate the use of an assessment tool called the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) to qualify for Reading First grant funding. Though DIBELS has been found to be a valid and reliable assessment for screening, monitoring, and evaluating student outcomes in early literacy skills, very little discussion or research has been conducted concerning teacher use and attitudes about DIBELS within a Reading First program. The present study involved a qualitative analysis of teachers' perceptions and use of the DIBELS within a Reading First context. Fourteen teachers (seven kindergarten and seven first grade teachers), Reading Coaches, non-teaching Specialists, and DIBELS experts participated in the present study. Results were aggregated for comparisons across multiple data sources.

Results suggest teachers' perceptions may not be easily classified on a simple dichotomous range; rather, their reported benefits and concerns about the use of the DIBELS were found to be varied and highly situational. Results were further interpreted in the context of research literature on data utilization and analysis in schools.

CHAPTER ONE
INTRODUCTION

Reading is perhaps the most important academic skill for children to acquire. Indeed, it is the cornerstone upon which many other academic and cognitive skills develop for school-aged children. Given its central importance in the socialization and education of children, it is troubling to discover how many students are affected by illiteracy. Moats (1999) found that approximately 20% of elementary students nationwide have significant problems learning to read. At least 20% of elementary students do not read fluently enough to enjoy or engage in independent reading. The rate of reading failure for African-American, Hispanic, limited-English speakers and children living in poverty ranges from 60% to 70% (Moats, 1999). One-third of poor readers nationwide are from college-educated families. Twenty-five percent of adults in this country lack the basic literacy skills required in a typical job.

The statistics concerning illiteracy in the United States are so pervasive that the National Institutes of Health regarded reading development and reading difficulty as a major public health concern (Moats, 1999). However, the convergent literature has convinced many leaders in reading research to conclude that reading failure is unnecessary and can be prevented and ameliorated. It is also important to note that experts suggested reading instruction is a significant challenge for teachers (National Reading Panel, 2000). Unlike spoken-language acquisition, reading is not a natural process.

Rather, many researchers argue reading instruction must involve explicit, direct, and systematic instruction (Foorman, Francis, Fletcher, Paras, & Schatschneider, 1998; Stanovich & Stanovich, 1995), and teachers must be provided specialized training and preparation to teach reading (National Reading Panel, 2000).

Federal Response to National Reading Concerns

Though research on early intervention in literacy is several decades old, a new wave of public discussion and debate at the federal level of education and policy making has been occurring. Much of the research in early literacy intervention argues reading development begins early in a child's life and therefore requires explicit and systematic focus in the kindergarten through third grade years (e.g., Adams, 1990) for successful reading development. The perceived convergence of research (e.g., National Reading Panel, 2000) on early identification and prevention of reading difficulties may have led to some of the increased emphasis in current policies enacted which place greater attention on accountability and focus on the use of "scientifically-based" instruction in reading.

One such policy enacted by the federal government was the No Child Left Behind Act (NCLB), which was signed into law by President Bush on January 8, 2002. The act was based on the following four principles: (1) stronger accountability for results, (2) more freedom for states and communities, (3) encouraging proven educational methods, and (4) more choices for parents. At the core of this legislation was a greater emphasis on scientifically-based teaching of early literacy skills in kindergarten through third grade.

Reading First Program

The Reading First grant program was a core component of the NCLB Act. This grant program, which provided the nation's schools with approximately $900 million in 2003, was intended to promote the use of scientifically-based research to develop high-quality reading instruction for grades K-3. The three main funding categories available within a Reading First grant are professional development for teachers, purchase and implementation of assessments, and purchase of materials including software and books. The program was "…designed to select, implement, and provide professional development for teachers using scientifically-based reading programs, and to ensure accountability through ongoing valid and reliable screening, diagnostic, and classroom-based assessment" (www.ed.gov/programs/readingfirst/index.html, 2003). More specifically, the aim of the Reading First program was to (a) increase quality and consistency of instruction in K-3 classrooms; (b) conduct timely and valid assessments of student reading growth in order to identify students experiencing reading difficulties; and (c) provide high quality, intensive interventions to help struggling readers catch up with their peers (Torgesen, 2002).

On September 7, 2001, Florida Governor Jeb Bush signed legislation (Executive Order #2001-260) to create the Just Read, Florida! initiative, which was designed to coordinate several programs and grant-funded initiatives already in place, while also providing a comprehensive plan to increase literacy rates in Florida's public schools.

This legislation created a leadership "triangle" to support and monitor the implementation efforts of this legislation (www.justreadflorida.com/docs/guidance.pdf), which consisted of the Florida Department of Education (FLDOE), the Florida Center for Reading Research (FCRR), and the Florida Literacy and Reading Excellence (FlaRE) center.

In addition to providing leadership direction for school districts by developing and evaluating research-based reading curricula and instructional practices, FCRR provided technical assistance to school districts that have been awarded Reading First grants. The technical assistance provided by FCRR involved, but was not limited to, consultation about appropriate assessment tools for inclusion in a school district's Reading First plan, training on the use of progress monitoring and outcome assessments, and management/consultation for data collection and analysis of assessment data. Additionally, FCRR coordinated with the FlaRE center to identify professional development needs such as appropriate instructional materials and resources, and professional development materials and practices that are supported by scientific knowledge about reading and professional development (www.justreadflorida.com/docs/guidance.pdf).

Dynamic Indicators of Basic Early Literacy Skills (DIBELS)

A key implementation component of the Reading First program plan in Florida was the use of DIBELS (Torgesen, 2002). In general, a school district in Florida must submit a plan which describes coordination of four types of assessments to be eligible to receive grant funding. These four types of assessments are screening, diagnostic, progress monitoring, and outcome assessments. The Reading First program in Florida required each school district to commit to using DIBELS. Though school districts may supplement with other assessment data, DIBELS purported to offer a cost-effective, time-efficient approach that offered educators a reliable and valid means to (1) screen students to determine risk for reading failure, (2) progress monitor student achievement, and (3) assess outcomes in early literacy skills (Elliott, Lee & Tollefson, 2001; Kaminski & Good, 1996).

The DIBELS provide very brief, individually administered tests of critical early pre-reading and reading skills (Kaminski, Cummings, Powell-Smith, & Good, 2008). In Florida, this assessment was intended to be administered at least three times a year as a screening tool, though schools may administer DIBELS more frequently to monitor student progress more closely. More frequent progress monitoring may also be especially helpful for children who are assigned to receive more intensive reading instruction because of the importance of closely monitoring the effectiveness of interventions for students who are at risk of reading failure (Kaminski et al., 2008).

In general, DIBELS, as utilized in the State of Florida, involved five subtests used at specific grade levels. The five subtests are letter naming fluency (LNF), initial sounds fluency (ISF), phoneme segmentation fluency (PSF), nonsense word fluency (NWF), and oral reading fluency (ORF). Table 1 depicts the FCRR recommended schedule for grades K-3 in Florida. This table shows the recommended school days in which DIBELS would be utilized as a screening instrument and the subtests used for each grade level.

Table 1

DIBELS Benchmark Screening Schedule for Each Grade Level

               Days 20-30       Days 65-75       Days 110-120         Days 155-169
Kindergarten   LNF, ISF         LNF, ISF         LNF, ISF, PSF, NWF   LNF, PSF, NWF
First Grade    LNF, PSF, NWF    PSF, NWF, ORF    PSF, NWF, ORF        PSF, NWF, ORF
Second Grade   NWF, ORF         NWF, ORF         NWF, ORF             NWF, ORF
Third Grade    ORF              ORF              ORF                  ORF

The term "fluency" in each of the subtests indicates a measure of both speed and accuracy in a student's performance. With the exception of the Initial Sound Fluency subtest, all subtests in the DIBELS involve a one-minute timed measure. Letter Naming Fluency (LNF) is a measure of how quickly and accurately children can say the names of letters printed on a page. Students are shown upper- and lower-case letters that are arranged in random order, and they are asked to name as many letters as they can in one minute.

Initial Sound Fluency (ISF) is a measure of early phonemic awareness. Children are presented with four pictures at a time and asked to point to the picture which corresponds with the initial sound provided orally by the examiner (for three of the pictures) and say the initial sound of the fourth picture. This subtest measures the cumulative latency, in seconds, of student responses to task questions asked by the examiner.

Phoneme Segmentation Fluency (PSF) is a slightly more advanced measure of phonemic awareness. It tests children's ability to pronounce and segment the individual phonemes in words that have three and four phonemes (e.g., cat or rest). Children are orally provided with a word and asked to say all the sounds they hear in the word.

Nonsense Word Fluency (NWF) measures children's knowledge and skill in applying the alphabetic principle. Children can earn credit either by giving the individual sounds represented by the letters in simple non-words (e.g., /l/-/u/-/t/) or by blending the sounds together and pronouncing the non-word as a whole word (e.g., lut).

Oral Reading Fluency (ORF) (also called CBM-Reading; Good, Simmons, & Kame'enui, 2001) is a measure of children's ability to read grade-level text aloud fluently and accurately. Children receive a score based on the number of words in a grade-level passage they can read accurately in one minute.
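Because the benchmark schedule pairs each grade with a fixed set of subtests at each assessment window, the schedule can be represented as a simple lookup structure. The following sketch is illustrative only: the dictionary mirrors Table 1 as reconstructed above, and the function and variable names are hypothetical rather than part of DIBELS or the PMRN.

    # Hypothetical sketch: the Table 1 benchmark screening schedule as a lookup structure.
    # Subtest codes: LNF, ISF, PSF, NWF, ORF (described above).

    # Assessment windows expressed as inclusive ranges of school days (from Table 1).
    WINDOWS = [(20, 30), (65, 75), (110, 120), (155, 169)]

    # Subtests administered per grade in each of the four windows, as read from Table 1.
    BENCHMARK_SCHEDULE = {
        "K": [["LNF", "ISF"],
              ["LNF", "ISF"],
              ["LNF", "ISF", "PSF", "NWF"],
              ["LNF", "PSF", "NWF"]],
        "1": [["LNF", "PSF", "NWF"],
              ["PSF", "NWF", "ORF"],
              ["PSF", "NWF", "ORF"],
              ["PSF", "NWF", "ORF"]],
        "2": [["NWF", "ORF"]] * 4,
        "3": [["ORF"]] * 4,
    }

    def subtests_for(grade, school_day):
        """Return the subtests scheduled for a grade when school_day falls in a benchmark window."""
        for window_index, (start, end) in enumerate(WINDOWS):
            if start <= school_day <= end:
                return BENCHMARK_SCHEDULE[grade][window_index]
        return []  # outside a benchmark window; no screening is scheduled

    # Example: a kindergarten class on school day 115 (third assessment window).
    print(subtests_for("K", 115))   # ['LNF', 'ISF', 'PSF', 'NWF']
    print(subtests_for("3", 25))    # ['ORF']

As noted above, each of the listed measures other than ISF is a one-minute timed sample.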

Progress Monitoring and Reporting Network

In Florida, schools utilized DIBELS at least three times during the school year in order to identify and monitor the students' progress in reading development. The DIBELS data were submitted into a web-based data management program known as the Progress Monitoring and Reporting Network (PMRN), which was developed and is managed by FCRR. This reporting system was designed to provide school districts with timely and reliable information about student performance on screening, progress monitoring, and outcome assessments in order for teachers to effectively plan classroom instruction and to assist schools in the evaluation of their core curricula and instructional practices. The PMRN system, combined with the assistance of reading coaches at each school—an intended component of the Reading First model in Florida—offered teachers a potentially valuable approach to immediately identifying students who are struggling, identifying areas of instructional need when designing reading interventions, and evaluating classroom instructional and curriculum variables.

All K-3 student data in all Reading First schools were required to be included in this system of data collection, though fourth and fifth grade data may also be entered and utilized using the PMRN. Specifically, the purpose of the PMRN system, as indicated on the Florida Center for Reading Research website, was to "efficiently and accurately accomplish three tasks: (1) allow the data from required tests to be entered quickly and easily; (2) store the data in a safe and secure location; and (3) provide timely and helpful reports to educators" (www.fcrr.org, 2004). And, according to the Reading Coach's Guide (August, 2003) provided by FCRR, the Reading First program in Florida requires that information from assessments guide reading instruction and this information should inform teachers about the following:

1. Risk level of students
2. Specific weaknesses in reading skills that one or more students exhibit
3. How students should be grouped
4. The intensity of instruction required for certain students through small groupings or more time on skill-building activities
5. Which skills should be emphasized for particular students
6. How much change is occurring in student skills over time following interventions
7. The professional development needs of the teacher in the area of reading instruction

There are four basic steps to using the PMRN system: (1) assessment teams (made up of professionals at the school and trained by FCRR regional coaches) collect the DIBELS data, (2) designated data entry personnel input the student data into the PMRN system, (3) the PMRN system quickly aggregates the data and allows reports to be generated, and (4) teachers and other educators access the PMRN reports through the web site, using a password and username unique to the teacher. Because the website was password protected, teachers had access to aggregate data for their classrooms as well as to individual student reports for students in their classroom. Similarly, administrators generally had access to aggregated data for their school, aggregated data for each classroom, and aggregated data by grade level. District leaders generally had access to aggregated data for the district and aggregated data for each school and grade level in the district.

A variety of reports were available to teachers using the PMRN system. In general, a student's data would have been displayed in any of three levels of reports: individual, class, and school. Table 2 lists the general types of information that can be found in PMRN reports. Throughout the reports, a color code was used to provide a visual representation of students' instructional needs and risk level. Red indicated immediate intensive intervention was needed and the student was at high risk of not achieving grade-level reading skills. Yellow indicated additional instruction was needed to improve targeted skill areas and the student was identified as having a moderate-risk status. Green represented that the student was likely to achieve grade-level reading skills with use of the core curricula and instruction and was subsequently identified as having a low-risk status. Blue represented above-average performance on a particular measure or identified a student as having above grade-level reading skills.

Table 2

General PMRN Reports for Use

Report                Description
Progress Reports:     Show gains students are making in a specific subtest area and how the performance has improved since the last assessment.
Summary Reports:      Show the percentage of students who may need extra instruction compared to those who are making adequate progress.
Historical Reports:   Compare the progress of the current year's class to that of previous years' classes.
Comparison Reports:   Compare the progress of a class or school to others in Florida serving similar children. These are only available to schools at which all students are being progress monitored.
Cumulative Reports:   Show the results of all reading assessments recorded in the PMRN across all years and Year-End Outcome Test scores, including scores from other schools.

Appendix A contains the three types of specific reports that were used in the present study. The Class Status Report (Figure A1) was a common progress report used by teachers which showed how well students in the class performed on a specific assessment cycle using DIBELS. The list of students can be organized in alphabetical order, by level of instruction recommended (e.g., intensive, strategic, or initial), or by subtest score. The recommended instructional level was a feature of the PMRN that was generated for the teacher to direct their use of DIBELS data for developing broad instructional interventions. The colors used by the PMRN system were intended to convey the level of instructional support necessary for helping a student achieve end-of-year outcomes. Red indicated that intensive remediation was necessary (i.e., High Risk; HR), yellow indicated that moderate remediation supports were needed (i.e., Moderate Risk; MR), green indicated that the general classroom curriculum was sufficient for continued progress (i.e., Low Risk; LR – core curriculum considered sufficient), and blue indicated that a student was performing above average (i.e., Above Average; AA).
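To make the color and label scheme concrete, the following sketch encodes the four PMRN risk categories as a small lookup table. It is illustrative only: the labels (HR, MR, LR, AA) and colors restate the report conventions described above, while the data structure and function names are hypothetical and not part of the PMRN.

    # Hypothetical sketch of the PMRN color/risk coding described above.
    RISK_CODES = {
        "HR": ("red",    "High Risk: immediate intensive intervention needed"),
        "MR": ("yellow", "Moderate Risk: additional instruction needed in targeted skill areas"),
        "LR": ("green",  "Low Risk: core curriculum and instruction considered sufficient"),
        "AA": ("blue",   "Above Average: performing above grade-level expectations"),
    }

    def describe_status(label):
        """Return a readable description of a PMRN risk label such as 'HR'."""
        color, meaning = RISK_CODES[label]
        return label + " (" + color + "): " + meaning

    # Example: interpreting the flag shown next to a student on a Class Status Report.
    print(describe_status("MR"))
    # MR (yellow): Moderate Risk: additional instruction needed in targeted skill areas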

A commonly used summary report was the Student Grade Summary Report (Figure A2), which showed the progress of an individual student for a single assessment cycle for all DIBELS measures used in that assessment cycle. This report allowed the classroom teacher to analyze a specific student's status in comparison to the normative distribution of the classroom for the same assessment cycle. By using a box plot format, the upper and lower limits of the box indicated the 75th and 25th percentiles, respectively, while the middle horizontal line within the box indicated the median population score. Individual data points above or below the whiskers indicated outliers whose scores fell within the upper or lower 5th percentiles. The individual student's score that was being analyzed was represented by a colored flag, with the corresponding letters indicating that student's overall status (e.g., HR = High Risk).

The Reading Progress Monitoring Student Cumulative Report (Figure A3) allowed a teacher to review a student's progress across all cycles given throughout the year, in addition to having student scores for end-of-year outcomes on other assessments such as the Peabody Picture Vocabulary Test (PPVT), the Stanford-10 Achievement Test, and the Florida Comprehensive Assessment Test (FCAT). Though a color-coding system was not utilized in this report, the risk indicator labels (e.g., HR) were available. One unique feature of this report was the use of a Recommended Instructional Level (RIL) rating which depicted a student's overall instructional level for a given assessment cycle. Analysis of this indicator across all assessment cycles allowed a teacher to identify the overall trend in a student's performance throughout the school year.

Teacher Decision-Making and Utilization of DIBELS Data

The issue of decision making and use of assessment information by teachers is a critical issue that has been addressed in research concerning the use of Curriculum-Based Measurement-Reading (Wesson, Skiba, Sevcik, King, Tindal, Mirkin, & Deno, 1983), which is closely related to the DIBELS measures (Good et al., 2001). Though the technical adequacy of DIBELS has been researched and the measures found to have acceptable reliability and validity for measuring students' early literacy skills (e.g., Good, Kaminski, Simmons, & Kame'enui, 2001), little is known about how teachers are utilizing the assessment data to guide instructional decision-making within a Reading First context. Knowing this information is important because the degree to which a classroom teacher uses an assessment tool as it was designed to be used and interpreted can have a significant impact on outcomes (Wesson et al., 1983).

For example, Wesson et al. (1983) trained three groups of teachers in the use of Curriculum-Based Measurement-Reading, each at a different level of technical adequacy (high, medium, and low). The study examined the effects of the level of training as an independent variable on students' reading achievement as measured by student performance on three reading passages using CBM-Reading measures. Their results showed students of those teachers who more accurately and consistently applied CBM made greater achievement gains. The key variable that was observed by the researchers was the degree to which teachers made instructional decisions based upon technically adequate data and consistent rules regarding how to use these data.

Other studies have found much diversity among teachers in regard to mastery of CBM measurement and data analysis techniques (King, Deno, Mirkin, & Wesson, 1983; Skiba, Wesson, & Deno, 1982), suggesting some teachers either need extensive training or ongoing support in order to use the measurement data in a valid and meaningful way. Although reading coaches—individuals assigned to a single school for providing ongoing consultation and training for teachers—are a core component of the Reading First program in Florida, little was known about how these coaches impacted teacher use of DIBELS data, or how teachers perceived the value of the reading coaches in their school.

Purpose of Present Study

DIBELS offer a preventive approach to handling reading failure and are designed to be used within an Outcomes Driven Model (Kaminski et al., 2008) for reducing potential reading difficulties while supporting all children to achieve adequate reading outcomes by the end of the third grade (Good et al., 2001). Researchers have shown that when used appropriately DIBELS can make a positive difference in a classroom where struggling readers may be found (Smith, Baker, & Oudeans, 2001).

When DIBELS are used in conjunction with teacher knowledge of reading instruction, as well as specific feedback about implementation during ongoing professional development, significant improvements in student reading achievement may be obtained (Smith et al., 2001). As Baker, Smith, Kame'enui, McDonnell, and Gallop (1999) stated, it is the "way the curriculum was implemented (and not the specific curriculum they used) [that] made the difference between successful and problematic learning for many [students]." DIBELS is a set of indicators that may be used to assess students' early literacy skills regardless of the curriculum program used. Thus, teachers are a fundamental and necessary component for successful use of DIBELS toward the goal of all children reading at grade level.

The primary purpose of the present investigation was to conduct a qualitative analysis of teachers' perceptions and use of DIBELS within a Reading First program. Several factors made such an investigation necessary. First, the research literature indicated that a teacher's appropriate use and understanding of assessment data was important for supporting and improving student reading outcomes. To date, no studies were found by this examiner which evaluated teachers' perceptions and understanding about using DIBELS within a Reading First context.

Second, evaluating process variables such as program participant beliefs, knowledge, and perceptions of a given program can supplement and assist in the evaluation of a program's outcomes. Program evaluation studies typically involve, at least, an evaluation of the program's outcomes in terms of value and effectiveness (Patton, 1990).

However, several unanswered questions can still remain after an evaluation of a program's outcomes. Though a complete process evaluation of the entire Reading First program was beyond the scope of the present study, an evaluation of a central component of the Reading First program was warranted (i.e., teachers' perceptions and use of DIBELS).

Third, though FCRR can monitor the frequency of teacher use of the PMRN system (i.e., frequency of log-in attempts), there did not appear to be any formal collection of information regarding teacher perceptions and the value ascribed to using the PMRN. For example, what reports did teachers find most useful? Do teachers need further support or training regarding their use and understanding of PMRN reports? Finally, a qualitative approach to investigating teacher perceptions and use of DIBELS was best suited for answering process-related questions about how specific events or activities occur and the meanings individuals place on those events or activities (Patton, 1990).

Given these issues, the following research questions were proposed for investigation:

1. What are teachers' perceptions and understandings about DIBELS and the PMRN?
2. How do teachers' understandings and use of DIBELS data, as presented in the PMRN reports, compare to Reading First experts who are provided with the same information?
3. What attitudes and perceptions exist among persons other than teachers who participate in the collection, input, and analysis of DIBELS data throughout the school year?

CHAPTER TWO – Research Review

The following section is a review of relevant research and discussion articles covering the essential elements and procedures for utilizing a qualitative approach to process evaluation for the improvement of the Reading First program. It is important to note at the outset that published material on process evaluation involving prevention programs was limited. After a summary of process evaluation approaches, qualitative analysis procedures are reviewed. Next, a review of the literature concerning the use of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) is provided, followed by a review of research investigating teacher use of assessment data, so that the reader has a context for applying a qualitative approach to process evaluation in the present study.

History of Evaluation Research

Rossi and Freeman (1993) provided a sufficient review of the historical events leading to the modern-day field of evaluation research. Evaluation research emerged from the general acceptance of the scientific method around the 17th century. Though its beginnings can be traced to this time period, evaluation research was considered a relatively modern development. The university setting provided for the earliest work on program evaluations and eventually led to an increased popularity in the social sciences. The fields of education and public health were among the first to use systematic evaluation efforts. Topics such as literacy rates, morbidity rates, and mortality rates in society were researched in earlier years. A sizable increase in the use of program


18 effectiveness, and (5) satisfy accountability requi rements and sponsors. Patton (1990) defined evaluation more broadly as any effort to in crease human effectiveness through systematic data-based inquiry. He generally concep tualized evaluation efforts as actions that inform stakeholders and enhance decision-makin g to solve human and societal problems. Whereas researchers such as Cook and Campbell (1979 ) argued for a purely scientific approach to evaluation research, others advocated for a more pragmatic approach similar to that offered by Cronbach (1982) (Patton, 1990; Rossi & Freeman, 1993). Evaluation research is different from scien tific studies in purpose and intent (Cronbach, 1982; Patton, 1990; 2002; Rossi & Freema n, 1993). The political nature and context in which many evaluation studies evolve fro m and operate is one distinctive feature of evaluation research. Evaluation efforts should be idiosyncratic to meet the needs of program sponsors and stakeholders. Scient ific studies strive to meet standards set by the investigators’ peers and the overall sta tistical and theoretical rigor of standards imposed by research journals. Evaluations should be designed and implemented in ways that recognize the policy and program interests of the sponsors and stakeholders. Evaluations should yield useful information for dec ision makers given the political circumstances, program constraints, and available r esources. “Resource limitations and more realistic expectations for social programs onl y increase the need for evaluation efforts as societies attempt to cope with their hum an and social problems” (Rossi & Freeman, 1993; p. 7).


19 The intent of providing evaluation information tha t is useful for stakeholders and policymakers is consistent with Chen and Rossi’s (1 983) observation of the difference between controlled scientific and socially motivate d evaluations. Their observations suggested controlled experiments were an attractive and seductive approach to evaluating a program in that such studies allow for an estimat ion of net effects through randomized experiments provided that the goals and objectives of a program were stated in objective and measurable terms. However, defining a program’ s goals and objectives in objective and measurable terms is not always a salient option That is why Chen and Rossi placed great importance on going beyond identifying the go als of a program to also identify the goals articulated by policy makers and other stakeh olders who have legitimate interests in some aspects of the program. This distinction was important because outcome variables identified through identification of program goals tended to be narrower than the connotative intentions of program designers and/or policymakers that may not have been included in the original program design. Legislation in the Unites States deliberately foste rs diversity in the process of implementation, and therefore local conditions may require extensive adaptations in the program (Chen & Rossi, 1983). Where such adaptatio ns are made, it is important to evaluate the intended and stated goals of the polic y makers in comparison to the intended and unintended goals of those local personnel who w ould implement such a program. That is why Rossi and Freeman (1993) stated, “…eval uators must be responsive to the context in which they are working (p. 27).” The ai m of an evaluator should be the


20 provision of the most reliable findings possible gi ven the political and ethical constraints, and limitations imposed by time, money, and human r esources. With regards to conducting evaluation research, par ticularly where programs are being evaluated to determine level of success achie ved, Chen and Rossi (1983) suggested categorizing program goals in one of three categori es: (1) Policy-directed/Plausible Goals, (2) Policy-directed/Implausible Goals, and ( 3) Theory-driven/Not Specified by Policy Goals (Chen & Rossi, 1983). The overall pur pose of making such distinctions is to not only confirm that intended effects are occur ring, but also to identify effects that were not intended by their designers. Such uninten ded effects may be either desirable or undesirable and may offset those intended. An eval uator should take into account inferred effects as well as those directly intended The first category identifies those effects the program designers intend to occur when the program is implemented as designed; where the goals and specifications for pr ogram implementation and operation are clearly defined and specified. The second cate gory identifies program goals or effects that are intended by program designers, but likely implausible due to low specificity of implementation requirements and program operation. The third category refers to effects not specified through policy or program designers, but is nonetheless plausible where local adaptations are likely to occur. A combined qualitative and quantitative approach to conducting evaluation studies may be utilized, and in some cases is best practice (Rossi & Freeman, 1993). These researchers argue the use of multiple sources increases the range of data collected and promotes data and methodological triangulation while increasing the validity of


21 conclusions. Specifically, the authors suggest usi ng direct observations by the evaluator, written records, data from delivery agents of the i ntervention and data from intervention trial participants. Supportive of Chen and Rossi’s (1983) emphasis on the evaluation of unspecified-plausible goals at the local implementa tion level, Rossi and Freeman suggest a mixed methodological approach to evaluation combi ned with data collection from multiple sources provides a means for detecting suc h goals and unintended program features. Having considered the purpose of evaluations and t he role of the evaluator working within a social and political context, eval uators must also take important steps to ensuring that the results of their evaluation are u seful for policy makers of the program being evaluated and stakeholders who may have diffe rent questions about and expectations for a given program (Chen & Rossi, 198 3; Rossi & Freeman, 1993). Thus, merely providing information does not ensure its us e. Dissemination and utility of information may be conceptualized as more of an art form. It is important to understand the organizational arrangements under which a socia l program is operating. Evaluators should consider developing a strategic plan for pro viding feedback to policymakers and stakeholders if the information is to be found valu able for use. In the development of such plans, it may be necessary to establish a diss emination plan early in the evaluation process and revise the plan when new information yi elds specific requirements for ensuring stakeholders and policymakers use the eval uation information obtained. As such, evaluation may be viewed as an evolving proce ss rather than a static series of steps to follow.


22 Process Evaluation/Formative Evaluation Some researchers suggest that process evaluations h ave traditionally not been given the attention they fully deserve in evaluatio n research (Chen & Rossi, 1983). However, this trend appears to be changing as more and more process evaluations are being conducted in a wide variety of social and pol itical areas. Researchers typically characterize process evaluation as involving the mo nitoring of implementation and accountability practices of a program (Rossi & Free man, 1993), examining the relationships among program components to identify critical intervention elements for future practitioners (Chen & Rossi, 1983), adherenc e to program implementation designs and specifications (Elliott & Mihalic, 2004), ident ifying a reasonable balance between program implementation fidelity and local adaptatio n of program implementations (CSAP, 2001), and identifying means for improving a program (Rossi & Freeman, 1993). Some have conceptualized process evaluation as a me ans to collect feedback about program implementation, participant responses to th e program, changes to the site in response to implementation, and information about p ersonnel competency (Israel, Cummings, & Dignan, 1995). Helitzer, Yoon, Wallers tein, Dow y Garcia-Velarde, (2000), argued process evaluation can provide criti cal documentation necessary for sustaining and replicating successful community-bas ed trials. Process evaluation may be used as a key to understanding the internal dynamic s of an intervention trial and to monitor the quality of a program. Rossi and Freeman (1993) suggest three questions wh en undertaking a process evaluation. The first question concerns the extent to which a program is reaching the


23 appropriate target population. The second question concerns the delivery of services in terms of consistency with program design specificat ions. The third question asks what resources are being or have been expended in the co nduct of the program. The second of Rossi and Freeman’s questions is synonymous with th e concept of fidelity. It has been argued that a valid evaluation study rests upon the consistency to which a program is implemented as designed and specified for use (Elli ott & Mihalic, 2004). However, according to Chen and Rossi (1983) implementation r esearch, or process evaluations, have focused too much at worrying about fidelity an d not enough about understanding the process of implementation. Fidelity is an important concept that often arises when the topic of process evaluation is discussed. A debate exists in the re search literature about fidelity and local adaptation (Blakely, Mayer, Gottschalk, Schmitt, Da vidson, Roitman, & Emshoff, 1987; Castro, Barrera, & Martinez, 2004; Elliott & Mihali c, 2004; Kovaleski, Gickling, Morrow, & Swank, 1999). According to Elliott and M ihalic (2004), there has been a long standing debate between the need for implement ing programs as they were designed and the need to adjust program aspects to fit the l ocal conditions. Some researchers have argued evaluations should seek a balance between fi delity and local adaptation of a program’s implementation (CSAP, 2001), while others have questioned the assumption that local adaptation is inevitable (Elliott & Miha lic, 2004). The issue of fidelity as it applies to evaluating s taff persons who participate in the implementation and use of a program typically invol ves a combination of three factors: setting conditions, adherence, and competence (Fixs en, Blas, Naoom, & Friedman,


24 2005). With respect to the present study, these fa ctors are relevant for consideration. According to Fixsen et al., setting involves struct ural aspects of a program that must be in place for program success. Adherence is the extent to which staff persons use the program in a manner intended by the program designe rs. Competence refers to the skills one has in utilizing components of the program as d esigned and intended for use. Elliott and Mihalic (2004) suggest an appropriate a mount of support and encouragement from directors, or administrators, is essential in motivating staff to adhere to program features as designed. The authors argue that because the research literature has demonstrated the importance of fidelity and its significant relationship to effectiveness in outcomes any bargaining away from fidelity will lead to a decrease in program effectiveness. Elliott and Mihalic indicat ed their observations suggest local adaptation that occurred incidentally in their impl ementation efforts did so at the direct staff or teacher level and not at the administrator level. These adaptations were typically made without input from designers or implementation consultants. They suggest that where such adaptations are occurring it is importan t to have a process for identifying them and determining the degree to which they may a ffect outcomes. Given the relevant research available about process evaluations, it is important to recognize that just as it is important to identify the program’s goals when evaluating outcomes of a program, it is important to identify the purpose of an evaluation in order to determine the relevant dependent variables to be me asured during a process evaluation. If stakeholders and policymakers explicitly do not intend or desire local adaptations to occur, then utilizing a process evaluation for meas uring fidelity of program

PAGE 36

25 implementation and use would be best practice. How ever, if adaptation and flexibility are built into the design of a program, then the is sue of fidelity may not be as important as identifying the factors that lead to successful outcomes for the program as a whole, as well as factors that improve staff adherence to cer tain components of the program and competence in using the program. Rationale for Conducting a Process Evaluation Regardless of what the intent is of conducting a pr ocess evaluation, there are several potential benefits in conducting a process evaluation. Evaluation of a program’s outcomes alone often provides narrow and sometimes distorted understandings of the program (Chen & Rossi, 1983). Chen and Rossi provi ded a theory-driven approach to conducting evaluations by conceptualizing evaluatio n research as the testing of hypotheses for why a particular program or service is failing or succeeding. The authors argued that a program’s failure might be due to poo r conceptual foundations, low dosage levels of intervention, or because of poor implemen tation (e.g., fidelity). When outcome evaluations reveal little or no impact it is still possible to estimate if a program’s treatments are efficacious by examin ing process evaluation data (Rossi & Freeman, 1993). Process evaluations can provide in formation about efficiency of operations. Process evaluations also provide infor mation about how accurately and effectively the program was implemented as it was d esigned. Another potential benefit of process evaluation is its capacity for documenti ng operational effectiveness before a program has matured for outcome evaluation.

PAGE 37

26 Process evaluation results can lead to fine-tuning and redesigning a program in the interests of improving a program (Patton, 1990) Bias can result in program implementations from self-selection by targets or f rom program actions and can compromise not only the success of the intervention but also assessments of outcomes (Rossi & Freeman, 1993). Process evaluation data c an provide a means for identifying areas of bias so that such bias may be treated effe ctively and objectively when conducting an outcomes evaluation. Bias according to Rossi an d Freeman refers to the extent that some target population participants are being serve d, or covered, more densely than others. Three basic sources of information are used in measuring program coverage: program records, surveys of program participants, a nd community surveys (Rossi & Freeman, 1993). This information provides a means for identifying areas for improving the program as a whole or components of the program where problems with service delivery and bias in participation occur. Process evaluation is synonymous with program moni toring (Patton, 1990; 2002; Rossi & Freeman, 1993). Four methods should be con sidered in the design of a process evaluation according to Rossi and Freeman: (1) dire ct observations, (2) review of service records, (3) interview of service providers, and (4 ) interview of program participants. Where qualitative information is obtained from staf f about features of the program, information can allow for not only identifying staf f professional development needs, but also creative approaches to implementing or utilizi ng program features. Other investigations may also be conducted.

PAGE 38

27 Rossi and Freeman identify three types of investiga tions that may be conducted using a process evaluation. The first is a descrip tion of the program. A useful description of a program estimates a coverage and b ias in participation. It may identify the types of services delivered and classify the in tensity of services given to participants and the reactions of participants to the services p rovided. Both quantitative and qualitative data may be used for a description of t he program. Descriptions of the program may provide program designers a unique unde rstanding of variables which were either not intended or were not anticipated with re spect to implementation of and participation in the program. The second general type of investigation using proc ess evaluation results is a comparison between sites where a particular program is implemented. This permits an understanding of sources of variability in implemen tation and outcomes. Sources for obtaining information may include staff, administra tors, target populations, and surrounding environment (e.g., competing programs f or existing services). Such information may help standardize implementation eff orts or offer clues why the program works at some sites and not at others. The third type of investigation is the level of con formity of the program to its design. This type of investigation is most consist ent with measuring the fidelity of a program’s implementation and use. Discrepancies ma y be dealt with by moving the implementation towards the intended program specifi cations or providing a respecification of the design itself where adaptation s are required for implementation at the local level. Such an investigation provides an opp ortunity to judge the appropriateness of

PAGE 39

28 outcome evaluations and to determine whether or not a more formative evaluation is required to develop the desired balance between des ign specification and local implementation requirements. This type of investigation is consistent with McGr aw, et al. (1996) who suggested that when there is a lack of information about what actually occurred during the implementation of an intervention, investigators ma y not be adequately interpreting study results of an outcome evaluation. Data on the exte nt or dose of the intervention delivered, the fidelity with which the intervention was carried out, and the presence of competing programs, events, or other confounding in fluences can be useful for the interpretation of study results (McGraw, Sellers, S tone, Bebchuk, Edmundson, Johnson, Bachman, & Luepker, 1996). According to these autho rs, and consistent with Rossi and Freeman (1993), process data are useful for describ ing the program implementation, providing information for quality control and monit oring, and help explain program effects, or outcomes. Finally, Patton (1990) depicted process evaluation s as “elucidating and understanding the internal dynamics of how a progra m, organization, or relationship operates (p. 95).” He provided several types of qu estions that are consistent with this approach: 1. What are the things people experience that make the program what it is? 2. What are the strengths and weaknesses of the progra m? 3. How are clients brought into the program and how do they move through the program once they are participants?

PAGE 40

29 4. What is the nature of the staff-client interactions ? One potential benefit of conducting process evaluat ions of a program is that it provides outside persons not intimately involved in the dayto-day operations of a program (e.g., funding agencies, public officials, supervising age ncies, etc.) to understand how a program operates. Patton uses the terms formative evaluations and process evaluations synonymously. Both are aimed at program improvemen t and often rely heavily, if not primarily, on process data. In general, process ev aluations focus on how something happens in a program rather than on the outcomes or results of a program. Process Evaluations in Education Though some process evaluation research has been co nducted in education settings or related fields, process evaluation stud ies conducted on the Reading First program could not be found. However, some research and discussion was available which offered insight on the factors which may impa ct a process evaluation in an educational setting. For example, Fitzgerald and C lark (1976) conducted a process evaluation of inservice training for a school readi ng program. The overall purpose of the study was to evaluate the effectiveness of the inse rvice training using teacher self-ratings, director ratings, and monitor ratings as well as an y correlations between these ratings. In general, the study found significant discrepancies between the three sources of information. Nonetheless, these authors argued tha t product and process evaluation are essential to provide information on which to judge the effectiveness of inservice training. The immediate goal of inservice training is a proce ss change in which the teacher demonstrates both cognitive and behavioral developm ent as a result of inservice training.

PAGE 41

30 Studies investigating process variables have been c riticized for not including judgments about changes in teacher performance in the classro om as a result of inservice training. Researchers argue that more information is provided when attitudes about program and self-ratings are used simultaneously with descripti ons of organizational process variables (Fitzgerald & Clark, 1976). Chen and Rossi (1983) contend that teachers in an e ducation organization are notoriously difficult to control and influence beca use the instructional approaches used by teachers are not directly interdependent on the instructional approaches used by other teachers. It may be common to find variability in the level of fidelity in implementing an educational program; and may also influence teacher variability in the level of participation in a program. It would be important to consider evaluation questions in terms of relative differences between sites; for ex ample, in relation to implementation of program components and participation levels among t eachers. Chen and Rossi suggest the implementation of large-scale educational progr ams relies heavily on the adherence of teacher’s behavior to the program’s intended des ign where teachers are depended upon for delivering any component of the program. Becau se individual schools may be viewed in many respects as independent entities apart from the operations of other schools, and contain personnel who do not have a direct dependen ce between themselves in the individual school, there is high probability that i mplementation of large scale education programs will demonstrate modest to low levels of f idelity where specific design features are intended for implementation.

PAGE 42

31 Harachi, Abbott, Catalano, Haggerty, and Fleming (1 999) discussed the use of process evaluation studies in the field of educatio n. The 1990s witnessed a greater emphasis on process evaluation of prevention progra ms in education which attempted to elaborate on mechanisms through which outcomes oper ate. Examination of program implementation is particularly crucial in the desig n of efficacy studies of school-based preventative programs according to the authors. Ha rachi et al, state the extent to which teachers deliver a particular program component as designed is a critical question that needs to be addressed when evaluating outcomes. Th ey argue evaluators must assess program fidelity as well as program outcomes in ord er to best differentiate the failure or success of implementation as a context for examinin g the failure or success of a program. Understanding teacher participation levels and perc eptions about a program would be an important set of variables when a process evaluatio n is conducted in education; even if only a partial evaluation of process variables is u ndertaken. Considerable variation in implementation of even si milar programs is likely to yield differential outcomes. Though process evalua tion studies applied to Reading First programs is lacking, process evaluation studies hav e been conducted in other educationally-based areas such as staff training in health education (Helitzer, et al., 2000), substance abuse treatment (Battistich, Schap s, Watson, & Solomon, 1996; Elliott & Mihalic, 2004; Pentz, Trebow, Hansen, MacKinnon, Dwyer, Johnson, Flay, Daniels, & Cormack, 1990), community mental health programs (D urlak & Wells, 1997), drug prevention programs (Goodstadt, 1988; Schaps, Mosko witz, Malvin, & Schaeffer, 1986), and nutrition, (Helitzer, Davis, Gittelsohn, Gohn, Murray, Snyder, & Stecker 1999).

PAGE 43

32 Some researchers have argued that although much inf ormation is available for how to conduct a methodologically sound evaluation study, few such studies may be found given the political and financial environments in which a program is often evaluated within (Goodstadt, 1988). Goodstadt, in a review of evalu ation research regarding school-based drug education programs in North America, hypothesi zed variability in evaluation research design and implementation exists due to re servations on the part of administrators and program developers. Reservation s may often be due to the commitment of financial resources needed for planni ng and executing such evaluations. Evaluation research may often cause delays in devel oping a program while it is being tested in the formative stages of the evaluation. Administrators often assume that implementing evaluation studies requires a speciali zed evaluator when in fact, according to Goodstadt, specialized evaluators are often only needed for research design and data analysis. Given these observations, considerable v ariation is to be found among evaluation research studies regarding methodology a nd implementation of design features. Durlak and Wells (1997) provided a meta-analytic re view of primary prevention mental-health programs. In their review they obser ved that few studies provided any relevant data regarding how program implementation influenced outcomes. As such, it is apparent that future process evaluation studies wil l be necessary in order to effectively understand outcome effects, as well as make compari sons between studies. Schaps, et al. (1986) described a series of seven e valuation studies of schoolbased drug prevention strategies. The seven strate gies consisted of 4 in-service teacher

PAGE 44

33 oriented strategies that focused on classroom manag ement, student attitudes towards school, student self-esteem, and student developmen t of social competencies. Two strategies consisted of academic elective courses f or students to enroll which taught skills and provided opportunities for helping peers. None of the six strategies above directly addressed drug use. The final strategy was a drug education course that taught students social skills and provided drug facts and informati on. The study evaluated the implementation and effects of the strategies listed above. Process data were gathered to monitor implementation. Pre-test and post-test inf ormation was used to evaluate outcomes. Process evaluation methods consisted of teacher attendance at each session of training, anonymous teacher ratings of sessions, ob served participation in sessions, and documentation of agenda, content, and procedures us ed in each session. The results of this study found a lack of effect for all strategie s, with the exception of partial support for the efficacy of the drug education program. These researchers argued the lack for support for the programs was due to an ineffective theory of drug prevention as opposed to limitations in the design and implementation of their evaluation research. Battistich et al., (1996) found higher degree of im plementation integrity led to greater outcome results in a study examining teache r’s implementation of the Child Development Project; a program designed to reduce r isk factors and increase protective factors among children. They also found decreases in substance abuse among students in the high and moderate implementation schools while increases were found in comparison populations. Though their findings did not yield a statistically reliable effect given the

PAGE 45

34 decreased sample sizes for the subgroup analyses, t heir results do suggest the quality of intervention implementation should be considered wh en evaluating program outcomes. Pentz et al., (1990) found considerable variability among school-based prevention programs in terms of maintenance of effects, amount of time between provision of the intervention and observation of effects, and the ma gnitude of the effects found. Furthermore, such variability was found between stu dies based on similar methodology and content. Pentz et al., hypothesized variabilit y observed among these prevention evaluation studies may be due to differential imple mentation of evaluation design and interventions. These researchers offered three app roaches for defining quality of implementation: adherence—degree to which the inter vention is provided to the experimental group and not the control group; expos ure—amount of intervention provided to the target group; and reinvention—degre e to which implementation deviates from the designed program standard. Reinvention in particular is of most interest given the unique conditions that often exist at the local level when a program or program component is implemented on a large scale in an edu cational setting (e.g., state-wide or even county-wide). Furthermore, Pentz found diffic ulty in measuring the degree to which teachers deviated from implementation design when using self-report surveys. They found 100% of the teachers survey reported the y did not deviated substantially from the designed intervention. Those that had deviated to some degree indicated the inclusion of additional materials, discussion, or number of s essions provided. Given the contextual, political, and cultural const raints of implementing largescale educational programs, Harachi et al. suggest evaluators look at variables other than

PAGE 46

35 just fidelity. One example is to measure the diffe rences between implementers and nonimplementers on variables such as years of teaching experience, degree of self-efficacy, enthusiasm, preparedness, teaching methods compatib ility, and principals’ encouragement and support. The above variables may also help to identify the conditions under which a program is highly and accurately impl emented as designed. These researchers advocate for a complete description and operationalization of the key elements of a program as a first step in designing a process evaluation in education. Helitzer et al. (1999) utilized process evaluati on methods as a formative tool to improve and refine an intervention while being impl emented as well as explain and interpret intervention outcomes. These researchers developed a process evaluation plan by outlining a sequence of steps of the interventio n that theoretically leads to the desired outcome. In this plan they identified the most sal ient indicators to measure implementation. The study used the following types of dependent measures: (1) self administered questionnaires that included facilitat or characteristics, (2) training evaluation forms completed by facilitators, (3) fac ilitator check-off lists that documented curriculum-implementation, and (4) observations of group sessions. Qualitative Research Methodology Value and Nature of Qualitative Research Approaches Any research proposal involving qualitative resear ch methodologies must provide a rationale for engaging in such an enterprise (Mar shall & Rossman, 1999). Such a justification involves an understanding of the valu e and nature of qualitative research. Patton (1990) reasoned that one of the powerful ben efits of qualitative investigations is

PAGE 47

36 that is allows the researcher to understand a parti cular phenomenon of interest from the points of view of people directly involved in the i ssue or phenomenon rather than through predetermined views held by the researcher. This r ich, detailed characteristic is one of the hallmarks of qualitative designs (Marshall & Ro ssman, 1999; Maxwell, 2005; Patton, 1990; 2002; Taylor & Bogden, 1998). Qualitative re searchers are naturalistic in their approach to inquiry (Patton, 1990; 2002; Taylor & B ogden, 1998). Qualitative studies allow the evaluator to study topics of interest by approaching fieldwork without being limited by predetermined categories of analysis (Pa tton, 1990). Qualitative research provides a wealth of information about a relatively smaller group of people that although reduces generalizability, increases understanding i n greater depth than can be achieved through traditional quantitative approaches. Qualitative inquiry is not a single thing with a s ingle subject matter (Patton, 1990; 2002). Patton indicated qualitative inquiry builds on several interconnected themes which may be realized in the real world depending o n the purpose, situation, questions being asked, and availability of resources. Specif ically, he lists 10 such themes: Naturalistic, inductive analysis, holistic perspect ive, qualitative data, personal contact and insight, dynamic systems, unique case orientation, context sensitivity, empathic neutrality, and design flexibility. Any combinatio n of these can occur and serve as a foundation for conducting a qualitative study. A n aturalistic theme focuses on studying the real world situations as they unfold with no at tempts to manipulate or otherwise interfere with those situations. An inductive anal ysis theme refers to an immersion into details and specifics of the data to discover impor tant categories, dimensions, and

PAGE 48

37 interrelations by beginning with exploration first rather than testing theoretically derived hypotheses. A holistic perspective involves seeing the phenomenon as a whole whereby it is greater than the sum of its parts given its c omplexity as a system. Personal contact and insight refers to direct contact with people, s ituations, and the phenomenon of interest whereby the researcher brings their person al experiences and insights as an important part of the inquiry and critical to under standing the phenomenon. A dynamic system theme involves paying attention to processes and assuming change is constant and ongoing whether the focus is on an individual or an entire culture. Unique case orientation themes assume each case is special and the focus is on capturing details in individual cases and then engaging in cross-case an alyses based on individual case studies. Context sensitivity refers to placing fin dings within a social, historical, and/or temporal context. The theme of empathic neutrality assumes complete objectivity is impossible and pure subjectivity undermines credibi lity. Thus, the researcher’s role is not in proving anything, advocating any particulars or otherwise advancing any personal agenda. Finally, Design flexibility refers to the openness for adapting the inquiry as understanding deepens and/or situations change by a voiding rigid designs that eliminate responsiveness, but rather pursues new paths of dis covery as they emerge. Qualitative methods are particularly suited for st udies focusing on process variables involved in a program or organization (Ma rshall & Rossman, 1999; Maxwell, 2005; Patton, 1990; 2002). Such investigations all ow for an understanding of (a) implementation fidelity, (b) program operations and innovations where specific guidelines or rules are not provided in the impleme ntation plan, (c) attitudes, beliefs, and

PAGE 49

38 perceptions held by those serviced by a program or who participate in the operation of a program—this is particularly of interest where part icipants in the program’s operation are depended upon for delivering the appropriate levels of the intervention/service to clients, and (d) program components or issues that require m odification for overall program improvement. As Patton suggests, a common activity for all can result in drastically different outcomes depending on how the activity is experienced, what the unique needs are of each individual, and which parts of the acti vity were found to be most salient for individuals. Qualitative studies are further well suited for understanding individual client outcomes (Marshall & Rossman, 1999; Patton, 1990; 2 002; Taylor & Bogden, 1998), programs comparisons (Patton, 1990; 2002), evaluabi lity assessment – the degree to which a particular program is ready for systematic quantitative evaluation (Patton, 1990; 2002), and quality assurance (Marshall & Rossman, 1 999; Patton, 1990; 2002; Taylor & Bogden, 1998). In their book, The Discovery of Grounded Theory Glasser and Strauss (1967) argue qualitative research should focus its efforts to developing social theory and concepts. Their Grounded Theory approach was desig ned to enable researchers to find plausible support for theories through discovery fr om data rather than from a prior assumptions, other research, or existing theoretica l frameworks (Taylor & Bogden, 1998). The Grounded Theory approach begins with co llection of data about a particular phenomenon from which themes, concepts and ideas ar e based upon. Reviews of the themes, concepts and ideas are conducted in additio n to collection of other data for comparisons and refinement of ideas. Convergence o f information builds towards the

PAGE 50

39 establishment of a theory which “fits” the data. A ccording to Glasser and Strauss (1967), the main criteria in evaluating theories are if the y “fit” and “work.” Other researchers have called into question this em phasis on focus for pursuing qualitative investigations (Patton, 1990; Seale, 19 99; Taylor & Bogden, 1998). Seale (1999) for example differentiated between social or cultural commentary against social research. Patton (1990) allows for a more pragmati c approach whereby qualitative studies may be conducted to describe human action, beliefs, attitudes, and understandings regarding a program. More specifically, Patton wrot e, “…there is a practical side to qualitative methods that simply involves asking ope n-ended questions of people and observing matters of interest in real-world setting s in order to solve problems, improve programs, or develop policies (p. 89).” His view o n this issue, as he further argues, is like saying one may utilize statistical methods in a relatively straightforward way and not necessarily include an exhaustive review of logical -positivism. “While these intellectual, philosophical, and theoretical traditions have grea tly influenced the debate about the value and legitimacy of qualitative inquiry, it is not necessary, in my opinion, to swear vows of allegiance to any single epistemological pe rspective to use qualitative methods (Patton, 1990; p. 89).” Purpose and Research Questions A qualitative inquiry must provide a detailed purpo se and justification for using a qualitative approach to investigating a particular phenomenon (Marshall & Rossman, 1999; Patton, 1990). Marshall and Rossman provide several general justifications when conducting such a study. One in particular bears s pecific mention as it relates to the topic

PAGE 51

40 chosen in the present study: Research on informal a nd unstructured linkages and processes in an organization. This justification i s consistent with Patton’s (1990) descriptions of formative evaluations, or process e valuations whereby understanding the specific actions as they occur in the natural setti ng can assist in identifying potential areas for improvement. One of the potential purposes of engaging in a qualitative approach is that this approach allows for a collection of rich and detailed information about the beliefs, attitudes, or general understandings from the perspective of the individuals participating. As such, qualitative studies tend t o focus on three types of questions according to Maxwell (2005): (a) understanding of t he meaning of events and activities to the people involved; (b) understanding the influenc e of the physical and social context on these events, and (c) process questions by which th ese events and activities occur. Similarly, Marshall and Rossman (1999) describe thr ee purposes of qualitative research: (a) exploration of phenomenon not well un derstood in order to identify important categories of meaning to generate hypothe ses for future research, (b) explanation of patterns related to a phenomenon in order to identify plausible relationships shaping the phenomenon, and (c) descr iption and documentation of the phenomenon of interest in order to identify the sal ient actions, events, beliefs, attitudes, or social structures and process occurring in a giv en phenomenon or topic of interest. Questions which focus on differences of variance ar e considered appropriate for quantitative studies (Maxwell, 2005; Patton, 1990). However, questions which have as their goal an understanding of what is actually tak ing place in the field is best supported by a qualitative approach (Maxwell, 2005; Patton, 1 990).

PAGE 52

41 Qualitative Research Methods Researcher’s Biography and Role Researchers recommend a qualitative study must be specific and upfront of the researcher’s ro le in the study. According to Marshall and Rossman (1999), a qualitative study must provid e in its proposal a clear description of the researcher’s role in the study, level of rec iprocity provided, plans for entering the organization being studied, a description of the re searcher’s assumptions and potential biases, and consideration of any ethical concerns. Patton (1990) provided five dimensions for consideration when designing a quali tative study. These dimensions each lie on a continuum and are defined as follows: 1. Role of the Evaluator-Observer – the extent to whic h the researcher participates in the daily activities of the participants in their n atural setting (full – partial – onlooker). 2. Portrayal of the Evaluator Role to Others – the ext ent to which participants know that observations are being made and who the observ er is (overt – partial knowledge – covert). 3. Portrayal of the Purpose of the Evaluation to Other s – the extent to which participants are informed of the purpose of the res earch study (full – partial – covert – false explanation). 4. Duration of the Evaluation Observations (single obs ervation with limited time – long-term multiple observations). 5. Focus of the Observations (narrow/single element or component in a program – broad focus/holistic view of the entire program and all its elements).

PAGE 53

42 These five dimensions are used by Patton to describ e the variation in approaches to observations which can occur in qualitative studies They provide a framework for the researcher to define the parameters of a study and review how the evaluation is proceeding. As Patton states, “It is not possible t o observe everything (p. 216).” By focusing the study along very specific dimensions t he researcher can find organization and logic while attempting to observe the complex r eality of a given situation or program. Marshall and Rossman (1999) also suggest any qualit ative study must contain a researcher biography which specifically highlights any a priori expectations, beliefs, and/or biases the researcher may have about conduct ing the study. The purpose of providing such a biography is to provide greater cr edibility to the reader in providing up front the experiences, biases, expectations, and/or beliefs about the topic being investigated. Consistent with Patton’s concept the me of empathic neutrality, one cannot present themselves as a clean slate. Qualitative r esearch does not require the researcher to remove such subjectivity. Rather, the goal is t o acknowledge subjectivity and understand how it influences the study. This is di scussed more in the following pages concerning validity and reliability constructs in q ualitative studies. Selection Methods Researchers have indicated there are no formal r ules for sample size in qualitative studies (Marshall & Ross man, 1999; Patton, 1990; Taylor & Bogdan, 1998). The size of the sample depends on what the researcher wants to know, the purpose of the study, what information is inten ded to be useful to stakeholders, what will have credibility, and what can be done with av ailable time and resources. The size of the sample chosen for a qualitative study must b e large enough to capture most or all

PAGE 54

43 of the perceptions that might be important for incl usion in the study. A sample size that is too narrow runs the risk of narrowing the scope or range of information available within a population. A sample size that is very la rge has less risk regarding threats to validity; however, resources may be limited for des igning studies with very large samples. Marshall and Rossman (1999) argue that th e number chosen for a sample size must be thoroughly justified in any research propos al involving qualitative research. Finding a balance between breadth and depth of a st udy is essential when conducting a qualitative study. A method is needed for determin ing a reasonable starting point that finds balance between available resources and assur ing a majority of potential perceptions are captured within the study. Patton recommends researchers provide a minimum sample level for selection. The researcher may add to this number as the study unfolds. Researchers have suggested recording consecutive in dividual interviews until a saturation point is reached – that is, a point in w hich the information obtained is no longer new (Fossey, Harvey, McDermott, & Davidson, 2002; Patton, 1990; 2002). However, according to DePaulo (2000), such an appro ach is not solidly grounded and may not tell the researcher in advance the optimal qualitative sample size needed which maximizes capture of new and relevant information w hile also minimizing unnecessary resource expenditures. Starting with a sample size of 30 may be considered a reasonable starting point when random sampling procedures are employed (DePaulo, 2000; Griffin & Hauser, 1993).

PAGE 55

44 Specifically, DePaulo reasoned that if the incidenc e rate of an assumed subpopulation (e.g., consumers who are dissatisfied wi th a product) is 1 in 10, then the chance of randomly selecting a person with a satisfactory perception is 0.9. To determine the chance of finding the same type of participant with a satisfactory perception a second time in a row, one would multiply 0.9 to the second power. Therefore, the probability of selecting 10 satisfied participants in a row would be 0.9 to the tenth power, or 0.35. DePaulo reasoned there would be a 35 percent probab ility that a sample of 10 would have missed participants who were dissatisfied assuming an incidence rate of 1 in 10. DePaulo developed a table for researchers to use as an init ial guide to minimize the risk for sampling error by repeating the power calculations for various incidences and sample sizes. The table is then useful for selecting a sa mple size that is within the resource limits while also reducing the risk of sampling error. Hi s calculations suggested 30 participants with an assumed incidence of 10% would have a 95% c hance of capturing all potential perceptions in the population sampled. This starti ng point is consistent with Griffin and Hauser (1993) who found fewer new perceptions with each additional interview conducted beyond 30 individual interviews in a stud y involving consumer use of products. Again, this assumes a population inciden ce rate of 10%. Conversely if one assumes an incidence rate of 33% then according to the table, there would be about a 98% chance of capturing all relevant perspectives. Following selection of the number of participants f or inclusion in a qualitative study, the researcher must consider the appropriate sampling method for selecting participants. Some researchers have provided compr ehensive lists which describe

PAGE 56

45 various types of methods for sampling (Miles & Hube rman, 1994; Patton, 1990). Patton in particular provided a list of 16 methods. The r eader is referred to Patton for a complete description of the strengths and weaknesse s of each method which is beyond the scope and purpose of this review. A few method s do deserve some attention: Criterion sampling, random purposeful sampling, and convenience sampling. Criterion sampling procedures involve picking all cases that meet some criterion. According to Patton, this approach is common in quality assuranc e studies. This sampling method can be applied to identify cases from questionnaires or tests for in-depth interviews. Random purposeful sampling adds credibility to a study whe n the potential pool of participants is too large for one to handle. It also reduces judgm ent within a purposeful category. The weakness of this approach is that it does not allow for generalization or representativeness. This may not be an issue for studies where the focus involves a descriptive purpose for engaging in the qualitative study, or generating potential hypotheses for future research. Convenience sampli ng methods serve as the poorest rationale for sampling participants (Maxwell, 2005; Patton, 1990; 2002). However, this approach is focused on minimizing costs and reducin g time. Data Collection and Analysis. Great care and consideration must be involved when designing data analysis procedures in a qualit ative study. Researchers have noted that the data analysis part of any qualitative stud y is often the weakest part of the study (Maxwell, 2005). Taylor and Bogden (1998) describe d qualitative data analysis as, “the most difficult aspect of qualitative research to te ach or communicate to others (p. 140).” According to Miles and Huberman (1994), there are f ew agreed upon rules for drawing

PAGE 57

46 conclusions and verifying the sturdiness of the con clusions. What is agreed upon in the research literature is the need to conduct analyses of the data concurrently with data collection, and provide rich detailed information c oncerning the physical and social contextual variables; in addition to the researcher ’s thought processes throughout the analysis of results. Patton (1990) argued the rese archer has an obligation to, “monitor and report their own analytical procedures and proc esses as fully and truthfully as possible,” regardless of the analysis procedures em ployed (p. 372).” That is, the researcher will not only report on the findings of the study, but also on the analytical process. Marshall and Rossman (1999) described typical quali tative analysis procedures as having six phases: (a) organizing the data; (b) gen erating categories, themes, and patterns; (c) coding the data; (d) testing the emer gent understandings; (e) searching for alternative explanations; and (f) writing the repor t. Maxwell (2005) conceptualized analytic options into three broad categories: (a) m emos; (b) categorization strategies; and (c) connecting strategies. Categorization strategi es include the popular procedure of coding the data. Categorization strategies may be classified into three types: organizational, substantive, and theoretical. Orga nizational categorization involves predetermined identification of broad topics or cat egories which may be later used as “bins” for presenting the results of the study. Su bstantive categorization primarily involves a description of the participants’ concept s or beliefs. This process involves inductive analysis, and the outcomes of this proces s may be used to facilitate a more general theory of what is going on in the study, bu t does not necessarily depend on the

PAGE 58

47 theory being explored. Theoretical categorization approaches typically represents the researcher’s concepts rather than denoting the part icipants’ own concepts and the topics developed may be either derived from existing theor ies or from an inductively developed theory. Taylor and Bogden (1998) suggested “discovery” is t he first step in the analysis of qualitative data. This ongoing process of disco very involves identifying themes and developing concepts and propositions. Given the sh eer volume of information (measured in pages of text) it is essential that data analysi s procedures be ongoing throughout the data collection process. The cycle of collection a nd analysis allows the researcher to track early themes and concepts, either intended or unintended, which offer the researcher a means of refining, highlighting, or otherwise foc using on such themes and concepts in subsequent data collection sessions. Coding the dat a is the second step for Taylor and Bogden. This process usually occurs after the data have all been collected. These researchers caution the reader about any delay in c oding the data after it has been collected since the greater the delay the greater t he difficulty may occur with going back to informants to clarify any points or tie up loose ends. Researchers according to Taylor and Bogden may even maintain casual contact with in formants during the analysis phases and/or have informants read draft reports as a chec k on interpretations. Finally, the third step involves attempting to discount findings, or u nderstand the data in the context in which they were collected. Field notes are essenti al for undertaking this step (Patton, 1990). This final step is consistent with Patton’s recommendation to researchers that the analytical process be communicated clearly and in g reat detail when writing the results of

PAGE 59

48 the study. Thus, making sense of qualitative data must include the researcher’s role, the context in which the data was obtained, informal ob servations made before, during, and following interviews or focus groups, and the resea rcher’s thinking process for making specific interpretations and the evidence used to m ake those interpretations. More recently computerized software programs are be ing utilized for the analysis of qualitative data. According to some researchers there is a seductive aspect about this approach to analysis (Fielding, 1993; Taylor & Bogd en, 1998). According to Fielding (1993), there is a “perceived danger of superficial analysis produced by slavishly following a mechanical set of procedures (p. 3).” Fielding argues that most software may have an implicit conventional grounded theory appro ach; however such programs may not be valuable for researchers interested in herme neutic approaches, ethnomethodology, conversation analysis, or holistic analysis. Nonet heless, computerized analysis can provide a measure of reliability in the coding of d ata for identifying themes or concepts towards interpretation (Seale, 1999; Taylor & Bogde n, 1998) Patton (1990; 2002) referred to two analytical appr oaches as “case analysis” and “cross-case analysis”. Patton suggested case studi es are appropriate when the researcher is interested in variation in individuals as the pr imary focus. Cross-case analyses involve “grouping together answers from different people to common questions or analyzing different perspectives on central issues (p. 376).” When an interview guide is used as the tool for collection of interview data, people can b e grouped by topics from the guide. It is important that the researcher understands upfron t that the relevant data won’t

PAGE 60

49 necessarily be found in the same place for each int erview given the natural conversational flow required of an interview process. Reliability and Validity. Any discussion of data analysis procedures must inc lude a discussion of validity and reliability (Maxwell, 2005; Patton, 1990; 2002; Seale, 1999). The validity and reliability of qualitative data de pend to a great extent on the methodological skill, sensitivity, and integrity of the researcher (Patton, 1990; 2002). Unlike traditional quantitative methodologies which rely upon well designed instruments and statistical procedures for ensuring the reliabi lity and validity of the study, qualitative studies must make explicit the use of any procedure s for minimizing threats to validity and ensuring internal reliability since the researc her is the research instrument (Marshall & Rossman, 1999; Maxwell, 2005; Patton, 1990; Seale 1999). According to Seale (1999) attempts at replicability of qualitative studies is rarely found and where found are exercises to promote arti ficial consensus. In other words, according to Seale, different people are likely to have different accounts of the world. This view seems consistent among other researchers of qualitative studies. Taken as a whole, the research literature suggests the concept s of reliability and validity are not readily applicable to a purely qualitative study. Nonetheless these concepts are consistent within a qualitative approach when conceptualized a s determining the “trustworthiness” of the researcher’s interpretations of the informat ion. Therefore, one obligation the researcher has in order to demonstrate a trustworth iness is to “facilitate the expression of these accounts” (Seale, p. 42) so that others may m ake their own analyses to compare with the researcher’s.

PAGE 61

50 However, this may not necessarily provide trustwort hiness if such facilitation is viewed as selections of possible versions. To make sense of this, Seale recommends researchers conceptualize the reliability aspects o f a qualitative study as having either external or internal reliability. External reliabi lity is a demanding process referring to the replication of findings in conducting the same stud y again. In qualitative research, the sheer variety of different points of view can often lead to different outcomes in a replicated study. Thus, “the expectation of comple te replication is a somewhat unrealistic demand (p. 42).” Internal reliability is much more obtainable in qualitative studies and there are a variety of strategies that may be used to demonstrate a qualitative studies’ internal reliability. Internal reliability can ref er to the extent to which different researchers identify similar constructs (Seale, 199 9). Computers may also be used to facilitate the finding of coding errors or discrepa ncies according to Seale. This basic procedure typically involves a use of inter-rater c hecks on the coding process. Lincoln and Guba (1985) reasoned that the conventio nal terms of validity and reliability may be inappropriate for qualitative wo rk if a “naturalistic” approach is used which rejects a cause and effect view of the world; typically characteristic in a positivist view of the world. They conceptualized reliability and validity within a qualitative study as one of “trustworthiness” being established. The y identified four additional ways a researcher can establish trust in their research pr oject: (a) internal validity or “credibility”; (b) external validity or “transferab ility; (c) reliability or “dependability; and (d) objectivity or “confirmability". Credibility c an be achieved by engaging in long durations in the field with participants, engaging in multiple and persistent observations,

PAGE 62

51 engaging in triangulation exercises, and exposing t he research report to others and/or searching for alternative interpretations. Transfe rability is not intended to be achieved by use of random sampling or probabilistic reasoning. Rather the researcher provides a detailed, rich description of the setting studied s o that readers are given sufficient information to be able to judge the applicability o f findings to other settings which they know. Dependability can be achieved by engaging in “auditing” which consists of the researchers’ documentation of data, methods, and de cisions made during a project, as well as its end product. Auditing is also useful i n establishing confirmability. Auditing, according to Lincoln and Guba is also conceptualize d as an exercise in “reflexivity” which involves the provision of a methodologically self-critical account of how the research was done, and can also involve triangulati on exercises. Dynamic Indicators of Basic Early Literacy Skills ( DIBELS) Early literacy skills have been a focused area of r esearch in the last couple of decades. More importantly, advances have been made with regards to intervention and assessment of early literacy skills in the last dec ade. Adams’ (1990) book entitled Beginning to read: Thinking and learning about prin t was written based on a comprehensive review of the research literature on reading. Adams concluded a student’s trajectory of reading development was inc reasingly highly resistant to interventions if reading difficulties persisted bey ond the first grade. However, her research suggested that a student’s trajectory was sensitive to change during the prekindergarten through first grade years. Johnston a nd Allington’s (1991) review of remedial reading intervention studies was consisten t with Adams’ findings in which they

PAGE 63

52 too concluded remedial reading instruction has not been very effective, in general, at making children more literate. Prevention has ther efore gained more attention as the optimal course of action in developing students’ re ading literacy. In 1996, Good & Kaminski published a manuscript ent itled Assessment for instructional decisions: Toward a proactive/prevent ion model of decision-making for early literacy skills. This article provided a case study to illustrate th e use of DIBELS within a problem-solving model of educational decis ion-making. The purpose of the article was to demonstrate how to use DIBELS to dev elop local norms, monitor progress of student performance, and evaluate effects of int erventions on an individual basis. It emphasized a link between teaching and assessment i n which assessment is most useful when it is directly linked to the curriculum. It a lso emphasized a need for early intervention for at-risk learners in reading before the end of first grade. The goals of the decision-making model using DIBELS are to (1) preve nt reading problems by ensuring that students have early literacy skills, and (2) p roactively engage in instructional modifications to minimize the magnitude of reading problems for students who are having trouble developing early literacy skills. I n short, DIBELS were developed as a prevention approach to reading improvement. DIBELS are not intended to be the sole form of asse ssment used with young children (Kaminski et al., 2008). Convergent infor mation from multiple sources and multiple assessment procedures is desirable especia lly in high stakes educational decisions. The methods used in the DIBELS measures do not suggest the methods of instruction. DIBELS were developed to serve as ind icators of critical early literacy skills.

PAGE 64

53 The measures link most directly to three of the fiv e “big ideas” or essential skill areas identified in the reading research literature (Nati onal Reading Panel, 2000). Those three skill areas are (1) phonological awareness, (2) alp habetic principle, and (3) fluency with connected text. Phonological awareness refers to t he explicit awareness of the sound structure of language, including the ability to ora lly manipulate sound units smaller than words. This skill area includes rhyming words, ble nding phonemes, segmentation of phonemes and syllables, and deletion of phonemes an d syllables. Phonemic awareness is not the same thing as phonics which is the pattern of letter-sound corresponden ces in written language. The alphabetic principle refers to a ch ild’s awareness of letter-sound correspondences. Fluency refers to a child’s oral reading rate at a particular instructional level within a standardized time frame. As part of the early development of DIBELS, Kamins ki & Good (1996) examined the reliability, validity, and sensitivity of exper imental measures to assess areas of early literacy. The measures were phoneme segmentation f luency, letter naming fluency, and picture naming fluency. The measures were designed for repeated use to identify children with difficulty acquiring basic early lite racy skills. The measures were tested on 37 kindergarten students who could not read and 41 first grade students who could read. All 78 students were drawn from one elementary scho ol in the Pacific Northwest. Both the kindergarten and first grade groups were divide d randomly into two groups, a monitored and non-monitored group. Students in the monitored groups were administered the measures two times a week for a pe riod of 9 weeks while students in the

PAGE 65

54 non-monitored groups were tested with the measures only at the beginning and at the end of the 9 week period. The study was conducted in t hree phases. In phase one all students were tested on the McCart hy Scales of Children’s Abilities and the three DIBELS measures. Their tea chers completed the Rhode Island Pupil Identification Scale and a teacher rating sca le in order to further evaluate students’ classroom behaviors. Kindergarten teachers also ad ministered the Metropolitan Readiness Test, which assesses reading and math rea diness skills. The first grade teachers administered the Standford Diagnostic Read ing test and CBM reading as a measure of reading readiness. Phase two consisted of the 9-week progress monitoring of students assigned to the monitoring groups. Phase three involved all measures except the McCarthy. Results were summarized in three ways: p oint, level, and slope of scores. Reliability estimates on the three DIBELS measures for the kindergarten students ranged from .77-.93 (point), .97 to .99 (level), an d .62-.88 (slope). The reliability estimates for the first graders were .60-.83 (point ), .83-.95 (level), and .00-.36 (slope). The authors concluded moderate to high reliability for kindergarten students on the use of the DIBELS measures. The first grade reliability c oefficients were low to moderate. Criterion validity was examined by correlating the DIBELS estimates (point, level, and slope) with the criterion tests used in the first p hase of the study. For the kindergarten students, significant positive correlations were fo und for all point and level estimates with all criterion measures (range = .43 to .90, p < .01). All but one of the twelve correlations for slope estimates for kindergarten s tudents were not significant. Furthermore, the correlations for slope on the lett er naming DIBELS measure were

PAGE 66

55 negative with the first one (correlated with the Mc Carthy) being -.59 (p < .05). Only 9 of the 45 correlations for the first grade students on point, level, and slope estimates were significant positive correlations with two of them significant at the .05 level. (range = .42 .74, p < .01; .32 .47, p < .05). Analysis of sensitivity yielded acceptable evidence that children’s performances on the measures may change substantially from the end of the kindergarten to the end of the first grade years. The phoneme segmentation measures proved to be a mo re reliable, valid, and sensitive measure than the letter naming and pictur e naming measures. Furthermore, slope estimates for the kindergarten and first grad e groups were very low and suggested little change during the 9 week monitoring period. The authors concluded that the failure of the picture and letter naming fluency tasks on s lope estimates may indicate that these two measures are not very sensitive to performance change over a 9 week period. They further hypothesized that such little evidence of c hange may be a result of little instruction provided for those skills in the classr oom during the 9-week monitoring period. Authors correctly concluded that the sampl e size of the study was limited and cautioned the reader about generalizing their findi ngs while calling the results of the study preliminary. According to Kaminski and Good (1998) reading is a cultural imperative in today’s information-based society. The authors ar gue that DIBELS provide a significant value because it provides a cost-effective, time-ef ficient approach to (1) identifying students who require early literacy skills interven tions beyond the general education curriculum, (2) providing formative evaluation rega rding intervention effectiveness for

PAGE 67

individual students, and (3) determining when interventions have successfully reduced the risk of reading failure by remediation of early literacy skills deficits. The key to the success of DIBELS is its emphasis on formative assessment. By providing repeated measurements of a student's progress over time, teachers are best equipped to make critical instructional decisions that ensure successful progress towards goals. Kaminski and Good also discussed the differences between Curriculum-Based Measurement (CBM) Reading and DIBELS. Though the authors list several differences between DIBELS and CBM-Reading which are not directly pertinent to this study, two aspects deserve attention.

Kaminski and Good indicated that variability in performance is a natural characteristic of all people. The authors argue that CBM, in general, displays a moderate level of variability compared to DIBELS, which may be characterized as having high variability. This variability may be associated with the nature of the skills being assessed (e.g., emerging early literacy skills as opposed to oral reading fluency). As a result, more data points may be required when using DIBELS in order to obtain the same level of confidence, given the sensitivity of the measures and the age of the population being served. Also, CBM has a long duration with reliable use over several years, while the DIBELS measures have a short duration, with many measures showing floor effects and reaching ceiling effects at different time periods. The authors characterize the skills DIBELS assesses as necessary but insufficient for later life skills. Thus it is best to consider DIBELS as an assessment tool only for students acquiring early literacy skills.
Good, Simmons, and Smith (1998) provided a rationale for early and intensive literacy intervention by reviewing three converging areas of research in early literacy and reading acquisition. The authors also provided mechanisms to enhance early literacy development through the strategic and timely linkage of assessment and intervention. The three areas of converging research are essential skills to teach for successful literacy development, research-based instructional designs for teaching essential literacy skills, and utilizing assessment to make curricular and instructional changes in the classroom. The authors highlight a consistent finding that children who experience early reading difficulties are likely to continue to experience later difficulties. However, many of these conclusions come from correlational studies that do not communicate sufficiently the scope of the issue and problem.

CBM-Reading may be used to directly monitor a child's reading trajectory. However, such assessment tools may not allow for early detection of difficulties in acquiring early literacy skills, which provide the foundation for later reading fluency and comprehension. In this context, Good et al. describe the "Matthew Effect" (Stanovich, 1986), in which initial skills lead to faster rates of acquisition of subsequent skills for students with high initial skills and slower acquisition for students with lower initial skills. The authors argue the differences in developmental reading trajectories may be due to a series of reading experiences and activities that begin with difficulty in early literacy skills, combine with minimal exposure to print, and build further toward lowered motivation and desire to read. They highlight research showing that good readers may have as much as twice the exposure to vocabulary experienced by poor readers
prior to entering the kindergarten year. The authors argue that low initial skills and low slopes in trajectories of reading development make it nearly impossible for these students to "catch up" to their same-age peers beyond the first grade.

Good et al. (1998) point to a converging body of research that highlights a number of different areas, or big ideas, for early literacy programs. The first of these is phonological awareness, or an individual's sensitivity to and awareness of the sound structure of one's language. Phonological awareness includes skills such as segmenting, blending, and naming letters and sounds. The second area is an understanding of the alphabetic principle, which refers to an individual's mapping of print to speech and establishing an association between a letter's sound and its written form. Phonological recoding is the third area, describing an individual's ability to recognize the relationship between words, syllables, phonemes, and letter sounds in order to pronounce an unknown written word or to spell a spoken word. The development of this skill is progressive. The easiest form is when a child is provided with a spoken word and asked to match it with three distinctly different written words. Later, children use letter-sound correspondences and their positions in sequences to spell and read words. With redundancy and practice, children learn to recognize words more efficiently. The fourth area concerns accuracy and fluency with connected text. Poor word recognition skills inhibit an individual's ability to comprehend text. By making these four big ideas explicit components of an early literacy program, early reading acquisition skills may be enhanced for students, providing a solid foundation for continued reading development (e.g., reading comprehension).
How to teach the above skills is also of critical importance. In demonstrating the link between what to teach and how to teach it, Good et al. drew upon the work of Smith, Simmons, and Kame'enui (1998), who synthesized the results of 25 intervention studies conducted from 1985 to 1996. Good et al. summarized the five features for teaching phonological awareness. The first is to provide instruction at the phoneme level. The second is to scaffold tasks and examples according to a range of linguistic complexity. Next, explicitly model phonological awareness skills prior to student practice and provide students with generous opportunities to produce isolated sounds orally during practice. Fourth, provide systematic and strategic instruction for identifying sounds in words, blending and segmenting, and culminate with integration of phonological awareness and letter-sound correspondence instruction. Finally, use concrete materials to represent sounds. Good et al. describe the use of DIBELS as a means of evaluating the acquisition of early reading skills and of using DIBELS data to make instructional decisions. The authors address the limitations of traditional reading assessments in terms of poor utility for intervention design and poor utility for formative assessment or progress monitoring.

Reliability and Validity of DIBELS

Some research and discussion exists in the literature demonstrating the utility of DIBELS for early detection of student difficulties in acquiring early literacy skills (Barger, 2003; Buck & Torgesen, 2003; Elliott, et al., 2001; Good, et al., 2001; Good, Simmons, Kame'enui, Kaminski, & Wallin, 2002; Kaminski et al., 2008; Shaw & Shaw, 2002). Elliott, et al. (2001) evaluated four of the DIBELS measures in identifying kindergarten
students who are at-risk for reading failure. These measures were letter naming fluency, sound naming fluency, initial phoneme ability, and phoneme segmentation. The authors point out that the original version of DIBELS was a set of 10 brief measures conceptualized as downward extensions of CBM reading probes. The original measures included Story Retell, Picture Description, Picture Naming Fluency, Letter Naming Fluency, Sound Naming Fluency, Rhyming Fluency, Blending Fluency, Onset Recognition, and Phoneme Segmentation Fluency. Elliott et al. evaluated two of the three measures evaluated by Kaminski and Good (1996): Letter Naming Fluency and Phoneme Segmentation Ability. The authors also evaluated the reliability and validity of using the Sound Naming Fluency and Initial Sound Fluency measures. The purpose of the study was also to extend the results of Kaminski and Good by including a larger sample of participants.

The study involved 75 kindergarten students from four classrooms in three elementary schools in a Midwestern city. Although the sample was small, the group was representative of local and national populations in gender percentages, percentage of minorities as a whole, and percentage of the population on free or reduced lunch status in school. Five assessments were administered to students: 4 DIBELS measures, the Broad Reading and Skills clusters of the Woodcock-Johnson Psychoeducational Achievement Battery-Revised (WJ-R), the Test of Phonological Awareness-Kindergarten form (TOPA), and the Kaufman Brief Intelligence Test (K-BIT). School staff had completed the Developing Skills Checklist (DSC) earlier in the school year. The DIBELS measures were the predictor measures. The WJ-R, TOPA, DSC, and the teacher-rating
questionnaire were used as criterion measures. The K-BIT scores were used to control for differences in ability in the regression analyses. Inter-rater reliability estimates and coefficients of stability and equivalence for three of the DIBELS measures (Letter Naming Fluency, Sound Naming Fluency, and Phoneme Segmentation Ability) ranged from .80-.90. The authors concluded that the magnitude of the coefficients of stability/equivalence indicated that most of the variance in students' scores may be attributable to true differences in the abilities measured rather than to errors in measurement.

Concurrent validity of the DIBELS measures was computed by measuring the correlation between level estimates on the DIBELS measures (average scores over repeated administrations) and each of the criterion measures used. The strongest correlations were found between the DIBELS level estimates and the Skills cluster of the WJ-R and the Developing Skills Checklist. The authors reported that 35-40% of the variance in scores on the two achievement measures was explained by correlations between the DIBELS measures and the criterion achievement measures. Correlations between the DIBELS total and the WJ-R Achievement Battery were .62 (Broad Reading cluster), .81 (Skills cluster), and .67 (Letter-Word Identification). Correlations between the DIBELS and the Teacher Rating Questionnaire-prereading, the Developing Skills Checklist, and the Test of Phonological Awareness (TOPA) were .67, .74, and .69, respectively. The authors concluded that the DIBELS measures explained at least 16% of the variance and, in most cases, between 30 and 40% of the variance in students' scores when the achievement measures were used as criteria.
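As an arithmetic note (not part of the Elliott et al. study), the proportion of variance explained by a single correlation is simply its square:

\[
r^{2} = (\text{correlation})^{2}, \qquad (.62)^{2} \approx .38 \;(\approx 38\%), \qquad (.40)^{2} = .16 \;(16\%)
\]

which is why a correlation of about .40 corresponds to the 16% floor and correlations in the low .60s correspond to the 35-40% range reported above.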

Hierarchical regression analyses were computed to further understand the relationship between the DIBELS measures and the achievement measures used in this study. Controlling for ability level and phonemic awareness ability, the authors concluded the DIBELS measures explained a significant proportion of the remaining variance in students' scores on the three achievement measures. The authors also concluded the DIBELS measures explained a larger portion of the variance in achievement scores than did either the Kaufman Brief Intelligence Test (K-BIT) or the Test of Phonological Awareness (TOPA). Finally, the authors concluded the Letter Naming Fluency measure of DIBELS was the single best predictor of kindergarten achievement scores on the Broad Reading and Skills clusters of the Woodcock-Johnson Psychoeducational Battery-Revised (WJ-R) and of teachers' ratings of student reading status.

Hintze, Ryan, and Stoner (2003) also analyzed the technical aspects of DIBELS by examining its concurrent validity and diagnostic accuracy against the Comprehensive Test of Phonological Processing (CTOPP). The CTOPP is comprised of 7 subtests (Elision, Rapid Color Naming, Blending Words, Sound Matching, Rapid Object Naming, Memory for Digits, and Non-Word Repetition). The Elision subtest requires a student to repeat a verbally presented stimulus word while omitting a sound. Both rapid naming subtests are timed measures. Standard scores for each subtest and three composite scores are obtained on the CTOPP. The three composite scores are the Phonological Awareness Composite, the Phonological Memory Composite, and the Rapid Naming Composite. Subtest coefficient alphas on the CTOPP ranged from .74-.93. Internal consistency for
the three composite scores ranged from .84-.97 on the 5-year-old version and from .81-.96 on the 6-year-old version. Subtest test-retest reliability ranged from .74-.97. Test-retest coefficients for the three composites ranged from .70-.92. Predictive validity coefficients with the Woodcock Reading Mastery Test-Revised ranged from .42-.71.

The study involved 86 kindergarten students in the winter of their kindergarten year. Correlation patterns between the two measures and decision accuracy studies were conducted to determine the appropriate cut-off scores for DIBELS. Students were given both the CTOPP and the DIBELS measures during the same 20-minute session, with the sequence of presentation counterbalanced and a short break offered between measures. Results found DIBELS strongly correlated with most subtest and composite scores of the CTOPP. Results indicate the Initial Sound Fluency and Phoneme Segmentation Fluency measures of DIBELS correlated most strongly with the four CTOPP subtests which measure phonological awareness and memory, and less strongly with CTOPP subtests which require rapid naming and memory for digits. Results also indicated the Letter Naming Fluency measure of DIBELS correlated strongly with CTOPP subtests and composite scores that measure phonological awareness and memory as well as rapid naming abilities. The authors conclude these data provide support for LNF as an early indicator of phonological development.

Results of the diagnostic accuracy analysis involved examination of cutoff scores on the basis of sensitivity (accuracy in identification of students at-risk between DIBELS and CTOPP), specificity (accuracy in identifying students who do not present a problem as identified by CTOPP scores), and predictive power (number of false positives and
false negatives in comparison to CTOPP results). Earlier analyses used the cutoff scores for the Initial Sound Fluency and Phoneme Segmentation Fluency measures suggested by the DIBELS developers. Results indicated that sole use of the ISF and PSF scores on the DIBELS is likely to over-identify students as having a weakness in phonological awareness skills in comparison to CTOPP composite scores. The authors conducted follow-up analyses using a series of Receiver Operating Characteristic (ROC) curves to determine appropriate cutoff scores for the ISF, LNF, and PSF measures that balance both sensitivity and specificity. Cutoff scores of 15 for ISF and 25 for LNF were found to provide an appropriate balance between sensitivity and specificity against the CTOPP Phonological Awareness Composite (PAC) score. The PSF score failed to predict Phonological Awareness Composite scores over a range of cutoff scores, while moderate to high levels of sensitivity were observed when using cutoff scores ranging between 20 and 34. Thus, where the PSF accurately predicts students who exhibit phonological awareness problems on the CTOPP, it does so at the expense of a moderate number of false positives.
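To make the mechanics of this kind of cutoff analysis concrete, the brief sketch below is offered as an illustration only; it is not the authors' procedure, and the scores and criterion-based risk flags are entirely hypothetical. It computes sensitivity and specificity for each candidate cutoff and selects the cutoff that most nearly balances the two; a full ROC analysis would additionally examine these pairs across the whole range of cutoffs.

# Illustrative sensitivity/specificity sweep over candidate cutoffs.
# A DIBELS score at or below the cutoff is treated as "flagged at risk."
def sensitivity_specificity(scores, at_risk, cutoff):
    """Return (sensitivity, specificity) for one candidate cutoff."""
    true_pos = sum(1 for s, r in zip(scores, at_risk) if r and s <= cutoff)
    false_neg = sum(1 for s, r in zip(scores, at_risk) if r and s > cutoff)
    true_neg = sum(1 for s, r in zip(scores, at_risk) if not r and s > cutoff)
    false_pos = sum(1 for s, r in zip(scores, at_risk) if not r and s <= cutoff)
    sens = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
    spec = true_neg / (true_neg + false_pos) if (true_neg + false_pos) else 0.0
    return sens, spec

# Hypothetical LNF scores and criterion-based at-risk flags (True = at risk on the criterion).
lnf_scores = [5, 12, 18, 22, 27, 30, 35, 41, 48, 55]
criterion_at_risk = [True, True, True, False, True, False, False, False, False, False]

# Sweep candidate cutoffs and keep the one with the smallest sensitivity/specificity gap.
best = min(range(5, 56),
           key=lambda c: abs(sensitivity_specificity(lnf_scores, criterion_at_risk, c)[0]
                             - sensitivity_specificity(lnf_scores, criterion_at_risk, c)[1]))
print("balanced cutoff:", best)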

In 2002, Good et al. published a technical report providing practitioners with updated cutoff scores for determining student need for intensive, strategic, or benchmark instruction. The authors indicate that the adjustments were made in order to balance sensitivity and specificity. The reader is referred to their paper for the specific criteria for each measure and each grade level. In general, however, it is important to point out that the DIBELS is not a nationally norm-referenced measure. It was never designed to be a nationally norm-referenced measure. The flexibility it offers to local education agencies is that each agency can develop its own grade level norms in order to best meet the educational needs of its students. As such, states which have implemented DIBELS (e.g., Florida) may provide a different set of norms than those offered by Good et al. (2002) to reflect their own state's standards and expectations for student performance (e.g., Florida's Sunshine State Standards).

Research has shown that successful performance on the DIBELS measures provides a reliably predictive method of ensuring students will be successful on later high stakes testing (Barger, 2003; Buck & Torgesen, 2003; Good, et al., 2001; Good et al., 2002; Shaw & Shaw, 2002). Good et al. (2001) found the DIBELS Oral Reading Fluency (ORF) measure is a reliable predictor of performance on Oregon's statewide achievement test. Additionally, using the benchmarks of above 110, 80-109, and below 80, the researchers correctly classified 96% of low-risk students as unlikely to fail the statewide achievement test and correctly classified 72% of high-risk students as likely to fail the test. Where students are identified as in need of strategic or intensive interventions, additional assessments may be necessary for intervention design in order to best serve the unique needs of the individual student and ensure they reach their benchmark goals.

Shaw and Shaw (2002) evaluated the relationship between the DIBELS ORF measure and the Colorado State Assessment Program (CSAP). Third grade ORF measures for 52 students were analyzed in relation to their third grade performances on the end of the year CSAP. The performance level categories on the CSAP are identified as (1) unsatisfactory, (2) partially proficient, (3) proficient, and (4) advanced. Results indicated the spring ORF measures correlated with the CSAP at .80, indicating a high
relationship with each other. Both the fall and winter third grade ORF measures correlated with the CSAP at .73. Student performances were grouped by CSAP performance levels and analyzed against performance on the DIBELS ORF. Of the 30 students who scored at or above 110 on the third grade spring benchmark, 90% performed at or above the proficient level on the CSAP. Of the remaining 28 students who performed below the DIBELS ORF spring criterion of 110 words per minute, only 16 scored in the proficient level of the CSAP. The authors concluded the criterion of 110 words per minute is a sufficient indicator for success on the CSAP. Results also suggested students who performed between 90 and 110 have a high probability of performing at proficient levels on the CSAP: of those students who performed above 90 (including those scoring above 110), 91% scored at the proficient level or above. Only 27% of those who scored less than 90 on the ORF measure performed at the proficient level on the CSAP. The authors argue that an appropriate cut off score for ensuring proficient performance on the CSAP may be as low as 90; however, they point out that all of the students who performed at the advanced level of the CSAP had ORF scores of 120 or above.

Barger (2003) compared the DIBELS ORF measure with the North Carolina end of grade reading assessment. The study involved 38 third-grade students from one school who were given the spring ORF in the first week of May. One week later, the same students were given the North Carolina End of Grade Test. This test consists of 56 questions, and students have 115 minutes to complete it. Students read each passage and answer a series of multiple choice questions before moving on to the
next passage. The North Carolina End of Grade Test uses a four-level grading scale: Level 1 = lowest level/insufficient mastery; Level 2 = inconsistent mastery; Level 3 = consistent mastery; Level 4 = highest level/superior mastery. Students must achieve a Level 3 to be considered on grade level. The author found a cut off score of 100 correct words per minute on the ORF measure was a reliable indicator of obtaining at least a Level 3 score on the NCEGT. The correlation between the NCEGT and DIBELS was .73, which was similar to the correlations found by Good, et al. (2001) on the Oregon Statewide Assessment (r = .67) and by Shaw and Shaw (2002) on the Colorado State Assessment Program (r = .80).

Buck and Torgesen (2003) found similar results when DIBELS was used to predict later performance on the Florida Comprehensive Assessment Reading Test (r = .73). Buck and Torgesen studied the relationship between the DIBELS ORF measure and the Florida Comprehensive Assessment Test (FCAT). They also examined the relation of the ORF measure to the FCAT math section and the FCAT reading comprehension section, as well as the FCAT norm-referenced test. Scores on the FCAT range from 1 to 5, with a score of three or higher indicating performance at or above grade level. Thirteen schools from one school district participated, with a total population of 1,102 students. Significant correlations were found between ORF scores and the reading FCAT scores (r = .70, p < .001), the math FCAT scores (r = .53, p < .001), and reading scores on the norm-referenced FCAT (r = .74, p < .001).

The Buck and Torgesen results were consistent with Good et al. (2002): ORF scores above 110 are considered to be at low risk for reading skills below grade level on a
comprehensive measure of reading comprehension. ORF scores between 80 and 109 words per minute are considered to be at some risk, whereas scores below 80 are considered at high risk of scoring below a Level 3 on the FCAT. Similar to Good et al. (2002), this study found 91% of students who performed at or above 110 on the DIBELS ORF achieved adequate performance on the reading section of the FCAT. Eighty-one percent of students who performed at or below 80 on the DIBELS ORF performed below a Level 3 on the reading section of the FCAT. The authors found scores above 110 for minority students were less predictive of success, while scores below 80 for this population were more predictive of failure. However, the authors caution the reader about this result considering the small number of minority students involved in the study. The authors tentatively suggest the difference in predictions across racial groups may be linked to vocabulary development.
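Taken together, the risk bands used across these studies reduce to a simple classification rule. The sketch below is illustrative only (the function and example values are not drawn from the cited studies); it maps a third grade spring ORF score to the bands reported above.

# Maps a third grade ORF score (correct words per minute) to the risk bands described above.
def orf_risk_category(words_per_minute):
    if words_per_minute >= 110:
        return "low risk"      # likely to reach proficiency on later high stakes tests
    elif words_per_minute >= 80:
        return "some risk"     # 80-109: may need strategic support
    else:
        return "high risk"     # below 80: likely to need intensive support

print(orf_risk_category(115))  # low risk
print(orf_risk_category(95))   # some risk
print(orf_risk_category(60))   # high risk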

DIBELS Best Practice

Researchers have identified best practices in the use of DIBELS and describe its use within an outcomes-driven model (Good, Gruba, & Kaminski, 2001; Kaminski et al., 2008). The primary message provided by these researchers is that schools can prevent reading difficulties for a significant number of students in the elementary school years if they adopt a prevention-oriented assessment and intervention approach to reading development and programming. DIBELS was developed to provide schools with an assessment tool that is predictive, cost efficient, and time efficient, leads to intervention, provides both formative and summative evaluation of student progress, and provides information towards the evaluation of curriculum and instructional decision making (Kaminski, et al., 2008).

The Outcomes-Driven Model is derived from a problem-solving model and the initial application of the problem-solving model to early literacy skills (i.e., Kaminski & Good, 1998). The Outcomes-Driven Model is essentially a decision-making system that focuses on preventative approaches in order to minimize reading difficulties for students. It is also a model that is highly consistent with a Response-to-Intervention (RtI) approach to educational service delivery advocated through the passage of the Individuals with Disabilities Education Improvement Act of 2004 (Kaminski, et al., 2008). Five educational decisions are utilized to accomplish steps to outcomes. Those decisions are: (1) identify need for support, (2) validate need for support, (3) plan for support, (4) evaluate and modify support, and (5) review outcomes.

The basic premise behind the Outcomes-Driven Model is that no child is allowed to fail. The instructional support provided to a student must match their instructional needs. Benchmark testing is encouraged to first identify those students who may be in need of further testing and/or critical instructional support. Students who are identified as at-risk for reading difficulties during the first step of the model may be more closely evaluated to validate their status. Evaluation of human error in the scoring or administration of the DIBELS is primary and should always be the initial assumption during the second step of an outcomes model. When administration and scoring errors are ruled out and/or corrected, students are retested with alternative forms of the DIBELS measures. Students participating in this step of the model are tested at least three times with alternate forms to increase the validity of the scores in determining whether a low score is the result of low skills rather than a "bad day."
Three patterns of assessment results are possible on repeated assessments: (1) consistently low on all measures, (2) increasing performance with each testing, and (3) extreme variability in results. Where extreme variability is found, other factors potentially affecting the child's performance are evaluated. For students whose need for critical instructional support is validated, the third step in the outcomes model is to plan for instructional support. Such a plan should include (1) a clear instructional goal that will be a step to outcomes for the achievement of reading; (2) a focus on essential skills; (3) a plan for the amount and type of support a student needs; (4) a specification of the logistics of who will teach, using what instructional materials, when, and where; and (5) a measurement plan to evaluate progress. Identification of the goal in relation to validated baseline measures for a student with at-risk status allows educators to visually track or monitor the progress of the student's performance on a graph.

It is important to identify the specific skill(s) that will be directly and systematically taught. It is also important to distinguish between what skill is to be taught, how often instruction is provided, what curriculum materials are to be used, and what instructional procedures are followed. The authors emphasize that a match between student need and instructional support level is critical at this step of the Outcomes-Driven Model. Once a student is provided instructional support, it is then necessary to monitor the progress of the student's performance using the DIBELS measures as a means of evaluating student performance and instructional effectiveness. This step is considered key in the Outcomes-Driven Model because formative assessment of
student progress is essential to the efficient and timely modification of instructional supports. By tracking the student's progress more frequently, it is possible to determine when instructional modifications are needed to assist the student in meeting their goals. When students with substantial need for instructional support are identified, formative assessment should occur as often as weekly. Students will require less frequent monitoring where less need is identified. The authors advocate a rule for determining when to adjust instructional or curricular methods: performance falling below the aim-line on three consecutive data points.
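A minimal sketch of this decision rule follows, assuming a straight aim-line drawn from a hypothetical baseline to a hypothetical goal; the scores and parameters are invented for illustration and are not drawn from the sources above.

# Sketch of the "three consecutive points below the aim-line" rule.
def aim_line(baseline, goal, total_weeks, week):
    """Expected score at a given week on a straight line from baseline to goal."""
    return baseline + (goal - baseline) * week / total_weeks

def needs_instructional_change(scores, baseline, goal, total_weeks):
    """True if the three most recent weekly scores all fall below the aim-line."""
    if len(scores) < 3:
        return False
    recent = list(enumerate(scores, start=1))[-3:]
    return all(score < aim_line(baseline, goal, total_weeks, week)
               for week, score in recent)

weekly_psf_scores = [12, 14, 15, 15, 16, 17]   # hypothetical PSF progress monitoring scores
print(needs_instructional_change(weekly_psf_scores, baseline=10, goal=35, total_weeks=18))  # True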

The purpose of the last step in the model is the review of student outcomes in terms of goal attainment. Student outcome data may be used to review the structure of supports the school has in place to achieve outcomes at both an individual level and a systems level. At the individual level, a student's performance on the DIBELS measures in relation to their intended goal facilitates the outcomes review. At a systems level, aggregate student data facilitates the evaluation of a school's structure of instructional supports and the overall effectiveness of the curriculum and instruction provided in supporting all children to achieve reading outcomes. The authors provide a general rule of 80% as the goal for a school's core curriculum and instruction. The core curriculum and instructional methods should allow 80% of the student population to meet their grade level goals on the DIBELS measures. It is assumed that at least 15% will require additional targeted support for areas of specific difficulty, with another 5% requiring intensive, carefully designed instruction to achieve benchmark goals. The authors describe two reports, the outcomes report and the benchmark linkage report, which may be used to review a school's reading program or core instruction and additional instructional supports.

An outcomes report assists a district in determining what percentage of students achieved essential reading outcomes as defined by the school district. By having a clear set of achievement goals, a district may review from year to year the progress towards reaching that goal. One method in which DIBELS is designed to assist in this approach is to review aggregated DIBELS data by grade level for each of the measures, including Oral Reading Fluency, which begins in the first grade, and then compare the percentage of students performing at the bottom quartile, the top quartile, and the distribution of scores in between. Because the DIBELS measures are based on rate of performance, the x-axis of an outcomes graph would display the number of correct words or correct sounds, and the y-axis would indicate the frequency of students performing at those levels. The authors indicate that though the outcomes report helps a district stay focused on the "bottom line," it provides little assistance in determining whether the core curriculum and instructional practices are in need of modification.

The benchmark linkage report provides a picture of the linkage between students achieving earlier benchmark goals and achieving later benchmark goals. The goal is to ensure later achievement of benchmark goals by reaching earlier benchmark goals on time. The benchmark linkage report provides what the authors describe as a "dot picture," which shows student performance on an earlier benchmark and a later benchmark at the same time. For example, Figure A4 shows how students' performance on an earlier benchmark along the x-axis (e.g., winter Onset Recognition Fluency) may be compared
to the same students' performance on a later benchmark measure along the y-axis (e.g., spring Phoneme Segmentation Fluency). The graph would also indicate the cutoff scores for each of the two benchmark measures, identifying those students who reached benchmark goals and those who did not. The benchmark goal for winter Onset Recognition Fluency is 25 and the benchmark goal for spring PSF is 35. Scores of 10 or less on either measure are considered deficit performances. Scores between 10 and 25 on Onset Recognition Fluency and scores between 10 and 35 on Phoneme Segmentation Fluency are considered performances at risk for reading failure. Four different zones of performance would be identified on the graph. The top right zone, or what the authors identify as Zone A, would be indicative of those students who reached the earlier benchmark goals and the later benchmark goals as well. These students would be identified as on track towards successful performance on later high stakes testing. Student performances in the bottom right zone, or Zone B, would be indicative of those students who reached earlier benchmark goals but did not achieve later benchmark goals. Students in Zone C, or the top left area of the graph, would be indicative of those students who did not meet the earlier benchmark goals but were able to achieve later benchmark goals, perhaps due to instructional modifications made just following the earlier benchmark testing. Students who did not reach benchmark goals for either of the two benchmark times cannot be predicted to succeed on later high stakes testing and are students whose performances require intensive remedial efforts.
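The four zones can be expressed as a small classification rule. The sketch below assumes the example benchmark goals given above (25 for winter Onset Recognition Fluency, 35 for spring Phoneme Segmentation Fluency); the function name and the sample score pairs are hypothetical.

# Classifies a student into the linkage-report zones described above.
# Zone A: met both benchmarks; Zone B: met earlier only;
# Zone C: met later only; Zone D: met neither.
def linkage_zone(earlier_score, later_score, earlier_goal=25, later_goal=35):
    met_earlier = earlier_score >= earlier_goal
    met_later = later_score >= later_goal
    if met_earlier and met_later:
        return "A"   # on track for later high stakes testing
    if met_earlier and not met_later:
        return "B"
    if not met_earlier and met_later:
        return "C"
    return "D"       # intensive remedial efforts indicated

# Hypothetical (winter Onset Recognition Fluency, spring PSF) score pairs.
students = [(30, 42), (28, 20), (12, 40), (8, 15)]
print([linkage_zone(e, l) for e, l in students])   # ['A', 'B', 'C', 'D']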

By arranging various combinations of DIBELS measures across different grade levels, a school may more accurately determine the extent to which its core curriculum and instructional structures are preparing students for later high stakes testing (e.g., the third grade FCAT in Florida).

Good et al. (2001) and Kaminski et al. (2008) suggest a school can identify strengths and weaknesses in its system by reviewing linkages from kindergarten through third grade. When students are identified as performing in a zone of concern (e.g., B, C, or D), a school would first determine whether that performance is valid or whether testing errors occurred. Where both of these explanations are ruled out, curriculum and instruction factors are considered. According to the authors, examination of the linkage reports allows a review of outcomes in terms of (1) the quality, focus, and intensity of the core curriculum and instruction and the system of providing additional supporting instruction prior to the first benchmark and (2) the quality, focus, and intensity of the core instruction and curriculum and the system for providing additional instructional supports between the first and second benchmark.

Finally, Good et al. (2001) and Kaminski et al. (2008) suggest a model for decision-making and assessment. The model contains three primary decision-making loops: (1) an assessment-intervention feedback loop, (2) review of benchmark goals from one benchmark to the next for individual students, and (3) review of year-to-year outcomes. The first loop involves progress monitoring of interventions for those students identified early in the year as experiencing reading difficulties. The second loop involves analysis of individual student performances through the year from one set of benchmark measures to the next. Students who receive additional instructional supports in the beginning of the year due to at-risk concerns identified from the first benchmark, but who perform
successfully on the next set of benchmarks, would not require the same level of supports and monitoring as long as they continue to achieve adequate progress on subsequent benchmark goals for the year. The third loop describes the process of using outcome reports and benchmark linkage reports in order to evaluate the core curriculum and instruction and the system of providing additional instructional supports of a particular school.

Other studies have also advocated for the use of DIBELS as an effective and efficient assessment approach to building and choosing effective curriculum and instructional practices in the school and classroom environments (Simmons, Kame'enui, Good, Harn, Cole, & Braun, 2001; Smith, et al., 2001). For example, Simmons et al. (2001) discussed the utility of DIBELS assessment in building, implementing, and sustaining a beginning reading improvement model at a school level. Overall, the authors describe implementing large scale reading programs in a number of schools. To start, the authors advocate three general principles concerning the building, implementing, and sustaining of an effective reading program. First, although schools vary in size and in amount of resources, "the principles and strategies of conceptualizing, designing, implementing, and sustaining instructional and behavioral change are fundamentally the same for all individual schools (p. 539)." Second, if effective reading programs are to be sustained over long periods of time, they must be implemented, monitored, and supported at the school-building level. And third, implementation and support of comprehensive programs at the building level are a necessary but insufficient condition for increasing
and sustaining student performance. Thus, district level support and commitment are necessary for long-term sustainability.

The authors state the implementation of a beginning reading program involves knowledge of reading in an alphabetic writing system and procedural knowledge of how to organize and implement research-based reading programs in schools, which are composed of people, practices, pedagogy, and policy. An additional eight research-based tenets serve as the basis for the design of a school-wide reading program. First, address reading success and failure from a systems level perspective. It is important for school personnel to recognize how their policies and practices contribute to, inform, and/or detract from their efforts to deliver effective reading instruction. Next, embrace a preventative approach by focusing on early assessment and early intervention. Recognize and respond to multiple contexts of reading achievement. Utilize a research-based core curriculum and enhancement programs. Be familiar with and utilize the convergent knowledge of effective reading practices. Use school-based teams to customize interventions. Rely on and foster the ability of the principal as instructional leader. Finally, use formative assessments that are sensitive to changes in student performance for early identification and evaluation of interventions.

The model articulates five distinct stages for implementing and sustaining a beginning reading program. The first stage consists of auditing a school's current reading program and assessing baseline student performances. The focus at the school level is the use of the Planning and Evaluation Tool (Kame'enui & Simmons, 2000) to conduct the
school audit. The student level focus is on the use of DIBELS and CBM for determining baseline student performances.

During Stage 2, the school level team reviews the audit, identifies strengths and areas for development, prioritizes development plans, and establishes an action plan. At the student level, individual students are identified as either meeting benchmark goals, at risk for reading failure in later years unless provided with strategic intervention planning, or in need of intensive interventions to remediate deficit levels of performance.

During Stage 3, the building team designs the core instructional interventions by specifying the goals of the school, the core curriculum program to be used, time allocated for reading, instructional grouping and scheduling, instructional implementation and training, and the design of a progress monitoring system. At the individual level, intervention planning is conducted for those students requiring either strategic or intensive instructional supports to meet later benchmark goals. The teacher or team specifies individual goals, the curricular and instructional methods to be used for intervention, the amount of instructional time afforded, and a system for monitoring student performance.

Stage 4 involves implementing a progress monitoring system towards stated goals at both the school level and the individual student level. Additional focus at the school level concerns identifying valid and reliable indicators for evaluating program effectiveness, committing resources, determining a schedule for monitoring, and interpreting and communicating results. At the individual level, students identified as requiring intensive interventions are in need of weekly to bi-weekly monitoring, whereas students identified
as merely at-risk may only require monthly monitoring. Benchmark testing resumes for all students at each of the four times scheduled throughout the school year.

The final stage involves an evaluation of achievement towards goals at both the school and student levels. At the school level, the authors suggest engaging in this stage at least three times during the year with respect to both intermediate goals and end of the year goals. Additional analyses may involve an examination of intervention components at the building level, adjustments of instructional practices and curricular materials, as well as a determination of whether to continue key interventions. At the student level, individual students are evaluated in terms of achievement relative to their goals. Students not learning enough should have instructional profiles charted to determine the amount of growth achieved and to determine whether there is a need for reduction, continuation, or intensification of intervention strategies. DIBELS and CBM-Reading are utilized throughout this whole process at each stage of the model at both the school and individual levels. Aggregated DIBELS data for the school are utilized to make decisions at that level, while individual student data are utilized to monitor progress towards each of the scheduled benchmarks through the year. The authors emphasize that use of these measures does not negate the use of other measures or reading dimensions such as vocabulary and comprehension. Instead, student performance on these two measures is used as an indicator for efficient and reliable progress monitoring and outcomes assessment.

Similarly, Smith, et al. (2001) supported the use of DIBELS as having high treatment validity. These researchers found two effective dimensions of a professional development program in early literacy skills that lead to significant increases in student
reading achievement performances: (1) adoption of a dynamic early literacy assessment system, and (2) teacher implementation of research-based early literacy practices. Their article focused on a case study involving an evaluation of the link between teachers' conceptual knowledge about reading development and instruction and student performance on DIBELS. Eighty-five percent of the students who participated in the study were identified as having free/reduced lunch status, which was the highest percentage in the school district. Approximately 15-20% of students were English-language learners. The summary of the article discusses the results in student achievement after 4 years of continued professional development at the school.

The authors found that it is the way the curriculum is implemented, rather than the type of curriculum per se, that seems to make the difference, according to the teachers at this school. The authors argue that changes in teachers' knowledge of research-based practices in teaching reading led to improvements in instruction, which, when combined with ongoing formative assessment of student performance, led to the significant increases in student reading achievement scores. The 4-year professional development program used a series of professional development meetings to gradually introduce teachers to relevant aspects of the early literacy research base. Teachers were asked to prepare for each meeting by reading a selected research article or a one-page synthesis the authors prepared of a report or article. Meetings also provided teachers with opportunities to discuss student responses to literacy instruction, determine which practices to keep, modify, or discontinue, and consider how to align new and existing knowledge about the teaching of literacy skills in kindergarten.
A variety of themes were discussed during the trainings, including big ideas in early literacy, explicit and consistent teacher wording, sequence of letter name and sound instruction for early use of words (e.g., use of high utility letters first), emphasis on systematic review for mastery of skills, establishing appropriate goals and using ongoing assessment to inform decision making, and implementing regular testing. The authors also listed 7 critical components of early literacy instruction:
1. Allocated time for daily, highly focused literacy instruction
2. Consistent routines for teaching big ideas for early literacy
3. Explicit instruction for new letter names and sounds
4. Daily "scaffolding" or assisted practice with auditory phoneme detection, segmentation, and blending
5. Immediate corrective feedback
6. Daily application of new knowledge at the phoneme and letter-sound levels across multiple and varied literacy contexts
7. Daily review

Good, et al. (2003) described how to use DIBELS to evaluate a school's core curriculum and system of additional intervention in kindergarten. The authors indicate the article draws heavily from the information available from schools and districts participating in the DIBELS Data System, a longitudinal data collection system of aggregated school data and student performances using DIBELS. At the time of the article, the DIBELS Data System comprised 300 school districts, more than 600 schools, and more than 32,000 students during the 2001-2002 school year. The authors
caution the reader that (1) the data in the DIBELS Data System may not be representative of the nation at large, and (2) there are currently no procedures in place at participating schools to ensure adherence to standardization practices in the use of DIBELS; thus the data obtained from schools must be analyzed cautiously.

The authors provide a template for formatting school reports. The authors identify important participants in the DIBELS system as school principals, kindergarten and first grade general education teachers, remedial teachers, school psychologists, speech-language pathologists, and other support personnel as available in the school. The authors also identify a number of critical components of an effective beginning reading program. Consistent with Smith, et al. (2001), the authors advocate for a linkage between professional development, realities of the classroom, and the use of student data to give new teachers ways of reflecting on their teaching and choice of materials. School-based reports are encouraged for use in operationalizing four principles of effective professional development: (1) identifying appropriate benchmark goals; (2) links between technical and conceptual components of instruction, early literacy skills, and DIBELS measures; (3) grade level discussion support systems to enhance instructional programming; and (4) use of visual representations to assist teachers in monitoring the effectiveness of their curricular and instructional choices in relation to student performance.

DIBELS may be used to make decisions about core instructional practices and curricular materials. The authors argue that to meet kindergarten benchmark goals, a number of supplemental materials and programs are necessary. The authors advocate for research-based programs which offer the general education kindergarten teacher a variety
of options to support the core lessons of the classroom. School reports are also encouraged for use in evaluating the kindergarten curriculum and determining the effectiveness of the core curriculum and instruction. The authors caution that a supplemental reading program's evidence of effectiveness does not guarantee its successful use with every child needing intensive support. Frequently monitoring the progress of such students helps to ensure the appropriate type and amount of instruction and curriculum supports are provided.

Consistent with Good, et al. (2001), the authors describe the use of DIBELS in an Outcomes-Driven Model. Extending this work, the authors of this chapter provide 7 organizing questions for schools when analyzing their student data. For example, how effective is the core curriculum and instruction in supporting students who are entering kindergarten with benchmark skills to achieve the DIBELS ISF goal in the middle of kindergarten? How effective is the system of additional interventions in supporting students who are entering kindergarten at risk for reading difficulty to achieve the DIBELS ISF goal in the middle of kindergarten?

If schools are interested in factors related to the community context, the effectiveness of preschools in the community, and how much emphasis on early literacy skills exists in the community, then schools could compare their entering kindergarten students' DIBELS results with those from other schools. The entry-level skills can also reflect the language and cultural factors within the larger community. Schools may use early kindergarten DIBELS data (ISF and LNF scores) to assess the degree to which students require intensive instructional supports. Where significant numbers of students (approximately
more than 20 percent) require additional or intensive supports, a school should review and adjust its core curriculum and system of additional interventions. The authors provide a table for schools to compare their kindergarten students' entry level achievement performances with other schools participating in the DIBELS Data System, a nationwide collection of DIBELS data.

With regard to mid-year performances of kindergarten students, the authors suggest a student with risk indicators in two or more areas may require intensive interventions to achieve early literacy goals. It is also important to analyze mid year patterns of student performance on the DIBELS measures to identify any measurement error or problems with the integrity of the assessment process. Three types of analyses may be utilized within the context of answering this second question. First, how well do the current kindergarten students perform mid-year in relation to previous years? Second, how well do the current kindergarten students perform mid-year in relation to other schools in the DIBELS Data System? Finally, how well do current kindergarten students perform mid-year in relation to the school's desired kindergarten mid-year goals?

If a school wishes to examine the effectiveness of the core curriculum and instruction in assisting most of the students in reaching the mid year benchmark goals in kindergarten, DIBELS data may be used to facilitate such an examination. An effective and appropriate core curriculum and instruction should ensure that the majority of those students who reached beginning of the year benchmarks are able to also reach mid year benchmarks and end-of-the-year benchmarks. The authors provide a table for normative assessment using other schools in the DIBELS Data System. The authors advise that,
given the normative data available in the DIBELS Data System, there exists marked variability in schools' core curriculum and instruction in supporting those students who met beginning of year benchmarks to achieve mid year benchmarks. The authors suggest it appears most schools do not adequately address the skill of initial sounds in the first half of the kindergarten year.

DIBELS data can also be used to examine how effective the system of additional interventions is in supporting students entering kindergarten who are at risk to meet mid year benchmark goals. The normative data provided by the authors are alarming in that a typical school in the DIBELS Data System supports only 3% of the students with an intensive instructional recommendation at the beginning of the kindergarten year to meet mid year benchmark goals. The normative data also suggest 45% of schools do not support any of the students with a recommendation at the beginning of the year for intensive instructional support to meet mid year benchmarks. This does not necessarily mean that schools are not doing anything to help these students, but rather that many of these students are not benefiting from the services provided in time to meet mid-year benchmarks. In the DIBELS Data System, the focus for end of year kindergarten goals has been on the PSF and NWF measures, while LNF is treated as a risk indicator. The authors state that while the research shows naming letters is a good predictor of later literacy skills, why it is a good predictor is less clear. Benchmark achievement at the end of the kindergarten year seems to be a good predictor of end of year first grade performance. The DIBELS Data System demonstrated that 87% of students who reached end of kindergarten goals reached the DIBELS ORF goal of 40 words per minute or more at the end of first grade.
The DIBELS Data System may also be used to address how well a school compares to other schools in the DIBELS Data System on end of year kindergarten goals. In a typical median school, 15% of students have a recommendation for intensive supports, 17% are identified as requiring strategic supports, and 65% are identified as requiring the benchmark general education core curriculum and instruction. The authors provide a normative table for schools to evaluate their end of kindergarten outcomes compared to other schools. The sixth organizing question addresses the effectiveness of the core curriculum and instruction in supporting students who met mid year benchmarks to achieve the DIBELS PSF goal at the end of the kindergarten year. The primary purpose is to identify areas for improvement in the kindergarten program. The authors suggest disappointing end of year outcomes may be attributable to (1) low early literacy skills in the middle of kindergarten, (2) inadequate focus provided by the core curriculum and instruction to emphasize essential components of early literacy skills, or (3) a system of additional interventions that is not providing adequate support for students identified as at-risk.

Finally, the DIBELS Data System can offer a school an opportunity to examine its system of additional interventions in supporting students identified as at-risk for middle of year kindergarten goals to achieve end of kindergarten PSF goals. Normative data are again provided for schools to evaluate their system of supports in comparison to other schools. The normative data suggest typical schools in the Data System can support approximately 26% of these students. Many schools, however, are able to support at least 91% of students with intensive needs at mid year in kindergarten.
The authors close by providing a few observations from their experience in helping schools utilize DIBELS to improve schools' abilities to meet their literacy goals. First, data are helpful because teacher perception is not always accurate. Second, changes in outcomes at one grade level precipitate changes in the next grade level. Next, grade level data across classrooms indicate much about the general way of doing business within a school. Also, outcomes are stable and replicable unless big changes in curriculum, instruction, and the system of additional intervention are made. Finally, even when schools have very different orientations to beginning reading instruction, evaluation of student outcome data can be used by schools to change reading outcomes.

Teacher Training and Data Utilization

In addition to the general findings provided by Chen and Rossi (1983) regarding implementation of programs in an educational setting and the results of the study conducted by Smith, et al. (2001), additional research has investigated the link between teacher utilization of assessment data to make instructional decisions and the degree of training necessary to improve student outcomes (Fuchs, Wesson, Tindal, Mirkin, & Deno, 1982; Fuchs, Deno & Mirkin, 1984; King, et al., 1983; Wesson, et al., 1983). The combined results of these studies suggest the provision of assessment data alone is insufficient for ensuring teacher utilization of data to make instructional changes that improve student performance. Furthermore, teachers require high levels of support and ongoing professional development, in addition to technically adequate assessment data, to ensure high quality instructional changes are made to improve student performance.
Fuchs, et al. (1982) investigated how the introduction and use of data-utilization strategies affect the number of modifications teachers make in their classroom and student performance. Teachers were trained during a one week schedule of full day workshops prior to the start of the school year and during bi-monthly, half-day workshops throughout the school year. The focus of training was on teaching teachers to write curriculum-based IEPs, create a curriculum-based measurement procedure, measure frequently and graph student progress toward IEP goals, and develop strategies to improve the feasibility of implementing the frequent measurement systems. Results indicated that measurement of student progress alone was insufficient for ensuring that those data would be used to make instructional changes in students' educational programs. The authors concluded teachers require specific evaluation procedures and data-utilization rules to ensure that they use data to make instructional changes that increase student performance.

In a study investigating the effects of repeated and frequent testing on student academic achievement, Fuchs, et al. (1984) found repeated use of CBM-Reading combined with evaluation procedures for data analysis positively affected both student achievement and student awareness of their own achievement. Students in the experimental group performed better than students of comparison teachers on virtually all achievement measures, rate and accuracy of oral reading scores, and performance on the Stanford Diagnostic Reading Test. What is most interesting about this study is that teachers were not instructed on what changes to make in response to assessment data. Teachers were merely scheduled to evaluate data trends and levels at frequent intervals
88 and “introduce a new dimension” into the student’s program where performance was found to be unsatisfactory. Response to Intervention (RtI) Passage of the reauthorization of the Individuals with Disabilities Education Improvement Act (IDEA 2004) in 2004 launched a move ment across the country for incorporating a new model of service delivery and d etermination of special education eligibility. At the heart of this new initiative i s the use of a problem-solving model for achieving better educational outcomes for children (Castillo, Cohen, & Curtis, 2006). Decisions made towards eligibility for students sus pected of having a learning disability would require the evaluation of a student’s respons e to interventions provided in the school. As Castillo et al., indicate, this form of service delivery involves substantial systems’ change. This national movement of systems change is expected to incorporate evidence-based instructional practices for all stud ents and the use of progress monitoring data to evaluate student growth in academic outcome s such as reading. In a review of four different types of approaches to the assessment of children and adolescents with learning disabilities, Fletcher, F rancis, Morris, and Lyon (2005) reported that a Response-to-Intervention model was more reli able and valid than the use of traditional aptitude-achievement discrepancy, low a chievement, and intra-individual difference models. However these researchers advoc ate for a combined RtI and low achievement model for assessing students suspected of having learning disabilities. The general concept of an RtI model of is an approach t o linking several short assessment probes while intervening with a child in a specific content area (Fletcher et al., 2005).

Thus, underachievement may then be operationalized, in part, by the relative responsiveness of the student to empirically validated interventions provided to the student (Gresham, 2002).

Griffiths, VanDerHeyden, Parson, and Burns (2006) discussed the practical applications of an RtI model. They characterize an effective RtI model as having three specific features: (1) systematic data collection to identify students who may be at risk for learning difficulties; (2) effective implementation of interventions for adequate durations; and (3) a review of student progress data to determine the level of treatment and density of services. These researchers further identified two basic approaches to RtI in their review of the relevant research literature: (1) a standard protocol model and (2) a problem-solving model. At the time of the present research study, a statewide program evaluation study was underway toward identifying the critical elements needed for successful implementation of a problem-solving RtI model of service delivery in Florida (Batsche, Curtis, Dorman, Castillo, & Porter, in press; Castillo et al., 2006). In short, a problem-solving/RtI model of educational service delivery is very consistent with Kaminski et al.'s (2008) Outcome-driven Model for using DIBELS data and holds promise for educators for increasing student achievement outcomes and treating academic difficulties through a prevention-oriented approach.

CHAPTER THREE

METHOD

Purpose

The present study was conducted to provide information that may be useful for improving the implementation and use of DIBELS, features of DIBELS data collection and analysis methods at the school building level, and Reading Coach consultation efforts. The results were aggregated and summarized in a manner that allowed an understanding of what was occurring in the field, through a description of interviews and focus groups, as well as the identification of salient issues or variables for future research. Overall, the research literature suggests that teachers' skills in the use and analysis of data to guide instruction are a topic that must be understood (Kerr et al., 2006; Smith et al., 2001). Therefore, the general purpose of the present study was to understand teachers' perceptions and uses of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) within a Reading First program.

Research Questions

Based on the research literature and the purpose of the study, three research questions were formulated related to teachers' use of DIBELS data:

1. What are teachers' perceptions and understanding of DIBELS and the PMRN?

2. How do teachers' understanding and use of DIBELS data, as presented in the PMRN reports, compare to those of Reading First experts who are provided with the same information?

3. What attitudes and perceptions exist among persons other than teachers who participate in the collection, input, and analysis of DIBELS data throughout the school year?

Research Design

The present study used a basic interpretive and descriptive design (Merriam, 2002a; 2002b) for the purpose of understanding teachers' perceptions and use of DIBELS. Data were collected through a combination of semi-structured interviews, case study interviews/observations, and focus groups. Throughout the data collection process, contrary evidence was explicitly probed depending on participant responses to interview/focus group questions. Data analyses involved several procedures, which may be classified as early concurrent analyses during data collection efforts and later descriptive analyses. Coding of data involved first-level coding and pattern coding procedures (Miles & Huberman, 1994) combined with constant comparison methods (Patton, 2002). Within-case and cross-case comparisons were conducted in order to identify convergence of themes and observe variability among responses. A number of research strategies were used to ensure validity and reliability, including inter-rater agreement checks, triangulation across multiple sources, member checks (e.g., participant feedback on the accuracy and completeness of comments received), description of the researcher's position, peer review/examination, data saturation, and an audit trail. The units of analysis in the present study were both the individual and the group level. Individual teachers participated in a one-time semi-structured interview and case study review, and results were analyzed at both the individual and group level.

Reading Coaches and specialists participated, respectively, in a one-time focus group, and results were analyzed at the group level. Experts participated in a one-time case study review, and results were analyzed at the individual and group level.

Participants and Sites

Description of Sites

Information available through the Florida Department of Education (FLDOE) identified the participating school district as one of the largest school districts in the State of Florida, serving a student population of more than 100,000 students. The school district employed approximately 14,500 employees across 169 schools. At the start of the present study, 54 of the district's 87 elementary schools were identified as Reading First schools in their fourth year of implementing the Reading First grant.

Participant views and comments were sampled across approximately 28% of these schools (i.e., 15 elementary schools). A total of 14 teachers were sampled across nine elementary schools. Eight Reading Coaches were sampled across eight elementary schools. Six "specialists," due to their itinerant status, held positions across 13 different schools but reported participating in the input or analysis of DIBELS data at only seven schools, collectively. Table 3 shows the representation of teachers, Reading Coaches, and specialists among the participating schools.

Description of Participants

The participants for this study were seven kindergarten and seven first grade teachers, eight elementary school site-based Reading Coaches, two experts on the use of DIBELS and the Progress Monitoring and Reporting Network (PMRN), and six school specialists (e.g., non-teaching support service staff who were involved in the collection, input, and analysis of DIBELS data at Reading First schools). "Experts" in the present study were selected based on involvement in statewide efforts to provide training and technical assistance to school districts in the use of DIBELS and the PMRN. "Specialists" in the present study, all combined, represented the roles of school psychologist, academic diagnostician, and ESOL instructional support (an instructional support program for students for whom English is a second language). These individuals held itinerant positions across several schools, both Reading First and non-Reading First.

Demographics were collected for all teachers who participated in the present study. All teachers were female, and reported years of experience at their present grade level at their present school ranged from 2 to 26 years. Half of the teachers (seven) reported less than 10 years of total experience teaching reading in elementary schools, while the remaining seven reported more than 10 years. Teachers' ages ranged from 21 to 65 years. Teachers identified themselves as Caucasian (13) and African American (1). Educational credentials included Bachelor's degrees in K-12 education (11) and Master's degrees in education (3).

Sampling Methods and Rationales Used for Selecting Participants and Settings

Selecting the School District

A combined purposeful criterion sampling and purposeful convenience sampling method (Patton, 1990) was used to identify one school district that had implemented the Reading First program for at least two years. The convenience sampling method was intended to keep down the resource costs associated with travel and the time required for conducting the study. The selection of a school district using a convenience sampling method was based upon distance and approval for conducting the present study. The criterion of a two-year minimum of implementation was established based on the assumption that an initial amount of time is required for a school district to align its resources and standardize its implementation efforts district-wide before any evaluation of the program or its components can be appropriately conducted.

Selecting School Sites

A purposeful criterion sampling method was used to select potential sites for recruiting teachers, Reading Coaches, and specialists. First, all Reading First schools in the chosen school district were identified. Of those schools, 54 were identified as having implemented the Reading First grant for more than two years.

Selecting Teachers, Reading Coaches, and Specialists

A purposeful criterion sampling method combined with participant self-selection was used to recruit teachers, Reading Coaches, experts, and specialists. Teachers, Reading Coaches, and specialists were invited to participate in the study by providing invitations to all 54 Reading First schools identified as implementing the Reading First grant for more than two years. The criteria for participation were a minimum of two years of experience working at one of the 54 Reading First schools and, for teachers only, a minimum of two years of experience teaching at their current grade level in either kindergarten or first grade. These two criteria were established because it was necessary to ensure that participants had sufficient time to participate in their school's Reading First implementation efforts.

It was the primary investigator's judgment that two years of experience would be a sufficient minimum criterion. Specialists also had to have been involved in the collection and/or analysis of DIBELS data at their assigned school. Kindergarten and first grade teachers were targeted for participation because the combined DIBELS measures used at those grade levels encompass the entire DIBELS assessment. When DIBELS is used as a benchmark screening tool, second grade uses only the Nonsense Word Fluency and Oral Reading Fluency subtests, and third grade and above primarily use the Oral Reading Fluency measure for benchmark screening.

Sample Size

Teachers. The number of teacher participants selected for the present study was determined by using a combined incidence probability sampling method (DePaulo, 2000) and a saturation method (Fossey et al., 2002; Patton, 1990; 2002). Appendix B2 shows a probability sampling table developed by DePaulo (2000) which may be used to determine a minimum starting point for selecting the number of participants for a given qualitative study. Use of this probability table required an initial assumption about the incidence rate of a dissenting or favorable point of view. Informal discussions held by the researcher with Reading Coaches and school psychologists who work in Reading First schools revealed, anecdotally, that approximately one-fourth to one-half of the teachers in the Reading First schools where they worked articulated positive perceptions about the use of the DIBELS assessment.

This suggested an incidence rate of at least 25% favorable opinion regarding the use of DIBELS. Based on this assumption, and by use of DePaulo's probability table for qualitative sampling, selecting a random sample of at least 10 teachers should allow for collection of all relevant, and potentially distinct, perceptions with only a 5% probability of not identifying a unique perception for inclusion in the study. Further dialogue with Reading Coaches in the field prior to conducting the study suggested that the knowledge needed to understand and use DIBELS was not unique to either kindergarten or first grade teachers. Thus, teachers were viewed as one group rather than two separate groups.

A saturation method (Fossey et al., 2001; Patton, 1990; 2002) was utilized beyond this minimum number to ensure no new information was available and to add credibility to the selection process. The following set of procedures was followed for determining the need for additional participants beyond the established minimum. First, following collection and an initial review of the first 12 teacher interviews and case study observations (six kindergarten and six first grade), two additional teachers were recruited. If either of these two teachers had provided any new information or perspectives, then an additional two teachers would have been selected. These steps would have continued until a saturation point had been reached. A total of 14 teachers (seven for each grade level) were sampled in accordance with the sampling methods described above, as no new information was obtained from the last two participating teachers.
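
As a rough check on the 5% figure cited above (an illustrative calculation under an independence assumption, not a reproduction of DePaulo's table), if a viewpoint is held by 25% of teachers and 10 teachers are sampled at random, the probability that no sampled teacher holds that view is

P(\text{viewpoint absent from sample}) = (1 - 0.25)^{10} \approx 0.056,

or roughly 5-6%, consistent with the probability reported in the text.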

Experts. Because teacher perceptions were the primary focus of the study, and because experts were only needed to provide an expert opinion standard against which to compare teachers' perceptions and comments regarding the case study presented to them, it was determined that it was not necessary to sample several experts for comparison with teachers. However, it was decided that at least two experts should be sampled in the event that a given expert made recommendations or comments unique to him or her.

Focus Groups. A minimum of five persons and a maximum of ten persons were used as the criteria for conducting the two focus groups (Patton, 2002). This sample size was selected to ensure a minimum level of diversity in views while avoiding a group so large that it would adversely affect the quality of responses among participants. Additionally, because teacher perceptions and use of DIBELS were the primary focus of the present study, non-teaching participants were included to provide alternative perceptions with which to compare teachers' responses through triangulation across multiple sources (Patton, 2002).

Data Collection Instruments

Semi-Structured Interview Guide – Teachers

Appendices C through F contain the instruments used in the study. Appendix C illustrates the original interview guide developed for use during the semi-structured interviews with teachers. To answer the first research question, the focus of the interview was to address (1) topics identified as important for understanding based on a review of the relevant research, including specific knowledge about DIBELS, level of training on the utilization of DIBELS data, and the climate or culture of supports in relation to DIBELS and assessment in general; and (2) any unique perceptions or uses of DIBELS emerging from interviews with teachers.

Changes or emphases made to the interview guide in response to teacher comments were analyzed and are included in the Results section of this report. The Teacher Interview Guide was intended to take approximately thirty minutes. Only one version of this form was used, as the questions asked of teacher participants were not exclusive to any particular grade level.

Case Study – Teachers and Experts

The case study materials were developed to answer the second research question. Appendix D illustrates the Case Study Questions and PMRN data reports used in the present study for conducting the teacher and expert case study review. The follow-up questions listed were used as needed based on teacher/expert comments. The case study review involved a kindergarten and a first grade version, each depicting a particular student at the end of the year (i.e., the 4th assessment cycle). Teachers were given the case study corresponding to the grade level they were teaching at the time of the interview. Three PMRN reports involving real student data were shared with teacher participants. All identifying information for the case study student, class, school, and district was removed for confidentiality purposes. The case studies involved the following specific reports: the Class Status Report (Appendix D2 & D5), the Student Grade Summary Report (Appendix D3 & D6), and the Reading Progress Monitoring Student Cumulative Report (Appendix D4 & D7). These reports were chosen from the variety of reports available through the PMRN based on recommendations by Reading Coaches currently serving Reading First schools who work with teachers on a daily basis.

The case study review was intended to take approximately thirty minutes to complete.

Description of PMRN Reports

The Class Status Report showed how well students in the class performed on a current DIBELS assessment cycle. This report allowed the user to organize the class list either in alphabetical order or in order of "Recommended Instructional Level" (i.e., intensive, strategic, or initial). For purposes of this study, the class list was organized in order of "Recommended Instructional Level." The color coding system used on this type of report also communicated the level of concern regarding a student's performance. RED indicated that a student's performance on a given subtest was well below expectations. YELLOW indicated an at-risk performance. GREEN indicated that a student's performance was considered to be meeting benchmark expectations, while BLUE indicated a performance that may be considered above average. The same color coding system was used for the "Recommended Instructional Level" indicator, except that BLUE was not used. Students demonstrating a profile that was above benchmark expectations for the particular assessment cycle were listed with GREEN "Initial" as the "Recommended Instructional Level."

The Student Grade Summary Report provided a comprehensive summary of a student's performance on the most recent assessment as compared to the class. Box plot graphs were designed to communicate class performance between the 25th and 75th percentiles, including a horizontal line within the box that communicated the median class score. Whiskers indicated student performances above the 75th percentile or below the 25th percentile. Individual data points above or below the whiskers represented outliers whose scores fell within the upper or lower 5th percentiles.

These box plots were depicted in relation to the benchmark goal for the specific subtests used in the assessment cycle, designated by a solid green line. The graph allowed a teacher to evaluate a student's performance in relation to both the class average and the benchmark expectation. The student's position in relation to the class was indicated by a small colored flag with letters indicating the student's risk status: HR for High Risk, MR for Moderate Risk, LR for Low Risk, and AA for Above Average. The same color coding was applied to the student flag.

The third report was the Reading Progress Monitoring Student Cumulative Report. This report provided a set of tables showing the student's progress on each subtest given throughout the school year for each quarter of benchmark assessments. This report also provided an overall Recommended Instructional Level (RIL) for each assessment period. The instructional level descriptors used were Initial Instruction (ii), Strategic Instruction (S), or Immediate Intensive Instruction (iii). End-of-year outcome tests such as the Peabody Picture Vocabulary Test (PPVT) and the Stanford Achievement Test – Tenth Edition (SAT-10) were also provided on 3rd cycle reports. This report allowed a teacher to evaluate a student's progress on a particular subtest/skill (e.g., Nonsense Word Fluency) and identify the student's change in performance status as the year progressed (e.g., strategic instruction).

Focus Group Guide

Appendix E illustrates the Focus Group Guide used for investigating the third research question, which involved two separate focus groups: one with Reading Coaches and one with specialists.

The questions used were developed to understand non-teacher perceptions and use of DIBELS, as well as non-teacher perceptions of teacher use of DIBELS. Each focus group was intended to be approximately one hour and thirty minutes; the final thirty minutes was used to summarize findings, based on notes taken during the session, for validation purposes.

Instrument Validation Procedures

A peer review/examination method (Merriam, 2002b) was used to field test the above instruments prior to data collection. Peers in this context included researchers experienced in qualitative research methods, researchers knowledgeable in the use of DIBELS, Reading Coaches and a Reading Coach supervisor, and elementary teachers. The comments and feedback received from all reviewers were not recorded or used as primary data for the present study, though they were used to modify the instruments as needed.

A district-wide Reading First supervisor provided feedback about the interview guide, case studies, and focus group guide. Recommendations and changes needed as a result of consultation with the Reading First supervisor were made to the interview guide, focus group guide, and case studies. The Reading First supervisor was asked to review all instruments for clarity, appropriateness of questions, and relevance to the Reading First model. As a priority, comments were solicited on the equivalence of the two case studies. This was important because it directly affected whether the kindergarten and first grade groups were treated as independent groups.

The issue of equivalence for this study concerned the degree to which the set of skills needed to analyze one case study was sufficient for analyzing the other. Feedback received from the Reading First supervisor confirmed the equivalence of the two case studies. Therefore, only one case study guide was needed for use by both kindergarten and first grade teachers.

The Interview Guide and Case Study Questions were additionally field-tested by inviting one kindergarten and one first grade teacher to review the questions and provide feedback on their clarity and appropriateness. Teachers validated the appropriateness and clarity of the questions being asked within the instruments.

The Focus Group Guide was additionally reviewed by a researcher at the local university with expertise in conducting focus groups in order to determine the appropriateness of the questions, the structure of the format, and any other critical information that would ensure an appropriate and credible focus group process. Revisions were made by the researcher following the university researcher's review of the Focus Group Guide.

Data Collection and Analysis Procedures

Site Entry

Due to the participating school district's IRB review, all recruitment solicitations had to be provided to all potential school sites and approved by school principals before consent for participation could be obtained from any potential participant. Further, the researcher was prevented from identifying and contacting any potential participant directly. School district research compliance requirements also required a district employee to sponsor the research study.

For this purpose, a school district Reading First supervisor was contacted to provide support (see above). Upon receipt of support for conducting the research study, it was agreed that all results would be shared with the district and participants. Additionally, the district Reading First supervisor offered to contact all 54 schools through e-mail to announce the start of the research study and to acknowledge the district's support for the study.

Following the introductory announcement provided by the district Reading First supervisor, packets of invitations for each of the three groups of persons identified above, as well as for building administrators, were sent to all 54 Reading First schools identified as eligible for participation. The following materials were included in each packet: (1) a principal invitation cover letter with a blank consent form, addressed to the school principal, introducing the research study and its purpose along with a solicitation for support in distributing the enclosed materials and for participation in a principals' focus group; (2) a teacher invitation cover letter and blank consent form; (3) a Reading Coach invitation cover letter and blank consent form; and (4) a specialists' cover letter and consent form. With the exception of the Reading Coach materials, five copies of the teacher cover letters/consent forms, as well as of the specialists' cover letters and consent forms, were provided for distribution by the school principal. All invitations provided the researcher's personal cell phone number and email address as contact information. All potential participants were encouraged through the invitations to contact the researcher if they were interested in participating or if they had any questions about the study.

Obtaining Consent to Participate

General procedures for obtaining consent. In general, all individuals asked to participate in any aspect of the present study were required to give written consent using a consent form approved by the supervising university's institutional review board (IRB) for the protection of human research participants. The consent form was written in full accordance with the university's IRB policies. Overall, participants were given a general overview of the purpose and focus of the study. All information identifying any participant, including organization and school site names, was kept strictly confidential and known only to the principal investigator. The consent form detailed the procedures that were followed to ensure a high level of confidentiality. The consent form also provided the participants with contact phone numbers and email addresses for this investigator as well as for supervising personnel from the university. A statement of all risks and benefits of participating in the study, as well as informed notice of a provision to refuse participation at any time, was included in the consent form. Additionally, to ensure confidentiality, specific contact names and contact numbers or email addresses of all participants were secured on a desktop computer accessible only to the principal investigator and protected by a username and password.

Obtaining consent from Reading First supervisor. One Reading First supervisor was asked to participate in the review of instruments for quality assurance and in the member check procedures detailed below.

Upon written consent, the Reading First supervisor was asked to comment on the instruments as described above and was later asked to engage in member check procedures to provide validation of findings.

Obtaining consent from teacher participants. Each teacher who responded either by email or by phone and who expressed an interest in participating was contacted to schedule a date and time that was convenient for them. No travel was required of any teacher for the purposes of this study. At the start of each individual interview with a teacher, time was taken to go through the consent form and provide full disclosure of the study's title and purpose. However, teachers were not informed at first that experts would also be involved in the study and asked to comment on the case studies for the purpose of comparison with teacher responses. Following all data collection, teachers were informed of this procedure and all results were shared with them (see below). Teachers were asked to provide written consent on the day of the scheduled interview if they chose to participate.

Obtaining consent from DIBELS experts. Contact was made with a representative of a state agency providing training and technical assistance to schools on the use of DIBELS and the PMRN. A description of the present study was provided, and a copy of the appropriate consent form was faxed to the agency. Permission was provided to contact two experts who had sufficient expertise and knowledge of DIBELS and the PMRN. Each expert was contacted by phone separately and asked to participate in the present study. All questions were answered, and full disclosure of the study's purpose and scope was given. Upon verbal consent, a copy of all materials associated with the case study review, including a copy of the consent form, was mailed to each expert with a return envelope for receipt of the written consent.

Both experts were provided with a copy of the kindergarten and first grade case studies. Following receipt of written consent, a phone conference was scheduled with each expert to conduct the case study review.

Obtaining consent from focus group – Reading Coaches. Each Reading Coach who contacted the researcher was thanked for their interest and informed that the focus group would be scheduled once a sufficient number of participants was available. Upon receiving sufficient numbers, all potential participants were contacted by email to begin identifying a time and date convenient for all. All participants who attended were informed of the purpose of the study and asked to complete a written consent form if they chose to participate in the study.

Obtaining consent from focus group – specialists. Each specialist who contacted the researcher by phone or email was thanked for their initial interest and informed that the focus group would be scheduled once a sufficient number of participants was available. Upon receiving sufficient numbers, all potential specialists were contacted for the purpose of finding a date and time convenient for all who participated. On the day and time selected by the group, each member was provided full disclosure of the purpose and title of the study and asked to provide written consent if they chose to participate in the study.

Researcher Biography

The researcher's background and biases were identified and documented prior to the initiation of any data collection or analysis procedures to allow for monitoring of potential bias that might influence the data collection and/or analysis phases of the study.

The researcher's biography was also documented to add credibility to the study (Marshall & Rossman, 1999; Patton, 1990). The research literature used in preparation for the present study provided an initial context for documenting the researcher's biography and potential for bias.

Background and Experience

The researcher has worked with DIBELS for several years through graduate training and has been employed within the participating school district as a school psychologist. Both of these experiences have led the researcher to positively value DIBELS and advocate for its use in the schools. The researcher has participated in trainings on the use of DIBELS and has also facilitated training on the use of DIBELS. Some of the earliest training experiences on CBM-Reading, and subsequently DIBELS, occurred at the University of Oregon, where the DIBELS was developed. Further background experience relevant to the present context included an advanced practicum project working with representatives from the Florida Center for Reading Research. This experience involved a review of seminal works on reading development and instruction, combined with a review of the Reading First model in Florida, as an exercise in learning how research can inform educational policy. Additional background experiences also involved four years of teacher training in educational psychology at a local college and remedial after-school tutoring in reading for students in elementary school grades.

Researcher Beliefs and Expectations

The following beliefs and/or expectations concerning the use of DIBELS are provided as the researcher's biases:

1. DIBELS is a reliable and valid assessment tool for screening, progress monitoring, and evaluating student outcomes in early literacy skills.

2. DIBELS, as currently used in the Reading First program in the State of Florida, uses five subtests which collectively measure three of the five skill domains involved in reading development identified by the National Reading Panel (2000).

3. The PMRN system is useful for tracking student progress and for identifying target skill areas for remediation and intervention design for students who are struggling based on the DIBELS results.

4. Interpretation and utilization of DIBELS results requires an understanding of the five skill domains involved in reading development and how they are connected or associated (i.e., phonemic awareness, phonics/decoding, oral reading fluency, vocabulary, and comprehension).

5. There is a difference between screening and progress monitoring.

6. DIBELS is a time-efficient, cost-effective, and sensitive assessment tool for monitoring progress toward reading outcomes. "Sensitive," in this context, refers to the degree to which an assessment tool can reliably identify small gains in performance.

7. It is advisable to collect DIBELS data on a weekly or bi-weekly basis for students who are not meeting benchmark standards defined by FCRR.

Following the identification of the researcher's beliefs and/or assumptions, a list of hypotheses was developed to provide greater organization and focus to the investigation:

1. There exists moderate to high variability in teachers' perceived value of DIBELS in assisting with their students' learning needs.

2. There exists substantial variability in the perceptions of non-teacher participants involved in the implementation of procedures for using DIBELS at the school-building level.

3. Given the various other assessment tools used at the classroom level and the overlapping schedules for administering those assessments (e.g., FCAT, county-wide assessments, Lexile assessments, Running Record assessments, etc.) within a school district, teachers' perceptions of using DIBELS are negatively impacted.

4. Teachers understand what DIBELS is and what it measures, but are discouraged or unsure about how to best utilize the data obtained.

5. Given the multiple competing demands and seemingly fast-paced nature of school activities, teachers are not accessing their class/student reports on the PMRN, but rather are provided such reports, if any, by the school's Reading Coach or school-based DIBELS team.

6. A low level of direct involvement with and use of PMRN reports serves as a barrier to utilizing the data effectively or efficiently.

Data Collection Procedures

General procedures. A variety of procedures were followed regarding the data collection and data management process for the present study. Data analysis procedures are discussed below; however, they were followed concurrently as data were gathered. In general, the following three types of data were collected: teacher interviews and case study reviews, expert reviews of the kindergarten and first grade case studies, and focus groups with Reading Coaches and specialists.

All teacher interviews took place in the teachers' respective classrooms after school hours on a date and time chosen by them. Interviews with experts took place via phone calls to each expert, individually, on a date and time chosen by them, respectively. Focus groups took place in a large meeting room located at the participating school district's administration building at a time and date convenient to all participants. All participants were informed of the researcher's role as a school psychologist and of the purposes of conducting the study.

All data were recorded using a digital audio recorder for transcription and later analysis. No information identifying the participant(s) was recorded on the digital recorder. All tapes were downloaded onto a personal computer and were coded to ensure that no identifying information was available from their use. Only the researcher had access to the coding system used to identify the tapes. All participants were informed that the tapes and observation notes were coded to ensure confidentiality, and they were prompted when the recording device was turned on and turned off. They were also informed that the purpose of using the audio recording was solely to keep accurate records for later analysis and that only the researcher would have access to the audio tapes. The specific data collection procedures below are presented in the order of the research questions.

Teacher interviews. All interviews were conducted by the researcher using the Teacher Interview Guide (Appendix C) and the appropriate grade level case study. Field notes were taken at the start of each interview to facilitate later analysis and to provide additional observations not available through the primary interview.

Appendix F provides the format used for documenting field notes. Additional comments, observations, and thoughts prompted by each interview were also recorded using the digital audio recorder after each interview for the purpose of adjusting interview questions, providing context for later analyses, and identifying emerging and anticipated topics discussed during the interviews.

The Teacher Interview Guide (Appendix C) was utilized in a semi-structured format (Patton, 1990). The questions used in the guide were derived from a review of the relevant research and information available about DIBELS and the Reading First grant in Florida. Follow-up questions that were not on the Interview Guide were used as needed to check for clarification and accuracy of understanding of teacher responses. An active search for contrary evidence and alternative perspectives was routinely conducted during each interview in order to test the limits of participant perspectives and to provide greater reliability. The interview was designed to last 30 minutes.

Teacher case study review. Following the use of the Interview Guide, each teacher was asked to comment on their perceptions and thoughts concerning a case study presented to them. Appendix D contains the Case Study Questions and PMRN reports that were used with each teacher for their respective grade level. Teachers were not provided with any background knowledge about the student other than the student's gender, grade level, and difficulty with reading. Questions asked by the teacher concerning additional information about the student's characteristics or history of achievement were captured in the audio recording and written up in the transcripts for later analysis.

Follow-up questions were provided as needed based on teacher responses to the case study materials. Teachers were generally asked to comment on any aspect they felt was important for discussion and analysis of the case study. The case study review was intended to last only 30 minutes.

Teachers were not initially informed that their responses to the case study would be compared to those of experts who were provided with the same case study, as this would have presented a potential confound to the study. There were no known risks associated with excluding this information from participants. All teacher responses were aggregated for comparison to expert opinion. Individual teacher responses were used only as short direct quotes that highlighted a general theme or topic mentioned by many teachers, or that provided a strong example of a teacher perception that would not otherwise be conveyed through aggregation or paraphrasing. In such cases, no participant's name, assigned school, or school district was used in any way in the reporting of results in the present study. Teachers were debriefed about the involvement of experts when provided with preliminary results as part of the member check procedure.

After each interview was completed, each teacher was informed of the option to contact the researcher if they had additional questions or comments following the interview. Additionally, each teacher was asked for verbal permission to be contacted via email to check the accuracy of their comments and to solicit any additional comments based on the overall results. All teachers gave verbal permission to be contacted again when results were available for comment.

Expert opinion interviews. Following consent to participate, each expert was independently interviewed and asked to comment on each of the two case studies through a telephone conference call, one case at a time, using the same Case Study Guide (see Appendix D) that was used for teachers. Follow-up questions listed on the Case Study Guide were used as needed depending on the responses made by each expert. Additional questions prompted by expert responses were documented and recorded using the audio recorder for later analysis. The expert interview was intended to last approximately 30-45 minutes. At the end of each interview, experts were asked for verbal permission to be contacted again if further clarification or checks for accuracy were needed during the analysis phase of the study. Both experts provided verbal approval to be contacted when results were available for comment.

Focus groups. The Focus Group Guide (Appendix E) was used during the two focus groups. Questions were developed based upon an understanding of the research concerning DIBELS and knowledge about the Reading First grant in Florida. Both focus groups were facilitated by the researcher with the assistance of a graduate student who had field experience in recording meeting notes. The graduate student also assisted in facilitating a review of the comments captured, for the purpose of validating their accuracy and ensuring that no additional themes/topics were missed. The researcher consulted with the graduate student assistant before both focus groups in order to assist the assistant in understanding the focus and purpose of the study. Field notes were taken throughout the focus group sessions using a large presentation-style note pad in full view of all participants.

Field notes were written as short phrase quotes or paraphrases of comments made by focus group participants. Each focus group session lasted 90 minutes, with the first 60 minutes allocated for focus group discussion using the Focus Group Guide. The final 30 minutes were used to review all field notes with the focus group participants to clarify the information recorded and to check for accuracy in the understanding of participant responses. Additional comments or clarifications made were written down on the large note pad in addition to being captured on the audio recorder. Additional comments provided by the facilitator, or made in response to facilitator questions not included in the guide, were also recorded.

At the conclusion of each focus group, participants were asked for verbal permission to be contacted by email if further clarification or checks for accuracy were required as the data were analyzed. They were encouraged to contact this investigator after the focus group concluded if they had any other comments or ideas they felt would be helpful, or if they had thoughts or feelings that they did not feel comfortable sharing in the group setting.

Data Management and Storage

The use of a digital recorder allowed all audio recordings to be downloaded onto a personal computer. Each audio recording was coded and saved using a coding system known only to the researcher. All audio recordings of interviews, focus groups, and field notes/observations were transcribed using a computer word-processing program (i.e., Microsoft Word). All computer audio files and transcript files were secured using a password known only to the researcher.

The elapsed time in minutes was recorded for each teacher comment to allow a later analysis of the duration of time spent discussing a particular topic and/or of the percentage of time the teacher spent talking compared to the researcher.

Each transcript was saved using the same corresponding identifier as was used for saving the audio recordings. Each transcript was then saved as a copy to be used for printing and analysis, to ensure the original transcript was not compromised during analysis procedures. All audio and transcript files were backed up on a 4-gigabyte data pin, which was secured and kept in the researcher's personal locked file cabinet in the same room as the computer being used for data management and storage. All printed or handwritten documents were secured in a file folder and kept in the researcher's personal locked file cabinet. Additional documents produced through data analysis procedures were also saved and stored on a personal computer and/or in the researcher's personal locked file cabinet.

Data Analysis

General data analysis procedures. The procedures for analyzing the data collected in the present study were organized into three different sections based on the three research questions: (1) teacher interviews; (2) expert opinion and teacher case reviews; and (3) focus groups. Within each group, procedures for organizing the collected data, developing coding schemes, evaluating the reliability of the codes, and applying the coding schemes are discussed. Procedures for summarizing and organizing the findings for validation by participants and peer reviewers are then listed.

Teacher interview concurrent data analyses. The analysis of teacher interviews was conducted in two phases: (1) concurrent initial analysis during data collection and (2) formal descriptive analyses after all data were collected. Concurrent analysis procedures were used alongside data collection procedures and involved the use of descriptive field notes and personal notes, and a review of completed audio interviews. This level of analysis provided an ongoing examination of the focus of the study, the appropriateness of the questions and instruments being used, and observations of topics or themes that might be important for later analyses. Adjustments to each successive interview were made based on a review of field notes and personal notes.

Descriptive field notes were written using a standard form developed by the researcher (see Appendix F). This format was meant to be specific in focus. Personal notes/reflections were recorded using the digital audio recorder and/or written in a journal after each interview was collected. Field notes and personal notes were summarized using a word-processing computer program and saved in an electronic folder assigned to the participant using a coded label for confidentiality. Hard copy notes taken using the field notes form were secured in the researcher's personal locked file cabinet. The general format of the narrative summary of field notes was paragraph form, to allow for coding at a later time. Each case summary of the field notes taken for a given participant was followed by personal ideas, hunches, concerns, and/or general reflections about the interview/research study. The analysis of personal notes allowed for recording and/or observing errors and biases.

Review of all field notes and personal notes allowed for the identification of initial topics or themes discussed, in order to determine when saturation of important and relevant information was reached. Audio interviews and all associated field notes and personal notes were reviewed in the order they were collected and were used to identify initial themes emerging from the interviews. This list of topics was written in the personal notes/journal for later use in the coding process.

Teacher interview formal data analyses. After all interviews were transcribed, they were separated into two groups (kindergarten and first grade teachers) and kept in the order in which they were collected, in order to investigate any salient themes or emphases specific to a particular grade level. All field notes and personal notes were utilized to develop an initial list of topics for each grade level. As each of the teacher interview transcripts was analyzed, additional topics not identified from field notes or journal entries were identified. The lists of topics from kindergarten and first grade teachers, respectively, were then combined into one comprehensive list of topics (Appendix G1). Numerical codes were developed and ascribed to the initial list of topics. Because the present study involved a relatively tight design – that is, a narrow and specific focus from the start of the study (Miles & Huberman, 1994) – and relatively few unanticipated topics emerged from both groups of interviews, the Teacher Interview Guide was used to organize the initial list of codes into different topic clusters (see Appendix G2). Once a comprehensive coding scheme was developed, it was applied to each full teacher transcript.

Reliability of the coding process was assessed by using an inter-rater agreement method (Patton, 2002). Inter-rater agreement was assessed by calculating the number of agreements divided by the total number of agreements plus disagreements on 37% of all data (i.e., 6 teacher interviews, 4 teacher case study reviews, 1 expert case study review, and 1 focus group). A minimum of 90% agreement was used as the criterion for determining the reliability of the codes (Miles & Huberman, 1994). Inter-rater accuracy checks that fell below this criterion prompted the research assistant and researcher to review the transcription together and discuss disagreements.
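
The agreement statistic described above can be written as a simple proportion. As a sketch of the calculation (the numbers in the example are hypothetical), with A the number of coding agreements and D the number of disagreements:

\text{percent agreement} = \frac{A}{A + D} \times 100

For instance, 56 agreements and 4 disagreements on a transcript would yield 56/60, or approximately 93%, which would meet the 90% criterion.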

The research assistant who participated in this process held a degree in business administration and had four years of experience working as a market researcher. He had experience conducting field interviews and utilizing both quantitative and qualitative methods for the analysis of consumer perceptions. He did not have any experience working in education. Prior to engaging in analysis procedures, the researcher and research assistant discussed over several sessions the purpose, scope, and nature of the research questions addressed in the study. Hypotheses and researcher biases were shared, along with an overview of how educational systems operate (e.g., NCLB, current policies and procedures concerning the assessment and evaluation of student outcomes, the Reading First model, etc.). All questions asked by the research assistant were answered and discussed until the research assistant indicated sufficient knowledge and understanding of the concepts addressed, prior to engaging in any analysis procedures.

At the start of the inter-rater agreement process, each code was operationally defined and shared with the research assistant. Two randomly chosen transcripts (one kindergarten and one first grade teacher interview transcript) were provided to the research assistant along with the list of current codes and definitions. The researcher and research assistant separately coded the same two transcripts and then reviewed each rendition together.

Disagreements were primarily the result of differing judgments about the size of data segments. This led to discussions about decision rules for defining data segments. It was decided that the level of analysis for determining data segments would be the sentence level. Further, it was decided that each data segment would start with a question posed by the researcher unless the response provided by the participant included a transition into a different topic. Codes and definitions were reviewed again prior to conducting another round of inter-rater agreement checks using the same (but new/clean) transcripts.

Two additional inter-rater checks were conducted, neither of which reached the established minimum criterion. The researcher and research assistant discussed the disagreements at length and reviewed the research questions. These discussions led the researcher and research assistant to conclude that three factors were operating which resulted in low agreement percentages: (1) disagreements regarding the identification of prior data segments despite the decision rules established, (2) disagreements related to poor differentiation among some of the codes, and (3) disagreements related to the a priori organization of codes using the Teacher Interview Guide.

120 topics of interest. The original list of topics de veloped (See Appendix G1) was condensed into different clusters independent of an y a priori organizational structure or topics of interest. This next list was then given a 4-digit code for each topic cluster and applied to all 14 teacher transcripts by writing th e code into the margin for each data segment identified. A manual sorting process (Miles & Huberman, 1994; P atton, 2002) was attempted which involved cutting up and sorting data segments based on those codes. Codes receiving too many data segments were re-read and b roken down to provide greater specificity and boundaries in relation to other cod es. Codes not receiving any segments were either discarded or clustered with similar cod es. This process was conducted several times and involved a high level of resource s (i.e., time and materials) and was found to be too ineffective and inefficient for the researcher. At that point, the researcher’s doctoral committee was approached to s olicit approval for engaging in additional procedures which might prove more effect ive. Researchers have recommended the process of coding involve other persons to discuss ideas and define boundaries for topics bein g identified (Miles & Huberman, 1994; Patton, 2002). The same literature also advoc ated for the use of computer-based approaches for sorting and coding the data for a mo re efficient process. Given this review, the research assistant and principal invest igator reviewed all 14 teacher transcripts together and identified through discuss ion of the transcript data where to define the data segments for each transcript. The latest “grocery list” of emerging topics identified without considering a priori topics of i nterest was then reviewed between the

This process led to the development of a draft set of codes. Using this draft, all transcript data for the teacher interviews were imported into a Microsoft Excel spreadsheet.

Identified data segments for all 14 teacher transcripts were entered into the Microsoft Excel spreadsheet in order to utilize a computer-based approach to sorting the data. Table 5 in Appendix B contains the format used for entering the transcript data segments: Topic Code (a four-digit number); Participant Code (a two-letter code in which the first letter indicated the grade level, F or K, and the second letter identified the order of the interview, A through N); Data Segment # (i.e., data segments numbered in order and restarted for each individual interview); and finally the transcript content for each data segment, with the interviewer's speech in italics/underlined for visual ease. After all the data segments for all 14 teachers were entered, Microsoft Excel allowed the data to be sorted using the following priorities: Code, followed by Teacher, followed by Data Entry #. Having sorted the data in this manner, it was found to be more efficient and cost-effective to adjust the codes and develop a third formal set of codes for use (see Appendix G3).
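
As an illustration only (the actual sorting was done in Microsoft Excel; the field names and example values below are hypothetical, not drawn from the study's data), the same three-level sort can be sketched in a few lines of Python:

# Illustrative sketch of the three-level sort described above.
# Tuples hold (topic_code, participant_code, segment_number, content).
from operator import itemgetter

segments = [
    (2100, "KB", 4, "Teacher describes the Nonsense Word Fluency subtest..."),
    (1300, "FA", 2, "Teacher raises a concern about assessment scheduling..."),
    (2100, "FA", 7, "Teacher explains how she reads the PMRN class report..."),
]

# Sort priority mirrors the Code, then Teacher, then Data Entry # ordering used in Excel.
segments.sort(key=itemgetter(0, 1, 2))

for topic_code, participant, seg_num, content in segments:
    print(topic_code, participant, seg_num, content)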

Having defined the data segments together, but prior to engaging in additional inter-rater agreement checks, a rubric for coding the data segments was developed based upon the a priori topics of interest, previous experience with inter-rater checks, and the review of transcripts with the research assistant. For each data segment, the first step toward coding was to read the researcher's question followed by the participant's response. If the question involved one of the several a priori topics used in the study, then the data segment was coded to reflect that topic. The exception to this rule was when the participant's response did not address the researcher's question. For example, if the researcher asked how often a teacher collected DIBELS data for a student who was struggling in reading, and the teacher either did not understand what the question was asking or described some other aspect of DIBELS data collection that did not address the question asked, then priority was given to the participant's response for coding.

Aside from prioritizing the researcher's question when coding data segments, priorities were also established for which code to use when multiple codes could be assigned. Of the eight broad topic areas used in the coding scheme (see Appendix G3), priority was given to codes 2000 through 7000 before using codes within the 1000 or 8000 level. For example, if in response to a researcher's question the participant talked specifically about their knowledge of a DIBELS measure while also highlighting their perceived value of DIBELS overall, then priority was given to coding that data segment as "knowledge of DIBELS (2100)" rather than "benefits of DIBELS (1100)." Additionally, if a data segment included multiple topics and involved a decision between topics listed within the 2000 through 7000 levels, then priority was given to the topic that was closest to the research question asked.
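
A minimal sketch of this priority rule follows (the function name and candidate values are hypothetical; it is meant only to make the decision order explicit, not to reproduce the actual rubric):

# Illustrative priority rule for choosing one code when several could apply.
def choose_code(candidate_codes):
    """Prefer codes in the 2000-7999 range over the 1000 or 8000 levels.
    Ties within 2000-7999 were resolved by the researcher's judgment of
    which topic was closest to the research question (not modeled here)."""
    preferred = [c for c in candidate_codes if 2000 <= c < 8000]
    return preferred[0] if preferred else candidate_codes[0]

# Example: a segment touching both "benefits of DIBELS" (1100) and
# "knowledge of DIBELS" (2100) would be coded 2100.
print(choose_code([2100, 1100]))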
Regarding the use of the "miscellaneous" (8000) category, this code was used only for those responses that either involved an exchange of comments between the researcher and participant clarifying the specific question being asked, or reflected responses that were unrelated to DIBELS, reading, or school climate. For example, if in response to a question about reading assessments a participant discussed concerns with math assessments and student achievement in math, then that data segment was coded as "miscellaneous."

Though the coding scheme that was developed (see Appendix G3) provided codes for specific examples of responses (e.g., 1310, 1311, 1312), only one of the 28 secondary number levels was used for coding (e.g., 1300). After inter-rater checks were completed using the third draft of codes (Appendix G3), inter-rater calculations increased substantially, though they remained just below the required criterion. Analysis of inter-rater disagreements identified discrepancies in coding within the 3000, 4000, and 7000 level topics. Following further discussion between the research assistant and the researcher about the nature of the study and its research questions, these three topic levels were each collapsed into one level (3100, 4100, and 7100; see Appendix G4). It was determined through discussion and reflection that these three topic categories did not require subtle minor topics for coding. For example, the 4000 topic of the Progress Monitoring and Reporting Network (PMRN) was a necessary topic because it was an a priori topic of interest in the study. However, too few responses were identified for each of the four sub-categories, and the primary focus of this topic was merely on teachers' general use and perceptions of the PMRN. Therefore the four sub-categories were collapsed into one overall category.

Observations of using this third draft of codes indicated a higher level of agreement (between 60% and 70%). Discussions of disagreements at this point resulted in some of the higher-order categories being clustered further as described above (see Appendix G4).
This fourth set of codes was then used to code the same two transcripts again, which resulted in sufficient agreement (93%). Following this, two more sets of transcripts (four total, two for each grade level) were again used for conducting inter-rater agreement checks. Those checks resulted in sufficient agreement percentages ranging between 92% and 97%. Upon reaching sufficient inter-rater agreement on the transcripts sampled for that purpose, the final set of codes was applied to all teacher transcripts using the Microsoft Excel spreadsheet. Again, the data were then sorted and organized by prioritizing Code, Teacher, and Data Entry #. Following this, attention was given to coding the expert/teacher case reviews.
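
The agreement figures reported here and below appear to be simple percent-agreement values; the study does not spell out the exact formula, so the sketch below shows one common way such a percentage could be computed, with invented code assignments used purely for illustration.

```python
# Minimal sketch of a percent-agreement check between two coders, assuming each
# list holds the topic code assigned to the same ordered data segments.
# The code assignments below are invented for illustration only.

def percent_agreement(coder_a, coder_b):
    assert len(coder_a) == len(coder_b), "coders must rate the same segments"
    agreements = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return 100.0 * agreements / len(coder_a)

coder_a = [2100, 1100, 3100, 4100, 2100, 7100, 1300, 2100, 8000, 3100]
coder_b = [2100, 1100, 3100, 4100, 1100, 7100, 1300, 2100, 8000, 3100]
print(percent_agreement(coder_a, coder_b))  # -> 90.0, the criterion used in the study
```
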
Expert/teacher case reviews. As with the teacher interviews, concurrent analysis procedures were used during data collection; the case reviews conducted by teachers and experts were monitored using personal notes. Reviews of those notes were used for initial formal analysis procedures following the collection of all case review data. First, for each expert, the transcript data were separated into two groups: (1) the kindergarten case study and (2) the first grade case study. It was necessary to separate the expert transcripts based on the grade level of the case studies for later comparison among teachers for each of the two grade levels. Each of the two transcripts was read separately for regularities and patterns as well as the topics the data covered. Words or phrases used by the experts were used to represent the topics or patterns identified. These words or phrases provided a set of initial coding topics.

Once the list of topics was developed for each expert transcript for each grade level case study, the two sets of topics (kindergarten and first grade case study) for each expert were analyzed for common and unique ideas in relation to each other for both grade level case studies (e.g., expert one's kindergarten case compared to expert two's kindergarten case). Common topics or patterns were analyzed first and combined for further coding. Unique topics or patterns were listed together for visual ease and analyzed again for any common ideas or thoughts not originally found in the first steps. Ideas remaining unique between both experts for a specific grade level case study were coded and added to the list of shared common topics for that case study.

The above organizing and coding procedures were also used to develop a set of codes for the kindergarten and first grade teacher case reviews. These sets of codes were then used to compare teacher and expert comments in relation to a given case study. However, it was expected that many topics identified by experts and teachers would not be mutually exclusive or unique to either kindergarten or first grade teachers. Therefore, the above procedures provided for an initial comparative analysis between experts and teachers.

A secondary analysis involved the development of a single comprehensive set of codes to be applied to all expert and teacher case review responses. To do this, the topic lists developed for each grade level were compared and analyzed for common and unique topics. Common topics between the teacher cases were combined and reorganized. This list was then compared with the expert list of topics, and a constant comparison procedure was used to develop a final comprehensive set of topics to be applied to both expert transcripts and teacher transcripts, regardless of the case study discussed. This final list of codes involved three distinct topic areas for coding:
(1) Using PMRN reports; (2) Additional information needed or wanted; and (3) General comments about using DIBELS data. A computerized method for organizing and coding the teacher/expert case reviews was not needed for the analysis of case review data.

Inter-rater agreement checks were applied to two sets of Teacher Case Study Reviews (four total, two for each grade level) and one of the Expert Case Study Reviews (covering both the kindergarten and first grade cases). As with the teacher interviews, a 90% agreement criterion was used to check inter-rater reliability. No additional procedures were needed to secure sufficient agreement between the research assistant and the researcher; inter-rater agreement for all case reviews (both teacher and expert) was 100%.

Focus group analyses. To begin, the Reading Coach focus group and the specialist focus group were initially separated while the following procedures were applied to both groups. Prior to reviewing each focus group transcript, the notes taken and validated by focus group participants were reviewed and an initial list of topics was identified. This first list of topics was then kept aside while the focus group transcript was read for topics and themes not captured in the notes. A review of both focus group transcripts did not yield any new topics beyond the notes taken during each of the focus group sessions.

The first formal draft of topics for the focus group data was developed by engaging in a constant comparison method (Patton, 2002) in which similar topics were combined and unique topics were listed without exclusion. From this process a comprehensive list of topics was identified for the focus group data. Each topic was assigned a number and
applied to both focus group transcripts. No additional procedures (i.e., a computerized approach to organizing and sorting the data) were needed for coding the focus group data. Inter-rater agreement procedures were applied to the Reading Coach focus group transcript and yielded a satisfactory agreement percentage of 96%.

Member checks and peer reviews. In preparation for evaluating the validity of the findings generated to this point, the coding systems developed for the teacher interviews and both focus groups were compared using a constant comparison approach (Patton, 2002). A comparative review of these sources of codes revealed a pattern that prompted a further organization of topics using the following four categories: Climate/Culture of School, Supports/Resources Available, Knowledge of DIBELS, and Collecting and Using Data. Table 6 in Appendix B depicts the consolidation of topics into these four emerging final themes. Preliminary results were written using this organizational format in an outline version for ease of reporting and reading (Hodges, personal communication, 2008). A copy of the preliminary results (see Appendix H) was developed for each group of participants, with the section reporting that group's data highlighted in yellow for ease of review. Each participant in the study was provided a copy of the preliminary results via email and asked to provide feedback on the accuracy and completeness of the information.

Peer reviewers were given the same report without highlighted sections. All versions included a table of contents to give participants an efficient means of finding areas of interest. The preliminary results were emailed to all participants, who were
asked to respond by emailing the researcher within two weeks of the date the report was sent. Participants were asked to comment on the findings and were encouraged to offer any additional thoughts prompted by the results. Peer reviewers were asked to review the results and offer any critical feedback or questions concerning the findings, the organization of the reporting, and any other methodological aspects. Responses received were used to further analyze the validity and reliability of the study and are reported in the results section.

CHAPTER FOUR

RESULTS

General Introduction

Given the methodological design used in the present study, the results were written in two parts. First, in accordance with the research questions posed at the start of the study, the findings obtained in relation to those questions are presented. Next, an analysis of the research process and the researcher's perspectives throughout the course of the study is presented. The primary findings as they relate to the research questions were written in a traditional third person voice. The researcher's analysis of the research process and personal account of the evolution of the study were written in first person voice.

Description of Participants and Data Obtained

Procedures for the selection of participants and access to sites were provided in the previous methods section and therefore are not restated in full here. However, it is important to highlight that all participants self-selected into the study. All participants were asked to take part in the study through a one-time interview or focus group session. A total of 831 minutes, or nearly 14 hours, of audio-recorded information was obtained directly from participants as a whole. When transcribed, these data amount to 229 pages of text.

Aggregated teacher interview data may be described as 381 minutes of one-on-one interview discussions, or 135 pages of transcripts. The average participation time in
the teacher interview was approximately 27 minutes, with a range of 13 to 43 minutes. A total of 617 separate data segments were identified for coding and analysis purposes. Teacher case study review data may be described as 141 minutes of audio recording, or 44 pages of transcribed text. The average amount of time spent discussing the case study review was approximately 10 minutes. Expert case study review data may be described as a total of 47 minutes of audio recording, or 10 pages of transcribed text. Focus group data may be described as 162 minutes of audio recording, or 47 pages of transcribed text.

Teachers' Perceptions and Understandings about DIBELS and the PMRN

Climate and culture of schools. Several a priori and emerging topics were analyzed in order to understand teachers' perceptions and use of DIBELS and the PMRN. To begin, the culture and climate of education within which teachers had been asked to work emerged as an important factor to understand. Overall, teachers reported experiencing various levels of pressure and stress related to their profession. Most of these comments seemed tied to a general climate of accountability to increase student performance in education. Many sources of pressure were identified through teacher interviews. Some of these pressures were reported to have evolved in relation to the implementation of Reading First and the use of DIBELS. Some of the variation in reaction to accountability pressures was found to be related to the school's status or grade as evaluated by the State of Florida's Department of Education using the No Child Left Behind criteria, as one teacher indicated:

So, we've been under a lot of pressure because we are not making A.Y.P.; Annual Yearly Progress. We are one year away from restructuring. And, I work
extremely hard, and every person on this campus works extremely hard. So the feeling is there across the board. It comes from No Child Left Behind. To expect every child to be on or above grade level like 2012 is the most ridiculous thing I've heard in my life. Unless you are cloning the children, it's not going to work because children learn in different stages.

A great amount of pressure was expressed by some teachers as a function of the DIBELS test being implemented in the schools. Most of the teachers reported an initial impact described as increased competition among teachers. However, this initial impact of DIBELS had evolved in most of the schools sampled as teachers reported they no longer perceived anyone would lose their jobs due to the outcome data on the DIBELS. They reported that collaboration as a grade level team was functional for determining what was working or not working for their students. When asked to generally share her views about the DIBELS at a Reading First school, one teacher stated the following:

We've been doing it for a few years now. When we first started none of us knew what it was about or how it would impact us. And then it was kind of a competition between teachers; you know whether your scores went up or your scores were down. And then we decided it was more of a team thing because it depended on more than just your class. Sometimes you might have a really young class and so then you just kind of looked at your scores and how they were improving and not just where they're at but where were they and where are they at now?

In general, all the teachers interviewed expressed belief in the importance of the role of leadership at their school as it related to pressures and the use of DIBELS. The role of leadership was reported as important in terms of setting a climate of cooperation and a culture of using data to make decisions. It was also described by teachers as important for ensuring that teachers use the DIBELS as the Reading First grant intended. Most reported that DIBELS had been given greater value by the school leadership or school district than other assessments, and in some cases uncertainty was reported about why, as if the DIBELS were the main indicator of student progress to the exclusion of using multiple sources of data. Or, as one teacher commented, "There's a lot of expectations and I really think that sometimes it's more important than the classroom assessments. Because I do think they feel that this is the way to see where these kids are coming from and where they need to go, and how you're gonna get 'em there."

Another teacher described the importance of school leadership in helping to deflect outside pressures of accountability, while another described the isolated feeling of working in a school that is very "political." Several of the teachers commented on the concern and simultaneous pressure to "teach to the test." But over time, influenced in part by school leaders (including the Reading Coach; see below) and increasing student achievement scores, some of these teachers reported less pressure about how their students were performing on the DIBELS and instead were more encouraged to use the data to help support student learning.

Aside from perceived pressures brought about by supervisors, some teachers described their concerns about the general climate of their educational field as "removing
the childhood from the child." It was reported by some of the kindergarten teachers that there was too much intensity in kindergarten compared to the past. This view was further characterized by teachers as a sense that what kindergarten students are being asked to do is developmentally inappropriate. When these teachers were asked if that was indeed their view, they all indicated an affirmative response. One teacher also characterized her view as too much emphasis on reading at the expense of other kindergarten goals such as social-emotional development. As she stated, "…yes they get real strong on the DIBELS but their behavior hasn't gone to a place that is appropriate for society." When asked to further explain her point, this teacher modified her response to reflect a more general feeling that an intense focus on reading was making some teachers feel that less and less time was available for helping children learn to develop positive relationships and social skills. A different teacher commented on this similarly when she spoke of her fears regarding a rumor that the district would be considering the removal of rest time for kindergarten, a time she valued highly both for her students and for herself to catch up on classroom assessments and paperwork.

Perceived pressures were also reported to be occurring from other grade levels. One teacher described the pressures felt by third grade teachers in relation to the FCAT and No Child Left Behind as being passed down to the second grade teachers to make sure that exiting second grade students were prepared to move into third grade. In turn, the second grade teachers expected the same from the first grade teachers, and so on. The teacher who presented this view also indicated that this was often a contributing source of why retention was considered at the grade levels below third grade, especially first grade.

A final source of perceived pressure identified among teacher responses was concern over the amount of testing occurring in the schools. Table 7 lists the various reading assessments that can occur at a Reading First school in the school district that participated in the study, along with the frequencies of administration reported by teachers; in some cases the specific name of a test was withheld in order to maintain the confidentiality of the school or district. The frequencies of administration alone throughout the year were deceiving because, for example, the kindergarten district assessments were conducted in a one-on-one format between teacher and student. Kindergarten teachers indicated that, on average, it took about three weeks to complete a whole class using that assessment because of its required one-on-one format.

Some teachers shared their concerns over the loss of instructional time due to the amount of testing. One teacher stated it in the following way, which was very representative of kindergarten teachers in this study: "That's all we're doing is testing. They're using it for that reason, but I want to teach the children. There's just no time to do that. And, um, so that's what drives a lot of teachers crazy." Another teacher took a more humorous approach: "Sometimes I look at them and think, how did I expect you to get…to go higher when I haven't had any instruction time in between. It's a miracle!" Compounding this stated teacher frustration, some reported frustration with having to conduct assessments that were not useful for driving instruction, such as the district's school readiness test; a test that one teacher wished the district would
arrange for students to take before the start of the year because of how little value it provided her and the amount of time it took to complete a whole class.

It is relevant to note, in a context of pressure and accountability, that all teachers reported concerns about using the DIBELS data more effectively and efficiently as a result of not being the ones trained to administer the test. Only two teachers indicated an interest in further training to learn how to give the DIBELS themselves; one other teacher had already been trained to give the test. All the others cited their current assessment responsibilities as a barrier to taking on the administration of the DIBELS. And yet, these same teachers indicated a preference for using their district assessment for monitoring student progress between assessment cycles rather than using DIBELS because (1) they had access to the materials they needed and knew how to administer the test, and (2) they valued being able to directly hear the student read in order to grasp a qualitative understanding of what the student can do. One teacher stated it in the following manner when asked if it would be beneficial to learn how to administer the DIBELS herself in order to have more direct observation of student performance:

It might make a difference in some cases. But, we're talking about timelines here and individual one-on-one testing. So how they would ever get it so your class is covered and the teacher does all this testing? Because if one person had to test 20 kids, that would be a lengthy process.

In response to a question about what would be needed to sustain the use of the DIBELS assessment when the Reading First grant is terminated, almost all teachers indicated that the school would need additional personnel to help with collecting
assessments – not just DIBELS. In fact, as it stood when these teachers were interviewed, many of them had concerns about the personnel who were then involved in helping to collect the data (i.e., Title 1 staff), as doing so meant missed instructional time for the students working with those staff members.

Support/resources for teachers' use of DIBELS. Having provided a description of the school climate/culture and the expectations perceived by teachers, analysis of the first research question then focused on the supports and resources that existed in schools to help teachers with their understanding and use of DIBELS. Consistently, across all teacher participants, the Reading First grant was praised for the various resources it provided to schools. Teachers reported high value for the grant and its provision of books, curricular and instructional materials, and additional media materials. But the most valued of all the resources provided to schools, as reported by teachers, was the assignment of a full-time school-based Reading Coach.

Teachers expressed numerous reasons for their high value for the Reading Coach. According to the teachers interviewed, the Reading Coach provided coordination of all data collection and data input efforts. The Reading Coach was reported to provide technical assistance to teachers on the use of DIBELS and other assessment data for making instructional decisions. The Reading Coach was reported to help some teachers create and/or organize various activities for their classrooms as well as help teachers organize their classroom libraries. Some teachers reported receiving frequent support from the Reading Coach on the use of the Progress Monitoring and Reporting Network (PMRN) reports, generated from a web-based program that provides educators with
graphs of their students' DIBELS data. More specifically, Reading Coaches were reported to meet with grade level teams at least after each DIBELS benchmark assessment to assist those teams in using the data to develop action plans for increasing student reading performance.

Specifically, with regard to support given by the Reading Coach for using the PMRN, teachers reported that they highly valued the accessibility of the color graphs provided to them by the Reading Coach. The PMRN reports utilized a color coding system to help teachers scan their classroom data and identify areas of instructional need. All teachers indicated that they did not have access to color reports and were dependent on the Reading Coach to provide them because of an insufficient supply of color printers available on school grounds. Some teachers who valued the use of the PMRN reports indicated that when they did download and print their own reports in black and white, they were left having to color them in by hand for visual analysis.

The lack of access to color printers may have been a potential barrier to teachers' use of DIBELS data, let alone a barrier to teachers' use of the PMRN (discussed in more detail later). Nonetheless, teachers voiced much appreciation for their Reading Coaches, who at times also provided modeling of instructional programs or strategies. And in some cases, teachers reported their Reading Coach had set up site visits to other schools for them to observe expert teachers who utilized a given reading program or instructional approach that the Reading Coach was interested in helping her team learn. One teacher's
view in particular may best convey the general sense of teachers' valuing of the Reading Coach:

Um, she came in…for a six week stretch and modeled lessons for me. We co-taught some lessons and I did some on my own and she observed and gave me feedback, that kind of thing. Looking at the differentiated instruction and making games up for rotations, and things along those lines, and word work. And um, she's always been there as a resource. Questions me…you know, helps me think things through cause I don't see it somehow, she's been there in that regard as well. Um, working with getting the classroom library up and going. The whole program started with the grant and they bought all those extra books and we've been doing that. Wonderful person. I mean, without her I don't think it would have went as smoothly? Yes! Definitely not.

The topic of classroom libraries and books was another major factor contributing to teachers' value for the Reading First grant. Most of them indicated having access to "leveled books" as part of their personal classroom library for use in their reading center activities. The classroom library was often mentioned in the same context as providing students with a differentiated instructional approach that allowed all students to work at their instructional level – an instructional design being promoted through the Reading First model. Prior to having the grant, teachers reported they had few options for supporting their classroom libraries and similar curricular or instructional needs other than to use their own money to purchase such materials. In this regard, the grant was viewed by the teachers in this study as highly beneficial.

A less commonly reported positive impact of the grant was the introduction of DIBELS. Some of the teachers (4 of the 14) indicated a direct relation between the use of the DIBELS and the increases in their overall school performance. Most of the teachers, when discussing the impact of the grant, merely described DIBELS as a requirement about which they had mixed views. Or, as one teacher commented, "It's just another test." But a few teachers commented on how DIBELS had been a very valuable resource provided through the Reading First grant because it helped them to focus their instructional efforts and guide their professional development in reading. More specifically, one of these few teachers stated, "Um, I was resistant in the beginning because of the amount of time. I see some good results with DIBELS and Reading First…. What has happened is that it has created an awareness of what's important. I think that's the big thing of value."

A final area of support identified through teacher interviews was the availability and focus of trainings provided to teachers on the use of DIBELS data and the PMRN data reports. None of the teachers reported attending any district-level trainings on the use of DIBELS data. However, some of the teachers reported attending a mandatory training provided to the whole staff at the start of the first year of implementation. This particular training was described by these few teachers as having low value, as it reportedly consisted of a facilitator reading from a scripted manual over several hours of training across four days.

One teacher indicated she had attended a district training to learn how to administer the DIBELS. This teacher stated the following in response to a question about the general atmosphere among school staff on the use of DIBELS:

I think people don't appreciate the DIBELS that we have. I don't think they know enough about it to appreciate the information that they're truly getting. For a lot of people, it's just another test. Um, for the new teachers that have come in, as team leader, I've really tried to impress and help them to understand the information that we are getting. Um, as a first grade team I think we understand it, but some of us still don't appreciate it; at least some of the older generations.

Although all of the teachers indicated receiving training, either as a grade level team or as individuals, from their Reading Coach, most reported those trainings consisted of how to use or implement various intervention/instructional ideas in relation to the DIBELS data. Few teachers indicated any participation or formal training in how to use DIBELS data to inform instruction – at least to a level of independence without assistance from the Reading Coach. Most of the teachers reported receiving some amount of training on how to interpret the colors of the PMRN graphs to identify students at risk for reading difficulties and to determine which students should receive additional instructional services (e.g., use of specific curricula or participation in pull-out programs). Regardless, it was difficult to determine whether the continued dependence on Reading Coaches to use DIBELS for instructional decision-making was a function of a lack of sufficient training or a direct result of teachers' reluctance to embrace such activities in light of the
competing responsibilities described above with assessments and the pressures over student performance.

Some of the teachers, however, reported an interest in further training. These same teachers had also voiced positive uses of the DIBELS assessments. Teachers who perceived little value in the use of DIBELS declined to participate in any further training. Of the teachers who indicated an interest in more training on the use of DIBELS data, two indicated a specific interest in using DIBELS for more frequent progress monitoring with their students. Those teachers who indicated an interest in more training stated they were interested not only in how to administer the DIBELS, but also in learning more about its development as a test and how to use it more effectively to guide instruction. An example of the kind of information that teachers find valuable was provided by one teacher who described learning about the high correlation between the DIBELS Oral Reading Fluency subtest and the FCAT.

Knowledge of DIBELS. Teachers' knowledge of the DIBELS was a primary focus of the teacher interview. Identifying what teachers know about the DIBELS was expected to provide information to help understand how teachers perceive and utilize the DIBELS. All teachers indicated knowledge of the frequency of DIBELS testing as a benchmark assessment – three times a year. At the time of the interviews, this frequency had recently changed from four times a year during the prior school year to three times a year. All teachers reported knowledge of the use of the DIBELS as a required assessment under the Reading First grant.

As interview questions explored more specific knowledge about DIBELS, substantial variation in responses was found. Topics explored included the timing of students, the perceived correspondence with other assessments in use, knowledge of specific subtests and what they measure, and knowledge of DIBELS as a progress monitoring tool.

Timing students. Half of the teachers who were interviewed voiced concerns over the timed aspect of DIBELS. Of these seven teachers, five were kindergarten teachers. Kindergarten teachers in particular were most vocal about this concern. Some of these teachers felt the timing of students was unnecessary and/or inappropriate. A representative view of timing students in kindergarten was stated in the following way:

Well, one of the things I find, um, difficult from the children's standpoint is that they're timed at this early age. Especially at the beginning when they don't even know what letters are; I mean for the most part. And uh, you know, to put a test like that in front of a child seems intimidating to me. And so I don't like that very much; I don't like the timed aspect. I mean, and that's what it is, it's a timed thing. I feel like it's a little bit pushy.

These seven teachers who voiced concerns about timing students perceived the DIBELS test as a kind of "speed test." Further, some of these seven teachers perceived the DIBELS as less reliable or valid because it involved a timing aspect. And yet, the other seven teachers indicated a perceived higher reliability in DIBELS because it was a measure of students' fluency in a particular reading skill – often discussed in the context
of comparing it with Running Record assessments (for first grade teachers), which were described by these other teachers as a measure of reading accuracy only.

The perceived value of DIBELS as a "speed test" among the seven teachers who voiced concerns about timing students was also seemingly impacted by these teachers' concerns that reading fast was not the goal of reading; rather it was to understand what they are reading. A first grade teacher stated it in this way, which was characteristic of the other teachers sharing this view:

Running Records, um…I don't time them, but if they are reading at a nice talkable speech, then I think that's fluency. And I encourage them when they stop and have to sound out, that's exactly what I want them to. They don't have to read like an eighth grader or a tenth grader. We talk about how we read as we speak and I talk about the wind and the waves going back and forth, and that's when not-to-talk-like-this (imitating a robot voice).

The teachers who identified concerns about timing students highlighted their concern about perceived importance being placed on how fast kids read rather than their level of comprehension. These teachers stated a preference for students reading slow but understanding what they read over reading fast and not understanding what they read. Or, as one teacher said, "I'd rather have a slow reader comprehend like that (snaps finger) after talking to them which is what I as his teacher could do, versus DIBELS which is… so you read 23 words and the criteria is 40 words; uh, you have no oral reading fluency. He does have oral reading fluency; he's just a slower systematic reader."

Some teachers who voiced concern over timing students perceived students as feeling anxious and nervous about being timed. And yet, when asked if any students had ever voiced such concerns directly, none of the teachers could recall a specific instance of a student voicing concern about being timed. But the teachers themselves perceived enough pressure and stress over the timed aspects of DIBELS to warrant a view that students must have similar feelings. For example:

(Researcher): Have any of your students ever brought up any concerns about not knowing someone or feeling inhibited to perform or talk?

(TCH): They.... No, they haven't brought any concerns up, but I know…just as a teacher, when they go to…I know they're…even just bringing them into another classroom they're like (models child behavior by making stiff posture with body and avoids eye contact). They just change… They just change; they're quiet, they're…you know…I mean I could have a behavior problem over there and bring them over here and their behaviors change just 'cause he really don't know [the other teacher], or who ever else…you know, they don't know so they're a little bit more withdrawn than when they are with their own teacher.

Additionally, concerning students' reactions to being tested with the DIBELS, all teachers perceived the potential for unreliable data if a student is unfamiliar with the person conducting the assessment. Though this concern was evident in all teacher responses, some indicated a greater concern than others. Some teachers reported this as a concern primarily in the first cycle of assessment, after which students were reported to adjust and there was less concern about this topic. But other teachers maintained
that even when students knew or were familiar with the person conducting the assessment, they perceived students were likely not to respond as well as if they were performing for their classroom teacher. It was noteworthy that the teachers most concerned about students being tested by persons other than the teacher were kindergarten teachers who reported low value for the DIBELS. It is difficult to determine whether teachers' perceived low value for the DIBELS led them to perceive anxiousness in the students, or whether their perception of student anxiety contributed to their low value for DIBELS.

Some teachers reported procedures used at their school to help students understand the expectations for participating in DIBELS assessments. Generally, a person such as the Reading Coach would visit the classroom, either a day or two in advance or on the same day of testing, to talk to the students about the process and describe what they would be asked to do. Some teachers reported the use of stickers and similar rewards given to students following their testing experience. Only one teacher reported positive value in having the DIBELS administered by someone other than herself, as she found it reassuring to know that someone else was observing students' successes or difficulties, thus potentially validating her own observations in the classroom.

Teachers who did not voice any concern about timing students generally reported DIBELS as a valuable tool for identifying students who need help in specific areas of reading. Though these seven teachers generally understood the DIBELS as a measure of fluency, some of them also perceived less value in the DIBELS as a test more focused on how fast students read rather than on their understanding of what they read (i.e.,
comprehension). This perception is discussed later in the context of understanding what DIBELS measures. When asked about students' reactions to being timed, some of these seven teachers reported it was the teacher's responsibility to set the expectations for students by preparing them for assessment as well as "making a game of it." For example, one teacher indicated she encouraged students to do their best by competing against their own scores. This same teacher had also incorporated her own version of progress monitoring of all students in the classroom, in addition to teaching her students to graph their scores (discussed further later).

Correspondence with other assessments. Aside from the timing of students, teachers were asked to comment on their perceptions of the DIBELS as compared to the other assessments they were using in the classroom. The intent of exploring this topic was to better understand teachers' perceptions of how to utilize the DIBELS assessment in relation to other assessments to determine student needs. Among the 14 teachers interviewed, five voiced a preference for classroom assessments over the DIBELS or, in all five of these cases, perceived little value for the DIBELS because it did not correspond with other assessments (four of these teachers were kindergarten teachers). Two other teachers reported mixed views on this topic, indicating they generally perceived a correspondence between DIBELS and their classroom assessments but still preferred to use classroom assessments to guide instruction. These teachers reported that the data from classroom assessments were more immediately available for use since the teacher was the one giving the assessment. The remaining seven teachers all reported value in using DIBELS in conjunction with their classroom assessments. For
example, one teacher had the following to say when asked about any concerns using the DIBELS with other assessments:

(Int): Ok, and do you find any difficulties or concerns with making use of the DIBELS information along with these other assessment options?

(TCH): No, um, I think it all pretty much coincides because if you know that their reading fluency is low then their Running Record is probably not going to be anywhere near a 17 which is what it needs to be at the end of the school year. Um, if it shows that they're, you know, in that yellow area they're probably in the process of learning. I think what you do in the ORF (oral reading fluency) directly correlates with what you do on the Running Record.

One of the four kindergarten teachers who perceived a lack of correspondence between the DIBELS and her classroom assessments cited concerns over how students demonstrate knowledge of letter names through their classroom assessments but then test poorly on the Letter Naming Fluency subtest of the DIBELS. Though specific to a particular DIBELS subtest, the general theme was consistent among these four teachers. It is notable that these same teachers also voiced concerns about timing students on the DIBELS and did not perceive DIBELS as a valuable measure of reading ability. Thus, it appeared that the perception of DIBELS as a "speed test" was associated with these teachers' perceptions about the utility of DIBELS in relation to their classroom assessments.

When teachers were asked if the DIBELS provided any information that their current classroom assessments did not, some teachers – even some who found correspondence between DIBELS and the district assessment – reported a low value for DIBELS, citing no new information available through the use of DIBELS beyond what their classroom assessments provided. However, some teachers who valued using DIBELS with their classroom assessments perceived relative benefits in using DIBELS over their classroom assessments. These teachers were first grade teachers and commented on DIBELS in relation to the use of Running Record assessments; they generally reported the DIBELS to be more useful for analyzing a student's errors in order to determine instructional ideas. These teachers also cited how the use of Running Records "just lets you know if the student can read."

All teachers reported that they preferred the DIBELS over other assessments for being more time efficient in completing a class of students. Different schools indicated different procedures for collecting the DIBELS, but two main approaches were identified. One was to schedule classes to visit the media center, where the students would be occupied with an activity while they were assessed by one of several testers. The other was to have each of the DIBELS testers select classrooms/grades and pull students one by one to a nearby area to complete the assessment. Regardless of the approach used at a particular school, all teachers indicated the amount of time needed to complete a class was around 20-40 minutes. The DIBELS was further valued over classroom assessments in this context because teachers were not the ones testing the students and could instead focus their time on instructing their class. Kindergarten teachers reported that it could take as much as three weeks to complete their whole class using the district assessment – during which time they found it difficult to
provide instruction and collect assessment data simultaneously. First grade teachers reported similar views about the use of Running Records.

On the topic of DIBELS knowledge, five of the teachers interviewed expressed concerns about the use of the Nonsense Word Fluency (NWF) subtest, a topic unanticipated by the researcher. Two of these teachers had perceived positive impacts of the use of DIBELS at their school, and yet all five of these teachers viewed the NWF as highly confusing for students, incompatible with the strategies they were teaching in the classroom, and less valuable because it asks students to read "silly" words, as one teacher called them.

Among these five teachers, the NWF subtest was seen as very confusing for students because students were taught specific strategies in the classroom for identifying a word they do not already recognize by sight. The general strategy reported among these teachers was to encourage the student to look at the word, sound out the first part or all known parts, and, if the word doesn't sound right, think of words they already know that would fit or make sense. Additionally, some teachers reported that they encouraged students to use picture clues and context to identify words they do not know or cannot read. Given this, these teachers believed it confuses a student to learn a particular set of strategies for deciphering an unknown word and then have to "throw out what they have learned" in order to perform on the NWF subtest. A couple of teachers were so passionate about their concern over the use of the NWF subtest that they wanted district leaders to throw the whole NWF measure out because, "We don't teach nonsense words in kindergarten!"

Two of the remaining nine teachers had no comment either way about the use of the NWF subtest. However, the remaining seven teachers identified the NWF subtest as a measure of students' decoding ability – not their sight word reading ability. Regarding the teachers who perceived no value in using the NWF measure, when further interviewed about their knowledge of the NWF measure, some misunderstandings were identified about its use as a measure of decoding skills. When asked this question directly, and why some teachers do not like this subtest, one first grade teacher who valued the NWF subtest stated, "I've heard that from the DIBELS assessors. And yet on the PMRN parent letter it says that the nonsense word piece is in there so that we're not assessing the reading ability but the decoding ability." However, one teacher who did have an understanding of the NWF subtest still voiced concerns regarding its use:

Um, I'm sad for my kids when they become such a great reader and they forget how to read those words or it takes them so long because they want to try to sound it out and make it into a real word. I think it's a very useful tool because you know what these kids know. But, um, we work a lot on nonsense words in here as far as our word work is concerned. Um, we use dice and we roll them and talk about what a nonsense word is. And I want them to be familiar with that concept which is why I do that work with them. But as they get into higher Running Record, they almost try to put too much meaning into it. And I think in that aspect it makes it harder for these children.

This teacher had been incorporating direct teaching of decoding skills with the use of nonsense words in the classroom while also generally advocating for the subtest's use. And yet,
she indicated concern over how students will sometimes struggle on that particular subtest. More specifically, from her perspective, this observation mostly occurred with students who were already reading above grade level standards on the Running Record assessments in her class.

The teachers who valued the use of the NWF subtest, however, reported that the score alone was insufficient for them to assist students with their decoding skills. These teachers all generally highlighted the use of the actual protocols used to record students' performances during the assessment. They indicated value in being able to identify patterns of errors on the student protocol to determine the appropriate focus of instruction. As one teacher put it, "…it's the only way to really see what you're actual scores show you in the end. And so, just because they tell you they're red, green, or yellow, or blue, doesn't mean anything unless you get to see those tests along with it."

Additionally, some teachers indicated high value in the qualitative notes written on the test protocols by assessors to further aid teachers in the interpretation of scores (e.g., a note that the student wasn't feeling well). However, teachers reported high variability in the quality of the notes shared by DIBELS assessors on the protocols. Thus, at times the notes were reported to be very helpful, and at other times not so helpful or simply not provided. In this context, a couple of teachers did indicate some interest in receiving training on how to administer DIBELS in order to see directly for themselves how students respond on the DIBELS.

In fact, many teachers, when talking about their own district or classroom assessments, valued giving a test themselves because they valued the qualitative aspects
of hearing the students read for them. And yet, when asked if they would like to learn to give the DIBELS in order to have the same opportunity to directly view student responding, most declined, citing their current assessment responsibilities. At the time of the present study, no information was available regarding the reliability and validity of the district assessments in use for kindergarten and first grade. One must then consider whether it would be appropriate to trade the district assessments for an assessment or set of assessments that are less distracting from instructional time and yet offer greater reliability and validity for classroom decision-making.

On the topic of specific subtest perceptions, two teachers did voice concern over the use of the Phoneme Segmentation Fluency (PSF) subtest. These teachers believed it confused students to be taught to blend sounds together to make words and yet, on the DIBELS PSF subtest, be asked to separate sounds from words. It seemed that these teachers may not have understood the purpose of the PSF task – to assess an auditory skill. Segmenting and blending are both important auditory early literacy skills for reading (National Reading Panel, 2000). It should be noted that these teachers had also expressed concern over students being asked to read nonsense words. Other teachers, however, voiced positive value in the use of the PSF subtest, either because it measured skills similar to those measured in the district assessment for kindergarten or because it helped the teacher determine whether a student had mastered phonemic awareness skills.

One teacher in kindergarten voiced a concern over the use of the Initial Sounds Fluency subtest. This subtest presents a student with four pictures, and the pictures are
labeled for the student by the assessor. The student is then asked to identify the picture that begins with a specific initial sound, and the latency of the response is measured. Specifically, this teacher cited concerns over the labels given to particular pictures. She indicated that many students who speak English as a second language may use a different word for a given picture. It is important to mention that this teacher does value a measure of students' ability to segment the initial sounds of words, but specifically questions the labels used in the assessment for particular pictures:

And it's not that the actual procedure – but that the pictures are bad. For instance there's one, and I don't exactly remember what it is, there's a house and a road, and there's some grass, and there's supposed to be this thing, a yard. Well, you know, they might not even use yard or they might use yard. Another word is luggage. Um, one of them either city or buildings and it's like…it's just a language thing. So the child has to go what did they call [it] this time? And I practice them with my children. This time they might call this a puppy. This time they might call this picture a dog. So you're going to have to listen to what they call it so that you will know how to answer. It's very hard to remember. They're just learning the language.

Regarding the Oral Reading Fluency (ORF) subtest, first grade teachers offered mixed views. Again, one of the variables leading to the mixed findings seemed related to the issue of students being timed. A first grade teacher who held very strong concerns about timing students, and whose perceptions were unique, indicated concerns about the criteria for first grade students at the end of the year. She felt that it was too
high for them and questioned the validity of the subtest as a "speed test." Yet other first grade teachers perceived it as a risk indicator for being able to comprehend what is being read. One first grade teacher, as mentioned previously, had learned that a correlation had been found in the research between the ORF score and the FCAT. Some teachers reported concern about how to interpret a student's high performance on the Running Record assessment and low performance on the ORF subtest, while other teachers did not have this problem, as they perceived the Running Record assessment as a test of accuracy and the DIBELS ORF as a test of fluency.

DIBELS and progress monitoring. On the general theme of understanding teachers' knowledge of the DIBELS, the issue of using DIBELS as a progress monitoring tool was investigated. All teachers were asked if they used DIBELS for progress monitoring of students who were struggling in reading more frequently than the three benchmark administrations per year. All teachers indicated they did not, but the reasons were varied. For some, it was a matter of access to materials, as they knew only of the materials that were available for benchmark assessments. For others, it was a matter of who would collect such information, as teachers were not trained to give the DIBELS. For still others, using DIBELS for progress monitoring was not valued, as they voiced a preference for using their district or classroom assessments and observations to track student progress. The reasons cited by teachers for this particular preference seemed to converge on teachers' comfort in observing their students reading directly in order to observe what they can or cannot do.

However, one unique teacher who was interviewed did indicate developing her own version of progress monitoring materials, using a one-minute time limit, in order to track what students were learning between benchmark cycles. This teacher also indicated teaching her first grade students how to graph their performance on small index cards while encouraging them to practice often and "try to beat their own score." Two other teachers reported an interest in learning how to use DIBELS for progress monitoring, citing a lack of access to the necessary materials as a barrier to doing so. Upon learning about the option to receive such materials online through a companion website for DIBELS, these teachers reported interest in learning how to administer the DIBELS for such use. None of the 14 teachers indicated any knowledge of the option to download and print DIBELS progress monitoring materials from the official DIBELS website developed through the University of Oregon. The remaining 11 of 14 teachers, upon being asked if they would be interested in learning how to administer the DIBELS for progress monitoring purposes, reported feeling overwhelmed with the current level of assessment responsibilities charged under their care.

Using assessment data. To this point, an understanding of the climate and culture of education in which teachers are working, identification of what resources exist to support their use and understanding of DIBELS, and an understanding of what teachers knew about the DIBELS have been reported. The next theme identified for reporting focused on how teachers were using data, whether DIBELS or otherwise, to make decisions in the classroom.



Of the data collected in the present study through teacher interviews, the topic of using data in the classroom can be discussed in terms of driving instruction in the classroom, identifying students who are in need of additional supports, deciding what additional supports to give to students who are struggling, and using data to determine placement into ESE programs or retention within the current grade. Generally, the teachers in the present study reported meeting with their Reading Coach at least three times a year – corresponding with the timing of the DIBELS benchmark cycles – in order to review the data and identify areas for the grade level team to focus their efforts.

Of the teachers who reported mostly positive perceptions of and value for the use of DIBELS, all reported a preference for using multiple sources of data to inform or guide their instruction. Some of these teachers reported using DIBELS data to confirm findings from other assessments in the classroom. When DIBELS data were found to differ from the district or classroom assessments, these teachers reported efforts to understand why.

Of the teachers who voiced mostly negative perceptions of or value for the use of DIBELS, all reported a preference for using their district or classroom assessments to make decisions. Their reasons for de-emphasizing the use of DIBELS in instructional decision-making varied. Some teachers cited delays in accessing their classroom DIBELS scores; by the time they received them, the class had already moved forward in the curriculum. Some teachers reported no value for DIBELS to inform instruction, citing a perceived lack of reliability and validity because it was a "timed test" or because it included subtest measures that "do not apply" to their grade


level. And finally, some of the teachers who explicitly did not use the DIBELS to guide instruction cited a lack of value in the information it provided beyond the information already obtained through their district or classroom assessments.

The pattern of responses with respect to valuing the use of DIBELS to make instructional decisions appeared to match the pattern in which teachers' valuing of the DIBELS was influenced by their knowledge of the DIBELS. Thus, if a teacher reported limited knowledge of the test or misperceptions of the use of DIBELS, they also perceived little value in using it for making instructional decisions. Teachers who voiced greater knowledge of the DIBELS and/or perceptions that were consistent with the true parameters of the test also perceived higher value in using it to guide instructional efforts. It is important to note that none of the teachers who found benefit in using the DIBELS to guide instruction reported using the DIBELS as a single source of information. These teachers all reported benefits to using the DIBELS along with their district and classroom assessments to identify student needs and make decisions in the classroom.

The teachers who reported positive value in using DIBELS to guide their instruction specifically reported using DIBELS to organize their instructional groups and to adjust these groups throughout the year. In this context, teachers often discussed a differentiated instructional approach promoted through the Reading First grant. More specifically, these teachers highlighted the use of reading center activities, developed by researchers at FCRR, during their reading block time. These activities consisted of various reading activities grouped and organized by reading skill type and level of complexity. Teachers could access the activity plans through either the


FCRR website or through a large binder available from the school's Reading Coach.

Teachers who did not value using DIBELS to guide instruction did not report any use of DIBELS data to align or even develop their instructional groups. As one kindergarten teacher commented, "By the time we get our scores back, we've already determined our groups." These teachers described DIBELS as "just another test we have to give," or unhelpful because it doesn't reveal anything new beyond the district assessment data or observations made in the classroom.

Of the 14 teachers who were interviewed in the present study, eight explicitly identified using DIBELS at least to identify which students should have access to Title 1 personnel/services and/or supplemental curriculum programs. Of these eight teachers, some relied on DIBELS to determine specific skills that a student needed help with and then assigned the student to work on those skills either with an assistant in the classroom or through a supplemental program. The other teachers did not indicate using the DIBELS data to identify specific skills on which to focus, but rather generally used the risk indicators of the DIBELS PMRN reports to select which students would receive extra supports. The teachers who used the DIBELS data in this fashion expressed concerns about how to interpret the data and described simply trying to give the student access to everything they could in hopes that something would work. The following programs were reported by teachers as being used for students identified as in need of extra help using the DIBELS data: Great Leaps, SRA Open Court, Head Sprout, teacher-developed reading activities (cited by teachers


who used DIBELS data to identify specific skills on which to focus a student), and Reading First activities (i.e., the reading center activities binder).

Five kindergarten teachers reported specific use of DIBELS data – in some cases along with district and classroom assessments – to make decisions about whether a student would be retained, promoted to the next grade level, and/or referred for psychoeducational evaluation to determine eligibility for special education. None of the first grade teachers reported any use of DIBELS at their school for making retention decisions or referring students for evaluation. One first grade teacher indicated only district assessment data were used at her school for deciding retention of students. Many teachers across both grade levels reported an expectation among student services staff (e.g., school psychologist, social worker, etc.) that DIBELS data be used for determining how students are progressing in response to interventions being provided to them in the classroom.

Within the theme of using data, teachers shared their views on the use of the Progress Monitoring and Reporting Network (PMRN) – a web-based program developed and maintained by the Florida Center for Reading Research which provides educators with graphs and reports for analyzing their DIBELS data. All teachers reported knowledge of this technology and indicated using the reports generated from it. However, the level of use found among teachers was substantially variable. All of the teachers reported using two specific reports the most: the "parent letter" and the Class Status Report (discussed later).


The parent letter is a report provided by the PMRN which uses a standard text template for communicating to parents what their student's score on the DIBELS was for the most recent cycle and how to interpret the results. It also provided parents brief ideas for how to support their child's learning to read at home. All teachers reported high value for this specific report because it was an easy and efficient way to communicate to parents what the DIBELS scores mean. Some teachers, in order to make the report even more useful for parents, highlighted specific areas of the report to direct parents' attention to key information.

Few teachers indicated accessing the PMRN directly themselves in order to evaluate their class data. The few teachers who did access the PMRN independently reported doing so mainly after each DIBELS assessment cycle. Most teachers indicated no need to access the PMRN because the reports they use from it are already provided to them by the Reading Coach (e.g., the Class Status Report). And of the ones who expressed comfort with and the ability to print their own reports, none chose to because they did not have access to color printers.

Because the PMRN reports utilize a color coding system of red, yellow, green, and blue to assist educators in scanning their data for analysis, the lack of access to color printing was found to be a barrier to teachers accessing the PMRN themselves. Of the few teachers who accessed the PMRN themselves, they indicated only knowing how to use the colors and scores to identify students in need of additional help. All teachers reported, regardless of their comfort level with using DIBELS or the PMRN, that they would not be able to utilize the data as they do without the guidance and support of the Reading


Coach. All teachers expressed concern about the possibility of losing their Reading Coach when the grant expires, citing concerns about the impact it would have on their school.

Of the teachers who reported using PMRN reports other than the parent letter, all indicated a high use of the Class Status Report. This report provided the scores for each of their students on the subtests given for a particular assessment cycle. Each student's subtest scores were color coded as described above, and the overall performance profile across the subtests was reported as the "recommended level of instruction" (i.e., intensive, strategic, or initial). This report offered the option of organizing the data by student name alphabetically, by instructional level needed, or by a specific subtest reported for that assessment cycle.

Some described a "box-and-whiskers" format (either a Student Grade Summary Report or a Class Grade Summary Report) that had been used in the past through the Reading Coach. All but one of the teachers who reported this type of format reported confusion about its style of data display and indicated little use for it. One teacher reported learning just recently from the Reading Coach how to interpret that type of graphic display and further reported learning that the goal was to get the boxes smaller and above the expectation line for a particular cycle. No other reports were mentioned by teachers.

Comparison of Teachers and Experts on the use of PMRN reports – Case Study Review

To answer the second research question, all teachers and two DIBELS experts were asked to review a case study. Experts were asked to review both a kindergarten and


first grade case study. Teachers were asked to review the case study appropriate for the grade level they teach. Results from teachers and experts were then examined for similarities and differences in the use of the DIBELS data presented through the PMRN reports. Across all participants, teachers and experts, the three common themes that emerged in response to the questions asked through the case study review were: (1) using PMRN reports, (2) additional information needed, and (3) general comments about using DIBELS (experts only). Following the reporting of the findings for the first two of these themes, a summary of additional comments offered by experts on the use of DIBELS data is presented.

Expert review of kindergarten case study. When presented with the three types of reports being used for the kindergarten case study, the experts utilized the cumulative report to evaluate the student's progress in skills and to evaluate the student's level of vocabulary. They reported the case study student was demonstrating a deficit in phonemic awareness skills as identified through performance on the Initial Sounds Fluency subtest. Experts also indicated that because the case study reflected end-of-year performance, the data would be useful for first grade teachers at the start of the next year for incoming students, prior to the first cycle of data collection for that year. Using the Student Grade Summary Report, the experts demonstrated analysis of the student's performance in relation to the class and in relation to the expectation for that given assessment cycle.

Experts in the present study used the reports to identify student strengths and weaknesses. Across all reports, but specifically using the Class Status Report, experts


reported that the student had demonstrated proficiency in Letter Naming Fluency, required intensive help with phonemic awareness skills, and had demonstrated some knowledge of letter-sound correspondence. However, regarding letter-sound correspondence skills, both experts reported an interest in reviewing the student's protocols to identify patterns of errors on the NWF subtest. Further, both experts made general recommendations about providing the student with substantial interventions that are highly focused and narrowly targeted on the development of phonemic awareness skills.

Both experts reported a need or desire for additional information to better determine the student's needs or to better develop targeted interventions. Specifically, experts were interested in why the student was behind the peers in phonemic awareness skills. Was the student already receiving special education services? Was English a second language for this student? And if so, was he or she receiving ESOL support? Additionally, they asked how the student was behaving on the day of testing; was there any evidence that the reliability of this information should be questioned? They also wanted to know what services or interventions had been attempted so far and how responsive the student was to those interventions. Finally, they asked whether background information on the student was available. For example, did the student attend pre-school? Was the student born in the United States?

Regarding the cumulative report, experts wanted to know how the student went from 0 to 24 sounds on the NWF subtest. They asked if the student was receiving interventions specifically on letter-sound correspondence. They also questioned the reliability of the previous assessment cycle, which reported the student had a score of zero


on the NWF measure. Finally, regarding the primary concern identified for this student, the experts wanted to know what phonological skills this student did have. They asked what the student could already do. The experts reported the data on this student only indicated a concern about his/her ability to engage in phonological awareness tasks, but did not actually pinpoint where to focus interventions more specifically.

Kindergarten teachers' review of kindergarten case study. When kindergarten teachers were asked to review the same case study, all seven teachers reported familiarity with the Class Status Report. Three of the seven teachers reported awareness of the Student Grade Summary Report. And only one kindergarten teacher recognized the cumulative report. All kindergarten teachers used the Class Status Report to identify student strengths and/or weaknesses. Responses by teachers varied in most instances. However, all reported that the student had mastered his letter names. Some reported the student was "having trouble with sounds" based on the NWF score. Yet others reported the student could perform well on the NWF, but that he/she "doesn't know sounds" when referring to the PSF subtest. One teacher stated, "he knows his sight words," even though the ORF subtest was not administered to this student. A few kindergarten teachers highlighted the student's need for help with beginning sounds and letter-sound relationships, similar to the expert opinion. All kindergarten teachers provided suggestions to assist the student. Most of these were general statements about assigning a classroom assistant to work with the student one-on-one or in a small group. Two teachers indicated they would have referred this student for psychoeducational evaluation in consideration for special education placement.


Only four of the seven kindergarten teachers asked for additional information. Two of them wanted to know if the student's native language was English or if the student was receiving ESOL services. All four of the teachers wanted to know if the student was currently receiving special education services for either language or speech disabilities. One teacher asked for observations of the student during testing (e.g., any distractions?). One teacher asked if the student had any history of short-term or long-term memory difficulties, or other "processing deficits." And one teacher wanted to rule out questions about cognitive skills by knowing the student's IQ before assuming what needed to occur to support this student's learning needs.

Expert review of first grade case study. When provided with the first grade case study, as with the kindergarten case study, the PMRN reports were used to identify student strengths and needs. And as with the kindergarten case study, the experts highlighted the information contained in the cumulative report to determine the student's progress through the year and examined the student's performance on vocabulary (PPVT) and comprehension (SAT-10). Specific to this case, the experts identified the student's primary area of need to be phonics, or letter-sound correspondence skills. Next, they targeted oral reading fluency as a need for intervention. The experts identified phonological awareness skills as a relative strength for the student. Based on the comparison of the SAT-10 and Peabody vocabulary results and the student's ORF score, the experts hypothesized the student was compensating somewhat for comprehension despite having low vocabulary and low oral reading fluency skills. The experts viewed the case as requiring an individual focus, as the Student Grade


Summary Report indicated the rest of the class was performing above the expected benchmarks on the NWF score. Finally, the experts highlighted the importance of looking at more than just the colors of the reports but also the numbers, in order to determine just how far above or below the benchmark goal a student's performance was.

The experts wanted additional information, as with the kindergarten case, in order to determine what the student was already able to do and to learn what interventions or services had been attempted to date. Specifically, regarding the student's skills in phonics, the experts wanted to know what letter sounds the student did know. They wanted to view the student's NWF protocol to identify patterns of errors and blending skills. They wanted to explore more formally whether the student's primary area of difficulty was a problem with decoding or a problem just with oral reading fluency, as they indicated the student's NWF was just below the expectation for the assessment cycle. Experts advocated that the student be monitored more frequently for progress using DIBELS probes available from the official DIBELS website from the University of Oregon. In this context, experts reported several resources were available on the FCRR website to assist teachers in the use of ongoing progress monitoring.

First grade teachers' review of first grade case study. All first grade teachers recognized the Class Status Report and indicated using that report often. All reported knowledge of how to interpret the color coding system used in the reports. Reports were used by the teachers – the Class Status Report in particular – to identify student strengths and weaknesses. Only one of the seven teachers valued using the Student Grade


Summary Report and demonstrated an understanding of how to use it. However, she reported that the box-and-whiskers format was mostly used to examine her whole class rather than a specific student in relation to the class. The report she was referring to was likely the Class Summary Report. All of the other six teachers reported no value in using the Student Grade Summary Report, citing that it was too confusing visually to interpret. A few of these teachers recognized the cumulative report, with some of them finding value in its use. One teacher in particular used that report first to look at the student's progress over the year, similar to how experts prioritized use of the same report in their examination of the data.

All of the first grade teachers used the reports to identify student strengths and weaknesses. Greater consistency was observed in first grade teachers' interpretation of the data as compared to kindergarten teachers. All of them identified the student's needs in the areas of phonics and reading fluency. They identified the student's strength to be in phonemic awareness. Most teachers described various activities to teach the student phonics, oral reading fluency, and sight word vocabulary. Some of these teachers indicated a preference for using alternative reports not included in the case study review in order to evaluate student progress: the Class Historical Report and the Class Recommended Level of Instruction Report.

Regarding additional information needed or requested, five of the seven first grade teachers commented. Specifically, three first grade teachers wanted to view the student's NWF protocol to observe error patterns. Three of the first grade teachers wanted to know if the student was already receiving special education services. And if


so, was the student suffering from any disabilities that would impact their learning of phonics (i.e., ADHD, language impairments, etc.)? Some wanted to know about observations of the student during testing (e.g., any distractions?). Was this student learning English as a second language? Was he/she receiving ESOL services? Some wanted to view the student's error patterns on the Oral Reading Fluency subtest protocol. Finally, as observed with the expert opinions, some of these teachers wanted to know what interventions had been attempted so far and what background information was available on this student.

Expert opinions on the use of DIBELS data at Reading First schools. Following the review of both case studies, each expert was asked to elaborate on particular comments made through the course of the case review in order to understand their perspective on what teachers should or should not be expected to do when using DIBELS data. From their view, DIBELS data did not tell one how to fix a problem; only that there was a problem and the area of reading development where the problem likely existed (e.g., phonics or oral fluency). That is, because DIBELS was just a snapshot, a single moment in the student's life, it was not expected that the benchmark data would be completely useful for instructional planning. However, the experts indicated that where a student was identified as having some difficulties in reading, it was important to examine that student's skills further, either through more formal diagnostic testing or in response to interventions provided. Thus, it was the opinion of the experts that for any student who was having difficulties as measured by the DIBELS, and where there was confidence that the


data were reliable, ongoing progress monitoring should occur at a frequency appropriate for the level of need.

The experts reported that if the student's recommended level of instruction was identified in the reports as "intensive," then the student should likely be monitored weekly to bi-weekly to gauge the effectiveness of interventions being provided. They reported that if the student's recommended level of instruction was identified as "strategic," then progress monitoring should occur bi-weekly to monthly. It was the opinion of the experts that ongoing progress monitoring was more appropriate for instructional planning and decision making. Regarding ongoing progress monitoring, the experts were asked to comment on the number of samples needed as compared to benchmark assessment (i.e., ORF on benchmark assessments requires the use of three reading probes). These experts indicated that only one probe was needed per session when engaging in progress monitoring.

Experts also indicated that when analyzing the data through PMRN reports, it was important to determine whether a student's area of difficulty was isolated to just that student or reflective of a larger number of students in the class or even the same grade level. This view was consistent with reports by teachers who met with their Reading Coach as a grade level team to examine the DIBELS data across the grade and classroom levels. They advocated that educators make hypotheses about why patterns of need exist and then establish plans to improve outcomes, much like what some teachers reported was occurring when their grade level team met with the Reading Coach.


In this same context, the experts advocated for educators to use DIBELS through a multiple data-source approach, using the data with existing data from other assessments to make instructional decisions. The experts also placed high value on teachers' knowledge of a student's background when interpreting DIBELS information. However, these experts also expressed concerns regarding teachers' ability to make use of multiple sources of data, especially when these sources do not correspond initially, noting that most teacher training programs do not adequately prepare teachers to analyze such data. Using DIBELS data alone to make instructional decisions was cautioned against, even though DIBELS was highly useful for making some broad conclusions about how to support a grade level, class, or student.

It was in the context of this concern about teachers' need for support to engage in more formal data analysis and instructional planning that the experts indicated awareness within their organization that much variability existed among schools in the ability to use data to guide instruction (not just with the DIBELS). They both reported that the Reading Coach was an invaluable source of support for schools and teachers, not only to provide support on the interpretation of data, but also to help with using the data to make instructional decisions and to provide ongoing professional development for teachers. In relation to ongoing progress monitoring, the experts stated such activities were not happening nearly as much as they would have liked to see, but it was hoped, through ongoing support from their organization and from the schools' Reading Coaches, that educators would be more prepared and influenced to engage in ongoing progress monitoring.


Attitudes and Perceptions Among Persons Other than Teachers

The third research question was investigated through focus groups involving (1) Reading Coaches and (2) "specialists." Specialists in the present study involved a mix of student services personnel from the areas of school psychology, academic diagnostics, and ESOL. These specialists worked as itinerant staff members who were assigned to as many as two to perhaps five different schools, with the exception of the ESOL teacher, who was assigned to only one school. Their view of DIBELS was expected to provide a comparative view not only among different Reading First schools, but also between Reading First schools and non-Reading First schools.

Inclusion of this research question not only allowed for an examination of non-teacher perspectives on the use of DIBELS, but also provided a way of examining the validity and reliability of teacher perceptions through triangulation using a multiple-sources method (Miles & Huberman, 2002; Patton, 2002). Results of both focus groups were organized in the same manner as that used with teacher interviews for two reasons. First, it provides a consistent organizational framework. Second, the focus groups occurred after all teacher interviews had been completed; thus the focus groups were more effectively used for comparing non-teacher perspectives with teacher perspectives.

Climate and culture of schools. Reading Coaches and specialists both reported a belief that teachers in the district were very overwhelmed. Reading Coaches perceived teachers as not having enough time to complete their assessment responsibilities and as having too much paperwork to complete. Reading Coaches perceived teachers as viewing


the DIBELS as just one more thing imposed upon them. Reading Coaches perceived teachers as placing limited value on the DIBELS because it was one more thing they had to do that interfered with their instructional time.

Specialists characterized the climate as being very punitive for teachers, which led to pressures to "teach to the test" in order to avoid negative professional judgments by their school administrators. Reading Coaches did not perceive a punitive environment, but cited barriers to the consistent use of DIBELS among teachers stemming from the requirement that teachers collect data on students that has little instructional relevance. This perspective was evident among some kindergarten teachers who specifically cited the district's entrance assessment as being lengthy and having little utility for instructional planning.

Specialists shared their comparative perspectives on Reading First schools and non-Reading First schools. Specifically, they indicated that the number of referrals for psychoeducational evaluation for consideration of special education eligibility was much lower at Reading First schools than at non-Reading First schools and had dropped at Reading First schools due to the implementation of the grant. Specialists also reported that teachers at non-Reading First schools were more likely to use DIBELS data to refer a student for special education consideration even when other assessment data suggested the student was performing at grade level.


173 reported students at Reading First schools were not demonstrating as much difficulty in phonemic awareness and phonics skills as those refe rred at nonReading First schools based on the nationally norm-referenced standard ac hievement tests they usually administer for students referred for psychoeducatio nal evaluations. They attributed this observation to the increased amount of explicit and direct teaching of phonemic awareness and phonics skills at Reading First schools. Both Reading Coaches and specialists emphasized the importance of school leadership setting the right climate and culture fo r using DIBELS data effectively. Reading Coaches in particular, being assigned to on ly one school, full-time, had the most to say about this topic. They stated a high value for data analysis was likely to exist, school-wide if the administrators for that school v alued such data analysis and utilization along with holding staff accountable for using the information generated from such analyses. Reading Coaches reported the issue of wh y some leaders may not demonstrate explicit support for DIBELS had probably less to do with their perceived value as much as it was the competing demands of other responsibi lities placed on them by the school district. That is, Reading Coaches perceived some principals were not able to demonstrate as much open and explicit support for t he use of DIBELS (e.g., attend data analysis meetings with grade level teams) because t hey were burdened with much responsibility in other areas of school operation. It was the opinion of Reading Coaches that in many cases, to help support school leaders in the use of DIBELS data, it was necessary to take the data to the principal and communicate often with them. Reading Coaches saw t heir role as important for this set


174 of efforts and felt an obligation to demonstrate, c onsistently, the utility of the DIBELS data. Further, they perceived some principals may not have to review the data often, but entrust a few to do that and provide input to staff Overall though, Reading Coaches articulated a need for a school culture that genera lly embraced the use and analysis of data to make instructional and curricular changes. They insisted that teachers and administrators alike, many times, need to be led to using the data. Supports/Resources available to support use of DIBE LS. As observed consistently through teacher interviews, specialist s identified the Reading Coach as the single most important variable supporting teacher’s effective and efficient use of DIBELS data. Specialists cited Reading Coaches as having the responsibility for providing training on the use of DIBELS assessment team each year and in some cases before each benchmark cycle to ensure standardizati on procedures are known and followed for administering DIBELS. A unique perspective found among specialists was th at they perceived the Reading Coach should have had greater involvement a nd a stronger role in the coordination and development of interventions for s tudents who were having difficulties in reading. Specifically, specialists cited the av ailability of a binder of student center reading activities provided by the Florida Center f or Reading Research, but indicated a perception that teachers have very little time to u tilize such a resource effectively enough. They suggested the Reading Coach could help with th is by being familiar with the activities and offering recommendations to teachers when they meet with them to go over their grade level DIBELS data.


When asked to comment on their job description, Reading Coaches cited what amounted to a long list of responsibilities. These included providing technical support (e.g., data collection, analysis, and utilization) at the grade, classroom, and individual levels; modeling lessons involving research-based interventions or strategies; ensuring fidelity and integrity of instructional approaches or programs being implemented in a grade level or classroom; supporting teachers who have ongoing questions about assessments or instruction; providing on-site training on the use of DIBELS; demonstrating the utility and value of DIBELS data; and coordinating data collection efforts. Regarding the demonstration of the utility and value of DIBELS data, Reading Coaches reported a variety of methods used to meet this job responsibility. Specifically, they cited meeting with teachers individually or in grade level teams. They engaged school leaders and leadership staff through meetings about the data. They helped develop action plans to address student growth and improve outcomes. And they also cited the use of students' actual data protocols with staff in order to assist staff efforts to interpret the data.

Regarding the coordination of DIBELS data collection efforts, Reading Coaches confirmed the benchmark assessments as occurring three times a year. They also reported on efforts to engage teachers in ongoing progress monitoring activities. Reading Coaches reported teachers were generally starting to see the value of progress monitoring. Reading Coaches voiced uncertainty about how well teachers would embrace further progress monitoring efforts without support and encouragement to do so. When asked about their perceptions of why teachers may be reluctant to engage in more frequent


progress monitoring, Reading Coaches suggested it may be due to fear of the results and/or to the limited time available among all the other assessment responsibilities they have.

Reading Coaches cited efforts to increase teachers' valuing and use of DIBELS. They reported one approach involved working with the teacher to understand a student's background or the conditions of the testing situation in an attempt to understand why a student did not perform as expected and to develop interventions to support the student. Reading Coaches indicated that they believed teachers found value in data analyses when they saw a direct link to making plans for improving students' performance that involved others, so that the teacher was not left alone to carry out all aspects of the intervention. Reading Coaches also reported that it was beneficial to hold individual conferences with teachers and grade level teams to help them interpret the data and determine action plans in response to the data collected on students.

Specialists also voiced their perceptions of what they felt helped to increase teachers' valuing of DIBELS data. They indicated teachers were more likely to perceive value in using DIBELS if they perceived they had the time to do so. They also believed teachers were more likely to access the PMRN if the teachers had consistent access to color printers. Specialists reported that teachers appreciated the use of graphs and preferred to interpret the DIBELS data through graphical representation. However, because of the overwhelming amount of testing that was taking place in the schools, specialists felt teachers appeared less motivated to engage in such analyses. Overall though, specialists perceived teachers' value and use of DIBELS to have increased with


each successive school year; a perception that was also stated by some of the teachers who participated in the present study.

Specialists described several approaches that they found to be helpful in leading teachers to greater acceptance and increased value for using DIBELS. One such strategy was to follow up with teachers after each assessment cycle to share the results quickly and explain any qualitative observations of the student that were written on the protocols. Another approach described was to sit with the teacher, review the protocols of students who were struggling, and assist the teacher in analyzing the patterns in the data. Finally, one approach that had been observed to be successful among the specialists in helping teachers to value the use of DIBELS involved going to the classroom before assessment cycles to give students an advance understanding of what they would be asked to do and to give directions for how to participate in the assessment process. This last approach was consistent with reports by teachers on the value of having DIBELS assessors visit the class to ensure the results obtained were valid and reliable.

Teachers' knowledge of DIBELS. Reading Coaches and specialists were asked to share their opinions on teachers' understanding of DIBELS. Specialists reported, similar to reports by teachers, that schools have evolved in their acceptance and use of DIBELS over time. Specialists and Reading Coaches confirmed previous teacher reports regarding teachers' concerns about students being tested by people other than the classroom teacher and about being timed on the DIBELS. Reading Coaches reported believing that teachers perceive the DIBELS as another high-stakes test that students need to pass. Both focus groups reported teachers were concerned about students being


timed and that teachers saw less validity in the use of DIBELS because of its timed aspect.

Regarding students being tested by persons other than the classroom teacher, Reading Coaches and specialists also indicated a concern about students being unfamiliar with the person who was testing them. Several procedures were shared that were used at schools to ensure students were demonstrating their true abilities during the DIBELS assessment. Reading Coaches reported spending time in the classrooms – especially kindergarten classrooms – prior to testing cycles in order to help students become familiar with who would be testing them and/or help them understand what they would be asked to do. Additionally, Reading Coaches spent time trying to help kindergarten and first grade teachers understand that the first cycle was not a reflection of their teaching as much as it was a measurement of the students' incoming knowledge and ability to follow directions. Reading Coaches reported teachers seemed to feel pressured that they would be held accountable for the first DIBELS cycle despite the Reading Coaches' attempts to alleviate such concerns.

Regarding student reactions to being tested, Reading Coaches and specialists reported very little concern beyond making sure students were familiar with who was testing them. Aside from that, no major concerns were reported by the focus groups about students being timed on the DIBELS. Rather, both groups reported students often appeared very comfortable and willing to engage in the task because they were used to the assessment. For example, specialists reported students in all grade levels often indicated they had seen the NWF "SIM and LUT page" by automatically reading "SIM"


and "LUT" before the instructions were given. That is, specialists and Reading Coaches did not report, as did some teachers, that students had any anxiety or concerns about being asked to read nonsense words.

Reading Coaches and specialists reported their perceptions of teachers' use of the NWF subtest and their efforts to support teacher understanding of the NWF subtest. First, both groups reported that they thought teachers perceived the NWF subtest as confusing because teachers believed students were getting confused when trying to make real words out of the nonsense words, as they had been taught to do with unknown words. However, both Reading Coaches and specialists reported students had little concern about participating in the NWF subtest. They indicated that after the students got used to it, they no longer tried to read the nonsense words as real sight words. Specialists emphasized the importance of the Reading Coach in helping teachers and staff to understand the importance and usefulness of the NWF measure. Reading Coaches reported trying to educate teachers about the NWF subtest being a measure of decoding skills rather than sight word reading.

And though Reading Coaches understood what teachers were trying to express, Reading Coaches reported observing a more systemic pattern among students on the NWF subtest. Specifically, Reading Coaches reported seeing large numbers of first grade students decrease in their NWF performance during the second DIBELS assessment cycle. According to Reading Coaches this finding was related to students being taught long vowel sounds at that time of the year – observations of students over-generalizing a new skill. Thus, when observing the errors among first grade students on the NWF subtest, large numbers of the


students were using long vowel sounds. Reading Coaches reported this phenomenon often had disappeared by the end of the year, during the last DIBELS assessment cycle.

Both Reading Coaches and specialists agreed that teacher access to the scoring protocols was an invaluable method for identifying students' instructional needs related to emerging phonics skills. Specialists and Reading Coaches advocated for DIBELS testers to take detailed notes about the students they tested, or some kind of indicator to remind them to go back to the teacher and share information beyond the score. Both groups reported high value in the use of the student protocols to help make the data more valuable for teachers and to assist data analysis beyond the color coding system.

At non-Reading First schools, specialists reported observing teachers refusing to use the results of the NWF subtest because, they believed, teachers disagreed with using nonsense words to teach reading. Specialists further reported that it appeared teachers at non-Reading First schools had less understanding of the value and utility of the DIBELS. And aside from the use of nonsense words, Reading Coaches and specialists reported a unique concern in that the directions provided to students on the NWF measure offer the student the choice of either reading the nonsense words sound-by-sound or as whole words. Some students reportedly will sound out the word sound-by-sound and then read the whole word, thus losing time and negatively impacting their final result. Both groups questioned whether such patterns are truly emerging characteristics of the student's performance or merely demonstrate what was modeled for them in the practice trials.


Both Reading Coaches and specialists shared ideas on how they have attempted to support teacher understanding of the DIBELS and increase its perceived value and utility. Both groups reported that sharing the research on the development and characteristics of the DIBELS would be, and in some cases had been, valuable for helping teachers to see the utility of DIBELS. Specialists believed it would offer teachers greater context and a "big picture" understanding of why the DIBELS was so widely encouraged for use. Specialists also indicated that school administrators need to know the research on DIBELS as well. Specialists believed that even though an awareness of the research would not help administrators know how best to interpret the assessment results, it would at least help them appreciate its use as a reading assessment.

Reading Coaches added another approach to helping teachers use DIBELS: working with them in the classroom to set up various center activities and model instructional approaches (e.g., Word Work). They also reported helping the teachers to use student assessment data to assign students to specific center activities. They reported working with teachers to analyze the NWF data and other DIBELS subtests to help teachers find the patterns in the data.

Collecting and using assessment data. In general, Reading Coaches indicated that they thought teachers felt more comfortable with and trusting of the DIBELS data when the same person collected the data each cycle for their class. Specialists identified different data collection procedures at different schools. The choice of which procedure was used at a given school appeared to have been influenced by what seemed possible at that time for the school (e.g., staff available, schedules, structural arrangement


of the building, etc.). Regardless of which procedure was used, specialists reported that the efficiency of the process of collecting DIBELS data had steadily improved year to year. Both groups reported the same two types of procedures for collecting data school-wide as reported by teachers.

Given their unique position in the school district, specialists described unique views on assessing special populations with the DIBELS (e.g., ESOL, special education, etc.). They reported that it was helpful if the same person who worked with the student or students continued to be the person who assessed them in the future, or at least made sure the students were very familiar with the assessor. They perceived that students who had special needs or circumstances would show a truer or more reliable performance with someone they knew and had worked with. Specialists reported it was helpful to have a system in place that communicated specific student circumstances or characteristics that would need to be considered before working with such students, regardless of who collected the information. Additionally, specialists reported value in sharing results with teachers immediately after testing, including observations and qualitative notes.

The perceptions of Reading Coaches on the teachers' use of DIBELS data were more extensive. Overall, Reading Coaches agreed that teachers' acceptance of DIBELS had evolved over time with each successive year of implementation. In the beginning, according to this group, teachers accepted DIBELS data that validated their expectations of a student and did not value DIBELS data for students who did not perform well on it. Reading Coaches reported it took approximately 2-3 years for teachers to see the value of DIBELS, and they observed some teachers found the DIBELS more reliable because


someone else conducted the assessment, which was similar to what was reported by one of the teachers interviewed. At the time of the focus group, Reading Coaches reported teachers were just starting to show an interest in learning how to administer the DIBELS in order to see for themselves how students perform and respond to the assessment.

Reading Coaches reported that they believed teachers were so inundated with data that they were unsure about how to use all of it together. They suggested that much variability existed in teachers' willingness to take the next step in data analysis and data utilization; an observation that was consistent with those of the DIBELS experts who were interviewed in this study. Reading Coaches advocated that more support and training be given to teachers on how to use multiple sources of data to make decisions about a student's needs. They reported that many teachers were still having difficulty seeing the correlation between fluency and comprehension, not to mention how to interpret the DIBELS scores.

Reading Coaches indicated some teachers were putting too much emphasis on the DIBELS at the expense of ignoring other measures/assessments. This observation was in contrast to teacher reports about feeling the primary focus on the DIBELS emerging from administrators and student services personnel. Specialists shared similar statements about teachers needing access to more training about the DIBELS. They believed that teachers showed a preference for Running Record assessments in first grade. They thought teachers did not have enough information about the DIBELS to appreciate its use as a reading assessment. They also reported a belief that teachers needed more assistance on


how to use DIBELS data to organize their instructional groups and to modify these student groups through the year based on the data.

Reading Coaches shared their views on the use of PMRN reports by teachers and administrators. Overall, this group reported believing that teachers needed continued help and training on how to use the PMRN reports. It was the perception of Reading Coaches that kindergarten and first grade teachers often appeared more proficient at using the reports. This group indicated some teachers were simply more proactive in using them and more independent at accessing their own reports online.

Reading Coaches thought the increasing proficiency with and use of PMRN reports was related to the evolving acceptance and understanding of DIBELS among teachers as a whole. However, they reported the skills involved in data analysis were less tangible, much like the observations reported by DIBELS experts on this topic. Reading Coaches advocated for more teacher supports to teach staff how to analyze data independently. In summary of the PMRN reports, Reading Coaches stated that they believed teachers would embrace using these reports if they saw the value in them. Reading Coaches expressed concern about what supports would exist to help teachers find value in the reports when the grant expired.

Specialists offered additional recommendations on how to support teachers in their development of data analysis skills. They suggested teachers needed to put more emphasis on student growth rather than focusing solely on a student's level or color of performance. Specialists argued educators needed to be watching the student's trend rather than a single performance on one assessment cycle. Reading Coaches also stated a


need to focus on student trend and growth rather than a single snapshot in time. Thus, specialists and Reading Coaches saw a need for teachers and schools as a whole to engage in more ongoing progress monitoring and to attend training on how to utilize multiple sources of data. These observations and recommendations were consistent with those offered by the DIBELS experts who participated in the study.

Regarding ongoing progress monitoring, specialists reported that they believed teachers were starting to engage in some progress monitoring, but that teachers were having problems with how to do it for a variety of reasons. Specialists described teachers as having limited time available to engage in progress monitoring. Specialists perceived teachers weren't getting as much out of it as when someone else had done it and brought the information and observation notes to the teacher. Specialists also cited the lack of available space to conduct the ongoing assessments without distraction or interruptions. They also indicated a lack of personnel to assist the teacher in covering classrooms while they engaged in progress monitoring activities. Overall though, specialists perceived teachers at Reading First schools seemed more ready to understand the role of progress monitoring than teachers at non-Reading First schools.

A final area of discussion regarding teacher use of DIBELS data involved cautions and advice from specialists and Reading Coaches, respectively. Specialists, when asked to comment on their perspectives on teachers administering the DIBELS themselves, gave a one-word response: "unrealistic." Specialists reported teachers would need extra personnel to watch the rest of the class, or else the reliability or validity of the DIBELS may be compromised due to breaks in standardization or assessment error. This


group cited the current state of testing for kindergarten teachers, in which the district assessment required one-on-one testing, as a barrier to having kindergarten teachers take responsibility for administering the DIBELS. Teachers had already reported – and specialists gave similar observations – that because the district assessment in kindergarten took several weeks to complete, they were challenged to both test and conduct instruction at the same time. The specialists argued that even if kids were provided with independent activities, a teacher conducting DIBELS assessments alone could take weeks to complete a whole class.

However, specialists indicated some kindergarten and first grade teachers who had taken the initiative to conduct their own DIBELS assessments had been observed to not need someone to come back and interpret the data for them. The overall concern or caution offered by the specialists was that a standard requirement for all teachers to administer the DIBELS, without providing them support or at least removing some of their current assessment responsibilities, was unrealistic; at the very least because of the large variability that currently existed among teachers' abilities to conduct the assessment themselves. The specialists suggested teachers should be allowed to volunteer rather than be mandated to conduct the DIBELS assessment on their own following the expiration of the grant.

Reading Coaches gave several pieces of advice and made cautionary comments on the current and future use of DIBELS at schools. They reported critical decisions, such as retention and special education consideration, were being made on very small snapshots of student performance. Reading Coaches reported concerns about the fidelity of


use among non-Reading First schools. Reading Coaches also expressed concerns about teachers administering the DIBELS themselves for the same reasons as cited by specialists: limited time and little assistance. On the topic of PMRN reports, Reading Coaches advised that the PMRN should generate graphs reflecting the correlation between students' oral reading fluency and later performance on the third grade FCAT in order to influence teachers' use of DIBELS.

In summarizing these concerns and advice for current and future use of DIBELS among kindergarten and first grade teachers, Reading Coaches emphasized the importance of their role in making sure new teachers have the support they need to learn how to use the DIBELS effectively. They cited the need to support teachers who are at different levels of understanding and proficiency in the use of DIBELS. Reading Coaches were concerned about who would provide guidance and direction for coordinating data collection and analysis activities when their role was discontinued. They reported that currently at their schools there was no one available or capable of adopting these responsibilities – "it's a full time job," as one Reading Coach added.

If teachers were asked to take over the responsibility of administering the DIBELS, Reading Coaches stated teachers would need more planning time, other assessment requirements taken away, and/or additional personnel to help with covering classrooms during assessment cycles. Reading Coaches argued the district needed to prioritize the assessments they are demanding teachers to use.


Analysis of Hypotheses/Researcher Expectations

At the start of this study, the researcher provided several hypotheses about what would be expected to be occurring in the field prior to conducting the present study. These hypotheses were provided for guiding investigative efforts and adding credibility to the study, as well as to guide efforts to minimize any potentially negative influence of the researcher's biases. The hypotheses included the following:

1. There exists substantial variability among teachers' perceived value of DIBELS in assisting their students' learning needs.

2. There exists a relatively high level of variability in the perceptions of non-teacher participants involved in the implementation of procedures for using DIBELS at the school-building level.

3. Given the various other assessment tools used at the classroom level and the overlapping schedules of providing those other assessments (e.g., FCAT, county-wide assessments, Lexile assessments, Running Record assessments, etc.) within a school district, teachers' perceptions of using DIBELS are negatively impacted.

4. Teachers understand what DIBELS is and what it measures, but are discouraged or unsure about how to best utilize the data obtained.

5. Given the multiple competing demands and seemingly fast-paced nature of school activities, teachers are not accessing their class/student reports on the PMRN, but rather are provided such reports, if any, from the school's Reading Coach or school-based DIBELS team.

6. A low level of direct involvement and use of PMRN reports serves as a barrier for utilizing the data effectively or efficiently.

Having provided the results of the study in relation to the stated research questions posed from the start, the above hypotheses were analyzed. Evidence was found for confirming the first hypothesis through teacher interviews. However, variability was found not only across teachers but within each teacher participant. It was not possible to simply classify each teacher as merely being in favor of or against the use of DIBELS overall. In fact, all teachers voiced various reasons for positively valuing the DIBELS, and concerns for its use.

Hypothesis number two could not be confirmed given the results that were obtained through the Reading Coach and specialist focus groups. Review of transcripts from those focus groups actually found very little, if any, variability in their perceptions of teachers' use of and value for the DIBELS.

Results from teacher interviews would suggest that hypothesis number three is credible. Several teachers spoke of the pressures and lack of time related to the numerous assessments that were conducted in their schools or classrooms. In particular, kindergarten teachers reported that their district assessments took as much as three weeks to complete for a whole class. Some evidence existed that teachers were not necessarily against the use of DIBELS, but rather, because of the various other assessment demands and responsibilities imposed by the school district, they perceived the DIBELS as just another test they had to use. Some teachers reported they discovered how to use DIBELS with other assessments, while other teachers continued to struggle with how to make sense of data from the DIBELS when it did not correspond with other assessments they used.

The fourth hypothesis could neither be confirmed nor disconfirmed. Rather, it seemed that teachers who had a relatively strong understanding of the DIBELS valued the use of PMRN reports more than teachers who had less understanding of the DIBELS. However, it appeared all teachers were having difficulty linking the results of DIBELS with instructional decision making. Moderate variability was found in the ability to use the DIBELS data to make decisions, but all teachers interviewed in this study indicated a need for more support towards this objective. Additionally, some teachers asked for more support from available school staff to analyze the data for them, while others asked for more training to learn to analyze the data themselves.

Some evidence existed to support the fifth hypothesis. All teachers reported limitations to using the reports available to them through the PMRN. Some of those limits involved access to color printers to make optimal use of the color code system the PMRN reports use for visual analysis. Given this, teachers were reported to be highly dependent on the Reading Coach or other school personnel for accessing color reports. Other limits involved teachers forgetting their passwords and/or usernames. Some teachers indicated not having enough time to look through the website and make use of the information that was available to them. It appeared that those teachers who had a higher value and understanding for the DIBELS reported accessing the PMRN more than those who valued the DIBELS less. All of the teachers indicated accessing the PMRN to use the parent letter, which was reported to be very valuable to all the teachers for communicating to parents what the DIBELS measures and how parents should interpret their child's scores.
Regarding the last hypothesis stated at the start of the study, it could neither be confirmed nor disconfirmed. Rather, it appeared several variables existed as barriers to teachers' use of DIBELS data. Though some teachers reported having difficulty making the most of the PMRN/FCRR website, many of them indicated a lack of training on what reports were available and which ones were suitable for different types of analyses. Aside from the PMRN reports, it seemed evident that teachers preferred their own district or classroom assessments over the DIBELS because they administered them and it allowed them to observe the child in the testing session. Teachers voiced positive value in being able to observe the student during testing in order to gain an understanding of what the student can do. But most of them declined to be trained on how to give the DIBELS, citing time and workload as barriers. Even though teachers described the district assessments as burdensome, time consuming, and/or an interruption of instruction (all of them indicated the DIBELS was more time efficient), many still preferred to give the district assessment over the DIBELS because they were the ones who gave it. As was observed in statements made by Reading Coaches and specialists, it may not be realistic to expect teachers to take responsibility for giving the DIBELS unless something else is removed from their assessment responsibilities.

Analysis of Unanticipated Topics

Several topics emerged throughout this study which were unanticipated by the researcher. The first concerned the use of the NWF subtest. It was surprising that even teachers who reported high value for the DIBELS also reported concerns with using the NWF subtest. Teachers reported that students were confused about how to perform on this test because they were attempting to apply strategies they were taught in the classroom for identifying unknown words. Some of these strategies involved using picture clues or context clues. It was also reported that students were generally taught strategies to sound out unknown words and think of words they knew that were similar and/or made meaningful sense.

It was interesting that Reading Coaches, though they did not share the same heightened concern for the NWF subtest, indicated a larger pattern among first grade students, who were being taught long vowel sounds in the classroom and then tried to apply those long vowel sounds during the NWF subtest. The Reading Coaches indicated this appeared to be most evident during the second DIBELS cycle in the first grade, but was less of a concern by the end of the year.

Another unanticipated concern was teachers' perceptions of the DIBELS as a "timed test." Only a few of the teachers characterized the DIBELS as a fluency measure and were able to compare and contrast it with other assessments in terms of the difference between measuring accuracy and measuring fluency. But most teachers perceived the timing of students as unnecessary at the least, or developmentally inappropriate at the extreme. Some teachers expressed great concern about students being anxious or stressed because of being timed on the test. And yet, no teacher could recall a specific instance where a student voiced fear or concern about being tested with the DIBELS.

Reading Coaches and specialists did not report any instance of students feeling anxious or intimidated by being tested with DIBELS. However, they did report that for the first cycle in kindergarten, students were not familiar with it – and teachers were advised that the results for kindergarten students on the first cycle were not a measure of their classroom as much as a measure of what students' skills were upon entering school and how well they could follow directions. By the second cycle, there was no concern reported by Reading Coaches and specialists about the testing of students with DIBELS. When asked why teachers perceived the timing aspects of the DIBELS negatively, Reading Coaches and specialists suggested teachers often perceived the DIBELS as a high stakes test that students were supposed to pass.

Teacher use of the parent letter was unanticipated. All teachers reported using it and valued it as a tool for communicating with parents about their child's reading skills. Some teachers reported modifying the reports to highlight the sections most relevant for parents. Others reported value in the recommendations it provided parents to support their child at home. And others reported value in how the parent letter described what the DIBELS are and what they measure.

It was anticipated that teachers who had been teaching at a Reading First school for at least two years of implementation would have had sufficient training on not only how to understand what DIBELS measures, but also how to interpret the scores. Though some teachers, as reflected through the case studies, interpreted the data in similar ways as the DIBELS experts, some teachers demonstrated a continued need for support. Mostly, it was reported that teachers were still very highly dependent on their Reading Coaches for understanding what the DIBELS can offer them and how to use the data it provided.
Many of the specialists reported concerns for the future use of DIBELS when Reading Coaches would become unavailable following the expiration of the grant. A few teachers indicated they would likely be okay with interpreting and using the data without their Reading Coach but did not see how the school would be able to support the data collection efforts without the Reading Coach.

A final unanticipated result concerned the presence of unreliable DIBELS data. Reading Coaches and specialists both indicated a need to review standardization procedures among their data collection teams to maintain a reliable collection of the data. Only one teacher of those interviewed indicated retesting students as an option when measurement error was suspected. DIBELS scores are more stable indicators of performance when repeated assessments are given (Kaminski & Good, 1998). The benchmark assessments serve conceptually as screenings for identifying students who may be at risk for reading difficulties. When a student was identified as having difficulty and that identification was found to be in contrast with what was known about the student either by observation or other assessments, there did not appear to be any process in place at the schools sampled for checking the accuracy of the data for that particular student.

Related to this issue was the use of DIBELS as a progress monitoring tool. DIBELS experts interviewed in the present study confirmed that very little progress monitoring was occurring, not only with DIBELS, but with other data sources in the schools. And yet, the DIBELS experts reported that progress monitoring data were more useful for making instructional decisions for students than the benchmark data, which only represent performance at one point in time. This perspective was consistent with the concerns voiced by Reading Coaches regarding high stakes decisions such as retention and/or access to special education based on DIBELS benchmark data. Importantly, Reading Coaches, DIBELS experts, and specialists reported that progress monitoring with DIBELS was only just being introduced to teachers. Thus, it was not surprising that little progress monitoring was occurring. However, these non-teaching professionals expressed concerns about the appropriateness and/or possibility of asking teachers to conduct their own progress monitoring assessments with DIBELS when they were already engaged in a high level of assessment in their classrooms.

Analysis of Qualitative Research Process – Researcher's Reflections

General introduction. This next section provides the reader with an account of the researcher's reflections through the course of the study as a means to offer an audit trail regarding the changing dynamics of the research process as experienced by the researcher. To provide a fluid reflection of the research process, this section was written in the first person and in chronological order. It is expected that the following will further provide the reader with a rich and thick description of the research study towards an evaluation of the credibility and integrity of the data and its interpretation (Patton, 2002).

Research proposal and preparation for conducting the study. The research proposal was presented in May of 2006. Several recommendations were provided by the doctoral committee to revise specific aspects of the study and its methodologies. These revisions were completed by the end of the summer of 2006, with the exception of one recommendation which involved the development of more knowledge of qualitative research methodology. To meet this need, several months were used to further review the research literature on qualitative methodologies. Additionally, the field instruments that were developed for the study (e.g., guided interviews, focus group guides) were shared with researchers at the Louis De La Parte Florida Mental Health Institute (FMHI) who were recommended for their expertise in qualitative research design.

During the months of January and February of 2007, I consulted with research experts at FMHI to develop sufficient knowledge of qualitative methods and to ensure that my instruments were appropriate for use. Having received feedback from these expert researchers, I revised my instruments as needed and kept notes of all advice given for later consideration. One particular recommendation provided by these experts was in regard to the focus groups. They suggested that I enlist the help of someone trained in facilitating focus groups to provide greater credibility. Specific names were obtained for contact to solicit their support, in addition to recommendations to contact the department of anthropology for possible help. I was unable to find anyone willing to help without requiring a modest fee for their service. Given the financial limits of the study, I was unable to hire anyone to facilitate the focus groups.

I shared this experience with a couple of doctoral committee members and was advised that there was no expectation that I must have someone else conduct the focus groups, though it would be allowed if I could find someone. Having no success in finding someone with sufficient experience at an affordable cost, I prepared to conduct the focus groups myself and enlist the help of graduate students who had experience with facilitation to serve as scribes – to assist in keeping notes through the focus group meetings.
Graduate students were identified with help from committee members and enlisted to participate at a later time when the focus groups were scheduled.

Accessing the research site and participants. Having received approval from the doctoral committee, and while soliciting consultation advice from additional research experts, I engaged in the process of gaining approval from both the University IRB and the school district's own research and accountability office. Though my original plans for accessing the schools and participants were approved by the University IRB, they were not acceptable to the school district's research office. I was advised by the school district personnel that I would have to modify the procedures for selecting participants, as I would not be permitted to contact teachers directly, though I could send invitations or similar documentation announcing the opportunity for participation. Further, I was advised by participating school district personnel that I would have to submit any documents to the school building principal for authorization before obtaining consent from any teaching staff.

All of the above required changes to my proposal regarding procedures for recruiting participants and accessing sites. However, it also required adjustments to the manner in which I sought to purposefully include variability based on school performance as measured by schools' Adequate Yearly Progress (AYP) scores – scores determined by performances on the FCAT. This information was presented to my committee and I was permitted to revise my procedures for recruiting participants. The specific procedures were listed in the methods section of the present document.

Originally I had intended to randomly sample teachers from specific schools that would have been selected based on outcome scores on the FCAT (i.e., AYP percentages). However, because of the constraints imposed by the school district, I was required to send invitations/announcements to all Reading First schools' principals for their approval and then ask them to forward the remaining materials to kindergarten and first grade teachers, their Reading Coach, and their DIBELS data collection team.

In addition, the school district required me to have a district sponsor for conducting the study. I was able to obtain support from one of the school district's supervisors of the Reading First program. Having shared with her the scope, purpose, and intent of the project, I then shared with her the constraints regarding participant recruitment. Given the large size of the school district and the number of schools that were potentially eligible for inclusion in the study, we decided it might be helpful, prior to mailing the invitation packets to all schools, that she contact principals at those schools first to demonstrate the school district's support for the study and to announce in advance the materials that would be forthcoming. I found this assistance very helpful, as I was concerned initially about principals receiving, unexpectedly, a large envelope of materials to review when their time was of course very valuable. It was also helpful because the list of potentially eligible Reading First schools totaled 54 elementary schools. Thus, it was helpful to have a means of communicating in advance with so many schools at once to quickly and efficiently convey the approval and existence of the study. I gave each school two weeks to review and consider the proposed study before following up with phone calls or emails to the school's principal.

At this point in time, FCAT testing was occurring and continued during most of March. I had anticipated minimal response in light of this context. Following the two-week window for review of the proposal, I emailed all principals, with a carbon copy to my district sponsor, asking if they had received the materials and if they had had an opportunity to consider the requests for staff participation. Very few principals responded, and of those that did, a few indicated they would not be able to participate, citing concerns with the timing within the year (i.e., nearing the end of the school year). Some principals did send an email response indicating they had passed along the materials to the specific persons identified in the cover letter – most of them chose to pass the information along to the Reading Coach to manage.

While waiting for potential participants to contact me, I organized my materials and prepared for data collection. I obtained a digital audio recorder and immediately began to see the benefit of such a device, as the audio files could be uploaded onto my personal computer for ease of storage of the data. I also printed enough copies of consent forms, organized them along with copies of interview field notes, and prepared a personal journal to record personal reflections on the study. During data collection efforts, I soon found the audio recorder to be more efficient and effective for recording my reflections and hunches. By recording my thoughts on the audio recorder, I was able to go back to them as needed and listen for emerging ideas and questions about the study.
Approximately one month passed before I received the first emails and phone calls from teachers who had an interest in participating. As each teacher contacted me to participate, I made arrangements immediately to meet them at their school on a day and time of their choice. Data collection for teacher interviews occurred between the second week of April and the third week of May. During this same time, a few Reading Coaches and specialists contacted me with an interest in participating in the focus groups. Only one principal had contacted me by email to indicate an interest in participating in the focus group for principals.

Several Reading Coaches asked to participate and no concerns were noted in recruiting their participation. However, attempts to find a time and place convenient to all of them proved to be especially challenging. Given this, I contacted my district sponsor for assistance in finding a solution. She indicated a Reading Coaches meeting was scheduled for later in the month of May and that I could discuss with the potential participants their interest in participating prior to their staff meeting. This proved to be very effective, as it already determined the place where the focus group would be held, and all were willing to participate prior to their staff meeting. The Reading Coaches focus group was held on May 14, 2007.

Recruiting DIBELS experts also proved to be a relatively easy process. I contacted a representative of the state agency that provides training and technical assistance to schools on the use of DIBELS and presented the scope, purpose, and intentions of the study. I gave this person my contact information along with a request for help to identify and recruit two persons to serve as DIBELS experts. I was then contacted by email within a week by two individuals. Interviews with both of these individuals were held via telephone and recorded using the audio recorder for later transcription. Because the interviews with the experts were held approximately midway into the teacher interviews, and though not explicitly planned from the start of the study, I did find it fortuitous to talk with each expert following the interviews about the information I had obtained thus far from teachers, and found the experts very gracious with their time in sharing their thoughts about the findings thus far. These additional comments and views are listed in the results section as general expert opinions on the use of DIBELS.

Towards the last month of school for the district (May), I received many more requests to participate from teachers. However, at that time, I was still concerned about the show of interest among specialists and principals for participation in focus groups. By the first week of May, I had received interest from five potential specialists – four of whom were from the same school, which was willing to host the focus group. To ensure a sufficient number of participants as well as a balance of different perspectives from different schools, I contacted several school psychologists and academic diagnosticians to solicit their participation in the specialists' focus group. Of those that contacted me with an interest in participating, a few had met the criteria of being involved in a school's DIBELS data collection efforts for at least two years. Following this round of recruiting, I had, at that time, a pool of 12 participants for the specialists' focus group. However, when attempts were made to find a place, date, and time that would be convenient to all, five had to decline participation as they could not travel away from their school campus and one had to decline due to personal events. The focus group for specialists was held on May 17, 2007.

By the first week of May, I had not received any requests to participate from principals, except for the one early in April, despite several emails. Towards the end of April, I had considered revising procedures to try to engage principals in a one-on-one interview at a time and place convenient to them. However, prior to initiating this adjustment to procedures, I contacted seven principals by phone to determine the viability of this approach. Of the seven, three did not return my phone call, and the other four declined to participate, citing time constraints. As a result of this experience, further attempts to involve principals were suspended. Attempts to include them through the data analysis procedures (e.g., member checks) are described below. Site entry into the schools was made available through support from the district sponsor. However, her role in the district was parallel to that of principals and therefore likely did not offer any opportunity to ensure principal participation.

Data collection and data analysis. Going into the data collection process, I reminded myself of my own background experiences in using the DIBELS and my training as a school psychologist. Indeed, both of these sets of experiences had led me to highly value the DIBELS as a tool for helping to improve the reading outcomes of students and assist teachers in making instructional decisions. I also reviewed my hypotheses of what I expected to find occurring in the field. Because of my high value for the use of DIBELS, I wanted to make sure that I did not engage teachers in a debate about how they should value it or limit myself to only hearing what I wanted to hear.
This of course required much self-discipline and openness to other views.

So, in preparation for such openness, I reflected that although I perceived DIBELS to be a useful tool, it was teachers' perceptions that were most important to understand, since they provide the instruction to students. I felt that I needed to have an appreciation for what teachers go through to best prepare myself to listen openly to their perceptions. So, I talked with a couple of family members who are themselves teachers and talked openly about their experiences working as teachers and the struggles they encounter trying to meet their students' instructional needs. They expressed to me the pressures they feel in their jobs and the negativity that is sometimes presented in the public media about the quality of schools. They shared with me their concerns about how the schools are changing and how it makes them feel less valued as professionals. And of most interest was how they indicated knowing what to do to help students, but often feeling constrained in doing so because decisions for how the school system should operate were made by those outside of the classrooms.

I valued these conversations with my family members because they helped me to avoid perceiving teachers as the source of blame for why they may not be using or valuing DIBELS. Instead, I was able to walk into each interview with a perspective that every teacher cares about their students and works very hard to meet their students' needs. These talks also served to emphasize a need to pay particular attention to how systemic variables either supported or hindered teachers in their use of DIBELS. I walked into each interview assuming that every teacher values data. I also was able to appreciate all the pressures they are under for student academic outcomes and imagine how it might make me feel to be under such pressure. I found myself more capable of seeing how competing demands and/or conflicting educational policies and practices could interfere with a teacher's selection of what tools to use in the classroom. Further, to make sure that I maintained openness and empathy for what teachers are experiencing in their profession, I wrote down these insights and kept them in my possession throughout the whole study to remind me of my need to be open to other possible reasons for what may be happening in the field.

To further guard against the possibility of any bias on my part, I started each interview with a brief but openly honest expression of the purpose of the study. All participants were aware of my employment in the school district as a school psychologist and were further made aware of my value for the DIBELS as an assessment tool. However, I also shared with each participant or group that although I had a favorable outlook on the DIBELS, I was not a teacher and therefore wanted to understand teachers' perceptions by speaking with them directly. I encouraged them all to be as honest as they could in sharing their own views, especially if they did not value the DIBELS, or any aspect of it. I believe that by having been open about my biases and role I was able to maintain openness to what teachers wanted me to know from their perspective.

Further, in accordance with my understanding of qualitative methods, and advice received from research experts on qualitative methods, I had planned a priori to maintain a consistent search for alternative views or themes as well as divergent patterns and rival explanations (Patton, 2002). I had assumed going into the study that it would be possible to identify teachers as either having a positive or a negative value for DIBELS. However, as data collection proceeded, even from the first interview, it became increasingly clear that no one teacher could be classified on a simple dichotomous scale. This experience further reinforced the need to seek out alternative views, opinions, and, more specifically, limits to one's positive or negative view of the DIBELS.

During all interviews and focus groups, when an individual or group shared positive or negative views on the use of DIBELS, explicit questions were used to test alternative views. For example, if a teacher voiced positive value in the use of DIBELS and did not provide alternative views, then I would ask her explicitly to identify any concerns or negative values about the DIBELS. This consistent approach to seeking alternative views both within and across participant sources proved to be very useful, as it helped me to see how complex participant views were on the topic of DIBELS. As mentioned previously, I found early on and throughout the data collection phase that perceptions of the DIBELS could not be classified as a simple like or dislike. Rather, participants' values and perspectives were related to variables operating in the schools. Some of these variables were anticipated and others emerged through the course of the interviews/focus groups.

Following each interview/focus group, attempts were made to transcribe the audio recording. However, due to the rapid time frame in which interviews and focus groups were conducted, combined with limited resources to support the transcription process, I found it very difficult to transcribe each interview/focus group before engaging in the next interview. I had investigated the option of hiring someone to transcribe interviews/focus groups for me, but as was found with trying to hire someone to conduct focus groups, the fee for such services was prohibitive.
So, to maintain a fresh perspective on the evolving aspects of the study, summaries of interviews were written and my reflections on them were recorded. This way, before entering into another interview, I was able to review the notes and reflections from the previous interview(s) in order to (1) listen for possible recurring themes, (2) attend to divergent and convergent views, and (3) test the limits of specific unanticipated themes.

The use of a triangulation-across-sources method was planned a priori for the present study and is discussed more specifically in the context of analyzing data later in the present paper. However, no explicit plans were made about the order in which interviews and focus groups would be scheduled, as it was anticipated that several factors beyond my control would influence how they would be scheduled. It was fortuitous, though, that the focus groups occurred after the majority of teacher interviews had been completed (i.e., 11 by the time of the Reading Coaches focus group and 13 by the time of the specialist focus group). The timing of the focus groups was fortuitous because I was able to seek convergence and divergence on specific anticipated and unanticipated themes that arose during the teacher interviews.

On some very specific unanticipated themes brought up by teachers, I had expected the Reading Coaches and specialists to have different views from each other. However, I was surprised to find very little divergence of views on all themes being monitored between these two focus groups. By being able to seek non-teacher opinions on the use of DIBELS from two different groups of people after the teacher interviews had occurred, I found myself able to share specific quotes or comments provided by some teachers to check the validity of such statements among the focus group participants, as well as solicit their opinions about such statements.

Data analysis occurred concurrently with data collection procedures. I tracked anticipated and unanticipated themes by summarizing and reflecting on each interview. Early stages of analysis involved looking for recurring patterns of responses and identifying limits to the views or opinions shared by participants. Analysis in the early stages of data collection also involved keeping a pulse on my own organization, objectivity, and reflections as I tried to document my changing views and focus through the course of the study.

Throughout the study, I found it deceptively easy at times to not write something down or speak it into the audio recorder, because many times my thoughts were so subtle or seemingly so benign at first that they did not register as "relevant." When I found myself failing to record such thoughts, I would immediately write down or audio record my thoughts. Some of the ease in avoiding the recording of my reflections was in part influenced by the rapid pace at which interviews were scheduled and held. I found it very easy to keep track of themes from interview to interview because of the short time frame between each interview. But I maintained a perspective for myself that keeping such thoughts recorded was just as important for data collection and analysis procedures at that time as it would be for later analysis, when I would not be in the field and thus would be removed from the moment.

To prevent a loss of my evolving views of the study as much as possible, I maintained a summary table of all teacher interviews and my reflections throughout the data collection process. This table was updated following the transcription of all teacher interviews. Further, along with the use of the evolving summary table of teacher interviews, all participant involvement in either interviews or focus groups was documented in an Excel spreadsheet by recording each participant's date of contact, date of formal participation, name, school, and research participant code.

All interviews and focus groups were completed by the third week of May in 2007. Days after, schools were closed for the summer. I spent the months between June and September of 2007 transcribing all data collected. The process of transcribing proved to be so time consuming that I sought again to locate someone to help with the process. However, again, I did not pursue this option due to cost limitations. I transcribed the data in the order in which it was received to maintain a perspective on the context and changing focus of the study as reflected in my questions to participants. This was compared also with my summary of reflections recorded through the data collection phase. Concurrently, as I was transcribing each set of data, I kept a running list of topics or themes of interest relevant to the research questions.

At times the transcription process was so laborious that I felt removed from the data. I struggled for months to stay connected with the data. It was difficult to transcribe and allow my thoughts to progress towards an understanding of what the data were telling me. To alleviate some of this problem, I again relied on the use of the summary table of findings by using it as a checklist of what I was finding as I transcribed each set of audio recordings.
A constant cross comparison and check against my field notes and reflective entries was used to make sure that an objective and thorough use of the data was maintained.

Once all of the data were transcribed, I found myself overwhelmed with approximately 200 pages of information. At this point, I was reminded of comments I had read in the qualitative methods research literature which emphasized the challenge of reducing the data to those elements that were most relevant for answering the research questions. At first all of the information seemed important and relevant. I relied upon the research questions to assist me in keeping clear what my priorities were and were not. I also found myself having to recall the purpose of the study and the intentions I held at the beginning for how I thought the data could be used (e.g., professional development ideas) to assist in reducing the data.

While maintaining a perspective on the purpose and parameters of the study, I further engaged in the process of developing a code system for data analysis purposes. By combining the ongoing collection of topics identified through the data collection phase with the specific topics of interest identified prior to conducting the study, I was able to create a long list of topics to pursue in the development of a coding system. This experience was detailed in the methods section as it related to inter-observer reliability checks.

While following the proposed set of procedures developed at the start of the study, I found problems in obtaining the necessary reliability in the earlier phases of the coding process. Specifically, my research assistant and I were unable to agree, independently of each other, on what constituted a particular data segment in the transcribed data. Further, we found much disagreement on what a data segment should be coded as. This experience led me to examine and reflect on (1) the procedures for developing a coding system, (2) my research assistant's lack of background experience in education, (3) my own limited experience in coding qualitative data, and (4) the potential for personal bias influencing the problem at hand. In all, during the course of many conversations with my research assistant, we determined that perhaps all of the above were likely involved. Given this, I approached my doctoral committee with this information and solicited approval to amend my procedures for data analysis as they related to coding the data, in order to resolve the reliability problems being found.

As with similar events discussed already, I returned to the research literature and research experts to discuss alternative approaches to working through the data. From the collective information obtained through those sources, I arrived at the conclusion that my research assistant and I needed to work together to (1) help support his limited knowledge of educational systems, (2) profit from his experience working with qualitative methods, and (3) minimize the impact of my personal biases or expectations. More specifically, what I found was that although I took great care in trying to remain objective and open to the responses being given by participants, and even identified unanticipated themes from interview to interview, my biases were at play when I tried to code those data.

As can be seen in the documented versions of the code systems I developed (Appendix G1 to G2), I had been trying to assign unanticipated topics/themes into categories that were developed a priori as reflected in the questions used in the teacher interview guide. Given this, along with my research assistant's relative lack of experience in the field of education and my own limited experience coding qualitative data, it was not surprising that we were unable to find sufficient reliability in our agreement checks. One pattern that differentiated my research assistant and me was that he was more likely to define larger chunks of text as a data segment, whereas I was more likely to identify smaller data segments for use. By working together to identify data segments for all 14 interviews and both focus groups, we immediately were able to reduce a large number of disagreements. From there, we focused on condensing the number of codes being used by prioritizing what was important for answering the research questions. Thus, as can be seen between the third and fourth versions of the code system, some higher-order categories were collapsed into one because of their relatively low priority compared to other data topics. In the end, we set about independently coding samples of the data and found we were able to reach acceptable levels of agreement.

On the issue of a manual cut-and-sort procedure for organizing the data, this too proved to be a challenging experience. It was time intensive and costly, and it slowed my momentum in analyzing the data. It was not until I transferred the data segments into an Excel spreadsheet, along with their codes and identifiers, that I was able to sort and organize the data in manageable ways more efficiently. The benefit of using Excel was that I was able to sort the data as I needed, whether by teacher, by grade level, by topic, etc.
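As a purely hypothetical illustration of the kind of organization and agreement checking described above, the sketch below stores coded data segments in a sortable table and compares the codes assigned by two coders on a double-coded sample. The column names, codes, and excerpts are invented and do not reflect the coding system actually used in this study; the example only shows why a sortable table made the cut-and-sort work more manageable.

```python
# Hypothetical example only: organizing coded data segments and checking
# coder agreement. All names, codes, and excerpts below are invented.
import pandas as pd

segments = pd.DataFrame(
    [
        # participant, grade, higher-order theme, code from each coder, excerpt
        ("T01", "K", "Supports/Resources", "time", "time", "Not enough time to probe weekly..."),
        ("T02", "1", "Use of DIBELS data", "parent_letter", "parent_letter", "I send the parent letter home..."),
        ("T01", "K", "Knowledge of DIBELS", "nwf_confusion", "timed_test", "They try to use picture clues..."),
        ("RC1", "-", "Climate/Culture", "leadership", "leadership", "Our principal reviews the data..."),
    ],
    columns=["participant", "grade", "theme", "code_coder_a", "code_coder_b", "excerpt"],
)

# Sort or filter the segments as needed: by participant, by grade level, or by theme.
by_participant = segments.sort_values(["participant", "theme"])
kindergarten_only = segments[segments["grade"] == "K"]

# Tally how many coded segments fall under each higher-order theme.
print(segments.groupby("theme")["excerpt"].count())

# Simple percent agreement between the two coders on this double-coded sample.
agreement = (segments["code_coder_a"] == segments["code_coder_b"]).mean()
print(f"Percent agreement: {agreement:.0%}")
```

Sorting and tallying in this way serves the same purpose as the Excel spreadsheet described above, while the agreement calculation makes explicit what an inter-observer reliability check on independently coded samples involves.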


Having organized and spent hundreds of hours with the raw transcript data, I found that four broad areas or themes emerged with which to summarize and report the results: (1) climate/culture in the school affecting perceptions of DIBELS, (2) supports/resources available for teachers' use of DIBELS, (3) teachers' knowledge of DIBELS, and (4) teachers' use of DIBELS data. This process resulted in a more evolved organization of the data (see Appendix B3) and led to the development of preliminary results to be shared with stakeholders and participants.

In preparing a document for sharing the results of the study, I developed an outline of findings (Hodges, personal communication, March 2008). I confirmed the appropriateness of this approach while also highlighting my concerns about the length of the document. Specifically, I was concerned that the length of the document might deter some stakeholders from reviewing the results and offering feedback about the accuracy of findings. I received advice from members of my committee that by highlighting sections that specifically applied to particular groups of participants, and providing a table of contents, I could direct participants to the section that was a priority for their review. Having done this, I found that 6 of the 14 teachers immediately responded with thanks for being given the preliminary results for use. In addition, all acknowledged the accuracy and appropriateness of statements reflecting teacher perceptions.

To date, none of the specialists or Reading Coaches responded with any feedback. However, this was less of a concern since both groups' perceptions and comments were checked for accuracy and completeness immediately following each focus group session through a debriefing of results obtained by myself and my research assistant. Specifically, a large note pad was used to document comments received through each session and then used to share with the participants to confirm that the information written was complete and accurate.

Attempts also were made to share results with district leaders and state leaders, specifically those involved with Reading First. To date, no feedback had been given nor any offer made to discuss the findings. Efforts to overcome this continued at the time of this report. Attempts to understand the lack of interest by district and state leaders have led to reflections on alternative means for disseminating the findings while also considering the appropriate presentation for these audiences. Perhaps the creation of a website with extended links based on the organization of the findings would provide these audiences with a more user-friendly way to view the results.
CHAPTER FIVE – DISCUSSION

The Reading First grant is a core program provided through the No Child Left Behind Act (NCLB). The Reading First grant was initiated to promote the use of evidence-based research in reading and to develop high quality instruction in reading for kindergarten through third grade. The Reading First grant provided funding for three primary areas: (1) professional development for teachers, (2) purchase and implementation of reading assessments, and (3) purchase of research-based curriculum and instructional materials for the classrooms. More specifically, the aim of the Reading First grant was to (a) increase quality and consistency of instruction in K-3 classrooms; (b) conduct timely and valid assessments of student reading growth in order to identify students experiencing reading difficulties; and (c) provide high quality, intensive interventions to help struggling readers catch up with their peers (Torgesen, 2002). At the heart of this program in Florida was the adoption of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) as a required assessment for all schools receiving the Reading First grant, to help fulfill the federal grant requirements of using assessments for screening, progress monitoring, diagnostic assessment, and outcomes evaluation.

The DIBELS (Good & Kaminski, 2002) are a set of individually administered, research-based, reliable and valid measures of critical early literacy skills (Kaminski, et al., 2008). DIBELS are comprised of seven subtests for the measurement of critical early pre-reading and reading fluency skills. In Florida, five of these subtests are used. The assessment was intended to be administered at least three times a year as a screening tool and as a means for frequently monitoring the progress of students who were assigned to receive more intensive reading instruction. Progress monitoring is a highly critical component of the intervention process with respect to determining the effectiveness of interventions for students who are at risk for reading failure (Coyne & Harn, 2006).

Understanding how educators were using the DIBELS was considered to be a valid topic for investigation in light of the research literature on educators' use of data and the direct effects it can have on student outcomes (Fuchs & Fuchs, 2002; Stecker, Fuchs, & Fuchs, 2005; Wesson, King, & Deno, 1984; Wesson, Skiba, Sevcik, King, & Deno, 1984). At the time of this study, no studies could be found which specifically investigated teachers' perceptions and use of the DIBELS within a Reading First context.

The present study primarily sought to understand teacher perceptions and use of the DIBELS in order to provide a foundation for exploring teacher use of DIBELS data within a Reading First context. Secondly, the present study sought to provide information useful for improving current professional development initiatives for teachers and/or identifying salient variables for future research on teachers' use of DIBELS data. Three primary sources of research literature were used to provide a context for the discussion of the current research results: (1) data-based decision making in educational contexts (e.g., Kerr, Marsh, Ikemoto, Darilek, & Barney, 2006); (2) Curriculum-based Measurement or CBM (e.g., Stecker, et al., 2005); and (3) systems change literature (e.g., Curtis, Castillo, & Cohen, 2008).

Due to the increased accountability pressures imposed upon school districts in the United States as a function of the No Child Left Behind policies, schools have been engaging in more frequent and more complex data-based inquiry (Brunner, Fasca, Heinze, et al., 2005). Prior to the initiation of this increased climate of accountability and pressure to increase student academic outcomes, research had been ongoing since the 1970s on the utility and effective implementation of Curriculum-based Measurement (Stecker et al., 2005). Similar historical developments have also occurred in the literature concerning systems change (Curtis, et al., 2008). All combined, these sources of literature have identified several variables which may impact educators' use of data and are relevant for discussing the present study's findings. Those variables were organized according to the same four higher-order themes which emerged in the present study, to provide the reader consistency in the paper's organizational format: (1) the organizational climate/culture, (2) resources available for teachers' use of data, (3) teacher knowledge and skills in using data, and (4) teacher perceptions of self and data.

Discussion of Primary Results

Context/Culture/Climate. Context and culture are critical to understand when attempting to implement large-scale programs (Boyd, 1992; Curtis, et al., 2008; Fixen, et al., 2005). To begin, a body of research has indicated that administrative leadership and stakeholder involvement are essential not only for affecting the culture and climate of the school regarding the systematic use of data, but also for ensuring high quality implementation of a data-based decision making model (Allinder, 1996; Armstrong & Anthes, 2001; Bernhardt, 2000; Brunner et al., 2005; Boyd, 1992; Coburn & Talbert, 2006; Curtis, et al., 2008; Feldman & Tung, 2001; Fixen, et al., 2005; Hargreaves, 1997; Klein & Herman, 2003; Ingram, Louis, & Schroeder, 2004; Kerr et al., 2006; Lachat & Smith, 2005; Stecker, et al., 2005; Supovitz & Klein, 2003; Yell, et al., 1992; Young, 2006).
In fact, Hargreaves (1997) argued that most school reforms fail because efforts to exact change involve only restructuring, rather than careful consideration of helping stakeholders revise their attitudes, beliefs, or perceptions about an innovation being adopted; something for which administrator involvement is essential. At the very least, stakeholders – teachers specifically – require an understanding of and rationale for the change (Curtis, et al., 2008).

In the present study, attempts were made to directly investigate administrator perceptions and roles in the use of DIBELS at Reading First schools. However, attempts to recruit their participation were unsuccessful. It was believed that the approach used to enter the schools was insufficient to ensure administrator participation. Nonetheless, teachers were asked during interviews to comment on their perceptions of the current climate and leadership at their school regarding the use of DIBELS. On the specific topic of leadership, all teachers indicated – some more directly than others – that leadership was important for their current and continued use of DIBELS. However, there seemed to be some variability in teachers' perceptions of the amount of direct involvement a school's administrator had. Some indicated more generally that their principals supported the use of DIBELS, while others indicated the principal and his/her leadership team directly reviewed the DIBELS data often.

According to researchers, leadership is important for increasing buy-in from key stakeholders when working against cultural barriers that could be significant detractors from effective data-based decision making and efforts to exact systems change (Curtis, et al., 2008; Feldman & Tung, 2001). Though not observed directly through principal participation in the present study, it is plausible, given this line of research, that any variability in administrator involvement and support in the use of DIBELS may have influenced the variability observed in teacher perceptions of the value and utility of the DIBELS. Wayman and Stringfield (2006) summarized the importance of leadership best when they indicated it was leadership's role to make sure that teachers "use the data rather than be used by data" (p. 569).

Pressure to perform was a recurring theme in the present study. Several teachers voiced concerns over the increased accountability, or assessment and instruction responsibilities, they were experiencing. A few of these teachers attributed these feelings of pressure directly to No Child Left Behind. This perception has been documented in previous literature (Brunner, et al., 2005; Kerr et al., 2006) and deserves some attention with respect to the present results. This climate of pressure and accountability seems to have existed in the perceptions of those who participated in the present study prior to the implementation of the DIBELS. According to some researchers (Coburn & Talbert, 2006), understanding the ways in which individual beliefs and practices are situated in and shaped by organizational context – in this case, preexisting pressures of accountability and the district's reactions to the policy mandates responsible for the pressure – is important for fully understanding how teachers perceive new information or technology.

A consistent theme among teachers was the expectation that leadership was responsible for creating a positive and collaborative environment. In particular, one kindergarten teacher in the present study suggested that it was the leadership's responsibility to help deflect the pressures of district and state policies or practices in order to help create a climate of collaboration and a culture of data-based decision making. This single teacher's view was consistent with literature on systems change suggesting that the culture and context which shape increased pressures of accountability can originate not only at the individual, grade, or building level, but also at the district, state, or federal level. This perspective also seemed consistent with systems change literature describing the limitations, and the probability of failure, of using a top-down approach to implementing systems change (Curtis, et al., 2008; Fullan, 1997).

According to Fullan (1997), neither top-down nor bottom-up approaches to exacting a process for systems change have found much success. Specifically, he indicated that a top-down approach often fails because schools are too complex to be controlled merely from the top (e.g., restructuring policies and procedures to mandate staff practices). This perspective is consistent with Curtis et al.'s (2008) view of schools as "living organizations." They argue that schools are social systems and living organizations because they reflect the attitudes, beliefs, and practices of the people within them. As such, they argue no two schools are likely to be functionally the same, given the unique characteristics of the people within each respective school. Therefore, efforts to affect district policies and procedures must be focused on the level of the school building when implementing large-scale systems change efforts.
that some flexibility is required at the district and state levels to foster a district-wide climate of support for schools as each tries to build upon its unique strengths and find growth in its areas of concern to meet the needs of its student body. A purely top-down approach is not likely to foster such flexible support for schools to find a process of implementation that works for them specifically.

Bottom-up approaches are not likely to be successful either, according to Fullan (1997). He argued that such approaches often lack the necessary coordination and vision and therefore can lead to chaos and disillusionment. Planning for change is considered a critical first step for engaging in effective systems change (Curtis & Stollar, 1996; Grimes & Tilly, 1996; Hall & Hord, 2001, 2006). Thus, a bottom-up approach will be limited in its capacity to bring about needed change through a process of finding a shared vision and involving key stakeholders.

Central to both of the approaches mentioned above is the lack of recognition of the importance of addressing cultural barriers, identifying cultural or contextual resources, and/or developing opportunities for involvement by all stakeholders. Returning to the results of the present study, it is not surprising, given the research on systems change, that teachers expressed variability in their views concerning their perceptions and understanding of using DIBELS. First, teachers were not involved directly at all levels with the adoption of the Reading First grant within their school. Second, a top-down approach to implementing the use of DIBELS likely influenced teachers' initial or sustained resistance to its use. Third, it appeared in the present study that teachers had few opportunities to engage in reflection about their current beliefs and
practices concerning reading development and instruction as they involved the use of DIBELS. Even those teachers who indicated a positive perception of their current use and understanding of DIBELS indicated several barriers to being able to reflect on their data and make use of the information for instructional decision making. Thus, it appeared that additional infrastructures were needed to more effectively engage in data analysis and utilization to support teachers' effective use of DIBELS.

Beyond the top-down and bottom-up approaches to systems change, Hall and Hord (2001, 2006) argued for a horizontal perspective in which all participants in a system (i.e., district level, school building level, grade level, etc.) have the same vision and work together to implement that vision. The variability that existed among teachers' knowledge of the value and utility of DIBELS was understandable given the limited opportunities for staff to engage in collaborative work and find a clear and shared vision. Some hints of this occurred in the context of teachers describing how their initial reactions to the use of DIBELS progressed from a culture of competition among colleagues to one of collaborative professional learning communities.

A promising model exemplifying an implementation approach consistent with Hall and Hord's (2006) horizontal perspective is the Florida Problem-Solving/Response to Intervention Model, which was being initiated at the time of the present report. The Florida RtI initiative was a program evaluation aimed at identifying the critical variables that lead to effective implementation of a problem-solving/response to intervention model of service delivery (Batsche et al., in press). At the heart of this model is a three-stage implementation process that focuses initially on developing building-wide consensus
through activities designed to increase educator knowledge and opportunities to participate in discussion about RtI and problem solving. Following the attainment of consensus among staff and appropriate levels of buy-in among all stakeholders in the school building, implementation efforts prioritize an analysis of the school's infrastructure for supporting systematic data collection, data management, and data utilization among staff. In this stage, schools identify gaps in their current services to students and utilize data to prioritize service delivery needs as well as provide an ongoing analysis of quality improvement at the building level. The final stage articulated in the Florida RtI initiative is implementation itself. Overall, the Florida RtI initiative seeks to evaluate the conditions necessary for effective implementation of a problem-solving/response to intervention model by utilizing best practices in systems change efforts as described in the research literature (e.g., Hall & Hord, 2006).

Collaborative use of data among grade-level teams of teachers was reported to emerge over time during the implementation of the Reading First grant. At first, however, many indicated a climate of competition among colleagues and a fear of retribution, both of which have been identified in the research literature as barriers to effective use of data to make educational decisions (Armstrong & Anthes, 2001; Bernhardt, 2000). Moreover, there are many identified benefits of engaging teachers in a professional learning community (Hargreaves, 1997; Hord, 1997). These benefits include a way to create a culture of change in schools, a shared vision and purpose, opportunities to build upon and learn from the collective creativity of other teachers, a supportive environment for
solving problems, and a culture of valuing the use of data to make decisions (Hord, 1997).

A first grade teacher at one school reported experiencing great difficulty and fear regarding the use of DIBELS at her school. She described the climate and culture of her school as highly "political." This description was in contrast to that of a different first grade teacher at another school, who reported a collaborative working environment at her school. Notably, this second teacher was more representative of the other first grade teachers in the study. Further research, through either a process or an outcomes evaluation of the Reading First grant, may wish to take into account the level of involvement by district and school administrators and staff perceptions of a culture of collaborative data-based decision making. Further, future research may wish to more formally evaluate how involvement, or lack of involvement, by district administrators has contributed to teacher perceptions of practice.

Another important systems influence that may have contributed to the variability found among teachers' perceptions and use of the DIBELS concerns the level of complexity of the school district. Research suggests that complex organizations such as school districts can influence educators' understanding of new information or technology (Coburn & Talbert, 2006). That is, given the complexity of a given system or organization, it is possible that individuals working in different parts of the system may develop contrasting sets of understandings. An example of how the context of the school district's policies and procedures was likely impacting teachers' perceptions and use of the DIBELS may be found in participant reactions to the volume of assessments that are
required by the school district in addition to those required by the Reading First grant. Specifically, the high level of assessment activity that occurred in the participating school district served as a barrier to teachers in the following ways: it (1) limited the time and resources available to engage in data collection, analysis, and utilization using the DIBELS; (2) provided limited or no opportunities to engage in progress monitoring using the DIBELS; and (3) influenced teachers to view the DIBELS as "just another test they have to give."

Implementing a large-scale program requires significant resources, time, and supports for those whose practices are to change (Curtis et al., 2008). Findings from the present study suggest the district's assessment schedule may have acted as a barrier to effective implementation of the Reading First grant by leaving limited time for teachers to engage in data analysis and utilization. One of the most frequently stated barriers to teachers' participation in DIBELS collection, analysis, and utilization in the present study was time. This was reported by the teachers, Reading Coaches, and specialists who participated. It may not be reasonable to expect teachers to change the attitudes, beliefs, and/or participation levels needed for the adoption of a new technology if they are not given adequate opportunities to engage in reflective thought about their current practices as they relate to student outcomes, as well as an opportunity to find value in the new technology for improving their efforts to help students reach their literacy goals. Given the relatively independent nature of school sites as living organizations capable of learning and adapting, where the actions of one may not necessarily directly affect another, future research may wish to explore the manner in which the vision, mission, and purpose of the Reading First grant were communicated from the state to the district and
subsequently from the district to school building personnel as a means of identifying barriers to successfully building a shared vision for implementing the grant.

Resources/Supports/Infrastructure

In the present study, several resources were identified through interviews and focus groups that supported teachers' understanding and use of DIBELS. These included the availability of a full-time Reading Coach, access to a web-based data management system for reviewing and analyzing student data through the provision of graphs, consultative efforts provided by resource personnel (e.g., Title 1 staff, school psychologists, academic diagnosticians, etc.), and access to professional development opportunities in the understanding and use of DIBELS. Research has found that several resource variables are likely to influence educators' effective use of data. Teachers may reliably use the available data if there is sufficient up-front and ongoing training and consultation on the use of data (Armstrong & Anthes, 2001; Chen, Heritage, & Lee, 2005; Fuchs, Fuchs, & Hamlett, 1989; Fuchs, Fuchs, Hamlett, & Ferguson, 1992; Kerr et al., 2006; King et al., 1983; Love, 2004; Skiba et al., 1982; Stecker et al., 2005; Wesson et al., 1984; Yell et al., 1992); if the data are of high quality, accurate, and immediately available for use (Coburn & Talbert, 2006; Kerr et al., 2006; Lachat & Smith, 2005; Wesson et al., 1984); if teachers have available time (Allinder, 1996; Coburn & Talbert, 2006; Lachat & Smith, 2005; Yell et al., 1992; Supovitz & Klein, 2005); and if teachers have the technology available and the skills to use that data utilization technology (Chen et al., 2005; Stecker et al., 2005).

Regarding the availability and quality of training and consultation available for teachers on the use of DIBELS, it was understood as part of the larger Reading First
implementation plan in the school district that all Reading First schools were given several full days of training on the DIBELS and the various other components of the Reading First grant. Of those teachers who could recall attending this initial training, all reported unfavorable perceptions about the training and generally reported it to be unhelpful. Some were distracted by the format in which the training was provided (i.e., reading from a manual), and others perceived the information to be of little value beyond their current knowledge base for instructing students in reading. The Florida Center for Reading Research provided ongoing training on a variety of assessment and instructional topics aimed at supporting teachers in their evolving professional development as it relates to Reading First. However, when asked if they had attended any training outside of their school campus, either in the district or through participation in statewide conferences or professional seminars, no teacher indicated any such participation.

All, however, indicated receiving various types and amounts of consultation from their on-site Reading Coach. All teachers reported placing high value on the Reading Coach and cited multiple activities the Reading Coach was able to provide in support of their use of DIBELS. Overall, substantial variability existed in how teachers perceived their ability to independently interpret and use the DIBELS data. Four specific topics of interest that relate to the available research on implementing formative assessments like the DIBELS (e.g., CBM) are (1) the level of training provided to teachers on how to use and interpret formative assessments, (2) the type of consultation provided to teachers, (3) teacher formative measurement skills with or without data
analysis or evaluation skills, and (4) the infrastructure available to support teacher participation in the collection, analysis, and utilization of the data.

First, a study conducted by Wesson et al. (1984) investigated the comparative academic outcomes among students whose teachers received one of three types of training: (1) an initial set of three half-day workshops at the start of the year, including the provision of a manual for using CBM, followed by semi-frequent phone calls or visits by an observer; (2) a train-the-trainers format in which teachers from different schools were trained during a one-day workshop and provided with a manual and instructions for teaching others in their school; or (3) a full-day initial workshop, plus monthly half-day workshops through the year, plus specific and focused training content on particular skills combined with opportunities to practice. Results indicated that the students of the third group of teacher participants made the most gains in academic performance. Students whose teachers accurately and consistently applied CBM in their classrooms made better progress than students in other classrooms. Wesson et al. concluded that teachers required more than training in how to measure student performance with CBM; they also required direct and explicit training in how to evaluate and make use of the assessment information. Similar results have been found in other studies (King et al., 1983; Skiba et al., 1982; Wesson et al., 1984).

Additionally, work in the field of teacher training by Joyce and Showers (2002) confirms these results regarding preparing teachers to adopt new skills. Specifically, Joyce and Showers indicated four components of effective teacher training: (1) knowledge and understanding of the theory and rationale for the new innovation; (2) modeling of the new
skill(s) with multiple exemplars; (3) practice of the new skills in a safe environment with repeated opportunities to rehearse and receive feedback; and (4) ongoing supportive coaching or opportunities to collaborate with peers regarding implementation of the new skill(s). Moreover, the work of Little (1997) suggests the above model is insufficient to sustain teacher use of a new skill, let alone develop a culture of change. He recommended identifying the level of understanding at which teachers were operating in relation to a given topic or skill being learned and tailoring the instruction or activities so the new ideas can be fully integrated into current practices. Further, he advocated that teachers have an opportunity to express dissent and challenge prevailing beliefs and practices. He suggested that teachers do require an account of the "big picture" of educating children and the purpose of education, similar to engaging them in "systems thinking," one of Senge's (1990) five factors that individuals collectively need in order to become and function as a learning organization.

A full-time Reading Coach was available to provide coaching and ongoing professional development to teachers on the use of DIBELS. However, teachers reported high degrees of dissatisfaction with the manner in which early training was provided. Given the limited opportunities for teachers to participate as stakeholder partners in the change process, and the barriers to advancing their knowledge and understanding of DIBELS created by the low perceived value of trainings on the topic, it is not surprising that variability was found in teachers' ability not only to use the DIBELS data effectively, but also to value its use for instructional planning. And though the Reading Coach was consistently identified as a crucial variable in the positive perceptions of
DIBELS among some teachers, little evidence was found of efforts by the Reading Coaches, the school district, or the school buildings to build capacity among school staff for sustained use of the DIBELS. According to Adelman and Taylor (2003), large-scale programs often fail in their implementation when temporary funding has ended and the staff hired to support implementation efforts are removed. This topic is central to the failure to create the infrastructure needed to support the continued use of a program and all of its components (Adelman & Taylor, 2003; Curtis et al., 2008).

Next, regarding the type of consultation provided to teachers, researchers have found that teachers require ongoing consultative support in the development of skills and in using CBM data to inform instructional decision making (Fuchs et al., 1992). This research even found that teachers who reported positive attitudes and high value for the use of CBM procedures in their classes still had great difficulty interpreting and making sense of the data for instructional decision making. Researchers like Fuchs et al., in response to such observations, have developed evaluation guidelines to help teachers know when to consider adjusting instruction and how to determine what to change. However, a vital prerequisite to using such guidelines is the active implementation of progress monitoring data collection procedures, something that the State of Florida's Reading First model only recently began to introduce to schools, rather than at the beginning of implementation.

In this context, the present study found moderate to substantial variability in teachers' reports regarding the frequency and types of support received from their
Reading Coach. The degree to which Reading Coaches provided teachers with explicit consultation and training guidance toward the independent use and analysis of their own data cannot be determined from the present study. However, results consistently found perceptions among teacher and non-teacher participants that teachers were still highly dependent on, and in need of, continued supports to further their development of skills in the analysis and interpretation of DIBELS data. Future process or outcome evaluations of the Reading First grant should consider the frequency and quality of consultative efforts provided by Reading Coaches when evaluating the impact of teachers' use of DIBELS data on student academic outcomes in reading.

Regarding the topics of training, none of the teachers could recall receiving any training, with the exception of what was provided by their Reading Coach, on how to independently analyze and link their class data to instructional decision making. Some voiced strong interest in more training to learn how best to do this. The Florida Center for Reading Research provided several online resources and guides to assist teachers in this context, but none of the teachers interviewed were aware of such information; some were not even aware of the website beyond the availability of the PMRN database. To this researcher, the Reading Coach was highly valuable in supporting the school staff in the collection, interpretation, and utilization of DIBELS data. However, it appeared from the teacher interviews that teachers were highly dependent on the Reading Coach for this continued set of supports and would likely experience substantial difficulties without such support when the grant expires. Indeed, several researchers have noted that, of the various influences that exist on educators' use of data, data analysis, interpretation, and
utility for decision making is one of the most elusive areas for professional development initiatives (Brunner et al., 2005; Casey, Deno, Marston, & Skiba, 1988; Cizek, 2001; Fuchs et al., 1992; Fuchs et al., 1989; Fuchs & Fuchs, 1986, 2002; Grigg, Snell, & Loyd, 1989; Herman & Gribbons, 2001; Kerr et al., 2006; Lachat & Smith, 2005; Stecker et al., 2005; Tindal et al., 1981; Wesson, 1991; Young, 2006).

The implications of this professional development need and support for teachers are highly relevant to the use of DIBELS. For example, Fuchs et al. (1989) investigated the academic outcomes of students whose teachers participated in one of three groups: (1) CBM measurement use plus evaluation consultation; (2) CBM measurement only, without evaluation support, though receiving consultation as a general support for using CBM; and (3) a control group that did not use CBM. Results indicated that students whose teachers used CBM measurement plus consultation and training on how to use the information to make decisions outperformed students from the other two groups on CBM outcome measures. Of particular interest was their observation that teachers who collected CBM data but did not make use of those data did not produce better student outcomes than those in control conditions where CBM measures were not used. Thus, any process or outcome evaluation of the Reading First grant must consider student outcomes in the context of how effectively and frequently teachers were using the data to make frequent and targeted instructional changes in their classes.

Related to the topic of utilizing data for instructional planning is the use of DIBELS for screening, progress monitoring, diagnosing academic needs, and evaluating student outcomes. Coyne and Harn (2006) provided a conceptual framework for thinking
about early literacy assessment across the four distinct assessment purposes just listed above. Recall that in the present study all teachers described the DIBELS as a test that is administered three times a year, thus characterizing the use of DIBELS as a screening tool for identifying students who may be at risk for reading difficulties. Although several teachers reported using the DIBELS to identify students in need of additional supports or interventions, none of the teachers reported using DIBELS for progress monitoring. However, two teachers indicated knowledge of the option of using DIBELS as a progress monitoring tool, but cited a lack of access to either the necessary materials or the training to use DIBELS for progress monitoring.

Of particular interest was the observation that many of the teachers who participated in the present study demonstrated data analysis skills consistent with those of the DIBELS experts. It is important to point out that these teachers had also voiced positive perceptions of and value in using DIBELS data, as well as reported relatively higher self-efficacy in using the DIBELS data to organize learning groups and assign standard interventions for students in need of assistance. Further, it is important to note that none of the teachers had been engaged in any progress monitoring activities with the DIBELS at the time of the study; thus, no such data were used in the case studies of teacher use.

According to Coyne and Harn (2006), progress monitoring allows teachers to determine if students are making adequate growth toward meeting grade-level reading outcomes. Further, they suggest as best practice that the DIBELS be used frequently for formative evaluation with students receiving interventions, to monitor their progress and measure the effectiveness of interventions. This point is critical from this researcher's
perspective. First, as Coyne and Harn have indicated, screening and progress monitoring activities alone do not tell an educator everything that is needed to make instruction more effective or efficient. This was consistent with interview comments provided by the DIBELS experts in the present study. However, progress monitoring does provide a pulse, or "vital sign," as stated by Coyne and Harn, for the teacher to track, in a reliable and non-intrusive manner, how a student is functioning in the class on a particular set of skills and whether an intervention is having the desired impact.

Some teachers in the present study reported concern over the amount of time between DIBELS benchmark cycles as a barrier to using the data to make instructional decisions. Thus, without knowledge of using DIBELS for progress monitoring, access to progress monitoring materials, and/or training in how to administer DIBELS, teachers are left with a void, which they filled by using either less technically adequate assessment tools for monitoring student progress or their own personal judgments to assess students' responses to instruction.

And yet, before one considers the implications this all has for professional development needs in the participating district, recall that many teachers, when asked if they would like to learn how to administer the DIBELS, rejected the idea, citing overwhelming assessment and instructional responsibilities. It would seem appropriate that any program evaluation of the Reading First model take into consideration the degree to which students are not being monitored at all, or are being monitored with assessments that have a broad assessment focus at best or lack technically adequate data at worst.
A final area of consideration regarding variables that can influence teachers' use of data was teachers' perceptions and attitudes concerning the use of a particular assessment. Research has suggested that teachers' use of data can be influenced by their perceived validity of an assessment tool (Coburn & Talbert, 2006; Kerr et al., 2006; Yell et al., 1992); staff attitudes about using data to inform practice (Bernhardt, 2000; Ingram et al., 2004; Lachat & Smith, 2005); teachers' self-efficacy in supporting student learning (Allinder, 1995, 1996; Ingram et al., 2004); teachers' response to change (Allinder, 1996; Yell et al., 1992); teacher perceptions of new initiatives as additional work (Wesson et al., 1988; Yell et al., 1992); and beliefs about what constitutes data (Allinder, 1996; Coburn & Talbert, 2006; Ingram et al., 2004; Farlow & Snell, 1989; Young, 2006).

In the present study, teacher perceptions about the value and utility of the DIBELS were substantially variable across teachers and schools. Several teachers characterized the DIBELS as an invalid test because (a) the tasks are timed, (b) the tests are conducted by someone other than the teacher, (c) it is given a higher priority by their administrators over assessments the teachers have long used and with which they have become comfortable (e.g., Running Records), (d) the results did not correspond with other assessments or observations conducted by the teachers in the classroom, and/or (e) it uses measures that they perceived to be "inappropriate" for their grade level (e.g., the NWF subtest). The comments by teachers about DIBELS as a "timed test" suggest that many teachers see the DIBELS as a high-stakes test that students are required to pass, and one for which teachers feared professional evaluations, either early in the use of DIBELS or, as one teacher reported, even presently.
Reading Coaches, specialists, and DIBELS experts in the present study had reported the above concerns. It would seem plausible that one professional development option is to share with teachers the psychometric properties of the DIBELS in the context of the test's development, and that doing so would serve as a foundation for helping teachers to see the valid uses of DIBELS. As one teacher and several Reading Coaches indicated in the present study, they have found that teacher perceptions change in favor of, or at least toward greater acceptance of, the DIBELS when it is shared that research has found a strong correlation between end-of-first-grade ORF measures and later third grade FCAT performance.

When Reading Coaches were asked during a focus group to comment on teachers' use of data, they indicated that what was essential was a culture of valuing and using data. Teachers who were interviewed, however, did not necessarily appear to value data (though some did explicitly state such a value for DIBELS over other assessments), but instead suggested a culture of collecting data for the sake of collecting data. Teachers overwhelmingly communicated that there is too much assessment being conducted in the schools, at times to the detriment of protecting instructional time. They perceived the DIBELS to be just one more such assessment, much like Yell et al.'s discussion of CBM being perceived as an "add-on" by teachers, a barrier to effective implementation and use. In this same context, almost all teachers spoke of having insufficient time to plan, research alternative instructional ideas, consult with other teachers or the Reading Coach, or even review, let alone analyze, their class DIBELS data. Having been an employee
in the present district, this researcher has observed firsthand that the amount of testing in which teachers are required to participate is extensive. It seems unlikely that professional development activities provided to teachers on what the DIBELS are, how to administer the measures, and how to use the data will be enough to overcome these issues. Rather, such training would likely be limited in value and effectiveness to the degree that teachers must continue to juggle their time around the multiple assessment schedules that impact their instructional time and/or take away from their time to plan and use data to guide their efforts.

Limitations

Having discussed the results of the present report in light of the relevant and available research, discussion of several limitations to the study is warranted. First, the study was conducted primarily to provide a rich description of teacher perceptions and use of the DIBELS in order to understand the conceptions among individuals participating in a large-scale grant program within a school district, something that Coburn and Talbert (2006) have indicated little research has undertaken. As such, only a small sample of the perceptions that may exist on the chosen topic was obtained. Given these limitations, the results do not necessarily lend themselves to generalization across the school district, and certainly not across the state. Future research may wish to conduct similar studies either within other school districts receiving the Reading First grant, or across several school districts receiving Reading First grants, to evaluate the consistency of findings among other sites and participants.
Another limitation may be found in the recruitment procedures used in the present study. Though a saturation method was used to determine the extent to which possible unique perspectives were found, the way in which teachers were selected may limit the degree to which the findings are representative of the teachers in the participating school district. Due to constraints imposed by the school district's Institutional Review Board, principals had to be informed of teachers being recruited for participation, and principals had to give their permission for teachers at their school to participate. These constraints may have contributed to the low interest among teachers in participating in the study. Also, teachers essentially self-selected to participate in the study and therefore may not be completely representative of all teachers in the district.

One possible safeguard against this limitation was the use of a saturation method (Patton, 2002) to identify the point at which the data collected no longer yielded any new information. Application of the saturation method led to the recruitment of only 14 teachers, as no new information was found in the final two interviews for each grade level, respectively. Nonetheless, future research should consider larger-scale applications of the present study's approach to answering the research questions to identify any salient perceptions not captured in the present study.

Similar limitations also existed in the recruitment of Reading Coaches and specialists. Little variability was found within each group's responses. All who participated in either the Reading Coaches or the specialists focus groups voiced unanimous value in the use of DIBELS. It cannot be determined if the high degree of consistency
among these two separate groups was a function of (1) a higher level of knowledge and understanding of the DIBELS compared to teachers, (2) non-representation of other Reading Coaches and specialists given the self-selection approach to recruitment, or (3) the focus group format, which may have resulted in contrasting perceptions being withheld by participants out of concern about sharing opposing views in a group setting. To safeguard against the third possibility, all participants in each group were given the primary researcher's contact information and encouraged to contact him if they wanted to share any additional information, with the understanding that all their information would be kept confidential. No participants from either focus group engaged the researcher in additional conversation on the study's topic outside of the focus groups.

Another limitation to the findings was related to the fact that, for the teachers who were interviewed, the Reading Coach or specialists at their school may not have participated. Thus, input provided by Reading Coaches and specialists cannot be extracted to investigate specific claims or observations made by a specific teacher at the same school. However, because data from the teachers were aggregated within each group, it was possible to explore the general themes that arose through teacher interviews and provide a comparative analysis against the views of Reading Coaches, DIBELS experts, and specialists. Additionally, the present study sought to document teacher perceptions and use of the DIBELS, not to investigate the accuracy of those perceptions against actual events occurring in their schools.

Another limitation was the limited resources available to the researcher for providing the results quickly to identified stakeholders. The delay in providing the
results may have negatively impacted their motivation to provide any feedback or discussion of the findings. At the time of the present report, efforts continued to engage relevant stakeholders at the state and local levels to discuss the results and their possible implications for practice.

Another limitation involves the researcher's limited experience in conducting and participating in qualitative research. However, several activities were undertaken to minimize this potential weakness. First, the involvement of a doctoral committee provided guidance and review of the research process to ensure that an appropriate study was being conducted that would be valuable to the research literature, that it involved sufficient methodological rigor, and that it was conducted in a reliable manner. Second, additional researchers with known expertise in qualitative methodology were consulted regarding the methodologies and practices of the present study to ensure that the researcher was conducting an appropriate and reliable study. Third, several quality assurance methods were employed to provide a credible account of the research topic investigated. These methods included (1) a rich and thick description of the research process, (2) documentation of the researcher's position and biography, (3) saturation of data, (4) a purposeful search for variation and maximization of responses, (5) use of triangulation across data sources, (6) inter-observer reliability checks, (7) member checks for accuracy and completeness of data, and (8) peer review by members of the doctoral committee.

A final limitation may be found in the selection of kindergarten and first grade teachers only. The decision to limit the study to these grade levels was made because the
DIBELS subtests used for benchmark assessments at those grade levels comprise the entire DIBELS assessment used in Florida. Higher grade levels were not investigated in the present study. Thus, the findings of this study should not be generalized with respect to implications for supporting teachers in those higher grade levels as they relate to DIBELS.

Contributions of Present Study

Despite the identified limitations of the present study, several strengths or contributions are worth discussing. At the start of the study, little research could be found that investigated teacher perceptions and skills regarding the use of DIBELS within a Reading First context. This study provided a foundation for future qualitative and mixed-methodological approaches to investigating teacher use of DIBELS and to evaluating the process of implementing and maintaining the Reading First model of reading as it relates to the use of DIBELS or similar assessments.

Next, the present study identified variables and topics worthy of continued investigation as found in related fields of study such as systems change, data utilization, and best practices in the use of DIBELS. Earlier research on teachers' use of CBM-Reading assessments to improve student outcomes, coupled with best practice knowledge of effective systems change and the related variables that can impact schools' use of data, converged in the present study to provide a broad context for improving efforts toward (1) professional development for teachers in the effective use of DIBELS, (2) building capacity and infrastructure for schools to support teachers' use of DIBELS, (3) identifying opportunities to allow teachers to participate as stakeholder participants in the
continued school efforts and to support implemented features of the Reading First grant, and (4) building a rationale for the school district and building administrators to re-analyze their shared vision and efforts to create and foster a culture that values data utilization as well as collaborative planning and problem solving.

Final Summary

The present study was conducted to understand teacher perceptions and use of the DIBELS as a core component of the overall Reading First model in the State of Florida. Several barriers and resources were discovered which hindered or fostered, respectively, effective adoption and use of DIBELS by educators. Given the results of the present study, and having provided a context for interpreting the results within a larger body of research in systems change, data utilization, and best practices in using CBM/DIBELS, several recommendations are appropriate for those interested in implementing the use of DIBELS at their school or school district:

1. Create a culture of change and shared vision.

a. Provide educators with a conceptual model for using data and solving problems at the building, grade, classroom, and student levels. The authors of the DIBELS recommend the use of DIBELS within the context of the Outcomes-Driven Model. Additionally, the recent legislative requests for schools to adopt a Response to Intervention (RtI) approach to service delivery provide such a framework for helping schools learn and adopt a problem-solving model for effective data management and utilization. At the time of this report the Florida RtI initiative was being implemented and is consistent with this current
recommendation. Evidence from that initiative will be important to review in light of this recommendation.

b. Ensure leadership at the school building level is directly and consistently involved in implementing the use of DIBELS and utilizing data to inform instruction.

c. Encourage a shared vision for the use of formative assessments among staff and a culture that embraces data analysis and data-based decision making.

d. The school district may wish to revise its current district assessment plan to consider using formative assessments similar to DIBELS that have documented reliability and validity while also affording time- and cost-efficient means of collecting student data with minimal interruption of class instruction.

e. The school district may wish to consider replacing its district reading assessments for all elementary schools with the DIBELS, CBM reading, or similar formative reading assessments with sufficient technical adequacy for the purposes of screening and progress monitoring; a summative district assessment could still be provided at the end of the year. This change would afford the opportunity to use DIBELS, or measures like them, more effectively for progress monitoring. Additional testing would be restricted to those cases where more diagnostic information is needed for intervention design.

f. Teachers are the most important stakeholders when implementing the use of DIBELS or any other formative assessment system. It will be key to involve them in all aspects of decision making and implementation design.
g. Schools may wish to consider establishing a school-based team composed of one administrator, representatives for all grade levels, and representatives from ESE and Student Services as a venue for regularly analyzing school data, supporting teacher professional development needs, and developing plans to increase student outcomes in reading.

h. Principals and district administrators, as well as support personnel like school psychologists, should be familiar with the literature on effective systems change while also becoming familiar with recognizing teachers' levels of concern (Hall & Hord, 2006) in response to change, so as to tailor any trainings or professional development activities to effectively support teachers in the analysis and use of data.

i. The school district may wish to consider either discontinuing the use of assessments that do not provide opportunities for instructional decision making, or providing supports to schools to have them completed in a timely manner with minimal impact on instructional time.

2. Create Capacity and Infrastructure to Support Use of DIBELS

a. Teachers need more time to engage in data analysis and utilization. This can be achieved by prioritizing the types and frequencies of assessments required for collection through an RtI/Problem-Solving model that increases assessment density only as a function of the severity of student underachievement and/or the need for more information to identify successful interventions.
b. The school district and school building administrators should consider making the role of the Reading Coach permanent, or identifying a staff person who can take on the full-time responsibility of facilitating systems change and providing ongoing professional development to teachers on the effective use of DIBELS data.

c. Teachers need access to color printers if they are expected to use the PMRN graphs effectively.

d. The professional learning community format for grade-level teacher teams being used by the school district should be maintained and supported as an appropriate venue for collaborative problem solving among teachers.

e. Teachers need ongoing support and consultation to maintain and establish greater skills in data analysis and utilization.

f. DIBELS assessors should be encouraged to provide notes and/or record specific student errors so that teachers can more effectively evaluate error patterns and make use of the information for intervention development.

g. Continue to encourage teachers to use the scoring protocols for analysis of errors toward the development of intervention plans or consideration of other diagnostic measures.

h. Infrastructures need to be developed to support ongoing progress monitoring of students identified as being at risk for later reading difficulties. This recommendation should be accomplished within the context of adopting a conceptual problem-solving model (i.e., the Outcomes-Driven Model or RtI) for
allocating resources, prioritizing goals, and providing professional development opportunities for teachers.

3. Provide Additional and Ongoing Training on the Use of DIBELS/Formative Assessments

a. Provide teachers with a broad "big picture" rationale for why DIBELS is an effective method for screening, monitoring progress, and evaluating reading programs. This may be accomplished by providing the historical development and case study examples that exist in the literature on the use of DIBELS and other similar formative assessments.

b. Some teachers may require additional knowledge of what DIBELS measures and how it corresponds with the development of reading skills. They also require knowledge of how DIBELS, as a General Outcome Measure, differs from the mastery-oriented assessments used throughout the school district.

c. Teachers need to further understand what formative assessments offer for differentiating instruction.

d. All educators need to understand the difference between benchmark assessment and progress monitoring and the value and purpose of both.

e. Educators need to understand the importance of progress monitoring, and staff need to be held accountable and supported by building principals and district leaders to engage in progress monitoring activities using formative assessments like the DIBELS.
f. Teachers would benefit from learning about research on the development of DIBELS and the establishment of benchmark goals (e.g., end-of-year expectations). This information will help them understand conceptually what the benchmark standards are and how they were developed, so they see the correlation those benchmark standards have with successful reading in upper grade levels.

g. Teachers need support and further training on the importance of fluency in early literacy skills and its links to comprehension. The concepts of fluency and general outcome indicators are critical for teachers if they are to avoid perceiving the DIBELS as a "speed test."

h. Some teachers may need additional training and support to help them understand the importance of using nonsense words as a measure of student decoding skills and the value of measuring such skills.

4. Provide Educators with Ongoing Supports for Data Analysis and Utilization

a. Ensure that data are collected in a timely manner and immediately available to teachers (within a week).

b. Educators at all levels of the school system require ongoing support and training for analyzing graphs and data, particularly the PMRN if that system is to be implemented along with the use of DIBELS. Schools have become effective and efficient in collecting data; however, they often lack a conceptual and knowledgeable approach to analyzing the data. Both the Outcomes-Driven Model and the RtI/Problem-Solving model for data-based decision making provide a
conceptual approach for schools to analyze their data to inform their practices as well as evaluate their instructional and programmatic efforts.

c. Teachers need to be encouraged, allowed time, and supported in their efforts to access and use the PMRN.

d. Provide explicit and systematic training for teachers on how to use PMRN reports to answer questions about the level of intervention focus within a tiered system of service delivery like that currently being offered within a Response to Intervention (RtI) model (Castillo, Cohen, & Curtis, 2006).

e. Teachers should be encouraged and/or trained to observe and prioritize student performance trends rather than single snapshots of data. Examining performance in this manner requires the collection of progress monitoring data to establish reliable trends for analysis.

f. Provide ongoing support, training, and expectations that schools and teachers use multiple sources of data when making decisions about students. Further, training should focus on how to reconcile data sources that are seemingly in contrast with each other (i.e., acceptable performance on one measure and not another).

g. Schools need to prioritize the use of ongoing progress monitoring. The lack of such monitoring appears to be driven by low consensus among staff regarding the value of monitoring progress more frequently, coupled with insufficient infrastructure for supporting this level of data collection. In addition, the decision not to implement ongoing progress monitoring activities from the start of the Reading
First grant in Florida also may have influenced educators' views with respect to progress monitoring.

In closing, the present research study was conducted to understand the perceptions and uses of the DIBELS from the views of classroom teachers and non-teaching educators, in an effort to identify salient variables for later research as well as to provide targets of opportunity for later professional development initiatives to support teachers' use of DIBELS or similar formative assessments. It is hoped that, with continued focus and research on the variables that increase the effectiveness and understanding of how educators implement and use formative evaluation systems, schools will be able to find greater efficiency and effectiveness in improving student academic outcomes in reading.
List of References

Adams, M. J. (1990). Beginning to read: Thinking and learning about print. Cambridge, MA: MIT Press.

Adelman, H. S., & Taylor, L. (2003). On sustainability of project innovations as systemic change. Journal of Educational and Psychological Consultation, 14, 1-25.

Allinder, R. (1995). An examination of the relationship between teacher efficacy and curriculum-based measurement and student achievement. Remedial and Special Education, 16, 247-254.

Allinder, R. (1996). When some is not better than none: Effects of differential implementation of curriculum-based measurement. Exceptional Children, 62, 525-535.

Armstrong, J., & Anthes, K. (2001). How data can help: Putting information to work to raise student achievement. American School Board Journal, 188, 38-41.

Baker, S., Smith, S., Kame'enui, E. J., McDonnell, M., & Gallop, S. M. (1999). A blueprint for bridging the gap between research and practice involving the Springfield Public Schools and the University of Oregon (Final Report, U.S. Department of Education Grant H023G50021). Washington, D.C.: U.S. Department of Education.

Barger, J. (2003). Comparing the DIBELS oral reading fluency indicator and the North Carolina end of grade reading assessment (Technical Report). Asheville, NC: North Carolina Teacher Academy.

Batsche, G. M., Curtis, M. J., Dorman, C., Castillo, J. M., & Porter, L. J. (in press). The Florida Problem-Solving/Response to Intervention Model: Implementing a statewide initiative. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook of response to intervention: The science and practice of assessment and intervention. New York: Springer.

Battistich, V., Schaps, E., Watson, M., & Solomon, D. (1996). Prevention effects of the Child Development Project: Early findings from an ongoing multisite demonstration trial. Journal of Adolescent Research, 11, 12-35.

Bernhardt, V. (2000). Intersections. Journal of Staff Development, 21, 33-36.
Blakely, C., Mayer, J., Gottschalk, R., Schmitt, N., Davidson, W., Roitman, D., & Emshoff, J. (1987). The fidelity-adaptation debate: Implications for the implementation of public sector social programs. American Journal of Community Psychology, 15, 253-268.

Boyd, V. (1992). School context: Bridge or barrier for change. Austin, TX: Southwest Educational Development Laboratory.

Brunner, C., Fasca, C., Heinze, J., Honey, M., Light, D., Mandinach, E., & Wexler, D. (2005). Linking data and learning: The Grow Network study. Journal of Education for Students Placed At Risk, 10, 241-267.

Buck, J., & Torgesen, J. (2003). The relationship between performance on a measure of oral reading fluency and performance on the Florida Comprehensive Assessment Test (FCRR Technical Report #1). Tallahassee, FL: Florida Center for Reading Research.

Casey, A., Deno, S., Marston, D., & Skiba, R. (1988). Exceptional teaching: Changing teacher beliefs about effective instructional practices. Teacher Education and Special Education, 11, 123-132.

Castillo, J. M., Cohen, R., & Curtis, M. J. (2006). Evaluating intervention outcomes: A problem-solving/response to intervention model as systems-level change. Communique, 35, 8.

Castro, F. G., Barrera, M., Jr., & Martinez, C. R., Jr. (2004). The cultural adaptation of prevention interventions: Resolving tensions between fidelity and fit. Prevention Science, 5, 41-45.

Center for Substance Abuse Prevention (CSAP). (2001). Finding the balance: Program fidelity and adaptation in substance abuse. Rockville, MD: SAMHSA, U.S. Department of Health and Human Services.

Chen, E., Heritage, M., & Lee, J. (2005). Identifying and monitoring students' learning needs with technology. Journal of Education for Students Placed At Risk, 10, 309-332.

Chen, H., & Rossi, P. (1983). Evaluating with sense: The theory-driven approach. Evaluation Review, 7, 283-302.

Coburn, C., & Talbert, J. (2006). Conceptions of evidence use in school districts: Mapping the terrain. American Journal of Education, 112, 469-495.
Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston: Houghton Mifflin Company.

Coyne, M., & Harn, B. (2006). Promoting beginning reading success through meaningful assessment of early literacy skills. Psychology in the Schools, 43, 33-43.

Cizek, G. (2001). Conjectures on the rise and fall of standards setting: An introduction to context and practice. In G. J. Cizek (Ed.), Setting performance standards: Concepts, methods, and perspectives (pp. 3-18). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.

Cronbach, L. J. (1982). Designing evaluations of educational and social programs. San Francisco: Jossey-Bass.

Curtis, M. J., Castillo, J. M., & Cohen, R. M. (2008). Best practices in system-level change. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 887-902). Washington, D.C.: National Association of School Psychologists.

Curtis, M. J., & Stollar, S. A. (1996). Applying principles and practices of organizational change to school reform. School Psychology Review, 25, 409-417.

DePaulo, P. (2000). Sample size for qualitative research. Quirk's Marketing Research Review, 636, 1-5. Retrieved March 20, 2006, from www.quirks.com/articles

Durlak, J. A., & Wells, A. M. (1997). Primary prevention mental health programs for children and adolescents: A meta-analytic review. American Journal of Community Psychology, 25, 115-152.

Elliott, D. S., & Mihalic, S. (2004). Issues in disseminating and replicating effective prevention programs. Prevention Science, 5, 47-53.

Elliott, J., Lee, S. W., & Tollefson, N. (2001). A reliability and validity study of the Dynamic Indicators of Basic Early Literacy Skills-Modified. School Psychology Review, 30, 33-49.

Farlow, L., & Snell, M. (1989). Teacher use of student performance data to make instructional decisions: Practices in programs for students with moderate to profound disabilities. Journal of the Association for Persons with Severe Handicaps, 14, 13-22.

Feldman, J., & Tung, R. (2001). Using data-based inquiry and decision-making to improve instruction. ERS Spectrum, 19, 10-19.
Fielding, N. G. (1993). Qualitative data analysis with a computer: Recent developments. Social Research Update, March. University of Surrey.

Fitzgerald, T. P., & Clark, R. M. (1976). Process evaluation for inservice training. Reading Improvement, 13, 194-198.

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Publication 231). Tampa, FL: Louis de la Parte Florida Mental Health Institute, University of South Florida.

Fletcher, J. M., Francis, D. J., Morris, R. D., & Lyon, G. R. (2005). Evidence-based assessment of learning disabilities in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34, 506-522.

Florida Center for Reading Research (FCRR). (n.d.). Progress Monitoring and Reporting Network. Retrieved March 20, 2006, from www.fcrr.org

Florida Department of Education. (n.d.). Just Read Florida! Retrieved March 20, 2006, from www.justreadflorida.com/docs/guidance.pdf

Foorman, B. R., Francis, D. J., Fletcher, J. M., Paras, M., & Schatschneider, C. (1998). The role of instruction in learning to read: Preventing reading failure in at-risk children. Journal of Educational Psychology, 90, 37-55.

Fossey, E., Harvey, C., McDermott, F., & Davidson, L. (2002). Understanding and evaluating qualitative research. Australian and New Zealand Journal of Psychiatry, 36, 717-732.

Fuchs, L., Deno, S., & Mirkin, P. (1984). The effects of frequent curriculum-based measurement and evaluation on student achievement, pedagogy, and student awareness of learning. American Educational Research Journal, 21, 449-460.

Fuchs, L., & Fuchs, D. (1986). Effects of systematic formative evaluation: A meta-analysis. Exceptional Children, 53, 199-208.

Fuchs, L., & Fuchs, D. (2002). Curriculum-based measurement: Describing competence, enhancing outcomes, evaluating treatment effects, and identifying treatment nonresponders. Peabody Journal of Education, 77, 64-84.

Fuchs, L., Fuchs, D., & Hamlett, C. (1989). Effects of instrumental use of curriculum-based measurement to enhance instructional programs. Remedial and Special Education, 10, 43-52.
Fuchs, L., Fuchs, D., Hamlett, C., & Ferguson, C. (1992). Effects of expert system consultation within curriculum-based measurement, using a reading maze task. Exceptional Children, 58, 436-450.

Fuchs, L. S., Wesson, C., Tindal, G., Mirkin, P., & Deno, S. L. (1982). Instructional changes, student performance, and teacher preferences: The effects of specific measurement and evaluation procedures (Research Paper No. 64). Minneapolis: University of Minnesota, Institute for Research on Learning Disabilities.

Fullan, M. (1997). The challenge of school change. Arlington Heights, IL: IRI/SkyLight Training.

Glaser, B. G., & Strauss, A. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago: Aldine.

Good, R. H., Gruba, J., & Kaminski, R. A. (2001). Best practices in using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) in an outcomes-driven model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 699-720). Washington, D.C.: National Association of School Psychologists.

Good, R. H., & Kaminski, R. A. (1996). Assessment for instructional decisions: Toward a proactive/prevention model of decision-making for early literacy skills. School Psychology Quarterly, 11, 326-336.

Good, R. H., & Kaminski, R. A. (2002). DIBELS oral reading fluency passages for first through third grades (Technical Report No. 10). Eugene, OR: University of Oregon.

Good, R. H., Kaminski, R. A., Smith, S., Simmons, D. S., Kame'enui, E. J., & Wallin, J. (2003). Reviewing outcomes: Using DIBELS to evaluate a school's core curriculum and system of additional intervention in kindergarten. In S. R. Vaughn & K. L. Briggs (Eds.), Reading in the classroom: Systems for observing teaching and learning. Baltimore: Paul H. Brookes.

Good, R. H., Kaminski, R. A., Simmons, D., & Kame'enui, E. J. (2001). Using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) in an Outcomes-Driven Model: Steps to reading outcomes. OSSC Bulletin, 44, Winter 2001.

Good, R. H., Simmons, D. S., & Kame'enui, E. J. (2001). The importance and decision-making utility of a continuum of fluency-based indicators of foundational reading skills for third-grade high stakes outcomes. Scientific Studies of Reading, 5, 257-288.
Good, R. H., Simmons, D. S., Kame’enui, E. J., Kaminski, R. A., & Wallin, J. (2002). Summary of decision rules for intensive, strategic, and benchmark instructional recommendations in kindergarten through third grade (Technical Report No. 11). Eugene, OR: University of Oregon.
Good, R. H., Simmons, D. S., & Smith, S. (1998). Effective academic interventions in the United States: Evaluating and enhancing the acquisition of early reading skills. School Psychology Review, 27, 45-56.
Goodstadt, M. S. (1988). School-based drug education in North America: What is wrong? What can be done? Journal of School Health, 56, 278-281.
Gresham, F. M. (2002). Responsiveness to intervention: An alternative approach to the identification of learning disabilities. In R. Bradley, L. Danielson, & D. P. Hallahan (Eds.), Identification of learning disabilities: Research to practice (pp. 467-564). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Griffin, A., & Hauser, J. R. (1993). The voice of the customer. Marketing Science, 12, 1-27.
Griffiths, A. J., VanDerHeyden, A. M., Parson, L. B., & Burns, M. K. (2006). Practical applications of response to intervention research. Assessment for Effective Intervention, 32, 50-57.
Grigg, N., Snell, M., & Loyd, B. (1989). Visual analysis of student evaluation data: A qualitative analysis of teacher decision making. The Journal of the Association for Persons with Severe Handicaps, 14, 23-32.
Grimes, J. & Tilly, W. D. (1996). Policy and process: Means to lasting educational change. School Psychology Review, 25, 465-476.
Hall, G. E. & Hord, S. M. (2006). Implementing change: Patterns, principles, and potholes. Boston: Allyn & Bacon.
Hall, G. E., & Hord, S. M. (2001). Implementing change: Patterns, principles, and potholes. Boston: Allyn & Bacon.
Harachi, T. W., Abbott, R. D., Catalano, R. F., Haggerty, K. P., & Fleming, C. B. (1999). Opening the black-box: Using process evaluation measures to assess implementation and theory building. American Journal of Community Psychology, 27, 711-731.
Hargreaves, A. (1997). Rethinking educational change with heart and mind. 1997 ASCD Yearbook. Alexandria, VA: Association for Supervision and Curriculum Development.
Helitzer, D. L., Davis, S. L., Gittelsohn, J., Gohn, S., Murray, D. M., Snyder, P., & Stecker, A. B. (1999). Process evaluation in a multi-site obesity primary prevention trial for Native American school children. Supplement to the American Journal of Clinical Nutrition, 69, 816S-824S.
Helitzer, D., Yoon, S., Wallerstein, N., & Dow y Garcia-Velarde, L. (2000). The role of process evaluation in the training of facilitators for an adolescent health education program. Journal of School Health, 70, 141-147.
Herman, J., & Gribbons, B. (2001). Lessons learned in using data to support school inquiry and continuous improvement: Final report to the Stuart Foundation (CSE Tech. Rep. No. 535). Los Angeles: University of California, Center for the Study of Evaluation.
Hintze, J. M., Ryan, A. L., & Stoner, G. (2003). Concurrent validity and diagnostic accuracy of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and the Comprehensive Test of Phonological Processing. School Psychology Review, 32, 541-556.
Hord, S. M. (1997). Professional learning communities: Communities of continuous inquiry and improvement. Austin, TX: Southwest Educational Development Laboratory.
Individuals with Disabilities Education Improvement Act of 2004, Pub. L. 108-466.
Ingram, D., Louis, K., & Schroeder, R. (2004). Accountability policies and teacher decision making: Barriers to the use of data to improve practice. Teachers College Record, 106, 1258-1287.
Israel, B., Cummings, K., & Dignan, M. (1995). Evaluation of health education programs: Current assessment and future directions. Health Education Quarterly, 22, 364-389.
Johnston, P. & Allington, R. (1991). Remediation. In R. Barr, M. Kamil, P. Mosenthal, & P. D. Pearson (Eds.), Handbook of reading research (Vol. II, pp. 984-1012). New York: Longman.
Joyce, B., & Showers, B. (2002). Student achievement through staff development (3rd ed.). Alexandria, VA: The Association for Supervision and Curriculum Development.
Kaminski, R. A., Cummings, K. D., Powell-Smith, K. A., & Good, R. H. (2008). Best practices in using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) in an outcomes-driven model. In A. Thomas and J. Grimes (Eds.), Best practices in school psychology V (pp. 1181-1204). Bethesda, MD: National Association of School Psychologists.
Kaminski, R. A. & Good, R. H. (1996). Toward a technology for assessing basic early literacy skills. School Psychology Review, 25, 215-227.
Kaminski, R. A., & Good, R. H. (1998). Assessing early literacy skills in a problem-solving model: Dynamic Indicators of Basic Early Literacy Skills. In M. R. Shinn (Ed.), Advanced applications of curriculum-based measurement (pp. 113-142). New York: Guilford Press.
Kame’enui, E. J., & Simmons, D. C. (2000). Beyond effective practices to schools as host environments: Building and sustaining a school-wide intervention model for reading. OSSC Bulletin, 41, 3-24.
Kerr, K., Marsh, J., Ikemoto, G., Darilek, H., & Barney, H. (2006). Strategies to promote data use for instructional improvement: Actions, outcomes, and lessons from three urban districts. American Journal of Education, 112, 496-520.
King, R. P., Deno, S., Mirkin, P., & Wesson, C. (1983). The effects of training teachers in the use of formative evaluation in reading: An experimental-control comparison (Research Paper No. 111). Minneapolis: University of Minnesota, Institute for Research on Learning Disabilities.
Klein, J. & Herman, R. (2003). The contribution of a decision support system to educational decision-making processes. Journal of Educational Computing Research, 28, 273-290.
Kovaleski, J. F., Gickling, E. E., Morrow, H., & Swank, P. R. (1999). High versus low implementation of instructional support teams: A case for maintaining program fidelity. Remedial and Special Education, 20, 170-183.
Lachat, M. & Smith, S. (2005). Practices that support data use in urban high schools. Journal of Education for Students Placed At Risk, 10, 333-349.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage Publications, Inc.
Little, J. W. (1997). Teachers’ professional development in a climate of educational reform. In M. Fullan (Ed.), The challenge of school change (pp. 57-84). Arlington Heights, IL: Skylight Professional Development.
Love, N. (2004). Taking data to new depths. Journal of Staff Development, 25, 22-46.
Marshall, C. & Rossman, G. B. (1999). Designing qualitative research (3rd ed.). Thousand Oaks, CA: Sage.
Maxwell, J. (2005). Qualitative research design: An interactive approach (2nd ed.). Thousand Oaks, CA: Sage Publications.
McGraw, S., Sellers, D., Stone, E., Bebchuk, J., Edmundson, E., Johnson, C., Bachman, K., & Luepker, R. (1996). Using process evaluation to explain outcomes. Evaluation Review, 20, 291-312.
Merriam, S. B. (2002a). Introduction to qualitative research. In Merriam and Associates (Eds.), Qualitative research in practice: Examples for discussion and analysis (pp. 3-17). San Francisco, CA: Jossey-Bass.
Merriam, S. B. (2002b). Assessing and evaluating qualitative research. In Merriam and Associates (Eds.), Qualitative research in practice: Examples for discussion and analysis (pp. 18-36). San Francisco, CA: Jossey-Bass.
Miles, M. B. & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage Publications.
Moats, L. C. (1999). Teaching reading is rocket science: What expert teachers of reading should know and be able to do. Washington, D.C.: American Federation of Teachers.
National Reading Panel (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction (Report No. NIH-00-4769). Washington, D.C.: National Institute of Child Health and Human Development.
Patton, M. Q. (1990). Qualitative evaluation and research methods (2nd ed.). Newbury Park, CA: Sage Publications.
Patton, M. Q. (2002). Qualitative evaluation and research methods (3rd ed.). Thousand Oaks, CA: Sage Publications.
Pentz, M. A., Trebow, E. A., Hansen, W. B., MacKinnon, D. P., Dwyer, J. H., Johnson, C. A., Flay, B. R., Daniels, S., & Cormack, C. (1990). Effects of program implementation on adolescent drug use behavior. Evaluation Review, 14, 264-289.
Rossi, P., & Freeman, H. (1993). Evaluation: A systematic approach (5th ed.). Newbury Park, CA: Sage Publications.
Seale, C. (1999). The quality of qualitative research: Introducing qualitative methods. Thousand Oaks, CA: Sage Publications.
Schaps, E., Moskowitz, J. M., Malvin, J. H., & Schaeffer, G. A. (1986). Evaluation of seven school-based prevention programs: A final report on the Napa Project. The International Journal of the Addictions, 21, 1081-1112.
Senge, P. (1990). The fifth discipline. New York: Doubleday.
Shaw, R., & Shaw, D. (2002). DIBELS oral reading fluency-based indicators of third grade reading skills for Colorado State Assessment Program (CSAP) (Technical Report). Eugene, OR: University of Oregon.
Simmons, D. C., Kame'enui, E. J., Good, R. H., Harn, B. A., Cole, C., & Braun, D. (2001). Building, implementing, and sustaining a beginning reading improvement model school by school and lessons learned. In M. Shinn, G. Stoner, & H. M. Walker (Eds.), Interventions for academic and behavior problems II: Preventive and remedial approaches. Washington, DC: National Association of School Psychologists.
Skiba, R., Wesson, C., & Deno, S. (1982). The effects of training teachers in the use of formative evaluation in reading: An experimental-control comparison (Research Report No. 88). Minneapolis: University of Minnesota, Institute for Research on Learning Disabilities.
Smith, S. B., Baker, S., & Oudeans, M. K. (2001). Making a difference in the classroom with early literacy instruction. Teaching Exceptional Children, 33, 8-14.
Smith, S. B., Simmons, D. C., & Kame’enui, E. J. (1998). Phonological awareness: Research bases. In D. C. Simmons & E. J. Kame’enui (Eds.), What reading research tells us about children with diverse learning needs: Bases and basics (pp. 61-127). Mahwah, NJ: Lawrence Erlbaum Associates.
Stanovich, K. E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21, 360-407.
Stanovich, K. E., & Stanovich, P. J. (1995). How research might inform the debate about early reading acquisition. Journal of Research in Reading, 18, 87-105.
Stecker, P., Fuchs, L., & Fuchs, D. (2005). Using Curriculum-based Measurement to improve student achievement: Review of research. Psychology in the Schools, 42, 795-819.
Supovitz, J., & Klein, V. (2003). Mapping a course for improved student learning: How innovative schools systematically use student performance data to guide improvement (Report). Philadelphia, PA: Consortium for Policy Research and Education.
Taylor, S. J. and Bogdan, R. (1998). Introduction to qualitative research methods (3rd ed.). New York: John Wiley & Sons, Inc.
Tindal, G., Fuchs, L., Mirkin, P., Christenson, S., & Deno, S. (1981). The relationship between student achievement and teacher assessment of short- or long-term goals (Research Report No. 61). Minneapolis: University of Minnesota, Institute for Research on Learning Disabilities.
Torgesen, J. K. (2002). Florida’s Reading First assessment plan: An explanation and guide. Retrieved March 20, 2006, from Florida Center for Reading Research website (www.fcrr.org/assessment/PDFfiles/Fl_Assess_Plan_Final.pdf).
U.S. Department of Education (n.d.). Reading First. Retrieved March 20, 2006, from www.ed.gov/programs/readingfirst/index.html
Wayman, J. C. & Stringfield, S. (2003). Technology-supported involvement of entire faculties in examination of student data for instructional improvement. American Journal of Education, 112, 549-571.
Wesson, C. (1991). Curriculum-based Measurement and two models of follow-up consultation. Exceptional Children, 57, 246-256.
Wesson, C., Deno, S., Mirkin, P., Sevcik, B., Skiba, R., King, R., Tindal, G., & Maruyama, G. (1988). A causal analysis of the relationship among ongoing measurement and evaluation, structure of instruction, and student achievement. The Journal of Special Education, 22, 330-343.
Wesson, C., King, R., & Deno, S. (1984). Direct and frequent measurement of student performance: If it’s good for us, why don’t we do it? Learning Disability Quarterly, 7, 45-48.
Wesson, C., Skiba, R., Sevcik, B., King, R., Tindal, G., Mirkin, P., & Deno, S. (1983). The impact of the structure of instruction and the use of technically adequate instructional data on reading improvement (Research Paper No. 116). Minneapolis: University of Minnesota, Institute for Research on Learning Disabilities.
Wesson, C., Skiba, R., Sevcik, B., King, R., & Deno, S. (1984). The effects of technically adequate instructional data on achievement. Remedial and Special Education, 5, 17-22.
Yell, M., Deno, S., & Marston, D. (1992). Barriers to implementing curriculum-based measurement. Diagnostique, 18, 99-112.
Young, V. (2006). Teachers’ use of data: Loose coupling, agenda setting, and team norms. American Journal of Education, 112, 521-548.
Appendix A
Figure 1: Example of a Class Status Report, which is one type of progress report available through the PMRN. This type of report allows for analysis of both class and individual needs. Colors used indicate severity of need for additional supports or instruction to meet benchmark goals. This type of report may be rank ordered by student alphabetical order or by student instructional need.
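For readers unfamiliar with how such color coding typically works, the sketch below illustrates the general idea of mapping a subtest score to an instructional-recommendation category. The function name, category labels, and cut scores are hypothetical placeholders for illustration only; they are not the actual PMRN color rules or DIBELS benchmark values.

```python
def instructional_recommendation(score, strategic_cut, benchmark_cut):
    """Map a subtest score to an instructional-need category.

    The cut scores and category names are hypothetical placeholders;
    actual DIBELS benchmarks differ by subtest and assessment period.
    """
    if score < strategic_cut:
        return "intensive"   # well below goal; substantial additional support
    if score < benchmark_cut:
        return "strategic"   # approaching goal; targeted additional support
    return "benchmark"       # at or above goal

# Example with invented cut scores of 8 and 20:
print(instructional_recommendation(12, strategic_cut=8, benchmark_cut=20))
# -> "strategic"
```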
Figure 2: Example of a Student Grade Summary Report, which is one type of summary report. This type of report allows for an analysis of individual and classroom needs in relation to benchmark goals for a particular assessment period.
Figure 3: Example of a Reading Progress Monitoring Student Cumulative Report, which is one type of cumulative report offered by the PMRN. This type of report offers an analysis of a student’s progress throughout the school year in relation to periodic benchmark goals and end-of-year goals. It also allows for comparative analysis of alternative assessments provided throughout the school year (e.g., Peabody Picture Vocabulary Test).
Appendix A Continued: Figure A4

[Figure: "Instructionally Interpretable Zones of Performance" – a plot of Winter Onset Recognition Fluency against Spring Phonemic Segmentation Fluency, divided into Zones A, B, C, and D.]

Figure 4: Graph developed by Good et al. (2000) for use in a Benchmark Linkage Report, which may help identify a school’s core curriculum needs and student instructional needs by analyzing performance on earlier benchmark goals in relation to later benchmark goals.
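As a rough illustration of how such a zone graph partitions students, the sketch below classifies a student into one of four quadrants depending on whether each of two benchmark scores meets its goal. The goal values and the letters attached to each quadrant are invented placeholders; they do not reproduce the zone definitions used by Good et al. (2000).

```python
def classify_zone(winter_onset_fluency, spring_psf,
                  winter_goal=25, spring_goal=35):
    """Assign a student to one of four quadrant zones based on whether each
    benchmark score meets its goal.

    The goal values and the zone letters are invented placeholders, not the
    actual DIBELS benchmarks or the zones defined in the original graph.
    """
    met_earlier = winter_onset_fluency >= winter_goal
    met_later = spring_psf >= spring_goal
    if met_earlier and met_later:
        return "Zone A"   # on track at both benchmark periods
    if met_earlier:
        return "Zone B"   # met the earlier goal but fell behind later
    if met_later:
        return "Zone C"   # missed the earlier goal but caught up
    return "Zone D"       # below goal at both periods

print(classify_zone(30, 20))  # "Zone B" with the placeholder goals above
```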
Appendix B
Appendix B: Table 3
Number of schools sampled across the three groups of participants.

Schools / Teachers / Reading Coaches / Specialists
1 X X
2 X
3 X
4 X
5 X
6 X X
7 X X
8 X
9 X X
10 X
11 X X
12 X
13 X X X
14
15 X X
16 X
Total Schools Sampled: 9 / 8 / 7
Appendix B: Table 4
Probability Sampling Table Developed by DePaulo (2000)
Probability of Missing a Population Subgroup in a Random Sample

Population Incidence    Number of Respondents
                        10      20      30      40      50      60      100     200
.50                     .001    <.001   <.001   <.001   <.001   <.001   <.001   <.001
.33                     .018    <.001   <.001   <.001   <.001   <.001   <.001   <.001
.25                     .056    .003    <.001   <.001   <.001   <.001   <.001   <.001
.20                     .107    .012    .001    <.001   <.001   <.001   <.001   <.001
.10                     .349    .122    .042    .015    .005    .002    <.001   <.001
.05                     .599    .358    .215    .129    .077    .046    .006    <.001
.01                     .904    .818    .740    .669    .605    .547    .366    .134
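The table values appear to follow from the probability that none of n randomly sampled respondents belongs to a subgroup with the stated population incidence, that is, (1 - incidence)^n. The short check below reflects this assumed derivation (it is not a formula quoted from DePaulo) and reproduces several cells of the table.

```python
# Probability that a simple random sample of n respondents contains no one
# from a subgroup with the given population incidence: (1 - incidence) ** n.
# Assumed derivation; it reproduces the tabled values above.
for incidence, n in [(0.10, 10), (0.05, 20), (0.01, 100)]:
    p_miss = (1 - incidence) ** n
    print(f"incidence = {incidence:.2f}, n = {n:3d}: P(miss) = {p_miss:.3f}")

# incidence = 0.10, n =  10: P(miss) = 0.349
# incidence = 0.05, n =  20: P(miss) = 0.358
# incidence = 0.01, n = 100: P(miss) = 0.366
```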
Appendix B: Table 5
Microsoft Excel format used for organizing, coding, and sorting transcript data

Topic #    Participant ID    Data Entry #    Data Segment
1300       FA                1               Text Text Text
2300       FA                21              Text Text Text
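Outside of Excel, the same column layout can be reproduced and sorted programmatically. The sketch below is a minimal illustration with invented example rows; only the four-column structure mirrors Table 5.

```python
# Minimal illustration of the coding-spreadsheet structure in Table 5.
# The rows are invented examples; only the column layout mirrors the table.
segments = [
    {"topic": 2300, "participant": "FA", "entry": 21, "segment": "Text Text Text"},
    {"topic": 1300, "participant": "FA", "entry": 1,  "segment": "Text Text Text"},
]

# Sorting by topic code, then participant, then entry number groups every
# segment assigned to a code, which supports cross-participant comparison.
for row in sorted(segments, key=lambda r: (r["topic"], r["participant"], r["entry"])):
    print(row["topic"], row["participant"], row["entry"], row["segment"])
```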
Appendix B: Table 6
Comparison of themes identified across multiple sources (teachers, Reading Coaches, and specialists) to organize findings for presentation. Columns (left to right): Climate/Culture, Resources/Supports, Knowledge of DIBELS, Collecting and Using Data.

Teachers
- Benefits of DIBELS: x x x x
- Concerns of DIBELS: x x x x
- General Comments of Impact of Reading First Grant/DIBELS: x x
- Knowledge of DIBELS: x
- Collecting Data: x
- DIBELS vs. Other Assessments: x x
- Issues Related to Monitoring Student Progress: x x
- General Comments about Assessments: x x
- Using Data: x x
- Usefulness/Benefits of PMRN reports: x x x
- Student reactions to DIBELS: x
- Teacher Self-Efficacy: x x x
- Support for teachers to use DIBELS: x
- Expectations/Emphasis/Pressures: x
- Advice for using DIBELS: x x

Reading Coaches
- Role of Reading Coach: x x x
- Reading Coaches’ Perceptions of Teachers using DIBELS/PMRN: x x x x
- RC perceptions of factors influencing teacher use of DIBELS: x x x x
- Students’ Reactions to DIBELS: x
- RC’s concerns about teachers/schools using DIBELS: x x x
- Advice from RC for using DIBELS: x x
- Role of Leadership: x x
- DIBELS vs. Other Assessments: x x
- RC’s perceptions of teacher concerns of DIBELS subtests: x x x

Specialists
- Comparisons b/w RF and non-RF schools: x
- Role and Importance of RC: x x
- Specialists’ perceptions of factors influencing teachers’ use of DIBELS: x x x
- Actions used to increase teachers’ value or use of DIBELS: x x x
- Progress Monitoring: x x
- Advice for future: x x
- Specialists’ perceptions of teachers’ value of DIBELS: x x x
- Data collection procedures: x
- Specialists’ perceptions of teacher concerns about DIBELS subtests: x x
Appendix B: Table 7
School district assessment schedule for kindergarten and first grades.

Tests / Frequency
(KG) School readiness test: 1x at start of KG year.
(KG) District assessment: 5x in year (at least); many use for progress monitoring also.
(1st) Running Record Assessments: 5x in year (at least); many use for progress monitoring also.
(KG/1st) DIBELS: 3x in year.
(KG/1st) Peabody Picture Vocabulary Test: 1x at end of the year.
(1st) Stanford Achievement Test – 10: 1x at end of the year.
Appendix C
Appendix C1
Teacher Interview Guide (30 minutes)

1. As a teacher at a Reading First school, can you help me understand what DIBELS is and what it is used for?
Follow-Up Questions (if needed):
- How helpful are DIBELS data for you and why?
- Do you have any concerns about using DIBELS and why?

2. I understand that all teachers at Reading First schools have access to a web-based program called the Progress Monitoring and Reporting Network, which allows teachers access to their students’ DIBELS data. Do you use this program, and if so, how helpful has it been for you?
Follow-Up Questions (if needed):
- If no, why not? How do you receive your class information?
- If yes:
  o What types of graphs do you use and why?
  o What do you use the graphs for?
  o How often do you log onto this program?
- Do you have any suggestions about how to improve this data-based program?

3. How often do you collect DIBELS data for students who are struggling in your class in reading?
Follow-Up Questions (if needed):
- Do you collect progress monitoring data for students who are struggling?
- If yes: Why do you collect such information?
- If no: How do you track the progress of students who are struggling?
- If the teacher does not collect such data: Who collects such information?

4. Have you received, or participated in, any training in the use of DIBELS, either one-on-one with your Reading Coach or through formal training with the school district? If so, could you briefly tell me what you learned and how helpful that/those trainings were?
Follow-Up Questions (if needed):
- Can you help me understand what the different subtests measure?
- Have you learned what the 5 skill areas are in reading? Would you help me with what they are?
- Do you know which of the 5 skill areas DIBELS is intended to measure?
- What are teachers supposed to do with these data?

5. What other types of assessments are conducted throughout the school year in your classroom or school, and do you have any concerns about how to use DIBELS in addition to using other assessments?
Follow-Up Questions (if needed):
- How do you obtain the DIBELS data for your class?
- Does anyone help you evaluate the scores and determine what to do with them?
- What is the most frequent use of the DIBELS data for you?
- Does DIBELS data provide you with any information that other assessments do not? If so, what?

6. What supports are available for you in understanding how to interpret your students’ scores?
Follow-Up Questions (if needed):
- What role does your school’s Reading Coach have? What does he/she do?
- What learning opportunities are there to learn more about using DIBELS?
- Would you say that your school strongly supports DIBELS use? Why or why not?

7. Finally, do you have any comments or thoughts regarding the use of DIBELS that leaders at the district level and state level should know about from a classroom teacher’s perspective?
Follow-Up Questions (if needed):
- If the teacher responds with a general statement about overwhelming amounts of assessments in the district:
  o It sounds like there is a lot going on in this school district. Can you help me understand how this is affecting your perceptions about DIBELS?
  o Do you feel that DIBELS has been valuable for you? Why or why not?
  o Would you like to receive more training or information about using DIBELS?

Thank you so much for your time!
Appendix D
Appendix D1
Case Study Questions (30 minutes)

I have a case study that I would like to get your feedback on. It is for a student who is in kindergarten/the first grade. The student has not repeated any grades. The student is only having difficulties in reading. Could you take a look at this student’s data and tell me what your impressions are about this student?

Follow-Up Questions (if needed):
- What would you do with a student like this?
- What do you feel this student needs?
- What other types of information would you need to know to help this student?
- What difficulty(s) is this student having based on this information?
- What interventions would you suggest to help this student?
- Have you seen these types of reports before?
- Do you prefer to use other types of reports for looking at individual students?
Appendix D2: Case Study #1: Kindergarten Student Class Status Report
Appendix D2: Case Study #1: Kindergarten Student Grade Summary Report
Appendix D2: Case Study #1: Kindergarten Reading Progress Monitoring Student Cumulative Report
Appendix D2: Case Study #2: First Grade Class Status Report
Appendix D2: Case Study #2: First Grade Student Grade Summary Report
Appendix D2: Case Study #2: First Grade Reading Progress Monitoring Student Cumulative Report
Appendix E
Appendix E1
Focus Group Guide (1 hour)

1. As a Reading First school you are required to collect DIBELS data at least 4 times per year. What are your thoughts concerning the use of DIBELS at your school? (10 minutes)
- How do you feel about the use of DIBELS?
- What changes have occurred at your school since DIBELS has been in use?
- Has DIBELS been helpful in improving student achievement at your school?
- Do you have any concerns regarding its use at your school?

2. What are your thoughts or feelings concerning teachers’ use of DIBELS? (10 minutes)
- How do teachers use the data?
- What problems or benefits arise with teachers’ use of DIBELS?
- What is your opinion about how to support teachers’ use of DIBELS data?

3. How are DIBELS data being utilized? (10 minutes)
- What challenges does your school face regarding the use of DIBELS data?
- Have any benefits occurred by using DIBELS data?
- How do you feel about using the Progress Monitoring and Reporting Network (the web-based reports offered by the Florida Center for Reading Research)?
- How do teachers feel about using the reports offered by FCRR?
- What reports do you feel offer the most information or the most useful information? Why?

4. How does DIBELS compare to other assessments provided in your school? (10 minutes)
- Do you feel DIBELS provides the necessary information your school needs?
- What is your opinion regarding the use of multiple assessments in reading at your school?
- What assessments are most helpful for screening and monitoring student achievement in reading, in your opinion, and why?

5. What additional comments, concerns, or suggestions do you all have about using DIBELS at your school that you feel are important for others to know? (10 minutes)
- Are there any changes that you would like to see happen at either the district or state level with regards to using DIBELS?
- Has the use of DIBELS been helpful in your school meeting its goals for students?
- Are there any concerns or problems that you would like district or state leaders to address with regards to using DIBELS?

Thank you all for your time!
Appendix F
Appendix F1
Field Notes – Teacher Interviews

Participant Code: ______   Date Collected: _________   Date Summarized: _________

1. Gender: Male ____  Female ____
2. Current Grade Teaching: ____
3. Age: 21-25  26-30  31-35  36-40  41-45  46-50  51-55  56-60  61-65  66+
4. Ethnicity:
   _____ Caucasian
   _____ African American
   _____ Hispanic/Latino
   _____ Asian/Pacific Islander
   _____ Native American/Alaska Native
   _____ Other (specify): ________________________
5. Number of years of experience teaching reading at current grade level?
   ___ <1  ___ 1-5  ___ 6-10  ___ 11-15  ___ 16-20  ___ 21-25  ___ 26+
6. Number of years teaching at present school in the current grade level being taught?
   ___ <1  ___ 1-5  ___ 6-10  ___ 11-15  ___ 16-20  ___ 21-25  ___ 26+
7. Highest Degree Attained: ___ Bachelors  ___ Masters  ___ Specialist  ___ Doctorate  ___ Other: ________________
8. Credentials (Teacher Certification Areas) – Please mark all that apply:
   ___ K-12 General Education Teacher
   ___ Special Education Teacher
   ___ Reading Endorsement
   ___ Principal/Administrator
   ___ Other (specify): ___________________
9. Average number of hours of participation in professional development activities in reading per year?
   ___ 0  ___ 1-5  ___ 6-10  ___ 11-15  ___ 16-20  ___ 21-25  ___ 26-30  ___ 30+
10. Any previous experience teaching students with Learning Disabilities as an ESE teacher:
    ___ YES: If so, how many years of experience: _____
    ___ NO
11. Is your Reading Block scheduled for 90 minutes per day?
    ___ YES
    ___ NO: If not, how many minutes per day? (specify): _____
12. How many students do you currently have in your classroom? (specify): ____
13. Descriptions of the classroom environment (physical arrangement):
    ________________________________________________________________________
    ________________________________________________________________________
14. Other relevant notes to include concerning physical arrangement and/or instructional practices observed:
    ________________________________________________________________________
    ________________________________________________________________________
15. Notes concerning interview – ideas, thoughts, feelings concerning the interview:
    ________________________________________________________________________
    ________________________________________________________________________
Appendix G
Appendix G1 – Teacher Interview Codes Draft 1

Interview Codes:
100 – Description of DIBELS
200 – Use of Colors on DIBELS
300 – General perceptions of PMRN reports
400 – Specific PMRN reports preferred
500 – Using PMRN reports to inform instruction
600 – Understanding of how fluency relates to comprehension
700 – Using DIBELS for progress monitoring
800 – How to monitor students’ progress if not DIBELS
900 – What data are used to inform instruction
1000 – Participation in DIBELS training
1100 – PMRN Training
1200 – DIBELS vs. Other Assessments (comparisons)
1300 – Accessing the PMRN (Technical skills)
1400 – Teachers’ perceptions of school culture regarding DIBELS
1500 – Using nonsense words
1600 – Perceptions of the Nonsense Word Fluency subtest
1700 – Good readers who don’t comprehend what they read
1800 – Conflict between reading nonsense words and class strategies for unknown words
1900 – Instructional strategies for reading unknown words
2000 – Setting the climate for students taking the DIBELS
2100 – What is needed to maintain use of DIBELS into the future at school
2200 – What state and district leaders should know from a teacher’s perspective
2300 – How DIBELS is used to assign students to interventions/services
2400 – Literacy Success Program/Title I services
2500 – Administrator support for DIBELS
2600 – Concerns about using DIBELS
2700 – What teachers like about DIBELS
2800 – Do teachers think DIBELS is valuable
2900 – What teachers think the DIBELS measures
3000 – What teachers think the Nonsense Word Fluency measures
3100 – Do teachers need more training on DIBELS
3200 – What additional training do teachers want
Appendix G2 – Teacher Interview Codes Draft 2

DIBELS
D1 Knowledge of DIBELS Test/Measures
- Description of DIBELS testing cycles
- What teachers think the DIBELS measures
- What teachers think a particular subtest measures
- Teachers’ knowledge of the subtests used for their grade level
- In first year, teachers knew little about the DIBELS and were generally skeptical
D2 Procedures for Collecting and Distributing Results of DIBELS
- School procedures for providing data to teachers
- Timeliness of procedures to collect and distribute DIBELS data
D3 DIBELS Training
- Participation in DIBELS training
- Specific requests for additional training
- Teachers feel they don’t have sufficient training on what the DIBELS does
- Need to know why it is important as a tool in relation to teaching
D4 Using the DIBELS to Monitor Students and Make Decisions
- Using DIBELS for progress monitoring
- How DIBELS is used to assign students to interventions/services
- Teachers’ ability/motivation to interpret DIBELS data
- Grade level data used to identify students in need of help (e.g., Great Leaps)
- Staffing children for ESE consideration using the DIBELS
- Making retention decisions using DIBELS data
- Three cycles of data collection are not enough
- Not enough emphasis is placed on student progress rather than on meeting standards
D5 Teacher Observations of Students’ Reactions to DIBELS
- Setting the climate for students taking the DIBELS
- Concerns about timing children on the DIBELS
- Children frustrated with being tested by a stranger or in a different location
- Children who are still below benchmarks but making progress need lots of praise and continued encouragement
D6 Teacher Perceptions of School Support for DIBELS
- Teachers’ perceptions of school culture regarding DIBELS
- What is needed to maintain use of DIBELS into the future at school
- Concerns that there is an overemphasis on the use of DIBELS
- DIBELS viewed negatively b/c of association with politics at school
- More collaborative environment after first couple of years
D7 Teachers’ Perceptions of the Value of DIBELS
- DIBELS vs. Other Assessments (comparisons)
- Concerns about using DIBELS with children who have speech problems
- Concerns about teacher competitiveness over DIBELS results
- What teachers like about DIBELS
- Value of the Nonsense Word Fluency subtest
- Lack of agreement between DIBELS and other assessments
- Concerns about accuracy or consistency of the DIBELS measures over time
- DIBELS takes less time to complete compared to other assessments
- DIBELS interrupts class time
- Teachers feel they need to teach to the test since it is timed
- Teachers don’t value it if they don’t understand its usefulness
- Concerns with interpreting quantitative information w/o qualitative information
- Teachers feel it is a depressing test because the standards are so high
- Most of the time DIBELS is not helpful because observations in small groups/classroom assessments already indicate which students need help
- Teachers like that someone else does the DIBELS assessment

PMRN
P1 Teachers’ Perceptions of the Value of PMRN Reports
- Concerns/Problems with using PMRN reports
- What teachers like about using the reports
- General perceptions of PMRN reports
- Results should be available faster/Reports take too long
- Using the Parent Letter report to communicate with parents
- Concerns that the parent report does include “above average” praise types of feedback
- Use Parent Letter to solicit support from parents and offer ideas for use at home
- Parent report too much for some parents
- Add notes to parent report for parents to read, or highlight important parts of the report
- Use “box and whisker” formats to compare classes and school to other schools
P2 Technical Skills (UNDERSTANDING): How to Access and Read PMRN Reports
- Use of Colors on PMRN
- PMRN Training
- Accessing the PMRN (Technical skills)
- Teachers need constant reward/encouragement to use the reports/PMRN website
P3 Using PMRN Reports to Identify Student Needs
- Specific PMRN reports preferred
- Using PMRN reports to inform instruction

READING COACHES
RC1 Teachers’ Perceptions of the Value of Reading Coaches
- Concerns about Reading Coaches
- Likes about Reading Coaches
- RC is always available for help if needed
RC2 What Reading Coaches Do in the Schools
- RC provides training and support
- RC assists with data analysis with DIBELS
- RC provides assistance with intervention and instructional ideas
- RC assists with how to make sense of conflicting data from multiple data sources
- Takes teachers to other schools to observe best practices
- Model lessons in the classroom
- Provide books/chapters for teachers to read as professional development

READING DEVELOPMENT RESEARCH
RD1 Relationship Between DIBELS and Reading Research
- Understanding of how fluency relates to comprehension
- Understanding how DIBELS is directly tied to research on reading development

READING INSTRUCTION/INTERVENTIONS
RI1 Specific Strategies/Lessons Taught in the Classroom
- Instructional strategies for reading unknown words
- Practice blending sounds
- Practice making rhyming words
- Practice reading nonsense words
- Using nonsense words
- Stretching out words and sounding them out
- Encourage students to sound out words
- Model for students how to read with expression/fluency
- Earobics computer program
- SRA Open Court
- Harcourt Interventions
- Great Leaps
RI2 Instructional Services in Schools Provided by Others
- Literacy Success Program/Title I services
RI3 Issues with Providing Interventions in the Class
- Not enough time to find or develop individual lessons/interventions for students who are not doing well
- Teachers feel they need more supports or people to help deliver interventions
- Behavior problems interfere with teaching and providing instruction
- Concerns with limited support at home for students

TEACHER ABILITY TO INTERPRET AND USE ASSESSMENT DATA
T1 Knowledge of How to Use Multiple Sources of Data Together
- Dealing with conflicting assessment information on a student
- Identifying converging information about a student’s abilities
- Good readers who don’t comprehend what they read
T2 Knowledge or Training on RTI
- RTI will be used at school for next year

ADMINISTRATOR INVOLVEMENT
AI1 Teachers’ Perceptions of Value of Administrator Involvement
- Administrator support for DIBELS
- Changes in value or importance of DIBELS among administrators
- How DIBELS data are used at the building level (by administrators)
- Concerns about pressure to perform as a teacher
- Feeling that job is on the line
- Data is used as a means to judge teacher competence
- Too much emphasis as an outcomes measure instead of a progress measure
- Climate of negativity or positiveness established by the school leadership
- DIBELS can become associated with negative politics at a school
- Afraid to ask for help for fear of being judged negatively
- Feeling isolated because climate at school prevents asking for help from colleagues
- Need more collaboration among staff

ALTERNATIVE ASSESSMENTS
AS1 Using Assessments Other than DIBELS
- How to monitor students’ progress if not DIBELS
- What data are used to inform instruction if not DIBELS
- Using Running Records to decide placement of groups and decisions
- Description of the PIAP/Kaplan Assessment

ADVICE TO EDUCATION LEADERS FROM TEACHERS
AE1 Recommendations to State/District Leaders about DIBELS
- Suggestions about how to revise the DIBELS
- Concerns about using DIBELS
- Additional training/information requested by teachers
- Results need to be more immediately available for use
- Need more support on how to use the DIBELS information to help students
AE2 Statements About Assessments in Schools in General
- What state and district leaders should know from a teacher’s perspective
- What additional training do teachers want
Appendix G3 – Teacher Interview Codes Draft 3

CODES
There are 8 broad categories of topics listed below. The BOLD and UNDERLINED topic headings are the codes to use with the transcripts. Across the broad categories there are 28 Topic Headings for use with the transcripts. Sub-headings listed below the bold/underlined headings are merely there to provide some examples of things that would fall under that category.

1000 Overall Value of DIBELS

1100 Benefits of DIBELS
1110 Develop and adjust instructional groups
1111 Identify specific skills for targeted instruction
1112 Helps to monitor student gains and needs
1113 Requires more understanding of reading development
1114 Allows for differentiated instruction to support all students
1115 Access to raw data allows for developing interventions
1116 Provides information for supporting student into next grade
1117 Corresponds with other assessments and class observations
1118 Seems more reliable/objective with someone else assessing the student
1119 Use of DIBELS/RF grant has led to increased reading outcomes
1120 Instructional ideas learned through DIBELS/RF have proven to be useful in math and writing (e.g., use of differentiated instruction)
1121 Provides information about who is entering grade with necessary prerequisite skills

1200 Concerns About DIBELS
1210 Too much emphasis on fluency and none on comprehension
1211 NWF subtest
1212 Questionable validity due to:
- being a timed test
- doesn’t correspond with other assessments or observations
- students tested in unknown or different settings than classroom
1213 Doesn’t provide enough information for student going to next grade
1214 Too much emphasis on the score when it’s only a snapshot
1215 Questionable reliability due to:
- when someone other than teacher is assessing (unfamiliar with student)
- errors made by people doing the assessment
- when student is shy or unsure of stranger doing the assessment
- results inconsistent with other assessments
1216 Encourages teaching to the test
1217 Not helpful for improving instruction
1218 ORF standards are too high
1219 Creates negative competition and judgment among teachers
1220 Too little recognition of student gains when they are still below benchmark

1300 General Comments of Impact of DIBELS/Reading First Grant
1310 Teachers have mixed views – basically just another test to give
1311 Teacher may not know enough about it to appreciate the data
1312 Younger generation teachers seem more open to it than older generation
1313 Overall perceptions seem to increase positively with each year
1314 First impression with DIBELS negatively impacted due to training
1315 Reading First grant has helped to create more team work
1316 Reading First grant has brought many resources that have been very valuable and effective for supporting student learning in reading
1317 Like that there are only three cycles rather than four – too much testing
1318 Generally ok with DIBELS, but it’s not critical to driving instruction
1319 Little value because it offers little more than already known by teachers who work with them

2000 Conducting Assessments

2100 Knowledge of DIBELS
2110 How often given
2111 Description as a timed test
2112 Required assessment
2113 Knowledge about why timing is important
2114 Description of subtests

2200 Collecting Data Procedures/Description
2210 DIBELS – Whole class all at once approach
2211 DIBELS – One student at a time approach
2212 District Assessments – One-on-one
2213 Comparison of time efficiency to complete different tests

2300 DIBELS vs. Other Assessments (Value Comparisons)
2110 Preference of specific test for use in planning instruction
- Influenced by immediacy of feedback to teacher on how to support student
- Influenced by familiarity of tests available for use
2311 Ability to use DIBELS in conjunction with other assessments
- Influenced by knowledge of DIBELS and what it offers
- Influenced by knowledge of how to interpret DIBELS data
- Influenced by correspondence of successful outcome between DIBELS and other assessments
2312 Use DIBELS only because it is required
2313 Absence of district assessment in reading for first grade
2314 DIBELS more time efficient and less impacting upon instructional time
2315 More emphasis given to other assessments because they drive decisions regarding retention at end of year

2400 Issues Related to Monitoring Student Progress
2410 Influenced by knowledge of administering and accessing materials
2411 Awareness of option to give DIBELS more often than 3X a year
2412 Influenced by level of training to administer and accessibility for use
2413 Direct observations during group/class instruction given higher value
2414 Influenced by availability of others to give DIBELS
2415 Lack of possibility or awareness of option to retest if measurement error suspected – devalues DIBELS
2416 Using or making graphs to chart progress
2417 Involving students in monitoring progress and goal setting

2500 General Comments about Assessments
2510 Too much testing in schools and it interferes with instruction
2511 Too much testing that does not lead to anything valuable to use
2512 KG PIAP and 1st grade Running Record tests take too long to complete with the whole class
2513 Following tests are given between KG and 1st grades collectively:
- KG Flickers (1x at beginning of year)
- KG PIAP (district assessment) (5x in year)
- KG DIBELS (3x in year)
- KG Peabody Picture Vocabulary Test (1x at end of year)
- 1st Grade Running Record (5x in year)
- 1st Grade DIBELS (3x in year)
- 1st Grade Stanford Achievement Test – 10 (1x at end of year)

3000 Using Data
3100 Assigning resources based on DIBELS data (e.g., personnel)
3200 Using DIBELS to determine what interventions to give
3300 Using DIBELS for placement decisions (e.g., Spec. Ed or Retention)
3400 Using DIBELS data to determine level or focus of instruction (i.e., individual, small group, classroom, or grade level)
3500 Collaborative problem-solving with grade level colleagues (i.e., PLC’s)
3600 Effective use influenced by frequency of data collection and access to raw scores
3700 Using qualitative information about students to help interpret DIBELS

4000 Progress Monitoring and Reporting Network (PMRN)
4100 Usefulness/Benefits of PMRN reports
4110 Communicating with parents
4111 Use of colors to read reports
4300 Preference for specific reports or mention of reports used
4400 Problems associated with interpreting or accessing reports
4500 Recommendations for improving the reports/PMRN system

5000 Student Involvement or Reactions to DIBELS
5100 Student Involvement or Reactions to DIBELS

6000 Climate/Culture at School/District Related to Assessments and Standards
6100 Teacher Self-Efficacy
6110 Beliefs about educating students
6111 Coping with changes in teaching standards/testing
6112 Pressures to teach to the test
6113 Taking initiative to learn more about DIBELS
6200 Support for Teachers to Use DIBELS
6210 Reading Coach Support
6211 Administration Support
6212 Training
6213 Assistants helping in the classroom
7000 Advice to District or State Leaders
7100 Need more people to assist with testing/instruction
7200 Need more training on the DIBELS
7300 Access to more intervention materials to use based on DIBELS data
7400 Share more information about what is working at other schools
7500 Minimize or streamline the amount of testing that is happening

8000 Other/Misc.
8100 Other/Misc.
Appendix G4 – Teacher Interview Codes Draft 4

CODES
There are 8 broad categories of topics listed below. The BOLD and UNDERLINED topic headings are the codes to use with the transcripts. Across the broad categories there are 16 Topic Headings for use with the transcripts. Sub-headings listed below the bold/underlined headings are merely there to provide some examples of things that would fall under that category.

1000 Overall Value of DIBELS

1100 Benefits of DIBELS
1110 Develop and adjust instructional groups
1111 Identify specific skills for targeted instruction
1112 Helps to monitor student gains and needs
1113 Requires more understanding of reading development
1114 Allows for differentiated instruction to support all students
1115 Access to raw data allows for developing interventions
1116 Provides information for supporting student into next grade
1118 Seems more reliable/objective with someone else assessing the student
1119 Use of DIBELS has led to increased reading outcomes
1121 Provides information about who is entering grade with necessary prerequisite skills

1200 Concerns About DIBELS
1211 Concerned about using the NWF subtest
1212 Questionable validity
1213 Doesn’t provide any new information
1215 Questionable reliability
1216 Encourages teaching to the test
1217 Not helpful for improving instruction
1218 ORF standards are too high
1219 Creates negative competition and judgment among teachers
1220 Too little recognition of student gains when they are still below benchmark

1300 General Comments of Impact of DIBELS/Reading First Grant
1310 Teachers have mixed views – basically just another test to give
1311 Teacher may not know enough about it to appreciate the data
1312 Younger generation teachers seem more open to it than older generation
1313 Overall perceptions seem to increase positively with each year
1314 First impression with DIBELS negatively impacted due to training
1315 Reading First grant has helped to create more team work
1316 Reading First grant has brought many resources that have been very valuable and effective for supporting student learning in reading
1317 Like that there are only three cycles rather than four – too much testing
1318 Generally ok with DIBELS, but it’s not critical to driving instruction
1319 Little value because it offers little more than already known by teachers who work with them
1320 Any general statement about liking or disliking but without reasons
1321 Instructional ideas learned through Reading First are helpful in other areas

2000 Conducting Assessments

2100 Knowledge of DIBELS
2110 How often given
2111 Description as a timed test
2112 Required assessment
2113 Knowledge about why timing is important
2114 Description of subtests (without value or judgement)

2200 Collecting Data Procedures/Description
2210 DIBELS – Whole class all at once approach
2211 DIBELS – One student at a time approach
2212 District Assessments – One-on-one
2213 Comparison of time efficiency to complete different tests
2214 General procedures or description of how long it takes to test class
2215 Teachers not administering the DIBELS

2300 DIBELS vs. Other Assessments (Value Comparisons)
2110 Preference of specific test for use in planning instruction
- Influenced by immediacy of feedback to teacher on how to support student
- Influenced by familiarity of tests available for use
2316 Using DIBELS in conjunction with other assessments
- Influenced by knowledge of DIBELS and what it offers
- Influenced by knowledge of how to interpret DIBELS data
- Influenced by correspondence of successful outcome between DIBELS and other assessments
2317 Use DIBELS only because it is required
2318 Absence of district assessment in reading for first grade
2319 DIBELS more time efficient and less impacting upon instructional time
2320 More emphasis given to other assessments because they drive decisions regarding retention at end of year
2321 Direct observations during group/class instruction given higher value

2400 Issues Related to Monitoring Student Progress
2410 Knowledge of administering and accessing materials
2411 Awareness of option to give DIBELS more often than 3X a year
2412 Influenced by level of training to administer and accessibility for use
2414 Influenced by availability of others to give DIBELS
2415 Lack of possibility or awareness of option to retest if measurement error suspected – devalues DIBELS
2416 Using or making graphs to chart progress
2417 Involving students in monitoring progress and goal setting
2418 Description of using other tests for monitoring progress
2419 Observing as monitoring progress

2500 General Comments about Assessments
2510 Too much testing in schools and it interferes with instruction
2511 Too much testing that does not lead to anything valuable to use
2512 KG PIAP and 1st grade Running Record tests take too long to complete with the whole class
2513 Following tests are given between KG and 1st grades collectively:
- KG Flickers (1x at beginning of year)
- KG PIAP (district assessment) (5x in year)
- KG DIBELS (3x in year)
- KG Peabody Picture Vocabulary Test (1x at end of year)
- 1st Grade Running Record (5x in year)
- 1st Grade DIBELS (3x in year)
- 1st Grade Stanford Achievement Test – 10 (1x at end of year)
3000 Using Data
3100 Using Data (any data for any assessment)
3110 Assigning resources based on DIBELS data (e.g., personnel)
3111 Using DIBELS to determine what interventions to give
3112 Using DIBELS for placement decisions (e.g., Spec. Ed or Retention)
3113 Using DIBELS data to determine level or focus of instruction (i.e., individual, small group, classroom, or grade level)
3114 Collaborative problem-solving with grade level colleagues (i.e., PLC’s)
3115 Effective use influenced by frequency of data collection and access to raw scores
3116 Using qualitative information about students to help interpret DIBELS
3117 Description of using other types of assessment data

4000 Progress Monitoring and Reporting Network (PMRN)
4100 Usefulness/Benefits of PMRN reports
4110 Communicating with parents
4111 Use of colors to read reports
4112 Preference for specific reports or mention of reports used
4113 Problems associated with interpreting or accessing reports
4114 Recommendations for improving the reports/PMRN system

5000 Student Involvement or Reactions to DIBELS
5100 Student Involvement or Reactions to DIBELS
5110 Students feeling anxiety or negativity as a response to being tested by the DIBELS

6000 Climate/Culture at School/District Related to Assessments and Standards
6100 Teacher Self-Efficacy
6110 Beliefs about educating students
6111 Coping with changes in teaching standards/testing
6112 Pressures to teach to the test
6113 Taking initiative to learn more about DIBELS
6200 Support for Teachers to Use DIBELS
6210 Reading Coach Support
6211 Administration Support
6212 Training
6213 Assistants helping in the classroom
6300 Expectations/Emphasis/Pressures
6310 Too much emphasis on fluency and none on comprehension
6311 Too much emphasis on the score when it’s only a snapshot
6312 State Standards/School Grades by State

7000 Advice to District or State Leaders
7100 Advice related to using the DIBELS
7110 Need more people to assist with testing/instruction
7111 Need more training on the DIBELS
7112 Access to more intervention materials to use based on DIBELS data
7113 Share more information about what is working at other schools
7114 Minimize or streamline the amount of testing that is happening
7115 Advice on how to modify the DIBELS test

8000 Other/Misc.
8100 Other/Misc. Use this category ONLY if the comment talks about anything OTHER THAN reading, assessments, school climate/pressures, or the teacher talking about herself. Also use this category if the teacher does not understand the question and is not providing any information.
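Because the final code numbers are hierarchical (for example, sub-heading 1110 falls under topic heading 1100, which falls under broad category 1000), coded segments can be tallied at any level by truncating the code number. The sketch below illustrates that rollup with invented segment codes and counts; it is not part of the study's actual analysis procedure.

```python
from collections import Counter

# Invented example: code numbers assigned to transcript segments.
coded_segments = [1110, 1111, 1212, 1310, 2111, 2510, 3113, 4110, 1110]

# Roll each code up to its broad category (1000-level) and topic heading
# (100-level) by integer truncation, e.g. 1110 -> 1100 -> 1000.
broad_counts = Counter((code // 1000) * 1000 for code in coded_segments)
topic_counts = Counter((code // 100) * 100 for code in coded_segments)

print(broad_counts)  # Counter({1000: 5, 2000: 2, 3000: 1, 4000: 1})
print(topic_counts)  # e.g. Counter({1100: 3, 1200: 1, 1300: 1, ...})
```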
Appendix H
Appendix H1 – Preliminary Results for Member Checks and Peer Review

Table of Contents
I. General Information
   a. Purpose of Study
   b. Research Questions
   c. Participants
   d. Data Collection Overview
   e. General Overview of Findings
II. Researcher’s Topics in Interviews/Focus Groups
III. Teacher Perceptions and Understandings of the DIBELS
   a. Climate/Culture of School
   b. Supports/Resources Available
   c. Knowledge of DIBELS
   d. Collecting and Using Data
IV. Perceptions of Reading Coaches
   a. Climate/Culture of School
   b. Supports/Resources Available
   c. Teachers’ Knowledge of DIBELS
   d. Collecting and Using Data
   e. Advice
V. Perceptions of “Specialists”
   a. Culture/Climate of Schools
   b. Support/Resources Available
   c. Teachers’ Knowledge of DIBELS
   d. Collecting and Using Data
VI. Case Study Comparative Analysis between Teachers and DIBELS Experts
   a. Expert Review of Kindergarten Case Study
   b. KG teachers’ review of Kindergarten Case Study
   c. Expert Review of First Grade Case Study
   d. First grade teachers’ review of First Grade Case Study
   e. Expert Opinion on the use of DIBELS/Data in General at Reading First Schools
307 I. General Introduction a. Purpose of Study i. Descriptive Qualitative Study ii. To understand the perceptions and use of DIBELS by teachers at elementary schools with Reading First grant. b. Research Questions i. What attitudes and perceptions exist among persons other than teachers who participate in the collection, input, and analysis of DIBELS data throughout the school year? ii. What are teachers’ perceptions and understandings a bout DIBELS and the PMRN? iii. How do teachers’ understandings and use of DIBELS d ata, as presented in the PMRN reports, compare to Reading First experts who are provided with the same information? c. Participants i. Schools 1. Elementary schools in fourth year of Reading First grant implementation. 2. 15 schools total sampled (teachers, reading coaches specialists) ii. Teachers 1. KG or 1st grade teachers only (both combined represent all 5 subtests of DIBELS). 2. At least 2 years teaching present grade at present school 3. Aggregated Demographics a. 14 teachers (7 KG and 7 First) b. Range of years of experience teaching current grade = 1-26 years; 7 teachers with less than 10 years experienc e and 7 above 10 years experience. c. Range of age = 21-65. d. Range of credentials – 11 teachers with Bachelor’s degree in K-12 Teaching, 3 with Masters degree. iii. Reading Coaches 1. 8 participants iv. “Specialists” (involved in collection and/or use of DIBELS at Reading First schools). 1. 6 participants 2. Student Services 3. ESOL v. DIBELS “experts” (individuals at state level who ha ve expertise in use of PMRN/DIBELS data) 1. 2 participants


         2. Individual interviews reviewing both KG and 1st grade Case Studies
   d. Data Collection
      i. All involved audio recording for purposes of transcribing and analyzing
      ii. Transcribed data coded
      iii. Patterns observed and reported
   e. General Overview of Findings
      i. Teacher Perceptions of using DIBELS
         1. Complex and unable to sort on dichotomy of like/dislike only.
         2. Perceptions influenced by multiple variables
         3. DIBELS value depends on variable addressed
         4. Differences and Consensus across grades and within grades
      ii. Reading Coach Perceptions of using DIBELS
      iii. Specialists' Perceptions of using DIBELS
      iv. Teacher/Expert Comparisons of utilizing case study involving PMRN reports.
         1. Class Status Report
         2. Student Grade Summary Report
         3. Reading Progress Monitoring Student Cumulative Report
II. Researcher's Topics in Interviews/Focus Groups
   a. Guided by knowledge of research literature on DIBELS/Reading First Grant
   b. Guided by ongoing observations of comments given by each consecutive interview/focus group
   c. Overall – Topics addressed through interviews/focus groups
      i. General likes/dislikes about DIBELS
      ii. General value or perceived impact of Reading First Grant at school
      iii. Conducting Assessments at school
      iv. Using data
      v. PMRN reports
      vi. Student reactions to being tested with DIBELS
      vii. Support for teachers in the use of DIBELS at school
      viii. Advice offered by teachers concerning use of DIBELS
   d. Perceptions of DIBELS can be organized into four main topic areas:
      i. Climate/Culture of school
      ii. Support for using DIBELS
      iii. Knowledge of the DIBELS
      iv. Collecting/Using assessment data
III. Teacher Perceptions and Understandings of the DIBELS (What are teachers' perceptions and understandings about DIBELS and the PMRN?)


   a. Climate/Culture of school
      i. Pressures on teacher performance
         1. Teachers feel a general sense of pressure from the general climate of accountability (e.g., NCLB) to increase student performance in education.
         2. Most cited teacher competition as the initial impact of using DIBELS in the first couple of years.
         3. More collaboration evolved over time, which led to most teachers becoming less concerned about being evaluated based on students' DIBELS data.
         4. Emphasis of value implicitly or explicitly stated by administrators/district
            a. Overall, most reported that DIBELS has a greater value placed on it by the school/district than other assessments
            b. Most teachers agreed that administrative support and encouragement of the use of DIBELS is very important to influence teacher use of DIBELS.
            c. Some indicated that administrators are important for deflecting pressures of accountability outside of the school building.
         5. When asked about the source of the pressures being felt, one kindergarten teacher commented that the pressure existed before DIBELS and that much of the resistance towards DIBELS is less about the test than about "removing the childhood from the child." There is more intensity in kindergarten than ever before: "…yes they get real strong on the DIBELS but their behavior hasn't gone to a place that is appropriate for society."
         6. Pressures also occur on earlier grades from later grades to be sure students are ready to enter the next grade level (e.g., the pressure at 3rd grade gets transferred to lower grades all the way to kindergarten).
      ii. Test Overload
         1. Recognition of all tests given and required frequency of test administration
            a. School readiness assessment (KG) – 1x at start of year.
            b. District-wide assessment (KG) – 5x in year.
            c. Running Records (1st) – 5x in year
            d. DIBELS (KG/1st) – 3x in year
            e. Peabody Picture Vocabulary Test (PPVT) (KG/1st) – 1x at end of year.
            f. Stanford Achievement Test 10th Edition (SAT-10) (1st) – 1x at end of year.
         2. All teachers voiced concerns about the quantity of testing that is taking place in the district.


            a. Some voiced amazement that children are even learning at all with all the missed instructional time due to testing.
            b. Only two of the 14 teachers were interested in learning how to give DIBELS themselves for progress monitoring, while all others rejected the idea because of too much testing already in their responsibility.
            c. Amount of testing and types of testing were important factors in teachers' perceptions of DIBELS (see below).
         3. All teachers indicated they needed more personnel to help with collecting assessments in the classroom, including DIBELS (Title 1 teachers are pulled away from instruction during DIBELS cycles to help with collecting DIBELS data).
   b. Supports/Resources available through Reading First grant – All teachers praised the Reading First grant for the various supports and/or resources it provides schools.
      i. The most valuable is the full-time Reading Coach
         1. Helps coordinate DIBELS data collection and data entry
         2. Provides technical assistance to teachers about how to use DIBELS data and other assessment data for making instructional decisions.
         3. Helps set up/create/introduce various activities/instructional programs for classrooms
         4. Helps set up classroom libraries
         5. Provides training on DIBELS/PMRN
         6. Provides PMRN reports
            a. All teachers indicated this as important because they don't have access to color printers and are dependent on the Reading Coach to provide color reports
            b. Some print their own reports but then have to color them in because they value the color aspects of the PMRN reports (see below)
         7. Provides modeling/coaching for teaching reading
         8. Coordinates visits to other schools that are modeling effective practice.
         9. Accessible and knowledgeable to answer any questions related to reading or DIBELS.
         10. Some Reading Coaches established book clubs to help teachers increase their knowledge of reading development and reading instruction
      ii. Instructional materials were the next most talked about benefit of the Reading First grant.
         1. Teachers value the amount of books that they could supply their classrooms with.
         2. Various other instructional aids have been valuable to teachers.


      iii. A less common positive impact reported (only 1 teacher) was that the Reading First grant and use of DIBELS have forced educators to learn more about reading development and reading instruction.
      iv. Training
         1. None of the teachers reported attending any district level trainings on the use of DIBELS. However, one teacher indicated she did get trained on how to administer the DIBELS.
         2. Some teachers who see a positive value in the use of DIBELS felt most teachers don't value the DIBELS because they don't have sufficient information about what it is and how to use the information from it to guide instruction – to them it's just another test.
         3. All indicated receiving some training from their Reading Coach on school grounds.
            a. Some training was focused on utilizing various intervention programs for students.
            b. Those who reported attending the first mandatory training at the start of the grant complained about the format – the trainers read to them word-for-word from the training manual (scripted training). This apparently has been changed in response to teacher concerns over this training format.
            c. Most indicated some general training or technical assistance received on how to use the colors and numbers to identify struggling students and allocate resources based on the DIBELS information (e.g., Title 1 assistants, participation in the Great Leaps reading program, etc.).
         4. Some indicated general interest in participating in future trainings if they provided more information about the development and research on DIBELS and how to link it to instruction.
         5. Only two of the 14 teachers indicated an interest in learning to use DIBELS for monitoring progress and administering it themselves. Most declined this out of concern that there is too much testing already.
         6. Some teachers reported they had not received any formal training on what the DIBELS is, what it assesses, or why it is timed. And yet, others indicated at least receiving such information more informally through working with the Reading Coach or through learning on their own.
         7. One teacher reported learning that DIBELS Oral Reading Fluency is a predictor of reading success in later grades.
   c. Knowledge of the DIBELS
      i. DIBELS as a Benchmark Assessment
         1. All teachers indicated knowledge of the DIBELS being administered three times a year.


         2. All teachers reported knowledge of it being a required test at schools with a Reading First grant.
      ii. DIBELS as a timed test.
         1. Some teachers do not see value in the DIBELS because it is timed.
            a. Don't see correspondence between DIBELS and district assessments or classroom observations.
               i. Ex. Students who do well on the district assessment for letter naming in kindergarten or oral reading on Running Records in first grade, but then do poorly on Letter Naming Fluency or Oral Reading Fluency, respectively, on the DIBELS.
               ii. Some indicated DIBELS only corresponds with district assessments for students that are struggling (e.g., low on both DIBELS and district assessments)
            b. 6 of the 7 kindergarten teachers voiced concerns about the appropriateness of timing kindergarten students.
               i. Some mostly concerned with timing in the first cycle since kindergarten students are not used to being tested.
               ii. Others feel it is too much pressure on very young children.
               iii. These teachers were concerned about students feeling anxious about being timed.
            c. Most teachers reported DIBELS as a kind of "speed" test.
               i. Most concerned that the DIBELS values speed instead of comprehension of what is read.
               ii. Some reported greater value in students reading slowly but with comprehension rather than fast without comprehension.
         2. Those that reported positive perceptions of the DIBELS did not indicate any concerns about the DIBELS being timed and also reported finding correspondence with other assessments and/or teacher observations (in both grade levels).
         3. A couple of teachers reported they prepare students for the timing by "making a game of it."
      iii. DIBELS – What it measures
         1. Some teachers reported having difficulty remembering the specific subtests in the DIBELS and/or what the acronyms stand for.
         2. Some teachers reported seeing a correspondence between what the DIBELS measures and what their classroom/district assessments measure.
         3. Of those that do see a correspondence between the DIBELS and their district assessments, some don't see value in DIBELS because it doesn't provide any new information beyond what their district assessments provide or what they observe in the classroom.
         4. Some kindergarten teachers did not value DIBELS because they perceive very little correspondence – only Letter Naming Fluency and Phoneme Segmentation Fluency (PSF) as being close to what the district assessment measures.
         5. Nonsense Word Fluency (NWF)
            a. 5 of 14 teachers voiced concerns about using the NWF subtest.
               i. One referred to it as the "silly" test because it uses nonwords.
               ii. These teachers saw concerns in students being confused on NWF when they try to apply the strategies they are being taught in the classroom to figure out a word they don't know by thinking of words they do know that look like it or begin with the same sound/letter – the effect is that they lose time.
               iii. These teachers generally reported a feeling that the NWF subtest is inappropriate because the goal of reading is to help students read real words.
            b. 5 of the 14 teachers voiced positive uses regarding the NWF subtest.
               i. When asked about any concerns using the NWF subtest, these teachers generally reported that it tells you if the student can sound out a word they don't know.
               ii. Some of these teachers reported value in the NWF subtest for their students to identify their letter-sound correspondence skills (especially with using short vowel sounds).
               iii. These teachers reported that the raw protocols on which NWF data is collected are more valuable than the NWF score itself because they can observe them for patterns in student errors to inform instruction.
            c. The remaining 4 teachers did not report any perceptions on the use of the NWF subtest.
         6. Phoneme Segmentation Fluency (PSF)
            a. 2 of the first grade teachers reported concerns about the PSF subtest because they feel it confuses students when they are being taught to blend sounds together to make words and then tested to see how they break words into sounds.
            b. Some teachers (across both grade levels) reported a positive value in the PSF subtest because it is similar to what is measured in kindergarten on district assessments or because it helps the teacher determine if the student has phonemic awareness skills.
         7. Initial Sound Fluency (ISF)
            a. Only one teacher reported concern about the ISF subtest.


            b. Specifically, this teacher feels it is invalid because of the vocabulary labels given for certain pictures, which might be labeled something different by a student.
            c. This teacher did not have any concerns with the procedures for administration of the ISF or the pictures themselves – only the chosen vocabulary labels assigned to certain pictures (e.g., "grass" vs. "yard").
         8. Letter Naming Fluency (LNF)
            a. No teacher reported any concerns about the LNF subtest.
            b. However, some kindergarten teachers indicated concerns that the timing of DIBELS leads to a lack of correspondence between letter naming ability on DIBELS vs. district assessments (i.e., meeting expectation on one and not the other).
            c. None of the teachers indicated any value in this subtest except as a general indicator for early kindergarten to see if students are ready to learn how to read.
         9. Oral Reading Fluency
            a. First grade teachers have mixed views on this subtest.
            b. Because of timing, some see it as a kind of speed reading test.
            c. Others see it as an indicator of being able to comprehend text.
            d. Some have trouble seeing correspondence between DIBELS ORF and Running Record performances (i.e., low ORF score and high Running Record score).
               i. Of these specific reports about lack of correspondence with Running Records, none indicated a student profile where ORF is high and Running Record is low.
            e. Some first grade teachers see it as having value in determining readiness for second grade.
            f. One teacher felt the ORF criterion at the end of first grade was too high – concerned that students are being pushed to read fast instead of taking time to comprehend what they read.
         10. Comprehension – Some teachers indicated they saw less value in the DIBELS because it does not measure comprehension.
      iv. DIBELS as a Progress Monitoring Tool
         1. All teachers were asked if they use DIBELS for progress monitoring more frequently than the three times a year for benchmark on students who are struggling.
         2. All teachers indicated they do not and instead rely on district assessments and/or observations during small group instruction with students to guide instruction day to day.


         3. Only one teacher indicated creating graphs to track student performance and developing teacher-made materials to assess skills using a one-minute timing.
         4. Two teachers indicated an interest in learning how to use DIBELS for progress monitoring and felt that it would be more useful than DIBELS benchmark data to guide instructional decisions for struggling students.
         5. One teacher who was trained in administering DIBELS reported not being able to use DIBELS for progress monitoring due to lack of materials – only access to those being used for Benchmark Assessments.
         6. No teacher reported any knowledge of being able to access DIBELS materials from websites for use in the classroom to monitor progress of students.
         7. When asked if interested in learning how to administer DIBELS for use as a progress monitoring tool, most indicated no interest due to feeling overwhelmed with current testing responsibilities.
   d. Collecting and Using Data
      i. Collecting Data
         1. Two general procedures reported for collecting Benchmark Assessment data on DIBELS:
            a. The whole class goes to the Media Center and works on computers while each student is assessed by one of 5-7 people collecting DIBELS.
            b. DIBELS testers come to the classroom and take one student at a time to a nearby quiet area to test.
            c. Both reported as being quick and efficient processes.
         2. Students working with testers other than the teacher
            a. All teachers indicated a concern, some greater than others, about students being tested by someone they do not recognize or feel comfortable with.
            b. Some teachers indicated this is mostly a concern only during the first cycle.
            c. Some teachers indicated students are less affected when the person giving the assessment, or the Reading Coach, visits students before testing to explain the test and set the climate for participation.
            d. Most students are fine because the people testing them are often Title 1 personnel who work at the school daily.
            e. A few teachers indicated value in someone else collecting DIBELS data – it seems to make information more objective and reliable when it matches what the teacher is seeing in the classroom.
         3. Quality of notes on test protocols


            a. Some teachers indicated high value in qualitative notes added to test protocols to further aid teachers in the interpretation of scores (e.g., not feeling well, shy, etc.)
            b. Teachers reported different testers lead to different use of qualitative notes – some helpful, some not.
         4. DIBELS vs. district assessments – Benchmark Assessments
            a. All teachers reported value in how quickly DIBELS is collected compared to district assessments in the classroom.
            b. All teachers reported value in someone else collecting the data – controlling for strangers as testers – because they are overwhelmed with their own assessment responsibilities.
            c. Although teachers value someone else collecting the DIBELS data, some indicated concerns about not finding as much usefulness in the data because they are not giving the test – they cannot observe qualitative aspects of student performance.
               i. Access to raw data on protocols improves the situation.
               ii. Inclusion of qualitative notes by the tester improves the situation.
         5. DIBELS vs. district assessments – Progress Monitoring
            a. All teachers indicated using a combination of district assessments and teacher observations in the classroom/small group instructional setting to monitor student progress.
            b. Influenced by lack of training, knowledge, personnel, and/or time to administer DIBELS for progress monitoring.
      ii. Using data
         1. Preference for assessment(s) to inform instructional decision making.
            a. Of the teachers who reported positive perceptions and value on the use of DIBELS, all indicated a preference for using multiple sources of data to inform or guide instruction. At the very least, some of these teachers used DIBELS data to confirm results of other assessments/observations – when different, they seek to understand why.
            b. Of the teachers who reported negative perceptions and value on the use of DIBELS, all indicated a preference for using district assessments and/or observations to inform or guide instruction. They use DIBELS mainly because it is required – but DIBELS doesn't offer anything new beyond what district assessments and observations reveal about students.
            c. Preference for what data is used seems to be influenced by knowledge of DIBELS.
            d. All teachers reported a greater emphasis placed on the use of DIBELS at their schools through meetings with either the Reading Coach, administrators, student services personnel, and/or Title 1 personnel.


         2. Data to form and adjust instructional groups
            a. Teachers who prefer a multiple data source approach reported positive value in DIBELS helping to form instructional groups and assist in adjusting group membership through the year.
               i. Some teachers used DIBELS data to organize their reading center activities.
               ii. Some teachers reported positive value in using the FCRR binder of instructional activities.
            b. Teachers who preferred district assessments to guide instruction did not report any positive value in using DIBELS to form instructional groups
               i. Describe DIBELS as "just another test we have to give."
               ii. Describe DIBELS as unhelpful because it doesn't reveal anything new beyond the district assessment data or observations made in the classroom.
         3. Data to determine interventions or placement in programs
            a. Working in grade level teams
               i. All teachers reported they often meet as a grade level team (at least once a month) to discuss data and observations of students who need more help.
               ii. All teachers reported meeting with the Reading Coach at least after each DIBELS cycle to go through DIBELS data and develop strategies to increase grade level/classroom level scores
            b. Assigning students to reading programs/Small Group Instruction
               i. Most teachers reported that they use assessment information to identify students who need more help in reading.
                  1. Only some reported explicitly using the DIBELS to determine what students are provided.
               ii. Some teachers reported specific reading programs that are provided to students who are identified as needing help in reading based on either the DIBELS or classroom assessments:
                  1. Great Leaps
                  2. SRA Open Court
                  3. HeadSprout
                  4. Teacher developed reading activities
                  5. FCRR reading activities ("FCRR binder")
               iii. All teachers reported they provide interventions and supports to struggling students during small group instruction
            c. Title 1/Classroom Volunteers


               i. On the topic of using data, many of the reports given by teachers indicated the use of Title 1 teaching assistants or classroom volunteers.
               ii. Some reported DIBELS is used to decide which classroom Title 1 support is given.
               iii. Others reported using DIBELS/classroom assessments to identify who should get Title 1 help.
               iv. The Title 1 small group setting was reported as an intervention – only some teachers explicitly indicated use of a program/activity in that setting (e.g., Great Leaps).
            d. Retention/ESE Referral
               i. Most of the kindergarten teachers reported using DIBELS/classroom assessments to make retention decisions and/or to identify students for referral for ESE evaluation/consideration.
               ii. None of the first grade teachers indicated any use of DIBELS at their school for making retention decisions. One first grade teacher indicated retention decisions were made only based on district assessments.
               iii. Some teachers (all grades) indicated that DIBELS is looked at by support staff (e.g., student services personnel) when a student is being considered for ESE placement.
         4. Using PMRN
            a. Accessing PMRN online
               i. Most of the teachers (all grades) reported they access the PMRN to download data reports.
                  1. All of these teachers only use the PMRN for accessing the Parent Letter.
                     a. The Parent Letter is seen as a very valuable tool for communicating with parents on where the student is instructionally and how to support at home.
                     b. Some teachers supplement or highlight sections of the Parent Letter to make it more useful or efficient for parents (e.g., notes/comments).
                  2. A few teachers indicated accessing the PMRN more than 3x a year to identify and compare student performances across the year.
               ii. A few teachers indicated they never access the reports online because they are provided to them by the Reading Coach.
               iii. Teachers don't have access to color printers.
               iv. Most of the teachers who access the PMRN or use the reports it generates reported only knowing how to use the colors and scores to identify student needs.


            b. Reading Coach support
               i. The Reading Coach provides reports and technical assistance to grade level teams and individual teachers on interpreting reports at the grade and classroom level.
               ii. Teachers all indicated, regardless of comfort level with the PMRN/reports or knowledge of DIBELS, that they would not be able to utilize the data as well as they do without the guidance and support of the Reading Coach.
               iii. All teachers expressed great concern about possibly losing their Reading Coach when the grant expires and the impact it would have on their use of the DIBELS data at their school.
            c. Preference for specific reports (other than the Parent Letter).
               i. Most showed or described using the Class Status Report as their preference.
               ii. Some described a "box and whiskers" format.
                  1. All but one of these teachers reported confusion about that style of data display.
                  2. One teacher reported learning just recently from the Reading Coach that the goal is to get the boxes smaller and above the expectation for the cycle.
               iii. No other reports were mentioned or reported by teachers.
IV. Perceptions of DIBELS by Reading Coaches (What attitudes and perceptions exist among persons other than teachers who participate in the collection, input, and analysis of DIBELS data throughout the school year?)
   a. Climate/Culture of School
      i. Work load
         1. Perceptions there is not enough time to use the assessment or data
         2. Teachers have too much on their plate regarding the assessment of students and paperwork – DIBELS is one more thing imposed on them to do.
         3. Teachers are burdened with so much paperwork and responsibilities to complete many other types of assessments which offer less information for instructional planning.
         4. Teachers are resistant to DIBELS many times because it is one more thing that, when all added up, means a great deal of instructional time is being lost.
      ii. Level of administrator involvement, direction, and emphasis on use of DIBELS


         1. If administrators do not value data analysis and data utilization along with holding staff accountable, then DIBELS is less likely to be used or valued.
         2. Leadership is essential for the use of DIBELS.
         3. The issue of why some leaders may not demonstrate explicit support for DIBELS may have less to do with their perceived value of it than with competing demands of other responsibilities placed on them by the district – too much on their plates to have direct and consistent involvement.
         4. Reading Coaches see a need to take initiative by taking data to the principal and communicating often with them. It is the Reading Coach's job to show and demonstrate, consistently, the utility of DIBELS data.
         5. Some principals may not have time to review the data often, but entrust a few to do that and provide input to staff.
      iii. Need culture of valuing the use of data to make changes to curriculum/instruction
         1. Reading Coaches reported that schools need to have a culture of valuing the use of data to make decisions.
         2. Teachers and administrators many times need to be led to using the data
   b. Supports/Resources Available
      i. Role of Reading Coach
         1. Technical support (data analysis; data utilization; accessing reports)
            a. Grade level focus
            b. Classroom level focus
            c. Individual student level focus
            d. Identify growth made at each level for each cycle
         2. Modeling lessons/research-based interventions or programs
         3. Ensuring fidelity and integrity of instructional approaches being used
         4. Support teacher needs/questions about instruction and data analysis
         5. On-site training on use of DIBELS
         6. Demonstrate the utility of DIBELS and help others see the value of it
            a. Meeting with individual teachers
            b. Meeting with grade level teams
            c. Meeting with School Leadership Team/Administrators
            d. Develop plans to increase growth to the next cycle
            e. Share raw data sheets with teachers and encourage them to review them for patterns.
         7. Coordinate DIBELS collection efforts
            a. Benchmark assessments 3x a year.


            b. Progress monitoring
               i. Teachers are just now seeing the value of progress monitoring in the fourth year of implementation.
               ii. Reading Coaches are unsure how well others will embrace it without teacher support and encouragement to engage in progress monitoring
               iii. Teachers may be apprehensive about progress monitoring due to fear of results.
      ii. Reading Coach efforts to increase teachers' value or use of DIBELS
         1. Teachers feel validated when the Reading Coach and other support personnel work with the teacher to better understand why a student did not perform well (e.g., understanding a student's home life, or conditions of the testing session, etc.).
         2. Teachers find value in data analysis when they see a direct link to making plans for improving performances of students that involves others so that the teacher is not alone.
         3. Holding individual conferences with teachers and grade level teams has helped teachers find value in using the DIBELS through Reading Coach support and encouragement.
   c. Teachers' Knowledge of DIBELS
      i. Sharing research on the correlation between DIBELS and FCAT helped increase the value of DIBELS among teachers.
      ii. Many teachers are still seeing the DIBELS as another high stakes test rather than a progress monitoring tool to guide instruction.
      iii. Reading Coach perceptions of student reactions to being tested with DIBELS
         1. Reading Coaches validated teacher concerns that many students (especially KG students) do not test well with adults they are not familiar with.
            a. Changes implemented to deal with this that have been found helpful:
               i. RCs or assessors spend more time in the classrooms prior to testing cycles and engage in student activities.
               ii. Tell students prior to testing what they will be asked to do and what to expect.
         2. Reading Coaches try to explain to KG teachers that the first cycle is not a reflection of their teaching, but rather more of a measurement of students' incoming knowledge and ability to follow directions.
         3. Reading Coaches feel teachers still feel pressure that they will be held accountable for the first DIBELS cycle with KG students despite Reading Coaches' reassurance that they are not.


         4. 1st grade students' reactions to NWF – most get used to it quickly by the mid to end of first grade and beyond – no longer an issue of trying to "make it a real word."
         5. When shown the training sheet for the NWF subtest – called the "SIM and LUT page" – most students usually say "SIM/LUT" immediately before directions are given because they've seen it before.
         6. RCs are concerned about the NWF direction offering students the option to say sounds or the whole word, because many first graders will often sound out and then also say the whole word – this loses them time.
         7. Are they doing this because it is being modeled to them in the instructions?
      iv. Specific Subtests – NWF
         1. RCs validated teacher reactions to the NWF test as a test that should not be used.
         2. Teachers say to RCs that students are trying to make them real words by reading with long vowel sounds because they've been taught to use that strategy when trying to read/decode a word they've not seen before.
         3. RCs see a larger pattern where, in the curriculum, students are being introduced to final-e patterns and long vowel sounds mid-year. Most first graders are showing decreased performances during this mid cycle of DIBELS on NWF because of long vowel vs. short vowel sounds.
         4. RCs try to educate teachers about the NWF subtest by emphasizing the value of decoding as well as sight word reading skills.
         5. Better that a student demonstrates a strategy that is consistently applied but wrong, instead of no strategy at all – just guessing.
         6. RCs have introduced "word work" activities that have helped teachers support student learning of decoding skills and strategies.
         7. Over the years, students become very familiar with the nonsense words and it is no longer an issue for most of them.
         8. Questions about students in the green or blue range of performance if they only give letter sounds instead of whole, blended words – use raw data sheets to identify this. Value in the new addition by FCRR to include a measure of whole words read on the NWF subtest.
         9. The raw data for NWF is invaluable to see how the student attacks a word (e.g., segment all sounds and then blend the whole word; segment the initial sound and then read the whole word; or just read the whole word, etc.).
         10. Adding the new measurement of number of words read is still not as valuable as teachers looking at patterns in the raw data.
   d. Collecting and Using Assessment Data
      i. Data Collection Procedures


         1. Teachers feel more comfortable and trusting of the DIBELS data when the same person collects the data each cycle for their class.
         2. Some described a process of either the whole class coming to the media center or DIBELS assessors pulling students one at a time to nearby quiet rooms for testing.
      ii. Reading Coach perceptions of teachers using DIBELS
         1. 1st year vs. 4th year of implementation – evolution of teacher acceptance and use
         2. In the beginning, teachers accepted DIBELS data that validated their expectations of a student and did not value DIBELS data for students who did not perform well on it.
         3. It took a long time for teachers to see the value of DIBELS (approx. 2-3 years).
         4. Some teachers find the DIBELS more reliable because someone else conducts the assessment – a sense it is more objective.
         5. In the 4th year of implementation, teachers were trained to give some subtests of DIBELS to some students – many found more value in DIBELS through this. Doing it helps them understand it more.
      iii. DIBELS vs. other assessments
         1. People administering the DIBELS need to be taking detailed notes about the children they've tested, or some kind of indicator to know to go to the teacher and give them lots of information beyond the score. Going back to the teacher is one of the best ways to better understand why a student may not have performed well on the test. It's additional information (quantitative + qualitative).
         2. RCs all agree that sharing the raw DIBELS scoring sheets helps make the data more valuable for teachers.
         3. Without looking at the raw data, the color itself may not indicate what to do to help that student.
         4. Using the raw data to find patterns is the role of the teacher – to use that information.
      iv. Teachers' Level of Proficiency in the use of DIBELS data
         1. Teachers are inundated with so much data, but are unsure how to make use of it all at once.
         2. Variability exists in teacher willingness to take the next step in data analysis and data utilization – differences in reactions to change and new concepts.
         3. Teachers need much more support and training on how to use multiple sources of data to make decisions about a student's needs (e.g., if a student is low on DIBELS but is passing the district assessment, the teacher will not provide interventions).
         4. Some teachers are still having difficulty seeing the correlation between fluency and comprehension.


         5. Some teachers put too much emphasis on DIBELS at the expense of ignoring other measures/assessments and at the expense of only focusing on fluency.
      v. Views on the use of PMRN reports by teachers/administrators
         1. Teachers need more help to learn how to use the reports – more training.
         2. KG and 1st grade teachers often seem more proficient at using the reports.
         3. Some teachers are simply more proactive in using them and are more independent at accessing their own data online – teacher self-efficacy in using data.
         4. Relates to the evolution of the Reading First grant: teachers are only now reaching a point where they are ready to begin embracing the use of the reports and using them to make instructional decisions.
         5. Data analysis is a less tangible process for many educators – need continued support for staff to reach a level of independence in this skill.
         6. RCs sometimes have to encourage teachers to use them by sitting with them one-on-one and consistently following up with them until they find the value in doing it themselves. This also relates to the barrier of not enough time. Some teachers may have the skills but don't feel they have the time.
         7. Teachers will embrace using the reports when they see the value in them.
   e. Advice
      i. Concerns about the use of DIBELS
         1. Critical decisions being made on very small snapshots of student performance
            a. It's just a snapshot and yet big decisions are being made on a snapshot (e.g., retention and special education consideration).
         2. In the Future
            a. Concerns about teachers taking full ownership of conducting the DIBELS assessment out of fear that objectivity will be lost through bias or poor standardization.
               i. Teachers in first grade have about 45 minutes to an hour that they could dedicate to testing each day (during reading center times). Even with that time, it would take more than a week to complete the whole class alone. Then add that time, three times a year, for each type of test being done in the district – that is valuable instructional time for small group lessons that is lost.


               ii. Even with an assistant or sub, instructional time will be affected because the teacher still needs to plan for the sub/assistant ahead of time.
         3. Concerns about fidelity of use among non-Reading First schools.
      ii. Allow the PMRN to generate graphs that reflect the correlation between students' oral reading fluency and later performance on the 3rd grade FCAT.
      iii. Reading Coaches are needed!
         1. To make sure new teachers have the support they need to learn how to use the DIBELS effectively (i.e., turnover rates).
         2. To support teachers who are at different levels of understanding and proficiency in the use of DIBELS.
         3. Giving the DIBELS administration over to the teachers completely could threaten the validity, as many teachers still see it as a test that they NEED to pass, or as a test that could be used against them regarding their effectiveness as a teacher.
         4. To coordinate data collection and analysis activities. No one else at the schools is currently trained to do this – a full time job.
      iv. If teachers are to take over using and coordinating the use of DIBELS, including the analysis and utilization of that data, they'll need:
         1. more planning time
         2. something to be taken away – the district typically adds more stuff but rarely takes anything away. Teachers cannot be doing Common Assessments, Kaplan, DIBELS, and Project Focus assessments on their own. And then add more progress monitoring with DIBELS – it's too much. Something has to give.
         3. the district to prioritize the assessments they are demanding teachers use.
         4. additional personnel in the classroom to sub or team teach to allow time for the teachers to give assessments reliably and analyze the data for use.
         5. Asking teachers to be responsible for testing students with DIBELS without taking other assessment requirements away is unrealistic and/or leads to valuable instructional time being lost.
      v. Need more progress monitoring and focus on students' growth in relation to interventions given.
      vi. Teachers and administrators need to pay more attention to a student's growth (i.e., score) in relation to the standard rather than simply the color of the performance.
V. Perceptions of DIBELS by "Specialists" (What attitudes and perceptions exist among persons other than teachers who participate in the collection, input, and analysis of DIBELS data throughout the school year?)


   a. Culture/Climate of School
      i. RF schools vs. Non-RF schools
         1. Observed the number of referrals for psychological evaluations has decreased as a function of using DIBELS data at RF schools.
         2. Observed that DIBELS is used to refer students for psychological assessments even when other assessment data indicates the student is performing within the average range at Non-RF schools.
         3. Day and night differences in the amount of explicit and direct teaching of focused skills in KG and 1st grades (e.g., phonemic awareness) – diagnosticians are seeing big differences in students being assessed at RF and Non-RF schools on nationally norm-referenced achievement tests in reading.
      ii. The climate for teachers right now is very punitive and intense. Many teachers feel pressured to teach to the test out of fear of being judged professionally based on the DIBELS scores.
      iii. Leadership plays a huge role in setting the right climate.
   b. Support/Resources Available
      i. Role and Importance of Reading Coach:
         1. Provide training to the DIBELS assessment team each year, and in some cases before each cycle, to ensure standardization procedures are known and followed for administering the DIBELS
         2. Reading Coaches should have a stronger role in coordinating interventions with teachers for students.
         3. FCRR provided schools with large intervention binders with an assorted collection of intervention ideas organized by skill area. Teachers don't have time to utilize this resource; RCs can help with this by being familiar with the activities and offering recommendations to teachers when they meet with them to go over the DIBELS data after each cycle.
      ii. Teachers valuing the use of DIBELS data influenced by:
         1. Available time
         2. Access to color printer
         3. Motivation to use it
         4. Seeing the usefulness of it over time
         5. Teachers are overwhelmed by the amount of testing that is taking place
         6. Using graphs increases value for teachers, especially for progress monitoring data (line graphs). More "ah-ha" moments.
      iii. Observed the following has led to more acceptance and increases in the value of DIBELS by teachers:
         1. Follow up with teachers after the assessment cycle to share results quickly.
         2. Share qualitative observations of students with the teacher as they relate to the DIBELS performance.


         3. Show the teacher actual DIBELS raw data/protocols
         4. Going into the classrooms before assessment cycles to give students advance understanding of what they will be asked to do and directions for how to participate in the assessment process.
   c. Teachers' Knowledge of DIBELS
      i. Observed that teachers at RF schools have evolved in their acceptance and use of DIBELS over time.
      ii. Teachers concerned about it being a timed test and therefore not as valid a measure
      iii. Concerns about the test being given by different people throughout the year who are unfamiliar with the students
      iv. Reflections on the NWF subtest.
         1. Specialist observations do not necessarily agree or disagree with teacher comments on this subtest – most likely because of the unique background of the specialists (e.g., diagnosticians).
         2. Emphasized the importance of the Reading Coach to help teachers and staff understand the importance and usefulness of the NWF measure.
         3. At Non-RF schools, observed teachers who have backgrounds in reading have refused to use the NWF subtest or at least refuse to use the results – they find no value in reading such words.
      v. Sharing research on DIBELS/Reading Development
         1. Agree that this would help increase the value of DIBELS among teachers
         2. It would provide more context for why the DIBELS is so useful.
         3. Administrators need to know this information just as well
         4. Sharing such information may not help them interpret the results better, but at least help them see the benefit of using it.
   d. Collecting and Using Assessment Data
      i. Observations that people collecting DIBELS data are more accurate when ongoing training and coaching are available – especially when assessment team members will be testing a different grade than before or it's been a long time since they worked with a specific grade level/subtest.
      ii. Observations of assessing special populations (ESOL, Special Ed, Speech, etc.) with DIBELS
         1. Helpful if the same person, who is familiar with them, keeps working with them – to differentiate more accurately an error vs. a foreign accent, for example.
         2. Students who have special needs or circumstances may show a more true or valid performance with someone they know and have worked with before.


         3. Sometimes it may be helpful to at least have a system in place that communicates specific student circumstances or characteristics that would need to be considered before working with such students, regardless of who collects the information.
         4. Also recommended for progress monitoring, since it is often not possible for the same person to do all the progress monitoring on a particular student because of schedules.
         5. Increases the value of results when data and observations during testing are immediately shared with the teacher.
      iii. Observations of procedures for collecting DIBELS at different schools:
         1. Different at different schools.
         2. Procedures adopted by a given school not observed to change over the years, but become more efficient.
         3. Original choice of approach observed to be adopted by the influence of what seemed possible (i.e., staff available, schedules, structural arrangement of school).
      iv. Teachers conducting progress monitoring
         1. 06-07 was the first year some teachers were asked to give DIBELS to some students for progress monitoring.
         2. Observed teachers have problems with this because of the following:
         3. Time available to engage in the activity
         4. Teachers don't feel they get out of it as much as when someone else has done it and brings them the information and observation notes.
         5. Lack of suitable space to conduct the assessment without distraction or interruption – no one else to watch the other students
         6. Teachers at RF schools seem more ready to understand the role of progress monitoring than teachers at Non-RF schools
      v. Teachers administering DIBELS in absence of DIBELS assessment team or Reading Coach:
         1. Unrealistic
         2. Teachers would need extra personnel regardless, to watch the rest of the class, or else the reliability and/or validity of results is threatened.
         3. Similar problem to that faced by KG teachers currently with the PIAP assessment
         4. Even if other kids are provided with independent activities, a teacher conducting the assessment alone with DIBELS could take weeks to complete the whole class with the allotted time available to teachers for such activities.
         5. Some have observed KG and 1st grade teachers who have taken initiative to do their own DIBELS assessments while an intern covered the class – these teachers did not require someone to come back to them after DIBELS cycles (when someone else is collecting the information) to help them interpret it.
         6. Some teachers will be able to do this and be motivated to do this, but it is not likely that all teachers will – it should be voluntary for teachers to do it rather than forcing people to do one more thing.
         7. Teachers in upper grades should be more encouraged to conduct their own DIBELS on at least those students who are not comprehending. Much less of the assessments in upper grades correspond with the content measured in DIBELS.
      vi. Teachers don't find value in DIBELS to drive instructional decision making.
         1. Teachers showing preference for Running Records even though it is not a fluency assessment – teachers need more information about the importance of fluency.
         2. Some teachers need to be more flexible about organizing student reading groups in the beginning weeks of school as the first cycle of DIBELS data is acquired – this should be reviewed throughout the year as students progress in their skills.
      vii. Analyzing DIBELS data:
         1. Teachers need to put more emphasis on student growth rather than focusing only on the color.
         2. Educators need to be watching the student's trend rather than their single performance on one assessment cycle.
         3. Teachers need more guidance and support/training on how to use multiple sources of data.
VI. Case Study Comparative Analysis Between Teachers and DIBELS Experts
   a. Two Experts in the use of DIBELS/PMRN were separately and independently asked to review 1 KG case study and 1 First Grade case study.
   b. Each teacher asked to review the case study matching the grade level they teach only.
   c. Both case studies reflected the end of year cycle
   d. Each case study involved three specific PMRN reports chosen for use based on comments received by Reading Coaches and school staff who work with teachers and DIBELS.
      i. Class Status Report
      ii. Student Grade Summary Report (Box and Whiskers format)
      iii. Reading Progress Monitoring Student Cumulative Report (referred to as Cumulative Report below).
   e. Each teacher/expert asked to give their impressions about the student case presented to them.
      i. What would they do with a student like this?


      ii. What did they feel this student needs?
      iii. What other types of information would they want to know about this student?
      iv. How helpful are these types of reports?
      v. Are there other types of reports that are used or preferred?
   f. Expert reviews analyzed before teacher reviews
      i. Results reported as aggregate data for experts and teachers, respectively, by grade level.
      ii. Results can be categorized into three main themes:
         1. Using the PMRN reports
         2. Additional information needed/wanted
         3. General comments about using DIBELS data
      iii. Expert review of kindergarten case study
         1. Using PMRN Reports
            a. Using the Cumulative Report allows opportunity to see progress of skills
            b. Using the Cumulative Report allows opportunity to see vocabulary level
            c. Using the Cumulative Report allows opportunity to see scores on ISF in cycles 1-3
            d. Using the Cumulative Report allows the 1st grade teacher at the beginning of the year to see what students are coming in with – skills they have.
            e. Using the Class Status Report allows you to sort the data in different ways
            f. Use reports to identify student strengths and weaknesses
               i. Student is fluent in letter naming skills
               ii. Student needs significant help in phonological awareness skills
               iii. Student seems to have some letter-sound correspondence skills
            g. Recommendations for the student in the case study
            h. Using the Student Grade Summary Report allows for comparing the student to the class and to the standard
         2. Additional Information Needed
            a. Need more information to understand why the student is behind peers in ISF and PSF
            b. Need more information
               i. Is the student in ESE, ESOL?
               ii. Does the student have any language impairments/delays?
               iii. Is English the primary language?
               iv. How did the student respond during testing?
               v. How does the student respond to activities/interventions in the classroom?


               vi. What has been done for this student so far?
               vii. More background information
            c. Need more information to understand why the student went from 0 to 24 on NWF
            d. Need more information to know what phonological skills the student does have – DIBELS data only tells us the student has a problem in phonological awareness skills.
      iv. Kindergarten teacher reviews of kindergarten case study
         1. Using PMRN Reports
            a. All kindergarten teachers recognized and reported using the Class Status Report
            b. All used the Class Status Report to identify student strengths or weaknesses
               i. Student is "having trouble with sounds" based on the NWF
               ii. Fluent in letter naming skills
               iii. Student can do nonsense words but "doesn't know sounds," referring to PSF
               iv. "He knows his sight words"
               v. Needs help with beginning sounds and letter-sound relationships
               vi. Main concern by one teacher was PSF
               vii. Recommendations for various activities to teach the student
               viii. Two teachers indicated this student should be referred for ESE consideration.
            c. 3 of the kindergarten teachers reported or demonstrated using the Student Grade Summary Report to either identify student strengths or needs and/or use it to explain to parents their child's needs.
            d. Only 1 teacher found value in and used the Cumulative Report to identify student progression over the year.
         2. Additional Information Needed
            a. Only 4 teachers asked for additional information
               i. Two of the teachers wanted to know if the student is in ESOL or has English as a second language
               ii. All 4 teachers wanted to know if the student has any speech or language impairments
               iii. 1 teacher asked for observations of the student during testing (e.g., distractions?)
               iv. 1 teacher asked if there were any short-term or long-term memory problems or other processing deficits
               v. 1 teacher would want to eliminate questions about cognitive skills before assuming the student isn't trying.
      v. Expert review of first grade case study
         1. Using PMRN Reports


            a. Use the Cumulative Report to see student progression through the year
            b. Use the Cumulative Report to see other outcome measures in vocabulary (PPVT) and comprehension (SAT-10)
            c. Use reports to identify student strengths and weaknesses
               i. Student needs help in phonics
               ii. Student needs help in reading fluency
               iii. Student strength in phonological awareness
               iv. Student seems to be compensating somewhat for comprehension despite low vocabulary and low fluency skills
               v. This case is an individual student problem because the class is performing above benchmark, higher than the student.
            d. Don't just look at the colors, but also the numbers
         2. Additional Information Needed
            a. What can this student do; what sounds do they know in phonics
            b. Teachers need to use knowledge of reading development to figure out where to start helping a student (e.g., a problem with fluency – is it because they can't decode words)
            c. Teachers can do more progress monitoring
            d. Teachers can access additional probes from the Oregon website and give them themselves
            e. The FCRR website has several guides for teachers on the use of ongoing progress monitoring
      vi. First Grade teacher reviews of first grade case study
         1. Using PMRN Reports
            a. All first grade teachers recognized and reported using the Class Status Report often.
            b. All reported using the colors on the Class Status Report to identify student strengths and weaknesses
            c. Only one of the 7 teachers valued using the Student Grade Summary Report and understood how to use it – but reported that the Box and Whiskers format was mostly used at the class level in her class.
            d. All of the other 6 teachers reported no value in using the Student Grade Summary Report – too confusing and too much visually.
            e. A few teachers either recognized and/or found value in using the Cumulative Report – one teacher in particular used that report first to look at the student's progression over the year.
            f. All teachers reported student strengths and needs:
               1. Student needs help in phonics
               2. Student needs help in reading fluency


               3. Student strength in phonological awareness
               4. Most teachers described various activities to teach the student phonics/oral fluency/sight word vocabulary
            g. The "Historical Report" and "Class Recommended Level of Instruction Report" were stated as alternative reports used by teachers.
         2. Additional Information Needed
            a. Only 5 of the first grade teachers inquired about additional information for helping to interpret the PMRN reports
               i. Error patterns on the NWF?
               ii. Is the student an ESE student?
               iii. Any disabilities (e.g., ADHD or Language Impairment)?
               iv. Observations of the student during testing (i.e., distracted)?
               v. Is the student ESOL or is English a second language?
               vi. Error patterns on the ORF?
               vii. What interventions have been tried already?
               viii. What conditions exist at the student's home?
      vii. Expert Opinion on Using Data – general comments about using data/DIBELS at Reading First schools.
         1. DIBELS data tells you there is a problem but doesn't tell you how to fix it
         2. DIBELS is only one snapshot
         3. DIBELS Benchmark data not meant for instructional planning
         4. Having access to raw data/probes is valuable
         5. Progress monitoring data is more useful for making instructional decisions
         6. Frequency of progress monitoring depends on the student; generally more intensive cases need weekly; moderate risk could be bi-weekly or monthly.
         7. Only need to give one probe for progress monitoring
         8. Use data to determine if it is an individual student problem or a class-wide problem
         9. Teachers should use knowledge about the student (e.g., background) to help them interpret DIBELS data
         10. Progress monitoring not happening nearly as much as they would like to see
         11. Experts observe much variability among schools' abilities to use data to guide instruction (not just with DIBELS).
         12. Agree that schools that have been doing it at least three years have a smoother process in place
         13. Experts observe that teachers need continued support for how to use data to make decisions about instruction – something not really taught in teacher training programs.


         14. Reading Coaches are an invaluable source of support for professional development for teachers
         15. Need to use DIBELS with other observations and other assessments
         16. DIBELS data provides some information to differentiate instruction for the class broadly
         17. Wouldn't base any decisions on just this DIBELS cycle
         18. Making hypotheses about student deficits and using data to confirm or disconfirm