USF Libraries
USF Digital Collections

Early learning experiences


Material Information

Title:
Early learning experiences: education with coaching and the effects on the acquisition of literacy skills in preschool children
Physical Description:
Book
Language:
English
Creator:
Cusumano, Dale Lynn
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla.
Publication Date:
2005

Subjects

Subjects / Keywords:
Early literacy
IGDI
DIBELS
ESI-K
HUR
Dissertations, Academic -- Interdisciplinary Education -- Doctoral -- USF   ( lcsh )
Genre:
government publication (state, provincial, territorial, dependent)   ( marcgt )
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Summary:
ABSTRACT: Reading to learn becomes a difficult task for children if they have not become proficient at comprehending written text. It was hypothesized that, for some children, reading difficulties may have been averted had they been reared in homes or participated in early childhood settings where literacy-based activities, interactions, or materials were prevalent. The purpose of this study was to examine the impact that training early childhood educators in research-based early literacy instructional strategies (within the HeadsUp! Reading curriculum, HUR) had on the development of early reading skills in the preschool children they taught. Further examination also identified the impact that providing teachers with a Literacy Coach (LC) to mentor them in their application of the strategies had on early literacy development. The HUR class, LC positions, and additional resources provided to teachers partaking in this early childhood educator training were funded by the Early Learning Opportunities (ELO) grant. To examine the impact that teacher participation in the ELO grant had on children's early literacy development, a hierarchical linear modeling (HLM) analysis was conducted, with children's early literacy development measured at two points in time by the Individual Growth and Development Indicators (IGDI). After examining these indicators within a three-level model, change over time was documented. Specifically, age and race emerged as significant predictors of rates of literacy skill acquisition, with older students and White students demonstrating higher rates of literacy development. Household socioeconomic status (SES) of children also accounted for significant amounts of variance in literacy development, with higher rates of growth found in children from households with higher SES.
Thesis:
Thesis (Ph.D.)--University of South Florida, 2005.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Dale Lynn Cusumano.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 244 pages.
General Note:
Includes vita.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001681079
oclc - 62745912
usfldc doi - E14-SFE0001037
usfldc handle - e14.1037
System ID:
SFS0025358:00001


This item is only available as the following downloads:


Full Text



PAGE 1

Early Learning Experiences: Education With Coaching and the Effects on the Acquisition of Literacy Skills in Preschool Children by Dale Lynn Cusumano A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy Department of Psychological and Social Foundations College of Education University of South Florida Co-Major Professor: George Batsche, Ed.D. Co-Major Professor: Kathy L. Bradley-Klug, Ph.D. Kathleen Armstrong, Ph.D. Jeffrey Kromrey, Ph.D. Susan Homan, Ph.D. Date of Approval: March 10, 2005 Keywords: early literacy, IGDI, DIBELS, ESI-K, HUR Copyright 2005, Dale Lynn Cusumano

PAGE 2

i TABLE OF CONTENTS LIST OF TABLES vi LIST OF FIGURES vii ABSTRACT x CHAPTER ONE: INTRODUCTION 1 Carving a Response to Children’s Needs 7 Early Learning Opportunities Grant 9 Summary 11 Rationale for the Study 12 Research Questions 12 Hypothesized Outcomes 13 Definition of Terms 14 CHAPTER TWO: REVIEW OF THE LITERATURE 20 Four Concepts in Readiness for Learning 20 Idealist/Nativist 21 Empiricist/Environmental 23 Social Constructivist 23 Interactionist 24 Summary of Four Concepts of Readiness for Learning 26 Assessment of Readiness for Learning 27 Historical Overview 27 Contemporary Approaches 28 The Acquisition of Reading Skills 31 Early Literacy Acquisition 31 Five Big Ideas in Reading 35 Monitoring Acquisition of Reading Skills with General Outcome Measures 36 Infant and toddler IGDI 38 Preschool IGDI 41 School-Aged IGDI 42 Summary 44 Risk Factors for Learning in Young Children 44 Categorizing Risk Factors 45 What Does Work? 46 Teacher Qualifications 49

PAGE 3

ii HeadsUp! Reading 51 Curriculum 53 Curriculum Class Activities 53 Curriculum Activities in the Preschool Classroom 54 Assessment 54 Assessment Class Activities 54 Assessment Activities in the Preschool Classroom 54 Talking 54 Talking Class Activities 55 Talking Activities in the Preschool Classroom 55 Playing 55 Playing Class Activities 56 Playing Activities in the Preschool Classroom 56 Reading 56 Reading Class Activities 56 Reading Activities in the Preschool Classroom 56 Writing 57 Writing Class Activities 57 Writing Activities in the Preschool Classroom 57 Learning the Code 57 Learning the Code Class Activities 58 Learning the Code Activities in the Preschool Classroom 58 Program Evaluation of HUR 58 Coaching as a Supplement to Learning 59 The Structure of Coaching 63 Does Coaching Work? 64 Summary of the Coaching Literature 68 Direction of Current Study 69 CHAPTER THREE: METHOD 75 Overview of Research Design 75 ELO Participants 76 Teachers/Participants 76 Student/Participants 80 Measures 86 Literacy Skills 86

PAGE 4

iii Preschool IGDI 86 Picture Naming 86 Alliteration 87 Rhyming 88 Psychometric Properties 89 School Aged IGDI 91 Letter Naming Fluency 91 Initial Sounds Fluency 92 Interpretation of DIBELS scores 92 Psychometric Properties 93 Developmental Level 93 Psychometric Properties 94 Treatment Integrity 95 Treatment Intensity 98 Socioeconomic Indicators 99 Procedure 101 Timeline of ELO Grant Activities 101 Recruitment and Selection of Teacher/Participants 102 Beginning of LAE 2000 Course 103 Hiring and Training of Literacy Coaches 104 Obtaining Institutional Review Board (IRB) Approval 105 Creating Coaching and No Coaching Treatment Groups 105 Creating Control Groups 106 Obtaining Informed Consent From Teacher/Participants and Their Students 107 Training Program Evaluators 108 Initial Wave of Data Collection 110 Administration of the IGDI 111 Picture Naming 111 Alliteration 112 Rhyming 113 Environmental Literacy Checklist 115 Director’s Workshop 116 Second Wave of Data Collection 116 Administration of DIBELS Letter Naming Fluency Subtest 118 Administration of DIBELS Initial Sounds Fluency Subtest 118 Early Screening Inventory 119 Review of Literacy Coach Session Notes 123 Data Entry and Inter-rater Agreement for Data Transfer 123 Procedure for Review, Selection, and Analysis of Archival

PAGE 5

iv Data 124 Visual Overview of the Study 126 Treatment Integrity 127 Review of ELOC Scores 128 Review of Literacy Coaches’ Session Notes 129 Research Design and Analysis 130 CHAPTER FOUR: RESULTS 139 Descriptive Characteristics of Dependent Measures 139 Hierarchical Linear Modeling 145 Summary 162 CHAPTER FIVE: DISCUSSION 164 Purpose of the Study 164 Response to Research Questions 164 Did Teacher Participation In The ELO Grant Affect Early Literacy Development In The Students They Taught? 164 Child Characteristics 165 Classroom Characteristics 166 Summary for Question One 167 Did Providing a Coach to Teachers While They Participated in the Training Have an Effect on Their Students’ Literacy Development? 168 What Effect Did Teacher Participation in the ELO Grant Have on Children’s Overall Cognitive, Language, and Motor Development, as Measured by the Early Screening Inventory – Revised (ESI-K)? 169 Overall Summary of Findings 169 Implications for School Psychologists and Early Childhood Educators 170 Limitations 172 Suggestions for Future Research 175 Conclusion 179 REFERENCES 182 APPENDICES 196 Appendix A: Job Qualifications for Literacy Coach Positions 197 Appendix B: Early Literacy Observation Checklist 198 Appendix C: Early Literacy Observation Checklist Scoring Key 200 Appendix D: Treatment Integrity Data Collection Forms for Literacy Coaches 203 Appendix E: Teacher Application for Participation in ELO Grant 204 Appendix F: LAE 2000 Course Syllabus 206

PAGE 6

v Appendix G: Narrative Introduction for Phone Contact with Potential Control Group Childcare Centers 211 Appendix H: Teacher Consent 212 Appendix I: Parent Assent 215 Appendix J: Cover Letter for Control Group Center Directors 219 Appendix K: Cover Letter for Parent Assent 220 Appendix L: Consent Return Reminder Letters 221 Appendix M: Accuracy Checklists for IGDI Administration 222 Appendix N: Demographic Data Collection Sheet Information for Student/Participants 225 Appendix O: Data Collection Sheet for Measures 226 Appendix P: Types of Early Childhood Centers 227 Appendix Q: Correlation Matrix for Student Variables for Literacy Training/ Coaching Group 229 Appendix R: Correlation Matrix for Student Variables for Literacy Training/No Coaching Group 230 Appendix S: Correlation Matrix for Student Variables for No Literacy Training/No Coaching Group 231 ABOUT THE AUTHOR End Page

PAGE 7

vi List of Tables Table 1 Definition of Terms 14 Table 2 Demographic Information for Teacher/Participants by Condition 78 Table 3 Types of Child Care Centers and Numbers of Participating Teachers 79 Table 4 Demographic Information for Teacher/Participants by Condition 80 Table 5 Return Rate of Consents by Classroom and Condition 81 Table 6 Descriptive Information and Changes in Student/Participant Sample From Time 1 to Time 2 85 Table 7 IGDI Means and Units of Change Per Month for Typically Developing Children and Children Living in Poverty at 53, 59, and 66 Months of Age 90 Table 8 Interpretation of DIBELS scores for Kindergarten Entry According to Florida Center for Reading Research Benchmarks 93 Table 9 ESI-K Scoring and Categorical Definitions 94 Table 10 Timeline of ELO Grant Activities from November 2003 to June 2004 101 Table 11 Sample Sizes for Letter Naming Fluency and Initial Sounds Fluency DIBELS Subtests and ESI-K at Time 2 117 Table 12 Visual Overview of Treatment Groups and Student/Participants Sample Sizes at Time 1 126 Table 13 Number and Percentage of Classrooms Meeting ELOC Total Score Criterion Across Time by Conditions 129

PAGE 8

vii Table 14 Characteristics of Descriptive and Measurement Data 130 Table 15 Skewness and Kurtosis Values for Dependent Measures 140 Table 16 Means and Standard Deviations for Dependent Variables 142 Table 17 Means and Standard Deviations for ELOC Across All Conditions 143 Table 18 Correlation Matrix for Total Sample 144 Table 19 Intraclass Correlation Coefficients 148 Table 20 Level One Model – Within Child Characteristics 150 Table 21 Level Two Model – Child Characteristics 153 Table 22 Level Three Model – Classroom Characteristics – for ELO Participation 156 Table 23 Level Three – Classroom Characteristics – Model for Coaching Component for IGDI Measures 158 Table 24 Level One Model for LNF and ESI-K 160 Table 25 Level Two Model for ELO Participation for Letter Naming Fluency and ESI-K 161 Table 26 Level Three Model for Coaching Component for LNF 162

PAGE 9

viii List of Figures Figure 1 Progression of Early Literacy Skills 33 Figure 2 Relationship Between the IGDI and the Five Big Ideas in Reading 39 Figure 3 Estimated Products of Training (Joyce & Showers, 2002) 62 Figure 4 Visual Overview of Timeline and Scope of Study 127 Figure 5 ELOC Total Scores and Treatment Integrity Across Conditions 129 Figure 6 Relationship Between Outcome and Predictor Variables in a Three Level Hierarchical Linear Model 134 Figure 7 Sample With-In Child Level One Model 135 Figure 8 Distribution of Alliteration Scores at Time 2 140 Figure 9 Growth Over Time in IGDI Picture Naming Scores for a Random Selection of Three Student/Participants 150 Figure 10 Growth Over Time in IGDI Alliteration Scores for a Random Selection of Three Student/Participants 151 Figure 11 Growth Over Time in IGDI Rhyming Scores for a Random Selection of Three Student/Participants 151 Figure 12 Growth Over Time by Race in IGDI Picture Naming 154 Figure 13 Growth Over Time by Race in IGDI Alliteration 154 Figure 14 Growth Over Time by Race in IGDI Rhyming 155 Figure 15 Growth Over Time by ELO Participation in IGDI Picture Naming 157

PAGE 10

ix Figure 16 Growth Over Time by ELO Participation in IGDI Alliteration 157 Figure 17 Growth Over Time by ELO Participation in IGDI Rhyming 158

PAGE 11

x Early Learning Experiences: Education with Coaching and the Effects on the Acquisition of Literacy Skills in Preschool Children Dale Lynn Cusumano ABSTRACT Reading to learn becomes a difficult task for children if they have not become proficient at comprehending written text. It was hypothesized that, for some children, reading difficulties may have been averted had they been reared in homes or participated in early childhood settings where literacy-based activities, interactions, or materials were prevalent. The purpose of this study was to examine the impact that training early childhood educators in research-based early literacy instructional strategies (within the HeadsUp! Reading curriculum – HUR) had on the development of early reading skills in the preschool children they taught. Further examination also identified the impact that providing teachers with a Literacy Coach (LC) to mentor them in their application of the strategies had on early literacy development. The HUR class, LC positions, and additional resources provided to teachers partaking in this early childhood educator training were funded by the Early Learning Opportunities (ELO) grant. To examine the impact that teacher participation in the ELO grant had on children’s early literacy development, a hierarchical linear modeling (HLM) analysis was conducted with children’s early literacy development measured at two points in

PAGE 12

xi time by the Individual Growth and Development Indicators (IGDI). After examining these indicators within a three-level model, change over time was documented. Specifically, age and race emerged as significant predictors of rates of literacy skill acquisition, with older students and White students demonstrating higher rates of literacy development. Household socioeconomic status (SES) of children also accounted for significant amounts of variance in literacy development, with higher rates of growth found in children from households with higher SES. Most relevant to this study, ELO participation emerged as a significant predictor of rates of growth in children’s phonological awareness, with students of teachers who had participated in the ELO grant demonstrating higher rates of growth than students of teachers not participating in the ELO grant. However, no evidence was found that providing a Literacy Coach to early childhood educators was related to higher rates of literacy development. The findings of this study are offered within this document. In addition, limitations are highlighted and used as recommendations for future research exploring literacy skill acquisition in early childhood.
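To make the analytic approach concrete, the general form of a three-level growth model of this kind is sketched below. This is an illustrative outline only, using Time, Age, and ELO participation as example predictors drawn from the level structure named in Tables 20 through 22 above; it is not the exact model specification estimated in the study.

Level 1 (measurement occasions within children): $Y_{tij} = \pi_{0ij} + \pi_{1ij}(\mathrm{Time})_{tij} + e_{tij}$

Level 2 (children within classrooms): $\pi_{1ij} = \beta_{10j} + \beta_{11j}(\mathrm{Age})_{ij} + r_{1ij}$

Level 3 (classrooms): $\beta_{10j} = \gamma_{100} + \gamma_{101}(\mathrm{ELO})_{j} + u_{10j}$

In this notation, $\pi_{1ij}$ is the rate of literacy growth for child $i$ in classroom $j$, and $\gamma_{101}$ represents the difference in average growth rates associated with teacher participation in the ELO grant.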

PAGE 13

1 CHAPTER ONE INTRODUCTION Reading and writing are critical skills that predict a child’s future success not only in school but also later in life (National Center for Education Statistics, 2000; Neuman, Copple, & Bredekamp, 2000; Snow, Burns, & Griffin, 1998). In fact, children who enter kindergarten exhibiting early literacy skills (e.g., retelling nursery rhymes, recognizing letters in the alphabet, or displaying an awareness that words flow from print) demonstrate higher levels of reading achievement one, two, and three years later than children who entered kindergarten lacking these skills (Bond & Dykstra, 1967; National Reading Panel [NRP], 2000; Neuman, Copple, & Bredekamp, 2000). Additional research documents that children who lack these early skills upon their entrance to kindergarten are three to four times more likely not to graduate from high school (U.S. Department of Education, 1999). Despite findings that highlight the need to address early literacy development during the preschool years (e.g., National Center for Education Statistics, 2000; Neuman, Copple, & Bredekamp, 2000; Snow, Burns, & Griffin, 1998), many children are not exposed to literacy-related activities such as being read to, being encouraged in writing activities, or being exposed to a wide range of reading materials in their home or child care center settings. Often this void in

PAGE 14

2 literacy-rich stimulation stems from the belief that children will not learn until they are ready (Kagan, 1990; Schickendanz, 1999; Snow, Burns, & Griffin, 1998; Teale, 1978). This view is known as the Idealist/Nativist or readiness view of development and is evident in research published throughout the later 1980’s and 1990’s (e.g., Carlton & Winsler, 1999; Deitz & Wilson, 1985; Kagan, 1990; Shepard & Smith, 1989). In contrast, however, current research supports that children can and do acquire literacy-related skills well before formal instruction begins in kindergarten (e.g., Adams, 1990; Neuman, Copple, & Bredekamp, 2000; Snow, 1991; Snow, Burns, & Griffin, 1998). Notably, children’s experiences listening to stories (Wells, 1985; Bus, van IJzendoorn, & Pellegrini, 1995), being asked to think about the stories they hear (Karweit & Wasik, 1996), being exposed to unfamiliar vocabulary (Snow, 1991), and being guided in letter identification and writing tasks (Clay, 1991; Stanovick & West, 1989; Teale, 1984) have been identified as critical activities that support the acquisition of early literacy skills. Needless to say, early literacy development has become a vital component in national educational agendas. For example, in 1990 President George H. W. Bush established six national education goals. The basic premise of these goals was that all children can learn – a process that is referred to as a lifelong endeavor. Most relevant to early childhood educators was a readiness goal stating that all children in America would start school ready to learn (Swanson, 1991). Within this goal was the call for developmentally appropriate

PAGE 15

3 programming, use of a comprehensive readiness assessment, and collaboration between preschool programs and social services. Current legislation affecting education is found in the recent reform of the Elementary and Secondary Education Act, now referred to as No Child Left Behind (NCLB). Rather than focusing on readiness for learning, NCLB mandated that all children should read at or above their respective grade levels. Notably, the underlying goal seeks to close the achievement gap between minority or disadvantaged children and their peers. Thus, “all children” refers to all children regardless of their race, previous learning experiences, disabilities, or socioeconomic status. Driven by this push for literacy, the focus of reading instruction has begun to filter downward to younger and younger children (National Reading Panel, 2000). Perhaps then, it is not surprising that research has been tracking the role that early intervention programming plays in preparing children for instruction. For instance, prior research (Campbell & Ramey, 1994; Lee, Brooks-Gunn, Schnur, & Liaw, 1990; Ramey, 1999) has looked at kindergarten readiness as its yardstick of accountability for preschool effectiveness and found positive results for those children who participated in structured early childhood programming such as Head Start. Unfortunately, others have found the effects of prekindergarten participation on academic achievement fade across time (Berlin, Brooks-Gunn, McCarton, & McCormick, 1998; McCarton et al., 1997).

PAGE 16

4 What also emerges from research on preschool programming is that the quality of what is provided in these programs is most important (National Reading Panel, 2000). Thus, current efforts have begun examining instructional practices used in educating young children (Snow, Burns, & Griffin, 1998). An outgrowth of this drive for evidence-based practice is the birth of educational curricula claiming a solid foundation of research-validated practices. One such example is the HeadsUp! Reading (HUR) early literacy professional development curriculum (National Head Start Association – NHSA). HUR focuses on training early childhood educators in research-based strategies for early literacy instruction. Notably, HUR’s attention is directed toward teachers; nevertheless, its overall goal is to accelerate early literacy development in the students of the teachers targeted by its curriculum. In general, two foundational areas (Curriculum and Assessment) and five domains or “gateways to literacy” (Talking, Playing, Writing, Reading, and Learning the Code) provide the framework of the HUR curriculum. Offered across satellite distance learning, participants watch presentations by nationally recognized experts in the field of early literacy (e.g., Dorothy Strickland, Patton Rabors, Bill Teale, and Hallie Yopp) as well as real life classroom applications of the material. The course is facilitated by a college professor, who is a certified HUR instructor. Currently, HUR is offered at over 900 sites in more than eight states including Florida. Program evaluation of HUR is underway. Initial findings, however, document positive growth in teacher knowledge and

PAGE 17

5 application of content (Neuman & Seung-hee, 2001). Surprisingly, the component evaluating the effects that teacher participation in the HUR curriculum had on literacy growth in the students they taught is missing. Limitations to research that has evaluated environmental interventions (e.g., Head Start, the High/Scope Perry-Preschool) or early childhood professional development curriculum (HeadsUp! Reading) are significant. For example, evaluations of environmental interventions, such as those conducted by May and Welch (1984), Reynolds (1992), and Shepard (1989), have based their findings on broad measures of reading achievement (e.g., Iowa Test of Basic Skills, Stanford Achievement Test). Notably, these measures lack the specificity and sensitivity to monitor short-term change in children’s learning (Snow, Burns, & Griffin, 1998). Consequently, it is possible that significant change or growth in children’s skills may have been overlooked. Along a similar trend, evaluations of early childhood teacher training curricula have neglected to analyze the effect implementation had on the literacy skills of children (Neuman & Seung-hee, 2001). To compensate for these weaknesses, future research should utilize measures that are sensitive to smaller, short-term growth in early literacy skill acquisition. Specifically, early literacy skill development could be assessed with General Outcome Measures (GOM). In short, GOMs are brief measures or indicators of development. GOMs are valuable because they are sensitive to short-term changes. As an indicator, however, they do not provide a detailed

PAGE 18

6 image of what is wrong should a delay be noted. That is, if results show deviance from normal ranges of growth, higher-level diagnostic tools would be administered. A thermometer is an everyday example of a GOM. Specifically, a thermometer is used to monitor the presence of infection in a person’s body. A thermometer also is sensitive to short-term changes in body temperature. Notably, however, a thermometer cannot be used to diagnose a problem. Instead, specific follow-up tests would be administered should deviations from normal ranges of body temperature be discovered. A GOM used in the educational setting is curriculum-based measurement (CBM) (Shinn, 1989). In one example of CBM, a child’s reading fluency is assessed by asking him or her to read from a passage for one minute. After this task, the number of words the child read correctly would be compared to that of other children of the same grade level. Most importantly, his or her progress could be monitored by comparing current assessment results to earlier data. Again, this measure is quick and can be repeated as often as necessary; thus serving as an indicator of growth and progress over time (Good, Gruba, & Kaminski, 2002). Noting the strength of GOMs for monitoring development, the Early Childhood Research Institute on Measuring Growth and Development (ECRI MGD) responded to a call in the mid-1990s that asked for instruments to be developed to monitor child growth and development (McConnell et al., 1998). In short, the ECRI MGD represents a collaborative effort of research teams at the University of Kansas, the University of Minnesota, and the University of Oregon.
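To illustrate this measurement logic, a minimal sketch follows in Python. All names, scores, and the benchmark value are hypothetical and invented for the example; they are not data or benchmarks from this study.

def words_correct_per_minute(words_read, errors, seconds=60):
    """Score a one-minute timed reading as words read correctly per minute (WCPM)."""
    return (words_read - errors) * 60 / seconds

# Repeated brief probes for one hypothetical child across several weeks.
probes = [
    {"week": 1, "words_read": 38, "errors": 6},
    {"week": 4, "words_read": 45, "errors": 5},
    {"week": 8, "words_read": 57, "errors": 4},
]

PEER_BENCHMARK = 52  # hypothetical median WCPM for same-grade peers

for probe in probes:
    wcpm = words_correct_per_minute(probe["words_read"], probe["errors"])
    status = "at/above" if wcpm >= PEER_BENCHMARK else "below"
    print(f"Week {probe['week']}: {wcpm:.0f} WCPM ({status} the peer benchmark)")

# Because each probe is brief and repeatable, the change across weeks
# (here roughly (53 - 32) / 7, or about 3 WCPM per week) serves as the
# indicator of growth; like a thermometer, it signals a problem but does
# not diagnose its cause.

The same logic underlies the IGDI and DIBELS indicators discussed in this document: short, repeatable tasks scored as counts per minute and interpreted against peers and against the child’s own earlier performance.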

PAGE 19

7 Products of these efforts can be found in the Individual Growth and Development Indicators (IGDI). IGDI measures represent a system of data collection that assesses children’s growth in the areas of communication, early literacy, adaptive behavior, and social-emotional functioning (McConnell et al., 1998). The IGDI covers the developmental range from birth to eight years old. Three forms of the IGDI have been developed by the ECRI MGD teams and are clustered into three groups: infants and toddlers (birth to three years of age), preschool (three to five years of age), and school-aged (five to eight years of age). The most notable measure from the infant and toddler spectrum is the Early Communication Indicator (ECI), which records early expressive language skills (i.e., gestures, verbalizations, and utterances) during an informal play session (Luze et al., 2001). As a stepping stone beyond the ECI, the preschool IGDI and school-aged IGDI (also known as the Dynamic Indicators of Basic Early Literacy Skills DIBELS) focus on early literacy, reading fluency, and expressive language skills (Luze, et al., 2001; McConnell & McEvoy, 1998; Good, Gruba, & Kaminiski, 2002). With the emergence of the IGDI measures, it is now possible to document short-term changes in young children’s literacy development. Carving a Response to Children’s Needs Based on the outpouring of research (e.g., Adams, 1990; NRP, 2000; Neuman, Copple, & Bredekamp, 2000; Snow, Burns, & Griffin, 1998) and subsequent national initiatives with reading and early literacy as central themes (National Education Goals, No Child Left Behind), the question of “What can we

PAGE 20

8 do better?” arises. Answers are embedded in goals advocated by Torgesen (2000) and the National Reading Panel (2000). Notably, they assert that learning to read begins well before formal reading instruction. As such, literacy instruction should begin well before kindergarten – a time when Torgesen (2002) claims that instruction should be directed at developing phonemic awareness, awareness of print (e.g., print reflects words that represent language), and basic phonics skills. In fact, Torgesen (2002) and Adams (1990) state that these skills are so crucial to later academic success that they should be emphasized before formal education typically begins in kindergarten. Thus, preschools now are under the spotlight with avenues for infusing literacy instruction being explored (Burns, Griffin, & Snow, 1999; Neuman, Copple, & Bredekamp, 2000; Schickendanz, 1999). With guidelines endorsed by the National Reading Panel (2001), Snow, Burns, & Griffin (1998), and Torgesen (2002) in mind, efforts at the local level have been spurred into action. In Pinellas County, Florida, statistics showing that one-third of students enrolling in kindergarten were identified as not ready for kindergarten instruction highlighted this need even further (Pinellas County Schools Kindergarten Readiness Standards Report for 2002). More specifically, 56% of these Pinellas County students could not demonstrate where reading began or that reading progressed from left to right, top to bottom. An additional 40% could not recite two lines from a nursery rhyme. In response to these ominous findings, the Pinellas County Schools

PAGE 21

9 Readiness Coalition adopted as its primary mission increasing early childhood literacy and readiness skills. This goal was highlighted in their implementation of the Early Learning Opportunities Act (ELO) grant, which was supported by the collaborative efforts of agencies such as Coordinated Child Care of Pinellas County, Directions for Mental Health, Pinellas County Childcare Licensing Board, and Florida Mental Health Institute. Early Learning Opportunities Grant Specifically, the Pinellas County ELO grant funded a training program for early childhood educators that sought to build teachers’ repertoire of evidence-based practices to use in developing literacy skills in preschool students. Central to the ELO grant was the use of the research-based literacy curriculum, HeadsUp! Reading (HUR), which was offered to teacher/participants as a college-level course (Language Development in Young Children – LAE 2000) at St. Petersburg College in Pinellas County, Florida. Numerous resources also were provided to the ELO teachers such as books for classroom libraries, props for dramatic play and storytelling, and magnetic alphabet letters with display boards. Another feature of the ELO grant was the provision of Literacy Coaches (LCs) who visited teachers in their childcare settings. In short, the role of the LC was to facilitate and guide the application of research-based strategies into the participating teachers’ classrooms. Addition of this coaching component was based on research documenting that when coaching was provided to teachers, they not only practiced these skills more frequently but they implemented the

PAGE 22

10 strategies more effectively (Showers, 1982a; Showers, 1982b). During coaching sessions, LCs engaged in a cycle of observing the teacher, providing feedback, modeling instructional strategies, and setting goals for the teacher for subsequent coaching sessions. The framework for this coaching model was adopted from the Early Literacy and Learning Model (ELLM) that was designed to assist preschool and early elementary school teachers in their integration of research-based literacy instruction into their classrooms (Fountain, 2002). Approximately half of the teachers participating in the HUR class received this coaching component and were referred to as the Literacy Training and Coaching (LT/C) group. An additional and noteworthy feature of the ELO grant was the program evaluation component. Noting the positive features of the preschool and school-aged IGDI assessments, the program evaluation component was built upon the use of these measures as the metric for monitoring the literacy development of the children served by the teachers participating in the grant. Thus, the effects of implementation of the research-based strategies for literacy instruction could be examined. To examine the effects of ELO participation and coaching more clearly, a control group of teachers and students also was created. Specifically, this control group did not participate in the HUR training, did not receive any additional resources, nor were they recipients of coaching: Their only role was to partake in all program evaluation activities. This sample of teachers and

PAGE 23

11 students are referred to as the No Literacy Training and No Coaching (NL/NC) group. Implementation of the year-long grant began in January of 2004. Two cohorts of teachers of children from the ages of 8 weeks to 5 years were selected for participation. The first cohort of approximately 50 teachers enrolled in the 15-week spring 2004 semester, while the second cohort of another 50 teachers participated in a 10-week summer semester. Relevant to this study are the spring cohort of teachers and the students in their classrooms. The question of whether the ELO grant enhanced the early literacy skills of the students taught by this spring cohort of teachers, however, remains unanswered. Summary Thus far, the national goals and agendas driving newfound emphasis on early literacy and reading have been presented. Numerous prevention- or intervention-based efforts (e.g., Head Start pre-kindergarten programming, HUR curriculum) have emerged to counter the obstacles that often deter all children from reaching these goals. The efficacy of these endeavors often is elusive for reasons that include the absence of measures that are sensitive to short-term changes in children’s growth and development. Recently, however, systems for monitoring these precursor skills (e.g., the IGDI), such as expressive language, phonemic awareness, and early phonics, have gained credence as useful tools in early childhood education. Thus, the path for future research is paved.

PAGE 24

12 Rationale for the Study This study sought to examine the effects that teacher participation in the ELO grant in Pinellas County, Florida, had on the literacy development of the students they taught. A second component addressed the impact that the provision of Literacy Coaches had on their students’ literacy development. Specifically, LCs assisted teachers in applying research-based strategies in their classrooms through observation, feedback, modeling, and goal setting. Not all early childhood educators received support from an LC, however. Given this framework, it was possible to explore whether literacy growth in students differed based on whether their teacher received coaching as a supplement to the HUR curriculum content of the LAE 2000 course. A control group of teachers (and their respective students) who did not receive the training, coaching, or materials also was created from a random selection of preschool teachers who were candidates for the summer semester of the training. Research Questions 1. Did teacher participation in the ELO grant affect early literacy development in the students they taught? That is, did children in the treatment groups (LT/C and LT/NC) show significant differences in their rate of literacy skill attainment when compared to a control group of children whose teachers did not receive the professional development opportunity and additional resources offered in the ELO grant (NL/NC)?

PAGE 25

13 2. Did providing a coach to teachers while they participated in the training have an effect on their students’ literacy development? To examine this question, comparisons between children whose teachers received the literacy training and coaching (LT/C) and children whose teachers received only the literacy training (LT/NC) were made. 3. What effect did teacher participation in the ELO grant have on children’s overall cognitive, language, and motor development, as measured by the Early Screening Inventory – Revised (ESI-K) (Meisels, 1999)? Hypothesized Outcomes Based on abundant research that espouses the impact of early exposure to literacy-based activities and an environment that is rich in print (Burns, Griffin, & Snow, 1999; National Reading Panel, 2000; Neuman, Copple, & Bredekamp, 2000; Snow, Burns, & Griffin, 1998; Snow & Ninio, 1986), the following outcomes were expected: 1. Students of teachers who participated in the ELO grant (LT/C and LT/NC) would demonstrate higher rates of literacy skill development than students of teachers who did not participate in the ELO grant (NL/NC); 2. Students of teachers who received coaching along with their participation in the training (LT/C) would display higher levels of literacy development than would students of teachers who did not receive coaching (LT/NC);

PAGE 26

14 3. Students of teachers who participated in the ELO grant and received coaching (LT/C) would show the highest rates of literacy skill development followed by the students of teachers who received only the literacy training (LT/NC); and 4. Students of teachers who received the literacy training (regardless of whether a coach was provided) (LT/C and LT/NC) would show higher rates of literacy development than students of teachers who did not participate in the ELO grant activities (NL/NC). Definition of Terms To conclude this chapter, a reference tool is offered that contains a list of concepts, acronyms, and measures utilized in this study (see Table 1). It is hoped that this resource will familiarize readers with terminology referenced throughout the remaining chapters. Table 1 Definition of Terms Concept Description Children/participants Children who participated in program evaluation component of the ELO grant were students of teachers who either participated in the LAE 2000 course and accompanying ELO activities or served as the control group of teachers who did not participate in any ELO activities during the spring 2004 semester. Table continued on next page.

PAGE 27

15 Table 1 (Continued) Definition of Terms Concept Description Coaching Coaching refers to assistance provided to a subset of teachers in the ELO grant. During coaching, the application of research-based strategies for literacy instruction was modeled and observed by Literacy Coaches. This external avenue of support was included to enhance teachers’ skills. The group of teachers and their students who received coaching and participated in the literacy training are referred to as the Literacy Training and Coaching (LT/C) group. Control Group A subset of teachers and students who did not participate in any ELO activities except for the program evaluation component during the spring 2004 semester. This sample of participants is referred to as the No Literacy Training and No Coaching group (NL/NC). Table continued on next page.

PAGE 28

16 Table 1 (Continued) Definition of Terms Concept Description No Coaching Reflects the absence of external assistance and modeling by a Literacy Coach outside of the LAE 2000 course. Teachers and their students who participated in the literacy training but did not receive coaching are referred to as the Literacy Training and No Coaching (LT/NC) whereas the teachers and their students who did not participate in the literacy training nor did they receive a coach are referred to as the No Literacy Training and No Coaching (NL/NC) group. Early Learning Opportunity Act (ELO) grant The ELO grant serves as a vehicle to provide research-based literacy instruction (e.g., HeadsUp! Reading) to early childhood educators through a college level course (LAE 2000). External supports through the provision of Literacy Coaches and additional resource materials are central features to this grant. An additional feature also addresses the needs of children with challenging behaviors; although, this facet of the grant is not addressed in this study. Table continued on next page.

PAGE 29

17 Table 1 (Continued) Definition of Terms Concept Description Early Literacy Skills Early literacy skills include those pre-reading skills such as the ability to indicate where to start reading in a book, reading from left to right, top to bottom, and phonological and phonemic awareness. These skills can be thought of as those that provide a foundation for learning to read. Early Screening Inventory (ESI-K) The ESI-K is a brief, individually administered screening instrument that gathers data about a child’s development in the areas of language, motor functioning, cognitive development, and adaptive behavior. General Outcome Measures (GOM) Brief measures or indicators of development. A positive aspect of GOMs is that they can be repeated as needed. A doctor’s height and weight charts reflect one type of GOM. IGDI measures used in this study serve as a GOM of literacy development. Table continued on next page.

PAGE 30

18 Table 1 (Continued) Definition of Terms Concept Description HeadsUp! Reading curriculum (HUR) HeadsUp! Reading is a fifteen-week course for early childhood educators that focuses on teaching research-based strategies for literacy instruction. It is offered through a satellite distance learning environment that is facilitated by college faculty who are certified HUR instructors. St. Petersburg College (Florida) is offering the HUR as a core component of the LAE 2000 course, which also is a central feature of the ELO grant. Individual Growth and Development Indicators (IGDI) A series of GOMs that assess children’s growth and development across such domains as early literacy, adaptive behavior, social-emotional functioning, and expressive language. The IGDI measures, which were developed by the ECRI MGD research teams, gather these data from children from birth to eight years old. LAE 2000 This course offered by St. Petersburg College also is known as Language Development in Young Children, and serves as an introductory study of speech and language development from birth to eight years of age. A core component of this course is the use of the HeadsUp! Reading curriculum for professional development for early childhood educators. Table continued on next page.

PAGE 31

19 Table 1 (Continued) Definition of Terms Concept Description Literacy Coach (LC) Literacy Coaches serve as an external support for teachers who are partaking in the LAE 2000 course in the ELO grant. Their role is to assist in the application of research-based strategies for literacy instruction to the early childhood classrooms. LCs provide this assistance through observation, modeling, and problem-solving with teachers. Student/participants Students who participated in the ELO program evaluation and were taught by teachers who either participated in the LAE 2000 course and accompanying ELO activities or served as the control group of teachers who did not enroll in the LAE 2000 course or receive any additional resources during the spring 2004 semester. Teacher/Participant Teachers who participated in the program evaluation component of the ELO study.

PAGE 32

20 CHAPTER TWO REVIEW OF THE LITERATURE To understand the differences in children’s learning and to enhance the effectiveness with which to meet their needs, we first must establish the underpinnings of when and how they acquire information. Consequently, this literature review will begin with a summary of four perspectives of childhood skill acquisition, or readiness for learning. Following this overview, a brief description of the assessment of readiness will be presented. The acquisition of literacy skills will support this prior section as relevant research on the acquisition of early literacy and reading skills is reviewed. Next, the known risk factors to successful reading will be addressed. Interventions to counter less than optimal external factors will follow this section. Finally, the major focus of a national education agenda – reading – and its possible ties to early intervention will be highlighted in the concluding section of this review. Four Concepts of Readiness for Learning Research has documented that children present with varying characteristics that enhance or hinder learning (Huffman, Mehlinger, & Kervian, 2000). Prior to discussing the interventions to reduce the barriers to learning, it is important to discuss how we measure the elusive concept of “readiness for learning” so that the efficacy of implemented interventions can be determined. A

PAGE 33

21 review of research highlights that readiness has been the center of much research and debate. Four distinct paradigms of thought have emerged: Idealist/Nativist, Empiricist/Environmental, Social Constructivist, and Interactionist (Meisels, 1999). At various times, each viewpoint has held varying levels of credence among audiences such as developmental researchers, educational professionals, psychologists, and parents. A brief description of these perspectives is presented next. Idealist/Nativist Adopting a stepwise approach to skill acquisition often associated with the Piagetian theory of development, the Idealist/Nativist perspective maintains that children must reach a certain biologically based developmental stage before they can progress to the next level and, as a result, benefit from certain types of instruction (Kagan, 1990). Consequently, this perspective espouses that external influences have little effect on a child’s learning until the child is internally “ready” (Carlton & Winsler, 1999; Kagan, 1990). Popularity for this perspective stems from its face validity: it “feels” right. Evidence of hypotheses based on this framework is noted by comments such as, “Johnny just needs to grow up a little before he can learn the curriculum presented in the classroom.” Delayed entry into kindergarten or “redshirting” is an outgrowth of this line of thought. Transitional K-1 (kindergarten to first grade) classes, where kindergarten children are held in a special class for an extra year before progressing into the first-grade curriculum, also follow this philosophical approach. On a similar path is formal

PAGE 34

22 retention in a regular kindergarten class. Regardless of the terminology, these decisions are often based on the perception of a child’s social maturity rather than individualistic learning patterns (May et al., 1994; Meisels, 1999). The efficacy of retaining children based on perceived maturity has been explored by many researchers (e.g., Cameron & Wilson, 1990; Kundert, May, & Brent, 1995; Shepard & Smith, 1989). In short, no significant differences have been noted in standardized achievement test scores between children who had delayed entry into first grade and those who have been promoted along with other children their same age (Cameron & Wilson, 1990; Deitz & Wilson, 1985; Kundert, May, & Brent 1995). Regardless of the limitations in academic growth, higher levels of maturity typically do emerge across the waiting year. However, this “growth” occurs at the expense of missed academic opportunities (Leinhardt, 1980). Most succinctly, Shepard and Smith in their 1987 and 1989 studies offered further documentation that children retained in kindergarten displayed no substantial gains over children who were referred for retention but instead were promoted with their same age cohort. Particularly, achievement scores were not the only domains affected. That is, lower self-worth, avoidance of academically related tasks, lower self-concept, and negative attitudes directed to school also complicate this finding. Outcomes such as these have prompted researchers such as White and Howard (1973) to state that the “…most dramatic case of officially sanctioned failure in elementary school is the failure to be promoted” (p. 182). With this research in mind, Dennebaum and Kulberg (1994) recommend

PAGE 35

23 that children should progress through their school careers alongside their similarly aged peers regardless of their academic standing. Furthermore, Dennebaum and Kulberg (1994) call for a wider degree of flexibility in services and school resources as a means of addressing the diversity of children’s needs. Empiricist/Environmental From another perspective, the empiricist/environmental view looks at a child’s acquisition of skills such as identification of letters, colors, shapes, comparison and categorization skills, and respectful behavior as indicators of readiness for learning. Within this framework, skills are attained in an orderly, hierarchical fashion with higher-level skills being mastered only after more simplistic skills are acquired (Gagne, 1970). Accordingly, the question of whether a child is “ready” for school (instruction) is central to this perspective. Counter to other assertions and empirical findings such as those made by Dennebaum and Kulberg (1994) and Sheppard and Smith (1987 & 1989), Meisels (1999) states that children identified as developmentally immature should be offered an additional year either before or after kindergarten before transitioning into first grade. Social Constructivist Rather than directing readiness investigations inward to the children themselves, the social constructivist approach looks outward at the community and social expectations. As a consequence, the ruler by which to judge a child’s readiness varies from community to community or from school to school. For

PAGE 36

24 example, in a community where needs are high for physically based labor employment as opposed to high-level technical skills, the community expectations will differ, with one community promoting physical prowess and the other promoting the need for higher educational achievements, particularly in science and mathematics, so that its needs for technically skilled workers are met. Not surprisingly, little consistency across boundaries exists, and little supporting research leaves this domain as an interesting, yet difficult framework from which to generalize. Interactionist Although the three perspectives presented previously appear divergent in some aspects of their approach, some commonality exists and can be seen in the Interactionist explanation of the learning process. Vygotsky’s (1978) sociocultural developmental theory establishes a strong conceptual starting point for this discussion. Briefly, Vygotsky’s (1978) theory claims that behavior is shaped by interactions between the internal features of an individual and his or her sociocultural environment. Thus, this framework suggests that children advance cognitively when provided with opportunities for interaction in an atmosphere with supportive cultural experiences. Given this assertion, Hogan and Pressley (1997) suggest that early education settings should include a responsive social support system within which a child can gain feedback from experiences in response to his or her developing abilities. Thus, according to the Interactionist perspective, a child-centered and child-driven learning environment

PAGE 37

25 reflects the optimal setting for learning. More specifically, Berk and Winsler (1995) and Graue (1993) define the “curriculum” for early childhood as a reciprocal relationship that advances based on children’s interactions with others and the environment. In comparison to earlier perspectives, the Interactionist viewpoint places learning as preceding or leading development. Additional support for this viewpoint is found in neurodevelopmental literature that confirms changes and modifications in brain structure and function in response to this bi-directional exchange (Bruer, 1997; Fox, Calkins, & Bell, 1994; Schore, 1994). In light of these findings, it is not surprising that waiting for a child to learn is counterproductive: Children do not grow into a learning state. Instead, their learning reflects the interaction between internal characteristics and external factors (e.g., social, environment) (Bruer, 1997; Huffman, Mehlinger, & Kervian, 2000; Schore, 1994). Research espousing the Interactionist perspective on school learning is plentiful. For example, May and Welch in their (1984) study explored differences in third grade achievement between students retained in kindergarten and those promoted despite similar characteristics of concern. Using standardized achievement tests and analyzing across groups, no notable between-group differences emerged. This finding led the researchers to state that the extra year of kindergarten did little to enhance what should have been the target goal of this act – greater academic competency. What must be noted, however, is that

PAGE 38

26 achievement measures such as those employed by May and Welch in their 1984 study (Stanford Achievement Test) often fail to capture incremental differences in skill acquisition. Thus, only large boosts in academic skills would be perceptible. Similar assertions made by Shepard (1989) also were based on data that failed to document the educational effectiveness of pre-first transitional programs for children who were labeled as “developmentally immature.” Once again, though, caution is warranted in the interpretation of these results as measures of limited sensitivity (Iowa Test of Basic Skills) were used to capture student progress over time. Summary of Four Concepts of Readiness for Learning Readiness for learning can be conceptualized as fitting into many different frameworks. From an Idealist/Nativist perspective, readiness is assumed to be internally driven. Thus, development evolves on its own timetable that cannot be accelerated (Carlton & Winsler, 1999; Kagan, 1990). The Empiricist/Environmental perspective, on the other hand, organizes readiness skills into a hierarchical scale within which lower-level skills must be attained before higher-level ones (Gagne, 1970). In other words, development progresses in a step-wise fashion. From yet another viewpoint, theorists espousing the Social/Constructivist viewpoint reference cultural or community standards from within which to value skills and, as a result, much difficulty arises based on the wide variation of values, culture, or geographic variables that may be present across even a small region. Most notable, perhaps, is the

PAGE 39

27 Interactionist perspective, which suggests that development progresses as a result of the interaction of internal characteristics and environmental events (Vygotsky, 1978). Outgrowth of the Interactionist perspective offers the greatest source of fuel for intervention efforts since it is within this framework that external events (e.g., early education experiences) become agents of change for mediating the effects of less than favorable internal or external conditions. Given this framework where the linkage between needs and intervention arises, the need to evaluate the effect that interventions have on preparing children for their future schooling emerges. In summary, although strong debates are tied to its use as a means of describing children’s learning, the value for operationalizing readiness emerges when the efficacy of interventions or programming are evaluated. The next section will offer an overview of efforts to measure readiness. Assessment of Readiness for Learning Historical Overview The use of readiness data in a decision making process has brought forth much discussion (e.g., Carlton, 2000; Kendall, 1996; Wolery & Wilbers, 1994). Indeed, the theme among these writers shifts readiness assessment away from the child and toward the flexibility of the systems that shape the early education environment. Regardless, many modes of assessing readiness for instruction have been utilized and remain indices by which decisions about children’s academic careers are judged. For example, based on the Idealist/Nativist


perspective, several standardized measures have emerged throughout history to identify the timeline for a child's learning (e.g., Gesell School Readiness Test, Gesell Developmental Assessment). Not surprisingly, given the elusive and ill-defined conceptualization of readiness, problems such as misidentification of children and misuse of the instrument (e.g., as an index of intellectual ability) continue to hinder the valid use of such tools. Perhaps most familiar to adult populations are assessments developed from within the mindset of the Empiricist/Environmental perspective. Examples include the Iowa Tests of Basic Skills, Stanford Early School Achievement Test, and Comprehensive Test of Basic Skills. Criticisms, however, weaken this vein of assessment as well. The most notable complaints are that tests of this type focus on recognition skills rather than on identification or application of those skills (e.g., Kagan, Moore, & Bredekamp, 1995; Meisels, 1999). Similarly, the skills assessed on these measures often fail to align with those within the classroom curriculum.

Contemporary Approaches

Given the abundance of research that bolsters the Interactionist view of readiness, much attention also is drawn to modes of assessing its impact. A significant assumption of the Interactionist approach is that development results from a bi-directional relationship experienced over time. Thus, assessment or measurement of readiness for learning cannot be represented accurately by a single test score derived at one moment in time, as is the case with common standardized


readiness tests. Instead, Meisels (1999) proposed and pursued the use of curriculum-embedded assessments of academic performance. These dynamic assessments seek to match and synthesize information from the child, the teaching environment, and the child's sociocultural background. Interestingly, using data to inform instructional practices opens a new direction in data collection; notably, the goal is not to know "who" but "how" to intervene. The difficulty of accurately capturing the elusive concept of academic readiness continues, yet the target has been reframed as assessing a child's developmental level (Carlton, 2000; Kendall, 1996; Wolery & Wilbers, 1994). Close inspection of this conceptual shift, however, will unearth a parallel between readiness and measures of children's developmental levels. Questions remain as to how these data will inform instructional practices. Nevertheless, one of the tools that has gained a strong foothold in the state of Florida is the Early Screening Inventory (ESI). The ESI was developed by Meisels, Marsden, Wiske, and Henderson in the mid to late 1990s not as a test of readiness but as a screening instrument "to survey children's ability to acquire skills" (Meisels, 1999, p. 3). Three general areas are assessed during the screening: visual-motor/adaptive skills, language and cognition, and gross motor skills. Results are purported to identify whether children are developing similarly to a national sample of like-aged children. When children are found to fall below the normal range, it is suggested that they be rescreened within an eight- to ten-week period. After rescreening, a decision about the need for a more extensive evaluation is made (Meisels,


30 Marsden, Wiske, & Henderson, 2003). A preschool version also is available. Revisions also have been made to the original ESI with a subsequent ESI-K now in use. With strong predictive validity to bolster its use, the ESI-K has become a prominent tool for early identification of students who are at-risk for school difficulties (Meisels, Henderson, Liaw, Browning, & TenHave, 1993). Most notably, all kindergarten students in the state of Florida are assessed with the ESI-K during the initial 45 days of their first year of kindergarten (School Readiness Uniform Screening System, n.d.). The utility of assessing students with the ESI-K in the state of Florida has yet to be established. That is, did screening accurately identify students in need and link these students with appropriate interventions or resources? Several related questions still linger. Is screening to identify children who are at-risk for not adequately advancing at the expected rate necessary? Are children entering school more different than alike? Finally, does identifying deficiencies or less well developed areas help? Some answers may be found. Others still elude our grasp. Torgesen (2002) tackled one of these questions. Specifically, he has identified two domains within which notable diversity in children has been found. First, a wide variation in the range of children’s awareness of print, identification of letters, and early phonemic awareness skills exists. Notably, these skills form some of the precursor technical skills for reading. On the other hand, a second domain refers to vocabulary development


31 and a child’s general knowledge about the world. Notably, these elements serve as a framework in which technical skills can gain momentum and translate into meaningful thought processes (Snow, Burns, & Griffin, 1998). For example, if two children encounter an unfamiliar word group – ice berg – the student who has background knowledge about snow or Alaska will make connections that are more meaningful to the word and demonstrate a higher probability of using this new word in an appropriate context. In contrast, the child with no prior background knowledge will struggle more with this new word and will be less likely to use this word again in a manner that demonstrates comprehension. The Acquisition of Reading Skills With the concept of readiness for learning in mind, it becomes important to describe the development of early literacy and later processes for learning to read. Given this agenda, an overview of the developmental progression of early literacy development will be offered. After this overview, the content of skills that children acquire while gaining proficiency in reading, referred to as the Five Big Ideas of Reading, will be offered. Finally, a skill based process for monitoring the acquisition of reading skills will be described. Early Literacy Acquisition Learning to read depends on factors both within the child (e.g., health, sensory, or perceptual organs) and extern al to the child such as exposure to literacy-rich social environments (Durkin, 1966; Snow & Goldfield, 1983; Snow, Burns, & Griffin, 1998). According to Stanovich, Cunningham, and Cramer


(1984), intellectual ability, given that it falls within the Average range, does not factor into the ease with which children acquire literacy skills. Instead, it is the age appropriateness of children's sensory, cognitive, perceptual, and social skills that portends the ease of becoming successful readers (Snow, Burns, & Griffin, 1998). The acquisition of literacy skills begins shortly after birth and progresses well into the high school years (Snow, Burns, & Griffin, 1998). Sulzby and Teale (1991) assert that literacy-based experiences that an infant observes (e.g., a parent reading to him or her) serve as a model that reflects the importance attributed to reading. Figure 1 provides a visual image of the progression of literacy acquisition and begins with these early caretaker-infant interactions. As the figure illustrates, the acquisition of literacy is not defined by clear stages. Rather, these skills emerge at overlapping times based on internal and environmental factors. From this visual it can be seen that differences as early as 8 to 12 months can be noted in infants whose parents read to them on a regular basis, with infants demonstrating early awareness of pre-reading activities such as holding a book and turning a page when reared in a literacy-rich setting (Snow & Ninio, 1986).


Figure 1. Progression of Early Literacy Skills. [The figure traces literacy milestones from caregivers reading, talking, and responding to the needs of an infant; to the infant grabbing at books and turning pages at 8-10 months; to the child learning at 2-3 years that objects stand for something else and "reading" logos, and "reading" a favorite book late in the second year; to experimenting with writing (scribbling, letters, or letter-like forms) at about 3-4 years; to learning that the alphabet is a symbol system for sounds and seeing print as mapping to the speech sounds that make up written words; with the acquisition of real reading typically beginning at about 5-7 years of age. The progression is framed by high versus limited social reinforcement of literacy and by normal versus limited cognitive, sensory, and/or health functioning.]


Around two to three years of age, children begin to realize that symbols represent something other than the visual image they depict (Marzolf & DeLoache, 1994). For example, a child may understand that a red bull's eye symbol represents the store, Target. In fact, research by Masonheimer, Drum, and Ehri in 1984 found that 92% of children aged two to five years could "read" a color photo of the McDonalds™ logo. Later in the second year, many children will begin to "read" favorite books – a behavior that often reflects awareness of the vocal intonations and wording most often found in written text (Snow, Burns, & Griffin, 1998; Sulzby & Teale, 1987, 1991). At this time, a child moves beyond the ABCs as only a song and into recognition that the alphabet is a system that represents the sounds that make up words and spoken language (Adams, 1990). Also within this time period, children will begin demonstrating their first attempts at writing letter-like forms, with inventive spelling emerging after the scribbling turns into letters (Adams, 1990; Sulzby & Teale, 1987). One last notable moment in the early stage of literacy acquisition is the time at which children understand the mapping between letters and sounds (i.e., between graphemes and phonemes). It is at this point, often around the ages of five to seven years, that real reading instruction begins (Calfee, Lindamood, & Lindamood, 1973; Snow, Burns, & Griffin, 1998). As the acquisition of early literacy skills moves into learning to read, new factors and processes emerge. According to Adams (1990), "…skillful reading depends critically on the speed and completeness with which words can be


35 identified from their visual forms” (p. 333). With these objectives in mind, the next sections will address what is learned at each stage of reading skill acquisition, which is also referred to as the Five Big Ideas in Reading. After this overview, a discussion about how to monitor children’s acquisition of early literacy and reading skills is presented. Five Big Ideas in Reading The focus now turns away from the process of reading skill acquisition and into the content of reading processes. Specifically, what skills do children need to reach basic levels in reading? The National Reading Panel [NRP] (2000) asked this same question. After the national panel synthesized their findings of empirically based instructional practices in reading, the panel followed other reading education practices and framed their response according to Five Big Ideas in Beginning Reading: Phonemic Awareness, Alphabetic Principle, Fluency with Text, Vocabulary, and Comprehension. Briefly, hearing and manipulating sounds in words reflects the principle of Phonemic Awareness. The Alphabetic Principle, on the other hand, is the skill at linking individual phonemes to individual graphemes and using these to make words. The ability to read text quickly and accurately falls into the category of Fluency with Text. Further, the ability to understand individual words as well as use them to communicate is the idea of Vocabulary development. Finally, the skill of highest complexity is that of Comprehension, which is noted when reading reaches the level where meaning


is conveyed through text. According to the NRP (2000), analysis of these areas should drive the content and structure of reading instruction.

Monitoring Acquisition of Reading Skills with General Outcome Measures

In 1996, the Office of Special Education and Rehabilitative Services of the United States Department of Education offered funding for research directed at the development and evaluation of measures that would monitor children's development from birth to eight years of age. Research teams in Minnesota, Oregon, and Kansas responded and were awarded a grant to accomplish this goal. Collectively, this trio became the Early Childhood Research Institute on Measuring Growth and Development (ECRI MGD), and in late 1996 merged their efforts to develop a system of general outcome measures that would monitor overall indicators of growth and development in children from birth to eight years of age (McConnell et al., 1998). The University of Kansas research team, also known as Juniper Gardens, adopted the birth to age three span; the University of Minnesota assumed responsibility for the three to five year age span; and the University of Oregon developed measures for the young school-aged group (i.e., 5 to 8 year olds). Not surprisingly, each age bracket required different approaches for capturing these skills. Significantly, however, all teams emerged with general outcome measures that were standardized, efficient, and sensitive to change (McConnell et al., 1998). Briefly, General Outcome Measures (GOM) assess indicators of overall health or development (Deno, 1997). A thermometer represents an everyday


example of a GOM. That is, the information gleaned from taking an individual's body temperature reflects an overall picture of the presence of infection in the person's body. Deviations above or below typical ranges of body temperature may signal the need for more diagnostic tests. Another positive feature of GOMs is their ability to be performed repeatedly without carryover or practice effects confounding future measurements. In reference to the earlier example, a thermometer can be used as often as necessary to monitor body temperature. Perhaps most important, these measures also are sensitive to change. The application of GOM to the academic setting can be found in curriculum-based measurement (CBM) (Shinn, 1989). Curriculum-based measurement is a basic structure for quick, standardized assessments that can gather data in reading, mathematics computation, spelling, and written expression. Sensitivity to short-term changes in these areas (e.g., number of words read correctly in one minute) has given much merit to the use of CBM as a screening instrument and progress monitoring tool (Shinn, 1989).
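To make the rate-based scoring behind such measures concrete, the minimal sketch below (an illustration only; the function name and figures are hypothetical rather than part of any published CBM protocol) converts a timed oral reading sample into a words-correct-per-minute score.

def words_correct_per_minute(words_attempted, errors, seconds):
    """Scale the count of correctly read words in a timed probe to a one-minute rate."""
    correct = max(words_attempted - errors, 0)
    return correct * 60.0 / seconds

# Hypothetical one-minute probe: 47 words attempted with 5 errors yields 42.0 words correct per minute.
print(words_correct_per_minute(47, 5, 60.0))

Because each probe reduces to a single rate, scores from repeated probes can be charted over time, which is the sense in which such measures are described as sensitive to change.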


Building upon the framework of CBM, the research teams at the ECRI MGD developed GOMs for assessing development in children in the areas of cognitive, language, adaptive, social-emotional, early literacy, and basic reading. Known as the Individual Growth and Development Indicators (IGDI), they are partitioned into three age groupings. Specifically, the University of Kansas adopted the birth to three years age span. Notable to emerge from this team, also known as Juniper Gardens, is the Early Communication Indicator (ECI) (Carta, Greenwood, & Walker, 2004). The ECI has been developed to gather specific data regarding a child's early communication skills. Meanwhile, at the preschool level, the University of Minnesota put forth the preschool version of the IGDI. The preschool IGDI surveys expressive language as well as the acquisition of early literacy skills such as rhyming and alliteration (McConnell et al., 2002). Finally, from Oregon came the Dynamic Indicators of Basic Early Literacy Skills (DIBELS). The DIBELS subtests build further on the preschool IGDI and add early reading fluency measures. DIBELS measures can be administered to children between the ages of five and eight years. Figure 2 depicts the relationship between the IGDI measures and the Five Big Ideas in Reading. A brief discussion of each measure will be presented as well.

Infant and Toddler IGDI

One of the most important skills a child must develop during early childhood is expressive communication (Bates, O'Connell, & Shore, 1987). Gazing at parents, gesturing, and babbling are the earliest forms of communication (McCathren, Warren, & Yoder, 1996). If development falters in this area, development in other domains also may be affected (Luze et al., 2001), and delays can translate into difficulties with the acquisition of literacy skills, later academic struggles, and the establishment of social relationships (McCathren, Warren, & Yoder, 1996). Most notable, however, are the findings that document improvement in children's communication skills when intervention services for


Figure 2. Relationship Between the IGDI and the Five Big Ideas in Reading. [The figure arranges the Individual Growth and Development Indicators developed by the Early Childhood Research Institute on Measuring Growth and Development by age band: Birth to 3 Years – the Early Communication Indicator (ECI), administered through a naturalistic play observation; 3 to 5 Years – the preschool IGDI (Picture Naming, Rhyming, Alliteration); and 5 to 8 Years – the Dynamic Indicators of Basic Early Literacy Skills (DIBELS: Initial Sound Fluency, Letter Naming Fluency, Phoneme Segmentation Fluency, Nonsense Word Fluency, Oral Reading Fluency). These measures are aligned with the Five Big Ideas in Reading: Phonemic Awareness, Phonics, Fluency, Vocabulary, and Comprehension.]


40 expressive language are provided (e.g., Cole, Dale, & Mills, 1991; Yoder & Warren, 1999). Thus, early identification of students with delays in this area is justified. As part of the ECRI MGD mission, the team at Juniper Gardens focused the development of their GOM toward expressive communication. The intent of the ECI is for identification and progress monitoring of response to intervention. Specifically, the ECI gathers observational data about three forms of infants’ and toddlers’ early forms of communication (i.e., gestures, vocalizations, and utterances) observed during a naturalistic play activity (Carta, Greenwood, & Walker, 2004). Gestures reflect a physical movement that a child makes in an attempt to communicate. For example, a child might push a toy toward a play partner, nod his/her head, or point with his or her finger. In contrast, vocalizations are noted when a child emits a non-word or unintelligible sound such as a laugh, animal noise, or “mmm.” Finally, utterances are recorded when a child expresses a recognizable word. Specifically, these early forms of communication are recorded during a six-minute play session with an adult play partner. Five scores are obtained: Gestures, Vocalizations, Single Word Utterances, Multiword Utterances, and a Total Communication composite. Research by Luze et al. (2001) examined the psychometric properties of the ECI. First, the short-term sensitivity of the ECI was examined using multiple administrations across age groups. Findings indicated that the rates of communication paralleled that which would be expected at respective times in


41 development. For example, rates of single and multiple word utterances increased over time. In contrast, rates for gestures decreased at around 25 months as children began relying more on their words for communication (Luze et al., 2001). Concurrent criterion validity was documented as well with moderate correlations noted between the ECI Total score and the Preschool Language Scale – Third Edition. Finally, split-half reliability was conducted by grouping odd and even monthly assessment data collected across nine months of progress monitoring. Strong reliability was established. Additional evaluations of the psychometric properties of the ECI across larger samples are forthcoming (Luze et al., 2001); however, at this point, it appears that this tool will fill a needed void in GOMs for infants and toddlers. Preschool IGDI The preschool IGDI serves as a GOM for children from three to five years of age. Different from the ECI, the preschool IGDI employs more directive tasks that ask children to engage in activities that tap expressive language and early literacy skills (McConnell, Priest, Davis, & McEvoy, 2002). Three timed subtests are included in the preschool IGDI. More specifically, expressive language is assessed in the Picture Naming subtest, which is an activity that prompts children to identify as many pictures of common everyday objects (e.g., car, dog, mop) as they can in a one-minute time period. Early literacy, on the other hand, is measured in two separate subtests, Alliteration and Rhyming. During Alliteration, children are asked to look at four pictures of common objects and


pick out two that begin with the same sound. Similarly, during the Rhyming subtest, children are prompted to select two pictures that depict words that rhyme. Two minutes are allotted for each of these last two subtests. Each subtest also contains a teaching portion that provides opportunities for the children to practice and receive feedback about their performance. The number of correct responses in each time period reflects the total score in each area. Psychometric evaluations of the preschool IGDI have provided documentation that this tool may serve as a valuable indicator of children's development from three to five years of age (Priest, Davis, McConnell, McEvoy, & Shinn, 1999). Examinations of concurrent relationships between the expressive language component (the Picture Naming subtest) and other childhood expressive language assessments, such as the Peabody Picture Vocabulary Test – Third Edition, have found moderate correlations, supporting the assertion that the Picture Naming subtest is assessing expressive language skills (Priest et al., 1999). Further documentation of the preschool IGDI's sensitivity to growth and change over time was noted in a sample of typically developing children and developmentally delayed children (Priest et al., 1999). One-month alternate-form reliability also is moderate. Given these findings, the use of the preschool IGDI holds promise as a valuable tool for monitoring children's language and literacy development.
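Before turning to the school-aged measures, the sketch below illustrates how the counts from the three timed preschool IGDI subtests might be tabulated. The structure, names, and numbers are hypothetical and are not drawn from the published IGDI materials; the raw count of correct responses is the score that is actually reported, and the per-minute rate is computed here only to make the one-minute and two-minute subtests easier to compare.

from dataclasses import dataclass

@dataclass
class SubtestResult:
    correct: int      # number of correct responses within the allotted time
    minutes: float    # 1.0 for Picture Naming, 2.0 for Alliteration and Rhyming

    def rate_per_minute(self):
        return self.correct / self.minutes

# Hypothetical administration of the three timed subtests
results = {
    "Picture Naming": SubtestResult(correct=18, minutes=1.0),
    "Alliteration": SubtestResult(correct=6, minutes=2.0),
    "Rhyming": SubtestResult(correct=8, minutes=2.0),
}

for subtest, r in results.items():
    print(f"{subtest}: {r.correct} correct ({r.rate_per_minute():.1f} per minute)")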


School-aged IGDI

The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) serves as the IGDI for school-aged children from five to eight years of age (Good & Kaminski, 2002). Developed by the team at the University of Oregon, the DIBELS is an individually administered tool that collects data about a student's early literacy or reading fluency skills. Five subtests make up the DIBELS battery, although not all are applicable to all grade levels. For example, assessment of fluency for reading text, or Oral Reading Fluency, would not be administered to kindergarten students during the initial months of school. Similarly, fluency with naming alphabet letters, assessed with the Letter Naming Fluency test, would not customarily be administered to students in third grade. Specific to the DIBELS, the Letter Naming subtest asks children to identify as many letters as they can from a page with a random selection of upper- and lower-case letters in a one-minute time frame. This subtest is noted to tap early phonics skills. Initial Sound Fluency, in contrast, taps phonological awareness, as children are asked to identify the first sounds in words for common objects depicted in a picture. Phoneme Segmentation is another subtest that prompts students to segment three- and four-syllable words. This subtest is reported to tap phonemic awareness skills. Nonsense Word Fluency is another higher-level test that asks students to read make-believe words according to their phonetic appearance. Finally, Oral Reading Fluency assesses fluency with reading. Technical reports assert that strong parallels exist between the subtests of the DIBELS and the Five Big Ideas in Reading, as these principles guided the development of the measure (Kaminski & Good, 1996). As noted by researchers such as Good and Kaminski (2002) and Shaw and Shaw (2003), the DIBELS


provides a reliable medium through which to monitor students' progress through grade-level curriculum.

Summary

In general, if children lack exposure to print and are unaware that letters link to sounds and written text, learning to read will become a struggle (Casey & Howe, 2002). That is, their skills at associating letters with sounds and blending these sounds into language, which provides the foundation for reading, will suffer. For as many as 20% of all elementary school-aged children – an estimate that increases dramatically for children who live in poverty – this is the case as they struggle in the acquisition of reading skills (Shaywitz, Shaywitz, Fletcher, & Escobar, 1990). The question now becomes, do we know what hindered their development? Even more importantly, how do we identify them before they fail so that interventions can redirect their progression of skill development? The next section will tackle this first question and explore general risk factors that hinder learning. Finally, intervention and prevention efforts aimed at enhancing literacy development will be presented.

Risk Factors for Learning in Young Children

Most succinctly, Levine, Swartz, Reed, Hill, Wakely, Lind, and Marincic (1997) describe a model to guide one's understanding of the effects and reciprocity of risk factors. In this model, Levine et al. (1997) suggest that the relationship be seen as a balancing act between positive influences in a child's life and less than optimal elements (e.g., exposure to various risk factors,


45 inherent weaknesses) all which serve to counterbalance the scale. Specifically, risk factors have been defined as those elements that when present in the child’s internal or external domain may decrease his or her capacity to reach his or her optimal level of growth or development. Is a child where he or she could be? Or, have they been detoured by obstacles in learning? Categorizing Risk Factors In general, risk factors can be categorized into one of three different categories: fixed, variable, and causal. A fixed marker or risk factor reflects those variables that cannot be changed such as gender, race, and temperament. Somewhat different, a variable factor identifies those elements that can be changed but even when altered will not decrease the risk of negative effects. Pathways can be changed in variable risk markers. That is, a mother could complete her high school education when her child is three years of age; however, this act will not decrease the earlier risk attributed to her child during his or her early years (Kochanek, Kabacoff, & Lipsitt, 1990). A final type of risk factor is that which intervention efforts are most concerned – a causal risk factor. Notably, attention is directed here since strategic interventions have the potential to alter the direction of less than optimal pathways. According to Coolahan, Fantuzzo, Mendez, and McDermott (2000), living in poverty is one of the most significant factors that negatively impact a child’s future. Focusing on the effects that poverty has on later academic success revealed higher levels of academic difficulties, emotional problems, and


46 retention (Donovan & Cross, 2002; Duncan, Brooks-Gunn, & Klebanov, 1994; McLoyd, 1998; Offord, Boyle, & Jones, 1987). With financial strain permeating all aspects of life, it almost is inevitable that high levels of stress also will pervade the environment. Limitations in parent education and community stress manifested in increased violence all create a whirlwind of a cause and effect chain of events (Donovan & Cross, 2002; Huffman, Mehlinger, & Kervia, 2000). Perhaps it is not surprising that poor health care, lack of adequate housing, high levels of stress in the household, living within a violent community (Huston, McLoyd, & Carcia Coll, 1994), and lower levels of maternal education (Byrd, Weitzman, & Auinger, 1997; Kalff et al, 2001) often coexist with this life of poverty. Notably, maternal education has been found to be a stronger predictor of a child’s disability status at his or her entry to kindergarten than has the child’s own behavior prior to school entry (Kochanek, Kabaoff, & Lipsitt, 1990). Regardless of the obstacles that children encounter during their early years, all children benefit from exposure to and interactions with certain critical elements in their early childhood settings. Further discussion now will focus on identifying key elements that have been noted as playing a significant role in promoting children’s learning and establishing early literacy skills. What Does Work? Childcare studies conducted by the National Institute of Child Health and Human Development (NICHD) provide direction for preschool settings. More specifically, their studies conducted in 1997 and 1998 find that children’s


47 intellectual, language, and social development can be promoted through participation in high quality early academic environments. Ripple, Gilliam, Chanana, and Zigler (1999) further define the elements that influence positive childhood development. Specifically, they report that two years of participation in a preschool program more positively influences children than one year of attendance. In addition, more hours of attendance per day bears greater benefits. Also contingent to positive outcomes are competitive teacher salaries, a wide diversity of children, active and responsive teacher assistance, and ongoing process and outcome evaluations within the preschool program itself (Ripple et al. 1999). Specific to early literacy development, it is known that children’s skills do not emerge or are delayed when deprived of interactions with print and oral language (National Reading Panel, 2000; Neuman, Copple, & Bredekamp, 2000). Perhaps the most important of these interactions occurs when adults read to children (Bus, van IJzendorn, & Pellegrini, 1995; National Reading Panel, 2000; Snow, 1991; Wells, 1985). Questioning that takes place during these sessions (e.g., “What do you think will happen next?” “Have you ever felt like that character?”) bolsters critical thinking skills and enhances listening comprehension (Karweit & Wasik, 1996; Snow, 1991). Introducing children to the idea of print and its linkage to spoken language also plays a significant role in the acquisition of early literacy skills (Clay, 1991; Stanovich & West, 1989; Teale, 1984). One activity often used to build this awareness is pointing out to children


the significance of letter strings and how spaces between groups of letters mark the end of one word and the beginning of another. Instruction in letter recognition and writing falls into this category as well (Adams, 1990; Neuman, Copple, & Bredekamp, 1999; Neuman & Roskos, 1997; Schickendanz, 1999). Establishing a classroom library also serves as a means for increasing children's interactions with books (Morrow & Weinstein, 1986; Neuman, Copple, & Bredekamp, 1999; Snow, Burns, & Griffin, 1998). Further, when a comfortable place for children to sit while they peruse books is provided in a classroom, the time children spend pretending to read or looking through books increases (Morrow & Weinstein, 1986; Neuman, Copple, & Bredekamp, 1999). Print in the form of labels on shelves, posters on walls, and children's names displayed prominently around the room further enhances a learning environment (McGee, Lomax, & Head, 1988; Morrow, 1990). One additional area of interest is found in children's exposure to nursery rhymes. For example, research by Maclean, Bryant, and Bradley (1987) linked knowledge of nursery rhymes at three years of age with later phonemic awareness. With notable pathways drawn between early learning experiences and literacy development, the emphasis now has turned to disseminating these evidence-based practices. Professional development training for early childhood educators has been used to meet this need. One of the most notable is the HeadsUp! Reading (HUR) curriculum, which was developed by the National Head Start Association. However, prior to reviewing this curriculum, research


that has examined the messengers of this content – teachers – will be reviewed. That is, what teacher characteristics relevant to teacher education, certification, or years of experience are linked to more positive outcomes in children's learning?

Teacher Qualifications

Teachers serve as facilitators of learning and development. Not surprisingly, they are often seen as the component that is easiest to address when student learning is not progressing as expected. Evidence of this can be noted in the abundance of specialized teacher training courses and the numerous hours during which even experienced teachers must attend workshops to refine their skills (Mangione, 1995). National policy also has put forth requirements that all children have highly qualified teachers in their classrooms (No Child Left Behind Act, 2002). With improving teacher quality a national agenda, what defines and/or raises teachers to this "qualified" level has received much attention. Buchanan, Burts, Bidner, White, and Charlesworth (1998) have tackled this question by examining teacher variables that predicted developmentally appropriate practices (DAP) for educating young children. The variables of interest to this research team were teachers' major in college, certification status, years of experience, and beliefs about their influence on their classroom curriculum. Of these variables, only teachers' beliefs about their influence on the implementation of their teaching curriculum emerged as a significant predictor of


DAP, with teachers who felt more in control of their classroom curriculums reporting greater alignment with DAP. Additional findings also indicated that teachers who were certified in early childhood education endorsed DAP more often than those who were certified in elementary education. This outcome bolsters the National Association for the Education of Young Children (NAEYC) assertion that specialized preparation for early childhood educators is needed (Bredekamp & Copple, 1997). Differences in state implementation of these guidelines are present, with Florida and Virginia as the only two southeastern states that do not hold community-based early childhood educators to the same standards as school-based pre-kindergarten teachers (Denton, 2002). For example, in Florida, teachers in school-based pre-kindergarten classrooms are required to have a four-year degree and certification in early childhood. In contrast, teachers in community-based pre-kindergarten sites are required to hold only a Child Development Associate (CDA) credential when teacher-student ratios exceed 1:25. Specifically, a CDA is an entry-level, non-degree certification that is awarded after a 40-hour childcare training and documentation of work with children in early childhood settings (http://www.childcarepinellas.org/preschool.htm). When the CDA is compared to a four-year degree, a wide span is noted between the two levels of educational requirements. Experience in the field of education also is believed to influence student


learning outcomes. Interestingly, however, research has documented that an inverse relationship exists between the number of years of teaching experience and DAP (Buchanan et al., 1998; McCarthy, 1990; Sarason, 1991). That is, teachers with more years of experience were less likely to endorse DAP for instructing young children than were teachers with fewer years of experience. A hypothesis for this finding is that teachers with less experience also tend to be recent graduates of college of education programs, during which more current NAEYC standards and guidelines regarding DAP have been espoused (Hart, Burts, & Charlesworth, 1997; Sarason, 1991). Drawn from this outcome is the recommendation that ongoing professional development activities are needed for even experienced teachers. With teacher qualification variables reviewed, it is now necessary to examine routes to enhancing teachers' instructional skills. One route that receives much attention is teacher trainings and professional development activities. One such curriculum that attempts to meet this need is HeadsUp! Reading (HUR). The goal of HUR is to increase literacy development in young children by training teachers in research-based strategies for early literacy instruction. The following sections are dedicated to describing the HUR curriculum.

HeadsUp! Reading

HeadsUp! Reading is an early literacy professional development curriculum for early childhood educators. Framed within a college credit course,


52 HUR’s goals are to establish research-based strategies for literacy instruction in its participants. The course is presented through a satellite distance learning network. Early childhood college faculty (also certified as HUR instructors) facilitate the discussion and activities that are part of the class content. Eight states have added the HUR curriculum as part of their early literacy initiatives. The state of Florida adopted the curriculum in January of 2002 and offers it at over 43 locations. Currently, over 250 educators have been certified as HUR facilitators. Two goals are central to HUR (NHSA). First, HUR aims to strengthen early childhood educators’ skills by increasing their knowledge of effective strategies for literacy instruction. Second, and arguably most notable, HUR’s intent is to increase children’s literacy skills. Specifically, the HUR curriculum is tied to five core principles of early literacy development endorsed by early childhood research (e.g., Bowman, Donovan, & Burns, 2000; Burns, Griffin, & Snow, 1998; Snow, Burns, &Griffin, 1998). First, the process of learning to read is regarded as a gradual acquisition of skills that begins moments after birth and continues well into the primary grades. Next, it is recognized that learning to read does not occur in a vacuum. Instead, it depends on all facets of a child’s well-being such as his or her physical growth, social-emotional functioning, and cognitive development. A third underpinning adopted by the HUR curriculum is the knowledge that many underlying and early skills support later oral reading. Therefore, delays in any of the precursor skills (e.g., phonological awareness,


53 alphabetic principle) can result in reading difficulties in later years. A fourth principle acknowledges that early and explicit instruction and environmental intervention (e.g., establishing a literacy-rich classroom setting) shapes early skills needed to learn to read. The final principle asserts that not all instructional strategies are equal with some being more effective than others. Thus, HUR has adopted only strategies with a solid foundation of supportive research associated with their use (NHSA). Course content is structured around seven topics: Curriculum, Assessment, Talking, Playing, Reading, Writing, and Learning the Code (NHSA). The first two topics reflect foundational knowledge while the last five are identified as gateways to literacy. A brief description of the curriculum that is covered across the fifteen class sessions is presented next followed by a description of activities and classroom application exercises as well. Curriculum This topic focuses on how the classroom environment can enhance children’s literacy development with specific components of a literacy-rich environment presented. Further attention is directed to importance of broadening children’s background knowledge. Strategies to survey and enhance children’s knowledge base are presented. “Curriculum” class activities. One example of an activity from this topic entails asking teachers to think about the literacy messages their preschool classroom sends to their students (where and how are books


54 displayed, are items around the room labeled for children, etc). Yet another activity challenges teachers to consider changes they might make in their classrooms. Teachers also are prompted to select a favorite book in their classroom and decide how they would assess children’s background knowledge prior to reading the book. “Curriculum” activities in the preschool classroom. Some sample activities for the preschool classrooms include adding writing materials to an area of a classroom and observing children’s reactions. From another direction, teachers are asked to explore their students’ interests and breadth of experiences through conversations with the children and their parents. Assessment The second topic, Assessment, focuses on how to identify children’s present levels of skills and knowledge and align this information with instruction. “Assessment” class activities. In general, activities in this domain prompt teachers to think about assessment and how it can be meaningful to their lesson planning. “Assessment” activities in the preschool classroom. One application activity asks teachers to observe approaches various children display while exploring books and then use this data to approximate where the children are in their literacy development. Talking The focus of this third topic is on expressive and receptive language and


vocabulary. Teaching strategies such as engaging children in conversations about books and effective strategies for responding to children's sounds, words, or questions are presented. Strategies for building listening skills in children also are discussed. For example, teachers are offered suggestions such as using props during storytelling, creating times during the day when children listen to and sing with music, and using "wait time" – staying quiet while waiting for a student to respond to questions.

"Talking" class activities. Activities for this lesson include refining teachers' understanding of listening skills by asking them to identify what indicators they use to assess whether a child is listening. Additional activities ask teachers to think about how they know children understand what they are being told. "Wait time" also is the focus of one activity, during which teachers are asked to plan some times to implement it in their interactions with children.

"Talking" activities in the preschool classroom. Teachers are asked to experiment with "wait time" when asking questions. Other application exercises include working to build skills that engage a student in an extended conversation by using open-ended questions.

Playing

The content in the Playing section of the HUR curriculum instructs teachers in how to encourage literacy-rich play. For example, suggested strategies include encouraging book-related dramatic play, supporting pretend


56 reading of books, and providing writing instruments and materials in a play area so that children can write a grocery list or “play waiter.” “Playing” class activities. During this content area, teachers are asked to consider how different students in their classroom play and how they can support this play further from a literacy-focused perspective. “Playing” activities in the preschool classroom. In the application of this content, teachers are asked to engage in activities that convey writing not only as a means of communication but also as a source of fun and pleasure. Reading The topic addresses how many different types of reading can be built into the classroom (e.g., lap reading, group reading, shared reading). This session also describes important elements in teacher-led reading such as how to ask questions, build vocabulary, introduce a new book, and make connections to background knowledge. “Reading” class activities Class activities for this content area include tasks such as a self-reflection on how to engage students during book reading. Teachers also are asked to observe how they modify their behavior based on whether they are reading a new or old book. Another activity asks teachers to think about strategies they could use to create opportunities to read with small groups. “Reading” activities in the preschool classroom. Some activities teachers


57 are asked to experiment with after the introduction of this content include observing the knowledge-base that different children have about reading (e.g., do they hold a book correctly, do they point to words while “reading”). Teachers also are asked to identify a new goal to pursue while reading aloud a familiar book to their class (identifying words that sound alike, predicting what happens next, taking “picture walks” through the book prior to reading it, etc.). Writing The next content area addresses writing and how it supports reading. The developmental progression of writing from scribbling to inventive spelling is presented as well as strategies for modeling writing for children. “Writing” class activities Activities in this session prompt teachers to note where different students’ writing samples fall from a developmental perspective (scribbling, letter strings, inventive spelling). Teachers also are asked to consider their classroom routines and how they could provide for and support children’s attempts at writing more positively. “Writing” activities in the preschool classroom. Teachers are encouraged to observe a child who is writing and then ask him/her to “read” what was written. An additional activity asks teachers to create books with their students to read and share with their friends and families. Learning the Code The final content session is devoted to how children develop phonological


awareness. After describing the importance that this skill has for later reading success, several strategies for providing opportunities for its development are offered (e.g., fingerplays, poetry, and games and songs). Teaching phonological awareness to second-language learners also is presented.

"Learning the Code" class activities. Activities in this final content area ask teachers to consider how they would describe phonological awareness to parents, with specific emphasis on its importance for later reading success. Teachers also are asked to think about the "tools" that they have for promoting phonological awareness (rhyming books, poems) and how they could use them more effectively.

"Learning the Code" activities in the preschool classroom. Ideas for games such as asking students to think of words that rhyme with their name or the name of a character in a story are suggested.

Program Evaluation of HUR

Program evaluation of HUR is underway in several states, including Florida. To date, Neuman and Seung-hee (2001) offer one of the few published evaluations of HUR, which was conducted across 11 program sites in three states (Pennsylvania, Ohio, and Michigan). In total, 130 teachers from 10 treatment sites and eight teachers from one control site served as participants. Program impact was assessed with a pre- and post-test of teachers' knowledge, skills, and practices in early literacy. Results from this comparison documented positive growth in teachers' knowledge of literacy instruction after eight weeks of


instruction with the HUR curriculum. Additional data that tapped teachers' perceptions of changes in their classroom behavior with respect to greater attention and direction toward literacy-based activities also were gathered. Results from the comparison of these data to those obtained from the control group documented higher levels of literacy-supportive behaviors in the HUR program group. Observational data gathered by the program evaluators, however, revealed that, despite reports by teachers that they had changed the way they worked with their students, the research-based strategies taught in the course were implemented inconsistently. Given this observation, it was recommended that more structured assistance and feedback be provided to students above and beyond the class meetings. Additional follow-through and modeling activities also were suggested. A noteworthy void, however, is the question of how the HUR curriculum impacted the students of the participating teachers. This is a significant loss when looking at the outcomes and accountability associated with the curriculum. With inconsistencies in implementation noted to hinder effective and efficient generalization of skills taught in the HUR course into early childhood classrooms, alternative routes to enhance this process are needed. One model that has gained a strong following is coaching.

Coaching as a Supplement to Learning

Coaching is a process that is believed to facilitate the transfer of learning. An ultimate goal of transfer of learning, and thus coaching, is that knowledge


60 attained during a training or professional development activity is generalized into targeted environments (Joyce & Showers, 1982). For example, athletic coaches seek to enhance transfer of learning in order to refine athletic skills. In an academic setting, a teacher who attends a social skills training may receive coaching to help him or her generalize the strategies endorsed in the curriculum into the classroom setting. In short, transfer of learning (or training) can be seen as a bridge between the initial learning environment (training or professional development activity) and skill use and implementation. The outcomes of transfer of learning can be categorized across several dimensions. First, transfer can be either positive or negative (Cree & Macaulay, 2000). Positive transfer is defined as occurring when new learning enhances prior knowledge. On the other hand, negative transfer is noted when new learning impedes previous knowledge. Lateral and vertical transfer also can occur (Showers, 1982a). Specific to these dimensions, lateral transfer is reported when skills attained generalize to others in an individual’s repertoire; whereas vertical transfer reflects the process where new knowledge provides a deeper understanding of prior knowledge. Not surprisingly, transfer of training strives to achieve learning that is positive, lateral, and/or vertical (Cree & Macaulay, 2000; Showers, 1982a; Showers, 1982b). Transfer of learning has been suggested to increase when coaching supplements training (Showers, 1982; Neubert, 1988). Most often, the coaching role is assumed by people either internal or external to the system. Coaches can


be defined further by their level of expertise. That is, when a coach's skill level is on par with that of the recipient of the coaching, the term "peer coaching" is used. In contrast, an expert coach is one who is perceived to possess higher levels of skill development or technical prowess than the individual receiving the coaching (Neubert, 1988). Although most commonly related to the arena of sports, coaching roles also are found in business, management, and, most recently, education. Joyce and Showers (2002) discuss the potential outcomes of training when a coaching component is added. In general, their research identifies four training components tied to three potential outcomes. Specifically, training components can be identified as theory-based, during which the training focuses only on disseminating information and knowledge. Demonstration of skills is an additional component, which is bolstered further by trainee practice and role playing. The final training component is coaching. Three levels of acquisition are identified: thorough knowledge, which is surpassed by strong skills, which in turn are surpassed by transfer implementation. As has been noted throughout other research (e.g., Baker & Showers, 1984; Cree & Macaulay, 2000; Feltz, Chase, Moritz, & Sullivan, 1999), transfer implementation is the ultimate goal of coaching, with its recipients autonomously thinking with and applying the newly acquired skills (Showers, 1982). The alignment of these training components and outcomes provides a striking visual (see Figure 3). Notably, only when coaching was offered following


a 30-hour training for teachers in an academic setting were meaningful outcomes in transfer of learning observed (Joyce & Showers, 2002). To be exact, when only discussion of theory was provided, just 10% of participating teachers demonstrated a command of the theory studied, with little or no transfer of this theory into their repertoire of skills. When demonstration of skills was added, 20% of teachers were observed to possess strong skills; however, even this advantage was not related to skill transfer. Practice boosted these percentages again, but still only 5% of teachers infused the skills into their classroom instruction. In sharp contrast, when coaching was added, 95% of teachers accomplished this goal, with transfer of learning documented.

Figure 3. Estimated Products of Training (Joyce & Showers, 2002). [Bar chart showing the percentage of teachers attaining thorough knowledge, strong skill, and transfer implementation under four training conditions: study theory, add demonstration, add practice, and add coaching.]

Additional data also suggest that teachers who have received coaching not only practiced the skills more frequently but also implemented these skills


more appropriately than did teachers who did not receive coaching (Showers, 1982a; Showers, 1982b). Observational data gathered by Showers (1982a) indicated that, among uncoached teachers who did attempt to implement the strategies in their classrooms, application was static and locked into the framework within which the strategies were presented during the training. In contrast, coached teachers were reported to extend their skill application beyond the exemplars described in the training. After six months, differences between coached and uncoached teachers remained, with coached teachers showing greater long-term retention of knowledge and skills as noted in interviews, observations, and reviews of lesson plans (Showers, 1982b; Showers, 1994).

The structure of coaching. Coaching can follow many paths; however, specific elements appear in all models. In general, three to five common cyclical processes can be identified. That is, each coaching session typically contains some form of a conference or planning meeting. During this time, which could begin and/or conclude the coaching session, the recipient's specific needs relative to the acquisition of the targeted skill can be discussed (Kagan, 1994; Neubert, 1988). An observation of the teacher implementing the skill often follows. At this time, the coach observes the agreed-upon skill, noting referent and objective behaviors (Kagan, 1994). A feedback session provides the medium for discussing observational data. Importantly, however, it is suggested that feedback from the individual receiving the coaching be elicited first (Harvey & Struzziero, 2000; Kagan, 1994). Following his or her response,


the coach then offers feedback, which is couched in a positive approach of "one thing to grow on, ten things to glow on" (Kagan, 1994, p. 21:5). Importantly, only agreed-upon or targeted skills are addressed, with the understanding that overall teacher performance is not evaluated but, instead, skill application is refined (Chan & Latham, 2004; Kagan, 1994; Neuman, 1988). Coaching, however, is not successful in promoting adaptation if all feedback is positive and praising. Thus, problem-solving processes must be tapped to enhance skill development (Harvey & Struzziero, 2000; Kagan, 1994). During this phase of coaching, specific skill components are identified, barriers to their use are explored, and modeling of appropriate implementation is provided (Kagan, 1994). In fact, Neubert (1988) asserts that coaches in educational settings gain more credibility when they teach alongside the individual under their tutelage. Goal setting for future skill use and coaching sessions often concludes the coaching session (Chan & Latham, 2004; Neubert, 1988).

Does coaching work? Questions remain. When coaching was added as a supplement to training, did those receiving coaching use their new skills in their respective settings? Did they use the skills appropriately? Were there long-term effects? And, what makes an effective coach? Qualitative data from an analysis of the effects of coaching that was added to a professional development component for teachers answer some of these questions (Joyce & Showers, 1982). First, teachers reported that coaching helped them take more risks while implementing their newly acquired skills. Second, these teachers admitted that


without coaching, they would have abandoned their attempts at applying the strategies. In short, aside from technical assistance, coaching appears to add an element of accountability to the transfer of learning. Research conducted by Feltz, Chase, Moritz, and Sullivan in 1999 examined coaching in the world of sports. In this study, data from 189 high school basketball coaches were collected. Data included coaches' perceptions about their teams' abilities, community support, and histories of wins and losses. From these data, it was found that highly effective coaches (identified by the greatest history of wins) were noted to use higher rates of praise and positive reinforcement, with less time dedicated to technical instruction and organizational management. A possible reason for this finding is that expert coaches are more fluid and efficient in their instruction, thus delivering skill instruction more directly. Implications of this study suggest that the quality of coaches' behavior is more important than the quantity. Chan and Latham (2004) examined coaching as an added element in two MBA programs in Canada and Australia. Further examination of the differential effects of external, peer, and self-coaching also was made. Overall, 53 students (Canadian participant sample n = 30) received coaching from one of the three types of coaches. Coaching occurred twice during the semester. Goal setting and self-management techniques were the targeted behaviors for the coaching sessions. Results from these studies indicate that external coaches were superior to the other types of coaches in bringing about positive behavior change and


increased knowledge (i.e., higher grades) relative to coaching provided by peers or self-coaching. These findings are reported to closely parallel those within a study conducted by Hillery and Wexley in 1974. Also notable, external coaches were perceived as more credible and therefore more favored. Despite these findings, one significant limitation must be noted: no control group was created. Thus, no comparisons could be made between students in the MBA program who did and did not receive coaching.

An additional study conducted by Streufert (1984) examined the effects of coaching as it aided implementation of the Challenge Reading Program into Gifted program classrooms. To examine the impact of coaching, seven of fourteen teachers were matched on initial skills and assigned to either coaching or no coaching conditions. Coaching was provided by former teachers experienced in the curriculum. Direct and indirect effects were examined, with teachers' competency assessed with questionnaire measures, while student achievement was assessed with the Woodcock-Johnson achievement scales. Overall, no indirect effects on student skills were attributable to coaching: all student achievement increased. That is, significant differences were not found between groups. Direct effects were documented, with competency increasing over time for the coached teachers and decreasing over time for un-coached teachers. As has been noted in other research utilizing standardized measures of intelligence or achievement, failure to detect differences in achievement between groups may be linked to measurement error in that these tools are not


sensitive to short-term change. More recent research by Joyce and Showers (2002) found that when training was delivered as theory only in lectures, discussion, or readings, knowledge increased by an effect size of .50. Specifically, when mean posttest scores were mapped along the pretest score distribution, they now fell at the sixty-seventh percentile. Interestingly, when coaching was added, an effect size of 1.42 was found, with 90% of participants identified as possessing strong skills.

Coaching is an integral part of an early literacy curriculum known as the Early Literacy and Learning Model (ELLM) (Fountain, 2002), which is being implemented out of the Florida Institute of Education and the University of North Florida. In this model, a network of coaches assists preschool, kindergarten, and first grade teachers with their implementation of research-based strategies for reading instruction. Coaching within the ELLM model is housed within the professional development component and is viewed as the conduit between a two-day intensive literacy seminar and successful implementation of the curriculum into the classrooms of the participating teachers. Weekly coaching visits cycle through a process of modeling, observing, providing feedback, and developing action plans. Approximately 189 teachers implement ELLM in their classrooms across six counties in Florida. Indirect effects of the implementation of the ELLM curriculum and coaching model were assessed (Fountain, 2002). Specifically, a norm-referenced test of early reading ability, the Test of Early Reading Ability – Third


Edition (TERA-3), and an alphabet recognition assessment were administered to children whose teachers were implementing the ELLM curriculum. Impact was assessed through a pretest/posttest design. Overall, findings from this evaluation component indicated that children's skills had increased over time. In addition, 85% of kindergarten children were rated as proficient in letter recognition compared to 66% of the national sample. Further analysis also revealed that while 30% of the children's TERA-3 pretest scores fell in the Above Average range, 47% of posttest TERA-3 scores fell in the Above Average range. Despite the ELLM project presenting as an ideal venue for examining the impact of implementing a research-based instructional curriculum coupled within a framework of coaching, significant flaws in the evaluation of this project exist. First, children's progress was monitored using norm-referenced instruments. As a result, small changes in children's skill attainment may have been missed due to the limited sensitivity of the instruments. Most striking from this evaluation component, however, is that no control or no-coaching groups were included. Thus, it becomes difficult to examine the true impact that either participation or coaching had on children's literacy development.

Summary of the coaching literature. The assertion that transfer of learning is a fact of life is supported by evidence that people do learn (Fleishman, 1987). Documenting its effects, particularly when it is coupled with coaching, remains a notable void (Chan & Latham, 2004; Joyce & Showers, 2002; Neuman, 1988;


Showers, 1984). As asserted by Joyce and Showers (1982), without coaching, relatively few teachers would transfer skills attained in a professional training session into their classroom settings. This assertion serves as an impetus for further research on the impact of coaching. Direct and indirect effects of its application also should be examined.

Direction of Current Study

In summary, it appears that few children escape the onslaught of elements that negatively affect or hinder their development. Perhaps more daunting, it appears that many children are subjected to multiple risk factors such as poverty, family stress, sub-average daycare, and community violence. To counter this picture, research has sought to identify not only what eases these influences but also what enhances a child's development. Not surprisingly, much research has focused on and identified the benefits of high-quality early education. Indeed, the positive results gleaned from studies assessing these influences served as an impetus for Ziegler's (1998) advisement that free public education should begin at three years of age. At this point, only the state of Georgia offers full-day pre-kindergarten to all four-year-old children, with 75% of children attending such programs. The question also remains as to how these early educational experiences influence the development of later academic skills, most notably in reading. Given the need for early instruction and scaffolded experiences, this study examined how implementation of an early literacy-based curriculum for early


70 childhood educators affected the development of children’s early literacy skills. Fortunately, efforts to support this mode of intervention have been developed and implemented under the auspices of the Early Learning Opportunities Act (ELO) grant. In short, the ELO grant was brought to fruition by the Pinellas County Schools Readiness Coalition and reflected collaboration among several agencies such as Directions for Mental Health, Coordinated Child Care of Pinellas County, Pinellas County Schools Readiness Coalition, and Florida Mental Health Institute. The key goal that drove implementation of the grant was the desire to create a learning community of early childhood professionals who were empowered with research-based skills targeted at enhancing learning readiness and literacy development in young children. Six elements were central to the ELO grant. First, early childhood educators were offered the opportunity to participate in a three-credit college course, Language Development in Young Children (LAE 2000), that focused on the HeadsUp! Reading (HUR) curriculum. Scholarships for enrollment in this class offered by St. Petersburg College in Pinellas County, Florida were provided to participants in the grant. The LAE 2000 course was offered at two locations during the spring and summer 2004 semesters. Faculty who facilitated this course also were certified as HUR instructors. A second feature of the ELO grant was the provision of a coaching partner or Literacy Coach (LC). The LCs conducted weekly, face-to-face visits of about an hour in duration to assist teacher/participants in applying the skills discussed


71 in the LAE 2000 course into their preschool settings. Specifically, three LCs were hired who held a baccalaureate degree in early childhood education and extensive experience (more than 5 years) working in early childcare education. Appendix A contains a description of the qualifications required for the LC positions. As a supplement to their early childhood education backgrounds, LCs also were trained in the Early Literacy and Learning Model (ELLM) of coaching, which cycles through observation, feedback and modeling activities between the LC and teacher/participants (Fountain, Cosgrove, Wiles, Wood, & Senterfitt, 2001). At the center of the ELLM process is the identification of and agreement on goals to be addressed during the coaching cycle. Literacy Coaches also were required to attend all LAE 2000 class meetings along with the teacher/participants. A third element of the ELO grant affected the Directors of the childcare centers where participating teachers were employed. Specifically, center Directors were asked to attend a full day workshop held on a Saturday in April of 2004. The content of this workshop included a discussion of how to involve parents in enhancing literacy in the home setting as well as avenues promoting literacy throughout their entire childcare center. Finally, strategies for coaching the teachers/participants at their work sites were discussed with the understanding that center Directors would serve as the coach after the LC visits had ended. Center Directors also agreed to support their participating teacher by


72 allowing him or her to dedicate time during their work day to share with at least one other staff member at the center the content and knowledge gained from the literacy activities. This sharing of information occurred no later than three days after each of the 15 classes. Further commitments included offering one or more of the following: 1) releasing the teacher early on the days when classes were held, 2) allowing the participating teacher to model and share the skills learned with at least two different community-based childcare centers that did not participate in the ELO grant, and/or 3) guaranteeing a pay increase of $.20 an hour following successful completion of the course. The provision of literacy resources served as an additional feature of the ELO grant. To supplement the LAE 2000 coursework, teacher/participants were given three text books: Learning to Read & Write: Developmentally Appropriate Practices for Young Children (Neuman, Copple, & Bredekamp, 2000), Starting Out Right: A Guide to Promoting Children’s Reading Success (Burns, Griffin, & Snow, 1999), and Much More Than the ABC’s: The Early Stages of Reading and Writing (Schickedanz, 1999). Classroom materials such as children’s books, an easel for reading big books, and puppets or other props that related to the stories read in the preschool classrooms also were provided. Additional materials included alphabet letters with an accompanying magnetic board and a minilibrary of hardcover books. With the exception of the LAE 2000 course texts, which were provided at the start of the course, the remaining resources were distributed at regular intervals throughout the fifteen-week LAE 2000 course.


73 An additional and relevant feature of the ELO grant was family outreach. Vinyl backpacks containing special reading materials, books, and information for parents were distributed in the teacher/participants’ classrooms where students took turns taking the materials home. Additional “Take Home” books also were provided to teacher/participants to distribute to the students in their class. Other reading materials that focused on the importance of early literacy development, available child development resources, and behavior management information packets made up a parent education portion of the Take Home reading program. Teacher/participants also were required to sponsor at least one literacy event for the families of their students. During this literacy event, which could take the form of a reading festival, reading related activities, literacy games, and tips for reading to children were shared. The final feature of the ELO grant was the addition of a program evaluation component. Specifically, the program evaluation team consisted of nine graduate students from the school psychology program at the University of South Florida who were supervised by a faculty member at the Florida Mental Health Institute. The overall goal of the evaluation component was to investigate the integrity, efficiency, cost, and efficacy of the implementation of the ELO grant. Integrity of implementation included tools for monitoring activities and time spent in these activities (e.g., teacher time logs, LCs’ time logs, monitoring forms completed by teachers following sharing of information with other staff members at their early education site). Outcome measures included literacy assessments


74 (IGDI measures), pre and posttests of teacher knowledge (LAE 2000 course exams), environmental observations of the classrooms (Early Literacy Observation Checklist), and participant focus groups conducted at the end of the spring 2004 semester.


CHAPTER THREE
METHOD

This chapter serves several purposes. First, it provides an overview of the basic structure of the research design for this study. Second, it describes the participants whose data were selected for analysis from the archival database. Next, a discussion is presented of the measures from this secondary data source that were used to answer the research questions from this study. Subsequent to this description, a timeline of the relevant activities that occurred as part of the Early Learning Opportunities (ELO) grant in Pinellas County, Florida is presented. Further, the processes followed during two waves of data collection (i.e., Time 1 and Time 2) are described. Treatment integrity is addressed next, with an analysis of environmental changes examined and a review of Literacy Coaches' notes presented. After this, the statistical methods that were used to answer the questions that drove this study are offered.

Overview of Research Design

The structure of this study reflects a review of secondary data that were collected following implementation of the ELO grant in Pinellas County, which sought to enhance literacy skills in children. Data selected from the archival source reflected three levels of treatment conditions, based on teachers' level of participation in the grant. Three participant conditions occurred: (1) literacy


training with coaching (LT/C), (2) literacy training with no coaching (LT/NC), and (3) no literacy training and no coaching (NL/NC). The first two conditions represented teacher/participants and their respective students who partook in the ELO grant activities, while the latter represented a control group of teachers and their respective students. Although teacher/participants were randomly assigned to the coaching and no coaching treatment conditions, selection for acceptance as a teacher/participant was not randomized. Random selection of childcare facilities for participation in the control group from a pool of teachers who were potential candidates for the summer session, however, did occur. Thus, this study reflects a quasi-experimental design using archival data to answer questions regarding the effects of teacher training in research-supported instructional literacy practices on the development of early reading skills in the students they taught. With the archival nature of the data in mind, it is important to note that references to teacher/participants, teachers, and student/participants or students reflect individuals who either took part in the ELO grant during the spring 2004 semester (as teacher/participants), were students of teachers who participated in the ELO grant during the spring 2004 semester (student/participants), or were teachers and students within the control group of the ELO grant.

ELO Participants

Teacher/Participants

Twenty-two out of the fifty teachers who participated in the 2004 spring


77 cohort of the Language Development in Children (LAE 2000) course served as participants in the program evaluation. Allocation of resources (e.g., Literacy Coaches, Program Evaluators) determined the number of classrooms, teachers, and students from whom data collection could be completed. Criteria established to select teachers for program evaluation component were as follows: employed in a childcare center, private pre-kindergarten, or Head Start; and, work with children between the ages of three to five years. That is, no teachers who worked with children under the age of three years were included in the spring program evaluation component. Further, teachers who worked in a family or home-based setting were not included. Finally, one teacher for whom a maternity leave was pending was not included as a candidate for participation in the evaluation of the ELO grant. Overall, 22 teachers were identified as meeting these criteria. In addition, 19 teachers out of 25 who were not selected for participation in the Spring LAE 2000 course but indicated interest in the summer offering of the course also met these criteria and were solicited for participation in the control group. In the end, the teacher/participant sample consisted of 41 teachers, 40 of whom were female. Twelve teacher/participants formed the literacy training and coaching group (LT/C), 10 teacher/participants were assigned to the literacy training and no coaching group (LT/NC), and 19 teacher/ participants formed the control group (NL/NC). Table 2 contains descriptive information about the teacher/participants in the sample. Notable from these data is that teacher/participants in the LT/NC


group reported more years of experience teaching in early childhood education settings (M = 13.62) than did teachers in the LT/C (M = 8.24) and NL/NC groups (M = 7.99). Furthermore, only 33-37% of teachers in the LT/C and LT/NC conditions reported attaining a post-secondary education, whereas 95% of teachers in the NL/NC condition reported education beyond the secondary level. Consequently, screening to inspect these differences was conducted (an illustrative sketch of this type of screening analysis appears below, before Table 3).

Table 2
Demographic Information for Teacher/Participants by Condition

              Number of Participants   Experience (in Years)        Highest Level of Education (% of Teachers)
              Teachers   Students      M (SD)          Range        High School   Some College   AA    4 Yr
LT/C          12         165           8.24 (4.67)     3 to 17      67%           17%            8%    8%
LT/NC         10         106           13.62 (7.99)    5 to 28      60%           0%             0%    40%
NL/NC         19         115           7.99 (4.51)     1 to 19      5%            47%            26%   21%
All           41         386           9.68 (6.25)     1 to 28      37%           27%            15%   22%

Participating teachers were employed in one of three types of early childhood settings. Specifically, 61% of the teacher/participants were employed in a private early childhood setting, 12% were employed in a Head Start program, and 27% taught in a faith-based early childhood center that also offered a Christian-based curriculum. Table 3 contains the distribution of teacher/participants across the types of centers.
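The screening analyses reported in this section compare teacher characteristics across the three conditions. Before turning to Table 3, the following is a minimal sketch of one such screening analysis (a one-way ANOVA on years of teaching experience). It is offered only as an illustration, not as the study's actual analysis code, and the group values are hypothetical placeholders rather than the study's raw data.

# A minimal, hypothetical sketch of a one-way ANOVA screening for
# differences in mean years of teaching experience across conditions.
from scipy.stats import f_oneway

lt_c_years = [3, 5, 8, 9, 11, 17]     # hypothetical LT/C teachers
lt_nc_years = [5, 9, 13, 16, 22, 28]  # hypothetical LT/NC teachers
nl_nc_years = [1, 4, 7, 8, 10, 19]    # hypothetical NL/NC teachers

result = f_oneway(lt_c_years, lt_nc_years, nl_nc_years)
print(f"F = {result.statistic:.2f}, p = {result.pvalue:.3f}")

A categorical comparison (e.g., level of education by condition) would instead use a contingency-table test such as Fisher's Exact Test, as described below.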


Table 3
Types of Child Care Centers and Numbers of Participating Teachers

                       Private Centers*                 Head Start Centers               Faith-Based Centers
Treatment Conditions   Number of Sites   Teacher/       Number of Sites   Teacher/       Number of Sites   Teacher/
                                         Participants                     Participants                     Participants
LT/C                   8                 8              3                 3              1                 1
LT/NC                  7                 7              2                 2              1                 1
NL/NC                  4                 10             0                 0              3                 9
Total Sample           19                25             5                 5              5                 11

Note. * The number of private child care centers and teachers does not include faith-based sites.

Due to recommendations that Chi-Square tests may not be valid when observed frequencies in any of the classification cells are less than five, which was noted in several instances in these data, Fisher's Exact Test was conducted to look at differences between the expected versus the observed frequencies of teachers in these types of settings across conditions (Hatcher & Stepanski, 1994; Thorne, 1989). Results from this analysis documented a link between treatment conditions and participating types of child care centers in these data. Visual inspection reveals that more Head Start centers were present in the treatment conditions (i.e., literacy training and coaching – LT/C; literacy training and no coaching – LT/NC). Sites with no literacy training and no coaching (NL/NC) also were more often represented by teachers identified as teaching in a faith-based center than were sites in the treatment conditions (LT/C and LT/NC). Descriptive information about the teacher/participants in the sample is presented in Table 4. Notable from these data is that teacher/participants in the LT/NC group reported more years of experience in early childhood education settings than did teachers in the LT/C and NL/NC groups. An analysis of


variance (ANOVA) was conducted to explore these differences further and documented that no significant differences existed among the levels of experience (in years) that teacher/participants reported across conditions, F (2, 37) = 2.91; p = .067.

Table 4
Demographic Information for Teacher/Participants by Condition

               Number of                        Experience (in Years)   Highest Level of Education
               Teacher/       Student/          M (SD)                  High School   Some College   AA   4 Yr Degree
               Participants   Participants
LT/C           11             165               7.75 (4.77)             8             2              0    1
LT/NC          10             106               13.10 (8.23)            6             0              0    4
NL/NC          19             115               8.00 (4.57)             1             9              5    4
Total Sample   40             386               9.68 (6.25)             15            11             5    9

Further exploration of the relationship between the teachers in the treatment conditions and levels of education was conducted. To accomplish this, Fisher's Exact Test again was conducted. Results of this analysis provided documentation that a relationship did exist between teachers' level of education and the treatment condition in which they served. Specifically, 27% of teachers in the LT/C condition reported having post-secondary education compared to 40% of teachers in the LT/NC condition. In contrast, 95% of teachers in the NL/NC group indicated that they had post-secondary education.

Student/Participants

Six hundred and twenty-three children who were enrolled in participating teachers' classes were solicited for participation. These 623 children reflected students who were between the ages of three and five years, identified English as


their primary language, and did not present with any diagnosed cognitive delays or sensory deficits (e.g., hearing or visual disabilities). Students who did not meet these criteria were not given consent forms. Table 5 contains information describing the total number of students who were solicited for participation in the study. Also provided in this table is the number of signed consents returned. Percentages for each category are reported as well. Specifically, the percentage of returned consents was arrived at by dividing the total number of signed consents returned in each classroom or condition by the total number of consents distributed in that setting. Additionally, the "Percentage Included" column represents the number of children who returned signed consents and were present and willing to participate in the study during Time 1 of data collection. This figure was derived by dividing the number of participants by the number of returned signed consents. (A brief worked sketch of these calculations follows Table 5.)

Table 5
Return Rate of Consents by Classroom and Condition

Classroom Code   Total Consents    Total Consents Returned    Total Included in Data Collection
                 Distributed (N)   N        % Returned        N        % Included

Literacy Training and Coaching
701              20                20       100%              20       100%
702              19                19       100%              19       100%
703              18                15       83%               14       93%
704              18                18       100%              18       100%
708              15                14       93%               14       100%
709              13                6        46%               6        100%
722              17                15       88%               15       100%
723              15                13       87%               12       92%
743              15                13       87%               13       100%
744              10                6        60%               5        83%
745              21                20       95%               20       100%
749              13                10       77%               9        90%

Table continued on next page.


Table 5 (Continued)
Return Rate of Consents by Classroom and Condition

Classroom Code   Total Consents    Total Consents Returned    Total Included in Data Collection
                 Distributed (N)   N        % Returned        N        % Included

Literacy Training and Coaching
Total            190               169      89%               165      98%

Literacy Training and No Coaching
810              15                8        53%               6        75%
814              15                14       93%               14       100%
818              22                5        23%               5        100%
819              17                11       65%               11       100%
820              15                12       80%               12       100%
825              17                10       59%               10       100%
833              16                9        56%               8        89%
834              21                18       86%               18       100%
835              10                9        90%               9        100%
850              20                13       65%               13       100%
Total            168               109      64%               106      97%

No Literacy Training and No Coaching
905              18                8        44%               8        100%
906              11                6        55%               6        100%
907              14                6        43%               6        100%
911              20                4        20%               3        75%
915              8                 4        50%               3        75%
916              9                 8        89%               8        100%
926              16                5        31%               5        100%
927              18                6        33%               6        100%
928              17                6        35%               3        50%
929              18                4        22%               2        50%
930              12                6        50%               4        67%
936              15                15       100%              15       100%
937              10                9        90%               7        78%
938              15                14       93%               14       100%
940              14                4        29%               3        75%
941              13                6        46%               6        100%
942              8                 3        38%               3        100%
946              17                9        53%               9        100%
947              12                4        33%               4        100%
Total            265               127      48%               115      91%

Total Sample
All              623               405      65%               386      95%

An Analysis of Variance (ANOVA) was conducted to determine whether differences existed in the return rate for consents across conditions, F (2, 38) = 9.227, p = .001. Results indicated that significant differences were present.
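The following is a minimal sketch of the two percentages reported in Table 5; the figures used are classroom 703's values from the table, and the snippet is offered only as an illustration of the arithmetic described above, not as the study's analysis code.

# Minimal sketch of the Table 5 percentages: the return rate divides signed
# consents returned by consents distributed, and "Percentage Included" divides
# children assessed at Time 1 by consents returned. Values are classroom 703's.
distributed = 18
returned = 15
included = 14

percent_returned = returned / distributed * 100  # about 83%
percent_included = included / returned * 100     # about 93%
print(f"Returned: {percent_returned:.0f}%  Included: {percent_included:.0f}%")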


Post-hoc testing (Tukey's Honestly Significant Difference test) revealed that more consents were returned in teachers' classrooms where coaching took place than in those that served as control sites (NL/NC). No other significant differences were noted. An additional univariate analysis also was conducted to examine whether the return rate of consents differed between faith-based and non-faith-based early childhood centers. Findings here documented that significant differences were present, with higher rates of consents returned at sites that were not faith-based, F (1, 39) = 21.96, p = .001. Demographic data were not accessible for students from whom consent was not received. Therefore, analysis to determine differences between those students from whom consent was received and those from whom it was not could not be conducted.

Two hundred and seventy-one children formed the original Spring 2004 ELO student/participant treatment groups. All 271 children were students of the twenty-two teachers who were participating in the ELO grant in either the concurrent or delayed coaching conditions. An additional 115 children served as the student/participant control group based on their teacher being one of the 19 who agreed to participate in the control group (i.e., they did not attend the LAE 2000 course, were not assigned a Literacy Coach, and did not receive resources that were part of the spring ELO grant). Demographic data describing the children who participated in this study are provided in Table 6. Also included in


84 Table 6 is information on the attrition rate from Time 1 to Time 2 of data collection. Overall, a 14% attrition rate was noted across the total sample. More specifically, 14% was noted among student/participants in the LT/C condition, 10% among student/participants in the LT/NC condition, and 11% occurred among student/participants in the NL/NC condition. Notably, this 14% attrition rate is comparable to the mobility rate often reported in early childhood education centers where average student turnover rates of 12-18% are found (Coordinated Child Care of Pinellas County).


Table 6
Descriptive Information and Changes in Student/Participant Sample From Time 1 to Time 2

               Number of Student/Participants           Race Distribution
               Male        Female      All              White        AA          Hisp        Asian      Other
               T1   T2     T1   T2     T1    T2         T1    T2     T1   T2     T1   T2     T1   T2    T1   T2
LT/C           75   66     90   72     165   138        94    82     39   30     10   9      4    3     18   14
LT/NC          54   49     52   50     106   99         79    76     11   9      8    7      3    3     5    4
NL/NC          50   44     65   58     115   102        89    83     20   13     2    2      2    2     2    2
Total Sample   179  158    207  180    386   338        262   241    70   52     20   18     9    8     25   20

Note. T1 = Time 1, T2 = Time 2, AA = African American, Hisp = Hispanic.
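For readers following the attrition figures reported before Table 6, the snippet below is a minimal sketch of how an attrition rate between two waves of data collection can be computed; the counts used are hypothetical placeholders, not the study's exact Time 1 and Time 2 sample sizes (those appear in Table 6).

# Minimal sketch of an attrition-rate calculation between Time 1 and Time 2.
# The counts are hypothetical placeholders; see Table 6 for the study's data.
time1_n = 100
time2_n = 86

attrition_rate = (time1_n - time2_n) / time1_n * 100  # 14% in this example
print(f"Attrition from Time 1 to Time 2: {attrition_rate:.0f}%")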


86 Measures Literacy Skills Literacy skills of students in the preschool setting were measured with the Individual Growth and Development Indicators (IGDI) assessment tools (McConnell et al., 1998). Two forms of the instrument were utilized. The preschool version was administered to all children from the ages of three to five years. Further, the school-aged version (i.e., DIBELS) also was administered to those students who were identified as entering kindergarten in the fall of 2004 (i.e., students whose birthdates were prior to September 1, 1999). Further description of each of these IGDI measures follows. Preschool IGDI The preschool form of the IGDI was developed by McConnell and McEvoy at the University of Minnesota. Their efforts were driven by the goal of developing a General Outcome Measure (GOM) that assessed early literacy skills such as expressive language and phonemic awareness in children between the ages of three to five years (McConnell, Priest, Davis, & McEvoy, 2002; Priest et al, 2001). Three subtests are included in the preschool IGDI: Picture Naming, Alliteration, and Rhyming. These subtests will be presented next. Picture Naming. The Picture Naming subtest assesses expressive language skills while it asks children to identify common objects (e.g., house, dog, fish) depicted in pictures presented to them (McConnell et al, 2002). Four


sample items are presented first, with feedback provided. Following presentation of the sample items, the examiner tells the child that he or she will show him or her more pictures. An additional prompt reminds the child to name the pictures as fast as he or she can. The examiner begins timing as he or she displays the first card. If a child does not respond within three seconds of being shown a card, the examiner asks the child, "What do we call this?" If the child does not answer, then the card is placed into a pile along with incorrectly named cards, and the next card is shown. After the one-minute time limit has elapsed, the correctly identified cards are counted. This number becomes the Picture Naming score.

Alliteration. The Alliteration subtest taps early phonemic awareness by engaging children in tasks that ask them to identify pictures of objects that start with the same sound. For example, a child would either verbally or through pointing indicate that dice and dog begin with the same sound. Six sample cards are presented with decreasing levels of support and feedback provided by the examiner. When the examiner has finished presenting the sample cards, children who answer one or fewer cards correctly are transitioned into the next subtest (Rhyming). In contrast, children who correctly answer at least two out of the four sample cards continue this task, during which time the examiner starts the timer, identifies the images on the card, and asks which picture below starts with the same sound as the picture on the top of the card. For example, "Here is a dog, rock, desk, and skate. Which picture [pointing to the bottom row] starts with the same sound as dog?" If a child does not respond


in three seconds, the next card is shown. Cards eliciting accurate responses are placed in one pile. Cards eliciting inaccurate responses or non-answers are placed in a separate pile. Two minutes are allowed for the Alliteration subtest, with a child's score on this subtest reflecting the number of correctly identified alliteration pairs from the cards presented in the two-minute span.

Rhyming. The last subtest, Rhyming, also measures early phonemic awareness skills. Specifically, it asks children to identify objects whose names rhyme. For example, a child could point to or verbalize that a star and car sound the same. The Rhyming subtest follows a presentation format similar to that of the Alliteration subtest. That is, six sample cards are presented. Failure on more than two of the last four samples results in discontinuation of the subtest. In contrast to the Alliteration subtest, however, the Rhyming task asks children to point to one of three images on the bottom row of a card that sounds the same as or rhymes with the image depicted on the top of the card. During the subtest, the examiner identifies all images that appear on the card and then follows this naming process with a reminder of the task requirements, i.e., "This is a hat, boat, fan, and cat. Point to the picture that sounds the same as hat." Timing of this subtest begins with the presentation of the first card and continues until two minutes have elapsed. Cards eliciting correct responses in the two-minute period are placed into one pile while cards receiving incorrect responses are placed into a second pile. A child's score on the Rhyming subtest represents the number of similarly sounding pairs of objects that he or she could identify from the stimulus


cards in two minutes (or the number of cards in the correct pile).

Psychometric properties. Priest, Davis, McConnell, and Shinn (1999) and Missall and McConnell (2004) have evaluated the psychometric properties of the preschool IGDI. Results of their efforts offer support for its use as a valid and reliable indicator of children's literacy growth and development. For example, moderate correlations (r = .69) have been documented when the Picture Naming subtest and a second test of expressive language, the Peabody Picture Vocabulary Test – Third Edition (PPVT-3), were administered approximately one to two weeks apart from each other. An additional feature of the preschool IGDI is its purported sensitivity to short-term change. Evidence to document this quality can be found in data that examined change over time in a sample of typically developing children (r = .63) and developmentally delayed children (r = .48). Specifically, higher rates of progress were documented in typically developing rather than developmentally delayed children. One-month alternate form reliability also is moderate, with a range from r = .44 to .78 obtained. Further examination of the Alliteration and Rhyming subtests of the IGDI has been conducted as well (Missall & McConnell, 2004; McConnell et al., 2002). With regard to the Alliteration subtest, moderate correlations with other measures of vocabulary (PPVT-3, r = .40 to .57) and phonological skills (Test of Phonological Awareness [TOPA], r = .75 to .79) were documented. Alliteration scores also have been positively correlated with age (r = .61). Finally, test-retest reliability over a three-week interval has been used to support the stability of the


Alliteration subtest, r = .46 to .80. The Rhyming subtest follows a similar trend, with correlations between it and the PPVT-3 (r = .56 to .62), the TOPA (r = .44 to .62), and DIBELS Letter Naming Fluency (r = .48 to .59) marking moderate to strong concurrent validity (Missall, 2002; McConnell et al., 2002). Further research also has documented the sensitivity of this measure, with positive correlations between chronological age and Rhyming scores reported (r = .46). Preliminary data that offer a glimpse at the trends for typically developing children and children living in poverty have been published by Priest, Silberglitt, Hall, and Estrem (2002) and Missall and McConnell (2004). Table 7 contains the IGDI means and units of change per month that are presented in these studies (a brief sketch of how such a monthly rate of change can be computed follows the table).

Table 7
IGDI Means and Units of Change Per Month for Typically Developing Children and Children Living in Poverty at 53, 59, and 66 Months of Age

                        Picture Naming          Alliteration            Rhyming
                        Mean     Per Month      Mean     Per Month      Mean     Per Month
Typically Developing
At 53 Months A          ---      ---            5.23     .38            7.61     .38
At 59 Months B          16.97    ---            5.19     ---            6.29     ---
At 66 Months A          26.90    .44            ---      ---            ---      ---
Living in Poverty
At 53 Months A          ---      ---            4.28     .25            ---      ---
At 59 Months A          ---      ---            ---      ---            6.50     .95
At 59 Months B          16.51    ---            1.09     ---            1.68     ---
At 66 Months A          19.01    .28            ---      ---            ---      ---

Note. A Denotes research by Priest, Silberglitt, Hall, & Estrem, 2000, N = 90; B Denotes research by Priest, Silberglitt, Hall, & Estrem, 2000, N = 69; --- Denotes statistics not available.
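As referenced above, a unit of change per month for an IGDI measure is simply the difference between two administrations divided by the number of months between them. The snippet below is a minimal sketch of that arithmetic; the scores and ages are hypothetical and are not taken from Table 7.

# Minimal sketch of a per-month rate of change for an IGDI measure, computed
# from two administrations. The scores and ages below are hypothetical.
score_t1, age_months_t1 = 17.0, 59   # e.g., a Picture Naming score at first testing
score_t2, age_months_t2 = 20.0, 66   # score for the same child at second testing

change_per_month = (score_t2 - score_t1) / (age_months_t2 - age_months_t1)
print(f"Change per month: {change_per_month:.2f}")  # about 0.43 in this example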


School-Aged IGDI

Literacy skills for those students who were identified as entering kindergarten in the fall of 2004 also were measured by the Dynamic Indicators of Basic Early Literacy Skills – Sixth Edition (DIBELS). The DIBELS is a standardized and individually administered assessment tool designed to tap the development of early literacy and reading fluency skills (Good & Kaminski, 2002). Two subtests (i.e., Letter Naming Fluency and Initial Sounds Fluency) were administered in this study. Data from these two subtests purport to assess two of the five Big Ideas in reading categories. That is, Letter Naming Fluency tapped skills reflective of the phonics domain, whereas Initial Sounds Fluency responses measured the development of phonemic awareness skills (Kaminski & Good, 1996). After this overview, results of research that has examined the psychometric properties of the DIBELS will be offered.

Letter Naming Fluency. Letter Naming Fluency (LNF) taps early phonics skills. During this subtest, students are given one minute to name as many letters as they can from a probe displaying randomly placed upper- and lower-case letters of the alphabet. Timing of this subtest begins immediately after the examiner introduces the activity, i.e., "Here are some letters. I want you to name as many letters as you can. When I say begin, start here and go across the page…Ready? Begin." Hesitations of more than five seconds are followed by the examiner identifying that letter and then pointing to the next letter and asking, "What letter?" The total number of correctly identified letters during the one-


minute timed interval becomes the child's LNF score.

Initial Sounds Fluency. During the Initial Sounds Fluency (ISF) subtest, students are asked to demonstrate their awareness of initial sounds in words. To accomplish this task, children either point to or verbalize pictures of objects that start with the sound vocalized by the examiner. A second task within this subtest prompts students to vocalize the initial sound in the name of an object that is depicted in a picture placed in front of them. For example, the examiner asks, "What sound does 'foot' begin with?" Timing for this subtest is accomplished by the examiner starting his or her stopwatch immediately after giving the scripted directions and then stopping at the child's response or within 5 seconds of the prompt given by the examiner. At this point, the examiner stops but does not clear the stopwatch. The stopwatch is started once again after the examiner finishes giving the prompt for the next item. Thus, a cumulative measure of a child's think time is obtained. Scoring for the ISF subtest involves totaling the number of correct responses and multiplying the sum by 60. This obtained number then is divided by the total number of seconds representing the child's response time. This final figure becomes the child's ISF score (a brief worked sketch of this calculation appears below).

Interpretation of DIBELS scores. Outcome scores on the DIBELS vary by subtest, as do Benchmark expectations. In general, higher scores indicate higher levels of skill acquisition. Table 8 depicts the alignment of scores with risk indicators that have been adopted for Reading First schools in the state of Florida.
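Before turning to score interpretation, the following is a minimal sketch of the ISF scoring arithmetic just described: the number of correct responses is multiplied by 60 and divided by the child's cumulative response time in seconds, yielding correct initial sounds per minute. The counts used here are hypothetical.

# Minimal sketch of the ISF score described above. The values are hypothetical.
correct_responses = 12
response_time_seconds = 45.0  # cumulative "think time" recorded on the stopwatch

isf_score = correct_responses * 60 / response_time_seconds  # correct initial sounds per minute
print(f"ISF score: {isf_score:.0f}")  # 16 in this example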


Table 8
Interpretation of DIBELS Scores at Kindergarten Entry According to Florida Center for Reading Research Benchmarks

                          Score Interpretation (a)
                          High Risk    Moderate Risk    Low Risk
Letter Naming Fluency     0-1          2-8              9 and above
Initial Sounds Fluency    0-3          4-8              9 and above

Note. (a) High Risk reflects seriously below grade level performance and a need for intensive intervention; Moderate Risk reflects below grade level performance indicating a need for intervention; Low Risk indicates at grade level performance.

Psychometric properties. Strong reliability bolsters the use of DIBELS subtests. For example, Good, Gruba, and Kaminski (2002) found strong test-retest reliability for kindergarten through fifth grade subtests (i.e., r = .92 to .97). Criterion-related reliability also was reported to range from .52 to .91. Psychometric properties such as these add credence to the use of DIBELS as a progress monitoring tool. In fact, DIBELS measures have been adopted on a large scale in numerous states as a system for progress monitoring the acquisition of reading skills (Simmons, Kame'enui, Good, Harn, Cole, & Braun, 2002).

Developmental Level

A child's developmental level was assessed using the Early Screening Inventory-Revised (ESI-K) (Meisels et al., 1993). The ESI-K is an individually administered and norm-referenced screening tool that purports to assess children's acquisition of skills that fall within three areas of development: Visual-Motor/Adaptive, Language and Cognition, and Gross Motor skills. Within the Visual-Motor/Adaptive domain, a child was asked to engage in a drawing


task, build a four-dimensional model with blocks, and play a visual memory game that requires eye-hand coordination and short-term memory. Tasks in the Language and Cognition portion of the ESI-K gather data about a child's language comprehension, verbal expression, ability to reason and count, and ability to remember auditory information. Finally, the Gross Motor subsection asks children to perform physical acts such as hopping on one foot, balancing, and skipping. Administration time for the ESI-K ranges from 15 to 20 minutes. Obtained scores on the ESI-K can be classified into one of three categories: OK, Rescreen, or Refer. Numerical scores also can be obtained. For the purpose of this study, only the numerical scores will be used as a source of data to answer the research questions. Table 9 provides details regarding the numerical and categorical descriptions of the scoring.

Table 9
ESI-K Scoring and Categorical Definitions

              Age (in years)
Categories    4.6 to 4.11    5.0 to 5.5    5.6 to 5.11    Description
OK            > 14           > 18          > 20           Child is developing normally.
Rescreen      10-13          14-17         16-20          Rescreen in 8-10 weeks.
Refer         < 9            < 13          < 15           Refer for evaluation.

Psychometric properties. Test-retest reliability is reported by Meisels et al. (2003), with a Cronbach alpha of .87 obtained during standardization procedures. As a screening tool, the ESI-K correctly identified 93% of children who subsequently were found to


95 have a significant delay or disabling condition (Meisels et al., 2003). Further correlations documenting the predictive nature of the ESI-K were conducted. Specifically, a correlation coefficient of .73 was reported between the ESI-K and the McCarthy Scales of Children’s Abilities administered within seven to nine months of each other. Given these data, Meisels et al., (2003) has promoted the ESI-K as a reliable and valid screening tool for identifying those students who may experience significant difficulties with the acquisition of academic curriculum. Treatment Integrity A modified version of the Early Literacy Observation Checklist (ELOC) (see Appendix B) was utilized as an index of literacy-related environment and teacher-student interaction variables (Justice, 2002). Most importantly, for this study, the ELOC was used as a measure of treatment integrity. That is, were teachers participating in the ELO grant implementing the strategies promoted in the LAE 2000 class? For example, the HUR training asserts that early childhood settings should provide an environment that is noted as containing many literacyrich stimuli (e.g., books placed on a shelf so that the front cover is in view, writing materials placed throughout classroom) as well as literacy supporting interactions (e.g., teachers using open-ended questions, conducting read-alouds with the class, pointing out similarities between words in common nursery rhymes). Thus, it was expected that the classrooms of teachers who were participating in the HUR training would exhibit these components. Treatment integrity,


96 consequently, was documented by the presence of 80% of these research-based strategies as measured by the ELOC. The ELOC was completed in all settings (i.e., LT/C, LT/NC, and NL/NC classrooms) at two points or during the first (Time 1) and second (Time 2) waves of data collection. Specifically, the ELOC gathered data regarding the presence or absence of environmental features (a dedicated space for reading, writing materials), functional characteristics (e.g., can students use books during play activities, can parents borrow books from the classroom), and teacher-student interaction styles such as linking topic content of a book being read to children’s lives and adults comments observed during reading. Four distinct areas are assessed with the ELOC: Storybook Reading, Classroom Library, Writing Center, and Print Environment. The ELOC was completed after a 30-minute observation of the classroom during which time a literacy activity had occurred. Observers informed the teacher of this need before the observation. Questions that could not be answered after the observation were clarified through a teacher interview. When this occurred, a note referencing the source of the data was included alongside the item. Two forced choice responses (i.e., “Yes,” “No”) follow the majority of the questions. Three, four, and five choice responses also are distributed throughout the checklist. Toward the end of the ELOC, a different response pattern is solicited. Specifically, observers indicated where reading materials are displayed around the classroom. Responses for this item include “No where,” “A few


97 places,” “Many places,” and “Everywhere.” Additional space also was provided so that details to support the ratings could be added. Modifications to the original ELOC (Justice, 2002) included the addition of two items (i.e., “Are printed materials displayed prominently in the early learning environment?” and “Are posters and signs displayed at eye level?”), and the expansion of ratings on two other items. Specifically, the original ELOC contained an open-ended item that asked how often group story time was held. Modification changed this item to a forced choice response format (i.e., never, once a week, two to three times a week, once a day, and more than once a day). A similar change was made to the question asking if there was a specific space for children’s independent and group writing activities. Three responses were provided: specific writing center, center set up only during choice time, or no place for writing. Finally, the ELOC was reformatted to increase the speed and accuracy of data collection. Modifications to the ELOC were driven by feedback from the ELO LCs and HUR facilitators. Scoring for the ELOC was completed by assigning point values to the responses. Appendix C contains a scoring key for the ELOC. Totals for each of the four separate areas (Storybook Reading, Classroom Library, Writing Center, and Print Environment) were obtained first and then summed into one Overall Literacy Environment score. Thus, higher scores reflected classroom environments that contained more indicators of a literacy-rich environment. ELOC data from LT/C and NL/NC classrooms was gathered by the


98 program evaluation team that consisted of seven School Psychology graduate students and three Lead Program Evaluators (who also were doctoral level School Psychology students). Literacy Coaches completed the ELOCs in the classrooms where teachers were receiving coaching. Training in completing the observation was provided by one program evaluator and focused on defining the terms and ratings in the checklist. After this training, dyad pairs (comprised of one lead program evaluator and one School Psychology graduate student or LC) completed the ELOC after observing a literacy activity in an early childhood classroom. Inter-rater agreement then was calculated by dividing the number of agreements by the total number of items. Agreement of .85 or above was required prior to use in the evaluation component of the ELO grant. If agreement was not achieved, a discussion of discrepancies between the dyad members took place after which time a second ELOC observation was completed. All Program Evaluators reached this level by the second observation. The ELOC also served as a forum for feedback for teachers in the coaching condition. In this setting, LCs shared their findings from the ELOC as part of their observation, feedback, and modeling coaching model. Teachers in LT/NC and NL/NC settings were not provided with feedback relative to observations made while completing the ELOC. Treatment Intensity The structure of the ELO grant proposed that teachers in the LT/C group would receive an average of 14 coaching sessions with their LC. It was expected


99 that these visits would last approximately one hour. Data to reflect the actual number and duration of visits was gathered and used to depict the intensity of the coaching. Appendix D contains the form that was completed by the LCs to reflect the frequency and duration of their coaching visits with each teacher. Another section of the form asked LCs to indicate the types of activities that they engaged in during their visits (e.g., observation, feedback, modeling, goal setting). Finally, based on feedback from the LCs that time also was dedicated to phone conversations, a column on this form was added so that LCs could further describe the coaching sessions (e.g., face-to-face, phone conversation). Socioeconomic Indicators Research examining reading or literacy development would be considered negligent if it failed to consider the impact that socioeconomic status ( SES) has as a contributing factor in academic success (e.g., Byrk & Raudenbush, 1992; NRP, 2000; Snow, Burns, & Griffin, 1998). Difficulty gathering information regarding the socioeconomic status of children’s households from childcare centers was encountered, however. Given this resistance, it was decided that data regarding the impact of the neighborhood within which the children/participants lived would be measured. To attain an estimate of the socioeconomic status of the households of the participating children, home zip codes were obtained. These zip codes then were compared to an internet-based GIS Mapping system that was developed by the Pinellas County Economic Development department as a tool for linking geographic locations with


demographic indicators such as racial distributions, home values, and median household incomes (http://www.silicombay.org/gis3/gis_content.cfm). Data are sorted by municipalities, census tracts, and zip codes. Home zip codes of participating children were entered into this internet system to attain an indicator of neighborhood socioeconomic status. Focus also has been redirected from the individual-level SES of students' households to the group-level SES of the school or community. Specifically, research conducted by White (1982), Horn and O'Donnell (1984), and Alwin and Thornton (1984) documents that the SES of the school unit, as opposed to that of the individual student, serves as the strongest predictor of academic success. For example, meta-analysis has found average correlations of .68 between SES at the school level and average achievement (White, 1982). In contrast, average correlations between academic achievement and SES at the individual level in this meta-analytic study were .23 (White, 1982). Recent hierarchical linear modeling analysis of this relationship found that mean school SES was predictive of reading and writing achievement but not of science or mathematics (Ma & Klinger, 2002). Caldas and Bankston (1997) also assert that students from low SES households are at less risk for academic failure when they attend schools in middle- to upper-class communities than when they attend schools in a low SES community. Given these findings, the SES of the childcare sites also was used as an indicator of socioeconomic status. Similar to the measurement of the child/participants' household SES, the socioeconomic


status of the preschool settings was based on the median income of their geographic locations as identified by the GIS Mapping system.

Procedure

Timeline of ELO Grant Activities

Although the ELO grant was not implemented fully until January of 2004, some activities were initiated in the spring of 2003. Table 10 depicts a timeline of the activities that are relevant to this study.

Table 10
Timeline of ELO Grant Activities from November 2003 to June 2004

Activities

November 2003 – Early Activities
- Recruitment flyers sent to all childcare settings.
- Selection of Teachers as Participants

December 2003
- Staffing Position Interviews (Literacy Coaches, Grant Manager, etc.)

January 2004 – Implementation of the ELO Grant
- Literacy Coaches hired
- USF IRB application submitted
- Program Evaluation team training in IGDI measures
- LAE 2000 course began
- Coaching (LT/C) and No Coaching (LT/NC) treatment groups identified
- LCs received assignments for coaching
- Control groups contacted for participation by Lead Program Evaluators
- Lead Program Evaluators met with prospective control group sites

Table continued on next page.


Table 10 (Continued)
Timeline of ELO Grant Activities from November 2003 to June 2004

Activities

February 2004
- IGDI Training held for Evaluators
- ELLM training held for LCs
- Literacy Coaches began weekly coaching with assigned teachers
- Time 1 (IGDI & ELOC) data collection began (continued for 3 weeks)
- LCs completed Time 1 ELOCs

March 2004
- Lead Program Evaluators completed ESI-K training

April 2004
- DIBELS training held for Program Evaluators
- Directors Workshop
- Time 2 IGDI, DIBELS, ESI-K, & ELOC data collection began (continued for 3 weeks)

May 2004
- Time 2 data collection continued
- LCs completed Time 2 ELOCs
- LAE 2000 course ends
- Focus Groups held with teachers

Recruitment and Selection of Teacher/participants

The first activity reflected the recruitment of the teacher/participants. This recruitment process was completed by the Pinellas County School Readiness


Coalition and included sending notification of the opportunity for participation to childcare settings, home day care settings, and home visiting teacher programs in Pinellas County. Participation in the grant was referred to as a scholarship opportunity for a HeadsUp! Reading project. Over 146 applications were returned (Appendix E contains a blank application). The Pinellas County School Readiness Coalition (PCRC) reviewed each application and selected one teacher from each center that applied. The researcher had no input into or control over the selection process. Although PCRC reported that preference was given to teachers with more limited experience due to their desire to provide assistance to those who were new to the field – an effort that was thought to help increase retention of early childhood education teachers – a review of the data indicated otherwise. Specifically, teachers in the LT/C and LT/NC conditions reported lower levels of education (only 33-40% of teachers in the treatment conditions reported post-secondary education, whereas 95% of teachers in the control sites reported having completed post-secondary education). Experience in early childhood settings also differed, with LT/NC sites reporting the highest mean years of experience (M = 13.59) while teachers in the control sites reported the fewest (M = 7.99).

Beginning of LAE 2000 Course

January 21, 2004 marked the first night of classes for the 15-week LAE 2000 course at the two St. Petersburg College campuses. Classes were held at two locations (the Gibbs and Seminole campuses). The first meeting served as an


introductory overview of the course and grant activities. The second meeting reflected the first day of HUR content (see Appendix F for a course syllabus).

Hiring and Training of Literacy Coaches

The Pinellas County Readiness Coalition posted a county-wide notification about the openings for Literacy Coaches for the ELO grant. Notifications were sent to childcare settings and local universities. Notice also was posted in the employment section of the St. Petersburg Times newspaper. Three candidates were hired for the Literacy Coach (LC) positions. All three held a baccalaureate degree in early childhood education with more than 5 years of experience working in early childcare settings (please consult Appendix A for more details about employment qualifications). It was expected that each LC would coach seven to eight teachers as they applied HUR strategies in their early childhood classrooms. As a guide for this process, LCs were trained in the Early Literacy Learning Model (ELLM) for coaching teachers implementing literacy instruction. A consultant from Coordinated Child Care of Pinellas County who was trained in the ELLM coaching model presented this training. In short, the ELLM model provides a framework for coaching that cycles through observation, feedback, and modeling (Fountain, Cosgrove, Wiles, Wood, & Senterfitt, 2001). In general, LCs were trained to begin this cycle with a classroom observation that targeted the most recent topic of discussion from the LAE 2000 class and HUR curriculum. For example, if the HUR topic from the week before was "Writing," then the LC would

observe a class "book making" activity as well as examine the room to see if a writing center had been created. The LC then gave the teacher feedback from this observation and modeled strategies to enhance the infusion of literacy-based writing activities and environmental stimuli into the classroom. Goals for future skill development then were created and revisited during the first part of the subsequent coaching session.

Obtaining Institutional Review Board (IRB) Approval

An application for University of South Florida IRB approval for the grant was submitted in January of 2004. Approval was obtained in February of 2004. An application for USF IRB approval for the use of the archival data from the ELO grant for this study was submitted and obtained in August of 2004.

Creating Coaching and No Coaching Treatment Groups

After potential teacher/participants signed and returned their participation agreement forms, they were assigned to either a coaching or no coaching condition. A two-step process was followed by the PCRC to create these treatment groups. Of note, the researcher had no control over the assignment process. As a first step, the PCRC divided teachers into groupings based on the ages of the children they taught (infants, toddlers, 2-3 year olds, 3-4 year olds, and pre-kindergarten). Once these groupings were made, one teacher from each age group was selected randomly and assigned to the coaching condition (LT/C). After this selection, another teacher was chosen from that same age grouping and placed in the no coaching condition (LT/NC). This process was continued

until all teachers were placed in either the coaching or no coaching conditions. Thus, 50% of the assignments were random, while the remaining teachers were assigned to the other condition by convenience. The general goal of this selection procedure was based on the PCRC's desire to have equal representation of the teachers of students from similar ages divided between the two conditions. Of note, no effort was made to match treatment groups based on the socioeconomic status of the sites or geographic locations.

Creating Control Groups

In an effort to create a control group of teachers and children who were not participating in the ELO grant, a random selection of teachers who had applied for but were not selected for participation in the spring session was contacted. Seven centers agreed to serve as a control group. Contact with the control groups was made by one of the three doctoral-level graduate students in the Program Evaluation Team. Initial contact was made through a phone call and followed a basic introductory script (see Appendix G). As each center agreed to participate in the study, a program evaluator visited the site and spoke with the center Director. Essentially, the only involvement between the control sites and the ELO grant personnel was just before and during the Time 1 and Time 2 data collection cycles and consisted of a phone call two weeks prior to verify participation and schedule times for data collection. No other contact was made nor were any other stipulations or requirements tied to participation. Regardless of these differences, data collection procedures mirrored those at the

treatment sites (LT/C and LT/NC).

Obtaining Informed Consent from Teacher/participants and Their Students

Consent forms were provided to the ELO steering committee for distribution to all teacher/participants (Appendix H) and their students (Appendix I). Specifically, these forms asked for teachers' participation in the program evaluation component of the ELO grant. Parent assent also was solicited from parents of the students in the participating teachers' classrooms. Since the program evaluation team of the ELO grant made the first contact with the control group sites, they also delivered these consent packets during the first meeting with the center Directors who had agreed to volunteer as participants in the study. Appendix J contains a sample cover letter for the control group center Directors, Appendix K contains the cover page for the parent assent form, and Appendix L contains the reminder letters that were distributed to parents when forms were not returned within one week. A follow-up phone call was made to all center Directors one to two weeks after the consent forms had been delivered. At this time, Lead Program Evaluators inquired about the number of consent forms returned and made arrangements for data collection. Six hundred and twenty-three consent forms were distributed to the parents of the children in the targeted classrooms. Four hundred and five consent forms granting the program evaluation team permission for data collection were signed and returned (an overall response rate of 64%). Specifically, 171 out of 190 consent forms were signed and returned for students

at concurrent coaching sites (90% response rate), 107 out of 168 consents were returned for students at sites in the LT/NC condition (response rate of 64%), and 127 out of 265 consents were signed and returned for students at control sites (response rate of 48%). No sociodemographic information was provided for students who did not return consent forms. Eighteen students whose parents had given consent for participation in the evaluation were not included due to students' absences. Up to three attempts were made to assess all children from whom consent had been obtained. One additional child did not want to participate in the activities. After two different Program Evaluators obtained the same response, no further attempts to engage the child were made.

Training Program Evaluators

Ten graduate students from the School Psychology program at the University of South Florida served as members of the evaluation team. Seven members were second-year graduate students. The remaining three members were doctoral students also in the school psychology program and served as the Lead Program Evaluators. Two of these doctoral students (including the author) also are employed by Pinellas County Schools as Ed.S.-level school psychologists. As part of their employment obligations to the school system, these two school psychologists also were certified as DIBELS administrators after attending a full-day training sponsored by the Florida Center for Reading Research.

109 For the present evaluation efforts, two half-day trainings were provided for the second-year graduate students and addressed the background, administration, and scoring of the preschool IGDI, DIBELS, and ESI-K measures. Time for practicing these procedures was provided one week later at which time test kits were distributed. An additional practice requirement included administering the subtests to at least three more children while being supervised and given feedback by a dyad partner who also received training in IGDI administration. A checklist was provided for the observing person with 100% accuracy required (see Appendix M). The second half-day training focused on the administration of the two subtests of the DIBELS (LNF and ISF). The three Lead Program Evaluators presented this second set of trainings, which followed a similar format to that of the IGDI training. That is, an introduction to the measures, administration procedures, and scoring of the subtests was offered. Time dedicated for practice in dyad pairs was provided as well. The Lead Program Evaluators completed training in the administration and scoring of the ESI-K. One half day was dedicated to this undertaking. Structure for this training was provided in the ESI-K manual (Meisels et al., 2003), which was further supported by a training video tape. Each Lead Program Evaluator also was responsible for conducting at least three practice administrations with children.

110 Initial Wave of Data Collection The initial wave of data collection measuring children’s literacy skills in LT/C, LT/NC, and NL/NC settings began on February 17, 2004 and ended on March 9, 2004. Data collection included the administration of the preschool IGDI measures and classroom observations with the ELOC. Each evaluator was assigned to at least three sites. Evaluators made their own arrangements for dates and times for completing data collection during the three week time span. Evaluators were provided with a file for each of their respective sites that included data sheets, information about the site (contact person, teacher’s name, hours, ages of children to be assessed, etc.), and a map highlighting driving directions. For reasons of confidentiality, no student names were placed on data sheets that were used during data collection. Instead, center Directors were provided with a key that contained the list of students from whom signed consents were obtained. Also present on this page was a row within which numerical codes were entered. Each student had their own numerical code alongside their name. It was this code that was placed on the data sheets used to record background information (Appendix N contains this demographic information sheet) as well as during data collection (see Appendix O for an example of this datasheet). After the codes and related information were transferred to the Program Evaluators’ data sheets, this page of children’s names and code numbers was given to the center Directors for safekeeping. When all

data had been transferred into the computer database from the paper datasheet and all evaluation components were completed for the Spring ELO cohort, center Directors were asked to destroy the page with student names and numbers. Thus, no student names were immediately linked to data obtained. Completed files were returned to one of the Lead Program Evaluators, who placed all files that had been entered into the database into a locked drawer. Administration of IGDI. Administration of the IGDI measures followed a standardized procedure with scripted directions for the examiner to read to the student. A stopwatch or timer was required and was provided as part of the IGDI test kits. All assessments were conducted one-on-one between the examiner and the child and took place in a quiet setting. Duration of the IGDI administration time was approximately 5 to 7 minutes per child. Picture Naming The Picture Naming subtest prompted children to identify common objects in pictures presented to them (McConnell et al., 2002). A set of sample items was presented to children first. After calling attention to pictures on the card held in front of the child, the administrator identified each image by name. Next, the child was told that it would be his/her turn next with a prompt of, "Now you name the pictures." Praise was given for correctly named images and corrections were given when objects were misidentified. Four sample items (cards) were presented. These same cards were used for all administrations. The remaining 100 cards were shuffled before each session. After the sample cards were presented, the child was told that it would be his/her turn next. The

112 child also was prompted to name the items as fast as he or she could. Correctly named items (cards) were placed in one pile alongside the examiner and incorrectly identified cards were placed in another pile. If a child hesitated to respond to cards presented (after three seconds), a prompt of “Do you know what this is?” or “What’s this?” was given. The child then was given two more seconds to respond. If no response or an incorrect response was elicited, then the card was placed in the incorrect pile. Correct responses resulted in the card being placed in the correct pile. After one-minute, the task was stopped and the number of cards in the correct pile was entered onto a data sheet (see Appendix O). Alliteration The Alliteration subtest tapped early phonemic awareness. Specifically this subtest prompted children to look at a card containing pictures of four common objects (e.g., fish, baby, car, foot). After these objects were named by the administrator, children were told that they were going to look on the card for objects that started with the same sound. Specifically, the administrator told the child, “We are going to look at some pictures and find the ones that start with the same sound.” Following this prompt, the examiner demonstrated the task by identifying two objects that followed this pattern (“Dice and dog start with the same sound.”). Six sample items were presented. The first two items were demonstrated by the examiner who provided comments that demonstrated the task. The next two (cards three and four) reflected the child’s first attempts at completing this task. Corrective feedback was provided. In contrast, the last two

113 cards (cards five and six) were presented in a similar manner to the previous two items; however, no corrective feedback was given. When the child responded correctly to at least two of the last four cards, the examiner continued with the subtest. When a child did not produce at least two correct responses, the test was discontinued and N/A was entered on the data sheet. The first two sample cards remained consistent throughout the study. The subsequent four sample cards were selected randomly from the pile of cards. The examiner started the two-minute timed portion of the subtest by identifying the four images depicted on a randomly selected card. Next, the examiner asked the child to identify the object from the bottom row that started with the same sound as the object at the top of the card. Each card was introduced in the same way with the examiner identifying all four images on the card and then asking the child to identify the object that started with the same sound. Cards that elicited correct answers were separated from cards prompting incorrect or no responses. Three seconds were provided for children to respond to items. After this time, the next card was shown. The previous card then was placed in the incorrect pile based on the child’s non-response. At the end of the two-minute period, the examiner counted the number of cards in the pile for correctly named alliterations. This number represented the child’s score on the Alliteration subtest. No credit was given for sample items. Rhyming The last subtest, Rhyming, also tapped early phonemic awareness skills and asked children to identify two objects out of four depicted

on the card (e.g., star, jacks, car, and horse) that rhymed. Specifically, the examiner said, "We are going to look at some pictures and find the ones that sound the same." Sample items were provided and reflected a similar structure to those from the Alliteration subtest, where the first two standardized items (cards) were demonstrated by the examiner, followed by two randomly selected items where corrective feedback was offered, and a final set of two items where no feedback was provided. Also similar was the discontinue rule, where only students who responded correctly to at least two cards continued with the task. Rhyming also is a two-minute timed task. As with the Alliteration subtest, timing for the scored portion of the subtest began after the examiner told the child that there were more pictures to look at. After this prompting, the examiner started the stopwatch and began identifying the four images on a card selected at random. Reminders to point to the picture that sounded the same as the picture at the top of the card followed. Three seconds were allowed after the presentation of the card and identification of the images before the examiner presented a new card. Cards were placed in one of two piles after administration. One pile was for those items that elicited correct responses within the three seconds. A second pile was created for cards that were followed by incorrectly identified rhyming pairs or non-responses. At the end of the two minutes, the examiner stopped the subtest and counted the number of cards in the pile for correctly named rhymes. This number was entered onto the data sheet and reflected the child's score on this subtest.
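The three preschool IGDI subtests shared the same basic scoring rule: cards were presented one at a time, a brief response window was allowed, and the number of cards in the correct pile at the end of the time limit became the child's score. The sketch below simply codifies that rule for illustration; it is not part of the original administration materials, and the response records and names shown are hypothetical.

```python
# Schematic illustration of the common IGDI scoring rule described above
# (timed task, brief response window, correct and incorrect piles).
# Administration was done live with physical cards; the records below are hypothetical.
from dataclasses import dataclass


@dataclass
class CardResponse:
    correct: bool    # whether the child identified the target correctly
    latency: float   # seconds from card presentation to the child's response (or timeout)


def igdi_score(responses: list[CardResponse], time_limit: float, window: float = 3.0) -> int:
    """Count correct responses given within the response window before time runs out."""
    elapsed = 0.0
    correct_pile = 0
    for card in responses:
        elapsed += min(card.latency, window)   # no card is allowed more than the window
        if elapsed > time_limit:
            break                              # the examiner stops the task at the time limit
        if card.correct and card.latency <= window:
            correct_pile += 1                  # card goes to the "correct" pile
    return correct_pile


# Example: a two-minute Rhyming administration with three cards presented
cards = [CardResponse(True, 1.5), CardResponse(False, 3.0), CardResponse(True, 2.0)]
print(igdi_score(cards, time_limit=120.0))  # -> 2
```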

115 Environmental Literacy Checklist. Beginning the week of February 17, 2004, and ending the week of March 1, 2004, all three Literacy Coaches, seven Program Evaluators, and three Lead Program Evaluators completed their first structured observation of the early childhood classrooms using the Early Literacy observation Checklist (ELOC) (Justice, 2002). This checklist was completed for all participating teachers’ classrooms following a 30-minute observation. Questions that could not be answered after the observations were clarified through teacher interviews. When this occurred, an “I” was placed in the margin alongside the clarified item. In addition, the name of the person from whom clarification was obtained was noted. Whereas LCs completed the observation checklist in the classrooms of treatment group participants, Program Evaluators completed the observations at the LT/NC and NL/NC sites. Literacy Coaches also utilized the ELOC information to identify key needs to be addressed during coaching sessions. For example, if the recent HUR class targeted the topic of Writing, then information gleaned from the ELOC such as whether writing centers were accessible to children or if writing materials were available for free play, served as a topic for feedback and modeling. Additional feedback also was provided to teachers by the LCs if weaknesses, strengths, or areas of growth were noted. Thus, the ELOC served as a source of data-based decision making around which LC’s structured their coaching sessions. Notably, program evaluation team members did not offer feedback after their observations in the LT/NC and NL/NC settings.

Overall, inter-rater agreement of ratings was obtained (r = .85 to .93) between dyad partners following observation of a literacy activity in a classroom. Two observations were needed to reach this level of agreement in all but two cases. Scoring of the ELOC was done within an Excel database where formulas were entered to calculate the points assigned to responses. The Lead Program Evaluators entered the ELOC responses into the database. After the scores from each ELOC were entered, a second evaluator checked the data for errors.

Director's Workshop

In April of 2004, a workshop was held for the Directors of the centers from which teachers were selected for participation. This full-day training provided Directors with strategies for promoting literacy in their childcare sites and, although it was not required, attendance was strongly urged. The workshop was held on the St. Petersburg College campus. The aim of this workshop was to equip center Directors with the knowledge and skills to support the coaching of their employees after the grant ended. During this workshop, a presentation by Gabriel White Deer of Autumn Horn, a faculty member and children's author, addressed the importance of diversity in literacy instruction. A copy of his book, Ceremony in the Circle of Life, was provided to each Director. Additional faculty from St. Petersburg College also presented on topics related to literacy training and coaching.

Second Wave of Data Collection

The second round of data collection began on April 26, 2004 and ended

on May 10, 2004 in all settings (LT/C, LT/NC, and NL/NC groups). At this time, the same preschool IGDI measures were re-administered (Picture Naming, Alliteration, and Rhyming). Additionally, however, two DIBELS subtests (Letter Naming Fluency and Initial Sounds Fluency) were administered to a subset of children who were identified as entering kindergarten in August of 2004 with birthdates on or before September 1, 1999. A third subsample of three student/participants per child care site who were four years and six months of age or older was selected randomly for assessment with the Early Screening Inventory-Revised (ESI-K). Table 11 contains information regarding sample sizes across conditions for these subsamples.

Table 11
Sample Sizes for Letter Naming Fluency and Initial Sounds Fluency DIBELS Subtests and the ESI-K at Time 2

         Letter Naming Fluency      Initial Sounds Fluency      ESI-K
         Male   Female   All        Male   Female   All         Male   Female   All
LT/C      40      49      89         40      49      89           3      12      15
LT/NC     30      30      59         30      29      60           7       5      15
NL/NC     20      23      40         19      21      43           8       6      14

Administration of the measures followed standardized procedures with scripted directions for the examiner to read to the student. All assessments were conducted one-on-one with the child. Duration of the IGDI administration time was approximately 5 to 7 minutes per child. An additional 15 to 20 minutes were added for the cohort of kindergarten entry-level children due to the supplemental measures being administered (i.e., DIBELS, ESI-K). Since the procedures for

118 administering the DIBELS subtests and ESI-K were not presented earlier, as they were not used in the first round of data collection, they will be described in the following sections. Administration of DIBELS Letter Naming Fluency subtest. Phonics skills were assessed in the Letter Naming Fluency (LNF) subtest of the DIBELS. During this assessment, students were asked to name as many letters as they could from a probe containing randomly placed upper and lower case letters of the alphabet. Students who hesitated more than three seconds were told the name of the letter by the examiner who then pointed to the next letter and said, “What letter?” Scores reflected the correct number of letters identified within a one-minute time period. This subtest was discontinued and scored as a zero when a student did not correctly identify any of the ten letters in the first line of the probe presented. Administration of DIBELS Initial Sounds Fluency subtest. During the Initial Sounds Fluency (ISF) subtest, students were asked to indicate which picture out of four placed in front of him or her began with the same sound as said by the examiner. For example, the examiner stated, “This is a hat, ball, telephone, and cup. Which picture begins with the sound /b/?” Students were prompted to point or orally respond to the question. After three questions of this type per page, the child was asked to pronounce the beginning sound of the remaining picture. That is, the examiner asked students to respond to the following question, “What sound does ‘telephone’ begin with?” This subtest was discontinued when a

student did not respond correctly to the first five items. When this occurred, a score of zero was recorded. During the ISF subtest, examiners obtained an estimate of the child's thinking time by recording the elapsed seconds between the question being offered and the child's response. To obtain the ISF score, the number of correct responses was multiplied by 60 seconds and then divided by the child's response (thinking) time (for example, eight correct responses given over 96 seconds of cumulative response time would yield an ISF score of 8 x 60 / 96 = 5). Scoring for the ISF subtest was completed by the three Lead Program Evaluators after data collection. Early Screening Inventory. The developmental level of a random selection of 45 student/participants (three children per pre-kindergarten classroom) was assessed with the ESI-K. This measure, which gathers information about children's language, visual-motor, and gross motor skills, is comprised of tasks that require verbal and motor responses. All tasks contained one practice item that was not scored. Initially, children were asked to build a tower with ten wooden blocks. After this warm-up task, the examiner built a gate structure with five of the blocks and asked the child to build a similar one. Notably, for the first trial the examiner built this gate behind a cardboard screen. If the child was unsuccessful building the gate after the first prompt of "Make yours just like mine," the examiner modeled how to build the structure and then asked the child to try again. Successful attempts to build the gate without the modeling received two points while second attempts following modeling received one point. When a child did not build the gate following the modeling, no points were awarded.

120 Copying tasks were presented next during which time the child was asked to copy four shapes (i.e., circle, square, triangle, and a plus sign) one at a time to a white unlined piece of paper. Standardized scoring procedures and templates (e.g., no gaps of more than one quarter inch appear in a circle, the horizontal line in a cross should not be more than one half as long as the vertical line) provided in the ESI-K manual (Meisels et al., 2003) were referenced. In general, one point per item was awarded for accurate representations of the images. After completing this copying task, the child was asked to draw a picture of a person (male or female). Scoring for this task was determined by the number of correct body parts with more than five items (e.g., a pair of eyes, hair, legs, feet, nose, mouth) receiving two points and images depicting three to five body parts receiving one point. No points were awarded for responses that did not meet these criteria. Following these copying and drawing ac tivities, tasks that tapped a child’s visual memory were presented. During this section, the examiner presented two picture cards (i.e., a duck and a cup) and placed them face down on the table in front of the child. Prompts to look closely at the cards were given. After this direction, the examiner turned the cards face down and then showed the child a card that matched one of the cards that had been turned over. The child then was directed to point to the turned over card that matched the card the examiner presented. If the child did not respond correctly, a second trial was administered. If a child was not successful with the task on the second trial, the examiner

121 transitioned him or her to the next activity. Children who provided correct responses then were presented with a third card (a house). Again, prompts were given to look closely at the cards, which then were turned over. Similar to the task when it contained two cards, one matching card at a time was presented and children were asked to identify the location of its corresponding card. One point was awarded for success with the three cards. Next, ten blocks used earlier were presented again. This time rather than constructing a design with them, the child was asked to count the blocks. Prompts to count out loud so that the examiner could hear also were provided. If a child was unable to complete this task, five blocks were removed, and the prompt was given again. After the child completed the counting task, the examiner asked the child “How many blocks are there all together?” Correct responses to both counting and quantifying the number of blocks were awarded two points each for task (i.e., counting, identifying quantity) completed with ten blocks or one point each for task completed with five blocks. Expressive language skills were assessed next in an activity where children were provided with one of four objects (red ball, green block, blue button, and yellow and red car) and then asked to talk about the object presented. Responses were scored as to whether descriptions regarding the shape, color, name, or use of the object was provided spontaneously or following prompts from the examiner (e.g., “What shape is it?). Spontaneous responses received two points whereas responses that were elicited by a prompt received

122 one point. Verbal skills were assessed further with a sentence completion task. More specifically, analogies such as “A hat is worn on your head and shoes are worn on your __________?” were provided. One point each was awarded for correct responses to four similarly constr ucted items. Incorrect or no responses received a score of zero. An auditory sequencing memory task followed the expressive language tasks. In this section, children were asked to listen to and repeat a string of three and four digits. One point was awarded for correct responses to three digit strings; two points were awarded for correct responses to four digit strings. If a child failed the first attempt at a three or four digit string, a second trial was given. No point discrimination was made between correct responses for first or second trials with successful responses receivi ng one point each. Incorrect responses received no points. The ESI-K administration concluded with gross motor tasks. For this section, children were asked to stand on one foot for ten seconds, hop on one foot five times, and skip across the room. Children who balanced (or hopped) on each foot received two points. Successful balancing (or hopping) on only one foot received one point. Skipping across the room was awarded two points. After these gross motor tasks, children were thanked for their participation and escorted back to their classrooms. Total scores for the ESI-K assessment were obtained after summing the

123 number of points a child received across the entire test. A range of scores from 0 to 27 points was possible. Due to the higher level of training and prowess required to administer the ESI-K, all ESI-K assessments were completed by one of the three Lead Program Evaluators who completed a half day training and three practice sessions devoted to administration procedures. Review of Literacy Coach Session Notes In order to document alignment with the ELLM coaching model (Fountain, 2002), which provides a framework for coaching that cycles through observation, feedback, modeling, and goal setting, a review of notes completed by LCs and signed by their respective teachers re ceiving the coaching was conducted. Notes from nine of the twelve sites where coaching was provided were included in this review. Three sites for each of the three LCs were randomly selected by the researcher. Following this review, the percentage of session notes that contained reference to the coaching following the ELLM model was calculated. Variations from this ELLM model also were noted. Data Entry and Inter-Rater Agreement for Data Transfer Data entry of the scores obtained throughout these two waves of assessment was performed by one of the Lead Program Evaluators. All data were entered and tied to individual students and teachers by a code assigned to them that was developed to reflect the site number, treatment level, and student identification number. Specifically, data from control sites began with the number nine, which was followed by a two-digit number reflecting a teacher code. The

124 final three-digit number reflected the student identification number. Treatment groups where concurrent coaching was provided was linked to data that began with the number seven, while data from treatments groups where coaching was delayed began with the number eight. An example of this application is the code 906018, which reflects a student with an identification number eighteen whose teacher’s code was 06 and served as part of the control group. Data were entered into a Microsoft Excel spreadsheet, which could be imported into a variety of statistical software packages. After data from three sites were input into the Excel spreadsheet, a second lead program evaluator reviewed the information entered into the spreadsheet. One hundred percent accuracy in the data entry was required. Procedure for Review, Selection, and Analysis of Archival Data The purpose of this study was to examine the effect that early educator training in research-based literacy strategies will have on the acquisition of literacy skills in the children they teach. The provision of an external support (Literacy Coaches) to assist in applying these strategies into the classroom also was explored. Data to answer these questions were found within the literacy skill measures collected as part of the ELO grant. Specifically, archival data were obtained from the Florida Mental Health Institute (FMHI) at the University of South Florida, which served as the home base for the program evaluation team for the ELO grant. Data were obtained in a deidentified format so that no information could be linked to any specific individuals who served as participants

125 in the ELO grant. Code numbers, instead, were included in the data set. Exact site names also were not distinguishable as code numbers were used rather than the actual labels in the dataset. Archival data regarding teacher variables selected for analysis for this study included years of experience as an early childhood educator and highest level of education obtained. Observations of the classroom environment and teacher-student interactions that were part of the archival database also were used in the present study. Student-based data also selected from the archival source included gender, age, race, home zip code, and number of days present at school between January 1st, 2004 and April 30th, 2004. Finally, data reflecting outcome measures of literacy instruction (IGDI, DIBELS, and ESI-K) also were selected for inclusion in this study. Preschool IGDI data were collected at two points during the ELO grant (late February of 2004 and early May of 2004). Data from both times were included in this study. Of importance, concerns regarding the administration and scoring of the Initial Sounds Fluency (ISF) subtest from the DIBELS were raised. Notably, the ISF subtest requires timing and scoring procedures that differ from the other IGDI and DIBELS subtests. That is, ISF solely measures student “think time” rather than general fluency. Inspection of the ISF protocols by the Program Evaluation team indicated that general fluency was measured. Specifically, over 85% of the protocols noted that the ISF subtest was completed in 60 seconds. Further, this notation was accompanied, in many instances, by a termination of the subtest

administration once the 60-second time limit had occurred. Given these findings, administration accuracy for the ISF subtest appears to have been violated. As a result, these data were not used in this present study.

Visual Overview of the Study

The following table and figure are offered as aids for visualizing the study. Specifically, Table 12 provides the basic structure of the participant sample and their distribution across conditions. From another perspective, Figure 4 offers a visual picture of the scope of the study according to the time of implementation.

Table 12
Visual Overview of Treatment Groups and Student/Participant Sample Sizes at Time 1

Age of Students/Participants    LT/C             LT/NC            NL/NC            Total
36 to 47 months                 n = 17           n = 21           n = 25           n = 63
                                4 classrooms     3 classrooms     9 classrooms     16 classrooms
48 to 59 months                 n = 99           n = 55           n = 61           n = 215
                                12 classrooms    10 classrooms    16 classrooms    38 classrooms
60 to 72 months                 n = 47           n = 34           n = 29           n = 110
                                7 classrooms     6 classrooms     9 classrooms     22 classrooms
Total Sample                    n = 163          n = 110          n = 115          N = 386
                                23 classrooms    19 classrooms    34 classrooms    41 classrooms

[Figure 4: Visual Overview of Timeline and Scope of Study (January through May 2004). The figure summarizes (a) the weekly HUR and LAE 2000 topics: Introductions (1/20); Curriculum, Literacy Rich Environments (1/27); Assessment (2/3); Talking, Language Development (2/10); Playing, Literacy Play Environments (2/17); Reading, Vocabulary, Phonemic and Print Awareness (2/25); Writing (3/2); Learning the Code, Phonological Awareness (3/16); Learning the Code, Alphabetic Principle (3/23); Curriculum, Scaffolded Instruction (3/30); Assessment, Literacy Goals and Involving Families (4/6); Talking, Second Language Learners (4/13); Playing, Narrative Play (4/20); Reading, Using the Library and Involving Parents (4/27); and Writing, Forms and Functions of Print (5/4); (b) HUR and coaching by condition: LT/C, HUR begins and ends with coaching beginning in the spring and ending 8/04; LT/NC, HUR begins and ends with coaching beginning 9/04; NL/NC, no HUR and no coaching; and (c) Program Evaluation Team activities: IRB submission and approval, IGDI/ELOC and ESI/DIBELS trainings, consent distribution, Time 1 data collection (IGDI & ELOC), Time 2 data collection (IGDI, ELOC, DIBELS, & ESI-K), and focus groups.]

Treatment Integrity

Prior to discussing the analyses conducted to answer the research questions that drove this study, an evaluation of the integrity of the implementation of the treatment (i.e., the ELO grant and related activities) occurred. Two avenues for examining this were employed: Total scores on the Early Literacy Observation Checklist (ELOC) were examined and Literacy Coaching session notes were reviewed.

Review of ELOC Scores

To examine the issue of treatment integrity, Total scores on the Early Literacy Observation Checklist (ELOC) were compared to a criterion requiring that 80% of the possible literacy-related characteristics on the ELOC be present in the classrooms. Thus, treatment integrity would be assumed when ELOC Total scores of 33 or higher (out of a possible 41 points) were obtained. Mean ELOC Total Time 1 scores compared to this criterion were as follows: LT/C = 30.02, LT/NC = 31.80, and NL/NC = 27.25. Thus, no mean ELOC Total Time 1 scores met the criterion; however, at Time 2, mean scores from both treatment conditions fell within this range (LT/C = 33.77, LT/NC = 34.20). In contrast, the mean ELOC Total Time 2 score from the NL/NC group did not meet this criterion, M = 30.45. Figure 5 depicts these scores over time. Finally, percentages of classrooms that met the criterion at Time 1 and 2 across conditions are presented in Table 13.
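As a rough illustration of this criterion check (and not part of the original analysis files), classroom-level ELOC Total scores could be screened as follows; the file and column names here are hypothetical.

```python
# Sketch of the treatment integrity screen: the criterion is an ELOC Total of
# 33 of 41 possible points (roughly 80%). File and column names are hypothetical.
import pandas as pd

CRITERION = 33
eloc = pd.read_csv("eloc_totals.csv")  # columns: condition, eloc_total_t1, eloc_total_t2

for column in ["eloc_total_t1", "eloc_total_t2"]:
    summary = eloc.groupby("condition")[column].agg(
        mean_total="mean",
        pct_meeting=lambda s: 100 * (s >= CRITERION).mean(),
    )
    print(column)
    print(summary)
```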

[Figure 5: ELOC Total Scores and Treatment Integrity Across Conditions. The figure plots mean ELOC Total scores (0 to 40) at Time 1 and Time 2 for the Literacy/Coaching, Literacy/No Coaching, and No Literacy/No Coaching conditions against the treatment integrity criterion range.]

Table 13
Number and Percentage of Classrooms Meeting ELOC Total Score Criterion Across Time by Condition

                   Meeting Criterion
                   Time 1     Time 2
LT/C (n = 12)       33%        73%
LT/NC (n = 10)      40%        70%
NL/NC (n = 19)      26%        32%

Review of Literacy Coaches' Session Notes

Three files for each of the three LCs were randomly selected by the researcher for review. In general, inspection of these files offered support for close alignment with the Early Literacy Learning Model (ELLM) of coaching. Specifically, across these sites an average of 7 coaching sessions had occurred, each approximately 50 minutes in duration. Ninety-seven percent of these session

notes contained reference to the LC observing the teacher, providing feedback, and then setting goals for future sessions. The 3% of notes that did not document full implementation of this cycle lacked reference to the LC modeling the skill under discussion. Accountability for the anecdotal references in these session notes was documented by the signatures of the LC and teacher, indicating that the notes accurately represented what occurred during the coaching session.

Research Design and Analysis

Attention now will be directed at identifying key variables used in the analysis process. Notably, Table 14 contains information that details relevant concepts, operationalizations of these concepts, level of measurement, and range of data obtained in the measurement of these variables.

Table 14
Characteristics of Descriptive and Measurement Data

- Age of students. Measured by: age in months at time of data collection. Level of measurement: ratio. Obtained range: 36 to 66 months.
- Student Attendance. Measured by: number of days attended school between January 1, 2004 and April 30, 2004. Level of measurement: ratio. Obtained range: 4 to 86 days.

- Class Size. Measured by: highest number of students assigned to a teacher's classroom. Level of measurement: ratio. Obtained range: 8 to 22.
- Developmental Level. Measured by: ESI-K Total score. Level of measurement: interval. Obtained range: 12 to 27.
- Expressive Language. Measured by: IGDI Picture Naming. Level of measurement: ratio. Obtained range: 0 to 39.
- Gender. Measured by: gender of participant (male or female). Level of measurement: nominal, dichotomous coding. Coding: 1 = male, 0 = female.
- Treatment Integrity. Measured by: Early Literacy Observation Checklist. Level of measurement: ratio. Obtained range: 11 to 41.
- Treatment Intensity. Measured by: number of visits and duration (in minutes) of coaching sessions between teacher and LC. Level of measurement: ratio. Obtained range: number, 0 to 11; duration, 0 to 975 minutes.
- Phonemic Awareness. Measured by: DIBELS Letter Naming Fluency. Level of measurement: ratio. Obtained range: 0 to 85.
- Phonological Awareness. Measured by: IGDI Rhyming and IGDI Alliteration. Level of measurement: ratio (both). Obtained range: 0 to 27 (each).
- Race. Measured by: race of participant (White, African American, Hispanic, Asian, Other). Level of measurement: nominal, dichotomous coding. Coding: 1 = White, 0 = not White.
- Home SES. Measured by: median household income associated with student/participants' home zip code. Level of measurement: ratio. Obtained range: $21,502 to $79,705.

- Site SES. Measured by: median household income associated with the geographic location of the school site, as identified by the site's zip code. Level of measurement: ratio. Obtained range: $21,502 to $56,057.
- Teacher Education. Measured by: highest level of education (High School Diploma or GED, Some College, AA, 4 Year Degree). Level of measurement: nominal, dichotomous coding. Coding: 1 = post-secondary, 0 = no post-secondary.
- Teacher Experience. Measured by: years of experience teaching in an early childhood setting. Level of measurement: ratio. Obtained range: .08 to 28 years.
- Treatment. Measured by: ELO participation and coaching (Literacy Training with Coaching, LT/C; Literacy Training with No Coaching, LT/NC; No Literacy Training and No Coaching, NL/NC). Level of measurement: nominal, two dichotomous variables. Coding: ELO participation, 1 = LT/C and LT/NC, 0 = NL/NC; coaching, 1 = LT/C, 0 = LT/NC.

Significant relationships among variables in settings such as those found in schools must be noted. For example, teacher-related variables (e.g., years of experience, application of skills) have the potential to influence the lower level

unit of analysis (skills in students of these teachers). Since this reflects a hierarchical structure often found in school settings, it is necessary that analyses examining the outcomes of interventions or programming in these environments be able to partition variability accordingly at all levels. Hierarchical Linear Modeling (HLM) is a statistical procedure that can hold different levels of influence constant so that individual, nested layers can be investigated for their contribution to the overall outcome (Willms, 1999). Given these assertions, HLM was used to investigate the research questions driving this study. Three levels were present (within child, child, and classroom). Outcome measures for these analyses were the three IGDI subtest scores (i.e., Picture Naming, Alliteration, and Rhyming). Figure 6 reflects the hierarchical structure of the three-level HLM model.

[Figure 6: Relationship Between Outcome and Predictor Variables in a Three-Level Hierarchical Linear Model. Outcome variables: preschool IGDI Picture Naming, Alliteration, and Rhyming. Level 1 (within child): Time 1 vs. Time 2. Level 2 (child characteristics): age, gender, race, home SES, and attendance. Level 3 (classroom): teacher experience/education, class size, ELO participation, SES of site, and treatment intensity.]

Specifically, at the first level of the HLM model, the within-child characteristics were examined. That is, are Time 1 versus Time 2 differences present? Given this perspective, a child's entry skills (i.e., Time 1 scores) reflect the intercept of a regression equation, whereas growth in observed skills is noted as the slope of the regression equation. Figure 7 demonstrates how the relationship of growth of skills (scores) over time could be depicted in a sample of five students.

[Figure 7: Sample Within-Child Level One Model. The figure plots Picture Naming scores (0 to 35) at Time 1 and Time 2 for five sample children, Child A through Child E.]

At the second level, child characteristics were entered into the model. Thus, the child's gender, age, race, home SES, and number of days that he or she attended school are included and serve to answer questions such as

whether girls demonstrated higher scores than boys at Time 1, or whether children who attended school more often demonstrated greater growth than children who attended school less often. Two regression equations were formed and examined the individual effect of each predictor while controlling for the effects of the other predictor variables. The first equation (predicted intercept) evaluated children's skills at Time 1. Along a similar trend, the second equation (predicted slope) evaluated the growth in children's skills. Regression equations for the Level Two model are as follows:

Predicted Intercept (Time 1 or Entry Skills) = a00 + B01*(Gender) + B02*(Age) + B03*(Race) + B04*(Home SES) + B05*(Attendance) + ei

Predicted Slope (Growth Over Time) = a10 + B11*(Gender) + B12*(Age) + B13*(Race) + B14*(Home SES) + B15*(Attendance) + ei

The classroom level became the focus of the third level of the HLM model. Here, the impact of the treatment effect and teacher characteristics on children's skills was evaluated. Six predictors were employed at this level: teacher experience, teacher education, number of students in class, SES of site, ELO participation, and treatment intensity. Regression weights and intercepts derived from the Level 2 equations now served as outcome variables when they were regressed on classroom characteristics. For example, the following regression equation was formed.

Predicted B15 = G010 + G011*(ELO Participation) + G012*(Experience) + G013*(Education) + G014*(Number of Students) + G015*(SES) + G016*(Treatment Intensity) + ei

Specifically, this equation related the effect of attendance at school on student growth to classroom characteristics after controlling for all other variables noted in the second and third levels in the HLM model. Thus, it was possible to identify if attendance at school was more important for children in the literacy training or no literacy training conditions. Two additional outcome variables were of concern in this study. Specifically, students' performance on a subtest from the DIBELS (Letter Naming Fluency; LNF) and the ESI-K also served to answer research questions that drove this study. In contrast to the IGDI data, however, DIBELS and ESI-K scores were obtained at only one point in time (Time 2). Thus, only a two-level HLM model was used. The first level described child characteristics (age, gender, race, home SES, and attendance) and the second level examined classroom characteristics (treatment condition, teacher experience, teacher education, number of students in class, SES, and treatment intensity). Initial steps in HLM call for the researcher to identify unconditional models for the outcome variables. Consequently, intraclass correlation coefficients (ICC) were calculated for each of the outcome variables. Intraclass correlation coefficients represent a measure of variability between and within schools when the outcome variables are addressed. Specifically, an ICC of 1.0 indicates that

all the variability in the outcome variables is accounted for by between-school factors rather than within-school factors. In contrast, a value of 0.0 documents that no variability in the outcome scores is explained by the between-school factors and instead all the variability is associated with the within-school factors. Next, it was expected that the variance would be partitioned at each level. That is, level two and level three predictors were entered into HLM analyses. Separate HLM models also were run for each outcome measure. To control for alpha build-up, which could lead to a higher-than-expected chance of committing a Type I error, a Bonferroni correction was applied.
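The unconditional models and ICCs described above follow the standard HLM framework (Bryk & Raudenbush, 1992). As a modern illustration only, and not the software actually used in this study, an analogous random-intercept model for one outcome could be specified in Python; the data file and column names below are hypothetical.

```python
# Illustrative sketch: unconditional (intercept-only) model with a random
# classroom intercept, and the ICC computed from its variance components.
# This is not the analysis software used in the study; names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("elo_scores.csv")  # columns: classroom_id, picture_naming, ...

model = smf.mixedlm("picture_naming ~ 1", data=df, groups=df["classroom_id"])
fit = model.fit()

between_var = float(fit.cov_re.iloc[0, 0])  # variance between classrooms
within_var = fit.scale                      # residual (within-classroom) variance
icc = between_var / (between_var + within_var)
print(f"ICC = {icc:.3f}")  # 1.0 = all variance between classrooms, 0.0 = all within
```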

CHAPTER FOUR

RESULTS

The purpose of this study was to examine the impact that teacher participation in an early childhood educator professional development training (HeadsUp! Reading; HUR) had on early literacy development of their students. Further analysis also explored the effect that providing a Literacy Coach to the participating teachers had on the early literacy development of the students they taught. Given the nested relationship that existed among these data, both within (child characteristics) and between (teacher characteristics) classroom factors were examined for their potential contribution to early literacy skill development. Since outcome measures were administered at two points in time, an additional facet to this equation is the within-child differences that occurred over time. Consequently, a three-level hierarchical linear model (HLM) was employed to describe the impact of the ELO grant. The results of this study are presented in two sections. First, a description of the statistical characteristics of the dependent measures in the study (e.g., means, standard deviations, intercorrelations) is presented. To conclude this chapter, results of the HLM analyses are offered.

Descriptive Characteristics of Dependent Measures

Skewness and kurtosis of dependent measures were examined and are contained in Table 15. In short, skewness and kurtosis values that exceed 1.0 or are less than -1.0 indicate that the distribution of scores reflects a non-normal distribution. For this subset of data, skewness and kurtosis values for Alliteration

scores exceeded 1.0 with the most notable deviations emerging at Time 2. Skewness and kurtosis values for Letter Naming Fluency also reflect a non-normal distribution at Time 2. Visual inspection of the distribution of scores for the Alliteration subtest (as depicted in Figure 8) revealed one extreme outlier. Based on these findings, subsequent analyses were run both with and without this outlying score. No analyses were impacted significantly by removing this datum. Consequently, only results with the outlying score included are reported.

Table 15
Skewness and Kurtosis Values for Dependent Measures

                           N      Skewness    Kurtosis
Time 1 Measures
  Picture Naming          386      -.31          .17
  Alliteration            386      1.14         1.17
  Rhyming                 386       .94         -.12
Time 2 Measures
  Picture Naming          339       .09          .22
  Alliteration            339      1.56         2.82
  Rhyming                 339       .71         -.45
  Letter Naming Fluency   192      1.08         1.80
  ESI-K                    44      -.64         -.32

[Figure 8: Distribution of Alliteration Scores at Time 2, with the extreme outlier indicated.]
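A screen of this kind can be expressed compactly; the sketch below is offered only as an illustration of the |value| > 1.0 rule described above, with hypothetical file and column names, and uses the conventional excess kurtosis.

```python
# Sketch of the normality screen described above; file and column names are hypothetical.
import pandas as pd
from scipy.stats import skew, kurtosis

df = pd.read_csv("elo_scores.csv")
for col in ["pn_t1", "allit_t1", "rhyme_t1", "pn_t2", "allit_t2", "rhyme_t2"]:
    values = df[col].dropna()
    s = skew(values)
    k = kurtosis(values)  # excess kurtosis; 0 for a normal distribution
    flag = "non-normal" if abs(s) > 1.0 or abs(k) > 1.0 else "approximately normal"
    print(f"{col}: skewness = {s:.2f}, kurtosis = {k:.2f} -> {flag}")
```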

Means and standard deviations for dependent measures for the whole sample are displayed in Table 16. Further, this table contains means and standard deviations for the LNF subtest and ESI-K measures that were administered to the subsample of pre-kindergarten participants. Finally, Table 17 contains means and standard deviations for ELOC Total and subscale scores across Time 1 and Time 2. The intercorrelation of dependent measures was examined next and is detailed for the total sample in Table 18. Appendices Q, R, and S contain correlation matrices partitioned by treatment conditions. Notably, at Time 1 of data collection, SES of school sites emerged as accounting for significant amounts of variance in IGDI scores (i.e., p < .001). For example, 3% of the variance in Time 1 Picture Naming scores was accounted for by Site SES (r = .17; r-squared of approximately .03), with a similar pattern found for the remaining IGDI measures. At Time 2, some change was found with Site SES accounting for 6% and 3% of the variance in the Alliteration and Rhyming subtest scores, respectively. In contrast, when the relationship between Site SES and classroom environment, as measured by the ELOC Total Score, was examined, no significant patterns emerged. Perhaps not surprisingly, Site SES and Home SES were found to be strongly correlated, r = .43. When the relationship between Home SES and environmental factors (ELOC Total Scores) was considered, similar amounts of variance in IGDI scores were found. However, this trend did not continue when ELOC Total scores were examined as no significant correlations were noted between Home SES and ELOC Total scores.

Table 16
Means and Standard Deviations for Dependent Variables

                                  All              LT/C              LT/NC             NL/NC
                               Mean     SD      Mean     SD       Mean     SD       Mean     SD
Time 1 Measures
  Picture Naming (n = 386)     18.84   6.99     18.10    7.21     19.16    7.34     19.62    6.27
  Alliteration (n = 386)        2.68   3.77      2.01    3.60      2.93    3.54      3.40    4.07
  Rhyming (n = 386)             4.73   5.61      5.04    6.00      4.92    5.19      4.12    5.40
Time 2 Measures
  Picture Naming (n = 339)     22.36   7.04     22.30    6.73     21.87    6.73     22.91    7.75
  Alliteration (n = 339)        3.73   4.74      3.75    4.78      3.94    5.20      3.49    4.24
  Rhyming (n = 339)             6.25   6.36      6.83    6.62      6.51    6.48      5.23    5.79
Other Measures
  LNF (n = 188)                 7.73   7.47     17.33   16.26     15.73   13.14     16.09   13.43
  ESI-K (n = 44)               21.30   3.80     21.53    3.56      8.22    7.64     21.14    4.17
  Attendance (n = 365)         68.80  14.26     69.39   13.46     69.55   11.99     67.38   16.90
  Site SES (n = 386)           32186   5223     30229    4047     33974    4160     33346    6538
  Home SES (n = 345)           34975   7277     32709    4983     35512    6162     37815    9830
  Frequency of LC Visits         --     --       6.96    2.12       --      --        --      --
  Total Duration of LC Visits    --     --     462.66  218.43       --      --        --      --

Table 17
Means and Standard Deviations for ELOC Across All Conditions

                              All              LT/C             LT/NC            NL/NC
                           Mean     SD      Mean     SD      Mean     SD      Mean     SD
Time 1 Measures
  ELOC Storybook Reading   15.50   4.06     16.46   3.24     16.80   2.78     14.21   4.80
  ELOC Classroom Library    3.39    .80      3.33    .78      3.40    .70      3.42    .90
  ELOC Writing              3.73    .45      3.54    .54      3.95    .16      3.74    .45
  ELOC Print Environment    6.55   2.58      6.69   2.67      7.65   1.52      5.89   2.85
  ELOC Total Score         29.17   6.48     30.02   4.64     31.80   3.46     27.25   8.13
Time 2 Measures
  ELOC Storybook Reading   17.88   3.82     19.39   2.40     19.20   2.83     16.32   4.42
  ELOC Classroom Library    3.52    .10      3.27    .79      3.80    .42      3.53    .51
  ELOC Writing              3.85    .36      3.86    .32      3.75    .43      3.89    .36
  ELOC Print Environment    7.04   2.00      7.25   2.29      7.45   1.27      6.71   2.08
  ELOC Total Score         32.30   5.64     33.77   4.52     34.20   4.26     30.45   6.44

Table 18
Correlation Matrix for Total Sample

                      1.      2.      3.      4.      5.      6.      7.      8.      9.      10.     11.
1. Attendance        1.0
2. Site SES          -.04    1.0
   (n)               (365)
3. Home SES          -.03    .43**   1.0
   (n)               (343)   (345)
4. PN T1             -.02    .17**   .17**   1.0
   (n)               (365)   (345)   (345)
5. Alliteration T1   -.03    .18**   .18**   .35**   1.0
   (n)               (365)   (345)   (345)   (386)
6. Rhyming T1         .02    .14**   .14*    .35**   .46**   1.0
   (n)               (365)   (345)   (345)   (386)   (386)
7. PN T2             -.08    .11     .11     .54**   .34**   .40**   1.0
   (n)               (332)   (319)   (319)   (339)   (339)   (339)
8. Alliteration T2   -.02    .24**   .24**   .31**   .63**   .51**   .36**   1.0
   (n)               (332)   (319)   (319)   (339)   (339)   (339)   (339)
9. Rhyming T2         .02    .18**   .18**   .34**   .43**   .74**   .41**   .54**   1.0
   (n)               (332)   (319)   (319)   (339)   (339)   (339)   (339)   (339)
10. ELOC T1 Total    -.07    .10     .06     .11*    .15**   .29**   .24**   .25**   .25**   1.0
    (n)              (365)   (386)   (345)   (386)   (386)   (386)   (339)   (339)   (339)
11. ELOC T2 Total    -.04    .06     .03     -.07    .05     .20**   .09     .15**   .15**   .79**   1.0
    (n)              (365)   (381)   (345)   (381)   (381)   (381)   (339)   (339)   (339)   (381)

Notes: PN = Picture Naming; SR = Storybook Reading; PE = Print Environment; T1 = Time 1 Data Collection; T2 = Time 2 Data Collection; * p < .05; ** p < .001.

145 Relationships between IGDI measures also were inspected and revealed a high level of association between subtests and across time. For example, at Time 1, correlation coefficients between Picture Naming and both Alliteration and Rhyming were .35. Time 1 to Time 2 Picture Naming scores also were relatively stable with coefficients of .54 obtained. Similar relationships between the remaining IGDI subtests were noted as well. That is, when the relationship between IGDI scores and classroom environment were examined, notable relationships between both the ELOC Total scores and IGDI measures were found among Time 1 data with correlation coefficients of .11 for Picture Naming, .15 for Alliteration, and .29 for Rhyming obtained. Relationships between Time 1 ELOC Total scores and Time 2 IGDI scores also were examined. The outcome of this inquiry highlighted that ELOC Total Time 1 scores accounted for approximately 6% of the variance in IGDI Time 2 scores. Furthermore, a strong correlation between Time 1 and Time 2 ELOC Total scores was documented ( r = .79). Thus, stability over time for IGDI and ELOC measures was documented and direct relationships between environmental factors assessed with the ELOC and early literacy skills emerged. Hierarchical Linear Modeling Hierarchical Linear Modeling (HLM) was employed to account for the nested relationships present in these data (students within classrooms). Specifically, HLM is a statistical procedure that allows the researcher to estimate the effects of multiple layers of data through a process that partitions variance

both within and among classrooms (Bryk & Raudenbush, 1992). Control over data where independence is not assumed also is achieved (Bryk & Raudenbush, 1992). Thus, HLM extended a degree of control over variability in children's early literacy development that occurred because of differences between children (e.g., age, gender, race, home SES) and differences between classrooms (e.g., teacher's level of experience, number of students in class, site SES, participation in the ELO grant). A three-level model was employed. At the first level, individual child differences over time were defined. Next, at the second level, five variables were expected to explain variance in early literacy development. These were child/participants' (1) age, (2) gender, (3) race, (4) attendance at school, and (5) home socioeconomic status as defined by the median household income for the children's home zip codes. At the third level, the question of how ELO participation impacted early literacy development was examined with the following classroom-level variables entered as predictors: (1) teacher experience, or years teaching in early childhood settings, (2) teacher education, defined as post-secondary or no post-secondary education prior to ELO participation, (3) site SES, as defined by the median household income for the site zip code, (4) number of students in


classroom, (5) ELOC Total Time 1, (6) ELOC Total Time 2, (7) coaching participation, (8) number of Literacy Coach (LC) visits, and (9) total duration in minutes of all LC visits. With regard to this final question examining the impact that coaching had on literacy development, only students whose teachers participated in the ELO grant were included (i.e., LT/C and LT/NC). That is, data from students in the no literacy training and no coaching sample (NL/NC) were not included in this subsample. Modifications to the data prior to conducting the HLM analyses included dichotomizing several variables. First, gender was defined as male, with the number one representing male participants and zero representing female participants. Additionally, race was defined as White (1) and Not White (0). Teacher education also was categorized so that teachers who reported post-secondary education were coded with a one, whereas teachers who did not report education beyond the secondary level were identified with a zero. Treatment conditions were coded dichotomously as well. That is, ELO participation (LT/C and LT/NC) was identified with a one and non-ELO participation (NL/NC) was categorized with a zero. Finally, coaching was dichotomized in a similar fashion, with teachers who participated in the ELO grant and also received coaching (LT/C) identified with a one. In contrast, teachers who participated in the ELO grant but did not receive coaching (LT/NC) were coded with a zero. Unconditional models for the outcome variables were analyzed first.
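In schematic form, the unconditional (null) portion of such a model for child i in classroom j, and the intraclass correlation coefficient (ICC) derived from it, can be written as follows; the notation is illustrative and is not drawn from the original analysis output:

$$Y_{ij} = \gamma_{00} + u_{0j} + r_{ij}, \qquad u_{0j} \sim N(0, \tau_{00}), \quad r_{ij} \sim N(0, \sigma^{2})$$

$$\mathrm{ICC} = \frac{\tau_{00}}{\tau_{00} + \sigma^{2}}$$

Under this definition, an ICC near 1.0 indicates that most of the variability lies between classrooms, whereas an ICC near 0.0 indicates that most of the variability lies within classrooms.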


Specifically, intraclass correlation coefficients (ICC) were obtained that measured variability between and within classrooms for each outcome variable. Potential ICC values range from 0.0 to 1.0, with an ICC of 1.0 reflecting that all variability lies between classrooms rather than within classrooms. In contrast, an ICC of 0.0 would indicate that variability in the outcome variable could be attributed entirely to within-classroom factors, with no variability attributed to between-classroom factors. Intraclass correlation coefficients are presented in Table 19. Inspection of these values, which range from .54 to .73, indicates that a portion of the variability in Picture Naming is attributed to between-classroom factors. Further, a majority of the variability in both the Alliteration and Rhyming measures is attributed to between-classroom rather than within-classroom factors. Consequently, it appears that between-class differences are present in these data.

Table 19
Intraclass Correlation Coefficients

Dependent Measure    Reliability of the Intercept    ICC
Picture Naming       .5083                           .5399
Alliteration         .5841                           .6010
Rhyming              .7151                           .7305

Prior to examining the coefficients obtained, it is important also to frame these results within the following guidelines. First, although the slopes for some predictor variables may be significant, not all may be meaningful. Further, some predictors may account for between-child variance at entry; however, they may


not account for variance in the rate of growth in literacy development. Finally, variance estimates for intercepts and their respective p-values are included. Significant p-values for intercepts indicate whether additional variance is left to be explained. In contrast, the slope reflects the individual contribution of each predictor to the variance in the rate of change over time in the dependent measures. To control for alpha buildup, a Bonferroni correction was applied to reduce the potential of family-wise error. Specifically, when three dependent measures were examined, significant findings were those with p-values less than .0167 (i.e., .05/3). It was expected that one factor, time, would explain the within-child differences in the Level 1 model. Table 20 summarizes these findings for each dependent measure. Figures 9, 10, and 11 depict growth in Picture Naming, Alliteration, and Rhyming IGDI scores, respectively, for a random sample of three students (one from each condition: LT/C, LT/NC, and NL/NC). As can be noted in these figures, support for growth in literacy skill acquisition over time was documented. Further, it can be noted that significant variance in rate of growth is yet to be explained in all three outcome measures.
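To make the structure of these analyses concrete, the three-level growth model described above can be summarized schematically as follows. The notation is illustrative only and is not taken from the original analysis output; the remaining child-level and classroom-level predictors enter the level-two and level-three equations in the same way as the examples shown.

Level 1 (measurement occasions within children):
$$Y_{tij} = \pi_{0ij} + \pi_{1ij}(\text{Time})_{tij} + e_{tij}$$

Level 2 (children within classrooms):
$$\pi_{0ij} = \beta_{00j} + \beta_{01j}(\text{Age})_{ij} + \cdots + r_{0ij}, \qquad \pi_{1ij} = \beta_{10j} + \beta_{11j}(\text{Age})_{ij} + \cdots + r_{1ij}$$

Level 3 (classrooms):
$$\beta_{00j} = \gamma_{000} + \gamma_{001}(\text{ELO})_{j} + \cdots + u_{00j}, \qquad \beta_{10j} = \gamma_{100} + \gamma_{101}(\text{ELO})_{j} + \cdots + u_{10j}$$

For readers who wish to explore a model of this general form with open-source tools, the sketch below is a minimal, hypothetical example; the file name, column names, and formula are assumptions for illustration and do not reproduce the HLM software runs reported in this study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per child per assessment wave,
# with illustrative column names (not taken from the study's data files).
df = pd.read_csv("igdi_long.csv")

# Children are nested within classrooms. The classroom is the grouping
# factor; a child-level random intercept is added as a variance component
# so repeated measures on the same child are not treated as independent.
model = smf.mixedlm(
    "rhyming ~ time + age + male + white + home_ses + elo",
    data=df,
    groups="classroom_id",
    re_formula="~time",                       # classroom random intercept and time slope
    vc_formula={"child": "0 + C(child_id)"},  # child-level random intercepts
)
result = model.fit()
print(result.summary())
```

The child-level variance component captures within-child dependence across the two waves, while the classroom-level random intercept and slope capture the between-classroom differences suggested by the intraclass correlation coefficients reported above.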


Table 20
Level One Model – Within Child Characteristics
(Columns: Predictor; Average Intercept or Average Slope; p-value of Variance)

Picture Naming
  Intercept    4.73      <.0001
  Time         1.32**    <.0001
Alliteration
  Intercept    2.68      <.0001
  Time         0.93**    <.0001
Rhyming
  Intercept    4.73      <.0001
  Time         1.32**    <.0001

Note: * p < .05; ** p < .0167.

[Figure 9: line graph of IGDI Picture Naming scores at Time 1 and Time 2 for one student each from the LT/C, LT/NC, and NL/NC conditions.]
Figure 9. Growth Over Time in IGDI Picture Naming Scores for a Random Selection of Three Student/Participants.


[Figure 10: line graph of IGDI Alliteration scores at Time 1 and Time 2 for one student each from the LT/C, LT/NC, and NL/NC conditions.]
Figure 10. Growth Over Time in IGDI Alliteration Scores for a Random Selection of Three Student/Participants.

[Figure 11: line graph of IGDI Rhyming scores at Time 1 and Time 2 for one student each from the LT/C, LT/NC, and NL/NC conditions.]
Figure 11. Growth Over Time in IGDI Rhyming Scores for a Random Selection of Three Student/Participants.


152 To partition the variance further, five level two variables examining child characteristics were entered next into the HLM. Table 21 summarizes these findings. Overall, several significant relationships emerged. Notable at first is that age accounted for 27% of the between-child variance in entry-level Picture Naming, 17% of the between-child variance in entry-level Alliteration, and 26% of the between-child variance in entry-level Rhyming scores. Furthermore, age accounted for 20% and 32% of the between-child variance in growth in Alliteration and Rhyming scores, respectively. Race also emerged as contributing a significant amount to between-child variance for all three IGDI subtests; however, race was not documented to be a significant contributor to growth or the rate of literacy skill acquisition. Figures 12, 13, and 14 depict these findings over time with mean scores plotted by race (White and Non-White) for each of the three IGDI subtests. Home SES also was found to account for 6% of the explainable between-child variance in initial status and 18% of the variance in the growth of Alliteration scores: Higher scores were found within participants from higher SES households. As with the Level 1 model, significant variance in Picture Naming, Alliteration, and Rhyming has yet to be explained.


Table 21
Level Two Model – Child Characteristics
(Columns: Parameters for Fixed Effects; p-value; Between-Child Variance Accounted for at Entry; Between-Child Variance Accounted for in Growth)

Picture Naming
  Intercept          4.54**     <.0001
  Age                0.35**     <.0001    26.60%    0.00%
  Gender (Male)^a    -0.46      .4059     2.62%     0.00%
  Race (White)^b     5.19**     <.0001    26.97%    0.00%
  Attendance         -0.04      .0401     0.00%     0.00%
  Home SES           0.01       .9422     2.25%     0.00%

Alliteration
  Intercept          -8.44**    <.0001
  Age                0.18**     <.0001    17.39%    20.22%
  Gender (Male)^a    0.86*      .0209     1.74%     0.00%
  Race (White)^b     1.09**     .0113     7.83%     2.25%
  Attendance         -0.01      .3028     0.00%     11.24%
  Home SES           0.00**     .0049     6.09%     17.98%

Rhyming
  Intercept          -14.70**   <.0001
  Age                0.37**     <.0001    26.04%    31.52%
  Gender (Male)^a    1.08*      .0371     3.20%     5.43%
  Race (White)^b     2.08**     .0006     7.55%     0.00%
  Attendance         -0.03      .1230     0.00%     2.17%
  Home SES           0.00       .1963     0.51%     0.00%

Note: * p < .05; ** p < .0167; ^a For Gender, Male = 1 and Female = 0; ^b For Race, White = 1 and Non-White = 0.


[Figure 12: line graph of mean IGDI Picture Naming scores at Time 1 and Time 2 for White and Non-White participants.]
Figure 12. Growth Over Time by Race in IGDI Picture Naming.

[Figure 13: line graph of mean IGDI Alliteration scores at Time 1 and Time 2 for White and Non-White participants.]
Figure 13. Growth Over Time by Race in IGDI Alliteration.


[Figure 14: line graph of mean IGDI Rhyming scores at Time 1 and Time 2 for White and Non-White participants.]
Figure 14. Growth Over Time by Race in IGDI Rhyming.

Next, the seven variables that examined classroom characteristics were entered into the HLM. Table 22 summarizes these findings. Specifically, ELOC Total Time 1 scores accounted for 16% of the between-child variance in growth in Picture Naming. For this finding, higher ELOC Total scores were tied to greater rates of growth in Picture Naming scores. Teacher participation in the ELO grant also emerged as accounting for significant amounts of between-child variance in Alliteration scores. In addition, 3% of between-child growth in this IGDI subtest was accounted for by teacher ELO participation. The number of students assigned to a classroom was documented to account for significant amounts of variance, with 23% and 33% of between-child variance found in Alliteration and Rhyming scores, respectively. Additionally, the number of


students assigned to early childhood classrooms served to account for 18% of student-participants’ growth in Rhyming scores. Further inspection of the p-value for the intercept for Picture Naming indicated that the majority of the variance had been explained at the classroom level. Figures 15, 16, and 17 are provided to illustrate IGDI scores over time with mean scores partitioned by teacher ELO participation.

Table 22
Level Three Model – Classroom Characteristics – for ELO Participation
(Columns: Parameters for Fixed Effects; p-value; Between-Child Variance Accounted for at Entry; Between-Child Variance Accounted for in Growth)

Picture Naming
  Intercept                9.39       .2329
  Teacher Experience       -0.02      .8370    0.00%     0.00%
  Secondary Education^a    -1.71      .3294    0.00%     0.00%
  Site SES                 0.00       .8950    0.00%     0.00%
  Number of Students       0.42       .0798    0.00%     2.10%
  ELOC Total Time 1        0.42**     .0158    0.00%     16.00%
  ELOC Total Time 2        -0.19      .2927    0.00%     0.00%
  ELO^b                    2.02       .3223    0.00%     0.00%

Alliteration
  Intercept                -8.71      .0074
  Teacher Experience       0.05       .2535    0.00%     0.00%
  Secondary Education^a    0.67       .2643    0.00%     0.00%
  Site SES                 0.00       .2535    0.00%     0.00%
  Number of Students       0.38**     .0003    27.83%    0.00%
  ELOC Total Time 1        0.11       .0957    0.01%     0.00%
  ELOC Total Time 2        -0.03      .7231    0.00%     0.00%
  ELO^b                    3.03**     .0006    23.48%    3.10%

Rhyming
  Intercept                -11.32     .0339
  Teacher Experience       -0.01      .9804    0.00%     0.00%
  Secondary Education^a    -0.13      .9081    0.00%     0.00%
  Site SES                 0.00       .3956    0.00%     0.00%
  Number of Students       0.49**     .0038    33.00%    18.00%
  ELOC Total Time 1        0.17       .1138    5.00%     3.00%
  ELOC Total Time 2        0.04       .7724    0.00%     0.00%
  ELO^b                    2.06       .1306    0.23%     0.00%

Note: * p < .05; ** p < .0167; ^a For Teacher Education, Post-secondary = 1 and No Post-secondary = 0; ^b For ELO, ELO Participation (LT/C and LT/NC) = 1 and No ELO (NL/NC) = 0.


[Figure 15: line graph of mean IGDI Picture Naming scores at Time 1 and Time 2 for ELO participants and non-participants.]
Figure 15. Growth Over Time by ELO Participation in IGDI Picture Naming.

[Figure 16: line graph of mean IGDI Alliteration scores at Time 1 and Time 2 for ELO participants and non-participants.]
Figure 16. Growth Over Time by ELO Participation in IGDI Alliteration.


[Figure 17: line graph of mean IGDI Rhyming scores at Time 1 and Time 2 for ELO participants and non-participants.]
Figure 17. Growth Over Time by ELO Participation in IGDI Rhyming.

The contribution that coaching made to early literacy development was examined next. Table 23 contains the findings for the third level of this model. Within this model, no significant predictors emerged.

Table 23
Level Three – Classroom Characteristics – Model for Coaching Component for IGDI Measures
(Columns: Parameters for Fixed Effects; p-value; Between-Child Variance Accounted for at Entry; Between-Child Variance Accounted for in Growth)

Picture Naming
  Intercept                20.41**    .0001
  Teacher Experience       -0.03      .6421    0.00%    0.00%
  Secondary Education^a    -0.19      .8289    0.00%    0.00%
  Site SES                 0.00       .7078    0.00%    0.00%
  Number of Students       -0.03      .8255    0.00%    0.00%
  ELOC Total Time 1        0.17       .0639    0.50%    0.00%
  ELOC Total Time 2        -0.02      .8136    0.00%    0.00%
  Coaching^b               2.98       .2447    0.00%    0.00%
  LC Visits                1.00       .1686    0.00%    0.00%
  LC Time                  -0.01      .1712    0.00%    0.00%

Table continued on next page.


Table 23 (Continued)
Level Three – Classroom Characteristics – Model for Coaching Component for IGDI Measures
(Columns: Parameters for Fixed Effects; p-value; Between-Child Variance Accounted for at Entry; Between-Child Variance Accounted for in Growth)

Alliteration
  Intercept                -9.58**    .0127
  Teacher Experience       -0.03      .6365    0.00%    0.00%
  Secondary Education^a    0.08       .9112    0.00%    0.00%
  Site SES                 -0.00      .4894    0.00%    0.00%
  Number of Students       0.19       .1088    0.00%    0.00%
  ELOC Total Time 1        0.10       .2032    0.00%    0.00%
  ELOC Total Time 2        -0.04      .6549    0.00%    0.00%
  Coaching^b               2.04       .3592    0.00%    0.00%
  LC Visits                -0.02      .9732    0.00%    0.00%
  LC Time                  0.00       .7503    0.00%    0.00%

Rhyming
  Intercept                -20.41**   .0001
  Teacher Experience       -0.03      .6421    0.00%    0.00%
  Secondary Education^a    -0.19      .8292    0.00%    0.00%
  Site SES                 0.00       .7294    0.00%    0.00%
  Number of Students       -0.01      .8312    0.00%    0.00%
  ELOC Total Time 1        0.17       .0694    0.00%    0.00%
  ELOC Total Time 2        0.02       .8136    0.00%    0.00%
  Coaching^b               2.98       .2447    0.00%    0.00%
  LC Visits                1.00       .1686    0.00%    0.00%
  LC Time                  -0.01      .1712    0.00%    0.00%

Note: * p < .05; ** p < .0167; ^a For Teacher Education, Post-secondary = 1 and No Post-secondary = 0; ^b For Coaching, ELO with coaching (LT/C) = 1 and ELO with no coaching (LT/NC) = 0.

The next analyses addressed differences in dependent measures administered at only one point in time, namely Letter Naming Fluency (LNF) from the DIBELS and the Early Screening Inventory – Revised (ESI-K). Consequently, only a two-level HLM model was employed. Table 24 reflects the results of the Level 1 model, in which the following variables were entered: (1) gender, (2) race, (3) age, (4) home SES, and (5) attendance. To control for alpha buildup, a Bonferroni correction was applied to reduce the potential of family-wise error.


Since two dependent measures were examined in these analyses, significant findings were those where the obtained p-value was less than .025 (i.e., .05/2). Inspection of these tables reveals that both gender and age were significantly related to LNF scores, with higher scores documented in female participants. Similarly, older participants had higher LNF scores than younger students. In contrast, when relationships between the same predictors and ESI-K scores were inspected, no relationships significant at the .025 alpha level were found.

Table 24
Level One Model for LNF and ESI-K
(Columns: Parameters for Fixed Effects; p-value; % of Variance Explained)

LNF
  Intercept          -3.0917    .0272
  Gender (Male)^a    -.4010**   .0082    3.87%
  Race (White)^b     -.1863     .3252    1.54%
  Age                .0538**    .0116    3.41%
  Home SES           .0000      .4274    0.00%
  Attendance         -.0082     .2284    0.05%

ESI-K
  Intercept          -5.7207    .0309
  Gender (Male)^a    .6364*     .0260    16.00%
  Race (White)^b     .1751      .6243    2.00%
  Age                -.0079     .0654    0.00%
  Home SES           .0001      .1063    2.00%
  Attendance         -.0079     .5594    0.00%

Note: * p < .05; ** p < .025; ^a For Gender, Male = 1 and Female = 0; ^b For Race, White = 1 and Non-White = 0.

Variables hypothesized to account for variance at the classroom level in early literacy and developmental level were entered next. Table 25 offers a summary of the findings from these analyses. In short, no significant predictors emerged when an alpha level of .025 was applied.


Table 25
Level Two Model for ELO Participation for Letter Naming Fluency and ESI-K
(Columns: Parameters for Fixed Effects; p-value; % of Variance Explained)

LNF
  Intercept                -3.4930    .0552
  Teacher Experience       .0054      .7744    0.00%
  Secondary Education^a    -.1809     .5169    1.30%
  Site SES                 .0000      .4279    0.00%
  Number of Students       .0138      .7894    0.01%
  ELOC Time 1              .0678*     .0431    16.94%
  ELOC Time 2              .0170      .6223    0.02%
  ELO^b                    -.5336     .1707    0.03%

ESI-K
  Intercept                -7.3275    .0446
  Teacher Experience       .0004      .9842    0.00%
  Secondary Education^a    -.1966     .6151    0.87%
  Site SES                 .0000      .6899    0.00%
  Number of Students       .1395*     .0462    1.95%
  ELOC Time 1              .0311      .4753    0.09%
  ELOC Time 2              .0083      .9787    0.00%
  ELO^b                    -.8442     .1783    0.03%

Note: * p < .05; ** p < .025; ^a For Teacher Education, Post-secondary = 1 and No Post-secondary = 0; ^b For ELO, ELO Participation (LT/C and LT/NC) = 1 and No ELO (NL/NC) = 0.

The impact that coaching had on literacy development as assessed with the LNF also was explored. Inspection of the results depicted in Table 26 brings to light the lack of significant predictors in this model. Due to the limited sample size of ESI-K data that were gathered at ELO participant pre-kindergarten sites (n = 30, with n = 15 at LT/C sites and n = 15 at LT/NC sites), the impact of coaching was not examined for this variable.


Table 26
Level Three Model for Coaching Component for LNF
(Columns: Parameters for Fixed Effects; p-value; % of Variance Explained)

  Intercept                -4.1190    .4276
  Teacher Experience       -.0238     .6536    0.05%
  Secondary Education^a    -.2556     .6532    0.53%
  Site SES                 -.0000     .4867    0.00%
  Number of Students       .0026      .9863    0.00%
  Coaching^b               .1321      .4162    1.75%
  LC Visits                .0535      .8889    0.29%
  LC Time                  .0006      .8885    0.00%
  ELOC Time 1              .0469      .6407    0.22%
  ELOC Time 2              .0723      .4510    0.52%

Note: * p < .05; ** p < .025; ^a For Teacher Education, Post-secondary = 1 and No Post-secondary = 0; ^b For Coaching, ELO with coaching (LT/C) = 1 and ELO with no coaching (LT/NC) = 0.

Summary

In conclusion, several predictors emerged as accounting for significant amounts of variance in early literacy skill development. For Picture Naming, Alliteration, and Rhyming, the child characteristics of age and race were identified as significant predictors. In addition, home SES was noted to explain significant rates of change in children’s Alliteration scores. At the classroom level, ELOC Total Time 1 scores accounted for significant amounts of between-child variance in participants’ Picture Naming scores. Number of students in early childhood classrooms also accounted for significant amounts of variance in slopes of Rhyming scores. When coaching was added to the model, however, no significant predictors were documented in IGDI scores. From another perspective, gender and age emerged as significant predictors in accounting for


163 variance in participants’ scores on the Letter Naming Fluency subtest. No significant predictors at the child or classroom level emerged in relation to describing variance in participants’ scores on the ESI-K; however, caution should be tied to this finding as the sample size was limited.


Chapter Five
Discussion

Purpose of the Study

Two central goals drove this study: (1) to examine the effect that teacher participation in an early childhood education initiative (i.e., the Early Learning Opportunities [ELO] grant) had on the early literacy skills of students in these teachers’ classrooms, and (2) to explore the impact that providing a Literacy Coach to these teachers had on the early literacy development of the students they taught. This chapter will synthesize and discuss the results of the data analyses conducted to answer the research questions. Limitations of the current research will follow this section. Finally, suggestions for future research and implications of the findings conclude this chapter.

Response to Research Questions

Did Teacher Participation in the ELO Grant Affect Early Literacy Development in the Students They Taught? It was hypothesized that the students of teachers who participated in the ELO grant would demonstrate higher rates of literacy skill development than students of teachers who did not participate in the ELO grant. Limited support was documented for this hypothesis. That is, students of teachers who participated in the ELO grant demonstrated higher rates of growth in phonological awareness as measured by the Alliteration subtest of the Individual Growth and Development Indicators (IGDI). Additional predictors of literacy skill


acquisition also emerged. Child characteristics. At the child level, age and race accounted for significant amounts of variance in the rates of skill attainment in expressive language and phonological awareness. Specifically, older children experienced the highest rates of growth in phonological awareness as measured by the Alliteration subtest of the IGDI. The support for age as a significant predictor of early literacy development aligns with research by Missall and McConnell (2004) and McConnell, Priest, Davis, and McEvoy (2002), where early literacy development as measured by the IGDI was positively correlated with age. Race also appears to be a factor in children’s developing phonological awareness. Specifically, greater increases in phonological awareness skills were documented in children identified as White. This finding aligns with research (e.g., NRP, 2000; Neuman, Copple, & Bredekamp, 2000; Snow, Burns, & Griffin, 1998) that has explored reading achievement among elementary-aged African American students. Across this research, attention is drawn to achievement gaps between African American and White students, with the discrepancies widening in favor of White students as the years progress. With this in mind, it is not surprising that small but notable differences are found between these populations in the early years of skill acquisition. Further results also highlighted that household SES accounted for a significant amount of variance in the rate of phonological awareness development, as measured by the Alliteration subtest on the IGDI. Notably, this


166 relationship is aligned with research by Shaywitz, Shaywitz, Fletch, and Escobar (1990) who identified poverty as a leading obstacle in early literacy development for children. Classroom characteristics. When child factors were controlled, learning environment at Time 1 accounted for variance in growth in expressive language with classrooms richest in literacy-based stimuli and literacy supported interactions associated with greatest increases in expressive language. An additional variable, number of students in the early childhood classroom explained significant amounts of variance in growth in phonological awareness as measured by Rhyming scores with larger classrooms linked to higher rates of growth. Further, it appears that the overall richness of the literacy environment within which children were instructed prior to ELO participation served as significant predictors to their development across time. Perhaps not surprising, this finding has been touted by many researchers in reference to general attendance at even subaverage early childhood centers (particularly for minority children from low SES households) (e.g., National Institute of Child Health and Development, 1998; Ripple, Gillam, Chanana & Ziegler, 1999) and with regards to the level of literacy stimulation in the early childhood setting (e.g., Casey & Howe, 2002; Newman, Copple, & Bredekamp, 2000). The lack of support for this same relationship at Time 2 provokes thought. One hypothesis is that differences in literacy environments lessened when all teachers began infusing more literacy-based stimuli into their classrooms; however, children’s skill


167 development may not have had adequate time to respond to these changes. Long-term monitoring of children’s rates of skill acquisition would be needed to explore this avenue of thought with data documenting higher rates of skill attainment in later months offering support for hypothesis. Teacher participation in the ELO grant was a predictor of critical concern. Results did find support for it as a predictor of literacy growth in phonological awareness as measured by IGDI Alliteration scores. This finding did not emerge for the Picture Naming and Rhyming subtests, however. At the classroom level, teacher experience and education did not emerge as significant predictors of literacy development in children. These findings conflict with research that has tied teacher education, area of certification (Bredekamp, 1987; Buchanan, Burts, Bidner, White, & Charlesworth, 1998; Smith, 1997), and years of experience (Buchanan et al., 1998; Hart, Burts, & Charlesworth, 1997) to developmentally appropriate practices (DAP). Interestingly, years of teaching experience were inversely related to implementation of DAPs. That is, teachers who had graduated most recently and thus had fewer years of experience were exposed to instructional strategies that endorse more current guidelines for educating young children. Summary for question one. In summary, change was noted in literacy development and skill acquisition in children with significant rates of growth tied to ELO participation when phonological awareness was tapped. Other characteristics such as age, race, and the depth of literacy stimulation in the


168 classroom setting were significant as well. For example, significant differences in the rates of skill development were noted in older children, White children, and children from higher SES households. Also, the literacy environment within which students were exposed to on an ongoing basis, rather than just during a 15-week time span, appears important. Did Providing a Coach to Teachers While They Participated in the Training Have an Effect on Their Students’ Literacy Development? A hypothesis that students of teachers who received coaching from a Literacy Coach (LC) would demonstrate higher rates of literacy development than would students of teachers who did not receive coaching was posited, however, not supported by the findings. That is, coaching variables from both qualitative (having a coach or not) and quantitative (number of visits and duration of visits) perspectives did not account for significant amounts of variance in rate with which children attained early literacy skills. Findings that targeted the coaching component of the ELO grant again were aligned with a review of research where the impact of coaching in enhancing skill development was not documented or easily teased apart (e.g., Chan & Latham, 2004; Fountain, 2002; Streufert, 1984). One hypothesis for this current finding is that growth in children’s literacy skills is still forthcoming. That is, perhaps more time is required to assess the outcome of indirect effects (early literacy development) as opposed to direct effects (teacher skill attainment). Thus, although changes were noted in the infusion of literacy into the early


169 childhood settings, these environmental factors may not have immediate impact on children. Instead, they might create the right conditions for this development to occur – a process that may take longer than the 10 week data collection cycle in this current study. Long-term data collection monitoring children’s rates of skill acquisition would provide avenues for examining this hypothesis. What Effect Did Teacher Participation in the ELO Grant Have on Children’s Overall Cognitive, Language, and Motor Development, as Measured by the Early Screening Inventory – Revised (ESI-K)? Predictors at both child and classroom levels did not account for significant amounts of variance in developmental level. In general, however, female participants were noted to present with higher overall developmental level. This finding aligns with the research that reported gender differences often at the rate of 2:1 to 5:1, existed in clinical samples of students presenting with reading difficulties (Critchley, 1970; Finucci & Childs, 1981). However, when more representative samples were included, these differences diminished (Shaywitz, et al., 1990; Wadsworth, DeFries, Stevenson, Gilger, & Pennington, 1992). Thus, it is possible that the small sample size to whom the ESI-K was administered (n = 44) did not reflect an accurate representation of the developmental level of children. Overall Summary of Findings Essentially, results from this study provided limited support for the indirect effects of implementation of the ELO grant. For example, early literacy did


170 change over time in the children participating in this study. Thus, additional support for the assertions that learning to read begins well before the primary years of elementary school is found (National Reading Panel, 2000; Neuman, Copple, & Bredekamp, 2000; Snow, Burns, & Griffin, 1998). The factors that are tied to this development, however, did not gain overwhelming support. In the end, more questions arose than were answered. For instance, given more time for skill attainment, would the impact of the ELO implementation differ? Did teachers need more time to accommodate their new skills into their classrooms? If coaching had been provided more often, would the outcomes have been the same? Or, does this study serve as documentation that enhancing early literacy skill development in children cannot be addressed simply by training early childhood educators in research-based instructional strategies? Implications for School Psychologists and Early Childhood Educators Findings from this study can assist practitioners and researchers who work with early childhood educators. First, it was documented that classroom environment does play a critical role in children’s literacy development. Further, strategies suggested to enhance literacy development do not always include a book or the act of reading (Bredekamp & Copple, 1997; Burns, Griffin, & Snow, 1999). For example, although resources distributed to teachers participating in the ELO grant included books for the classroom library, additional items such as puppets for the dramatic play area, letters for the magnetic board, and markers, crayons, and clipboards also were provided. In addition, interactive games and


word play (e.g., clapping syllables in names, rhyming guessing games) were suggested as strategies for developing areas such as phonological awareness (Bredekamp & Copple, 1997; Schickendanz, 1999). In short, despite the knowledge that reading to children plays a significant role in the development of literacy skills, it is not the only critical part. Thus, practitioners in the field can assist early childhood educators to create environments that support the development of early literacy skills in the students they teach. From a practical perspective, practitioners would benefit by adding IGDI measures to their admittedly limited supply of tools for monitoring the development of early reading skills in children. With its ease of administration, strong psychometric properties, and ability to be used as often as needed, the IGDI is a sound method for assessing skill acquisition in early childhood. In addition, IGDI measures can be used to monitor response to interventions that target expressive language or phonological awareness in children older than five years of age. Perhaps most exciting to emerge from this study is that children are acquiring literacy skills during their preschool years. This finding supports the progression of early literacy skills that is described by researchers such as Snow and Ninio (1986), Teale (1978), and Morrow (1990), where experiences encountered even shortly after birth shape the course of early literacy development. In short, professionals working with young children or early childhood educators must focus their attention on how they can support the


172 development of early literacy skills in the children with whom they work. Limitations Although this study bears strengths, limitations were evident. First, there is some question as to whether teachers who opted to participate in the ELO grant were inherently different from teachers who did not apply for such an opportunity. Since teachers could not be selected randomly and were required to participate, it is inevitable that some characteristics of enthusiasm for teaching, interest in early literacy, or the desire to take advantage of the scholarship offered for this class as a step in earning a college degree cannot be controlled. Further, teachers who participated in the no treatment or control group were not blind to the goals of the study. That is, teachers were aware that the ELO grant focused its efforts on enhancing literacy development in young children. In addition, they were aware that their students’ literacy skills were being monitored. With this in mind, it is plausible that teachers in the control groups may have been more attuned to how literacy was addressed in their classrooms. Another limitation in this study was the absence of a baseline measure of children’s early literacy skills. This shortcoming is less disparaging, however, when the inclusion of the control group is noted. Nevertheless, use of the control group of teachers and students also bring with it some uncontrolled variance. That is, it is unfortunate that a wider array of childcare sites could not have been solicited for participation as a control group in the ELO grant evaluation component. However, given the grand undertaking of data collection that comes


with an evaluation project such as the one that is a part of the ELO grant, constraints on increasing this sample are significant (e.g., the need for more evaluators and the need for a more positive response from childcare centers). Furthermore, this study examined only short-term growth (i.e., across nine to ten weeks) in children’s acquisition of early literacy skills. Long-term changes also need to be evaluated. Also missing was a component to measure teachers’ desire to implement the strategies endorsed in the HeadsUp! Reading course. That is, it is possible that some teachers who accepted this opportunity may not have been driven by their own desire to increase their skills but instead participated in response to an administrator at their place of employment who strongly suggested he or she participate. Another limitation present in this study was the lack of access to actual students’ household SES. Although research has suggested that this variable at the individual level is not as valid an indicator of future academic success as is the use of SES at the group level (school/community), its inclusion would have added extra credence to this prior finding (e.g., Alwin & Thornton, 1984; Horn & O’Donnell, 1984; Snow, Burns, & Griffin, 1998; White, 1982). Additional concerns and potential limitations were noted regarding individual differences in the administration of the IGDI. That is, some questions arose regarding differences in the rates and fluency with which cards containing the stimulus items were administered. Notably, some variation between administrators due to their dexterity with manipulating the materials or rate at


174 naming the items on the cards might have been present. Although efforts to have the same data collectors gather both waves of data collection were made, it often was not feasible due to limited personnel and resources. Thus, control over error due to administrator differences was not obtained. The potential for test taking practice effects also may have hindered the accurate assessment of children’s early literacy skills. For many children participation in the two waves of IGDI data collection may have reflected their first experiences with tasks requiring a high degree of engagement and focused attention, particularly for the youngest participants. Furthermore, in most cases, IGDI data collectors were strangers to the children. In light of these concerns, it is possible that IGDI data reflected not pure indices of early reading skills but, instead, a combination of children’s levels of comfort with the task requirements, familiarity with the data collectors, and early literacy skill development. Generalization of these findings must be considered with the following caution in mind. Specifically, demographic data were not accessible for students from whom consent was not received. Therefore, analysis to determine differences between those students from whom consent was received and those from whom it was not cannot be conducted. Given this limitation, only tentative hypotheses can be made as to differences in these two populations (i.e., children with and without consent). For example, it is possible that children from whom consent was not received were reared in larger or single parent households where caregivers may have limited time to follow up on their children’s school


activities and paperwork. Teacher commitment and accountability for disseminating and reclaiming the consents also may have had an impact. Specifically, in classrooms where the teachers also received coaching (LT/C), Literacy Coaches (LCs) prompted teachers to distribute and remind parents about the consents. Also notable was the significant difference that was documented in the return rate of consents across faith-based and non-faith-based early childhood sites. A tentative hypothesis for this finding may be that faith-based early childhood sites are less open to research or data collection from agencies outside their own campus.

Suggestions for Future Research

Future research should address the shortcomings noted in the current study. First, it is recommended that this cohort of students’ literacy development be monitored over a longer duration of time, perhaps into the early years of elementary school. The goal of this effort would be to answer the question of whether the rates of early literacy skill acquisition accelerated after longer exposure to the instructional strategies implemented by their teachers. Second, the question of whether short-term benefits waned over time, as has been noted in follow-up research for other early intervention pre-kindergarten programming (Marcon, 1999; Parker, Boak, Griffin, Ripple, & Prey, 1999), should be addressed. Exploring the provision of feedback reflecting student skill attainment to the early childhood educators also is worthy of future attention. In question is whether different outcomes would have been found if LCs had provided


176 graphs depicting individual student progress or overall classroom growth to teachers. Next, would these differences filter downward into enhanced literacy skill development in the teacher’s students? Future research addressing these questions is needed. A more accurate measure of class size also might be beneficial. Specifically, the variable representing the number of students in the classroom utilized in this study reflected the capacity of the classroom setting rather than the typical number of students present in the classroom. For example, a classroom might have a capacity of 18 children; however, on average only 12 children are typically present on any given day. This discrepancy provides a different picture of the early childhood setting. Furthermore, this number did not take into consideration the corresponding number of adults typically present in the classroom or teacher:pupil ratio (TPR). That is, one classroom might have a capacity of 18 children with only one teacher assigned to be in the classroom with the students (i.e., an 18:1 TPR). On the other hand, another setting might have a student capacity of 18 but have two teachers assigned to be co-teaching the class (i.e., a 9:1 TPR). Future research should consider the use of these alternative ways of quantifying class size (e.g., TPR). Differences in types of early childhood education sites also may have impacted the results. That is, are differences in the types of settings (private, Head Start, or faith-based) tied to the rate of skill acquisition in children? Return rates of consents did differ based on the type setting. Does this difference also


impact children’s development of early reading skills? An additional line of thought also questions whether the IGDI is ready for use as a tool for assessing early literacy skill acquisition in children. Based on information that questioned the consistency of its administration, it is suggested that future research focus on the development of an automated or computer-assisted IGDI measure that would control for administrator differences. A scoring and graphing program also could be tied to the IGDI software program and further ease the teacher feedback process. Limited administrator training requirements based on this automated version also would prove helpful, particularly when large-scale implementation and on-going monitoring are considered. With this computerized IGDI tool at hand, the question of whether data gathered during the administration of the IGDI parallel, supplement, or duplicate those collected with other readiness measures such as the ESI-K could then be explored. Future research identifying the link between individual subscales of the ESI-K, such as the Language and Cognition component, also warrants attention. If the IGDI and ESI-K, for example, appear to be tapping the same constructs, additional research targeted at identifying the most valid and reliable measure is needed. A review of LC coaching notes brought an additional concern to light. That is, LC notes did not consistently document that the modeling component of the Early Literacy and Learning Model (ELLM) occurred. It is important to note,


however, that this does not indicate that it did not occur. Instead, it may indicate a lack of notation regarding its use. Based on these concerns about documentation of the presence or absence of the ELLM components, it is suggested that alternate measures to document the coaching sessions be employed. For example, audio- or videotaping these sessions and then rating them on a scale that indicated alignment with the ELLM model would have been helpful in assessing the treatment integrity of the coaching component. The ELOC also should be examined and refined in an effort to describe the environment more precisely. One suggestion is to make the scale more sensitive to change over time by rewording items so that frequency counts could be obtained. For example, the number of times a teacher praised a student’s efforts at reading a word, the number of posters on the walls of the classroom setting, or a 5-point Likert response (from never to sometimes to always) tied to the item that asked whether teachers linked the book they were reading to children’s background knowledge would be interesting additions to the scale. Future research also should identify the specific items from the ELOC that are linked to higher rates of literacy growth in children. For example, what is more beneficial for a classroom library: having many books of the same genre, or having a wide assortment of books? Finally, changes in implementation of the ELO grant need to be addressed and then evaluated for their impact on student literacy growth. One suggestion highlights the need for follow-up IGDI data with the main focus of data collection


179 occurring after the HUR curriculum is completed. For example, the HUR course could be offered to early childhood educators across the summer months. During teachers’ participation in the HUR class, preliminary student baseline data could be collected utilizing measures such as the IGDI and ESI-K. Notably, the main thrust of data collection examining student literacy skills would not occur until after the teachers had completed the HUR course. Assessment of teachers’ knowledge and skills also would be gathered across the HUR class with mastery of skills driving the instruction and coaching sessions. Next, comparison data using the IGDI could occur monthly across the fall and winter semesters. Tools to assess the maintenance of teachers’ skills (e.g., ELOC) also would be completed during the fall wave of data collection and used to identify skills needing attention from LCs working with these teachers. Conclusion This study served as a critical piece in evaluating the effects that early instruction has on the acquisition of early reading skills. Research has documented that quality, early instruction is the “royal roadway” to later success not only in learning to read but also in reading to learn (National Reading Panel, 2000; Neuman, Copple, & Bredekamp, 2000; Snow, Burns, & Griffin, 1998). Thus, an effort to identify what instruction is best is vital. Most notably, this study examined not the effect on the teachers who attended the training class but, instead, how it impacted the students of these teachers. At the child level, gender, age, race, and home SES were identified as factors explaining significant


amounts of variance in the rate of early literacy skill acquisition, with older, White children from higher SES households displaying the highest rates of skill acquisition. On the other hand, children identified as non-White who were reared in lower SES households fared the worst, with the lowest scores on the literacy measures at the start of the grant. Thus, documentation of an achievement gap during early childhood was found. At the classroom level, classroom environment, number of students, and teacher ELO participation appear to have played a role in the attainment of early literacy skills. The provision of a Literacy Coach to early childhood educators did not emerge as accounting for significant amounts of variance in the literacy development of students in the teachers’ classrooms. That is, neither the qualitative nor the quantitative aspects of this component accounted for significant amounts of variance in the rates of development for expressive language, phonological awareness, and phonemic awareness as they were measured in the present study. In conclusion, implementation of the ELO grant and resources did impact, to a limited degree, the early literacy development of the students of the participating teachers. Additional assistance, such as that provided through a coaching model with an LC, did not appear to enhance literacy skill acquisition in the students of these teachers, however. Notably, some interesting findings and avenues for future research related to classroom environments and child characteristics also emerged. Importantly, this study serves as documentation


181 that children are acquiring early literacy skills with attention drawn to the finding that they are doing this at an age well before their entrance into formal educational settings where reading instruction traditionally begins. In light of this, efforts seeking to make early learning experiences as productive as possible gain merit particularly those that target non-White children from low SES households where potential for closing the achievement gap exists.


182 REFERENCES Adams, M.J. (1990). Beginning to Read: Thinking and Learning about Print. Cambridge, Mass: Bradford Books. Alwin, D.F., & Thornton, A. (1984). Family origins and the schooling process: Early versus late influence of parental characteristics. American Sociological Review, 49, 784-802. Bates, E., O’Connell, B., & Shore, C. (1987). Language and communication in infancy. In J.D. Osofdky (Ed.), Handbook of infant development (2nd ed., pp. 149-203). New York: Wiley. Berk, L.E. (1994). Child Development – Third Edition. Needham Heights, Mass: Allyn and Bacon. Berk, L.E. (2003). Child Development – Sixth Edition. Needham Heights, Mass: Allyn and Bacon. Berk, L.E., & Winsler, A. (1995). Scaffolding children’s learning: Vygotsky and early childhood education. Washington, DC: National Association for the Education of Young Children. Berlin, L.J., Brooks-Gunn, J., McCarton, C., & McCormick, M.C. (1998). The effectiveness of early intervention: Examining risk factors and pathways to enhanced development. Prevention Medicine, 27 (2), 238-245. Bond, G.L., & Dykstra, R. (1967). The cooperative research program in firstgrade reading instruction. Reading Research Quarterly, 2 5-142. Bowman, B.T., Donovan, M.S., Burns, M.S. (Eds.) (2000). Eager to learn: Educating our preschoolers. Washington, DC: National Academy Press. Bredekamp, S., & Copple, C. (1997). Developmentally Appropriate Practice in Early Childhood Programs: Revised edition. Washington, DC: National Association for the Education of Young Children. Bruer, J.T. (1997). Education and the brain: A bridge too far. Educational Researcher, 26 4-16.


183 Buchanan, T.K., Burts, D.C., Bidner, J., White, F., & Charlesworth, R. (1998). Predictors of the developmental appropriateness of the beliefs and practices of first, second, and third grade teachers. Early Childhood Research Quarterly, 13, 459-483. Burns, M.S., Griffin, P., Snow, C.E. (Eds.) (1999). Starting Out Right: A Guide to Promoting Children’s Reading Success. Washington, DC: National Academy Press. Bus, A.,van IJzendoorn, M., & Pellegrini, A. (1995). Joint book reading make for success in learning to read: A meta-analysis on intergenerational transmission of literacy. Review of Educational Research, 65, 1-21. Byrd, R.S., Weitzman, M., & Auinger, P. (1997). Increased behavior problems associated with delayed school entry and delayed school progress. Pediatrics, 100 654-661. Byrk, A.S., & Raudenbusch, S.W. (1992). HLM: Applications and data analysis methods Newbury Park, CA: Sage. Caldas, S.J., & Bankston, C. (1997). Effect of school population socioeconomic status on individual academic achievement. The Journal of Educational Research, 90, 269-277. Calfee, R.C., Lindamood, P.E., & Lindamood, C.H. (1973). Acoustic-phonetic skills and reading – kindergarten through 12th grade. J ournal of Educational Psychology, 64, 293-298. Cameron, M.B., & Wilson, B.J. (1990). The effects of chronological age, gender, and delay of entry on academic achievement and retention: Implications for academic redshirting. Psychology in the Schools, 27 26-263. Campbell, F., & Ramey, C. (1994). Effects of early intervention on intellectual and academic achievement: A follow-up study of children from low-income families. Child Development, 65, 684-698. Carlton, M.P. (2000). Motivation and school readiness in kindergarten children. Dissertation Abstracts International, 60, 3899. Carlton, M.P., & Winsler, A. (1999). School readiness: The need for a paradigm shift. School Psychology Review, 28 338-352.


184 Carta, J.J., Greenwood, C.R., & Walker, D. (2004). Early Communication Indicator (ECI) training manual. Kansas City, Kansas: Early Childhood Research Institute on Measuring Growth and Development. Casey, A., & Howe, K. (2002). Best practices in early literacy skills. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 721-735). Bethesda, MD: National Association of School Psychologists. Chan, C., & Latham, G.P. (2004). The relative effectiveness of external, peer, and self-coaches. Applied Psychology: An international Review, 53, 260278. Clay, M. (1991). Becoming Literate. Portsmouth, NH: Heinemann. Cole, K.N., Dale, P.S., & Mills, P.E. (1991). Individual differences in language delayed children’s responses to direct and interactive preschool instruction. Topics in Early Childhood Special Education, 11, 99-124. Coolahan, K., Fantuzzo, J., Mendez, J., & McDermott, P. (2000). Preschool peer interactions and readiness to learn: Relationships between classroom peer play and learning behaviors and conduct. Journal of Educational Psychology, 92 458-465. Cree, V.E., & Macaulay, C. (2000). Transfer of Learning in Professional and Vocational Education. New York: Rutledge. Critchey, M. (1970). The Dyslexic Child Springfield, IL; Charles C. Thomas. Deitz, C., & Wilson, B.J. (1985). Beginning school age and academic achievement. Psychology in the Schools, 31 5-12. Dennebaum, J.M., & Kulberg, J.M. (1994). Kindergarten retention and transitional classrooms: Their relationship to achievement. Psychology in the Schools, 31, 5-12. Deno, S.L. (1997). Whether thou goest…Perspectives on progress monitoring. In J. W. Lloyd, E.J. Kameenui, & D. Card (Eds.), Issues in educating students with disabilities (pp. 77-99). Mahwah, NJ: Lawrence Erlbaum. Denton, D.R. (2002). Focus on quality: Prekindergarten programs in SREB states. Southern Regional Education Board Atlanta, Ga.: Southern Regional Education Board. Donovan, S., & Cross, C.T. (2002). Minority Students in Special and Gifted Education. Washington, DC: National Academy Press.


185 Duncan, Brooks-Gunn, J., & Klebanov, P.K. (1994). Economic deprivation and early childhood development. Child Development, 65, 296-318. Durkin, D. (1966). Children Who Read Early New York: Teachers College Press. Dynamic Indicators of Basic Early Literacy Skills (2000-2002). Retrieved May 20, 2003 from http://dibels.uoregon.edu/ Feltz, D.L., Chase, M.A., Moritz, S.E., Sullivan, P.J. (1999). A conceptual model of coaching efficacy: Preliminary investigation and instrument development. Journal of Educati onal Psychology, 91, 765-776. Finucci, J.M., & Childs, B. (1981). Are there really more dyslexic boys than girls? In A. Ansara, N. Gershwind, A. Galaburda, M. Alvert, and N. Gartrell, (Eds.) Gender Differences in Dyslexia (p. 1-9). Towson, MD: Orton Dyslexia Society. Fleishman, E.A. (1987). Foreword in S.M. and J.D. Hagman (Eds) Transfer of Learning San Diego: Academic Press. Fountain, C., Cosgrove, M., Wiles, D., Wood, J., & Senterfitt, H. (2001). The Early Literacy and Learning Model. Retrieved January 13, 2004 from http://www.unf.edu/dept/fiellm/pdf Fountain, C. (2002). ELLM 2001/2002 Annual Report. Jacksonville, FL: Florida Institute of Education and the University of North Florida. Fox, N.A, Calkins, S.A., & Bell, M.A. (1994). Neural plasticity and development in the first two years of life: Evidence from cognitive and socioemotional domains of research. Developmental and Psychopathology, 6, 677-696. Gagne, R.M. (1970). The conditions of learning (2nd ed.). Austin, TX: Holt, Rinehart, & Winston. Good, R.H., & Kaminski, R.A. (2002). DIBELS Oral Reading Fluency Passages First through Third Grades (Technical Report No. 10). Eugene, OR: University of Oregon. Good, R.H., Gruba, J., & Kaminiski, R.A. (2002). Best practices in using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) in an outcomes-driven model. In A. Thomas and J. Grimes, (Eds.) Best Practices in School Psychology IV (p.699-720). National Association of School Psychologists: Bethesda, MD.


186 Good, R.H., Gruba, J., & Kaminiski, R.A. (2001). The importance and decisionmaking utility of a continuum of fluency-based indicators of foundational reading skills for third grate high-stakes outcomes. Scientific Studies of Reading, 5, 257-288. Graue, M.E. (1993). Ready for What? Constructing meanings of readiness for kindergarten Albany, NY: State University of New York Press. Hart, C.H., Burts, D.C., & Charlesworth, R. (1997). Integrated curriculum and developmentally appropriate practice: Birth to age 8. New York: SUNY Press. Harvey, V., & Struzziero, J. (2000). Effective Supervision in School Psychology Bethesda, MD: National Association of School Psychologists. Hatcher, L., & Stepanski, E.J. (1994). A Step-by-Step Approach to Using the SAS System for Univariate and Multivariate Statistics. Cary, NC: SAS Institute. Hillery, J.M., & Wexley, K.N. (1974). Participation effects in appraisal interviews conducted in a training situation. Journal of Applied Psychology, 59, 168171. Hogan, K., & Pressley, M. (Eds.). (1997). Scaffolding student learning: instructional approaches and issues. Cambridge, MA: Brookline. Horn, W.F., & O’Donnell, J.P. (1984). Early identification of learning disabilities: A comparison of two methods. Journal of Educational Psychology, 76, 11061118. Huffman, S., Mehlinger, S., & Kervian, A. (2000). Off to a good start. The Child Mental Health Foundations and Agencies Network. Huston, A.C., McLoyd, V.C., & Carcia Coll, C. (1994). Children and poverty: Issues in contemporary research. Child Development, 65, 2 75-282. Joyce, B.R., & Showers, B. (1982). The coaching of teaching. Educational Leadership, 40, 4-10. Joyce B.R., & showers, B. (2002). Student Achievement Through Staff Development (Third Edition). Alexandria, VA: Association for Supervision and Curriculum Development.


187 Justice, L.M. (2002). The Early Literacy Observation Checklist. University of Virginia MCGuffrey Reading Center Retrieved February 12, 2004 from http://www.nde.state.ne.us.ECH/surveyEnglish.pdf. Kagan, S.L. (1990). Readiness 2000: Rethinking rhetoric and responsibility. Phi Delta Kappan, 72, 272-279. Kagan, S. (1994). Cooperative Learning. San Clemente, CA: Kagan. Kagan, S.L., Moore, E., & Bredekamp, S. (1995). Reconsidering children’s early development and learning: Toward common views and vocabulary. Washington, DC: National Education Goals Panel. Kalff, A.C., Kroes, M., Vles, J.S.H., Hendriksen, J.G.M., Feron, F.J.M., Steyaert, J., Van Zeben, T.M.C.B., Jolles, J., & Can Os, J. (2001). Neighborhood level and individual socioeconomic effects on children’s problem behavior: A multilevel analysis. Journal of Epidemiology & Community Health, 55 246-250. Kaminski, R.A., & Good, R.H. (1996). Toward a technology for assessing basic early literacy skills. School Psychology Review, 25 215-227. Karweit, N., & Wasik, B. (1996). The effects of story reading programs on literacy and language development of disadvantaged pre-schoolers. Journal of Education for Students Placed At-Risk, 4, 319-348. Kendall, F.E. (1996). New approaches to the education of young children. New York: Teachers College Press. Kids Count 2003 Data Book Online, (n.d.) Retrieved July 20, 2003 from http://www.aecf.org/cgi-bin/kc.cgi?action=profile&area=United+States/ Kochanek, T.T., Kabacoff, R.I., & Lipsitt, L.P. (1990). Early identification of developmentally disabled and at-risk preschool children. Exceptional Child, 56 (6), 528-538. Kundert, D.K., May, D.C., & Brent, D. (1995). A comparison of students who delay kindergarten entry and those who are retained in grades K5. Psyhcology in the Schools, 32 202-209. Lee, V., Brooks-Gunn, J., Schnur, E., & Liaw, F. (1990). Are Head Start effects sustained? A longitudinal follow-up comparison of disadvantaged children attending Head Start, no preschool, and other preschool programs. Child Development, 61 (2), 495-507.

PAGE 200

188 Leinhardt, G. (1980). Transition Rooms: Promoting maturation or reducing education? Journal of Educational Psychology, 72, 55-61. Levine, M.D., Swartz, C., Reed, M., Hill, M., Wakely, M., Lind, S., & Marincic, L. (1997). “Schools Attuned” syllabus. Chapel Hill, NC: University of North Carolina School of Medicine. Luze, G.J., Linbarger, D.L., Greenwood, C.R., Carta, J.J., Walker, D., Leitschuh, C., & Atwater, J.B. (2001). Developing a General Outcome Measure of growth in expressive communication of infants and toddlers, School Psychology Review, 30, 383-406. Ma, X., & Klinger, D.A. (2000). Hierarchical linear modeling of student effects on academic achievement. Canadian Journal of Education, 25, 41-55. Maclean, M., Bryant, P., & Bradley, L. (1987). Rhymes, nursery rhymes, and reading in early childhood. Merrill-Palmer Quarterly, 33, 255-281. Marcon, R.A. (1999) Positive relationships between parent-school involvement and public school inner-city preschoolers’ development and academic performance. School Psychology Review, 28, 395-412. Marzolf, D.P., & DeLoache, J.S. (1994). Transfer in young children’s understanding of spatial representations. Child Development, 65, 1-15. Masonheimer, P.E., Drum, P.A., & Ehri, L.C. (1984). Does environmental print identification lead children into word reading? Journal of Reading Behavior 16, 257-271. May, D.C., Kundert, D.K., Nikoloff, O., Welch, E., Garrett, M., & Brent, D. (1994). School readiness: An obstacle to intervention and inclusion. Journal of Early Intervention, 18, 290-301. May, D.C., & Welch, E.L. (1984). The effects of developmental placement and early retention on children’s later scores on standardized tests. Psychology in the Schools, 21 381-385. McCarthy, J. (1990). The content of early childhood teacher education programs: Pedagogy. In B. Spodek & O.N. Saracho (Eds.), Yearbook in Early Childhood Education (pp.145-166). Greenwich, CT: JAI Press.

PAGE 201

189 McCarton, C., Brooks-Gunn, J., Wallace, I., Bauer, C., Bennett, F., Bernbaum, J., Broyles, R., Casey, P., McCormick, M., Scott, D., Tyson, J., Tonascia, J., & Meinert, C. (1997). Results at age 8 years of early intervention for lowbirth-weight premature infants. JAMA, 277(2) 126-132. McConnell, S., McEvoy, M., Carta, J.J., Greenwood, C.R., Kaminski, R., Good, R., Shinn, M. (1998, April). Research and development of Individual Growth and Development Indicators for children between birth to age eight – Technical Report # 4. Minneapolis, MN: Early Childhood Research Institute on Measuring Growth and Development. McConnell, S.R., Priest, J.S., Davis, S.D., & McEvoy, M.A., (2002). Best practices in measuring growth and development for preschool children. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology (4th ed., Col. 2, pp. 1231-1246). Washington DC: National Association of School Psychologists. McConnell, S.R., Priest, J.S., Davis, S.D., & McEvoy, M.A., (2000). Best practices in measuring growth and development for preschool children. Retrieved January 13, 2004 from http://education.umm.edu/CEED/projects/html. McCathren, R.B., Warren, S.F., & Yoder, P.J. (1996). Prelinguistic predictors of later language development. In K.N. Cole, P.S. Dale, & D.J. Thal (Eds.), Assessment of communication and language (pp. 57-74). Baltimore, MD: Brookes. McGee, L., Lomax, R., & Head, M. (1988). Literacy’s Beginnings Boston, Mass: Allyn & Bacon. McLaughlin, M.W., & Marsh, D.D., (1978). Staff development and school change. Teachers College Record, 80, 70-94. McLoyd, V.C. (1998). Socioeconomic disadvantage and child development. American Psychologist, 53, 185-204. Meisels, S.J. (1999) Assessing readiness. In R.C. Pianata & M.J. Cox (Eds.), The Transition to kindergarten (pp. 39-66). Baltimore, MD: Paulh Brookes Publishing Co. Meisels, S.J., Henderson, L.W., Liaw, F., Browning, K., & TenHave, T. (1993). New evidence for the effectiveness of the Early Screening Inventory. Early Childhood Research Quarterly, 8, 327-346.

PAGE 202

190 Meisels, S.J., Marsden, D.B., Wiske, M.S., & Henderson, L.W. (2003). The Early Screening Inventory-Revised Manual New York, NY: Pearson Education, Inc. Missall, K.N., & McConnell, S.R. (2004, April). Psychometric Characteristics of Individual Growth & Development Indicators: Picture Naming, Rhyming, And Alliteration. Technical Report. Minneapolis, MN: Early Childhood Research Institute on Measuring Growth and Development. Morgan, R.L., Menlove, R., Salzbert, C., & Hudson, P. (1994). Effects of peer coaching on the acquisition of direct instruction skills by low-performing pre-service teachers. The Journal of Special Education, 28, 59-76. Morrow, L.M. (1990). Preparing the classroom environment to promote literacy during play. Early Childhood Research Quarterly, 5, 537-554. Morrow, L.M., & Weinstein, C. (1986). Encouraging voluntary reading: The impact of a literature program on children’s use of library centers. Reading Research Quarterly, 21, 330-346. National Association for the Education of Young Children. (1998). Accreditation criteria and procedures Washington, DC: Author. National Center for Education Statistics. (2000). America’s Kindergarteners: Early Childhood Longitudinal Study. Washington, DC: US Government Printing Office. National Education Goals Panel. (1999). The National Education Goals report: Building a nation of learners, 1999. Washington, DC: US Government Printing Office. National Head Start Association. T he Early Years of HeadsUp! Reading: A Report from the National Head Start Association. Alexandria, VA: National Head Start Association. Retrieved 1/11/04 from www.huronline.org. National Institute of Child Health and Human Development Early Child Care Research Network. (1998). Early child-care and self-control, compliance, and problem behavior at 24 and 36 months. Child Development, 69(4), 1145-1170. National Reading Panel. (2000). Teaching Children to Read U.S. Department of Health and Human Services. Washington, DC.

PAGE 203

191 Neubert, G.A. (1988). Improving Teaching Through Learning. Bloomington, IN: Phi Delta Kappa Educational Foundation. Neuman, S.B., Copple, C., & Bredekamp, S. (2000). Learning to Read and Write: Developmentally Appropriate Practices for Young Children. Washington, DC: NAEYC. Neuman, S.B., & Roskos, K. (1997). Literacy knowledge in practice: Contexts of participation for young writers and readers. Reading Research Quarterly, 32, 10-32. Neuman, S.B., & Seung-hee, C. (2001). Teacher Education in Early Literacy: The HeadsUp! Reading Evaluation Ann Arbor, MI: University of Michigan and the Center for the Improvement of Early Reading Achievement. No Child Left Behind Act of 2001., H.R. 1, 107th Congress (2001). Offord, D. R., Boyle, M.H., & Jones, B.R. (1987). Psychiatric disorder and poor school performance among welfare children in Ontario. Canadian Journal of Psychiatry, 32, 518-525. Parker, F.L., Boak, A.Y., Griffin, K.W., Ripple, C., & Peay, L. (1999). Parent-child relationship, home learning environment, and school readiness. School Psychology Review, 28, 413-425. Pinellas Early Childhood Collaborative. (2002). Annual Pinellas County Child Care Assessment Report Priest, J., Davis, K., McConnell, S., McEvoy, M., & Shin, J. (1999, December). Individual Growth and Development Indicators of preschoolers’ “expressing meaning” skills: Follow that trajectory! Presentation at the annual meeting of the Division for Early Childhood, Council for Exceptional Children, Washington, D.C. Priest, J.S., McConnell, S.R., Walker, D. Carta, J.J., Kaminski, R.A., McEvoy, M.A., Good, R.H., Greenwood, C.R., & Shinn, M.R. (2001). General growth outcomes for young children: Developing a foundation for continuous progress measurement. Journal of Early Intervention, 24, 163180. Ramey, S.L. (1999). Head Start and preschool education: Toward continued improvement, American Psychologist, 54, 344-346.

PAGE 204

192 Ripple, C.H., Gilliam, W.S., Chanana, N., & Zigler, E. (1999). Will fifty cooks spoil the broth? The debate over entrusting Head Start to the states. American Psychologist, 54 327-343. Sarason, S.B. (1991). The Predictable Failure of Educational Reform. San Francisco: Jossey-Bass. Schickendanz, J.A. (1999). Much More Than the ABC’s: The Early Stages of Reading and Writing. Washington, DC: National Association for the Education of Young Children. School Readiness Uniform Screening System (n.d.). Retrieved from http://www.firn.edu/doe/sas /srushome.htm/ Schore, A.N. (1994). Affect regulation and the origin of self: The neurobiology of emotional development. Hillsdale, NJ: Erlbaum. Shaw, R. & Shaw, D. (2003). DIBELS Oral Reading Fluency indicators of third grade reading skills for Colorado State Assessment Program (C ASP). (Technical Report). Eugene, OR: University of Oregon. Shaywitz, S.E., Shaywitz, B.A., Fletcher, J.M., & Escobar, M.D. (1990). Prevalence of reading disability in boys and girls: Results of the Connecticut longitudinal study. Journal of the American Medical Association, 264, 998-1002. Shepard, L.A. (1989). A review of resear ch on kindergarten readiness. In L.A. Shepard & M.L. Smith (Eds.), Flunking grades: Research and policies on retentio n (pp. 64-78). London: The Falmer Press. Shepard, L.A., & Smith, M.L. (1987). Effects of kindergarten retention at the end of first grade. Psychology in the Schools, 24 346-357. Shepard, L.A., & Smith, M.L. (1989). Introduction and overview. In L.A. Shepard & M.L. Smith (Eds.), Flunking grades: Research and policies on retention (pp. 1-15). London: The Falmer Press. Shinn, M.R. (Ed.). (1989). Curriculum-based measurement: Assessing special children New York: The Guilford Press. Showers, B. (1982a). A Study of Coaching in Teacher Training. Eugene, OR: Center for Educational Policy and Management. Showers, B. (1982b). Transfer of Training: The Contribution of Coaching. Eugene, OR: Center for Educational Policy and Management.

PAGE 205

193 Showers, B. (1984). Peer Coaching: A Strategy for Facilitating Transfer of Training Eugene, OR: Center for Educational Policy and Management. Simmons, D.C., Kame’enui, E.J., Good, R.H., Harn, B.A., Cole, C., & Braun, D. (2002). Building, implementing, and sustaining a beginning reading improvement model: Lessons learned school by school. In M.R. Shinn, H.M. Walker, & G. Stoner (Eds.), Interventions for academic and behavior problems II: Preventative and remedial approaches (pp. 537-570). Bethesda, MD: National Association of School Psychologists. Smith, K.E. (1997). Student teachers’ beliefs about developmentally appropriate practice: pattern, stability, and influence of locus of control. Early Childhood Research Quarterly, 7 277-296. Snow, C.E. (1991). The theoretical basis for relationships between language and literacy in development. Journal of Research in Childhood Education, 6 510. Snow, C.E., Burns, M.S., & P. Griffin. (1998). Preventing reading difficulties in young children. Washington, DC: National Academy Press. Snow, C.E., & Goldfield, B.A. (1983). Building stories: The emergence of information structures from conversation and narrative. In D. Tannen (Ed.), Georgetown University roundtable on language and linguistics 1981, Analyzing discourse: Text and talk (pp. 127-141). Washington, DC: University Press. Snow, C.E., & Ninio, A. (1986). The contracts of literacy: What children learn from learning to read books. In W.H. Teale, & E. Sulzby (Eds.), Emergent literacy: Writing and reading (pp. 116-137). Norwood, NJ: Ablex. Stanovich, K.E., Cunningham, A.E., & Cramer, B.B. (1984). Assessing phonological awareness in kindergarten children: Issues of task comparability. Journal of Chan, C., & Latham, G.P. (2004). The relative effectiveness of external, peer, and self-coaches. Applied Psychology: An international Review, 53, 260-278. Stanovich, K.E. & West, R.F. (1989). Exposure to print and orthographic processing. Reading Research Quarterly, 24, 402-433. Stevens, J. (2002). Applied multivariate statistics for the social sciences (4th Ed.). Mahwah, NJ: Erlbaum.

PAGE 206

194 Streufert, E.H.C. (1984). The effects of coaching, a follow-up component of inservice training, on transfer of training to teacher competency, teacher performance, and student outcomes. Unpublished doctoral dissertation, University of Nebraska. Sulzby, E., & Teale, W.H. (1991). Emergent literacy. In R. Barr, M.L. Kamil, P. Mosenthal, & P.D. Pearson (Eds.), Handbook of reading research (pp. 727-757). New York: Longman. Swanson, B. (1991). An overview of the six national education goals ERIC Digest. Teale, W.H. (1978). Positive environments for learning to read: What studies of early readers tell us. Language Arts, 55, 555-570. Teale, W.H. (1984). Reading to you children: Its significance for literacy development. In H. Goelman, A. Oberg, & F. Smith (Eds.), Awakening to literacy (pp. 110-121). Portsmouth, NH: Heinemann. Thorne, B.M. (1989). Statistics for the Behavioral Sciences. Mountain View, CA: Mayfield Publishing. Torgesen, J. (2002). Florida’s Reading First assessment plan: An explanation and guide. Retrieved July 18, 2003 from http://www.fcrr.org/assessment/PDFfiles/Fl%20Assess%20 Plan%20Final.pdf/ U.S. Department of Health and Human Services. (1999). Head Start. Administration for Children and Families. Facts Sheet. Washington, DC: U.S. Department of Health and Human Services. Vygotsky, L.S. (1978). Mind in society: The development of higher mental processes. Cambridge, MA: Harvard University Press. Wadsworth, S., DeFries, J.C., Stevenson, J.C., Gilger, J.W., & Pennington, B.F. (1992). Gender ratios among reading-disabled children and their siblings as a function of parental impairment. Journal of Child Psychology and Psychiatry and Allied Disciplines, 33, 1229-1239. Wells, G. (1985). The Meaning Makers. Portsmouth, NH: Heinemann. White, K.R. (1982). An integrative review of early intervention efficacy studies with at-risk children: Implications for the handicapped. Analysis and Intervention in Developmental Disabilities, 5 7-31.

PAGE 207

195 White, K., & Howard, J.L. (1973). Failure to be promoted and self-concept among elementary school children. Elementary School Guidance and Counseling, 7 182-187. Willms, J. D. (2001). Journal of Child Psychology & Psychiatry & Allied Disciplines, 42, 141-162. Wolery, M., & Wilbers, J.S. (1994). Including children with special needs in early childhood programs Washington, DC: National Association for the Education of Young Children. Yodder, P.J., & Warren, S.F. (1999). Facilitating self-initiated proto-declaratives and proto-imperatives in prelinguistic children with developmental disabilities. J ournal of Early Intervention, 22, 337-354. Ziegler, E. (1998). School should begin at age 3 years for American children. J ournal of Developmental and Behavioral Pediatrics, 19 38-40.


APPENDICES


Appendix A: Job Qualifications for Literacy Coach Positions

Position Title: Literacy Coach

Position Purpose: To provide literacy coaching and mentoring to early childhood professionals

Key Responsibilities:
- Models best practices in early literacy instruction
- Coaches participants in best practices for early literacy instruction
- Supports participants in the development of a literacy-rich environment
- Observes participants in their early childhood setting
- Assists teachers in the development and implementation of their HeadsUp! Action Plan
- Supports the implementation of family literacy activities in the participants' early childhood setting

Specifications:
- Bachelor's degree in early childhood or related field (early literacy experience preferred)
- Four to seven years of teaching and/or training experience
- Ability to communicate effectively both verbally and in writing
- Basic computer literacy required
- Ability to work flexible hours
- Ability to work in multiple locations and access to reliable transportation


Appendix B: Early Literacy Observation Checklist

Teacher: Site: Ages of Children:
Date of Observation: Time: Number of Children:
Observer:

EARLY LITERACY OBSERVATION CHECKLIST (ELOC)

STORYBOOK READING
Are there a variety of children's books easily accessible to children? Yes Some Very Few
The types of children's books include (place an "X" in front of those noted):
 Wordless picture books
 Picture books with extensive illustrations and few words
 Storybooks (books told by text with some illustrations)
 Concept books (books about concepts, like colors, opposites, etc.)
 Alphabet and number books
 Interactive books (lift-the-flap books, touch books, pop-up books)
 Children's magazines
 Early educational materials (writing workbooks, etc.)
Are children permitted to borrow these books for home use? YES NO
Are there specific times set aside during the day for reading activities? YES NO Please describe:
How frequently is story time held? Never 1/wk 2-3/wk 1/day >1/day
Do children participate in choosing the books? YES NO
Do children make their own books and stories? YES NO
Does story time include a follow-up activity? YES NO If yes, please describe:
Do adult-child book reading interactions include a print and literacy focus? Yes Somewhat No If yes, please describe:
Does the adult point to words when reading? YES NO
Are the children asked to help read the story? YES NO
Does the adult link the book to the children's life? YES NO
Does the adult point to letters in the book? YES NO
Does the adult ask the children to name letters in the book? YES NO


199 Appendix B (Continued): Early Literacy Observation Checklist Does the adult ask the children to help read the title? YES NO Does the adult praise the children’s participation? YES NO Does the adult make connections between printed and spoken word? YES NO CLASSROOM LIBRARY Is there a dedicated space in the classroom for children’s independent/ group reading activities (e.g.reading corner or library)? YES NO Is there a soft place to sit? YES NO Is there a book theme featured each week? YES NO Are children free to use the books during play activities? YES NO WRITING CENTER Are there a variety of children’s writing materials easily accessible to children (e.g. paper, crayons, pencils) Can reach by self Out of reach/ must ask Please describe: Is there time available to visit the writing center? YES NO Do group activities frequently involve writing and drawing? YES NO Is there a specific space for children’s independent and group writing activities? Specific writing center Center set up only during choice time No place for writing Please describe: OVERALL PRINT ENVIRONMENT Are printed materials displayed prominently in the early learning environment? Nowhere A few places Many places Everywhere Please describe: Are posters and signs displayed at eye level? YES NO Are books embedded in play activities? YES NO Are writing and drawing embedded in play activities? YES NO Are children’s drawings prominently displayed? YES NO Are signs and posters abundant in early learning environment? YES NO Is the alphabet displayed in early learning environment? YES NO Is each child’s name prominently displayed? YES NO Are items in the early learning environment labeled in print? YES NO Thank You!


Appendix C: Early Literacy Observation Checklist Scoring Key

EARLY LITERACY OBSERVATION CHECKLIST (ELOC) SCORING KEY
1. Assign point values noted in grey if response or rating is selected.
2. Sum the obtained points in each area (Storybook Reading, Classroom Library, Writing Center, and Overall Print Environment).
3. The sum of these area scores is the Total ELOC Score.

STORYBOOK READING
Are there a variety of children's books easily accessible to children? Yes 1 Some .5 Very Few 0
The types of children's books include (place an "X" in front of those noted; assign one point for each type of book noted):
 1 Wordless picture books
 1 Picture books with extensive illustrations and few words
 1 Storybooks (books told by text with some illustrations)
 1 Concept books (books about concepts, like colors, opposites, etc.)
 1 Alphabet and number books
 1 Interactive books (lift-the-flap books, touch books, pop-up books)
 1 Children's magazines
 1 Early educational materials (writing workbooks, etc.)
Are children permitted to borrow these books for home use? YES 1 NO 0
Are there specific times set aside during the day for reading activities? YES 1 NO 0 Please describe:
How frequently is a group story time held? Never 0 1/wk .25 2-3/wk .5 1/day .75 >1/day 1
Do children participate in choosing the books? YES 1 NO 0
Do children make their own books and stories? YES 1 NO 0
Does story time include a follow-up activity related to the book? YES 1 NO 0 If yes, please describe:
Do adult-child book reading interactions include a print and literacy focus? Yes 1 Somewhat .5 No 0 If yes, please describe:


201 Appendix C (Continued): Early Literacy Observation Checklist Scoring Key Does the adult point to words when reading? YES 1 NO 0 Are the children asked to help read the story? YES 1 NO 0 Does the adult link the book to the children’s life? YES 1 NO 0 Does the adult point to letters in the book? YES 1 NO 0 Does the adult ask the children to name letters in the book? YES 1 NO 0 Does the adult ask the children to help read the title? YES 1 NO 0 Does the adult praise the children’s participation? YES 1 NO 0 Does the adult make connections between the printed word and the spoken word? YES 1 NO 0 CLASSROOM LIBRARY Is there a dedicated space in the classroom for Independent/group reading activities (e.g. a reading corner or library)? YES 1 NO 0 Is there a soft place to sit? YES 1 NO 0 Is there a book theme featured each week? YES 1 NO 0 Are children free to use the books during play activities? YES 1 NO 0 WRITING CENTER Are there a variety of children’s writing materials easily accessible to children (e.g., paper, crayons, pencils) Can reach by self 1 Out of reach/ must ask .5 Please describe: Is there time available to visit the writing center? YES 1 NO 0 Do group activities frequently involve writing and drawing? YES 1 NO 0 Is there a specific space for children’s independent and group writing activities? Specific writing center 1 Set up only during choice time .5 No place 0 Please describe: OVERALL PRINT ENVIRONMENT Are printed materials displayed prominently in the early learning environment? No where 0 A few places .5 Many Places .75 Everywhere 1 Please describe: Are posters and signs displayed at eye level? YES 1 NO 0 Are books embedded in play activities? YES 1 NO 0


202 Appendix C (Continued): Early Literacy Observation Checklist Scoring Key Are writing and drawing embedded in play activities? YES 1 NO 0 Are children’s drawings prominently displayed? YES 1 NO 0 Are signs and posters abundant in early learning environment? YES 1 NO 0 Is the alphabet displayed in early learning environment? YES 1 NO 0 Is each child’s name prominently displayed? YES 1 NO 0 Are items in the early learning environment labeled in print? YES 1 NO 0
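For readers who tally ELOC protocols electronically, the scoring key above amounts to summing the item points within each area and then summing the area subtotals into the Total ELOC Score. The following is a minimal sketch, in Python, of that arithmetic; the area names match the key, but the item point values shown are made-up examples rather than a scored protocol.

    # Minimal sketch: sum item points within each ELOC area, then sum the area subtotals.
    # Point values below are illustrative only, not data from an actual observation.
    area_scores = {
        "Storybook Reading": [1, 0.5, 1, 1, 0.75, 1, 0],
        "Classroom Library": [1, 1, 0, 1],
        "Writing Center": [0.5, 1, 1, 0.5],
        "Overall Print Environment": [0.75, 1, 0, 1, 1, 1, 1, 1],
    }

    subtotals = {area: sum(points) for area, points in area_scores.items()}
    total_eloc = sum(subtotals.values())

    for area, subtotal in subtotals.items():
        print(f"{area}: {subtotal}")
    print(f"Total ELOC Score: {total_eloc}")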


Appendix D: Treatment Intensity Data Collection Forms for Literacy Coaches

Spring ELO Literacy Coach Activity Summary
Literacy Coach: Site: Teacher:

Date Type Duration (in minutes) Activities
 FTF PC Other________ 15 30 45 60 90 >90 Observation Feedback Model Set Goals Other ___________________
Comments:

Date Type Duration (in minutes) Activities
 FTF PC Other________ 15 30 45 60 90 >90 Observation Feedback Model Set Goals Other ___________________
Comments:

Date Type Duration (in minutes) Activities
 FTF PC Other________ 15 30 45 60 90 >90 Observation Feedback Model Set Goals Other ___________________
Comments:

CODES FOR COMPLETING FORM
Type:
 FTF = Face to Face
 PC = Phone Call
Activities:
 Observation = Observed teacher applying skills
 Feedback = Provided feedback to teacher based on data collected
 Model = Modeled (or role played) application of skills for teacher
 Set Goals = Set goals for teacher based on data gathered
 Other = Please describe any other activities that you engaged in with the teacher during the coaching session.
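The duration and activity fields on this form are what allow coaching dosage to be summarized across a semester. Below is a minimal sketch, in Python, of how transcribed log entries might be aggregated into simple treatment intensity totals; the record layout and the example entries are hypothetical and are not part of the form itself.

    # Minimal sketch: aggregate coaching minutes per teacher and count activity types.
    # The "sessions" records below are hypothetical transcriptions of completed forms.
    from collections import Counter, defaultdict

    sessions = [
        {"teacher": "Teacher A", "type": "FTF", "minutes": 45, "activities": ["Observation", "Feedback"]},
        {"teacher": "Teacher A", "type": "PC", "minutes": 15, "activities": ["Set Goals"]},
        {"teacher": "Teacher B", "type": "FTF", "minutes": 60, "activities": ["Model", "Feedback"]},
    ]

    minutes_per_teacher = defaultdict(int)   # total coaching minutes per teacher
    activity_counts = Counter()              # how often each coaching activity occurred
    for session in sessions:
        minutes_per_teacher[session["teacher"]] += session["minutes"]
        activity_counts.update(session["activities"])

    print(dict(minutes_per_teacher))
    print(dict(activity_counts))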


Appendix E: Application and Agreement Form for ELO Teacher Participation

HEADS UP! READING PLUS LITERACY PROJECT SCHOLARSHIP APPLICATION

Applicant Name: Day Phone: Evening Phone:
Highest Level of Education (check one): H.S. Diploma G.E.D. Some College 2 Yr. College Degree 4 Yr. College Degree Advanced Degree
Site Employer Name: Work Address: City: State: Zip:
Center Director (if applicable):
Type of Work Site (check one): Family Child Care Child Care Center Private Pre-K Private Kindergarten Pre-K ESE Head Start Public Kindergarten Home Visitor Program
Number of years you have worked in Early Childhood:
Age of Children you are currently working with (check all that apply): 0-1 1-2 2-3 3-4 4-5 5-6
Number of Children currently in your care:
Number of Children in your care whose first language is not English:
Please list any previous training in Early Childhood Literacy: 1) 2) 3)
Preferred Campus if selected (check one): Seminole St. Pete/Gibbs No Preference

I understand that: 1) if eligible, I will receive more information about the requirements of participation for me and my Director (if applicable); 2) if employed at a Child Care Center, my Director must support my participation in this project; and 3) if selected, there is no charge, but I must attend all 15 classes, and these classes are for college credit.

X Applicant Signature
X Director Signature (if applicable)


Appendix E (Continued): Application and Agreement Form for ELO Teacher Participation

Training Participant Contract

I agree to participate in the Pinellas Early Literacy Community Project Training and Coaching Program, and will fulfill the following obligations:
1. I will obtain the support and commitment of my Center Director to participate in the program.
2. I will attend the Orientation Session and all 14 satellite training sessions. (I will be allowed to miss one session to allow for illness or family obligations.) Should I miss a session, I will view the videotape of the session.
3. I will implement the literacy ideas, activities, and strategies learned in the training/coaching program in my classroom. After each session, I will develop a brief action plan detailing how I will implement the strategy discussed, and return to the next training session with the plan.
4. I agree to share the specific printed literacy activities provided at each training session with my Director and at least one other teacher. I will assist my fellow teacher in developing an action plan and bring it to the next training session.
5. I will distribute books and materials to the families of children in my classroom.
6. I will hold at least one "literacy event" for families of children in my classroom.
7. I agree to work with the Literacy Coaches in my classroom and participate in six coaching visits.
8. I agree to participate in the evaluation by completing surveys, encouraging parents to complete their surveys, and assisting the Evaluator in connecting with families for literacy surveys.
9. I agree to participate in the Literacy Learning Community Showcase and to bring a display of activities, photographs, and other visual materials showing how I implemented literacy activities in my classroom.

____________________________________ ________________
Signature of Applicant Date

____________________________________ ________________
Signature of Director Date


Appendix F: LAE 2000 Syllabus

LAE 2000 LANGUAGE DEVELOPMENT IN YOUNG CHILDREN
SESSION II, 2003-2004
3 Credits
Tuesday, 6:00-9:30 PM
Instructor: Anne Sullivan
Office Number: 2

A. Course Description: This course is an introductory study of speech and language development from birth to eight years of age. Emphasis is on the application of language arts activities in early childhood facilities. This course is accepted as early childhood education credit by the Pinellas County License Board. 47 contact hours. This section of LAE 2000 will utilize the HeadsUp! Reading curriculum as a core component.

B. Major Learning Outcomes:
1. The student will comprehend the developmental patterns, critical periods, and factors that influence language development from infancy to age eight.
2. The student will demonstrate knowledge of the areas that comprise language arts and the methodologies caregivers can employ to promote skills development.
3. The student will comprehend emergent literacy and whole language, as well as the strategies to support language development in young children.
4. The student will demonstrate knowledge about language acquisition and issues related to dialect.
5. The student will demonstrate knowledge of the strategies needed to identify language problems.
6. The student will comprehend the relationship between language and culture.

STUDENT COUNSELING: Students who are experiencing difficulty with the course should visit the instructor during office hours or by appointment. If you wish to receive special accommodations as a student with a documented disability, please make an appointment with the Learning Specialist.

COURSE WITHDRAWAL: Students who wish to withdraw with a grade of "W" need to speak to the instructor. If a student stops coming to class, the grade given will be based on the total points earned up until that time and may result in a grade of "F."


207 Appendix F (Continued): LAE 2000 Course Syllabus CHEATING AND PLAGIARISM : Cheating and plagiarism are serious offenses. Any student observed cheating on exams and/or written assignments or plagiarizing materials will be dealt with according to the procedure stipulated in the student handbook. COURSE EXPECTATIONS : Regular attendance at all meetings Being on time to class Participation in the classroom discussions Satisfactory completion of all reading, assignments, and examinations on time Assignments are meant to be typed unless otherwise specified College-level quality and accuracy are expected Courtesy to other students is expected at all times NO CELL PHONES ARE TO BE USED IN THE CLASSROOM ATTENDANCE POLICY : Consistent with institutional policy, attendance at class meetings is mandatory. In the case of more than two absences, points will be deducted from the student’s final grade. In the case of more than four absences, the student will be dropped from the course. Each student is responsible for work missed during the absence. In this course, materials viewed are extremely important – you will be expected to view the missed material outside of class time – speak with the instructor re: obtaining the tapes. Punctuality is important and lateness will mean deduction in points. ASSIGNMENTS AND GRADES : Activity Reports : 80 points Each week, an activity plan and observation of results is assigned. Eight of these activities will be typed as a report. Reading Journal : 40 points Eight summaries of the assigned readings. Each summary will provide the student the opportunity to reflect on the readings and may include reactions, ideas, plans for use, etc. These may be handwritten and placed in your Resource Folder. Each summary should be at least 250 words.


208 Appendix F (Continued): LAE 2000 Course Syllabus Reference List : 30 points Each student will compile a reference list with information on books, chants/songs, and fingerplays/poems. The information will be typed and placed in your Resource Folder. The file will contain: A. Books Categories: Board Books, Picture Books, Concept Books (counting, alphabet, shapes, etc.) B. Chants/Songs List titles and words of 15 chants or songs (if material is from a tape or CD, include that information) C. Fingerplays/Poems List titles and words of 15 fingerplays, poems or nursery rhymes. Quizzes : 30 points Two quizzes will be given worth 15 points each Attendance and Participation: 20 points GRADING: 180 – 200 points = A 160 – 179 points = B 140 – 159 points = C Below 140 points = F SCHEDULE AND TOPICS 1/20 Welcome, introductions, registration materials, syllabus Pre-survey Orientation 1/27 Questions, Review, Sharing Curriculum – literacy rich environments 2/3 Questions, Review, Sharing Assessment – continuum of reading, writing, and language development 2/10 Questions, Review, Sharing Talking – interactions that support language development


209 Appendix F (Continued): LAE 2000 Course Syllabus 2/17 Questions, Review, Sharing Playing – literacy enhanced play environments 2/25 Questions, Review, Sharing Reading – vocabulary, phonemic and print awareness Quiz # 1 3/2 Questions, Review, Sharing Writing NO CLASS ON 3/10 – SPRING BREAK 3/16 Questions, Review, Sharing Learning the Code – developing phonological awareness 3/23 Questions, Review, Sharing Learning the Code – alphabetic principle and cultural and linguistic diversity Four activity reports due 3/30 Questions, Review, Sharing Curriculum – scaffolded instruction 4/6 Questions, Review, Sharing Assessment – literacy goals and involving families Reference List due 4/13 Questions, Review, Sharing Talking – second language learners 4/20 Questions, Review, Sharing Playing – using play to support the elements of narrative 4/27 Questions, Review, Sharing Reading – using the library, involving parents Reading Journal due Quiz #2


210 Appendix F (Continued): LAE 2000 Course Syllabus 5/4 Last class meeting Questions, Review, Sharing Writing – forms and functions of print Four activity reports due Post-survey READING ASSIGNMENTS Due Date Assignment 1/27 Learning to Read and Write -p. 27-47 2/3 Learning to Read and Write -p. 20-23 2/24 More Than the ABC’s -Chapter 3 3/2 Learning to Read and Write -p. 64-69 Starting Out Right – p. 30-35 3/16 Learning to Read and Write -p. 80-87 3/22 More Than the ABC’s -Chapter 5 3/30 Learning to Read and Write -p. 56-63 Starting Out Right – p. 42-45 4/6 Learning to Read and Write -p. 103-110 4/13 Starting Out Right – p. 15-29 4/20 More Than the ABC’s -Chapter 4


Appendix G: Narrative Introduction for Phone Contact with Potential Control Group Childcare Centers

Hi, may I speak to (Center Director's Name)? My name is ________________________________. I am calling you because someone from your center applied to be a part of the HeadsUp! Reading Early Literacy Opportunities project for this spring semester and is being considered for the summer semester. In the meantime, I am calling you to see if you would be interested in participating in the project. Do you have a moment to speak?

We will be gathering information from the teachers who are receiving the training to measure the impact the training has on them. However, in order to do that we need a comparison group that currently is not receiving the training. Would your center be interested in participating? Here is what participation would entail:
- We will do a classroom visit.
- A member of our team will conduct an environmental survey to see what literacy activities currently are in place.
- In addition, we would like to administer a brief assessment of the literacy skills of children between the ages of 3 and 5 years who are attending your center. This will take no more than 10 minutes per child. This will happen two times: once in a week or two and again at the beginning of the summer.

There also may be the opportunity for us to gather additional data throughout and beyond the summer, but we can discuss this with you after these first two rounds of information are gathered. Your center does not have to participate in the course, but you will have a preferred spot in the selection for the summer semester.

If the director agrees:
- Arrange a time to meet in person and talk in more detail about participation.
- Confirm the address.
- Confirm the number of students and teachers in classrooms with children between the ages of 3 and 5 years.

If the director does not wish to participate:
- Thank the director for his or her time.
- Wish the director a good day.


Appendix H: Teacher Consent

ADULT INFORMED CONSENT FOR CHILD CARE PROVIDERS
Social and Behavioral Sciences
University of South Florida
Information for People Who Take Part in Research Studies

The following information is being presented to help you decide whether or not you want to take part in a minimal risk research study. Please read this carefully. If you do not understand anything, ask the person in charge of the study.

Title of Study: Evaluation of Pinellas Early Literacy Learning Community Project: Early Learning Opportunities (LCP: ELO)
Principal Investigator: Kathleen Hague-Armstrong.

You are being asked to participate in the evaluation of LCP: ELO because you have applied to participate in the "Language Development In Young Children" course at St. Petersburg College.

General Information about this evaluation: This evaluation intends to document the implementation and impact of the LCP: ELO. The LCP: ELO is a unique comprehensive approach towards improving literacy, reading readiness, and social-emotional functioning of children ages 0-5. The project will be conducted in Pinellas County, Florida, and will provide opportunities for caregivers and teachers from publicly funded and private children's programs to increase their level of professional education, earn college credits, gain early literacy teaching skills, tools and materials for their classrooms, and promote healthy social-emotional development in the children they serve. In addition, parent educators with expertise in early childhood mental health will provide support to families to enhance the young child's social and behavioral development.

The evaluation goals include: (1) determine if LCP activities and objectives are implemented in a timely fashion; (2) determine if the home visiting component enhances family confidence and competence; (3) determine if the home visiting component enhances child social and emotional functioning; (4) determine if the classroom-teaching component increases knowledge and skills in child care providers; (5) determine if the mentoring and coaching of child care providers improve their confidence and competence in implementing early literacy strategies; (6) determine if children participating in LCP activities show improvement in language and literacy skills; (7) determine if children transitioning to kindergarten demonstrate kindergarten readiness skills; (8) determine if it is feasible to implement this collaborative model within the community; and (9) determine the cost of implementing this model.

Where the study will be done: Pinellas County early childhood centers, St. Petersburg College, Directions for Mental Health, Inc., and Florida Mental Health Institute at the University of South Florida.

Plan of Study: The evaluation will be conducted within the natural context of your classroom and childcare center. If you consent to participate, you may be asked to participate in individual interviews and/or an audiotaped one-hour focus group, and to complete rating scales and simple data collection forms.


Appendix H (Continued): Teacher Consent

We will want to collect your information throughout the semester you are taking the "Language Development in Young Children" course, in addition to the semester before (for those on the waiting list) and one to two semesters after the completion of the course. An evaluator will meet with you three times per semester for visits of up to one and one-half hours. These visits may be conducted during your regular meeting times for "Language Development in Young Children" or during your working hours.

Payment for Participation: There will be no additional payment for participation in the evaluation.

Benefits of Being a Part of this Research Study: By taking part in this evaluation, you will provide valuable information about the implementation and outcomes of the LCP: ELO project. This information will be used to modify and improve the current project.

Risks of Being a Part of this Research Study: There are no known risks to participating in this evaluation.

Confidentiality of Your Records: Your privacy and research records will be kept confidential to the extent of the law. Authorized research personnel, employees of the Department of Health and Human Services, and the USF Institutional Review Board may inspect the records from this research project. The results of this evaluation may be published. However, the data obtained will be combined with data from other childcare providers in the publication. The published results will not include your name or any other information that would personally identify you in any way. A pseudonym will be used in place of your name on all documents related to the evaluation, and all data will be stored in locked files. Data stored within databases will be entered with the pseudonym and will be accessible to the research team only through the use of a password.

How many other people will take part? About 50-150 child care providers, 1500 children, and 50 families.

Volunteering to Be Part of this Research Study: Your decision to participate in this evaluation is completely voluntary. You are free to participate or to withdraw at any time. There will be no penalty or loss of benefits you are entitled to receive if you stop taking part in the evaluation.

Questions and Contacts
- If you have any questions about this evaluation, please contact Kathleen Armstrong, Ph.D. at (813) 974-8530.
- If you have questions about your rights as a person who is taking part in an evaluation, you may contact the Division of Research Compliance of the University of South Florida at (813) 974-5638.

Consent to Take Part in This Research Study
By signing this form I agree that:
- I have fully read or have had read and explained to me this informed consent form describing this research project.


Appendix H (Continued): Teacher Consent

- I have had the opportunity to question one of the persons in charge of this research and have received satisfactory answers.
- I understand that I am being asked to participate in research. I understand the risks and benefits, and I freely give my consent to participate in the research project outlined in this form, under the conditions indicated in it.
- I have been given a signed copy of this informed consent form, which is mine to keep.

_________________________ ________________________ _______________
Signature of Participant Printed Name of Participant Date

Investigator Statement
I have carefully explained to the subject the nature of the above evaluation. I hereby certify that to the best of my knowledge the subject signing this consent form understands the nature, demands, risks, and benefits involved in participating in this evaluation.

_________________________ _________________________ __________
Signature of Investigator (or authorized research investigator designated by the Principal Investigator) Printed Name of Investigator Date

Investigator Statement: I certify that participants have been provided with an informed consent form that has been approved by the University of South Florida's Institutional Review Board and that explains the nature, demands, risks, and benefits involved in participating in this evaluation. I further certify that a phone number has been provided in the event of additional questions.

_________________________ _________________________ ____________
Signature of Investigator Printed Name of Investigator Date


Appendix I: Parent Assent

Child Informed Assent
Social and Behavioral Sciences
University of South Florida
Information for People Who Take Part in Research Studies

The following information is being presented to help you decide whether or not you want your child to take part in a minimal risk research study. Please read this carefully. If you do not understand anything, please contact the person in charge of the study.

Title of Study: Pinellas Early Literacy Learning Community Project: Early Learning Opportunities (LCP: ELO)
Principal Investigator: Kathleen Hague Armstrong.

Your child is being asked to participate because he/she is in a classroom whose teacher is attending the "Language Development In Young Children" course at St. Petersburg College.

General Information about the Research Study: This is an evaluation of the Pinellas Early Literacy Learning Community Project, which assesses the implementation of the "Language Development In Young Children" course activities and outcomes related to literacy development in children. The LCP: ELO is a unique comprehensive approach to improving literacy, reading readiness, and social-emotional functioning of children ages 0-5. The project will be conducted in Pinellas County, Florida, and will provide opportunities for caregivers and teachers from publicly funded and private children's programs to increase their level of professional education, earn college credits, gain early literacy teaching skills, tools and materials for their classrooms, and promote healthy social-emotional development in the children they serve. Parent educators with expertise in early childhood mental health are also available to support families and provide home-based training to enhance the young child's social and behavioral development.

The Evaluation Goals Include: (1) determine if LCP activities and objectives are implemented in a timely fashion; (2) determine if the home visiting component enhances family confidence and competence; (3) determine if the home visiting component enhances child social and emotional functioning; (4) determine if the classroom-teaching component increases knowledge and skills in child care providers; (5) determine if the mentoring and coaching of child care providers improve their confidence and competence; (6) determine if children participating in LCP activities show improvement in language and literacy skills; (7) determine if children transitioning to kindergarten demonstrate readiness; (8) determine if it is feasible to implement this collaborative model within the community; and (9) determine the cost of implementing this model.


Appendix I (Continued): Parent Assent

Where The Study Will Be Done: This is a collaboration of Pinellas County early childhood centers, St. Petersburg College, Directions for Mental Health, Inc., and Florida Mental Health Institute at the University of South Florida.

Plan of Study: The study will be conducted within the natural context of the classroom and childcare center. If you give your child permission to participate, your child may be selected to complete several assessments that measure language and literacy skills depending on the child's age, such as the Individual Growth and Development Indicators (IGDI; Carta, Greenwood, Walker, Kaminski, Good, McConnell & McEvoy) if your child is 3-5 years old. The IGDI includes naming items on flashcards. If your child is transitioning to kindergarten, he/she will be administered the ESI-R, which is a brief assessment that measures kindergarten readiness skills, such as drawing a line and naming objects, and that is utilized with all children entering kindergarten in Pinellas County. If the child is 0 to 3, he or she may be administered the Birth to 3 Comprehensive Test of Developmental Abilities (BTAIS; Ammer & Bangs, 2000). Your child may also be administered the Early Screening Inventory-Revised (ESI-K; Meisels, Marsden, Wiske & Henderson, 1997), a screening tool for kindergarten readiness. All children with permission will be administered the Ages and Stages Social/Emotional Questionnaire (ASQ) to assess the child's emotional and behavioral functioning. If the child meets the criteria, the child will be referred for further assessment. Further permission will be sought from you for the further assessment. Additionally, the teacher will complete a version of the Infant-Toddler Literacy Assessment (Munroe-Meyer Institute, 2003), which assesses the child's (ages 0 to 3) ability to interact with print-related material. If the child is 3 to 5, the teacher will administer a version of the Teacher Rating of Oral Language (TROLL; Dickinson, McCabe, & Sprague, 2001). Finally, upon your assent, your child will be photographed and videotaped to document his or her progress in the classroom. You can give permission for your child to receive the assessments and not the photographing or vice versa.

Payment for Participation: There will be no payment for participation.

Benefits of Being a Part of this Research Study: By taking part in this study, you will provide valuable information about the implementation and outcomes of the LCP: ELO project. This information will be used to modify and improve the current project to increase the early literacy skills of the children in the program.

Risks of Being a Part of this Research Study: There are no known risks to participating in this study.

Confidentiality of Your Records: Your privacy and evaluation records will be kept confidential to the extent of the law. Authorized research personnel, employees of the Department of Health and Human Services, and the USF Institutional Review Board may inspect the records from this evaluation project.


Appendix I (Continued): Parent Assent

The results of this study may be published. However, the data obtained will be combined with data from other childcare centers in the publication. The published results will not include your child's name or any other information that would personally identify your child in any way. A pseudonym will be used in place of your child's name on all documents related to the study, and all data will be stored in locked files. Data stored within databases will be entered with the pseudonym and will be accessible to the research team only through the use of a password.

How many other people will take part? About 50 to 150 child care providers and about 1500 children and families.

Volunteering to Be Part of this Research Study: Your decision to allow your child to participate in this research study is completely voluntary. You are free to allow your child to participate in this research study or to withdraw at any time. There will be no penalty or loss of benefits you or your child are entitled to receive if you stop taking part in the study.

Questions and Contacts
- If you have any questions about this research study, please contact Kathleen Armstrong, Ph.D. at (813) 974-8530.
- If you have questions about your rights as a person who is taking part in a research study, you may contact the Division of Research Compliance of the University of South Florida at (813) 974-5638.

Investigator Statement: I have carefully described this study to the parent regarding the nature of the above research study. I hereby certify that to the best of my knowledge this form explains the nature, demands, risks, and benefits involved in participating in this study.

______________________________ ______________________________ _______
Signature of Investigator OR Authorized Printed Name Of Investigator Date
Research Investigator Designated By the Principal Investigator

Investigator Statement: I certify that participants have been provided with an informed consent form that has been approved by the University of South Florida's Institutional Review Board and that explains the nature, demands, risks, and benefits involved in participating in this study. I further certify that a phone number has been provided in the event of additional questions.

______________________________ ______________________________ _______
Signature of Investigator OR Authorized Printed Name Of Investigator Date
Research Investigator Designated By the Principal Investigator


218Appendix I (Continued): Parent Assent Consent to have child take part in this research study (please review 1 and 2) By signing this form I agree that: € I have fully read or have had read and explained to me this informed consent form describing this research project. € I have had the opportunity to question one of the persons in charge of this research and have received satisfactory answers. € I understand the risks and benefits, and I freely give my assent for him/her to participate in the research project outlined in this form, under the conditions indicated in it. € I have been given a signed copy of this informed consent form, which is mine to keep. 1. I give permission for (_____________________) to receive the assessments Child’s name mentioned in this form and to participate in the study. ___________________________ ___________________________ ________ Signature of Caregiver of Participant Printed Name of Caregiver Date 2. I give permission for my child to be photographed and video-taped ___________________________ ___________________________ ________ Signature of Caregiver of Participant Printed Name of Caregiver Date If you do not wish to have your child participate, please sign one of the three below and return this form to your child’s school or childcare center. 1. I do not wish to have my child (____________________) participate in any part of this study. (Child’s Name) ___________________________ ___________________________ ________ Signature of Caregiver of Participant Printed Name of Caregiver Date 2. I do not wish to have my child (____________________) to be photographed or videotaped, but he/she may participate in the assessments. ___________________________ ___________________________ ________ Signature of Caregiver of Participant Printed Name of Caregiver Date 3. I do not wish to have my child (____________________) participate in the assessments, but he/she may be photographed. ___________________________ ___________________________ ________ Signature of Caregiver of Participant Printed Name of Caregiver Date


219 Appendix J: Cover Letter for Control Group Center Directors Dear (Center Director) Thank you again for your help gathering information about children’s literacy growth and development. This effort is a grand undertaking, but I think you can agree a worthy endeavor. Attached you will find the consent forms that should be distributed to the parents of your students who are between the ages of 3 to 5 years old. If they have any questions, they may contact Dr. Kathleen Armstrong at (813) 974-8530 or any of the evaluators (Dale Cusumano, Melissa Todd, or Rachel Cohen). We will be in contact with you in the next few weeks to check on the status of the returned consent forms as well as to arrange a time for one of us to visit your school to collect the data. The instrument that we will be using is the Individual Growth and Development Indicator. In brief, this is an individually administered instrument that measures expressive language and early literacy development. This will take about 10 minutes per child and entails asking children to look at pictures of common objects and indicate various features associated with them (common sounds, etc.). Please feel free to ask us to show you this measure if you are interested in learning more about it. We also will return in the early summer to gather additional data that will indicate the level of growth that children have made in the acquisition of language and literacy skills. I have included a few extra consent forms that can be distributed if a parent indicates that he or she has misplaced the first copy. In addition, extra copies could be given to parents who have not returned them within a week of being sent home. A reminder notice also has been included if you would like to remind parents in this manner. Finally, a plastic envelope has been provided to store returned consent forms until we collect them. Again, we appreciate your assistance in this important project. It is through efforts such as this that we learn how best to teach children. Please do not hesitate to contact one of us if you have any additional questions. Thank you again!! Respectfully, ______________________________ ______________________________ Dale Cusumano Melissa Todd Project Evaluator Project Evaluator dcusuman@tampabay.rr.com mftodd@aol.com (727)577-5125 ______________________________ Rachel Cohen Project Evaluator rachelcohen@tampabay.rr.com


Appendix K: Cover Letter for Parent Assent

Dear Parent/Guardian,

Congratulations! We have been selected to participate in an exciting early literacy project. The project was designed to increase literacy and school readiness for young children in Pinellas County. Specifically, it will be gathering information about different instructional strategies that help children learn to read. Please read the attached pages. If you would like for your child to take part in this project, please sign the last page and return it to your child's teacher. Thank you. We look forward to your response.

Sincerely,

Signed by Center Director


Appendix L: Consent Return Reminder Letters

Dear Parent/Guardian,

Our excitement is growing as our efforts to gather information about how children learn to read continue. This note is to remind you to read and return the information letter and permission form that were given to you last week. If you misplaced your papers, please let me know, and I can give you a replacement copy. Thank you again. We are looking forward to taking part in this important project!

Sincerely,


Appendix M: Accuracy Checklists for IGDI Administration

Picture Naming Checklist for Accurate Administration

Evaluator Observed: ____________________   Date: _____   Observer: _____________________

[ ] Has materials out and ready: Picture Naming Cards, Administration Instructions, Stopwatch, and Tracking Form
[ ] Shuffles cards (except Sample Cards)
[ ] Reads bold words aloud, exactly as written in instructions
[ ] Starts with Sample Cards
[ ] Names each sample card clearly
[ ] Gives child opportunity to name each sample card
[ ] STOPS administration if child does not correctly name all 4 sample cards
[ ] Begins administration by starting the stopwatch and showing the first card to the child
[ ] Does NOT provide correct response if child responds incorrectly during administration
[ ] Follows directions as written on Administration Instructions if child does not respond within 3 seconds
[ ] Shows next card if the child does not respond within an additional 2 seconds
[ ] Separates cards into two piles, one for correct and one for incorrect or skipped responses, during administration
[ ] Stops presentation after EXACTLY 1 minute
[ ] Writes total number correct on the tracking form

____/14 = ______% Administration Accuracy
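For example, an evaluator who is observed completing 13 of the 14 steps above would receive an administration accuracy score of 13/14 = 92.9%. The Alliteration and Rhyming checklists that follow are scored the same way, out of their 18 steps.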


Appendix M (Continued): Accuracy Checklists for IGDI Administration

Alliteration Checklist for Accurate Administration

Evaluator Observed: ____________________   Date: _____   Observer: _____________________

[ ] Has materials out and ready: Alliteration Cards, Administration Instructions, Stopwatch, and Recording Form.
[ ] Shuffles cards before each administration (except Sample Cards).
[ ] Reads bold words aloud, exactly as written in instructions.
[ ] Starts with Sample Cards.
[ ] Points to and names each picture on Sample Cards.
[ ] Begins administration by starting the stopwatch and immediately showing the first card to the child.
[ ] Continues with administration of alliteration measure only if the child gives 2 correct responses on samples 3 through 6.
[ ] Does give periodic praise for attention, effort, and task engagement.
[ ] Does NOT include any of the Sample Cards in the test administration.
[ ] Does NOT provide correct response if child responds incorrectly during administration.
[ ] Does NOT provide correct response if child responds incorrectly to Sample Cards 5-6.
[ ] Follows directions as written on instructions if child does not respond within 3 seconds.
[ ] Provides correct response if child responds incorrectly to Sample Cards 3-4.
[ ] Points to and names each picture during administration.
[ ] Separates correct and incorrect or skipped responses into two piles.
[ ] Shows next card if the child does not respond within an additional 2 seconds.
[ ] Stops presentation after exactly 2 minutes.
[ ] Writes total number correct on the recording form, excluding correct Sample Card responses.

____/18 = ______% Administration Accuracy


Appendix M (Continued): Accuracy Checklists for IGDI Administration

Rhyming Checklist for Accurate Administration

Evaluator Observed: ____________________   Date: _____   Observer: _____________________

[ ] Has materials out and ready: Administration Instructions, Rhyming Cards, Stopwatch, and Recording Form.
[ ] Shuffles cards before each administration (except Sample Cards).
[ ] Reads bold words aloud, exactly as written in instructions.
[ ] Starts with Sample Cards.
[ ] Points to and names each picture on Sample Cards.
[ ] Begins administration by starting the stopwatch and immediately showing the first card to the child.
[ ] Continues with administration of rhyming measure only if the child gives 2 correct responses on samples 3 through 6.
[ ] Does give periodic praise for attention, effort, and task engagement.
[ ] Does NOT include any of the Sample Cards in test administration.
[ ] Does NOT provide correct response if child responds incorrectly during administration.
[ ] Does NOT provide correct response if child responds incorrectly to Sample Cards 5-6.
[ ] Follows directions as written on instructions if child does not respond within 3 seconds.
[ ] Points to and names each picture during administration.
[ ] Provides correct response if child responds incorrectly to Sample Cards 3-4.
[ ] Separates correct and incorrect or skipped responses into two piles.
[ ] Shows next card if the child does not respond within an additional 2 seconds.
[ ] Stops test administration after exactly 2 minutes.
[ ] Writes total number correct on the recording form, excluding correct Sample Card responses.

____/18 = ______% Administration Accuracy


Appendix N: Demographic Data Collection Sheet for Student/Participants

Center:                              Address:
Center Director:                     Phone Number:

Child's Code   DOB   Age   Gender   Race   Teacher Code   # of Days Attended 1/1/04 - 4/30/04
1.
2.
3.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.


Appendix O: Data Collection Sheet for Measures

Center Code:                    Date:

PN = Picture Naming; A = Alliteration; R = Rhyming; LNF = Letter Naming Fluency; ISF = Initial Sounds Fluency

Participant        IGDI Time 1          IGDI Time 2          DIBELS
Code:              PN    A     R        PN    A     R        LNF    ISF


Appendix P: Types of Early Childhood Centers

Literacy Training – Coaching
Classroom Code   Number of Student/Participants   Type of Center
701              20                               Private
702              19                               Private
703              14                               Private
704              18                               Private
708              14                               Head Start
709              6                                Faith-Based
722              15                               Private
723              12                               Head Start
743              13                               Head Start
744              5                                Private
745              20                               Private
749              9                                Private

Literacy Training – No Coaching
Classroom Code   Number of Student/Participants   Type of Center
810              6                                Head Start
814              14                               Private
818              5                                Private
819              11                               Private
820              12                               Private
825              10                               Head Start
833              8                                Private
834              18                               Private
835              9                                Private
850              13                               Faith-Based

No Literacy Training – No Coaching
Classroom Code   Number of Student/Participants   Type of Center
905              8                                Private
906              6                                Private
907              6                                Private
911              3                                Private
915              3                                Private
916              8                                Private


Appendix P (Continued): Types of Early Childhood Centers

No Literacy Training – No Coaching
Classroom Code   Number of Student/Participants   Type of Center
926              5                                Faith-Based
927              6                                Faith-Based
928              3                                Faith-Based
929              2                                Faith-Based
930              4                                Faith-Based
936              15                               Private
937              7                                Private
938              14                               Private
940              3                                Faith-Based
941              6                                Faith-Based
942              3                                Faith-Based
946              9                                Private
947              4                                Faith-Based
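Summed across classrooms, the three groups listed in Appendix P contain 165, 106, and 115 student/participants, respectively (Literacy Training – Coaching; Literacy Training – No Coaching; No Literacy Training – No Coaching). These totals correspond to the largest pairwise sample sizes reported in the correlation matrices of Appendices Q, R, and S.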


Appendix Q: Correlation Matrix for Student Variables for Literacy Training/Coaching Group

                       1.      2.      3.      4.      5.      6.      7.      8.      9.      10.     11.
1. Attendance          1.0
   (n)
2. Site SES            .03     1.0
   (n)                 (147)
3. Home SES            -.18*   -.27**  1.0
   (n)                 (145)   (145)
4. PN T1               -.18*   -.20**  -.19    1.0
   (n)                 (147)   (165)   (145)
5. Alit T1             -.06    -.27**  .04     .35**   1.0
   (n)                 (147)   (165)   (145)   (165)
6. Rhym T1             -.08    -.16*   .07     .37**   .42**   1.0
   (n)                 (147)   (165)   (145)   (165)   (165)
7. PN T2               .03     -.22*   .02     .52**   .31**   .43**   1.0
   (n)                 (131)   (138)   (131)   (138)   (138)   (138)
8. Alit T2             -.08    -.30**  .11     .37**   .66**   .53**   .39**   1.0
   (n)                 (131)   (138)   (131)   (138)   (138)   (138)   (138)
9. Rhym T2             .01     -.08    .13     .41**   .43**   .73**   .46**   .60**   1.0
   (n)                 (131)   (138)   (131)   (138)   (138)   (138)   (138)   (138)
10. ELOC T1 Total      -.13    -.42**  .05     .20**   .25**   .31**   .40**   .34**   .26**   1.0
    (n)                (147)   (165)   (145)   (165)   (165)   (165)   (138)   (138)   (138)
11. ELOC T2 Total      -.10    -.35**  .10     .05     .14     .14     .15     .17*    .10     .62**   1.0
    (n)                (147)   (165)   (145)   (165)   (165)   (165)   (138)   (138)   (138)   (165)

Notes: PN = Picture Naming; SR = Storybook Reading; PE = Print Environment; T1 = Time 1 Data Collection; T2 = Time 2 Data Collection; * p < .05; ** p < .001.
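The matrices in Appendices Q through S report correlations with a separate n beneath each coefficient, which is consistent with pairwise deletion of missing data. The sketch below shows how such a lower-triangular table could be reproduced; it is not the analysis code used in the study (the matrices were presumably generated with a standard statistics package), it assumes Pearson correlations, and the function name, file name, and column names are hypothetical placeholders for the student variables.

    # Minimal sketch (hypothetical names): lower-triangular Pearson correlations
    # with significance flags (* p < .05, ** p < .001) and pairwise n's.
    import pandas as pd
    from scipy import stats

    def correlation_table(df: pd.DataFrame) -> pd.DataFrame:
        cols = df.columns
        out = pd.DataFrame("", index=cols, columns=cols, dtype=object)
        for i, a in enumerate(cols):
            for j, b in enumerate(cols):
                if j > i:
                    continue                      # keep the lower triangle only
                if a == b:
                    out.loc[a, b] = "1.0"
                    continue
                pair = df[[a, b]].dropna()        # pairwise deletion, so n varies by pair
                r, p = stats.pearsonr(pair[a], pair[b])
                flag = "**" if p < .001 else "*" if p < .05 else ""
                out.loc[a, b] = f"{r:.2f}{flag} (n={len(pair)})"
        return out

    # Usage with a hypothetical file of student-level variables:
    # df = pd.read_csv("student_variables.csv")   # e.g., attendance, site_ses, pn_t1, ...
    # print(correlation_table(df))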


Appendix R: Correlation Matrix for Student Variables for Literacy Training/No Coaching Group

                       1.      2.      3.      4.      5.      6.      7.      8.      9.      10.     11.
1. Attendance          1.0
   (n)
2. Site SES            .22*    1.0
   (n)                 (103)
3. Home SES            -.05    .27**   1.0
   (n)                 (102)   (104)
4. PN T1               .03     .16     .14     1.0
   (n)                 (103)   (106)   (104)
5. Alit T1             -.02    .11     .20*    .29**   1.0
   (n)                 (103)   (106)   (104)   (106)
6. Rhym T1             -.02    .22*    .19     .39**   .58**   1.0
   (n)                 (103)   (106)   (104)   (106)   (106)
7. PN T2               -.10    -.00    .16     .54**   .29**   .26**   1.0
   (n)                 (99)    (99)    (99)    (99)    (99)    (99)
8. Alit T2             .05     .06     .34**   .25*    .62**   .50**   .31**   1.0
   (n)                 (99)    (99)    (99)    (99)    (99)    (99)    (99)
9. Rhym T2             .03     .14     .18     .39**   .45**   .73**   .33**   .53**   1.0
   (n)                 (99)    (99)    (99)    (99)    (99)    (99)    (99)    (99)
10. ELOC T1 Total      -.26**  -.11    .28     -.15    .32**   .10**   .21*    .26*    .33**   1.0
    (n)                (103)   (106)   (104)   (106)   (106)   (106)   (99)    (99)    (99)
11. ELOC T2 Total      -.19    -.06    .11     -.4     .09     .14     -.03    -.02    .12     .40**   1.0
    (n)                (103)   (106)   (104)   (106)   (106)   (106)   (99)    (99)    (99)    (106)

Notes: PN = Picture Naming; SR = Storybook Reading; PE = Print Environment; T1 = Time 1 Data Collection; T2 = Time 2 Data Collection; * p < .05; ** p < .001.


Appendix S: Correlation Matrix for Student Variables for No Literacy Training/No Coaching Group

                       1.      2.      3.      4.      5.      6.      7.      8.      9.      10.
1. Attendance          1.0
   (n)
2. Site SES            -.17    1.0
   (n)                 (115)
3. Home SES            .09     .46**   1.0
   (n)                 (96)    (96)
4. PN T1               .13     .16     .13     1.0
   (n)                 (115)   (115)   (96)
5. Alit T1             .08     .23*    .18     .38**   1.0
   (n)                 (115)   (115)   (96)    (115)
6. Rhym T1             .14     .13     .21*    .32**   .47**   1.0
   (n)                 (115)   (115)   (96)    (115)   (115)
7. PN T2               -.16    .14     .12     .61**   .40**   .48**   1.0
   (n)                 (102)   (102)   (89)    (102)   (102)   (102)
8. Alit T2             -.01    .15     .29**   .31**   .66**   .48**   .40**   1.0
   (n)                 (102)   (102)   (89)    (102)   (102)   (102)   (102)
9. Rhym T2             .02     .04     .33**   .22*    .51**   .76**   .46**   .46**   1.0
   (n)                 (102)   (102)   (89)    (102)   (102)   (102)   (102)   (102)
10. ELOC T1 Total      -.10    .55**   .24*    .18     .23*    .28**   .32**   .29**   .22*    1.0
    (n)                (115)   (115)   (96)    (115)   (115)   (115)   (102)   (102)   (102)
11. ELOC T2 Total      -.06    .53**   .31**   .15     .18     .27**   .32*    .30**   .14     .85**
    (n)                (115)   (115)   (96)    (115)   (115)   (115)   (102)   (102)   (102)   (115)

Notes: PN = Picture Naming; SR = Storybook Reading; PE = Print Environment; T1 = Time 1 Data Collection; T2 = Time 2 Data Collection; * p < .05; ** p < .001.


ABOUT THE AUTHOR

Dale Lynn Cusumano received her Bachelor of Arts degree in Psychology in 1996 when she graduated from the Psychology and University Honors programs at the University of South Florida. Her education then continued as she received her Master of Arts degree in 1997 and her Ed.S. degree in 2000, both from the School Psychology Program at the University of South Florida. Dale then accepted full-time employment with Pinellas County Schools. During this employment, Dale received her national certification and state licensure to practice as a school psychologist. In the summer of 2002, while continuing to work as a school psychologist, Dale returned to the University of South Florida to complete her Ph.D. in School Psychology.

Across these years, Dale has coauthored three publications and presented at national and regional conferences for school psychologists, teachers, and early childhood educators. Dale also has taught as an adjunct faculty member for the University of South Florida. Beyond her professional role, and perhaps most importantly, Dale is a wife and mother of two children.