
The effects of an internet-based program on the early reading and oral language skills of at-risk preschool students and...


Material Information

Title:
The effects of an internet-based program on the early reading and oral language skills of at-risk preschool students and their teachers' perceptions of the program
Physical Description:
Book
Language:
English
Creator:
Huffstetter, Mary
Publisher:
University of South Florida
Place of Publication:
Tampa, Fla.
Publication Date:
2005

Subjects

Subjects / Keywords:
Childhood education
Computers
Developmentally appropriate computer use
Educational technology
Implementation integrity
Instructional research
Dissertations, Academic -- Early Childhood Education -- Doctoral -- USF   ( lcsh )
Genre:
government publication (state, provincial, territorial, dependent)   ( marcgt )
bibliography   ( marcgt )
theses   ( marcgt )
non-fiction   ( marcgt )

Notes

Summary:
ABSTRACT: This investigation examined the effects of instruction, within the context of the Headsprout Reading Basics program, on the oral language and early reading skills of at-risk preschool students, and their teachers' perceptions of the program. Random assignment was used in a pretest-posttest, control group design to assess the effects of this program. Thirty-one students, across two preschool settings, participated in the experimental group, and 31 students participated in the comparison group. The experimental group received instruction through the Headsprout Reading Basics program, which teaches the alphabetic principle, decoding strategies, print awareness, vocabulary, and deriving meaning from texts. The comparison group received instruction through Millie's Math House, which teaches numbers, shapes, counting, sizes, patterns, quantities, sequences, addition, and subtraction. Daily instruction was provided for 30 minutes over a period of eight weeks. Oral language skills were measured using the Test of Language Development-Primary: 3rd edition (TOLD-3) and early reading skills were measured using the Test of Early Reading Ability-3rd edition (TERA-3). Teachers' and teachers' assistants' perceptions of the Headsprout Reading Basics program also were assessed through analysis of their responses to a structured, open-ended interview. Results indicated that students who received instruction through the Headsprout Reading Basics program exhibited gains in oral language and early reading skills that were statistically higher than the students who did not receive this instruction. Effect sizes associated with these gains were found to be large. Examination of the effects of gender and minutes of instruction received did not yield statistically significant differences. Analysis of interview data indicated that the teachers and teachers' assistants viewed Headsprout Reading Basics as a desirable way to increase the oral language and early reading skills of their students and would continue to use the program if given the opportunity. Implications for future research are discussed.
Thesis:
Thesis (Ph.D.)--University of South Florida, 2005.
Bibliography:
Includes bibliographical references.
System Details:
System requirements: World Wide Web browser and PDF reader.
System Details:
Mode of access: World Wide Web.
Statement of Responsibility:
by Mary Huffstetter.
General Note:
Title from PDF of title page.
General Note:
Document formatted into pages; contains 162 pages.
General Note:
Includes vita.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
aleph - 001709520
oclc - 68813195
usfldc doi - E14-SFE0001337
usfldc handle - e14.1337
System ID:
SFS0025658:00001




Full Text


The Effects of an Internet-Based Program on the Early Reading and Oral Language Skills of At-Risk Preschool Students and Their Teachers' Perceptions of the Program

by

Mary Huffstetter

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy
Department of Childhood Education
College of Education
University of South Florida

Major Professor: James King, Ed.D.
Susan Homan, Ph.D.
Anthony Onwuegbuzie, Ph.D.
Kelly Powell-Smith, Ph.D.
Jenifer Schneider, Ph.D.

Date of Approval: November 10, 2005

Keywords: childhood education, computers, developmentally appropriate computer use, educational technology, implementation integrity, instructional research

Copyright 2005, Mary Huffstetter


Dedication

This is dedicated to the entire Huffstetter/Hupp/Shortt/Eversdyke clan for the support and the humor that have sustained me. Yes, Clare, I am slow, but I am finally finished with school! I love and respect each and every one of you more than I could ever express here. And... I like chicken.


Acknowledgments

Heartfelt thanks go out to Dr. King, Dr. Homan, Dr. Schneider, Dr. Onwuegbuzie, Dr. Powell-Smith, Dr. Ferron, Dr. Austin and Dr. Dunlap for your encouragement, your challenges and your wisdom. My deep gratitude is extended to Jesse Coraggio for his invaluable assistance with statistical procedures. Additionally, I'd like to thank all my fellow doc students at USF. It has been a wonderful ride and I have enjoyed learning from each of you. Special thanks are also extended to my colleagues in Port Saint Lucie for your support, and to Janet Giles, USF's manuscript editor. Finally, thank you to the teachers, directors and students of the Head Start centers. It was a privilege to work with you. I have great hope for the future! God bless us everyone!


Table of Contents

List of Tables  v
List of Figures  vi
Abstract  vii
Chapter 1. Introduction  1
    Statement of the Problem  1
    Theoretical Framework of the Present Investigation  3
    Rationale for the Investigation  11
    Purpose of the Investigation  13
    Research Questions  14
        Research Question 1  14
        Research Question 2  14
        Research Question 3  14
        Research Question 4  14
        Research Question 5  14
        Research Question 6  15
        Research Question 7  15
    Hypotheses  15
        Null Hypothesis 1  15
        Research Hypothesis 1  15
        Null Hypothesis 2  15
        Research Hypothesis 2  15
        Null Hypothesis 3  16
        Research Hypothesis 3  16
        Null Hypothesis 4  16
        Research Hypothesis 4  16
        Null Hypothesis 5  16
        Research Hypothesis 5  17
        Null Hypothesis 6  17
        Research Hypothesis 6  17
    Educational Significance of the Investigation  17
    Definitions of the Terms  18
        At-risk  18
        Early reading skills  18
        Generative instruction  19
        Oral language skills  19
        Sunshine State Standards  19
    Delimitations of the Investigation  19
    Limitations of the Investigation  20


    Organization of the Remaining Chapters  22
Chapter 2. Literature Review  23
    Overview  23
    Early Reading Instruction  23
        Oral language and early reading skills  25
        Phonological awareness  27
        Print awareness  29
        Alphabetic knowledge  30
        Instructional materials  33
    At-Risk for Reading Difficulties  35
    Computers and Early Reading Skills  38
    Generative Instruction  43
    Headsprout Reading Basics  47
        Previous field studies using Headsprout Reading Basics  49
    Gender and Reading Skills  51
    Teachers' Perceptions of Educational Technology Implementations  52
    Summary and Implications for the Present Investigation  53
Chapter 3. Methodology  55
    Statement of the Purpose  55
    Research Questions  55
        Research Question 1  55
        Research Question 2  55
        Research Question 3  56
        Research Question 4  56
        Research Question 5  56
        Research Question 6  56
        Research Question 7  56
    Research Design  57
    Description of the Participants  58
    Ethical Considerations  62
    Materials and Instruments  65
        Headsprout Reading Basics  65
        Millie's Math House  65
        Structured open-ended interview protocol  65
        Test of Early Reading Ability-3rd Edition (TERA-3)  66
        Test of Language Development-Primary: 3rd Edition (TOLD-3)  66
        Mobile computer lab  67
    Quantitative Procedures  67
    Quantitative Data Analysis  71
    Qualitative Procedures  72
    Qualitative Data Analysis  75
    Combined Quantitative and Qualitative Data Analysis  76


Chapter 4. Results  78
    Treatment of Data  78
        Pre-tests  78
        Classroom literacy activities  80
        Implementation integrity  81
        Observation journal  82
        Post-tests  83
    Results for Research Questions 1 and 2  84
        Test score reliability  87
        Null Hypotheses  88
            Null Hypothesis 1  88
            Null Hypothesis 2  89
        MANOVA  89
        Discriminant analysis and effect sizes  91
    Results for Research Questions 3-6  92
        Null Hypotheses  93
            Null Hypothesis 3  93
            Null Hypothesis 4  93
            Null Hypothesis 5  93
            Null Hypothesis 6  93
        Factorial ANOVA  94
    Results for Research Question 7  96
        Narrative of qualitative results  100
    Combined Quantitative and Qualitative Data Analysis  100
Chapter 5. Discussion  102
    Overview  102
    Summary of Findings  105
    Comparison of Findings with Theoretical Framework and Previous Research  106
    Threats to Internal Validity  109
    Threats to External Validity  110
    Implications for Future Research  110
    Implications for Future Practice  113
    Conclusion  114
References  116
Appendices  133
    Appendix A: Summary and Procedures of Research for Teachers & Assistants  134
    Appendix B: Interview Protocol for Pilot Study  135
    Appendix C: Interview Protocol  136
    Appendix D. Teacher and Teacher Assistant Demographic Information  137
    Appendix E. Teacher Orientation to Headsprout & Millie's Math House  138
    Appendix F. Implementation Checklists  139
    Appendix G. Institutional Review Board Consent Forms  141


    Appendix H. Permission to use Headsprout Reading Basics Graphics  149
About the Author  End Page


List of Tables

Table 1   U.S. Department of Health & Human Services (2004) Poverty Guidelines, Florida  58
Table 2   Student Demographics  60
Table 3   Teacher and Teacher Assistant Demographics  61
Table 4   Description of Classroom Literacy Activities  81
Table 5   Synopsis of Observer Involvement  83
Table 6   Univariate Normality of TEGAINS and TOGAINS  86
Table 7   Standardized Skewness and Kurtosis Coefficients  86
Table 8   Internal Consistency of the TERA-3 and TOLD-3  86
Table 9   Results of MANOVA of TEGAINS and TOGAINS  89
Table 10  Descriptive Results for TEGAINS and TOGAINS  91
Table 11  Means & SD of Gain Scores for Gender and Minutes in Program  95
Table 12  Participants' Perceptions Categories with Related Indicators  97
Table 13  Participants' Perceptions Categories with Illustrative Quotes  99


List of Figures

Figure 1. Example of phonological awareness instruction  6
Figure 2. Example of print awareness instruction  7
Figure 3. Example of alphabetic knowledge instruction  7
Figure 4. Headsprout research & development process  8
Figure 5. Theoretical connection to chosen methods for present investigation  11
Figure 6. Headsprout performance data  63


The Effects of an Internet-Based Program on the Early Reading and Oral Language Skills of At-Risk Preschool Students and Their Teachers' Perceptions of the Program

Mary Huffstetter

ABSTRACT

This investigation examined the effects of instruction, within the context of the Headsprout Reading Basics program, on the oral language and early reading skills of at-risk preschool students, and their teachers' perceptions of the program. Random assignment was used in a pretest-posttest, control group design to assess the effects of this program. Thirty-one students, across two preschool settings, participated in the experimental group, and 31 students participated in the comparison group. The experimental group received instruction through the Headsprout Reading Basics program, which teaches the alphabetic principle, decoding strategies, print awareness, vocabulary, and deriving meaning from texts. The comparison group received instruction through Millie's Math House, which teaches numbers, shapes, counting, sizes, patterns, quantities, sequences, addition, and subtraction. Daily instruction was provided for 30 minutes over a period of eight weeks. Oral language skills were measured using the Test of Language Development-Primary: 3rd edition (TOLD-3) and early reading skills were measured using the Test of Early Reading Ability-3rd edition (TERA-3).


Teachers' and teachers' assistants' perceptions of the Headsprout Reading Basics program also were assessed through analysis of their responses to a structured, open-ended interview. Results indicated that students who received instruction through the Headsprout Reading Basics program exhibited gains in oral language and early reading skills that were statistically higher than the students who did not receive this instruction. Effect sizes associated with these gains were found to be large. Examination of the effects of gender and minutes of instruction received did not yield statistically significant differences. Analysis of interview data indicated that the teachers and teachers' assistants viewed Headsprout Reading Basics as a desirable way to increase the oral language and early reading skills of their students and would continue to use the program if given the opportunity. Implications for future research are discussed.
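As context for the word "large" above: one widely used standardized mean-difference index of effect size is computed as shown below. The symbols are generic (treatment and comparison group means, standard deviations, and sample sizes), not values taken from this study, and the dissertation's own effect-size procedures are described in Chapters 3 and 4; values near 0.2, 0.5, and 0.8 are conventionally read as small, medium, and large effects.

    d = \frac{\bar{X}_T - \bar{X}_C}{s_{pooled}},
    \qquad
    s_{pooled} = \sqrt{\frac{(n_T - 1)\,s_T^2 + (n_C - 1)\,s_C^2}{n_T + n_C - 2}}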


Chapter 1
Introduction

Statement of the Problem

According to the U.S. Department of Education (2002), approximately 40% of students across our nation cannot read at a basic level, average-performing students have made no progress in reading achievement over the last 10 years, and the lowest-performing readers have become even less successful over this same period. More specifically, only 30% of our nation's fourth graders and 32% of Florida's fourth graders are at or above proficiency level in reading (National Assessment of Educational Progress, 2003). The ability to read is vital to school success and, as such, reading development has become a national priority. Consistent with this assertion, the No Child Left Behind Act (NCLB) of 2001, signed into law by President Bush on Jan. 8, 2002, has become a focal point of educational policy (United States Department of Education [USDOE], 2001). This act created a new program, Reading First (Armbruster, Lehr, & Osborn, 2003), that calls for scientific-based reading programs in Grades K-3, with funding priority given to high-poverty areas. Most children know something about reading when they enter school. However, many students from high-poverty areas arrive at school at a disadvantage due to differences in the amount of language and literacy interactions they experience in their early years (Adams, 1990; Durkin, 1975; Hart & Risley, 1995; Stanovich, 1986; Teale & Sulzby, 1986). The call for high-quality early education is an attempt to level the


educational playing field for these students. Fueled by the need for quality, early language and literacy experiences, this call for reliable interventions supported by replicable research is filtering down into preschools. Educators and lawmakers are examining early intervention strategies, methods, and programs in attempts to preempt the need for costly remedial programs and to increase the probability of reading proficiency for every student (USDOE, 2005). As an example, after registering 59% voter approval, the Constitution of the State of Florida mandates that every 4-year-old child be offered a high-quality preschool learning experience beginning with the 2005-2006 school year (Florida Department of Education [FLDOE], 2004). Based on the call for scientific, evidenced-based practices in our K-12 schools, it is reasonable to expect that scientific, evidence-based practices also will be called for when providing instruction to these youngest students in Florida. Playing a lead role in this search for evidenced-based practices, the USDOE initiated the Early Reading First (ERF) program (USDOE, 2005). ERF is designed to assist early education programs in becoming centers of instructional excellence. In other words, this initiative aims to provide high-quality education to young children, particularly those from low-income households. The overarching goal of ERF is to prepare young students to enter kindergarten with the skills they need for school success. In particular, ERF focuses on the development of (a) oral language (vocabulary, expressive language, listening comprehension); (b) phonological awareness (rhyming, blending, segmenting); (c) print awareness; and (d) alphabetic knowledge (USDOE, 2005). Although these components are not an exhaustive list of the skills young readers need to develop, they are seen as critical components for building a foundation for early 2


reading and for subsequent success in school (Adams, 1990; Chall, 1989; National Institute of Child Health and Human Development (NICHD), 2000; Snow, Burns, & Griffin, 1998).

Theoretical Framework of the Present Investigation

This investigation drew from the mixed methods paradigm of research, using both quantitative and qualitative approaches to capitalize on the strengths and minimize the weaknesses of both approaches and to obtain complementary data (Johnson & Onwuegbuzie, 2004). This investigation was situated within a scientifically informed approach to teaching, largely based on Engelmann and Carnine's (1991) theory of instruction. This theory makes the following assumptions: (a) the environment is the primary variable in accounting for what the learner learns; (b) we should not attempt to control the student, so we must attempt to control the environment; and (c) the student will learn and retain concepts if they are presented in a clear manner, practiced to fluency, and transferred to new learning situations. The current demand for accountability and the call to observe, identify, and document effective, replicable, instructional practices framed the quantitative portion of this investigation from an empirical perspective (Martella, Nelson, & Marchand-Martella, 1999). Researchers (Engelmann & Carnine, 1991; Twyman, Layng, Stikeleather, & Hobbins, 2004) have investigated the application of the principles derived from the scientific study of instruction to the teaching of fundamental or early reading skills to produce empirical, replicable results. One system of instruction that has demonstrated the potential to teach initial reading concepts explicitly in a number of studies is generative instruction, which is described as a careful sequence of procedures that


establishes key component skills, provides practice to fluency or automaticity, and then provides environments that increase the probability that these skills will combine into more complex skills with little additional instruction (Johnson & Layng, 1994; Layng, Twyman, & Stikeleather, 2004). Some researchers (e.g., Elkind, 1981) have questioned this empirical focus on skill acquisition and have purported that it is contrary to focusing on developmentally appropriate practices. Elkind further implies that focusing on these skills too early may be detrimental to student reading achievement. Using the theory of instruction (Engelmann & Carnine, 1991) as a guide, this principal investigator took the stance that most students would benefit from explicit instruction in oral language and early reading skills if they were allowed to work at their own pace, and at their own level, with individualized support (Clay, 2001; Skinner, 1968; Vygotsky, 1978). Individualizing instruction at an early age may prevent at-risk students from remaining perpetually behind their classmates in reading ability. Vygotsky (1978) discussed providing this support in a students zone of proximal development through adult guidance or through collaboration with more capable peers. It is possible that this support also can be provided through educational technology. Although the discourse continues as to whether the teaching of reading is an art or a science and whether or not programs can make a difference, researchers have suggested that teaching reading efficaciously to a diverse group of students is a scientific enterprise (Twyman et al., 2004), and program evaluations have shown that some programs are more efficacious in this quest than others (NICHD, 2000). However, despite the prevalence of the term "scientifically based research" in the current discourse of effective 4


pedagogical practices, there is dissenting opinion as to what this term encompasses, and whether experimental research is more scientific than descriptive or qualitative research (Fletcher & Francis, 2004). For the purposes of this investigation, the following description of scientific research given by Fletcher and Francis (2004) was used. Studies are scientific when:

1. There is a clear set of answerable questions that motivates the design
2. The methods are appropriate to answer the question
3. Competing hypotheses can be refuted on the basis of evidence
4. The studies are explicitly linked to theory and previous research
5. The data are systematically analyzed with the appropriate tools
6. The results are made available for review and critique (pp. 74-75)

The quantitative portion of this investigation also followed the guidelines given by Baer, Wolf, and Risley (1968), who describe a scientific study as one in which: (a) the independent and dependent variables are carefully selected and specified; (b) environmental control is used in delivery of the independent variable; and (c) changes in the dependent variable, as a function of the delivery of the independent variable, are objectively evaluated. Objective evaluation also is a component mentioned by Simmons and Kameenui (2003) when describing science-based practices. The Coalition for Evidence-Based Policy (2003) lists similar traits (i.e., controlled studies, comparison groups and outcomes, and some combination of pre-testing and post-testing) in its description of scientifically based studies. The criteria set forth by Fletcher and Francis (2004), Baer, Wolf, and Risley (1968), and Simmons and Kameenui (2003) guided the quantitative structure of this investigation.
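To make these design criteria concrete (random assignment, a comparison group, pre- and post-testing, and objective evaluation of change in the dependent variable), the following sketch uses entirely hypothetical scores and is not the analysis reported later in this dissertation. The per-group sample size simply mirrors the 31 students per group described in the abstract, the growth parameters are invented, and a simple t test with a standardized mean difference stands in for the MANOVA-based procedures detailed in Chapters 3 and 4.

    # Hypothetical sketch of a pretest-posttest, control-group comparison.
    # All scores and growth assumptions below are made up for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 31  # per group, mirroring the sample sizes reported in the abstract

    # Simulated standard scores at pretest and posttest (hypothetical numbers).
    pre_treat = rng.normal(85, 10, n)
    post_treat = pre_treat + rng.normal(8, 5, n)   # assumed larger growth
    pre_ctrl = rng.normal(85, 10, n)
    post_ctrl = pre_ctrl + rng.normal(2, 5, n)     # assumed smaller growth

    # Gain scores: change from pretest to posttest for each child.
    gain_treat = post_treat - pre_treat
    gain_ctrl = post_ctrl - pre_ctrl

    # Independent-samples t test on the gain scores.
    t, p = stats.ttest_ind(gain_treat, gain_ctrl)

    # Standardized mean difference (Cohen's d) using a pooled standard deviation.
    pooled_sd = np.sqrt((gain_treat.var(ddof=1) + gain_ctrl.var(ddof=1)) / 2)
    d = (gain_treat.mean() - gain_ctrl.mean()) / pooled_sd

    print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")

Because the two groups are the same size, the pooled standard deviation reduces to the root mean square of the two gain-score standard deviations.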


In choosing an intervention for this investigation, I searched for a short-term (9-week), scientifically-based, supplemental program that would allow students to work fairly independently and would not require extensive teacher training. Headsprout Reading Basics was chosen as the program to be used in instructing the experimental group, based on a review of the literature, beginning with a review of the research reports provided by the Florida Center for Reading Research (FCRR) (2004). The 28 technology-based programs listed were narrowed down to 5 (Earobics, Funnix, Headsprout Reading Basics, Read Naturally, & Waterford) by choosing only the programs with no weaknesses listed. Of the remaining 5 programs, 1 is not designed for independent study (Funnix), 1 begins at the first-grade level (Earobics), 1 suggests a beginning reading vocabulary of approximately 50 words (Read Naturally), and 1 (Waterford) is a year-long program with a long-term teacher training commitment. As Headsprout Reading Basics met the initial criteria, the principal investigator examined the 40 episodes and further examined the FCRR (2004) report and confirmed that Headsprout Reading Basics uses generative instruction to teach oral language (e.g., speak aloud icon prompts oral responses), phonological awareness (see Figure 1), print awareness (see Figure 2), and alphabetic knowledge (see Figure 3), all focal points of ERF (USDOE, 2005). Figure 1. Example of phonological awareness instruction. 6


Figures 1-5 reproduced from the Headsprout website. Permission obtained from Janet S. Twyman, Ph.D., V.P. Instructional Development (see Appendix H).

Figure 2. Example of print awareness instruction.

Figure 3. Example of alphabetic knowledge instruction.

The scientific approach to teaching that framed this investigation extended to the selection of the intervention. Scientifically based, in this context, refers to program development and using formative evaluation to test curricula for effectiveness, and then to revise and retest based on the results (Twyman et al., 2004). During the development of Headsprout Reading Basics, Twyman et al. (2004) describe their use of a nonlinear instructional design that included content analysis, setting instructional objectives,


conducting criterion testing, determining entry repertoires, developing logical instructional sequences, establishing performance data, and developing contingencies to maintain engaged learner behavior throughout the course of instruction. In this process, falsified, or ineffective instructional practices are either modified or discarded by the designers (Twyman et al., 2004). Effective practices, then, are verified and replicated across a variety of learners in different contexts (See Figure 4). Figure 4. Headsprout research & development process. As previously stated, the developers of the intervention chosen for this study (Headsprout Reading Basics) state it has undergone formative evaluation (see Layng et al., 2004, for details). Its developers describe Headsprout Reading Basics as an engaging, Internet-based reading program that effectively and systematically teaches children reading fundamentals (Layng, et al., 2004). Generative instruction, as previously stated, is a sequence of procedures that establishes key component skills, provides practice of the skills to fluency, and then provides environments that increase the likelihood that the 8


component skills will combine into more complex composite skills with little additional instruction (Layng, Twyman & Stikeleather, 2002). Headsprout Reading Basics has been used in previous studies (Layng, Twyman, & Stikeleather, 2003; Layng, Twyman & Stikeleather, in press), but there have been no published studies with at-risk preschool students that have also included a control group, and none that have interviewed the teachers to gain their perspectives. As Cook and Campbell (1979) state, it is not best practice to attempt to infer causality from results using only pre-tests and post-tests. Thus, this investigation also was viewed through the lens of James' (1994) conceptualization of pragmatism, as this principal investigator attempted to determine if a certain type of instruction makes a real, significant, and desirable difference to the population being studied. James saw pragmatism as a method of inquiry that examined results to determine what was effective in different situations. Truth, as James saw it, relied on verifiability. His philosophy further influenced this study through his tenet that the meaning of any idea has validity primarily in terms of its experiential and practical consequences. Maxcy (2003) mentions that many researchers dismiss pragmatism as a naïve orientation that attempts to simplify complex philosophical issues into what works. From both a practitioner's and a researcher's perspective, seeking what works is not a simplification, but a worthwhile and attainable goal (NICHD, 2000). Stated another way, this investigation was guided by the primary purpose of applied research, which is to provide data that are immediately useful to practitioners (Chhabra & McCardle, 2004; Martella et al., 1999) and access to instructional methods and programs that work for the population of students they are responsible for teaching. This investigation is an attempt


to add to the information about the worth of one program as it pertains to the early reading and oral language skills of preschool students, not to promote false hope or expectations. If history is any indication, the debate (Chall, 1989, 1996) regarding teaching methods, styles and curricula (Bond & Dykstra, 1967) will churn for a long time. While the early reading skills addressed by ERF (USDOE, 2005) are not the only skills a child needs to be successful in school and to develop a life-long love for reading, there is a need for assisting preschool teachers in choosing curricula and methods that will significantly increase oral language and early reading skills that have been deemed critical by ERF (USDOE, 2005). The qualitative portion of this investigation also was framed within a pragmatic paradigm (James, 1994), which avoided the forced choice between positivism-postpositivism and constructivism-interpretation (Tashakkori & Teddlie, 1998) and focused on the outcomes desired by the participants. Kvale (1996) describes pragmatic validation as a type of social construction of knowledge that leads to action. This construction also is described by Maxcy (2003) who outlines the constructivist approach as one in which the interests and values of the participants are explored, analyzed, interpreted, and presented. As Miles and Huberman (1994) state, collecting and analyzing these qualitative data provide information that is often more convincing to a reader than numbers alone. Figure 5 illustrates the connection between the theoretical orientation of this investigation and the chosen methods. 10


Figure 5. Theoretical connection to chosen methods for present investigation. (Diagram text: Overarching goal: increase oral language and early reading skills. Instructional theoretical orientation: explicit, clear and carefully sequenced; scientifically-based; individualized; makes a real, significant difference; participants desire outcomes. Viewed through the lens of pragmatism: truth relies on verifiability; validity measured primarily in terms of experiential and practical consequences. Methods: quantitative, to examine what works; qualitative, to ascertain the desirability of the instruction and outcomes for the participants.)

Rationale for the Investigation

Researchers have documented the efficacy of providing explicit reading instruction to early readers (Adams, 1990; Carnine, Silbert, Kameenui, & Tarver, 2004; Graves, Juel, & Graves, 2004) as part of a balanced approach to teaching early reading skills (Snow et al., 1998). Researchers also have shown that reading development begins before students reach kindergarten (Teale & Yokota, 2000) and that one teacher working


in isolation cannot meet the needs of every student (Crevola & Hill, 1998). Additionally, scant research exists on the effectiveness of reading curricula for preschool students. From a pragmatic viewpoint, due to this discrepancy between expectations and resources, it seems prudent to explore and identify efficacious supplemental methods that explicitly teach oral language and early reading skills to preschool students. Once identified and implemented, these supplemental methods and programs could enhance the quality of education we are providing to these young students, prepare them for a successful school experience, and possibly reduce the need for special education placements, and remedial and summer school programs. Examination of the effects of instruction, within the context of the Headsprout Reading Basics program, on the early reading and oral language skills of at-risk preschool students had not previously been undertaken. Therefore, the purpose of this investigation was twofold. First, the effects of instruction, within the context of the Headsprout Reading Basics program, on the early reading and oral language skills of at-risk preschool students were explored. Second, their teachers perceptions of the Headsprout Reading Basics program after first-time implementation were obtained and analyzed. This investigation fell under the category of instructional research, where the role was to examine and identify teaching practices that are effective in helping at-risk (i.e., a member of a low socioeconomic family, a student with limited English proficiency, or a student with an identified disability for this investigation) students acquire the skills and attitudes they need to become proficient readers (Torgesen, 2004). For the purposes of this investigation, at-risk students were defined as children from low socioeconomic 12


families (poverty guidelines used for present study presented in Chapter 3) and those with Limited English Proficiency (McGee and Richgels, 2003). The Head Start program serves students from low socioeconomic families and those with Limited English proficiency, as well as students diagnosed with disabilities. Because 10% of the slots in the Head Start program are reserved for children with disabilities, and because students with disabilities are also at-risk for reading difficulties, these students were included as well. Contrary to the viewpoints of some researchers (Genishi, Ryan, Ochsner, & Yarnall, 2001) who state that labeling a child at-risk is tantamount to saying they have cultural deficits, I believe that recognizing children are at-risk due to lack of experiences allows us to view each student as a capable learner and places the responsibility for teaching each child on the teacher and the chosen method or curriculum. Purpose of the Investigation The purpose of this investigation was to determine whether instruction through the Headsprout Reading Basics program is an effective and desirable method for increasing the oral language and early reading skills of at-risk preschool students. As previously mentioned, scant research exists on the effectiveness of reading curricula for at-risk preschool students. It was expected that the results would produce data that would yield findings that would contribute to the knowledge base of potentially effective instructional methods to use with preschool students. Also, by interviewing teachers and teachers assistants and assessing their perceptions, understandings, and attitudes, I intended to gain further insight into the possible strengths and weaknesses of the instruction in Headsprout Reading Basics in teaching oral language and early reading skills to at-risk preschool students. The effects 13


of gender and time in the program (i.e., minutes engaged in the program) on student achievement were also explored. Research Questions The following research questions were addressed: Research Question 1. What is the difference in achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA3), between students who receive instruction through the Headsprout Reading Basics program, and students who do not receive this instruction? Research Question 2. What is the difference in achievement in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), between students who receive instruction through the Headsprout Reading Basics program, and students who do not receive this instruction? Research Question 3. What is the effect of instruction through the Headsprout Reading Basics program on student achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA-3), as a function of number of minutes in the program? Research Question 4. What is the effect of instruction through the Headsprout Reading Basics program on student achievement in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), as a function of number of minutes in the program? Research Question 5. What is the effect of instruction through the Headsprout Reading Basics program on student achievement in early reading skills, as measured by 14


the overall reading quotient of the Test of Early Reading Ability (TERA-3), as a function of gender? Research Question 6. What is the effect of instruction through the Headsprout Reading Basics program on student achievement in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), as a function of gender? Research Question 7. What are the perceptions of preschool students teachers and their assistants regarding instruction through the Headsprout Reading Basics program after first-time implementation with their students? Hypotheses The following hypotheses were tested: Null Hypothesis 1. There is no difference in achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA3), between students who receive instruction through the Headsprout Reading Basics program and students who do not receive this instruction. Research Hypothesis 1. Students who receive instruction through the Headsprout Reading Basics program experience higher gains in achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA3), than do students who do not receive this instruction. Null Hypothesis 2. There is no difference in achievement in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), between students who receive instruction through the Headsprout Reading Basics program and students who do not receive this instruction. 15


Research Hypothesis 2. Students who receive instruction through the Headsprout Reading Basics program experience higher gains in achievement in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), than do students who do not receive this instruction. Null Hypothesis 3. There is no difference in achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA3), between students who receive a greater number of minutes of instruction through the Headsprout Reading Basics program and students who receive fewer minutes of instruction. Research Hypothesis 3. Students who receive a greater number of minutes of instruction through the Headsprout Reading Basics program experience higher gains in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA3), than do students who receive fewer minutes of instruction. Null Hypothesis 4. There is no difference in achievement in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), between students who receive a greater number of minutes of instruction through the Headsprout Reading Basics program and students who receive fewer minutes of instruction. Research Hypothesis 4. Students who receive a greater number of minutes of instruction through the Headsprout Reading Basics program experience higher gains in achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA3), than do students who receive fewer minutes of instruction. 16


Null Hypothesis 5. There is no difference in achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA3), between males who receive instruction through the Headsprout Reading Basics program and females who receive the same instruction. Research Hypothesis 5. There is a difference in achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA3), between males who receive instruction through the Headsprout Reading Basics program and females who receive the same instruction. Null Hypothesis 6. There is no difference in achievement in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), between males who receive instruction through the Headsprout Reading Basics program and females who receive the same instruction. Research Hypothesis 6. There is a difference in achievement in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), between males who receive instruction through the Headsprout Reading Basics program and females who receive the same instruction. The significance level and procedures used to test these hypotheses are discussed in Chapter 3. Research question 7 is exploratory, therefore, a hypothesis was not considered. The procedures used to address this question also are detailed in Chapter 3. Educational Significance of the Investigation Crevola and Hill (1998) state that ensuring all students make satisfactory progress in early literacy is generally beyond the capacity of one classroom teacher working in isolation. Preschool teachers are being asked to educate increasingly heterogeneous 17


populations of students and prepare them for more academically focused kindergarten experiences. With Florida's voluntary preschool program slated to begin in the 2005-2006 school year and estimated to serve 151,000 4-year-old students at a cost of $4,200 per student per year (Florida Department of Education, 2004), examination of supplemental programming to assist preschool teachers is timely. Results from this investigation add to the knowledge base of potential programs to use with preschool students to increase the critical early reading and oral language skills identified by ERF (USDOE, 2005) and other researchers (Adams, 1990; Chall, 1996; Clay, 2001). Also, it was hoped that this investigation would yield greater insight into teachers' perceptions of the Internet-based reading program, Headsprout Reading Basics. If significant gains were found among at-risk preschool students who received instruction through the Headsprout Reading Basics program, and their teachers' perceptions generally were positive, educators would have a larger research base from which to choose a supplemental, instructional program that may improve the early reading skills of at-risk preschool students. If significant differences were not found, I still expected to be able to contribute to the research base pertaining to implementation issues surrounding computer-based reading programs and preschool students.

Definitions of the Terms

At-risk. At-risk children are those whose families meet poverty index guidelines (see Table 1), children who have an identified disability, or children with limited English proficiency (McGee & Richgels, 2003; U.S. Department of Health & Human Services, 2004).


Early reading skills. Early reading skills consist of the fundamental knowledge and skills necessary for optimal reading development in kindergarten and beyond. These skills include oral language (i.e., vocabulary, expressive language, listening comprehension), phonological awareness (i.e., rhyming, blending, segmenting), print awareness, and alphabetic knowledge (USDOE, 2005).

Generative instruction. Generative instruction is a sequence of procedures that establishes key component skills, provides practice of the skills to fluency, and then provides environments that increase the likelihood that the component skills will combine into more complex composite skills with little additional instruction (Layng et al., 2002).

Oral language skills. For this investigation, oral language skills consist of fundamental skills necessary for subsequent reading development in kindergarten and beyond. These include vocabulary and aspects of syntax and semantics (Newcomer & Hammill, 1997).

Sunshine State Standards. The Sunshine State Standards refer to a listing of the strands, standards, and benchmarks pertaining to the content to be learned by the students in Florida. These standards serve as the basis for quality programs in Florida (FLDOE, 2004).

Delimitations of the Investigation

For the quantitative component, only at-risk preschool students (4 years old as of September 1, 2004) in two Head Start centers in a city on the east coast of Florida were included in this investigation. For the qualitative portion, only teachers and teachers' assistants of these preschool students in the two randomly selected sites who also agreed to participate were included.


Additionally, the preschool students were pre-assessed to ensure they could adequately control a mouse and follow one-step directions. Accommodations (e.g., restricted mouse movement area) were not needed for any of the participants in this investigation. The decision was made to exclude any student who could not control the mouse, or follow one-step directions after three unsuccessful tutorial attempts. However, exclusion was not necessary, because all students passed the prerequisite skills assessment. Limited English Proficiency affects interactions and educational performance (National Association for Language Development in the Curriculum, 1998). However, with the heavy focus on phonics, phonemic awareness, and vocabulary building, Headsprout Reading Basics addresses skills needed by students whose first language is not English. Therefore, these students also were included in this investigation. Limitations of the Investigation Several potential threats to validity exist in the quantitative portion of this investigation. Particular threats to internal validity were history and maturation (Martella et al., 1999; Onwuegbuzie, 2003) because it was expected that all of these students also came into contact with conditions that were unrelated to the intervention that might have increased their oral language and early reading skills. However, the consistent relationship reported in the literature among poverty, disability, and LEP on one hand, and literacy on the other, suggests that limited experiences with literacy might be common. Additionally, students were expected to become more skilled as they grew older and more mature. There was a threat that these changes could be incorrectly attributed to the intervention. 20


Another threat to internal validity was implementation bias (i.e., lack of adherence to protocol) (Onwuegbuzie, 2003), because the teachers and teachers assistants implemented the intervention to various degrees. To guard against this threat, I monitored the implementation of the programs using implementation checklists (see Appendix F). A further threat to internal validity is instrumentation bias (Martella et al., 1999; Onwuegbuzie, 2003) as the questions I designed may have limited or guided responses. In a pilot investigation, I conducted the interview with four individuals who were not involved in this investigation. I then used peer review techniques to make changes based on reactions and interpretations of the questions by the participants (see Appendices B & C). To guard against researcher bias (Onwuegbuzie, 2003) in the interpretation of interview responses, I used member checking and peer review to confirm my model of categories, indicators and illustrative quotes identification and interpretations (Tashakkori & Teddlie, 1998). Pre-test sensitization (Onwuegbuzie, 2003) posed another threat to internal validity, particularly in the oral language skills testing because there was only one form of the test. I used two forms of the early reading skills test to guard against this threat. Threats due to selection and resentful demoralization of the control group (Martella et al., 1999) were controlled by the design of this study. Students were randomly assigned, and the control group was given equal computer time during the study, as well as offered the intervention after completion of the 8-week intervention. External validity (i.e., the extent to which I could generalize my findings to other populations or settings) of this investigation also consisted of a number of factors. One potential threat to external validity was multiple treatment interference (Martella et al., 21


1999) as it was expected that these students also received reading instruction in their classrooms and some of these students also may have received different instruction from parents in their homes. Classroom literacy experiences were compared for their similarity across sites through the collection and categorization of each participating teachers lesson plans. During this investigation, I was not able to account for the kinds and intensities of literacy experiences in the homes. This factor remained a threat to validity. Ecological validity (Onwuegbuzie, 2003) also presented a threat to external validity because preschool settings differed. I used a random selection of preschool settings in an attempt to minimize this threat. Researcher bias (Onwuegbuzie, 2003) also threatened external validity because the results may have been influenced by my involvement. In an attempt to minimize this threat, the teachers were trained to implement the intervention and I maintained a journal to document my limited involvement and to identify potentially influential statements or actions. Organization of the Remaining Chapters Chapter 2 includes a review of the existing literature pertaining to early reading and oral language skills and instruction, the notion of being at-risk for reading difficulties, and the use of instructional technology to teach early reading skills. I continue my literature review with research pertaining to the tenets of generative instruction and conclude with a review of previous studies of the use of generative instruction, within the context of Headsprout Reading Basics, to teach early reading skills. Chapter 3 includes details of the methodology that will be used in this investigation. Chapter 4 is a presentation of the results of this investigation, followed by a discussion in Chapter 5. 22


Chapter 2 Literature Review Overview This chapter begins with a review of literature on early reading and oral language skills and instruction, followed by a review of literature on the notion of being at-risk for reading difficulties, and on the use of instructional technology in teaching early reading skills. This chapter then continues with a review of the tenets of generative instruction, and concludes with a review of the previous studies of generative instruction, within the context of Headsprout Reading Basics, in teaching early reading skills. The preliminary literature review involved computerized searches of the Education Resources Information Center and PsycInfo databases. As a second method, I conducted worldwide web searches using the Google and Google Scholar search engines. This chapter ends with a brief summary. Early Reading Instruction Research indicates that children begin learning to read well before they begin formal schooling (Adams, 1990; Snow et al., 1998; Teale & Sulzby, 1986). Therefore, the issue of when to begin formal instruction has become somewhat moot, and the issue of how to provide this instruction has taken center stage (Teale & Yokota, 2000). This issue provides the impetus for the continuation of a debate related to early reading instruction (Chall, 1989, 1996). While early manifestation of this debate centered on phonics versus whole word, more recent discourse, as it pertains to early reading, has 23


centered on the intricacies of emergent literacy (Teale & Sulzby, 1986) and the ways to ensure that children obtain the prerequisite skills they need to support later, higher-order literacy skills. Although it is difficult to label ideas in this arena as facts, there are some conclusions about reading growth that are assumed to be true based on consistent and repeated research findings (Torgesen, 2002). The ultimate goal of early reading instruction is to help children learn the competencies necessary to comprehend, enjoy, and use the many forms and genres of text (Torgesen, 2002). The National Reading Panel (NRP), a committee of professionals commissioned by the U.S. Congress to review the recent research on reading and reading instruction and identify consistent findings, suggested that effective reading instruction should include (a) teaching children to break apart and manipulate the sounds in words (phonemic awareness); (b) teaching children that these sounds are represented by letters of the alphabet that can be blended together to form words (phonics); (c) having children practice what they have learned by reading aloud with guidance and feedback (guided oral reading); and (d) applying comprehension strategies to guide and improve reading comprehension (National Institute of Child Health and Human Development [NICHD], 2000). Consistent with the suggestions made by the NRP, strong support can be found for the guidelines set by the Early Reading First (ERF) program (USDOE, 2005) to include instruction designed to focus on the development of (a) oral language (vocabulary, expressive language, listening comprehension); (b) phonological awareness (rhyming, blending, segmenting; (c) print awareness; and (d) alphabetic knowledge. Other researchers (Adams, 1990; Bond & Dykstra, 1967; Clay, 1993; Snow et al., 1998) 24


have shown evidence that these skills are predictive of future reading achievement. Once these tenets are accepted, the goal for many early childhood educators, therefore, becomes providing instruction through a challenging, interesting, and developmentally appropriate curriculum (International Reading Association [IRA], 1998; National Association for the Education of Young Children [NAEYC], 2003). The IRA and NAEYC elaborate on this in their position statement on early childhood curriculum. They recommend the implementation of curriculum that (a) is thoughtfully planned; (b) is challenging and engaging; (c) is developmentally appropriate and culturally and linguistically responsive; (d) is comprehensive; and (e) promotes positive outcomes. Because preschool curricula in Florida will be guided by the recommendations of the Early Reading First program, this portion of this literature review will focus on ERFs previously mentioned recommendations that early reading instruction focus on the development of oral language, phonological awareness, print awareness, and alphabetic knowledge. Oral language and early reading skills. The literacy process has widely been studied from diverse perspectives such as linguistics (e.g., Chomsky, 1965), psycholinguistics (e.g., Goodman, 1967), sociolinguistics (e.g., Heath, 1983), and cognitive psychology (e.g., Rumelhart, 1975). The complexities of these studies are beyond the scope of this literature review, however, because language and literacy develop in a parallel and interactive manner (Ruddell & Ruddell, 1994), their interrelationship is pertinent to this study. Oral language skills are often categorized as representing either expressive (i.e., the length and complexity of sentence utterances) or receptive (i.e., knowledge of semantics, syntax, pragmatics, and ability to comprehend) 25


skills (Snow et al., 1998). The preschool years are a crucial time for language development (Dyson & Genishi, 1993) and oral language development, particularly vocabulary acquisition and its uses, is highly predictive of successful reading development and text comprehension (Clay, 2001; Dickinson & Tabors, 2001; Snow et al., 1998; Torgesen, 2002). Some researchers suggest that oral language skills exert an influence over word recognition development that is independent of that associated with phonological skills (Nation & Snowling, 2004). To explore this suggestion, Nation and Snowling (2004) conducted a study with 72 children, measuring the broad oral language skills of vocabulary, listening comprehension, and semantic skills. Using a series of hierarchical regression models, they assessed the effects that these skills had on the reading skills of word recognition, non-word reading, reading comprehension, and irregular word reading. This was a study designed to assess both concurrent and longitudinal predictors of reading success, so the children were approximately 8.5 years old at the first testing and approximately 13 years old at the post-testing. Analyses from the study conducted by Nation and Snowling (2004) showed that oral language skills predicted word recognition and reading comprehension, both concurrently and longitudinally. Oral language skills accounted for unique variance (between 4% and 14%) in word recognition skills and reading comprehension, even after accounting for the influences of age, nonverbal ability, non-word reading ability, and phonological skills. Although the Nation and Snowling (2004) study did not give demographic data for the participants, other researchers (Dickinson & Snow, 1987; Dickinson & Tabors, 2001) have found similar results across a range of social classes, 26


further supporting the view that broader language skills (beyond phonological skills) contribute to future reading skills. The large body of knowledge that links oral language development and reading success (e.g., Clay, 1991; Ruddell & Ruddell, 1994), coupled with other researchers' (Carnine et al., 2004) contentions that students who have not had a large amount of early language experiences benefit from explicit instruction in vocabulary, oral language, and reading skills, provides a foundation for exploring methods to develop them concurrently. Phonological awareness. Phonological awareness refers to one's awareness of, and access to, the sound structure of oral language (Wagner & Torgesen, 1987). A considerable body of evidence suggests that phonological awareness is a predictor of later reading success (Adams, 1990; Cunningham, 1990; Ehri, 1979; Juel, 1994; Snow et al., 1998; Pressley, 1998; Stanovich, 2000; Torgesen, 1999). Phonological awareness includes phonemic awareness (i.e., ability to hear and manipulate the constituent sounds that make up words) (Teale & Yokota, 2000) and the ability to identify word, syllable, and onset/rime levels (Adams, 1990; Sindelar, Lane, Pullen, & Hudson, 2002). Children must be aware that words are composed of phonemes and of graphemes that correspond to those phonemes (Juel, 1991). The goal of instruction in phonemic awareness is to teach children to focus on and manipulate phonemes (i.e., the smallest unit of speech) in spoken words. This includes the tasks of blending sounds to form words, segmenting words into individual phonemes, and identifying rhyming words (Ehri, Nunes, Willows, Schuster, Yaghoub-Zadeh, & Shanahan, 2001). The National Reading Panel (NRP) conducted a meta-analysis on 52 phonemic awareness studies in order to assess whether phonemic awareness affected reading ability. The panel

examined effect sizes to determine whether the treatment groups (i.e., the groups that received phonemic awareness instruction) achieved higher reading scores than those that did not receive this instruction. The majority of effect sizes were positive, with a mean effect size of +0.53, which indicates that students receiving phonemic awareness instruction showed higher reading achievement scores than did students in the control groups (NICHD, 2000). Because the panel selected only those studies that used an experimental or quasi-experimental design with a control group or a multiple-baseline method, many correlational, descriptive, and qualitative studies that contribute to our understanding of the reading process were excluded. While this exclusion does not discount the findings of the panel, inclusion of these studies in further analyses can only serve to enhance our understanding of the relationship between phonemic awareness and reading achievement. In addition to phonemic awareness, phonological awareness includes a child's awareness at the syllable, word, and onset/rime levels. Using qualitative approaches, Goswami (2001) found that phonological development follows a holistic, developmental progression. Her research on phonological awareness led her to suggest that (a) syllables are natural units of analysis for English speakers; (b) onsets and rimes are particularly salient for young learners as their ability with phonology becomes more sensitive to segmentation; (c) children are able to use onset and rime as the basis for analogy at a young age; (d) phonological awareness of onset and rime predicts later success in reading and spelling; and (e) phonemic awareness develops through instruction in alphabetic orthography. Phonemic awareness, onsets and rimes, and syllable- and word-level awareness all were addressed in the current study.
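For reference, effect sizes of the kind reported by the panel (e.g., the mean of +0.53 noted above) are standardized mean differences. The panel's exact computational procedures are described in NICHD (2000); in general form, such an index divides the difference between the treatment and control group means by a pooled standard deviation,

\[ d = \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{control}}}{SD_{\text{pooled}}} \]

so a mean effect size of +0.53 indicates that students who received phonemic awareness instruction scored, on average, roughly half a pooled standard deviation higher than control students on the reading outcome measures.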

Print awareness. Print awareness refers to knowledge of the purposes and conventions of print. Print awareness requires a child to understand that written language is similar to oral language and to recognize that words are groups of letters; however, it also goes beyond these constructs. Print awareness also includes procedural knowledge, such as knowing that a book is strategically arranged and is read from front to back, left to right, and top to bottom (Graves et al., 2004). Additionally, Graves et al. (2004) stated that print awareness encompasses attitudes and feelings toward text. They suggest that the most important attitude for children to acquire is that reading can be fun, causing them to engage in a variety of reading activities. Clay (1993) discusses the print conventions that readers need to learn to be able to attend to the variety of visual information that is available. She states that a reader can use visual knowledge taken from print in highly efficient ways, such as scanning for enough detail to make sense of the text. According to Clay, the beginning reader must either learn independently, or be taught, to analyze print visually in order to locate clues and features and to make distinctions among letters, words, and other signs. Print awareness also requires a beginning reader to understand the functions of white space in text (Clay, 2001). The notion that a child can develop some print awareness for himself/herself is reiterated by Graves et al. (2004), who stated that, although children will have varying degrees of print awareness development, virtually all children, at least in the United States, are surrounded by print environments and generally know that print carries some type of meaning. Clay (1993) found that her Concepts About Print test is a sensitive indicator of a group of behaviors that support reading acquisition. These
behaviors include (a) book orientation knowledge; (b) principles involving the directional arrangement of print on a page and the use of white space; (c) knowledge that the print contains the story; and (d) understanding of simple punctuation marks. Many researchers (Bowey & Patel, 1991; Dickinson & Tabors, 2001; Scarborough, 1991) have found that print awareness correlates with other early reading skills such as phonological abilities and oral language.

Alphabetic knowledge. In learning to read words, students progress through the logographic (i.e., using non-phonemic visual characteristics rather than letter-sound correspondences to read words), alphabetic (i.e., reading words by processing letter-sound relations), and orthographic (i.e., using grapheme-phoneme correspondences and orthographic knowledge to read words) phases (Ehri, 1994). Although words eventually become the units of the English language that are most easily processed by readers (Rayner & Pollatsek, 1989), students at the alphabetic phase of reading development do not possess the background knowledge to identify words as units. Alphabetic knowledge, or letter recognition, refers to knowledge of the shapes and names of the letters of the alphabet and their relationship to spoken language. In order for children to link their knowledge of spoken language to written language, they must master the alphabetic code (i.e., the system of grapheme-phoneme correspondences that links spellings and pronunciations) (Ehri et al., 2001). In particular, children must be aware that words can be spoken or written and that speech corresponds to print. Part of this process is referred to as decoding, which plays a critical role in the reading process (Snow et al., 1998). In order for alphabetic knowledge instruction and decoding instruction to be efficacious, they must be grounded in what we know about the
stages of reading development (i.e., logographic, alphabetic, and orthographic [Ehri, 1994]) and the structure of the English language, and should be aligned with the emerging competence of the student (Moats, 1998). Discourse pertaining to best practice extends to instruction at the alphabetic phase, particularly phonics instruction. Phonics is a method of instruction that teaches correspondences between letters and phonemes and then teaches how to use these correspondences to read and spell words (Ehri, 2004). Traditional phonics programs often taught unnecessary and confusing terminology or rules and taught the code backwards (i.e., they went from letter to sound instead of from sound to letter) (McGuinness & McGuinness, 1998; Moats, 1998). There is now strong support for teaching children each sound, then linking that sound to a grapheme (i.e., a letter, letter group, or letter sequence), and for teaching pattern recognition rather than rule memorization (Ehri, 2004; Moats, 1998; Snow et al., 1998). For some researchers, systematic phonics instruction (i.e., the direct teaching of a set of letter-sound relationships in a clearly defined sequence) is considered essential in learning to read because the English writing system is alphabetic and can cause difficulties if children do not learn the system (Adams, 1990; Chall, 1996; Ehri, 2004). Chall (1996) conducted a comprehensive review of beginning reading instruction and found that early and systematic instruction in phonics led to higher achievement in reading than did later and less systematic phonics instruction. Adams (1990) supported these findings in her comprehensive review of beginning reading instruction. Ehri (2004) further states that the goals of instruction in alphabetic knowledge are to teach beginning readers letter-sound correspondences and how to use these correspondences to decode
words. This leads to a primary goal of alphabetic instruction, which is to teach students to read words in and out of context. The NRP conducted a meta-analysis on phonics, which compared the effectiveness of systematic phonics instruction, unsystematic phonics instruction, and no phonics instruction at all. The panel located studies that included both experimental and control groups and were conducted in school rather than in laboratory settings (Ehri et al., 2001). For inclusion in the meta-analysis, these studies also had to measure reading as an outcome of instruction. Studies were excluded if they had been included in the Panel's other meta-analysis of phonemic awareness instruction. Additionally, the results of included studies had to have been published in peer-reviewed journals. Specific skills incorporated into the instruction included learning the shapes and names of all capital and lowercase letters and learning major grapheme-phoneme correspondences. Reading outcomes that were measured included reading words and pseudowords, reading text orally, and text comprehension (Ehri et al., 2001). Sixty-six treatment-control group comparisons were made, and the researchers used an effect size index to analyze the effects of phonics instruction on reading outcome measures. Medium effect sizes were found on measures of decoding regularly spelled words (+0.67) and pseudowords (+0.60). Most of the other effect sizes were positive and approached medium size, with an overall mean of +0.41 (Ehri et al., 2001). These findings indicate that instruction that includes systematic phonics is more effective in teaching children to read than instruction without it (NICHD, 2000). An interesting implication that the NRP members suggest in their summary is that when teaching not
only is effective but also is enjoyable, it is more likely that teachers will be committed to delivering the instruction. Clay (2001) discusses the importance of explicitly teaching the relationship of speech to the code. Clay states that the code represents many objects (e.g., signals, signs, rules, marks), but for Clay, the code constitutes abstract symbols that represent letters and the idea that these letters make words. She suggests that preschool children have difficulty remembering the shapes of some of the letters or symbols and that, if this difficulty is not addressed through instruction at the early stages, confusions may become firmly established.

Instructional materials. The what as well as the how of instruction becomes imperative, as researchers have demonstrated the importance of choosing appropriate texts in developing early reading skills (Fountas & Pinnell, 1996; Hiebert, 1999; Sindelar et al., 2002; Teale & Yokota, 2000). Appropriate texts and well-constructed, pertinent materials allow teachers to devote more of their time to their interactions with students (Carnine et al., 2004). Although complete agreement on the design of instructional texts is not found in the literature, there are components that have a great deal of evidence to support their inclusion in texts for beginning readers. A thorough discussion of text features is beyond the scope of this investigation; however, because of the importance of text choice for beginning readers, a brief mention is made here. Decodability can make texts accessible for beginning readers (Hoffman, Sailors, & Patterson, 2001). Decodable texts are those with (a) a proportion of words with phonically regular relationships between letters and sounds and (b) a degree of matching between the letter/sound relationships represented in the text and those that the student
has been taught (Beck & Juel, 1995). Hoffman et al. (2001) describe decodability as being focused on the word level and as reflecting the use of high-frequency words as well as words that are phonically regular. Kameenui and Simmons (1997) suggest that decodability follows a continuum and has been shown to be an effective, integral part of larger instructional programs. Other researchers (Beck & Juel, 1995; Mesmer, 2001) believe that decodability has a discrete developmental period of usefulness (see Mesmer, 2001, for a theoretical model for the use of decodable text). Clay (2001) describes appropriate texts and materials as those that allow the reader to engage with novel features of the text while simultaneously controlling for error behavior. Therefore, according to Clay, the choice of appropriate texts will produce successful learning experiences and motivation for further learning. Hiebert (1999) notes the importance of appropriate texts in providing practice with word patterns and in serving as a critical bridge to efficient decoding abilities. While definitions of appropriate materials vary, it is clear that carefully chosen texts can serve to motivate children (Marsh, 2003; Moll, Amanti, Neff, & Gonzalez, 1992) and that text features affect early readers. Marsh (2003) suggests using texts drawn from popular culture to allow children to call on their prior experiences in social contexts and to help them make meaning of the text. Moll et al. (1992) reiterate this suggestion and discuss the wealth of knowledge available to children that can provide background knowledge and assist them in taking ownership of the texts presented to them. Despite, or perhaps because of, the continuing debate concerning the best way to teach beginning reading, it is generally agreed that there is no single approach that will meet the needs of all children (Adams, 1990; IRA & NAEYC, 1998; Pressley, 1998).
However, the research cited in this section strongly supports (a) systematic instruction in phonological awareness and phonics, (b) explicit teaching of vocabulary and other oral language skills, and (c) the use of interesting, age-appropriate texts and materials.

At-Risk for Reading Difficulties
Being at-risk for school-based reading difficulties can be attributed to a number of economic, environmental, academic, or emotional variables (Wharton-McDonald, Pressley, & Hampston, 1998). Snow et al. (1998) report finding convincing evidence that some groups of children are at-risk for reading difficulties because they are affected by one or more of the following conditions: (a) they are expected to attend schools with chronically low achievement levels; (b) they reside in low-income families and live in poor neighborhoods; (c) they have limited proficiency in spoken English; and (d) they speak a dialect of English that differs substantially from the one used in school. Although social, familial, and cultural mismatches to school culture and language can be mediated to enhance educational outcomes (Heath, 1983), they must also be addressed if they are hindering education. Because the participants in the current study are children who qualify for the Head Start program, which is designed, in part, to foster healthy development in low-income children, the focal point of this section of the literature review is children from low-income families. Children require exposure to vocabulary and language in general, and they specifically need exposure to expression and interpretation that will increase the probability of success in school (Hart & Risley, 1995). Unfortunately, many children arrive in school with serious differences relative to school-based literacies, partially due to a lack of such exposure (Hart & Risley, 1995; McGee & Richgels, 2003). Torgesen
(1999) found that some children from families of lower socioeconomic status also enter school with significant weaknesses in school-based phonological skills, print-related knowledge, and vocabulary. Hart and Risley (1995) conducted a longitudinal study to discover the relationships between family interaction patterns and vocabulary growth rates. The observers conducted monthly visits to the homes of children, ages birth to four years, from professional families, working-class families, and welfare families. The observers stayed in the home for one-hour intervals. These researchers conducted four-way reliability observations to achieve high percentages of inter-observer agreement as they coded interactions and vocabulary usage. Vocabulary was separated into the following categories: (a) nouns, (b) verbs, (c) modifiers, (d) functors (pronouns, prepositions, demonstratives, articles), and (e) special codes for proper nouns so that family and name vocabulary would not inflate the numbers. Overall results indicated that parents in professional families seemed to be preparing their children to participate in problem solving and advanced education, as indicated by later vocabulary growth and reading achievement. The talk within the welfare-receiving families suggested a culture focused on established customs; therefore, language that was rich in nouns and modifiers did not appear to be necessary. These findings are consistent with those of other researchers (e.g., Heath, 1983; Labov, 1968) who found that adult-child verbal interactions at home are often quite different from those found in schools. Hart and Risley's (1995) study also reported very different lifestyles among the families but noted that all participants were similarly involved in the fundamental task of raising a child. All children were found to have similar types of language experiences.
They all heard talk about people, relationships, actions, feelings, and events. What was markedly different was the amount of these experiences. The data revealed the following differences in words heard per hour by children across the family categories: (a) welfare (616), (b) working class (1,251), and (c) professional (2,153). The researchers translated these differences into lower trajectories of word learning for children in the welfare-receiving families, and they estimated that, in order to catch up to their more advantaged peers, these children would need 41 hours of out-of-home language experience per week. Hart and Risley (1995) further state that by four years of age, children had already established patterns of vocabulary growth that were, oftentimes, intractable. Although the patterns of behavior in these homes could have been affected by the presence of the observer, the longitudinal nature of this study appears to have been sufficient to minimize these effects, and although no two homes are alike, it seems reasonable to assume that similar patterns would be found in other homes of low socio-economic status. However, this is not to say that there is a lack of richness of language and literacy experiences in the homes of many families of low socio-economic status (Heath, 1983; Taylor & Dorsey-Gaines, 1988). Rather, it is to acknowledge that some children from families of low socio-economic status may be at-risk for reading difficulties, and for diminished school success, because their home language content and processes differ from those used in schools.
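The 41-hour estimate cited above can be approximated from the words-per-hour figures alone. The short sketch below is illustrative only: the 100 waking hours per week, and the assumption that each out-of-home hour would be as word-rich as an hour in an average professional home (replacing an hour of welfare-home talk), are assumptions of this sketch rather than figures reported in this section.

# Illustrative arithmetic only; the hourly rates are those cited above, and the
# 100 waking hours per week is an assumption of this sketch.
welfare_rate = 616            # words heard per hour
working_class_rate = 1251
professional_rate = 2153
waking_hours_per_week = 100   # assumed

# Weekly shortfall of a welfare-home child relative to a working-class child.
weekly_gap = (working_class_rate - welfare_rate) * waking_hours_per_week

# Each enriched out-of-home hour replaces an hour of welfare-home talk,
# so the net gain per hour is the difference between the two rates.
net_gain_per_hour = professional_rate - welfare_rate

hours_needed = weekly_gap / net_gain_per_hour
print(round(hours_needed, 1))  # prints 41.3, close to the 41 hours cited above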

Ruddell and Ruddell (1994) also explored language development in the early years and its relationship to literacy. They, too, noted the importance of access to environmental encounters with language and extended this concept to include encounters with print materials. Although they acknowledged that children's environments influence their language and literacy development, Ruddell and Ruddell also concluded that children enter school with a high degree of language competence. Although few would disagree that each child brings a unique background to school, caution is suggested in assuming that all children enter school with a high degree of language competence as it relates to school readiness, as exceptions do exist (Hart & Risley, 1995; Stanovich, 2000). Makin (2003) also states that at-risk children usually come from low-income, low-literacy, or bilingual homes. Like Snow et al. (1998), Makin summarizes that although these three factors may have a cumulative nature, poverty appears to be a salient predictor of problems with reading. In summation, research suggests that many children from low socio-economic homes are ill-prepared to enter school. Considering the concern that by the time children are four years of age, intervention programs may be too little to make up for the past (Hart & Risley, 1995), it seems prudent and necessary to explore possible solutions. Heath (1983) suggests striving for instructional similarities that bridge home and school literacies as we search for these solutions. Ideally, early literacy instruction would be tailored to an individual student's learning characteristics. However, the sheer size of a preschool teacher's current daily workload precludes such fine-tuning.

Computers and Early Reading Skills
Computers are familiar objects to many children; yet some students, particularly those from families of low socioeconomic status, may have very little experience using them. However, the use of educational technology to support the instruction provided by
individual classroom teachers whose responsibilities often exceed their resources (Crevola & Hill, 1998) is a salient issue. This strain on a preschool teacher's resources stems from a variety of issues, including longer hours, the complexity of effective early reading instruction, individual student preferences, and students' needs for highly engaging academic activities (Crevola & Hill, 1998). As computers become more prevalent in preschool classrooms, questions arise concerning the developmental appropriateness of this technology for young children (Robinson, 2004). Labbo and Reinking (2003) provide evidence that computers are motivational and can provide practice opportunities; yet, they caution that the research base is shallow, due partially to the relatively short history of educational computing. Some researchers (Haugland, 1992; Johnson, 1985; Liu, 1996) report that children interact better with software that provides them with control and choices, whereas others (Torgesen & Barker, 1995) show that drill-and-practice software can be effective in developing early reading skills. According to Labbo and Reinking (2003), context counts when it comes to effective use of computer technology in early childhood, and the nature of the learning conditions set up by the teacher is imperative to success. Concerns about input devices also are prevalent when discussing preschool children and computers, as their engagement is likely to be affected by the ease or difficulty of using a keyboard, mouse, or other input device. In three studies conducted with preschool students (Alloway, 1994; Liu, 1996; Revelle & Strommen, 1990), the mouse was found to be the most efficient method of input, as it precludes the need for complete alphabetic recognition and highly developed fine motor skills.
In a study of 64 three-year-olds, Revelle and Strommen (1990) found that the accuracy rate using a mouse increased over a five-day period while accuracy using a keyboard or joystick stayed the same. In a study conducted with 12 preschool children from mid- to low-income families, Liu (1996) questioned the children about their computer knowledge. Fifty-eight percent of the children did not know what a computer was, and the rest said they had some experience with computers. Liu was interested in observing how these children used computers. The software was designed for the children to work on the spatial concepts of up/down, in/out, front/behind, and above/below. Liu (1996) reported that the children with computer experience used the mouse very well and, conversely, that the students with no experience exhibited difficulties. Liu further reported that the mouse was more efficient than keyboards and joysticks when the task called for manipulating items on the screen. Liu also stated that children who were given control over their computer programs spent more time at the computer than in other classroom activities. The findings from these studies suggest that the ease of use of the mouse as an input device makes it a safe choice when using computers with preschool children. Patterson, Henry, O'Quin, Ceprano, and Blue (2003) moved beyond input devices and addressed the question of program effectiveness in their year-long, mixed-methods study of the effects of a computer-based reading program (Waterford Early Reading) on the reading achievement of students in 16 (8 experimental, 8 control) kindergarten and first-grade classrooms. To assess literacy growth, they chose Clay's (1993) observation survey to secure an assessment that was independent of the curriculum and materials, as
well as semi-structured interviews with the teachers to elicit their beliefs about early literacy instruction and the Waterford program. Results from this study indicated that the Waterford program did not produce any statistically significant effects on reading or early literacy (Patterson et al., 2003). These researchers found results to support the notion that it is the teacher (Bond & Dykstra, 1967; Pressley, 1998), rather than the program, who produces the greatest positive effects. Patterson et al. (2003) found that children whose teachers spent the greatest proportion of their time on instruction rather than classroom management showed gains in reading, whether they utilized the Waterford program or not. Interestingly, though, the interviews revealed that the teachers expressed complete confidence in the Waterford program's ability to design and monitor appropriate instruction to enhance literacy growth. In the Patterson et al. (2003) study, the Waterford and non-Waterford groups were matched for comparison purposes via the instructional styles of the teachers. These matches relied on descriptions of the classrooms by reading supervisors and volunteer teachers. Although it is reasonable to assume that the matched classrooms were similar, random assignment of children to the two groups would have increased the confidence with which we can conclude that the Waterford program made no difference in the reading levels of these students. Another possible explanation for the statistically nonsignificant results in this study may be the degree to which the Waterford program was implemented. Studies that closely monitor the amount of time each student spends on a program and the fidelity of implementation make it easier to draw further conclusions.
Blok, Oostdam, Otter, and Overmaat (2002) conducted a meta-analysis of 42 English and Dutch studies published between 1990 and 2000 that dealt with the effectiveness of using computers to teach beginning reading to children aged 5-12 years. Chosen studies had to include a pre-course assessment, an experimental design, and some sort of reading skill measure as a dependent variable. The meta-analysis found an overall effect size of 0.19 favoring computer-assisted reading instruction. For the English-only studies, the researchers found a larger, moderate effect size of 0.5. Blok et al. (2002) noted a scarcity of high-quality studies and suggested that future studies include random assignment or matching of students and better descriptions of the control group conditions. Despite these reasonable concerns, the moderate effect size for the English-only studies indicates a benefit to providing computer-assisted instruction in teaching early reading. Reitsma and Wesseling (1998) explored the effect that a computer program would have on the development of phonological skills in kindergarten and first-grade students. Their findings indicated that children trained in phonological awareness skills using the computer scored significantly higher on blending and decoding skills than did students in the control group. Additionally, these researchers conducted follow-up tests and found that these results were durable six months later. The use of comparison groups and post-testing lends credibility to the conclusion of this study, which suggests that computers can be used to provide effective phonological awareness instruction. Van Daal and Reitsma (2000) also studied the effects of computer-assisted instruction (Circus of Reading) on kindergarten students. They randomly assigned 9 children to the experimental group, which received the computer-based reading intervention, and the
remaining 13 children formed the control group. The intervention spanned four months, but the average amount of time spent on the computer was only 3 hours and 13 minutes. At the end of the intervention, all 22 children were tested on their ability to name letters, recognize words, and decode non-words. Post-testing revealed statistically significant gains for the experimental group over the control group in both real-word and non-word reading. The random assignment of students in this study increases the confidence we can have that the groups were similar; however, the relatively small number of participants makes replication necessary before generalizations to other populations can be made confidently. The current study is similar to these studies, but it seeks to expand on these findings by starting the intervention at a younger age (preschool), assessing the effect of a computer-based program on comprehension skills, and assessing the effects the program has on the professionals who implement it.

Generative Instruction
As previously mentioned, many students arrive in school at-risk for reading difficulties due to a lack of exposure to the types of early experiences with language and reading activities that are presented in schools. Researchers have documented the efficacy of providing explicit reading instruction to early readers to reduce the instructional differences that result from individual experiences and to secure a strong foundation on which to build higher-order skills (Carnine et al., 2004; Graves et al., 2004; Snow et al., 1998). One system of instruction that has demonstrated the potential to teach initial reading concepts explicitly in a number of studies is generative instruction, which is described as a careful sequence of procedures that establish key component skills,
provide practice to fluency or automaticity, and then provide environments that increase the probability that these skills will combine into more complex skills with little additional instruction (Johnson & Layng, 1994; Layng et al., 2004). Layng et al. (2002) further explain that generative instruction can be combined with contingency adduction (i.e., the recruitment of a skill established under one set of conditions by an entirely new set of conditions) to bring about the acquisition of complex skills and strategies. Complex skill and strategy acquisition requires (a) an instructional sequence that firmly establishes constituent skills, (b) specially arranged environments that occasion these constituent skills, and (c) a consequential event that serves to select the new skill set (Johnson & Layng, 1994). As an example of how this type of instruction can be used to teach a beginning reading skill, Layng et al. (2002) describe their examination of contingency adduction, applied in a generative instruction sequence, and its effectiveness in establishing sound-to-letter correspondence. They studied 241 non-reading children, of various socioeconomic, racial, and geographic categories, ranging in age from 2 years, 11 months to 11 years, 8 months, with the majority being 4-6 years old. At the beginning of the study, none of the children demonstrated a sound/letter correspondence repertoire. The researchers systematically taught a set of phonetic elements that established letter-sound correspondence by asking the children to (a) present themselves with letter and sound pairings, (b) click on a letter or letter set upon hearing the sound, (c) select the phonetic elements from an array of other phonetic elements easily confused with the target element, (d) learn another phonetic element with the same routines (a to c),
(e) conditionally select taught elements placed together in a new array based upon what was said, (f) pick the sound elements out of words, and (g) complete timed practice exercises to ensure segmenting fluency. Layng et al. (2002) used both the oddity-from-sample procedure and the combined stimulus procedure in their study. The objective of the oddity-from-sample procedure was to have the children segment combined sounds into their constituent sounds without having been explicitly taught the constituent sounds. As an example, the child was taught the /cl/ sound via the procedure explained in the previous paragraph. The narrator then explained that some sounds have other sounds inside them and asked the child to click on the sound that does not say /cl/. The set of sounds taught prior to this procedure was /an/, /cl/, /fr/, /ip/, /ish/, and /sw/. To see if contingency adduction was taking place, the researchers tested for the sounds /n/, /c/, /l/, /f/, /r/, /i/, /p/, /sh/, and /w/. The objective of the second procedure, the combined stimulus procedure, was to blend individually learned sounds into combined blends without the blend having been directly taught. The set of sounds taught prior to this procedure was /c/, /r/, /f/, /l/, /s/, /r/, /t/, and /n/. To see if contingency adduction was taking place, the researchers tested for the blends /cr/, /fl/, /sl/, /sn/, /pl/, /pr/, /sp/, /st/, and /tr/. Results from both procedures were promising. The percent correct for the oddity-from-sample procedure ranged from 90% to greater than 96%, showing that the children could distinguish the sound that was not the one previously learned. In the combined stimulus procedure, the mean percent correct ranged from 86% to 95%, showing that children could select blends that had not been directly taught. Layng et al. (2002) suggest
that the data obtained from this study show that contingency adduction in a generative instruction model can produce a high level of effectiveness in initial phonics instruction. Although these results are positive, further research is needed to see if the children's recognition of these sounds and blends transfers to other settings such as book reading, particularly with books that are not associated with the particular program (Headsprout) used in this study. In the current study, Headsprout books were used as prescribed in the program, but achievement was measured through pre- and post-tests of the Test of Early Reading Ability (TERA-3) and the Test of Language Development (TOLD-3). Generative instruction promotes fluency building (i.e., a combination of accuracy and speed that leads to ease of skill performance, retention of the skill, and the ability to apply the skill to new situations [Binder, 1988]), which Johnson and Layng (1994) see as a way to address the needs of children who are at-risk of falling behind their peers and to help teachers decrease the gaps in same-age student performances. To measure these levels of achievement, generative instruction relies on frequency measurement. Frequency measurement is a critical component of generative instruction because of the continuous, orderly data it produces and because it can accurately predict future behavior (Johnson & Layng, 1994; Skinner, 1953). The generative instruction program (Headsprout Reading Basics) selected for the current study uses frequency measurement to report the frequency of use for each student, including the average days between episodes and the average number of episodes completed each week. Frequencies of correct and incorrect responses also are calculated and expressed as an overall percentage of correct responses. These data reports can be printed as needed by the classroom teachers to guide further instruction and document
progress. In the current study, the frequencies of correct and incorrect responses were used in conjunction with Benchmark Assessments to determine when it was necessary for a child to repeat an episode. Additionally, frequency reports of minutes spent in the program were used to assess the effect of time spent in the program on student achievement. Details are provided in Chapter 3.

Headsprout Reading Basics
The Headsprout Early Reading program consists of two parts: Headsprout Reading Basics (Episodes 1-40) and Headsprout Reading Independence (Episodes 41-80). This investigation limited the intervention to the first 40 episodes, namely Headsprout Reading Basics. Headsprout Reading Basics is a supplemental beginning reading program designed to teach critical foundational skills (FCRR, 2004). It is designed to capture and maintain the attention of the student through the use of one-on-one instruction at the student's level, immediate positive feedback, and entertaining and engaging characters and graphics. Program environments include Space World, Dinosaur World, Undersea World, and Jungle World (Layng, Twyman, & Stikeleather, 2003). The episodes are designed to be completed independently, although the teacher needs to be well versed in the skills being addressed and in the particular instructions in order to troubleshoot or redirect (e.g., exchange a high-five with a child who is calling for attention after a successful interaction and say "see what happens next") when necessary. Each episode should be completed in 20-30 minutes, and teachers have next-day access to individual progress reports to use in making instructional decisions (e.g., reset a student with an accuracy score of less than 80% on an episode). Teachers also have a scope and sequence chart and individual progress maps to assist in monitoring
skill acquisition for each student. Based on a review of the literature as well as a personal examination of the 40 episodes of Headsprout Reading Basics, the critical components identified by the ERF (USDOE, 2005) are taught by this program (FCRR, 2004). As previously mentioned, these include (a) oral language, (b) phonological awareness, (c) print awareness, and (d) alphabetic knowledge. One way oral language is developed in Headsprout Reading Basics is through vocal potentiation routines. These routines encourage the child to speak in the absence of an independent listener. Potentiating routines in Headsprout Reading Basics use presentation, confirmation, and correction methods to bring a child's spoken behavior under the guidance of textual stimuli and his/her own discriminative skills (Layng, Twyman, & Stikeleather, in press). Phonological awareness is taught through visual and auditory stimuli that are presented in a logical sequence and in a way whereby the child's behavior is either confirmed or corrected. Layng et al. (in press) call this type of routine an establishing routine, and it is also used to teach whole-word reading when necessary. One way print awareness is taught in Headsprout Reading Basics is through story routines. Children learn word order and sentence sense as a narrator reads the words and the software highlights them. This progresses to the child reading the words as the software highlights them, and it culminates in a comprehension question in which the child clicks on a picture indicating the meaning of the sentence. As an example of how alphabetic knowledge is taught in Headsprout Reading Basics, blending and segmenting are taught by requiring the child to hold sounds until the next sound is vocalized and then say the word quickly, as one normally would (Layng et al., in press).
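Before turning to the program's adaptive features, the frequency-based reporting described earlier (an overall percentage of correct responses, the average number of days between episodes, and the guideline of resetting an episode completed with less than 80% accuracy) can be made concrete with a brief sketch. The record layout, field names, and threshold handling below are hypothetical illustrations of that kind of report, not Headsprout's actual data format or internal computations.

from datetime import date

# Hypothetical episode log for one student; the fields are illustrative only.
episodes = [
    {"episode": 1, "date": date(2005, 1, 10), "correct": 42, "incorrect": 6},
    {"episode": 2, "date": date(2005, 1, 12), "correct": 38, "incorrect": 12},
    {"episode": 3, "date": date(2005, 1, 13), "correct": 45, "incorrect": 3},
]

def percent_correct(record):
    total = record["correct"] + record["incorrect"]
    return 100.0 * record["correct"] / total

# Overall percentage of correct responses across the episodes completed so far.
total_correct = sum(e["correct"] for e in episodes)
total_responses = sum(e["correct"] + e["incorrect"] for e in episodes)
overall_accuracy = 100.0 * total_correct / total_responses

# Average number of days between consecutive episodes.
gaps = [(later["date"] - earlier["date"]).days
        for earlier, later in zip(episodes, episodes[1:])]
average_days_between = sum(gaps) / len(gaps)

# Episodes a teacher might choose to reset (accuracy below 80%).
episodes_to_repeat = [e["episode"] for e in episodes if percent_correct(e) < 80]

print(f"Overall accuracy: {overall_accuracy:.1f}%")
print(f"Average days between episodes: {average_days_between:.1f}")
print(f"Episodes below 80% accuracy: {episodes_to_repeat}")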

A unique feature and possible benefit of being Internet-based is the ability of this program to adapt to the individual needs and pace of each student, using a technology that responds to a student's pattern of errors and sets up a series of correction procedures (FCRR, 2004). Individualized instructional routines are established depending on student responses, and the student exits each episode only after demonstrating mastery of the lesson's objectives. Incorporating individualized routines into pedagogical practices is supported by researchers (Clay, 2001; Vygotsky, 1978), who stress the importance of providing instruction within the realm of each student's individual instructional level.

Previous field studies using Headsprout Reading Basics. Layng et al. (in press) report that in one investigation, 20 preschool children completed the 40 Headsprout Reading Basics lessons (less than 15 hours of instruction). These students demonstrated a mean gain of one year (from 0.5 to 1.5 years), as measured by the Woodcock-Johnson Letter-Word Identification subtest (Woodcock, McGrew, & Mather, 2001). Although these gains are impressive, it can reasonably be assumed that part of those gains stems from other instruction or maturation. Additional studies including control groups would control for these other possibilities and allow for more credible inferences. Layng et al. (in press) also report on a pilot study that was implemented in a Title I kindergarten class in the Seattle Public School system in 2002. Prior to 2002, no more than 50% of these kindergarten students scored on grade level, as measured by the Developmental Reading Assessment (DRA) (Beaver, 1997). Twenty-three students who completed the lessons were subsequently evaluated through the DRA. All of the students scored above the kindergarten level, and 82% scored at an early to mid first-grade level. Again, the results appear to be impressive; yet, additional studies using a control group
and a form of pre-test/post-test are needed to infer causation and rule out other possible explanations. Headsprout Reading Basics also was used in 2003 in the same school with 16 kindergartners. Assessment using the Woodcock-Johnson Word Identification subtest yielded a pre-test level of 0.4 (i.e., within kindergarten level), while post-testing revealed a within-grade level of 1.3. Based on a description of this subtest provided by Rathvon (2004), this finding indicates that the kindergarten students who received instruction in Headsprout Reading Basics markedly improved in their ability to identify and name letters and words. Methodologically, this study represented an improvement over the previous study by including pre-tests and post-tests; yet, other factors, including other reading instruction, cannot be ruled out as having caused these gains. Clarfield and Stoner (2005) examined the effects of Headsprout Reading Basics on three kindergarten and first-grade students who had been diagnosed with Attention-Deficit/Hyperactivity Disorder. They used a multiple-baseline design across participants to investigate the program's effects on oral reading fluency and task engagement. During the baseline condition, the students received instruction in the school's general reading curriculum. During the experimental condition, the participants also received instruction in the Headsprout Reading Basics program during nonacademic time. All three students completed more than one-half of Headsprout Reading Basics' 40 episodes. For all of these students, Clarfield and Stoner (2005) reported higher mean levels of oral reading fluency and greater rates of growth, as measured by the Dynamic Indicators of Basic Early Literacy Skills Oral Reading Fluency measure, as compared to the baseline rates. Results also indicated that off-task behavior, as measured by the Behavior Observation of Students in
Schools, was immediately decreased by the introduction of the program. Despite the small number of participants in this study, Clarfield and Stoner (2005) demonstrated positive effects on kindergarten and first-grade students with a diagnosed disability. Evaluation of a program is critical when choosing instructional technology to use in teaching early reading skills. Wepner and Ray (2000) list the following key components of instructional technology that will aid skill development: (a) immediacy and predictability of visual and auditory clues; (b) focused, individual feedback; (c) opportunity for multiple repetitions; (d) introduction of skills in a predictable sequence; and (e) development of concepts through visual, auditory, and kinesthetic modalities. Wepner and Ray further posit that if a program includes these components, it provides opportunities for developing literacy that are usually unavailable through other means. They conclude their discussion of technology and early literacy learning by stating the following: "Adjusting our instructional schemas to include these technological enhancements is not always easy, but the reward for that adjustment is the knowledge that we are helping children to develop literacy with today's tools for tomorrow's future" (Wepner & Ray, 2000, p. 181). Based on a literature review (FCRR, 2004; Layng et al., in press) and a personal review of the forty Headsprout Reading Basics episodes, the instructional technology program (Headsprout Reading Basics) used for this investigation follows the guidelines set by Wepner and Ray (2000). The program also is compatible with the majority of the computers used in the Head Start centers in regard to both hardware requirements (a minimum of a 266 MHz processor, a mouse, 32 MB of RAM, 30 MB of free disk space, and a 16-bit sound card) and software requirements (Windows, a web browser, and Macromedia
Flash Version 6 r47 or above). Access to the Internet also is needed (at least a 56k modem), and although this access commonly is limited to just one computer at the Head Start programs, it was provided, in the necessary numbers, via the mobile computer lab.

Gender and Reading Skills
Females in the United States outperform males in reading, and the majority of the students identified as being at-risk of poor achievement in reading are male (National Center for Education Statistics, 2000; Programme for International Student Assessment [PISA], 2003). Responsibility for this difference in achievement has been linked to female teachers and to the types of texts used in our schools. Millard (2003) indicates that boys have a relative lack of interest in school reading curricula and that female teachers may inadvertently limit boys' involvement in reading through their curricular choices. Stereotypic models of gendered behavior also influence how boys interact with curricular options (Gilbert & Gilbert, 1998) and may contribute to school-based literacy disadvantages for boys. However, boys do exhibit high interest levels in electronic media such as television and video games. Unfortunately, time spent on these games often replaces literacy activities and contributes to literacy underachievement (Rowe, 2000). Newkirk (2002) suggests rethinking school practices, moving from inactive to active learning, and including entertainment media in those practices. Millard (2003) supports the rethinking of school practices by suggesting that teachers provide preferred texts and genres. Because of the discrepancy in reading achievement between boys and girls, this investigation explored the effects gender had on the outcome measures. Due to Headsprout Reading Basics' similarities to video games, it may be a preferred text for
boys. If Internet-based reading programs capture and keep students' attention, rethinking school practices to include them at an early age may improve literacy achievement outcomes for boys, and for girls as well (Rowe, 2000).

Teachers' Perceptions of Educational Technology Implementations
Developmentally appropriate technology, infused into the current curriculum, encourages children to solve problems and enhances achievement (NAEYC, 2003). One salient problem, however, is that many teachers receive very little training in how to use technology or in how to gauge its effectiveness (Willis & Mehlinger, 1996). Haugland (2005) suggests that, working together, teachers and technology specialists can achieve computer integration. Teachers support manageable and meaningful changes in their classrooms, and obtaining this support is critical in using technology-based programs to enhance student learning (Willis & Mehlinger, 1996). To obtain this support, Helterbran and Fennimore (2004) report the need for continuing education for early childhood teachers. They found that it is not prudent to spend time and resources on professional education opportunities unless participants view those opportunities as being important and helpful. Helterbran and Fennimore (2004) also discuss the accountability challenges facing early childhood educators pertaining to the development of academic skills. Collaborations formed to meet these challenges will only be successful if undertaken for and with early childhood teachers rather than to them. Freeman and King (2003) report that this type of professional development rarely focuses on curriculum and assessment or on the preschool's role in preparing students for kindergarten. The principal investigator kept these cautionary statements in mind when
providing the training for this program. Data analysis of the interview responses will add to the literature base on teachers' perceptions of infusing technology into early childhood programs to increase academic achievement.

Summary and Implications for the Present Investigation
This chapter began with a review of the literature pertaining to instruction in early reading and oral language skills. The research has shown that oral language and early reading skills develop in a parallel and interactive manner, which suggests the need to explore methods to develop them concurrently. In addition, the notion of being at-risk for reading difficulties was discussed. Studies indicate that reading difficulties can be attributed to a variety of economic and environmental variables; poverty, however, appears to be a salient predictor of problems with reading. Computer use with young children also was explored. Results are not conclusive, but there are indications that providing computer-assisted instruction offers benefits to young children. Scant research exists pertaining to preschool children at-risk for reading difficulties and the use of computers to teach early reading skills to help combat that risk. Gender differences in literacy also were reviewed. Indicators were that females outperform males in most literacy categories, and some of the literature suggested that educators rethink school-based literacy practices to accommodate boys' preferences. Literature on generative instruction and its use in the Internet-based program Headsprout Reading Basics also was reviewed. While a few studies show positive results, more studies that include appropriate control groups and random assignment are needed before any firm inferences can be made. Finally, a review of the literature on teachers' perceptions of educational technology implementations in preschool settings
was conducted. This research suggested that the integration of technology-based educational programs can succeed if early childhood teachers receive the support they need and if the programs meet their needs and the needs of their students.

Chapter 3
Methodology

Statement of the Purpose
This investigation evaluated the effects of instruction through the Headsprout Reading Basics program on the early reading and oral language skills of at-risk four- and five-year-old preschool students. In addition, the roles that gender and total minutes in the program play in its effectiveness were investigated. It also was my intent to obtain data on the teachers' and their assistants' perspectives of instruction, within the context of Headsprout Reading Basics, after first-time implementation. These perspectives are germane to the reading and oral language achievement outcome measures inasmuch as the effort expended by the teachers must reap perceived benefits before an intervention will be accepted for future use. These perspectives also provide rich information that could improve implementation procedures. The following discussion addresses the participants, instruments, and procedures that comprised the investigation.

Research Questions
The specific research questions addressed were:

Research Question 1. What is the difference in achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA-3), between students who receive instruction through the Headsprout Reading Basics program and students who do not receive this instruction?

Research Question 2. What is the difference in achievement in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), between students who receive instruction through the Headsprout Reading Basics program and students who do not receive this instruction?

Research Question 3. What is the effect of instruction through the Headsprout Reading Basics program on student achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA-3), as a function of number of minutes in the program?

Research Question 4. What is the effect of instruction through the Headsprout Reading Basics program on student achievement in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), as a function of number of minutes in the program?

Research Question 5. What is the effect of instruction through the Headsprout Reading Basics program on student achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA-3), as a function of gender?

Research Question 6. What is the effect of instruction through the Headsprout Reading Basics program on student achievement in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), as a function of gender?

Research Question 7. What are the perceptions of preschool students' teachers and their assistants regarding instruction within the context of Headsprout Reading Basics after first-time implementation with their students?

Research Design
This investigation was a QUAN-qual (i.e., a quantitative and a qualitative method used sequentially with a deductive theoretical drive) (Morse, 2003), mixed-methods design, using both qualitative and quantitative approaches in the data collection and analysis phases. The QUAN-qual design was deemed the most appropriate to gather the quantitative data (scores on the TERA-3 and TOLD-3) and the qualitative data (teachers' and their assistants' perceptions of instruction, within the context of Headsprout Reading Basics, after first-time implementation). Using the dimensional conceptualization generated by Patton (1990), this mixed-model investigation is experimental, yet it goes beyond the statistical analysis and inference employed in pure quantitative designs to include qualitative analysis and inference (Tashakkori & Teddlie, 1998). Mixed-methods research has philosophical roots in the post-positivist perspective but also embraces other perspectives (e.g., pragmatist) to gain a greater understanding of the phenomenon being studied (Tashakkori & Teddlie, 1998). According to Johnson and Turner (2003), mixed-methods approaches, like those employed in this investigation, are used to (a) obtain corroboration of findings, (b) minimize alternative explanations for conclusions, and (c) elucidate divergent aspects of the research. The quantitative portion of this investigation was designed to be confirmatory and is the dominant portion, with the purpose of the qualitative analysis being exploratory and complementary (Tashakkori & Teddlie, 1998). This study was experimental, as the students were randomly assigned to either the experimental group or the control group. The effectiveness of instruction, within the context of the Headsprout Reading Basics program, on early reading ability and oral language was assessed by comparing
achievement, as measured by gains shown from pre-testing to post-testing on the TERA-3 and the TOLD-3. The experimental group's performances on the TERA-3 and TOLD-3 were compared as a function of gender and number of minutes in the program.

Description of the Participants
This investigation was conducted with at-risk 4-year-old (as of September 1, 2004) students in two of the five Head Start preschool centers in a city on the east coast of Florida. For the purpose of this investigation, this population was more narrowly defined by using the description given by McGee and Richgels (2003), coupled with income qualification guidelines for the Head Start program (US Department of HHS, 2004). Therefore, being at-risk for reading difficulties included children whose families met poverty index guidelines (see Table 1), students with limited English proficiency, and students with a diagnosed disability. Because poverty is one of the most accurate predictors of low reading achievement (Chandler, 2000), it seemed prudent to situate this investigation within Head Start programs to study ways to improve the quality of early reading experiences for this at-risk group.

Table 1. U.S. Dept. of Health and Human Services (2004) Poverty Guidelines--Florida

Family Size   Family Income       Family Size   Family Income
1             $9,310-$12,489      5             $22,030-$25,209
2             $12,490-$15,669     6             $25,210-$28,389
3             $15,670-$18,849     7             $28,390-$31,569
4             $18,850-$22,029     8             $31,570-$34,749

Note. The poverty guidelines are updated periodically in the Federal Register by the U.S. Department of Health and Human Services under the authority of 42 U.S.C. 9902(2).


To control for differences in prior oral language and early reading skills, and to ensure a more equitable comparison between instructional conditions, the students were randomly assigned to groups. Initially, the probability sampling technique of two-stage sampling was used, as two Head Start centers were randomly chosen from the five that serve at-risk students in this Florida city. From this sample, a table of random numbers was used to assign the students from these two centers into either the experimental group or the control group. There were 31 students in the experimental group and 31 students in the control group, for a total of 62 preschool students. Based on the current enrollment of 282 students, the sample was 22% of the total population of 4- to 5-year-old students in Head Start programs in this city. This number of participants was deemed adequate because it provided acceptable statistical power (i.e., .82) for detecting a moderate difference (f² = .15; medium effect size; Cohen, 1988) between two groups at the (two-tailed) .05 level of significance. More specifically, the power of .82 was computed simultaneously for a 2-group multivariate analysis of variance (MANOVA) and a 2-group discriminant analysis with two outcome variables (i.e., oral language skills achievement and early reading skills achievement) because these two types of analyses yield the same power coefficient (Cohen, 1988). Demographic information for each child was gathered and is presented in Table 2. This demographic information includes age, gender, ethnicity, English as second language status, and exceptional student education status. The percentage of students on free or reduced-price lunch was not included because all students in the Head Start centers qualify for free or reduced-price lunch, an indication that low income was a homogeneous characteristic of all student participants.
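The reported power of .82 is not re-derived here, but the general logic of a two-group multivariate power analysis can be illustrated by simulation. The sketch below, written in Python, is a minimal and entirely hypothetical check: it draws two groups of 31 from a bivariate normal distribution with an assumed standardized mean difference of 0.55 on each outcome and an assumed outcome correlation of .46 (neither value is taken from the original power computation), runs a two-sample Hotelling's T² test (the two-group equivalent of the MANOVA), and reports the rejection rate.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, p, alpha, reps = 31, 2, 0.05, 5000
    effect = np.array([0.55, 0.55])              # assumed standardized group difference (illustrative)
    cov = np.array([[1.0, 0.46], [0.46, 1.0]])   # assumed correlation between the two outcomes

    def hotelling_t2_pvalue(x, y):
        # Two-sample Hotelling's T-squared test, converted to an exact F test
        nx, ny = len(x), len(y)
        diff = x.mean(axis=0) - y.mean(axis=0)
        pooled = ((nx - 1) * np.cov(x, rowvar=False) + (ny - 1) * np.cov(y, rowvar=False)) / (nx + ny - 2)
        t2 = (nx * ny) / (nx + ny) * diff @ np.linalg.solve(pooled, diff)
        f_stat = (nx + ny - p - 1) / (p * (nx + ny - 2)) * t2
        return stats.f.sf(f_stat, p, nx + ny - p - 1)

    rejections = sum(
        hotelling_t2_pvalue(rng.multivariate_normal(effect, cov, n),
                            rng.multivariate_normal(np.zeros(p), cov, n)) < alpha
        for _ in range(reps)
    )
    print(f"simulated power: {rejections / reps:.2f}")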


Table 2. Student Demographics

                                           Headsprout     Millie's Math House
Age in Months (M, SD)                      60.39 (3.71)   60.61 (3.77)
Gender (Male / Female)                     19 / 12        15 / 16
Race (Black / Hispanic)                    25 / 6         27 / 4
English as Second Language (Yes / No)      17 / 14        15 / 16
Exceptional Student Education (Yes / No)   3 / 28         0 / 31

The sampling technique for the qualitative portion of this investigation was random purposeful (Miles & Huberman, 1994), because those invited to participate in the qualitative portion of this investigation were the teachers and their assistants of the 4- to 5-year-old students whose Head Start centers were randomly chosen to participate in this investigation. Demographic information on five teachers and 5 of their 6 assistants was collected at the end of the investigation. This information included race, gender, number of years teaching pre-k, number of years teaching other ages, and type of degree or training (see Table 3). One teacher's assistant declined to participate in the interview, stating she did not like to be interviewed. Ten participants is an adequate sample size for a phenomenological study (Creswell, 2002).


Table 3. Teacher and Teacher Assistant Demographics

                                                    Teachers           Teacher Assistants
Race: Black / White                                 5 / 0              4 / 1
Gender: Female / Male                               5 / 0              5 / 0
Years of classroom experience with pre-k
  0-2 / 3-5 / 6-10 / 11-20 years                    0 / 1 / 4 / 0      1 / 1 / 2 / 1
Years of classroom experience with other age groups
  0-2 / 3-5 / 6-10 / 11-20 years                    4 / 0 / 0 / 1      4 / 0 / 1 / 0
Highest degree received:
  GED / HS Diploma / Associate's / Bachelor's       0 / 0 / 4 / 1      2 / 3 / 0 / 0

The researcher-to-participant relationship is best described as observer in the quantitative portion and as participant-observer during the qualitative portion. Details of the investigation were presented to the teachers and their assistants through an oral presentation, a handout (Appendix A), and the consent form (see Appendix G) approved by the Institutional Review Board (IRB) of a large, southeastern, public university. The principal investigator explained the research to parents through an IRB-approved consent


form (see Appendix G), and the Head Start family liaison translated information to Creole or Spanish when requested. Ethical Considerations The IRB at a large, southeastern, public university approved this study. Additionally, this research was conducted with the individual needs of the students in mind. For example, students who had difficulty sitting for prolonged periods of time were given periodic breaks. As another example of individualizing the intervention, the mice were adjusted to accommodate all left-handed students. I had planned to exclude students who did not demonstrate the prerequisite skills of controlling a mouse and following one-step directions, and offering the intervention to them at the same time as the control group. However, all students demonstrated the prerequisite skills to be included in this investigation. As developmental appropriateness is a salient issue for this age group, teachers were asked to report any concerns of off-task behavior to the principal investigator. With few exceptions, the individualized support, mastery criteria, and motivational devices of the Headsprout Reading Basics program served to keep the students on-task. The motivational devices of Millies Math House also served to keep the students in the control group on-task, with few exceptions. The teachers responded to those few exceptions of off-task behaviors by redirecting the student back to the program or giving praise if appropriate. On one occasion, a student was removed from the session due to disruptive behavior that had begun prior to the session. This student finished that episode later in the day. No other incidents required more than redirection back to the program. 63


In other procedures designed to protect the participants, all pre-test and post-test data were entered using assigned numbers rather than participants names, and secured in a locked file cabinet to maintain anonymity. Headsprout developers collected the following data (see Figure 6) and protected it through passwords. Individual performance reports, generated by the Headsprout program, were sent home with the students each Friday and parents were encouraged to contact the principal investigator with any concerns. Parents periodically provided positive comments, but no concerns were reported. Performance Data. Headsprout collects information directly from your child, via the Internet, in the form of the clicks that your child will make when completing an episode of the Reading Program. We refer to these clicks, and data on when your child starts and stops a lesson, as "Performance Data." We will use Performance Data to (1) measure your child's performance in each episode of the Headsprout Reading Basics Program and to adapt the Reading Program to his or her learning needs, (2) analyze your child's Performance Data, and provide you with periodic progress reports about your child's performance in the Reading Program, and (3) improve the Reading Program. In the event that we ever modify the Reading Program, or any other Headsprout products and services, such that the continued use of the Reading Program and other Headsprout products and services require the collection of information that is not Performance Data directly from your child, Headsprout will seek your authorization prior to collecting such additional information from your child. Headsprout may aggregate your child's Performance Data with the Performance Data of other children participating in the Reading Program for marketing and other business related purposes. Such aggregate information will be anonymous and will not identify your child. Retrieved from http://www.headsprout.com Figure 6. Headsprout performance data Another issue of concern is treatment of the control group. Critics of experimental research argue that it is unethical to withhold a treatment that might be beneficial to all 64


students, whereas others argue that randomized, experimental studies are the only ethical way to determine causation (Reyna, 2004). To address both of these issues, the intervention was available to the control group for eight weeks immediately following the conclusion of this investigation, in a delayed treatment model. Quality use of teacher and student time also is a salient ethical issue. To address quality of use of teacher and student time, the scope and sequence of the Headsprout Reading Basics program was reviewed with the teachers to help them identify the corresponding preschool Sunshine State Standards (SSS), which were approved by the Florida Board of Education in 1996 as the basis for quality programs in the state of Florida (FLDOE, 2005). Sessions were also scheduled to disrupt classroom practices minimally. All teachers reported no difficulties in incorporating this intervention into their daily lesson plans, and referred to the excitement of the students in supporting this intervention as a quality use of student time. A final ethical issue is related to the principal investigators position with Literacy Launchers, Inc., a non-profit organization founded by the principal investigator with a partner to support the preschool programs in one county in Florida. The principal investigator has a paid position as the curriculum specialist, and Literacy Launcher Inc. owns the mobile computer lab. This investigation did not include the entire population served by Literacy Launchers, Inc. However, other preschool providers may use the results to determine whether or not they will offer the Headsprout Reading Basics program to their students. To answer concerns about the potential for research bias in this situation, the principal investigator does not have a commitment to promote Headsprout Reading Basics. While it is conceivable that the principal investigator could have made 65


changes to the data to ensure significant results, such an action would be detrimental to the mission of Literacy Launchers, Inc. which is to provide curricula that are efficacious in increasing the oral language and early reading skills of preschool students. Curricula choices and implementations are guided by data, not perceived loyalties. Materials and Instruments Headsprout Reading Basics. Headsprout Reading Basics is an Internet-based, supplemental reading program for students in pre-k through second grade who are not yet reading or who are in the beginning stages of the reading process. Headsprout uses one-on-one, generative instruction to teach the alphabetic principle, the use of sound elements to decode words, print awareness, vocabulary, and deriving meaning from texts (FCRR, 2004; Layng et al., in press). Headsprout Reading Basics was the intervention provided to the experimental group in this investigation. Millies Math House. Millies Math House is a pre-k-2 software program that introduces and builds fundamental early math skills (e.g., numbers, shapes, counting, sizes, patterns, quantities, sequences, addition and subtraction). The software uses spoken and graphic instructions to allow pre-readers and early readers to explore the program. The explicitness of the directions and the on-screen guides promote independence in the use of this software. Millies Math House was the intervention provided to the control group in this investigation. Structured open-ended interview protocol. The structured, open-ended interview protocol was developed by the principal investigator to gather information about the perceptions of the teachers and their assistants regarding instruction, within the context of Headsprout Reading Basics, after first-time implementation (refer to Appendix C). 66


Information about the field-testing and subsequent revision of this protocol is provided in the Qualitative Procedures section of this chapter. Test of Early Reading Ability-3rd Edition (TERA-3). The TERA-3 is a direct measure of childrens mastery of early developing reading skills. The subtests include: (a) alphabet: measuring knowledge of the alphabet and its uses; (b) conventions: measuring knowledge of the conventions of print; and (c) meaning: measuring the construction of meaning from print. An overall Reading Quotient is computed from the scores of the three subtests. Three of the five identified purposes of the TERA-3 are: (a) to document progress as a result of early reading intervention, (b) to serve as a measure in research studying reading development in young children, and (c) to accompany other assessment techniques. These three purposes guided the use of the TERA-3 in this investigation. The other identified purposes are: (a) to identify children who are significantly below their peers in reading development, and (b) to identify strengths and weaknesses of individual children (Reid, Hresko, & Hammill, 2001). The last two purposes were not addressed in this investigation. Internal consistency score reliability has been found to range from .81 to .96 and test-retest reliability to range from .77 to .92 (FCRR, 2004). The TERA-3 test developers estimated the concurrent validity using the Test of Early Reading Ability-2, the Stanford Achievement Test-9, and the Woodcock Reading Mastery Test-Revised. A Buros reviewer (DeFur, 2003) concluded that the TERA-3 authors provide convincing evidence that the TERA-3 is a psychometrically sound measure of early reading ability. Test of Language Development-Primary: 3rd Edition (TOLD-3). The TOLD-3 is an individually administered, norm-referenced test designed to assess the oral language 67


competence of children 4-0 through 8-11 years of age. The six core subtests measure semantics and syntax and three supplemental subtests measuring phonology (Rathvon, 2004). Internal consistency score reliability has been found to range from .78 to .94, and test-retest reliability has been found to range from .77 to .90 (Rathvon, 2004). Concurrent validity with the Bankson Language Test-Second Edition has been documented as ranging from .50-.97 (FCRR, 2004; Rathvon, 2004). Mobile computer lab. The mobile computer lab is a retrofitted school bus with 18 computers, small chairs, reduced-size mice, Internet access, and Macromedia Flash plug-in availability. Quantitative Procedures Prior to the intervention stage of this investigation, the five preschool teachers and six teachers assistants were trained to implement the interventions for both the experimental (Headsprout Reading Basics) and the control (Millies Math House) groups (refer to Appendix D). As previously mentioned, one of the assistants declined to be interviewed, but was trained and assisted a teacher in implementing the program on a few occasions. The principal investigator conducted the training on two separate days at each of the two sites to accommodate all of the participants schedules. The training consisted of oral explanations, modeling, and guided teacher practice. Teachers also were given access to the Headsprout Reading Basics episodes and the Millies Math House software for review prior to their students reaching each episode. Teachers were trained to respond to technology issues (e.g., volume adjustments), to access and decipher reports, and to intervene and redirect (i.e., use minimum of amount of gesturing or gentle physical guidance to return student to engagement in task) when necessary. For reference 68


purposes, teachers and teachers assistants also were given a copy of the implementation checklists that were used to monitor implementation integrity (refer to Appendix F). Prior to beginning the intervention, both groups of students were pre-tested by the principal investigator using the TERA-3 and the TOLD-3. The intervention was provided on the mobile computer lab. A teacher or assistant brought the students to the mobile computer lab. On the first two days, the teacher helped the students find their computers (i.e., the one with their name above a large arrow on the screen of the monitor), put their headphones on, and begin their programs. After the first two days, all students were able to find their computers, put their headphones on, and begin their programs independently. In the Headsprout Reading Basics program, students interacted with characters in the environments of Space World, Dinosaur World, Undersea World, and Jungle World. Great Job! and You did it! illustrate praise statements students received from characters such as San, a spaceman, and Lee, a dinosaur. Character names also provide an opportunity for students to learn that words that are unfamiliar to them also have meaning. As an illustration of an exercise to develop phonemic awareness, students hear letter sounds, then select corresponding visual stimuli and hear the sound again as confirmation of the correct choice. Headsprout Reading Basics begins with very consistent letters and sounds such as ee, v, and an. Students receive instruction on the alphabetic principle, decoding strategies, print awareness, vocabulary, and deriving meaning from texts. Students are encouraged to respond orally as well as with the mouse and teachers and teacher assistants provided praise for these responses. Upon completion of an episode, the teacher or assistant gave the student a sticker (their progress maps were 69


updated in their classrooms on Fridays). When there was a story to accompany the completed episode, the teacher or teacher assistant sat with the student and had the student read the story to them. Students were given these stories at the end of the day to take home and read with their families. The experimental group then received 30 minutes of daily instruction in the Headsprout Basic Reading program for an 8-week period. In an attempt to prevent resentful demoralization (Martella et al., 1999) of the control group, they received 30 minutes of daily numeracy instruction on the computers via Millies Math House program. Millies Math House uses cartoon characters to build fundamental early math skills (e.g., numbers, shapes, counting, sizes, patterns, quantities, sequences, addition and subtraction). Implementation integrity was measured using separate 10-item procedural checklists for Headsprout Reading Basics and Millies Math House (refer to Appendix F). Two teachers, who use the program in their classrooms, reviewed the checklist for Millies Math House. Both stated they believed the checklist covered the necessary steps to implement the program, and suggested no changes. The checklist for Headsprout Reading Basics was reviewed by one of the developers of the program who stated that she approved of it and planned to use pieces of it (J.S. Twyman, personal communication, September 6, 2005). No changes were suggested at the time of this contact. I scored each item as either present or absent. Throughout the investigation, these implementation integrity assessments were conducted 10 times for each program at each site, and inter-rater reliability data were calculated on 3 of these occasions at each site for each program. The second rater was a 70


retired teacher who was familiar with both Millies Math House and Headsprout Reading Basics. The checklists were reviewed with her and a practice session conducted where examples and non-examples were discussed. Procedural checklists yielded a percentage of items implemented using the formula: number of items present divided by the number present and absent X 100. Results are discussed in Chapter 4. At the end of the 8-week period, both groups were post-tested using the TERA-3 and TOLD-3. A 9-week period was originally chosen because it accommodates the school year schedule and because the Headsprout Reading Basics program can be completed in eight weeks if implemented every day for a 20-30-minute period. An additional week was originally being added to accommodate those students who missed instruction due to absences or who did not meet mastery criterion for certain lessons and needed to repeat those lessons. However, due to scheduling conflicts with graduation practice and end-of-the-year field days, only eight weeks of intervention were provided. The principal investigator was present the entire time the students were on the computers and observed and provided brief feedback to the teachers and assistants after each session for both the experimental and control groups to promote procedural integrity (i.e., the degree to which the programs were implemented as intended). Students in the control group were offered the intervention, in a deferred structure, during the summer at the same sites as this investigation. Students in the experimental group who did not complete the program in the initial 8-week period were allowed to continue during the summer period if their parents so chose. One student in the experimental group finished the Headsprout Reading Basics episodes during the 8-week period; therefore, she received instruction through Millies Math House software 71


for a few days. The post-testing, however, was completed after the initial 8-week intervention stage. Quantitative Data Analysis The data analyses for this project were generated using SAS/STAT software, Version 9.12 of the SAS System for Unix. A one-way (two-group) MANOVA was conducted to examine the difference between the experimental group and the control group as a function of oral language achievement and early reading skills achievement. The α = .05 level of significance for statistical tests was used. Prior to conducting the MANOVA, the relationship between the two variables and the assumptions of multivariate normality and homogeneity of the variance-covariance matrix involving the two variables of interest were assessed (cf. Stevens, 2002). Because a statistically significant main effect was found for the MANOVA, a discriminant analysis was conducted as a follow-up to determine which outcome variables best distinguish the experimental and control groups. A corrected effect size associated with the MANOVA, as measured by η², was reported and interpreted for all statistically significant findings. Results are reported in Chapter 4. My third, fourth, fifth and sixth research questions inquired about the effects gender and minutes in the program have on the dependent measures. To answer these research questions, TERA-3 and TOLD-3 gain scores from the experimental group were entered into a 2 (male vs. female) x 4 (280-375 minutes vs. 376-470 minutes vs. 471-565 minutes vs. 566-660 minutes in the program) factorial ANOVA design. Factorial designs are used to assess the effects of two or more independent variables, or the interaction of participant characteristics with the independent variable (Martella et al., 1999). The four equal partitions of time were chosen to ensure representation in each partition. However, the sample sizes in each partition were small. Two separate post hoc regression analyses, using time as a continuous variable, were undertaken to examine further the effect that number of minutes in the program had on achievement. Results are reported in Chapter 4. Qualitative Procedures The qualitative portion of this research design situated this investigation within a more holistic perspective (Patton, 1990) of the phenomenon (i.e., the first-time implementation of an Internet-based supplemental reading program). Data were collected on teachers' and their assistants' perceptions of instruction provided through the Headsprout Reading Basics program. To address the qualitative question, a structured open-ended interview (Patton, 1990) was developed by the principal investigator. The primary purpose of this interview protocol (see Appendix C) was to gather information about the perceptions of teachers and their assistants regarding instruction through the Headsprout Reading Basics program, after first-time implementation. The interview protocol (refer to Appendix B) was field-tested at another preschool in the same city as this investigation. At the time of this field-testing, the four interviewees, two teachers and their assistants (not included in the present study), at this preschool had been providing Headsprout Reading Basics to their preschool students for nine weeks. One class was a Head Start classroom and the other was not. Types of questions in the interview included (a) experience/behavior, (b) opinion/values, (c) feeling, (d) knowledge, and (e) background/demographic (Patton, 1990). Exact wording


and sequence of questions were determined in advance and all interviewees were asked the same questions in the same order. The questions were presented in an open-ended format to solicit rich data. The strengths of this structured type of interview instrument were (a) it allowed respondents to answer the same questions, thereby increasing the comparability of responses and reducing interviewer effects and biases; (b) it permitted evaluation users to see and review the instrumentation; and (c) it facilitated organization and analysis of the data (Patton, 1990). Weaknesses were (a) there was little flexibility in relating the interview to a particular individual and circumstances, and (b) standardized wording of questions may have constrained the naturalness and relevance of the answers (Patton, 1990). The carefully designed structure of this interview should have improved content-related validity (Patton, 1990) by ensuring that the significant information was elicited pertaining to teachers' and their assistants' perceptions of the Headsprout Reading Basics program after first-time implementation. During the field-testing of the interview protocol, it was determined that demographic information would be more easily collected on a written form (see Appendix D). Question 1 (Do you think Headsprout Reading Basics increases expressive and receptive oral language skills, and if so, how?) and Question 2 (Do you think Headsprout Reading Basics increases alphabetic skills? Print awareness skills? Phonological awareness skills, and if so how?) asked about several specific skills, yet elicited little differentiation among these skills, and caused confusion. Question 1 became Based on your interactions with the program and the monitoring of your students, do you think Headsprout Reading Basics helped develop your students' oral language skills


and if so, how?; and Question 2 became Based on your interactions with the program and the monitoring of your students, do you think Headsprout Reading Basics helped develop your students early reading skills and if so, how? Question 5 (What activities were left out of your day due to the addition of Headsprout Reading Basics?) generated little response regarding the children and also was changed to What activities, if any were left out of the childrens day due to the addition of Headsprout Reading Basics? Question 7 (What comments, if any, did you hear from the childrens parents regarding their childs involvement in Headsprout Reading Basics?) resulted in the retelling of some of the childrens quotes. Therefore, Question 7 became two questions asking about both childrens statements and their parents statements. In the current investigation, individual interviews were conducted at the end of the 8-week intervention period. Best and Kahn (1993) state that the key to effective interviewing is to establish rapport. Rapport was built through the initial training (refer to Appendix E) of teachers as well as by being on site and conducting daily observations and providing daily feedback. The principal investigator was also available to help troubleshoot as needed. The principal investigator was the only one conducting the interviews, which should have amounted to greater consistency of procedures. The teachers and their assistants who implemented the Headsprout Reading Basics program were asked to participate in these individual interviews. Each interview lasted 25-30 minutes and remained informal. Interviews were held in locations identified by respondents as being comfortable for them. Some were held in the teacher workroom, others in the lounge, and others at a picnic table. The interviews occurred at a time where there were few interruptions to allow for continuity, confidentiality, and thoroughness. 75


The principal investigator conducted all 10 interviews and there were no disruptions or incidents that caused any difficulty with data collection. Each interview was audiotaped and tapes were subsequently transcribed verbatim, then reviewed and corrected. Additionally, two external coders, both experienced teachers, read the qualitative research question, qualitative procedures, and data analysis sections. The purpose of the current investigation and the qualitative procedures also were explained to them. Each external coder was provided with a copy of the original transcripts. Qualitative Data Analysis A methodology of grounded theory and progressive focusing (Glaser & Strauss, 1967) was used to examine and code the responses, and to form categories, across questions, to describe these responses (Strauss & Corbin, 1998). Categories for the qualitative questions were descriptive and interpretive (Miles & Huberman, 1994). Categories were specified a posteriori. As explained by Onwuegbuzie and Teddlie (2003), in the a posteriori case, categories were created after all data had been collected. Ethnograph,, version 5.08, a software program for computer-assisted analysis of text-based data was used to store the transcripts, code and index text units, and establish and refine categories. Data analysis was undertaken in a recursive, iterative manner and revisions and consolidations were made. Responses of yes and no provided no information other than affirmation of categories and indicators already listed, so they were put aside as unusable data. This process produced a model of categories, indicators, and illustrative quotes. Trustworthiness was verified throughout the analysis process using a variety of strategies. These included (a) verbatim transcripts, (b) member checks, and (c) coding 76


checks. These categories, indicators, and illustrative quotes were shared with one teacher and one teacher assistant for the purpose of member checking. Ideally, these would have been reviewed with each teacher and assistant. However, because the interviews were conducted at the end of the school year, access to teachers and assistants was difficult. The availability of one teacher and one assistant provided a member check of 20% of the interview responses. The teacher and assistant provided affirmation for my model and made no suggestions for change. The two external coders were subsequently used as sources of verification to substantiate the categories, indicators and illustrative quotes. A few consolidations of categories were recommended and revisions were made. Initial categories of Early Reading Skills and Oral Language Skills became Skill Acquisition, and Student Outcomes and Teachers Assessments became Measuring Success. Further explanation and results are displayed, using an across-site summarizing table (Miles & Huberman, 1994), in Chapter 4. Combined Quantitative and Qualitative Data Analysis Both quantitative and qualitative data were collected and analyzed in a concurrent, explanatory design (Creswell, Clark, Gutmann, & Hanson, 2003). Data types were analyzed in a complementary manner with sequential collection and analysis. An independent sequential analysis was employed in that the results of student achievement, as measured by the scores on the TERA-3 and TOLD-3, did not inform teacher interview responses. Analysis of data for this investigation use a mixed methodology framework congruent with Tashakkori and Teddlies (1998) recommendation of combining the 77


qualitative and quantitative approaches into the research methodology of a single study or multiphase study. According to Onwuegbuzie and Teddlie (2003), the reason for this type of data analysis is both representation and legitimization. The former is to cull sufficient information about the effects of the Headsprout Reading Basics program on both students and teachers, whereas the latter is concerned with validity and trustworthiness. To assist the reader in understanding how these data might be interpreted, the principal investigator acknowledges the belief that well-designed curricula coupled with sound, engaging instruction are critical needs in preschool settings. As mentioned previously, the principal investigator used a journal to record observers comments throughout the course of this investigation. Chapter 4 contains the results of this investigation, which includes a summary of salient, recurring comments pertaining to researcher involvement that may have contributed to researcher bias. 78


Chapter 4 Results The purpose of this investigation was to determine the effects of instruction, through the Headsprout Reading Basics program on the oral language and early reading skills of at-risk 4-5 year-old preschool students. Additionally, the effects that gender and number of minutes in the program had on oral language and early reading skills also were examined. Finally, interviews were conducted to discern the teachers and their assistants perspectives of instruction, within the context of the Headsprout Reading Basics program, after first-time implementation. This chapter presents the description and analysis of the data that were collected. Treatment of Data Pre-tests. Prior to beginning the intervention, both groups were pre-tested using the TERA-3 and the TOLD-3. Testing conditions were conducive and similar for all students. All tests took place in a well-lit, quiet room in each of the two centers, with no disruptions. Following the explicit guidelines provided in the examiners manual of both the TERA-3 and the TOLD-3 minimized researcher influence on these test results. After the pre-tests were conducted, they were scored and recorded on a master log. In order to control for differences in prior oral language and early reading skills and ensure a fair comparison between instructional conditions, the students were randomly assigned into groups. However, in order to examine whether or not this random assignment produced equal groups at baseline, I conducted a one-way (two-group) multivariate analysis of 79


variance (MANOVA) to examine the difference between the experimental and control groups as a function of achievement as measured by the pretest scores on the TERA-3 and the TOLD-3. Prior to conducting the MANOVA, I tested the three assumptions that should be met. Because the pre-tests were administered individually in a secluded setting, it was assumed that the independence of vectors assumption had not been violated. The second assumption is homogeneity of the variance-covariance matrix. To assess this assumption, I conducted Box's M test. The resulting Chi-Square value of 4.43, p > .05, was not statistically significant, suggesting that there was not a violation of this assumption. The final assumption is that of multivariate normality. The skewness and kurtosis values for the TERA-3 pre-tests (skewness = .30, kurtosis = -.45) and the TOLD-3 pre-tests (skewness = -.07, kurtosis = -.26) were within normal limits (Lei & Lomax, 2005). Multivariate normality also was assessed. For Group 1, n = 31, the observed multivariate skewness value of 0.0130 is less than the 95% upper percentile for b1,p = 1.687. For n = 31, the observed multivariate kurtosis value of 6.6732 is between the lower 2.5% percentile for b2,p = 5.855 and the upper 97.5% percentile for b2,p = 10.156. With respect to multivariate outliers, the largest observed Di is 5.4839, which is less than the 95% upper percentile (for a test of a single multivariate outlier) value of 10.58 for n = 30. No multivariate outlier is indicated. For Group 2, n = 31, the observed multivariate skewness value of 1.6477 is less than the 95% upper percentile for b1,p = 1.687. For n = 31, the observed multivariate kurtosis value of 8.17932 is between the lower 2.5% percentile for b2,p = 5.855 and the upper 97.5% percentile for b2,p = 10.156. With respect to multivariate outliers, the largest observed Di is 6.9470, which is less than the 95% upper percentile (for a test of a single multivariate outlier) value of 10.58 for n = 30. No multivariate outlier is indicated. Based on these data, it was assumed that this assumption was not violated. Having met all the assumptions, the MANOVA was conducted using the α = .05 level of significance for statistical tests. No statistically significant difference in means on the set of pre-tests was found (F [2, 59] = 1.53, p > .05, Wilks' Λ = .9507). It was therefore indicated that there was equality of groups at baseline, with respect to the skills measured on the TERA-3 and the TOLD-3. The pre-tests were then stored in a locked file cabinet. Classroom literacy activities. Teachers' lesson plans were collected throughout the course of the investigation. The principal investigator acknowledges the inevitable variability in the nature and quality of literacy experiences offered in the five different classrooms. Scheduling conflicts prevented the principal investigator from observing classroom literacy practices. However, in an attempt to provide a systematic and focused overview of the classroom literacy experiences, the principal investigator reviewed the five teachers' lesson plans during the eight weeks of intervention. A summary of classroom literacy activities that may have affected student achievement is presented (see Table 4). Categories were culled from the skills that are tested by the TERA-3 and the TOLD-3. Categorizing the literacy activities from the lesson plans yielded a percentage of activities implemented using the formula: number of weeks the activity was present divided by the total number of weeks x 100 (i.e., N/8 x 100).
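As a minimal sketch of the N/8 x 100 computation just described, the Python fragment below applies it to hypothetical weekly flags for one classroom and one activity category; the flag values are invented for illustration and are not the study data.

    weeks_with_activity = [1, 1, 0, 1, 1, 1, 0, 1]  # hypothetical flags for the 8 intervention weeks
    percentage = sum(weeks_with_activity) / len(weeks_with_activity) * 100  # N/8 x 100
    print(f"{percentage:.0f}% of weeks")  # prints 75% for this example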


Table 4. Description of Classroom Literacy Activities

Percentage of Weeks (N/8 x 100) Activities Were in Lesson Plans

Activity pertaining to:       Classroom 1  Classroom 2  Classroom 3  Classroom 4  Classroom 5
Alphabet and its functions    75           75           88           88           88
Phonological awareness        75           63           100          75           100
Conventions of print          100          100          100          100          100
Oral language development     100          100          100          100          100
Finding meaning               100          100          100          100          100

Implementation integrity. The experimental group received 30 minutes of daily instruction in the Headsprout Reading Basics program for an 8-week period, while the control group received 30 minutes of daily numeracy instruction via the Millie's Math House program. A teacher or assistant brought the students to the mobile computer lab. On the first two days, the teacher helped the students find their computers (i.e., the one with their name above a large arrow on the screen of the monitor), put their headphones on, and begin their programs. After the first two days, all students were able to find their computers, put their headphones on, and begin their programs independently. Throughout the course of the 8-week intervention, implementation integrity was measured using separate 10-item procedural checklists for Headsprout Reading Basics and Millie's Math House (refer to Appendix F). Implementation integrity assessment observations were conducted 10 times throughout the investigation at each site. Inter-rater reliability data were calculated on three of these occasions at each site. Procedural checklists yielded a percentage of items implemented using the formula: number of items present divided by the number present and absent x 100. Implementation integrity percentages for Headsprout Reading Basics ranged from 60% to 90% with a mean of 77%. Implementation integrity percentages for Millie's Math House ranged from 60% to 100% with a mean of 78%. Inter-rater reliability was measured with the kappa coefficient, which represents the proportion of agreement obtained after removing the proportion of agreement that could be expected to occur by chance. Inter-rater reliability was measured for 12 of the 40 sessions, or 30% of the sessions where implementation integrity was assessed. Kappa coefficients were computed for six sessions of the Headsprout Reading Basics implementation and ranged from .55 to .74. Kappa coefficients were computed for six sessions of the implementation of Millie's Math House and ranged from .62 to .74. These coefficients suggest moderate to substantial inter-rater reliability (Landis & Koch, 1977). Kappa coefficients for 4 of the 6 Headsprout Reading Basics sessions and 5 of the 6 Millie's Math House sessions were above the .70 level that is considered satisfactory (Cohen, 1988). Based on the implementation integrity percentages and the kappa coefficients, it is reasonable to expect that implementation integrity was adequate for the purposes of this investigation.
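A brief sketch of how the integrity percentage and the kappa coefficient could be computed for one observed session is shown below. The two raters' checklist scores are hypothetical, and scikit-learn's cohen_kappa_score is used simply as one readily available implementation of Cohen's kappa; it is not the software reported in the study.

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical 10-item checklist scores for one session: 1 = item present, 0 = absent
    rater_1 = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # principal investigator
    rater_2 = [1, 1, 0, 0, 1, 1, 0, 1, 1, 1]  # second rater

    integrity_pct = sum(rater_1) / len(rater_1) * 100  # items present / (present + absent) x 100
    kappa = cohen_kappa_score(rater_1, rater_2)        # agreement corrected for chance agreement
    print(f"implementation integrity: {integrity_pct:.0f}%, kappa: {kappa:.2f}")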


Observation journal. Throughout the intervention stage of this investigation, the principal investigator's involvement and thoughts pertaining to the implementation of the two programs were documented in an attempt to identify potential influential actions or statements. It was extremely difficult for the principal investigator to avoid interacting with the students, particularly after a successful interaction that they were anxious to share. I attempted to interact with both groups in a similar manner. A summary of the nature of salient, recurring comments and interventions that may have contributed to bias is presented in Table 5.

Table 5. Synopsis of Observer Involvement

Comments to teachers or assistants:
- Primarily reserved comments for brief review after sessions
- When asked a direct question, referred the participant to the checklist
- Exceptions occurred occasionally when students were being ignored; brought these incidents to the teacher's attention
- Other exceptions occurred when teachers or assistants matched student excitement levels; praise provided to the teacher/assistant

Comments to students:
- Waited for teachers to provide praise during sessions, but smiles and thumbs up were given after teacher response
- Provided praise and questions before and after sessions
- Interacted with a child in a few emergency situations when the child had to use the restroom or needed a band-aid
- Let children read their books to me after the session if they asked, and subsequently provided praise

Interventions:
- Provided direct assistance in conducting Benchmark Assessments; initial training was not sufficient
- Provided brief feedback after sessions, referring to the checklists
- Went in and retrieved the class and teacher when they were more than 10 minutes late
- Provided chocolate to teachers and animal crackers to students on Fridays

Post-tests. At the end of the 8-week period, both groups were post-tested using the TERA-3 and TOLD-3. Post-tests were then scored and the results recorded onto a master log by the principal investigator. Data for each student were recorded and tracked by student identification numbers. The post-tests were then locked in a file cabinet until needed for assessment of internal consistency, and to provide achievement data to be used in answering the research questions. Results for Research Questions 1 and 2 Research question 1 was: What is the difference in achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA-3), between students who receive instruction through the Headsprout Reading Basics program and students who do not receive this instruction? Research question 2 was: What is the difference in achievement in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), between students who receive instruction through the Headsprout Reading Basics program and students who do not receive this instruction? To answer these two questions, a one-way (two-group) multivariate analysis of variance (MANOVA) was conducted to examine the difference between the experimental group and the control group as a function of early reading and oral language skills achievement. An α = .05 level of significance for statistical tests was used. Additionally, the means and standard deviations of the two groups are presented for inspection. Prior to conducting the MANOVA, the relationship between the two variables (i.e., gains on the TERA-3 [TEGAINS] and gains on the TOLD-3 [TOGAINS]) was examined. As measured by the Pearson correlation coefficient, the correlation between gains on the TERA-3 and gains on the TOLD-3 was moderate (r = .46). This moderate correlation suggests that gains in early reading skills, as measured by the TERA-3, and gains in oral language skills, as measured by the TOLD-3, tend to increase together. Based on this moderate correlation between the two variables, assumptions were tested.
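The correlation between the two gain-score variables amounts to a single Pearson computation. In the sketch below the arrays are randomly generated stand-ins for TEGAINS and TOGAINS (constructed only so that they correlate positively); they are not the study data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    tegains = rng.normal(5.2, 7.9, 62)                 # stand-in TERA-3 gain scores
    togains = 0.46 * tegains + rng.normal(0, 7.0, 62)  # stand-in TOLD-3 gain scores
    r, p_value = stats.pearsonr(tegains, togains)      # Pearson correlation coefficient and p-value
    print(round(r, 2), round(p_value, 4))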


Three assumptions should be met before conducting a MANOVA. The first assumption is that of independence of vectors, or, that the participants are responding to the assessments independently of one another. Because the tests were administered individually in a secluded setting, it was assumed that this assumption had not been violated. The second assumption is homogeneity of the variance-covariance matrix. To assess this assumption, Box's M test was conducted. The Chi-Square value of 10.65 was significant (p = .0138), suggesting a violation of this assumption. Box's M test is extremely sensitive to violations of the assumption of normality, which may have contributed to the significant p-value (Hakstian, Roed, & Lind, 1979). To further examine homogeneity of variance-covariance, the variance of each group on each gain score was examined (TEGAINS group 1, s² = 57.59; TEGAINS group 2, s² = 30.61; TOGAINS group 1, s² = 53.60; TOGAINS group 2, s² = 19.68) and variance ratios were computed (TEGAINS = 1.88, TOGAINS = 2.72). Because the sample sizes were equal and the variance ratios were less than 3:1 (Stevens, 2002), it was concluded that the violation of this assumption was nonconsequential. The final assumption was that of multivariate normality. Each variable was presented in terms of mean, standard deviation, skewness, and kurtosis (see Table 6). Most researchers tend to categorize skewness and kurtosis absolute values of less than 1.0 as acceptable (Lei & Lomax, 2005).
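The variance-ratio and univariate normality checks described above reduce to a few one-line computations. The sketch below applies them to simulated stand-in TEGAINS scores whose group standard deviations roughly match the variances reported above; the scores themselves are invented.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    tegains_exp = rng.normal(9.55, 7.59, 31)  # stand-in TEGAINS, experimental group
    tegains_ctl = rng.normal(0.84, 5.53, 31)  # stand-in TEGAINS, control group

    # Variance ratio used to judge homogeneity (rule of thumb: acceptable below 3:1 with equal n)
    variances = sorted([tegains_exp.var(ddof=1), tegains_ctl.var(ddof=1)])
    print("variance ratio:", round(variances[1] / variances[0], 2))

    # Univariate skewness and excess kurtosis (absolute values under about 1.0 treated as acceptable)
    pooled = np.concatenate([tegains_exp, tegains_ctl])
    print("skewness:", round(stats.skew(pooled), 3), "kurtosis:", round(stats.kurtosis(pooled), 3))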


Table 6. Univariate Normality of TEGAINS and TOGAINS

           TEGAINS    TOGAINS
Mean       5.1935     6.6452
SD         7.9152     7.4374
Skewness   0.4852     0.6699
Kurtosis   -0.2796    -0.2231

The skewness and kurtosis values for TEGAINS (skewness = 0.4852 and kurtosis = -0.2796) and for TOGAINS (skewness = 0.6699 and kurtosis = -0.2231) are within normal limits. However, TOGAINS appeared to have one outlier. Data pertaining to this potential outlier were checked and were found to be correct. To address this concern, a formal test of statistical significance was conducted by computing standardized skewness and kurtosis coefficients, that is, the ratio of each skewness and kurtosis coefficient to its standard error (please refer to Table 7).

Table 7. Standardized Skewness and Kurtosis Coefficients

                                    TEGAINS    TOGAINS
Standardized Skewness Coefficient   1.5596     2.1533
Standardized Kurtosis Coefficient   -0.4494    -0.3586
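The standardized coefficients in Table 7 can be reproduced by dividing each coefficient by its conventional large-sample standard error, sqrt(6/n) for skewness and sqrt(24/n) for kurtosis, with n = 62. The brief check below assumes those standard-error formulas; with the TEGAINS values from Table 6 it returns approximately 1.56 and -0.45, in line with Table 7.

    import math

    n = 62                                  # both groups combined
    skewness, kurtosis = 0.4852, -0.2796    # TEGAINS values from Table 6
    std_skewness = skewness / math.sqrt(6 / n)
    std_kurtosis = kurtosis / math.sqrt(24 / n)
    print(round(std_skewness, 4), round(std_kurtosis, 4))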


Standardized skewness and kurtosis coefficients within ±2 suggest no serious departures from normality. Coefficients outside this range, but within the ±3 boundary, signify slight departures from normality (Onwuegbuzie & Daniel, 2003). Based on these guidelines, the standardized skewness and kurtosis coefficients for TEGAINS and the standardized kurtosis coefficient for TOGAINS suggested no serious departures from normality. The standardized skewness for TOGAINS suggested a slight departure from normality. With the exception of the outlier, there does not appear to be a major deviation from multivariate normality. However, as a precautionary measure, the MANOVA was conducted twice, once with the complete data and the second time with the outlier observation removed. The conclusions for the study were the same in both cases and will be discussed further with the presentation of the MANOVA data. Based on these data, it was concluded that the outlier did not have undue influence on the data and that this assumption was not violated. Test score reliability. Because the gain scores on the TERA-3 and the TOLD-3 were the dependent measures, it was important to examine the degree of homogeneity among the items on each of those tests. To determine this homogeneity, the internal consistency of the test items was computed for each of the two tests. Internal consistency demonstrates the extent to which the items correlate with one another and is computed using Cronbach's coefficient alpha method (Reid et al., 2001). Internal consistency results for pre- and post-tests are shown in Table 8.
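Cronbach's coefficient alpha is computed from the item variances and the variance of the total score. The function below is a generic sketch applied to a toy matrix of dichotomously scored items; it is not the TERA-3 or TOLD-3 item data, which are not reproduced here.

    import numpy as np

    def cronbach_alpha(item_scores):
        # item_scores: 2-D array with one row per examinee and one column per item
        item_scores = np.asarray(item_scores, dtype=float)
        k = item_scores.shape[1]
        item_variances = item_scores.var(axis=0, ddof=1).sum()
        total_variance = item_scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_variances / total_variance)

    rng = np.random.default_rng(5)
    ability = rng.normal(size=(50, 1))                                        # toy examinee ability
    items = (ability + rng.normal(scale=1.0, size=(50, 10)) > 0).astype(int)  # 10 toy 0/1 items
    print(round(cronbach_alpha(items), 3))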


Table 8. Internal Consistency of the TERA-3 and TOLD-3 (Cronbach's Coefficients)

             TERA-3                    TOLD-3
             Pre-test    Post-test     Pre-test    Post-test
             .8555       .8911         .9328       .9199

Cronbach's coefficients reported in Table 8 are raw coefficients based on item correlation. Correlations of .70 and above suggest that both the TERA-3 and the TOLD-3 tests are consistent, and additionally, the higher the alpha, the more consistent the test (Nunnally, 1978). These results indicate that the test items are very similar to each other in content. However, caution should be taken when interpreting these coefficients because of the effect the stop rules of each test may have had on internal consistency. Once a child misses three questions in a row on the TERA-3, and five questions in a row on the TOLD-3, the remaining items on that subtest are scored as zeros. The reasoning behind this procedure is that because the test items become increasingly more difficult as a student progresses through each subtest, it is assumed that a student who cannot answer the less difficult questions correctly also will not be able to answer the more difficult ones correctly. For this young age group (4- to 5-year-olds) in particular, Cronbach's coefficients may be inflated. Despite this inflation, the measures indicate the tests are consistent. Null Hypotheses Null Hypothesis 1. There is no difference in achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA-3), between students who receive instruction through the Headsprout Reading Basics program and students who do not receive this instruction. Null Hypothesis 2. There is no difference in achievement in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), between students who receive instruction through the Headsprout Reading Basics program and students who do not receive this instruction. MANOVA. As previously stated, to control for differences in prior oral language and early reading skills and ensure an equitable comparison between instructional conditions, the students were randomly assigned into groups. Having addressed the assumptions of this model, a one-way (two-group) multivariate analysis of variance (MANOVA) was conducted to examine the difference between the experimental group and the control group as a function of oral language achievement and early reading skills achievement. The α = .05 level of significance for statistical tests was used. The MANOVA was conducted twice, once with all data and a second time with the outlier removed. Results are shown for both calculations (see Table 9).

Table 9. Results of MANOVA of TEGAINS and TOGAINS

Data Set                 Wilks' Lambda    F        p
All test scores          .5521            23.93    .0001
TOLD outlier removed     .4998            29.02    .0001
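For reference, a two-group MANOVA of this form can be fit with the statsmodels library. The sketch below builds a small data frame of simulated stand-in gain scores (group means and standard deviations are taken from the descriptive results in Table 10, and the correlation between outcomes is assumed to be .46) and prints the multivariate tests, including Wilks' lambda; it is an illustration of the analysis, not a re-analysis of the study data.

    import numpy as np
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    rng = np.random.default_rng(6)
    corr = np.array([[1.0, 0.46], [0.46, 1.0]])  # assumed correlation between the two gain scores

    exp = rng.multivariate_normal([9.55, 11.00], corr * np.outer([7.59, 7.32], [7.59, 7.32]), 31)
    ctl = rng.multivariate_normal([0.84, 2.29], corr * np.outer([5.53, 4.44], [5.53, 4.44]), 31)

    df = pd.DataFrame(np.vstack([exp, ctl]), columns=["TEGAINS", "TOGAINS"])
    df["group"] = ["Headsprout"] * 31 + ["control"] * 31

    manova = MANOVA.from_formula("TEGAINS + TOGAINS ~ group", data=df)
    print(manova.mv_test())  # reports Wilks' lambda, Pillai's trace, etc. for the group effect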


The difference in means on the set of achievement tests was found to be statistically significant with all data included (F [2, 59] = 23.93, p < .0001, Wilks' Λ = .5521) and with the outlier removed (F [2, 58] = 29.02, p < .0001, Wilks' Λ = .4998). Because a statistically significant difference was found, univariate results were examined. Because removal of the outlier did not influence statistical significance, those results will not be presented or discussed further. Statistically significant differences were found in favor of the experimental group in both TEGAINS (F [1, 60] = 26.66, p < .0001) and TOGAINS (F [1, 60] = 32.09, p < .0001). Inspection of the means helps to explain the statistically significant findings (cf. Table 10). Specifically, the experimental group had impressive gains in means for both early reading skills (M = 9.55) and oral language skills (M = 11.00), compared to smaller mean gains for the control group in early reading skills (M = 0.84) and oral language skills (M = 2.29). Because statistical significance was obtained, a corrected effect size associated with the MANOVA was calculated, as measured by η². With the full set of data, η² = .57 and corrected η² = .55, and with the outlier removed, η² = .49 and corrected η² = .47. Using Cohen's (1988) criteria, the effect sizes associated with both η² and corrected η² are large.


Table 10. Descriptive Results for TEGAINS and TOGAINS

                     TEGAINS              TOGAINS
Group      N         M        SD          M        SD
1          31        9.55     7.59        11.00    7.32
2          31        0.84     5.53        2.29     4.44

Discriminant analysis and effect sizes. Because a statistically significant main effect was found for the MANOVA, a discriminant analysis was conducted as a follow-up to provide additional descriptive information about the contribution of each respective variable to the discrimination between groups. In analyzing the pooled within-class structure coefficients, TEGAINS produced a coefficient of .60 and TOGAINS a coefficient of .69. These results suggest that both variables discriminated between the groups similarly; thus, the intervention was working. Based on these analyses, it is reasonable to reject null hypothesis 1 and conclude that students who received instruction through the Headsprout Reading Basics program experienced statistically higher gains in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA-3), than did students who did not receive this instruction. It is also reasonable to reject null hypothesis 2 and conclude that students who received instruction through the Headsprout Reading Basics program experienced statistically higher gains in oral language skills, as measured by the


spoken language quotient of the Test of Language Development (TOLD-3), than did students who did not receive this instruction. Results for Research Questions 3-6 Research Question 3 was: What is the effect of instruction, through the Headsprout Reading Basics program, on student achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA-3), as a function of number of minutes in the program? Research Question 4 was: What is the effect of instruction, through the Headsprout Reading Basics program, on student achievement in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), as a function of number of minutes in the program? Research Question 5 was: What is the effect of instruction through the Headsprout Reading Basics program, on student achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA-3), as a function of gender? Research Question 6 was: What is the effect of instruction through the Headsprout Reading Basics program, on student achievement in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), as a function of gender? To answer these four questions about the effects gender and minutes in the program have on the dependent measures, TERA-3 and TOLD-3 gain scores from the experimental group were entered into 2 (male vs. female) x 4 (280-375 minutes vs. 376-470 minutes vs. 471-565 minutes vs. 566-660 minutes in the program) factorial ANOVA 93


designs. Factorial designs are used to assess the effects of two or more independent variables, or the interaction of participant characteristics with the independent variable (Martella et al., 1999); they permit main effects and interaction effects to be examined simultaneously. Scores from the control group were not entered into this analysis, as only the effects of gender and minutes in the program on achievement in the group that received instruction through the Headsprout Reading Basics program were of interest.

Null Hypotheses

Null Hypotheses 3-6 were presented in Chapter 1 and are:

Null Hypothesis 3. There is no difference in achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA-3), between students who receive a greater number of minutes of instruction through the Headsprout Reading Basics program and students who receive fewer minutes of instruction.

Null Hypothesis 4. There is no difference in achievement in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), between students who receive a greater number of minutes of instruction through the Headsprout Reading Basics program and students who receive fewer minutes of instruction.

Null Hypothesis 5. There is no difference in achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA-3), between males who receive instruction through the Headsprout Reading Basics program and females who receive the same instruction.

Null Hypothesis 6. There is no difference in achievement in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), between males who receive instruction through the Headsprout Reading Basics program and females who receive the same instruction. Factorial ANOVA. Examination of the results of the factorial ANOVA revealed that the test of interaction for TEGAINS was not statistically significant (F [3, 23] = 0.61, p > .05). The main effects for gender and minutes in the program also were not statistically significant (F [1, 23] = 1.17, p > .05; F [3, 23] = .38, p > .05, respectively). Results of the factorial ANOVA reveal that the test of interaction for TOGAINS also was not statistically significant (F [3, 23]= .75, p > .05). The main effects for gender and minutes in the program also were not statistically significant (F [1, 23]= .00, p > .05; F [3, 23]= .38, p > .05, respectively). From these analyses, it was concluded that gender and number of minutes in the program had no significant effect on gain scores on either the TERA-3 or the TOLD-3. Although statistical significance was not found for the variables gender and minutes in the program, descriptive data are presented for informational purposes (cf. Table 11). As can be seen from Table 11, females had larger raw score gains than did males on the TERA-3 (female M gain = 11.50, male M gain = 8.32) and on the TOLD-3 (female M gain =11.08, male M gain = 10.95). Additionally, no pattern was evident pertaining to gains based on number of minutes in the program. 95
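For readers who want to see the mechanics of this analysis, a minimal sketch follows. It assumes Python with pandas and statsmodels (the statistical package actually used for these analyses is not identified here), fabricates gain scores and minute totals, and borrows only the variable labels from this chapter (TEGAINS, gender, minutes in the program); the contrast coding and Type III sums of squares are illustrative choices, not a record of the original procedure.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical stand-in for the 31 experimental-group records: one row per
    # student with gender, total minutes in the program, and a TERA-3 gain score.
    # (Cell sizes here will not match the 4/6/8/13 split reported in Table 11.)
    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "gender": (["male", "female"] * 16)[:31],
        "minutes": np.linspace(280, 660, 31).round().astype(int),
        "TEGAINS": rng.normal(9.5, 7.6, size=31).round(1),
    })

    # Bin minutes into the four intervals used above.
    df["minutes_bin"] = pd.cut(
        df["minutes"],
        bins=[279, 375, 470, 565, 660],
        labels=["280-375", "376-470", "471-565", "566-660"],
    )

    # 2 (gender) x 4 (minutes bin) factorial ANOVA on the gain scores; sum-to-zero
    # contrasts with Type III sums of squares are a common choice when cell sizes
    # are unequal.
    model = smf.ols("TEGAINS ~ C(gender, Sum) * C(minutes_bin, Sum)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=3))  # main effects and the interaction

An analogous model with TOGAINS as the response corresponds to the second factorial ANOVA reported above.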

Table 11. Means and Standard Deviations of Gain Scores by Gender and Minutes in Program

                          TEGAINS               TOGAINS
Source              N     M        SD           M        SD
Gender
  Male              19    8.32     7.39         10.95    6.60
  Female            12    11.50    7.81         11.08    8.65
Minutes in Program
  280-375           4     9.25     5.74         13.75    6.50
  376-470           6     6.33     7.03         10.33    6.02
  471-565           8     10.75    8.50         12.25    6.82
  566-660           13    10.38    8.13         9.69     8.69

Based on analysis of the factorial ANOVAs, Null Hypotheses 3 and 4 cannot be rejected. The effect of number of minutes in the program was explored further using two sets of post-hoc regression analyses. The first regression analysis revealed that number of minutes in the program was a statistically significant predictor of TERA-3 post-test scores (F [1, 29] = 5.62, p < .05). Moreover, number of minutes in the program explained more than 16% (i.e., R² = .162) of the variance in TERA-3 post-test scores. Using Cohen's (1988) criteria, this suggests a medium effect size. The second regression analysis revealed that number of minutes in the program also was a statistically significant predictor of TOLD-3 post-test scores (F [1, 29] = 4.85, p < .05). Moreover, number of minutes in the program explained more than 14% (i.e., R² = .143) of the variance in TOLD-3 post-test scores. Using Cohen's (1988) criteria, this also suggests a medium effect size. These two sets of analyses, when combined, provide a more comprehensive picture than the data from the factorial ANOVAs alone. Number of minutes in the program does not predict gain scores, but it does predict post-intervention scores. The latter finding implies that number of minutes in the program is associated with higher levels of performance, even though it does not predict how much a student will gain. Practical significance of these conclusions is discussed in Chapter 5.

Based on these analyses, I am able to reject Null Hypotheses 3 and 4, at least partially, and conclude that there is a difference in achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA-3), and in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), associated with the number of minutes of instruction received. This difference manifests itself in the post-intervention scores. Based on further analysis of the factorial ANOVAs, I am not able to reject Null Hypotheses 5 and 6, and therefore conclude that there is no difference in achievement in early reading skills, as measured by the overall reading quotient of the Test of Early Reading Ability (TERA-3), or in oral language skills, as measured by the spoken language quotient of the Test of Language Development (TOLD-3), between males who receive instruction through the Headsprout Reading Basics program and females who receive the same instruction.

Results for Research Question 7

Research Question 7 was: What are the perceptions of preschool students' teachers and their assistants regarding instruction through the Headsprout Reading Basics program, after first-time implementation with their students?

To answer this question, individual interviews were completed with five teachers and five of their assistants. The five teachers and five assistants included in the interviews represented all the teachers of the students in this investigation and five of the six teacher assistants, and were therefore purposefully chosen. One assistant declined to be interviewed, stating that nothing was wrong, but that she did not like to be interviewed. Description of the design of the interview protocol was provided in Chapter 3. Results are shown in a descriptive model (see Table 12) with illustrative quotes (see Table 13), using across-site summarizing tables (Miles & Huberman, 1994).

Table 12. Participants' Perceptions Categories with Related Indicators

Skill Acquisition: Participants felt Headsprout Reading Basics helped reinforce the skills they were teaching in the classroom and had a positive effect on the oral language, early reading, and technological skills of their students.
Indicators:
- Active oral responding while on the computer
- Increase in vocabulary and verbalizations
- Sounding out words
- Transfer of word identification to other books
- Printing words they learned on the bus
- Recognizing letters and sounds
- Improvement in writing own name
- Reinforcing the phonological awareness, print awareness, and phonics activities taking place in the classroom
- Trying to print the words from the bus

Motivation: Participants felt Headsprout was a preferred activity for many of their students and sufficiently motivated the majority. A few reported the need to provide extra praise and encouragement to students who were struggling. A majority of participants were excited to be involved in the instruction, while some noted staffing and time constraints as barriers to their motivation. Participants agreed that not many parents became involved, but the ones who did appeared to be excited and interested in the instruction that was taking place on the bus.
Indicators:
- Willingness to attempt reading activities because they've already had experiences
- Likened to video games the kids love
- Immediate reinforcement from the program characters
- Take-home books were a source of pride
- Students looked at their progress charts every day
- Lessons were enjoyable, competitive, and fun
- Students were anxious to share what they were learning
- Parents see the improvement in their kids and are excited to have them continue
- Parents who observed on the bus loved it and asked if it would be around next year

Developmental Appropriateness: Participants felt Headsprout was appropriate for most of their students. A few exceptions were noted. A third of the participants felt the program should be offered at an even younger age.
Indicators:
- Requests to start the program with the 3-4 year olds
- Start program the second semester when students have been exposed to 1/2-3/4 of the alphabet & sounds
- Some of the children who struggled may not have been mature enough
- Fits their needs as they enter kindergarten
- Difficulty level fit the students' level and the program demonstrates patience

Measuring Success: Participants measured success of the program in multiple ways, but most cited the interest and excitement levels of their students and transfer of skills to the classroom. Using the progress notes and observing the children during instruction were mentioned as ways of measuring success by a majority of the teachers.
Indicators:
- A student's ability to pick up a book and read it or sound out words
- The amount of excitement shown by students in wanting to learn to read
- The transfer of skills to reading words in the classroom
- Skill retention
- The progress notes
- Direct observation of the students on the bus
- Interest level
- Improvement in name writing, and letter and sound recognition

Improvement Ideas: Participants shared a positive perception of the implementation of Headsprout with their children. However, there was also consensus on ways to improve future implementations, particularly providing instruction to the younger children, providing the program two or three times a week, and improving staffing patterns.
Indicators:
- Start a similar program with the 3-4 year old students
- Start the program earlier in the school year
- Provide individual learning centers to remove the distraction of other students celebrating
- Need for more staff
- Use technology to incorporate writing

Table 13. Participants' Perceptions Categories with Illustrative Quotes

Skill Acquisition
- "They actually talk to the computer as they give the answer. They're interacting more."
- "You could see them reading words out of other books that they learned on the computer."
- "They started using some of the vocabulary associated with the program. I've had kids use 'episode' and I was like, wow, what a nice word!"

Motivation (Students)
- "It gets the kids to want to learn how to read."
- "They'd jump up from circle time hollering, 'there's the bus!' They loved it, they were excited."
- "Some would sit up during naptime and ask, 'Are we going on the bus?'"
- "It boosts their self-esteem; when their moms come to pick them up, the first thing they'd do is pick up their little books, they're so anxious to read it."

Motivation (Teachers)
- "Oh man, I adore it. I think it's a 10! My thoughts, hey, let's keep it up. I like it."
- "Woo! I learned some new things too!"
- "I love for them to love learning. I saw it as a great learning tool."

Developmental Appropriateness
- "The younger kids also need it, but the kids who are graduating and moving up should be the first to get to the computer."
- "It should have a starter program for the younger ones."
- "Some of the kids got burned out, so those probably were the ones who weren't mature enough."
- "Very accessible, not too difficult."

Measuring Success
- "By how much the children learn from it. Their ability to pick up that book and read it or sound out the words."
- "It gets the kids to want to learn how to read."
- "They would find similar words in other books and they knew the words."
- "Looking at their star charts, that charts where they are and how far they came."

Improvement Ideas
- "They should have a starter, so the younger ones will be ready to do the full Headsprout thing."
- "Start it earlier in the year so we can get it into our lesson plans daily, then we can reinforce it."
- "If they could have their own area, that would probably be better. Celebrating is distracting for some."
- "Perhaps maybe a smaller group, but I guess that goes back to staffing."
- "Cut it down like a Monday, Wednesday, and Friday."

Narrative of qualitative results. Tables 12 and 13 provide an overview of the five categories culled from the teacher and teacher assistant interviews. A salient point of these findings is that in all categories, almost all of the responses supported the use of Headsprout Reading Basics with their students. The responses indicate that the participants find the Headsprout Reading Basics program to be motivating, developmentally appropriate, and effective in helping them increase the oral language and early reading skills of their students. Negative comments primarily reported time constraints and staffing concerns. One outlier was noticed in the developmental appropriateness category as one teacher felt that a few children, who burned out and struggled, were not mature enough for this intervention. This comment touches on an extremely important issue. Students who are easily distracted while receiving instruction through this program could be missing critical prerequisite skills. This issue was beyond the scope of this investigation, but could be an important variable to study in future implementations of the Headsprout Reading Basics program with the preschool population. Combined Quantitative and Qualitative Data Analysis Quantitative and qualitative data were collected in a concurrent explanatory manner (Creswell et al., 2003) and analyzed in a complementary and concurrent manner. The results of student achievement, as measured by the scores on the TERA-3 and TOLD-3, did not inform teacher interview responses. Analysis of data for this investigation used a mixed methodology framework congruent with Tashakkori and Teddlies (1998) recommendation of combining the qualitative and quantitative approaches into the research methodology of a single study or 101

multiphase study. According to Onwuegbuzie and Teddlie (2003), the reason for this type of data analyses is both representation and legitimization. The former is to cull sufficient information about the effects of the Headsprout Reading Basics program on both students and teachers, whereas the latter is concerned with validity and trustworthiness. To assist the reader in understanding how these data have been interpreted, the principal investigator acknowledges her belief that well-designed curricula coupled with sound, engaging instruction are critical needs in preschool settings. Use of a concurrent explanatory design led to the conclusion that use of the Headsprout Reading Basics program, as a supplementary instructional tool, can be effective in increasing the oral language and early reading skills of at-risk preschool students. Additionally, the teachers and their assistants in this investigation found this instructional tool to be helpful in reinforcing and extending the skills they teach in their classrooms. In this chapter, the results of both the quantitative and qualitative data collected in this investigation were presented. In the final chapter, interpretations of these findings are presented. Additionally, some of the limitations of this investigation are reviewed. Implications for future practice and research also are presented in Chapter 5. 102

Chapter 5 Discussion This chapter contains an overview of the investigation, major findings, and comparisons with previous research. Implications of the findings for both research and future practice with the at-risk preschool population are discussed. Overview Approximately 40% of students across our nation cannot read at a basic level (USDOE, 2002). Researchers respond to this concern by studying teacher practices, effects of poverty, school environments, and a host of other contributing factors. Researchers, educators, and lawmakers also are examining research-based early intervention strategies, methods, and programs in attempts to preempt the need for costly remedial programs, and to increase the probability of reading proficiency for every student (USDOE, 2005). Fueled by the need for quality early language and literacy experiences, the call for reliable interventions, supported by replicable research, is filtering down into preschools. Being at-risk for school-based reading difficulties can be attributed to a number of economic, environmental, academic, or emotional variables (Wharton-McDonald et al., 1998). These variables can include (a) residing in a low-income family, (b) limited proficiency in English, and (c) a diagnosed disability. Although no single approach can be labeled best practice, there are instructional methods that have demonstrated efficacy in reducing the probability of reading difficulties. 103

One method of instruction that has demonstrated the potential to teach initial reading concepts explicitly in a number of studies is generative instruction, which was described in detail in Chapter 2. Generative instruction is the basis of the Internet-based program Headsprout Reading Basics. Headsprout Reading Basics has been evaluated in terms of increasing preschool students letter and word recognition, but there have been no studies that examined the effects of the program on preschool students oral language and a composite of early reading skills. In fact, scant research exists in evaluating programs for their effects on both the oral language and the early reading skills of at-risk preschool students. Because language and literacy develop in a parallel and interactive manner (Ruddell & Ruddell, 1994), there is a need to explore options to develop them simultaneously. Thus, the present investigation was conducted to determine if the Headsprout Reading Basics program would be effective in significantly increasing the oral language and early reading skills of at-risk preschool students. Additionally, this investigation sought to ascertain teachers and their assistants perceptions of the Headsprout Reading Basics program after first-time implementation. By providing instruction through Headsprout Reading Basics, it was hypothesized that student achievement in both early reading and oral language skills would be significantly increased. A probability sampling method, two-stage sampling, was initially used to randomly choose two Head Start centers from the five that serve at-risk students in one county in Eastern Florida. From this sample, a table of random numbers was used to assign the students from these two centers into either the experimental group or the 104

control group. There were 31 students in the experimental group and 31 students in the control group for a total of 62 preschool students. The interview participants were the teachers and teacher assistants of the 4-5 year-old students whose Head Start centers were randomly chosen, and who also agreed to be interviewed. Prior to beginning the intervention, both groups were pre-tested using the TERA-3 and the TOLD-3. The experimental group then received 30 minutes of daily instruction in the Headsprout Reading Basics program for an 8-week period, and the control group received 30 minutes of daily numeracy instruction on the computers via the Millie's Math House program. Implementation integrity was measured and found to be adequate for this investigation. At the end of the 8-week period, both groups were post-tested using the TERA-3 and TOLD-3. In the first inferential analysis, a one-way (two-group) multivariate analysis of variance (MANOVA) was conducted to determine if significant differences in student achievement measures existed between the group that received instruction through the Headsprout Reading Basics program and the group that did not. Findings from the analysis could have important implications for preschool programs in Florida that serve at-risk students. A discriminant analysis was conducted as a follow-up to provide additional descriptive information about the contribution of each respective variable to the discrimination between groups. A corrected effect size associated with the MANOVA, as measured by η², was also calculated. Additionally, interview responses from teachers and their assistants were categorized to provide an overview of their perceptions of the Headsprout Reading Basics program after first-time implementation.
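The computing environment for the MANOVA and its discriminant follow-up is likewise not specified in this overview. Purely as an illustrative sketch, the analysis pipeline can be expressed in Python as below; the fabricated data (matched only loosely to the group sizes and descriptive statistics in Table 10), the statsmodels and scikit-learn calls, and the within-group centering used to obtain pooled within-class structure coefficients are all assumptions made for illustration rather than a description of the procedures used in this investigation.

    import numpy as np
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Hypothetical gain scores: 31 Headsprout students (group 1) and 31 comparison
    # students (group 2), with TERA-3 (TEGAINS) and TOLD-3 (TOGAINS) gains drawn
    # to resemble the descriptive results in Table 10.
    rng = np.random.default_rng(42)
    df = pd.DataFrame({
        "group":   np.repeat([1, 2], 31),
        "TEGAINS": np.concatenate([rng.normal(9.6, 7.6, 31), rng.normal(0.8, 5.5, 31)]),
        "TOGAINS": np.concatenate([rng.normal(11.0, 7.3, 31), rng.normal(2.3, 4.4, 31)]),
    })

    # One-way (two-group) MANOVA on the two gain scores.
    print(MANOVA.from_formula("TEGAINS + TOGAINS ~ C(group)", data=df).mv_test())

    # Descriptive discriminant follow-up: with two groups there is a single
    # discriminant function, and pooled within-class structure coefficients are
    # the correlations between each variable and that function after centering
    # both within group.
    X = df[["TEGAINS", "TOGAINS"]].to_numpy()
    y = df["group"].to_numpy()
    scores = LinearDiscriminantAnalysis(n_components=1).fit(X, y).transform(X).ravel()

    X_centered = X - np.vstack([X[y == g].mean(axis=0) for g in y])
    s_centered = scores - np.array([scores[y == g].mean() for g in y])
    structure = {name: np.corrcoef(X_centered[:, i], s_centered)[0, 1]
                 for i, name in enumerate(["TEGAINS", "TOGAINS"])}
    print(structure)

The sign of each structure coefficient depends on the arbitrary orientation of the discriminant function; it is the relative magnitudes of the two coefficients that indicate how similarly TEGAINS and TOGAINS separate the groups.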

Summary of Findings The major finding in this investigation was that statistically significant differences and large effect sizes, for both measures, emerged between the achievement of preschool students who received instruction through the Headsprout Reading Basics program and the achievement of students who did not. A second finding was that both outcome measures discriminated groups similarly, thus indicating that the intervention worked in concurrently increasing the oral language and early reading skills of preschool students. A third finding was that no statistically significant differences emerged between males and females, or between students who received a greater number of minutes in the program, and students who received fewer number of minutes, for either outcome measure investigated. That is, the Headsprout Reading Basics program was equally effective in increasing the oral language and early reading skills of both males and females. Additionally, receiving instruction through the Headsprout Reading Basics program was equally effective in increasing oral language and early reading skills regardless of the number of minutes of instruction received (minutes in program ranged from 280-660). This information may be useful to preschool educators as they consider scheduling and lesson planning. Other important findings in this investigation were that the teachers and their assistants who implemented the Headsprout Reading Basics program found it to be effective in increasing the oral language and early reading skills of their preschool students and would use the program in the future if given the opportunity. Additionally, they mentioned that the program reinforced the skills they were teaching in the classroom, and they noticed the positive difference in their students skills. They also felt 106

the interest and excitement shown by the students was an indicator that Headsprout Reading Basics was developmentally appropriate, and a preferred instructional activity of their students. Conversely, some discussed time constraints and staff shortages as impacting their ability to implement a new program as prescribed. As previously mentioned, these perspectives are germane to the reading and oral language achievement outcome measures inasmuch as the effort expended by the teachers and assistants must reap perceived benefits before an intervention will be accepted for future use. These perspectives also should provide suggestions that could improve implementation procedures (e.g., provide the program 2-3 times a week as opposed to everyday). Comparison of Findings with Theoretical Framework and Previous Research The statistically significant results of this study support the theoretical framework, which draws upon a scientifically informed approach to teaching, largely based on Engelmann and Carnines (1991) theory of instruction. As discussed in Chapter 1, the current investigation provided an environment where new concepts were presented in a clear manner and practiced to fluency. Students in the current investigation demonstrated fluency of carefully sequenced reading skills by successfully completing Headsprout Reading Basics episodes. As mentioned in Chapter 1, Engelmann and Carnines (1991) theory of instruction also includes a students ability to transfer knowledge to new learning situations. The teachers and assistants in the current investigation reported incidents of the students finding words in books and posters in the classroom and reading them, while proclaiming they learned those words on the computer bus. Results also indicate that this investigation falls under the category of instructional research, where the role is to examine and identify teaching practices that 107

are effective in helping at-risk students acquire the skills and attitudes they need to become proficient readers (Torgesen, 2004). With the current need to identify research-based curricula for our preschool students, this investigation provides evidence that Headsprout Reading Basics can be effective in helping at-risk students acquire early reading and oral language skills. Findings of the current investigation are inconsistent with those of Elkind (1981) who questions the empirical focus on skill acquisition and purports that focusing on these skills too early may be detrimental to student reading achievement. In the current investigation, empirical focus on skill acquisition significantly increased some of the critical skills necessary for reading achievement. On the other hand, the findings of the current study are consistent with researchers and theorists (e.g., Clay, 2001; Skinner, 1968; Vygotsky, 1978) who purport that students will benefit from explicit instruction in early reading skills if they are allowed to work at their own pace, and at their own level with individualized support. In this investigation, Headsprout Reading Basics provided individual support to 31 preschool students who benefited by acquiring language, early reading and computer skills. Their teachers reported increased student performance in vocabulary, writing skills, word and letter recognition, and phonological awareness. Additionally, the findings from the current study support Labbo and Reinking (2003) who provide evidence that computers are not only useful in providing practice opportunities, but they are also motivational. The teachers and assistants in the current study stated that the students loved the immediate reinforcement provided by the programs characters and one likened the program to a preferred student activity, playing video games. 108

Analyses from the current investigation support the findings of Nation and Snowling (2004), who showed that oral language skills predicted word recognition and reading comprehension, both concurrently and longitudinally. Inferences can be made from the current study based on the results of the discriminant analysis, which showed that the Headsprout Reading Basics program, although designed to increase early reading skills, also was efficacious in increasing oral language skills in a concurrent manner. In extending the findings of Layng et al. (in press), who demonstrated that Headsprout Reading Basics increased the early reading skills of kindergarten students, this study revealed that the Headsprout Reading Basics program is effective in increasing early reading skills in preschool children. Additionally, going beyond the findings of Layng et al. (in press), this investigation provides evidence that the Headsprout Reading Basics program is also effective in developing oral language skills in this same group of students. A second addition to the findings of Layng et al. (in press) is that preschool teachers and assistants find the Headsprout Reading Basics program to be an effective and desirable supplemental program in reinforcing what they are teaching in the classroom, and in assisting them in increasing the oral language and early reading skills of their students. Analyses of the teachers' and teacher assistants' responses to the interview questions provide a variation to one of the findings of Patterson et al. (2003), whose qualitative data revealed that the teachers expressed complete confidence in the Waterford program's ability to design and monitor appropriate instruction to enhance literacy growth, despite the absence of statistically significant differences. In contrast to that study, this investigation demonstrated significant gains in achievement as a result of
the intervention. The positive feedback given by the teachers in both instances may be the result of researcher influence and bias, or, could possibly suggest the need for working more closely with practitioners in measuring the effectiveness of programs. A final comparison to address is one that may provide promise to early educators who are concerned about time constraints. Hart and Risley (1995) estimated that in order to catch up to their more advantaged peers, children who had received fewer oral language experiences needed 41 hours of out-of-home language experiences per week. These researchers also inferred that by four years of age, children had already established patterns of vocabulary growth that were, often times, intractable. Although Hart & Risley (1995) provide strong evidence to support that inference, results of the present investigation are inconsistent with that finding. Although the present investigation consisted of a small sample size, the gains in oral language skills demonstrated by students who received as little as 280 minutes of instruction may suggest that, with the proper personnel, curricula and support, educators may be able to assist at-risk preschool students in catching up to their more advantaged peers in a more timely manner. Threats to Internal Validity The findings presented in this study should be interpreted with caution due to the possible threats to validity that prevailed. Particular threats to internal validity for this investigation were history and maturation (Martella et al., 1999; Onwuegbuzie, 2003) because it is expected that all of these students also came into contact with conditions, unrelated to the intervention, that may have increased their oral language and early reading skills. Additionally, students are expected to demonstrate improvement in oral language and early reading skills, as they grow older, even though this was only eight 110

weeks older. However, presence of a control group, with the same expectations of outside influences should minimize this limitation in terms of the outcomes of the investigation. Another threat to internal validity is implementation bias (i.e., lack of adherence to protocol) (Onwuegbuzie, 2003), stemming from the teachers and their assistants implementing the intervention to various degrees. To guard against this threat, implementation of the programs was monitored using implementation checklists with inter-rater reliability checks. Finally, despite attempts to minimize it with an observation journal, member checking and peer review, researcher bias (Onwuegbuzie, 2003) existed in the implementation and interpretation stages of the investigation. Threats to External Validity. Ecological validity (Onwuegbuzie, 2003) presented a threat to external validity for the present investigation because preschool settings differ substantially. I used random selection of preschool settings in an attempt to minimize this threat. However, that selection came from Head Start centers, therefore, generalization to a private or public school-based preschool setting only can be undertaken with extreme caution. A second threat to external validity is researcher bias (Onwuegbuzie, 2003). Researcher bias threatened external validity as well as internal validity, as the results may have been influenced by my presence and involvement. As previously stated, I maintained a journal and summarized my involvement in Table 5 for the reader to use in making inferences pertaining to this investigation. Implications for Future Research Guided by the premise that reading development begins before students reach kindergarten, and that one teacher working in isolation cannot meet the needs of every 111

student, it seems not only prudent, but necessary to explore and identify efficacious supplemental methods and programs that explicitly develop both oral language and early reading skills. Because scant research exists on the effectiveness of supplemental literacy programs for at-risk preschool students, an immediate opportunity presents itself to researchers interested in this population. Once identified and implemented, these supplemental methods and programs could enhance the quality of education we are providing to these young students and may serve to reduce the probability of future reading difficulties. A first suggestion for future research focuses on the population from which the participants for the current investigation were sampled. This investigation examined the achievement of at-risk preschool students in Head Start centers in one county in Eastern Florida. Research on this population is important because these learners are considered to be at-risk for reading difficulties if they do not receive quality literacy experiences. However, concurrent development of oral language and early reading skills is important for all preschool students, as well as those students in the early elementary years. As such, the relationships among instruction in research-based programs, and achievement in oral language and early reading skills should be examined with a variety of participant samples so that specific recommendations regarding program efficacy are based on research findings. In evaluating supplemental early reading and language instructional programs, researchers should consider all the factors that can impact student achievement within a preschool setting. Scores could be impacted by literacy experiences received in the home, classroom, or community. In future investigations, collecting data on preschool 112

classroom literacy experiences and home literacy experiences in a comprehensive, systematic manner would help clarify the differences that could be attributed to the supplemental program. A related variable of interest might be the extent to which the supplemental materials provided through the Headsprout Reading Basics program are being utilized. The systematic evaluation of literacy activities in the home should provide useful information to parents and teachers seeking to strengthen and support joint instructional efforts. These types of investigations would be time-consuming and would take a skilled researcher to minimize the effects of researcher presence in the home or classroom. Future research that is predominantly quantitative should continue to incorporate a qualitative component to investigate the perceptions of teachers, teachers assistants, parents, and administrators. Qualitative data can provide insightful information on staffing, scheduling, motivational, and effectiveness issues. In the current study, the qualitative data supported the quantitative data that indicated program effectiveness, but also gave suggestions about scheduling and staffing concerns that could lead to improved implementation. Qualitative data also assist a researcher in supplementing the numbers collected in a quantitative study, and gaining a more complete picture of both the challenges and the rewards of providing quality literacy programming in preschool settings. A salient factor in need of addressing is the amount of demands being placed on preschool teachers. Teachers and assistants reported having to spend a substantial amount of time completing paperwork (not related to the Headsprout Reading Basics program) and time constraints due to staffing shortages. If researchers and administrations cannot identify means to support preschool teachers, by providing the time and training needed 113

to implement new programs with integrity, the programs will not be efficacious in increasing the oral language and early reading skills of preschool students. There is also a need for longitudinal studies to explore the effects early intervention programs have on the language and reading skills of students as they progress through their formal schooling years. With the emphasis on reading proficiency in schools, not to mention the access to literature that reading proficiency affords, examination of the long-term effects of early intervention programs could provide critical information. While short-term statistically significant gains are impressive, long-term gains can be keys to program sustainability. Implications for Future Practice As discussed in Chapter 1, The No Child Left Behind Act (NCLB) of 2001, signed into law by President Bush created a new program, Reading First (Armbruster, et al., 2003), which calls for scientific-based reading programs in Grades K-3, with funding priority given to high-poverty areas. With the Constitution of the State of Florida mandating that every 4-year-old child be offered a high quality preschool learning experience, it is reasonable to expect that scientific, evidence-based practices also will be called for when providing instruction to these youngest students. The Headsprout Reading Basics program might be one supplemental program that, if implemented with integrity, may be able to assist preschool teachers in meeting the goals set by the state, as well as meeting some of the individual needs of their students. The implementation checklist used in this study (cf. to Appendix F) may be of use to teachers and assistants who plan to implement this program. If attended to prior to the time of implementation, fewer difficulties should arise. For instance, volume and 114

headsets could be checked before the student sits down at the computer to avoid any unproductive waiting periods. Placing the cord behind the students head as soon as they sit down will avoid some (not all!) of the chewing and increase the longevity of the headsets. As with any program, familiarity could only serve to increase efficacy. That is to say that teachers or assistants implementing the program would benefit from working through the episodes themselves and becoming more familiar with content, error correction procedures, and reinforcement techniques used in the program. The close examination of the teachers and their assistants responses to the interview questions lead to several implications for teachers. We might infer from the present study that the teachers and assistants of at-risk preschool children are receptive to new teaching methods and programs, and if given the opportunity to see positive effects on their students, are willing to embrace and incorporate the new instructional method into their instructional routines. Even so, teachers and assistants who work with at-risk preschool students face time constraints and multiple demands throughout the course of the day. Proper administrative support is imperative for any successful implementation. Conclusion This investigation began as a search for supplemental reading programs to support the classroom efforts of pre-k teachers in increasing the oral language and early reading skills of their students. It was believed that an interactive, Internet-based reading program would be successful in increasing oral language and early reading skills by providing individualized instruction, practice opportunities, and immediate feedback. The gains in oral language and early reading skills demonstrated by the students who received the program, Headsprout Reading Basics, were shown to be significant and the effect sizes 115

were large. These differences were realized in the relatively short time period of eight weeks. The teachers and their assistants who implemented the program found the program useful in supporting oral language and early reading instruction, and stated that they would use it again if given the opportunity. It is this researchers contention that using the Headsprout Reading Basics program to supplement language arts and reading instruction in preschool classrooms may be beneficial and motivating, for both students and teachers, if it were used the second half of the school year, or during the summer preceding students entry into kindergarten. Based on the feedback from the teachers and assistants and the results of this study, it is also suggested that further examination of program scheduling be undertaken. It might be more efficacious to provide instruction through Headsprout Reading Basics 2-3 times a week for an extended period rather than every day for eight weeks with the preschool population. Conducting randomly assigned, experimental versus control group studies, using computer-based technology to implement literacy interventions with the preschool population is not a simple undertaking, but it is an important one. Myriad variables and technological features can be examined. Exploring the relationships that facilitate student achievement within preschool environments can lead to instructional techniques that may reap exponential improvements in students readiness for their school careers. Moreover, significant results in investigations such as the present one may lead preschool practitioners to embrace educational technology as one of their partners in meeting the individualized needs of their diverse student population. 116

References Adams, M. J. (1990). Beginning to read: Thinking and learning about print. Cambridge, MA: MIT Press. Alloway, N. (1994). Young childrens preferred option and efficiency of use of input devices. Journal of Research on Computing in Education, 27, 104-110. Armbruster, B. B., Lehr, F., & Osborn, J. (2003). Put reading first: The research building blocks of reading instruction: Kindergarten through grade 3 (2nd ed.). Jessup, MD: National Institute for Literacy. Baer, D., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1, 91-97. Beaver, J. (1997). Developmental reading assessment. New York: Celebrations Press. Beck, I. L., & Juel, C. (1995). The role of decoding in learning to read. American Educator, 19, 2, 8, 21-25, 39-42. Best, J.W., & Kahn, J. V. (1993). Research in education. Needham Heights, MA: Simon & Schuster. Binder, C. (1988). Precision teaching: Measuring and attaining exemplary academic achievement. Youth Policy, 10(7), 12-15. Blok, H., Oostdam, R., Otter, M., & Overmaat (2002). Computer-assisted instruction in support of beginning reading instruction: A review. Review of Educational Research, 72(1), 101-130. 117

Bond, G. L., & Dykstra, R. (1967). The cooperative research program in first-grade reading instruction. Reading Research Quarterly, 2, 5-142. Bowey, J. A. & Patel, R. K. (1991). Metalinguistic ability and early reading achievement. Applied Psycholinguistics, 9, 367-384. Carnine, D., Silbert, J., Kameenui, E., & Tarver, S. (2004). Direct instruction reading (3rd ed.). Upper Saddle River, NJ: Pearson Education. Chall, J. S. (1989). Learning to read: The great debate 20 years later-A response to Debunking the great phonics myth. Phi Delta Kappan, 521-538. Chall, J. S. (1996). Learning to read: The great debate (3rd ed.). New York: McGraw-Hill. Chandler, K. (2000). Functional illiteracy. The Advisor, 26, 3. Chhabra, V., & McCardle, P. (2004). Contributions to evidence-based research. In P. McCardle & V. Chhabra (Eds.), The voice of evidence in reading research (pp. 47-58). Baltimore, MD: Brookes. Chomsky, N. (1965). Aspects of the theory of syntax. Cambridge, MA: MIT Press. Clarfield, J., & Stoner, G. (2005). The effects of computerized reading instruction on the academic performance of students identified with ADHD. School Psychology Review, 34, 246-254. Clay, M. (1993). An observation survey of early literacy achievement. Portsmouth, NH: Heinemann. Clay, M. (2001). Change over time in childrens literacy development. Portsmouth, NH: Heinemann. 118

Coalition for Evidence-Based Policy. (2003). Identifying and implementing educational Practices supported by rigorous evidence: A user-friendly guide. Washington, D.C.: Coalition for Evidence-Based Policy. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, N.J.: Erlbaum. Cook T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field studies. Boston: Houghton Mifflin. Creswell, J. W. (2002). Research design: Qualitative, quantitative and mixed methods approaches. Thousand Oaks, CA: Sage. Creswell, J. W., Clark, V. L., Gutmann, M. L., & Hanson, W. E. (2003). Advanced mixed methods research designs. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (pp. 209-240). Thousand Oaks, CA: Sage. Crevola, C. A., & Hill, P. W. (1998) Evaluation of a whole-school approach to prevention and intervention in early literacy. Journal of Education for Students Placed At Risk, 3, 133-158. Cunningham, A. E. (1990). Explicit versus implicit instruction on phonemic awareness. Journal of Educational Psychology, 82, 733-740. DeFur, S. (2003). Test review of the Test of Early Reading Ability (3rd ed.). B. S. Plake, J. C. Impara, & R. A. Spies (Eds.), The fifteenth mental measurements yearbook [Electronic version]. Retrieved February 22, 2005, from http://www.unl.edu/buros 119

Dickinson, D. K., & Snow, C. E. (1987). Interrelationships among pre-reading and oral language skills in kindergartners from two social classes. Early Childhood Research Quarterly, 2, 1-25. Dickinson, D. K., & Tabors, P. O. (2001). Beginning language with literacy: Young children learning at home and school. Baltimore: Brookes Publishing. Durkin, D. (1975). A six-year study of children who learned to read in school at the age of four. Reading Research Quarterly, 10, 9-61. Dyson, A. H., & Genishi, C. (1993). Visions of children as language users: Language and language education in early childhood. In B. Spodek (Ed.) Handbook of research on the education of young children. (pp. 122-36). New York: Macmillan. Ehri, L. C. (1979). Linguistic insight: Threshold of reading acquisition. In T. Waller & G. Mackinnon (Eds.), Reading research: Advances in research and theory (Vol. 1) (pp. 63-114). New York: Academic Press. Ehri, L. C. (1994). Development of the ability to read words: Update. In R. B. Ruddell, M. R. Ruddell, & H. Singer (Eds.), Theoretical models and processes of Reading. Vol. 4. (pp. 323-358). Newark, DE: IRA. Ehri, L. C. (2004). Teaching phonemic awareness and phonics: An explanation of the National Reading Panel meta-analyses. In P. McCardle & V. Chhabra (Eds.), The voice of evidence in reading research (pp. 153-186). Baltimore, MD: Brookes. Ehri, L. C., Nunes, S. R., Willows, D. M., Schuster, B. V., Yaghoub-Zadeh, Z., & Shanahan, T. (2001). Phonemic awareness instruction helps children learn to read: Evidence from the National Reading Panels meta-analysis. Reading Research Quarterly, 36, 3, 250-287. 120

Elkind, D. (1981). The hurried child: Growing up too soon. Reading, MA: Addison-Wesley. Engelmann, S., & Carnine, D. (1991). Theory of instruction: Principles and applications. Eugene, OR: ADI Press. Fletcher J. M., & Francis, D. J. (2004). Scientifically based educational research. In P. McCardle & V. Chhabra (Eds.), The voice of evidence in reading research (pp. 59-80). Baltimore: Brookes. Florida Center for Reading Research (FCRR). (2004). FCRR Reports. Retrieved October 1, 2004, from http://www.fcrr.org Florida Department of Education. (2004). Floridas Voluntary Universal Prekindergarten Program. Retrieved July 30, 2004, from http://www.fldoe.org/linked_sites/univ_ Pre_k.asp Florida Department of Education. (2005). Sunshine State Standards. Retrieved January 2, 2005, from http://www.firn.edu/doe/curric/prek12/index.html Fountas, I., & Pinnell, G. S. (1996). Guided reading. Portsmouth, NH: Heinemann. Freeman, G. D., & King, J. L. (2003). A partnership for school readiness. Educational Leadership, 60, 76-79. Genishi, C., Ryan, S., Ochsner, M., & Yarnell, M. (2001). Teaching in early childhood education. In V. Richardson, (Ed.), Handbook of research on teaching (4th ed.) (pp. 1175-1210). Washington, DC: American Educational Research Association. Gilbert, R., & Gilbert P. (1998). Masculinity goes to school. New York: Routledge. Goodman, K. S. (1967). Reading: A psycholinguistic guessing game. Journal of the Reading Specialist, 4, 126-135. 121

Goswami, U. (2001). Early phonological development. In S. B. Neuman & D. K. Dickinson (Eds.), Handbook of early literacy research (pp. 111-25). New York: Guilford. Graves, M., Juel, C., & Graves, B. (2004). Teaching reading in the 21st century. Boston, MA: Allyn & Bacon. Haksitan, A. R., Roed, J. C., & Lind, J. C. (1979). Two-sample T 2 procedure and the assumption of homogeneous covariance matrices. Psychological Bulletin, 86, 1255-1263. Hart, B., & Risley, T.R. (1995). Meaningful differences in the everyday experience of young American children. Baltimore: Paul H. Brookes. Haugland, S. (1992). Effect of computer software on preschool childrens developmental gains. Journal of Computing in Childhood Education, 3(1), 15-30. Haugland, S. (2005). Selecting or upgrading software and web sites in the classroom. Early Childhood Education Journal, 32, 329-340. Heath, S. B. (1983). Ways with words: Language, life, and work in communities and schools. New York: Cambridge Press. Helterbran, V. R., & Fennimore, B. S. (2004). Collaborative early childhood professional development: Building from a base of teacher investigation. Early Childhood Education Journal, 31, (267-271). Hiebert, E. H. (1999). Text matters in learning to read. The Reading Teacher, 52, 552-556. 122

Hoffman, J. V., Sailors, M., & Patterson, E. U. (2001). Decodable texts for beginning reading instruction: The year 2000 basals. Retrieved July 18, 2004, from http://www.ciera.org/library/reports/inquiry-1/1-016/1-016.html International Reading Association (IRA) & National Association for the Education of Young Children. (NAEYC). (1998). IRA & NAEYC position statement: Learning to read and write: Developmentally appropriate practices for young children. Young Children, 53(4), 30-46. James, W. (1994). Pragmatism. Mineola, NY: Dover Publications. Johnson, B., & Turner, L. (2003). Data collection strategies in mixed methods research. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (pp. 297-319). Thousand Oaks, CA: Sage. Johnson, J. E. (1985). Characteristics of preschoolers interested in microcomputers. Journal of Educational Research, 78, 299-305. Johnson, K. R., & Layng, J. (1994). The Morningside model of generative instruction. In R. Gardner, D. Sainato, J. Cooper, T. Heron, W. Heward, J. Eshleman, T. Grossi (Eds.), Behavior analysis in education (pp. 173-197). Pacific Grove, CA: Brooks/Cole. Johnson, R. B., & Onwuegbuzie, A. J., (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher, 33(7), 14-26. Juel, C. (1991). Beginning reading. In R. Barr, M. L. Kamil, P. B. Mosenthal, & P. D. Pearson (Eds.), Handbook of reading research (Vol. 2) (pp. 759-788). New York: Longman. 123

Juel, C. (1994). Learning to read and write in one elementary school. New York: Springer. Kameenui, E. J., & Simmons, D. C. (1997). Decodable texts and language of dichotomy: A response to Allington. Reading Today, 15, 18. Kvale, S. (1996). Interviews: An introduction to qualitative research interviewing. Thousand Oaks, CA: Sage. Labbo, L. D., & Reinking, D. (2003). Computers and early literacy education. In N. Hall, J. Larson, & J. Marsh (Eds.), Handbook of early childhood literacy (pp. 338-354). London: Sage. Labov, W. (1968). A study of nonstandard English. Urbana, IL: National Council of Teachers of English. Landis J. R., & Koch G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159-174. Layng, J., Twyman, J., & Stikeleather, G. (2002, June). Discovery learning in a phonics-based program. Paper presented at the Annual Convention of the Society for the Scientific Study of Reading, Chicago, IL. Layng, J., Twyman, J., & Stikeleather, G. (2003). Headsprout Early Reading: Reliably teaching children to read. Behavioral Technology Today, 3, 7-20. Layng, J., Twyman, J., & Stikeleather, G. (2004, January). Scientific formative evaluation: The role of individual learners in generating and predicting successful educational outcomes. Paper presented at the National Invitational Conference on the Scientific Basis of Educational Productivity, Arlington, VA. 124

Layng, J., Twyman, J., & Stikeleather, G. (in press). Selected for success: How Headsprout Reading Basics teaches children to read. In D. J. Moran & R. Malott (Eds.), Evidence based education methods (pp. 171-197). St. Louis, MO: Elsevier Science/Academic Press. Lei, M., & Lomax, R. G. (2005). The effect of varying degrees of nonnormality in structural equation modeling. Structural Equation Modeling, 12, 1-27. Liu, M. (1996). An exploratory study of how pre-kindergarten children use the interactive multimedia technology: Implications for multimedia software design. Journal of Computing in Childhood Education, 7(2), 71-92. Makin, L. (2003). Creating positive literacy learning environments in early childhood. In N. Hall, J. Larson, & J. Marsh (Eds.), Handbook of early childhood literacy (pp. 327-337). London: Sage. Marsh, J. (2003). Early childhood literacy and popular culture. In N. Hall, J. Larson, & J. Marsh (Eds.), Handbook of early childhood literacy (pp. 112-125). London: Sage. Martella, R., Nelson, R., & Marchand-Martella, N. (1999). Research methods: Learning to become a critical research consumer. Needham Heights, MA: Allyn & Bacon. Maxcy, S. J. (2003). Pragmatic threads in mixed methods research in the social sciences: The search for multiple modes of inquiry and the end of the philosophy of formalism. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (pp. 51-89). Thousand Oaks, CA: Sage. 125

McGee, L. M., & Richgels, D. (2003). Designing early literacy programs: Strategies for at-risk preschool and kindergarten children. New York: Guilford Press. McGuinness, C., & McGuinness, G. (1998). Reading reflex: The phono-graphix method for teaching your child to read. New York: Simon & Schuster. Mesmer, H. A. (2001). Decodable text: A review of what we know. Reading Research and Instruction, 40, 2, 121-142. Miles, M. B., & Huberman, A.M. (1994). Qualitative data analysis: An expanded sourcebook. Thousand Oaks, CA: Sage. Millard, E. (2003). Gender and early childhood literacy. In N. Hall, J. Larson, & J. Marsh (Eds.), Handbook of early childhood literacy (pp. 22-33). London: Sage. Moats, L.C. (1998). Teaching decoding. American Educator, 22 (1 & 2), 42-49, 95-96. Moll, L., Amanti, C., Neff, D., & Gonazalez, N. (1992). Funds of knowledge for teaching: Using a qualitative approach to connect homes and classrooms. Theory into practice, 31, 651-90. Morse, J. M. (2003). Principles of mixed methods and multimethod research design. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (pp. 189-208). Thousand Oaks, CA: Sage. Nation, K., & Snowling, M. J. (2004). Beyond phonological skills: Broader language skills contribute to the development of reading. Journal of Research in Reading, 27, 4,342-356. National Assessment of Educational Progress. (2003). Reading achievement state by state. Retrieved October 1, 2004, from http://nces.ed.gov/nationsreportcard/naepdata/ 126

National Association for the Education of Young Children (NAEYC). (2003). Early childhood curriculum, assessment, and program evaluation: Building an effective, accountable system in programs for children birth through age 8. Position statement. Washington, DC: NAEYC. National Association for Language Development in the Curriculum. (1998). Provision in literacy hours for pupils learning English as an additional language. Watford, UK: NALDIC. National Center for Education Statistics. (2000). Trends in educational equity of girls and women. Washington, DC: U. S. Department of Education. National Institute of Child Health and Human Development. (NICHD). (2000). Report of The National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Reports of the subgroups (NIH Publication No. 00-4754. Washington, DC: U.S. Government Printing Office. Newcomer, P. L., & Hammill, D. D. (1997). Test of Language Development-Primary: Third Edition. Austin, TX: PRO-ED. Newkirk, T. (2002). Misreading masculinity. Portsmouth, NH: Heinemann. Nunnally, J. C., (1978). Psychometric theory (2nd ed.). New York: McGraw-Hill. Onwuegbuzie, A. J. (2003). Expanding the framework of internal and external validity in quantitative research. Research in the Schools, 10(1), 71-89. 127

Onwuegbuzie, A. J., & Daniel, L. G. (2003). Typology of analytical and interpretational Errors in quantitative and qualitative educational research. Current Issues in Education, 6(2). Retrieved September 16, 2005 from http://cie.ed.asu/volume6/ number2 Onwuegbuzie, A. J., & Teddlie, C. (2003). A framework for analyzing data in mixed methods research. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (pp. 189-208). Thousand Oaks, CA: Sage. Patterson, W., Henry, J., OQuin, K., Ceprano, M., & Blue, E. (2003). Investigating the effectiveness of an integrated learning system on early emergent readers. Reading Research Quarterly, 3, 2, 172-207. Patton, M. (1990). Qualitative evaluation and research methods (2nd ed.). Newbury Park, CA: Sage. Programme for International Student Assessment. (PISA). (2003). Report of the International Reading Association PISA task force. Retrieved October 1, 2005, from www.reading.org/resources/issues/reports/pisa.html Pressley, M. (1998). Reading instruction that works: The case for balanced teaching. New York: Guilford Press. Rathvon, N. (2004). Early reading assessment: A practitioners handbook. New York: Guilford. Rayner, K., & Pollatsek, A. (1989). The psychology of reading. Englewood Cliffs, NJ: Prentice Hall. 128


Reid, K., Hresko, W., & Hammill, D. (2001). Test of Early Reading Ability (3rd ed.). Austin, TX: PRO-ED.
Reitsma, P., & Wesseling, R. (1998). Effects of computer-assisted training of blending skills in kindergarten. Scientific Studies of Reading, 2, 301-20.
Revelle, G. L., & Strommen, E. F. (1990). Effects of practice and input device used on young children's computer control. Journal of Computing in Childhood Education, 2(2), 33-41.
Reyna, V. (2004). Why scientific research? In P. McCardle & V. Chhabra (Eds.), The voice of evidence in reading research (pp. 47-58). Baltimore, MD: Brookes.
Robinson, L. (2004). Engaging young children in computer activities. Retrieved January 6, 2005, from http://www.wiu.edu/users/mimacp/wiu/articles/engag.html
Rowe, K. J. (2000, August). Problems in the education of boys and exploring real effects from evidence-based research: Useful findings in teaching and learning for boys and girls. Background paper of keynote address presented at the Teaching Boys, Developing Fine Men Conference, Brisbane, Australia.
Ruddell, R. B., & Ruddell, M. R. (1994). Language acquisition and literacy processes. In R. B. Ruddell, M. R. Ruddell, & H. Singer (Eds.), Theoretical models and processes of reading (Vol. 4, pp. 83-103). Newark, DE: IRA.
Rumelhart, D. (1975). Notes on a schema for stories. In D. G. Bobrow & A. M. Collins (Eds.), Representation and understanding: Studies in cognitive psychology (pp. 211-236). New York: Academic.


Scarborough, H. S. (1991). Antecedents to reading disability: Preschool language development and literacy experiences of children from dyslexic families. Reading and Writing, 3, 219-3.
Simmons, D. C., & Kame'enui, E. J. (2003). A consumer's guide to evaluating a core reading program grades K-3: A critical elements analysis. Retrieved November 12, 2003, from http://reading.uoregon.edu/appendices
Sindelar, P. T., Lane, H. B., Pullen, P. C., & Hudson, R. F. (2002). Reading interventions for students with decoding problems. In M. A. Shinn, H. M. Walker, & G. Stoner (Eds.), Interventions for academic and behavior problems II: Preventive and remedial approaches (pp. 703-727). Bethesda, MD: NASP.
Skinner, B. F. (1953). Some contributions of the experimental analysis of behavior to psychology as a whole. American Psychologist, 8, 69-78.
Skinner, B. F. (1968). The technology of teaching. New York: Appleton-Century-Crofts.
Snow, C. E., Burns, M. S., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children. Washington, DC: National Academy Press.
Stanovich, K. E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 26, 32-71.
Stanovich, K. E. (2000). Progress in understanding reading: Scientific foundations and new frontiers. New York: Guilford.
Stevens, J. P. (2002). Applied multivariate statistics for the social sciences (4th ed.). Mahwah, NJ: Erlbaum.
Strauss, A., & Corbin, J. (1998). Basics of qualitative research: Techniques and procedures for developing grounded theory (2nd ed.). Thousand Oaks, CA: Sage.


Tashakkori, A., & Teddlie, C. (1998). Mixed methodology: Combining qualitative and quantitative approaches. Applied Social Research Methods Series (Vol. 46). Thousand Oaks, CA: Sage.
Taylor, D., & Dorsey-Gaines, C. (1988). Growing up literate. Portsmouth, NH: Heinemann.
Teale, W., & Sulzby, E. (1986). Emergent literacy: Writing and reading. Norwood, NJ: Ablex.
Teale, W., & Yokota, J. (2000). Beginning reading and writing: Perspectives on instruction. In D. S. Strickland & L. M. Morrow (Eds.), Beginning reading and writing (pp. 3-21). New York: Teachers College Press.
Torgesen, J. K. (1999). Phonologically based reading disabilities: Toward a coherent theory of one kind of learning disability. In R. J. Sternberg & L. Spear-Swerling (Eds.), Perspectives on learning disabilities (pp. 231-262). New Haven: Westview Press.
Torgesen, J. K. (2002). The prevention of reading difficulties. Journal of School Psychology, 40(1), 7-26.
Torgesen, J. K. (2004). Lessons learned from research on interventions for students who have difficulty learning to read. In P. McCardle & V. Chhabra (Eds.), The voice of evidence in reading research (pp. 355-382). Baltimore, MD: Brookes.
Torgesen, J. K., & Barker, T. A. (1995). Computers as aids in the prevention and remediation of reading disabilities. Learning Disabilities Quarterly, 18, 76-87.


Twyman, J., Layng, J., Stikeleather, G., & Hobbins, K. (2004). A non-linear approach to curriculum design: The role of behavior analysis in building an effective reading program. In W. L. Heward, T. Heron, N. Neff, S. Peterson, D. Sainato, G. Cartledge, R. Gardner, L. Peterson, S. Hersh, & J. Dardig (Eds.), Focus on behavior analysis in education (Vol. 3, pp. 55-68). Upper Saddle River, NJ: Merrill/Prentice Hall.
U.S. Department of Education (DOE). (2001). No Child Left Behind Act of 2001, Pub. L. No. 107-110. Retrieved October 8, 2004, from http://www.NoChildLeftBehind.gov
U.S. Department of Education (DOE). (2002). Reading First. Retrieved July 28, 2004, from http://www.ed.gov/programs/readingfirst/faq.html
U.S. Department of Education (DOE). (2005). Early Reading First (ERF). Retrieved February 15, 2005, from http://www.ed.gov/print/programs/earlyreading/index.html
U.S. Department of Health and Human Services. (2004). 2004 Federal Poverty Guidelines. Retrieved October 26, 2004, from http://www.aspe.hhs.gov/poverty.shtml
Van Daal, V. H. P., & Reitsma, P. (2000). Computer-assisted learning to read and spell: Results from two pilot studies. Journal of Research in Reading, 23, 181-193.
Vygotsky, L. S. (1978). Mind in society. Cambridge, MA: MIT Press.
Wagner, R. K., & Torgesen, J. K. (1987). The nature of phonological processing and its causal role in the acquisition of reading skills. Psychological Bulletin, 101, 192-212.


Wepner, S. B., & Ray, L. C. (2000). Sign of the times: Technology and early literacy learning. In D. S. Strickland & L. M. Morrow (Eds.), Beginning reading and writing (pp. 99-110). New York: Teachers College Press.
Wharton-McDonald, R., Pressley, M., & Hampston, J. (1998). Literacy instruction in nine first-grade classrooms: Teacher characteristics and student achievement. Elementary School Journal, 99, 101-128.
Willis, J. W., & Mehlinger, H. D. (1996). Information technology and teacher education. In J. Sikula, T. Buttery, & E. Guyton (Eds.), Handbook of research on teacher education (pp. 978-1029). New York: Simon & Schuster.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson Test III. Itasca, IL: Riverside.


Appendices


Appendix A. Summary and Procedures of Research for Teachers & Assistants

Mary Huffstetter, M.Ed., BCBA
USF Doctoral Student, Curriculum & Instruction, Reading/Language Arts
772-408-7755  mimihuff23@yahoo.com

Proposed Dissertation Study: Investigating the effects of generative instruction, within the context of an internet-based reading program, on 4-year-old students

Rationale: Need to identify research-based literacy programs for our preschool population.

Design: Pretest-Posttest Control Group Design
Approval will be obtained from the IRB of the University of South Florida. Permission will be obtained from parents before any student is included in the study. Two Head Start centers will be randomly chosen, and then students from those centers will be randomly assigned to one of two groups. A tutorial program will be conducted to ensure each student can control a mouse and follow one-step directions. Students who are not able to do these two things will be excluded from the study but will be offered the program with the control group. Pretests will be conducted using the Test of Early Reading Ability (TERA-3) and the Test of Language Development (TOLD-3). Each group will consist of 31 students.

Intervention: Headsprout Reading Basics program; students will engage in daily, 30-minute lessons for 9 weeks. The control group will receive 30 minutes of computer time with Millie's Math House and will be offered the Headsprout program during the summer upon the teachers' and Head Start administration's request. Teachers will receive an initial 2-hour training session and will have access to both programs and daily access to the researcher. Following the 9-week intervention, teachers and teachers' assistants will be asked to participate in an interview designed to capture their perceptions of the strengths and weaknesses of the intervention.

Resources needed: Electrical hookup & space to park the mobile lab on a daily basis.

Resources provided: Headsprout Reading Basics program, Millie's Math House program, TOLD-3 testing materials, TERA-3 testing materials, paper to print supplemental books, Internet hookup.

Results will be discussed with the Head Start teachers and administrators before deciding on subsequent access to the program for the control group.
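The random-assignment step described above can be illustrated with a short sketch. The code below is an illustration only, not part of the original study materials; the student identifiers, the fixed seed, and the helper name assign_groups are hypothetical, and the procedure actually used in the study may have differed.

import random

def assign_groups(student_ids, seed=2005):
    """Randomly split eligible students into two equal-sized groups.

    Returns (experimental, comparison). With 62 eligible children this
    yields the 31/31 split described above.
    """
    rng = random.Random(seed)  # fixed seed so the illustrated split is reproducible
    shuffled = list(student_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical identifiers for 62 eligible students across the two centers.
students = [f"S{n:02d}" for n in range(1, 63)]
experimental, comparison = assign_groups(students)
print(len(experimental), len(comparison))  # 31 31

Fixing the seed simply makes the illustrated split reproducible; any unbiased shuffling procedure would serve the same purpose.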


Appendix B. Interview Protocol for Pilot Study

Background information: # of years taught? Other ages taught & # of years? Educational background? Degrees? Training?

1. Do you think Headsprout Reading Basics increases expressive and receptive oral language skills? If so, how?
2. Do you think Headsprout Reading Basics increases alphabetic awareness skills? Print awareness skills? Phonological awareness skills? If so, how?
3. What early reading skills do you think Headsprout Reading Basics does not address, or addresses poorly?
4. What difficulties, if any, did you experience incorporating Headsprout Reading Basics into your existing curriculum?
5. What activities, if any, were left out of your day due to the addition of Headsprout Reading Basics?
6. What are your thoughts about the developmental appropriateness of Headsprout Reading Basics for your students?
7. What comments, if any, did you hear from the children's parents regarding their child's involvement in Headsprout Reading Basics?
8. How would you measure the success of Headsprout Reading Basics?
9. How could Headsprout Reading Basics be improved upon?
10. What are your thoughts about using Headsprout Reading Basics in the future?
11. What thoughts do you have about this experience that haven't been covered by the previous questions?


Appendix C. Interview Protocol

1. Based on your interactions with the program and the monitoring of your students, do you think Headsprout Reading Basics helped develop your students' oral language skills? If so, how?
2. Based on your interactions with the program and the monitoring of your students, do you think Headsprout Reading Basics helped develop your students' early reading skills? If so, how?
3. What early reading skills do you think Headsprout Reading Basics does not address, or addresses poorly?
4. What difficulties, if any, did you experience incorporating Headsprout Reading Basics into your existing curriculum?
5. What activities, if any, were left out of the children's day due to the addition of Headsprout Reading Basics?
6. What are your thoughts about the developmental appropriateness of Headsprout Reading Basics for your students?
7. What comments, if any, did you hear from the children regarding their involvement in Headsprout Reading Basics?
8. What comments, if any, did you hear from the children's parents regarding their child's involvement in Headsprout Reading Basics?
9. How would you measure the success of Headsprout Reading Basics?
10. How could Headsprout Reading Basics be improved upon?
11. What are your thoughts about using Headsprout Reading Basics in the future?
12. What thoughts do you have about this experience that haven't been covered by the previous questions?


Appendix D. Teacher and Teacher Assistant Demographic Information

Head Start Teacher and Teacher Assistant Demographic Information

Name: ____________________________________________
Head Start Center: ___________________________________
Position: ___________________________________________

1. Degree(s) received. (Circle any that apply.)
High School Diploma
GED
Associate's in ____________________________
Bachelor's in _____________________________
Master's in ______________________________
Other __________________________________

2. Years of teaching experience in the pre-k setting? ________________

3. Have you taught other age groups? ________________
If so, what age(s)? ____________________ # of years? ___________


Appendix E. Teacher Orientation to Headsprout Reading Basics & Millie's Math House

Time requested for training: 2 hours
Location of training: Head Start centers
Materials used:
Headsprout Overview retrieved from: http://www.headsprout.com/info/presentations/html/orientation/slide1.htm
Headsprout practice lessons accessed @ www.headsprout.com
Millie's Math House Teacher's Guide (Riverdeep, 2003)

I. Introduction to Headsprout Reading Basics (10 min.)
II. Scope & Sequence of Headsprout Reading Basics (10 min.)
III. Supplemental Materials for Headsprout Reading Basics (10 min.)
IV. Introduction to Millie's Math House (10 min.)
V. Scope & Sequence of Millie's Math House (10 min.)
VI. Supplemental Materials for Millie's Math House (10 min.)
VII. Exploration of Headsprout Episodes via the internet (20 min.)
VIII. Exploration of Millie's Math House software (20 min.)
IX. Question and Answer Session (20 min.)


Appendix F. Implementation Checklists

Headsprout Implementation Checklist    Week of: ________________

Directions: Put a checkmark in the first column when the teacher or assistant performs a task (present {P}). Put a checkmark in the second column if the task is not completed (absent {A}). Each task is scored P or A for each day: Mon, Tues, Wed, Thurs, Fri.

Teacher/Assistant tasks:
- Review performance data and reset and/or use cards for review
- Confirm students begin & stay in their programs (say nothing if they exit, just sign back in)
- Praise oral responding
- Praise finishing an episode
- Respond to requests for help by redirecting back to the program
- Have the students read the HS story when they finish the episodes where a book icon is displayed
- Conduct benchmark assessments
- Put headset cords behind heads (prevents chewing)
- Adjust volume if necessary
- Give students a sticker upon daily completion / have students mark their progress on maps (Fri only)


Appendix F (Continued). Implementation Checklists

Millie's Math House Implementation Checklist    Week of: ________________

Directions: Put a checkmark in the first column when the teacher or assistant performs a task (present {P}). Put a checkmark in the second column if the task is not completed (absent {A}). Each task is scored P or A for each day: Mon, Tues, Wed, Thurs, Fri.

Teacher/Assistant tasks:
- Review the skill checklist and tell students the skill they'll work on today
- Confirm students begin & stay in their programs (say nothing if they exit, just sign back in)
- Model the new skill for students if they do not complete it independently
- Praise/document independent completion of the skill in the Q & A mode
- Praise practice in explore and discover mode
- Encourage review of previously taught skills when finished with today's skill
- Put headset cords behind heads (prevents chewing)
- Adjust volume if necessary
- Report use or non-use of the teacher's in-class activities
- Give students a sticker upon daily completion
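One way the present/absent marks from these checklists could be summarized is as a simple implementation-fidelity percentage. The sketch below is illustrative only; the week data and the function name fidelity_percent are hypothetical and do not come from the study.

def fidelity_percent(week_marks):
    """Return implementation fidelity as the percentage of observed tasks marked present.

    week_marks maps each day to a list of 'P'/'A' marks, one per checklist task;
    days that were not observed can simply be omitted.
    """
    marks = [mark for day in week_marks.values() for mark in day]
    if not marks:
        return 0.0
    return 100.0 * marks.count("P") / len(marks)

# Hypothetical marks for a three-task excerpt of the checklist over one week.
week = {
    "Mon": ["P", "P", "A"],
    "Tues": ["P", "A", "P"],
    "Wed": ["P", "P", "P"],
}
print(f"Fidelity: {fidelity_percent(week):.1f}%")  # Fidelity: 77.8%

With the hypothetical marks above (7 of 9 observed tasks marked present), the reported fidelity would be 77.8%.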


Appendix G. Institutional Review Board Consent Forms

Parental Informed Consent
Social and Behavioral Sciences
University of South Florida

Information for parents who are being asked to allow their child to take part in a research study

Researchers at the University of South Florida (USF) study many topics. For example, we want to identify programs and methods that will help preschool teachers increase the early reading skills of their students. To do this, we need the help of people who agree to take part in a research study.

Title of research study: The Effects of an Internet-Based Reading Program on At-Risk Preschool Students and Their Teachers
Person in charge of study: Mary Huffstetter
Study staff who can act on behalf of the person in charge: None
Where the study will be done: Francina Duval Head Start Center & George W. Truitt Head Start Center in Fort Pierce, FL
Who is paying for it: Private donations to Literacy Launchers, Inc.

Should your child take part in this study? This form tells you about this research study. You can decide if you want your child to take part in it. They do not have to take part. Reading this form can help you decide.

Before you decide: Read this form, then talk to Mary Huffstetter and ask her to answer any questions you may have. Or you can talk to your child's teacher or an interpreter if you need further clarification. You can have someone with you when you talk about the study. Find out what the study is about.

You can ask questions: You may have questions this form does not answer. If you do, ask the person in charge of the study or study staff as you go along. You don't have to guess at things you don't understand. Ask the people doing the study to explain things in a way you can understand.

After you read this form, you can: Take your time to think about it. Have a friend or family member read it. Talk it over with someone you trust. It's up to you. If you choose to let your child be in the study, then you can sign the form. If you do not want your child to take part in this study, do not sign the form.


Appendix G (Continued). Institutional Review Board Consent Forms

Why is this research being done? The purpose of this study is to find out if an Internet-based supplemental reading program can assist preschool teachers in increasing the early reading skills of their students. We are asking your child to take part in this study because we are interested in identifying programs or methods that will assist in increasing the reading skills of at-risk students. Because your child qualified for a Head Start program, he/she is considered at risk for possible reading difficulties.

How long will your child be asked to stay in the study? Your child will be asked to spend about 10 weeks in this study (pre- and posttests and a 9-week intervention). If the teachers find the program desirable and your child has not finished all 40 lessons, he/she will be allowed to continue working in the program during the summer. The children in the control group, who will receive mathematics instruction in this study, will be offered the reading instruction in the summer.

How often will your child need to come for study visits? A study visit is one you have with the person in charge of the study or study staff. Your child will need to come for approximately 42 study visits in all, as there are 40 lessons and we will also be conducting pre- and post-testing. Your child's teacher or teacher's assistant will be implementing the program in coordination with Mary Huffstetter. Children will be working on the computer daily for a 9-week period. Most study visits will take about 20-30 minutes. At these visits, the person in charge of the study or staff will:
Conduct pretests using the Test of Early Reading Ability-3 and the Test of Language Development-3 for all children.
Each day, provide access to the computer programs and monitor the teachers and teachers' assistants as they implement the programs. The intervention phase will last 9 weeks.
Conduct posttests using the Test of Early Reading Ability-3 and the Test of Language Development-3 for all students in this study after the 9-week intervention phase.

How many other people will take part? One other Head Start Center is participating in this study. It is estimated that 60-80 children and 7-8 teachers and teachers' assistants will be involved.

What other choices do you have if you decide not to let your child take part? It is okay if you decide not to let your child take part in this study. Your child can work at a teacher-designed learning center if you do not want him/her to be included in this study.

How do you get started? If you decide to let your child take part in this study, you will need to sign this consent form and return it to your child's Head Start Center.


Appendix G (Continued). Institutional Review Board Consent Forms

What will happen during this study? Demographic information (i.e., gender, ethnicity, age in months, ESOL and ESE status) will be obtained from every student's school records for descriptive and comparative purposes. All children will be pre-assessed (using the Mousing Around tutorial) to see if they can control a computer mouse and follow one-step directions. If they can, they will proceed in the study. If they cannot, they will be assigned a volunteer to work on these skills and will be placed on the computer when they are ready, but will not be included in the data collection for this study.

For the children who pass the pre-assessment: Each child will be placed in one of two groups. Both groups will be pre-tested using the Test of Early Reading Ability-3 and the Test of Language Development-3. One group will then receive 9 weeks of instruction in an Internet-based reading program called Headsprout. The other group will receive instruction in the Millie's Math House program. At the end of the 9-week period, both groups will be post-tested. The scores will be compared to assess the effects of the reading program on early reading skills.

Here is what your child will need to do during this study: Work on the computer program he/she has been assigned to for 20-30 minutes a day and read the Headsprout Readers and companion books for the reading program when ready.

Will you or your child be paid for taking part in this study? We will not pay or reward your child for their time in this study.

What will it cost you to let your child take part in this study? It will not cost you anything to take part in the study. The study will pay the costs of the Internet-based reading program, the mathematics software program, and the use of the mobile computer lab.

What are the potential benefits to your child if you let him/her take part in this study? We don't know if your child will get any benefits by taking part in this study, but it is possible that they will improve their reading, math, and/or computer skills.

What are the risks if your child takes part in this study? There are no known risks to those who take part in this study. If your child mentions any problems during this study, please call Mary Huffstetter right away at 772-408-7755.

If your child is harmed because he/she takes part in the study: We will pay the medical costs if your child was harmed because our staff did something they should not have done. Florida law limits how much USF is able to pay. USF cannot pay for lost wages, disability, or discomfort. Read Florida Statute 768.28 to find out how much USF is able to pay. You can get a copy of the law by calling USF Research Compliance at (813) 974-5638. Call the USF Self Insurance Programs (SIP) at (813) 974-8008 and ask them to look into what happened.

Affiliate Statement: Literacy Launchers, Inc. carries liability insurance through National Liability & Fire Insurance Company should your child be injured while on the mobile computer lab. Your child will never be in the lab when it is moving. Call Susan Port at (772) 461-6040 and ask her to check into any incident.


Appendix G (Continued). Institutional Review Board Consent Forms

What will we do to keep your child's study records from being seen by others? Federal law requires us to keep your child's study records private. Your child will be assigned a number and all data will refer to that number in place of your child's name. Headsprout requires a password to access child information. Mary Huffstetter will be the only one to have access to that password. All records will be kept in a locked cabinet in Mary Huffstetter's office. However, certain people may need to see your child's study records. By law, anyone who looks at your child's records must keep them confidential. The only people who will be allowed to see these records are:
The study staff (Mary Huffstetter)
The Head Start director, teachers, and teachers' assistants
Headsprout personnel
People who make sure that we are doing the study in the right way, and who also make sure that we protect your rights and safety:
The USF Institutional Review Board (IRB), its staff, and other individuals acting on behalf of USF
The United States Department of Health and Human Services (DHHS)

We may publish what we find out from this study. If we do, we will not use your child's name or anything else that would let people know who your child is.

What happens if you decide not to let your child take part in this study? You should only let your child take part in this study if both of you want to take part. If you decide not to let your child take part: You and your child won't be in trouble or lose any rights either of you normally have. You and your child will still get the same services you would normally have. You and your child can still get your regular educational experiences.

What if you let your child join the study and then later decide you want to stop? If you decide you want to stop taking part in the study, tell the study staff as soon as you can. We will tell you how to stop safely. We will tell you if there are any dangers if you stop suddenly. If you decide to stop, your child will continue receiving a regular preschool educational experience.

Are there reasons we might take your child out of the study later on? Even if you want your child to stay in the study, there may be reasons we will need to take him/her out of it. Your child may be taken out of this study: If you or your child are not coming for your study visits when scheduled. If your child is showing any ongoing signs of discomfort caused by participation in this study.

You can get the answers to your questions. If you have any questions about this study, call Mary Huffstetter at 772-408-7755. If you have questions about your rights as a person who is taking part in a study, call USF Research Compliance at (813) 974-5638.


Appendix G (Continued). Institutional Review Board Consent Forms

Consent for Child to Take Part in this Research Study

It's up to you. You can decide if you want your child to take part in this study.

I freely give my consent to let my child take part in this study. I understand that this is research. I have received a copy of this consent form. As my child is only 4 or 5 years old, he/she will not be signing an assent form.

________________________   ________________________   ___________
Signature of Parent         Printed Name of Parent      Date
of child taking part in study

Statement of Person Obtaining Informed Consent

I have carefully explained to the person taking part in the study what he or she can expect. The person who is giving consent to take part in this study:
Understands the language that is used.
Reads well enough to understand this form, or is able to hear and understand when the form is read to him or her.
Does not have any problems that could make it hard to understand what it means to take part in this study.
Is not taking drugs that make it hard to understand what is being explained.

To the best of my knowledge, when this person signs this form, he or she understands:
What the study is about.
What needs to be done.
What the potential benefits might be.
What the known risks might be.
That taking part in the study is voluntary.

________________________   ________________________   ___________
Signature of Investigator    Printed Name of Investigator   Date
or authorized research investigator designated by the Principal Investigator

Child participant is 4 or 5 years old and won't be asked for assent.


Appendix G (Continued). Institutional Review Board Consent Forms

Informed Consent
Social and Behavioral Sciences
University of South Florida

Information for People Who Take Part in Research Studies

The following information is being presented to help you decide whether or not you want to take part in a minimal risk research study. Please read this carefully. If you do not understand anything, ask the person in charge of the study.

Title of Study: The Effects of an Internet-Based Reading Program on At-Risk Preschool Students and Their Teachers
Principal Investigator: Mary Huffstetter
Study Location(s): Francina Duval Head Start Center and George W. Truitt Head Start Center

You are being asked to participate because you teach students who may be at risk for reading difficulties and we are seeking your insight into the desirability of a supplemental, Internet-based reading program for your preschool students.

General Information about the Research Study
The purpose of this research study is to find out if an Internet-based supplemental reading program can assist preschool teachers in increasing the early reading skills of at-risk students. Following the implementation of this program, you will be asked to participate in an interview which is expected to last from half an hour to an hour. I will consult with a peer group to identify patterns in all the teachers' responses to determine your perceptions (including your likes and dislikes) of the Internet-based reading program after first-time implementation.

Plan of Study
All children will be pre-assessed (using the Mousing Around tutorial) to see if they can control a computer mouse and follow one-step directions. If they can, they will be included in the study. If they cannot, they will be assigned to a volunteer to work on these skills and will be placed on the computer when they are ready, but will not be included in the data collection of this study. The children who pass the pre-assessment will be placed in one of two groups. Both groups will be pretested using the Test of Early Reading Ability-3 and the Test of Language Development-3. One group will then receive 9 weeks of instruction in an Internet-based reading program called Headsprout. The other group will receive instruction in the Millie's Math House program. At the end of the 9-week period, both groups will be post-tested. The scores will be compared to assess the effects of the reading program on early reading skills. Teacher and teacher assistant demographic data (i.e., age, race, gender, educational degree, and number of years taught) will be obtained from each participant for descriptive and comparative purposes. Teachers and teachers' assistants will be interviewed after the 9-week intervention period, prior to the release of the posttest results, to gain information on your perceptions of the Internet-based reading program after first-time implementation.
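As an illustration of the kind of group comparison described in the Plan of Study above, the sketch below compares hypothetical posttest scores for the two groups with an independent-samples t-test and a pooled-standard-deviation effect size. The score lists are invented, and this is not necessarily the analysis used in the dissertation.

from statistics import mean, stdev
from scipy import stats

# Hypothetical posttest scores for the two groups (invented values).
headsprout_scores = [88, 92, 85, 90, 95, 87, 91]
millie_scores = [80, 84, 79, 82, 86, 81, 83]

# Independent-samples t-test on the posttest scores.
t_stat, p_value = stats.ttest_ind(headsprout_scores, millie_scores)

# Effect size (Cohen's d) using a pooled standard deviation.
n1, n2 = len(headsprout_scores), len(millie_scores)
pooled_sd = (((n1 - 1) * stdev(headsprout_scores) ** 2 +
              (n2 - 1) * stdev(millie_scores) ** 2) / (n1 + n2 - 2)) ** 0.5
cohens_d = (mean(headsprout_scores) - mean(millie_scores)) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")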


Appendix G (Continued). Institutional Review Board Consent Forms

Payment for Participation
You will not be paid for your participation in this study.

Benefits of Being a Part of this Research Study
By taking part in this research study, you may increase our overall knowledge of appropriate reading instruction for preschool students.

Risks of Being a Part of this Research Study
You will experience no risk as a result of this study.

Confidentiality of Your Records
Your privacy and research records will be kept confidential to the extent of the law. Authorized research personnel, employees of the Department of Health and Human Services, and the USF Institutional Review Board, its staff, and other individuals acting on behalf of USF, may inspect the records from this research project. The results of this study may be published. However, the data obtained from you will be combined with data from others in the publication. The published results will not include your name or any other information that would personally identify you in any way. All materials pertaining to this study will be kept in a locked file cabinet in Mary Huffstetter's office. You will be assigned a reference number, so a number rather than your name will be attached to any responses you give. A group of peers will assist me in identifying patterns in your and your fellow teachers' responses.

In Case of Illness or Injury
If you get sick or injured while participating in this study, call Mary Huffstetter at (772) 408-7755. If you have an emergency, go to the closest emergency room or clinic for treatment. After you have been treated for this illness or injury, call the USF Self Insurance Programs at (813) 974-8008. They will investigate the matter.

Affiliate Statement: Literacy Launchers, Inc. carries liability insurance through National Liability & Fire Insurance Company should you be injured while on the mobile computer lab. You will never be on the lab when it is moving. Call Susan Port at (772) 461-6040 and ask her to check into any incident.

Volunteering to Be Part of this Research Study
Your decision to participate in this research study is completely voluntary. Your decision to participate or not to participate will in no way affect your job status. You are free to participate in this research study or to withdraw at any time. There will be no penalty if you stop taking part in the study.

Questions and Contacts
If you have any questions about this research study, contact Mary Huffstetter at (772) 408-7755. If you have questions about your rights as a person who is taking part in a research study, you may contact the Division of Research Compliance of the University of South Florida at (813) 974-5638.


Appendix G (Continued). Institutional Review Board Consent Forms

Consent to Take Part in This Research Study

By signing this form I agree that:
I have fully read, or have had read and explained to me, this informed consent form describing this research project.
I have had the opportunity to question one of the persons in charge of this research and have received satisfactory answers.
I understand that I am being asked to participate in research. I understand the risks and benefits, and I freely give my consent to participate in the research project outlined in this form, under the conditions indicated in it.
I have been given a signed copy of this informed consent form, which is mine to keep.

_________________________   ________________________________________
Signature of Participant      Printed Name of Participant      Date

Investigator Statement
I have carefully explained to the subject the nature of the above research study. I hereby certify that to the best of my knowledge the subject signing this consent form understands the nature, demands, risks, and benefits involved in participating in this study.

_________________________   ________________________________________
Signature of Investigator     Printed Name of Investigator     Date
or authorized research investigator designated by the Principal Investigator


Appendix H. Permission to use Headsprout Reading Basics Graphics

From: "Janet Twyman"
To: "'Mary Huffstetter'"
Subject: RE: permission to use graphics
Date: Mon, 17 Oct 2005 14:11:00

Mary,

Of course you have our permission. Please use the following conventions to cite us, depending on what you're referring to:

Headsprout
Headsprout Reading Basics
Sprout Stories

Best,
Janet

Janet S. Twyman, Ph.D.
Headsprout


About the Author

Mary Huffstetter received a bachelor's degree in Communications/Theatre Arts from St. Joseph's College in 1984 and an M.Ed. in Varying Exceptionalities from Florida Atlantic University in 1997. She taught special education in the St. Lucie County school district before transferring to the University of South Florida's Behavior Analysis Services Program, while also entering the Ph.D. program. While in the Ph.D. program, Ms. Huffstetter and a colleague began a non-profit organization, Literacy Launchers, Inc., to support the early childhood centers in St. Lucie County in their journeys to become centers of excellence.

