Educational policy analysis archives

Material Information

Title: Educational policy analysis archives
Publisher: Arizona State University; University of South Florida
Place of Publication: Tempe, Ariz.; Tampa, Fla.
Publication Date: June 30, 2003

Subjects / Keywords:
Education -- Research -- Periodicals (lcsh)
non-fiction (marcgt)
serial (sobekcm)

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
E11-00317 ( USFLDC DOI )
e11.317 ( USFLDC Handle )

Full Text
Record: Educational policy analysis archives. Vol. 11, no. 19 (June 30, 2003). Tempe, Ariz.: Arizona State University; Tampa, Fla.: University of South Florida. ISSN 1068-2341.
Article: Using large-scale research to gauge the impact of instructional practices on student reading comprehension: an exploratory study / Harold Wenglinsky.
Host: Education Policy Analysis Archives (EPAA).


A peer-reviewed scholarly journal
Editor: Gene V Glass, College of Education, Arizona State University

Copyright is retained by the first or sole author, who grants right of first publication to the EDUCATION POLICY ANALYSIS ARCHIVES. EPAA is a project of the Education Policy Studies Laboratory. Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education.

Volume 11, Number 19. June 30, 2003. ISSN 1068-2341.

Using Large-Scale Research to Gauge the Impact of Instructional Practices on Student Reading Comprehension: An Exploratory Study

Harold Wenglinsky
Baruch College

Citation: Wenglinsky, H. (June 30, 2003). Using Large-Scale Research to Gauge the Impact of Instructional Practices on Student Reading Comprehension: An Exploratory Study. Education Policy Analysis Archives, 11(19). Retrieved [date] from 19/.

Abstract

Small-scale research has identified classroom practices that are associated with high student performance in reading comprehension. It is not known, however, whether these findings generalize to larger samples and populations, as most large-scale studies of the impact of teaching on student performance do not include measures of classroom practices. Generalizing to larger populations is particularly important at a time when policies national in their scope are calling for "scientifically-based" instruction in reading. The current study explores the possibility of using large-scale data and methods to study classroom practices in reading comprehension. It finds that such studies are both feasible and necessary. They are feasible insofar as it proved possible to collect and analyze data on classroom practices and student reading comprehension, and discern substantial effects of the one on the other. They are necessary insofar as the study confirmed the effectiveness of some classroom practices but not others. It therefore cannot be assumed that the findings of small-scale studies generalize to large populations.

Introduction

With the passage of the Federal Reading First program and parallel state efforts to improve student reading skills, policy makers and educators have been looking for "scientifically-based" materials, professional development and instruction. While the research base on early reading skills such as phonics has been found to be substantial, there is a lack of conclusive evidence on effective instruction in more advanced skills such as reading comprehension.

This research gap is, for the most part, attributable to the small-scale nature of research that seeks to identify effective techniques for teaching reading comprehension. Over the last forty years, researchers have conducted a host of small-scale studies identifying certain classroom practices as effective in teaching reading comprehension. The findings of these studies have been remarkably consistent, with the same set of classroom practices again and again appearing to be related to student reading comprehension performance. The strength of these studies lies in their high level of internal validity. Many use experimental designs, and many others are quasi-experimental. These studies also possess a common shortcoming, however. Because the development of a robust design is extremely labor intensive, such studies tend to be small in scale, limited to a few classrooms or schools. The degree to which these studies apply to large populations is therefore not known.

Large-scale research does not provide much information on classroom effects.
The common method for gauging the impact of teaching on student performance using large-scale data is known as the production function. It involves collecting observational data on large numbers of teachers and students and then using the technique of regression analysis, which relates teacher characteristics to student performance. Most production function studies do not measure classroom practices, due to the difficulty of measuring them for large numbers of teachers. And most have not found a clear and consistent relationship to student outcomes for the teacher characteristics they do measure.

The present study addresses methodological problems common in the production function literature to demonstrate the possibility of using large-scale data to study classroom practices. The problems addressed here include the lack of measurement models, the low validity and reliability of teacher self-reports, and the interdependence of many independent variables. This is accomplished by using national data on 7,194 fourth graders and their teachers from the 2000 National Assessment of Educational Progress (NAEP). The study relates teachers' classroom practices, as well as their background characteristics, to student performance on a reading comprehension assessment, taking into account student background characteristics.
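A production function of the kind described above can be sketched in a few lines: regress student scores on teacher characteristics while controlling for student background. Everything below (the data, the effect sizes, and the variable names) is synthetic and purely illustrative; none of it comes from NAEP.

```python
# Sketch of a production-function regression: student scores regressed on
# teacher characteristics plus a student-background control.
# All data here are synthetic; variable names are illustrative, not NAEP's.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

ses = rng.normal(0, 1, n)            # student socio-economic status (control)
experience = rng.integers(1, 6, n)   # teacher years-of-experience band, 1..5
masters = rng.integers(0, 2, n)      # teacher holds a master's degree (0/1)

# Synthetic outcome: a large student effect and small teacher effects,
# mirroring the typical production-function pattern described in the text.
score = 200 + 15 * ses + 1.0 * experience + 0.5 * masters + rng.normal(0, 10, n)

# Ordinary least squares via numpy's least-squares solver.
X = np.column_stack([np.ones(n), ses, experience, masters])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(dict(zip(["intercept", "ses", "experience", "masters"], beta.round(2))))
```

With realistic noise, the small teacher coefficients are recovered far less precisely than the large student-background coefficient, which is one reason such studies struggle to detect teacher effects.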


The study finds that the addition of classroom practices to large-scale models of reading performance is vital to the successful isolation of teacher effects. Once such variables are introduced, teacher effects can prove quite substantial, nearly as large as student background effects. The study also finds that testing small-scale results with large-scale data is crucial for establishing the effectiveness of specific classroom practices. A link was confirmed between some but not all classroom practices and student performance. It would be premature, however, to draw substantive conclusions about effective reading practices from these findings. Rather, the purpose of this paper is to draw methodological conclusions about the viability of large-scale research on instructional practice. Before addressing these conclusions, however, it is worth reviewing key findings from the prior literature and describing in some detail how the current study was conducted.

Background

Intensive study has been devoted to classroom effects on reading. The reports of the National Research Council (Snow et al., 1998) and the National Reading Panel (2000) have identified hundreds of studies of how classroom practices affect reading performance. Much of this research, however, focuses on the early stages of reading skill acquisition, such as phonemic awareness and word recognition, with much less on reading comprehension (Snow et al., 2000). Nonetheless, the body of research on reading comprehension is substantial enough to make it possible to identify seven kinds of practices that are consistently associated with improvements in student reading comprehension.

First, students perform better when explicitly taught metacognitive skills. Metacognitive skills are the ways in which readers glean meaning from texts. In a series of studies, Durkin (1978, 1981) found that teachers, in explicating texts, rarely instruct students in methods of explication. In the wake of this finding, a variety of approaches to teaching metacognitive skills were developed, including reciprocal teaching, questioning and direct instruction. Research on these techniques has generally found positive effects on reading comprehension (Cross & Paris, 1988; Rosenshine & Meister, 1994; Hansen & Pearson, 1983; Wharton-McDonald et al., 1998; Kaniel et al., 2000; Mueller, 1997; Alfassi, 1998).

Second, students seem to perform better when reading and writing instruction are integrated. After years of these skills being taught separately, educators called for their integration in the 1980s. Later, reading and writing were combined under the single heading of language arts, as reflected in many of the recently promulgated state academic standards. Research on using writing to improve reading comprehension has generally supported the approach (Wharton-McDonald et al., 1998; Cantrell, 1999; Knapp et al., 1995).

Third, research on the texts students read has documented advantages to using trade books rather than basal readers. Basal readers tend to abridge texts to maximize accessibility, whereas trade books are real world texts, and are often chosen to convey content rather than just reading skills. The use of trade books has been found to increase student motivation as well as to improve reading comprehension skills (Popplewell & Doty, 2001; Guthrie, 2001; Guthrie et al., 2000; Guthrie, 2000; Guthrie et al., 1999; Guthrie, 1998; Guthrie, 1996).


Fourth, students seem to benefit from time spent reading in class. Research on time on task has suggested that more time spent teaching reading is associated with improved performance. Particularly strong effects have been found for time spent in the act of reading, with oral reading appearing more beneficial than silent reading (Topping & Paul, 1999; National Reading Panel, 2000).

Three other instructional techniques supported by the literature are having students work in groups, involving parents and using authentic assessments to measure student progress. Supposedly, students learn more from one another and are more motivated when engaging in group work. A variety of parental involvement activities, from checking homework to reading together, have been found to be conducive to improved reading performance (Epstein, 2001; Epstein & Dauber, 1994). And measuring student performance through tasks that are as similar as possible to class work and homework seems to be more effective than having students take traditional multiple-choice or short-answer tests.

While small-scale research has succeeded in identifying numerous ways in which teachers affect student performance, large-scale research has generally not been able to confirm these findings. Beginning with the Equality of Educational Opportunity Study (Coleman et al., 1966), production function studies have found that most teacher effects are overwhelmed by student effects. Of the hundreds of production functions estimated in the wake of the Coleman Report, less than one-third could discover a link between student outcomes and teacher experience, less than one-quarter could do so for teachers' salaries, and just one in ten could do so for educational attainment. Two types of teacher effect did prove more robust. Many studies have isolated modest effects for teachers' majoring in the subject they teach and teacher scores on basic skills tests. The relative lack of large-scale studies confirming teacher effects, however, has led to meta-analyses of them coming to divergent conclusions, some accepting and others questioning the existence of teacher effects (Hanushek, 1997; Hanushek, 1996a; Hanushek, 1996b; Hanushek, 1989; Greenwald, Hedges & Laine, 1996; Hedges & Greenwald, 1996; Hedges, Greenwald & Laine, 1994).

The disappointing results of large-scale studies may stem from their various methodological shortcomings. First, these studies tend to focus on teacher effects that are relatively easy to measure with large-scale data, namely teacher background characteristics such as education level or college major. Such studies thus tend not to measure the effects that small-scale research has found to be substantial. Second, such studies tend to lack measurement models; they assume variables are perfectly measured and do not develop constructs from multiple indicators. Yet measurement error in teacher self-reports of behavior and background is substantial, and can be minimized through multiple indicators (Mayer, 1999). Third, such studies tend not to relate independent variables to one another, when small-scale research suggests that teacher variables are very much affected by student background and school context.

A few large-scale studies do relate classroom practice to student performance in mathematics and science, and reveal substantial classroom effects. The nationally representative National Educational Longitudinal Study (National Center for Education Statistics, 1996) found that an emphasis by teachers on conveying higher-order thinking skills was positively associated with student performance in math but not in science. A study representative of the state of California (Cohen & Hill, 2000) found that reform-minded classroom practices were positively associated with student mathematics performance. And an analysis of the nationally representative 1996 National Assessment of Educational Progress in Mathematics (Wenglinsky, 2001) found that an emphasis on conveying higher-order thinking skills, engaging in hands-on learning activities, and receiving professional development to address special populations of students were all positively related to math scores. The fact that all three of these studies uncovered substantial teacher effects when classroom variables were included suggests the need for similar work in the area of reading. It is to this work that we now turn.

Research Questions, Data and Method

The exploratory study described here is designed to address two methodological research questions suggested by the prior literature. First, do the classroom practices identified as important by the small-scale literature prove to be uniformly related to student reading performance? If all are confirmed, large-scale research can be said to add little to what is already known. If it proves, however, that some practices are confirmed while others are not, this finding would suggest the need to conduct an independent program of studies using large-scale data. Second, does the addition of classroom practices to teacher effects models substantially increase the importance of these effects, compared to student background effects? If so, this finding would suggest the importance of including classroom practice variables in future large-scale studies of reading.
As will be seen, difficulties encountered in doing this exploratory study indicate that large-scale studies of classroom practices, while vital, raise many methodological hurdles that subsequent research will need to overcome.

To answer these questions, it was necessary to obtain large-scale data that were representative of a large population and included measures of student reading comprehension, teacher background, classroom practices and student and school background. Fortunately, a recent administration of the National Assessment of Educational Progress met these criteria. NAEP was administered to a nationally representative sample of 7,194 fourth graders in 2000 to assess their reading comprehension skills. In addition to the assessment, questionnaires were administered to students and their reading teachers, generating information on their backgrounds and classroom practices. (For an overview of the NAEP 2000 Reading Assessment, see National Center for Education Statistics, 2001.)

The use of the NAEP, however, introduces certain methodological hurdles that the current study needed to overcome. First, the study needed to appropriately handle variability in the reading comprehension measure. To limit the amount of time students were assessed, each student answered only a limited number of test items; consequently, it is not possible to generate a single student score. Instead NAEP provides five scores based upon the items the student answered and student and school background information. The recommended procedure for conducting secondary analyses using these five scores, known as plausible values, is to estimate a separate model for each and then pool them. The unstandardized and standardized coefficients are pooled by calculating their means and variances through the following formula:

v = u + (1.2)B,

where v is the pooled variance, u is the average sampling variance and B is the variance among the five plausible values. (The multiplier 1.2 is 1 + 1/m for m = 5 plausible values.)

Second, the study needed to appropriately handle the sample design. Because NAEP is a clustered, stratified sample, student and teacher observations are not independent of one another. If treated as a simple random sample, these observations will underestimate standard errors. Consequently, standard errors need to be adjusted. One acceptable technique for doing so is using a design effect. NAEP provides weights, known as jackknife weights, that can be used to estimate the effect of the sample design on the standard error of each coefficient, known as the design effect. Because of the computationally expensive nature of estimating the effect for every coefficient in a model, it is appropriate to estimate effects for a subset of coefficients and then select one of these for the purpose of inflating the standard errors of all coefficients.

To relate the teacher and student characteristics to student reading comprehension, the statistical technique of structural equation modeling (SEM) was employed. Like regression analysis, SEM makes it possible to relate independent variables to dependent variables, taking into account both the independent variables and statistical controls. It has two advantages over regression. First, it can test the fit of entire path models, meaning that it can estimate the coefficients and overall goodness of fit of models that relate independent variables to one another as well as to the dependent variable. This makes it possible to incorporate intervening variables into the model. Second, it can construct its independent and dependent variables from observed variables through factor models. This makes it possible both to take into account measurement error and to reduce such error through the use of multiple indicators.
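The plausible-value pooling rule and the design-effect adjustment described above can be sketched together. The coefficient estimates, sampling variances, and design effect below are invented numbers, used only to show the mechanics.

```python
# Pooling a coefficient estimated separately on each of the five NAEP
# plausible values (v = u + 1.2*B for m = 5), then inflating its standard
# error by a design effect. All numbers are made up for illustration.
import statistics

coefs = [1.48, 1.52, 1.45, 1.50, 1.55]               # one estimate per plausible value
sampling_vars = [0.040, 0.042, 0.039, 0.041, 0.043]  # squared SE from each model

m = len(coefs)                                   # m = 5 plausible values
pooled_coef = statistics.mean(coefs)             # pooled point estimate
u = statistics.mean(sampling_vars)               # average sampling variance
b = statistics.variance(coefs)                   # variance among the five estimates
v = u + (1 + 1 / m) * b                          # v = u + (1.2)B

deff = 2.5                                       # hypothetical jackknife design effect
se = (v * deff) ** 0.5                           # SE adjusted for the clustered sample
print(pooled_coef, round(se, 4))
```

Note that the between-imputation term B only adds a small amount here; when the five plausible-value estimates disagree more, B dominates and the pooled standard error grows accordingly.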
(Note 1) The current study estimated two sets of factor and path models: the first set consisted of five versions of a teacher background model, one for each plausible value, and the second set consisted of five versions of a classroom effects model, also one for each plausible value. These models were estimated using AMOS 3.6 (Arbuckle, 1996), an SEM package, and STREAMS 1.8 (Gustafsson & Stahl, 1997), a pre- and post-processor for SEMs. The factor model portion of the teacher background model constructed measures of four teacher background characteristics (major, education level, years of experience, and perceived preparedness to teach), two student background characteristics (socio-economic status (SES) and home reading behavior), one school characteristic (class size) and one student outcome (a plausible value for reading comprehension performance). SES was constructed from five measures, home reading behavior from four, and the rest from single measures. (Factor models for single measures fix factor loadings at 1 and error terms at 0. See Table 1 for the full list of the measures employed.)

Table 1. Descriptive Statistics for Teacher, Student and School Background Characteristics

                                                                     M      SD      N
Teacher Background
  Preparedness                                                     -.03     .86   7914
  Major in English (1=yes)                                          .19     .42   7914
  Major in Reading/Language Arts (1=yes)                            .22     .46   7914
  Education Level (1=Master's or more)                              .38     .44   7914
  Years of Experience (from 1=low to 5=high)                       3.33    1.22   7914
Student Socio-economic Status
  Family Subscribes to Newspaper (1=yes)                            .72     .43   7914
  Family Subscribes to Magazines (1=yes)                            .72     .41   7914
  Family Owns Encyclopedia (1=yes)                                  .80     .37   7914
  Family Owns More Than 25 Books (1=yes)                            .93     .25   7914
  Mother College Graduate (1=yes)                                   .71     .36   7914
Student Reading Background
  Uses Libraries (1=never or hardly ever to 4=almost every day)    2.60     .93   7914
  Talks to Peers about Reading (same scale)                        2.64    1.10   7914
  Reads for Fun (same scale)                                       3.03    1.04   7914
  Self-assessment of Reading Skills
    (1=a poor reader to 4=a very good reader)                      3.20     .83   7914
School Background
  Class Size (1=36 or more students to 5=1-20 students)            4.00     .86   7914

The path portion of the model then related the student outcome to the teacher background measures, taking into account the student background and school measures. The factor portion of the classroom practices model constructed measures of eight classroom practices (teaching metacognitive skills, integrating reading and writing, use of reading materials such as trade books and basal readers, time spent reading in class, working in groups, parental involvement, authentic assessment and traditional assessment), two student background characteristics (SES and home reading behavior), one school characteristic (class size) and one student outcome (a plausible value). Metacognitive skills was constructed from five measures, integrating reading and writing from four measures, reading materials from five measures, time spent reading from two measures, group work from two measures, authentic assessment from four measures and traditional assessment from three measures. (See Table 2 for the full list.)
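The multiple-indicator idea behind these factor models can be approximated crudely without any SEM software: standardize each indicator of a construct and average them, so that item-specific measurement error partially cancels. This is only a stand-in for the factor models the study actually estimated in AMOS (a real factor model estimates loadings rather than weighting items equally), and the data below are synthetic.

```python
# Crude stand-in for a multiple-indicator construct: an SES composite built
# from five binary home-resource items generated from one latent trait.
# Synthetic data; a real SEM would estimate loadings instead of averaging.
import numpy as np

rng = np.random.default_rng(1)
n = 500

# One latent SES trait per student, plus five noisy binary indicators of it
# (e.g., newspaper, magazines, encyclopedia, >25 books, mother a graduate).
latent = rng.normal(0, 1, n)
indicators = np.column_stack(
    [(latent + rng.normal(0, 1, n) > 0).astype(float) for _ in range(5)]
)

# Standardize each indicator, then average: one composite score per student.
z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
ses_composite = z.mean(axis=1)

# The composite tracks the latent trait much better than any single item,
# which is the error-reduction argument made in the text.
composite_r = float(np.corrcoef(ses_composite, latent)[0, 1])
single_item_r = float(np.corrcoef(indicators[:, 0], latent)[0, 1])
print(round(composite_r, 2), round(single_item_r, 2))
```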


Table 2. Descriptive Statistics for Classroom Practices

(Instructional items are rated from 1=never or hardly ever to 4=almost every day; assessment items from 1=never or hardly ever to 4=once or twice a week.)

                                       M      SD
Metacognitive Skills
  Describe Author's Method           2.61    .80
  Explain Reading                    3.39    .61
  Make Generalizations               3.44    .59
  Predict Outcomes                   3.48    .59
  Learning New Words in Context      3.78    .39
Writing in Service of Reading
  Writing about Literature            .43    .46
  Reading and Writing                 .70    .43
  Writing about Reading              3.18    .69
  Answers Questions in Writing       3.37    .61
Reading Materials
  Trade Books                         .20    .37
  Basal Readers                       .17    .34
  Reading Kits                       1.88   1.01
  Children's Newspapers              2.08    .79
  Worksheets                         2.97    .82
Time Reading
  Reading Aloud                      3.59    .57
  Reading Silently                   3.78    .45
Group Work
  Work in Small Groups                .69    .42
  Engage in Group Activities         2.34    .73
Parental Involvement
  Parents Check Homework             2.33   1.01
  Parents Sign Off                   2.96    .92
Authentic Assessment
  Portfolios                         1.99    .97
  Paragraphs                         3.19    .80
  Projects                           2.65    .67
  Oral                               3.00    .88
Traditional Assessment
  Multiple-Choice Test               2.87    .88
  Short-Answer Test                  3.17    .78
  Tests                              2.49    .67

For the final version of the models, some of the multiple indicator constructs were turned into single indicator constructs to identify which indicator was responsible for the classroom effect of a construct. Thus, integrating reading and writing and reading materials were divided into their constituent indicators. Student background, school characteristics, and student outcomes were measured as per the teacher background models. The path portion of the classroom effects model related the classroom practice constructs to the student outcome, taking into account student background and school characteristics, as well as relating the student background and school characteristics to each of the classroom practices, thus making it possible to gauge the extent to which classroom practices acted as intervening variables between student background, school characteristics and student outcomes. (Note 2)

Results


The factor models and goodness-of-fit statistics reveal that the models fit the data well. For all factor models, the constructs loaded substantially on all of the corresponding indicators, and all loadings were statistically significant at the .05 level. (Factor models are not presented here, but are available upon request.) All ten of the factor and path models also had adequate goodness-of-fit statistics. For the teacher background models, the RMSEAs were at the .03 level, with normed goodness-of-fit indices of .92 and comparative goodness-of-fit indices at .92 and .93, depending upon the plausible value. For the classroom practice models, the RMSEAs were at the .05 level, with both normed and comparative goodness-of-fit indices at .98. These results suggest that the hypothesized models were confirmed by the observed data.

The path models for teacher background reveal only a modest effect of teaching on student reading comprehension (Table 3). The strongest effects come from students, with SES having the largest effect in the model (b=.37) followed by reading background (b=.14). The school control, class size, also had an effect, albeit a modest one (b=.03). Among the five teacher background variables, only one, years of experience, proved statistically significant, with a standardized coefficient of .05. This finding differs somewhat from the literature, in which teacher major tends to have an effect and teacher experience tends not to have one. This divergence may be attributable to the fact that this study is of fourth graders and their elementary school teachers, whereas most of the studies of teacher major are at the high school level. (Note 3)

Table 3. Structural Equation Model of Teacher Background Effects

                                        Reading Achievement
                                   Unstandardized   Standardized
Student Socio-economic Status         140.04**          .37
Student Reading Background             16.99**          .14
Class Size                              1.50**          .03
Teacher Preparedness                     .72            .02
Teacher Major                           1.96            .02
Teacher Education Level                -2.57*          -.03
Teacher Experience                      1.48**          .05
Error                                                   .90

*p<.10  **p<.05

The classroom practice path models reveal much more substantial teacher effects (Table 4). As with the teacher background model, the strongest single effect is of SES (b=.43). This is followed, however, by two teacher effects, the positive effect of metacognitive skill instruction (b=.31) and the negative effect of time spent reading in class (b=-.30). Student reading background is next in importance, with students with stronger backgrounds scoring higher on the reading comprehension assessment (b=.13). Teachers' having students write about literature they are reading and using trade books as their primary reading materials had modest positive effects (b=.04 for each). Class size, as in the teacher background model, also had a statistically significant, albeit modest, effect (b=.04).

Table 4. Structural Equation Models of Classroom Effects

                                        Reading Achievement
                                   Unstandardized   Standardized
Student Socio-economic Status         138.66**          .43
Student Reading Background             15.96**          .13
Class Size                              1.64**          .04
Writing                                  .95**          .04
Basal Reading                            .19            .01
Trade Books                             1.43**          .04
Metacognition                           1.67*           .31
Time Reading                           -1.38*          -.30

*p<.10  **p<.05

In addition, the classroom practice path models indicate that students are exposed to very different practices depending upon their background characteristics and those of their schools. Affluent students are more likely to be exposed to metacognitive instruction (b=.04), writing about literature (b=.07) and reading trade books (b=.07) than their less affluent peers. There is, however, no difference in time spent reading in class or the use of basal readers between the two groups. Schools with smaller classes also differ from those with larger classes, with small class students more likely to be exposed to metacognitive instruction (b=.03) and writing about literature (b=.06) as well as to spend more time reading in class (b=.05). It thus appears that effective classroom practices act as intervening variables between student SES and reading comprehension performance, with higher SES students more likely to be exposed to those practices that are themselves associated with higher NAEP scores.
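The intervening-variable pattern just described can be illustrated by simple path tracing: the indirect effect of SES on reading scores through a given practice is the product of the SES-to-practice path and the practice-to-score path. The coefficients below are the standardized values reported in the text; the indirect effects themselves are computed here for illustration and are not estimates reported by the study.

```python
# Path tracing with the standardized coefficients reported above:
# indirect effect = (SES -> practice) * (practice -> reading score).
ses_to_practice = {"metacognition": 0.04, "writing": 0.07, "trade_books": 0.07}
practice_to_score = {"metacognition": 0.31, "writing": 0.04, "trade_books": 0.04}

indirect = {
    name: round(ses_to_practice[name] * practice_to_score[name], 4)
    for name in ses_to_practice
}
print(indirect)
```

Each product is small, but all are positive: affluent students are more exposed to exactly the practices that predict higher scores, which is the intervening-variable claim in the text.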
With class size, the pattern is less clear, as the practices associated with smaller class sizes may or may not have a positive relationship to NAEP scores.

These findings answer the first research question in the negative and the second in the affirmative. The large-scale data do seem to confirm some of the findings from small-scale research but not others. Some practices, namely metacognition, using trade books and a measure of integrating reading and writing, did prove positively related to reading comprehension. Other practices, however, such as having students work in groups, increasing parental involvement, and the use of authentic assessment, did not. And time spent reading in class actually had a negative relationship to student performance. The addition of classroom practices to large-scale models seems to make the overall impact of teachers comparable to that of student background. As with typical production functions, the teacher background model revealed only a single modest teacher effect. The classroom practice model, however, revealed multiple teacher effects, some of them quite strong. The total standardized effect for the four teacher variables (.70) is actually somewhat larger than the total standardized effect of the two student background measures (.56).

Conclusions

These findings have significant methodological implications for research on teacher effects on reading comprehension. The finding that some of the classroom practices proved effective while others did not suggests the need for synergy between small-scale and large-scale research. The findings of small-scale, highly internally valid studies should serve as the basis for large-scale, highly externally valid studies. Only in this way can it be known if small-scale findings are applicable to large populations. (This does not rule out the possibility that small-scale research can, by itself, provide information about small populations.) The finding that the introduction of classroom practices leads to substantial teacher effects suggests the need for large-scale research to embrace such variables. Clearly, the failure of previous large-scale research to uncover substantial teacher effects is in large part due to its not including such variables. In addition, the other methodological advances of the current study over traditional production functions proved useful.
The use of multiple indicators improved the quality of the measures employed, and the use of path models led to the finding that classroom practices act as intervening variables between student background and reading comprehension performance.

Yet while the current exploratory study does take some steps to improve the large-scale methodology for the study of teacher effects, much remains to be done. One shortcoming of the current study is the ad hoc manner in which it addressed problems with teacher self-reports. Because it relied on pre-existing data, the study made use of interaction effects to increase the likelihood that teachers reporting the use of certain practices were actually using them. Doing so, however, truncated the sample, and rested on the assumption that the more experienced, better prepared teachers are more likely to accurately assess and report what their practices are. This assumption may or may not hold true for a given teacher. A more effective technique for reducing problems with teacher self-reports would be to design questionnaire items that make clearer what the practices are and minimize social desirability effects. For instance, a questionnaire might include a classroom scenario and ask the respondent to describe how he or she would address it. Respondents could also be asked to rank order the effectiveness of the classroom practices of others, or to draw up a time budget for various practices. Such methods, often employed in small-scale research, need to be applied on a larger scale to enhance the reliability and validity of the large-sample teacher reports.
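The intervening-variable finding described above follows the standard mediation logic: the total association between student background and reading comprehension decomposes into a direct path plus an indirect path through classroom practice. A minimal numerical sketch on synthetic data (the variable names and coefficients are invented for illustration; this is not the study's NAEP sample) shows the ordinary-least-squares identity that total effect = direct effect + (a-path × b-path):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: background affects the score directly and indirectly
# through a classroom-practice variable (the intervening variable).
background = rng.normal(size=n)
practice = 0.4 * background + rng.normal(size=n)               # a-path
score = 0.3 * background + 0.5 * practice + rng.normal(size=n)  # direct + b-path

def ols(y, *xs):
    """Least-squares slope coefficients (intercept fit, then dropped)."""
    X = np.column_stack((np.ones(len(y)),) + xs)
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

total = ols(score, background)[0]            # score ~ background
a = ols(practice, background)[0]             # practice ~ background
direct, b = ols(score, background, practice)  # score ~ background + practice

indirect = a * b
# OLS identity on the same sample: total == direct + indirect (exactly).
print(f"total={total:.3f} direct={direct:.3f} indirect={indirect:.3f}")
```

The decomposition is exact for least squares fit on a single sample, which is why path models can apportion a background effect into direct and practice-mediated components.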


Another shortcoming of the study is its use of cross-sectional data. Because the data are cross-sectional, it is not clear whether particular practices enhance reading comprehension or whether high-performing students are more likely to have teachers engaging in such practices. The study did address this problem in an ad hoc fashion by controlling for measures of student home reading behavior. Indeed, those controls may have resulted in underestimates of teacher effects, in that teachers may positively influence home reading behavior. Whatever the impact of the ad hoc procedure, it is no substitute for longitudinal data that follow student performance over time, and hence it is crucial for subsequent large-scale studies to collect such data. Indeed, the Early Childhood Longitudinal Study (ECLS), which will follow a national sample of students from kindergarten through fifth grade, testing their reading skills and measuring teacher classroom practices, may address this need.

A third shortcoming of this study is its failure to fully take into account the multilevel nature of its data. This study involved multiple levels of analysis in that it related teacher-level inputs to student-level outputs. Yet students are not selected at random, but are clustered within classrooms. The employment of design effects addressed this issue somewhat by increasing standard errors based upon clustering at the level of the school district. But it did not take into account the impact of classroom-level non-independence on standard errors. It also did not distinguish between student-level and contextual effects; the influence of student SES, for instance, may be in part due to the average SES of that student's peers. To fully address all of these issues, multilevel techniques, such as Hierarchical Linear Modeling or multilevel versions of SEM (MSEMs), need to be employed.

Finally, rich data is needed on teacher background.
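As an aside to the clustering discussion above: a design-effect correction of the kind described is, in its simplest form, Kish's approximation, DEFF = 1 + (m − 1)ρ, where m is the average cluster size and ρ the intraclass correlation. A minimal sketch with hypothetical numbers (not taken from the study) shows how a simple-random-sample standard error is inflated:

```python
import math

def design_effect(avg_cluster_size: float, icc: float) -> float:
    """Kish's approximation: DEFF = 1 + (m - 1) * rho."""
    return 1.0 + (avg_cluster_size - 1.0) * icc

def clustered_se(srs_se: float, avg_cluster_size: float, icc: float) -> float:
    """Inflate a simple-random-sample standard error for within-cluster
    correlation: SE_adj = SE * sqrt(DEFF)."""
    return srs_se * math.sqrt(design_effect(avg_cluster_size, icc))

# Hypothetical values: 25 students per classroom, intraclass correlation 0.20.
deff = design_effect(25, 0.20)     # 1 + 24 * 0.20 = 5.8
se = clustered_se(0.05, 25, 0.20)  # 0.05 * sqrt(5.8), roughly 0.12
print(round(deff, 2), round(se, 4))
```

Even a modest intraclass correlation more than doubles the standard error here, which is why ignoring classroom-level non-independence overstates the precision of teacher-effect estimates.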
This study used the same kinds of summary measures employed by production function studies, such as years of experience and education levels. It may be that teacher background is extremely important, but can only be fully gauged by learning in a more nuanced way about that background. The education level of the teacher may not be important, but the nature and extent of the teacher education curriculum may be. Perhaps certain kinds of induction experiences are more conducive to high student performance. And the nature and extent of professional development experiences may play a role in encouraging particular effective classroom practices. Thus, while the current study suggests that failure to consider classroom practices has led large-scale research to underestimate teacher effects, it may be that the effects of teacher background have also been underestimated. The only way to gauge the full impact of teachers is to collect as much information about them as is collected about their students, and see how the biographies and classroom actions of the two actors unfold together.

In sum, it should be possible to gauge the effectiveness of instructional practices in the area of reading comprehension using large-scale data. The ability to generalize from smaller to larger populations is critical given the existence of new policies, national in scope, that call for "scientifically-based" instruction in reading. Without knowing the applicability of particular techniques to all populations, policy makers and educators run the risk of imposing a given technique on an inappropriate population. Consequently, research should build upon the methods identified in this exploratory study to determine which instructional practices are helpful for all
students, and which are helpful for particular subpopulations of students.

Notes

1. SEM accomplishes this through three steps. First, factor and path models are specified by the researcher. Factor models indicate which observed variables load on which constructs. Path models indicate which constructs are permitted to be related to one another. Second, through an iterative process, the covariance matrix that these specifications imply (Σ) is matched with the covariance matrix of the observed data (S) to maximize their fit with one another. Finally, the resulting output consists, for each construct, of standardized factor loadings and standard errors for each indicator; standardized and unstandardized path coefficients and their standard errors for each relationship between constructs; and goodness-of-fit statistics, including the root mean squared error of approximation (RMSEA) and indices such as the comparative and normed fit indices.

2. Because of issues in using teacher self-reports, the classroom practice measures had to be transformed for the five models. Research has found teacher self-reports of classroom practices to be frequently unreliable. Some teachers may misrepresent the practices in which they engage because they do not fully understand what the named practices are, and some may misrepresent the practices because they perceive the practices as socially desirable. The NAEP data indicated that most teachers claim to engage in most practices, and consequently, giving full weight to the responses of all teachers would make it difficult to distinguish between those that actually do and do not engage in a given practice. Instead, practices were weighted by teachers' years of experience and their perception of their preparedness, giving more weight to the responses of the more prepared, more experienced teachers. This was accomplished by calculating interaction effects between each classroom practice and the two teacher background measures, and substituting these for the classroom practice measures in the models.

3. Teachers' education level had a negative coefficient at the .10 level. This counterintuitive effect requires further exploration.

References

Alfassi, M. (1998). Reading for meaning: The efficacy of reciprocal teaching in fostering reading comprehension in high school students in remedial reading classes. American Educational Research Journal, 35(2), 309-332.

Anderson, S. A. (2000). How parental involvement makes a difference. Reading Improvement, 37(2), 61-86.

Arbuckle, J. L. (1997). Amos users' guide, version 3.6. Chicago: Small Waters Corporation.

Bromley, K. & Modlo, M. (1997). Using cooperative learning to improve reading and writing in language arts. Reading and Writing Quarterly, 13(1), 21-35.

Bryk, A. S. & Raudenbush, S. W. (1992). Hierarchical linear models: Applications and data analysis methods. Newbury Park, CA: Sage Publications.

Cantrell, S. C. (1999). The effects of literacy instruction on primary students' reading and writing achievement. Reading Research and Instruction, 39(1), 3-26.

Cohen, D. K. & Hill, H. C. (2000). Instructional policy and classroom performance: The mathematics
reform in California. Teachers College Record, 102(2), 294-343.

Coleman, J. S., Campbell, E. Q., Hobson, C. J., McPartland, J., Mood, A. M., Weinfeld, F. D., & York, R. L. (1966). Equality of educational opportunity. Washington, D.C.: U.S. Government Printing Office.

Cross, D. R. & Paris, S. G. (1988). Developmental and instructional analyses of children's metacognition and reading comprehension. Journal of Educational Psychology, 80(2), 131-142.

Durkin, D. (1978). What classroom observations reveal about reading comprehension instruction. Reading Research Quarterly, 14(4), 481-533.

Durkin, D. (1981). Reading comprehension instruction in five basal reader series. Reading Research Quarterly, 16(4), 515-544.

Epstein, J. L. (2001). School, family and community partnerships: Preparing educators and improving schools. Boulder, CO: Westview Press.

Ferguson, R. F. (1991). Paying for public education: New evidence on how and why money matters. Harvard Journal of Legislation, 28(2), 465-498.

Ferguson, R. F. & Ladd, H. F. (1996). How and why money matters: An analysis of Alabama schools. In H. F. Ladd (Ed.), Holding schools accountable: Performance-based reform in education (pp. 265-298). Washington, D.C.: The Brookings Institution.

Goldhaber, D. D. & Brewer, D. J. (1996). Why don't schools and teachers seem to matter? Assessing the impact of unobservables on educational productivity. Journal of Human Resources, 32(3), 505-520.

Greenwald, R., Hedges, L. V., & Laine, R. D. (1996). The effect of school resources on student achievement. Review of Educational Research, 66(3), 361-396.

Gustafsson, J. E. & Stahl, P. A. (1997). STREAMS user's guide (Version 1.7). Mölndal, Sweden: Multivariate Ware.

Guthrie, J. T. (1996). Growth of literacy engagement: Changes in motivations and strategies during concept-oriented reading instruction. Reading Research Quarterly, 31(3), 306-332.

Guthrie, J. T., Schafer, W. D., Von Secker, C. & Alban, T. (2000). Contributions of instructional practices to reading achievement in a statewide improvement program. Journal of Educational Research, 93(4), 211-225.

Guthrie, J. T., Schafer, W. D. & Huang, C.-W. (2001). Benefits of opportunity to read and balanced instruction on the NAEP. Journal of Educational Research, 94(3), 145-162.

Hansen, J. & Pearson, P. D. (1983). An instructional study: Improving the inferential comprehension of good and poor fourth-grade readers. Journal of Educational Psychology, 75(6), 821-829.

Hanushek, E. A. (1989). The impact of differential expenditures on school performance. Educational Researcher, 18(4), 45-51.

Hanushek, E. A. (1996a). A more complete picture of school resource policies. Review of Educational Research, 66(3), 397-409.

Hanushek, E. A. (1996b). School resources and student performance. In G. T. Burtless (Ed.), Does money matter? The effect of school resources on student achievement and adult success (pp. 43-73). Washington, D.C.: The Brookings Institution.

Hanushek, E. A. (1997). Assessing the effects of school resources on student performance: An update. Educational Evaluation and Policy Analysis, 19(2), 141-164.

Hedges, L. V. & Greenwald, R. (1996). Have times changed? The relation between school resources and student performance. In G. T. Burtless (Ed.), Does money matter? The effect of school resources on student achievement and adult success (pp. 74-92). Washington, D.C.: The Brookings Institution.

Hedges, L. V., Laine, R. D., & Greenwald, R. (1994). Does money matter? A meta-analysis of studies of the effects of differential school inputs on student outcomes. Educational Researcher, 23(3), 5-14.


Kaniel, S., Licht, P., & Peled, B. (2000). The influence of metacognitive instruction of reading and writing strategies on positive transfer. Gifted Education International, 15(1), 45-63.

Knapp, M. S. (1995). Teaching for meaning in high-poverty classrooms. New York, NY: Teachers College Press, Columbia University.

Mayer, D. P. (1999). Measuring instructional practice: Can policymakers trust survey data? Educational Evaluation and Policy Analysis, 21(1), 29-45.

Mislevy, R. J. (1993). Should "multiple imputations" be treated as "multiple indicators"? Psychometrika, 58(1), 79-85.

Monk, D. H. (1994). Subject area preparation of secondary mathematics and science teachers and student achievement. Economics of Education Review, 13(2), 125-145.

Mueller, M. E. (1997). Using metacognitive strategies to facilitate expository text mastery. Educational Research Quarterly, 20(3), 41-65.

Muthén, B. O. (1994). Multilevel covariance structure analysis. Sociological Methods and Research, 22(3), 399-420.

National Center for Education Statistics. (1996). High school seniors' instructional experiences in science and mathematics. Washington, D.C.: U.S. Government Printing Office.

National Center for Education Statistics. (2001). The nation's report card: Fourth-grade reading highlights. Washington, D.C.: U.S. Government Printing Office.

National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. Bethesda, MD: National Institute of Child Health and Human Development (NIH).

O'Reilly, P. E., Zelenak, C. A., Rogers, A. M., & Kline, D. L. (1996). 1994 trial state assessment program in reading secondary-use data files user guide. Washington, D.C.: U.S. Department of Education.

Popplewell, S. R. & Doty, D. E. (2001). Classroom instruction and reading comprehension: A comparison of one basal reader approach and the four-blocks framework. Reading Psychology, 22(2), 83-94.

Rosenshine, B. & Meister, C. (1994). Reciprocal teaching: A review of the research. Review of Educational Research, 64(4), 479-530.

Snow, C. E., Burns, M. S., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children. Washington, D.C.: National Academy Press.

Topping, K. J. & Paul, T. D. (1999). Computer-assisted assessment of practice at reading: A large scale survey using Accelerated Reader data. Reading and Writing Quarterly, 15(3), 213-231.

Wenglinsky, H. (2001). The use of multilevel latent growth modeling to measure school effects. In R. Nata (Ed.), Progress in Education 4 (pp. 167-195). Huntington, NY: Nova Science Publishers.

Wenglinsky, H. (2002). How schools matter: The link between teacher classroom practices and student academic performance. Education Policy Analysis Archives, 10(12).

Wharton-McDonald, R., Pressley, M., & Hampton, J. M. (1998). Literacy instruction in nine first-grade classrooms: Teacher characteristics and student achievement. Elementary School Journal, 99(2), 101-128.

About the Author

Harold Wenglinsky
Center for Educational Leadership
Baruch College School of Public Affairs
17 Lexington Ave, Box C-305
New York, NY 10010
Email:

Harold Wenglinsky is Associate Professor in the Baruch School of Public Affairs and Research Director of the Center for Educational Leadership. His research interests include the impact of teachers' instructional practices on student learning and maximizing the effectiveness of educational technology, about which he is currently completing a book.

The World Wide Web address for the Education Policy Analysis Archives is

Editor: Gene V Glass, Arizona State University
Production Assistant: Chris Murrell, Arizona State University

General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, or reach him at College of Education, Arizona State University, Tempe, AZ 85287-2411. The Commentary Editor is Casey D. Cobb: .

EPAA Editorial Board

Michael W. Apple, University of Wisconsin
David C. Berliner, Arizona State University
Greg Camilli, Rutgers University
Linda Darling-Hammond, Stanford University
Sherman Dorn, University of South Florida
Mark E. Fetler, California Commission on Teacher Credentialing
Gustavo E. Fischman, California State University–Los Angeles
Richard Garlikov, Birmingham, Alabama
Thomas F. Green, Syracuse University
Aimee Howley, Ohio University
Craig B. Howley, Appalachia Educational Laboratory
William Hunter, University of Ontario Institute of Technology
Patricia Fey Jarvis, Seattle, Washington
Daniel Kallós, Umeå University
Benjamin Levin, University of Manitoba
Thomas Mauhs-Pugh, Green Mountain College
Les McLean, University of Toronto
Heinrich Mintrop, University of California, Los Angeles
Michele Moses, Arizona State University
Gary Orfield, Harvard University
Anthony G. Rud Jr., Purdue University
Jay Paredes Scribner, University of Missouri
Michael Scriven, University of Auckland
Lorrie A. Shepard, University of Colorado, Boulder
Robert E. Stake, University of Illinois—UC
Kevin Welner, University of Colorado, Boulder
Terrence G. Wiley, Arizona State University
John Willinsky, University of British Columbia

EPAA Spanish Language Editorial Board

Associate Editor for Spanish Language: Roberto Rodríguez Gómez, Universidad Nacional Autónoma de México
Adrián Acosta (México), Universidad de
J. Félix Angulo Rasco (Spain), Universidad de
Teresa Bracho (México), Centro de Investigación y Docencia Económica–CIDE
Alejandro Canales (México), Universidad Nacional Autónoma
Ursula Casanova (U.S.A.), Arizona State University
José Contreras Domingo, Universitat de Barcelona
Erwin Epstein (U.S.A.), Loyola University of
Josué González (U.S.A.), Arizona State University
Rollin Kent (México), Universidad Autónoma de Puebla
María Beatriz Luce (Brazil), Universidad Federal de Rio Grande
Javier Mendoza Rojas (México), Universidad Nacional Autónoma
Marcela Mollis (Argentina), Universidad de Buenos Aires
Humberto Muñoz García (México), Universidad Nacional Autónoma
Angel Ignacio Pérez Gómez (Spain), Universidad de
Daniel Schugurensky (Argentina–Canadá), OISE/UT
Simon Schwartzman (Brazil), American Institutes for Research–Brazil (AIR Brasil)
Jurjo Torres Santomé (Spain), Universidad de A Coruña
Carlos Alberto Torres (U.S.A.), University of California, Los Angeles

EPAA is published by the Education Policy Studies Laboratory, Arizona State University.

