
Educational policy analysis archives


Material Information

Title:
Educational policy analysis archives
Physical Description:
Serial
Language:
English
Creator:
Arizona State University
University of South Florida
Publisher:
Arizona State University
University of South Florida.
Place of Publication:
Tempe, Ariz
Tampa, Fla
Publication Date:

Subjects

Subjects / Keywords:
Education -- Research -- Periodicals   ( lcsh )
Genre:
non-fiction   ( marcgt )
serial   ( sobekcm )

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
usfldc doi - E11-00359
usfldc handle - e11.359
System ID:
SFS0024511:00358




Full Text

A peer-reviewed scholarly journal
Editor: Gene V Glass, College of Education, Arizona State University

Copyright is retained by the first or sole author, who grants right of first publication to the EDUCATION POLICY ANALYSIS ARCHIVES. EPAA is a project of the Education Policy Studies Laboratory. Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education.

Volume 12, Number 10; March 5, 2004; ISSN 1068-2341

Increasing Equity and Increasing School Performance—Conflicting or Compatible Goals?: Addressing the Issues in Williams v. State of California

Jeanne M. Powers
Arizona State University

Citation: Powers, J. M. (2004, March 5). Increasing Equity and Increasing School Performance—Conflicting or Compatible Goals?: Addressing the Issues in Williams v. State of California. Education Policy Analysis Archives, 12(10). Retrieved [Date] from http://epaa.asu.edu/epaa/v12n10/.

Abstract

This work addresses some of the arguments regarding equity in public education versus school performance at issue in the case of Williams v. State of California. The plaintiffs' expert witnesses have argued that the state is responsible for reducing the inequities in California's public educational system. In contrast, the state's witnesses argue that some of the plaintiffs' proposals have limited educational effects at the cost of reducing local autonomy. In this paper, I use four years of data from California's Public Schools Accountability Act (PSAA) to evaluate these claims.

Introduction

On May 17, 2000, the 46th anniversary of the decision in Brown v. Board of Education outlawing racial segregation in public schools, a class action lawsuit was filed on behalf of California's public school students in an effort to make the state address some of the inequities in California's public educational system (Purdum, 2000). Represented by the ACLU and civil rights organizations, the plaintiffs allege that the state is responsible for ensuring that all public school children across the state have the right to experience the same quality of textbooks, teachers, and classrooms. The plaintiffs' experts have documented a range of inequities in California's public educational system and have further argued that these inequities are fundamentally unfair given the high stakes accountability program California initiated in 1999 with the Public Schools Accountability Act (PSAA). (Note 1) This legislation mandated the ranking of all California public schools based on their Academic Performance Index (API), which has been calculated based on the results from the state-mandated tests administered to students in grades 2 through 11 in the spring of each school year. By the winter of 2003, the fifth year of state school rankings will be released. Over three years later, the case of Williams v. State of California is still working its way through the court system; the trial is set to begin August 30, 2004. (Note 2)

In this article, I evaluate the claims made in one expert report written by Margaret Raymond of the Hoover Institution's Center for Research on Educational Outcomes (CREDO) on behalf of the state. (Note 3) In her report, Raymond utilizes the data generated in the wake of the PSAA to rebut the plaintiffs' claims. There are four main sections to the article. First, I briefly outline some of the major claims made by Raymond, focusing specifically on her analysis of API data. Second, I describe the API and the other sources of data in the analysis. In this section I also discuss some problems with the data and the strategies I used to address these problems in my analysis. I also present the results of my efforts to recreate Raymond's analysis. In the third and fourth sections, I provide analyses of the API data addressing the issues of teacher qualifications and facilities, respectively.

Key Claims

In her report rebutting the plaintiffs' arguments in the case of Williams v. State of California, Margaret Raymond (2003a) argues that the plaintiffs "haven't developed a reliable production function for education that highlights the factors at issue in this case" (6). In her discussion she focuses on three factors in particular: the quality of the teaching staff, facilities, and instructional materials. In her analysis she specifically focuses on the effect of teacher credentials on school performance as measured by the Academic Performance Index (API). In part, this analytical strategy is based on the availability of data. To my knowledge, no state-level datasets exist which provide information about facilities and instructional materials.

In elaborating her point about appropriate research strategies that would provide evidence supporting the plaintiffs' claims, Raymond further argues that "[t]o be confident that the plaintiff's claims have merit, it would be necessary to study the effects of each of their proposals under controlled circumstances: that is, to study the effects on student achievement in schools where the factor is abundant compared to schools where the factor is scarce, controlling for other possible effects" (6). Yet interestingly, she doesn't appear to follow this strategy in her own analysis of school API scores, which she uses to draw conclusions about the influence of credentialed teachers on school performance. Moreover, the evidence for her assertions comes from an inappropriate analysis of API data.

More specifically, Raymond argues that in "educationally challenged" schools, teacher credentials do not influence school performance as measured by the Academic Performance Index (API). However, one of her criteria for defining educationally challenged is the percentage of fully credentialed teachers at a school. If the percentage of fully credentialed teachers in a school is below the mean percentage of fully credentialed teachers of the approximately 39 Williams schools in her analysis, and it also has greater percentages of free and reduced lunch students and minority students than the means of the Williams schools on these variables, then the school is defined as educationally challenged (Raymond, 2003a, 12). Using this sample of schools, Raymond constructs an econometric model which she argues shows that teachers' credentials – measured by the percentage of teachers at a school that are fully credentialed – have relatively little effect on school API (Raymond, 2003a, 13). She further argues that the variables in her regression models explain relatively little of the variation in school API scores.

Before accepting these conclusions, it is important to examine the research strategy employed in this analysis more closely. One of the criteria for inclusion in the group "educationally deprived" is that there must be a relatively low percentage of fully credentialed teachers at a given school. Thus, Raymond uses a sample of schools in which there is little variation in the availability of fully credentialed teachers to construct what is basically a tautological argument. If the schools in her sample are relatively similarly situated in terms of the percentage of fully credentialed teachers on staff, then it is not surprising that her regression analysis suggests that teachers' credentials don't matter. This method of sample selection also explains why she obtains what she describes as a very low R-square for her regression models (13). In a sample of schools with little variation on the key explanatory variables, it is not surprising to find that they have a relatively small effect on the model when you test for the influence of these variables on a dependent variable. It is well known that restriction in the variability of a variable attenuates that variable's correlation with any other variable (Glass & Hopkins, 1996, 121-3), as the sketch below illustrates. As I will also detail below, it is also questionable whether the comparison sample Raymond utilizes in her analysis is an appropriate comparison for the Williams schools.
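To make the restriction-of-range point concrete, here is a minimal simulation (Python; not drawn from Raymond's report or the API data, and the variable names and effect sizes are invented for illustration). It generates school-level data with a known relationship between the share of fully credentialed teachers and an API-like score, then recomputes the correlation after keeping only the schools in the bottom quarter of the credential distribution, mimicking a sample restricted to schools with few fully credentialed teachers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical school-level data: percent of fully credentialed teachers and an
# API-like score that depends on it plus noise (all values are made up).
pct_fully_cred = rng.uniform(40, 100, n)
api = 400 + 3.0 * pct_fully_cred + rng.normal(0, 60, n)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

full_sample_r = corr(pct_fully_cred, api)

# Restrict the sample to schools below the 25th percentile on the predictor,
# analogous to selecting only schools with relatively few credentialed teachers.
cutoff = np.percentile(pct_fully_cred, 25)
restricted = pct_fully_cred < cutoff
restricted_r = corr(pct_fully_cred[restricted], api[restricted])

print(f"correlation, full sample:       {full_sample_r:.2f}")
print(f"correlation, restricted sample: {restricted_r:.2f}")
```

The correlation, and any R-square built on this predictor, drops sharply in the restricted sample even though the underlying relationship is unchanged, which is the pattern created by Raymond's sample selection.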

A Brief Overview of the API

The Academic Performance Index (API) is a state-constructed measure of school performance mandated by the 1999 Public Schools Accountability Act. From 1999 to 2001, a summary score for each school was constructed by weighting student scores in each content area of the SAT-9 tests administered to students in grades 2 through 11 by their national percentile ranking (NPR) and then weighting each content area to create an overall score (California Department of Education Office of Policy and Evaluation, 2000, p. 9). For the calculation of the API for elementary and middle schools, the content areas were weighted in the following manner: mathematics 40%, reading 30%, language 15%, and spelling 15%. For high schools, the following content areas were each weighted 20%: reading, mathematics, language, science, and social science.

In 2001, the results from the California Standards Tests (CST) in Language Arts were incorporated into a 2001 Base API. In addition, a 2001 Growth API using only the SAT-9 test results was also calculated. The 2002 Growth API was calculated using the formula for the 2001 Base API. The 2002 Base API utilized in Raymond's analysis incorporates the CST in Math for all grades and the History and Social Science tests for grades 10 and 11, as well as the results of the High School Exit Examination (HSEE). In the main part of the analysis I utilized the API scores that are the most comparable across the four years – the 1999-2001 API scores calculated using SAT-9 results only and the 2002 Growth API, which incorporates the results from the California Standards Test (CST) in Language Arts only but otherwise is calculated from SAT-9 test scores. In the section of the analysis reproducing the Raymond analysis, I used the 2002 Base API to ensure relative consistency of results.

In addition to the API rankings, the API datasets made available by the California Department of Education contain additional variables measuring various types of school characteristics. Three main categories of variables are utilized in the analyses here: 1) variables related to students' background characteristics; 2) variables related to teacher characteristics; and 3) variables indicating the type of calendar the school follows. I discuss each type of variable in turn in the sections that follow.

Student Background Characteristics

% Reduced/Free Lunch is measured by the percentage of students eligible for reduced or free lunch. Mobility is a state-constructed variable that provides a measure of the transiency of the student population by indicating the percentage of test-taking students who first attended the school within the current school year. Students who first attended the district in a given academic year were excluded from that year's API calculations. % English Learners denotes the percentage of students school-wide reported as English learners. Additional variables denote the percentages of students belonging to one of 7 racial groups at each school: African American, American Indian, Hispanic, Asian American, Filipino, Pacific Islander, and White.

One additional student background variable available in the API data that Raymond utilized in her analysis also requires more in-depth discussion. The variable for the percentage of parents who are not high school graduates is one of a series of variables measuring the percentage of parents at the school that have reached a given educational transition (e.g., high school graduate, some college, etc.). More important for the discussion here, however, is the variable, also available in the 2002 Base API data, indicating the percentage of parents responding to this question at the school, % Response for Parent Education.

Teacher Characteristics

The API data also contain variables denoting the percentage of teachers at a school holding full and emergency credentials. According to the CDE website, in the API datasets it is possible for one teacher to be in both the fully credentialed and emergency credentialed categories. As a result, for some schools, the total of the percentages for "Fully Credentialed" and "Emergency Credentialed" may exceed 100. Another issue not addressed by the CDE in its discussion of the API data is the problem of missing information; for some schools the percentages of fully credentialed and emergency credentialed teachers add to less than 100. In order to more precisely assess credentialing at schools, I used variables drawn from the California Basic Educational Data System (CBEDS) Professional Assignment Information Form. These files, which contained records for approximately 325,000 teachers across the state, were aggregated by school and matched to the API data using the unique code for each school.

Fully Credentialed indicates the percentage of teachers who have completed a teacher preparation program and hold a preliminary, clear professional, or life credential. Emergency Credentialed indicates the percentage of teachers that hold an emergency credential. Emergency credentials are granted to individuals who are not qualified for a credential or internship but meet minimum certification requirements. These minimum requirements include a passing score on the state's basic skills exam (CBEST); a bachelor's degree; and 10 semesters of college coursework in any four of the following areas: language studies, literature, history, social science, mathematics, science, humanities, art, physical education, and human development (Darling-Hammond, 2002). In addition, teachers working on emergency permits must submit a statement indicating their intent to complete the credentialing requirements. Some teachers are designated as having a full credential AND an emergency credential. This group of teachers could include teachers who are credentialed in one field but teaching out of field, or teachers who are credentialed in another state and working on California state certification (Darling-Hammond, 2002). In the case of the latter, teachers are counted in a third variable, Both Full and Emergency. An additional variable was used to indicate whether the teacher's credential information was missing.

I also added a variable to the analysis that is also available in the Professional Assignment Information Form. Years Teaching counts the average total years of educational service among the teaching staff as teachers in any district, state, or country. This figure includes teaching in private school settings but does not include any years teaching as a substitute teacher or in classified staff positions. Like the credential variables described above, an additional variable indicated whether or not the teacher's information was missing.
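The school-level aggregation and merge described above might look something like the following sketch (Python/pandas). The file names and column names are illustrative assumptions, not the actual CBEDS or API field layouts.

```python
import pandas as pd

# Hypothetical teacher-level CBEDS Professional Assignment extract: one row per teacher.
teachers = pd.read_csv("paif_2002.csv")   # assumed columns: cds_code, fully_cred, emer_cred, years_teaching

by_school = (
    teachers.groupby("cds_code")
    .agg(
        pct_fully_cred=("fully_cred", lambda s: 100 * s.mean()),
        pct_emer_cred=("emer_cred", lambda s: 100 * s.mean()),
        pct_cred_missing=("fully_cred", lambda s: 100 * s.isna().mean()),
        yrs_teach=("years_teaching", "mean"),
    )
    .reset_index()
)

# Hypothetical school-level API file keyed on the same unique school code.
api = pd.read_csv("api_2002_base.csv")    # assumed columns: cds_code, api, pct_frl, pct_minority, pct_el, ...

merged = api.merge(by_school, on="cds_code", how="left")
```

Repeating the same pattern for each year's Professional Assignment file yields corrected credential and experience measures of the kind used in the analyses below.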

School Calendar

Indicator variables for the type of year-round school were created from the CBEDS School Information Form, Sections G through K. Traditional indicates that the school follows a traditional educational calendar with an extended summer vacation. Year Round Single-Track indicates that the school operates on a single-track year-round calendar with more frequent and shorter vacation periods (usually three a year, ranging from three to five weeks in duration). The major change from the traditional calendar for the year-round single-track calendar is the timing and duration of instructional and vacation periods; all of the staff and students are in school or in session at the same time (California Department of Education, 2000). Year Round Multiple-Track indicates that the school follows a year-round calendar where the students and faculty are divided into three to five groups that rotate throughout the year. This schedule is used to maximize enrollment at the facility; as one group of students and staff goes on vacation, another returns for instruction. Year Round Multiple-Track "Concept 6" is a specific type of year-round multiple-track calendar in which students have fewer instructional days than the other types of school calendars; instead, the school day is lengthened so students receive the same number of instructional minutes as the other calendars. Whereas the other types of school calendars have 180 instructional days in the school year, "Concept 6" schools have 163 (Oakes, 2002). As a result, I distinguish these schools from the other four types of year-round multi-track calendars. The advantage of the "Concept 6" instructional calendar is that it allows schools to enroll 50% more students than it would be able to handle at the facility if it were to follow the traditional calendar (California Department of Education, 2001). In contrast, the other types of year-round multiple-track calendars allow schools to increase their enrollment by 33% compared to the traditional calendar. An additional variable indicated if calendar information was missing.

Control Variables

Indicator variables denoting school type (elementary, middle, high) were created using the school type variable provided in the API dataset and included in the revised models. As Raymond notes, the median score for the state as a whole varied considerably by school type, with elementary schools having the highest median scores, followed by middle and then high schools. In a critique of another similar analysis of API scores, Rogosa (2002) argues that given these differences it is important to control for school type in these and similar analyses, as school type might serve as a proxy for other unmeasured factors (23). Surprisingly, Raymond (2003b) makes a similar point in her analysis of the API scores of California charter schools and uses analytical strategies that take school type into account. However, she does not appear to control for school type in the analyses provided in the report under discussion here.

Reproducing the Raymond Analysis

In this section, I reproduce the Raymond analysis based on the information provided in her report. First, I detail the method of selection. Next, I provide the descriptive statistics for the comparison sample and a discussion of how this group compares with the Williams schools in her analysis and the statewide sample. Finally, I recreate her regression analysis and also provide an alternative model with the corrected variables described above.

Selection Method

First, I selected out the 39 schools Raymond listed in Table 1 of her report from the 2002 API Base data. Of these, 36 had 2002 API scores and complete information on all variables. (Note 4) I calculated descriptive statistics on the three measures she used to select her sample of 584 schools: 1) the percentage of minority students; 2) the percentage of students that qualified for reduced or free lunch; and 3) the percentage of teachers at the school who held full teaching credentials. Table 1 below provides the descriptive statistics for these three variables for the 36 schools:

Table 1: Descriptive Statistics for Raymond's Sample Selection Variables

                        Minimum   Maximum   Mean     Std. Dev.
2002 Base API           425       805       575.56   104.01
% Reduced/Free Lunch    24        100       68.47    22.31
% Fully Cred.           27        100       73.61    17.05
% Minority              22        100       74.78    25.40

Raymond did not indicate which of the 6 possible non-white racial groups she included in the variable she calculated for the percentage of minority students. This variable is not included in the original 2002 Base API data and thus must be calculated from the variables indicating the percentages of students in each of 7 racial categories: African American, American Indian, Hispanic, Asian American, Filipino, Pacific Islander, and White. In the analysis described here, I defined "Percentage Minority" as the percentage of African American, Hispanic, and American Indian students at a school, and the results roughly parallel her analysis.

According to Raymond, she used these sample means to select her cases, choosing all of the cases above the sample mean for percentage of students on reduced/free lunch and percentage of minority students, and below the sample mean for percentage of fully credentialed teachers (Raymond, 2003, 12). In Raymond's analysis, this yielded a sample of 584 schools, 565 of which had information for all variables in the analysis. Using the means listed in Table 1 above to recreate Raymond's sample, I initially selected 593 cases. Of these, 574 had complete information on all variables. In addition, I also corrected and augmented the data using other datasets readily available from the CDE website, per the discussion above.
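A minimal sketch of this selection rule, assuming the 2002 Base API file has been loaded into a pandas DataFrame with illustrative column names (not the CDE's actual field names): the thresholds are the Williams-group means on the three selection variables, and a school enters the comparison group only if it meets all three conditions at once.

```python
import pandas as pd

api = pd.read_csv("api_2002_base.csv")    # assumed columns: cds_code, api, pct_frl, pct_minority, pct_fully_cred

williams_ids = []                         # fill in the CDS codes of the 39 schools in Raymond's Table 1
williams = api[api["cds_code"].isin(williams_ids)]

# Thresholds: the Williams-group means on the three selection variables (cf. Table 1).
frl_mean = williams["pct_frl"].mean()
minority_mean = williams["pct_minority"].mean()
cred_mean = williams["pct_fully_cred"].mean()

# Keep a school only if it is above both the poverty and minority means
# AND below the fully-credentialed mean -- the joint criterion discussed above.
comparison = api[
    (api["pct_frl"] > frl_mean)
    & (api["pct_minority"] > minority_mean)
    & (api["pct_fully_cred"] < cred_mean)
]
print(len(comparison))
```

Requiring all three conditions jointly is what pushes the comparison group toward the most disadvantaged end of the distribution, the point developed around Table 2 below.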

Finally, before turning to a discussion of specific variables, it should be noted that there are 7444 schools in the 2002 API Base data with API scores. Of these, 7225 have complete information on all of the variables in Raymond's analysis. Using either of these figures, Raymond's sample of 565 schools is less than 10% of the schools assigned 2002 Base API scores in the state.

Table 2 provides descriptive information on all variables in the analysis for three groups: 1) a statewide sample of schools with 2002 API scores and information on all variables in the analysis (Column 1); 2) the 35 "Williams" schools with 2002 API scores and information on all variables I used to recreate the Raymond analysis (Column 2); and 3) the "comparison" schools chosen by following Raymond's inclusion criteria (Column 3).

Table 2: Descriptive Statistics for All Variables (Mean, S.D. in parentheses)

Variable                          State Sample       Raymond's Williams      Comparison Sample Using   Full Comparison
                                  N=7225             Sample N=35 (Note 5)    Raymond's Criteria N=547  Sample N=582
2002 "Base" API                   688.69 (105.37)    575.83 (105.52)         568.27 (61.36)            568.72 (64.75)
Elementary School                 .71 (.45)          .43 (.50)               .78 (.41)***              .76 (.43)
Middle School                     .16 (.37)          .26 (.44)               .18 (.38)                 .19 (.39)
High School                       .13 (.33)          .31 (.47)               .03 (.18)***              .05 (.22)
% Minority                        48.73 (29.76)      74.23 (25.56)           92.48 (6.62)***           91.38 (9.91)
% Reduced/Free Lunch              48.58 (30.96)      68.14 (22.55)           90.20 (9.53)***           88.88 (11.94)
% Fully Cred.                     86.83 (13.28)      73.16 (15.53)           60.83 (11.64)***          61.57 (12.25)
% Emergency Cred.                 7.26 (9.52)        14.82 (12.63)           23.53 (13.21)***          23.05 (13.38)
% Both Full and Emer.             1.32 (3.64)        1.69 (2.64)             1.27 (2.02)               1.29 (2.06)
% Credential Missing              4.59 (6.64)        10.33 (6.65)            14.32 (9.08)*             14.08 (9.00)
% English Learner                 23.64 (21.85)      36.29 (21.79)           52.28 (19.94)***          51.32 (20.40)
School Mobility                   18.46 (14.10)      18.57 (9.29)            19.04 (14.85)             19.01 (14.57)
% Parents Not H.S. Graduates      20.16 (18.91)      32.69 (17.94)           43.17 (17.21)***          42.54 (17.42)
% Response for Parent Education   76.62 (24.48)      71.51 (24.18)           63.59 (29.14)             64.07 (28.91)

*** t-test comparing sample means for Columns 2 and 3 statistically significant at p < .001
** t-test comparing sample means for Columns 2 and 3 statistically significant at p < .01
* t-test comparing sample means for Columns 2 and 3 statistically significant at p < .05

What should be immediately evident if you compare across columns are the differences in means for the three groups of schools shown in the first three columns. These three groups of schools are very different.

Compared to the state as a whole, the 35 Williams schools included in the Raymond analysis are disadvantaged across all variables. However, Raymond's comparison group (Column 3) is much more disadvantaged than the Williams schools in her analysis. Comparison with the results presented in Table 1 suggests why this is the case. Raymond used the group means for the Williams schools on the three selection variables. In addition, for a school to be included in the analysis it had to fit all three of the selection criteria rather than any one of the three criteria. This has the effect of selecting a more disadvantaged group overall for comparison by definition, because it excludes schools which are comparable to the Williams schools that fall below the group means on the selection variables (in other words, the relatively advantaged among the Williams schools). If we look at the standard deviations for the three groups, we can also see that there is relatively less variation in the comparison group for most of the explanatory variables than in the other two groups. Because Raymond's Williams Schools group is so small (N=35), when it is combined with the comparison group as shown in Column 4, it has a minimal effect on the means for the full comparison sample. To confirm this, I conducted t-tests on the sample means shown in Columns 2 and 3. The asterisks in Column 3 indicate that most of the differences in means between Raymond's Williams schools and her comparison group are statistically significant, which suggests that the schools are not appropriate comparison groups as Raymond contends.
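The group comparisons reported in Table 2 can be reproduced in outline with SciPy, continuing with the hypothetical williams and comparison DataFrames from the selection sketch above. The article does not specify the exact test variant; Welch's unequal-variance t-test is used here as one reasonable choice given the very different group sizes.

```python
from scipy import stats

for var in ["pct_minority", "pct_frl", "pct_fully_cred"]:
    t, p = stats.ttest_ind(
        williams[var],
        comparison[var],
        equal_var=False,      # Welch's t-test: group sizes and variances differ sharply
        nan_policy="omit",
    )
    print(f"{var:16s} t = {t:7.2f}   p = {p:.4f}")
```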

Another striking difference between the Williams schools and the other two groups is that while the comparison group resembles the statewide sample in the distribution of school types (elementary, middle, high), almost half of Raymond's Williams schools are middle and high schools. Finally, what should also be noticeable from Table 2 is that in Raymond's group of Williams schools and the comparison group, there is a good deal of missing information in the two series of variables for teachers' credentials and parental education. When the teacher credential variables are corrected using the Professional Assignment Information Form as detailed above, higher poverty schools are more likely to have missing information compared to the state as a whole. We see this in both the Williams and the Raymond comparison groups, which on average are missing information on about 10 percent and 14 percent of their teachers' credentialing data, respectively. Similarly, in the average school in the state sample just under 25% of the parent education information is missing. However, for the two comparison groups this figure increases to 28.5% for the Williams schools and 36% for Raymond's comparison group.

Regression Analyses

In Table 3, I present the results of initial regression analyses. In the first column I provide the reanalysis using Raymond's model in Table 2. In the columns that follow I provide models for the statewide sample (Column 2) and the comparison sample (Column 3). In both models I add the variables described above that correct for missing information in the credential and parent education variables. One of the problems with the statewide model shown in Column 2 is collinearity. Collinearity occurs when two or more of the independent or predictor variables are highly correlated with one another. I address the issue of collinearity in more detail in the following section, where I discuss the impact of credentials and other teacher characteristics on the model. However, I include this model here to illustrate the dramatic increase in the R-square for the statistical model once the sample size is increased. In this case, the R-square of .78 in the corrected model using the statewide sample (middle column) indicates that these variables explain about 78% of the variation in school API scores.

Table 3: Regression Models Following Raymond's Analysis with Statewide Sample and Reconstructed Sample (coefficients, S.E. in parentheses)

Variable                        Raymond's Model      Corrected Model Using    Corrected Model Using
                                in Table 2           Statewide Sample         Comparison Sample
Constant                        694.98 (33.86)***    853.41 (7.35)***         790.06 (31.63)***
Middle School                   --                   -45.11 (1.66)***         -63.58 (5.98)***
High School                     --                   -107.58 (1.92)***        -92.87 (11.11)***
% Minority                      -1.65 (.31)***       -1.01 (.04)***           -1.51 (.27)***
% Reduced/Free Lunch            .64 (.27)*           -1.53 (.04)***           .04 (.25)
% Fully Cred.                   .35 (.21)            .12 (.07)                -.09 (.19)
% Both Full and Emer.           --                   .26 (.17)                -.18 (1.06)
% Credential Missing            --                   -.37 (.12)**             -1.28 (.25)***
% English Learners              .31 (.16)            .17 (.05)***             -.22 (.15)
School Mobility                 -.35 (.17)*          -1.01 (.04)***           -.48 (.15)**
% Parents Not H.S. Graduates    -1.47 (.17)***       -1.02 (.06)***           -1.11 (.16)***
% Response - Parent Education   --                   .08 (.03)**              .34 (.08)***
Williams School                 -23.26 (11.94)*      -18.07 (8.41)*           -10.33 (10.62)
R-squared                       .18***               .78***                   .39***

What we see from the comparison of the three models above is that the low R-square reported by Raymond, and the even lower R-square yielded in my replicate analysis, is due to two factors: 1) the criteria for selecting the sample, as discussed above, which reduce the variation within Raymond's sample on most of the explanatory variables; and 2) the omission of important control variables for school type and for missing information in the credential and parent information variables, which are among the most statistically significant variables in the analysis.
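For readers who want to see what the corrected specification looks like in code, the sketch below fits a model of this general form with statsmodels. The DataFrame state and its column names (including 0/1 indicators for middle school, high school, and Williams-school status) are illustrative assumptions; this is the kind of model summarized in Table 3, not the author's exact estimation routine.

```python
import statsmodels.formula.api as smf

model = smf.ols(
    "api ~ middle + high + pct_minority + pct_frl + pct_fully_cred"
    " + pct_both_cred + pct_cred_missing + pct_el + mobility"
    " + pct_parents_no_hs + pct_response_parent_ed + williams",
    data=state,
).fit()

print(model.rsquared)    # analogous to the R-squared row of Table 3
print(model.params)      # coefficients
print(model.bse)         # standard errors
```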

It is also worth noting that the coefficient for the Williams schools, while still negative, has decreased considerably with the corrected model (Column 3) and is no longer statistically significant.

Is Teacher Certification and Other Types of Teacher Training Important?

On the basis of these findings, Raymond argues that teacher certification has a negligible effect on school performance. In addition, she also cites an earlier study she conducted in the Houston Unified School District in which she argued that on average Teach for America teachers did as well as or better than their peers (10). However, a more recent quasi-experimental study of Teach for America teachers in 5 districts in Arizona (Laczko-Kerr and Berliner, 2002), which matched Teach for America and other under-certified teachers with fully certified peers, found 1) little difference in student performance between Teach for America teachers and other under-certified teachers (i.e., emergency credentialed teachers); and 2) students taught by fully credentialed teachers outperformed students taught by under-certified teachers. More specifically, their results indicated that students taught by fully certified teachers gained approximately two additional months per academic year across subjects compared to students taught by under-certified teachers.

Similarly, in an analysis of 1999 and 2001 API scores for elementary schools in Los Angeles and San Diego, I found that the impact of teacher credentials versus teacher experience varies based on district context (Powers, 2003). For example, in Los Angeles, if we look at variables related to teachers' credentials and training, the main disparity between high poverty schools and low poverty schools is the percentage of emergency credentialed teachers. In contrast, in San Diego, where there is a relatively even distribution of emergency credentialed teachers across the district, the main disparity between high poverty and low poverty schools is the average years of teaching experience among the teaching staff. Not surprisingly, these differences are reflected in the results of regression models run separately for each district predicting the influence of student and teacher characteristics on school achievement as measured by the API. However, a more overarching conclusion that we can draw from this type of analysis is that these types of disparities across schools do matter, and that while public policies meant to address inequities might have to be tailored to the specific features of the district context, they are not inconsequential.

In the section below, I provide a reanalysis of the API data, adding the variable for teacher experience described above to the analysis. However, instead of using Raymond's list to select schools, I use a list of current Williams schools provided by the plaintiffs' lawyers. I also utilize an alternative method of analysis. Rather than examine the API at one point in time, as in the Raymond analysis, I examined the relationship between the API and the factors of interest – school demographic variables and teacher qualifications – for the group of Williams schools and the statewide sample with complete information on all variables from 1999-2002. (Note 6) This yielded 29 of the 34 Williams schools and 6452 cases for the statewide sample. According to my calculations, this group of 6452 schools is approximately 83% of all the schools eligible for a 1999 API. (Note 7)

The advantage of this strategy is that by using the same sample of schools over all four years of the analysis, we can examine changes in the explanatory variables over time. In addition, using a statewide sample also addresses Hoxby's (2003) concern that arguments for increased equity rely on data that is "representative of California public schools in general" (2). (Note 8)

As noted above, this analysis also utilizes the API scores that are the most comparable across the four years – the 1999-2001 API scores calculated using SAT-9 results only and the 2002 Growth API, which incorporates the results from the California Standards Test (CST) in Language Arts only but otherwise is calculated from SAT-9 test scores. Each of these files was matched to the files created from the Professional Assignment Information Files described above for the appropriate year, and then all four years of API data were merged together. Finally, the cases with all information for all of the variables of interest were selected for analysis. In the case of the Williams schools and the statewide sample, the 2002 sample means for the 4-year analysis were roughly similar to the sample means using the 2002 data, (Note 9) which suggests that the selection criteria requiring 4 years of data did not result in a consequential loss of information for either group.

For ease of presentation, Table 4 provides descriptive statistics for the Williams schools and the statewide sample for 1999 and 2002. Like the Raymond sample above, middle schools and high schools are over-represented in the Williams sample compared to the state sample: 28% of the Williams schools are middle schools and 41% are high schools, compared to 16% and 12%, respectively. It is also worth noting that if we compare the means for the 35 schools used in Raymond's analysis shown in Table 2 to the means for this group, we see that the current Williams schools are relatively more disadvantaged. (Note 10)

Table 4: Descriptive Statistics for Statewide Sample of Schools with API Data from 1999-2002 and Current Williams Schools (Mean, S.D. in parentheses)

                        Williams Schools N=29             Statewide Sample N=6452
Variable                1999             2002             1999              2002
API                     464.52 (65.51)   520.59 (87.68)   632.37 (131.76)   693.79 (112.35)
Elementary              .31 (.47)        .34 (.48)        .71 (.45)         .71 (.45)
Middle                  .28 (.45)        .24 (.44)        .17 (.37)         .17 (.37)
High                    .41 (.50)        .41 (.50)        .12 (.33)         .12 (.33)
% Minority              84.74 (20.03)    85.59 (18.26)    47.43 (29.24)     49.82 (29.48)
% Reduced/Free Lunch    75.05 (18.30)    75.62 (16.44)    49.03 (29.96)     48.91 (30.73)
% English Learners      46.59 (20.76)    44.83 (20.59)    24.00 (21.91)     24.44 (21.46)
Mobility                13.30 (18.50)    13.28 (6.72)     13.16 (12.74)     16.56 (9.49)
% Fully Cred.           71.25 (14.95)    69.75 (14.89)    86.71 (12.25)     87.17 (12.01)
% Emer. Cred.           19.52 (14.37)    16.62 (13.92)    8.58 (9.90)       7.04 (8.38)
% Both Full and Emer.   2.90 (3.34)      2.24 (2.53)      1.92 (3.23)       1.27 (3.27)
% Missing               6.34 (3.24)      11.39 (6.97)     2.79 (4.56)       4.52 (5.90)
Yrs. Teach              12.31 (3.22)     11.63 (2.87)     13.24 (3.23)      13.25 (3.23)
% Yrs. Tch. Missing     1.53 (2.01)      1.29 (1.45)      .59 (1.90)        .48 (1.83)

What should be immediately obvious from comparing across the columns is that on average, the Williams schools have very different profiles than the state average. Minority students and students receiving reduced and free lunch comprise the majority of the student populations in Williams schools. It is difficult to ascertain the changes in teachers' credentials over time because in both the Williams schools and the statewide sample, the percentage of teachers with missing information has increased by approximately 40 percent for the two groups from 1999 to 2002. (Note 11) Even in the unlikely scenario that all of the teachers with missing information were fully credentialed and that the percentage of teachers with emergency credentials has decreased by 2.9% in the Williams schools over the four-year period – twice the rate of the statewide sample – a large gap between the two groups remains. To put this figure in perspective, we might do well to remember that if the percentage of emergency credentialed teachers in the Williams schools was actually decreasing at that rate, it would still take more than 12 years for the Williams schools to reach the average for the state sample in 2002. In addition, while in the Williams schools sample the average years of teaching experience among the teaching staff has decreased slightly, in the statewide sample there is a very slight increase.

As I noted in the discussion of the replication of Raymond's analysis above, if we use the state sample of schools and the same group of independent variables in the model, the problem of collinearity among the independent variables results. Three variables among the group utilized by Raymond in her analysis are particularly highly correlated: % Minority, % Reduced/Free Lunch, and % English Learners. It is not surprising that % Minority and % English Learners are highly related, since Latinos are the largest racial group in California's public schools and Spanish speakers comprise the majority of the English learners in California. (Note 12) Similarly, given the strong relationships between race and poverty in the United States and the degree to which public schools continue to be segregated by both race and class, it is not surprising to see a strong relationship between these two variables. Tables 5 and 6 illustrate these relationships in two different ways. First, I divided the 6452 schools in Table 4 above into quartiles by the variable % Minority using the 1999 data. In Table 5 I present descriptive statistics for the first and fourth quartiles, the schools with the least and the most minority students, respectively, for 1999 and 2002. Of the 29 Williams schools in the analysis, 23, or 79%, fall into the fourth quartile.
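The quartile split can be sketched as follows, assuming the merged four-year data sit in a pandas DataFrame named panel with one row per school per year and the illustrative column names used in the earlier sketches.

```python
import pandas as pd

base_1999 = panel[panel["year"] == 1999].copy()

# Quartiles of % minority measured in 1999; labels 1-4 run from the schools
# with the fewest to the schools with the most minority students.
base_1999["minority_quartile"] = pd.qcut(
    base_1999["pct_minority"], q=4, labels=[1, 2, 3, 4]
)

# Descriptive statistics for the bottom and top quartiles (cf. Table 5).
extremes = base_1999[base_1999["minority_quartile"].isin([1, 4])]
print(
    extremes.groupby("minority_quartile")[["api", "pct_frl", "pct_el", "pct_fully_cred"]]
    .agg(["mean", "std"])
)
```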

Table 5: Descriptive Statistics for Schools with Least and Most Minority Students, 1999 and 2002 (Mean, S.D. in parentheses)

                        Schools with the Least Minority     Schools with the Most Minority
                        Students in 1999, N=1616            Students in 1999, N=1611
Variable                1999             2002               1999             2002
API                     775.52 (81.53)   810.47 (73.17)     485.02 (68.56)   579.91 (69.55)
% Minority              12.13 (5.26)     13.57 (6.37)       87.68 (8.53)     89.00 (7.52)
% Reduced/Free Lunch    17.97 (17.41)    16.91 (16.92)      81.41 (15.41)    83.00 (15.59)
% EL                    5.75 (7.74)      6.13 (7.91)        48.21 (21.18)    47.60 (20.39)
School Mobility         10.42 (12.18)    13.53 (8.57)       14.39 (12.89)    17.57 (9.66)
% Fully Cred.           94.71 (5.78)     94.32 (6.48)       75.38 (13.57)    76.79 (13.92)
% Emer. Cred.           2.83 (4.45)      3.03 (4.62)        17.63 (11.85)    13.02 (11.07)
% Both Full and Emer.   1.10 (2.05)      .87 (2.33)         2.34 (3.33)      1.62 (3.68)
% Credential Missing    1.35 (2.65)      1.77 (2.97)        4.65 (5.87)      8.57 (7.48)
Avg. Yrs. Teaching      14.17 (3.15)     14.50 (3.17)       12.00 (2.99)     11.70 (2.83)
% Yrs. Tch. Missing     .26 (1.25)       .44 (1.93)         .92 (2.09)       .60 (1.69)

Table 5 allows us to see the strong relationship between many of the independent variables in the analysis. In general, schools that have relatively low percentages of minority students also have relatively low percentages of students eligible for reduced or free lunch, and relatively low percentages of English language learners, compared to schools with high percentages of minority students. Mobility is also much lower in schools with fewer minority students. Likewise, on average, the schools with the least minority students also have much higher percentages of fully credentialed teachers (and conversely fewer emergency credentialed teachers or teachers with both full and emergency credentials, indicating teaching out of field) and more experienced teachers.

Table 6 shows the bivariate correlations between these variables of interest over the four years of data, which allows us to see trends over time in these variables. The most striking feature of Table 6 is that it illustrates the strong relationship overall between these variables. The lowest correlation between pairs of variables is just under .75.

What we see in Table 6 is that the relationship between race and poverty at the school level is not only strong but also consistently increasing from 1999 to 2002. It is also worth noting that, with the exception of 1999, % Minority is also the variable most highly correlated with the variables for teacher credentials and experience (see Appendix).

Table 6: Correlations between Student Background Variables over Time (bivariate correlations)

Year    % Minority & % Reduced/Free Lunch    % Minority & % ELL    % Reduced/Free Lunch & % ELL
1999    .818                                 .754                  .762
2000    .834                                 .750                  .768
2001    .834                                 .746                  .771
2002    .842                                 .745                  .763

In Table 7 I present regression models for 1999 and 2002 using the full state sample. Because the yearly models use the same sample of schools, this strategy allows us to look at changes over time in the regression coefficients. In these models, I omitted the variable for % Minority after inspecting the regression diagnostics for the full model. (Note 13) There were two main reasons for this choice: 1) this variable was marginally more highly correlated with the other independent variables in the model for three out of the four years in the analysis (Lewis-Beck, 1980); and 2) the percentage of students eligible for reduced or free lunch is the more theoretically interesting variable. (Note 14) I also included the controls for school type and the missing information on the teacher qualifications variables utilized in the regression models above (not shown). Finally, the last variable is an indicator variable denoting whether or not the school is one of the 29 Williams schools. Because the teacher credential variables add to 100%, they function as a set of indicator variables; % Fully Credentialed is the omitted comparison category.

Table 7: Regression Models for 1999 and 2002, Statewide Sample (coefficients, S.E. in parentheses)

                       1999 (1)            1999 (2)            2002 (1)            2002 (2)
Constant               845.02 (1.76)***    819.40 (4.26)***    887.70 (1.73)***    863.01 (3.90)***
% Red./Free Lunch      -3.21 (.04)***      -3.21 (.04)***      -2.67 (.04)***      -2.68 (.04)***
% EL                   -.80 (.06)***       -.79 (.06)***       -.57 (.05)***       -.53 (.05)***
School Mobility        -.18 (.06)**        -.14 (.06)*         -.93 (.08)***       -.89 (.08)***
% Emer. Cred.          -1.41 (.08)***      -1.24 (.09)***      -.34 (.09)***       -.17 (.09)***
% Both Full & Emer.    -1.59 (.24)***      -1.39 (.24)***      .10 (.21)***        .27 (.21)
Avg. Yrs. Teaching     --                  1.77 (.27)***       --                  1.66 (.23)***
Williams School        -16.36 (11.32)      -16.26 (11.29)      -38.24 (9.98)***    -37.88 (9.94)***
R-squared              .790***             .792***             .776***             .778***

In the first column for each year I present the models using just the teacher credential variables. In the second column I add the teacher experience variable to the model. The decrease we see in the coefficient for teacher credentials from the first model to the second is not surprising, as the two variables are related. Schools with high percentages of teachers on emergency credentials are also more likely to have less experienced teaching staffs. However, both variables have a meaningful independent effect on the model, and the regression diagnostics indicate that while the teacher experience variable is not unrelated to the other variables in the model, it is not so highly correlated that it might adversely affect the model.

The coefficient indicating whether or not the school is a Williams school is negative across all of the models, and in 2002 the coefficient is statistically significant. However, this coefficient should be interpreted with care because the 29 Williams schools are approximately one-half of one percent of the entire sample of 6452 schools. While the coefficient is negative, which can be interpreted to mean that when all other factors in the model are statistically held constant, Williams schools do worse than other schools, two factors might be considered that temper this conclusion. First, this model controls only for the demographic characteristics of the student body and teachers' qualifications, and not other unmeasured factors such as facilities and textbooks that might also influence student achievement. Second, the gap between Williams schools and all other schools using the "raw" API in Table 4 is just over 173 points in 2002, which is considerably higher than the 38-point gap indicated by the coefficient for the Williams schools, which we might read as the gap in school API once student and teacher characteristics are accounted for.

This strategy provides a good way to examine trends over time in the coefficients and the overall strength of the model. For example, the decreases in almost all of the coefficients from 1999 to 2002 suggest regression to the mean. (Note 15) To confirm this interpretation, I examined the change in API from 1999 to 2002 by calculating the raw change score, subtracting the 1999 API from the 2002 API. Next I correlated the 1999 to 2002 change score with % Reduced/Free Lunch and % English Learners for 2002. The bivariate correlations were .523 and .482 respectively (p < .001), which indicates that schools with higher percentages of poor students and English learners made greater gains in API over the four-year period. (Note 16)
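A sketch of that check, again using the hypothetical panel DataFrame from the earlier sketches: compute each school's raw 1999-to-2002 change in API and correlate it with the 2002 poverty and English-learner shares.

```python
# Reshape to one row per school with (variable, year) columns.
wide = panel.pivot_table(index="cds_code", columns="year",
                         values=["api", "pct_frl", "pct_el"])

api_change = wide[("api", 2002)] - wide[("api", 1999)]

print(api_change.corr(wide[("pct_frl", 2002)]))   # cf. the reported .523
print(api_change.corr(wide[("pct_el", 2002)]))    # cf. the reported .482
```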

The above strategy of analysis, which controls for school type, does not allow us to discern whether the variables of interest might work differently across school types. This is particularly important to consider given the predominance of elementary schools in the statewide sample (71%). In Table 9 I present the same models run separately by school type for 2002; descriptive statistics by school type are shown in Table 8. However, because there are relatively few Williams schools in each of the categories, (Note 17) and the purpose of this analysis is understanding the effects of teacher characteristics on each type of school, I omitted this variable from the analyses.

Table 8: Descriptive Statistics by School Type, 2002 API Growth Data (Mean, S.D. in parentheses)

Variable                Elementary N=4588    Middle N=1084      High N=780
API                     706.67 (111.43)      674.60 (112.94)    644.66 (98.85)
% Reduced/Free Lunch    52.79 (31.36)        44.98 (27.56)      31.54 (23.69)
% EL                    26.91 (22.78)        20.25 (17.35)      15.75 (14.28)
School Mobility         17.18 (8.18)         17.01 (12.54)      12.28 (10.57)
% Emer. Cred.           6.29 (8.25)          9.13 (9.10)        8.53 (7.33)
% Both Full and Emer.   1.09 (3.04)          1.70 (3.69)        1.75 (3.76)
Avg. Yrs. Teaching      12.99 (3.31)         13.38 (3.05)       14.55 (2.64)

What we see from Table 8 is that there are important differences across the school types. Students receiving reduced/free lunch and English learners are the most highly concentrated in elementary schools and least concentrated in high schools, with middle schools falling in between. This is in part because elementary schools tend to serve the smallest geographical areas, and are thus more likely to be economically segregated. The lower mobility in high schools could also be attributable to the larger geographical areas served by high schools; as this variable measures within-district mobility, students whose families move frequently within the district would probably be less likely to have to change high schools than elementary or middle schools. While elementary schools have lower percentages of emergency credentialed teachers on average than middle and high schools, they also tend to have less experienced teaching staffs (although the average difference between middle and elementary schools is less than a percentage point).

Table 9: Regression Models by School Type, 2002 API Growth Data (coefficients, S.E. in parentheses)

Variable                Elementary          Middle              High
Constant                857.64 (4.49)***    827.76 (9.31)***    746.29 (15.22)***
% Reduced/Free Lunch    -2.65 (.04)***      -3.01 (.09)***      -2.20 (.12)***
% EL                    -.51 (.06)***       -.60 (.13)***       -1.66 (.20)***
School Mobility         -.98 (.10)***       -.47 (.12)***       -.99 (.19)***
% Emer. Cred.           .18 (.11)           -.71 (.19)***       -1.42 (.30)***
% Both Full and Emer.   .93 (.26)***        -.51 (.41)          -1.86 (.55)***
Avg. Yrs. Teaching      1.71 (.26)***       1.55 (.04)**        2.17 (.87)*
R-squared               .777***             .820***             .689***

In Table 9 the regression models for each school type yield interesting findings. Of particular note are the results for the teacher credential and experience variables, all of which have the strongest effect in the high school model. This finding is masked in the model shown in Table 7 because of the predominance of elementary schools in the state sample. However, it is not surprising if we consider that teaching at the high school level requires the most specialized subject area training, which could also explain the strong negative effect of the variable for the percentage of teachers with both full and emergency credentials, to the extent that it provides an indicator of the percentage of teachers teaching outside their subject area training. This finding is also consistent with those of Fetler (1999), who found that once student background characteristics are controlled, teacher training and experience were the strongest predictors of high school math achievement (see also Darling-Hammond, 2000, more generally).

In sum, these findings make it much more difficult to dismiss the effect of teachers' credentials on school performance as measured by the API, particularly at the high school level. To put these findings in more policy relevant terms, on average, a five percentage point decrease in emergency credentialed teachers would increase school API by just over 7 points, which is close to the average target increase in API of 7.86 for this sample of high schools in 2002. (Note 18) The results obtained here also suggest that given the current budget constraints facing the state of California, a policy aimed at equalizing the quality of teachers across schools might be most effectively targeted at high schools, much like California's class-size reduction initiative targeted the lower grades.

To bolster this interpretation, I present a final analysis in this section in which I compare the 29 Williams schools with a group of comparison schools I created by matching each of the Williams schools with a similar school, selecting out all of the schools of the same type (elementary, middle, high) with the same value on the variable % Reduced/Free Lunch in the 2002 API Growth data. From the 29 lists that resulted, I chose a matching school for each Williams school by choosing the school with the most fully credentialed teachers that was also the closest match on the variable % English Learners. On average, this group had close to 19 percentage points more fully credentialed teachers than the Williams schools group. Thus, this comparison group most closely matches Raymond's call for comparing the achievement – or in this case, what is more accurately described as school performance – of the Williams schools with schools that are "abundant" in fully credentialed teachers, controlling for other factors.
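A rough sketch of this matching step (pandas; williams_2002 and growth_2002 are assumed DataFrames of the Williams schools and the full 2002 Growth API file, with the illustrative column names used above). Treating "the same value on % Reduced/Free Lunch" as an exact match on the rounded percentage is an assumption about how the matching was operationalized, and ties between the credential and English-learner criteria are broken in that order.

```python
import pandas as pd

def match_school(w_row, pool):
    # Candidate matches: same school type and the same % reduced/free lunch value.
    candidates = pool[
        (pool["school_type"] == w_row["school_type"])
        & (pool["pct_frl"].round() == round(w_row["pct_frl"]))
    ].copy()
    candidates["el_gap"] = (candidates["pct_el"] - w_row["pct_el"]).abs()
    # Prefer the most fully credentialed staff, then the closest % English learners.
    return candidates.sort_values(
        ["pct_fully_cred", "el_gap"], ascending=[False, True]
    ).iloc[0]

abundant = pd.DataFrame(
    [match_school(row, growth_2002) for _, row in williams_2002.iterrows()]
)
```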

Table 10 provides descriptive statistics for the Williams schools and the "Abundant" comparison group on the variables of interest for 1999 and 2002. In addition, I also used t-tests to assess whether or not the differences in means across the two groups are statistically significant. What we see from Table 10 is that the two groups are roughly comparable in terms of student demographics. With the exception of % Minority between the Williams schools and the "Abundant" schools, none of the differences in means for the background variables are statistically significant. However, we also see that the major differences between the two groups of schools are in the variables measuring teachers' qualifications.

Table 10: Sample Means for Williams Schools and Comparison Schools

                        Williams Schools N=29      "Abundant" Schools N=29
Variable                1999        2002           1999           2002
% Min.                  84.74       85.59          70.89 a        73.14 a
% Reduced/Free Lunch    75.05       75.62          75.77          75.45
% English Learners      46.59       44.83          40.79          42.07
Mobility                13.30       13.28          8.81           12.59
% Emer.                 19.52       16.62          7.35 aaa       5.74 aaa
% Fully Cred.           71.25       69.75          87.21 aaa      88.42 aaa
Yrs. Teach              12.31       11.63          14.17 a        14.13 aa

a Comparison with Williams schools mean statistically significant at p < .05
aa Comparison with Williams schools mean statistically significant at p < .01
aaa Comparison with Williams schools mean statistically significant at p < .001

Figure 1 shows the mean API for the two groups of schools from 1999 to 2002. What we see is that there is a consistent gap between the two groups of schools, a good portion of which we can reasonably attribute to the differences in teachers' credentials across the two groups of schools, as most of the background characteristics of students have essentially been held constant. (Note 19)

Figure 1. Mean API for Williams Schools and "Abundant" Schools: 1999-2002

Beyond Teacher Certification: Indirect Evidence of the Importance of Facilities

While most of Raymond's discussion focuses on teacher qualifications, her findings can also be read as providing indirect evidence of the importance of decent facilities for school performance. In one of her regression models with a smaller sub-sample of her comparison group, she includes a variable for whether or not the school is a year-round school (Raymond, 2003a: Table 4). The regression coefficient is large (-25.81) and statistically significant at p < .05, which indicates that within this sample of schools, (Note 20) with all other factors held constant, schools on a traditional calendar have an API score that is about 26 points higher than those on a year-round calendar.

As noted above, there is no data available that would allow us to assess the effect of facilities on school performance as measured by the API. However, as I also noted, the year-round multi-track calendars are utilized in California's public schools as a way to address overcrowding. According to the California Department of Education, if 20 percent of the students attending year-round multi-track schools are "housed in excess of capacity at their school sites," the state and local school districts save approximately two billion dollars in construction costs (California Department of Education, 2003: 92). Of the four types of multi-track calendars, "Concept 6" calendars are notable because they have fewer instructional days. Thus, we might consider the use of year-round calendars, and in particular the "Concept 6" calendar, a proxy for inadequate facilities. In this section, I provide a more robust analysis of Raymond's findings which takes into account the different types of year-round calendars by utilizing additional demographic data available from the California Department of Education that can be downloaded and matched to the API datasets.

The "Concept 6" calendar is used in only four districts across the state: Lodi Unified, Los Angeles Unified, Palmdale Elementary, and Vista Unified. (Note 21) Lodi, Palmdale, and Vista are all small school districts with fewer than 30,000 students and well under 50 schools each. In Palmdale, all but one school in the district follows the "Concept 6" calendar. In Lodi Unified and Vista, approximately half of the schools are "Concept 6" schools; however, most other schools in Lodi follow a traditional calendar, while in Vista the majority of the remaining schools follow the other types of year-round calendars. Because these three districts are so small and have such divergent patterns in their utilization of the four types of calendars, I restrict the analysis to the Los Angeles Unified School District. The second largest school district in the country, the Los Angeles Unified School District also has a distribution of schools across the four types of school calendars that best allows us to test the effect of school calendar on school performance, controlling for other factors. This strategy also has the advantage of controlling for possible district effects that might distort the results if the four "Concept 6" districts were pooled for the regression analysis. Descriptive statistics for the 571 schools in the Los Angeles Unified School District with 2002 Base API (Note 22) scores and complete information on all variables are provided in Table 11.


Table 11: Descriptive Statistics for the Los Angeles Unified School District, 2002 API Base Data (N=571)

Variable                   Mean (S.D.)
API                        635.81 (97.73)
Elementary
Middle                        .14 (  .34)
High                          .09 (  .28)
% Reduced/Free Lunch        78.33 (25.52)
% English Learners          42.86 (23.30)
Mobility                    15.54 ( 8.87)
% Fully Cred.
% Emer.                     14.63 ( 8.47)
% Both Full and Emer.        2.47 ( 2.35)
Yrs. Teach                  11.52 ( 2.41)
Traditional
YR Single                     .02 (  .15)
YR Mult.                      .06 (  .24)
"Concept 6"                   .31 (  .46)

Table 12 shows the regression model for the Los Angeles Unified School District. The same variables used in the prior analysis were included, including the control variables for school type and missing information in the teacher variables (not shown). Traditional is the omitted comparison category. As a result, a positive coefficient on one of the remaining calendar variables indicates that schools operating on that type of calendar have higher API scores than schools running on traditional calendars. Conversely, a negative coefficient indicates that schools operating on the designated calendar have lower API scores than schools operating on a traditional calendar.
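Before turning to the estimates in Table 12, the sketch below shows one way a model of this form could be specified, with the traditional calendar as the omitted reference category. It is not the author's code; the file name and every column name (api_base_2002, calendar, school_type, and so on) are hypothetical placeholders for the matched CDE/API fields.

# Minimal sketch (assumed file and column names) of an OLS model like Table 12:
# 2002 Base API regressed on student background, teacher variables, and
# calendar-type dummies, with "traditional" as the omitted calendar category.
import pandas as pd
import statsmodels.formula.api as smf

lausd = pd.read_csv("lausd_api_2002_base.csv")   # hypothetical LAUSD extract

model = smf.ols(
    "api_base_2002 ~ pct_free_lunch + pct_english_learners + mobility"
    " + pct_emergency + pct_both_full_and_emer + avg_years_teaching"
    " + C(calendar, Treatment(reference='traditional'))"   # YR single, YR multi, Concept 6
    " + C(school_type)",                                    # elementary/middle/high controls
    data=lausd,
).fit()

# Each calendar coefficient is the predicted API difference relative to a
# traditional-calendar school, holding the other variables constant.
print(model.summary())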


Table 12: Regression Model Testing for School Calendar, Los Angeles Unified School District, API Base 2002

Variable                   Coefficient (S.E.)
Constant                   911.99 (19.67)***
% Reduced/Free Lunch        -1.91 (  .13)***
% English Learners           -.72 (  .15)***
Mobility                    -1.17 (  .25)***
% Emer.                     -1.97 (  .34)***
% Both                      -1.18 (  .89)
Yrs. Teach                    .71 ( 1.18)
YR Single                   58.54 (16.66)***
YR Mult.                    -8.56 ( 9.01)
"Concept 6"                -13.94 ( 5.51)*
R-squared                    .767***

* p < .05; *** p < .001

What we see from the results here is that in the Los Angeles Unified School District, the district with 77 percent of all "Concept 6" schools in the state, with all other factors held constant, "Concept 6" schools have lower API scores than schools on a traditional calendar. Conversely, schools on a year-round single-track calendar have higher API scores than schools on the traditional calendar. Although there are only a small number of year-round single-track schools in the Los Angeles Unified School District, this finding is notable because, unlike the other types of year-round calendars, the year-round single-track calendar is not used to increase enrollment but is primarily intended to increase student achievement by minimizing the learning gap over the summer months. The year-round single-track calendar also increases the possibility for remedial and enrichment classes during intersessions. While this model tests how students are organized into facilities rather than directly testing the quality of the facilities, we see from this analysis that, as with the quality of the teaching staff, facilities are not inconsequential to school performance.

Conclusion

To a certain degree, Raymond and other expert witnesses for the state do agree with at least some of the broad issues involved in the case, more specifically, the importance of highly qualified teachers for student achievement. (Note 23) However, they argue against the creation of policies to help ensure a more equitable distribution of resources across schools. One of their main arguments is that it is very difficult to define "quality" teachers and that the minimum definition proposed by the plaintiffs – a fully credentialed teacher – doesn't measurably affect student achievement. (Note 24) Moreover, Raymond and other state experts further argue that the costs of decreased local control outweigh any possible educational effects of state policies mandating that schools and districts hire fully credentialed teachers and provide current textbooks and adequate facilities for their students. Finally, Raymond also asserts that these proposals are not only fiscally unreasonable given California's current budget crisis but would also have the effect of "disenfranchising parents" because they would "remove the option for parents to be co-creators of the educational programs that best meet the needs of their children" (17).

It is difficult to imagine how ensuring that most, if not all, teachers are credentialed would disempower parents. While information regarding teachers'


credentials is currently made available to parents in a standard reporting format through state-mandated school accountability report cards, not only is there a significant lag in the information (i.e., information about the prior school year is reported in the report card published the following school year), but all of the information is aggregated at the school level. As a result, it is difficult for a parent to use this information to advocate for her/his child; the best remaining options, then, are direct inquiry at the school or the word-of-mouth networks that exist among parents. (Note 25) Given these conditions, it could be argued that ensuring that most, if not all, classroom teachers are fully credentialed would actually empower parents because they can be assured that all of their children's teachers meet the criteria established for teachers by the California Commission on Teacher Credentialing and can thus use their time and resources advocating for their children in other arenas.

The results of these analyses suggest that short of desegregating schools by socioeconomic status, increasing and equalizing the percentage of fully credentialed teachers is an "input" that is not only relatively amenable to change through state and local policy – and certainly much easier than building additional facilities to ease overcrowding – but also contributes to school performance. (Note 26) Addressing the disparities documented here might entail creating pay and other incentives (e.g., increased autonomy) that would encourage experienced teachers to work in high-poverty schools. (Note 27) And, given that the API is essentially an average of student scores, while the magnitude of the effect on schools is subject to debate, such a policy could make a large difference for the academic achievement and life chances of individual students. Even if we accept the argument that a more equitable distribution of teachers has a relatively small but positive effect on achievement, we might also consider whether or not the goal of increasing equity in public education – which this analysis suggests can be done without sacrificing school performance – is an important and desirable end in itself. As we consider the issues in this case, it is important not to let statistical arguments about the determinants of school achievement and the valorization of local control distract us from the larger issues of justice and educational opportunity for all students that are at the heart of this case. For those of us who are comfortably middle class or higher, why should we expect poor and minority students to settle for anything less than the schools we want and often demand for our own children?

Notes

1. For a synthesis of the plaintiffs' experts' reports, see Oakes (2002a).

2. A May 2003 San Francisco Chronicle newspaper story reported that the state has spent approximately $18 million fighting the case (Asimov, 2003).

3. Similar arguments were made by Hanushek (2003), Hoxby (2003), and Philips (2003). I focus specifically on Raymond's report here because of her use of API data to create what she describes as "econometric models of educationally challenged schools in California." In contrast, Philips' (2003) discussion of the API is a secondary analysis of reports using API data.

4. Raymond indicates that she substituted API scores for prior years in the case


of two schools without 2002 API. When I selected out the cases listed in Raymond's Table 1, I found a third school was missing API information. It is also worth noting that two of the remaining 36 schools were not listed as plaintiff schools on the Williams v. California website; however, I included these in the analysis for the purpose of reconstructing Raymond's analysis as precisely as possible. The Williams case website (www.decentschools.org) lists a total of 72 plaintiff schools, but it is unclear from Raymond's narrative why her analysis focused on the group of schools listed in Table 1 of her report.

5. An additional school was missing information on the parent education variable and was omitted from the analysis.

6. I also restricted the sample to schools with less than 30% missing information on the teacher credentials and experience variables.

7. The 1999 API excluded alternative schools and schools with fewer than 100 students. In the 2000-2002 files used here, there was a fourth school type indicating if the school was a small school, i.e., the school only had between 11 and 99 valid tests available to calculate its API. Since none of the Williams schools fell into this category, I omitted the approximately 39 schools designated as small schools for these three years.

8. Hoxby also argues that "Good Research" utilizes extensive controls for family background, measures that are either unavailable or, as I detail below in the case of the parental education variables in the API, unreliable. However, it is also worth noting that Hoxby's analysis of the effects of centralization on state performance on NAEP provided in her report does not appear to control for students' family background.

9. For the 4-year state sample, the 2002 Growth API was less than 8 points lower than the state sample in Table 2 above. The sample means on all other variables were within a percentage point of each other.

10. Thirty of these schools have 2002 Base API scores; the mean for this group is 529.13, which is roughly comparable. As with the state sample, the means on the other variables using the 2002 Base API sample are all within a percentage point or two of those presented for 2002 in Table 4.

11. If we look at the percentage of fully credentialed teachers in the Williams schools for 2000 and 2001, both years with less missing information than 2002, the trend is somewhat inconsistent but tends more towards a decreasing percentage of fully credentialed teachers. In 2000, 68.91 percent of the teachers in the average Williams school were fully certified (7.96 percent missing information). In 2001 the same figure was 70.16 percent (9.41 percent missing information).

12. According to 2001-2002 figures available on EdData (http://www.eddata.k12.ca.us/welcome.asp), 44.2 percent of all California public school students were Latino. Of the 25.5% English Learners in California's public schools, 21.2% were Spanish speakers.

13. Subtracting the tolerance statistic from 1 gives us the R-squared from the regression of the independent variable of interest on all of the other independent variables (R²j, where j indexes the variable of interest). Fox (1991) recommends taking the square root of R²j, noting that when this figure approaches .9, collinearity becomes a serious problem for the estimation of regression coefficients. In this case, I obtained an Rj of .88 for the variable % Minority for three of the four years of the analysis (this figure was only marginally lower for the 1999 model).
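As a worked restatement (not part of the original note), the relationship between the tolerance statistic and the collinearity rule of thumb in Note 13 can be written as follows, using the reported value for % Minority:

% LaTeX sketch of the diagnostic described in Note 13.
% R^2_j comes from regressing predictor X_j on the remaining predictors.
\[
  \mathrm{tolerance}_j = 1 - R^2_j,
  \qquad
  R_j = \sqrt{R^2_j} \approx .9 \ \text{signals serious collinearity (Fox, 1991)}.
\]
% With the reported R_j = .88 for the percent-minority variable:
\[
  R^2_j \approx (.88)^2 \approx .77,
  \qquad
  \mathrm{tolerance}_j \approx 1 - .77 = .23 .
\]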


14. See, for example, the analysis by Phillips, Brooks-Gunn, Duncan, Klebanov, and Crane in Jencks and Phillips (1998).

15. I could have just as easily chosen the 1999 figures for % Reduced/Free Lunch and % English Learners, as they correlate at r > .95 for both variables, reflecting the relative stability of these variables over time.

16. Rogosa (2001, 2002) notes that this pattern is in part a result of how the API is constructed. Because the API is constructed by using a percentile rank metric, students in the highest scoring schools can't raise school scores because they have "topped out" the index.

17. Ten of the Williams schools are elementary schools, 7 are middle schools, and 12 are high schools.

18. A school's API target, or the amount its API should increase from one year to the next, is determined by taking 5 percent of the difference between the school's API in a given year and 800, which is the target API for all schools set by the state (for example, a school with an API of 600 would have a growth target of .05 x (800 - 600) = 10 points). Rogosa (2002) has argued that a difference in API of 5 points or fewer is not significant and is approximately equivalent to about half of the students answering an additional question on the SAT-9 test correctly. Rogosa (2000) has also estimated that if every student increased their percentile rank on each test by one point, the school's API would increase by 8 points (1). I frame the results in terms of the growth targets set by the state because, irrespective of the educational consequences of a rise and fall in API, whether or not schools reach their targets has important political consequences for schools because it is one of the criteria for determining whether or not a school is labeled as performing adequately or inadequately by the state.

19. Even if we look at the range of API between these two groups, as Rogosa (2001) suggests, although there is substantial overlap in the middle, the lowest boundary for the Williams schools is substantially lower than the minimum for the "Abundant" schools. Likewise, the maximum value among the "Abundant" schools is much higher than the maximum value for the Williams schools. I also chose a second comparison group by matching each school by school type, % Reduced/Free Lunch, and % Fully Credentialed. On average, the API scores of this second comparison group were slightly higher than the Williams schools, but the difference in means was not statistically significant.

20. The 129 schools in this model are most likely predominantly middle schools and high schools because one of the variables in the model, which Raymond describes as Number of Core Classes, is missing information for most schools in the full dataset. However, this variable appears to be mislabeled on the CDE website. For all of the other API datasets with the exception of the 2002 Base


API Raymond used in her analysis, this variable indicates the Average Class Size, Core Classes, which include the following subject areas: English, Foreign Languages, Math, Science, and Social Science (California Department of Education, Policy and Evaluation Division, 2003: 8). Given this definition, it is not surprising that so many schools were missing information on this variable; if one examines the data by looking at the distribution of the variable by school type (elementary, middle, high), 87.5% of the 2,302 schools with information on this variable were middle and high schools. As a result, it is also likely that many of the Williams schools were not included in this model. Interestingly, Raymond uses this model – with an incorrectly interpreted coefficient – as evidence for her assertion that other types of improvements will increase student achievement more than increasing the percentage of fully certified teachers.

21. In contrast, well over 100 districts have schools following various types of year-round multi-track calendars other than the Concept 6 calendar.

22. I use the 2002 API Base Data because here, unlike in the prior analyses, understanding changes over time is less important, and the 2002 Base data, with its greater incorporation of the CST, is currently the most politically salient for schools. However, the results I obtained here are consistent with the results using the 2002 Growth API data, which is not surprising because the 2002 Growth API and the 2002 Base API correlate at .999.

23. Raymond (2003a) writes: "There is no quibble that the three proposed solutions – sufficient textbooks, quality teachers, and adequate facilities – play a role in the production of good education. But the definitions of 'sufficient,' 'quality,' and 'adequate' are elusive and highly subjective. Moreover, it is a large leap to accept that these elements are only effective in the precise formulations advanced by the experts" (11). Similarly, Philips (2003) writes: "Though inconvenient, students can share books, use copied materials, or internet resources, wear coats in a cold classroom, or use a restroom on another floor. But if a classroom teacher is not able to effectively focus instruction on the state content standards, for the subject area of the class, disadvantaged children may be ill-equipped to learn the material on their own" (75).

24. To some degree, this is a moot point since No Child Left Behind requires that all teachers of core subject areas be "highly qualified" by the 2005-2006 school year. Teachers on emergency permits, waivers, or pre-intern certificates do not meet the criteria for "highly qualified." A June 2003 memo from the State Superintendent of Public Instruction directs districts, counties, and charter schools to focus their current hiring and recruitment efforts on teachers who meet the NCLB requirements (O'Connell, 2003).

25. A similar argument can be made about textbooks. Even if classroom teachers use textbooks differently, wouldn't it be more empowering for parents to know that the current textbooks are available in their child's classroom for the teacher to use at her/his discretion?

26. Kahlenberg (2001) argues that the economic integration of schools could also be a strategy to ensure a more equal distribution of teachers across schools (78-80).


27. As noted above, some of the changes generated in the wake of NCLB address the issue of credentialing. However, the results of an earlier analysis focusing on the Los Angeles Unified School District and the San Diego Unified School District suggested that the gains made from hiring more fully credentialed teachers could be offset by a loss of more experienced teachers, as this will in all likelihood emerge as a source of inequality between schools once the disparities in credentials are equalized (Powers, 2003).

References

Asimov, N. (2003, May 5). Bitter battle over class standards. State spends millions to defeat students' suit. San Francisco Chronicle.

California Department of Education. (2000). Fact book 2000: Handbook of education information. Sacramento: Author.

California Department of Education. (2001). Year-round education calendars. Sacramento: Author. [Accessed May 15, 2003]

California Department of Education Office of Policy and Evaluation. (2000). 1999 base year academic performance index (API). Sacramento: Author.

California Department of Education Office of Policy and Evaluation. (2003). Explanatory notes for the 2002 Academic Performance Index base report. Sacramento: Author.

Darling-Hammond, L. (2002). Access to quality teaching: An analysis of inequality in California's schools. Decent Schools for California: Williams v. State of California. (Available online: http://www.decentschools.org)

Fetler, M. (1999). High school staff characteristics and mathematics test results. Education Policy Analysis Archives, 7(9).

Fox, J. (1991). Regression diagnostics. Newbury Park, CA: Sage Publications.

Glass, G. V, & Hopkins, K. D. (1996). Statistical methods in education and psychology (3rd ed.). Boston: Allyn and Bacon.

Hoxby, C. M. (2003). Achievement, efficiency and centralization in California's public schools. Decent Schools for California: Williams v. State of California. (Available online: http://www.decentschools.org)

Laczko-Kerr, I., & Berliner, D. C. (2002). The effectiveness of "Teach for America" and other under-certified teachers on student academic achievement: A case of harmful public policy. Education Policy Analysis Archives, 10(37).

Lewis-Beck, M. (1980). Applied regression: An introduction. Beverly Hills, CA: SAGE Publications.

Oakes, J. (2002). Education inadequacy, inequality and failed state policy: A synthesis of expert reports prepared for Williams v. State of California. Decent Schools for California: Williams v. State of California. (Available online: http://www.decentschools.org)


O'Connell, J. (2003, June 30). Update on California's plan of highly qualified teachers (Letter to district supervisors, county superintendents, and charter school administrators). Sacramento: California Department of Education. [Available online at www.cde.ca.gov]

Powers, J. M. (2003). An analysis of performance-based accountability: Factors shaping school performance in two urban school districts. Educational Policy, 17(5), 558-586.

Purdum, T. S. (2000, May 18). Rights groups sue California public schools. New York Times.

Raymond, M. E. (2003a). Williams v. State of California report by Margaret E. Raymond. Decent Schools for California: Williams v. State of California. (Available online: http://www.decentschools.org)

Raymond, M. E. (2003b). The performance of California charter schools. Palo Alto, CA: CREDO, Hoover Institution.

Rogosa, D. (2001). Year 2000 update: Interpretive notes for the Academic Performance Index. Palo Alto, CA: Stanford University.

Rogosa, D. (2002). A further examination of student progress in charter schools using the California API. Palo Alto, CA: Stanford University.

Russell, M. (2000). Summarizing change in test scores: Shortcomings of three common methods. Practical Assessment, Research and Evaluation, 7(5).

About the Author

Jeanne M. Powers
Division of Educational Leadership and Policy Studies
Arizona State University
P. O. Box 872411
Tempe, AZ 85287-2411
Phone: 480.965.0841
Email: jeanne.powers@asu.edu

Jeanne M. Powers is an Assistant Professor in the Division of Educational Leadership and Policy Studies, Arizona State University. She received her PhD in Sociology from the University of California, San Diego. Her current research examines the equity implications of education policy with a focus on high-stakes testing and accountability, and school choice.

Appendix

Table A1: Correlations between Student Background Variables and Teacher Qualifications Variables Over Time


Bivariate correlations: % Emer. Cred. and ...
         % Minority    % Reduced/Free Lunch    % English Learners
1999       .589           .457                    .469
2000       .566           .447                    .429
2001       .510           .378                    .378
2002       .457           .347                    .340

Bivariate correlations: Avg. Yrs. Exper. and ...
         % Minority    % Reduced/Free Lunch    % English Learners
1999      -.272          -.279                   -.284
2000      -.325          -.306                   -.308
2001      -.332          -.293                   -.318
2002      -.343          -.302                   -.315
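As a rough illustration of how the year-by-year correlations in Table A1 could be computed from a pooled extract of the API files, here is a minimal sketch; it is not the author's code, and the file name and column names are hypothetical placeholders.

# Minimal sketch (assumed file and column names) of the year-by-year
# bivariate correlations reported in Appendix Table A1.
import pandas as pd

panel = pd.read_csv("api_panel_1999_2002.csv")   # hypothetical pooled 1999-2002 extract
background = ["pct_minority", "pct_free_lunch", "pct_english_learners"]
teacher = ["pct_emergency", "avg_years_teaching"]

for year, grp in panel.groupby("year"):
    corr = grp[teacher + background].corr().loc[teacher, background]
    print(year)
    print(corr.round(3))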



