Readers are free to copy, display, and distribute this article, as long as the work is attributed to the author(s) and Education Policy Analysis Archives, it is distributed for noncommercial purposes only, and no alteration or transformation is made in the work. More details of this Creative Commons license are available at http://creativecommons.org/licenses/by-nc-nd/2.5/. All other uses must be approved by the author(s) or EPAA. EPAA is published jointly by the Colleges of Education at Arizona State University and the University of South Florida. Articles are indexed by H.W. Wilson & Co. Send commentary to Casey Cobb (email@example.com) and errata notes to Sherman Dorn (epaa-editor@shermandorn.com).

EDUCATION POLICY ANALYSIS ARCHIVES
A peer-reviewed scholarly journal
Editor: Sherman Dorn
College of Education
University of South Florida

Volume 14 Number 5 February 23, 2006 ISSN 1068-2341

National Board Certification as Professional Development: What Are Teachers Learning?

David Lustick
University of Massachusetts Lowell

Gary Sykes
Michigan State University

Citation: Lustick, D., & Sykes, G. (2006). National board certification as professional development: What are teachers learning? Education Policy Analysis Archives, 14(5). Retrieved [date] from http://epaa.asu.edu/epaa/v14n5/.

Abstract

This study investigated the National Board for Professional Teaching Standards (NBPTS) assessment process in order to identify, quantify, and substantiate learning outcomes from the participants. One hundred and twenty candidates for the Adolescent and Young Adult Science (AYA Science) Certificate were studied over a two-year period using the recurrent institutional cycle research design. This quasi-experimental methodology allowed for the collection of both cross-sectional and longitudinal data, ensuring a good measure of internal validity regarding observed changes in individual and group means.
Transcripts of structured interviews with each teacher were scored by multiple assessors according to the 13 standards of the NBPTS framework for accomplished science teaching. These scores provided the quantitative evidence of teacher learning in this study. Significant pre-intervention to post-intervention changes to these individual and group means are reported as learning outcomes from the assessment process. Findings suggest that the intervention had significant impact upon candidates' understanding of knowledge associated with science teaching, with an overall effect size of 0.47. Standards associated with the greatest gains include Scientific Inquiry and Assessment. The results support the claim that the certification process is an effective standards-based professional learning opportunity comparable to other human improvement interventions from related domains. Drawing on qualitative data, we also explore three possible implications of teacher learning outcomes from certification upon classroom practice, identified as Dynamic, Technical, and Deferred. These patterns suggest that more than one kind of learning may be taking place in relation to board certification. The discussion then considers the importance of this study for policy making and science teaching communities.

Keywords: Teacher Learning; Professional Development; Certification; Science Education; National Board for Professional Teaching Standards (NBPTS).

The National Board and Public Policy

Evidence has begun to accumulate that demonstrates a relationship between National Board certification and student achievement (see Cavaluzzo, 2004; Goldhaber & Anthony, 2004; Vandevoort, Amrein-Beardsley, & Berliner, 2004). This relationship is relevant to policy because so many states and localities are providing financial support and incentives to encourage teachers to become Board certified. Whether such teachers in fact are highly accomplished as determined by their ability to promote higher student achievement is a matter of great interest at the moment. But if Board certified teachers are unusually capable, why may this be so? One answer is self-selection: teachers who volunteer to undertake Board certification are superior teachers to begin with. In this case, certification is primarily a selection mechanism.
Another answer is that the certification process itself constitutes a form of professional development that actually enhances teacher knowledge, skills, and dispositions in candidates, regardless of whether or not they achieve certification. In this case, certification is a development process. Between these two possibilities, the former is most likely, because certification is a relatively brief treatment. Although intensive and time-consuming, extending over a year of work, the certification process primarily calls on teachers to document their practice. Consequently, those who volunteer are most likely quite good teachers to begin with. Still, teachers prepare in study groups, and learn from the materials they receive and from the processes they undergo. Many teachers who are unsuccessful at achieving certification during the first attempt may retake portions of the examination, so the process can extend over more than one year.1 In light of the importance of Board certification as a significant policy effort to enhance the quality of teachers, the question of what teachers learn through the process is noteworthy. The study reported here supplies some preliminary evidence on this question.

The context for an inquiry into Board certification is the contemporary debate on professionalization as a policy choice. The heart of the professional premise is that teachers utilize expert knowledge in their work that may be codified and transmitted to practitioners. Such codification occurs in a number of places, including the curriculum of preservice training, continuing education, and the standards and assessments that are used to select and screen entrants to the profession.
As the National Commission on Teaching and America's Future (1996) argued, the basis for professional standards today is a three-legged stool that includes certification standards for entry under the jurisdiction of states; standards for teacher preparation programs established by NCATE; and advanced standards for professional practice recently established by the National Board for Professional Teaching Standards (NBPTS). This model is represented in varying ways in most professions, but is most highly elaborated in the medical field, where advanced standards for practice are associated with specialization and have been developed by the various medical specialty boards.

1 For this investigation, only first-time candidates were included in the population pool. All retake candidates were removed from consideration.

Standards are one form for the representation and transmission of professional knowledge. Such knowledge is typically validated in relation to its efficacy in producing desirable outcomes, but due to the inevitably incomplete nature of professional knowledge, standards are the creation of consensus panels made up of experts in the relevant fields who draw on research-based knowledge but also utilize judgment in formulating the standards. In this respect, the NBPTS has followed accepted practice in developing its specialized standards for each certificate area based on the work of consensus panels. Still, the work of validating professional standards is an ongoing process, and the present study contributes to this effort.

The policy of professionalization, as it enlists the resources and authority of the state, is under challenge, however, from those who doubt that teaching involves codified expert knowledge that may be represented in various forms and used to discipline preparation, entry, evaluation, and advancement. Arguments made by opponents tend to stress that teachers employ ordinary knowledge and intelligence, typically acquired in university liberal arts programs, and that screening for entry should involve just tests of basic skills, general aptitude, and knowledge of relevant subject matter. In our society today, both views are advanced in a policy climate of skepticism about the quality of teaching, and these contending positions suggest quite different policy approaches to the enhancement of teaching.
The first, or professional, view places more emphasis on the steady development of validated standards that underlie the three-legged stool. The second, or antiprofessional, approach places reliance on recruitment strategies that open teaching to applicants with diverse backgrounds and qualifications, based on indicators of intelligence and general learning plus some modest how-to knowledge and practical experience.

The present study enters this policy debate by presenting evidence about what teachers are learning from the National Board certification process. Along the lines of the debate just indicated are two views. One suggests that teachers learn to be more reflective practitioners as a result of the process, supporting the professional claim. The other position argues that teachers simply learn how to master the assessment process in order to gain the incentives that states and districts are beginning to provide, such as additional pay. If the first view has force, then states and districts may be justified in providing public incentives. If the second view has force, then such allocations are unlikely to be worthwhile. Framing the policy issue in these terms, however, only meets the threshold condition that teachers are learning something of value. Beyond the threshold are further questions about whether they use what they have learned in their practice, and even further, whether such use enhances student learning. Still, if the threshold condition is not met, then further inquiry will be moot, as teachers are not undertaking any useful learning in the first place. So the present inquiry aims to provide the first test of this proposition: that board certification promotes useful learning among candidates and so is a worthwhile policy investment.

The National Board for Professional Teaching Standards has been setting standards for accomplished teaching and certifying teachers who measure up to those standards since 1993.
During the past decade, the NBPTS has developed 27 areas of certification and has awarded certificates to more than 40,000 teachers (NBPTS, 2005). The certification process is rigorous; only one-half of applicants are successful. Yet, evidence suggests that candidates, whether they pass or not, find the experience valuable to their professional growth. Candidates must complete an extensive portfolio that profiles their work with students, school, and community. In addition,
candidates take a computerized assessment that evaluates their content knowledge in their area of expertise. Together, the portfolio and the exam constitute the foundation of the certification experience.

Certification costs $2300 per teacher, and as well as defraying application fees, many states and local districts offer additional and often generous financial incentives to encourage teachers to become certified. All told, 48 states and 544 local districts offer some form of financial incentive or support for National Board certification. For example, North Carolina set aside more than $26 million in 2003 to encourage and support teachers who pursue certification (Leef, 2003). There, incentives can add up to $50,000 per teacher over the life of the 10-year certificate (NBPTS, 2004a). Nationwide, annual expenditures are substantial. In 2003, for example, approximately 16,000 teachers pursued certification for a total cost of $36.8 million that was paid almost entirely by state departments of education, local districts, and, to a lesser extent, teacher unions. Eight of sixteen thousand achieved certification in that year (NBPTS, 2004a). If 75% of those who pass received a bonus, financial reward, or salary increase equivalent to $2500,2 then an additional outlay of $20 million needs to be added to the total public expenditures. For 2003 alone, taxpayers invested nearly $57 million in National Board certification. This is a considerable sum, although only a minuscule portion of all funding for teacher professional development.3 In fiscally difficult times, states and local districts are now debating the merits of providing financial incentives and support for National Board certification (Griffin, 2003). Consequently, evidence on the effectiveness of this intervention is salient.
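As a sanity check on the figures above, the 2003 totals can be reproduced with simple arithmetic. This is only a sketch: the assessment-fee subtotal follows directly from the reported candidate count and fee, while the $20 million bonus subtotal is taken as stated rather than derived.

```python
# Back-of-envelope reconstruction of the 2003 cost figures cited above
# (NBPTS, 2004a). Only the fee subtotal is computed from reported inputs;
# the bonus subtotal is the article's own estimate, used as given.
candidates_2003 = 16_000
assessment_fee = 2_300                 # dollars per candidate
fee_total = candidates_2003 * assessment_fee

bonus_total = 20_000_000               # article's estimate for bonuses/raises

grand_total = fee_total + bonus_total
print(f"Fees:  ${fee_total:,}")        # $36,800,000
print(f"Total: ${grand_total:,}")      # $56,800,000, i.e. nearly $57 million
```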
With growing numbers of teachers pursuing National Board certification, and with increasing amounts of public dollars being used to fund and encourage the process, what kind of impact is certification having upon teacher quality? Does National Board certification provide teachers with an effective professional development? In other words, does the Board certification process provide opportunities for teachers to learn new knowledge and skills relevant to their work with students? Recent studies indicate that National Board teachers facilitate greater student achievement in their students (Goldhaber & Anthony, 2004; Vandevoort et al., 2004). However, very little quantitative evidence exists that indicates how and to what extent the certification process improves teacher quality.

The "What are teachers learning?" question remains the least understood aspect of the professional development paradigm. According to Wilson and Berne (1999), research in this area has yet to identify, conceptualize, and assess what teachers are learning. Program evaluation research in such professional development initiatives as the Eisenhower Professional Development Program for Math and Science Teachers, which focused upon teacher learning, produced vague answers due primarily to limited methodologies such as surveys and self-reports (Mullens, Leighton, Laguarda, & O'Brien, 1996). Strikingly missing from the literature are empirical studies that address the questions surrounding professional development, teacher learning, and the impact upon teacher quality.

The study reported here makes a contribution to these issues, based on the use of a quasi-experimental design with several cohorts of candidates. While prior research has supplied evidence on the validity of the certification process (see Bond, Smith, Baker, & Hattie, 2000) and on its association with student outcomes, the current study takes up the question of what teachers might be learning from the process itself. If professional development is accepted as a primary means of improving student learning, then it becomes important to understand what teachers may or may not be learning from a specific professional development intervention. Before teachers can improve their work with students, they first must acquire new knowledge and skills to employ in the classroom. The results of this study present a systematic analysis of what teachers are learning from a specific intervention and how that learning might influence the quality of instruction. For the policy community, this study provides valuable quantitative knowledge that describes a learning opportunity for science teachers that may have dramatic impact upon student achievement and the learning experience.4

2 Financial incentives range from $1000 to $7500 annually for the life of the certificate (10 years). States like North Carolina offer a 12% increase in salary for the life of the certificate with successful completion of the process. For a complete review of the incentives and support offered by each state and more than 500 local districts, visit http://www.nbpts.org/about/state.cfm.

3 Determining the annual expenditures for professional development is a tricky and uncertain process. For detailed studies on how estimates are calculated, see Killeen et al. (2002) and Odden et al. (2002).

Description of the Intervention

The National Board for Professional Teaching Standards was established in 1987 through a grant from the Carnegie Corporation of New York as a means of defining, assessing, and recognizing accomplished teaching (NBPTS, 1991). The NBPTS has identified three critical aspects of certification: standards, establishing, reviewing, and refining standards of accomplished teaching through consensus about what teachers should know and be able to do; assessment, providing a valid and accessible means to evaluate teachers against the standards; and professional development, providing teachers with the opportunity to strengthen their practice through self-examination (Koprowicz, 1994).
All standards, assessments, and scoring rubrics are based upon the five core propositions of accomplished teaching that the Board developed and disseminated.5 The yearlong certification process for teachers has two main components: the construction of a detailed, reflective, and analytic portfolio over a four- to six-month span; and the completion of a content-focused, four-hour computerized assessment.6 The portfolio for Adolescent and Young Adult Science has four sections that address the thirteen standards for AYA Science: Teaching a Major Idea in Science, Active Scientific Inquiry, Whole Class Discussion in Science, and Documented Accomplishments: Contributions to Student Learning.7 (For a description of these standards, see Table 3.) Using videotape, examples of student work, and artifacts representing professional accomplishments, teachers address questions in each section of the portfolio while constructing a presentation of their best practice.8 The final product serves as evidence demonstrating the teacher's impact on classroom academic environment, student learning, and the school community. In this study, the identified components of certification also served as the curriculum and resources for teacher learning.

4 Studies of effectiveness do not settle the matter. Cost-effectiveness studies that compare various kinds of professional development are needed. Nevertheless, as a first order of business, inquiry into the effectiveness of professional development treatments is worthwhile. For further commentary, see Borko (2004).

5 The five core propositions state that: 1) Teachers are committed to students and their learning. 2) Teachers know the subjects they teach and how to teach those subjects to students. 3) Teachers are responsible for managing and monitoring student learning. 4) Teachers think systematically about their practice and learn from experience. 5) Teachers are members of learning communities. (NBPTS, 1991, pp. 13-14)

6 The assessment center exercises have changed over the course of the last few years. Originally, for example, the exam for AYA Science took two 8-hour days covering post-secondary level content material in science and pedagogical knowledge in the science classroom. Today, the exam has been reduced to a 4-hour session focused entirely on science content. For a complete description, go to: http://www.nbpts.org/candidates/acob/nextgen/n20.html (NBPTS, 2004b).

7 It should be noted that starting in 2001, the new format for portfolio construction was phased in over a two-year period. The original portfolio required 6 entries. An entry called Assessment was merged with the three other classroom-oriented entries, and the Documented Accomplishments: Professional and Community entries were combined. Both formats were involved in this study, though which candidates had which form is unknown. The vast majority had to complete the new 4-entry version. Though the use of two versions during part of this study could represent a confounding factor, it is unlikely, since the two versions still address the same standards with the same types of tasks. The differences are more focused on reducing paperwork and streamlining the certification process rather than changing what it is teachers need to do. For example, Version 1 had two separate entries that addressed professional development history and community involvement respectively. In Version 2, these entries were combined into one entry called Documented Accomplishments that addresses both categories of professional activities.

8 For a thorough description of NBPTS, the process of certification, candidate requirements, and details of the process, see Bailey and Helms (2000).

National Board Certification as Professional Development

The perception of National Board Certification (NBC) as a productive professional development may be due in no small part to the numerous endorsements received from a wide range of organizations. One such endorsement is from the Center for Research on Education, Diversity & Excellence:

We believe the process (NBC) represents sound professional development practice: it is focused on subject matter content and student learning, uses teacher self-reflection and inquiry linked to the teacher's own teaching situation and practice, and is highly collaborative. This kind of thorough, focused professional development is far too rare for most of California's teachers. (CREDE, 2003, p. 10)

How does National Board certification fit with current conceptions of effective professional development? The answer can be found by understanding the characteristics of quality professional development. Over the last decade, ideas about what constitutes effective professional development for teachers have been changing (Little, 1993, 1997; Ball & Cohen, 1995; Huberman, 1993; Hargreaves, 1995; Darling-Hammond & McLaughlin, 1996; Stein & Brown, 1997; Sykes, 1999). Such reexamination of professional development has been motivated by a pervasive dissatisfaction with traditional professional development, whose features are well known. Hawley and Valli (1999, p. 135) summarize these as follows:

1. The individually guided model: individual teachers performing self-assessments and designing appropriate curriculum.
2. The observer/assessment model: a principal or colleague observes the teacher in class and then comments.
3. The development/improvement model: teachers involve themselves in whole-school reform efforts.
4. The training model: teacher participation in course work, workshops, and conferences.

Teacher learning in these models has not been considered very effective, due in part to the passive process whereby teachers are the recipients of knowledge and skills as defined by an outside authority such as a principal, visiting expert, or government administrator. The traditional model of professional development is not constructed around any set of common standards or goals for the educators. For the most part, the experiences are isolated, extrinsically motivated, undisciplined, and leave little room to assess the accountability of results.

Hawley and Valli describe what they term a consensus model for improved professional development, oriented around seven principles:

1. Driven by goals and student performance.
2. Involve teachers in the planning and implementation process.
3. School based and integral to school operations.
4. Organized around collaborative problem solving.
5. Continuous and ongoing, involving follow-up and support.
6. Information rich, with multiple sources of teacher knowledge and experience.
7. Provide opportunities for developing theoretical understanding of the knowledge and skills learned. (Hawley & Valli, 1999, p. 137)

Part of a comprehensive change process that includes issues of student learning, the consensus model of professional development sees the teacher as an active learner and the process of learning as embedded in practice. The model also emphasizes the role of reflection and professional discourse as effective means of teacher learning.

Both traditional and emerging models of professional development add something meaningful to an understanding of how teaching may improve as a result of National Board certification. Ingvarson (1998, p. 133) explained, "In principle, both systems are essential and each should be complementary to the other, like two pillars holding up the same building." This view of professional development suggests that the process of National Board certification is an effective form of professional development. The process is completely voluntary, as per the consensus model.
It encourages professional discourse and collegiality as described in elements of both the traditional and consensus models. It encourages teachers to examine their work both inside and outside the classroom while embedding the collection of data on practice within the practice itself, played out over a considerable length of time. In addition, Board certification has well-defined standards of performance and a well-specified goal as a result of participation. It is focused on both process and content, and it incorporates meaningful attention to student learning as part of the work of self-assessment.

As National Board certification grows and matures, its impact may be felt beyond the teachers directly involved together with the students they teach. It may challenge many of the fundamental or traditional assumptions about what professional development looks like and how it is implemented. Ingvarson (1998, p. 134) writes:

Steadily increasing numbers of education authorities are accepting Board certification as evidence of professional development. The hope is that a new infrastructure of professional learning will develop around the incentive of Board certification, and there are signs that this is happening.

According to Reichardt (2001), National Board certification provides a vision of good teaching and serves as a tool to direct individual teacher professional development, and there is emerging evidence of the effectiveness of National Board certification as a method to improve teacher quality. The study reported here is a first effort to test the proposition that board certification indeed is worthwhile professional development.
Evidence for National Board Certification as Effective Professional Development

The NBPTS has maintained that the process of recognizing accomplished teachers should provide opportunities for candidates to develop professionally (ETS, 1999). Some anecdotal evidence supports this objective. Whether they pass or fail, many teachers say they feel better about themselves as professionals and believe they are better practitioners because of their efforts. However, what teachers feel and believe may be quite different from what they learn. Therefore, inquiring about the precise nature of the learning outcomes from Board certification becomes important.

Anecdotal reports support the contention that National Board certification serves as effective professional development. Claims to this effect have regularly appeared (Tracz et al., 1995; Kowalski, Chittenden, Spicer, Jones, & Tocci, 1997). Numerous teachers have testified to the benefits of National Board certification for their practice (Bailey & Helms, 2000; Gardiner, 2000; Jenkins, 2000; Chase, 1999; Benz, 1997; Haynes, 1995; Marriot, 2001; Roden, 1999; Wiebke, 2000). These teachers use such terms as "enlightening" (Mahaley, 1999) or "revitalizing" (Areglado, 1999) to describe their experiences with National Board certification. These accounts provide insights into the value of the Board certification experience, but tell little about what candidates actually may be learning.

Surveys have been conducted that expand upon testimonial accounts and provide more extensive interpretations of what National Board Certified teachers are learning from the assessments. For example, the NBPTS issued two reports based upon survey data that provided a national profile of Board Certified teachers and their feelings of becoming a better teacher from the certification process (NBPTS, 2001a, 2001b).
These surveys report that among the more than 5,600 teachers who returned a completed survey (53% return rate), 92% felt that they were better teachers as a result of certification, and 96% rated the certification process as an "Excellent," "Very Good," or "Good" professional development experience (NBPTS, 2001b, p. 2). Such results are indicative yet leave open questions regarding the validity of self-reports and the particulars of how the process of National Board certification achieves particular outcomes.

Other studies, structured around a support group of candidates involved in the certification procedures, provide more fine-grained evidence that candidates learn from NBC by participating in extended professional communities (Burroughs, Schwartz, & Hendricks-Lee, 2000; Manouchehehri, 2001; Rotberg, Futrell, & Lieberman, 1998). Studies also have found a value to the NBPTS materials, such as the standards documents and portfolio instructions, as important sources of teacher learning (Kowalski et al., 1997; Rotberg et al., 1998). These investigations provide insight into the means and ends of Board certification, but do not pin down actual learning in any detail.

Recent commentary on professional development identifies a need to complement small-scale, qualitative inquiry on teacher learning with quantitative investigations that attempt to clarify, identify, and substantiate specific outcomes (Borko, 2004; Floden, 2001; Crawford & Impara, 2001; Garet, Porter, Desimone, Birman, & Yoon, 2001; Knapp, 2003). This study responds to these calls.

Research Design

The measurement of teacher learning in this study required three components: a uniform curriculum serving as the intervention, a viable means of assessment, and a method that fit the cohort nature of National Board certification. For the curriculum, we chose the tasks and materials for AYA Science certification due to the lead author's experience with this particular certificate. To convert observations into measurable data, we relied on the procedures and rubrics the National Board employs to assess candidates for certification. To measure teacher learning outcomes that result from a specified treatment, we turned to the logic of quasi-experimental design. The aim is to specify the association between certification (the independent variable) and teacher learning (the dependent variable).

Crucial to such a design is the random selection of subjects and their random assignment to treatments. Since potential learning from National Board certification begins with a self-selected population, an experimental approach is not feasible (Campbell & Stanley, 1963; Cook & Campbell, 1979). In response, we chose a quasi-experimental design that accounts for the voluntary, self-selected nature of the subjects' participation while maintaining the pre-post collection of data. Titled the Recurrent Institutional Cycle Design (RICD), it controls (to the extent possible) for non-random threats to internal validity while providing a means of establishing some degree of causality between the treatment and observed results (Campbell & Stanley, 1963).

The RICD has been used for treatments that recur on a cyclical schedule, where one group of individuals is finishing and another group is just beginning (Campbell & McCormick, 1957; Shavelson, Webb, & Hotta, 1987; Jimenez, 1999). Numerous studies in the social and medical sciences have used some variation of the RICD to address questions pertaining to program effectiveness, including the effects of an intervention on leadership development (Lafferty, 1998) and on employment (Juin-jen, 1999). As Figure 1 illustrates, the RICD allows for cross-sectional data to be collected from different groups at the same time, and longitudinal data from the same group over time.
Figure 1. RICD design for study.

    Group 1:        X   O1
    Group 2A:  O2   X   O3
    Group 2B:       X   O4
    Group 3:   O5   X

    X = Intervention (Board certification process)
    O = Data collection (interviews)

In this diagram of the research design, time is measured along the x-axis and groups of subjects are arrayed along the y-axis. The observation references are important in interpreting Table 4. Over approximately 15 months, from August 2002 to November 2003, data were collected from three groups sampled from three consecutive cohorts of AYA Science candidates. The design allows for the comparison of the pre and post measures between groups (cross-sectional) and within groups (longitudinal). Pre-to-post gain scores test the relationship between the intervention and specified learning outcomes. Group 2 was divided randomly to create Groups 2A and 2B. The two subgroups were needed to test for the effect of data collection on observed results. Since Group 2A had both pre and post observations (denoted 2A-Pre and 2A-Post respectively) and Group 2B only post, the comparison between the two post groups allows us to consider any impact the interview process for pre observations may have had on the assessed scores (an effect of testing).9

9 A comparison of Group 2A-Post and Group 2B revealed no significant differences, indicating no effect of testing.
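The testing-effect check described in footnote 9 amounts to a two-sample t-test between the overall scores of Group 2A-Post and Group 2B. A minimal sketch, assuming Welch's unequal-variance form of the test; the score lists are hypothetical stand-ins, since the study's raw scores are not published:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical overall (13-standard mean) scores for the two post-only groups.
group_2a_post = [2.9, 2.6, 3.1, 2.7, 2.8, 2.5, 3.0, 2.9]
group_2b_post = [2.8, 2.7, 3.0, 2.6, 2.9, 2.7, 2.8, 3.0]

t, df = welch_t(group_2a_post, group_2b_post)
# A |t| statistic well below conventional critical values is consistent
# with the footnote's finding of "no significant difference."
```

In the study itself this comparison was run on the assessed scores of 18 and 20 teachers respectively; the point of the sketch is only the shape of the computation.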
The RICD involves some restrictions, particularly concerning external validity (Campbell & Stanley, 1963). Generalization of outcomes is limited to teachers pursuing certification in AYA Science with the current set of 13 standards, and not to all science teachers or other certificates. As well, quasi-experimental research designs involve threats to their internal validity from a number of sources. While the RICD addresses threats due to history, testing, selection, mortality, and instrumentation, it cannot guard against maturation effects (Campbell & Stanley, 1963; Merriam & Simpson, 1995; Cook & Campbell, 1979). However, since the research is focused on the acquisition of a series of complex and highly specialized skills and knowledge sets, it seems unlikely that just growing older or more experienced would be a significant influence on outcomes (Campbell & Stanley, 1963, p. 59). The effects of history or a test-retest effect cannot explain observed cross-sectional differences between cohorts. The biggest threat to such observed differences, however, could be due to differences in recruitment (selection) from one year to the next. To account for this possible confounding explanation, demographic information was collected from each cohort and its respective group to provide a profile for comparing how each cohort and group compares on the characteristics of gender, years of teaching, school context, and students' ability. Finally, since the instruments used to measure differences were unchanged throughout the study, it is unlikely that instrumentation could be a threat to internal validity. For discussion of additional limitations and caveats, see Appendix A.

Hypothesis

Our primary hypothesis states that the post-data (Observations 1 and 4) will demonstrate gains when compared with the pre-data (Observations 2 and 5).
In this hypothesis, every participant from each of the three cohorts is considered simultaneously with four out of the five available observations. The alternative hypothesis (HAlt1) is that post scores from Group 1 and Group 2B will be greater when compared to the pre scores from Group 2A (Pre) and Group 3. The null hypothesis (HNull1) is that post scores from Group 1 and Group 2B will not be greater when compared to the pre scores from Group 2A (Pre) and Group 3. By identifying, quantifying, and substantiating observed differences among groups on each of the National Board's thirteen standards, this study provides evidence of teacher learning. This operational definition of learning within the context of this investigation allows for identifying effects of the intervention on thirteen dimensions of practice and a rich analysis of what the observed learning might mean. The experience of each candidate with the intervention of National Board certification serves as the independent variable. The dependent variable is represented by the assessed scores of each candidate on each of the thirteen standards of accomplished secondary science teaching.

Study Population

This investigation focused upon the population of secondary science teachers who applied to undertake National Board certification.10 Applicants must be certified in the state in which they teach, currently teach at least two classes in the area of certification, and have at least 3 years of full-time teaching experience. For this study, any teacher who registered for the certification process had to verify these requirements before being accepted as a candidate for National Board certification. The sample for this study was drawn from this pool of self-selected candidates for AYA Science certification. The population pool included all registered teachers for AYA certification for the years 2001, 2002, and 2003–04. For each year this represents approximately 450 teachers. From this pool, approximately half the teachers were randomly invited to participate. The final list of participants was determined on a first-to-reply basis. Recruitment of teachers ended once each of the three groups reached the target of 40 teachers; however, recruitment procedures from year to year were not perfectly even. Effects due to variations in recruitment remain an important limitation of this study (details in Appendix A).

10 AYA Science was chosen since the lead author has firsthand experience, having achieved National Board certification in AYA Science in 1998.

Group-to-cohort comparisons indicated a high degree of similarity and fair representation, though the information available on the entire cohort included only age, gender, and geographic location. In analyzing the similarities and differences between Groups 1, 2A, 2B, and 3, twelve characteristics were compared. Table 1 provides a summary of these comparisons. Of those areas that showed differences (i.e., years of experience, and how teachers learned of, their incentive for, and their support for National Board certification), the most important would appear to be years of experience. Groups 1, 2A, and 2B have an average of 15.3 years of experience, whereas Group 3 has an average of 11.0 years. Though Group 3 is significantly different from the other groups, we would argue that there is very little qualitative difference between teachers who have 11 years versus teachers who have 15 years of experience.
According to the literature on teacher effectiveness, both lengths of time fall into the category of veteran teacher (Stronge, 2002; Darling-Hammond, 2000). The other identified between-group differences (how teachers learned of National Board certification, their incentive for pursuing Board certification, and the types of support provided for the process) are most likely due to the smaller group sizes for 2A and 2B. When both groups are combined, the differences are not significant. Overall, more than 90% of all teachers in this study received some form of financial incentive and support for pursuing National Board certification.

Table 1
Summary of Group-to-Group Comparisons on Demographic Variables

Demographic Characteristic     Group 1 Post  Group 2A Pre  Group 2B Post  Group 3 Pre
Grades                         ND            ND            ND             ND
Content                        ND            ND            ND             ND
School                         ND            ND            ND             ND
Region                         ND            ND            ND             ND
Students                       ND            ND            ND             ND
Gender                         ND            ND            ND             ND
Years Teaching                 ND            ND            ND             Different
Class Size                     ND            ND            ND             ND
Length of Profiles (words)     ND            ND            ND             ND
Learn of National Board        ND            Different     ND             ND
Incentive for National Board   ND            ND            Different      ND
Support for National Board     ND            ND            Different      ND

ND: no significant difference.
Instrumentation

Interview protocols and assessment rubrics constitute the two forms of instrumentation used in this study.11 The goal of the structured interview was to reproduce, on a smaller scale, the portfolio construction experience candidates complete in the certification process. Trained assessors then scored the transcribed interview as if the transcripts were complete portfolios in miniature. The coding of the transcripts by assessors provided a form of assessment that measured a candidate's weight of evidence regarding each of the thirteen standards of accomplished teaching. This evidence was then converted into a score on a 4-point scale so that each interview yielded thirteen scores of teacher knowledge, which in turn formed the basis for the pre-post comparisons. The structured interview developed for this study is based (in part) on an approach developed by Kennedy, Ball, McDiarmid, and Williamson (1993) to track changes in teacher knowledge over the course of teacher education. These investigators state that one possible way of identifying changes in what teachers know is by presenting teachers with "hypothetical teaching situations" (Kennedy et al., 1993, p. 7). They continue, "If the situations were standardized, then the amount of irrelevant, idiosyncratic differences in responses could be reduced," and the "detailed, contextualized information about teachers' perceptions of practice" (ibid.) would be increased. Focusing the protocols on teaching situations and standardizing them for all study participants allows researchers to see how the various aspects of expertise (knowledge, beliefs, attitudes about learning, teaching, and subject matter) were drawn on to make teaching decisions (ibid.). The interview for this study had six sections representing the different parts of the portfolio. Each section (or scenario) was modeled after one of the four mandatory portfolio entries.
In addition, background and school context information also was collected. Table 2 summarizes the similarities and differences between the structured interview protocols and the portfolio entries.

Table 2
Comparison of Structured Interview and Portfolio Entry Required Aspects

Structured Interview Protocols                                      NBPTS Portfolio Entry / Standard
Introductory Questions: Teacher Background                          Teacher Background
Introductory Questions: School Context                              School Context
Introductory Questions: Student Profile                             Student Profile
Scenario #1: Teaching a Major Idea in Science Over Time             Teaching a Major Idea in Science Over Time
Scenario #2: Scientific Inquiry                                     Scientific Inquiry
Scenario #3: Best Practice                                          Assessment Center Tasks
Scenario #4: Whole Class Discussion                                 Whole Class Discussion
Scenario #5: Community, Professional Development, and Leadership    Community, Professional Development, and Leadership

To assess the quality of individual teacher responses to the structured protocols in the interview, we used the rubrics and scoring procedures developed by the NBPTS for candidate

11 For a discussion of the NBPTS rubrics and assessment procedures, please see Educational Testing Service, ETS (1999).
portfolio assessment. Experienced and knowledgeable National Board assessors for AYA Science were contracted to apply the assessment tools to the interview transcripts in a manner that paralleled their application to portfolio entries submitted to the National Board for certificate evaluation. The scoring rubrics are based on the same thirteen standards of accomplished secondary science teaching as the portfolio prompts that candidates must address in the presentation of their practice.

Standards to be Assessed

Teams of experienced science teachers, academics, researchers, and educational leaders developed the standards of accomplished teaching used in the AYA Science certificate. The standards are field-tested regularly, and every five years are re-evaluated and adjusted based on input from the science education community. They represent an expert consensus on what constitutes accomplished teaching in science. Table 3 provides an overview of the thirteen standards as separated into their four sets.12

Table 3
Standards for AYA Science Interview Protocols

Standard Set                                         Standards
Preparing the Way for Productive Student Learning    I. Understanding Students
                                                     II. Knowledge of Science
                                                     III. Instructional Resources
Advancing Student Learning                           IV. Science Inquiry
                                                     V. Goals & Conceptual Understanding
                                                     VI. Contexts of Science
Establishing a Favorable Context for Learning        VII. Engagement
                                                     VIII. Equitable Participation
                                                     IX. Learning Environment
Supporting Teaching and Learning                     X. Family & Community Outreach
                                                     XI. Assessment
                                                     XII. Reflection
                                                     XIII. Collegiality & Leadership

Interrater Reliability

Interrater reliability is a measure of the degree to which raters agree in their assessment of each standard for each candidate.
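The agreement statistic reported later in this section is a Pearson correlation among the assessors' scores. A minimal sketch of that computation; with three raters the study presumably averaged pairwise correlations, and the rating lists below are hypothetical stand-ins for actual assessor scores:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two raters' scores on the same entries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 4-point-scale ratings from two assessors on the same interviews.
rater_a = [3, 2, 4, 3, 2, 3, 4, 2]
rater_b = [3, 3, 4, 2, 2, 3, 3, 2]

r = pearson_r(rater_a, rater_b)
# Imperfect but positive agreement yields an r between 0 and 1.
```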
Assessors included the lead author (a former National Board assessor) and two National Board portfolio assessors, each with 3 years of assessment experience, who also had served as assessor trainers for the AYA Science entries. To improve the agreement among the three assessors, the study provided a scoring guide, resource documents, and one-to-one training. Assessors were trained for three days based on procedures used by the National Board to prepare its assessors to score live portfolios. Prior to actual scoring, assessors practiced their assessment skills on model entries (previously scored portfolios), where they learn to understand and agree upon a common framework for scoring.

12 See Appendix D for definitions of each standard.

The scoring rubric judged the family of scores on a
scale from one to four. For this study, we provided the same sort of training, but on a much smaller scale. Assessors received a binder with 125 pages of training materials to guide their scoring. After working through all exercises (approximately 4 hours of work) and returning the completed score sheets for practice entries, assessors were evaluated for their level of agreement and accuracy. An hour-long conversation with each assessor followed, where feedback was provided and calibrations made. Only after successfully finishing the training were live entries given to the assessors for scoring. The assessor training was designed around ideas that have been shown to reduce rater effects. Objectives for the training materials and procedures included familiarizing judges with the measures used in the study, ensuring that the assessors understand the sequence of operations they must perform for each entry, and providing direction on how questions regarding data may be resolved or interpreted (Rudner, 1992). That all three assessors were quite familiar with the scoring rules and procedures of the National Board increased confidence in their ratings. Still, the measures of reliability among raters reflect the difficulty and complexity of the task. Comparison of ratings by three assessors on thirteen standards reveals a fairly wide range of variability. Inter-rater reliability analysis for this study indicates a fair to moderate relationship among assessors' scores for the same interview. A Pearson correlation of .458 existed among the three raters. (For a complete description of the inter-rater reliability statistics, see Appendix B.) Reasons for this moderate level of agreement can be traced to the overlapping nature of some of the standards used. For example, Knowledge of Students states that teachers know how to assess their students' learning.
The standard for Assessment also emphasizes the assessment of student understanding. So an assessor has more than one category in which to place evidence pertaining to assessment. Unlike the work of assessors in portfolio assessment for the National Board, assessors on this project were instructed to determine thirteen distinct measures, as opposed to the one overall measure required by the Board's process. With such complexity and the opportunity to place evidence in multiple categories, the moderate inter-rater reliability is understandable. Still, the conclusions from this study must be qualified by this reliability concern. (See Appendix A for further details on this matter.)

Data Collection

Data collection began with receipt of the Consent Form, which included some basic questions for the candidates. This study was conducted under the assumption that data would be collected in clearly identifiable pre- and post-intervention conditions. Post observations were made after a candidate completed and submitted a portfolio and had taken the assessment center exams, and before they received word from the National Board about the outcome. This timing was achieved successfully in all post-observation cases. Pre observations were made after a candidate paid the non-refundable registration fee and before significant amounts of portfolio work had been completed. (See Appendix A for a more detailed discussion of the problems associated with the pre-observations.) Each candidate was sent an identical interview packet containing a sealed six-minute video clip of a whole class discussion in science, student artifacts, and classroom situations to be discussed during the interview. The specific questions they would be asked about these materials were not included in the packet.
During an extended telephone interview (ranging from 40 to 90 minutes), teachers examined and analyzed the artifacts, thought about and responded to the interview questions, and watched the videotape for the first time. After the audiotaped interview was
transcribed, a processed version of the transcription13 was then scored by at least two assessors using the rubrics and standards of the National Board certification process. For each transcript, an assessor provided one score for each of the thirteen standards. The thirteen assessed scores for each candidate were then aggregated to the group level so that means representing different observations could be compared for significant differences at the overall, set, and individual standard levels of analysis.

Results

The flowchart for analyzing the results is presented in Figure 2, which provides a branching schematic for the decision-making process. In this approach, testing continues only when significant differences are identified at the Overall, Sets, and then the Standards levels of analysis. The Recurrent Institutional Cycle Design for this study yielded a total of five observations (see Figure 1). Table 4 presents the Combined Pre-Post Comparison (H1) using four out of the five observations, pooling data from all 118 participants.14 All data sets except for O3 (Group 2A-Post) are included in this comparison. Throughout this analysis, one-tailed contrast t-tests were used to determine significance.15 This study asked, Does National Board certification lead to significant learning in teachers undertaking the process? If so, the post observations would be greater than the pre observations. Table 4 provides the results. There are 114 degrees of freedom, indicating that every participating teacher in the study was taken into account for this comparison. The value of the contrast has a p value of .009, which is significant at an alpha level of 0.05. The corresponding effect size of this observed difference is 0.473, which according to Cohen's effect size metric for the behavioral sciences is a moderate indication that there are meaningful differences between pre and post group scores (Cohen, 1977).
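The overall contrast can be approximated from the group means reported in Table 4 alone. A minimal sketch; the contrast weighting (+1, +1, -1, -1) and the use of a pooled standard deviation for the effect size are our assumptions about the computation, not a formula published by the study:

```python
# Group means from Table 4 (post: Groups 1 and 2B; pre: Groups 2A and 3).
means = {"g1_post": 2.81, "g2a_pre": 2.64, "g2b_post": 2.79, "g3_pre": 2.54}

# Contrast: sum of post means minus sum of pre means (weights +1, +1, -1, -1).
contrast = (means["g1_post"] + means["g2b_post"]) - (means["g2a_pre"] + means["g3_pre"])
# About 0.42, close to the 0.423 reported in Table 5 (difference is rounding
# of the group means).

# Average pre-to-post difference implied by the contrast.
avg_gain = contrast / 2

# Cohen's d, using the pooled standard deviation reported in Table 4 (~0.46);
# this lands in the neighborhood of the reported 0.473 effect size.
pooled_sd = 0.46
d = avg_gain / pooled_sd
```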
13 The processed transcript was a crucial step in this study. The assessors' task was meant to resemble their experience with live portfolios as much as possible. To hand them a raw transcript would have impeded their ability to adequately and fairly evaluate the written words. The National Board is a stickler for format (font, margins, spacing, etc.), and the researchers wanted the interview transcripts to resemble a real entry as much as possible. The aim of the processing was to improve the appearance, but not the meaning or intent, of the words. See Appendix C for an example of this process.

14 In Group 2A, one more subject than anticipated dropped out of the certification process (we anticipated six dropping out of the certification process and the study, but the actual number was seven), and technical problems with the tape recorder during one interview allowed for only 18 usable pre-post comparisons.

15 T-tests were used to compare the two post groups with the two pre groups. An ANOVA that compares the four groups without consideration of pairings produces similar, if less significant, results. Instead of nine significant standards, only the three most significant standards from the t-tests remain significant in the ANOVA analysis at the .05 level. Since the comparison is between pairs of groups, and not four groups independent from each other, the contrasting paired t-test is most appropriate.
Figure 2. Analysis flowchart.
Table 4
Descriptive Statistics for All Groups in Hypothesis 1

Cohort  Group (Obs.)  N     Mean  Std. Dev.  Std. Error  95% CI Lower  95% CI Upper  Min.  Max.
1       1 (post)      40    2.81  0.45       0.07        2.67          2.96          1.65  3.69
2A      2 (pre)       18    2.64  0.56       0.13        2.36          2.92          1.70  3.62
2B      4 (post)      20    2.79  0.43       0.10        2.59          2.99          2.04  3.64
3       5 (pre)       40    2.54  0.39       0.06        2.42          2.67          1.54  3.19
Means                 29.5  2.69  0.46       0.04        2.61          2.77          1.54  3.69

See Figure 1 for the sequence of observations and cohort groups.

Table 5
Test for Significance Overall for Hypothesis 1

Standard  Value of Contrast  Std. Error  t      df   p (1-tailed)  Effect Size
Overall   0.423              0.176       2.400  114  .009          0.473

Next, we can look more closely into the standards to pinpoint more specifically what teachers may have learned from the certification process. The four sets of standards for AYA Science require four separate t-tests for analysis, so it becomes necessary to employ an adjustment procedure that takes into account the use of multiple t-tests, which increases the likelihood of committing a Type I error. To address this concern, a Bonferroni adjustment procedure was employed. This conservative adjustment reduces the risk of a family-wise error while still allowing for the identification of observed differences (Yip, 2002; Homack, 2001). Because we wished to maintain a 0.05 alpha level across the four tests, significance was determined at an alpha level of .0125 for each test. At this level, as Table 6 reveals, the contrasts for Set II (Advancing Student Learning) and Set IV (Supporting Teaching and Student Learning) were both found to be significant, at p = .008 and p = .005 respectively. Set III (Establishing a Favorable Context for Student Learning) was found to be marginally significant at p = .013. Sets II and IV had effect sizes of 0.482 and 0.524 respectively, indicating the main areas of observed learning.
Because significant differences were identified in three out of the four sets, we can examine each set of standards more closely to identify the specific standards that may be responsible for the observed learning.
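The Bonferroni thresholds used at the two levels of analysis reduce to a simple division of the family-wise alpha by the number of tests. A short sketch, with the set-level p values taken from Table 6:

```python
# Family-wise alpha the study wishes to maintain at each level of analysis.
family_alpha = 0.05

# Level of sets: four t-tests, one per set of standards.
per_set_alpha = family_alpha / 4        # 0.0125, the threshold in Table 6

# Level of standards: ten t-tests across Sets II, III, and IV.
per_standard_alpha = family_alpha / 10  # 0.005, the threshold in Table 7

# Reported one-tailed p values for the four sets (Table 6).
set_p = {"I": 0.043, "II": 0.008, "III": 0.013, "IV": 0.005}
significant = [s for s, p in set_p.items() if p < per_set_alpha]
# Sets II and IV clear the adjusted threshold; Set III (.013) just misses it,
# which is why the text calls it "marginally significant."
```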
Table 6
Results of Analysis at the Level of Sets of Standards

Set                                                       Value of Contrast  Std. Error  t      df   p (1-tailed)  Effect Size
I. Preparing the Way for Productive Student Learning      0.304              0.176       1.727  114  .043          0.341
II. Advancing Student Learning                            0.494              0.202       2.442  114  .008*         0.482
III. Establishing Favorable Context for Student Learning  0.461              0.204       2.253  114  .013          0.444
IV. Supporting Teaching and Student Learning              0.459              0.173       2.656  114  .005*         0.524

* p < .0125

Sets II, III, and IV have a total of 10 standards, requiring 10 t-tests. Once again, a Bonferroni adjustment procedure was used to reduce the chances of a Type I error due to repeated use of the test. Again, to maintain an overall alpha level of .05, the significance level for each test was set at alpha = .005. At this level, two standards are significant and two are marginal, as shown in Table 7. Scientific Inquiry from Set II and Assessment from Set IV are significant at .001 and .002, with effect sizes of 0.606 and 0.596 respectively. The two standards that were marginally significant are Goals and Conceptual Understanding from Set II and Reflection from Set IV, at p = .009 and .007 respectively. Though marginally significant at the level of the set, Set III did not have any standards significant at the .005 level. What might account for the observed differences at the overall, sets, and standards levels of analysis? Are the gains observed in this study due to the intervention or something else? What percent of the observed variance can be attributed to possible covariates? To answer these questions, an analysis of covariance (ANCOVA) was conducted using potential confounding factors. Covariates included gender, years of experience, class size, student type, school context, and geographic region. The results of this analysis are presented in Table 8.
Table 7
Analysis of Individual Standards

Standard                              Value of Contrast  Std. Error  t      df   p (1-tailed)  Effect Size
II. Advancing Student Learning
  Science Inquiry                     0.667              0.217       3.073  114  .001*         0.606
  Goals and Conceptual Understanding  0.494              0.206       2.392  114  .009          0.472
  Contexts of Science                 0.321              0.279       1.151  114  .126
III. Establishing Favorable Context for Student Learning
  Engagement                          0.477              0.208       2.294  114  .012          0.452
  Equitable Participation             0.448              0.242       1.851  114  .033          0.365
  Learning Environment                0.457              0.223       2.049  114  .021          0.404
IV. Supporting Teaching and Student Learning
  Family and Community Outreach       0.174              0.217       0.803  114  .212
  Assessment                          0.647              0.214       3.022  114  .002*         0.596
  Reflection                          0.607              0.241       2.515  114  .007          0.496
  Collegiality and Leadership         0.409              0.180       2.276  114  .012          0.449

* p < .005. No effect size is calculated for standards where the significance is over .10.

In this ANCOVA, only Student Type is a significant covariate and possible confounding variable at the .05 level, p = .026. The teachers' gender, years of experience, class size, school context, and geographic region did not co-vary with the observed gains in assessed scores. So, is Student Type a viable alternative for explaining observed results?
Table 8
Analysis of Covariance

Source of Variation   Degrees of Freedom  Sum of Squares  Mean Square  F     p
Model                 14                  5.39            0.38         2.12  .017
Cohort Group          3                   1.78            0.59         3.26  .025
Pre-Post Comparison   1                   0.65            0.65         3.59  .061
Gender of Teacher     1                   0.27            0.27         1.49  .226
Years Experience      1                   0.00            0.00         0.01  .939
Class Size            1                   0.04            0.04         0.24  .622
Student Type          3                   1.75            0.58         3.21  .026*
School Context        2                   0.52            0.26         1.42  .247
Region                3                   1.03            0.34         1.88  .138
Error                 101                 18.36           0.18
Corrected Total       115                 23.75
R-Square              0.23

* Significant at the p = 0.05 level.

The Student Type variable was derived from candidate responses to the question, "How would you generally describe your students?" Responses typically were quite general (i.e., low, below average, average, above average, high, varied, or mixed). We then coded all responses to this question into four possible categories: Low, Average, High, or Varied. The student type indicator is not based upon any standardized or objective source of data, but rather each teacher's overall impression of his or her students. Student Type reflects the teacher's perception of his or her students' general ability rather than the actual ability level of the students. If we look at how the various pre and post groups rated their students' abilities and compare their observed overall scores, an important pattern emerges. Teachers who rated students' abilities lower tended to score lower in this study than peers who rated students' abilities higher. However, this relationship holds true for both pre and post group observations, with teachers in the post groups consistently demonstrating improved scores compared to teachers in the pre groups, regardless of how they rated their students' abilities. Figure 3 illustrates this result, showing four parallel lines. The two lower lines are from the pre groups and the two higher lines are from the post groups.
For each category of student ability, the scores from the post observations are higher than those from the pre observations. Because the four lines never intersect across the four student ability categories, we can conclude that there is no interaction between how a teacher rates student ability and their observed scores. In fact, the certification process becomes a more likely explanation for observed differences, since the only possible confounding variable shows no interaction with the results. This co-variation between observed scores and the teachers' self-reported impressions of student ability suggests that a teacher's expectations for student success play an important role in predicting what kind of scores (both pre and post intervention) a teacher would receive in this study. Teachers with higher expectations for their students, based on a more positive assessment of student abilities, tend to perform better on the standards that assess their knowledge and skills.
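The non-crossing-lines argument can be made mechanical: within every student-type category, each post group's mean should exceed each pre group's mean. A sketch of that check; the cell means below are hypothetical values matching the general shape of Figure 3, since the study's cell means are not published:

```python
# Hypothetical mean overall ratings by group and student-type category.
mean_by_type = {
    "Group 1 Post":  {"low": 2.6, "average": 2.8, "high": 3.0, "varied": 2.8},
    "Group 2B Post": {"low": 2.6, "average": 2.7, "high": 2.9, "varied": 2.8},
    "Group 2A Pre":  {"low": 2.4, "average": 2.6, "high": 2.8, "varied": 2.6},
    "Group 3 Pre":   {"low": 2.3, "average": 2.5, "high": 2.7, "varied": 2.5},
}

def post_always_above_pre(means, post_groups, pre_groups):
    """True if every post-group mean beats every pre-group mean in every
    student-type category, i.e., the lines never cross."""
    categories = ["low", "average", "high", "varied"]
    return all(
        means[post][c] > means[pre][c]
        for c in categories
        for post in post_groups
        for pre in pre_groups
    )

parallel = post_always_above_pre(
    mean_by_type,
    ["Group 1 Post", "Group 2B Post"],
    ["Group 2A Pre", "Group 3 Pre"],
)
```

Strictly, non-crossing lines rule out a crossover interaction; the formal no-interaction claim rests on the ANCOVA reported in Table 8.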
Figure 3. Mean teacher rating by student type (mean ratings for Groups 1 Post, 2A Pre, 2B Post, and 3 Pre plotted across the Low, Average, High, and Varied student-type categories).

Two standards, then, Scientific Inquiry and Assessment, demonstrated the most improvement as determined by the assessment process. How might this result be interpreted? To help answer this question, we turn to qualitative data from the open-ended interview questions. At the end of each of the 138 interviews conducted with 120 teachers for this study, participants were asked to generally address their experience with National Board certification.16 The candidates responded with answers that provided detailed evidence regarding the overall, positive, and/or problematic aspects of the experience. A quantitative comparison of all 78 post-interview responses to this prompt provides a means of comparing how teachers opted to discuss aspects of the experience. For example, in their responses, did teachers focus on issues of Engagement, Reflection, or Knowledge of Students? Analysis of these data, using a grounded theory approach and a coding scheme based upon the language and meaning of each of the 13 standards, supplies a gauge for determining which standards (if any) participating teachers found most significant to their own learning (Glaser, 1995). The results strongly support the quantitative results. The three standards commented on most by teachers (Scientific Inquiry, Assessment, and Reflection) corresponded with the observed significant (or marginally significant) gains. Figure 4 overlays the number of candidate comments regarding specific

16 Eighteen out of 20 teachers in Group 2A were interviewed twice, resulting in a total of 138 interviews.
Of these, 78 were post interviews (40 from Group 1, 20 from Group 2B, and 18 from Group 2A-Post) and 60 were pre interviews (40 from Group 3 and 20 from Group 2A-Pre).
Education Policy Analysis Archives Vol. 14 No. 5 22 standards with the observed gain scores. The areas of greatest change correspond quite closely with the reported learning opportunities afforded by National Board certification in Adolescent and Young Adult Science. The convergence of these two sources of evidence supports the conclusion that the certification process promotes productive te acher learning in the areas of Scientific Inquiry, Assessment, and Reflection. Observed Gains and Teacher Comments0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8St01St02St03St04St05St06St07St08St09St10St11St12St13StandardGain0 5 10 15 20 25# of Comments Observed Gains Number of Comments Figure 4 Comparison of Observed Gains with Teacher Comments Scientific Inquiry: Evidence for Learning The National Board defines the Scientific Inquiry st andard as a scale that pertains to a teachers ability to develop in students the mental operations, habits of mind, and attitudes that characterize the process of inquiry (see Appendix D for ope rational definitions for all 13 standards of accomplished science teaching). Is it reasonable that the greatest teacher learning was identified with the Scientific Inquiry standard? To answer this question, we return to the comments made by the teachers who shared their ideas about the certification process. The open-ended interview data allow for further probes into the nature of teacher learning. When we discuss Scientific Inquiry, what does it mean? How does the Board conceptualize Scientific Inquiry? In its discussion of the standard, the NBPTS states that, It is not a basic goal of science inst ruction to fill studen ts with as much information as possible; ra ther, it is to help stud ents acquire the mental operations, habits of mind and attitudes th at characterize the process of scientific inquirythat is, to teach them how scientists question, think, and reason. (NBPTS, 1997, p. 
31)

According to the standard, the best way for teachers to reach this goal is to have students take an active role in their learning by arranging frequent opportunities for hands-on science activities and open-ended investigations, complete with post-activity time for reflection and analysis. Teachers' understanding of teaching with scientific inquiry is reflected in the choices and decisions made in their planning, lesson management, and assessment. Teachers choose age- and skill-level appropriate classroom activities that are as much minds on as they are hands
on. Other indications that a teacher effectively employs scientific inquiry in the classroom pertain to questioning style, wait-time after asking questions, discussion management, and an acceptance of the unpredictable consequences of an activity and student-centered pedagogy (NBPTS, 1997, p. 32). Though the definition of scientific inquiry may be broad and have different meanings to different teachers, the National Board defines it with specific characteristics and observable qualities. An illustrative list of skills, dispositions, habits of mind, and pedagogical approaches outlines what a teacher should know and be able to use effectively in the science classroom. Consequently, pre-to-post improvement in scores on Scientific Inquiry supports the claim that teachers are learning to align their practice more closely with the National Board's conception of scientific inquiry and teaching. The interview evidence generally supports this claim. For many teachers who commented on this standard, it is apparent that the National Board's version of scientific inquiry constituted a new approach to teaching science. For example, when Sharon, a teacher from Wyoming, was asked what she learned from the certification process, her response expressed a recurring theme among many candidates.17 She said:

I would point as an example to the increased use of inquiry within my classroom. I think that that has had some strong benefits in terms of helping students to think about what science is ... And science is a process and not just a memorization of facts to spew them back out at the teacher. And I do think that doing the National Boards has helped me to incorporate that more into the classroom.
(Group 2A-Post, Teacher #12)

Mike, another teacher from Wyoming, concurred:

[The] National Board is making me look at how much scientific inquiry I'm doing, where the students are actually doing the inquiry versus me just regurgitating. (Group 2B, Teacher #32)

For both of these teachers, National Board certification allowed them to revisit, rethink, and retry a process with which they were already familiar in the science classroom. For other teachers, scientific inquiry represented a new way to teach. In attempting to fulfill the requirements of the portfolio, they were directed to teach according to the scientific inquiry method. As Susan, a teacher from Arkansas, commented,

I had a real tough time coming up with and dealing with the inquiry-based process. And I found out that other science teachers that had gone through the National Board Certification, some who got it and some who didn't, had a tough time with that. It's very difficult to not want to jump in and help the kids. And to see them sort of struggling and kind of thinking what they needed to do type thing. And even though, you know, I would ... I had to be very careful with my questions so that they would think of what they had to do next without me giving them an idea as to what to do. (Group 2A-Post, Teacher #17)

Susan is describing the issues associated with trying to teach in a new way. The efforts were demanding and went against her existing tendencies and habits of mind. She needed to be much more self-conscious regarding how students were asked questions and how responses to students were formulated. For an experienced teacher with well-rehearsed scripts for interacting with students, the standards for scientific inquiry proved to be a difficult challenge, but probably a significant learning experience. Learning associated with scientific inquiry can be traced to the portfolio requirements for National Board certification.
17 Pseudonyms are used for all teachers.

The portfolio describes in detail the National Board's version of
scientific inquiry. The prompts teachers need to address throughout the entry on Active Scientific Inquiry pertain to how decisions are made, actions taken, and evidence collected in support of student learning. This framework for reflection and analysis provides the curriculum from which teachers plan, construct, and implement lessons.

Assessment: Evidence for Learning

Assessment in educational circles has a wide range of meanings, from standardized tests that provide a particular view of what students may understand to more immediate and classroom-based activities. The National Board defines its Assessment standard as "the process of using formal and informal methods of data-gathering to determine students' growing scientific literacy, understanding, and appreciation" (NBPTS, 1997, p. 45). The results from this process are then used to inform instructional decisions (Gallagher, 2000); hence, effective assessment relies heavily on a teacher's strong understanding of content. Evidence in support of learning related to Assessment is substantial. The theme emphasized is the repeated use of focused, detailed, and extensive evidence around student learning. For many teachers, this is a new practice. Through the efforts to construct a meaningful portfolio, issues of student assessment are explored in rich detail. For example, Karen, a teacher from Kentucky, reports,

Well, like I used to just grade a test. You know, based on how the grades were on the test, that would kind of be my indication of if the kids learned or not. And now, I just see that there's all types of assessment and that how a student does on your test is going to have an influence on your teaching and how you instruct. You know, I never looked at that as a tool for changing my instruction. (Group 1, Teacher #9)

Karen expresses a deeper understanding and appreciation for assessment that did not exist prior to the certification process.
Her practice is enriched by assessment, as it becomes a tool for improving student learning instead of a requirement at unit's end. The National Board's emphasis on learner assessment also provoked Karen to change her view of students in the learning process. She continues,

I realized that you really need to look at a student's individual needs and style ... you know, like for a long time I taught just college-bound kids and you kind of think that they are just all the same ... And they are not ... it just made me individualize more. (Group 1, Teacher #9)

Karen describes a significant adjustment regarding her approach to teaching and learning, from a teacher-centered stance to a more student-centered appreciation. Assessment facilitated this change by allowing the teacher the opportunity to more closely examine what individual students are actually learning in relation to her teaching. The qualitative data also suggest that the learning associated with the Assessment standard is even more profound than that observed in relation to the Scientific Inquiry standard. For example, once the power of detailed and intensified assessment of student learning is experienced, it changes the way a teacher thinks about practice. Rita, another teacher from Kentucky, says,

And it truly ... and it has already carried over to this year ... you know, when kids don't write very well, you almost dread reading their writing. But I find myself really wanting to read their lab reports and stuff. And I feel like what I say to them on their papers ... I definitely give them more feedback. But my feedback is
more direct. So I feel like I analyze their work better than I did before. (Group 1, Teacher #41)

In this example, the teacher looks forward to a task that she previously dreaded. Assessment has not only improved her understanding of student ideas and reasoning, but has also led to an improved appreciation for effective engagement through appropriate and complete feedback. These responses suggest that for some of the teachers sampled, a rather profound shift has taken place, perhaps pointing to the kind of self-sustaining, generative learning now recognized in the literature as a relatively rare event (Franke, Carpenter, Fennema, Ansell, & Behrend, 1998; Franke, Carpenter, Levi, & Fennema, 2001). In this case, we cannot know how long such effects may last, nor how such reported insights actually may affect teaching practice. But in the annals of teacher learning, the reports of these teachers are remarkable in themselves against the backdrop of so many teachers' dismal accounts of their professional development experiences. The process of Board certification appears to have been a transformative experience for at least some teachers on some dimensions of their practice.

Interpreting Learning Outcomes

Intriguing as this evidence of learning from Board certification is, we need to look further to uncover some important differences in how teachers experienced the process. For example, one study found evidence that professional growth from National Board certification resulted secondarily in changes in candidates' classroom behavior (Kowalski et al., 1997). Another account, however, portrays teachers regarding Board certification as a bureaucratic process undertaken for extrinsic reasons (i.e., additional income) (Ballou, 2003).
Consistent with this view, teachers might simply jump through hoops while de-coupling the process from their teaching, much as they are reported to do with university-based master's degree coursework (Murnane, Singer, Willett, Kemple, & Olsen, 1991). Such a result would hardly constitute a worthwhile form of professional development. Did we see this phenomenon in our interviews? Alternatively, as with other forms of learning, might influences of certification yield delayed effects? Some research on pre-service teacher education indicates such results (e.g., Grossman et al., 2000). In this reckoning, teachers may mull over what they have learned, only gradually introducing new ideas into their practice. Such possibilities suggest that there may be a range of outcomes, not all of one kind. Indeed, this is what we found. In particular, we identify three qualitatively different learning responses, which we label dynamic, technical, and deferred. Dynamic learning refers to self-reports of immediate, meaningful change in a teacher's beliefs, understandings, and actions in the classroom. Roughly half of all teachers interviewed post-intervention fell into the dynamic learning category.18 For example, Jasmine, a teacher from Tennessee, provides a glimpse of this when she states:

The analytical part of learning doesn't just end with sending in your paperwork to National Certification. It's something then that you can't help but continue to do. The questions that I had to answer in written form pop into my head now all the time. (Group 2A-Post, Teacher #3)

18 We provide rough estimates of the proportions of teachers falling into the dynamic, technical, and deferred categories but note that coding on this aspect of the study was carried out by the lead author alone, so no reliability estimates are reported. Consequently, we underscore the proper caution here.
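The footnote's caution concerns inter-rater reliability, which could not be computed because a single coder categorized the interviews. In a replication with a second coder, agreement could be summarized with a chance-corrected statistic such as Cohen's kappa. The sketch below is purely illustrative: the function is a standard kappa computation, but the two sets of ratings are hypothetical, not drawn from the study's interviews.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement between two coders, corrected for chance."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of interviews both coders labeled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    # Agreement expected by chance, given each coder's own category rates.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codings of ten interviews into the study's three categories.
a = ["dynamic", "dynamic", "technical", "deferred", "dynamic",
     "technical", "dynamic", "deferred", "technical", "dynamic"]
b = ["dynamic", "technical", "technical", "deferred", "dynamic",
     "technical", "dynamic", "dynamic", "technical", "dynamic"]
print(round(cohens_kappa(a, b), 3))  # prints 0.672
```

With eight of ten agreements against a chance expectation of 0.39, kappa here is about 0.67, which most rubrics would call substantial agreement; reporting such a figure is what the footnote indicates the study could not do.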
Lynn, a teacher from Florida, presents another example of this orientation when she claims that the experience of serious reflection from the certification experience "is carried over and you are just different. You think about it [teaching/learning] differently" (Group 1, Teacher #12). Both Lynn and Jasmine seem to have internalized the National Board framework of reflection and action. The skills they acquired pertaining to the act of reflecting on practice and student learning persist well after the certification process is over. Jasmine "can't help but continue" with the same approach that was conveyed by repetition and focus through the portfolio prompts. Lynn describes the effects of learning as carrying over into the current semester. Many comments from teachers focus on the new skills and knowledge gained from certification and the immediate impact on their teaching of scientific inquiry. For example, Shirley, a teacher from New York, is still involved with portfolio-like activities. She states,

I'm still doing, you know, more inquiry-based labs, more projects. My classroom has become much more student-centered and more having students be more analytical and critical thinkers. (Group 1, Teacher #36)

She has learned an appreciation for a more student-centered approach, for more analytical and critical thinking in the students. A teacher from California provides another example. She describes how gains in self-confidence provided the necessary strength to try a new type of lesson with her class. She says,

The skills that I gained last year in focusing on good teaching and the things that they make you focus on in your portfolio ... I gained a lot of confidence in those skills and they became more natural and easy for me. Whereas I might not have taken that risk had I not gone through the process.
(Group 1, Teacher #40)

Connie, another teacher, hints that prior to certification the skills in question were present, but remained weak or underdeveloped. These are skills and dispositions associated with conducting a more student-centered class where the teacher does not dispense knowledge, but helps students create their own understandings. The teacher says that these skills became "more easy," which implies that they were present before certification, which provided an opportunity to develop them further. She is making a direct connection between National Board certification in the previous semester and new lessons implemented in the current term. Dynamic learning might also be captured in another teacher's description of learning from National Board certification. Paul, a teacher from Massachusetts, says,

If you can get a kid to think about a subject that you are teaching ... if you can get a kid to internalize it ... then he'll have it forever. It's the same thing, I think, with adults. (Group 1, Teacher #4)

Dynamic learning may be internalized, but the important element is that the teacher acts upon that new knowledge, skill, or understanding to consciously and deliberately try to improve the learning experience in his class. How long this improved teaching will continue is open to question. Forever, as Paul suggests? Or might he revert to old ways after a few weeks or months? Longitudinal investigations are needed to answer such questions. A second interpretation, technical learning, indicates an emphasis on acquiring techniques useful in obtaining certification that do not necessarily carry over into teaching itself. Teachers are learning how to be better candidates for National Board certification, but not necessarily how to be better teachers.
An analysis of teacher comments suggests that roughly one quarter of teachers interviewed post-intervention fell into this category. With technical learning, the certification process is essentially de-coupled from the teacher's actual practice in the classroom. Chris, a teacher from South Carolina, reflects on this prospect in comments about other teachers whom she has observed:

I think that while for some teachers it's going to make them better teachers, for some teachers they are going to have to do during that year and not change what
they are really doing: that they are putting on a show and they just do that well. And whether you maintain [that kind of teaching] afterwards is what I question. On how much that's being done. (Group 1, Teacher #7)

Mathew, a teacher from Virginia, echoes this observation when he says,

I haven't put on a dog and pony show this year where I'm inventing all of these terrific lessons that I didn't have before. (Group 2B, Teacher #31)

Both perceive some of their colleagues as not being honest with the spirit of self-reflection, self-realization, and professional development that is part of the National Board certification process. These "dog and pony" teachers put their efforts into impressing the assessors who evaluate the portfolios. They orient to the certification process, not to the improvement of their own teaching; they respond to the incentives associated with certification rather than to intrinsic motivations for professional development. For example, Margaret from Maryland describes her experience with these words:

I felt like it was more of a ... more of an exercise in trying to find out what they were looking for. And so I spent more of my energy doing that than in actually reflecting on my own practice and writing about it. (Group 2B, Teacher #33)

Margaret's comment suggests that the requirements of certification interfered with the quality of her experience. She interprets the tasks as pleasing the assessors rather than as addressing her genuine learning needs or those of her students. The emphasis here is on developing good strategies for passing the tests, for writing the way the readers want you to write, and for picking up tips on how to successfully manage the process. Such learning is only (at best) tangentially related to classroom practice. Another teacher, Sarah from Florida, contributes an additional insight.
She says,

I think a lot of it is, what hoops can you jump through and how well, how good are you at writing, at saying what you need to say to prove based upon the rubric? Are you good at being able to work through that? If you are, that's great. But someone might be a very good teacher and, just based upon what they submit, it might not be evidence of what they are doing. (Group 1, Teacher #29)

Accomplished teachers, according to this view, may not be good at proving they are accomplished. National Board certification may be too restrictive in format and style to fit some teachers' modes of communication. Sarah's comment also underscores the importance of the technical knowledge needed to succeed at communicating one's practice effectively through the portfolio. The choice of artifacts, decisions regarding lessons, actions taken during the taping of a class, and the details of how to analyze student work all contribute to technical learning. While all candidates to a certain degree need to address the demands of the technical components of certification, the problem seems to arise when the technical learning overshadows the intended emphasis on self-reflection and student learning. For one teacher, this problem had no resolution. Jerry, a science educator from Florida, came up with a scientific metaphor in his response:

Frankly, I found that it was so difficult. It's sort of like Heisenberg's Principle. You can either know where the electron is and not know what it's doing, or know what it's doing and not know where it is ... I could either teach as an effective teacher or I could go through this procedure to prove I'm an effective teacher. (Group 2A-Post, Teacher #19)

The implication here is that "I can't do both."
The teacher could not devote the time and energy required to communicate the quality of practice effectively through the construction of the portfolio and teach with the same intensity that he was accustomed to prior to the demands of the certification process.
So for some teachers the certification process actually seemed to be a diversion from their teaching, in favor of jumping through hoops, rather than a stimulus for reflecting on or learning about teaching. So conceived, learning from certification had value that was narrowly instrumental at best. Yet a third possibility presented itself: that a genuine form of learning related to good teaching might be deferred to a time when teachers had more opportunity to reflect and to consider how to use what they had learned from the process. Such deferred learning holds out the possibility for genuine influences on practice at some future time. Approximately one quarter of teachers interviewed post-intervention fell into this category. Sharon, a teacher from Wyoming, illustrates this prospect when she says,

Now that I've completed everything ... it's all turned in ... the stressful parts of it are gone; and I have the opportunity to sort of look back and observe and see how some of the things have been incorporated into my teaching ... I think that it was particularly useful. (Group 2A-Post, Teacher #12)

Sharon is describing how the stress associated with technical learning interferes with the possibility of dynamic learning. Her remark that "I have the opportunity to sort of look back and observe and see how some of the things have been incorporated into my teaching" is a meta-cognitive act, where the teacher thinks about how she thinks about her practice. This teacher is not claiming that she is more reflective, more student-centered, or more focused on planning, assessment, or engagement. No specific outcome is identified, at least not yet. The teacher sees this kind of analysis as removed from the self. At this point, the teacher is unaware of how her practice may have changed.
She needs distance from the intense experience of certification and time to examine her practice to recognize differences in values, decision-making, and beliefs that may have arisen in response to the certification process. Deferred learning also may be related to uncertainty. To the extent that a teacher is uncertain whether learning took place as a result of National Board certification, the possibility exists that a learning outcome might be realized some time in the future. In describing whether certification affected his practice, Mathew, a teacher from Virginia, comments, "I'm not sure that it's changed, at this point, how I taught" (Group 2B, Teacher #31). By qualifying the statement with "at this point," he leaves open the possibility that lessons learned from the experience may be realized at a future point in time. As teachers reflect on the process after the fact, many may be considering how to make use of things they learned, exploring discrepancies between their preferred methods and what they perceive to be preferred by the National Board. Such reflection can move in two directions: to reaffirm a teacher's commitment to her existing preferences, or to provoke some change. To the extent that certification unsettled some teachers' thinking, it holds the possibility of ushering in change, but only the possibility.

Discussion

In the current climate of policy debate, single-study results are often promoted, sometimes in the press, sometimes by policy advocates, as definitive resolutions to complex questions. We reject such oversimplification in drawing conclusions from this study for policy and practice. Instead, we frame the implications along these lines: first for policy, then for the practice of professional development. Teacher learning has become a policy variable in the context of a wide range of efforts to improve education.
From its inception, National Board certification was promoted as a professional development opportunity, part of the ongoing effort to professionalize teaching as a policy strategy.
If standards serve as a critical carrier for the knowledge base of teaching, then standards-based practice clearly must become a hallmark of teaching if it is to realize the promise of professionalism. The certification process involves many of the hallmarks of effective professional development, but chiefly as it represents the use of standards in practice. What teachers learn from the process is to evaluate their own practice in the light of objective, external standards. But this process may or may not in fact yield the outcomes that the National Board and its proponents have sought. This study provides one indication, and the first, that teachers are undertaking worthwhile learning, bolstering the position of the advocates for professionalism as a policy choice. The immediate implication is that public investment in Board certification is warranted. Certainly, however, we underscore how slender is the evidence presented here. Limitations on this study include the restriction to one certification area; learning measured via a telephone rating task; reliabilities in the low end of the range; a moderate overall effect size; and qualitative evidence indicating that some but not all teachers took positive advantage of the process to improve their teaching. These limitations urge caution against any sweeping conclusions, but we choose to emphasize the generally positive results, notwithstanding the limitations noted. Turning next to implications for professional development, research is just beginning to explore what teachers are learning from professional development. According to Wilson and Berne (1999), research in this area has yet to identify, conceptualize, and assess what teachers are learning. Determining the effects of professional development on teaching and learning is notoriously difficult. Consider some of the problems.
Teachers might acquire new knowledge or skills yet choose not to deploy them in their practice. Or they might make changes initially, but revert gradually to old ways. Or the changes they make might not enhance their practice. Some prior scholarship reveals teachers importing only certain aspects of reforms into their teaching, with uncertain overall and long-term effects. Describing such problems, investigators have resorted to such metaphors as "hybrids" to indicate the distinctive mix of grafting the new onto the old (see, for example, Cohen, 1990; Cuban, 1993). Furthermore, change does not automatically mean improvement. The latter term requires a value judgment as well as an empirical result. In consequence, many problems attend any summary conclusions about teacher learning from professional development experiences. The study reported here cannot resolve such issues authoritatively. Results require qualified interpretation, which we offer along these lines. First, the National Board standards represent a broad consensus within the science education community that, in Joseph Schwab's evocative terms, science is a "narrative of inquiry," not simply a "rhetoric of conclusions" (Schwab, 1974). Instruction that aspires to teach students the methods of science is a critically important issue at the dawn of the 21st century. Consequently, the underlying values represented by the National Board standards constitute a professional consensus; what these standards teach about science instruction is eminently defensible. Second, the overall effect size of 0.47, derived from multiple comparisons, falls within the moderate-to-strong range based on several comparative criteria. In the field of science education, for example, a meta-analysis from the early 1980s serves as one comparison. Enz, Blecha, and Horak (1982) reviewed research projects that investigated the effects of professional development in science education on participating teachers and/or their students.
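The overall effect size of 0.47 is a standardized mean difference; assuming it was computed in the manner of Cohen (1977), whose text appears in the reference list, the arithmetic can be sketched as follows. The means, standard deviations, and group sizes below are invented for illustration only and are not the study's data.

```python
import math

def cohens_d(mean_post, mean_pre, sd_post, sd_pre, n_post, n_pre):
    """Standardized mean difference with a pooled standard deviation."""
    pooled_var = ((n_post - 1) * sd_post**2 + (n_pre - 1) * sd_pre**2) \
                 / (n_post + n_pre - 2)
    return (mean_post - mean_pre) / math.sqrt(pooled_var)

# Hypothetical pre/post rating-task means (not taken from the study),
# with group sizes matching the 78 post- and 60 pre-interviews reported.
d = cohens_d(mean_post=2.85, mean_pre=2.50,
             sd_post=0.75, sd_pre=0.74,
             n_post=78, n_pre=60)
print(round(d, 2))  # prints 0.47
```

The point of the sketch is that an effect size of this kind expresses the pre-to-post gain in standard-deviation units, which is what makes the comparison to the 0.84 average from the Enz, Blecha, and Horak meta-analysis meaningful.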
In the sixteen studies gathered from 1973, the overall average effect size for science in-service projects was 0.84. Our result falls below this average, which would itself be regarded as on the high end of the range. Turning to our qualitative data, this study is also significant in identifying which aspects of the National Board's standards appeared to exert the greatest influence. Other studies will be required to confirm these results, including examination of other certificate areas, but we offer one conjecture by way of explanation. A certain logic would suggest that the greatest learning is likely to occur where there was the largest discrepancy between standards-based practice and pre-existing
teaching. If this is roughly true, then our study indicates that many secondary science teachers are not emphasizing principles and practices of the scientific method in their instruction, nor using assessment feedback in ongoing instructional decision-making. This conjecture points to these aspects of practice as needful of improvement in both pre-service and in-service education for science teachers. Finally, the study clearly uncovered a mix of what we called dynamic, technical, and deferred learning. This too seems quite plausible. Some teachers might regard Board certification as a genuine learning opportunity, others might undertake it for the extrinsic rewards, and still others might learn from the process in a gradually evolving manner. Mixed motives and outcomes are more nearly the norm in human affairs than singular or pristine results. In fact, the different categories of learning described in this study support the conclusion that Board certification provides the opportunity for teachers to learn about specific aspects of their work. How that learning impacts practice, however, remains unclear. Teachers in this study demonstrated significant learning in the areas of Scientific Inquiry and Assessment regardless of whether they were successful at achieving Board certification or of the characteristics of their particular school setting. Therefore, it would appear that the benefits of Board certification go beyond the immediate financial rewards successful candidates receive, taking the form of improved knowledge and understanding of science instruction for both those who achieve and those who do not achieve certification.
If teacher learning is considered an important component of improving teacher quality and, ultimately, student achievement, then these results point to the possibility that the process of Board certification may positively impact the quality of instruction (as defined by the National Board) and students' learning experiences in two vital areas of instruction. Further research on this relationship is needed to pinpoint the degree to which science teaching improves, the duration of those changes, and the impact of changes in practice upon student achievement. On balance, though, we are inclined to read the overall pattern of results in support of National Board certification as a worthwhile form of professional development. The caveats, as always, are important, but so is the preponderance of the evidence.
References

Areglado, N. (1999). I became convinced. Journal of Staff Development, 20(1), 35.

Bailey, D. L., & Helms, R. G. (2000). The national board certified teacher (Fastback 470). Bloomington, IN: Phi Delta Kappa Educational Foundation.

Ball, D., & Cohen, D. (1995). Developing practice, developing practitioners: Toward a practice-based theory of professional education. New York: National Commission on Teaching and America's Future.

Ballou, D. (2003). Certifying accomplished teachers: A critical look at the National Board for Professional Teaching Standards. Peabody Journal of Education, 78(4), 201.

Benz, J. (1997). Measuring up: A personal journey through National Board Certification in art. Art Education, 50(5), 20, 49.

Bond, L., Smith, T., Baker, W., & Hattie, J. (2000). The certification system of the National Board for Professional Teaching Standards: A construct and consequential validity study. Greensboro, NC: Center for Educational Research and Evaluation, University of North Carolina at Greensboro.

Borko, H. (2004). Professional development and teacher learning: Mapping the terrain. Educational Researcher, 33(8), 3.

Burroughs, R., Schwartz, T. A., & Hendricks-Lee, M. (2000). Communities of practice and discourse communities: Negotiating boundaries in NBPTS certification. Teachers College Record, 102(2), 344.

Campbell, D., & Stanley, J. (1963). Experimental and quasi-experimental designs for research. Hopewell, NJ: Houghton Mifflin Company.

Campbell, D., & McCormack, T. (1957). Military experience and attitudes toward authority. American Journal of Sociology, 62, 482.

Cavaluzzo, L. (2004, November). Is National Board Certification an effective signal of teacher quality? Alexandria, VA: The CNA Corporation.

Chase, B. (1999). NEA Today, 2(99), 17.

Cohen, D. (1990). Revolution in one classroom: The case of Mrs. Oublier. Educational Evaluation and Policy Analysis, 12(3), 311.
Cohen, J. (1977). Statistical power analysis for the behavioral sciences. New York, Academic Press. Cook, T. D. & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Boston: Houghton Mifflin Company.
Crawford, J., & Impara, J. (2001). Critical issues, current trends, and possible futures in quantitative methods. In V. Richardson (Ed.), Handbook of research on teaching (4th ed., pp. 133). Washington, DC: American Educational Research Association.

CREDE. (2002). The five standards for effective pedagogy. Center for Research on Education, Diversity & Excellence. Retrieved May 23, 2004, from http://www.crede.ucsc.edu/standards/standards.html

Cuban, L. (1993). How teachers taught: Constancy and change in American classrooms, 1890-1990 (2nd ed.). New York: Teachers College Press.

Darling-Hammond, L. (2000). Teacher quality and student achievement: A review of state policy evidence. Education Policy Analysis Archives, 8(1). Retrieved February 10, 2004, from http://epaa.asu.edu/epaa/v8n1/

Darling-Hammond, L. (2002, September 6). Research and rhetoric on teacher certification: A response to "Teacher Certification Reconsidered." Education Policy Analysis Archives, 10(36). Retrieved February 10, 2004, from http://epaa.asu.edu/epaa/v10n36.html

Darling-Hammond, L., & McLaughlin, M. W. (1996). Policies that support professional development in an era of reform. In M. W. McLaughlin & I. Oberman (Eds.), Teacher learning (1st ed., pp. 202). New York: Teachers College Press.

Educational Testing Service. (1999). Technical analysis report. Princeton, NJ: National Board for Professional Teaching Standards.

Enz, J., Blecha, M., & Horak, W. (1982, April). Review and analysis of reports of science inservice projects: Recommendations for the future. Paper presented at the annual meeting of the National Science Teachers Association, Chicago, IL.

Floden, R. (2001). Research on effects of teaching: A continuing model for research on teaching. In V. Richardson (Ed.), Handbook of research on teaching (4th ed., pp. 3). Washington, DC: American Educational Research Association.

Franke, M., Carpenter, T., Fennema, E., Ansell, E., & Behrend, J. (1998). Understanding teachers' self-sustaining, generative change in the context of professional development. Teaching and Teacher Education, 14(1), 67.

Franke, M., Carpenter, T., Levi, L., & Fennema, E. (2001). Capturing teachers' generative change: A follow-up study of professional development in mathematics. American Educational Research Journal, 38(3), 653.

Gallagher, J. (2000). Teaching for understanding and application of science knowledge. School Science and Mathematics, 100(6), 310.

Gardiner, S. (2000). I leave with more ideas than I can ever use. Journal of Staff Development, 21(4), 14.
Garet, M. S., Porter, A. C., Desimone, L., Birman, B. F., & Yoon, K. (2001). What makes professional development effective? Results from a national sample of teachers. American Educational Research Journal, 38(4), 915.

Glaser, B. G. (1995). A look at grounded theory: 1984. In B. G. Glaser (Ed.), Grounded theory, 1984 (pp. 3). Mill Valley, CA: Sociology Press.

Goldhaber, D., & Anthony, E. (2004). Teacher quality and student achievement (Urban Diversity Series). Center on Reinventing Public Education. Retrieved November 6, 2004, from http://www.crpe.org/workingpapers/pdf/NBPTSquality_report.pdf

Griffin, K. (2003, February 8). Bonuses going to teachers with national certification. The Daily Oklahoman, p. 1.

Grossman, P. L., Valencia, S. W., Evans, K., Thompson, C., Martin, S., & Place, N. (2000). Transitions into teaching: Learning to teach writing in teacher education and beyond. Journal of Literacy Research, 32(4), 631.

Hargreaves, A. (1995). Development and desire: A postmodern perspective. In T. R. Guskey & M. Huberman (Eds.), Professional development in education: New paradigms and practices (pp. 9-34). New York: Teachers College, Columbia University.

Hawley, W. D., & Valli, L. (1999). The essentials of effective professional development. In L. Darling-Hammond & G. Sykes (Eds.), Teaching as the learning profession (pp. 127). San Francisco: Jossey-Bass.

Haynes, D. D. (1995). One teacher's experience with National Board assessment. Educational Leadership, 52, 58.

Homack, S. R. (2001, February 1). Understanding what ANOVA post hoc tests are, really. Paper presented at the annual meeting of the Southwest Educational Research Association, New Orleans, LA.

Huberman, M. (1993). The model of the independent artisan in teachers' professional relations. In J. Little & M. McLaughlin (Eds.), Teachers' work: Individuals, colleagues, and contexts (pp. 11). New York: Teachers College Press.

Ingvarson, L. (1998). Professional development as the pursuit of professional standards: The standards-based professional development system. Teaching and Teacher Education, 14, 127.

Jenkins, K. (2000). Earning board certification: Making time to grow. Educational Leadership, 57, 46.

Jimenez, P. M. (1999). Psychosocial intervention with drug addicts in prison: Description and results of a programme. Intervención Psicosocial, 8(2), 233.
Juin-jen, C., Chung-cheng, L., & Ching-chong, L. (1999). The employment and wage effect of shifting to an indirect tax when firms set wages efficiently. Economic Record, 75, 156-166.

Kennedy, M., Ball, D., McDiarmid, G., & Williamson, X. (1993). A study package for examining and tracking changes in teachers' knowledge. East Lansing, MI: National Center for Research on Teacher Education, Michigan State University.

Killeen, K., Monk, D., & Plecki, M. (2002). School district spending on professional development: Insights available from national data, 1992. Journal of Education Finance, 28(Summer), 25.

Knapp, M. (2003). Professional development as a policy pathway. In R. Floden (Ed.), Review of research in education, 27 (pp. 109). Washington, DC: American Educational Research Association.

Koprowicz, C. L. (1994). What state legislators need to know about the National Board for Professional Teaching Standards. Denver, CO: National Conference of State Legislatures.

Kowalski, K., Chittenden, E., Spicer, W., Jones, J., & Tocci, C. (1997, March 24). Professional development in the context of National Board for Professional Teaching Standards certification: Implications beyond certification. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL. (ERIC Document Reproduction Service No. ED 412257)

Lafferty, B. D. (1998). An empirical investigation of a leadership development program (Unpublished doctoral dissertation). George Washington University, Washington, DC.

Leef, G. C. (2003). National Board Certification: Is North Carolina getting its money's worth? A policy report from the North Carolina Education Alliance. Retrieved February 6, 2005, from http://www.johnlocke.org/acrobat/policyReports/leefcertification2.pdf

Little, J. W. (1993). Teachers' professional development in a climate of education reform. Educational Evaluation and Policy Analysis, 15(2), 129.

Little, J. W. (1997). Excellence in professional development and professional community (Working paper). Berkeley, CA: University of California, Berkeley, for U.S. Department of Education, Blue Ribbon Schools Program.

Mahaley, D. (1999, September/October). One teacher's account. The Clearing House, p. 5.

Manouchehri, A. (2001). Collegial interaction and reflective practice. Journal of the Association of Teacher Educators, 22, 86.

Marriot. (2001, February/March). Increased insight. Reading Today, p. 7.

Merriam, S., & Simpson, E. (1995). A guide to research for educators and trainers of adults (2nd ed.). Malabar, FL: Krieger Publishing Company.
Mullens, J. E., Leighton, M. S., Laguarda, K. G., & O'Brien, E. (1996). Student learning, teaching quality, and professional development: Theoretical linkages, current measurement, and recommendations for future data collection (Working Paper Series). Washington, DC: Policy Studies Associates.

Murnane, R. J., Singer, J. D., Willett, J. B., Kemple, J. J., & Olsen, R. J. (1991). Who will teach? Policies that matter. Cambridge, MA: Harvard University Press.

National Board for Professional Teaching Standards. (1991). Toward high and rigorous standards for the teaching profession (3rd ed.). Washington, DC: Author.

National Board for Professional Teaching Standards. (1997). Instruction manual for adolescent and young adult (AYA) science certification. Fairfax, VA: Author.

National Board for Professional Teaching Standards. (2001a). The impact of National Board Certification on teachers (Survey results). Arlington, VA: Author.

National Board for Professional Teaching Standards. (2001b). I am a better teacher (Survey results). Arlington, VA: Author.

National Board for Professional Teaching Standards. (2001c). Adolescence and young adult science standards. Retrieved December 1, 2005, from http://www.nbpts.org/pdf/aya_science.pdf

National Board for Professional Teaching Standards. (2003). Q & A: Questions and answers for teachers about National Board Certification. Arlington, VA: Author.

National Board for Professional Teaching Standards. (2004a). State and local incentives for certification. Retrieved August 8, 2004, from http://www.nbpts.org/about/state.cfm

National Board for Professional Teaching Standards. (2004b). The 5 core propositions of accomplished teaching. Retrieved August 8, 2004, from http://www.nbpts.org/standards/five_core.html

National Commission on Teaching and America's Future. (1996). What matters most: Teaching for America's future. New York: Teachers College, Columbia University.

Odden, A., Archibald, S., Fermanich, M., & Gallagher, H. A. (2002). A cost framework for professional development. Journal of Education Finance, 28(1), 51.

Reichardt, R. (2001). Toward a comprehensive approach to teacher quality (Policy Brief No. 13). Aurora, CO: Mid-Continent Research for Education and Learning. Retrieved July 20, 2005, from http://www.mcrel.org/pdfconversion/policybriefs/pb_towardacomprehensive.html

Roden, J. (1999). Winners and winners: What I learned from not earning National Board Certification. Teaching and Change, 6(4), 4169.
Rotberg, I. C., Futrell, M. H., & Lieberman, J. M. (1998). National Board Certification: Increasing participation and assessing impacts. Phi Delta Kappan, 79, 462.

Rudner, L. M. (1992). Reducing errors due to the use of judges. Practical Assessment, Research & Evaluation, 3(3). Retrieved August 8, 2004, from http://ericae.net/pare/getvn.asp?v=3&n=3

Schwab, J. J. (1974). Decision and choice: The coming duty of science teaching. Journal of Research in Science Teaching, 11(4), 309.

Sclafani, S. (2005, July 18). The U.S.-China perspective. Presentation to the US-China Conference on Secondary Education Reform, Michigan State University, East Lansing, MI.

Shavelson, R., Webb, N., & Hotta, J. (1987). The concept of exchangeability in designing telecourse evaluations. CADE: Journal of Distance Education/Revue de l'enseignement à distance. Retrieved November 3, 2004, from http://cade.athabascau.ca/vol2.1/shavelson.html

Stein, M. K., & Brown, C. A. (1997). Teacher learning in a social context: Integrating collaborative and institutional processes with the study of teacher change. In B. Nelson (Ed.), Mathematics teachers in transition (pp. 155). Mahwah, NJ: Lawrence Erlbaum Associates.

Stronge, J. H. (2002). Qualities of effective teachers. Alexandria, VA: Association for Supervision and Curriculum Development.

Sykes, G. (1999). Teacher and student learning. In L. Darling-Hammond & G. Sykes (Eds.), Teaching as the learning profession (pp. 151). San Francisco: Jossey-Bass.

Tracz, S., Sienty, S., Todorov, K., Snyder, J., Takashima, B., Pensabene, R., Olsen, B., Pauls, L., & Sork, J. (1995, April). Improvement in teaching skills: Perspectives from National Board for Professional Teaching Standards field test network candidates. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA.

Vandevoort, L. G., Amrein-Beardsley, A., & Berliner, D. C. (2004, September 8). National Board certified teachers and their students' achievement. Education Policy Analysis Archives, 12(46). Retrieved November 6, 2004, from http://epaa.asu.edu/epaa/v12n46/

Wiebke, K. (2000, November/December). My journey through National Board Certification. Instructor, 110, 26.

Wilson, S. M., & Berne, J. (1999). Teacher learning and the acquisition of professional knowledge: An examination of research on contemporary professional development. In A. Iran-Nejad & P. D. Pearson (Eds.), Review of research in education (Vol. 24, pp. 173-209). Washington, DC: American Educational Research Association.
Yip, J. (2002, May). Two short notes on statistics. Porcupine!, 25. Retrieved November 10, 2004, from http://www.hku.hk/ecology/porcupine/por25/25misc-stat.htm
Appendix A
Study Limitations and Caveats

The limitations of this study are not numerous, but they are important. Most notable was the inconsistency in recruitment protocols across the three groups. In particular, the identification of subjects for Group 2 was problematic. The main concern was the delayed group formation, which resulted in candidates participating in the study with differing degrees of experience with the intervention.

The lack of a true pre-intervention status for Group 2A proved problematic on two accounts. First, institutional and procedural obstacles hampered the timely and efficient recruitment of participants for Group 2A-Pre. The resulting delays produced a less than ideal pre-intervention point for data collection. We suspect that as teachers spent more time with the intervention, their assessed scores in this study improved. To explore this possibility, we performed a series of correlation studies comparing the observed data with the status of the candidate (amount of experience with the intervention) at the time of the interview. The results indicate a weak to moderate relationship between the two variables (r = 0.40). The relationship was strong enough to support the conclusion that had the interviews for Group 2A-Pre occurred earlier, or prior to the certification process, the data collected from Group 2A-Post would most likely have shown greater pre- to post-intervention gains and a larger effect size for Group 2A.

Another way this problem might have been avoided would have been to increase the size of Group 2. In this study, the pre and post observations of Group 2A played a more vital role in the analysis than any of the other observations, yet it was the smallest group, with only 18 teachers. Groups 1 and 3 each had 40 teachers.
To address this problem, it would have been better to increase the size of Group 2 to 60 teachers, allowing for 30 each in Group 2A and Group 2B. The Group 2 increase would then be balanced by reducing Groups 1 and 3 to 30 teachers each. Such an alteration would have kept the same number of teachers in the study while better reflecting the relative influence each observation contributes to the overall analysis.

Consistent use of random selection and random assignment also would have improved this study. Only Groups 1 and 2B were randomly selected for data collection. Due to unavoidable time pressures, the pre-groups were interviewed on a first come, first served basis. Improved recruitment procedures would allow for more consistent use of random selection for all three groups and random assignment within Group 2 to either 2A or 2B. Because of the inconsistent use of random selection and the quasi-experimental nature of this research, the conclusions of this study generalize only to secondary science teachers who volunteer for National Board certification; external validity is limited to this population of teachers.

Finally, inter-rater reliability was less than ideal for the measurement instruments. Though a reliability of .458 between assessors is considered low to moderate in social research, there is reason to believe that with greater resources of time and money this reliability could be improved. Had the assessors met together for a weekend to wrestle with issues specific to this research, many of the observed inconsistencies could in all likelihood have been avoided. For example, during the calibration process, assessors' practice scores were compared against a standard. If an assessor consistently scored above the standard, the trainer and assessor discussed the issue so that appropriate adjustments could be made.
To make this process more effective, assessor calibration would need to be more intensive before the collection of data and then revisited periodically throughout the study to maintain consistency among all three assessors. The best way to improve agreement among raters on such a complex assessment activity is to invest more time in developing a common, agreed-upon scoring framework.

Appendix B
Inter-rater Reliabilities

Inter-rater reliability is a measure of the degree to which raters agree in their assessment of each standard for each candidate. An examination of Figure B-1 reveals some important observations. First, measures of different standards have different levels of reliability, though these differences occupy a rather small range. With a maximum average reliability of 0.632 and a minimum average reliability of 0.259, the mean value of 0.458 demonstrates that more measures fall toward the lower than the higher end of the range.

Table B-1 provides descriptive statistics comparing the ratings of the assessors. Each measure is consistent across all three individuals, with the exception that Assessor 1 scored every transcript while Assessors 2 and 3 each scored slightly more than half of the total number. It is important to note that if Assessor 1's data are removed from the study, an analysis of the data from Assessors 2 and 3 yields nearly identical results; that is, Scientific Inquiry and Assessment remain the two standards that demonstrate significant pre- to post-intervention gains.

Table B-1: Assessor Descriptive Statistics

Measure                      Assessor #1   Assessor #2
Mean                         2.76          2.59
Standard Error               0.06          0.07
Median                       2.82          2.62
Mode                         2.85          2.52
Standard Deviation           0.64          0.57
Sample Variance              0.42          0.32
Kurtosis                     -0.52         -0.48
Skewness                     -0.16         -0.06
Range                        2.79          2.40
Minimum                      1.21          1.42
Maximum                      4.00          3.83
Sum                          364.77        170.75
Count                        132.00        66.00
Largest (1)                  4.00          3.83
Smallest (1)                 1.21          1.42
Confidence Level (95.0%)     0.11          0.14
[Figure B-1: Comparison of inter-rater reliability (Pearson r) across standards, plotted separately for Groups I & II and Groups I & III. Measures shown: Knowledge of Students, Knowledge of Science, Instructional Resources, Science Inquiry, Goals and Conceptual Contexts of Science, Engagement, Equitable Participation, Learning Environment, Family and Community, Assessment, Reflection, Collegiality and Leadership, Sets I-IV, and Average. Reliability values range from 0 to approximately 0.7.]
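The reliabilities in Figure B-1 and the mean of 0.458 are averages of Pearson correlations between pairs of assessors' scores on each standard. As a minimal sketch of that calculation, the following computes the pairwise Pearson r between two assessors' ratings on a single standard; the scores here are invented for illustration and are not the study's data.

```python
# Hypothetical sketch: Pearson r between two assessors' scores on one standard.
# In the study, this would be repeated for each standard and each assessor pair
# (e.g., Assessors 1 & 2, Assessors 1 & 3) and the values averaged.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented ratings: two assessors score the same five candidates on one standard.
assessor_1 = [2.8, 3.1, 2.2, 3.6, 2.5]
assessor_2 = [2.6, 3.3, 2.0, 3.4, 2.7]

r = pearson_r(assessor_1, assessor_2)
print(round(r, 3))  # → 0.924
```

Averaging such pairwise values across the thirteen standards (and, where a third rater overlaps, across assessor pairs) yields per-standard reliabilities like those plotted in Figure B-1.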
Appendix C
Processing Example

To make transcripts easier to understand and to allow the content of the teacher's response to be the sole focus of the assessment, transcripts were processed to make them all appear visually the same and to remove any evidence of the interviewer.

Original Version

(1) Do you think that the teacher has been effective in facilitating and supporting meaningful scientific discussion where students explore and have the opportunity to understand important scientific ideas? What does this teacher do or not do that supports your answer?

Assessor: Scientific ideas?

Yeah…for example the fact that there…changes in the organisms or fewer or more of one type of organism. They didn't just look for one reason. They talked about all the different reasons and how maybe what they saw didn't explain it…because they were there just briefly. And I thought that was good.

Assessor: And how did the teacher help support that?

Well…he affirmed it. He sort of was like "yeah." I thought just sort of re-agreeing with the students.

Assessor: I'm sorry, I couldn't hear…

Agreeing or affirming with the students that made those comments.

Final Version

(1) Do you think that the teacher has been effective in facilitating and supporting meaningful scientific discussion where students explore and have the opportunity to understand important scientific ideas? What does this teacher do or not do that supports your answer?

Yeah…for example the fact that there…changes in the organisms or fewer or more of one type of organism. They didn't just look for one reason. They talked about all the different reasons and how maybe what they saw didn't explain it…because they were there just briefly. And I thought that was good.

Well…he affirmed it. He sort of was like "yeah." I thought just sort of re-agreeing with the students…agreeing or affirming with the students that made those comments.
Appendix D
The NBPTS Standards for Accomplished Teaching in AYA Science

Source: National Board for Professional Teaching Standards, 2001c, pp. 5-6.

1. Preparing the Way for Productive Student Learning

I. Understanding Students: This scale pertains to teachers' knowledge of students. More specifically, teachers know how students learn, actively get to know students as individuals, and determine students' understandings of science as well as their individual learning backgrounds.

II. Knowledge of Science: This scale pertains to teachers' broad and current knowledge of science and science education, along with in-depth knowledge of one of the subfields of science, which they use to set important and appropriate learning goals.

III. Instructional Resources: This scale pertains to teachers' ability to select and adapt instructional resources, including technology and laboratory and community resources, and to create their own to support active student explorations of science.

2. Advancing Student Learning

IV. Science Inquiry: This scale pertains to a teacher's ability to develop in students the mental operations, habits of mind, and attitudes that characterize the process of scientific inquiry.

V. Conceptual Understandings: This scale pertains to teachers' use of a variety of instructional strategies to expand students' understandings of the major ideas of science.

VI. Contexts of Science: This scale pertains to the ability of a teacher to create opportunities for students to examine the human contexts of science, including its history, reciprocal relationship with technology, ties to mathematics, and impacts on society, so that students make connections across the disciplines of science and into other subject areas.

3. Establishing a Favorable Context for Student Learning

VII. Engagement: This scale pertains to teachers' ability to stimulate interest in science and technology and to elicit all their students' sustained participation in learning activities.

VIII. Equitable Participation: This scale pertains to the ability of a teacher to take steps that ensure that all students, including those from groups that have historically not been encouraged to enter the world of science, participate in the study of science.

IX. Learning Environment: This scale pertains to a teacher's ability to create safe and supportive learning environments that foster high expectations for the success of all students and in which students experience the values inherent in the practice of science.

4. Supporting Teaching and Student Learning

X. Family and Community Outreach: This scale pertains to the teacher's ability to work proactively with families and communities to serve the best interests of each student.

XI. Assessment: This scale pertains to a teacher's ability to assess student learning through a variety of means that align with stated learning goals.

XII. Reflection: This scale pertains to a teacher's ability to constantly analyze, evaluate, and strengthen their practice in order to improve the quality of their students' learning experiences.

XIII. Collegiality and Leadership: This scale pertains to a teacher's willingness and ability to contribute to the quality of the practice of their colleagues, to the instructional program of the school, and to the work of the larger professional community.
About the Authors

David Lustick
University of Massachusetts Lowell

Gary Sykes
Michigan State University

Email: DavidLustick@uml.edu

David Lustick is a National Board certified secondary science teacher and an assistant professor of science and mathematics at the University of Massachusetts Lowell's Graduate School of Education, where he focuses on issues of professional development, teacher learning, and science education. His research interests examine science education from pedagogical, international, and policy perspectives.

Gary Sykes is a professor of educational administration and teacher education at Michigan State University, where he specializes in educational policy relating to teaching and teacher education. His research interests center on policy issues associated with the improvement of teaching and teacher education, on the development of leadership preparation programs, and on educational choice as an emerging policy issue.
EDUCATION POLICY ANALYSIS ARCHIVES
http://epaa.asu.edu

Editor: Sherman Dorn, University of South Florida
Production Assistant: Chris Murrell, Arizona State University

General questions about appropriateness of topics or particular articles may be addressed to the Editor, Sherman Dorn, firstname.lastname@example.org.

Editorial Board
Michael W. Apple, University of Wisconsin
David C. Berliner, Arizona State University
Robert Bickel, Marshall University
Gregory Camilli, Rutgers University
Casey Cobb, University of Connecticut
Linda Darling-Hammond, Stanford University
Gunapala Edirisooriya, Youngstown State University
Mark E. Fetler, California Commission on Teacher Credentialing
Gustavo E. Fischman, Arizona State University
Richard Garlikov, Birmingham, Alabama
Gene V Glass, Arizona State University
Thomas F. Green, Syracuse University
Aimee Howley, Ohio University
Craig B. Howley, Ohio University
William Hunter, University of Ontario Institute of Technology
Daniel Kallós, Umeå University
Benjamin Levin, University of Manitoba
Thomas Mauhs-Pugh, Green Mountain College
Les McLean, University of Toronto
Heinrich Mintrop, University of California, Berkeley
Michele Moses, Arizona State University
Anthony G. Rud Jr., Purdue University
Michael Scriven, Western Michigan University
Terrence G. Wiley, Arizona State University
John Willinsky, University of British Columbia
EDUCATION POLICY ANALYSIS ARCHIVES
English-language Graduate-Student Editorial Board
Noga Admon, New York University
Jessica Allen, University of Colorado
Cheryl Aman, University of British Columbia
Anne Black, University of Connecticut
Marisa Cannata, Michigan State University
Chad d'Entremont, Teachers College, Columbia University
Carol Da Silva, Harvard University
Tara Donahue, Michigan State University
Camille Farrington, University of Illinois Chicago
Chris Frey, Indiana University
Amy Garrett Dikkers, University of Minnesota
Misty Ginicola, Yale University
Jake Gross, Indiana University
Hee Kyung Hong, Loyola University Chicago
Jennifer Lloyd, University of British Columbia
Heather Lord, Yale University
Shereeza Mohammed, Florida Atlantic University
Ben Superfine, University of Michigan
John Weathers, University of Pennsylvania
Kyo Yamashiro, University of California Los Angeles
Education Policy Analysis Archives Vol. 14 No. 5 46 Archivos Analticos de Polticas Educativas Associate Editors Gustavo E. Fischman & Pablo Gentili Arizona State University & Universidade do Estado do Rio de Janeiro Founding Associate Editor for Spanish Language (1998003) Roberto Rodrguez Gmez Editorial Board Hugo Aboites Universidad Autnoma Metropolitana-Xochimilco Adrin Acosta Universidad de Guadalajara Mxico Claudio Almonacid Avila Universidad Metropolitana de Ciencias de la Educacin, Chile Dalila Andrade de Oliveira Universidade Federal de Minas Gerais, Belo Horizonte, Brasil Alejandra Birgin Ministerio de Educacin, Argentina Teresa Bracho Centro de Investigacin y Docencia Econmica-CIDE Alejandro Canales Universidad Nacional Autnoma de Mxico Ursula Casanova Arizona State University, Tempe, Arizona Sigfredo Chiroque Instituto de Pedagoga Popular, Per Erwin Epstein Loyola University, Chicago, Illinois Mariano Fernndez Enguita Universidad de Salamanca. Espaa Gaudncio Frigotto Universidade Estadual do Rio de Janeiro, Brasil Rollin Kent Universidad Autnoma de Puebla. Puebla, Mxico Walter Kohan Universidade Estadual do Rio de Janeiro, Brasil Roberto Leher Universidade Estadual do Rio de Janeiro, Brasil Daniel C. 
Levy University at Albany, SUNY, Albany, New York Nilma Limo Gomes Universidade Federal de Minas Gerais, Belo Horizonte Pia Lindquist Wong California State University, Sacramento, California Mara Loreto Egaa Programa Interdisciplinario de Investigacin en Educacin Mariano Narodowski Universidad To rcuato Di Tella, Argentina Iolanda de Oliveira Universidade Federal Fluminense, Brasil Grover Pango Foro Latinoamericano de Polticas Educativas, Per Vanilda Paiva Universidade Estadual Do Rio De Janeiro, Brasil Miguel Pereira Catedratico Un iversidad de Granada, Espaa Angel Ignacio Prez Gmez Universidad de Mlaga Mnica Pini Universidad Nacional de San Martin, Argentina Romualdo Portella do Oliveira Universidade de So Paulo Diana Rhoten Social Science Research Council, New York, New York Jos Gimeno Sacristn Universidad de Valencia, Espaa Daniel Schugurensky Ontario Institute for Studies in Education, Canada Susan Street Centro de Investigaciones y Estudios Superiores en Antropologia Social Occidente, Guadalajara, Mxico Nelly P. Stromquist University of Southern California, Los Angeles, California Daniel Suarez Laboratorio de Politicas Publicas-Universidad de Buenos Aires, Argentina Antonio Teodoro Universidade Lusfona Lisboa, Carlos A. Torres UCLA Jurjo Torres Santom Universidad de la Corua, Espaa