
Educational policy analysis archives


Material Information

Title:
Educational policy analysis archives
Physical Description:
Serial
Language:
English
Creator:
Arizona State University
University of South Florida
Publisher:
Arizona State University
University of South Florida
Place of Publication:
Tempe, Ariz
Tampa, Fla
Publication Date:

Subjects

Subjects / Keywords:
Education -- Research -- Periodicals   ( lcsh )
Genre:
non-fiction   ( marcgt )
serial   ( sobekcm )

Notes

General Note:
Includes EPAA commentary.

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
usfldc doi - E11-00056
usfldc handle - e11.56
System ID:
SFS0024511:00056


Full Text



Education Policy Analysis Archives

Volume 4 Number 7    April 4, 1996    ISSN 1068-2341

A peer-reviewed scholarly electronic journal. Editor: Gene V Glass, Glass@ASU.EDU, College of Education, Arizona State University, Tempe AZ 85287-2411. Copyright 1996, the EDUCATION POLICY ANALYSIS ARCHIVES. Permission is hereby granted to copy any article provided that EDUCATION POLICY ANALYSIS ARCHIVES is credited and copies are not sold.

Respecting the Evidence: The Achievement Crisis Remains Real

Lawrence C. Stedman
State University of New York-Binghamton
stedman@binghamton.edu

Abstract: Wherein Stedman answers Berliner and Biddle's reply to his review of The Manufactured Crisis.

    "It ain't so much the things we don't know that get us into trouble. It's the things we know that just ain't so." .....Artemus Ward

In his engaging book, HOW WE KNOW WHAT ISN'T SO, the social psychologist Thomas Gilovich offers marvelous insights into the origins of human misconceptions. The problem, he finds, is not irrationality but flawed rationality--the very reasoning mechanisms that help us make sense of reality also lead to questionable beliefs. These include the tendency to seek confirmatory information, the excessive impact of confirmatory information, and the tendency to evaluate evidence in a biased manner. He explains that "We humans seem to be extremely good at generating ideas, theories, and explanations that have the ring of plausibility. We may be relatively deficient, however, in evaluating and testing our ideas once they are formed" (p. 59). In a fascinating insight, he notes that people "place a premium on being rational and cognitively consistent" and so rather than simply disregard evidence, they "subtly and carefully 'massage' the evidence to make it consistent with their expectations" (p. 53).

This leads to the illusion of objectivity:

    Although people consider their beliefs to be closely tied to relevant evidence, they are generally unaware that the same evidence could be looked at differently, or that there is other, equally pertinent evidence to consider (p. 80).

One fundamental mechanism that gets us into particular trouble is what Gilovich calls "optional stopping":

    When the initial evidence supports our preferences, we are generally satisfied and terminate our search; when the initial evidence is hostile, however, we often dig deeper, hoping to find more comforting information, or to uncover reasons to believe that the original evidence was flawed (p. 82).

Or, as he puts it more directly:

    I have argued that people often resist the challenge of information that is inconsistent with their beliefs not by ignoring it, but by subjecting it to particularly intense scrutiny (p. 62).

For complex issues, such as the condition of U.S. education and achievement, the desire for consistency outweighs the willingness to respect ambiguity.

    For nearly all complex issues, the evidence is fraught with ambiguity and open to alternative interpretation. One way that our desires or preferences serve to resolve these ambiguities in our favor is by keeping our investigative engines running until we uncover information that permits a conclusion that we find comforting (p. 83).

Gilovich has captured well the fundamental failing of the MANUFACTURED CRISIS. Whether Berliner and Biddle are discussing the "myths" about achievement and schools, the power of right-wing disinformation, or the contrast between neoconservative and progressive reforms, they repeatedly offer a one-sided treatment of the evidence. With few exceptions, they accept at face value any information that supports their viewpoint, while they dissect and reinterpret any information that challenges it.
The purpose of academic training and scholarship is to rise above such flawed rationality; to learn how to critically analyze the evidence that supports your own favored arguments--and to treat fairly the evidence that contradicts it. It is also a matter of learning to accept the complexity and ambiguity of evidence--and to fairly present that.

Unfortunately, Berliner and Biddle failed to do this--either in their book or in their response to me. They have even gone beyond the flawed rationality Gilovich describes. They ignored or dismissed entire areas of relevant evidence--such as the extensive data on students' low levels of achievement and knowledge--and, in selectively presenting other evidence--such as the data on test score trends--they winnowed out only that which supported their viewpoint and discarded the rest. In several cases, they have even directly misrepresented the actual data.

What's worse is that they are now resorting to sweeping, disrespectful condemnations of those who disagree with their arguments and point out the limitations of their evidence. They characterize the various critiques of their book as "distorted portrayals and outright lies"; they labeled my analysis a "diatribe" and as "disingenuous" and filled with "lacunae, misrepresentations, and trivialities". They have impugned the motives of reviewers and, in my case, even attributed positions to me that I have never taken.

This, too, is understandable, however. As Gilovich points out, the psychologist Robert Abelson argued that "beliefs are like possessions" and that, consequently, people are "possessive

and protective" of them and react defensively when their limitations are pointed out (pp. 86-87). The motivational determinants of belief are particularly powerful. As Sir Francis Bacon put it in the NOVUM ORGANUM, "Man prefers to believe what he prefers to be true" (Gilovich, 1991, p. 75).

THE PURPOSE OF MY EPAA REVIEW

Berliner and Biddle were upset that my review focused on their treatment of the achievement evidence. That was its purpose and should have been immediately obvious from the introductory paragraphs.

Why did I focus on the achievement evidence? Because it underpins their basic argument about a manufactured crisis. They claim that U.S. students and schools are actually doing well and that the evidence to the contrary and beliefs in a crisis have been manufactured by right-wing school critics and administrations. Having already produced a general review of their book back in November in the WASHINGTON POST (one I am sure they must have seen) (Stedman, 1995), I felt it imperative to discuss, at length, in an academic forum, the details of how they treated the evidence on student achievement. EDUCATION WEEK also had devoted a full-page general story about their book back in September (Viadero, 1995).

It should be noted at the outset that even Berliner and Biddle considered such evidence so central to their argument that they spent several chapters trying to explode the myths about the current condition of schools and achievement. Contrary to their repeated claim in their response, I never stated that "their book" was based on four sweeping claims, but rather that their achievement analysis was. Nevertheless, the review was supposed to have contained the following two introductory sentences, which could have eliminated much of their consternation:

    This review is focused on the achievement analysis portion of the MANUFACTURED CRISIS.
    My more general review of the book can be found in the Education Review section of the Washington Post, Sunday, November 5, 1995, pages 16-17.

OVERVIEW: THE MAJOR FAILINGS OF THEIR ACHIEVEMENT ANALYSIS

The actual evidence on student achievement is crucial to their argument. It directly addresses their claim that U.S. students are achieving well and that the educational crisis has been "manufactured". Instead of systematically reviewing the evidence, they selected a few pieces of data on each topic and reinterpreted them to suit their argument. They concentrated on trends (mostly stable) but ignored levels of achievement (mostly low).

Let me be clear at the outset. I believe that right-wing forces have been attacking the public schools and EXPLOITING the evidence, but there is also extensive, credible evidence that there is a real achievement crisis, something Berliner and Biddle continue to deny. They have still never dealt directly with the actual evidence about low achievement. Their response to my review repeats and reinforces the book's major failings in its treatment of the achievement evidence. Here's what they did (or did not do) in their analysis.

1. They ignored a large and growing body of research which shows that student achievement has been weak for several decades. Our high school students lack important knowledge in history, civics, geography, and English; they have done poorly in mathematics and science and few write well. The evidence is overwhelming that the achievement crisis is real. In the next section, I report on the latest National Assessment of Educational Progress (NAEP) results.

2. They analyzed the test score decline in a misleading fashion. Although they rightfully criticized the myth of a RECENT general achievement decline, they ignored the 1970s decline and failed to present any of the contradictory evidence from the 1980s. They clearly overstated the case when they claimed "only ONE test, the SAT" ever suggested a decline (p. 35, emphasis original). Worse, they then overreached and tried to cast current achievement in an historically positive light. Without the needed evidence, they claimed that this generation of students achieves "substantially" higher than previous ones on "virtually all" commercial standardized tests--a contention that is directly refuted by the major reviews of historical trends on such tests. In their response, they compounded their error by arguing that then-and-now studies--including MY review of such research--support such a sweeping contention. They claimed that "almost all" then-and-now studies showed improvement when, in fact, many studies showed no change, several showed declines, and the ones showing improvement typically involved small gains (Stedman & Kaestle, 1991b). They did not mention that such studies have been fraught with problems. They also have never acknowledged that achievement on NAEP HIGH SCHOOL science and civics tests remains lower than in the past, below their 1969 levels.

3. They tried to claim that U.S. failure in international assessments is a "myth", but it is actually partly true. Although our younger students have done well in reading, our older students have done quite poorly in secondary school math and the high school sciences. Here, again, they overreached by claiming that U.S. schools "stack up very well" in the international comparisons.
For one thing, they argued that curriculum differences were a major cause of the international achievement differences, but they based this on only one study of outdated 8th grade math data from 1981-82, and the data did not support their claim. This was a thin reed on which to characterize the standing of the U.S., and even if it had been true, it is still disturbing news, for it means the U.S. curricula and programs are not up to international standards. More recent studies also do not support their assertions. In the 1991 IAEP math study, our 8th graders lagged well behind those in nearly all other countries, and this was true even when algebra curricular differences were accounted for.

4. They systematically misrepresented major research studies and data on U.S. achievement.

a) They graphed standardized test score trends from a study by Linn, Graue, and Sanders (1990), but somehow dropped the very tests and grade levels which included declines! Worse, they offered these data as definitive proof of improving achievement, when in fact, Linn, Graue, and Sanders pointedly remarked that the results were "equivocal" and noted that part of the gains were caused by districts' repeated use of the same tests rather than by genuine improvement. The 1980s back-to-basics movement also helped to artificially raise scores by frequent testing and skill-drill approaches (Stedman & Kaestle, 1991a). In their response, they claimed that the omitted data supports their original claims when, in fact, much of it contradicts them. The data were also outdated, coming from the late 1970s through mid-1980s, and thus are not even relevant to their claims about current students or recent improvement. In a later section, I will discuss their continued mischaracterizations of this study and its data.

b) They graphed international math scores from a study by Westbury (1992), but somehow left out his 12th grade comparison where the U.S. did poorly!

Worse, they disregarded Westbury's caution and improperly compared our elite 8th grade algebra students to the AVERAGE Japanese student. Westbury actually used the top 20%. They claimed it proved that with a COMPARABLE curriculum our students do well in math, but never mentioned that our students spent far more time on algebra (61% vs. 26%), covered more test items, and were one grade older.

c) They claimed the international assessments have improperly compared the broad mass of U.S. students to an overseas elite attending high-status high schools, but this is an old criticism from the early international studies, and it was only partly true even back then. In the early IEA math studies, for example, researchers deliberately sampled college-bound students who were taking math in their senior year of high school--in the U.S. this was an elite group of only 18% of our students; in the second IEA math study, it was only 13%, a similar percentage to that in other countries (Stedman, 1994a).

d) They attributed the SAT decline to demographic changes in test takers, yet never reviewed the evidence which shows this explains much, but not all, of the decline. They also used AVERAGE SAT scores to claim minority student performance gains, but this masked minority VERBAL declines in the late 1970s and late 1980s.

These are serious, major failings (not molehills) which directly undermine their argument and impugn their credibility as scholars. It is little wonder that they chose to attack me personally rather than deal forthrightly with the evidence.

I have divided this response into several sections:

THE NEW ACHIEVEMENT EVIDENCE--a review of the 1994 NAEP findings which shows that students continue to display serious weaknesses in their knowledge and skills.

THE EVIDENCE AND THEIR RESPONSE--a direct response to their arguments in their reply, organized around their four sweeping claims about U.S.
achievement which they continue to support.

THE MANUFACTURED CRISIS REVISITED--a look at several major areas of errors and misrepresentation that were not covered in my original review, in particular their claims of high levels of parental satisfaction with local schools. Here again, they were so intent on fitting the data to their argument that they distorted the evidence. It turns out that only about a quarter of public school parents rate their oldest child's school an A, while about half of them rate their community's schools C through Fail!

PROGRESSIVE REFORMS AND THE RIGHT-WING AGENDA--an endorsement of much of their reform agenda, coupled with an analysis of their one-sided presentation of a national right-wing agenda, which again demonstrates their Procrustean handling of evidence. In particular, I discuss their treatment of the Sandia Report, which they claimed provided a valid look at the achievement evidence and which they allege was suppressed by the Bush administration.

THE NEW ACHIEVEMENT EVIDENCE

Students are struggling. The depth of the achievement problem is strongly borne out by the latest round of NAEP studies of reading, history, and geography achievement. Performance is reported for basic, proficient, and advanced levels. In 1994, substantial portions of students did not even make the basic level, while a majority failed to achieve the proficient level in each subject at each grade level tested: 4th, 8th, and 12th. I also review the results from NAEP's 1992 assessment of writing portfolios, which revealed that little classroom writing is of high quality.

1994 HIGH SCHOOL SENIORS' ACHIEVEMENT

I concentrate here on the data for high school seniors because they provide the best overall assessment of K-12 performance. In reading, a quarter of our seniors failed to reach even the basic level (Williams, Reese, Campbell, Mazzeo, & Phillips, 1995, p. 15). Only about one-third demonstrated reading proficiency (or better). In geography, about a third were below the basic level, while only about a quarter displayed proficiency (or better) (Williams, Reese, Lazer, & Shakrani, 1995, p. 16). History showed the worst results. Over half the seniors were below the basic level and only 11% made the proficient level or higher (Williams, Lazer, Reese, & Carr, 1995, p. 19).

These levels were set by NAEP's independent policy-making body--the National Assessment Governing Board--with "contributions from a wide variety of educators, business and government leaders, and interested citizens" (Williams, Reese, Lazer, & Shakrani, 1995, p. 3).

    The reader should recognize that the results are based on the judgments of panels, approved by the Governing Board, of what advanced, proficient, and basic students should know and be able to do in each subject assessed (p. 9).
Concerns have been raised about the construction and interpretation of these levels (Stedman, 1993), and this latest series of NAEP report cards clearly labels them as "developmental" (Williams, Reese, Lazer, & Shakrani, 1995, p. 3). Nevertheless, both the Commissioner of the National Center for Education Statistics and the National Assessment Governing Board believe the levels are "useful and valuable" in reporting on student achievement.

Fortunately, NAEP has returned to their practice of making public sets of test items used in the assessments. This allows educators and the public to appraise the items and evaluate student knowledge directly. The tests themselves are quite rich, combining constructed response questions with multiple choice ones. In geography, for example, 60% of the testing time was devoted to constructed response items. The geography and history tests offer a rich panoply of maps, graphs, photographs, cartoons, paintings, and magazine covers. A look at individual items avoids the scaling problems and reveals that many students have serious deficiencies in basic knowledge and skills.

1994 GEOGRAPHY RESULTS

Let's consider the geography results first (Williams, Reese, Lazer, & Shakrani, 1995). Less than half the seniors knew that slavery was a major reason many Caribbean people are of West African descent (p. 63). Only about a third recognized a description of a rain forest and could identify a country that had one. Only about a quarter could identify three or more of the following on a map--the Pyrenees Mountains, the Japanese Archipelago, the Mediterranean Sea, and the Persian Gulf. (And this was after the Persian Gulf War!) Only 10% could interpret a simple bar chart of predicted hydrocarbon emissions and give a reason for the trends displayed.

Relatively stronger results were found for identifying four world cities as major religious centers (76%), identifying shaded countries on a world map as belonging to OPEC (65%), and

deciphering and interpreting tabular data about two countries (53%-67%). Still, it should be noted that one-fourth to over one-third of the students had problems with such items.

1994 HISTORY RESULTS

In history, the results were also disturbing (Williams, Lazer, Reese, & Carr, 1995). Only about half of the high school seniors (55%) knew that the cotton trade was a main reason Great Britain leaned toward the Confederacy during the Civil War. The other choices were: British plantation owners held slaves, most British immigrants lived in the South, and British politicians wanted to conquer the U.S.

Less than half of seniors could identify the purpose of the Monroe Doctrine (41%), date a newspaper report about the Civil War destruction of Charleston (41%), or realize that preventing the spread of communism dominated U.S. foreign policy in the postwar period (47%).

Less than half (47%) could interpret an 1876 magazine cover depicting the "Indian problem" even though general statements were permitted about attitudes or events. Only a third were able to identify a consequence of Nat Turner's slave rebellion (tighter controls on slaves). Only a quarter knew that the Camp David accords promoted peace between Egypt and Israel. (Other choices were the Soviet Union and China; Palestinians and Jordanians; North Korea and the U.S.) Only 15% were able to interpret a simple cartoon showing the long, winding road necessary to spiritually fulfill the civil rights law after enactment.

There were several strong spots.
Over 80% properly interpreted two paintings of George Washington as reflecting the glorification of political figures and the use of religious symbols, and, in what was hardly a surprising result, 88% knew that the computer, rather than the typewriter, superconductor, or radio, produced the greatest change in how people worked between 1960 and 1990.

OTHER RECENT NAEP EVIDENCE ABOUT STUDENT PERFORMANCE

Writing is another area that is important, particularly given the connection between critical thinking skills and written expression. In 1992, NAEP conducted the first national assessment of writing PORTFOLIOS gathered from classrooms across the country. Such an approach avoids the artificiality and time pressures of using a national sit-down test to judge writing ability. The findings were troubling. Olson (1995) reported that "the best writing that students produce as part of their classroom work is still not very good."

Only between 4-12% of the 8th graders achieved high marks (5 or 6) on the six-category evaluation scale. One-fourth to almost one-half received low marks (1 or 2), depending on whether informative or narrative tasks were being considered. Gary W. Phillips, the associate commissioner at the National Center for Education Statistics, concluded that, "The moral of the story is that the writing is not very good in the nation. Even the best is mediocre."

This may be a bit harsh, however, given that there were writing samples achieving the highest ratings. The portfolio assessment methodology also needs to be systematically and independently evaluated. No doubt, problems will be found that could require some adjustments to the results (up or down). In the meantime, though, the findings suggest there is a serious writing problem and mirror those of the traditional set-task writing assessments that NAEP has conducted, including the one in 1992 (Applebee et al., 1994). High school students have struggled over the years.
In 1992, only about 2% to 23% produced "elaborated or better" writing, with the weakest performance on persuasive tasks (Applebee et al., 1994, p. 5). (These are averages across four tasks of each type: persuasive, narrative, and informative. On one informative task, students did much better, 46%; on another, much poorer, 6%.) The percentages who produced "developed or better" responses were better but still troubling--only around 16% to half of the students performed acceptably. On most tasks, most students' writing was undeveloped

or minimally developed. This mirrors their inadequate writing in prior NAEP assessments (Applebee et al., 1990, p. 107; see Stedman, 1993 for information about earlier results and scoring methods.)

The good news is that most students have done well with basic mechanics--spelling, grammar, and punctuation--and so additional WHOLE-class drill and practice in these areas is not warranted.

NAEP also did a follow-up analysis of the 1992 reading assessment in which they explored student performance on different kinds of test questions (Olson, 1995). They found a marked drop-off in student understanding and proficiency as the questions became more open-ended and required more elaborated responses. At the three grade levels (4, 8, and 12), performance fell from around two-thirds correct on multiple-choice problems, to slightly above half on short, constructed answer questions, and then to only one-fourth to around a third on questions requiring an extended response. All of which has important implications as we move toward more authentic assessment. We will most certainly find initially that student performance is even worse than what has been revealed by the more straightforward, multiple-choice recall testing that has been done primarily so far.

In math, NAEP analysts have determined that "less than half (of high school seniors) appeared to have a firm grasp of seventh-grade content" and only 5 percent "attained a level of performance characterized by algebra and geometry--when most have had some coursework in these subjects" (Mullis et al., 1991b, p. 80). Although high school students have done well on basic operations such as adding whole numbers and reading a line graph (90%+), many have trouble even with simple problems involving fractions, decimals, and percents (Mullis et al., 1991, pp. 302-309). In 1990, for example, 34% of 17-year-olds could not find the area of a rectangle, given a diagram and the length of two sides (Mullis et al., 1991a, p.
306). Math educators who reviewed the NAEP data in the late 1980s determined that students "exhibit serious gaps in their knowledge and are learning a number of concepts and skills at a superficial level" (Carpenter et al., 1988, pp. 40-41). They concluded that "students' achievement at all age levels shows major deficiencies." Although there have been some modest gains in math achievement in the 1990s, their general conclusions are still appropriate today.

By the way, the NAEP findings I have presented do NOT include dropouts; overall high school student achievement is, therefore, likely to be even worse than this evidence indicates. When we combine these recent results with those from the past several decades, we have a serious cause for concern. (See Stedman, 1993, for a review of this evidence and a discussion of its strengths and limitations.)

BERLINER & BIDDLE'S REJOINDERS

Instead of reviewing this extensive and troubling evidence about low achievement, Berliner and Biddle offered a series of rejoinders in their book about unrealistic standards, our students' focus on breadth of experience, and the nature of the tests. As I explained in my review, however, the achievement standards are realistic (they might even have been set too low), knowledge is an important part of our students' experience, the achievement problems are not an artifact of psychometric scaling, and the tests incorporated real-world tasks and knowledge.

In their response, they took much the same approach. First, they wrote that the "standards against which America's schools are to be judged and found wanting are arbitrary and can be made up as one goes along". Historically, this is untrue. The major studies have not used "arbitrary" or "made up" standards; they have relied strongly on school- and curriculum-based measures--the textbooks that are most widely used, teacher consensus about what is important to be tested, citizen panels on what students should know and be able to do.
Most people would certainly expect high school seniors to have mastered 7th grade math and basic social studies, but they have not.

Berliner and Biddle then suggested that those of us who are concerned about academic achievement are "school bashers" and "standardized test enthusiasts". (I, for one, am neither!) They label the solid evidence that U.S. general knowledge and academic achievement have been low for decades as "Nonsense!" and "ludicrous" (see Stedman, 1993 for a review of the evidence). That is the level of their argumentation--dismissive and mocking, without ever examining the actual evidence. Their primary argument about historically low achievement was the following:

    We find it ludicrous that anyone should claim that "academic and general knowledge have been at low levels for decades" in this country. If this were actually true, how on earth did our nation ever manage to win World War II, send astronauts to the moon, create a plethora of new pharmaceuticals, and invent the transistor and virtually all the computer technology now used worldwide? For that matter, how did we achieve the world's highest rate of industrial productivity, and establish ourselves as this century's dominant super-power? "Low levels of academic and general knowledge"? What nonsense!

Let's examine this argument. These accomplishments did not depend upon the MASS of U.S. students and adults being well-informed and knowledgeable. Instead, they exemplify the prowess of the military-industrial complex in postwar America, the skills of a narrow technical elite, and the inventiveness of a single individual or group of individuals. It took a Jonas Salk to develop the polio vaccine, for example. The transistor was invented by John Bardeen, Walter Brattain, and William Shockley. (This is the same Shockley who later espoused racially-charged ideas about intelligence being genetically determined.) The micro-computer revolution can be largely credited to three school dropouts--Steve Jobs and Steve Wozniak, who developed the Apple II computer, and Bill Gates, who founded Microsoft.
In other words, such accomplishments have readily existed alongside low levels of knowledge and achievement in the general population. Our citizens' lack of knowledge of civics, history, geography, and literature, for example, had little bearing on our winning World War II or getting a man to the moon. (Let us also be careful lest we believe that it is only Americans who have discovered pharmaceuticals or that only a U.S. education was involved. Penicillin, for example, was developed by Alexander Fleming, but he was a Scottish biologist. Streptomycin was discovered by the American Selman Waksman, but he was born in Russia in 1888. The oral form of the polio vaccine was developed by the Polish-American Albert Sabin, born in 1906. Many of these discoverers, therefore, were educated well before World War II, long before the decades of low achievement that I was talking about!)

I find it curious that Berliner and Biddle have unwittingly embraced here a Human Capital view of economic productivity and military-corporate power, a view that they critique at great length in their book! According to their new argument, students' general knowledge and academic achievement have been the keys to U.S. economic and technical accomplishment!

In their book, however, Berliner and Biddle gave only a passing nod to the importance of knowledge and cultural heritage--even for social and civic reasons. Yet it is important that students be well informed about the key events, people, issues, literary works, and social struggles that have shaped our multicultural society. Such information matters--it helps us as voters, workers, readers, newswatchers, and community members. In a society torn by debates over immigration and affirmative action, we all should be alarmed by how little our students know of world cultures and how poorly informed they are about our country's tortured racial history.

The low levels of achievement also are unimpressive results for 12 years of schooling.
The tests do measure much of what is being taught in our schools and show we are not succeeding in our efforts. This is the heart of the achievement crisis. A complex, democratic society needs a well-read and knowledgeable citizenry, and yet the evidence shows we are not accomplishing this.

THE EVIDENCE AND BERLINER & BIDDLE'S RESPONSE

SWEEPING CLAIM #1: "TODAY'S STUDENTS ARE OUT-ACHIEVING THEIR PARENTS SUBSTANTIALLY"

Their treatment of the achievement evidence continues to be one-sided. In their response, they wrote that "we were actually quite cautious in what we claimed about the achievements of students and their parents." That claim contrasts strikingly with what they actually stated in their book about standardized test trends. They claimed that "virtually all of them would show that today's students are out-achieving their parents substantially" (p. 33). Not some of them, but virtually all of them. Not somewhat outperforming, but substantially. As I noted in my review, they did not present the evidence needed to support this sweeping generational claim; they failed to discuss the many reviews of historical trends that refute it. They then had the amazing chutzpah to cite my own research on then-and-now studies to try to prove their claim. Note first, that they did not cite this research in their book, but are only bringing it in now, after the fact. Next, notice what they claimed I found:

Additionally, when one looks at more than 20 "then" and "now" studies of student achievement--reviewed previously by Stedman himself in his studies of literacy in the U.S.!--almost all the results show that the students taking the test "now" outscore the students that took the test "then."

They claim that "almost all the results" showed improvement. In fact, of the 13 local then-and-now studies done through the 1960s, seven showed no real change, including two that showed declines. Two of three then-and-now studies done in the 1970s showed declines relative to earlier students. Overall, across the century, more studies had gains than declines, but the gains were small and many trends were stable. The studies also suffered from a variety of flaws.
Here's how Carl Kaestle and I (1991b) actually summarized our findings:

If one takes age into account, more of the tests showed gains than declines, whereas many others showed approximately equal performance rates. But few of the studies were nationally representative. And the magnitude of the changes, up or down, was usually half a school year or less--a shift that can easily be attributed to the margin of error caused by the problems we have described (p. 89).

We then concluded:

Our educated guess is that schoolchildren of the same age and socioeconomic status have been performing at similar levels throughout most of the twentieth century (we consider the 1970s in detail in Chapter 4). But we also caution that then-and-now studies are fraught with design and interpretation problems; reliance upon them to support arguments about literacy trends is unjustified (p. 89).

This illustrates well their treatment of evidence--a misrepresentation of findings and other scholars' research, a continued effort to fit the evidence to their argument, and a failure to acknowledge the complexity and problems with the data. Note as well that they completely disregarded one of the major conclusions of our literacy research. By focusing on trends, they again ignored the findings about the levels or depth of the achievement and illiteracy problems. We wrote:

Does this mean that things are rosy on the literacy front? Certainly not. The functional-literacy tests showed that a substantial portion of the population, from 20 to 30 percent, has difficulty coping with common reading tasks and materials. The job literacy measures, for all their limitations, show that there are substantial mismatches between many workers' literacy skills and the reading demands of their jobs. Even if schools are performing about as well as they have in the past, they have never excelled at educating minorities and the poor or at teaching higher-order skills (p. 128).

As I pointed out in my review, Berliner and Biddle selectively presented evidence on recent trends in commercial test scores, specifically data from a study by Linn, Graue, and Sanders. Remarkably, in presenting the data, they omitted the very grades and tests that showed declines and only graphed those that showed gains! They also never mentioned that the researchers had determined that the test increases were partly caused by districts' repeated use of the same tests rather than by genuine improvement. Their explanation of their selectivity is a curious one--and should have been presented in their book, not after the fact now! First, as to their omission of SRA data--which showed reading and math declines in several grades--they argued that the SRA data are "complex and mixed, and we judged that they required too much explanation to warrant their inclusion in a book designed for general readers". That is both unscholarly and an insult to readers. They were, in fact, able to describe the data in only a few sentences in their response. It would have been easy for them to have included an extra bar in their graph covering the SRA data. They ponder: "What on earth would readers have gained had we displayed these data in TMC?" That is the nub of their problematic treatment of evidence.
Readers would have gotten an honest and more complete look at the elementary school data. SRA reading scores, for example, declined in 5 of the 8 elementary school grades! Their characterization of the data also varies, depending upon whether they are trying to support their case or discredit other researchers' positions. In their book (p. 31), for example, they described annual gains of 2 percentile points on commercial tests as "large"; yet in their footnote in their response, when they are trying to discount the significance of the SRA data, they labeled annual reading declines of 1.5 percentile points as "tiny"! (Note as well that half the "gains" they did graph were under 1.5 points!) Second, as to their omission of high school data--which also showed some declines and where gains were less impressive--they now explain that they omitted them because high school students show less growth in academic subjects--yet wasn't that worth presenting?--and that Linn, Graue, and Sanders did not include CTBS and ITBS high school data. This is a weak excuse. A scholar interested in presenting a thorough picture would have gotten the CTBS high school data, while the ITBS doesn't even go to the high school level! (Riverside Publishing uses the ITED for high school students.) Furthermore, why not present the data that was at hand? The difficulty may have been that the results would not have fit their thesis as well. On the MAT, reading scores were up in 9th grade, but they declined in grades 10, 11, and 12. On the SRA, grades 11 and 12 showed declines in both reading and math. Overall, the CAT and Stanford showed annual gains of only around 1 percentile point in reading and math, much less than the elementary school scores. Given such mixed evidence, it is misleading, therefore, for them to claim that the "high school data SUPPORT our assertions" (emphasis original!).
Their characterizations of specific high school data were also questionable--as to the MAT math scores, they wrote "ALL four high school grades provided evidence of increased scores in mathematics" (emphasis original) when in fact, 9th graders showed no change! As to MAT reading scores, they wrote, "The MAT reading tests generated mixed data for these four grades: scores were up in two grades, but scores were down in two others". As we have noted, however, scores in the last three grades, 10-12, actually declined, by -.7, -.4, and -.7. Such repeated errors lead one to distrust their analysis. It should be noted, as well, that they never informed the reader that they were graphing only elementary school data--instead, they presented it as if it were generally representative of student achievement, when it was not. Finally, they still have not acknowledged that K-8 test score increases should not be simply equated with improvement in achievement. The Lake Wobegon phenomenon of repeated test administrations and teaching-to-the-test is too well-established to be ignored. Furthermore, the 1980s back-to-basics movement also helped to artificially raise students' scores by emphasizing frequent testing and skill-drill approaches (Stedman & Kaestle, 1991a). Berliner and Biddle's conclusion, however, continues their overall sweeping characterization of the data and this study: "So, student achievement is UP on commercial tests, and that is exactly what we concluded." One final note--this evidence is outdated, so it does not support claims about current achievement trends! The renorming data covered the period from the late 1970s through the mid 1980s. The CAT test, for example, came from a 1978-1985 renorming. The CTBS data came from a 1987 renorming. The data, therefore, are not recent, but refer to trends from over a decade ago!

SWEEPING CLAIM #2: ONLY THE SAT EVER SHOWED A DECLINE

Berliner and Biddle were right to challenge the mythology that we are currently in a massive, general decline. We are not. But they went well beyond that in their own assertions. They wrote, "The two of us know of only ONE test, the SAT, that ever suggested such a decline" (p. 35). That is a sweeping claim and one that is unsupported by the evidence. As I pointed out in my review, many major tests showed declines, particularly in the 1970s and at the high school level.
These declines electrified portions of the legislative, educational, and public communities--they led to major investigations, including the College Board's ON FURTHER EXAMINATION (Wirtz, 1977). While conservative critics may have exaggerated their significance, the declines did occur and to claim otherwise misleads readers. Unfortunately, they did not discuss this evidence in their response--or explain their claim. Scholars have a responsibility to present the full story, particularly contradictory evidence. Although trends have generally been stable, there are important exceptions. Berliner and Biddle never mentioned in their book that high school students' NAEP science and civics scores remain below their 1969 level, that high school reading scores fell in the late 1980s on several tests, and that the SRA tests showed reading and math declines at several grades. Their attempt now to discredit this evidence is curious. I noted that HIGH SCHOOL students' NAEP science and civics scores had declined substantially in the 1969-1976 period. They tried to challenge this with RECENT data from 9- and 13-year-olds! That was hardly relevant to my original comment. High school students' scores are also a more important indicator of performance as they reflect the entire K-12 experience. I also noted that high school students' civics scores slipped in the late 1980s, something they took issue with. In NAEP's report, THE CIVICS REPORT CARD, however, analysts noted "Seventeen-year-olds participating in the 1988 assessment performed significantly less well than their counterparts assessed in either 1976 or 1982" (Anderson et al., 1990, p. 13). And my judgment about science trends is not "simply wrong!" as they gleefully exclaimed. I stated that HIGH SCHOOL students' science scores "fell during the 1970s and have only partly rebounded".
They even presented the data that bears me out in their response--17-year-olds had a scale score of 305 in 1969 (not 1970) and it dropped steadily to 283 by 1982--this was a substantial drop of about half a standard deviation. By 1992, it had recovered to 294, or only about half the way back. There was also some slippage in reading and writing scores, particularly for younger students. 9-year-olds dropped six scale points in NAEP reading achievement between 1980 and 1990, while 8th graders dropped 10 scale points in writing proficiency between 1984 and 1990. The latest reading assessment showed that 4th graders had dropped a minor three scale points between 1992 and 1994, while 12th graders had dropped five (Williams, Reese, Campbell, Mazzeo, & Phillips, 1995, p. 7). Berliner and Biddle argued that "Stedman's interpretation of the data is once again wrong! He sees a decline in reading scores when he should be seeing remarkable consistency of scores over time." This is far-fetched. I am no supporter of the decline thesis--as they well know--and stated so quite clearly in my review. In my general review of achievement trends (Stedman, 1993), which I cited in support of my comments, I wrote:

I begin with literacy because it undergirds academic performance and is a perennial concern of educators. Here, a picture is worth a thousand words (see Figure 1). The picture for NAEP writing performance is similar to that for reading: both have remained basically stable for more than two decades (p. 216).

They also claimed that I ignored the accomplishments of schools "in the face of escalating social problems", yet in my EPAA review of their book, I wrote:

Given changing school populations and societal conditions, generally stable scores are still a remarkable accomplishment for U.S. schools. This is an important message that the public needs to hear.

Such severe distortions and misrepresentations do them no credit.

THE SAT DECLINE

Finally, there is the SAT decline itself. Here again, they attributed to me a position I did not take. They know I am no fan of the SAT; I have described it as an "irrelevant measure" of educational quality and national achievement (Stedman, 1994b). Others disagree, however, and so it remains of interest. Indeed, its national prominence is one reason they dealt with it. My concern again is their unscholarly and one-sided treatment of the evidence.
The first problem was that they attributed the SAT decline to demographic changes in test takers, such as increases in minority students, yet never reviewed the research! The major investigations have concluded that the SAT decline was not entirely compositional (Stedman, 1993; Stedman & Kaestle, 1991). The tremendous rise in minority test-takers, for example, cannot explain the large decline in WHITE students' SAT scores during the 1960s and 1970s. During one stretch, the pool of test takers did not expand, yet scores still declined. This suggests that, to some extent, there was a real decline in performance. The most comprehensive analysis of the demographic changes--the College Board's special Advisory Panel study published in 1977 (Wirtz, 1977)--concluded that much of the 1960s decline, from 2/3rds to 3/4ths, but a smaller part of the 1970s decline, up to 30%, was due to demographic changes in test takers. (They reviewed a vast array of demographic indicators.) If one considers the additional effects of age (students were getting younger) and birth order (younger siblings score more poorly), up to one-half of the 1970s decline may have been due to compositional changes. The Advisory Panel attributed the remaining portion to an UNDETERMINED combination of school and societal factors. They may have misgivings about such research, but it was incumbent upon them to acknowledge its existence. Curiously, in spite of their misgivings about SAT scores themselves, they chose to use them to claim that minority students gained in achievement in recent decades. They even went so far as to present a bar graph of SAT scores by minority groups to document their claim. The problem, as I pointed out, was that they used AVERAGE SAT scores, which masked minority verbal declines in the late 1970s and late 1980s (Stedman, 1994b). Here again, I find it remarkable that when an error is pointed out, they do not discuss the evidence pertaining to it. Instead, they again attributed a position to me that I have never taken--that the SAT is as meaningful a barometer as NAEP. Why can they not gracefully acknowledge contradictory evidence or their errors? (It should also be noted that they essentially set up something of a straw man argument about the decline in their book. Several of the leading conservative critics have NOT focused on the decline for some time--these educators recognize and have acknowledged that scores recently have been stable. The so-called "myth" is no longer one in certain quarters.)

SWEEPING CLAIM #3: U.S. STUDENTS "STACK UP VERY WELL" IN INTERNATIONAL COMPARISONS

The first problem here is that the so-called "myth" of U.S. international failure is actually partly true. U.S. international performance has been dismal in secondary school mathematics and poor in several high school sciences. As I explained in my major review of the international assessments, these are real results and not an artifact caused by sampling or curricular-test bias (Stedman, 1994a). Berliner and Biddle, however, do not accept ANY evidence that shows U.S. achievement in a negative light. The second problem is that they failed to review and summarize the findings about U.S. achievement from the major international assessments. This would have led readers to a very different conclusion about the current state of U.S. international performance. As I noted in my review, our students have "done well in reading and elementary school science, middling to poor in geography and secondary school science, and last or near-last in mathematics."
That is a fair and balanced characterization of the international findings and shows that critics who make sweeping claims about a GENERAL U.S. failure are mistaken, but so are reviewers such as Berliner and Biddle who try to cast the international findings only in a positive light. Curiously, they now write that they decided against presenting these findings because the international validity problems are so great. Yet this did not prevent them from making sweeping claims about the findings such as "Many, perhaps most, of the studies' results were generated by differences in curricula" (p. 63). A more scholarly approach, particularly for the general public, would have been to have presented the overall findings and then discussed their strengths and limitations. Nor did they present any counter-arguments or counter-evidence to their sweeping assertions about validity (I review their claims below; see also Stedman, 1994a). The third problem is that Berliner and Biddle went well beyond challenging the mythology of a general U.S. international failure and reinterpreted selective evidence into a highly positive, one-sided view. They wrote that "American schools stack up very well" (p. 63), the international evidence "confirms impressive strengths of American education" (p. 64), and when opportunities to learn are considered, "American students' school achievement looks quite similar to that of students from other countries" (p. 58). Such sweeping contentions would not have been supportable by a general review of the international research.

WESTBURY STUDY AND THE PRINCIPLE OF CONTROL REVISITED

One of their most egregious examples of reinterpreting evidence was their handling of Westbury's (1992) study, which was their major piece of "evidence" about curricular opportunities-to-learn. Comparing U.S. algebra students to the average Japanese student, however, violated their own research precept--the Principle of Control.
As they put it,

to estimate the true effect of a factor using survey data one MUST control, in the analysis, for the effects of other crucial factors that can affect the relationship. Trained data analysts are very aware of this principle--indeed, it is one of the first things taught in courses on statistics (p. 159).

Clearly, U.S. students who take algebra in the 8th grade are a unique, elite group with marked advantages in college expectations, math interest, parental support, social class, and academic ethic. Consequently, one cannot tell how much of their achievement reflects the effects of their curriculum and how much their background advantages. The comparison is, therefore, inappropriate and unwarranted and was specifically cautioned against by Westbury himself (1992). Furthermore, our algebra students actually had a more focused algebra program--they had spent 61% of their time on it compared to only 26% for Japanese students. They also had covered more test items and were one grade older. So even the curricula--or opportunities to learn--were not similar as Berliner and Biddle asserted. (They also labeled the data as "achievement scores" when in fact it was only algebra scores.) In general, Berliner and Biddle argued that 8th grade math comparisons have been unfair because, unlike students in other countries, most of OUR students do not take algebra in the 8th grade. Algebra items, however, make up only part of the international tests, and the results are virtually the same whether they are included or not. In the 1991 IAEP-2 math study, for example, the U.S. still would have scored BELOW the international average and trailed the leading countries by 16 to 18 percentage points (Lapointe, Mead, & Askew, 1992, pp. 39, 146). Their response to me was baffling: "Somehow Stedman takes this simple demonstration of the effects of differences in curricula and opportunity-to-learn and converts it into a series of assertions that we did not make in TMC and do not believe."
As discussed, this was anything but a "simple demonstration" of curriculum differences; in fact, it was quite flawed. Furthermore, I have to ask: What "series of assertions"? I simply discussed Westbury's actual methods and findings that pertained to THEIR opportunity-to-learn claim and noted that they failed to discuss the 12th grade results, which showed U.S. students at a serious mathematical disadvantage--even after curricular differences had been taken into account! As I discussed in my review of the international assessments (Stedman, 1994a), curriculum differences and opportunity-to-learn can only explain part of the U.S. international achievement deficiency. Furthermore, the lack of U.S. curriculum coverage, particularly in mathematics, often reflects our less demanding and weaker academic program, and so does not excuse our low achievement. By the way, Berliner and Biddle also violated the Principle of Control in their public vs. private schools graph--p. 123--when they showed that public school students who take advanced math courses slightly outperformed private school students. This does NOT prove, however, as they asserted, that the public-private difference is simply a matter of curriculum--the public school advanced math takers are a select, elite group. Here again, they failed to disentangle curriculum and class effects. Furthermore, although their graph came from AFT research reported by Albert Shanker (1991, p. 10), they never mentioned that he concluded that both sectors were achieving poorly! (Although I agree with their general point that the private vs. public school achievement gap has been overblown, I wouldn't characterize the gaps as generally "small" as they did--in the 1990s NAEP comparisons they have often been substantial, but probably not that much more than would be expected given that private schools have a more upscale student body.
I also think that Shanker's conclusion is an intriguing one that is well worth exploring further.)

VALIDITY AND SAMPLING BIAS IN THE INTERNATIONAL ASSESSMENTS

Finally, they offered a series of arguments about the appropriateness and validity of the international assessments which are not supportable. In the first one, Berliner and Biddle are caught in a Catch-22. They argue that the international tests have not measured "the unique values and strengths of American education", including "creativity, initiative, and independence of thought in students"--yet at the same time, their book criticizes today's schools for lacking these very features. They are clearly concerned that neoconservative strategies, such as work intensification and national standards, are dominating schooling and propose numerous progressive alternatives (cooperative learning, project method, etc.) designed to rectify the situation and enhance creativity and initiative. There is also a certain hubris in asserting that "American" education is "uniquely" focused on such things. As I noted, Japanese elementary students have rich curricular and extracurricular activities--calligraphy, sewing, hands-on math and science activities, group problem-solving, electronics, dance, musical training, play, reading, physical exercise, cooperative learning, school jobs, etc. Without explanation, however, they labeled this as one of my "stranger" assertions! Furthermore, our breadth of focus hardly excuses our low levels of achievement and knowledge--our schools, parents, and policy makers all clearly value high levels of achievement. They also argued that sampling bias is a major problem for the international assessments, claiming that the assessments compare the broad mass of U.S. high school students to select samples in high-status high schools overseas (p. 54). Others have claimed similarly that our average student was compared to an elite, university-bound group of European students. This is an old criticism, however, emerging out of the first round of IEA international assessments in 1964 and 1970-71. Even then, the severity of the sampling problem varied by country and subject. In mathematics, the assessment deliberately sampled seniors who were taking math as part of a college-preparatory sequence. This narrowed the U.S.
selection to college-bound students (only 18%) and thus avoided an unwarranted mass-to-elite comparison. Their claim is even less applicable to the second international IEA math study, where many countries had 12th grade math enrollment rates similar to that of the U.S. (which was only 13%). Furthermore, most of these countries outperformed the U.S. by a considerable margin (Stedman, 1994a). Even some of the countries with higher enrollment rates matched or outperformed the United States. Hungary, for example, scored about the same as the small U.S. elite in several areas even though it enrolled half its students! In the second international science study in the mid-1980s, the U.S. actually had more selective 12th grade enrollments than most countries and still achieved more poorly in chemistry, physics, and biology. (Their example of a Japanese teacher's comments about sampling problems is a red herring. It has nothing to do with the major international assessments--IEA or ETS's IAEP.) Critics have made too much of the variations in high school enrollments. Most of the assessments have involved 9- to 14-year-olds, ages when education is compulsory in developed countries and nearly 100% of the students are represented. Unfortunately, these are also the ages where the U.S. has struggled in several subjects. On another point: I, too, am concerned about the news media's inadequate coverage of the international assessments, but that does not prove that U.S. schools "stack up very well". One of the worst features of Berliner and Biddle's response is that they repeatedly retreat from or even misrepresent their own position! As to variability, they now claim:

Stedman asserts that we had argued that overall variability in achievement among students should be greater in our country, but we did not argue for such an effect.
Yet, here's what they wrote in their book:

Together these two problems [disparities in student wealth and inequities in funding] mean that scholastic achievements will vary far more in the United States than in other countries (p. 58).

and

To state this issue succinctly, the achievement of students from American schools is a LOT more variable than is students' achievement from elsewhere (p. 58, emphasis original).

As I noted, the evidence does not bear out this sweeping contention. In fact, the 1991 IAEP math and science studies showed our variability was similar to that of other nations and less than that of Taiwan and Korea, the leading performers. I have no trouble with the implication of the states-to-nation comparison they presented. Clearly, there are enormous regional variations in U.S. achievement and it is always useful to look at disaggregations of data for other patterns. What I was concerned about was their failure to inform the reader that this comparison had been labeled "experimental" and was technically problematic. (Contrary to their assertions in their response, they did not report in their book the details of the data or how the comparison was conducted!) Furthermore, when even our best state scores (those from a few typically high-scoring mid-Western states) are only at the AVERAGE level of Taiwan and Korea, we have cause for concern. Both aggregated and disaggregated scores indicate a serious problem in mathematics. Finally, although minority and low-income students achieve relatively poorly, that remains insufficient to explain our generally low achievement. As I explained, the math deficit is not simply a minority student problem. In 1992, only 30% of WHITE U.S. 8th graders demonstrated NAEP math proficiency, while over a quarter did not even make the basic level. Nor are our problems due to low-achievers. Even our top half has not kept pace internationally in math and science (Stedman, 1994a). Why do their "minds boggle" over such straightforward explanations? Instead of dealing with this evidence, they twisted my explanation into an argument that I claimed the low scores of minority students had no impact on average scores!
Which is, of course, ridiculous. The point is that a major math problem and gap remains even when one looks at (disaggregates) other portions of the data--such as white students and the top half. It is also worth noting that, with the same demographics, U.S. reading scores are quite strong internationally. Berliner and Biddle should have admitted that they selectively reviewed the international evidence, presenting only a couple of scattered pieces that supported their viewpoint. I invite readers to read my comprehensive analysis of the international assessments, in which I report the major findings and discuss the assessments' strengths and weaknesses (Stedman, 1994a).

SWEEPING CLAIM #4: THE EDUCATIONAL CRISIS IS MANUFACTURED

In addition, Stedman asserts that we made another "sweeping claim," that "the general education crisis is [merely] a right-wing fabrication," although he provides no citation to justify this charge. Again, this misrepresents what we wrote.

This is remarkable. This claim of theirs--that the general education crisis is not real and was manufactured by right-wing forces--is one of the central arguments of their entire book. My review, however, was not focused on their political assertions but rather on their claim that the achievement crisis is a myth. Hence, my title "The Achievement Crisis is Real" and my extensive review of the achievement evidence in my section, "Low Achievement". Let me be clear. I believe that right-wing forces have been attacking the public schools and EXPLOITING the evidence (and have been aided by a mix of social forces), but there is also extensive, credible evidence that there is a real achievement crisis, something Berliner and Biddle continue to deny. They have still never dealt directly with the actual evidence about low achievement. Nevertheless, let us consider their charge. Note here that they had to add the word "merely" to my quote before discussing it. Does my statement really misrepresent what they wrote? Let's quote and cite them from several places. First, begin with the title: THE MANUFACTURED CRISIS. Manufactured? By whom? Well, as they stated in their response, "right-wing ideologues gained access to the White House with the election of Ronald Reagan, and in our book we detailed their influence on White House education policy." Here's how they explained the manufactured crisis and the lack of real evidence:

We began our book by noting that throughout most of the Reagan and Bush years, the White House led an unprecedented and energetic attack on America's public schools, making extravagant and false claims about the supposed failures of those schools, and arguing that those claims were backed by "evidence." . . . No such White House attack on public education had ever before appeared in American history--indeed, even in the depths of the Nixon years the White House had not told such lies about our schools. Since the attack was well organized and was led by such powerful persons--and since its charges were shortly to be echoed in other broadsides by leading industrialists and media pundits--its false claims have been accepted by many, many Americans. And these falsehoods have generated a host of poor policy decisions that have damaged the lives of hard-working educators and innocent students. In our book we labeled this attack "The Manufactured Crisis".

Ironically, they claimed that I was the one that was reducing complex realities to a "political slogan"! In the introduction to their book, they point quite clearly to "organized malevolence" and "nasty lies," and alleged that "government officials and their allies were ignoring, suppressing, and distorting evidence" (xi).
In their chapter 4, "Why Now?", they laid out their case that right-wing forces have manufactured the crisis, and titled various sections "The Entitlement of Reactionary Voices", "The Far Right", "The Religious Right", "The Neoconservatives" and "School-Bashing and Governmental Scapegoating". They argued that,

         Early in the 1970s, however, a number of wealthy people with sharply reactionary
         ideas began to work together to promote a right-wing agenda in America (p. 133).

         . . . these foundations have undertaken various activities to "sell" reactionary
         views: funding right-wing student newspapers, internships and endowed chairs for
         right-wing spokespersons on American campuses. . . lobbying for reactionary
         programs and ideologues in the federal Congress (p. 133).

They were quite clear in arguing that the "Manufactured Crisis was not merely an accidental set of events or a product of impersonal social forces" (p. 9) but involved a "serious campaign by identifiable persons to sell Americans the false idea their public schools were failing and that because of this failure the nation was at peril." They themselves, therefore, have made it quite clear that they believe that the achievement crisis was a right-wing fabrication.

THE MANUFACTURED CRISIS REVISITED

In my review, I only touched the tip of the iceberg as far as their errors and distorted evidence went. One of the most egregious examples of misleading and selective presentation was their handling of opinion data on schools. It is worth exploring at length for it is both a crucial piece of evidence and argumentation and illustrates how they select confirmatory evidence and


ignore disconfirmatory evidence.

PARENTAL (DIS)SATISFACTION WITH THE SCHOOLS

In a compelling comparison, Berliner and Biddle pointed out that opinion about the national status of education, which was supposedly influenced by the conservative assault, is negative, and then claimed that parents' judgments of their community's and children's schools, which were supposedly based on local information, are quite positive. Here we have an important piece of evidence that goes right to the heart of their argument about a manufactured crisis. Berliner and Biddle argued that the negative opinions about national conditions are "stereotypic," reflecting "rumors" and "bad portrayals" in the "popular press," and are, in essence, manufactured by right-wing neoconservative critics, whereas the positive opinions about local schools are based on "personal experience, direct observation, informed judgment, and discussions with others" (p. 112). In particular, parents of school-age children will have "first-hand, direct knowledge" and their opinions are "more likely to reflect reality." Thus, according to this argument, our schools are actually in good shape because that's what parents and local opinion say.

At one level, this is a very curious argument for them to be making given their interest in sweeping educational changes. If it were true, it spells disaster for their own reform agenda. It would mean that parents are quite satisfied with what is going on in their local schools and there would be little justification for progressive reforms.

Before reviewing the actual data, let us consider a different perspective on why opinion about local schools might be more positive. Andrew Coulson (1994) makes an intriguing counterargument--namely, that citizens are better informed about the national condition of education than they are about the local one.
Every few years, for example, the National Assessment of Educational Progress reports on students' knowledge and skills in major academic areas--history, civics, geography, reading, mathematics, writing, etc.--and the findings are widely distributed in the media. It could well be that, if parents had the same kind of detailed achievement information about local students' knowledge and performance, they would be just as critical of their local schools.

I think Coulson is on to something. Few parents ever visit classrooms, particularly at the high school level, or shadow students throughout a day; few have ever actually observed what goes on inside the schools. Few districts routinely gather and report to the local media and community information about what students know and can do. In most communities, there is no systematic testing and reporting of high school students' knowledge in the key academic subjects. (I am referring here to curriculum-based exams in Algebra II, English Literature, U.S. History, Civics, Spanish 2, etc., and not generic, commercial standardized tests of reading, math, and social studies that are sometimes reported.)

If the results on such exams were regularly reported, and if parents routinely spent time in classrooms during the day, judgments of local schools could well be more negative. (Similarly, if parents were familiar with the many ethnographies of school conditions that were produced over the past decade, they might be decidedly more critical of their local schools.)

My primary concern here, though, is with the actual evidence and how Berliner and Biddle presented it. For over 25 years, Gallup and Phi Delta Kappa have surveyed the educational opinions of a national representative sample of adults, including public school parents. They have repeatedly asked respondents to rate the schools on the A, B, C, D, and Fail grading scale.
Berliner and Biddle used this data to claim that public school parents are "well satisfied with their schools" and "rate them highly" (p. 114). But, in presenting the data, they combined A and B ratings, which thus inflated the positive ratings, and omitted grades of C entirely! Their graph of parental opinion was an unusual one, therefore, in that it contrasted A/B ratings with D/F ratings and left out Cs entirely (p. 113). The result was a skewed comparison. (Their graph also contained an error--what they labeled as the adult sample's opinion of local schools was actually


that from respondents with no children in schools!)

Contrary to their selective approach, I here present tables of the 1993 results complete with each of the grades, A through Fail, so that readers can inspect them (Elam, Rose, & Gallup, 1993). The first table gives the ratings by all respondents, the second gives the ratings of public school parents.

1993 RATINGS--ALL RESPONDENTS

                                        A     B     C     D   Fail   Don't know
Nation's Public Schools                 2    17    48    17     4       12
Public Schools in this community       10    37    31    11     4        7

1993 RATINGS--PUBLIC SCHOOL PARENTS

                                        A     B     C     D   Fail   Don't know
Nation's Public Schools                 3    16    49    17     4       11
Public Schools in this community       12    44    28    12     4      <.5
School your oldest child attends       27    45    18     5     2        3
(did not specify public or private)

So what do we find? Public school parents certainly do rate local schools more highly than national ones--fewer Cs, Ds, and Fails, and more As and Bs. But look closely at the data. Only about a QUARTER of public school parents rate their oldest child's school an A, which is hardly a ringing endorsement. A quarter apparently have serious concerns about it, rating it C through Fail. (By 1995, this percentage had grown to over a third; see Elam & Rose, 1995). Furthermore, almost half the public school parents (44%) in 1993 expressed some displeasure with their community's schools, rating them C through Fail. (By 1995, this figure had grown to exactly half.)

Nonpublic school parents' responses were particularly revealing, as the next table shows. They were quite critical of their local public schools. About 2/3 rated them C through Fail.
Although one might argue that they are less familiar with the public schools, one could conversely argue that the reason they became private school parents is because they know all too well what local schools are like.

1993 RATINGS--NONPUBLIC SCHOOL PARENTS

                                        A     B     C     D   Fail   Don't know
Nation's Public Schools                 6     9    48    15    12       10
Public Schools in this community        5    32    41     9    11        2

All this data hardly suggests that "American parents" are "well satisfied with their local schools" as Berliner and Biddle argued (p. 114).


Berliner and Biddle compounded the distortions by then claiming that

         What is amazing is that this high level of parental satisfaction with their local
         schools is growing and is actually HIGHER today than it was seven years ago (p. 112).

Although "satisfaction" (As & Bs) grew in the late 1980s, ratings in the 1990s leveled off. In fact, 1993 ratings were a point lower than those of 1991, and 1992 ratings were a point lower than those of 1986. By 1995, ratings had fallen back to 1986 levels (Elam & Rose, 1995).

In any event, how do they explain these trends? They don't bother to. A conservative critic might argue, however, that the reason satisfaction grew in the 1980s was because schools went back to the basics, raised standards, improved discipline, etc., but this interpretation is not considered by Berliner and Biddle. Interestingly, the increases in parental satisfaction took place in the aftermath of reforms generated by A NATION AT RISK. Was this a reflection of real improvement? Or of national activity and publicity influencing local opinions?

WHO IS SITTING IN JUDGMENT?

Berliner and Biddle condemn several prominent educators for mistrusting positive parental opinion about their local schools. They wrote:

         Who are Doyle, Ravitch, Finn, and Stevenson to tell them they are wrong? (p. 114)

         In effect, these critics have proclaimed themselves part of an elite who, for the
         good of the nation, will be pleased to tell other Americans what they are to believe
         and how they are to act (p. 114).

But isn't that exactly what Berliner and Biddle have done in their 414 page book as they lay out a progressive reform agenda and critique the conservative approach, one that turns out to have much parental support? Why do Berliner and Biddle only respect--and present--parental opinion when it suits them and not respect it--or discuss it--in other areas?
The PDK/Gallup opinion study that Berliner and Biddle relied on (Elam, Rose, & Gallup, 1993) reported that the overwhelming majority of respondents have favored, for a long time, national achievement goals and standards, requiring a standard exam to get a high school diploma, and using national tests to compare communities' achievement. Other parental opinions also ran counter to their (and my!) preferred approach. In 1993, two-thirds of PUBLIC SCHOOL PARENTS favored English immersion for language minority students, or even instruction at parents' expense, over bilingual education. Half supported longer school years. In 1995, three-fourths of public school parents favored a constitutional amendment to allow prayers to be spoken in public schools (Elam & Rose, 1995). These results were similar to those from 1984. Most preferred a moment of silence for silent prayer or contemplation rather than spoken prayer.

The 1995 poll also shows that parents continue to strongly support national exams and standards. Over 80% of public school parents support higher standards in the major academic subjects for promotion and for graduation. About 60% favor them even if it meant "significantly fewer students would graduate". About three-fourths even favor setting standards for kindergarten through 3rd grade. About two-thirds of public school parents favor using standardized, NATIONAL exams for promotion in THEIR OWN community schools.

Such parental opinions do not simply reflect the national conservative hegemony that emerged in the last decade during the Reagan and Bush administrations. Although support for such measures as national testing grew a bit in the 1980s, it has a long history (Elam, Rose, & Gallup, 1993). Way back in 1970, people were advocating NATIONAL tests to measure their


community's achievement and, even in the mid-1970s, most were advocating that all students be required to pass a standard exam to receive a high school diploma--and this was well before the conservative onslaught that Berliner and Biddle labeled the MANUFACTURED CRISIS. So the issue of parental opinion is a complex one.

This past year, the Phi Delta Kappa/Gallup poll explored the reasons parents rated their local schools higher than the nation's (Elam & Rose, 1995). Their answers were striking and challenge Berliner and Biddle's complacency about academic achievement. Given a list of 11 possible reasons, Elam and Rose reported that the parents made a "significant number-one choice: THE LOCAL SCHOOLS PLACE MORE EMPHASIS ON HIGH ACADEMIC ACHIEVEMENT" (p. 43, emphasis original). So, if Berliner and Biddle are right that local parents are in the know about their public schools, then they should also respect their opinions about emphasizing academic achievement.

One limitation of this finding, however, is that the parents generally agreed with each of the choices they were offered--with one notable exception, that their children's schools were better because they had more to spend per pupil. That exception has relevance for the next section.

PROBLEMS IN LAKE WOEBEGONE

If parents truly were satisfied with their schools, it would undermine Berliner and Biddle's case for reform. So they had to find some support in the data for their reform agenda. PDK/Gallup asked respondents an open-ended question: "What do you think are the biggest problems with which the public schools of this community must deal?" Here's how Berliner and Biddle characterized the findings:

         In fact, the biggest complaint that American parents indicated in the 1993 Gallup
         poll was that their local schools were not supported adequately.
         This complaint took precedence over their concerns about drug abuse, lack of
         discipline, fighting, violence, gangs, and a host of other real and imagined
         problems (p. 114).

This neatly fits the basic argument Berliner and Biddle are advancing, but is truly misleading. THERE WAS NO CONSENSUS IN PARENTAL OPINIONS ABOUT SCHOOL PROBLEMS. A lack of proper financial support was the most often mentioned problem, but ONLY 24% of the public school parents cited that. THE VAST MAJORITY CITED OTHER PROBLEMS. It is unclear that funding took "precedence" over other problems. Respondents were not asked to rank problems. Almost half of them (43%) were concerned about issues of order and behavior--15% cited discipline, 14% drugs, and 14% fighting, violence, and gangs.

I find it curious that they would label some of the problems "imagined". Why were they suddenly discounting certain parental opinions, given that it is supposedly informed opinion? (Interestingly, 10% of the public school parents reported they had no idea what the biggest problems were.) They didn't mention that those without children in school responded similarly to public school parents, which further undermines their argument about locally-informed opinions.

It is likely that 1991-1993 concerns over finances were partly influenced by national happenings--the 1992 Bush-Clinton election campaign that focused in part on support for education and Jonathan Kozol's book SAVAGE INEQUALITIES--rather than simply the "reality" of the local situation. The survey itself may also have played a part in inducing financial concerns in that there was a series of questions about educational expenditures--equal funding, the impact of money, support for poor communities, etc. (One hopes that those questions came after the question about biggest problems.)

By 1995, the mention of financial support had dropped in half to only 12%. A lack of discipline was mentioned just about as often (11%).
Had local conditions changed so


dramatically? Had schools suddenly received adequate funding? Or, had the national debate shifted?

Berliner and Biddle identified the opinions about problems as those of "parents"--but it was actually parents with children currently in the public schools. Parents of nonpublic students made different, and quite intriguing, comments about their community's public schools. A lack of proper financial support was NOT the problem they most often mentioned in 1993 (or 1995). Instead, they were most often concerned about a lack of discipline in the local schools (19%), the standards and quality of education (18%), and fighting, violence, and gangs (17%). Although Berliner and Biddle ignored them, their opinions about local schools are worth listening to as they were the ones who decided to remove their children from those schools--or not put them there in the first place.

Opinions about public schools and reform, I believe, reflect a complex, highly tangled interaction of parental experience with local schools, the spirited national debate over educational reform, and a growing conservative hegemony. Instead of recognizing these complex influences on parental opinion, instead of respecting the opinions of all parents, it was far simpler for them to set up false dichotomies--parents vs. nonparents, national illusions vs. local realities, and manufactured crisis vs. high satisfaction.

In the end, the "problem" became those without children. Berliner and Biddle commented about public school parents:

         The major problem they face is trying to persuade those who do not have
         children in the schools to agree to pay their share of school taxes (p. 114).

Such a sweeping comment flies in the face of the very survey they were reporting on (Elam, Rose, & Gallup, 1993). Two-thirds of the respondents WITHOUT children in school said they would be willing to pay more taxes to improve the quality of public schools in poorer states and communities.
That figure closely matches the 71% of public school parents who said they'd be willing. 59% of those without children in school said they'd be willing to pay more federal taxes to improve inner-city schools, just about the same as the 62% of public school parents. Those without children in school also gave similar responses as to the local schools' biggest problems--although a lack of proper financial support was mentioned first, drugs, discipline, and violence together garnered the lion's share of the concerns.

Berliner and Biddle then concluded their discussion of parental opinion with:

         Perhaps it is time for citizens without children to join parents and go into the
         schools to see for themselves what is actually happening there (p. 114).

Perhaps it is time for both groups (along with educational researchers) to do just that!

The main point I am making in this section is that opinions about local schools are nowhere near as strong as Berliner and Biddle argue--one can hardly describe it as a "remarkable degree of consumer satisfaction" (p. 113) when half the public school parents are rating their community's schools C through Fail. What it suggests to me is that there is a deep well of dissatisfaction that could be enlisted in a movement toward progressive reform. But we must understand and respect the fact that public school parents have many conservative ideas about schooling and reform, shaped by national forces (and conservative propaganda) but grounded as well in local experiences.

PROGRESSIVE REFORMS AND THE RIGHT-WING AGENDA

There should be little question that I basically agree with Berliner and Biddle's reform mission. As I wrote in the WASHINGTON POST review,


         Berliner and Biddle offer a welcome critique of the neoconservative
         agenda--privatization, national testing, gifted programs, and work intensification.
         They forcefully document the social problems plaguing our schools--from economic
         stagnation to poverty--and provide a useful compendium of alternative reform
         strategies--small schools, authentic assessment, equitable funding, and community
         involvement.

As a progressive educator, therefore, I'm sympathetic to their concerns. The ascendancy of the political right is troubling and could harm public education greatly. We do need to overhaul school financing systems and do more for low-income rural and urban students. We do need to critically examine neoconservative reform strategies and aggressively promote progressive alternatives.

Ultimately, though, the book suffers from being one-sided. While right-wing "organized malevolence" and government suppression of evidence make for good reading, they do not mean the educational crisis is a myth. Berliner and Biddle were so intent, for example, on branding the major 1980s reform reports as ideologically conservative, that they even tarred thoughtful critiques of the schools by progressive educators. Their list of reports that were supposedly products of conservative ideologies and Human Capital theories included A PLACE CALLED SCHOOL by John Goodlad and HORACE'S COMPROMISE by Ted Sizer (p. 140).

THE SUPPRESSION OF THE SANDIA REPORT

They were more on target when they described how the conservative political agenda shaped the Department of Education's WHAT WORKS? reports and how self-interested budget considerations may have led NSF to stand by a flawed study predicting a national shortfall of scientists (pp. 162-164). But then they went further and, without evidence, suggested that NSF stood its ground because the Reagan administration was interested in helping industrialists (p. 165).
In a more dramatic tale, they also alleged the Bush administration suppressed a major study of education--the Sandia Report--because it contradicted official claims about the poor state of education, and would have set the achievement record straight (pp. 165-168). This story is an important one because the report formed the basis of several well-known articles challenging the notion of an educational crisis (see, e.g., Bracey, 1991; 1992) and Berliner and Biddle extolled its virtues (pp. 26, 354).

The report was rife with errors, however, which helped delay its publication, and they overlooked its substantial shortcomings--sloppy analysis of the SAT and international data and omission of key achievement data (Stedman, 1994b).

The allegation of suppression is a serious one and potentially libelous. Berliner and Biddle had an obligation to furnish the evidence for such charges IN their book and, in the interest of fairness, present alternative interpretations of the events--particularly giving the viewpoints of those charged with suppressing. This they did not do. They simply alleged that administration officials subjected the report to "unprecedented" NCES and NSF reviews, yet it seems that the report's authors were involved in requesting the reviews. In 1993, one of the authors, Robert Huelskamp, wrote that, "As our work unfolded in the spring of 1991, WE SUBJECTED a draft to peer review with the U.S. Department of Education, the National Science Foundation, and other researchers (most notably Gerald Bracey)" (Huelskamp, 1993, p. 719, emphasis added).

It has struck many observers as reasonable that a report on education created by Department of Energy analysts--not by educators--should be reviewed by education researchers at the National Center for Education Statistics, people who would be more conversant with the data. Berliner and Biddle offered no evidence that such a review was unprecedented (nor did the


source they relied on--Tanner, 1993); indeed, a major Energy report on the general condition of K-12 public schooling was itself something unprecedented. As one of its authors noted, it was a departure from previous efforts that had focused on analyses of postsecondary education and the training of scientists and mathematicians (Huelskamp, 1993, pp. 718-719).

Berliner and Biddle also wrote that "the report itself eventually appeared in the JOURNAL OF EDUCATIONAL RESEARCH--without fanfare, without even a listing of its authors!" (p. 159). In fact, Huelskamp (1993) first published a version of the report in PHI DELTA KAPPAN, one of the largest circulating educational journals, and informed readers that the "full report will be published in the May/June issue of the JOURNAL OF EDUCATIONAL RESEARCH" (p. 719). Furthermore, the entire issue of JER was devoted to the report and its front cover listed the authors' names--C. C. Carson, R. M. Huelskamp, and T. D. Woodall--in bold print!

Even though it took time for the final report to be released, its ideas were widely circulated much earlier. The authors themselves distributed drafts of the report even before the summer 1991 NCES and NSF reviews were completed (Miller, 1991, p. 32). Gerald Bracey (1991) used them as the basis of his first annual report on the condition of education that appeared in PHI DELTA KAPPAN back in 1991, an article that received widespread publicity, and he later credited them with helping change conservative critics' views of the achievement decline (Bracey, 1992). The report's authors also testified to Congress in the summer of 1991 and the printed testimony, including a synopsis of the report, was readily available (HEARINGS ON THE STATE OF EDUCATION, 1991).

To be sure, the entire episode is quite controversial.
Miller (1991) reported that unnamed sources contended the authors were worried about possible reprisals (funding cut-offs), a GAO audit was conducted, several politically-charged statements were revised out of the draft, etc. Several sources did charge that the report was being buried because it conflicted with Bush administration educational policy and that the Congressional testimony was needed to get the message out. Administration officials countered that the report was delayed because it was undergoing an expert review process.

Whether it was suppressed, buried, delayed, or legitimately subjected to additional reviews (or several of the above!), such actions do not mean that the report's findings were valid and should be accepted. Berliner and Biddle claimed that NCES and NSF reviewers "dutifully detected trivial 'flaws'" (p. 167), but like Tanner (1993), they did not present the reviewers' findings or what was concluded about the nature and extent of the flaws. In fact, the reviewers raised serious, fundamental questions about the quality of the report, its data handling, and its conclusions. (Tanner argued that the reviewers were opinionated and provided one example where some reviewer had unprofessionally written "Nuts" next to a passage on a Sandia draft (p. 292)--but a blunt opinion hardly invalidates what many reviewers found or what the summaries of the reviews concluded.)

In his summary of NCES's review, Emerson Elliott (1991), the commissioner of NCES, described the problems as follows:

         The report appears to be highly selective in the information it presents.
         Information that is widely known and understood is not presented, and the data
         shown are consistently supportive of a picture of U.S. education in a positive
         light. This could give rise to criticisms that the report is a biased presentation
         instead of the "balanced" presentation that has been claimed.

         . . . the trends in educational performance among U.S.
         students are complex and not well-represented in this analysis. The
         oversimplification leads to simplistic interpretations.

         In many places in the report the findings and interpretations are not supported by the


         data presented.

         . . . the results of the science examinations in the NAEP are provided. The
         assertion is made that the trends shown are consistent with the results of exams
         in other subject matter areas. This is not the case, as demonstrated in numerous
         analyses of NAEP and other achievement data.

         The discussion of international comparisons on test scores reflects this problem
         as well. Many other international comparisons have been made, and some of the
         issues identified in the issue discussion on p. 94 have been addressed in studies.
         These findings should have been included for a more balanced discussion of U.S.
         student performance.

         A longitudinal component over the course of a year permitted comparison of
         what students were actually taught during a year and how they performed on
         those test items. The U.S. performance, unfortunately, was rather dismal.

He concluded that the report contains:

         assertions that contradict what we know well from broadly grounded research
         conducted over a number of years with repeated replications using different
         databases

         misinterpretations of the data presented

         inappropriate policy conclusions [and]

         conclusions not well founded in the information presented.

The NSF review determined that "the report rests on a partial and flawed analysis" and that its conclusions are "not adequately supported" (House, 1991). The NSF reviewers (several, not just one, as Berliner and Biddle suggested) found "several major flaws" typified by a "lack of understanding of the data series used" and "unresolved conflicting interpretations" (House, 1991). They noted there were "dozens of flaws" and gave many examples, including the Sandia analysts' sweeping claim there wasn't ANY NAEP test that showed declines and their failure to recognize students' low achievement levels on the tests.
My own review concluded that the report was generally right about steady trends, but that it is seriously flawed by errors in analysis, insufficient evidence, mischaracterizations of the international data, and a failure to consider the evidence that U.S. students are performing at low levels. In spite of its findings, fundamental school reform is still warranted (Stedman, 1994b). Interested readers can find a detailed treatment of the report's strengths and limitations in Stedman (1994b).

SHAPING A PROGRESSIVE REFORM AGENDA

Berliner and Biddle also characterized the present national agenda as right-wing and neoconservative, but it was developed across the political and educational spectrum--by governors of both parties, teacher union leaders, and state school superintendents. While right-leaning, it contains a complex mixture of reforms. Even the national Goals 2000 program includes such long-time progressive objectives as parental participation and ensuring children


come to school ready to learn.

Let me be clear. I have no doubt that right-wing forces have organized an assault on the public schools; that conservative school critics exploited the evidence and exaggerated the decline. I was, for example, an early critic of A NATION AT RISK for misusing data, exaggerating the decline, and ignoring equity issues (Stedman & Smith, 1983; see also Stedman & Kaestle, 1985). But just as conservative critics were wrong to argue that we were in a massive decline and needed to return to traditional schooling, so too, progressives such as Berliner and Biddle are now wrong to suggest that our schools are achieving well and that concerns about students' levels of knowledge are unfounded.

As I explained in my WASHINGTON POST review (Stedman, 1995), progressives should be willing to admit that achievement is low. But that does not mean embracing a conservative agenda or calling for the U.S. to be #1 in the world in math and science, as the nation's Goals 2000 program does. Nor does it mean calling for the schools to go back to old-fashioned, regimented teaching. The existing curriculum is already too facts-based and memory-driven and is not working. As I wrote in the POST review:

         An historical perspective helps here. Conservatives often blame the decline of
         excellence on 1960s liberalism, but students' achievement and general
         knowledge were low even in the 1940s and 1950s--a clear indication traditional
         practices have never been very successful. Such persistent failure strengthens
         the case for a sweeping, progressive restructuring of schools.

Berliner and Biddle, therefore, missed a great opportunity to strengthen their own case for progressive reform. By combining the progressives' call for cooperative learning and rich curricula along with the conservatives' emphasis on high levels of knowledge, we would be far more likely to develop reflective, well-informed students.
(Note as well that thoughtful conservatives are also calling for innovative teaching methods, an engaging, challenging curriculum, and an end to tracking.) A far more compelling case for reform could be made--one that could garner more universal support--by explaining that traditional methods have failed and that even children of the middle class are often not mastering important academic knowledge.

I invite readers to compare my analyses of the condition of educational achievement with theirs (see bibliography). Judge for yourselves who has produced the balanced, careful treatment of the data; who is willing to acknowledge the complexity of the data and achievement patterns; and who is working hard at understanding the evidence rather than trying to fit it into one neat, pat story. Although we should be concerned about the growing influence of right-wing politics, let us also respect the evidence; the achievement crisis remains real and the need for fundamental school reform remains great.

References

Anderson, L., Jenkins, L., Leming, J., MacDonald, W., Mullis, I., Turner, M., & Wooster, J. (1990). THE CIVICS REPORT CARD. Princeton, N.J.: Educational Testing Service.

Applebee, A. N., Langer, J. A., Mullis, I. V. S., & Jenkins, L. B. (1990). THE WRITING REPORT CARD, 1984-1988. Princeton, N.J.: NAEP.

Applebee, A., Langer, J., Mullis, I., Latham, A., & Gentile, C. (1994). NAEP 1992 WRITING REPORT CARD. Washington, D.C.: National Center for Education Statistics.

Berliner, D., & Biddle, B. (1995). THE MANUFACTURED CRISIS: MYTHS, FRAUD, AND THE ATTACK ON AMERICA'S PUBLIC SCHOOLS. New York: Addison-Wesley.


Berliner, D., & Biddle, B. (1996). Making molehills out of molehills: Reply to Lawrence Stedman's review of THE MANUFACTURED CRISIS. EDUCATION POLICY ANALYSIS ARCHIVES, 4(3). http://seamonkey.ed.asu.edu/epaa/

Bracey, G. (1991). Why can't they be like we were? PHI DELTA KAPPAN (October), 105-117.

Bracey, G. (1992). The second Bracey report on the condition of public education. PHI DELTA KAPPAN (October), 104-117.

Carpenter, T. P., Lindquist, M. M., Brown, C. A., Kouba, V. L., Silver, E. A., & Swafford, J. O. (1988). Results of the fourth NAEP assessment of mathematics. ARITHMETIC TEACHER (December), 38-41.

Carson, C. C., Huelskamp, R. M., & Woodall, T. D. (1993). Perspectives on education in America. THE JOURNAL OF EDUCATIONAL RESEARCH, 86 (May/June), 259-310.

Coulson, A. J. (1994). A response to John Covaleskie. EDUCATION POLICY ANALYSIS ARCHIVES, 2(12). http://seamonkey.ed.asu.edu/epaa/

Elam, S., & Rose, L. (1993). The 27th annual Phi Delta Kappa/Gallup Poll of the Public's attitudes toward the public schools. PHI DELTA KAPPAN, 77(1), 41-56.

Elam, S., Rose, L., & Gallup, A. (1993). The 25th annual Phi Delta Kappa/Gallup Poll of the Public's attitudes toward the public schools. PHI DELTA KAPPAN, 75(2), 137-152.

Elliott, E. (1991). Review of the Sandia National Laboratory report on education. Letter to Richard E. Stephens, Associate Director for University and Science Education, Department of Energy, from the Acting Commissioner, National Center for Education Statistics. Washington, D.C.: U.S. Department of Education.

Gilovich, T. (1991). HOW WE KNOW WHAT ISN'T SO: THE FALLIBILITY OF HUMAN REASON IN EVERYDAY LIFE. New York: The Free Press.

HEARINGS ON THE STATE OF EDUCATION. (1991). Hearings before the Subcommittee on Elementary, Secondary, and Vocational Education of the Committee on Education and Labor, House of Representatives, One Hundred Second Congress. Hearings in Washington, D.C., May 1 and 3, and July 18, 1991. Serial No. 102-28.
Washington, D.C.: U.S. Government Printing Office. ISBN 0-16-035543-5.

House, P. (1991). Review of the Sandia National Laboratory report on education. Letter to Richard E. Stephens, Associate Director for University and Science Education, Department of Energy, from the Director of the Division of Policy Research and Analysis, NSF. Washington, D.C.: National Science Foundation.

Huelskamp, R. (1993). Perspectives on education in America. PHI DELTA KAPPAN, 74(9), 718-722.

Kozol, J. (1991). SAVAGE INEQUALITIES: CHILDREN IN AMERICA'S SCHOOLS. New York: Crown Publishing.

Lapointe, A., Mead, N., & Askew, J. (1992). LEARNING MATHEMATICS. Princeton, N.J.: Educational Testing Service.


Linn, R. L., Graue, M. E., & Sanders, N. M. (1990). Comparing state and district results to national norms: The validity of claims that "everyone is above average". EDUCATIONAL MEASUREMENT: ISSUES AND PRACTICE (Fall), 5-14.

Miller, J. (1991, October 9). Report questioning 'crisis' in education triggers an uproar. EDUCATION WEEK, pp. 1, 32.

Mullis, I. V. S., Dossey, J. A., Foertsch, M. A., Jones, L. R., & Gentile, C. A. (1991a). TRENDS IN ACADEMIC PROGRESS. Washington, D.C.: U.S. Government Printing Office. (ED 338 720)

Mullis, I. V. S., Dossey, J. A., Owen, E. H., & Phillips, G. W. (1991b). THE STATE OF MATHEMATICS ACHIEVEMENT. Washington, D.C.: U.S. Department of Education.

Olson, L. (1995, February 8). Students' best writing needs work, study shows. EDUCATION WEEK, p. 5.

Shanker, A. (1991, Fall). Do private schools outperform public schools? AMERICAN EDUCATOR, 8-15, 40-41.

Stedman, L. C. (1993). The condition of education: Why school reformers are on the right track. PHI DELTA KAPPAN, 75(3), 215-225.

Stedman, L. C. (1994a). Incomplete explanations: The case of U.S. performance in the international assessments of education. EDUCATIONAL RESEARCHER, 23(7), 24-32.

Stedman, L. C. (1994b). The Sandia Report and U.S. achievement: An assessment. JOURNAL OF EDUCATIONAL RESEARCH (January/February), 133-146.

Stedman, L. C. (1995, November 5). Putting the system to the test. [Review of THE MANUFACTURED CRISIS.] Education Review section of the WASHINGTON POST, 16-17.

Stedman, L. C. (1996). The achievement crisis is real. [Review of THE MANUFACTURED CRISIS.] EDUCATION POLICY ANALYSIS ARCHIVES, 4(1). http://seamonkey.ed.asu.edu/epaa/

Stedman, L. C., & Kaestle, C. F. (1985). The test score decline is over: Now what? PHI DELTA KAPPAN, 67(3), 204-210.

Stedman, L. C., & Kaestle, C. F. (1991a). The great test score decline: A closer look. In C. F. Kaestle, H. Damon-Moore, L. C. Stedman, K. Tinsley, & W. V. Trollinger (Eds.), LITERACY IN THE UNITED STATES (Chapter 4).
New Haven: Yale University Press.

Stedman, L. C., & Kaestle, C. F. (1991b). Literacy and reading performance in the United States from 1880 to the present. In C. F. Kaestle, H. Damon-Moore, L. C. Stedman, K. Tinsley, & W. V. Trollinger (Eds.), LITERACY IN THE UNITED STATES (Chapter 3). New Haven: Yale University Press.

Stedman, L. C., & Smith, M. S. (1983). Recent reform proposals for American education. CONTEMPORARY EDUCATION REVIEW, 2(2), 85-104.

Tanner, D. (1993). A nation 'truly' at risk. PHI DELTA KAPPAN, 75(4), 288-297.

Viadero, D. (1995, September 13). Book that bucks negative view of schools stirs debate. EDUCATION WEEK, p. 8.


Westbury, I. (1992). Comparing American and Japanese achievement: Is the United States really a low achiever? EDUCATIONAL RESEARCHER, 21(5), 18-24.

Williams, P., Lazer, S., Reese, C., & Carr, P. (1995). NAEP 1994 HISTORY: A FIRST LOOK. Washington, D.C.: U.S. Department of Education.

Williams, P., Reese, C., Campbell, J., Mazzeo, J., & Phillips, G. (1995). NAEP 1994 READING: A FIRST LOOK. Washington, D.C.: U.S. Department of Education.

Williams, P., Reese, C., Lazer, S., & Shakrani, S. (1995). NAEP 1994 GEOGRAPHY: A FIRST LOOK. Washington, D.C.: U.S. Department of Education.

Wirtz, W., et al. (1977). ON FURTHER EXAMINATION. New York: College Board.

About the Author

Lawrence C. Stedman
stedman@binghamton.edu

Lawrence C. Stedman is Associate Professor of Education at the State University of New York at Binghamton. His Ph.D. is from the University of Wisconsin at Madison in Educational Policy Studies with a minor in Sociology. He has worked as a school district policy analyst, secondary school teacher, VISTA volunteer, and educational researcher. He has a keen interest in equal opportunity and school reform. His dissertation and early articles centered on effective schools research and the reform reports of the early 1980s. He has helped evaluate ESL, minority achievement, merit pay, and dropout intervention programs.

More recently, his research has focused on the general condition of education and its implications for policy-making. He has written articles on the test score decline, literacy trends, the international assessments, and the Sandia Report. He is currently investigating historical trends in students' and adults' general knowledge. This work is the outgrowth of a book he helped author with Carl Kaestle and others on the history of the U.S. reading public (Literacy in the United States: Readers and Reading Since 1880, Yale University Press, 1991).
This new research has been funded by a SUNY Faculty Research Grant and Fellowship and by a National Academy of Education Spencer Foundation post-doctoral fellowship.

Copyright 1996 by the Education Policy Analysis Archives

EPAA can be accessed either by visiting one of its several archived forms or by subscribing to the LISTSERV known as EPAA at LISTSERV@asu.edu. (To subscribe, send an email letter to LISTSERV@asu.edu whose sole contents are SUB EPAA your-name.) As articles are published by the Archives, they are sent immediately to the EPAA subscribers and simultaneously archived in three forms. Articles are archived on EPAA as individual files under the name of the author and the volume and article number. For example, the article by Stephen Kemmis in Volume 1, Number 1 of the Archives can be retrieved by sending an e-mail letter to LISTSERV@asu.edu and making the single line in the letter read GET KEMMIS V1N1 F=MAIL. For a table of contents of the entire ARCHIVES, send the following e-mail message to LISTSERV@asu.edu: INDEX EPAA F=MAIL; that is, send an e-mail letter and make its single line read INDEX EPAA F=MAIL. The World Wide Web address for the Education Policy Analysis Archives is http://seamonkey.ed.asu.edu/ Education Policy Analysis Archives are "gophered" in the directory Campus-Wide Information at the gopher server INFO.ASU.EDU.


To receive a publication guide for submitting articles, see the EPAA World Wide Web site or send an e-mail letter to LISTSERV@asu.edu and include the single line GET EPAA PUBGUIDE F=MAIL. It will be sent to you by return e-mail. General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, Glass@asu.edu, or reach him at College of Education, Arizona State University, Tempe, AZ 85287-2411. (602-965-2692)

Editorial Board

John Covaleskie, jcovales@nmu.edu
Andrew Coulson, andrewco@ix.netcom.com
Alan Davis, adavis@castle.cudenver.edu
Mark E. Fetler, mfetler@ctc.ca.gov
Thomas F. Green, tfgreen@mailbox.syr.edu
Alison I. Griffith, agriffith@edu.yorku.ca
Arlen Gullickson, gullickson@gw.wmich.edu
Ernest R. House, ernie.house@colorado.edu
Aimee Howley, ess016@marshall.wvnet.edu
Craig B. Howley, u56e3@wvnvm.bitnet
William Hunter, hunter@acs.ucalgary.ca
Richard M. Jaeger, rmjaeger@iris.uncg.edu
Benjamin Levin, levin@ccu.umanitoba.ca
Thomas Mauhs-Pugh, thomas.mauhs-pugh@dartmouth.edu
Dewayne Matthews, dm@wiche.edu
Mary P. McKeown, iadmpm@asuvm.inre.asu.edu
Les McLean, lmclean@oise.on.ca
Susan Bobbitt Nolen, sunolen@u.washington.edu
Anne L. Pemberton, apembert@pen.k12.va.us
Hugh G. Petrie, prohugh@ubvms.cc.buffalo.edu
Richard C. Richardson, richard.richardson@asu.edu
Anthony G. Rud Jr., rud@sage.cc.purdue.edu
Dennis Sayers, dmsayers@ucdavis.edu
Jay Scribner, jayscrib@tenet.edu
Robert Stonehill, rstonehi@inet.ed.gov
Robert T. Stout, stout@asu.edu


Contributed Commentary on Volume 4 Number 1: Stedman, The Achievement Crisis is Real: A Review of The Manufactured Crisis

11 April 1996
Eugene Bartoo
ebartoo@CECASUN.UTC.EDU

On April 10, 1996, Andrew Coulson wrote:

Like Berliner's 1993 EPAA paper, The Manufactured Crisis is a fanciful and selective romp through the data on public schooling. Anyone who is currently relying on or quoting this material will do themselves a great favor by reading Larry Stedman's recent detailed critique of their treatment of the academic achievement data (also published in the Education Policy Analysis Archives).

OK, I haven't read B & B's book, but I have read Stedman I, B & B's reply, and Stedman II. And Andrew's post. Some impressions:

(1) I suspect that many foreign students study more languages for longer periods than our students do, but I also think, as has been said by others here, that many foreign students get to use those languages more often and hence the language study takes. There are vast numbers of American students who have studied French for years and find themselves speechless when confronted with a French waiter. The opportunity and need to converse in another language is just not there and thus has an effect, I think, on the force of insisting on more rigor and time in learning languages. The argument that it broadens one's horizons may have been lost years ago.

(2) I am engaged in a project in a local private girls' school. I sit in on a precalculus class of junior girls; very bright, quick, high achievers. They will do very well later; most will go to good schools, get good jobs, have good lives. This morning I asked them if any of them knew where the Pyrennes were [I read Stedman II last night; I mulled over the data on geography from the NAEP]. Only three girls knew where it was. I'm not sure I knew where it was when I was sixteen. They had geography as seventh graders. I imagine that that experience was more like "this is where Africa is".
Is this a sobering finding? At my university we have one faculty member whose specialty is geography; one! In the teacher ed. program which I helped design some few years ago, we have a geography concentration for social science ed. folks. It has few majors; has had fewer graduates. They have trouble getting the courses; most are taught by adjuncts. For most people in the US, the closest thing they have ever gotten to geography was that yellow-bordered magazine that is popular at garage sales. I was not surprised at the findings of the NAEP. I don't know if that is something with which to be concerned.

(3) I think Stedman is basically right. We do have poor achievement. The achievement levels in this country have been poor forever. Most of our fellow citizens do not have a large store of knowledge as measured by the NAEP studies [or others like them]. Our children's achievement patterns are about what their parents' achievement patterns are. Perhaps the school


is culpable, but I didn't get the sense that Stedman took that view.

(4) The SAT issue is basically not controversial anymore. There seems to be general agreement that those data have been stable for twenty-some years, and that earlier declines were due to a variety of factors, the largest percentage of variance being SES changes in test takers. And who cares?

(5) There has just been another summit on educational matters, and the reaffirmations of the need for rigorous standards and assessment were given. Shanker concurred in his latest column, stating that it was time to take control of the content of the curriculum out of local hands. In some areas around here those are fightin' words.

And so our dream is to raise ourselves by the bootstraps of our children so that, for example, those bright girls would know where the Pyrennes were [and I might know how to spell it]?

Just some thoughts. Thanks for your indulgence.


Contributed Commentary on Volume 4 Number 1: Stedman, The Achievement Crisis is Real: A Review of The Manufactured Crisis

10 April 1996
John F. Covaleskie
JCOVALES@NMU.EDU

It is good to see Andrew attacking public schools again. If B & B are somewhat selective in their presentation of evidence, they are no more than following the lead, if not to the extent, of those who have been unfairly attacking schools for the past two decades. As I understand the core of their argument, it is that there is much evidence that schools are doing a remarkably good job with an increasingly diverse population. To cite one example, SAT scores have been declining for some time, and this is pointed to as a failure of schools. However, disaggregated data show that scores are actually rising for all subgroups, while the decline is due to the expansion of the test-taking pool among those who score lower and were previously excluded. The central claims of B & B seem unrefuted: (1) schools are not perfect and need to be made better, but they are hardly the failures they are painted as; (2) to make the case that schools are failing requires ignoring a great deal of data; and (3) the media have been complicit with those who, for ideological reasons, oppose public funding of education, quite regardless of its success or failure.

Opposition to public education is hardly a matter of evidence; never has been, probably never will be. Support for public education is the same. The issue is what, if anything, we owe collectively to our children.


Contributed Commentary on Volume 4 Number 1: Stedman, The Achievement Crisis is Real: A Review of The Manufactured Crisis

10 April 1996
Larry S. Bowen
lbowen@OSF1.GMU.EDU

In addition to what John Stone has contributed to this thread, I believe it important to recognize that heightened expectations for public education are at work in societal concern over our schools--a phenomenon that likely will persist and guarantee continuing frustration. And with over 2 million teachers in more than 15,000 school systems, criticisms of "the schools" and their teachers will vary enormously in validity.

This is of course common sense, but it is oft lacking when looking for actions to deal with real and severe problems. Attempts to deal with glaring inequities in funding are too few, even within school districts where a PTA of one school purchases large numbers of computers and software while another in a poor neighborhood languishes. The general-effects way of thinking about the process of education is terribly unfortunate, I believe, with both the layperson and educator unable to develop a differentiated way of thinking and approaching the state of schooling and all its complexities.

In the Washington, D.C. area, for example, there is strong evidence regarding a number of large districts which are viewed as either "excellent" or "terrible" by the media and the public in general. Outstanding schools in a terrible district are rarely acknowledged (or even known about by most people, I'm persuaded in listening), and in at least one highly-praised district there is mediocrity that fits the mantra of how "schools are failing." Until we are willing to seriously deal with the inequities in public education, in those schools that are lousy and with teachers of children who are not attaining functional literacy, the issue of public education quality will not be adequately addressed.
I would suggest that those who are concerned with achieving excellence for all learners in a given school system begin by asking what the inequities are, proceed with determining the particular problems of the setting, and insist that those problems are the problems for the school division and community as a whole, necessitating a "village" response. But that is hard work, especially for those who have "made it" and believe theirs is theirs.


Contributed Commentary on Volume 4 Number 1: Stedman, The Achievement Crisis is Real: A Review of The Manufactured Crisis

10 April 1996
Andrew Coulson
andrewco@IX.NETCOM.COM

Dan Cline takes me to task for including a personal anecdote in my recent post criticizing The Manufactured Crisis. In response, I'd just like to say: Good for you! I threw in that personal comment because, in my experience, it's true, and because I was interested in getting a debate going, but it does not play even a minor role in my overall verdict on The Manufactured Crisis.

In my post I mentioned Larry Stedman's critique, and my own very brief assessment of their treatment of the IQ data and the comparative spending and efficiency data (neither of which Dan mentioned). I have also, independently from Larry, examined their achievement data and arguments. When I complete my review of their case I plan to incorporate it into a book I'm writing on school reform. I do not intend to include my personal opinions on the educational breadth of European vs. U.S. students unless I can provide some concrete, broad-based data to support them.

Berliner and Biddle, however, saw no such need for empirical evidence in making their own claim about American students being more "broadly educated." They write, without even a token piece of supporting evidence, that:

By comparison then, American teenagers probably have more nonacademic interests and a wider knowledge base than do students from countries that stress narrow academic concerns (p. 52).

They do not bother to prove that the emphasis on academic achievement enjoyed by other countries is indeed "narrow," nor do they bother to show that foreign students fail to enjoy all the same activities they list for American youth. The possibility that "narrow" concerns such as literature, history, geography, and civics might widen people's "knowledge base" also appears not to have occurred to them.
I trust that readers of The Manufactured Crisis, whether or not they like what B. & B. are saying, will be just as quick to discount their use of unsupported anecdote as Dan was to discount mine.

----------

An aside on the "educational breadth" issue:

This is such a vague term, and such a, well, "broad" concept, that I never would have initiated it myself if I were trying to make an international comparison. Still, since we're on the topic, I'll just address a few of Dan's comments on the subject.

While I don't disagree with Dan's suggestion that learning second, third, or fourth languages is to some degree related to the child's environment, schooling is clearly a significant factor in Europe and in other regions, such as the Canadian province of Quebec. In Quebec it is eminently practical to become bi/multi-lingual, and schools generally teach whichever languages


(French or English or both) are not native to the student. Success seems to depend on how intensively the languages are taught, and how much use the student makes of them (in and out of school). According to the OECD (Education at a Glance, 1995), foreign language teaching in 15 European countries takes up an average of 13% of teaching time. This is second only to mathematics, at 16%. It ranges from a low of 9% to a high of 26%. According to the Digest of Education Statistics (1993), U.S. students spend roughly 6.5% of their class time on foreign languages--half of the European average, and less than the lowest rate of the 15 European countries studied by the OECD. Of course it remains to be argued that foreign languages add to one's "breadth" of knowledge, but I think that case can fairly easily be made.


Contributed Commentary on Volume 4 Number 1: Stedman, The Achievement Crisis is Real: A Review of The Manufactured Crisis

10 April 1996
John Stone
STONEJ@EDUSERV.EAST-TENN-ST.EDU

On April 9, 1996, Dan Cline wrote:

However, I think responses to these works by policy analysts and scholars of whatever persuasion, even the critics of the schools among them, should perhaps rise above conclusions based just on hunches derived from personal encounters. (snip) I am not sure whether Berliner & Biddle were making an empirical knowledge claim or a simple assertion that American students are more broadly educated, but in either case, an examination of the claim or assertion is more enlightened by empirical evidence, rather than recitation of impressions from personal encounters alone; the latter can certainly lead to hypotheses to be subjected to empirical test.

I agree that we need to go beyond personal experience in assessing this issue, but on this list, I see arguments buttressed by personal experience almost daily. Andrew does not need me to defend him, but I am confident that he could have been much more specific and data-based about Berliner & Biddle. There is a mountain of credible empirical evidence that points in the opposite direction from Berliner & Biddle. Their claim that the larger public's perception of the schools is the product of a conspiracy or right-wing extremism or mere bad publicity is preposterous. I agree with Andrew that Berliner & Biddle are doing the cause of public education no favor by trying to dismiss its critics and their arguments.

I know that the criticisms, the unflattering studies, and the negative reports in the media are distasteful--especially to people who are working hard to do what they feel is the right thing.
However, as one who follows that which is being said in the larger public arena, I think we of the education community will either have to move in the direction of that which the public wants or progressively be replaced. People who are as committed to public education as Al Shanker have been saying the same thing for years. Professor Stedman's refutation of Berliner & Biddle may be painful to contemplate, but if it moves us to confront reality rather than quibbling about its existence, he has, in my opinion, done us a great favor.


Contributed Commentary on Volume 4 Number 1: Stedman, The Achievement Crisis is Real: A Review of The Manufactured Crisis

9 April 1996
Dan Cline
DHC@PAWNEE.ASTATE.EDU

Berliner & Biddle have taken some heavy hits for The Manufactured Crisis and for their response to Stedman's first response to the book. His second response, "Respecting the evidence...", is apparently being received on this list as a balanced and correct refutation of the major claims made by Berliner & Biddle, and I suppose it is; it is certainly a significant and thoughtful contribution on its own. However, I think responses to these works by policy analysts and scholars of whatever persuasion, even the critics of the schools among them, should perhaps rise above conclusions based just on hunches derived from personal encounters.

Andrew Coulson, for example, characterizes as "unsupported" and "far fetched" Berliner & Biddle's contention that American students are more broadly educated than their foreign counterparts, based on his observation that in "personal travels" in Europe, "it is common to encounter high school and college students who speak two, three or even four languages fluently" and are well versed in international events (by what measure, not stated). As long as we are on personal experience: I was raised in the Dakotas speaking German, English, and Russian, as were my mother and her sixteen brothers and sisters, out of economic necessity; farmers couldn't do business with farmers and businesses from other communities in the area who spoke other languages unless they were conversant in those languages. Since none of them (myself being the exception) attended school beyond the eighth grade, I wonder to what extent being multilingual is a measure of being "broadly educated."
I wonder if being multilingual in Europe isn't more a function of close proximity of nations (like New York and Pennsylvania) and ready access to television broadcasts in many languages rather than a function of curriculum in the schools.

I am not sure whether Berliner & Biddle were making an empirical knowledge claim or a simple assertion that American students are more broadly educated, but in either case, an examination of the claim or assertion is more enlightened by empirical evidence, rather than recitation of impressions from personal encounters alone--the latter can certainly lead to hypotheses to be subjected to empirical test.

