Educational Policy Analysis Archives, Vol. 4, No. 3 (February 26, 1996). Tempe, Ariz.: Arizona State University; Tampa, Fla.: University of South Florida.
Education Policy Analysis Archives
Volume 4, Number 3, February 26, 1996. ISSN 1068-2341

A peer-reviewed scholarly electronic journal. Editor: Gene V Glass, Glass@ASU.EDU, College of Education, Arizona State University, Tempe AZ 85287-2411. Copyright 1996, the EDUCATION POLICY ANALYSIS ARCHIVES. Permission is hereby granted to copy any article provided that EDUCATION POLICY ANALYSIS ARCHIVES is credited and copies are not sold.

Making Molehills Out of Molehills: Reply to Lawrence Stedman's Review of The Manufactured Crisis

David C. Berliner, Arizona State University, berliner@asu.edu
Bruce J. Biddle, University of Missouri, psybiddl@mizzou1.missouri.edu

Abstract: Berliner and Biddle answer Lawrence Stedman's review of their book The Manufactured Crisis, which was published in the Education Policy Analysis Archives as Volume 4, Number 1, 1996.

Throughout his term as founding editor of "Contemporary Psychology," Edwin G. Boring insisted that the basic tasks of the responsible reviewer are to portray with honesty the intentions of authors and to assess carefully whether those intentions are realized in their writings. Unfortunately, Lawrence Stedman (1996) does not honor such laudable tenets in his so-called "review" of our book, THE MANUFACTURED CRISIS, appearing in Education Policy Analysis Archives, 4(1). Instead, Stedman chooses to ignore both the intentions that we stated clearly in our book and the vast bulk of what we actually wrote about in its eight chapters. Worse, he asserts falsely that our book was based on four "sweeping claims" and then attacks us because the analyses with which we supposedly supported these claims were "deeply flawed and misleading."
In fact, these so-called "sweeping claims" referred to materials covered in but a portion of our second chapter. Further, two of Stedman's concerns about our "sweeping claims" misrepresented what we had written, and the other two state positions with which Stedman agrees and are abundantly supported by the evidence he himself cites. In short, Stedman has written a review that is uninformative, disingenuous, and, as will soon become clear, trivial. Stedman has not succeeded in even making a mountain out of a molehill--all that was accomplished was to make molehills out of molehills.

WHAT WE WROTE ABOUT

Since Stedman does not bother to tell readers what we actually wrote about in THE MANUFACTURED CRISIS, we should begin by doing so. We began our book by noting that throughout most of the Reagan and Bush years, the White House led an unprecedented and energetic attack on America's public schools, making extravagant and false claims about the supposed failures of those schools, and arguing that those claims were backed by "evidence." To illustrate, in 1983 the White House released a widely-touted brochure, "A Nation at Risk," claiming (among other things) that the "average achievement of high school students on most standardized tests is now lower than 26 years ago when Sputnik was launched." This claim made an assertion about factual matters, but somehow no evidence was cited in "A Nation at Risk" to support it, nor could any have been given since it was false.

Again, in 1989 John Sununu was to claim that Americans "spend twice as much [on education] as the Japanese and almost 40 percent more than all the other major industrialized countries of the world," and George Bush (the "Education President") was to intone that our nation "lavishes unsurpassed resources on [our children's] schooling." These claims were equally untrue.
Other damaging claims made by the White House during these years argued: that American schools "always" look bad in international comparisons of achievements; that educational expenditures are not related to school achievements and that additional investments in education are "wasted"; that because of inadequacies in our schools, American industrial workers are nonproductive; and that the typical private school out-achieves the typical public school when dealing with similar students. These and other false claims, designed to weaken Americans' confidence in their public schools, were all said to be backed by "evidence," although somehow the "evidence" in question was often only hinted at.

This attack was led by specific persons--whom we named in our book--and created myths about education that were sometimes backed by no evidence at all, sometimes supported by misleading analyses of inappropriate data, and sometimes aided by the deliberate suppression of contradicting information. No such White House attack on public education had ever before appeared in American history--indeed, even in the depths of the Nixon years the White House had not told such lies about our schools. Since the attack was well organized and was led by such powerful persons--and since its charges were shortly to be echoed in other broadsides by leading industrialists and media pundits--its false claims have been accepted by many, many Americans. And these falsehoods have since generated a host of poor policy decisions that have damaged the lives of hard-working educators and innocent students.

In our book we labeled this attack "The Manufactured Crisis" and detailed:

the abundant evidence that contradicts its major myths;

the likely reasons for its appearance in the Reagan and Bush years;

the ways in which the "reform" proposals associated with this attack would be likely to damage America's public schools;
the real and escalating social problems faced by our country and its schools, that leaders of the attack had but little interest in solving; and

what can be done today to help solve the real problems of our schools.

As this brief summary suggests, our book was designed to cover a good deal of material. In it we also tried to write not a scholarly treatise but rather a work that could be read by the wide audience of educators, policy-makers, parents, and citizens in our country who are truly concerned about education today. However, these intentions are neither noted nor assessed by Stedman, so readers will have to read THE MANUFACTURED CRISIS themselves to find out whether or not we succeeded in accomplishing them.

DISINGENUOUS CHARGES

So much for Stedman's sins of omission. What about those he committed? In his lead paragraph, Stedman asserts that our book made four "sweeping claims" about American educational achievement and implies that these constitute the core of our arguments in TMC. This is nonsense, of course. The four "claims" in question do not portray the major themes of our book. Rather, they focus only on narrow issues of student achievement that are dealt with in but part of our second chapter.

In addition, two of the supposed "sweeping claims" challenged by Stedman misrepresent what we actually wrote. One asserts that we had concluded that "today's students are 'out-achieving their parents substantially'" (p. 33). This quote was taken out of context. In one short sub-section of Chapter Two we reviewed longitudinal evidence from commercial tests of achievement such as the Iowa Test of Basic Skills, the California Achievement Test, and the like.
Citing evidence originally developed by Linn, Graue, and Sanders (1990), we noted that for some years average scores earned on these tests have been creeping upwards and that the test developers have regularly had to recalibrate these tests in order to make certain that the typical student again scores at the fiftieth percentile rank for the subjects assessed. Commenting on this brief review, we wrote, "So, if commercial tests were not recalibrated, virtually all of them would show that today's students are out-achieving their parents substantially" (p. 33), and this sentence was the source of Stedman's misleading quote.

We never claimed that equivalent effects have appeared in the more extensive evidence from non-commercial tests of student achievement, nor did we state any general conclusions about today's students outscoring their parents in school achievement anywhere in our book. So Stedman's assertion that we had made such a "sweeping claim" is not so. In fact, we were actually quite cautious in what we claimed about the achievements of students and their parents.

But while we are on the subject, related thoughts may be worth mentioning. As we noted in TMC, IQ test data from over a dozen industrialized nations show that today's children are about one standard deviation ABOVE their parents in measured intelligence, with the growth primarily in the decontextualized, abstract, problem-solving parts of the tests (sources cited in our book). Additionally, when one looks at more than 20 "then" and "now" studies of student achievement--reviewed previously by Stedman himself in his studies of literacy in the U.S.!--almost all the results show that the students taking the test "now" outscore the students that took the test "then." So while we were actually cautious in our book, and did not make the "sweeping claim" assigned to us by Stedman, the data suggest that such a claim might actually be made!
In addition, Stedman asserts that we made another "sweeping claim," that "the general education crisis is [merely] a right-wing fabrication," although he provides no citation to justify
this charge. Again, this misrepresents what we wrote. Rather, we devoted an entire chapter in our book to a careful analysis of the social origins of The Manufactured Crisis, and in it we pointed out that this episode in American history reflected MANY causes. It is certainly true that right-wing ideologues gained access to the White House with the election of Ronald Reagan, and in our book we detailed their influence on White House education policy. But school-bashing has been a popular indoor sport in America for years, and White House critics of the schools would not have gotten away with the lies and distortions of evidence they promoted had Americans not also been worried about unresolved problems in our society and its public schools, and had their efforts not been supported by industrial pronouncements and media irresponsibility. Thus, by reducing our careful analysis to a political slogan Stedman has seriously distorted what we wrote in TMC.

So on two of our "sweeping claims," Stedman misrepresented us. As we shall see below, however, Stedman states that he generally agrees with the other two "sweeping claims" he correctly assigns to us. The additional evidence he cites provides no reason to question our interpretations of the data. We turn now to these issues.

CREATING MOLEHILLS, PART ONE--THE MYTH OF DECLINING TEST SCORES

The first of the "sweeping claims" which Stedman accurately assigns to us concerns the myth of declining test scores. After reviewing evidence from many sources, we DID write, "standardized tests provide no evidence whatever that supports the myth of a recent decline in the school achievement of the average American student" (p. 34). Moreover, Stedman states that he agrees with this claim, writing, "Berliner and Biddle are generally right that achievement has been stable," and again, "the best that can be concluded is that this generation of students generally performs about the same as earlier ones."
So--to paraphrase a recent hamburger commercial--where's the beef? Stedman goes on to complain that we had not reviewed even more evidence on the issue, cites various materials that HE had reviewed in previous publications, and implies that somehow these additional materials would cause one to rethink or possibly to revise the claim we had made (and with which he clearly agrees). But would additional insights have been gained had we added these extra materials to a chapter that was already overly long? To answer this question, let us scan the evidence alluded to by Stedman.

For openers, Stedman complains about our portrayal of NAEP results. He writes that "high school students' NAEP civics scores, for example, dropped substantially between 1969 and 1976 and have been slipping ever since." But is this true, and is it a substantive matter? Evidently not. NCES's "The Condition of Education, 1991" noted that no statistically significant differences appeared in average NAEP civics scores between 1976, 1982, and 1988 for either 13-year-olds or 17-year-olds (1991, pp. 143, 144). One data set showed slight gains, the other showed slight losses, but evidently neither of these "trends" mattered.

Stedman also claims that "[NAEP] science scores also fell during the 1970s and have only partly rebounded," but again is this true, and is the matter substantive? Let readers judge for themselves. Average NAEP science scores for the years 1970, 1973, 1977, 1982, 1986, 1990, and 1992 were: for 9-year-olds, 225, 220, 220, 221, 224, 229, and 231; for 13-year-olds, 255, 250, 247, 250, 251, 255, and 258; and for 17-year-olds, 305, 296, 290, 283, 288, 290, and 294, respectively (National Center for Educational Statistics, 1994, p. 56). In short, Stedman's judgment about science scores is simply wrong! Over 22 years, two of the three age groups studied actually showed slight GAINS during this period, but the most reasonable interpretation
of the science data is again one of general stability over time.

Stedman also writes, "in the early 1990s, younger students' NAEP reading and writing performance slipped." Again, let readers judge the issue. Reading scores reported for 9-year-olds over seven administrations of the NAEP covering 21 years were: 208, 210, 215, 211, 212, 209, and 210, respectively (National Center for Educational Statistics, 1994, p. 50). Thus Stedman's interpretation of the data is once again wrong! He sees a decline in reading scores when he should be seeing remarkable consistency of scores over time. In addition, the NAEP writing test seems to have been administered four times between 1984 and 1992, and the following average scores were earned: for Grade 4--204, 206, 202, and 207; and for Grade 8--267, 264, 257, and 274 (National Center for Educational Statistics, 1994, p. 52). As before, Stedman's interpretation seems to be in error. It is difficult to understand how Stedman could misread such stable data sets and conclude that they indicate "slippage." (Curious readers may check the NAEP data for themselves. They appear in all recent editions of the CONDITION OF EDUCATION.)

For some reason, Stedman also chooses to complain about our review of SAT evidence. He challenges our conclusion that the notorious, so-called "decline" in SAT scores in the late '60s and early '70s was largely generated by sharp increases in the range of students opting to take the test, asserting that we had ignored his published demonstration that demographic changes in test takers explain "much, but not all" of this decline in SAT scores. Two crucial points are relevant to this complaint. First, how could Stedman or anyone else possibly know whether demographic changes do not explain all of the notorious SAT "decline," since MANY important demographic characteristics of students are never measured and thus cannot be entered into analyses concerned with the shifts in SAT scores?
But more importantly, in the process of issuing his complaint, Stedman utterly ignores the point often made by other scholars, and repeated forcefully in TMC, that aggregate SAT scores are NOT valid for judging the achievements of school districts, states, or the nation as a whole because they are not based on random samples. So this complaint turns out to be a true tempest in a teapot. (Despite which, some readers may continue to wonder about other possible reasons for the SAT "decline." A plausible hypothesis is offered in Note 1.)

In addition, Stedman challenges another of our conclusions that he does not bother to document. Based on disaggregated evidence from both SAT and NAEP scores, we asserted that the overall achievements of minority students have recently been slowly improving in America. In apparent contradiction, Stedman states that we had ignored SAT evidence showing "minority verbal declines in the late 1970s and late 1980s." But it is far from clear that these putative "declines" were substantive; the evidence for these putative "declines" in SAT scores was matched by more representative national data from the NAEP that showed large gains in minority reading scores between 1971 and 1992 (National Center for Educational Statistics, 1994, p. 50); and once more the point made by Stedman does not contradict the general conclusion we wrote about in TMC. Thus again, there is less here than meets the eye.

Finally, Stedman accuses us of writing a "selective" review of the work of Linn, Graue, and Sanders (1990) on commercial tests: failing to report data from the SRA; failing to report data that Linn et al. had generated on high school achievement; and failing also to note their "worries" that recent gains in commercial test scores might have reflected school districts' repeated use of the same tests rather than genuine student improvement. Let us put these concerns to rest. Regarding the SRA issue, the data reported by Linn et al.
are complex and mixed, and we judged that they required too much explanation to warrant their inclusion in a book
designed for general readers--but those data do NOT contradict the interpretation we gave (see Note 2). Regarding the high school issue, we chose again to leave the data out because academic achievement growth in basic subjects seems to be limited at the high school level (see Coleman, Hoffer, & Kilgore, 1982, for example) and because Linn et al. did not report high school data for the CTBS and the ITBS--but again, the high school evidence does NOT contradict the conclusion we stated. (In fact the high school data SUPPORT our assertions, and we provide them for the interested reader in Note 3.) Regarding the interpretational "worries" of Linn et al., after noting some cautions, Linn and his colleagues provided the following summary for their analyses: "The evidence reviewed provides strong support for the conclusion that norms obtained for grades 1-8 during the late 1970's or early 1980's are easier on most tests than more recent norms." So, student achievement is UP on commercial tests, and that is exactly what we concluded.

To summarize then, when one actually looks at the additional evidence alluded to by Stedman, one discovers that he has misrepresented some of it and that none of it generates insights that would have caused one to question the conclusions we stated in TMC--and with which Stedman states agreement. Truly, when it comes to challenging our statements about the myth of achievement decline, Stedman has labored mightily and brought forth a mouse.

CREATING MOLEHILLS, PART TWO--THE MYTH THAT AMERICAN SCHOOLS ALWAYS FAIL IN COMPARATIVE STUDIES

Stedman also accuses us of making a fourth "sweeping claim"--that "U.S. students 'stack up very well' in international assessments" (p. 63). This assertion is largely correct, although some context should be provided so that readers will understand what we did and did not mean when making this claim.
In our analyses of the issues involved in comparative studies of student achievement, we made five general points:

1. Few of those studies have yet focused on the unique values and strengths of American education.
2. Many of the studies' results have obviously been affected by sampling biases and inconsistent methods for gathering data.
3. Many, perhaps most, of the studies' results were generated by differences in curricula--in opportunities to learn--in the countries studied.
4. Aggregate results for American schools are misleading because of the huge range of school quality in this country--ranging from marvelous to terrible.
5. The press has managed to ignore most comparative studies in which the United States has done well. (p. 63)

Of these general points, the first and third are particularly crucial. By comparison, the United States operates an education system that has many unique features which reflect the values of our nation. Americans value a broad education, and this means that they offer more curricular options in their schools and colleges and lay less stress on the early mastery of core subjects than do most other industrialized nations. They also value creativity, initiative, and independence of thought in students, so they (sometimes, though not often enough) support curricula and classroom practices that encourage these traits rather than conformity to arbitrary standards. Our country also seeks to serve the needs of a huge range of students--including those from many different ethnic groups and those with both talents and handicaps--and this places unique burdens on our public schools. Americans also believe that education should provide
equal opportunities for all, and as a result we build a unique set of second-chance opportunities into our school systems. And because we value higher education strongly, we enroll a lot more of our young people into colleges and universities, and our graduation rates are the highest in the world.

Because of these reasons, and because most comparative studies to date have assessed only the achievements of younger students in core subjects, they have, in effect, managed to AVOID most of the true strengths of American education. Commenting on this situation, we wrote in TMC: "If Americans are truly interested in learning how their schools stack up comparatively, they should insist that at least some comparative studies focus on the values that AMERICANS hold for their children and the unique strengths of AMERICAN schools.... [To date] none of the studies seems yet to have investigated breadth of student interests or knowledge; none has yet examined student creativity, initiative, social responsibility, or independence of thought; and few have studied knowledge among undergraduates or young people who have completed their educations. In fact, comparative studies to date seem almost to have deliberately avoided looking at the strengths of American schools!" (p. 53). Given this biased focus, it is actually quite surprising that our country has done as well as it has in comparative studies of achievement, and it was with these and related thoughts in mind that we wrote, "The myth that American schools fail badly by comparison with schools in other industrialized countries is also not supported by the evidence. Instead, when we analyze that evidence responsibly and think carefully about its implications, we discover that American schools stack up very well" (p. 63).

In his critique of us Stedman AGAIN begins by stating his general agreement with our position. He writes, "U.S.
performance in the international arena is not as dismal as school critics have asserted." (If needed, additional confirmation of this point, on which Stedman and we agree, may be found in the recent thoughtful review of comparative evidence by Gerald Bracey, 1996.) So once again, where's the beef?

Stedman seems not to have been concerned about the issues we raised in our first, second, or fifth general points summarized above; indeed, he ignores them completely and as a result again misrepresents the thrust of much of what we wrote. (To illustrate, he asserts that we either wrote or implied that American performance in comparative studies is generally "glowing." We neither wrote nor implied such a claim.) He does, however, take issue with our third and fourth points, again citing his own published studies, claiming that the latter made substantive points that would contradict some of our conclusions. We turn now to these latter issues.

For one, Stedman asserts that American students "have done well in reading and elementary school science, middling to poor in geography and secondary school science, and last or near-last in mathematics." Although we were familiar with some of these apparent effects when we wrote TMC, we decided that validity problems in the comparative research literature were so great that stating such detailed conclusions was not justified at present, nor did we include them in our book. So here Stedman is complaining about what we failed to assert. Moreover, we are far from the only scholars to have noted serious validity problems in comparative studies of achievement. A Japanese teacher of mathematics has recently discussed the serious difficulties of trying to equate samples of American and Japanese students and of the absurd results that can be generated by studies based on badly flawed samples (see Ishizaka, 1993).
He questions Japanese superiority in mathematics and is amazed that Americans believe the results of such flawed studies. But who is this teacher? Why should we put any credence in his remarks? Kazuo Ishizaka is his name, and he is Chief of the Curriculum Research Division of the National Institute for Educational Research in Japan (Note 4). Ishizaka also notes the
errors inherent in the oft-cited work of Stevenson and Stigler (1992), whom Stedman unwisely cites to support one of his stranger assertions about the supposed strengths of Japanese education.

For a second, Stedman characterizes our conclusion about opportunity-to-learn as a "red herring" and quarrels with our presentation of evidence that was originally generated by Ian Westbury (1992) from the Second IEA Study of Mathematics Achievement. In this presentation Westbury (and we) pointed out that the typical Japanese 13-year-old has taken algebra whereas the equivalent American student has not, thus aggregated mathematics scores for students of this age show Americans to be at a disadvantage; but when the American data are disaggregated to display achievements for students who have and have not taken algebra, the achievements of the former look quite similar to those of Japanese students. Surprise! Somehow Stedman takes this simple demonstration of the effects of differences in curricula and opportunity-to-learn and converts it into a series of assertions that we did not make in TMC and do not believe. To repeat our major point: Education systems in various countries offer sharply different curricula, differing sequences of courses, and differing opportunities to learn for students at a given age. These differences generate many of the so-called "findings" of comparative studies of achievement, and nothing that Stedman writes contradicts this general point.

For a third, Stedman misrepresents our general point about variability among schools in achievement generated by the enormous differences in levels of funding for schools in our country--an effect that should be less prevalent in most other countries where schools are funded more equally. Stedman asserts that we had argued that overall variability in achievement among students should be greater in our country, but we did not argue for such an effect.
For a fourth, Stedman objects to our graphic presentation of data from comparisons of NAEP and IAEP scores that were originally generated by NCES in 1993. The point we made in presenting those data was that they reveal HUGE differences in average achievement among the American states, and that those differences are comparable in size to differences among nations reported in comparative studies, with the achievements of the "top" American states looking rather like those of our "top" overseas competitors and the "bottom" American states looking like underdeveloped countries. To illustrate, average scores for Iowa, North Dakota, and Minnesota are right up there with the top-performing Asian nations of Taiwan and Korea; in contrast, Alabama, Louisiana, and Mississippi score right down there with the lowest-performing nation, Jordan. To talk about an "average" score for our nation as a whole may therefore be misleading. Stedman doesn't like the implication of this conclusion, so he quarrels with details of the data generated by NCES (which we reported), but none of his quarrels vitiates the general point we made.

Finally, Stedman misinterprets arguments about the evil effects of poverty and prejudice on student achievements in America that we made repeatedly in TMC. He writes, "although racism and social inequality have taken a severe toll on many of our students' academic development, this does not explain the poor general performance of U.S. students... [and] even our top half have not kept pace internationally in math and science." Apart from the fact that such statements utterly ignore the fact that poverty and racism are much greater problems in our country than in most comparable nations, why on earth would racist and social-inequality processes NOT depress the general, aggregate achievement scores of American students or the achievements of "the top half"? The mind boggles.
To summarize: In Stedman's assault on our review of comparative studies of achievement he chooses to ignore and in part to misrepresent what we had written, and again the substantive points he makes do not contradict those we actually wrote in TMC. Thus, as before, what
Stedman writes represents a good deal of sound and fury but signifies very little. He has once again made molehills out of molehills.

LIKELY MOTIVATIONS

We cannot know all of the reasons why Stedman would choose to write such an unfortunate diatribe--one clearly at odds with the many embarrassingly flattering reviews that TMC has received. Some of the few who have so far criticized us had actually helped to create The Manufactured Crisis and presumably resent being found out and publicly scolded. Others apparently have bought into major myths we exposed in our book or derived and promoted inappropriate ideas for the "reform" of our schools and must now defend their untenable positions. And some may possibly be miffed because we did not choose to cite works of theirs that they considered relevant to the arguments of TMC. However, it seems quite likely that at least a portion of Stedman's dyspepsia reflects yet another motivation. This becomes clear in the latter part of Stedman's "review" when he states that American school achievements are 'not good enough' and that the two of us should be chastised because we did not express this idea in TMC. He writes, "although achievement trends, for the most part, have been stable, academic and general knowledge have been at low levels for decades." And this leads him to claim that--in supposed contradiction to what we had written--"the achievement crisis is real."

This stance is a remarkably familiar one, of course. Indeed, school bashing has been a popular indoor sport in America for years, and in Chapter Four of TMC we offered numerous examples of such sour judgments about our country's schools dating back over much of the century. In addition, this critical stance adopts safe territory because the standards against which America's schools are to be judged and found wanting are arbitrary and can be made up as one goes along.
And for this reason, as prominent neoconservatives have recently begun to discover that the myths of The Manufactured Crisis cannot be supported with evidence, their enthusiasm for this stance has blossomed.

Those who adopt this stance today tend to bolster it with three arguments. Some suggest that American schools have 'always' been weak achievers, and that the fact that their achievements haven't risen recently should not be taken as a vote of confidence. Others--enthusiasts for standardized testing--delight in pointing out that 'too many' students cannot 'pass' those tests at a given level or correctly answer selected items from those tests. And still others claim that although present standards were all very well for the past, they are clearly inadequate for the demands of the future (which somehow are rarely explained). In his so-called "review" Stedman advances the first two of these arguments but, somehow, not the third.

Regardless of the arguments advanced, this stance reflects a value judgment, not evidence. Stedman is at least partly right, of course, in his suspicion that we do not share his values. We find it ludicrous that anyone should claim that "academic and general knowledge have been at low levels for decades" in this country. If this were actually true, how on earth did our nation ever manage to win World War II, send astronauts to the moon, create a plethora of new pharmaceuticals, and invent the transistor and virtually all the computer technology now used worldwide? For that matter, how did we achieve the world's highest rate of industrial productivity and establish ourselves as this century's dominant superpower? "Low levels" of academic and general knowledge? What nonsense!

In addition, as we made abundantly clear in TMC, we believe that America's long-suffering educators and hard-working students are more often the victims than the perpetrators of our country's serious and escalating social problems. We cannot believe that
useful strategies for solving the problems of American education are likely to be promoted by unfairly scapegoating these deserving people.

On the other hand, Stedman seems to share at least some of our values. Toward the end of his missive, he writes: "To succeed in our most troubled communities, we will need to overhaul school financing systems and break down powerful, entrenched bureaucracies. But school reform is no substitute for job creation, income re-distribution, and political empowerment. We must make our educational efforts part of a broader social and political agenda, one that promotes full employment, community revitalization, and civic participation."

Such thoughts certainly parallel those we expressed in our book. Too bad that Stedman did not bother to ponder the implications of these latter ideas for understanding the enormous accomplishments of American educators who have persevered, indeed have often succeeded, in the face of escalating social problems that are FAR worse in our country than in other industrialized nations.

But regardless of whether Stedman did or did not agree with all the values we expressed in TMC, he should NOT have allowed such disagreements to generate the lacunae, misrepresentations, and trivialities that characterize his supposed "review" of our book. Indeed, one of the hallmarks of good scholarship is that it is both honest and careful in its portrayal of the works of others, even those works with which one disagrees. Either Lawrence Stedman is unfamiliar with the admirable standards expressed by Edwin Boring, or he chose to ignore them completely when writing his unfortunate review.

A NOTE OF THANKS

We have both written books before, but this is the first time either of us has authored a work that is controversial.
We have been truly startled by some of the distorted portrayals and outright lies that have surfaced in so-called reviews of TMC appearing in major media sources, but most of those sources do not provide opportunities for authors to correct such mischiefs. Thus, in closing, we would like to thank Gene Glass and the editorial board of Education Policy Analysis Archives for this opportunity to reply to Lawrence Stedman's disingenuous portrayal of THE MANUFACTURED CRISIS.

NOTES

1. The SAT decline began in the 1960s. Left out of most arguments about the causes of the decline is the fact that a powerful new medium of education and entertainment came into play in the 1950s. Television viewing has consequences for cognition and effects on school performance. Because television entered the daily lives of children on a regular basis in the early 1950s, the first of the TV-raised generations to graduate from high school were the classes leaving the public schools in the early to mid-1960s. Coincidence? Probably not. The work of Keith Stanovich (1993) is relevant here. In a clever series of studies he shows that there is a high correlation between exposure to print and many kinds of performance on paper-and-pencil tests of general verbal information. If exposure to print went down in the 1950-1965 time period, then a reduction in verbal aptitude test scores would be expected. That is exactly what happened. And if the exposure-to-television hypothesis has any predictive power, then the verbal aptitude score decline should be greater than the decline in mathematics aptitude scores. And that happened too. Whether this sudden emergence of television in the lives of America's students did or did not
have a depressing effect on average SAT scores will never be known. But it is clear that during this period the primary medium of recreation and instruction changed, and the SAT--originally calibrated in 1941--did not. The SAT is NOT a test of the ability to decode rapidly changing audio-visual information, though the cultivation of this aptitude has been required since the 1950s. The bottom line is this: two things changed in the 1960s, the medium through which students were acquiring most of their knowledge and the composition of the population electing to take the SAT. It seems more likely that the notorious "decline" reflected these two factors rather than any supposed drop in school quality.

2. Of the 24 scores (grades 1-12 in reading and in mathematics) for the median-level test-taker, the SRA tests show the following gains and declines from one norming to another: reading--up in four grades, down in eight grades, net loss 1.3 percentiles; mathematics--up in six grades, down in four grades, no change in two grades, net gain 1.5 percentile ranks. The average for all grades and both subjects on the SRA is a net gain of .2 percentile ranks per year for the median-level test-taker from one norming to another. On the SRA tests, then, what one sees is a tiny gain here and there, and a tiny loss here and there. But most important is that there is no discernible trend here at all. What on earth would readers have gained had we displayed these data in TMC?

3. The estimated yearly change in percentile rank for the median test-taker on the reading part of the California Achievement Test (CAT), from one renorming to the next, for grades 9-12, is: +2.1, +1.1, +.6, and +.1. Thus, in this case, every score reflects a gain. In mathematics the comparable data are +2.0, +1.1, +.7, and +.3. Again, each year a gain is evident.
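Readers who wish to verify our tallies can do so directly. The following few lines of code are our own purely illustrative cross-check of the CAT figures just quoted; only the numbers come from the note, while the names and structure are invented for this sketch:

```python
# Illustrative cross-check of the CAT figures quoted in Note 3 (grades 9-12).
# The numeric values are taken from the note; everything else here is ours.

cat_reading = [2.1, 1.1, 0.6, 0.1]   # yearly percentile-rank change, grades 9-12
cat_math    = [2.0, 1.1, 0.7, 0.3]

def tally(changes):
    """Count gains, losses, and no-change entries in a list of score changes."""
    gains  = sum(1 for c in changes if c > 0)
    losses = sum(1 for c in changes if c < 0)
    flat   = len(changes) - gains - losses
    return gains, losses, flat

# Every CAT entry, in both subjects, is a gain -- as the note states.
print(tally(cat_reading))  # (4, 0, 0)
print(tally(cat_math))     # (4, 0, 0)
```

The same trivial count, applied to any of the test batteries discussed in these notes, reproduces the gain/loss figures we report.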
And if we had included the Stanford Achievement Tests (SAT), we would have reported that yearly gain scores for grades 9-12, between one renorming and the next, were: for reading, +.8, 0.0, +1.0, +.8; and for mathematics, +1.0, +1.0, +1.0, +1.2. This means that seven of the eight high school test scores were up, one was unchanged, and none showed a decrease. Thus we could have ENHANCED our claim about rising test scores for commercial tests had we included high school data on the CAT and the Stanford!

The MAT reading tests generated mixed data for these four grades: scores were up in two grades, but scores were down in two others. The NET score in reading, however, was up, and ALL four high school grades provided evidence of increased scores in mathematics. So even had we included MAT high school data, our conclusion would not have been challenged. In sum, Stedman's claim that much was lost when we chose not to provide results from the high school level is false.

4. With some minimal editing to make his English clearer, Mr. Ishizaka said:

Based on the entrance examinations, students [in Japan] can choose one of the high schools of [a] large attendance area. So naturally the high schools are ranked according to their academic abilities. In the top-ranking high school of the prefecture (state) where I taught, the average score of the newly entered students would ordinarily be 98 or even 99%. Almost all students got full marks. In my school, I taught the part-time students who work in the daytime and study in the evening. The average score of those students is 2.1 [percent], just a little less than the average of all schools. The average when I participated in that test was just 3 [percent].

In the Second International Mathematics Study [SIMS], Population B of Japanese students got extremely high scores. So many people believe that Japanese high school students do very well in mathematics.
I have been teaching mathematics for ten years and I know how well they do. Their average on the intended curriculum
was just around 5 [percent] or less when I was a teacher of mathematics. That means that the majority of the Japanese high school students do not attain what is intended by the government. If you look [at] the Japanese textbook it contains lots of materials, but it does not mean that the students attain all those materials. (p. 45)

[When] we pick...certain samples of students it frequently happens something like this....Japanese attainment trends of high school students...[are] something like the letter "U" shape. They are either doing extremely well or extremely bad. I told you when I make a test, the average score was less than 5 points. Five points when the full score is 100. But in some of the best schools the average score is 98 or 99%.

High schools of Japan were ranked according to their academic ability, and students trying to enter science and engineering fields ordinarily attend top-level schools. In addition, Japanese society is [strong on] academic credentials. What school he or she is coming from is very important. Therefore, up to the time when they enter colleges and universities they study extremely hard. They study more than 2000 different kinds of test problems and remember how to answer those items. I myself had the experience of studying for the entrance examination. When we look [at the SIMS tests] the answer is choosing from among five choices. If we are practicing every day for the entrance examination, we know very quickly what would be the correct answer. If it is a written test, it would be a little different. Anyway, Japanese Population B samples [of SIMS] were chosen from these upper extremes. I am not a specialist of international comparisons. [But] I know what the high school attainment trend [really] is. (pp. 6-7)

Mr. Ishizaka also notes that Dr. Merry I.
White, a leading Japanologist, has written something like this: "The curriculum--the courses taken and the material covered--is so rich that a high school diploma in Japan can be said to be the equivalent of a college degree in the U.S." Mr. Ishizaka thinks that Dr. White has lost her mind. And Mr. Ishizaka also noted that the U.S. Department of Education, in one of its pamphlets titled AMERICA 2000 COMMUNITIES: GETTING STARTED, quoted Harold Stevenson. Stevenson has made headlines many times claiming that in his comparison of fifth-grade mathematics classes, "The average score of the lowest Japanese classroom is higher than the highest American classroom average for arithmetic." (p. 13) Mr. Ishizaka simply thinks we are foolish to believe this. And he might have some relevant background for commentary on this issue, since he not only taught in Japan and is a member of the Ministry, but he has had personal experience with U.S. schools. His own children attended Illinois public schools and found them to be great!

References

Berliner, D. C., and Biddle, B. J. (1995). The Manufactured Crisis: Myths, Fraud, and the Attack on America's Public Schools. Reading, MA: Addison-Wesley.

Bracey, G. (1996). International comparisons and the condition of American education. Educational Researcher, 25(1), 5-11.

Coleman, J. S., Hoffer, T., and Kilgore, S. (1982). High School Achievement: Public, Private, and Catholic Schools Compared. New York: Basic Books.

Ishizaka, K. (1993). Japanese education--the myths and realities. Paper presented at the meetings of the Canadian Teachers Federation, Ottawa, Ontario, Spring 1993.
Linn, R. L., Graue, M. E., and Sanders, N. M. (1990). Comparing state and district test results to national norms: The validity of claims that "everyone is above average." Educational Measurement: Issues and Practice, 10(Fall), 5-14.

National Center for Education Statistics (1991). The Condition of Education, 1991. Washington, DC: U.S. Department of Education, Office of Educational Research and Improvement (NCES 91-637).

National Center for Education Statistics (1994). The Condition of Education, 1994. Washington, DC: U.S. Department of Education, Office of Educational Research and Improvement (NCES 94-149).

Stanovich, K. (1993). Does reading make you smarter? Literacy and the development of verbal intelligence. In H. W. Reese (Ed.), Advances in Child Development, vol. 24 (pp. 133-180). San Diego, CA: Academic Press.

Stedman, L. (1996). The achievement crisis is real: A review of The Manufactured Crisis. Education Policy Analysis Archives, 4(1).

Stevenson, H. W., and Stigler, J. W. (1992). The Learning Gap. New York: Summit Books.

Westbury, I. (1992). Comparing American and Japanese achievement: Is the United States really a low achiever? Educational Researcher, 21(5), 18-24.