USF Libraries
USF Digital Collections

Educational policy analysis archives

MISSING IMAGE

Material Information

Title:
Educational policy analysis archives
Physical Description:
Serial
Language:
English
Creator:
Arizona State University
University of South Florida
Publisher:
Arizona State University
University of South Florida.
Place of Publication:
Tempe, Ariz
Tampa, Fla
Publication Date:

Subjects

Subjects / Keywords:
Education -- Research -- Periodicals   ( lcsh )
Genre:
non-fiction   ( marcgt )
serial   ( sobekcm )

Record Information

Source Institution:
University of South Florida Library
Holding Location:
University of South Florida
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
usfldc doi - E11-00147
usfldc handle - e11.147
System ID:
SFS0024511:00147


This item is only available as the following downloads:


Full Text
xml version 1.0 encoding UTF-8 standalone no
record xmlns http:www.loc.govMARC21slim xmlns:xsi http:www.w3.org2001XMLSchema-instance xsi:schemaLocation http:www.loc.govstandardsmarcxmlschemaMARC21slim.xsd
leader nam a22 u 4500
controlfield tag 008 c20009999azu 000 0 eng d
datafield ind1 8 ind2 024
subfield code a E11-00147
0 245
Educational policy analysis archives.
n Vol. 8, no. 3 (January 05, 2000).
260
Tempe, Ariz. :
b Arizona State University ;
Tampa, Fla. :
University of South Florida.
c January 05, 2000
505
Social science research findings and educational policy dilemmas : some additional distinctions / Steven I. Miller [and] Marcel Fredericks.
650
Education
x Research
v Periodicals.
2 710
Arizona State University.
University of South Florida.
1 773
t Education Policy Analysis Archives (EPAA)
4 856
u http://digital.lib.usf.edu/?e11.147


xml version 1.0 encoding UTF-8 standalone no
mods:mods xmlns:mods http:www.loc.govmodsv3 xmlns:xsi http:www.w3.org2001XMLSchema-instance xsi:schemaLocation http:www.loc.govmodsv3mods-3-1.xsd
mods:relatedItem type host
mods:identifier issn 1068-2341mods:part
mods:detail volume mods:number 8issue 3series Year mods:caption 20002000Month January1Day 55mods:originInfo mods:dateIssued iso8601 2000-01-05



PAGE 1

1 of 15 Education Policy Analysis Archives Volume 8 Number 3January 5, 2000ISSN 1068-2341 A peer-reviewed scholarly electronic journal Editor: Gene V Glass, College of Education Arizona State University Copyright 2000, the EDUCATION POLICY ANALYSIS ARCHIVES. Permission is hereby granted to copy any article if EPAA is credited and copies are not sold. Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education Social Science Research Findings and Educational Po licy Dilemmas: Some Additional Distinctions Steven I. Miller Loyola University, Chicago Marcel Fredericks Loyola University, ChicagoAbstract The article attempts to raise several distinctions regarding the presumed relationship of social science research findings to social policy making. The distinctions are made using Glymour's critique of the Bell Curve An argument is made that (1) social science models and research findings are largely irrelevant to the actual concerns of po licy makers and (2) what is relevant, but overlooked by Glymour, is how ideological factors mediate the process. The forms that ideological med iation may take are indicated. Although there have been a variety of att empts to understand how social science research does or does not affect the "voices" of th ose being studied (Harding, 1993; Longino, 1993), we wish to revisit the issue from a nother angle. What has been overlooked in even the most ambitious constructivis ts' forays (Fuller, 1988) into dominant epistemologies is why such research findings are, generally, so overwhelmingly ineffective in social policy formula tion. That is, we wish to consider some of the deeply implicit notions of the "researc h act" (Denzin, 1989) itself; those that

PAGE 2

2 of 15contribute to either the tacit acceptance of such k nowledge production or generate vociferous attacks (Lakatos, 1978) of various sorts More specifically, our argument is that social policy makers assume an atypical "gatek eepers" role where, in this case, they must attempt to appropriate, translate, and filter social science research findings to relevant publics; however, the very act of doing so is most likely doomed to fail. Those who are then to "benefit" from the social policies, informed and enlightened by social science findings, are the very ones whose voice oft en cannot be heard. The issue is, to use Quine's (1969) overw orked phrase, one of an "indeterminancy of translation." It is not that a translation is im possible, however, but rather that some thing is lost in the translation. What is lost is the subject of ou r analysis, including an attempt to show—again borrowing from Quine (1960)—that there is indeed a "fact of the matter" about all of this, but an unexpected on e. We will attempt to show how the "translation" issue works by using the recent analy sis of the well known philosopher of science, Clark Glymour, to account for the relation ship of social science research, to social policy, to social practice. Specifically in his provocative article, "What went wrong? Reflections on Science by Observation and The Bell Curve (1998:1-32), Glymour recognizes the issues of evidence and polic y relevant to both the philosophy of science and social science and how they overlap int o the ambiguous realm of public policymaking. However, the need for additional an alysis lies not only in the fact that Glymour has not fully explored a series of mostly i mplicit, but very significant, assumptions that are involved in social policy maki ng, but also to illustrate that the nexus of scientific thinking and the formulation of social policy often support ideologically-based belief systems that selectively utilize "scientific" findings. Our aim will be to illustrate how even a well-known philoso pher such as Glymour falls victim to the very trap he is trying to expose and avoid. To begin with, Glymour's critique of the methodological (and in a deeper sense, ontological) issues he raises concerning the analys is of The Bell Curve (1994) are arguably some of the best made to date. The social sciences, Glymour argues, have been plagued by the alleged importance of uncovering the causal mechanisms underlying social behavior and practices. This is not a new pr oblem. What is important, as he points out, is the inability of the social sciences to ack nowledge that these implicit causal structures are highly complex, and being so, how th ey can produce contradictory conclusions within a given research domain. The com plexity of these causal structures is often overlooked by social scientists because of im plicit beliefs concerning the validity of the methodological techniques themselves (Campbe ll, 1987). For instance, if a social scientist can employ such relatively powerful quant itative techniques as multiple regression, discriminate analysis, and factor analy sis, there are usually two corresponding beliefs that seem to come into play: (1) that such techniques take precedence over "philosophical" beliefs concerning the nature of (and presumed importance of) causality, and (2) the use of such t echniques, irrespective of their ability—or lack of—to uncover true causal structure s, still improves the claims that can be made about social behavior over-andabove what could be said in their absence. Again such debates, as Glymour correctly points out mistake the importance of clear causal thinking with the technical application of m ethods. He states the issue (p. 1): Social statistics promised something less than a me thod of inquiry that is reliable in every possible circumstance, but someth ing more than sheer ignorance; it promised methods that, under explicit and often plausible circumstance, converge to the truth, whatever that may be, methods whose

PAGE 3

3 of 15liability to error in the short run can be quantifi ed and measured. Glymour further correctly points out (pp. 2-3) that social scientists are still under the sway of a certain form of positivism that is su spicious of causal analysis itself. For him, there is a solution: "Clear representation by directed graphs of causal hypotheses and their statistical implications, in train with r igorous investigation of search procedures, have been developed in the last decade in a thinly populated intersection of computer science, statistics and philosophy" (p. 3) However, even this solution, potentially elegant as it is, in our view, will not provide the needed framework for rational social policy making. We will try to addre ss why this is so in the sections that follows.I. To put the issue rather crudely, for thos e engaged in the policy making process what Glymour envisions, "just doesn't matter!" What we mean by this is that in social policy making, at many levels and across a variety of contexts, the discovery and justification of elegant (or even elementary) causa l processes is largely irrelevant to the decisions made by policy makers. Part of the proble m, to begin with, is the fact that there is what we will call an "ontological bifurcat ion" between social scientists and policy makers (who are usually not social scientist s). These two groups—at least based on our own experiences—simply view the "world" in d ifferent ways, and often in such fundamentally different ways, that although they want to communicate often they cannot because, ultimately, they are unable to do so. Whil e the story of why this is so is rather complex, Fuller's attempt to explain it is relevant here. He wrote (1988), for example, Unfortunately, as our remarks were meant to suggest the crucial epistemological differences occur at the level of t he different textual embodiments since a popularization of quantum mechanics offer s the lay reader no more access to the work of the profession al physicist than a state-of-the-art physics text offers the profession al physicist access to the general cultural issues which interest the lay publ ic. [His emphasis.](p. 272) There are indeed different "textual embod iments" that are at the heart of the issues, but for us the policy maker-as-gatekeeper r ole is the crucial one to consider. This role serves as the principle "translator" one, medi ating between the social scientist-as-researcher and the voices of specifica lly involved publics. In contrast with Fuller, however, we see the issue as primarily "ont ological", although heavily conditioned by the epistemological. By this we mean the issue of increased technique-sophistication, along with the causality issue, is believed to be necessary (and possibly sufficient) for an increasingly satisfacto ry and accurate "ontological-representation" of what social science research findings can do. We are suggesting, on the other hand, that the very belief in what social science can do for social policy making is at the center of differing views o f (social) reality between these two groups, leaving aside the affected publics. One ini tial way of capturing the difference is to begin with a few "themes" about evidence that fi gure into the debate but are often not explicitly indicated as such. These themes are fund amentally about what constitutes "good" evidence for (eventually) the making of "goo d" policy, or about how differing textual embodiments come about.

PAGE 4

4 of 15Theme 1: "What is your evidence?" From the policy maker's side of the ontol ogical divide, the pressing issue is to be able to "take and use" the evidence of social scien ce research, with methodological finesse(ness) be damned. Moreover, this is often th e case for policy makers who are trained as social scientists. The issue of the evid ence theme takes various forms. Perhaps, the most central one centers around the fo llowing distinction: "What evidence counts?" vs. "What counts as evidence?" The distinc tion is one with a difference, as we see it. Taking the latter one first, what counts as evidence includes a large class of possibilities, such as empirical and non-empirical (i.e., qualitative), historical, legal data, and so forth (Miller & Safer, 1993). Any of these types of evidence may be deemed to be relevant by the policy maker in terms of formulatin g, implementing or evaluating a given social policy. (Note 1) The issue is not triv ial since how it is addressed, and by whom, can determine a wide range of decisions affec ting peoples lives in terms of what voices they may or may not eventually have. What is crucial to see, however, is how c hoices as to what does not count as evidence automatically entail what evidence counts. Thus, if we reject the use of, fo r example, ethnographic findings as evidence for a so cial policy issue, and our only other choice is some type of empirical evidence, then the process of elimination dictates the epistemological choice of what evidence counts. Her e we may find a great deal of variation: experimental vs. correlational findings, for instance, and both further delineated by way of causal robustness. Moreover, e ach type of evidence may be further distinguished by such factors as "weight" and "numb er". Thus, the "weight of the evidence" may be a function of how "much" there is of it and how these concerns are counterbalanced by "internal" factors such as sampl ing strategies and numbers, parametric vs. non-parametric measures, the putativ e validity and reliability of measures used, their "normal distribution", and so on. All of these considerations need to be, b ut seldom are, taken into consideration by the policy maker. Or, more precisely, even when the y are their eventual impact on the policy making process is usually minimal.Theme 2: "Do you have a causal model?", or "Does yo ur data give rise to or support a pre-determined causal model?" In many policy making scenarios, Theme 2 may or may not be related to Theme 1 and this from either side of the ontological divide Social scientists who serve as (adjunct) policy makers in their role of "experts", based on our experience, seldom, if ever, explicitly engage in discussions of the causal robustness or the efficacy of their models. At best, such attempts are ad hoc; even whe re publication in empirical social science journals is concerned, the issue of "causal ity" is usually given the obligatory conceptual "nod" but then quickly forgotten. From t he view of the non-social scientist policy maker the issue is moot, since it is usually so far divorced from what needs to be accomplished, it is perceived as irrelevant. However, where a causal model could be sp ecified with the precision argued for by Glymour, the implications for policy making are probably not as dramatic as he makes them out to be. Consider his two models (pp. 16-18, figures 12 and 13, respectively) as examples.

PAGE 5

5 of 15 In (a), Herrnstein and Murray's (1994) mo del, IQ is the presumed cause of X (let's say some outcome variable), and while Education may "intervene" or "mediate" the IQ -X relationship, something the social scientist woul d want to know, Glymour argues the "answer" to (a) may be mistaken because of the inab ility to account for the possibility of "U" in case (b). The "U" (e.g., "latent factors", o ther unknown "variables") may themselves be correlated with X and Education and h ence give a false picture of what is presumed in (a). Now, both (a) and (b) are examples of mod els that "count". Let's also assume that (b) is somehow fully specified and with "U" account ed for the role of Education is either enhanced or drastically reduced (i.e., in terms of explained variance). What is the social scientist-aspolicy maker and policy-maker-non-soc ial-scientist to make of this for policy purposes? The first may examine the total am ount of variance explained (i.e., R2), with or without the underlying causal structure, as not being that relevant. By this we mean, the social scientist as policy maker may: (1) judge (b) to be a "better" c ausal model because when "U" is taken into account the ov erall percentage of variance explained in X is "greater" than in (a), (2) mainta in faith in (a) because the amount of unexplained variance (i.e., 1 — R2 ) has not been "sufficiently" reduced in model (b), or (3) perhaps "go with" (a) or (b) depending on what "U" is determined to be. If U is something like the mysterious "g-factor" for abilit y, as opposed to a more "straightforward" variable such as, hypothetically, "Parental Attitudes", the decision may be to stick with model (a) because it is putatively more amenable to policy making. On the other side, the non-social scientist policy mak er (even given some understanding of the technical issues) still needs to know what to do—and (a) or (b) will not be very useful here. Why not? One reason is that the policy maker (perh aps of either variety) is engaged—although most likely implicitly—in the form ulation of a practical argument ; one, roughly, similar to Aristotle's ( DeMotu Animalium Ch. 7., Nicomachean Ethics VU 3:1 47a; VI 2:113a, DeAnima III II:1143b. (cited in Green, 1980:xvi) where the conclusion of the argument is in the form of an "ac t", or here for the policy maker, "Do X." In such a case, even a well formed argument wit h "true" premises is no guarantee that a policy maker will take such an argument seri ously (Miller and Safer, 1993). For the policy maker, who happens to be a philosopher o f social science, let us say, the situation is even more desperate. Even with a fully specified model of the kind argued for by Glymour, the philosopher-as-policy-maker wil l quickly recall the possibility of radical under-determination (Quine, 1960). Converse ly, if the model is so fully specified, from a god's-eye point of view so that a ll possible (even incompatible) models are somehow integrated into a meta-model, the situa tion for making concrete ("Do X") policy decisions becomes exponentially worse becaus e of the complexity (and, most

PAGE 6

6 of 15likely, abstruseness) of the model. Ironically, if the super-model were to be "reduced" to a simple, parsimonious and elegant one, its "simpli city" would argue against its applicability to social policy concerns which now c ome to be viewed as "highly complex" and beyond the "simplicity" of the model. The ideas above may be further related in a general way with Glymour's (1980) own notion of "bootstrapping." (Note 2) Even if we had a good, formal, and elegantly simple model (theory) of, say, the determinants of income inequality (see Miller, 1987:237-242 for arguments against the bootstrappin g issue which, perhaps, ought to be the method-of-choice in showing how a causal-modeli ng framework is relevant to social policy-making). For instance, assume that the State Superintendent of Schools has evidence (in the form of standardized test scores u sed in the system) that there is a "strong" (e.g., r = .70) positive correlation betwe en test scores and the SES of schools, i.e., SES and Achievement Test scores covary. From a bootstrapping perspective, we might suggest that any of the models, such as the o nes noted above, could in conjunction with the evidence, be used to infer an hypothesis s omething like, "when controlling for IQ the relationship between SES and Achievement Tes t scores will be substantially reduced." Let us say this hypothesis is subsequentl y tested and IQ indeed does reduce the relationship between SES and test scores. This goes on in different ways and the theory is increasingly "confirmed"— in at least this sense of the elusive term (Achinstein, 1983). Bootstrapping would seem to be (if indeed it is increasingly supported) a desirable consequence for the policy maker; but in fact it is not.II. While desirable, an increasingly well con firmed theory is ordinarily of little pragmatic value for the policy maker. And this is n ot primarily due to the complexity or theoretical "simplicity" of the theory, nor to a la ck of reliability searches, or problems of adequate statistical modeling, but rather to (1) th e lack of a "logic" of policy implementation given the nature of the indicators i n causal-modeling approaches themselves, (2) the lack of a clear "inference to t he best explanation" model in which the issues raised previously—what counts as evidence an d what evidence counts—become central, and (3) the lack of acknowledging the powe r of what we will call Ideological Proclivities in determining the "meaning(s)" of (1) and (2). The major problem with using social scien ce methods and modeling to make social policy is the failure to see that a type of "naturalistic fallacy" is involved, whereby the "is", in this case of The Bell Curve as well as other attempts, is believed capable of being translated into the "ought" of policy making. To see this, some comments on the three points above. First, one of the most difficul t issues policy makers confront is the implementation of indicators (as a part of formulating and implementing a policy) whose "status" may be epistemically sound but ontological ly problematic. And, the problem is made worse as, paradoxically, we become more sophis ticated in (as Glymour applauds) the use of such techniques as factor analysis which are used to reveal complex "underlying structures" or concepts. Thus, even wit h a non-problematic construct such as SES, the policy maker is confronted with the issue of how to implement its effects. That is, if SES is correlated with, say, IQ (a problematic construct), the policy maker must decide if (a) the construct can be changed or altered in such a way that those who do not have "enough" of it can obtain "more" of it or (b) if new social arrangements have to be constructed wherein those who have "enough" or "too much" of it can be persuaded to "share" it with others (e.g., social policy issues such as desegregation of schools through

PAGE 7

7 of 15"bussing") who have "less" of it, or those who have "enough" of it are kept away from those who do not because doing so (anticipating poi nt three, ideology) is justified in some way. Now multiply this one variable case with the type of sophisticated causal modeling envisioned by Glymour and the problems inc rease accordingly. The second issue related to the one just mentioned, is that of providing an "inference to the best policy decision "based on conventional notions of inferen ce to the best explanation models (generally, Lipton, 1991). What is involved here is essentially the need for "rules" of inference which operate in two directions. The first involves the creation of a causal modeling theory which is the result of previous thinking and perhaps partial testing of the various "paths" in t he model. The complete model is then tested further and claims about its efficacy as a model are put forth. In principle the model (or parts of it) can then be taken as the fra mework for developing a social policy, which then is tested. Both traditional "deductive" notions of theory use and Glymour's bootstrapping would fall under this approach. Now, even granting the "status" problems of the variables in the model as being capable of t esting in some meaningful way, if such testing does take place the conclusions about wheth er the policy has "worked" are still problematic. One problem of course is the adequacy of the testing procedures themselves, while another one is how the evidence stands in relation to the model and to the policy that is being evaluated. In another words, can the same evi dence simultaneously constitute a best-inference explanation to both? In many cases, the answer to both is no. In the first instance, the way we often attempt to map the presu med causal relations of the model to the "real world" are contrived, or at best, constit ute a partial mapping. As Glymour correctly points out, the way we "conditionalize" a cross different samples is crucial in what one's measures do or do not show. But the poin t we wish to emphasize is that such evidence, both in the "what evidence counts" and "w hat counts as evidence" senses, is not necessarily the evidence that counts for the policy. For example, the finding that SES and School Achievement do vary and are "explained" by IQ, let us say for the entire state of California, is more of a way of "confirming" thi s assumed relationship in the model than of formulating, implementing or evaluating a p olicy. That is, because of the nature of policy making as a form of practical argument (" Do X"), even a high correlation of model-specified variables is no guarantee of policy relevance in either the formulation, implementation, or evaluation phases of policy maki ng. Yet such evidence may be strong confirming evidence for the model itself. On the other hand, what counts as evidenc e might be given a broad definition for a given policy irrespective of any causal modeling co nsiderations, or perhaps more accurately, incidentally of causal-model considerations. For example, the S uperintendent of Schools in a state is aware that the "literature is strongly supportive of a SES-IQ-School Achievement connection, and a similar pattern seems to be the case in her own school system. She formulates a specific po licy in which she believes the only way to raise test scores (which are deemed "not acc eptable") is to permit no one in teacher training programs with an IQ of less than 1 15; remove teachers who score below this; and significantly increase the salaries of pr esent and future teachers who are or will be at this level. Additionally, what counts as evid ence for the policy (in its formulation and implementation) may be a wide variety of "evide nce" including previous empirical and non-empirical studies, reports, anecdotal descr iptions, philosophical arguments, and so on. These same, or different, evidence sources m ay also be used to judge the "success" of the policy in its evaluation phase. In this scenario, which by the way actually often occurs, the inference-to-the-bestpolicy judgment is made on the basis of non-causal model based evidence as instances of the inference to the best explanation

PAGE 8

8 of 15(read "explanation" as "successful" policy). While all of these variations on the social policy-causal modeling theme are relevant in varyin g degrees to the policy making process, the most relevant one in our view is that of implicit or explicit ideological preferences. How this issue works, and how even Gly mour is not fully aware of its power, will be described below. However, before thi s is addressed, some further brief reflections on the points above may be in order. Although not addressed by him specificall y, we have found some of the recent work by Searle (1988, 1995; also see Review Symposi um on Searle, 1998) to be especially useful in situating the social science r esearch-social policy issue. In his continuing analysis of intentionality, Searle (1983 1998: 99-104) introduces the notion of "conditions of satisfaction," a phrase which ref ers to the possibilities of judging a large class of intentional states in terms of their propositional contents. Some intentional states such as beliefs and hypotheses can be judged as true or false according to what Searle refers to as their mind-to-world direction o f fit. That is, these intentional states are supposed to reflect the way the world is in terms o f an independently existing reality. On the other hand, intentional states such as desires and intentions have a different direction of fit: a world-tomind direction. Here, the issue is one of trying to make the world correspond to what is believed about it (see also, Anscombe, 1959; Austin, 1962). The interesting parallel to the policy ma king-social research issue is that the direction-of-fit problem is actually counterintuiti ve to what one would expect. If we look at Figure 1, Glymour and many social scientists wou ld expect that the increased sophistication of, especially, causal modeling proc esses will increasingly yield a true mind-to-world fit [i.e., A ]. And, indeed, while this may prove to be the case in some ontologicallyrealist sense, it comes at the incre ased cost of having to demonstrate that the world (in the policy making sense) is such, and, hence, we end up with C : trying to fit the world to (again, in terms of policy making) what we believe it should be like on the basis of what it is predicted to be. Figure 1On the other hand, the policy maker want the world to be like (b), but in trying to apply A to it, she must argue for D Both groups start out as "realists", in at least a broad ontological sense, but end up as "idealists" in hav ing to reconstruct the desired fit. What results is a type of "reversed intentionality" wher e beliefs become desires, and desires are fitted into the beliefs—a result where social p olicy which "fails" is not so much the fault of the model itself but, ironically, of its s ophistication. The double irony is that a "simple" model, while "fitting" in both senses, may be rejected by both policy makers and social scientists for this very reason. There i s, however, another factor that needs to

PAGE 9

9 of 15be addressed and we turn to this now.III. Glymour's article opening is entitled, "W hat went wrong...?" In effect nothing went wrong! By this we mean the critical dimension in trying to understand the relationship between social science causal-modeling and social policy is how the "variable" of ideological preference enters into th e equation. The importance of "U" (p. 18) in Glymour's critique is not in some covert emp irical variable influencing our model making but rather how model-making is interpreted b y way of ideological preferences and proclivities. It is this "variable" that ultima tely accounts for our constructions of social reality (Searle, 1995). The ideological factor is a world-to-mind problem of fit and does, of course, go in both directions—those of social scientists as well as policy makers. Moreover, while the ideological frameworks of those above may be implic it or explicit, there is yet another "level" or group that comes into play here, namely those affected by the policy. What "voice" these individuals obtain from the policies that are usually imposed on them is a function of how well decisions affecting them are u nderstood and the degree of political action garnered for or against the policy. Knowledg e of how the ideological factor operates is further complicated by the fact that th ere are at least two methodological stances one may take to characterize this process—a variety of the mind-to-world problem. These possibilities are given in Figure 2. Figure 2 The categories of "intervening" and "extr aneous" are meant to be used as they are in social research: an intervening variable as logi cally "fitting" between an independent and dependent variable, and extraneous, as a variab le separately influencing the independent and dependent variables (Nachmias & Nac hmias, 1981). For social research and policy, the intervening variable example sugges ts that an ideological stance is taken (by either social scientist, policy maker, on those directly affected) in such a way that one views it as being compatible with the social po licy. That is, the ideology becomes

PAGE 10

10 of 15the justification for the policy; it is a filter which translates th e findings into acceptable policy decisions. Thus, if one believes, as in the Bell Curve that there are empirical data which clearly support cognitive differences among r acial and ethnic groups, that belief system "intervenes" nicely between the research fin dings (and approach) and the policy subsequently formulated. In the "extraneous variabl e" model, the ideological belief system, let us say of the policy maker, is differen t because it admits of the possibility that the policy maker may reject the research findings and yet maintain the efficac y of a particular policy formulation. For instance, if SES differences are correlated with performance on standardized tests, one may reject t hat they have a hereditary basis and yet may find such results compatible with a "welfar e state liberalism" or "educational progressivism" social policy which would support a variety of educational interventions. Moreover, even if the research indicated that racia l or ethnic differences remained after controlling for SES, one could still argue that the meaning of SES is "interpreted" differently by different groups. Thus "income", for example, may be "equal" between two groups, but one group utilizes income to invest in "cultural capital" than the other, and it is this factor that makes the difference in test scores; again, an interpretation ideological compatible with the categories above. We are not suggesting, in some simplistic fashion, that ideological commitments or preferences are always working as "biasing-filte rs", but only that they are an often overlooked factor in explaining how social policies are formulated, implemented an d evaluated given social science research findings. A dditionally, the ideological proclivities of all directly or indirectly involved in policy making produce a variety of conflations that are often overlooked in discussion s of these issues. Thus, some feminist epistemologists (Tyson, 1998) see their particular agendas, and the social policies flowing from them, as being more (or only) compatib le with "qualitative" research methods—what counts as evidence and what evidence c ounts is ideologically conditioned. In a similar way, entire ideological m ovements such as "constructivism" (Cobb, 1994, Von Glaserfeld, 1995), while not being overtly hostile to empirical methods, do come down on the side of "ethnographic" approaches. How the ideological factor is prominent i n Glymour's thinking can be made clear when he states (p. 28): Sensibly read, much of the data of The Bell Curve as well as other data the book does not report, demands a revived and rationa l liberal welfare state, but instead the book ends with an incoherent, antiegalitarian plea for the program of right-wing Republicans. We now know where Glymour stands ideologi cally, although it is an open question if his political preferences were "caused" directly by the evidence, his reading of it, or irrespective of both. It is probably the middle option of the above. On the same page (p. 28) he berates The Bell Curve's assumptions that the decline of the two-parent family is a factor in such things as low school per formance. He may be correct in this, but his citing of Murray (1984) to the effect that two parent families are in decline in industrialized societies, does not tell us how or w hy the Murray evidence conforms to his own causal-modeling structures. Does the eviden ce in Murray adequately account for all the problems he has cited? If so, some passing mention of it could have been made. Continuing on (pp. 27-29), Glymour makes a huge leap from the fact that Herrnstein and Murray favor some form of privatized schooling to the "fact" that we will end up with "Ku Klux Klan schools, Aryan Nation Sch ools... and more schools of ignorance, separation, and hatred will bloom like s ome evil garden, subsidized by taxes"

PAGE 11

11 of 15(p. 29). Before the quote here he uses the phrase, "The consequences are predictable." How poor Modus Ponens is still abused! Where is the re any evidence that privatization has or will lead to such outcomes. There are severa l other instances in the remaining pages (pp. 29-30) of the article where Glymour does seem to be aware of what evidence counts or why it counts. For example, He favors neither more decentralization or privatiz ation of schools but rather national standards, testing and funding. He favors schools that are always open for children from 1 to 17, that can serve as both centers of learning and safe havens, and says they are the "sane and comparatively economical way to create and sustain a civil society." He favors early intervention efforts as worthy and these can produce lasting effects (contrary Herrnstein and Murray's conclusio ns) if "teachers are paid reasonably." He also says not having his vision of infancy to young adulthood quality schooling will result in higher "opportunit y costs" than the 100 billion per year cost he estimates. He believes "over credentialing" (carried out by co lleges and universities) penalizes the potentially positive effects of vario us compensatory efforts (i.e., affirmative action programs). Finally, Glymour gives us his complete policy vision (p. 30): "Here is an alternative vision, one I claim better warranted by the phenomena Herrnstein and Murray report: nationalized, serious, educational s tandards, tax supported day and night care, a living minimum wage, capital invested in sy stems that enable almost anyone with reasonable training to do a job well." He then conc ludes if policies advocated by such conservatives as Gingrich and Gramm are instituted, we will end up pretty much a nation like Honduras! In brief, the "policy" recommendations Gl ymour is advocating are not substantiated explicitly by any evidence that would count in their favor. And if there were such evidence, he does not tell us of its adeq uacy in causal-modeling terms. Ironically, Glymour's strong support for national s tandards is very close to what Hirsch (1996) has recently, and somewhat persuasively, arg ued for—although we would not equate Hirsch with being politically liberal. But t he most telling phrase, we believe, in all of this is the emphasized passage above; namely that from the same data presented by Herrnstein and Murray, Glymour draws quite differen t conclusions—certainly an interesting variant on the under-determination thes is. Finally, so that we may not be misunderst ood, we agree with almost all (except the Honduras slam!) that Glymour is advocating. We are just saying that you can't get there in the way the Glymour thinks you can. The "is" of causal-modeling processes in the social sciences will not translate in the "Do X" of policy making. If Glymour does not believe this, he ought to consider running for a lo cal school board.NotesOne may notice that the policy-making process invol ves at least these three stages. Each may have an independent or sequential relation to the issue of social science research findings as evidence. 1. Bootstrapping refers to the complexity of trying to adequately determine what evidence and what type of evidence properly applies to the testing of theories. The 2.

PAGE 12

12 of 15"bootstrapping" means that the evidence is first co nnected with the theory and both, then, are used to deduce the hypotheses of th e theory. The general issue is how theories are to be confirmed. Here, how do social science theories result in social policy?ReferencesAchinstein, P. (Ed.) (1983). The concept of evidence Oxford: Oxford University Press. Austin, J.L. (1962). Sense and Sensibilia New York: Oxford University Press. Campbell, D. (1987). "Guidelines for Monitoring the Scientific Competence of Prevention Intervention Centers," Knowledge 8(3): 389-430. Cobb, P. (1994). "Where Is The Mind? Constructivist and Sociocultural Perspectives on Mathematical Development," Educational Researcher (October): 13-20. Denzin, N.K. (1989). The Research Act: A Theoretical Introduction to Soc iological Methods 3rd edition. New Jersey: Prentice-Hall. Fuller, S. (1988). Social Epistemology Bloomington, IN: Indiana University Press. Glymour, Clark. (1980). Theory and Evidence Princeton: Princeton University Press. Glymour, Clark. (1998). "What Went Wrong? Reflectio ns on Science by Observation and The Bell Curve ." Philosophy of Science 65(1):1-32. Green, Thomas. (1980). Predicting the Behavior of The Educational System Syracuse, NY: Syracuse University Press.Harding, S. (1993). "Rethinking Standpoint Epistemo logy: "What is Strong Objectivity?" In L. Alcoff and E. Potter (Eds.), Feminist Epistemologies (pp. 49-82). New York: Routledge.Herrnstein, Richard J., and Charles Murray. (1994). The Bell Curve: Intelligence and Class Structure in American Life New York: Free Press. Hirsch, E.D., Jr. (1996). The Schools We Need and Wy We Don't Have Them New York: Doubleday.Lakatos, I. (1978). "History of Science and Its Rat ional Reconstructions." In I. Lakatos, The Methodology of Scientific Research Programs (pp. 102-138). Cambridge: Cambridge University Press).Lipton, Peter. (1991). Inference to the Best Explanation New York: Routledge. Longino, H. (1993). "Subjects, Power and Knowledge: Description and Prescription in Feminine Philosophies of Science." In L. Alcoff and E. Potter (Eds.), Feminist Epistemologies (pp. 101-120). New York: Routledge. Miller, Richard W. (1987). Fact and Method: Explanation, Confirmation and Real ity in the Natural and Social Sciences Princeton: Princeton University Press.

PAGE 13

13 of 15 Miller, Steven I., and L. Arthur Safer. (1993). "Ev idence, Ethics and Social Policy Dilemmas," Education Policy Analysis Archives 1(9) [Entire issue]. Murray, Charles. (1984). Losing Ground: American Social Policy 1950-1980 New York: Basic Books.Quine, W.V.O. (1969). Ontological Relativity and Other Essays New York: Columbia University Press.Quine, W.V.O. (1960). Word and Object Cambridge, MA: MIT Press. "Review Symposium on Searle," Philosopyy of the Social Sciences (1998) 28 (1 March): 102-121.Searle, J.R. (1998). Mind, Language and Society: Philosophy In the Real World New York: Basic Books.Searle, John R. (1995). The Construction of Social Reality New York: Free Press. Tyson, C.A. (1998). "A Response to ‘Coloring Episte mologies': Are Our Qualitative Research Epistemologies Racially Biased?," Educational Researcher (December): 21-22.Van Glaserfeld, E. (1995). Radical Constructivism: A Way of Knowing and Learni ng Washington, DC: The Falmer Press.About the AuthorsSteven I.MillerProfessor,Department of Leadership,Foundations and Policy Studies Loyola University, ChicagoDr.Miller specializes in policy studies as well as the philosophy of social science.He teaches courses in qualitative research methods,phi losophy of education and social theory applied to education.Marcel FredericksProfessor,Department of Sociology and AnthropologyLoyola University, ChicagoDr.Fredericks specializes in Medical Sociology,Soci al Theory and qualitative research methods. He is Director of the Office of Medical So ciology.Copyright 2000 by the Education Policy Analysis ArchivesThe World Wide Web address for the Education Policy Analysis Archives is epaa.asu.edu General questions about appropriateness of topics o r particular articles may be addressed to the Editor, Gene V Glass, glass@asu.edu or reach him at College of Education, Arizona State University, Tempe, AZ 8 5287-0211.

PAGE 14

14 of 15(602-965-9644). The Commentary Editor is Casey D. C obb: casey.cobb@unh.edu .EPAA Editorial Board Michael W. Apple University of Wisconsin Greg Camilli Rutgers University John Covaleskie Northern Michigan University Alan Davis University of Colorado, Denver Sherman Dorn University of South Florida Mark E. Fetler California Commission on Teacher Credentialing Richard Garlikov hmwkhelp@scott.net Thomas F. Green Syracuse University Alison I. Griffith York University Arlen Gullickson Western Michigan University Ernest R. House University of Colorado Aimee Howley Ohio University Craig B. Howley Appalachia Educational Laboratory William Hunter University of Calgary Daniel Kalls Ume University Benjamin Levin University of Manitoba Thomas Mauhs-Pugh Green Mountain College Dewayne Matthews Western Interstate Commission for HigherEducation William McInerney Purdue University Mary McKeown-Moak MGT of America (Austin, TX) Les McLean University of Toronto Susan Bobbitt Nolen University of Washington Anne L. Pemberton apembert@pen.k12.va.us Hugh G. Petrie SUNY Buffalo Richard C. Richardson New York University Anthony G. Rud Jr. Purdue University Dennis Sayers Ann Leavenworth Centerfor Accelerated Learning Jay D. Scribner University of Texas at Austin Michael Scriven scriven@aol.com Robert E. Stake University of Illinois—UC Robert Stonehill U.S. Department of Education David D. Williams Brigham Young UniversityEPAA Spanish Language Editorial BoardAssociate Editor for Spanish Language Roberto Rodrguez Gmez Universidad Nacional Autnoma de Mxico

PAGE 15

15 of 15 roberto@servidor.unam.mx Adrin Acosta (Mxico) Universidad de Guadalajaraadrianacosta@compuserve.com J. Flix Angulo Rasco (Spain) Universidad de Cdizfelix.angulo@uca.es Teresa Bracho (Mxico) Centro de Investigacin y DocenciaEconmica-CIDEbracho dis1.cide.mx Alejandro Canales (Mxico) Universidad Nacional Autnoma deMxicocanalesa@servidor.unam.mx Ursula Casanova (U.S.A.) Arizona State Universitycasanova@asu.edu Jos Contreras Domingo Universitat de Barcelona Jose.Contreras@doe.d5.ub.es Erwin Epstein (U.S.A.) Loyola University of ChicagoEepstein@luc.edu Josu Gonzlez (U.S.A.) Arizona State Universityjosue@asu.edu Rollin Kent (Mxico)Departamento de InvestigacinEducativa-DIE/CINVESTAVrkent@gemtel.com.mx kentr@data.net.mx Mara Beatriz Luce (Brazil)Universidad Federal de Rio Grande do Sul-UFRGSlucemb@orion.ufrgs.brJavier Mendoza Rojas (Mxico)Universidad Nacional Autnoma deMxicojaviermr@servidor.unam.mxMarcela Mollis (Argentina)Universidad de Buenos Airesmmollis@filo.uba.ar Humberto Muoz Garca (Mxico) Universidad Nacional Autnoma deMxicohumberto@servidor.unam.mxAngel Ignacio Prez Gmez (Spain)Universidad de Mlagaaiperez@uma.es Daniel Schugurensky (Argentina-Canad)OISE/UT, Canadadschugurensky@oise.utoronto.ca Simon Schwartzman (Brazil)Fundao Instituto Brasileiro e Geografiae Estatstica simon@openlink.com.br Jurjo Torres Santom (Spain)Universidad de A Coruajurjo@udc.es Carlos Alberto Torres (U.S.A.)University of California, Los Angelestorres@gseisucla.edu