[Catalog record]
Educational Policy Analysis Archives. Vol. 8, no. 53 (December 8, 2000). ISSN 1068-2341.
Tempe, Ariz.: Arizona State University; Tampa, Fla.: University of South Florida.
Use of logic in educational research and policy making / Rick Garlikov.
Education Policy Analysis Archives
Volume 8, Number 53, December 8, 2000. ISSN 1068-2341.

A peer-reviewed scholarly electronic journal. Editor: Gene V Glass, College of Education, Arizona State University. Copyright 2000, the EDUCATION POLICY ANALYSIS ARCHIVES. Permission is hereby granted to copy any article if EPAA is credited and copies are not sold. Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education.

The Use of Logic in Educational Research and Policy Making

Rick Garlikov
Birmingham, Alabama (U.S.A.)

Abstract

While educational research is an empirical enterprise, there is significant place in it for logical reasoning and anecdotal evidence. An analysis of the article by Scott C. Bauer, "Should Achievement Tests be Used to Judge School Quality?" (Education Policy Analysis Archives, 8(46). Available: http://epaa.asu.edu/epaa/v8n46.html) is used to illustrate this point.

I want to use the following to help demonstrate the importance of logic, philosophy (particularly conceptual analysis), and insights based on anecdotal evidence, for educational research and policy making. In "Should Achievement Tests be Used to Judge School Quality?" (EPAA Vol. 8, Number 46) Scott C. Bauer stated the following:

At the 1998 Annual Meeting of the Mid-South Educational Research Association, W. James Popham raised the following question: Is it appropriate to use norm-referenced tests to evaluate instructional
quality? Specifically, he challenged participants to consider whether norm-referenced tests measure knowledge that is taught and learned in schools. Popham then invited researchers to participate with him in a study to answer the question: Should student scores on standardized achievement tests be used to evaluate instructional quality in local schools? In a subsequent paper, Popham (1999) laid out the basic argument that frames this study. While standardized achievement tests are useful tools to provide evidence about a specific students' mastery of knowledge and skills in certain content domains, "Employing standardized achievement tests to ascertain educational quality is like measuring temperature with a tablespoon" (p. 10). There are several difficulties with using aggregate measures from norm-referenced tests to judge the performance of a school. [Two of these are described, which I omit here.]

[Third,] scores on standardized achievement tests may not be attributable to the instructional quality of a school. Student performance may be caused by any number of factors, including what's taught in schools, a student's native intelligence, and out-of-school learning opportunities that are heavily influenced by a students' home environment. Popham terms this last issue the problem of "confounded causality."

Here we report the results of one of several local studies designed to provide empirical evidence to answer the question of whether student scores on standardized achievement tests represent reasonable measures of instructional quality.

This last sentence is only true if the term "reasonable" is understood to mean something like "credible to people who think about the issue in certain ways" or "credible to reasonable people who think about the issue in certain ways."
It has to be understood in a way not dissimilar from the legal principle of considering "what a reasonable person would have believed or done in a similar situation" in order to assess the guilt or innocence of a defendant. This is because the study only actually surveys what people believe in regard to whether students who gave correct answers to individual standardized test questions were more likely to have been taught the information necessary to answer those test items in school or were more likely to have learned it elsewhere. The study did not measure whether students did learn the information in school or whether they learned it elsewhere, but whether teachers and parents thought students learned the information in school or learned it elsewhere.

Consider the following paragraph in Bauer's article:

The notion that aggregate scores on standardized tests should serve as an indicator of school quality relies on an assumption of causality. The underlying logic is that the scores are predominantly caused by something the school does or has some control over. For this assumption to hold, at a minimum we must be willing to believe that student performance on standardized tests is related to school quality, that the tests measure the skills and abilities stressed in school programs, and that there are no antecedent factors that might otherwise explain aggregate student performance on the tests. If the
data presented here are credible, the soundness of this assumption must be questioned. On average about half of the items on the rated test suffer from "confounded causality" on at least one of these criteria.

There is an ambiguity in the word "should", as he uses it, in the first sentence. The two meanings are (1) "should" in the political sense of whether policy ought to rely on standardized test scores to judge schools because people accept or believe that test items show direct causal correlations between the quality of school instruction and student test scores and thus, by extension, accept test scores as a measure of the efficacy of what is taught and learned in schools, and (2) whether test items actually show direct causal correlations between school instruction and student test scores and thus serve as an actual measure of what is taught and learned in schools. In the second sense it is not true that

For this assumption to hold [i.e., the assumption that scores are predominantly caused by something the school does or has some control over], at a minimum we must be willing to believe that student performance on standardized tests is related to school quality...

For the assumption to hold, what is necessary is that student performance on standardized tests actually is related to school quality. Our beliefs about the accuracy of that statement have nothing to do with whether the assumption holds or not. We can believe it all we want, or disbelieve it all we want, and neither that belief nor that disbelief will make it true or false. The proper conclusion is not that nearly half the items rated suffered from confounded causality, but that teachers and parents believed nearly half the items suffered from confounded causality.
The test for seeing how much, if anything, of what is measured on standardized tests is actually taught in schools would require a very different kind of study: one which attempts either to find out precisely where students learned the information which they used to answer test items correctly, or at a minimum to find out whether students knew the information before it was taught in school or not, using some sort of pre-test/instruction/post-test differentiation methodology.

However, this latter would still only account for students learning the information prior to instruction. It would not account for students' learning the information during or after instruction, though not because of the instruction (alone). For example, it is a fairly common phenomenon for teachers to "teach" a principle that students do not understand, and that a parent or someone else then explains to the student in a way that the student comprehends it. Now it may be that the parent would not have done this without the teacher's introduction, but it is still then a joint teaching effort, not a result of school instruction alone. And I suspect there is some evidence that in school districts where there is not such parent- or mentor-child interaction about school work, students do not learn it as well nor test as well. I also suspect that success on achievement tests, and academic or "grading" success in school in general, comes in large part from parent or mentor interaction with school-initiated subject matter. The same argument could be given with regard to students' learning on their own (through reflection or additional study from other sources) material that was introduced in the classroom but that was not learned in the classroom nor from what the teacher (or textbook) said or did.
The point, however, is that where and when students have learned something is a social science kind of question, as is the question of where and when what proportions of students learn a particular item in school or elsewhere. And it is not dependent upon
where or when parents or teachers or anyone thinks students have learned something, unless the parent or teacher knows for sure. (The problem for the social scientists, however, in this latter case is ascertaining whether the parent does know for sure or not, because even if the parent is correct and does know, it is difficult for someone else to know the parent's claim is correct, particularly if the researcher or other third party was not present during the process.)

But now consider Popham's (or Bauer's, I can't tell which) claim: "Finally, scores on standardized achievement tests may not be attributable to the instructional quality of a school. Student performance may be caused by any number of factors, including what's taught in schools, a student's native intelligence, and out-of-school learning opportunities that are heavily influenced by a students' home environment." If that is true, as it certainly seems to be since students do learn things, or figure out things, on their own or from others outside of school (things which sometimes are tested on standardized tests) that is alone sufficient to show that test scores cannot be reasonably attributable to instructional quality in schools alone. For if there are possible and reasonably likely other "confounding" or contributing causes of student success on standardized tests, then logic alone demands that test scores cannot legitimately be used to assess the quality of school instruction. Surveys about parent or teacher beliefs regarding this matter are unnecessary and logically irrelevant.

But that does not make this survey nor this paper unimportant. There are two things involved that are important. The first is that something may be politically popular even if it is not legitimate.
So a survey of whether people think that standardized test scores reflect the quality of instruction in schools may be important to know for determining public policies (and news reporting policies) about using and/or reporting such assessments. If it turned out that the public did not have as much confidence in or concern about this form of assessment as legislators and newspapers seem to think they have, it might be politically feasible to get rid of these tests in a way that reasoning alone will not permit, because what is thought important to report in the news and what is thought necessary to legislate are often more dependent on what is believed to be desired by the public than on what reason might show is desirable or what evidence might show is false about public perceptions.

Second, this survey is interesting and useful as a teaching tool for the public, and in that regard is very important. For what Bauer has done is to show that people who look at individual test items are not confident about the significance of individual test item scores, and that therefore they cannot be confident about the meaning or significance of aggregate scores, and that, by extension, no one can be. It is one thing for someone to believe tests are significant without looking at and reflecting on the individual questions and the significance of each of them; it is quite another to believe that test scores have significant meaning after examining the individual test questions and their likely significance. The survey was a way of getting people to do such an examination and to show them, and others, what happened when they did. For many people that is more convincing than logic alone, even if it should not logically be necessary.

I point out the above using the Bauer study because that study is not unique in educational research in regard to trying to demonstrate what is essentially a logical matter by use of empirical research.
Further, it is not unique in educational research for researchers to draw logically unwarranted or unjustified conclusions from perfectly good data that they have collected. The point is that while logic and philosophy or conceptual analysis alone are often insufficient to provide knowledge about educational phenomena, they are both necessary in order to understand the significance of such data.
Moreover, they often show what data to seek. When Popham, or anyone, first realized that there logically could be confounded causality in regard to students' answering standardized test items correctly, that realization alone showed there was a problem that needed to be studied empirically in order to determine whether the logical possibility was the actual or likely or even systematic or overwhelming occurrence. But all too often in educational research and in educational policy-making, it is "empirical" research that is held to be all that is important, not logic nor anecdotal evidence nor insight based on anecdotal evidence. That seems to me to be a mistake, because while logic and apparent single occurrences alone do not show what is happening systematically or statistically, they point out matters that either need to be studied empirically or they point to conceptual problems that may have to be addressed before empirical studies can be done. In some cases they also point out the actual futility of relying on a practice or policy that intuitively seems to be effective and that may even be traditional, such as determining the efficacy of schools by comparing (standardized) test scores. There are far more logical and conceptual matters involved in education and in educational research than is commonly believed or accepted. And I think it is a grave mistake to think that empirical studies alone are the proper or necessary way to do educational research and the only proper means to guide educational policy.

Reference

Bauer, S.C. (2000). Should Achievement Tests be Used to Judge School Quality? Education Policy Analysis Archives, 8(46). Available: http://epaa.asu.edu/epaa/v8n46.html

About the Author

Rick Garlikov is a philosopher and photographer who resides in Birmingham, Alabama. He holds a graduate degree in Philosophy from the University of Michigan.
He is the author of The Meaning of Love, of Making the Most of Your University Courses: What to Expect Academically at College, and of Teaching About Thinking; Thinking About Teaching: Why Teaching "Facts" Is Not Enough, an online book of essays about teaching for reasoning and understanding. Rick conducts introductory philosophy instruction via e-mail and offers a free "homework help" service that tries to help students (or parents) understand in greater depth the material with which they are having difficulty, so that they can then work through their assignments (or teach their children) on their own. These materials and services are available at http://www.garlikov.com.

Copyright 2000 by the Education Policy Analysis Archives

The World Wide Web address for the Education Policy Analysis Archives is epaa.asu.edu

General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, email@example.com, or reach him at College of Education, Arizona State University, Tempe, AZ 85287-0211 (602-965-9644). The Commentary Editor is Casey D. Cobb: firstname.lastname@example.org
EPAA Editorial Board

Michael W. Apple, University of Wisconsin
Greg Camilli, Rutgers University
John Covaleskie, Northern Michigan University
Alan Davis, University of Colorado, Denver
Sherman Dorn, University of South Florida
Mark E. Fetler, California Commission on Teacher Credentialing
Richard Garlikov, email@example.com
Thomas F. Green, Syracuse University
Alison I. Griffith, York University
Arlen Gullickson, Western Michigan University
Ernest R. House, University of Colorado
Aimee Howley, Ohio University
Craig B. Howley, Appalachia Educational Laboratory
William Hunter, University of Calgary
Daniel Kallós, Umeå University
Benjamin Levin, University of Manitoba
Thomas Mauhs-Pugh, Green Mountain College
Dewayne Matthews, Western Interstate Commission for Higher Education
William McInerney, Purdue University
Mary McKeown-Moak, MGT of America (Austin, TX)
Les McLean, University of Toronto
Susan Bobbitt Nolen, University of Washington
Anne L. Pemberton, firstname.lastname@example.org
Hugh G. Petrie, SUNY Buffalo
Richard C. Richardson, New York University
Anthony G. Rud Jr., Purdue University
Dennis Sayers, Ann Leavenworth Center for Accelerated Learning
Jay D. Scribner, University of Texas at Austin
Michael Scriven, email@example.com
Robert E. Stake, University of Illinois-UC
Robert Stonehill, U.S. Department of Education
David D. Williams, Brigham Young University

EPAA Spanish Language Editorial Board

Associate Editor for Spanish Language: Roberto Rodríguez Gómez, Universidad Nacional Autónoma de México, firstname.lastname@example.org
Adrián Acosta (México), Universidad de Guadalajara, adrianacosta@compuserve.com
J. Félix Angulo Rasco (Spain), Universidad de Cádiz, felix.email@example.com
Teresa Bracho (México), Centro de Investigación y Docencia Económica-CIDE, bracho@dis1.cide.mx
Alejandro Canales (México), Universidad Nacional Autónoma de México, canalesa@servidor.unam.mx
Ursula Casanova (U.S.A.), Arizona State University, casanova@asu.edu
José Contreras Domingo, Universitat de Barcelona, Jose.Contreras@doe.d5.ub.es
Erwin Epstein (U.S.A.), Loyola University of Chicago, Eepstein@luc.edu
Josué González (U.S.A.), Arizona State University, josue@asu.edu
Rollin Kent (México), Departamento de Investigación Educativa-DIE/CINVESTAV, rkent@gemtel.com.mx, firstname.lastname@example.org
María Beatriz Luce (Brazil), Universidad Federal de Rio Grande do Sul-UFRGS, lucemb@orion.ufrgs.br
Javier Mendoza Rojas (México), Universidad Nacional Autónoma de México, javiermr@servidor.unam.mx
Marcela Mollis (Argentina), Universidad de Buenos Aires, mmollis@filo.uba.ar
Humberto Muñoz García (México), Universidad Nacional Autónoma de México, humberto@servidor.unam.mx
Angel Ignacio Pérez Gómez (Spain), Universidad de Málaga, aiperez@uma.es
Daniel Schugurensky (Argentina-Canadá), OISE/UT, Canada, dschugurensky@oise.utoronto.ca
Simon Schwartzman (Brazil), Fundação Instituto Brasileiro e Geografia e Estatística, email@example.com
Jurjo Torres Santomé (Spain), Universidad de A Coruña, jurjo@udc.es
Carlos Alberto Torres (U.S.A.), University of California, Los Angeles, torres@gseisucla.edu