A peer-reviewed scholarly journal
Editor: Gene V Glass, College of Education, Arizona State University

Copyright is retained by the first or sole author, who grants right of first publication to the EDUCATION POLICY ANALYSIS ARCHIVES. EPAA is a project of the Education Policy Studies Laboratory. Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education.

Volume 12 Number 12    April 5, 2004    ISSN 1068-2341

The Achievement Gap: Should We Rely on SAT Scores to Tell Us Anything About It?

Dale Whittington
Shaker Heights (OH) City School District

Citation: Whittington, D. (2004, April 5). The achievement gap: Should we rely on SAT scores to tell us anything about it? Education Policy Analysis Archives, 12(12). Retrieved [Date] from http://epaa.asu.edu/epaa/v12n12/.

Abstract

Increasing numbers of students taking the SAT have declined to identify their race/ethnicity. I examined the impact of non-respondents on the validity of reported racial/ethnic differences and year-to-year changes in test performance. Using an analysis reported by Wainer (1988) and SAT data from 1996 to 2003, I confirmed Wainer's findings that non-respondents prevent accurate estimations of group differences based on SAT data. I then explored the impact of College Board press release information on news reports about the achievement gap. I found frequent reports of racial/ethnic differences in SAT scores and year-to-year changes in scores but negligible consideration of non-respondents. Press releases and media reports should include information about non-respondents and their impact on the accuracy of reported differences based on race/ethnicity.

Introduction

The term "achievement gap" has taken on particular and important meanings in the past decade.
"The achievement gap" has become a shorthand way to refer to differences in academic achievement between European Americans and members of minority groups who historically have been disenfranchised. For some, the gap refers exclusively to differences between African Americans and European Americans. For others, it refers to a broader group
of students: those who aren't facile in English, the poor, or members of other disadvantaged ethnic groups. Regardless of who is included in one's definition, the literature abounds with descriptions of gaps in student performance on test scores, which are probably the most commonly used indicators of student achievement (Jencks & Phillips, 1998; Herrnstein & Murray, 1994; Lee, 2000; Kober, 2001). Additional evidence of a gap has been found in data related to other indicators of achievement, such as grades, dropout rates, college attendance or earnings (Portes, 1996; Ferguson, 2001; Kane, 1998; Vars & Bowen, 1998; Johnson & Neal, 1998; Roderick & Cambrun, 1999). National organizations of schools dedicated to closing the gap, such as the Minority Student Achievement Network, have been formed to address this phenomenon. The benchmark against which minority achievement is measured is White/European-American performance, and closing the gap usually means increasing minority achievement relative to that of White/European Americans.

Reasons for the gap have been explored; the resulting explanations have been legion. Some have focused on the historical legacy of racism, prejudice and segregation (Spring, 2000; House, 1999). Others have examined factors affecting children's readiness for school and their ability to learn in school, such as the role of poverty and other differences associated with SES (Ferguson, 2001; Barton, 2003), cultural differences in language or in adaptation to school (Mercado, 2001; Portes, 1996; Portes, 1999; Ogbu, 1999; Ogbu, 2003), and family and parenting (Okagaki & Frensch, 1998; McAdoo, 1978). Another arena of inquiry has focused on how well education serves its varied constituents.
Some have looked at inequities in resources and opportunity to learn (Kozol, 1991; Barton, 2003; Mickelson, 2001); others have looked at how schools and educators respond or relate to student diversity (Spring, 2000; Pollock, 2001; Ferguson, 1998; Delpit, 1996; Cohen & Steele, 2002) or how they encourage (or fail to encourage) academic excellence (Ogbu, 2003; Barton, 2003). Finally, many have discussed the qualities of tests or teachers' assessments of students that can contribute to extraneous differences, i.e., error and bias (Airasian, 2001; Gould, 1981; Valencia & Suzuki, 2001).

One common research focus has been change in the gap over time, and change typically is examined using test data. Studies have traced the ups and downs of minority student test performance, usually compared to that of European Americans. These studies employ various types of test scores, most often using NAEP and college admissions scores such as the SAT (Kober, 2001; Lee, 2000; Miller, 2003; Powell & Steelman, 1996). For several decades, NAEP report cards have been issued that describe student achievement in eight subject areas; these report cards not only provide a current snapshot, but also depict longitudinal change. The accountability movement has triggered issuance of school, district and state report cards, and with the passage of the No Child Left Behind act, these report cards must include disaggregated data and report differences in test performance from one year to the next. Finally, each summer, the College Board and ACT issue reports on student performance on college admissions tests, notably the SAT and the ACT Assessment. These reports link student achievement to a variety of variables including race/ethnicity.

NAEP reports are based on the results of samples that were selected to be representative of the national population of students. Data from state achievement tests are based on the performance of all students at a particular grade level.
However, college admissions tests are administered to a self-selected set of students who have an interest in attending college. This self-selection means that the data are not representative of students in the nation or a particular state in any year. Furthermore, the percentage of high school students who intend to apply to college fluctuates from year to year. One consequence is that year-to-year differences in student performance can be based on true changes in student knowledge or ability, on changes in who takes the test, or both.

These sampling issues present serious problems with using college admissions test scores to make inferences about the academic qualifications of students in the nation, to make comparisons among states, or to track year-to-year differences in test performance. ACT and
the College Board are careful to warn against using SAT or ACT test results to make comparisons among states (College Entrance Examination Board, 2002; ACT, 2003). ACT's web site includes information briefs that explain how changes in a school's test performance over time can be influenced by a variety of factors, including who decides to take the test (ACT, 2004). The College Board's guidelines on test use include a warning against using aggregate data to make judgments, not only about states, but also about schools or districts. However, they are less explicit about comparisons among groups of students disaggregated by, among other things, race/ethnicity (College Entrance Examination Board, 2002).

Racial/ethnic comparisons of SAT or ACT data are particularly suspect because each year a substantial portion of students do not report their race/ethnicity on the survey questionnaires that they complete when registering for the tests they are about to take. This is not a recent problem. Fifteen years ago, Howard Wainer (1988) examined the impact of this missing information on the accuracy of racial/ethnic comparisons using data from SATs administered from 1981 to 1985. Among his findings are the following:

• The percentage of students not reporting their race/ethnicity (to be called non-respondents here) was substantial each year (12 to 14%), large enough to be called the "second largest minority group" taking the test.
• Non-respondents, as a group, were not similar to or representative of the students who did report their race/ethnicity.
• While there were variations in the performance of non-respondents from year to year and in their performance relative to the nation, their underperformance was somewhat consistent (22 to 26 points below the national SAT-V means and 21 to 28 points below the national SAT-M mean).
• The error caused by the missing racial/ethnic information for the non-respondents overwhelmed any differences detected each year among disaggregated groups, as well as any changes in test performance gaps over time.

He concluded, "…the nonresponse to ethnic identifiers is sufficient to introduce noise of a greater magnitude than the changes being interpreted as real" (Wainer, p. 778).

Wainer's data suggest that the non-respondent group, while not representative of those who did report their race/ethnicity, were nevertheless comparatively stable in terms of their numbers and test performance. Since the mid-1990s, however, this group has changed considerably. Table 1 reports the numbers of students (in thousands) in each group taking the test in each year covered by this study. It also reports the combined number of non-white respondents. Examination of the table reveals that, with the exception of American Indians, the numbers of students in each non-white group increased since 1996. By contrast, the numbers of white students increased, then decreased. Finally, both the proportion and number of non-respondents more than tripled.

Table 1. Students (in thousands) from Each Racial/Ethnic Group Taking the SAT by Year

Year  Am.     Asian  African  Mexican  Puerto  Other        Other  Total      White  No
      Indian  Am.    Am.      Am.      Rican   Latin/Hisp.         Non-White         Resp.
1996    9      84    107      37       13      32            28     310        681     94
1997   11      89    110      40       13      33            31     327        694    106
1998   10      94    115      41       14      36            36     345        704    123
1999    8      96    119      43       14      38            38     357        718    146
2000    8      97    120      45       14      39            39     360        712    188
2001    8     102    121      47       14      40            39     370        704    202
2002    8     103    123      48       14      42            39     377        699    253
2003    7     101    126      50       15      43            39     381        670    355

Figure 1 reports the change in the percentage of examinees who were non-respondents on the question of racial/ethnic identity. In the mid-nineties, the percentages resembled those in the data reported by Wainer (1988), but this group has steadily increased, so that by 2003 they made up one-fourth of all those taking the test. In the process, they became, to use Wainer's parlance, the largest minority group taking the SAT. It should be noted that throughout this time period, non-respondents were the only "racial/ethnic" group that has been majority male.

Figure 1. Percentage of SAT I Examinees Not Reporting Their Race/Ethnicity

In sum, then, the recent escalation of interest in the achievement gap, continued use of tests to describe that gap, and the dramatic growth of the non-respondent group raise two questions. First, how have students who declined to indicate their race/ethnicity from 1996 to 2003 resembled those reported by Wainer (1988), and how has this affected the validity of his conclusion regarding the amount of noise that overwhelms the use of SAT scores as an index of the achievement gap and changes in that gap? Second, are there practices in releasing and disseminating information about college admissions test scores and racial/ethnic performance on those tests that seem to foster inappropriate use of these scores when describing the achievement gap to the public in general and the education establishment?

To answer the first question, I used data released on the College Board's web site to track the test performance of non-respondents and to replicate the study conducted by Wainer (1988).
To answer the second question, I examined the College Board's press release information from the summer of 2003, each state's data on non-respondents, press releases about the SAT score results for 2003 issued by state departments of education throughout the United States, and articles from selected local and national newspapers about SAT results. I also revisited articles about the achievement gap that have included SAT information. In all cases, I looked for two kinds of information: 1) data disaggregated by race/ethnicity and discussion of those data and 2) information about non-respondents and their impact on our ability to make inferences about racial/ethnic groups or differences between groups.

Question 1

How have students who declined to indicate their race/ethnicity from 1996 to 2003 resembled those reported by Wainer (1988), and how has this affected the validity of his conclusion regarding the amount of noise that overwhelms the use of SAT scores as an index of the achievement gap and changes in that gap?

Method. Using College-Bound Seniors data reported annually by the College Board on its web
site, I determined how many students took the test each year and how they performed on the test. This permitted a comparison with those examinees employed in Wainer's analysis. Then I employed Wainer's procedure for estimating the representation of white students in the non-respondent group and determining the degree to which these non-respondents were affecting the validity of examining test performance disaggregated by race/ethnicity.

For this procedure, Wainer used the numbers of students in each racial/ethnic group and the mean SAT-V and SAT-M scores for each group to develop estimates of the percentage of non-respondents who were white. He employed this approach to test the assumption that the mean scores of the non-respondents from each group were the same as those of students who reported their race/ethnicity. Were this the case, then the reluctance of some to report their race/ethnicity would have little or no effect on the validity of differences between groups that were found for those who did report their race/ethnicity.

For each year, he made two estimates, one based on the math scores and the other based on the verbal scores. These estimates were compared for consistency. The degree of difference between them indicated how much the assumption that non-respondents were like those who responded was violated. It should be noted that this estimating procedure yields only rough estimates.

Wainer estimated the proportion of whites in the non-respondent group using the following formula:

    P(MSATwhite) + (1 - P)(MSATnonwhite) = MSATnonresponse,

where
    P = the proportion of white non-respondents,
    (1 - P) = the proportion of non-white non-respondents,
    MSATwhite = the SAT mean of self-identified white examinees,
    MSATnonwhite = the SAT mean of examinees identifying themselves as members of another group, and
    MSATnonresponse = the SAT mean of non-respondent examinees.
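Solving the formula for P gives P = (MSATnonresponse - MSATnonwhite) / (MSATwhite - MSATnonwhite). A minimal sketch of the calculation follows; the function name is illustrative, and the inputs are the rounded 1996 group means from Tables 2 and 3 (the published estimates were presumably computed from unrounded means, so the results differ slightly from Table 4):

```python
def estimate_white_share(m_white, m_nonwhite, m_nonresponse):
    """Solve P*m_white + (1 - P)*m_nonwhite = m_nonresponse for P,
    the estimated proportion of non-respondents who are white."""
    return (m_nonresponse - m_nonwhite) / (m_white - m_nonwhite)

# Rounded 1996 means: white, total non-white, non-respondent
p_verbal = estimate_white_share(526, 466, 486)  # SAT-Verbal (Table 2)
p_math = estimate_white_share(523, 479, 494)    # SAT-Math (Table 3)

print(f"{p_verbal:.1%}")  # 33.3%, close to the 33.6% in Table 4
print(f"{p_math:.1%}")    # 34.1%, close to the 34.3% in Table 4
```

The small gaps between these figures and the published estimates reflect the rounding of the published scale-score means.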
Like Wainer, I used this formula to estimate the percentage of white non-respondents twice, once using SAT-Verbal scores and once using SAT-Math scores. I repeated this analysis for each year from 1996 to 2003.

Findings. Table 2 reports the SAT I mean scores for the verbal portion of the test for each year from 1996 to 2003. In general, the amount of change for any self-identified group over that time period was negligible. The largest change for self-identifying students was an increase of 12 scale-score points in the mean for Asian American students. In contrast, the change for non-respondents was dramatic, an increase of 24 scale-score points. This change in non-respondents' mean scores also contrasts with the means reported by Wainer. The difference between the highest and lowest non-respondent means in that study was 10 points, and there was no discernible pattern of change.

Table 2. SAT Verbal Mean Scores for Each Year

Year  Am.     Asian  African  Mexican  Puerto  Other        Other  White  No     Total
      Indian  Am.    Am.      Am.      Rican   Latin/Hisp.                Resp.  Non-White
1996  483     496    434      455      452     465          511    526    486    466
1997  475     496    434      451      454     466          512    526    489    466
1998  480     498    434      453      452     461          511    526    490    467
1999  484     498    434      453      455     463          511    527    492    467
2000  482     499    434      453      456     461          508    528    495    467
2001  481     501    433      451      457     460          503    529    497    466
2002  479     501    430      446      455     458          502    527    501    464
2003  480     508    431      448      456     457          501    529    510    466

Another way to look at non-respondents' SAT performance is to examine how they compared to the national mean. Figure 2 shows that the SAT I verbal performance of non-respondents steadily increased, so that by 2003 their mean score slightly exceeded the national mean. In other words, not only has this group grown, but it now includes more high-performing examinees.

Figure 2. Comparison of SAT Verbal Scores: National Mean vs. Mean of Those Not Reporting Their Race/Ethnicity

Examination of the SAT I results for the mathematics portion of the test yields similar results. As can be seen in Table 3, the largest increase in mean scores for any identified racial/ethnic group was that for Asian-American scores, an increase of 17 scale-score points. By contrast, the mean for non-respondents increased by 31 scale-score points. The comparison to the change in the national mean depicted in Figure 3 shows an increase in scores, with the non-respondents outperforming the national group of examinees in 2003. In other words, the non-respondents became an increasingly able group with respect to math from 1996 to 2003, a pattern not evident in the math scores reported for 1980 to 1985 by Wainer (1988).

Table 3. SAT Math Mean Scores for Each Year

Year  Am.     Asian  African  Mexican  Puerto  Other        Other  White  No     Total
      Indian  Am.    Am.      Am.      Rican   Latin/Hisp.                Resp.  Non-White
1996  477     558    422      459      445     466          512    523    494    479
1997  475     560    423      458      447     468          514    526    502    480
1998  483     562    426      460      447     466          514    528    503    483
1999  481     560    422      456      448     464          513    528    505    480
2000  481     565    426      460      451     467          515    530    509    484
2001  479     566    426      458      451     465          512    531    510    484
2002  483     569    427      457      451     464          514    533    516    485
2003  482     575    426      457      453     464          513    534    525    485

Figure 3. Comparison of SAT Math Scores: National Mean vs. Mean of Those Not Reporting Their Race/Ethnicity

These changes in the non-respondents' mean test performance on the verbal and math portions of the SAT I, in addition to the dramatic increase in their numbers, suggest that there have been substantial and steady changes in the non-respondent group since the mid-1990s. These changes not only made the group more variable over time, compared to their counterparts in the early 1980s, but also reinforce the need to re-estimate the composition of this group with respect to race/ethnicity and compare it to the composition of those who do report their race/ethnicity.

Using the same procedure employed by Wainer (1988), I estimated the percentage of whites in the non-respondent group twice for each year covered by this study, once based on verbal scores and once based on mathematics scores. Then I used the difference between the verbal- and mathematics-based estimates to calculate the difference in the number of estimated white examinees in the non-respondent group for each year. Table 4 reports the results.

Table 4. Estimates of the Percent of White Non-Respondents Based on SAT-V and SAT-M, and the Difference in the Estimated Number of White Non-Respondents Based on the Two Estimates

Year  Percent based  Percent based  Difference in Estimated Number
      on SAT-V       on SAT-M       of White Non-Respondents
1996  33.6           34.3              643
1997  38.6           44.7            6,481
1998  39.5           47.4            9,244
1999  41.8           52.2           15,062
2000  46.3           54.7           15,713
2001  49.0           55.3           12,646
2002  58.7           64.7           15,105
2003  70.0           81.7           41,507

The estimated percentage of non-respondents who are White, based on the verbal means, varies much more for this time period (33.6% to 70.0%) than for the period reported by Wainer (43% to 51%). For the mathematics portion of the test, the estimates based on recent means tend to be larger and more varied (34.4% to 81.7%) than those reported by Wainer (1988) (29% to 46%). The differences in the estimated number of white students ranged from very small (632) to more than double the largest estimate reported by Wainer (1988) (41,507 vs. 18,000).

Wainer develops a hypothetical "extreme case" to illustrate the consequence of these differences on the "real" means of African Americans, compared to Whites. By holding the numbers of examinees and the means constant, he estimated that the 1980 mean verbal score for African Americans would have to change about 16 points in order for the percentage of non-white respondents based on the SAT-V score mean to equal the estimated percentage of non-white non-respondents based on the SAT-M score mean. Based on 1984 data, the requisite change would have to be 35 points.

Following Wainer's lead, I found that the smallest difference, based on 1996 data, would be 1.8 scale-score points, slightly larger than the 1-point change reported for 2002 to 2003. For all other years, the differences are much larger, ranging from 17 points in 1998 to 122 points in 2003. Hence, Wainer's conclusion that the error associated with non-respondents dwarfed the size of any year-to-year gains reported is confirmed.

It should be noted, furthermore, that the implications of this finding cannot be applied in a consistent fashion to each of the states.
The map in Figure 4, developed using SYSTAT's boundary map of the continental United States (SYSTAT, 2000), shows the range of percentages of SAT examinees declining to report their race/ethnicity by state. The percentages range from a low of 12.4 percent (North Dakota) to a high of 30.4 percent (Connecticut), and the percentage of non-respondents is moderately correlated (r = .61) with the participation rate reported for each state. Suffice it to say that this analysis would need to be carried out for each state in order to determine the impact non-respondents have on the racial/ethnic findings in that state.

Figure 4. U.S. Map Showing the Percentage of Examinees Declining to Specify Their Race/Ethnicity on the Student Descriptive Questionnaire (Note 1)
Note: Data source: non-respondent data reported by the College Board (2003).

Question 2

Are there practices in releasing and disseminating information about college admissions test scores and racial/ethnic performance on those tests that seem to foster inappropriate use of these scores when describing the achievement gap to the public in general and the education establishment?
The initial source of information about SAT results for 2003 appears in a College Board press release and associated tables, charts and reports that were issued on August 26, 2003 (College Board, 2003). The press release includes text reporting overall changes in performance and participation, differences by gender, changes in performance by members of various ethnic groups, upcoming changes in the SAT Verbal test, and a "snapshot" of test takers. Race/ethnicity is included in statements regarding the diversity of the test-taking population:

    This year saw the largest increase in the number of SAT takers in more than 15 years. Thirty-eight percent of SAT takers are first-generation college-bound students. The proportion of minority students taking the SAT is at an all-time high of 36 percent, up 1 percentage point from last year and 6 points from 10 years ago.

    "Higher SAT scores, a record number of test-takers, and more diversity add up to a brighter picture for American education. While we certainly need to make more progress, the fact remains that we are clearly headed in the right direction," said College Board President Gaston Caperton.

    Thirty-six percent of SAT takers in the class of 2003 were minorities. The number of Mexican American SAT takers increased by 56 percent between 1993 and 2003. SAT takers in the Other Hispanic category increased by 50 percent during the same period.

Race/ethnicity also figured into statements about test performance:

    The overall verbal scores were aided by a strong showing from Asian American SAT takers, whose mean verbal scores were, for the first time, higher than the national mean. Additionally, Mexican American and African American SAT takers improved their average scores by two points and one point, respectively, from a year ago. In fact, virtually all ethnic and racial groups showed stronger performance on their verbal scores compared to a year ago.
Accompanying the text of the press release is a set of 18 graphs and tables depicting, among other things, the diversity of the test-takers (in one table and one graph), the changes in college-bound students over time (four graphs/tables), plus racial and ethnic differences in high school preparation, grades and test performance (eight graphs/tables). Nowhere is there a statement about non-respondents, nor are they included in any table or graphic in the press release. This information does appear in Table 4-1 on page 10 of the College Bound Seniors report, which can be accessed in a box entitled "Archives" next to the press release documentation on the web site.

In other words, while it is possible to find out that many examinees are declining to report their race/ethnicity, this fact is not evident in the materials featured as part of the press release. Furthermore, the language of the press release and its featured tables and graphs makes assertions about minority and non-minority SAT test takers that may not be true. The number of non-respondents simply overwhelms any trends that can be discerned from the information about the respondents.

SAT results are treated as major news by most states' departments of education, as well as the press. On the same day as the College Board's press release, or soon thereafter, 29 departments of education in the various states and the District of Columbia issued press releases about the SAT results for 2003. (Six issued no press releases in 2003, and 16 issued press releases but none about the SAT; the latter either reported nothing on testing (n=3) or reported on other test results such as NAEP, state tests, and/or ACT results (n=13).) Of the 29 press releases about the SAT, 18 included information about race/ethnicity. Sixteen of them included text and/or tables about racial/ethnic test performance. Eight of them included
comments or tables pertaining to the diversity of test-takers.

The number or percentage of students declining to indicate their race/ethnicity appears in five of these press releases and accompanying materials. In three cases, tables include non-respondent numbers. The Texas press release reports an increase in the percent of non-respondents in Texas and the nation.

The most detailed set of information appears in the Florida press release and accompanying report entitled SAT Trends: Florida and the Nation. The press release itself merely comments on the change in scores compared to 2002: "Florida's average verbal score rose two points, due largely to higher scores among Hispanic, African-American and Asian males." The report's summary includes several bulleted items about the racial/ethnic composition of the test takers. Example:

    "Nationwide, the percentage of minority test takers has also been increasing, but at a slower rate than in Florida. In 1988, minorities represented 23% of the test takers nationwide, about the same as in Florida; by 2003 the percentage had increased to 36%, with Asians, whose scores are typically well above average, representing 9.6%, compared to 4.4% for Florida." (p. i)

SAT Trends' introductory summary includes a set of bulleted items entitled "SAT Scores by Racial-Ethnic Groups." Not all of these bulleted items discuss test performance; they also cover diversity of the test-taking group, first language, and income. One item provides the most detailed discussion from any press release about non-respondents and how they have changed over time:

    An increasing percentage of test takers are declining to indicate their race-ethnicity. In 2002, 19% of test takers in both Florida and the U.S. did not do so. The number of non-respondents in 2003 rose to 24% for Florida and 25% for the U.S…. In past years, those who did not provide this information had lower average scores than those who did.
    In 2003 the trend was reversed…. This break in the trend makes any changes in scores by race problematic…. (p. iv)

Despite this warning, however, five of the 21 pages of tables included in this report focus on racial/ethnic differences in one way or another.

A content analysis of a representative sample of newspapers or televised news reports is beyond the scope of this research. However, I did gather articles about the 2003 release of SAT scores from newspapers from various cities in the United States; many of them have a national presence in the sense that they are distributed nationally, or other journalists frequently cite them as sources of news information.

Akron Beacon Journal
Atlanta Journal-Constitution
Boston Globe
Chicago Sun-Times
Christian Science Monitor
Dallas Morning News
Denver Post
Indianapolis Star
Las Vegas Review-Journal
Los Angeles Times
Miami Herald
New York Times
Philadelphia Inquirer
San Francisco Chronicle
Seattle Times
St. Petersburg Times
Wall Street Journal
Washington Post

All 18 of these newspapers published at least one article on the SAT. Twelve of them discussed racial/ethnic performance in one or more of the following ways:

• Differences among groups at the national level
• Differences among groups at the state or local level
• Changes in group members' performance compared to last year or past years
• The alleged presence of bias in the test (cited FairTest)
• Possible causes for differences, such as differences in school funding, SES, or inequalities in education based on race, ethnicity or income

Ten of them discussed the participation of students of various racial/ethnic groups in the College Board testing program. Some articles mentioned the level of participation; others discussed increases in participation. A quotation from the president of the College Board, Gaston Caperton, was often included: "Higher SAT scores, a record number of test takers, and more diversity add to a brighter picture for American education."

News sources designed to serve a national audience, either a lay audience or one of educators, were mixed in their reporting on SAT scores for the class of 2003. The major national news magazines (Time, Newsweek, and U.S. News and World Report) did not report SAT results. However, USA Today included SAT results in its August 27 issue, and of the 15 paragraphs in the article, five discussed the achievement gap. Education Week, in an article dated September 3, 2003, reported on the increase in SAT and ACT examinees and the questionable level of college preparation coursework these students have had. Part of the article focuses on the level of high school courses these students have taken; the last half of the article focuses on racial/ethnic differences in test performance and why such differences exist.
Black Issues in Higher Education published an article on September 11 that, like the Education Week article, discusses differences in college preparation for students from various racial/ethnic groups. The article includes two tables, one based on ACT scores and the other based on SAT scores, that report means for disaggregated groups. Of note, the ACT table includes two additional categories: "Prefer not to respond" and "No Resp." The College Board table does not.

Last, but not least, the United States Department of Education, on August 28, 2003, issued a statement about SAT scores as evidence of the achievement gap and disparities in the education system. In his October 8, 2003, prepared remarks for the High School Leadership Summit, Secretary of Education Rod Paige stated that the disparity continues between the education of disadvantaged or low-income students and that of other students. The speech goes on to use the SAT performance of African American and Hispanic students as the sole illustration of this point.

The only news source that reported on the presence of students who declined to indicate their race/ethnicity was The Miami Herald. Its August 29, 2003, article on the SAT scores of Broward County School District high school seniors states: "Results were broken down by race and ethnicity, but it was difficult to determine how Broward was doing in narrowing the achievement gap between black and Hispanic and white and Asian students. The reason: Almost 1 in 4 students chose not to list their race or ethnicity. The number of students who didn't answer increased by almost 100 percent." The source of this insight is unclear. The article refers to Katherine Blasik, Broward's director of research and evaluation. It also reports state results, indicating, perhaps, familiarity with
Florida's report on SAT trends. Nevertheless, the readers of the Miami Herald were cautioned not to look at the SAT as a source of information about the achievement gap.

Discussion

Taken together, the College Board's press release, those of various states, and news reports from large urban and national news sources suggest that the information the public reads about the SAT is grounded in what the College Board reports in its press release about SAT results. If one were to depend on the information provided in the press release without referring to the College Bound Seniors tables, several incorrect inferences could be made:

That the magnitude of "achievement gaps" among racial/ethnic groups can be determined from SAT data
That changes in the SAT performance of students from various racial/ethnic groups can be determined
That the diversity of those taking the SAT can be determined and tracked over time
That racial/ethnic differences in courses taken, income, first-generation college enrollment, and high school GPA can all be determined from College Board information

Wainer's results make it clear that even when the proportions of non-respondents were smaller, there were enough of them to create noise that overwhelms any evidence of change over time in test scores reported for students from various racial/ethnic groups, or any evidence of group differences based on SAT results. The results of this study not only confirm Wainer's results but also show that the noise generated by recent groups of non-respondents has increased from overwhelming to deafening.

While Wainer does not discuss inferences about racial/ethnic differences of other kinds, the magnitude of the non-respondent rate and the change in this group's test performance suggest that the other inferences associated with racial/ethnic differences reported in the College Board's press release should be treated as suspect as well.
The graphs in Figure 5 illustrate why this should be the case. The first two breakdowns report test-takers based on data reported in the College Board's press release and College Bound Seniors for the class of 2003; the third and fourth breakdowns make use of the estimates of white non-respondents from Table 4.

The lower estimate, based on SAT-V 2003 score means, yields a percentage of white test-takers in Breakdown 3 that exceeds by one percent that reported in the College Board's press release with non-respondents removed. Coincidentally, the "increased diversity" from 2002 to 2003 was also one percent. Note that the remaining percentage of non-respondents (those who are estimated to be non-white) is larger than any of the minority groups except African Americans.

The estimate based on math score means yields a percentage of white test-takers in Breakdown 4 that more substantially exceeds that reported in the press release. Indeed, it all but negates the increase in minority representation among test-takers reported in the press release (from 30% minority in 1993 to 36% in 2003). The proportion of estimated minority non-respondents is larger than any of the groups of minority respondents except African Americans and Asian Americans.
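The reallocation arithmetic behind the estimated breakdowns can be sketched as follows. This is a minimal illustration only: the counts and the assumed fraction of non-respondents who are white are hypothetical, not the article's actual figures.

```python
# Illustrative sketch (hypothetical numbers, not the article's data):
# how assigning an estimated share of non-respondents to the white
# group changes the apparent white percentage of all test-takers.

def adjusted_white_share(white, minority, nonresp, est_white_frac):
    """Return the white share of all test-takers after an estimated
    fraction of non-respondents is reallocated to the white group."""
    total = white + minority + nonresp
    est_white = white + est_white_frac * nonresp
    return est_white / total

# Hypothetical cohort: 60% reported white, 30% reported minority,
# 10% declined to report race/ethnicity.
share_low = adjusted_white_share(600_000, 300_000, 100_000, 0.70)
share_high = adjusted_white_share(600_000, 300_000, 100_000, 0.90)
print(round(share_low, 3), round(share_high, 3))  # 0.67 0.69
```

Even in this toy example, a plausible range of assumptions about non-respondents shifts the white share by two percentage points, which is larger than the one-percent "increase in diversity" the press release highlighted.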
[Figure 5 presents four pie charts, one for each basis for breaking down racial/ethnic groups:
Breakdown 1: based on the breakdown reported in the College Board press release.
Breakdown 2: based on data as reported in College Bound Seniors, with actual non-respondents included.
Breakdown 3: estimated breakdown based on the SAT-V analysis.
Breakdown 4: estimated breakdown based on the SAT-M analysis.]
Figure 5. Breakdowns of the Race/Ethnicity of SAT-I Test Takers from the Class of 2003 Based on Different Estimates (Note 2)
These results suggest that non-respondents prevent valid use of data from College Bound Seniors to describe majority and minority test-takers with respect to:

The size of the achievement gap
Changes in the magnitude of the achievement gap
The diversity of those taking the SAT
Differences in GPA among racial/ethnic groups
First-generation college attendance
High school preparation

Recommendations

The combined results from this study suggest several further steps.

First, we need to find out more about these non-respondents. Who are they? Why are they declining to indicate their race/ethnicity? Several hypotheses have surfaced. One idea, derived from the growth in resistance to affirmative action, is that the non-respondents are white males who see no benefit in reporting their race/ethnicity. It is documented that non-respondents are majority male, and both sets of estimates appearing in Table 4 suggest a dramatic increase in white non-respondents.

A second suggestion is that students from various groups are resisting reporting their race/ethnicity as a protest against the use of a category they regard as arbitrary and grounded in mistaken notions about race.

A third suggestion is based on findings of Claude Steele and others pertaining to stereotype threat (Aronson, 2002; Steele & Aronson, 1998). The notion that reporting one's race during a test affects test performance may be filtering its way into the middle-class, professional African-American community. Hence, increasing numbers of high-achieving minority students may not be revealing their race on test applications.

In all probability, there are multiple reasons, as might be inferred from a brief analysis of the 2003 data from my district.
It was possible to identify the race/ethnicity of all students, match that to their SAT scores, and then compare this racial/ethnic and test score data to that from the College Board's College Bound Seniors 2003 report for Shaker Heights High School. The data from the College Bound Seniors report for the high school reveal a 23 percent non-response rate in 2003, which is similar to our state and the national data in terms of percentage. Our non-respondents, unlike their national and state counterparts, are majority female. Compared to those reporting their race/ethnicity, a disproportionate number of our non-respondents are White/European-American: 82 percent of them are White; 18 percent are Black/African-American. By contrast, 36 percent of our high school's SAT test takers in the class of 2003 are Black/African-American; 59 percent of them are White/European-American. The non-respondent verbal mean was higher than the overall mean, and both groups of non-respondents had higher SAT-V scores than their counterparts. The math scores, on the other hand, present a contradiction: while the White non-respondents had higher SAT-M scores, their Black/African-American counterparts had lower SAT-M scores.

Second, while descriptions of racial/ethnic differences and score changes over time have always been questionable, the sheer size of the non-respondent group makes use of SAT data for such purposes irresponsible. At the very least, test sponsors and state departments of education need to be clear in their press releases about the presence of non-respondents and their impact on one's ability to make inferences about students taking college admissions tests, in the same way they warn about other misuses of test results. The College Board and ACT provide very clear cautions against using their data to make inferences about state rankings, for example, and these cautions seem to have taken hold. Education departments' press
releases and newspaper articles often make adjustments that seem to heed this caution. For example, they report data only for other states with similar participation rates, or they warn readers that participation rates and mean scores are strongly related to each other. Furthermore, test companies' press releases themselves need to provide clear information about non-respondents: the number of them and their known characteristics compared to respondents (e.g., test performance, gender).

Finally, those who have used these data in news articles, policy papers, or published research need to reconsider their use. The difference in the opportunities and achievement of children from various racial/ethnic groups is perhaps the most important issue facing education today. We need to examine and describe these differences, but we need to do it with data that we can count on to provide an accurate picture. Alternative data sources such as state data and NAEP results can do the job much more effectively.

References

Aronson, J. (2002). Stereotype threat: Contending and coping with unnerving expectations. In J. Aronson (Ed.), Improving academic achievement (pp. 279-301). San Diego, CA: Academic Press.

ACT. (2004). How significant are changes in my school's average ACT composite scores over time? ACT research information brief 98-3. Retrieved January 9, 2004, from http://www.act.org/research/briefs/98-3.html

ACT. (2004). Monitoring changes in high school ACT composite scores over time. ACT research information brief 2000-2. Retrieved January 9, 2004, from http://www.act.org/research/briefs/2000-2.html

ACT. (2003). ACT national and state scores. Retrieved January 9, 2004, from http://www.act.org/news/data/03/index.html

Airasian, P. (2001). Classroom assessment (4th ed.). Boston: McGraw Hill.

Barton, P.E. (2003). Parsing the achievement gap (ETS Policy Information Center Report). Princeton, NJ: Educational Testing Service.

Cohen, G.L. & Steele, C.M.
(2002). A barrier of mistrust: How negative stereotypes affect cross-race mentoring. In J. Aronson (Ed.), Improving academic achievement (pp. 303-327). San Diego, CA: Academic Press.

College Board. (2003). SAT verbal and math scores up significantly as a record-breaking number of students take the test. Retrieved January 9, 2004, from http://www.collegeboard.com/press/article/0,3183,26858,00.html

College Entrance Examination Board. (2002). Guidelines on the uses of College Board test scores and related data. New York: College Entrance Examination Board.

Ferguson, R.F. (2001). A diagnostic analysis of black-white GPA disparities in Shaker Heights, Ohio. In D. Ravitch (Ed.), Brookings papers on education policy, 2001 (pp. 347-414). Washington, DC: Brookings Institution Press.

Ferguson, R.F. (1998). Teachers' perceptions and expectations and the black-white test score gap. In C. Jencks & M. Phillips (Eds.), The black-white test score gap (pp. 318-374). Washington, DC: Brookings Institution Press.

Gould, S.J. (1981). The mismeasure of man. New York: W.W. Norton & Company.

Herrnstein, R.J. & Murray, C. (1994). The bell curve: Intelligence and class structure in American life. New York: Free Press.

House, E.R. (1999). Race and policy. Education Policy Analysis Archives, 7(16). Retrieved August 11, 2000, from http://epaa.asu.edu/epaa/v7n16.html

Jencks, C. & Phillips, M. (1998). The black-white test score gap: An introduction. In C. Jencks & M. Phillips (Eds.), The black-white test score gap (pp. 1-51). Washington, DC: Brookings Institution Press.

Johnson, W.R. & Neal, D. (1998). Basic skills and the black-white earnings gap. In C. Jencks & M. Phillips (Eds.), The black-white test score gap (pp. 480-497). Washington, DC: Brookings Institution Press.

Kane, T.J. (1998). Racial and ethnic preferences in college admissions. In C. Jencks & M. Phillips (Eds.), The black-white test score gap (pp. 431-456). Washington, DC: Brookings Institution Press.

Kober, N.
(2001). It takes more than testing: A report of the Center on Education Policy. Washington, DC: Center on Education Policy.

Lee, J. (2002). Racial and ethnic achievement gap trends: Reversing the progress toward equity? Educational Researcher, 31(1), 3-12.
McAdoo, H.P. (1978). Factors related to stability in upwardly mobile black families. Journal of Marriage and the Family, 40(November), 761-776.

Mercado, C.I. (2001). The learner: "Race," "ethnicity," and linguistic differences. In V. Richardson (Ed.), Handbook of research on teaching (pp. 668-694). Washington, DC: American Educational Research Association.

Mickelson, R.A. (2001). Subverting Swann: First- and second-generation segregation in the Charlotte-Mecklenburg schools. American Educational Research Journal, 38(1), 215-252.

Miller, G.E. (2003). Analyzing the minority gap in achievement scores: Issues for states and federal government. Educational Measurement: Issues and Practice, 22(3), 30-36.

Ogbu, J.U. (1999). Beyond language: Ebonics, proper English, and identity in a black-American speech community. American Educational Research Journal, 36(2), 147-184.

Ogbu, J.U. (2003). Black American students in an affluent suburb: A study of academic disengagement. Mahwah, NJ: Lawrence Erlbaum Associates.

Okagaki, L. & Frensch, P.A. (1998). Parenting and children's school achievement: A multiethnic perspective. American Educational Research Journal, 35(1), 124-144.

Pollock, M. (2001). How the question we ask most about race in education is the very question we most suppress. Educational Researcher, 30(9), 2-12.

Portes, P. (1996). Ethnicity and culture in educational psychology. In D. Berliner & R. Calfee (Eds.), Handbook of educational psychology (pp. 331-357). New York: Simon & Schuster Macmillan.

Portes, P. (1999). Social and psychological factors in the academic achievement of children of immigrants: A cultural history puzzle. American Educational Research Journal, 36(3), 489-507.

Powell, B. & Steelman, L.C. (1996). Bewitched, bothered, and bewildering: The use and misuse of state SAT and ACT scores. Harvard Educational Review, 66(1), 27-59.

Roderick, M. & Camburn, E. (1999). Risk and recovery from course failure in the early years of high school.
American Educational Research Journal, 36(2), 303-343.

Spring, J. (2000). American education (9th ed.). Boston: McGraw Hill.

Steele, C.M. & Aronson, J. (1998). Stereotype threat and the test performance of academically successful African Americans. In C. Jencks & M. Phillips (Eds.), The black-white test score gap (pp. 410-427). Washington, DC: Brookings Institution Press.

SYSTAT. (2000). SYSTAT 10 graphics. Chicago: SYSTAT.

Valencia, R.R. & Suzuki, L.A. (2001). Intelligence testing and minority students. Thousand Oaks, CA: Sage Publications.

Vars, F.E. & Bowen, W.G. (1998). Scholastic Aptitude Test scores, race, and academic performance in selective colleges and universities. In C. Jencks & M. Phillips (Eds.), The black-white test score gap (pp. 457-479). Washington, DC: Brookings Institution Press.

Wainer, H. (1988). How accurately can we assess changes in minority performance on the SAT? American Psychologist, 43(10), 774-778.

Notes

1. Percentages of non-respondents from Alaska, Hawaii, and the District of Columbia were 26.6%, 20.2%, and 28.5%, respectively.

2. Based on aggregated data reported by the U.S. Census Bureau for the 2000 census, whites made up 63% of 18-year-olds in 2000. The second largest group was Hispanic (16.1%), followed by African Americans (14.1%). This age group was majority male (51.3%) in 2000.

About the Author

Dale Whittington, Ph.D.
Director of Research & Evaluation
Shaker Heights City School District
15600 Parkland Dr.
Shaker Heights, OH 44120
216-295-4363
216-295-4340 (fax)
Email: firstname.lastname@example.org

Dale Whittington is Director of Research and Evaluation in the Shaker Heights (Ohio) City School District. In addition to managing the district's testing program, she conducts a variety of studies on the schools, students, and programs. She was previously Associate Professor in the Department of Education and Allied Studies at John Carroll University, where she taught courses on research methods, testing, and educational psychology.

The World Wide Web address for the Education Policy Analysis Archives is epaa.asu.edu
EPAA is published by the Education Policy Studies Laboratory, Arizona State University.