<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<record xmlns="http://www.loc.gov/MARC21/slim"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd">
  <leader>nam a22 u 4500</leader>
  <controlfield tag="008">c20009999azu 000 0 eng d</controlfield>
  <datafield tag="024" ind1="8" ind2=" ">
    <subfield code="a">E11-00173</subfield>
  </datafield>
  <datafield tag="245" ind1="0" ind2="0">
    <subfield code="a">Educational policy analysis archives.</subfield>
    <subfield code="n">Vol. 8, no. 29 (June 24, 2000).</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="a">Tempe, Ariz. :</subfield>
    <subfield code="b">Arizona State University ;</subfield>
    <subfield code="a">Tampa, Fla. :</subfield>
    <subfield code="b">University of South Florida.</subfield>
    <subfield code="c">June 24, 2000</subfield>
  </datafield>
  <datafield tag="505" ind1="0" ind2=" ">
    <subfield code="a">Whither Advanced Placement? / William Lichten.</subfield>
  </datafield>
  <datafield tag="710" ind1="2" ind2=" ">
    <subfield code="a">Arizona State University.</subfield>
  </datafield>
  <datafield tag="710" ind1="2" ind2=" ">
    <subfield code="a">University of South Florida.</subfield>
  </datafield>
  <datafield tag="773" ind1="0" ind2=" ">
    <subfield code="t">Education Policy Analysis Archives (EPAA)</subfield>
  </datafield>
</record>
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<mods:mods xmlns:mods="http://www.loc.gov/mods/v3"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.loc.gov/mods/v3 mods-3-1.xsd">
  <mods:relatedItem type="host">
    <mods:identifier type="issn">1068-2341</mods:identifier>
    <mods:part>
      <mods:detail type="volume"><mods:number>8</mods:number></mods:detail>
      <mods:detail type="issue"><mods:number>29</mods:number></mods:detail>
      <mods:detail type="year">
        <mods:caption>Year</mods:caption>
        <mods:number>2000</mods:number>
      </mods:detail>
      <mods:detail type="month">
        <mods:caption>June</mods:caption>
        <mods:number>6</mods:number>
      </mods:detail>
      <mods:detail type="day">
        <mods:caption>24</mods:caption>
        <mods:number>24</mods:number>
      </mods:detail>
    </mods:part>
    <mods:originInfo>
      <mods:dateIssued encoding="iso8601">2000-06-24</mods:dateIssued>
    </mods:originInfo>
  </mods:relatedItem>
</mods:mods>
Education Policy Analysis Archives
Volume 8 Number 29
June 24, 2000
ISSN 1068-2341

A peer-reviewed scholarly electronic journal. Editor: Gene V Glass, College of Education, Arizona State University. Copyright 2000, the EDUCATION POLICY ANALYSIS ARCHIVES. Permission is hereby granted to copy any article if EPAA is credited and copies are not sold. Articles appearing in EPAA are abstracted in the Current Index to Journals in Education by the ERIC Clearinghouse on Assessment and Evaluation and are permanently archived in Resources in Education.

Whither Advanced Placement?

William Lichten
Yale University

Abstract

This is a review of the Advanced Placement (AP) Program. In disagreement with claims of the College Board, there is firm evidence that the average test performance level has dropped. The College Board's scale and claims for AP qualification disagree seriously with college standards. A majority of tests taken do not qualify. It appears that "advanced placement" is coming closer to "placement." This article recommends that the College Board's policy of concentrating on numbers of participants should be changed to an emphasis on student performance and program quality.

Introduction

In 1953 the College Board began the Advanced Placement (AP) program, to challenge a small, elite group of able students. AP students took a college course in high school and an external exam to qualify for admission to advanced undergraduate work. The strength of AP was its eschewing fads for a solid collaboration between high school teachers and college professors, with an emphasis on subject content. An important feature was the evaluation of a high school student's work by outside examiners who were college faculty.

Since that time the program has taken on a life of its own and has spread widely throughout American high schools. The number of participants has more than doubled every decade. Today, more than half of American high schools and a third of four-year college-bound seniors participate in this burgeoning program. More than a million AP exams, five hundred times the original number, are taken each year.

Whereas overall assessments of American public schools range from highly critical (National Commission on Excellence in Education, 1983; Ravitch, 1985; Finn, 1991) to favorable, even optimistic (Carson et al., 1993; Bracey, 1991-1998), all sides give AP their approval. This shows itself in a growing number of legislatures and state boards which support AP (twenty-three states in 1998, including D.C.; College Board, 1998) in a variety of ways.

The heart of the AP program is its examination, which is given at the end of the academic year, usually to high school seniors or juniors. Unlike norm referenced examinations, such as the SAT and ACT, which are scored in percentiles or the equivalent, AP gives criterion referenced examinations, which are pass or fail. The criterion in AP is whether or not the colleges will accept the student for advanced placement. Thus, any critical evaluation of the success of the AP program must hinge on the degree to which the program succeeds in overcoming this hurdle.
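The distinction drawn above between norm referenced and criterion referenced scoring can be sketched in a few lines of Python. The scores, population, and cutoff values below are hypothetical, chosen only to illustrate the percentile-versus-cutoff logic; they are not actual ETS scoring rules:

```python
# Illustrative contrast between the two scoring philosophies.
# All numbers here are made up for the example.

def percentile_rank(score, population):
    """Norm-referenced: a score is ranked against everyone else who took the test."""
    below = sum(1 for s in population if s < score)
    return 100.0 * below / len(population)

def ap_qualifies(ap_grade, college_cutoff=3):
    """Criterion-referenced: pass/fail against a fixed cutoff,
    regardless of how other examinees performed."""
    return ap_grade >= college_cutoff

sat_scores = [450, 500, 550, 600, 650, 700, 750]
print(percentile_rank(600, sat_scores))   # the rank depends on the population
print(ap_qualifies(3))                    # qualifies at a "3" cutoff
print(ap_qualifies(3, college_cutoff=4))  # fails where colleges demand a "4"
```

The point of the sketch is that expanding the test-taking pool changes nothing about a criterion referenced pass/fail decision: only the fixed cutoff matters.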
The College Board widely quotes its grade scale:

Table 1
Present College Board Interpretation of AP Scores
(approximate grade equivalents in parentheses)
5: extremely well qualified (A)
4: well qualified (B)
3: qualified (C)
2: possibly qualified
1: no recommendation

The College Board (1999a) claimed that,

Almost two-thirds of the students achieved grades of 3 or above on AP's 5-point scale, sufficiently high to qualify for credit and/or enrollment in advanced courses at virtually all four-year colleges and universities, including the most selective.

It is an open secret (Hyser, 1999) that both this claim and scale (Table 1) disagree with college standards. This disparity is a sign of remarkably poor communication between the colleges and the College Board. This paper discusses in detail the seriously misleading conclusions that follow from Table 1.
The Colleges and Advanced Placement

The success of the program is judged by measurable exam performance, as opposed to intangible benefits, which are difficult to evaluate objectively (Lichten and Wainer, 2000). The raison d'être of the program is qualification for advanced placement by the colleges and universities. To determine college practice, the author uses an enlarged version of the sample of Morgan and Ramist (1998), twice as large, in order to include the lower end and make the sample more representative. (See Table 2. The sample may be slightly lenient, since it under-represents small colleges, which sometimes have stricter AP admission policies.)

These colleges and universities divide (by average AP scores) into three classes: "highly selective" (mean AP grade >= 3.4, average SAT scores approximately >= 610), "selective" (AP 2.6-3.4, SAT ca. 500-610), and "non-selective" (AP <= 2.6, SAT ca. <500). (See Table 2. Sources for SAT or equivalent ACT scores are College Board (1999b) and Princeton Review (1998). AP data are obtained from the Educational Testing Service (ETS).) (Note 1) Then, with 5% dropped (typically colleges with only one AP candidate), the number of exams is 218,359 in highly selective, 519,521 in selective and 67,386 in non-selective schools.

The data in Table 2 differ for each of the three types of colleges. Highly selective schools require a "4" or more, with about three out of five exams qualifying for advanced placement. About half of the selective schools take "4's" and half take "3's", with about half of the exams qualifying. Non-selective schools usually accept a "3", but only one out of three exams qualify. Overall, scores of 5 and 4 qualify, 55% of 3s pass, and essentially all 1s and 2s fail, for an average pass rate of 49%. These results obviously disagree with College Board claims (Table 1 and subsequent text), and confirm Hyser (1999).
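As a rough check, the overall pass rate can be recomputed from the exam counts above and the approximate qualifying fractions for the three tiers (about three in five, one in two, and one in three). This is a back-of-envelope sketch, not an exact reproduction of the author's weighting:

```python
# Back-of-envelope check of the overall pass rate, using the exam
# counts and approximate tier pass rates given in the text.
exams = {"highly selective": 218_359, "selective": 519_521, "non-selective": 67_386}
pass_rate = {"highly selective": 3 / 5, "selective": 1 / 2, "non-selective": 1 / 3}

qualifying = sum(exams[t] * pass_rate[t] for t in exams)
total = sum(exams.values())
print(f"{qualifying / total:.1%}")  # lands near the stated average pass rate of 49%
```

With these rounded tier rates the weighted average comes out at roughly half, consistent with the text's 49% figure.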
English Literature seems to have slipped farther than other subjects. Some colleges, not all highly selective, will not even accept a "5" for AP credit. The shift from a "3" to a "4" in selective colleges occurs more often for English Literature than for other subjects (Table 2).

Table 2
Data on AP for a Representative Sample of Colleges

College or University | SAT | Ave. AP Score | % >= 3 | Exams | Candidates | Pass Score | Comments

Non-Selective
Albany St U. | 430 | 1.35 | 5.7 | 87 | 53 | 3 |
Prairie View A&M U | 420 | 1.54 | 11.1 | 81 | 48 | 3 |
TN State U | 460 | 1.71 | 16.2 | 271 | 166 | 3 |
NC Agricultural Tech St | 460 | 1.92 | 22.1 | 299 | 170 | 3 |
Morgan State U | 475 | 1.95 | 24.1 | 162 | 102 | 3 |
Eastern KY U | 455 | 2.07 | 28.7 | 366 | 190 | 3 |
State U W GA | 461 | 2.13 | 31.6 | 275 | 154 | 3 |
Spelman College | 537 | 2.22 | 33.2 | 561 | 311 | 3, 4 | sci. & Engl. 4
U Southern MS | 515 | 2.29 | 36.1 | 418 | 219 | 2 |
Western KY U | 495 | 2.36 | 40.3 | 514 | 258 | 3 |
U West FL | 535 | 2.36 | 44.2 | 240 | 115 | 3 |
U NC Wilmington | 454 | 2.37 | 41.8 | 977 | 525 | 3 |
U TX Pan Am | NA | 2.5 | 39.7 | 1282 | 559 | 3 |
U South FL | 545 | 2.52 | 45.9 | 1993 | 894 | 3 |
U CA Riverside | 511 | 2.55 | 47.5 | 4130 | 1576 | 3 |
Appalachian St U | 540 | 2.59 | 51.6 | 1732 | 802 | 3 |

Selective
George Mason U | 515 | 2.63 | 49.4 | 1328 | 653 | 4 |
FL State U. | 576 | 2.69 | 53.2 | 4836 | 2030 | 3 |
Auburn U | 569 | 2.74 | 55.2 | 1707 | 836 | 4 |
James Madison | 585 | 2.74 | 57.5 | 4016 | 1631 | 4 |
U. CA Irvine | 520 | 2.77 | 56.3 | 8247 | 2708 | 4 |
Clemson U | 557 | 2.88 | 60.5 | 3963 | 1649 | 3 |
U. CA Davis | 565 | 2.94 | 62.4 | 7141 | 2658 | 3 |
MI State U | 540 | 2.95 | 62.7 | 4157 | 2086 | 3, 4 | English et al. 4
Cornell College | 600 | 3.01 | 62.1 | 182 | 70 | 3 |
U. GA Athens | 599 | 3.02 | 66.1 | 6029 | 2493 | 3, 4 |
U. Texas | 601 | 3.08 | 67.8 | 14838 | 5063 | 3 |
PA State U. | 593 | 3.09 | 67.8 | 6362 | 2753 | 4 |
UNC Chapel Hill | 610 | 3.2 | 71.1 | 9386 | 2990 | 4 | English 5
U UT | 565 | 3.28 | 74.9 | 3835 | 1496 | 3 |
Boston College | 630 | 3.28 | 76 | 4213 | 1311 | 3, 4 |
Tulane U | 645 | 3.33 | 76.7 | 3002 | 973 | 4 |
Brigham Young | 610 | 3.35 | 77.9 | 10392 | 3960 | 3 |

Highly Selective
U. IL Urbana | 610 | 3.42 | 78.3 | 10389 | 3596 | 4 |
College of WM & Mary | 655 | 3.59 | 83.3 | 3452 | 928 | 4 |
U. Virginia | 643 | 3.61 | 88.3 | 9488 | 2351 | 4 |
Carnegie Mellon | 641 | 3.79 | 87 | 3310 | 852 | 4 | English 5
Cornell U. | 660 | 3.81 | 88.4 | 9826 | 2315 | 4 |
Duke U. | 685 | 3.91 | 89.5 | 6615 | 1467 | 4 |
Stanford U. | 703 | 4.13 | 92.1 | 8390 | 1749 | 4 |
Yale U. | 730 | 4.25 | 94.4 | 5169 | 998 | 4 | Engl. 5 (or 4 & 760 SAT V, II)

Of all exams that result in advanced placement credit, 32% came from students applying to highly selective colleges, 63% from selective colleges and only 5% from non-selective colleges. Overall college attendance divides approximately into 18% of students at highly selective colleges, 36% at selective institutions and 46% at non-selective schools (based on composite SAT score percentiles furnished by the College Board).

Extreme cases are Yale and the predominantly minority Albany (GA) State U. Applicants forwarding AP exams to Yale's admissions office take an average of 5.2 AP exams. Three quarters of these 5169 exams (about 3900) from 998 candidates meet Yale's "4" requirement. At Albany State, with a freshman class of 660, 53 AP candidates take 87 exams, of which five are acceptable at a score of 3 or higher. The contrast between these two schools points up the successes and failures of the program.

The College Board Scale

To test the College Board scale (Table 1), assume for the sake of argument that all "qualified" and, say, half of the "possibly qualified" persons merit AP. Then, if one applies Table 1 to the current figures for 1's, 2's, 3's, 4's and 5's (116, 240, 286, 207, and 142 thousands, with 0%, 50%, 100%, 100%, 100% passing, respectively), about (120+286+207+142) = 755 out of the total of 991 (thousands), or 76%, would qualify. Yet less than half of the sample qualified. The College Board scale overestimates the fraction of successful examinations by over a quarter of the total, by no means a trivial amount.
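The scale-test arithmetic above can be reproduced in a short script. The grade counts (in thousands) and the pass fractions under the College Board's scale are taken from the text; the "college practice" fractions (55% of 3s, all 4s and 5s, essentially no 1s and 2s) are those reported earlier from the Table 2 sample:

```python
# Grade distribution in thousands of exams (grades 1 through 5), from the text.
counts = {1: 116, 2: 240, 3: 286, 4: 207, 5: 142}
total = sum(counts.values())  # 991 thousand

# College Board scale: all 3s, 4s, 5s qualify; assume half of the 2s do.
board_pass = {1: 0.0, 2: 0.5, 3: 1.0, 4: 1.0, 5: 1.0}
# College practice (Table 2 sample): 55% of 3s, all 4s and 5s.
college_pass = {1: 0.0, 2: 0.0, 3: 0.55, 4: 1.0, 5: 1.0}

board = sum(counts[g] * board_pass[g] for g in counts) / total
college = sum(counts[g] * college_pass[g] for g in counts) / total
print(f"Board scale predicts {board:.0%}; college practice accepts {college:.0%}")
# The gap, over a quarter of all exams, is on the order of the
# ~300,000 misclassified examinations discussed in the text.
print(f"overestimate: ~{(board - college) * total:.0f} thousand exams")
```

The exact gap depends on how the "possibly qualified" 2s are treated, but under any reasonable assumption the old scale overstates qualification by roughly a quarter of all examinations.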
In 1999, this would amount to approximately 300,000 examinations incorrectly predicted by the College Board's scale (Table 1). These examinations produce revenue to the C.E.E.B. of over $20 million and create an obvious conflict of interest. Table 3 shows a scale, in agreement with Hyser (1999), which drops down by a full step on the five point range, i.e., such that half of the exams with a "3" qualify:
Table 3
A New Scale That Represents AP Data More Accurately Than the Old Scale of Table 1
(letter grades are the author's estimates)
5: well qualified (A)
4: qualified (A-, B+)
3: possibly qualified (B or C)
2, 1: no recommendation

Under the same assumptions as before, (143+207+142) out of 991 (thousands), or 49%, would qualify. The latter figure agrees quite well with the data. Thus, the old scale (Table 1) is quite misleading and the new scale (Table 3) is a good fit. (Note 2) Note that a majority of the AP examinations are not passing. Since about one out of three students taking the AP courses never take the examinations, the overall examination pass rate is only about one for every three course enrollments. (Note 3)

The College Board and the Colleges Disagree

The major disagreement between the two grade scales (Tables 1 and 3) shows a yawning gap in communication between the CEEB and the colleges. Because the scoring criteria for A.P. are not public information, one can only guess at the causes for the discrepancy between the College Board's claims (Table 1) and the facts of college admissions. The CEEB denies that such a discrepancy could reflect any change in quality:

...each exam grade indicates the same level of college-level learning from year to year and state to state. AP provides a true national standard of achievement that is constant over time. We make every effort to protect it from grade inflation. (College Board, 1996)

This claim, coupled with the allegedly consistent success rate, is a chimera for several reasons. One major cause is apparent to this former college teacher upon inspection of Table 1: the very grade inflation that the CEEB assures us does not exist. The Lira failed to avoid inflation by pegging itself to the Euro, when the value of the Euro dropped. Likewise, AP scores have been pegged to college grades the same way since the beginning of the program in the 1950's, as shown in Table 1.
As a person whose teaching career spanned this interval, the author remembers well the changes in grade scales. In the 1950's the average grade in introductory courses at Yale lay midway between a satisfactory "C" and a good "B" (or at 80 on a numerical scale). Today a "C" is unsatisfactory and a "B" is satisfactory in reality; an average grade is midway between B+ and A (at 90 on the same scale). Grades have gone up similarly in other colleges since AP's birth in 1956. Thus it should be evident from Table 1 that the AP scale would likewise shift by an entire grade, as it has. That the College Board misses this inference is a sign of the lack of contact between it and the colleges.

The constancy of the average pass rate at about 2/3, measured by the fraction of scores greater than or equal to 3, is also illusory for a subtle reason, related to Simpson's paradox. Actually, in most AP tests, the fraction of examinations scoring at 3 or higher is decreasing over the years as the pool of test takers expands and takes in students of lower ability. The overall result appears to be constant because of shifts in test takers towards easier exams.

The fraction of U.S. History exams at 3 or more has declined to 51% in 1999. On the other hand, in English Literature the percentage of tests with scores of 3 or higher has held up at 68% in 1999. This result reinforces other evidence (see Table 2, comments) for declining grading standards in English Literature vis-à-vis other subjects. However, for both exams only about 40% of test-takers truly qualify for advanced placement at the colleges. How could the quality of AP exam papers slide downward so badly? An explanation given by three authors, one of whom (Jones) is the present head of the AP program, is that

...over long time intervals test scores are not necessarily comparable, as the entire scale may gradually shift. Changing demographics of the test-taking population must also be considered.... (Pfeiffenberger, Zolandz and Jones, 1991)

Since the number of tests has increased five hundred-fold during the past 45 years, one should not be surprised at such a drift.

Another sign of the CEEB-college gap is the lack of qualified graders. To keep AP's raison d'être, one would want at least a majority to be college faculty who teach the subject matter of the AP examinations. Yet, of 556 graders of the 1999 AP U.S. History exam, 316 came from high schools and 60 from community colleges, unaccredited and other non-college sources, or colleges which failed to list their average SAT scores (typically very low-level institutions). Only a minority of 180 came from accredited four-year colleges.
Likewise, of 619 graders of the English Literature examination for 1996, only a minority of 269 were four-year college teachers. (Note 4) An unfortunate outcome of this loss of contact is that the AP program seems to have lost its major source of quality, its close collaboration with the colleges.

Mandates

A serious source of disagreement between the College Board and higher education faculty is the increasing number of legal restrictions. The colleges view these as micromanagement by unqualified lay persons which endangers the high quality of American higher education. In the words of two former university presidents:

An important reason why American higher education has become pre-eminent in the world is the greater willingness of the government to respect the autonomy of colleges and universities and to refrain from imposing its own judgements on what Justice Felix Frankfurter once described as "the four essential freedoms of a university: to determine for itself on academic grounds who may teach, what may be taught, how it should be taught, and who may be admitted to study." (Bowen and Bok, 1998)

The College Board takes the opposite point of view and welcomes this type of government intervention as an aid to program (and revenue) growth:
Because of the leadership shown by the legislators and educators in these states, the growth in their students' participation in the Program has been truly remarkable. (College Board, 1995)

Examples of State Mandates

Extra credit for AP courses. The state Regents have overridden a vote of the University of California, Berkeley faculty and have mandated that admissions staff give a full grade point of extra credit for AP courses (Sahagun and Weiss, 1999). Extra credit towards admissions (in the University of California and others) also is based on enrollment in courses with the label "AP," not necessarily on satisfactory exam performance. Since the overall examination pass rate is only about one for every three course enrollments, mandating preferential admission for enrolled students is questionable.

Paying of examination fees. In the view of college faculty graders, the practice of some states' paying all examination fees indiscriminately encourages unqualified persons (even those who have not taken the AP course) to take a flyer, and overloads the system with inferior examinations. As an extreme example, graders tell of examination papers that are totally blank, except for a message saying that the student took AP because of external pressure from parents or school. Since nothing was lost because the fee was prepaid, the student took the path of least resistance and handed in the blank exam.

Requiring that AP courses be given in all high schools. College faculty and deans cast a jaundiced eye on mandatory high school participation, which they view as dragging in schools that are unqualified to handle AP. As pointed out by the author and H. Wainer (2000), there are schools that fail even to produce a single "3" on any AP exam. In corroboration, Table 4 shows that states that pay student fees and require all high schools to offer AP tend to be at the bottom of the list.
Mandating acceptance of AP examinations with a "3" or higher. The College Board's qualification estimates (Table 1), backed by mandates in a growing number of states, would require acceptance into advanced courses of candidates with a score of "3". This would be unacceptable to colleges that no longer honor a "3". If these mandates were accepted, they would rob the colleges of the discretion to place students on the basis of all relevant information, not just a single, obsolete, numerical grade. That AP success could be a self-fulfilling prophecy follows from this scenario:

1. AP is seen as a successful, growing program.
2. The State wishes to improve its educational system.
3. The College Board assures AP quality and the value of a "3."
4. On this cue, the State mandates college credit for a "3."
5. Colleges comply; the great majority of examinees get AP credit.
6. Enrollment in AP courses soars.

This scenario is a closed loop that includes the College Board and the State government. Out of the loop are the college faculties. Despite the CEEB's enthusiastic support of these mandates and its growing success in gaining state support, it is safe to predict that the colleges will resist. In the words of Bowen and Bok (1998),
"...it is very difficult to stop people from finding a path toward a goal in which they firmly believe..." and efforts to impose solutions on the colleges are "likely to bring forth ingenious efforts...that can have a wide variety of other consequences, not all of them benign."

University faculty can use a variety of measures to circumvent state mandates on AP. Private universities of course are not bound by governmental rules. State universities have a harder time and do not always succeed, as is shown by UC Berkeley's well-known loss of diversity since affirmative action was voted down. However, state universities preserve quality by granting only elective credit for AP scores of "3." Another strategy, as discussed later in this article, is to place AP students in standard beginning classes, rather than in remedial courses. Nevertheless, the pressure from mandates is on college faculty either to go along and lower quality, or to misreport their AP policy. In either case, Table 2 would be incorrect.

Table 4
Advanced Placement Scores by States

State | Tests per 100 grads | % >= 3 | % >= 4 | Mandates*
DC | 83.7 | 73.4 | 49.5 |
Missouri | 13.6 | 74.6 | 44.3 |
Connecticut | 47.0 | 72.1 | 43.8 |
Massachusetts | 46.7 | 72.0 | 43.4 |
New Jersey | 42.3 | 70.6 | 42.7 |
Illinois | 33.3 | 72.3 | 42.7 |
Hawaii | 34.9 | 67.2 | 41.6 |
Maryland | 48.0 | 71.5 | 41.5 |
Delaware | 40.5 | 71.2 | 41.4 |
New Hampshire | 32.4 | 70.4 | 41.3 |
California | 55.9 | 65.7 | 37.5 |
Rhode Island | 29.8 | 69.4 | 37.4 |
North Dakota | 9.3 | 72.1 | 37.2 |
Tennessee | 24.2 | 64.7 | 36.5 |
Washington | 23.6 | 68.4 | 36.5 |
Wisconsin | 29.6 | 68.3 | 36.4 |
Iowa | 14.2 | 70.0 | 36.3 |
Montana | 17.1 | 66.9 | 36.3 |
Pennsylvania | 27.1 | 65.7 | 36.0 |
Virginia | 56.7 | 65.6 | 36.0 | C
Louisiana | 10.8 | 63.8 | 35.3 |
Colorado | 36.5 | 66.3 | 35.2 | P
United States | 36.6 | 64.1 | 35.2 |
Utah | 63.5 | 67.6 | 35.1 |
New York | 62.4 | 64.1 | 35.0 |
Oregon | 19.9 | 67.1 | 34.9 |
Ohio | 24.5 | 65.5 | 34.9 |
Wyoming | 8.1 | 63.7 | 34.8 |
Maine | 26.1 | 67.4 | 34.4 |
Kansas | 13.7 | 64.6 | 34.3 |
Michigan | 26.8 | 65.3 | 34.0 |
Vermont | 31.6 | 64.5 | 33.9 |
Idaho | 16.2 | 67.1 | 33.5 |
Arizona | 27.5 | 63.0 | 33.1 |
Georgia | 34.0 | 60.3 | 32.6 | P
Alaska | 39.2 | 63.6 | 31.3 |
North Carolina | 42.6 | 59.9 | 30.9 |
Texas | 38.0 | 57.8 | 30.8 |
Nebraska | 12.1 | 62.7 | 29.9 |
Florida | 54.5 | 56.2 | 29.5 | P
Minnesota | 28.6 | 58.6 | 29.1 | P
New Mexico | 21.9 | 56.1 | 29.1 |
Oklahoma | 19.7 | 58.8 | 28.9 |
South Carolina | 44.5 | 55.1 | 28.5 | C, P
Alabama | 21.0 | 57.3 | 28.3 |
Nevada | 31.7 | 56.0 | 26.2 |
West Virginia | 15.7 | 55.2 | 24.3 |
Kentucky | 23.5 | 50.7 | 24.2 | P
South Dakota | 16.5 | 55.5 | 24.0 |
Arkansas | 15.3 | 52.0 | 23.9 |
Indiana | 21.6 | 50.2 | 23.4 | C, P
Mississippi | 14.2 | 45.5 | 19.9 |

*Mandates: P = State pays fees for all AP examinees; C = All schools required to give AP courses

How AP Actually Performs

The College Board's literature has emphasized the positive aspects of the increase in
numbers of test takers, but has paid less attention to the actual performance of AP students (College Board, 1994, 1995, 1996, 1998). Consider some data (obtained from ETS) on actual choices made by students in calculus at 14 colleges. One finds the distribution in Table 5.

Table 5
Actual Placement of Calculus Students in 14 Colleges
(percentage taking first calculus course at level shown)

AP Score in Calc AB | No Course | Remedial | 1st Calc | 2nd Calc | 3rd Calc
No AP exam | 29% | 45% | 21% | 3% | 1%
3 | 24% | 17% | 37% | 22% | 2%

Note that the majority of incoming students without an AP background either took no math or enrolled in a remedial course. Also, only a small fraction (22%) of students with a score of 3 ("qualified" in Table 1) actually took an advanced course, although the majority (61%) placed out of the remedial course. This shows that, for scores of "3" and lower, the AP Calculus AB examination is no longer acting as an advanced placement examination, but more as a placement examination. (Students with a score of "1" or "2" usually are placed in the remedial course. Students with a score of 4 or 5 are likely to take an advanced course.) If one considers the overall performance of all AP students who finished Calculus AB and estimates that ca. 2/3 actually took the exam, only a quarter or less achieved advanced placement in this sample.

Especially in the foreign languages, colleges often use AP exams interchangeably with other criteria, such as SAT I and SAT II scores and even high school credits, to make placement decisions.

AP and Minorities

AP results for minority students are disturbing. (Note 5) The author finds College Board statements on this topic misleading (College Board, 1996; Coley and Casserly, 1992). For example, the CEEB cites the movie "Stand and Deliver" on Escalante's success in teaching AP calculus to Hispanic children. However, neither Escalante nor his emulators have succeeded in repeating his success with minority students.
(Lichten and Wainer, 2000; Mathews 1988, 1997, 1998; Woo, 1998). Furthermore, most of his students took the AP Calculus AB exam, much of which is high school level material. In the College Board's words (1996),

Woodrow Wilson High School (Washington, D.C.) provides an excellent example of a predominantly minority urban high school with a well established Advanced Placement program that serves a substantial proportion of its students.

In actuality, in 1998, out of a total of 383 AP examinations, 85 were taken by African Americans, of which 18 received a "3" or higher (an estimated 6 or 7 at "4" or higher). In its press releases, the CEEB often quotes the increased number of minority students taking AP exams, but says nothing about their success rate.

Consider the facts on minority AP performance. If a passing grade were 3, 35% of African-American AP examinations would qualify. A shift to a "4" would lower this to 14%, or one out of seven exams. These results are consistent with the PSAT-AP ability-performance relation (Camara, 1997; Lichten and Wainer, 2000). Minority students typically score about one standard deviation (15 I.Q., 6 ACT, or 100 SAT points) below average, which translates into an AP pass rate of about half of that for majority pupils.

In urban school districts, such as Detroit, students in selective high schools perform well on AP exams. On the other hand, the much larger number of pupils at unselective schools do extremely poorly in the AP program. In some, not a single AP candidate passes the exam (Lichten and Wainer, 2000).

In the late 1990's more than 2 million persons graduated each year from high school, of whom about 1 million (40%) went to four-year colleges. About 400,000 took AP exams (18%). About 200,000 (9%) scored at "3" or higher and approximately 100,000 (4%) scored at "4" or higher. For African-Americans the corresponding figures were about 250,000 graduates, 75 thousand (30%) to four-year colleges, 15,000 (6%) AP exams, and 5,000 (2%) passed at "3" or higher (less than 1% at "4" or higher). AP success occurs for a small fraction of high school graduates; for minority students, the fraction is extremely small.

In lawsuits on behalf of African-American, Hispanic and Filipino-American students, six civil rights organizations have charged the University of California with discriminatory admissions policies. The suits cite the practice of giving extra credit for AP courses to college applicants and the lower availability of AP courses to minority students (Berthelsen, 1999; Nieves, 1999; Rosenfeld, 1999; Rios, 1999; Sahagun and Weiss, 1999; Daniel et al. vs. State of CA et al., 1999). UC claims to take into account inequality of opportunity for honors/AP students, but state mandates prohibit such discretion (Sahagun and Weiss, 1999).
Clearly, admission policies that favor AP participants work against minority pupils. Affirmative action, in which lower test scores for minorities do not exclude them from admission to selective colleges, is of proven benefit (Bok and Bowen, 1998).

Other Low Performing Groups on AP

Not just minorities are disadvantaged on the AP examinations. Table 4 shows large differences in AP performance among the states. Poor, rural states usually show low AP scores; wealthy, urban states generally do well. Thus, Washington, D.C. is at the top of the table (Note 6); Indiana does poorly. Preference in college admission for students in AP classes means students from low performing states and schools will be handicapped.

Common Sense and AP

There are few lasting success stories in American education (Tyack and Cuban, 1995). As effective educational programs spread, the imitations often become less true to the original. A law of diminishing returns sets in as the originally well-qualified (often self-selected), well-informed and highly motivated group of teachers and pupils becomes flooded by the deluge of badly qualified, ill-informed and poorly motivated followers. The program becomes less selective and quality declines.

AP is no exception to the rule. Consider the largest AP program, English Literature. From Haag's (1985) data, the average PSAT-verbal score of test takers in 1982 was an estimated 62 (recentered scale), far above average. By 1997, from Camara's (1997) data, the average had declined 9.5 points to 52.5, which is close to average (approximately 50 for the PSAT), an exceptional loss of selectivity. (The 50% success point for AP English Literature on the PSAT is 45, well below average.) To claim that quality could be maintained in the face of such dilution of the examination taker pool would be incredible. (Other programs, such as U.S. History, have been more selective.)

College introductory courses match the level of average students. Below average students take remedial courses. Only the small minority of above average high school students capable of doing college level work are suited to the AP program. As the AP program expands, it reaches students who are not yet ready to do college-level work. The data confirm common sense: only a minority of students are capable of doing college-level work in advance. Otherwise, standard introductory college courses would be unnecessary.

In confirmation, a survey of K-16 (school and college) students by the Education Trust (1999) showed the high school-college gap. Three quarters of U.S. high school graduates enter some kind of college, but many arrive unprepared. Nearly half take a remedial course, one third fail to make it into the sophomore class, and less than half graduate from college. With few exceptions, national and state standardized tests fail to cover the abilities needed in college. In the Trust's words, it "doesn't make any sense" that the fastest growing courses in high schools are college level (AP) and the biggest growth in college courses has been high school level, remedial courses. (Note 7)

In summary, the major slide in the qualification scale, the heart of AP, results from lower average student ability.

Whither AP?

The College Board endorses continuing the expansion rate of AP for the next decade (College Board, 2000). What would be the outcome of this policy? Classical economics says that the decision to increase production hinges on the marginal rate of return. Additional production increases profits up to the point of diminishing returns, after which losses outweigh gains.
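The marginal-return argument can be checked numerically against the decade figures reported in Table 6: dividing each decade's gain in qualifying examinations by its gain in total examinations approximately reproduces the falling percentages in the table's last column (the table's own entries are independently rounded, so small discrepancies remain):

```python
# Decade totals from Table 6: (year, total exams, qualifying exams), rounded.
decades = [
    (1960, 15_000, 10_000),
    (1970, 70_000, 50_000),
    (1980, 150_000, 100_000),
    (1990, 500_000, 300_000),
    (2000, 1_400_000, 650_000),
    (2010, 2_300_000, 800_000),  # the table's projection
]

for (y0, e0, q0), (y1, e1, q1) in zip(decades, decades[1:]):
    marginal = (q1 - q0) / (e1 - e0)  # share of the *added* exams that qualify
    print(f"{y0}-{y1}: about {marginal:.0%} of new exams qualify")
```

The marginal fraction falls monotonically across the decades, which is exactly the diminishing-returns pattern the author describes.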
There are also intangible limits on expansion. If a farmer plants to the point that the grain becomes poor in quality, or the land is damaged by erosion, the damage to his or her reputation or land may not show in dollars and cents, but it could be important in the long run.

Likewise, expansion of the AP program reaches diminishing returns, as the marginal yield of pupils qualifying drops (Table 6, last column). In lieu of hard data (the CEEB does not keep records of the actual number of qualified examinations), this table is based partially on Table 2 (for the year 2000) and on information the author could glean from various sources. Table 6 is based on a conservative projection of present trends, such that all selective colleges will no longer accept a "3". Actually, some colleges now require a "5" for AP in some subjects and some give no AP credit for English Literature.

Table 6
Estimated Diminishing Returns in the AP Program

Year | Number of Exams | % Qualifying | Qualifying: Number | Qualifying: Increase | Increase in Total Number | % of Added Number Qualifying
1960 | 15,000 | 75% | 10,000 | - | - | -
1970 | 70,000 | 75% | 50,000 | 40,000 | 55,000 | 75%
1980 | 150,000 | 69% | 100,000 | 50,000 | 80,000 | 63%
1990 | 500,000 | 60% | 300,000 | 200,000 | 350,000 | 57%
2000 | 1,400,000 | 48% | 650,000 | 350,000 | 900,000 | 39%
2010 | 2,300,000 | 35% | 800,000 | 150,000 | 900,000 | 17%

Table 6 shows how further increases add relatively few qualified examinations. On the other hand, the costs mount in terms of examination fees, training teachers, smaller class sizes, lowered quality of graders and loss of respect for AP. The net benefits diminish to the point that continued expansion of the program does more harm than good. In the opinion of the author, that point was passed long ago.

Conclusions

A fundamental flaw in the AP program follows from the failure to distinguish between criterion and norm referenced programs. Norm referenced programs, such as the SAT or ACT, put students in rank order for convenient sorting. The larger the number of persons taking such a test, the better are the norms. On the other hand, the colleges' AP criterion is inflexible. As long as AP served a small, elite population chosen from selective schools, increasing the program size had little or no effect on the pass rate or on quality. Now that the level of test takers has dropped below the criterion, the failure rate has increased sharply, and program quality has suffered. To reestablish quality, major reforms to AP are needed. These include an honest grade scale which is aligned with college standards, removing unwise mandates, and better selection of faculty and students into courses, examinations and grading. (Note 8)

Notes

The author is indebted to Neil Dorans, Drew Gitomer, Penelope Laurans, Maxine Lurie, Jonathan Lurie, L. Scott Miller, Rick Morgan, Len Ramist, Howard Wainer, and Warren Willingham for helpful discussions, suggestions, criticisms and information. This paper was partially researched while the author was a visitor at the Educational Testing Service, 1998-1999.
This paper is not approved by, and does not express the views of, the Educational Testing Service nor of any of its employees.

1. If colleges were arranged by SAT, rather than by AP scores, the grouping would be slightly different. For example, Spelman College would be listed as selective.

2. This result (actually 26.5%) is robust. For example, if "possibly qualified" meant one quarter of the students passed, the resultant shift would be 27.1%.

3. W. Currie, quoted in Rothschild (1999), estimated about 55% of students enrolled in AP courses take the examinations. The more conservative figure of 2/3 used here changes the fraction of AP enrollees passing the tests to about a third.

4. Community college faculty do not have direct contact with the AP program and the content of AP-level college courses. They and high school teachers usually do not have the advanced education and research experience of college and university faculty.

5. Data for African-American scores in AP tests are from 1998 figures from ETS. The present paper does not consider Asian-Americans as "minority."

6. Washington, DC has a higher per capita income than any state. Also, the overwhelming majority of AP tests there are taken by students (majority as well as minority) from private schools.

7. A similar inversion occurs between AP English Literature and SAT II English. The former has average PSAT scores of 52.5 (roughly comparable to senior SAT scores of 540); the latter has average SAT scores of 568 (College Board, 1997). Students taking the AP exam have lower verbal ability than those who take the high school exam.

8. The College Board (2000) recently announced plans to put ten AP courses in every high school in the country by the year 2010 and expand the program to over 2 million examinations. This move, if it ever became real, would exacerbate the problems of the program: bloated size, ill-qualified faculty and students, and growing failure rates, especially among minorities. Calculus BC is the exception that proves the rule about AP. This small program (31,000 exams in 1999) is still a success by all measures. Colleges still accept a "3" for AP, the pass rate is very high (79%), yet the student ability distribution on the PSAT is no higher than for Calculus AB (Camara et al., 1997). The success of BC may be due to the same features of AP in its early days: self-selected, able, well-motivated faculty and students.

References

Berthelsen, C. (1999). Suit Says Advanced-Placement Classes Show Bias. The New York Times, July 28, p. A18 (Western edition).

Bowen, W. G. and Bok, D. (1998). The Shape of the River: Long-Term Consequences of Considering Race in College and University Admissions. Princeton, NJ: Princeton.

Bracey, G. W. (1991-1998). Why Can't They Be Like We Were? Phi Delta Kappan (Oct.), pp. 104-117, annual Bracey Report on the Condition of American Education, 1992-1998.

Bracey, G. W. (1997). Setting the Record Straight: Responses to Misconceptions about Public Education in the United States. Alexandria, VA: Association for Supervision and Curriculum Development.

Camara, W. (1997). The Relationship of PSAT/NMSQT Scores and AP Examination Grades. RN-02, November. New York: College Board.

Carson, C. C., Huelskamp, R. M., and Woodall, T. D. (1993). Perspectives on Education in America. J. Educational Research 86, No. 3, May/June, pp. 259-311.

Coley, R. J. and Casserly, P. L. (1992). A Study of AP Students in High Schools with Large Minority Populations. Princeton, NJ: The Educational Testing Service.

College Board. (1994). College and University Guide to the Advanced Placement Program. New York: The College Board, p. 32.

College Board. (1995). Advanced Placement Program: 40th Anniversary Yearbook. New York: The College Board.

College Board. (1996). A Secondary School Guide to the Advanced Placement Program. New York: College Entrance Examination Board and Princeton, NJ: Educational Testing Service.

College Board. (1997). AP: Grading, Interpreting and Using Advanced Placement Examinations. New York: College Board and ETS.

College Board. (1998). Advanced Placement State Initiatives. New York: College Board.

College Board. (1999a). More Schools, Teachers, and Students Accepted the AP Challenge in 1998-1999. Collegeboard.org/990831b.html

College Board. (1999b). The College Handbook 2000. New York: College Board.

College Board. (2000). HTTP://WWW.Collegeboard.org/press/html/index.html.

Daniel et al. vs. State of California et al. (1999). Los Angeles: Superior Court of California.

Education Trust. (1999). Thinking K-16. Washington, DC.

Finn, C. E. Jr. (1991). We Must Take Charge: Our Schools and Our Future. NY: Free Press.

Haag, C. H. (1985). Using the PSAT/NMSQT to Help Identify Advanced Placement Students. Publication 273678. New York: College Board.

Hyser, R. H. (1999). Is a 3 a C? The Reliability of the Advanced Placement United States History Test for College Credit. The History Teacher, 32, No. 2, February, 223-235.

Lichten, W. and Wainer, H. (2000). The Aptitude-Achievement Function: An Aid for Allocating Educational Resources, with an Advanced Placement Example. Educational Psychology Review, 12, No. 2, 201-228. (Information available at http://www.wkap.nl//oasis.htm/222765.)

Mathews, J. (1988). Escalante: The Best Teacher in America. New York: Holt.

Mathews, J. (1997). A Math Teacher's Lesson in Division. Movie Inspiration Has Found That Fame Can Stand in the Way of Success. The Washington Post, May 21, D1.

Mathews, J. (1998). Class Struggle. New York: Times Books.

Morgan, R. and Ramist, L. (1998). Advanced Placement Students in College: An Investigation of Course Grades at 21 Colleges. Princeton: ETS. Report No. SR-98-13.

National Commission on Excellence in Education. (1983). A Nation at Risk: The Imperative for Educational Reform. Washington: U.S. Department of Education.

Nieves, E. (1999). Civil Rights Groups Suing Berkeley Over Admissions Policy. New York Times, Feb. 3, p. A11.

Pfeiffenberger, W., Zolandz, A. M., and Jones, L. (1991). Testing Physics Achievement: Trends Over Time and Place. Physics Today, September, pp. 30-37.

Princeton Review. (1998). Complete Book of Colleges. New York: Random House.

Ravitch, D. (1985). The Schools We Deserve. New York: Basic.

Rios et al. vs. the Regents of the University of CA, Case C.99-0525, U.S. District Court, Northern Dist. of CA.

Rosenfeld, S. (1999). Cal hit with race-bias suit. San Francisco Examiner, Feb. 2, pp. A1, A10.

Rothschild, E. (1999). Four Decades of the Advanced Placement Program. The History Teacher, 32, No. 2, February, pp. 175-206.

Sahagun and Weiss (1999). Bias Suit Targets Schools Without Advanced Classes. Los Angeles Times, July 28, p. A1.

Tyack, D. B. and Cuban, L. (1995). Tinkering Toward Utopia: A Century of Public School Reform. Cambridge, MA: Harvard.

Woo, E. (1998). Calculus Test Scores Drop at Garfield; Film is Blamed. Los Angeles Times, August 19, 1:5.

About the Author

William Lichten
Professor Emeritus of Physics
Fellow, Institution for Social and Policy Studies
Yale University
New Haven, CT 06520-8209
Email: firstname.lastname@example.org

Copyright 2000 by the Education Policy Analysis Archives

The World Wide Web address for the Education Policy Analysis Archives is epaa.asu.edu

General questions about appropriateness of topics or particular articles may be addressed to the Editor, Gene V Glass, email@example.com, or reach him at College of Education, Arizona State University, Tempe, AZ 85287-0211. (602-965-9644). The Commentary Editor is Casey D. Cobb: firstname.lastname@example.org
EPAA Editorial Board

Michael W. Apple, University of Wisconsin
Greg Camilli, Rutgers University
John Covaleskie, Northern Michigan University
Alan Davis, University of Colorado, Denver
Sherman Dorn, University of South Florida
Mark E. Fetler, California Commission on Teacher Credentialing
Richard Garlikov, email@example.com
Thomas F. Green, Syracuse University
Alison I. Griffith, York University
Arlen Gullickson, Western Michigan University
Ernest R. House, University of Colorado
Aimee Howley, Ohio University
Craig B. Howley, Appalachia Educational Laboratory
William Hunter, University of Calgary
Daniel Kallós, Umeå University
Benjamin Levin, University of Manitoba
Thomas Mauhs-Pugh, Green Mountain College
Dewayne Matthews, Western Interstate Commission for Higher Education
William McInerney, Purdue University
Mary McKeown-Moak, MGT of America (Austin, TX)
Les McLean, University of Toronto
Susan Bobbitt Nolen, University of Washington
Anne L. Pemberton, firstname.lastname@example.org
Hugh G. Petrie, SUNY Buffalo
Richard C. Richardson, New York University
Anthony G. Rud Jr., Purdue University
Dennis Sayers, Ann Leavenworth Center for Accelerated Learning
Jay D. Scribner, University of Texas at Austin
Michael Scriven, email@example.com
Robert E. Stake, University of Illinois-UC
Robert Stonehill, U.S. Department of Education
David D. Williams, Brigham Young University

EPAA Spanish Language Editorial Board

Associate Editor for Spanish Language: Roberto Rodríguez Gómez, Universidad Nacional Autónoma de México, firstname.lastname@example.org
Adrián Acosta (México), Universidad de Guadalajara, adrianacosta@compuserve.com
J. Félix Angulo Rasco (Spain), Universidad de Cádiz, felix.email@example.com
Teresa Bracho (México), Centro de Investigación y Docencia Económica-CIDE, bracho@dis1.cide.mx
Alejandro Canales (México), Universidad Nacional Autónoma de México, canalesa@servidor.unam.mx
Ursula Casanova (U.S.A.), Arizona State University, casanova@asu.edu
José Contreras Domingo, Universitat de Barcelona, Jose.Contreras@doe.d5.ub.es
Erwin Epstein (U.S.A.), Loyola University of Chicago, Eepstein@luc.edu
Josué González (U.S.A.), Arizona State University, josue@asu.edu
Rollin Kent (México), Departamento de Investigación Educativa-DIE/CINVESTAV, rkent@gemtel.com.mx, firstname.lastname@example.org
María Beatriz Luce (Brazil), Universidad Federal de Rio Grande do Sul-UFRGS, lucemb@orion.ufrgs.br
Javier Mendoza Rojas (México), Universidad Nacional Autónoma de México, javiermr@servidor.unam.mx
Marcela Mollis (Argentina), Universidad de Buenos Aires, mmollis@filo.uba.ar
Humberto Muñoz García (México), Universidad Nacional Autónoma de México, humberto@servidor.unam.mx
Angel Ignacio Pérez Gómez (Spain), Universidad de Málaga, aiperez@uma.es
Daniel Schugurensky (Argentina-Canadá), OISE/UT, Canada, dschugurensky@oise.utoronto.ca
Simon Schwartzman (Brazil), Fundação Instituto Brasileiro de Geografia e Estatística, email@example.com
Jurjo Torres Santomé (Spain), Universidad de A Coruña, jurjo@udc.es
Carlos Alberto Torres (U.S.A.), University of California, Los Angeles, torres@gseisucla.edu