PUBLIC SCHOOL ACADEMIC PERFORMANCE


National Trends

[The following text is excerpted from Chapter 6 of Market Education: The Unknown History.]

     How do today’s public school students compare, on average, to those of two, four, or eight decades ago? Are they academically better or worse off? These questions are answered in the following sections by drawing together the results of a broad range of national and international studies.

Overall Achievement Before 1970

     As far back as 1906, researchers began conducting "then and now" studies that analyzed changes in student achievement over time, particularly in reading proficiency. Several attempts have been made to piece together the results of these studies in order to obtain a coherent long-term picture. The best such effort was conducted by Professors Lawrence Stedman and Carl Kaestle in 1991. As a caveat to their conclusions, Stedman and Kaestle observed that many "then and now" comparisons were not nationally representative and failed to control for demographic changes in the test groups over time, undermining the validity of those comparisons’ results. These problems were especially severe in the older studies, and so the evidence for the early to mid nineteen-hundreds is somewhat sketchy.
     With that proviso in mind, Stedman and Kaestle found that average reading achievement for students in school at any given age stagnated for the first seventy years of this century. Their reference to age level was meant to account for the increasingly popular practice of social promotion. Students in the earlier part of this century were not generally promoted to the next grade unless they had mastered the previous grade’s material, whereas modern students are frequently pushed through the system regardless of their level of achievement. This means that the students in any given grade today are younger, on average, than students who were in that same grade fifty or eighty years ago. Age is thus a more reliable indicator of how long a student has been in school than grade, providing a fairer basis for comparing the effectiveness of historical and modern schools.
     Though the original Stedman/Kaestle analysis was careful to consider many of the significant influences on student achievement (such as the age factor just described), it did overlook one relevant aspect of pre-nineteen-seventies schooling: the ever-lengthening school year. In 1909-10, pupils attended school for an average of 113 days. By 1969-70, the figure had jumped to 161.7 days (where it has remained, roughly speaking, ever since). Because classes were in session five days a week, this amounts to a difference of almost two and a half months of schooling per year. Students in the sixth grade in 1969-70 had thus attended school for fifteen more months than their 1909-10 counterparts. The difference for grade ten pupils was twenty-five months, or about three additional school years. Taking these variations into account is just as important as comparing students by age rather than grade, since it provides a more accurate picture of how much time students actually spent in the classroom. Students in the late sixties received much more schooling but scored no higher on reading achievement tests, indicating a probable decline in the efficiency of public school instruction during the first two thirds of this century.
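     As a rough check of the arithmetic above, the following sketch (in Python) works through the numbers, assuming five-day school weeks and roughly twenty school days per calendar month of attendance; the twenty-days-per-month figure is an assumption introduced here, not taken from the original studies.

    # Rough check of the schooling-time arithmetic above (illustrative only).
    days_1910 = 113.0              # average days attended per pupil, 1909-10
    days_1970 = 161.7              # average days attended per pupil, 1969-70
    school_days_per_month = 20.0   # assumed: five-day weeks, about four weeks per month

    extra_days_per_year = days_1970 - days_1910                      # about 48.7 days
    extra_months_per_year = extra_days_per_year / school_days_per_month

    print(f"Extra schooling per year: about {extra_months_per_year:.1f} months")
    print(f"Cumulative by grade 6:    about {6 * extra_months_per_year:.0f} months")
    print(f"Cumulative by grade 10:   about {10 * extra_months_per_year:.0f} months")

     These figures come out to roughly 2.4 extra months per year, 15 months by grade six, and 24 months by grade ten, in line with the totals cited above once rounding and the assumed days-per-month figure are taken into account.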

Overall Achievement After 1970

     Student achievement has stagnated or fallen in most subjects since 1970, with the largest and most thoroughly established decline occurring in basic literacy. That is the verdict of the five most reliable sources of evidence: the National Assessment of Educational Progress (NAEP), the International Association for the Evaluation of Educational Achievement (IEA), the Young Adult Literacy Survey (YALS), the National Adult Literacy Survey (NALS), and the International Adult Literacy Survey (IALS). Together, these five groups of tests cover the gamut of ages from 9 to 25, and a full range of academic subjects. A great many other measures of student achievement are available, predominantly norm-referenced tests such as the Iowa Test of Basic Skills, but these are properly regarded by the majority of analysts as unreliable indicators of national trends, and so they are not considered here.
     The NAEP covers the most curricular ground, measuring the knowledge and skills of U.S. students in reading, writing, mathematics, science, geography, literature, and U.S. history. Tests in some subjects date back to 1969, while most of the others were introduced during the seventies. Every few years, the NAEP tests are administered to nationally representative samples of fourth, eighth, and twelfth graders. Taken as a whole, their results have remained essentially flat, with a slight, but statistically significant, downturn in reading between 1992 and 1994. NAEP writing scores for eleventh graders also show a slight decline since the late nineteen-eighties. Those who stress self-esteem over academic achievement can console themselves, however, with the fact that students’ perceptions of their writing ability improved noticeably despite the drop in their actual performance. The key NAEP results are graphed in figures 1 and 2.
     The next set of trend results comes from the IEA, and encompasses reading, mathematics, and science achievement. The IEA tested the reading abilities of students from numerous countries in both 1970 and 1990. The two tests were not entirely identical, but the recent doctoral dissertation of Petra Lietz has made score comparisons possible by looking at how students performed on items that were common to both tests. The results, charted in figure 3, reveal that the reading achievement of U.S. fourteen-year-olds dropped from 602 to 541 during the twenty years leading up to 1990, a fall of about 8 percent on the 800-point scale. Only one of the other seven participating countries suffered a worse drop in achievement than the United States.
     The IEA’s First and Second International Mathematics Studies (FIMS and SIMS) were conducted in the mid-nineteen-sixties and the mid-nineteen-eighties, respectively, and tested both thirteen- and seventeen-year-olds. Taken as a whole, scores were essentially constant, with younger students losing ground while older students gained somewhat. As depicted in figure 4, the gain recorded by seventeen-year-olds was slightly larger than the drop of their younger counterparts, but the reliability of the data for seventeen-year-olds is in doubt. David Robitaille, author of the study comparing FIMS and SIMS results, acknowledged that "Only 18 of the 136 items used in the second study at the Senior Level had also been used in the first. This limits the drawing of achievement comparisons between the two studies [at the Senior, i.e. seventeen-year-old, Level]."
     Science knowledge was tested by the IEA in 1970-71 and 1983-84, and scored on an 800-point scale. Both ten-year-olds and fourteen-year-olds participated, with the raw U.S. score results indicating a marginal improvement for the younger group and a more significant drop among older students. Taking the two age groups together, the raw scores pointed to a decline in U.S. science achievement over time, but this was not the whole story. The researchers conducting the tests noticed that U.S. students participating in 1983-84 were, on average, eight months older than their 1970-71 counterparts, giving them an advantage on the test. When this age advantage was statistically adjusted away, the IEA found that scores for both groups of U.S. students had dropped: by 16 points among ten-year-olds, and a whopping 47 points among fourteen-year-olds (see figure 5). These were by far the worst performance trends of any participating nation. In 1997, the results of the Third International Mathematics and Science Study (TIMSS) were released, but unfortunately no effort was made by the researchers involved to allow comparisons between TIMSS results and those of earlier IEA math or science studies.
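     To see why the older 1983-84 sample would flatter a raw comparison, consider the following toy sketch (in Python) of a linear age adjustment. It is not the IEA's actual procedure, and every number in it other than the eight-month age gap is hypothetical, chosen only to illustrate the logic of the correction.

    # Toy illustration of an age adjustment; NOT the IEA's actual method.
    # Only the eight-month age gap comes from the text above; the other
    # numbers are hypothetical, chosen to show how a raw "gain" can
    # become a loss once the older cohort's age advantage is removed.
    age_gap_months = 8        # 1983-84 test-takers were ~8 months older on average
    gain_per_month = 2.0      # hypothetical score points gained per month of age

    raw_score_1971 = 500.0    # hypothetical raw mean score, 1970-71
    raw_score_1984 = 505.0    # hypothetical raw mean score, 1983-84

    # Strip out the advantage the 1983-84 cohort enjoyed simply by being older.
    adjusted_1984 = raw_score_1984 - age_gap_months * gain_per_month

    print(f"Raw change:      {raw_score_1984 - raw_score_1971:+.0f} points")   # +5
    print(f"Adjusted change: {adjusted_1984 - raw_score_1971:+.0f} points")    # -11

     Under these made-up numbers, a five-point raw improvement becomes an eleven-point decline once the age advantage is subtracted, which is the same direction of correction the IEA researchers report for the U.S. samples.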
     Reading tests of older students and recent high-school graduates echo the disappointing findings of the IEA. Several sophisticated investigations of literacy skills have been conducted since the mid-nineteen-eighties. Two of these studies, the National Adult Literacy Survey of 1992 and the Young Adult Literacy Survey of 1985, were designed to be directly comparable, using the same 0 to 500 score range and the same five levels of achievement. As shown in figure 6, the average score of 21- to 25-year-olds fell from 293 to 280 over the intervening seven-year period (from the middle of level 3 to the bottom of that level).
     Two years later, an International Adult Literacy Survey was conducted in seven nations, including the United States. Its overall structure and scoring system were identical to those of its U.S.-only precursors, and the specific kinds of tasks required to score at each of the five levels of achievement were essentially the same. Unfortunately, the lead researcher for both projects cautions that the results of the tests might not be entirely comparable due to differences in some details of the testing procedures. Keeping this caution in mind, the verdict of the IALS is still grim. One out of every four 16- to 25-year-olds scored at the lowest level of literacy achievement in 1994, a larger percentage than ever before and the second-worst showing among the nations tested. These results, along with those of the NALS, are charted in figure 7.

The Opinion Gap

[The following text is excerpted from Chapter 1 of Market Education: The Unknown History.]

     A conspicuous discrepancy exists between the quality ratings Americans give to their own local schools and those they give to the nation’s schools at large. In 1995, for instance, only 20% of those polled gave the nation’s schools a quality rating of A or B, while twice as many gave that rating to their local schools. Many educators see this as an indication that the recent criticism of public schooling is misguided, since the schools with which people are presumably more familiar are given higher, if still mediocre, scores. Parents themselves say that the chief reason they rate their local schools better is that their schools place more emphasis on high academic achievement than schools elsewhere. But are they right?
     In 1992, education scholar Harold Stevenson published the results of a decade’s worth of international studies comparing not only educational performance, but attitudes as well. In his studies, he looked at hundreds of classrooms and families in the U.S., China, Taiwan, and Japan. What he found was that American parents were by far the most satisfied with their local schools, while their children had the worst performance overall. Though in the first grade they were only slightly behind their Asian counterparts in mathematics, by the fifth grade the best American schools had lower scores than the worst schools from all three other nations. Unaware of this fact, the American parents reported being quite pleased with the performance of their schools and their children.
     One possible explanation for this discrepancy is that Americans just have much lower standards than people in other countries, and are thankful for even the most meager successes of their children. But that fails to explain the preoccupation of U.S. citizens and news media with the middling to poor standing of their children on international tests. It seems more reasonable to think that U.S. parents rate the nation’s schools poorly because they are familiar with the international test results, and that they rate their own schools more highly because they have no idea how those schools compare to others in the U.S. or abroad. To parents, it seems, no news is good news. Those who say that the discrepancy between parental ratings of local and national schools is the result of a lack of information appear to be right, but the information is lacking at the local rather than the national level. As George Gallup himself remarked more than twenty-five years ago: "Since [parents and the general public] have little or no basis for judging the quality of education in their local schools, pressures are obviously absent for improving the quality."
     A Chicago fifth-grade teacher expressed a similar view to author Jonathan Kozol, saying "It’s all a game… Keep [the kids] in class for seven years and give them a diploma if they make it to eighth grade. They can’t read, but give them the diploma. The parents don’t know what’s going on. They’re satisfied."
     Another prime example of this situation recently played itself out in East Austin, Texas. Consistently greeted by A’s and B’s on their children’s report cards, the parents of Zavala Elementary School had been lulled into complacency, believing both the school and its students were performing well. In fact, Zavala was one of the worst schools in the district and its students ranked near the bottom on statewide standardized tests. When a new principal took over the helm and requested that the statewide scores be read out at a PTA meeting, parents were dismayed by their children’s abysmal showing, and furious with teachers and school officials for misleading them with inflated grades.

[End of excerpts from Market Education: The Unknown History.]

Articles of Interest

The Condition of Education:
Why School Reformers are on the Right Track

by Lawrence C. Stedman

     Looking at trends in the SAT and the National Assessment of Educational Progress (NAEP), professor Stedman argues that the evidence on student academic achievement is mixed. While the verbal portion of the SAT declined significantly between the mid-sixties and the mid-seventies, the same drop is not to be found on the NAEP reading tests. Nevertheless, he observes, performance levels in key subjects have been low for years, and are in serious need of improvement.
     Stedman's conclusions follow quite reasonably from the data explored in this article, but several sources not available at the time of its publication paint an even bleaker picture. An analysis of international reading trends published by Dr. Petra Lietz in 1996 revealed a significant drop in U.S. literacy between 1970 and 1990 (Changes in Reading Comprehension Across Cultures and Over Time). The same pattern of decline in U.S. reading ability appears in a comparison of the Young Adult Literacy Survey of 1985 with the National Adult Literacy Survey of 1992 (the results of which were not released until 1993). In fact, the International Adult Literacy Survey of 1994 revealed the highest level of U.S. functional illiteracy in more than a decade. Other international data sources on mathematics and science achievement are similarly damning.


Respecting the Evidence:
The Achievement Crisis Remains Real

By Lawrence C. Stedman

     In this paper, a sequel to his Review of The Manufactured Crisis, Stedman summarizes his research on the achievement of U.S. public school students, and refutes the insupportably rosy claims of authors David Berliner and Bruce Biddle. His overall conclusion is that achievement has stagnated for most of this century at a comparatively low level, dropping somewhat during the sixties and seventies, rising in the eighties, and dropping again during the mid-1990s.


Schooling and Literacy Over Time

By Andrew J. Coulson

     This article chronicles the dismal trends in education over the last century, from rampant spending growth, to deterioration in textbooks and the teaching of reading. The full citation is: "Schooling and Literacy Over Time: The Rising Cost of Stagnation and Decline," Research in the Teaching of English, vol. 30, no. 3, October 1996, pp. 311-327.

 
