How Changes in U.S. Reading Instruction Compare Internationally
The Organisation for Economic Co-operation and Development recently issued a new book-length report, “Measuring Innovation in Education 2019.” The authors use the PISA, TIMSS, and PIRLS databases to examine changes in a slew of instructional and system practices across the OECD nations between 2006 and 2016. Amidst the jargon and complicated charts, there are a number of interesting takeaways. Today, I’ll take a look at what they found with regard to reading instruction (there’s much more in the volume, and I’ll try to get to it in future posts). The vast majority of the findings on reading, including all of those noted below, are for fourth-graders.
Before I turn to the findings, a note regarding the volume’s emphasis on “innovation”: I get why that title was used. “Innovation” sells. But, as much as anything, the volume helps make clear just how amorphous that term really is. You’ll see what I mean.
First off, despite energetic Common Core-inspired efforts to change how the U.S. teaches English language arts (ELA), there was little evident change between 2006 and 2016 when it came to key ELA practices in U.S. schools. While the average OECD nation saw a five percentage-point bump over that time in the share of students saying their teachers “ask them to identify the main ideas of a text at least once a week,” the U.S. figure didn’t change (Figure 4.6). Similarly, while there was a 12 percentage-point increase in the OECD average when it came to the share of fourth-graders who said teachers asked them “to draw inferences and generalizations from a text at least once a week” between 2006 and 2016, the U.S. share rose just two percentage points (Figure 4.5).
It turns out that over 90 percent of U.S. teachers were already regularly doing these Common Core-endorsed practices back in 2006—at a much higher rate than teachers in most OECD nations. It’s possible, of course, that U.S. teachers who were already regularly doing these things started to do them even more frequently, but the top-line story is that—for all the Common Core-induced hoopla—there was little obvious change in U.S. practice, while other nations actually spent 2006-2016 doing more of what the U.S. was already doing back in the Bush years.
Second, for better or worse, between 2006 and 2016, there was a clear OECD-wide shift toward giving students less choice over reading. The share of fourth-graders saying that, at least once a week, their teachers allowed them to read “items of their own choice” during lessons fell by eight percentage points, and by more than three times that in Italy, the Slovak Republic, and Lithuania (Figure 7.1). Meanwhile, the share of students who said teachers gave them time “to read books of their own choice” at least once a week also fell by eight percentage points, and by more than three times that in Finland, Norway, and Denmark (Figure 7.2). The U.S. held steady on in-lesson reading choice (at 92 percent) but declined eight percentage points on choice of books (to 87 percent). Even so, the U.S. remained far above the OECD norm on both counts.
Third, I did find at least one massive shift in U.S. practice when it came to reading, though it had less to do with pedagogy than with technology. The U.S. massively outpaced the OECD norm when it came to fourth-graders reporting that they “use computers to write stories and texts at least once a week.” Back in 2006, just 21 percent of U.S. students said they did so, a bit below the OECD average of 24 percent. By 2016, 53 percent of U.S. students said they were regularly writing on computers, compared to an OECD average of 34 percent (Figure 4.7).
Finally, relative to the international community, the U.S. has seen a decline in the role of teacher aides but an increase in the time teachers spend one-on-one with struggling students. When students struggle in reading, one response is to have a teacher aide or adult volunteer work with them. Across the OECD, teachers reported that just seven percent of such students had an aide or adult volunteer working with them in 2006; by 2016, the figure was up to 13 percent. In the U.S., the figure had been 16 percent in 2006—or about twice the OECD average—but slumped to 11 percent by 2016 (Figure 10.1). At the same time, U.S. teachers reported an increase in the time they spent working one-on-one with students who fell behind, from 89 percent in 2006 to 96 percent in 2016. This seven-point increase compared to a static OECD average, which sat at 89 percent in both 2006 and 2016 (Figure 10.3).
Three thoughts on all of this.
The whole notion of “innovation” can obscure more than it reveals. “Innovation” tends to be imbued with the presumption that different is good. And yet, there’s no reason to imagine that the changes reported here are necessarily good for students. Thus, saddling this volume with the tag “innovation” can get in the way of sensible discussion about what the results mean for kids. After all, around the world, students are generally getting less opportunity to occasionally choose what they read—it’s not clear that this is either “innovative” or the opposite. It just is.
The data also make clear that our mental pictures of how other nations approach schooling can be unreliable, which counsels caution when arguing that the U.S. needs to be more like Country X or Y. For instance, the numbers make it appear the U.S. was doing more “Common Core-like” reading instruction in 2006—a few years before the Common Core was introduced—than were most OECD nations. And yet, the Common Core was depicted as a huge change in how the U.S. would teach reading, with advocates suggesting that the U.S. would be catching up with what many other nations were doing.
The trick in all of this, of course, is that it’s incredibly difficult to know if the kinds of shifts documented here have practical import. Even if we had compelling research on the impact of school aides or regular one-on-one time with teachers, we don’t know how such results would play out if the strategies were employed nationwide. Presumably, the results would depend mightily on the quality of the aides, the training of teachers, what aides did, and much more. Thus, while many may turn to a report like this hoping to learn the “new new thing” that they should be doing, the volume is more useful for the questions it raises than the answers it provides.
Frederick Hess is director of education policy studies at AEI and an executive editor at Education Next.
This post originally appeared on Rick Hess Straight Up.