
Putting A New Study On Building Knowledge Into Perspective

A recent study finding dramatic boosts in reading achievement from a knowledge-building curriculum has come in for criticism, some of it well-taken. But the study should be seen as just one more piece of evidence casting serious doubt on standard literacy instruction.

The study, which I wrote about in an earlier piece, analyzed the reading scores of two groups of children. Beginning in kindergarten, one group got a curriculum, based on the Core Knowledge Sequence, that builds academic knowledge. The other group got “business as usual,” which generally means a focus on reading comprehension skills, like “finding the main idea,” over substantive content.

Children in both groups had applied to Core Knowledge charter schools that were oversubscribed. The researchers compared the children who got in through a lottery (the “treatment” group) with those who did not and went to other schools (the “control” group). They found that by third grade, the treatment group significantly outperformed the control group on state reading tests.

That was especially true for students living in an area characterized as low-income. Those students made such significant gains that the test-score gap between them and higher-income students was eliminated.

Although the still-unpublished study hasn’t attracted attention from the mainstream press, at least two education outlets have run stories on it. Both raised questions about the study’s findings.

PD and Read-Alouds

A story in Education Week quoted the lead researcher, David Grissmer, as saying that because only one of the nine schools in the study was in a low-income area, it’s possible that factors other than the curriculum affected the results there.

The article continued: “There are other potential differences between the treatment and control groups that could have affected results. Most of the Core Knowledge school teachers received professional development on how to implement the guidelines, while it’s not clear what PD teachers at other schools received. The program also uses different methods than many other reading curricula—relying more heavily on read-alouds, for example.”

But these “differences” aren’t factors that interfere with isolating the effect of a knowledge-building curriculum. They’re factors that are inextricably linked to that kind of curriculum and therefore part of the curriculum’s effect.

The evidence indicates that PD for teachers works best when it’s grounded in the specific content of the curriculum. But the standard elementary curriculum either doesn’t specify content or has extremely thin content, because the focus is on comprehension “skills.” So the PD may have been better in the Core Knowledge schools, but that was possible only because the schools were using a curriculum with specified rich content.

Similarly, a heavy reliance on read-alouds is a component of any effective elementary knowledge-building curriculum. In a classroom using the standard approach, the teacher may read aloud for 10 or 15 minutes from a text chosen not for its topic but for how well it lends itself to modeling a comprehension skill. Then students spend most of the rest of the reading block, which averages two hours, reading on their own to practice the skill. In a school using a knowledge-building curriculum, teachers spend half an hour or more reading aloud from texts too complex for students to read on their own because that’s the most effective way to build knowledge of a new topic before students are fluent readers.

Valid Cautions About the Data

Another critique of the study came from the Hechinger Report’s Jill Barshay. Barshay dug into the data and came up with several valid cautions:

  • At the school serving low-income families, she wrote, the median family income was over $50,000. Schools in poorer districts might not achieve the same results. (While Barshay characterized these figures as applying to the school, they actually applied to the district. Average income levels at the school may have been even higher—or lower. It’s puzzling that the study didn’t include school-level demographic data, which is usually publicly available.)
  • The data from the low-income school was based on only 16 students, a sample size so small that it’s hard to be confident of the results. (There were over 500 students in the treatment group as a whole, though, so this caveat doesn’t affect the general finding that Core Knowledge students got a significant boost on reading tests.)
  • The reading scores of the Core Knowledge students didn’t increase in fourth, fifth, and sixth grade, suggesting that they got the entire benefit of the curriculum by third grade. That runs counter to the theory that acquiring knowledge is a cumulative process that results in ever-increasing gains in comprehension, at least up to a point.

I confess that I missed these points when I wrote about the study, and I’m grateful to Barshay for bringing them up. It’s important to subject scientific evidence to critical review, and I hope that the authors of the study discuss these limitations more prominently in their final write-up of the results.

Barshay made two other points I consider less convincing. She found it highly significant that roughly half of the 1,000 families that won admission to the Core Knowledge schools chose not to enroll their children. In an email, Barshay explained that in her view, that made the sample less random: Maybe those parents realized their children weren’t likely to do well at a Core Knowledge school, skewing the sample in favor of kids who were likely to succeed. True, but then again, maybe an equivalent number in the control group would have decided not to enroll their children if they had gotten in, for the same reason. In that case, the two groups would still be comparable.

Barshay also argued that because the Core Knowledge schools were charters, they may not have been comparable to the schools the control group ended up attending. “It’s impossible from the study design to distinguish whether the Core Knowledge curriculum itself made the difference or if it could be attributed to other things that these charter schools were doing, such as teacher training or character education programs,” Barshay wrote. (Barshay told me she was speaking in general terms about features of charter schools rather than primarily about the schools in the study.)

As far as I can tell, it’s possible that some control group students also ended up at charters, albeit not Core Knowledge charters. But leaving that question aside, it’s true that charters often differ in significant ways from traditional public schools—and you could also argue that parents who make the effort to apply to charters are likely to be more involved in their children’s education.

However, the researchers discounted those differences because, on average, charter schools in non-urban areas have no better track record than traditional public schools. (All schools in the study were in non-urban areas.) That doesn’t address whether the specific charter schools in this study differed from the schools that the control group attended, but it would be impossible for researchers to control for all the possible differences between the Core Knowledge schools and others in the study.

It’s Hard to Run Experiments on Knowledge-Building

And therein lies the rub, or part of it. It’s hard to control all the variables in any education study—there’s always attrition, there’s always variation in the way teachers deliver whatever the “intervention” is, etc. That’s even more difficult when a study extends over several years, as this one did. And it seems that the only way to see the effect of a knowledge-building curriculum on standardized reading tests is to run a study that lasts several years.

Another commentator on the recent study, teacher and author Nathaniel Hansford, argues that “this is why we need to look at multiple studies and studies with different designs to be sure that an intervention or pedagogy is effective.” I agree in theory, but in practice that could leave us waiting indefinitely.

The Core Knowledge study took 14 years to come out—those kindergartners are now 19—and I’ve been told it cost an enormous amount of money. I don’t know of any similar long-range studies in the pipeline, and even if one is underway, we’re unlikely to see the results for over a decade. Do we really want to wait that long before re-evaluating our approach to reading comprehension?

Fortunately, we don’t have to. Barshay and Hansford have each said that this one study isn’t “convincing” or “definitive” proof that a knowledge-building curriculum boosts reading comprehension. That’s true, but no one is claiming that it is—or at least, I’m not. Instead, we need to see it as evidence that, combined with other evidence, strongly suggests that a well-implemented knowledge-building curriculum is likely to work better than what most schools are currently doing.

Other Evidence on Building Knowledge

Some of that other evidence comes from experimental studies, but the effects—while statistically significant—have been small. That may be because those studies generally haven’t lasted more than a year or two, which is too soon to see results—at least, results on standardized reading tests.

Those tests are considered the gold standard for measuring progress in reading comprehension, but, as Professor Hugh Catts has argued, they can be seriously misleading. Different measures of comprehension often disagree. In one study, researchers used four different comprehension tests to measure the same group of students and found that over half the time, students identified as poor readers by one test were put in a different category by another. The same was true for readers identified as being in the top 10%.

One likely reason for that variation is that the passages on standardized reading tests are on random topics that students haven’t learned about in school. The theory is that the tests are assessing abstract reading comprehension ability, not content knowledge. But if students lack knowledge of the topic, or of enough of the vocabulary used in a passage, they may not be able to understand the passage well enough to demonstrate their “skills.”

So this supposedly scientific approach to measuring progress in reading comprehension—which is used not only by teachers and government authorities but also by many researchers—is due for a serious re-examination. It would be far more reliable, and fair, to test students’ ability to make inferences or find the main idea of passages on topics they’ve actually learned about.

We also have some accidental experiments that suggest the benefits of building knowledge at the elementary level. One occurred in France several decades ago, when the government abandoned its longstanding national curriculum for elementary schools, which focused on building knowledge of specified, rich content. The result, as E.D. Hirsch, Jr., has pointed out, was an overall decrease in student achievement and a widening of the gap between students from high- and low-income families.

There’s also evidence from the United States. Researchers at the Thomas B. Fordham Institute discovered that children who got 30 more minutes a day of social studies than average had higher reading scores by fifth grade. The benefit was greatest for those from low-income families and negligible for those from the highest-income families. At the same time, an extra 30 minutes a day on reading was not correlated with higher reading scores. One likely reason is that social studies was providing students with the knowledge and vocabulary they needed to understand the passages on reading tests—especially if they were unlikely to pick up that knowledge at home.

And of course, we have plenty of evidence that the standard skills-focused approach to comprehension isn’t working for most students. Only about a third score proficient or above on national reading tests, with little or no change in decades. Large gaps between students at the upper and lower ends of the socioeconomic spectrum persist.

Plus, even if there’s not a lot of laboratory evidence that building knowledge boosts comprehension, there’s ample undisputed evidence that having knowledge—either of the specific topic or of general academic information—is correlated with better reading comprehension.

Not to mention that there’s a lot of anecdotal evidence from the increasing number of schools that have adopted knowledge-building curricula. Teachers are reporting that students are more engaged, better able to make connections between different texts, and possessed of more sophisticated vocabularies. One teacher in an urban, low-income school recently told me he’d overheard his seventh-graders discussing Kantian ethics. Another in a small Southern city has seen students who are still learning English write well-informed essays about the American Revolution. These are marked differences from what those teachers observed before.

It’s true that evidence based on observations can be misleading. For example, it can look like students are reading when they’re just guessing at words. But it’s relatively easy to do experiments showing whether phonics instruction works. In an area like comprehension, which is much trickier to measure, it makes sense to also look at what’s actually going on in classrooms.

That’s not to say that all that schools need to do is adopt a knowledge-building curriculum. To implement that kind of curriculum well, teachers need to understand why it’s important, and they need support in delivering it.

Nor will the adoption of knowledge-building curricula solve all our social ills or even completely level the academic playing field. But given all the available evidence, it would almost certainly make things a lot better than they are now. As one teacher whose school made the shift said to me, “Why not try it? What have you got to lose?”
