Posted by Mark Liberman
No. At least, there've been plenty of dumb articles over past decades and centuries, and plenty of smart ones recently. But I have some complaints about one particular recent article in The Economist, "Is the decline of reading making politics dumber? As people read less they think less clearly, scholars fear", 9/4/2025.
I should start by saying that the quality of articles in The Economist is generally very high, in my opinion, and its articles about language are especially good. So why was I disappointed in this one?
Here are its first two paragraphs:
The experiment was simple; so too, you may have thought, was the task. Students of literature at two American universities were given the first paragraphs of “Bleak House” by Charles Dickens and asked to read and then explain them. In other words: some students reading English literature were asked to read some English literature from the mid-19th century. How hard could it be?
Very, it turns out. The students were flummoxed by legal language and baffled by metaphor. A Dickensian description of fog left them totally fogged. They could not grasp basic vocabulary: one student thought that when a man was said to have “whiskers” it meant he was “in a room with an animal I think…A cat?” The problem was less that these students of literature were not literary and more that they were barely even literate.
My first complaint: there's no link to the referenced experiment. We're not even given the title of the publication documenting it, or the names of its authors.
Here's why that matters. Internet search reveals what the publication was: Susan Carlson et al., "They Don’t Read Very Well: A Study of the Reading Comprehension Skills of English Majors at Two Midwestern Universities", CEA Critic 2024. And checking that publication reveals several relevant facts:
- Although the study was published in March 2024, the data were collected between January and April of 2015, more than ten years ago.
- The 85 subjects in the study came from two Kansas regional universities.
- Their average ACT Reading score was 22.4, which corresponds to a "low-intermediate" reading level.
- The authors divided the subjects' Bleak House explanations into three categories: problematic, competent, and proficient.
- Their discussion focused on the students in the "problematic" category: 49 of 85.
In other words, the discussion focuses on the weakest readers in a sample whose scores were low to start with.
Why did they do that? As they explain,
The 85 subjects in our test group came to college with an average ACT Reading score of 22.4, which means, according to Educational Testing Service, that they read on a “low-intermediate level,” able to answer only about 60 percent of the questions correctly and usually able only to “infer the main ideas or purpose of straightforward paragraphs in uncomplicated literary narratives,” “locate important details in uncomplicated passages” and “make simple inferences about how details are used in passages”. In other words, the majority of this group did not enter college with the proficient-prose reading level necessary to read Bleak House or similar texts in the literary canon. As faculty, we often assume that the students learn to read at this level on their own, after they take classes that teach literary analysis of assigned literary texts. Our study was designed to test this assumption.
So the study was designed to test the university and its faculty, not the students. The conclusion, basically, is that the university and faculty failed to fix the problem, and the students didn't fix the problem on their own.
I'm not convinced that being able to read and understand the first seven paragraphs of Bleak House is an appropriate measure for the reading ability of modern American youth. That novel's many words and phrases from the 19th-century British court system make it hard for a modern American reader to grasp the context. I'd be more impressed if the students failed to understand the start of Emma or Tom Sawyer or Alice's Adventures in Wonderland or similar.
But let's grant that Carlson et al. have proved their point, and just note that The Economist's writer badly mis-read (or maybe mis-represented?) their work.
My second complaint is that The Economist's writer goes on to use the Flesch-Kincaid readability measure:
We also analysed almost 250 years of inaugural presidential addresses using the Flesch-Kincaid readability test. George Washington’s scored 28.7, denoting postgraduate level, while Donald Trump’s came in at 9.4, the reading level of a high-schooler.
See my 2015 post "More Flesch-Kincaid grade-level nonsense", which points out that different choices of punctuation strongly modulate the Flesch-Kincaid index, as in this example from one of Donald Trump's speeches, which was used in a stupid newspaper article to prove that Trump operates at a 4th grade level:
It’s coming from more than Mexico. It’s coming from all over South and Latin America. And it’s coming probably — probably — from the Middle East. But we don’t know. Because we have no protection and we have no competence, we don’t know what’s happening. And it’s got to stop and it’s got to stop fast. [Grade level 4.4]
It’s coming from more than Mexico, it’s coming from all over South and Latin America, and it’s coming probably — probably — from the Middle East. But we don’t know, because we have no protection and we have no competence, we don’t know what’s happening. And it’s got to stop and it’s got to stop fast. [Grade level 8.5]
It’s coming from more than Mexico, it’s coming from all over South and Latin America, and it’s coming probably — probably — from the Middle East; but we don’t know, because we have no protection and we have no competence, we don’t know what’s happening. And it’s got to stop and it’s got to stop fast. [Grade level 12.5]
That post closes this way:
It's uncharitable and unfair of me to imply that the author of the Globe piece might be "stupid". But at some point, journalists should look behind the label to see what a metric like "the Flesch-Kincaid score" really is, and ask themselves whether invoking it is adding anything to their analysis except for a false facade of scientism.
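Readers who want to see the mechanism can work it out from the formula itself. The Flesch-Kincaid grade level is just 0.39 times the average words per sentence, plus 11.8 times the average syllables per word, minus 15.59; since commas and semicolons don't end sentences, re-punctuating the same words changes only the first term. Here is a minimal Python sketch of that calculation, using a rough vowel-group syllable counter (polished readability tools use dictionary lookups, so the exact numbers it prints won't match the figures quoted above):

```python
import re

# Rough syllable counter: count vowel groups, dropping a common silent final "e".
# Real readability tools use dictionary lookups, so this is only approximate.
def count_syllables(word: str) -> int:
    word = word.lower()
    if word.endswith("e") and not word.endswith(("le", "ee")):
        word = word[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", word)))

# Flesch-Kincaid grade level:
#   0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
# Only sentence-final punctuation (. ! ?) ends a sentence, which is why
# swapping periods for commas or semicolons raises the score.
def fk_grade_level(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

# The same words, punctuated as six short sentences vs. two long ones
# (lightly simplified from the quoted Trump passage above).
short_sentences = (
    "It's coming from more than Mexico. It's coming from all over South and "
    "Latin America. And it's coming probably from the Middle East. But we "
    "don't know. Because we have no protection and we have no competence, we "
    "don't know what's happening. And it's got to stop and it's got to stop fast."
)
long_sentences = (
    "It's coming from more than Mexico, it's coming from all over South and "
    "Latin America, and it's coming probably from the Middle East; but we "
    "don't know, because we have no protection and we have no competence, we "
    "don't know what's happening. And it's got to stop and it's got to stop fast."
)

print(f"six short sentences: grade {fk_grade_level(short_sentences):.1f}")
print(f"two long sentences:  grade {fk_grade_level(long_sentences):.1f}")
```

Running it on the two versions shows the long-sentence punctuation pushing the score up by several grade levels, even though not a word has changed.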
That's enough complaining for now. But since The Economist's article also frets about secular changes in sentence length, let me refer interested readers to the slides for my talk at the 2022 SHEL ("Studies on the History of the English Language") conference.