CCSS authors argued that high-school students were unprepared to read the complex texts of college and careers, basing their conclusions on Williamson’s (2008) computation of the average Lexile (L) levels for 75 11th- and 12th-grade content-area textbooks. The average of these 75 texts, at 1123L, was 259L lower than college texts and 125L lower than workplace texts. This difference drove the accelerated staircase of text complexity (based on Lexiles) from K through Grade 12. Our study asks whether text complexity across distinctly different content areas can be treated as a monolith based on Lexiles, especially in relation to vocabulary.
To address the nature of complexity in different content-area textbooks, we examined two databases. The first was Bormuth’s (1969) database, which systematically analyzed texts from nine content areas. The second dataset consisted of texts from the middle unit of the median textbook in the Williamson analysis for four content areas: chemistry, mathematics, literature, and history. Main text and sidenotes were analyzed separately according to: (a) Lexiles and their components (sentence length, word frequency), (b) word-frequency bands (e.g., frequent, very rare), and (c) word types (e.g., proper names, academic words).
Results of the first analysis indicated a significant difference among content-area texts in word frequency but not in sentence length or Lexile. The average frequency of words was higher in literature texts than in chemistry and social studies texts but not mathematics texts. Results of the second analysis indicated that the percentages of rare words varied across the four content areas and that the rare word types were distinct to each content area.
The use of a single metric such as Lexiles to describe all high-school texts across all content areas fails to acknowledge the significant variation across content areas and within textbooks. We conclude that Williamson’s analysis was inadequate because it neither accounted for differences between content areas nor measured the complexity of textbooks across sections.
Following the mandates of the CCSS, the Lexile Framework for Reading is widely used to establish the complexity of texts. Recently, Lexile’s formula has been used not only to measure the complexity of a text but also to create simplified versions of informational articles at different levels. In theory, this simplification accommodates differences in reading ability. But when simplification is based on quantitative measures alone, students’ comprehension may not improve. This study examined how simplified versions of a text affect reading comprehension.
All 335 students from grades 4-8 at three proficiency levels read one of five versions of an informational text retrieved from Newsela and then took a comprehension test. Results from a 3-way ANOVA showed no significant interaction among grade, reading level, and text condition. Pairwise comparisons showed that below-level readers’ scores were lower than those of on-level or above-level readers only when they read the most heavily simplified text levels. Regression analysis showed no significant contribution of text level to overall comprehension scores. Additionally, analyses showed that different types of comprehension were affected differently by simplifying the text. For example, questions requiring reasoning and evidence yielded lower comprehension scores with simplified text.
Heidi Anne Mesmer will consider the findings in relation to a recent study that she has authored and that has not previously been presented at LRA: Does One Size Fit All? Exploring the Contribution of Text Features, Content, and Grade of Use on Comprehension (in press, Reading Psychology). Results of this study indicated that texts did have differing levels of various word features along both grade and content lines, especially in the area of sentence length. In addition, content and grade moderated the relationship between sentence length and comprehension.
Science is built on replications and elaborations of existing research. The two studies in this session add to a growing body of work that indicates that basing evaluations on sentence length, the dominant variable in the Lexile Framework, fails to capture the complexity of texts for students. In particular, as the ideas of texts become more complex in content areas, attention needs to be focused on the demands of new vocabulary, not simply the length of sentences.