Text Complexity Systems: A Teacher’s Toolkit


    Hiebert, E. H. (2018). Text complexity systems: A teacher's toolkit. Santa Cruz, CA: TextProject, Inc.

    By knowing the strengths and weaknesses of a text complexity system, a teacher can better match student to text. What are the strengths and weaknesses of a specific text complexity system? When should a teacher use Guided Reading Levels to match students to text? When is it better to use Lexiles?

    Reading always involves a reader and a text. In working to support their students’ interactions with texts, teachers often wonder if they’ve provided appropriate texts. Are the levels appropriate? Too high? Too low? In response to these questions, there seems to be no shortage of systems that advise teachers about text complexity and suitability. The texts in most classrooms are organized by levels: all the texts with one level are in one bin, texts with other levels in other bins. If teachers are in a school that uses a particular independent reading program, texts are organized in yet another manner, based on ATOS, a readability formula developed by Renaissance Learning. And when it comes to state assessments, there is often yet another system—Lexile measures, a leveling tool developed by Metametrics. 

    Presently, the two most widespread systems for establishing text complexity are a qualitative system, Guided Reading Levels, and a quantitative method of predicting text complexity, the Lexile Framework. This paper describes these systems, clarifying how each supports teachers in their goal of increasing students’ capacity as readers. But prior to these descriptions, I give a short review of a pre-digital text complexity system. As the discussion will show, this “old” system continues to provide useful information that current systems can leave ambiguous. 

    Text Complexity in the Not-too-distant Past

    In the pre-digital days, text complexity was often established by comparing the words in a text with those on a prescribed list. The words on the list were correlated with particular grade levels. Sentence length came into play, but the first analysis involved matching the words in a text with the words on a list. Table 1 gives information from one of the pre-digital text complexity formulas—the Dale-Chall (Dale & Chall, 1948) readability formula. The 12 texts in Table 1 have all been used in core reading or literature programs during the past seven years.

    Table 1

    Information from the Dale-Chall Readability Formula

    Narrative Texts

    Title | Hard Words per 100 | D-C Grade Level | 5 Hardest Words
    Amos and Boris | 2.7 | 3-4 | phosphorescence, sextant, evaded, mackerel, treading
    James and the Giant Peach | 2.7 | 3-4 | spiker, nastier, jiffy, ramshackle, laurel
    Red Badge of Courage | 7.0 | HS | sinuous, malediction, imprecations, hillock, exhortations
    The Horned Toad Prince* | 4.7 | 5-6 | sassy, blustery, critter, lassoed, fella
    The Little House | 0.83 | 1-2 | tenement, skating, dumped, ripen, daisies
    Zlateh the Goat | 2.5 | 3-4 | gulden, furrier, dreidel, splendor, thatched

    Informational Texts

    Title | Hard Words per 100 | D-C Grade Level | 5 Hardest Words
    A Night to Remember | 4.7 | 5-6 | davit, shudders, hefty, rummage, flopper
    Black Bear Cub | 2.0 | 2 | slushy, plods, munching, forage, aspen
    Boy, Were We Wrong About Dinosaurs | 1.5 | 2 | waddle, tendons, rhinos, scaly, clumsily
    How Ben Franklin Stole the Lightning | 2.3 | 3-4 | lickety, odometer, newfangled, hilarious, bifocals
    Smokejumpers | 2.3 | 3-4 | retardant, parachutist, gulch, goggles, embers
    The Life and Times of Ants | 3.0 | 3-4 | pheromone, mantises, ramble, antennae, emits

    * Without Spanish words

    As shown in Table 1, the Dale-Chall text complexity formula identifies words in a text that are predicted to be rare. The number of rare words per 100 words of text ranges from 0.83 (The Little House) to 7.0 (Red Badge of Courage). Examining the rare words, some of which are included in Table 1, further clarifies the challenge a text poses for individual readers or groups. For example, two texts that have been identified as exemplar Grade 2-3 texts, Amos and Boris and Boy, Were We Wrong About Dinosaurs, differ in the nature of their rare words: Three of the rare words in Amos and Boris have three or more syllables, while only one word in Boy, Were We Wrong About Dinosaurs has three syllables—clumsily.
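
    The core computation behind Table 1 is easy to express digitally. The sketch below (in Python) is a minimal illustration of the hard-words-per-100 figure; the familiar-word list is a tiny hypothetical stand-in for the roughly 3,000-word Dale-Chall list, so the output is illustrative only.

        import re

        def hard_word_rate(text, familiar_words):
            """Number of words not on the familiar-word list, per 100 running words."""
            tokens = re.findall(r"[a-zA-Z']+", text.lower())
            if not tokens:
                return 0.0
            hard = [t for t in tokens if t not in familiar_words]
            return 100 * len(hard) / len(tokens)

        # Illustrative only: a tiny stand-in for the familiar-word list.
        familiar = {"the", "bear", "lies", "in", "of", "a", "and", "at",
                    "mother", "tree", "edge", "small", "shade"}
        sample = "Mother Bear lies in the shade of a pine tree at the edge of a small clearing."
        print(round(hard_word_rate(sample, familiar), 1))  # hard words per 100 in this sample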

    Using the Dale-Chall readability formula to compute text complexity was a tedious process for teachers. There were, however, some advantages to this system. First and foremost, the emphasis was on vocabulary, allowing teachers to get a sense of the words in texts that would challenge their students. Second, since teachers did the computations themselves rather than relying on externally assigned text levels, their examination of texts meant that they knew what was required of readers. When teachers don’t do the calculations, and when the vocabulary recognition demands of texts are not presented alongside text complexity levels, teachers are left guessing about what word recognition a text will require of their students. 

    Text Complexity Today

    Guided Reading Levels 

    Judgments by human beings about the level of a text have a long history, even longer than the quantitative systems that are frequently thought of as defining text complexity. As early as 1846, McGuffey labeled texts with grade equivalents. At the present time, Guided Reading Levels (GRL) is the most widely used human judgment system in American classrooms (Fountas & Pinnell, 1996). The system has 26 levels—from A to Z—that cover texts from the earliest levels to the highest. According to Fountas and Pinnell (2012), GRLs are established on the basis of 10 dimensions: genre, text structure, content, themes and ideas, language and literacy features, sentence complexity, vocabulary, words (number and difficulty), illustrations, and book and print features. When a GRL for a text is published, however, the specifics for these 10 factors are not reported. Rather, a single level for a text is given. 

    Table 2

    Lexiles and Guided Reading Levels for 12 Widely Used Texts

    Title | Lexile | Mean Sentence Length | Mean Word Frequency | Guided Reading Level | Typical Grade Level of Use

    Narrative
    Amos and Boris | 900 | 13.63 | 3.53 | S | 2
    James and the Giant Peach | 910 | 12.27 | 3.31 | S | 4
    Red Badge of Courage | 910 | 13.91 | 3.55 | Y | 7
    The Horned Toad Prince* | 900 | 13.63 | 3.53 | L | 4
    The Little House | 900 | 15.14 | 3.73 | L | 2
    Zlateh the Goat | 920 | 14.77 | 3.63 | V | 4

    Informational
    A Night to Remember | 910 | 13.41 | 3.48 | U | 7
    Black Bear Cub | 910 | 14.16 | 3.58 | M | 2
    Boy, Were We Wrong About Dinosaurs | 910 | 13.44 | 3.48 | N | 2
    How Ben Franklin Stole the Lightning | 910 | 15.25 | 3.62 | R | 4
    Smokejumpers | 910 | 12.78 | 3.43 | S | 4
    The Life and Times of Ants | 900 | 12.63 | 3.36 | Q | 4

    * Without Spanish words

    The GRLs for the 12 illustrative texts are provided in Table 2. Lower-grade texts, such as The Little House, are assigned lower levels than higher-grade texts, such as Red Badge of Courage. Within a grade, however, the levels can vary. Guided Reading Levels for texts that are typically assigned to the Grade 4-5 span range from L (The Horned Toad Prince) to V (Zlateh the Goat)—a span of 11 levels. The designation of the GRL does not indicate what makes Zlateh the Goat so much harder than The Horned Toad Prince. Indeed, the Dale-Chall vocabulary summary in Table 1 suggests that the vocabulary in The Horned Toad Prince is more challenging than the vocabulary in Zlateh the Goat. 

    The influence of the 10 features on the assignment of GRLs has not been described by either the developers or the publishers of the levels. Variability might be expected within the features in individual texts. For example, one text might have easy vocabulary but a challenging text structure, while another text might have short sentences but unfamiliar vocabulary. All recent analyses of texts based on GRLs confirm that there is considerable variation within levels. Indeed, variation within a level can be so substantial that one level overlaps another level (Koons, Elmore, Sanford-Moore, & Stenner, 2017; Toyama, Hiebert, & Pearson, 2017). For example, the texts for Level D fell into the same range as Level C texts in Koons et al.’s analysis of 1,000 leveled texts, while texts at three sets of adjacent levels (G and H, I and J, and K, L, and M) had similar means and ranges. 

    In the Cunningham, Spadorcia, Erickson, Koppenhaver, Sturm, and Yoder (2005) analysis of 18 features of a set of leveled texts, only one measure predicted the assigned levels of texts: the number of words in a text. The other 17 variables—four other discourse-level measures (e.g., number of unique words in texts), four sentence-level measures (e.g., number of morphemes per sentence), and nine word-level measures (e.g., percentage of the 100 most-frequent words in written English)—did not predict level assignment. Similarly, Hatcher (2000) found that the number of words in a text had the highest correlation with text levels, while four other measures (number of pages, length of longest sentence, various syntactic features, and number of words with six or more letters) correlated less strongly with text levels. 

    The number of words in a text can influence beginning readers’ attention to it. Beginning readers are unlikely to stay with the task of reading if a book extends beyond their endurance level, although important exceptions such as Hop on Pop (Dr. Seuss, 1963) and Green Eggs and Ham (Dr. Seuss, 1960) can be identified. However, vocabulary recognition demands loom large for beginning and struggling readers. To date, the manner in which vocabulary recognition figures into the assignment of levels is uncertain. 

    The Lexile Framework

    Between 1923, when Lively and Pressey offered the first quantitative formula for calculating the complexity of texts, and the early 1980s, approximately 200 formulas were developed (Klare, 1984). For many generations of reading teachers, these formulas had to be applied manually. Computers changed the process. Texts could quickly be analyzed digitally, provided that someone had developed a formula and the texts had been digitized. This opportunity spawned a second generation of text complexity formulas, of which the Lexile Framework (Lexiles) has dominated the marketplace. 

    Similar to first-generation text complexity formulas, Lexile measures reflect syntax and word frequency/vocabulary (Stenner, Burdick, Sanford, & Burdick, 2007). Lexile measures base the first component on the average number of words in a sentence. It is in the analysis of vocabulary that the digitized analyses of texts differ most from the earlier formulas. With digitization, the contents of texts can be retained in databanks. These databanks make it possible to establish the frequency of each word in the lexicon in relation to all other words. A word is assigned a rank based on its frequency within the databank. The frequency of the most common words in written English was established almost a century ago (Thorndike, 1921); now, however, the frequency of obscure words (e.g., davit, sextant) can be established instantaneously.
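
    To make the idea of a frequency databank concrete, the following sketch builds word-frequency ranks from a small collection of digitized sentences. The two-sentence corpus is a hypothetical stand-in; the databank behind the Lexile Framework is built from many millions of words.

        import re
        from collections import Counter

        def frequency_ranks(corpus):
            """Rank every word by how often it appears across the corpus (1 = most frequent)."""
            counts = Counter()
            for text in corpus:
                counts.update(re.findall(r"[a-z']+", text.lower()))
            return {word: rank for rank, (word, _) in enumerate(counts.most_common(), start=1)}

        ranks = frequency_ranks([
            "The sailor raised the sextant to the sky.",
            "The davit lowered the lifeboat toward the sea.",
        ])
        print(ranks["the"], ranks["sextant"], ranks["davit"])  # a common word outranks the rare ones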

    The word frequency/vocabulary measure that determines a Lexile score uses the average of the frequencies of all words in a text. However, a problem arises with this procedure. A group of 2,500 morphological families (e.g., help, helped, helping, helps, helper, helpful, helpless, unhelpful) accounts for 91% of all of the words in texts from grade 1 to college and career (Hiebert, Goodwin, & Cervetti, 2018). The other 9% of the words in texts are accounted for by an estimated 300,000 words. The Lexile formula attempts to deal with this discrepancy statistically. However, the distribution of words in written English is sufficiently skewed that the variation in the word frequency measure is limited. As illustrated in Table 2, the range for word frequency is small (0.42), while the range for average words in sentences is large (2.98), even in a group of texts within a limited Lexile span. As a result, word frequency/vocabulary is less of a factor in predicting the Lexiles of texts than sentence length (Cunningham, Hiebert, & Mesmer, 2018; Deane, Sheehan, Sabatini, Futagi, & Kostin, 2006).

    To illustrate the influence of syntax on Lexile scores, I chose a 250-word section of A Night to Remember. I made changes in syntax but retained the rare vocabulary. The changes from the original (Example A) are evident in Example B. 

    Example A: Thayer thought of all the good times he had had and of all the future pleasures he would never enjoy. He thought of his father and his mother, of his sisters and brother.

    Example B: Thayer thought of all the good times he had had. He thought of all the future pleasures he would never enjoy. He thought of his father and mother. He thought of his sisters and brother.

    The word frequency stayed the same but the average sentence length changed from 14.11 to 8.52. As a result, the Lexile moved from 890 (grade 4-5 band) to 590 (lower half of the grade 2-3 band). These changes in sentence length do not mean, however, that the text is appropriate for second graders. For second graders, contemplating one’s life in the face of imminent death (as Thayer is doing in the text) is likely a challenging concept. 
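
    The arithmetic behind this example can be sketched in a few lines. The code below computes mean sentence length for Examples A and B; because the excerpt contains only a few sentences, its values differ from the 14.11 and 8.52 reported for the full 250-word passage, but the direction of the change is the same.

        import re

        def mean_sentence_length(text):
            """Average number of words per sentence, splitting on end punctuation."""
            sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
            words = re.findall(r"[A-Za-z']+", text)
            return len(words) / len(sentences)

        example_a = ("Thayer thought of all the good times he had had and of all the future "
                     "pleasures he would never enjoy. He thought of his father and his mother, "
                     "of his sisters and brother.")
        example_b = ("Thayer thought of all the good times he had had. He thought of all the "
                     "future pleasures he would never enjoy. He thought of his father and mother. "
                     "He thought of his sisters and brother.")
        print(round(mean_sentence_length(example_a), 2))  # longer average sentence length
        print(round(mean_sentence_length(example_b), 2))  # shorter average, same vocabulary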

    When thousands of texts are analyzed, the comparative difficulty of texts is apparent. For example, the assignments of 240 Lexiles to Cowgirl Kate and Cocoa (Silverman, 2006), 930 Lexiles to Volcanoes (Simon, 2006), and 1540 Lexiles to Lincoln’s (1865) Second Inaugural Address indicate differences in complexity. When texts in smaller ranges are examined as in Table 2, the assignments of texts can be less differentiated. The illustrative texts were chosen to be in the 900 to 920 Lexile range, which places the texts approximately in the middle of the grade 4-5 band. The typical grades of use, however, range from grade 2 through high school. 

    Syntax can make a difference in comprehension. More complex ideas are usually expressed in sentences with clauses and phrases, which increase the comprehension demands on readers. But when editors attempt to turn complex sentences into simple ones, texts are not necessarily easier to comprehend. Short sentences tend to have fewer links between ideas, requiring readers to make more inferences (Pearson, 1974). 

    Changes such as the ones illustrated in the example from A Night to Remember are relatively easy to make, leading several companies to provide sets of texts on the same topic at different Lexile levels. For example, one company provides a text in five versions, with Lexile scores that range from 600 to 1200-1300. An analysis shows that differences across versions of the same text reflect changes in sentence length and little change in word frequency/vocabulary (Hiebert, 2018). A recent study with ninth graders found that students, even those with below-grade-level proficiency, performed similarly on the high- and low-Lexile passages (Lupo, 2017). More information on these types of adaptations can be found in Hiebert (2018). 

    A Text Complexity Toolkit for Teachers

    Teachers no longer need to examine texts manually to determine complexity, since this information typically accompanies texts. An overall GRL or Lexile score gives a sense of where a text fits relative to thousands of other texts, but these designations do not inform teachers about the vocabulary recognition demands of texts. To increase their students’ capacity with reading complex text, teachers need information on vocabulary demands to understand which texts will aid in “growing” their students’ reading and thinking. GRLs and Lexile scores provide an initial step in establishing the direction for instruction, but teachers’ expertise will always be needed in determining the difficulty of texts for their students. What follows are five tools for teachers to apply in better understanding the complexity of texts. 

    Know the strengths and gaps in current text complexity systems

    In addition to providing a snapshot of the complexity of a text relative to tens of thousands of other texts, GRLs and Lexile scores indicate the complexity of sentences. With movement upward through either GRLs or Lexiles, sentence structure can be expected to become more complex. This shared emphasis on sentence length means that the two systems correlate well with each other (Koons et al., 2017). Information about sentence length is useful in planning instruction. For example, the high Lexile scores of two primary-level texts in Table 2 reflect long sentences—The Little House (15 words per sentence) and Black Bear Cub (14 words per sentence). Most young readers will need substantial support to parse the clauses and phrases embedded in the sentences of these texts. Consider a sentence from Black Bear Cub: “Tired and full from the sweet honey, Mother Bear lies, cooling herself in the shade of a pine tree at the edge of a small clearing” (Lind, 1994, p. 17). A substantial amount of information is conveyed in this sentence, including that pine trees provide shade, which has the effect of cooling a living being. 

    Similarly, the assigned GRL of a text likely relates to the use of particular text and sentence structures. Texts at GRLs A through D frequently use a repetitive text and sentence structure where objects or experiences are enumerated (e.g., children turn a box into a car, a table, etc.). As levels increase, text structures take on more conventional narrative and informational forms (Koons et al., 2017).

    In programs based on GRLs, a level is likely to be highly indicative of the number of words in a text. Such information can aid primary-level teachers in attending to their students’ stamina in reading independently—unarguably one of the most critical competencies of proficient reading. With the knowledge that target texts for end of grade 1 (Level J) are approximately twice the length of target texts at the beginning of grade 1 (Level E), teachers can consciously attend to increasing students’ stamina. 

    Teachers also need to be aware of the additional knowledge students must bring to be successful with texts. Concepts and vocabulary are essential to text comprehension. Specific information on unique or challenging vocabulary is not readily apparent from either a Lexile score or a GRL. When vocabulary plays an ambiguous role in the assignment of text complexity, teachers need additional tools with which to determine the instruction required for particular texts and students. Four such tools are described in the remainder of this paper. 

    Be cautious about assigning students to a single level—of anything

    Readers’ background knowledge and their recognition of the concepts and words in texts constitute the strongest influences on readers’ comprehension (Cromley & Azevedo, 2007; Ozuru, Dempsey, & McNamara, 2009). When vocabulary is not a major source for establishing text complexity, readers’ performances with texts designated as similar in complexity can be inconsistent. For example, the titles and three hardest words in texts from a popular leveled text series (Reading A-Z) illustrate the disparity within a level (in this case, J): We Do Yoga (yoga, cobra, pose), Our Class Flag (flakes, monkey, soccer), and Josh Gets Glasses (firefighters, cowboys, math). Each topic presents a unique set of words. Most of the hard words are not easily decodable, but students’ familiarity with soccer and math can be expected to differ from their knowledge of cobra and pose. Students’ performances with texts can be expected to vary considerably when the topics and concepts differ.

    Further, labeling students according to their reading proficiency for reading instruction, such as “700 Lexile” or “Level F,” can affect both students and teachers (Hiebert, 1983). Even young children are aware of the pecking order that comes from classroom-imposed labels. Students who recognize themselves to be in the lower echelons of their class’s readers can be affected by a sense of inadequate competence. In addition, the adults in students’ lives—teachers and parents—can make evaluations and tailor their expectations on the basis of these labels. 

    The potential consequences of placing students in groups do not mean that small-group work should be eliminated from classrooms. Small groups are a necessity if teachers are to provide the specific support students need. The issue is the longevity of the groups and the permanence of their labels. The way in which teachers talk with students about groups can also make a difference. 

    Be cautious about treating a text as having a uniform level of complexity throughout

    Just as readers are not unitary in reading proficiency, so too texts are not unitary in their demands on readers. The assignment of a single letter or number to a text fails to acknowledge the changes that occur throughout a text. The assignment of a 910 Lexile or a GRL of Y to Red Badge of Courage (Red Badge), for example, leaves the impression that the difficulty for readers remains static throughout the entire text. For many students, however, the first chapter or two of any text will be the most difficult. The first chapter of Red Badge immerses readers in a young private’s thoughts about warfare. The style of Red Badge is unusual, and the colloquial language of the soldiers will be challenging for contemporary American students. Thus, the most significant challenge will come at the beginning of the text, as students become accustomed to the perspective and language. 

    Informational texts are even more likely to put the weight of the new information in the first paragraph or two. In Smokejumpers, for example, the first pages are filled with vocabulary that is likely new to many students—parachutist and even the word smokejumper. Once readers move beyond the beginnings of a text, they have background knowledge for the remainder of the text. A single assignment of text complexity fails to recognize the development of background knowledge over the course of a text. A similar observation can be made about vocabulary. In current quantitative tools, every appearance of a word contributes to the complexity score of a text. However, by the 10th appearance of a rare name such as Reba (in The Horned Toad Prince), readers presumably have assigned meaning to the word. 

    When struggling readers become discouraged, they apply a strategy such as skimming or they simply give up. Discussions and demonstrations that establish the manner in which texts progress can support readers in developing confidence. Such guidance can be an outstanding way to prepare students for numerous reading tasks, including state-mandated assessments. 

    Learn about how vocabulary works in texts and communicate this understanding to students in lessons and discussions 

    Current text complexity systems fail to make explicit the vocabulary recognition demands of texts—the very feature that is at the heart of text complexity (Ricketts, Nation, & Bishop, 2007; Sénéchal, Ouellette, & Rodney, 2006). When teachers have fundamental understandings about how vocabulary works in texts, they can conduct lessons and discussions with students to support their vocabulary knowledge and, as a result, increase access to complex texts.

    A generative vocabulary approach supports students in understanding how words work through lessons and discussions as new words are taught and learned. A generative approach aims to provide students with the capability to access rare words when they encounter them in text. A full description of a generative vocabulary approach is provided elsewhere (Hiebert, 2016; Hiebert & Pearson, 2013). However, several key insights and strategies derived from the approach follow. 

    The first strategy is to develop and support students’ expectation that any text is likely to have words previously not encountered in texts—words that are rare. Remember that a group of 2,500 morphological families accounts for an average of 91% of the words in a sample of exemplar complex texts from grades K through college and career.  Percentages of total words accounted for by this group of morphological families vary from 97% (K-1) to 89-90% (middle school texts through CCR). In grades 2 through 5 where the foundation of students’ reading proficiency is established, the percentage is around 92-94%. 
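
    For readers who want to see how such coverage figures are tallied, the sketch below computes the percentage of running words that belong to a list of morphological families. The two-family list is a hypothetical stand-in; the published figures rest on the 2,500 families analyzed by Hiebert, Goodwin, and Cervetti (2018).

        import re

        # Illustrative only: each surface form points to its family headword.
        FAMILY_OF = {form: "help" for form in
                     ["help", "helps", "helped", "helping", "helper", "helpful", "unhelpful"]}
        FAMILY_OF.update({form: "play" for form in ["play", "plays", "played", "playing", "player"]})

        def family_coverage(text):
            """Percentage of running words whose form belongs to a listed morphological family."""
            tokens = re.findall(r"[a-z']+", text.lower())
            covered = sum(1 for t in tokens if t in FAMILY_OF)
            return 100 * covered / len(tokens) if tokens else 0.0

        print(round(family_coverage("The helpful player helped while the others were playing."), 1))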

    Second, students need to be aware that generative vocabulary instruction provides them with strategies for rare words, such as capitalizing on morphological knowledge, a critical strategy since almost a third of the rare words are members of the 2,500 morphological families. Another critical understanding is that proper nouns comprise another third of the rare words in texts. Characters’ names are often repeated, which means that the number of rare words per 100 does not translate into 6 or 7 unique words. Even so, students need a strategy for giving labels to proper nouns, especially when they can’t pronounce words such as Sagittarius or Vincennes. 

    Third, the focus of ELA instruction needs to be on developing concept clusters and related vocabulary. ELA standards address the strategies and skills of reading and writing, but the typical stories or informational texts to which the standards are to be applied remain undefined. Such generality is unfortunate in light of the consistent research finding that background knowledge is the best predictor of how well students will comprehend a text (Cromley & Azevedo, 2007; Ozuru, Dempsey, & McNamara, 2009). As a result of this lack of guidance on content, ELA programs typically cover many topics without students gaining a deep understanding of any individual topic. 

    Concept clusters need to be in the foreground of reading instruction from the first days of kindergarten and first grade. Since leveled texts at adjacent levels are often similar in their demands, sorting texts by topics within a set of texts covering several levels can be an effective way to ensure that students develop vocabularies related to concept clusters. An illustration of clustering of leveled texts by topics with shared vocabulary, rather than discrete levels, appears in Table 3. Three sets of leveled texts were combined: Leveled Literacy Intervention (Fountas & Pinnell, 2008), Windows on Literacy/National Geographic (informational leveled texts from National Geographic, 2002), and My Sidewalks (decodable leveled texts from Juel, Paratore, Simmons, & Vaughn, 2008). Texts from each series within a set of levels (e.g., D to F) were analyzed to determine common vocabulary—words with specific vowel patterns and words related to particular topics (ones common to stories and informational texts such as nature, animals, transportation). Such an approach ensures that students develop conceptual clusters of knowledge, while at the same time expanding their recognition vocabularies because they see the same words across several texts. 

    Table 3

    Illustration of Sorting Texts by Topic and Word Patterns1

    Topic | Text | Program | Topic Words with Target Patterns

    Nature
    When Seasons Change | My Sidewalks | tree, shines
    Seeds Grow into Plants | Windows on Literacy | vines, trees, wheat, seeds, pea, bean(s)
    Here is a Tree | LLI | cave, tree, hole, hive, bees
    A Tree's Life | Windows on Literacy | seedling(s), tree seeds, pine, pinecones
    Bugs | My Sidewalks | vines, stones

    Games
    My Friend and I | Windows on Literacy | painting, ride, bike, play, reading, home
    Jessie Likes to Look at the Map | LLI | kites, play, ice, cream
    Having Fun | Windows on Literacy | games, skate(d), play(ed), ride/rode, bikes, read
    Family Tales | My Sidewalks | baseball
    Here is a Big Ball | LLI | game, play

    1 From Hiebert and Kurland (2017)
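
    For teachers or curriculum developers working with digitized word lists, a grouping like the one in Table 3 can be roughed out programmatically. The sketch below assigns each text to the topic whose word list it overlaps most; the topic lists, titles, and vocabularies are hypothetical stand-ins rather than data from the programs cited above.

        from collections import defaultdict

        # Illustrative only: topic word lists and text vocabularies are hypothetical.
        TOPIC_WORDS = {
            "nature": {"tree", "seed", "vine", "bee", "pine"},
            "games": {"play", "ride", "bike", "game", "skate"},
        }

        def sort_by_topic(texts):
            """Assign each text to the topic whose word list it shares the most words with."""
            groups = defaultdict(list)
            for title, words in texts.items():
                topic = max(TOPIC_WORDS, key=lambda t: len(words & TOPIC_WORDS[t]))
                groups[topic].append(title)
            return dict(groups)

        print(sort_by_topic({
            "Here Is a Tree": {"cave", "tree", "hole", "hive", "bee"},
            "Having Fun": {"game", "skate", "play", "ride", "bike"},
        }))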

    Another aspect of conceptual clusters relates to the many rare words in the elementary grades that are concrete—words such as tractor and poodle. Vocabulary instruction that supports conceptual clusters will ensure that students’ recognition vocabularies include this group of concrete but rare words. (Word Pictures at www.textproject.org provides pictures for about 600 concrete words, organized into conceptual clusters.)

    Not all possible topics can be covered in an ELA curriculum, nor should that be the goal. Developing expertise in one or several topics requires that students develop strategies for using text to acquire new knowledge. These strategies transfer when students encounter new topics. Clustering texts with similar content aids students in developing the conceptual clusters that underlie proficient comprehension. This procedure also supports students’ vocabulary recognition. Texts with similar content will have shared vocabulary, allowing students to become facile with these words as they see them repeated across texts. 

    Put a priority on teaching students to select texts

    Teachers’ guides include many recommended reading strategies. But rarely among the myriad strategies is anything said about the strategy of self-selection, which refers to students’ ability to identify texts that they can comprehend. This strategy may be one of the most fundamental for a habit of life-long reading. After all, once out of school, almost all of people’s reading is self-selected. In classrooms where students’ independent reading time is productive, lessons address how to choose texts (Manning, Lewis, & Lewis, 2010). These choices include attention to the complexity of words and ideas and also to students’ interests and goals in reading. 

    In generations past, students were taught to apply the “five-finger” rule in choosing appropriate texts (Reutzel & Fawson, 2002). The strategy was to open a book randomly and to fold a finger for every unknown word. If, by the end of the sample, students had used up all of their fingers, the text was likely a challenging one. Recent digital analyses of texts give some credence to the five-finger rule. Middle-grade to middle-school texts typically have from 6 to 8 rare words per 100 words (Hiebert et al., 2018). If students can’t figure out more than a handful of words in a sample of text, the text is likely to be challenging for independent reading. 
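
    A digital analogue of the five-finger rule is straightforward to sketch: sample roughly 100 words and count those that fall outside a familiar-word list. The threshold of five follows the rule of thumb above; the word list and passage below are hypothetical.

        import re

        def too_hard_for_independent_reading(sample, familiar_words, threshold=5):
            """Flag a sample if more than `threshold` of its first ~100 words are unfamiliar."""
            tokens = re.findall(r"[a-z']+", sample.lower())[:100]
            unknown = [t for t in tokens if t not in familiar_words]
            return len(unknown) > threshold

        familiar = {"the", "crew", "near", "lowered", "a", "boat", "into", "sea",
                    "and", "began", "to", "row"}
        passage = "The crew rummaged near the davit, lowered a boat into the sea, and began to row."
        print(too_hard_for_independent_reading(passage, familiar))  # False: only a few unfamiliar words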

    An awareness of the challenge posed by new vocabulary in a text is only one of the skills related to self-selection. Consciously developing conceptual clusters of knowledge through reading is important. Recognizing that texts are a source for expanding one’s knowledge, and being clear about the areas in which one is gaining expertise, are also critical aspects of independent reading. Students’ interests quite naturally vary. One student might be particularly interested in stories of survival in adverse physical environments (e.g., Hatchet), while another student may be interested in the resiliency of characters in trying social environments (e.g., Bud, Not Buddy). Providing students with the tools to face the challenges of vocabulary, whether in narrative or informational text, will support their progress through increasingly complex texts for both school-based and pleasure reading.

    References

    Cromley, J. G., & Azevedo, R. (2007). Testing and refining the direct and inferential mediation model of reading comprehension. Journal of Educational Psychology, 99(2), 311-325.

    Cunningham, J.W., Hiebert, E.H., & Mesmer, H.A.E. (2018). Investigating the validity of two widely used quantitative text tools. Reading and Writing. Advance online publication. https://doi.org/10.1007/s11145-017-9815-4

    Cunningham, J. W., Spadorcia, S. A., Erickson, K. A., Koppenhaver, D. A., Sturm, J. M., & Yoder, D. E. (2005). Investigating the instructional supportiveness of leveled texts. Reading Research Quarterly, 40(4), 410-427.

    Dale, E., & Chall, J. S. (1948). A formula for predicting readability: Instructions. Educational Research Bulletin, 37-54.

    Deane, P., Sheehan, K. M., Sabatini, J., Futagi, Y., & Kostin, I. (2006). Differences in text structure and its implications for assessment of struggling readers. Scientific Studies of Reading, 10(3), 257-275.

    Fountas, I.C., & Pinnell, G.S. (1996). Guided reading: Good first teaching for all children. Portsmouth, NH: Heinemann.

    Fountas, I.C., & Pinnell, G.S. (2008). Leveled literacy intervention. Portsmouth, NH: Heinemann.

    Fountas, I.C., & Pinnell, G.S. (2012). The F & P text level gradient: Revision to recommended grade-level goals. Portsmouth, NH: Heinemann. Retrieved from http://www.heinemann.com/fountasandpinnell/pdfs/WhitePaperTextGrad.pdf

    Hatcher, P. (2000). Predictors of Reading Recovery book levels. Journal of Research in Reading, 23(1), 67-77.

    Hiebert, E.H. (1983). An examination of ability grouping for reading instruction. Reading Research Quarterly, 18, 231-255. 

    Hiebert, E.H. (2016). New perspectives in learning vocabulary. New York: Pearson. Retrieved from https://mypearsontraining.com/assets/files/documents/LitPri581L386myPerspectivesFreddyHiebertWhitePaper_HR.pdf

    Hiebert, E.H. (2018). Multi-level text sets: Leveling the playing field or sidelining struggling readers? Text Matters series. Retrieved from:  Hiebert-Multi-Level-Texts-Sets.pdf

    Hiebert, E.H., Goodwin, A.P., & Cervetti, G.N. (2018). Core vocabulary: Its morphological content and presence in exemplar texts. Reading Research Quarterly, 53(1), 29-49.

    Hiebert, E.H., & Kurland, M. (October 2017). Prototype of the Hiebert/Kurland Reading Selection Optimizer. Santa Cruz, CA: TextProject.

    Hiebert, E.H., & Pearson, P.D.  (2013). Generative vocabulary instruction in ReadyGEN. New York: Pearson. https://assets.pearsonschool.com/asset_mgr/current/201532/pdf_161229.pdf

    Juel, C., Paratore, J. R., Simmons, D., & Vaughn, S. (2008). My sidewalks on reading street: Intensive reading intervention. Glenview, IL: Pearson Scott Foresman.

    Klare, G. R. (1984). Readability. In P. D. Pearson, R. Barr, M. L. Kamil, & P. Mosenthal (Eds.), Handbook of reading research (Vol. 1, pp. 681-744). New York: Longman.

    Koons, H., Elmore, J., Sanford-Moore, E., & Stenner, A.J. (July 2017). The relationship between Lexile text measures and early grades Fountas & Pinnell reading levels (Metametrics Research Brief). Durham, NC: Metametrics.

    Lively, B.A., & Pressey, S.L. (1923). A method for measuring the “vocabulary burden” of textbooks. Educational Administration and Supervision, 9, 389-398.  

    Lupo, S. M. (2017). Comprehension, text difficulty, background knowledge, and talk: A comparison of KWL and Listen Read Discuss (Doctoral dissertation). Retrieved from: https://doi.org/10.18130/v3132d

    Manning, M., Lewis, M., & Lewis, M. (2010). Sustained silent reading: An update of the research. In E. H. Hiebert & D. R. Reutzel (Eds.), Revisiting silent reading: New directions for teachers and researchers (pp. 112-128). Newark, DE: International Reading Association.

    McGuffey, W.H. (1846/1997). McGuffey’s eclectic readers. New York: John Wiley & Sons.

    National Geographic (2002). Windows on literacy. Washington, DC: National Geographic Society.  

    Ozuru, Y., Dempsey, K., & McNamara, D. S. (2009). Prior knowledge, reading skill, and text cohesion in the comprehension of science texts. Learning and Instruction, 19(3), 228-242.

    Pearson, P. D. (1974). The effects of grammatical complexity on children’s comprehension, recall, and conception of certain semantic relations. Reading Research Quarterly, 10(2), 155-192.

    Reading A-Z. (n.d.). Leveled books. Retrieved from https://www.readinga-z.com/books/leveled-books/

    Reutzel, D. R., & Fawson, P. C. (2002). Your classroom library: New ways to give it more teaching power. New York: Scholastic Professional Books.

    Ricketts, J., Nation, K., & Bishop, D. V. (2007). Vocabulary is important for some, but not all reading skills. Scientific Studies of Reading, 11, 235-257.

    Sénéchal, M., Ouellette, G., & Rodney, D. (2006). The misunderstood giant: On the predictive role of early vocabulary to future reading. In D. K. Dickinson & S. B. Neuman (Eds.), Handbook of early literacy research (Vol. 2, pp. 173-182). New York, NY: Guilford Press.

    Stenner, A. J., Burdick, H., Sanford, E. E., & Burdick, D. S. (2007). The Lexile framework for reading technical report. Durham, NC: Metametrics.

    Thorndike, E. L. (1921). The teacher’s word book. New York: Teachers College, Columbia University.

    Toyama, Y., Hiebert, E.H., & Pearson, P.D. (2017). An analysis of the text complexity of leveled passages in four popular classroom reading assessments. Educational Assessment, 22(3), 139-170.

    Literature Cited:

    Burton, V.L. (1942/2012). The little house. Boston, MA:  HMH Books for Young Readers.

    Crane, S. (1895/2017). Red badge of courage. CreateSpace Independent Publishing Platform.  

    Curtis, C.P. (2004). Bud, not Buddy. New York: Dell Publishing.

    Dahl, R. (1961/2007). James and the giant peach. New York: Puffin Books.

    Goldish, M. (2014). Smokejumpers. New York: Bearport Publishing Co.  

    Hopkins, J.M. (2010). The horned toad prince. Atlanta, GA: Peachtree Publishers.

    Kudlinski, K.V. (2008). Boy, were we wrong about dinosaurs! New York: Puffin Books.

    Lincoln, A. (1865). Second inaugural address. Retrieved from:  http://www.bartleby.com/124/pres32.html

    Lind, A. (1997). Black bear cub. New York: Scholastic.

    Lord, W. (1955/2004). A night to remember. New York: Holt Paperbacks.

    Micucci, C. (2006). The life and times of the ant. Boston, MA: HMH Books for Young Readers.

    Paulsen, G. (1987).  Hatchet. New York: The Trumpet Club/Scholastic.

    Schanzer, R. (2002). How Ben Franklin stole the lightning. New York: HarperCollins.

    Seuss, Dr. (1960). Green eggs and ham. New York: Random House.

    Seuss, Dr. (1963). Hop on pop. New York: Random House.

    Silverman, E. (2006). Cowgirl Kate and Cocoa. Boston, MA: HMH Books for Young Readers.

    Simon, S. (2006). Volcanoes. New York: HarperCollins.

    Singer, I.B. (1984). Zlateh the goat and other stories. New York: HarperCollins.

    Steig, W. (2009). Amos and Boris. Boston, MA: Houghton Mifflin Harcourt.