Summary of P. David Pearson on "Will Our Tests Support or Subvert Our Vision of Deeper Learning of English/Language Arts?"

    by Elfrieda H. Hiebert | July 9, 2013

    P. David Pearson’s presentation, part of TextProject’s Virtual Institute on Assessment and the Common Core, addresses the role of the new generation of assessments in supporting deeper learning in English/Language Arts rather than the acquisition of simplistic objectives. To provide a perspective on the new Common Core-aligned assessments, Professor Pearson gave a brief overview of literacy assessments over the past five decades.

    • 1960s: Tests were present in classrooms and schools, but they had few consequences for students and teachers. During this decade, however, Title I of the Elementary and Secondary Education Act was passed, setting the stage for the use of assessments for accountability.
    • 1970s: Behavioral objectives became prominent and served as the basis for criterion-referenced tests, which in turn were used to create statewide assessments. Skills-management systems built on these behavioral objectives also broke literacy down into small grain sizes. For example, beginning readers were pretested on knowledge of individual consonants; if they did not reach a designated level of mastery, they were taught and assessed again until they did.
    • 1980s: This was a period of consolidation of skills-based learning, with an increasing emphasis on assessments in districts and states.
    • 1990s: Models of performance assessment were promoted, including portfolio assessments through state (e.g., Vermont, Maryland) and national projects (e.g., New Standards Project). These efforts died away because of issues related to psychometrics (e.g., reliability), cost (e.g., teacher time), and politics (e.g., values).
    • 2000s: During the No Child Left Behind era, assessments focused on specific standards. The most prominent assessment of this era—DIBELS—illustrates the return to a small grain size of literacy proficiencies. As in the earlier era, such assessments had a heavy influence on directing instruction to the bits and pieces of literacy.

    Next, Dr. Pearson moved to the assessments of the 2010s—those of the two consortia (PARCC and Smarter Balanced) that are developing Common Core-aligned assessments. He identified the following unique features of this new generation of assessments:

    • Increased weight is given to open-ended responses and complex performance tasks. Because students take the assessments online, even selected-response items can require more reflection than standard multiple-choice items. For example, technology-enhanced items allow students to highlight sentences or words from a text in response to a question.
    • Performance tasks have external validity in that they are connected to the tasks of college and careers. They also have curricular validity in that they promote higher-order thinking.
    • To be successful on the assessments, students need regular practice with complex text and its academic language.
    • The assessments require the use of evidence from within the text.

    Could these new assessments present an opportunity for students’ progression toward the desired goals? Dr. Pearson’s answer: “Students who have learned how to read and write in a curriculum that requires constructed responses and real writing will perform well on PARCC and SBAC assessments. They will have developed some transferable practices that will serve them well in these new circumstances.” Since developing transferable knowledge and skills is the goal of instruction, Dr. Pearson concluded, the new assessments do present an opportunity for students to progress toward the desired goals of literacy.