Texas teachers’ assessments indicate students are performing on or above grade level. STAAR scores say thousands of students are reading below grade level. Two studies reveal that the written, taught, and tested curriculum standards are misaligned, and students are caught in the middle.

Texas is testing students with reading passages that are one to three grade levels above on-grade-level instruction, to the detriment of students and their schools. Research from Texas A&M University-Commerce(1) and the University of Mary Hardin-Baylor(2), coupled with current data on the STAAR and a readability formula called The Lexile Framework, shows that thousands of Texas students who have demonstrated they meet the grade-level standards (called the TEKS) are considered to be reading “below grade level” in the Texas testing system. The STAAR exam’s reading passages are too difficult for the grades being tested, and this must be corrected so that educators and families have clear, transparent information about how well each student is actually reading. This is not about doing away with standardized tests. It is about aligning our state’s standardized test with our state’s standards and curriculum. Reading on grade level is an important milestone for every child and deserves all of the attention our state leaders and educators are giving it.

State assessments are an important tool for measuring reading achievement and driving instruction in the classroom. But something is not right with our state’s current STAAR exam, and its results are not telling the whole story. Over the past year, state leaders in Texas have placed significant attention on third-grade reading achievement, based on student scores on the State of Texas Assessments of Academic Readiness (STAAR). In his State of the State address, Governor Greg Abbott emphasized the need for pre-K programs and early education, drawing attention to the fact that “only about 40 percent of third-graders are reading at grade level by the time they finish the third grade.” In its recommendations to lawmakers, the Texas Commission on Public School Finance even emphasized the need to direct more resources to the roughly 225,000 students not achieving the state’s “Meets” standard on the third-grade reading STAAR exam.

Exhibit 1 shows that STAAR reading results have shown very little improvement since 2012, which is inconsistent with every previous testing program in Texas over the past 30 years. For example, with both TAAS and TAKS, the two tests that preceded STAAR, there was an initial drop in student performance as a more difficult test was implemented, followed by steady improvement. So why is STAAR so different from every other test Texas has ever implemented? Have our students suddenly declined in ability? Have our teachers suddenly lost their skills, commitment, and passion? Of course not! As many have suspected, the problem lies in both the design of STAAR and the setting of its performance labels.

Studies Reveal Misalignment Between the TEKS and STAAR

We agree with Commissioner Morath’s slide shown in Exhibit 2, which explains, “The State of Texas Assessments of Academic Readiness (STAAR) are designed to tell us how well our students know grade level knowledge & how well they can demonstrate grade level skills.” And the TEKS analysis shown in Exhibit 3 states that students will “read grade-level text with fluency and comprehension” as well as “read aloud grade-level stories.”

Considering the language in the TEKS and the stated purpose of STAAR, it is perplexing and concerning that two separate, independent readability studies have concluded that the STAAR reading passages were written one to three grade levels above the targeted grade reading level, as measured by various readability formulas. A 2012 Texas A&M study found “that all reading passages, except for 8th grade, were written at least two grade levels above grade level.” A separate 2016 University of Mary Hardin-Baylor study came to the same conclusion: “Overall, for each grade level, the reading passages were one to three grade levels above the students’ current grade level.”
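To illustrate how such readability studies estimate the grade level of a passage, here is a minimal sketch using the well-known Flesch-Kincaid grade-level formula (one of many readability formulas; the studies cited above used their own instruments, and the syllable counter below is a naive heuristic rather than a dictionary-based one):

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of vowels; real tools use
    # pronunciation dictionaries for accuracy.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1  # drop a typical silent final "e"
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Estimate U.S. grade level with the Flesch-Kincaid formula:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Feeding a test passage through a formula like this, and comparing the result with the grade being tested, is the basic approach behind the misalignment findings.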

While we all want Texas students to excel beyond their grade level, it is neither fair nor accurate to judge how a student or school is performing based in large part on the flawed STAAR test.

STAAR misalignment negatively impacts students

Students are being misidentified with respect to reading level and ability, resulting in unnecessary and costly interventions and the loss of other educational opportunities. Schools are being assigned inaccurate accountability ratings and subjected to unnecessary state interventions or closure. Schools are devoting significant resources to correcting problems that, in many cases, do not exist but are simply the result of an improperly aligned state standardized exam.

For example: A 3rd-grade student, Natalie, took the 2018 STAAR reading test for the first time last spring. She correctly answered 25 of 34 questions (74%) and received an “Approaches Grade Level” rating from TEA on her STAAR report card. This was concerning and confusing to her parents, her teachers, and her school because her yearlong results showed that Natalie was reading on grade level before she took the STAAR test. As further evidence of the disconnect, Natalie also received a 710 Lexile reading measure on her STAAR report card. A Lexile reading measure helps Natalie’s parents, teacher, and school select books (from the book fair, library, classroom resource materials, etc.) that she can read independently so she can continue to improve her reading skills.

Exhibit 4 shows that a 710L tells Natalie’s parents, teacher, and school that she is reading and understanding text at about the same level as a 4th grade student in the 2nd month of school. Remember: Natalie is in the 3rd grade. Her accomplishment is good news and deserves to be celebrated! There are thousands of students just like Natalie whose reading performance does not warrant an “at-risk” label or remedial instruction.

A second example is George, a high school freshman, who received a 1550L on his English I EOC, indicating that he reads beyond a 12th-grade level. However, according to TEA’s performance labels, he has not met the “Masters Grade Level” standard because his score is not high enough. This is NOT about lowering standards for students. It IS about aligning what is tested on STAAR with the grade-level TEKS that teachers are required to teach and students are expected to learn. Texas has a state system that intentionally tests students with reading passages one to three grade levels above on-grade-level instruction and then attaches inaccurate labels, painting a picture of students and schools in crisis when they truly are not.

Next Steps for Educators, Parents, Legislators and Public School Supporters

  • Immediately stop STAAR testing and delay the campus A-F accountability ratings that are scheduled for release in August 2019 until the tests are correctly aligned to grade-level expectations.
  • Immediately re-calibrate the Meets and Masters Grade Level performance labels on all STAAR tests (reading, mathematics, writing, science, and social studies) to align with grade-level expectations.
  • Determine the readability of the reading and writing passages on STAAR and share that information with the educator item review committees.
  • Immediately void the 2018 accountability ratings based on the flawed STAAR tests.
  • Require, in statute if necessary, that TEA and the state assessment contractor include readability studies, as well as Lexile measures, when creating the state assessments for 2019 and beyond to ensure alignment between the written, taught, and tested state curriculum.

Definitions and Sources

What is readability?

“Historically, there are three levels of texts used in the classroom: independent, instructional, and frustration. Text written at a student’s independent level is text students can read on their own, without help. Text written at a student’s instructional level is text used for teaching them in a way to improve their reading skills. Text written at a student’s frustration level should be avoided, as it is so difficult it may discourage a child from reading.” (Vacca, Vacca, Gove, Burkey, Lenhart, & McKeon, 2012)

In “Text Complexity: A Study of STAAR Readability” from the University of Mary Hardin-Baylor, the authors note that teachers are taught to avoid frustration-level material, yet the Texas STAAR exam presses this issue. They wrote, “data indicated the 2015 third grade STAAR test was written on average on a sixth-grade reading level, which would fall within a frustration level for most of the assessed third graders. If the STAAR passages were written on a third or fourth-grade level, they could still contain rigorous informational text, yet the pass rate would not need to reflect such low standards.”

What are Lexile Scores?

One of the most commonly used measures of readability is the Lexile measure, a nationally recognized tool for assessing a student’s reading ability and the difficulty of texts. All 50 states, including Texas, use Lexile measures to gauge the difficulty of books and materials. The goal, as stated on the TEA website, is to find materials that are “not too easy, not too hard, but just right.”

There are significant differences in the characteristics of text at each Lexile level. Longer sentences, words with multiple meanings, words with letter-sound combinations that don’t fit traditional letter-sound relationships, and multisyllabic words all factor into a Lexile measure. Handling all of these elements requires extended time and practice. TEA’s Texas Assessment website, which contains additional information regarding Lexiles, is linked below.

https://texasassessment.com/families/literacy-and-lexiles/
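The text characteristics described above can be made concrete with two simple proxies: average sentence length and the share of multisyllabic words. The actual Lexile formula is proprietary, so this sketch is only an illustration of the kinds of features such measures weigh, using a naive vowel-group syllable heuristic:

```python
import re

def text_complexity_features(text: str) -> dict:
    """Compute two rough proxies for text complexity: average sentence
    length and the share of words with three or more syllables. This is
    an illustration only, not the (proprietary) Lexile formula."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    # Approximate syllable count by counting vowel groups in each word.
    long_words = [w for w in words
                  if len(re.findall(r"[aeiouy]+", w.lower())) >= 3]
    return {
        "avg_sentence_length": len(words) / len(sentences),
        "multisyllable_share": len(long_words) / len(words),
    }
```

Running a passage through features like these shows at a glance why a sixth-grade-level passage overwhelms a third grader: both numbers climb sharply with grade level.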

Sources:

1. STAAR Reading Passages: The Readability is Too High. Szabo and Sinclair, January 2012.

2. Text Complexity: A Study of STAAR Readability. Pilgrim and Lopez, October 2016.