Pencils Down? Not So Fast


New York State education officials raised a ruckus two weeks ago when they announced that annual statewide reading and math tests, administered in grades 3–8, would no longer be timed. The New York Post quickly blasted the move as “lunacy” in an editorial. “Nowhere in the world do standardized exams come without time limits,” the paper thundered. “Without time limits, they’re a far less accurate measure.” Eva S. Moskowitz, founder of the Success Academy charter schools, had a similar reaction. “I don’t even know how you administer a test like that,” she told the New York Times.

I’ll confess that my initial reaction was not very different. Intuitively, testing conditions would seem to have a direct impact on validity. If you test Usain Bolt and me on our ability to run one hundred meters, I might finish faster if I’m on flat ground and the world record holder is forced to run up a very steep incline. But that doesn’t make me Usain Bolt’s equal. By abolishing time limits, it seemed New York was seeking to game the results, giving every student a “special education accommodation” with extended time for testing.

But after reading the research and talking to leading psychometricians, I’ve concluded that both the Post and I had it wrong: untimed tests are not less accurate. While there’s not a deep body of research on timed versus untimed tests, the studies that do exist indicate that for non-learning-disabled students, extra time does not significantly alter outcomes. Students with learning disabilities have been found to perform significantly better with extended time than under fixed time limits, which is why special education students often get extra time on exams. But for students without disabilities, no significant differences have been found.

Broadly speaking, there are two types of tests: speed and power tests. In a speed test—like a timed multiplication drill—the method of answering the questions (multiply) is clear and obvious. The test seeks to determine how many questions you can answer correctly in the allotted time. A power test, on the other hand, presents students with a smaller number of more complex questions. When what’s being tested is your ability to figure out how to answer the questions, your speed and the time allotted don’t matter as much.

In a “speed” test, Usain Bolt would kick my butt. We’d likely score the same on a hundred-meter “power” test, but running ability would not be the issue. We could walk, skip, or turn cartwheels, since our ability to cover the distance is what’s being measured, not how quickly we do it.

Most state tests are power tests, notes professor Andrew Porter, the former dean of the University of Pennsylvania’s Graduate School of Education and a past president of the American Educational Research Association. They are designed so that nearly all students will be able to complete all items within the allotted time. Thus, there’s no reason to expect any difference in performance if time limits are dropped. “Some students, if given a lot of time, will take a lot of time,” Porter notes. “It doesn’t mean they’re going to do any better.”

Should other states follow New York’s lead? For many, the point is moot. Eighteen states administer untimed, computer-based tests from the Smarter Balanced Assessment Consortium (SBAC). These “adaptive” tests don’t treat speed as significant. If two test takers are presented with the same item and provide the same correct answer, but one does it twice as fast, both will still get the same next item.
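
To make that routing rule concrete, here is a minimal sketch in Python of how an adaptive test might select its next item. This is an illustration only, not SBAC’s actual algorithm; the function name, the step-up/step-down rule, and the numbers are all invented for the example. The point is simply that response time never enters the calculation.

```python
# Hypothetical sketch of adaptive item selection: the next item depends on
# whether the answer was correct, never on how fast it was given.
# Invented for illustration; not SBAC's actual algorithm.

def next_item_difficulty(current_difficulty: float,
                         answered_correctly: bool,
                         response_seconds: float) -> float:
    """Choose the difficulty of the next item.

    response_seconds is accepted but deliberately unused: a fast correct
    answer and a slow correct answer route to the same next item.
    """
    step = 0.5  # arbitrary step size for this toy example
    if answered_correctly:
        return current_difficulty + step  # correct answer: harder item next
    return current_difficulty - step      # wrong answer: easier item next

# Two test takers answer the same item correctly at very different speeds...
fast = next_item_difficulty(1.0, answered_correctly=True, response_seconds=15.0)
slow = next_item_difficulty(1.0, answered_correctly=True, response_seconds=30.0)
assert fast == slow  # ...and both are routed to the same next item.
```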

If lifting time limits doesn’t change results or validity, why bother? Consider it a power test of the “opt-out” movement—and one New York might fail. Last year, the number of students declining to take state exams in New York quadrupled to 20 percent of all those who were supposed to be tested, according to data from the state’s education department. That makes New York one of the biggest opt-out states in the nation. Education officials seem to be gambling that allowing unlimited time will give parents one less reason to complain about “test pressure.”

My gut tells me that even if eliminating time limits blunts a bit of opt-out grumbling for now, the real source of test pressure is not the clock—it’s adults pressuring kids to perform. Any leveling or reduction in the number of parents refusing to let kids sit for state tests this year will likely be a function of New York’s moratorium on linking test scores to teacher evaluations. With less at stake for now, school administrators and teachers are less likely to transfer their anxieties to students, wittingly or unwittingly. But look for the pressure to return with a vengeance when the moratorium ends in 2020.

Ironically, the move to end time limits could backfire. Not because it will alter the validity of the tests, but simply because it’s a waste of time and money. “I would never, ever give a test that I didn’t put a time limit on,” Porter says. “In most states across the country, there’s a move to decrease the amount of time spent testing. This moves in the opposite direction.”

—Robert Pondiscio

This post comes from Flypaper. A shorter version of this piece originally ran in the New York Daily News.
