Past Is Prologue on Common Core Tests

The PARCC and Smarter Balanced state testing consortia set in motion significant improvements in state testing systems, substantially raising the quality of student assessments, directly or indirectly, nationwide. In a recent post we outlined the advances the consortia produced. But in our work we have also identified several priorities for states and districts as they evolve their testing regimes under the Every Student Succeeds Act.

Rigor

Assessments should set a high bar for student learning, measure what matters most, reflect good instruction, and provide educators timely feedback. Research has confirmed that the consortia assessments are the highest-quality tests in the country and, therefore, should be the go-to choice for state leaders. States that decide to build their own assessments should prioritize test questions that measure critical thinking, writing, and students’ grasp of original texts, as the consortia tests do.

A helpful resource is CCSSO’s Criteria for Procuring and Evaluating High-Quality Assessments, which was developed as a roadmap for states wanting to create rigorous tests outside of the PARCC and Smarter Balanced networks, including for grades and content areas not available through the consortia.

State leaders should set performance standards on their tests that are consistent with college and career readiness expectations. This is particularly important as states pursue new instructional approaches such as competency-based and personalized learning. These strategies give students more flexibility to move at their own pace, but they require rigorous standards, and tests that reflect those standards, to ensure that students are learning and that goal posts aren’t lowered for some students.

State assessments should complement these instructional innovations by providing honest, objective benchmarks of annual student progress. To that end, states can leverage research from the PARCC and Smarter Balanced consortia, such as research on the design and scoring of innovative test items and on new methods for setting performance levels benchmarked against evidence of what students need to be academically ready for college and careers.

States and districts should also communicate to policymakers and the public the value of rigorous assessments that measure students’ grasp of challenging academic content and foundational skills, a key task education leaders have often neglected in the face of the recent anti-testing movement. While the most important element of a child’s education is strong instruction from an excellent teacher, high-quality state tests provide critical information about how well schools are serving our children, particularly our most vulnerable students.

Transparency

One of the most important contributions of the PARCC and Smarter Balanced consortia was their commitment to transparency. Teachers and higher education faculty were involved in the review of every assessment item. By the time an item had been through the PARCC review process, more than 30 sets of eyes had studied it. The professional judgment of educators helped ensure the PARCC and Smarter Balanced assessments measured the knowledge and skills every student should master at each grade level, included texts worth reading and problems worth solving, and helped build buy-in among the stakeholders who matter most: those who work with students every day.

The consortia also sought to make the new tests transparent to the public, particularly, though not only, to parents and students. At PARCC, we wanted to make sure people could understand how these tests worked: what the questions looked like, the kinds of texts students would read, and the sorts of math problems they would solve.

Prior to the tests being administered for the first time, we published practice tests that mirrored the look and feel of the actual assessments so students and their parents could try them out. Following the launch of the PARCC assessments in 2015, we released thousands of actual test questions, along with annotated student work, to show what the tests look like, how they are scored, and what student work looks like at different performance levels. This can be a particularly powerful way to communicate what students will be expected to know and be able to do. It also serves as a great professional development tool for educators.

Further, we knew the importance of the consortia score reports; they were the way most parents would engage with their children and their schools about testing. States in both consortia invested significant resources in the design of score reports to provide stakeholders with clear information about student growth, finer-grained details about student mastery of standards, and comparisons to the performance of other students in their districts, states, and the consortia.

We also sought to make the reports accessible to parents from a wide range of backgrounds, including those for whom English is not their first language. Through focus groups, multiple rounds of pilots and review, and ongoing feedback (even after the first year of testing), the consortia prioritized these reports as a key part of their work.

While we made considerable strides on reporting, ongoing engagement of key stakeholders can further build trust in and understanding of the consortia reports, the assessments, and their role in helping all students meet higher standards. This work is critical to countering the anti-testing movement, which has tried in recent years to erode confidence in the critical role that standards and assessments play in raising expectations for the nation’s disadvantaged students.

Timely Results

Although we made considerable progress in getting students’ results to educators and parents more quickly, we didn’t go far enough. Prior to PARCC and Smarter Balanced, results of spring tests often were released the following fall. PARCC and Smarter Balanced states typically released results from spring testing during the summer. But that’s not fast enough to factor the information into students’ end-of-year grades or to provide students support over the summer.

Slow results are frustrating to parents and teachers alike, but teachers tend to understand that reliably scoring performance tasks, essays, and extended math problems takes time. While it may be tempting to administer assessments earlier in the year, we advise against it: state tests should be administered as late in the year as possible, to give students and teachers time to cover the material for the grade or course. Maximizing the use of computerized scoring would help reduce the time required to score assessments, and we need to push the testing industry to continue research and development efforts that improve the reliability and validity of automated scoring.

Statewide year-end testing is an important check on the performance of the education system, serving a critical role in school accountability and helping to drive instruction toward college and career readiness.

But end-of-year tests are after-the-fact assessments of learning. By their very nature, they are unable to provide real-time information to inform the day-to-day instructional decisions that drive students’ learning. They are but one piece of the puzzle.

An aligned system also needs regular assessments for learning, such as formative assessments that give teachers fine-grained, real-time information about the effectiveness of particular instructional strategies and signals about whether certain topics need to be revisited so that students fully master the concepts.

It goes without saying that all assessment tools should be aligned with underlying standards and the curriculum resources a district has selected. Curriculum-embedded assessments are a critical piece of a comprehensive instructional strategy.

Aligning standards, curriculum, assessment, and instruction is hard work, requiring time, support for teachers, and opportunities for professional collaboration. School systems should prioritize professional development that helps educators build their assessment literacy, so they become fluent in using information from assessments to adapt their instruction. Teachers should have the opportunity, either on their own or collaboratively, to craft quality tasks aligned to district curriculum and state standards.

The improvements in the nation’s tests in recent years are real. Education leaders need to hold firm on these advances in the rigor and technical quality of their tests, even as they take on the next steps of aligning high-quality curriculum and instruction. But truly meaningful student progress depends on teachers having the classroom tools—including quality curriculum and aligned assessments for learning—they need to help all children reach higher standards. Education leaders and the partners that support them should redouble their efforts to ensure better standards and assessments aren’t the end of this story.

— Laura Slover and Lesley Muldoon

Laura Slover, the CEO of CenterPoint Education Solutions, helped launch the Partnership for Assessment of Readiness for College and Careers (PARCC) in 2010 and served as its CEO until 2017. Lesley Muldoon, the Chief of Policy and Advocacy at CenterPoint Education Solutions, helped launch PARCC and subsequently served as its Chief of Staff until 2017.

This piece originally appeared on the FutureEd website. FutureEd is an independent, solution-oriented think tank at Georgetown’s McCourt School of Public Policy. Follow on Twitter at @futureedGU
