How the Common Core Changed Standardized Testing

When the U.S. Department of Education awarded $350 million to two consortia of states in September 2010 to develop new assessments measuring student performance against the Common Core State Standards, state commissioners of education called it a milestone in American education.

“By working together, states can make greater—and faster—progress than we can if we go it alone,” said Mitchell Chester, the late Massachusetts education commissioner and chair of the PARCC Governing Board from 2010 to 2015.

Eight years later, the number of states participating in at least one of the two consortia that developed the new assessments has dropped from 44 to 16, plus the District of Columbia, the U.S. Virgin Islands, the Bureau of Indian Education, and the Department of Defense Education Activity. The reasons for leaving vary, but the decrease in participation makes it easy for some to declare the program a failure.

A closer look, however, suggests that Commissioner Chester’s optimism was not misplaced. Indeed, the testing landscape today is much improved. In many states, assessments have advanced considerably over the previous generation, which was generally regarded as narrowly focused, unengaging for students, and pegged at levels of rigor so low that they drove some educators to lower expectations for students.

Today, many state assessments measure more ambitious content, such as critical thinking and writing, and use innovative item types and formats, especially technology-based approaches, that engage students. These shifts are the result of the two state-led consortia—PARCC and the Smarter Balanced Assessment Consortium—which ushered in the progress that Chester foresaw in 2010. Those programs upped the game for assessment quality, and other states and testing companies followed suit. Today’s tests—whether consortia-based or not—are a far cry from their predecessors.

According to a new report from the Thomas B. Fordham Institute on Common Core implementation, some states are backtracking on the quality and rigor of standards. In that context, it is even more important to hold firm on the progress we’ve made on assessment.

As leaders of the organization that launched and managed the PARCC assessments from 2010 to 2017, we would like to share some reflections on the evolution of the testing landscape that came about as a result of the two assessment consortia.

Establishing a Common, Higher Bar

One of the most important features of state tests today is their focus on college and career readiness. Unlike in the past, tests now measure a broad range of knowledge and skills that are essential to readiness and report students’ progress toward that goal. Tests of old, like the standards undergirding them, often fell short of measuring the knowledge and skills most critical to preparation for college and for work.

PARCC and Smarter Balanced set these advances in motion by establishing common performance levels for the assessments across the states in their consortia, a process that engaged K-12 and higher education leaders and used research and professional educator judgment to define what college- and career-ready performance looks like. Recent reports from Education Next and the National Center for Education Statistics (NCES) confirm that PARCC and Smarter Balanced established a more rigorous bar for proficiency.

Because these performance levels are shared by multiple states, states can, for the first time at this scale, compare individual student results. This is an important advance for educational equity; in the past, states set different performance levels, some higher than others, in effect establishing different targets for what level of academic achievement was expected of students and exacerbating the problem of disparities by ZIP code. Consistent assessment standards also help families who move across state lines (within a consortium), who can now track student progress more easily.

The recent NCES study shows that cut scores for what states consider proficient have risen when compared to performance levels for the National Assessment of Educational Progress (NAEP). It also shows that the gap between the state with the lowest performance standard and the state with the highest narrowed between 2013 (before the consortia tests) and 2015 (after their launch).

Taken together, this research is clear that the consortia assessments, particularly PARCC, set a higher standard for student proficiency and that most other states—whether administering a consortium test or not—raised the bar as well. These new, shared expectations of what students should know and be able to do reflect the expectations of the world of college and the workforce much more fully than did their predecessors.

Engaging Educators, Building Transparency

For many years, large-scale assessments have been a black box for educators, providing limited opportunities for them to participate in test development and little information on what is assessed, how it is scored, and what to do with the results. While many states have historically had a representative set of teachers review test items, the consortia fostered a depth and breadth of educator engagement that set a new bar for the industry. Indeed, the consortia engaged thousands of classroom educators to review items and offer insights on the development of key policies such as accessibility and accommodations and performance-level setting.

This engagement from teachers and administrators helped align the assessments with instructional practices effective teachers use in classrooms. It also helped ensure transparency, as did the release of thousands of original items and full-length practice tests for every grade level.

The design of the assessments has also helped push the education field in important ways by sending signals about the critical knowledge and skills for students to master at each grade level. Writing is a prime example: The consortia assessments include more extensive measurement of writing than most previous state assessments, and include a strong focus on evidence-based writing. We have heard from educators that this in turn has driven their schools to focus more on writing in the curriculum, giving students more opportunities to build this critical skill. This common focus can help ensure an equitable education for all children and close achievement gaps.

Moving to Computer Testing

Beyond the content and quality of the tests and expectations for student mastery, PARCC and Smarter Balanced helped change the way assessments are delivered. When the consortia were first established in 2010, only six of the 26 original PARCC states were administering some state assessments via computer.

Moving to online testing was a key priority for states for multiple reasons. Technology-enhanced items allow for measuring knowledge and skills—typically deeper-learning concepts—that paper-and-pencil tests cannot assess. Computer-delivered tests also allow for more efficient test administration and improve access to the assessments for students with disabilities and English learners.

Computer-based tests can also reduce the amount of time needed to score and report results, particularly if automated scoring technologies are used. And, critically, it is less expensive to administer and score computer-based tests than paper-based versions.

States were understandably cautious about transitioning to computer testing, given the investments required in local technology infrastructure and the lack of familiarity that many students and teachers had using computers for high-stakes assessments. The PARCC and Smarter Balanced teams conducted research and development to help states prepare for the transition, while state leaders worked with partners to prepare schools and districts.

In 2011, four years prior to the launch of the PARCC and Smarter Balanced assessments, the State Education Technology Directors Association reported that 33 states offered some type of online testing; only five of these states required that students take the end-of-year assessment online, and none of them planned to administer PARCC. In the first year of PARCC administration, 2015, more than 75 percent of students took the assessments online—far exceeding the consortium’s 50 percent goal for the first year. By spring 2017, more than 95 percent of students took the assessments via computer. This is a remarkable shift for states to make in less than a decade, one that took significant leadership from state and local officials to make a reality.

Bringing States Together

Above all, the experience of the consortia demonstrated that collective state action on complex work is doable. It can improve quality significantly, and it can leverage economies of scale to make better use of public dollars. Indeed, states that left the consortia to go it alone ended up spending millions of dollars to develop their new tests from scratch. This successful model of collective state action—and the lessons learned—should influence states’ current approaches to joint work in science, career and technical education, and civics.

And yet, there is more to do.

The political battles over testing (and education more broadly) limited the advances in assessment that the leaders of the consortia envisioned in 2010.

For example, concerns about testing time caused the PARCC states to move away from their initial bold vision of embedding the assessments into courses and distributing them throughout the year. This was an innovative design that would have more closely connected assessment to classrooms, but states ultimately determined it was too challenging to implement at scale. Luckily, there is now an opportunity for states to explore models like this through some of the flexibility provided under the federal Every Student Succeeds Act.

In addition, a number of states, for largely political reasons, pulled out of the consortia to develop or buy tests on their own, which means that many parents and policymakers can no longer compare test results across state lines using a shared, annual, and widely used benchmark for student success. In contrast, NAEP—which is administered once every two years to a sample of students in 4th, 8th, and 12th grades—serves as an important high-level barometer of student progress in the nation, but doesn’t provide information that school systems can use to inform academic programming, supports and interventions, or professional learning.

Further, testing “opt outs” in some states meant that the data from the assessments were not as useful as they could be because they did not fully reflect all students in a school. This limited districts’ ability to make full use of the data in instructional decisions. Opt outs are less prevalent today, but they still pose a challenge for schools seeking a complete academic picture of their student body.

Finally, we learned that leaders taking on an ambitious reform agenda should not give short shrift to the communications and outreach required to build support for and understanding of the work—including building strong relationships with stakeholders and seeking to form coalitions of supporters. Reform leaders should not assume that good work on its own will win the day, especially if key stakeholders don’t know about or support it.

Despite these challenges, the quality of state testing has improved substantially in recent years. Millions of students today take assessments born of or influenced by this work—tests that better reflect what they know, and what the nation needs them to know.

— Laura Slover and Lesley Muldoon

Laura Slover, the CEO of CenterPoint Education Solutions, helped launch the Partnership for Assessment of Readiness for College and Careers (PARCC) in 2010 and served as its CEO until 2017. Lesley Muldoon, the Chief of Policy and Advocacy at CenterPoint Education Solutions, helped launch PARCC and subsequently served as its Chief of Staff until 2017.

This piece originally appeared on the FutureEd website. FutureEd is an independent, solution-oriented think tank at Georgetown’s McCourt School of Public Policy. Follow on Twitter at @futureedGU

For a follow-up to this post, please read: Past is Prologue in Common Core Testing
