Learning Facts

The brave new world of data-informed instruction

In just the last ten years, goaded by broad and still unsettled cultural shifts, education practices have changed dramatically. Schools are no longer just recording and analyzing inputs—dollars spent, number of days of instruction, numbers of students per teacher—but pushing their data-gathering and analysis efforts into the brave new world of outcomes. Who is dropping out and why? Which students are reading at grade level, and which are not? How are 4th graders doing on fractions and decimals? Today’s educators are deciphering, and using, the results of student assessments better than ever. And it is not a reform at the margins. “Nearly all states are building high-tech student data systems to collect, categorize and crunch the endless gigabytes of attendance logs, test scores and other information collected in public schools,” reported the New York Times in a front-page story last May, confirming the scope of the trend.

Hundreds of state education departments and school districts, driven in part by the No Child Left Behind (NCLB) mandate to demonstrate “adequate yearly progress,” are retooling their assessment regimens and technology systems to use outcome data to drive instructional and policymaking decisions (see Figure 1). A study of 32 San Francisco Bay Area K–8 schools released in 2003 by the Bay Area School Reform Collaborative (now Springboard Schools) found that “what matters most [in closing the achievement gap] is how schools use data.” In fact, the schools that had accelerated the progress of their low-performing students, helping them catch up with high-performing students, were the ones that regularly captured data for the purpose of improving results.

What follows is a close look at three schools that have integrated data into their instructional decisionmaking. Each has concluded that the practice has helped improve student achievement. I examine a traditional public school, a district-turned-charter school run by an education management organization, and a relatively new charter school. The experiences of these schools illustrate the benefits of mining both internal assessments and standardized test results for data to guide curriculum decisions and inform classroom instruction.

Four Stars in the Lone Star State

Evelyn S. Thompson Elementary School is located in a Texas district that has been nationally recognized for improving student achievement over the last decade. The state’s accountability measures, on which No Child Left Behind was modeled, have grown increasingly stringent. Even as the district’s percentage of low-income students has risen, student achievement in the Aldine Independent School District (AISD)—a mid-sized urban district with 66 schools and 56,000 students on the northwest edge of Houston—has kept pace. But it wasn’t always so.

In 1994, the first time the new Texas Assessment of Academic Skills (TAAS) tests were given to all 4th-, 8th-, and 10th-grade students, half the kids in the district failed. Until then, Aldine thought it had a decent school system; its students often received awards and scholarships and the local press wrote favorably about them. But when the district received an “academically unacceptable” label that year, it got a wake-up call as well. “Our educational philosophy had been all about self-esteem as our key goal,” recalls Superintendent Nadine Kujawa, a Texas native who had worked in the district since the early 1960s and in 1994 was the deputy superintendent of human resources and instruction.

Over the next decade Aldine undertook major structural reforms that emphasized academics and student achievement. “Now,” says Kujawa, “we believe that if students are successful academically, self-esteem will take care of itself.”

The key to this new emphasis on achievement was the TRIAND data-management system, developed in partnership with a local software vendor to capture, analyze, and share specific student achievement data among administrators, school leaders, teachers, and even parents. Over the last eight years, the district has spent $32 million on the hardware systems necessary to track student demographic and performance data districtwide, and another $2 million on additional computers that allow teachers to access the system; much of this funding has come from the federal E-Rate program, which has allocated more than $10 billion toward Internet infrastructure in K–12 schools and libraries since 1996 (see “World Wide Wonder?” research, Winter 2006).

At Thompson Elementary, set in a quiet working-class neighborhood, this top-down push was welcomed by teachers like Cherie Grogan, a math teacher and self-described “data geek.” For years Grogan has been creating forms in Microsoft Excel to help teachers better understand their students’ annual test results. She is one of the school’s “skills specialists,” four experienced teachers who support classroom teachers in specific subjects, including math, reading, writing, science, and bilingual education. In addition to modeling lessons for teachers and working with small groups of students, the skills specialists also regularly analyze student scores on diagnostic, formative, and standardized tests across classrooms, subjects, and grades. It is a responsibility they have always held and that has consumed an increasing amount of their time over the last decade as the district continues to emphasize the regular use of student assessment results to guide instruction.

This support begins even before the school year does: the week before students arrived last year, Thompson’s skills specialists sat down with teachers to review the prior year’s scores on the Texas Assessment of Knowledge and Skills (TAKS, the successor to TAAS) for 3rd and 4th grades (the only grades currently tested at Thompson) in order to identify gaps in skills and knowledge, and to develop preliminary plans for addressing those problems throughout the year.

Once the school year begins, there are weekly meetings between the school’s principal, Sara McClain, and the skills specialists to review data and plot strategies for supporting those students and teachers who need help. For example, when reviewing reading scores across the 4th grade, they found that many of the students were struggling with the concept of summarization. In the following weeks, two of the skills specialists with experience in teaching reading went into the 4th-grade classrooms to model additional lessons on summarization and worked separately with small groups of students who still weren’t grasping the skill.

In addition to TAKS scores, the data gathered at Thompson Elementary come from diagnostic tests like the Texas Primary Reading Inventory, which is designed to measure students’ skill levels and needs. As soon as scores from these beginning-of-year diagnostic assessments are available (usually in mid-September), the skills specialists sit down with McClain and a stack of printouts from TRIAND in one of their weekly meetings and assess which of their “kiddos” are struggling to read at grade level. As they flip through the spreadsheets, the skills specialists flag the students having trouble and bring the numbers to life with anecdotes about those students’ work in the classroom. McClain encourages the specialists to follow up with each teacher to share the data and ensure that teachers have created small reading groups based on learning needs, as well as lunchtime or afterschool tutoring for those students still reading at low levels.
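
The screen the specialists run by hand is easy to imagine in spreadsheet or script form. The sketch below is purely illustrative, not a description of TRIAND or the Texas Primary Reading Inventory: the column names, file layout, and grade-level cutoff are all assumptions. It simply flags the students whose diagnostic reading score falls below a benchmark so they can be steered into small reading groups or tutoring.

```python
import csv

# Illustrative sketch only: the column names, file layout, and cutoff below are
# assumptions for this example, not the actual TRIAND export or Texas Primary
# Reading Inventory format.
GRADE_LEVEL_CUTOFF = 70  # hypothetical "reading at grade level" threshold

def flag_struggling_readers(path, cutoff=GRADE_LEVEL_CUTOFF):
    """Return students whose diagnostic reading score falls below the cutoff."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            score = float(row["reading_score"])
            if score < cutoff:
                flagged.append({"student": row["student_name"],
                                "teacher": row["teacher"],
                                "score": score})
    # Lowest scores first, so the most urgent cases surface at the top of the list.
    return sorted(flagged, key=lambda r: r["score"])

if __name__ == "__main__":
    for r in flag_struggling_readers("diagnostic_scores.csv"):
        print(f'{r["student"]} ({r["teacher"]}): {r["score"]:.0f}')
```

In practice, of course, the specialists pair numbers like these with what they know about each child, which is exactly what the anecdotes in those weekly meetings supply.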

Like other schools in Aldine, Thompson Elementary also regularly administers its own tests to measure whether students are mastering the district’s standards as well as the school’s benchmarks. And although grading, analyzing, and discussing interim assessments takes an estimated three to four hours per month, even veteran teachers seem to consider the new responsibilities a help to their teaching, rather than a hindrance. “It’s great to see where kids are performing, where my gaps are, and what I need to do,” says Debra Bingham, a 16-year veteran teacher who now specializes in 3rd-grade reading.

Sara McClain, who has been principal at Thompson for eight years, says that despite some teacher turnover in the early years, as the district’s instructional reforms took hold the remaining teachers came to embrace the new, more rigorous approach. “Teachers tend to come in early and stay late,” she notes. They see the use of data “not as additional but as a part of instruction, their professional responsibility, and their high expectations.”

The increased attention to the data seems to have paid off. Between 1994 and 2002, the percentage of Thompson students passing the state’s math assessment rose from 65 to 98 percent. And even with the more rigorous state standards, the school has maintained a passing rate in the 90s since then. Principal McClain attributes this to a consistent, constant, schoolwide focus on student results. “Previously we were very scattered,” she says. “We needed to increase expectations and get everyone going in the same direction around student achievement.”

A Better View in Chula Vista

Fed up with years of dismal performance from its 1,100 K–6 students (four out of five of them poor), administrators at Mae L. Feaster Elementary School in Chula Vista, California, just south of San Diego, decided in 1997 to partner with members of the community and convert the school to charter status. They turned management of the rambling block of “portable” and permanent buildings over to Edison Schools.

Over the next eight years the reborn Feaster-Edison Charter School showed steady improvement in student achievement, raising its average scores on the Stanford Achievement Test (or SAT-9) from the 19th to the 34th national percentile between 1998 and 2001, and raising the share of students proficient on the state’s English language arts tests from 17 percent to 32 percent between 2001 and 2005.

A large part of the secret to Feaster-Edison’s success, say the school’s teachers and administrators, is its use of data. Feaster-Edison relies heavily on both standardized test scores and Edison’s own benchmark assessments to inform and adjust instruction throughout the year. Before school starts each year, all the school’s teachers are taught how to work with data as part of a weeklong “teacher academy.” Also before the school year begins, teachers review student assessment results from the prior spring, both by student and by “strand” (groups of standards), on the California Standards Test, the state’s annual standardized test for grades 2 through 11. The analysis is both retrospective, identifying instructional strengths and weaknesses, and prospective: test scores are reorganized according to new classroom assignments so that teachers can use those data in preparing for the year ahead.

Reviewing student performance on the prior year’s standardized tests can also highlight critical schoolwide issues. “Data start the conversation,” says Principal Erik Latoni, who noted that one priority was to improve English-language development among all students, a decision prompted by data showing lack of improvement, or in some cases a decline, in students’ English language skills. Latoni and his teachers decided that all lead teachers would attend two training programs that year to help them supplement student reading and writing skills with listening and speaking skills. He also asked lead teachers to discuss tactics for addressing the problem with other teachers in their upcoming “house meetings,” daily gatherings in which all teachers within each grade level (or “house”) gather in an empty classroom to discuss curriculum, instruction, and administrative matters. As a result of those meetings, for instance, 3rd-grade teachers created a writing rubric to regularly assess students’ English-language skills and ensure that they are continuing to make progress.

These house meetings are also where teachers collaborate to analyze their students’ results on Edison’s monthly benchmark assessments. Edison uses state standardized assessments to create these tests; each month, students are tested on a sampling of the standards they will be expected to master by the end of the year. At Feaster-Edison, benchmark assessments are administered on laptop computers wheeled into each classroom on a cart and connected wirelessly to Feaster-Edison’s main system, making results available immediately to teachers and administrators. Before meeting with others in their grade, teachers are expected to examine their students’ results and fill out a benchmark analysis form, provided by Edison, which asks what standards are not yet mastered, which students are not proficient in those areas, and what teachers plan to do about it. “You have to give teachers time to analyze data, and link the conclusions back to instruction, so that [using data] isn’t just an activity. There should be a change in instruction,” says Francisco Escobedo, the school’s former principal and now a vice president of achievement for Edison.

In grade-level meetings, teachers compare data across the grade, looking for patterns and opportunities to borrow strategies from one another. The steps they take next vary according to teacher and grade. Fifth-grade lead teacher Joshua White looks at student performance on each “strand” of standards in reading, writing, and math, both within his own class and across the grade. He then takes a highlighter to the printout of results, marking strands on which a majority of the class struggled. In addition to covering the material again and making sure students understand the vocabulary used in those questions, he will often put some of the test items up on an overhead transparency and walk students through techniques for answering them.
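
White’s highlighter pass is, at bottom, a simple aggregation: for each strand, count how many students fell short, and flag the strand if most of the class did. A minimal sketch of that logic appears below; the strand names, scores, and mastery threshold are invented for illustration and do not reflect Edison’s actual benchmark reports.

```python
from collections import defaultdict

# Hypothetical results: student -> {strand -> fraction of items answered correctly}.
# Strand names and scores are invented for illustration; nothing here reflects
# Edison's actual report format.
results = {
    "Ana":     {"fractions": 0.40, "main idea": 0.80, "measurement": 0.55},
    "Brian":   {"fractions": 0.35, "main idea": 0.90, "measurement": 0.70},
    "Celia":   {"fractions": 0.60, "main idea": 0.75, "measurement": 0.45},
    "Deshawn": {"fractions": 0.30, "main idea": 0.85, "measurement": 0.50},
}

MASTERY = 0.70  # hypothetical per-strand mastery threshold

def strands_to_reteach(results, mastery=MASTERY):
    """Flag strands where a majority of the class fell below the mastery threshold."""
    below = defaultdict(int)
    for scores in results.values():
        for strand, score in scores.items():
            if score < mastery:
                below[strand] += 1
    majority = len(results) / 2
    return sorted(strand for strand, n in below.items() if n > majority)

print(strands_to_reteach(results))  # prints ['fractions', 'measurement']
```

The point is less the code than the rule it encodes: reteach what most of the class missed, and handle the rest through small groups or individual follow-up.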

Teachers also work together to determine how to cover standards that were missed the first time. For example, in the 3rd grade, all of the teachers use the same math curriculum at the same pace, so they are able to coordinate their reteaching where necessary. At the January benchmark assessment, data showed that many 3rd-grade students missed the test’s two questions on statistics, so 3rd-grade teachers created new lessons on the subject and added them to the February calendar.

Such regular use of student achievement data at Feaster-Edison—and at all Edison schools, since the company’s national office is also reviewing the data—reinforces the sense of accountability for student performance that is shared by teachers, principals, and Edison itself. “You have to see data as helping you be accountable for something that’s really important to you,” says John Chubb, chief academic officer of Edison Schools. “We are responsible for student achievement outcomes and scores. We are literally hired and fired on that basis.”

Pioneering on the East Coast

Elm City College Preparatory School is located near New Haven’s Wooster Square, an Italian neighborhood just a stone’s throw from Yale University. The charter school opened its doors in the fall of 2004, with an elementary and a middle school crammed into a small building that had housed a Catholic school. The school is part of Achievement First, a nonprofit charter school management organization founded in 2003 by two University of North Carolina graduates (Class of 1994), Doug McCurry and Dacia Toll. McCurry and Toll wanted to replicate the success of Amistad Academy, a high-performing charter middle school they opened in New Haven in 1999. Achievement First now runs 10 schools, serving nearly 1,700 students; five are in New Haven and five in New York City.

It was at Amistad that cofounder McCurry, who was also a teacher at the school, began to develop interim assessments to track his students’ progress in math over the course of the year. “As a charter school, we were accountable for results, but there was confusion about whether we were getting there,” says McCurry. Using the Connecticut state standards and previously released test items from the state’s standardized test, the Connecticut Mastery Test, McCurry worked backward to develop a scope and sequence for the curriculum, as well as cumulative assessments that would be administered every six weeks.

Today, teachers at all Achievement First schools use a consistent scope and sequence of instruction, and progress is measured using interim assessments, modeled on the ones McCurry developed, to inform ongoing instruction in reading, writing, and math. Principals and Achievement First staff look at these and other student data to make initial plans, allocate resources, and make instructional decisions throughout the year. As with many other successful data-driven schools, at Elm City the work begins before school starts, when teachers and principals—both Dale Chu, who heads up the elementary grades, and Marc Michaelson, who oversees the middle school—use a variety of diagnostic tests to understand the ability and achievement levels of their incoming students.

So far, the Connecticut Mastery Test has been of limited use to Elm City because scores arrive after the school year is in full swing. Though the test is being revamped, Elm City is not convinced that standardized tests will ever add much value at the classroom level.

“Our number-one data point for driving instruction is the interim assessment,” says middle-school principal Michaelson. These assessments are given manually, with paper and pencil, mirroring the testing conditions in which students take the state test, and hand-scored by teachers. Teachers analyze their own class results by completing a “reflection form,” similar to Edison’s benchmark analysis form, that requires them to list the questions and standards that were not mastered, the names of students who missed each question, and how they plan to work with those students to address those areas. The cost of these interim assessments to Achievement First is not insignificant: McCurry estimates that each one costs $500 to $1,250 to develop. With 12 separate assessments needed for each grade—6 for math and 6 for reading—and sometimes additional tests in subjects such as writing and grammar, the school could easily spend $20,000 for a single grade’s tests; fortunately, each assessment can be used in all Achievement First schools in a given state.

Administering, grading, and analyzing these assessments is also time-consuming, especially as the cumulative assessments get longer over the course of the year, covering all material taught to date. Michaelson estimates that the process of administering the test to a class, hand-grading each one, analyzing the class results, and discussing them with him takes each teacher anywhere from three hours for the reading assessment in the early part of the year to seven hours for math near the end of the year. For teachers, though, the value of this regular, specific data analysis seems to outweigh the hassle. “I am okay with the time it takes to grade and analyze the data because it’s ultimately for my kids’ benefit and mine as a teacher,” says Seisha Keith, who teaches 6th-grade reading and writing.

Elm City has now created a data-collection culture that affects all aspects of the school, including how teachers are recruited, prepared, and supported. “Teachers have to be data-driven to get in the door,” says Michaelson, who looks for evidence that prospective teachers have a quantifiable history of student achievement gains and are able to provide thoughtful explanations of how they have helped students improve. All new Achievement First teachers receive training on how to use interim assessments, while all new principals learn about how to analyze assessment results and have effective one-on-one conversations with teachers about the data.

The evidence so far suggests that Elm City is working for students. In its first year, 2004–05, the percentage of kindergarten and 1st-grade students reading at or above grade level increased from 26 to 96 percent; in the same period, the percentage of 5th graders reading at or above grade level increased from 18 to 55 percent. What’s more, if the experience of Amistad Academy is any indication of Achievement First’s potential here, then Elm City’s improvement is likely to continue. Students in the class of 2004 at Amistad Academy went from 38 percent mastery in reading in 6th grade to 81 percent mastery in 8th grade. Their peers in New Haven public schools during this same period climbed slightly, from 25 percent mastery in 6th grade to 31 percent mastery in 8th grade. In math, Amistad students went from 35 percent mastery in 6th grade to 76 percent mastery in 8th grade, while their New Haven peers’ performance declined from 24 percent mastery in 6th grade to only 19 percent mastery in 8th grade.

Lessons Learned

As the experiences of these three schools make clear, the use of data can help teachers and leaders stay focused on student achievement. These schools have all achieved impressive results by using data on student performance in subjects like reading, math, and science, and yet all still manage to deliver a well-rounded curriculum that includes art, music, and physical education. They also discovered that not all data are the same, that data collection and analysis are only tools, and that the tools must be properly used to be effective.

Results from annual standardized tests can be useful for accountability purposes, but student progress must be measured on a far more frequent basis if the data are being used to inform instruction and improve achievement. To be useful in this way, interim assessments must be tied to clear standards. Another lesson is that ample time must be taken to analyze the implications of student assessment results, to plan for how instruction should be modified accordingly, and to act on those conclusions; otherwise, the data become just more information gathering dust on a shelf. Data should launch a conversation about what’s working, what’s not, and what will be done differently as a result. Administrators and principals must make explicit the time commitment necessary for capturing, analyzing, and acting on data, and support the work by scheduling time and allocating resources for it. In Aldine, district officials are looking at ways to build time for reteaching into the school year, based on feedback from teachers.

The results shown by these schools, however preliminary, were not achieved overnight. They are the product of steady and open analysis of student and school performance over many years. All of these schools have become collaborative, transparent communities in which information is shared freely and regularly as a means of focusing the entire school on improving instruction and increasing student achievement. Some, like Aldine and Feaster-Edison, have found that some teachers embrace these practices readily, while those who do not eventually choose to work in other schools. Elm City, meanwhile, recruits teachers who are comfortable using data, so they are committed to this approach from the beginning.

Despite the progress made by these and other schools, there are still barriers to more widespread adoption of these practices. For example, more streamlined technology systems could reduce the time and effort that teachers and administrators must now spend on manual processes. Technology could make a powerful difference by administering tests, automating their grading, and displaying data to district leaders, principals, teachers, and students in a timely way that makes strengths and weaknesses clear and next steps more obvious. The right data, provided at the right time and in the right way, can be a powerful driver for school improvement.

Julie Landry Petersen is communications manager at NewSchools Venture Fund, a nonprofit venture philanthropy firm based in San Francisco, California.
