Portfolio Assessment

Can it be used to hold schools accountable?

At the Beacon School in Manhattan, the teachers and administrators thought they had resolved, at least to their satisfaction, the long national debate over how best to assess students’ work. From the school’s founding in 1993, Beacon’s educators decided to treat their diverse student body, 26 percent of whom come from low-income families, like graduate students. Instead of taking the usual multiple-choice tests and receiving letter grades, the high schoolers would complete long-term projects and defend their work before faculty panels.

Beacon, a public alternative school, soon became a national model for advocates of what modern educators call “portfolio assessment.” Portfolios, a term derived from the carrying case of paintings or drawings that artists present as proof of their talents, are collections of student work. As graduation time neared each year at Beacon, seniors had to present portfolios of their essays, lab reports, problem solutions, and research projects from the past three years–three projects in science, three in history, four in English, and three in foreign languages.

But when New York State began requiring students to pass the standardized Regents tests in order to graduate from high school, Beacon was forced to reduce the number of projects and cut the time for assessing them. For instance, principal Stephen Stoll says the biology course that used to include 70 labs a year now has only 30, because students need more time to learn the terms and concepts that will be on the Regents test.

With the nationwide efforts to raise graduation standards and the increasing use of standardized testing, the idea of basing promotion and graduation decisions on portfolios of students’ work has fallen out of fashion as swiftly as slide rules gave way to calculators. Some schools have tried to keep portfolios as a tool for classroom teachers, but even the most ardent advocates have acknowledged that samples of student work cannot compete with the ability of standardized testing to quickly and cheaply determine the overall performance of a school or a school district.

“If the goal is simply to sort, stratify, and rank, portfolios add little if you already have test data,” says Monty Neill, executive director of FairTest, a Massachusetts-based organization that opposes standardized testing. “If the goal is rich feedback at individual or school level, portfolios of some sort are indispensable while tests are of minimal use as they provide far too little information.”

Waning Interest

The idea of authentic assessment–evaluating children based on an in-depth examination of their work rather than their scores on standardized tests–goes back a century, to the beginnings of the progressive education movement. Even then portfolios were considered time consuming, but the approach fit well with the progressives’ emphasis on cultivating research skills and creative thinking rather than building a broad base of knowledge in the subject. Moreover, many teachers and students liked portfolios, and they became a key part of the alternative public schools that sprang up during the 1960s and 1970s.

At places like the famed Central Park East Secondary School in Manhattan, Deborah Meier and other progressive educators began to experiment with judging low-income inner-city students on the basis of collections of their best work and oral examinations. They found that if students did well on these alternative assessments, they gained admission to college and tended to do well there.

The National Writing Project, begun in 1974 at the University of California at Berkeley, stemmed from a similar notion: that regular reviews of the process of writing, with repeated drafts and frequent editing, were a better way to assess how the student was doing than the old way of grading grammar and spelling tests and the final version of any written assignment. That series of drafts would be all an assessor needed to judge the student.

The portfolio idea gained strength in the 1980s. Drew Gitomer, vice president for research at the Educational Testing Service (ETS), worked on the Arts Propel project with Howard Gardner and Dennie Palmer Wolf of Harvard’s Project Zero. “We explored the idea of portfolios in writing, music, and art–the latter for all students, not just the serious musicians/artists,” Gitomer said. “These efforts, as well as many others, were focused on teachers and classrooms, rather than measures of accountability.”

Even as several southern governors, including Richard Riley of South Carolina, James Hunt of North Carolina, and Bill Clinton of Arkansas, were working to spread the standards movement, which would become the most significant threat to portfolio assessment, some states experimented with portfolios on a large scale. Vermont and Kentucky investigated the possibility of using portfolio assessments instead of standardized tests to judge the progress of schools, districts, and the state. Some schools in both states piloted programs in which student work instead of multiple-choice tests was used to evaluate their academic progress.

But in 1994, RAND Corporation researcher Daniel Koretz, now at the Harvard Graduate School of Education, released a report on portfolio assessment in Vermont that many experts say dampened enthusiasm for this method of grading. Koretz found that portfolio assessment was not all that useful in evaluating schools or students because one school might require one kind of project, another school quite a different one. It was difficult to compare their work and determine whether the standards were high enough. Teachers, Koretz found, also complained that portfolios were cutting into valuable teaching time. Math teachers, he wrote, “frequently noted that portfolio activities take time away from basic skills and computation, which still need attention.”

Koretz’s careful methodology and national reputation had an impact, but there were signs that portfolios were already losing ground. Around the same time as the report’s release, British prime minister John Major discarded the portfolio system that had been used for 20 years as the nation’s graduation exam in English. Dylan Wiliam, a British assessment expert who now works for ETS, said Major felt “that timed written examinations were the fairest way to assess achievement at the end of compulsory schooling.” Nevertheless, about 40 percent of the English exam grade and 20 percent of the math grade are still based on portfolio-like elements.

Middle Ground

The decline of portfolios as a large-scale accountability measure is not necessarily a bad thing, Gitomer said. “The power of portfolios resides in its coming out of the student’s own classroom practice. The value resides in the wealth of information available and the various conversations that one can have about the work and the portfolio creator. If all you’re going to do is give a single score, there are far more efficient ways of getting at a student’s achievement level.”

Ronald Wolk, founder of the newspaper Education Week, said he appreciates the need for large-scale assessments, but thinks the standardized tests that are replacing portfolios are no easier to judge than actual student work. “Officials object to using portfolios for assessment because they are too subjective,” said Wolk, who admires the Beacon School’s grading system. “But that is exactly how the writing on Regents exams is scored. Teachers read and grade the exams according to their best judgment. At Beacon, at least, the teachers use rubrics that they have crafted and honed over the years.”

Most critics of portfolio assessment say they like the emphasis on demonstrated writing and oral skill, but have seen too many instances in which a refusal to give traditional tests of factual recall leads to charmingly written essays with little concrete information to support their arguments.

Advocates of portfolios respond that such lapses can be blamed on bad teaching, not on the use of portfolios; used properly, portfolios can also lead students to master a broad range of material. Neill said the idea is to collect key pieces that provide evidence of learning in key areas. Even with a standard high-school grading system, he said, “unless all children take precisely the same curriculum, and master it to a similar degree, and then recall it all, they will come into any college course with different aspects of knowledge and different gaps in that knowledge.”

At Beacon, Stoll said the faculty is trying to maintain the portfolio system in a limited form, “but it is hard. You have the teacher telling the student to get his portfolio done and he says that he is studying for the Regents test. It is like mixing two different currencies, and the bad currency drives out the good currency in a certain sense.” Beacon’s request to be exempt from the Regents tests was turned down by Richard P. Mills, New York’s commissioner of education, who had tried portfolio assessments when he was the state school superintendent in Vermont.

The argument between advocates of standardized tests and advocates of portfolios usually ends with each side saying it cannot trust the results produced by the other. Authentic assessment “is costly indeed, and slow and cumbersome,” said Chester E. Finn Jr., president of the Thomas B. Fordham Foundation and a supporter of standardized testing, “but I think its biggest flaw as an external assessment is its subjectivity and unreliability.”

Robert Holland, a senior fellow at the Lexington Institute, a Virginia-based think tank, raised the issue of cheating. “Scorers may have no way to tell if the work samples came from a student or a smart uncle or from an Internet download,” he said. Portfolio supporters note that regular tests have also produced cheating incidents.

Lisa Graham Keegan, chief executive officer of the Washington-based Education Leaders Council, said she thinks portfolios can help teachers assess their students’ progress, but are not a good tool for determining how a school or a district is doing. She remembers a visit to a northern Arizona school where “the writing teacher was showing me a portfolio of a student’s work in which the student was writing about kamikaze pilots during World War II.” Keegan was state school superintendent for Arizona at the time and saw that “the essay was horribly written, with glaring spelling and grammatical errors, and yet had received a score of 23 out of 25 points.

“The teacher was just glowing with what a mature and moving topic the student had chosen without any direction from her. I was less impressed and said so–something along the lines of how I could appreciate that the student had something interesting to say, but my first impression was that he didn’t know how to say it–and wasn’t that the first order task for the teacher?”

Having students display their personal strengths is fine, Keegan said, as long as they still learn to read, write, and do math capably before they graduate. “A collection of student work can be incredibly valuable,” she said, “but it cannot replace an objective and systematic diagnostic program. Hopefully, we will come to a place where we incorporate both.”

Jay Mathews is a reporter and columnist at the Washington Post and the author, most recently, of Harvard Schmarvard: Getting Beyond the Ivy League to the College That Is Best for You (2003).
