<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=192888919167017&amp;ev=PageView&amp;noscript=1">
Tuesday,  April 23 , 2024

Linkedin Pinterest
News / Nation & World

Good teaching, poor scores: Doubt cast on evaluations

Students' performance is unreliable way to gauge teachers

The Columbian
Published: May 13, 2014, 5:00pm

WASHINGTON — In the first large-scale analysis of systems that evaluate teachers based partly on student test scores, two researchers found little or no correlation between quality teaching and the appraisals teachers received.

The study, published Tuesday in Educational Evaluation and Policy Analysis, a peer-reviewed journal of the American Educational Research Association, is the latest in a growing body of research that has cast doubt on whether it is possible for states to use empirical data in identifying good and bad teachers.

“The concern is that these state tests and these measures of evaluating teachers don’t really seem to be associated with the things we think of as defining good teaching,” said Morgan Polikoff, an assistant professor of education at the University of Southern California. He worked on the analysis with Andrew Porter, dean and professor of education at the University of Pennsylvania.

The number of states using teacher-evaluation systems based in part on student test scores has surged in the past five years. Many states and school districts use the evaluation systems in decisions on hiring, firing and compensation.

The rapid adoption has been propelled by the Obama administration, which made the systems a requirement for any state that wanted to compete for Race to the Top grant money or receive a waiver from the most onerous demands of No Child Left Behind, the 2002 federal education law.

Thirty-five states and the District of Columbia require student achievement to be a “significant” or the “most significant” factor in teacher evaluations. Just 10 states do not require student test scores to be used in teacher evaluations.

Most states use “value-added models” — VAMs — statistical algorithms designed to figure out how much teachers contribute to their students’ learning, holding constant such factors as demographics.

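The article does not walk through the math, but the basic logic of a value-added estimate fits in a short script. The sketch below is a bare-bones illustration in Python, using synthetic data and hypothetical column names; it is not the model used by any state system, which typically involves more elaborate multilevel specifications estimated over several years of data.

```python
# A minimal sketch of the value-added idea, not any state's actual model:
# regress each student's current score on prior achievement and demographic
# controls, then average the residuals by teacher. All names and data here
# are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "prior_score": rng.normal(500, 50, n),
    "low_income": rng.integers(0, 2, n),   # hypothetical demographic control
    "teacher_id": rng.integers(0, 25, n),  # 25 hypothetical teachers
})
# Synthetic outcome: mostly prior achievement plus noise, with a small
# per-teacher effect baked in so there is something to recover.
teacher_effect = rng.normal(0, 5, 25)
df["score"] = (0.8 * df["prior_score"] - 10 * df["low_income"]
               + teacher_effect[df["teacher_id"]] + rng.normal(0, 30, n))

# Step 1: predict scores from prior achievement and demographics alone.
X = df[["prior_score", "low_income"]]
model = LinearRegression().fit(X, df["score"])
df["residual"] = df["score"] - model.predict(X)

# Step 2: a teacher's "value added" is the mean residual of their students.
value_added = df.groupby("teacher_id")["residual"].mean().sort_values()
print(value_added.head())
```

Even in this toy version, the disputed steps are visible: the controls chosen in step 1 define what counts as the teacher's contribution, and the per-teacher averages in step 2 are noisy when a teacher has few students.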
Polikoff and Porter analyzed 327 fourth- and eighth-grade mathematics and English-language-arts teachers across six school districts: New York; Dallas; Denver; Charlotte-Mecklenburg, N.C.; Memphis, Tenn.; and Florida's Hillsborough County.

The data came from the Measures of Effective Teaching, a larger project funded by the Bill and Melinda Gates Foundation. Polikoff and Porter’s work also received a $125,000 Gates Foundation grant.

The researchers found that some teachers who were well-regarded based on student surveys, classroom observations by principals and other indicators of quality had students who scored poorly on tests. The opposite also was true.

Teacher-evaluation systems have stirred up controversy and some recent legal challenges.

The Houston Federation of Teachers filed a federal lawsuit this month charging that Houston’s “value-added” teacher-evaluation system violates educators’ rights.

A similar challenge popped up in Tennessee. In Florida, teachers are in an uproar over a state system that assesses some educators using scores of students they never taught.

Last month, the American Statistical Association urged states and school districts not to use VAM systems in personnel decisions, noting that recent studies found that teachers account for at most about 14 percent of the variability in students' test scores, with other factors responsible for the rest.

Polikoff said policymakers should rethink how they use value-added models.

“We need to slow down or ease off completely for the stakes for teachers, at least in the first few years, so we can get a sense of what do these things measure, what does it mean,” Polikoff said. “We’re moving these systems forward way ahead of the science in terms of the quality of the measures.”
