Over the past several years, Charlotte-Mecklenburg Schools has invested millions of dollars and countless hours on measuring teachers’ performance and trying to make them more effective.
It’s a quest that drives education policy and research across the nation, and CMS has been a pioneer.
Two studies released this week highlight how elusive those goals can be.
One, being presented Thursday to the N.C. Board of Education, examines the state’s attempt to gauge teacher effectiveness by crunching student test scores. Researchers at UNC-Chapel Hill and Vanderbilt University looked at five years of “value-added” teacher ratings.
In 2009, when the ratings started, CMS appeared to be better than other large districts on a ratio that compares top performers with below-par teachers. But over time CMS stayed flat while districts such as Wake, Cabarrus and Union surpassed it.
The other is a national study by TNTP (formerly The New Teacher Project), which has worked with CMS and other districts to figure out how to reward top teachers and help others improve. It’s the kind of work that CMS had hoped would move large numbers of teachers to higher levels of performance.
But after intensive study of three large districts (Superintendent Ann Clark says CMS wasn’t one of them), TNTP concluded that no one has found anything that’s consistently effective. After the districts spent an average of $18,000 per teacher per year, only 3 of 10 teachers saw significant improvement, 5 of 10 made little change and 2 of 10 declined, says the report, titled “The Mirage.”
The notion persists that we know how to help teachers improve and could achieve our goal of great teaching in far more classrooms if we just applied that knowledge more widely. It’s a hopeful and alluring vision, but our findings force us to conclude that it is a mirage.
TNTP report on teacher development
“Unfortunately, our research shows that our decades-old approach to teacher development, built mostly on good intentions and false assumptions, isn’t helping nearly enough teachers reach their full potential – and probably never will,” the TNTP report concludes.
Using tests to rate teachers
A quick refresher for those who aren’t immersed in eduspeak: Value-added ratings are based on the notion that complex formulas can tease out how much teachers contribute to their students’ year-to-year progress on standardized exams.
No one claims that scores are entirely under the teacher’s control, or that you can look at one student’s results and gauge how good the teacher is. But given enough scores over a period of time, proponents say you can get a meaningful measure of which teachers consistently get the best results.
Skeptics say the approach puts too much emphasis on testing and ignores the personal qualities that make a great teacher shine.
If we don’t talk about differences in effectiveness of teachers, we’re jeopardizing our students.
CMS Superintendent Peter Gorman in 2010
Former CMS Superintendent Peter Gorman was gung-ho on value-added ratings. He introduced a barrage of local tests designed to produce data on all teachers. He talked about using the ratings to reward top performers and help all teachers improve their skills. By 2014, he hoped to tie all CMS teachers’ pay to performance, based partly on test scores.
His plan collided with the recession. There was no money for rewards. Amid layoffs, some teachers believed the ratings would be a pretext for firings. Parents balked at the surge in testing.
Gorman resigned in 2011.
Leadership of the district has been in flux ever since. So have efforts to use teacher ratings to benefit students.
But the state picked up where Gorman left off.
What N.C. numbers show
Today, teachers whose students take state exams get value-added ratings. Those ratings are used, along with five other measures, in job evaluations.
Individual ratings are not made public, but for each school and district the state reports the number of teachers who met, exceeded or failed to meet expected growth for their students.
The Consortium for Educational Research and Evaluation – North Carolina reviewed those numbers to answer some big-picture questions.
Boosting test scores is not the only important thing a teacher does.
Report on value-added ratings in North Carolina
Yes, the researchers conclude, students who get top-rated teachers do show bigger gains on state exams, though the effect is stronger in math and science than reading and English.
And North Carolina’s public schools are making progress at getting those top teachers into struggling schools, rather than leaving them concentrated in the high-performing, low-poverty schools that tend to attract them. CMS is specifically cited for making gains on that front. In recent years the district has offered a range of incentive programs – Strategic Staffing, Project LIFT and Opportunity Culture, for example – that offer bonuses and/or higher pay to teachers with top ratings willing to work in struggling schools.
The report also looks at the ratio of teachers in the top category to those in the bottom one in the 11 largest districts. CMS “is steady and high,” it notes. But while CMS remains well ahead of such districts as Guilford and Gaston, it has fallen behind Wake, Cabarrus and Union counties. The state report runs through 2013. I checked the 2014 ratings and found those trends continuing. (Test scores and ratings for 2015 will come this fall.)
Charter chain stands out
The TNTP report, released Tuesday, left researchers stymied. They looked at all sorts of efforts to improve teacher effectiveness, from coaching and mentoring to district training and outside workshops.
Using a range of measures, including value-added ratings, they found that teachers in the three large districts tended to get better in their first five years, then level off, regardless of efforts to help them improve. And those with low effectiveness tended to overestimate their skills, the report says.
“Even when teachers do improve, we were unable to link their growth to any particular development strategy,” the report says. “School systems are not helping teachers understand how to improve – or even that they have room to improve at all.”
But there was an exception: TNTP also studied “a midsize charter management organization working in several cities” (none of the participants was named). There they found stronger teacher improvement that continued at all experience levels, accompanied by bigger gains from students. Teachers there were more likely to acknowledge room to improve, the report says.
The researchers credit “a robust and deliberate culture of high expectations and continuous learning.”
“In focus groups, (charter) teachers reflected on the sense that everyone in their school community is constantly working toward better instruction, and pushing each other to do their best work,” it says.
▪ For the national report on teacher development, go to http://tntp.org/publications and select “The Mirage.”
▪ For the report on N.C. teacher effectiveness, go to http://stateboard.ncpublicschools.gov/ and select the Aug. 6 agenda under “SBE meetings.” The report on distribution of teachers is item IV.
▪ Look up school and district performance on teacher effectiveness ratings at http://apps.schools.nc.gov/pls/apex/f?p=155:1
▪ To keep up with the Observer’s Your Schools blog: www.charlotteobserver.com/news/local/education/your-schools-blog/
Where the districts rank
The report going to the N.C. Board of Education calculates a ratio of teachers who exceeded expectations to those who fell below the standard. Higher ratios are better. Here are 2014 results for select districts, calculated by the Observer (percentages and ratios may not match precisely because of rounding).
Wake: 27 percent of teachers exceeded the standard while 9 percent fell short. Ratio: 2.9.
Cabarrus: 25 percent exceeded, 9 percent fell short. Ratio: 2.7.
Union: 24 percent exceeded, 10 percent fell short. Ratio: 2.3.
CMS: 24 percent exceeded, 13 percent fell short. Ratio: 1.9.
Guilford: 19 percent exceeded, 14 percent fell short. Ratio: 1.4.
North Carolina: 20 percent exceeded, 16 percent fell short. Ratio: 1.3.
Gaston: 21 percent exceeded, 20 percent fell short. Ratio: 1.04.
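The ratio in the list above is simply the percentage of teachers who exceeded the standard divided by the percentage who fell short. A quick sketch (using the rounded percentages as published; the Observer computed the published ratios from unrounded figures, so recalculated values can drift slightly) shows how the numbers line up:

```python
# Ratio of teachers exceeding expected growth to teachers falling short,
# using the rounded percentages from the table above. Because the published
# ratios were computed before rounding, a recomputed value can differ a
# little (e.g. Wake's 27/9 = 3.0 vs. the published 2.9).
districts = {
    "Wake": (27, 9),
    "Cabarrus": (25, 9),
    "Union": (24, 10),
    "CMS": (24, 13),
    "Guilford": (19, 14),
    "North Carolina": (20, 16),
    "Gaston": (21, 20),
}

for name, (exceeded, fell_short) in districts.items():
    ratio = exceeded / fell_short
    print(f"{name}: {ratio:.2f}")
```

Higher ratios mean a district has more top-rated teachers relative to its low-rated ones; a ratio near 1, like Gaston's, means the two groups are roughly the same size.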