
Thursday, July 25, 2013

Teacher Evaluation Report Out: You May Be Disappointed

The Michigan Council for Educator Effectiveness has put out their report, recommending how teachers should be evaluated in Michigan. Remember, they were asked to do this by Governor Snyder, and the committee that came up with these recommendations was chaired by Dr. Deborah Ball, Dean of the University of Michigan School of Education.

I have not read it yet, but based on the comments of people who have, I think that a lot of people--maybe you, maybe not--will be disappointed in the recommendations.

On the MCEE website you will find the Executive Summary and the entire report, available for download. The Detroit Free Press has posted it here. I encourage you to read it!

As I said, I haven't read it, but some other people have. This is a summary from Steve Norton of Michigan Parents for Schools:

The full copy of the report, plus an executive summary and more, is available here.

Good news: strong recommendation that evaluation of teacher practice be a major piece of teacher "ratings."
Bad news: complies with existing law in that "objective measures of student growth" (i.e. test scores of some kind) form 50% of the evaluation, with a large portion of that based on value added models.
Issues: Contrary to the report, existing examples of VAM suffer from severe problems of statistical validity (is it measuring what you think it is?) and reliability (do you get the same results consistently?). Biggest problem: statistical correction for out-of-school factors is nearly impossible given the data legally available to schools. For example, the report says that VAM models can control for "socio-economic status." However, schools do not have access to information like family income or parental education; they only have a yes/no flag for whether a student is eligible for free or reduced-price lunch. Academic research on VAM has relied on far richer and more detailed data to control for home factors than is available to any school district. Without adequately controlling for these outside factors, which we already know are the largest influences on academic performance, VAM models produce unreliable estimates.
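To see why omitting home factors matters, here is a minimal toy sketch (not any official VAM, and all numbers are invented for illustration): two hypothetical teachers add identical value, but one teaches students with more out-of-school support. A naive "value-added" estimate that only averages score growth per teacher attributes the difference to the teachers.

```python
# Toy illustration of omitted-variable bias in a naive value-added
# estimate. All data and effect sizes below are invented.

# Each student: (teacher, pre_score, home_support)
# True model: post = pre + 5 (every teacher adds exactly 5 points)
#                  + 4 * home_support (an out-of-school factor)
students = [
    ("A", 50, 1), ("A", 60, 1), ("A", 55, 1),
    ("B", 50, 0), ("B", 60, 0), ("B", 55, 0),
]

TRUE_TEACHER_EFFECT = 5  # identical for both teachers by construction

def post_score(pre, home):
    return pre + TRUE_TEACHER_EFFECT + 4 * home

# Naive "VAM": average growth (post - pre) per teacher,
# ignoring home_support entirely.
growth_by_teacher = {}
for teacher, pre, home in students:
    growth = post_score(pre, home) - pre
    growth_by_teacher.setdefault(teacher, []).append(growth)

naive = {t: sum(g) / len(g) for t, g in growth_by_teacher.items()}
print(naive)  # {'A': 9.0, 'B': 5.0}
# Teacher A looks 4 points "better" purely because of home factors,
# even though both teachers have the same true effect.
```

With only a yes/no lunch-eligibility flag, a district cannot recover the underlying home-support variable, so this bias cannot be fully corrected no matter how the model is tuned.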

In fairness to the committee, the requirement that 50% be based on student growth and assessment data is part of the new state law. BUT--it's a bad state law, and I think the committee could have chosen to push back against it. They didn't.

Steve Norton also sent me this information:

Yes, the 50% based on student growth using VAM is currently in law for what the "governor's council" must report. Since this was a required feature of the recommended state evaluation system, one must conclude that the Legislature intended to make this a feature of whatever evaluation system they approve. 
This section of the Revised School Code was passed in July 2011. From MCL 380.1249: 
"(2) Beginning with the 2013-2014 school year, the board of a school district or intermediate school district or board of directors of a public school academy shall ensure that the performance evaluation system for teachers meets all of the following:
(a) The performance evaluation system shall include at least an annual year-end evaluation for all teachers. An annual year-end evaluation shall meet all of the following:
(i) For the annual year-end evaluation for the 2013-2014 school year, at least 25% of the annual year-end evaluation shall be based on student growth and assessment data. For the annual year-end evaluation for the 2014-2015 school year, at least 40% of the annual year-end evaluation shall be based on student growth and assessment data. Beginning with the annual year-end evaluation for the 2015-2016 school year, at least 50% of the annual year-end evaluation shall be based on student growth and assessment data. All student growth and assessment data shall be measured using the student growth assessment tool that is required under legislation enacted by the legislature under subsection (6) after review of the recommendations contained in the report of the governor's council on educator effectiveness submitted under subsection (5)...."
"(5) Not later than April 30, 2012, the governor's council on educator effectiveness shall submit to the state board, the governor, and the legislature a report that identifies and recommends all of the following for the purposes of this section and that includes recommendations on evaluation processes and other matters related to the purposes of this section:
(a) A student growth and assessment tool. The student growth and assessment tool shall meet all of the following:
(i) Is a value-added model that takes into account student achievement and assessment data, and is based on an assessment tool that has been determined to be reliable and valid for the purposes of measuring value-added data...."

And this, folks, is what we're up against.

1 comment:

  1. Hi Ruth,

    I read through a large portion of the report and have some concerns. I read through certain areas multiple times and I'm not sure I fully understand parts of it.

    1. One part stated that 5% (I believe it was 5) of a teacher's evaluation can come from the entire school's data. I'm not sure how this helps anyone or is an accurate assessment of the individual teacher (is it somehow supposed to act as a motivator to all work together "better"?).

    2. There is a graphic that lays out where students are already taking standardized tests that measure their progress on that year's specific standards. The recommendation seemed to be that students should be tested each year (somehow) in order to standardize data from each teacher. I feel like I'm understanding this part wrong, but it seems like they're advocating for standardized testing in every subject at every grade level. If so, this is crazy.

    3. I'm a bit confused as to what happens to teachers who do not have students for a full year. The report states that teachers can be held accountable for certain areas they do not teach, as long as they know they're being assessed on them (for example, an Art teacher could be held accountable for writing scores, since getting students to write is a collaborative effort across content areas). What happens to teachers who are on a trimester schedule (like Skyline) and only have students for 12 weeks? How can we actually attribute student data to specific teachers? Even with semesters it will be difficult. As a teacher, I would feel much more comfortable with my students being tested if I could actually work with them for a full year.

