Wednesday, February 3, 2016

Guest Post: A Parent Reviews Her Child's M-STEP Results, and Learns...

A guest post by Naomi Zikmund-Fisher about the M-STEP results, and what they mean.

Last week, we finally received my children’s scores from the M-STEP test they took last spring. My son, a fourth grader at the time (now a fifth grader), and my daughter, a high school junior (now a senior), both took the test. For more on that decision, you can read here.


In the interest of maintaining some of their privacy, I’m not going to share how my kids did on the test. More to the point, it probably doesn’t matter how they did, as their performance on this first round appears to have been largely discarded.


As a former teacher and administrator, I probably know more about how to read a score report than most parents. Theoretically, I should be able to get all there is to get out of these scores. So, here’s what I learned from looking at my children’s score reports:


  1. Last spring, my children were doing about as well academically as their teachers said they were. There were no real surprises. You could have looked at their report cards and gotten the same information that M-STEP gives you.


  2. That information is wildly out of date. They took this test in a window from March to May. I got the scores in January. Whatever new information may have been useful in the scores is no longer pertinent.


  3. The science and social studies tests measure curriculum alignment more than anything else. They are broken out by smaller subject areas (e.g., physical science and life science, or economics and geography). You can see that in this sample of a child’s 4th grade science scores.

Sample M-STEP information provided to parents, in this case for the science test.



When they say a child is proficient, what does that mean?

My children did best in areas that they had studied recently and worst in those from previous years. In other words, this test measured what classes they were taking, not anything about my children or about whether their teachers were teaching well.



  4. The target area for “proficient” is, in some cases, shockingly small. Scores are reported graphically (among other ways) on a continuum of four ranges. “Proficient” is the second range from the top and the smallest, sometimes by quite a bit.


But shouldn’t “just fine” be a fairly broad range of kids? When did we stop recognizing that “normal” isn’t a single point, it’s a spectrum?

Sample of information provided to parents. Note that the grey "margin of error" band overlaps both the "partially proficient" and "advanced" categories, meaning that a child who scores in the yellow/grey overlap as "partially proficient" might actually be "proficient" on another day. Note also that the green bubble of "proficient" covers a much smaller area than the bars for not proficient, partially proficient, or advanced.

This picture shows the score graphic for the same student whose subject scores were above. This child is supposedly proficient in 4th grade science [the score is right in the middle of the green bubble]. As you can see, this is quite a feat, since the “Proficient” range is about 5.5% of the total.

What’s more, while it’s great that the score report acknowledges a “margin of error” around the score, that margin is substantially larger than the target itself. This means that three kids who score as “partially proficient,” “proficient,” and “advanced” might all know exactly the same amount of science. We sing the praises of one (and the wonderful teacher who taught her) while wringing our hands about another (and the mediocre educator she had) when there is truly no difference at all.
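To see just how lopsided that is, here is a minimal sketch with entirely hypothetical numbers (the cut scores, band width, and margin of error below are made up for illustration, not actual M-STEP values). When the margin of error is wider than the “proficient” band, three observed scores that all sit within one margin of the same true score can land in three different categories:

```python
# Hypothetical illustration of a "margin of error" wider than the
# "proficient" band. None of these numbers are real M-STEP values.

PARTIALLY_PROFICIENT_MAX = 1500  # hypothetical upper cut for "partially proficient"
PROFICIENT_MAX = 1511            # hypothetical: only an 11-point "proficient" band
MARGIN_OF_ERROR = 15             # hypothetical margin of error, wider than the band

def label(score: int) -> str:
    """Map a scale score to a performance category (hypothetical cuts)."""
    if score <= PARTIALLY_PROFICIENT_MAX:
        return "partially proficient"
    if score <= PROFICIENT_MAX:
        return "proficient"
    return "advanced"

true_score = 1505  # one student's "true" ability, squarely in the green bubble
for observed in (true_score - 10, true_score, true_score + 10):
    # All three observations fall within the +/-15 margin of the same
    # true score, yet they land in three different categories.
    print(observed, "->", label(observed))
# 1495 -> partially proficient
# 1505 -> proficient
# 1515 -> advanced
```

With these made-up cuts, the same child could print out as “partially proficient,” “proficient,” or “advanced” depending on nothing but measurement noise.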


In the end, what I realize once again is that this data is designed to measure districts and schools much more than to give us any useful information about individual children. Even without the huge delay in score reporting, the amount of useful information about a single child that you can’t find more easily somewhere else is minimal.


It’s reasonable to say that the measure of a school or a district is how well its children are prepared for the next phase of life. The problem is, we’re substituting this test for the real answer to that question. We’re asking our kids to take hours upon hours of tests – time they could spend actually learning something – in service of measuring their school system.


If we already know how they’re going to do on the tests, then we already know the answer we’re going to get. And if we don’t already know how they’re going to do, it’s either a really bad test or a school so out of touch with its students that it should be obvious in multiple other ways.


I can say unequivocally, however, as an educator and as a parent, that the M-STEP given last spring was just plain a waste of my children’s time.


4 comments:

  1. FYI: All the images in this post say I'm not authorized to view them.

  2. And a waste of the teachers' time and my child's time and the schools' time and the district's time and the state's time—and my huge investment of parental time spent skimming it and throwing it out. It's in my taxpayer role that this burns me up most. How much money is piddled away on this?

    1. That is a number that might be hard to pin down but is probably worth researching.

