I've never been a particularly anti-test person. I always thought that test taking was a skill that people needed, just like writing, arithmetic, or learning how to tie a shoelace. But I also thought we shouldn't overemphasize tests.
For myself, I have pleasant memories of filing into the school cafeteria to take the Iowa tests. I was lucky that my name wasn't something long like Smolensky or Blagojevich [sneaky reference to a former Illinois governor who is going to jail this week], because names like those were too long to fit in the bubbles. And of course I made sure to fill in the bubbles well with my newly-sharpened #2 pencil.
Do you remember the Iowa tests? They have a long history. They were first developed in 1935. The Iowa tests are what are called "norm-referenced" tests. In other words, they score test-takers on a bell curve. Some students will be above average; the majority will be average; and some students will be below average. [Below, there is a drawing of a bell curve. See how it looks like a bell? Hence the name.]
[Figure: The Bell Curve]
On the other hand, you can see the problem. In the broader population, it is impossible for everybody to score well. Half the test-takers would have to be below average. If a large population moves its scores (everyone starts reading better), then the bell curve shifts to the right--but still, half the kids are below average. Also, norm-referenced tests tend to focus on the kinds of questions that differentiate between students, and not the kinds of questions that show proficiency in certain areas. In other words, the point of the test is to rank students.
If, by chance, you have ever had a teacher who "graded on the curve" or "curved the grades," that teacher was working toward a certain middle ground. If she or he expected that most of the class would get a B, but the average grade was a C, the teacher might think "I guess I made that test too hard" and move the average up to a B. I was very thankful for that in college physics, where my C- turned into a B- thanks to a professor's curving of the final exam.
In my test-taking heyday--which might have been eighth or ninth grade--I enjoyed the tests as a break from my regular school work. I enjoyed completely filling in the circles. I didn't feel any pressure about the tests, because a) they didn't mean much of anything and b) I always scored well on tests. In other words, whether it was an Iowa test or an IQ test (which is also scored on the bell curve), I was always well along on the right-hand side of that curve.
I never had test anxiety, which definitely helped.
But it's also true that students who fit my profile tended to do well. And of course, doing well is positively reinforcing. And since I did well the first time, why get nervous about the next year's test?
What, you might wonder, is "my profile"? Well, to begin with, I lived in a primarily white, upper middle/middle class town. I had two well-educated parents, both with graduate degrees. I had lots of books in my house. Only one of my grandparents had finished high school, but they all could read in more than one language. It turns out that the confluence of a comfortable income and an educated, literate family leads a certain population to do very well on tests. And that is true whether the test is a norm-referenced test like the Iowa tests, or a criterion-referenced test like the MEAP.
Criterion-referenced tests sound, on paper, a lot better. In a sense, they are more like the kinds of tests that we took in high school. If you were taught the future tense in a language class, you would be expected to demonstrate that knowledge on a test. Theoretically, every student in the class could get an A if they had studied and mastered the future tense. And in this simple example, that might actually happen.
In real life, in criterion-referenced tests like the MEAP, that never happens. There are a lot of reasons for this, but here are a few.
1. Students come in with different weaknesses. If one of those weaknesses is reading, that will show up in every single other test. The social studies and science tests--and even the math tests--require a lot of reading.
2. The "cut scores"--the thresholds that define "proficiency"--change. That just happened this year in Michigan, and guess what--a lot of kids who looked "proficient" last year don't look proficient this year, even though they might have actually done better.
3. Not all teachers teach everything that might be on the tests. And even if they cover the subject matter, the questions might be unintelligible to the student. Take, for instance, an example that a teacher gave me a few years ago. Her students (upper elementary) had a question on the reading comprehension exam about logging. Yes, I'm talking about the cutting down of trees. Her students, however, had a different understanding of logging. One logs into a computer, and logs out of a computer. . . That reading comprehension passage made no sense at all to those kids.
But anyway, back to me. I've now had two children go through the college application process. They've taken a lot of tests. Like me when I was growing up, they have two parents with graduate degrees. Like me growing up, they live in a middle class community--and an academic community too! That is another kind of privilege. Like me, they did relatively well.
And my youngest son? He said to me, "I like the tests."
"Really? Why?" I asked.
"Well," he said, "we don't do any work during the testing periods!"
And that's Part I. More about testing, coming soon to a blog near you.
Thank you for this information. I think it's really important that the public starts having a dialogue about testing and what it means and doesn't mean. It's really important that we all understand what is being measured and what is being valued in these tests.
One thing I'd like to add to the reading point you made is that the tests measure reading in a particular situation: a timed test. This is very different from our reading experiences in the rest of our lives. And I think it's very different from what makes a good reader. I've talked to a lot of high school students and adults who think they are poor readers because they are slow readers, and I've watched (and been) a fast reader who misses the details necessary for critical analysis.
Good points about the reading. My daughter reads and processes slowly. She has fantastic grades in school while taking accelerated courses. But her ACT scores are low. She never has time to read and process. In real life (i.e. work) I can't think of many instances where we are timed. No one comes into the office and says "you have 30 minutes to write this report. On your mark, get set, GO!"
Yes, that is a good point about the reading. And the ACT and SAT are timed tests.
Technically, the MEAP is not a timed test, but in reality I think there is a limit to how much time kids can sit and work on a test, especially when the rest of the class is waiting. . .
I'm looking forward to reading more on this topic. Our kids started school in upstate New York, where they were given NWEA's MAP test. When we moved to Michigan, we were disappointed that our district (Ply-Canton), like most other districts in Michigan, didn't offer MAP. MAP is now popping up around Michigan, albeit slowly, and given its goal of measuring growth (not proficiency), and the work on a new teacher evaluation that will include growth, I think it will become more prevalent. Hopefully you can shed some light on MAP so parents don't automatically give it the cold shoulder.
-common_cents
Common Cents,
Look for quite a bit more about the NWEA MAP test in the coming week. My friend and I actually went and spoke to the school board about it last week. I haven't actually tried out the test myself, and I'm interested that you like it. You can read about my problems with the test (and mostly with the volume of testing going on, the age at which it is started, etc.) this week. I would like to know what it is that you like about the test. How, for instance, did teachers in your old district use it to make a difference for students?