I've never been a particularly anti-test person. I always thought that test taking was a skill that people needed, just like writing, arithmetic, or learning how to tie a shoelace. But I also thought we shouldn't overemphasize tests.
For myself, I have pleasant memories of filing into the school cafeteria to take the Iowa tests. I was lucky that my name wasn't something long like Smolensky or Blagojevich [sneaky reference to a former Illinois governor who is going to jail this week], because names that long wouldn't fit in the bubbles. And of course I made sure to fill in the bubbles well with my newly-sharpened #2 pencil.
Do you remember the Iowa tests? They have a long history. They were first developed in 1935. The Iowa tests are what are called "norm-referenced" tests. In other words, they score test-takers on a bell curve. Some students will be above average; the majority will be average; and some students will be below average. [Below, there is a drawing of a bell curve. See how it looks like a bell? Hence the name.]
[Drawing: The Bell Curve]
Right away, you can see the problem. In the broader population, it is impossible for everybody to score well. Half the test-takers would have to be below average. If a large population moves its scores (everyone starts reading better), then the bell curve shifts to the right--but still, half the kids are below average. Also, norm-referenced tests tend to focus on the kinds of questions that differentiate between students, and not the kinds of questions that show proficiency in certain areas. In other words, the point of the test is to rank students.
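The point above can be sketched in a few lines of code. The scores below are made up for illustration; the only thing the sketch shows is that norm-referenced reporting is relative, so the number of kids below the median stays the same even when everyone improves.

```python
# Sketch of norm-referenced scoring with hypothetical scores: results are
# reported relative to the group, so the test ranks students rather than
# measuring proficiency in absolute terms.
from statistics import median

def percentile_rank(score, all_scores):
    """Percent of test-takers who scored below this score."""
    below = sum(1 for s in all_scores if s < score)
    return 100 * below / len(all_scores)

raw = [52, 61, 70, 70, 74, 80, 85, 91, 95, 99]
improved = [s + 10 for s in raw]  # the whole population gets better

below_before = sum(1 for s in raw if s < median(raw))
below_after = sum(1 for s in improved if s < median(improved))
# below_before == below_after: the curve shifts right, but the same
# number of kids sit below the middle of it
```

However well everyone reads, the ranking machinery guarantees someone lands on the left-hand side of the curve.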
If, by chance, you have ever had a teacher who "graded on the curve" or "curved the grades," that teacher was working toward a certain middle ground. If she expected that most of the class would get a B, but the average grade was a C, she might think "I guess I made that test too hard," and she would move the average up to a B. I was very thankful for that in college physics, where my C- turned into a B- thanks to a professor's curving of the final exam.
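One common way of curving, moving every score up by the same amount so the class average lands where the teacher expected, can be sketched like this. The target of 85 (a B) and the exam scores are hypothetical, chosen only to illustrate the shift.

```python
# Minimal sketch of curving by shifting the mean: every score moves up
# by the same amount, so the class average hits the expected grade.
# Scores and the B target of 85 are made-up numbers for illustration.

def curve(scores, target_mean):
    shift = target_mean - sum(scores) / len(scores)
    return [s + shift for s in scores]

exam = [70, 72, 75, 78, 80]   # average 75, a C
curved = curve(exam, 85)      # every score rises by 10
```

Note that a uniform shift preserves the ranking of students; it only moves the middle of the distribution.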
In my test-taking heyday--which might have been eighth or ninth grade--I enjoyed the tests as a break from my regular school work. I enjoyed completely filling in the circles. I didn't feel any pressure about the tests, because a) they didn't mean much of anything and b) I always scored well on tests. In other words, whether it was an Iowa test or an IQ test (which is also scored on the bell curve), I was always well along on the right-hand side of that curve.
I never had test anxiety, which definitely helped.
But it's also true that students who fit my profile tended to do well. And of course, doing well is positively reinforcing. And since I did well the first time, why get nervous about the next year's test?
What, you might wonder, is "my profile"? Well, to begin with, I lived in a primarily white, upper middle/middle class town. I had two well-educated parents, both with graduate degrees. I had lots of books in my house. Only one of my grandparents had finished high school, but they all could read in more than one language. It turns out that the confluence of a comfortable income and an educated, literate family leads a certain population to do very well on tests. And that is true, whether the test is a norm-referenced test like the Iowa tests, or a criterion-referenced test like the MEAP.
Criterion-referenced tests sound, on paper, a lot better. In a sense, they are more like the kinds of tests that we took in high school. If you were taught the future tense in a language class, you would be expected to demonstrate that knowledge on a test. Theoretically, every student in the class could get an A if they had studied and mastered the future tense. And in this simple example, that might actually happen.
In real life, in criterion-referenced tests like the MEAP, that never happens. There are a lot of reasons for this, but here are a few.
1. Students come in with different weaknesses. If one of those weaknesses is reading, that will show up in every single other test. The social studies and science tests--and even the math tests--require a lot of reading.
2. The "cut scores," the thresholds that define what "proficiency" means, change. That just happened this year in Michigan, and guess what--a lot of kids who looked "proficient" last year don't look proficient this year. Even though they might have actually done better.
3. Not all teachers teach everything that might be on the tests. And even if they cover the subject matter, the questions might be unintelligible to the student. Take, for instance, an example that a teacher gave me a few years ago. Her students (upper elementary) had a question on the reading comprehension exam about logging. Yes, I'm talking about the cutting down of trees. Her students, however, had a different understanding of logging. One logs into a computer, and logs out of a computer. . . That reading comprehension passage made no sense at all to those kids.
But anyway, back to me. I've now had two children go through the college application process. They've taken a lot of tests. Like me when I was growing up, they have two parents with graduate degrees. Like me growing up, they live in a middle class community--and an academic community too! That is another kind of privilege. Like me, they did relatively well.
And my youngest son? He said to me, "I like the tests."
"Really? Why?" I asked.
"Well," he said, "we don't do any work during the testing periods!"
And that's Part I. More about testing, coming soon to a blog near you.