The dawning of the age of Aquarius...by which I mean my understanding of evaluation as critical to doing good work, came in a graduate school class where I finally understood how you could create a theory and then test that theory's outcomes mathematically. In other words, if you embed evaluation from the beginning, you can really understand whether you are getting the effects you want. And in the spirit of scientific inquiry, you have to be open to the idea that your hypothesis is just an idea, a theory...and it might or might not be doing what you want it to do.
For instance, the drive to get women to have mammograms rests on the premise that mammograms save lives through early detection. But do mammograms save lives? This question sounds simple, but turns out to be pretty complicated!
Anyway, back to education.
Typically, teachers develop lesson plans in a "students will be able to do x, y, or z" format, and evaluation of a lesson relates back to that plan.
Example: Students will be able to write all 26 letters in lower case and upper case.
Now, that's an easy example, because you either can do that or you can't. Process objectives--those small steps along the way that allow you to finally master a skill--are obviously harder to measure.
Even then, it's not always so easy--if a student does poorly on a test or other evaluation, does that mean the teacher did a bad job teaching? Was the test simply too difficult? Did the student run out of time?
And then there is the question of the value of what is being taught. My friend told me today that her grandfather used to drill her at the dining room table over the fifty states and their capitals. Is that important? (Because if it is, we are doing a lousy job teaching it! I will bet most kids don't know them.) We can certainly argue about whether or not it is.
So, that's about evaluating the student. But what about evaluating the teacher? My kids have asked me why bad teachers get to be teachers. Was it because they were good teachers once, and now they are bad? Was it because...
And then there is the question of the organizational evaluation--in this case, the schools. Some things could be useful for the students, and working for the teachers, but not working for the organization. For instance, plenty of programs have been cut because they were too expensive for the organization--even though they appeared to be working for both students and teachers.
What bothered me twenty years ago was that evaluation seemed like a waste of time and money. What I think now is completely different. I think now that evaluation can save time and money by showing you that you are doing the wrong thing--or give you peace of mind by letting you know you're on the right track if you are doing the right thing. But I think that most people, and most organizations, are in the place where I was twenty years ago--doing evaluations to prove a point that has already been predetermined to be correct, instead of using evaluations as a way to improve.
You might think this was kind of long-winded. I have a feeling, though, that if we (by which I really am referring to all schools, but Ann Arbor schools in particular) had used rigorous evaluations, we would have found out a lot earlier that many of the programmatic attempts to reduce the "equity gap" were not working.
The point will become clearer in Part II, where I take a look at the assessment of the Ann Arbor Public Schools Language Partnership Program, a program that I discussed a few weeks ago. Think of it as a case study. The point of assessing the assessment is decidedly NOT to point fingers at what is not working, and it is NOT to praise the parts of the program that are working. The point is to see what is being evaluated, who is doing the evaluating, and whether--and how--the things that need to be evaluated are actually being examined...
And the reason for using this program as a case study is this: the Ann Arbor Public Schools are in serious discussions with the University of Michigan about setting up a lab school in Ann Arbor. Aside from the fact that they are doing this without a community planning process (which I have a BIG beef about, and blogged about last month), I would like to use this mini-cooperative program (the Language Partnership Program) as a lens through which we can see how evaluation of a larger cooperative program might work.
UPDATE 11/14/2010: Part II can now be found here.