
Sunday, October 24, 2010

Part I: What Is Evaluation and Why Does It Matter Anyway?

Once upon a time, when I first started working for a nonprofit, I didn't think evaluation really mattered. I mean, sure--it mattered for outside funders, and if I needed to convince you that the sky was red, I could probably write a report that convinced you. If I couldn't quite do that, I usually could explain to you why the program plan to turn the sky red hadn't worked. I could do pre- and post-tests, and I could analyze data. I could convince you that a program had worked, or come close to working, but what I couldn't do was tell you why evaluation should really, truly be important to me.

The dawning of the age of Aquarius--by which I mean my understanding of evaluation as critical to doing good work--came in a graduate school class where I finally understood how you could create a theory and test that theory's outcomes mathematically. In other words, if you embed evaluation from the beginning, you can really understand whether you are getting the effects you want. And in the spirit of scientific inquiry, you have to be open to the idea that your hypothesis is just that--an idea, a theory--and it might or might not be doing what you want it to do.

For instance, the drive to get women to have mammograms rests on the premise that mammograms save lives through early detection. But do mammograms save lives? This question sounds simple, but turns out to be pretty complicated!

Anyway, back to education.
Typically, teachers develop lesson plans in a "students will be able to do x, y, or z" format, and evaluation of a lesson relates back to that plan.
 

Example: Students will be able to write all 26 letters in lower case and upper case.

Now, that's an easy example, because you either can or can't do that. And obviously, process objectives are harder to measure--those small steps along the way that allow you to finally master a skill.
But it's not always so easy--if a student does poorly on a test or other evaluation, does that mean the teacher did a bad job teaching? Was the test simply too difficult? Did the student run out of time?

And then there is the question of the value of what is being taught. My friend told me today that her grandfather used to drill her at the dining room table on the fifty states and their capitals. Is that important? (Because if it is, we are doing a lousy job teaching it! I will bet most kids don't know them.) We can certainly argue about whether or not this is an important thing to know.

So, that's about evaluating the student. But what about evaluating the teacher? My kids have asked me why bad teachers get to be teachers. Was it because they were good teachers, and now they are bad? Was it because a thorough and honest evaluation of them was never done when they were student teachers? [Schools of Education don't like to give up on student teachers.]

And then there is the question of the organizational evaluation--in this case, the schools. Some things could be useful for the students and working for the teachers, but not working for the organization. For instance, plenty of programs have been cut because they were too expensive for the organization--even though they appeared to be working for both students and teachers.

What bothered me twenty years ago was that evaluation seemed like a waste of time and money. What I think now is completely different. I think now that it can save time and money if you are doing the wrong thing--or give you peace of mind and let you know you're on the right track if you are doing the right thing. But I think that most people, and most organizations, are in the place where I was twenty years ago--doing evaluations to prove a point that has already been pre-determined to be correct, instead of using evaluations as a way to improve.

You might think this was kind of long-winded. I have a feeling, though, that if we (by which I really am referring to all schools, but Ann Arbor schools in particular) had used rigorous evaluations, we would have found out a lot earlier that many of the programmatic attempts to reduce the "equity gap" were not working.

The point will become clearer in Part II, where I take a look at the assessment of the Ann Arbor Public Schools Language Partnership Program, a program that I discussed a few weeks ago. Think of that as a case study. The point of assessing the assessment is decidedly NOT to point fingers at what is not working, and it is NOT to praise what parts of the program are working. The point is to see what is being evaluated, who is doing the evaluation, and whether and/or how those things that need to be evaluated are being examined...

And the reason for using this program as a case study is this: the Ann Arbor Public Schools are in serious discussions with the University of Michigan about setting up a lab school in Ann Arbor. Aside from the fact that they are doing this without a community planning process (which I have a BIG beef about, and blogged about last month), I would like to use this mini-cooperative program (the Language Partnership Program) as a lens through which we can see how evaluation of a larger cooperative program might work.

UPDATE 11/14/2010: Part II can now be found here.

2 comments:

  1. I agree that evaluation is important, both on the program side and on the personnel side.

    But, it's also quite hard, as your mammogram link points out. It's especially hard with things like teaching.

    One way to approach it is just to look at the numbers: how many students can write their letters. The teachers with the lowest numbers are failing: either train them or fire them.

    The problem is that the classes may not be the same. A teacher whose students are all the offspring of PhD parents will have an easier time than one teaching a class of children who were born addicted. So just looking at the teachers' numbers may miss the particularities of the students.

    So you could say, scrap the numbers, let's go for a qualitative evaluation. What do parents and students say about the teachers? That is obviously an invitation to error: a popular teacher may not teach worth a hoot, and an unpopular teacher may yet motivate kids to excellence (I know that's what I saw in school).

    Complicating all this is the question of the goals of the evaluation. Are you evaluating teachers to hire or fire them, or to inform their professional development? When the job is on the line, the reasons for honesty drop.

    I know that you know all this, but I want to put it out there, especially because it's not just about education.

    Right now the city, county, urban county, United Way, and Community Foundation are proposing to consolidate funding, and to make the funding outcome-based. There are potential benefits of this to be sure, but there are also real potential drawbacks. You get more passing grades per dollar teaching the children of doctors than you do the children of dealers, so if funding decisions are based on unsophisticated outcome measurements, it could be a recipe for disenfranchising those who most need assistance.

    Okay, enough ranting on my part. Keep up the good work!

  2. Those are all good points. And, in addition, there is the dratted tendency that the more data you have, the more questions you have, therefore the more data you want...

    And actually, I find that the questions themselves are often more interesting than the answers. In general, I don't think we ask nearly enough questions! In high school, I got in trouble with my biology teacher for asking "why" way too often. I realized later that she didn't have much depth of knowledge in her subject, and she was afraid of the questions because she wasn't comfortable telling us that she didn't know the answers.

    As a side note, in my opinion education isn't really very different from other programmatic work, so all of this should be applicable to issues around consolidated funding or other programs.

    Lastly, I'll take "Keep up the good work" as a positive evaluative comment. :)

