Three weeks ago, I promised a second post about evaluation. The first post can be found here. I said at the time that I wanted to do this because the Ann Arbor schools are considering an alliance with UM for a "lab school," and since we already have a partnership program--the Ann Arbor Public Schools Languages Partnership--it is worth taking a look at
how it is being evaluated,
who is doing the evaluating, and
what is being evaluated. This might offer some valuable insights for the future. As I wrote before,
The point of assessing the assessment is decidedly NOT to point fingers at what is not working, and it is NOT to praise what parts of the program are working. The point is to see what is being evaluated, who is doing the evaluation, and whether and/or how those things that need to be evaluated are being examined...
In other words: I am not evaluating the program itself. I am evaluating the evaluation process and product.
I've posted the documents, which I received from the district, on the Ann Arbor Area Government Document Repository web site; here is the link to the documents. It's a very cool web site where people can post files they have acquired from local governmental units and make them widely available.
What I got was an assessment of third graders who had gone through the partnership program. There is a "CEFR grid," which lays out the broad, overarching goals of the program based on an internationally recognized standard. There is a sample of the assessment tool that the students filled out. And there is a summary of the students' assessments.
Was what was done, done appropriately? Done well?
For instance--is it appropriate to use the Common European Framework of Reference (CEFR) guidelines? Absolutely. That reflects an international standard for language learning.
Was the assessment the students filled out appropriate for third graders? I thought so! Take a look:
It seems age-appropriate to me. Later on in the survey there were some places where students had an opportunity to write more. The summary document reports that 1,034 students were surveyed and shares findings such as "60% say they learned most of what was taught."
So--that's fine. It seems like an age-appropriate, large-scale survey of third graders.
There's only one problem, and that is that this survey is as far as it goes.
Although this is a fine assessment of how the students felt about learning Spanish, it is a terrible failure when it comes to assessing the Languages Partnership Program itself.
Let's remember that this is a new program.
Before we expand it, isn't there more that we want to know?
NOTE: Since this was my written request, "Can you please send me copies of any evaluations that AAPS has done about the Ann Arbor Languages Partnership," I'm going to assume that what I got is all that there is at this point. It is certainly possible that there is other material out there.
On the academic side:
It's really nice that 60% of the students say they learned most of what was taught, but does any independent assessment--say, by their teachers--bear that out?
If some students are not learning, who are those students?
Does this appear to decrease, or increase, the achievement gap?
It's really nice that a majority of students feel their teachers (who are for the most part, I believe, pre-student teachers) teach "very" or "pretty" well, but is there any other assessment of that? By university supervisors, AAPS mentors, or principals?
What about the students' regular teachers--do they feel that this program is value-added, or that it takes away from other activities? [I think this is important to know, as the program gets expanded.]
What about the school principals? Do they feel that this program is value-added?
If school time is at a premium, and it is a zero-sum game, do we lose anything by adding Spanish? If so, what is it? If teaching a language is important, did we give enough time to it or should we give more time?
How much repetition should teachers be expected to do as we expand the program to fourth and then fifth grade? Will they be doing numbers and colors over and over again?
What did we do with students who already know Spanish from their homes? Were they classroom helpers? Were they given other things to do? How did that work?
Does it work to use pre-student teachers?
What happened at the end of April when the UM winter term ended? Did the lessons end then?
Were the apprentice teachers responsive and responsible?
Were community members used, and if so, did that work well, or not? If not, what were the roadblocks?
On the financial side:
What was the original budget? Did we go over the budget? Under the budget?
Did either the University or AAPS have unexpected costs that they had to cover, and if so, what were those costs?
What lessons did we learn about the financing?
Is this program sustainable going forward?
On the partnership side:
Is the partnership working overall?
Were there any surprises? What were they?
What parts went smoothly and what parts did not?
And two more things...
First, from the A2LP web site:
While classes this year were taught in the Media Center at each school, and Media Specialists served as mentors for our ATs, in the 2010-2011 school year, classes will be held in students’ regular classrooms, with classroom teachers as mentors.
Under which setting do things seem to go more smoothly...in the media centers or in the classrooms? Do the classroom teachers feel prepared to be mentors for Spanish language apprentice teachers?
Second, from the A2LP web site:
Future research studies will include the perspectives of community members, whether or not they are parents of students participating in A2LP. While many community members expressed their opinions at board meetings and in online discussion forums prior to the program’s start, the Partnership is interested in formally documenting the response to the program after several months/years of operation.
In fact, the A2LP brochure specifically states:
The Partnership will be advised by a community committee, which will be chaired by the Superintendent and the University’s Director of Teacher Education.
But when I asked about the committee, I was told that:
There is not a committee as of yet. There is a plan to initiate the committee in the coming months. The main goal of the committee will be to help secure funding for professional development and promotion of the programs.
Uh-Oh. As far as I'm concerned, that's a piece of the assessment that I can comment on: Fail. First of all, the program is a year-and-a-half in (if you include planning time, it's more), and there is no community committee yet. Second, the goal of the committee is no longer advising, but securing funding?
[It's no accident, I think, that I signed up for a committee during the budget forums last year, and have heard nary a word about it since.]
On the AAPS web site, the Ann Arbor Languages Partnership program information is embedded in the World Languages page. There is a fuller description on the UM web site, which closes with these words:
The program is the first of its kind in the School of Education and we want to ensure that its development is data-driven.
From the AAPS side, don't we absolutely agree? In order to make our work data-driven, our evaluations cannot simply be about students' assessments of themselves. They must be about the program operations as well, and they should be reflective in nature so that the partners know whether the program is working and/or financially sustainable, how well it is working, and what could be done to make it work better.