
Voices From The Field


MSPnet Blog: “Accountability, denial, and the unit of improvement”


posted August 28, 2015 – by Brian Drayton

You can count on presidential campaigns to make clear the conventional “wisdom” on education. Any reader can recount the main ingredients: [a] public schools are failing or in crisis; [b] if we don’t fix this, we can’t be Top Nation, because all those unskilled students will inhibit our economy; [c] teachers and students are not doing their best, so we need to set high standards to tell them what is important; [d] accept no excuses; [e] deploy an elaborate system of accountability; and [f] let market competition produce innovation and Quality, and don’t worry too much if the accountability system gives you bad news about the “innovations.” You can’t make an omelette without breaking eggs, and in any case, things will get better if we just wait a little longer.

In recent weeks, all the candidates have been asked about their educational policies, and the one term that’s cropped up in all mouths (at least, all the mouths returning relevant answers), from the most conservative to the most “progressive,” is “accountability.” This usually carries two components: measurement and responsibility. The candidates reflect the mainstream view (and in the case of education the mainstream stretches from bank to bank) that numbers matter, that they tell clear stories, and that we are measuring the right things. It is also mainstream (part of the triumph of the technocratic mindset) to treat the components of the system as isolable elements whose behaviors can be interpreted with no reference to the rest of the system.

Inconveniently, there is strong evidence that this is a poor model.  If you make other assumptions, and actually incorporate more of the complexity of the real world, for example in teacher evaluation,

the fundamental message from the research is that the percentage of … year-to-year, class-to-class, and school-to-school effective and ineffective teachers appears to be much smaller than is thought to be the case. When the class is the unit of analysis, and student growth is the measure we use to judge teacher effectiveness, what we find is a great deal of adequacy, competency, and adeptness by teachers in response to the complexity of the classroom. And, we see much less of the extraordinarily great and horribly bad teachers of political and media myth. (David C. Berliner, “Exogenous Variables and Value-Added Assessments: A Fatal Flaw,” here)
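
Berliner’s point is statistical: when class-to-class and year-to-year noise is large relative to the true differences between teachers, the extreme value-added ratings mostly reflect that noise. As a purely illustrative sketch (this is not from Berliner’s paper; the teacher count, effect sizes, and noise levels are all invented for the example), a few lines of Python show how rarely the same teachers land in the “ineffective” tail two years in a row:

```python
# Illustrative only: a toy simulation (not from Berliner's paper) of why
# value-added rankings are unstable when classroom-level noise is large
# relative to true differences between teachers. All numbers are assumptions.
import random

random.seed(1)

N_TEACHERS = 100
TRUE_SD = 0.10    # assumed spread of true teacher effects (test-score SD units)
NOISE_SD = 0.25   # assumed class-to-class noise in measured student growth

true_effect = [random.gauss(0.0, TRUE_SD) for _ in range(N_TEACHERS)]

def observed_growth(effects):
    """One year's value-added estimate: true effect plus classroom noise."""
    return [e + random.gauss(0.0, NOISE_SD) for e in effects]

def bottom_decile(scores):
    """Indices of teachers ranked in the bottom 10% in a given year."""
    cutoff = sorted(scores)[len(scores) // 10]
    return {i for i, s in enumerate(scores) if s < cutoff}

year1 = bottom_decile(observed_growth(true_effect))
year2 = bottom_decile(observed_growth(true_effect))

# How many teachers flagged as "ineffective" in year 1 are flagged again in year 2?
print(f"flagged both years: {len(year1 & year2)} of {len(year1)}")
```

With these assumed numbers, typically only one or two of the ten teachers flagged in the bottom decile one year are flagged again the next; the stably “horribly bad” teacher of myth is largely an artifact of sampling noise in the measure.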

Here let me introduce a definition of the “Semmelweis fallacy”:  “the reflex-like tendency to reject new evidence or new knowledge because it contradicts established norms, beliefs or paradigms.”

The blog “Chronotope” discussed this fallacy last June in the context of education research, and how easily research evidence is ignored owing to prior intellectual commitments (the post is here). Carl Hendrick, the blogger, mentions several paradoxes that can be attributed to this mindset. There is, for example, what he calls “Whole school cognitive dissonance”:

Whole School Cognitive Dissonance: What is the value in a school preaching Growth Mindsets in an assembly yet basing their entire school enterprise on the reductive and fixed mode of target grades and narrow assessment measures based on poor data? Why are kids explicitly told that their brain is malleable but implicitly told their target grades are not?

Now, much of the thrust of the blog post is about the resistance of teachers and school people to research results. I think, however, that much of the resistance we are seeing is healthy. So many mandates have been imposed, lifted, revised, and contradicted by others over the past few years, with such varying justifications and sanctions, that it’s hard not to see schools as pawns in some other game; under those circumstances, endless patient compliance cannot be expected from the human subjects.

And every mandate has been related to someone (students and teachers, usually) accounting to someone else (not a teacher) about something (achievement measures). Yet no matter how much accounting is going on, no one is satisfied with the answers. Perhaps we’re looking at the wrong things. Or holding the wrong people to account.

But as we enter another school year, I’d like to hear: Is the data being collected where you are improving the students’ STEM learning experience? That is, if John or Judy takes a test this year, are the results going to help them next year? Is it the “system” that is being measured, by sampling the anonymous flow of student achievers every few months, or is STEM education — particular people’s STEM education — getting better? By this I don’t mean, Are they pushed to get better test scores, but Do the test results mean that J or J themselves, next year, encounter more engaging classes, more authentic experiences of science (etc.) practices, immersion in more meaningful science (etc.) content?

Blog comments have been archived; commenting is no longer available. This blog post has 10 comments.

Accountability = blame

posted by: Louise Wilson on 8/29/2015 7:14 am

You find something you don't like, you find someone to blame, problem solved. The American way! I don't understand this at all. How does finding someone to blame fix the problem? I always try to blame the weather, a bird flying overhead, the length of the grass. None of this seems to fix any of my problems. Perhaps we can all have a real scapegoat, one for each school, tethered in the front yard, and we can all hold the goat accountable and have a barbecue at the end of each year?
Actually, I have found in my classes that if I have kids who work in class and talk to each other about how to solve problems, and I tell them I don't care about their grades, I care about whether they're doing the work and learning, they do better on their comparative end-of-year tests anyway. Because they've learned something. The kids who don't participate obviously don't learn anything.
This is really hard for students to deal with: first, they need faith that I will make sure they get credit for a class if they work. Second, that they will learn how to do math by doing math (gasp!) instead of by copying math off the board. Third, that there's nobody or nothing to blame but themselves if they don't learn. Because blame doesn't matter.

Berliner reference....citation

posted by: Joseph Gardella on 8/29/2015 9:19 am

Hi Brian; another great commentary. I wish I had time to participate more in these discussions. There is no question that what I read, with a keen eye to regression analysis as a basis for value-added measures, reflects the need to bring some critical analysis to the current narrative here in Buffalo and New York State. NY's Gov. Cuomo has aggressively implemented teacher evals with flawed (in my view) value-added components and reliance on "Common Core" testing, creating a huge backlash among parents who don't want testing used for teacher evaluation alone. This has damaged the Common Core, because it is now seen as a testing regime instead of a new approach to integrated learning standards....

But your point about the inability of "reformers" to respond to legitimate criticism, or even just questions about validity or evidence, is well taken.

I also wanted to alert you that the link to the Berliner reference is incorrect; it comes up with a "not found" page when I click it. Since I can find many Berliner papers in the MSPnet library, I think you have the reference number incorrect....

Can you correct that? I would like to read the particular Berliner reference you cite.

Thanks

Joe Gardella

Berliner citation

posted by: Brian Drayton on 8/30/2015 8:57 pm

Hi, Joseph,
Thanks for your comments. The Berliner quote is from an article called "Exogenous Variables and Value-Added Assessments: A Fatal Flaw". But I'll try to fix the link!

Berliner reference....found it

posted by: Joseph Gardella on 9/1/2015 8:25 am

Hi Brian:
I found the reference by searching after I posted this....so I have it.
If anyone is interested, let me know and I will provide it. I don't think it is presently posted in the MSPnet library... I went to the original journal.
Thanks!
Joe

Berliner article

posted by: Brian Drayton on 9/1/2015 11:02 am

Hi, Joseph,
Thanks for this. Just FYI, the article was posted to the MSPnet library last year; the link in my original post should now take people directly to the abstract page in the library. I had just introduced a typo in my citation, since corrected.

-- brian

Reframe the improvement narrative

posted by: Arthur Camins on 8/29/2015 11:42 am

First, we need to reframe the terms of the education reform debate; see here: http://www.washingtonpost.com/blogs/answer-sheet/wp/2014/11/19/how-to-reframe-the-educational-reform-debate/

Next, we need to build a movement that urges candidates to offer different solutions. Here is the speech we need candidates to give: http://www.huffingtonpost.com/arthur-camins/the-k12-education-speech-_b_7755854.html

Fixing things

posted by: F. Joseph Merlino on 8/29/2015 6:15 pm

You can't "fix" education any more than you can fix a tomato plant. The process of learning is organic. The student is a living being, not a car that you fix.

My question to you

posted by: Brian Drayton on 8/31/2015 9:57 pm

Curious:
As we enter another school year, I’d like to hear: Is the data being collected where you are improving the students’ STEM learning experience?
That is, if John or Judy takes a test this year, are the results going to help them next year?
Is it the “system” that is being measured, by sampling the anonymous flow of student achievers every few months, or is STEM education — particular people’s STEM education — getting better?
By this I don’t mean, Are they pushed to get better test scores, but Do the test results mean that J or J themselves, next year, encounter more engaging classes, more authentic experiences of science (etc.) practices, immersion in more meaningful science (etc.) content?

from Ed Week

posted by: Brian Drayton on 9/3/2015 4:42 am

This article by Dianis, Jackson, and Noguera suggests that, despite the value of our assessment regime for diagnosing educational impacts for disaggregated subgroups, it is not actually designed to enable educational improvement. It is also not serving the cause of improved equity of educational opportunities OR outcomes:

"the data produced by annual standardized tests are typically not made available to teachers until after the school year is over, thereby making it impossible to use the information to respond to student needs. Thus, students of color are susceptible to all of the negative effects of the annual assessments, without any of the positive supports to address the learning gaps."


http://www.edweek.org/ew/articles/2015/06/10/test-taking-compliance-does-not-ensure-equity.html

does anything change for the students?

posted by: Louise Wilson on 9/7/2015 6:55 am

Generally, no. The tests are taken, and feedback comes within a day in our school, although teachers are not generally trained in how to find the results. But students are assigned to classes without regard to their current skill level (so students at grade-level equivalents from 2nd to 11th grade can be found in the same 10th-grade math course), and everyone is expected to make the best of it. Nobody gets a better experience from the tests, because sorting students according to current skill level is apparently bad.
Students are being tested to satisfy government requirements, not to assist them. There is no honesty in interpretation, so a kid who has math skills at the 3rd grade level is not being told "hey, if you want to be a doctor, you need to work on this" but is instead being placed into classes according to age, and the teacher is told to pass the kids along.
The only time high-stakes testing has an impact on the student is the transition from high school to college, and even then not so much, because colleges take our students from inner-city schools despite low test scores. They give the students an opportunity, but the students' skill level is so low that they can't keep up.
Students have no real idea of the skills they will need for a STEM career: I have students who think they want to be engineers yet refuse to engage in any math, modeling, or applied-science task. I'm baffled.