PPRuNe Forums - View Single Post - AF 447 Thread No. 8
Old 11th Jun 2012, 16:03
PJ2
Lyman, TTex600;
Quote:
Originally Posted by Lyman
Quote:
We do not have any evidence that the crew actually followed any procedure to identify what the cause of the initial situation was.
We also have not seen a procedure to follow.

Good point. The rote is trained, the thinking is not.
Well, we know what documents are available that deal with the UAS and ADR abnormals. We don't yet know what's trained. I referenced some documents in a post to Flyinheavy, on June 1: http://www.pprune.org/tech-log/48235...ml#post7222304 .

So we know, at least, that the information to deal with the UAS event and/or failed ADRs was available, and that it provided very specific guidance on the memory items, the checklist items and the thinking behind the 2006 update. The thinking is clear, but again we don't know what the training actually was with reference to these documents. Also, these documents may not be available to all operators. They are not "required" for action in the same sense as, say, an AD.

O.C.

I thought about your response and agreed with it because to pilots it makes sense..."If not the UAS drill/checklist specifically, why not the standard SOPs at least?"

To be clinically accurate with reference to alf5071h's point concerning hindsight bias, we still have hints of the statement, "Why didn't they stick to SOPs?" One way to examine the question is to ask, "How far back do we go before making up our minds as to what happened?" Can we ever make up our minds and say?


The other side of this same question is here, in PPRuNe: we have spent nine threads and three years and remain unable to say why the aircraft was pitched up and, more importantly, why it was held there, when all of us who fly transports know that the airplane is going to run out of energy at the pitch attitudes recorded in the data. We know this to be true and expect that others who do this work would know it too. How do we sort that out so that realistic, preventative action may take place? Or is hindsight bias increasingly serving the courts?

Acknowledging the phenomenon of hindsight bias still permits learning and change from "mistakes", but not because of the assumptions we may make about such (assumed) "mistakes" or pilot behaviour, which we can read in the tiny little bits of data we have from the recorders. How we learn is perhaps captured in the statement we're all familiar with: "The crew did not wake up that morning intending to have an accident." We might even extend this to organizational thinking.

From Dekker's, Drift Into Failure:

"The idea of the amoral calculator, of course, works only if we can prove that people knew, or could reasonably have known, that things were going to go wrong as a result of their decisions. Since the 1970's, we have 'proven' this time and again in accident inquiries (for which the public costs have risen sharply since the 1970's) and courts of law. Our conclusions are most often that bad or miscreant people made amoral trade-offs, that they didn't invest enough effort, or that they were negligent in their understanding of how their own system worked. Such findings not only instantiate, but keep reproducing the Newtonian-Cartesian logic that is so common-sense to us. We hardly see it anymore, it has become almost transparent. Our activities in the wake of failure are steeped in the language of this worldview: Accident inquiries are supposed to return probable 'causes.' The people who participate in them are expected by media and industry to explain themselves and their work in terms of broken parts (we have found what was wrong: here it is). Even so-called 'systemic' accident models serve as a vehicle to find broken parts, though higher upstream, away from the sharp end (deficient supervision, insufficient leadership). In courts, we argue that people could reasonably have foreseen harm, and that harm was indeed 'caused' by their action or omission. We couple assessments of the extent of negligence, or the depth of the moral depravity of people's decisions, to the size of the outcome. If the outcome was worse (more oil leakage, more dead bodies), then the actions that led up to it must have been really, really bad. The fine gets higher, the prison sentence longer.

It is not, of course, that applying this family of explanations leads to results that are simply false. That would be an unsustainable and useless position to take. If the worldview behind these explanations remains invisible to us, however, we will never be able to discover just how it influences our own rationalities. We will not be able to question it, nor our own assumptions. We might simply assume that this is the only way to look at the world. And that is a severe restriction, a restriction that matters. Applying this worldview, after all, leads to particular results."

- Dekker, Sidney. Drift Into Failure. Surrey: Ashgate, 2011, pp. 5-6
