
Norfolk Island Ditching ATSB Report - ?

The Pacific: General Aviation & Questions - The place for students, instructors and charter guys in Oz, NZ and the rest of Oceania.

Old 27th Sep 2012, 04:50
  #421 (permalink)  
 
it seems much more likely that in the end he was over water looking for a cloud break and quite simply found himself in the water.

I have heard of that happening several times before, but in my time I have never heard of anyone trying a controlled, powered ditching at night...
Watch Four Corners.
Capn Bloggs is offline  
Old 27th Sep 2012, 06:21
  #422 (permalink)  
 
But Four Corners went on and on about what a hero Dominic was in carrying out a rare successful ditching in a jet, yet the ATSB report states they never even saw the water until after impact, so surely in that case there was much more luck prevailing here than good management?

DBTW, you just might be right. I've thought it strange that the ATSB were surprisingly uncritical about the actual ditching.

Wouldn't you just love to hear the FO's version?
Dora-9 is offline  
Old 27th Sep 2012, 06:58
  #423 (permalink)  
 
Pitch-black night with no lights, how do you reckon they'd see the sea?? Doesn't mean you can't attempt a perfectly reasonable ditching with a reasonable chance of pulling it off.

Given that the pax had time to put on their jackets AND get the rafts ready, I'd say DBTW's conspiracy theory, that they flopped into the water with no warning as the crew were doing descending orbits through holes in the clouds hoping to break visual, is a bit far-fetched.
Capn Bloggs is offline  
Old 27th Sep 2012, 07:13
  #424 (permalink)  
 
CB, you obviously have a clear understanding of what really happened, and I really don't need to budge you on that. Just a couple of questions then. Didn't the jackets go on during the instrument approach phase? And were they doing orbits?

I don't know. If so, I certainly think it unwise because it would be even more disorienting! Most of the near and actual unintentional controlled flights into the sea I know about have been in straight line descents. If performed in a turn they are (pretty much) always fatal.

Last edited by DBTW; 27th Sep 2012 at 07:27.
DBTW is offline  
Old 27th Sep 2012, 07:15
  #425 (permalink)  
 
Pitch-black night with no lights, how do you reckon they'd see the sea?? Doesn't mean you can't attempt a perfectly reasonable ditching with a reasonable chance of pulling it off
Which makes an attempted landing at the actual airstrip more reasonable than flying it into the ocean.
blackhand is offline  
Old 27th Sep 2012, 07:37
  #426 (permalink)  
 
How the FDR and CVR could have helped

It brings me back to the issue of the appearance of both engine nacelles in the underwater footage. No bulging of the nacelles, which would be evident had they been spinning on impact, and no curling of the compressor blades. I heard that when Dom said they were going to ditch, the FO asserted herself and said "No we're not". I reckon the last missed approach was asymmetric (a witness said it sounded "different") and the other engine died as they turned back in. Although they may have briefed the ditching, it happened a little sooner than they thought.
Just my 20c worth.
flying-spike is offline  
Old 27th Sep 2012, 07:43
  #427 (permalink)  
 
"No we're not"
Sounds just like the AN VH-INH 'mishap' in SYD when el Capitaino asked for TOGA and the Flight Engineer said "no you won't"!! Well, something like that, the memory is fading a little, and I do miss Big Trev so much, and he gave his Pilots a 'bulging nacelle' over that episode also!! Does he still wear the wig?
Oh yeah, a big hello to Gary T in PNG also

Last edited by Cactusjack; 27th Sep 2012 at 07:45.
Cactusjack is offline  
Old 27th Sep 2012, 08:21
  #428 (permalink)  
 
I heard when Dom said they were going to ditch the FO asserted herself and said "No we're not" I reckon the last missed approach was asymmetric
Thanks for that, Love!
Capn Bloggs is offline  
Old 27th Sep 2012, 08:31
  #429 (permalink)  
 
Which makes an attempted landing at the actual airstrip more reasonable than flying it into the ocean.
Possibly, but the actual inbound tracks (for the 11/29 VORs, at least) go nowhere near the runway itself. Given the terrain, it is questionable whether a forced landing on land would have been more successful than what happened. Now, the VOR 04 (VOR at the threshold) and slide off the runway at low speed? A possibility.

DBTW, I was a bit tongue-in-cheek about descending orbits until visual. However, your theory:
it seems much more likely that in the end he was over water looking for a cloud break and quite simply found himself in the water.
just doesn't stack up. The report clearly states what happened in the few minutes prior to alighting. If the report is correct, the crew knew exactly when they were going to touch down.
Capn Bloggs is offline  
Old 27th Sep 2012, 08:43
  #430 (permalink)  
 
Thanks Capn.
Makes his decision to ditch seem an option. Would you have done the same? Or maybe tried to drop to the 600 ft level and then slam it on, you know, like a bandit approach into Bulolo when it's clouded.
blackhand is offline  
Old 27th Sep 2012, 08:52
  #431 (permalink)  
 
Makes his decision to ditch seem an option, would you have done the same?
I refuse to answer on the grounds that I might incriminate myself!

you know like a bandit approach into Bulolo when its clouded.
I don't know what a bandit is, nor do I know where Bulolo is, thank goodness!
Capn Bloggs is offline  
Old 27th Sep 2012, 08:59
  #432 (permalink)  
 
If the report is correct the crew knew exactly when they were going to touchdown.
There's the rub, CB...the report can only go on what the investigators were told, and I am only asking the question as to whether what they have been told stacks up...I am not comfortable with what they have been told.
DBTW is offline  
Old 27th Sep 2012, 09:00
  #433 (permalink)  
 
I wonder how planned it was.

The report says at 10.25 they conducted an approach, and at 10.26 there was an unreadable transmission, assumed to be at the point of ditching.

So 1 minute? Either they had flamed out, and it was rather unplanned, or perhaps DBTW's theory is closer to the mark.

If it was well planned, why then were only 3 of the 5 who would likely have been in jackets actually wearing them? The lady strapped in I can understand, but a well planned ditching would have had all the other pax and the crew in jackets.

I also think that a planned ditching would have involved a very clear instruction to the unicom operator on whereabouts he intended to conduct it, and that he wanted all resources pointed in that direction.

I think the report is lacking in a lot of things..... two things in particular: the FDR & CVR, and they are only 48m down. This is within diving range for pro folk. Actually I know some folk who can do it. Why did the ATSB not go get them? And as the Air France A330 has shown, if you find them, even buried under great amounts of pressure for a year or more, they survive.

ATSB.... GO GET THE F RECORDERS!!!!!!!!!!
Jabawocky is offline  
Old 27th Sep 2012, 09:14
  #434 (permalink)  
 
I refuse to answer on the grounds that I might incriminate myself!
Na, there was a hole and we followed it all the way down
blackhand is offline  
Old 27th Sep 2012, 10:47
  #435 (permalink)  

If the ditching was planned then, IMO, doing so without advising the NLK R/T man, with whom they were in contact, was the most negligent act of a poorly executed flight.

And one can't blame Pel Air, CASA, the ATSB, or any one else for that.

I find it so difficult to believe that one could ditch and not tell anyone where, that I find it even more difficult to believe that the ditching was planned.
Capt Claret is offline  
Old 27th Sep 2012, 10:57
  #436 (permalink)  
 
The ATSB needs to recover both the CVR and the FDR to resolve this investigation.
It is my belief that they flew into the water before they were ready. No real ditching briefing had been given, nor was the cabin prepared. Why did the FO tell Norfolk radio to stand by when he was after details, if they were about to ditch?

I understand that the gear is held up under pressure and that it would droop down after the pumps and pressure were lost, but it certainly would not extend fully and lock down. The underwater photos show the gear down and locked.

Did they get partially visual, decide to circle at low level back to the runway, and fly into the water?

It would be interesting to see how much the flight crew statements changed if the FDR and cockpit recorders were located!
Dog One is offline  
Old 27th Sep 2012, 11:44
  #437 (permalink)  
 
As with other crashes, the crew often don't say much. This wasn't your standard sim ride; they knew they were about to die.

Agree about the recorders, although while they would tell us more about the last part of the flight, they probably would not add much to the systemic-issues discussion.
Capn Bloggs is offline  
Old 27th Sep 2012, 11:46
  #438 (permalink)  
 
Did they ever make a mayday call?
Worrals in the wilds is offline  
Old 27th Sep 2012, 12:17
  #439 (permalink)  
 
No, they didn't make a Mayday. It is also unlikely that the scenarios put forward here occurred, as the passenger accounts of what happened match the crew accounts. I think the crew would have understood their obligations under the TSI Act: they were legally required to provide all information, and self-incrimination is not a defence for not telling the truth.
Also, the F/O would not have had any reason to give an account that was not accurate. If they didn't tell the truth then I doubt that they will be looking forward to the Senate Enquiry. If I was the crew I wouldn't be looking forward to it anyway, because I don't think that it will exonerate them, if that is what they are after. The Senators are not fools, and they will soon realise that the accident could have been avoided if different decisions had been made by the crew, even after they realised that the weather was below minima and they didn't have the fuel for an alternate.

Remember the Mt Hotham accident? After the report was released there were all sorts of speculation that the ATSB had got it wrong and that it wasn't the pilot's fault but a failed engine that caused them to hit the hill. A supplementary report released in response to the publicity showed that the pilot had flown a similar dodgy approach in the morning. End of speculation.
Lookleft is offline  
Old 29th Sep 2012, 04:11
  #440 (permalink)  
 
I couldn't let the thread die without posting something the iconoclasts (it's all Dominic's/the crew's fault) might mull over. Those whose olfactory organs are so distressed as to be unable to smell the cheese might wish to log off now.

Reconstructing human contributions to accidents:
The new view on error and performance


Sidney W. A. Dekker

Abstract

Problem


How can we reconstruct the human contribution to accidents? Investigators easily take the position of retrospective outsider, looking back on a sequence of events that seems to lead to an inevitable outcome, and pointing out where people went wrong. This does not explain much, however, and may not help prevent recurrence.

Method and results

In this paper I examine how investigators can reconstruct the human contribution to accidents in light of what has recently become known as the new view of human error. The commitment of the new view is to relocate controversial human assessments and actions back into the flow of events of which they were part and which helped bring them forth, to see why assessments and actions made sense to people at the time. The second half of the paper is dedicated to one way in which investigators could begin to reconstruct people's unfolding mindsets.

Impact on industry

In an era where a large portion of accidents gets attributed to human error, it is critical to understand why people did what they did, rather than judging them for not doing what we now know they should have done. This paper contributes by helping investigators avoid the traps of hindsight, and by presenting a method with which investigators can begin to see how people's actions and assessments could actually have made sense at the time.

Introduction

In human factors today there are basically two different views on human error and the human contribution to accidents. One view, recently dubbed "the old view" (AMA, 1998; Reason, 2000), sees human error as a cause of failure. In the old view of human error:

• Human error is the cause of most accidents.

• The engineered systems in which people work are made to be basically safe; their success is intrinsic. The chief threat to safety comes from the inherent unreliability of people.

• Progress on safety can be made by protecting these systems from unreliable humans through selection, proceduralization, automation, training and discipline.

The other view, also called "the new view", sees human error not as a cause, but as a symptom of failure (Rasmussen & Batstone, 1989; Woods et al., 1994; AMA, 1998; Reason, 2000; Hoffman & Woods, 2000). In the new view of human error:

• Human error is a symptom of trouble deeper inside the system.

• Safety is not inherent in systems. The systems themselves are contradictions between multiple goals that people must pursue simultaneously. People have to create safety.

• Human error is systematically connected to features of people's tools, tasks and operating environment. Progress on safety comes from understanding and influencing these connections.

The new view of human error represents a substantial movement across the fields of human factors and organizational safety (Reason, 1997; Rochlin, 1999) and encourages the investigation of factors that easily disappear behind the label "human error"—long-standing organizational deficiencies; design problems; procedural shortcomings and so forth. The rationale is that human error is not an explanation for failure, but instead demands an explanation, and that effective countermeasures start not with individual human beings who themselves were at the receiving end of much latent trouble (Reason, 1997) but rather with the error-producing conditions present in their working environment. Most of those involved in accident research and analyses are proponents of the new view. For example:
"...simply writing off ... accidents merely to (human) error is an overly simplistic, if not naive, approach.... After all, it is well established that accidents cannot be attributed to a single cause, or in most instances, even a single individual." (Shappell & Wiegmann, 2001, p. 60).

However, our willingness to embrace the new view of human error in our analytic practice is not always matched by our ability to do so. When confronted by failure, it is easy to retreat into the old view. We seek out the "bad apples" and assume that with them gone, the system will be safer than before. An investigation's emphasis on proximal causes ensures that the mishap remains the result of a few uncharacteristically ill-performing individuals who are not representative of the system or the larger practitioner population in it. It leaves existing beliefs about the basic safety of the system intact.

The pilots of a large military helicopter that crashed on a hillside in Scotland in 1994 were found guilty of gross negligence. The pilots did not survive—29 people died in total—so their side of the story could never be heard. The official inquiry had no problems with "destroying the reputation of two good men", as a fellow pilot put it. Potentially fundamental vulnerabilities (such as 160 reported cases of Uncommanded Flying Control Movement or UFCM in computerized helicopters alone since 1994) were not looked into seriously (Sunday Times, 25 June 2000).

Faced with a bad, surprising event, we seem more willing to change the players in the event (e.g. their reputations) than to amend our basic beliefs about the system that made the event possible. To be sure, reconstructing the human contribution to a sequence of events that led up to an accident is not easy. As investigators we were seldom—if ever—there when events unfolded around the people now under investigation. As a result, their actions and assessments may appear not only controversial, but truly befuddling when seen from our point of view. In order to understand why people could have done what they did, we need to go back and triangulate and interpolate, from a wide variety of sources, the kinds of mindsets that they had at the time. But working against us are the inherent biases introduced by hindsight (Fischhoff, 1975) and the multiple pressures and constraints that operate on almost every investigation—political as well as practical (Galison, 2000).

In this paper I hope to make a contribution to our ability to reconstruct past human performance and how it played a role in accidents. I first capture some of the mechanisms of the hindsight bias, and observe them at work in how we routinely handle and describe human performance evidence. Trying to avoid these biases and mechanisms, I propose ways forward for how to reconstruct people's unfolding mindsets. Most examples will come from aviation, but they, and the principles they illustrate, should apply equally well to domains ranging from driving to shipping to industrial and occupational safety.

The mechanisms of hindsight

One of the safest bets we can make as investigators or outside observers is that we know more about the incident or accident than the people who were caught up in it—thanks to hindsight:

• Hindsight means being able to look back, from the outside, on a sequence of events that led to an outcome we already know about;

• Hindsight gives us almost unlimited access to the true nature of the situation that surrounded people at the time (where they actually were versus where they thought they were; what state their system was in versus what they thought it was in);

• Hindsight allows us to pinpoint what people missed and shouldn't have missed; what they didn't do but should have done.

From the perspective of the outside and hindsight (typically the investigator's perspective), we can oversee the entire sequence of events—the triggering conditions, its various twists and turns, the outcome, and the true nature of circumstances surrounding the route to trouble. In contrast, the perspective from the inside of the tunnel is the point of view of people in the unfolding situation. To them, the outcome was not known, nor the entirety of surrounding circumstances. They contributed to the direction of the sequence of events on the basis of what they saw on the inside of the unfolding situation. For investigators, however, it is very difficult to attain this perspective. The mechanisms by which hindsight operates on human performance data are mutually reinforcing. Together they continually pull us in the direction of the position of the retrospective outsider. The ways in which we retrieve human performance evidence from the rubble of an accident, represent it, and re-tell it, typically sponsors this migration of viewpoint.

Mechanism 1: Making tangled histories linear by cherry-picking and re-grouping evidence

One effect of hindsight is that "people who know the outcome of a complex prior history of tangled, indeterminate events, remember that history as being much more determinant, leading 'inevitably' to the outcome they already knew" (Weick, 1995, p. 28). Hindsight allows us to change past indeterminacy and complexity into order, structure, and oversimplified causality (Reason, 1990). In trying to make sense of past performance, it is always tempting to group individual fragments of human performance which prima facie point to some common condition or mindset. For example, "hurry" to land is a leitmotif extracted from the evidence in the following investigation, and that haste in turn is enlisted to explain the errors that were made:

"Investigators were able to identify a series of errors that initiated with the flightcrew's acceptance of the controller's offer to land on runway 19…The CVR indicates that the decision to accept the offer to land on runway 19 was made jointly by the captain and the first officer in a 4-second exchange that began at 2136:38. The captain asked: 'would you like to shoot the one nine straight in?' The first officer responded, 'Yeah, we'll have to scramble to get down. We can do it.' This interchange followed an earlier discussion in which the captain indicated to the first officer his desire to hurry the arrival into Cali, following the delay on departure from Miami, in an apparent attempt to minimize the effect of the delay on the flight attendants' rest requirements. For example, at 2126:01, he asked the first officer to 'keep the speed up in the descent'… (This is) evidence of the hurried nature of the tasks performed." (Aeronautica Civil, 1996, p. 29)

But the fragments used to build the argument of haste come from over half an hour of extended performance. The investigator treats the record as if it were a public quarry to pick stones from, and the accident explanation the building he needs to erect. The problem is that each fragment is meaningless outside the context that produced it: each fragment has its own story, background, and reasons for being, and when it was produced it may have had nothing to do with the other fragments it is now grouped with. Also, behavior takes place in between the fragments. These intermediary episodes contain changes and evolutions in perceptions and assessments that separate the excised fragments not only in time, but also in meaning. Thus, the condition, and the constructed linearity in the story that binds these performance fragments, arises not from the circumstances that brought each of the fragments forth; it is not a feature of those circumstances. It is an artifact of the investigator. In the case described above, "hurry" is a condition identified in hindsight, one that plausibly couples the start of the flight (almost 2 hours behind schedule) with its fatal ending (on a mountainside rather than an airport). "Hurry" is a retrospectively invoked leitmotif that guides the search for evidence about itself. It leaves the investigator with a story that is admittedly more linear and plausible and less messy and complex than the actual events. Yet it is not a set of findings, but of tautologies.

Mechanism 2: Finding what people could have done to avoid the accident

Tracing the sequence of events back from the outcome—that we as investigators already know about—we invariably come across joints where people had opportunities to revise their assessment of the situation but failed to do so; where people were given the option to recover from their route to trouble, but did not take it. These are counterfactuals—quite common in accident analysis. For example, "The airplane could have overcome the windshear encounter if the pitch attitude of 15 degrees nose-up had been maintained, the thrust had been set to 1.93 EPR (Engine Pressure Ratio) and the landing gear had been retracted on schedule" (NTSB, 1995, p. 119). Counterfactuals prove what could have happened if certain minute and often utopian conditions had been met. Counterfactual reasoning may be a fruitful exercise when trying to uncover potential countermeasures against such failures in the future.

But saying what people could have done in order to prevent a particular outcome does not explain why they did what they did. This is the problem with counterfactuals. When they are enlisted as explanatory proxy, they help circumvent the hard problem of investigations: finding out why people did what they did. Stressing what was not done (but if it had been done, the accident would not have happened) explains nothing about what actually happened, or why.

In addition, counterfactuals are a powerful tributary to the hindsight bias. They help us impose structure and linearity on tangled prior histories. Counterfactuals can convert a mass of indeterminate actions and events, themselves overlapping and interacting, into a linear series of straightforward bifurcations. For example, people could have perfectly executed the go-around maneuver but did not; they could have denied the runway change but did not. As the sequence of events rolls back into time, away from its outcome, the story builds. We notice that people chose the wrong prong at each fork, time and again—ferrying them along inevitably to the outcome that formed the starting point of our investigation (for without it, there would have been no investigation).

But human work in complex, dynamic worlds is seldom about simple dichotomous choices (as in: to err or not to err). Bifurcations are extremely rare—especially those that yield clear previews of the respective outcomes at each end. In reality, choice moments (such as there are) typically reveal multiple possible pathways that stretch out, like cracks in a window, into the ever denser fog of futures not yet known. Their outcomes are indeterminate; hidden in what is still to come. In reality, actions need to be taken under uncertainty and under the pressure of limited time and other resources. What from the retrospective outside may look like a discrete, leisurely two-choice opportunity to not fail, is from the inside really just one fragment caught up in a stream of surrounding actions and assessments. In fact, from the inside it may not look like a choice at all. These are often choices only in hindsight. To the people caught up in the sequence of events there was perhaps not any compelling reason to re-assess their situation or decide against anything (or else they probably would have) at the point the investigator has now found significant or controversial. They were likely doing what they were doing because they thought they were right, given their understanding of the situation and their pressures. The challenge for an investigator becomes to understand how this may not have been a discrete event to the people whose actions are under investigation. The investigator needs to see how other people's "decisions" to continue were likely nothing more than continuous behavior—reinforced by their current understanding of the situation, confirmed by the cues they were focusing on, and reaffirmed by their expectations of how things would develop.

Mechanism 3: Judging people for what they did not do but should have done

Where counterfactuals are used in investigations, even as explanatory proxy, they themselves often require explanations as well. After all, if an exit from the route to trouble stands out so clearly to us, how was it possible for other people to miss it? If there was an opportunity to recover, to not crash, then failing to grab it demands an explanation. The place where investigators look for clarification is often the set of rules, professional standards and available data that surrounded people's operation at the time, and how people did not see or meet that which they should have seen or met. Recognizing that there is a mismatch between what was done or seen and what should have been done or seen—as per those standards—we easily judge people for not doing what they should have done.

Where fragments of behavior are contrasted with written guidance that can be found to have been applicable in hindsight, actual performance is often found wanting; it does not live up to procedures or regulations. For example, "One of the pilots…executed (a computer entry) without having verified that it was the correct selection and without having first obtained approval of the other pilot, contrary to procedures." (Aeronautica Civil, 1996, p. 31)

Investigations invest considerably in organizational archeology so that they can construct the regulatory or procedural framework within which the operations took place, or should have taken place. Inconsistencies between existing procedures or regulations and actual behavior are easy to expose when organizational records are excavated after-the-fact and rules uncovered that would have fit this or that particular situation. This is not, however, very informative. There is virtually always a mismatch between actual behavior and written guidance that can be located in hindsight (Suchman, 1987; Woods et al., 1994). Pointing out that there is a mismatch sheds little light on the why of the behavior in question. And for that matter, mismatches between procedures and practice are not unique to mishaps (Degani & Wiener, 1991).

Another route to constructing a world against which investigators hold individual performance fragments is finding all the cues in a situation that were not picked up by the practitioners, but that, in hindsight, proved critical. Take the turn towards the mountains on the left that was made just before an accident near Cali, Colombia in 1995 (Aeronautica Civil, 1996). What should the crew have seen in order to notice the turn? They had plenty of indications, according to the manufacturer of their aircraft:
"Indications that the airplane was in a left turn would have included the following: the EHSI (Electronic Horizontal Situation Indicator) Map Display (if selected) with a curved path leading away from the intended direction of flight; the EHSI VOR display, with the CDI (Course Deviation Indicator) displaced to the right, indicating the airplane was left of the direct Cali VOR course, the EADI indicating approximately 16 degrees of bank, and all heading indicators moving to the right. Additionally the crew may have tuned Rozo in the ADF and may have had bearing pointer information to Rozo NDB on the RMDI" (Boeing, 1996, p. 13).

This is a standard response after mishaps: point to the data that would have revealed the true nature of the situation. Knowledge of the "critical" data comes only with the omniscience of hindsight, but if data can be shown to have been physically available, it is assumed that it should have been picked up by the practitioners in the situation. The problem is that pointing out that it should have does not explain why it was not, or why it was interpreted differently back then (Weick, 1995). There is a dissociation between data availability and data observability (Woods et al., 1994)—between what can be shown to have been physically available and what would have been observable given the multiple interleaving tasks, goals, attentional focus, interests, and—as Vaughan (1996) shows—culture of the practitioner.

There are also less obvious or not documented standards. These are often invoked when a controversial fragment (e.g. a decision to accept a runway change (Aeronautica Civil, 1996), or the decision to go around or not (NTSB, 1995)) knows no clear pre-ordained guidance but relies on local, situated judgment. For these cases there are always "standards of good practice", which are based on convention and putatively practiced across an entire industry. One such standard in aviation is "good airmanship", which, if nothing else can, will explain the variance in behavior that had not yet been accounted for.

While micromatching, the investigator frames people's past assessments and actions inside a world that s/he has invoked retrospectively. Looking at the frame as overlay on the sequence of events, s/he sees that pieces of behavior stick out in various places and at various angles: a rule not followed here; available data not observed there; professional standards not met over there. But rather than explaining controversial fragments in relation to the circumstances that brought them forth, and in relation to the stream of preceding as well as succeeding behaviors which surrounded them, the frame merely boxes performance fragments inside a world the investigator now knows to be true. The problem is that this after-the-fact world may have very little relevance to the actual world that produced the behavior under investigation. The behavior is contrasted against the investigator's reality, not the reality surrounding the behavior in question at the time. Judging people for what they did not do relative to some rule or standard does not explain why they did what they did. Saying that people failed to take this or that pathway—only in hindsight the right one—judges other people from a position of broader insight and outcome knowledge that they themselves did not have. It does not explain a thing yet; it does not shed any light on why people did what they did given their surrounding circumstances. The investigator has gotten caught in what William James called "the psychologist's fallacy" a century ago: he has substituted his own reality for the one of his object of study.

It appears that in order to explain failure, we seek failure. In order to explain missed opportunities and bad choices, we seek flawed analyses, inaccurate perceptions, violated rules—even if these were not thought to be influential or obvious or even flawed at the time (Starbuck & Milliken, 1988). This search for people's failures is another well-documented effect of the hindsight bias: knowledge of outcome fundamentally influences how we see a process. If we know the outcome was bad, we can no longer objectively look at the behavior leading up to it—it must also have been bad (Fischhoff, 1975; Woods et al., 1994; Reason, 1997).
Brian Abraham is offline  

