
Erebus 25 years on

18th Oct 2006, 15:37  #101
negative on the "Bollocks"

400Rulz....that is an interesting piece, and a personal perspective.....read it several times just to take it in, and personally I think it's very balanced...don't have an issue with your opinion, as I too agree that the PIC is responsible no matter what happens....I live in that box every day, so I hope my previous post didn't lead you to think I believe Collins was not responsible...because he was, I have no issue with that......

Those Erebus flights were not regular daily flights, and from day one there were issues about who was certified to fly them etc.....but to lump this whole saga on one person..the PIC...and say it was his "fault", which it was (because the PIC IS ALWAYS RESPONSIBLE), is a double standard for this "type" of operation.

I have no doubt about what you say about the low flying....did it myself around the North pole and many places around Mtns and Glaciers in the "Far North"....it was my choice as PIC....Dangerous???? only if you pranged it......

I do respect your position,but this issue has deeply divided opinions....getting past the PIC thing is more my intention....like you ...much respect for those that were affected.....have one myself....PB
pakeha-boy is offline  
18th Oct 2006, 16:06  #102
Firstly, there should be an acknowledgment that if and when the pilot makes a mistake, his will probably be the final enabling one at the apex of a whole pyramid of errors down below. This will, in turn, take the heat off investigations – the ‘we intend to find and punish the culprit’ syndrome.

Yet it is only recently that very dubious management malpractices are being identified and their contribution to accidents given sufficient weight. For though the pilot’s actions are at the tip of the iceberg of responsibility, many other people have had a hand in it – faceless people in aircraft design and manufacture, in computer technology and software, in maintenance, in flying control, in accounts departments and in the corridors of power. But the pilot is available and identifiable.

Research has shown that internally generated thought can block the simultaneous taking up of externally generated information, i.e. that from the outside world. Originally, our ability to blank out too many unwanted stimuli was a survival advantage. But we get nothing for nothing.
The natural adaptations that in past times helped us to survive may now in the technological age be one of our greatest threats. A capacity for conscious representation had survival value for those creatures possessing it, but to achieve its time-saving purposes, conscious experience had to be of very limited capacity.
According to Professor Dixon, formerly Professor of Psychology at University College, London, ‘Conscious awareness is a small flawed barred window on the great tide of information which flows unceasingly into, around and out of the four hundred thousand million neurones and hundred billion synapses which comprise the human nervous system.’ And under stress – the stress of landing, the stress of take off – awareness narrows and the window becomes a tunnel.

A mental ‘set’ is a readiness for a particular thought process to the exclusion of others, resulting in fixation. ‘Set’ is another of the survival characteristics we have inherited. The human brain evolved to help individuals live and survive circumstances very different from our own. It predisposes us to select our focus on that part of the picture paramount at the time – a vision often so totally focused that it ignores the rest of the environment. The pattern of selectivity programmed into humans by the ancient world is totally obsolete in the present day, where a flexible scanning throughout the visual environment is required. The human beings in the cockpit have to steer a difficult course between too many and too few visual stimuli.

High arousal contributes to ‘set’. The mind becomes tunnelled on a particular course of action. Add to that the ingredient of fatigue or stress and it is not difficult to see that a ‘set’ as hard as concrete can result. Furthermore, ‘set’, particularly in the captain, is infectious. There is a follow-my-leader syndrome. So it is easy to see why most aircraft accidents are caused by ‘silly’ mistakes in the approach and landing phase.

Professor Reason in Human Error (1990) distinguishes between active error, the effects of which are felt almost immediately, and latent error, the adverse consequences of which may lie dormant within the system for a long time. This can clearly be seen in aviation, where pilots at the sharp end make an active error, while latent error lies behind the lines within the management support system. Many of these are already there awaiting a trigger, usually supplied by the pilot. ‘There is a growing awareness within the human reliability community that attempts to discover and neutralise those latent failures will have a greater beneficial effect upon system safety than will localised efforts to minimise active errors.’

As long ago as 1980, Stanley Roscoe wrote that:
The tenacious retention of ‘pilot error’ as an accident ‘cause factor’ by governmental agencies, equipment manufacturers and airline management, and even by pilot unions indirectly, is a subtle manifestation of the apparently natural human inclination to narrow the responsibility for tragic events that receive wide public attention. If the responsibility can be isolated to the momentary defection of a single individual, the captain in command, then other members of the aviation community remain untarnished. The unions briefly acknowledge the inescapable conclusion that pilots can make errors and thereby gain a few bargaining points with management for the future.
Everyone else, including other crew members, remains clean. The airline accepts the inevitable financial liability for losses but escapes blame for inadequate training programmes or procedural indoctrination. Equipment manufacturers avoid product liability for faulty design. Regulatory agencies are not criticised for approving an unsafe operation, failing to invoke obviously needed precautionary restrictions, or, worse yet, contributing directly by injudicious control or unsafe clearance authorisations. Only the pilot who made the ‘error’ and his family suffer, and their suffering may be assuaged by a liberal pension in exchange for his quiet early retirement – in the event that he was fortunate enough to survive the accident.

The operating crew are the last line of defence for everyone's mistakes.

Relying on human beings not to be human in a safety-related business is insanity.

EGPWS (Enhanced Ground Proximity Warning System): Even Good People Will make an error Some day.

400Rulz - the above is a collection put together from various professional authors. It is indeed most interesting that your father, on hearing of the accident, was instantly able to determine its cause. How so? What other holes in the Swiss cheese do you think there may have been, if any? Might I ask what your aviation credentials are?
Brian Abraham is offline  
18th Oct 2006, 23:54  #103
A Short Reply.....

Hi BA,
You may indeed ask - 11600 hrs widebody (767/747), 1000 hrs F27, 900 odd hours military fast jet and transport. My old man took one of those flights down to McMurdo. The briefing material was all there - the onus was on the operator on the day to swot up. One of the problems was that a lot of the material was anecdotal from previous flights (incidentally, that's how Route Guides are established), so if a different route was being flown on the day, it was up to the crew to ascertain if the route was the same. Collins was a fastidious person, and there is no doubt in my mind that he and his crew had briefed thoroughly. HOWEVER, the grid MORAs on the chart are there for a reason. No visual reference, no descent below MSH. I have a strong suspicion that it was more of a commercial decision to descend in order to give the pax value for money. That is a decision made on the flight deck.
The discussion on this will continue for many years to come. Nearly everybody in NZ knew someone on that flight - personally I knew half a dozen. But that is also why the issue is highly charged - it is emotive. I do not disagree that AirNZ should have been allocated some of the blame as there were a lot of "bad management practices" at the time (and there still are) but that does not absolve the crew of the final responsibility for the safe conduct of the flight. We are the last defence.
PB thanks for the reply
400Rulz is offline  
24th Oct 2006, 06:22  #104
400Rulz and PB. A follow up to the previous.
Sidney Dekker
Associate Professor
Centre for Human Factors in Aviation, IKP
Linköping Institute of Technology
SE - 581 83 Linköping
Sweden
[email protected]
Punishing People or Learning from Failure?
The choice is ours
Disinheriting Fitts and Jones '47
Abstract
In this paper I describe how Fitts and Jones laid the foundation for aviation human factors by trying to understand why human errors made sense given the circumstances surrounding people at the time. Fitts and Jones remind us that human error is not the cause of failure, but a symptom of failure, and that "human error"—by any other name or by any other human—should be the starting point of our investigations, not the conclusion. Although most in aviation human factors embrace this view in principle, practice often leads us to the old view of human error which sees human error as the chief threat to system safety. I discuss two practices by which we quickly regress into the old view and disinherit Fitts and Jones: (1) the punishment of individuals, and (2) error classification systems. In contrast, real progress on safety can be made by understanding how people create safety, and by understanding how the creation of safety can break down in resource-limited systems that pursue multiple competing goals. I argue that we should de-emphasize the search for causes of failure and concentrate instead on mechanisms by which failure succeeds, by which the creation of safety breaks down.
Keywords: human error, mechanisms of failure, safety culture, human factors, classification, creation of safety
Introduction
The groundwork for human factors in aviation lies in a couple of studies done by Paul Fitts and his colleague Jones right after World War II. Fitts and Jones (1947) found how features of World War II airplane cockpits systematically influenced the way in which pilots made errors. For example, pilots confused the flap and gear handles because these typically looked and felt the same and were co-located. Or they mixed up the locations of throttle, mixture and propeller controls because these kept changing across different cockpits. Human error was the starting point for Fitts' and Jones' studies—not the conclusion. The label "pilot error" was deemed unsatisfactory, and used as a pointer to hunt for deeper, more systemic conditions that led to consistent trouble. The idea these studies convey to us is that mistakes actually make sense once we understand features of the engineered world that surrounds people. Human errors are systematically connected to features of people's tools and tasks. The insight, at the time as it is now, was profound: the world is not unchangeable; systems are not static, not simply given. We can re-tool, re-build, re-design, and thus influence the way in which people perform. This, indeed, is the historical imperative of human factors: understanding why people do what they do so we can tweak, change the world in which they work and shape their assessments and actions accordingly.
Years later, aerospace human factors extended the Fitts and Jones work. Increasingly, we realized how trade-offs by people at the sharp end are influenced by what happens at the blunt end of their operating worlds; their organizations (Maurino et al., 1995). Organizations make resources available for people to use in local workplaces (tools, training, teammates) but put constraints on what goes on there at the same time (time pressures, economic considerations), which in turn influences the way in which people decide and act in context (Woods et al., 1994; Reason, 1997). Again, what people do makes sense on the basis of the circumstances surrounding them, but now circumstances that reach far beyond their immediate engineered interfaces. This realization has put the Fitts and Jones premise to work in organizational contexts, for example changing workplace conditions or reducing working hours or de-emphasizing production to encourage safer trade-offs on the line (e.g. the "no fault go-around policy" held by many airlines today, where no (nasty) questions will be asked if a pilot breaks off his attempt to land). Human error is still systematically connected to features of people's tools and tasks, and, as acknowledged more recently, their operational and organizational environment.
Two views of human error
These realizations of aviation human factors pit one view of human error against another. In fact, these are two views of human error that are almost totally irreconcilable. If you believe one or pursue countermeasures on its basis, you truly are not able to embrace the tenets and putative investments in safety of the other. The two ways of looking at human error are that we can see human error as a cause of failure, or we can see human error as a symptom of failure (Woods et al., 1994). The two views have recently been characterized as the old view of human error versus the new view (Cook, Render & Woods, 2000; AMA, 1998; Reason, 2000) and painted as fundamentally irreconcilable perspectives on the human contribution to system success and failure.
In the old view of human error:
• Human error is the cause of many accidents.
• The system in which people work is basically safe; success is intrinsic. The chief threat to safety comes from the inherent unreliability of people.
• Progress on safety can be made by protecting the system from unreliable humans through selection, proceduralization, automation, training and discipline.
This old view was the one that Fitts and Jones remind us to be skeptical of. Instead, implicit in their work was the new view of human error:
• Human error is a symptom of trouble deeper inside the system.
• Safety is not inherent in systems. The systems themselves are contradictions between multiple goals that people must pursue simultaneously. People have to create safety.
• Human error is systematically connected to features of people's tools, tasks and operating environment. Progress on safety comes from understanding and influencing these connections.
Perhaps everyone in aviation human factors wants to pursue the new view. And most people and organizations certainly posture as if that is exactly what they do. Indeed, it is not difficult to find proponents of the new view—in principle—in aerospace human factors. For example:
"...simply writing off aviation accidents merely to pilot error is an overly simplistic, if not naive, approach.... After all, it is well established that accidents cannot be attributed to a single cause, or in most instances, even a single individual. In fact, even the identification of a 'primary' cause is fraught with problems. Instead, aviation accidents are the result of a number of causes..."
(Shappell & Wiegmann, 2001, p. 60).
In practice, however, attempts to pursue the causes of system failure according to the new view can become retreads of the old view of human error. In practice, getting away from the tendency to judge instead of explain turns out to be difficult; avoiding the fundamental attribution error remains very hard; we tend to blame the man-in-the-loop. This is not because we aim to blame—in fact, we probably intend the opposite. But roads that lead to the old view in aviation human factors are paved with intentions to follow the new view. In practice, we all too often choose to disinherit Fitts and Jones '47, frequently without even knowing it. In this paper, I try to shed some light on how this happens, by looking at the pursuit of individual culprits in the wake of failure, and at error classification systems. I then move on to the new view of human error, extending it with the idea that we should de-emphasize the search for causes and instead concentrate on understanding and describing the mechanisms by which failure succeeds.
The Bad Apple Theory I: Punish the culprits
Progress on safety in the old view of human error relies on selection, training and discipline— weeding and tweaking the nature of human attributes in complex systems that themselves are basically safe and immutable. For example, Kern (1999) characterizes "rogue pilots" as extremely unreliable elements, which the system, itself safe, needs to identify and contain or exile:
"Rogue pilots are a silent menace, undermining aviation and threatening lives and property every day.... Rogues are a unique brand of undisciplined pilots who place their own egos above all else—endangering themselves, other pilots and their passengers, and everyone over whom they fly. They are found in the cockpits of major airliners, military jets and in general aviation...just one poor decision or temptation away from fiery disaster." (back cover)
The system, in other words, contains bad apples. In order to achieve safety, it needs to get rid of them, limit their contribution to death and destruction by discipline, training or taking them to court (e.g. Wilkinson, 1994). In a recent comment, Aviation Week and Space Technology (North, 2000) discusses Valujet 592 which crashed after take-off from Miami airport because oxygen generators in its forward cargo hold had caught fire. The generators had been loaded onto the airplane without shipping caps in place, by employees of a maintenance contractor, who were subsequently prosecuted. The editor:
"...strongly believed the failure of SabreTech employees to put caps on oxygen generators constituted willful negligence that led to the killing of 110 passengers and crew. Prosecutors were right to bring charges. There has to be some fear that not doing one's job correctly could lead to prosecution." (p. 66)
Fear as investment in safety? This is a bizarre notion. If we want to know how to learn from failure, the balance of scientific opinion is quite clear: fear doesn't work. In fact, it corrupts opportunities to learn. Instilling fear does the opposite of what a system concerned with safety really needs: learn from failure by learning about it before it happens. This is what safety cultures are all about: cultures that allow the boss to hear bad news. Fear stifles the flow of safety-related information—the prime ingredient of a safety culture (Reason, 1997). People will think twice about going to the boss with bad news if the fear of punishment is hanging over their heads. Many people believe that we can punish and learn at the same time. This is a complete illusion. The two are mutually exclusive. Punishing is about keeping our beliefs in a basically safe system intact.
Learning is about changing these beliefs, and changing the system. Punishing is about seeing the culprits as unique parts of the failure. Learning is about seeing the failure as a part of the system. Punishing is about stifling the flow of safety-related information. Learning is about increasing that flow. Punishing is about closure, about moving beyond the terrible event. Learning is about continuity, about the continuous improvement that comes from firmly integrating the terrible event in what the system knows about itself. Punishing is about not getting caught the next time. Learning is about countermeasures that remove error-producing conditions so there won't be a next time.
The construction of cause
Framing the cause of the Valujet disaster as the decision by maintenance employees to place unexpended oxygen generators onboard without shipping caps in place immediately implies a wrong decision, a missed opportunity to prevent disaster, a disregard of safety rules and practices.
Framing of the cause as a decision leads to the identification of responsibility of people who made that decision which in turn leads to the legal pursuit of them as culprits. The Bad Apple Theory reigns supreme. It also implies that cause can be found, neatly and objectively, in the rubble. The opposite is true. We don't find causes. We construct cause. "Human error", if there were such a thing, is not a question of individual single-point failures to notice or process—not in this story and probably not in any story of breakdowns in flight safety. Practice that goes sour spreads out over time and in space, touching all the areas that usually make practitioners successful. The "errors" are not surprising brain slips that we can beat out of people by dragging them before a jury. Instead, errors are series of actions and assessments that are systematically connected to people's tools and tasks and environment; actions and assessments that often make complete sense when viewed from inside their situation. Were one to trace "the cause" of failure, the causal network would fan out immediately, like cracks in a window, with only the investigator determining when to stop looking because the evidence will not do it for him or her. There is no single cause. Neither for success, nor for failure.
The SabreTech maintenance employees inhabited a world of boss-men and sudden firings. It was a world of language difficulties—not just because many were Spanish speakers in an environment of English engineering language, as described by Langewiesche (1998, p. 228):
"Here is what really happened. Nearly 600 people logged work time against the three Valujet airplanes in SabreTech's Miami hangar; of them 72 workers logged 910 hours across several weeks against the job of replacing the "expired" oxygen generators—those at the end of their approved lives. According to the supplied Valujet work card 0069, the second step of the sevenstep process was: 'If the generator has not been expended install shipping cap on the firing pin.' This required a gang of hard-pressed mechanics to draw a distinction between canisters that were 'expired', meaning the ones they were removing, and canisters that were not 'expended', meaning the same ones, loaded and ready to fire, on which they were now expected to put nonexistent caps. Also involved were canisters which were expired and expended, and others which were not expired but were expended. And then, of course, there was the simpler thing—a set of new replacement canisters, which were both unexpended and unexpired."
And, oh by the way, as you may already have picked up: there were no shipping caps to be found in Miami. How can we prosecute people for not installing something we do not provide them with? The pursuit of culprits disinherits the legacy of Fitts and Jones. One has to side with Hawkins (1987, p. 127) who argues that exhortation (via punishment, discipline or whatever measure) "is unlikely to have any long-term effect unless the exhortation is accompanied by other measures... A more profound inquiry into the nature of the forces which drive the activities of people is necessary in order to learn whether they can be manipulated and if so, how". Indeed, this was Fitts's and Jones's insight all along. If researchers could understand and modify the situation in which humans were required to perform, they could understand and modify the performance that went on inside of it. Central to this idea is the local rationality principle (Simon, 1969; Woods et al., 1994). People do reasonable, or locally rational things given their tools, their multiple goals and pressures, their knowledge and their limited resources. Human error is a symptom—a symptom of irreconcilable constraints and pressures deeper inside a system; a pointer to systemic trouble further upstream.
The Bad Apple Theory II: Error classification systems
In order to lead people (e.g. investigators) to the sources of human error as inspired by Fitts and Jones '47, a number of error classification systems have been developed in aviation (e.g. the Threat and Error Management Model (e.g. Helmreich et al., 1999; Helmreich, 2000) and the Human Factors Analysis and Classification System (HFACS, Shappell & Wiegmann, 2001)). The biggest trap in both error methods is the illusion that classification is the same as analysis. While classification systems intend to provide investigators more insight into the background of human error, they actually risk trotting down a garden path toward judgments of people instead of explanations of their performance; toward shifting blame higher and further into or even out of organizational echelons, but always onto others. Several false ideas about human error pervade these classification systems, all of which put them onto the road to The Bad Apple Theory.
First, error classification systems assume that we can meaningfully count and tabulate human errors. Human error "in the wild", however—as it occurs in natural complex settings—resists tabulation because of the complex interactions, the long and twisted pathways to breakdown and the context-dependency and diversity of human intention and action. Labeling certain assessments or actions in the swirl of human and social and technical activity as causal, or as "errors" and counting them in some database, is entirely arbitrary and ultimately meaningless. Also, we can never agree on what we mean by error:
• Do we count errors as causes of failure? For example: This event was due to human error.
• Or as the failure itself? For example: The pilot's selection of that mode was an error.
• Or as a process, or, more specifically, as a departure from some kind of standard? This may be operating procedures, or simply good airmanship. Depending on what you use as standard, you will come to different conclusions about what is an error.
Counting and coarsely classifying surface variabilities is protoscientific at best. Counting does not make science, or even useful practice, since interventions on the basis of surface variability will merely peck away at the margins of an issue. A focus on superficial similarities blocks our ability to see deeper relationships and subtleties. It disconnects performance fragments from the context that brought them forth, from the context that accompanied them; that gave them meaning; and that holds the keys to their explanation. Instead it renders performance fragments denuded: as uncloaked, context-less, meaningless shrapnel scattered across broad classifications in the wake of an observer's arbitrary judgment.
Second, while the original Fitts and Jones legacy lives on very strongly in human factors (for example in Norman (1994) who calls technology something that can make us either smart or dumb), human error classification systems often pay little attention to the systematic and detailed nature of the connection between error and people's tools. According to Helmreich (2000), "errors result from physiological and psychological limitations of humans. Causes of error include fatigue, workload, and fear, as well as cognitive overload, poor interpersonal communications, imperfect information processing, and flawed decision making" (p. 781). Gone are the systematic connections between people's assessments and actions on the one hand, and their tools and tasks on the other. In their place are purely human causes—sources of trouble that are endogenous; internal to the human component. Shappell and Wiegmann, following the original Reason (1990) division between latent failures and active failures, merely list an undifferentiated "poor design" only under potential organizational influences—the fourth level up in the causal stream that forms HFACS. Again, little effort is made to probe the systematic connections between human error and the engineered environment that people do their work in. The gaps that this leaves in our understanding of the sources of failure are daunting.
Third, Fitts and Jones remind us that it is counterproductive to say what people failed to do or should have done, since none of that explains why people did what they did (Dekker, 2001). With the intention of explaining why people did what they did, error classification systems help investigators label errors as "poor decisions", "failures to adhere to brief", "failures to prioritize attention", "improper procedure", and so forth (Shappell & Wiegmann, 2001, p. 63). These are not explanations, they are judgments. Similarly, they rely on fashionable labels that do little more than saying "human error" over and over again, re-inventing it under a more modern guise:
• Loss of CRM (Crew Resource Management) is one name for human error—the failure to invest in common ground, to share data that, in hindsight, turned out to have been significant.
• Complacency is also a name for human error—the failure to recognize the gravity of a situation or to adhere to standards of care or good practice.
• Non-compliance is a name for human error—the failure to follow rules or procedures that would keep the job safe.
• Loss of situation awareness is another name for human error—the failure to notice things that in hindsight turned out to be critical.
Instead of explanations of performance, these labels are judgments. For example, we judge people for not noticing what we now know to have been important data in their situation, calling it their error—their loss of situation awareness.
Brian Abraham is offline  
24th Oct 2006, 06:35  #105
Fourth, error classification systems typically try to lead investigators further up the causal pathway, in search of more distal contributors to the failure that occurred. The intention is consistent with the organizational extension of the Fitts and Jones '47 premise (see Maurino et al., 1995) but classification systems quickly turn it into re-runs of The Bad Apple Theory. For example, Shappell & Wiegmann (2001) explain that "it is not uncommon for accident investigators to interview the pilot's friends, colleagues, and supervisors after a fatal crash only to find out that they 'knew it would happen to him some day'." (p. 73) HFACS suggests that if supervisors do not catch these ill components before they kill themselves, then the supervisors are to blame as well. In these kinds of judgments the hindsight bias reigns supreme (see also Kern, 1999). Many sources show how we construct plausible, linear stories of how failure came about once we know the outcome (e.g. Starbuck & Milliken, 1988), which includes making the participants look bad enough to fit the bad outcome they were involved in (Reason, 1997). Such reactions to failure make after-the-fact data mining of personal shortcomings—real or imagined—not just counterproductive (sponsoring The Bad Apple Theory) but actually untrustworthy. Fitts' and Jones' legacy says that we must try to see how people—supervisors and others—interpreted the world from their position on the inside; why it made sense for them to continue certain practices given their knowledge, focus of attention and competing goals. The error classification systems do nothing to elucidate any of this, instead stopping when they have found the next responsible human up the causal pathway. "Human error", by any other label and by any other human, continues to be the conclusion of an investigation, not the starting point. This is the old view of human error, re-inventing human error under the guise of supervisory shortcomings and organizational deficiencies. HFACS contains such lists of "unsafe supervision" that can putatively account for problems that occur at the sharp end of practice. For example, unsafe supervision includes "failure to provide guidance, failure to provide oversight, failure to provide training, failure to provide correct data, inadequate opportunity for crew rest" and so forth (Shappell & Wiegmann, 2001, p. 73). This is nothing more than a parade of judgments: judgments of what supervisors failed to do, not explanations of why they did what they did, or why that perhaps made sense given the resources and constraints that governed their work. Instead of explaining a human error problem, HFACS simply re-locates it, shoving it higher up, and with it the blame and judgments for failure. Substituting supervisory failure or organizational failure for operator failure is meaningless and explains nothing. It sustains the fundamental attribution error, merely directing its misconstrued notion elsewhere, away from front-line operators.
In conclusion, classification of errors is not analysis of errors. Categorization of errors cannot double as understanding of errors. Error classification systems may in fact reinforce and entrench the misconceptions, biases and errors that we always risk making in our dealings with failure, while giving us the illusion we have actually embraced the new view to human error. The step from classifying errors to pursuing culprits appears a small one, and as counterproductive as ever. In aviation, we have seen The Bad Apple Theory at work and now we see it being re-treaded around the wheels of supposed progress on safety. Yet we have seen the procedural straightjacketing, technology-touting, culprit-extraditing, train-and-blame approach be applied, and invariably stumble and fall. We should not need to see this again. For what we have found is that it is a dead end. There is no progress on safety in the old view of human error.
People create safety
We can make progress on safety once we acknowledge that people themselves create it, and we begin to understand how. Safety is not inherently built into systems or introduced via isolated technical or procedural fixes. Safety is something that people create, at all levels of an operational organization (e.g. AMA, 1998; Sanne, 1999). Safety (and failure) is the emergent property of entire systems of people and technologies who invest in their awareness of potential pathways to breakdown and devise strategies that help forestall failure. The decision of an entire airline to no longer accept NDB approaches (Non-Directional Beacon approaches to a runway, in which the aircraft has no vertical guidance and rather imprecise lateral guidance) (Collins, 2001) is one example of such a strategy; the reluctance of airlines and/or pilots to agree on LASHO—Land And Hold Short Operations—which put them at risk of traveling across an intersecting runway that is in use, is another. In both cases, goal conflicts are evident (production pressures versus protection against known or possible pathways to failure). In both, the trade-off is in favor of safety. In resource-constrained systems, however, safety does not always prevail. RVSM (Reduced Vertical Separation Minima) for example, which will make aircraft fly closer together vertically, will be introduced and adhered to, mostly on the back of promises from isolated technical fixes that would make aircraft altitude holding and reporting more reliable. But at a systems level RVSM tightens coupling and reduces slack, contributing to the risk of interactive trouble, rapid deterioration and difficult recovery (Perrow, 1984). Another way to create safety that is gaining a foothold in the aviation industry is the automation policy, first advocated by Wiener (e.g. 1989) but still not adopted by many airlines. Automation policies are meant to reduce the risk of coordination breakdowns across highly automated flight decks, their aim being to match the level of automation (high, e.g. VNAV (Vertical Navigation, done by the Flight Management System); medium, e.g. heading select; or low, e.g. manual flight with flight director) with human roles (pilot flying versus pilot not-flying) and cockpit system display formats (e.g. map versus raw data) (e.g. Goteman, 1999). This is meant to maximize redundancy and opportunities for double-checking, capitalizing on the strengths of available flightdeck resources, both human and machine.
When failure succeeds
People are not perfect creators of safety. There are patterns, or mechanisms, by which their creation of safety can break down—mechanisms, in other words, by which failure succeeds. Take the case of a DC-9 that got caught in windshear while trying to go around from an approach to Charlotte, NC, in 1994 (NTSB, 1995). Charlotte is a case where people are in a double bind: first, things are too ambiguous for effective feedforward. Not much later things are changing too quickly for effective feedback. While approaching the airport, the situation is too unpredictable, the data too ambiguous, for effective feedforward. In other words, there is insufficient evidence for breaking off the approach (as feedforward to deal with the perceived threat). However, once inside the situation, things change too rapidly for effective feedback. The microburst creates changes in winds and airspeeds that are difficult to manage, especially for a crew whose training never covered a windshear encounter on approach or in such otherwise smooth conditions. Charlotte is not the only pattern by which the creation of safety breaks down; it is not the only mechanism by which failure succeeds. For progress on safety we should de-emphasize the construction of cause—in error classification methods or any other investigation of failure. Once we acknowledge the complexity of failure, and once we acknowledge that safety and failure are emerging properties of systems that try to succeed, the selection of causes—either for failure or for success—becomes highly limited, selective, exclusive and pointless. Instead of constructing causes, we should try to document and learn from patterns of failure. What are the mechanisms by which failure succeeds? Can we already sketch some? What patterns of breakdown in people's creation of safety do we already know about? Charlotte—too ambiguous for feed forward, too dynamic for effective feedback—is one mechanism by which people's investments in safety are outwitted by a rapidly changing world. Understanding the mechanism means becoming able to retard it or block it, by reducing the mechanism's inherent coupling; by disambiguating the data that fuels its progression from the inside. The contours of many other patterns, or mechanisms of failure, are beginning to stand out from thick descriptions of accidents in aerospace, including the normalization of deviance (Vaughan, 1996), the going sour progression (Sarter & Woods, 1997), practical drift (Snook, 2000) and plan continuation (Orasanu et al., in press). Investing further in these and other insights will represent progress on safety. There is no efficient, quick road to understanding human error, as error classification methods make us believe. Their destination will be an illusion, a retread of the old view. Similarly, there is no quick safety fix, as the punishment of culprits would make us believe, for systems that pursue multiple competing goals in a resource constrained, uncertain world. There is, however, percentage in opening the black box of human performance—understanding how people make the systems they operate so successful, and capturing the patterns by which their successes are defeated.
Acknowledgements
The work for this paper was supported by a grant from the Swedish Flight Safety Directorate and its Director Mr. Arne Axelsson.
Brian Abraham is offline  
24th Oct 2006, 07:07  #106 (prospector, Guest)
Well, you can either accept that the buck stops nowhere and nobody is responsible, or, as many people do, accept the opinion of Sir Geoffrey Roberts, a very experienced aviator and airline administrator. He is quoted as saying,
"I say quite flatly, the main cause was the fact a pilot failed to locate himself in relation to ground features and flew his aircraft into the side of a mountain".
 
24th Oct 2006, 12:22  #107
...Or you may like to think a little more deeply and consider the statement from the Privy Council appeal, where they said:
The Royal Commission Report convincingly clears Captain Collins and First Officer Cassin of any suggestion that negligence on their part had in any way contributed to the disaster. That is unchallenged.
Desert Dingo is offline  
24th Oct 2006, 13:37  #108
Unchallenged?

Hey DD.
If that finding is unchallenged, then plz explain why so many professional aviators have a problem with the findings? Their Lordships had no aviation training, and had to rely on third parties for perspective. In an aircraft under my command, the onus is on me to ensure the passengers I carry arrive at their destination. Or at least arrive. Somewhere.
As for causal factors, I have no doubt that there was a deal of contribution to the accident by AirNZ. It would be totally unreasonable to suggest otherwise, but as a professional, I must also accept my responsibility as P-i-C to ensure the safe conduct of my flight. There were laid down procedures for a letdown to McMurdo - these were not followed.
There were distractions on the flight deck, and these were not ignored. And, of course, there were commercial pressures to ensure the "punters" got what they paid for. HOWEVER, these two factors should not be a consideration when YOU are in charge of an aircraft. Company SOP's are there for a reason. You have to have a damn good reason to ignore them. I didn't see (or hear of) one good reason in any of the written submissions or the verbal transcript that suggested otherwise. What other pilots did on the run should have had absolutely no bearing on what happened on the day. Unfortunately, mindset determined what would happen, not professional dictum.
Any pilot who suggests he is beyond reproach is one to avoid flying with. We are only human, and as such are prone to making errors. But when we do err, let's accept that it was our responsibility to be the final stopgap. If you can't accept that as a condition of your employment, perhaps you should be looking elsewhere. It doesn't matter what a judge or jury determines, it is about accepting responsibility. I daresay if Jim had lived, he would have accepted that. It was the sort of person he was.
400R
400Rulz is offline  
24th Oct 2006, 18:51  #109
Privy Council

DD...yeah, well mate...the Privy Council doesn't live in the world I live in......a bold statement to think these blokes were experienced aviation individuals??? in what fields are we talking about???....as suggested...3rd parties provided info so that they were able to give/pass judgement.....

The combination of 400's and Brian's arguments is where I feel we must find common ground. This is not the first incident of this kind and it certainly won't be the last.....

I constantly try to find the reasoning in decisions I make, some worthy of extra training, some absolutely brilliant, more so the first than the last......but at least I'm critical of myself and of no other......

the bottom line....as PIC....don't pass the buck. You are and will be responsible for the conduct of your flight; if you're not up to it, don't take it. And as 400 stated, and I believe this, Jim would have held himself accountable whether it was his cockup or not....that is the PIC's responsibility....

that's the other problem inherent in aviation....it can never be a combination of people and problems, someone has to be the scapegoat....we need to nail "someone"....one individual has to be responsible. In cases like this, it is just not the case.

Brian, great reading mate, actually copied it and fwded to several people in my Ops dept for reading...if you don't mind......PB

Last edited by pakeha-boy; 24th Oct 2006 at 19:06.
pakeha-boy is offline  
24th Mar 2007, 05:42  #110
From "The Dominion" today, 24th March 07

Justice Mahon's royal commission of inquiry into the 1979 Erebus crash has changed the way many of the world's transport accident inquiries are handled, says retired investigator Ron Chippindale.

Mr Chippindale, who was among those who received a special service medal this week for their work on Erebus after the DC10 crash which killed 257 people, said the broad approach taken by Justice Mahon had since been adopted internationally.

Justice Mahon's inquiry pointed the finger at Air New Zealand management rather than the DC10 pilots. A section looking at possible airline management issues contributing to accidents is now included in all International Civil Aviation Organisation accident inquiries.

Mr Chippindale said that, at the time of his inquiry, the objective was to sift through the information and come to a proximate conclusion as to the probable cause - the last critical error that made the crash inevitable. He found the probable cause was the captain's decision to continue flying at a low level toward an area of poor visibility when the crew were uncertain of their position.

Justice Mahon disagreed with the investigator's approach, of attempting to find a single cause. This had long since been discarded by the legal profession. He set the pattern for allocating a number of causes without giving priority to any particular one.

Though that had now been taken up overseas, Mr Chippindale said he still took issue with Justice Mahon's exoneration of the pilots. He instead blamed the airline for changing the computer flight coordinates without telling the pilots, and the optical illusion of a "whiteout", which made it look as though they were flying over flat ice when, in fact, the ground was rising quickly.

Mr Chippindale said the pilots had instructions not to descend unless the air was "gin clear". "They had practised that in the simulator and they had written instructions to that effect.

" Had they stayed above minimum safe altitude the accident wouldn't have happened. That's called airmanship.

"They were down at just under 1500 feet at high speed and not sure of their position. In my opinion that was inexcusable." Though the airline had changed flight coordinates without telling the pilots, he wondered whether they had coordinates in the first place. "We examined the briefing which was shoddy, pathetic and hopeless. If they (the pilots) had maintained the minimum safe altitude, the wrong coordinates would not have made a difference."

Judge Mahon came to a different conclusion about the written instructions, saying that Air New Zealand had not objected to other flights going to low altitude and that the airline had therefore condoned it.

"He omitted to mention that all the other aircraft had descended in clear conditions, where this one descended through a gap in the cloud and that made a critical difference," Mr Chippindale said. "Just because a lot of people get away with going up a one way street in the wrong direction doesn't mean it's allowed."
chimbu warrior is offline  
24th Mar 2007, 10:42  #111 (prospector, Guest)
It still begs the question: even if they were in the position where they thought they were, following the track they thought they were on, why could they not see a 12,000 ft mountain that they knew, or should have known, was some 40 miles away, in an area renowned for brilliant visibility in excess of 100 miles? Sector whiteout, it would appear, explains why they never saw the mountain; but if they had never heard of sector whiteout (as no one else at the time had), how do you explain a descent without even wondering why they could not see it?

How can anyone not agree with Mr Chippindale's findings?
 
28th Sep 2007, 20:09  #112
Hello !

Anybody know if the TVNZ docudrama "Erebus-The Aftermath" is available as video tape or DVD?
13370khz is offline  
29th Sep 2007, 05:26  #113
Having spent the last hour reading this whole thread I feel inclined to align myself with prospector & co.
I believe that there were many factors leading to the crash but at the end of the day I can't go past the fact that the SOP's mandated no descent below FL160 unless VMC and no descent below 6000' ....period.
When I read John King's book on the crash a few months back, one pilot who had conducted a previous flight testified that although he had unlimited visibility, and although he was an ex-strike pilot with plenty of low-level experience, and although he dearly wanted to, when McMurdo ATC invited him to do a low run down the strip he declined and remained at 6000'. If that chap had been PIC of flight 901, would the accident have happened? I don't believe so. That is not a dig at J Collins...just what I believe to be most likely.
Jim C had his flight plan mucked up and he didn't know that, but he did know that it was a new environment for the whole crew; he had expected VHF contact earlier, and radar contact earlier, and TACAN reception; and he also knew the SOPs for letting down.
Lots of people made mistakes, Collins' mistake is easy to take a shot at. No one person is totally to blame. Many people carry a share of the blame.
cjam is offline  
30th Sep 2007, 04:58  #114
Hi all,

I have always felt that the crew were made scapegoats in this accident. However, having read 400Rulz's posts on the subject, I have changed my view somewhat and acknowledge that the crew were the last line of defence in preventing this accident. I still steadfastly stick to the view that ANZ must share responsibility for it.

I note the relevant comments about the Privy Council having little or no aviation experience, however I also note that Sir Geoffrey Roberts may have had a bias in ANZ's favour.

Regards.

VH-MLE
VH-MLE is offline  
3rd Oct 2007, 05:12  #115
Powerful logic

Brian Abraham that is valuable information that we should all reflect on for a long long time. Probably the most valuable post I have ever seen on here.
On Sept 11th 2001 I was in Adelaide, watching those aircraft fly into the World Trade Centre. I had flown down to hear Professor James Reason speak at a breakfast meeting the following morning (the only time I ever got a free feed from CASA - I had to fly 3000 km to get it).
I remember very clearly that James Reason said emphatically and repetitively that the "blame game" was counterproductive, and it is obviously damaging. It fixes nothing.
From another source, I remember the saying "I don't care who is to blame, I want to know who is going to fix it, and how"
We have to decide whether we are going on a witch hunt to find a politically convenient outcome of an investigation, or if we are genuinely trying to improve safety by finding the traps that the players fell into, and fixing them. This has been known since 1947, and it has been denied since 1947. It is being denied today. I heard a saying today "the fools are in charge, but the wise men are shouting louder".
Did Capt Collins know he was going to fly into a mountain? Of course not. He had a reputation as a competent, reliable, stable pilot, so we can reasonably assume that he considered what he was doing was safe. And when faced with information that challenged that safety he took steps to climb out. Too late. I think the CVR backs that up.
There is much evidence that Capt Collins was tricked by false information, and lack of essential knowledge about whiteout and visual tricks in that region.
It has been said that he descended below the level allowed by the SOPs. It also appears that he had a long history as a safe, responsible pilot. Why then did he descend lower than the SOPs permitted? Did he know the SOPs?
Does this tell us something about the credibility of the SOPs?
Powerful forces are involved. Air NZ was a government-owned airline, the regulator was the government, and a large English insurance company was also in the picture; all had much deeper pockets than the pilots could ever have. The legal and financial possibilities were frightening.
Mr Chippindale did what he thought was right, and it was a convenient outcome for many. It followed the normal pattern of accident investigation, but was it right?
The other pilots and the Judge did not think so, and the final outcome was decided by their lordships in another country. A country where the big insurance company lived.
Many things could have been done better, but the NZ people are not silly and they quietly re-organised things. Today most of the people who were involved at the time of the Erebus crash have long gone. I had great respect for the head of the navigation section who stood up in court and said "I did it wrong". Air NZ is not a newcomer. They have been operating (earlier as TEAL) for more than half a century.
But the inappropriate, simplistic, military style administration system still prevails, in our society, and the knowledge which has been available to us since 1947 is still denied.
It's convenient for our administrators that way. That's why we have to have independent legal systems to pull them into line sometimes. Checks and balances.
And Air NZ operates today as a safe, respected airline, as it should, because someone challenged the system.

Last edited by bushy; 3rd Oct 2007 at 06:44.
bushy is offline  
3rd Oct 2007, 06:29  #116
Removing the traps-or making them known.

One day I got the "phone call from hell".
It was ASA, and the voice said "one of your aeroplanes has had an engine failure at Woop Woop and has landed on the road. Do you want me to call the police, or will you do it?"
After I had stared at the wall for about a minute I phoned the refueller at Woop Woop and asked him to provide a vehicle, and go out and look after the pilot and passengers until I could get another aeroplane to pick them up. (refuellers know everything, and control the resources out there). Then I phoned the LAME who looked after that aeroplane.
Within five minutes I got a call from the Woop Woop refueller to say "they put AVTUR in that aeroplane". It had a piston engine and should have had AVGAS. The LAME was pleased to hear this news, but I was not.
So this is a simple case isn't it? The pilot stuffed up. So you sack him, and get on with the work. Problem solved. Isn't it? Or is it?
That's not what happened. When the pilot got back to base, I shook his hand, and thanked him for saving the aeroplane and passengers. I told him to take a week off work, and that his job was safe.
Then I talked to the Woop Woop refueller and found a couple of things I did not expect. The avtur drums and the avgas drums were stored in close proximity, in the same compound. I remember the old drums used to have large areas of bright colour as well as small lettering to indicate what grade of fuel they contained. This company painted all of the drums the same colour, and the small identifying lettering was a different colour for each grade. One layer of the swiss cheese had been removed. But the one I did not expect in Australia was THE REFUELLERS COULD NOT READ!!!
The Woop Woop refueller agreed to make sure the different types of fuel were stored at different sites, and to be more careful allocating labour (he was normally present himself, but was not on this occasion). He built more fences.
I talked to our ever helpful FOI and he promised to speak to the fuel company about more prominent labeling on fuel drums.
Sure, the pilot could have done it better. Couldn't we all? He had had a very valuable lesson (one you cannot buy) and did lots of good work for me before going on to bigger and better things with my recommendation and support.
Let him who is without sin throw the first stone.
Had I just sacked the pilot, the trap would still be out there for others, and no-one would know. This would have been the simplistic, military style convenient pseudo solution that is too common in aviation. And it would have adversely affected one of the industry's better pilots.
A much better outlook was described in 1947, and is still being denied.
bushy is offline  
3rd Oct 2007, 10:54  #117
Bushy (both posts)
werbil is offline  
3rd Oct 2007, 15:41  #118
Bushy,

Re your distinction between the work of Chippindale and Mahon - I suggest they were both right, because they were looking for different things. Chippindale was correct in stating that the actions of the pilots placed the aircraft in a situation where it suffered CFIT - and that is what CFIT is - it was not a person in Auckland who disconnected the A/P and descended the a/c to 1600'. So he was looking at the actual actions in the immediate lead-up to the impact. That's understandable, he was a copper.

However Mahon then looked at why the crew thought it was ok to act as they did - that is, what they (as competent, experienced aviators) were thinking - and he found that there was background to it which did stretch back to Auckland.

One was looking at 'what has happened'? - an a/c hit a mountain; the other was looking at 'what caused the a/c to be in a position to hit the mountain?' - human factors and the ol' chain...
Taildragger67 is offline  
4th Oct 2007, 01:26  #119
Quite right.

They were both right. In their own way. According to their own culture and standards. But thank god for Gordon Vette and the courts, and in particular Justice Mahon.

Last edited by bushy; 4th Oct 2007 at 01:57.
bushy is offline  
4th Oct 2007, 03:31  #120
From memory, in the accident report there were 5 factors required by Air NZ SOPs before you could descend below the MSA. You needed all 5, not a couple, not near enough, and they descended without all 5. And yes, the captain has the final authority to operate the aircraft as he sees fit, but the SOPs are what the AOC is legally based on in a court of law. Add in the fact that the US military did an awful lot of training to operate in that environment and had pretty strong reservations about Air NZ doing what they were doing. It was crazy to think that widebody long-haul flying anywhere else on the Air NZ network could prepare you for VFR polar ops. There were so many holes lined up the second someone in marketing first suggested "let's do Antarctica" that it doesn't bear thinking about.
stillalbatross is offline  

