PPRuNe Forums - View Single Post - Norfolk Island Ditching ATSB Report - ?
29th Sep 2012, 04:11
Brian Abraham
 
I couldn't let the thread die without posting something the iconoclasts (it's all Dominic's/the crew's fault) might mull over. Those whose olfactory organs are so distressed as to be unable to smell the cheese might wish to log off now.

Reconstructing human contributions to accidents:
The new view on error and performance


Sidney W. A. Dekker

Abstract

Problem


How can we reconstruct the human contribution to accidents? Investigators easily take the position of retrospective outsider, looking back on a sequence of events that seems to lead to an inevitable outcome, and pointing out where people went wrong. This does not explain much, however, and may not help prevent recurrence.

Method and results

In this paper I examine how investigators can reconstruct the human contribution to accidents in light of what has recently become known as the new view of human error. The commitment of the new view is to relocate controversial human assessments and actions back into the flow of events of which they were part and which helped bring them forth, to see why assessments and actions made sense to people at the time. The second half of the paper is dedicated to one way in which investigators could begin to reconstruct people's unfolding mindsets.

Impact on industry

In an era where a large portion of accidents gets attributed to human error, it is critical to understand why people did what they did, rather than judging them for not doing what we now know they should have done. This paper contributes by helping investigators avoid the traps of hindsight, and by presenting a method with which investigators can begin to see how people's actions and assessments could actually have made sense at the time.
Introduction

In human factors today there are basically two different views on human error and the human contribution to accidents. One view, recently dubbed "the old view" (AMA, 1998; Reason, 2000), sees human error as a cause of failure. In the old view of human error:

• Human error is the cause of most accidents.

• The engineered systems in which people work are made to be basically safe; their success is intrinsic. The chief threat to safety comes from the inherent unreliability of people.

• Progress on safety can be made by protecting these systems from unreliable humans through selection, proceduralization, automation, training and discipline.

The other view, also called "the new view", sees human error not as a cause, but as a symptom of failure (Rasmussen & Batstone, 1989; Woods et al., 1994; AMA, 1998; Reason, 2000; Hoffman & Woods, 2000). In the new view of human error:

• Human error is a symptom of trouble deeper inside the system.

• Safety is not inherent in systems. The systems themselves are contradictions between multiple goals that people must pursue simultaneously. People have to create safety.

• Human error is systematically connected to features of people's tools, tasks and operating environment. Progress on safety comes from understanding and influencing these connections.

The new view of human error represents a substantial movement across the fields of human factors and organizational safety (Reason, 1997; Rochlin, 1999) and encourages the investigation of factors that easily disappear behind the label "human error"—long-standing organizational deficiencies; design problems; procedural shortcomings and so forth. The rationale is that human error is not an explanation for failure, but instead demands an explanation, and that effective countermeasures start not with individual human beings who themselves were at the receiving end of much latent trouble (Reason, 1997) but rather with the error-producing conditions present in their working environment. Most of those involved in accident research and analyses are proponents of the new view. For example:
"...simply writing off ... accidents merely to (human) error is an overly simplistic, if not naive, approach.... After all, it is well established that accidents cannot be attributed to a single cause, or in most instances, even a single individual." (Shappell & Wiegmann, 2001, p. 60).

However, our willingness to embrace the new view of human error in our analytic practice is not always matched by our ability to do so. When confronted by failure, it is easy to retreat into the old view. We seek out the "bad apples" and assume that with them gone, the system will be safer than before. An investigation's emphasis on proximal causes ensures that the mishap remains the result of a few uncharacteristically ill-performing individuals who are not representative of the system or the larger practitioner population in it. It leaves existing beliefs about the basic safety of the system intact.

The pilots of a large military helicopter that crashed on a hillside in Scotland in 1994 were found guilty of gross negligence. The pilots did not survive—29 people died in total—so their side of the story could never be heard. The official inquiry had no problems with "destroying the reputation of two good men", as a fellow pilot put it. Potentially fundamental vulnerabilities (such as 160 reported cases of Uncommanded Flying Control Movement or UFCM in computerized helicopters alone since 1994) were not looked into seriously (Sunday Times, 25 June 2000).

Faced with a bad, surprising event, we seem more willing to change the players in the event (e.g. their reputations) than to amend our basic beliefs about the system that made the event possible. To be sure, reconstructing the human contribution to a sequence of events that led up to an accident is not easy. As investigators we were seldom—if ever—there when events unfolded around the people now under investigation. As a result, their actions and assessments may appear not only controversial, but truly befuddling when seen from our point of view. In order to understand why people could have done what they did, we need to go back and triangulate and interpolate, from a wide variety of sources, the kinds of mindsets that they had at the time. But working against us are the inherent biases introduced by hindsight (Fischhoff, 1975) and the multiple pressures and constraints that operate on almost every investigation—political as well as practical (Galison, 2000).

In this paper I hope to make a contribution to our ability to reconstruct past human performance and how it played a role in accidents. I first capture some of the mechanisms of the hindsight bias, and observe them at work in how we routinely handle and describe human performance evidence. Trying to avoid these biases and mechanisms, I propose ways forward for how to reconstruct people's unfolding mindsets. Most examples will come from aviation, but they, and the principles they illustrate, should apply equally well to domains ranging from driving to shipping to industrial and occupational safety.

The mechanisms of hindsight

One of the safest bets we can make as investigators or outside observers is that we know more about the incident or accident than the people who were caught up in it—thanks to hindsight:

• Hindsight means being able to look back, from the outside, on a sequence of events that led to an outcome we already know about;

• Hindsight gives us almost unlimited access to the true nature of the situation that surrounded people at the time (where they actually were versus where they thought they were; what state their system was in versus what they thought it was in);

• Hindsight allows us to pinpoint what people missed and shouldn't have missed; what they didn't do but should have done.

From the perspective of the outside and hindsight (typically the investigator's perspective), we can oversee the entire sequence of events—the triggering conditions, its various twists and turns, the outcome, and the true nature of circumstances surrounding the route to trouble. In contrast, the perspective from the inside of the tunnel is the point of view of people in the unfolding situation. To them, the outcome was not known, nor was the entirety of surrounding circumstances. They contributed to the direction of the sequence of events on the basis of what they saw on the inside of the unfolding situation. For investigators, however, it is very difficult to attain this perspective. The mechanisms by which hindsight operates on human performance data are mutually reinforcing. Together they continually pull us in the direction of the position of the retrospective outsider. The ways in which we retrieve human performance evidence from the rubble of an accident, represent it, and re-tell it, typically sponsor this migration of viewpoint.

Mechanism 1: Making tangled histories linear by cherry-picking and re-grouping evidence

One effect of hindsight is that "people who know the outcome of a complex prior history of tangled, indeterminate events, remember that history as being much more determinant, leading 'inevitably' to the outcome they already knew" (Weick, 1995, p. 28). Hindsight allows us to change past indeterminacy and complexity into order, structure, and oversimplified causality (Reason, 1990). In trying to make sense of past performance, it is always tempting to group individual fragments of human performance which prima facie point to some common condition or mindset. For example, "hurry" to land is a leitmotif extracted from the evidence in the following investigation, and that haste in turn is enlisted to explain the errors that were made:

"Investigators were able to identify a series of errors that initiated with the flightcrew's acceptance of the controller's offer to land on runway 19…The CVR indicates that the decision to accept the offer to land on runway 19 was made jointly by the captain and the first officer in a 4-second exchange that began at 2136:38. The captain asked: 'would you like to shoot the one nine straight in?' The first officer responded, 'Yeah, we'll have to scramble to get down. We can do it.' This interchange followed an earlier discussion in which the captain indicated to the first officer his desire to hurry the arrival into Cali, following the delay on departure from Miami, in an apparent attempt to minimize the effect of the delay on the flight attendants' rest requirements. For example, at 2126:01, he asked the first officer to 'keep the speed up in the descent'… (This is) evidence of the hurried nature of the tasks performed." (Aeronautica Civil, 1996, p. 29)

But the fragments used to build the argument of haste come from over half an hour of extended performance. The investigator treats the record as if it were a public quarry to pick stones from, and the accident explanation the building he needs to erect. The problem is that each fragment is meaningless outside the context that produced it: each fragment has its own story, background, and reasons for being, and when it was produced it may have had nothing to do with the other fragments it is now grouped with. Also, behavior takes place in between the fragments. These intermediary episodes contain changes and evolutions in perceptions and assessments that separate the excised fragments not only in time, but also in meaning. Thus, the condition, and the constructed linearity in the story that binds these performance fragments, arises not from the circumstances that brought each of the fragments forth; it is not a feature of those circumstances. It is an artifact of the investigator. In the case described above, "hurry" is a condition identified in hindsight, one that plausibly couples the start of the flight (almost 2 hours behind schedule) with its fatal ending (on a mountainside rather than an airport). "Hurry" is a retrospectively invoked leitmotif that guides the search for evidence about itself. It leaves the investigator with a story that is admittedly more linear and plausible and less messy and complex than the actual events. Yet it is not a set of findings, but of tautologies.

Mechanism 2: Finding what people could have done to avoid the accident

Tracing the sequence of events back from the outcome—that we as investigators already know about—we invariably come across joints where people had opportunities to revise their assessment of the situation but failed to do so; where people were given the option to recover from their route to trouble, but did not take it. These are counterfactuals—quite common in accident analysis. For example, "The airplane could have overcome the windshear encounter if the pitch attitude of 15 degrees nose-up had been maintained, the thrust had been set to 1.93 EPR (Engine Pressure Ratio) and the landing gear had been retracted on schedule" (NTSB, 1995, p. 119). Counterfactuals prove what could have happened if certain minute and often utopian conditions had been met. Counterfactual reasoning may be a fruitful exercise when trying to uncover potential countermeasures against such failures in the future.

But saying what people could have done in order to prevent a particular outcome does not explain why they did what they did. This is the problem with counterfactuals. When they are enlisted as explanatory proxy, they help circumvent the hard problem of investigations: finding out why people did what they did. Stressing what was not done (but if it had been done, the accident would not have happened) explains nothing about what actually happened, or why.

In addition, counterfactuals are a powerful tributary to the hindsight bias. They help us impose structure and linearity on tangled prior histories. Counterfactuals can convert a mass of indeterminate actions and events, themselves overlapping and interacting, into a linear series of straightforward bifurcations. For example, people could have perfectly executed the go-around maneuver but did not; they could have denied the runway change but did not. As the sequence of events rolls back into time, away from its outcome, the story builds. We notice that people chose the wrong prong at each fork, time and again—ferrying them along inevitably to the outcome that formed the starting point of our investigation (for without it, there would have been no investigation).

But human work in complex, dynamic worlds is seldom about simple dichotomous choices (as in: to err or not to err). Bifurcations are extremely rare—especially those that yield clear previews of the respective outcomes at each end. In reality, choice moments (such as there are) typically reveal multiple possible pathways that stretch out, like cracks in a window, into the ever denser fog of futures not yet known. Their outcomes are indeterminate, hidden in what is still to come. In reality, actions need to be taken under uncertainty and under the pressure of limited time and other resources. What from the retrospective outside may look like a discrete, leisurely two-choice opportunity to not fail, is from the inside really just one fragment caught up in a stream of surrounding actions and assessments. In fact, from the inside it may not look like a choice at all. These are often choices only in hindsight. To the people caught up in the sequence of events there was perhaps not any compelling reason to re-assess their situation or decide against anything (or else they probably would have) at the point the investigator has now found significant or controversial. They were likely doing what they were doing because they thought they were right, given their understanding of the situation and their pressures. The challenge for an investigator becomes to understand how this may not have been a discrete event to the people whose actions are under investigation. The investigator needs to see how other people's "decisions" to continue were likely nothing more than continuous behavior—reinforced by their current understanding of the situation, confirmed by the cues they were focusing on, and reaffirmed by their expectations of how things would develop.

Mechanism 3: Judging people for what they did not do but should have done

Where counterfactuals are used in investigations, even as explanatory proxy, they themselves often require explanations as well. After all, if an exit from the route to trouble stands out so clearly to us, how was it possible for other people to miss it? If there was an opportunity to recover, to not crash, then failing to grab it demands an explanation. The place where investigators look for clarification is often the set of rules, professional standards and available data that surrounded people's operation at the time, and how people did not see or meet that which they should have seen or met. Recognizing that there is a mismatch between what was done or seen and what should have been done or seen—as per those standards—we easily judge people for not doing what they should have done.

Where fragments of behavior are contrasted with written guidance that can be found to have been applicable in hindsight, actual performance is often found wanting; it does not live up to procedures or regulations. For example, "One of the pilots…executed (a computer entry) without having verified that it was the correct selection and without having first obtained approval of the other pilot, contrary to procedures." (Aeronautica Civil, 1996, p. 31).

Investigations invest considerably in organizational archeology so that they can construct the regulatory or procedural framework within which the operations took place, or should have taken place. Inconsistencies between existing procedures or regulations and actual behavior are easy to expose when organizational records are excavated after-the-fact and rules uncovered that would have fit this or that particular situation. This is not, however, very informative. There is virtually always a mismatch between actual behavior and written guidance that can be located in hindsight (Suchman, 1987; Woods et al., 1994). Pointing out that there is a mismatch sheds little light on the why of the behavior in question. And for that matter, mismatches between procedures and practice are not unique to mishaps (Degani & Wiener, 1991).

Another route to constructing a world against which investigators hold individual performance fragments is finding all the cues in a situation that were not picked up by the practitioners, but that, in hindsight, proved critical. Take the turn towards the mountains on the left that was made just before an accident near Cali, Colombia in 1995 (Aeronautica Civil, 1996). What should the crew have seen in order to notice the turn? They had plenty of indications, according to the manufacturer of their aircraft:
"Indications that the airplane was in a left turn would have included the following: the EHSI (Electronic Horizontal Situation Indicator) Map Display (if selected) with a curved path leading away from the intended direction of flight; the EHSI VOR display, with the CDI (Course Deviation Indicator) displaced to the right, indicating the airplane was left of the direct Cali VOR course, the EADI indicating approximately 16 degrees of bank, and all heading indicators moving to the right. Additionally the crew may have tuned Rozo in the ADF and may have had bearing pointer information to Rozo NDB on the RMDI" (Boeing, 1996, p. 13).

This is a standard response after mishaps: point to the data that would have revealed the true nature of the situation. Knowledge of the "critical" data comes only with the omniscience of hindsight, but if data can be shown to have been physically available, it is assumed that it should have been picked up by the practitioners in the situation. The problem is that pointing out that it should have been does not explain why it was not, or why it was interpreted differently back then (Weick, 1995). There is a dissociation between data availability and data observability (Woods et al., 1994)—between what can be shown to have been physically available and what would have been observable given the multiple interleaving tasks, goals, attentional focus, interests, and—as Vaughan (1996) shows—culture of the practitioner.

There are also less obvious or not documented standards. These are often invoked when a controversial fragment (e.g. a decision to accept a runway change (Aeronautica Civil, 1996), or the decision to go around or not (NTSB, 1995)) has no clear pre-ordained guidance but relies on local, situated judgment. For these cases there are always "standards of good practice" which are based on convention and putatively practiced across an entire industry. One such standard in aviation is "good airmanship", which, if nothing else can, will explain the variance in behavior that had not yet been accounted for.

While micromatching, the investigator frames people's past assessments and actions inside a world that s/he has invoked retrospectively. Looking at the frame as overlay on the sequence of events, s/he sees that pieces of behavior stick out in various places and at various angles: a rule not followed here; available data not observed there; professional standards not met over there. But rather than explaining controversial fragments in relation to the circumstances that brought them forth, and in relation to the stream of preceding as well as succeeding behaviors which surrounded them, the frame merely boxes performance fragments inside a world the investigator now knows to be true. The problem is that this after-the-fact world may have very little relevance to the actual world that produced the behavior under investigation. The behavior is contrasted against the investigator's reality, not the reality surrounding the behavior in question at the time. Judging people for what they did not do relative to some rule or standard does not explain why they did what they did. Saying that people failed to take this or that pathway—only in hindsight the right one—judges other people from a position of broader insight and outcome knowledge that they themselves did not have. It does not explain a thing yet; it does not shed any light on why people did what they did given their surrounding circumstances. The investigator has gotten caught in what William James called "the psychologist's fallacy" a century ago: he has substituted his own reality for that of his object of study.

It appears that in order to explain failure, we seek failure. In order to explain missed opportunities and bad choices, we seek flawed analyses, inaccurate perceptions, violated rules—even if these were not thought to be influential or obvious or even flawed at the time (Starbuck & Milliken, 1988). This search for people's failures is another well-documented effect of the hindsight bias: knowledge of outcome fundamentally influences how we see a process. If we know the outcome was bad, we can no longer objectively look at the behavior leading up to it—it must also have been bad (Fischhoff, 1975; Woods et al., 1994; Reason, 1997).