PPRuNe Forums - Norfolk Island Ditching ATSB Report
29th Nov 2012, 03:39
#647
Creampuff
 
Professor Hollnagel makes some interesting observations. All of the bolding is mine:
Even though accident investigations ostensibly aim to find the “root cause”, the determination of a cause reflects the interests of the stakeholders as much as what actually happened.

Finding a cause is thus a case of expediency as much as of logic. There are always practical constraints that limit the search in terms of, e.g., material resources or time. Any analysis must stop at some time, and the criterion is in many cases set by interests that are quite remote from the accident investigation itself. [A] cause is always a judgement made in hindsight and therefore benefits from the common malaise of besserwissen [German: ‘knowing better’, i.e. hindsight wisdom]. More precisely, a cause – or rather, an acceptable “cause” – usually has the following characteristics:

- It can unequivocally be associated with a system structure or system function (people, components, procedures, etc.).

- It is possible to do something to reduce or eliminate the cause within accepted limits of cost and time. This follows partly from the first characteristic, which in a sense is a necessary condition for the second.

- It conforms to the current “norms” for explanations. This in particular means that the cause corresponds to the most popular theory at the time. For instance, before the 1960s it was uncommon to use “human error” as a cause, while it practically became de rigueur during the 1970s and 1980s. Later on, in the 1990s, the notion of organisational accidents became accepted, and the norm for explanations changed once more.
In the article from which the above quote is extracted, Professor Hollnagel also refers to a study undertaken as part of the second phase of the Human Error in Air Traffic Management project, carried out by Dedale SA for Eurocontrol. After that discussion, he states:
The issue here is, of course, not so much whether the subjects acted correctly but rather that the observers often would classify actions wrongly because they could not see the situation from the subject’s point of view. The lesson to be learned is that an action should not be classified as an “error” only based on how it appears to an observer.
And as part of the concluding discussion:
The conclusion is that the term “human error” should be used carefully and sparingly – if it is to be used at all. In the long term it may be prudent to refrain from considering actions as being either correct or incorrect, firstly because these distinctions rarely apply to the action in itself but rather to the outcome, and secondly because they imply a differentiation that is hard to make in practice. The alternative is to acknowledge that human performance (as well as the performance of technological systems) is always variable.

The consequence of acknowledging the existence of this variability is that many so-called “human errors” can be seen as the outcome of successful performance adjustments, which include ways of saving attention, managing workload, making decisions based on heuristics (in the sense of naturalistic decision making), etc. As long as these adjustments meet the socio-technical expectations of acceptable results, they are seen as being goal-oriented, effective, and reflecting the intelligence of human beings. Moreover, “errors” or “poor decision-making” resulting from such intentionally sub-optimal actions are often detected and recovered in time. Because these adjustments usually are successful they become the norm, and are therefore also used when the conditions – in retrospect – are unfavourable. It is thus only when the detection and/or recovery for some reason fails that they become “human errors”.