PPRuNe Forums - View Single Post - AF 447 Thread No. 7
Old 7th Apr 2012, 13:51
safetypee
Much of the current debate is becoming wound up by hindsight (the spinning hamster wheel). In such cases there is often an inadvertent drift towards ‘blame and train’, or towards fixing the problems of one specific accident and thus overlooking generic issues.

Whilst a different form of computation may have prevented this accident, it is unlikely that the industry could think of all possible situations, and it may even judge some as too extreme to consider – problems of human judgement, cost effectiveness, and ‘unforeseeable’ scenarios.
Similarly a different pitot design could have prevented the situation developing, but this action was in hand. In hindsight, the need for at least one modified pitot (and associated crew action) indicates poor judgement in the use of previous data (no blame intended – just a human condition), yet this was perhaps tempered by practicality (ETTO).
Furthermore, knowledge of the icing conditions from research and previous engine problems could have prompted a temporary restriction on flying in or close to such conditions.
The generic issues here are the failure to learn from previous, often unrelated, events, and the failure to judge the risks associated with the identified threats – the current state of knowledge, or the application of that knowledge.

None of the above involves the crew; the objective is to protect the sharp end from the ambiguities of rare or novel situations such that their inherent human weaknesses are not strained by time critical situations.
Where crews do encounter these rare situations, their limited human ability is an asset (human as hazard or human as hero). Protection should not, and often cannot, be achieved by more and more SOPs. Human performance will vary with experience, knowledge, and capability. We cannot expect the detection and assessment of rare situations to be consistently good; we hope that the assessments and actions are sufficient, and thus safe. But in balance with those ‘miraculous saves’ celebrated by the industry, we have to suffer a few weak performances as part of the norm (again, no blame intended) – we are not all the same.

Many aspects of the high-level generic view are summarised by J. Reason – “you can’t always change the human, but you can change the conditions in which they work”. However, this view should not be restricted to the immediate human-system interface; there are many more facets to the SHEL model of HF.
Another view from the same author is that ‘We Still Need Exceptional People’. This implies a need for continuous learning at all levels of the industry – not just more crew training, but real learning in design, regulation, operations, crew, and accident investigation.

This accident – the situation, and the activities before, during, and after the event – represents a rare and novel situation, perhaps even an ‘unforeseeable’ one, yet from each aspect there is something we must learn. But how can we ensure that we learn the ‘right’ lessons?