What lessons have been applied from AF447?
Old 17th Jul 2018, 07:52
safetypee
 
Paul, there may be greater value in considering what lessons might have been learnt, or, if learnt, which ones have been acknowledged, and of course why / why not.

From an engineering view, the initial technical malfunction involved all pitot probes and thence the air data computers (ADCs).
Similar problems with probes in ice-crystal conditions were known, but the relatively new probe design should have accommodated them; it did not - surprise. The main point is that many people 'did not believe' (the Meldrew effect) that three systems could fail simultaneously - they forgot the assumptions made in certification.

Assumptions become cast in stone: 'hazardous' at 10^-x (10^-7 per flight hour?). We forget that remote probabilities can still occur - so we are surprised, our disbelief turns into realisation - we forget that reality is not as we imagine.
The crew were unable to manage the situation - surprise, another assumption: that the human could do something, though not an assumption made explicitly in certification. The event was 'catastrophic', 10^-y (10^-9?). The gap between hazardous and catastrophic assumed an unspecified human contribution - being able to do something, not to be relied on, but equally not dismissed in many people's minds.
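To put rough numbers on that disbelief, here is a toy sketch (Python; every failure rate below is invented for illustration, not a certified figure for any aircraft) of why a triple failure looks impossible under an independence assumption, yet is no better than the rate of a single shared cause once the same icing hits all probes at once:

[code]
# Toy numbers only - illustrative, not certified figures for any aircraft.
P_HAZARDOUS    = 1e-7   # certification target, hazardous (the 10^-x above)
P_CATASTROPHIC = 1e-9   # certification target, catastrophic (the 10^-y)

p_probe = 1e-5          # assumed per-flight-hour failure rate of ONE pitot

# Independence assumption: three probes only fail together by coincidence.
p_triple_independent = p_probe ** 3      # 1e-15 - 'cannot happen'

# Common-cause reality: one ice-crystal encounter affects all three probes,
# so the triple-failure rate collapses to the rate of the shared cause.
p_icing = 1e-6                           # assumed encounter rate, invented
p_triple_common = p_icing

print(p_triple_independent < P_CATASTROPHIC)   # True  - safe on paper
print(p_triple_common < P_CATASTROPHIC)        # False - assumption broken
[/code]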

Yet considering the probes together with the ADCs, why should we expect humans to be able to manage multiple system malfunctions in systems which were designed to minimise human 'error', or to manage events deemed beyond both human and automatic abilities (self-comparing triple ADCs, triple probes)?

The automated cross-comparison ADC logic involved 2-out-of-3 voting (assumed to meet 10^-x), which should result in at least one valid output. However, with at least 2 of the 3 pitot inputs being unreliable, the ADC logic was unable to provide a meaningful value for airspeed display, which also affected many other systems: flight control, guidance, altimeter correction, ... Most importantly, it was unable to notify the crew that the output was unreliable (pitot input and ADC output were not flagged as invalid, or labelled 'no computed data', because the systems were working exactly as designed).
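As a sketch of how such voting passes a confident wrong answer, here is a minimal 2-out-of-3 selector in Python (my own simplification with an assumed 5 kt agreement window, not the actual ADC/ADIRU algorithm):

[code]
AGREE_KT = 5.0   # assumed agreement window, knots (invented)

def select_airspeed(a, b, c):
    """Return (value, valid). Any pair agreeing within AGREE_KT wins."""
    for x, y in ((a, b), (a, c), (b, c)):
        if abs(x - y) <= AGREE_KT:
            return (x + y) / 2.0, True   # majority view, flagged valid
    return None, False                    # no agreement: flag NCD / invalid

# Normal case: one probe drifts, it is out-voted, output stays valid.
print(select_airspeed(272.0, 271.0, 190.0))   # (271.5, True)

# AF447-style case: two iced probes agree on the SAME wrong value, so the
# voter outputs it as valid - no flag, no warning.
print(select_airspeed(190.0, 191.0, 272.0))   # (190.5, True) - wrong but 'valid'
[/code]

The failure mode of interest is the second case: the output is wrong, but nothing in the status says so.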

So the impoverished human manager was expected to manage an unexplained situation, with misleading annunciations and displays and associated systems degradation - a situation which was not considered during design and certification, or, if it was, one for which there was no advice to aid the crew in either identification or resolution.
'Unreliable Airspeed' was the resultant, not the malfunction; it is difficult to manage a complex, indeterminate malfunction by attempting to manage a single visible resultant.

Who knew? 20+ previous events had been managed by the 'crew'; did this reinforce the assumption that the human would manage future events until a technical fix was available - regulator?

Who learnt? Overtly, perhaps the aircraft manufacturer. The pitots were changed, but also a backup speed system, BUSS, is now available (which, as I understand it, is AoA-based and independent of the air data path via the ADCs).
As a technical advancement this might re-acknowledge the limits of certification, and that having experienced one 'unforeseen' failure, there could be more. Thus uncalled-for belt-and-braces changes; also listening to the industry, which identified the importance of a speed-related instrument in surprising/startling situations - an island, a safe haven, in a sea of confusion.
Good engineering practice (sales potential - get marketing to pay for the change), a marker for future system design, or at least a refresh of safety thinking. A toy illustration of the AoA idea follows below.
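On the AoA point, here is a rough sketch of the underlying idea only - that for a known weight and configuration, AoA implies a speed through the lift equation. All coefficients, areas and masses below are invented placeholders, not Airbus data, and the real BUSS presents an AoA-derived fly band rather than a computed number:

[code]
import math

RHO0 = 1.225       # sea-level air density, kg/m^3 (standard atmosphere)
S    = 122.6       # wing reference area, m^2 (invented, A320-ish)
G    = 9.81

def cl_from_aoa(aoa_deg, cl_alpha_per_deg=0.09, cl0=0.2):
    # Assumed linear lift curve: placeholder coefficients, not type data.
    return cl0 + cl_alpha_per_deg * aoa_deg

def equivalent_airspeed(aoa_deg, mass_kg):
    # Steady 1 g flight: L = W  ->  V = sqrt(2 W / (rho0 S CL))
    cl = cl_from_aoa(aoa_deg)
    v_ms = math.sqrt(2 * mass_kg * G / (RHO0 * S * cl))
    return v_ms * 1.944   # m/s -> knots

# Prints an EAS estimate (~251 kt) for these invented inputs.
print(round(equivalent_airspeed(aoa_deg=3.0, mass_kg=60000)))
[/code]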

Then there are the areas of system interface and assumptions about human behaviour, safety views of threats and their management, training, checklists, and of course CRM (if you can define that).