PPRuNe Forums - AF 447 Thread No. 12
4th Jun 2017, 13:09 - #1433
alf5071h
 
A critical weakness after an event is that we continue to seek blame - the human - rather than considering what could (and should) be learnt. The focus also remains on the crew at the time of the event rather than on influencing factors such as training and regulation; and then there is automation, which of course is itself specified by humans.

Consider why the regulator required more training for unreliable airspeed (UAS), a procedure already trained for, rather than addressing the real threat - ice crystals. Compare this with the safety activity for engine roll-back over a similar timescale: warn crews to avoid CBs by a greater margin, use the weather radar, do what is done every day, but do it better.

How was the training conducted, and did the outcome match the intent? Where UAS procedures involve immediate (memory) actions, were these over-emphasised at the expense of the follow-up actions more applicable to the cruise conditions of this and previous events?
Was the training simulation realistic, with multiple effects and degraded systems and controls, or was only airspeed considered? And was the malfunction pre-announced, giving little benefit in training for the surprise and startle reported in other events?

If the immediate actions require manoeuvring, are Captains expected to take control, so that other crew members have no opportunity to experience degraded control systems (if indeed that was simulated)?
Did the requirement for additional training actually contribute to this event?

And there are simple lessons to be learnt. Shared decision making for avoiding weather: is it better to ask what avoidance should be taken, forcing other crew members to actively consider the options, as opposed to "is 15 left OK?", which is easily agreed with but has no learning value for cruise pilots or future Captains.

Many automation lessons have been learnt. The pitot probes were already being changed before the event, which removes many of the subsequent system issues. Revised assumptions about the human ability to manage flight without airspeed indication resulted in the Back-Up Speed Scale (BUSS) - not a requirement, but chosen after the fact.

So maybe we are learning that the human is not as good as assumed, but the human is still very capable. We should not expect a human to solve an automation malfunction, particularly where the function was automated because the potential situation was one which a human might not be able to resolve (catch-22) - a dual pitot error and the ADC comparison in a triple-mix system.
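To illustrate that catch-22, a minimal sketch only - not the actual Airbus air-data comparison logic, and the figures are invented - of how a triple-redundant voter can be defeated when two sources fail in the same way:

# Hypothetical sketch - not the real air-data voting logic.
# A simple triple-redundant voter rejects the source that deviates most from
# the other two. If two pitot probes ice up in the same way, their erroneous
# readings agree with each other, and the single healthy probe becomes the outlier.

def vote(a: float, b: float, c: float) -> float:
    """Select the median of three air-data sources (a common voting scheme)."""
    return sorted([a, b, c])[1]

healthy = 275.0              # knots - the one probe still reading correctly
iced_1, iced_2 = 90.0, 92.0  # two probes blocked by ice crystals, agreeing with each other

selected = vote(healthy, iced_1, iced_2)
print(f"Voted airspeed: {selected} kt")   # 92.0 kt - the good sensor is outvoted
# The comparison logic has done exactly what it was designed to do, yet the
# crew are then expected to untangle the very situation it could not resolve.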

Have we still to learn that we should not build complex systems which rely on human intervention in case of failure, and then blame the human for failing to resolve the issue?
Blame, or error in this sense, is the gap between the expectation of human performance before the event and that judged after it; the difference, and the root problem, lies in our initial assumption, which of course may have been unforeseeable at the time.