Automation dependency stripped of political correctness
19th Jan 2016, 18:02
alf5071h
For practical and technical discussion, consider a hypothetical accident, asking what might be learnt, what safety concepts are involved, or what might trigger checks of current operations:
  • The contribution from regulation.
    The process of continuing airworthiness identifies a weakness in pitot systems which, if simultaneously blocked by ice crystals, generates erroneous displays of airspeed and abnormalities in other systems. The severity of the problem warrants major modification (three pitot probes per aircraft). This will take time; in mitigation, at least one pitot probe per aircraft should be changed as soon as possible, but a small residual risk remains if an unmodified aircraft encounters the rare conditions.

    In mitigation, all flight crews should have refresher training for flight without airspeed. With this, the regulatory focus has changed to human failure vice system fault; the human is seen as the threat, to be re-trained to minimise risk.
    Alternatively, and more technically correct, the threat to the aircraft is ice crystals associated with large storms. Thence, with a focus on the human as an asset, Cb identification and avoidance may be the better mitigation: it is based on normal everyday activity, emphasised with an alerting awareness of the ‘real’ threat.

  • The contribution of the training process.
    Crews were (erroneously) required to have refresher training for flight without airspeed; in any case, shouldn't qualified crews be capable without refreshing? Operators interpret and delegate the training requirements in-house or to third-party simulation. Was there any checking that the output of the training matched the need? And what need: not just the requirement, but the real threat. Did operators / trainers know of, or consider, the real threat?

    The abnormal and emergency checklist has a drill for abnormal airspeed. How might a crew decide to select this drill; what is ‘abnormal’? With three ‘independent’ airspeed systems it is assumed that with any system disagreement the odd one out will be disregarded … but what if two, or all three, are in error? (A sketch of this voting weakness follows the bulleted list below.)
    If the additional training focussed on loss of airspeed, how was this simulated? Just the removal of the airspeed display, without consequential failures of other systems / warnings? Would such training be consistent with a real event?

    The UAS drill has both memory and follow-up items, but initially they relate to different situations delineated by a preceding conditional statement: ‘if an emergency, follow memory actions, or otherwise go to subsequent actions’. (The structure of this conditional gate is sketched after the bulleted list below.)
    Consider a situation which may only be an event to an experienced captain, but might be interpreted as an emergency by a less experienced first officer.
    Why should drills need conditions before memory items? Should there ever be an unboxed item before a memory drill? What if the drills are the sole basis of training?

    The manufacturer and regulator probably knew what was meant (dynamic vs static situations), but left the ‘definition’ of emergency to each operator … thence to each crew.
    Training to identify and avoid Cbs, together with a reminder to increase the miss distance for ice crystals, could be simpler, cheaper, and directly related to the threat.
    Why focus on recovery in preference to avoidance (cf. stall training)?

    Do SOPs require control to be transferred to the Captain in a major emergency (including in simulation)? If so, first officers may never get to feel the aircraft in an abnormal condition; they only read the checklist, and thus inappropriate boxed items become the basis of their learning.

  • The contribution of behavioural shaping in normal operation.
    Normal operations involve crews detecting and avoiding large storms. Good CRM practices require shared decision making. Do captains state “we should deviate 15 deg left of the storm ahead”, seeking crew cross-check / concurrence, which is easily given if there is no gross misjudgement? What do first officers learn from this?
    An alternative is to ask “what action should we take for the storm ahead?” This requires all crew to participate with active assessment and judgement, which provides an opportunity to practise decision-making skills and to gain experience of the situation.
    In situations where a first officer deputises for a captain on long flights, are the existing views or implementation of CRM sufficient for all operations: to avoid Cbs by a reasonable margin, or by an even greater margin with ice crystals?
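
As promised above, a minimal sketch of the voting weakness, assuming a simple mid-value select of the kind commonly used for triplex sensors; the function name and the sample values are illustrative only, not any manufacturer's implementation:

```python
def mid_value_select(a, b, c):
    """Return the median of three sensor readings (knots).

    A simple triplex voter: a single outlier is outvoted by the other
    two channels, but there is no defence against a common-mode fault
    in which two or three channels agree on a wrong value.
    """
    return sorted([a, b, c])[1]

# One pitot blocked: the voter correctly discards the rogue channel.
print(mid_value_select(250, 252, 60))   # -> 250

# Two pitots blocked in the same ice-crystal encounter: the erroneous
# channels now outvote the healthy one.
print(mid_value_select(60, 62, 251))    # -> 62

# All three blocked: unanimity in error; no disagreement is flagged at all.
print(mid_value_select(60, 61, 62))     # -> 61
```

The point is not the arithmetic but the assumption behind it: ‘the odd one out will be disregarded’ only protects against independent failures, not a shared cause.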
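
And a hypothetical sketch of the UAS drill's conditional gate described above; the predicate is_emergency is precisely the part the drill leaves undefined, which is the crux of the interpretation problem (an illustration, not the actual checklist logic):

```python
def run_uas_drill(is_emergency: bool) -> str:
    """Illustrative structure of the UAS drill's conditional gate.

    The drill never defines 'emergency'; that judgement is left to each
    operator, and thence to each crew, so two pilots can legitimately
    run different branches in the same situation.
    """
    if is_emergency:
        # Boxed memory items: flown from recall, e.g. a remembered
        # pitch attitude and thrust setting.
        return "memory actions: set pitch / thrust from recall"
    # Subsequent items: read-and-do troubleshooting of the unreliable
    # indication while the aircraft remains stable.
    return "subsequent actions: read-and-do troubleshooting"

# The same event, judged differently by two crew members:
print(run_uas_drill(is_emergency=False))  # experienced captain: 'an event'
print(run_uas_drill(is_emergency=True))   # less experienced F/O: 'an emergency'
```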

An afterthought: ‘what if’ a simultaneous malfunction of engines were considered instead of pitot error (engines are only a larger pitot); would crews be required to be trained for flight without power?
Not in the case of recent restrictions on two aircraft types, which stressed Cb avoidance to minimise the risk of engine malfunction in ice crystals (software being updated).
Was this learned from the pitot events? More likely the powerplant departments knew about the problems of ice crystals before the pitot events occurred (since the 1990s), but this information would only be of value if shared, learned, and remembered. Back to the process of airworthiness and regulation: beware ivory towers.

So this has nothing to do with automation dependency ... ... exactly.