PPRuNe Forums - View Single Post - Monitoring the standby ADI at critical phases of flight
Old 6th Feb 2017, 12:59
alf5071h
 
Advice like 'sit on your hands', etc., generally requires a conscious choice; it's another context-sensitive judgement, which startle may degrade.
It might be possible to train pilots for some subconscious inactivity, but this could conflict with situations that require quick action.

Systems safety partly considers this with different levels of alerting - red, amber, etc. - and to a lesser extent in abnormal drills. However, both of these depend on interpreting what is required and when to react, which in turn depends on training and on how a pilot perceives the situation - the real context, not that imagined by regulation or trainers.

The reliability of the wider technical 'system' (man, machine, environment) has also improved, as has the ability of components to self-check, reducing the generation of misleading information (a weakness in this accident, and in the PC12, AMS 737, and MAD MD80 takeoff-config accidents).
There has been some improvement in human interfaces by providing advance warning of hazards or system degradation - strategic awareness vs tactical decisions: GPWS to EGPWS, wind-shear alerting to predictive systems, hard failure vs alternate or reversionary control modes, stall warning vs stick push.

However, there have been fewer advances in comparator systems, which remain at the triplex or dual-dual standard (a factor in this accident and in AF447). This is not a technical problem but a choice in certification specification; it's where the regulators draw the line of safety, 10^-x.
Conversely, operational regulation continues to promote pilot responsibility for safety - 'pilots are not expected to fail' - which is neither realistic nor achievable. This belief promotes the old view of human error: blame the human, prescribe more training; or even an attitude of resignation - 'what more can we do, we are safe enough'.
The industry is unable to define an acceptable boundary for pilots' contributions to safety; instead the question is sidelined to another judgement of 'good enough'. But this is difficult to judge, particularly if the boundary of human performance is unknown - or, worse still, if it is believed that humans should not fail.

There are few opportunities for improving a highly reliable industry; existing methods are limited by cost or judgement of the required level of safety.
Cost-reliability might favour more automation, but this could reduce the valuable human ability to adapt.
Advancing technology could improve monitors and cross-comparison - focusing automation on the aspects of human weakness - but this implies revised certification, or at least checking that the assumed levels of reliability are actually being achieved (this accident).
Resistance to startle implies improved knowledge and experience - don't rush - and a better understanding of risk and risk assessment - the assumptions in certification - reducing perceived fear.

All of the above require some change in the way in which we think about safety, particularly how investigators and regulators view human performance.
Many of the views expressed above suggest otherwise, with the risk of little or no safety improvement.