PPRuNe Forums - Indonesian B737 runway overrun/crash
11th Apr 2007, 01:11
alf5071h
 
Many questions - no answers

I wonder whether the investigation will ever be able to determine why the Captain failed to discontinue the approach; unless, of course, with hindsight, he is able to provide a plausible explanation – enabling us to understand the event.
There appear to be similarities with the accidents in Bangkok (747) and Burbank (737). In those events the situational cues for the safest course of action appeared obvious (with hindsight), but for some inexplicable reason the pilot either did not perceive them or they did not trigger the required action. The pilot’s behavior was irrational, out of character.
Are there circumstances where any pilot might act in this manner? Yes, based on my own experience, though fortunately that was not in the air. Anger, fixation, determination (press-on-itis) all contribute to the opportunity for our perception to fail – a realization of the situation without the appropriate response. Is this failed decision making, or is it ‘partial incapacitation’? Thus, although the Captain may have understood the situation, aided by the First Officer’s input, he was ‘unable’ to connect with the necessary change in his activity.
What will the report make of this; what recommendations, if any, can be made to improve safety?
I doubt that any specific human issue can be identified – something that can be rectified. Accidents consist of an accumulation (coincidence) of many factors and failures, none of which alone caused the accident, yet without any one of them it could have been avoided. If we cannot identify a cause, then how can it be eliminated? Perhaps it would be better to strengthen the defensive safety barriers that should prevent the ‘coincidences’ forming – but what are they, and which ones do we strengthen?

Significant emphasis is placed on the value of cross-checking and monitoring, but as indicated in this thread, and in many other accidents, the process is prone to error.
Will a First Officer ever be an effective safety monitor?
I have doubts; the same human weaknesses apply to the pilot monitoring as to the pilot flying. Even when errors are identified, will any form of intervention be effective? Have there been any successful FO takeovers in erroneous situations (ignoring incapacitation)? Yet there have been instances of failed ‘takeover control’ interventions with serious safety consequences.
Many FO discussions relate to ‘taking control’. Is this as taught – are FOs led to believe that this is their primary safety function? From a regulatory standpoint, a newly qualified FO is safe: s/he can fly the aircraft in the event of P1 incapacitation, can monitor and detect errors from ‘the norm’ (SOPs), and can participate in the operation of the aircraft. But does s/he have sufficient experience to intervene (or not) when a more experienced Captain is dealing with a situation that is unusual (to the inexperienced FO), yet due to high workload or fixation fails to communicate intentions/perceptions? When exactly is an FO sufficiently experienced to be an effective monitor and a means of intervention?

Solutions may come from better alerting and initial intervention – getting the pilot flying to change the focus of attention by providing compelling situation displays with safe courses of action. It is unlikely that these qualities will be found in a new FO, so we need either new technology (previous post) or very experienced FOs – but how? One viewpoint is given in Eliminating "cockpit-caused" accidents: by changing the process of monitoring so that the most experienced pilot acts as the monitor; this also enables FOs to gain experience more quickly.

So the Captain would be a better safety monitor, though not completely error-proof. There will, however, be operations that require the Captain to fly the aircraft; with appropriate use of automation the Captain should still be able to remain the primary monitor (task allocation), with the FO as the very necessary backup for the more difficult operations. There are still weaknesses, but the change offers improvements (less risk of error) over current operations.

Although the FO in this accident appears to have followed the PACE process (Probe, Alert, Challenge, Emergency), his interjections probably started too late – ultimately too late to take over even if he could have: experience, culture, human nature, etc. The ‘error’ – flight-path deviation/stabilization – appears to have started much earlier in the approach, during the VNAV segment or even with the FMS setup? Again, there is some similarity with other accidents; the chain of ‘coincidence’ starts well before the ‘accident situation’. The defensive skill here, required by pilots and management, is to ‘see it coming’, whatever ‘it’ is.

Many questions - no answers, only possibilities.

Revisiting the Swiss cheese model of accidents.