PPRuNe Forums - TAM A320 crash at Congonhas, Brazil
27th Aug 2007, 01:01
#1881
alf5071h
 
The discussion drifts back to automation and system design, but these may not be significant contributors to the “cause” of the accident.
If an accident is described as “a collection of seemingly unconnected contributing factors, where the absence of any one would have avoided it”, then from the information presented so far there appears to have been a high probability of an overrun even without the human-thrust lever problems.

It can be assumed that the crew were familiar with moving the thrust levers rearwards and selecting reverse – they did it on every normal landing. Failure to retard the TLs would probably be detected by the inability to select reverse / the lack of reverse – as on other aircraft types. I discount the ‘Retard’ call, as auditory sensing is weak and is usually the first sense to deteriorate with high workload, stress, fatigue, etc.
The specific difference in this landing was that one reverser was inoperative. The crew procedure required both TLs to be selected to reverse in the same way as in normal operations; this preserves the normal cues for detecting / monitoring erroneous operation. The ‘error’ appears to originate in the use of a non-standard or old procedure. Although the lack of reverse was in the briefing, the exact procedure was not discussed, which deprived the handling pilot of the opportunity to visualise his intended actions (recall and refresh from memory), and deprived the monitoring pilot both of an understanding of the ‘plan’ (what he would be monitoring) and of an opportunity to interject if the briefed procedure was incorrect.
The possibility that neither crew member knew of the revised procedure remains open, but they had landed in this configuration previously – what procedure was used then? Thus the error may have involved a failure to recall / monitor actions after touchdown, probably due to human factors issues, which could have been exacerbated by the stress of a demanding (risky) situation.

Post #1893 reports an earlier A320 incident; considering the previous ATR incident and an earlier excursion with a 737, the indications were that this operation was one of high risk. Only a small change in circumstances could have resulted in any aircraft overrunning: heavier rainfall, worn tyres, a little more rubber on the runway, a higher weight, a longer touchdown point, less wind, and of course an MEL item affecting retardation.
So why didn’t everyone get upset about the 737 excursion? Because it was non-fatal? Were the crew fortunate or skilled in being able to turn onto the grass, preventing the drop onto the road – perhaps the only difference between an ‘overrun’ incident and a fatal accident? Did the other A320 (#1893) ‘depend’ on reverse for a safe landing, or did the 737 use a computed landing distance based on reverse (cf. Midway)?
Thus, from this perspective, it could be argued that the human-thrust lever interface contributed only to the severity of the accident and was not “a causal” contribution. The MEL’ed reverser and its associated procedure appear to be more important.

So why didn’t the 737 overrun (or the previous ATR and F100 incidents) ring the safety alarm bells, trigger a risk assessment, and prompt a reconsideration of operations during the temporary conditions affecting the runway?
Why did the TAM overrun generate 1900+ PPRuNe posts vs 11 for the 737? Are we misjudging the important aspects of safety, being biased by fatalities, or incorrectly focussing on the ‘bright’ or emotive aspects of automation?
The apparent discarding of near misses without learning from them will perpetuate the risks for others, and, as in this case, with more severe results. Are we inadvertently thinking that the 737 event or similar “couldn’t happen to us”, or “I wouldn’t make that mistake”; and then, when we err, do we look to the tools for ‘blame’ and not to the human contribution?
The risk assessment reflected in this thread appears to be biased; if true, is this due to a failure in our beliefs, knowledge, training, or safety management culture, or just another facet of human behaviour?