PPRuNe Forums - View Single Post - AF 447 Thread No. 7
Old 6th Apr 2012, 02:23
  #1280
Diagnostic
 
Join Date: Aug 2011
Location: Near LHR
Age: 57
Posts: 37
@Old Carthusian,

Hi,

As with my reply to PJ2, I've tried to reduce the quotes a little, but if you think I've destroyed the context, then I'm sorry; please point out what's wrong.

Originally Posted by Old Carthusian
I am afraid we are still faced with the question of why?
Why what specifically? I think you're asking "why didn't the AF447 crew follow the UAS procedure", but I'm not sure if you're asking a bigger "why"? Sorry if I'm missing something obvious. I'll assume you're referring to the UAS procedure question here, rather than "why the zoom climb" etc.

Originally Posted by Old Carthusian
It does still come down to the individual crew.
Do you mean it's always an individual crew decision, or it was only a problem with the AF447 crew, or ...? Sorry, again, I can't grasp your specific meaning.

Originally Posted by Old Carthusian
It is something that I learned flying replica biplanes (note that I have never flown big transport aircraft but I feel what I learned has some relevance). - know your machine. Know your drills. There is no escape from this.
I completely agree that this should be the objective. However, are we expecting too much of pilots, asking them to be both pilots and flight engineers? With aircraft as complex as the A330, the "know your machine" mantra, while it remains the objective, is realistically impossible to achieve with the same depth as you know your biplanes. The more complex the machine, the more ways it can go wrong, or at least behave "unexpectedly". For example, just remember how many pilots here were unaware that the stall warning is disabled below 60 knots IAS.

Originally Posted by Old Carthusian
The crews who didn't initially recognise UAS were still able to successfully deal with the problem.
As I said to PJ2, I can't agree that that was an acceptable result, meaning we should just blame the AF447 crew and stop looking deeper. From reading that BEA report, it looks to me that controlled flight sometimes continued in spite of, not because of, what some of the 13 crews did (e.g. premature AP reconnection, with incorrect airspeed being used). I don't class that as "successfully" dealing with the problem by any measure, except that they didn't crash (see my previous comments on that).

Originally Posted by Old Carthusian
One crew (AF447) wasn't and followed a totally inappropriate behaviour pattern.
I agree about their behaviour, but they were not the only crew that failed to identify the UAS.

Originally Posted by Old Carthusian
Evidence indicates that the safeguards expected in a transport aircraft were not utilised but were for some reason ignored. This is, I am afraid, a crew issue - not a machine issue.
I politely disagree that it is so clear-cut. If you make the machine complex enough, and add in human imperfections, then you could get a man/machine interface which will be OK for some people, some of the time, and fail to "get through to" different people or at different times. IMHO that would be, in part, a machine (design) issue.

To suggest that this is (only) a crew issue implies that you believe the machine is perfect. And yet a UAS situation was reportedly not identified at all by 4 out of 13 other crews. Don't you think that might be pointing to it being too difficult for typical crews to reliably recognise a UAS, using the current recognition method being taught?

Originally Posted by Old Carthusian
It also relates to this particular crew not the others.
I don't understand exactly what "it" refers to in that sentence, so I can't comment.

Originally Posted by Old Carthusian
We have to be very careful in trying to find a 'hard' solution when the cause may well lie in the 'soft' factors.
I am not trying to find a "hard" (i.e. systems) solution; sorry if you thought that I was, as I can't have been clear enough. The "soft" (human) factors clearly played a large part in the crash sequence as a whole.

I'm suggesting that it is possible to mitigate some inevitable "soft" (i.e. human) factors (e.g. no human is perfect; we all have circadian rhythms, limited attention spans, etc.) by improving some system behaviours, to better support the pilots when things go wrong (i.e. tell them clearly about a UAS event, rather than leaving them to work it out from hints). That is in addition, of course, to extra training, more hand-flying for the crews, and so on.