PPRuNe Forums > Flight Deck Forums > Tech Log

AF 447 Thread No. 7

Tech Log The very best in practical technical discussion on the web


Old 6th Apr 2012, 04:00
  #1281 (permalink)  
 
Join Date: Oct 2011
Location: Lower Skunk Cabbageland, WA
Age: 69
Posts: 354
@OC

As Diagnostic said,
I politely disagree that it is so clear-cut. If you make the machine complex enough, and add in human imperfections, then you could get a man/machine interface which will be OK for some people, some of the time, and fail to "get through to" different people or at different times. IMHO that would be, in part, a machine (design) issue.
Yes. The original idea that set OC off on this vector was the suggestion that UAS be an aural warning. No, they "shouldn't need" that, BUT!

If I'm sitting in the back doing my knitting, I want every possible chance of surviving any unprepared pilot's mistake. If they didn't recognize or respond correctly to UAS, maybe they would have done if the airplane had shouted at them.

Back to Swiss cheese: should we not plug every possible (known) hole in it? As someone already pointed out, the closing of any one hole in the Swiss you-know-what may have prevented this horrible crash. If it costs $$$, well, I don't really give a .
Organfreak is offline  
Old 6th Apr 2012, 04:44
  #1282 (permalink)  
 
Join Date: Feb 2001
Location: UK
Posts: 647
I have another wild suggestion: when the aircraft gives up flying itself because of UAS, instead of an audible alert (we know audible alerts don't always get through when the crew is in cognitive overload, as with the stall warning here), how about intermittently clearing the glass cockpit of other material (which for the AF447 crew did not help them at all) and putting up a big message: "You have UAS. At this height, use power and pitch" (or, in other appropriate circumstances: "Use memory items and QRH").

(I say another wild idea, because of my post on 1.11.11, post 596 on final crew conversation thread, page 30:

Two psychological factors are still open, and I see no easy way to overcome them, nor have the experts here put forward solutions that I have seen:

Highly stressed people can be oblivious to audible warnings. What has been described as the "cavalry charge" happened when the FOs were handed manual control, which they had never practiced, in circumstances they didn't understand or agree about (the PNF showed some signs of awareness);

And the reason I followed this from the outset through all threads – when a stressed pilot forms the wrong conclusion, he/she tends to stay with it regardless of ineffective attempts to correct the wrong problem. I have seen this in my field (gliding safety and accident analysis) – only test pilots, or rare individuals, can keep a clear head and systematically fault find.

[snip]

A wild suggestion . . . [snip]


After the system gives up and hands a basketful of trouble to the pilots, to hand-fly their way out of without any training (or only inappropriate training), the "system" should know enough that the aircraft then stalled and stayed stalled, even when (it thought) the speed fell below 60 kt. How about, for one second out of every four, the glass screen blocking out everything else and displaying:

"STALL! You are staying stalled! Get out of it!"

Would it be beyond the wit of man to even devise a "computer knows best" mode (it will recover, as the pilots have not realised) before it's too late?

Told you it was wild.)
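For what it's worth, the duty-cycle display idea above (one second out of every four) can be sketched in a few lines. This is purely illustrative: there is no real avionics API here, and the function names, period and message are my own invention based on chrisN's description.

```python
# Hypothetical sketch of chrisN's idea: while a stall condition
# persists, override the normal display for one second out of every
# four with a plain-language message. Not any real avionics interface.

def overlay_active(t_seconds: float, period: float = 4.0, on_time: float = 1.0) -> bool:
    """Return True during the 1-second window of each 4-second cycle."""
    return (t_seconds % period) < on_time

def display_frame(t_seconds: float, stalled: bool, normal_frame: str) -> str:
    """Choose what the glass screen shows at time t_seconds."""
    if stalled and overlay_active(t_seconds):
        return "STALL! You are staying stalled! Get out of it!"
    return normal_frame

# During a persistent stall the message appears at t = 0..1, 4..5, 8..9, ...
assert display_frame(0.5, True, "PFD") == "STALL! You are staying stalled! Get out of it!"
assert display_frame(2.0, True, "PFD") == "PFD"
```

The design question the thread raises remains: whether interrupting the primary display would help a saturated crew or simply remove the attitude information they need.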
----------------
chrisN is offline  
Old 6th Apr 2012, 05:01
  #1283 (permalink)  
 
Join Date: Jul 2009
Location: The land of the Rising Sun
Posts: 165
Diagnostic
To clarify my rather hasty response: the point I am trying to make is that a change in the interface is not necessarily going to result in future avoidance of this kind of accident. It is a measure which may well be useful, but it cannot take the place of training, CRM, use of SOPs and a proper culture in the airline. If you recall, the crew of AF447 ignored the stall warning - what guarantee do you have that they would have paid any attention to a UAS warning? The evidence of their actions suggests it may well have been a waste of time. I referenced Korean Airlines: one of their accidents involved a captain ordering his co-pilot to tear out the stall warning klaxon because it was bothering him.

Now to touch on the point of the other events - all resulted in recovery and a return to normal flight. The outcome is the important thing here, not necessarily the process. But even if a warning system is devised, this is not a rapid process - it needs careful consideration and testing so that it can be properly deployed.
I understand that you would like to avoid the fact that this is a crew issue - I would too, but one has to be honest and look at the issue dispassionately. It is possible that Air France has been developing a far too casual culture with respect to safety, and that is more of a concern than the existence or absence of a UAS warning system. There have been a number of worrying incidents, of which AF447 was the worst, which suggest this is the case. I am also a little disturbed by your comments on 'knowing your machine' - it is imperative that a professional tries to know as much about his aircraft as he can. Not every detail, but he at least knows how to use SOPs, and does use them.
Old Carthusian is offline  
Old 6th Apr 2012, 05:14
  #1284 (permalink)  
 
Join Date: Aug 2011
Location: Grassy Valley
Posts: 2,123

Perhaps a different word, then. Strictly speaking, it is not a WARN, anyway, it is a STATUS. A CUE.

The computer is sampling three sensors continuously, and when they are out of limits, the computer fails different combinations.

Giving the pilots a heads up that the computer senses trouble is not a WARNING.
As I say, it is a STATUS/REPORT.

At some point, 447 lived and died with the duff AIRDATA. The pilots did not know until 17 seconds after the autopilot quit, whilst the a/c was maneuvering.

What would have happened if the computer had flashed SPEEDS/FAULT? Forget this crew, what of the next one?
The condition was ill addressed at the time. There are no excuses three years on.
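Lyman's description of the monitoring (sample three sources continuously, reject those out of limits) can be sketched as a simple median voter. This is illustrative only, not the actual Airbus ADR logic; the threshold, names and status strings are invented for the example.

```python
# Illustrative sketch only - not the actual Airbus ADR monitoring logic.
# Three airspeed sources are compared; channels that disagree with the
# median by more than a threshold are rejected, and a STATUS cue (in
# Lyman's sense - a report, not a warning) is raised when fewer than
# two channels agree.

from statistics import median

def monitor_airspeed(cas_knots, limit=20.0):
    """Return (indices_of_valid_channels, status_cue) for three CAS readings."""
    m = median(cas_knots)
    valid = [i for i, v in enumerate(cas_knots) if abs(v - m) <= limit]
    status = "SPEEDS OK" if len(valid) >= 2 else "SPEEDS/FAULT"
    return valid, status

# Three healthy probes agree: all channels valid.
print(monitor_airspeed([272.0, 271.0, 273.0]))
# Two probes iced over together: the median follows the bad pair, and the
# single good probe (index 2) is the channel that gets rejected.
print(monitor_airspeed([120.0, 118.0, 272.0]))
```

The second case shows the weakness relevant to AF447: a voter is defeated when multiple probes fail the same way at the same time, which is one reason a status cue alone cannot substitute for crew recognition.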
Lyman is offline  
Old 6th Apr 2012, 07:52
  #1285 (permalink)  
PJ2
 
Join Date: Mar 2003
Location: BC
Age: 72
Posts: 2,427
Hi PJ2,

Thanks very much for your comments all through these threads, and for the opportunity to discuss. I've tried to minimise the quotes, while still (hopefully) keeping the context - if you feel this has distorted things, then sorry & please correct me.

Quote:
Originally Posted by PJ2
On an increase in automated responses, I can understand the logic of such an argument (the BUSS relies upon this logic), but what concerns me from a pilot's p.o.v. is long-term reduced situational awareness and the need for in-depth understanding of high-altitude, high-Mach No. swept-wing flight, (old fashioned "airmanship", I guess) because it is still humans who are doing the piloting.

I do understand this p.o.v. and as I said, I'm not yet convinced about totally automated responses, but at least an explicit UAS warning seems (with hindsight) a clear improvement, doesn't it?

After all, if there is an engine fire, the systems (I don't know if it's the FADEC or others) detect the excessive temperature and alert you, as the pilot, to that specific problem. (I flew several of these in a B737 simulator, some years ago - that bell gets the heart racing ). The system does not just say "Hey, something is wrong - I know what the problem is, but you have to work it out from some gauges on the panel - and hurry up!". Why give the crew a specific fire warning (or low fuel warning, or any of the other warnings where the system highlights the specific issue), and not give the crew a specific UAS warning?

Put that way, possibly. It took until the industry had microprocessor power even to indicate a straightforward engine flameout/failure. Up until glass, the most significant failure on an aircraft, and one which we practice every simulator session, had no warnings - one just knew the failure by the noise if catastrophic, by the behaviour of the airplane, and by examining the engine indications for fuel flow, oil pressure/temperature, N1/N2/N3 rotation, high vibs and so on.

These are good arguments for solid indications and computerized drills which monitor crew actions. The Airbus has ECAM drills for engine failures, fires and severe damage. For the longest time, an engine failure was an aircraft performance/system-based failure which, it had been taken for granted, a crew would recognize and handle using the established, trained, checked memorized, QRH or ECAM/EICAS procedures, as well as standard CRM communications techniques.

Not to re-argue the matter, but we know that none of these crew actions were done with AF447 and for this reason (which I have elaborated upon in earlier posts), I concur with O.C., that this is primarily a performance/crew accident. There certainly are training-and-standards issues here as well, and there are airmanship, system knowledge and CRM issues. The HF Report will, (and should) be thick and deeply researched.

They did not respond to the ECAM, nor to the stall warning - so would they have responded to a voice, aural or visual warning, or a combination of all three? The question is open and certainly discussable.

In your B767 example a few posts ago, is it sensible (and optimal) to make the crew "jump through the mental hoops" to try to work backwards from the "Rudder ratio" EICAS caution, to the underlying UAS event?

In a word, yes. In fact the failure confirmed our initial assessment that we had an airspeed indication problem because other systems were responding. Fortunately it was only the one pitot-static system and we knew the architecture of the standby system, (and so trusted it...I'm not sure I'd trust an ISIS in the same way.)

Quote:
Originally Posted by PJ2
I offer this view out of a concern for what remains inexplicable, and that is the instant decision to pitch a transport aircraft up at such high pitch-rates (increasing 'g'-loading to 1.55g) to such high pitch attitudes and keep the aircraft there.

Completely agreed - it's currently inexplicable, due to the lack of justification/explanation voiced by the PF. As I think several here have already said, the human factors part of the final report will make interesting reading, but it may be more like "educated guesswork" in this area, than any of us would want.

However, if the PF had correctly announced and followed the UAS procedure, then they would both have been focussed on the 5 degree pitch target instead, wouldn't they - at least possibly?

Or focussed on maintaining what are obviously successful pitch and thrust values until they take a moment to slow down and gather thoughts. Nothing in transport flying requires immediate action except the rejected takeoff, a TCAS, GPWS or Stall warning - taking the time to sort things out permits the mind to "re-discipline and re-focus" itself from "cruise flight and the next waypoint" to an abnormal or emergency procedure, then the PF calls it and they get on with the drill or checklist. It is absolutely standard cockpit discipline, period.

Quote:
Originally Posted by PJ2
I would be interested in either data or an argument that this indicates an interface problem, for, as you are, I am open to any information that shows that normal training and SOPs for this event are inadequate in some circumstances and because of obscurity are best left to automated responses.

As I see it, BEA Interim Report 2 page 51 onwards provide evidence for either:

a) too difficult to recognise UAS via the existing interface, or/and
b) insufficient training to recognise UAS via the existing interface.

IMHO these are related - the less obvious the interface to report a UAS (and to also encourage that the UAS procedure should be followed) to the crew, the more training, skill, concentration, ongoing crew practice will be needed. Or do you have a different view?

No, I don't have a different view, and I see training as the answer to many of these issues which crop up. I was dismayed when I watched recurrent PPCs drop to 3 hrs, and equally dismayed when they went to 18-month periods instead of 12-month periods. I don't know what the standard is today, but these are corporate bean-counting decisions made by non-flyers, and fighting them in today's environment is very challenging (as in "show us where this is a problem").

More details below...

Quote:
Originally Posted by PJ2
In re your observation, "Several other crews did not recognise & handle UAS correctly.", I don't recall specifically where there were untoward outcomes due recognition and handling issues with other crews in other events but again am open to new information.

Agreed that I've seen nothing regarding untoward outcomes in that BEA report, but IMHO that's not what the metric being measured should be.

I concur, but only in part. Sometimes nothing happening when an abnormal occurs means something. It is part of this discussion which, as with other significant issues, this industry is having. The other fascinating discussions we know about... whither automation? Whither ultra-long-haul and fatigue issues? Whither instrumentation and presentations? etc.

Quote:
Originally Posted by PJ2
[...] to my recollection, (and I have been wrong on more than a few things before!), the UAS events haven't been problematic as most crews "did nothing" and the airspeed returned within a minute or less.

That is my understanding too (apart from duration - BEA mention up to 3min 20sec of continuous invalid speeds). But consider what we learn from the BEA report about the various crews lack of following UAS procedures, and what that means about the chances of a potentially different outcome next time.

As I understand it, one of the reasons for crew procedures is precisely to prevent different outcomes depending on crew, time of day, visibility, and all the other variables which a crew has to deal with. Once we see lack of adherence to procedures, don't we get closer to the chances of "bad things" happening? That has been my experience, both with flying and with other highly-controlled situations.

Of the 13 UAS events where the BEA had sufficient detail to know what the crew did / did not do:

"Four crews did not identify an unreliable airspeed"

and

"For the cases studied [which I interpret as being all 13 cases] the recording of the flight parameters and the crew testimony do not suggest application of the memory items in the unreliable airspeed procedure:
* The reappearance of the flight directors suggests that there were no disconnection actions on the FCU;
* The duration of the engagement of the Thrust Lock function indicates that there was no rapid autothrust disconnection actions then manual adjustment on the thrust to the recommended thrust;
* There was no search for display of an attitude of 5°."

So as I read it, all 13 crews "got it wrong" to a greater or lesser extent, with a third of them (4 out of 13) failing to identify the UAS at all, and all 13 failing to do the memory items. Isn't that just a time bomb waiting for a crew to get things badly wrong in the future, when they are presented with an unrecognised UAS at the "wrong time" (sleepy, poor CRM, "startle factor" etc.)? If they get distracted trying to diagnose a non-existent instrument fault (which is really just temporary UAS), couldn't that potentially lead to another AF447-like event? IMHO, based on reading other accident reports where distraction was a factor - yes.
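As an aside, the memory items quoted from the report are simple enough to encode as a mechanical check, roughly what the BEA did when comparing recorded flight parameters against the procedure. A sketch only, with hypothetical field names and an invented pitch tolerance:

```python
# Illustrative only: encode the UAS memory items the BEA lists and check
# a (hypothetical) parameter snapshot against them. Field names and the
# tolerance are invented for the sketch, not taken from any real system.

MEMORY_ITEMS = {
    "flight_directors_off": True,  # FDs disconnected on the FCU
    "autothrust_off": True,        # rapid A/THR disconnect, thrust set manually
    "pitch_target_deg": 5.0,       # search for a 5 degree attitude
}

def memory_items_applied(snapshot, pitch_tolerance=1.0):
    """True only if the snapshot shows all three memory items done."""
    return (snapshot["flight_directors_off"] == MEMORY_ITEMS["flight_directors_off"]
            and snapshot["autothrust_off"] == MEMORY_ITEMS["autothrust_off"]
            and abs(snapshot["pitch_deg"] - MEMORY_ITEMS["pitch_target_deg"])
                <= pitch_tolerance)

# A crew that left the FDs engaged and never sought 5 degrees of pitch:
print(memory_items_applied({"flight_directors_off": False,
                            "autothrust_off": False,
                            "pitch_deg": 12.0}))   # False
```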

Using your comments, I will re-read IR2's relevant sections. I have to admit that I don't recall finding these comments, but quite frankly every time I read these three reports I find something new. I'd like to think it isn't me but...

Quote:
Originally Posted by PJ2
The ability to "look through" the automation and decide for oneself what the airplane is doing, what it needs and why, is being lost because it is being supplemented and when supplements occur, practice and therefore skill, then thinking and knowing atrophy

I understand that concern, and I would much much prefer ATPL pilots to be better trained, better paid and kept highly-skilled.

Pay well and good people will come. Infatuation with technology, and here with "automation" is almost always based upon the wrong reasons - money, (as in one less crew member) rather than utility, safety and reliability. Computers remove people from direct contact with things because they model the world so well. In aviation, that model is pretty good and serves us well providing we can ignore it without result. Either in the cockpit or the boardroom, once direct contact "with the environment" is lost, the potential for uninformed tactical decisions (mistakes) increases although one usually finds that the priority of financial decisions may be enhanced.

However, are you saying that aircraft system designers shouldn't help flight crew by giving an explicit warning for UAS, even though the systems know that there is "just" a UAS event (which has a procedure to follow) and not some other instrumentation fault (which needs to be investigated, diagnosed, coped with, etc.)?

No, I'm not saying that - not because such indications aren't in some way required, but because the industry has determined that engine failures "don't need explicit warnings" (they broadcast failure in other ways). I think there is a strong argument for some kind of indicating system which sorts out for the pilot the variations of failure which are possible - right now they are in the FOM. (About thread five or so, someone posted a very good page from an A300 FOM showing the varying effects of a blocked pitot, a blocked pitot drain hole, and variations on blocked static ports, etc.) We know the airspeed acts as an altimeter if the pitot tube is blocked but the static hole is open. A clear way of assessing such a problem through CRT graphics and commands would be a better option than just warning of "UAS" in cruise. It has also been suggested that better ways be found to derive airspeed from GPS for such failures.
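On the GPS point: a crude cross-check of airspeed from GPS groundspeed and a wind estimate (say, the FMS forecast wind) is easy to sketch. Illustrative only, not any certified algorithm; note it yields a rough TAS, not the CAS the crew actually flies.

```python
# Hedged sketch: cross-check indicated airspeed using GPS groundspeed
# and a wind estimate. Names and numbers are illustrative only.

import math

def estimated_tas(gs_knots, track_deg, wind_from_deg, wind_knots):
    """Rough true airspeed: groundspeed vector minus wind vector."""
    trk = math.radians(track_deg)
    # Wind *from* a direction blows *toward* the reciprocal heading.
    wnd = math.radians(wind_from_deg + 180.0)
    gx, gy = gs_knots * math.sin(trk), gs_knots * math.cos(trk)
    wx, wy = wind_knots * math.sin(wnd), wind_knots * math.cos(wnd)
    ax, ay = gx - wx, gy - wy   # air vector = ground vector - wind vector
    return math.hypot(ax, ay)

# 450 kt groundspeed on a 030 track with a 50 kt direct headwind from 030
# gives roughly 500 kt TAS:
tas = estimated_tas(450.0, 30.0, 30.0, 50.0)
assert abs(tas - 500.0) < 0.1
```

Such an estimate is only as good as the wind model, which is why it could serve as a gross-error cross-check on the pitot-derived speeds but not as a primary source.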

As with the AoA gauge, none of this adds any safety if it isn't trained and checked, and for that it also has to be "STC'd" and certified by the FAA and (possibly?) regulated.

I have a view about how an automated response might be considered, in a way that still keeps the crew "in the loop", but I'd like to initially focus on giving explicit UAS warnings (to try to drive the following of UAS procedures).

I have always loved automation because it is indeed a tremendous innovation which makes aviation much safer. But it is not the third pilot, unless one treats automation in the same way one uses CRM with one's crew members. One ought to be able to "look through" what the autoflight system is doing, and disconnect and hand-fly if one doesn't like it. But I had crew members who would refuse to hand-fly and weren't confident in disconnecting the autothrust. I thought that a sad thing to admit.

Quote:
Originally Posted by PJ2
I have had kindly pointed out to me a recent conference at the Royal Aeronautical Society entitled "The Aircraft Commander in the 21st Century". There is an excellent videoed presentation from this conference by Captain Scott Martin (Gulfstream experimental test pilot) on the very topic at hand.

Many thanks - I look forward to viewing that when I'm back with a normal internet connection.

It is a pleasure engaging in this kind of discussion. ;-)
PJ2 is offline  
Old 6th Apr 2012, 12:48
  #1286 (permalink)  

Plastic PPRuNer
 
Join Date: Sep 2000
Location: Cape Town
Posts: 1,890
"Hi Guys, sorry to bother you but my airspeed sensors are momentarily unreliable and so autopilot and autothrottle will disconnect."

"Flight data was nominal when this happened and SOP in this situation is to maintain appropriate pitch and power while we sort it out."

"Would you like to do that yourselves, or shall I take care of it?"

Seriously, wouldn't that (or a more formal equivalent) have been a more helpful introduction to the situation than a flashing UAS alert & prompt disconnect?

After that, the "startle" response, inadequate training, mode confusion, poor CRM and other factors seem to have led to the fatal outcome.

Last edited by Mac the Knife; 7th Apr 2012 at 03:40.
Mac the Knife is offline  
Old 6th Apr 2012, 19:47
  #1287 (permalink)  
 
Join Date: Jun 2009
Location: florida
Age: 77
Posts: 1,134
And then we worry about too much automation?

Check this out:

http://www.nytimes.com/2012/04/04/bu...e-airport.html

So crew becomes more and more and more of a monitor. But who does what when data is lost or is unreliable?

I can just see Hal transmitting to others, "Our GPS data is unreliable and we're handing the jet over to Dave".

Years ago the whiz kids wanted a ground-avoidance feature built into our Viper FBW system. The thing would pull us out of a dive when close to the ground. We humans fought the idea and explained that there were conditions where we might violate the coded criteria - maybe press a bit to ensure a good hit, or maybe trying to get real low for escape and evasion.

To appease the whiz kids we agreed to a big flashing "X" on the HUD. A year later one really aggressive pilot flew right through the big flashing "X" and augered in. At least he didn't have 200 SLF with him.
gums is offline  
Old 6th Apr 2012, 20:12
  #1288 (permalink)  
 
Join Date: Jun 2009
Location: Bedford, UK
Age: 66
Posts: 1,224
PJ2, thanks. you are generous with your time. Much appreciated.
Mr Optimistic is offline  
Old 6th Apr 2012, 21:30
  #1289 (permalink)  
 
Join Date: Feb 2011
Location: Nearby SBBR and SDAM
Posts: 873
On surprises

Hi,

I have found in the 447 threads surgeons, anthropologists, organists and many others, i.e. not just "technically oriented minds". This was a surprise to me.

Proactive technicians (I include myself) prefer to "anticipate" rather than be caught by surprises.

Surprises coming from her (the machine) sometimes, perhaps many times, challenge us. In my life I have had surprises from girlfriends, machines of many types (including GA and old birds), my wife and many women.

Surprises can be "controlled" by Redundancy.

I will "throttle back" my commenting on the man-machine interface in this thread, saying:

1) Considering the factual information we could access, and the results of the high-synergy discussions we had;

2) Considering the likelihood that the 447 crew had some surprises in the last 4 minutes of their lives, working hard;

3) Considering that, despite all their efforts, they expressed some "surprise" about SPEEDS;

4) Considering that they never were able to know the AoA while they were falling;

5) Considering they expressed some surprise that their efforts didn't succeed;

6) Considering the surprise (to many) at the location of the 447 wreckage, so near the LKP;

7) Considering the surprise that the protected plane stalled;

8) Considering how fast the plane was doomed;

9) Considering the surprise, for us, of learning they barely understood what really occurred;

10) Considering the surprise expressed near the end;

And last but not least, considering the surprising (though comprehensible) reaction to an AoA indicator and to an (early warning) UAS indicator here in the thread. And after grounding my conclusion in an anthropologist's thought:

I would suggest we consider:

1) A resource to provide EARLY WARNING of impending "factors" like UAS, AoA and also, perhaps, nearing REC MAX.

2) A study of the best way to implement the three resources above in a "man-machine interface" context. Whether it should be aural, a flashing display, redundant, etc. must be settled by R&D on that (IMO important) issue. We can only guess here; it is not our "problem".

Having said that, I appreciated the comments from Chris Scott, mm43, Bear, MB, gums, PJ2, OC, Of, chrisN, rgb, jcjeant, CONF iture, lomapaseo, A33Zab, OK465, BOAC, bubers44, HN39, CJ, safetypee, DW, Linktrained, rh, TD and recently Diagnostic, and from some others, including via PM channels. I took all comments into account.

I frankly think there are some important issues to be discussed on the "interface", which is why I started, some time ago, a thread on the issue.

Having tried to show some points I consider relevant to 447, I will concentrate on them in the Man machine interface and anomalies thread.

It will be another surprise if the final report doesn't consider the influence of this A/C's interface on the HF aspects when addressing the "surprising" actions of the PF, the PM and even the Captain on the last flight of F-GZCP.

My interest is in "safety", and my only agenda here on PPRuNe is to try to contribute to this important objective: aviation safety.

With a minimum of surprises, always when possible.

Last edited by RR_NDB; 6th Apr 2012 at 22:53. Reason: Text impvmt, fixing site (apparently) editing tool glitches
RR_NDB is offline  
Old 6th Apr 2012, 21:58
  #1290 (permalink)  
 
Join Date: Oct 2011
Location: Lower Skunk Cabbageland, WA
Age: 69
Posts: 354
RR_NDB:
I found in 447 threads surgeons, anthropologists, organists and many others i.e. not just "technically oriented minds". This was a surprise for me.
I am the guilty organist mentioned above. However, I do indeed have a "technically oriented mind," since I am also an electronic and mechanical technician in the service of repairing old Hammond organs. They are extremely complicated (but old-school) devices. I was also a theatrical lighting designer for thirty years, an extremely technical art. But enough about me.
Organfreak is offline  
Old 6th Apr 2012, 23:05
  #1291 (permalink)  
 
Join Date: Jul 2009
Location: DFW
Age: 57
Posts: 246
This topic advanced far beyond my expertise long ago (I just drive the things), so I ceased commentary. I still have nothing to add to the technical aspects some have discussed, but would like to add this for your consideration: Airbus training is focused on the wrong targets when considering aircraft control. One is judged by his/her knowledge of the protections, with little or no emphasis placed on degraded modes. I would be willing to wager that this ill-fated crew could do a quite nice job of describing the limits of protection, but had little idea of the capabilities of degraded flight laws and had little training on abnormal ops pertaining to them.

If any person of influence is reading, I would like to request that the training focus be changed from "it protects you and flies like any other airplane" to "you need to know how to deal with it when the automation fails you". I would also suggest that the regulators include UAS procedures on type rating checks. They could simply remove one of the numerous instrument approaches and replace it with the UAS drill. (On type rides and PCs, we spend hours droning around watching the autoflight system perform redundant approaches, which proves little more than our ability to push the approach button.)
TTex600 is offline  
Old 6th Apr 2012, 23:24
  #1292 (permalink)  
 
Join Date: Oct 2011
Location: invalid value
Posts: 39
Diagnostic,

In interim report no. 2, in the section on previous unreliable airspeed events, and in particular the 13 events that were examined more closely, I see a factual listing of the technical effects and of the crews' handling/actions, and nothing more. I see no judgment on whether the crews' actions were right or wrong, and I see no intent by the BEA to imply any.

I think you are reading too much into it, and I see no basis for characterising the handling of those 13 events as inadequate, wrong, mishandled or incorrect.
Hamburt Spinkleman is offline  
Old 6th Apr 2012, 23:45
  #1293 (permalink)  
 
Join Date: Aug 2011
Location: Near LHR
Age: 53
Posts: 37
@Old Carthusian,

Thanks for the clarification. I'll reply to your points out-of-order as you introduced an important point later in your reply:

Originally Posted by Old Carthusian
I understand that you would like to avoid the fact that this is a crew issue
In that case, unfortunately you misunderstand me - I'm not trying to avoid anything. I don't know how much clearer I could be about my views on this than in the last two paragraphs of my previous reply to you. I'll try one more time...

The crew clearly made mistakes (as you have said); many of the exact causes for those mistakes we don't (and never will) know for sure (as we can't ask what they thought at the time) although I sincerely hope that the BEA HF group can add a useful interpretation (i.e. educated guess) of the limited available data, in the final report.

However I believe that simply saying "this is a crew issue" and not looking deeper for likely causes of incorrect crew behaviour, and then fixing those causes, would be doing a disservice in trying to prevent another tragedy. One of the areas which seems relevant to me, and where we have evidence of other crew behaviour for comparison with AF447, is in the area of UAS recognition, and that's where I have been specifically focussing in my recent comments, when this subject recently re-surfaced.

Of course UAS is not the whole story for AF447, but UAS is where things started to go wrong for them (i.e. they responded with a zoom climb instead of flying pitch & power), so IMHO it deserves some focus. In the past, several professionals here have kindly noted that their airlines are improving training for high-altitude UAS. But why limit the improvements to training, when the aircraft could also give a less obfuscated indication of UAS? Don't we want the pilots to receive clear warnings, to encourage the recognition of UAS and hence increase the likelihood that they would then follow the UAS procedure?

Originally Posted by Old Carthusian
To clarify my rather hasty response - the point I am trying to make is that a change in the interface is not necessarily going to result in a future avoidance of this kind of accident.
Very true - I can't (and won't attempt to) prove that a specific UAS warning would result in future avoidance, and you can't prove the opposite. However, on balance, the widespread problems shown by the BEA analysis of those 13 UAS events make me believe this is an area with a systemic problem; and since the PF was doing his "zoom climb" instead of following the UAS procedure, then had he followed that procedure instead, we might not be having this discussion at all.

Originally Posted by Old Carthusian
It is rather a measure which may well be useful but cannot take the place of training, CRM, using SOPs and a proper culture in the airline.
I completely agree with you (I bet you never thought I'd say that). All those things are also needed. My point is (as Organfreak and RR_NDB have also said): why not try to reduce or remove all relevant holes in the Swiss cheese? Even you have listed multiple topics in your comment above - so we're agreed that this is not a "fix one thing and it'll never happen again" accident, so why stop at the obvious human factors? The man/machine interface needs to be designed to communicate clearly with humans who are having a bad day, or are just back from their holidays, or are in the low part of their circadian rhythm, or... All pilots are human, even the best.

Originally Posted by Old Carthusian
If you recall the crew of AF447 ignored the stall warning - what guarantee do you have that they would have paid any attention to a UAS warning?
That's a very interesting topic, so I'll tell you my current hypothesis about why I think they (especially the PF) ignored the stall warning in AF447 (dysfunctional CRM may have prevented the PNF from voicing his opinion, even if he didn't want to ignore the stall warning). But first, you are asking for a guarantee - that's unreasonable. I could ask you for a guarantee that they wouldn't pay attention to a UAS warning, but I won't do that because it's an unreasonable thing for me to do and it's impossible for you to guarantee that either. So let's not ask for guarantees and instead be open-minded to possible improvements, OK?

My current hypothesis is that the UAS situation was not recognised as being specifically that (especially not by the PF; I'm unsure about the PNF). Instead they believed they had a multiple-instrument problem which needed to be diagnosed from square one - while the PF also had to hand-fly at high altitude in turbulence and Alt2. From that misinterpreted starting point, they couldn't make sense of the different (and varying) IAS readings as relating to a single failed component (because there was no single failed component!), and kept trying to understand their readings, which became even harder to fit into a mental model once stalled (even though all 3 were eventually consistent), as they don't train for being fully stalled.

Therefore my hypothesis is that the stall warning was being deliberately ignored because they (especially the PF) thought it was a malfunction - part of the same instrumentation problem which was affecting the IAS.

If they hadn't "gone off at a tangent" trying to diagnose what was a temporary UAS, and had instead received a clear warning from the aircraft like "This is a UAS situation, all my pitot probe pressures are different so I have to disconnect the AP - recommend you fly pitch & power, which for this altitude is X/Y", then would the zoom climb and all the subsequent problems still have happened? Neither of us knows the answer, but anything which stopped that zoom climb would have been an improvement over what actually happened.
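To make the idea concrete, here is a toy sketch (Python, purely illustrative - the agreement threshold and the pitch/power table are invented placeholders, not Airbus figures) of the kind of explicit, unambiguous advisory I mean:

```python
# Toy sketch of an explicit UAS advisory generator. All numbers are
# invented placeholders for illustration, NOT real Airbus values.

def uas_advisory(ias_readings_kts, altitude_ft):
    """Return an explicit advisory string if airspeed sources disagree."""
    spread = max(ias_readings_kts) - min(ias_readings_kts)
    if spread <= 20:                    # placeholder agreement threshold
        return None                     # sources agree: no advisory
    # placeholder memory-item table: altitude band -> (pitch deg, %N1)
    if altitude_ft >= 10000:
        pitch_deg, thrust_n1 = 5.0, 85.0
    else:
        pitch_deg, thrust_n1 = 10.0, 90.0
    return ("UNRELIABLE AIRSPEED - sources disagree "
            f"(spread {spread:.0f} kt). AP/ATHR disconnecting. "
            f"Recommend pitch {pitch_deg:.0f} deg / thrust {thrust_n1:.0f}% N1.")

# e.g. one probe iced at cruise altitude:
print(uas_advisory([275, 80, 270], 35000))
```

The point is not the numbers but the message shape: it names the condition, says what the automation is doing, and tells the crew what to fly - no "join the dots" required.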

So that's my current hypothesis. I could be wrong (partly or completely) - we'll never know for sure either way, although I'm happy to be guided by the professionals here and the HF part in the final BEA report.

Originally Posted by Old Carthusian
Now to touch on the point of the other accidents [I think you mean the other 13 UAS events in the BEA report??] - all resulted in recovery and a return to normal flight. The outcome is the important thing here not necessarily the process.
That's clearly your view, as you've said it several times. I disagree and instead believe that the process is at least as important as the outcome. After all, how many of those other pilots are just "a bad day" away from mis-identifying a UAS, and doing something else which is dangerous? Unless you present some compelling evidence that not following the UAS procedure is safer than following it, then I don't see me changing my view, although I'm happy to seriously consider whatever is causing you to dismiss the importance of following the UAS procedure.

Originally Posted by Old Carthusian
But even if a warning system is devised this is not a rapid process - it needs careful consideration and testing so that it can be properly deployed.
I completely agree - but I don't see those as reasons not to start the ball rolling on investigating this, especially as other problems which you have highlighted (e.g. airline culture) may take even longer to improve.

Last edited by Diagnostic; 7th Apr 2012 at 01:28. Reason: Speling :)
Diagnostic is offline  
Old 7th Apr 2012, 01:01
  #1294 (permalink)  
 
Join Date: Jan 2012
Location: Canada
Age: 50
Posts: 18
Quote, Mac the Knife:

"Hi Guys, sorry to bother you but my airspeed sensors are momentarily unreliable and so autopilot and autothrottle will disconnect.

Flight data was nominal when this happened and SOP in this situation is to maintain appropriate pitch and power while we sort it out.

Would you like to do that yourselves or shall I take care of it?

Seriously, wouldn't that (or a more formal equivalent) have been a more helpful introduction to the situation than a flashing UAS alert & prompt disconnect?"
---------------------------------------------------------------------

-That does seem much more like it...

I find it amazing that the autopilot just dumps everything on the pilot suddenly, yet gives no reminder that stall protection and the other "Normal Law" limiters are now gone...

And to top it off, the "Normal Law" protections do not come back when the airspeeds agree again...

Also, the poor visibility of the out-of-the-way sidestick, combined with its movements not being synchronised with the other side, clearly makes matters even worse when trying to clarify the situation...

This reminds me a lot of the Airbus autopilot's partial roll-disengage feature (after roll input is applied for x seconds), which disengages the autopilot in roll only - a previously little-understood feature with apparently little warning, which killed that Russian captain who had his kid sitting at the controls (and all the passengers)...

I also think some completely independent speed indicator, like GPS, could be a back-up of last resort for forming a mental picture of what is really going on (regardless of the mental gymnastics of correcting the GPS value)... Here the system is very "brittle", because even the control tower will relay information from the same faulty source if the aircraft's pitots fail! That killed a bunch of people as well...

It is very hard to form a mental picture of what is going on if you suddenly have reason to be suspicious of the basic parameters of everything... At least with the GPS, you would have confidence the data was pointing you in the right direction...
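As a toy sketch of that GPS cross-check idea (Python, illustrative only - the 2%-per-1000-ft TAS rule of thumb and the maximum-wind allowance are crude assumptions, not certified logic):

```python
# Rough plausibility cross-check sketch: compare the TAS implied by
# indicated airspeed against GPS groundspeed. A gap bigger than any
# plausible wind suggests the pitot data, not the GPS, is bad.
# All constants are illustrative assumptions.

def ias_to_tas_kts(ias_kts, altitude_ft):
    # Crude rule of thumb: TAS grows ~2% per 1000 ft (approximation only).
    return ias_kts * (1 + 0.02 * altitude_ft / 1000.0)

def airspeed_suspicious(ias_kts, altitude_ft, gps_gs_kts, max_wind_kts=150):
    """True if IAS and GPS groundspeed can't be reconciled by any wind."""
    tas = ias_to_tas_kts(ias_kts, altitude_ft)
    return abs(tas - gps_gs_kts) > max_wind_kts

# 80 kt indicated at FL350 against 480 kt groundspeed is not believable:
print(airspeed_suspicious(80, 35000, 480))   # suspect the pitots
print(airspeed_suspicious(270, 35000, 480))  # within the wind envelope
```

It can't give you an airspeed, but it can tell you which source deserves your suspicion - which is exactly the mental picture problem being described.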

One poster here quite rightly pointed out that with blocked pitot tubes the first warning is sometimes the rudder ratio / overspeed warning, which requires mental loops backwards to figure out what it could mean: this is just poor "interface ergonomics" (if that term makes sense)...

Someone said there is no incidence (angle-of-attack) indicator on the Airbus - the data most relevant to aircraft behaviour, other than the "Stall" warning itself... I find that strange...

I think they need some mass-market designers to help design these cockpits and functions in a more intuitively usable way...

G.
Gaston444 is offline  
Old 7th Apr 2012, 01:03
  #1295 (permalink)  
 
Join Date: Feb 2011
Location: Nearby SBBR and SDAM
Posts: 873
Degraded modes happen

Degraded-mode training, and a full comprehension of what is involved, is important. Why?

The PF can suddenly be "inserted into the loop" and be required to act precisely. Before acting, he must understand.

Based on the factual information we have, the AF447 PF acted as if the plane were in imminent danger. And very early he created a real imminent danger: the threat of stalling the airliner.

My concern with degraded modes is that the performance of the "effective aircraft" (System + PF), especially in the transient, relies on:

1) A good understanding of the equipment's characteristics. Intimacy with her.

2) A good interface, to allow a fast (if possible, immediate) assessment of the problem

3) Good training (involving Pavlovian behaviour)

4) Good use of SOPs, CRM, etc.

In this case we observed problems in all 4 items. And we had a fast degradation. Fast degradation is a real threat: Gen. Chuck Yeager faced this kind of threat. Challenger had it too (SRBs at low temperature).

Airline pilots (in commercial operation) seem not to be trained adequately for degraded modes - or, if they are, only in some carriers.

The PF (probably through lack of understanding of the initial problem) seems to have contributed to an accelerated degradation.

And the System DID KNOW, before the AP and A/THR dropped out, what had started around the increase in noise heard on the CVR. Why not process this information and share it with the crew?

The paper by an Airbus SAS designer mentioned in an earlier post sez you need to scan to detect UAS. That sounds like withholding useful "insider information" from the crew.

This information can be valuable, and can save precious seconds in which to act better.

The reason I included item 2 is that it was possible for Airbus SAS to create a resource (a UAS data processor) before System degradation. (Announced through a single chime? A double?)

Especially after 30+ UAS cases, and while STILL using OBSOLETE airspeed sensors - airspeed being an important parameter to the stability of the System.

And additionally because the System is (still today) operating without true redundancy (because we don't have the required AS probes): the probes can simply fail SIMULTANEOUSLY.
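To illustrate the common-mode point with a toy sketch (illustrative Python only, nothing like real flight-control code): a 2-of-3 voter handles a single bad probe, but votes confidently for garbage when all three pitots ice up together.

```python
# Sketch of why triple redundancy fails against a common-mode failure:
# a median voter rejects one bad probe, but if all three pitots ice up
# simultaneously, the "voted" airspeed is confidently wrong.

from statistics import median

def voted_airspeed(probes_kts):
    """Middle value of three probe readings: classic 2-of-3 voting."""
    return median(probes_kts)

print(voted_airspeed([272, 271, 60]))  # one iced probe: voter rejects it
print(voted_airspeed([65, 60, 58]))    # all iced (common mode): garbage wins
```

That is the sense in which identical probes give no true redundancy: the voter only protects against independent failures.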

Why didn't they do it? Diagnostic raised this important question.

I am still studying the possible reasons for this.
RR_NDB is offline  
Old 7th Apr 2012, 01:13
  #1296 (permalink)  
 
Join Date: Aug 2011
Location: Near LHR
Age: 53
Posts: 37
@Organfreak,

Hi,

Originally Posted by Organfreak
If they didn't recognize or respond correctly to UAS, maybe they would have done if the airplane had shouted at them.
That's exactly my view - maybe they would have responded correctly, since such a warning reduces the ambiguity about what is being recommended to them. For me, as with all the accident reports I have read over the decades, it's about...:

Originally Posted by Organfreak
Back to Swiss cheese: should we not plug every possible (known) hole in it? As someone already pointed out, the closing of any one hole in the Swiss you-know-what may have prevented this horrible crash.
Exactly
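To put toy numbers on the Swiss cheese idea (purely illustrative - assuming the defensive layers fail independently, the accident probability is the product of the per-layer hole probabilities, so shrinking ANY one hole cuts the overall risk):

```python
# Toy Swiss-cheese model: with independent layers, overall accident
# probability is the product of per-layer breach probabilities.
# All numbers are purely illustrative.

from math import prod

def accident_probability(hole_probs):
    """Probability that every defensive layer is breached at once."""
    return prod(hole_probs)

layers = [0.1, 0.2, 0.3, 0.5]           # chance each layer is breached
print(accident_probability(layers))      # ~0.003

# A clearer UAS warning = shrinking one hole from 0.5 to 0.1:
print(accident_probability([0.1, 0.2, 0.3, 0.1]))  # ~0.0006, 5x safer
```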


@chrisN,

Hi,

Originally Posted by chrisN
I have another wild suggestion; when the aircraft gives up flying itself because of UAS, instead of an audible alert (we know audible alerts don’t always get through, such as the stall warning, when the crew is in cognitive overload), how about intermittently clearing the glass cockpit of other stuff (which for AF447 crew did not help them at all) and putting up a big message; “You have UAS. At this height, use power and pitch” (or in other appropriate circumstances: “ Use memory items and QRH”)
I understand your expertise gives you a different insight into human factors than mine (mine being commercial rather than aviation), so you could well be right - I'm not saying that the warning has to be delivered as an audible alert.

In my recent reply to Old Carthusian, I gave my alternative hypothesis for the PF's apparent ignoring of the stall warning. Even if you are correct that cognitive overload caused the stall warnings to be ignored (and I agree that this is plausible), a UAS warning on AF447 would have been delivered at least 5s before the first stall warning started (based on the CVR transcript), so it had a chance to be recognised first.


@gums,

Nice to see you back, sir

Originally Posted by gums
So crew becomes more and more and more of a monitor. But who does what when data is lost or is unreliable?
This "garbage in, garbage out" problem (as someone else described it!) is exactly why having an automated response to UAS is a larger topic, than having a clearer and more explicit warning about it, IMHO.

Originally Posted by gums
To appease the whiz kids we agreed to a big flashing "X" on the HUD. A year later one really agressive pilot flew right thru the big flashing "X" and augered. Least he didn't have 200 SLF items with him.
Any warning can be ignored, as your example graphically demonstrates. My point is that if the AF447 crew had followed the correct UAS procedure there would have been no zoom climb; no zoom climb means no stall, and no stall means no crash.

What I find so interesting is that crew not following the UAS procedure is a bigger problem than just AF447. As I said in another reply, are we just "someone having a bad day" away from another crew not recognising UAS and so not following the UAS procedure, in as dangerous a way as the PF in AF447? Could a specific UAS warning reduce the chances of that behaviour? So far, I don't see a compelling reason why it couldn't, and every reason to think that it might.


@RR_NDB,

Hi,

Thanks for the links in your "surprises" posting - they were all new to me. PM to follow when I get a few minutes

P.S. I just saw your new posting a few moments ago - nice summary.


@Hamburt Spinkleman,

Hi,

Originally Posted by Hamburt Spinkleman
In interim report no. 2, the section on previous unreliable airspeed events and in particular the 13 events that was examined closer, I see a factual listing of the technical effects and of the crew's handling/actions and nothing more. I see no judgment on whether the crew's actions were right or wrong and I see no intent of the BEA to infer any.
I politely disagree - these seem to be clear judgements to me:

"Four crews did not identify an unreliable airspeed"

(So 4 out of 13 UAS events were unrecognised by the crew; by definition their actions were wrong, since the required actions were missing!)

"For the cases studied, the recording of the flight parameters and the crew testimony do not suggest application of the memory items in the unreliable airspeed procedure:
* The reappearance of the flight directors suggests that there were no disconnection actions on the FCU;
* The duration of the engagement of the Thrust Lock function indicates that there was no rapid autothrust disconnection actions then manual adjustment on the thrust to the recommended thrust;
* There was no search for display of an attitude of 5°."

(So in all 13 UAS events which were examined in detail, specific parts of the UAS procedure were not followed - the lack of crews aiming for 5 degrees of pitch as a memory item seems particularly worrying, and particularly relevant to AF447.)

There are several other places where you just need to compare what the crews did (as described by the BEA in IR2), with the actual UAS procedure showing what they should have done, and "join the dots".

Originally Posted by Hamburt Spinkleman
I think you are reading too much into it and I see no basis for characterizing the handling of those 13 events as inadequate, wrong, mis-handled or in-correct.
Unless you can show me where the 13 crews are described as correctly following the UAS procedure, then by definition their actions were incorrect and wrong (and any other synonyms I have used), and I consider my characterisation justified by that evidence given by the BEA.

Please do provide me with your evidence that those 13 crews correctly & completely followed the UAS procedures (which would therefore contradict those BEA quotes above, wouldn't it?), and I'll happily reconsider my position.

Last edited by Diagnostic; 7th Apr 2012 at 01:19. Reason: Recent post by RR_NDB
Diagnostic is offline  
Old 7th Apr 2012, 01:34
  #1297 (permalink)  
 
Join Date: Jul 2009
Location: DFW
Age: 57
Posts: 246
Originally Posted by RR NDB
The paper of an Airbus SAS designer mentioned in one earlier post sez you need to scan to detect UAS.
One's instrument scan atrophies about six months into flying the bus. Or any other airplane with a two-axis FD and a nav display with a pretty green/magenta line.
TTex600 is offline  
Old 7th Apr 2012, 02:11
  #1298 (permalink)  
 
Join Date: Dec 2002
Location: UK
Posts: 1,920
Surprise

Re surprise see:- http://www.pprune.org/safety-crm-qa-...ml#post7113960

Situation awareness is a central aspect of this and many other accidents.
The crew may have understood the situation but chosen an inappropriate action; alternatively the crew failed to comprehend the situation and thus the action was incorrect.

See the ‘surprise’ reference and the problems of hindsight, where fundamental surprise can be biased towards situational surprise and thus hinder learning – safety improvement.

Classic ref for Aviation Decision Making:- http://www.dcs.gla.ac.uk/~johnson/pa...ithlynne-p.pdf
safetypee is offline  
Old 7th Apr 2012, 02:51
  #1299 (permalink)  
 
Join Date: Jul 2009
Location: The land of the Rising Sun
Posts: 165
Diagnostic
Everything keeps coming back to training, SOPs and CRM. Supposing the crew were surprised, then training should kick in: a pause, a scan of the instrument panel (and remember, the only instrument that was not reliable was the airspeed indicator). PJ2 also pointed this out - this was not a serious incident at first. However, the crew's actions made it into a serious incident.

It also seems that the PF's scan broke down almost immediately and that the PNF did not intervene sufficiently. So a UAS warning might not have made any difference. This is why I asked the question - what guarantee do you have that they would have paid any attention to a UAS warning? It is not that I am against adding such a warning, but that the warning would not necessarily have made a difference to their response.

The cockpit voice transcript indicates a very rapid 'over-reaction' to the initial incident. Once again, this is not indicative of an interface issue. There was no attempt to use the SOPs or to diagnose the problem. This is more indicative of flight-crew problems - training should enable you to deal with the unexpected. It seems in this case it didn't.

We know from the other UAS incidents that these are recoverable, and here once again I stress - outcomes are important. Even the stall was recoverable, but once again there was a deficiency in the approach, this time with a clearly audible warning. The warning didn't help. Unfortunately this is a crew-caused accident with very little help from the machine. It touches on pilot training and airline culture and how these are carried out with automation.
Old Carthusian is offline  
Old 7th Apr 2012, 03:02
  #1300 (permalink)  
 
Join Date: Feb 2011
Location: Nearby SBBR and SDAM
Posts: 873
Why UAS processing (for the pilots*) was not introduced

Hi,

Diagnostic:

Your diagnoses are proving helpful.

You asked a question that is indeed difficult to answer: why was a UAS processor not offered?

Possibilities:

1) They consider the task a pilot responsibility

2) They prefer to be conservative and avoid risk

3) Management / communication issues (inside and outside Airbus SAS)

4) They never considered it, or it was not considered important

5) Other factors (cost, ROI, etc.)


IMO the (rare) conditions of this unique crash may lead them to consider AoA and even UAS "processing" as important. I hope so.

At least, both factors were (as it seems) undetected by an inadequately trained crew,

submitted to complex "inputs" that required the creation of an HF study in order to analyze the accident thoroughly.

As to the why, as you put it, I prefer to say it was a mix of the 5 items, perhaps with #4 as the most influential.

(*) Airbus SAS (and others) process UAS just for the System. The System is not fed with garbage. The pilots have to "process" any garbage through scan and brain.

The Thiells 727 crash and AF447 show that this human GIGO "feature" sometimes fails when "processing garbage".
RR_NDB is offline  
