
View Full Version : AF 447 Thread No. 7



CONF iture
5th Apr 2012, 00:27
In MANAGED mode, the corrections applied by the autoflight system are displayed in the SELECT window of the FCU panel. I want to make clear at this point that we're not talking about control inputs of a magnitude to effect 5000fpm descent rate immediately, simply that this is the value the software uses in certain configurations. Lower to the ground, you'll see -1500 appear in the window for a split second as corrections are applied, but you don't start descending at 1500fpm.
Never heard, read or seen anything like it ... Anyone with valuable experience on the 320 to confirm this?

mm43
5th Apr 2012, 01:51
Originally posted by RR_NDB ...

Why not to provide an instantaneous (even non causal, before an eventual Law change) indication to the crews (Airbus SAS, Boeing, Embraer, etc.) (PRECISELY) of UAS?
Let's go back to AF447 Thread No.4 Post #616 (http://www.pprune.org/tech-log/454653-af-447-thread-no-4-a-31.html#post6547413) - which was directed at you. My point is that if the A/C is to "detect" and "report" UAS, the system may as well be programmed to maintain the "status quo" as I proposed.
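
Purely as an illustration of what "maintain the status quo" could mean in logic (invented names and numbers, and no claim that this is how any real FBW system is or should be coded), something like:

def annunciate(message):
    # Stand-in for a flight deck annunciation (ECAM line, aural alert, etc.)
    print(message)

def on_unreliable_airspeed_detected(current_pitch_deg, current_n1_pct):
    # Hypothetical handler: when the monitors reject the airspeed data, latch
    # the last sensible pitch attitude and thrust as targets (the "status quo")
    # and tell the crew explicitly what is being held.
    held = {"pitch_target_deg": current_pitch_deg,
            "thrust_target_n1": current_n1_pct}
    annunciate("UNRELIABLE AIRSPEED - HOLDING PITCH %.1f DEG / THRUST N1 %.0f%%"
               % (held["pitch_target_deg"], held["thrust_target_n1"]))
    return held

# Example: airspeed rejected while in cruise at about 2.5 deg pitch and 95% N1
targets = on_unreliable_airspeed_detected(2.5, 95.0)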

Originally posted by Machinbird ...

If the regular sim sessions were not long enough to exercise this capability, then they were not long enough. Any misunderstanding should have been found and fixed.
I can agree with your point, though I believe Airbus would prefer that these sorts of "misunderstandings" by third parties didn't impact on its bottom line.

Diagnostic
5th Apr 2012, 02:41
@mm43,

Well, they did. Or at least the PNF announced the loss of speeds and later the change to ALT Law.
I agree, sir, that the PNF announced those two points, but as we see in the CVR transcript, no one mentioned (i.e. vocalised) the UAS process (memory items - decide whether needed or not, then QRH). If we accept (as I do) that there will be no relevant words omitted from the transcript we've seen (e.g. I don't need to read any "last words" from the crew, if they said things to loved ones etc., so their omission wouldn't surprise me), then either they (especially the PF):

a) did truly recognise the UAS situation, didn't vocalise that recognition, and forgot that there was a procedure to follow (IMHO unlikely);

or

b) did not truly recognise the UAS situation for what it was, and (this is speculation by me) treated it as some kind of unknown instrument / system failure, which they then got (terminally) bogged down in trying to understand - which was not helped when instruments responded in unexpected ways once stalled, even though they were not faulty (e.g. low IAS due to high AoA affecting the pitot probes, leading to intermittent stall warnings etc.).

On the evidence I've read so far (in all the BEA reports and these threads), I vote for (b). This adds me to the list of other readers who believe that an explicit "You have a UAS condition! Follow the UAS procedure!" warning may have helped, instead of the procedures assuming that the UAS condition would be correctly recognised by every crew, every time.

As you have pointed out, adding such an explicit UAS warning could then lead to an attempt at automation keeping the "status quo" when that happens, although I see that as a step of development beyond that of just giving a warning.

I'm sure some of the professional pilots will say that a trained ATPL pilot should not need that kind of "spoon-feeding", and I don't disagree. However, for various reasons too long to explain right now, experience in my own (non-aviation) field makes me think this sort of explicit warning message should be seriously considered (not that Airbus or Boeing or any other manufacturer cares what I think :oh: ).

I do find it "a bit of a stretch" to blame the systems/human interface when many similar situations have been successfully handled by other crews.
I respectfully disagree that "similar situations have been successfully handled by other crews" (depending on the subjective interpretation of "similar" and "successfully" of course :) ) as I explain here:
http://www.pprune.org/tech-log/460625-af-447-thread-no-6-a-34.html#post6673738
http://www.pprune.org/tech-log/460625-af-447-thread-no-6-a-61.html#post6747450
http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-9.html#post6804546

The BEA (in Interim Report 2, page 51 onwards in the English edition) discusses those UAS events in enough detail to examine them (13 to be exact). To quote myself from a previous post:

"Sure, none of the other flights crashed, but several were not handled according to the QRH, not all of them went into Alt* law meaning that subsequent actions cannot sensibly be compared to AF447"

The BEA explicitly mentions the lack of other crews following the correct memory items, among other things. Therefore this looks much less like an AF447-only mistake IMHO, and makes the system/human interface again an area where there should be focus, given that UAS events will continue to occur with current pitot technology.

RR_NDB
5th Apr 2012, 02:44
Hi,

mm43: (http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-63.html#post7118629)

Let's go back to AF447 Thread No.4 Post #616 (http://www.pprune.org/tech-log/454653-af-447-thread-no-4-a-31.html#post6547413) - which was directed at you.

I agree fully with you. After 7+ months of "doing some R&D" :8 on that, I became progressively convinced that the man-machine interface (very probably) played an important role in this case.

The reason I specified "just indications" (instead of "actions") was twofold:

1) It allows a smooth learning curve (for all "players"). It is stepwise.

2) It doesn't represent "automation" - just another "AID", an extra resource.

Why not a COMBO (UAS & AOA)? :) Two major "components" here.

I believe Airbus would prefer that these sort of "misunderstandings"

:E

PS

Certainly your post #616 was processed (in batch :) ) all these months. Thank you for remembering; I wouldn't have.

Diagnostic
5th Apr 2012, 03:53
@RR_NDB,

Hi,

I know your post wasn't addressed to me, but I just wanted to say that I agree with all your points above :ok:

I wonder why this explicit UAS warning isn't being done already. Is it that plane manufacturers are assuming correct recognition of the UAS situation every time? (A dangerous assumption, IMHO, with confirmation of that danger provided by the BEA report that I mentioned.) Or is there something which we're not considering, that is preventing them providing this explicit warning?

Lyman
5th Apr 2012, 04:11
A sore spot since the beginning. Rapid understanding is critical; there is no call for delay, imho. Yes, resources. "WARNING: speeds suspect, recall Pitch and Power for conditions and config. No delay."

There was great pressure, to which the BEA succumbed, in releasing a memorandum which Airbus scanned and re-released.

"There are no NEW mechanical issues with our a/c, per BEA"
They needed that for the airshow.

Diagnostic, consider what it would mean in the middle of this investigation, for Airbus to reconfigure their cockpit systems to include UAS/WARN.

Everyone knows the problem, but so long as Airbus do not address it with upgrades, it does not "exist".

To react is to confess. In the case of Air France, they had a reason and an excuse to r/r the Pitot Probes. The PILOTS would not fly until they were changed. :D:D

It would be impossible for pilots to gather to protest a fleetwide problem caused by the aircraft itself... and not just a subcontractor, e.g. pitots, or a duff "pipe".

No amount of soothing words can alter the fact that UAS remains a problem for the platform, not just the pilots.

mm43
5th Apr 2012, 04:46
@Diagnostic... adding such an explicit UAS warning could then lead to an attempt at automation keeping the "status quo" when that happens, although I see that as a step of development beyond that of just giving a warning.
Thanks for your well thought-out post.

That "step beyond" was an attempt to stop the initial "misunderstanding" and/or "startle factor" that has previously been discussed. It may be viewed as problematic to an outcome, but we are currently dealing with an outcome that became a "problem".;)

bubbers44
5th Apr 2012, 06:25
Since hand flying skills are not required any more maybe the autopilot should just go to attitude hold and autothrottles freeze in their cruise mode. No more stalls at 35,000 and training doesn't have to be changed. New guys with 350 hrs would love it.

HazelNuts39
5th Apr 2012, 06:52
Not all UAS scenarios are as obvious as AF447. The high altitude / ice particle scenario is a relatively recent addition to the family.

RR_NDB
5th Apr 2012, 09:01
Hi,

Diagnostic (http://www.pprune.org/members/365970-diagnostic):

Or is there something which we're not considering, that is preventing them providing this explicit warning? (http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-63.html#post7118683)

Important question. Some possibilities. Will comment ASAP.

mm43
5th Apr 2012, 09:35
As HN39 has mentioned, the likelihood of ice crystals above FL300 and less than -40 deg C wasn't considered.

The following proposed amendment is the outcome of more recent UAS events.

http://oi42.tinypic.com/2yjvbc3.jpg

Old Carthusian
5th Apr 2012, 09:40
I still remain to be convinced by the man/machine interface explanation. We are running the danger of over-generalising from one specific example. Nothing points to a problem with the interface - rather it consistently points to a failure of crew performance. There was and still is a procedure for UAS which if followed clears the incident up quite effectively. Other crews have successfully dealt with the issue so why didn't this one? What was the difference that caused this crew to stall and crash their aircraft?
This is the issue which all attempts to blame the machine fail to address. Why this crew? What was so different about them that they couldn't follow the SOPs, that CRM was non-existent and that there was no clear chain of command? The interface is not the issue because other crews handled it successfully. Training and culture can also be added to the mix, but it also and significantly comes down to the individual members of the crew and does not go beyond them. Certainly the aircraft and the controls can probably be re-designed so that this sort of incident would never happen again, and I would favour this, but I can't help suspecting that without better training someone will find another way to crash one of these aircraft.

jcjeant
5th Apr 2012, 10:18
Hi,

Why this crew? What was so different about them that they couldn't follow the SOPs, that CRM was non-existent and that there was no clear chain of command?
It is indeed always the same question we ask, and it defies statistics:
Having one incompetent pilot in a crew can happen.
Having two incompetent pilots in a crew is rare.
Having three incompetent pilots in a crew defies statistics.

HazelNuts39
5th Apr 2012, 10:58
As HN39 has mentioned, the likelihood of ice crystals above FL300 and less than -40 deg C wasn't considered.
That's not what I said or meant. The Airbus requirement (ice crystals) in your diagram covers the atmospheric conditions of AF447 (FL350; SAT -38.8). I was thinking about obstructed static ports, blocked pitot drain holes combined or not with blocked pitot 'intake', in various phases of flight. Sorry if that wasn't stated explicitly.

DozyWannabe
5th Apr 2012, 14:05
CONF's experience sounds like he ended up in Direct Law or Abnormal Attitude, as Alternate in any mode should still have autotrim enabled.

Diagnostic
5th Apr 2012, 14:13
@mm43,

Hi,

@Diagnostic ... Thanks for your well thought-out post.
And thanks for your continued comments :)

That "step beyond" was an attempt to stop the initial "misunderstanding" and/or "startle factor" that has previously been discussed. It may be viewed as problematic to an outcome, but we are currently dealing with an outcome that became a "problem".;)
I can certainly see some merit in an automated "UAS handling", and I firmly believe that the "startle factor" is indeed an issue to some extent in any man/machine monitoring interface (as described in Dr Bainbridge's paper).

My specific concern is that if such automated UAS handling is introduced, then the (a) recognition of UAS and (b) subsequent actions, have got to be correct. :) I just don't know whether the level of confidence in the automation is there yet (and the pilot/passenger confidence in the automation!), to make this option (i.e. an automated UAS response) better than leaving the human in the loop (i.e. a guided UAS response / warning). I'm very happy to read the views of the experts in this area.

The option I am most concerned about, is leaving the situation as it is, due to the inadequate UAS recognition and handling by the other crews (in addition to AF447) as highlighted in the BEA Interim Report 2. While this was only one hole in the AF447 "swiss cheese", if that hole can be closed (or at least made smaller), in a (relatively) low cost / low risk way, then surely we reduce the risk of all the holes lining-up again.

@Old Carthusian,

Hi,

I still remain to be convinced by the man/machine interface explanation. We are running the danger of over-generalising from one specific example.
My point is that there is not just one example. If that was the case, I wouldn't be spending any time commenting on this point. :) The BEA Interim Report 2 makes it clear (at least to me) that there is a much larger problem with UAS recognition & handling, which was the start of the sequence of events leading to the crash.

To use your words: I "remain to be convinced" that, had the AF447 crew truly realised that the "lost speeds" (to quote the PNF) were actually expected due to a temporary UAS, would any of the subsequent events leading to the crash have happened? Reduce the "startle factor", reduce the crew's concern that this is an unusual problem, remind them to turn off the FD etc. - does the PF then follow whatever (unfortunately unknown) cues he did, which caused the "zoom climb"? Perhaps not.

I'm not trying to convince you that I'm "correct", but over some decades working with diagnosing complex systems, I have seen many, many times that having an incorrect mental model of what is happening at the beginning of a problem drastically reduces the likelihood of correct handling (especially quick & efficient handling) as that problem continues. It's from that experience that I see similarities with the sequence of events on AF447.

Nothing points to a problem with the interface - rather it consistently points to a failure of crew performance.
I politely suggest that if several crews' behaviour was wrong (which is a documented fact), then by definition the "interface" isn't well-designed. Or there are many crews who are sub-standard. Which one is easier to fix?

Other crews have successfully dealt with the issue so why didn't this one?
If by "successfully" you mean "without crashing", then yes. :) But as I've said before, I do not accept that as a good standard of measurement. Several other crews did not recognise & handle UAS correctly. Are you really OK with that, as long as they don't crash on that specific time they mis-handle it? I'm not. I see this as a larger problem which needs to be understood & fixed, so that crews do not (for example) follow incorrect FD commands, during UAS events.

This is the issue which all attempts to blame the machine fail to address. Why this crew?
See above - it's not just this crew who failed to recognise & follow the UAS procedure. I'm not trying to "blame" the machine - this is undoubtedly a "swiss cheese" situation with many holes. This is just one of the holes, but it's one where improvements (e.g. a specific warning / message is given), seems achievable.

What was so different about them that they couldn't follow the SOPs, that CRM was non-existent and that there was no clear chain of command?
I certainly agree that there were other problems like CRM (more holes in the swiss cheese). I'm just saying that, from everything I've read, UAS recognition & handling is one hole in that cheese, and if any of the holes had been closed, then the accident wouldn't have happened.

The interface is not the issue because other crews handled it successfully.
See above. I politely disagree that it's possible to be so definite.

Training and culture can also be added to the mix
Agreed, although these are difficult & long-term issues. That's not to say that airlines shouldn't try to improve these, but being pragmatic, I would rather have a partial improvement (e.g. better UAS warnings, and perhaps assisted UAS handling) in the shorter term, while waiting for longer-term improvements in training & CRM etc., than have no improvement at all in the shorter term.

but it also and significantly comes down to the individual members of the crew and does not go beyond them.
On this point I politely disagree, as I explain above. I'm happy to see if future posts change my mind, but I don't know how it is possible to be so definite that this is a crew-only problem (by which I interpret you as saying an AF447 crew-only problem).

CONF iture
5th Apr 2012, 14:17
CONF's experience sounds like he ended up in Direct Law or Abnormal Attitude, as Alternate in any mode should still have autotrim enabled.
If it was Direct Law, the USE MAN PITCH TRIM PFD MSG would have shown up ...

Hamburt Spinkleman
5th Apr 2012, 17:03
the inadequate UAS recognition and handling by the other crews (in addition to AF447) as highlighted in the BEA Interim Report 2.

The BEA Interim Report 2 makes it clear (at least to me) that there is a much larger problem with UAS recognition & handling

if several crew's behaviour was wrong (which is a documented fact)

Several other crews did not recognise & handle UAS correctly. Are you really OK with that, as long as they don't crash on that specific time they mis-handle it?
Inadequate, wrong, mis-handle, in-correct. What in report No. 2 do you see that warrants those terms?

PJ2
5th Apr 2012, 18:18
Hello Diagnostic;

Enjoying reading your contributions, thank you. If I may offer a thought and a comment on the points of discussion between you and Old Carthusian...

On an increase in automated responses, I can understand the logic of such an argument (the BUSS relies upon this logic), but what concerns me from a pilot's p.o.v. is long-term reduced situational awareness and the need for in-depth understanding of high-altitude, high-Mach No. swept-wing flight, (old fashioned "airmanship", I guess) because it is still humans who are doing the piloting.

I offer this view out of a concern for what remains inexplicable, and that is the instant decision to pitch a transport aircraft up at such high pitch-rates (increasing 'g'-loading to 1.55g) to such high pitch attitudes and keep the aircraft there.

I would be interested in either data or an argument that this indicates an interface problem, for, as you are, I am open to any information that shows that normal training and SOPs for this event are inadequate in some circumstances and because of obscurity are best left to automated responses.

As has been observed throughout the thread by those who fly these aircraft, such pitch attitudes at cruise altitudes are simply never intentionally achieved for the very reasons loss of control occurred.

In re your observation, "Several other crews did not recognise & handle UAS correctly.", I don't recall specifically where there were untoward outcomes due recognition and handling issues with other crews in other events but again am open to new information. There are no characterizations one way or the other in IR2 [Interim Report 2], Appendix 7 regarding crew responses one way or another and from what I've read I don't see any descriptions of difficulties experienced by other crews in the body of IR2. There were a few events such as the Air Caraibes, (report here (http://www.eurocockpit.com/docs/ACA.pdf), in French), the Northwest and the TAM events but to my recollection, (and I have been wrong on more than a few things before!), the UAS events haven't been problematic as most crews "did nothing" and the airspeed returned within a minute or less.

The argument here isn't at the stage of deciding whether more automation, the same level of automation or reduced interventions are needed. This is very much a continuing dialogue between pilots and engineers! The ability to "look through" the automation and decide for oneself what the airplane is doing, what it needs and why, is being lost because it is being supplemented, and when supplements occur, practice, and therefore skill, then thinking and knowing, atrophy.

I have had kindly pointed out to me a recent conference at the Royal Aeronautical Society entitled, "The Aircraft Commander in the 21st Century". There is an excellent videoed presentation (http://media.aerosociety.com/aerospace-insight/2012/03/23/evolution-and-future-of-the-flightdeck/6566/?utm_source=The+Royal+Aeronautical+Society+e-communications&utm_campaign=23592cb938-20120329_RAeS_HTML_Newsletter_Mar_12&utm_medium=email) from this conference by Captain Scott Martin, (Gulfstream Experimental Test Pilot) on the very topic at hand. From the site:

In this exclusive video from the conference, Captain Scott Martin, Experimental Test Pilot at Gulfstream Aerospace (http://www.gulfstream.com/) talks us through the evolution of the flight deck and how Gulfstream manages to balance the role of automation with providing easily accessible information for the pilot.

He also discusses key issues for future flightdeck design in integrating information technology and computers into aircraft and how this ‘second revolution’ in human flight not only affects the military and airline pilot, but also the GA and private flyer.

Additionally he talks about the expectations of the next generation of pilots in dealing with these glass cockpits and recommendations in designing the human-machine interface.

DozyWannabe
5th Apr 2012, 18:57
If it was Direct Law the USE MAN PITCH TRIM PFD MSG would have show up ...

As effective PF during our experiments, I was relying on the TRE to watch ECAM as my concentration was 100% on the PFD, SS and trim wheel. I'd be impressed if you could read ECAM while trying to maintain what they were doing.

rgbrock1
5th Apr 2012, 19:09
HazelNuts39 wrote:

Not all UAS scenarios are as obvious as AF447. The high altitude / ice particle scenario is a relatively recent addition to the family.

Why? Surely aircraft have been flying at such altitudes for quite some time now. What has changed?

CONF iture
5th Apr 2012, 19:27
As effective PF during our experiments, I was relying on the TRE to watch ECAM as my concentration was 100% on the PFD, SS and trim wheel. I'd be impressed if you could read ECAM while trying to maintain what they were doing.
Good ... USE MAN PITCH TRIM PFD MSG

PJ2
5th Apr 2012, 19:33
rgbrock1;
Quote:
Not all UAS scenarios are as obvious as AF447. The high altitude / ice particle scenario is a relatively recent addition to the family.
Why? Surely aircraft have been flying at such altitudes for quite some time now. What has changed?
Perhaps I can help. In climbing through FL200 or so (IIRC), in a B767-200, captain flying, I noticed my CAS gradually decreasing - no EICAS messages, no Master Cautions; what would have been a normal 320kt climb speed, (again IIRC), had decreased gradually to around 250kts - the rate of climb had not increased and I glanced over to the captain's ASI and it read 320kts or so. It was very subtle. We took the reading off the standby ASI and it agreed with the right-side ASI, so we used that ASI. About that time the amber "Rudder ratio" EICAS caution annunciated and an aileron lockout (again, IIRC) annunciated. As the left-side ASI approached 350kts we expected an overspeed warning but it did not occur. We continued the climb and leveled off at flight plan altitude with the right side reading equal to the standby and the left-side pegged on the stop. The right-side autopilot engaged and we continued to destination. On approach, as the air warmed the left-side ASI indication returned to normal. We wrote it up. This was around 1985/86. I never saw it again.

I've thought of that often - what if we had lost all speed indications? There were no pitch-power tables at the time but we could have used the FOM Long-range cruise numbers to keep us safe and monitored pitch, comparing it with past experience. We'd have probably continued; it was night, winter conditions at departure - destination was daylight and a bit warmer. I doubt if ours was the only such experience.

The difficulty with automation is GIGO - if the info isn't available to the flight crew, what's the automation using? The notion of "historical figures" has been broached (as in, "what's the airplane been doing over the past ten minutes?"), but that's what pilots do anyway, and supplementing such awareness gradually destroys it.

A few have hit upon a very good point - if we fix this, then what will be the next cause? Or do we teach airmanship sufficiently to keep the aircraft safe? While cadet programs teach technical competence, do they teach one how to be "a pilot"?

DozyWannabe
5th Apr 2012, 20:00
Good ... USE MAN PITCH TRIM PFD MSG

Well, in which case I'd say the sim configuration must have been slightly off - would you have a chance to re-run your experiment in the not too distant future? Alas mine was very much a one-off.

infrequentflyer789
5th Apr 2012, 21:33
CONF's experience sounds like he ended up in Direct Law or Abnormal Attitude, as Alternate in any mode should still have autotrim enabled.

I think back when CONF first posted he stated that his sim exercise was a dual ADR failure to trigger UAS. That would leave one ADR in, and AOA would trigger abnormal attitude law, which would stop autotrim - apparently without putting up the PFD message, which also ties in with what CONF saw [yep, I agree, sounds stupid, think it should be fixed, but not relevant to 447].
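
For illustration only, that reconfiguration hypothesis might be sketched like this in Python - the thresholds and structure are invented for the sake of the example and are certainly not the real FCPC logic:

ABNORMAL_AOA_THRESHOLD_DEG = 30.0   # assumed value, purely illustrative

def reconfigure(adr_valid_flags, aoa_from_remaining_adr_deg):
    # Return (control law, autotrim available) for the hypothesised scenario.
    valid_adrs = sum(adr_valid_flags)
    if valid_adrs <= 1 and aoa_from_remaining_adr_deg > ABNORMAL_AOA_THRESHOLD_DEG:
        return ("ABNORMAL ATTITUDE LAW", False)   # autotrim stops
    if valid_adrs < 3:
        return ("ALTERNATE LAW", True)            # autotrim still active
    return ("NORMAL LAW", True)

# Dual ADR failure in the sim, remaining ADR seeing a stalled AOA:
print(reconfigure([True, False, False], 35.0))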

I don't think the SIM would replicate the fall in measured airspeed at the pitots at high AOA and fail all ADRs as a result.

safetypee
5th Apr 2012, 23:31
Some thoughts;-
Global warming is put aside, but not eliminated (see refs).
Engine design. Problems were reported with aircraft engines as early as 1989/90. Serious research and regulatory activities started in the mid – late 90s after some aircraft suffered multiple engine rollbacks.
Modern engines use very high efficiency aerodynamic components with close tolerance fittings. Whereas older designs could suffer some ice accretion without obvious problems, the new systems degrade rapidly. An analogy is with super-critical wing sections suffering from ‘bug splatter’.
Even so, larger engines appear to manage ice crystal icing easier than smaller engines, but performance/degradation depends on individual designs – see centrifuge issues below. In addition, use of internal anti-icing heat adds to the possibility of melting some crystals providing the ‘glue’ (freezing water) for other crystals to stick together. With older/unheated engines the crystals tended to bounce off (possible origin of no airframe ice [accretion] below -40C).
Changes in design and location of probes. TAT probes encountered problems in similar timescales as engines (also some reports of A310 / Concorde pitot problems – flight test?).
New pitot designs perhaps did not consider ice crystal capture/build up, or they enhanced the particle melting ‘glue’ aspects.
Airframe aerodynamic efficiencies resulted in probe locations where there is more catchment of ice crystals. High curvature flow around the aircraft nose tends to centrifuge out the heavier water / crystal content, but smoother low curvature flow results in more lightweight ice crystals entering the tube; again specific design issues with probe, aircraft, and location.
Avionics. Availability of modern colour radar may encourage crews to fly closer to storm centres than previously. The older ‘cloud and clunk’ WXR gave a single boundary defined by skilful use of tilt/gain – stay out of this area and a bit more; new radars have several ‘automatic’ colours thus a choice of acceptability – keep out of the red, but yellow / green may be acceptable. This false reasoning has been reinforced with sales talk of ‘clever’ electronic features; pilots overlook this, and also that most aircraft WXR do not detect ice. Ice crystal conditions at best might only show as a green zone.
Thus the exposure to ice crystals (the frequency of encountering the conditions and the duration spent in them) has increased.
Changes in operational complexity – airspace limits, e.g. does RNP / RVSM (or crews’ perception of safety limits) increase the probability of encountering areas of Cbs? Crews’ awareness of the icing threats depends on training and incidents reported. Modern airframes appear to tolerate more icing encounters – better design / efficient systems – but this may not apply universally to all aircraft or all aspects of a single type.
The industry appears to be less aware of icing risks; have we forgotten many of the rules of thumb – don’t fly in/under the anvil of Cbs.
Complacency?

http://icingalliance.org/meetings/RIF_2009/documents/AIAA%20June%202009_Mason_version_nss.pdf

IASCC - International Air Safety & Climate Change conference (http://easa.europa.eu/conferences/iascc/) - presentations, workshop 1, day 2, Eric Duvivier, EASA - "High Altitude Icing Environment"

http://www.ukfsc.co.uk/files/Safety%20Briefings%20_%20Presentations/Flight%20Ops%20-%20Ice%20Particle%20threat%20AIAA%202006%20Posted%20Oct%202009.pdf

Diagnostic
6th Apr 2012, 00:14
Hi PJ2,

Thanks very much for your comments all through these threads, and for the opportunity to discuss. I've tried to minimise the quotes, while still (hopefully) keeping the context - if you feel this has distorted things, then sorry & please correct me.

On an increase in automated responses, I can understand the logic of such an argument (the BUSS relies upon this logic), but what concerns me from a pilot's p.o.v. is long-term reduced situational awareness and the need for in-depth understanding of high-altitude, high-Mach No. swept-wing flight, (old fashioned "airmanship", I guess) because it is still humans who are doing the piloting.
I do understand this p.o.v. and as I said, I'm not yet convinced about totally automated responses, but at least an explicit UAS warning seems (with hindsight) a clear improvement, doesn't it?

After all, if there is an engine fire, the systems (I don't know if it's the FADEC or others) detect the excessive temperature and alert you, as the pilot, to that specific problem. (I flew several of these in a B737 simulator, some years ago - that bell gets the heart racing :) ). The system does not just say "Hey, something is wrong - I know what the problem is, but you have to work it out from some gauges on the panel - and hurry up!". Why give the crew a specific fire warning (or low fuel warning, or any of the other warnings where the system highlights the specific issue), and not give the crew a specific UAS warning?
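
To make that concrete, here is a rough Python sketch of the sort of explicit caution I have in mind, driven by rejection flags the aircraft already generates; the message wording and the pitch/thrust pairs are only indicative (my recollection of the published UAS memory items), not any real ECAM implementation:

def uas_caution(airspeed_rejected, above_fl100, below_thrust_red_alt):
    # Build an explicit crew message from flags the system already has.
    if not airspeed_rejected:
        return None
    if below_thrust_red_alt:
        memory_items = "PITCH 15 DEG / TOGA"
    elif above_fl100:
        memory_items = "PITCH 5 DEG / CLB THRUST"
    else:
        memory_items = "PITCH 10 DEG / CLB THRUST"
    return ("UNRELIABLE AIRSPEED\n"
            "  AP/FD ........ OFF\n"
            "  A/THR ....... OFF\n"
            "  FLY " + memory_items)

# Cruise case, the AF447 flight phase:
print(uas_caution(airspeed_rejected=True, above_fl100=True, below_thrust_red_alt=False))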

In your B767 example a few posts ago, is it sensible (and optimal) to make the crew "jump through the mental hoops" to try to work backwards from the "Rudder ratio" EICAS caution, to the underlying UAS event?

I offer this view out of a concern for what remains inexplicable, and that is the instant decision to pitch a transport aircraft up at such high pitch-rates (increasing 'g'-loading to 1.55g) to such high pitch attitudes and keep the aircraft there.
Completely agreed - it's currently inexplicable, due to the lack of justification/explanation voiced by the PF. As I think several here have already said, the human factors part of the final report will make interesting reading, but it may be more like "educated guesswork" in this area, than any of us would want.

However, if the PF had correctly announced and followed the UAS procedure, then they would both have been focussed on the 5 degree pitch target instead, wouldn't they - at least possibly?

I would be interested in either data or an argument that this indicates an interface problem, for, as you are, I am open to any information that shows that normal training and SOPs for this event are inadequate in some circumstances and because of obscurity are best left to automated responses.
As I see it, BEA Interim Report 2 page 51 onwards provide evidence for either:

a) too difficult to recognise UAS via the existing interface, or/and
b) insufficient training to recognise UAS via the existing interface.

IMHO these are related - the less obvious the interface to report a UAS (and to also encourage that the UAS procedure should be followed) to the crew, the more training, skill, concentration, ongoing crew practice will be needed. Or do you have a different view?

More details below...

In re your observation, "Several other crews did not recognise & handle UAS correctly.", I don't recall specifically where there were untoward outcomes due recognition and handling issues with other crews in other events but again am open to new information.
Agreed that I've seen nothing regarding untoward outcomes in that BEA report, but IMHO that's not what the metric being measured should be.

[...] to my recollection, (and I have been wrong on more than a few things before!), the UAS events haven't been problematic as most crews "did nothing" and the airspeed returned within a minute or less.
That is my understanding too (apart from duration - BEA mention up to 3min 20sec of continuous invalid speeds). But consider what we learn from the BEA report about the various crews lack of following UAS procedures, and what that means about the chances of a potentially different outcome next time.

As I understand it, one of the reasons for crew procedures is precisely to prevent different outcomes depending on crew, time of day, visibility, and all the other variables which a crew has to deal with. Once we see lack of adherence to procedures, don't we get closer to the chances of "bad things" happening? That has been my experience, both with flying and with other highly-controlled situations.

Of the 13 UAS events where the BEA had sufficient detail to know what the crew did / did not do:

"Four crews did not identify an unreliable airspeed"

and

"For the cases studied [which I interpret as being all 13 cases] the recording of the flight parameters and the crew testimony do not suggest application of the memory items in the unreliable airspeed procedure:
* The reappearance of the flight directors suggests that there were no disconnection actions on the FCU;
* The duration of the engagement of the Thrust Lock function indicates that there was no rapid autothrust disconnection actions then manual adjustment on the thrust to the recommended thrust;
* There was no search for display of an attitude of 5°."

So as I read it, all 13 crews "got it wrong" to a greater or lesser extent, with a third of them (4 out of 13) failing to do any UAS procedure, and all 13 failing to do the memory items. Isn't that just a time bomb waiting for a crew to get things badly wrong in the future, when they are presented with an unrecognised UAS at the "wrong time" (sleepy, poor CRM, "startle factor" etc.)? If they get distracted trying to diagnose a non-existent instrument fault (which is really just temporary UAS), couldn't that potentially lead to another AF447-like event? IMHO, based on reading other accident reports where distraction was a factor - yes.

The ability to "look through" the automation and decide for oneself what the airplane is doing, what it needs and why, is being lost because it is being supplemented and when supplements occur, practice and therefore skill, then thinking and knowing atrophy
I understand that concern, and I would much much prefer ATPL pilots to be better trained, better paid and kept highly-skilled.

However, are you saying that aircraft system designers shouldn't help flight crew by giving an explicit warning for UAS, even though the systems know that there is "just" a UAS event (which has a procedure to follow) and not some other instrumentation fault (which needs to be investigated, diagnosed, coped with, etc.)?

I have a view about how an automated response might be considered, in a way that still keeps the crew "in the loop", but I'd like to initially focus on giving explicit UAS warnings (to try to drive the following of UAS procedures).

I have had kindly pointed out to me a recent conference at the Royal Aeronautical Society entitled, "The Aircraft Commander in the 21st Century". There is an excellent videoed presentation (http://media.aerosociety.com/aerospace-insight/2012/03/23/evolution-and-future-of-the-flightdeck/6566/?utm_source=The+Royal+Aeronautical+Society+e-communications&utm_campaign=23592cb938-20120329_RAeS_HTML_Newsletter_Mar_12&utm_medium=email) from this conference by Captain Scott Martin, (Gulfstream Experimental Test Pilot) on the very topic at hand.
Many thanks - I look forward to viewing that when I'm back with a normal internet connection. :)

Diagnostic
6th Apr 2012, 00:30
@Hamburt Spinkleman,

Hi,

Inadequate, wrong, mis-handle, in-correct. What in report No. 2 do you see that warrants those terms?
I believe I have answered this during my reply above to PJ2 - the short answer is page 51 onwards in that report. However if you disagree, I'm happy to again quote specific examples from the report, to explain my choice of words.

Or do you see this part of the BEA report as showing correct UAS procedures were followed in all 13 cases? Or were completely followed in even one case?

Old Carthusian
6th Apr 2012, 00:49
Diagnostic
I am afraid we are still faced with the question of why? It does still come down to the individual crew. It is something that I learned flying replica biplanes (note that I have never flown big transport aircraft, but I feel what I learned has some relevance): know your machine. Know your drills. There is no escape from this. The crews who didn't initially recognise UAS were still able to successfully deal with the problem. One crew (AF447) wasn't, and followed a totally inappropriate behaviour pattern. Evidence indicates that the safeguards expected in a transport aircraft were not utilised but were for some reason ignored. This is, I am afraid, a crew issue - not a machine issue. It also relates to this particular crew, not the others. I would suggest that reading some of the Korean Airlines accident reports would be productive. They are different accidents but the cultural parallels and CRM failures are instructive and one can see a bearing on this accident. We have to be very careful in trying to find a 'hard' solution when the cause may well lie in the 'soft' factors.

Diagnostic
6th Apr 2012, 02:23
@Old Carthusian,

Hi,

As with my reply to PJ2 I've tried to reduce the quotes a little, but if you think I've destroyed the context, then I'm sorry and please point out what's wrong.

I am afraid we are still faced with the question of why?
Why what specifically? I think you're asking "why didn't the AF447 crew follow the UAS procedure", but I'm not sure if you're asking a bigger "why"? Sorry if I'm missing something obvious. I'll assume you're referring to the UAS procedure question here, rather than "why the zoom climb" etc.

It does still come down to the individual crew.
Do you mean it's always an individual crew decision, or it was only a problem with the AF447 crew, or ...? Sorry, again, I can't grasp your specific meaning. :(

It is something that I learned flying replica biplanes (note that I have never flown big transport aircraft but I feel what I learned has some relevance). - know your machine. Know your drills. There is no escape from this.
I completely agree that this should be the objective. However, are we expecting too much of pilots, to be both pilots and flight engineers? With aircraft of the complexity of the A330, the "know your machine" mantra, while it remains the objective, is impossible (realistically) with the same depth as you know your biplanes. The more complex the machine, the more ways it can go wrong, or at least, behave "unexpectedly". For example, just remember how many pilots here were unaware of the stall warning being disabled under 60 knots IAS.

The crews who didn't initially recognise UAS were still able to successfully deal with the problem.
As I said to PJ2, I can't agree with that as being an acceptable result, meaning we should just blame the AF447 crew and stop looking deeper. From reading that BEA report, it looks to me that controlled flight sometimes continued in spite of and not because of what some of the 13 crews did (e.g. premature AP reconnection, with incorrect airspeed being used). I don't class that as "successfully" dealing with the problem by any measure - except that they didn't crash (see my previous comments on that).

One crew (AF447) wasn't and followed a totally inappropriate behaviour pattern.
I agree about their behaviour, but they are not the only crew to fail to identify the UAS.

Evidence indicates that the safeguards expected in a transport aircraft were not utilised but were for some reason ignored. This is, I am afraid, a crew issue - not a machine issue.
I politely disagree that it is so clear-cut. If you make the machine complex enough, and add in human imperfections, then you could get a man/machine interface which will be OK for some people, some of the time, and fail to "get through to" different people or at different times. IMHO that would be, in part, a machine (design) issue.

To suggest that this is (only) a crew issue implies that you believe the machine is perfect. And yet a UAS situation was reportedly not identified at all by 4 out of 13 other crews. Don't you think that might be pointing to it being too difficult for typical crews to reliably recognise a UAS, using the current recognition method being taught?

It also relates to this particular crew not the others.
I don't understand exactly what "it" refers to in that sentence, so I can't comment.

We have to be very careful in trying to find a 'hard' solution when the cause may well lie in the 'soft' factors.
I am not trying to find a "hard" (i.e. systems) solution - sorry if you thought that I was, as I can't have been clear enough. The "soft" (human) factors clearly played a large part when looking at the whole crash sequence.

I'm suggesting that it is possible to mitigate some inevitable "soft" (i.e. human) factors (e.g. no human is perfect; we all have circadian rhythms & limited attention spans etc. etc.) by improving some systems behaviours, to better support the pilots when things go wrong (i.e. tell them clearly about a UAS event - don't leave them to work it out from hints). That is in addition, of course, to extra training, more hand-flying for the crews etc. etc.

Organfreak
6th Apr 2012, 03:00
As Diagnostic said,
I politely disagree that it is so clear-cut. If you make the machine complex enough, and add in human imperfections, then you could get a man/machine interface which will be OK for some people, some of the time, and fail to "get through to" different people or at different times. IMHO that would be, in part, a machine (design) issue.


Yes. The original idea that set OC off on this vector was the suggestion that UAS be an aural warning. No, they "shouldn't need" that, BUT!

If I'm sitting in the back doing my knitting, I want every possible chance of surviving any unprepared pilot's mistake. If they didn't recognize or respond correctly to UAS, maybe they would have done if the airplane had shouted at them.

Back to Swiss cheese: should we not plug every possible (known) hole in it? As someone already pointed out, the closing of any one hole in the Swiss you-know-what may have prevented this horrible crash. If it costs $$$, well, I don't really give a :mad:.

chrisN
6th Apr 2012, 03:44
I have another wild suggestion: when the aircraft gives up flying itself because of UAS, instead of an audible alert (we know audible alerts don’t always get through, such as the stall warning, when the crew is in cognitive overload), how about intermittently clearing the glass cockpit of other stuff (which for the AF447 crew did not help at all) and putting up a big message: “You have UAS. At this height, use power and pitch” (or in other appropriate circumstances: “Use memory items and QRH”).

(I say another wild idea, because of my post on 1.11.11, post 596 on final crew conversation thread, page 30:

Two psychological factors are still open, and I see no easy way to overcome them, nor have the experts here put forward solutions that I have seen:

Highly stressed people can be oblivious to audible warnings. What has been described as the “cavalry charge” happened when the FOs were handed control manually which they had never practiced and in circumstances they didn’t understand, or agree about (PNF showed some sign of awareness);

And the reason I followed this from the outset through all threads – when a stressed pilot forms the wrong conclusion, he/she tends to stay with it regardless of ineffective attempts to correct the wrong problem. I have seen this in my field (gliding safety and accident analysis) – only test pilots, or rare individuals, can keep a clear head and systematically fault find.

[snip]

A wild suggestion . . . [snip]


After the system gives up and hands a basketful of trouble to the pilots to hand fly their way out of it without any training (or only inappropriate training), the “system” should still know enough to recognise that it then stalled and stayed stalled, even when (it thought) speed fell below 60. How about, for one second out of every 4, the glass screen blocks out everything else and displays (a tiny sketch of the timing follows at the end of this post):

” STALL! You are staying stalled! Get out of it!”

Would it be beyond the wit of man to even devise a “computer knows best mode – it will recover as the pilots have not realised” before it’s too late?

Told you it was wild.)
----------------
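
As mentioned above, a tiny Python sketch of the one-second-in-four timing - purely illustrative, with no claim about how a real display system would or should implement it:

FLASH_PERIOD_S = 4.0   # assumed cycle length
FLASH_ON_S = 1.0       # assumed time the message is shown each cycle

def display_content(time_s, still_stalled, normal_page):
    # Return what the screen should show at time_s.
    if still_stalled and (time_s % FLASH_PERIOD_S) < FLASH_ON_S:
        return "STALL! You are staying stalled! Get out of it!"
    return normal_page

for t in [0.5, 1.5, 4.2, 6.0]:
    print(t, "->", display_content(t, still_stalled=True, normal_page="PFD"))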

Old Carthusian
6th Apr 2012, 04:01
Diagnostic
To clarify my rather hasty response - the point I am trying to make is that a change in the interface is not necessarily going to result in a future avoidance of this kind of accident. It is rather a measure which may well be useful but cannot take the place of training, CRM, using SOPs and a proper culture in the airline. If you recall, the crew of AF447 ignored the stall warning - what guarantee do you have that they would have paid any attention to a UAS warning? The evidence of their actions suggests that it may well have been a waste of time. I referenced Korean Airlines. One of their accidents involved a captain ordering his co-pilot to tear out the stall warning klaxon because it was bothering him. The crew of AF447 ignored the stall warning. Now to touch on the point of the other events - all resulted in recovery and a return to normal flight. The outcome is the important thing here, not necessarily the process. But even if a warning system is devised this is not a rapid process - it needs careful consideration and testing so that it can be properly deployed.
I understand that you would like to avoid the fact that this is a crew issue - I would too but one has to be honest and look at this issue dispassionately. It is possible that Air France have been developing a far too casual culture with respect to safety and this is more of a concern than the existence of a UAS system. There have been a number of worrying incidents of which AF447 was the worst which suggests that this is the case. I am also a little disturbed by your comments on 'knowing your machine' - it is imperative that a professional tries to know as much about his aircraft as he can. Not every detail but he at least knows how to use SOPs and does use them.

Lyman
6th Apr 2012, 04:14
Perhaps a different word, then. Strictly speaking, it is not a WARN, anyway, it is a STATUS. A CUE.

The computer is sampling three sensors continuously, and when they are out of limits, the computer fails different combinations.

Giving the pilots a heads up that the computer senses trouble is not a WARNING.
As I say, it is a STATUS/REPORT.
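
As a toy illustration of that kind of continuous three-source comparison (thresholds and timing invented here; real ADR monitoring is far more involved):

DISAGREE_KT = 16.0          # assumed pairwise disagreement limit
CONFIRM_SAMPLES = 10        # assumed confirmation time, in samples

def speeds_suspect(cas_history):
    # cas_history: list of (cas1, cas2, cas3) samples, most recent last.
    # Returns True once the last CONFIRM_SAMPLES all show a pairwise split.
    recent = cas_history[-CONFIRM_SAMPLES:]
    if len(recent) < CONFIRM_SAMPLES:
        return False
    def split(sample):
        a, b, c = sample
        return max(abs(a - b), abs(a - c), abs(b - c)) > DISAGREE_KT
    return all(split(s) for s in recent)

# One probe icing over while the other two still agree:
history = [(275.0, 274.0, 276.0)] * 5 + [(275.0, 274.0, 180.0)] * 10
print("SPEEDS SUSPECT" if speeds_suspect(history) else "SPEEDS OK")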

At some point, 447 lived and died with the duff AIRDATA. The Pilots did not know until 17 seconds after the autopilot quit, whilst the a/c was maneuvering.

What would have happened if the computer had flashed SPEEDS/FAULT? Forget this crew, what of the next one?
The condition was ill addressed at the time. There are no excuses three years on.

PJ2
6th Apr 2012, 06:52
Hi PJ2,

Thanks very much for your comments all through these threads, and for the opportunity to discuss. I've tried to minimise the quotes, while still (hopefully) keeping the context - if you feel this has distorted things, then sorry & please correct me.

Quote:
Originally Posted by PJ2
On an increase in automated responses, I can understand the logic of such an argument (the BUSS relies upon this logic), but what concerns me from a pilot's p.o.v. is long-term reduced situational awareness and the need for in-depth understanding of high-altitude, high-Mach No. swept-wing flight, (old fashioned "airmanship", I guess) because it is still humans who are doing the piloting.

I do understand this p.o.v. and as I said, I'm not yet convinced about totally automated responses, but at least an explicit UAS warning seems (with hindsight) a clear improvement, doesn't it?

After all, if there is an engine fire, the systems (I don't know if it's the FADEC or others) detect the excessive temperature and alert you, as the pilot, to that specific problem. (I flew several of these in a B737 simulator, some years ago - that bell gets the heart racing :) ). The system does not just say "Hey, something is wrong - I know what the problem is, but you have to work it out from some gauges on the panel - and hurry up!". Why give the crew a specific fire warning (or low fuel warning, or any of the other warnings where the system highlights the specific issue), and not give the crew a specific UAS warning?

Put in this way, possibly. It took until the industry had microprocessor power for there to be an explicit indication of a straightforward engine flameout/failure. Up until glass, the most significant failure on an aircraft, and one which we practice every simulator session, had no warnings - one just knows the failure by the noise if catastrophic, by the behaviour of the airplane and by examining the engine indications for fuel flow, oil pressure/temperature, N1/N2/N3 rotation, high vibs and so on.

These are good arguments for solid indications and computerized drills which monitor crew actions. The Airbus has ECAM drills for engine failures, fires and severe damage. For the longest time, an engine failure was an aircraft performance system-based failure of which, it had been taken for granted, a crew would recognize and handle using the established, trained, checked memorized, QRH or ECAM/EICAS procedures as well as standard CRM communications techniques.

Not to re-argue the matter, but we know that none of these crew actions were done with AF447 and for this reason (which I have elaborated upon in earlier posts), I concur with O.C., that this is primarily a performance/crew accident. There certainly are training-and-standards issues here as well, and there are airmanship, system knowledge and CRM issues. The HF Report will, (and should) be thick and deeply researched.

They did not respond to the ECAM, nor to the stall warning, so would they have responded to a voice, aural or visual warning, or a combination of all three? The question is open and certainly discussable.

In your B767 example a few posts ago, is it sensible (and optimal) to make the crew "jump through the mental hoops" to try to work backwards from the "Rudder ratio" EICAS caution, to the underlying UAS event?

In a word, yes. In fact the failure confirmed our initial assessment that we had an airspeed indication problem because other systems were responding. Fortunately it was only the one pitot-static system and we knew the architecture of the standby system, (and so trusted it...I'm not sure I'd trust an ISIS in the same way.)

Quote:
Originally Posted by PJ2
I offer this view out of a concern for what remains inexplicable, and that is the instant decision to pitch a transport aircraft up at such high pitch-rates (increasing 'g'-loading to 1.55g) to such high pitch attitudes and keep the aircraft there.

Completely agreed - it's currently inexplicable, due to the lack of justification/explanation voiced by the PF. As I think several here have already said, the human factors part of the final report will make interesting reading, but it may be more like "educated guesswork" in this area, than any of us would want.

However, if the PF had correctly announced and followed the UAS procedure, then they would both have been focussed on the 5 degree pitch target instead, wouldn't they - at least possibly?

Or focussed on maintaining what are obviously successful pitch and thrust values until they take a moment to slow down and gather thoughts. Nothing in transport flying requires immediate action except the rejected takeoff, a TCAS, GPWS or Stall warning - taking the time to sort things out permits the mind to "re-discipline and re-focus" itself from "cruise flight and the next waypoint" to an abnormal or emergency procedure, then the PF calls it and they get on with the drill or checklist. It is absolutely standard cockpit discipline, period.

Quote:
Originally Posted by PJ2
I would be interested in either data or an argument that this indicates an interface problem, for, as you are, I am open to any information that shows that normal training and SOPs for this event are inadequate in some circumstances and because of obscurity are best left to automated responses.

As I see it, BEA Interim Report 2 page 51 onwards provide evidence for either:

a) too difficult to recognise UAS via the existing interface, or/and
b) insufficient training to recognise UAS via the existing interface.

IMHO these are related - the less obvious the interface to report a UAS (and to also encourage that the UAS procedure should be followed) to the crew, the more training, skill, concentration, ongoing crew practice will be needed. Or do you have a different view?

No, I don't have a different view, and I see training as the answer to many of these issues which crop up. I was dismayed when I watched recurrent PPCs drop to 3hrs, and equally dismayed when they went to 18-month periods instead of 12-month periods. I don't know what the standard is today, but these are corporate bean-counting decisions made by non-flyers, and fighting them in today's environment is very challenging (as in "show us where this is a problem").

More details below...

Quote:
Originally Posted by PJ2
In re your observation, "Several other crews did not recognise & handle UAS correctly.", I don't recall specifically where there were untoward outcomes due recognition and handling issues with other crews in other events but again am open to new information.

Agreed that I've seen nothing regarding untoward outcomes in that BEA report, but IMHO that's not what the metric being measured should be.

I concur, but only in part. Sometimes nothing happening when an abnormal occurs means something. It is part of this discussion which, as with other significant issues, this industry is having. The other fascinating discussions we know about... whither automation? Whither ultra-long haul and fatigue issues? Whither instrumentation and presentations? etc.

Quote:
Originally Posted by PJ2
[...] to my recollection, (and I have been wrong on more than a few things before!), the UAS events haven't been problematic as most crews "did nothing" and the airspeed returned within a minute or less.

That is my understanding too (apart from duration - BEA mention up to 3min 20sec of continuous invalid speeds). But consider what we learn from the BEA report about the various crews lack of following UAS procedures, and what that means about the chances of a potentially different outcome next time.

As I understand it, one of the reasons for crew procedures is precisely to prevent different outcomes depending on crew, time of day, visibility, and all the other variables which a crew has to deal with. Once we see lack of adherence to procedures, don't we get closer to the chances of "bad things" happening? That has been my experience, both with flying and with other highly-controlled situations.

Of the 13 UAS events where the BEA had sufficient detail to know what the crew did / did not do:

"Four crews did not identify an unreliable airspeed"

and

"For the cases studied [which I interpret as being all 13 cases] the recording of the flight parameters and the crew testimony do not suggest application of the memory items in the unreliable airspeed procedure:
* The reappearance of the flight directors suggests that there were no disconnection actions on the FCU;
* The duration of the engagement of the Thrust Lock function indicates that there was no rapid autothrust disconnection actions then manual adjustment on the thrust to the recommended thrust;
* There was no search for display of an attitude of 5°."

So as I read it, all 13 crews "got it wrong", to a greater or lesser extent, with a third of them (4 out of 13) failing to do any UAS procedure, and all 13 failing to do the memory items. Isn't that just a time bomb waiting for a crew to get things badly wrong in the future, when presented with an unrecognised UAS at the "wrong time" (sleepy, poor CRM, "startle factor", etc.)? If they get distracted trying to diagnose a non-existent instrument fault (which is really just a temporary UAS), couldn't that potentially lead to another AF447-like event? IMHO, based on reading other accident reports where distraction was a factor - yes.

Using your comments, I will re-read IR2's relevant sections. I have to admit that I don't recall finding these comments, but quite frankly every time I read these three reports I find something new. I'd like to think it isn't me but...

Quote:
Originally Posted by PJ2
The ability to "look through" the automation and decide for oneself what the airplane is doing, what it needs and why, is being lost because it is being supplemented and when supplements occur, practice and therefore skill, then thinking and knowing atrophy

I understand that concern, and I would much much prefer ATPL pilots to be better trained, better paid and kept highly-skilled.

Pay well and good people will come. Infatuation with technology, and here with "automation", is almost always based upon the wrong reasons - money (as in one less crew member) rather than utility, safety and reliability. Computers remove people from direct contact with things because they model the world so well. In aviation, that model is pretty good and serves us well providing we can ignore it without result. Either in the cockpit or the boardroom, once direct contact "with the environment" is lost, the potential for uninformed tactical decisions (mistakes) increases, although one usually finds that the priority of financial decisions may be enhanced.

However, are you saying that aircraft system designers shouldn't help flight crew by giving an explicit warning for UAS, even though the systems know that there is "just" a UAS event (which has a procedure to follow) and not some other instrumentation fault (which needs to be investigated, diagnosed, coped with, etc.)?

No, I'm not saying that - not because such indications aren't in some way required, but because the industry has determined that engine failures, for example, "don't need explicit warnings" (they broadcast failure in other ways). I think there is a strong argument for some kind of indicating system which sorts out for the pilot the variations of failure which are possible - right now they are in the FOM. (Around thread five or so someone posted a very good page from an A300 FOM showing the varying effects of a blocked pitot, a blocked pitot drain hole, and variations on blocked static ports, etc.) We know the airspeed indicator acts as an altimeter if the pitot tube is blocked but the static port is open. A clear way of assessing such a problem through CRT graphics and commands would be a better option than just a warning of "UAS" in cruise. It has also been suggested that better ways be found to derive airspeed from GPS for such failures.
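To make that "sorting out" concrete, here is a toy sketch in Python of how the textbook pitot/static failure signatures from such an FOM table could sit behind a graphic aid - emphatically not anything Airbus or any FOM actually implements; the function name, inputs and wording are invented purely for the illustration.

# Toy illustration only - NOT any certified avionics logic or Airbus/FOM algorithm.
# It encodes the textbook pitot/static failure signatures an FOM table lists,
# to show how a "sorting out" aid might present them; names are invented.

def classify_pitot_static_symptoms(ias_tracks_altitude: bool,
                                    ias_bleeds_to_zero: bool,
                                    altimeter_frozen: bool,
                                    vsi_frozen: bool) -> str:
    """Map an observed symptom pattern to the classic textbook failure mode."""
    if altimeter_frozen and vsi_frozen:
        # Static ports blocked: altitude/VS freeze, ASI mis-reads as altitude changes
        return "Suspect blocked static ports - use alternate static / known pitch & power"
    if ias_tracks_altitude:
        # Pitot ram and drain both blocked, static open: ASI behaves like an altimeter
        return "Suspect fully blocked pitot (ram + drain) - airspeed acting as altimeter"
    if ias_bleeds_to_zero:
        # Ram inlet blocked, drain open: indicated airspeed slowly decays toward zero
        return "Suspect blocked pitot ram inlet with open drain - IAS decaying"
    return "No classic single-blockage signature - treat as unreliable airspeed, fly pitch & power"


# Example: in a climb the IAS rises with altitude while altimeter and VSI look normal
print(classify_pitot_static_symptoms(ias_tracks_altitude=True,
                                      ias_bleeds_to_zero=False,
                                      altimeter_frozen=False,
                                      vsi_frozen=False))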

As with the AoA gauge, none of this adds any safety if it isn't trained and checked, and for that it also has to be "STC'd" and certified by the FAA and (possibly?) regulated?

I have a view about how an automated response might be considered, in a way that still keeps the crew "in the loop", but I'd like to initially focus on giving explicit UAS warnings (to try to drive the following of UAS procedures).

I have always loved automation because it is indeed a tremendous innovation which makes aviation much safer. But it is not the third pilot, unless one treats automation in the same way one uses CRM with one's crew members. One ought to be able to "look through" what the autoflight system is doing and disconnect and hand-fly if one doesn't like it. But I had crew members who would refuse to hand-fly and weren't confident in disconnecting the autothrust. I thought it was a sad thing to admit.

Quote:
Originally Posted by PJ2
I have had kindly pointed out to me a recent conference at the Royal Aeronautical Society entitled, "The Aircraft Commander in the 21st Century". There is an excellent videoed presentation (http://media.aerosociety.com/aerospace-insight/2012/03/23/evolution-and-future-of-the-flightdeck/6566/?utm_source=The+Royal+Aeronautical+Society+e-communications&utm_campaign=23592cb938-20120329_RAeS_HTML_Newsletter_Mar_12&utm_medium=email) from this conference by Captain Scott Martin, (Gulfstream Experimental Test Pilot) on the very topic at hand.

Many thanks - I look forward to viewing that when I'm back with a normal internet connection. :)

It is a pleasure engaging in this kind of discussion. ;-)

Mac the Knife
6th Apr 2012, 11:48
"Hi Guys, sorry to bother you but my airspeed sensors are momentarily unreliable and so autopilot and autothrottle will disconnect."

"Flight data was nominal when this happened and SOP in this situation is to maintain appropriate pitch and power while we sort it out."

"Would you like to do that yourselves or shall I take care of it?"

Seriously, wouldn't that (or a more formal equivalent) have been a more helpful introduction to the situation than a flashing UAS alert & prompt disconnect?

After that, the "startle" response, inadequate training, mode confusion, poor CRM and other factors seem to have led to the fatal outcome.

gums
6th Apr 2012, 18:47
Check this out:

http://www.nytimes.com/2012/04/04/business/a-satellite-system-that-could-end-circling-above-the-airport.html

So crew becomes more and more and more of a monitor. But who does what when data is lost or is unreliable?

I can just see Hal transmitting to others, "Our GPS data is unreliable and we're handing the jet over to Dave".

Years ago the whiz kids wanted a ground-avoidance feature built into our Viper FBW system. The thing would pull us out of a dive when close to the ground. We humans fought the idea and explained that there were conditions when we might violate the coded criteria. Maybe press a bit to ensure a good hit, or maybe we were trying to get real low for escape and evasion.

To appease the whiz kids we agreed to a big flashing "X" on the HUD. A year later one really aggressive pilot flew right thru the big flashing "X" and augered. At least he didn't have 200 SLF items with him.

Mr Optimistic
6th Apr 2012, 19:12
PJ2, thanks. You are generous with your time. Much appreciated.

RR_NDB
6th Apr 2012, 20:30
Hi,

In the 447 threads I found surgeons, anthropologists, organists and many others - i.e. not just "technically oriented minds". This was a surprise to me.

Proactive technicians (I include myself) prefer to "anticipate" rather than be caught by surprises.

Surprises coming from her sometimes (perhaps many times) challenge us. In my life I have had surprises from GF's, machines of many types (including GA and old birds), my wife and many women. :)

Surprises can be "controlled" by Redundancy (http://www.pprune.org/tech-log/454653-af-447-thread-no-4-a-31.html#post6547154).

I will "throttle back" my commenting on the man-machine interface in this thread, saying:

1) Considering the factual information we could access and the results of the high-synergy discussions we had.

2) Considering the likelihood that the 447 crew had some surprises in the last 4 minutes of their lives, working hard.

3) Considering that, despite all their efforts, they expressed some "surprise" about the SPEEDS.

4) Considering that they never were able to know the AoA while they were falling.

5) Considering that they expressed some surprise at the fact that their efforts didn't succeed.

6) Considering the surprise (to many) at the location of the 447 wreckage, so near the LKP.

7) Considering the surprise that the protected plane stalled.

8) Considering how fast the plane was doomed.

9) Considering our surprise at learning that they barely understood what really occurred.

10) Considering the surprise expressed near the end.

And last but not least, considering the surprising (though comprehensible) reaction here in the thread to an AoA indicator and to an (early warning) UAS indicator. And after basing my conclusion on an anthropologist's thought:

I would suggest considering:

1) A resource to provide EARLY WARNING of impending "factors" like UAS, AoA and also, perhaps, the "nearing" of REC MAX.

2) A study of the best way to implement the three resources above in a "man-machine interface" context. Whether it should be aural, a flashing display, redundant, etc. must be determined by R&D on that (IMO important) issue. We can only guess here. It is not our "problem".

Having said that, I appreciated the comments from Chris Scott, mm43, Bear, MB, gums, PJ2, OC, Of, chrisN, rgb, jcjeant, CONF iture, lomapaseo, A33Zab, OK465, BOAC, bubers44, HN39, CJ, safetypee, DW, Linktrained, rh, TD and recently Diagnostic, and from some others, including via PM channels. I took all comments into account.

I frankly think there are some important issues to be discussed on the "interface", so some time ago I considered starting a thread on the issue.

Having tried to show some points I consider relevant to 447, I will now concentrate on them in the Man machine interface and anomalies thread (http://www.pprune.org/tech-log/481350-man-machine-interface-anomalies.html).

It will be another surprise if the final report doesn't consider the influence of this A/C interface on the HF aspects when addressing the "surprising" attitudes of the PF, PM and even the Captain on the last flight of F-GZCP.

My interest is in "safety" and my only agenda here on PPRuNe is to try to contribute to this important objective: Aviation Safety.

Through a minimum of surprises :} - always when possible.

Organfreak
6th Apr 2012, 20:58
RR_NDB:
In the 447 threads I found surgeons, anthropologists, organists and many others - i.e. not just "technically oriented minds". This was a surprise to me.

I am the guilty organist as mentioned above. However, I indeed do have a "technically oriented mind," since I am also an electronic and mechanical technician in the service of repairing old Hammond Organs. They are extremely complicated (but old-school) devices. I was also a theatrical lighting designer for thirty years, an extremely technical art. But enough about me. :O

TTex600
6th Apr 2012, 22:05
This topic advanced far beyond my expertise long ago (I just drive the things), therefore I ceased commentary. I still have nothing to add to the technical aspects some have discussed, but would like to add this for your consideration: Airbus training is focused on the wrong targets when considering aircraft control. One is judged by his/her knowledge of the protections - with little or no emphasis placed on degraded modes. I would be willing to make a wager that this ill-fated crew could do a quite nice job of describing the limits of protection, but had little idea of the capabilities of degraded flight laws and had little training on abnormal ops pertaining to degraded flight laws. If any person of influence is reading, I would like to request that the training focus be changed from "it protects you and flies like any other airplane" to "you need to know how to deal with it when the automation fails you". I would also suggest that the regulators include UAS procedures on type rating checks. They could just remove one of the numerous instrument approaches and replace it with the UAS drill. (On type rides and PCs, we spend hours droning around watching the autoflight system perform redundant approaches - which proves little more than our ability to push the approach button.)

Hamburt Spinkleman
6th Apr 2012, 22:24
Diagnostic,

In interim report no. 2, in the section on previous unreliable airspeed events and in particular the 13 events that were examined more closely, I see a factual listing of the technical effects and of the crew's handling/actions and nothing more. I see no judgment on whether the crew's actions were right or wrong and I see no intent of the BEA to infer any.

I think you are reading too much into it and I see no basis for characterizing the handling of those 13 events as inadequate, wrong, mishandled or incorrect.

Diagnostic
6th Apr 2012, 22:45
@Old Carthusian,

Thanks for the clarification. I'll reply to your points out-of-order as you introduced an important point later in your reply:

I understand that you would like to avoid the fact that this is a crew issue
In that case, unfortunately you misunderstand me - I'm not trying to avoid anything. I don't know how much clearer I could be about my views on this, than the last 2 paragraphs of my previous reply to you. I'll try one more time...

The crew clearly made mistakes (as you have said); many of the exact causes for those mistakes we don't (and never will) know for sure (as we can't ask what they thought at the time) although I sincerely hope that the BEA HF group can add a useful interpretation (i.e. educated guess) of the limited available data, in the final report.

However, I believe that simply saying "this is a crew issue" and not looking deeper for likely causes of incorrect crew behaviour, and then fixing those causes, would be doing a disservice in trying to prevent another tragedy. One area which seems relevant to me, and where we have evidence of other crews' behaviour for comparison with AF447, is UAS recognition, and that's where I have been specifically focussing in my recent comments, when this subject recently re-surfaced.

Of course UAS is not the whole story for AF447, but UAS is where things started to go wrong for them (i.e. they responded with a zoom climb instead of flying pitch & power), so IMHO it deserves some focus. In the past, several professionals here have kindly contributed that their airlines are improving training of high-altitude UAS. But why limit the improvements to training, when the aircraft could also give a less obfuscated indication of UAS? Don't we want the pilots to receive clear warnings, to encourage the recognition of UAS and hence increase the likelihood that they would then follow the UAS procedure?

To clarify my rather hasty response - the point I am trying to make is that a change in the interface is not necessarily going to result in a future avoidance of this kind of accident.
Very true - I can't (and won't attempt to) prove that a specific UAS warning would result in future avoidance, and you can't prove the opposite. :) However, on balance, the widespread problems shown by the BEA analysis of those 13 UAS events make me believe that there is a systemic problem in this area, and since the PF performed his "zoom climb" instead of following the UAS procedure, had he followed that procedure instead we might not be having this discussion at all. :)

It is rather a measure which may well be useful but cannot take the place of training, CRM, using SOPs and a proper culture in the airline.
I completely agree with you (I bet you never thought I'd say that :) ). All those things are also needed. My point is (as Organfreak and RR_NDB have also said), why not try to reduce or remove all relevant holes in the swiss cheese? Even you have listed multiple topics in your comment above - so we're agreed that this is not a "fix one thing and it'll never happen again" accident, therefore why stop at the obvious human factors? The man/machine interface needs to be designed to communicate clearly with humans who are having a bad day, or just back from their holidays, or in the low part of their circadian rhythm, or ... All pilots are human, even the best. :)

If you recall the crew of AF447 ignored the stall warning - what guarantee do you have that they would have paid any attention to a UAS warning?
That's a very interesting topic, so I'll tell you my current hypothesis about why I think they (especially the PF) ignored the stall warning in AF447 (dysfunctional CRM may have prevented the PNF from voicing his opinion, even if he didn't want to ignore the stall warning). But first, you are asking for a guarantee - that's unreasonable. := I could ask you for a guarantee that they wouldn't pay attention to a UAS warning, but I won't do that because it's an unreasonable thing for me to do and it's impossible for you to guarantee that either. So let's not ask for guarantees and instead be open-minded to possible improvements, OK?

My current hypothesis is that the UAS situation was not recognised as being specifically that (especially not by the PF; I'm unsure about the PNF), and instead they believed they had a multiple instrumentation problem which needed to be diagnosed from square one - as well as the PF having to hand-fly at high altitude in turbulence and Alt2. From that misinterpretation of the starting point, they couldn't make sense of the different (and varying) IAS readings as relating to a single failed component (because there was no single failed component!), and kept trying to understand their readings, which then became difficult to fit onto a mental model once stalled (even though all 3 were consistent eventually), as they don't train for being fully stalled.

Therefore my hypothesis is that the stall warning was being deliberately ignored as they (especially PF) thought it was a malfunction, as part of the same instrumentation problem which was affecting the IAS.

If they hadn't "gone off at a tangent" trying to diagnose what was a temporary UAS, and had instead received a clear warning from the aircraft like "This is a UAS situation, all my pitot probe pressures are different so I have to disconnect the AP - recommend you fly pitch & power which for this alt is X/Y", then would the zoom climb and all the subsequent problems still have happened? Neither of us knows the answer, but anything which stopped that zoom climb from being done would have been an improvement over what actually happened.

So that's my current hypothesis. I could be wrong (partly or completely) - we'll never know for sure either way, although I'm happy to be guided by the professionals here and the HF part in the final BEA report.

Now to touch on the point of the other accidents [I think you mean the other 13 UAS events in the BEA report??] - all resulted in recovery and a return to normal flight. The outcome is the important thing here not necessarily the process.
That's clearly your view, as you've said it several times. I disagree and instead believe that the process is at least as important as the outcome. After all, how many of those other pilots are just "a bad day" away from mis-identifying a UAS, and doing something else which is dangerous? Unless you present some compelling evidence that not following the UAS procedure is safer than following it, then I don't see me changing my view, although I'm happy to seriously consider whatever is causing you to dismiss the importance of following the UAS procedure.

But even if a warning system is devised this is not a rapid process - it needs careful consideration and testing so that it can be properly deployed.
I completely agree - but I don't see those as reasons not to start the ball rolling on investigating this, especially as other problems which you have highlighted (e.g. airline culture) may take even longer to improve. :)

Gaston444
7th Apr 2012, 00:01
Quote, Mac the Knife:

"Hi Guys, sorry to bother you but my airspeed sensors are momentarily unreliable and so autopilot and autothrottle will disconnect."

"Flight data was nominal when this happened and SOP in this situation is to maintain appropriate pitch and power while we sort it out."

"Would you like to do that yourselves or shall I take care of it?"

Seriously, wouldn't that (or a more formal equivalent) have been a more helpful introduction to the situation than a flashing UAS alert & prompt disconnect?"
---------------------------------------------------------------------

-That does seem way more like it...

I find it amazing that the Autopilot just dumps everything on the pilot suddenly, but there is no reminder that there is no longer any stall protection or the other "Normal Law" limiters...

And to top it off, the "Normal Law" limitations do not come back on when the airspeeds agree again...

Also, the poor visibility of the out-of-the-way sidestick, combined with the lack of synchronization of its movements with the other side, clearly makes matters even worse when it comes to clarifying the situation...

This is reminiscent of the partial roll Autopilot-disengage feature of the Airbus Autopilot (after roll is applied for x seconds), which disengages the Autopilot for roll control only - a previously little-understood feature apparently devoid of much warning, which killed that Russian captain who had his kid sitting at the controls (and all the passengers)...

I also think that having some other completely independent speed indicator, like a GPS, could be a back-up of last resort to help form a mental picture of what is really going on (regardless of the mental gymnastics of correcting the GPS value)... Here the system is very "brittle", because even the control tower will relay information from the same faulty source if the aircraft's pitot fails! That killed a bunch of people as well...

It is very hard to form a mental picture of what is going on, if you suddenly have reason to be suspicious of the basic parameters of everything... At least with the GPS, you would have confidence the data is pointing you in the right direction...
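As a rough illustration of that back-up-of-last-resort idea - invented numbers and tolerances, not something any real aircraft does - such a cross-check could look like this:

# Toy sketch of the "GPS as a back-up of last resort" idea - invented numbers and
# thresholds, not a real system. Ground speed minus the last known wind component
# gives a rough true airspeed to compare with the pitot-derived one; the wide
# tolerance reflects how stale the wind estimate may be.

def gps_speed_sanity_check(gps_groundspeed_kt: float,
                           last_known_tailwind_kt: float,
                           pitot_derived_tas_kt: float,
                           tolerance_kt: float = 50.0) -> bool:
    """Return True if the pitot-derived TAS is roughly consistent with GPS + wind."""
    rough_tas_from_gps = gps_groundspeed_kt - last_known_tailwind_kt
    return abs(rough_tas_from_gps - pitot_derived_tas_kt) <= tolerance_kt


# Example: 480 kt over the ground with ~30 kt of tailwind suggests roughly 450 kt TAS,
# so a pitot-derived TAS that has collapsed to 250 kt fails the check.
print(gps_speed_sanity_check(480.0, 30.0, 250.0))  # False -> suspect the pitot side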

One poster here quite rightly pointed out that the first warning with blocked pitot tubes is sometimes the rudder ratio/overspeed warning, which requires making mental loops backwards to figure out what it could mean: this is just poor "interface ergonomics" (if that term makes sense)...

Someone said there was no incidence indicator on the Airbus, the most relevant data to aircraft behaviour, other than the "Stall" warning itself... I find that strange...

I think they need some mass-market designers to help design these cockpits and functions in a more intuitively useable way...

G.

RR_NDB
7th Apr 2012, 00:03
with little or no emphasis placed on degraded modes (http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-65.html#post7121854)

Degraded-modes training and a full comprehension of "what is involved" are important. Why?

The PF can suddenly be "inserted in the loop" and required to act precisely. Before acting, he must "understand".

Based on the factual info we have, the AF447 PF acted as if the plane was in imminent danger. And very early on he created a real imminent danger: a threat of stalling the airliner.

My concern with degraded modes is this: the performance of the "effective aircraft" (System + PF), especially during the transient, relies on:

1) A good understanding of the equipment characteristics. Intimacy with her. :)

2) A good interface to allow a fast (if possible, immediate) assessment of the problem

3) Good training (involving Pavlovian behaviour :) )

4) Good use of SOP, CRM, etc.

In this case we observed problems in all 4 items. And we had a fast degradation. Fast degradation is a real threat. Gen. Chuck Yeager faced this kind of threat. Challenger had it too (SRBs at low temperature).

Airline pilots (in commercial operation) seem not to be trained adequately for degraded modes - or if they are, only at some carriers.

The PF (probably through a lack of understanding of the first problem) seems to have contributed to an accelerated degradation.

And the System DID KNOW, before the AP and A/THR dropped, what had started around the time of the increase in noise heard on the CVR. Why not process this information and share it with the crew?

The paper by an Airbus SAS designer mentioned in an earlier post says you need to scan to detect UAS. That sounds like not giving the crew useful "insider information". :}

This information can be valuable. And it could save precious seconds, allowing a better response.

The reason I included item 2 is that it was possible for Airbus SAS to create a resource (a UAS data processor) acting before System degradation. (Through a single chime? A double one?)

Especially after 30+ UAS cases, and while STILL using OBSOLETE AS sensors - AS being an important parameter (for the stability of the System).

And additionally because the System is (still today) operating without true redundancy (because we don't have the required AS probes): they can simply fail SIMULTANEOUSLY.

Why didn't they? Diagnostic presented this important question. (http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-63.html#post7118683)

I am still studying the possible reasons for this.

Diagnostic
7th Apr 2012, 00:13
@Organfreak,

Hi,

If they didn't recognize or respond correctly to UAS, maybe they would have done if the airplane had shouted at them.
That's exactly my view - maybe they would have responded correctly, since such a warning reduces the ambiguity about what is being recommended to them. For me, as with all the accident reports I have read over the decades, it's about...:

Back to Swiss cheese: should we not plug every possible (known) hole in it? As someone already pointed out, the closing of any one hole in the Swiss you-know-what may have prevented this horrible crash.
Exactly :)


@chrisN,

Hi,

I have another wild suggestion; when the aircraft gives up flying itself because of UAS, instead of an audible alert (we know audible alerts don’t always get through, such as the stall warning, when the crew is in cognitive overload), how about intermittently clearing the glass cockpit of other stuff (which for AF447 crew did not help them at all) and putting up a big message; “You have UAS. At this height, use power and pitch” (or in other appropriate circumstances: “ Use memory items and QRH”)
I understand your expertise gives you a different insight into human factors from mine (my own being commercial rather than aviation), so you could well be right - I'm not saying that the warning has to be delivered as an audible alert.

In my recent reply to Old Carthusian, I gave my alternative hypothesis for the PF's apparent ignoring of the stall warning. Even if you are correct that cognitive overload caused the stall warnings to be ignored (and I agree that this is plausible), a UAS warning on AF447 would have been delivered at least 5s before the first stall warning started (based on the CVR transcript), so it had a chance to be recognised first.


@gums,

Nice to see you back, sir

So crew becomes more and more and more of a monitor. But who does what when data is lost or is unreliable?
This "garbage in, garbage out" problem (as someone else described it!) is exactly why having an automated response to UAS is a larger topic, than having a clearer and more explicit warning about it, IMHO.

To appease the whiz kids we agreed to a big flashing "X" on the HUD. A year later one really aggressive pilot flew right thru the big flashing "X" and augered. At least he didn't have 200 SLF items with him.
Any warning can be ignored, as your example graphically demonstrates. :( My point is that if AF447 had followed the correct UAS procedure it would have meant no zoom climb, no zoom climb means no stall, and no stall means no crash.

What I find so interesting is that crew not following the UAS procedure is a bigger problem than just AF447. As I said in another reply, are we just "someone having a bad day" away from another crew not recognising UAS and so not following the UAS procedure, in as dangerous a way as the PF in AF447? Could a specific UAS warning reduce the chances of that behaviour? So far, I don't see a compelling reason why it couldn't, and every reason to think that it might.


@RR_NDB,

Hi,

Thanks for the links in your "surprises" posting - they were all new to me. PM to follow when I get a few minutes :)

P.S. I just saw your new posting a few moments ago - nice summary. :ok:


@Hamburt Spinkleman,

Hi,

In interim report no. 2, in the section on previous unreliable airspeed events and in particular the 13 events that were examined more closely, I see a factual listing of the technical effects and of the crew's handling/actions and nothing more. I see no judgment on whether the crew's actions were right or wrong and I see no intent of the BEA to infer any.
I politely disagree - these seem to be clear judgements to me:

"Four crews did not identify an unreliable airspeed"

(So 4 out of 13 UAS events went unrecognised by the crew; therefore, by definition, their actions were wrong, as those actions were missing!)

"For the cases studied, the recording of the flight parameters and the crew testimony do not suggest application of the memory items in the unreliable airspeed procedure:
* The reappearance of the flight directors suggests that there were no disconnection actions on the FCU;
* The duration of the engagement of the Thrust Lock function indicates that there was no rapid autothrust disconnection actions then manual adjustment on the thrust to the recommended thrust;
* There was no search for display of an attitude of 5°."

(So in all 13 UAS events which were examined in detail, specific parts of the UAS procedure were not followed - the lack of crews aiming for 5 degrees as a memory item seems particularly worrying and particularly relevant to AF447.)

There are several other places where you just need to compare what the crews did (as described by the BEA in IR2), with the actual UAS procedure showing what they should have done, and "join the dots".

I think you are reading too much into it and I see no basis for characterizing the handling of those 13 events as inadequate, wrong, mishandled or incorrect.
Unless you can show me where the 13 crews are described as correctly following the UAS procedure, then by definition, their actions were incorrect & wrong (and any other synonyms I have used), and so I consider my characterisation as justified by that evidence given by the BEA.

Please do provide me with your evidence that those 13 crews correctly & completely followed the UAS procedures (which would therefore contradict those BEA quotes above, wouldn't it?), and I'll happily reconsider my position.

TTex600
7th Apr 2012, 00:34
The paper of an Airbus SAS designer mentioned in one earlier post sez you need to scan to detect UAS.

One's instrument scan atrophies about six months into flying the bus. Or any other airplane with a two-axis FD and a nav display with a pretty green/magenta line.

safetypee
7th Apr 2012, 01:11
Re surprise see:- http://www.pprune.org/safety-crm-qa-emergency-response-planning/478368-monitoring-intervention.html#post7113960

Situation awareness is a central aspect of this and many other accidents.
The crew may have understood the situation but chosen an inappropriate action; alternatively, the crew failed to comprehend the situation and thus the action was incorrect.

See the ‘surprise’ reference and the problems of hindsight, where fundamental surprise can be biased towards situational surprise and thus hinder learning – safety improvement.

Classic ref for Aviation Decision Making:- http://www.dcs.gla.ac.uk/~johnson/papers/seattle_hessd/judithlynne-p.pdf

Old Carthusian
7th Apr 2012, 01:51
Diagnostic
Everything keeps on coming back to training, SOPs and CRM. Supposing the crew were surprised then training should kick in. A pause, a scan of the instrument panel (and remember the only instrument that was not reliable was the Airspeed Indicator). PJ2 also pointed this out - this was not a serious incident at first. However, the crew actions made it into a serious incident. It also seems that the PFs scan broke down almost immediately and that the PNF did not intervene sufficiently. So a UAS warning might not have made any difference. This is why I asked the question - what guarantee do you have that they would have paid any attention to a UAS warning? It is not that I am against adding such a warning but that the warning would not necessarily have made a difference to their response. The cockpit voice transcript indicates a very rapid 'over reaction' to the initial incident. Once again this is not indicative of an interface issue. There was no attempt to use the SOPs or to diagnose the problem. This is more indicative of flight crew problems - training should enable you to deal with the unexpected. It seems in this case it didn't. We know from the other UAS incidents that these are recoverable and here once again I stress - outcomes are important. Even the stall was recoverable but once again there was a deficiency in the approach and this time with a clearly audible warning. The warning didn't help. Unfortunately this is a crew caused accident with very little help from the machine. It touches on pilot training and airline culture and how these are carried out with automation.

RR_NDB
7th Apr 2012, 02:02
Hi,

Diagnostic:

Your diagnoses are being helpful. :)

You asked a question that is indeed difficult to answer: why was a UAS processor not offered?

Possibilities:

1) They consider the task a pilot responsibility

2) They prefer to be conservative, not taking risks

3) Management/communication issues (inside and outside Airbus SAS)

4) They never considered it, or it was not considered important

5) Other factors (cost, ROI, etc.)

IMO the (rare) conditions of this unique crash may lead them to consider AoA and even UAS "processing" as important. I hope so.

At least, both factors were (as it seems) undetected by an inadequately trained crew.

They were subjected to complex "inputs" - which required the creation of an HF study group in order to analyze the accident thoroughly.

As for the answer to the "why" you posed, I prefer to say it was a mix of the 5 items, perhaps with #4 as the most influential.

(*) Airbus SAS (and others) are processing UAS just for the System. :} The System is not fed with garbage. The pilots need to "process" any garbage through scan and brain. :mad:

The Thiells 727 and AF447 show that the GIGO "feature" of humans sometimes fails when "processing garbage".

CONF iture
7th Apr 2012, 02:22
Therefore my hypothesis is that the stall warning was being deliberately ignored as they (especially PF) thought it was a malfunction, as part of the same instrumentation problem which was affecting the IAS.
I do share that view, and part of the reason could lie in the OSV info on unreliable airspeed indication received by all AF pilots 6 months before 447 (third Interim Report).

http://i45.servimg.com/u/f45/11/75/17/84/af447_14.jpg (http://www.servimg.com/image_preview.php?i=139&u=11751784)


The STALL warning is mentioned in a way that suggests it does not deserve specific attention and could almost be disregarded (my own interpretation of the memo):
6. Spurious or persistent STALL warning
When you read the memo, it does seem normal to have to deal with a continuous STALL warning without any action being necessary ...

http://i45.servimg.com/u/f45/11/75/17/84/af447_15.jpg (http://www.servimg.com/image_preview.php?i=140&u=11751784)



What to say about the recommendations:
2. Do not panic - Be in control.

In the absence of specific simulator training, that memo should have stated the necessary steps:

PF Maintain 2.5 degrees of pitch
PNF Set the thrust parameters as they are usually in CRZ
Wait for improvement


http://i45.servimg.com/u/f45/11/75/17/84/af447_28.png (http://www.servimg.com/image_preview.php?i=142&u=11751784)

RR_NDB
7th Apr 2012, 03:05
Hi,

Old Carthusian, if I may respectfully ask about and comment on some points: (http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-65.html#post7122022)

Everything keeps on coming back to training, SOPs and CRM.

Everything?

(and remember the only instrument that was not reliable was the Airspeed Indicator)

How can you be so sure of that? The RHS was not even recorded. This information may be lost forever - like the unmeasured AS during the "sensor outage".

It is not that I am against adding such a warning but that the warning would not necessarily have made a difference to their response.

Well, this crash is history. What about this kind of warning in other flights in similar situations?

training should enable you to deal with the unexpected.

If training alone were sufficient, why did they decide to create an HF study group? Was the crew affected only by a "lack of, or inadequate, training"? Would other crews (with training) be immune to similar issues? And what if "multiple faults" happen (or are shown) simultaneously? The big bird (388) climbing out of Changi had multiple "simultaneous faults" due to electrical cabling (harness) damage resulting from the uncontained failure of engine #2.

There was no attempt to use the SOPs or to diagnose the problem.

We actually don't know why (if we ever will). Do we have all the factual info?

This is more indicative of flight crew problems


What other reasons can you suggest? Or could you mention some, please?

training should enable you to deal with the unexpected.


As a designer I would like to benefit from "training should enable you to deal with the unexpected". But we never know what to expect - that is virtually impossible; you may anticipate just a subset. Creativity in some cases "makes the difference". I am not against SOPs, etc.

Even the stall was recoverable

With a crew that (it seems) failed to identify a "simple" UAS and subsequently stalled the jet? The SIM session made and summarized by TD shows how difficult the "task" would be.

The warning didn't help.

The HF study was motivated by this (and other facts).

Unfortunately this is a crew caused accident

Certainly.

with very little help from the machine

We are anxious to read the final report on this.

It touches on pilot training and airline culture and how these are carried out with automation.


Lack of (training) at cruise FL :}

RR_NDB
7th Apr 2012, 03:56
Hi,

CONF iture:

info on unreliable airspeed indication received by all AF pilots 6 months before 447 (http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-66.html#post7122035)

Another relevant "input" that could partially explain the attitudes of the PF, PM and Captain.

The crew's "processor output" will very probably be understood. The output was not random; there was a clear bias. The "input" of this memo could have biased their actions from the beginning, during a significant part of the stall, and even during the zoom climb.

Perhaps it also affected the "model" the PF made and used in applying "persistent NU".

In the absence of specific simulator training, that memo should have stated the necessary steps: :ok:

"Sachons contenir l'effet de surprise" - "Let us contain the surprise effect" :E

Old Carthusian
7th Apr 2012, 05:31
Diagnostic
If I may, I will focus on the training aspect of your posts. This does not mean that I disregard the CRM and SOP issues. The question would be: is the new warning necessarily going to improve safety, or is it going to lessen it? Adding a warning is not necessarily a good thing. With regard to training - hard training, easy execution. One needs to practice unusual situations frequently so that one knows how to react in an unusual situation. Training enables one to deal with the situation more effectively - by relying on your training you are better prepared. It is nice if you can train for the specific situation, but even so, as others have so eloquently put it in other threads, the pause, evaluate, act triad works wonders in such situations.
I can suggest several explanations of why the crew acted the way they did - all of which would fit the facts. If you wish I will sketch a couple of scenarios out in a later post, but at the moment I would rather not speculate beyond the evidence we have. This accident also links in with Air France pilot culture and, I believe, that of other airlines which have developed a 'corrupted' pilot culture. The existence of a warning does not necessarily lead to increased safety - better training does.

RetiredF4
7th Apr 2012, 10:03
Allow me to comment on the issues addressed by OC to Diagnostic.

OC
Supposing the crew were surprised then training should kick in. A pause, a scan of the instrument panel (and remember the only instrument that was not reliable was the Airspeed Indicator).

The crew was surprised not by the UAS event (which had not yet been identified), but by the drop-out of the AP and the need to fly manually. That sudden manual flying chewed up most of the crew's attention and hindered them in identifying the cause of the problem (UAS). The possible causes were manifold, including the WX situation/turbulence. Minimal hand-flying skills led to the zoom up.

OC
PJ2 also pointed this out - this was not a serious incident at first. However, the crew actions made it into a serious incident. It also seems that the PFs scan broke down almost immediately and that the PNF did not intervene sufficiently. So a UAS warning might not have made any difference.

It could have made a difference from the beginning. Knowing that the reason for the AP drop-out is UAS, or even knowing it before the AP drops out and manual flying becomes imminent, saves time in analyzing the situation and helps in initiating the necessary steps. The zoom climb would not have been part of the procedure, as often mentioned before.


OC
The cockpit voice transcript indicates a very rapid 'over reaction' to the initial incident. Once again this is not indicative of an interface issue. There was no attempt to use the SOPs or to diagnose the problem.

Because the initial incident was not fully identified. The system knew that the speed had become unreliable, but the indication to the crew was the handover to manual flying, in an expected turbulent WX situation, without communicating the known information (UAS) as the reason for the handover.

I go with Diagnostic: most of the other UAS events mentioned were not handled as they should have been, and in most of those cases the identification of the UAS was marginal, late, or non-existent. There is a lot of room for improvement, not only on the training issue.

IMHO MIL crews are trained to work around the unexpected event/problem, as they can't plan a military mission and the associated tasks in great detail - and even if they can, it turns out differently from the plan in the end due to a multitude of possible factors, including bad guys trying to shoot holes in your plane.

But air transport crews are best trained in handling standard and non-standard situations according to SOPs and CRM. To implement those procedures, the identification of the problem has to be quick and simple; it is a prerequisite for applying the correct procedure. The ECAM is the best example of that need.

Even when the PNF mentioned "..we have lost speeds..." , we can't be sure that it dawned on them that the AP disconnect had anything to do with the missing speed indication and was initially just a simple UAS situation, because it was neither acknowledged by the PF nor was the necessary procedure mentioned.

The first necessary step, "maintain aircraft control", was already hindered by the unknown cause of the problem and the lack of manual handling skills. Had the cause (UAS) been known from the beginning, his handling might have been simplified, as PJ2 states, by doing nothing. But instead they failed in "maintaining aircraft control", never did "analyze the situation" and could therefore not "take proper action".

It comes down to the misidentification of a problem (UAS) which could have been communicated to the crew in a clearer and more expeditious way.

Clandestino
7th Apr 2012, 10:50
Lost a friend at Cali back in 95 or 96 or...... Stoopid flight management system turned the jet the wrong way and they noticed the error but kept descending while turning back to the approach fix. Not good.

A decade before his lifespan was abruptly terminated on the slopes of El Deluvio, your unfortunate friend won the accolade of USAF instructor of the year (IIRC he was flying the Rhino at the time), which makes him a far, far, far above-average pilot in anyone's book. His captain was extremely experienced and well respected for his professional skills and knowledge. Yet they turned their aeroplane ninety degrees away from the arrival track which leads straight down the valley to Cali. They did not level off when they realized they were lost, but rather kept descending towards the mountain hidden in the darkness. Their deaths were not altogether in vain; we got EGPWS and terrain profiles on approach plates. Unfortunately, the sheer ugliness of the spectacle made us turn our heads in horror, and therefore a lot of us failed to realize the most important lesson of Cali: even the best pilots can underperform occasionally. Instead we distract and amuse ourselves with half-cooked theories, which do have some foundation but very quickly, and not entirely unintentionally, develop into something without any appreciable connection to reality.

So goes the discussion about AF447 too. We let our imagination run wild and invent evil computers, megastorms, complete instrument failures etc. just to turn our minds away from the picture that scares us, because we can easily imagine our own portrait in it: the pilot who forgets how to fly in mid-air. From what is so far known, the final crew of F-GZCP might have been one of the best on the fleet, or one of the worst, or anywhere in between; the preliminary reports don't say a lot, but a few things are certain: 1) they had been deemed competent to perform the flight by their superiors and training dept; 2) they lost the idea of what was happening, what they should do and what the aeroplane was capable of. Saying they were incompetent (or worse) is just another nervous let's-get-over-this-quick proposition and, just like its twin, let's-speculate-about-technology-we-don't-understand, is a manifestation of the fear of grappling with the real issues raised (again) by the spectre of AF447.

At our level of technology, we can't have an ECAM/EICAS procedure, signal light, voice warning or whatever that would warn the crew of unreliable airspeed. It is not as simple as plain failures: the speed signals and indications are there, but it takes intelligence to compare them with the known weight, thrust level and attitude of the aeroplane to decide which indication is realistic and which is not. No matter how much the capabilities of our computers are increased, they are still computers; they are capable of much faster operation but are not a mil closer to true intelligence than the first pocket calculators. Now please, do prove me wrong. Not by an infantile "Dear aeroplane systems designers, I don't know the principles behind it but I demand it should work in such-and-such manner", but rather by going on to design the device that will work as you propose. I'm serious here. One well-designed system will do more good than a million complaints about badly designed ones.
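Taking that challenge purely as a thought experiment, the kind of comparison being asked for might be sketched as below - with invented thresholds and a hand-waved "expected speed" input, which is precisely where the real difficulty (windshear, jetstream, turbulence, changing weight) lives:

# Thought-experiment only - invented names, thresholds and "expected speed" model,
# not a real monitor design. Median-vote the three airspeed sources, then
# sanity-check the survivor against a crude pitch/thrust/weight expectation.

from statistics import median

def uas_suspect(cas_adr1: float, cas_adr2: float, cas_adr3: float,
                expected_cas: float,
                disagree_kt: float = 20.0,
                implausible_kt: float = 40.0) -> bool:
    """Return True if the airspeed picture deserves an 'unreliable airspeed' flag."""
    speeds = [cas_adr1, cas_adr2, cas_adr3]
    voted = median(speeds)

    # Any source far from the median suggests at least one probe is lying.
    sources_disagree = any(abs(s - voted) > disagree_kt for s in speeds)

    # Even a unanimous reading is suspect if it is wildly off what pitch, thrust
    # and weight say the aeroplane should be doing (the "intelligence" part).
    implausible = abs(voted - expected_cas) > implausible_kt

    return sources_disagree or implausible


# Example: one probe icing over while the aircraft is in fact still near cruise speed.
print(uas_suspect(cas_adr1=275.0, cas_adr2=180.0, cas_adr3=272.0, expected_cas=270.0))  # True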

Unreliable airspeed is basically a loss-of-airspeed procedure, aggravated on the modern airliner by false alerts thrown up by computers unable to overcome their IF...THEN logic. PJ2 mentioned his experience on the B767, which mirrors the experience of the Aeroperu 603 crew - with all static ports blocked there could be no valid measurement of airspeed and altitude, and the crew was bombarded by false failure messages such as "MACH TRIM" and "RUDDER RATIO". At one point they got stall and overspeed warnings going simultaneously. Yet they kept the aeroplane flying and only lost their battle when they failed to realize ATC was giving them altitude derived from their Mode C return and not from primary radar. Anyway, there have been ASI failures since there were first ASIs, and the drill is the same on every fixed wing, be it a Rans or an A380: pitch + power = performance. Despite the exaltations of some, there is nothing indicating that either attitude or power information was lost at any time. It just wasn't utilized.

Proposed unusual attitude training will do exactly nothing towards eradicating AF447-type accidents. The issue was not a botched recovery from high AoA; the issue is that recovery was not even attempted, as the stall was not recognized. An even more important issue is that the aeroplane was actively pulled into the stall, a factor missing from every other incident listed in interim report 2, which makes clear that some crews managed to botch things up and get the stall warning. However, all of them pushed when warned, some losing a chunk of altitude in the process.

HazelNuts39
7th Apr 2012, 10:51
"This is a UAS situation, all my pitot probe pressures are different so I have to disconnect the AP - recommend you fly pitch & power which for this alt is X/Y"The trigger was that the system detected a sudden drop in one of the three airspeed values. The "all pitot probe pressures are different" occurred much later at 2:12:xx when the ADR DISAGREE message was generated.

Thirty knots drop of IAS in one second is not sufficient to identify UAS. It is not inconceivable that it would occur as a real change of airspeed in a windshear/downburst close to the ground or in the vicinity of the jetstream at altitude. Considering the multitude of possible causes and flight conditions, IMHO the computers cannot reliably identify UAS, but must leave the diagnosis of the problem to intelligent humans.

P.S. Sorry Clandestino, cross-posted

RR_NDB
7th Apr 2012, 12:49
Hi,

RetiredF4:

It could have made a difference from the beginning. Knowing that the reason for the AP drop-out is UAS, or even knowing it before the AP drops out and manual flying becomes imminent, saves time in analyzing the situation and helps in initiating the necessary steps. The zoom climb would not have been part of the procedure, as often mentioned before. (http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-66.html#post7122361)

:ok:

It saves time in analyzing the situation and helps in initiating the necessary steps, reducing (or even eliminating) what is IMHO an important "uncertainty". Surprises can bring problems. :)

Murphy's Law "rounds the picture". :}

That sudden manual flying chewed up most of the crew's attention and hindered them in identifying the cause of the problem (UAS). The possible causes were manifold, including the WX situation/turbulence.

"Stressing" Human Factors

The system knew that the speed had become unreliable, but the indication to the crew was the handover to manual flying, in an expected turbulent WX situation, without communicating the known information (UAS) as the reason for the handover.

Currently the System has the information (the GIGO "feature"). This is "insider information", not immediately given to the crew - and it could be. The Airbus paper mentions that the pilots NEED to do the scan to identify the issue. IMHO this can be improved:

There is a lot of room for improvement, not only on the training issue.

I think so (in this "UAS aspect").

IMHO MIL crews are trained to work around the unexpected event/problem, as they can't plan a military mission and the associated tasks in great detail - and even if they can, it turns out differently from the plan in the end due to a multitude of possible factors, including bad guys trying to shoot holes in your plane.

But air transport crews are best trained in handling standard and non-standard situations according to SOPs and CRM. To implement those procedures, the identification of the problem has to be quick and simple; it is a prerequisite for applying the correct procedure. The ECAM is the best example of that need.

Useful comparison.

The first necessary step, "maintain aircraft control", was already hindered by the unknown cause of the problem and the lack of manual handling skills. Had the cause (UAS) been known from the beginning, his handling might have been simplified, as PJ2 states, by doing nothing. But instead they failed in "maintaining aircraft control", never did "analyze the situation" and could therefore not "take proper action".

might!

It comes down to the misidentification of a problem (UAS) which could have been communicated to the crew in a clearer and more expeditious way.

could

in a clearer and more expeditious way

The K.I.S.S. principle IMHO is MANDATORY for the "interface" (http://www.pprune.org/tech-log/481350-man-machine-interface-anomalies.html)

Especially in "difficult conditions" (WX, IMC, etc.), in order to stay further from the "FUBAR threshold".

CONF iture
7th Apr 2012, 13:41
The trigger was that the system detected a sudden drop in one of the three airspeed values.
...
Thirty knots drop of IAS in one second is not sufficient to identify UAS.
Is it sufficient to command the AP disconnect?

safetypee
7th Apr 2012, 13:51
Much of the current debate is becoming wound up by hindsight (the spinning hamster wheel). Often in such cases there is an inadvertent drift towards 'blame and train', or towards attempting to fix the problems of a specific accident and thus overlooking generic issues.

Whilst a different form of computation may have prevented this accident, it is unlikely that the industry will think of all possible situations, and it may even judge some as too extreme to consider – problems of human judgement, cost effectiveness, ‘unforeseeable’ scenarios.
Similarly a different pitot design could have prevented the situation developing, but this action was in hand. In hindsight, the need for at least one modified pitot (and associated crew action) indicates poor judgement in the use of previous data (no blame intended – just a human condition), yet this was perhaps tempered by practicality (ETTO) (www.abdn.ac.uk/~wmm069/uploads/files/Aberdeen_ETTO.pdf).
Furthermore, knowledge of the icing conditions from research and previous engine problems could have required a temporary restriction in flying in or close to such conditions.
The generic issues here are the failures to learn from previous and often unrelated events, and in judging the risks associated with the identified threats – current state of knowledge or application of knowledge.

None of the above involves the crew; the objective is to protect the sharp end from the ambiguities of rare or novel situations such that their inherent human weaknesses are not strained by time critical situations.
Where crews do encounter these rare situations, the limited human ability is an asset (human as hazard or human as hero). Protection should not, and often cannot, be achieved by more and more SOPs. Human performance will vary according to experience, knowledge, and capability. We cannot expect the detection and assessment of rare situations to be consistently good; we hope that the assessments and actions are sufficient, and thus safe, but in balance with those ‘miraculous saves’ celebrated by the industry, we have to suffer a few weak performances as part of the norm (again, no blame intended) - we are not all the same.

Many aspects of the high-level generic view are summarised by J. Reason – “you can’t always change the human, but you can change the conditions in which they work”. However this view should not be restricted to the immediate human-system interface, there are many more facets to the SHEL model of HF.
Another view from the same author is that ‘We Still Need Exceptional People’ (www.flightsafety.org/asw/mar09/asw_mar09_p53-56.pdf). This requires the need for continuous learning at all levels in the industry, not just more crew training but real learning in design, regulation, operations, crew, and accident investigation.

This accident - the situation and the activities before, during and after the event - represents a rare and novel situation, perhaps even ‘unforeseeable’, but from each aspect there are things we must learn. But how can we ensure that we learn the ‘right’ lessons?

RetiredF4
7th Apr 2012, 14:29
clandestino
A decade before his lifespan was abruptly terminated on the slopes of El Deluvio, your unfortunate friend won the accolade of USAF instructor of the year (IIRC he was flying Rhino at the time), which makes him far, far, far above average pilot in anyone's book.

A title says nothing about flying experience, and neither do hours.

He started pilot training in 1979, and when he left the air force 7 years later to become a flight engineer on the B727, he had accumulated a total of 1,362 hours.

Clandestino
even the best pilots can underperform occasionally

That is a very true statement.
It's astonishing how reading thousands of pages on the various AF447 threads could lead to the conclusion that only this very unfortunate AF447 crew was capable of such mistakes, all others being sky gods and infallible, and that thoughts about improving the systems and the man-machine interface are therefore unnecessary and a waste of time.

RR_NDB
7th Apr 2012, 14:34
Hi,

safetypee:

This requires the need for continuous learning at all levels in the industry, not just more crew training but real learning in design, regulation, operations, crew, and accident investigation. (http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-66.html#post7122582)

However this view should not be restricted to the immediate human-system interface,

We Still Need Exceptional People


This accident, situation, and activities before during and after the event, represents a rare and novel situation, even perhaps ‘unforeseeable’, but from each there are aspects which we must learn.




But how can we ensure that we learn the ‘right’ lessons?

Being diligent only "helps". No guarantees.

Thanks for the very good links. Will comment asap on your thread.

RR_NDB
7th Apr 2012, 14:55
Thirty knots drop of IAS in one second is not sufficient to identify UAS. (http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-66.html#post7122571)

There are (quite common) techniques to "detect" the "signal" (threshold) "buried" in the noise ("garbage").

The net result (reliability) could be high. And PRIOR to the GIGO threshold (protection) of the System. (Truly "non causal")*

* To the crew. I.e. Before Law change.
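To illustrate the idea of pulling a genuine airspeed drop out of noisy samples, here is a minimal Python sketch; the sample window, the assumed sample rate and the 30 kt threshold are illustrative assumptions only, not values from any certified system.

# Hypothetical sketch: detecting a sudden IAS drop "buried" in sensor noise.
# Window length and threshold are assumptions, not certified values.
from statistics import median

WINDOW = 8          # samples of recent history (e.g. ~1 s at 8 Hz, assumed)
DROP_KT = 30.0      # drop considered suspicious (assumption)

def uas_onset(history, new_sample):
    """Return True if new_sample drops sharply below the recent median."""
    if len(history) < WINDOW:
        history.append(new_sample)
        return False
    baseline = median(history[-WINDOW:])   # median rejects isolated noise spikes
    history.append(new_sample)
    return (baseline - new_sample) >= DROP_KT

# usage: feed successive IAS samples and watch for the flag
hist = []
for ias in [272, 271, 273, 272, 271, 272, 273, 272, 238]:
    if uas_onset(hist, ias):
        print("possible unreliable airspeed onset at", ias, "kt")

The median baseline is the noise-rejection step: a single wild sample does not move it, while a genuine, sustained drop does.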

Mr Optimistic
7th Apr 2012, 15:06
While the systems were announcing the multitude of consequential warnings, and the UAS was appreciated by the crew at some level at least, attitude, power and altitude were all reading correctly. The crew were perhaps looking for a complex failure of their complex machine where none existed. I wonder if that psychology, and how to prevent it causing brain freeze, isn't what needs some attention. Adding further logic branches to be half remembered doesn't sound compelling.

PJ2
7th Apr 2012, 17:53
safetypee;

An excellent, thoughtful and thought-provoking post, thank you and thank you for the links.

In safety work it is not always easy to learn how, where and when to place one's "focus". We can "tune in" varying "causes" according to our models of this and that, and even lend to such models' details a taxonomy which can at once legitimate and explain such approaches, but which also carries the potential to limit them unintentionally - a sort of "auto-immune" disease of process, as it were.
The generic issues here are the failures to learn from previous and often unrelated events, and in judging the risks associated with the identified threats – current state of knowledge or application of knowledge.
Yes, I think so. Interestingly, this describes some of the system characteristics now known to have occurred prior to 9/11, and, as we are slowly learning, prior to the 2008 financial and, to a certain extent, social meltdown in the U.S.

franzl;
Quote:
Clandestino
even the best pilots can underperform occasionally
That is a very true statement. It's astonishing how reading thousands of pages of the various AF447 threads could lead to the conclusion that only this very unfortunate AF447 crew was able to make such mistakes, all others being skygods and infallible, and that therefore thoughts about an improvement of the systems, of the man-machine interface, are unnecessary and a waste of time.
First, I know from a few "first-hand" experiences in a number of types including the A330 and A340 that there is no such thing as infallibility in the cockpit. I have plenty of colleagues who can say the same thing. Anyone who does high-risk work (doctors/nurses/engineers/pilots) will have things which he/she has done that keep them awake at night. It is testimony to the success of the processes in place, which provide redundancy and resiliency in mitigating systems, that the accident rate is what it is.

Anyone who flies and is contributing to this discussion knows this very same thing but that sense of someone's approach can't always come through in just a post or two.

If comments default to blame or dissing a crew, the contributor hasn't flown long enough, they are living an illusion informed only by ego, or they aren't a pilot.

Rather than agreeing or disagreeing then, I would like to recognize that, over the life of eight or so threads on this accident, while we have had some responses which seem to do this, they always seem to fade away, while those who do this work (flying transports, engineering safety systems, etc.) on a regular basis do have and do provide broader perspectives. Most know that "blaming this crew" doesn't cut it and dooms any response to pedantic repetitions of the notion of "primary causes", leaving us to chase down, focus upon and fix "the cause" while waiting for the next cause. The simplest example is: "the accident occurred while approaching runway 31L, so we won't use runway 31L anymore". I thought the ETTO presentation was worth examining in this light. And as many have clarified, focusing on the crew is not the same thing as blaming the crew.

In this, the BEA "Human Factors" group have a Herculean task.

PJ2
7th Apr 2012, 20:07
It comes down to a misidentification of a problem (UAS), which could have been communicated to the crew in a clearer and more expeditious way.
Then the first question to ask is, "Why would a pilot conclude that there is a requirement to act or respond in any way, before the abnormality has been identified?"

We both know that identifying the problem is a fundamental aviation principle. If there is difficulty in identifying the problem, wait. As in many abnormals which occur, there was no emergency, no need to act unilaterally or instantly. The safety of the aircraft was never in question (that's possibly a hindsight observation). Why was action taken, vice not?, is what it comes down to, I think. And this point is germane to the discussion of those events where crews did not immediately identify or respond 100% correctly, because in all events but this one there was no accident. Why?

I do see your point - a warning vice "circumstantial evidence" so to speak, would probably stop action and cause a change in behaviour but then the question becomes, At what point do we stop designing for such things?, and the larger question is, In terms of interventions and attention-getting, what is the balance between pilot and automation? There are within us all, the sources of an accident - do we expect that design and engineering, and then Standards & Training/SOPs/CRM/HF (SHEL), will resolve all these issues? If not, what is acceptable and why?

RetiredF4
7th Apr 2012, 20:31
PJ2
I do see your point - a warning vice "circumstantial evidence" so to speak, would probably stop action and cause a change in behaviour but then the question becomes, At what point do we stop designing for such things?

We never will stop designing for such things; we are on that path and no rationale will stop it. The question will only be how quickly it will be done. And I fear the new generation of pilots / system managers does not want anything back from the past and will accept any gadget which relieves some workload and some associated necessary training. I'm not saying that it makes me feel safer as a pax in air travel.

The problem lies not in the difficult tasks which have to be fulfilled routinely, but in the one-time ones, which we still leave to the pilots. But with limited knowledge and no training (knowledge and training develop experience) even simple tasks can lead to disaster.

Back to the AP: if it is forced off due to mechanical loads or any non-computable malfunction, then so be it. But when it is deliberately switched off by the system after thorough evaluation of its inputs in straight and level flight, a courtesy warning prior to dropping off should not be too hard and expensive to design and implement. It's just that nobody thought about it until now.

PJ2
7th Apr 2012, 21:11
franzl;
And I fear the new generation of pilots / system managers does not want anything back from the past and will accept any gadget which relieves some workload and some associated necessary training.
The dumbing-down of aviation has been going on since the early 80's. Automation in many eyes IS the third pilot, and training pilots to push the right buttons and, as TTex600 says in (Post #1292 (http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-65.html#post7121854)), "Airbus training is focused on the wrong targets when considering aircraft control. One is judged by his/her knowledge of the protections - with little and no emphasis placed on degraded modes.", to which I would add: one is judged largely though not exclusively in recurrent training sessions by one's knowledge and operation of the autoflight system when everything is going right, and not by one's ability to skillfully, and with knowledge, take over the airplane until the automatics are happy again.

Non-use of the autoflight system in recurrent training and, for the most part, in flight, is discouraged and is taken as a sign that one doesn't know the autoflight system thoroughly enough. The fact that guys refused to disconnect the autothrust when offered (as long ago as 15 years now) was and is a canary-in-the-mine, along with other signs, as far as I'm concerned. And because there are hundreds of ways to fail at doing this but actual failure is minimal, and very few ways to do it all correctly yet millions of examples of it being done well, warning systems which cater to rare inattention or, less rare, lack of airmanship and a growing surface knowledge of one's profession just kick that can down the road a bit.

I'm taking a rest from this for a while, to think.

Cheers!

gums
7th Apr 2012, 21:32
As a non-heavy pilot, I have really enjoyed the posts here and "meeting" the pilots and engineers. I also appreciate the camaraderie and the warm welcome I have received, being a lite pilot and all that.

This quote applies to the heavy pilots as much as to my genre,

"Only the spirit of attack born in a brave heart will bring success to any fighter aircraft no matter how highly developed it may be" - Galland
We initial cadre of the Viper took this to heart, and despite all the "protections", or what we called "limits", imposed by our FBW flight controls, we did just fine. That being said, the jet was so easy to fly that we lost pilots just because of that. We did not have the cosmic auto-throttle or autopilot that the 'bus has. But the jet would still do what the engineers had programmed despite our excessive commands. For example, one test bird measured over a hundred pounds of back stick by the pilot. This was funny, as max command was about 34 pounds. Nevertheless, the jet gave you everything it could without departing from controlled flight. We quickly became "spoiled".

So I question the current mentality of the folks up front on my airliner. Are they simply system monitors or are they pilots?

Do they spend more time in the sim setting up the flight management system routes, waypoints and altitudes and such than considering what they would do when a flock of geese entered both engines shortly after takeoff?

I realize that the sims prolly don't handle "out of the envelope" flight conditions very well. But my experience on a sim flight check was that the instructor threw everything in the book at you. So I imagine that the major carriers are now exposing the crews to the UAS problem and loss of other data that the flight control system uses.

I doubt that the stall recovery techniques we have discussed here will get much attention. Why? Because most here would not have held back stick after the airspeed went south, but we would have simply kept the attitude and power we had when the event occurred. Then figure out what the hell was happening.

Look at this, and imagine what I did... gear up and then....

http://i120.photobucket.com/albums/o196/gatlingums/rightwing.jpg

Well, left stick and the sucker kept flying. Reduce power to stay at the same speed as I knew the thing was still flying at that speed so why be a Chuck Yeager?

The FBW was giving me max left roll except for the extra pound or two of stick command I was applying. So the FBW prevented a serious situation. Not to imply the jet was easily controlled, but we got her back on the ramp in one piece, and I had a stiff drink shortly thereafter.

So how would the current crop of "system monitors" react to something similar? And we did not have over 30 previous incidents - I was the first.

Diagnostic
8th Apr 2012, 03:07
@PJ2,

Thanks for your comments. I note you're taking a break from this, as I will also be doing. I've learned new information from some people, including you, over the past few days, thanks.

Not to re-argue the matter, but we know that none of these crew actions were done with AF447 and for this reason (which I have elaborated upon in earlier posts), I concur with O.C., that this is primarily a performance/crew accident. There certainly are training-and-standards issues here as well, and there are airmanship, system knowledge and CRM issues. The HF Report will (and should) be thick and deeply researched.
As I said (more than once) to O.C., I agree with that - the crew behaviour certainly appears to be the primary cause of this accident (which clearly implicates training etc.).

However, in BEA IR2 was evidence of inappropriate UAS recognition (and therefore incorrect subsequent procedure) by several crews, not just AF447. It therefore seemed sensible to consider what could be done to help crews via improvements in the man/machine interface, especially since I have seen parallels to such "hidden decisions" in large computer systems, and their adverse effects on quick & efficient troubleshooting by humans.

What I have read over the last few days is that such changes are seen as unnecessary by some people, so I'll withdraw from the thread for the time being, but with my personal opinion (as a GA pilot & engineer, and therefore acknowledging that I don't have the experience to know the consequences) still being that improvements here may be a "net win", and hence worth investigating further, at least.

As you said, the HF part of the report will make interesting reading!


@TTex600

Hi,

As with PJ2, your comments about the state of current training and what pilots are being measured on (and hence what training is being driven to produce), are very interesting - and worrying. Unfortunately I also see this management drive to "train to meet tick-boxes" instead of "train to meet actual requirements" being done outside of aviation.


@safetypee,

Hi,

Another "thanks" from me for those links. Much earlier in these threads, I saw mention of the Dr Bainbridge "Ironies of Automation" paper which I read at the time. I'll see how the papers / discussions in your recent links relate to her conclusions. Some of the descriptions in her paper really resonated with me, as being human behaviour which I've seen and/or experienced.


@RR_NDB,

Hi Mac,

Thanks again for your recent posts & analysis. It will be interesting to see if the BEA recommend manufacturers to re-visit the current situation about UAS warnings / recognition.

Airbus SAS (and others) are processing UAS just to the System. :} The System is not fed with garbage. The pilots need to "process" any garbage through scan and brain. :mad:
Indeed - that extra workload on the crew, to figure-out themselves (again) what has already been decided by the avionics, seems unnecessary & unhelpful.


@CONF iture,

Hi,

When you read the memo, it does seem normal to have to deal with a continuous STALL warning without taking any necessary action ...
Thanks for highlighting this - very interesting. I hadn't recognised the correlation to my thoughts about the PF's treatment of the stall warning (i.e. deliberately ignoring it), with this document saying that the crew could sometimes get a stall warning with, as you say, no instructions to respect it using SOP. It's a shame that the English version of IR3 pages 63 & 64, which mention this document, didn't translate it.

This leads onto the concern that, if they ignore a stall warning for long enough, then it may no longer still be an approach to stall warning, but may now be a we're stalled warning!

Again, it'll be interesting to see how (or if) the HF part of the final report believes this document may have factored into the crew's (especially the PF's) treatment of the stall warning.


@RetiredF4,

Hi franzl,

Thanks for your comments. I see that you've mentioned all the main points I had been thinking about (and some more), and already explained them to O.C. much better than I was doing!


@HazelNuts39,

Hi,


"This is a UAS situation, all my pitot probe pressures are different so I have to disconnect the AP - recommend you fly pitch & power which for this alt is X/Y"
The trigger was that the system detected a sudden drop in one of the three airspeed values. The "all pitot probe pressures are different" occurred much later at 2:12:xx when the ADR DISAGREE message was generated.
Thanks for the correction. I thought I had previously mentioned the sudden drop of airspeed values in another reply, but it must have been in a draft that I deleted, as I can't see it now. In the above quote I was giving one (obviously amateur and anthropomorphized) example message text, but hadn't meant it to be intended as accurate for any one specific situation (hence why I mentioned "X/Y" instead of specific values). I should have been more careful to avoid misinterpretation.

Considering the multitude of possible causes and flight conditions, IMHO the computers cannot reliably identify UAS, but must leave the diagnosis of the problem to intelligent humans.
But decisions are already being made by the avionics about when the air data is unreliable, to then trigger the AP to disconnect etc. I understand that these decisions may not be perfect, but given that they exist already, I'm just suggesting that it is worthwhile exposing the reason for the avionics' decision to the crew - instead of expecting the crew to figure out the reason for the AP disconnect etc. again, and perhaps getting that analysis wrong (or at least getting distracted by trying to do it).

You bring up an interesting point - of those 2 effects caused by pitot blockage (airspeed discrepancies and rapid change of airspeed reading), I expect it's more difficult for humans to detect the rapid change of detected airspeed (unless they happen to be looking at the specific instrument at that moment) than to detect discrepancies, since immediately after a change, the (incorrect) speed may stabilise (as seems to have happened on AF447). I want to do some more reading and think about that.


@Old Carthusian,

Hi,

Everything keeps on coming back to training, SOPs and CRM.
Yes! And I continue to acknowledge this, as I have done over the last few days.

I'm specifically looking deeper into an SOP issue (i.e. the SOP for UAS), but unfortunately (and despite further clarification from me) you keep raising a different part of the overall problem (training) without acknowledging any common ground regarding SOP. Indeed, yet again you've repeated that the outcome of a UAS is what is important, ignoring my point that the UAS recognition and procedure (or lack of it) followed by the crew, is also important.

You've also again asked for a "guarantee" from me, despite me pointing out how unreasonable that is. If I wanted to, I could also ask you to guarantee things about the opinions in some of your replies, which you couldn't do. :)

Therefore I don't see any value in further conversation if there is no mutual respect here, and I'll politely withdraw from further conversation with you for the moment. Thanks for your comments anyway.

[Added: Before someone reminds me, I know UAS is not an SOP, although the knowledge that the UAS Abnormal procedure exists counts as being an SOP, IMHO. :)]

RR_NDB
8th Apr 2012, 17:18
Hi,

The System could also provide good news to the crew. :) (http://www.pprune.org/tech-log/481350-man-machine-interface-anomalies.html#post7124234)

Maybe (the System) should ASSERTIVELY inform a hard-working crew submitted to uncertainties (as in the Thiells B727 and AF447) IMMEDIATELY when the Air Speed (and related indications) become reliable again.

IMO UAS and Reliable Air Speed (RAS :) ) should not be programmed by Designers as a System "inside information". Better to communicate BOTH.

The anomaly came just from the sensors (OBSOLETE). Not even (caused) from the ADM, etc.

The confidence in the System (of great importance) MUST BE PRESERVED!

("Good news" in Easter day :8 :) )

Lyman
8th Apr 2012, 18:21
"Guarantee?" Yes, you'll want marketing, down the Hall, but they're out golfing.

:p

AlphaZuluRomeo
9th Apr 2012, 11:53
@ RR_NDB:
While I agree that the system could also provide good news to the crew, I don't think your example is pertinent.
Indeed, the system has to be sure of itself to provide news (particularly good news such as: you now can rely on...)
The current logic is: the fact that the 3 airspeed sources give ~ the same value is an indication that this value is correct, as long as no error has occurred.

As soon as an error has occurred (and until a system reset on the ground, with a check of its sources), the system is no longer able to determine if what it "senses" is true or not.
You can easily imagine that 3 frozen/clogged pitots will at some time indicate ~ the same (incorrect) airspeed value. It would be very dangerous to rely on that (incorrect) value.
I know that ice eventually melts, but ice is not the only potential problem encountered by the pitots. What about volcanic ashes, for example?

Hence it's far safer IMO to let the crew do its job, as pilots: assess the failure, eventually see that the values seem correct again and decide to give them (or not) (limited) confidence. In the meantime, fly pitch & power, which can be done without relying on airspeed indications.
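To make the latching behaviour described above concrete, here is a minimal Python sketch; the agreement threshold and the class and method names are assumptions for illustration only, not the actual Airbus logic.

# Sketch of "latch until checked on the ground": once the three sources have
# disagreed, agreement alone is no longer treated as proof of validity.
AGREE_KT = 20.0   # max spread still treated as "consistent" (assumption)

class AirspeedMonitor:
    def __init__(self):
        self.latched_unreliable = False

    def update(self, ias1, ias2, ias3):
        spread = max(ias1, ias2, ias3) - min(ias1, ias2, ias3)
        if spread > AGREE_KT:
            self.latched_unreliable = True      # disagreement seen at least once
        # re-convergence does NOT clear the latch in flight, because three
        # clogged probes can agree on a wrong value
        return "UNRELIABLE" if self.latched_unreliable else "CONSISTENT"

    def ground_reset(self):
        """Cleared only after a check of the sources on the ground."""
        self.latched_unreliable = False

m = AirspeedMonitor()
print(m.update(272, 271, 273))   # CONSISTENT
print(m.update(272, 180, 273))   # UNRELIABLE (latched)
print(m.update(270, 271, 270))   # still UNRELIABLE despite agreement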

Regards.

Mac the Knife
9th Apr 2012, 14:34
Stunning discussion by some very smart people. Puts me in mind of "Dark Star".

"No, no, Doolittle, you talk to it.
Teach it Phenomenology, Doolittle."

CONF iture
9th Apr 2012, 16:07
Also, I do not buy the sidestick vs control column argument one bit. Any pilot watching the pitch attitudes seen here does not need sidestick or column position to tell him/her that something extremely serious is about to happen if control of the aircraft isn't taken over immediately and the nose lowered to normal cruise attitudes.

This is the part that is very definitely not complicated.

As you know ... I sincerely disagree here: the sidestick vs control column argument is a serious one and cannot be discarded from the equation that led to the final result.
I would agree the argument did not make a difference regarding the initial pitch up, as the displacement of the sidestick was limited, as would have been such a displacement of a control column, but the difference is later on, especially when the captain came back.

Lyman
9th Apr 2012, 16:37
When he was telling Bonin (PF) CLIMB!, what was the other pilot seeing? BECAUSE, as we know, BONIN was holding NOSE UP, he said so. He was PF, and yet the other pilot said CLIMB? Didn't HE SEE THE DISPLAYS?

"But I have been holding NOSE UP". Tell us again the displays were working?

Why did PF have to TELL anyone he was commanding NU?

This tells us Number One: No one knew (except RHS!) RHS STICK POSITION.

NUMBER TWO, with the a/c commanded NU, how did the others not see this on the displays?

NO ONE knew the attitude. For THREE, BEA stays quiet after "I have no displays" (when the Captain enters). We must assume NO DISPLAYS came back, ever.....

PILOT A: "Climb, Climb"

PILOT B: "NO, Do Not Climb"

PILOT C: "But I have been holding NOSE UP for some while"

None knew attitude. What display?

These are discussions of what they should do with the CONTROLS. And they are clearly confused. FOR ONE, they know they are plummeting down. For TWO, they are guessing at which way to command the NOSE. How can the PNF tell "CLIMB", if he knows the a/c is PITCHED UP at 17 degrees? The displays are not working, and they are trying to suss anything that will stop the descent, without knowledge of NU, clearly.

roulishollandais
9th Apr 2012, 23:34
AT THE TOP OF THE EVEREST OF THE MAN-MACHINE INTERFACE :suspect:, I pay tribute to TEST-PILOTS :\, and TEST-ENGINEERS :\ who offer their lives to improve Air Safety, and I also pay homage to TEST-PARACHUTISTS :\ WHO TEST THEIR ZERO-ZERO EJECTABLE SEATS at the price of their own health.

Maybe it is the worst fault of Airbus to have used airline pilots to test the A330, also killing the test pilot Nick Warner, and passengers confronting the stall in AF447.

1994 A330 test flight crash - Wikipedia, the free encyclopedia (http://en.wikipedia.org/wiki/1994_A330_test_flight_crash)

AlphaZuluRomeo
10th Apr 2012, 00:44
roulishollandais,

I cannot distinguish many similarities between the crash of 1994 and AF447. Care to elaborate?

I sincerely hope I misunderstood your point, and that this is not (airbus) bashing disguised as a tribute (to test crews). :bored:

PS: Do humans still test (0/0) seats? Last time I checked, dummies ruled. :confused:

RR_NDB
10th Apr 2012, 01:06
I don't think your example is pertinent. (http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-67.html#post7125472)

I will comment on the reasons to mention these flights in the example:

1) PFs stalled their planes

2) A Pitot issue triggered the sequence of events

3) Stall caused LOC (or non-recoverable situations)

4) They didn't identify the AS issue (due to icing)

5) Redundant indications were not enough. Result: confusion and misleading

In the Thiells case the misleading was obvious. In AF447 "lack of minimum understanding" seems a fact (based on what we have). Confusion seems to have resulted. When you are fed with REDUNDANT erratic indications a "misleading" occurs: it reduces (or destroys) your confidence in the System you are operating. And opens new :E possibilities. :}

Yes, there are some (imo minor) differences, I agree: 727 heater off, 447 obsolete Pitots; 727 stall then LOC, 447 full stall: net result, an amazing* LOC. :{ These 2 crashes have IMO more commonalities than other possible examples. Yes, every accident is unique.

Indeed, the system has to be sure of itself to provide news (particularly good news such as: you now can rely on...)


AZR, I am "technically oriented". But this "non-technical value" (confidence) is of extreme importance. I don't like the scenario of a crew with no confidence in the "interface" (even partially). This is very dangerous and can lead to "dramatic" possibilities.:E:E:E

It would be very interesting to understand the current approach in "detail" (algorithm). Can be in "managerial" level. An outline. Maybe A33Zab could also help on this.

As soon as an error occured (and until a system reset on the ground, with a check of its sources), the system is no more able to determine if what it "senses" is true or not.

I may ask: why? Frankly, IMO there are better approaches. This "long latch" (long outage) and "reset on ground" tell me: room for improvement.

You can easily imagine that 3 frozen/clogged pitots will at some time indicate ~ the same (incorrect) airspeed value. It would be very dangerous to rely on that (incorrect) value.


DSP techniques (very common) and good algorithms solve this easily. It is a typical engineering problem for which instrumentation (industrial) engineers have full capability to deliver a "state of the art" implementation. No problem! Remember, you process the signal (normal), the onset of "garbage generation" :) and the recovery of the Pitots' normal operation (the 447 case). In the Thiells 727 perhaps the plane crashed with 3 tubes frozen (we don't know). An interface probably never would have provided "good news".
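As a rough illustration of that "good news" idea (declaring the speeds usable again only after they re-converge and behave plausibly for a while), here is a hedged Python sketch; all thresholds, the dwell time and the function name are assumptions for illustration, not certified values or any manufacturer's algorithm.

# Sketch: announce "reliable again" only after sustained re-convergence.
AGREE_KT = 10.0       # max spread between the three sources (assumption)
RATE_KT_S = 15.0      # max plausible change per second (assumption)
DWELL_SAMPLES = 30    # e.g. 30 s of good behaviour before "reliable again"

def recovery_monitor(samples):
    """samples: iterable of (ias1, ias2, ias3) tuples, assumed one per second."""
    good_count = 0
    prev_mid = None
    for ias1, ias2, ias3 in samples:
        mid = sorted((ias1, ias2, ias3))[1]   # median of the three sources
        spread_ok = max(ias1, ias2, ias3) - min(ias1, ias2, ias3) <= AGREE_KT
        rate_ok = prev_mid is None or abs(mid - prev_mid) <= RATE_KT_S
        prev_mid = mid
        good_count = good_count + 1 if (spread_ok and rate_ok) else 0
        yield good_count >= DWELL_SAMPLES     # True once recovery has persisted

# usage: any(recovery_monitor(recorded_samples)) tells whether a sustained
# recovery was ever seen in the recording.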

I know that ice eventually melts, but ice is not the only potential problem encountered by the pitots. What about volcanic ashes, for example?


Good question. Before clogging (I mean, during the transient) DSP easily detects it. Probably the tubes will no longer be operational, thus creating a more serious scenario. Fortunately NOTAMs solve this. ;) There is no "instant" ash :) like the 447 crew faced after entering WX. So, ash: no problem. Same DSP, same approach. (Long latch for another reason.) :)

Hence it's far safer IMO to let the crew do its job, as pilots: Assess the failure, eventually see that the values

I will continue (editing) in a few minutes. But I anticipate: "globally speaking", i.e. looking at all the factors (reliability, crew "workability" on the issue, assessment, cross checking as the mentioned paper puts it), my feeling is: better to review the current approach.

As lomapaseo IIRC well put it (as I understood his post), we could, should (quasi must) post our engineering POV. Safety is our ultimate goal. (It's my agenda, my reason to think, to analyze, to read, etc. here in PPRuNe)

* The AF447 LOC seems unique in (commercial airliner) aviation history. No abnormal attitude, and only a long and slow RH turn. (Even "G protected" during the zoom climb.) Indeed unique? I don't remember anything similar. (Unless the ROD is considered an abnormal attitude, maybe of a new type :})

Organfreak
10th Apr 2012, 01:08
This link shows much more detail about what went wrong. Draw your own conclusions as to what Mr. Hollandaise had in mind:

ASN Aircraft accident Airbus A330-321 F-WWKH Toulouse-Blagnac Airport (TLS) (http://aviation-safety.net/database/record.php?id=19940630-0)

RR_NDB
10th Apr 2012, 01:50
Hi,

Mac the Knife

"Dark Star" (http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-67.html#post7125724)

...an increasing number of systems malfunctions (for instance the toilet facilities 'blew up' destroying the ship's entire stock of toilet paper) (http://en.wikipedia.org/wiki/Dark_Star_(film))

:O

While attempting to repair the laser, Talby is blinded and inadvertently triggers a more serious problem, causing extensive damage to the ship's main computer and a major malfunction with Thermostellar Bomb #20, which, on arrival at their target planet, becomes belligerent and refuses to obey orders and drop from the bomb bay.


:eek:

CONF iture
10th Apr 2012, 03:28
Thanks for your PM Lyman.
I can't tell there was an issue with the attitude indicator(s) but it is a possibility there was one, or they thought there was one, as they chose to manipulate the ATT HDG switch - or is it the way it is taught at Air France to turn simultaneously both the AIR DATA and ATT HDG switches ...
For sure there was confusion, big time, and those invisible sidesticks just contributed to that confusion.

CONF iture
10th Apr 2012, 13:12
http://i45.servimg.com/u/f45/11/75/17/84/af447_29.png (http://www.servimg.com/image_preview.php?i=143&u=11751784)


It has been suggested by the BEA that On n’a pas une bonne annonce de… et … de vitesse were going together, but most probably they don't, as they are separated by 4 seconds and the translation made is incorrect: usually une annonce is not a display but an announcement made through a public aural system.

Annonce furtive ou persistante "STALL" (Appendix 3 in IR3)

IMO the PF was saying that the STALL WRN that came up one second earlier was a false one.


http://i45.servimg.com/u/f45/11/75/17/84/af447_30.png (http://www.servimg.com/image_preview.php?i=144&u=11751784)

jcjeant
10th Apr 2012, 14:29
Hi,

It has been suggested by the BEA that On n’a pas une bonne annonce de… et … de vitesse were going together, but
This is not the first time that significant differences have been detected between the English and French versions (as in preliminary report No. 3) and they cause confusion.
Announcing and displaying are two different things with different meanings.
So .. what is the true word? The one in the French version or the one in the English version?
It does not add anything positive to the professionalism of the BEA when it comes to communicating.
It seems amazing that the BEA can't ensure the service of a translator familiar with aeronautical language, as they have sufficient funds.
It will be interesting to scrutinize carefully the two versions of the final report (if it is translated into English).
BTW .. it remains that the BEA is French, and so the original publication (in French) must be considered the official release.

RR_NDB
10th Apr 2012, 15:27
Hi,

CONF iture:

IMO the PF was telling that the STALL WRN that came up one second earlier was a false one. (http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-67.html#post7127453)

I will comment on that (well observed and likely) probable fact on:

Man-machine interface and anomalies (Man-machine interface and anomalies) thread, on the ramp "being fueled".

Mac

I hope for an improvement of the "interface" crews currently use in "advanced planes" (all types) in the aftermath of the AF447 HOW and WHY analysis and subsequent "works".

* Albert Schweitzer

AlphaZuluRomeo
10th Apr 2012, 15:31
CONF iture, jcjeant,

"On n'a pas une bonne annonce ... de vitesse" cannot imply anything else than a display AFAIK. What do you have in mind, regarding an "announcement" ??

I agree that the term is not the more common in french ("on n'a pas une bonne indication" seems better suited) but remember it was the middle of the night and the beginning of troubles. Using "une annonce" in place of "une indication" doesn't surprise me that much (french is my native language).

I really don't think the BEA is to blame, here, unless wanting to quibble about everything it writes.
I insist on here, because sometimes translation issues are real, and may lead to misunderstandings and/or nonsense.

RR_NDB
10th Apr 2012, 15:36
...the sidestick vs control column argument is a serious one and cannot be discarded from the equation that helped to the final result. (http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-67.html#post7125872)

I hope this is being addressed by BEA with the right* emphasis.

I would agree the argument is not making a difference regarding the initial pitch up as the displacement of the sidestick was limited as would have been such displacement of a control column, but the difference is later on especially when the cpt came back.


"but the difference is later on especially when the cpt came back."

:ok:


* right means, IMO relevant

roulishollandais
10th Apr 2012, 16:00
Do humans still test (0/0) seats? Last time I checked, dummies ruled.
Maybe they no longer do. I thought in the 90's it was already the case. But I recall that one of my friends, CLAUDE JOHARY, had to test a Mark-IV (type to be confirmed) from the ground. After that he had some problems with his back, and could no longer apply as an astronaut. He died in 1995 as a passenger.
MS.760 Paris 11 Octobre 1995 - Uzech - Arostles (http://www.aerosteles.net/fiche.php?code=uzech-paris)

I cannot distinguish many similarities between the crash of 1994 and AF447. Care to elaborate?

The ONLY similarity I want to point out is the non-adequate man in the non-adequate place; and that concerns HMI.

A33Zab
10th Apr 2012, 16:17
Quote:

Indeed, the system has to be sure of itself to provide news (particularly good news such as: you now can rely on...)

AZR, I am "technically oriented". But this "non-technical value" (confidence) is of extreme importance. I don't like the scenario of a crew with no confidence in the "interface" (even partially). This is very dangerous and can lead to "dramatic" possibilities.

It would be very interesting to understand the current approach in "detail" (algorithm). Can be in "managerial" level. An outline. Maybe A33Zab could also help on this.


If I understand your question correctly,

'Good news' is if ECAM message/action is properly handled, understood and cleared by crew.

VERY good news is if ECAM message or FLAG is cleared by the system itself because that means the anomaly is not present anymore.

For the speed indication it is more difficult:
but they are in view (ADR 3 on standby/ISIS and selectable to show on any PFD),
so if they are consistent again after the anomaly the crew will know UAS is no longer present.

HazelNuts39
10th Apr 2012, 16:36
Usually une annonce is not a display but an announcement made through a public aural system.
An 'annonce' can be in writing, e.g. the small ads in a newspaper. Good point, nevertheless.
Found this in the "Lexique-bilingue" on the Dassault-Aviation website:

ANNONCIATEUR DE MODE DE VOL: FLIGHT MODE ANNONCIATOR (FMA)

RR_NDB
10th Apr 2012, 18:19
For the speed indication it is more difficult: (http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-67.html#post7127782)

On the indications I would suggest: FLASHING (or warning by other suitable means) when TWO are different. If THREE are different, better not to present them to the crew: likely just GARBAGE with "MISLEADING CAPABILITIES", like occurred in the Thiells 727 ferry flight.
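A minimal Python sketch of the display policy suggested above (flash when one source stands apart, blank when all three mutually disagree); the 20 kt threshold and the function name are illustrative assumptions, not any manufacturer's logic.

# Sketch only: decide what to do with the displayed speed from 3 sources.
DIFF_KT = 20.0   # disagreement threshold, assumed for illustration

def display_policy(ias1, ias2, ias3):
    v = [ias1, ias2, ias3]
    pairs_agree = [abs(v[i] - v[j]) <= DIFF_KT
                   for i, j in ((0, 1), (0, 2), (1, 2))]
    if all(pairs_agree):
        return "SHOW"              # normal indication
    if any(pairs_agree):
        return "SHOW_FLASHING"     # two agree, one differs: warn the crew
    return "BLANK"                 # three-way disagreement: likely garbage

print(display_policy(272, 271, 273))   # SHOW
print(display_policy(272, 240, 273))   # SHOW_FLASHING
print(display_policy(272, 240, 200))   # BLANK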

Will comment further on your recent posts on this, IMHO very important, issue:


Preventing loss of CONFIDENCE, CONFUSION and (potential) MISLEADING.

Can ACARS (currently) be programmed to report UAS not observed (or observable) by crews?

Because AFAIK UAS is being kept by the System as "INSIDER INFORMATION". A privilege of the System. (As per the mentioned Airbus SAS paper, the crew must scan...)

This IMO could even be considered A Design Flaw*: Simply because at reduced cost (negligible cost) you can do better:

1) System PROACTIVELY helping crew (informing UAS onset) and

2) system CLEARING the issue (for a likely busy crew: scanning, trying to correlate, etc.)

Remark:

A DSP (Digital Signal Processing) stage on the ANALOG (raw) air data (before the ADM) can seem miraculous (even for us, EE's) :)

*

Or the lack of an important AID to help crews already submitted to dozens of Air Speed anomalies closely related to the sensors being used.

In the A/C equipped with the (now formally obsolete) Thales probes an effort (review or upgrade) would be essential.

The mere replacement with the US probes ("BF") IMHO is not enough. Waiting for the final report (and its consequences) shows how slow the bureaucracy is.

In the meantime pilots are at risk of not detecting in a timely manner a sensor limitation in a design with no redundancy.

A33Zab
10th Apr 2012, 19:56
Because AFAIK UAS is being kept by the System as "INSIDER INFORMATION". A privilege of the System. (As per the mentioned Airbus SAS paper, the crew must scan...)

IMO this has been posted before by several posters:

Crew must scan because the system has a limited set of pre-programmed parameters with which to make a logic decision about which ADRs are wrong (if two or more ADRs are wrong and different).
In such a case a single good source can be rejected by the system.

Humans (and crew specifically) can add or skip parameters to make a - better! - logic decision on wrong and differing information.
They have to isolate the faulty sources to prevent the system from rejecting the only good source and using the wrong sources.

Nothing to do with 'INSIDER INFORMATION'; unfortunately during phase 1 (as determined by the BEA) the UAS of 2 or more sources did not last long enough to trigger the ADR DISAGREE ECAM message (it needs 10s to prevent spurious warnings).

In phase 2 (the continued stall warning phase) all the speeds returned to consistent values.

At the start of phase 3 - when the AoA became invalid due to high AoA (resulting in CAS NCD) - the ADR DISAGREE triggered because the UAS lasted more than 10s.
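For illustration, the 10 s confirmation described above behaves like a simple persistence (debounce) condition; the following Python sketch assumes a 1 s monitoring period and hypothetical names, and is not the actual ECAM logic.

# Sketch of a persistence (debounce) condition: the disagree condition must be
# continuously true for 10 s before the message is raised, so short transients
# do not trigger spurious warnings.
CONFIRM_S = 10.0
SAMPLE_S = 1.0      # assumed monitoring period

class DisagreeConfirmer:
    def __init__(self):
        self.elapsed = 0.0

    def update(self, disagree_now: bool) -> bool:
        """Return True only once the condition has persisted for CONFIRM_S."""
        self.elapsed = self.elapsed + SAMPLE_S if disagree_now else 0.0
        return self.elapsed >= CONFIRM_S

c = DisagreeConfirmer()
for t, d in enumerate([True] * 12):
    if c.update(d):
        print("ADR DISAGREE raised at t =", t + 1, "s")
        break

Any single sample where the condition clears resets the timer, which is exactly why a short-lived disagreement never surfaces as an ECAM message.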

Lyman
10th Apr 2012, 20:20
A33Zab

"In the start of fase 3 - when AoA became invalid due to hi AoA (resulting in CAS NCD) the ADR DISAGREE triggered because the UAS lasted more than 10s."

The Pitots do not pivot, and what caused the AoA vanes to read high, caused the airflow to glance across the aperture of the sensors, dropping the pressure within. Are you saying that at remarkably high values of AoA, the speeds can be trusted regardless of UAS? And not be falsely quite low, together?

?

Doesn't BUSS moot this arduous thread?

RR_NDB
10th Apr 2012, 21:00
Hi,

A33Zab:

They have to isolate the faulty sources to prevent the system from rejecting the only good source and using the wrong sources. (http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-68.html#post7128123)


So, crew must help the System in this situation?

With good DSP processing of THREE sources (that always must be considered suspect) the crew could instead be "helped" by the System.

Sometimes (even on A/C) you use 5 redundant elements. E.G. in Airbus SAS design.

RR_NDB
10th Apr 2012, 21:22
the UAS of 2 or more sources did not last long enough to trigger the ADR DISAGREE ECAM message (it needs 10s to prevent spurious warnings). (http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-68.html#post7128123)

There is room for improvement. Time duration is ONE of the pieces of information a good DSP can use to generate a reliable output.

And a spurious warning (the threshold can be "digitally calibrated") is not a problem. The scan could easily verify it is a "false positive".

I prefer a rare "false positive" to not being informed of UAS happening in the background. And worse: generating DECISIONS for short-duration anomalies.

What I don't agree with:

A short-duration anomaly being capable of reconfiguring the A/C without ever telling what was happening. This sounds like "opacity", capable of making crew operation difficult. Are there reasons for that? Let's work on the details so pilots could benefit from "state of the art" resources.

Jimmy Hoffa Rocks
10th Apr 2012, 21:34
Please forgive my off-the-track intervention here, but an A330 pilot just told me that his company had some rapid TAT increases, in one of which they lost 4000 feet.

Rapid TAT increase while in cruise? Could someone explain this?


"... total temperature, which occurs on the wing leading edge or nose of your aircraft. The total temperature depends on the local, atmospheric, static temperature and on the velocity of the aircraft"


Please excuse if this was addressed earlier - no time to read all the posts.

mm43
10th Apr 2012, 22:32
While we are debating the 'minutiae' associated with the AF447 accident, I have been doing some work on a Search Engine that will selectively search the individual PPRuNe Rumour and News / Tech Log threads on the subject.

At the time of this post, there have been 13 substantive threads on the AF447 event, involving 25,184 posts. That doesn't include those posts deleted by the moderators, e.g. 949 were deleted from the first thread - Air France A330-200 Missing.

The BEA's Final Report will no doubt attract many many more posts!

RR_NDB
10th Apr 2012, 23:34
25,184 posts and counting

:eek:

The BEA's Final Report will no doubt attract many many more posts!
(http://www.pprune.org/tech-log/468394-af-447-thread-no-7-a-68.html#post7128378)
:confused:

Administrators should prepare for DOS-like traffic (exceeding thresholds)

So, this will continue for years?

:confused:

Another reason for a case study :)

mm43
11th Apr 2012, 00:16
Hi RR_NDB,
Administrators should prepare for DOS-like traffic (exceeding thresholds)
Apparently the Air France A330-200 Missing thread recorded its maximum server traffic of 11,419 visitors at 14:46 UTC on 1st June 2009.

I wouldn't worry.:ok:

CONF iture
11th Apr 2012, 01:01
The BEA's Final Report will no doubt attract many many more posts!
Not too sure about that ... so many comment in the early days following a crash, but so few read a final report (http://www.pprune.org/4653807-post445.html).

mm43
11th Apr 2012, 01:28
Originally posted by CONF iture ...
Not too sure about that ...
Perhaps you may be right - time will tell.

The link you gave I think was in relation to the A320 down on the Hudson??

jcjeant
11th Apr 2012, 04:27
Hi,

http://www.pprune.org/rumours-news/300539-brand-new-etihad-a340-600-damaged-toulouse-several-wounded-post4653807.html#post4653807

HazelNuts39
11th Apr 2012, 10:36
Rapid TAT increase while in cruise? Could someone explain this?
The obvious reason is that the local static temperature increases. As the thrust at a given RPM, as well as the maximum thrust, decreases with increasing ambient temperature, it could become insufficient to maintain speed and altitude.

Another possible cause is ice accretion on the TAT sensors. Depending on which sensors the engine control system uses, that can also result in a thrust loss.
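For reference, total and static air temperature are related by the standard compressible-flow relation TAT = SAT x (1 + 0.2 M^2), with temperatures in kelvin and a probe recovery factor of 1 assumed; a small Python example (the function name and sample numbers are just for illustration):

# TAT from SAT and Mach, temperatures handled in kelvin internally,
# ratio of specific heats 1.4 (hence the 0.2 factor), recovery factor 1.
def tat_from_sat(sat_celsius: float, mach: float) -> float:
    sat_k = sat_celsius + 273.15
    return sat_k * (1.0 + 0.2 * mach ** 2) - 273.15

# e.g. SAT -50 C at M 0.80 gives roughly -21.4 C indicated TAT
print(round(tat_from_sat(-50.0, 0.80), 1))

So a rapid rise in indicated TAT at constant Mach points to a rise in the underlying static temperature, which is the thrust-loss mechanism described above.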

john_tullamarine
11th Apr 2012, 11:52
Thread #8 starts here (http://www.pprune.org/tech-log/482356-af-447-thread-no-8-a.html).