
View Full Version : Computers in the cockpit and the safety of aviation


BOAC
1st Jul 2009, 10:28
PROMPTED by the AF447 accident, but NOT directed at Airbus specifically, I would like to open a discussion on this. ALL manufacturers are moving towards FBW/software control and protection etc.

It seems to me that we have reached a dangerous 'fork in our airway'. The FBW and software make for an amazing, clever and safe operation when they are working. Our 'new' pilots don't really need the old-fashioned basic flying skills, since these systems prevent abuse/mishandling.

What is frightening to me is that after 4 weeks of 'phone-a-friend'/post a PDF chapter/analyse ACARS messages we STILL do not really seem to be sure what the AF crew had left. Experts sift back and forth, 'maybe this and maybe that, but....' - all with the luxury of time. This crew had minutes to sort out an apparent cascading deterioration.

To me this says we need 2 things, 2 basic foundation-level things for starters.

We need a system in the cockpit that DEFINITELY leaves a crew with a basic flying panel, albeit limited - maybe no IAS or altitude, but at least power and attitude and does not just dump a pile of hot poop in the crews' laps and go off shrugging its shoulders. If that means a simple, battery powered AI, then fit it.

We need the crew to be able to revert to this basic instrumentation and make a reasonable fist of descending away from performance-limiting altitudes, where they can take time and try to 'reboot' all the gismos at a more leisurely pace. We need basic skills, as demonstrated by the AMS, PGF and Buffalo accidents, and far less 'over-confidence' in the magic.

2 tasks then, as I see this. One is for the manufacturer/regulators/operators to ensure something usable remains, and not to be seduced into glittery-eyed fascination with how clever everything is. The second for the pilot fraternity to press hard for a change in the philosophy and application of training and recurrent testing. Learning how to programme and push the buttons is important, but more important is to be able to pick up the pieces. These requirements WILL impact on the bean-counters. The question is how do we get it done?

EGMA
1st Jul 2009, 19:33
BOAC: Spot on!

I'm a private pilot and know how easy it is to become disoriented in IMC at night even when all is well. I also write safety-critical software (non-aviation), so I understand the problems in handling real data that may become corrupt/invalid; generally a tidy abort is the best outcome.

Humans are lousy at monitoring automated systems, and most automated systems are lousy at giving meaningful error messages when they eventually FU. The situation that the AF pilots found themselves in IS going to happen again, and with future advances in FBW technology it will be even harder for the pilots to troubleshoot.

In an ideal world the human would fly the plane and the computer would monitor his/her performance ... but that's not going to happen ... the bean counters don't like it when our pilot shortens the fatigue life of an airframe with a heavy landing.

Let's not fool ourselves; the ultimate goal of FBW is to facilitate aerodynamically unstable passenger transports, with the fuel savings that that would bring. We are in a learning phase; the important lesson that we must learn is that any computerized system has limitations, just like our pilot. The problem is getting our FBW system (when it can no longer cope) to hand over to the pilot efficiently. Herein lies the problem: it can't hand over until it fails, and when it's failed it's too late. The pilot needs to know what started the sequence, not the result.

It seems to me that a possible solution would be to provide an independent flight performance monitoring system. It need not concern itself with who is flying (computer/pilot) or provide corrective action. A simple aural 'CFIT in 30 seconds' would be all that was necessary; in the AF case, 'airspeed' would have been all they needed to be told to know that things were going south.
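To make that concrete, here's the sort of thing I mean - a minimal sketch in Python, purely illustrative; every threshold, field and message below is invented for the example, not taken from any real avionics spec:

from dataclasses import dataclass

@dataclass
class Snapshot:
    ias_kt: list            # airspeed readings from independent pitot sources
    radio_alt_ft: float
    vertical_speed_fpm: float

def alerts(s: Snapshot) -> list:
    """Cross-check raw data and return plain-language alerts.
    The monitor never takes control; it only tells the crew WHAT went south."""
    out = []
    # Disagreeing airspeed sources are the first hint, long before any law change.
    if max(s.ias_kt) - min(s.ias_kt) > 20:
        out.append("AIRSPEED UNRELIABLE")
    # Crude time-to-impact estimate while descending.
    if s.vertical_speed_fpm < 0:
        seconds_to_ground = s.radio_alt_ft / (-s.vertical_speed_fpm / 60.0)
        if seconds_to_ground < 30:
            out.append("CFIT IN %d SECONDS" % int(seconds_to_ground))
    return out

print(alerts(Snapshot(ias_kt=[275.0, 140.0, 272.0],
                      radio_alt_ft=2000.0, vertical_speed_fpm=-10000.0)))

The point of the sketch is the architecture, not the numbers: it reads raw data independently, applies dumb cross-checks, and says what is wrong in plain language instead of rebooting or staying silent.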

I don't know what happened to AF447 but I'm certain that distraction/misinformation would have been a major contributory factor.

And yes, you're right, an independent basic flight control system would have been much more use than a trouble shooting manual.

BOAC
1st Jul 2009, 20:00
Thank you EGMA for your support. I was beginning to think we ought to be discussing plumb bobs and cats tails a la R&N thread..................

Mr Optimistic
2nd Jul 2009, 11:31
...in modern aircraft ? In addition to the full strapdown IMU ? If so, doesn't this give 'attitude of last resort' info ?

GlueBall
2nd Jul 2009, 11:55
Standard, inexpensive, self contained, electro-mechanical SAI [Standby Attitude Indicator with built in gyroscope], completely independent from everything and anything, hot wired to its own battery, would save the day whenever the glass dashboard goes on vacation.

BOAC
2nd Jul 2009, 12:56
Should we consider non-pressure-driven engine instruments, i.e. N1, not EPR? I would hope the pitch/power tables offer a back-up if they are EPRs?

Mr Optimistic
2nd Jul 2009, 21:31
..if it were possible to have an attitude reference of last resort (or be able to switch to a known good source...yes I know), would a big button which did no more than try to get wings level and pitch to +3 degrees be an impossibility? It would have to have control algorithms built up by extensive testing of modelled upsets, take very conservative assumptions about the control deflections it could apply, and leave throttle settings to the pilots. Might be a compromise (does the right things in this scenario but not that, acts too slowly in this case etc) but better than guessing when disorientated?
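To show what I mean by 'no more than' - a sketch only, with gains, limits and rates invented rather than flight-tested, and nothing here claimed to be how any real FBW system works:

# A 'recovery button' control law: roll wings level, pitch to +3 degrees.
# All constants are made-up illustrative values.
TARGET_PITCH_DEG = 3.0
TARGET_BANK_DEG = 0.0
MAX_DEFLECTION = 0.25   # fraction of full authority - deliberately conservative

def recovery_command(pitch_deg, bank_deg):
    """Return (elevator, aileron) demands from attitude error alone.
    Throttle is deliberately left to the pilots, as proposed above."""
    k = 0.05  # proportional gain: degrees of error -> fraction of deflection
    def clamp(x):
        return max(-MAX_DEFLECTION, min(MAX_DEFLECTION, x))
    elevator = clamp(k * (TARGET_PITCH_DEG - pitch_deg))
    aileron = clamp(k * (TARGET_BANK_DEG - bank_deg))
    return elevator, aileron

# Nose 10 degrees low with 40 degrees of bank -> gentle nose-up, roll toward level.
print(recovery_command(pitch_deg=-10.0, bank_deg=40.0))

The deliberate crudeness is the point: conservative clamped demands, no throttle authority, nothing to configure while disorientated.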

If this is rubbish, pls laugh and then delete.

Still like vertical gyros, but these were only good for a few seconds in an accelerating environment (where it could no longer find 'g' and correct itself).

BOAC
2nd Jul 2009, 21:51
Not really - the pilots need to be able to select the pitch they require - maybe level flight, maybe a descent, and, of course, there's more software involved in that.

Tmbstory
3rd Jul 2009, 07:24
BOAC:

Thank you for the post. It is 100 percent on the mark.

Your Basic Foundation-Level Things, number 1 and 2, have a lot of merit.

The Regulatory Authorities and the Industry (Manufacturing, Management and Pilots) must work for and understand Safety and apply the original concept of Part 25 certification.

Regards


Tmb

Mr Optimistic
3rd Jul 2009, 16:14
..and articulating the problem(s) clearly, backed explicitly by your (collective) professional experience, without starting a public row and putting the wind up the paying customers. That's a big ask, as they say.

BOAC
3rd Jul 2009, 16:28
....and hoping for 'big' answers from 'big' players!

Carnage Matey!
3rd Jul 2009, 17:39
We need a system in the cockpit that DEFINITELY leaves a crew with a basic flying panel, albeit limited - maybe no IAS or altitude, but at least power and attitude and does not just dump a pile of hot poop in the crews' laps and go off shrugging its shoulders. If that means a simple, battery powered AI, then fit it

Perhaps we should turn the question around and ask what aircraft does NOT provide this information. I know on the A320 you will always have at least a PFD and an N1 indication.

BOAC
3rd Jul 2009, 17:49
Did the BA 320 that had the electrical problem a while back retain both of those?

Carnage Matey!
3rd Jul 2009, 18:09
Perhaps I should have said you'll always have an AI and an N1. The standby AI was working, and I've not seen anything in the AAIB report that suggests the auto-switchover of the engine indications to the lower ECAM display had failed (that said, IIRC on an A320 the UNRELIABLE IAS drill requires the pilot to select the thrust to certain physical gates on the thrust quadrant (CLB/MCT/TOGA), so an actual N1/EPR isn't required). Notwithstanding that, the failures in this case were not to do with computers in the cockpit but to do with electrical failure, so is it really pertinent to the thread?

Of course if you are talking about a further failure following dispatch with the lower ECAM display inop then we are starting to get into a whole new realm of possibilities.

BOAC
4th Jul 2009, 07:35
CM - I was specifically TRYING to keep away from types, manufacturers and specific failures; merely to try and see if we can set some ground rules in an industry which is changing significantly in many ways.

Safety Concerns
4th Jul 2009, 15:08
sorry to be the one to buck the trend but the ground rules have already been set.

Pilots, due to their human nature, were and still are the weakest link in the chain (that is not intended to be derogatory, just fact). The figures are clear: more automation, fewer accidents, better safety. Yes, the automation does occasionally fail or get it wrong, or there may still be situations yet to be covered by software, but the answer most certainly isn't to hand control back to fallible humans.

The answer unfortunately for pilots is to continue striving forward until the automation has been perfected.

BOAC
4th Jul 2009, 16:41
The answer unfortunately for pilots is to continue striving forward until the automation has been perfected. - unfortunately, since we have no idea how long this process will take, you need to change that to The answer unfortunately for pilots and passengers....... I think we owe it at least to the latter to seek some escape from where we are heading.

Mr Optimistic
4th Jul 2009, 17:23
Echoing SC, isn't the question settled in principle and now the only room for consideration relates to how to reduce the v. small residual risk even further, ie when human intervention is still needed (software goes for a walk, equipment failures, bad weather right in front of you ?). Tailored training, (even) better i/f !, better sims, ops margins ?

Even if it is not agreed, isn't it how the issue will be presented/managed ?

BOAC
4th Jul 2009, 19:10
You have chosen your user-name well!:)

alf5071h
4th Jul 2009, 19:49
BOAC A ‘big’ answer (well lengthy), not a big player; and not really an answer, more observations and questions.
Re #1 “… to ensure something usable remains…”
“… for the pilot fraternity to press hard for a change in the philosophy and application of training and recurrent testing.”

CS 25 (certification requirements for large aircraft) provides for ‘the something usable’.
Invariably, use of ‘the something usable’ is assumed (see the relatively new HF design requirements - CS 25 AMC 1302), but how can we guarantee that the crew will use it in all circumstances?
Many crews and operations do not understand the basis of certification and the assumptions therein; the result can be inappropriate training, poor SOPs, and unfounded concerns that may lead to inappropriate actions following a failure – an ‘I know better’ attitude. Thus the associated problem is one of understanding, which stems from knowledge – training – or more accurately education.
Due to the inherent limitations in human performance there is always the risk that crews will focus on trouble shooting and the reinstatement of the high-tech systems. This trait is perhaps more prevalent as the industry’s operations and training become technology dependent.

Our problem is that we are becoming technology ‘junkies’; no longer are we ‘children of the magenta line’, but we are developing into hardened technology addicts with all of the dependencies therein. This is partly a function of the techno-sociological world we live in; the initial schooling, play and relaxation, and the behaviours within other industries all around us – it (technology dependency) becomes our ‘expectation’.

An additional problem is that similar sociological, commercial and operational pressures, which generate the technological dependency, also affect the application of crew training and testing. Operators can elect to maintain a standard higher than the minimum specified by regulation; thankfully many do, but unfortunately this does not provide complete immunity from the random nature of accidents in a highly reliable industry.
Thus within training and testing, the problem could be associated with a lowering of standards in the application of the rules (what can we get away with), i.e. falling industrial professionalism. This is often reflected in personal attitudes to professionalism – airmanship – but also in corporate culture.
Just because ‘your training system’ does not teach an aspect, should not negate self-improvement, even though we work in a high pressure time critical environment with ever increasing demands on ‘our’ spare time.

(FAST) A mid-1990’s study by a major manufacturer looked at accidents in which airplane systems were involved in an accident or where they could have prevented the event and did not. It was found that in approximately 70% of the accidents involving airplane systems, the original design assumptions were inadequate for the situation existing at the time of the accident due to changes in the aviation system, airplane operational usage, personnel demographics, evolving infrastructure, or other considerations.

Thus current problems probably result from ‘change’ and ‘systematic complexity’, the ‘systems’ involving human activity. Complexity itself isn’t a problem; it’s the way we deal with complexity and the human interface, including understanding, need, objective, and mechanism of the ‘system’. In this respect we may be overregulated – too many operational regulations, thus too complex to expect reliable implementation or to be correlated with certification regulations, e.g. certification claims alleviation for short-duration ‘safe’ flight without airspeed, assuming that crews are adequately trained – is this always true – P2/P3 combination?
Can we see these changes in the complexity – are we looking? If seen, how is their importance assessed; do we choose an appropriate activity to combat any hazard?
We are a safe industry by most standards, but in safety there is no place for complacency – failures in looking, assessing, and deciding.

The industry depends on technology; in general, we created that need. The industry has yet to understand all aspects of technological dependency (accidents are often an unfortunate learning process), and individually we need to have greater understanding of the technology and the surrounding objectives and assumptions when we use it.
We have yet to learn to live with ‘aviation’ technology – we have to change, but in this change there may be more hazards. Combating these aspects requires thought and analysis – basic thinking.
For all of the human weaknesses, the human is still a powerful defensive aid – we can identify problems and deduce solutions. Do we teach pilots to think in these ways and adequately train them for the range of critical situations (stress, time dependent) which might be encountered?
Thus this problem is not only about technology, but the process of how to think – situation awareness, decision making – full circle back to airmanship, including skills and personal standards.

We are part of a self-generated complex system. In implementing technology, perhaps we have forgotten to look (think), or judge wider ranging aspects in the larger system which unfortunately may only surface with use – contact with aspects of the system, us - humans.

“No plan survives contact with the enemy” - Helmuth von Moltke (often attributed to Clausewitz).

CS 25 Large Aircraft. (www.easa.eu.int/ws_prod/g/doc/Agency_Mesures/Certification_Spec/CS-25%20Amdt%205.pdf)
FAST Presentation. (www.nlr-atsi.nl/fast/overview.php)

BOAC
5th Jul 2009, 14:22
Alf - many thanks for your time and effort in a well-thought out post.

I think the core of your post can be summed up in the word 'airmanship' you use. I too believe strongly in the capability of the human brain to reason its way through an unexpected situation, often with success where 'systems' would fail. I recall the period in BA when we were informed that the word 'airmanship' was not to be used, as the great god DODAR was the correct expression. With that thoughtless directive died a lot of 'airmanship'. I always remember one of my first QFIs telling me that 'airmanship was the ability to avoid a situation where you have to use airmanship':)

As you accurately put it, there is a lack of emphasis/motivation/direction/incentive - call it what you will - to encourage new pilots to improve their skills as I feel we were brought up to do. Reliance on the 'magenta line' is encouraged to the point that when it fails, they are effectively confused, if not lost. Many times I have said "young man/woman - look at this solution to the situation" to be met by a puzzled "why should I bother?" look.

I think the most appropriate paragraph of yours is

"We have yet to learn to live with ‘aviation’ technology – we have to change, but in this change there may be more hazards. Combating these aspect requires thought and analysis – basic thinking.
For all of the human weaknesses, the human is still a powerful defensive aid – we can identify problems and deduce solutions. Do we teach pilots to think in these ways and adequately train them for the range of critical situations (stress, time dependent) which might be encountered?

Thus this problem is not only about technology, but the process of how to think – situation awareness, decision making – full circle back to airmanship, including skills and personal standards."

I too think we have a way to go in embracing the new technology - it is STILL outstripping us as it was in the early A320 days.

hexboy
5th Jul 2009, 17:16
Having read this thread after reading a number of others about various recent tragic accidents, it is good to see that some thought/comments can be posted without the poster being shot down by the "other" side. (Aren't all pilots on the same side?)
The 2 sides being - older pilots who have converted from cables/hydraulics to fly-by-wire in their careers, and younger pilots who have only flown fly-by-wire after receiving the correct training for this equipment as required by their management and aviation authorities.

As you rightly said - we owe it to the passengers and also to all the crews who do such a magnificent job day after day and night after night.

As a lot of the discussion seems to centre around computers - which are essential - and the inability of flight crew to cope when these are not working correctly for whatever reason, I would like to ask whether it would not be practical for an airline to include in its training schedule, a module where each pilot is required, on an annual basis, to do a certain number of hours in a basic, single engine, cable operated training aircraft.
It could be done at a GA field and all kinds of situations including go arounds, engine failures etc. could be included.

Yes, a large number of airline pilots fly/own small planes, gliders, aerobatic planes etc. and with proof, these could be exempted from the annual requirement.
It would be interesting to know, where an accident is proved to be mainly due to pilot error, whether the PF has any ongoing experience with small basic aircraft.

With a young son who is just starting out on a flybywire aircraft for a large airline, it would make me very glad to know that his training included some ongoing basic flying skills as well as all the electronics.

BOAC
6th Jul 2009, 12:13
Hex - thanks for your (obvious) interest. Personally I do not think basic flying skills are in question; more, are they being given the right emphasis in the training and testing on modern a/c? As I have said, I perceive the problem as being one of diminishing emphasis on these. Even in the 90's I saw command checks being failed on 737's following a runway change to a parallel runway at about 10 miles, where the 'candidate' went head down to programme the computer for the new runway, thereby messing up the approach - probably because of the 'training mentality' which existed, which was that the flight management system should be used at all times rather than just flying the a/c onto the new runway. It is very easy to be distracted by all the button pushing and electronic messages away from the primary task, which is to fly the a/c. Concomitant with that, we must ensure that there is enough information to enable this primary task.

flipster
6th Jul 2009, 14:51
Good questions and answers all,

I'm with BOAC on this one!

In my limited recent experience - Boeing, AB and other passenger-carrying aluminium tubes all have 'standby' AIs in the shape of ISIS/ISI or a Stby AI plus an RMI (or sommat similar), and they usually have independent power supplies for gyros and air data feeds etc.
However, the question is - when was the last time we practiced using said standby instruments? It's a bl**dy tricky scan - even to maintain S and L at height. Furthermore, SA can be difficult to achieve without RMIs/BDHI and DME/ILSs etc. Loss of any power/thrust indications only makes matters worse.

Sadly, I don't think there is any requirement to practice/test any sort of skill on these rudimentary insts - I'm not sure it was even part of the Boeing/Airbus initial training (and it did not crop up in some previous companies' recurrent trg) - surely this is wrong? However, in a previous life, and although there was no legal/staff/company requirement to do so, I used to get myself and our pilots/FIs to practice a descent, arrival and an SRA/ILS on standby insts once a year - it always produced some interesting debrief points!

Next time you fly, imagine such an approach on a dark and dirty night with limited battery time left!

jeanray
6th Jul 2009, 15:09
Being a PPL, I do not fly glass panels, but, before 9-11, was always in the cockpit with my friends of the TAP (portugal airline), and always "horrified" by the computer controls... The pilots do not FLY anymore: they punch keys and buttons. Most of them don't even need to hold the stick or the yoke. What's the point of being a PILOT? If the computer goes, so does the plane. So why do we need a pilot?
In our small club, we have a member who is an airline Captain. He comes in when he has time, and flies the old chipmunk, the robin, the Yak or the 152.
His comment: ".. need to remember what flying is about"
Another is an Australian Air Force pilot, now 84 (and STILL flying!).
His comment: ".. too many instruments" (we are talking of the six basic!)
No comment...
Gimme wires and cables, and let me drive!

john_tullamarine
7th Jul 2009, 01:24
following a runway change to a parallel runway at about 10 miles, where the 'candidate' went head down to programme the computer for the new runway

You have to be kidding .. but I know you're not .. a sad indictment of the Industry's training programs and general understanding of flight management .. regardless of which seat is concerned.

BOAC
7th Jul 2009, 07:22
JT - I saw it happen too on an early training detail where I was 'Safety Pilot', this time into CPH. I'm sure it is/was a result of 'culture' rather than piloting skills, as ALL the pilots involved were well-hardened 'old' and traditional 'stick and throttle' men (and in CPH it was BIG CAVOK as well that day).

I well recall the introduction of the 734 into my then company (late 80's), and the whole training dept was so starry-eyed about the 'fantastic' FMC that we were told EVERYTHING had to be actioned via the CDU and 'execute', and that to use LVLCHG, V/S or (heaven forbid) manual was a black mark. This caused me to screw up a linecheck into Geneva, which is/was renowned for 'shortcuts' onto the southwesterly over the lake: my desire to disconnect and fly down to capture the g/s was negated by the ruling 'philosophy', and by the time I had entered the new altitude, checked and executed, and the auto had 'thought' about a gentle throttle closing and descending.........................shortly after that the trainers recognised it was still just an aeroplane.:)

It is this sort of mental environment I fear now - what is known in UK as 'The King's new Clothes' from the song. Perhaps we need the 'little boy' to shout out again?

Clandestino
9th Jul 2009, 00:50
Excellent thread BOAC! :D

However, some misconceptions about FBW and flying the modern transport aeroplane posted here really give me the creeps, because I suspect that they are so widespread and wrongfully accepted to be true that there might be some FTOs/TRTOs/CAAs/airlines basing their policies on them.

Our 'new' pilots don't really need the old-fashioned basic flying skills, since these systems prevent abuse/mishandling.

That's a folly that can turn out to be fatal too easily. From personal experience, the skills needed to fly the A320 safely are not much different from the skills required on the ATR-42 or DHC-8 Q400. Sadly, too many lives and airframes were lost in proving that one can stay well clear of all the FBW protections and yet wreck the aeroplane. A320 FBW can prevent overbank, overspeed, overload or stall. It cannot recognize that the runway is wet and that landing fast and long is not a good idea. You still need the head-mounted computer to resolve that.

Let's not fool ourselves; the ultimate goal of FBW is to facilitate aerodynamically unstable passenger transports, with the fuel savings that that would bring.

Hopefully not. Even if it were true, the certification requirements would have to be changed for worse. As it is, A320 is flying sweetly and handling docilely in direct law, when there are no protections, there is direct stick-to-control-displacement and trim is manual via pitch wheel. From what I've gathered about 777 and E-jets, to get them certified, their manufacturers needed to prove that they can be safely flown in degraded FBW modes, so it seems that no current FBW transport aeroplane is unstable. May it long remain so.

Pilots, due to their human nature, were and still are the weakest link in the chain (that is not intended to be derogatory, just fact).

Sometimes they are, but it is not to be taken for granted. I was unable to find a single instance where "pilot error" was not facilitated by some systemic error, like: low quality initial training, insufficient recurrent training, weak regulatory oversight, badly designed procedures, lousy cockpit ergonomics, management pressure.... just name it, there are tons of them. It's unfair for two guys/gals at the pointy end to get all the glory when all goes well. It's even more unfair to unload all the blame on them when it doesn't.

The pilots do not FLY anymore: they punch keys and buttons.

It's a misperception. People whose skills end with punching the buttons are not pilots, they are system operators and are not supposed to be allowed in the flight deck (in the perfect world, anyway). Pilots fly and continue to fly when systems fail. When computers tumble, system operators have nothing to fall back upon.

alf5071h
9th Jul 2009, 01:38
hexboy, I assume that you, I, BOAC and JT, would take an ‘older’ view (#22).
However, I had the good fortune to grow up with technology, develop, test, certificate, and see the early systems into service. Now with hindsight perhaps the training and support associated with those systems was less than that required. The manufacturers did not adequately prepare the industry for the technology change – they only ‘sold’ it.

In mitigation, most operators’ wish lists were similar to that of any computing system – ‘let’s have it all’. Pilots, chief pilots in particular (very old), did not know what they were requesting or understand how technology was to be employed. Thus, supported by ‘marketing’, the technology was to be everything to everyone.
More recently, there are signs with the advent of second generation Airbus and FBW Boeing aircraft that this trend is reversing, and Airbus in particular has put enormous effort into operational support and human factors.

I don’t support the need for refresher flying on ‘cable’ aircraft. If pilots have been appropriately trained in the basics, which might be questionable, then these skills should not be lost even if they degrade due to lack of practice. If FBW aircraft are occasionally hand flown and the more obscure skills are practiced in the simulator, then crews should have sufficient capability (flying skill) to deal with most situations.
The problem that I perceive is that crews don’t know when to use these basic skills. This is a complex issue relating to situation awareness and assessment, and decision making – the airmanship aspects; but most of all it is the lack of ‘experience’, the ‘know how’, ‘know when’, ‘know why’, that are so important in aviation. These aspects, relating to technology, would not be gained in a few flights in ‘cable’ aircraft, nor in routine operations with technology without assistance.

Crews flying with modern technology (in fact all crews) must be taught the thinking skills which would enable them to deal with a range of problem situations often seen as ‘emergencies’ in current operations. Also, individuals have to practice these skills and develop a wide range of tacit knowledge (know how), contributing to, and enhancing, experience.
One of the most powerful tools for this is debriefing; now where’s that in modern operations?
Pilots who strive for self-improvement should conduct self-debriefing (analysis – how did I do?), continue to learn, and seek a greater depth of information.
But herein lies another problem; many training systems qualify pilots with frozen ATPLs. These pilots ‘have the qualification’ for Captaincy; they might believe that they already have the necessary knowledge – they ‘have passed the exam’ – and often there is no subsequent examination of airmanship unless the operator and the more ‘knowledgeable’ Captains encourage and develop airmanship in these pilots. But where is the time and opportunity for this in a modern high-pressured operation?

In addition, I detect a growing lack of confidence in junior pilots; they wish to fall back on SOPs and seek more regulation in their operations, they like being ‘boxed in’, a feeling of security – a possible result of the current litigious society, enhanced by over-regulation and weak corporate culture.

So again, I conclude, the problem is not just technology, or in this instance flying skills, it is the human interaction with all of the many aspects in aviation and the world at large – the big system.
Therefore, solutions might reside in a broad spectrum of activities such as teaching aviation thinking skills, developing airmanship and experience, revising the regulatory structure, and reviewing organizational pressures.
A starting solution could be to ensure that all Captains mentor the newer pilots, provide time to debrief, and in the absence of guidance, explain technology related SOPs. These might simplify some of the complexities of aviation life by focusing on what is important, when, and why – small changes in professional culture, but it would be a start.

In Europe the regulatory aspects are gelling and there is focus on organizational safety; but I fear that underlying this is the belief that safety can be regulated, a consequence of an ‘administrative’ image (as with the FAA), as opposed to an agency in which a much-needed co-operative, partnership approach to safety might develop.
These are a long way from the problems of technology, yet at the workface, it’s up to us to contribute the best we can. Everyone will have to work hard to retain the current well-deserved professional status and provide guidance for future generations to achieve the same – we have to get them thinking about ‘it’ – technology too.

Airbus Safety Library. (www.airbus.com/en/corporate/ethics/safety_lib/index.html)
Skybrary Human Factors – Airbus contribution. (www.skybrary.aero/index.php/Portal:OGHFA)
Tacit Knowledge. ( http://proceedings.informingscience.org/InSITE2004/050maqso.pdf)
Professionalism, (for Law, read Aviation). (www.ontariocourts.on.ca/coa/en/ps/speeches/professionalism.htm)

SLFinAZ
9th Jul 2009, 02:12
I think the single biggest question I have is the amount of actual time hand flying the aircraft as a ratio. The total time and takeoff/landing data for the AF crew is posted in the other thread; with 2 or 3 takeoffs and landings a month and minimal hand flying otherwise, how much "feel" do you retain? Especially if all but the first/last minute or so is AP controlled and the AT is enabled. Recognizing the complexity involved in the diagnostics/troubleshooting, I still think that some minimal % of hand-flown takeoffs and approaches has to instill a degree of feel for flying the plane while dealing with complications and unexpected scenarios.

BOAC
9th Jul 2009, 08:20
Some excellent views coming in there - thanks to all. I do feel this topic needs to be looked at seriously.

I would echo 'clandestino' particularly in 2 places:

"It cannot recognize that the runway is wet and that landing fast and long is not a good idea. You still need head mounted computer to resolve that."

and

"People whose skills end with punching the buttons are not pilots, they are system operators and are not supposed to be allowed in the flight deck (in the perfect world, anyway). Pilots fly and continue to fly when systems fail. When computers tumble, system operators have nothing to fall back upon."

A good summary, I feel. I worry that the new generation of pilots have grown up with superb Wii, Xbox etc simulation and an understanding that the 'box' is wonderful. It cannot be far away that they will do most of their flying training with EFIS-type displays and basic GPS/LNAV facilities.

The first quote from clandestino points out that we must somehow instill this basic 'airmanship'/'seat of the pants'/'anal sphincter tightening' recognition - call it what you will - in their upbringing. Calls to 'cancel' the concept of airmanship in exchange for some structured process of analysis should be discouraged. I recall an excellent (and unpopular) company article from a BA pilot pointing out that not everyone is 'comfortable' with DODAR as a panacea for all ills since THEIR perfectly acceptable logic processes were trampled on by the rigid constraints imposed.

I am pleased to hear that alf thinks that the 'trend' is reversing. What I would like to achieve here are gentle nudges in that direction.

Lastly, for SLFinAZ - things have not changed much over the years. Long-haul pilots often went a month or more between landings. All that has really changed is that pilots nowadays fly more frequently than of old while the need for 'hand-flying' skills has reduced.

Tmbstory
9th Jul 2009, 19:12
The more "normal" types of failures can be coped with fairly quickly.

The unknown (initially) emergency takes a finite time to come to grips with.

If you have that time then good things will follow, if not, then you will be in the papers the next day.

Either with computers or manually, you have to act in the time that you have.

The posts on ' Airmanship Qualities' are good value.


Tmb

alf5071h
10th Jul 2009, 01:23
Re: "It cannot recognize that the runway is wet and that landing fast and long is not a good idea. You still need head mounted computer to resolve that."

I don’t agree with this completely. Many landing accidents have similarities with other human error accidents. In these, the crew either failed to identify the conditions, or after detecting the situation, failed to act correctly (incorrect choice of action); the latter is perhaps more prevalent in landing overruns.

Although a computer (technology) may not be able to detect the landing conditions with sufficient accuracy to calculate the landing performance, there are components of existing systems which could provide an alert of increased risk – FMS wind, + windshield wipers in use, + approach speed for FMS weight, and + flight path angle/altitude.
A simple computation (energy?) could provide an alert when a ‘risky’ situation exists – a heads-up to the crew, “have you seen this”, “have you considered …”; i.e. time to start thinking.
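As an illustration of the simplicity I have in mind – a sketch only, with every threshold and weighting invented for the example – such an alert need do no more than count independent cues:

# Sketch of a 'risky approach' heads-up built from cues the aircraft already has.
# Thresholds are illustrative inventions, not certified values.
def approach_risk_flags(tailwind_kt, wipers_on, speed_excess_kt, path_angle_deg):
    flags = []
    if tailwind_kt > 10:
        flags.append("FMS wind: tailwind component")
    if wipers_on:
        flags.append("wipers in use: runway probably wet")
    if speed_excess_kt > 15:
        flags.append("approach speed high for FMS weight")
    if path_angle_deg > 3.5:
        flags.append("steep flight path / high on profile")
    # Two or more cues together -> 'have you considered...' alert to the crew.
    return flags if len(flags) >= 2 else []

print(approach_risk_flags(tailwind_kt=12, wipers_on=True,
                          speed_excess_kt=20, path_angle_deg=3.2))

The value is not in the numbers but in the principle – several individually innocuous cues, taken together, are worth a ‘heads up’.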

Alerting of this form is the basis of cross crew monitoring and CRM/intervention; however, both pilots could be subject to error simultaneously, and also there are personality issues such that the alert may not be given.
The advantage of a technology based alert is that humans are biased towards it – we like to believe what computers tell us.

Now consider a higher level of alert like EGPWS. This is a highly reliable system where failure to heed the warnings could indicate irrational behavior; yet some pilots do not pull up.
For EGPWS, a higher level of technology with auto pull-up may provide the necessary stimulus. It would be a brave or foolhardy pilot who did not allow the autopilot to pull up, and even if it was decided to overrule the warning, there has to be a conscious effort to disconnect the autopilot and maneuver the aircraft.
IIRC this is ‘technology aided decision making’, where significant aspects of a situation are presented to a pilot as a course of action; once in place, the human is biased to agree with what is happening (the Loss Aversion Heuristic).

So it’s not all bad news about technology: it’s what we use it for and how we use it that matters; it has limits, and understanding these is important.
In a similar approach and landing situation, technology might not be able to predict (look ahead, what if) aspects of a situation which a human could judge better – if only the human would ‘look ahead’.
E.g. a storm passing over the airport presents hazards of windshear, lightning and turbulence, but once it is clear, a landing may be attempted in relative safety; but what reminds the pilot to consider that the storm could have flooded the runway, and that outflow winds could give a tailwind – a similar situation as above, different time frame – the future. Solutions to this type of problem probably require human thought, but is that always forthcoming; are humans any more reliable than computers?

Errors in Aviation Decision Making. (www.dcs.gla.ac.uk/~johnson/papers/seattle_hessd/judithlynne-p.pdf)

Analyzing Explanations for Seemingly Irrational Choices. (www.insightassessment.com/pdf_files/IJAP_Analysis_Paper.pdf)

Perspectives on Human Error. (http://csel.eng.ohio-state.edu/woods/error/app_cog_hand_chap.pdf)

Jesper
11th Jul 2009, 22:09
I'm a frequent PPRuNe reader and I sincerely think this is, by far, the best thread I've ever read on here! Hopefully the guys with something to say in these kinds of questions have a glance here now and then!
Thanks!

Tmbstory
12th Jul 2009, 19:26
There is excellent information in these posts for discussion.

However please remember that in an actual critical situation you may be faced with a decision to make and the action to take, in far less time than it takes to read an average post. It may be only seconds.


Tmb

alf5071h
13th Jul 2009, 02:18
Six design rules for human designers of automation:
Provide rich, complex, and natural signals.
Be predictable.
Provide a good conceptual model.
Make the output understandable.
Provide continual awareness without annoyance.
Exploit natural mappings.
Or an alternative view:-
1. Keep things simple: People have simple minds, so talk down to them.
2. Always give people a conceptual model: People have this thing about “understanding,” so give them stories they can understand (people love stories).
3. Give reasons: People are not very trusting, so make up some reasons for them. That way they think they have made the decision.
4. Make people think they are in control: People like to feel as if they are in control, even though they aren’t. Humor them. Give them simple things to do while we do the important things.
5. Continually reassure: People lack self-confidence, so they need a lot of reassurance. Pander to their emotions.

Before anyone lights the flaming spears, read the source of these quotes (www.jnd.org/dn.mss/Norman%20HowToTalkToPeopleDOFT.pdf); and don’t forget that humor has an important role in thinking as it involves more than one viewpoint.

Related link.
Don Norman's jnd.org / user advocacy and human-centered design (http://www.jnd.org/)

jolly girl
13th Jul 2009, 08:30
Apologies if this is a bit off thread (but maybe not, the thread name is "Computers in the cockpit and the safety of aviation")...

Lately I have become interested in what researchers* refer to as "disturbance management," described as when "highly complex dynamic and event-driven domains such as aviation require operators to diagnose and cope with the consequences of breakdowns in human-machine performance that interact, cascade and escalate over time while maintaining the integrity and goals (i.e. efficiency, safety) of an underlying dynamic process."

I am looking for examples of (for lack of a better way to put it) ineffective disturbance management in the form of open-domain incident or accident reports. (I am aware of Strasbourg, Indian Airlines at Bangalore and Cali in the '90s, but am looking for something more recent.) If anyone could point me in the right direction I would be much obliged.

Jolly

PS - ALF5071H - reference your comment "there is always the risk that crews will focus on trouble shooting and the reinstatement of the high-tech systems," research in the sim indicates this is the case. (Sarter & Woods in Human Factors, 1997 and 2000.)

A37575
17th Jul 2009, 09:03
Humans are lousy at monitoring automated systems
I can't resist this. Back in 1990 I was flying as a contract pilot for a German 737 operator. Loved the job and a wonderful experience. The F/Os, however, were quite nervous about hand flying even in the best of weather, and the automatics were engaged within seconds after lift-off. On the other hand, I was well aware of the use-it-or-lose-it principle and kept my hand in, literally, with hand-flown SIDs and STARs using basic navaids where applicable (RMI) or the NAV selection of the HSI. Of course, it had to happen, and a thoroughly alarmed first officer reported to the chief pilot that this Englishman actually hand flies this German-registered 737. Tea and bikkies followed, and the kindly chief pilot explained that in this airline the first officers were not trained to monitor raw-data hand flying, but only trained to monitor the automatic pilot. His point taken, it was back to button pushing and knob twirling for me.

And now you tell me that humans are lousy at monitoring automatic systems! :D

Tmbstory
17th Jul 2009, 12:38
Glad to read your post and hope you continue the hand flying whenever you get the chance.

I have retired now but used to enjoy hand flying Corporate Jets from take-off, to cruise and down to landing. I expected the pilots who I was responsible for to be able to do the same. A few stories there!

Pilots should be competent in both hand flying and autopilot / automatics operation. It gives one a lot of satisfaction.

Regards

Tmb

BOAC
17th Jul 2009, 13:58
A37575 brings us back to the topic of over-reliance on automation. Is the 6-monthly 'test' flown purely on autopilot? Is 'hand-flying' at suitable moments discouraged because the monitoring load on the other pilot is either too high or not practised?

Only if the 'automation' is completely 'fool-proof' and multiply redundant with ZERO prospect of failure can we, in my opinion, set off down this road.

Are we there yet? I don't even need 'answers on a postcard'.

alf5071h
20th Jul 2009, 23:36
BOAC re # 40. It obviously depends what’s in the ‘test’. Basic hand flying can be practiced in most situations in highly automated aircraft; this requires personal will power and appropriate SOPs – corporate culture.

Perhaps those who suggest more general hand flying, but not that related to automation or the situations we might allow degrading system lead us into, may have identified another important (emerging?) issue.
Does their plea indicate aspects of a lack of confidence which I sense is increasing; do pilots feel that they need to hand fly ‘elsewhere’ because they feel ill-prepared to handle the big jets – even when hand flying?
If so, this could be due to a lack of appropriate basic training, or to the complexity of technology-enhanced aircraft in the modern aviation environment being too difficult to manage with the current level of training or experience, i.e. not only has the aircraft / technology changed, but so too has the operating environment. We don’t appear to have many Cessnas fitted with FMS or auto-flight systems, nor those which have the necessary performance to fly complicated SID/STARs, CDA, Cat 3, etc.
This begs the question: are we relying on simulations or part-task trainers too much; do they fail to provide a sufficient ‘big picture’ of both the problems and hazards of technology in a complex operating environment? Are the interfaces – the links and consequences – of the generally well-simulated aircraft systems adequately exercised in the operating environment, i.e. do we simulate ATC or operational issues with sufficient accuracy?

For hand-flying issues related to safety statistics (and the situations we allow technology to get us into) I would have expected calls for more ‘loss of control’ flying and operations closer to the edge of established safety margins (limiting runways, RTO). These areas may be technology related, but as argued previously, if the technology is understood, pilots should be able to avoid any hazardous situation. Even with ‘gross’ failures and unforeseen situations (very rare events), the basics of aviation (not necessarily hand-flying skills) should suffice in maintaining safe flight.
Unfortunately recent events suggest that this assumption is incorrect – why? I argued that the standards of training and of professionalism are in decline – that we should revisit airmanship.
However on reflection, what if the assumption that the essential elements of airmanship can be taught without flying is wrong?
Do we need to fly to ‘experience’ airmanship?

BOAC
21st Jul 2009, 06:54
alf - "because they feel ill-prepared to handle the big jets – even when hand flying." I know from experience in my last airline that there is a great reluctance to depart from the comfort of an autopilot-coupled ILS and fly a visual from downwind on a 'nice day'. Very early on in my time there I briefed a visual into XXX and was later advised by P2 that he was 'surprised' as 'very few Captains do that'. Countless times I have had to push hard to get Bloggs to fly even an A/P 'coupled' visual and have in desperation a few times had to take control to save both time and passenger comfort. In questioning afterwards it appeared that the lack of 'example' was persuading Bloggs that it was 'difficult' and therefore better not to risk 'messing up'. As airports go further along the lines of 'no visuals' and automation marches ever onwards in the cockpit, I fear we will eventually lose the ability to position an aeroplane visually from a random position. Likewise the art of 'orientation' - where am I and where am I going - is subsumed by the seductive 'magenta line' - "You are here, don't fret" psychology.

Regarding 'Cessnas' 'fitted with FMS', we are not far away.

As for "However on reflection, what if the assumption that the essential elements of airmanship can be taught without flying is wrong?
Do we need to fly to ‘experience’ airmanship?" - you and I are old enough to recall this endless debate. I have always believed that it was an inbuilt faculty, although it could be honed through teaching and 'absorption'. Almost an inbuilt 'self-preservation' desire?

It is back to the big question - the way we appear to be heading at the moment, do we need these 'skills'? This leads inevitably to the fully automated aeroplane with system failure well into unlikely probabilities.

john_tullamarine
21st Jul 2009, 07:16
there is a great reluctance to depart from the comfort of an autopilot-coupled ILS

I guess that 737s today are different animals to the ones I remember? .. some of us considered it a bit of a nuisance to use the autopilot .. anything under around half an hour's sector length generally was hand flown go to whoa .. unless weather or traffic dictated otherwise. My first flight on the line post sim endorsement was MEL SYD (bit over an hour) ... the autopilot and FD didn't get a look in.

If nothing else the sector confirmed to me that the aircraft was easier to fly than the sim .. and the visuals were absolutely magic compared to the sim's.

BOAC
21st Jul 2009, 08:30
the autopilot and FD didn't get a look in. - well, JT, nowadays you'd be up for 'a chat'. SOPs and all that. No, the a/c are not REALLY different. The growth of the 737 from the 1/200 to the 900 makes it a far less responsive beast, but push/pull and aim still work and they still go around the same corners:)

The problems (amongst others) with the 'visual' against the ILS is that:-

1) You have to think and work a bit more
2) You have to plan things for yourself (no 'turn left now and reduce speed 180kts' sort of thing)
3) With the coupled ILS you don't really need to 'monitor' it, do you........................(AMS)
4) If it 'goes wrong' the chances are it will ONLY be your fault
5) You actually need to look out of the window and not at the coloured screens.

Why risk it?

john_tullamarine
21st Jul 2009, 10:09
.. guess I'm just an over-the-hill anachronistic dinosaur ...

We were fortunate to have a company with a pragmatic turn of mind. The 737 boss, when the 300 was introduced, started down the button-pressing line but then the emphasis shifted to: do it either way .. but be able to do it competently ... both ways!

BOAC
21st Jul 2009, 10:37
JT - viz post #27?

Clandestino
21st Jul 2009, 14:22
'Tis funny how people stressing human fallibility and how wonderful the automatics are conveniently forget that machines do break down. I've had autopilots cutting out and refusing to reengage, flight directors developing minds of their own, autopilots deciding that they know better than the FD and wandering away; a couple of times I've had localizers fail under me, and bent glideslopes were too numerous to count. And I've been flying for a living only the last 8 1/2 years.

A couple of weeks ago I threw away a well-prepared and briefed ILS approach in favour of a visual approach. It wasn't for the sheer fun of it, though it was fun, but because there was a small but very red radar return too close to the ILS FAF for my liking, and a 4-mile visual line-up gave a smooth ride. Now if I were an automatic-ILS fanatic, I might have given my pax a rough ride, or waited in the hold, or even diverted. As it turned out, the arrival was on time and completely uneventful.

Mind you, as BOAC somewhat ironically pointed out, a visual approach is not "let's turn off everything, drop the gear and flaps and dive for the first runway we see". A visual approach needs good preparation and execution to give satisfactory results, and it usually is more labour-intensive than an automatic ILS.

As for riding the magenta line, it is basic airmanship, applicable to anything from a UL glider to an An-225, that a pilot has to know where he is and where he wants to go, and has to fly the aeroplane and not allow the aeroplane to lead him. The magenta line is a good tool for keeping up situational awareness, but it has to be always crosschecked against the pilot's idea of position and direction. If a pilot follows the track line blindly, the required redundancy is lost and the chance of mishap greatly increases. In other words, FMSes, GPSes and all the other electronic marvels of the modern age are not a replacement for the pilot's situational awareness, but a supplement to it.

A similar situation exists with the Airbus ECAM, which automatically generates electronic checklists for detected failures. Just because it says "now press such-and-such button", it doesn't mean that an Airbus pilot doesn't have to know the systems very well or is not required to know what each button does. Especially as ECAM is not able to detect every failure, is (theoretically) quite capable of spouting rubbish when confronted with multiple failures, and is unable to calculate all the consequences of failures. Also it can't be much improved, as that would require its CPU to be replaced with something intelligent, and AFAIK no one has made an intelligent computer yet.

Regarding airmanship, I don't think it's congenital; it has to be acquired and nurtured. Otherwise it withers and dies. It's easy to say "Such-and-such was a :mad:poor pilot, as the final report suggests." However, seeing 20,000 hr pilots putting their trust in the only faulty instrument on board and stalling an otherwise serviceable aeroplane, or busting the MDA on a non-precision approach and flying the aeroplane into the ground, or taking off without clearance, should really make us think. If they really were so substandard, how come they enjoyed such long careers? I'm not sure whether the essential elements of airmanship can be taught without actually flying, but I'm certain that anyone who acts upon the belief that airmanship is old-fashioned and can be replaced by system-operation proficiency is in for a very rude awakening.

There is an Arab saying that goes something like: "Any child can walk into the lion's den, but only the bravest men would go in to save it." Self-preservation kicks in only if one is aware of the danger. Those who are not aware of the ways their APs/FMSes can let them down can easily dismiss my ranting as another old wives' tale. Sadly, not to their own peril alone.

JollyGirl, I'm not sure how to interpret "highly complex dynamic and event-driven domains such as aviation require operators to diagnose and cope with the consequences of breakdowns in human-machine performance that interact, cascade and escalate over time while maintaining the integrity and goals (i.e. efficiency, safety) of an underlying dynamic process," or how it relates to the Cali, Strasbourg and Bangalore crashes. I can offer you my perspective on them: all were cases of blindly following misprogrammed flight guidance computers, despite ample warnings that something was wrong. In Bangalore the altitude selector was set below aerodrome elevation - something that was known to be dangerous in any such equipped aircraft, not just the A320, and yet it was done. Strasbourg was a case where the design of the control unit helped the crew set 3000 fpm instead of a 3° flight path, but the PFD showed nose-down pitch, the altimeters unwound rapidly, the IVSIs showed a rapid descent, and there was no reaction from the flight deck. Even worse, the aeroplane would have missed the mountain it hit if it had only been on the proper final approach track. Cali was the case of following the track line despite it being at 90° to the desired track. Also, it is basic airmanship that if there's doubt about one's position in descent, a level-off is a must. Sadly, it got neglected here.

617SquadronDB
21st Jul 2009, 19:43
BOAC brings an interesting perspective and a very informative, thought-provoking post.:ok:

john_tullamarine
21st Jul 2009, 22:55
JT - viz post #27?

with you all the way ..


Clandestino - salut !

Three Wire
22nd Jul 2009, 04:20
JT re post 43, you must ask yourself why the management changed their approach.

Firstly, the new whizz-bang 732ADVs had all this newfangled button stuff. To force people to learn it, hand flying was as near as dammit banned. I can remember using the A/P to fly a circuit to final! That was OK as long as we weren't using the sim and renewals were in the a/c.

But, in the next round of PPC/OPC/IRT, too many people started failing the mandatory handflying bit (OEI ILS with PF doing the button pushing). I did and had a remedial session and so did quite a few others.

So someone somewhere decided on a compromise. And I guess the reason was money.

EGMA
22nd Jul 2009, 11:17
Just a thought ...

As pilots we learned that it was very beneficial to learn from others' mistakes (or die from our own); we are probably the only industry that reports its incidents properly.

The computing industry, in any safety-critical area, needs to understand this lesson. It is not sufficient for a computer system to simply fail; 'reboot and all will be well' is not good enough. This is simply a recipe for a later disaster; proper real-time data audits are required.

It is more important to know why a system fails than to know that the diagnostics say all is OK.
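In code terms the difference is roughly this - a sketch, with the file name and record fields invented for illustration:

# A 'tidy abort' that preserves the evidence, versus a silent reboot.
import json, time

def tidy_abort(subsystem, last_inputs, reason):
    """Log WHY we are failing before handing over, so the failure can be
    audited later instead of vanishing behind a clean post-reboot self-test."""
    record = {"t": time.time(), "subsystem": subsystem,
              "reason": reason, "inputs": last_inputs}
    with open("fault_audit.log", "a") as f:
        f.write(json.dumps(record) + "\n")
    # ...then fail to a known-safe state; never just reboot and report 'all OK'.

tidy_abort("air_data", {"ias_1": 275.0, "ias_2": 140.0, "ias_3": 272.0},
           "pitot sources disagree beyond tolerance")

The log line is the whole point: post-reboot diagnostics can say 'all is OK' precisely because records like this were never written.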

A37575
24th Jul 2009, 13:32
So someone somewhere decided on a compromise. And I guess the reason was money

Maybe. But there is at least one regional operator in Australia, operating the ever-so-easy-to-fly Saab 340, that mandates the autopilot SHALL be used at all times, be it a visual circuit or IMC. This is not because it is safer per se, but because someone has read CRM and TEM and all that stuff pushed out by the University of Texas and is now convinced that hand flying any aeroplane is potentially dangerous and therefore to be avoided like the plague.

So now you have the ridiculous situation where once perfectly capable pilots are forced into twirling knobs and pushing buttons like a kid flying his radio-controlled model aeroplane around a sports oval. And of course, once this crap goes into the company operations manual and in turn gets the wise old CASA nod of "approval", the rot inevitably sets in and the once capable pilot gradually sinks into a lazy hazy daze of automation. But what about his recurrent simulator training? Most of that will be automation, too. Despite countless research papers that warn of the dangers of automation complacency (I am sure these are rarely seen by airline ops management people), the juggernaut of blind reliance on automation rolls on.

Read the editorial comment in Flight International 21-27 July 2009. Among other points it says "airline safety advance has stalled.. pilot training looks like the key.. it is high time the regulators and airlines reviewed how recurrent training is done in modern aircraft.. in all the loss of control accidents over the past 20 years the aircraft could have been controlled.. several involved failure to manage a stall.. to describe it as pilot error is an oversimplification, obscuring the fact that the pilot was not trained to deal with the situation.. conditioned trust in normally reliable automation... failure of their recurrent training to reinforce basic practices."

Ho hum! Heard it all before. Now let's get back to those wonderful lazy real-time LOFT exercises in the simulator - on full automatics of course..

alf5071h
24th Jul 2009, 19:17
Whilst searching for info on JG’s request I was reminded that ‘trust in automation’ appears in several documents.
The comparison between humans and automation is interesting. If a Captain trusts the FO to fly the aircraft - has faith or belief in the person, then is there any difference in the nature of ‘trust’ in automation?
Human trust stems from knowledge of a person, but in our industry more often from standardized training where there is an acceptable level of capability, i.e. the person can be trusted to conduct his duties in a range of situations.

So the question might be how we achieve a similar understanding about automation.
SOPs that require maximum use of automation duck the issue - trust the autos all the time. Conversely, SOPs which allow the crew a choice of when to use the autos might also fail in defining the level of trust, as it depends on knowledge of the automation's capabilities and reliability - is s/he (it) a 'good chap'?

… More thinking required.

Is trust in humans comparable to trust in machines? (www.humanfactors.illinois.edu/Reports&PapersPDFs/humfac04/madwieg.pdf)

The cognitive capabilities of humans. (www.humanfactors.uiuc.edu/Reports&PapersPDFs/chapters/Wickens_Durso%20Aviation.PDF)

john_tullamarine
24th Jul 2009, 23:02
If a Captain trusts the FO to fly the aircraft - has faith or belief in the person, then is there any difference in the nature of ‘trust’ in automation?

Of course not, as a philosophical matter ... however, a pragmatic concern with the above statement relates to the competence with which the Captain can take over from a deteriorating situation (whether F/O or A/P) and save the day ... ?

Providing that the Captain understands and recognises the practical limits of competence of either his/her F/O or A/P ...

alf5071h
25th Jul 2009, 02:21
John, “ Providing that the Captain understands and recognises the practical limits …”

Yes, recognition is a problem, part of communication.
Autos are ‘Dumb and Dutiful’, and unlike a human, generally they cannot tell you when they are struggling – they just quit at the limiting condition.
So the ability (of the crew), or inability (of the autos), to communicate is a potential problem.
Knowing when and what to ask the autos to do is an important part of being able to trust the system to perform as expected.
Thus, it may be that our expectation is a source of error. Do we expect autos to behave – think, react – like a human because they appear capable of human-like control and calculation (but without actually being able to think)?

Not to reopen the AF accident here, but as an example, why should we be so concerned about a failure of the IAS displays and the loss of some flight envelope protections, whilst the aircraft remains flyable with a manual control system and attitude display?
Consider days of yore with piston power, crossing the Atlantic in icing conditions, no A/P, poor radar, systems freezing up (IAS failure), but the flights continued safely.
Why should we now focus on the failure of technology as a cause and seek to blame it as if ‘it’ was some third entity?
Why not re-examine the human, not for blame, but for change, things we no longer do, or can do. In these rare situations of technology failure, perhaps our expectation is that we should be able to do these things, but we can’t; does that mean we can’t trust ourselves?

That’s probably enough philosophical hot air to generate a Cb!

john_tullamarine
25th Jul 2009, 02:56
I liken the problem you cite to the analogy of PhDs ... 600 years ago, the newly anointed knew ALL of European wisdom .. nowadays, the PhD knows a lot about three-fifths of five-eighths in the overall scheme of things ..

In similar vein, is it reasonable to expect the modern (ie younger) pilot to be able to maintain the manipulative and situational skills of yesteryear as well as keeping on top of increasingly more convoluted and complex electronic systems ?

Or do we accept that the manipulative and cognitive risks of yesteryear have been displaced, to some extent, by the risks of generally improbable electronic failure .. and accept that, if the latter occurs in adverse conditions, the risk of hull loss may be higher than what a similar set of circumstances might have produced in the past ?

Indeed, is there an answer at the end of the day ?

BOAC
25th Jul 2009, 08:38
Autos are ‘Dumb and Dutiful’, and unlike a human, generally they cannot tell you when they are struggling – they just quit at the limiting condition. - is there then a case for designing a softly degrading system? E.g. HAL: "Dave - I'm not feeling very well....." :)

In similar vein, is it reasonable to expect the modern (ie younger) pilot to be able to maintain the manipulative and situational skills of yesteryear as well as keeping on top of increasingly more convoluted and complex electronic systems ?
- pretty much back to the beginning of the thread here? I argue yes, and to assist we should make the auto systems
a) even 'more' failsafe;
b) easier to understand/use, with proper training.
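On the 'softly degrading' point, the idea need not be exotic; it could be as simple as the automation announcing how much of its control authority it is using before it lets go. A toy sketch, with invented thresholds and wording:

```python
def authority_report(demand: float, limit: float) -> str:
    """Crew message based on how close the A/P demand is to its actuator limit."""
    used = abs(demand) / limit
    if used < 0.70:
        return "OK"                       # quietly coping
    if used < 0.90:
        return "ADVISORY: %d%% of control authority in use" % round(used * 100)
    if used < 1.00:
        return "CAUTION: approaching control limit - consider taking over"
    return "AUTOPILOT DISCONNECT: demand exceeds authority"
```

That is the HAL line, in effect: the system says it is not feeling very well while there is still time to do something about it.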

LeadSled
26th Jul 2009, 05:15
following a runway change to a parallel runway at about 10 miles, where the 'candidate' went head down to programme the computer for the new runway

You have to be kidding .. but I know you're not .. a sad indictment of the Industry's training programs and general understanding of flight management .. regardless of which seat is concerned.

JT and BOAC,
Nothing is black and white, just shades of grey. With all due respect, whether the "re-programming" (which you are not doing -- that's for software people) for the runway change is to be evaluated by the check pilot as "good" or "bad" needs a lot more information. Which airport/aircraft/runway, even before we consider weather --- and who was pilot flying and who was support, and so on and so forth.

If the "re-programming" is confined to simply FMCS (by whatever name) selection of the runway, which brings up the ILS frequency and the go-around ---- this is no more labor intensive than manually selecting an ILS frequency by the keyboard, indeed, on some Boeing types, fewer keystrokes for lots more operationally useful information.

Even before the days of the "magenta line", at places like so many of the major US airports, I would suggest that having the ILS up for the runway you have just sidestepped to is a good idea ----- for the DME, if nothing else --- if it is at 10 miles or so ---- if it is a change at 800 ft at KLAX, from 25L to 25R, another story.

Quite honestly, I could spend several hours with a student posing all sorts of variations on the theme of just this one item, a "relatively" late runway change --- even before you get to type specific recommendations.

Lest you want to write me off as a "modern" technology captive, I go back far enough to have actually flown an MF Range, and was forced to become all too familiar with VAR, the US and Europe, at the time, having long since graduated to the VOR and (mercifully briefly) the Decca Navigator.

Modern flight deck systems are wonderful aids, but we are seeing them become a crutch ---- the latest being Honeywell's "stable approach monitor" add-on to EGPWS ---- obviously Honeywell believes there is a market/safety sales pitch for this "you're hot/high/going to land long and run off the end" performance prediction monitor. I abhor the thought that we actually need it, but I fear Honeywell is probably onto a nice little earner.

Remember:
Rule 1: Fly the aeroplane
Rule 2: Repealed due to politically correct EOE policies.
Rule 3: There is no Rule 3, see Rule 1

All in all, a very thought provoking thread.

Tootle pip!!

PS: Remember BOAC landing at Sharjah instead of Dubai, QF, Barber's Point instead of Honolulu, LH at Northolt, and many more --- having the ILS up might have been a bloody good idea --- how many have been trapped by "under planning" a visual approach??

john_tullamarine
26th Jul 2009, 07:57
but we are seeing them become a crutch

.. is probably the concern we are pushing.

I'm all for tuning the ILS with a close in runway change ... I think it's silly not to have basic guidance to minimise the chance of foul ups as you cite .. but is there a sound need to play with the FMS when you only have a few minutes until touchdown ?

I can recall on my 733 check to the line years ago .. the checkie had a leg and, in CAVOK, three parts the way downwind .. started drawing circuits on the box. No problem and he did a nice circuit and landing .. but why bother ?

Only the opinion of a dinosaur and I am well aware of tempus fugit so maybe I should keep my archaic opinions to myself in this modern song and dance flying world ?

A37575
26th Jul 2009, 11:28
I can recall on my 733 check to the line years ago .. the checkie had a leg and, in CAVOK, three parts the way downwind .. started drawing circuits on the box. No problem and he did a nice circuit and landing .. but why bother ?


The check pilot should have been sacked or at the very least dropped back to F/O. Now let's see how he planned to fly the presumed visual circuit. On LNAV, maybe? Whether on LNAV or following a magenta line on his MAP with the hdg bug, he would be forced to follow his flight director, which gives him the LNAV steering. So now you either have a pilot head-down in the circuit, blindly pinning his hopes on the FD to teach him how to fly a circuit.

OR: if he intends to fly a visual circuit by actually looking at the runway as he tracks downwind and base, then what is the logic in setting up an LNAV flight plan unless he is going to rely on the FD? Like JT, I have observed first officers frantically pushing buttons on the CDU to build a beautiful circuit pattern that Picasso would have been proud of. And why? I'll tell you why. Because they cannot keep their fingers off the buttons. It is a simple case of automatics addiction, and my guess is 70 percent of glass cockpit pilots are addicted. I have seen, countless times, pilots settling themselves into a simulator for local airwork flying. Even before adjusting their seats you see them dive into the CDU and type madly away. And you know even then the poor bastards are addicted. A sad tale, but true..

john_tullamarine
26th Jul 2009, 12:33
The checkie concerned also had an Access database for the contents of his refrigerator ... sad case ... but a nice bloke in spite of his addiction to inappropriate uses for computers ...

BOAC
27th Jul 2009, 11:10
Leadsled - it doesn't really matter what you call it!

"needs a lot more information."

CPH/734/04(?05?)/CAVOK/'keys punched' (if that's better:)) by PH.

Old story, so no ILS freq or g/a 'available' in those days, i.e. no need, when a quick 'review' brief is preferable. Back in the days I referred to earlier, when the kit was viewed as 'PFM' by the starry-eyed trainer managers.

4Greens
27th Jul 2009, 15:06
An important requirement is unusual attitude recovery training. Unfortunately this cannot be adequately covered in the simulator, however sophisticated, because there are no physiological effects such as 'g' forces. This needs to be practised in a real aircraft. Most ex-military pilots have had this type of training and would probably agree on its value.

A37575
28th Jul 2009, 12:15
this cannot be adequately covered in the simulator however sophisticated. This is because there are no physiological effects such as 'g' forces

The vital skill in unusual attitude recovery training is the recognition of the situation by simply looking at the flight instruments. If the aircraft is at 135 degrees angle of bank and 30 degrees nose-high attitude, then a properly trained pilot will know that certain actions are necessary to get right side up. This is easily taught in most airline type simulators. Certainly in the 737 simulator I use, the instruments will show a complete barrel roll, although of course the simulator doesn't move.

While it would be nice (?) to have a simulator that gives you a gut-wrenching, vomit-inducing 3G manoeuvre, it isn't going to happen. If you know how to recover from the aforesaid manoeuvre in IMC by observing the flight instruments and correcting via the flight instruments, then that is better than just reading it from a book. To say that it is dangerous to teach unusual attitude recovery simply because the G forces cannot be replicated is nothing more than negligence - leaving the student, or whoever, right out on a limb, and the unfortunate passengers too. It is like saying it is too dangerous to teach you how to swim, but I'll teach you how not to go near the water..
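The recognition step really is compact enough to write down. A sketch using the commonly quoted upset thresholds (my assumption of the industry Upset Recovery Training Aid figures, not from the post), with the actions deliberately compressed:

```python
def classify_upset(pitch_deg: float, bank_deg: float) -> str:
    """Name the upset from attitude alone - the instrument-recognition step."""
    if pitch_deg > 25.0:
        return "NOSE HIGH: reduce AoA, roll towards the nearest horizon, recover"
    if pitch_deg < -10.0:
        return "NOSE LOW: roll wings level FIRST, then ease out of the dive"
    if abs(bank_deg) > 45.0:
        return "HIGH BANK: unload, roll to the nearest horizon"
    return "NOT AN UPSET by attitude alone - check airspeed trend"

print(classify_upset(30.0, 135.0))  # the example above: nose-high, well banked
```

The hard part, as the thread keeps saying, is not the logic; it is the discipline of reading the instruments and believing them.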

Tmbstory
28th Jul 2009, 13:33
A37575:

Unusual Attitude Recovery:

In my experience I did not notice any significant positive or negative "G" loadings. Maybe we were too busy!

Tmb

4Greens
29th Jul 2009, 12:24
A37575: I'm still trying to find where I said it was 'dangerous to teach...' etc.

The word dangerous doesn't feature in my post.

4Greens
31st Jul 2009, 19:43
No reply to my last post.

It would be useful to combine type simulator training in unusual attitudes with small-aircraft limited-panel instrument training, to get used to the g forces involved.

It may also be an issue where complex failures are not dealt with in type conversions. I did my conversion to glass cockpit and was operating on the line with a limited understanding of what you did when the lights went out.

Training and/or lack of it is a major issue.

alf5071h
1st Aug 2009, 01:45
4Greens - “An important requirement is unusual attitude recovery training.”

I would not disagree that unusual attitude recovery training is an important subject, but what is the exact relevance to automation / computers?

It may be more beneficial to look at the reasons for the loss of control.
If there have been system failures, then why did they fail, and how did the crew manage these failures given that in most, if not all circumstances the aircraft is still flyable – rule 1 fly the aircraft.
‘Loss of control’ accidents without system failure appear to have elements of non normal operation, surprise, and hazards of physiological disorientation – these are not failures of technology or the aircraft.

Thus, the higher priority for training might be related to how pilots manage system failures, how they fly an aircraft in a degraded state, and how they manage themselves when dealing with the unexpected or when challenged by weakness of human physiology – always trust the instruments.
It would be better to avoid the hazardous situations rather than relying on recovering from an upset, if indeed it is recognized / recognizable.

BOAC
1st Aug 2009, 07:40
We are, as alf says, drifting a little from my initial 'pointer', but it does seem that if airlines concentrated on training for low speed recoveries, including significant out-of-trim scenarios in G/As, most of the situations would be covered.

4Greens
1st Aug 2009, 18:24
We are getting there; more and relevant training required.

Capot
3rd Aug 2009, 07:37
I was working on the ramp at Bahrain in the early 70's when a Qantas B707 (I can't remember the variant) landed after a major upset en route. We disembarked the passengers, all very subdued and shaken, and got them off to the hotels we had organised.

In the cabin, we found evidence that something had gone wrong, including soap still stuck to the toilet ceilings. One of the passengers had told how her baby had "flown" a considerable distance from her seat row to another, without injury. We reckoned that this was in negative "G".

As I recall, the flight crew said that the Captain's Flight Director had indicated an increasing bank which the AP did not correct. The action of disengaging it and applying a manual incremental correction put the aircraft out of control, because the Director was wrong in the first place. I'm sure that someone knows what really happened; that story is very probably wrong.

The point of the post is that the Captain told us that he had eventually brought the aircraft under control again at 6,000 ft, by simply going back to his basic single-engine training using the basic panel, yoke, rudder and engine controls and taking the action he had been taught for "recovery from unusual attitudes". That would have been a laconic over-simplification, of course.

I have found one short record of this incident describing it as a "steep dive and recovery", but I'm sure it was a lot more than that. The aircraft was, according to the same record, written off due to structural damage. My memory was that a Boeing team spent 2 - 3 weeks crawling over the aircraft and found little damage, after which it was ferried away. That would have been consistent with the way Boeings were built, but I guess the record is right and it was finally written off.

I think about that incident occasionally, usually as I get on board an Airbus.

Clandestino
5th Aug 2009, 23:15
is it reasonable to expect the modern (ie younger) pilot to be able to maintain the manipulative and situational skills of yesteryear as well as keeping on top of increasingly more convoluted and complex electronic systems ?

Well, that's certainly how the certifying authorities see it: use all the neat gadgets as they're supposed to be used, but bring the aeroplane safely back to earth when they turn their electronic backs on you. Reasonable? Absolutely! Easy to accomplish? Hell, no! It takes dedication, time and hard work to get to the point where you're comfortable with any level of automation, from maximum to zero, but balancing at that point is not an easy feat either. Whoever claimed that airline pilots have an easy life, lied.

As for the unusual attitudes, if nothing else, the ASN database of control loss cases (http://aviation-safety.net/database/dblist.php?Event=REL&lang=en&page=1) can make me a fan of the Airbus "it's better to prevent a UA than to try to recover from it" approach. Most of them are airframe or flight-control failures, and in the few cases where control was lost at an altitude that would allow recovery, it was a spatially disoriented crew that brought the ship into the UA in the first place, and I think the chances of such a crew recovering the aeroplane are very, very close to zero. To set the record straight: my opinion is that FBW Airbus pilots absolutely do need to be trained in unusual attitudes and proficient in manual flying (manual thrust too) because a) protections can fail, and b) one can stay clear of the protection activation area and yet wreck the aeroplane (GF, Armavia).

And if you go practising unusual attitudes in a real aeroplane, don't go in anything that isn't aerobatic and don't go without a parachute. The aeroplane I flew on my basic aerobatics course (as part of my CPL) was recycled into beer cans following a wing failure at the root. Both instructor and student still fly today, with their logbooks showing the number of landings lagging one behind the number of takeoffs.

justanotherflyer
8th Aug 2009, 22:51
If a Captain trusts the FO to fly the aircraft - has faith or belief in the person, then is there any difference in the nature of ‘trust’ in automation?

JT's response:



Of course not, as a philosophical matter ... however, a pragmatic concern with the above statement relates to the competence with which the Captain can take over from a deteriorating situation (whether F/O or A/P) and save the day ... ?

Providing that the Captain understands and recognises the practical limits of competence of either his/her F/O or A/P ...


I wonder if this interesting question about different kinds of trust isn't worth exploring a little more, though. I suspect the trust we give to technology could well be of a different nature than that we give to other humans. Perhaps the error (if such it be) of assuming they are the same, gives rise to subtle dangers.

Let's say the GPS in one's car says the route takes the next turn to the left, but the spouse in the passenger seat insists it's in the opposite direction. Which way will you turn if faced with a snap decision?

(disclaimer: I don't have a GPS in my car - not that sad!)

john_tullamarine
9th Aug 2009, 01:52
Let's say the GPS in one's car says the route takes the next turn to the left, but the spouse in the passenger seat insists it's in the opposite direction. Which way will you turn if faced with a snap decision?

That depends ...

(a) married men know (or, with progressive marital wisdom (tuition ?), eventually will come to know) the correct answer.

(b) single chaps probably will waste some time debating the pros and cons of the question and may even foolishly opt for the electronic option.

Capot
9th Aug 2009, 09:24
I thought I was alone until I saw a cartoon with a couple sitting in a car half-submerged in a river; there's a crossroads just behind them and she, the passenger, is saying to him....

"For God's sake, haven't you learned that when I say 'Left', I mean 'Right'".

alf5071h
11th Aug 2009, 01:13
Trust is discussed in the reference below – beware, it’s a heavyweight scientific review.

The similarities in the characteristics of trust with some aspects of decision making and experience (naturalistic decision making, experts vs novices) are interesting.
“… high-trust individuals may be better able to adjust their trust to situations in which the automation is highly capable as well as to situations in which it is not.” Judgement - airmanship?

This suggests that trust may be a facet of experience, and thus the training issue (as ever) is how to 'teach' experience.

How do we achieve progressive marital wisdom? Experience.
How do we learn … that when I say 'Left', I mean 'Right'? Experience.

It’s not that we need more training; it is the need to concentrate on that which is relevant to experience - trust in automation, by knowing how to learn / remember / recall, and by acquiring know-how which appears to be a central component of experience.

Trust in Automation: Designing for Appropriate Reliance. (www.engineering.uiowa.edu/~csl/publications/pdf/leesee04.pdf)

4Greens
18th Aug 2009, 08:08
For the readers of this thread the FAA are now looking into requiring some form of unusual attitude training in aircraft.

Tee Emm
18th Aug 2009, 12:05
For the readers of this thread the FAA are now looking into requiring some form of unusual attitude training in aircraft

Reinventing the wheel. We did UA training in Tiger Moths in 1950... including spinning under the hood, and it was a real canvas hood which allowed no peeking outside.

BOAC
18th Aug 2009, 12:55
4Greens - does that mean that there is none at present? We've been doing it in the UK for at least 5 years on recurrents.

4Greens
18th Aug 2009, 22:41
BOAC, not sure what your recurrent training is. If you mean in the sim, the FAA are talking about actual aircraft training. The sim doesn't give you the physiological difficulties of disorientation etc.

john_tullamarine
19th Aug 2009, 02:37
The sim doesn't give you the physiological difficulties of disorientation

Then my hat's off to you, good sir. I've certainly had the "leans" in the sim myself, and I've seen numerous students get well and truly disorientated until they get on top of the box's quirky sensations.

4Greens
19th Aug 2009, 08:14
It's the g forces added in that intensify these sensations. This cannot be simulated in the standard airline simulators. Curiously, the only time I am aware that some people felt disoriented was when one of our early 767 sims used to go sideways on the ground!

BOAC
19th Aug 2009, 10:43
4Greens - all sim based - I cannot see any operators allowing their precious airframes to be taken off line to do mini-aeros!

I agree with you about the sim, and the point is, JT, that yes, you can get disorientated but you cannot get the g forces and motion which can and do cause spatial disorientation. However, as para 1, I'm sure that is all we are going to get.

Linking back to previous posts on the 'recent' TK 737 at AMS, the PGF A320 and the Thomsonfly 737 at BOH, I would, as others, press hard for some formal sim training in low-speed, out-of-trim, full-power g/a handling for low-slung beasties, at least for starters.

Tmbstory
19th Aug 2009, 11:26
4 Greens:

Your post #82 reminds me of an L382 Hercules simulator that, on an ILS approach to Denver USA, used to get a quirk and transfer its display due north at a great rate of knots, straight through a mountain. It was a hell of a relief as you came out the other side!

This was quite a while ago and certainly made an impression.

Tmb

Tee Emm
19th Aug 2009, 13:23
Linking back to previous posts on the 'recent' TK 737 at AMS, the PGF A320 and the Thomsonfly 737 at BOH, I would, as others, press hard for some formal sim training in low-speed, out-of-trim, full-power g/a handling for low-slung beasties, at least for starters

We did that in the 737 Classic simulator today. The autopilot held the glideslope by steady back trimming. We assumed the autothrottle played up and the throttles closed - all this at 1,500 feet. Very educational: the rapid speed decay to Vref minus 25 knots, accompanied by the noise of the stab trim moving steadily back, made us wonder how crews can possibly miss these things. We did a GA at stick shaker.

I must say, it takes very precise handling to get optimum pitch attitudes during the go-around. If flap is immediately selected to Flap 15 on the GA (the normal GA procedure) but at speeds well below Vref, a stall can occur. There is no question that you must leave the gear and flaps alone until reaching Vref, when Flap 15 can be set. Conducted in IMC, this exercise is one of the best pure flying-skill practices I can recommend. The danger is blind chasing of the flight director pitch bar, which reacts to overcontrolling in pitch. Because of this some pilots elect to switch off the FD until a stabilised climb is attained.
We already include this low speed low altitude out of trim handling during type rating and recurrent training. It certainly gives pilots vital practice at rapid flight instrument scan in IMC - especially if ground contact is imminent.

4Greens
19th Aug 2009, 22:25
Just a quote from Flight International of 18-24th August. Article headed 'Need for upset recovery training drives FAA update':

"Calspan plans to certificate two of its four variable stability Bombardier Learjets under the new category for anticipated pilot training programmes according to a company official".

BOAC
25th Jun 2010, 19:21
Definitely time to dig this one up again and hopefully bring the esoteric (and off-topic) discussions on the Tripoli thread across.

What has triggered this 'revival'? The comments on the BA 056 report (kindly linked by 'sooperfrank')

4.2.3 The apparent increase in the number of software-related incidents involving various type-certificated aircraft is becoming a cause of concern. There is also a common thread through many recent accidents, and it is time to train for a new type of emergency that addresses the failure modes in highly automated aircraft. The interface between pilots and aircraft automation, as well as how this should be incorporated into aviation training, requires a review. This includes addressing how automation fails, how pilots should cope with it and how to get through the failures. New phrases for automation failures, similar to the "dead foot, dead engine" slogan that helped pilots identify which engine had quit, are now needed.

It is therefore recommended that:

The Regulatory and certificating authorities of all States of Design and States of Manufacture should introduce requirements to:
• Review all software control and hardware control logics and combinations thereof to ensure that all probable defect possibilities are identified;
• Review the processes used to introduce modifications to control software since issuance of the original type certification, e.g. consider a recertification process;
• Verify that appropriate resolutions for such occurrences have been developed and are in place to prevent un-commanded actions that can result in an accident;
• Improve the robustness of the software/hardware logic through the introduction of additional parameters to consider prior to an automatic change in critical control surfaces;
• Introduce a flight deck crew "alert/approval/override" facility prior to an inadvertent change to critical control surfaces;
• Account for spurious mechanical and electrical failures and their impact on the software and hardware logic system.
• Operators should provide flight crews with more basic hand flying and simulator flight training on new generation aircraft to address the technological developments in aviation, inclusive of effective stall training.

How about that? On topic or what?

DozyWannabe
26th Jun 2010, 02:16
Hey BOAC,

I'm jumping in here and want to start by saying that, as a long-term lurker and occasional poster, I have a deep and abiding respect for you and your opinions. Alas I haven't had the joy of being at the controls of an aircraft since my AEF days (I ended up with long hair, pacifism and rock music in short measure around the age of 14 ;)), but I do know a fair bit about the process of software development. And while I didn't end up having the skill to work in the kind of real-time software development that backs up FBW technology, I was fortunate enough to be taught by someone who was.

While I agree substantively with the point that pilots should be trained in the identification of automation failures and the correct responses to same, I feel that the quote you posted betrays some misunderstanding of the processes involved.

• Review all software control and hardware control logics and combinations thereof to ensure that all probable defect possibilities are identified
I was privileged enough to be shown examples of the processes that Airbus went through to define potential system failure points, and I have to say that "exhaustive" doesn't even begin to describe the tiniest fraction of the detail they went into (I suspect Boeing were every bit as stringent). Software engineering at its purest is a discipline that is the equal of any more traditional mode of engineering one can think of. The problem is that, as with any engineering discipline, the scope of failure testing is limited by the imagination of the engineers concerned. Mistakes were made and, compounded by bullish sales techniques, when the first failures occurred there was a sense of having fallen to hubris. But the same could also be said of transitioning from props to jet transports, or from mechanical to hydraulic controls.

• Review the processes used to introduce modifications to control software since issuance of the original type certification, e.g. consider a recertification process; and
• Verify that appropriate resolutions for such occurrences have been developed and are in place to prevent un-commanded actions that can result in an accident.
Again, not something limited to software-based automation, and resolution of such should be treated as any other AD. Every software modification that I've heard of being applied to flight control systems - even those considered major in terms of importance - has tended to be very minor in terms of the actual physical effect produced. Having said that, I do agree that any such changes, and any alterations to piloting technique prior to those changes being applied, should be communicated to pilots at the first opportunity.

• Improve the robustness of the software/hardware logic through the introduction of additional parameters to consider prior to an automatic change in critical control surfaces.
Here I come back to engineering. In software, as in mechanical or any other engineering discipline, added complexity tends to increase the potential points of failure. As such, any engineer worth their salt will be very careful about introducing an increase in complexity. In the case of BA056, armed with the information from that incident, one could make a good case that extra input parameters would be helpful. However, it is a decision that should be made in a logical manner, and certainly not in the heat of the moment.

• Introduction of a flight deck crew “alert/approval/override” facility prior to an inadvertent change to critical control surfaces.
An understandable reaction in the circumstances, but again one must be wary of added complexity (and like it or not, an override does introduce further complexity).
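To make that complexity point concrete: even the simplest crew-approval gate adds state - a pending command and a timeout - whose own failure modes then have to be analysed. A toy sketch, nothing like a real implementation:

```python
import time

class SurfaceChangeGate:
    """Hold an automatic surface command until the crew approves, or time out."""
    def __init__(self, timeout_s: float = 5.0):
        self.pending = None   # (surface, position) awaiting crew approval
        self.deadline = 0.0
        self.timeout_s = timeout_s

    def request(self, surface: str, position: float) -> None:
        self.pending = (surface, position)
        self.deadline = time.monotonic() + self.timeout_s
        # ...annunciate the pending change to the crew here...

    def poll(self, crew_approved: bool):
        """Return the command to execute, or None."""
        if self.pending and crew_approved:
            cmd, self.pending = self.pending, None
            return cmd
        if self.pending and time.monotonic() > self.deadline:
            self.pending = None   # expired - but is inaction the safe default?
        return None
```

The comment in the last branch is the design problem in miniature: someone must decide, for every surface and every flight phase, whether "do nothing" is the safe answer when the crew doesn't respond in time.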

• Account for spurious mechanical and electrical failures and their impact on the software and hardware logic system.
We're back to the limits of engineering experience and imagination. There's absolutely nothing wrong with the statement - but again, you'd be amazed how many failures are accounted for in the design of such systems. It is, alas, impossible to account for all of them. In the case of BA056 we're talking about a dual engine failure at a hot and high airfield caused by a side-effect of a maintenance procedure that only affects a single type of engine. I think in this case the oversight should be forgiven.

• Operators should provide flight crews with more basic hand flying and simulator flight training on new generation aircraft to address the technological developments in aviation, inclusive of effective stall training.
I'd call that stating the bleedin' obvious, and any air transport operator using advances in automation as an excuse to skimp on training should have the book thrown at them. Regardless of how easy automation makes things when the winds are fair, preparation for and understanding of how things can go wrong should be paramount in pilot training before graduating from single-engine circuit-bashing. But the firms doing the training at the early stages must take responsibility too.

Getting down to the fundamentals, the fact is that advances in aviation have come at the price of painfully-learned lessons before and after the introduction of digital technology in the flight deck. The killer is and always has been human complacency.

BOAC
26th Jun 2010, 07:47
DW- welcome to the thread (and flattery, as always will........................:p) Heaven knows, I may even have flogged you round in a Chippie before your hair (and other bits) grew:)

To me the point here is that it appears that the 'automatic' retraction of leading-edge devices with reverser deployment (neat and practical) was not thought through, inasmuch as in the wrong situation, with 'average' flying skill at the front, it would have caused a hull loss. (All kudos to the handling pilot and PIC for keeping their cool in rather exciting circumstances.) I have not read all the tech details, but a 'safety link' (i.e. was reverse actually selected, and were we on the ground?) would appear to have been useful.
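That 'safety link' amounts to a two-condition interlock. A minimal sketch of the idea (illustrative only - not how the actual system is, or should be, mechanised):

```python
def auto_le_stow_permitted(reverse_commanded: bool,
                           reverser_unlocked: bool,
                           weight_on_wheels: bool) -> bool:
    """Gate automatic L/E device retraction behind crew intent and ground state."""
    if reverser_unlocked and not reverse_commanded:
        return False  # uncommanded unlock in flight: keep the lift devices out
    return reverse_commanded and weight_on_wheels
```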

I am not clear about your reference to "a dual engine failure at a hot and high airfield"?

DozyWannabe
26th Jun 2010, 10:17
As I understood the incident, two engines suffered a partial thrust reverser unlock/deploy situation*, which caused the automatics to stow the L/E lift devices, possibly only until the gear left the ground, but enough to seriously reduce the lift generated. This was resolved by "firewalling" the throttles to keep her in the air long enough for a circuit.

This is only from reading the 2009 thread though - any further details would be good to know.

This kind of thing isn't limited to digital automation though - it can happen mechanically as well. The infamous 737 rudder PCU issue was a result of a combination of wear to the valve and very low temperatures causing an uncommanded reversal, and the AA191 DC-10 crash was a result of uncommanded slat retraction as a consequence of engine pylon failure.

* - so maybe "failure" was the wrong word

aterpster
1st Sep 2010, 19:24
Link to two papers concerning the AAL 965 December, 1995 CFIT near Cali, Colombia.

One is the NTSB ATC chairman's factual report. The other is an article I wrote for the April, 1996 ALPA Magazine about the crash and related issues.

Index of /cali (http://www.terps.com/cali)

MountainBear
1st Sep 2010, 23:46
At BOAC's invitation I will respond here to bearfoil's comments in the Islamabad thread.

In one sense I agree with FlightSafety and others that the historical trend towards fully automated planes must run its course. Where I disagree (perhaps) is that I think that:

(1) this trend deserves to be followed to its logical conclusion because it has earned the opportunity to do so by factually demonstrating it can improve safety. I have no inherent love for a machine over man (usually quite the opposite).

(2) that this trend should run its course by real world experimentation. By that I mean you take fully automated commercial flights, with real passengers in them, and let them fly their routes, and see if they crash or not.

(3) That the results of this experimentation should dictate whether the human being has any future on the flight deck.

In his last comment bearfoil wondered if I have bias. I most certainly do. But my bias isn't toward the result but towards the process. I don't care whether human beings are or are not in the cockpit 100 years from now. But I do believe that this decision should be made based upon factual data rather than appeals to emotion ("consider the tradition of the pilot!" or "machines rob humanity of dignity!") or philosophical paeans to "balance". If balance causes more death than unbalance, balance can take a hike.

BOAC
2nd Sep 2010, 07:31
Thanks MB - a balanced view. Despite your views in the last para, I suspect that 'emotions' will be high on the list of decisive factors here. I have no doubt that we have the technology to reliably automate a large part of the process.

It will take a generation or two of 'pilots' to grow out the idea of 'being there', and a major deciding factor will be the media-led public reaction. Which is more headline-grabbing?

"Pilot gets lost and flies xxx passengers into a hill" or

"Terrified passengers fly around for 3 hours before crashing into a school for disabled orphans in the middle of YYY after automatic plane control is lost."

OK, just a little bias there:)

Gulfcapt
2nd Sep 2010, 22:00
BOAC, thanks for the link to this thread from the Islamabad one; good reading here. It's a worthwhile and important topic. However, I fear the pilots who should read it will not...

Automatics addiction is a very real problem that is not limited to the younger generation. I have watched older, more experienced pilots go heads-down at the most inappropriate times. One in particular could talk-the-talk on the pitfalls of automation, yet when push came to shove he stopped flying and started typing.

My perception is automatics addiction is not a function of age, intelligence or education. Unfortunately, automatics addiction does not equal automatics mastery.

My limited experience with automatics training (I say limited because most instructors are addicts, not masters) encourages "gee-whiz, look at that." The emphasis is on what the jet can do without me rather than what it can do for me. It is a subtle yet important difference in perspective.

Finally, two pet peeves: I dislike referring to the FMS/FMC as "Captain Honeywell" (fill in the blank on manufacturer) as it indicates an abdication of control to the boxes, and I dislike the notion that we can automate the human out of the cockpit. We are the most capable, flexible and powerful computer aboard any aircraft. So much talk about pilot error, and so little about those everyday things we do that make aviation as safe as it is.

Before I get flamed for asserting how good humans are, I know we are fallible. But I also know Sparky is fallible too. One quick example: my last two trips across the International Dateline made all three FMSs go goofy. ETAs were so bad I actually broke out a CR-3 and figured them manually. Thus far, Honeywell has no explanation. By the way, nothing special about breaking out a CR-3; just doing my job.

Not sure I contributed anything to this topic, but thanks for starting it BOAC.
Best,
GC

PBL
3rd Sep 2010, 11:35
Just to put the cat truly amongst the pigeons, how about this, which I just wrote? Fully-Automatic Execution of Critical Manoeuvres in Airline Flying (http://www.abnormaldistribution.org/2010/09/03/fully-automatic-execution-of-critical-manoeuvres-in-airline-flying/)

PBL

BOAC
3rd Sep 2010, 12:20
I can see no obstacles to what you propose other than emotive ones.

However, taking your 4) (Airblue Islamabad) - with the technology that would permit an 'automatic' CTL, surely it would be more logical to simply produce an 'automatic' approach to R12? The only time a CTL would then be required would be in the event of a very late runway change since there would otherwise be no need for an approach at all on R30.

Tee Emm
3rd Sep 2010, 13:52
I would, as others, press hard for some formal sim training in low speed out-of-trim full power g/a handling for low-slung beasties at least for starters

I agree. But from my experience no one is really interested in learning the lessons of past incidents and accidents. I sent a carefully constructed letter to the Australian regulator suggesting the need for pilots to practise more manual flying in the simulator during cyclic/recurrent training. Included with the letter were similar recommendations from overseas accident reports where loss of control caused the crash.

The reply from the chief regulator was short and not sweet. He said that the Australian regulations covering proficiency and instrument rating tests already legally ensured pilot skills were up to scratch and that, on the contrary, more emphasis should be placed on automatics skills.

In my view automation complacency is so well entrenched in aviation that it is a lost cause to hope that manual flying practice will be actively pursued by today's airline pilots.:mad:

BOAC
3rd Sep 2010, 15:40
Alarming!

Tee Emm's post is a reminder that we need to focus on the here and now, and on what should be done in training and skill development to avoid the sorts of accidents we are seeing more of now. PBL's concept stuff is worthy of attention, but will take years to get through the design and regulatory gates.

PBL
3rd Sep 2010, 16:29
I am not sure that I was proposing the concept in the sense of suggesting commercial aviation should go in that direction. I was noting that, for 4 out of the 6 accidents I mentioned, there is a clear argument that the state of the art in control systems can do, now, what those airplanes apparently did not do under pilot control. And maybe for all 6, depending on what we find out. I suspect this argument will be supported if one looks further back as well. And if the argument is out there to be made, then someone will use it to say we should go that route. I don't know if that someone would be me.

I see not only significant regulatory hindrances to realisation, but also significant technical and procedural hindrances.

First, technical. Such aircraft control systems must be shown to be reliable and fail-safe, and current kit is not so designed. Think of Turkish, whose AT was fed data from just one RA, and RAs are known to fail relatively often compared with other avionics. We are a ways away from thorough fail-safe design through and through. And that is a prerequisite. No more map shifts. And so on.
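The sort of cross-check implied here is cheap to state, if not to certify: never let a single radio altimeter drive the autothrottle alone. A sketch, with an invented miscompare threshold:

```python
def validated_ra(ra1_ft: float, ra2_ft: float,
                 max_split_ft: float = 50.0):
    """Return a usable radio height, or None if the two channels disagree."""
    if abs(ra1_ft - ra2_ft) > max_split_ft:
        return None               # miscompare: drop the AT to a degraded mode
    return min(ra1_ft, ra2_ft)    # conservative choice near the ground
```

The expensive part is everything around those few lines: the degraded mode itself, the annunciation, and proving that the comparison cannot fail silently.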

Second, procedural. In busy TMAs, the entire traffic control is predicated on flexibility, in heading, altitude, and airspeed when sequenced on final. Switching to pre-programmed full automation will require a massive system requirements change, and I am not sure anyone yet knows how it could be done, even the theory of it.

I don't know which of these would prove the bigger challenge.

I agree that it might make more sense to devise an appropriate approach to RWY 12 at Islamabad rather than CTL from RWY 30. If one wishes to turn that into a general argument that one doesn't do CTL's on full automatics, that might well be a negotiating point in a step change to full automatics. But if the response to that by some parties is to continue to allow hand-flown CTL's, then we would not have advanced from the current situation. If one switches to full automatics, one wants to do so especially for the demonstrably more risky procedures, I would think.

There are a lot of details, and it will take a lot of work and a lot of time, if we go that route.

PBL

alf5071h
3rd Sep 2010, 17:20
Peter, your provocative article (#95) argues for improving safety by replacing the human with automation. For the moment, the human aspects in design and maintenance are put aside.

Technically, full automation might be possible, but as discussed in http://www.pprune.org/safety-crm-qa-emergency-response-planning/422301-where-next-crm-3.html#post5889774 this would involve constraint.

The example military operations are constrained to specific airfields, tasks, and I suspect weather conditions. Would such constraints be acceptable to civil operations, or if not, what costs (practicality) will the industry / travelling public tolerate to achieve such idealised safety?

Accepting constraints might well improve safety; no precision guidance, no autolanding, no flights to that airport = safety. This theoretical argument concludes that it is safer to stay on the ground than fly, or to use other means of transport (which may not be as safe as aviation).

I would argue that when discussing automation, practicality has to be the foremost view. By all means consider academic theories, but don't lose sight of the practicalities.
Perfect safety may only exist in theory; in practice it involves managing risk – "safety is the avoidance of unnecessary risk" – safety is a compromise.

Practical solutions for improving safety should come from identifying and avoiding risk, both strategically and tactically, and in planning and practice.
We have to define ‘unnecessary’, which is undoubtedly connected with the situation, both now and in the future: what is the goal or objective, and how do these change with time and task?

Automation (technology), I suggest, is not better than the human in these tasks, even with human weaknesses resulting in error. The currently accepted judgement is that the human (with current automation) meets the requirements for safety – the public perception (TM #97 !!!).

For the accidents cited, assuming human involvement, we need to understand why the human performance did not meet the requirements of safety whereas the vast majority of similar operations have done so – why are these accidents, or apparently the human behaviour in them, different from daily operations?
With such understanding, from accident reports (not always forthcoming or of sufficient depth), it might be possible to pursue a combination of man and machine, e.g. technology-aided decision making, situation awareness, EGPWS-like systems and auto pull-up, and LOC auto-recovery, as a stepping stone to increased automation.
Perhaps a practical study of the human and the man-machine interface would be more worthwhile.

alf5071h
3rd Sep 2010, 17:39
Peter, your post #99 touched on a key issue in that for many if not all recent accidents, a technological solution already exists. A notable exception might be runway overrun due to inadequate information about the runway condition.

EGPWS, if correctly configured – software updates, database revisions, GPS nav – will provide an adequate safety boundary if the warning is heeded (terrain, landing short). Automation would replace human activity (or, of increasing concern, inactivity).

Bank angle limiting / alerting is available. In those aircraft which suffered an upset with this facility fitted, activation of FD guidance / automatic pull-up could have prevented the accident (EGPWS last-resort warnings were given). EGPWS auto pull-up was tested (Apr 2005).
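As a toy illustration of alerting escalating into recovery guidance - the thresholds are invented for the sketch, not taken from any real EGPWS configuration:

```python
def bank_angle_response(bank_deg: float, agl_ft: float) -> str:
    """Escalate from silence to aural alert to recovery guidance."""
    limit = 35.0 if agl_ft > 1000 else 10.0 + agl_ft / 40.0  # tighter near ground
    if abs(bank_deg) < limit:
        return "NONE"
    if abs(bank_deg) < limit + 10.0:
        return "AURAL: BANK ANGLE, BANK ANGLE"
    return "FD ROLL-LEVEL GUIDANCE / AUTO PULL-UP"
```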

Improved takeoff configuration warnings exist, not all aircraft have them. The problem here as with other safety aspects is ‘Grandfather rights’ – the industry regulators judge that we are safe enough (TM #97 !!!).
So if the human regulators suffer weaknesses in judgement and fail to take timely action – a human condition – should we not automate them too?

BOAC
3rd Sep 2010, 17:46
"Perhaps a practical study of the human and the man-machine interface would be more worthwhile. "

Which is more-or-less where we came in - from post #21
"We have yet to learn to live with ‘aviation’ technology – we have to change, but in this change there may be more hazards. Combating these aspect requires thought and analysis – basic thinking.
For all of the human weaknesses, the human is still a powerful defensive aid – we can identify problems and deduce solutions. Do we teach pilots to think in these ways and adequately train them for the range of critical situations (stress, time dependent) which might be encountered?

Thus this problem is not only about technology, but the process of how to think – situation awareness, decision making – full circle back to airmanship, including skills and personal standards."

I too think we have a way to go in embracing the new technology - it is STILL outstripping us, as it was in the early A320 days.

aterpster
4th Sep 2010, 08:45
BOAC:

However, taking your 4) (Airblue Islamabad) - with the technology that would permit an 'automatic' CTL, surely it would be more logical to simply produce an 'automatic' approach to R12? The only time a CTL would then be required would be in the event of a very late runway change since there would otherwise be no need for an approach at all on R30.

"...surely it would be more logical..." indeed it would because it would be safer, much safer than any type of low-altitude, level flight CTL maneuver.

ZAGORFLY
15th Jan 2011, 15:05
How many times would just a handheld GPS like the Garmin Pilot II have saved the day when all the "reliable" airspeed indications are gone? I would have one in my flight bag all the time!

john_tullamarine
16th Jan 2011, 10:17
have saved the day when all the "reliable" airspeed indications are gone?

Putting aside the case of multiple failures, if the problem is just a routine loss of pitot-statics, surely the time-honoured flight-with-unreliable-airspeed approach (i.e. pitch plus thrust for the configuration) would be more appropriate? A GPS might help as well, but appears to be rather optional.

safetypee
16th Jan 2011, 15:42
It is somewhat ironic that in this thread, “Computers in the cockpit and safety of aviation”, where the general tenor is that computers are contributing to current safety problems, it is suggested that more computation is required (GPS, #104).
Perhaps apart from a very few (extremely rare) incidents involving multiple system failure, where either detection of preceding errors or recognition of previous incidents should have prevented the occurrence, there should be no need for further computational additions.

Why attempt to use GPS with a total speed failure when there are perfectly adequate procedures (pitch / power) to temporarily cope with the situation until a more suitable solution can be found?
What happens to rule 1 – fly the aircraft – while you fumble with the GPS, switch to speed mode, etc, etc? This ‘computer’ is just as likely to distract as the failure itself.
No, no more computers; just fly the aircraft, deal with the situation, and control the human tendency to generate fear from the latest, loudest, brightest failure by understanding the basis of certification, the availability of systems backups, and the procedures required to manage these.
But foremost, remember to manage ‘yourself’, minimise surprise / stress, and fly the aircraft, not the computer.

BOAC
16th Jan 2011, 17:10
Well, I'm pleased to see the old thread given the kiss of life - it is relevant.

I think it should be made 'SOP' for a card with pitch and power numbers to be placed on the panel and kept updated. Simple and effective (cheap, too, for the beancounters).

411A
16th Jan 2011, 20:36
I think it should be made 'SOP' for a card with pitch and power numbers to be placed on the panel and kept updated.
No need on the panel, our Flight Engineer has a superb copy in his QRH...always at the ready.
No Flight Engineer?
Your tough luck.:=
Sorry...:}

john_tullamarine
16th Jan 2011, 23:01
No need on the panel, our Flight Engineer has a superb copy in his QRH...always at the ready.

.. or, at the expense of speaking heresy in the current flight management environment ... if one occasionally hand flies on raw data .. then one necessarily knows the relevant numbers as a memory item ...

For folk transitioning I always made sim time to fit in practice on a total takeoff pitot static failure (plus anything else which might have been useful) with an IMC recovery off an ILS. Appeared to be useful and certainly built up the confidence after a couple of runs.

galaxy flyer
17th Jan 2011, 00:15
J_T

Now it is heresy to know the proper pitch and power for every flight regime! In my prior plane (a Lockheed, Mr 411A), I could arrive after a NAT crossing and state, in advance, exactly what power setting and pitch angle were to be flown throughout the descent and landing at Frankfurt-Main. Now, I try, but the temptation is to think it a frivolous exercise in being an old pilot.

GF

john_tullamarine
17th Jan 2011, 00:37
.. perhaps we both are becoming antiquated and saurian, good friend ?

BOAC
17th Jan 2011, 13:29
411A has missed the point as usual - we ALL have QRHs with tables of pitch and power, the problem is that when the s hits the f there is not a lot of time to go thumbing through a QRH to look up a table to find you have actually just stalled.

I expect, like JT, gf and others, we all know/knew the numbers. It is becoming apparent that lots of others don't and hence my simple, cheap suggestion. (Cheaper than an F/E)

john_tullamarine
17th Jan 2011, 23:18
Which is why, I suspect, most of the older folk have a view that pilots should be competent at

(a) playing a nice tune on the FMS AND

(b) raw data stick and rudder things AND

(c) all likely combinations in between.

He/she who is not able to address such requirements risks an unpleasant surprise, sooner or later, when the appropriate set of holes lines up one dark and dirty night .. and bites the offender on the tail.

PBL
18th Jan 2011, 10:49
While I really do hesitate to disturb the sewing club's coffee hour, I do feel I should point out that there is no such thing as raw data any more (with - I should say here, so as not to offend certain sensibilities - the exception of The Greatest Airplane Ever Built, which along with other airplanes built some 40 years ago might actually have had some. I am talking here about airplanes 25 years old or less).

And if there should be any raw data around, you really don't want it. Just ask the crew of QF72. Your "raw data" is heavily computer-mediated and must be. And of course control functions have been mediated since WWII, although only in the last 22 years in-service by digital computers.

The relevant question is what critical functions should be mediated and how. Most pilots are probably not aware of the development techniques used to assess and ensure safety (in the sense of minimising and mitigating dangerous failures) of critical systems. And, I suppose, even less aware that these selfsame techniques apply to systems whose behavior is partly human.

PBL

BOAC
18th Jan 2011, 14:33
The relevant question is what critical functions should be mediated and how. Most pilots are probably not aware of the development techniques used to assess and ensure safety (in the sense of minimising and mitigating dangerous failures) of critical systems. And, I suppose, even less aware that these selfsame techniques apply to systems whose behavior is partly human. - nor need to be. Now that you have bust in on our coffee break, I challenge that! RAW DATA is 'un-mediated', or as close as you can get. It means NOT having complex computer programmes deciding which input to the system is the one/s we will 'accept'. You have joined the coffee break at the point where we are discussing power and pitch attitudes. If you are suggesting that some damn wiggly amps are 'mediating' on those basic values then I think it is time to stop software development - the technology is not yet nearly good enough to have that sort of interference, as we saw at PGF.

My point is: give us pitch and power and we can survive loss of other sensors. The job of software designers is to produce the flawless, perfect system. Long way to go.

Just ask the crew of QF72. Your "raw data"

- I cannot. Which 'raw data' are you talking about?

PBL
18th Jan 2011, 20:09
RAW DATA is 'un-mediated', or as close as you can get. It means NOT having complex computer programmes deciding which input to the system is the one/s we will 'accept'.


If that is so, taken literally, then most modern airplanes don't feed any raw data to either cockpit instruments or flight controls. It is thoroughly massaged.

If you are suggesting that some damn wiggly amps are 'mediating' on those basic values then I think it is time to stop software development

Yes, I thought I was disturbing the sewing club. Concerning the raw data in the QF72 incident, the PRIMs didn't filter it, and thereby caused the altitude excursions that turned a flight into an accident.

PBL

BOAC
18th Jan 2011, 20:26
the PRIMs didn't filter it, and thereby caused the altitude excursions that turned a flight into an accident. - hoping I have understood your language - we are talking about instrument indications; are you, or are you talking about software-generated control inputs? Did the PRIMs DISPLAY incorrect attitude and power during the 'excursions'?

john_tullamarine
19th Jan 2011, 00:10
then most modern airplanes don't feed any raw data to either cockpit instruments

I play with mid level turboprops these days and the fleet certainly has both raw data primary and standby flight data tucked away in amongst all the gee-whizz bells and whistles, FMSs and other like bits of kit.

Do not most (all ?) larger machines still have independent standby AH, ASI and altimeter ? It is these to which we should turn when the others start to give one the discomforts ...

The problem is that the skillset required to use them to the exclusion of the fancy stuff is quite different if these are ALL that the pilot is reasonably left with on the dark and dirty night.

However, providing that the pilot has maintained that skillset, which is becoming increasingly difficult, it is a comparatively straightforward exercise to recover the aircraft to a safe landing.

The computers are great .. but only if they are working properly.

Clandestino
23rd Jan 2011, 07:59
the problem is that when the s hits the f there is not a lot of time to go thumbing through a QRH to look up a table to find you have actually just stalled. That's why, when I was on the Airbus, the pitch and power combinations that kept you out of both stall and overspeed long enough to find the pitch/power table in the QRH were memory items: TOGA/15 to acceleration altitude, CLB/10 to FL100, CLB/5 above.

Don't ask me about 330, I've only flown 19s and 20s and then for a short while.
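
(Purely as an illustration of how little there is to hold in memory - a minimal sketch in Python, my own, with the thresholds and values taken from the figures quoted above; an illustration only, emphatically not a checklist:)

def memory_pitch_power(altitude_ft, accel_alt_ft):
    # Thrust setting and pitch (degrees) to hold while the full
    # pitch/power table is dug out of the QRH.
    if altitude_ft < accel_alt_ft:
        return ("TOGA", 15.0)    # below acceleration altitude
    if altitude_ft < 10000:
        return ("CLB", 10.0)     # below FL100
    return ("CLB", 5.0)          # at or above FL100

print(memory_pitch_power(3000, 3500))    # ('TOGA', 15.0)
print(memory_pitch_power(25000, 3500))   # ('CLB', 5.0)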

I do feel I should point out that there is no such thing as raw data any more Doc, it's semantics: raw data has a specific meaning for a transport pilot, and it's not literally raw data as in, e.g., a gauge needle being directly driven by the aneroid box or bourdon tube. For us flying the line it means there are neither computed position nor computed guidance orders; the pilot takes input from his instruments, calculates the aeroplane's position, actual direction and desired flightpath in his head, and flies his aeroplane accordingly. I guess (and hope) that no civil aeroplane can be certified in the transport category unless it has demonstrated it can be flown in raw-data mode.

IMHO QF72 has so far only proven that we have not basically moved from the DP Davies principle of an acceptable statistical probability of the stick-pusher activating when not needed. The exact hows and whys of QF72 are something I eagerly await too.

PBL
23rd Jan 2011, 11:44
.....Airbus, the pitch and power combinations that kept you out of both stall and overspeed long enough to find the pitch/power table in the QRH were memory items: TOGA/15 to acceleration altitude, CLB/10 to FL100, CLB/5 above.

Don't ask me about 330

It's similar.


...... there is no such thing as raw data any more

.....semantics: raw data has specific meaning for transport pilot and it's not literally raw data

I know, but perhaps my point was not well made in one sentence.

There are lots of systems mediating between the physical flying environment and the control surfaces, some of whose data paths go through human eyes and brains sitting in the two front seats. It used to be the case that the set of data paths from the environment to the eyes was well understood: usually very reliable, with known and simple failure modes. This interface was relied upon by those eyes in the front seat; the brain was the weakest point in that data path to the controls.

My point: it ain't that way any more. The path from environment to eyes has complex failure modes which the eyes sitting in the front seat cannot fathom in real time. Conversely, some of the systems which used to be relatively unreliable, for example navigation, based on reception of ground-based signals, have become far more reliable, as have systems such as the flight director. The question of what the eyes in the front seat can rely on, and should retain wariness of, has changed radically in the last two decades, with the explosion of avionics mediating everything. The answer is not necessarily that one is best off relying on the interface on which one has traditionally relied, as the low end of GA still does.

People flying modified forty-year-old designs are likely thinking appropriately when they think that, when things go pear-shaped, they want to see and use the good old traditional interface, which one calls "raw data". But one should be aware that, on more modern kit, that interface is as much an artificial, algorithm-mediated construction as flight-director guidance. Witness some recent ADs from EASA.

Concerning QF72, the accident happened because the flight control was being driven by "raw data". Exactly how and why that data was generated has, to my knowledge, not yet been answered, despite dissection of the box through which it passed. Similar things have happened to analogue data-mediated flight control systems, such as the accident to the X31 16 years ago, but in that case the pathways are well understood.

The question is: which filtered data, of what sort and at which stage, are of most use to the eyes in the front seats when there are problems with the veridical operation of all systems? Maybe the most useful data is in fact a data range: the system "thinks" that the actual value of crucial parameters lies in range X-to-Y, with "here" (say, on the FD) the "most recommended" course of action. That is often what a good hazard&risk analysis of data corruption would suggest is the best information to provide to the eyes in the front seat. And you don't get around that Hazan simply by wishing for the same things you have in your weekend Cessna.

PBL

alf5071h
23rd Jan 2011, 18:41
Peter, raw data or otherwise, those at the front of the aircraft are going to use whatever is presented. Thus as you know, a key aspect of certification is that this data must not be hazardously misleading.
There will always be, as has been seen throughout aviation history (and as, unfortunately, we tend to focus on), the rare exception of low-accuracy data (a computer ‘glitch’), often resulting in accidents. If we are discussing these, then it may be beneficial to look at the complete safety arena, e.g. comparing the accident rate from computer problems against overrun accidents – both from a human and a technological viewpoint.

However, the debate (as usual) comes from just a few views. Predominantly there is a division between the academic (certification) / engineering view and that of the operating crew.
Many issues lie in assumptions originating from these views. The designer/certification engineer may assume a particular level of pilot knowledge and proficiency, whilst the pilot assumes ‘fool-proof’, accurate information. Perhaps these are extreme examples, but each view builds up a store of false information or bias about a particular operation. Accidents often originate from these beliefs.
Also, it’s the assumed context in which systems operate that can cause problems. An example, yet to be proven, might be the rare, short periods of flight without reliable airspeed. The assumption that pilots can manage with pitch/power has been shown historically to be good enough in a benign context (aircraft type and weather), but in the context of a highly augmented aircraft with multiple failures, at night, with a relatively ‘inexperienced’ crew, and when penetrating a line of storms, it may be too much to expect.

Even then, there may still be two views; the pilots suggest design/certification action, but design/certification suggests more operator/pilot training.
It matters little in these high level safety debates whether the data is raw or ‘enhanced’; in an emergency the pilot seeks a compromise solution, as no doubt does the design engineer before certification.
Perhaps both factions require a better understanding of each other's viewpoints and capabilities; the resultant educated compromise will benefit safety.

john_tullamarine
23rd Jan 2011, 19:24
Perhaps both factions require a better understanding of each other's viewpoints and capabilities; the resultant educated compromise will benefit safety

Which is why there will always be a role for the certification TP.

MountainBear
24th Jan 2011, 02:01
That is often what a good hazard&risk analysis of data corruption would suggest is the best information to provide to the eyes in the front seat

Aye, and there's the rub.

Where in the traditional pantheon of aviate, navigate, communicate does risk analysis fit in? I remember a comment I once read from the captain of UA Flight 232 when asked by a reporter how he knew what to do after he lost control of all his flight surfaces. His response: "Well, we just tried the first thing that came into our heads and thankfully it worked." [That's a paraphrase but it gets the gist.]

Risk is inherent in complex systems. And where there is risk, if a man is honest, there is luck. Good luck. Bad luck. I'm not sure that the wise course of action is to toss the burden of risk analysis of complex data systems into the pilot's lap. Might he be better off taking a mid-point in a range of values? He might. Might a pilot be better off believing instrument x over instrument y? He might.

Or maybe he might just be better off flying the plane and saying a prayer.

PBL
24th Jan 2011, 07:38
Perhaps both factions require a better understanding of each other's viewpoints and capabilities; the resultant educated compromise will benefit safety

Which is why there will always be a role for the certification TP.

Yes, but his/her role is limited, by virtue of the math. Given the complexity of today's designs and the dependence of almost every control-loop data path (in the sense in which I introduced the term) on SW, the prevalent reliability model must be the exponential model used for SW reliability.

Given that model, it is not possible to test statistically, through flight test or indeed veridical simulation, a design's resilience to major, hazardous or catastrophic effects - three out of the four classes of in-flight anomaly. That has to be performed entirely in the head.

That circumstance is what makes the airworthiness assessment fundamentally different nowadays from what it used to be a few decades ago. Or, rather, what should make it different.

PBL

alf5071h
25th Jan 2011, 23:50
Peter, I disagree that the role of the certification pilot is limited by ‘math’.
Modern systems certification involves both man and machine; thus, more than one perspective is required, and neither need dominate.
If the certification is to be done mainly ‘in the head’, then why not use the head of a certification pilot (test pilot, evaluation pilot, and line pilot), who should have the better understanding of the context – the situation in which an anomaly has to be evaluated and a plausible crew response judged. It is the combination of man and machine that has to be resilient.

In essence, this thread asks if modern designs are good enough; but, alternatively, are humans good enough to operate the human-inspired designs?
Furthermore, instead of framing the problem as failures in design or operation, perhaps we should be asking why the certification process (judgement), which promotes safety by regulation of both man and machine, appears to have failed. Has it failed because it is now fundamentally different or because it still needs to change?

MountainBear
26th Jan 2011, 06:36
but, alternatively, are humans good enough to operate the human-inspired designs?

Let me rephrase this question slightly....

What are the limitations of a human being in a complex data acquisition environment when he/she has to make judgments in a matter of a few seconds?

What are the limitations of computer software when presented with circumstances beyond design parameters?

To me it is obvious that when (a) the pilot is not the software designer and (b) the software designer is not on the flight deck, a perfect interface between the two is impossible, and no amount of training or design can change that reality.

If that conclusion is true, then the next question becomes just how much cash should be thrown at software design and flight crew training and how much should be left up to the proverbial "wing and a prayer."

PBL
26th Jan 2011, 06:55
Peter, I disagree that the role of the certification pilot is limited by ‘math’.

alf, that may be either because I haven't explained myself well, or because you are not that familiar with the statistical reasoning, or both.

The practical limit of statistical testing of software-based functionality is around one failure/dangerous failure per hundred thousand hours, that is, 10^(-5) per op-hour. You can bench-test the kit to this level, and maybe perform a certain limited variety of partial-integration tests, but you can't do full integration without flight test.

Keep in mind that the certification standard for DAL A critical kit is 10^(-9) per op-hour, that is, ten thousand times the reliability level of which you can be assured to any reasonable level of confidence by bench testing and flight experience.

If you want to be assured with reasonable confidence that dangerous anomalies will not occur with a probability any greater than 10^(-6) per op-hour, it will actually take you the total op-hours in the entire service life of the fleet to do so. And you are still, at 10^(-6), a factor of one thousand under the usual certification requirement for catastrophic events, and a factor of ten under that for hazardous events. That is the combinatorics of software anomalies and there is no way around that math. Recall what you said earlier: "raw data or otherwise, those at the front of the aircraft are going to use whatever is presented. Thus as you know, a key aspect of certification is that this data must not be hazardously misleading." It follows from what I just said that you currently cannot confidently get within a factor of ten of that assurance by general methods. There are some specific methods for specific architectures which promise to be able to attain such assurance with confidence, but these methods are state-of-the-art research (I just reviewed what will be a seminal piece of work on this, which should appear in 2011. Then add the umpteen years it will take for this to become common knowledge...).
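
(To make the arithmetic concrete - a minimal sketch, mine rather than PBL's, assuming the exponential model he names and a zero-failure demonstration test:)

import math

def zero_failure_test_hours(lambda_target, confidence):
    # Failure-free test hours needed before you may claim, at the given
    # confidence, that the failure rate is below lambda_target, using
    # P(no failure in t hours) = exp(-lambda * t).
    return -math.log(1.0 - confidence) / lambda_target

for lam in (1e-5, 1e-6, 1e-9):
    print(f"{lam:.0e} per op-hour needs {zero_failure_test_hours(lam, 0.99):.1e} test hours")

# 1e-05 needs ~4.6e+05 hours: feasible on the bench
# 1e-06 needs ~4.6e+06 hours: roughly a fleet's whole service life
# 1e-09 needs ~4.6e+09 hours: over half a million years - untestable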

It took ten years of flying Boeing 777s around the world before the critical configuration anomaly showed itself out of Perth in 2005. It took 15 years of flying A330s around the world before the filtering anomaly showed up at Learmonth.

Software-based systems are simply different. The math was put out there by a couple of seminal papers in 1993, and at the turn of the century there were still some supposedly-knowledgeable avionics designers who did not know the hard limitations on testing of software or "proven through experience" supposed-validations. Ten years after that, with Byzantine anomalies on one heavily-used machine that came within days of having its AW certificate revoked, the 2005 Perth incident and Learmonth and similar, avionics engineers and assessors are somewhat more aware of the severe limitations.

I work on critical-digital-system standardisation committees with engineers who were still not precisely aware of the statistical limitations even a couple of years ago, fifteen years after the published results, even though there was a general awareness. However, the situation has recently changed in some countries such as Germany. I can't talk about the work until it is concluded and published, though, because of the protocols involved in standardisation work. It does not cover either avionics or medical equipment - just everything else.


Modern systems certification involves both man and machine; thus, more than one perspective is required, and neither need dominate.

Unfortunately the math dominates, as the auto industry now knows well. Manufacturers and component suppliers do extensive road testing of all bits of kit, as well as an enormous amount of unit testing and partial-integration testing. But some of that kit really does get 10^8 to 10^10 op-hours on it, amazingly, from all the installations throughout the industry. And it fails. And that costs the manufacturers and suppliers huge amounts of money in compensation, which they don't talk about but would dearly like to reduce.

The aviation industry doesn't see that - often - because the op-hours aren't there.

That doesn't make the role of a certification test pilot any less important than it ever was, as you carefully point out with good reason. But there are some things he/she just can't do.

PBL

alf5071h
28th Jan 2011, 18:01
Peter, the statistical explanation does not clarify how a pilot is limited in the overall certification, even though in your view the math dominates.

Considering two recent accidents (A330 AF447 and 737 TK1951), the system problems originated with the sensors, where known limitations of software operating as designed created operational problems. There was nothing to find in bench testing, at whatever level was tested.
The resultant operational problems relate to the human-system interface, the situation, and human behaviour; AFAIK behaviour cannot be modelled adequately by math / bench tests. Thus it is in the human-situation area that a pilot might aid certification.

With respect to the process of certification, the current statistical approach is limited as you describe, yet the industry seeks resilience both in systems and operation to improve safety. Does that imply that resilience cannot be achieved with statistics?
With an enormous caveat of hindsight, in the two accidents, each of the sensor faults had been previously identified and considered by the regulators; the resultant decisions lacked elements of resilience.

For the A330, I assumed that the assessed risk of loss of all airspeed was statistically remote; but this wasn’t proven for the pertaining conditions, it was just a judgement, and equally there wasn't a total loss of sensed speed. The inadequacy was in the design specification for sensor selection, yet this was statistically acceptable in certification. The operational question is whether this acceptability (with hindsight) was satisfactory for all scenarios – yes, it’s OK on a clear day with an experienced crew, but perhaps not at night near Cbs. It is this sort of judgement with which a pilot should be able to help.

The 737 accident IMHO is clear cut – a problem of grandfather rights. Rad alt anomalies were known; new installations use either triple mix or modern dual self-monitoring sensors. This newer 737 just used the old standard, allowed by certification. However, consider which operating standard the certification assumed – what the crew would do, possibly that of the latest ‘state of the art’ system (note the similarities with the MD-80 take-off config warning). Thus there was a gap between what should happen in operation (assumption) and what actually did happen (reality); it is the nature and significance of this gap which cannot be identified by statistics, but where pilot input could provide guidance, experience and intuition.
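
(For readers who haven't met the 'triple mix' referred to above, a minimal sketch of the idea - mid-value selection across three independent sensors; the figures are my own and purely illustrative, though a single channel abruptly reading about -8 ft is what was reported in the TK1951 case:)

def mid_value_select(a, b, c):
    # Median of three readings: one arbitrarily faulty channel can
    # never drag the selected value outside the two healthy ones.
    return sorted((a, b, c))[1]

# A single radio altimeter failing low is simply outvoted:
print(mid_value_select(1950.0, 1962.0, -8.0))   # 1950.0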

A final point on resiliency is that the concept requires organisations to ‘learn’. In both accidents, the regulators did not learn from preceding incidents. This is a weakness of both the certification process (continued airworthiness) and humans in the process; a weakness perhaps aided by the statistical approach and associated statistical thinking. Thus I would argue for the process to change, there should be a balancing contribution from non-statistical operational judgement.
If not, the industry will have to accept rare accidents such as AF447 – limitations of design and human judgement in certification, and as with TK1951 – limitations of the operating human and the certification process.
I don’t judge which end of the system, design or human, requires change, but point out that there is something in the middle where greater pilot involvement than currently recognised might help make that judgement, preferably before the event.

MountainBear
28th Jan 2011, 20:39
If not, the industry will have to accept rare accidents such as AF447 – limitations of design and human judgement in certification, and as with TK1951 – limitations of the operating human and the certification process.

What's so wrong with industry treating this acceptance as the desired outcome rather than a hinted-at tragedy?

Stated in economic terms: at some point in time the marginal utility of the next incremental improvement in safety becomes negative.

I find PBL's insistence on the math curious, because it's Bayes' theorem that says that when presented with statistically rare events we are better off just ignoring those events than trying to solve for them. I guess when those rare events involve the deaths of many human beings then all of a sudden the math goes right out the window and industry has to look like it's doing something. What it tells me is that underneath all the hardheaded talk about math and volumes of proofs lies a warm and beating heart that is the ultimate decision maker.

PBL
30th Jan 2011, 19:23
I find PBL's insistence on the math curious, because it's Bayes' theorem that says that when presented with statistically rare events we are better off just ignoring those events than trying to solve for them.

Actually, MB, when testing supposedly-ultra-reliable systems, Bayesian methods say that when presented with statistically rare events, such as a failure behavior of the system under test, we are better off throwing the system away and starting again.
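
(A minimal sketch of why one observed failure is so damning - on my own assumptions of a conjugate Gamma prior over the failure rate, with the exponential model discussed above; none of these numbers come from any real certification case:)

def posterior_mean(a, b, k, t):
    # Gamma(a, b) prior on the failure rate (prior mean a/b), updated
    # on k failures in t operating hours: the posterior is Gamma(a+k, b+t).
    return (a + k) / (b + t)

a, b = 1.0, 1e6   # an already-optimistic prior: mean 1e-6 per hour
print(posterior_mean(a, b, k=0, t=1e5))   # ~9.1e-07: a clean test barely moves it
print(posterior_mean(a, b, k=1, t=1e5))   # ~1.8e-06: one failure doubles it

Even a long clean test leaves the posterior dominated by the prior, and a single failure pushes it further still from a 10^(-9) target - which is the sense in which one is better off starting again.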

What it tells me is that underneath all the hardheaded talk about math and volumes of proofs lies a warm and beating heart that is the ultimate decision maker.

Fine words. But the certification regulations require a case be presented, and if you are a manufacturer of FBW aircraft you have to persuade the regulators that your critical systems have a failure rate of less than 10^(-9) per op-hour. So someone on the manufacturer's side has to do a bit of math to say "here's the argument" and someone on the regulator's side has to follow that math to be able to say "this is a good/insufficient argument". It's easier with hardware, because the properties of hardware are continuous (something breaks; you make it stronger). But it is devilish hard with software. And, humans in it or not, everything in the control loop(s) of an FBW aircraft goes through large amounts of digitally-programmed behavior. You can't expect humans to debug real-time programs magically as they go wrong, if they go wrong. So they had better be right. And that is where the math comes in.

PBL

Shell Management
30th Jan 2011, 20:43
you have to persuade the regulators that your critical systems have a failure rate of less than 10^(-9) per op-hour

Of course reliability is not an attribute of software.

MountainBear
30th Jan 2011, 22:39
Bayesian methods say that when presented with statistically rare events, such as a failure behavior of the system under test, we are better off throwing the system away and starting again.

Correct, when viewed from the perspective of the software designer. But while the pilot has the luxury of throwing systems away (flying the plane manually), the pilot doesn't have the luxury of rebuilding complex software systems on the fly. He has to deal with the failure as it is, in a few seconds, with many lives at stake.

You can't expect humans to debug real-time programs magically as they go wrong, if they go wrong. So they had better be right. And that is where the math comes in.

Exactly. And I'm in full agreement with you so long as we understand "right" to be statistically right, that is, probabilistic.

PBL
1st Feb 2011, 09:04
Correct, when viewed from the perspective of the software designer.

Thank you. I'm always pleased to know when I have said something that is right, especially when it is something on which I am expert :)

Also correct, BTW, when viewed from the perspective of the software user. In this case, the pilots.


But while the pilot has the luxury of throwing systems away (flying the plane manually)

You cannot fly most modern commercial transport aircraft "manually". Everything a pilot sees and does, from "raw data" to control responses, is part of a control system loop which goes through numbers of programmable-electronic systems. (I do acknowledge that on Boeing 737 aircraft, some of the control loops are still analogue mechanical systems. I doubt that will last another twenty years.) Anything a pilot wants to see or do rests on the reliable behavior of those programmable-electronic systems.

Maybe some just have to see it to believe it. We draw causal control-flow diagrams of airplane systems in which the pilot is part of the control loop. It is a valuable analytical technique which we occasionally try to teach to others, but for the most part remain best at ourselves. For most control parameters (or what one might think of as such), these graphs have twenty to forty elements, of which at most three are "pilot see", "pilot think", "pilot do" and the great proportion of the rest are programmable-electronic.
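
(A much-reduced toy of such a graph - my own, with generic node names rather than any particular aircraft's architecture; the real ones described above have twenty to forty elements:)

# Each node maps to the nodes it feeds.
control_loop = {
    "pitot_probe": ["air_data_module"],
    "air_data_module": ["adiru"],
    "adiru": ["flight_control_computer", "display_computer"],
    "display_computer": ["pfd"],
    "pfd": ["pilot_see"],
    "pilot_see": ["pilot_think"],
    "pilot_think": ["pilot_do"],
    "pilot_do": ["sidestick"],
    "sidestick": ["flight_control_computer"],
    "flight_control_computer": ["actuator"],
    "actuator": ["control_surface"],
}
nodes = set(control_loop) | {n for vs in control_loop.values() for n in vs}
pilot_nodes = [n for n in nodes if n.startswith("pilot_")]
print(len(nodes), "nodes, of which", len(pilot_nodes), "are the pilot")   # 12 nodes, 3 pilot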

Now, all of those programmable-electronic elements are subject to the statistical phenomena about which I have been talking. If you think that an anomalous condition can almost always be saved by those three nodes containing the word "pilot" above, then I can only admire your faith in the ability of software engineers to write perfect multi-ten-thousand-to-million-line programs. I can also say that few in the industry share that faith, although some do profess it publicly on behalf of their employers.

PBL

MountainBear
1st Feb 2011, 23:57
Also correct, BTW, when viewed from the perspective of the software user. In this case, the pilots.

I admit I'm baffled.

In a prior post you said this:

You can't expect humans to debug real-time programs magically as they go wrong, if they go wrong.

The reason that Bayes' theorem implies a different course of action for software designers as opposed to pilots is that the factual situation changes. Software designers have the luxury of rebuilding the system; pilots don't.

I agreed with your slight redefinition of my original comment because I thought we were trying to say the same thing, only using slightly different words. Now I wonder if you are just being argumentative.

BOAC
2nd Feb 2011, 07:46
Now I wonder if you are just being argumentative. - I have often thought that too, but on balance I don't think PBL understands what we mean by 'raw data'. To me (as a pilot) this means that although the data has passed through many ICs and the like, it is essentially the 'truth' and not some software programmer's interpretation of what he/she THINKS I should be seeing; and in the case of control functions, it should be what I ask of the system. My training should then govern what I ask.

I am now old and 'retired', but I grew up in a world where I could (1) stall an aircraft if I wished, (2) exceed the g limitation where necessary to avoid dying, (3) choose which of three differing inputs I wished to accept, and (4) expect my control surfaces to do what I actually ask. It now appears that these choices are being removed, and while there is no logical statistical argument for 1 and 2 in the civil world, 3 is vital and should not be delegated to some programme with some 'acceptable' level of error, and 4 is 'ideal'. I am, however, delighted to be given 'information' on what HAL thinks is wrong, but I don't want him interfering, Dave.

The problem comes (in line with my thread) when pilot training and ability become so degraded as to make pilots 'system operators' only, when all the 'interferences' above become essential and we route inexorably towards the 'Airbus captain and dog' world.

PBL
2nd Feb 2011, 20:46
Now I wonder if you are just being argumentative.
- I have often thought that too, but on balance I don't think PBL understands what we mean by 'raw data'.

Gentlemen, please don't let's be tempted to slip into gratuitous insults merely because we don't understand the relevant engineering! The title of this thread is Computers..and..Safety of Aviation, a matter on which I happen to be expert. If that's what you want to discuss, fine. If not, may I suggest you just let it be?

PBL

BOAC
2nd Feb 2011, 21:30
I'm struggling to see "I don't think PBL understands what we mean by 'raw data'" as a gratuitous insult. It is simply a statement of opinion based on observation.

As an (expert) 'end user' I (and others) happen to find 'raw data' a major factor in the 'Safety of Aviation'

PBL
3rd Feb 2011, 12:18
I'm struggling to see [my comment]
as a gratuitous insult. It is simply a statement of opinion based on observation.


Well, I don't believe that. I think you're just trying to needle.


As an (expert) 'end user' I (and others) happen to find 'raw data' a major factor in the 'Safety of Aviation'

Another statement of opinion, I suppose. Let's see if this one is any better. Can you name any accidents in which a crew's inability to fly on "raw data" was a factor? (Note this is a very specific question.)

MB is puzzled about the use of Bayesian techniques in the evaluation of (supposedly-) ultrareliable systems. His response to the comments I am offering is to imagine I am being argumentative. I get enough of that kind of banter from the people I live with, the cats and the ex-ladyfriends. Can we get back to pretending we are professional people with certain sorts of expertise having a technical discussion?

To me (as a pilot) ["raw data"] means that although the data has passed through many ICs and the like it is essentially the 'truth' and not some software programmer's interpretation of what he/she THINKS I should be seeing

Let me say it again, just in case it wasn't clear enough the first time around. According to this explicit definition, very few airline pilots on modern kit see any "raw data".

Now, let me turn to querying the definition of raw data. RAs fall over every so often. They don't appear to use BITE (or, not effectively) and standard fault-tolerance methods don't appear to be used with multiple RAs in certain kit (say, Turkish Airlines's Boeing 737NGs). The generic failure rate is, I guess, somewhere between 10^(-4) and 10^(-5) per op-hour. According to the definition above, a Turkish Airlines RA on approach to AMS a couple of years ago ceased providing true data rather abruptly. So, according to the definition above, the question arises: when you are looking at a "raw-data"-delivering instrument, such as a VSI, ASI, or altimeter, how do you know you are getting "raw data"?

I actually think the definition above is wrong. And I think it can be partially fixed with a little thought. And I think that, when you try to fix it, you will maybe get some initial inkling of the problems associated with reliable data-paths. All that is then needed to make my general point about modern kit is to interpose a couple of computers.

PBL

BOAC
3rd Feb 2011, 12:43
I think you're just trying to needle. - you are wrong - simple statement of fact. I guess that is a foreign concept to you.
Can we get back to pretending we are professional people with certain sorts of expertise having a technical discussion? - if that was for me,
I regret not. I find it too tiring. I think you cannot contribute anything to my knowledge, understanding of aviation or enjoyment of life. I now have to place you on my ignore list but wish you a happy and glorious career with your theoretical work, lady-friends and cats.

PBL
3rd Feb 2011, 12:55
I think you cannot contribute anything to my knowledge, understanding of aviation or enjoyment of life.

Quite obviously not. But you can't fault me for not trying.

I now have to place you on my ignore list

I don't know whether to be mortified or relieved!

PBL

MountainBear
3rd Feb 2011, 17:49
MB is puzzled about the use of Bayesian techniques in the evaluation of (supposedly-) ultrareliable systems.

I'm not puzzled by that at all. I understand it quite well.

What does puzzle me is your own contradictions. You say one thing in a post. Then the exact opposite in a following post. It's that reality that makes me wonder if you are being argumentative.

If that's what you want to discuss, fine.

You remind me of a younger sibling who once proudly proclaimed to me that "you have to play by the definitions in my dictionary." To me your posts amount to the claim that the letter A is equal to the letter B in their graphical design. I don't know how to have that discourse on that basis.

What I get is a constant appeal to your authority as an expert. One of the main reasons I chose to remain anonymous on these boards is precisely because I would rather discuss matters free from such appeals. Rational men are able to see the light of truth wherever it may shine, in the gutter no less than the academy.

PBL
3rd Feb 2011, 18:13
I'm not puzzled by that at all. I understand it quite well.

That is not the impression I am getting. The impression I am getting is that you don't know this material at all well.


What does puzzle me is your own contradictions. You say one thing in a post. Then the exact opposite in a following post.


That old humbug again. So, quote a contradiction that you claim I have proposed.

What I get is a constant appeal to your authority as an expert.

Sorry if that style grates on you. It's a career-related illness, I fear. But I don't think you'll find me claiming expertise where I don't have it.

So, is my appeal to return to the subject matter of the thread falling on stony ground? Do you feel, like BOAC, unable to get anything more out of a technical discussion?

PBL

MountainBear
3rd Feb 2011, 18:30
So, is my appeal to return to the subject matter of the thread falling on stony ground? Do you feel, like BOAC, unable to get anything more out of a technical discussion?

It's amazing what a little bit of knowledge and a whole lot of arrogance can produce.

Just like my younger sibling: "Bow to my expertise or I will kill the discussion." Hopefully, you'll grow out of this mentality one day.

If anyone actually wants to discuss, as opposed to sneer, computers and aviation I will still check this thread from time to time.

PBL
3rd Feb 2011, 19:13
It's amazing what a little bit of knowledge and a whole lot of arrogance can produce.

MB, I find it a shame that many PPRuNe contributions are vastly more eloquent with insults than they are with technical material.

I think we may live in different worlds. I correspond on a regular basis with the people who actually did the work to which I was referring, and there is (as there usually is in these circles) mutual respect for each other's capabilities and interests. No one would ever say or write something like the above.

Which doesn't mean to say no one is arrogant. Just that personality characteristics are an uninteresting topic of conversation. We are much more interested in Bayesian methods and CCFDs.

However, accusing someone of not knowing (thoroughly) what they are talking about is a serious accusation in our circles, and usually requires proof.

Which is how I know you don't live there: the fact that you don't feel the need to establish, with proof, your suggestion that I may have contradicted myself is a firm indication. Let me suggest that my world is a far preferable one to live in than one in which discussion differences are resolved through throwing insults.

But I am still curious, of course. Where exactly is that contradiction that you think I offered?

I doubt you'll answer. But if you were living in my world, that would get you thrown out.

PBL

BOAC
3rd Feb 2011, 21:00
If anyone actually wants to discuss, as opposed to sneer, computers and aviation I will still check this thread from time to time - thanks MB, that is why I started it.

I take it there is still a 'slide rules at dawn' battle in progress, but it is beautifully peaceful here :)

Sciolistes
6th Feb 2011, 14:20
The latest from Flight Global: Industry sounds warnings on airline pilot skills (http://www.flightglobal.com/articles/2011/02/06/352727/industry-sounds-warnings-on-airline-pilot-skills.html)

Piltdown Man
7th Feb 2011, 08:46
I don’t think the good old days were that good. I can still remember the sheer bloody effort spent learning about the errors and limitations of instruments. Just like a politician, not one of the wretched things ever told the truth, and with the slightest provocation they told huge great porkies. That was if you could actually read the buggers. The “three pointer altimeter,” monochromatic dials, pointers the same colour as the displacement indicators and so on. They were difficult to read during day time. Reading them at dusk was virtually impossible as the instrument lighting was not bright enough. At night it wasn’t much better. I still remember my fingers being burnt by post lights when you had to swap them about during an approach at night in an F27. And then there’s the fuel trim indicators – a gauge with an arc length of something like 2.5 inches where you were expected to set something like 78.9%. And this little gauge was there to help you control up to 30% of the engine’s fuel flow. As for backup, you had to remember which engine the standby horizon was connected to. From what I can understand, this aircraft was typical for the period.

Then there’s the last generation of steam instruments, the ones fitted to jet transport aircraft just before the world went glass. I’m talking here about aircraft manufactured up to about 25 years ago. The legibility of these instruments was excellent, as was their reliability if you compared them with previous generations. But they suffered from being incredibly complicated, very expensive and, by modern standards, inaccurate. First-generation “glass” left these things for dust in the reliability stakes. Unfortunately, this stuff was fitted without the background knowledge we had with old-fashioned steam instruments. We were told that these things were accurate – even when the system itself knew that they weren’t. The “magenta” line was always (and still is) a few pixels wide even when these systems know that they may have an error of up to two miles. We weren’t taught how these things could be mis-programmed, could suffer from interference or, in many cases, even where the data came from. Surprisingly, the data often came from the same “black boxes” that supplied the previous generation of steam instruments. And I remember being told that I didn’t have enough experience to fly an “all glass aircraft.” There were even more idiots around in training during this period.

Virtually all modern FBW digital aircraft now have solid-state transducers and a high degree of redundancy. They are reliable. But the worst case scenarios are not practiced with enough regularity. My own aircraft, one of the cheapest jets money can buy, still leaves you with a flight path vector if all ADC data are removed. If the screens capable of supplying that information go blank, I still have a battery driven attitude indicator. The engine data is capable of being displayed on three screens. Overall, I’d suggest that following a catastrophic instrumentation failure, you still have a flyable aircraft – but only if you were trained to use it. And I tell you what, we are. It is included in our type training and elements are practiced during bi-annual recurrent training.

Regarding modern flight decks, we face two big problems. The first is “mode awareness” – I have lost count of the number of times I personally have been caught by my aircraft doing something I didn’t want it to do. Either I trap the error by noticing the untoward behaviour, or my ever-vigilant, normally thirty-years-younger colleague spots it. This is only possible in a system where both seats respect each other as equals when it comes to flying. The fact that most F/O’s can fly better than me is not significant. The second is receiving training that exposes you to mode and system failure at critical times, and to how this physically interacts with the aircraft. This has to be planned by imaginative trainers and be very type-specific. I don’t think every airline does this, nor are they aware of some of the nasty little surprises hidden inside the aircraft they fly every day. A crew of a 737 at BOH a few years ago received several nasty shocks, as did the poor sods at AMS.

Solution? A free flow of information and training so that we can a) recognise the early onset of source-data supply problems, if any, and b) have a rapidly executable plan that will always allow an immediate escape using data that is reliable.

PM

Diversification
26th May 2011, 14:47
I have so often seen complaints about pilots typing in data before a landing. I have the following suggestion.
Today we have memory sticks with capacities above 64 GBytes, which could easily store all available information about all runways known, e.g., to Jeppesen.
Then a small program could take the current position (altitude and heading) from the on-board system. Wind conditions could then be added by hand using a touch screen to compute a nearly optimal descent - a minimum of typing.
The result could be reviewed and - if accepted - transferred to the on-board system. Voila!
But maybe I am too far into the future?
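
(A toy of the kind of calculation being suggested - my own sketch, assuming a constant 3-degree path; the function names and figures are illustrative, not anyone's procedure:)

import math

def tod_distance_nm(alt_ft, field_elev_ft, path_deg=3.0):
    # Ground distance at which to begin a constant-angle descent.
    height_ft = alt_ft - field_elev_ft
    return height_ft / (6076.0 * math.tan(math.radians(path_deg)))

def descent_rate_fpm(groundspeed_kt, path_deg=3.0):
    # Vertical speed needed to hold that path at a given groundspeed -
    # this is where the hand-entered wind would come in.
    return groundspeed_kt * 6076.0 / 60.0 * math.tan(math.radians(path_deg))

print(round(tod_distance_nm(37000, 1200)))   # ~112 nm out
print(round(descent_rate_fpm(280)))          # ~1486 ft/min at 280 kt over the ground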

Regards from an old real-time-system designer.

BOAC
23rd Jun 2011, 07:36
Time to wake an old friend (post #1 here) and check whether any others now agree. Take a moment also to read Sciolistes' link from February, and note, with some irony perhaps, that during the 'presentations' the Air France corporate safety manager Bertrand de Courville "dealt with the art of safe go-arounds" - an interesting focus in the light of what we now think.

Ladies and gentlemen - there have been many calls for a significant shift in focus in 'training' - I again add mine. Time for the remaining few 'old' pilots with any clout in the system to take a stand against the accountants and managers who are dazzled in the headlights of an imperfect technology and training system and ensure all pilots simply have BASIC flying skills and the BASIC tools with which to use them.

Young Paul
23rd Jun 2011, 19:06
Well, I think a sense of perspective about "the art of safe go arounds" is in order here. Even in the 20 years I've been flying, things have developed in this regard. For the first few years of doing an instrument rating, I was incredibly adept at doing a single-engine go-around at ILS Cat I minima - and this was what we had legally to be able to demonstrate. The only thing that ever changed was which engine was shut down.

Over the last 20 years, it has been pointed out that life is rarely that simple. And so go-around techniques (and everything else) have been finessed to cover a wide variety of other options. A go-around from the runway. A go-around and level-off at 1000'. A safe go-around at Cat III minima. An engine failure on short final followed by a go-around. A go-around with jammed flaps. Windshear go-arounds. A go-around with an obstacle clearance procedure. Doubtless there were some good ol' boys who, regardless of what you threw at them, would have the skills to do the right thing intuitively. Perhaps. It was also discovered that about 80% of people might well be thrown by the unexpected. Not every pilot is as brilliant as those good ol' boys. And no airline could count on having a good ol' boy in the left-hand seat on the day when it all went to pot.

So go-around techniques continued (and continue) to be a matter of debate. The appropriate level of automation continued to be discussed and developed.

There are some things which, in my opinion, are absolutely brilliant about how the Airbus is set up. Like when doing a CFIT manoeuvre, you can just firewall the throttles and pull the stick back as far as it will go and hold it there. You don't have to worry about overstressing anything or whether you could pull harder - the aeroplane will simply give you everything it is capable of. And, not worrying about that, you can try to work out why the heck you have granite in front of you and how you can get away from it.

Except that you have predictive GPWS, so may well have avoided that in the first place.

And here's the rub. All these computers are there to improve safety. GPWS, EGPWS, Weather Radar, Windshear, Predictive windshear, TCAS, GPS, FMC, Autopilot ... all have been added because, all else being equal, they add safety margin. "Good" pilots have had a share in a significant number of the most famous air accidents - I'm not going to enumerate them, but I'm sure you won't have to think hard - and in some cases, they were not using systems that aircraft manufacturers had made available to protect them.

You can talk about this being an erosion of professionalism if you like. I don't think so. What it means is that a different skill set is required. The same happens in every job. If you go back to the 50s and 60s, there were airliner crashes most months, it seems (I had to do a quiz, and working through the headlines of those years was startling). If aviation were as dangerous now as it was then, we would be seeing hull losses daily.

theficklefinger
23rd Jun 2011, 20:40
Reminds me of debates about ethics and morality...

I think when the airlines want to have a serious discussion about safety practices, then hiring practices will be first on the agenda, not whether we need TCAS I or II, or a third FMS on board.

When all the lights go out, we either have pilots that can dead reckon and hand fly or we do not. Let's start there.

It's laughable to discuss SOPs and safety strategies when the airlines continue to hire personality over experience.

alf5071h
23rd Jun 2011, 22:24
BOAC (#149), hopefully avoiding previous debates: we, the ‘older’ generation, should also take care not to be “dazzled in the headlights of an imperfect technology and training system”.
What may have been 'basic' to us may not represent current views of training requirements – the minimum skill set to conduct flight operations safely. I stress ‘minimum’ and ‘safe’, used in the airmanship sense, which requires additional and progressive acquisition of skills with operational experience – training on the job.
In the view above, I assume that the industry has deliberately changed the required level of training with the advent of advanced technology – automation. If so, then either this training does not match the requirements of the new generation of pilots, perhaps dependent on automation, or it does not match the expectation of the older captains in modern operational situations, most of whom have the relevant experience and skills.
The former leaves new pilots ill-equipped to conduct operations without additional support; the latter places greater workload and responsibility on captains – and in reality it is probably both.

These thoughts were developed in the adjacent thread - http://www.pprune.org/safety-crm-qa-emergency-response-planning/454443-eager-beaver-pilot.html (#19)

Many of the problems stem from ‘change’ and how changes have been identified and managed; ‘change’ was also an issue earlier in this thread - http://www.pprune.org/safety-crm-qa-emergency-response-planning/379780-computers-cockpit-safety-aviation.html#post5041334 (#20)

Perhaps what the industry is concerned about (link @ #146) identifies with the relatively recent changes in an ‘imperfect technology and training system’, and its concerns are thus reactions to first contact with the enemy – no plan survives contact with the enemy. Conversely, is the industry over-reacting to a few surprising ‘automation’ accidents (salience), in what is a very safe mode of transport, but one which always expects improvement?

I agree that the industry needs to review (change) the current situation, but not necessarily ‘back to (the old) basics’ – you cannot turn the clock back.
What has changed? Man, machine, or situational context – the big system – human, technical, social aspects.
What is inadequate about the existing technology and training? Remember that nothing is perfect; the man/machine combination only needs to be adequate for the task and context (not perfect safety). Is the current (changed) context too complex for the present man/machine combination?

Thus what needs to be improved? Basics are important, but what are the ‘basic’ skills and tools for today’s context? Will these be adequate for the foreseeable future involving the ‘new’ man/machine and operational situations?
How and when are improvements to be achieved? Perhaps the latter (timing) is the pressing issue.
IMHO the discussion should focus on the changes, not just on taking a stand. If you are going to take a problem to management, have a workable solution beforehand – and one in your favour, safety.

BOAC
24th Jun 2011, 07:27
I do not have a solution, alf - that is for more qualified folk than me. Nor is it wrong to try to ensure that this very 'management', or whatever is driving the situation, is thinking about the issue. No, I do not wish to revert to flying an ILS on turn and slip, or to inclinometers or wing-warping, but you must admit that whatever 'bells and whistles' (make that now 'ECAM and alerts'?) a system provides, IF (when?) it goes tits up, providing a pilot has basic control of the a/c and basic, reliable (even if crude) information, he/she should be capable of recovering to a safer environment. I'm sure, as one of the older generation like me, you treasured the ability to disconnect the autopilot from the automatic vertical/lateral navigation system and, knowing where you were and where you wanted to go, were then able to use basic piloting skills to actually FLY the aircraft while the 'what's it doing now' mist burnt off?

My fear is that this concept is no longer built into the mindset of the 'modern' pilot. My fear is that in the 447 case (since that is where I began this thread) they were 'expecting something magic to happen' rather than ensuring that it did. That primarily is my concern.

I said many moons ago that once the 'spamcan' in the PPL syllabus has some form of LNAV as a standard (which is probably not far away), the concept of 'where am I' may completely disappear too. I saw that very mindset 5 years ago when there was an LNAV issue in my company and one particular pilot was COMPLETELY lost without the HSI Nav display - no concept of PLOG/time/tuning a beacon or even looking out of the window - just a disbelieving fixed stare at a useless EHSI. The problem is with us - let's address it. The call is for input from greater minds than mine, alf, particularly those with influence. After all, the little boy shouting that the king was actually naked probably did not have a complete set of clothes in a holdall for the king, but simply and genuinely felt someone should be aware.

Your opening para is indeed valid, alf, but who, then is to tell the king?

MountainBear
25th Jun 2011, 07:50
What it means is that a different skill set is required.

I agree.

I said many moons ago that once the 'spamcan' in the PPL syllabus has some form of LNAV as a standard (which is probably not far away), the concept of 'where am I' may completely disappear too.

I think there is a cogent argument to be made that 'where am I' as a concept should go away. At least, it should go away as a matter of first priority. One's position is no longer determined by peering through the glass of your bi-plane and following the dirt road to the landing strip. Where one is depends, in the first instance, on which flight system or instrument one chooses to give attention to. It's based upon the mental model one has constructed of events as fed to you by those instruments. What goes under the term 'loss of situational awareness' is really the result of data overload, or mode confusion, or wrong priorities. The number of accidents where the GPWS is going off on the flight deck while the crew blithely plows the plane into the ground is astonishing.

The way I see it is that in a modern FBW aircraft, before you even get to the 'where am I' question, the pilot has a credibility problem. Is that GPWS warning accurate, or is it malfunctioning? Is that altimeter dropping 10,000 feet/minute on AF447 accurate, or has the computer run amok? It's rare that the pilot loses situational awareness; it's more often the case that he's simply wrong about the situation in the first instance. And that's usually the result of the fact that he's chosen to believe his eyes (his biological system) over the instruments in the plane, or because he's chosen to believe what the instruments are telling him in terms of raw data rather than filtering that through the logic of the automation, or some other type of mode confusion.

The point that I'm driving at is that developing new skills is not enough. To the extent that a pilot in a modern FBW aircraft is a computer jockey, he doesn't just need to do things differently, he needs to think about flying differently. He needs new conceptual tools and different training. I don't think that retreating to paeans about 'basic airmanship' is healthy. All that will do is create a type of dual consciousness that will exacerbate mode confusion rather than resolve it.

Tee Emm
25th Jun 2011, 11:49
It's rare that the pilot loses situational awareness;

In the simulator we see it all the time. And I don't mean a pilot looking at his MAP and pointing to that VOR symbol over there. During the course of being radar vectored the instructor will cover or fail the MAP mode and freeze the simulator and ask the student to point to the position of the aircraft on the Jepps chart.

You would be dismayed how many pilots have difficulty with this simple task. Sometimes the instructor will take control of the simulator and move the aircraft to several positions within a 50 mile radius and then ask the student to show his position on an en-route chart using RMI indications requiring cross radials or ADF readings coupled with DME readings. Again we see much sucking of teeth especially if the instructor then asks the student the MSA in that sector. This is only part of situational awareness by definition. Sometimes it takes several minutes for the student to work out his present position with much turning of the Jepps chart sideways or upside down. This is basic instrument navigation but reliance on the MAP has meant these navigation skills are lost.

alf5071h
26th Jun 2011, 19:04
BOAC et al, this is an excellent thread, but like that of AF447, the search for meaningful understanding and solutions to a complex problem often leads to repetition and entrenched thinking.
However, by revisiting ‘the two tasks’ in # 1, it may be possible to have a deeper, although more conceptual view of the issues; I offer the following:
… the manufacturer/regulators/operators to ensure something usable remains, …
If we remove AF447 and its speculative aspects from the wider view of current safety, then the existing requirements appear to be satisfactory. Where technical issues appear to dominate accidents, regulatory interpretation and/or the operational implementation (human factors) also contribute serious weaknesses, e.g. 737 Rad Alt, A320 Congonhas, MD80 TOCW.
Other accidents almost exclusively involve the use, the application, of what equipment/knowledge ‘remains’ (or is normally available); LOC, disorientation, overrun.

… a change in the philosophy and application of training and recurrent testing.
This task reflects the problems of applying what ‘remains’, what is normal (as above). One solution proposed so far is what I have described as ‘more of the same’ (blame and train), and which other posts have described as specific changes in education, training, checking, and operation. The report in the link @ #146 follows the same theme.

However, with a conceptual view, I suggest that these solutions are only treating the symptoms of a much deeper problem. An obvious candidate is the increasing use of technology, though without discarding the interrelated aspects of human behaviour and the overall operational ‘system’.

Technology / automation may encourage complacency in operational, organisational, design and regulatory judgement; we are assuming too much, there is technological bias in our risk management.
Not that the human is lazy, but we do like to be efficient; high in trust, and making many (often undisclosed) assumptions.
We depend on automation, and in the extreme may believe that automation can replace the unique human ability to think. We no longer practice ‘old skills’ associated with understanding (situational awareness) because the required level of ‘understanding’ is presented in suitable formats; EFIS, FMS, Autopilot/FD, but most of the modern human-machine interfaces are adequate for basic flying tasks.

At a regulatory level this search for efficiency might result in lower standards (old assumptions), allowing greater complexity in operations - crowded airspace, longer duty time, etc.
At the operator-management level, there is lower-calibre recruiting, reduced training, etc.
And at the sharp end … … what exactly is the problem with automation? Not seeing emerging problems, not appreciating ‘change’ or the need to change our thoughts or actions; not being very thorough.

Much of the above comes from ‘The ETTO principle’, Efficiency - Thoroughness Trade-Off (Hollnagel), how we balance getting the job done vs cost, time, and resource. This involves the sharp-end, management, regulators, and designers.

AF 447; regulatory assumption that crew can fly pitch / power, delay in retrofitting pitots was acceptable, crews fly close to Cbs because of route structure / other traffic – efficiency!
737 Rad Alt, MD 80 TOCW; maintenance, fault reporting and rectification, lower regulatory standards – grandfather rights (assumption), – efficiency!
Disorientation; crew rush to engage autopilot, early turns, weak crosschecking, – efficiency!
Overrun; press-on-itis, approximate calculations, poor knowledge, – efficiency!
How do we balance our quest for efficiency in normal operations with thoroughness to maintain safety?
We need to “enhance [our] abilities to respond, monitor, anticipate, and learn” - Hollnagel The ETTO Principle. (www.abdn.ac.uk/~wmm069/uploads/files/Aberdeen_ETTO.pdf)

BOAC
26th Jun 2011, 21:13
alf - thanks as always for a thoughtful and comprehensive post. I cannot respond to it all, but would say:

"repetition and entrenched thinking." is where I think we are. Take the 3 year cycle of LPC/OPC 'topics' - how often is a wildie thrown in? Do we just tick the box for UPs, double engine fail, loss of all hydrayulics etc etc without looking at the increasing complexity of the a/c systems, what can go wrong and how we both recognise that and react to it? Hence the secong bullet point you post.

"We depend on automation, and in the extreme may believe that automation can replace the unique human ability to think. We no longer practice ‘old skills’ associated with understanding (situational awareness) because the required level of ‘understanding’ is presented in suitable formats; EFIS, FMS, Autopilot/FD, but most of the modern human-machine interfaces are adequate for basic flying tasks." - yes, to me a very large part of the problem. Indeed I would go further than you and say "all of the modern human-machine interfaces are more than adequate for basic flying tasks". Most are indeed excellent. The AB system included. I have, however, maintained for a long time that these outstanding systems are ahead of human capacity at this time and thereby too complicated. My point is - are we ready when it fails and do we have the necessary human skills to notice and react and the equipment with which to cope.

Tee Emm's post is a case in point - that sort of SA is rarely, if ever, needed now in a pilot's lifetime - it was once 'bread and butter'. When the wick goes out on the EHSI or whatever, however, what should be a simple task of sorting things out methodically is vastly complicated by a lack of awareness of what the magic stuff had been doing for the last x minutes/hours. Get airborne, plug it in, and when we see 1 hour or so to go, start paying attention to things. Seen that before?

Keep it coming folks - something needs to change..

Lonewolf_50
27th Jun 2011, 13:09
PBL made some interesting points regarding data that promotes a level of safety to be gained by relying increasingly on the computer/automation. The problem is that the depth of inquiry and investigation into accidents, where the combined man/machine lash-up failed, is orders of magnitude deeper than the inquiry into those events where "it nearly went pear-shaped" or other system issues arose but the plane landed safely. It is my guess that any number of events of that sort are never captured. There may be "conventional wisdom" or a variety of anecdotal evidence about how poor a given system is, but until failure, or near catastrophe, where is the data that allows one to make a case for change or improvement? Within each organization I suspect that the attention paid to the "not quite right but it didn't kill us this time" varies. That leads to the idea that data for analysis is further skewed, as a certain percentage of this will remain "in house" for a variety of reasons. :(

In gross terms, the analysis scheme PBL was resting upon in the linked article somewhat resembles "counting the hits and ignoring the misses." As a data collection method on the man/machine system, that is a statistical no-no of significant severity: you have to account for both the hits and the misses to get a sense of what your data is telling you. (An example is the rigor of drug tests in the US that the FDA gets all shirty about ... and even then the outcome isn't perfect.) I am not convinced that data collection by exception is going to take the industry in the proper direction, since it looks to create a built-in bias.

As an industry (I recall discussing some FOQA issues briefly with PJ2 a while back), there are disincentives and obstacles to the industry-wide sharing of "hey bubba" moments and lesser "it went wrong" incidents that were not fatal. But I also understand that there are programs to do just that.

A few pages back, one of the old hands called for a required debrief session after each leg or trip. Having grown up in military flying, that was part of the event. The sortie was not complete until we'd all sat down, cigarettes and coffee in the old days, coffee and nothing more recently, and walked the mission front to back in about ten to fifteen minutes to see what we did right, what we did wrong, and what to do about any of it. The CRM environment the Navy got very involved with encouraged this in terms of working the approach to the debrief as a no-fault event.

That left the sticky issue of dealing with SOP and rule breaches. If during a flight something blatantly wrong was done or commanded, then what? (Oh, by the way, what rule set did the organization overlay on the system? Varies by organization.) Sometimes the PIC would address it formally. Sometimes it was the PIC who was the culprit. The Navy's Anymouse program was able to bring a few of these things to light, via a non-attribution safety gram entering the flight safety system, so that "something not right" was aired rather than being buried. I will guess that airlines have similar structures in place. If the culture of the flying professionals in the organization is "I can be a better aviator/crewman/crewmember each day", the above system worked better than when that attitude was not evident in the organization from top to bottom.

What has this to do with Computers in the Cockpit and aviation safety?

The debriefing and the documentation of any and all, even seemingly minor, hitches and glitches on each and every system ought to be part of every flight. The designers and those who work on system improvement need data in order to get funds for system adjustments or improvements. So too do those who keep track of training and proficiency of aircrews.

Are man/machine interface issues handled well enough in your (or any) organization?

Snipped the rest, as I am wandering into areas I don't know enough about.

BOAC
27th Jun 2011, 13:23
Snipped the rest, as I am wandering into areas I don't know enough about. - I would guess that is a shame - Why not float 'ideas' if nothing else on the topic? They can stimulate discussion. If they are 'rubbished', I'm sure as ex Navy you are used to brushing that off......................:D

Tee Emm
27th Jun 2011, 14:19
Why not float 'ideas' if nothing else on the topic? They can stimulate discussion.

Back to basics. Firstly - can the aircraft be flown safely by hand? If it cannot, then it should never have been certified in the first place. Of course all jet transports are capable of being hand flown.

There is evidence that over-reliance on automation is causing accidents - particularly loss of control in IMC when the automation disconnects for whatever reason. That problem is easily fixed by practice. And not just the last few miles of an ILS either. Pilots should be encouraged to hand fly for the prime purpose of maintaining pure flying skills.

Commonsense dictates that the automatics should be used when landing in weather worse than Cat 1 ILS. Nevertheless pilots should maintain the basic skills to fly an ILS by hand. Where airspace navigation is based upon tight tolerances that require the automatics, then of course the rules are there to be followed.

We see on Pprune pages contributors advocating hand flying only in VMC. That does not fix the problem. Any fool can hand fly by looking at the horizon.

Sharp pilots know that hand flying up to 10,000 ft and during descent below 10,000 ft (arbitrary numbers) is probably the simplest method of keeping current. Switching off the flight directors increases scanning skills which is why hand flying is being carried out in the first place. If in IMC then all the better because hand flying in IMC is no big deal. That said, it is a big deal if you are frightened of hand flying. Time for simulator practice.

If automation dependency has you by the short and curly, then you have only yourself to blame. Of course there are operators that demand full use of automatics for every minute of flight and no leeway apart from take off and the last few seconds of landing approach. But until operators stop paying lip service to the potential dangers of automation dependency and encourage crews to hand fly in appropriate conditions (and that depends on the judgement of the captain) then we simply go around in circles waiting for the next inevitable loss of control in IMC tragedy. Unfortunately, I doubt if things will ever change and we are stuck with automation dependency.

Young Paul
27th Jun 2011, 16:41
For the sake of discussion ...

How much effort should we really be putting into this? The circumstances in which there is a total and permanent loss of enough of the flight systems are pretty unusual - even with the large amount of aviation in the world today, how many hull losses are we talking about? One in five years? And in the case of AF, wasn't the more fundamental problem thought to be the fact that it flew through a CB? Surely the more important issue is to ensure that pilots avoid the situations where their superior basic handling skills are needed to save lives?

On the other hand, I can think of several occurrences of double engine failures on twinjets that have occurred in the last five years. Should training to handle this wisely not be getting a higher priority?

Particularly in a professional pilots' forum, of course there will be people who nod sagely and say that being a professional pilot requires good basic flying skills. And so they ought to have. But my suggestion (I am open to being refuted) is that the real risks that need to be addressed aren't shortcomings in basic flying skills. For instance, I genuinely think that one of the greatest risks - not one that will CAUSE the next accident, but one that will be a significant contributory factor - is poor CRM. Most people have got to grips with it. But accidents and near-accidents will continue to happen where this has broken down - and what it will actually look like is CFIT (with a captain assuring a junior F/O that he knows the local area) or loss of control in a CB (with the captain unhappy about flying so close to the red bit but not wanting to intervene a fifth time in the F/O's operation) ...

Thoughts?

Sciolistes
27th Jun 2011, 18:40
Young Paul,

If I understand you correctly, you are saying that training should be targeted at the areas of most risk to address that risk? You cite loss of thrust from both engines as an example.

I disagree, to the extent that I believe flying is a 360deg problem requiring a 360deg solution. What I mean by that is that a pilot does not develop superior skills and awareness by tackling single issues. Sure, one would be more proficient at dealing with total loss of thrust by practicing it. But one would also be more proficient by developing a superior sense of situational awareness and general competence.

The issue of computers in the cockpit is not simply an issue of hand flying, but an overall issue of maintaining the skills required to be sufficiently aware and knowledgeable. Hand flying specifically doesn't just improve one area of one's ability; it attunes the pilot to the nature of the environment he is operating in.

You also cite CRM as a problem. CRM is generally a problem when the F/O lets his responsibility to ensure a safe flight slide. I believe a major reason for this is lack of confidence, which generates a lack of willingness to tackle unsafe practices by captains. I am sure that an F/O who is competent in all aspects of flying, and thus confident in his knowledge and ability to handle the aircraft in any recoverable situation, is not the kind of F/O who would let a captain continue with an approach like the one at Mangalore, where the entire approach was high and the touchdown so far down the runway that it must have been obvious the aircraft was in great danger. That is an extreme example, but even in my airline F/Os who have no fear of disciplinary action (quite the opposite) often fail to challenge captains when it is their job to do so.

Specifically targeting high risk failures is not going to help anyone in the long run. Specifically encouraging crew to develop as complete, confident and thus thoroughly able and flexible pilots is critical. Being able to confidently take over from the automation at any point in a normal or abnormal flight is, I believe, absolutely critical to that development.

Young Paul
27th Jun 2011, 20:36
I agree that there's a need to be an all-round pilot. What I'm challenging is how great the role of "traditional" skills and "traditional" behaviour of the aeroplane should be in this matrix.

To take traditional aeroplane behaviour to start with. I am quite happy (even as an Airbus pilot) to say that FBW was placed in a life-critical application ten years too early. Now, however, on the back of 30 years of practice, I don't have any significant doubts about it. The designers had to make decisions: should we have manual reversion? Should we be able to fly with flight control computers all off? Should there be tactile feedback in the control system? Whilst the decisions they made at the time may have been ambitious, I don't think that history demonstrates they were wrong. Where there have been hull-losses of fbw aircraft, I don't think you can really show that it was an issue with the computers. At worst, it was an issue at the man-machine interface, which highlights the real safety issue - that is, the human factor - people not using the system properly. So in what circumstances do we need to know that we can fly it in the "traditional" way? I think BA have gone too far in saying that you can't fly without the ... whatever it is they say you can't fly without. Autothrottle? But all airlines have to make a decision which can be justified to the regulatory authorities.

Now traditional skills - the ability to switch off all the automatics and fly by hand. In what circumstances is it necessary? I flew with a technically switched-on F/O the other day who knew that you could get the best descent out of an A320 by switching the automatics off. That was safely done in VMC. But the number of times in many years when I've been put in a position where this made the difference between a landing and a go-around is - well, one. If you're high, you can ask for extra miles. Or go around. Or you can plan things from further back to make sure that you reach the gate. And the price of switching all the automatics off voluntarily is that unless you're very careful, you're taking out not only those protections, but also the inclusion of your fellow pilot in the monitoring loop. Much better as a rule, surely, to work within the constraints of the automatic systems.

What about technical issues or multiple system failures or lightning strikes coupled with autopilot disengagement? Again, whilst as pilots we should be able to cope with this, would it be proportionate to gear our training towards a once-in-a-flying-career combination of failures? To be honest, when everything goes wrong, none of us really knows if we'll be successful when we step up to the plate. Every famous air incident you can think of - in each case, the pilots didn't know what they were going to be facing when they went to work in the morning. In all of them, arguably, they were heroes - in some they saved lives, in some they didn't. But it's a bit like the engine-failure-at-V1 thing - the danger is that you prepare for the "worst" case, and get very good at managing that, but don't know how to cope with anything less than extreme. We really don't want people switching autopilots and flight directors off because one of the flight control computers has failed, do we? But I think there's a risk of that. Again, I've heard of experienced captains, doubtless very good at handling the aeroplane, who have used their superior handling skills to escape from encounters with CBs that good airmanship ought to have kept them away from in the first place.

alf5071h
28th Jun 2011, 01:54
Lonewolf_50 refers to ‘probabilistic’ regulation, which has served the industry well, but the method does not capture human activity except by assumption. Adverse human contributions, to some extent, have been mitigated by selection, training, and proficiency, but even so, these are still subject to basic human fallibility.
Most new designs overtly aim to guard against error, but a technology-driven complacent industry, ‘by counting numbers’, might have unwittingly accepted that the safety improvements apparently rooted in technology would mitigate even lesser human standards.
More likely, commercial pressures have argued for a stabilisation (containment) in the quest for ever higher levels of safety (by using technology); this might be a facet of an ‘almost totally safe transport system’ (Amalberti). (www.ida.liu.se/~eriho/SSCR/images/Amalberti%20_(2001).pdf)

Tee Emm reiterates some of the sharp-end issues and in part identifies a solution “If automation dependency has you by the short and curly, then you have only yourself to blame.” A facet of self discipline perhaps?
This solution is still only treating a symptom, as there are many situations where humans now have to depend on automation, e.g. RVSM, P-RNAV, because the industry has changed. Thus the availability of automation (and other technologies) has altered both the operating situation and the choice available in executing a task (auto vs manual). Furthermore, human nature biases individual assessment of capability – we think that we are better than we are; complacency, ‘we can do that when required’, etc, etc - "repetition and entrenched thinking".

If, as BOAC states, modern systems are ahead of (beyond) human capability, in that we do not have the skills for the failure cases, then the context of the failure should be avoided. But the context is driven by the perception of safety, the risk of encountering a situation – probability. Moreover, if this is a public perception – a social perception – then the industry might have more to fear from the media than from technology (cf the nuclear industry).

Avoiding the context (operational situation) could involve either, or a selection of, highly reliable technical solutions (everything automatic), focused training, or changing the workplace / task.
IMHO it is not technology that is beyond human capability, it is the situations which the human has to face if technology fails, that demand too much; this is an artefact of the modern ‘system’ – the modern technological complacent industry.
Assuming that the problems are perceived as being severe enough to worry about (probability), then solutions may not emerge until the industry recognises that some situational demands are too great for the human, whether these individuals are at the sharp-end, in management or design, or regulators.

“We cannot change the human condition. But we can change the conditions under which humans work”, Professor James Reason.

“… to really understand risk, we have no choice but to take account of the way people interpret events.” Professor Nicolas Bouleau in “To understand risk, use your imagination”, New Scientist, 27 June 2011.

BOAC
28th Jun 2011, 07:20
If as BOAC states, modern systems are ahead of (beyond) human capability, - my 'broadbrush' comment also takes into account the fact that the systems are extremely complex. So complex that in the case of 447, if a question on the operation of the FBW system is asked 2 years after a ? 6 minute ? disaster window, we still get conflicting answers from 'experts'. These systems (and I do accept the need for them, by the way) must operate in such a way that either there is no possibility of human confusion through the cycling of their various code loops, AND/OR there is a clear 'escape route' available to a pilot to allow a less-than-perfect but survivable exit from the problem. It may sound trite, but when you are trying to fly out of whatever 447 had, RVSM, P-RNAV and even alpha protection CAN be dispensed with. Of course, 'acceptable risk' rears its head and we could more-or-less shrug our shoulders and say 'C'est la vie', but I believe current trends suggest not.

I would also expand on Reason - the 'human condition' is changing itself with time as each generation grows up with a different tech landscape and thus we need to be constantly reviewing the way we change 'the conditions'. Anecdotally a recent thread about an email 'problem' was sorted by the poster's 9 year-old grandson arriving on the scene with an Android 'App' to solve it. In 10 years or so, said grandson could be in the RHS of a transport aircraft. Are we adapting our training philosophy at the same speed?

aviatorhi
28th Jun 2011, 08:27
I might be a little late to the discussion... but: BRING BACK THE FEs! When failures/emergencies of this magnitude occur in adverse weather, the two guys up front have their hands full flying the thing; having a guy in the back who knows the systems and can maintain them as best as possible is going to beat a computer any day of the week.

BOAC
28th Jun 2011, 10:18
Unfortunately not! Short of using the fire axe on the PF's head, the F/E has no way of stopping PF exceeding stall AoA. F/Es are also excellent at working with and hitting big mucky whirring and thrashing bits, and not at diagnosing where a couple of zeros in a memory stack may have rolled off the end and dropped onto the next stack.

There is, of course, nowhere for the F/E to sit any more:)

aviatorhi
28th Jun 2011, 11:58
They have no way of stopping it, but they relieve a lot of the workload on the two pilots, who have lights and warnings going off as they're trying to get the plane under control - and I hope you're not saying that excessive AoA is the only thing that can happen.

Point being, planes have grown more and more complex over time, with less and less direct control of the systems - just lights, and procedures that aren't as well understood, clear or complete as they need to be.

Tee Emm
28th Jun 2011, 12:44
Professor Nicolas Bouleau in “To understand risk, use your imagination”, New Scientist, 27 June 2011.

Fine words - but no captain would ever have thought that a nervous-Nellie first officer would snatch back the unguarded thrust levers on that Ryanair 737 and abort during rotation simply because "things didn't seem right".

alf5071h
30th Jun 2011, 02:10
TM, I interpreted ‘the fine words’ as to be directed at management; someone who can restrict the operational situations which pilots might face.

With your example (737 RTO), the generic risky situation evolved from relatively recent RTO training materials which introduced ‘if unable to fly’.
This text requires an evaluation, whereas an engine failure is a simple ‘if-then’ assessment (If engine fails below V1, Then stop).

The speed trend is a poorly described artefact of technology which has been put to good use in some situations (IMHO it’s a crutch for an inferior speed display, but that’s another matter).
The decision to add speed / trend anomalies or similar to an RTO decision increases operational complexity; such situations must be bounded. What is anomalous, how much change from the norm, what difference can be tolerated, when, why?

The availability of technology (speed trend) introduces the opportunity for complexity, but it’s humans who control the level of complexity; e.g. with specific guidance in a procedure – "If ASI fail or error (<= 5 kt split) below 80 kts, Then stop; trend vector N/A. Above 80 kts go, use the standby and resolve any ambiguity at a safe altitude".
The guidance / procedure controls the circumstance of the situation; this should be based on a risk assessment – technical probabilities, e.g. is the trend vector an essential item, if not then ignore; cf dual vs single ASI failure during take off – bound the situation with an error margin and total speed.
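As a toy sketch only (the 80 kt gate and 5 kt split are the illustrative values quoted above, not figures from any real FCOM or QRH), the point of bounding is that an explicit gate and tolerance turn an open-ended evaluation into a simple if-then:

def rto_decision(ias_capt_kt, ias_fo_kt, asi_failed):
    """Bounded RTO rule: below the gate, an ASI failure or an
    out-of-tolerance split means stop; above it, go, and resolve
    the ambiguity on the standby ASI at a safe altitude.
    Illustration only - not a real procedure."""
    gate_kt = 80.0           # decision gate from the example above
    tolerance_kt = 5.0       # reading the '5 kt split' as the tolerance
    speed_kt = max(ias_capt_kt, ias_fo_kt)  # conservative speed estimate
    split = abs(ias_capt_kt - ias_fo_kt)
    if speed_kt < gate_kt and (asi_failed or split > tolerance_kt):
        return "STOP"
    return "GO - use standby ASI, resolve ambiguity at a safe altitude"

Everything contestable (the gate, the tolerance, which source to trust) sits in named constants; that is what a bounded situation looks like from the crew's side.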

In an RTO scenario, by controlling the context / circumstance of the situation, and thus the opportunity for dilemma*, well-formed operational guidance should maintain safety even with high-tech systems.

* I use ‘dilemma’ opposed to error in this instance, as the decision would likely be ‘correct’ for the situation at the time, based on what was perceived and judged against the crew’s belief – vague / incomplete guidance, and common (perhaps mistaken) knowledge and training about a facet of new technology.

Tee Emm
1st Jul 2011, 13:58
The 80 knot take-off roll check of the captain's and copilot's ASIs in the 737 is really a gross error check. Not five knots or even 10 knots, because even as the call-out is made, the aircraft is accelerating so quickly that the speed comparison is useless a second or so later.

From simulator experience, by the time either crew member focuses on the standby ASI for comparison purposes, another 10-20 knots has passed by. Boeing do mention use of the ground speed reading as a confirming factor since rarely does one see a defective ground speed read-out. Of course any ground speed check must take into account the wind component.

What is important though, is what happens if the PNF does not call out "80 knots" either because he had his attention elsewhere, or he simply was too slow to react or of course if his ASI had not yet reached 80 knots.

In that case, it is incumbent on the PF to make his own call-out based upon his own ASI - for example "going through 95 knots my side". In turn, this should stir the other pilot either to agree, to disagree, or to remain bemused. The latter is more likely, given the time factor.

All the time the aircraft is accelerating towards V1. Believe me, we see this in the simulator a lot when a fault is set into one or other of the main ASIs. There are immediate doubts in both pilots' minds.

This is where knowledge of the expected ground speed is good airmanship: rather than risk a high-speed rejected take-off under the circumstances, the next best thing is to rotate on the expected ground-speed reading, allowing for the wind component.
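For illustration only (the numbers are invented, and the IAS/TAS difference and gusts are ignored), the arithmetic behind the expected ground speed is a one-liner:

import math

def expected_groundspeed_kt(target_ias_kt, wind_speed_kt,
                            wind_dir_deg, runway_hdg_deg):
    """Expected GS at a given IAS: subtract the headwind component.
    Ignores the IAS/TAS difference (small near sea level) and display lag."""
    angle = math.radians(wind_dir_deg - runway_hdg_deg)
    headwind_kt = wind_speed_kt * math.cos(angle)  # positive = headwind
    return target_ias_kt - headwind_kt

# e.g. Vr of 140 kt, wind 330/20 on runway 36: about 17 kt headwind,
# so expect roughly 123 kt on the ground-speed read-out at rotation.
print(round(expected_groundspeed_kt(140, 20, 330, 360)))  # -> 123

A mental approximation to the same sum, done at the performance-calculation stage, is all that is being suggested here.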

Provided the pilot applies commonsense knowledge of the initial climb pitch attitude, coupled with the known N1 for the circumstances, the confusion over which ASI is the problem can be sorted out later at relative leisure. Sorry about the thread drift.

alf5071h
3rd Jul 2011, 13:58
TM, there’s no thread drift if you consider the effects of technology; this becomes a good example of potential problems of ‘computers in the cockpit’.

I would be very surprised if Boeing recommended (officially) a ground speed (GS) check during take-off. Where does GS come from, how accurate, update rate, etc, etc, and what time is available for a crew member to look and crosscheck? You mention some of the problems.
The ‘availability’ of GS is an artefact of technology – “let’s use it because it’s there”, without thought to the added complexity and potential for confusion.

Why do we check/crosscheck ASIs?
With old steam driven systems, as you say, - a gross error check.
However, with modern technology, many ADC driven instruments have a comparator alerting system, either ‘Speed’ or for the total system, ‘ADC’.
So the take-off SOP could be simplified by deleting the speed check and relying on the comparator; but is that warning one of those inhibited during take off? If so, would that imply that the malfunction (amber level) does not pose significant risk, certainly not sufficient for rejecting at high speed?
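For illustration only (the threshold, persistence time and inhibit logic below are invented; deciding those values is exactly where the hard engineering lies), such a comparator reduces to very little code:

def speed_comparator(ias_1_kt, ias_2_kt, disagree_s, dt_s, inhibited=False):
    """Toy ADC 'SPEED' comparator: alert when the two primary airspeed
    sources disagree by more than a threshold for longer than a
    persistence time, unless the alert is inhibited (e.g. during the
    take-off roll). All values invented for illustration."""
    SPLIT_LIMIT_KT = 10.0  # allowed disagreement between sources
    PERSIST_S = 5.0        # how long the split must last before alerting
    if inhibited or abs(ias_1_kt - ias_2_kt) <= SPLIT_LIMIT_KT:
        return None, 0.0   # agreement (or inhibit): reset the timer
    disagree_s += dt_s     # caller feeds the timer back in each cycle
    alert = "SPEED DISAGREE" if disagree_s >= PERSIST_S else None
    return alert, disagree_s

The code is trivial; the design questions (what limit, what persistence, what is inhibited and when) are the risk assessment discussed above.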

Operators may still call 80kts or similar as an indication of the takeoff progress, or because it could reduce the PF task of glancing into the flight deck.
Or is the call an acceleration check? If so, how would such a system be used, and what action might ensue? (Acceleration requires speed as a function of time, and humans are poor timekeepers.) Speed trend is a form of acceleration (check the exact computation and smoothing), but do the crew know what specific value is required for each take-off? Perhaps they are only familiar with a range of values (approximations) gained from experience.
Most aircraft have a designated engine thrust parameter, if that value is set and maintained then the take-off thrust is assured (providing the value has been calculated correctly). A reduction in the thrust parameter should be part of the engine failure process; If engine failed, Then …
Keep SOPs simple, practical, meaningful.

Most of the above is just my view of what you posted, but skewed by technology (automation, computers in the cockpit).
With increasing use of technology there is opportunity for operators to mis-judge the effects of ‘change’, to carry on using the same old procedures (complacency), or unwittingly add unnecessary complexity – ‘because it seems like a good idea’ – “it improves safety”.
Is 'use it because it's there' a failed application of CRM? See the adjacent thread.

Unfortunately many of the technology inspired changes, particularly in SOPs involve weak or inaccurate risk assessment which may not improve safety. Technology is not the cause of this, it’s human judgement; and judgement is part of ‘airmanship’, except in this instance it should be exercised by management and regulators, who have to apply professionalism.

DozyWannabe
6th Jul 2011, 16:14
So complex that in the case of 447, if a question on the operation of the FBW system is asked 2 years after a ? 6 minute ? disaster window, we still get conflicting answers from 'experts'.

I'm not so sure of that - what you're getting is a combination of people with varying amounts of knowledge: in my case, a reasonable idea of the design philosophy and some knowledge (by no means complete) of the logic trees and reliability/testing phases involved; current and former Airbus FBW pilots in the form of PJ2 and Chris Scott, among others; those who are pilots but don't necessarily know the systems yet have their opinions anyway; and finally those who we know well have a major axe to grind with Airbus and are deliberately muddying the waters like they always do when the subject comes up.

In terms of the current Airbus pilots in particular, you have speculation based on knowledge which is sound but may not be current - notably none of them has suggested that the pilots in the case you're referring to were confused by laws, displays or ergonomics. One current pilot is fascinated with the possibility of Byzantine software failure (which I'm not discounting out of hand but suspect is unlikely). If you read the threads however, the ones claiming confusion on the flight deck and slating the systems design are all from the latter two camps.

Just thought I'd better clear that up - on with the discussion!

safetypee
7th Jul 2011, 01:25
BOAC has a problem “How to live in an unfathomable world”, as we all do according to New Scientist, 17 May 2011. (www.newscientist.com/article/mg21028127.100-o)

“… opposing positions are predictable, but they are also incoherent, unintelligible and entirely unhelpful in navigating the complexities of our technological age.”

The gist of the New Scientist article is that we fail to distinguish between various levels of technology.
Level 1 is simply functional, level 2 is part of a network with increasing complexity, and level 3 is a highly complex system with adaptive subsystems and human interaction, which we cannot fully understand. Level 3 systems are beyond our cognitive abilities.

The problem is that we tend to focus on levels 1 and 2 because we can understand and assess them, and manage their complexity. It's our expectation that all technology be like this - that we remain in control - except that in reality, at level 3, we are not.

“Level 3 systems whose implications you cannot fathom.”

“We are not the ‘knowledge society’; that's Level 1. We are in fact an ignorance society, continually creating more and more ignorance as we busily expand the complexity of the anthropogenic Earth. But our ignorance is not a 'problem' with a 'solution': it is inherent in the techno-human condition.”

“The question now is how to enable rational and ethical behaviour in a world too complex for applied rationality, how to make our ignorance an opportunity for continual learning and adjustment.
This necessary evolution does not demand radical changes in human behaviour and institutions, but the opposite: a courageous realisation that the condition we are always trying to escape - of ignorance and disagreement about the consequences of our actions - is in fact the source of the imagination and agility necessary to act wisely in the Level 3 world.”

Take care not to interpret the final quote out of context;

“… that to participate ethically, rationally and responsibly in the world we are creating together, we must accept fundamental cognitive dissonance as integral to the techno-human condition. What we believe most deeply, we must distrust most strongly.”

IMHO this is not the distrust of technology / automation, it’s about how we should trust/distrust what we feel about it, how technology can be used, and what can be expected with human interaction. We need to be a learning society, except in this instance there is a limit to our understanding, and we need “agility necessary to act wisely in the Level 3 world”.

We have to accept that we may never understand aspects of ‘level 3’; complex technical systems in a vast operational environment, with human interaction, such as AF 447.

BOAC
7th Jul 2011, 07:40
BOAC has a problem - phew - someone to talk to......:)

Yes, in essence a good summary, but it is missing 'level 4'. The result of our acquiescing to 'level 3' leads to the age old question of who 'supervises the supervisors', does it not? We 'learn' to live with a complex system we do not really understand - where are the 'long-stops' on this? Particularly in aviation, we surely need to ensure that this complex and almost unfathomable sequence of bits and bytes and failure modes etc etc is 'fit for purpose', at least for the time being, until we have truly automated systems.

That takes us to level 4, where AI rules. Therein is a dark pit. Let's hope beta testing of level 4 is VERY thorough.

The article says - far more eloquently than I can -
"how to make our ignorance an opportunity for continual learning and adjustment.
This necessary evolution does not demand radical changes in human behaviour and institutions, but the opposite: a courageous realisation that the condition we are always trying to escape - of ignorance and disagreement about the consequences of our actions - is in fact the source of the imagination and agility necessary to act wisely in the Level 3 world.”

which in my crude way was a call for a major review of the way we teach it -
a courageous realisation.

safetypee
7th Jul 2011, 18:11
BOAC, - level 4, I don’t think so.
I interpreted level 3 as including the deeper ‘bits and bytes’.
I abbreviated the quote “Level 3 systems whose implications you cannot fathom”, which continues … “With input from tablet computers, cameraphones and walls of dancing video, and with much of your memory outsourced to Google and your social relations to Face-book, you now embody the accelerating charge of the Five Horsemen of converging technology - nanotechnology, biotechnology, robotics, information and communication technology, and applied cognitive science – whose cumulative potency will transform the human-Earth system in ways that are impossible to predict.”

Re: who 'supervises the supervisors' - again, the article implies that this is up to us; apart from a deity, there is no one else. We, humans, have created this ‘mess’ and thus, with the necessary courageous realisation, have to ‘self-police’ the situation.

A later quote explains this in part –

“We have to become a lot smarter in moving ourselves and our institutions of learning and innovation, of political and economic decision-making, out of their Level I playrooms. This transition will require us to increase the diversity of world views involved in creating and assessing our technological activities. It asks us to create more richly imagined futures, seeded with more potential choices, so that we have improved opportunities to learn from and respond to the choices we are making.”

I am not sure what aviation might pick out of that, but IMHO, part of the realization must include ‘transition’, that the industry is changing; and ‘learning’, if not from the very rare level 3 accidents, then from everyday behavior – how humans successfully manage these complex technological systems, in complex operational environments, with normal human interaction.
Aviation, with modern aircraft, is a very safe form of transport.


… we have created a ‘mess’. Perhaps the following is an appropriate summary:-
A difference between a difficulty (level 1 & 2) and a mess (level 3) is that when the problem is a difficulty, an individual claiming to have the solution is an asset, but when the problem is a mess that individual is usually a large part of the problem!
Paraphrased from Systems Failure, J Chapman. (www.demos.co.uk/files/systemfailure2.pdf)

syseng68k
7th Jul 2011, 21:56
PBL

Have just finished working through this thread and, as an engineer, found your posts most interesting, though you obviously don't suffer fools gladly. A certain arrogance is normal in some professions and I don't have any problem with that, nor am I offended, so let's not have this degenerate into ad hominem territory. Please, let's have more like #127.

Even if the systems have a failure rate of only 1 in 10e5 hours, they are arguably more consistent than the human souls that drive them. The problem is that the system, collectively, does not degrade anything like gracefully enough at the extreme ends of its capability in terms of flight control. This is not the same thing as system failure due to, e.g., a software bug; it is a limit in the capabilities of the system as designed. Despite the complexity, there seems to be no overall coordinating intelligence providing a big-picture monitoring view at all times. They say a picture is worth a thousand words. Why? Because a picture is effectively parallel-processed by the brain, while reading text or scanning instruments is serial and it takes much longer to assimilate the meaning. Trying to diagnose problems by wading through pages of error messages, and/or getting out handbooks, finding the right page, ad nauseam, takes far too much time in an emergency. There just has to be a better way. In some ways, modern a/c are quite primitive, despite all the complexity and shiny paintwork.

There should be more than enough data running around the system as a whole to enable a monitoring processor / subsystem to spot trends and provide a continuous assessment of current state and developing situations. If the crew choose to ignore this, that's another issue, but system failure in such a way as to produce ambiguous information does nothing to inspire confidence in those systems and is arguably more dangerous than having no data at all.
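As a crude sketch of what I mean (median voting across redundant sources plus a rate-of-change sanity check; every threshold here is plucked from the air):

from statistics import median

def monitor(samples_kt, prev_voted_kt, dt_s,
            split_limit_kt=10.0, max_rate_kt_s=10.0):
    """Toy monitoring layer over three redundant airspeed sources.
    Votes by taking the median, flags any source straying too far from
    the vote, and flags a voted value whose rate of change is
    physically implausible. All thresholds invented."""
    voted = median(samples_kt)
    suspect = [i for i, s in enumerate(samples_kt)
               if abs(s - voted) > split_limit_kt]
    rate_plausible = abs(voted - prev_voted_kt) / dt_s <= max_rate_kt_s
    return voted, suspect, rate_plausible

# e.g. one pitot icing up: the sources read 270, 268 and 190 kt
print(monitor([270.0, 268.0, 190.0], prev_voted_kt=269.0, dt_s=1.0))
# -> (268.0, [2], True): the vote survives, source 2 flagged as suspect

Real voting/monitoring logic is vastly more involved than this, of course - the point is only that the data needed to do it is already on the buses.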

More R&D, fresh thinking and intelligence are needed. Perhaps a second revolution, as the original AB FBW concept was in its time...

Regards,

Chris

MountainBear
8th Jul 2011, 02:35
There have been some interesting posts made here while I have been off rattling some cages in another thread.

To understand risk, use your imagination

Imagination is just as fallible as thinking, however. It is no panacea.

This transition will require us to increase the diversity of world views involved in creating and assessing our technological activities. This made me laugh. How does the mere increase in diversity increase safety? Not all ideas are created equal, and not all probabilities are equally likely. Diversity is unhelpful if it leads us down blind alleys and over steep cliffs. Shuttling the onus from thinking to imagination to values, all the while looking for the silver bullet that will solve the problem of human fallibility, is nothing but an academic shell game unworthy of honorable men.

My honest disagreement with PBL and others stems from my firm conviction that there are some events we can never predict and thus there is no rational way to quantify them; when we hear phrases like "the odds of that are 1 in 10e5 hours" we need to treat this as a best guess and not anything certain.

The follow-up point is that human beings generally do a bad job of estimating odds, and the more unlikely an event, the worse we are at estimating it. Quantifying events with numbers often gives us a false sense of security. Once we put a number to it, we think we understand it and thus feel we control it. Until it all falls apart... then the statistician runs into a corner and says, "well, don't blame me if the one-in-a-million event happened on your watch. It's not my problem you were unlucky."


Hand flying specifically doesn't just improve one area of one's ability; it attunes the pilot to the nature of the environment he is operating in

This is true. The problem is that with airplanes, the mistakes are often costly. That's the motivation behind my posts in the Ryanair thread: experience is costly. Human learning is costly. The question then becomes at what point does the cost of the experience become more than the flying public will bear and it's simply cheaper to automate the flight deck and get rid of the pilots entirely.

People like to pretend they know things when they really don't, and they like to pretend things are free when they are not. All it takes is someone to put his nose up in the air, stick a number on the problem, and put his hand in the other guy's pocket, and the rabble in the crowd will give him a cheer. :mad:

syseng68k
8th Jul 2011, 13:28
MountainBear, #178


All it takes is someone to put his nose up in the air, stick a number on the problem, and put his hand in the other guy's pocket, and the rabble in the crowd will give him a cheer.
That's the human condition in the 21st century. The obsession with putting numbers on and compartmentalising everything is a sickness of the modern age, and owes everything to the age of enlightenment, when man started to discard religion and superstition in favour of science and the classification of everything. As you quite rightly say, once something has been quantified, everyone can go away happy in the knowledge that due diligence has been satisfied, even though no one but specialists in the field understand what the numbers actually mean. In some ways it's all gone too far, but it's not unique to aviation.

Having said that, it's often the case that the only way to get an indication that there has been an improvement in any process is to put numbers on things via analytical methods. In aviation, as in things like the climate debate, the change may be so small that it's down in the noise and difficult to measure reliably anyway. Even so, the effort is worthwhile if progress is made. The 1 in 10e5 will be a statistical value based on MTBF figures (also statistical) for individual components, updated with data from in-service components over a multi-year timescale. Obviously, the value doesn't mean that there will be no failures until 10e5 hours; that single failure could come in the next 5 minutes. But the figures are usually very conservative, and real-world kit is often far more reliable than the figures might suggest.
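For anyone who wants to see what a 1-in-10e5-hours figure actually implies, the standard constant-failure-rate arithmetic is one line (a sketch, assuming the usual exponential model; the numbers are purely illustrative):

import math

MTBF_H = 1e5  # the oft-quoted 1-in-10e5-hours figure

def p_failure_within(t_h, mtbf_h=MTBF_H):
    """Probability of at least one failure within t hours, assuming a
    constant failure rate (exponential model) - textbook arithmetic."""
    return 1.0 - math.exp(-t_h / mtbf_h)

print(f"{p_failure_within(10):.4%}")   # one 10 h flight: ~0.0100%
print(f"{p_failure_within(5e4):.1%}")  # 50,000 h of operation: ~39.3%

Which is exactly the point: the figure says nothing about when the failure comes, only how often on average over a very long run.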

Any engineer will tell you that it's not possible to make any system 100% reliable. In many areas it's a devil's compromise between cost, safety and performance. The graph of cost vs improved safety probably looks something like an exponential decay: you get vast improvement at the start of the curve, but beyond a certain point you could spend another 10x the present cost without any serious effect at all. I suspect we are well down that curve in civil aviation, and most likely need the analytical methods to detect anything at all.

An activity where you put several hundred people into an aluminium can, together with tens of tons of fuel, then fly it at 35k feet, will always be high risk, irrespective of how reassuring the numbers are. You are also correct in saying that learning is high cost, though excessive timidity in terms of risk-taking can be a serious bar to progress. If you look at the early space program in the US in the 60s, a high degree of risk was accepted to attain great goals, and that was, imho, an example of the highest aspirations of mankind, even though the initial driver was arguably less than altruistic. Take big risks, make great progress. If they had had the health and safety culture that exists now, where nothing moves because of multi-layered a** covering, the program would never have got off the ground.


The question then becomes at what point does the cost of the experience become more than the flying public will bear and it's simply cheaper to automate the flight deck and get rid of the pilots entirely.
I don't see the connection here. It seems as though you think that the pilots are the problem, when I would suggest that the systems are nothing like smart enough in terms of the way they interface with the pilot, nor in the way that they degrade when expected to handle something outside a strictly defined set of limits. From comms / information theory, you achieve the lowest error rate when you match the transmitting and receiving ends and use a low-noise channel. Put simply, if you want to talk to humans, the onus is on the system to talk the correct language, rather than, as at present, the human being expected to adapt to the inadequacies and rigidity of the system.

Fully automated flight decks are, imho, a fantasy and will never happen until computing has at least the same reasoning and abstract problem-solving ability as a human brain trained in an activity and augmented by years of experience. A lot of human processing is analog and driven by subconscious responses, even if it is learned. Imagine trying to model all that in a computer. Having worked in computing and electronics for a lifetime, I can tell you computers are not even close yet, thankfully...

Regards,

Chris

MountainBear
8th Jul 2011, 22:30
I don't see the connection here. It seems as though you think that the pilots are the problem,

I don't know if the pilots are the problem or not; I think it's too soon to tell. We haven't given full automation an opportunity to produce results.

when I would suggest that the systems are nothing like smart enough in terms of the way they interface with the pilot


That leads to a circular argument. They are 'nothing like smart enough' because they haven't been programmed to be. They haven't been programmed to be precisely because the pilot is there.

It's unfair to blame the machine, or the programmers behind the machine, for the inadequacy of the design document they were handed. The human/machine interface only becomes an issue when you assume that a human being must be on the flight deck. Take away that design requirement and the design of a FBW system is going to look a lot different than it does now.

A37575
9th Jul 2011, 13:00
I would be very surprised if Boeing recommended (officially) a ground speed (GS) check during take-off.

This is an airmanship check. The Boeing 737 FCTM in fact does mention the subject under a section on unreliable airspeed where it states that "ground speed information is available from the FMC and on the instrument displays (as installed). These indications can be used as a cross-check".

With ground speed read-outs in full view of both pilots during take off it could be argued in court that the crew would be negligent not to use such a valuable resource if available.

alf5071h
9th Jul 2011, 20:46
A37575, “With ground speed read-outs in full view …”
However, the prosecuting ‘Airmanship’ counsel would argue that if the crew knew that the GS value displayed was delayed in processing, plus a few seconds’ smoothing, with potential errors due to ‘your’ wind evaluation and addition/subtraction, and that it requires looking at the display - head down during take-off - … then what’s the legal score?

It’s a fatuous debate. I am sure that we would agree that the airmanship issue is primarily about judgement; but judgement in aviation has to be supported by knowledge, and if that knowledge is inaccurate, misused, or absent when essential, the resultant decision might carry greater risk than in a less complex situation without that information.

This is part of the debate on automation. Recent posts suggest that it is impossible for the human to ‘know’ (understand) everything about highly complex systems (technology, human, and environment). Thus operations have to be conducted with a relatively lower level of knowledge. However in many instances this may represent a lower risk than with older systems due to greater data accuracy, clarity of display, and lower technology failure rates.

I have argued that if the overall risk has not been lowered, either due to the way in which technology is being used (erroneous organisational requirement or individual choice), or increasing operational complexity, then the situation should be changed – change the task.

Trying to use too much or inappropriate data (all available resources) because of complexity-induced knowledge deficiencies is just as ‘culpable’ (or more so) as overlooking/mistaking ‘good’ data. Both of these (opposing) views contain facets of human fallibility (use everything because ‘I know better’, vs mistakes in perception).
I would argue that the industry has to accept that the greater use of technology requires us to change the way we operate, at least to think about it – because the humans are at or beyond the limit of capability.
Adding more task to an already over tasked human is not a good choice, particularly when airmanship is a strenuous mental task.

Tee Emm
10th Jul 2011, 06:20
Adding more task to an already over tasked human is not a good choice, particularly when airmanship is a strenuous mental task.

Your point is well made - except for your statement that airmanship is a strenuous mental task. Presumably you jest, of course? The design of the PFD in sophisticated aircraft is such that all the information needed to fly the aircraft on instruments is in the PFD. Combined with that information being fed into the flight directors, in theory anyway, the instrument scan can be reduced to one flight instrument - the PFD.

This has always been the problem when for regulatory reasons raw data is required to be tested during proficiency instrument rating tests. Pilots who are rusty at raw data instrument flying are like this because most of their flying on jet transports is gazing at the PFD with tunnel-vision.

Most PFDs have a ground-speed read-out, and it should not exactly overload the crew to note that figure during the 80-knot call-out, which is done from the IAS on the PFD anyway.

syseng68k
10th Jul 2011, 14:00
MountainBear, #180


That leads to a circular argument. They are 'nothing like smart enough' because they haven't been programmed to be. They haven't been programmed to be precisely because the pilot is there.
My response to that, to break the loop, is that they haven't been programmed to be because the technology to do it safely doesn't exist.

It's no problem to program computers to take off from London, fly to New York and land. The problem is how to handle the probably millions of failure modes, their combinations and sequences, that could interfere with the task. That is without considering environmental factors such as weather and other unpredictable events. Computers are pretty dumb, in that they can only be programmed to process a set of rules within strictly defined limits. Outside those limits, the machine has no code to execute, no algorithm available to process the data, and can only generate a failure message; "does not compute" applies here. It's very difficult, if not impossible, to program a machine to handle chaos.

You might then argue that we can throw AI techniques at the problem, but the response would be that AI technology is nowhere near mature enough to be given responsibility for a high-risk activity like flying several hundred souls at 35k feet under full automatic control. It possibly never will be, despite any claims to the contrary by optimistic technologists.


It's unfair to blame the machine, or the programmers behind the machine, for the inadequacy of the design document they were handed. The human/machine interface only becomes an issue when you assume that a human being must be on the flight deck. Take away that design requirement and the design of a FBW system is going to look a lot different than it does now.
I don't think of it in terms of "blame" and have never subscribed to blame culture. I'm sure that the systems have been tested rigorously to the original specification, and that all involved were altruistic and dedicated in their intent. All I'm arguing for is a much more holistic (sorry about that word) view of the whole system, in the same way that intuition and instinct, as well as learning, all contribute to the merging of man and machine in m/cycle riding, car driving, light a/c flying and more. The exact opposite of current civil aviation practice, where crew seem to be trained and encouraged to fly the computers, not the aircraft.

As for the flight deck, if it ever got to the stage that the crew became redundant, there would most likely be no flight deck at all – just more racks of avionics humming away in the background. But what an opportunity for the beancounters: not only do you save the space and weight of the kit on the flight deck, you dispense with the services of expensive and sometimes unpredictable crew as well. A win-win situation all round, I'm sure :rolleyes:...

Regards,

Chris

MountainBear
10th Jul 2011, 19:19
My response to that, to break the loop, is that they haven't been programmed to be because the technology to do it safely doesn't exist.

Necessity is the mother of invention :-)

My point is that fully automated flight decks deserve the opportunity. I cannot emphasise that word enough. I make no predictions or promises, and I hold no bias one way or the other. I hold that position because I believe two things to be true:

(1) Automation of flight decks has historically proven safer than human beings alone. Unless one is willing to put it all down to a grand coincidence, I know of no other explanation for the majority of the decrease in accident risk over the 20th century.

(2) That the human population continues to expand. That we continue to put more and more planes into the air, ever bigger planes filled with more and more people. That the margins demanded in terms of aircraft separation continue to shrink. That the system works only because it is a system and not people flying randomly.

So for me, computers and computer programmers – who are already doing an excellent job with drones in the military – deserve the opportunity to take it to the next level. Will they succeed? I do not know. But I don't think that pilot training, and ever more training, is going to be adequate to the future demands of a system meeting the needs of 20 billion people.

From weaving looms to cars to computers, automation has been resisted at every step by certain populations, and automation in airplanes has been no different. Even today, minority groups like the Amish continue to resist modern technology; that is their prerogative. Yet I don't think most people desire to go back to the horse-and-buggy days, occasional fits of pastoral romanticism aside.

alf5071h
11th Jul 2011, 18:33
TM #183 :ok:
We might debate whether mental tasks are strenuous or not; perhaps your view is that of one with expertise, but we must not overlook the average individual, or even the experienced individual in demanding circumstances.
I use Kern's definition of airmanship, which includes many mental tasks – discipline, skill, knowledge, awareness, judgement.
Thus I argue strenuous, but will accept 'relative' in relation to experience / situation.

Lonewolf_50
11th Jul 2011, 19:30
alf5071h
I would not disagree that unusual attitude recovery training is an important subject, but what is the exact relevance to automation / computers?
Relevance is that if your computer flies and you don't, you get rusty. More importantly, your scan gets rusty, so that when you need it you have to play catch-up, or it is too slow for conditions.

It may be more beneficial to look at the reasons for the loss of control.
If the airplane can do it, you need to be trained to deal with it. Arguing perfect prevention is a good way to fill up graves.
If there have been system failures, then why did they fail, and how did the crew manage these failures, given that in most, if not all, circumstances the aircraft is still flyable – rule 1, fly the aircraft.
Rule one requires practice and proficiency. If you don't practice, you won't remain proficient.

‘Loss of control’ accidents without system failure appear to have elements of non normal operation, surprise, and hazards of physiological disorientation – these are not failures of technology or the aircraft.
They are a failure of the man-machine interface. I do not find it useful to pretend they can be separated.
Thus, the higher priority for training might be related to how pilots manage system failures, how they fly an aircraft in a degraded state, and how they manage themselves when dealing with the unexpected or when challenged by weakness of human physiology – always trust the instruments.
Trust the instruments – and know when the instruments are working, or aren't. I completely agree that training in degraded modes is critical for safe flying ... since eventually any machine will break or, as a computer is the topic here, have a small hitch and need at the least to be cycled off and on, if not reprogrammed back at base once safely on the ground.
It would better to avoid the hazardous situations, rather than relying on recovering from an upset, if indeed it is recognized / recognizable.
Presuming pure and perfect prevention fills graves. Yes, work on airmanship and judgement to improve hazard prevention, but if the plane can do it, you need to know how to fly out of it – and to practice it.

In re the RTO and an FO whose attention is asserted as wandering ...
This is where teaching condition-based scan patterns is useful. Teach and train particular scan patterns, and scan variations, tailored to particular critical conditions. That allows you to pick up critical performance data in a timely fashion.
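
(If it helps to see how mechanical this can be, a condition-based scan is really just data. The conditions and instrument lists below are invented placeholders, not a recommended scan:

    # Invented illustration: scan priorities keyed to a critical condition.
    SCAN_PATTERNS = {
        "takeoff_roll":   ["airspeed", "engine_N1", "centreline"],
        "rto_decision":   ["airspeed", "runway_remaining", "engine_instruments"],
        "final_approach": ["airspeed", "glidepath", "attitude", "power"],
    }

    def scan_for(condition):
        # Unknown condition: fall back to a generic full-panel scan.
        return SCAN_PATTERNS.get(condition, ["attitude", "airspeed", "altitude", "heading"])

The training job is then drilling the right pattern until the lookup happens in the pilot's head, not on paper.)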

sys: (chris?)

They say a picture is worth a thousand words. Why? Because a picture is effectively parallel-processed by the brain, while reading text or scanning instruments is serial, and it takes much longer to assimilate the meaning.
Possibly why those suites of nice round gauges worked so well for so long.

They painted a picture.
Trying to diagnose problems by wading through pages of error messages, and/or getting out handbooks and finding the right page, ad nauseam, takes far too much time in an emergency. There just has to be a better way. In some ways modern aircraft are quite primitive, despite all the complexity and shiny paintwork.
In a multi-place aircraft, the trick to all that is: pilot flying, FLY; pilot not flying, work to filter out the non-essential from the essential.

That is another area in this era of computers in the cockpit that absolutely must have emphasis in training. (Sim seems a great place to practice such things.)
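
On the "pages of error messages" point above, the machine could at least help with the filtering. A crude sketch of severity-ranked message triage – the messages, severities and inhibit flag are invented for illustration, not any real EICAS/ECAM logic:

    # Crude sketch: show the crew only the few messages that matter most.
    from dataclasses import dataclass

    @dataclass
    class CockpitMessage:
        text: str
        severity: int            # 3 warning, 2 caution, 1 advisory
        inhibited: bool = False  # e.g. suppressed during a critical phase

    def triage(messages, max_shown=5):
        relevant = [m for m in messages if not m.inhibited]
        relevant.sort(key=lambda m: m.severity, reverse=True)
        return relevant[:max_shown]

    msgs = [CockpitMessage("IAS DISAGREE", 3),
            CockpitMessage("GALLEY FAN FAIL", 1),
            CockpitMessage("AP OFF", 3),
            CockpitMessage("MAINT LOG ENTRY DUE", 1, inhibited=True)]
    for m in triage(msgs):
        print(m.severity, m.text)

The hard part, of course, is getting the severities and inhibit rules right in advance – which is exactly the design-document problem discussed above.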

Back to BOAC's original premise:

When I flew T-28s, I had to know their systems. When I flew Hueys, I had to know their systems. They were similar enough in complexity, with the added worry of hydraulics in the latter. Avionics were mostly a wash.

When I flew SH-60s, I had to know a HELL of a lot more stuff, since it had more systems. So the training was more intense, and the work I had to do to stay proficient in my aircraft was considerably more detailed. (I sometimes hated knowing that I knew what a Mux De Mux was, but know it I did, since I had to talk to technicians when it went south.)

You have to know your systems, and I think the training and professional end of this places a serious burden on leadership in the industry: on pilots, on training departments, and on the corporate management sector.

How do you encourage and incite active curiosity among your pilot workforce about diving into any and every detail of how the bird works? This sort of enthusiasm can't be limited to the pilots; it needs to permeate the entire culture of your airline, because now and again you'll find things out that you want to bring to the attention of the supplier, and of others who fly your bird.

That last is a cultural imperative that I think increases in gradient as the computer era puts its stamp more firmly on flying.

alf5071h
12th Jul 2011, 01:47
Lonewolf50, slick quips and quotes; good enough to debate. :ok:
But seriously;

Yes, skills degrade without practice, but the computers don’t fly upset recoveries (perhaps they should – and ASI failures too), so what skills are degrading due to automation/computers, and are these relevant to system failures (man and machine) in current operations?

Prevention is a good place to start, but nothing is perfect. Safety requires a combined effort; thus recovery (action during, and after, the ‘upset’) is also necessary.

Failures of the man-machine interface, I would use the generic ‘malfunction’ not just failure. The problem-space consists of a combination of technology, human, and situation.


Generally those who argue for more training align with a ‘blame and train’ culture. However, your views appear to represent an alternative of addressing the man / machine aspects, particularly the man.

Well-reasoned arguments indicate that the pilot can no longer be expected to fully understand the technical system, nor the designers to accommodate the irrationalities of human behaviour or every combination of technical failures; and neither can understand the entirety of complex situations.

In operation, ‘malfunction’ happens and we expect the human (best placed, and probably best equipped – brain power) to solve the problem, which primarily is to fly the aircraft. But this is not normal flying, not a normal situation; indeed it is a situation which some humans may have been unable or unwilling to foresee.
At great cost we could train for a wide range of scenarios with extensive knowledge requirements, yet never be sure that every situation had been considered or that the knowledge would be recalled.

Some areas of the industry might consider themselves safe enough: the trade-off between safety and economics is balanced. Thus their task is to maintain the status quo; this could be an aspect of technological complacency, or the reality of a sufficiently safe system (public perception).

If the latter is true, then the primary safety task might be to avoid the ‘big one’. Apparently what we don’t know is whether an accident will be the ‘big one’ or just an accumulation of many ‘small ones’, and whether either situation has automation/technology as a root cause.
More likely, as with previous accidents, it is a combination of man, machine, and situation – complexity, which we are ill equipped to understand.

BOAC
12th Jul 2011, 08:10
Overall, most here seem to be dancing around 'the head of my pin' even if sometimes in opposite directions.

I come back to post #1 – as long as there is going to be a 'human' in place in the cockpit, we must ensure that when the 'perfect protection' fails – as it inevitably will – the human is:

1) Left with basic information to enable a reasonably rapid analysis of the situation and an equally rapid 'plan of action' to be formed

2) Equipped with the basic skills to execute the 'plan'

3) Given flight controls that will 'execute' his/her demands without interference

4) Given the opportunity (and training) to process 1) without feeling (or being) 'pressured' into working through some complex electronic jungle of 'information/action' BEFORE it can be done.

Returning (I know!) to the trigger for this thread, 447: why could the whole shooting match not just be dropped? Autotrim into a stall – no; if the PF finds there is not enough elevator authority, move the trim wheel. Voting out a series of confusing (to the computers, certainly) input conflicts – why not an earlier default to "Dave, I don't really understand all this. Here is a basic aeroplane"?
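
To make the 'voting' idea concrete, a deliberately over-simplified sketch – the threshold and the fallback behaviour here are invented, and no real FBW aircraft votes this crudely:

    # Over-simplified sensor voting with an early fallback to a basic aeroplane.
    def vote_airspeed(a, b, c, max_spread=15.0):
        """Return the median if the three sources broadly agree, else None."""
        readings = sorted([a, b, c])
        if readings[2] - readings[0] <= max_spread:
            return readings[1]        # median of three agreeing sources
        return None                   # sources conflict - don't guess

    ias = vote_airspeed(272.0, 268.0, 143.0)   # say, one probe iced over
    if ias is None:
        # Rather than stepping through degraded sub-modes, drop straight to
        # attitude + power, direct control, no protections.
        print("Speed vote failed - here is a basic aeroplane")

The point of the sketch is the None branch: when the sources disagree, stop being clever early, rather than after a cascade of reversions.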

When it all goes 'south' we do not need any more 'bells and whistles'. The crew were not trying to land on another planet or achieve earth-orbit docking with the ISS. They needed to stabilise, descend and turn out of the weather – a fairly basic (yes, challenging) flying task to the 'older generation'. They did not achieve this. Industry needs to address this, be it lack of training, the wrong training, the wrong 'automatics', the wrong 'mind-set' or whatever. Whatever it was, to find a crew apparently 'looking at' a high nose, high power and a huge rate of descent over several minutes and NOT working out that the AB-taught recovery is not working needs to be examined, and examined thoroughly – not just written off as 'pilot error'. After all, how many times have my 'gums' beaten around the fact that a high rate of descent is one of the 'symptoms' of a stall, both on the blackboard and in the cockpit? Having the 'fact' that you cannot stall an aircraft drummed into you will probably suppress any sense of recognition that you just have.

I do not see progress here. I see 'defensive positions' at many points and an assault on the child who shouts about the King's clothes. Where does the responsibility lie? On another thread I was accused of "You have been busy ripping software engineers a new set of orifices it seems", whatever that bizarre fetish might be. I am not 'anti' the software. I am not 'anti' the writers. They will write, and write as well as humanly possible, within their brief and whatever 'understanding' they might have of the flying 'task'. We have HR and accountants taking a lead role in how we train, practise and operate. The 'business' pressures are immense, and often 'blaming' the crew is expedient. Is the 'long-stop' perhaps the test pilots? Is it amongst a 'stronger' training community? Are they able to say 'hang on'? Where is the 'moderation/reality check' to be?

safetypee
13th Jul 2011, 01:47
BOAC, “I am not 'anti' the writers. They will write, and write as well as humanly possible within their brief and any 'understanding' they might have of the flying 'task'.”

But if the writers’ understanding came from the piloting community, then aren’t we the cause of your concerns? Did we think that it wouldn’t happen, couldn’t happen – that we can understand system failures, that we avoid Cbs by a large margin, and that we can hand-fly pitch/power with an airspeed failure?
More likely these concerns represent the lack of understanding about AF447 – yes, wait for the report, or at least the data.

Alternatively, such concerns only arise with hindsight: such situations were not, or even could not be, conceived beforehand. Recall the speculation about events before AF447 – the other ‘successful’ events, possible regulatory assumptions about the relative ease of flight without airspeed, a short-duration malfunction which average crews could manage easily.

http://archlab.gmu.edu/people/rparasur/Documents/Auto5302008.pdf - another theoretical view for the mixing pot.

BOAC
13th Jul 2011, 07:17
SP - you did not address the rest of my last para in your 'quote'.

I'll TRY to answer your points:

But if the writer’s understanding came from the piloting community, then aren’t we the cause of your concerns? – quite possibly; see my last para.
Did we think that it wouldn’t happen, couldn’t happen, that we can understand system failures? – not in my case, anyway; I cannot speak for the rest.
We avoid Cbs by a large margin, and we can hand-fly pitch/power with an airspeed failure? – Yes, and?
More likely these concerns represent the lack of understanding about AF447; yes, wait for the report, or at least the data. – basically what I have been saying on the other threads, but you need to FORGET 447 and think 'big picture', please? 447 is just one small cameo. In my opinion it is the speed and direction the industry is moving relative to the 'progress' of human understanding.
Alternatively such concerns only arise with hindsight; such situations were not, or even could not be, conceived beforehand. – I do not agree; the 'writing on the wall' was there in many people's minds, and was reinforced by the absolute disbelief following the ?first? AB crash in service (Air India?) caused by mode confusion.

Lonewolf_50
14th Jul 2011, 19:55
Alf5071h
Failures of the man-machine interface, I would use the generic ‘malfunction’ not just failure. The problem-space consists of a combination of technology, human, and situation.
Agree, which points to an issue that was raised in another thread. Some of the pilots pointed out that a lot of their simulator training was spent doing regular procedures with all systems on. Of all the things to train, that's what one needs a simulator for least: each repetition of those evolutions that you undertake in the daily job reaffirms that pattern through repetition. Apparently most flights go along nicely with all systems working, thanks to the fairly high-reliability systems in place today.

So what do you do with that expensive sim time?

You do what you can't do in the aircraft with a load of paying customers: you set up situations that address man-machine interfaces in those situations where the crew have to make a difference. I'd say that's time better spent in the sim. (Personal bias, I must confess, given how I used to run training sim events all those years ago. Heh, I liked to see 'em sweat! :E )

If the computers make things work, you need to work out how they work, and how they don't work. That way, you get the most out of the computers, regardless of how many are, or are not, working.
Well-reasoned arguments indicate that the pilot can no longer be expected to fully understand the technical system, nor the designers to accommodate the irrationalities of human behaviour or every combination of technical failures; and neither can understand the entirety of complex situations.
I think you are selling pilots a bit short on that score. I do not believe that comprehension of how the bits and pieces fit together is beyond the average professional pilot. Education is a continuum. I firmly believe that pilots get significant job satisfaction from expanding their professional knowledge.

alf5071h
15th Jul 2011, 14:00
Lonewolf_50, :ok:
“So what do you do with that expensive sim time?”
As you indicate we spend a lot of time training specifics, but in complex ‘high tech’ situations we require greater generic skills, particularly those associated with situation assessment. Thus we require fewer action skills (many are still essential), and more thinking, pattern generating / matching skills.

Pilots have to be taught how to think in a high-tech world – the patterns will differ from those in basic training – and then to use and practise these skills in representative scenarios: not just the man/machine interface, but the man/situation interface.

I agree that we should not ‘sell the pilots short’. Humans are still a valuable resource in complex situations, but also the most prone to failure.
Perhaps my bias was from experience; like you I had the pleasure of flying the T28, but I’m not sure that I really understood all of it.

“If the computers make things work, you need to work out how they work,”
I am not convinced of the need for that, or the practicality of achieving it – generating appropriate knowledge and the reliable ability to recall it when needed.
The previously linked Systems Failure (www.demos.co.uk/files/systemfailure2.pdf) suggests considering complex systems at a higher level; e.g. autoland is computer controlled, but a pilot does not need to know if this involves dual-dual or dual-dissimilar software/hardware; what is important is fail passive vs fail op - ‘do something’ vs ‘do nothing’ (do less) in the event of failure. There has to be a balance between knowing what, and knowing how; technology has changed this balance, has the training changed too?
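
The pilot-level distinction can almost be written in one line. A toy sketch, with invented names, of the ‘do something’ vs ‘do nothing’ split:

    # Toy sketch: what fail-operational vs fail-passive means to the pilot.
    # The redundancy scheme behind it (dual-dual etc.) is deliberately hidden.
    def on_autoland_failure(capability):
        if capability == "fail-operational":
            return "continue the autoland on the remaining channels"
        if capability == "fail-passive":
            return "autopilot disconnects cleanly; crew lands or goes around"
        return "capability unknown - treat as a manual landing"

What matters to the crew is which string comes back, not how the channels are wired.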

Education, yes, but time and resources have to be balanced under the currently dominating economic pressure. Are we, the industry, safe enough, or are we blinded by the potential of automation? Only the next accident might determine that.

A37575
16th Jul 2011, 13:41
Education, yes, but time and resources have to be balanced under the currently dominating economic pressure

During a visual descent in the real aircraft, the crew were directed by ATC to join the circuit downwind. The F/O was PF. To the surprise of the captain and a qualified observer in the jump seat, the PF turned base far too soon, with the result that the aircraft was so high on final that a go-around had to be made.

Later the F/O explained that despite 1000 hours on type he had never done circuits and landings in the simulator, and had certainly never been given practice at entering the circuit on the downwind leg. He had done well over a hundred ILS approaches on the autopilot in the simulator, lots of one-engine-inop go-arounds, numerous multiple emergencies during LOFT exercises, no shortage of taxiing, and another hundred holding patterns on automatics and LNAV – but NEVER a circuit or a 35-knot crosswind landing.

All of these were regulatory box-ticking exercises during the cyclic training regime. And all he wanted to do was to increase his pure flying skills by manually flying a few circuits without any automatics, preferably in a crosswind. But the syllabus did not allow for that.

There surely must be a lesson here for the trainers, as this disregard of manual flying practice in the simulator is almost certainly widespread, in favour of heads-down button pressing and autopilot 'monitoring'.

safetypee
16th Jul 2011, 20:25
BOAC, #191
Re the last part of your paragraph – Where is the 'moderation/reality check' to be? There is little point in seeking additional responsibility without evidence of what the problem is – “dancing around 'the head of my pin'”.

If the piloting community are ‘a’ cause, and apart from you there are no great shouts for change, then is the industry happy with the current safety level? We still might be deluding ourselves – complacent, or overpowered by the bean counters.
In the first instance the moderation / reality check should be with the regulators, who have to balance the public (political) inputs with those of the industry, and of course the facts from accidents.
EASA safety review 2010 (www.easa.europa.eu/communications/docs/annual-safety-review/2010/EASA-Annual-Safety-Review-2010.pdf )– a good, safe year.
EASA action plan (www.easa.europa.eu/sms/docs/European%20Aviation%20Safety%20Plan%20%20(EASp)%202011-2014%20v1.2.pdf) - only tenuous links with automation (sect 5, automation policy, sect 6).

Avoiding Cbs by a large margin – do we? We may not avoid Cbs by a sufficient margin compared with previous operations with low-res radar etc. Investigation of incidents in conditions similar to AF447 indicated that some regional crews tend to cut the Cb miss distance quite fine. This appeared to be aided by the use of modern technology – high-definition radar and accurate autopilot-controlled flight track: we know where the storms are, where we are, and where we are going. Technological complacency; but do we know the significance of knowing, or the limitations of our knowledge?

The Big Picture: many LOC accidents.
How many were a direct result (prime contribution) of failed automation? – few, if any.
How many were due to the crew/automation interface? – superficially a significant number, and of these, speed awareness and trim contributed to many.
How many were due to disorientation, go-arounds, FBW vs ‘steam’ aircraft? – there is a mixture, but all in normal, non-emergency operations.

If these contributors are all clearly identified, then which is more significant?
My biased, non-statistical rough cut places the human, and only the human, in pole position.
This is not blame, but recognition of human limitations, although from a different viewpoint – that involving modern technology – but that may just be a part of evolution.

"tempora mutantur nos et mutamur in illis"

MountainBear
17th Jul 2011, 23:19
- basically what I have been saying on the other threads, but you need to FORGET 447 and think 'big picture', please? 447 is just one small cameo. In my opinion it is the speed and direction the industry is moving relative to the 'progress' of human understanding.

I don't think you're thinking big enough.

My issue with the whole training/complexity debate is that the debate tends to implicitly assume that the state of affairs we have at present will continue. It won't. Whether you call that 'progress' or use a more neutral term like 'evolution', technological change is a fact of life. That's a major reason why you can't rely on training to get you out of the difficulty: the training is always changing because the underlying technology is always changing, and so there is no fundamental store or bank of experience that one can rely on. The idea that you or I or any group of people can yell "stop, wait till we catch up" is just sheer folly. The captain with 30 years' experience is in some ways worse off than the new guy, because he's got the old technology cluttering his head and the new guy doesn't.

Mode confusion is just an interim problem. That doesn't mean it should be ignored while it exists. But the long-term trend is going to be the elimination of mode confusion by the elimination of the being that is confused: the human. In the short run, training probably is our best hope. But improvements in training are not a sustainable long-term model for continued improvements in airline safety over the next 30-50 years.

john_tullamarine
18th Jul 2011, 05:23
The idea that you or I or any group of people can yell "stop, wait till we catch up" is just sheer folly

That's a real worry to me.

A basic tenet of activity in many areas is the "knock it off" (or some similar catch cry) approach to discontinuing whatever it is that appears to be going off the rails.

Clearly, we can't adopt a stop-and-wait approach, but we certainly need the option of adopting some other technique to replace/supplement whatever may be causing us grief in the short term. For us old pharts, that might be a preference for the big O.F.F button, followed by getting things under control via a bit of pushing and pulling, and then the ON button to go back onto the automatics.

While I am quite happy with the idea that the Airbus machines enhance safety statistics overall, there remains that niggling disquiet if I can't find an OFF button to give me some chance in the event that the automatics simply give up. Caveat – I haven't flown any Airbus machines, so I am a tad in the dark as to the cockpit specifics.

senseofrelief
19th Jul 2011, 00:06
Noticed this thread is still alive. Now what exactly is being said? The point?

Every one of these posts seems to live and die within a rarefied CRM-type crew environment so far removed from the reality of the normal aviation business that, honestly, nothing here is worth commenting on.

Lonewolf_50
20th Jul 2011, 17:00
alf5071h

I appreciate that sim time costs money. I have some small experience in pilot training syllabus change and revision with the Navy. We were plagued by the know-nothings who would harp on about "replace aircraft hours with sim hours" without knowing what limitations sims have, or even which generation of sims we had been funded for.
(Grrrr, it still makes me mad, even now, how foolish this "guidance" was compared to the tools at hand. And then, if we wanted to buy new or upgrade the sims ... where is the money?) See also the thorny problem discussed in sim training for the stall, upset or spin case: can one afford to build the sim that can give you that training?

I suspect that the airline industry runs into similar institutional problems.

In re knowing your systems to the depth I advocate, versus a "need to know" level of training:

I cannot concur with knowing the systems only to the depth of one or two briefing slides. The ability to troubleshoot and work through a degraded mode requires both well-crafted SOPs and procedures (QRH/memory items/ECAM/what have you) and an understanding of what the system is doing as you turn various things on and off. You need depth of understanding.

A rough analogy is the understanding of how a car works when one has overhauled an engine and replaced a transmission,
versus
"get in and drive" level of knowledge typically resident in a motorist.

The former is often able to get more out of a car, or know what not to do with it, than the latter as things begin to go wrong.

As systems get more complex and interrelated, the pilots must be educated, or educate themselves, or both, on how these complex pieces interact with one another.

Any organization will want to standardize training to ensure that a certain minimum standard is achieved and maintained, and a predictable result attained. (Education in depth will also help aircrew interact with techs/maintenance, and thus reduce fault-isolation and remedy time cycles.)

The institution has to invest in the continuing education.

See John T"s comments about change. It is ever with us.

So too is the requirement, not option, of both education and training so that you get the most out of your system. <== That would seem to get a return on the bottom line, would it not, if only via cost avoidance?

(At this point, segue to FOQA and someone yelling about Six Sigma in pilot training, and I'll be riding off into the sunset. :) )

Final thought: the computer in the cockpit is like a firearm in the hands of the ordinary citizen – dangerous if you aren't well trained and educated in its use, a great asset if you are.