
Cockpits should be more user-friendly, says study


Self Loading Freight
8th Jan 2004, 08:54
Aeroplanes Would Be Safer If Cockpits Were More Human-friendly, Says New Study

Aircraft could achieve an even higher level of safety if cockpit designers took more of the psychological characteristics of pilots into account, according to researchers. Although the air accident rate has been steadily decreasing over recent decades, many modern aircraft have computerised control systems which are so complex that they over-tax the mental capabilities of even fully-trained pilots, say the researchers.

The team, from the University of Newcastle upon Tyne and the University of York, UK, report their findings in the January edition of the International Journal of Human-Computer Studies*.

They say that, during emergencies, pilots are overloaded with technical information which they are unable to process quickly enough. This could mean that wrong decisions are made at crucial moments – and these cannot be reversed.

Dr Denis Besnard, Dr Gordon Baxter and David Greathead analysed the disaster in which a British Midland aeroplane bound for Belfast crashed onto the M1 near the village of Kegworth in Leicestershire on January 8, 1989, killing 47.

(rest on http://www.sciencedaily.com/releases/2004/01/040107074159.htm )

----------------------------------------------------------

I've always wondered if glass cockpits spend too much time pretending to be string-driven, and whether the urge to keep cross-type training as simple as possible means that everyone's still stuck with the basic usability ideas of the 50s and 60s...

R

Phoenix_X
8th Jan 2004, 18:17
Though in my opinion he's right that some aspects of the human-machine interface need improvement, I must say that Kegworth was hardly a modern glass cockpit. It would be interesting to see what he'd write about the B777 or A320/30/40 cockpits.

alf5071h
8th Jan 2004, 19:54
The Times (7 Jan) also ran a link to the academic paper under the heading “Too much data put jets in danger” whereas the press release from Newcastle University opens with “Aircraft could achieve an even higher level of safety if cockpit designers took more of the psychological characteristics of pilots into account, …” Not to belittle the accuracy of The Times’ technical reporting …, but I will anyway.

The paper's authors actually argue (my italics): … that research in intelligent agents (computers) can enhance the reliability of human - machine systems by improving the human - machine interaction in general. Agents need (a) to provide support to help operators make the right decisions and take the appropriate actions, and (b) to act as barriers when they try to perform erroneous actions.
However, in presenting their case for intelligent systems to assist pilots in unusual emergency situations (with co-occurrences), they identify most of the attributes of a good monitoring pilot. Do the authors therefore seek to automate the crew? I doubt that, as they acknowledge that the crew has to retain overall responsibility as the final decision maker.

The paper is titled “When mental models go wrong: co-occurrences in dynamic, critical systems”, and discusses issues common with CRM, situation awareness, and decision making; thus it is a reference for CRM instructors who require a slightly different view of the problem of human error without too much psycho tech. Read ‘automation’ in the sense of a display of computed information, and ‘control’ as the control of the situation, not necessarily the aircraft.

IMHO, by discussing the Kegworth accident the authors imply that improving human-machine interaction through training (their argument) has failed, and that the industry therefore needs to refocus on training in addition to the use of computers with 'human reasoning'.

I hope that their research can be progressed, for if nothing else it could re-identify many of the human issues that the industry needs to reinforce by CRM teaching.

Move to Safety and CRM?

ft
8th Jan 2004, 23:27
Just read a few good articles/books on this subject.

Intelligent agents provide a finesse rather than a solution to the problem (Woods, Patterson & Roth, 2002). What is needed to address data overload is context-sensitivity in the cockpit. To date, we have not been able to create intelligent agents capable of achieving this, be they rule-based expert systems, model-based systems or systems built on AI routines (Billings, 1997).

The human operator still provides a unique capability to switch the focus of attention to the part of the data field where it is needed the most (Woods et al). Automated agents still have a long way to go to be as capable in this respect. They also need to interface with the crew seamlessly, or they might add to rather than subtract from the data overload problem (Billings).

The very confirmation bias mentioned in the paper by Besnard & Greathead (2004) will in fact pose a problem when creating such adaptive agents. The operators might home in on the solutions presented by the system and fail to notice cues that the system was not designed to account for, but which they would probably have noticed if left to their own devices.

We already know that this is a problem. We have a fairly good idea about how to resolve the situation. We are simply unable to achieve it, presently, and the article stops well short of where earlier work on the subject has taken us.

References
Besnard, D., Greathead, D. (2004). When mental models go wrong: co-occurrences in dynamic, critical systems. International Journal of Human-Computer Studies.

Billings, C.E. (1997). Aviation Automation - The Search for a Human-Centered Approach.

Woods D.D., Patterson E.S., Roth E.M. (2002). Can We Ever Escape From Data Overload? A Cognitive Systems Diagnosis. In Cognition, Technology and Work (4:22-36).

Rather basic, and very undigested so don’t let yourself be fooled by the referencing. This post does not by any means meet scientific standards. But IMO, that report contained nothing new or unknown to us, even if the papers made it a headline. They know that aviation accidents sell copies, I guess... *sigh*

Cheers,
Fred

safetypee
9th Jan 2004, 00:31
Ft
The so-called intelligent agents (computers) may be able to assist the crew in some circumstances, most likely where the crew have made a hasty decision on a limited data set or have acted incorrectly. So in the Kegworth accident, if the engine vibration warning had had an alerting function, together with detection of the smoke source from the left engine, these data could have been combined and displayed to the crew, suggesting a possible course of action – 'problem with left engine'. Similarly, if the crew then continued with the incorrect action of shutting down the right engine, the intelligent agent could question the action. Most of this, we hope, would be done today by a crew exhibiting good CRM.
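
To make the idea concrete, here is a rough, purely illustrative Python sketch of that cue-combination and challenge logic; every name and threshold is invented for the example, and none of it is taken from any certified system.

```python
# Rough illustrative sketch only - invented names, not any certified system's logic.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Cues:
    high_vibration_engine: Optional[int]  # engine flagged by the vibration monitor, if any
    smoke_source_engine: Optional[int]    # engine identified as the smoke source, if any

def suspect_engine(cues: Cues) -> Optional[int]:
    """Suggest a faulty engine only when independent cues agree."""
    if (cues.high_vibration_engine is not None
            and cues.high_vibration_engine == cues.smoke_source_engine):
        return cues.high_vibration_engine
    return None  # insufficient or conflicting evidence - leave the diagnosis to the crew

def challenge_shutdown(cues: Cues, engine_selected: int) -> Optional[str]:
    """Question a shutdown that contradicts the combined cues."""
    suspect = suspect_engine(cues)
    if suspect is not None and engine_selected != suspect:
        return (f"CHECK: cues indicate engine {suspect}, "
                f"but engine {engine_selected} selected for shutdown")
    return None

# Kegworth-like case: vibration and smoke both point to engine 1,
# yet engine 2 is selected for shutdown - the agent raises a challenge.
print(challenge_shutdown(Cues(1, 1), engine_selected=2))
```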

The difficulty in 'educating' intelligent agents to predict the crew's intention may be seen in the A320 Habsheim accident. Airbus aircraft / systems have very good but limited intelligent agents. The A320 system correctly deduced from the low altitude, gear down, flap out, etc. data that the pilot's intention was to land. The pilot did not communicate to the aircraft / system that he was making a fly-past, which would ultimately have resulted in a go-around with an associated change of operating mode. The pilot's intentions would have been clearer to the system if he had moved the thrust levers forward, or if he had not flown so low or so slow in the first instance. Unfortunately, whilst the other crew member did know the captain's intention, he also either lacked system knowledge or failed to intervene in time.

With today's technology, if EGPWS were interfaced with the existing Airbus logic, an intelligent agent could determine whether the deduced intention to land was actually going to take place on a runway; if not, the system would alert the crew and / or select an automatic go-around. i.e. if EGPWS had been fitted at Habsheim the crew might not have flown so low, and thus, having made an error (lack of knowledge), the crew might have had time to recover.
Classic computer-aided threat and error management.
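
A minimal sketch of that runway-suitability cross-check, again purely illustrative, with invented data structures and thresholds rather than actual Airbus or EGPWS logic:

```python
# Illustrative only - invented names and thresholds, not Airbus/EGPWS logic.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Runway:
    name: str
    length_m: float

def deduced_intention(gear_down: bool, flaps_out: bool, radio_alt_ft: float) -> str:
    """Crude stand-in for the 'intention to land' deduction described above."""
    return "landing" if gear_down and flaps_out and radio_alt_ft < 500 else "other"

def landing_check(intention: str, runway_ahead: Optional[Runway],
                  required_length_m: float) -> str:
    """Cross-check the deduced intention against the runway database."""
    if intention != "landing":
        return "no action"
    if runway_ahead is None:
        return "ALERT: landing configuration but no runway ahead - go around"
    if runway_ahead.length_m < required_length_m:
        return f"ALERT: {runway_ahead.name} too short - go around"
    return "landing on a suitable runway"

# Example: low and configured to land, but only a short strip ahead in the database.
print(landing_check(deduced_intention(True, True, 100.0),
                    Runway("short GA strip", 900.0),
                    required_length_m=1800.0))
```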

But then, the crew could have switched the system off.

ft
9th Jan 2004, 01:40
When the system is able to successfully deduce the correct conclusions, all is fine and dandy. If they suggest the correct actions, even better.

The problem lies in creating a system which will do so, with sufficient accuracy and with an interface intuitive enough to ease the data overload problem rather than add to it.

If the wrong solution is suggested, there is a very strong chance of the crew falling into a situation of confirmation bias, where all indications suggesting something else will be ignored or pushed aside. If many solutions are presented, chances are a crew in a situation of cognitive overload will go with the first solution rather than evaluate the picture presented.

As expert systems aren't likely to emerge, let's turn our attention to model-based systems. They will require very advanced models, and the models will still be unable to deal with situations not predicted by the model designers. Sioux City, e.g. - is it likely that the model designer would have considered the chance of the ruptured disk severing all the hydraulic systems when even the designers of the aircraft did not?

Of course, designing a model would add some redundancy in the design process. Perhaps the model designer would have caught this potential problem when the designer did not?

I would not call the A320 systems which caused that fly-by crash intelligent agents in this sense. They are rather more classical automation. The problems are often the same. It is unclear what the automation or agent is doing and on what data it is basing its actions. Keeping on top of the behaviour and internal modes of the automation might add to the data overload problem rather than reduce it. An intelligent agent must, in order to be effective, be a team player just as much as the other guy in the cockpit. Can we achieve that today? I'd say no, we can't. Thus, we need to be very, very careful indeed when attempting to implement such solutions. Recall the A330 test flight crash at Toulouse in 1994. There, an intelligent agent decluttered the display. While the intention was to reduce the complexity, it compounded the seriousness of the situation as important data was hidden from the flight crew without them being aware.

IMO, the aircraft would still have thought it was a landing attempt at Mulhouse-Habsheim, as they were over the runway. And if landings were to be inhibited whenever the EGPWS signalled that the aircraft was not above a runway, how many go-arounds would be caused by, e.g., position shift? How would we make it clear to the operators what was going on? How large would the risk of an accident due to these erroneous go-arounds be?

As you almost said, the difficulty lies in educating the intelligent agents. The question is not 'could they be useful' but rather 'can we make them useful today'. I would say it depends and, I reiterate, that we should take great care when trying to implement them.

We probably need more basic solutions rather than such finesses.

Cheers,
Fred

BoeingMEL
9th Jan 2004, 02:19
Most people would agree that our (the UK's) AAIB is probably the finest aircraft accident investigation agency in the world...bar none. Their report on Kegworth was lucid, convincing, comprehensive and has stood the test of time. If that type of 737's cockpit is as confusing or misleading as the academics now claim, why have there been no similar accidents in the hundreds of thousands of approaches and landings by similar types since?

So sorry to be cynical...and I don't normally stereotype... but when those of us who have spent many safe and incident-free years at the coal-face are assessed and judged by the "experts" with pony-tails, it makes me weep. bm

alf5071h
9th Jan 2004, 04:28
BoeingMEL, whilst agreeing with your positive comment on the AAIB, the academic report did not challenge the AAIB's findings; but I also agree that the academics do need to discuss the practical aspects of flight safety with members of industry such as you. The paper only used Kegworth as an example of an accident which was precipitated by crew error in circumstances where there were multiple but poor cues of the technical failure.

I have flown with an engine instrument display similar to that in the 737, and as you state it is not confusing or misleading; yet my aircraft had a vibration warning in the MWS. One of the critical issues in the Kegworth accident, and in many others, is that crews act on a single data source. An engine failure is not normally indicated by high vibration alone; other confirmatory data sources must be sought before taking action. Scan all of the instruments and associated systems to form an adequate mental model before acting. If the 737 had had an intelligent computer system to check all of the items, such as that described by safetypee, then maybe – just maybe – Kegworth might not have happened. The other advantage of a computerised system is that it would not suffer negative transfer of previous experience ("the captain stated that he rarely scanned the vibration gauges because, in his experience, he had found them to be unreliable in other aircraft.").

I also agree with safetypee on the use of EGPWS. This equipment goes a long way towards being an intelligent system. With GPS input its accuracy is as good as any navigation system (aircraft can autoland with GPS), the database of airfields is improving daily, the aircraft's vector flight path, speed and acceleration are known, and the altitude is very precise, based on up to three sources (baro, radio, GPS). The system's understanding of the crew's intention is simple – to fly safely, avoiding a CFIT or approach and landing accident. As such it is uniquely equipped to achieve that task and has done so on many occasions.

ft, the aircraft at Habsheim crashed into trees that I understood were at the side of the runway, probably at a distance that EGPWS would have detected as not being a runway. If not, then an aircraft flying over a runway and not achieving a satisfactory touchdown point could be warned by logic similar to that in the new RAAS addition to EGPWS, where runway distance remaining is called out before / following a long touchdown. (RAAS – Runway Awareness and Advisory System – an aid to prevent runway incursions.)

I agree with you that an intelligent agent must be a team player; this implies that, as a team member, it should also follow CRM principles. But whereas a human may solve the complexities of accidents such as Sioux City, an intelligent agent would probably be limited to the detection of crew error in more routine operations, e.g. that extra check required at Kegworth, or the alerts and display of an EGPWS.

DanAir1-11
9th Jan 2004, 12:05
Habsheim was a combination of factors, as most accidents usually are. Primarily, the flight crew had not been correctly briefed concerning which runway the crowd would be aligned with, and had anticipated the longer of the two (?) runways and, I believe (although I am prepared to stand corrected as always), had planned accordingly. Only when they became visual did they become aware of the 'different' orientation of the fly-past. They also descended to 30 FEET, which was well below the AF stated procedure for such events, and at this point the attitude was fairly extreme, meaning firstly that the trees were not immediately visible and secondly that the rear fuselage was significantly lower than the forward section, possibly giving false visual cues. The Alpha Floor (?) function had, I believe, started to spool up the engines (due to increased attitude vs airspeed) fractionally before the crew recognised the danger and manually advanced the throttles.
To blame the a/c design is unfair in this case; without pointing the finger as such, it was sloppy work on a couple of fronts that caused this accident.
In the early days of 727 operations a few a/c were lost due to the high descent rates that could develop with the advanced (at the time) flaps. The Ohio River accident near Cincinnati was, if I recall correctly, one such incident of a fair few. I recall that the design was roundly queried at the time and, as history has shown, as better techniques were developed to handle the aircraft such incidents were eliminated and the type has gone on to be one of the many Boeing success stories.

safetypee
9th Jan 2004, 21:48
FT
Replies above cover most of what I could add to your points on Habsheim and EGPWS. However, I am concerned by your implication that a go-around has increased risk. Within our very safe industry, how can a go-around have a higher risk than a landing? A go-around should be seen as an approach without a landing; thus, where an aircraft continues to fly safely, this is much safer than attempting a more 'risky' landing.

For those looking for a link to the original paper “When mental models go wrong” here are links. pdf 248 kb (http://www.sciencedirect.com/science?_ob=MImg&_imagekey=B6WGR-4B3JSG9-2-3&_cdi=6829&_orig=browse&_coverDate=01%2F31%2F2004&_sk=999399998&view=c&wchp=dGLbVtb-zSkWW&_acct=C000050221&_version=1&_userid=10&md5=5587582138bf203d9ad45c6263ec2fdf&ie=f.pdf) or html web page (http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6WGR-4B3JSG9-2&_user=10&_handle=W-WA-A-A-D-MsSAYZW-UUW-AUDBYUUVAV-AYUVBVYED-D-U&_fmt=full&_coverDate=01%2F31%2F2004&_rdoc=5&_orig=browse&_srch=%23toc%236829%232004%23999399998%23472633!&_cdi=6829&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=65af4f4698a442030b885c8205c3bb00
)

For more info on RAAS see www.egpws.com/raas/raas.html

ft
11th Jan 2004, 04:50
safetypee,
I think you are thinking about it the wrong way. We're talking non-required go-arounds here, caused by alerting systems and/or automation. If there is even the slightest need for a go-around, very rarely should the approach be pushed. That is, however, not the topic of the discussion.

A go-around is always a deviation from the normal flow of things and will multiply the chances of something going wrong: crossing runway traffic, a configuration mess-up, being vectored around at low altitude in poor WX.

To make it absolutely clear: A non-required go-around due to a false/nuisance alert severely increases the risk of mishaps, as compared to completing the approach and landing.

Cheers,
Fred

safetypee
12th Jan 2004, 17:19
Ft
Either we do not share the same mental model of safety in flight, or we are viewing the same issue from different ends of the tunnel.

Although a go around is a rarely executed maneuver I cannot see how one would “severely increase the risk of a mishap”. Following your line of argument suggests that any continued flight increases the risk of a mishap. Major accident statistics are based on hull loss per million departures, eliminating differences between long haul and short haul operations; thus risk would not be increased by any additional flight time from a go around i.e. although there are risks associated with go around, they are not increased by conducting more of them.

If, as you state, 'a go around is always a deviation from the normal', implying that the deviation is hazardous, then risk will only increase if it is likely that the go-around results in an undesired effect. If this is true then the industry should seek methods of reducing the risk, or of minimising the effects of human error within that risk, for existing operations, i.e. spend more training time on go-arounds, or automate the go-around. The last point, I believe, is where we started; if the crew have made a mistake and not acted on the various levels of alerting, then the system should intervene in the flight handling. Today we teach CRM and expect, in extreme circumstances, that the non-flying crew member will intervene to take control - fly a go-around.

The downside of more automation may be an increase in the number of non-required go-arounds, but this depends on its integrity and reliability; modern systems are very good. However, on the positive side, an intelligent agent may eliminate a significant number of accidents where CRM failed or crew intervention did not occur; thus the overall effect should be improved safety of flight – reduced risk?

As you stated previously, the intelligent agent has to be a team player just as much as the other guy in the cockpit; I found the PACE doc, via this link from another thread, that defines the role of the non-flying pilot (CRM etc); perhaps these are also the characteristics of an intelligent agent. P.A.C.E. (http://uk.geocities.com/[email protected]/alf5071h.htm) 26 kb

My definitions of hazard and risk are:
Hazard: A condition, event, or circumstance that could lead to or contribute to an unplanned or undesired event.
Risk: An expression of the impact of an undesired event in terms of event severity and event likelihood. FAA Order 8040.4, Safety Risk Management
http://www.asy.faa.gov/Risk/Policy/Order8040-4.pdf

edit - updated PACE link

Tinstaafl
12th Jan 2004, 20:01
I think there's slight difference in case between the approach vs go-around risk examples. My view is:

IF the approach & landing is 'normal' then a go-around entails a slightly higher risk.

IF the approach & landing is going pear shaped then a go-around gives the lesser risk.

ft
12th Jan 2004, 22:39
The NTSB (2002a) shows that for U.S. air carrier accidents in 1999, the first event initiating 11 out of 44 accidents occurred during manoeuvring, approach or landing. For 1998, the corresponding numbers are 10 out of 42 (NTSB, 2002b). This does not include the descent, nor does it include the climb and additional manoeuvring which will be part of a missed approach. The figures do include the actual landing, which will probably not be repeated in a go-around unless you are in such a low energy state that it becomes a touch-and-go (Transport Canada, 1998). Still, the numbers should be close enough for this discussion.

According to these numbers, 25% of all accidents occur in the phases of flight which will be repeated if you go around, for whatever reason.

The implication of this is that if you go around for a non-valid reason, such as a nuisance or false alert from the kind of automation finesse (Woods, Patterson & Roth, 2002) that has been suggested, you have suddenly increased the accident rate for that flight by 25%, for no gain at all.
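
To make the arithmetic explicit (nothing more than a check of the quoted figures and of the reasoning behind that 25% claim):

```python
# Trivial check of the figures quoted above (NTSB 2002a, 2002b).
accidents_1999, in_phase_1999 = 44, 11
accidents_1998, in_phase_1998 = 42, 10

print(in_phase_1999 / accidents_1999)  # 0.25
print(in_phase_1998 / accidents_1998)  # ~0.24

# The argument: an unneeded go-around repeats the manoeuvring / approach /
# landing phases once more, so that flight's exposure to the ~25% of accidents
# initiated in those phases is roughly doubled - i.e. the flight's overall
# accident exposure rises by roughly 25%.
```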

Is that acceptable? What rate of false alerts vs. cases where the finesse actually avoids an accident is required for anything to be gained? That needs to be very carefully studied and evaluated, especially before adding even more potential data to an environment where data overload is already a big problem in critical situations. How large is the risk of overdependence? How much will interfacing with the system tax the cognitive abilities of the operators? How large is the risk of misinterpretation when it goes off? Etc etc etc, ad nauseam.

To figure that out, we need the accident rates, the rate of accidents which the system could have avoided, the effectiveness of the system, the nuisance alert rate of the system and so on. I won’t even try to guesstimate all those figures. For all I know, it might indeed be effective. Then again, it might not. And even if it is effective, is it cost effective? Perhaps the same resources could be spent elsewhere and do more good?
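
Just to show the shape of that comparison – every figure below is a made-up placeholder, only the structure of the calculation matters:

```python
# Structure of the comparison only - every number is a made-up placeholder.
p_preventable = 2e-7   # per flight: accidents the agent could in principle catch (assumed)
effectiveness = 0.8    # fraction of those it actually catches (assumed)
nuisance_rate = 1e-4   # per flight: unneeded go-arounds it triggers (assumed)
p_ga_mishap   = 1e-5   # chance an unneeded go-around itself ends in a mishap (assumed)

risk_removed = p_preventable * effectiveness
risk_added   = nuisance_rate * p_ga_mishap

print(f"removed {risk_removed:.1e}, added {risk_added:.1e} -> "
      f"{'net gain' if risk_removed > risk_added else 'net loss'}")
```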

BTW, the PACE link doesn't appear to work. I would like to read that, I think, so if you can get me a working link it would be much appreciated!

It’s a fascinating subject, isn’t it?

Cheers,
Fred

References
National Transportation Safety Board (2002a). Annual Review of Aircraft Accident Data. U.S. Air Carrier Operations, Calendar Year 1999. Retrieved on the 12th of February 2004 from http://www.ntsb.gov/publictn/2002/ARC0203.pdf

National Transportation Safety Board (2002b). Annual Review of Aircraft Accident Data. U.S. Air Carrier Operations, Calendar Year 1998. Retrieved on the 12th of February 2004 from http://www.ntsb.gov/publictn/2002/ARC0202.pdf

Transport Canada Civil Aviation. (1998). Notice to Pilots and Air Operators - Low-Energy Hazards/ Balked Landing/Go-Around. Retrieved on the 12th of February 2004 from http://www.tc.gc.ca/CivilAviation/commerce/circulars/AC0141.htm

Woods D.D., Patterson E.S., Roth E.M. (2002). Can We Ever Escape From Data Overload? A Cognitive Systems Diagnosis. In Cognition, Technology and Work (4:22-36).

reverserunlocked
13th Jan 2004, 01:32
Habsheim was a lash up on the part of the crew, pure and simple. It had little to do with the aircraft's systems or data presentation.

I studied this accident quite intensively for a course, and although there were many different aspects to it, as there always are, the basic factor was poor airmanship, and lack of familiarity with the aircraft.

Remember that Habsheim is a general aviation airfield, and the runway is way too short for an A320. Rather than a fly-past, the manoeuvre became an approach and go-around, except that the trees (which wouldn't have been there if this runway had been suitable for an A320) were in the way.

Had this trick been tried in a 737, it would have stalled and crashed into the trees. On the 320, it pulled max nose up and held it there until it ran into the trees. It's a no-brainer.

safetypee
13th Jan 2004, 17:03
Ft
Thanks for the references; yes this is an interesting subject.
The data that you quote is very sound and is in agreement with my source; Boeing Statistical Summary of Commercial Jet Airplane Accidents. (http://www.boeing.com/news/techissues/pdf/statsum.pdf )

I believe that the point of debate remains the differences between safety / risk in current operations and that in a future operation using an intelligent agent. Currently the approach, landing and GA phases of flight account for a high proportion of accidents. GAs flown today are safe (low risk); it is the poor approaches that do not result in a GA that are hazardous, many of these contributing to the accidents within that 25% of the total.

In a future operation with computerized assistance / intervention I would argue that some, even a considerable proportion, of these accidents would be avoided. Even if the computerized assistance / intervention did result in a few unwarranted GAs, there is little to suggest that these would be any less safe than the many successful GAs flown currently. The point of comparison is the risk from a false computer warning vs the failure of the crew to act. Currently the false warning rate of systems such as EGPWS is extremely low, but the failure rate of crews to act (incidents and accidents) is high.

Thus (IMHO and with due deference to the learned gents quoted) the deduction (implication) that an unwarranted GA will lead to a sudden increase in the accident rate by 25% is illogical.

As an analogy, reverting the 'intelligent agent / computer' back to today's monitoring pilot, what additional hazard exists when a go-around is flown as a result of an SOP callout? There should be none, as this method of operating is the basis of current human factors and CRM training. Today's risks, those items leading to an accident, are the failures of the 'intelligent agent' (the human) to heed the warning or to intervene. If these errors were removed by automation (or by more reliable humans) then we should have a safer industry.

The Transport Canada advice is very specific, relating to low-energy GAs from below 50 ft and / or a baulked landing (near or on the runway), where airspeed will be lower than at the point where a normal go-around is commenced.
IMHO TC has put its case rather strongly for large commercial aircraft. Whilst crews must be aware that it takes time to achieve high thrust from low power, or that flaps cannot be sequenced if the airspeed is less than that specified at the threshold, it remains that the aircraft is perfectly flyable in its existing configuration (to me this is an airmanship issue).

What is the difference between stable flight on a 3 deg approach and level / climbing flight with higher power in the same configuration? The aircraft can be flown perfectly safely; the crew must consider a change in procedure – accelerate before flap retraction – and whilst not all of the safety margins for obstacle clearance etc. will be met, the flight is just like a take-off in an unconventional configuration. My last aircraft was certificated for take-off with landing flap.
The TC advice is for crews to decide to GA early (intelligent intervention) or, if the decision is late, to ensure that crews are trained for non-normal drills and are aware of the hazards at that stage of flight. In my career the largest hazard during a baulked landing was the elephants I had just seen crossing the middle of the runway!

Just because student pilots move up to big jets does not mean that they cannot fly a touch-and-go maneuver when warranted; if, like me, many pilots were trained via touch-and-go landings, I hope that they found them valuable experience of an aircraft's capabilities (and in my fast-jet days we did not even 'touch', in order to save tyre wear).

I have re linked PACE, although you will have to page down to the reference.

alf5071h
14th Jan 2004, 23:39
reverserunlocked
Good summary of Habsheim; the chat in this link supports your view: Habsheim (http://users2.ev1.net/~neilkrey/crmdevel/maillist/archive/mar_97/0076.html).

The fact that it was a general aviation airfield also supports the earlier posts suggesting that EGPWS would have provided a terrain warning and that RAAS would have alerted the crew to insufficient runway length.