Automation Bogie raises it's head yet again

Old 17th Jan 2011, 16:40
  #141
sevenstrokeroll:

When I was first learning to fly in California, USA...a number of radar mistakes led airplanes into the sides of mountains. When I first flew to one of these airports, I insisted on flying the entire procedure as published...it added quite a bit of time to the flight, but I knew where I was every step of the way...right down to minimums.
EGPWS has greatly reduced CFIT from errant radar vectors. Not everyone has EGPWS/TAWS, although with the handheld VFR GPS devices that have terrain alerting they should.

Moving-map MVA charts would be great, too. In fact, pilots with geo-referenced Jepp charts have just that, but not in the U.S.; the U.S. filed an exception with ICAO regarding radar charts.
aterpster is offline  
Old 17th Jan 2011, 19:27
  #142
Originally Posted by aterpster
This is the VOR/DME chart they were using:

http://tinyurl.com/4jcxrld
I'm pretty sure they were actually using this one (note Rozo identified as R):

http://www.rvs.uni-bielefeld.de/publ...ep/Cali/c2.gif

Regardless of how you feel about the unwillingness or inability to go raw data when they realised they were confused, the fact also remains that if they had entered "ROZO" into the FMS they would also most likely have come in for a safe landing.
DozyWannabe is offline  
Old 17th Jan 2011, 21:09
  #143
DozyWannabe:

I'm pretty sure they were actually using this one (note Rozo identified as R):

http://www.rvs.uni-bielefeld.de/publ...ep/Cali/c2.gif

Regardless of how you feel about the unwillingness or inability to go raw data when they realised they were confused, the fact also remains that if they had entered "ROZO" into the FMS they would also most likely have come in for a safe landing.
They were indeed filed via airways (W3) to the Cali VOR, followed by the ILS to Runway 01, as you have linked. But, all the discussions with approach control were about instead flying the Rwy 19 procedure, which I linked. (Both charts have ROZO on them).

It's not a matter of how I "feel;" they were obligated to fly the VOR/DME arrival/IAP to Runway 19 once they accepted the clearance. You feel they would have come in for a safe landing had they managed to jury-rig the IAP and skirt the margins of the procedure. But they screwed it up royally. As soon as things got screwed up they had an obligation either to abandon the VOR/DME Rwy 19 and rejoin the airway at altitude to proceed for the ILS Rwy 01 or, if they were savvy, to do the simple task of recovering the Runway 19 approach via raw data. That was a very simple thing to do in a 757.

As to skirting the procedure by going direct to ROZO, they would have been violating this statement on their AAL flight plan (as well as the Colombia AIP procedure):

From the accident report:

AA did however, provide the flightcrew with written terrain information on the flightplan. This noted that: "Critical terrain exists during the descent--Strict adherence to STAR necessary for terrain clearance." The evidence suggests that the flightcrew did not take this information into consideration during the descent into Cali.

Further, the accident report fails to mention whether the accident aircraft had an original FMS (no GPS sensor, just IRUs with DME/DME/VOR update). Because of the date of manufacture of the aircraft, it probably was a first-generation FMS. That alone should have prohibited use of LNAV below en route altitudes because of the lack of robust DME geometry, which could and did cause serious map shifts in Latin America during those years. Also, although I am uncertain, I suspect Colombia was not WGS84-compliant on the date of the accident.
aterpster is offline  
Old 18th Jan 2011, 01:21
  #144
fdr
"my Observations And Comments About The Accident"

"If the world were perfect, it wouldn't be"
"Yogi" Berra

Ataps': did you update your observations with the hindsight of the evidence that was rather skillfully derived by the AAIs?

(Peter is correct, the investigation techniques were advanced by the efforts of those involved in this investigation).

Would love to operate in a world where humans don't suffer from human constraints, machines don't break down, and computers don't suffer BSODs. But that is going to have to be in another universe. In this one, we have "the Germans in charge of law and order, the British in charge of cooking (and car electrics), the French in charge of personal hygiene... etc, and the Italians in charge of maintenance..." (apologies to all).

AA965 [oops... 587 was bigfoot....or how we learn about certification issues after the event, thanks zee] had two nominally performing humans (three if the ATCO is included...) acting in a standard routine, and it didn't need much for the round bit to go pear-shaped. What may be considered rash or wilful disregard later is often hard to distinguish, pre-accident, from being expedient and "applied initiative" at work.

Expectancy was a significant factor in the FMS navigation database (NDB) confusion, and it occurred at a tragically inopportune time with a crew that were at a low arousal state, in high-risk conditions. That the crew didn't recognise that they had multiple SA errors starting to compound again confirms that they were human (as was the software developer who released the NDB that was not consistent with ARINC standards... the same for the people who considered that the QMS of the NDB was adequate...).

" The future ain't what it used to be "
Yogi

If Peter's Risks Digest methodology is not to your liking, perhaps you may be interested in looking at risk as considered by Erik Hollnagel (if there is some discomfort with Reason's simple model of causation), or at least consider SA matters in general as well described by Mica Endsley. As to contributory factors, I believe, if anything, that the AA[965] investigation could have followed further down the path of causation, to highlight opportunities for reinforcing systemic weaknesses. (Vaughan or McDonald can also give some insight into how technology retains flaws, against the best intentions of those entrusted with its implementation).

After Annex 13, and all other processes are complete, it is always the intent of this industry to avoid repetitive failings. Civil tort law has different priorities, and does not lend itself to the avoidance of repetition. If it did, my computer screen wouldn't regularly go to BSOD.

"It's deja vu all over again"
Yogi

regards.


FDR

Last edited by fdr; 18th Jan 2011 at 04:08. Reason: 965 not 587, Dooh! thanks Zee.
fdr is offline  
Old 18th Jan 2011, 07:18
  #145
PBL
Terpster, Dozy, fdr,

there is such a lot to say that it has taken me some time to think about how to reduce it to a few sentences. Let me first say that I am glad you all take these issues seriously. I think that Cali could provide a good test example for thoughts and tropes about automation and human operation.

First, terpster asked for references to our work on Cali. Here is an extended answer. The paper AG RVS - Analysing the Cali Accident With a WB-Graph was presented at the first Human Error and Systems Development workshop, organised by Chris Johnson in Glasgow in March 1997. That workshop series is now called Human Error, Safety, and Systems Development and has had seven meetings; the 8th is due, but I haven't heard.

Chris also started a workshop on Accident Analysis, of which the first two meetings, in Glasgow and Virginia (organised by John Knight and Michael Holloway), were superb, but it then petered out. The problem seemed to be that everyone wanted to come and talk, but no one would or could submit a paper (we got three submissions the next year!). That is, everyone is interested in accidents (witness the explosion of threads here on recent accidents), but few actually want to work on them, that is, submit an analysis in public and subject it to open criticism. There is no open forum for accident analysis, not even in the context of the premier system safety conferences. One should ask why this is.

I was on the Editorial Board of the UK IMechE Journal of Risk and Reliability for a number of years, responsible for system safety and accident analysis, and we got not one submission. Even the people I worked closely with on the Bieleschweig Workshops submitted not one paper on accident analysis during the entire time. Bieleschweig was all just talk and slides (except for the papers I produced) - a good example of the PowerPoint syndrome.

The paper was incorporated into the first draft of a Causal System Analysis text:
http://www.rvs.uni-bielefeld.de/publ...i_accident.pdf.

This text will not appear as it is. It has been split (and extended) into a text on system safety for computer-based systems, which is in draft form, and there will be a separate text on WBA. The original turned out to be simply too hard to teach from. It includes some logic, and proof of correctness of an accident analysis (according to explicit criteria) and it turns out that no one who is interested in accident analysis has the background in logic to be able to read this, let alone apply it themselves. Even after ten years. So I have given up this approach to teaching.

The last paper is AG RVS - Comments on Confusing Conversation at Cali, which will be incorporated in extended form in an article on safety-critical communication protocols in the Handbook of Technical Communication, ed. Gibbon and Mehler, Mouton de Gruyter, Berlin, to appear 2011.

Terpster, you will see if you look at these carefully that we knew about your work when you wrote it! Thanks for including some of it again above.

Second, I would like to emphasise and expand the view proposed by fdr. There is a context in which the human pilots played out their fatal game. Terpster says earlier here "the pilots did this and this wrong" and puts responsibility solely there, with the human pilots. But in his older writing which he quotes, he acknowledges the context (namely, what is "common acceptable behavior" in US operations) and puts the responsibility for encouraging/allowing that context to develop solely in the hands of the US regulatory authority (FAA). I want to say: thank you for making my point for me!

Third, let me say explicitly what terpster repeatedly points out: it is true that the pilots did not follow good, safe, TERPS-consistent procedure. But then, as terpster keeps pointing out, this is endemic in US operations. Not only that, but there are parts of the world (dare one say, Colombia?) which are not necessarily regulated by TERPS (indeed, one might say anywhere outside the US). Pilots cannot be expected to know all the approach-design criteria in use all over the world, just as those pilots regularly flying across German airspace cannot be expected to have read and understood German air law (first of all, it's in German; second, even if you know German, German-legal-speak is a different language with some - and I emphasise some - syntax in common).

There is no point of disagreement with the fact that the Cali pilots did not follow advisable, safe procedure. But I disagree strongly that that is the only factor (even terpster must back away from that claim, as his indictment of the FAA shows). I would even doubt that it is the most important factor, given that that kind of behavior is pervasive, as terpster points out, and most people behaving like that don't crash. There is a line of thinking about explanation, which I shall call "contrastive explanation" after the late Peter Lipton (Inference to the Best Explanation, 2nd edition, Routledge, London, 2004), which proposes that explanatory causal factors are those factors which were different in the (in this case) fatal case from how they are in all the non-fatal cases.

If we are to explain contrastively (and Lipton gives deep arguments why we do and should, which I have not completely digested), then the contrast is not in how these pilots accepted a clearance and tried to clarify what that clearance was, but in the, to them, misleading naming conventions and the misleading "affirmative"s uttered by the controller when he knew that "negative" was the correct procedural response. (Terpster, if we are to lay responsibility on pilots for not following defined procedure, why not on the controller for not following defined procedure? The only answer could be: the controller is there to ensure separation only. But of course that is not his only role. There was only one aircraft around, so it cannot be. He is also issuing approach clearances, distributing critical information, and, one would forlornly hope, trying to ensure the approach is more or less followed.)

Fourth, just as the pilots obtained misleading information through their miscommunication with the controller, the pilots also obtained misleading information from the nav database, whose detailed and sometimes whimsical design they were not completely familiar with (and, indeed, it takes a computer expert to be completely familiar with such things. That's my day job). Now, one may want to argue about who is responsible for that. The pilots who were misled and "should have known better", or the DB designer who should have thought through the safety consequences of the design decision (routinely: hazard analysis, risk analysis, elimination and mitigation. I would bet you that no hazard analysis as we system safety people teach it was performed on that DB design before it was used)? The answer, surely, is that assigning responsibility is a different question from determining causality. As a causal factor, it is irrefutably there. Similarly with the FMS. Let me also say that the manufacturer, Honeywell, is very concerned with such questions, not particularly as a consequence of the Cali accident and the adverse court decision but because they have some very smart people there who take such things very seriously indeed.

There is lots more to say, but let me (almost) quit here. The final thing is that it is futile to continue to put the majority of the responsibility on people not following procedure to the letter. They never do. People working their roles in complex systems optimise their roles according to criteria local to them (e.g., "I can get my job done faster with less fuss, and have more time to think about the *important* things, that is, what I consider important"). This is a pervasive phenomenon which has been identified independently in two noteworthy works and is probably about as permanent a feature of human operations in complex systems as there is. You cannot wish it away by saying "people should have followed procedure to the letter", because they never, or almost never, do. The phenomenon was identified first by Jens Rasmussen and called "migration to the boundary" (in his Accimaps paper from 1997). It was also rediscovered, ostensibly independently, by Scott Snook in his work on the Iraq Black Hawk friendly-fire shootdown, where he called it "practical drift".

So in some sense terpster's admonition that people should stick to procedure is tilting at windmills, if you take it at face value. The only way to change the human operators' habits is to create a context which does not allow them the latitude to "optimise" their work to the point at which safety is diminished.

Such contexts could be created at the carrier by, for example, instituting a rule that all approaches are to be flown as published. Then up go the fuel bills! Up go the flight times! Controllers in busy airspace such as NY, SF and LA are terminally aggravated! In short, the whole way in which a major carrier uses airspace and gets along with the rest of the system is radically changed. Won't work.

In contrast, fixing a DB design or an FMS design is easy.

Terpster says some of that design is still with us. I have my opinions on that, and I am working hard in standards circles to see that things evolve for the better.

PBL
PBL is offline  
Old 18th Jan 2011, 10:30
  #146
fdr
migration

Hi Peter, atapster,

"There are known knowns;
there are things we know we know.
We also know there are known unknowns;
that is to say we know there are some things we do not know.
But there are also unknown unknowns,
the ones we don't know we don't know".


Sec Def, "Rambo" Rumsfeld.


Jens's and Erik's work, as well as your own, Peter, inherently acknowledges that the systems are at best non-linear and at worst stochastic. Certainly, Erik's resonance models assume underlying stochastic system performance. These models are far more realistic, albeit less "tidy" in application, than the near-linear Reason model. (I may be rather unfair on the limitations of artistry rather than the concept of James Reason's popular model.) The potential for an error from any source not only to act directly but to result in unintended behaviour of the system is high. In operations, consider the sight of a trained crew all being head down dealing with a minor problem, incorporating their CRM training... while heading toward terrain, thunderstorms, other traffic or alligators (old days anyway...), or again being head down while taxiing on a crowded ramp while checking the CMC to confirm that a PA needs to be made to the passengers for a "hard landing", or... having a 1 kt overspeed due to environmental changes and pulling the aircraft into the vertical (well... 17 degrees nose high etc at FL420).

Migration to the boundary is a characteristic of the feed-forward from successful outcomes where no incident occurred, under the conditions of human economy of effort and environmental and operational stress vectors. Over time, the nett outcome will naturally trend to and beyond the regulatory boundaries unless monitoring of the actual performance leads to intervention and a return to nominal design. Humans adapt pretty well, and the ability to find new methods to economise on the procedures is remarkable, inventiveness probably being as important as opposable thumbs in evolution (sorry creationists).

[This stands apart from the gross wilful violation of say having a child in the seat inadvertently nudging the APFD mode into CWS and the crew not having either the self preservation instincts to be near the controls, or the SA to understand what the mode of the AP was... (history does repeat, with a recent tail strike by a passenger flying an RPT jet...where do you start the remediation training for that? Put the pax in the sim?)].

Economy drives operation, design (Mars Surveyor... O-rings...), operational deviation, and risk acceptance away from nominal (assuming that nominal is a valid status anyway), as the result of all the interactions that occur. Hardly a surprise: the second law of thermodynamics states that a closed system will tend to increased entropy. Human systems appear also to suffer divergence from nominal unless action is taken to restore performance. The background stochastic signal of performance, mixing with variations from nominal performance, just gives some apparently arbitrary misfortune to the time and place of the event.

While hoping to use either Monte Carlo simulation or neural networks to improve modelling of off-nominal performance, I have been more effective at getting system crashes than at modelling aircraft crashes.... The main outcome has been less confidence in most of the existing mnemonics such as CRM, AQP, SMS etc, and more focus on the basic loss of SA that exists at the core of almost all events.
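As an illustration only, the "migration to the boundary" effect described above lends itself to a toy Monte Carlo sketch. Every number in the snippet below is an assumption invented for the example (the drift rate, the risk curve, the audit effect); it is not a validated operational model, it merely shows the mechanism: margin erodes quietly under economy of effort unless monitoring intervenes.

```python
import random

def run_operation(days, audit_every=None):
    """One simulated operation; return True if an 'incident' occurs.

    margin is the distance from the procedural boundary
    (1.0 = nominal procedure, 0.0 = at the boundary).
    """
    margin = 1.0
    for day in range(1, days + 1):
        # Assumed risk shape: incident probability grows as the margin erodes.
        if random.random() < 0.001 * (1.0 - margin) ** 2:
            return True
        # Quiet drift: each uneventful day, economy of effort eats a little margin.
        margin = max(0.0, margin - random.uniform(0.0, 0.002))
        # Intervention: a periodic audit pushes performance back toward nominal.
        if audit_every and day % audit_every == 0:
            margin = min(1.0, margin + 0.5)
    return False

def incident_rate(trials, **kwargs):
    return sum(run_operation(**kwargs) for _ in range(trials)) / trials

if __name__ == "__main__":
    random.seed(1)
    print("no monitoring:  ", incident_rate(2000, days=3000))
    print("periodic audits:", incident_rate(2000, days=3000, audit_every=250))
```

With these made-up numbers the unmonitored runs end in an incident far more often than the audited ones, which is the whole point of monitoring actual performance rather than assuming nominal behaviour.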

"Therefore, failure-tolerant management systems basically depend on the continuous and efficient communication of corporate and individual values and intentions. One of the major risk problems
related to the introduction of information technology in centralised systems may therefore be the temptation of rational, scientifically minded experts to design large systems in which centralised data banks with factual information are the basis for communication between decision makers, and, unwillingly, disturb the communication of values and intentions which is necessary for error recovery".
Rasmussen, RNL, 1985 Risk and Information Processing

From designing jumbos to justifying invasions, some things don't change in a hurry.

"You've got to be very careful if you don't know where you're going, because you might not get there".
Yogi...
fdr is offline  
Old 18th Jan 2011, 14:07
  #147
fdr, PBL, et al, whilst it is interesting to seek a high-level (academic) understanding of the human/system interface, it is also necessary to provide practical guidance for operators – a bit more than "You've got to be very careful …".

Humans have lived with ‘automation’ for a long time, generally adapting to the new circumstances and situations which arise. However, in aviation, whether due to the extent or to the rapid development and implementation of technology in a complex operating environment, it appears that we are in part failing to adapt and cope.

“More recent and contemporary utopias are almost invariably dystopias. This is an interesting and in its way surprising development. It suggests that we have found technology more of a burden and a threat than a liberation and help as was promised.” A.C. Grayling, ‘Utopia’, Ideas that Matter.

You provide a hint towards solutions focusing on the loss of situation awareness. I would look further into this, asking whether an appropriate awareness was ever attained – having a plan; as Yogi said, “… if you don't know where you're going, ...”
I see these aspects as shortfalls in mental preparation and in the strategic decision making which generates and updates the mental model. Perhaps there is a reluctance to do this (assuming that it was done pre-technology), because of the existence of technology; ‘it’ provides, or will provide answers otherwise generated via a mental model – so now we have don’t think, just use the EFIS, FMS, autopilot.

A possible contributing problem is that the EFIS, FMS, or autos are not good devices for establishing or maintaining the mental model.
Is this due to a design issue or the way in which we have been taught to construct our mental model (planning) when using technology? We are unlikely to improve the design in the near future – at least in a meaningful flight-safety timescale – thus we might only have the human to work with.

A solution? Perhaps a way forward might require greater understanding of the technology we use, the situations where it is / is not of benefit, and how and when to use this information. Not just more training, but practical training and continual knowledge-building focussed on the critical use areas of technology.
alf5071h is offline  
Old 18th Jan 2011, 14:15
  #148
Pardon me, but shouldn't it be Bogy, as in: an imaginary evil being or spirit; a goblin; anything one especially and often needlessly fears?

Rather than Bogie: an undercarriage assembly having pairs of wheels that swivel so that curves can be negotiated; an axle holding the undercarriage wheels.
rubik101 is offline  
Old 18th Jan 2011, 14:38
  #149
Alternative spelling, R101.
BOAC is offline  
Old 18th Jan 2011, 15:39
  #150
The alternative spelling is bogey?
HazelNuts39 is offline  
Old 18th Jan 2011, 16:10
  #151
Originally Posted by alf5071h
‘it’ provides, or will provide answers otherwise generated via a mental model – so now we have don’t think, just use the EFIS, FMS, autopilot.
That was never the intent though - automatics are there to provide a backstop, not do your thinking for you. In an auto-flight environment you need to think *and* use the EFIS, FMS and autopilot.
DozyWannabe is offline  
Old 18th Jan 2011, 16:23
  #152
fdr
mental models: "You can observe a lot by watching."

Alf,

"Always go to other people's funerals, otherwise they won't go to yours."
Yogi.

Sorry if the malapropism is lost in translation from Mr Berra's comment of "you have got to be careful..."

The application of FMS, NDs etc changed the problems, as you say, for various reasons, not merely in respect of loss of SA (there being a slip between reality and the mental model of the operator); however, in most incidents and accidents, other than some structural-failure events, there is evidence of a loss of SA.

Advanced applied technology such as APFD/display systems and increased automation resulted in new ways of getting hurt, as humans didn't act as the designer predicted. Nor do the systems, always. Removing the human from the control loop has also resulted in new failure modes, from increased detachment of the individual's cognitive capacity from the system, acting to reduce SA on some occasions. The new opportunities for errors and slips change failure modes, such as MK's B742F OPT issues at Halifax. If humans are considered to be excessively prone to failure, then for a real mess just add computers. For all the frailty of the human in respect of system failure modes, they also remain the best last hope of intervention when conditions are not exactly as programmed by the... human in the design or automation processes.

Human conditioning is not immune from similar problems: an excessive expectation of the benefits of CRM, or a misapplication of the concepts, can just as easily result in target fixation, poor workload management and losses of SA. It is embarrassing to see a crew CRM a problem to perfection and then not implement a solution... just as it is pretty depressing to see a crew dealing with all the processes of CRM, EMC, and similar programs while running out of fuel in a holding pattern, or flying away from an airfield while on fire etc... The salient point remains in most cases that any processes that reduce SA are not conducive to good health... including company SOPs, say, that cause a saturation of the crew at a critical time of operation, such as EK's pre-departure routines, or KAL's pedantic application of pre-takeoff reminders... or large legacy carriers operating an advanced-cockpit variant in the same manner as a legacy system to "standardise".

"...And I for winking at your discords too
Have lost a brace of kinsmen: all are punish'd".

Wm Shakespeare, (1564 -1616) Romeo & Juliet, Prince, Scene III

Management are accountable for the unintended consequences of ill-considered process changes, both legally and morally.

My views may be considered to be depressing, or not. Human failure tends (IMHO) to be the result of loss of SA in the main; even a large percentage of violations result from the individual reaching a state of lost SA due to the extent of the deviation undertaken. I would think that this is a point of some hope for improved safety, as it follows that processes, procedures, practices and designs of systems that promote SA are generally going to be advantageous. More specifically, the capacity of the individual to be trained in heightened SA awareness, and to be at the very least alerted to the precursors and indicators of SA loss, is a practical program. This is not warm fuzzy stuff; the effect of simulating conditions where a student ends up with a substantial slip in their SA model can be fairly traumatic.

System safety does improve to some extent with the application of emerging technology that is developed as a result of the detailed analysis of system behaviour. The system demands also evolve and so the desired performance level to achieve an acceptable risk level is also an ever changing target.

PBL; thanks as always for adding qualitatively to these discussions on this forum.
fdr is offline  
Old 18th Jan 2011, 17:15
  #153
PBL:

Terpster, Dozy, fdr.

there is such a lot to say that it has taken me some time to think about how to reduce it to a few sentences. Let me first say that I am glad you all take these issues seriously. I think that Cali could provide a good test example for thoughts and tropes about automation and human operation.

We indeed have a communications issue between us. You are a scientist. I am not. I gave up reading the first paper you cite/linked because it delves far beyond my education, skills, and experiences. My primary life’s work was as an airline pilot. That was and is a job that does not require any formal post-high-school education whatsoever. Granted, many of the pilots hired by my airline (and other carriers) in the 1960s had undergraduate degrees, because the airline strongly desired (and usually could command) that level of formal education.


Those degrees did not have to be aviation-related, and typically were not (and are not to this day). The successful completion of an undergraduate degree provided a measure of assurance to the airline H.R. gurus that the candidate would likely be both disciplined and sufficiently intellectually alert to avoid difficulty with the equipment ground school courses over his (and later, or her) career. Nonetheless, because of expansion pressures some pilots were hired who had no college education, but who had a substantial amount of varied flying time. These college-deprived pilots typically did quite well throughout their airline careers. What the successful candidates had in common was that difficult-to-define set of human skills necessary to be a good planner and evaluator, able to continuously think ahead of the aircraft, so to speak, while sequencing tasks in the sense of appearing to “be able to juggle several balls at the same time.”


My education as an accountant, combined with my pre-airline stint as an instrument flight and ground instructor, plus a compelling interest, caused me to become an instrument procedures analyst, but as an adjunct (avocation) to my career as an airline pilot. I learned about the design concepts (and sometimes lack thereof) and construction methods used to build TERPs instrument approach and departure procedures. This is not an area of interest intended for the line pilot, nor should it be. PANS-OPS is not all that much different from TERPs, except for circle-to-land criteria. After all, a given airplane needs to operate at an acceptable target level of safety while IMC, whether in TERPs or PANS-OPS airspace. In fact, with the advent of performance-based navigation, the nominal differences between TERPs and PANS-OPS will eventually disappear. This is already true today with RNP AR instrument approach procedures.

As to my taking the FAA to task in 1996, I felt then they were doing a poor job of promulgating and explaining RNAV procedures. They have improved that mission greatly in the intervening 16 years, although the inevitable conflicts in the U.S. between the “900-pound gorilla” (FAA ATC) and the remainder of the agency continue as they always have.


I stand by my writings about Cali I penned in 1996. And, my comments in Post #128 of this thread are consistent with my view of the Cali accident. I do, however, take exception to contributing factors 3 and 4 more strongly today than I did in 1996. Those factors continue unabated to this day. But wary, informed, cautious pilots are able to resolve the seeming discrepancies quite nicely (the EDDS ILS 07 in your country recently discussed on this forum is a good example). I don’t worry for a moment, though, that the pilot proficient in both RNAV and ground-based instrument procedures would have any problem flying that ILS approach in any modern aircraft. What I do worry about on a systemic basis is the ever greater possibility of a repeat of “We’d like direct Rozo” (as in “we’d like direct to the middle marker.”) I don’t foresee a repeat of the wild excursion of AAL 965 into the mountains miles removed from protected airspace; instead I see some member of today’s “direct-to” crowd eventually shaving off a hilltop not far off to the side of the modern RNAV containment areas.


Finally, you dwell on the communications difficulties between the captain of AAL 965 and the Cali controller. My pragmatic view of that one is, and always has been, “Hey captain, knock off your doomed attempt to turn a Colombian controller into an FAA controller, and get back on your unabridged flight plan.”

In no way do I denigrate your work. It is extremely important for this industry. But, your mission is to open the avionics device and examine/critique the fundamental work of the designers. Indeed, that is quite necessary. My primary mission is to get the device to work safely in spite of its blemishes.
aterpster is offline  
Old 18th Jan 2011, 17:52
  #154
DW, my phrase might have better been constructed as “there is an attitude of ‘don’t think just use’ …”; my concern is that some people have not been taught, or do not understand, the intent of automation.

“In an auto-flight environment you need to think *and* use the EFIS, FMS and autopilot.” I agree, but how is this accomplished? How does the industry ensure that the intent of design and certification is realised in operation? How is this gap, full of assumptions, to be filled?

fdr, as depressing as your views might appear to some, I agree with you that awareness and its cognitive components are at the root of many human problems in auto operations. I often quote Orasanu and Martin who for me simplify the problem to that of either not understanding the situation, or with understanding, choosing an incorrect course of action. Thus situation awareness is central to current problems as is the accumulation of the various forms of knowledge.
If we wish to use Kern’s analogy of Airmanship, where knowledge forms the pillars/walls of our building and SA the roof, then for issues of automation and technology many pilots are living in the wrong house, not by choice, but due to lack of appropriate resource (training knowledge – what and how) and of proficiency in assembling that resource: practice, practice, practice.
alf5071h is offline  
Old 18th Jan 2011, 22:51
  #155
There's a lot of over complicating a simple case here. I am a simple guy and like to keep things easy. In many airlines they have an analysis system for when problems crop up, to help the crew find their way through the maze of info, options etc. and to avoid rushing into the first option that pops into your head.

In Cali they were perfectly on a VNAV descent into a STAR for an ILS. It was dark, the PF had not been there before, they had briefed only the northerly ILS, and they were at FL200, 60 nm out at 280-300 kts. They were asked if they'd like a straight-in NPA to the southerly. Consider the facts from a simple airmanship point of view. Which part of NO THANKS don't you understand?

All the 'causal contributions' came afterwards. The root cause was not mismanagement of the FMS, and Jeppe mistakes and Boeing mistakes with no auto-stow speed brake etc. etc. It was the overwhelming desire to try and save 10 mins and change from a very safe Plan A to a very dodgy unprepared Plan B. That to my simple mind is an airmanship problem. That is the root cause of many prangs. Trying to dress it up afterwards as something else is not just. It smells like a mistake, sounds like a mistake and flew like a mistake. 10 minutes late is better than 20 years early.

This will sadly not be the last such demonstration of rushing into a black smoking hole somewhere in the aviation world. Such sad events have a habit of repeating themselves. Even if Jeppe had programmed the R point correctly, I judge the chances of a G/A to have been high. Either that or the chances of sliding off the end.... It should have been a non-starter. This is not easy hindsight; it is looking at the performance parameters, the crew experience, the time of day and the type of approach in Plan B. "NO THANKS, ATC. Kind of you to offer, but...." I stand by my opinion. We can agree to disagree. Fine.
RAT 5 is offline  
Old 19th Jan 2011, 01:01
  #156
fdr
Rat 5: congratulations, I take it you don't make mistakes. That's a relief; now we only need to sort out every other pilot.

I am unfortunately a lesser evolved species, being merely "human" (my wife may disagree on that classification on occasions) as are the other pilots that crash aircraft, design lousy procedures, build poor computers, and run whack-a-doodle governments around this polluted and poisoned rock we call home.

Searching around in a 5 m hole in the ground with smoking wreckage and the remains of bodies nearby, looking for answers as to why all pilots aren't foolproof, is the work that I do, and in it I rely heavily on the research and tools developed by people such as PBL, Hollnagel, Rasmussen, Johnson, Endsley, Mauro, Klein, Sumwalt... and co to understand why the dead guys weren't making the same "OBVIOUS" decision that you do. Wearing biohazard garments takes this discussion out of the realm of being "academic" [BOAC's, Atapster's posit].

Being merely mortal, when my passengers are on board the only thing I know is that I make mistakes, the designer makes mistakes, my crew makes mistakes, and the world is a dynamic place. My job is to attempt to catch those errors to the best of my ability, or to avoid those areas of risk where I reasonably can, while being urged by all and sundry to expedite, save time, money etc... While reasonably proficient at doing so, I also understand that prior performance is a poor predictor of future performance, and StuF happens; that makes it interesting to understand what issues are being [re]invented out there to try and derail my [and your] polished performances.

Individual performance in almost all if not all tasks has temporal qualities (what I like about the resonance concepts...). An example is the Capt who inexplicably continues to ask for a higher altitude than the aircraft performance will permit (on a standards evaluation flight...) yet, after being labelled a "BOZO"... proceeds a few years later to save 800 people when a well-respected B744 operator (a competitor airline) taxies across his runway on a MTOW departure in a fully loaded B744. Not one of the numerous other crew in his aircraft, the other aircraft or ATC recognised the critical condition, but the "BOZO" does a reject and saves everyone...

Most people if brought back from the dead and given the last 16 seconds again would probably make different decisions... (the "Omega III" principle from "Galaxy Quest").

fdr is offline  
Old 19th Jan 2011, 01:20
  #157
fdr:

Wearing biohazard garments takes this discussion out of the realm of being "academic" [BOAC's, Atapster's posit].
Being one of the ALPA accident investigators on the landmark TWA Flight 514 crash in 1974 resulted in me becoming more academic.
aterpster is offline  
Old 19th Jan 2011, 03:52
  #158
aterpster

Was that the crash at/near Dulles? If so, that crash contributed greatly to my understanding of the pilot's responsibility during the approach phase, and of the need to have a good idea about terrain.
sevenstrokeroll is offline  
Old 19th Jan 2011, 03:57
  #159
That's it--Mt Weather, VA.

GF
galaxy flyer is offline  
Old 19th Jan 2011, 09:21
  #160
PBL
It is astonishing to me - but really not, because I experience it a lot - that we come directly from talking about the causes of the Cali accident to relating the individual experiences and resumes of the discussants.

So let me briefly discuss here the tropes that have been raised, and then let's please get back to talking about the important point: automation and human interaction with it.

First, academics is my day job. I also do a bunch of other things. I run a small engineering company. I also have experience with aviation accident analysis that few if any other people on this WWW site have (no, not walking around wreckage fields, although I have handled this and that broken part sitting in a state transport accident investigation facility from time to time). Let's not put people into shoeboxes and assume we know everything about what they do and what they can do from their day job.

Let's get on to the tropes. Easy ones first.

Originally Posted by RAT 5
There's a lot of over complicating a simple case here. I am a simple guy and like to keep things easy. ..... Consider the facts from a simple airmanship point of view. ..... All the 'causal contributions' came afterwards. The root cause was not mismanagement of the FMS, and Jeppe mistakes and Boeing mistakes with no auto-stow speed brake etc.etc. .... That to my simple mind is an airmanship problem. That is the root cause of many prangs.
So, from a "simple" point of view (and I agree wholeheartedly with the adjective!): (a) there was a failure of airmanship; (b) other stuff is secondary; (c) failure of airmanship is thus "root cause".

That is the classic trope "blame the pilot if we can".

(Just to be clear here, I am rephrasing "came afterwards" as "secondary", because of course these other things did not literally come afterwards; they were installed in the airplane before the flight and executed their function during the incident, not "afterwards".)

No one working in aviation human factors accepts such a line of argument. The reason is part (b). Why is other stuff "secondary"? What criteria are being applied to conclude that other stuff is "secondary"? That these phenomena do not contribute to "root causes" (plural please, not singular)? Because they are, demonstrably, causal factors.

It's easy to write a note on a WWW forum and say "I don't agree; I am a simple man, and I say the pilots screwed up and you can ignore everything else." One can just as easily write a note on a WWW forum and say "I don't agree; I am a simple man and I say 2+2=5 and you can ignore everything else." And the two statements are about equivalent in worth, whatever that is.

To terpster I would say: sorry that you don't think you can understand the Cali analysis. The problem lies certainly in our presentation, because I can assure you from experience that people with widely differing backgrounds understand very quickly how to perform rigorous causal analyses. A 6-hour day can suffice, although two is more usual.

But I would equally point out that it is naive to dismiss technical results because one cannot understand them. Aerodynamics is complicated engineering and math, which you cannot learn in a couple of days of practice, as you can causal analysis. But you believe the figures in your FCOM. So what's different there?

What is different is that everyone and her dog thinks they can perform causal analysis, and thinks their preferred opinion is immune to criticism. The first is wrong for most people (they don't have the necessary discipline and training, just like with aerodynamics); the second is wrong for everybody, always. No analysis is immune to criticism. Even though the analysis is rigorous, and only open to observation that one has made a mistake, which may then be corrected ("look, you have here 2+2=5; but 2+2=4!" "You're right! Here, we changed it"), the two following questions of a causal analysis are always valid: (i) why did you stop there?, and (ii) why did you summarise these facts like that / why did you formulate this phenomenon/these phenomena in that way? One can only answer "because X" and then X is -always- open to discussion.

There is a similar phenomenon with the puzzle known nowadays as the "Monty Hall Problem" (look it up in Wikipedia). It has to do with estimating probabilities during a guessing game (one guesses to win - or lose - some prize). The probabilities may be derived, by either frequentist methods or Bayesian reasoning, and you can even establish them by playing the game over and over again and looking at the results (Bayesians can even guess random probabilities at first and then use Bayesian updating on the results of these game plays to reach the conclusion that others reach by ab initio reasoning).

However, the probabilities are, mostly, incorrectly estimated by most people at first. The odd phenomenon is that people stick with that incorrect estimate, even in the face of both proof and experiment that it is wrong. And not inexperienced people, either, but sometimes even people with PhDs in math.

Because of this phenomenon, the problem has also been well studied by cognitive psychologists. It is an amazing phenomenon. There is a very good book about it, The Monty Hall Problem, by Jason Rosenhouse, Oxford University Press 2009, which I read last week.
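For anyone who wants to establish those probabilities by playing the game over and over, as described above, a minimal simulation suffices. The sketch below is illustrative only; the function names and trial count are arbitrary choices, not taken from Rosenhouse's book or any of the cited papers.

```python
import random

def play_round(switch):
    """Play one round of the game; return True if the contestant wins the prize."""
    doors = [0, 1, 2]
    prize = random.choice(doors)
    first_pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the prize.
    opened = random.choice([d for d in doors if d != first_pick and d != prize])
    if switch:
        # Switch to the single remaining closed door.
        final_pick = next(d for d in doors if d != first_pick and d != opened)
    else:
        final_pick = first_pick
    return final_pick == prize

def estimate(switch, trials=100_000):
    """Frequentist estimate of the win probability over many plays."""
    return sum(play_round(switch) for _ in range(trials)) / trials

if __name__ == "__main__":
    print("stay:  ", round(estimate(False), 3))   # settles near 1/3
    print("switch:", round(estimate(True), 3))    # settles near 2/3
```

Run it and the stay/switch estimates settle near 1/3 and 2/3, the same values that frequentist or Bayesian reasoning gives ab initio.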

Unlike the Monty Hall game, causal reasoning occurs at some level in everyone's life all the time. This probably explains why more people are able to give their causal judgements as "simple" people, and stick with them, ignoring argument which demonstrates that their view is unsubstantiated.

Causal analysis ain't like voting. It isn't a matter of opinion. "Simple" means "naive", and naivety is about as welcome in causal analysis as it is in aerodynamics.

Back to the trope (a); (b); (c) above. In my experience, the only expert people who seriously propose it are very senior lawyers in compensation disputes. No scientific expert on automation and human factors whom I know (and I really do know most of them professionally on some level) supports such a view.

PBL
PBL is offline  

