Merged: Erebus site launched
Old 2nd Mar 2010, 09:42
  #201
Join Date: Jul 2002
Location: UK
Posts: 3,093
Your presumption that I would stick an oar in without reading the thread is quite depressing.

I don't understand, unless you were an ANZ management pilot of the era, why you're so vociferous in your condemnation of the pilots. ANZ and Morrie Davies were cowards hiding behind a rulebook that they knew was being routinely ignored. Chippindale was just a cat's paw for Davies and Muldoon, who didn't want his only national flag-carrier's reputation tarnished.
DozyWannabe is offline  
Old 2nd Mar 2010, 13:29
  #202 (permalink)  
 
Join Date: Nov 2006
Location: SoCalif
Posts: 896
Likes: 0
Received 0 Likes on 0 Posts
Mr. Seatback2:
3) Recently in this forum, much has been made of the Bendix weather radar, and its ability to pick up ice, etc. The manual for the type of radar installed on the aircraft stated it should not be used for terrain avoidance. Aside from the GPWS (sadly), what other radar systems could the crew have used at the time? Just seems to be disagreement within the forum (not unusual, I know).
Well, gee, why does it have that mode named MAP? FYI, it was a very good mapping device. I don't believe the pilots were looking at it. There was a radar indicator mounted just outboard of each pilot's outboard knee, and difficult to observe by anybody else, unfortunately.

While not guaranteed to be accurate in azimuth or altitude, that generation of Wx radar is reliably accurate in displaying distance to a target. In MAP mode it changes the pulse width to one better suited to detecting terrain, which doesn't scintillate like rainfall. I have confidence Mount Erebus could have been seen on that radar in time to warn the pilots of their position error.

The GPWS's primary sensor for terrain is the radio altimeter, whose antennas look straight down from the center of the belly, and whose beam width is only 60-70 degrees. The radio altimeter was designed and used as a landing aid. GPWS came along much later, and became required only four years before this accident, and nuisance warnings sometimes happened, usually from degraded radio altimeter installations, as seen with the Turkish 737 at AMS.

The primitive nonvolatile memory of the GPWS computer showed it had transmitted warnings to the best of its ability. As you know, they were level at 260 knots at 1500 feet, approaching a 300 foot cliff followed by a steep rise of terrain.

What radar of the day could have seen Erebus in time to prevent this accident? Installing more radio altimeters, with antennas just below the radome on the chin, would have provided a forward look. It would require two, each mounted 30 degrees off the center line, to see terrain in a turn.

Then, of course, they would no longer be accurate height measuring devices, and they would be locked on to the closest target, the 1500 feet to the sea in this case. I'll let you do the math to decide if that would have provided more warning than they received from the GPWS.
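Sketching that math (using GB's own figures of 260 knots, 1500 feet above the sea and a 300 foot cliff top, so the cliff top sits about 1200 feet below the aircraft, and taking the nearest-return lock as GB describes it):

\[
260\ \text{kt} \approx 439\ \text{ft/s}; \qquad
\sqrt{d^{2} + 1200^{2}} < 1500
\;\Rightarrow\;
d < \sqrt{1500^{2} - 1200^{2}} = 900\ \text{ft}
\;\Rightarrow\;
t \approx \frac{900}{439} \approx 2\ \text{s}.
\]

On those numbers the cliff would displace the sea as the closest target only about two seconds out, which is no improvement on the warning the GPWS actually gave.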

GB
Graybeard is offline  
Old 2nd Mar 2010, 23:30
  #203
prospector
Guest
 
Posts: n/a
Dozy Wannabe,

You say " why you're so vociferous in your condemnation of the pilots".

Go through all of my posts on this thread and the previous one "Erebus 25 years on" and you will not find anything I have said that is in any way vociferous condemnation of the pilots.

My argument is, was, and will always be, that Justice Mahon was wrong with his summing up of the events.

It is a well known fact that all other operators going down to the Antarctic required a certain experience level prior to going down as P1. It was no skin off the management of Air NZ if only one or two pilots were given the Antarctic sightseeing flight. That would have satisfied common sense and the original requirements laid down by NZCAA.

Who was it that pressured Air NZ management into giving these flights to senior members of their association????

From Bob Thomson in his "History of New Zealand Antarctic Research Programme 1965-88":
"Air New Zealand and NZALPA went to some lengths to ensure that their senior pilots and members were seen as professionals who knew it all and did not therefore need to seek advice from elsewhere, such as the RNZAF, USAF, USN or the Division."

It has been stated previously in this thread, or the previous one, that 3 times a month Auckland to Los Angeles in the flight levels is no qualification for 1,500ft, bloodshot VFR, in Antarctica.

Nowhere in Justice Mahon's summing up was any blame directed towards the people who created the situation where a Capt found himself in the position Capt Collins did.


If you wish to talk of vociferous condemnation have a reread of the garbage you printed.

"ANZ and Morrie Davies were cowards hiding behind a rulebook that they knew was being routinely ignored. Chippindale was just a cat's paw for Davies and Muldoon, who didn't want his only national flag-carrier's reputation tarnished."
 
Old 3rd Mar 2010, 01:48
  #204
Join Date: Feb 2010
Location: USA
Posts: 13
+1

You only need to look at how reluctant the US military were to even acknowledge Air NZ going down there in the first place. Not to mention the fact that the DC-10 is one of numerous aircraft wrecks currently resting in peace in Antarctica. It wasn't a place that they really should have been visiting. Hence the strict requirements for descent that unfortunately weren't followed.

The crew were hopelessly unqualified for the environment once they commenced descent.
workingman303 is offline  
Old 3rd Mar 2010, 02:17
  #205
Join Date: Jun 2001
Location: due south
Posts: 1,332
DozyWannabe: How dare you besmirch the memory and reputation of Ron Chippindale.
He was an honest, true gentleman who was beholden to no one, and for you to state otherwise merely publicises your ignorance.
henry crun is offline  
Old 3rd Mar 2010, 11:35
  #206
Join Date: May 2000
Location: Here. Over here.
Posts: 189
Henry Crun
DozyWannabe: How dare you besmirch the memory and reputation of Ron Chippindale.
He was an honest, true gentleman who was beholden to no one, and for you to state otherwise merely publicises your ignorance.
Enough of your righteous indignation.
I don't think DozyWannabe is the one displaying ignorance here.
Ron Chippindale did a pretty good job of demolishing his reputation all by himself.

Consider a few of Chippindale's errors, just for starters:
  • He added words into the CVR transcription to suit his pre-conceived idea of what had happened (the "Bit thick here Bert" conversation). See (1) below.
  • He accepted the misinterpreted Flight Data Recorder evidence as showing a panic application of left rudder by the pilot immediately prior to impact. This rudder movement was in fact the inertia effect on the rudder as the aircraft slewed to the left during impact. In his report the synchronisation of CVR and FDR had been manually adjusted to give a result fitting Chippindale's notion that the crew saw the terrain at the last moment. See (2) below.
  • He testified on oath that flight plan Annex 'J' (the old route proving plan that had the track direct to McMurdo and over Mt Erebus) had been recovered from the crash site. It was not recovered from the crash site. See (3) below.
  • He later admitted that he knew ANZ were lying to him. See (4) below.
Also:
The peak of Mt Erebus is about 20 nm from McMurdo. Collins locked the aircraft onto the NAV track after completing his orbits and descent. They were still engaged on the NAV track when Collins said “We’re 26 miles north, we’ll have to climb out of this.” If Collins (or anyone else on the flight deck) believed the NAV track was taking them over Mt Erebus, and here they were at 1500 feet with less than 10 miles to run pointing straight at a 12,450 foot mountain, they would have to be bloody suicidal (see the arithmetic sketched after this list).
  • When questioned at the enquiry about why Collins put the aircraft in this position of danger, Chippindale replied that he had given the matter careful consideration, and that Collins must have been suddenly afflicted by some medical or psychological malady which made him oblivious to the mortal danger looming in front of him. When it was pointed out that this must then have simultaneously happened to everyone else present on the flight deck, and was patently an absurd proposition, his credibility suffered badly.
And:
In Chippindale's own report:
2.5 The flight plan was printed for each flight from a computer stored record which, until the night before the flight, had the longitude for the McMurdo destination point incorrectly entered ...
... In the case of this crew no evidence was found to suggest that they had been misled by this error in the flight plan shown to them at the briefing.
  • This was another patently absurd conclusion for him to make.
Ron Chippindale may have been 'a true gentleman' but he made many serious errors in his investigation of the Erebus tragedy.
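Putting rough numbers on the geometry above (my arithmetic, using the figures already quoted: 1500 ft altitude, under 10 nm to run, a 12,450 ft summit, 260 kt ground speed):

\[
\Delta h = 12{,}450 - 1{,}500 = 10{,}950\ \text{ft}, \qquad
10\ \text{nm} \approx 60{,}760\ \text{ft},
\]
\[
\text{required gradient} \approx \frac{10{,}950}{60{,}760} \approx 18\% \;(\approx 10^{\circ}),
\]

or a sustained climb of roughly 4,700 ft/min over the 2.3 minutes to the summit, far beyond a loaded DC-10. The geometry itself says no crew who believed the track crossed the peak would have held 1500 feet.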


***********************************************************************
(1) In 1987, during a claim for compensation by the dependents of the deceased, Chippindale asserted that the engineers displayed their mounting alarm by the tone of their voices. Here again the evidence disproves his claim. He also claimed by implication that the voices marked by the Washington team as unidentified were in fact the voices of the engineers. He claimed this despite previously saying “At no time did I attribute any comment to any person. I relied totally upon the recognition of the voices made by the team in Washington.”
<snip>
So what did Chippindale actually do in order to create his theory of mounting concern? He took overlapping snatches of different conversations of passengers and cabin crew speaking in the galley area and flight deck and attributed them to the engineers when the Washington team agreed the voices were unidentifiable. He added words to the transcript which the Washington team agreed were unintelligible and suggested they suited his theory that the engineers were expressing their concern about flying conditions to the pilots. He latched onto a few remarks passing between Mulgrew and Moloney. After his theory was disproved by evidence given to the Royal Commission in 1980, he claimed seven years later, contrary to the opinions of seven to nine others, and supported only by Gemmell, that the engineers expressed mounting alarm by their tone of voice.
The conclusion must be that Chippindale’s claims are untrue. The engineers voiced no queries about the proposed descent, expressed no mounting alarm as the flight continued, and expressed no dissatisfaction. Those claims ought not to have been made by an inspector of air accidents. They brought no credit to the Office of Air Accidents Investigation. They were approved for release to the public by the Minister of Transport on 12 June 1980 and are still, at the time of writing, on the website of that Office’s successor. They have done lasting damage. They must have caused grief over the years to the flight crew’s families. They have created a fantasy scenario of the events which supposedly led to the disaster, a scenario that endures in the public mind to this day, as media comments such as Cullen’s, Rudman’s, and Rankin’s bear witness, and that is perpetuated into history.
Chippindale’s evidence in the court case brought for compensation by the dependents of those killed by the crash against the US Government no doubt contributed to their case failing. He attended in person to give evidence “at the direction of the New Zealand Government”. The US Government paid for his transportation to and from the US.

Stewart McFarlane, Senior lecturer in Law, University of Auckland (now retired)
http://www.investigatemagazine.com/a...ate_nov_4.html
***********************************************************************
(2) And so, in my opinion, the transcribers had made the mistake which investigators have often made in times gone by and in different circumstances. Many police inquiries have gone wrong for the same reason. The mistake they made was to first postulate what they thought had happened, and then treat all information which did not fit their theory as being not correct.
So here we had this investigatory defect revealed in startling form. The transcribers disregarded the simple facts which the 'black box' was telling them and substituted their own version of what it was trying to say.

Peter Mahon - 'Verdict on Erebus'
Chapter XXVII - How the 'Black Box' readout was misinterpreted
***********************************************************************
(3) Material Not Given on Despatch
Fig. 46. Annex.J
Annex J, when plotted onto a topographical map, indicates that the Air New Zealand flight path crosses Mt Erebus. It was distributed by Air New Zealand in the Antarctic envelope during 1977, when the flight path did, in fact, cross Mt Erebus, but was replaced by Exhibit 164 in 1978, when the flight path was shifted to McMurdo Sound.
Mr Chippindale stated on oath that Annex J had been recovered from the ice. This proved that it had been in Captain Collins' possession, and that it would have demonstrated to him that the flight path crossed Mt Erebus, had he troubled to plot it against any of the charts he had. Much later it transpired that Mr Chippindale's evidence was untrue. It was never recovered from the ice at all. Instead, Mr Chippindale had evidently gone to Air New Zealand and asked them to supply him with copies of documents which they alleged they had given to the pilots, and Exhibit J was one of these documents. Mr Chippindale then, for reasons he has not explained, concluded that it would be in order for him to swear to the Royal Commission that his team had recovered it from the ice. Mr Chippindale subsequently defended his actions in a press release, in which he said that Mr L.A. Johnson had given evidence on oath that he had handed Exhibit J to the pilots. But this claim of Mr Chippindale's is also untrue, as Mr Johnson did not say that at all.

Fig. 47. 1977 Flight Plan of 10.10.77 — NDB Waypoint. Mr Chippindale also said on oath that a flight plan was recovered from the ice which showed an Erebus route. Since this was an old 1977 flight plan, the implication was that the crew had it well in advance and had taken it with them to show the route, so that they must have known the flight path crossed Mt Erebus, even if they had not plotted the flight plan given to them on despatch. Since Flight Engineer Brooks had been to the ice in 1977, the implication was that he had given it to Captain Collins. However, research revealed that the waypoint on the flight plan said to have been recovered from the ice differed from the waypoint on Flight Engineer Brooks' 1977 flight. In view of that, and of the dubious background to Mr Chippindale's claim that Exhibit J was recovered, it might be safer to conclude that, on the balance of probability, the 1977 Erebus flight plan was not recovered from the ice either.

Stewart McFarlane, Senior lecturer in Law, University of Auckland
'The Erebus Papers' p78
***********************************************************************
(4) Obituary: Vale Ron Chippindale:
Erebus investigator was one of the many victims of TE 901, the disaster that will not go away

In November 1989, 10 years after the crash of Air New Zealand flight TE 901, chief air accidents investigator Ron Chippindale admitted to me that he knew Air New Zealand had lied about sightseeing flights to Antarctica not being allowed lower than 16,000 feet. But he’d gone along with that fiction, during his own investigation of that terrible disaster, and all through the long royal commission that followed, at the end of which Justice Peter Mahon accused the airline of concocting “palpably false evidence” and “an orchestrated litany of lies.”
Because of that ringing phrase, Justice Mahon became another victim of Mt Erebus, driven from the Bench for it by his fellow judges and a furious prime minister, Rob Muldoon. But Ron Chippindale was an Erebus victim too, never forgiven by many pilots for obstinately supporting the airline’s lie that TE 901 had no right to be flying below 16,000 feet when he knew otherwise.
But even his 1989 admission did not stop Chippindale continuing to accuse the pilots of causing the crash by bad airmanship. Despite conclusive evidence to the contrary, he still held that they were flying at a low altitude knowingly uncertain where they were in the hostile, mountainous Antarctic environment. And he bizarrely told me that they could have saved the DC10 and its 237 passengers and 20 crew by sliding it across the icy slopes it hit to a standstill, rather than letting it smash to smithereens after the ground proximity warning system shrieked its awful “Whoop whoop! Pull up!” That would have been a feat of airmanship unparalleled in aviation history.
(more)

Poneke's Weblog
http://poneke.wordpress.com/2008/02/13/te901/
***********************************************************************

Last edited by Desert Dingo; 3rd Mar 2010 at 11:45.
Desert Dingo is offline  
Old 3rd Mar 2010, 23:17
  #207
prospector
Guest
 
Posts: n/a
henry crun,
A lot of verbiage from a number of people who obviously have no idea of the meaning of the request for a VMC descent below MSA.

"In November 1989, 10 years after the crash of Air New Zealand flight TE 901, chief air accidents investigator Ron Chippindale admitted to me that he knew Air New Zealand had lied about sightseeing flights to Antarctica not being allowed lower than 16,000 feet".

That statement alone shows the value of the rest.

The minimum descent altitude was 16,000ft until after passing overhead McMurdo; then a descent to 6,000ft could be made, if the weather conditions were suitable.

Last edited by prospector; 4th Mar 2010 at 01:28.
 
Old 4th Mar 2010, 02:55
  #208
Join Date: Jun 2001
Location: due south
Posts: 1,332
Desert Dingo: Mahon's investigation was not error free, so does that make him a cat's paw for the ALPA, who wanted to protect the reputation of one of their own?

Your point (4). As prospector points out, Chippindale knew that ANZ rules did permit flight below 16,000ft; to suggest otherwise is absurd.
That shows how much reliance can be placed on the ramblings of a journalist.
henry crun is offline  
Old 4th Mar 2010, 03:23
  #209
Join Date: Nov 2006
Location: SoCalif
Posts: 896
Earwitness

I was acquainted with a Douglas FE who had spent time at ANZ working with their FEs, and was friends with the FE on that flight. Not long after, maybe as late as January, 1980, he told me he had heard the tape. He said his friend, the only one who had been on that trip before, said some 90 seconds before the collision, "I don't like (the look of) this. Let's get out of here."

Would he have recognized his friend's voice? Would the FE seat position, somewhat behind the pilots, have given him more sense of the white-out?

GB
Graybeard is offline  
Old 4th Mar 2010, 15:17
  #210
Join Date: Aug 2003
Location: Sale, Australia
Age: 80
Posts: 3,832
it was well below the minimums required for the approved cloud break procedure
There was no cloud break procedure in place at the time, prospector. Descent from 16,000 had to be made in VMC. The crew had been briefed the NDB was unavailable, although in fact it was on the air. Bearing in mind all the services provided by McMurdo were on a "your risk" basis, would a diligent crew have used the NDB despite being briefed it was unavailable?
request for a VMC descent below MSA
Chippindale makes the same mistake in sections of his report, implying that McMurdo had a controlling function. He does of course set out the reality. He notes the briefing did not contain information as to the extent of the controller's authority to control the flights. The USN had previously advised NZ Authorities: "Air traffic control/flight following shall take the form of location advisory of Deep Freeze aircraft and position report relay only."
we have a record of flights descending below the laid down minima, but they were all carried out in "brilliantly clear conditions"
Chippindale - Whiteout: The condition may occur in a crystal clear atmosphere or under a cloud ceiling with ample comfortable light and in a visual field filled with trees, huts, oil drums and other small objects.

Would seem to give the lie to the idea that "brilliantly clear conditions" would be a savior to an inexperienced crew.
Brian Abraham is offline  
Old 4th Mar 2010, 19:47
  #211
Join Date: Jul 2002
Location: UK
Posts: 3,093
Henry Crun :
For your benefit I'll repeat the situation regarding the Court of Appeal and Privy Council findings: Mahon's error, if it can be called that, was to state in his report that representatives of ANZ had lied to him without first putting that question to them while they were giving evidence. They found no fault with his other conclusions.

Graybeard :
The pilots responded immediately to the FE's comment of "I don't like this" and started a procedure to turn around and climb. While the comment certainly displayed anxiety, at no point did the FE exhort the pilots to expedite the manoeuvre, so it's not as though he was ignored.

I feel I should point out that I am not disputing Chippindale's qualities as a human being or as an aviator - simply that, as a former military pilot and later an investigator primarily dealing with light aircraft accidents, he drew some conclusions from the evidence that could be considered questionable - possibly because he had little experience of commercial airline operation.
DozyWannabe is offline  
Old 4th Mar 2010, 23:41
  #212
Join Date: Nov 2006
Location: SoCalif
Posts: 896
I've never heard the tape, but I understood the Douglas FE to say the "I don't like this" was some 90 seconds prior to the GPWS warning, after which the Captain said, "Climb power, please." Not so?
Graybeard is offline  
Old 5th Mar 2010, 00:42
  #213
prospector
Guest
 
Posts: n/a
Brian Abraham,

I believe you are cherry-picking some of the requirements that had to be met prior to descent below 16,000ft.

Yes it had to be VMC, but only under the specified requirements.

These included
2. Avoid Mt Erebus area by operating in an arc from 120 Grid through 360 grid to 270 grid from McMurdo Field, within 20nm of TACAN Ch29.

4. Descent to be coordinated with local radar control as they may have other traffic in the area.

When 901 said they would descend VMC they took on the responsibility of maintaining their own terrain separation, and separation from other traffic.

How could any controller separate them from other traffic if they had not been identified on his radar???

The VMC descent was only approved under the conditions stipulated, none of which was met.
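To show how mechanical condition 2 is, here is a minimal sketch of my own (not from any report; the helper name is hypothetical, and grid bearings are assumed already normalized to 0-360):

```python
# Condition 2 as quoted above: stay within 20 nm of the McMurdo TACAN, in the
# arc running from 120 Grid anticlockwise through 360 Grid to 270 Grid.
# The excluded 120-270 Grid sector is the Mt Erebus side the condition avoids.

def inside_approved_sector(bearing_grid_deg: float, range_nm: float) -> bool:
    """True if a position satisfies condition 2 as quoted above."""
    if range_nm > 20.0:
        return False
    b = bearing_grid_deg % 360.0
    # The arc 120 -> 360 -> 270 covers bearings 270..360 and 0..120 Grid.
    return b >= 270.0 or b <= 120.0

# For example: 030 Grid at 15 nm lies inside the arc; 200 Grid at 10 nm does not.
assert inside_approved_sector(30.0, 15.0)
assert not inside_approved_sector(200.0, 10.0)
```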

The weather was reported at McMurdo to be completely overcast at 3,500ft with other cloud layers above; mountain tops in the area were covered in cloud.

Other aircraft in the area reported Ross Island as being completely obscured by cloud.

"Aircraft Accident Report No.79-139 is still the single official document as to the cause of the crash. Everything published subsequently, including the Royal Commission Report, is opinion".

That according to my references, is the state of play at this point in time,if you could disprove that statement then please do.
 
Old 5th Mar 2010, 05:24
  #214
Join Date: Aug 2003
Location: Sale, Australia
Age: 80
Posts: 3,832
The VMC descent was only approved under the conditions stipulated, none of which was met.
Therein lies the rub. Perhaps with one exception (from memory - not checked), none of the flights complied with the requirement. The fact that they may have been in gin-clear conditions is no defence, given Chippindale's "Whiteout: The condition may occur in a crystal clear atmosphere".

It must be borne in mind when discussing Chippindale's findings that "Pilot Error" was par for the course in those days. Flight safety has moved on since then, with the realisation that pilot error is, more often than not, the end result of systemic failings within the organisation. And Air New Zealand proved to be replete with those, given its laissez-faire attitude to the operation.

It seems opportune to post the following.

Punishing People or Learning from Failure?
The choice is ours

Sidney Dekker
Associate Professor
Centre for Human Factors in Aviation, IKP
Linköping Institute of Technology
Abstract
In this paper I describe how Fitts and Jones laid the foundation for aviation human factors by trying to understand why human errors made sense given the circumstances surrounding people at the time. Fitts and Jones remind us that human error is not the cause of failure, but a symptom of failure, and that "human error"—by any other name or by any other human—should be the starting point of our investigations, not the conclusion. Although most in aviation human factors embrace this view in principle, practice often leads us to the old view of human error which sees human error as the chief threat to system safety. I discuss two practices by which we quickly regress into the old view and disinherit Fitts and Jones: (1) the punishment of individuals, and (2) error classification systems. In contrast, real progress on safety can be made by understanding how people create safety, and by understanding how the creation of safety can break down in resource-limited systems that pursue multiple competing goals. I argue that we should de-emphasize the search for causes of failure and concentrate instead on mechanisms by which failure succeeds, by which the creation of safety breaks down.

Introduction
The groundwork for human factors in aviation lies in a couple of studies done by Paul Fitts and his colleague Jones right after World War II. Fitts and Jones (1947) found how features of World War II airplane cockpits systematically influenced the way in which pilots made errors. For example, pilots confused the flap and gear handles because these typically looked and felt the same and were co-located. Or they mixed up the locations of throttle, mixture and propeller controls because these kept changing across different cockpits. Human error was the starting point for Fitts' and Jones' studies—not the conclusion. The label "pilot error" was deemed unsatisfactory, and used as a pointer to hunt for deeper, more systemic conditions that led to consistent trouble. The idea these studies convey to us is that mistakes actually make sense once we understand features of the engineered world that surrounds people. Human errors are systematically connected to features of people's tools and tasks. The insight, at the time as it is now, was profound: the world is not unchangeable; systems are not static, not simply given. We can re-tool, re-build, re-design, and thus influence the way in which people perform. This, indeed, is the historical imperative of human factors—understanding why people do what they do so we can tweak, change the world in which they work and shape their assessments and actions accordingly.

Years later, aerospace human factors extended the Fitts and Jones work. Increasingly, we realized how trade-offs by people at the sharp end are influenced by what happens at the blunt end of their operating worlds; their organizations (Maurino et al., 1995). Organizations make resources available for people to use in local workplaces (tools, training, teammates) but put constraints on what goes on there at the same time (time pressures, economic considerations), which in turn influences the way in which people decide and act in context (Woods et al., 1994; Reason, 1997). Again, what people do makes sense on the basis of the circumstances surrounding them, but now circumstances that reach far beyond their immediate engineered interfaces. This realization has put the Fitts and Jones premise to work in organizational contexts, for example changing workplace conditions or reducing working hours or de-emphasizing production to encourage safer trade-offs on the line (e.g. the "no fault go-around policy" held by many airlines today, where no (nasty) questions will be asked if a pilot breaks off his attempt to land). Human error is still systematically connected to features of people's tools and tasks, and, as acknowledged more recently, their operational and organizational environment.

Two views of human error
These realizations of aviation human factors pit one view of human error against another. In fact, these are two views of human error that are almost totally irreconcilable. If you believe one or pursue countermeasures on its basis, you truly are not able to embrace the tenets and putative investments in safety of the other. The two ways of looking at human error are that we can see human error as a cause of failure, or we can see human error as a symptom of failure (Woods et al., 1994). The two views have recently been characterized as the old view of human error versus the new view (Cook, Render & Woods, 2000; AMA, 1998; Reason, 2000) and painted as fundamentally irreconcilable perspectives on the human contribution to system success and failure.

In the old view of human error:

• Human error is the cause of many accidents.
• The system in which people work is basically safe; success is intrinsic. The chief threat to safety comes from the inherent unreliability of people.
• Progress on safety can be made by protecting the system from unreliable humans through selection, proceduralization, automation, training and discipline.

This old view was the one that Fitts and Jones remind us to be skeptical of. Instead, implicit in their work was the new view of human error:
• Human error is a symptom of trouble deeper inside the system.
• Safety is not inherent in systems. The systems themselves are contradictions between multiple goals that people must pursue simultaneously. People have to create safety.
• Human error is systematically connected to features of peoples tools, tasks and operating environment. Progress on safety comes from understanding and influencing these connections.

Perhaps everyone in aviation human factors wants to pursue the new view. And most people and organizations certainly posture as if that is exactly what they do. Indeed, it is not difficult to find proponents of the new view—in principle—in aerospace human factors. For example:

"...simply writing off aviation accidents merely to pilot error is an overly simplistic, if not naive, approach.... After all, it is well established that accidents cannot be attributed to a single cause, or in most instances, even a single individual. In fact, even the identification of a 'primary' cause is fraught with problems. Instead, aviation accidents are the result of a number of causes..." (Shappell & Wiegmann, 2001, p. 60).

In practice, however, attempts to pursue the causes of system failure according to the new view can become retreads of the old view of human error. In practice, getting away from the tendency to judge instead of explain turns out to be difficult; avoiding the fundamental attribution error remains very hard; we tend to blame the man-in-the-loop. This is not because we aim to blame—in fact, we probably intend the opposite. But roads that lead to the old view in aviation human factors are paved with intentions to follow the new view. In practice, we all too often choose to disinherit Fitts and Jones '47, frequently without even knowing it. In this paper, I try to shed some light on how this happens, by looking at the pursuit of individual culprits in the wake of failure, and at error classification systems. I then move on to the new view of human error, extending it with the idea that we should de-emphasize the search for causes and instead concentrate on understanding and describing the mechanisms by which failure succeeds.

The Bad Apple Theory I: Punish the culprits
Progress on safety in the old view of human error relies on selection, training and discipline— weeding and tweaking the nature of human attributes in complex systems that themselves are basically safe and immutable. For example, Kern (1999) characterizes "rogue pilots" as extremely unreliable elements, which the system, itself safe, needs to identify and contain or exile:

"Rogue pilots are a silent menace, undermining aviation and threatening lives and property every day.... Rogues are a unique brand of undisciplined pilots who place their own egos above all else—endangering themselves, other pilots and their passengers, and everyone over whom they fly. They are found in the cockpits of major airliners, military jets and in general aviation...just one poor decision or temptation away from fiery disaster."

The system, in other words, contains bad apples. In order to achieve safety, it needs to get rid of them, limit their contribution to death and destruction by discipline, training or taking them to court (e.g. Wilkinson, 1994). In a recent comment, Aviation Week and Space Technology (North, 2000) discusses Valujet 592 which crashed after take-off from Miami airport because oxygen generators in its forward cargo hold had caught fire. The generators had been loaded onto the airplane without shipping caps in place, by employees of a maintenance contractor, who were subsequently prosecuted. The editor:

"...strongly believed the failure of SabreTech employees to put caps on oxygen generators constituted willful negligence that led to the killing of 110 passengers and crew. Prosecutors were right to bring charges. There has to be some fear that not doing one's job correctly could lead to prosecution." (p. 66)

Fear as investment in safety? This is a bizarre notion. If we want to know how to learn from failure, the balance of scientific opinion is quite clear: fear doesn't work. In fact, it corrupts opportunities to learn. Instilling fear does the opposite of what a system concerned with safety really needs: learn from failure by learning about it before it happens. This is what safety cultures are all about: cultures that allow the boss to hear bad news. Fear stifles the flow of safety-related information—the prime ingredient of a safety culture (Reason, 1997). People will think twice about going to the boss with bad news if the fear of punishment is hanging over their heads. Many people believe that we can punish and learn at the same time. This is a complete illusion. The two are mutually exclusive. Punishing is about keeping our beliefs in a basically safe system intact. Learning is about changing these beliefs, and changing the system. Punishing is about seeing the culprits as unique parts of the failure. Learning is about seeing the failure as a part of the system. Punishing is about stifling the flow of safety-related information. Learning is about increasing that flow. Punishing is about closure, about moving beyond the terrible event. Learning is about continuity, about the continuous improvement that comes from firmly integrating the terrible event in what the system knows about itself. Punishing is about not getting caught the next time. Learning is about countermeasures that remove error-producing conditions so there won't be a next time.

The construction of cause
Framing the cause of the Valujet disaster as the decision by maintenance employees to place unexpended oxygen generators onboard without shipping caps in place immediately implies a wrong decision, a missed opportunity to prevent disaster, a disregard of safety rules and practices. Framing of the cause as a decision leads to the identification of responsibility of people who made that decision which in turns leads to the legal pursuit of them as culprits. The Bad Apple Theory reigns supreme. It also implies that cause can be found, neatly and objectively, in the rubble. The opposite is true. We don't find causes. We construct cause. "Human error", if there were such a thing, is not a question of individual single-point failures to notice or process—not in this story and probably not in any story of breakdowns in flight safety. Practice that goes sour spreads out over time and in space, touching all the areas that usually make practitioners successful. The "errors" are not surprising brain slips that we can beat out of people by dragging them before a jury. Instead, errors are series of actions and assessments that are systematically connected to people's tools and tasks and environment; actions and assessments that often make complete sense when viewed from inside their situation. Were one to trace "the cause" of failure, the causal network would fan out immediately, like cracks in a window, with only the investigator determining when to stop looking because the evidence will not do it for him or her. There is no single cause. Neither for success, nor for failure.

The SabreTech maintenance employees inhabited a world of boss-men and sudden firings. It was a world of language difficulties—not just because many were Spanish speakers in an environment of English engineering language, as described by Langewiesche (1998, p. 228):

"Here is what really happened. Nearly 600 people logged work time against the three Valujet airplanes in SabreTech's Miami hangar; of them 72 workers logged 910 hours across several weeks against the job of replacing the "expired" oxygen generators—those at the end of their approved lives. According to the supplied Valujet work card 0069, the second step of the sevenstep process was: 'If the generator has not been expended install shipping cap on the firing pin.' This required a gang of hard-pressed mechanics to draw a distinction between canisters that were 'expired', meaning the ones they were removing, and canisters that were not 'expended', meaning the same ones, loaded and ready to fire, on which they were now expected to put nonexistent caps. Also involved were canisters which were expired and expended, and others which were not expired but were expended. And then, of course, there was the simpler thing—a set of new replacement canisters, which were both unexpended and unexpired."

And, oh by the way, as you may already have picked up: there were no shipping caps to be found in Miami. How can we prosecute people for not installing something we do not provide them with? The pursuit of culprits disinherits the legacy of Fitts and Jones. One has to side with Hawkins (1987, p. 127) who argues that exhortation (via punishment, discipline or whatever measure) "is unlikely to have any long-term effect unless the exhortation is accompanied by other measures... A more profound inquiry into the nature of the forces which drive the activities of people is necessary in order to learn whether they can be manipulated and if so, how". Indeed, this was Fitts's and Jones's insight all along. If researchers could understand and modify the situation in which humans were required to perform, they could understand and modify the performance that went on inside of it. Central to this idea is the local rationality principle (Simon, 1969; Woods et al., 1994). People do reasonable, or locally rational things given their tools, their multiple goals and pressures, their knowledge and their limited resources. Human error is a symptom—a symptom of irreconcilable constraints and pressures deeper inside a system; a pointer to systemic trouble further upstream.
Brian Abraham is offline  
Old 5th Mar 2010, 05:29
  #215
Join Date: Aug 2003
Location: Sale, Australia
Age: 80
Posts: 3,832
The Bad Apple Theory II: Error classification systems
In order to lead people (e.g. investigators) to the sources of human error as inspired by Fitts and Jones '47, a number of error classification systems have been developed in aviation (e.g. the Threat and Error Management Model (e.g. Helmreich et al., 1999; Helmreich, 2000) and the Human Factors Analysis and Classification System (HFACS, Shappell & Wiegmann, 2001)). The biggest trap in both error methods is the illusion that classification is the same as analysis. While classification systems intend to provide investigators more insight into the background of human error, they actually risk trotting down a garden path toward judgments of people instead of explanations of their performance; toward shifting blame higher and further into or even out of organizational echelons, but always onto others. Several false ideas about human error pervade these classification systems, all of which put them onto the road to The Bad Apple Theory.

First, error classification systems assume that we can meaningfully count and tabulate human errors. Human error "in the wild", however—as it occurs in natural complex settings—resists tabulation because of the complex interactions, the long and twisted pathways to breakdown and the context-dependency and diversity of human intention and action. Labeling certain assessments or actions in the swirl of human and social and technical activity as causal, or as "errors" and counting them in some database, is entirely arbitrary and ultimately meaningless. Also, we can never agree on what we mean by error:
• Do we count errors as causes of failure? For example: This event was due to human error.
• Or as the failure itself? For example: The pilot's selection of that mode was an error.
• Or as a process, or, more specifically, as a departure from some kind of standard? This may be operating procedures, or simply good airmanship. Depending on what you use as standard, you will come to different conclusions about what is an error.
Counting and coarsely classifying surface variabilities is protoscientific at best. Counting does not make science, or even useful practice, since interventions on the basis of surface variability will merely peck away at the margins of an issue. A focus on superficial similarities blocks our ability to see deeper relationships and subtleties. It disconnects performance fragments from the context that brought them forth, from the context that accompanied them; that gave them meaning; and that holds the keys to their explanation. Instead it renders performance fragments denuded: as uncloaked, context-less, meaningless shrapnel scattered across broad classifications in the wake of an observer's arbitrary judgment.

Second, while the original Fitts and Jones legacy lives on very strongly in human factors (for example in Norman (1994) who calls technology something that can make us either smart or dumb), human error classification systems often pay little attention to the systematic and detailed nature of the connection between error and people's tools. According to Helmreich (2000), "errors result from physiological and psychological limitations of humans. Causes of error include fatigue, workload, and fear, as well as cognitive overload, poor interpersonal communications, imperfect information processing, and flawed decision making" (p. 781). Gone are the systematic connections between people's assessments and actions on the one hand, and their tools and tasks on the other. In their place are purely human causes—sources of trouble that are endogenous; internal to the human component. Shappell and Wiegmann, following the original Reason (1990) division between latent failures and active failures, merely list an undifferentiated "poor design" only under potential organizational influences—the fourth level up in the causal stream that forms HFACS. Again, little effort is made to probe the systematic connections between human error and the engineered environment that people do their work in. The gaps that this leaves in our understanding of the sources of failure are daunting.

Third, Fitts and Jones remind us that it is counterproductive to say what people failed to do or should have done, since none of that explains why people did what they did (Dekker, 2001). With the intention of explaining why people did what they did, error classification systems help investigators label errors as "poor decisions", "failures to adhere to brief", "failures to prioritize attention", "improper procedure", and so forth (Shappell & Wiegmann, 2001, p. 63). These are not explanations, they are judgments. Similarly, they rely on fashionable labels that do little more than saying "human error" over and over again, re-inventing it under a more modern guise:
• Loss of CRM (Crew Resource Management) is one name for human error—the failure to invest in common ground, to share data that, in hindsight, turned out to have been significant.
• Complacency is also a name for human error—the failure to recognize the gravity of a situation or to adhere to standards of care or good practice.
• Non-compliance is a name for human error—the failure to follow rules or procedures that would keep the job safe.
• Loss of situation awareness is another name for human error—the failure to notice things that in hindsight turned out to be critical.
Instead of explanations of performance, these labels are judgments. For example, we judge people for not noticing what we now know to have been important data in their situation, calling it their error—their loss of situation awareness.

Fourth, error classification systems typically try to lead investigators further up the causal pathway, in search of more distal contributors to the failure that occurred. The intention is consistent with the organizational extension of the Fitts and Jones '47 premise (see Maurino et al., 1995) but classification systems quickly turn it into re-runs of The Bad Apple Theory. For example, Shappell & Wiegmann (2001) explain that "it is not uncommon for accident investigators to interview the pilot's friends, colleagues, and supervisors after a fatal crash only to find out that they 'knew it would happen to him some day'." (p. 73) HFACS suggests that if supervisors do not catch these ill components before they kill themselves, then the supervisors are to blame as well. In these kinds of judgments the hindsight bias reigns supreme (see also Kern, 1999). Many sources show how we construct plausible, linear stories of how failure came about once we know the outcome (e.g. Starbuck & Milliken, 1988), which includes making the participants look bad enough to fit the bad outcome they were involved in (Reason, 1997). Such reactions to failure make after-the-fact data mining of personal shortcomings—real or imagined—not just counterproductive (sponsoring The Bad Apple Theory) but actually untrustworthy. Fitts' and Jones' legacy says that we must try to see how people—supervisors and others—interpreted the world from their position on the inside; why it made sense for them to continue certain practices given their knowledge, focus of attention and competing goals. The error classification systems do nothing to elucidate any of this, instead stopping when they have found the next responsible human up the causal pathway. "Human error", by any other label and by any other human, continues to be the conclusion of an investigation, not the starting point. This is the old view of human error, re-inventing human error under the guise of supervisory shortcomings and organizational deficiencies. HFACS contains such lists of "unsafe supervision" that can putatively account for problems that occur at the sharp end of practice. For example, unsafe supervision includes "failure to provide guidance, failure to provide oversight, failure to provide training, failure to provide correct data, inadequate opportunity for crew rest" and so forth (Shappell & Wiegmann, 2001, p. 73). This is nothing more than a parade of judgments: judgments of what supervisors failed to do, not explanations of why they did what they did, or why that perhaps made sense given the resources and constraints that governed their work. Instead of explaining a human error problem, HFACS simply re-locates it, shoving it higher up, and with it the blame and judgments for failure. Substituting supervisory failure or organizational failure for operator failure is meaningless and explains nothing. It sustains the fundamental attribution error, merely directing its misconstrued notion elsewhere, away from front-line operators.

In conclusion, classification of errors is not analysis of errors. Categorization of errors cannot double as understanding of errors. Error classification systems may in fact reinforce and entrench the misconceptions, biases and errors that we always risk making in our dealings with failure, while giving us the illusion we have actually embraced the new view to human error. The step from classifying errors to pursuing culprits appears a small one, and as counterproductive as ever. In aviation, we have seen The Bad Apple Theory at work and now we see it being re-treaded around the wheels of supposed progress on safety. Yet we have seen the procedural straight jacketing, technology-touting, culprit-extraditing, train-and-blame approach be applied, and invariably stumble and fall. We should not need to see this again. For what we have found is that it is a dead end. There is no progress on safety in the old view of human error.

People create safety
We can make progress on safety once we acknowledge that people themselves create it, and we begin to understand how. Safety is not inherently built into systems or introduced via isolated technical or procedural fixes. Safety is something that people create, at all levels of an operational organization (e.g. AMA, 1998; Sanne, 1999). Safety (and failure) is the emergent property of entire systems of people and technologies who invest in their awareness of potential pathways to breakdown and devise strategies that help forestall failure. The decision of an entire airline to no longer accept NDB approaches (Non-Directional Beacon approaches to a runway, in which the aircraft has no vertical guidance and rather imprecise lateral guidance) (Collins, 2001) is one example of such a strategy; the reluctance of airlines and/or pilots to agree on LASHO—Land And Hold Short Operations—which put them at risk of traveling across an intersecting runway that is in use, is another. In both cases, goal conflicts are evident (production pressures versus protection against known or possible pathways to failure). In both, the trade-off is in favor of safety. In resource-constrained systems, however, safety does not always prevail. RVSM (Reduced Vertical Separation Minima) for example, which will make aircraft fly closer together vertically, will be introduced and adhered to, mostly on the back of promises from isolated technical fixes that would make aircraft altitude holding and reporting more reliable. But at a systems level RVSM tightens coupling and reduces slack, contributing to the risk of interactive trouble, rapid deterioration and difficult recovery (Perrow, 1984). Another way to create safety that is gaining a foothold in the aviation industry is the automation policy, first advocated by Wiener (e.g. 1989) but still not adopted by many airlines. Automation policies are meant to reduce the risk of coordination breakdowns across highly automated flight decks, their aim being to match the level of automation (high, e.g. VNAV (Vertical Navigation, done by the Flight Management System); medium, e.g. heading select; or low, e.g. manual flight with flight director) with human roles (pilot flying versus pilot not-flying) and cockpit system display formats (e.g. map versus raw data) (e.g. Goteman, 1999). This is meant to maximize redundancy and opportunities for double-checking, capitalizing on the strengths of available flightdeck resources, both human and machine.

When failure succeeds
People are not perfect creators of safety. There are patterns, or mechanisms, by which their creation of safety can break down—mechanisms, in other words, by which failure succeeds. Take the case of a DC-9 that got caught in windshear while trying to go around from an approach to Charlotte, NC, in 1994 (NTSB, 1995). Charlotte is a case where people are in a double bind: first, things are too ambiguous for effective feed forward. Not much later things are changing too quickly for effective feedback. While approaching the airport, the situation is too unpredictable, the data too ambiguous, for effective feed forward. In other words, there is insufficient evidence for breaking off the approach (as feed forward to deal with the perceived threat). However, once inside the situation, things change too rapidly for effective feedback. The microburst creates changes in winds and airspeeds that are difficult to manage, especially for a crew whose training never covered a windshear encounter on approach or in such otherwise smooth conditions.

Charlotte is not the only pattern by which the creation of safety breaks down; it is not the only mechanism by which failure succeeds. For progress on safety we should de-emphasize the construction of cause—in error classification methods or any other investigation of failure. Once we acknowledge the complexity of failure, and once we acknowledge that safety and failure are emerging properties of systems that try to succeed, the selection of causes—either for failure or for success—becomes highly limited, selective, exclusive and pointless. Instead of constructing causes, we should try to document and learn from patterns of failure. What are the mechanisms by which failure succeeds? Can we already sketch some? What patterns of breakdown in people's creation of safety do we already know about? Charlotte—too ambiguous for feedforward, too dynamic for effective feedback—is one mechanism by which people's investments in safety are outwitted by a rapidly changing world. Understanding the mechanism means becoming able to retard it or block it, by reducing the mechanism's inherent coupling; by disambiguating the data that fuels its progression from the inside. The contours of many other patterns, or mechanisms of failure, are beginning to stand out from thick descriptions of accidents in aerospace, including the normalization of deviance (Vaughan, 1996), the going sour progression (Sarter & Woods, 1997), practical drift (Snook, 2000) and plan continuation (Orasanu et al., in press). Investing further in these and other insights will represent progress on safety. There is no efficient, quick road to understanding human error, as error classification methods make us believe. Their destination will be an illusion, a retread of the old view. Similarly, there is no quick safety fix, as the punishment of culprits would make us believe, for systems that pursue multiple competing goals in a resource constrained, uncertain world. There is, however, percentage in opening the black box of human performance—understanding how people make the systems they operate so successful, and capturing the patterns by which their successes are defeated.

Acknowledgements
The work for this paper was supported by a grant from the Swedish Flight Safety Directorate and its Director Mr. Arne Axelsson.
Brian Abraham is offline  
Old 5th Mar 2010, 09:14
  #216 (permalink)  
 
Join Date: Feb 2010
Location: USA
Posts: 13
Likes: 0
Received 0 Likes on 0 Posts
And to add to Prospector's comments, I understood the TACAN was not working at the time, so irrespective of what the crew thought they were doing, without TACAN they shouldn't have descended.
workingman303 is offline  
Old 5th Mar 2010, 10:37
  #217 (permalink)  
 
Join Date: Feb 2007
Location: Darwin, Australia
Age: 53
Posts: 424
Likes: 0
Received 4 Likes on 3 Posts
Brian - a very interesting paper.

IMHO people are losing sight of
Third, Fitts and Jones remind us that it is counterproductive to say what people failed to do or should have done, since none of that explains why people did what they did (Dekker, 2001).
The results of the flight speak for themselves, and attempting to place the cause of the accident on one decision is extremely simplistic. At least give those who were tragically killed the respect of trying to learn as much as we can about how to prevent future accidents, rather than playing the blame game.
werbil is offline  
Old 5th Mar 2010, 11:49
  #218 (permalink)  
 
Join Date: Oct 2007
Location: East of Java
Age: 64
Posts: 45
Likes: 0
Received 0 Likes on 0 Posts
Prospector – your comments, and your reliance on what you propose are the facts, are disingenuous at best, but 10/10 for unrelenting opposition in the face of a phenomenal amount of evidence and reasonable argument that has consistently contradicted just about everything you rely on, short of the date: a remarkable stance considering how comprehensively your case has been undermined. Your myopic approach to aviation safety analysis is a credit to the 1920s, where you obviously prefer to reside.

The let-down procedure, VMC, visibility, ANZ's SOPs etc. form a strawman argument that you propose like clockwork as the standard facet of your justification, negating the fact that a trained crew flew a functioning DC-10 in clear weather into the side of a mountain. The FL160 limit and descent procedure you rely on is inconsequential: saying that if the DC-10 had not descended below FL160 it would not have crashed is the same argument as saying that if the DC-10 had not taken off it would not have crashed.

That is a fact as well: the selective presentation of the passengers' photographic evidence, which, to put it politely, was more conspicuous for what it omitted, mirrors by no coincidence how you have drummed on while selectively presenting the facts in your case and very selectively omitting numerous other facts which I'm sure you're uncomfortable with, but which have been brought to your attention by numerous posters.

For your supposition to hold water you have to accept that the crew of TE901 deliberately flew at the mountain: not 'into the mountain', but by a deliberate, wilful act 'at the mountain'. Prove that point and I'll never comment again. Short of the crew being suicidal, that's where your argument falls down, along with Aircraft Accident Report No. 79-139, regardless of the fact that the NZ Govt maintains that this is the official record; well, they would, wouldn't they, considering how much effort they put into the content.

Anyone with any background in accident investigation knows that the Mahon/Vette analysis was a world-class piece of research and analysis, which, along with the Mahon report, presented the facts (not opinions, as you suggest) and the conclusions, along with multiple examples of duplicity between the airline being investigated, the CAA and the Govt.

Furthermore, the Mahon and Vette investigations subsequently went on to improve investigation techniques and, more importantly, aviation safety. Quite an achievement. The Chippindale report did what was politically expedient, short and simple. Anyone tarred with that brush very probably spends a lot of their time attempting to justify themselves through posting on various web pages... ring any bells?

Very convenient, too, that the FDR has gone missing. Prospector, old chap, if you're not smelling a rat by now, perhaps a visit to an otorhinolaryngologist is required.

Justice Mahon's and Gordon Vette's subsequent investigations, analysis and systematic approach solved the riddle of this tragedy, in the process opening the management of ANZ, and the collusion of the NZ Govt and the CAA at the time, to the oft-repeated accusations of complicity and illegality, obfuscation, deliberate destruction of evidence, perjury, etc., etc. In the process Mahon and Vette, in a monument to pettiness and NZ governmental vindictiveness, were well and truly shafted for demonstrating the truth when it was deemed inconvenient. A shameful result for NZ and for anyone with an ounce of professional scruples or a sense of justice, which, by extension, excludes yourself based on your postings. In addition, your constant harping does a disservice to the aviation safety advances that resulted from Justice Mahon's and Gordon Vette's research.

So, here is a proposal. I will approach TVNZ, or one of the production companies they use, and propose a thorough top-to-tail review of the evidence and analysis in a modern aviation safety investigation environment. One team (call it team A) can use Aircraft Accident Report No. 79-139 as its basis for investigation, subject to peer review, while simultaneously an independent team of current investigators (call it team B), schooled in modern analysis and techniques, works on the available evidence and produces a report based on known empirical data, Human Factors, Cognitive and Behavioural Science, Cockpit Resource Management, Threat and Error Management, etc., and we'll see what the result is... my two bob is on team B.

Let me know if you're keen. I think it's an interesting idea and has some value; at the very least, it will put a damper on your constant whining.

I await, with the fatalism of the convicted, your very predictable, knee-jerk response.

Last edited by flatfootsam; 5th Mar 2010 at 13:53.
flatfootsam is offline  
Old 5th Mar 2010, 13:38
  #219 (permalink)  
 
Join Date: Dec 2005
Location: Wellington,NZ
Age: 66
Posts: 1,678
Received 10 Likes on 4 Posts
Sorry, flatfootsam, that's too hard for me to read.

Could you format it into paragraphs, maybe increase the font size a smidgeon?

Sort of like I've done here.
Tarq57 is offline  
Old 5th Mar 2010, 13:48
  #220 (permalink)  
 
Join Date: Oct 2007
Location: East of Java
Age: 64
Posts: 45
Likes: 0
Received 0 Likes on 0 Posts
I just did... bit of a problem with the editing button; it all went a bit haywire, but it's back in a normal format now.
flatfootsam is offline  

