Aeroflot A320 takes off on Oslo Taxiway

Old 7th Mar 2010, 17:24
  #61 (permalink)  
 
Join Date: May 2002
Location: In a nice house
Posts: 981
I think there are more of these sorts of incidents creeping in - there is much more pressure, more things to do, in a shorter time than we used to have. Airports seem to get more complicated and there is more traffic.
Although as professional pilots we try to ignore the time pressures, they are always there, and because we are professional pilots we try to get away on time. The company I work for has a 1 hour report time. It is pretty much impossible at many of the bases we work from to thoroughly prepare the flight in that time. People do come in early, but there is so much paperwork, notices to read, NOTAMs to wade through, slots, weather, commercial pressures and various other company paraphernalia, as well as the hassle of getting through security, that it is often impossible not to feel rushed. We do brief the taxi routing, including possible runway incursion points, but even when taxiing out there are various checks to do, the cabin to talk to, checklists to complete, as well as radio calls and monitoring. Obviously no one chooses to take off on a taxiway, but I can see how it could happen. I haven't read the report, but the last saviour is usually the red stop bars. If you fall through that hole then you are stuffed.
Perhaps the CAA should have the balls to say enough is enough, and ensure all companies give a 1.5 hour report time?
I think some taxiway charts are appalling though, and very difficult to work out when the chart doesn't really resemble the airport. It can take ages to get things changed too.
I hope many procedures are simplified in the future - airports and airspace are busy enough and making things as simple as possible has got to help avoid some of the common mistakes.
Airbus Girl is offline  
Old 7th Mar 2010, 20:24
  #62 (permalink)  
 
Join Date: Jan 2005
Location: Europe
Posts: 260
Originally Posted by Guttn
It's easy to blame the two up front, and indeed they carry a lot of the blame as they are the ones who decide to go or stay. But what is really interesting is why this happened. Early afternoon at OSL is not a very busy time of day. Weather was good. Daylight. I suspect the runways were black with good braking action as well. Taxiways most likely in good condition with all signs easily readable. Having flown out of OSL quite a bit, I have found that when this kind of situation occurs (almost no traffic, good wx etc.) you are at times given the "cleared for take-off" amazingly early. Not trying to point blame here, but that could be a contributing factor. Of course not the only one, but maybe the one that gets the snowball rolling.
So apparently OSL ATC did not act on the recommendation from the report on the Pegasus incident mentioned earlier (SHT report 2006/20). Here's a slightly tidied Google Translate rendering of that recommendation:
There is a risk that flight crews may be mistaken about which path they are on and attempt to take off from a taxiway at airports where taxiways run parallel to the runway. SHT recommends that Avinor consider introducing a procedure whereby departure clearance is not given until ATC has determined that the aircraft has passed the point where the only remaining option for take-off is the take-off runway.
(SL recommendation 31/2006)
To put it mildly: a missed opportunity...
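For illustration, the gating logic in that recommendation could be sketched roughly like this. A minimal sketch only: the callsign, the position feed and the commit-point distance are all invented for the example, and this is not any real ATC system:

from dataclasses import dataclass

@dataclass
class Aircraft:
    callsign: str
    distance_along_taxi_route_m: float  # from a hypothetical surveillance feed

# Hypothetical distance at which the last taxiway exit is behind the
# aircraft, so the only remaining option for take-off is the runway.
COMMIT_POINT_M = 1850.0

def may_issue_takeoff_clearance(ac: Aircraft) -> bool:
    # Withhold "cleared for take-off" until the aircraft has passed the
    # point where a take-off from the parallel taxiway is no longer possible.
    return ac.distance_along_taxi_route_m >= COMMIT_POINT_M

ac = Aircraft("AFL211", 1200.0)
print(may_issue_takeoff_clearance(ac))  # False: clearance withheld for now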

xetroV is offline  
Old 7th Mar 2010, 20:55
  #63 (permalink)  
 
Join Date: Jan 2005
Location: Europe
Posts: 260
Originally Posted by alf5071h
A refreshing view from bfisk; well said.
We should not judge the crew, but look at how they might have seen the situation at that time.

Many people posting in this thread suffer hindsight bias – stating failure, asking how or why didn’t they, and using derogatory descriptions.
Most people have difficulty in relating human limitations or weaknesses with the often hidden opportunities for error in a situation.
Some people express surprise, unable to believe what has occurred. To some extent we all bear the pain of others’ misfortunes (it’s a way of learning), yet few people look deeply enough into an occurrence to gain the understanding needed for learning. Others gloss over the event and give glib responses which might indicate that they would not have suffered the same error – that in itself represents either a problem with their thinking or an attitude of resignation that nothing more can be done except to automate the aircraft. These are missed opportunities for learning.

Errors often originate at the interface of the environment and the human, thus both the situational and the human aspects must be considered in seeking an understanding. We choose crew for their attitude and train for aptitude; these are not closed entities, thus we must also review the situation – the organisation, and the variability within situations and operations.
James Reason indicates that we should first look to the organisation (the situation) as this could have precipitated the error. Furthermore, seeking to change the organisation might be a more practical, easier, and cost effective solution.

We should look to ourselves: could we suffer the same error, and how can it be avoided?
Then consider how errors may be detected, trapped, or mitigated.
Don’t try to put the human in a box (bad apples), look at the aspects where humans excel, note the weaknesses, and ensure that the operational situations do not allow these weaknesses to lead to error. A positive outlook may provide many answers – share these aspects in this forum.
How do people create safety in their daily operations?
What features are there at other airports which help the human overcome the risks in operations and avoid error?
Well said, alf5071h. I think that pilots who believe they would not have made the same error display a disconcerting lack of awareness of their own vulnerability to errors - they are human too, after all. People who believe that such incidents only happen to bad pilots are deluding themselves into a false sense of security. The truth - that such incidents can happen to any of us - is a lot less comfortable, but if we want to avoid such mishaps in the future, we must face it nevertheless.

Placing oneself above these particular crews is, in my opinion, a sign of poor airmanship and a lack of professionalism. I find it somewhat alarming to see so many derogatory comments from (what appear to be) fellow pilots about these Aeroflot and KLM crews, and so little willingness to really learn from these two unfortunate (but luckily not fatal) mistakes.
xetroV is offline  
Old 21st Mar 2010, 09:40
  #64 (permalink)  
 
Join Date: Jul 2008
Location: Norway
Age: 38
Posts: 35
YXAXXXX
GG ENZZXXSN
111437 ENGMYNYV
(GM0035/10 NOTAMN
Q) ENOR/QXXXX/IV/BO/A/000/999/6012N01105E005
A) ENGM
B) 1003111440 C) 1005312359
E) WHEN DEPARTURE FROM INTERSECTION A3 RWY01L IS PLANNED, CONFIRM
LINING UP THE RWY. RWY CONFUSION EXPERIENCED BTN TWY MIKE AND RWY01L
I reckon this NOTAM was issued as a follow-up to the incident and as a way to prevent it from happening again at OSL.

I don't fully understand the measure, though, because a lot of runways have parallel taxiways, and making sure this doesn't happen again at OSL doesn't really solve the problem altogether.
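For anyone unfamiliar with the raw format: the B) and C) validity fields of a NOTAM are ten-digit YYMMDDHHMM timestamps (UTC), so the one above can be decoded with a few lines. A rough sketch for illustration only, not a full ICAO NOTAM parser:

from datetime import datetime

def parse_notam_time(field: str) -> datetime:
    # Decode a NOTAM B)/C) validity field given as YYMMDDHHMM (UTC).
    return datetime.strptime(field, "%y%m%d%H%M")

valid_from = parse_notam_time("1003111440")   # B) 11 Mar 2010 14:40 UTC
valid_until = parse_notam_time("1005312359")  # C) 31 May 2010 23:59 UTC
print(valid_from, "->", valid_until)

So the procedure above runs from 11 March until the end of May 2010.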
BoeingOnFinal is offline  
Old 25th Mar 2010, 08:17
  #65 (permalink)  
 
Join Date: Nov 2007
Location: Australia
Posts: 29
Some of the comments in this thread carry the implication of "I don't know what is wrong with pilots now: these things did not happen in my young days."

Well, they did.

James Reason, the original developer of the "Swiss cheese model", arrived at it many years ago when he realised that all humans are inescapably subject to lapses of concentration, errors of judgement and dissociation: that is, cases where the brain decides to do one thing but the hands do another.

Here he is talking to Norman Swan in an Australian Radio National transcript in 2005:
Norman Swan: Hello and welcome to The Health Report with me Norman Swan. Today an interview in our summer series with enormous implications beyond health to almost any walk of life and industry. 2005 unfortunately was a big year for light plane crashes in Australia, at least some of which were alleged to be the result of pilot error. It was also a big year, again unfortunately, for injuries to people in hospitals from doctors' and nurses' mistakes. And this is a special feature on this very human frailty, error.

We all make mistakes, but in some jobs, like piloting an aeroplane, or surgery, mistakes can be fatal; why do they occur? What’s going on in our heads and around us when disaster hits? In aviation, errors are accepted as inevitable, so planes and cockpit routines are designed to minimise the impact of mistakes when they occur.

Sadly, hospitals haven’t often learnt these lessons, which means that the human factors in health care have either been ignored, or seen as too hard to deal with. The result is unnecessary injuries to patients, unsafe systems with needless harm.

But slowly, human factors are being acknowledged, and that’s due in no small part to one of the gurus in this field, psychologist Jim Reason, who’s Emeritus Professor at the University of Manchester in the UK. He was in Australia, in Perth, earlier this year at the Annual Conference of the Royal Australasian College of Surgeons. He told me that his journey in this field started with his own error-prone absentmindedness. I asked him for examples.

Jim Reason: Goodness: getting into the bath with your socks on, saying ‘Thank you’ to a stamp machine, and on one occasion putting cat-meat into the teapot. So you begin to see the kinds of conditions that create it, where you have for example two objects, like a teapot, and a cat’s bowl, which are both for putting things in, and you just get the wrong one.

Norman Swan: And you collected those stories?

Jim Reason: I collected them from other people; people really quite enjoyed doing this, and I watched for example my wife on one occasion. She was making tea and she reached down not the caddy but the Nescafe jar, and she put three teaspoonsful of Nescafe, and then screwed up the jar and put it back. Now that’s pretty trivial, it’s just that it struck me that her hand knew what she was handling, in other words not the lid of the tea-caddy, which slid on and off, but a screw-top jar. And the whole pointer for my early interests, was that it tells you a great deal about how you control your automatic actions. You have an intention to act, which you then delegate to various routines, and if you don’t check on the progress at certain choice points, then there is a strong possibility that you’ll go trundling down the strong but wrong route, the one that you most usually, most familiarly, do.

Norman Swan: And did your earlier research tell you why people go down that track?

Jim Reason: There’s two parts to the answer. The first part is the conditions under which absentmindedness occur, tend to be first of all in very familiar, very routine settings. They involve preoccupation or distraction, so that your limited attentional capacity is tied down either by some worry or by something going on around you, and there’s almost always some change either in the plan or in the circumstances. So you might, for example, get up and say ‘I’m not going to put sugar on my cereal, I want to lose weight’, but of course unless you attend exactly at that moment when you pour out the cereal, you’ll put the sugar on. And similarly, we on one occasion swapped the knife and fork drawers around and it took us three or four months actually, to get back to a correct, error-free performance.

Norman Swan: Particularly absentminded family?

Jim Reason: Well, yes. I am the most absentminded person....

Norman Swan: And you found this problem of delegating the task right, but your brain unattending to it, is a common problem not just in health care but in industry?

Jim Reason: It’s a universal problem. Little slips of action like that can create big disasters. It’s the context that determines the disaster: if you’re unusually driving a double-decker bus, when you usually drive a single-decker, and you come to a low bridge that you used to go under with your single-decker, then you sweep off the top of the bus and kill six passengers. That’s a terrible consequence for a very obvious, very routine slip of habit. And if there’s a general rule of absentmindedness, it is under-specification. If you under-specify some action, either because you’re inattentive or because you’ve forgotten something or you have insufficient knowledge, there are many ways in which you can under-specify the control of actions. But they always, nearly always, default to that which has been familiar, that which has been routine, that which has been frequent in that context. [...]
Aviation has largely digested the truth that people are not infallible, that the humanity slice of the cheese has holes in it and to provide a safe industry, we need other slices which, even though they have their own holes, will avoid the likelihood of holes lining up.
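As a back-of-the-envelope illustration of the holes lining up: if the defensive layers fail roughly independently, the chance of every layer failing at once is the product of their individual failure probabilities. The layer names and numbers below are invented purely for illustration:

# Hypothetical failure probabilities for independent defensive layers.
layers = {
    "crew chart and taxi briefing": 0.01,
    "tower visual scan":            0.05,
    "lit red stop bars":            0.02,
}

p_accident = 1.0
for name, p in layers.items():
    p_accident *= p

print(f"P(all holes line up) = {p_accident:.0e}")  # 1e-05 with these numbers

Remove any one layer (say, stop bars that are unlit) and the combined probability rises fifty-fold with these numbers; that is the whole argument for redundant, dissimilar defences.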

The medical profession, on the other hand, has historically proceeded on the basis that doctors are infallible. There is no medical equivalent of investigative agencies like the NTSB - why would there be, if no mistakes are ever made, or if (as cynics observe) doctors simply bury theirs?

But in truth doctors are just as fallible as airline pilots, maintenance engineers and air traffic controllers.

In 2007 Boston surgeon Atul Gawande explored what medicine was starting to learn from the airline industry in his New Yorker feature, "The Checklist". He reported how Johns Hopkins Hospital critical-care specialist Peter Pronovost had developed a checklist covering the steps required to put a line into a patient's vein (for injecting drugs and the like, or for drawing blood samples - in intensive care, lines often need to be left in place for a number of days).

The list contained just five items, and nurses were backed by the hospital administration in stopping doctors who missed any of the steps:
[...] Pronovost and his colleagues monitored what happened for a year afterward. The results were so dramatic that they weren’t sure whether to believe them: the ten-day line-infection rate went from eleven per cent to zero. So they followed patients for fifteen more months. Only two line infections occurred during the entire period. They calculated that, in this one hospital, the checklist had prevented forty-three infections and eight deaths, and saved two million dollars in costs.

Pronovost recruited some more colleagues, and they made some more checklists. One aimed to insure that nurses observe patients for pain at least once every four hours and provide timely pain medication. This reduced the likelihood of a patient’s experiencing untreated pain from forty-one per cent to three per cent. They tested a checklist for patients on mechanical ventilation, making sure that, for instance, the head of each patient’s bed was propped up at least thirty degrees so that oral secretions couldn’t go into the windpipe, and antacid medication was given to prevent stomach ulcers. The proportion of patients who didn’t receive the recommended care dropped from seventy per cent to four per cent; the occurrence of pneumonias fell by a quarter; and twenty-one fewer patients died than in the previous year. The researchers found that simply having the doctors and nurses in the I.C.U. make their own checklists for what they thought should be done each day improved the consistency of care to the point that, within a few weeks, the average length of patient stay in intensive care dropped by half. [...]
What this shows is how much even skilled people fail to do when they rely on skill alone. (Along the way, Gawande tells how the development of checklists turned the Boeing B-17 Flying Fortress from a plane at first judged too much for one man to fly into an aircraft so successful that 13,000 were eventually built.) The New Yorker article was the basis for Gawande's recent best-selling book, The Checklist Manifesto.
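The mechanism behind Pronovost's numbers is essentially a hard gate: any skipped step stops the procedure, and the nurse has the administration's backing to enforce the stop. A toy sketch of that rule (the step wording paraphrases the five items Gawande describes; the code itself is invented for illustration):

# Roughly the five line-insertion steps from Pronovost's checklist.
LINE_CHECKLIST = [
    "wash hands with soap",
    "clean the patient's skin with chlorhexidine antiseptic",
    "put sterile drapes over the entire patient",
    "wear a mask, hat, sterile gown and gloves",
    "put a sterile dressing over the insertion site",
]

def run_checklist(steps, confirmed):
    # Hard-stop rule: halt the moment any step has not been confirmed.
    for step in steps:
        if step not in confirmed:
            raise RuntimeError(f"procedure stopped, step skipped: {step}")
    return "all steps confirmed, proceed"

print(run_checklist(LINE_CHECKLIST, set(LINE_CHECKLIST)))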

Bottom line: nobody, but nobody, is infallible.
altonacrude is offline  
Old 25th Mar 2010, 18:43
  #66 (permalink)  
 
Join Date: Jan 2008
Location: The foot of Mt. Belzoni.
Posts: 2,001
Oddly enough, for years ATC towers have been fitted with devices to prevent this very thing happening during VMC.
They are called windows. You look through them.
ZOOKER is offline  
Old 25th Mar 2010, 19:19
  #67 (permalink)  
 
Join Date: May 2008
Location: Las Vegas
Posts: 6
Aeroflot

I'm not sure ATC windows help to get the crew to follow instructions. One day in SEA an Aeroflot 777 landed, slowed to taxi speed and didn't leave the runway; the tower was begging them to exit as it sent an Alaska and two Southwests around. At least they were looking.
spike727 is offline  
