PPRuNe Forums

PPRuNe Forums (https://www.pprune.org/)
-   Rumours & News (https://www.pprune.org/rumours-news-13/)
-   -   Spanair accident at Madrid (https://www.pprune.org/rumours-news/339876-spanair-accident-madrid.html)

FourGreenNoRed 21st Oct 2008 17:51

69: No offence, but I cannot agree with your last post. All correctly analysed and probably what happened, but throwing percentage numbers at the crew I find somewhat disturbing. Why 60% and not 69% or 49% or 33.3%? You give yourself a disclaimer at the end, basically saying that you don't mind the opinion of others since it's just your personal opinion. But assigning the crew 90% (why not 88.7%?) of the responsibility shows a lack of flight-safety-related knowledge. Let's just quote James Reason's classification of errors:

• Intentional Noncompliance
• Procedural
• Communication
• Proficiency
• Operational Decision

One of these led to the catastrophic outcome of the accident. But what about the defences that are in place to catch those errors?

http://www.bmj.com/content/vol320/is...eaj26ja.f1.gif

The holes in the defences arise for two reasons: active failures and latent conditions. Nearly all adverse events involve a combination of these two sets of factors. So latent conditions are part of the error chain, or rather of the error prevention and recognition programme. We find tons of latent conditions here, and not one of them prevented things from happening in Madrid: CRM, SOPs, maintenance, operational supervision, training and so on. It's correct that, at the end, it was the SIC who made the mistake. You might as well blame his hand, which didn't move the lever.

The US Navy developed a system called HFACS (Human Factors Analysis and Classification System), a tool for investigating the conditions that lead to errors, based on the Swiss Cheese model.

You might know all this, but allow me to jeopardise your blame post.

justme69 21st Oct 2008 18:29

Sorry it didn't come across more clearly that those "percentages" were only a wild indication of my feelings, not an actual analysis of the situation. To quote myself:


Again, this is a "rough" PERSONAL opinion, nothing more.
So indeed, the percentages can be freely changed to just about anything you want as long as the order of importance and the relative amounts are similar, e.g. the copilot 40%, the pilot 30% and 30% for the rest of the factors.

It's just a way of expressing my view with the information we have in hand right now, not an actual study of the issues involved. And yes, I hardly have any knowledge at all of the aviation safety industry.

But I do believe that this particular case is one where relatively few factors were the real culprits. I believe that work conditions, operations, training, safety culture, etc. had little to do with it, and that it was more a case of a single human mistake coupled with an unfortunate, untimely technical fault.

I don't think that either the mistake or the technical malfunction (which has been ongoing in MD-82s for a long time) could really have been avoided (in this particular case, more TOWS tests should have been required, but then again, the TOWS could just as well have failed between the time it was tested and the time it was needed). And there is no way to AVOID (not reduce, AVOID) human mistakes. They will always keep happening.

Imagine the PERFECT scenario. The pilots are 100% well trained. The work conditions and operations are OPTIMAL. The airplane is in perfect shape. All tests were done and it all was fine.

The pilot forgets to set the flaps. He just forgets. His mind tells him the flaps are set, but they are not. It can happen, right? If you don't agree that it can happen, then indeed there is no need for a TOWS.

So, in spite of everything being "perfect", the pilots manage to miss the setting of flaps the two or three times it is called for by the SOPs.

And just 1 minute before take off, the TOWS in the MD-82 fails.

So now, where does the "blame" fall? Even more training? We established that it was optimal. Even better SOPs/checklists? We established they are the best they can be. Even more maintenance? We established the airplane was in perfect condition and recently tested. Better management? Why, we said conditions and operations were perfect.

It's ENOUGH for the pilots to miss this single item and, in the case of the MD-82, for the alarm to fail to operate, for this accident to be, likely, unavoidable. Sure, better training could perhaps have allowed them to recover the airplane in time and not enter the full stall, but we all know it is hard to trust that an aircraft, fully loaded, with a tailwind, close to the ground and with speed way too slow to properly climb after ground effect, can necessarily be recovered after running out of runway.

Only one of two conditions would ensure this accident could not EVER (in practical terms) occur:

-The pilots can NEVER make a configuration mistake.
... or
-The TOWS can NEVER fail right before the takeoff.

But the pilots are humans and, therefore, the first one cannot be put into practice. And the TOWS is an electrical system and, therefore, the second one can't be implemented either.

When they both align, no other factor is NEEDED (it may exist, but it is not needed) for these accidents to happen. In some "perfect" circumstances (which were not quite the case in Madrid, BTW), this cheese would have ONLY two holes. If you want more holes, you need, e.g., some way to detect FOR SURE that the TOWS didn't fail right before the takeoff. Then you have a third hole, which may of course also be missed by the pilots, but at least it's there. Right now, there are only two in a perfect scenario.

But, of course, in Madrid's case the scenario wasn't perfect. The TOWS had only (theoretically) been tested by the pilots some 5 hours before, not immediately before take off. If tested, it would likely have been found defective and the airplane gone AOG.

Also, maintenance was called on a "smaller problem" that was actually related, but they failed to figure it out.

And the flight's conditions weren't "perfect" because of the delay for the "RAT probe heater failure", so everybody was somewhat rushed and distracted (not enough to justify making extra, basic mistakes, but making them more likely nonetheless).

Etc, etc.

FrequentSLF 21st Oct 2008 18:53


I don't think that either the mistake or the technical malfunction (which has been ongoing in MD-82s for a long time) could really have been avoided (in this particular case, more TOWS tests should have been required, but then again, the TOWS could just as well have failed between the time it was tested and the time it was needed).
IMHO the poor design of the TOWS is a major contributory cause of this accident. But as many have pointed out, the money factor was predominant.
The TOWS is a simple ON/OFF system where no models are involved; it just makes sure that certain conditions are satisfied. It is ON/OFF logic, which could have been developed foolproof.
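To make the "simple ON/OFF logic" point concrete, here is a minimal sketch of what such a system amounts to. All names, inputs and conditions below are my own illustration, not the actual MD-82 implementation:

```python
def tows_should_warn(on_ground: bool,
                     throttles_advanced: bool,
                     flaps_in_takeoff_range: bool,
                     slats_extended: bool,
                     trim_in_range: bool) -> bool:
    """Illustrative take-off warning logic: sound the horn when take-off
    power is applied on the ground with a bad configuration. Every input
    is a simple sensor discrete; no modelling involved."""
    takeoff_attempt = on_ground and throttles_advanced
    config_ok = flaps_in_takeoff_range and slats_extended and trim_in_range
    return takeoff_attempt and not config_ok

# Take-off attempt with flaps/slats retracted: warning sounds.
assert tows_should_warn(True, True, False, False, True)
# Correct configuration: silence.
assert not tows_should_warn(True, True, True, True, True)
```

Note the single-point weakness such logic has: if the `on_ground` input (the air/ground sensing) is wrong, the whole function is silently disabled, which is exactly the MD-80 frailty discussed in this thread.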

lomapaseo 21st Oct 2008 19:19


The TOWS is a simple ON/OFF system where no models are involved; it just makes sure that certain conditions are satisfied. It is ON/OFF logic, which could have been developed foolproof.
Nothing is foolproof. Somewhere along the line it requires human intervention, or the lack thereof, and humans make mistakes. The more foolproof you think you make it, the more the human relies on complacency to get by with the least amount of work.

I'll wait and see what the investigators' analysis of the TOWS finds relative to its performance with average pilots, average mechanics and a combination of worst-case operating conditions.

ChristiaanJ 21st Oct 2008 19:48


Originally Posted by BEagle
1. No configuration warning check was conducted prior to the start of the take-off roll.

(My emphasis).

Aren't you forgetting something...?

0. The correct take-off configuration was not set and not checked properly prior to the start of the take-off roll.

Why? We will never know.

CJ

777fly 21st Oct 2008 21:33

The root cause of this accident was the failure of both the flight crew and the maintenance staff to fully consider why certain system abnormalities were indicated and what the consequential effects of the maintenance actions carried out would be. Flight crew, in particular, need to maintain a systems knowledge equivalent to that attained in the conversion course, in order to understand the potential for knock-on effects if systems are disabled or degraded by maintenance action. A few 'what if?' questions might have prevented this accident. The lack of lateral thinking left a big black hole, into which the flight crew fell under pressure of the abnormal circumstances. As I have previously posted, checklist discipline and a refusal to be hurried are of paramount importance in stressful abnormal circumstances, as this situation was.

Oldlae 21st Oct 2008 22:52

The engineers involved with the initial snag (the RAT indicating an incorrect temperature) probably thought that the indicating system was unserviceable and carried out their actions in compliance with the MEL, under which flight not into icing conditions was OK. It is possible that the high temperature the RAT probe indicated was caused by the aircraft considering itself "in flight" (for whatever reason) and heating the probe to prevent icing, with conduction of heat from the probe heating element causing the overheat indication on the RAT. Was this ever covered by the manufacturer's maintenance training course, which I presume the engineers attended?
Would this be a case where Boeing have picked up a "poisoned chalice" from the original manufacturer?

777fly 21st Oct 2008 23:41

Oldlae

No, but some lateral thinking about why those indications were happening might have saved the day.

justme69 22nd Oct 2008 01:39

Boeing, allegedly, told everybody after the Northwest accident in Detroit that the TOWS couldn't be trusted 100% and should be tested very frequently, especially shortly before each take off.

Unfortunately, such recommendation was never made mandatory by regulatory bodies anywhere in the world.

And worse yet, it seems Spanair was never told about it (or they weren't smart enough to find out on their own, or a bit of both).

I guess Boeing figured from their own analysis that that was the best and most effective course of action (incidentally, also the cheapest).

I concur that, with frequent tests before each take off, the likelihood of an unnoticed TOWS failure coinciding with an unnoticed misconfiguration is very low, and reasonably safe for such an old airplane. Not "foolproof", but we've seen even worse designs used for everyday critical operations in all industries.

At least it adds another layer to the cheese. Now 3 things would have to fail:
-The pilots missing a configuration error.
-The TOWS failing around that same time and not having been noticed on daily tests.
-The pilots missing the pre-takeoff TOWS test revealing the TOWS have become inop or the TOWS failing exactly in the few minutes (instead of up to 24h) between the check and the takeoff.

The chances of all three holes aligning are no longer as great as in the other case, where only two holes needed to align.
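The intuition about the third hole can be put into toy numbers. This is a minimal sketch assuming the layers fail independently; every probability below is invented purely for illustration and taken from no reliability data whatsoever:

```python
# Purely illustrative, invented per-flight probabilities (not real data):
p_config_missed = 1e-4   # crew misses the flap/slat setting on every check
p_tows_failed   = 1e-4   # TOWS inop, and the last daily test didn't catch it
p_test_skipped  = 1e-2   # a required pre-takeoff TOWS check is missed/botched

# Swiss-cheese model: if the layers fail independently, the chance of all
# holes aligning is the product of the individual probabilities.

# Two layers only (no pre-takeoff TOWS check in the SOPs):
p_two_holes = p_config_missed * p_tows_failed            # ~1e-8 per flight

# Three layers (pre-takeoff check added as a third defence):
p_three_holes = p_two_holes * p_test_skipped             # ~1e-10 per flight

# The extra layer makes the aligned-holes event roughly 100 times rarer.
```

In reality the layers are not independent (the same rush that produces a misconfiguration also makes a skipped check more likely), so the real benefit is smaller than the raw product suggests, but the multiplicative structure is the point.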

Not perfect, but a pretty good, cheap and easy "patch" for the time being, until a better solution is found, if deemed necessary. At the very least, it would have (should have) worked in the only two known cases since Detroit: Spanair and Map. But in neither of those cases did the crew perform a TOWS check right before takeoff. It wasn't even required in their SOPs.

PEI_3721 22nd Oct 2008 02:00

777fly re 2271 “the failure of both the flight crew and the maintenance staff to fully consider why certain system abnormalities were indicated and what would be the consequential effects of the maintenance actions that were carried out.”

It was not so much the lack of lateral thinking as the lack of direct thinking – consideration of the consequences of work on any system that could result in the combination of errors.
If the combination of errors has to be considered, then why isn't this done by someone higher up the management chain? The certification requirements touch on the subject but fall short of hard defences. The manufacturer or the FOEB (predominantly operators) who assemble the MMEL could have considered the possible errors and required a TOCW test after any work on adjacent systems. Are the FOEB qualified to think about the likelihood of error (human factors) or the consequences of error?
But this is all in hindsight, what we require is the foresight to avoid the next major accident, which most probably will not involve TOCW.

777fly 22nd Oct 2008 02:33

PEI 3721

Sorry, but that is exactly what I was saying. Lateral, direct or 'out of the box' thinking is required. In my experience, MEL rectifications and alleviations follow simple consequential paths and usually, but not always, anticipate knock-on effects, particularly when multiple failures are involved. The resultant effects can be dire if the rectification process addresses an apparently simple fault which is, in fact, just a symptom of a far larger problem. A classic case of treating the symptoms rather than the disease itself. This appears to be what happened in this incident.

Litebulbs 22nd Oct 2008 08:20

777fly
 
Lateral or 'outside the box' thinking is what regulators have tried to move away from in recent times, and I agree with this. It is the quality of the procedures that you follow that needs to be changed. The MEL should be doing the thinking outside the box, not the engineer.

If you want a change in approach, then you need to remove the MEL as the first go-to book when an aircraft has a fault prior to departure. If you had to quote a maintenance/schematic manual or wiring diagram reference along with an MEL reference, then you would be forcing the engineer to give him/herself a quick refresher on the system being deferred.

But this approach lengthens the time it takes to dispatch the aircraft and so costs money. So write better MELs.

How do you prove that the engineer had an understanding of the air/gnd system on the type of aircraft? He/she may have sat the course and passed the exams but, in the modules dealing with air/gnd sensing and anti-ice protection, only got 75%, meaning that 25% of the syllabus was not understood. That is why you don't think outside the box: you follow procedures, i.e. what the MEL tells you to do.

captplaystation 22nd Oct 2008 10:21

I have never flown for a company ( & I have flown for a few ) where engineers were not under real/perceived pressure to get the aircraft back on line ASAP.
Asking in this situation that the person concerned think laterally could in fact merely distract him from carrying out what may be a complicated task in itself. Theoretically lateral thinking should be great, but realistically a robust and comprehensive MEL procedure which does that thinking for him would achieve much the same result, whilst also covering the poor bloke's back when he has to explain why the troubleshooting took so long.
Of course persuading Airlines & Authorities that the MEL should be more comprehensive won't be so easy, as someone has to actually produce & authorise it, and future defects will be more time consuming.
Blinkers tend to be worn regarding safety, and the lessons quickly forgotten at the temple of the great God of commercial expediency, don't know how many accidents it would take to change that unfortunate mantra.

Rananim 22nd Oct 2008 12:32

Going after the engineer is not the answer. You go after the system. If you have to play the blame game, attack from the top down. The flight crew will carry the can, but if you want the big picture, you have to delve deeper. Why did the Spanair CP not collate and disseminate the lessons learnt from Detroit and Lanzarote, why did he not emphasise the frailties of the MD-80 air-ground system to his crews, and why did he not instigate Boeing's recommendation?
Encourage a training culture where system knowledge is taught at a much deeper level. Don't just scratch the surface. Very often a pilot only knows that if X happens he must do Y. He may or may not know why X has happened and why Y is remedial. And you test this level of in-depth understanding orally, in a classroom, with visual props/aids, with engineers as instructors. This CBT is okay, but it's superficial and promotes rote memorisation over the lateral thinking borne of a thorough understanding of the systems. Of course it's cheaper and less time-consuming, which is why they do it.
They spend time and money on CRM classes telling us that the flight deck is a democracy (which it isn't), when what they should be doing instead is devoting those resources to a return to the fundamentals. Return the role of the CP to its original glory: he sits at board level, is divorced from economics totally, and fights for his one and only mandate: safety.

captplaystation 22nd Oct 2008 12:41

Totally with you on that score.
It is very easy to blame the last person to handle the "component" particularly when they are dead, the blame then passes to the next in line, the poor engineer.
Much more relevant to ask those difficult questions you identified of the post holder/ the regulators/ the manufacturers. . . . . . but SO much easier to blame the pilots & engineers thereby conveniently ignoring the dumbing down of knowledge/respect that has for fiscal reasons been encouraged in this profession for too long now.
All of the responsibility, none of the authority: that is the dream scenario for bean counters, and it is very close to the current status "enjoyed" by those on the front line.

lomapaseo 22nd Oct 2008 14:45

Litebulbs


Lateral or Outside the `box thinking is what regulators have tried to move away from in recent times and I agree with this. It is the quality of the procedures that you follow that need to be changed. The MEL should be doing the thinking outside of the box, not the engineer.
Right on mate:ok:

Please paste this into all threads following an accident

SPA83 22nd Oct 2008 15:25

http://nsa03.casimages.com/img/2008/...2713689930.jpg

agusaleale 22nd Oct 2008 23:37

Rananim and Spa83:

Agree 100%

bubbers44 23rd Oct 2008 02:38

Lateral or 'outside the box' thinking is what regulators have tried to move away from in recent times, and I agree with this. It is the quality of the procedures that you follow that needs to be changed. The MEL should be doing the thinking outside the box, not the engineer.

Then we need to do a lot of work on MEL's because it didn't work this time. Smart engineers and pilots would have prevented this disaster. Systems knowledge by either would have made them realize that the only time the RAT heater works is in the air. Disconnecting the RAT heater does not fix the problem, only the symptom.

justme69 23rd Oct 2008 07:21

An interesting article published yesterday:

Safety slip in Madrid crash also seen in U.S. - USATODAY.com

It talks of 55 voluntarily reported cases of bad takeoff configurations in the past 7 years or so in the USA alone. That's a lot more than I had found, which was around a dozen or so cases, but once I saw it was pretty common, I didn't keep looking that closely. Most of them, of course, were caught in extremis by the take-off configuration warning system.

The Spanair pilots were among the few unlucky ones (together with the Lanzarote and Reagan cases) who had an unnoticed TOWS failure shortly before they needed it the most. After so many years and so many millions of flights, I guess it was bound to happen.

It goes to show that, even with sufficient training, experience, safety culture, management, maintenance, etc., human error is still a piece of the puzzle that just cannot be avoided and therefore needs as much help as possible from technology.

But I don't think I'm saying anything new here. Traffic accidents, even by professional and experienced taxi/bus/truck drivers, happen every single day when humans in charge of vehicles make bad choices against everything they have been trained for. And no, most of them are not "careless" or "suicidal"; they are just humans who carry their children to school every day but don't even bother doing a basic visual check of all four wheels before entering the car. And constantly, driving schools, TV safety campaigns, police controls, improved vehicle designs, better roads and signage, etc., are reminding us to watch our speed, buckle up, not forget to turn on our lights at night, and so on.

And yet we all make those basic mistakes at times, putting our own lives at risk, against our better judgment, for "unknown reasons".

BTW, does anybody know if there was ever an investigation report for the Indian Airways accident of Dec 17 1978, the B737 VT-EAL, that explained why it tried to take off w/o slats and crashed?



