PPRuNe Forums - View Single Post - Spanair accident at Madrid
1st Nov 2008, 23:38
#2337
alf5071h
 
Join Date: Jul 2003
Location: An Island Province
Posts: 1,257
“More training – better training”
The call for more, or better, training might be understood as the need to link knowledge with know-how (tacit knowledge), which is difficult to teach and is best gained from experience.
The industry promotes a ‘no error’ operational philosophy, yet humans learn from error. Similarly, the lingering blame culture reduces the number of error reports and so restricts the opportunity to learn from others.
Pilots can be taught the fundamentals of the TOCWS and the ground-air switching logic, but associating them in a situation where a probe overheats requires critical-thinking skills: how is the situation understood and/or related to the next operation, ‘what if’ reasoning, comparison, association, accurate memory recall, etc.
These skills are essential in aviation, and are generally acquired from being in the relevant situations – experience.
Thus, for a probe failure, full understanding of the situational aspects might require a pilot to have experienced ‘the specific failure’. However, probes can fail for many reasons, with or without TOCWS implications; that is one reason we call maintenance to determine the nature of the failure – CRM, use all available resources – provided that they too have the required knowledge and know-how.
The MMEL / DDG (Dispatch Deviation Guide) should be the documented reference for allowing and managing these failures, but MELs may not consider human error, and probably not error in combination with other failures.

Following this line of argument, it might be unreasonable to expect pilots to know the specific association between the TOCWS, the ground-air switch, and the probe.
Flight crew are seen, and see themselves, as the last line of defence, which breeds personal responsibility; but there are limits to the practicality of this.
The regulations (#2104) might suggest that system design should limit the dependency on pilots’ knowledge (and vulnerability to error), which argues for improved system integrity. If this was not the intent of the regulation, then this accident identifies a mismatch between what the regulation (CS 25) assumes about a pilot’s knowledge and that required by JAR-FCL (training); a gap in the regulations which the operational industry fell into.

“What can be learned?”
Investigations of accidents in complex systems can usually determine ‘what’ happened quite quickly; finding out ‘why’ things happened is much more difficult. Blame should not – must not – enter these phases of investigation; it is normally an issue for the lawyers, but this segregation is not always made.
Perhaps the most disappointing aspect of the investigation so far is that the ‘why’ aspects appear to be missing. Perhaps the jump to the legal national requirement (blame) has eclipsed the need to determine ‘why’; a pity, as it is this understanding from which the industry might learn.

It is interesting to relate what is known about this accident to the causes of the recent financial ‘crash’ (cf. New Scientist, 25 Sept, “The blunders that led to the banking crisis”). Although the banking collapse is seen as an industry-wide issue, the reasons for failing apply equally to an individual organization, bank or operator.

“The crisis did not come without warning.” Were the outcomes of previous MD 80 accidents and incidents sufficiently heeded? What action was taken by the continued airworthiness process, the manufacturer, and operators?

“By definition they are rare, extreme events, so all the [math] models you rely on in normal times don't work any more.” What assumptions have been made about aircraft system failure in the MD 80 and opportunities for error? Did these change with in-service experience?

“… each liquidity crisis is inevitably different from its predecessors, not least because major crises provoke changes in the shape of markets, regulations and the behaviour of players.” Was the Spanair operation towards the ‘end of the chain’, where previous experience, knowledge, or requirements may not have been passed on? Or, if the information was available, was it not used due to a lack of awareness of the severity / frequency of the problem?

“… wrongly assumed that two areas of vulnerability could be treated in isolation, each with its own risk model. When the two areas began to affect each other … there was no unifying framework to predict what would happen.” Something for aviation to learn? MMELs rarely consider combined interactions in systems and / or human vulnerability.

“These models typically assume that market prices will continue to behave much as they have in the past, and that they are reasonably predictable. Statistical models based on short time series of data are a terrible way to understand [these kinds of] risks.
The banks had set great store by their use of statistical models designed to monitor the risks inherent in their investments. The models were not working as well as hoped – in particular, they were ignoring the risks of extreme events and the connections. The real risk … turns out to be a cycle of drops.”
Complacency? Drops – small incremental changes in normal procedure, moving away from the assumed safe standard and so becoming the norm. Are these changes identifiable with FOQA, LOSA, etc., and are these safety tools based on the correct norm – risk, certification / training assumptions?

“… each bank had been content to use a measure called ‘value at risk’ that predicted how much money it might lose from a given market position (‘What do we stand to lose?’).” In aviation, is this synonymous with an insular approach to flight safety – not sharing safety information, not considering the experiences of other operators?

“Statistical models have proved almost useless at predicting the killer risks for individual banks, and worse than useless when it comes to risks to the financial system as a whole. The models encouraged bankers to think they were playing a high-stakes card game, when what they were actually doing was more akin to lining up a row of dominoes.
Banks should be careful not to assume that they have it right and the rest of the world has it wrong. And regulators – who have lately allowed themselves to be blinded by science – should have no qualms about shutting down activities they do not understand. We shouldn't need another warning.”
Cf. ‘Revisiting the Swiss Cheese Model of Accidents’.

[USA Today] – Alan Greenspan … was “shocked” to discover, as a once-in-a-century financial crisis spread, that his bedrock belief that financial firms could police themselves turned out to be “flawed”.
"I made a mistake in presuming that the self-interests of organizations, specifically banks and others, were such as that they were best capable of protecting their own shareholders and their equity," "… a flaw in the model that defines how the world works.
"

Hopefully not an epitaph for aviation SMS and devolved regulatory oversight.