PPRuNe Forums - View Single Post - Main Gear Boxes and The Grand Lottery
Old 12th Apr 2009, 17:34
#29
JimL
FH1100 Pilot,

The purpose of collecting data on flights is not to simulate but to measure; and with that measurement comes the ability to spot emerging anomalies (clusters and trends) and predict when failure is about to occur.

The great thing about neural nets is that they support machine learning and do not need thresholds to be set by hand; eventually, after establishing what normality is, and without human intervention, they can spot abnormality and therefore the potential anomaly.
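As an illustration of the principle (not the actual software mentioned below), one simple way a system can "learn normality" from recorded flight data and then score deviations without any hand-set per-parameter thresholds is the Mahalanobis distance; the function names and data here are hypothetical:

```python
import numpy as np

def fit_normality(baseline):
    """Learn what 'normal' looks like from baseline flight records
    (rows = flights, columns = monitored parameters)."""
    mean = baseline.mean(axis=0)
    cov = np.cov(baseline, rowvar=False)
    return mean, np.linalg.inv(cov)

def anomaly_score(flight, mean, inv_cov):
    """Mahalanobis distance of one flight from learned normality:
    a single score, no per-parameter alert thresholds required."""
    d = flight - mean
    return float(np.sqrt(d @ inv_cov @ d))

# Hypothetical example: 500 baseline flights, 3 monitored parameters.
rng = np.random.default_rng(0)
baseline = rng.normal(size=(500, 3))
mean, inv_cov = fit_normality(baseline)
ordinary = np.zeros(3)            # sits at the centre of normality
unusual = np.array([4.0, 4.0, 4.0])  # far from anything seen before
```

A trained neural net would learn a far richer picture of normality than this, but the principle is the same: the definition of "abnormal" emerges from the data rather than from a human choosing limits.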

Perhaps what some of us are bemoaning is that we saw a demonstration of at least one of these programs (the GE software running over the Bristow data-store) a couple of years ago. In fact, we also saw the same software running over the HOMP data. What this can spot is something like a pilot who consistently approaches faster than the rest of the pilots - not fast enough to trigger an alert (because thresholds have to be set high enough to avoid nuisance alerts), but a group of data points that sit in a cluster and so form an abnormal pattern.
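The fast-approach example above can be sketched in a few lines. The idea is that no single approach trips the alert threshold, but one pilot's *average* sits well away from the fleet norm; the figures and function names here are invented for illustration, not taken from the GE or HOMP software:

```python
from statistics import mean, stdev

def flag_consistent_outliers(speeds_by_pilot, sigma=2.0):
    """Flag pilots whose mean approach speed deviates from the fleet
    mean by more than `sigma` standard errors, even though no single
    approach exceeded any alert threshold."""
    all_speeds = [s for v in speeds_by_pilot.values() for s in v]
    fleet_mean, fleet_sd = mean(all_speeds), stdev(all_speeds)
    return {
        pilot for pilot, v in speeds_by_pilot.items()
        if abs(mean(v) - fleet_mean) > sigma * fleet_sd / len(v) ** 0.5
    }

# Hypothetical approach speeds (kt); an alert might only fire above 90 kt.
approaches = {
    "A": [74, 75, 76, 75],
    "B": [73, 76, 74, 77],
    "C": [75, 74, 76, 75],
    "D": [82, 81, 83, 82],  # never trips an alert, but always fast
}
```

Running `flag_consistent_outliers(approaches)` picks out pilot "D" alone: each individual approach is unremarkable, but the cluster is not.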

There will always be faulty parts (bad material, poor machining, incorrect assembly, etc.), and they will conspire to break outside the normal inspection pattern or before overhaul. The breaking of such parts will always take the occurrence outside 'extremely remote'. What the collection of data and monitoring does is allow us to eliminate premature failure by spotting its precursors.

In the same way, HOMP (FDM) permits us to identify behaviour patterns that, when isolated, are not themselves dangerous but, when put in the mix with other elements, might be the final link in the causal chain of an accident.

What most of us really want is to take the human out of the heavy process that is post-flight data analysis; much better that software systems do it for us and alert when an abnormal pattern is observed - they are much better at this than we are. Leave the humans to the standard interventions (nuts, bolts, mag-plugs, inspection) - they are really good at those.

The problem I have with the arithmetic of a probability of 'extremely remote' is that it has to encompass the knowledge of continuing airworthiness (when do I target my inspection, when do I do my overhaul); hence the target figure is preserved because the faulty element, when found, is removed from the calculation. For a practical example, look at the introduction of the EC155 to Nigeria: there was never going to be an engine failure because no engine sat in the aircraft for more than 200 hrs. Setting these intervention intervals is the real skill, and the one which permits a very small figure like 'extremely remote' to exist.
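The 200-hr point can be put into rough numbers. Assuming (purely for illustration) a constant failure rate - an exponential model with a hypothetical MTBF, neither of which comes from any real EC155 data - capping time-in-service caps each unit's exposure:

```python
from math import exp

def prob_failure_before(t_hours, mtbf_hours):
    """Probability a part fails before t hours in service, under the
    illustrative assumption of a constant failure rate (exponential
    model). mtbf_hours is a made-up figure, not real engine data."""
    return 1.0 - exp(-t_hours / mtbf_hours)

# With a hypothetical MTBF of 100,000 hrs:
short_exposure = prob_failure_before(200, 100_000)    # pulled at 200 hrs
long_exposure = prob_failure_before(3_000, 100_000)   # left in to 3,000 hrs
```

Here `short_exposure` is roughly 0.2% against roughly 3% for `long_exposure`: the interval, not the part, is doing the work of keeping the figure small, which is exactly why the 'extremely remote' arithmetic stands or falls on the intervention intervals.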

In an extremely complex system, we can mitigate the errors made in the establishment of such intervals only by monitoring. It was this very point that was the basis of the quote from the HARP report.

Jim

Last edited by JimL; 12th Apr 2009 at 17:46.