Old 26th Oct 2015, 16:21
slast
 
Can automated systems deal with unique events?

There has always been interesting comment on PPRuNe about software reliability, bugs, design requirements, testing, etc., most recently under the topic of a B787 Dreamliner engine issue. There appear to be a significant number of PPRuNers who are serious and knowledgeable on the subject.

I would like to ask those members a philosophical question. It bears on the argument that a safety priority now should be the elimination of human pilots from the system via automation.

The question is whether it is feasible (within a foreseeable timeframe) for humans to create automated systems that can deal with truly unique (not just "extremely improbable") events.

The pro-automation lobby (see for example the thread I started in March, "'Pilotless airliners safer' - London Times article") starts from the view that since pilots appear to be the dominant primary cause in aviation accident reports, removing them will remove the errors they make and with them the consequent accidents.

This first started being discussed seriously in the late 1980s, when the Flight Engineer function was automated out of the B747 to create the -400, and out of the DC-10 to create the MD-11, etc. (Note - this was not the same as the 3-person vs. 2-person crew controversy, so please don't mix that into it!)

There has been a multiple-order-of-magnitude increase in computing capability since then, but my feeling is still the same: human pilots on board will always be able to make SOME attempt to deal with a completely unforeseen and unique event arising from a coincidence of imperfections in the total aviation system (vehicle, environment, and people) - even if they cannot always do so 100% successfully.

So: is it possible to replace this capability with a human-designed and manufactured system, without creating additional vulnerability to human error elsewhere?

The entire industry works on a concept of "acceptable" and "target" levels of safety, based on the probability of occurrence and the consequences of events that society is willing to accept. The regulatory authorities lay down numbers for both the probability and the consequence elements at various severity levels.

It seems to me that it would not be possible to design any automated system to control physical equipment like an aircraft without making assumptions about that aircraft and its components, one of which must be that component failures ALWAYS occur no more often than the required probability.
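To make the arithmetic concrete, here is a minimal sketch of how such a probability budget is composed. The component failure rates, the independence assumption and the 1e-9 per flight hour target for catastrophic failure conditions are all figures I have assumed for the example, not taken from any particular aircraft or certification case (Python):

    # Illustrative only: assumed failure rates, an assumed independence between
    # failures, and a commonly quoted 1e-9 per flight hour target for
    # catastrophic failure conditions.
    CATASTROPHIC_TARGET_PER_FH = 1e-9

    # Assumed per-flight-hour failure rates for elements that must ALL fail
    # before the hazardous outcome can occur.
    component_failure_rates = {
        "primary_system": 1e-5,
        "backup_system": 1e-5,
        "monitoring_channel": 1e-4,
    }

    def combined_probability(rates):
        """Probability that every element fails in the same flight hour,
        assuming the failures are statistically independent."""
        p = 1.0
        for rate in rates.values():
            p *= rate
        return p

    p_total = combined_probability(component_failure_rates)
    print(f"Combined probability per flight hour: {p_total:.1e}")  # 1.0e-14
    print("meets target" if p_total <= CATASTROPHIC_TARGET_PER_FH else "exceeds target")
    # The whole case rests on every assumed rate being right and on the failures
    # really being independent - a common-cause error (e.g. in manufacturing)
    # invalidates the multiplication.

The point is not the particular numbers, but that the design case is only as good as those assumptions.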

In reality, human errors occur at all stages of the process of getting a paying customer to their destination. In the vast majority of cases these errors are caught by the myriad checks in the system, but some are not. When two or more such untrapped errors coincide, they may combine into a problem that until now has required the pilot(s) to act creatively, because the situation was never considered as a possibility. That lack of foresight might itself be classed as a human error in the specification and implementation of the checking process.
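To put rough numbers on that (again purely illustrative, with figures assumed for the sake of the example): even when every checking layer individually traps most errors, the residual rate of errors that escape them all is never zero, only pushed down.

    # Assumed numbers: how often upstream errors are introduced, and how often
    # each independent checking layer fails to trap one.
    errors_per_million_flights = 10_000            # assumed rate of upstream human errors
    check_miss_probabilities = [0.05, 0.05, 0.10]  # assumed per-layer miss rates

    p_escapes_all_checks = 1.0
    for miss in check_miss_probabilities:
        p_escapes_all_checks *= miss               # assumes the layers miss independently

    residual = errors_per_million_flights * p_escapes_all_checks
    print(f"Errors surviving every check, per million flights: {residual}")  # 2.5
    # Rare, but not "impossible" - and nothing in this arithmetic says what FORM
    # the surviving errors will take, or which of them will coincide.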

To a human designing an overall automated control system, either an event is possible and can occur no more often than the required frequency, or it is impossible and need not be considered. There isn't a halfway house where the design engineer can say "this isn't supposed to happen but I think it might, so I'll cater for it." Apart from anything else, what steps can he take to cater for it when there is no means of knowing what the other circumstances are?
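In software terms the dichotomy looks something like the sketch below. The failure modes and handler names are invented purely for illustration; the point is that an automated controller can only dispatch on conditions someone enumerated at design time.

    # Hypothetical failure modes and handlers, invented for illustration.
    def handle_engine_fire(state):
        return "run engine fire drill"

    def handle_single_hydraulic_loss(state):
        return "reconfigure for reduced hydraulics"

    def handle_cabin_depressurisation(state):
        return "emergency descent"

    KNOWN_FAILURE_HANDLERS = {
        "engine_fire": handle_engine_fire,
        "single_hydraulic_loss": handle_single_hydraulic_loss,
        "cabin_depressurisation": handle_cabin_depressurisation,
    }

    def automated_response(detected_condition, state):
        handler = KNOWN_FAILURE_HANDLERS.get(detected_condition)
        if handler is not None:
            return handler(state)
        # A genuinely unforeseen combination of failures lands here. The designer
        # can only write a generic fallback; there is no way to encode a response
        # to circumstances that were never specified.
        return "no defined behaviour"

A human crew in the same position can at least improvise; the fallback branch cannot.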

Take an uncontained engine failure, which is supposed to be a very improbable event. To quote a SKYbrary summary: "Each uncontained failure will result in a 'unique' combination of collateral damage ... [which] carries the greater potential risk and that will require creative pilot assessment to ensure a positive outcome is achieved." That was amply demonstrated on QF32, where the problem originated in human errors in manufacturing and was prevented from becoming a catastrophe by the pilots.

Other "unique" event examples which show that they are not so rare as to be negligible might include 2 within a few years in one airline alone - the BA B777 dual engine flameout on short final LHR and B744 leading edge flap retraction on takeoff at JNB. Both were survived largely due to instantaneous on-the-spot human "creativity" in recognising that the situation did not conform to any known precedent.

Issues of bugs, validation, verification, system analysis, etc. appear to me to be essentially about meeting probability requirements for "known" possibilities. Is there an additional requirement that will have to be met for "creativity" in such a system before a pilotless system can even start to be considered?

Unless such a creative artificial-intelligence capability is included, is the concept of automating the pilot out of the commercial aircraft cockpit doomed to fail, because ALL human error, and with it 100% of the liability for the consequences of any unique event, will be transferred to the manufacturer and/or other suppliers?

Finally, when such an event does occur, will "society", in the form of the legal processes that will inevitably follow, agree that the numbers used since the 1950s to define an acceptable level of safety to the authorities are still the right ones to meet expectations in the mid-21st century? In other words, will potential product liability issues stop the bandwagon?

Any thoughts on this, ladies and gentlemen?